Design Specification for the CNC Rotary Table of a Material-Handling Industrial Robot (bachelor's degree thesis; original filename: 搬运工业机器人数控转台设计说明书学士学位论文.doc)

Undergraduate Graduation Design (Thesis) (Class of 2006) (Translation of Foreign Literature)
Title: Design of a Rotary Table for a Material-Handling Industrial Robot
Department: Mechanical Engineering
Major: Mechanical Design, Manufacturing and Automation
Class: 2
Name:
Student No.: 1
Supervisor:

Source text: Humanoid Robots: A New Kind of Tool

In his 1923 play R.U.R. (Rossum's Universal Robots), Karel Čapek coined "robot" as a derivative of the Czech robota (forced labor). Limited to work too tedious or dangerous for humans, today's robots weld parts on assembly lines, inspect nuclear plants, and explore other planets. Generally, robots are still far from achieving their fictional counterparts' intelligence and flexibility.

Humanoid robotics labs worldwide are working on creating robots that are one step closer to science fiction's androids. Building a humanlike robot is a formidable engineering task requiring a combination of mechanical, electrical, and software engineering; computer architecture; and real-time control. In 1993, we began a project aimed at constructing a humanoid robot for use in exploring theories of human intelligence. In addition to the relevant engineering, computer architecture, and real-time-control issues, we've had to address issues particular to integrated systems: What types of sensors should we use, and how should the robot interpret the data? How can the robot act deliberately to achieve a task and remain responsive to the environment? How can the system adapt to changing conditions and learn new tasks? Each humanoid robotics lab must address many of the same motor-control, perception, and machine-learning problems.

The principles behind our methodology

The real divergence between groups stems from radically different research agendas and underlying assumptions. At the MIT AI Lab, three basic principles guide our research:

We design humanoid robots to act autonomously and safely, without human control or supervision, in natural work environments and to interact with people. We do not design them as solutions for specific robotic needs (as with welding robots on assembly lines). Our goal is to build robots that function in many different real-world environments in essentially the same way.

Social robots must be able to detect and understand natural human cues (the low-level social conventions that people understand and use every day, such as head nods or eye contact) so that anyone can interact with them without special training or instruction. They must also be able to employ those conventions to perform an interactive exchange. The necessity of these abilities influences the robot's control-system design and physical embodiment.

Robotics offers a unique tool for testing models drawn from developmental psychology and cognitive science. We hope not only to create robots inspired by biological capabilities, but also to help shape and refine our understanding of those capabilities. By applying a theory to a real system, we test the hypotheses and can more easily judge them on their content and coverage.

Autonomous robots in a human environment

Unlike industrial robots that operate in a fixed environment on a small range of stimuli, our robots must operate flexibly under various environmental conditions and for a wide range of tasks. Because we require the system to operate without human control, we must address research issues such as behavior selection and attention. Such autonomy often represents a trade-off between performance on particular tasks and generality in dealing with a broader range of stimuli. However, we believe that building autonomous systems provides robustness and flexibility that task-specific systems can never achieve.

Requiring our robots to operate autonomously in a noisy, cluttered, traffic-filled workspace alongside human counterparts forces us to build systems that can cope with natural-environment complexities. Although these environments are not nearly as hostile as those planetary explorers face, they are also not tailored to the robot. In addition to being safe for human interaction and recognizing and responding to social cues, our robots must be able to learn from human demonstration.

The implementation of our robots reflects these research principles. For example, Cog began as a 14-degrees-of-freedom (DOF) upper torso with one arm and a rudimentary visual system. In this first incarnation, we implemented multimodal behavior systems, such as reaching for a visual target. Now, Cog features two six-DOF arms, a seven-DOF head, three torso joints, and much richer sensory systems. Each eye has one camera with a narrow field of view for high-resolution vision and one with a wide field of view for peripheral vision, giving the robot a binocular, variable-resolution view of its environment. An inertial system lets the robot coordinate motor responses more reliably. Strain gauges measure the output torque on each arm joint, and potentiometers measure position. Two microphones provide auditory input, and various limit switches, pressure sensors, and thermal sensors provide other proprioceptive inputs.

The robot also embodies our principle of safe interaction on two levels. First, we connected the motors on the arms to the joints in series with a torsional spring. In addition to providing gearbox protection and eliminating high-frequency collision vibrations, the spring's compliance provides a physical measure of safety for people interacting with the arms. Second, a spring law, in series with a low-gain force control loop, causes each joint to behave as if controlled by a low-frequency spring system (soft springs and large masses). Such control lets the arms move smoothly from posture to posture with a relatively slow command rate, and lets them deflect out of obstacles' way instead of dangerously forcing through them, allowing safe and natural interaction. (For discussion of Kismet, another robot optimized for human interaction, see "Social Constraints on Animate Vision," by Cynthia Breazeal and her colleagues, in this issue.)
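The spring-law behavior the passage describes can be made concrete with a short sketch. Everything below is an illustrative assumption (the gains, the function names, the single-joint framing), not Cog's actual controller: a soft virtual spring pulls the joint toward the commanded posture, and a low-gain loop servos the torque sensed through the physical series spring.

```python
# Sketch of a "soft spring, large mass" control law for one series
# elastic joint. All constants and names are illustrative assumptions.

K_SPRING = 4.0    # N*m/rad: low-stiffness virtual spring (soft spring)
B_DAMP   = 1.0    # N*m*s/rad: light damping on the virtual spring
K_FORCE  = 0.05   # low gain of the inner force (torque) loop

def joint_torque_command(q, qd, q_target, tau_measured):
    """One control tick for a single joint.

    q, qd        -- measured joint position (rad) and velocity (rad/s)
    qtarget      -- commanded equilibrium posture (rad)
    tau_measured -- torque sensed through the physical series spring
    """
    # Spring law: pull gently toward the commanded posture.
    tau_desired = K_SPRING * (q_target - q) - B_DAMP * qd
    # Low-gain force loop: servo sensed torque toward desired torque.
    # The low gain is what lets the arm deflect out of an obstacle's
    # way instead of forcing through it.
    return tau_desired + K_FORCE * (tau_desired - tau_measured)
```

Because the commanded posture only shifts the spring's equilibrium point, a slow command rate still yields smooth motion: the physical dynamics, not the trajectory generator, fill in the path between postures.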

Interacting socially with humans

Because our robots must exist in a human environment, social interaction is an important facet of our research. Building social skills into our robots provides not only a natural means of human-machine interaction but also a mechanism for bootstrapping more complex behavior. Humans serve both as models the robot can emulate and as instructors that help shape the robot's behavior. Our current work focuses on four social-interaction aspects: an emotional model for regulating social dynamics, shared attention as a means for identifying saliency, acquiring feedback through vocal prosody, and learning through imitation.

Regulating social dynamics through an emotional model. One critical component for a socially intelligent robot is an emotional model that understands and manipulates its environment. A robot requires two skills to learn from such a model. First is the ability to acquire social input: to understand the relevant clues humans provide about their emotional state that can help it understand any given interaction's dynamics. Second is the ability to manipulate the environment: to express its own emotional state in such a way that it can affect social-interaction dynamics. For example, if the robot is observing an instructor demonstrating a task, but the instructor is moving too quickly for the robot to follow, the robot can display a confused expression. The instructor naturally interprets this display as a signal to slow down. In this way, the robot can influence the instruction's rate and quality. Our current architecture incorporates a motivation model that encompasses these exchange types.

Identifying saliency through shared attention. Another important requirement for a robot to participate in social situations is to understand the basics of shared attention as expressed by gaze direction, pointing, and other gestures. One difficulty in enabling a machine to learn from an instructor is ensuring that the machine and instructor both attend to the same object, so the machine understands where new information should be applied. In other words, the student must know which scene parts are relevant to the lesson at hand. Human students use various social cues from the instructor for directing their attention; linguistic determiners (such as "this" or "that"), gestural cues (such as pointing or eye direction), and postural cues (such as proximity) can all direct attention to specific objects and resolve this problem. We are implementing systems that can recognize the social cues that relate to shared attention and that can respond appropriately based on the social context.

Acquiring feedback through speech prosody. Participating in vocal exchange is important for many social interactions. Other robotic auditory systems have focused on recognition of a small, hardwired command vocabulary. Our research has focused on understanding vocal patterns more fundamentally. We are implementing an auditory system to let our robots recognize vocal affirmation, prohibition, and attentional bids. By doing so, the robot will obtain natural social feedback on which actions it has and has not executed successfully. Prosodic speech patterns (including pitch, tempo, and vocal tone) might be universal; infants have demonstrated the ability to recognize praise, prohibition, and attentional bids even in unfamiliar languages.
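As a rough illustration of this kind of prosodic classification, the sketch below scores a pitch track with hand-picked statistics. The feature choices and thresholds are hypothetical, loosely motivated by the infant-directed-speech findings the passage cites; this is not the lab's auditory system.

```python
import numpy as np

def classify_prosody(pitch_hz, frame_rate=100):
    """Crude prosodic classifier over a pitch track (Hz per frame).

    Hypothetical heuristics: approval tends toward wide, falling pitch
    excursions; prohibition toward short, low, clipped contours; and
    attentional bids toward steeply rising contours.
    """
    pitch = np.asarray(pitch_hz, dtype=float)
    pitch = pitch[pitch > 0]                     # voiced frames only
    if pitch.size < 5:
        return "unknown"
    pitch_range = pitch.max() - pitch.min()
    # Linear-fit slope of the contour in Hz per second.
    slope = np.polyfit(np.arange(pitch.size) / frame_rate, pitch, 1)[0]
    if pitch_range > 150 and slope < 0:
        return "affirmation"                     # exaggerated fall
    if pitch.mean() < 180 and pitch.size / frame_rate < 0.5:
        return "prohibition"                     # short, low contour
    if slope > 100:
        return "attentional bid"                 # steep rise
    return "unknown"
```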

Learning through imitation. Humans acquire new skills and new goals through imitation. Imitation can also be a natural mechanism for a robot to acquire new skills and goals. Consider this example: The robot is observing a person opening a glass jar. The person approaches the robot and places the jar on a table near the robot. The person rubs his hands together and then sets himself to removing the lid from the jar. He grasps the glass jar in one hand and the lid in the other and begins to unscrew the lid by turning it counter-clockwise. While he is opening the jar, he pauses to wipe his brow and glances at the robot to see what it is doing. He then resumes opening the jar. The robot then attempts to imitate the action.

Although classical machine learning addresses some issues this situation raises, building a system that can learn from this type of interaction requires a focus on additional research questions. Which parts of the action to be imitated are important (such as turning the lid counter-clockwise), and which aren't (such as wiping your brow)? Once the action has been performed, how does the robot evaluate the performance? How can the robot abstract the knowledge gained from this experience and apply it to a similar situation? These questions require knowledge about not only the physical but also the social environment.

Constructing and testing human-intelligence theories

In our research, not only do we draw inspiration from biological models for our mechanical designs and software architectures, we also attempt to use our implementations of these models to test and validate the original hypotheses. Just as computer simulations of neural nets have been used to explore and refine models from neuroscience, we can use humanoid robots to investigate and validate models from cognitive science and behavioral science. We have used the following four examples of biological models in our research.

Development of reaching and grasping. Infants pass through a sequence of stages in learning hand-eye coordination. We have implemented a system for reaching to a visual target that follows this biological model. Unlike standard kinematic manipulation techniques, this system is completely self-trained and uses no fixed model of either the robot or the environment.

Similar to the progression observed in infants, we first trained Cog to orient visually to an interesting object. The robot moved its eyes to acquire the target and then oriented its head and neck to face the target. We then trained the robot to reach for the target by interpolating between a set of postural primitives that mimic the responses of spinal neurons identified in frogs and rats. After a few hours of unsupervised training, the robot executed an effective reach to the visual target.

Several interesting outcomes resulted from this implementation. From a computer science perspective, the two-step training process was computationally simpler. Rather than attempting to map the visual stimulus location's two dimensions to the nine DOF necessary to orient and reach for an object, the training focused on learning two simpler mappings that could be chained together to produce the desired behavior. Furthermore, Cog learned the second mapping (between eye position and the postural primitives) without supervision. This was possible because the mapping between stimulus location and eye position provided a reliable error signal. From a biological standpoint, this implementation uncovered a limitation in the postural-primitive theory: although the model described how to interpolate between postures in the initial workspace, it provided no mechanism for extrapolating to postures outside the initial workspace.
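A minimal sketch of the chained two-mapping idea, under heavy assumptions (linear stand-in maps, four primitives, a fixed error-feedback gain, all hypothetical): the already-trained saccade map doubles as the supervisor for the reach map by converting the visual error of where the hand landed into eye coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PRIMITIVES = 4                        # postural primitives to blend
W_reach = np.zeros((N_PRIMITIVES, 2))   # eye position -> blend weights
G = rng.normal(size=(N_PRIMITIVES, 2))  # assumed fixed feedback gain

def saccade_map(px):
    """Stage 1 (trained first): pixel coordinates of a stimulus ->
    eye position (pan, tilt) that centers it. Linear stand-in model."""
    return 0.001 * np.asarray(px, dtype=float)

def reach_weights(eye_pos):
    """Stage 2: eye position -> blend weights over postural primitives."""
    return W_reach @ eye_pos

def train_reach_step(target_px, hand_px, lr=0.1):
    """Unsupervised update after one reach: the saccade map turns the
    visual error (hand position vs. target, in pixels) into eye
    coordinates, supplying the reliable error signal the text mentions."""
    global W_reach
    eye = saccade_map(target_px)
    err = saccade_map(hand_px) - eye          # error in eye coordinates
    W_reach -= lr * G @ np.outer(err, eye)    # delta-rule correction
```

Chaining keeps each learning problem low-dimensional: a 2-to-2 map learned from saccades, then a 2-to-N map learned from reach outcomes, instead of one 2-to-9-DOF map.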

Rhythmic movements. Kiyotoshi Matsuoka describes a model of spinal cord neurons that produce rhythmic motion. We have implemented this model to generate repetitive arm motions, such as turning a crank. Two simulated neurons with mutually inhibitory connections drive each arm joint. The oscillators take proprioceptive input from the joint and continuously modulate the equilibrium point of that joint's virtual spring. The interaction of the oscillator dynamics at each joint and the arm's physical dynamics determines the overall arm motion.

This implementation validated Matsuoka's model on various real-world tasks and provided some engineering benefits. First, the oscillators require no kinematic model of the arm or dynamic model of the system; no a priori knowledge was required about either the arm or the environment. Second, the oscillators were able to tune to a wide task range, such as turning a crank, playing with a Slinky, sawing a wood block, and swinging a pendulum, all without any change in the control-system configuration. Third, the system was extremely tolerant to perturbation. Not only could we stop and start it with a very short transient period (usually less than one cycle), but we could also attach large masses to the arm, and the system would quickly compensate for the change. Finally, the input to the oscillators could come from other modalities. One example was using an auditory input that let the robot drum along with a human drummer.
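The two-neuron oscillator itself is compact enough to state in code. The sketch below integrates Matsuoka's equations with typical textbook parameter values (the gains and time constants used on Cog are not given in the text); the output would shift the equilibrium point of the joint's virtual spring, and `feedback` stands in for the proprioceptive input from the joint.

```python
def matsuoka_step(state, dt=0.001, tau=0.05, tau_v=0.3, beta=2.5,
                  w_mutual=2.5, tonic=1.0, feedback=0.0):
    """One Euler step of a two-neuron Matsuoka oscillator.

    state = (x1, x2, v1, v2): membrane and adaptation variables of two
    mutually inhibitory neurons. Returns (new_state, output), where the
    output y1 - y2 modulates the joint's virtual-spring equilibrium.
    """
    x1, x2, v1, v2 = state
    y1, y2 = max(x1, 0.0), max(x2, 0.0)       # rectified firing rates
    # Each neuron is driven by tonic input, inhibited by the other
    # neuron's output, fatigued by its own adaptation state, and
    # entrained by proprioceptive feedback of opposite sign.
    dx1 = (-x1 - beta * v1 - w_mutual * y2 + tonic - feedback) / tau
    dx2 = (-x2 - beta * v2 - w_mutual * y1 + tonic + feedback) / tau
    dv1 = (-v1 + y1) / tau_v                  # slow self-adaptation
    dv2 = (-v2 + y2) / tau_v
    new = (x1 + dt * dx1, x2 + dt * dx2, v1 + dt * dv1, v2 + dt * dv2)
    return new, y1 - y2

# Usage: start slightly asymmetric so oscillation can emerge.
state = (0.1, 0.0, 0.0, 0.0)
for _ in range(5000):
    state, equilibrium_shift = matsuoka_step(state)
```

Feeding the joint's measured position back in as `feedback` is what lets the oscillator entrain to the arm's physical dynamics, which is why no arm model is needed.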

Visual search and attention. We have implemented Jeremy Wolfe's model of human visual search and attention, combining low-level feature detectors for visual motion, innate perceptual classifiers (such as face detectors), color saliency, and depth segmentation with a motivational and behavioral model.
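In guided-search-style models such as Wolfe's, bottom-up feature maps are combined under weights that top-down state can modulate. A minimal sketch, with hypothetical map names and motivation-chosen weights (not the robot's actual attention system):

```python
import numpy as np

def attend(feature_maps, weights):
    """Combine bottom-up feature maps into one saliency map and return
    the image location to attend to. `weights` would be set by the
    motivational/behavioral model, e.g. boosting the face map when the
    robot seeks interaction."""
    saliency = sum(weights[name] * fmap for name, fmap in feature_maps.items())
    return np.unravel_index(np.argmax(saliency), saliency.shape)

# Usage with hypothetical 64x64 feature maps.
h = w = 64
maps = {name: np.random.rand(h, w)
        for name in ("motion", "faces", "color", "depth")}
target = attend(maps, weights={"motion": 1.0, "faces": 2.0,
                               "color": 0.5, "depth": 0.5})
```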
