
Level8.unit1-part2-on-controlling-AI.docx

Uploaded: 2024-09-27 | Format: DOCX | Pages: 11 | Size: 15.43 KB

00:01 I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

00:25 I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:09 Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:30 It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

02:08 (Laughter)

02:12 The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:32 So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.

02:58 Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

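The goal-divergence concern can be made concrete with a toy optimizer. The sketch below is a deliberately simplistic illustration, not anything from the talk; the plans, scores, and names are all hypothetical. It shows that an agent handed an objective that merely omits something we care about will reliably pick the plan that sacrifices it:

```python
# A tiny, hypothetical illustration of goal divergence:
# the numbers and names here are invented for illustration only.

def choose_plan(plans, objective):
    """Return the plan that scores highest under the given objective."""
    return max(plans, key=objective)

# Each candidate plan is (widgets_produced, parkland_destroyed).
plans = [
    (10, 0),    # modest output, no side effects
    (12, 1),    # slightly more output, some damage
    (100, 9),   # maximal output, heavy damage
]

# The objective we meant: output matters, but so does the parkland.
def intended(plan):
    widgets, parkland = plan
    return widgets - 50 * parkland

# The objective we actually specified: parkland was simply omitted.
def specified(plan):
    widgets, _ = plan
    return widgets

print(choose_plan(plans, intended))    # (10, 0)
print(choose_plan(plans, specified))   # (100, 9)
```

Note that nothing in this sketch is malevolent: the optimizer does exactly what it was told, and the damage comes entirely from the small gap between the intended and the specified objective.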
03:23 Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

03:53 Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:11 Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

04:59 It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:13 The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence - I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems.

We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

05:53 Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:11 Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

06:45 (Laughter)

06:47 Sorry, a chicken.

06:48 (Laughter)

06:49 There's no reason for me to make this talk more depressing than it needs to be.

06:53 (Laughter)

06:56 It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

07:15 And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

07:56 The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:37 So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

08:50 (Laughter)

08:54 Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:22 And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

09:54 Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

10:26 (Laughter)

10:27 No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:00 And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

11:26 The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

11:52 Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

12:24 (Laughter)

12:26 The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

12:58 Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:33 But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

14:08 Thank you very much.

14:09 (Applause)
