Revealing OpenAI's plan to create AGI by 2027

In this document I will be revealing information I have gathered regarding OpenAI's (delayed) plans to create human-level AGI by 2027. Not all of it will be easily verifiable, but hopefully there's enough evidence to convince you.

Summary: OpenAI started training a 125 trillion parameter multimodal model in August of 2022. The first stage was Arrakis, also called Q*. The model finished training in December of 2023, but the launch was canceled due to high inference cost. This is the original GPT-5, which was planned for release in 2025. Gobi (GPT-4.5) has been renamed to GPT-5 because the original GPT-5 has been canceled. The next stage of Q*, originally GPT-6 but since renamed to GPT-7 (originally for release in 2026), has been put on hold because of the recent lawsuit by Elon Musk. Q* 2025 (GPT-8) was planned to be released in 2027, achieving full AGI. Q* 2023 = 48 IQ. Q* 2024 = 96 IQ (delayed). Q* 2025 = 145 IQ (delayed). Elon Musk caused the delay because of his lawsuit. This is why I'm revealing the information now: no further harm can be done.

I've seen many definitions of AGI (artificial general intelligence), but I will define AGI simply as an artificial intelligence that can do any intellectual task a smart human can. This is how most people define the term now.

2020 was the first time I was shocked by an AI system: that system was GPT-3. GPT-3.5, an upgraded version of GPT-3, is the model behind ChatGPT. When ChatGPT was released, I felt as though the wider world was finally catching up to something I was interacting with 2 years prior. I used GPT-3 extensively in 2020 and was shocked by its ability to reason. GPT-3, and its half-step successor GPT-3.5 (which powered the now famous ChatGPT, before it was upgraded to GPT-4 in March 2023), were a massive step towards AGI in a way that earlier models weren't. The thing to note is, earlier language models like GPT-2 (and basically all chatbots since Eliza) had no real ability to respond coherently at all. So why was GPT-3 such a massive leap?

Parameter Count

"Deep learning" is a concept that essentially goes back to the beginning of AI research in the 1950s. The first neural network was created in the 50s, and modern neural networks are just "deeper", meaning they contain more layers: they're much, much bigger and trained on lots more data. Most of the major techniques used in AI today are rooted in basic 1950s research, combined with a few minor engineering solutions like "backpropagation" and "transformer models". The overall point is that AI research hasn't fundamentally changed in 70 years. So, there are only two real reasons for the recent explosion of AI capabilities: size and data. A growing number of people in the field are beginning to believe we've had the technical details of AGI solved for many decades, but merely didn't have enough computing power and data to build it until the 21st century. Obviously, 21st century computers are vastly more powerful than 1950s computers. And of course, the internet is where all the data came from.

So, what is a parameter? You may already know, but to give a brief, digestible summary: it's analogous to a synapse in a biological brain, which is a connection between neurons. Each neuron in a biological brain has roughly 1000 connections to other neurons. Obviously, digital neural networks are conceptually analogous to biological brains. So, how many synapses (or "parameters") are in a human brain? The most commonly cited figure for synapse count in the brain is roughly 100 trillion, which would mean each neuron (100 billion in the human brain) has roughly 1000 connections. If each neuron in a brain has 1000 connections, this means a cat has roughly 250 billion synapses, and a dog has 530 billion synapses. Synapse count generally seems to predict higher intelligence, with a few exceptions: for instance, elephants technically have a higher synapse count than humans yet display lower intelligence. The simplest explanation for larger synapse counts with lower intelligence is a smaller amount of quality data. From an evolutionary perspective, brains are "trained" on billions of years of epigenetic data, and human brains evolved from higher quality socialization and communication data than elephants, leading to our superior ability to reason. Regardless, synapse count is definitely important.

Again, the explosion in AI capabilities since the early 2010s has been the result of far more computing power and far more data. GPT-2 had 1.5 billion connections, which is less than a mouse's brain (10 billion synapses). GPT-3 had 175 billion connections, which is getting somewhat close to a cat's brain. Isn't it intuitively obvious that an AI system the size of a cat's brain would be superior to an AI system smaller than a mouse's brain?
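To make the size comparison above concrete, here is a minimal sketch that simply restates the figures already cited in this section (rough synapse counts per species, and published model parameter counts); the numbers are the document's own rough estimates, not measurements of mine.

```python
# Compare model parameter counts against the rough synapse counts cited above.
# All figures are the approximate numbers quoted in this section.

BRAIN_SYNAPSES = {
    "mouse": 10e9,
    "cat": 250e9,
    "dog": 530e9,
    "human": 100e12,   # ~100 billion neurons * ~1000 synapses per neuron
}

MODEL_PARAMS = {
    "GPT-2": 1.5e9,
    "GPT-3": 175e9,
    "rumored 100T model": 100e12,
}

for model, params in MODEL_PARAMS.items():
    # Largest brain whose synapse count this model has matched or exceeded.
    reached = [name for name, syn in BRAIN_SYNAPSES.items() if params >= syn]
    closest = reached[-1] if reached else "none"
    print(f"{model}: {params:.2e} parameters, "
          f"largest brain matched or exceeded: {closest}")
```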

Predicting AI Performance

In 2020, after the release of the 175 billion parameter GPT-3, many speculated about the potential performance of a model 600 times larger, at 100 trillion parameters, because this parameter count would match the human brain's synapse count. There was no strong indication in 2020 that anyone was actively working on a model of this size, but it was interesting to speculate about. The big question is: is it possible to predict AI performance by parameter count? As it turns out, the answer is yes, as you'll see on the next page. (Source: https:/ )

The above is from Lanrian's LessWrong post. As Lanrian illustrated, extrapolations show that AI performance inexplicably seems to reach human level at the same time as human-level brain size is matched with parameter count. His count for the synapse number in the brain is roughly 200 trillion parameters, as opposed to the commonly cited 100 trillion figure, but the point still stands, and the performance at 100 trillion parameters is remarkably close to optimal. By the way, an important thing to note is that although 100 trillion is slightly suboptimal in performance, there is an engineering technique OpenAI is using to bridge this gap. I'll explain this towards the very end of the document because it's crucial to what OpenAI is building. Lanrian's post is one of many similar posts online: it's an extrapolation of performance based on the jump between previous models. OpenAI certainly has much more detailed metrics, and they've come to the same conclusion as Lanrian, as I'll show later in this document.
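At bottom, this kind of extrapolation is a curve fit: take the benchmark performance of successive models at known parameter counts and extend the trend until it crosses a human-level score. The sketch below only illustrates the shape of that calculation; the data points, the linear-in-log-parameters trend, and the "human level" threshold are all placeholder assumptions, not the actual numbers from Lanrian's post.

```python
# Rough illustration of extrapolating performance vs. parameter count.
# NOTE: the data points and threshold below are hypothetical placeholders.

import numpy as np

# (parameter count, aggregate benchmark score in [0, 1]) -- placeholder values
params = np.array([1.5e9, 175e9, 1e12])
scores = np.array([0.25, 0.45, 0.55])

# Fit a simple linear trend of score against log10(parameters).
slope, intercept = np.polyfit(np.log10(params), scores, deg=1)

human_level = 0.85  # placeholder "human-level" score on the same scale
log_params_needed = (human_level - intercept) / slope
print(f"Trend crosses human level near 10^{log_params_needed:.1f} "
      f"parameters, i.e. about {10**log_params_needed:.2e}")
```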

So, if AI performance is predictable based on parameter count, and 100 trillion parameters is enough for human-level performance, when will a 100 trillion parameter AI model be released?

GPT-5 achieved proto-AGI in late 2023 with an IQ of 48

The first mention of a 100 trillion parameter model being developed by OpenAI was in the summer of 2021, mentioned offhand in a Wired interview by the CEO of Cerebras (Andrew Feldman), a company in which Sam Altman is a major investor. Sam Altman's response to Andrew Feldman came at an online meetup and Q&A called AC10, which took place in September 2021. It's crucial to note that Sam Altman ADMITS to their plans for a 100 trillion parameter model. (Sources: https:/ ) The reddit posting itself is sourced from a LessWrong post, which was deleted at Sam Altman's request: https:/

Researcher Igor Baikov made the claim, only a few weeks later, that GPT-4 was being trained and would be released between December and February. Again, I will prove that Igor really did have accurate information, and is a credible source. This will be important soon.

Gwern is a famous figure in the AI world: he is an AI researcher and blogger. He messaged Igor Baikov on Twitter (in September 2022) and this is the response he received. Important to remember: "colossal number of parameters"; "text", "audio", "images", "possibly video", and "multimodal".

This comes from a subreddit called "thisisthewayitwillbe", a small, private subreddit I'm part of, run by a mathematics professor with an interest in AGI. AI enthusiasts (and a few experts) use the subreddit to discuss AI topics deeper than what you'll find in the mainstream. A "colossal number of parameters"? Sounds like Igor Baikov was referencing a 100 trillion parameter model, as 500 billion parameter models, and up to 1 trillion parameter models, had already been trained many times by the time of his tweet in summer 2022 (making models of that size unexceptional and certainly not "colossal").

These tweets from "rxpu", seemingly an AI enthusiast (?) from Turkey, are interesting because they make a very similar claim about GPT-4's release window before anyone else did (trust me, I spent many hours, daily, scouring the internet for similar claims, and no one else made this specific claim before he did). He also mentions a "125 trillion synapse" GPT-4; however, he incorrectly states GPT-3's parameter count as 1 trillion. (It seems as though rxpu did have inside information but got something mixed up with the parameter counts; again, I will illustrate this later, and prove that rxpu was not lying.)

This is a weaker piece of evidence, but it's worth including because "roon" is fairly notable as a Silicon Valley AI researcher, and is followed by Sam Altman, CEO of OpenAI, and other OpenAI researchers on Twitter.

In November 2022 I reached out to an AI blogger named Alberto Romero. His posts seem to spread pretty far online, so I was hoping that if I sent him some basic info about GPT-4 he might do a writeup and the word would get out. The results of this attempt were pretty remarkable, as I'll show in the next two pages.

Alberto Romero's post. The general response will be shown on the next page. The 100 trillion parameter leak went viral, reaching millions of people, to the point that OpenAI employees, including CEO Sam Altman, had to respond, calling it "complete bullshit". The Verge called it "factually incorrect". Alberto Romero claimed responsibility for the leak, as you can see on the left. Igor Baikov, the origin of the "colossal number of parameters" statement, also saw the viral spread of the GPT-4 leak (which was essentially his own doing) and responded. So, after all, Igor really did mean "100 trillion parameters" when he said "a colossal number of parameters". But is Igor a reliable source? Are his other claims accurate? What about the multimodality? What about the ability for GPT-4 to process images, sounds, and videos? I will prove Igor's reliability shortly.

Somewhere around Oct/Nov 2022 I became convinced that OpenAI planned to first release a 1-2 trillion parameter subset of GPT-4 before releasing the full 100 trillion parameter model ("GPT-5"). These sources aren't particularly solid, but they all said the same thing, including rxpu, who once claimed there was a 125 trillion parameter model in the works and then incorrectly claimed GPT-3 was 1 trillion; I believe he got his information mixed up.

VIDEO / IMAGES: Examples of the current level of quality of publicly available video and image generation AI models. These models are less than 10 billion parameters in size. What happens when you train a model 10,000 times larger, on all the data available on the internet, and give it the ability to generate images and video? (The answer: images and videos completely indistinguishable from the real thing, 100% of the time, with no exceptions, no workarounds, no possible way for anyone to tell the difference, no matter how hard they try.) (Update: SORA IS FROM THE GPT-5 Q* 2023 MODEL.) Important: notice how the AI model is able to generate multiple angles of the same scene with physically accurate lighting, and in some cases even physically accurate fluid and rain. If you can generate images and videos with accurate, common-sense physics, you have COMMON SENSE REASONING. If you can generate common sense, you UNDERSTAND common sense.

Two posts from Longjumping-Sky-1971. I'm including this because he accurately predicted the release date of GPT-4 weeks in advance (no one else posted this information publicly beforehand, meaning he had an inside source). His posts now have much more credibility, and he claimed image and audio generation would be trained in Q3 of 2023. If video generation training is simultaneous or shortly after, this lines up with Siqi Chen's claim of GPT-5 being finished training in December of 2023.

Let's take a stroll back to February 2020, a few months before GPT-3 was released. An article from Technology Review, which was an "inside story" about OpenAI, seems to suggest that OpenAI was in the early stages of a "secret" project involving an AI system trained on images, text, and "other data", and that leadership at OpenAI thought it was the most promising way to reach AGI. I wonder what this could possibly be referring to. The next slide will reveal some quotes from the President of OpenAI from 2019, and it will tell you what their plan was.

OpenAI president Greg Brockman stated in 2019, following a 1 billion dollar investment from Microsoft at the time, that OpenAI planned to build a human-brain-sized model within five years, and that this was their plan for how to achieve AGI. 2019 + 5 = 2024. Both of these sources are clearly referring to the same plan to achieve AGI: a human-brain-sized AI model, trained on "images, text, and other data", due to be trained within five years of 2019, so by 2024. Seems to line up with all the other sources I've listed in this document. Source: Time Magazine, Jan 12 2023.

As I'll show in these next few slides, AI leaders are suddenly starting to sound the alarm, almost like they know something VERY SPECIFIC that the general public doesn't. Date of NYT interview: May 1 2023. "I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." What suddenly made him change his mind, and decide to leave Google to speak about the dangers of AI?

Shortly after the release of GPT-4, the Future of Life Institute, a highly influential non-profit organization concerned with mitigating potential catastrophic risks to the world, released an open letter calling on all AI labs to pause AI development for six months. Why? The first released version of the letter specifically said "(including the currently-being-trained GPT-5)". Why was that included, and why was it removed? Sources: Wired, March 29 2023; Vox, March 29 2023.

Some alarming quotes from an interview and Q&A with Sam Altman from October 2022 (YouTube link: https:/ ). Q&A question: "Do we have enough information in the internet to create AGI?" Sam Altman's blunt, immediate response, interrupting the man asking the question: "Yes." Sam elaborates: "Yeah, we're confident there is. We think about this and measure it quite a lot." The interviewer interjects: "What gives you that confidence?" Sam's reply: "One of the things I think that OpenAI has driven in the field that's been really healthy is that you can treat scaling laws as a scientific prediction. You can do this for compute, you can do this for data, but you can measure at small scale and you can predict quite accurately how it's going to scale up. How much data you're going to need, how much compute you're going to need, how many parameters you're going to need, when the generated data gets good enough to be helpful... And the internet is... there's a lot o
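The "measure at small scale and predict the big run" workflow Altman describes is the standard scaling-law exercise. As a purely illustrative sketch: the C ≈ 6·N·D training-FLOP rule of thumb and the roughly 20-tokens-per-parameter ratio below come from the publicly published Chinchilla scaling-law results, not from OpenAI or from anything in this document.

```python
# Back-of-the-envelope compute planning in the style the quote describes:
# given a training FLOP budget, estimate parameter count and token count.
# Uses the published Chinchilla heuristics (C ~= 6*N*D, D ~= 20*N), which are
# assumptions for illustration only.

def compute_optimal_split(flops_budget: float,
                          tokens_per_param: float = 20.0) -> tuple[float, float]:
    """Return (parameters, training tokens) for a given FLOP budget."""
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

for budget in (1e21, 1e23, 1e25):  # example FLOP budgets, purely illustrative
    n, d = compute_optimal_split(budget)
    print(f"{budget:.0e} FLOPs -> ~{n:.2e} params, ~{d:.2e} tokens")
```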
