Governing AI: A Blueprint for the Future
May 25, 2023

Table of contents

Foreword
By Microsoft Vice Chair and President Brad Smith

Part 1: Governing AI: A legal and regulatory blueprint for the future
  Implementing and building upon new government-led AI safety frameworks
  Requiring effective safety brakes for AI systems that control critical infrastructure
  Developing a broad legal and regulatory framework based on the technology architecture for AI
  Promote transparency and ensure academic and nonprofit access to AI
  Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology
  Conclusion

Part 2: Responsible by design: Microsoft's approach to building AI systems that benefit society
  Microsoft's commitment to developing AI responsibly
  Operationalizing Responsible AI at Microsoft
  Case study: Applying our Responsible AI approach to the new Bing
  Advancing Responsible AI through company culture
  Empowering customers on their Responsible AI journey

Foreword: How Do We Best Govern AI?
Brad Smith, Vice Chair and President, Microsoft

"Don't ask what computers can do, ask what they should do."

That is the title of the chapter on AI and ethics in a book I coauthored in 2019. At the time, we wrote that "this may be one of the defining questions of our generation." Four years later, the question has seized center stage not just in the world's capitals, but around many dinner tables.

As people have used or heard about the power of OpenAI's GPT-4 foundation model, they have often been surprised or even astounded. Many have been enthused or even excited. Some have been concerned or even frightened. What has become clear to almost everyone is something we noted four years ago: we are the first generation in the history of humanity to create machines that can make decisions that previously could only be made
by people.

Countries around the world are asking common questions. How can we use this new technology to solve our problems? How do we avoid or manage new problems it might create? How do we control technology that is so powerful? These questions call not only for broad and thoughtful conversation, but decisive and effective action.

This paper offers some of our ideas and suggestions as a company. These suggestions build on the lessons we've been learning from the work we've been doing for several years. Microsoft CEO Satya Nadella set us on a clear course when he wrote in 2016 that "perhaps the most productive debate we can have isn't one of good versus evil: the debate should be about the values instilled in the people and institutions creating this technology."

Since that time, we've defined, published, and implemented ethical principles to guide our work. And we've built out constantly improving engineering and governance systems to put these principles into practice. Today we have nearly 350 people working on responsible AI at Microsoft, helping us implement best practices for building safe, secure, and transparent AI systems designed to benefit society.

New opportunities to improve the human condition

The resulting advances in our approach have given us the capability and confidence to see ever-expanding ways for AI to improve people's lives. We've seen AI help save individuals' eyesight, make progress on new cures for cancer, generate new insights about proteins, and provide predictions to protect people from hazardous weather. Other innovations are fending off cyberattacks and helping to protect fundamental human rights, even in nations afflicted by foreign invasion or civil war.

Everyday activities will benefit as well. By acting as a copilot in people's lives, the power of foundation models like GPT-4 is turning search into a more powerful tool for research and improving productivity for people at work. And for any parent who has struggled to remember how to help their 13-year-old child
through an algebra homework assignment, AI-based assistance is a helpful tutor.

In so many ways, AI offers perhaps even more potential for the good of humanity than any invention that has preceded it. Since the invention of the printing press with movable type in the 1400s, human prosperity has been growing at an accelerating rate. Inventions like the steam engine, electricity, the automobile, the airplane, computing, and the internet have provided many of the building blocks for modern civilization. And like the printing press itself, AI offers a new tool to genuinely help advance human learning and thought.

Guardrails for the future

Another conclusion is equally important: it's not enough to focus only on the many opportunities to use AI to improve people's lives. This is perhaps one of the most important lessons from the role of social media. Little more than a decade ago, technologists and political commentators alike gushed about the role of social media in spreading democracy during the Arab Spring. Yet five years after that, we learned that social media, like so many other technologies before it, would become both a weapon and a tool, in this case aimed at democracy itself.

Today, we are 10 years older and wiser, and we need to put that wisdom to work. We need to think early on and in a clear-eyed way about the problems that could lie ahead. As technology moves forward, it's just as important to ensure proper control over AI as it is to pursue its benefits.

We are committed and determined as a company to develop and deploy AI in a safe and responsible way. We also recognize, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.

When we at Microsoft adopted our six ethical principles for AI in 2018, we noted that one principle was the bedrock for everything else: accountability. This is the fundamental need: to ensure that machines remain subject to effective oversight by
people, and that the people who design and operate machines remain accountable to everyone else. In short, we must always ensure that AI remains under human control. This must be a first-order priority for technology companies and governments alike.

This connects directly with another essential concept. In a democratic society, one of our foundational principles is that no person is above the law. No government is above the law. No company is above the law, and no product or technology should be above the law. This leads to a critical conclusion: people who design and operate AI systems cannot be accountable unless their decisions and actions are subject to the rule of law.

In many ways, this is at the heart of the unfolding AI policy and regulatory debate. How do governments best ensure that AI is subject to the rule of law? In short, what form should new law, regulation, and policy take?

A five-point blueprint for the public governance of AI

Part 1 of this paper offers a five-point blueprint to address several current and emerging AI issues through public policy, law, and regulation. We offer this recognizing that every part of this blueprint will benefit from broader discussion and require deeper development. But we hope this can contribute constructively to the work ahead.

First, implement and build upon new government-led AI safety frameworks. The best way to succeed is often to build on the successes and good ideas of others, especially when one wants to move quickly. In this instance, there is an important opportunity to build on work completed just four months ago by the U.S. National Institute of Standards and Technology, or NIST. Part of the Department of Commerce, NIST has completed and launched a new AI Risk Management Framework.

We offer four concrete suggestions to implement and build upon this framework, including commitments Microsoft is making in response to a recent White House meeting with leading AI companies. We also believe the Administration and
other governments can accelerate momentum through procurement rules based on this framework.

A five-point blueprint for governing AI:
  1. Implement and build upon new government-led AI safety frameworks.
  2. Require effective safety brakes for AI systems that control critical infrastructure.
  3. Develop a broader legal and regulatory framework based on the technology architecture for AI.
  4. Promote transparency and ensure academic and public access to AI.
  5. Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology.

Second, require effective safety brakes for AI systems that control critical infrastructure. In some quarters, thoughtful individuals increasingly are asking whether we can satisfactorily control AI as it becomes more powerful. Concerns are sometimes posed regarding AI control of critical infrastructure like the electrical grid, water system, and city traffic flows.

This is the right time to discuss this question. This blueprint proposes new safety requirements that in effect would create safety brakes for AI systems that control the operation of designated critical infrastructure. These fail-safe systems would be part of a comprehensive approach to system safety that would keep effective human oversight, resilience, and robustness top of mind. In spirit, they would be similar to the braking systems engineers have long built into other technologies such as elevators, school buses, and high-speed trains, to safely manage not just everyday scenarios, but emergencies as well.

In this approach, the government would define the class of high-risk AI systems that control critical infrastructure and warrant such safety measures as part of a comprehensive approach to system management. New laws would require operators of these systems to build safety brakes into high-risk AI systems by design. The government would then ensure that operators test high-risk systems regularly to make
certain that the system safety measures are effective. And AI systems that control the operation of designated critical infrastructure would be deployed only in licensed AI datacenters that would ensure a second layer of protection through the ability to apply these safety brakes, thereby ensuring effective human control.

Third, develop a broad legal and regulatory framework based on the technology architecture for AI. We believe there will need to be a legal and regulatory architecture for AI that reflects the technology architecture for AI itself. In short, the law will need to place various regulatory responsibilities upon different actors based upon their role in managing different aspects of AI technology.

For this reason, this blueprint includes information about some of the critical pieces that go into building and using new generative AI models. Using this as context, it proposes that different laws place specific regulatory responsibilities on the organizations exercising certain responsibilities at three layers of the technology stack: the applications layer, the model layer, and the infrastructure layer.

This should first apply existing legal protections at the applications layer to the use of AI. This is the layer where the safety and rights of people will most be impacted, especially because the impact of AI can vary markedly in different technology scenarios. In many areas, we don't need new laws and regulations. We instead need to apply and enforce existing laws and regulations, helping agencies and courts develop the expertise needed to adapt to new AI scenarios.

There will then be a need to develop new law and regulations for highly capable AI foundation models, best implemented by a new government agency. This will impact two layers of the technology stack. The first will require new regulations and licensing for these models themselves. And the second will involve obligations for the AI infrastructure operators on which these models are developed and deployed. The blueprint that follows offers suggested goals and approaches for each of these layers.

In doing so, this blueprint builds in part on a principle developed in recent decades in banking to protect against money laundering and criminal or terrorist use of financial services. The "Know Your Customer" or KYC principle requires that financial institutions verify customer identities, establish risk profiles, and monitor transactions to help detect suspicious activity. It would make sense to take this principle and apply a KY3C approach that creates in the AI context certain obligations to know one's cloud, one's customers, and one's content.

KY3C: Applying to AI services the "Know Your Customer" concept developed for financial services: know your cloud, know your customer, know your content.

Fourth, promote transparency and ensure academic and nonprofit access to AI resources. While there are some important tensions between transparency and the need for security, there exist many opportunities to make AI systems more transparent in a responsible way. That's why Microsoft is committing to an annual AI transparency report and other steps to expand transparency for our AI services.

We also believe it is critical to expand access to AI resources for academic research and the nonprofit community. Basic research, especially at universities, has been of fundamental importance to the economic and strategic success of the United States since the 1940s. But unless academic researchers can obtain access to substantially more computing resources, there is a real risk that scientific and technological inquiry will suffer, including relating to AI itself. Our blueprint calls for new steps, including steps we will take across Microsoft, to address these priorities.

Fifth, pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology. One lesson from recent years is what democratic societies can accomplish when they harness the power of technology and bring the public and private sectors together. It's a lesson we need to build upon to address the impact of AI on society.

We will all benefit from a strong dose of clear-eyed optimism. AI is an extraordinary tool. But like other technologies, it too can become a powerful weapon, and there will be some around the world who will seek to use it that way. But we should take some heart from the cyber front and the last year and a half of the war in Ukraine. What we found is that when the public and private sectors work together, when like-minded allies come together, and when we develop technology and use it as a shield, it's more powerful than any sword on the planet.

Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet's sustainability needs. Perhaps more than anything, a wave of new AI technology provides an occasion for thinking big and acting boldly. In each area, the key to success will be to develop concrete initiatives and bring governments, respected companies,