

Commentary

With AI, we need both competition and safety

July 8, 2024


  • Regulatory oversight of AI must encourage collaboration on AI safety without enabling anticompetitive alliances.
  • Regulation must close the gaps in voluntary commitments with an AI safety model that includes a supervised process to develop standards, a market that rewards firms that exceed standards, and ongoing oversight of compliance activities.
  • This new model of AI safety should also look for inspiration to the many existing industry-government alliances that create enforceable standards, such as FINRA and NERC.
President Joe Biden is reflected in a screen on stage while he speaks during an event on Artificial Intelligence in the East Room at the White House on October 30, 2023, in Washington, DC. Samuel Corum/Sipa USA via Reuters Connect

The Federal Trade Commission (FTC) and Department of Justice (DOJ) are investigating whether certain transactions and collaborations between artificial intelligence (AI) companies and others violate antitrust laws. Such investigations are warranted. As a nation, we should be concerned not only that the development of cutting-edge frontier models is controlled by a handful of companies, but also that AI is adjacent to, and dependent on, already concentrated markets, such as cloud platforms and high-powered microchips. We should want AI to grow in a competitive environment.

The country should also want AI models to be safe. Competition and safety should not be mutually exclusive. The FTC and DOJ should make clear that collaboration on AI safety is not only allowed, but also expected. AI should grow up in an environment where the companies cooperate on the delivery of safe products.

Building the AI future around competition and safety should be a no-brainer. The challenge is how to ensure that what could look like technology collaboration on AI safety does not mask what might be an anticompetitive alliance. Meeting that challenge will require a new type of regulatory oversight, although one for which there are historical precedents.

AI safety: an all-hands effort

Since the possibilities of AI came to the attention of the public, it has been clear that the technology combines the potential of enormous rewards with frightening risks, both of which are expanding at exponential speed. "By the time children born today are in kindergarten," AI pioneer Ray Kurzweil recently predicted, "artificial intelligence (AI) will probably have surpassed humans at all cognitive levels, from science to creativity."

It is difficult for all but the most technologically sophisticated to understand where the risks are, let alone how to address them. Policymakers should encourage鈥攁nd not discourage鈥攖hose who know the most to set standards for what constitutes harmful AI, how to detect it, and how to protect against it.

That is not something that only one entity can do. AI safety must be an all-hands-on-deck effort. It is a challenge made all the more important, and difficult, by the proliferation of open-source AI models capable of being altered by anyone with the necessary skills.

Such an effort begins with an enforceable agreement for uniformly applicable safety standards. The absence of such standards promises a "race to the bottom" in which some enterprises cut corners to gain profits while forcing the rest of society to bear the costs of unsafe products and services.

AI safety standards

"The Rome Call for AI Ethics," created in 2020 with signatories as diverse as Microsoft, IBM, and the Vatican, has been the "gold standard" in AI practices. Reportedly, the G7 leaders have looked to it as a model for an AI code of conduct. Experience has shown, however, that such self-regulation, while commendable, is too often insufficient.

There are two root problems with a voluntary AI safety code. The first is that the code's development and application is only as good as its weakest link. As one analysis noted in the context of cybersecurity, an entirely voluntary approach to safety "does not have the expanse needed to address the broad-based issues in cyber space where the weak link in the chain can break the entire security perimeter." So too with AI: we need all the links in the AI ecosystem to live up to a minimum standard that protects all. The second problem is that voluntary commitments, by definition, lack enforcement.

To overcome the problems inherent in a voluntary code, we should seek to create an AI safety model with three basic components. First, it would need to have a supervised process that identifies issues and convenes affected companies and civil society to develop standards. Just as the standard for mobile phones has been agile enough to evolve from 1G through 5G as technology has evolved, so can a standard for AI behavior evolve as the technology evolves.

Second, such a safety model must create a virtuous cycle in which the market rewards enterprises that not only meet baseline standards but exceed them. As one report recommended, the government and private sector should "create market incentives for higher tiers of standards and practices… Such a model would provide incentives for individual companies to invest, purely on a voluntary basis, in enhanced cyber security."

Finally, there should be ongoing oversight of compliance activities. Insight into the code's effectiveness requires the requisite levels of transparency to enable auditing and enforcement. Such oversight can be accomplished neither by government alone nor through individual enterprise efforts. It requires collaboration among multiple stakeholders, including government, to "inspect what you expect" and impose penalties for unacceptable behavior.

A new model for AI safety

AI may be new, but the responsibilities of AI companies to protect their users have been around for hundreds of years. As England was breaking free of the bonds of feudalism, a set of common law principles arose to protect the nascent middle class. Imported to the American colonies, one of those principles is the Duty of Care, which holds that the provider of a good or service has an obligation to anticipate and mitigate potential harm that may result. Throughout history, the Duty of Care has been applied to a continuing stream of technological innovations. AI is but the latest in that parade.

AI is a departure from the linear world of step-by-step progress to an exponential experience characterized by the velocity of innovation and change. Assuring safety in this new reality must similarly cleave from the practices of the old analog world. This means the government behaving less as a dictator of practices and more as an overseer of AI safety standards.

There are many examples of how industry collaboration can be done in the public interest. For example, the medical profession sets standards of care for doctors. Failure to follow those standards can become the basis for legal action, with court findings on whether the standards were followed serving as the key factor in determining liability. A Fordham University law professor has likened this approach to a potential model for AI.

There are multiple examples of industry-government alliances to create enforceable codes or standards. The National Society of Professional Engineers (NSPE) maintains a code of ethics with several safety-related principles, such as engineers holding "paramount the safety, health, and welfare of the public" and performing "services only in areas of their competence"; the code is enforceable through disciplinary action, civil and criminal liability, and state oversight. The Financial Industry Regulatory Authority (FINRA) regulates aspects of the financial industry through an industry-developed code overseen by the Securities and Exchange Commission (SEC). The North American Electric Reliability Corporation (NERC) is an industry-led group that has developed policies to prevent blackouts and is overseen by the Federal Energy Regulatory Commission (FERC).

The establishment of enforceable safety standards, this time for AI, follows a well-paved pathway. We should already be on that path.

Collaboration for safety and antitrust law

Of course, such a safety collaboration should be structured so as not to affect prices, output, or competitive intensity. This is a doable task. The U.S. government has a history of not challenging competitor collaborations as antitrust law violations when those efforts served the national interest.

As a starting point, the law recognizes that the Sherman Act could lead to overdeterrence that could harm the public interest. As the Supreme Court noted in United States v. United States Gypsum Co., a rule of reason should govern review of collaborative activity designed to produce public benefit. "With certain exceptions for conduct regarded as per se illegal because of its unquestionably anticompetitive effects, the behavior proscribed by the Sherman Act is often difficult to distinguish from the gray zone of socially acceptable and economically justifiable business conduct," the Court observed. "The imposition of criminal liability… for engaging in such conduct which only after the fact is determined to violate the statute because of anticompetitive effects… holds out the distinct possibility of overdeterrence; salutary and procompetitive conduct lying close to the borderline of impermissible conduct might be shunned by businessmen who chose to be excessively cautious in the face of uncertainty regarding possible exposure to criminal punishment for even a good-faith error of judgment."

But the nation does not have time, and there is too much at stake, for the rules of the road to wait on a case winding its way through the courts. We need clarity on collaborations involving AI sooner. And we can get that clarity through the government blessing industry collaborations that clearly serve the public interest.

Once again, a good model to study is that used for cybersecurity. It became clear in the early days of the Obama administration that the government had to do more to upgrade defenses against cybersecurity attacks. After congressional efforts stalled, President Obama issued an executive order in 2013 that called for creating voluntary public-private partnerships to collaboratively define standards and best practices for both industry and the government.

Then, in 2014, the FTC and DOJ issued a joint policy statement that, while not enforceable or binding, gave joint ventures the go-ahead on information sharing related to addressing cyberthreats. As the statement noted, "Some private entities may be hesitant to share cyber threat information with each other, especially competitors, because they have been counseled that sharing of information among competitors may raise antitrust concerns. The Agencies do not believe that antitrust is—or should be—a roadblock to legitimate cybersecurity information sharing. While it is true that certain information sharing agreements among competitors can raise competitive concerns, sharing of the cyber threat information mentioned above is highly unlikely to lead to a reduction in competition and, consequently, would not be likely to raise antitrust concerns. To decrease uncertainty regarding the Agencies' analysis of this type of information sharing, the Agencies are issuing this Statement to describe how they analyze cyber threat information sharing."

As the statement further noted, "Cyber threat information typically is very technical in nature and very different from the sharing of competitively sensitive information such as current or future prices and output or business plans." That is true for AI as well.

Getting to yes on both competition and safety for AI

Recently, Assistant Attorney General Jonathan Kanter, the head of the Justice Department's antitrust division, was asked whether media outlets should be allowed to collaborate to suppress misinformation. He responded, "This is a thorny issue. We stand for the proposition that competition is good for our democracy and the free flow of information. There are no legal prohibitions, under the right instances, under the right circumstances, of efforts to improve safety. But it doesn't need to come at the expense of competition."

He's right. The same logic should apply to AI and safety. Yes, the FTC and DOJ antitrust investigations should continue. But so should efforts to set standards and otherwise address safety, which also encompasses cybersecurity. We need to make sure that neither effort undercuts the two goals that must be achieved: a competitive and a safe AI market. Whether through an executive order, a joint FTC/DOJ statement, or some other means, the government should clarify that while it will remain focused on addressing collaborations and acquisitions that reduce competition, it is possible, and encouraged, for AI enterprises to collaborate to protect safety without sharing information about prices or business plans.

Acknowledgements and disclosures

Microsoft and IBM are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.