Why tech titans want to drown AI in regulations

One of the pleasures of writing about business is the occasional realization that history is shifting in front of you. It sends chills up and down the spine. You begin taking meticulous notes about your surroundings in the vain hope of writing a bestseller's opening paragraph. Your columnist experienced it recently in the immaculate San Francisco offices of Anthropic, a darling of the artificial-intelligence (AI) community. There was that well-known thrill when Jack Clark, one of Anthropic's co-founders, compared the need for international coordination to stop the spread of dangerous AI to the Baruch Plan of 1946, an attempt to place the world's nuclear weapons under UN supervision. When businesspeople compare their inventions, even loosely, to nuclear bombs, it marks a turning point.




There has been no shortage of anxiety about the existential dangers posed by AI since ChatGPT emerged late last year. But this moment is different. Talk to some of the industry's pioneers and you will find they are more concerned about the hazards lurking in the products they are building right now than about a dystopian future in which machines outsmart people. ChatGPT is an example of "generative" AI, which creates humanlike content based on its analysis of texts, images, and sounds scraped from the web. At a congressional hearing this month, Sam Altman, CEO of OpenAI, the company that created it, said that government intervention is essential to reducing the risks posed by the increasingly powerful "large language models" (LLMs) that power such bots.


Even in the absence of regulations, several of his San Francisco rivals say they have already opened back channels with government officials in Washington, DC, to discuss the potential harms they have uncovered while probing their chatbots. These include toxic content such as racism and dangerous capabilities such as child-grooming or bomb-making. Mustafa Suleyman, co-founder of Inflection AI (and a board member of The Economist's parent company), plans to offer generous rewards to hackers who can find flaws in Pi, his firm's chatbot.


Such caution makes this digital boom look different from past ones, at least on the surface. Venture capital is flowing in as usual. But in a break with the "move fast and break things" approach of the past, safety is now a central theme of many startup pitches. The old Silicon Valley adage that, when it comes to regulation, it is better to ask for forgiveness than permission has been abandoned. Startups such as OpenAI, Anthropic, and Inflection have adopted corporate structures that constrain profit-maximization, to signal that they will not sacrifice safety for money.


Another way this boom differs from previous ones is that the startups building proprietary LLMs are not seeking to topple the established big-tech hierarchy. If anything, they may help consolidate it. That is because of their symbiotic relationships with the tech giants leading the race for generative AI. Microsoft is a significant investor in OpenAI and uses the latter's technology to enhance its software and search businesses. Alphabet's Google holds a sizable stake in Anthropic; on May 23 the startup announced the completion of a $450 million funding round that included further investment from the tech giant. These economic ties are reinforced by the young companies' reliance on big tech's cloud-computing platforms to train their models on vast amounts of data, which is what enables the chatbots to behave like human interlocutors.


Like the startups, Microsoft and Google are eager to show that they take safety seriously, even as they battle each other fiercely in the chatbot race. They, too, argue that new rules are needed and that international coordination in overseeing LLMs is essential. As Sundar Pichai, Alphabet's chief executive, has put it, "AI is too important not to regulate, and too important not to regulate well."


Given the risks of disinformation, election manipulation, terrorism, job disruption, and other potential problems that ever more powerful AI models may create, such overtures may be entirely warranted. It is worth remembering, however, that regulation also benefits the world's largest tech companies. It tends to raise barriers to entry and impose costs that incumbents find easiest to bear, which helps entrench existing market arrangements.


This matters. If big tech uses regulation to cement its position at the top of generative AI, there is a cost. The giants are more likely to deploy the technology to improve their existing products than to replace them outright. They will seek to protect their core businesses, enterprise software in Microsoft's case and search in Google's. Rather than ushering in an era of Schumpeterian creative destruction, the episode will serve as a reminder that large incumbents now control the innovation process, what some call "creative accumulation". The technology may end up less revolutionary than it could be.


LLaMA is at large.

Such an outcome is not preordained. One wild card is open-source AI, which has gained momentum since March, when LLaMA, the LLM built by Meta, leaked online. The talk in Silicon Valley is that open-source developers can build generative-AI models that are nearly as good as the existing proprietary ones at a fraction of the cost.


Mr. Clark of Anthropic calls open-source AI a "very troubling concept." Although it is a useful way to accelerate innovation, it is also inherently hard to control, whether in the hands of a hostile state or a 17-year-old ransomware-maker. These tensions will be worked out as regulators around the world grapple with generative AI. Microsoft and Google, and by extension their startup charges, have far deeper pockets than open-source developers to handle whatever the authorities come up with. They also have more at stake in preserving the information-technology order that has made them titans. For once, the demand for safety and the desire for wealth may coincide.


