
How to govern AI is a topic on which spies, scientists, defense officials, and tech entrepreneurs cannot agree. We're rushing headlong toward a cliff


The question of regulation has become urgent, and divisive, as billions of dollars continue to be poured into developing increasingly powerful AI systems.


That much was made abundantly clear at the CogX Festival, a major AI conference taking place in London this week.


Stuart Russell, a prominent British computer scientist who has studied artificial intelligence for decades, said at the event on Tuesday that Big Tech and the wider technology sector had held off regulation for so long that officials were now "running around like headless chickens" trying to work out how to rein in AI.




"For the past decade or so, it has been as if aliens have been emailing us from space, warning that they will arrive in 10 or 20 years," he said. "In response, we have been sending back an out-of-office auto-reply saying we'll get back to you when we return."


Russell, a computer science professor at the University of California, Berkeley, who has written numerous books on artificial intelligence, urged governments to act now to ensure that unsafe AI systems are never allowed to "cause havoc on our species."


"Would you board a plane that no one had ever tested to make sure it was safe?" he asked the audience on Tuesday. "Even sandwiches are far more tightly regulated than AI systems."


The impact of Big Tech

Russell contended that Big Tech's influence was partly to blame for the world being so far behind on AI legislation.


"For decades and decades, governments have been told that regulation stifles innovation, and that has lulled them into inaction. I believe tech corporations have placed something in their earpieces that repeats it every night when they go to bed," he said. "The technical community needs to stop taking a defensive posture. Thinking about the potential drawbacks of AI is not anti-science. Saying that nuclear power plants could explode if not properly maintained is not anti-physics."


While tech companies have opposed regulatory proposals in the past, many have since supported the introduction of legislation to limit the use of AI, though there have been calls for regulation to be "balanced" so that it does not stifle innovation. Over the summer, Amazon, Google, Microsoft, and Facebook's parent company Meta agreed to abide by a set of voluntary AI safeguards negotiated by the Biden administration.


Self-regulation is important but not sufficient, Google said in a document released earlier this year. "As our CEO Sundar Pichai has said, AI is too important not to regulate. The hard part is doing so in a way that is proportionate to the risks while still encouraging innovation."


The rules Russell proposes would essentially be a set of red lines established by governments around the world. Those restrictions, he said, should prohibit "unallowable behaviors" such as self-replication and breaking into other computer systems.


Penalties for breaking these rules, Russell said, should include immediate removal of the offending AI system from the market and a fine equal to 10% of the developer's annual revenue.


That would give businesses a strong incentive to design their AI systems so they cannot carry out prohibited actions, he added. "It puts the organization in existential peril. If you cross the line, your business is at risk."


He noted that China has already established rules for generative AI tools such as ChatGPT, while other nations, including the United States and the United Kingdom, have "no red lines whatsoever."


Despite releasing its Blueprint for an AI Bill of Rights last October, Washington has yet to formally adopt any laws intended to regulate artificial intelligence.  


Britain's government, meanwhile, has published recommendations for a "pro-innovation" approach to regulating AI, but the country still lacks comprehensive legislation governing the technology.


"We can't afford to take 30 or 40 years, and we can't afford mistakes," said Russell, author of the 1995 book Artificial Intelligence: A Modern Approach. "Governments have let the tech industry do as it pleases for far too long; the time has come to address the issue."


Russell's view was echoed by the renowned historian Yuval Noah Harari, best-selling author of Sapiens: A Brief History of Humankind, who used the event in Britain's capital to warn about AI's potential for harm.


He advocated for a levy on significant AI investments to pay for regulation and organizations that can keep the technology in check.


Innovation versus regulation

Speaking at the CogX Festival from a military perspective, NATO official David van Weel agreed with Russell that more needed to be done to protect society from the potential harms of increasingly powerful AI systems.


"We should regulate swiftly; we are too slow to keep up with the technology," said van Weel, the military alliance's assistant secretary general for emerging security challenges.


But he added that it was challenging to create regulations for a young technology.


If we regulate AI based on its current state, van Weel said, the rules will be out of date in two years and will likely stifle innovation; regulators must keep pace with technological progress and be ready to adapt the rules accordingly.


Elon Musk, the CEO of Tesla, and Apple co-founder Steve Wozniak were among the more than 1,000 signatories of an open letter earlier this year calling for a six-month pause in the development of advanced artificial intelligence, arguing that decisions about the technology "must not be delegated to unelected tech leaders."


LinkedIn co-founder Reid Hoffman called that letter "foolish" and "anti-humanist" at the CogX Festival on Wednesday, arguing that the development of AI should in fact be accelerated.


Alex Younger, the former head of MI6, Britain's foreign intelligence service, warned that excessive regulation could have unintended consequences.


"Regulations are often out of date by the time they make it through the system," he told a crowd at the CogX Festival. "I can understand... I welcome the ethical approach being taken here, but if the end result is that innovation becomes too difficult, we will end up economically weaker and more dependent on China for our critical goods and services."


He added that Europe's domestic technology industry had suffered as a result of the continent's tougher stance on the tech sector.


"There are no large-scale technology businesses in Europe... There is a connection, I'm afraid," he remarked.


In an interview with Fortune on the sidelines of the conference, Roeland Decorte, the founder and CEO of Decorte Future Industries, a business that uses AI to extract health data from sound, said he was "scared the founder voice will get lost" as governments crack down on artificial intelligence.


The people actually building artificial intelligence are rarely included in these discussions, he said. "What we tend to see is the same group of large corporates, academics, politicians, and policymakers [discussing regulation]," he added.


"Like any powerful technology, AI can be misused. But to regulate it successfully, you need to go to the startups testing the first commercial applications, not only the corporations that scale technologies or the academics who pioneer them."


Decorte added that by the time a technology has been adopted by a large firm, it is already "too late": the public will have access to it and can use it "for bad or for good."


"If you actually believe, like Elon Musk, that AI is a threat to the future of the human race, then the answer is not necessarily regulation, because regulators will never have the knowledge to even know what's in a single algorithm," he said. "If your aim is to avoid the Terminator, the logical next step is to invest in AI that can be explained."


