The latest hysteria about AI comes in the form of a one-sentence letter signed by a large number of "Notable Personalities" and "AI Scientists" (the letter has a helpful checkbox interface that sorts the two groups for you). The single sentence reads:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Who can really argue with that? Setting aside that many of the "Notable Personalities" are not really AI experts at all, the statement is a truism: nobody likes extinction unless, of course, it involves mosquitoes. The most noticeable thing about the letter is that it acknowledges that AI is not going away. "Mitigating" means "reducing," as in, the risk will now remain with us. It has been so with many profound scientific advancements. The atomic bomb comes to mind. Will schoolchildren soon be required to go into AI lockdown drills?
Nothing is stopping or even slowing the runaway AI train. Every business is realizing an imperative to get on it or be crushed. Many have been on it for some time, without their boards even knowing it. Suddenly, however, AI is available to the masses through popular writing and graphics interfaces, which means the media has learned about it. Creating avatars of yourself as a superhero will not result in human extinction, of course. The extinction-level ideas are still being worked on, feverishly so, by "AI Scientists," many of whom probably signed the letter above.
Is it the scientists' responsibility to self-moderate? Discover less? Put the brakes on AI research? If so, who goes first?
In fact, as reported by Forbes, Lisa Su, the CEO of AMD, one of the world's largest chip makers, has banked the company's future on AI. “If you look out five years,” she says, “you will see AI in every single product at AMD, and it will be the largest growth driver.”
So the business imperative is AI. A firehose of it. All of the time. Concomitantly, scientists at universities who depend on grant funding, and who perhaps hope to create lucrative patents or land consultancies, will urgently be working on improved AI. None of them will be remotely concerned about causing extinction with their research. That risk, they will reason, will always be someone else's responsibility.
AI-related majors will crop up everywhere. You thought MIT was popular before? Just wait. The end result will be millions of new AI-trained grads hungry to feed the ever-growing machine.
So, if we cannot rely on the innate restraint of the "scientists" or "the markets," then how exactly will we "mitigate the risk of extinction"? The "R" word comes to mind. "Regulation," that is. And it turns out that numerous legislative and regulatory efforts are creaking to life to meet the new reality. To name a few:
1. Senator Chuck Schumer has created a bipartisan AI working group to review the need for AI legislation. The regulated community continues to urge a light touch.
2. Showing that all politics is self-interested, one representative has proposed a bill to combat the use of deepfakes in political advertising.
3. Making a comeback is the Algorithmic Accountability Act of 2022 (H.R. 6580, 117th Congress), available at Congress.gov.
4. Also, there is a proposed AI Accountability Act embedded in the Energy and Commerce bill (AI_Accountability_Act_76222e1fe1.pdf, hosted at d1dth6e84htgma.cloudfront.net).
5. There will be a generative AI “hackathon” at The Generative AI Conference, which seems important (and a bit terrifying).
6. And NIST, the National Institute of Standards and Technology, is continually evaluating and refining its thinking on responsible AI management.
So, thinking is evolving in the U.S. We have a long way to go, as does AI, because the genie is not going back in the bottle.