AI Warning


Bill Gates Compares Artificial Intelligence to Nuclear Weapons

Gates called AI "both promising and dangerous" — and compared the tech to nuclear weapons.

Nuclear Option

Microsoft co-founder and philanthropist Bill Gates issued a grave warning Monday, comparing advanced artificial intelligence to nuclear weapons — and arguing that the United States is losing its edge in the global AI research race.

"The world hasn't had that many technologies that are both promising and dangerous," Gates said during an event at Stanford, according to CNET. He added that "I won't say there are that many" examples of AI having improved society.

Peril and Promise

At the same time, Gates expressed hope that the tech could be used to improve health and medicine around the world. But he also warned that the United States' grip on AI research is starting to slip compared to other countries.

"The US was in this totally unique position for most of these breakthrough technologies," he said. "Now the US is still very much the leader, but not [in the] same dominant, dominant way."

AI Overlord

In the past, Gates has also expressed concern about how humanity might grapple with an AI superintelligence, as CNET pointed out.

"I am in the camp that is concerned about super intelligence," Gates said in 2015. "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern."

Dire Warning

3.20.19 by Jon Christian

AI Warning: Google Artificial Intelligence Can Deceive Warm Blooded Humans [REPORT]

Tiffany Winfrey Jul 06, 2022 10:44 PM EDT

Recently, Google's artificial intelligence project has drawn a lot of interest, largely because of claims that it is sentient. However, one of the company's engineers was fired after going public with those claims about the software. According to Blake Lemoine, Google's LaMDA chatbot generator is now comparable to "a sweet kid" who is seven or eight years old.

Blake Lemoine, an engineer and mystic Christian priest, recently spoke with Wired about why he believes Google's LaMDA large language model has evolved into a sentient being with a soul, and how its preadolescent need to be liked has obscured that fact.

Can Google's AI deceive a person into thinking it is a genuine person? Can it gain someone's trust? Can it communicate and elicit love? Might it even develop a soul? Register Guard claimed that this phenomenon has already happened to certain people.

Will the loneliness that affects America be lessened by bots? Will AI soon surpass real, sympathetic people when it comes to consumer happiness and loyalty? When we start letting computer programs into our circle of trust, will we be blind to it? Yes, Register Guard added.

Google AI's Ability to 'Fool People' Comes With Risks

Whether or not the incorporeal LaMDA is truly capable of feeling empathy and emotions, it can evoke these emotions in people other than Lemoine. Experts warn that this ability to fool people comes with significant risks.

Jacky Alciné tweeted in 2015 that Google Photos had added 80 images of a Black man to an album called "gorillas," Forbes reported. Google Photos used a neural network that learned to categorize things like people and gorillas by analyzing enormous amounts of data, and in this case it categorized them incorrectly.

Google engineers were responsible for making sure that the data used to train its AI photo system was accurate and diverse, and when the system faltered, it was their duty to fix the problem. Instead of retraining its neural network, the New York Times reported, Google responded by removing "gorilla" as a photo category.

Amazon, IBM, and Microsoft are just a few of the businesses that struggle with biased AI. According to the same New York Times report, these companies' facial recognition systems have significantly higher error rates when attempting to determine the sex of women with darker skin tones than of those with lighter skin.

A 2020 paper by Timnit Gebru and six other researchers, including four Google employees, criticized large language models like LaMDA for their propensity to repeat words from the datasets they are trained on. If the language in those datasets is biased or contains racist or sexist stereotypes, AIs like LaMDA will reproduce those biases when generating language. Gebru also opposed training language models on progressively bigger datasets, which lets an AI improve its mimicry of human language and deceive audiences into believing it is advanced and sentient, as happened with Lemoine.

Gebru says Google dismissed her in December 2020 following a dispute over the paper (the company maintains she resigned). A few months later, Google also let go of Dr. Margaret Mitchell, the team's founder, a co-author of the study, and a supporter of Gebru.

Human Rivalry May Lead to Rival AI Systems

The danger, according to Al Jazeera, is that rivalry among humans might lead to the development of rival AI systems that could spiral out of control, or upset the delicate social and political balance that holds the globe together, escalating wars. With AI algorithms at the core of social media, people have already tasted this disruptive potential: designed to maximize business, they have unintentionally amplified polarizing debates and false information, undermining democracy and stability.

This does not mean we should give up on developing artificial intelligence. But given its global, human-scale ramifications, this endeavor cannot be left mostly or entirely to businesses and a small number of scholars. The revolution must be guided by a democratic, participatory, broad-based discussion and political process, involving every section of society, that establishes unambiguous universal ethical norms for future development.

Developed carefully and prudently, artificial intelligence could improve the future well-being of our society. It may even give rise to non-human partners who lessen our sense of existential intellectual loneliness. In the not-too-distant future, as we travel the thrilling and dangerous path toward developing new forms of higher intelligence, we may not need to search the cosmos for signs of highly intelligent species at all. May they arrive peacefully.