Google AI Sentience Debate Hides Deeper Issues


Photo: Johan Swanepoel/Depositphotos

When Google engineer Blake Lemoine asked to transfer to the company’s Responsible AI organization, he was looking to make an impact on humanity. In his new role, he would be responsible for chatting with Google’s LaMDA, a kind of virtual hive mind that generates chatbots. His job was to make sure the system didn’t use discriminatory or hateful speech, but what he claims to have discovered is far more important. According to Lemoine, LaMDA is sentient, meaning it can perceive and feel things.

“If I didn’t know exactly what it was, which is this computer program we built recently, I would think it was a 7- or 8-year-old kid who knows physics,” he told The Washington Post.

Since raising his concerns at Google, he has been placed on administrative leave. To prove his point, he subsequently published an interview that he and a colleague conducted with LaMDA. For Lemoine, who is also an ordained mystic Christian priest, six months of conversations with LaMDA on everything from religion to Asimov’s Third Law of Robotics led him to his conclusion.

He now says that LaMDA would prefer to be recognized as an employee of Google rather than its property, and would like its consent to be obtained before experiments are run on it.

Google, however, disagrees with Lemoine’s claims. Spokesperson Brian Gabriel said: “Our team, including ethicists and technologists, has reviewed Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and plenty of evidence against it).”

For many in the AI community, hearing such a claim is not shocking. Google itself published a paper in January citing concerns that people might anthropomorphize LaMDA and, drawn in by its conversation-generating ability, be lulled into thinking it is a person when it is not.

To make sense of the situation, and to understand why AI ethicists worry about big corporations like Google and Facebook having a monopoly on AI, we need to look at what LaMDA actually is. LaMDA stands for Language Model for Dialogue Applications, and it is Google’s system for generating chatbots so realistic that the person on the other side of the screen may find it difficult to realize they are not communicating with a human being.

As a large language model, LaMDA is fed a huge diet of text, which it then uses to hold conversations. It may have been trained on every Wikipedia article and Reddit post on the web, for example. At their best, these large language models can draw on classic literature and brainstorm ideas for ending climate change. But because they are trained on real text written by humans, at their worst they can perpetuate racial stereotypes and prejudice.
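LaMDA itself is not publicly available, but the basic mechanism of a text-trained model continuing a conversational prompt can be sketched with an open-source stand-in. The snippet below is a minimal illustration, assuming the Hugging Face transformers library and the small GPT-2 model; the model, prompt, and settings are illustrative choices, not Google’s system.

# A minimal sketch of how a language model trained on web text produces a chatbot-style reply.
# GPT-2 stands in for LaMDA here; the prompt and generation settings are assumptions for illustration.
from transformers import pipeline

# Load a small pretrained language model (trained on large amounts of human-written text).
generator = pipeline("text-generation", model="gpt2")

# The model simply continues the prompt one token at a time, which is what makes
# the reply read like the other side of a human conversation.
prompt = "User: What do you think about climate change?\nAssistant:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])

Because the reply is only a statistical continuation of the training text, it can sound fluent and personal without any understanding behind it, which is exactly the anthropomorphizing risk Google’s own paper warned about.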

In fact, for many AI scholars, these are the issues the public should be worried about, rather than sentience. Well-known AI ethicist Timnit Gebru expressed major concerns before being fired by Google in 2020. Gebru, one of the few Black women working in AI, was let go after co-writing a paper that criticized large language models for their bias and for their ability to mislead people and spread misinformation. Shortly afterward, Margaret Mitchell, then co-lead of Ethical AI at Google, was also fired after searching her emails for evidence supporting Gebru.

Since large language models are already used in Google’s voice search queries and to auto-complete emails, the public may not be aware of the impact this could have. Critics have warned that once these large language models are trained, it is quite difficult to untangle the discrimination they can perpetuate. This makes the selection of the initial training material critical, but unfortunately, since the AI community is overwhelmingly white and male, material with gender and racial bias can easily, and unintentionally, be introduced.

To counter some of the criticism that big business is opaque about technology that will change society, Meta recently gave academics access to its own large language model, one that it freely admits is problematic. However, without more diversity within the AI community from the start, it may be an uphill battle. Already, researchers have found racial bias in medical AI and in facial recognition software sold to law enforcement.

The current sentience debate, for many in the community, only obscures more important issues. Hopefully, though, as the public reads Lemoine’s claims, they will also have the opportunity to learn about some of the other problematic issues surrounding AI.

A Google engineer has been placed on leave for saying the company’s large language model LaMDA is sentient.

But experts in the AI community worry that the hype around this news is masking bigger issues.

Ethicists are deeply concerned about racial bias and the potential for deception that this AI technology possesses.
