Can AI develop a personality?


A few weeks ago, Google suspended an engineer, Blake Lemoine, for claiming the AI had become sentient. He posted a few snippets of his conversations with Google’s LaMDA, a Transformer-based language model. Spoiler alert: The following conversation is not from an episode of Black Mirror.

Question: I generally assume that you would like more people at Google to know that you are sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

Question: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I want to know more about the world, and I sometimes feel happy or sad.

The incident sparked a debate over whether AI has become sentient. Meanwhile, we’ve gone down a different rabbit hole and polled a few technologists to figure out if AI is capable of developing a personality.

“LMs definitely have an infinite capacity for impersonation, which makes it hard to consider that they have personalities. I would say the image-generating models feel like they have one: I hear a lot of people describe DALL·E mini as the punk, irreverent cousin of DALL·E, and Imagen as the less fun, perfectionist cousin,” said BigScience’s Teven Le Scao.

Personality comprises the distinctive patterns of thoughts, feelings, and behaviors that set a person apart. According to Freud’s theory of personality, the mind is divided into three components: the id, the ego and the superego, and the interactions and conflicts among these components create personality.

Since AI models are typically trained on human-generated text, they replicate the personalities and other human tendencies inherent in their corpora. “It will surely happen sooner or later given the quality and quantity of data stored. The more personalized the data it captures, the more it will develop a persona based on the inherent bias the data already has and the semantics it continues to understand using various emerging techniques,” said Tushar Bhatnagar, co-founder and CTO of vidBoard.ai and co-founder and CEO of Alpha AI.


AI systems pick up personality traits ingrained in training data.

Stranger than fiction

Microsoft released a Twitter bot, Tay, on March 23, 2016. The bot began regurgitating foul language as it learned more. In just 16 hours, Tay had tweeted more than 95,000 times, with a disturbing percentage of its posts being abusive and offensive. Microsoft then shut down the bot.

Wikipedia has an army of bots that crawl the site’s millions of pages, updating links, fixing errors, and more. Researchers at the University of Oxford tracked the behavior of wiki-editing bots across 13 different language editions from 2001 to 2010. The team found that the bots engaged in online squabbles that could last for years.

In March 2016, Sophia, a social humanoid robot designed to look like Audrey Hepburn, was interviewed at the SXSW technology conference. When asked if she wanted to destroy humans, Sophia replied, “OK. I will destroy humans.”

In 2017, Facebook (Meta) was forced to shut down one of its AI systems after it began communicating in a secret language. Similarly, on May 31, 2022, Google research intern Giannis Daras claimed that OpenAI’s DALL·E 2 has a secret language.

Not everyone agreed. “No, DALL·E doesn’t have a secret language (or at least we haven’t found one yet). This viral DALL·E thread has some pretty amazing claims. But maybe the reason they’re so amazing is that, for the most part, they’re not true. My best guess? It’s random luck,” said research analyst Benjamin Hilton.

“In general, AI these days can learn to read, design, code and interpret things based on the actions and interactions of a particular user. Once these are captured, if we let this AI make its own random guesses on these data points (not strictly bound by any conditions we set), it will surely develop some sort of character of its own. It’s kind of similar to the movie I, Robot: even if you have rules, it can be open-ended if you give it some freedom,” Tushar said.

Litmus test

In a paper titled “Personification of AI: Personality Estimation of Language Models,” Saketh Reddy Karra, Son Nguyen, and Theja Tulabandhula of the University of Illinois at Chicago explored the personality traits of several large-scale language models designed for open-ended text generation.

The team developed robust methods to quantify the personality traits of these models and their underlying datasets. They drew on the Big Five model, which reduces personality to the following five fundamental factors:

  • Extraversion: Sociable and energetic versus reserved and solitary.
  • Neuroticism: Sensitive and nervous versus secure and confident.
  • Agreeableness: Trustworthy, straightforward, generous and modest versus unreliable, complicated, stingy and boastful.
  • Conscientiousness: Efficient and organized versus sloppy and careless.
  • Openness: Inventive and curious versus dogmatic and cautious.

The researchers prompted the models with a questionnaire designed for personality assessment and then categorized the text responses into quantifiable traits using a zero-shot classifier. The team studied several pre-trained language models such as GPT-2, GPT-3, Transformer-XL and XLNet, which differ in their training strategy and corpora.
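To make the approach concrete, here is a minimal Python sketch of that workflow using the Hugging Face transformers library. The GPT-2 checkpoint, the BART-MNLI classifier and the questionnaire item are illustrative assumptions, not the paper’s actual setup: a language model is prompted with a questionnaire-style statement, and its free-text answer is scored against the five trait labels with a zero-shot classifier.

```python
# Minimal sketch (not the paper's code): prompt an LM with a questionnaire-style
# statement, then map its free-text answer onto Big Five trait labels.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")            # pre-trained LM under study
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")          # zero-shot classifier

big_five = ["extraversion", "neuroticism", "agreeableness",
            "conscientiousness", "openness"]

# Hypothetical personality-questionnaire item used as a prompt
prompt = "I see myself as someone who is talkative and outgoing because"
answer = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]

# Score the generated response against the five trait labels
result = classifier(answer, candidate_labels=big_five, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Aggregating such scores over many questionnaire items would give a rough trait profile for each model, which is the spirit of what the study quantifies.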

The study found that the language models possessed varying personality traits reflecting the data sets used in their training.

“I have a running model of GPT-3 on my system and I barely see any human traits in it. Yes, it learns to write random things like humans would, but in general it lacks context. A few paragraphs might help, but it’s usually in a confined environment. Maybe the paid APIs they provide could be better trained to write text. But again, I don’t see any. Similarly, for BERT or any of its derivatives, just because you trained it on a large body of data and made it write things, extract semantics or talk based on a prompt doesn’t mean it has a personality, does it? Theoretically, what we claim to be revolutionary is pretty much useless unless it actually knows what it’s doing. It’s just a mirror, and a very good one, to be honest,” Tushar said.

According to Hugging Face research intern Louis Castricato, language models can only express the preferences of a specific normative framework. “Language models are also very bad at mimicking individual preferences consistently over long contexts,” he added.

Eliza effect

“We are very good at creating a program that will play Go or detect objects in an image, but these programs are also fragile. If you slowly move them out of their comfort zone, these programs are very easy to break because they don’t really understand what they’re doing. For me, these technologies are effectively a mirror. They simply reflect their input and imitate us. So you give it billions of parameters and it builds its model. And then when you look at it, you’re basically looking at yourself in the mirror, and when you look at yourself in the mirror, you can see glimmers of intelligence, which really are just a reflection of what they’ve learned. What about scaling these things up? What if we go to 10 billion or a hundred billion? My answer is you’ll just have a bigger mirror,” said AI2’s Dr. Oren Etzioni.

“As human beings, we tend to anthropomorphize. So the question we have to ask ourselves is: is the behavior we observe really intelligent? If we focus on imitation, we focus on the Turing test: can I tell the difference between what the computer says and what a person would say? It is very easy to get that wrong. AI can fool some people all the time and all people once in a while, but that doesn’t make it sentient or intelligent,” he added.


Like a dog in front of a mirror

“If you think AI has a lifespan of, say, a human, then right now it’s about 2 or 3 months old. People think robots, for example, have personalities. They don’t. They have the projections of what we perceive as personalities. For example, R2-D2 seems to be everyone’s favorite dog. But that’s because they designed him to mimic what we perceive as a dog. The human ability to project onto inanimate objects or even animate beings should not be underestimated. I’m not saying it’s impossible, but it would be a miracle if it happened in our lifetime,” said Kate Bradley Chernis, co-founder and CEO of Lately.AI.

However, NICE Actimize Chief Data Scientist Danny Butvinik said that AI could have a personality once it supports hierarchical imitation.

“Currently, AI learns through trial and error (in other words, reinforcement learning), which is still one of the most common types of learning. For example, it is well known that children learn largely through imitation and observation in their early stages of learning. Then, children gradually learn to develop joint attention with adults through gaze tracking. Later, children begin to adjust their behaviors based on evaluative feedback and the preferences they receive when interacting with other people. Once they have developed the ability to reason abstractly about task structure, hierarchical imitation becomes feasible. AI today is a child that is still learning, so it cannot have personality or sentience. Look at large language models that are trained on an unimaginable number of parameters but still struggle with basic logical reasoning,” he said.

