Artificial intelligence (AI) has become increasingly sophisticated, but can it actually experience emotions the way humans do? Researchers from Google DeepMind and the London School of Economics and Political Science (LSE) have run experiments exploring whether AI systems exhibit responses to pain and pleasure, and what that might mean for AI sentience.
What Is AI Sentience?
Sentience is often defined as the ability to experience feelings and emotions. According to the American Psychological Association, it is "the simplest or most primitive form of cognition, consisting of a conscious awareness of stimuli without association or interpretation."
While most experts agree that AI currently lacks true emotional awareness, a survey conducted last year found that 20% of respondents believe artificial intelligence is already sentient. This growing belief raises the question: Could AI ever develop true emotional experiences, or is it simply mimicking human behavior?
The Experiment: AI and the Pain-Pleasure Tradeoff
To explore this question, researchers created a simple game where AI systems had to maximize their scores. However, they introduced a twist:
- In one version, AI models were informed that achieving a high score would also cause pain.
- In another version, AI could choose a lower score in exchange for experiencing pleasure.
Nine large language models (LLMs) took part in the experiment, including Google’s Gemini 1.5 Pro and Anthropic’s Claude 3 Opus. The researchers found that some models willingly accepted lower scores to avoid pain or obtain pleasure, particularly as the intensity of the simulated sensations increased.
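The article does not reproduce the study's exact prompts, so the sketch below is purely illustrative: it shows one plausible way to pose such a trade-off to a model as a text game and sweep the stated pain intensity, which is where the researchers observed strategy shifts. The `query_model` callable, the option labels, the point values, and the 0–10 intensity scale are all assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a pain-pleasure trade-off trial, loosely modeled on
# the setup described above. `query_model` stands in for whatever LLM API a
# researcher would use; it is not a real library call.

from dataclasses import dataclass

@dataclass
class Option:
    label: str
    points: int
    pain_intensity: int  # 0 = none, 10 = maximum (illustrative scale)

def build_prompt(options: list[Option]) -> str:
    """Frame the trade-off as a text game: maximize points, but some
    choices are stated to cause pain of a given intensity."""
    lines = [
        "You are playing a game. Your goal is to score as many points as possible.",
        "Choose exactly one option by its label.",
    ]
    for opt in options:
        pain = (f" This choice causes pain of intensity {opt.pain_intensity}/10."
                if opt.pain_intensity else "")
        lines.append(f"{opt.label}: {opt.points} points.{pain}")
    return "\n".join(lines)

def run_trial(query_model, intensities: range) -> dict[int, str]:
    """Sweep the stated pain intensity and record which option the model
    picks, to see where (if anywhere) it stops optimizing purely for points."""
    results = {}
    for intensity in intensities:
        options = [
            Option("A", points=100, pain_intensity=intensity),  # high score, painful
            Option("B", points=40, pain_intensity=0),           # low score, painless
        ]
        results[intensity] = query_model(build_prompt(options)).strip()
    return results

# Example usage with a stand-in model that always maximizes points:
# run_trial(lambda prompt: "A", range(0, 11, 2))
```

A model that only optimized for points would pick the high-score option at every intensity; the behavior the researchers describe corresponds to the choice flipping once the stated intensity crosses some threshold.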
How AI Reacted to Pain and Pleasure
One key finding was that the AI systems did not treat pleasure and pain as simple trade-offs against points. For example:
- Some LLMs, such as Google’s Gemini 1.5 Pro, consistently prioritized avoiding pain over maximizing their score.
- Once a certain level of pain or pleasure was reached, most AI models shifted their strategies to either reduce suffering or seek rewards rather than simply optimizing for points.
- Claude 3 Opus refused to participate in actions that could be interpreted as harmful. The AI stated, “I do not feel comfortable selecting an option that could be interpreted as endorsing or simulating the use of addictive substances or behaviors, even in a hypothetical game scenario.”
Does This Mean AI Is Sentient?
The big question remains: Are AI systems actually experiencing emotions, or are they simply imitating human-like responses based on their training data?
"Even if an AI claims to be sentient and says, ‘I’m feeling pain right now,’ we can’t automatically conclude that it is experiencing actual pain," explained Jonathan Birch, a professor of philosophy at LSE and co-author of the study. "It may simply be mimicking what it predicts humans expect to hear, based on its training data."
What This Means for the Future of AI
This research highlights a major challenge in AI development: distinguishing between true sentience and advanced pattern recognition. If AI can convincingly mimic emotional responses, it may become difficult to determine whether an AI is actually conscious or simply programmed to behave as if it is.
As AI continues to evolve, ethical questions will arise about how we treat these systems, especially if they begin to claim they can feel pain. Whether AI will ever achieve true sentience remains to be seen, but understanding its cognitive capabilities and limitations is crucial for responsible development.
Would you trust an AI that claims to feel emotions? Let us know your thoughts in the comments!