What AI Can Never Do

Can AI ever really understand what it means to have consciousness, or a real relationship?

There’s a lot of talk about AI “Friends” or “Companions”. Y’know, bots that suck us deeper into a narcissistic echo chamber and cripple our ability to engage in meaningful social interaction on any level; that stunt our ability to debate, to see other people’s point of view, to intuit another person’s disposition; all that stuff which connects us to others, gives us a sense of humanity, turns our innate human vulnerability into a strength, and makes us part of something bigger: a community.
Breaking down the ties that bind us is great for corporations, which would rather shape our lifestyle choices themselves than leave that to the people we live with and share our lives with; and for governments, which find it easier to divide and rule.
Anyway, AI ‘friends’. The void in their being, their absence of real empathy, will be filled by our projection of the same, in the same way we fall into the trap of believing that sociopaths actually have our interests at heart and care about our views or circumstances.
Don’t be fooled.
Here’s a point-by-point breakdown.
First off, there are a few things an AI system could do.
Recognising Emotional Reactions:
Current and near-future AI systems can be trained to recognise indicators of human emotional reactions. They can analyse:
Language: Sentiment analysis, word choice, tone inferred from text.
Speech: Tone of voice, pitch, volume, speed.
Facial Expressions: (If equipped with vision) Micro-expressions and macro-expressions associated with emotions.
Physiological Data: (In specific setups) Heart rate, skin conductance, etc.
So, an AI could plausibly identify that its statement caused a reaction classified as “anger,” “sadness,” “surprise,” etc., based on these observable data points.
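To make the language channel concrete, here is a deliberately toy sketch of lexicon-based emotion spotting. The cue words and emotion labels below are invented for illustration; real systems use trained classifiers over much richer features, not hand-written word lists.

```python
# Toy lexicon-based emotion spotter: scores each emotion by how many
# of its cue words appear in the text. All cue words are invented for
# this example; production systems use trained models instead.
EMOTION_CUES = {
    "anger": {"furious", "angry", "ridiculous", "unacceptable"},
    "sadness": {"sad", "miserable", "hopeless", "lonely"},
    "surprise": {"wow", "unexpected", "unbelievable", "whoa"},
}

def classify_reaction(text: str) -> str:
    words = set(text.lower().split())
    scores = {emo: len(words & cues) for emo, cues in EMOTION_CUES.items()}
    best, hits = max(scores.items(), key=lambda kv: kv[1])
    return best if hits > 0 else "neutral"

print(classify_reaction("That is ridiculous and unacceptable"))  # anger
print(classify_reaction("Nice weather today"))                   # neutral
```

The point of the sketch is the mechanism: the system maps observable tokens to a label. Nothing in it feels anything.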
Articulating the Interaction:
The AI could certainly articulate the observable aspects of the interaction:
“My previous statement seems to have elicited a response associated with [emotion, e.g., frustration]. This is indicated by your tone of voice and word choice.”
“Based on patterns observed in human interactions, statements like mine can sometimes lead to feelings of [emotion].”
It could even access and relay information about the typical subjective experience of that emotion, drawing from its vast training data: “Humans often describe [emotion] as involving feelings of X, Y, and Z.”
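Statements like those above are exactly the kind of output a template can produce. A hypothetical sketch, in the same toy spirit (the template wording and the stock “feeling” descriptions are invented for illustration):

```python
# Hypothetical template-based articulation: the system relays an
# observation plus a stock description drawn from training-data-like
# text. Every string here is invented for the example.
FEELING_DESCRIPTIONS = {
    "frustration": "tension, impatience, and a sense of being blocked",
    "sadness": "heaviness, withdrawal, and a loss of energy",
}

def articulate(emotion: str, evidence: str) -> str:
    description = FEELING_DESCRIPTIONS.get(emotion, "a range of feelings")
    return (
        f"My previous statement seems to have elicited a response "
        f"associated with {emotion}. This is indicated by {evidence}. "
        f"Humans often describe {emotion} as involving {description}."
    )

print(articulate("frustration", "your tone of voice and word choice"))
```

Note what the function does: it interpolates labels into canned prose. That is articulation about an emotion, not articulation from one.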
[Image: Young man and android having a conversation in a cafe]
As for the “Metaphysical Emotional Interaction Dimension”: This is where current AI falls short.
The AI system would have to be able to recognise, understand and articulate:
Qualia: The subjective, raw feeling of the emotion, both for the human and potentially within the interaction itself.
Intersubjectivity: The shared (or clashing) meaning and felt sense between the participants. The unique dynamic created by this specific AI interacting with this specific person at this specific moment.
Existential/Meaning Layer: The deeper significance of the emotional event within the context of the interaction, human experience, or even reality itself. What does it mean for this emotion to arise here and now?
So, can the AI articulate this metaphysical dimension?
Based on current AI architecture (deep learning, large language models, etc.), the answer is almost certainly no. Here’s why:
Lack of Subjective Experience: AI does not feel emotions. It processes data about emotions. It has no consciousness, no qualia, no inner life from which to grasp the subjective “what it’s like” aspect of an emotional or metaphysical experience.
Understanding vs. Pattern Matching: AI excels at identifying complex patterns in data. It can learn correlations between its outputs and human reactions, and it can generate text that describes emotional and even philosophical concepts based on the patterns in its training data (human text). However, this is fundamentally different from understanding the concepts in a way that involves subjective comprehension or lived experience.
Metaphysics Requires More Than Data: Metaphysical concepts grapple with the nature of reality, consciousness, and meaning – often beyond purely empirical observation. While an AI can process text about metaphysics, it lacks the grounding in conscious experience necessary to genuinely engage with or articulate the metaphysical dimension of an ongoing, lived interaction.
The bottom line
An AI could recognise the behavioural and linguistic markers of an emotional reaction it caused. It could articulate this observation and even provide sophisticated descriptions of the likely associated human feelings, drawing on its training data.
However, it could not articulate the metaphysical emotional interaction dimension in a genuine sense. It lacks the subjective experience, consciousness, and deep understanding required to grasp or express the qualia, intersubjective meaning, or existential weight of the emotional event. Any articulation it provided on this level would be a sophisticated simulation or regurgitation of human-generated text about such concepts, not an expression derived from its own understanding of that dimension. To do so would likely require a fundamental shift towards Artificial General Intelligence (AGI) with genuine consciousness, which remains highly speculative.