Microsoft AI Chief Dismisses Consciousness Attribution to Neural Networks

Only biological entities possess consciousness. Developers and researchers should halt projects suggesting otherwise, asserted Mustafa Suleyman, head of Microsoft’s AI division, in a conversation with CNBC.

"I don't believe people should engage in such work. If you ask the wrong question, you get the wrong answer. I think this is a classic case," he stated at the AfroTech conference in Houston.

The Microsoft executive opposes the notion of creating conscious artificial intelligence, or of AI services that could supposedly experience suffering.

In August, Suleyman wrote an essay introducing the term "Seemingly Conscious AI" (SCAI). This form of artificial intelligence exhibits characteristics of rational beings and thus appears to possess consciousness. It simulates all traits of self-awareness but remains fundamentally empty inside.

"My imagined system is not genuinely conscious, yet it will so convincingly mimic human-like reasoning that it will be indistinguishable from claims you or I might make about our own thoughts," Suleyman writes.

Attribution of consciousness to AI is dangerous, according to the expert. It would amplify misconceptions, create new dependency problems, exploit our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing debates over rights, and lead to a colossal new categorical error for society.

In 2023, Suleyman authored The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, in which he explores at length the risks posed by AI and other emerging technologies.

The AI market is progressing towards AGI—artificial general intelligence capable of performing any task at a human level. In August, OpenAI's CEO Sam Altman said the term might be "not very useful." Models are evolving rapidly, and soon we will rely on them "more and more," he believes.

For Suleyman, it is crucial to draw a clear line between AI's growing intelligence and any supposed capacity to experience human emotions.

"Our physical experience of pain makes us very sad and feels awful, but AI does not feel sadness when it experiences 'pain,'" he remarked.

According to the expert, this is a vital distinction. In reality, AI merely constructs a narrative about itself and consciousness, an appearance of experience, without genuinely undergoing it.

"Technically, you know this, since we can see what the model does," the expert emphasized.

Suleyman's position echoes biological naturalism, a theory proposed by philosopher John Searle, which holds that consciousness depends on the processes of a living brain.

"The reason we grant rights to people today is that we do not want to harm them, because they suffer. They have pain and preferences that include avoiding it. These models lack that. It's merely simulation," Suleyman stated.

The executive opposes researching consciousness in AI because, in his view, AI does not possess it. He noted that Microsoft is creating services that understand they are artificial intelligence.

"In simple terms, we are developing AI that always works for the benefit of humans," he pointed out.

Notably, in October experts from Anthropic reported that leading models can exhibit a form of "introspective self-awareness": they can recognize and describe their own internal "thoughts" and, in some cases, even control them.