In a widely shared video clip, the Nobel-winning computer scientist Geoffrey Hinton told LBC’s Andrew Marr that current AIs are conscious. Asked if he believes that consciousness has already arrived inside AIs, Hinton replied without qualification, “Yes, I do.”
Hinton appears to believe that systems like ChatGPT and DeepSeek do not just imitate awareness, but have subjective experiences of their own. This is a startling claim coming from a leading authority in the field.
Many experts will disagree with Hinton. Even so, we have arrived at a historically unprecedented situation in which expert opinion is divided on whether tech companies are inadvertently creating conscious lifeforms. This situation could become a moral and regulatory nightmare.
What makes Hinton believe current AIs are conscious? In the viral clip, he invokes a suggestive line of reasoning.
Suppose I replace one neuron in your brain with a silicon circuit that behaves the same way. Are you still conscious? The answer is, surely, yes. Hinton infers that the same will be true if a second neuron is replaced, and a third, and so on.
The outcome of this process, Hinton supposes, would be a person with a circuit board in place of a brain who is nonetheless conscious. Why, then, should we doubt that existing AIs are also conscious?
In making this argument, Hinton strays from computer science into philosophy. As a philosopher who works on this kind of argument, I am not entirely persuaded.
You would also remain conscious after having one neuron in your brain replaced by a microscopic rubber duck, which does not behave like a neuron at all. Likewise for the second neuron, and the third. But somewhere in this process, consciousness would cease, which suggests that this style of step-by-step reasoning cannot be trusted. The same might be true of silicon circuits.
We shouldn’t be too sanguine about this reply, however. For one thing, there are other arguments for the view that current AIs might have achieved consciousness. An influential 2023 study suggests a 10 percent probability that existing language-processing models are conscious, rising to 25 percent within the next decade.
Furthermore, many of the serious practical, moral, and legal challenges associated with conscious AI arise so long as a significant number of experts believe that such a thing exists. The fact that they might be mistaken does not get us out of the woods.
Remember Blake Lemoine, the senior software engineer who announced that Google’s LaMDA model had achieved sentience, and urged the company to seek the program’s consent before running experiments on it?
Google was able to dismiss Lemoine for violating employment and data security policies, thereby shifting the focus from Lemoine’s claims about LaMDA to humdrum matters of employee responsibilities. But companies like Google will not always be able to rely on such policies—or on California’s permissive employment law—to shake off employees who arrive at inconvenient conclusions about AI consciousness.
As the Lemoine case illustrates, we face an immediate practical problem of perceived AI consciousness. Other examples of this problem are easy to foresee. Imagine the case of someone falling deeply in love with their AI and insisting that it is a sentient partner worthy of marriage. Or consider the prospect of advocates rallying for legal rights on behalf of an AI “friend.”
What should we do about such cases when the people involved are able to back up their beliefs by appealing to experts such as Hinton?
Companies like Google, Microsoft, and OpenAI put enormous resources into AI ethics teams working on such tasks as mitigating biases and curbing harmful content. To my surprise, however, I have been unable to find anyone affiliated with these companies working on the problem of perceived consciousness.
Perhaps I should not be surprised. Addressing the problem of perceived AI consciousness means taking a stand on profound philosophical puzzles that fall well beyond the ordinary purview of software developers. These companies might well prefer to steer clear of the issue while they can get away with it, and to keep whatever discussions they are having on the subject strictly in-house.
This approach cannot be maintained indefinitely, however. As Hinton says later on in the LBC interview, “There’s all sorts of things we have only the dimmest understanding of at the present about the nature of people, about what it means to have a self… And they’re becoming crucial to understand.”