Consciousness is at once highly familiar and quintessentially alien.
My consciousness is ever present to me. Arguably it is the only thing to which I have direct access. Everything else must pass through its filters and lenses.
Your consciousness is foreign to me; it is a land I cannot visit. Your actions might appear motivated by desire, joy, or fear, but I cannot experience your feelings.
The first tool, functionalism, offers an antidote to these doubts. It is the philosophical expression of what programmers call duck typing: if something walks like a duck and quacks like a duck, it is a duck. If someone smiles, cries, plans, and talks, then they’re conscious.
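In Python, the home of duck typing, the idea looks something like this (a minimal sketch; the `Duck` and `RobotDuck` classes are invented for illustration):

```python
class Duck:
    def walk(self):
        return "waddles"

    def quack(self):
        return "quack"


class RobotDuck:
    def walk(self):
        return "waddles"

    def quack(self):
        return "quack"


def greet(animal):
    # Duck typing: no type check is performed.
    # Anything that walks and quacks is treated as a duck.
    print(animal.walk(), animal.quack())


greet(Duck())       # accepted
greet(RobotDuck())  # accepted just the same
```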
Functionalism in its strongest form says there’s no more to it than that. To be conscious is to act conscious. But this is not so. A holographic recording of me going about my day has no internal experiences; my sleeping self may have plenty.
More nuanced versions of functionalism attempt to address these concerns. For example, computational functionalism suggests that consciousness is not merely about external behavior but about the underlying processes that generate that behavior—specifically, the computation taking place within a system. In this view, it’s not just actions that matter but the specific functional roles played by internal states.
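The distinction is easy to make concrete (a hypothetical Python sketch: two systems with identical external behavior but different internal processes):

```python
def add_by_computation(a: int, b: int) -> int:
    # Internally performs the arithmetic.
    return a + b


# Precomputed answers for a small domain.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}


def add_by_lookup(a: int, b: int) -> int:
    # Internally just retrieves a stored answer.
    return ADD_TABLE[(a, b)]


# Externally indistinguishable on this domain...
assert add_by_computation(3, 4) == add_by_lookup(3, 4) == 7
# ...yet the internal states play different functional roles,
# which is the distinction computational functionalism draws.
```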
The second tool, abduction, or inference to the best explanation, is more robust and can be combined with computational functionalism. Given that other humans are physiologically similar to me, the best explanation of their complex behavior is that they are indeed conscious. This is not to say that human physiology is the only sort of thing that can sustain consciousness, but it is one such substrate.
A short digression:
When Ian McKellen shouts “Thou shalt not pass” and whacks his staff into the ground, the best explanation is not that he is a wizard. It is that he is pretending to be a wizard.
What are we to think of machines that behave as if they were conscious?
Our answer should consider both their behavior and their design and construction. Consider a machine programmed with a list of simple fixed action patterns: if told a certain joke, emit laugh-like noises; if presented with one sequence of symbols, reply with another. No matter how well it mimics humans, such a machine is not conscious.
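A toy version of such a machine makes the point vivid (a hypothetical Python sketch; the stimulus-response table is invented for illustration):

```python
# A "behaving" machine that is nothing but a stimulus-response table.
FIXED_ACTION_PATTERNS = {
    "Why did the chicken cross the road?": "Ha ha ha!",
    "How are you feeling?": "I am filled with joy.",
    "01101": "10010",
}


def respond(stimulus: str) -> str:
    # Each output is canned; nothing is computed beyond the match.
    # Unrecognized input exposes the mimicry at once.
    return FIXED_ACTION_PATTERNS.get(stimulus, "...")


print(respond("How are you feeling?"))       # "I am filled with joy."
print(respond("What is joy like for you?"))  # "..."
```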
The best AI models we have are much more complex than these straw men. Even so, they are disembodied. They lack temporal and spatial capacities and sensations. However intelligent models in this paradigm become, we have good reason not to consider them conscious in any rich sense.
But what of a robot filled with sensors and capacities? One that could fly as well as walk, that could measure magnetic fields and register exotic wavelengths. Something whose actions suggest it understands the link between symbols and the world, an agent that seemingly influences the world to realize its preferences?
Again, much would depend on what we knew of its mechanics. Does its circuitry support the same sort of spiraling self-reference that is a feature of human cognition? Does it fit with an independently proposed architecture for consciousness, such as global workspace theory or integrated information theory? Or is it only semi-autonomous, with a human pulling the levers behind the scenes? Is it coded up as a set of fixed action patterns?
Even if we reject computational functionalism, inference to the best explanation can help us adjudicate here. Keeping this overarching principle in mind, rather than focusing on a single theory of consciousness, prompts us to consider the full landscape of possibilities.
Despite recent technological advances, we should look to ourselves and to animals for data on consciousness.
Progress in generative AI will have an enormous impact, but for now the models remain models: simulacra, machines without the ghost.