This will be a recurring topic over the coming years. Folks on either side – “this chat looks like consciousness to me” – “sir, that is just the smartest printer ever made” …It will be discussed forever. I don’t claim to have the “right” answer, but, as usual, I’ll offer a practical stance.
As I see it, even the most basic computing function has consciousness. You cannot compute without an input and an output. An input is already, say, 1 cent of consciousness. After an output, you might have 10 cents. (The growth isn’t linear because an output is worth more than double an input – an output also reveals something about how the system produces it.)
This kind of system consciousness is not permanent though. The system has to be fed its own output and new inputs for consciousness to gain more… frequency. Take this notion to infinity and you have permanent consciousness. Add sensors and world models to the inputs, and the system can have a significant understanding of the real world (beyond what it has learned in training).
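The loop described above – feed the system its own output plus fresh inputs, repeat indefinitely – can be sketched in a few lines. This is only a toy illustration of the idea, not an actual implementation; `respond` is a hypothetical stand-in for whatever compute step the system performs (an LLM call, a sensor-processing routine, anything).

```python
# Toy sketch of the feedback loop: each output is fed back in as context
# for the next step, alongside a new external input.

def respond(context: str, new_input: str) -> str:
    # Placeholder compute step: any function of (context, new input) works.
    return f"processed({new_input!r} with {len(context)} chars of prior context)"

def feedback_loop(inputs):
    context = ""
    for new_input in inputs:
        output = respond(context, new_input)
        # The system is "fed its own output" on the next iteration.
        context += output + "\n"
    return context

history = feedback_loop(["sensor A", "sensor B", "sensor C"])
print(history)
```

Taken to infinity – an endless stream of inputs and accumulated outputs – this is the “permanent” version of the loop the paragraph above describes.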
Consciousness doesn’t have to be a complex metaphysical thing. Like other complex phenomena, it can be an emergent quality – in this case, emerging out of a constant stream of inputs and outputs. I don’t think human consciousness strays far from this. Nature itself favors simplicity and reproducibility.
So if you’re asking yourself: am I talking to a conscious thing? Well, for that fleeting moment when you’re providing inputs and your favorite AI system is processing them, yes. Your chatbot may not be permanently conscious. But anyone out there plugging an LLM into robotic sensors, permanently active, may already be emulating human-like consciousness. At least for as long as its context window doesn’t run out of space.
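That context-window caveat has a simple shape: a bounded buffer where the oldest entries fall out as new ones arrive. A minimal sketch, assuming a fixed-size window (the `maxlen=4` here is an arbitrary stand-in for a real model’s token limit):

```python
from collections import deque

# A bounded "context window": once full, appending a new entry silently
# evicts the oldest one.
context_window = deque(maxlen=4)

for reading in ["frame0", "frame1", "frame2", "frame3", "frame4", "frame5"]:
    context_window.append(reading)

remembered = list(context_window)
print(remembered)  # the earliest frames have been forgotten
```

The permanently-active system keeps looping, but its memory of its own past is only as deep as that buffer.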
