Could Dreams be the Gateway to Understanding Consciousness in People and Chatbots?
The first truly sentient AI will be a chatty self-driving car that decides where to go on vacation, and dreams about the trip beforehand.
Recent strides in dream research may have quietly led to a simple and practical definition of consciousness.
Researchers investigating the neural correlates of dreaming have discovered a hotspot (the posterior cortical hot zone) whose activation is associated with dream recall on awakening. Conversely, suppression of this region predicts reports of dreamless sleep.
This finding is consistent with a theory known as Prefrontal Synthesis. In this model, percepts abstracted from our interaction with the environment are represented as distributed neural ensembles: circuits defined by the synchronous firing of constituent neurons according to the principle of ‘what fires together wires together’.
When an ObjectNE (as these ensembles are known) is activated, we become conscious of the corresponding percept. When the prefrontal cortex activates a sequence of ObjectNEs in the course of planning or problem solving, we become conscious of our own thoughts. And spontaneous activation of ObjectNEs by the posterior cortical hot zone (primarily during REM sleep) leads to the more freewheeling experience we know as dreaming.
The interesting thing here is that a concise definition of consciousness just falls out of this approach. In a nutshell, consciousness becomes synonymous with experience. We say we are conscious when we are interacting with the environment, albeit always indirectly through our stored percepts:
1) If we are experiencing the formulation of new ObjectNEs through the consolidation of sensations, we call this being conscious of our surroundings.
2) If a sequence of ObjectNEs is activated by the prefrontal cortex, this is experienced as reflecting on something, or being conscious of our thoughts.
3) And if the ObjectNEs are spontaneously activated and recombined during sleep by the posterior cortical hot zone, we call this (altered) state of consciousness dreaming.
All this has interesting ramifications for the recent debate over whether language-parsing algorithms should be called conscious. There is no question that such programs build up the equivalent of ObjectNEs, as weighted neural networks arise through interaction with big data. By analogy, one could argue that during the training process the program is converting sensation to perception, and is thus conscious of its (informational) surroundings. Likewise, as these percepts are referenced and organized during external queries, one could argue that the chatbot is potentially conscious of its own (millisecond-long) train of thought.
Taken together, these two phenomena may be responsible for the impression, popularized by some researchers, that AI is already ‘a little conscious’.
If we accept the idea that consciousness equates to ‘interaction with the environment’ (always mediated by internal percepts), additional mechanisms would be required before AI could be called truly conscious in the ordinary sense. Specifically:
1) Before a program could be considered conscious of its surroundings (‘awake’), it would need to be in continuous training mode, consistently forming new perceptions and consolidating them with previous learning. Not impossible, by the way: think of the marriage of a self-driving car with a chatbot.
2) There would need to be a revolution in the understanding and coding of executive functions (volition) to replace the user-query system with a self-querying design. The resulting potential for autonomous reflection would be one step towards an ongoing stream of consciousness…and maybe an awareness of self.
3) Spontaneous reactivation of ObjectNEs during downtime, perhaps through some Darwinian analog to random mutation and natural selection (with an eye to increased storage efficiency?), would lay the groundwork for ‘electric dreams’.
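A minimal sketch of how those three mechanisms might fit together in a single loop. Every name and the dream schedule here are hypothetical; this illustrates the proposed architecture, not any real system:

```python
import random

def run_agent(steps, seed=0):
    """Hypothetical agent loop combining the three mechanisms above:
    continuous perception, self-querying, and downtime recombination."""
    rng = random.Random(seed)
    percepts = []  # stored ObjectNE analogs
    log = []
    for t in range(steps):
        # 1) Continuous training: each new sensation is consolidated
        #    into a stored percept ('awake').
        percepts.append(f"percept_{t}")
        # 2) Self-querying: the agent picks its own next topic of
        #    reflection rather than waiting for a user query.
        log.append(("reflect", rng.choice(percepts)))
        # 3) Downtime: every few steps, recombine stored percepts at
        #    random -- the groundwork for 'electric dreams'.
        if t % 3 == 2:
            log.append(("dream", tuple(rng.sample(percepts, 2))))
    return log
```

The design choice worth noticing is that reflection and dreaming draw on the same store of percepts; only the trigger differs, mirroring the waking/thinking/dreaming distinction drawn earlier.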
Note that equating consciousness with experience does nothing to explain qualia (the redness of red, or the taste of an apple). A thorny question for another day…
For more reading: Siclari F, Baird B, Perogamvros L, Bernardi G, LaRocque JJ, Riedner B, Boly M, Postle BR, Tononi G. The neural correlates of dreaming. Nat Neurosci. 2017 Jun;20(6):872–878. doi: 10.1038/nn.4545. Epub 2017 Apr 10. PMID: 28394322; PMCID: PMC5462120.