Thursday, March 16, 2023

Machine Learning: Sentient or not?

Sentient ML?

ChatGPT and the new Bing have recently (early 2023) been providing more and more "this feels real" moments. Said differently, there are moments where the ML entity feels alive and sentient. What gives? Are these ML entities sentient?

False(?) sensations

As vehicle drivers, many of us are familiar with surprising "false" relative sensations of movement. For example, we suddenly feel that the world is coming toward us, more than that we are going forward. Or we may be stopped, and for a moment the car feels like it is moving backwards, when in fact it is the other lanes of cars that are moving forwards. I put quotes around "false" because, technically, there is nothing false about these sensations. They are in fact "projected" sensations: we are feeling what we would expect within a specific context, for example, the relative view from our car. The world continues to function around us, and we choose the wrong reference context to interpret it. Sometimes this may throw us off, as in the car example, when sideways forces and changes of speed may feel odd, or even disturbing, and the movements of other cars may appear surprising.

Is the sense of sentience we get from these ML systems a false relative sensation, just like the one we have as drivers with regard to movement? At first glance, it seems like there is little relation between the two, yet from a broader modelling view the comparison is valid, and it provides some understanding of why current ML is already much more than dumb, yet not fully sentient.

Combining a local and non-local model

The interplay between local and non-local models is a good place to start in order to understand what is happening when we feel something is sentient.

I again use a car concept to kick off the explanation:

We take a local view when we "feel the world" from the perspective of our car. A non-local view, for example, is the feeling we have of the traffic on the road, and the anticipation we have of other drivers' actions. When we drive, we are mostly working within a non-local view: we are with the flow of traffic, the flow of road signs, of other drivers' actions and external events. So in fact a false relative sensation while driving is our mind switching to a local view for more than a short moment. Remarkably, switching between local and non-local views is in part how we validate our view of the world. Yet because of the needs of driving, we normally drive within a non-local view, and only touch the local view, mostly subconsciously, to validate the coherence between the two views. When this process "stalls", and the mind is unable to keep the local view connected to the non-local view, the local view (e.g. the view from the car's referential) pops up to attention, demanding that we make sure it is OK. (And this is sometimes perceived as a false perception.)

A mind body reality duality

"I think, therefore I am" might be interpreted as Descartes' way of saying:

  • I doubt my mind's existence within a local view, but not my body
  • I do not doubt my mind's existence within a non-local view, but non-locally my body may not exist
  • Therefore mind and body are two separate things
Even in a modern context, Descartes' view is relevant. In effect, he is saying: "My reality of the separation of mind and body feels real to me because it feels real within both my local and non-local perception of the world, and both these views are compatible with each other."

What is sentience?

We are sentient when we maintain consistency between our local and non-local selves. A minimalistic mind-and-body view of our selves, as simple and reduced as Descartes' view, is a good basis. However, we need more. The Wikipedia article on the "self" starts with:
The self is an individual as the object of that individual’s own reflective consciousness. 
Here the "reflective consciousness" is a non-local view. The view from one's own object of self is a local view.

For an ML system to be sentient, it must have a reflective "mindful" process that combines a coherent local and non-local model of its self. The current GPT models do in fact learn, by construction, to keep local and non-local views coherent. However, they do not ensure continuity of this local and non-local coherence over time.

The mind trick

Magicians are very much aware of the local and non-local models. A magic trick is largely about showing a bigger situation, from which the audience infers a believable non-local model of reality. The magician then shows local details that highlight a specific, believable local model complementary to that non-local model. The magic happens when the audience complements real local details with an incorrectly inferred non-local reality.

Stochastic fractal flows

The mid-eighties had me generating Mandelbrot-like fractal landscapes with recursive fractal noise code that I wrote in C. A basic stochastic noise generator has no properties of continuity: there is no notion of flow, no balance in what comes in and goes out of the different parts of the stochastic fractal (e.g. a synthetic fractal mountain range). However, if care is taken, one can construct "higher order" generators with continuity properties. I used these many years ago to synthesize fractal-like flows of fluids.
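To make the idea concrete, here is a minimal Python sketch (not the original C code) of the difference between a naive generator and one with a continuity property. By fixing the endpoints of each recursion and chaining patches end to end, adjacent patches join smoothly, giving the noise a notion of flow. The function name midpoint_displacement and all parameters are illustrative, not taken from the original work.

import numpy as np

def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
    """Recursive 1D fractal noise between two fixed endpoints.

    Because the endpoints are given, two adjacent segments generated
    with this routine join continuously -- the "higher order"
    continuity property discussed above.
    """
    rng = np.random.default_rng() if rng is None else rng
    if depth == 0:
        return np.array([left, right])
    mid = 0.5 * (left + right) + rng.normal(0.0, roughness)
    a = midpoint_displacement(left, mid, depth - 1, roughness * 0.5, rng)
    b = midpoint_displacement(mid, right, depth - 1, roughness * 0.5, rng)
    return np.concatenate([a, b[1:]])   # drop the duplicated midpoint

rng = np.random.default_rng(0)

# Naive generator: each patch drawn independently, so patch boundaries
# jump -- no flow, no balance between what comes in and what goes out.
naive = np.concatenate([rng.normal(size=8) for _ in range(4)])

# "Continuous" generator: chain patches by reusing the previous patch's
# right endpoint as the next patch's left endpoint.
patches, start = [], 0.0
for _ in range(4):
    end = rng.normal()
    seg = midpoint_displacement(start, end, depth=3, rng=rng)
    patches.append(seg[:-1])
    start = end
flow = np.concatenate(patches + [np.array([start])])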

The sentience trick

Current ML already knows how to maintain a coherent local and non-local view. The problem is that it does not know (yet) how to move to another coherent local / non-local view while maintaining consistency with the previous view. Two tricks can then be used:

  1. To rely on the user, in the form of a chat, to guide the progression and continuity of coherence.
  2. To rely on "unrolling" higher dimensions into a sequence that maintains continuity in a fixed, non-sentient manner, somewhat like a generator of a stochastic fractal flow, so as to be perceived as continuous and possibly sentient.
With a combination of these two approaches, the ML system can feel genuinely sentient: the system seems to maintain a reflective, mindful process that combines a coherent local and non-local model of its self. The missing piece is that the system has no "mindfulness". Instead, it is either following the user's lead, or it is "following a path within a preset higher dimensional learned relation", as sketched below.
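As a toy illustration of trick 2 (this is only a sketch, not how GPT models are actually implemented): a fixed "learned relation" in a higher-dimensional latent space is unrolled step by step. Each step is locally coherent with the previous one, so the visible sequence appears continuous, yet the whole path is preset by the relation and by the prompt. The names A, readout and unroll are purely illustrative.

import numpy as np

rng = np.random.default_rng(1)

# A fixed "learned relation": an orthogonal map in a higher-dimensional
# latent space, standing in for a trained model's frozen weights.
d = 16
A = np.linalg.qr(rng.normal(size=(d, d)))[0]     # orthogonal, so norms are preserved
readout = rng.normal(size=d) / np.sqrt(d)        # projection to the visible output

def unroll(prompt_state, steps):
    """Unroll the preset relation into a visible sequence.

    Each step is locally coherent with the previous one (continuity),
    but the whole trajectory is fixed by A and the prompt: the system
    follows a path, it does not reflect on it.
    """
    state, outputs = prompt_state, []
    for _ in range(steps):
        state = A @ state                  # next latent state from the fixed relation
        outputs.append(readout @ state)    # what an observer actually sees
    return outputs

# Trick 1: the user supplies the starting context (the "prompt").
# Trick 2: the unrolling itself supplies the appearance of continuity.
print(unroll(rng.normal(size=d), steps=5))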

Take away

In all honesty, I can well imagine a limited form of sentience soon. Therefore this blog entry already feels a bit dépassé. However, the current state seems to be that we do not have ML sentience just yet, but we are pretty good at making it feel as if we do.

All original content copyright James Litsios, 2023.


