Sentient ML?
ChatGPT and the new Bing have recently (early 2023) been providing more and more "this feels real" moments. Said differently, there are moments where the ML entity feels alive and sentient. What gives? Are these ML entities sentient?
False(?) sensations
As vehicle drivers, many of us are familiar with surprising "false" relative sensations of movement. For example, we suddenly feel that the world is coming toward us, rather than us going forward. Or we may be stopped, and for a moment the car feels like it is moving backwards, when in fact it is the other lanes of cars that are moving forwards. I put quotes around "false" because, technically, there is nothing false about these sensations. They are in fact "projected" sensations: we are feeling what we would expect from a specific context, for example from the relative view of our car. The world continues to function around us, and we choose the wrong reference context to interpret it. Sometimes this throws us off, as in the car example, when sideways forces and changes of speed may feel odd, or even disturbing, and the movements of other cars may appear surprising.
Is the sense of sentience given by these ML systems a false relative sensation, just like the ones we have as drivers with regard to movement? At first glance, the two seem to have little relation, yet from a broader modelling view the comparison is valid, and it provides some understanding of why current ML is already much more than dumb, yet not fully sentient.
Combining a local and non-local model
The interplay between local and non-local models is a good place to start in order to understand what happens when we feel something as sentient.
I again use a car example to kick off the explanation:
We take a local view when we "feel the world" from the perspective of our car. A non-local view, for example, is the feeling we have of the traffic on the road, and our anticipation of other drivers' actions. When we drive, we mostly work within a non-local view: we are with the flow of traffic, of road signs, of other drivers' actions and external events. A false relative sensation while driving is, in fact, our mind switching to a local view for more than a short moment. Remarkably, switching between local and non-local views is in part how we validate our view of the world. Because of the demands of driving, we normally stay in a non-local view, and only touch the local view, mostly subconsciously, to validate the coherence between the two. When this process "stalls", and the mind is unable to keep the local view connected to the non-local view, the local view (e.g. the view from the car's referential) pops up to attention, demanding that we make sure it is OK. (And this is sometimes perceived as a false sensation.)
A mind-body reality duality
"I think, therefore I am" might be interpreted as Descartes' way of saying:
- I doubt my mind's existence within a local view, but not my body's
- I do not doubt my mind's existence within a non-local view, but non-locally my body may not exist
- Therefore mind and body are two separate things
What is sentience?
The self is an individual as the object of that individual’s own reflective consciousness.
The mind trick
Stochastic fractal flows
The mid-eighties had me generating Mandelbrot-like fractal landscapes with recursive fractal noise code that I wrote in C. A basic stochastic noise generator has no properties of continuity: there is no notion of flow, no balance between what comes in and goes out of the different parts of the stochastic fractal (e.g. a synthetic fractal mountain range). However, if care is taken, one can construct "higher order" generators with continuity properties. I used these many years ago to synthesize fractal-like flows of fluids.
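Not my original code, but a minimal sketch in C of the construction: a hashed lattice alone gives raw, discontinuous noise, while smooth interpolation between lattice values, summed over octaves (fractional-Brownian-motion style), gives a "higher order" generator with continuity. The hash constants and octave parameters here are illustrative choices.

```c
#include <stdio.h>
#include <math.h>

/* Deterministic hash of an integer lattice point to [0, 1).
   On its own this is pure, discontinuous noise. */
static double lattice(int i) {
    unsigned int h = (unsigned int)i * 2654435761u;
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return (double)(h & 0xffffffu) / (double)0x1000000u;
}

/* One octave of value noise: smooth interpolation between neighboring
   lattice values, so nearby inputs give nearby outputs. */
static double value_noise(double x) {
    int i = (int)floor(x);
    double f = x - (double)i;
    double t = f * f * (3.0 - 2.0 * f);   /* smoothstep blend */
    return lattice(i) * (1.0 - t) + lattice(i + 1) * t;
}

/* Fractal (fBm-like) noise: sum octaves at doubling frequency and
   halving amplitude. Continuous, unlike a raw rand() stream. */
static double fractal_noise(double x, int octaves) {
    double sum = 0.0, amp = 1.0, freq = 1.0;
    for (int o = 0; o < octaves; o++) {
        sum += amp * value_noise(x * freq);
        amp *= 0.5; freq *= 2.0;
    }
    return sum;
}

int main(void) {
    for (double x = 0.0; x < 2.0; x += 0.1)
        printf("%.2f  %.4f\n", x, fractal_noise(x, 5));
    return 0;
}
```

Plotted, the output wanders smoothly; replace value_noise with a plain call to lattice and the continuity is gone.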
The sentience trick
Current ML already knows how to maintain a coherent local and non-local view. The problem is that it does not know (yet) how to move to another coherent local/non-local view while maintaining consistency with the previous view. Two tricks can then be used:
- To rely on the user, in the form of a chat, to guide the progression and continuity of coherence.
- To rely on "unrolling" higher dimensions into a sequence that maintains continuity, in a fixed, non-sentient, manner, somewhat in a similar manner to a generator of a stochastic fractal flow, so as to be perceived to have continuity and possibly to have sentience.
Takeaway
In all honesty, I can well imagine a limited form of sentience soon. Therefore this blog entry already feels a bit passé. However, the current state seems to be that we do not have ML sentience just yet, but we are already pretty good at making one feel that we do.
All original content copyright James Litsios, 2023.