The Turing test proposes a way to test whether an artificial system is, in a meaningful way, exchangeable with a human being. The word 'exchangeable' is used here in the sense of 'fulfilling the same purpose', just as a horse and wagon and an off-road motorbike are to some extent exchangeable as means of transportation. The Turing test takes the form of a game in which one of the players can be either a human being or an artificial system. The game is set up in such a way that it would be hard, but not unthinkable, to create an artificial system that plays it as well as a human.
So far, attempts to create an artificial system that passes the Turing test have been unsuccessful. The LEIA project aims to remove a fundamental obstacle that prevents systems from doing well in the Turing test: mindlessness.
There is no consensus about what a mind is. Within the context of the LEIA project we assume that there are two prerequisites for having a mind: a brain and a belief structure. A mind is then the process of a belief structure 'running on' a brain. The brain is the hardware, the belief structure is the software, and the mind is what happens when the hardware operates according to the rules and expectations that are embodied in the software.
Let's assume that the brain is to some extent exchangeable with computer hardware. Obviously, the degree of mindfulness (to borrow a term from Buddhism) then depends to a very large extent on the software that the computer runs. No doubt all current software in the world (we are talking about the year 2007 here) is incapable of moving the mindfulness dial above zero.
From a theoretical point of view, the hardware will not be the problem. All past and future computers can be described by a formal system that consists of a single fixed, simple device and a potentially infinite amount of memory: the Turing machine. This formal system not only describes the behavior of computers, but can also be adapted to deal with all the complexities of brains. (It would be too technical to discuss this here, because the adaptation involves Algorithmic Probability.) Therefore we don't worry too much about the exchangeability of brains and computers.
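To make this formal system concrete, here is a minimal sketch in Java of a single fixed device operating on unbounded memory (a Turing machine). The machine, its states, and its one-rule program are invented purely for illustration and have no connection to the LEIA software itself.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal Turing machine: a single fixed device (the transition table)
// plus a potentially infinite tape (modeled here as a map from position
// to symbol, so the tape grows only as it is used).
public class TinyTuringMachine {
    // Transition: (state, symbol) -> (next state, symbol to write, head move)
    record Rule(String nextState, char write, int move) {}

    public static String run(String input) {
        Map<String, Rule> rules = new HashMap<>();
        // Example program: invert every bit, then halt at the first blank.
        rules.put("scan,0", new Rule("scan", '1', +1));
        rules.put("scan,1", new Rule("scan", '0', +1));
        rules.put("scan,_", new Rule("halt", '_', 0));

        Map<Integer, Character> tape = new HashMap<>();
        for (int i = 0; i < input.length(); i++) tape.put(i, input.charAt(i));

        String state = "scan";
        int head = 0;
        while (!state.equals("halt")) {
            char symbol = tape.getOrDefault(head, '_');
            Rule r = rules.get(state + "," + symbol);
            tape.put(head, r.write());
            head += r.move();
            state = r.nextState();
        }
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < input.length(); i++) out.append(tape.get(i));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(run("0110")); // prints "1001"
    }
}
```

The point of the sketch is only that the device itself never changes; all variation lives in the transition table and the tape contents.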
So what kind of software would imbue a computer with a certain level of mindfulness?
The traditional way to tackle this question is to embrace the view: mindful is as mindful does. Mindful creatures play chess, prove theorems, play games, solve problems, play soccer, and so on. So Artificial Intelligence (A.I.) researchers created programs (and sometimes robots) to do those things, and more. However, these computer programs share a disappointing trait: each of them proves that its purpose can be achieved mindlessly!
Obviously there has to be another way.
An important observation about mindless A.I. systems is that they have no self-symbol as described in Hofstadter's I Am a Strange Loop. They do not suffer the consequences of their own actions. In other words, the expected consequences of such a system's actions are only compared to a goal that was provided to the system from the outside. They are not compared to self-assigned goals, because these systems have no 'self'.
In particular, these mindless A.I. systems use a fixed internal language that is structured around the external goal being pursued, whereas in a mindful system we would expect an evolving internal language structured around aspects of 'self': pleasure, fear, beauty, etc. A two-year-old child has a feeling of accomplishment when it has peed in a potty; the child needs an internal language to express this feeling.
How do we put a self into software? The following simple procedure would certainly not do: create a data structure named 'self' and give other structures in the memory subsystem pointers to this object:
    Object self = new Object();
    fact.subject = self;
But what else, then? Well, the boundaries of the hardware (or virtual machine) that runs the software can be used as a natural demarcation of self. We can look at the information that flows into and out of the machine. The goal of the LEIA project is to create software that analyzes causal relations between output and input, as well as causal relations between one input and another. Every causal relation found can be called a piece of 'knowledge'. Even stronger: since these relations concern the specific interactions of that machine with its environment, they represent knowledge of itself. When we ask what (or who?) benefits from finding these relations, there is only one plausible candidate: the system that is demarcated by the machine on which it runs. In other words, the only candidate is the system itself. A system like this would therefore suffer the consequences of its own actions.
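As an illustration of this idea (a hypothetical sketch of my own, not the actual LEIA code), consider a small Java class that logs only the events crossing the machine boundary and counts how often each output is followed by each input. Every well-supported (output, next input) pair is then a candidate piece of knowledge about the system's own interactions with its environment:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: use the machine boundary as the demarcation of
// 'self' by recording only what crosses it (outputs and inputs), and
// count how often a given input follows a given output. Each frequent
// (output -> next input) pair is a candidate causal relation, i.e. a
// candidate piece of knowledge about the system's own actions.
public class BoundaryObserver {
    private String lastOutput = null;
    private final Map<String, Integer> followCounts = new HashMap<>();

    // Called when the system emits something across the boundary.
    public void observeOutput(String action) {
        lastOutput = action;
    }

    // Called when something arrives across the boundary.
    public void observeInput(String percept) {
        if (lastOutput != null) {
            String pair = lastOutput + " -> " + percept;
            followCounts.merge(pair, 1, Integer::sum);
        }
    }

    // How often has this particular consequence followed this action?
    public int count(String action, String percept) {
        return followCounts.getOrDefault(action + " -> " + percept, 0);
    }
}
```

Note that nothing in this sketch refers to the 'real' world; the counts only concern this machine's own inputs and outputs, which is exactly why they constitute knowledge of itself.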
The LEIA software has no fixed internal language. There is no assumed relation between the symbols in its memory subsystem and the 'real' world. The only 'meaning' that can be assigned to symbols is inherent in the causal relations between those symbols. There are several ways to describe causal relations; a common division is logical versus probabilistic. The LEIA project uses a probabilistic approach to circumvent the grounding problem. A further subdivision within the probabilistic approach is predictions versus expectations. The LEIA project uses expectations, for the simple reason that predictions rarely come true: anything could happen, and if a system assigns probabilities to events based on past events, then every possible future has a low combined probability. These probabilities are nevertheless useful for evaluating possible actions, actions that can be chosen to maximize the likelihood of exposing more specific patterns.
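The notion of expectations can be illustrated with another hypothetical sketch (the names and structure are my own, not LEIA's): a table of past transition frequencies from which the system derives how strongly it expects one event to follow another. No single future is ever certain, but the relative strengths are enough to compare candidate actions:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of 'expectations': estimate how strongly one event
// is expected to follow another, purely from observed past frequencies.
// Any particular long sequence of events remains improbable, but these
// relative estimates are still useful for comparing candidate actions.
public class ExpectationTable {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    // Record that 'next' was observed right after 'current'.
    public void observe(String current, String next) {
        counts.computeIfAbsent(current, k -> new HashMap<>())
              .merge(next, 1, Integer::sum);
    }

    // Expected probability of 'next' given 'current', from past frequencies.
    public double expectation(String current, String next) {
        Map<String, Integer> row = counts.get(current);
        if (row == null) return 0.0;
        int total = row.values().stream().mapToInt(Integer::intValue).sum();
        return total == 0 ? 0.0 : (double) row.getOrDefault(next, 0) / total;
    }
}
```

A system could, for instance, prefer the action whose expected consequences best match a pattern it is currently trying to sharpen.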
So this, in a nutshell, is what LEIA does: autonomously searching for probabilistic patterns in its communication with its environment. In other words: Learning Expectations Incrementally and Autonomously.
Visit the LEIA project page on GitHub for project details.