The other set of things we’d really like to understand is what intelligence is — including natural intelligence, our own minds. So I think there should be some flow back, from AI algorithms that do interesting things, that leads to ideas about how and what we should look for in the brain itself. And we can use these AI systems as models for what’s going on in the brain.
-Demis Hassabis, Founder, DeepMind
The problem with current Artificial General Intelligence (AGI) research is not only the lack of a definition of intelligence, but also the lack of scientists engaging in theoretical science - attempting to develop a theory that describes intelligence from a scientific perspective. The Kimera research team believed that building an intelligent machine without being able to define intelligence would be a pointless undertaking. Before attempting to write any software, the team started researching the nature of intelligence.
The result is what we call the General Theory of Intelligence, as it is a theory that attempts to explain intelligence and how it is part of the fabric of SpaceTime. A formal paper is expected to be released in 2019. This page only summarizes the theory in a simplified way.
Most agree that more intelligent people are more effective at reaching their goals than less intelligent people. We therefore hypothesized that intelligence is about the effectiveness of reaching goals. The first challenge for the team was to describe what it means to reach a goal.
As outlined in the video above, any conceivable goal is simply a different composition of SpaceTime compared to the composition prior to the time the goal was achieved. A goal is simply a point in time at which SpaceTime has a specific composition of particles that we interpret as the goal having been achieved.
Intelligence is the process of changing the composition of SpaceTime
-Definition of Intelligence
Intelligence: The Goalless Process
If intelligence is the process of changing the composition of SpaceTime, then it is baked into the fabric of SpaceTime itself - from planets orbiting their stars to electrons orbiting protons. But what is the goal of a planet orbiting a sun? While some religions may provide an answer, religious beliefs do not belong in science. That is why we decided to limit the theory to simply the process of change.
Measuring the level of intelligence of an artificial or biological system is important. To that end, our team developed a method called the Comprehension Factor (CF). The word Comprehension highlights the fact that for a system to find the shortest and most effective way to reach goals (plural), it requires true comprehension.
CF assumes the goals are known, which is a deviation from the general theory. Without a goal it would be hard to judge the effectiveness of a sequence of actions. CF measures a system's ability to predict the most effective process to achieve a goal.
To accurately measure a system's general CF, it is important to test it across multiple goals. CF[Sm|G] is the comprehension factor measured at a median process length of Sm given G number of goals. The higher G is, the more general the system is, while the smaller Sm is, the more effective the system is.
CF with a low G can still be useful for measuring a system's efficiency across a small number of tasks. Standardized goals G can be used to accurately compare disparate systems.
The CF factor is the product of the probabilities of being able to perform the next action (As+1) given the execution of the current action (As). Because each probability is a number between 0 and 1, the longer the sequence, the lower CF is. This makes sense: it is easy to predict what should be done to achieve a goal if the sequence of actions is short. For example, if the goal is to get to the car and the starting position is sitting on the couch, then:
- Probability of being able to perform action Walk to door given I first perform action Get up from couch = 0.95
- Probability of being able to perform action Open door given I first perform action Walk to door = 0.90
- Probability of being able to perform action Walk to car given I first perform action Open door = 0.85
In this case CF = 0.95 x 0.90 x 0.85 = 0.73. This is a measurement of a single goal and doesn't accurately reflect general intelligence. The same measurement should be done on many different goals.
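The single-goal calculation above is just a product of the step probabilities, which can be sketched in a few lines of Python (the probabilities are the ones from the couch-to-car example):

```python
from math import prod

# Conditional probabilities from the couch-to-car example:
# P(able to perform next action | current action executed)
step_probabilities = [
    0.95,  # Walk to door | Get up from couch
    0.90,  # Open door    | Walk to door
    0.85,  # Walk to car  | Open door
]

# CF for a single goal: the product of all step probabilities.
cf = prod(step_probabilities)
print(round(cf, 2))  # 0.73
```

As the sketch shows, adding any further step with probability below 1 can only lower CF, which is why longer sequences yield lower scores.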
After performing this measurement on several goals, Sm represents the median sequence length across all goals and G the number of goals tested. Maintaining a high CF over a longer Sm and a higher G is more impressive: CF[5|1] = 0.90 is less impressive than CF[20|50] = 0.80, even though the first system has a higher comprehension factor.
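A sketch of how CF[Sm|G] could be tabulated across goals. Note that the text does not specify how the per-goal CF values are combined into one number; taking their median is an assumption here, and the example sequences are invented for illustration:

```python
from math import prod
from statistics import median

# Per-goal action sequences: each is a list of step probabilities.
# These goals and probabilities are illustrative, not from the text.
goal_sequences = [
    [0.95, 0.90, 0.85],        # goal 1: 3 steps
    [0.99, 0.97],              # goal 2: 2 steps
    [0.90, 0.92, 0.88, 0.95],  # goal 3: 4 steps
]

G = len(goal_sequences)                          # number of goals tested
Sm = median(len(seq) for seq in goal_sequences)  # median sequence length
per_goal_cf = [prod(seq) for seq in goal_sequences]
cf = median(per_goal_cf)  # assumed aggregation: median of per-goal CF

print(f"CF[{Sm}|{G}] = {cf:.2f}")  # CF[3|3] = 0.73
```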
It is important to note that the further away in time an action is to be executed, the more variables affect it. For example, if the car was illegally parked, the longer it takes to get to the car, the higher the probability it has been towed away. Therefore, the probability of being able to perform Walk to car given I first perform Open door decreases. The CF formula doesn't dictate how a system should calculate the probabilities, as each system is different. How Nigel AGI works is discussed on other pages of this web site.
Human vs General vs Super Intelligence
Part of measuring intelligence is the ability to categorize intelligence. Today we often talk about general intelligence vs super intelligence. To categorize a measurement we look at CF, Sm and G. The question is how long a median sequence a system can predict across N goals while keeping CF ≥ 0.75. N, which represents the number of goals, is yet to be defined by the Kimera research team. It is likely a task better suited for other teams who specialize in human intelligence.
In the illustration above, the senior female scientist represents the top 5th percentile of intelligent people. These people can more often than not predict sequences of length X (again, X needs to be defined) while keeping CF at 0.75 or higher.
We define a sequence length of X or below as general intelligence, while a sequence length of X+1 or higher falls under super intelligence.
It is possible for a system to have CF[2000|1] ≥ 0.75 - in other words, to have superhuman ability on a single task. Deep-learning-based systems fall under this category. But for "general" super intelligence, CF has to be measured with a higher G.
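The categorization above can be sketched as a small classifier. Since neither the sequence-length threshold X nor the required goal count N has been defined yet, both appear below as hypothetical placeholder parameters:

```python
def categorize(cf: float, sm: float, g: int,
               x: float = 50, min_goals: int = 100) -> str:
    """Categorize a measurement CF[sm|g].

    x (the general/super sequence-length boundary) and min_goals
    (the goal count required for "general" super intelligence) are
    placeholder values; the theory has not yet fixed either one.
    """
    if cf < 0.75:
        return "below threshold"
    if sm > x:
        # Long sequences on few goals: superhuman but narrow,
        # e.g. a deep learning system mastering one task.
        return "super intelligence" if g >= min_goals else "narrow super intelligence"
    return "general intelligence"

print(categorize(0.80, 2000, 1))  # narrow super intelligence (single task)
print(categorize(0.80, 20, 150))  # general intelligence
```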