The Subsumption architecture was invented by Rodney Brooks (currently director of the MIT Computer Science and AI lab) and his colleagues in the 1980s. Wikipedia describes it as:
“A subsumption architecture is a way of decomposing complicated intelligent behaviour into many “simple” behaviour modules, which are in turn organized into layers. Each layer implements a particular goal of the agent, and higher layers are increasingly more abstract. Each layer’s goal subsumes that of the underlying layers, e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer.”
This is, in my opinion, a great way of tackling AI problems. The lower layers take care of the basic needs of the agent while the higher layers build on top of that; for example, you won’t consider buying a car if you can’t afford a pair of jeans, and you won’t consider buying a pair of jeans if you can’t afford food.
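To make the layering concrete, here is a minimal sketch of a subsumption-style control loop. This is purely illustrative (the layer names, sensor dictionary and action strings are my own assumptions, not Brooks’s actual design): each higher layer receives the decision of the layer below it and may subsume it, exactly as in the eat-food / obstacle-avoidance example from the quote.

```python
# Illustrative sketch of a subsumption-style control loop.
# Layer names, sensors and actions are hypothetical, not Brooks's code.

class Layer:
    """A behaviour module: sees the sensors plus the lower layer's decision."""
    def act(self, sensors, lower_action):
        return lower_action  # default: defer to the layer below


class AvoidObstacle(Layer):
    # Lowest layer: basic safety need of the agent.
    def act(self, sensors, lower_action):
        if sensors.get("obstacle_ahead"):
            return "turn"
        return lower_action


class SeekFood(Layer):
    # Higher layer: pursues a goal, but takes the lower layer's
    # decision into account -- it never overrides obstacle avoidance,
    # only the default wandering behaviour.
    def act(self, sensors, lower_action):
        if sensors.get("food_visible") and lower_action == "wander":
            return "move_toward_food"
        return lower_action


def control(layers, sensors):
    """Run the stack from lowest to highest layer; each layer may
    subsume (replace) the decision coming from below."""
    action = "wander"  # behaviour when no layer has anything to say
    for layer in layers:  # ordered lowest to highest
        action = layer.act(sensors, action)
    return action


robot = [AvoidObstacle(), SeekFood()]
print(control(robot, {"food_visible": True}))
print(control(robot, {"obstacle_ahead": True, "food_visible": True}))
```

With food in view the robot moves toward it, but as soon as the obstacle sensor fires, the lower layer’s decision wins: the higher layer subsumes the default behaviour, not the safety-critical one.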
Yesterday, I saw part of a very interesting TV program about the human brain. Humans supposedly have three types of brain: the Reptilian, the Mammalian and the Neo-Cortex.
The first type of brain that came about during human evolution was the reptilian brain. Its purpose is to take care of hunger, temperature control and keeping one safe. This is the brain that supposedly all reptiles have.
On top of that reptilian brain lies the Mammalian brain. The latter is responsible for emotions (moods, hormones,…). This is the brain associated with mammals.
Finally, on top of the mammalian brain lies the Neo-Cortex. This one belongs to primates, and we use it for complex social interactions, language, manipulating tools,…
So now compare the Subsumption architecture to our three levels of brain. In a normal human, the reptilian brain takes care of finding food, then the mammalian brain gives us emotions, and finally our Neo-Cortex gives us the ability to socialize; but each one depends on the layer below it (if you are dying of hunger, socializing becomes secondary). Is it not very, very similar to the Subsumption architecture?
No wonder it’s one of the AI theories that seems to make a lot of sense to me.