(Originally posted on my LinkedIn on Aug 12, 2017)
Recently I read the 2-part write-up “The AI Revolution: The Road to Superintelligence”. I know enough about AI to follow this elaborate write-up and the bold claims within it. However, I fall short of being able to assess the validity of those claims. But that doesn’t stop me from rambling on.
I feel that this write-up misses discussing the currency of ‘emotion’ that humans use so eloquently in their transactions, a currency that (IMO) machines have to tame to go from ANI to AGI. Indeed, the write-up briefly mentions this aspect. To quote:
‘To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.’
But the above underplays how this aspect might increasingly slow things down as ANI (Artificial Narrow Intelligence) converges toward AGI (Artificial General Intelligence). Let’s look at this through a recently published piece of Facebook research on making chatbots negotiate with each other.
The Facebook research shows how two machines ‘logically’ arrive at the most optimal result for both sides. That is machine-to-machine negotiation. In human-to-human negotiation, we employ many other devices: emotional blackmail, sentiment, and manipulation. Favoritism, racism, exploitation, and the like are all realities of life. How often do we hear ‘man, we’re both from the same school, you’ll make that extra effort for me’ or ‘you look like a sensible lady who will understand my situation, you’ll give it to me at that price’? And people may fall for these. The optimal solution that machines arrive at has no relevance, or computational explanation, in such situations.
If a machine has to negotiate with a human, it must master all these traits to negotiate with (manipulate?) him or her. Not impossible: it only takes providing a sizable set of such transactions and transcripts between humans as learning data.
But the challenge begins as we start taking humans out, first at one end of the table and then at the other. At some point there are no humans left for the machines to learn human behavior from. Once humans are out of the equation, the evolution of these machines may fall short of appealing to humans. That may prompt humans to come back into the fray, and the machines will learn again. So, as ANI spreads across various fields, it will likely learn more from fellow machines and less from humans.
The first generation of driverless cars will evolve based on the other, human drivers on the street. As the number of driverless cars on the road increases, they will learn from each other’s evolution and behavior. Imagine that street design, construction, and so on are also done by machines. The machines that drive cars and the machines that build infrastructure learn from each other; no human is needed here. It should be intuitive by now that at some point these machines may drift far from observing, factoring in, or prioritizing how humans react to their decisions. There is no bad intention within these machines: they learn from the training data available to them, and humans are no longer contributing to that dataset!
This may result in a situation similar to the ‘predator-prey model’: humans become the prey and machines become the predator. That is within the context of the model and not literally (hopefully). As ANI drifts far from humans, humans will jump in and take over. As more humans start interacting, ANI becomes smarter again and reduces human interaction. This cycle will continue, but for how long? I guess that ANI will become AGI the day the wavelengths of this predator-prey model converge to zero. What I am not able to conceive in my thought experiment is whether the wavelengths increase or decrease as time passes. If the wavelengths have to increase, ANI may never become AGI!
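For the curious, the classic predator-prey dynamics I am alluding to are the Lotka-Volterra equations, which produce the kind of repeating cycles described above. Here is a minimal sketch in Python using simple Euler steps; the parameter values are illustrative assumptions of my own, not derived from any real human-machine data:

```python
# Minimal Lotka-Volterra predator-prey sketch (illustrative parameters only).
# prey grows on its own, is eaten by predators; predators grow by eating prey
# and die off without it. The populations rise and fall in repeating cycles.

def lotka_volterra(prey, pred, alpha=1.0, beta=0.5, delta=0.2, gamma=0.8,
                   dt=0.01, steps=5000):
    """Return the prey and predator population trajectories over time."""
    prey_hist, pred_hist = [prey], [pred]
    for _ in range(steps):
        d_prey = (alpha * prey - beta * prey * pred) * dt   # growth minus predation
        d_pred = (delta * prey * pred - gamma * pred) * dt  # feeding minus die-off
        prey, pred = prey + d_prey, pred + d_pred
        prey_hist.append(prey)
        pred_hist.append(pred)
    return prey_hist, pred_hist

# Start both populations away from equilibrium and watch the oscillation.
prey_hist, pred_hist = lotka_volterra(prey=6.0, pred=3.0)
```

In this toy version the cycles repeat indefinitely; my open question above is whether the human-machine analogue of these cycles would dampen out (ANI reaches AGI) or stretch out forever (it never does).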