It is said that we are living in the most extraordinary era in human history. Everything – from our personal lives, how we set up our homes and interact with our home environment, to how we work and how we run our businesses – will change radically and forever within the next five to fifteen years. Concepts like exponential growth, innovation and disruption hit us more times a day than we have time to reflect on what they really mean. Technology is accelerating at an amazing speed; until now, technologies were many and served many different purposes and applications. What is different now is how seemingly disparate technologies converge into platforms and ecosystems to serve humanity and increase its potential or… to dominate it (?).
The keyword and common factor behind all technology today is Artificial Intelligence!
AI in the form of robots has featured in science fiction films for the best part of a century, with the first robot appearing in Metropolis (1927), but the real revelation came later, with Star Wars (1977) and The Terminator (1984). AI has been used and depicted both as an enabler on the way to utopia and as a weapon that leads to a dystopian future. And yes, the debate all those years has been about the ethics of how, when and where to use technology and AI, and yes, resistance by the majority of the population has been unyielding more often than not.
Could it be that the prospect of a dystopian future in which an AI rebellion subdues our species, and AI-controlled societies dominate and enslave the human race, triggers our fight-or-flight instinct? One way of fleeing would be to ignore AI completely and resist the expansion of its applications by not using it – not even as an enabler – wouldn't it? And one way of fighting could be to hinder its growth by posing ethical questions where there are none, or by ostracizing those working on improving it – or even those merely using it – through social exclusion based on acceptance norms and stereotypes, depending on what suits us each time.
Could it be that the instinct that once preserved the continuity and survival of the human race is now what seriously hinders its growth and alarmingly restrains its potential?
The undeniable truth is that machines, engines, artificial intelligence entities – or whatever else you choose to call technological creations – do many things a lot better than humans do. They can outplay the best chess players and even beat themselves at a game of Go; they can control miles-long production lines in factories and diagnose health conditions simply by looking into your pupil; they can work out the most complex risk-management scenarios and make informed decisions on when to intervene in a process and how to change it. It is also undeniable that, compared even to people who are highly specialized in their field, extremely knowledgeable and of above-average IQ, they perform all those tasks faster, better and cheaper. In fact, what they are best at are highly repetitive, patterned and structured tasks backed by a whole load of data – rules, statistics and corpora – and delivered through sophisticated algorithms.
What is fascinating, though, is that despite the amazing things AI can do and the exponential potential it has, AI still lacks understanding, context and flexibility. The impact AI has had on language, communication and learning over the past five years has been significant, to say the least. Learning and teaching models have moved from classrooms to technology labs to train engines, and so machine translation has moved from rule-based to statistical models, then on to neural networks and deep learning, natural language processing, and adaptive and predictive models. And yet language is one of the most flexible and context-dependent of human constructs, but also one of the most exponential in its development and capabilities, despite the fact that it too is based on rules and structures – occasionally rather strict and unbending ones.
AI may produce language and engage in conversation; it may even infer our intentions by gathering all the data we have consciously or unconsciously conceded about ourselves over the years, combining it with other data sets and offering us meaningful suggestions based on reasoning. But AI can hardly respond to novel scenarios. Lacking a basic understanding of purpose – of why it does what it does – at least for the time being, when faced with offbeat, singular events it can at best make an educated guess; it does not take any extra steps to clarify things, and it does not take responsibility when things go wrong. Think how Siri, Cortana, Alexa or Google would interact with a 2- or 3-year-old, and how effective natural communication would be – or, if you happen to have such a little person in the house, why don't you try it? Then, those of you who happen to be parents, think back to when your child was articulating his or her first words and you were the only ones who could understand what he or she was saying. No other person could at the time, and definitely no AI, and the reason is simple: you shared content and context that was singular to the two of you, and only you had access to it.
No matter how evolved AI is as we speak, and however far it leapfrogs in the years ahead, it still has quite a lot of ground to cover before – if ever – it can be singular. That is not what we should worry about. Instead, what we should start worrying about is that we are so conditioned in our patterned lives, and feel so safe in our comfort zones, that we dismiss the simplest change as threatening and distrust those who bring it. What we should rethink is our definition of ourselves against the machines. Unlike machines, people and the human brain are not designed to be conditioned; humans can bend or break the rules, and that makes us unpredictable. This is where the real exponential lies.