The relationship between mankind and machine is an inherently complex one. Technology has evolved beyond a tool. It has become an extension of our very selves.
Some may find this statement a bit extreme (as they read this on the phone that hasn’t been more than a few feet away from them since waking up this morning), but the statistics bear it out. Globally, the average individual owns around 3.64 devices connected to the aptly named Internet of Things. This connection drives social interaction, education, entertainment, and just about any other facet of the modern human experience you can imagine.
Meanwhile, the workplace is transforming before our eyes. Automation is redefining entire industries, the concept of a job may soon lose its meaning, and even those who remain in traditional fields in the years to come will soon be joined by colleagues in the Cloud: software augmented by virtual and artificial intelligence.
The intention is to create a future of work in which humans feel both relevant and happy; one that optimises both efficiency and humanity.
But where do we draw the line? Current research into automated vehicles suggests that balance is not easily struck. One such study, carried out by the Kellogg School of Management’s Adam Waytz and highlighted in the May/June 2016 edition of Psychology Today, split volunteers into three groups and placed them in a driving simulator. The first group drove the car themselves. The second group rode in an automated car. The third group did the same, except their digital driver had a name and a voice that cheerfully shared information about the journey.
In each scenario, the car ended up in a minor accident with another vehicle, which the passengers were asked to describe to the researchers. Unsurprisingly, group one’s drivers were quick to blame themselves, while group two blamed the car and its manufacturer. Subjects in group three, however, rarely blamed their automated drivers. They equated the car’s ‘personality’ with intelligence and, under the misconception that digital intelligence is superior to organic intelligence (perhaps from watching too many sci-fi films), argued that the fault must lie with the other driver.
But here’s the thing: the simulation was designed so that the other vehicle was always responsible for the incident. The subjects’ perception of the technology simply kept them from realising it.
That the more ‘intelligent’ car was seen as less likely to be responsible is what concerns critics of computer intelligence. Presuming that the technology performs a function better than the user ever could leads the user to grant it more agency; a risky decision, particularly while such systems are still in their infancy.
Such utter faith in the infallibility of AI has led revered futurists like Ray Kurzweil to warn of the singularity: the moment when artificial superintelligence outpaces the human mind’s capacity to understand it. Simply put, it is the point at which the future of human civilisation will be decided solely by machines. Great minds like Bill Gates and Stephen Hawking have echoed such predictions, fearing that our obsession with ubiquitous concepts like AI and deep learning will have dire consequences.
Kurzweil has named 2045 as the year in which the singularity will occur. Whether he is proved right depends on the checks and balances we put in place.
Kranzberg’s first law of technology is “technology is neither good nor bad; nor is it neutral”. By this, he meant that the intentions behind technological advancements do not define what those advancements are capable of. Simply being able to create something is not an excuse for creating it.
Some have argued, therefore, that we must let the technology decide the best course of action. Such beliefs are shortsighted.
Humarithm is a term coined by futurist Gerd Leonhard to describe what he believes will protect people from losing their sense of humanity. A counterpoint to algorithm, Leonhard’s humarithm is the combination of ethical and creative principles that keep us from becoming the logical, systematic robots our jobs would otherwise make us.
Rather than placing the future in the hands of artificial intelligence, we must retain our exclusive rights to human intuition and morality. Human happiness is the future of business, and automation will be crucial to achieving it, but only in a supporting role.
To accept more than that is to abandon what makes us human. And at a time when experts are warning of AI-based killer robots, a future devoid of humanity still seems the most frightening of all.