Humarithm: Defining the Relationship Between Mankind and Machine

January 30, 2017

by Mitch Ziems

The relationship between mankind and machine is an inherently complex one. Technology has evolved beyond a tool. It has become an extension of our very selves.

That may seem like hyperbole to some, but statistics show that, globally, an individual owns an average of 3.64 devices connected to the Internet of Things. This connection drives social interaction, education, entertainment, and just about every other facet of the human experience you can imagine.

Such change occurred gradually, but has brought us into an age in which technology moves at such a fast pace that our existence can be redefined at any given moment.

There is no stopping it. Nor can we truly predict it in any specific terms. We can only set the guidelines, and hope progress doesn’t stray out of bounds.

Meanwhile, the workplace is transforming before our eyes. Automation is dominating whole industries, the definition of a job is under scrutiny, and even those who remain in traditional fields will soon be joined by colleagues in the Cloud: software augmented by virtual and artificial intelligence.

The intention is to create a future of work in which humans feel both relevant and happy; one that optimises both efficiency and humanity.

But where do we draw the line? Current studies of the automated vehicle industry prove that balance is not easily come by. One such study, carried out by the Kellogg School of Management's Adam Waytz and highlighted in the May/June 2016 edition of Psychology Today, split volunteers into three groups and placed them in a driving simulator. The first group were told to drive the car themselves. The second group rode in an automated car. The third group did the same, except their car had a name and a voice that gave passengers information about their location and navigation.

In each scenario, the car would get into a minor accident, which the passengers were asked to describe to the researchers. Unsurprisingly, the drivers in group one were most likely to blame themselves, while group two blamed the car and its manufacturer. Group three, however, were less likely to blame the automated car. They equated the car's voice with intelligence, seeing it as almost human rather than a machine, and so were reluctant to fault it.

The test was set up so that the other car was responsible for the incident, but the subjects' perception of the technology kept them from recognising this.

That the more ‘intelligent’ car was seen as less at fault is what disconcerts futurists. A presumption of understanding on the technology's part leads users to grant it greater agency.

Such utter trust is what has led revered futurists like Ray Kurzweil to warn of the singularity: the moment when artificial superintelligence outpaces the human mind's capacity to understand it. Simply put, it is the point at which the future of human civilisation would be decided solely by machines. Great minds like Bill Gates and Stephen Hawking have echoed such predictions, fearing that our obsession with ubiquitous concepts like AI and deep learning will have dire consequences.

Kurzweil has declared 2045 as the year in which the singularity will occur. Whether he will be proved right or not depends on the checks and balances we put in place.

Kranzberg’s first law of technology is that “technology is neither good nor bad; nor is it neutral”. By this, he meant that the intentions driving a technological advancement do not define what that advancement is capable of. Simply being able to develop something is not, in itself, a reason to develop it.

Some would argue, therefore, that we must let the technology decide the best course of action. Such beliefs are shortsighted.

Humarithm is a term coined by futurist Gerd Leonhard to describe what he feels will protect workers from losing their sense of humanity. A counterpoint to the algorithm, Leonhard’s humarithm is the combination of ethical and creative principles that keeps us from becoming the logical, systematic robots our jobs would otherwise make us.

Rather than placing the future in the hands of artificial intelligence, we must retain our exclusive rights to human intuition and morality. Human happiness is the future of business, and automation will be crucial to that, but only in the context of support.

For if we cannot shape our own future, we are no longer relevant in this world. What could be more frightening than that?
