Humanising technology, part 1
During a meeting at work earlier this week about an upcoming conference based on certain select UN SDGs, the idea of balancing technology and human development came up briefly. This is an idea I have been thinking about for some time myself, and this seemed like an apt moment to reflect on some of my passing thoughts on the issue.
The idea that technology is cold and distant has already gripped most of society, and George Orwell had little to do with it. Technology is essentially a phenomenal example of the master–slave dynamic. Sordid as the idea seems, that is all everyday technology is as of now. But that is not to say the status quo will remain unchallenged. As men race to make robots that can outsmart their creators, it can only be considered prudent to take a step back and reconsider whether we want to instil some method to this madness.
Outlining a core purpose
That A.I. will take over humanity before we realise it is a given. And this is precisely where the problem lies: technology today is developing much faster than our minds can fathom, which is why the potential threat from smart computers sounds like science fiction to most people while it is in fact a real, present danger. Artificial Intelligence is in some ways the opposite of climate change: while the former is developing too fast for us to realise, the latter is happening too slowly for us to notice. In other ways A.I. is just like climate change: unchecked, it can be the end of us.
However, all is not lost – yet. We are at the perfect juncture to question our intentions with A.I. and set out a roadmap that balances the benefits we can reap from new technology against the threats it can pose to us. There are, as always, two teams participating in this argument: one group claims our fear[^ I use the word ‘fear’ knowing that I take offence to this myself, but it drives the point home. In earnest we have concerns about A.I. rather than fears about it.] of A.I. is unfounded, and another claims A.I. is a pointless exercise that we can do without.
Clinging to their beliefs, both sides are doomed to fail. On the one hand, we must realise that all technology has its place; the trouble starts when technology crosses certain lines. So A.I. has a place and we must accept it: rarely has blind opposition to global change succeeded in stopping it. At the same time, A.I. unchecked, like too much of anything, can be troublesome, and ardent supporters of A.I. need to accept that. This is step one: realising that there is a debate to be had here for our own collective good.
Reaping the benefits
Technology exists for one reason alone: efficiency. Our gadgets are the spiritual successors of Industrial Age machines, and we will do well to remind ourselves of that every time we pick one up. The threat, if you can call it that[^ Once again, this is a term I am opposed to. By this definition all tools are threats; but a tool is a threat only if you choose to make it one. However, the sheer use of this word drives a point home same as last time.], starts here. Even when technology is not artificially intelligent it has the unintended consequence of ruling our minds: picking up one's phone to do something and picking up one's phone to think of something to do are two wholly different ideas.
The consequence of technology taking on certain trivial tasks – especially certain repetitive ones[^ Which is why, despite screencasts and webinars and whatnot, scientists and teachers are not about to be replaced anytime soon.] – is that we can use it to make things unbiased and longer lasting. Now we enter the realm where we have to decide just where we want to employ these perks of technology. Use them everywhere and the loss of the human element can be disastrous; the instinctive reactions humans make can be the difference between making and breaking a situation. Exactly how worthwhile this tradeoff between precision and instinct is will have to be fodder for another debate.
There are two ways to look at the statement that technology makes our work more efficient: one, that we still do the work but use technology to make it quicker; or, two, that we no longer do the work and give it all up to technology. In the first case technology remains a tool, doing exactly what it is told; in the second, it replaces us.
A different world
The likelihood that A.I. will change our world is great and one that few have argued against. It is not whether but how it will change our world that prompts everyone to pick a side. Like machines throwing people out of work in the early to mid-1900s[^ I am always reminded of Charlie Chaplin’s Modern Times when I think of this.], A.I. will leave many unemployed. But then arises the question: where are those people now? Clearly, their children and grandchildren have found a life in different jobs, aspiring to be something besides factory workers. The ill effects of machines affected many but hardly wiped humanity out.
Artificial Intelligence is different. Once a machine begins to effectively teach itself there is no telling where the line gets drawn – or even whether a line gets drawn. What stops can be put in place to ensure that artificially intelligent technology does not come to view itself as superior to humans, as a species apart from humans? And all this keeping in mind that the whole purpose of A.I. is to ensure that gadgets can take certain decisions about and for humans, in our best interests, with minimum or no intervention from us. The only solution worth discussing, then, is how we can ‘humanise’ technology so that it identifies itself as siding with us rather than standing apart from us.
Three ways forward
The other face of such ‘humanised technology’ is gadgets that seem more approachable to their human ~~overlords~~ users. This requires us to understand that what fits one will never fit all, nor will a single size dynamically adapt to a changing, growing individual. What we need is technology whose core design, in hardware and software, takes into account the diversity of humankind and caters to everyone equally.
Alastair MacDonald published an interesting paper called ‘Humanising technology’ in Inclusive Design back in 2003, in which he beautifully stated the role design can play in making technology more human:
In the future we will have a population with a much more diverse profile of capabilities – physical, sensorial, and cognitive – than that of today, accompanied by different lifestyle patterns for work, leisure, living and social interaction, and with diverse socially and culturally induced needs and desires ... We are an inventive and adaptable biological species with deep socio-cultural and spiritual needs and desires that lives largely, in the developed world, in a ‘technosphere’, a synthesised artificial world of our own making. How do we reconcile our many individual corporeal, social and spiritual needs in facing the challenge of delivering ‘inclusive’ design in an era of rapid technological change which will also see profound changes in population demographics and lifestyles in the next twenty years?
That is the first approach, and it applies as much to the ‘smart’ technology we have today as it will to A.I. in the future. But this is just the start; there are two more approaches we can take. The second, which several clever blokes (Stephen Hawking and Elon Musk, for example) have called for already, is an international debate that can help drive a common set of policies across the globe and determine the precise direction we take A.I. in. After all, if we intend to build something as calculating as an independent-thinking, self-teaching robot, we had better be more precise and more calculating than it, or spend our last days reminiscing about Darwin – survival of the fittest, and in a technology-driven world A.I. will always have the upper hand.
The third approach is the weakest link in the chain: developers need to be conscientious and ensure that A.I. is not deliberately built to get out of hand. The Terminator series is no longer science fiction. One rogue programmer could potentially wreak havoc beyond our wildest imaginations. Naturally this can be curbed through policies and extreme policing of the people working directly to create A.I., but it will always remain a risk we have to take.
None of this is to say that I am against Artificial Intelligence. As a scientist I would enjoy having it for purely academic reasons as well as for the betterment of mankind. We can potentially use A.I. to reach more places and more people, to connect the world so we can move forward together, to help lift up the underdeveloped, and to improve people’s lives by making production cheaper so that food and aid and support can reach people who have so far been deprived of them. But the catch is that A.I. is a tool like any other, which means it is up to us how we use it. Like guns, say, we can use it to protect our borders and keep antisocial elements at bay, or we can use it to kill each other for no reason. The decision is ours – do we trust ourselves, or should we work to keep ourselves in check?