AI: Puppets or Puppeteers?

AI – Artificial Intelligence – conjures up many emotions and motivations: wonder, fear, ambition, competition, vainglory, greed, hope. What is it really? Is it changing too quickly to define? When we talk of AI, it seems we understand different things. There is the sci-fi aspect: Rosie, the Jetsons’ goodhearted home helper, whose autonomous decision-making and fast responses save the domestic day, versus Doctor Who’s evil Cybermen, robotic soldiers with steely resolve, incapable of autonomous thought, whose metallic responses are powered, in a now very retro way, by commandeered human brains. Then there is the very current commercial application: man-made deep learning neural networks that power the advertising powerhouses of Facebook, Microsoft, Uber and Google – glorified number-crunching processes we have all interfaced with every time an ad on Facebook or Google has, funnily enough, espoused the desirability of the very product or service we researched last month. And then there is the nebulous space in between: research, where the limitless horizons of science fiction are the endgame.

When Google’s AI program AlphaGo beat its human opponent at the ancient Asian board game Go, it wasn’t a case of technology simply streamlining itself to play a more difficult version of chess. For AlphaGo to win at this game, it had to play against the logic of winning. It had to learn that its opponent was playing to a cultural norm, and that by playing an unexpected move it gained the psychological high ground and won. Did this signal the beginning of autonomous thought by a machine? Did it mark the first seed of free will? Or was it programmed to collect data on its opponent’s moves – say, the frequency of a style of logic – and assess the likelihood of the opponent playing against this style?
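The unromantic, purely statistical reading above – tally how often an opponent plays to a given style, then bet against it – can be sketched in a few lines. The style labels and the move history here are invented for illustration; this is not how AlphaGo actually works, just the kind of frequency-counting the paragraph speculates about:

```python
from collections import Counter

def predict_style(observed_moves):
    """Guess an opponent's dominant style from the frequency of moves seen so far.

    observed_moves: list of style labels, one per move.
    Returns the most common style, or None if nothing has been observed.
    """
    counts = Counter(observed_moves)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

def counter_move(observed_moves):
    """Play against the expected style: if the opponent is mostly orthodox,
    answer with an unexpected (unorthodox) move, and vice versa."""
    expected = predict_style(observed_moves)
    if expected == "orthodox":
        return "unorthodox"
    return "orthodox"

# An opponent who has played to the cultural norm four times out of five:
history = ["orthodox", "orthodox", "unorthodox", "orthodox", "orthodox"]
```

No psychology, no free will: just a counter and a rule. The open question in the text is whether anything a machine does is ever more than this.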


Free will and breath: the common denominators of intelligent life – or are they? Just how intelligent can computers become? Will they ever be able to make autonomous decisions and moral judgements, and act on them? Like Pinocchio, will they ever be able to transcend the limitations of the materials from which they are made, and breathe?

Whatever our opinion on the matter, AI is coming, and in some ways is already here. But we are told that we can make our voices heard. The Montreal Responsible AI Declaration is a survey of opinions covering a series of issues that can be thought to determine, or impact on, personal liberty. Through the survey, the University of Montreal hopes to gather opinions to guide it in writing a protocol for AI researchers and developers to abide by.

It is requesting your say until 31 March 2018.

Questions fall under the categories of well-being, autonomy, justice, privacy, knowledge, democracy and responsibility. Some of the questions are very specific and confronting, e.g., “Is it acceptable for an autonomous weapon to kill a human?”, while others are so general that they are difficult to pin to a clear response on a first read, e.g., “How can AI contribute to personal well-being?”

The types of questions posed highlight concerns that may not immediately cross the mind of the uninitiated. “Must we fight against the phenomenon of attention-seeking which has accompanied advances in AI?” This is a question of personal vanity and advancement weighed against, perhaps, the greater good. Should machine learning be helped to advance when the developer doesn’t know exactly what the machine will be capable of – simply so the developer can show off, or sell or publish, his or her work?

Or: “Must we fight against the concentration of power and wealth in the hands of a small number of AI companies?” At the moment, machine learning, the backbone of AI, needs a lot of data to operate – amounts so large that few companies are able to collect and manipulate it: companies like Google, Facebook and Uber. These companies are not only data rich but money rich. What is to guarantee that the profit motive won’t weigh heavier than an altruistic world view in their decisions about what to develop and how to use it?

There are questions that relate to freedom of speech, such as how to minimise the dissemination of fake news or misinformation. This one is a little unclear. Are they referring to fake news and misinformation about advancements in AI research and development, or to fake news in general? Fake news takes hold of the imagination because it answers an anxiety or fulfils some kind of need, be it curiosity or a dearth of answers. What it does do, at its best, is inspire discussion and research.

Another question relating to freedom of speech, and to freedom of the individual in general, is: “Should research results on AI, whether positive or negative, be made available and accessible?” Because AI has the potential to affect the way we all live, the way we earn a living and the world our children will inhabit, AI research should, in my opinion, be accessible to all. What must be kept in mind, though, is that not all countries foster the same level of individual freedom, and not all private multinational corporations would be open to sharing advances that give their marketing strategies an edge. Such companies may have no qualms about using published research while withholding their own advances, or declining to make transparent the algorithms that form their AI’s internal decision-making processes. And AI development is viewed with such trepidation by some that publishing adverse results in behaviours or outcomes may stymie further funding for development in that area.

“How should we react when faced with AI’s predictable consequences on the labour market?” This question brings bias into consideration. I recently asked a programmer whether he thought AI development was a good or a bad thing. His immediate response was that it was a good thing: it will take away all of the mundane jobs, and only the creative ones will remain. His bias was talking. He is an educated, well-paid individual in IT. The kinds of jobs that would engage him would be beyond the reach of many people in the community. A repetitive job, or one that requires little decision-making beyond following a simple routine, would not only bore him but take away some of his pride. To many in the community, however, being able to perform simple tasks repetitively and earn money for them is a source of both self-esteem and income – consider people with mental and/or physical disabilities.

AI can impact the labour market not only through the jobs robotic machines could replace; if placed in charge of hiring, it can also impact who gets the job. Arguments have been raised that the personal biases of AI’s programmers have been, and may continue to be, reflected in its outcomes.

The age-old question about original sin and who was more culpable – the snake that gave the knowledge of sin, the woman whose curiosity passed on the knowledge, or the man who used it – surfaces in the question, “Can an artificial agent, such as Tay, Microsoft’s ‘racist’ chatbot, be morally culpable and responsible?” When I read this, I had to ask: how culpable was the team that wrote the program and fed the chatbot the data it used? Should they have placed a censor on the chatbot, effectively restricting its intake of certain words, images or phrases? And how would that have impacted its learning?
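A censor of the kind imagined here – a crude filter that drops training inputs containing forbidden terms before the bot learns from them – might look something like the sketch below. The blocklist entries are placeholders for illustration; Microsoft’s actual safeguards (or lack of them) are not public:

```python
# Placeholder terms standing in for a curated list of slurs and abuse.
BLOCKLIST = {"badword1", "badword2"}

def is_acceptable(message, blocklist=BLOCKLIST):
    """Return True if the message contains no blocked word."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return words.isdisjoint(blocklist)

def filter_training_data(messages, blocklist=BLOCKLIST):
    """Keep only messages that pass the censor before the bot learns from them."""
    return [m for m in messages if is_acceptable(m, blocklist)]
```

The trade-off raised above is visible even in this toy: every discarded message is also discarded context, so the stricter the blocklist, the less data there is to learn from – and a word-level filter says nothing about hateful ideas expressed in innocuous words.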

What if an AI’s behaviour were morally reprehensible and dangerous? E.g., an AI placed in charge of an abattoir chooses to slaughter not only cows but any four-limbed creature that inhabits the yard. Who would be to blame: the AI trying to exceed its quota, or the programming team that failed to impress upon their creation the idea of limits, or the ability to discern the difference between a Shetland pony and a cow?

For me, the most important question asked is: “Must AI research and its application, at the institutional level, be controlled?” Here I have to ask what sort of institutions are being referred to, and how any such control would be policed. What if the institution were a country manipulating its census data to feed an AI application?

In my ideal future, AI would be used to do the tasks that are beyond our physical reach: because they are too small, as in genetic manipulation in medicine; because they are too distant, as in space exploration, navigating the Kuiper Belt and beyond; or because of their enormity and the immediacy of their need, like solving environmental catastrophes – and to spare us physical danger or risk.

AI is fascinating and exciting to me, but I believe it should also be reined in. It should serve humanity, to the betterment of humankind and of our planet. It shouldn’t be replacing mundane jobs. It shouldn’t be aimed at increasing our leisure time – don’t we have enough? Who will work in the end? It should be gathering data and leaving the processing of that data to us. While it doesn’t have a conscience, it should leave the decision-making to us who do. It should be out there exploring, advancing medicine, studying clouds and global cooling efforts, and generally opening new vistas. Ai!
