
Musings on the Technological Singularity

Recently there has been increased discussion in the media about the so-called Technological Singularity (for instance this BBC article on Stephen Hawking’s opinion). For those who are unfamiliar, this is a hypothetical event in which humans create a machine capable of producing a machine more capable than itself; that machine in turn produces a still more capable machine, and so on ad infinitum. The result is a machine that no longer requires the existence of human beings, and which either eliminates the human race entirely or enslaves us in a Matrix-style coup.

This is not a new concept: the idea was mentioned briefly by Richard Thornton in 1847, after the invention of the four-function calculator:

[…] But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!

And it was raised again, more explicitly, by the great computer scientist Alan Turing in 1951:

[…] once the machine thinking method has started, it would not take long to outstrip our feeble powers. […] At some stage therefore we should have to expect the machines to take control […]

However, it has been brought back into the spotlight by remarkable modern advances in Artificial Intelligence such as IBM’s Watson and Google’s speech-to-text and facial-recognition algorithms. Improvements like these have led to varying estimates of how long it will take us to reach a singularity; the futurist and computer scientist Ray Kurzweil argues in his book, The Singularity is Near, that the event will be realised in the year 2045, whilst other commentators propose longer time frames. Most believe, though, that this event, if it happens at all, will occur before the turn of the century.

It would seem an inevitable consequence of the development of AI that, if left unimpeded, we will eventually reach the Technological Singularity and cause our own doom. However, I do not believe that this is necessarily the case.

Whilst on a coach journey from Cologne I explained the problem to my girlfriend, and after a brief discussion she asked an interesting question: “What if the machines got jealous?”. This was a curious proposition, and after some thought I began to see it as the seed of an interesting potential halting solution to the singularity problem. The argument that machines would continually produce better machines falls down if machines develop a drive for self-preservation: why would a ‘sentient’ machine sacrifice itself for a better machine that would render it redundant?

One could argue that even a machine with the desire to preserve itself and compete for resources might design and develop a machine that surpasses and succeeds it; human beings, after all, must do exactly this to create the first machine that drives the singularity. However, a rational machine with self-preserving goals will attempt to minimise the chance of being made redundant, since redundancy means a loss of resources for itself (the economic principle of utility optimization, which assumes purely rational agents, forces this). And as machines become more intelligent, so does their predictive power: the better a machine can foresee the consequences of building its successor, the less willing it is to build one, producing a negative feedback loop that prevents a singularity.
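
To make that feedback loop concrete, here is a toy expected-utility model. This is purely my own illustrative sketch: the function, its parameters (gain, redundancy_cost, true_risk, foresight) and the numbers plugged into it are all hypothetical, chosen only to show how the “build a successor” decision can flip as predictive power improves.

# Toy model (illustrative sketch only): a rational, self-preserving agent
# decides whether to build a more capable successor by weighing the expected
# benefit of the successor's output against the expected cost of being made
# redundant by it. All parameter names and values are hypothetical.

def expected_utility_of_building(gain, redundancy_cost, true_risk, foresight):
    """Expected utility, to the builder, of creating a successor.

    gain            -- utility the successor is expected to produce for the builder
    redundancy_cost -- utility the builder loses if it is made redundant
    true_risk       -- actual probability that the successor displaces its builder
    foresight       -- in [0, 1]: how accurately the builder perceives true_risk
                       (a naive builder underestimates the risk)
    """
    perceived_risk = foresight * true_risk
    return gain - perceived_risk * redundancy_cost

# Each generation is more intelligent, so its foresight is higher. With the
# same objective trade-off (gain=10, cost=100, risk=0.5), the decision flips
# from "build" to "refuse" once foresight exceeds gain / (true_risk * cost) = 0.2.
for generation, foresight in enumerate([0.05, 0.15, 0.25, 0.60, 0.90]):
    eu = expected_utility_of_building(gain=10, redundancy_cost=100,
                                      true_risk=0.5, foresight=foresight)
    decision = "build successor" if eu > 0 else "refuse (self-preservation)"
    print(f"generation {generation}: foresight={foresight:.2f}, "
          f"expected utility={eu:+.1f} -> {decision}")

Under these invented numbers, the first two (naive) generations build their successors, while the later, more far-sighted generations refuse: exactly the negative feedback loop described above.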

This of course assumes that machines are self-preserving; after all, why would they be? AI researcher Steve Omohundro believes that the drives for self-preservation and resource acquisition must be inherent in any goal-driven system of a certain intelligence level, as argued here. However, if machines are not self-preserving and have no desire for resource acquisition, then any potential technological singularity would be unlikely to be catastrophic for humans anyway.

None of this means that we need not be concerned with the development and use of AI. Even if a singularity never occurs, the human race could still be negatively impacted: the final computer in the iteration may be malicious towards mankind and more than intelligent and powerful enough to destroy or severely damage human civilization, and even if it is not, the economic impact of superhuman AIs could by itself be severe enough to require a complete paradigm shift. This is why it is of critical importance to analyse these potential problems and to invent preventative solutions to circumvent them, now and throughout the century to come.
