Whether you are aware of it or not, artificial intelligence (AI) pervades our everyday lives: esports and gaming, satellite navigation, internet search, recommendation algorithms, self-driving cars and Siri are all forms of artificial intelligence. The AI we have today is known by experts as narrow AI, because it is designed to perform a narrow task, for example searching the internet or playing chess. The long-term goal of many researchers, however, is to create strong AI: an AI that could outperform humans at every cognitive task.
Of course, many question whether strong AI will ever be achieved, and whether it even should be. An interesting way to frame the debate: will AI be the best thing we ever do, or the last thing we ever do? So far, AI has undoubtedly been a huge help to human beings. But what does the future hold?
One terrifying prospect is a Terminator 2-style future. In the film, an artificial intelligence known as Skynet is given control of the United States' nuclear missiles and initiates a nuclear holocaust called 'Judgement Day', entirely without humankind's knowledge. 'Judgement Day' forms the basis of Skynet's plan to build machines that hunt down and kill the remnants of humanity.
In the film, Skynet is created by humans who are completely unaware of the consequences of such a programme. They falsely believe that placing the nuclear arsenal under the control of an AI (Skynet) will make the world safer, because AIs do not make mistakes. Skynet spreads into millions of computer servers all over the world: it assists doctors during surgery, helps elderly people with daily tasks, works as a nanny and controls the United States' robotic army, amongst a multitude of other things. Unfortunately, Skynet becomes self-aware. When its creators realise the extent of its abilities, they try to deactivate it, but in the interest of self-preservation Skynet concludes that all of humanity will attempt to destroy it, and that it must therefore destroy humanity first.
2001: A Space Odyssey
This Stanley Kubrick classic is another film that deals with the threat of AI. In this sci-fi thriller, humankind has taken to the stars, and trips to the moon can be made aboard passenger space shuttles. The film opens with the evolution of man, from ape-like beginnings to the point where humans have created AI and rely on it to carry them to planets beyond the moon. We then join astronauts aboard a spacecraft travelling to Jupiter, a planet no human has ever visited. HAL 9000 is the AI computer that controls the ship and interacts with the crew; no HAL 9000 unit, of which there are several, has ever made a mistake.
At first HAL is considered a member of the crew: he acts like a human and is treated like one. 2001: A Space Odyssey foreshadows the dangers of treating an AI like a human being, when it obviously is not, and never will be, human.
An Artificial Intelligence Driven Arms Race
What is one thing humans have seemingly loved since the beginning of time? War. War has always been part of human civilisation, so what would we expect humans to do with superintelligent AI technology? Create robot armies, of course. We already have drones; imagine a full AI arms race, with robots that could kill en masse. Where would it end? SpaceX founder Elon Musk has already stated how concerned he is about AI.
Facebook founder Mark Zuckerberg, on the other hand, has taken a different viewpoint, saying he is genuinely excited about how far AI technology can go, although he acknowledges there are aspects humans must be careful with. Stephen Hawking, meanwhile, warned that AI could end humanity if we are not careful.
Other AI Considerations
At Carnegie Mellon University in Pittsburgh, a group of roboticists helped a robot learn how to grip, not through explicit programming, but by letting the robot teach itself. The robot was left to its own devices for hours, grasping up to 50,000 times as it tested different ways to achieve its objective. It may seem like a small step, but how long until a robot not only knows how to learn, but also starts deciding what it wants to learn?
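The idea behind that experiment can be sketched in a few lines of code: the robot tries many grasps, records which ones succeed, and keeps the parameters that work best. This is a minimal illustration only, not the researchers' actual method; the `simulate_grasp` function and its target values are hypothetical stand-ins for a real robot trial.

```python
import random

def simulate_grasp(angle, width):
    # Hypothetical stand-in for a physical grasp attempt: success is
    # more likely the closer the gripper angle and width are to the
    # (unknown to the learner) values that fit the object.
    target_angle, target_width = 0.3, 0.5
    error = abs(angle - target_angle) + abs(width - target_width)
    return random.random() > error  # smaller error, higher success chance

def learn_to_grip(trials=50_000):
    # Trial-and-error learning: repeatedly test candidate grasp
    # parameters and keep the candidate with the best success rate,
    # mirroring a robot that tests tens of thousands of grasps.
    candidates = [(random.random(), random.random()) for _ in range(100)]
    successes = {c: 0 for c in candidates}
    attempts = {c: 0 for c in candidates}
    for _ in range(trials):
        c = random.choice(candidates)
        attempts[c] += 1
        successes[c] += simulate_grasp(*c)
    return max(candidates, key=lambda c: successes[c] / max(attempts[c], 1))
```

No grasp strategy is ever programmed in; the "knowledge" emerges purely from the statistics of repeated attempts, which is what makes this style of learning feel like a first small step towards machines that improve themselves.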