In 1942, science fiction writer Isaac Asimov imagined a world of robots and artificial intelligence that catered to our every need. Asimov even theorised laws these robots should live by so that they would continue to serve and protect us:
• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey orders given to it by human beings except where such orders would conflict with the first law.
• A robot must protect its own existence as long as such protection does not conflict with the first or second law.
(I, Robot by Isaac Asimov, published 1950.)
Here in 2017, the world of Asimov’s imagination has truly arrived: conversations with Siri, Alexa and Google Assistant are now a daily occurrence. Now more than ever, we should be asking whether these theorised laws of robotics stand the test of time.
Asimov’s own writings show the perils of these laws: the book I, Robot and its later film adaptation present the enslavement and control of the human race as a potential and logical conclusion.
A joint report by the British Academy and the Royal Society presents another theory: that artificial intelligences need only be governed by one law — humans must flourish.
This overarching principle is complemented by four high-level principles:
• Protect individual and collective rights and interests.
• Ensure that trade-offs affected by data management and data use are made transparently, accountably and inclusively.
• Seek out good practices and learn from success and failure.
• Enhance existing democratic governance.
(The Amazon Echo Dot uses the Alexa AI Assistant)
These ‘rules to live by’ for AI should allow us to maintain control over the growing number of artificial intelligences for our benefit, rather than the other way around.
While these recommendations, if implemented, may hold off the robot uprising for now, there are still unanswered questions about our growing relationship with artificial intelligence yet to be debated.
Imagine you’re behind the wheel of a self-driving car when the brakes fail. As you speed toward a crowded crossing, the AI is confronted with an impossible choice: drive straight on and run over a group of pedestrians, or veer off the road into a potentially fatal crash? Is it the AI’s responsibility to prioritise your life or the lives of the pedestrians?