

Could humans be responsible for manufacturing their own enemy? If Avengers: Age of Ultron is to be believed, the artificial intelligence we create in robots could in turn be humanity’s undoing. In the Hollywood blockbuster, Ultron is programmed to believe that he must eradicate humanity. It’s the kind of science fiction that’s best served with popcorn, but it does raise concerns about the potential for autonomous killer robots to exist one day. Are we really ready for robots to make their own decisions?
Today, Defence Force personnel are trained to use armed drones – these flying robots are sophisticated enough to kill at the push of a button. Deakin University mechatronics expert Dr Ben Horan says you needn’t run screaming through the streets for fear of killer robots just yet, though. Uncontrollable Terminator-style technology won’t contribute to war efforts any time soon.
However, robots are fast becoming far more capable than ever before. Google-owned Boston Dynamics has developed eerily agile humanoid robots such as Atlas. With the US military pouring billions of dollars into research and development of these machines, it’s no wonder people hold a genuine fear that autonomous robots could one day turn on us. And when this sort of technology is owned by one of the world’s most ubiquitous companies, there’s no telling how far they could take it.
In a statement on its website, the Campaign to Stop Killer Robots, a coalition of non-government organisations campaigning against fully autonomous weapons, argues: ‘Giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology. Human control over any combat robot is essential to ensuring both humanitarian protection and effective legal control.’
In 2014, the coalition attended a United Nations meeting in Geneva, arguing that there were important legal and moral considerations to address before moving forward, and pushing for human control over such weapons. Not everyone agrees with its agenda, though. NBC News technology writer Keith Wagstaff said, ‘Hysteria over the robopocalypse could hold back technology that would save human lives.’
Around the world, navies and other military organisations have been using automated weapons for years. Michael Mortimer, a PhD student at Deakin University, says extensive advances in artificial intelligence would need to occur before we saw a true dawn of the killer robots. He argues that military robotics, like any technological advance, is intended to be used for good, and that killer robots inflicting mass destruction in the future is unlikely.
But he does admit that while most robotic research is conducted with the intention of assisting humans, any technology in the wrong hands can become dangerous. ‘Whatever tool you give people, how they use it comes down to human psyche.’
The Australian Army now operates unmanned aerial vehicles to provide surveillance and intelligence. Dr Horan says this is not a conspiracy. ‘In defence they’re looking at how to reduce the danger for Australian military personnel,’ he points out. Tools are also being developed to assist developing nations where, in some cases, people still probe for landmines with implements as basic as sticks. Dr Horan says this shows that the military isn’t the enemy. ‘Robots can go into hazardous locations and deactivate mines.’
But even if military robots are intended to be used for good, there is little doubt that, where wars are concerned, they could be put to ill-intended purposes.
DARPA’s advanced ARGUS-IS imaging system, a 1.8-gigapixel camera carried aloft by drones, can watch targets across a 25-square-kilometre stretch from 20,000 feet in the air – that’s like monitoring much of Manhattan at once through one lens. In the wrong hands, it’s not hard to imagine some worrying situations.
A better application for surveillance droids is in the field of search and rescue. In future, you can expect to see robots that look like BB-8 from the upcoming Star Wars film rolling into dangerous situations. The real-world purpose of such a robot would be to use its floating head and small spherical body to get into tight spaces and collect data for humans to interpret.
It’s hard to be completely sure of what large tech companies and military groups are working towards when developments are so closely guarded. For now, we’ll have to trust that there’s nothing to fear unless we’re given a reason to think otherwise. A lot of what we’re seeing in science fiction still remains just that, so it might be best to remain alert rather than alarmed. As aspects of the imagined futures we see in films begin to become reality, let’s hope that any artificially intelligent robots that do come to exist are trained to suppress any appetite for destruction.