Robots: Friend or Foe?
Could humans be responsible for manufacturing their own enemy? If Avengers: Age of Ultron is to be believed, the artificial intelligence we create in robots could in turn be humanity’s undoing. In the Hollywood blockbuster, Ultron is programmed to believe that he must eradicate humanity. It’s the kind of science fiction that’s best served with popcorn, but it does raise concerns about the potential for autonomous killer robots to exist one day. Are we really ready for robots to make their own decisions?
Today, Defence Force personnel are trained to use armed drones – flying robots sophisticated enough to kill at the push of a button. Deakin University mechatronics expert Dr Ben Horan says there’s no need to run screaming through the streets for fear of killer robots just yet, though. Uncontrollable Terminator-style technology won’t contribute to war efforts any time soon.
However, robots are fast becoming far more capable than ever before. Google-owned Boston Dynamics has developed the humanoid robots Atlas and Escher. With the US military pouring billions of dollars into research and development of these eerily agile machines, it’s no wonder people hold a genuine fear that autonomous robots could one day turn on us. When this sort of technology is owned by one of the world’s most ubiquitous companies, there’s no assured limit to how far it could be taken.
Indeed, there are already cases of robots causing fatal harm. In July 2015, a 22-year-old Volkswagen factory worker was killed after a robot crushed him. A spokesperson said human error was to blame, but the question of accountability remains open: reports indicate that prosecutors are considering whether criminal charges should be laid.
The international coalition Campaign to Stop Killer Robots is pre-emptively pushing to have an international ban placed on autonomous weapons.
In a statement on its website, the coalition argues, ‘Giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology. Human control over any combat robot is essential to ensuring both humanitarian protection and effective legal control.’
In 2014, the coalition was present at a United Nations meeting in Geneva, arguing that there were important legal and moral considerations to weigh moving forward and pushing for human control of weapons. Not everyone agrees with its agenda, though. NBC News technology writer Keith Wagstaff said, ‘Hysteria over the robopocalypse could hold back technology that would save human lives.’
Around the world, navies and other military organisations have been using automated weapons for years. Michael Mortimer, a PhD student at Deakin University, says extensive advances in artificial intelligence would need to occur before we saw a true dawn of the killer robots. He argues that military robotics, like any technological advance, is intended to be used for good, and that killer robots inflicting mass destruction in the future is unlikely.
But he does admit that while most robotic research is conducted with the intention of assisting humans, any technology in the wrong hands can become dangerous. ‘Whatever tool you give people, how they use it comes down to human psyche.’
There’s no need for Hollywood to over-dramatise the rise of drones. Semi-autonomous military aircraft are becoming increasingly prevalent, taking to the skies and battling through conflict on our behalf. In a time when robotic technology is fast becoming a war imperative, you’d be right to feel a bit uneasy about these things hovering above you. Among the most advanced is the US Air Force’s Global Hawk, which can fly and spy for up to 30 hours non-stop. However, Mortimer points out that drones like the Global Hawk ultimately have a human responsible for pressing the buttons. In fact, ‘they utilise a large number of operating staff to get it going’.
Dr Horan adds, ‘The processes surrounding the operation are far beyond what perhaps one fighter jet pilot would have. There are so many processes behind the scenes before any decisions are made.’
The Australian Army is now operating unmanned aerial vehicles to provide intelligence and surveillance. Dr Horan says this is not a conspiracy. ‘In defence they’re looking at how to reduce the danger for Australian military personnel,’ he points out. Tools are also being developed to assist developing nations where, in some cases, people use very basic implements like sticks to probe for landmines. Dr Horan says this shows that the military isn’t the enemy. ‘Robots can go into hazardous locations and deactivate mines.’
But even if military robots are intended to be used for good, there’s no doubt that, where wars are concerned, robots could be put to ill-intended purposes.
DARPA’s advanced ARGUS-IS, a 1.8-gigapixel camera system designed to be carried by drones, can watch targets across a 25-square-kilometre stretch from 20,000 feet in the air – that’s like monitoring all of Manhattan at once through one lens. In the wrong hands, it’s not hard to imagine some worrying situations.
In Australia, there are laws that protect a person’s privacy up to a point, but in many cases the technology is developing faster than the legislation. Despite that, Dr Horan says paranoia is not warranted. ‘There have been satellites with cameras in them for quite a while. People were worried about being watched through Google Glass,’ he says. Even cameras in mobile phones give civilians their own surveillance capabilities, if they choose to use them.
But that’s hardly comparable to the rise of mass surveillance. ‘We do need to be conscious of the ethical considerations surrounding the use of robots. It falls under the umbrella of ethical considerations we apply to all technologies,’ Dr Horan cautions.
A better application for surveillance droids is in the field of search and rescue. In future, you can expect to see robots that look like BB-8 from the upcoming Star Wars film rolling into dangerous situations. Its real-world purpose would be to use its floating head and small spherical body to get into tight spaces and collect data for humans to interpret.
It’s hard to be completely sure of what large tech companies and military groups are working towards when developments are so closely guarded. So for now, we’ll have to trust that there’s nothing to fear unless we’re given a reason to. A lot of what we’re seeing in science fiction still remains just that. So it might be best to remain alert rather than alarmed. As aspects of the imagined futures we see in films begin to become reality, let’s hope that any artificially intelligent robots that do come to exist are trained to suppress any appetite for destruction.
Tell us what you think
What will happen if we develop robots with full artificial intelligence in the future?
Dr Ben Horan
Course Director – Bachelor of Mechatronics Engineering, School of Engineering, Deakin University