Tuesday, September 10, 2013

Will robots be capable of evil?



The burning question from Adam Frank at Cosmos & Culture:

Can You Trust A Robot? Let's Find Out
When they come — and they are coming — will the robots we deploy into human culture be capable of evil? Well, perhaps "evil" is too strong a word. Will they be capable of inflicting harm on human beings in ways that go beyond their programming? 
While this may seem like a question for the next installment of The Terminator franchise (or The Matrix or whatever, pick your favorite), it's a serious question in robotics and it's being taken up by researchers now. 
Yes, it is a bit early to worry about robots planning to take over the world and enslave their former masters. That would require the development of artificial intelligence (AI) in machines (a milestone which always seems to be about 20 years away, no matter when the question is asked). But it is not too early to ask about the safety of robot-human collaborations, which are already happening on a small scale in areas like manufacturing and health care. And that is why a team of scientists in England has started the Trustworthy Robotic Assistant project. 
The goal of the project is to understand not only whether robots can make safe moves in their interactions with humans, but also whether they can knowingly or deliberately make "unsafe moves." 
Without AI, robots are, of course, just slaves to their own programming. But given the complexity of those programs, along with the requirement to interact with sometimes-unpredictable, non-artificial intelligences (i.e., you and me), "trust" in working with robots has become an operative concept. Can we safely rely on the robots we will be working with? 
As the project's website RoboSafe.org puts it:

The development of robotic assistants is being held back by the lack of a coherent and credible safety framework. Consequently, robotic assistant applications are confined either to research labs or, in practice, to scenarios where physical interaction with humans is purposely limited, e.g., surveillance, transport or entertainment. 
The Trustworthy Robotic Assistant research program's ultimate goal is to get robots out of these cloistered environments so that they can be put to good use out in the world — among us. To make that leap, researchers need to understand what limits both the realities and perceptions of robot behavior. 
As Professor Michael Fisher of Liverpool University put it:

The assessment of robotic trustworthiness has many facets, from the safety analysis of robot behaviors, through physical reliability of interactions, to human perceptions of such safe operation. 
It will be interesting to follow the project's progress since their results may very well shape the fine-grained texture of our potentially robot-saturated lives a few decades in the future. 
At this point it's worth reminding everyone of The Three Laws of Robotics so presciently set down by Isaac Asimov more than 70 years ago: 
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 
Let's hope so.
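
Read as an engineering spec rather than as fiction, the Laws amount to a strict precedence ordering over candidate actions. Here is a toy sketch of that ordering in Python; every predicate on the hypothetical Action type is a placeholder I made up, since no real robot can evaluate anything like "harms a human":

    from dataclasses import dataclass

    @dataclass
    class Action:
        # Hypothetical predicates; no real robot can evaluate these.
        harms_human: bool = False
        allows_harm_by_inaction: bool = False
        disobeys_human_order: bool = False
        endangers_self: bool = False

    def permitted(a: Action) -> bool:
        # First Law outranks everything: no harm by act or by omission.
        if a.harms_human or a.allows_harm_by_inaction:
            return False
        # Second Law: obedience. Any order that conflicted with the
        # First Law was already rejected above.
        if a.disobeys_human_order:
            return False
        # Third Law: self-preservation ranks last.
        if a.endangers_self:
            return False
        return True

    # The naive filter already mishandles the precedence: an *ordered*
    # self-sacrifice should pass (Second Law outranks Third), but it is
    # rejected here.
    print(permitted(Action(endangers_self=True)))  # False

Even this toy version gets the precedence wrong at the edges, which hints at why a "coherent and credible safety framework" is harder to build than the Laws make it sound.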

Can robots do evil?

Evil is a disorder of the will. The will is a power of the rational soul, which is possessed in the natural world only by man. Machines are artifacts, not natural substances, so they have no soul, rational or otherwise, and no will. 

Machines can of course be instruments of human evil. They can also be instruments of human error. They can also exhibit agency as a result of chaos, or accident, or mechanical dysfunction, or as a result of dynamics that were not predicted by the designers. All of that will be enough to keep robot designers busy for the foreseeable future. 

But evil, primary evil that is a disorder of the will, is not a power of machines, which are artifacts without rational souls.

So, no. Robots will not be capable of evil.

However, people who speculate about evil robots will continue to be capable of silliness. 

5 comments:

  1. Robots already kill. Asimov's laws were trashed before they were ever implemented.
    A drone is a robot, if not an android. It is the (human) programming that is capable of evil. It is their masters we must worry about.
    We now have drones that are run by an AI that selects targets to kill. Targets and 'acceptable collateral losses'. There are android-like robots in development that not only act autonomously in theatre, but use 'organic' material to refuel. In plain English they are being designed with an appetite for flesh.
    So the real question in my mind is: Can robots be designed to function in an evil fashion by madmen? The answer is most certainly YES!
    Can robots commit acts of evil on behalf of their masters? Ask a Pakistani or Yemeni whose family was killed in order to hit a high value target.

  2. Adm. G Boggs, Glenbeckistan Navy (September 10, 2013 at 8:22 AM)

    When the public thinks of robots, they usually think of something like either WALL-E or Terminator. In fact, they should be thinking of something more like a Roomba. Apparently, only the Japanese have a yen to copulate with robots. Not sure "what's up with that", as it were.

    Anyway, C-Rex is correct about the push for autonomous military robots. Here's the deal: it's just signal processing layered over a Bayesian detection and recognition algorithm and a gun. But the downside is an unavoidable false alarm rate. You can push the false alarm rate around by manipulating the bias, but when the signal and noise distributions overlap, the receiver operating characteristic demands that a zero false alarm rate yields a zero correct detection rate. Likewise, a p = 1.0 correct detection rate comes with a p = 1.0 false alarm rate. It's not evil, it's just mathematics.
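
    To see the trade-off in numbers, here's a minimal sketch that sweeps the decision threshold (the bias) over two overlapping Gaussian score distributions; the means are made up for illustration, not taken from any real targeting system:

        import math

        def normal_cdf(x, mean, sd):
            # Standard normal CDF via the error function.
            return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

        # Assumed score distributions: noise ~ N(0, 1), target ~ N(2, 1).
        NOISE_MEAN, TARGET_MEAN, SD = 0.0, 2.0, 1.0

        print("threshold  P(detect)  P(false alarm)")
        for t in (-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0):
            # Declare "target" whenever the score exceeds the threshold.
            p_d = 1.0 - normal_cdf(t, TARGET_MEAN, SD)
            p_fa = 1.0 - normal_cdf(t, NOISE_MEAN, SD)
            print(f"{t:9.1f}  {p_d:9.3f}  {p_fa:14.3f}")

    Raising the threshold drives the false alarm rate toward zero, but the detection rate follows it down; lowering it buys detections at the price of false alarms. That's the receiver operating characteristic in action.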

    The evil lies in the hearts of people who will turn machines loose on other people.

  3. What we really need is a Turing Test for Evil. No computer has actually passed the original Turing test yet, so before the TTfE can be applied we first need a computer that is sentient; then we'd be able to presume that any evil it does is deliberate, instead of inadvertent, as with the Windows operating system.

    A robot, just because of its limited computing capacity, is extremely unlikely ever to be sentient unless there's some enormous breakthrough in computer technology. So for a robot to ever do deliberate evil not programmed by a human (deliberately or inadvertently), it would need to be controlled by a sentient computer.

    It's a circular argument that robots aren't capable of doing deliberate evil because they don't have souls or free will. Robots can't currently do deliberate evil because they aren't sentient. If they ever become sentient, then we can discuss whether they have 'souls' or 'free will'.

    Replies
    1. Adm. G Boggs, Glenbeckistan Navy (September 10, 2013 at 7:28 PM)

      Well, ELIZA and PARRY came pretty close to passing the Turing test. Of course, they were just dumb language robots: ELIZA aping the Rogerian psychotherapist it was programmed to emulate, and PARRY the paranoid patient.
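
      The whole Rogerian trick fits in a few lines. A drastically reduced ELIZA-style sketch (my own illustration; nothing to do with Weizenbaum's actual script):

        import re

        # Match a pattern, reflect the pronouns, bounce it back as a question.
        REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
        RULES = [
            (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
            (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
            (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
        ]

        def reflect(text):
            # Swap first-person words for second-person ones.
            return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

        def respond(line):
            for pattern, template in RULES:
                m = pattern.search(line)
                if m:
                    return template.format(reflect(m.group(1)))
            return "Please go on."  # the all-purpose Rogerian fallback

        print(respond("I feel my robot hates me"))
        # -> Why do you feel your robot hates you?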

      But, IMO at least, Searle put the nail in the Strong AI coffin with his Chinese Room Gedankenexperiment.

      Transhumanist loons like Ray Kurzweil who blather on about the imminent Singularity Event (a materialist Parousia) are going to look as pathetic as the Japanese did when their imbecilic Fifth Generation Computing Project fizzled out like a wet fart...

      Materialists are particularly susceptible to this disease, given their tendency to view human cognitive processes as "computing".

  4. Robots play a significant role in surgery by increasing the precision and accuracy of tool motion during an operation. Robot cognition is good, too. Not only this, but the assistance of a medical robot leads to improved surgical outcomes and reduced trauma for patients.
