A single quote captures my feelings about this article rather succinctly:
"It's poorly informed, poorly supported by science and it is sensationalist," said Professor Owen Holland of the University of Essex.
The author is concerned with determining whom to blame should an autonomous military robot kill someone in error. The proper answer, of course, is the operator and/or the chain of command, as appropriate. Additionally, as with any piece of military hardware, the defense contractor should be investigated and similar models inspected and re-tested.
Robotic rights are also addressed in this article, and again, another quote from a professor is needed to steer the discussion in the right direction:
"The more pressing and serious problem is the extent to which society is prepared to trust autonomous robots and entrust others into the care of autonomous robots."
We have a long way to go to counter Hollywood's negative portrayal of robots and AI, and it seems to me that most reporters are interested only in playing off of and propagating these sentiments. Certainly there needs to be "informed debate" to give careful consideration to safeguards for human-machine interaction, but vacuous drivel such as this is not a good start.