Tuesday, April 24, 2007

Robo-FUD from the BBC

The BBC generally has much better articles about AI than its latest piece about the questions posed by autonomous robots. Plenty of misplaced fear-mongering about the coming dystopia with machines enslaving man, sprinkled with a bit of socialist concern for robots' rights to health care.

A single quote captures my feelings about this article rather succinctly:
"It's poorly informed, poorly supported by science and it is sensationalist," said Professor Owen Holland of the University of Essex.

The author is concerned with determining who to blame should an autonomous military robot kill someone in error. The proper answer, of course, would be the operator and/or chain of command, as appropriate. Additionally, just like any bit of military hardware, the defense contractor should also be investigated, and similar models inspected and re-tested.

Robot rights are also addressed in this article, and once again a quote from a professor is needed to steer the discussion in the right direction:
"The more pressing and serious problem is the extent to which society is prepared to trust autonomous robots and entrust others into the care of autonomous robots."

We have a long way to go to counter Hollywood's negative portrayal of robots & AI, and it seems to me that most reporters are only interested in playing off of and propagating these sentiments. Certainly, there needs to be "informed debate" in order to give careful consideration to safeguards for human-machine interaction, but vacuous drivel such as this is not a good start.

Wednesday, April 18, 2007

Ashes to Ashes...

Scottish engineers (alas, not Montgomery Scott) are speculating about tiny robotic "dust particles" that utilize wireless distributed computing and are capable of physically reconfiguring themselves. Consisting of multitudes of individual computer chips surrounded by morphic polymer sheaths, clouds of smart dust could fly in formation and change shape to meet environmental and mission requirements. These devices would form a swarm intelligence, and could be used to explore alien worlds if released into the atmospheres of extrasolar planets by interstellar space probes.

Talk of simulated swarms of 50 such particles able to shift into a star-shaped formation immediately reminded me of one futurist's concept of a starfish-like body built of nanorobots, designed for the sophisticated AI beings of the future. Absolutely wild.
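
For the curious, here's a toy sketch of what that reconfiguration might look like in simulation. Everything in it is my own guess at the mechanics (each speck simply drifts toward an assigned slot on a star-shaped template); only the particle count of 50 comes from the article:

import java.util.ArrayList;
import java.util.List;

public class SmartDustSwarm {

    static class Speck {
        double x, y;
        Speck(double x, double y) { this.x = x; this.y = y; }

        // Drift a small step toward a target slot -- a crude stand-in for a
        // polymer sheath changing shape to steer the chip.
        void stepToward(double tx, double ty, double speed) {
            double dx = tx - x, dy = ty - y;
            double dist = Math.hypot(dx, dy);
            if (dist > 1e-9) {
                x += speed * dx / dist;
                y += speed * dy / dist;
            }
        }
    }

    // Slot i on a five-pointed star outline: alternate outer and inner radii.
    static double[] starSlot(int i, int n) {
        double angle = 2 * Math.PI * i / n;
        double radius = (i % 2 == 0) ? 10.0 : 4.0;
        return new double[] { radius * Math.cos(angle), radius * Math.sin(angle) };
    }

    public static void main(String[] args) {
        int n = 50; // swarm size taken from the simulations in the article
        List<Speck> swarm = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            swarm.add(new Speck(20 * Math.random() - 10, 20 * Math.random() - 10));
        }
        // Repeatedly nudge every speck toward its slot until the cloud has
        // (approximately) assumed the star formation.
        for (int step = 0; step < 500; step++) {
            for (int i = 0; i < n; i++) {
                double[] slot = starSlot(i, n);
                swarm.get(i).stepToward(slot[0], slot[1], 0.1);
            }
        }
        System.out.printf("speck 0 settled at (%.2f, %.2f)%n",
                          swarm.get(0).x, swarm.get(0).y);
    }
}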

An Artificial Conscience for Artificial Warriors

The Economist has more on the U.S. military's attempt to build an ethical decision-making framework for autonomous weapon systems. This particular initiative seeks to map out all possible actions & outcomes in order to select the most appropriate or "ethical" behavior. It sounds a bit like Deep Blue's brute-force calculation of all possible moves & consequences; however, the decision space of a battlefield is slightly more complex than a chessboard...to say the least.
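
To make the Deep Blue comparison concrete, here's a minimal sketch of what "mapping out all possible actions & outcomes" might look like, assuming the planner can enumerate outcomes and attach a probability and an "ethical cost" to each -- which is, of course, precisely the hard part. The numbers and the scoring scheme are invented for illustration, and none of this reflects the military's actual framework:

import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class EthicalGovernor {

    static class Outcome {
        final double probability;  // chance this outcome follows the action
        final double ethicalCost;  // made-up "badness" score for the outcome
        Outcome(double probability, double ethicalCost) {
            this.probability = probability;
            this.ethicalCost = ethicalCost;
        }
    }

    // Expected ethical cost of an action: sum of p * cost over its outcomes.
    static double expectedCost(List<Outcome> outcomes) {
        return outcomes.stream()
                       .mapToDouble(o -> o.probability * o.ethicalCost)
                       .sum();
    }

    // Brute force, Deep Blue style: score every candidate action and pick
    // the one with the lowest expected ethical cost.
    static String chooseAction(Map<String, List<Outcome>> actionSpace) {
        return actionSpace.entrySet().stream()
                .min(Comparator.comparingDouble(e -> expectedCost(e.getValue())))
                .map(Map.Entry::getKey)
                .orElse("hold fire");
    }

    public static void main(String[] args) {
        Map<String, List<Outcome>> actions = Map.of(
                "engage target", List.of(new Outcome(0.9, 1.0),    // threat removed
                                         new Outcome(0.1, 100.0)), // collateral damage
                "hold fire",     List.of(new Outcome(1.0, 5.0)));  // threat persists
        System.out.println("least-cost action: " + chooseAction(actions));
    }
}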

Sunday, April 15, 2007

SourceForge.net Site Maintenance

I just completed a good scrub of the SourceForge.net project websites for JWords and AutoSummary. I removed most of the unused features, uploaded & organized documentation, and configured the SF.net web hosting service. No new file releases are available (or pending), but the SourceForge.net page and documentation should make JWords & AutoSummary much more accessible and easier to use.

I uploaded Javadoc source code documentation to SourceForge for both projects. Additionally, I created step-by-step instructions for getting each program up and running, which should be easy to follow even for the non-programmer. And finally, I posted more extensive project descriptions to the respective documentation pages.

The updated SourceForge.net project websites can be viewed at:
JWords Java Interface for WordNet
AutoSummary Semantic Analysis Engine

My SourceForge.net developer profile can be found at:
http://sourceforge.net/users/greenbacker/

Saturday, April 14, 2007

Let the machines target machines - not people?

Not to be outdone by its allies, the US military has decided to draft some proposals regarding the proper use of robots and other autonomous systems in combat. The basic idea of the guidelines suggested by the Naval Surface Warfare Center is that machines should be allowed to freely engage other unmanned systems and weapons, but must require human interaction for permission to fire on enemy personnel or manned systems. The researchers point to precedents for this concept in the automatic engagement protocols of the Patriot missile battery, the Aegis "Auto Special" hands-off mode, and the Phalanx ship defense system.
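
For concreteness, the proposed rule reduces to a simple decision table. The sketch below is my reading of the guidelines, not anything published by the NSWC -- and note how everything hinges on the target classification being correct in the first place:

public class EngagementRules {

    enum TargetType { UNMANNED_SYSTEM, MANNED_SYSTEM, PERSONNEL, UNKNOWN }

    // Machines may freely engage other machines...
    static boolean mayEngageAutonomously(TargetType target) {
        return target == TargetType.UNMANNED_SYSTEM;
    }

    // ...but a human must authorize fire on people or manned systems, and
    // anything unidentified ought to default to the stricter rule.
    static boolean requiresHumanConsent(TargetType target) {
        return !mayEngageAutonomously(target);
    }

    public static void main(String[] args) {
        for (TargetType target : TargetType.values()) {
            System.out.printf("%-15s autonomous=%-5b human consent=%b%n",
                    target, mayEngageAutonomously(target), requiresHumanConsent(target));
        }
    }
}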

An interesting suggestion, but it puts an awful lot of reliance on the autonomous systems' ability to discriminate and positively identify the enemy while leaving allies unharmed. Many video game developers can't even get simple collision detection working. So while this may be an easier pill to swallow for those who criticize the "de-humanization" of warfare (giving lethal decision authority to a machine), those folks will be the loudest to complain as soon as the first error leads to a friendly fire incident. And, of course, all bets are off as soon as we face a symmetric adversary whose systems are not bound by these same guidelines. Overly restrictive rules of engagement created by lawyers and politicians in the "ivory tower" can be quite the disadvantage for soldiers on the front lines in a shooting war.

(via Slashdot)

Laws to Govern Robots and AI

Several nations are proposing early legislation aimed at protecting the rights of humans and robots should AI researchers successfully produce sentient, conscious machines.

One paper in a future-looking series commissioned by the UK Office of Science and Innovation's Horizon Scanning Centre argues that developments over the next 50 years could lead to robots demanding the same rights as human beings. The study suggests that machines might be granted both the rights & responsibilities of all Her Majesty's subjects, including voting, taxes, and military service.

South Korea is drafting an ethical code to regulate interaction between humans and robots. The rules are expected to set guidelines for robotic surgery, household servants, and human-robot "relationships." (via Slashdot)

The Government of Japan is also following suit, although its efforts appear to be mired in committee gibberish and red tape. Fears of conflict between man and machine, and civil servants seeking to avoid liability, seem to be the driving forces behind the draft proposal. Unlike the straightforward language of Asimov's Three Laws of Robotics, the Japanese rulebook contains wonderful bureaucratic poetry such as:
Risk shall be defined as a combination of the occurrence rate of danger and the actual level of danger. Risk estimation involves estimating the potential level of danger and evaluating the potential sources of danger. Therefore total risk is defined as the danger of use of robots and potential sources of danger.

Sounds like bloatware to me...
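
For what it's worth, strip away the verbiage and the draft's definition appears to reduce to the textbook formula: risk = likelihood * severity. A two-method rendering of my reading (not the ministry's actual model):

public class RobotRiskEstimate {

    // "a combination of the occurrence rate of danger and the actual level
    // of danger" -- i.e., probability times severity.
    static double risk(double occurrenceRate, double dangerLevel) {
        return occurrenceRate * dangerLevel;
    }

    public static void main(String[] args) {
        // Hypothetical household robot: a rare (0.1%) but severe (9 of 10)
        // failure mode versus a common (5%) but trivial (1 of 10) one.
        System.out.printf("rare but severe:  %.3f%n", risk(0.001, 9.0));
        System.out.printf("common but minor: %.3f%n", risk(0.05, 1.0));
    }
}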

Monday, April 2, 2007

Google Speaks 12 Languages

I just read an interesting article about the great success Google has been having using statistical machine translation for automatic translation of foreign language documents, as opposed to the rule- & grammar-based approaches used before.
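
The core trick, as I understand it, is the classic noisy-channel formulation: pick the candidate translation e that maximizes P(e) * P(f|e), where the language model P(e) measures fluency and the translation model P(f|e) is learned from mountains of parallel text. Here's a cartoon version with entirely made-up numbers; Google's real system is vastly more sophisticated:

import java.util.Map;

public class NoisyChannelToy {

    // P(e): fluency of each English candidate (a language model score).
    static final Map<String, Double> LANGUAGE_MODEL = Map.of(
            "the white house", 0.0200,
            "the house white", 0.0001);

    // P(f|e): likelihood of the foreign phrase given each candidate
    // (a translation model learned from aligned parallel text).
    static final Map<String, Double> TRANSLATION_MODEL = Map.of(
            "the white house", 0.30,
            "the house white", 0.45);

    public static void main(String[] args) {
        // argmax over candidates e of P(e) * P(f|e): the fluent word order
        // wins even though the literal ordering scores higher on P(f|e).
        String best = null;
        double bestScore = -1.0;
        for (String candidate : LANGUAGE_MODEL.keySet()) {
            double score = LANGUAGE_MODEL.get(candidate)
                         * TRANSLATION_MODEL.get(candidate);
            if (score > bestScore) { bestScore = score; best = candidate; }
        }
        System.out.println("best translation of 'la maison blanche': " + best);
    }
}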

I, for one, am very much persuaded by the idea that human language is so complex (we don't even fully understand it, just ask a linguist!) that it can never be fully hard-coded into a machine, but rather, a machine must "learn" it on its own for the most part (we'll guide it and help it where needed). I like the approach that Google is taking, but without a conceptual framework or knowledge representation model, the system really isn't "understanding" or "comprehending" anything--it's just doing a "dumb" translation using statistical references. Still quite an accomplishment, but entirely different from my ultimate objective.