Monday, April 30, 2007

Emotion a Prerequisite for Thought?

A group of scholars gathered at Harvard to celebrate the 50th anniversary of the “cognitive revolution” is promoting the idea that understanding human emotion is absolutely essential to understanding human thought. The scholars go on to argue that the reason all attempts to recreate human thought in AI have failed up to this point is that AI researchers have ignored the role played by emotion in intelligence. While it is true that emotion is central to human thought, it’s awfully anthropocentric to claim the human model as the only path to intelligence.

Although minds and computers may not be completely analogous, to say that emotion is the missing link to artificial general intelligence seems rather naïve and simplistic. There is much work to be done before we can even come close to developing strong AI, and it is quite easy to imagine thinking machines completely devoid of emotion. Intentionality may well be integral to any intelligent system, and emotion may be deep-seated in human intentionality, but any number of goal-oriented schemes could be devised to replace emotion as the driving force behind an intelligent machine.
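To make the point concrete, here’s a minimal sketch of one such scheme in Java. Everything below is my own invention for illustration, not anyone’s actual architecture: explicit utility scores over goals stand in for emotion as the machine’s “motivation”:

import java.util.Comparator;
import java.util.List;

// A minimal sketch of a goal-driven agent: utility scores stand in for emotion.
interface Goal {
    double utility(WorldState state);    // how well a state satisfies this goal
}

interface Action {
    WorldState apply(WorldState state);  // predicted result of taking this action
}

class WorldState { /* domain-specific details omitted */ }

class Agent {
    private final List<Goal> goals;
    private final List<Action> actions;

    Agent(List<Goal> goals, List<Action> actions) {
        this.goals = goals;
        this.actions = actions;
    }

    // Pick the action whose predicted outcome maximizes total goal utility.
    Action decide(WorldState current) {
        return actions.stream()
                .max(Comparator.comparingDouble(a -> score(a.apply(current))))
                .orElse(null);  // no actions available
    }

    private double score(WorldState state) {
        return goals.stream().mapToDouble(g -> g.utility(state)).sum();
    }
}

No fear, no desire, no joy: just goals, predictions, and a maximization step.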

Building a Better Mouse...Brain

IBM researchers have simulated 'half' a mouse brain on a Blue Gene L supercomputer. For comparison, half of a real mouse brain has 8 million neurons with 8,000 synapses each. The simulation had 8 million artificial neurons with only 6,300 virtual synapses each. It was run for just 10 seconds at one-tenth the speed of real life...which equates to one full mouse second.
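For the curious, the back-of-the-envelope math checks out (a quick sketch, nothing more):

public class MouseMath {
    public static void main(String[] args) {
        long neurons = 8000000L;          // 8 million artificial neurons
        long synapsesPerNeuron = 6300L;   // virtual synapses per neuron
        double wallClockSeconds = 10.0;   // actual run time
        double speedFactor = 0.1;         // one-tenth of real time

        System.out.printf("Total virtual synapses: %,d%n",
                neurons * synapsesPerNeuron);        // 50,400,000,000
        System.out.printf("Simulated mouse time: %.1f second(s)%n",
                wallClockSeconds * speedFactor);     // 1.0
    }
}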

Quite a virtualization overhead if you ask me.

Friday, April 27, 2007

When Will It End?

Yet another article has appeared in a British newspaper forecasting the dire consequences of AI as a result of a recent public debate at the London Science Media Centre. The Daily Mail chimes in again, repeating most of what's previously been covered, voicing concerns that robotic caregivers could "dehumanise" patients and citing society's uneasy relationship with technology.

One ray of coherent thought shone through, however, as it was revealed that the original intent of the debate was for scientists to dispute the "silly report" on robot rights released by the UK Department of Trade and Industry. Unfortunately, all of the media outlets covering the debate chose instead to focus on vague worries of out-of-control robots running amok. Gotta sell papers, I suppose...

Here's a gem from this latest article:
I certainly wouldn't want to be a British squaddie on the sands of Iraq with robot-controlled American gunships patrolling overhead.

Umm...too late?

Thursday, April 26, 2007

More Fear Mongering over Robots in UK

Now it seems the “West Britons” are jumping on the bandwagon, with the Belfast Telegraph being the latest UK media outlet to spread FUD about the coming robotic menace. The article has a decent layman’s overview of the current state of robotics, but can’t avoid inciting worry over the increasing use of military robots, “robot rights,” and robots replacing human caregivers for children and the elderly.

It doesn’t get much more ominous than this:
So are these machines a threat?

Yes...

Sounds like those scientists at the London Science Media Centre really frightened the journalists in attendance...or perhaps the less-than-tech-savvy reporters misunderstood much of the discussion and wrote sensational articles in order to attract readers.

More coverage of this debate:
Artificial Minds: British Wary of AI
Artificial Minds: Robo-FUD from the BBC

I just wish I could get my hands on some transcripts or other source material from the actual event to see for myself what really went on...

It Takes a Village…?

An interesting experiment in Scotland seeks to build a robotic society to study emergent cultural behaviors. The robots will be grouped into “villages” and programmed to mimic each other’s actions under various conditions. The scientists expect the mimicked behaviors to change slightly every time they are copied, producing a variety of unpredictable outcomes...much like the telephone game. Of course, the emergent behaviors will be uniquely robotic, and should be distinct from human or animal activities.
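If you want a feel for the mechanism, here’s a toy sketch, assuming (my assumption, not the researchers’ design) that a behavior is just a sequence of discrete moves and that each copy occasionally miscopies one:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

// Toy sketch: behaviors are move sequences, and imitation is slightly lossy.
public class ImitationGame {
    enum Move { FORWARD, BACK, LEFT, RIGHT, SPIN }

    static final Random RNG = new Random();
    static final double MUTATION_RATE = 0.05;  // chance each move is miscopied

    // One robot imitates another's behavior, occasionally substituting a move.
    static List<Move> imitate(List<Move> observed) {
        List<Move> copy = new ArrayList<Move>();
        for (Move m : observed) {
            if (RNG.nextDouble() < MUTATION_RATE) {
                Move[] all = Move.values();
                copy.add(all[RNG.nextInt(all.length)]);  // copying error
            } else {
                copy.add(m);
            }
        }
        return copy;
    }

    public static void main(String[] args) {
        List<Move> behavior = new ArrayList<Move>(
                Arrays.asList(Move.FORWARD, Move.LEFT, Move.FORWARD, Move.SPIN));
        for (int generation = 0; generation < 200; generation++) {
            behavior = imitate(behavior);
        }
        System.out.println(behavior);  // almost certainly not the original sequence
    }
}

Run a behavior through enough generations of imitators and it drifts into something nobody programmed, which is precisely the point of the experiment.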

Given the general sentiment towards AI in Great Britain lately, don’t be surprised to see protesters warning against the inevitability of these robots going crazy and killing everyone...

Wednesday, April 25, 2007

NLP Calculator for Mac

The Unofficial Apple Weblog points us in the direction of a calculator that can solve a limited range of natural-language word problems. Written as a front-end for the GNU bc calculator, Soulver is fun to play with but isn't meant to satisfy all your calculating needs.

I was recently mulling over the idea of creating a natural language Unix shell. It wouldn't be too difficult to map out a significant number of input patterns to actual Unix/Linux commands, but the necessarily narrow scope of the system would be a major usability drawback, as it surely is for Soulver.
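Something along these lines, for instance (a toy sketch; the patterns and commands below are mine, purely illustrative):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy natural-language shell: map a few English patterns onto Unix commands.
public class NLShell {
    private static final Map<Pattern, String> RULES =
            new LinkedHashMap<Pattern, String>();
    static {
        RULES.put(Pattern.compile("show me (?:the )?files in (.+)"), "ls -l %s");
        RULES.put(Pattern.compile("how much space is left"), "df -h");
        RULES.put(Pattern.compile("find (?:a )?file named (.+)"), "find . -name '%s'");
    }

    static String translate(String request) {
        for (Map.Entry<Pattern, String> rule : RULES.entrySet()) {
            Matcher m = rule.getKey().matcher(request.toLowerCase().trim());
            if (m.matches()) {
                return m.groupCount() > 0
                        ? String.format(rule.getValue(), m.group(1))
                        : rule.getValue();
            }
        }
        return null;  // outside the system's narrow scope -- the usability problem
    }

    public static void main(String[] args) {
        System.out.println(translate("Show me the files in /tmp"));  // ls -l /tmp
        System.out.println(translate("Reboot the server"));          // null
    }
}

Every request that falls through to null is a reminder of just how narrow the scope really is.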

British Wary of AI

More doom & gloom from the UK as a result of a "forward-looking" debate on robot ethics at the London Science Media Centre.

The Financial Times is concerned that robots designed for urban warfare could be misused for policing purposes. The Daily Mail fears robots making wrong decisions and warns of vague "consequences." To me, these seem to be matters of posse comitatus and accountability rather than cultural issues exclusively pertaining to AI.

The old saying "We fear that which we do not understand" rings awfully true...

Tuesday, April 24, 2007

Robo-FUD from the BBC

The BBC generally has much better articles about AI than its latest piece about the questions posed by autonomous robots. Plenty of misplaced fear-mongering about the coming dystopia of machines enslaving man, sprinkled with a bit of socialist concern for robots' rights to health care.

A single quote captures my feelings about this article rather succinctly:
"It's poorly informed, poorly supported by science and it is sensationalist," said Professor Owen Holland of the University of Essex.

The author is concerned with determining whom to blame should an autonomous military robot kill someone in error. The proper answer, of course, would be the operator and/or the chain of command, as appropriate. Additionally, just as with any other piece of military hardware, the defense contractor should also be investigated, and similar models inspected and re-tested.

Robotic rights are also addressed in this article, and again, another quote from a professor is needed to steer the discussion in the right direction:
"The more pressing and serious problem is the extent to which society is prepared to trust autonomous robots and entrust others into the care of autonomous robots."

We have a long way to go to counter Hollywood's negative portrayal of robots & AI, and it seems to me that most reporters are only interested in playing off of and propagating these sentiments. Certainly, there needs to be "informed debate" in order to give careful consideration to safeguards for human-machine interaction, but vacuous drivel such as this is not a good start.

Wednesday, April 18, 2007

Ashes to Ashes...

Scottish engineers (alas, not Montgomery Scott) are speculating about tiny robotic "dust particles" that utilize wireless distributed computing and are capable of physically reconfiguring themselves. Consisting of multitudes of individual computer chips surrounded by morphic polymer sheaths, clouds of smart dust could fly in formation and change shape to meet environmental and mission requirements. These devices would form a swarm intelligence, and could be used to explore alien worlds if released into the atmospheres of extrasolar planets by interstellar space probes.
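The formation-flying half of the idea is well-trodden algorithmic territory. Here's a minimal sketch (my own toy, not the Scottish team's code): assign each particle a slot on the desired shape and have it steer toward that slot:

// Toy sketch: a swarm holds a shape by steering each particle toward a slot.
class Particle {
    double x, y;    // position
    double vx, vy;  // velocity

    // Damped proportional steering toward this particle's assigned slot.
    void steerToward(double tx, double ty, double gain, double damping, double dt) {
        vx = damping * vx + gain * (tx - x) * dt;
        vy = damping * vy + gain * (ty - y) * dt;
        x += vx * dt;
        y += vy * dt;
    }
}

class Formations {
    // Place n slots around a star outline by alternating outer and inner radii.
    static double[][] starSlots(int n, double rOuter, double rInner) {
        double[][] slots = new double[n][2];
        for (int i = 0; i < n; i++) {
            double angle = 2 * Math.PI * i / n;
            double r = (i % 2 == 0) ? rOuter : rInner;
            slots[i][0] = r * Math.cos(angle);
            slots[i][1] = r * Math.sin(angle);
        }
        return slots;
    }

    public static void main(String[] args) {
        double[][] slots = starSlots(50, 10.0, 5.0);
        Particle p = new Particle();
        for (int step = 0; step < 100; step++) {
            p.steerToward(slots[0][0], slots[0][1], 0.5, 0.9, 0.1);
        }
        System.out.printf("particle near slot 0: (%.2f, %.2f)%n", p.x, p.y);
    }
}

Swap in a different slot generator and the cloud "morphs" into a new shape; the hard parts are the wireless coordination and the physics, not the geometry.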

Talk of simulated swarms of 50 such particles able to shift into a star formation immediately reminded me of one futurist's concept of a starfish-like body built of nanorobots, designed for the sophisticated AI beings of the future. Absolutely wild.

An Artificial Conscience for Artificial Warriors

The Economist has more on the U.S. military's attempt to build an ethical decision-making framework for autonomous weapon systems. This particular initiative seeks to map out all possible actions & outcomes in order to select the most appropriate or "ethical" behavior. It sounds a bit like Deep Blue's brute-force calculation of all possible moves & consequences; however, the decision space of a battlefield is slightly more complex than a chessboard...to say the least.
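As I read it, the scheme amounts to something like a game-tree search with an "ethics score" in place of a board evaluation. A rough sketch, with every name and the scoring idea being my own guess at the shape of such a system:

import java.util.List;

// Sketch: brute-force "ethical" action selection, Deep Blue style.
interface BattlefieldState {
    List<String> legalActions();
    BattlefieldState predict(String action);  // predicted outcome of an action
    double ethicsScore();                     // higher = more acceptable outcome
}

class EthicalGovernor {
    // Look ahead a fixed depth and pick the action with the best worst-case score.
    static String choose(BattlefieldState state, int depth) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String action : state.legalActions()) {
            double score = worstCase(state.predict(action), depth - 1);
            if (score > bestScore) {
                bestScore = score;
                best = action;
            }
        }
        return best;
    }

    private static double worstCase(BattlefieldState state, int depth) {
        if (depth <= 0 || state.legalActions().isEmpty()) return state.ethicsScore();
        double worst = Double.POSITIVE_INFINITY;
        for (String action : state.legalActions()) {
            worst = Math.min(worst, worstCase(state.predict(action), depth - 1));
        }
        return worst;
    }
}

The chess analogy breaks down exactly where you'd expect: predict() and ethicsScore() are trivial for a chessboard and hopelessly underdetermined for a battlefield.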

Sunday, April 15, 2007

SourceForge.net Site Maintenance

I just completed a thorough scrub of the SourceForge.net project websites for JWords and AutoSummary. I removed most of the unused features, uploaded & organized documentation, and configured the SF.net web hosting service. No new file releases are available (or pending), but the updated SourceForge.net pages and documentation should make JWords & AutoSummary much more accessible and easier to use.

Javadoc source code documentation was uploaded to SourceForge for both projects. Additionally, I created step-by-step instructions for getting each program up and running, which should be easy for even non-programmers to follow. And finally, I posted more extensive project descriptions to the respective documentation pages.

The updated SourceForge.net project websites can be viewed at:
JWords Java Interface for WordNet
AutoSummary Semantic Analysis Engine

My SourceForge.net developer profile can be found at:
http://sourceforge.net/users/greenbacker/

Saturday, April 14, 2007

Let the machines target machines - not people?

Not to be outdone by its allies, the US military has decided to draft some proposals regarding the proper use of robots and other autonomous systems in combat. The basic idea of the guidelines suggested by the Naval Surface Warfare Center is that machines should be allowed to freely engage other unmanned systems and weapons, but must obtain human permission to fire on enemy personnel or manned systems. The researchers point to precedents for this concept in the automatic engagement protocols of the Patriot missile battery, the Aegis Auto Special "hands-off" mode, and the Phalanx ship defense system.
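In code, the proposed rule itself is almost trivially simple; something like this one branch (my paraphrase; the type names are invented):

// Sketch of the proposed rule: weapons free on machines, ask first for people.
enum TargetType { UNMANNED_SYSTEM, INCOMING_MUNITION, MANNED_SYSTEM, PERSONNEL, UNKNOWN }

class EngagementRules {
    static boolean mayEngageAutonomously(TargetType target) {
        switch (target) {
            case UNMANNED_SYSTEM:
            case INCOMING_MUNITION:
                return true;   // machine-on-machine: engage freely
            default:
                return false;  // manned, personnel, or unknown: a human must authorize
        }
    }

    public static void main(String[] args) {
        System.out.println(mayEngageAutonomously(TargetType.UNMANNED_SYSTEM));  // true
        System.out.println(mayEngageAutonomously(TargetType.PERSONNEL));        // false
    }
}

Everything hard lives upstream of that switch statement, in whatever produces the TargetType.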

An interesting suggestion, but it puts an awful lot of reliance on the autonomous systems' ability to discriminate and positively identify the enemy while leaving allies unharmed. Many video game developers can't even get simple collision detection working. So while this may be an easier pill to swallow for those who criticize the "de-humanization" of warfare (giving lethal decision authority to a machine), those same folks will be the loudest to complain as soon as the first error leads to a friendly-fire incident. And, of course, all bets are off as soon as we face a symmetric adversary whose systems are not bound by these same guidelines. Overly restrictive rules of engagement created by lawyers and politicians in the "ivory tower" can be quite the disadvantage for soldiers on the front lines of a shooting war.

(via Slashdot)

Laws to Govern Robots and AI

Several nations are proposing early legislation aimed at protecting the rights of humans and robots should AI researchers successfully produce sentient, conscious machines.

A forward-looking series of papers commissioned by the UK Office of Science and Innovation's Horizon Scanning Centre suggests that developments over the next 50 years could lead to robots demanding the same rights as human beings. The study posits that machines might be granted both the rights & responsibilities of all Her Majesty's subjects, including voting, taxes, and military service.

South Korea is drafting an ethical code to regulate interaction between humans and robots. The rules are expected to set guidelines for robotic surgery, household servants, and human-robot "relationships." (via Slashdot)

The Government of Japan is also following suit, although its efforts appear to be mired in committee gibberish and red tape. Fears of conflict between man and machine, and civil servants seeking to avoid liability, seem to be the driving forces behind the draft proposal. Unlike the straightforward language of Asimov's Three Laws of Robotics, the Japanese rulebook contains wonderful bureaucratic poetry such as:
Risk shall be defined as a combination of the occurrence rate of danger and the actual level of danger. Risk estimation involves estimating the potential level of danger and evaluating the potential sources of danger. Therefore total risk is defined as the danger of use of robots and potential sources of danger.

Sounds like bloatware to me...
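For what it's worth, once you strip away the verbiage, the quoted definition seems to be reaching for the utterly standard formula, risk = likelihood × severity. In code (my reading of the passage, not an official formula from the draft):

// The standard risk formula buried under the bureaucratic prose.
class RobotRisk {
    // risk = occurrence rate of danger * actual level of danger
    static double risk(double occurrenceRate, double dangerLevel) {
        return occurrenceRate * dangerLevel;
    }

    public static void main(String[] args) {
        // e.g. a 1-in-1,000 chance of a severity-8 (out of 10) mishap
        System.out.println(risk(0.001, 8.0));  // 0.008
    }
}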

Monday, April 2, 2007

Google Speaks 12 Languages

I just read an interesting article about the great success Google has had using statistical machine translation to automatically translate foreign-language documents, as opposed to the rule- and grammar-based approaches used before.

I, for one, am very much persuaded by the idea that human language is so complex (we don't even fully understand it ourselves; just ask a linguist!) that it can never be fully hard-coded into a machine; rather, a machine must "learn" it largely on its own (we'll guide it and help it where needed). I like the approach Google is taking, but without a conceptual framework or knowledge representation model, the system isn't really "understanding" or "comprehending" anything--it's just doing a "dumb" translation using statistical references. Still quite an accomplishment, but entirely different from my ultimate objective.
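To caricature the statistical approach in a few lines (the phrase table and probabilities below are invented; real systems learn them from millions of human-translated sentence pairs):

import java.util.HashMap;
import java.util.Map;

// Cartoon of phrase-based statistical MT: pick the likeliest learned translation.
public class ToyTranslator {
    // Phrase table: source phrase -> candidate translations with probabilities.
    private static final Map<String, Map<String, Double>> PHRASE_TABLE =
            new HashMap<String, Map<String, Double>>();
    static {
        Map<String, Double> banco = new HashMap<String, Double>();
        banco.put("bank", 0.85);   // Spanish "banco" usually means a bank...
        banco.put("bench", 0.15);  // ...but sometimes a bench
        PHRASE_TABLE.put("banco", banco);
    }

    // "Dumb" translation: no understanding, just the highest-probability candidate.
    static String translate(String sourcePhrase) {
        Map<String, Double> candidates = PHRASE_TABLE.get(sourcePhrase);
        if (candidates == null) return sourcePhrase;  // unseen phrase: pass through
        String best = null;
        double bestProb = -1.0;
        for (Map.Entry<String, Double> entry : candidates.entrySet()) {
            if (entry.getValue() > bestProb) {
                bestProb = entry.getValue();
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(translate("banco"));  // bank
    }
}

Nothing in there knows what a bank is, and that's the gap between Google's achievement and real language understanding.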