Monday, August 13, 2007

Roll up the score, Navy, Anchors Aweigh

DefenseTech has an article profiling a standard aircraft carrier-based mission for the X-45B (now an entirely Navy-controlled project). The mission sounds almost identical to that of any manned naval aircraft, except in this case the bird is completely autonomous.

When the USAF decided to stop funding and supporting the X-45 project (the pilots in charge felt threatened), I feared that American UCAV development would stall until we were eclipsed by a rival military power. Fortunately, the USN has demonstrated the vision and willingness to push American technology forward, ensuring continued air superiority.

For shame, Air Force...

Capitalizing on Tragedy

In light of the mining disaster in Utah, NPR has taken the opportunity to ask two patronizing questions:
Could Robots Replace Humans in Mines?

Duh.
Why do human beings still risk their lives burrowing miles under ground and doing one of the dirtiest and most dangerous jobs in the world?

Because robots are expensive and people are not.

That’s about all it amounts to. Until robots become the cheaper option, humans will continue to perform the dull, dirty and dangerous tasks.

via AAAI.org News

Thursday, August 9, 2007

Fill in the Blank

Carnegie Mellon researchers have developed an algorithm capable of filling arbitrarily-sized holes in photographs by finding similar regions in a database of a million pictures. A pretty interesting application of computer vision and AI if you ask me. As far as practical uses of this specific implementation are concerned, I’d imagine it could be used to remove unwanted ex-boyfriends, politically undesirable items, and “that guy” from photos the world over.
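
Conceptually, the matching step boils down to a nearest-neighbor search over the visible pixels. Here's a minimal numpy sketch of that idea, with random arrays standing in for the million-photo database and none of the researchers' actual descriptors or seam blending:

```python
import numpy as np

rng = np.random.default_rng(1)
database = rng.random((1000, 8, 8))   # stand-in for a million photos
photo = rng.random((8, 8))
hole = np.zeros((8, 8), dtype=bool)
hole[2:6, 2:6] = True                 # region to fill

# Score each candidate by how well it matches the *visible* pixels,
# then paste the winner's pixels into the hole.
errors = ((database - photo) ** 2)[:, ~hole].sum(axis=1)
photo[hole] = database[errors.argmin()][hole]
```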

via Slashdot

On the Frontline

Slashdot points to a roundup of some DARPA technology on display. I've seen the robotic surgeon before, which is designed to extract and treat wounded soldiers in combat environments. Really nothing new among the rest either, just the same small walking robots and hovering recon UAVs from the DARPA website.

I’m looking forward to robotic surgeons designed to repair robotic super-soldiers...

Wednesday, August 8, 2007

Never to be Outdone

Apparently the French aren't the only latecomers to the robotic ethics debate; now the US Congress has decided to enter the mix. A “Congressional Caucus of Robotics” has been formed to investigate the situation. Right. Note the same Bill Gates quote about robotics found in EVERY SINGLE OTHER ARTICLE OF THIS TYPE... can’t these “journalists” find anything new to use?

via AAAI.org News

Attack of the Drones

The editors of DefenseTech.org are covering the Unmanned Systems Demo sponsored by NAVAIR and AUVSI with a series of articles regarding the current and future states of naval UAVs. Here are the first two:

The Robot Plane Lives - As earlier reported by DefenseTech, the US Navy has selected the Northrop Grumman X-47B as their platform of choice for future unmanned combat air systems development. Notwithstanding the lukewarm announcement by Navy leadership, hopefully the USN will prove not to be as shortsighted as the USAF.

War Shaping Drone Plan - As is to be expected, the current use of unmanned systems in support of the Global War on Terror by special ops and other forces is affecting the short-term development of UAVs.

Another Quality Microsoft Product(tm)

IEEE Spectrum has a lengthy profile of the tiny, 11-man division of mighty Microsoft developing a toolkit for robot designers. The blatantly uncritical article glowingly portrays the team as an underdog guaranteed to succeed. I’m shocked that there are still tech-savvy people out there who believe Microsoft to be capable of producing quality products.

Will the machines controlled by MS Robotic Studio crash with Blue Screens or Red Rings?

via AAAI.org News

More on Ineffective, Unenforceable Guidelines

Apparently the "we need a code of robot ethics" meme just won’t die. As previously reported, the government of South Korea is hard at work drafting just such a code for robots and developers to follow. Nothing new here, just the same fear-mongering jargon, the same quote from Bill Gates and the same lack of discussion about the effectiveness and enforceability of the proposed guidelines. I guess the French media are just slow to catch on...

via AAAI.org News

Looking for a Few Good ‘Bots

Wired and National Defense Magazine report that M249-armed combat robots have finally been deployed to operational units in Iraq. Known as SWORDS, these robots have been promised for some time now, and although there are no reports of actual combat use yet, they are expected to eventually engage the enemy at close range with lethal force if necessary.

I’m slightly surprised that the mass media hasn’t picked up on this yet. It could easily generate a firestorm of controversy surrounding the “dehumanizing” of war and the ethical issues concerning armed robots.

via KurzweilAI.net, Slashdot and Defense Tech

Laughter Brings Us Together

Jokes help humans interact and relate with each other, so why not share a laugh with robots too? Despite the failure of most sci-fi androids to grasp the intricacies of humor, researchers are developing a system to identify jokes and puns. They hope it will help increase machine understanding of common language usage, thus improving human-computer interaction.

via KurzweilAI.net

More Powerset Hype

MIT Technology Review has another article on Palo Alto startup Powerset (previously covered here), which promises to use NLP and semantic technologies to revolutionize search. Nothing new here, just the same promises as before. As always, I’ll remain skeptical until I see the product in action.

By the way, I’m still waiting for my beta invitation.

via KurzweilAI.net

Monday, August 6, 2007

What’s Next in Defense Tech

Government Computer News has a good round-up of ongoing defense research, some of which (as we all know) is related in one way or another to AI. Nothing too Earth-shattering, but a nice collection of news for the casual reader (GCN is not known for hard-hitting technical content). The coverage includes the DARPA Urban Challenge, machine translation and cognitive computing initiatives.

via AAAI.org News

Automated Annulments

Computers have made it easier for people to do lots of things. Soon, we may have to add getting a divorce to that list. Australian researchers are developing software designed to help mediate a divorce by combining AI, game theory and an external mediator. The goal is to provide a system to divide assets and decide other issues as fairly as possible.

Perhaps the potential clients should seek couples’ counseling from ELIZA instead...

via AAAI.org News

Monday, July 30, 2007

Mainstream Media

The NY Times has an extensive profile of robotics at MIT, which, like many of the Times' science & technology articles, is dumbed-down to the point of being unreadable.

The author, Robin Marantz Henig, is apparently an accomplished science writer who strives to make her work accessible to the general public. Unfortunately, she glosses over significant issues, focuses on the wrong things, and mixes up some important details.

For example, the author understates the complexities of machine vision, pattern recognition and mastering language. She criticizes current research for being unsophisticated, but seems impressed by MIT robots from 14 years ago. For some strange reason, Henig feels the need to describe Prof. Rodney Brooks’ “rubbery features and bulgy blue eyes” (perhaps this makes him seem more “human” for readers). Finally, she appears to be much more enamored of the robots’ hardware, even though the vast majority of the “sophistication” lies in the software and algorithms controlling the machines.

I understand that the Times is designed for the average American, and more scholarly papers belong in specialized academic journals, but works of this nature do a disservice to both reader and subject. Instead of employing professional writers who claim an ability to digest complex topics for the public, media outlets should seek genuine subject matter experts with a complementary gift for writing (they do exist).

Read at your own risk. I gave up about halfway through.

Thursday, July 26, 2007

Baby Talk

Stanford researchers are working on an interesting alternative to building natural language rules by hand... having the software learn the language on its own, like a human child. The idea is for the system to analyze and sort through speech sounds until it understands language structure. While I agree that it will be much easier to build a system that can learn and acquire language on its own, it will need to be "seeded" with some general rules of grammar, much like the innate rules that some believe human babies are born with.

via AI-Depot.com

Google’s Future

MIT Technology Review recently posted an interview with Peter Norvig, director of research at Google, regarding the future of search. An AI expert, Norvig sees machine translation and speech recognition as the next big things to improve Google's search and advertising. He also identifies understanding the contents of documents as one of the two biggest problems in search... which explains much of the NLP work ongoing at Google.

via AAAI.org News

Close, But Not Quite

Slashdot reported that a researcher has created a text compression program that comes within 1% of Claude Shannon's estimate of human performance at the same task.

What does this mean? Are we 99% of the way to “achieving” AI? No, it simply means we have AI tools that are 99% as capable as humans when it comes to text compression. We already have computers that are better at chess than humans, so this is simply another domain where algorithms are successfully competing with neurons.
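
Checking a compressor against Shannon's figure is easy to do at home. A quick sketch using the standard library's bzip2 bindings, assuming ~1.0 bits/character as the commonly-quoted midpoint of Shannon's estimate, and a hypothetical sample.txt of English text:

```python
import bz2

SHANNON_BPC = 1.0  # commonly-quoted midpoint of Shannon's estimate

text = open("sample.txt", "rb").read()
bpc = 8.0 * len(bz2.compress(text, 9)) / len(text)
print(f"{bpc:.2f} bits/char vs ~{SHANNON_BPC} bits/char for humans")
```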

Kinder, Gentler Robots

Wired has astutely recognized that the Roomba and Mars rovers don't look much like the anthropomorphic androids prominently featured in most popular sci-fi. While the article focuses on efforts to develop robots with human-like articulation, I would argue that language and personality are more important to human-computer interaction than replicating the mechanics of primate skeletons.

via KurzweilAI.net

Friday, June 29, 2007

Don't tell me I'm lost

Following up on my previous post regarding a flawed critique of AI published in MIT's Technology Review, I came across the blog of Larry Hardesty, who makes some very good points revealing the poor logic behind the original author's arguments.

Blind Impress - Uplift the bytecode!

Blind Impress - Gelernter Wrapup

How would NLP parse buzzwords?

More about Powerset (covered here before), which Techdirt seems to think is little more than buzzwords and patent threats. The start-up claims to be developing natural language search technology, and recently held an event in San Francisco to unveil itself to the world.

Among its many lofty goals, Powerset wants to become the ultimate web system by creating what ZDNet calls "the natural language search mashup platform." For now, I've got to be as skeptical as Techdirt and think that these folks just combined as many hot buzzwords as they could come up with and slapped a couple of questionable patents on them. This kind of talk is a great way to generate venture capital funding, but likely won't do much to advance NLP. Hopefully it will turn out that Powerset has something great in store for all of us, and that this is all just some marketing and PR run amok, but until then we'll just have to wait & see.

Attack of the P-Zombies

Digg points us to a New York Times article about the increasing support for physicalism coming from our growing understanding of the material origin of thought. More and more research indicates that physical structures and processes in the brain are solely (no pun intended) responsible for the emergence of emotions, morality and consciousness, challenging the proponents of dualism who claim a distinction between mind and spirit.

A quote by Pope John Paul II in 1996 struck a chord with me; he said these materialistic ideas were "incompatible with the truth about man." Facts and evidence are incompatible with "the truth?" Sounds like something Galileo would hear...

Tuesday, June 26, 2007

We’re not lost, we just need a map...

MIT’s Technology Review has an article written by a Yale Computer Science Professor who is awfully pessimistic about the possibility of creating a conscious artificial intelligence. It seems to me, however, that his arguments against conscious AI are grounded in our lack of success over the past 50 years and in classical notions of subjective experience. But if the Wright Brothers had been deterred by previous failures they would have never built their flying machine, and if Magellan had been bound to ancient concepts of a flat earth he would have never circumnavigated the globe.

Professor Gelernter frames the discussion around the differences between what he describes as "a ‘simulated conscious mind’ versus a ‘simulated unconscious intelligence’" and the mysterious "stuff" that makes us human. The professor has a special distaste for "conscious" AIs. He makes unsupported assertions that consciousness cannot be reproduced in software, simply because he believes this to be so. These arguments remind me of those of John Searle and others who contend that conscious AI is impossible because it is utterly inconceivable to them. The language of these philosophical arguments is often heavily weighted with emotional triggers and sentimental definitions, rather than the cold, empirical judgment of hard science. Yet we can use the same philosophy to defend our science. I demonstrated the fallacy of many of these arguments (or at least attempted to do so) in my undergraduate honors thesis, "In Defense of Artificial Intelligence." Indeed, Gelernter points to Searle’s famous "Chinese Room" rebuttal to the Turing Test, which is addressed at length in my aforementioned thesis.

He also lumps emotions together with consciousness, and claims that since a machine cannot have emotions or a ‘sense of beauty,’ it therefore cannot have an ‘inner mental life’ or consciousness. Setting aside the fact that no one has shown emotions cannot be duplicated in software, it is not difficult whatsoever to envision a fully-conscious artificial intelligence completely devoid of emotional sensation. Indeed, sci-fi writers have had much success creating believable stories of conscious robots with no emotions at all. Although many AI researchers hope to recreate human intelligence, it is not the only conceivable model for intelligence. We may one day encounter a form of extraterrestrial intelligence completely unlike our own, but if it acts conscious, we will describe it as such.

The author ties the possibility of conscious AI to the performance of digital computers. He claims that consciousness is one of the many incomputable problems of the universe. This may be the case, but our present definition of computability may be as incomplete as our knowledge of that universe. Perhaps in the future we will build non-digital computers that are not bound by these limitations. This is another example of using current shortcomings to predict future failures.

Furthermore, how could an objective observer discern the difference between a ‘simulated conscious mind’ and a ‘simulated unconscious intelligence’? If consciousness is subjective, and one has no method of observing the subjective experience of another entity, then it is impossible to determine whether or not a given individual possesses a subjective conscious experience. The only method of observation available would be based on behavior, and if the behavior of a given system is indistinguishable from that of a conscious entity, then we have no other logical option but to ascribe consciousness to that system. We cannot know with certainty whether our fellow human beings possess the same type of consciousness we experience, but since we can observe that their behaviors are consistent with how we believe a conscious being would act, we do not hesitate to assign the attribute of consciousness. AI should be judged by the same standard.

The reason we’ve failed at using our biological brains as the model for AI development is precisely because we have very little understanding of how our minds actually work. If you don’t know how something operates, it’s awfully difficult to recreate it. This does not, as Professor Gelernter suggests, imply an insurmountable technological hurdle, but merely a contemporary inability to implement our ideas. History is filled with examples of insufficient knowledge causing temporary roadblocks to progress, which are later overcome through persistent hard work and breakthroughs.

Again, past failures are not good predictors of future results and traditional wisdom is often wrong.

NLP to Earn Big Bucks

The NY Times covers news mining algorithms designed to digest the volumes of information available on the Internet in news articles, journals, studies, and legal filings. Financial institutions are using these programs to generate huge numbers of stock trades, easily replacing large staffs of analysts. Much like reactive day-traders who launch waves of trades based on buzzwords found in headlines, these systems look for key words & phrases known to be trade triggers to predict the movement of individual stocks and sectors.
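
Mechanically, there isn't much to it. A toy sketch with hypothetical trigger phrases (real systems presumably weight thousands of them and feed the scores into trading models):

```python
TRIGGERS = {                 # hypothetical phrase -> signal mapping
    "beats estimates": +1,
    "misses estimates": -1,
    "sec investigation": -1,
    "record profit": +1,
}

def score_headline(headline: str) -> int:
    # Sum the signals of any known trigger phrases found in the headline.
    h = headline.lower()
    return sum(signal for phrase, signal in TRIGGERS.items() if phrase in h)

print(score_headline("Acme beats estimates, posts record profit"))  # 2
```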

Robots on the Highways

The LA Times has a nice fit-for-mass-consumption profile of the upcoming DARPA Urban Challenge, highlighting the potential civilian benefits from this military R&D program.

What We're Working Towards

For a good understanding of why AI is possibly the single most important work in human history, I highly recommend "Why Work Toward the Singularity?" from The Singularity Institute for Artificial Intelligence.

Monday, June 25, 2007

Overrated Semantic Search

Several IT news outlets have been fawning over Xerox's new semantic-based search engine, which I've covered before. The general idea behind the technology is to analyze linguistic structures in order to improve search results.

Xerox plans to use this technology in legal software to enable "e-discovery" by sifting through massive amounts of documents, searching for information relevant to a case. Perhaps this will lead to the second instance of software being sued for practicing law without a license.

This is all well and good, and a natural progression for the science of search. It's not as dramatic an improvement as the articles would lead us to believe, but hey, you gotta sell papers, right? However, it aggravates me when the media makes a factual error while covering a topic I’m familiar with...
For example, common searches using keywords "Lincoln" and "vice president" likely won't reveal President Abraham Lincoln's first vice president. A semantic search should yield the answer: Hannibal Hamlin.

Except a Google search for “lincoln’s first vice president” provides the correct result, as does running “Who was Lincoln’s first vice president?” through my quite unsophisticated Answer Machine. While I can’t fault the reporter for overlooking my humble research, shouldn’t they be capable of running a simple Google query? Wouldn’t this fall under the category of “thorough fact checking?” Shouldn’t they run their “facts” through a subject matter expert before publishing them? And I mean an actual expert, not a PR staffer from the company at hand. Unfortunately, more and more tech articles in the media have regressed to little more than paraphrasing press releases.
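
For the record, fielding this particular question doesn't even require semantics. A hypothetical template-matcher with a tiny fact table manages fine (this is just the flavor of the approach, not my Answer Machine's actual algorithm):

```python
import re

FACTS = {("lincoln", "first vice president"): "Hannibal Hamlin"}

def answer(question: str) -> str:
    # Match "Who was X's Y?" and look the pair up in the fact table.
    m = re.match(r"who was (\w+)'s (.+?)\??$", question.lower())
    if m and (m.group(1), m.group(2)) in FACTS:
        return FACTS[m.group(1), m.group(2)]
    return "no answer found"

print(answer("Who was Lincoln's first vice president?"))  # Hannibal Hamlin
```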

Furthermore, if I notice obvious mistakes regarding topics I know a little something about, what other incorrect information am I obliviously consuming? And the media wonders why we don’t trust them anymore...

Tuesday, June 19, 2007

If you can’t beat ‘em, join ‘em

Apparently unable to replicate insect flight using ultra-small micro-UAVs, DARPA is now funding research that seeks to control moths using implanted computer chips. The chips will be implanted in the larva while still in the cocoon, and once the moth has matured, should allow a pilot to control the flight of the creature, beaming back photos and video. These cyber moths will be designed to infiltrate terrorist training camps and other enemy strongholds to gather intelligence without detection.

I suppose they’re building on the success of the remote-controlled pigeon and are seeking to miniaturize the technology.

Monday, June 18, 2007

Delusions of Grandeur

An article has been floating around the web that overstates the importance of video game artificial intelligence.
"A lot of the most interesting work in artificial intelligence is being done by game developers," says Bruce Blumberg, senior scientist at Blue Fang Games in Waltham, and formerly a professor at MIT's Media Lab.

I don't agree. I think this (just like everything else, right?) is a problem of semantics. Consider this quote:

"As soon as we solve a problem, instead of looking at the solution as AI, we come to view it as just another computer system." - Martha Pollack

Researchers all over the world are doing exciting and relevant work in AI, yet we don't recognize it as such. Consider the work by Google in machine translation, by Stanford and others in autonomous navigation, and by scores of groups in data mining. I would argue that these are monumentally important advancements, far greater than the incrementally improved enemy combat tactics in Halo, yet most people don't recognize them as artificial intelligence.

AI has a PR issue. Perhaps AAAI needs an advocacy arm?

Son of Stanley

Stanford has unveiled their successor to Stanley, the driverless car that won the DARPA Grand Challenge two years back. Junior is an upgraded version of the champion autonomous vehicle, sporting several new enhancements. These include many additional sensors, optical trackers designed to follow road markers, and a 360-degree camera system--all controlled by seemingly COTS computer equipment. The software powering Junior has been upgraded as well, allowing it to tackle a much more complex urban environment, unlike its desert-bound forebear. Junior’s 500 probabilistic algorithms take less than 300 milliseconds to make critical navigation decisions.

Junior is currently undergoing tests to qualify for the DARPA Urban Challenge on November 3.

Wednesday, June 6, 2007

AI News for the Masses...Down Under Style

An extensive article about robots, androids, and artificial intelligence was featured in the Sydney Morning Herald in Australia. Although designed for the layman, it still provides a decent perspective on the current state and future of AI.

Tuesday, June 5, 2007

Using NLP to Organize Unstructured Data

Another facet of the information overload problem is trying to get a handle on the volumes of unstructured data created by organizations on a daily basis, and organize it all into a searchable, manageable form. Some establishments struggle with file plan compliance and enforcement, while others provide tools to index and search documents based on keywords. IBM, on the other hand, is applying NLP techniques to try and solve the problem. OmniFind tackles content classification by scanning varied types of unstructured data, automatically learning and categorizing information into newly-created as well as existing taxonomies. By understanding linguistics, semantics, and context, OmniFind is able to determine connections and make inferences beyond the reach of even the greatest keyword-based search algorithms. Another example of NLP making information easier to find, access, and use.
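
Stripped of the linguistics, the classification step can be caricatured as vocabulary overlap with each taxonomy node. A bag-of-words toy, nowhere near OmniFind's semantic analysis:

```python
from collections import Counter

taxonomy = {   # hypothetical nodes from an existing taxonomy
    "finance": "invoice payment account balance quarterly revenue",
    "legal":   "contract clause liability counsel filing court",
}

def classify(document: str) -> str:
    # Assign the document to the node sharing the most vocabulary with it.
    tokens = Counter(document.lower().split())
    return max(taxonomy, key=lambda label:
               sum(tokens[w] for w in taxonomy[label].split()))

print(classify("Please review the contract clause on liability"))  # legal
```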

Stovepipe NLP Research

The National Science Foundation is sponsoring research into NLP designed to help government clerks get a handle on the information overload coming from the glut of public comments pouring into www.regulations.gov. The site allows officials to solicit and consider public comments while creating rules and regulations concerning things like organic food labeling and media ownership consolidation. It seems to be a success, as far more comments are submitted than can be effectively sorted through by hand. While it seems reasonable to apply NLP techniques to this problem, shouldn't the research money be directed at the more general problem of information overload, rather than such a narrow application as this?

Monday, June 4, 2007

Child-like Robot Creeps Out Bloggers

Engadget has declared the Japanese-built (of course) CB2 Child Robot as the most disturbing machine ever built. But it isn't that bad, is it?

Thursday, May 31, 2007

World's First Robotic Murderer?

It's the same old story: idiot accidentally kills himself with robot, media blames 'unsafe killer robots,' we all laugh at 'Skynet became self-aware' jokes. Expect to see more of these in the future... blaming artificial intelligence for human stupidity.

Google Knows Your Face...?

Google has added another powerful AI tool to their search arsenal, this time in facial recognition. By adding the argument "&imgtype=face" into the URL of a Google Image Search, results can be filtered to provide only faces related to the search string. It seems to be pretty accurate, excluding most non-facial images, and capturing many cartoon faces along with results where the face is only a small portion of the image.
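
Building the query by hand gets old quickly; here's a two-line helper (assuming the images.google.com endpoint that Image Search uses as of this writing):

```python
from urllib.parse import urlencode

def face_search_url(query: str) -> str:
    # Append the imgtype=face argument described above to the search URL.
    return "http://images.google.com/images?" + urlencode(
        {"q": query, "imgtype": "face"})

print(face_search_url("abraham lincoln"))
```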

Maybe if they mash this up with their new Street View in Maps, they can start identifying people outside of strip clubs...

Semantic Search is Coming

I came across an article about semantic search that does a decent job of explaining the differences between statistics-based and semantics-based approaches to information retrieval. It also describes some of the difficulties and shortcomings of the Semantic Web, along with a few of the related natural language applications.

via Slashdot

Monday, May 14, 2007

Hubris Spells Doom for Grandmaster

Sophisticated AI chess programs are continuing to beat the world’s greatest human players, and the humiliated Grandmasters are continuing to gripe & complain. First, Deep Blue defeated world champion Garry Kasparov in 1997, and just five months ago the current champ was beaten in a one-move checkmate by a new AI program.
"I rechecked this variation many times and analyzed quite far ahead," Kramnik protested. "It seemed to me I was winning."

Indeed. Chess had long been regarded as a game requiring much creativity & ingenuity to excel, something that machines would never dominate. But as Deep Blue and others have shown us, our unabashed confidence was no match for the brute force mapping of all possible outcomes. Eventually, human players adapted their game play to try to throw off the computer programs, but the AIs have adapted as well, going so far as to invent previously unknown variations, to maintain their chess supremacy.
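
That "brute force mapping" is, at its core, minimax search. A toy version on a trivial take-away game (take 1 or 2 objects; taking the last one wins) shows the skeleton that chess engines dress up with pruning, heuristics and opening books:

```python
def minimax(pile, maximizing):
    # Exhaustively score every line of play from this position.
    if pile == 0:                     # previous player took the last object
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

print(minimax(10, True))  # 1: the player to move can force a win
```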

How long until other domains previously described as bastions of creativity & consciousness give way to algorithms & AI? Many have spoken about the “God of the Gaps” concept, but what about a Consciousness of the Gaps?

Dartmouth’s New Approach to AGI

Dartmouth College’s new Institute for Computational Science seems like a very promising approach to creating Artificial General Intelligence, combining elements of engineering, philosophy, neuroscience, and traditional computer science.

While the student paper article seems little more than a press release regarding the new Institute, the author raises an interesting issue about algorithms used by computational scientists to “re-create” elements of the real world for research & experimentation. Could sufficiently-complex modeling & simulation systems serve as the mechanism behind internal mental modeling for a conscious artificial mind? We regularly use similar tools for testing aerospace & fluidic systems, as well as modeling astronomic & quantum phenomena, so why not a generalized apparatus for predictive conscious thought? Such a system could provide the means for planning, intentionality, exploring cause & effect, as well as guessing what one will find around the next corner. Of course, expanding the domain of our modeling & simulation algorithms to be thoroughly general in nature is no trivial task, but it does open an interesting path to explore.

A Field Study in Human-Robot Relationships

The Washington Post published an intriguing article about the emotional connection forged between U.S. soldiers in Iraq & Afghanistan and their robot companions. It seems many soldiers attribute personalities to the machines they interact with on a daily basis, and are considerably upset when one is damaged or destroyed. While many critics are quick to point out the vast differences between mankind and its robotic creations, our frontline troops prefer to treat them as valuable team members worthy of praise & honor.

Thursday, May 10, 2007

Using Guesses to Build Mental Maps

Purdue University researchers have constructed a robotic navigation system that utilizes predictive mapping to chart the environment of unknown areas. The robots create “mental maps” of unseen regions based on patterns recognized in previously-charted territory. Simulations have shown successful navigation while exploring as little as 33% of an environment.

This system seems well-suited for Grand Challenge driverless vehicle applications, which in the past have mostly relied on interpretation of real-time sensor telemetry. If this predictive mental mapping could be expanded beyond environmental navigation to contextual awareness in general, building abstract mental models of ideas and concepts, it could be another step on the path towards artificial consciousness.
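
As a crude illustration of the idea (far simpler than Purdue's pattern recognition, and with made-up numbers), one can predict every unseen cell from the occupancy statistics of the explored region:

```python
import numpy as np

rng = np.random.default_rng(0)
world = (rng.random((20, 20)) < 0.3).astype(int)  # hypothetical true map
seen = np.zeros_like(world, dtype=bool)
seen[:, :7] = True                                # ~33% of cells explored

# The crudest possible "pattern" learned from charted territory:
# the overall occupancy rate of everything observed so far.
p_occupied = world[seen].mean()
predicted = np.where(seen, world, int(p_occupied > 0.5))
accuracy = (predicted == world)[~seen].mean()
print(f"predicted unseen cells with {accuracy:.0%} accuracy")
```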

Meet Domo, Son of Kismet & Cog

With funding from NASA and Toyota, MIT researchers are creating a domestic assistant robot designed to interact with people in an unfamiliar human environment. Combining the human interaction skills of Kismet with Cog’s object manipulation abilities, it is hoped Domo will someday assist the elderly with common household tasks.

Friday, May 4, 2007

AI as a Moral Agent

The Ottawa Citizen takes a closer look at the ideology behind South Korea's initiative to draw up ethical guidelines to govern intelligent machines (as previously covered here). The article predictably examines the influence of Asimov's Three Laws of Robotics, but also dismisses the "hype" surrounding the proposals as much ado about nothing. The author correctly points out the weaknesses in the Three Laws as they relate to military and sentry applications of robotics. However, despite describing himself as an optimistic Luddite, he betrays an anthropocentric egoism in assuming that human intelligence will never be surpassed and that the proposed guidelines will therefore serve no purpose.

Monday, April 30, 2007

Emotion a Prerequisite for Thought?

A group of scholars gathered at Harvard to celebrate the 50th anniversary of the “cognitive revolution” is promoting the idea that understanding human emotion is absolutely essential to understanding human thought. The scholars go on to argue that the reason all attempts to recreate human thought in AI have failed up to this point is that AI researchers have ignored the role played by emotion in intelligence. While it is true that emotion is central to human thought, it’s awfully anthropocentric to claim the human model as the only path to intelligence.

Although minds and computers may not be completely analogous, to say that emotion is this missing link to artificial general intelligence seems rather naïve and simplistic. There is much work to be done before we can even come close to developing strong AI, and it is quite easy to imagine thinking machines completely devoid of emotion. Intentionality may perhaps be integral to any intelligent system, and emotion may be deep-seated in human intentionality, but any number of goal-oriented schemes could be devised to replace emotion as the driving force behind an intelligent machine.

Building a Better Mouse...Brain

IBM researchers have simulated 'half' a mouse brain on a Blue Gene L supercomputer. For comparison, a real mouse brain has 8 million neurons with 8,000 synapses each; the simulation had 8 million artificial neurons with 6,300 virtual synapses each. It was only run for 10 seconds at one-tenth the speed of real life...which equates to one full mouse second.
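
The back-of-the-envelope from those figures:

```python
real_syn, sim_syn = 8000, 6300   # synapses per neuron, real vs simulated
wall_clock, speed = 10, 0.1      # seconds run, fraction of real-time speed

print(f"synapse coverage: {sim_syn / real_syn:.0%}")   # 79%
print(f"mouse time: {wall_clock * speed:.0f} second")  # 1
```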

Quite a virtualization overhead if you ask me.

Friday, April 27, 2007

When Will It End?

Yet another article has appeared in a British newspaper forecasting the dire consequences of AI as a result of a recent public debate at the London Science Media Centre. The Daily Mail chimes in again, repeating most of what's previously been covered, with concerns that robotic caregivers could "dehumanise" patients, and citing an uneasy relationship with technology.

One ray of coherent thought shined through, however, as it was revealed that the original intent of the debate was for scientists to dispute the "silly report" on robot rights released by the UK Department of Trade and Industry. Unfortunately, all of the media outlets covering the debate chose instead to focus on vague worries of out-of-control robots running amuck. Gotta sell papers I suppose...

Here's a gem from this latest article:
I certainly wouldn't want to be a British squaddie on the sands of Iraq with robot-controlled American gunships patrolling overhead.

Umm...too late?

Thursday, April 26, 2007

More Fear Mongering over Robots in UK

Now it seems the “West Britons” are jumping on the bandwagon, with the Belfast Telegraph being the latest UK media outlet to spread FUD about the coming robotic menace. The article has a decent layman’s overview of the current state of robotics, but can’t avoid inciting worry over the increasing use of military robots, “robot rights,” and robots replacing humans providing care for children and the elderly.

It doesn’t get much more ominous than this:
So are these machines a threat?

Yes...

Sounds like those scientists at the London Science Media Centre really frightened the journalists in attendance...or perhaps the less-than-tech-savvy reporters misunderstood much of the discussion and wrote sensational articles in order to attract readers.

More coverage of this debate:
Artificial Minds: British Wary of AI
Artificial Minds: Robo-FUD from the BBC

I just wish I could get my hands on some transcripts or other source material from the actual event to see for myself what really went on...

It Takes a Village…?

An interesting experiment in Scotland seeks to build a robotic society to study emergent cultural behaviors. The robots will be grouped into “villages” and programmed to mimic each other’s actions under various conditions. The scientists expect the mimicked behaviors to change slightly every time they are copied, resulting in a range of unpredictable outcomes...much like the telephone game. Of course, the emergent behaviors will be uniquely robotic, and should be distinct from human or animal activities.
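
The expected drift is easy to simulate. A sketch of that telephone-game dynamic, with a made-up behavior string and mutation rate:

```python
import random

random.seed(7)
ALPHABET = "abcdefghij-"
behavior = list("wave-twice-then-spin")   # hypothetical seed behavior

# Each generation copies the previous one, garbling ~5% of the symbols.
for generation in range(8):
    behavior = [random.choice(ALPHABET) if random.random() < 0.05 else c
                for c in behavior]
    print(generation, "".join(behavior))
```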

Given the general sentiment towards AI in Great Britain lately, don’t be surprised to see protesters warning against the inevitability of these robots going crazy and killing everyone...

Wednesday, April 25, 2007

NLP Calculator for Mac

The Unofficial Apple Weblog points us in the direction of a calculator that can solve a limited scope of natural language word problems. Written as a front-end for the GNU bc calculator, Soulver is something fun to play with but isn't meant to satisfy all your calculating needs.

I was recently mulling over the idea of creating a natural language Unix shell. It wouldn't be too difficult to map out a significant number of input patterns to actual Unix/Linux commands, but the necessarily narrow scope of the system would be a major usability drawback, as it must be for Soulver.
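
To give a sense of what I had in mind, here's a hypothetical pattern table covering the easy cases; everything that doesn't match a canned pattern is exactly where such a system (and Soulver) runs out of road:

```python
import re

PATTERNS = [   # hypothetical utterance patterns -> command templates
    (r"show (?:me )?(?:the )?files(?: in (?P<dir>\S+))?", "ls {dir}"),
    (r"how much disk space", "df -h"),
    (r"find (?:the )?file (?P<name>\S+)", "find . -name {name}"),
]

def to_command(utterance: str) -> str | None:
    for pattern, template in PATTERNS:
        m = re.match(pattern, utterance.lower())
        if m:
            args = {k: v or "." for k, v in m.groupdict().items()}
            return template.format(**args)
    return None

print(to_command("show me the files in /tmp"))    # ls /tmp
print(to_command("how much disk space is left"))  # df -h
```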

British Wary of AI

More doom & gloom from the UK as a result of a "forward-looking" debate on robot ethics at the London Science Media Centre.

The Financial Times is concerned that robots designed for urban warfare could be misused for policing purposes. The Daily Mail fears robots making wrong decisions and warns of vague "consequences." To me these seem more to be matters of posse comitatus and accountability rather than cultural issues exclusively pertaining to AI.

The old saying "We fear that which we do not understand" rings awfully true...

Tuesday, April 24, 2007

Robo-FUD from the BBC

The BBC generally has much better articles about AI than their latest piece about the questions posed by autonomous robots. Plenty of misplaced fear-mongering about the coming dystopia with machines enslaving man, sprinkled with a bit of socialist concern for robots' rights to health care.

A single quote captures my feelings about this article rather succinctly:
"It's poorly informed, poorly supported by science and it is sensationalist," said Professor Owen Holland of the University of Essex.

The author is concerned with determining who to blame should an autonomous military robot kill someone in error. The proper answer, of course, would be the operator and/or chain of command, as appropriate. Additionally, just like any bit of military hardware, the defense contractor should also be investigated, and similar models inspected and re-tested.

Robotic rights are also addressed in this article, and again, another quote from a professor is needed to steer the discussion in the right direction:
"The more pressing and serious problem is the extent to which society is prepared to trust autonomous robots and entrust others into the care of autonomous robots."

We have a long way to go to counter Hollywood's negative portrayal of robots & AI, and it seems to me that most reporters are only interested in playing off of and propagating these sentiments. Certainly, there needs to be "informed debate" in order to give careful consideration about safeguards regarding human-machine interaction, but vacuous drivel such as this is not a good start.

Wednesday, April 18, 2007

Ashes to Ashes...

Scottish engineers (alas, not Montgomery Scott) are speculating about tiny robotic "dust particles" which utilize wireless distributed computing and are capable of physically reconfiguring themselves. Consisting of multitudes of individual computer chips surrounded by morphic polymer sheaths, clouds of smart dust could fly in formation and change shape to meet environmental and mission requirements. These devices would form a swarm intelligence, and could be used to explore alien worlds if released into the atmosphere of extrasolar planets by interplanetary space probes.

Talk of simulated swarms of 50 such particles able to shift into star formation immediately reminded me of one futurist's concept of a starfish-like body built of nanorobots designed for sophisticated AI beings of the future. Absolutely wild.

An Artificial Conscience for Artificial Warriors

The Economist has more on the U.S. military's attempt to build an ethical decision-making framework for autonomous weapon systems. This particular initiative seeks to map out all possible actions & outcomes in order to select the most appropriate or "ethical" behavior. It sounds a bit like Deep Blue's brute force calculation of all possible moves & consequences, however, the decision space of a battlefield is slightly more complex than a chessboard...to say the least.

Sunday, April 15, 2007

SourceForge.net Site Maintenance

I just completed a good scrub updating the SourceForge.net project websites for JWords and AutoSummary. I removed most of the unused features, uploaded & organized documentation, and configured the SF.net web hosting service. No new file releases are available (or pending), but the SourceForge.net page and documentation should make JWords & AutoSummary much more accessible and easier to use.

Javadoc source code documentation was uploaded onto SourceForge for both of the projects. Additionally, I created step-by-step instructions for getting each program up and running, which should be easy to follow for even the non-programmer. And finally, I posted more extensive project descriptions to the respective documentation pages.

The updated SourceForge.net project websites can be viewed at:
JWords Java Interface for WordNet
AutoSummary Semantic Analysis Engine

My SourceForge.net developer profile can be found at:
http://sourceforge.net/users/greenbacker/

Saturday, April 14, 2007

Let the machines target machines - not people?

Not to be outdone by her allies, the US military has decided to draft some proposals regarding the proper use of robots and other autonomous systems in combat. The basic idea of the guidelines suggested by the Naval Surface Warfare Center is that machines should be allowed to freely engage other unmanned systems and weapons, but must require human interaction for permission to fire on enemy personnel or manned systems. The researchers point to precedents for this concept in the automatic engagement protocols of the Patriot missile battery, the Aegis Auto Special "hands-off" mode, and the Phalanx ship defense system.

An interesting suggestion, but it puts an awful lot of reliance on the autonomous systems' abilities to discriminate and positively identify the enemy, leaving allies unharmed. Many video game developers can't even get simple collision detection working. So while this may be an easier pill to swallow for those who criticize the "de-humanization" of warfare (giving lethal decision authority to a machine), those folks will be the loudest to complain as soon as the first error leads to a friendly fire incident. And, of course, all bets are off as soon as we face a symmetric adversary whose systems are not bound by these same guidelines. Overly-restrictive rules of engagement created by lawyers and politicians in the "ivory tower" can be quite the disadvantage for soldiers at the front lines in a shooting war.
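
Reduced to code, the proposed rule is a one-line gate, which is exactly the problem; all the hard work hides inside the identification inputs. A hypothetical sketch:

```python
from enum import Enum, auto

class TargetType(Enum):
    UNMANNED = auto()
    MANNED = auto()
    PERSONNEL = auto()

def may_fire_autonomously(target: TargetType, id_confidence: float) -> bool:
    # Free engagement of unmanned systems only; a human stays in the loop
    # for everything else. The confidence threshold is my own hypothetical
    # addition -- and misjudging it is how friendly fire happens.
    return target is TargetType.UNMANNED and id_confidence >= 0.99
```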

(via Slashdot)

Laws to Govern Robots and AI

Several nations are proposing early legislation aimed at protecting the rights of humans and robots should AI researchers successfully produce sentient, conscious machines.

As part of a future-looking series of papers commissioned by the UK Office of Science and Innovation's Horizon Scanning Centre, developments over the next 50 years could lead to robots demanding the same rights as human beings. The study suggests that machines might be granted both the rights & responsibilities of all Her Majesty's subjects, including voting, taxes, and military service.

South Korea is drafting an ethical code to regulate interaction between humans and robots. The rules are expected to set guidelines for robotic surgery, household servants, and human-robot "relationships." (via Slashdot)

The Government of Japan is also following suit, although its efforts appear to be mired in committee gibberish and red tape. Fears of conflict between man and machine, and civil servants seeking to avoid liability, seem to be the driving forces behind the draft proposal. Unlike the straightforward language of Asimov's Three Laws of Robotics, the Japanese rulebook contains wonderful bureaucratic poetry such as:
Risk shall be defined as a combination of the occurrence rate of danger and the actual level of danger. Risk estimation involves estimating the potential level of danger and evaluating the potential sources of danger. Therefore total risk is defined as the danger of use of robots and potential sources of danger.

Sounds like bloatware to me...

Monday, April 2, 2007

Google Speaks 12 Languages

I just read an interesting article about the great success Google has been having using statistical machine translation for automatic translation of foreign language documents, as opposed to the rule & grammar based approaches used before.

I, for one, am very much persuaded by the idea that human language is so complex (we don't even fully understand it, just ask a linguist!) that it can never be fully hard-coded into a machine; rather, a machine must "learn" it on its own for the most part (we'll guide it and help it where needed). I like the approach that Google is taking, but without a conceptual framework or knowledge representation model, the system really isn't "understanding" or "comprehending" anything--it's just doing a "dumb" translation using statistical references. Still quite an accomplishment, but entirely different from my ultimate objective.
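
For the curious, the statistical idea can be caricatured in a few lines: count co-occurrences in a parallel corpus and translate each word to its most strongly associated partner. A toy Dice-score alignment, nothing like Google's production models:

```python
from collections import Counter
from itertools import product

pairs = [                    # toy parallel "corpus"
    ("la maison", "the house"),
    ("la fleur", "the flower"),
    ("maison bleue", "blue house"),
]

cooc, f_freq, e_freq = Counter(), Counter(), Counter()
for src, tgt in pairs:
    f_freq.update(src.split())
    e_freq.update(tgt.split())
    cooc.update(product(src.split(), tgt.split()))

def translate_word(f):
    # Dice-style association: reward co-occurrence, punish promiscuity.
    return max(e_freq, key=lambda e: cooc[f, e] / (f_freq[f] + e_freq[e]))

print(" ".join(translate_word(w) for w in "la fleur bleue".split()))
# -> "the flower blue": word-for-word, with no reordering model at all
```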

Tuesday, March 20, 2007

Machine Analysis of Scientific Papers

There's a lot of exciting work going on in NLP right now, and it's hard to keep up...and even harder to maintain a blog about all of it! Larry pointed me to an article from last year detailing an automated tool for analyzing and comparing experimental reports.

This project sounds like some sort of XML markup scheme for outlining scientific papers, similar to the ontologies powering the semantic web initiative. It probably involves too much overhead to be widely adopted and therefore useful, as it likely requires the author to spend an awful lot of additional time constructing papers so that the EXPO system can parse them. A much more elegant method would be for the system to perform an automatic analysis and markup of the text; however, that would require NLP technology beyond what's currently available.

As you might imagine, a similar hurdle exists for the adoption of the semantic web in general. However, the analysis & synthesis of peer-reviewed journals presents us with yet another "killer app" for NLP. There is simply far too much information covering any given topic being generated for a human being to digest, even experts in a particular field...let alone a renaissance man or polymath to synthesize from diverse fields. Only a machine with advanced NLP capabilities would be able to make sense of it all and create new knowledge from what's already available.

Friday, March 2, 2007

Commercially Available Automatic Summarization Software

In his weekly article, Robert X. Cringely profiles a company that has created software which is seemingly able to create relevant summaries of arbitrary size from bodies of text covering all possible subject domains. This is precisely the sort of thing I wanted to accomplish with my AutoSummary project. Automatic summarization is yet another application of NLP as a solution to the problem of too much information for humans to deal with. Learn more about iReader at Syntactica.
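
The baseline technique is worth showing because it's so simple: score each sentence by the frequencies of the words it contains and keep the top few. A sketch of that approach (no claim that this is how iReader actually works):

```python
import re
from collections import Counter

def summarize(text: str, n: int = 2) -> str:
    # Naive extractive summary: prefer sentences built from frequent words.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    top = set(sorted(sentences, reverse=True,
                     key=lambda s: sum(freq[w] for w in
                                       re.findall(r"[a-z']+", s.lower())))[:n])
    return " ".join(s for s in sentences if s in top)  # keep original order

print(summarize("Cats sleep. Cats purr loudly. Dogs bark. Cats rule homes."))
```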

Marvin Minsky AI Podcast from 2001

Slashdot points us to three podcasts from MIT Professor Marvin Minsky where he discusses the past failures of AI and the hope for progress in the future. Part One can be found here.

Saturday, February 24, 2007

"Text Enrichment" to Improve Written English

An interesting application of NLP compares user-generated input against a vast database of known proper English to generate suggested improvements to help readability and fluency.

By using a corpus filled with millions of real-world modern English texts, the software is capable of recommending thousands of grammatical corrections, as well as relevant adjectives, adverbs and synonyms. The Israel-based company WhiteSmoke hopes to help improve emails and documents by leaping well ahead of the limited grammar checking functions found in programs like Microsoft Word.
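
Mechanically, the corpus comparison reduces to lookups against n-gram counts. A bigram toy (WhiteSmoke's corpus and linguistics presumably go far deeper):

```python
from collections import Counter

# A tiny stand-in for the millions of real-world English texts.
corpus = "the quick brown fox jumps over the lazy dog . the dog sleeps ."
words = corpus.split()
bigrams = Counter(zip(words, words[1:]))

def suspicious_pairs(sentence: str):
    # Flag word pairs never seen in the reference corpus.
    tokens = sentence.lower().split()
    return [(a, b) for a, b in zip(tokens, tokens[1:]) if not bigrams[a, b]]

print(suspicious_pairs("the dog jumps"))  # [('dog', 'jumps')]
```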

Sunday, February 18, 2007

Autonomous Cars by 2030

Only 23 more years until we can sleep while driving to work. That's the prediction of scientists working on this year's DARPA Urban Challenge, the third running of the Grand Challenge, this time featuring a 60-mile course for driverless vehicles through a simulated city.

The Stanford researcher also predicts battlefield use of this technology by 2015--conveniently just in time to meet the Congressional deadline to make a third of all military vehicles autonomous. So not only will the technology drastically reduce drunk driving accidents, it will hopefully reduce deaths by roadside bomb as well.

Saturday, February 17, 2007

PARC to build NLP Search Engine

The Palo Alto Research Center (of Xerox fame) has licensed its sophisticated natural language processing technology to a start-up hoping to develop an NLP-powered search engine.

The start-up, called Powerset, intends to create a system where users search for data by entering plain-language queries, rather than using keywords. Similar efforts have been launched by MIT and others to solve the problems of automated response generation. With decades of research and significant resources behind them, Powerset hopes to foster the third generation of search engines, following in the footsteps of AltaVista and Google.

Monday, February 12, 2007

Boeing Brings the Magic

Major defense contractor Boeing displayed several very impressive advanced technologies at the annual Airlift/Tanker Association convention in Orlando last fall...

Following on the (eventual) success of the V-22 Osprey tiltrotor aircraft, Boeing is currently developing a quad-tiltrotor design--that is, a VTOL aircraft with four rotating turboprop engines that convert from helicopter to airplane mode. The Boeing QuadTiltRotor was recently awarded a contract for the US military's Joint Heavy Airlift study. Replace those turboprops with turbofans and we're getting close to the flying "Hunter-Killer" design from T2...

Perhaps the most impressive technology showcased by Boeing was its work in pulse jet lift thrusters. This system groups together a number of very small, simple & highly efficient pulse jet engines to provide powerful, controllable & fault-tolerant VTOL ability. Boeing claims to have overcome the poor fuel economy issues that have plagued pulse jets since their first major use in the V-1 flying bomb. Future applications include the Light Aerial Multi-purpose Vehicle (LAMV) concept, in which a (patented) pulsejet ejector thrust augmentor provides VTOL capability, while traditional jet engines allow for standard flight like an ordinary fixed-wing aircraft. Scale it down a bit, and perhaps we'll all be riding hoverboards someday...

Not shown at the A/TA Convention but prominent on the Boeing Integrated Defense Systems website is the A160 Hummingbird helicopter UAV. The Hummingbird first flew in 2002 and is currently in development under contract with DARPA.

Saturday, February 3, 2007

Human-like Decision Making in Software

For those employees who believe they cannot be replaced with a very small shell script, your days may be numbered.

Compsim LLC, a seemingly small company based outside of Milwaukee, has developed a patented system which they claim can add human-like decision making to software applications, devices or even websites. By using Knowledge Enhanced Electronic Logic (KEEL) Technology to design custom decision processes, Compsim's customers can build expert system software capable of making judgmental decisions.

Like any expert system, KEEL is a knowledge-based system that uses domain-specific information provided by human experts to generate decisions based on various inputs. However, KEEL is different because it allows for graphical development of decision formulas or "policies" in a web (as opposed to tree) structure, and its solutions are fully explainable and auditable. The company's website has a variety of interactive demos, including UAV guidance & collision-avoidance and automatic dispensing of drugs. Compsim foresees numerous applications for KEEL technology, from automotive & medical diagnostics, to industrial automation, to stock market forecasting, to intelligent weapon systems.
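
A minimal sketch of what an auditable, weighted "policy" with an explanation trace might look like; purely illustrative, not Compsim's actual KEEL design:

```python
def decide(inputs, policy, threshold=0.5):
    # Weighted-sum policy; returning the per-input contributions is what
    # makes the decision explainable and auditable.
    trace = {name: weight * inputs[name] for name, weight in policy.items()}
    return sum(trace.values()) > threshold, trace

approve, trace = decide(
    {"obstacle_distance": 0.9, "fuel_margin": 0.4, "threat_level": 0.8},
    {"obstacle_distance": 0.5, "fuel_margin": 0.3, "threat_level": -0.4},
)
print(approve, trace)   # False, with each factor's contribution shown
```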

The next step for this line of research is to incorporate machine learning into the development of decision policies. Compsim admits that while KEEL systems can be designed to adapt, they are not "true" learning systems in that they are unable to independently integrate new information sources. An expert system that can learn could potentially become an "expert" on anything, and might be a path towards artificial general intelligence.

Monday, January 29, 2007

Inside the Walls of DARPA

Computerworld offers A Peek Inside DARPA, the Defense Advanced Research Projects Agency responsible for inventing the Internet and other advanced technologies.

Projects of note include Global Autonomous Language Exploitation (GALE), which is designed to "transcribe, translate & distill" data collected from English and foreign language sources into actionable information for human decision makers. Another, the Personalized Assistant that Learns (PAL), will one day, it is hoped, "automatically watch a conversation between two people and, using natural-language processing, figure out what are the tasks they agreed upon."

GALE in particular seems to be a promising endeavor. Researchers at Penn's Linguistic Data Consortium (LDC) are working to provide resources and tools for the project. Possibly the most interesting and impactful component is the Distillation engine, which seems to be the segment focused on "understanding" the information at hand. LDC has posted a thorough background & analysis of this function on their website.

Wednesday, January 24, 2007

The Proliferation of UAVs

Once limited to military operations, surveillance UAVs are attracting increasing interest from private companies and domestic governmental agencies for non-defense applications.

MTC Technologies, Inc., for example, offers its Airborne Security Monitoring System (ASMS) for Unmanned Aircraft Systems (UAS) Operations. Already holding a contract to provide "SpyHawk" reconnaissance UAVs to the US Marine Corps, MTC is now looking to supply similar systems with autonomous flight operations for the real-time monitoring & inspection of large-scale infrastructure, transportation hubs and other facilities.

Launching UAVs from Manned Aircraft

Snow Aviation International, Inc. (SAI) recently completed a concept validation of UAV airborne basing on a C-130 cargo aircraft, with the manned vehicle serving as a "mothership" capable of launching and recovering the UAV in flight.

The SAI Quick Reaction UAV Program (pdf, pg. 26) was intended to demonstrate mid-flight controlled launch & recapture of an unmanned surveillance aircraft from a C-130 Hercules. Initial test flights in April 2005 included a successful launch, and further flights will try out the complete release, flight & redocking cycle.

Tuesday, January 23, 2007

WWII's Unmanned Battleship

Between serving as part of a show of force just before the end of WWI and her eventual destruction at Pearl Harbor, the USS Utah was converted to a remotely-controlled vessel used for target practice.

One of the earlier military applications of unmanned systems, the Utah was refitted with a radio-control mechanism which enabled remote operation from the safety of a nearby ship. She was used for realistic targeting exercises for nine years, allowing sailors to sharpen their bombing & gunnery skills, and was eventually capsized as a result of the Pearl Harbor attack, trapping an unknown number of men inside.

Read more about the history of the USS Utah: USS Utah (BB-31) - Wikipedia

Unmanned Vehicles Magazine

I stumbled across an interesting UK-based magazine devoted to UAVs and the like. The website in particular has an excellent collection of news items related to unmanned systems of all kinds.

Excuses, excuses...

It's been a while since my last blog post or project update. Between work, personal life & grad school applications I haven't had much time to spare for this site. But now that I've got things pretty much back in order, get ready for the deluge.

Several of the forthcoming "backblog" come from the Airlift/Tanker Association conference I attended in Orlando back in October. Many exhibitors had a load of good material about UAVs and other technologies, so you'll see plenty of that.

Also, I'm expecting to start grad school full time in the Fall, so I'll be returning to my research projects to get back into the swing of things. I've been meaning to continue work on the Answer Machine, refining the algorithm and implementing additional query types, so you can expect more on this as well.

I think I'm going to start trying to publicize this website a bit more to bring in additional traffic...we'll see how that goes.

I'll try to stay on top of things this time...I swear!