Friday, June 29, 2007

Don't tell me I'm lost

Following up on my previous post regarding a flawed critique of AI published in MIT's Technology Review, I came across the blog of Larry Hardesty, who makes some very good points exposing the poor logic behind the original author's arguments.

Blind Impress - Uplift the bytecode!

Blind Impress - Gelernter Wrapup

How would NLP parse buzzwords?

More about Powerset (covered here before), which Techdirt seems to think is little more than buzzwords and patent threats. The start-up claims to be developing natural language search technology, and recently held an event in San Francisco to unveil itself to the world.

Among its many lofty goals, Powerset wants to become the ultimate web system by creating what ZDNet calls "the natural language search mashup platform." For now, I've got to be as skeptical as Techdirt and think that these folks just combined as many hot buzzwords as they could come up with and slapped a couple of questionable patents on them. This kind of talk is a great way to generate venture capital funding, but likely won't do much to advance NLP. Hopefully it will turn out that Powerset has something great in store for all of us, and that this is all just some marketing and PR run amok, but until then we'll just have to wait & see.

Attack of the P-Zombies

Digg points us to a New York Times article about the increasing support for physicalism coming from our growing understanding of the material origin of thought. More and more research indicates that physical structures and processes in the brain are solely (no pun intended) responsible for the emergence of emotions, morality, and consciousness, challenging the proponents of dualism who claim a distinction between the material brain and an immaterial mind or spirit.

A 1996 quote from Pope John Paul II struck a chord with me; he said these materialistic ideas were "incompatible with the truth about man." Facts and evidence are incompatible with "the truth"? Sounds like something Galileo would hear...

Tuesday, June 26, 2007

We’re not lost, we just need a map...

MIT's Technology Review has an article written by a Yale computer science professor who is awfully pessimistic about the possibility of creating a conscious artificial intelligence. It seems to me, however, that his arguments against conscious AI are grounded in the field's lack of success over the past 50 years and in classical notions of subjective experience. But if the Wright Brothers had been deterred by previous failures they would never have built their flying machine, and if Magellan had been bound to ancient concepts of a flat earth his expedition would never have circumnavigated the globe.

Professor Gelernter frames the discussion around the differences between what he describes as "a 'simulated conscious mind' versus a 'simulated unconscious intelligence'" and the mysterious "stuff" that makes us human. The professor has a special distaste for "conscious" AIs. He makes unsupported assertions that consciousness cannot be reproduced in software, simply because he believes this to be so. These arguments remind me of those of John Searle and others who contend that conscious AI is impossible because it is utterly inconceivable to them. The language of these philosophical arguments is often heavily weighted with emotional triggers and sentimental definitions, rather than the cold, empirical judgment of hard science. Yet we can use the same philosophy to defend our science. I demonstrated the fallacy of many of these arguments (or at least attempted to do so) in my undergraduate honors thesis, "In Defense of Artificial Intelligence." Indeed, Gelernter points to Searle's famous "Chinese Room" rebuttal to the Turing Test, which is addressed at length in my aforementioned thesis.

He also lumps emotions together with consciousness, claiming that since a machine cannot have emotions or a 'sense of beauty,' it therefore cannot have an 'inner mental life' or consciousness. Setting aside the fact that no one has actually shown emotions cannot be duplicated in software, it is not at all difficult to envision a fully conscious artificial intelligence completely devoid of emotional sensation. Indeed, sci-fi writers have had much success creating believable stories of conscious robots with no emotions at all. Although many AI researchers hope to recreate human intelligence, it is not the only conceivable model for intelligence. We may one day encounter a form of extraterrestrial intelligence completely unlike our own, but if it acts conscious, we will describe it as such.

The author ties the possibility of conscious AI to the performance of digital computers. He claims that consciousness is one of the many incomputable problems of the universe. This may be the case, but our present definition of computability may be as incomplete as our knowledge of that universe. Perhaps in the future we will build non-digital computers that are not bound by these limitations. This is another example of using current shortcomings to predict future failures.

Furthermore, how could an objective observer discern the difference between a ‘simulated conscious mind’ and a ‘simulated unconscious intelligence’? If consciousness is subjective, and one has no method of observing the subjective experience of another entity, then it is impossible to determine whether or not a given individual possesses a subjective conscious experience. The only method of observation available would be based on behavior, and if the behavior of a given system is indistinguishable from that of a conscious entity, then we have no other logical option but to ascribe consciousness to that system. We cannot know with certainty whether our fellow human beings possess the same type of consciousness we experience, but since we can observe that their behaviors are consistent with how we believe a conscious being would act, we do not hesitate to assign the attribute of consciousness. AI should be judged by the same standard.

The reason we've failed at using our biological brains as the model for AI development is precisely that we have very little understanding of how our minds actually work. If you don't know how something operates, it's awfully difficult to recreate it. This does not, as Professor Gelernter suggests, imply an insurmountable technological hurdle, but merely a contemporary inability to implement our ideas. History is filled with examples of insufficient knowledge creating temporary roadblocks to progress, which are later overcome through persistent hard work and breakthroughs.

Again, past failures are not good predictors of future results and traditional wisdom is often wrong.

NLP to Earn Big Bucks

The NY Times covers news-mining algorithms designed to digest the volumes of information available on the Internet in news articles, journals, studies, and legal filings. Financial institutions are using these programs to generate massive numbers of stock trades, easily replacing large staffs of analysts. Much like reactive day-traders who launch waves of trades based on buzzwords found in headlines, these systems look for keywords and phrases known to be trade triggers in order to predict the movement of individual stocks and sectors.
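
Out of curiosity, here's roughly how such a trigger scanner might work. This is just a minimal sketch; the trigger phrases, weights, and threshold are all invented for illustration, not anyone's actual trading system:

    # A toy sketch of headline-driven trade signaling, as described above.
    # The trigger phrases, weights, and threshold are invented for
    # illustration; real systems use far richer models and data feeds.

    BULLISH = {"beats estimates": 2.0, "raises guidance": 1.5, "record profit": 1.0}
    BEARISH = {"misses estimates": -2.0, "cuts guidance": -1.5, "sec investigation": -2.5}

    def score_headline(headline):
        """Sum the weights of every known trigger phrase in the headline."""
        text = headline.lower()
        return sum(weight for phrase, weight in {**BULLISH, **BEARISH}.items()
                   if phrase in text)

    def signal(headline, threshold=1.5):
        """Map a headline's score to a crude trading signal."""
        score = score_headline(headline)
        if score >= threshold:
            return "BUY"
        if score <= -threshold:
            return "SELL"
        return "HOLD"

    print(signal("Acme Corp beats estimates, raises guidance"))  # BUY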

Robots on the Highways

The LA Times has a nice fit-for-mass-consumption profile of the upcoming DARPA Urban Challenge, highlighting the potential civilian benefits from this military R&D program.

What We're Working Towards

For a good understanding of why AI is possibly the single most important work in human history, I highly recommend "Why Work Toward the Singularity?" from The Singularity Institute for Artificial Intelligence.

Monday, June 25, 2007

Overrated Semantic Search

Several IT news outlets have been fawning over Xerox's new semantic-based search engine, which I've covered before. The general idea behind the technology is to analyze linguistic structures in order to improve search results.
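
To illustrate the difference between matching keywords and matching meaning, here's a toy sketch (using the Lincoln question that comes up below). The miniature triple store and matching logic are my own invention for illustration, not Xerox's actual technology:

    # A toy contrast between keyword search and semantic (relation-based)
    # search. The tiny store of subject-relation-object triples is invented
    # for illustration; a real system would extract such triples by parsing
    # document text.

    TRIPLES = [
        ("Abraham Lincoln", "first vice president", "Hannibal Hamlin"),
        ("Abraham Lincoln", "second vice president", "Andrew Johnson"),
    ]

    def keyword_search(query, docs):
        """Naive keyword search: return docs containing every query word."""
        words = set(query.lower().split())
        return [d for d in docs if words <= set(d.lower().split())]

    def semantic_search(subject, relation):
        """Answer from structured triples instead of raw word overlap."""
        for s, r, o in TRIPLES:
            if subject.lower() in s.lower() and relation.lower() == r.lower():
                return o
        return None

    docs = ["Lincoln delivered the Gettysburg Address.",
            "Hamlin served as vice president under Lincoln."]
    print(keyword_search("Lincoln first vice president", docs))  # [] : no hit
    print(semantic_search("Lincoln", "first vice president"))   # Hannibal Hamlin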

Xerox plans to use this technology in legal software to enable "e-discovery" by sifting through massive amounts of documents, searching for information relevant to a case. Perhaps this will lead to the second instance of software being sued for practicing law without a license.

This is all well and good, and a natural progression for the science of search. It's not as dramatic an improvement as the articles would lead us to believe, but hey, you've got to sell papers, right? However, it aggravates me when the media makes a factual error while covering a topic I'm familiar with...
For example, common searches using keywords "Lincoln" and "vice president" likely won't reveal President Abraham Lincoln's first vice president. A semantic search should yield the answer: Hannibal Hamlin.

Except a Google search for "lincoln's first vice president" provides the correct result, as does running "Who was Lincoln's first vice president?" through my quite unsophisticated Answer Machine. While I can't fault the reporter for overlooking my humble research, shouldn't they be capable of running a simple Google query? Wouldn't this fall under the category of "thorough fact checking"? Shouldn't they run their "facts" by a subject matter expert before publishing them? And I mean an actual expert, not a PR staffer from the company at hand. Unfortunately, more and more tech articles in the media have regressed to little more than paraphrased press releases.

Furthermore, if I notice obvious mistakes regarding topics I know a little something about, what other incorrect information am I obliviously consuming? And the media wonders why we don’t trust them anymore...

Tuesday, June 19, 2007

If you can’t beat ‘em, join ‘em

Apparently unable to replicate insect flight using ultra-small micro UAVs, DARPA is now funding research that seeks to control moths using implanted computer chips. The chips will be implanted while the insect is still in its cocoon, and once the moth has matured, they should allow a pilot to control the creature's flight while it beams back photos and video. These cyber-moths will be designed to infiltrate terrorist training camps and other enemy strongholds to gather intelligence without detection.

I suppose they’re building on the success of the remote-controlled pigeon and are seeking to miniaturize the technology.

Monday, June 18, 2007

Delusions of Grandeur

An article has been floating around the web that overstates the importance of video game artificial intelligence:
"A lot of the most interesting work in artificial intelligence is being done by game developers," says Bruce Blumberg, senior scientist at Blue Fang Games in Waltham, and formerly a professor at MIT's Media Lab.

I don't agree. I think this (just like everything else, right?) is a problem of semantics. Consider this quote:

"As soon as we solve a problem, instead of looking at the solution as AI, we come to view it as just another computer system." - Martha Pollack

Researchers all over the world are doing exciting and relevant work in AI, yet we don't recognize it as such. Consider the work by Google in machine translation, by Stanford and others in autonomous navigation, and by scores of groups in data mining. I would argue that these are monumentally important advancements, far greater than the incrementally improved enemy combat tactics in Halo, yet most people don't recognize them as artificial intelligence.

AI has a PR issue. Perhaps AAAI needs an advocacy arm?

Son of Stanley

Stanford has unveiled the successor to Stanley, the driverless car that won the DARPA Grand Challenge two years back. Junior is an upgraded version of the champion autonomous vehicle, sporting several new enhancements: many additional sensors, optical trackers designed to follow road markers, and a 360-degree camera system, all controlled by what appears to be COTS computer equipment. The software powering Junior has been upgraded as well, allowing it to tackle a much more complex urban environment, unlike its desert-bound forebear. Junior's 500 probabilistic algorithms take less than 300 milliseconds to make critical navigation decisions.

Junior is currently undergoing tests to qualify for the DARPA Urban Challenge on November 3.
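
We don't know what Junior's code actually looks like, but here's a minimal sketch of the flavor of probabilistic reasoning involved: a recursive Bayes update of the belief that an obstacle lies ahead, given a stream of noisy sensor readings. The sensor hit and false-alarm rates are invented for illustration:

    # A minimal sketch of probabilistic decision-making: recursively update
    # the belief that an obstacle occupies the cell ahead as noisy sensor
    # readings arrive. Not Junior's actual code; the rates are invented.

    P_HIT_GIVEN_OBSTACLE = 0.9  # sensor fires when an obstacle is present
    P_HIT_GIVEN_CLEAR = 0.2     # false-alarm rate on empty road

    def bayes_update(prior, sensor_hit):
        """Return the posterior P(obstacle) after one sensor reading."""
        if sensor_hit:
            like_obs, like_clear = P_HIT_GIVEN_OBSTACLE, P_HIT_GIVEN_CLEAR
        else:
            like_obs, like_clear = 1 - P_HIT_GIVEN_OBSTACLE, 1 - P_HIT_GIVEN_CLEAR
        numerator = like_obs * prior
        return numerator / (numerator + like_clear * (1 - prior))

    belief = 0.5  # start out undecided
    for reading in [True, True, False, True]:  # noisy sensor returns
        belief = bayes_update(belief, reading)
    print(f"P(obstacle ahead) = {belief:.3f}")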

Wednesday, June 6, 2007

AI News for the Masses...Down Under Style

An extensive article about robots, androids, and artificial intelligence was featured in the Sydney Morning Herald in Australia. Although written for a lay audience, it still provides a decent perspective on the current state and future of AI.

Tuesday, June 5, 2007

Using NLP to Organize Unstructured Data

Another facet of the information overload problem is trying to get a handle on the volumes of unstructured data created by organizations on a daily basis, and to organize it all into a searchable, manageable package. Some establishments struggle with file plan compliance and enforcement, while others provide tools to index and search documents based on keywords. IBM, on the other hand, is applying NLP techniques to try to solve the problem. OmniFind tackles content classification by scanning varied types of unstructured data, automatically learning and categorizing information into newly created as well as existing taxonomies. By understanding linguistics, semantics, and context, OmniFind is able to determine connections and make inferences beyond the reach of even the greatest keyword-based search algorithms. It's another example of NLP making information easier to find, access, and use.
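
For a feel of what automatic categorization involves at its very simplest, here's a toy bag-of-words classifier that learns category profiles from labeled examples. It's emphatically not IBM's actual OmniFind algorithm; the categories and training text are invented for illustration:

    # A toy content classifier: build a word-frequency profile per category
    # from labeled examples, then assign new documents to the category whose
    # profile they overlap most. Not IBM's OmniFind; everything is invented.

    from collections import Counter

    def tokenize(text):
        return text.lower().split()

    # A tiny "taxonomy" learned from a few labeled documents.
    training = {
        "legal": ["contract breach litigation settlement",
                  "court filing deposition subpoena"],
        "finance": ["quarterly earnings revenue forecast",
                    "stock dividend shareholder equity"],
    }

    profiles = {cat: Counter(w for doc in docs for w in tokenize(doc))
                for cat, docs in training.items()}

    def classify(document):
        """Pick the category whose word profile best overlaps the document."""
        words = tokenize(document)
        return max(profiles, key=lambda cat: sum(profiles[cat][w] for w in words))

    print(classify("the court approved the settlement"))  # legal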

Stovepipe NLP Research

The National Science Foundation is sponsoring research into NLP designed to help government clerks get a handle on the information overload coming from the glut of public comments pouring into www.regulations.gov. The site allows officials to solicit and consider public comments while creating rules and regulations concerning things like organic food labeling and media ownership consolidation. It seems to be a success, as far more comments are submitted than can be effectively sorted through by hand. While it seems reasonable to apply NLP techniques to this problem, shouldn't the research money be directed at the more general problem of information overload, rather than at such a narrow application as this?
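
One plausible piece of such a system (sketched below under my own assumptions, not anything the NSF grant describes) is grouping near-duplicate form-letter comments so clerks only need to read each variant once:

    # A minimal sketch of comment triage: group near-duplicate form letters
    # using Jaccard similarity over word sets. A simple stand-in for
    # whatever the actual NSF-funded research does; the threshold and
    # sample comments are invented for illustration.

    import re

    def words(text):
        """Lowercase word set, ignoring punctuation."""
        return set(re.findall(r"[a-z']+", text.lower()))

    def jaccard(a, b):
        wa, wb = words(a), words(b)
        return len(wa & wb) / len(wa | wb)

    def group_comments(comments, threshold=0.8):
        """Greedily assign each comment to the first group it closely matches."""
        groups = []  # each group holds near-identical comments
        for comment in comments:
            for group in groups:
                if jaccard(comment, group[0]) >= threshold:
                    group.append(comment)
                    break
            else:
                groups.append([comment])
        return groups

    comments = [
        "Please preserve strict organic food labeling standards.",
        "Please preserve strict organic food labeling standards!",
        "Media ownership consolidation harms local news coverage.",
    ]
    print(len(group_comments(comments)))  # 2 distinct groups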

Monday, June 4, 2007

Child-like Robot Creeps Out Bloggers

Engadget has declared the Japanese-built (of course) CB2 Child Robot the most disturbing machine ever built. But it isn't that bad, is it?