Monday, July 30, 2007

Mainstream Media

The NY Times has an extensive profile of robotics at MIT, which, like many of the Times' science & technology articles, is dumbed-down to the point of being unreadable.

The author, Robin Marantz Henig, is apparently an accomplished science writer who strives to make her work accessible to a general audience. Unfortunately, she glosses over significant issues, focuses on the wrong things, and mixes up some important details.

For example, the author understates the complexities of machine vision, pattern recognition, and mastering language. She criticizes current research for being unsophisticated, yet seems impressed by MIT robots from 14 years ago. For some strange reason, Henig feels the need to describe Prof. Rodney Brooks’ “rubbery features and bulgy blue eyes” (perhaps this makes him seem more “human” to readers). Finally, she appears far more enamored of the robots’ hardware, even though the vast majority of the “sophistication” lies in the software and algorithms controlling the machines.

I understand that the Times is designed for the average American, and more scholarly papers belong in specialized academic journals, but works of this nature do a disservice to both reader and subject. Instead of employing professional writers who claim to be able to digest complex topics for the public, media outlets should seek genuine subject-matter experts with a complementary gift for writing (they do exist).

Read at your own risk. I gave up about halfway through.

Thursday, July 26, 2007

Baby Talk

Stanford researchers are working on an interesting alternative to building natural language rules by hand... having the software learn the language on its own, like a human child. The idea is for the system to analyze and sort through speech sounds until it understands language structure. While I agree that it will be much easier to build a system that can learn and acquire language on its own, it will need to be "seeded" with some general rules of grammar, much like the innate rules that some believe human babies are born with.
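
To make that concrete, here is a minimal sketch of the statistical-learning idea (my own illustration, not the Stanford system): the program watches an unbroken stream of made-up syllables, learns how often each syllable follows another, and applies one "seeded" rule (posit a word boundary wherever the transition probability drops) to carve the stream into words. The syllables, corpus size, and threshold are all invented for the example.

    from collections import Counter
    import random

    # Unsegmented "speech": three made-up words repeated with no pauses between them.
    words = ["bi da ku", "pa do ti", "go la bu"]
    random.seed(0)
    stream = []
    for _ in range(200):
        stream.extend(random.choice(words).split())

    # Learn syllable-to-syllable transition probabilities from the raw stream.
    pair_counts = Counter(zip(stream, stream[1:]))
    left_counts = Counter(stream[:-1])
    trans_prob = {(a, b): n / left_counts[a] for (a, b), n in pair_counts.items()}

    # The one "seeded" rule: a word boundary is likely wherever the transition
    # probability between adjacent syllables drops below a threshold.
    THRESHOLD = 0.5
    segments, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if trans_prob[(a, b)] < THRESHOLD:
            segments.append("".join(current))
            current = []
        current.append(b)
    segments.append("".join(current))

    print(Counter(segments).most_common(3))  # recovers bidaku, padoti, golabu

Within-word transitions (bi to da) always occur, so their probabilities stay near 1, while cross-word transitions are split three ways, which is exactly what the threshold picks up. A real system would have to learn far messier structure from actual audio, which is where the hard work lies.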

via AI-Depot.com

Google’s Future

MIT Technology Review recently posted an interview with Peter Norvig, director of research at Google, regarding the future of search. An AI expert, Norvig sees machine translation and speech recognition as the next big things to improve Google's search and advertising. He also identifies understanding the contents of documents as one of the two biggest problems in search... hence the considerable NLP work ongoing at Google.

via AAAI.org News

Close, But Not Quite

Slashdot reported that a researcher has created a text compression program that comes within 1% of human performance, as measured against the benchmark Claude Shannon derived from his human text-prediction experiments.

What does this mean? Are we 99% of the way to “achieving” AI? No, it simply means we have AI tools that are 99% as capable as humans when it comes to text compression. We already have computers that are better at chess than humans, so this is just another domain where algorithms are successfully competing with neurons.
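
For a sense of what that comparison involves: Shannon estimated the entropy of printed English (roughly 0.6 to 1.3 bits per character) by having people guess the next letter of a text, and a compressor's output size gives an upper bound on the same quantity. The rough sketch below uses an off-the-shelf compressor rather than the program from the article, and the file name and the 1.3 bits/char figure are placeholders for whatever benchmark the researcher actually used.

    import bz2

    # Any reasonably large plain-English text file; the name is a placeholder.
    text = open("sample.txt", "rb").read()
    compressed = bz2.compress(text, 9)

    # Compressed size in bits, divided by the number of characters, approximates
    # how many bits of information the compressor needs per character of English.
    bits_per_char = 8 * len(compressed) / len(text)
    human_estimate = 1.3  # upper end of Shannon's estimate, in bits per character

    print(f"compressor:     {bits_per_char:.2f} bits/char")
    print(f"human estimate: {human_estimate:.2f} bits/char")
    print(f"gap: {100 * (bits_per_char / human_estimate - 1):.0f}%")

A general-purpose compressor like bz2 lands well above 1.3 bits per character on English text; closing that gap requires modeling the language itself, which is where the "AI" in the claim comes in.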

Kinder, Gentler Robots

Wired has astutely recognized that the Roomba and Mars rovers don't look much like the anthropomorphic androids prominently featured in most popular sci-fi. While the article focuses on efforts to develop robots with human-like articulation, I would argue that language and personality are more important to human-computer interaction than replicating the mechanics of primate skeletons.

via KurzweilAI.net