The NY Times has an extensive profile of robotics at MIT, which, like many of the Times' science & technology articles, is dumbed down to the point of being unreadable.
The author, Robin Marantz Henig, is apparently an accomplished science writer who strives to make her work accessible to a general audience. Unfortunately, she glosses over significant issues, focuses on the wrong things, and mixes up some important details.
For example, the author understates the complexities of machine vision, pattern recognition, and mastering language. She criticizes current research for being unsophisticated, yet seems impressed by MIT robots from 14 years ago. For some strange reason, Henig feels the need to describe Prof. Rodney Brooks’ “rubbery features and bulgy blue eyes” (perhaps this makes him seem more “human” to readers). Finally, she appears far more enamored of the robots’ hardware, even though the vast majority of the “sophistication” lies in the software and algorithms controlling the machines.
I understand that the Times is designed for the average American, and more scholarly papers belong in specialized academic journals, but works of this nature do a disservice to both reader and subject. Instead of employing professional writers who claim an ability to digest complex topics for the public, media outlets should seek genuine subject matter experts with a complementary gift for writing (they do exist).
Read at your own risk. I gave up about halfway through.
Monday, July 30, 2007
Thursday, July 26, 2007
Baby Talk
Stanford researchers are working on an interesting alternative to building natural language rules by hand... having the software learn the language on its own, like a human child. The idea is for the system to analyze and sort through speech sounds until it understands language structure. While I agree that it will be much easier to build a system that can learn and acquire language on its own, it will need to be "seeded" with some general rules of grammar, much like the innate rules that some believe human babies are born with.
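The statistical-learning idea can be illustrated with a toy sketch (this is my own illustration, not Stanford's actual system): learn character-transition probabilities from an unsegmented stream and posit word boundaries wherever the transition probability dips, in the spirit of the infant statistical-learning experiments. The tiny three-word lexicon and the 0.6 threshold are arbitrary choices for the demo.

```python
import random
from collections import Counter

def transition_probs(corpus):
    """Estimate P(next char | char) from bigram counts."""
    pair_counts = Counter(zip(corpus, corpus[1:]))
    char_counts = Counter(corpus[:-1])
    return {(a, b): n / char_counts[a] for (a, b), n in pair_counts.items()}

def segment(stream, probs, threshold=0.6):
    """Posit a word boundary wherever the transition probability dips."""
    words, current = [], stream[0]
    for a, b in zip(stream, stream[1:]):
        if probs.get((a, b), 0.0) < threshold:
            words.append(current)  # low-probability transition: start a new word
            current = b
        else:
            current += b
    words.append(current)
    return words

# Unsegmented "speech": a random sequence drawn from a tiny lexicon.
random.seed(0)
lexicon = ["abc", "def", "ghi"]
training = "".join(random.choice(lexicon) for _ in range(300))

probs = transition_probs(training)
print(segment("abcdefghiabc", probs))  # → ['abc', 'def', 'ghi', 'abc']
```

Within-word transitions are (near-)deterministic, while cross-boundary transitions are roughly uniform over the lexicon, so a single threshold separates them — no grammar rules were coded in, which is the point of the approach.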
via AI-Depot.com
Google’s Future
MIT Technology Review recently posted an interview with Peter Norvig, director of research at Google, regarding the future of search. An AI expert, Norvig sees machine translation and speech recognition as the next big things to improve Google's search and advertising. He also identifies understanding the contents of documents as one of the two biggest problems in search... which is driving much of the NLP work ongoing at Google.
via AAAI.org News
Close, But Not Quite
Slashdot reported that a researcher has created a text compression program capable of coming within 1% of human performance, as estimated by Claude Shannon.
What does this mean? Are we 99% of the way to “achieving” AI? No, it simply means we have AI tools that are 99% as capable as humans when it comes to text compression. We already have computers that are better at chess than humans, so this is simply another domain where algorithms are successfully competing with neurons.
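For context, Shannon's famous figure of roughly 1 bit per character for English is a conditional estimate that exploits context; a naive zeroth-order estimate from letter frequencies alone comes out far higher, and closing that gap is essentially what strong compressors (and good predictors) do. A minimal sketch of the zeroth-order calculation, using an arbitrary sample sentence of my choosing:

```python
import math
from collections import Counter

def entropy_per_char(text):
    """Zeroth-order (context-free) entropy in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

sample = "it is a truth universally acknowledged that a single man"
h = entropy_per_char(sample)
print(f"{h:.2f} bits/char")  # roughly 4 bits/char, vs. Shannon's ~1 with context
```

The several-fold gap between the frequency-only figure and Shannon's contextual estimate is exactly the redundancy a near-human compressor has to model.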
Kinder, Gentler Robots
Wired has astutely recognized that the Roomba and Mars rovers don't look much like the anthropomorphic androids prominently featured in most popular sci-fi. While the article focuses on efforts to develop robots with human-like articulation, I would argue that language and personality are more important to human-computer interaction than replicating the mechanics of primate skeletons.
via KurzweilAI.net
Friday, June 29, 2007
Don't tell me I'm lost
Following up on my previous post regarding a flawed critique of AI published in MIT's Technology Review, I came across the blog of Larry Hardesty, who makes some very good points exposing the poor logic behind the original author's arguments.
Blind Impress - Uplift the bytecode!
Blind Impress - Gelernter Wrapup
How would NLP parse buzzwords?
More about Powerset (covered here before), which Techdirt seems to think is little more than buzzwords and patent threats. The start-up claims to be developing natural language search technology, and recently held an event in San Francisco to unveil itself to the world.
Among its many lofty goals, Powerset wants to become the ultimate web system, by creating what ZDNet calls "the natural language search mashup platform." For now, I've got to be as skeptical as Techdirt and think that these folks just combined as many hot buzzwords as they could come up with and slapped a couple of questionable patents on them. This kind of talk is a great way to generate venture capital funding, but likely won't do much to advance NLP. Hopefully it will turn out that Powerset has something great in store for all of us, and that this is all just some marketing and PR run amok, but until then we'll just have to wait & see.