I just read an interesting article about the great success Google has been having using statistical machine translation for automatic translation of foreign-language documents, as opposed to the rule- and grammar-based approaches used before.
I, for one, am very much persuaded by the idea that human language is so complex (we don't even fully understand it ourselves; just ask a linguist!) that it can never be fully hard-coded into a machine. Instead, a machine must "learn" it on its own for the most part, with us guiding and helping it where needed. I like the approach Google is taking, but without a conceptual framework or knowledge representation model, the system really isn't "understanding" or "comprehending" anything; it's just doing a "dumb" translation using statistical references. That's still quite an accomplishment, but it's entirely different from my ultimate objective.
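To make the "statistical references" point concrete, here's a toy sketch of what phrase-based statistical translation boils down to at its simplest: looking up the most probable target phrase for each source phrase, with no model of meaning anywhere. The phrase table and the pre-split input are hand-made illustrations, not real data or Google's actual system.

```python
# Toy phrase table: source phrase -> {candidate translation: probability}.
# Entirely made up for illustration.
phrase_table = {
    "the house": {"la maison": 0.7, "le domicile": 0.2, "la casa": 0.1},
    "is red": {"est rouge": 0.8, "est rousse": 0.2},
}

def translate(sentence: str) -> str:
    """Greedily pick the highest-probability translation for each phrase.

    Phrases arrive pre-split on ' | ' to sidestep the (hard!) segmentation
    problem; unknown phrases pass through untranslated.
    """
    out = []
    for phrase in sentence.split(" | "):
        candidates = phrase_table.get(phrase, {phrase: 1.0})
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)

print(translate("the house | is red"))  # la maison est rouge
```

A real system would also weigh a language model of the target language and reorder phrases, but the core move is the same: probability lookup, not comprehension.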