Tuesday, June 26, 2007

We’re not lost, we just need a map...

MIT’s Technology Review has an article written by a Yale computer science professor who is awfully pessimistic about the possibility of creating a conscious artificial intelligence. It seems to me, however, that his arguments against conscious AI are grounded in the field’s lack of success over the past 50 years and in classical notions of subjective experience. But if the Wright Brothers had been deterred by previous failures, they would never have built their flying machine, and if Magellan had been bound to ancient concepts of a flat earth, he would never have circumnavigated the globe.

Professor Gelernter frames the discussion around the differences between what he describes as "a ‘simulated conscious mind’ versus a ‘simulated unconscious intelligence’" and the mysterious "stuff" that makes us human. The professor has a special distaste for "conscious" AIs. He asserts, without support, that consciousness cannot be reproduced in software, apparently because he believes this to be so. These arguments remind me of those of John Searle and others who contend that conscious AI is impossible because it is utterly inconceivable *to them*. The language of these philosophical arguments is often heavily weighted with emotional triggers and sentimental definitions, rather than the cold, empirical judgment of hard science. Yet we can use the same philosophy to defend our science. I demonstrated the fallacy of many of these arguments (or at least attempted to do so) in my undergraduate honors thesis, "In Defense of Artificial Intelligence." Indeed, Gelernter points to Searle’s famous "Chinese Room" rebuttal to the Turing Test, which is addressed at length in that thesis.

He also lumps emotions together with consciousness, claiming that since a machine cannot have emotions or a ‘sense of beauty,’ it therefore cannot have an ‘inner mental life’ or consciousness. But no one has established that emotions cannot be duplicated in software, and even if they cannot, it is not at all difficult to envision a fully conscious artificial intelligence completely devoid of emotional sensation. Indeed, sci-fi writers have had much success creating believable stories of conscious robots with no emotions at all. Although many AI researchers hope to recreate human intelligence, it is not the only conceivable model for intelligence. We may one day encounter a form of extraterrestrial intelligence completely unlike our own, but if it acts conscious, we will describe it as such.

The author ties the possibility of conscious AI to the performance of digital computers. He claims that consciousness is one of the many incomputable problems of the universe. This may be the case, but our present definition of computability may be as incomplete as our knowledge of that universe. Perhaps in the future we will build non-digital computers that are not bound by these limitations. This is another example of using current shortcomings to predict future failures.
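
To give a concrete sense of what "incomputable" means here, consider the canonical example: the halting problem. The Python sketch below walks through the standard contradiction argument for why no general halting oracle can exist. It is purely illustrative (the function names are mine, and the oracle is, by construction, unimplementable), but it shows the kind of hard limit being appealed to when someone places a problem beyond digital computation.

    # The halting problem: the canonical incomputable problem.
    # Suppose, for contradiction, that a total function halts(program, data)
    # existed that always returned True iff program(data) eventually halts.

    def halts(program, data):
        """Hypothetical halting oracle -- provably impossible to
        implement correctly for all inputs."""
        raise NotImplementedError("no general algorithm can exist")

    def paradox(program):
        """Do the opposite of whatever the oracle predicts about
        running 'program' on itself."""
        if halts(program, program):
            while True:   # oracle says it halts, so loop forever
                pass
        return            # oracle says it loops, so halt immediately

    # Asking whether paradox(paradox) halts yields a contradiction
    # either way, so halts() cannot exist as a real algorithm.

The claim under dispute is that consciousness is a problem of this kind; the objection above is that our very notion of "computable" may itself be revised by future, non-digital machines.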

Furthermore, how could an objective observer discern the difference between a ‘simulated conscious mind’ and a ‘simulated unconscious intelligence’? If consciousness is subjective, and one has no method of observing the subjective experience of another entity, then it is impossible to determine whether or not a given individual possesses a subjective conscious experience. The only method of observation available would be based on behavior, and if the behavior of a given system is indistinguishable from that of a conscious entity, then we have no other logical option but to ascribe consciousness to that system. We cannot know with certainty whether our fellow human beings possess the same type of consciousness we experience, but since we can observe that their behaviors are consistent with how we believe a conscious being would act, we do not hesitate to assign the attribute of consciousness. AI should be judged by the same standard.
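
To make that standard concrete: in code, the test would consult only observable behavior, never the machinery behind it. The sketch below is purely illustrative, and every name in it (the response lists, the similarity measure, the 0.9 threshold) is a hypothetical stand-in rather than a real consciousness test.

    # A toy sketch of the behavioral standard: judge an entity solely by
    # its observable responses, never by inspecting what produced them.
    # All names and the 0.9 threshold are hypothetical illustrations.

    def judged_conscious(entity_responses, reference_responses,
                         similarity, threshold=0.9):
        """Return True if an entity's responses are, on average,
        indistinguishable from those of a known conscious reference."""
        scores = [similarity(observed, expected)
                  for observed, expected in
                  zip(entity_responses, reference_responses)]
        return sum(scores) / len(scores) >= threshold

    # e.g. judged_conscious(robot_answers, human_answers,
    #                       similarity=lambda a, b: float(a == b))

The crucial design choice is that the function's inputs carry no reference to how the entity works internally; by this standard, a machine and a fellow human are judged identically.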

The reason we’ve failed at using our biological brains as the model for AI development is precisely that we have very little understanding of how our minds actually work. If you don’t know how something operates, it’s awfully difficult to recreate it. This does not, as Professor Gelernter suggests, imply an insurmountable technological hurdle, but merely a contemporary inability to implement our ideas. History is filled with examples of insufficient knowledge creating temporary roadblocks to progress, which were later overcome through persistence, hard work, and breakthroughs.

Again, past failures are poor predictors of future results, and conventional wisdom is often wrong.
