Coming soon to your TV: newscasts written, produced and anchored entirely by autonomous software.
Think Ananova without the writers. The system, called News at Seven, generates a script from RSS news feeds customized to a viewer's interests; the script is then read by an avatar created with the Half-Life game engine. The system can also augment the broadcast with clips from YouTube or other video sites, selected using keywords from the news stories.
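To make the pipeline concrete, here is a minimal, hypothetical sketch of the flow just described: RSS items become a spoken-script line, and keywords pulled from the headline drive the video search. The sample feed, the stopword list, and all function names are illustrative assumptions, not the lab's actual implementation.

```python
import xml.etree.ElementTree as ET

# Illustrative feed content -- not a real News at Seven input.
SAMPLE_RSS = """<rss version="2.0"><channel>
  <item>
    <title>North Korea conducts alleged nuclear test</title>
    <description>Seismic readings suggest an underground detonation.</description>
  </item>
</channel></rss>"""

# A crude, hypothetical stopword list for keyword extraction.
STOPWORDS = {"a", "an", "the", "of", "to", "in", "conducts", "alleged", "suggest"}

def rss_items(xml_text):
    """Yield (title, description) pairs from an RSS 2.0 string."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield item.findtext("title", ""), item.findtext("description", "")

def keywords(text):
    """Naive keyword extraction: lowercase words not in the stopword list."""
    return [w for w in text.lower().replace(".", "").split() if w not in STOPWORDS]

def script_line(title, description):
    """Turn one feed item into a line for the avatar to read aloud."""
    return f"Our top story: {title}. {description}"

for title, desc in rss_items(SAMPLE_RSS):
    print(script_line(title, desc))
    print("video search terms:", keywords(title))
```

The real system's natural-language generation and clip retrieval are far more sophisticated; the sketch only shows how little machinery the basic feed-to-script-to-search loop requires.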
News at Seven was created by a team of researchers at the Northwestern University Intelligent Information Lab led by Kristian Hammond along with grad students Nathan Nichols and Sara Owsley. Several videos are already available for viewing, including a report on the alleged North Korean nuclear test.
A novelty for now, perhaps, but possibly the next step in the long journey toward systems we interact with in a much more natural way. Imagine an avatar you could question or hold a conversation with. Using methods like those described above, such an avatar could quickly become conversant in almost any conceivable subject. Or, instead of reformatting RSS feeds into a news script, the system could use some knowledge-representation framework to build an information bank for later use. Replace the RSS feeds with another form of input, say sensory input, and it could create "memories" of "personal experiences."

The question at the heart of the matter is: how do we define understanding? Does the News at Seven avatar understand the stories it presents to its audience? Most would agree that it does not. But what if the avatar were able to keep a record of the information from these stories in an accessible memory bank, and could discuss the matter with other people (or avatars) to formulate decisions, actions, or even opinions based on that knowledge? Would that qualify as understanding? Intentional behavior? Even conscious thought? What do we humans, as conscious beings, do beyond this that leads us to define it all as consciousness and understanding?