Thursday, August 18, 2005

When the flaw in the software is the human element...

If you read between the lines of the most recent controversy over intelligence failures and the 9/11 Commission, you'll find an interesting story about artificial intelligence and human error.

The NY Post has a piece covering the "Able Danger" data-mining software (registration required) used to identify and track suspected terrorists. It seems that Able Danger located several of the 9/11 hijackers (including ringleader Mohamed Atta) in the United States well before September 2001. That information, however, was never disseminated or acted upon by intelligence personnel (for whatever reason), and the terrorists were able to continue planning their attack. So what we have is software designed to protect American citizens executing its mission successfully, while poor judgment on the part of the human actors (granted, a call that's easy to make in hindsight) may have contributed to the deaths of thousands. Like I've said before, until we become comfortable with fully automated systems making life-and-death decisions on our behalf, society will insist on keeping humans in the loop. That will only change once the human element is repeatedly identified as the single point of failure in the decision chain, and I fear this example will be the first of many.
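
To make the structural point concrete, here's a toy sketch in Python of a detection pipeline with a human review gate. This is purely illustrative; the names, scores, and threshold are invented, and it says nothing about how Able Danger itself actually worked. It just shows how a pipeline can fail at the human step even when the automated step succeeds:

    from dataclasses import dataclass

    @dataclass
    class Alert:
        subject: str    # name flagged by the data-mining stage
        score: float    # how strongly the pattern matched

    def automated_detection(records):
        # Stand-in for the data-mining stage: flag anyone whose
        # activity score crosses a threshold. All values here are
        # made up for illustration.
        return [Alert(name, score) for name, score in records if score > 0.8]

    def human_review(alert, analyst_forwards):
        # The human-in-the-loop gate. If the analyst declines to
        # forward the alert, the pipeline silently fails here,
        # no matter how well the detection stage performed.
        return alert if analyst_forwards(alert) else None

    records = [("suspect_a", 0.95), ("bystander_b", 0.30)]

    for alert in automated_detection(records):
        # Simulate an analyst who never forwards anything: the
        # software succeeded, but the decision chain still failed.
        if human_review(alert, analyst_forwards=lambda a: False) is None:
            print(f"Alert on {alert.subject} dropped at the human step")

Notice the failure mode has nothing to do with the detection logic; it lives entirely in the hand-off.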
