Science fiction is quickly becoming science fact.
In a move that mirrors Minority Report, the 2002 Tom Cruise blockbuster based on a Philip K. Dick story, statistical researcher Richard Berk hopes to identify which children will grow up to commit crimes.
Berk’s work centers on the nation of Norway, which collects an Orwellian amount of data on its citizens and enters it into a single identification file. Berk believes that by looking at the numbers, he may be able to pick out which children will commit a criminal act before their 18th birthday owing largely to the circumstances of their birth.
Of course, determining whether a newborn will grow up to be a criminal is a tricky prospect, as newborns are largely a blank slate. While some people are genetically predisposed to certain criminal acts, one cannot look at the way a baby was born and classify him or her as a criminal. You have to wait and watch. They may display signs of abnormality as they mature and their brain develops, or they may not. Until then, you cannot say one way or the other.
If you'll remember, the point of Minority Report was that judging people for crimes they haven't yet committed is dangerous. The system was fatally flawed.
Much of Berk's hope hangs on machine learning, in which data scientists design algorithms that train computers to recognize patterns in large data sets. Once a model is established, a computer can use these patterns to make predictions about new cases. In 2012, a model of this kind was famously able to correctly predict the pregnancy of a Minnesota high schooler. Similar models have also been used to estimate whether an offender is likely to reoffend.
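The pattern-then-predict loop described above can be sketched in a few lines. The following is a minimal illustration, not Berk's actual system: a tiny logistic-regression classifier is fitted to invented, labeled examples, then asked to classify cases it has never seen. The features, labels, and function names are all hypothetical.

```python
import math

def train(examples, labels, lr=0.1, epochs=200):
    """Fit a tiny logistic-regression model by stochastic gradient descent."""
    n = len(examples[0])
    w = [0.0] * n   # one weight per feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))       # predicted probability of label 1
            err = p - y                      # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    """Classify a new, unseen case using the learned pattern."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

# Toy training data: two made-up numeric features per case, binary label.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
model = train(X, y)
print(predict(model, [0.15, 0.15]))  # resembles the 0-labeled group
print(predict(model, [0.85, 0.85]))  # resembles the 1-labeled group
```

The point of the sketch is the workflow, not the model: historical cases go in, a pattern comes out, and the pattern is then applied to people it was never trained on. Everything that follows in the article is about what happens when that last step goes wrong.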
The success of machine learning rests on having as much contextual data as possible, including data on prior arrests, types of crimes, IQ, and proximity to high-crime areas. In that case, Berk may very well succeed in his endeavor. However, criminal behavior transcends all of these things. Ted Bundy was a good-looking law student who lived a fairly normal life, yet killed upwards of forty women during a 1970s killing spree that spanned the nation. On the other hand, too many people to name have come from rough neighborhoods and criminal backgrounds to become celebrities, billionaires, and even model citizens.
Berk himself admits that there has been relatively little success in predicting who actually poses a risk, as opposed to predicting who does not. The algorithm was accurate in predicting which prison inmates would engage in serious misconduct only 9 percent of the time, and correct in guessing which offender on parole would commit murder only 7 percent of the time. With numbers like these, a correct prediction is not the rule but the minute exception.
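There is a simple arithmetic reason why those single-digit figures are hard to escape. When the event being predicted is rare, even a model that catches most true cases will flag far more innocent people than guilty ones, so the fraction of flagged people who actually offend stays tiny. The numbers below are invented for illustration and are not Berk's data.

```python
def precision(base_rate, sensitivity, specificity, population=100_000):
    """Fraction of flagged individuals who are true positives."""
    positives = population * base_rate          # people who will offend
    negatives = population - positives          # people who will not
    true_pos = positives * sensitivity          # offenders correctly flagged
    false_pos = negatives * (1 - specificity)   # non-offenders wrongly flagged
    return true_pos / (true_pos + false_pos)

# Suppose 1% of a population will offend, the model catches 80% of them,
# and it wrongly flags only 10% of everyone else.
print(round(precision(0.01, 0.80, 0.90), 3))  # → 0.075, i.e. about 7.5%
```

Even with those generous assumed accuracies, barely one flagged person in thirteen is a true positive, which is right in the neighborhood of the percentages quoted above. This base-rate problem is a property of rare events, not of any particular algorithm.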
The idea that predictions can be made about what someone may or may not do also negates the idea of free will, and it leads to a slippery slope where people are guilty until proven innocent, not the other way around.
Berk's work can certainly yield positive results. But, as with all things, moderation is key.