‘Minority Report’ Becomes Reality – DHS Will Arrest You Before You Commit A Crime


In the 2002 Tom Cruise film “Minority Report,” a government unit called Precrime arrests people before they have committed any offense, based on information obtained from a trio of mysterious siblings with psychic powers. We aren’t quite there yet, but the U.S. government appears headed toward a similar effort using information obtained from artificial intelligence (AI).

The idea is that AI has the capacity to analyze data and predict events, in this case specifically insurrection. We are not talking about coups or revolutions abroad, though. We are talking about insurrection right here at home, specifically a repeat of the attempted “revolution” of January 6, 2021.

Yes, Precrime is coming to arrest you. A recent Washington Post article on the topic had this headline.

“The battle to prevent another Jan. 6 features a new weapon: The algorithm”

Washington Post

The proponents of this approach could not be more excited. “We now have the data — and opportunity — to pursue a very different path than we did before,” said Clayton Besaw, of CoupCast, a machine-learning-driven program based at the University of Central Florida. He was celebrating the fact that we can now, in his opinion, prevent any recurrence of January 6th by acting in advance to stop “political violence.”

The idea behind predicting acts in advance is to empower AI to monitor a broad range of factors (economic data, “social trust” levels, weather, and so on) and produce predictions of behavior before any events occur. We won’t need to wait for a mob to form or even for a group to organize. We will arrest you before you even know you want to form a group.
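To make the concept concrete, here is a purely illustrative sketch of the kind of indicator-driven risk scoring being described. The feature names, weights, and baseline below are invented for illustration; they are not taken from CoupCast, DHS, or any real system.

```python
import math

# Hypothetical indicator weights -- invented for illustration only,
# not drawn from any actual prediction program.
WEIGHTS = {
    "economic_stress": 0.8,       # e.g. an unemployment/inflation signal, scaled 0..1
    "social_trust_decline": 1.2,  # survey-based erosion of "social trust", scaled 0..1
    "online_mobilization": 1.5,   # volume of organizing chatter online, scaled 0..1
}
BIAS = -2.0  # baseline: most places, most of the time, see no unrest

def unrest_risk(indicators: dict) -> float:
    """Return a logistic risk score in [0, 1] from weighted indicators."""
    z = BIAS + sum(WEIGHTS[k] * indicators.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

calm = unrest_risk({"economic_stress": 0.1,
                    "social_trust_decline": 0.2,
                    "online_mobilization": 0.1})
tense = unrest_risk({"economic_stress": 0.9,
                     "social_trust_decline": 0.8,
                     "online_mobilization": 0.9})
print(f"calm: {calm:.2f}, tense: {tense:.2f}")
```

Even in this toy form, notice that the choice of inputs and weights rests entirely with whoever builds the model, which is precisely the concern raised by critics later in this article.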

“For the 2024 election God knows we absolutely need to be doing this,” said Sheldon Himelfarb, chief executive of PeaceTech, a group pushing the use of AI in this way. “You can draw a line between data and violence in elections.”

Even as the use of AI in this way is still being perfected, the Department of Homeland Security has already reorganized to maximize its use and utility. A new “Center for Prevention Programs and Partnerships” (CP3) has been established. It focuses on what is said on Twitter, Facebook, and other social media platforms and attempts to predict behavior.

According to the DHS press release announcing the formation of the new group, “DHS’s efforts are grounded in an approach to violence prevention that leverages behavioral threat assessment and management tools and addresses early-risk factors that can lead to radicalization to violence.” According to Secretary of Homeland Security Alejandro Mayorkas, future criminals “typically exhibit behaviors that are recognizable to many but are best understood by those closest to them, such as friends, family, and classmates.”

“As a way to gauge potential threats, potential narratives that animate people to action, the online space is where that’s at,” says Oren Segal, vice president of the Center on Extremism at the Anti-Defamation League. “This is why the insurrection was predictable from our point of view, because the planning and the organizing was happening in plain sight. … This is not an easy issue, but one thing we can all agree on is that in order to get ahead of the next threat, you need to go into the spaces in which the extremists are present.”

Much of the justification being used to push these measures is, of course, based on a complete misstatement of what happened on January 6th. A paper by the Middlebury Institute, which works closely with DHS, reads in part:

“On January 6, 2021, a large group of Trump supporters, QAnon adherents, and members of extremist movements rioted at the United States Capitol building as Congress was certifying the election of Joe Biden as the 46th President of the United States. The attack, which left five dead including one Capitol police officer, set an unfortunate precedent for how the rise of conspiracy theorists, extremists, and domestic terrorists can metastasize and spark violent activity in the United States.

This attack could have been prevented. Clear indicators suggested that thousands of Donald Trump supporters and violent extremists were planning to descend on Washington, D.C., for a pro-Trump rally geared toward protesting the election results and even stopping the certification from occurring…Emerging technologies may have been able to mitigate the riot, as they can have the capacity to improve the capabilities of tools across a broad range of domains. Specifically, artificial intelligence is considered to be one of the most versatile emerging technologies with a potential to exponentially increase the productivity and efficiency of various facets in an array of fields, such as medicine, agriculture, policing, and counterterrorism.”

Middlebury Institute

The problem with all of this, of course, is that no one has a true crystal ball, and we do not all agree on what constitutes an indicator of future violent behavior. While DHS may consider individuals spreading “election disinformation” to be future terrorists, I may well consider them patriots telling the truth about a corrupt electoral system.

Activist Ed Hasbrouck, a consultant to the nonprofit Identity Project, summed it up quite well in his comments on DHS’s initiative.

“CP3’s attempts to predict future crimes are to be based on behavioral patterns — i.e., profiling — and on encouraging members of the public to inform on their families, friends, and classmates,” Hasbrouck wrote. “The problem … is that the law does not permit prosecution based solely on patterns of lawful behavior. With good reason: ‘precrime’ prediction is a figment of the imagination of the creators of a dystopian fantasy movie, ‘Minority Report.’” He added, “Neither DHS nor anyone else actually has any precogs (human, robotic, or cybernetic) like those in the movie who can predict future crimes or any profile or algorithm that actually enables it to predict who will commit future crimes.”


We live in a world where, every day, the line between fiction and reality seems harder and harder to distinguish. Nowhere is that more true than in this desire to use AI to identify those who must be subjected to the power of the state and arrested before they can act. Precrime is becoming a reality, and we will all pay the price.