Mind-Blowing Claim: Did ChatGPT Write a Police Use of Force Report?

A footnote in a 223-page ruling by US District Judge Sara Ellis, examining immigration raids in Chicago, revealed that an officer used ChatGPT to draft the narrative of a use of force report. The judge stated that this further undermined the credibility of the reports.
Last week, a US judge issued a 223-page opinion sharply criticizing the Department of Homeland Security (DHS) over how raids targeting undocumented immigrants in Chicago were carried out. Two sentences buried in a footnote of that opinion revealed that a law enforcement officer had used ChatGPT to write a report documenting the use of force against an individual.
The decision, written by US District Judge Sara Ellis, criticized the conduct of Immigration and Customs Enforcement (ICE) officials and other agencies during the operation named “Operation Midway Blitz,” in which more than 3,300 people were arrested and over 600 were detained by ICE, amid repeated violent confrontations with protesters and citizens. Agencies were required to document such incidents in use of force reports. But Judge Ellis noticed frequent inconsistencies between officers’ body camera footage and the written records, and declared the reports unreliable.
Judge Ellis also found that at least one report was not even written by an officer. As noted in her footnote, body camera footage showed an officer “asking ChatGPT to generate a narrative for a report based on a short sentence regarding an encounter and a few images.” Despite giving the AI only that limited information, the officer submitted ChatGPT’s output as the report, raising the possibility that the model filled in the remaining gaps with assumptions.
As Judge Ellis wrote in the footnote, “Agents’ use of ChatGPT to generate use of force reports further undermines the credibility of the reports and may explain the inaccuracies in these reports in light of body camera footage.”
Worst Case Scenario of AI Use

According to the Associated Press, it is unclear whether DHS has any explicit policy on the use of generative AI tools to draft reports. Given that generative AI models tend to fill gaps with fabricated details (hallucinations) when they lack the relevant information, this is plainly not best practice.
DHS maintains a dedicated page on AI use within the agency and, after testing commercially available chatbots including ChatGPT, has even deployed its own chatbot to help officers with their “daily activities.” The footnote, however, gives no indication that the officer used the agency’s internal tool; instead, it appears the person filling out the report went straight to ChatGPT and uploaded the material. It should come as no surprise that an expert described this to the Associated Press as the “worst case scenario” of AI use by law enforcement.