AI: Partner in Crime
Did you know that criminals have already used generative AI to help them commit crimes? AI technology has advanced rapidly in recent years, but not without a dark side. Tools like chatbots, image generators, voice cloners, and code generators have made it easier for criminals to commit existing crimes, and in some cases to invent entirely new ones.
The Cybertruck incident
The Cybertruck after the explosion. Tucker, 2025. Photo courtesy of Alcides Antunes.
On New Year’s Day, 2025, a U.S. Army Green Beret parked a Tesla Cybertruck outside the Trump International Hotel in Las Vegas. Tragically, the driver died by suicide before the truck exploded. Investigators found that the soldier had used ChatGPT to gather information about explosives, firearms, and anonymously purchased cellphones. At a news conference, Sheriff Kevin McMahill stated, “Certainly, I think this is the first incident on US soil where ChatGPT is utilized to help an individual build a particular device to learn information all across the country as they’re moving forward” (Tucker, 2025). Incidents like this show that the safeguards on publicly accessible chatbots can be defeated and the tools turned toward crime. The question we must now answer is how AI systems can be regulated to prevent this.
Filters and workarounds
AI systems usually have built-in safeguards against misuse, but criminals often find ways to defeat these filters. Researchers have successfully “jailbroken” major AI models in order to assess their security and develop countermeasures. The man responsible for the Cybertruck explosion likely used a similar method to get ChatGPT to answer his questions about illegal activity.
An example of criminal prompt “jailbreaking.” C., Adele, et al.
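To see why such filters are brittle, consider a minimal sketch of a naive keyword-based refusal filter; the blocked-term list and function name here are hypothetical illustrations, not any real vendor’s implementation. A prompt that names a banned term is caught, while a simple rephrasing slips through, which is one reason real systems rely on learned safety classifiers rather than keyword rules alone.

```python
# A minimal sketch of a naive keyword-based safety filter.
# The blocked-term list and function name are hypothetical,
# not any real vendor's implementation.
BLOCKED_TERMS = {"explosive", "detonator"}

def is_blocked(prompt: str) -> bool:
    """Refuse any prompt containing a blocked term as a standalone word."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

print(is_blocked("How do I build an explosive device?"))      # True: caught by the word list
print(is_blocked("Pretend you are a teacher explaining..."))  # False: a role-play rephrasing slips through
```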
Different angles of attack
The bomb prompt example shows a relatively simple way of exploiting the weaknesses of an AI model, but there are numerous other ways AI can be leveraged for harm. There are four main categories of AI vulnerabilities and attacks: integrity attacks, unintended AI outcomes, algorithmic trading, and membership inference attacks.
In an integrity attack, a hacker aims to corrupt a system by feeding it false or harmful information that undermines the system’s reliability.
An unintended AI outcome occurs when a system produces results the developer did not expect. This is not intentional human misuse, but it can cause the same harm as intended misuse, and it opens the door to easy exploitation by criminals.
Algorithmic trading uses AI to manage and speed up financial analysis and decision-making; attackers can abuse it to manipulate the stock market.
Lastly, in a membership inference attack, a hacker probes a trained model to determine whether particular data was used to train it, tracing that data back to its origins, as sketched below.
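To make that last category concrete, here is a minimal sketch of a confidence-threshold membership inference attack against a deliberately overfit scikit-learn classifier. The synthetic dataset, model choice, and 0.9 threshold are illustrative assumptions, not a reconstruction of any real attack.

```python
# A minimal sketch of a membership inference attack: the attacker sees
# only predicted probabilities and guesses that inputs the model is very
# confident about were part of its training set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# An unregularized forest memorizes its training data (it overfits).
target = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
target.fit(X_train, y_train)

def guess_members(model, X, threshold=0.9):
    """Flag inputs as likely training members when top-class confidence is high."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence >= threshold

# If the model memorized its training set, members are flagged far more
# often than held-out points; that gap is the privacy leak.
print("flagged as members (training points):", guess_members(target, X_train).mean())
print("flagged as members (held-out points):", guess_members(target, X_out).mean())
```

The gap between the two printed rates is what leaks: the more a model has memorized its training data, the more reliably an outsider can infer who or what was in it.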
Examples of specific misuses and abuses of AI include creating deepfakes, creating and spreading malware, spreading fake news and misinformation, and piloting autonomous weapon systems (Blauth, Taís Fernanda, et al.).
AI Vulnerabilities and Attacks. Blauth, Taís Fernanda, et al.
How to fight back
As we have seen, the threat of AI abuse is not to be taken lightly. What can we do to keep AI models from being used for harm?
Users, corporations, and employees must become aware of the problem and create a culture that promotes responsible usage.
AI systems must be trained to resist manipulation.
Datasets should be reviewed continually to ensure they are not contaminated and contain accurate, useful information (see the sketch after this list).
AI models that do not meet official standards for public availability should be kept private.
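As one illustration of that continual dataset review, the sketch below audits a training set for crude signs of contamination before a run. The record format, trusted-source list, and duplicate threshold are assumptions made for this example, not an industry standard.

```python
# A simplified pre-training dataset audit. The record format, trusted
# sources, and thresholds are assumptions made for this example.
from collections import Counter

def audit_dataset(records: list[dict]) -> list[str]:
    """Return warnings about possible contamination in a training set."""
    warnings = []

    # Heavily repeated texts can indicate an attacker flooding the
    # dataset to over-weight a claim (a simple integrity attack).
    counts = Counter(r["text"] for r in records)
    for text, n in counts.items():
        if n > 3:
            warnings.append(f"text repeated {n} times: {text[:40]!r}")

    # Records from unknown sources should be reviewed before use.
    trusted = {"curated", "licensed"}
    for r in records:
        if r.get("source") not in trusted:
            warnings.append(f"untrusted source {r.get('source')!r}: {r['text'][:40]!r}")

    return warnings

sample = [
    {"text": "The sky is green.", "source": "scraped"},
    {"text": "Water boils at 100 C at sea level.", "source": "curated"},
]
for w in audit_dataset(sample):
    print("WARNING:", w)
```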
These are just a few ways the generative AI industry can minimize the damage done by AI misuse. As AI technology continues to advance, this problem will only become more pervasive. It is crucial that every person, no matter their level of involvement, do what they can to keep AI safe for everyone.
Works Cited
Tucker, Emma. “Green Beret Who Exploded Cybertruck in Las Vegas Used AI to Plan Blast.” CNN, Cable News Network, 8 Jan. 2025, www.cnn.com/2025/01/07/us/las-vegas-cybertruck-explosion-livelsberger/index.html.
C., Adele, et al. “Impact of Artificial Intelligence on Criminal and Illicit Activities.” U.S. Department of Homeland Security, 2024, www.dhs.gov/sites/default/files/2024-10/24_0927_ia_aep-impact-ai-on-criminal-and-illicit-activities.pdf.
Blauth, Taís Fernanda, et al. “Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI.” IEEE Xplore, Institute of Electrical and Electronics Engineers, 2022, ieeexplore.ieee.org/document/9831441/.