In the News

Congress Takes One Step Closer to Challenging ISIS’ AI Revolution

Originally Published in Newsweek on November 20, 2025.

A new bill passed by the House of Representatives seeks to ramp up U.S. efforts to combat the growing use of artificial intelligence in the hands of militant groups such as the Islamic State (ISIS).

Representative August Pfluger, the senior Texas Republican and reserve U.S. Air Force colonel who introduced the Generative AI Terrorism Risk Assessment Act passed unanimously by lawmakers late Wednesday, said the move marks a win in the ever-evolving battle against malign actors waging an increasingly high-tech war in cyberspace.

“I spent two decades as a fighter pilot, flying combat missions in the Middle East against terrorist organizations," Pfluger said in a statement shared with Newsweek. "Since then, I have witnessed the terror landscape evolve into a digital battlefield shaped by the rapid rise of artificial intelligence. To confront this emerging threat and stop terrorist organizations from weaponizing AI to recruit, train, and inspire attacks on U.S. soil, I am proud that the House passed my Generative AI Terrorism Risk Assessment Act today."

"While my uniform has changed, my mission to protect the United States from all enemies, foreign and domestic, has not," he added. "From the cockpit to serving as Chairman of the Homeland Security Subcommittee on Counterterrorism and Intelligence, I know how critical it is for our policies and capabilities to keep pace with the threats of tomorrow. That is exactly what my legislation ensures.”

An Innovative Insurgency

The use of generative AI, or Gen AI, by ISIS is no longer hypothetical, as Newsweek has recently reported. The group and its supporters have already begun utilizing this technology in a variety of ways, from ramping up the production and dissemination of jihadi material online to generating fake news anchors providing updates on attacks being conducted from West Africa to Afghanistan.

Official ISIS outlets have even devoted space to educating followers on safe practices, with articles and graphics discussing the best platforms to use.

"We’ve seen Gen AI being used in propaganda creation and translation by terrorist groups of varied ideological persuasions (but more commonly, used by their supporters), some evidence of it being used for operational research, and some fairly rudimentary experimentation with chatbots," David Wells, a global security consultant who works with a range of international institutions, including the United Nations Office of Counter-Terrorism, the Council of Europe and the Organization for Security Cooperation in Europe, told Newsweek.

And while Wells argued that militant groups' weaponization of AI has not yet "been a game-changer in terms of their ability to recruit and radicalize or conduct attacks," because their adoption of such services is "following a non-linear pathway, with high profile experiments and energized supporter bases but not necessarily the kind of methodical investment in Gen AI as a new tech," he warned that the worst may be yet to come.

"We know that Gen AI’s user base continues to expand rapidly, and that new ways in which Gen AI tools can be misused are being discovered every day," said Wells, who is also an honorary research associate at Swansea University’s Cyber Threats Research Centre, part of the Vox Pol Institute’s leadership team and a Middle East Institute affiliate. "As the barriers to entry continue to reduce, we should expect Gen AI to play a greater role for terrorists and violent extremists in the near future."

"And if they start to invest serious time and effort and learn from other groups—particularly criminals—who are seeing real benefits from Gen AI, things could shift quite quickly," he added, "particularly in terms of the misuse of chatbots and taking more innovative approaches to radicalization using Gen AI, including through games, music and other cultural touchpoints." 

Raising Alarms

The rise of AI as a tool for ISIS cyberwarfare has not gone unnoticed. Observers have warned about the prospect of jihadis undergoing an AI revolution for years, and authorities have also sought to grapple with the issue.

A report published Monday by The Telegraph revealed that the United Kingdom's top spy agencies, MI5 and MI6, were actively tracking the use of AI by ISIS to recruit British nationals to both stage attacks across the West and shore up fighting forces abroad.

The trend came at a time when ISIS and its global affiliates were actively seeking to return the group to the height of power it held a decade ago, when the self-proclaimed "caliphate" spanned large parts of Iraq and Syria and gained footholds in many other parts of the world.

ISIS may have been beaten back in the Middle East, but loyal remnants in the region continue to find opportunities to strike. Affiliates in several African regions, such as those in the Sahel (ISSP) and West Africa (ISWAP), have managed to expand their territory in recent years, while the Afghanistan-based Khorasan province (ISKP, or ISIS-K) has demonstrated an ability to conduct large-scale operations abroad, claiming some of the deadliest attacks in the modern histories of Iran and Russia just last year.

A wave of plots by supporters also continues to test Western intelligence agencies, particularly in Europe. The U.S. is not immune, either, with those allegedly inspired by ISIS behind a deadly New Year's Day truck ramming in New Orleans and an FBI-foiled Halloween attack plot in Detroit.

As President Donald Trump's administration seeks to tackle ISIS' attempted revival, the "Generative AI Terrorism Risk Assessment Act" would require the Department of Homeland Security "to periodically provide Congress with an assessment of threats to the United States posed by the use of generative artificial intelligence (AI) for terrorism."

The bill additionally "requires DHS to review and disseminate related information gathered by state and major urban area fusion centers and the National Network of Fusion Centers," and "requires other federal agencies to share related information with DHS."

In its latest assessment of threats to the homeland, issued in September, DHS found that ISIS and Al-Qaeda, whose own affiliate in the Sahel is currently gaining ground, "maintain worldwide networks of supporters that could target the Homeland," and that their "media outlets promote violent rhetoric intended to inspire US persons to mobilize to violence, while foreign terrorists continue engaging online supporters to solicit funds; create and share media; and encourage followers to attack the Homeland, US interests, and what they perceive as the West."

In a potentially even more startling assessment, the DHS report anticipated that "threat actors will continue to explore emerging and advanced technologies to aid their efforts in developing and carrying out chemical and biological attacks," noting how "foreign and domestic extremists online expressed interest in using DNA modification to develop biological weapons to target specific groups."

"We remain concerned about the potential exploitation of advances in artificial intelligence (AI) and machine learning to proliferate knowledge that supports the development of novel chemical or biological agents," the DHS report said.

The War Continues

The passage of Pfluger's bill in the House has been applauded by a number of influential lawmakers, including Speaker Mike Johnson of Louisiana, the site of the ISIS-inspired rampage that killed 14 people at the beginning of the year.

Johnson referred to the legislation as pivotal "to ensure we stay ahead of emerging threats and prevent terrorist organizations from pushing propaganda and exploiting generative AI to radicalize, recruit, and spread violence on American soil."

But significant challenges lie ahead in the battle against an AI-wielding ISIS and other entities that seek to exploit breakthrough technologies for malicious purposes.

Wells listed a number of factors that complicate efforts by law enforcement and government agencies to effectively counter the growing issue.

These include "the speed with which Gen AI technology is developing; the number of apps, platforms and other internet services where terrorist content and activity features (and the reality of trying to monitor these); the number of stakeholders with relevant information on the use of Gen AI by terrorists (across academia, tech, other governments, civil society)" and "the difficulty of reliably detecting AI-generated activity, which means it’s difficult to quantify the scale of the problem."

Beyond these, he also identified "the decline in content moderation standards by a number of the larger social media platforms, particularly X (and the collective encouragement of AI slop/content by all of these platforms)," as well as a "lack of resources and technological capabilities," as elements that serve to hinder state-backed initiatives against bad actors with AI in their arsenal.

Wells, for his part, said the House-endorsed legislation could mark a step forward in the battle: "any effort like the Generative AI Terrorism Risk Assessment Act that requires government agencies to collate relevant data on this issue, analyze it for threat and risk, and share that information more widely should be welcomed."

Yet "this should be just one part of the response," he added.

To further bolster an all-of-government approach to combating the use of AI by ISIS and others who seek to capitalize on groundbreaking technology to cause harm, Wells proposed several additional steps, including "much greater government involvement in multi-stakeholder information sharing and red teaming exercises, helping to better anticipate the threat" and "creating relationships between law enforcement and Gen AI companies to better understand the guardrails in place and what behavior could or should trigger engagement with law enforcement."

Finally, he suggested taking the fight to companies themselves, by "putting more pressure on the tech platforms hosting/publishing this activity to remove it and/or prevent it from going online as part of their broader content moderation efforts."