Trump’s use of AI images pushes new boundaries, further eroding public trust, experts say
By KAITLYN HUAMANI, Associated Press Technology Writer
LOS ANGELES (AP) — The Trump administration has not shied away from sharing AI-generated imagery online, embracing cartoonlike visuals and memes and promoting them on official White House channels.
But an edited — and realistic — image of civil rights attorney Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what is real and what is fake.
Homeland Security Secretary Kristi Noem’s account posted the original image from Levy Armstrong’s arrest before the official White House account posted an altered image that showed her crying. The doctored picture is part of a deluge of AI-edited imagery that has been shared across the political spectrum since the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis.
The White House’s use of artificial intelligence, however, has troubled misinformation experts who fear the spread of AI-generated or edited images erodes public perception of the truth and sows distrust.
In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with deputy communications director Kaelan Dorr writing on X that the “memes will continue.” White House deputy press secretary Abigail Jackson also shared a post mocking the criticism.
David Rand, a professor of information science at Cornell University, said calling the altered image a meme “certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons. This presumably aims to shield them from criticism for posting manipulated media.” He said the purpose of sharing the altered arrest image seems “much more ambiguous” than the cartoonish images the administration has shared in the past.
Memes have always carried layered messages that are funny or informative to people who understand them, but indecipherable to outsiders. AI-enhanced or edited imagery is just the latest tool the White House uses to engage the segment of Trump’s base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing firm.
“People who are terminally online will see it and instantly recognize it as a meme,” he said. “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.”
All the better if it prompts a fierce reaction, which helps it go viral, said Henry, who generally praised the work of the White House’s social media team.
The creation and dissemination of altered images, especially when they are shared by credible sources, “crystallizes an idea of what’s happening, instead of showing what is actually happening,” said Michael A. Spikes, a professor at Northwestern University and news media literacy researcher.
“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” he said. “By sharing this kind of content, and creating this kind of content … it is eroding the trust — even though I’m always kind of skeptical of the term trust — but the trust we should have in our federal government to give us accurate, verified information. It’s a real loss, and it really worries me a lot.”
Spikes said he already sees the “institutional crises” around distrust in news organizations and higher education, and feels this behavior from official channels inflames those issues.
Ramesh Srinivasan, a professor at UCLA and the host of the Utopias podcast, said many people are now questioning where they can turn for “trustable information.” “AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence,” he said.
Srinivasan said that when the White House and other officials share AI-generated content, it not only invites everyday people to keep posting similar content but also grants permission to others in positions of credibility and power, like policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to “algorithmically privilege” extreme and conspiratorial content — which AI generation tools can create with ease — “we’ve got a big, big set of challenges on our hands.”
AI-generated videos related to Immigration and Customs Enforcement actions, protests and interactions with citizens have already proliferated on social media. After Good was fatally shot while she was in her car, several AI-generated videos began circulating of women driving away from ICE officers who told them to stop. Many fabricated videos of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces, are also circulating.
Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said the bulk of these videos are likely coming from accounts that are “engagement farming,” or looking to capitalize on clicks by generating content with popular keywords and search terms like ICE. But he also said the videos are getting views from people who oppose ICE and DHS and could be watching them as “fan fiction,” or engaging in “wishful thinking,” hoping that they’re seeing real pushback against the organizations and their officers.
Still, Carrasco also believes that most viewers can’t tell if what they’re watching is fake, and questions whether they would know “what’s real or not when it actually matters, like when the stakes are a lot higher.”
Even when there are blatant signs of AI generation, like street signs with gibberish on them or other obvious errors, only in the “best-case scenario” would a viewer be savvy enough, or paying close enough attention, to register the use of AI, Carrasco said.
This issue is, of course, not limited to news surrounding immigration enforcement and protests. Fabricated and misrepresented images following the capture of deposed Venezuelan leader Nicolás Maduro exploded online earlier this month. Experts, including Carrasco, think the spread of AI-generated political content will only become more commonplace.
Carrasco believes the widespread implementation of a watermarking system that embeds information about the origin of a piece of media into its metadata could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn’t think it will be widely adopted for at least another year.
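That coalition’s system, branded Content Credentials, attaches a signed manifest describing a file’s origin and edit history. As a rough illustration of how the embedding works, the following is a minimal Python sketch, under stated assumptions, that checks whether a JPEG appears to carry a C2PA manifest: per the C2PA specification, the manifest travels in JUMBF boxes inside JPEG APP11 segments, and the manifest box is labeled “c2pa.” The function name here is illustrative, and the check detects presence only; it does not verify signatures, and since platforms often strip or re-encode metadata on upload, the absence of a manifest proves nothing about a file’s authenticity.

# Heuristic C2PA (Content Credentials) presence check for JPEG files.
# A minimal sketch, not a validator: it only looks for an embedded
# manifest; real verification of signatures and hashes requires a
# full C2PA SDK. Standard library only.

import struct
import sys

APP11 = 0xFFEB  # JPEG marker segment where C2PA stores its JUMBF manifest
SOS = 0xFFDA    # start of scan: entropy-coded image data follows

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):       # missing SOI: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        marker, length = struct.unpack(">HH", data[i:i + 4])
        if marker >> 8 != 0xFF:                # lost sync with segment stream
            break
        if marker == APP11:
            payload = data[i + 4 : i + 2 + length]
            # C2PA APP11 payloads carry JUMBF boxes; the manifest box is
            # labeled "c2pa", so that byte string here is a strong hint.
            if b"c2pa" in payload:
                return True
        if marker == SOS:                      # headers end; stop scanning
            break
        i += 2 + length                        # length includes its own 2 bytes
    return False

if __name__ == "__main__":
    for name in sys.argv[1:]:
        found = has_c2pa_manifest(name)
        print(f"{name}: {'embedded manifest found' if found else 'no manifest detected'}")

A tool like this can say a credential is present, but not that it is trustworthy, which is why the adoption Carrasco describes matters across the whole pipeline, from cameras and editing tools to the platforms that display the images.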
“It’s going to be an issue forever now,” he said. “I don’t think people understand how bad this is.”
Associated Press writers Jonathan J. Cooper in Phoenix and Barbara Ortutay in San Francisco contributed to this report.