โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

How deepfake technology works

10 December 2024 at 23:15

Doctored images have been around for decades. The term "Photoshopped" is part of everyday language. But in recent years, it has seemingly been replaced by a new word: deepfake.

It's almost everywhere online, but you likely won't find it in your dictionary at home. What exactly is a deepfake, and how does the technology work?

RELATED STORY | Scripps News Reports: Sex, Lies, and Deepfakes

A deepfake is an image or video that has been generated by artificial intelligence to look real. Most deepfakes use a type of AI called a "diffusion model." In a nutshell, a diffusion model creates content by stripping away noise.

"With diffusion models, they found a very clever way of taking an image and then constructing that procedure to go from here to there," said Lucas Hansen said. He and Siddharth Hiregowdara are cofounders of CivAI, a nonprofit educating the public on the potential and dangers of AI.

How diffusion models work

It can get complicated, so imagine the AI or diffusion model as a detective trying to catch a suspect. Like a detective, it relies on its experience and training.

It recalls a previous case: a sneaky cat on the run. Every day the cat added more and more disguises. On Monday, no disguise. On Tuesday, it put on a little wig. On Wednesday, it added some jewelry. By Sunday, it was unrecognizable, wearing a cheeseburger mask.

By studying how the disguise changed, the detective learned to tell what the cat was wearing on any given day.

AI diffusion models do something similar with noise, learning what something looks like at each step.

"The job of the diffusion model is to remove noise," Hiregowdara said. "You would give the model this picture, and then it will give you a slightly de-noised version of this picture."

RELATED STORY | Scripps News got deepfaked to see how AI could impact elections

When it's time to solve the case and generate a suspect, we give it a clue: the prompt we type when creating an AI-generated image.

"We have been given the hint that this is supposed to look like a cat. So what catlike things can we see in here? Okay, we see this curve, maybe that's an ear," Hiregowdara said.

The "detective" works backward, recalling its training.

It sees a noisy image. Thanks to the clue, it knows it is looking for a particular suspect: a cat. It subtracts disguises (noise) until it finds the new suspect. Case closed.

Now imagine the "detective" living and solving crimes for years and years. It learns and studies everything: landscapes, objects, animals, people, anything at all. So when it needs to generate a suspect or an image, it remembers its training and creates one.
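
Strung together, that backward walk looks roughly like the loop below. This is only a conceptual sketch: in a real system, predict_noise() is a trained network whose estimate depends on both the current image and the prompt.

```python
import numpy as np

def predict_noise(noisy_image, prompt):
    # Placeholder for the trained model's prompt-guided noise estimate.
    return np.zeros_like(noisy_image)

def generate(prompt, steps=50, size=(64, 64, 3)):
    image = np.random.randn(*size)                    # start from pure noise
    for _ in range(steps):
        estimated_noise = predict_noise(image, prompt)
        image = image - estimated_noise / steps       # peel off one "disguise" per step
    return image

suspect = generate("a photo of a cat")                # the clue guiding the search
```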

Deepfakes and faceswaps

Many deepfake images and videos employ some type of face swapping technology.

You've probably experienced this kind of technology already: face-swapping filters on Snapchat, Instagram or TikTok use technology similar to diffusion models, recognizing faces and replacing them in real time.

"It will find the face in the image and then cut that out kind of, then take the face and convert it to its internal representation," Hansen said.

The results are refined, and the process is repeated frame by frame.
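
In rough outline, that face-swap pipeline could be sketched as below, with every trained component (detector, encoder, renderer) replaced by a trivial placeholder. The helper functions and image sizes are made up for illustration.

```python
import numpy as np

def detect_face(frame):
    # Placeholder detector: pretend the face is always a centered box (y, x, h, w).
    h, w = frame.shape[:2]
    return (h // 4, w // 4, h // 2, w // 2)

def encode(face):
    # Placeholder for converting a face to its "internal representation."
    return face.mean(axis=(0, 1))

def render(representation, shape):
    # Placeholder for rendering the new face from that representation.
    return np.broadcast_to(representation, shape).copy()

def swap_faces(video_frames, source_face):
    swapped = []
    for frame in video_frames:                        # refined, frame by frame
        y, x, h, w = detect_face(frame)               # find the face in the image
        representation = encode(source_face)          # convert the new face
        new_face = render(representation, frame[y:y + h, x:x + w].shape)
        out = frame.copy()
        out[y:y + h, x:x + w] = new_face              # blend the result back in
        swapped.append(out)
    return swapped

frames = [np.zeros((256, 256, 3), dtype=np.float32) for _ in range(3)]
source = np.ones((128, 128, 3), dtype=np.float32)
result = swap_faces(frames, source)
```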

The future and becoming our own detectives

As deepfakes become more and more realistic and tougher to detect, understanding how the technology works at a basic level can help us prepare for any dangers or misuse. Deepfakes have already been used to spread election disinformation, create fake explicit images of a teenager, even frame a principal with AI-created racist audio.

"All the netizens on social media also have a role to play," Siwei Lyu said. Lyu is a SUNY Empire Innovation Professor at the University of Buffalo's Department of Computer Science and Engineering, and the director of the Media Forensics Lab. His team has created a tool to help spot deepfakes called "DeepFake-o-meter."

"We do not know how to handle, how to deal, with these kinds of problems. It's very new. And also requires technical knowledge to understand some of the subtleties there," Lyu said. "The media, the government, can play a very active role to improve user awareness and education. Especially for vulnerable groups like seniors, the kids, who will start to understand the social media world and start to become exposed to AI technologies. They can easily fall for AI magic or start using AI without knowing the limits."

RELATED STORY | AI voice cloning: How programs are learning to pick up on pitch and tone

Both Lyu and CivAI believe in exposure and education to help combat any potential misuse of deepfake technology.

"Our overall goal is that we think AI is going t impact pretty much everyone in a lot of different ways," Hansen said. "And we think that everyone should be aware of the ways that it's going to change them because it's going to impact everyone."

"More than just general education just knowing the facts and having heard what's going to happen," he added. "We want to give people a really intuitive experience of what's going on."

Hansen went on to explain CivAI's role in educating the public.

"We try and make all of our demonstrations personalized as much as possible. What we're working on is making it so people can see it themselves. So they know it's real, and they feel that it's real," Hansen said. "And they can have a deep gut level feel for tthe impact that it's going to have."

"A big part of the solution is essentially just going to be education and sort of cultural changes," he added. "A lot of this synthetic content is sort of like a new virus that is attacking society right now, and people need to become immune to it in some ways. They need to be more suspicious about what's real and what's not, and I think that will help a lot as well."

Real vs. fake: Can you spot AI-generated images?

3 December 2024 at 21:36

Three of these images are fake. Can you spot the real image?

Some images generated by artificial intelligence have become so convincingly real that there is no surefire way to spot the fakes. But experts say there are still things we can do to try to detect them.

"Media literacy is super awesome," said Matt Groh, assistant professor at Northwestern University. "But it needs to extend to AI literacy. Like the classic kind of things that you want to teach in media literacy, we still need to teach those same things. We just need to add the AI portion to it now."

RELATED STORY | Nobel Prize in physics awarded to 2 scientists for discoveries that enabled artificial intelligence

Groh's team at Northwestern released a guide on how to spot AI-generated images. The full preprint paper was released in June.

"So what we've done is we've articulated 5 different categories of artifacts, implausibilities," Groh said. "Ways to tell AI-generated image apart from a real photograph."

The academic preprint guide offers detailed tips, tricks and examples on spotting AI-generated images. It also teaches important questions to consider when consuming media.

Anatomical implausibilities

The first and easiest telltale signs: anatomical implausibilities.

Ask yourself: Are the fingers, eyes, and bodies off? Are there extra limbs or do they bend strangely? Are there too many teeth?

Stylistic implausibilities

Ask yourself: Do images seem plastic, glossy, shiny or cartoonish? Are they overly dramatic or cinematic?

Functional implausibilities

Ask yourself: Is text garbled? Is clothing strange? Are objects physically implausible, like a backpack strap that merges into clothing?

Violation of physics

Ask yourself: Are light and shadows off? Are there impossible reflections?

Sociocultural implausibilities

Ask yourself: Does the image show something just too unbelievable or historically inaccurate?

RELATED STORY | AI voice cloning: How programs are learning to pick up on pitch and tone

"What we're trying to do is give you a snapshot of what it looks like in 2024 and how we can help people move their attention as effectively as possible," Groh said.

"Education is really the biggest thing. There's education on the tools," said Cole Whitecotton, senior professional research associate at the National Center for Media Forensics.

Whitecotton encourages the public to educate themselves and try AI tools to know their capabilities and limits.

"I think everybody should go out and use it. And look at how these things do what they do and understand a bit of it," he said. "Everyone should interact with ChatGPT. In some way. Everyone should interact with Midjourney. And look at how these things do what they do and understand a bit of it."

Whitecotton suggests being inquisitive and curious when scrolling through social media.

"If you interacted with every piece of content in that way, then there you would be a lot less likely to be duped and to be sort of sucked into that sort of stuff, right?" he said.

"How do you interact with Facebook and with Twitter and all these things? How do you consume the media?" Whitecotton added.

RELATED STORY | Biden's AI advisor speaks on AI policy, deepfakes, and the use of AI in war

While AI-generated images and videos continue to evolve, Groh and his team offer a realistic approach to a changing technological landscape where tips and tricks may become outdated quickly.

"I think a real, good, useful thing is we build this. We update this every year. Okay, some of these things work. Some of these things don't. And I think once we have a base, we're able to update it," Groh said. "I think one of the problems is we didn't have a base. And so one of the things we're really excited about is even sharing our framework, because I think our framework is going to help people just navigate that conversation."

So were you able to guess which image is real?

If you guessed the image of the girl in the bottom left corner, you are correct!

"It sucks that there's this misinformation in the world. But it's also possible to navigate this new problem," Groh said.

If you want to test yourself even more, the Northwestern University research team has released a site that gives you a series of real and AI-generated images to differentiate.

AI voice cloning: How programs are learning to pick up on pitch and tone

26 November 2024 at 22:18

Voice cloning is an emerging technology powered by artificial intelligence, and it's raising alarms about potential misuse.

Earlier this year, New Hampshire voters experienced this firsthand when a deepfake mimicking President Joe Biden's voice urged them to skip the polls ahead of the primary.

The deepfake likely needed only a few seconds of the president's voice to create the clone. According to multiple AI voice cloning services, about 10 seconds of a real voice is all that is needed to recreate it. And that audio can easily come from a phone call or a video on social media.

"A person's voice is really probably not that information-dense. It's not as unique as you may think," James Betker, a technical staff member at OpenAI, told Scripps News.

Betker developed TortoiseTTS, an open-source voice cloning model.

"It's actually very easy to model, very easy to learn, the distribution of all human voices from a fairly small amount of data," Betker added.

How AI voice cloning works

AI models have been trained on vast amounts of data, learning to recognize human speech. Programs analyze the data and train repeatedly, learning characteristics such as rhythm, stress, pitch and tone.

"It can look at 10 seconds of someone speaking and it has stored enough information about how humans speak with that kind of prosody and pitch. Enough information about how people speak with their processing pitch and its weights that it can just continue on," Betker said.

Imagine a trained AI model as a teacher and the person cloning the voice as a student. When the student tries to create a cloned voice, the first attempt starts off as white noise. The teacher scores how close the student is to sounding correct. The student tries again and again based on these scores until it produces something close to what the teacher wants.

While this explanation is extremely simplified, the core idea holds: a cloned voice is generated bit by bit, based on probability distributions.

"I think, at its core, it's pretty simple," Betker said. "I think the analogy of just continuing with what you're given will take you pretty far here."

Some current AI models claim to need only two seconds of sample audio. While those results are not yet convincing, Betker says future models will need even less audio to create a convincing clone.

Retailers say they're ready for potential Trump tariffs

21 November 2024 at 00:06

President-elect Trump is promising major tariffs that could impact retailers and their consumers.

A tariff is a tax placed on goods when they cross national borders.

Trump has said all U.S. trading partners could face tariffs of up to 20%. He's said goods from China could face tariffs of 60% or higher on some specific products.

"I will impose whatever tariffs are required 100%, 200%, 1,000%," Trump said of some Chinese imports during an event in October.

The potential for these tariffs already has some retailers rethinking their business, and it could mean consumers paying higher prices.

"It's not a one size fits all situation with this," said Bill Reinsch, Chair in International Business at the Center for Strategic and International Studies. "Each company, each retailer is going to decide what it wants to do for itself. Sometimes, they'll choose to eat part of the tariff. In other words, absorb some of the increased price and simply have a lower profit margin in order to maintain their market share. But most of the time, they pass part, if not all of it, onto the consumer."

Walmart's chief financial officer John David Rainey told CNBC if Trump's tariffs take effect "there probably will be cases where prices will go up for consumers."

Lowe's CEO Marvin Ellison also addressed the topic on the company's earnings call on Tuesday.

"Like everyone, we're waiting to see what happens when the Trump administration actually takes office in January," Ellison said. "Having said that, we feel good about the processes and the systems we put in place since the first Trump administration to manage tariffs or other challenges."

The Home Depot told Scripps News it's following this situation to see how it could impact its business.

"It's too early to speculate, but tariffs would impact our industry more broadly," The Home Depot said in a statement to Scripps. "The majority of our goods are sourced in the U.S. While the remaining products are not all sourced from Asia, we do source from several Asian countries, so we are watching this issue closely. Our teams have been through this before and we anticipate that we will manage through any new tariffs similarly to how we have done so in the past."

RELATED STORY | Trump will nominate Howard Lutnick to oversee 'tariff and trade' policy

Trump sees tariffs as having two purposes: raising revenue for the government and taking money from other countries. The Tax Foundation estimates a 20% tariff on all goods would raise $3.3 trillion for the federal government from 2025 through 2034.

The Peterson Institute for International Economics projects Trump's tariff plan could cost the average U.S. household $2,600 per year.

โŒ
โŒ