
OpenAI’s deepfake detector can identify images generated with DALL-E 3

The tool correctly identified 98% of the images


Published on May 10, 2024


AI gives us many opportunities to generate, enhance, and modify content. In addition, AI models are evolving fast, and the quality of their output keeps improving. However, while artificial intelligence can be a great assistant, wrongdoers can use it to generate deepfakes. So, OpenAI decided to launch a deepfake detector.

Deepfakes are nothing new. They’ve been around for a while, but in the past they weren’t as convincing. Even though we had tools to edit photos, videos, and audio recordings, the process was slow and costly. AI removed those barriers. For example, Sora AI can generate high-quality videos in a few minutes.

What can the deepfake detector from OpenAI do?

The deepfake detector from OpenAI can spot images generated with DALL-E 3 and flag content created with AI assistance. According to OpenAI, the tool correctly identified 98% of DALL-E 3 images, while incorrectly tagging about 0.5% of non-AI images as DALL-E 3 products.

OpenAI trained the deepfake image detector to spot content created with its own tools, so, unfortunately, it might not detect AI images generated with other tools. Additionally, the classifier copes with common modifications such as cropping, compression, and saturation changes, but it is less reliable against other kinds of edits.
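To put those two numbers together, here is a minimal sketch of what they imply in practice. Only the 98% detection rate and 0.5% false-positive rate come from OpenAI; the share of DALL-E 3 images in the scanned pool is purely an assumption for illustration:

```python
# Illustration only: the 98% and 0.5% rates are OpenAI's figures,
# the 10% DALL-E 3 share of the image pool is an assumption.
true_positive_rate = 0.98    # DALL-E 3 images correctly flagged
false_positive_rate = 0.005  # non-AI images wrongly flagged
assumed_dalle_share = 0.10   # assumed fraction of scanned images that are DALL-E 3

flagged_genuine = true_positive_rate * assumed_dalle_share
flagged_mistaken = false_positive_rate * (1 - assumed_dalle_share)

# Bayes' rule: probability that a flagged image really is a DALL-E 3 creation
precision = flagged_genuine / (flagged_genuine + flagged_mistaken)
print(f"Chance a flagged image is genuinely DALL-E 3: {precision:.1%}")  # ~95.6%
```

Under that assumption, roughly 19 out of 20 flagged images would really be DALL-E 3 output; the exact figure shifts with how many AI images are in the pool.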

Do AI images have metadata?

Besides the deepfake detector, OpenAI has started embedding C2PA provenance metadata in images generated with DALL-E 3. On top of that, the company will add it to Sora. However, skilled threat actors could still strip that metadata.
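As a rough illustration of why that metadata is fragile, here is a minimal sketch (assuming Pillow is installed and `dalle_image.png` is a hypothetical DALL-E 3 output; real C2PA manifests are normally inspected with dedicated tooling, not Pillow): re-encoding only the pixel data produces a copy with the embedded tags gone.

```python
# Minimal sketch: provenance metadata travels alongside the pixels, so
# re-encoding just the pixels silently discards it.
# Assumptions: Pillow is installed; "dalle_image.png" is a hypothetical file.
from PIL import Image

img = Image.open("dalle_image.png")
print(img.info)  # any metadata chunks Pillow can read show up here

# Copy only the pixel data into a fresh image and save it: the embedded
# tags do not follow, which is why provenance metadata alone is easy to strip.
stripped = Image.new(img.mode, img.size)
stripped.putdata(list(img.getdata()))
stripped.save("stripped.png")
```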

In addition, OpenAI incorporated audio watermarks into content generated with its Voice Engine. However, like Sora, the voice model is still under development, and it will take a while before it becomes available.

Ultimately, the deepfake image detector from OpenAI will help us identify DALL-E 3 creations, and Sora creations once that model is released. According to OpenAI, it will initially be available to a limited number of testers, research labs, and research-oriented journalism nonprofits. If you think the tool could help you, you can apply to become a tester for the DALL-E detection program.

What are your thoughts? Are you going to apply? Let us know in the comments.

More about the topics: AI, OpenAI

Sebastian Filipoiu

Sebastian is a content writer with a desire to learn everything new about AI and gaming. So, he spends his time writing prompts on various LLMs to understand them better. Additionally, Sebastian has experience fixing performance-related problems in video games and knows his way around Windows. Also, he is interested in anything related to quantum technology and becomes a research freak when he wants to learn more.

