
Microsoft and Google offer new Child Safety Commitments for AI

Developing AI models to protect kids from AI models


Published on April 23, 2024


Two of the biggest brands offering generative artificial intelligence services have come together to address the harm their AI platforms can pose to child safety.

In a post on the Microsoft on the Issues blog, the company announced its partnership with Thorn, a non-profit organization committed to protecting children from sexual abuse, and All Tech Is Human, which aims to tackle the risks generative AI poses to children.

As part of a new security pact dubbed Safety by Design, Microsoft has committed to the following three tenets to transparently address how it protects children from harm by its own AI services.

Across the business divide, Google also penned an update to its Safety and Security blog that echoes Microsoft's committed partnership with Thorn and All Tech Is Human.

Google’s voluntary commitment to address AI-generated child sexual abuse material (CSAM) with Thorn and All Tech Is Human includes:

Google is building on its decades of work with like-minded NGOs, industry peers, and law enforcement to combat CSAM by allowing greater access to the dedicated APIs in its free-to-license Child Safety Toolkit. Third-party partner organizations can use Google's toolkit APIs to monitor and report child sexual abuse and exploitation (CSAE) that could occur through its generative AI services.

Microsoft is also looking to engage with policymakers to cement future standards for addressing CSAM in light of expanding AI capabilities that are producing unforeseen outcomes.

We will also continue to engage with policymakers on the legal and policy conditions to help support safety and innovation. This includes building a shared understanding of the AI tech stack and the application of existing laws, as well as on ways to modernize law to ensure companies have the appropriate legal frameworks to support red-teaming efforts and the development of tools to help detect potential CSAM.


Kareem Anderson

Networking & Security Specialist

Kareem is a journalist from the Bay Area, now living in Florida. His passion for technology and content creation is unmatched, driving him to create well-researched articles and incredible YouTube videos.

He is always on the lookout for everything new about Microsoft, focusing on making easy-to-understand content and breaking down complex topics related to networking, Azure, cloud computing, and security.
