VentureBeat November 7, 2024
Michael Nuñez

French artificial intelligence startup Mistral AI launched a new content moderation API on Thursday, marking its latest move to compete with OpenAI and other AI leaders while addressing growing concerns about AI safety and content filtering.

The new moderation service, powered by a fine-tuned version of Mistral’s Ministral 8B model, is designed to detect potentially harmful content across nine different categories, including sexual content, hate speech, violence, dangerous activities, and personally identifiable information. The API offers both raw text and conversational content analysis capabilities.
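
For readers curious how a moderation service of this kind is typically consumed, the Python sketch below shows a plausible REST call. The endpoint URL, model identifier, and payload fields are illustrative assumptions rather than details taken from the announcement; consult Mistral's API documentation for the actual interface.

```python
import os
import requests

# Sketch of a request to a text-moderation endpoint of this kind.
# The URL, model name, and payload shape are assumptions for illustration.
API_URL = "https://api.mistral.ai/v1/moderations"  # assumed endpoint
API_KEY = os.environ["MISTRAL_API_KEY"]            # assumed auth scheme

payload = {
    "model": "mistral-moderation-latest",  # assumed model identifier
    "input": ["Example text to screen for harmful content."],
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()

# A service like this would typically return per-category flags or scores
# (e.g. hate speech, violence, PII) for each input item.
print(resp.json())
```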

“Safety plays a key role in making AI useful,” Mistral’s team said in announcing the release. “At Mistral AI, we believe that system level guardrails are critical to protecting downstream deployments.”

Multilingual moderation capabilities position Mistral...
