VentureBeat November 7, 2024
French artificial intelligence startup Mistral AI launched a new content moderation API on Thursday, marking its latest move to compete with OpenAI and other AI leaders while addressing growing concerns about AI safety and content filtering.
The new moderation service, powered by a fine-tuned version of Mistral’s Ministral 8B model, is designed to detect potentially harmful content across nine categories, including sexual content, hate speech, violence, dangerous activities, and personally identifiable information. The API can analyze both raw text and conversational content.
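For developers, a minimal sketch of calling the new API through Mistral's Python SDK might look like the following. The client method names, the model identifier "mistral-moderation-latest", and the response fields shown here follow the SDK's published conventions at launch and are assumptions that should be checked against Mistral's current documentation.

```python
import os

from mistralai import Mistral

# Assumes the official `mistralai` Python SDK and an API key stored in the
# MISTRAL_API_KEY environment variable.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Classify raw text: the response carries per-category results for the nine
# policy categories (sexual content, hate speech, violence, PII, etc.).
text_result = client.classifiers.moderate(
    model="mistral-moderation-latest",
    inputs=["Example text to screen before it reaches a downstream system."],
)
print(text_result.results[0].categories)

# Classify conversational content: the exchange is passed as a list of
# role/content messages so replies are judged in context, not in isolation.
chat_result = client.classifiers.moderate_chat(
    model="mistral-moderation-latest",
    inputs=[
        {"role": "user", "content": "How do I reset my account password?"},
        {"role": "assistant", "content": "You can reset it from the settings page."},
    ],
)
print(chat_result.results[0].categories)
```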
“Safety plays a key role in making AI useful,” Mistral’s team said in announcing the release. “At Mistral AI, we believe that system level guardrails are critical to protecting downstream deployments.”
Multilingual moderation capabilities position Mistral...