VentureBeat, September 24, 2024
By Michael Nuñez

Microsoft unveiled a suite of new artificial intelligence safety features on Tuesday, aiming to address growing concerns about AI security, privacy, and reliability. The tech giant is branding the initiative as “Trustworthy AI,” signaling a push toward more responsible development and deployment of AI technologies.

The announcement comes as businesses and organizations increasingly adopt AI solutions, bringing both opportunities and challenges. Microsoft’s new offerings include confidential inferencing for its Azure OpenAI Service, enhanced GPU security, and improved tools for evaluating AI outputs.

“To make AI trustworthy, there are many, many things that you need to do, from core research innovation to this last mile engineering,” said Sarah Bird, a senior leader in Microsoft’s AI efforts, in an interview with VentureBeat. …
