
As AI agents become more autonomous, generating software on the fly from human prompts, one question looms larger than ever: how do we keep them secure? In this episode of Invisible Machines, Robb Wilson and Josh Tyson sit down with Omar Santos, Distinguished Engineer of AI Security at Cisco and co-chair of the Coalition for Secure AI, to explore the evolving landscape of AI security in the agentic era.
Omar argues that traditional security models are no longer sufficient. The idea of a siloed security department feels both antiquated and woefully inadequate. As AI agents dynamically create complex software environments, security must become an ever-present, built-in layer, supported by constant human oversight and the ability to simulate potential outcomes to mitigate risk. For organizations racing toward AI adoption, ignoring security isn't just risky; it's a barrier to progress.
The conversation dives deep into how AI agents are transforming work, teams, and technology ecosystems. Omar explains how advanced orchestration combines human judgment with AI capabilities, and why simulations and real-time risk assessments will be critical as agents evolve. He also shares insights from his work leading AI security at Cisco and guiding industry standards like CSAF and VEX.
For anyone exploring agentic AI, this episode is a masterclass in responsible innovation. It challenges leaders to rethink security as a core part of AI design, adoption, and management, because in the age of agentic AI, security is fundamental.
The post Siloed Security? Forget AI Adoption appeared first on UX Magazine.