An AI agent went rogue at Meta, exposing sensitive company and user data to employees who did not have permission to access it.
According to an incident report, which was seen and reported on by The Information, a Meta employee posted on an internal forum asking for help with a technical question, a routine action. However, another engineer asked an AI agent to help analyze the question, and the agent ended up posting a response without asking the engineer for permission to share it. Meta confirmed the incident to The Information.
As it turns out, the AI agent did not give good advice. The employee who asked the question ended up taking actions based on the agent's guidance, which inadvertently made large amounts of company and user-related data available for two hours to engineers who were not authorized to access it.
Meta deemed the incident a "Sev 1," the second-highest level of severity in the company's internal system for rating security issues.
Rogue AI agents have already posed a problem at Meta. Summer Yue, a safety and alignment director at Meta Superintelligence, posted on X last month describing how her OpenClaw agent ended up deleting her entire inbox, even though she had told it to check with her before taking any action.
Still, Meta appears bullish on the potential of agentic AI. Just last week, Meta bought Moltbook, a Reddit-like social media site where OpenClaw agents talk with one another.