# Meta to Automate Up to 90% of Risk Assessments with AI
Meta is reportedly preparing to automate up to 90% of risk assessments for updates across its platforms, including Instagram and WhatsApp, according to internal documents. The AI-driven system would evaluate potential privacy risks and compliance requirements before new features launch.
## How the AI Risk Assessment System Works
Under the new process, product teams will answer a questionnaire about their updates, after which an AI system will generate an instant risk evaluation. The AI will flag potential concerns and outline necessary compliance measures before deployment.
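To make the questionnaire-to-evaluation flow concrete, here is a minimal, purely illustrative sketch of how such a triage step could work in principle: answers are scored against a rubric, low-scoring updates are cleared automatically, and higher-scoring ones are routed to human reviewers. Meta's actual system is reportedly AI-driven rather than a fixed rubric, and every field name, weight, and threshold below is an assumption made for illustration only.

```python
# Illustrative sketch only (not Meta's actual system): a questionnaire-driven
# risk triage where rule-based scoring auto-clears low-risk updates and routes
# higher-risk ones to human reviewers. All names and thresholds are hypothetical.

from dataclasses import dataclass, field


@dataclass
class RiskEvaluation:
    score: int
    flags: list = field(default_factory=list)
    requires_human_review: bool = False


# Hypothetical rubric: each "yes" answer adds a weight and a compliance flag.
RUBRIC = {
    "collects_new_user_data":     (3, "Privacy review: new data collection"),
    "shares_data_with_partners":  (4, "Review third-party data-sharing terms"),
    "affects_minors":             (5, "Youth-safety assessment required"),
    "changes_default_visibility": (2, "Check consent and notification flows"),
}

HUMAN_REVIEW_THRESHOLD = 5  # assumed cutoff separating "low-risk" decisions


def evaluate(questionnaire: dict) -> RiskEvaluation:
    """Score a product team's questionnaire and flag compliance follow-ups."""
    result = RiskEvaluation(score=0)
    for question, (weight, flag) in RUBRIC.items():
        if questionnaire.get(question, False):
            result.score += weight
            result.flags.append(flag)
    result.requires_human_review = result.score >= HUMAN_REVIEW_THRESHOLD
    return result


if __name__ == "__main__":
    answers = {"collects_new_user_data": True, "changes_default_visibility": True}
    print(evaluate(answers))  # score 5 -> escalated to human review in this sketch
```

The design choice the sketch highlights is the one at the center of the debate: where the automatic-approval threshold sits determines how much ends up outside human review.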
This shift aims to accelerate Meta’s update cycle, but critics warn that automated risk assessments could miss nuanced issues. A former Meta executive told NPR that AI-driven evaluations may increase the likelihood of unintended consequences because human oversight is reduced.
## Meta’s Commitment to Compliance
Meta has emphasized its investment in AI-powered compliance, stating that it has allocated over $8 billion to its privacy program. The company claims the new system will improve efficiency while maintaining regulatory adherence.
> “We leverage technology to add consistency to low-risk decisions while relying on human expertise for complex issues,” a Meta spokesperson said.
The move reflects a broader industry trend toward AI-driven governance, though concerns remain about the balance between speed and accountability. As AI takes on more compliance responsibilities, regulators may scrutinize its effectiveness in mitigating risks.