MIT: 28 top executives reveal ‘biggest’ AI risks
Plus, computer-use agents, Anthropic threats, and more.
Welcome executives and professionals. A new MIT survey administered to a select group of senior enterprise AI leaders, including C-suite executives and heads of AI, reveals their ‘biggest’ AI risks.
Since the previous edition, we have reviewed hundreds of the latest insights in agentic and generative AI, spanning best practices, case studies, market dynamics, and innovations.
This briefing outlines what is driving material value — and why it’s important.
Note: Previously published as Generative AI Enterprise, this briefing is now titled Enterprise AI Executive.
In today’s briefing:
MIT: 28 executives reveal ‘biggest’ AI risks.
Booz Allen: Building enterprise gen AI apps.
Anthropic exposes enterprise AI threats.
Computer-use agents extend automation.
Transformation and technology in the news.
Career opportunities & events.
Read time: 4 minutes.

EXECUTIVE INSIGHT

Image source: MIT Sloan School of Management
Brief: MIT surveyed 28 executives from ASFAI (American Society for AI), a selective group of ~100 senior AI leaders, to uncover their ‘biggest’ enterprise AI risks spanning security, governance, compliance, and more.
Breakdown:
The executives comprised 36% CEOs/partners, 32% technical leaders (CTOs, CISOs, CROs), and 32% other executives such as COOs and Heads of AI.
The most represented sectors were technology (32%), finance (25%), healthcare/pharma (21%), and media/entertainment (7%).
Top business risks cited: data security/privacy, compliance/governance, and competitive/strategic risks (see table above).
Security risks include data breaches, privacy violations, ransomware, IP theft, malicious actors, and others threatening AI adoption.
MIT stresses three priorities: stronger data governance and access controls, adversarial testing, and balancing speed with capability.
Why it’s important: AI is reshaping business and security. Executives must harness efficiency, revenue, and innovation gains while addressing security, governance, and compliance risks. With both opportunity and risk ahead, strong oversight is essential to scale enterprise AI responsibly.
BEST PRACTICE INSIGHT

Image source: Booz Allen
Brief: Booz Allen shared its framework for building enterprise gen AI, built on six architecture layers, LLMOps practices for monitoring and improvement, and governance, risk, and compliance (GRC) frameworks.
Breakdown:
The six architecture layers span Infrastructure, Platform, LLM, Data & Pipeline, Agent & Capability, and UI/Application (see diagram above).
LLM deployment option trade-offs evaluated include on-premises self-managed, cloud self-managed, and cloud-hosted API.
The report explores orchestrating LLMs by task complexity and cost, and data pipelines to support real-time, domain-specific use cases.
The importance of embedding human oversight into AI workflows where appropriate is emphasized, guided by the "agentic spectrum."
Beyond architecture, Booz Allen outlines core concepts of LLMOps, alongside governance, risk, and compliance frameworks.
Why it’s important: Generative AI is transforming enterprise knowledge, decision-making, and user interaction. Success requires more than model access; it demands a layered architecture, strong data pipelines, human oversight, and rigorous governance to align adoption with business goals.
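The report’s idea of orchestrating LLMs by task complexity and cost can be sketched as a simple router. This is a minimal illustration, not Booz Allen’s implementation: the model names, per-token costs, and the complexity heuristic below are all assumptions for the sake of the example.

```python
# Minimal sketch of cost-aware LLM routing by task complexity.
# Model names and per-1k-token costs are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative only

TIERS = {
    "low": ModelTier("small-local-llm", 0.0001),
    "medium": ModelTier("mid-hosted-llm", 0.002),
    "high": ModelTier("frontier-api-llm", 0.03),
}

def classify_complexity(task: str) -> str:
    """Toy heuristic: longer, multi-step prompts get larger models."""
    steps = task.count("\n") + task.lower().count(" then ")
    if len(task) < 200 and steps == 0:
        return "low"
    if steps <= 2:
        return "medium"
    return "high"

def route(task: str) -> ModelTier:
    """Pick the cheapest tier the task's estimated complexity allows."""
    return TIERS[classify_complexity(task)]

print(route("Summarize this paragraph.").name)  # short task -> cheap model
```

In practice the classifier would itself be a small model or a learned policy, and routing decisions would be logged for the LLMOps monitoring loop the report describes.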
CASE STUDIES

Image source: Anthropic
Brief: Anthropic’s 25-page threat intelligence report details how cybercriminals attempt to bypass Claude’s safety measures. While focused on Anthropic’s models, the case studies highlight broader AI ecosystem threat patterns.
Breakdown:
Agentic AI has been weaponized: models aren’t just advising on cyberattacks, they’re executing sophisticated operations themselves.
AI lowers entry barriers: low-skilled actors can launch ransomware and advanced attacks once limited to those with expert knowledge.
Cybercriminals embed AI throughout operations: profiling victims, analyzing stolen data, creating fake identities, and stealing card data.
A notable case, “vibe hacking,” used Claude Code to automate credential harvesting and extortion across 17 organizations.
Another case: North Korean actors used Claude to secure Western tech jobs; AI helped them code and communicate professionally.
Why it’s important: The report’s findings and case studies show how quickly AI can be weaponized against enterprises. For CISOs and security leaders, understanding these tactics is essential to building resilient defenses as AI abuse grows in sophistication and scale.
BEST PRACTICE INSIGHT

Image source: Andreessen Horowitz
Brief: Andreessen Horowitz outlined how computer-use agents advance beyond RPA, moving enterprises closer to truly agentic coworkers able to operate across fragmented, legacy-heavy environments that workers navigate daily.
Breakdown:
The enterprise “long tail” includes specialized, low-volume tasks often tied to legacy software systems that lack accessible or modern APIs.
In the mid-2010s, firms increasingly adopted RPA to automate repetitive, rules-based tasks; it is fast when interfaces are stable but brittle when UIs or workflows change.
Computer-use agents apply LLM reasoning, currently slower than RPA but enabling broader automation without reprogramming.
Deploying computer-use agents enterprise-wide is challenging; enterprise software is often specialized, unintuitive, and heavily customized.
Computer-use agents such as ChatGPT Agent need enterprise-specific context and additional training to function reliably in complex workflows.
Why it’s important: While progress is rapid, today’s agents face limits in both capability (struggling with complex or unfamiliar interfaces) and efficiency (often too slow and costly). Advances are expected in 6–18 months, but success depends on tuning, contextualization, and deployment within real enterprises.
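The basic control loop behind a computer-use agent (observe the screen, let a model choose a UI action, execute, repeat) can be sketched as follows. The `capture_screen`, `model_decide`, and `perform` functions are hypothetical stand-ins, not any vendor’s actual API.

```python
# Hypothetical sketch of a computer-use agent loop.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # "click", "type", or "done"
    payload: str = ""

def capture_screen() -> str:
    """Stub: a real agent returns a screenshot or accessibility tree."""
    return "login form with username and password fields"

def model_decide(goal: str, observation: str, history: list) -> Action:
    """Stub: a real agent calls an LLM here; we script two steps."""
    if not history:
        return Action("type", "user@example.com")
    return Action("done")

def perform(action: Action) -> None:
    """Stub: a real agent drives the mouse/keyboard or a browser."""
    pass

def run_agent(goal: str, max_steps: int = 10) -> list:
    """Observe -> decide -> act until the model signals completion."""
    history = []
    for _ in range(max_steps):
        observation = capture_screen()
        action = model_decide(goal, observation, history)
        if action.kind == "done":
            break
        perform(action)
        history.append(action)
    return history

steps = run_agent("Log in to the legacy portal")
print(len(steps))  # one scripted action before "done"
```

The loop structure is why these agents generalize where RPA breaks: the decision step re-reads the screen each iteration instead of replaying a recorded click path, at the cost of a model call per step.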

TRANSFORMATION IN THE NEWS
OpenAI explored how Californians use ChatGPT and sector-level AI productivity gains. California hosts 33 of the top 50 private AI firms globally.
Capgemini’s 92-slide report shared recommendations to build agile, AI-powered, and sustainable new-generation supply chains.
Deloitte outlined how to move from business cases to concrete value metrics to help prove AI agent value and drive adoption.
C3 AI outlined how it accelerated legal review and contract analysis to reduce risk, improve consistency, and accelerate deal execution.
Accenture’s 27-slide report explored how to accelerate human–AI collaboration by embedding learning at work and hardwiring trust.
Stanford highlighted six facts on AI’s employment effects, including a 13% relative decline in jobs for 22–25 year-olds in AI-exposed roles.
Galileo shared why 40% of projects fail pre-production, citing hidden costs of agentic AI from data quality, evaluation, infra complexity, and debugging.
WRITER shared its four-part model for AI ROI calculation, and outlines why a unified enterprise AI platform is key to scaling agentic AI effectively.

TECHNOLOGY IN THE NEWS
Microsoft introduced MAI-Voice-1 and MAI-1-preview, its first fully in-house gen AI models after years of predominantly relying on its OpenAI partnership.
Anthropic and OpenAI published joint internal safety evaluations, testing each other’s models for risky behaviors, alignment, and real-world safety issues.
OpenAI moved its Realtime API out of beta, adding a gpt-realtime speech-to-speech model, and new features for its Codex software development tool.
China plans to triple AI chip production next year to reduce reliance on Nvidia amid U.S. export controls and growing geopolitical technology pressures.
Google released Gemini Flash 2.5 Image, enabling precise multi-step edits that preserve likeness while giving users more creative control over outputs.
xAI released Grok Code Fast 1, open-sourced Grok 2, and announced its ‘purely AI software company’ Macrohard to replicate competitors like Microsoft.
Perplexity unveiled a $42.5M revenue-sharing initiative that gives media outlets 80% of proceeds when their content appears in AI search results.
Andreessen Horowitz published the fifth edition of its Top 100 GenAI Consumer Apps, highlighting OpenAI's lead and vibe coding trends.

CAREER OPPORTUNITIES
CBRE - AI Executive Director
Meta - Head of AI GTM Partnerships
PwC - AI Operations Director
EVENTS
Gartner - AI Sovereignty for Execs - September 3, 2025
Google - Unlock Value with AI Agents - September 16, 2025
Deloitte - Unlocking AI Benefits - October 8, 2025

Originally conceived as a practical communication for executives with whom the editor, Lewis Walker, has worked, this briefing now serves as a trusted resource for thousands of senior decision-makers shaping the future of enterprise AI.
If your AI product or service adds value to this audience, contact us for information on a limited number of sponsorship opportunities.
We also welcome feedback as we continue to refine the briefing.