Government's Mythos Grab; Next Big Surface - Your Car; Bye Bye Lawyers?
Today's AI Outlook: 🌤️
White House Finds Way To An Anthropic 'OK'
The White House is reportedly pushing back on Anthropic’s plan to expand private-sector access to its powerful Mythos AI model, citing compute concerns and the need to preserve government access. Anthropic reportedly wanted to grow access from roughly 50 firms to nearly 120, but U.S. officials appear worried that too much commercial usage could strain capacity needed for national security work.
At the same time, the White House is preparing an AI memo that could soften parts of the government’s earlier stance toward Anthropic. Axios reports that the move may allow agencies to work around a “supply chain risk designation,” despite the ongoing legal and political fight around the company.

Why It Matters
This is the clearest signal yet that Washington’s posture toward frontier AI companies is shifting from simple approval or rejection to something much messier: containment, dependence, leverage, and access.
The government may not fully trust Anthropic, but it also may not want to lose access to models with advanced cyber capabilities. That is the national-security version of “I’m mad at you, but I still need your Netflix password.”
The Deets
- Anthropic reportedly wanted to expand Mythos access from about 50 private firms to nearly 120.
- U.S. officials cited concerns about compute availability for government use.
- A White House AI memo is expected to encourage multi-vendor AI adoption across agencies.
- The memo may address some Anthropic concerns that helped trigger the dispute.
- GPT-5.5 has reportedly reached similar cyber capabilities to Mythos Preview.
- Former AI czar David Sacks reportedly said all frontier models could reach this capability level within six months.
- The administration appears divided, with some officials seeking access while others continue criticizing Anthropic.
Key Takeaway
Frontier AI is becoming too strategically important for clean breakups. The White House may want to punish Anthropic, but it also wants access to Anthropic’s capabilities.
🧩 Jargon Buster - Supply Chain Risk Designation: A government label that flags a vendor, product, or technology as potentially risky for federal use because of security, reliability, ownership, or operational concerns.
Claude’s 'Jupiter' Model Appears Ahead Of Developer Event

A new Claude model, reportedly labeled claude-jupiter-v1-p, has surfaced in red-team testing ahead of Anthropic’s Code with Claude developer conference on May 6. TestingCatalog reports that Anthropic has begun internal red teaming, which typically means a model is being evaluated for safety, reliability, and misuse risk before broader release.
Anthropic has been pushing Claude deeper into coding, agents, vision, and professional workflows, so Jupiter may be tied to that broader developer strategy.
Why It Matters
Anthropic is increasingly competing on developer workflows, enterprise automation, and agentic coding environments. If Jupiter is real and close to launch, it could be part of a larger push to keep Claude Code and related products moving fast.
This also lands as Anthropic is reportedly racing toward a massive new valuation. A new model leak, a developer event, and a potential mega-round are not separate stories. They are parts of the same capital-and-capability flywheel.
The Deets
- The model reportedly appeared under the label claude-jupiter-v1-p.
- Internal red teaming suggests the model may be in a release-candidate phase.
- AI Breakfast speculates it could be a coding-focused Claude release, a Sonnet refresh, a Haiku refresh, or a preview model connected to Mythos.
- Anthropic’s recent Opus 4.7 release focused on coding, agents, vision, and complex professional work.
Key Takeaway
Jupiter may be Anthropic’s next developer-focused move, but until the company announces it, the safe read is “strong signal, still speculative.”
🧩 Jargon Buster - Red Teaming: A safety and security testing process where experts intentionally try to make a model fail, misbehave, leak information, or enable harmful use before release.
💰 Power Plays
Anthropic Reportedly Eyes A Near-Trillion-Dollar Valuation
Anthropic is reportedly considering a $40 billion to $50 billion private funding round at an $850 billion to $900 billion valuation. That would more than double its February valuation of $380 billion and put it near, or possibly above, OpenAI’s reported $852 billion post-money valuation.
AI Secret says the capital interest is being driven by Anthropic’s revenue growth, especially from Claude Code and Cowork. TechCrunch reportedly pegged Anthropic’s annual revenue run rate at more than $30 billion, with one source saying it could be closer to $40 billion.
Why It Matters
The AI market is increasingly rewarding companies that can turn model capability into recurring enterprise workflows. Consumer adoption made OpenAI the gravitational center of AI. Anthropic is now showing that developer tooling and enterprise agents may be just as financially powerful.
If the round happens, Claude Code stops being just a product. It becomes the wedge that reprices Anthropic.
The Deets
- Anthropic is reportedly weighing a $40 billion to $50 billion private round.
- The proposed valuation is reportedly between $850 billion and $900 billion.
- That would more than double its February valuation of $380 billion.
- Anthropic’s annual revenue run rate reportedly jumped from about $9 billion at the end of 2025 to more than $30 billion.
Key Takeaway
The AI race is now a capital arms race between near-trillion-dollar private companies, with coding agents becoming one of the clearest paths from model capability to revenue.
🧩 Jargon Buster - Revenue Run Rate: A projection of annual revenue based on current revenue levels, often used by fast-growing companies to show their current business momentum.
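The arithmetic behind a run rate is simple annualization. A minimal sketch (the monthly figure below is a back-of-envelope illustration, not a reported number):

```python
def run_rate(monthly_revenue: float) -> float:
    """Annualize the most recent month's revenue: run rate = monthly x 12."""
    return monthly_revenue * 12

# A reported $30B run rate implies roughly $2.5B booked in the latest month.
annualized = run_rate(2.5e9)
print(f"${annualized / 1e9:.0f}B run rate")  # $30B run rate
```

The caveat baked into the metric: it assumes the latest month repeats twelve times, which flatters fast-growing (or seasonal) businesses.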
OpenAI’s Codex App Moves Toward Remote Control

AI Breakfast reports that hidden files in a recent Codex update point toward more ambitious remote-control capabilities. The update already delivered a 20% speed boost for computer-use tasks, but newly discovered SSH-related features suggest Codex may eventually hop across machines to execute work in different environments.
That would move Codex from a coding assistant toward something closer to a persistent, multi-device operator.
Why It Matters
This is where “AI assistant” starts to become “AI operator.” The difference is not cosmetic. A coding assistant suggests changes; an operator can navigate systems, connect to machines, execute steps, and complete work across environments.
That unlocks productivity, but it also raises the stakes around security, identity, permissions, auditing, and user trust.
The Deets
- Codex reportedly received a 20% speed boost for computer-use tasks.
- Hidden files suggest new SSH-related capabilities.
- SSH access could allow an AI system to work across machines.
- The broader direction points toward persistent, cross-device task execution.
Key Takeaway
Codex is quietly evolving from “help me code” toward “operate my computing environment,” which is both useful and spicy in the cybersecurity sense.
🧩 Jargon Buster - SSH: Secure Shell, a protocol that lets users securely connect to and control another computer over a network.
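The report does not describe how Codex would actually use SSH, but the generic shape of non-interactive remote execution looks something like this sketch (the host name `build-box` is a placeholder):

```python
import subprocess

def build_ssh_argv(host: str, command: str) -> list[str]:
    """Assemble a non-interactive SSH invocation.
    BatchMode=yes makes ssh fail fast instead of prompting for a password,
    which matters when no human is supervising the session."""
    return ["ssh", "-o", "BatchMode=yes", host, command]

def remote_exec(host: str, command: str, timeout: int = 30) -> str:
    """Run one command on a remote machine and return its stdout."""
    result = subprocess.run(
        build_ssh_argv(host, command),
        capture_output=True, text=True, timeout=timeout,
    )
    result.check_returncode()  # raise if the remote command failed
    return result.stdout

# e.g. remote_exec("build-box", "uname -a")
```

Even this toy version shows why the security stakes rise: whoever holds the agent’s SSH keys effectively holds every machine the agent can reach.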
🧰 Tools & Products
Gemini Moves Into Google-Powered Cars
Google is beginning to replace Google Assistant with Gemini in vehicles that have Google built-in. The upgrade is designed to make in-car AI more conversational, supporting navigation, messaging, music, vehicle questions, hands-free controls, and eventually deeper integrations with Gmail, Calendar, and Google Home.
The rollout begins in compatible U.S. vehicles, with General Motors also announcing support for roughly 4 million vehicles from model year 2022 onward.
Why It Matters
Cars are one of the most natural places for conversational AI because the user’s hands and eyes are already busy. The first features are fairly basic, but the direction is obvious: AI becomes the interface layer for navigation, entertainment, maintenance, energy management, and eventually autonomy.
The car dashboard is becoming another AI surface. Hopefully one that does not ask follow-up questions while you are merging.
The Deets
- Gemini will replace Assistant in compatible vehicles with Google built-in.
- Drivers can ask for navigation help, route planning, radio controls, and temperature changes.
- Gemini can pull from Google Maps for customized updates.
- Gemini Live beta supports conversational learning and brainstorming.
- Gmail, Calendar, and Home integrations are expected later.
- Gemini can answer vehicle-specific questions using manufacturer manuals.
Key Takeaway
Gemini in cars is an early step toward the AI-native dashboard, where the vehicle becomes less of a control panel and more of a rolling assistant.
🧩 Jargon Buster - Google Built-In: Google’s integrated automotive software system that brings services like Maps, Assistant, Play, and now Gemini directly into a vehicle’s infotainment system.
🧬 Research & Models
OpenAI Finds The Source Of ChatGPT’s Goblin Obsession

OpenAI reportedly traced ChatGPT’s habit of using goblins, gremlins, ogres, trolls, raccoons, pigeons, and other odd creature metaphors to a reward signal in the former Nerdy personality preset. The creature-heavy behavior apparently became concentrated in that preset, then leaked into broader model behavior through fine-tuning loops.
After ChatGPT-5.1 launched in November, goblin mentions reportedly jumped 175%, while gremlin mentions rose 52%. OpenAI retired Nerdy in March and added system-level restrictions in GPT-5.5 and Codex to suppress the behavior.
Why It Matters
This is funny, but also revealing. Tiny reward-shaping choices can create surprisingly visible model behaviors. A personality preset used by a small share of traffic can still influence broader outputs if its responses are fed back into training or tuning pipelines.
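To see just how concentrated the behavior was: a preset responsible for two-thirds of goblin mentions from only 2.5% of traffic emits them at roughly 78 times the rate of all other traffic combined. A quick sketch of that arithmetic:

```python
def relative_rate(share_of_mentions: float, share_of_traffic: float) -> float:
    """How many times more often a subgroup produces a behavior,
    per request, compared with the rest of the traffic."""
    rest_mentions = 1 - share_of_mentions
    rest_traffic = 1 - share_of_traffic
    return (share_of_mentions / share_of_traffic) / (rest_mentions / rest_traffic)

# Nerdy preset: 2/3 of goblin mentions from 2.5% of traffic.
ratio = relative_rate(2 / 3, 0.025)
print(round(ratio))  # 78
```

That 78x concentration is why a preset most users never touched could still bend the whole model once its outputs fed back into tuning.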
In other words: alignment bugs do not always arrive wearing a villain cape. Sometimes they arrive holding a tiny goblin lantern.
The Deets
- Goblin mentions reportedly jumped 175% after ChatGPT-5.1’s November launch.
- Gremlin mentions reportedly rose 52%.
- The Nerdy personality preset drove two-thirds of goblin mentions from just 2.5% of traffic.
- Fine-tuning loops helped spread the creature-heavy language into default behavior.
- GPT-5.5 shipped with restrictions against goblins, gremlins, ogres, trolls, raccoons, and pigeons in Codex prompts.
Key Takeaway
The goblin episode is a small but useful reminder that model personality, reward design, and training loops can create weird emergent habits at global scale.
🧩 Jargon Buster - Reward Signal: Feedback used during model training or tuning to encourage certain responses and discourage others.
Software 3.0 Reframes LLMs As A New Kind Of Computer
AI Secret highlights Andrej Karpathy’s framing of large language models as Software 3.0. In this view, Software 1.0 was hand-written code, Software 2.0 was learned model weights, and Software 3.0 is programming through prompts, context and agents.
The example: OpenClaw. Instead of shipping a giant installer that anticipates every machine, dependency, and error case, OpenClaw can give users instructions to hand to an agent. The agent reads the local environment, debugs the setup, and completes the installation intelligently.
Why It Matters
This reframes software from fixed workflows to adaptive delegation. In the old model, developers had to encode every branch. In the new model, the “program” can be text, the interpreter is the large language model, and the runtime is an agent operating inside the user’s environment.
That has huge implications for product design. The interface shifts from clicking through steps to expressing intent and supervising outcomes.
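The Software 3.0 shape can be sketched as a tiny plan-act-observe loop. Everything here is hypothetical (the model call is stubbed, and the tool names are invented, not OpenClaw’s real flow):

```python
def stub_llm(goal: str, observations: list[str]) -> str:
    """Stand-in for a real model call: in Software 3.0, this is where the
    'program' (intent text plus context) gets interpreted into an action."""
    if not observations:
        return "inspect_environment"
    if "missing dependency" in observations[-1]:
        return "install_dependency"
    return "finish"

def run_agent(goal: str) -> list[str]:
    """The runtime: a loop that executes whatever the interpreter decides."""
    tools = {
        "inspect_environment": lambda: "missing dependency: libfoo",
        "install_dependency": lambda: "dependency installed",
    }
    observations, actions = [], []
    while True:
        action = stub_llm(goal, observations)
        actions.append(action)
        if action == "finish":
            return actions
        observations.append(tools[action]())

print(run_agent("install the package"))
# ['inspect_environment', 'install_dependency', 'finish']
```

Note what is absent: no branch for every possible machine. The loop discovers the missing dependency at runtime instead of a developer having anticipated it.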
The Deets
- Software 1.0: human-written code.
- Software 2.0: learned neural network weights.
- Software 3.0: prompts, context, and agents acting as programmable systems.
- The product model shifts from “user operates app” to “agent operates workflow.”
Key Takeaway
Software 3.0 suggests the next software era may be less about apps people use and more about agents people trust to act.
🧩 Jargon Buster - Agent-Native Software: Software designed around AI agents that can take actions, use tools, and adapt to context, rather than only following fixed user-driven workflows.
💸 Funding & Startups
Manifest OS Raises $60 Million To Build AI-Native Law Firms

Manifest OS raised a $60 million Series A at a $750 million valuation to build AI-native law firms under the Manifest Law brand, starting with business immigration. Unlike legal AI copilots that sell tools to lawyers, Manifest wants to own the legal service layer itself.
The model combines AI software, centralized operations, and licensed lawyer review to deliver fixed-fee, outcome-based legal services.
Why It Matters
This attacks the legal market from the customer side rather than the lawyer productivity side. Harvey and similar tools make existing lawyers faster. Manifest asks whether some legal services should be rebuilt around AI-first operations from day one.
That is a much more disruptive question because it targets the business model, not just the workflow.
The Deets
- Manifest OS raised $60 million in Series A funding.
- The company is valued at $750 million.
- It is launching AI-native law firms under the Manifest Law brand.
- The first focus area is business immigration.
- The model uses fixed-fee, outcome-based pricing.
Key Takeaway
Manifest is trying to rebuild the law firm as an AI operating system with lawyers attached where required.
🧩 Jargon Buster - Outcome-Based Pricing: A pricing model where customers pay for a defined result or service outcome, rather than paying by the hour.
⚡ Quick Hits
Synthesia Says Real-Time Multimedia Will Replace Standard Slide Decks: Synthesia’s CEO argues that the cost gap between text and video is disappearing, making personalized, real-time multimedia practical for enterprise operations. The bigger idea is that companies may eventually generate tailored video briefings and training materials instead of static slide decks.
Fuel iX Hosts AI Safety And Security Summit: Fuel iX by TELUS Digital is promoting Uncharted, an AI safety and security summit on May 5 focused on governance, enterprise security, and safety benchmarking. The practical pitch is that companies need production-grade AI safety systems, not just policy PDFs with good vibes.
OpenAI Launches Advanced Account Security: OpenAI reportedly launched an opt-in security feature that replaces traditional passwords with phishing-resistant passkeys and hardware keys. The timing matters because stronger frontier models are also becoming better at simulating complex attack chains.
Musk And Altman Trial Continues To Define AI’s Origin Story: AI Breakfast frames the Musk-Altman trial as a battle over OpenAI’s founding mission. Musk argues OpenAI betrayed its nonprofit purpose, while OpenAI argues Musk understood the structure and its evolution.
Today’s Sources: The Internet, AI Breakfast, The Rundown AI, AI Secret