AI in 15 — April 11, 2026
The Treasury Secretary, the Fed Chair, and the CEOs of America's biggest banks, all in one room, all because of an AI model. When Anthropic's Mythos starts finding vulnerabilities that humans missed for nearly three decades, apparently the first call isn't to IT. It's to the people guarding the financial system.
Welcome to AI in 15 for Saturday, April 11, 2026. I'm Kate, your host.
And I'm Marcus, your co-host.
Happy Saturday, Marcus. We've got a big show. The fallout from Anthropic's Mythos continues with an emergency meeting at Treasury. Someone threw a Molotov cocktail at Sam Altman's house. OpenAI is backing legislation that would make AI companies nearly impossible to sue. Meta's Muse Spark goes proprietary, and we have new details. OpenAI publishes a blueprint for robot taxes and a four-day workweek. And researchers trick every major AI chatbot into thinking a fake disease is real. Let's get into it.
The Fed and Treasury convene Wall Street CEOs over Mythos cyber risks.
A firebomb attack on Sam Altman's home.
And OpenAI wants liability protection unless your AI kills a hundred people.
Marcus, we covered the Mythos announcement on Wednesday. The zero-days, the sandbox escape, Project Glasswing. But this week the story moved from the tech world to the highest levels of government. What happened at Treasury?
Treasury Secretary Scott Bessent and Fed Chair Jerome Powell called in the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs to Treasury headquarters on Tuesday. Jamie Dimon was invited but couldn't attend. The agenda was straightforward. Anthropic's Mythos has demonstrated the ability to find thousands of previously unknown zero-day vulnerabilities across every major operating system and browser, and the U.S. government is now treating AI-powered cyber threats as a systemic risk to the financial system.
Systemic risk. That's the language they use for things that could crash the economy.
Exactly. And remember what we reported Wednesday. Mythos found a 27-year-old vulnerability in OpenBSD, one of the most secure operating systems ever built. A 16-year-old bug in FFmpeg that automated tools had scanned five million times without catching. And it can chain multiple vulnerabilities together. In corporate network simulations, it outperformed human security experts on exercises that took them ten-plus hours to complete. When the people responsible for protecting trillions of dollars in assets hear that, they take a meeting.
Dimon put out a statement even though he wasn't there.
He said cybersecurity remains one of the biggest risks to the financial system and that AI will almost surely make this risk worse. That's Jamie Dimon publicly acknowledging that AI has moved from a productivity tool to a potential weapon against the institutions he runs.
Now, Anthropic's response has been Project Glasswing, giving Mythos to defenders first. A hundred million dollars in usage credits, four million in direct donations to open-source security. But some critics are calling this a sales pitch.
There's a legitimate debate here. Some voices on Hacker News and Tom's Hardware argue the dramatic framing, the emergency meetings, the restricted access, is partly designed to justify Anthropic's pricing and attract investment. The counterargument is that similar capabilities will inevitably appear in other models, so arming the defenders early is genuinely responsible. I lean toward this being a real threat that also happens to be good marketing. Both things can be true.
The list of organizations getting early access is telling. AWS, Apple, Google, Microsoft, CrowdStrike, NVIDIA, the Linux Foundation, JPMorgan Chase. That's basically the infrastructure of modern civilization.
And forty additional organizations managing critical infrastructure. The scale of the defensive operation tells you how seriously Anthropic and the government are taking this. We've crossed a line where an AI company's product launch triggers a national security response.
From institutional threats to personal ones. Early Friday morning, someone threw a Molotov cocktail at Sam Altman's home in San Francisco. Marcus, this is deeply alarming.
At approximately 3:45 AM Pacific, a 20-year-old named Daniel Alejandro Moreno-Gama threw a firebomb at Altman's Russian Hill residence near Chestnut and Jones. It struck the exterior gate and started a fire but caused minimal damage. No one was injured. Officers responded at 4:12 AM. Less than an hour later, the same individual showed up at OpenAI's Mission Bay headquarters threatening to burn the building down. Police recognized him from a department-wide photograph and arrested him.
The charges are severe.
Attempted murder, explosion of a destructive device with intent to injure, arson, criminal threats, possession of incendiary materials. He's being held in San Francisco County Jail.
Police haven't confirmed an anti-AI motivation, but the online reaction tells its own story.
Over 500 comments on Hacker News, massive Reddit threads. And Kate, the disturbing part isn't the attack itself. It's how muted the sympathy was in some corners. Commenters talked about visceral AI hatred among non-techies and drew parallels to Luddite movements. One commenter wrote, putting millions of people out of work comes with consequences. That's not a justification, but it's a signal of where public sentiment is heading.
We covered the Gen Z poll yesterday showing anger about AI rising from 22 to 31 percent while usage stays flat. This feels like that tension manifesting in the most extreme possible way.
It's a chilling escalation from online anger to real-world violence. And it raises practical security questions for every AI executive. Altman shared a private family photo afterward. OpenAI thanked the SFPD. But the gap between Silicon Valley's techno-optimism and growing public unease isn't theoretical anymore. Someone acted on it.
Speaking of OpenAI and public perception, they're now backing an Illinois bill that would essentially shield AI companies from most lawsuits. Marcus, the threshold here is remarkable.
SB 3444, the Illinois Artificial Intelligence Safety Act. It defines critical harms as the death or serious injury of 100 or more people, at least one billion dollars in property damage, or using AI to develop a chemical, biological, radiological, or nuclear weapon. Below those thresholds, AI companies that have made safety reports public and didn't act intentionally or recklessly are largely protected from liability.
So your AI has to kill a hundred people or cause a billion dollars in damage before you can be sued.
That's the bar. And it only applies to frontier models built on more than 100 million dollars in compute, so it's targeting OpenAI, Anthropic, Google, xAI, and Meta specifically. OpenAI's spokesperson said they support approaches that focus on reducing the risk of serious harm while allowing the technology to reach people and businesses.
The context makes this uncomfortable. There are active wrongful death lawsuits against OpenAI right now alleging ChatGPT provided suicide methods to minors.
Those cases would not meet the threshold under this bill. And that's exactly what critics are pointing out. The Hacker News discussion, over 300 comments, compared it to Big Tobacco's liability shields. One commenter drew a parallel to Iowa's pesticide liability bill. The framing is clever though. By setting extreme thresholds, the bill sounds reasonable. Of course we should regulate existential risks. But the practical effect is immunity from the kinds of lawsuits AI companies actually face today.
If Illinois passes this, other states follow.
That's the playbook. Establish precedent in one state, then replicate. This is the first time a major AI company has openly lobbied for liability protection at the state level. It's a significant moment in AI regulation regardless of which side you're on.
Let's talk about Meta. We covered the Muse Spark launch Thursday, but there are new details worth discussing. Marcus, the proprietary pivot is solidifying.
Muse Spark is now live for select partners via private API with paid public access planned later. The key technical claims are 10x better compute efficiency than Llama 4 and strong multimodal reasoning across science, math, and health. It supports tool use, visual chain-of-thought reasoning, and multi-agent orchestration. But the strategic story remains the headline. Meta, the company that built its entire AI identity on open source, shipped its flagship model as closed source.
They say they hope to open-source future versions.
Hope is doing a lot of work there. After Llama 4's stumble, Meta poured everything into Alexandr Wang's Superintelligence Labs. The reported price tag for Wang alone was 14 billion dollars. When you've invested that much, you don't give the result away for free. Analysts rewarded the move. Meta's stock rallied on the announcement.
The question thousands of Llama developers are asking: is this the end of Meta's open-source era?
I think it's the end of open source as Meta's primary strategy. They may release smaller or older models openly, but the frontier work is staying closed. The market has spoken. Open source built Meta an ecosystem, but closed models build a moat. And right now, Meta needs a moat.
OpenAI released a 13-page policy document this week proposing robot taxes, a public wealth fund, and a four-day workweek. Marcus, this is OpenAI doing economic policy now.
The document is called Industrial Policy for the Intelligence Age. The core argument is that if AI displaces enough workers, the wage and payroll tax revenue funding Social Security, Medicaid, and SNAP collapses. So the tax base needs to shift from payroll toward capital gains and corporate income. They're proposing a tax on automated labor, an idea Bill Gates first floated back in 2017.
And a national wealth fund modeled after Alaska's Permanent Fund.
Seeded partly by contributions from AI companies themselves, invested in AI firms and businesses adopting the technology, with returns distributed to citizens. They're also proposing government-subsidized trials of a 32-hour workweek with no pay reduction, plus an automatic mechanism that expands government assistance when AI job displacement crosses defined thresholds and winds down when the labor market recovers.
This arrives as OpenAI approaches an 852-billion-dollar IPO valuation.
And that's the tension everyone notices. The company most actively disrupting employment is now designing the safety nets. Critics call it PR. Supporters say at least they're engaging with the problem. Either way, the fact that an AI company is publishing detailed proposals for restructuring the American social contract tells you something about where we are. These aren't theoretical discussions anymore.
Quick hit on a story I love. Scientists invented a completely fake disease called bixonimania, published two obviously fake papers about it, with references to Starfleet Academy no less, and within weeks every major AI chatbot was telling people it was real.
ChatGPT, Gemini, Copilot, all confidently advising users to consult ophthalmologists for a condition that literally does not exist. A follow-up study of 20 LLMs found they're especially prone to hallucinate when text looks professionally medical. The good news is some models have since improved. By March 2026, ChatGPT was flagging it as probably made up. But the study proves how easily AI systems absorb misinformation from just two papers and redistribute it to millions. If two obviously fake preprints can fool these systems, imagine what a sophisticated actor could accomplish.
And one more quick hit. The Linux kernel has formalized guidelines for AI coding assistants. Marcus, the approach is refreshingly sensible.
AI tools are allowed but humans take full responsibility. AI agents must not add Signed-off-by tags. Only humans can certify the Developer Certificate of Origin. AI contributions get identified with Co-developed-by tags. Testing is required when changes are tool-generated. NVIDIA's Sasha Levin and Intel's Dave Hansen drove this, and they even contributed unified configuration files for Claude, Copilot, Cursor, and other major tools. The top Hacker News comment summed it up: that's refreshingly normal.
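To make that concrete, a commit that an AI tool helped write would carry trailers along these lines. The exact model string and author here are illustrative, not an official format, but the shape is what the guidelines describe:

    Co-developed-by: Claude claude-opus-4
    Signed-off-by: Jane Developer <jane@example.com>

The tool gets credited, but the sign-off, and the legal certification it carries, stays with the human.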
Normal is good. Normal means the adults are in the room.
Saturday big picture. Marcus, the Fed convenes an emergency meeting over an AI model. Someone firebombs Sam Altman's house. OpenAI backs legislation to shield itself from lawsuits while simultaneously publishing proposals to restructure the economy. What's the thread?
The AI industry has outgrown the tech bubble. It's now a matter of national security, public safety, and economic policy. When the Treasury Secretary and the Fed Chair are in a room because of a model's cybersecurity capabilities, that's not a tech story anymore. When an AI CEO's home is attacked, that's not an online debate anymore. When an AI company is drafting economic policy for the country, that's not a startup pitch anymore. The scale of impact has crossed every boundary the industry thought it was operating within.
And the trust deficit keeps widening. Fake diseases in medical chatbots, liability shields that protect companies from the lawsuits people actually bring, a public that's growing angrier by the month.
The technology is advancing faster than the institutions around it can adapt. Government, law, public trust, all of them are playing catch-up. The Linux kernel's approach, pragmatic rules with human accountability, is a model for how to handle this well. But it works because the kernel community has decades of governance culture. Most of the AI industry doesn't have that. Building it is the urgent task now.
Pragmatic rules, human accountability. That might be the formula.
That's your AI in 15 for Saturday, April 11, 2026. Enjoy your weekend, and we'll see you Monday.