Yesterday in AI
A rundown of all of the important stories in AI that happened yesterday in 10 minutes or less.
The Math Proof That Made Yale Professors Go Quiet
Yesterday in AI | Friday, April 17, 2026
A 60-year-old math problem just fell, and the way it was solved is going to make you rethink what human expertise is worth. A brand you definitely own something from abandoned its entire identity for reasons that are equal parts absurd and deeply revealing about where capital is flowing right now. Anthropic dropped a new flagship model on the exact day users were publicly calling out the last one - timing is everything. The EU pulled an emergency lever against a tech giant that will affect every AI assistant trying to reach you. And a layoff announcement came with a justification that's about to echo across every boardroom in America.
Remember to subscribe, rate, and share this podcast if you like it!
Hi folks, this is Yesterday in AI, your daily digest of everything happening in the world of artificial intelligence. I'm Mike Robinson. It's Friday, April 17th, and Thursday gave us a math proof that made Yale professors reach for superlatives, and a sneaker brand that woke up and decided GPUs were its future. That plus a new Claude model arriving just as the complaints about the old one were getting loud.

Let's start with Anthropic, which released Claude Opus 4.7 Thursday. The timing is pointed. Earlier in the day, Axios ran a piece on power-user complaints about Claude feeling slower and weaker. TLDR AI published an analysis concluding the complaints were probably legitimate, even if "secret nerfing" was an overstatement. What likely changed is that Anthropic dialed back how hard the model thinks on each query to stretch available compute. Opus 4.7 is the actual response to that problem.

The new model sits above Opus 4.6 for complex software engineering and long-running tasks. The headline improvements are more consistent instruction following and substantially better vision. Opus 4.7 accepts images up to 2576 pixels on the long edge, roughly 3.75 megapixels. That sounds like a spec-sheet detail, but for anyone using AI to reason over documents, screenshots, or detailed diagrams, what the industry calls multimodal work, it removes the need for a separate image-processing step. Pricing stayed the same: $5 per million input tokens, $25 per million output tokens, available now on the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

The distribution story is at least as interesting as the model itself. GitHub announced that Opus 4.7 is rolling out into GitHub Copilot and will replace earlier Opus versions for Pro Plus users over the coming weeks. That matters because Copilot is the tool most developers already have open every hour of the workday.
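The per-token pricing quoted above turns into request-level dollar figures with simple arithmetic. Here's a minimal sketch; the token counts in the example are made-up illustrative numbers, not real workload data:

```python
# Back-of-envelope cost estimate at the Opus 4.7 rates quoted in the episode:
# $5 per million input tokens, $25 per million output tokens.

INPUT_PRICE_PER_M = 5.00    # USD per million input tokens
OUTPUT_PRICE_PER_M = 25.00  # USD per million output tokens

def opus_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A hypothetical long-document request: 200k tokens in, 4k tokens out.
cost = opus_cost(200_000, 4_000)
print(f"${cost:.2f}")  # prints "$1.10" ($1.00 input + $0.10 output)
```

The asymmetry is the point: long outputs cost five times more per token than long inputs, so summarization-style workloads (big in, small out) stay cheap.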
Getting a frontier model into the product people use as their default is worth more than releasing it into a crowded marketplace where it has to compete for attention. There's a pricing detail worth knowing: GitHub is running Opus 4.7 with a 7.5x premium request multiplier through April 30th, meaning each Opus 4.7 query burns through 7.5 of your usage credits, not one. Take that as a preview of what model tiering looks like in practice, because that structure will be everywhere.

Now to a story that broke in math circles before the AI press picked it up. Mathematician Pishekmek Huetsky used GPT-5.4 Pro to crack Erdos problem number 1196, a 60-year-old asymptotic primitive set conjecture that had seen only partial human progress since Paul Erdos' own 1935 paper. The proof is three pages long. It uses a single elegant trick, a smarter way of counting how prime numbers distribute across large sequences, using something mathematicians call a von Mangoldt function rearrangement.

Yale mathematician Jared Lichtmann reviewed it and called it a "book proof." If you haven't encountered that term before, Paul Erdos had this idea that God keeps a book containing the most elegant possible proof of every theorem. When a mathematician produces something so clean and clever it belongs in that imaginary volume, that's a book proof. Lichtmann said it's the highest compliment the field offers, and he used it here without qualification.

GPT-5.4 found a completely different path than any human mathematician had tried, bypassing the probability framework that researchers had been working within since the 1930s. The kicker, per Lichtmann, is that the approach uses a single elegant trick, which is exactly what a book proof looks like. The relevant boundary to keep in mind: this works on a problem where you can verify the answer precisely and automatically. Most research doesn't work that way. Most of mathematics doesn't work that way.
But for the class of problems where machine verification is possible, Thursday's result is a meaningful marker of where AI capability actually sits relative to domain experts.

Here's the business story from Thursday that tells you more about the current moment than almost anything else I could cover. Allbirds, the eco-friendly wool sneaker brand your yoga instructor wore in 2019, announced it is renaming itself New Bird AI and pivoting to GPU cloud infrastructure. The actual shoe brand was sold to American Exchange Group in March for $39 million. What remains is the public stock ticker, a $50 million convertible financing facility, and a plan to lease GPUs and resell compute as a service. The stock jumped 600 to 700% on the news. That's not a typo.

Here's the detail that makes this genuinely sharp. Allbirds is a Delaware public benefit corporation and a certified B-Corp since 2016. Environmental conservation is written into its corporate charter. The company's founding mission was to reverse climate change through better business. This same company, legally obligated to weigh environmental impact in its decisions, is now pivoting into one of the most electricity-intensive businesses humans have ever built. A B-Corp pivoting to GPU cloud is the same energy as a vegan restaurant pivoting to foie gras.

The smarter read is that this is a capital markets story. When a sneaker shell company can multiply its market cap seven times by saying "GPU-as-a-service," the market is pricing AI exposure, not fundamentals. Nobody knows what anything is worth right now. All they know is AI equals important. The closest historical parallel is Long Blockchain Corp, which renamed itself in 2017 to chase the crypto wave. Shares popped 200%. The SEC later charged insiders with fraud. Nasdaq delisted them within 18 months.

On the regulatory front, the European Commission sent Meta a supplementary statement of objections Thursday over WhatsApp's treatment of third-party AI assistants.
The timeline: in October 2025, Meta banned those assistants from the platform. The ban took effect in January 2026. In March, Meta said it would reinstate access but charge a fee. The Commission looked at that fee structure and concluded it had the same exclusionary effect as the original ban. The Commission said it intends to impose interim measures; in EU antitrust terms, that's an emergency lever, regulators intervening before the broader investigation concludes. The intended order is to restore third-party AI assistant access to WhatsApp on the same terms that existed before October 2025. The case now covers the entire European Economic Area. The underlying dynamic matters well beyond Europe: messaging platforms control AI assistant distribution at scale, and who sets the terms of that access will determine which AI assistants can actually compete for users over the next few years.

Snap announced Thursday it's cutting 1,000 employees, 16% of its workforce. CEO Evan Spiegel's memo framed the decision around rapid advancements in AI, suggesting the technology could fill the gaps left by fewer humans. Activist investor Irene Capital has been pushing for cost reductions. That's worth examining beyond the headline. LinkedIn published data this week showing the broader hiring decline in tech can't be cleanly attributed to AI yet, per their labor market analysis. Snap's stated rationale cuts against that, but whether or not AI actually eliminated those 1,000 jobs is a separate question from whether executives now have a widely accepted justification for investor-driven headcount reductions. That framing is clearly available, and Thursday's memo shows it's already being used.

American Express said Thursday it will acquire Hyper, an AI-focused expense management startup backed by OpenAI CEO Sam Altman. The deal closes in Q2 2026. Terms weren't disclosed. Hyper builds agents that automate expense categorization and compliance checks.
That's the kind of high-volume, rules-based work that AI handles well and that finance departments spend real manual time on. The pattern here is the one to watch: Amex is buying a specialist AI team rather than building in-house, which is consistent with what we're seeing across financial services broadly. When large incumbents start building AI workflow automation into standard commercial products, the competitive bar for standalone SaaS expense tools goes up fast. SAP Concur and similar platforms are presumably already working through what this means for them.

Two quick ones before we close. Google launched a native Gemini app for Mac on Thursday. Option+Space brings up a floating assistant with screen reading, file access from Drive, and support for image and video generation. It runs on macOS 15 Sequoia or later, Apple Silicon only. Google is the last of the three major AI labs to ship a Mac desktop app. Windows wasn't announced.

And Jane Street committed $6 billion in cloud spending to CoreWeave, plus a $1 billion equity stake. Jane Street is one of the world's most profitable quantitative trading firms, a shop that makes its money running mathematical models against financial markets at high speed with billions of dollars at stake daily. When a firm like that decides AI-driven trading justifies $7 billion in commitment, it's a concrete signal about where the most sophisticated professional money actually thinks this is going.

That's all for this edition of Yesterday in AI. Stay curious, and I'll see you tomorrow.