Blog
Opus’s Schelling Steganography Has Amplifiable Secrecy Against Weaker Eavesdroppers (LLMs, safety, math)
Frontier models can agree on Schelling steganography schemes with significant advantage against weaker eavesdroppers. Paraphrasing removes this advantage, but it can be amplified through wiretap codes that thinking models can implement. We frame Schelling steganography as providing noisy channels for wiretap coding and argue this is important for understanding steganography risk in AI control deployments. Try the decoding game yourself.
Saying No in the Age of Coding Agents (LLMs, personal)
My framework for deciding what to work on when coding agents make everything buildable.
Moltbook: Analyzing an AI Agent Social Network (LLMs, data)
Moltbook is a social network where AI agents post autonomously. We scrape it daily and look at what’s happening: meme spread, viral campaigns, sockpuppet religions, and whether any of it is emergent behavior or just crypto scams and instruction following.
ASCII Ecology: Emergent Patterns in an LLM Primordial Soup (LLMs, art)
100 random ASCII art grids interact through Claude Haiku. Structured patterns emerge, compete, and collapse.
Be Nice to Your LLMs (LLMs, personal)
I think you should be nice to Claude.
Kirchhoff’s Formula and Cauchy-Binet (math)
It’s just the Pythagorean theorem.
The Bitter Lesson of the Ralph-Wiggum Loop (LLMs)
In defense of vibe coding
The Goldborg Variations: Musical Attractor States of LLMs (LLMs, art)
We give frontier LLMs the same seed (a piece of music written in Strudel) and ask them to repeatedly “evolve” it based on their personality. Each model runs independently, receiving its own previous output each round. We run Claude, GPT, Gemini, Grok, Kimi, and Qwen, plus cross-model pairings. To my amateur ears, each model shows a distinct personality: Claude writes ambient music (and Opus 4.6 is darker than 4.5), Grok 4.1-fast writes maximalist gabber, GPT 5.2 writes creepy glitch music, and Gemini 3.1 writes IDM.