
Anthropic flipped on the artifacts feature in Claude Desktop this week, the same one previously gated to Pro. You can ask Claude to build a small clickable interface (a planner, a calculator, a checklist) and it renders inside the conversation, fully interactive.
It runs HTML, CSS and JS in a sandboxed canvas. No deploy, no setup. The artifact persists across the conversation and can be edited by asking for changes.
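If you're curious what's actually running in that canvas, it's ordinary DOM code. Here's a stripped-down TypeScript sketch of the logic behind a check-off list, where the names and items are my stand-ins, not Claude's actual output:

```typescript
// Minimal sketch of a check-off list artifact: a list of stops
// you can click to mark as done. Items are illustrative stand-ins.
const stops: string[] = ["Breakfast at the market", "Museum", "Sunset walk"];

const list = document.createElement("ul");
for (const stop of stops) {
  const item = document.createElement("li");
  item.textContent = stop;
  item.style.cursor = "pointer";
  // Clicking toggles a strikethrough to mark the stop as done.
  item.addEventListener("click", () => {
    item.style.textDecoration =
      item.style.textDecoration === "line-through" ? "" : "line-through";
  });
  list.appendChild(item);
}
document.body.appendChild(list);
```

Claude writes, styles, and rewires all of this for you. The prompt is the only part you touch.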
This is the feature I keep using to surprise non-technical friends. Last week I built a friend a honeymoon itinerary as a clickable day-by-day with map links and time blocks, instead of a wall of text she had to re-read every morning.
If you've been outputting plans, comparisons, or anything with structure as flat text, stop. Ask for it as an interactive artifact. It's a one-line change to your prompt and the result is dramatically more useful.
"Build this as an interactive artifact. I want to click between days/sections and check things off as I go."

The Codex CLI shipped a 0.18 update that introduces a per-file edit mode. Instead of agentic multi-file rewrites, you can scope it to one file, see a unified diff before it touches disk, and accept or reject hunk by hunk.
The agent-mode-by-default approach was scary for anything you actually care about: too many files changing at once, too much to review.
I've been using the new mode for boring-but-real work: cleaning up a tax spreadsheet's formulas, refactoring a single component, even editing a Markdown post. It's the first time Codex has felt safe for files that aren't throwaway.
OpenAI wired gpt-image-1 into the Codex CLI, so you can generate, save, and reference images from inside an agent run without switching tools or pasting URLs. The images land in your working directory.
Useful overlap I didn't expect: I used this last weekend to generate placeholder hero images for a landing page Codex was already scaffolding. One session, no copy-paste, the images were named and dropped into /public for me.
If you build with Codex, this collapses a workflow. If you don't, ignore it. There are easier ways to make images.
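And if you're somewhere in between, a coder who doesn't run Codex, the same model is reachable directly through the API. A minimal sketch with OpenAI's Node SDK; the prompt, size, and output path are my stand-ins:

```typescript
import OpenAI from "openai";
import { writeFile } from "node:fs/promises";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const result = await client.images.generate({
  model: "gpt-image-1",
  prompt: "Minimal landing-page hero: abstract gradient, soft light", // stand-in prompt
  size: "1536x1024", // landscape; gpt-image-1 also supports 1024x1024 and 1024x1536
});

// gpt-image-1 returns base64-encoded image data rather than a URL.
const b64 = result.data?.[0]?.b64_json;
if (!b64) throw new Error("no image data returned");
await writeFile("public/hero.png", Buffer.from(b64, "base64")); // stand-in path
```

Same result as the in-session version, minus the part where Codex names and places the files for you.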
Previously, NotebookLM accepted PDFs, Docs, and pasted text. Now you can drop in a URL and it fetches the page, indexes it, and treats it as a citable source alongside your other materials.
The reason this matters: it ends the worst part of research mode in any AI, the model confidently inventing things from a URL it pretended to read.
I'm using it to keep a running notebook on AI tools themselves. Add the docs page, add the changelog, add a few good blog posts. Ask it questions. Every answer is citation-linked back to the actual paragraph. No more 'where did it get that?'
Anthropic published a detailed walkthrough of prompt-caching patterns for Sonnet 4.5: where to put your cache breakpoints, how to structure tool definitions, and how to measure the hit rate. They claim 60–80% cost reduction on common agent loops.
This is a coder-only story. If you don't ship anything that talks to Claude, skip.
If you do: the guide is the most concrete cost-engineering doc Anthropic has put out. The big idea is that your system prompt and tool list are the single biggest cache target — get those stable and you stop paying for them on every turn. I'm rewriting one of my agents around this today.
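If you want the shape of it before reading the guide, here's a minimal sketch with Anthropic's TypeScript SDK. The breakpoint placement follows Anthropic's general prompt-caching docs; the model ID, the tool, and the prompts are my stand-ins, not examples from the guide:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Stand-in for a long system prompt. The key property: it stays
// byte-identical across turns, so the cached prefix keeps matching.
const STABLE_SYSTEM_PROMPT = "You are a billing assistant. ...";

const response = await client.messages.create({
  model: "claude-sonnet-4-5", // assumption: check the current model ID
  max_tokens: 1024,
  // Tools come first in the prompt prefix. A breakpoint on the last
  // tool definition caches the whole tool list.
  tools: [
    {
      name: "get_invoice", // hypothetical tool, for illustration
      description: "Fetch an invoice by ID from the billing system.",
      input_schema: {
        type: "object",
        properties: { invoice_id: { type: "string" } },
        required: ["invoice_id"],
      },
      cache_control: { type: "ephemeral" },
    },
  ],
  // Second breakpoint after the system prompt: tools + system become
  // one stable, cacheable prefix you stop paying full price for.
  system: [
    {
      type: "text",
      text: STABLE_SYSTEM_PROMPT,
      cache_control: { type: "ephemeral" },
    },
  ],
  // Only this part changes turn to turn.
  messages: [{ role: "user", content: "Pull up invoice 4821 and summarize it." }],
});

// Measuring the hit rate: on a warm cache, cache_read_input_tokens should
// dwarf input_tokens, because the prefix is read from cache, not re-billed.
console.log(response.usage);
```

The first call pays a small premium to write the cache; every warm turn after that reads the prefix back at a steep discount instead of paying for it in full.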
The full version of the artifact trick from story #1. Templates, prompts, and the exact follow-ups I use to make the output actually useful.
See you tomorrow,
Ky Tomita, The Playbooks AI