Lede.

Lead images, briefed and baked

Open-source editorial tool

The lead image your article actually deserves.

Most publisher AI imagery fails because the brief fails: a title goes to a model with no editorial direction, and the model returns the safest cliché it knows. Glowing brains, swirling data, a handshake at sunset.

Lede reads your article and writes a real art-director's brief: scene, mood, lighting, composition, what to avoid. It bakes that brief across three frontier cloud models in parallel. You see all three. You pick what earns its place above the fold.
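
Under the hood the shape is simple: one brief, three parallel bakes. Here is a minimal TypeScript sketch of that flow, using illustrative names (EditorialBrief, writeBrief, bakeWithModel) rather than the repo's actual API:

```ts
// Minimal sketch of the brief-then-bake flow. The names below (EditorialBrief,
// writeBrief, bakeWithModel) are illustrative, not the repo's actual API.
interface EditorialBrief {
  scene: string;        // concrete subject and setting
  mood: string;
  lighting: string;
  composition: string;
  avoid: string[];      // the anti-cliché list
}

// Hypothetical adapters; in Lede these would sit in front of OpenRouter and
// the image-model APIs.
declare function writeBrief(articleText: string): Promise<EditorialBrief>;
declare function bakeWithModel(model: string, brief: EditorialBrief): Promise<string>;

// One brief, written once, then executed in parallel across the three cloud models.
async function bake(articleText: string): Promise<Record<string, string>> {
  const brief = await writeBrief(articleText);
  const models = ["recraft-v3", "gpt-5.4-image-2", "gemini-3-pro-image"];
  const images = await Promise.all(models.map((m) => bakeWithModel(m, brief)));
  return Object.fromEntries(models.map((m, i) => [m, images[i]]));
}
```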

Paste an article URL, the article's text, or its raw HTML. The showcase below puts what Lede produces next to the real published lead images, with a free local pair (Flux Krea + FLUX.2 Klein) pre-baked on a MacBook for comparison.

Why use Lede
A real brief, not a guessed prompt
The brief layer turns each article into an editorial scene with concrete subjects, lighting, mood, and an anti-cliché list — not a vague tagline tossed at a model.
Three frontier models, side by side
Recraft v3, GPT-5.4 Image 2, Gemini 3 Pro Image. Same brief, three executions. You choose what runs: no black box, no auto-publish.
Open and self-hostable
MIT-licensed. Bring your own API keys, deploy anywhere Next.js runs. Optional local fallback on Apple Silicon for zero per-image cost. Source on GitHub →
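
For a self-hosted run, the choice between the cloud bakeoff and the local fallback could look roughly like this. A sketch only: the env var names (OPENROUTER_API_KEY, LEDE_LOCAL_FALLBACK) are hypothetical, not the repo's documented config.

```ts
// Sketch of BYO-keys backend selection; env var names are hypothetical,
// not Lede's documented configuration.
const hasCloudKeys = Boolean(process.env.OPENROUTER_API_KEY);

const backends: string[] = hasCloudKeys
  // Keys present: run the three-model cloud bakeoff.
  ? ["recraft-v3", "gpt-5.4-image-2", "gemini-3-pro-image"]
  // No keys: fall back to the free local pair on Apple Silicon, if enabled.
  : process.env.LEDE_LOCAL_FALLBACK === "1"
    ? ["flux-krea-local", "flux-2-klein-local"]
    : [];
```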

Briefs by Claude Sonnet 4.6 (via OpenRouter). Average bake under four minutes.

See it on your articles

Want a private demo?

Live baking is gated on this hosted demo so the API budget doesn't get drained. Drop me a line for a private demo against your own articles, or clone the repo for a self-hosted run.

I run private demos against real publisher backlogs: paste a handful of your own article URLs and see the brief synthesis plus a three-model bakeoff against your actual headlines. Takes about ten minutes.

For a self-hosted run with your own API keys, the source is on GitHub: clone it, paste any article URL, and you have your own instance.