Lead images, briefed and baked
Most publisher AI imagery fails because the brief fails: a title goes to a model with no editorial direction, and the model returns the safest cliché it knows. Glowing brains, swirling data, a handshake at sunset.
Lede reads your article and writes a real art-director's brief: scene, mood, lighting, composition, what to avoid. It bakes that brief across three frontier cloud models in parallel. You see all three. You pick what earns its place above the fold.
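Under the hood, the fan-out is the simple part: one brief, several models, rendered concurrently, every result kept for the editor. A minimal sketch of that pattern (model names and the `render` call are hypothetical placeholders, not Lede's actual API):

```python
import asyncio

# Hypothetical model roster; Lede's real lineup differs.
MODELS = ["cloud-model-a", "cloud-model-b", "cloud-model-c"]

async def render(model: str, brief: str) -> dict:
    # Stand-in for a real image-generation API call.
    await asyncio.sleep(0)  # simulates network latency
    return {"model": model, "brief": brief, "image": f"{model}.png"}

async def bake(brief: str) -> list[dict]:
    # Fire all renders concurrently; gather preserves model order,
    # so the editor sees one candidate per model.
    return await asyncio.gather(*(render(m, brief) for m in MODELS))

results = asyncio.run(bake("warehouse at dawn, low-key lighting, avoid glowing brains"))
for r in results:
    print(r["model"], "->", r["image"])
```

The same brief goes to every model unchanged, so the spread you see is the models' interpretation, not prompt drift.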
Paste an article URL, the article's text, or its raw HTML. The showcase below pits Lede's output against real published lead images, with a free local pair (Flux Krea + FLUX.2 Klein) pre-baked on a MacBook for comparison.
Briefs by Claude Sonnet 4.6 (via OpenRouter). Average bake under four minutes.
Each card shows the article's actual published image next to what Lede would have offered the same desk from the same brief.

Global Gaming Insider · 5/5 models rendered

Live baking is gated on this hosted demo so the API budget doesn't get drained. Drop me a line and I'll run a private demo against your own backlog: paste a handful of your article URLs and watch the brief synthesis plus a three-model bakeoff against your actual headlines, in about ten minutes.
Prefer self-hosting? The source is on GitHub — clone it, add your own API keys, paste any URL, and you have your own.