Written by Amber Jain

Blogposts

Generating Images from the CLI Using ChatGPT's $20 Plan (Without Getting Blocked by Cloudflare)

How OpenAI's Codex CLI Quietly Unlocked Scriptable Image Generation for Paying Users

Most people on ChatGPT's $20-per-month plan think of it as a chat subscription. That's reasonable, because that's how it's marketed - but it significantly undersells what's included. Buried inside that flat monthly fee is access to image-generation compute that, priced out through any direct API or credits-based system, would cost most active users several times what they're paying. The problem is that getting that compute to do anything useful outside the browser - slotting it into a script, a pipeline, a server-side workflow - has historically been an exercise in frustration, mostly because chatgpt.com sits behind Cloudflare's bot detection and actively resists automation. A recent, relatively quiet update to OpenAI's Codex tooling changes that picture considerably, and if you care about programmatic access to AI capabilities without paying per-image rates, it's worth understanding what just became possible.
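
The full post covers the mechanics, but the shape of the workflow is simple: drive the Codex CLI from a script instead of the browser. Here is a minimal sketch, assuming the Codex CLI is installed and already signed in with a ChatGPT account; the prompt wording and the output filename are illustrative placeholders, not details from the post.

```python
import subprocess

# Minimal sketch: invoke the Codex CLI non-interactively from a script.
# Assumes `codex` is installed and signed in with a ChatGPT account;
# `codex exec` runs a prompt without opening an interactive session.
# The prompt and output filename are illustrative placeholders.
prompt = (
    "Generate an image of a lighthouse at dusk and "
    "save it as lighthouse.png in the current directory."
)

result = subprocess.run(
    ["codex", "exec", prompt],  # non-interactive Codex invocation
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # the run transcript, handy for logging in a pipeline
```

Because authentication rides on the normal Codex login rather than on browser automation, a script like this never touches chatgpt.com directly - which, per the post's framing, is what sidesteps the Cloudflare problem.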

From Canvas to Code - How AI Is Reshaping Product Design (Ft. Figma)

There is a quiet but significant transition underway in the world of product design, one that challenges long-held assumptions about how interfaces should be created, shared, and brought to life. Tools like Claude Design are accelerating the push to rethink the design workflow from scratch by enabling people to work directly in code, reducing the need for traditional canvas-based tools and the handoff processes they depend on.

This is something I’ve been thinking about a lot in my day-to-day as a product manager, especially when I look at where time actually goes in a product team. Not into thinking or deciding, but into translating - translating ideas into specs, specs into designs, designs into code (and somehow still managing to lose sharpness at every handoff). What’s interesting about the current moment is that those translation layers are starting to blur, and with them, a lot of the hidden inefficiencies we’ve normalized.

Self-Hosting Large LLMs Without High-End GPUs: Distributed Inference on Consumer Hardware

There is a quiet shift happening in the world of self-hosted AI, one that challenges the long-held assumption that running powerful language models requires either expensive GPUs or reliance on cloud providers. A third path is opening up that feels surprisingly accessible: pooling the devices you already own into a distributed AI cluster that behaves like a single machine.
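
The teaser doesn't name a specific stack, but llama.cpp's RPC backend is one concrete instance of the idea: each device runs a small RPC server, and a coordinating machine shards the model across all of them. Below is a sketch under that assumption; the hostnames, ports, and model path are placeholders, and the post's own setup may differ.

```python
import subprocess

# One concrete instance of device pooling (not necessarily the stack the
# post describes): llama.cpp's RPC backend. Each worker device first
# exposes itself with the rpc-server binary, e.g.:
#
#     ./rpc-server -H 0.0.0.0 -p 50052
#
# The coordinating machine then points llama-cli at the pooled workers.
# Hostnames, ports, and the model path below are placeholders.
workers = "192.168.1.10:50052,192.168.1.11:50052"

subprocess.run(
    [
        "./llama-cli",
        "-m", "models/llama-3-70b-q4.gguf",  # placeholder model path
        "--rpc", workers,                    # comma-separated worker list
        "-p", "Hello from a pooled cluster of consumer devices.",
    ],
    check=True,
)
```

The appeal of this layout is that no single device has to hold the full model: layers are spread across whatever mix of laptops, desktops, and mini PCs happens to be on the network.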