
From Canvas to Code - How AI Is Reshaping Product Design (Ft. Figma)
There is a quiet but significant transition underway in the world of product design, one that challenges long-held assumptions about how interfaces should be created, shared, and brought to life. Tools like Claude Design are accelerating the push to rethink design workflow from scratch by enabling people to work directly in code, reducing the need for traditional canvas-based tools and the handoff processes they depend on.
This is something I’ve been thinking about a lot in my day-to-day as a product manager, especially when I look at where time actually goes in a product team. Not into thinking or deciding, but into translating - translating ideas into specs, specs into designs, designs into code (and somehow still managing to lose sharpness at every handoff). What’s interesting about the current moment is that those translation layers are starting to blur, and with them, a lot of the hidden inefficiencies we’ve normalized.
And this will only fuel more curiosity in coming months, as it sits right at the intersection of speed, clarity, and execution - the three things that define whether a team actually ships meaningful work or just talks about it.
As design and implementation begin to merge into a single continuous activity, the roles within product teams are evolving, the tools we rely on are being rethought, and the very idea of what it means to “design” is expanding beyond static representations into something far more dynamic and immediate.
The Rise of Canvas Tools and Their Hidden Tradeoffs
There has always been a quiet tension at the heart of product design - a push and pull between visual tools and code, between representation and reality, between what is imagined and what actually ships. For the past decade, canvas-based tools like Figma defined the center of gravity for design workflows, giving teams a shared visual language and a canonical place to iterate, but this dominance came with an implicit compromise that most teams accepted without question - the idea that design could live separately from implementation.
That separation created an entire layer of translation work, where designers produced static mocks and engineers rebuilt them in code, often imperfectly, leading to endless cycles of clarification, QA, and pixel-parity adjustments that slowed teams down more than they realized. Over time, design became less about working with the material of the product itself and more about crafting representations of it, which made exploration easier but also introduced friction at the exact moment ideas needed to become real. What looked like efficiency at the surface level masked a deeper inefficiency underneath - every screen was effectively built twice, once in imagination and once in reality.
Transition from Designers to Builder-Designers
That balance is now shifting, not gradually but decisively, as AI coding tools begin to reshape what it means to design in the first place. Tools like Codex and Claude Code are not just accelerating engineering workflows - they are expanding who gets to participate in building interfaces, allowing designers, product managers, and even non-technical users to express intent directly in code and see results immediately, without the traditional handoff.
The result is that design is moving back into the medium of code, not because designers suddenly became engineers, but because the tools have made code expressive enough to function as a design surface. The feedback loop tightens dramatically - instead of iterating on static frames, teams iterate on living systems, where fidelity is more tangible (i.e. something you can interact with and validate).
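As a toy illustration of code functioning as a design surface (everything here is invented for the example, not any specific tool's API): a "design" can be a typed spec rendered directly to markup, so iterating means editing the spec and re-rendering something you can actually interact with, rather than redrawing a static frame.

```typescript
// A design decision expressed as data: the props ARE the design.
type CardSpec = {
  title: string;
  body: string;
  tone: "neutral" | "warning";
};

// Rendering the spec straight to HTML makes the artifact immediately
// "live" - the fidelity the text describes as tangible.
function renderCard(spec: CardSpec): string {
  const bg = spec.tone === "warning" ? "#fff3cd" : "#f8f9fa";
  return [
    `<div style="background:${bg};border-radius:8px;padding:16px">`,
    `  <h3>${spec.title}</h3>`,
    `  <p>${spec.body}</p>`,
    `</div>`,
  ].join("\n");
}

// Iterating on the design is just editing the spec and re-rendering:
const v1 = renderCard({
  title: "Delete project?",
  body: "This cannot be undone.",
  tone: "warning",
});
```

The point is not this particular component, but that the loop from intent to interactive result collapses to a single step.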
When Tools Become Thinking Partners
This shift is subtle but profound, because it changes the question from “how do we design this screen?” to “how do we describe what we want?” and then lets the system translate that intent into working interfaces. When you treat these tools as collaborators rather than utilities, they become something closer to a thinking partner - one that can highlight vague specs, conflicting choices, challenge assumptions, compare and contrast trade-offs, and even debate UX decisions in real time. This dynamic fundamentally alters how ideas are explored and refined.
The Resistance to Change
Most designers I interact with still operate from a place of purism - in sharp contrast to developers, product managers, and even marketers, groups that (in that order, and at different speeds) have begun integrating AI into their work.
Many designers still position themselves as gatekeepers of taste and craft, pushing back against the idea that AI can meaningfully contribute to design.
There’s a recurring pattern among the designers I have interacted with: a quick experiment with a few prompts across a few prompting tools, followed by a firm but predictable conclusion that AI produces “generic” outputs, is too shallow, is incapable of handling real product complexity, or that anyone relying on it fundamentally misunderstands design. And many of those who do use AI are using it with rudimentary, manual prompting (Figma Make et al.).
It’s reminiscent of the early pushback against AI in software development, where initial friction was mistaken for fundamental limitation and AI was written off, until its utility became harder to ignore as the discipline reached an inflection point sometime around November 2025.
What’s often missing is a deeper exploration of the emerging stack - things like structured prompting, AGENT.md files, skills, agent frameworks, MCPs, or even fine-tuning models - that goes beyond surface-level usage.
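To make one piece of that stack concrete, here is a hypothetical sketch of the kind of AGENT.md a design-minded team might keep at the root of a repo. The file contents below are invented for illustration, not any tool's required format - the point is that design intent, constraints, and conventions become machine-readable context an agent can follow.

```markdown
# AGENT.md (illustrative sketch)

## Design system constraints
- Use tokens from the design-token stylesheet; never hard-code colors or spacing.
- One component per file; reuse existing components before creating new ones.

## Interaction conventions
- All destructive actions require a confirmation dialog.
- Form errors render inline below the field, never in toasts.

## When iterating on UI
- Preview the affected screens before declaring a change done.
- Prefer editing an existing component over creating a near-duplicate.
```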
Programming was easier to disrupt because it is grounded in logic and benefits from decades of robust tooling around testing and debugging (which enables constructs like the Ralph Wiggum loop, where AI can test its own code and keep iterating). Design, being more subjective and less instrumented (as of 2026, Sketch is only 15 years old and Figma is 9), will naturally take longer. The shift will be slower, but not stalled. And if history is any guide - much like how Figma overtook Sketch in a relatively short window - we may be underestimating how quickly agent-first design tools could reshape the landscape.
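A minimal sketch of the self-correcting loop mentioned above: generate a candidate, run tests against it, feed the failure back, repeat until green. The "model" here is a stub that improves after seeing the error - a real setup would call an LLM API at that point; everything else (the spec, the function names) is invented for the example.

```typescript
type Candidate = { code: (n: number) => number };

// The spec under test: double the input.
// Returns an error message on failure, null on pass.
function runTests(c: Candidate): string | null {
  return c.code(3) === 6 ? null : `expected 6 for input 3, got ${c.code(3)}`;
}

// Stub standing in for a model call: the first draft is buggy; once it has
// seen the failure message it produces a corrected version.
function fakeModel(attempt: number, lastError: string | null): Candidate {
  return attempt === 0 || lastError === null
    ? { code: (n) => n + 2 } // buggy first draft
    : { code: (n) => n * 2 }; // corrected after feedback
}

// The loop itself: generate, test, iterate until passing or out of budget.
function ralphLoop(maxAttempts = 5): { attempts: number; passed: boolean } {
  let lastError: string | null = null;
  for (let i = 0; i < maxAttempts; i++) {
    const candidate = fakeModel(i, lastError);
    lastError = runTests(candidate);
    if (lastError === null) return { attempts: i + 1, passed: true };
  }
  return { attempts: maxAttempts, passed: false };
}

const result = ralphLoop();
```

The article's point is that this works because tests give the loop an objective stopping condition - exactly what design, as a discipline, mostly lacks today.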
What’s interesting is the divergence in mindset: while many (most?) designers double down on resistance, a few are actively experimenting. Check out:
- Hardik Pandya's work (Head of Design & Sr AI Principal, Atlassian)
- Jenny Wen's work (she led FigJam as Director of Product Design at Figma, and now works at Anthropic)
They are treating AI not as a replacement, but as a frontier worth exploring despite its current imperfections.
Why Figma’s Dominance May Not Last
As this transition unfolds, it exposes a structural disadvantage in traditional design tools that once seemed like a strength. Platforms like Figma established themselves as the source of truth by owning the design format, but that same closed ecosystem now works against them in an AI-driven world, where models have been trained overwhelmingly on code rather than proprietary, closed-source design primitives.
This creates an asymmetry that is hard to ignore - AI understands HTML, CSS, and JavaScript deeply, but has little native understanding of design files locked inside specialized formats, which means that as agents become more capable, they naturally gravitate toward environments where they can operate fluently. Code becomes not just the implementation layer, but the shared language between humans and machines, and that gives it a massive advantage as workflows become increasingly agent-driven.
There is still a possibility for Figma to adapt by fine-tuning open models on its own proprietary format, but that would be a massive undertaking, and even then it would fall short of the terabytes of HTML/CSS/JS that SOTA models have been trained on.
Incumbents like Figma also face the classic challenges of success - large enterprise user bases, accumulated complexity, and a tech stack optimized for a previous era. What once made them indispensable now makes them slower to adapt, and in a landscape where iteration speed is everything, that drag becomes increasingly visible.
When Design, Product, and Frontend Become One
One of the most immediate consequences of this shift is the blurring of roles that were once clearly defined. The boundaries between frontend engineering, UX design, and product management are dissolving, not because those disciplines no longer matter, but because the tools allow a single person to operate across them more fluidly than before.
In practical terms, this reduces the need for intermediate artifacts like wireframes and static mockups, which historically served as communication tools between specialists. When intent can be expressed directly and executed immediately, those intermediaries become less valuable, and the workflow compresses into a tighter loop of idea, implementation, and refinement.
This does not eliminate design as a discipline, but it does change where its value lies. Instead of focusing on pixel-perfect artifacts, designers increasingly contribute through judgment, taste, and the ability to guide systems toward better outcomes. The work shifts from drawing interfaces to shaping experiences, from specifying details to curating possibilities.
The Technical Designer vs the Creative Designer
What emerges from all of this is not a single future for design, but two distinct directions that reflect different ways of engaging with these new tools.
- On one side is the Technical Designer - someone who leans into code as a primary medium, using AI to strengthen the spec, wireframe, iterate, and ship directly - collapsing the gap between idea and execution as much as possible.
- On the other side is the Creative Designer - someone who uses AI to generate, remix, and curate ideas at scale, focusing less on implementation and more on discovering what is worth building in the first place.
These two modes are not mutually exclusive, but they do represent different centers of gravity, and most people will naturally gravitate toward one over the other. What matters is that both are amplified by the same underlying shift - the removal of friction between thinking and making.
In that sense, the most important change is not about tools at all, but about how quickly ideas can move from concept to reality. As that cycle compresses, the advantage shifts to those who can think clearly, ask better questions, and iterate faster, regardless of whether they identify as designers, engineers, or something in between. The tools are simply accelerating a deeper convergence that has been building for years, and we are now starting to see what it looks like when that convergence becomes the default.

