AI + Design: How I Approach Figma Make
AI Is Not a Designer. It's a Tool.
More of us are starting to use AI in the design process: Figma Make, Claude, Gemini, and others. That's a good thing, as long as we treat them as structured design tools, not shortcuts.
When AI output feels “off,” it’s usually not because the tool is bad. It’s because we didn’t give it the right constraints, the right context, or a clear definition of what “good” looks like.
How I Think About AI in Product Work
I don’t think of AI as a designer. I think of it as a structured thinking partner for early exploration, a systems assistant for workflows and edge cases, and a consistency manager when we need repeatable output.
It’s most valuable when the work is complex, constraint-heavy, and easy to mess up through omission—permissions, auditability, failure states. Which, in practice, is most of what product designers deal with day to day.
AI is doing its job when it helps me get to a defensible solution faster, surfaces edge cases earlier, makes tradeoffs explicit, and produces artifacts that are easier to review as a group.
AI is not doing its job when it creates pretty screens that ignore workflow reality, skips over data integrity or permissions, produces confident-looking solutions that aren't grounded in constraints, or generates divergent directions that are hard to compare.
Figma Make and the Role of Guidelines
One of the most important things I’ve learned using Figma Make is that prompts alone aren’t enough.
I rely heavily on a persistent Guidelines.md file: a ruleset that Figma Make always sees in the background, regardless of which model or prompt I'm using. This file enforces the design system, prevents drift, and reduces the need to restate non-negotiable constraints every time.
The goal isn’t to overload the model with context. It’s to give it a stable foundation so it doesn’t invent solutions when instructions are ambiguous. If something is unclear, I want the system to preserve the existing UI and flag uncertainty—not make decisions for me.
From there, the guidelines cover execution rules, design system constraints, typography, spacing, layout, actions, dialogs, tables, forms, and all the things that tend to quietly break when AI is allowed to “be helpful.”
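As a sketch of what that kind of file can look like, here is a minimal, made-up example; the specific rules below are illustrative, not my actual guidelines:

```markdown
# Guidelines

## Non-negotiables
- Use only design system tokens; never hard-code colors, spacing, or type sizes.
- If an instruction is ambiguous, preserve the existing UI and flag the
  uncertainty instead of guessing.

## Layout and components
- Dialogs: primary action on the right; destructive actions require confirmation.
- Tables: every column needs defined empty, loading, and error states.
- Forms: inline validation; no new input components without explicit approval.

## Tone
- Labels are sentence case; no marketing language in UI copy.
```

The point isn't the specific rules; it's that they persist across every prompt, so the model never has to be reminded of them.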
Model Choice Matters
Every model behaves differently. Architecture, training data, and alignment shape how each one performs. Treating them as interchangeable leads to inconsistent results.
In Figma Make, the default model (Claude) is a general-purpose, reasoning-oriented model that performs well at summarization, following nuanced structure, and maintaining a clear, approachable tone, especially when prompts are well-structured and logic is embedded directly.
The other models (Gemini Flash/Pro) perform better when I want breadth: comparing layouts, exploring interaction approaches, or generating multiple directions inside constraints.
And there are models I reach for when I’m not thinking, just doing—small layout tweaks, naming passes, or very targeted refinements where speed matters more than depth.
Craft a Repeatable Prompt Structure
Prompt fluency isn't just about how prompts are written; it's also about choosing the right model for the job. Most AI prompts fail because they jump straight into what to build, don't explain why the thing exists, don't define what "good" looks like, and leave too much room for interpretation.
The structure I use mirrors how experienced product designers already think. It forces intent to be front-loaded.
Before I design anything, I want clear answers to:
What am I doing?
In what environment?
With what constraints?
How will I judge success?
That structure shows up consistently in my prompts to prevent specific kinds of failure: scope creep, over-design, invented workflows, and premature optimization.
This isn’t about micromanaging the tool. It’s about preventing it from inventing solutions I’m not ready to build.
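To make the four questions concrete, here's what a prompt following that structure might look like; the feature and constraints are invented for the example:

```markdown
**What:** Design the audit-log table for the admin console.
**Environment:** Enterprise web app; existing design system; desktop-first.
**Constraints:** Read-only data; role-based visibility; must handle 10k+ rows;
no new components.
**Success:** An admin can trace who changed a permission, and when, without
leaving the page.
```

Front-loading intent this way leaves far less room for the model to invent workflows or optimize the wrong thing.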
Lightweight vs. Structured Prompts
I don't use the same kind of prompt at every stage. If I'm still figuring out the problem (early discovery, comparing layout directions, or stress-testing an idea visually), I keep prompts lightweight. At that stage, I'm learning.
Once I’m converging on a solution, the core workflow is understood, and tradeoffs are already decided, I switch to structured prompts. This is especially important for v1 features, enterprise workflows, or regulated and high-risk flows.
Treating every prompt as exploratory is a mistake. That approach breaks down the moment the work needs to ship or be shared with engineering.
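To show the contrast, here's how the same request might read at each stage; both examples are invented:

```text
Lightweight (exploring):
"Show me three layout directions for an activity feed: chronological,
grouped by user, and grouped by object."

Structured (converging):
"Build the activity feed using the grouped-by-object layout.
Constraints: existing card component only; paginate at 50 items;
hide entries the viewer lacks permission to see.
Success: matches the approved flow; no new components;
empty and error states included."
```

The lightweight prompt invites divergence; the structured one closes it down once the decisions have been made.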
Using AI to Compile Intent
One thing I’m very deliberate about is that I don’t use AI tools to design the solution. I use them to extract and structure intent so Figma Make can execute cleanly.
That often means pasting in messy inputs (raw requirements, notes from multiple stakeholders, half-formed ideas) without cleaning them up first. Noise exposes ambiguity. Cleaning too early hides missing decisions.
This is where AI tools outside of Figma can be a big help. I ask the model to extract only design-relevant intent: core user problems, primary workflows, constraints, risks, and open questions, without resolving them. (I’ll save a deep dive on how I craft clear design requirements for another post.)
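A hypothetical extraction prompt along these lines might read:

```text
"From the notes below, extract only design-relevant intent.
Return five lists: core user problems, primary workflows,
constraints, risks, and open questions.
Do not propose solutions. Do not resolve conflicts; list them
as open questions.

[pasted raw notes]"
```

The output isn't a design. It's a structured statement of intent that Figma Make can then execute against cleanly.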
The Point
Figma Make doesn’t replace design judgment. It exposes where judgment is missing.
Used well, it helps surface structure earlier, reduce accidental scope creep, and create a single source of truth the team can align around. Used poorly, it just accelerates ambiguity into polished UI. The difference isn’t the tool. It’s whether you’ve done the work to be clear about intent before you ask AI to help.
In my next post, I’ll break down prompt creation in practice.