When Patterns Break: Why AI Still Can’t Design with Structure

by Michelle Jamesina | Jul 6, 2025 | AI, Narrative and Neural Nets

Summary: This article explores why LLMs struggle with structured visual tasks like generating Celtic knots or accurate neural networks. Written by Michelle Jamesina, a strategist and designer with a background in graphic design and AI, it breaks down the core AI design limitations when it comes to geometry and systems thinking. It also explains why human understanding remains essential for structured design work.

Why Can’t AI Create Structured Visual Designs? The Truth About AI Design Limitations

As a former brand designer and UI/UX designer with a graphic design degree, I’ve spent years training my eye to spot visual inconsistencies. That expertise makes AI-generated images immediately obvious to me, especially when the lighting defies basic physics or shadows fall in impossible directions.

Recently, while developing a visual identity combining Celtic knots with neural networks, I discovered something fascinating about AI design limitations: AI struggles dramatically with structured visual tasks that require rule-based thinking.

What Happens When You Ask AI to Design Celtic Knots and Neural Networks?

I started with what seemed like straightforward requests:

  • Generate a visually complex Celtic knot
  • Show a 3D rendering of a neural network
  • Combine them: make each neural network node a stylized knot

The concept was clean. The visual patterns were well-established. If AI could understand these structures, it should handle the task easily.

It couldn’t. Here’s what it generated…

1. The first attempt:
A white silhouette on a gold and blue gradient background. The silhouette is of a nonsensical Celtic knot lying over a net of circles with lines connecting them. The connecting lines and circles almost make sense. The Celtic knot clearly does not.
2. Asking it to generate a visualization of a neural network (this is not what a neural network looks like):
A bunch of spheres floating in space with lines randomly connecting some of them.
3. Asking it to turn the nodes (which, to be safe, I told it were the spheres in the previous image) into woven knots:
A bunch of small Celtic knots floating in space with lines randomly connecting some of them. The connections differ from the previous image, and the loops of the knots don't make sense.

The Specific Ways AI Failed at Structured Design

The result wasn’t what I was looking for, and closer inspection revealed fundamental flaws. Take a look at that first image.

Neural Network Errors:

  • Nodes scattered randomly without proper layer structures
  • Missing meaningful geometric relationships
  • No consistent flow or connectivity logic
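The structure the AI keeps missing is simple to state precisely. As a rough sketch (the function name and layer sizes here are my own illustration, not anything from a real diagramming tool), a feed-forward network diagram encodes nodes grouped into ordered layers, with edges running only from each layer to the next:

```python
def build_layered_network(layer_sizes):
    """Return (nodes, edges) for a fully connected feed-forward network.

    nodes maps node id -> layer index; edges connect every node in
    layer i to every node in layer i+1 -- the "flow" that AI-generated
    images typically fail to respect.
    """
    nodes = {}
    layers = []
    node_id = 0
    for layer_idx, size in enumerate(layer_sizes):
        layer = []
        for _ in range(size):
            nodes[node_id] = layer_idx
            layer.append(node_id)
            node_id += 1
        layers.append(layer)

    edges = [(a, b)
             for left, right in zip(layers, layers[1:])
             for a in left for b in right]
    return nodes, edges

nodes, edges = build_layered_network([3, 4, 2])
# Every edge flows forward exactly one layer -- no random connections.
assert all(nodes[b] == nodes[a] + 1 for a, b in edges)
```

Any hand-drawn textbook diagram satisfies that last assertion; the images I got back would not.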

Celtic Knot Problems:

  • Violated core knot rules
  • Inconsistent over-under weaving patterns
  • Broken continuity—not actually continuous loops
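The weaving rule the AI violates is mechanical enough to check by hand, or by a few lines of code. This toy encoding is my own (real knotwork analysis is more involved): record whether each crossing you meet while following one strand is an "over" or an "under", and verify they alternate.

```python
def alternates(crossings):
    """True if crossings met along one strand alternate over/under.

    crossings: a sequence of the strings 'over' and 'under', in the
    order encountered while tracing a single strand.
    """
    return all(a != b for a, b in zip(crossings, crossings[1:]))

# A legal Celtic weave alternates at every crossing...
assert alternates(["over", "under", "over", "under"])
# ...while the AI's output breaks the rule mid-strand.
assert not alternates(["over", "over", "under"])
```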

I decided to do more testing to see whether it could actually "see" the strands in the knots it was generating. I first asked it to search the web to understand Celtic knot structure. Once it output a breakdown of the rules Celtic knots follow, I asked it to generate an image of a Celtic knot. This is what it produced:

A pseudo-Celtic knot generated by ChatGPT. It looks right at a glance, but the strands don't make sense when you follow them. AI design limitations have broken the rules of physics.

It looks passable, right? But try following any of the strands and see where they end. We already know AI is bad at counting things, but just for fun I asked how many strands are in that pattern. ChatGPT answered:

Based on the generated image, there are two strands in the Celtic knot pattern.

You can tell this by following the path of each strand:

  • One strand loops through roughly half the grid, interlacing continuously before returning to its starting point.
  • A second distinct strand weaves through the remaining spaces, maintaining the alternating over-under pattern.

This is completely wrong. So I decided to push further by asking it to generate the same image but with each strand in a different color. This is what I got:

A pseudo-Celtic knot generated by ChatGPT. It looks right at a glance, but the colors don't line up with the strands, and the loops don't make sense when you follow them.

See that? It didn’t produce the same knot, which is unsurprising, but it did produce three strands and three colors. It just changed colors mid-strand because it doesn’t understand the architecture it’s visualizing. It can only repeat a visual aesthetic.
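Counting strands, by contrast, is a purely structural operation. One way to model it (this data model is my own illustration): give each strand segment an id, record which segment it connects to next, and count the closed loops that traversal produces.

```python
def count_strands(successor):
    """Count distinct strands in a knot diagram.

    successor: dict mapping each strand-segment id to the next segment
    it connects to. Each strand is a closed loop, i.e. a cycle in this
    map, so counting strands means counting cycles.
    """
    seen = set()
    strands = 0
    for start in successor:
        if start in seen:
            continue
        strands += 1
        seg = start
        while seg not in seen:
            seen.add(seg)
            seg = successor[seg]
    return strands

# Two closed loops: 0 -> 1 -> 2 -> 0 and 3 -> 4 -> 3.
assert count_strands({0: 1, 1: 2, 2: 0, 3: 4, 4: 3}) == 2
```

A system that actually followed its strands this way could never answer "two" and then draw three.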

So why did it produce this specific image?

A quick Google Image search for “celtic knot” gives us the clues:

  1. A lot of knot artwork uses gradients to visualize the shadows produced by strands passing under others. It picked up on this visualization element and added a gradient without understanding why it was there.
  2. A lot of knotwork images outline the strands, so they don’t need whitespace to show where one strand passes over another, as a solid, one-color strand visualization does.
  3. People tend to use Irish-feeling colors in their images. That means the vast majority of existing images are green, orange, and gold.

Why This Reveals Fundamental AI Design Limitations

Here’s the core issue: LLMs predict pixels based on pattern recognition, not structural understanding.

Understanding these AI design limitations helps explain why the technology consistently struggles with rule-based visual work.

When you ask an AI to draw a Celtic knot, it’s not thinking about:

  • Spatial topology rules
  • How weaving actually works or how ropes would interact in the physical world
  • Mathematical relationships between crossing points

Instead, it’s guessing what pixels might look aesthetically similar to Celtic knots it’s seen before.

This explains why AI consistently fails at:

  • Subway map design (requires spatial logic)
  • Game board layouts (needs rule-based positioning)
  • Comic panel continuity (demands systematic consistency)
  • Architectural blueprints (requires structural engineering principles)

What This Means for Designers, Strategists, and Creative Teams

Recognizing AI design limitations is crucial for making strategic decisions about when and how to use AI tools effectively.

Tasks Where AI Cannot Replace Human Expertise:

  1. Precision-based visual work requiring geometric accuracy
  2. Rule-governed design systems with logical constraints
  3. Meaningful visual encoding where structure carries information
  4. Novel system combinations the AI hasn’t encountered before

What AI Can Still Do Well:

  • Generate initial design concepts and inspiration
  • Create stylistic variations of existing designs
  • Speed up iteration cycles for exploratory work
  • Produce placeholder visuals for early-stage projects

Why I Banned AI Image Use at the NIH

Here’s one more example I used to explain the AI image use policy I set when I was working at the National Institutes of Health (NIH). My policy banned the use of generated imagery in communications for a simple reason: AI does not understand the human body. It will always produce inaccuracies that require extensive human oversight to catch. The NIH is an authority governing scientific research into the human body. It can’t afford the risk of an AI-generated image depicting the human body, or the science it does, incorrectly.

Here’s how I explained it to the staff at the National Institute of Dental and Craniofacial Research (NIDCR). One of their goals is to communicate accurate information about teeth. As of right now, GenAI tools have zero idea how many teeth human beings have. They can look up the answer online. They can make an educated guess based on what they think the answer should be. None of that matters. Just like with the Celtic knot, the AI cannot actually apply those rules when generating an image. There is always a chance the AI will get it right, but for an institute like the NIH, it isn’t worth the risk.

And just for fun, here’s ChatGPT’s latest guess at how human teeth work:

A "diagram" of human teeth generated by ChatGPT. It looks similar to ones you might see on a doctor's office wall at first glance, but AI design limitations have led to incorrect labels and anatomy.

It’s both incorrectly labeled and a couple teeth short.

The Paradox: More Powerful AI Needs More Human Oversight

This experience reinforced a crucial principle: The more sophisticated AI becomes, the more important human expertise becomes for quality control.

Why? Because AI can now create convincing-looking results that contain fundamental errors only experts can spot. A non-designer might look at AI-generated Celtic knots and think they look fine. Someone with design training immediately sees the structural problems.

Questions Designers Should Ask When Using AI Tools

Before relying on AI for visual work, consider:

  • Does this task require understanding spatial relationships?
  • Am I combining visual systems in ways the AI hasn’t seen before?
  • Does the design need to follow specific rules or constraints?
  • Will the visual need to encode meaning beyond just aesthetics?
  • Do I have the expertise to verify the AI’s output is structurally sound?

The Real Value of Human-AI Collaboration in Design

The solution isn’t avoiding AI—it’s understanding where human insight remains irreplaceable.

Effective Collaboration Model:

  1. Use AI for initial concepts and rapid iteration
  2. Apply human expertise to verify structural integrity
  3. Manually refine rule-based elements that require precision
  4. Combine AI speed with human constraint awareness

Why I’m Opening Illustrator Instead

For my Celtic-neural network hybrid, I’ll be:

  • Manually drawing a knot so it follows the laws of physics
  • Ensuring proper over-under weaving logic
  • Building geometrically sound neural network structures by referencing existing visualizations of them
  • Creating a meaningful visual connection between the systems

Because good design isn’t just about aesthetics; it’s about encoding meaning through structure.

Key Takeaways for the Future of AI-Assisted Design

Understanding AI design limitations reveals both the technology’s current constraints and opportunities for human-AI collaboration.

What This Teaches Us:

  • Pattern recognition ≠ pattern understanding
  • Visual complexity requires rule-based thinking, not just style mimicry
  • Human expertise becomes more valuable as AI becomes more sophisticated
  • The best results come from strategic human-AI collaboration

The Bottom Line: AI excels at generating visual possibilities. Humans excel at understanding visual logic. The future of design lies in combining both strengths strategically.


Michelle Jamesina is a strategist and designer exploring the intersection of human creativity and AI capabilities. Her work focuses on where traditional design principles meet emerging technology constraints.

Frequently Asked Questions About AI Design Limitations

Q: Can AI improve at structured visual design over time?

A: Current AI models would need fundamental architectural changes to understand spatial rules and geometric constraints, not just better training data. There are companies working on this, so we’ll see new models get better at it in the future.

Q: What design tasks should I never give to AI?

A: Avoid using AI for any work requiring mathematical precision, rule-based systems, or where visual structure encodes specific meaning. Pattern design is a great example of this; it can give you a random amalgamation of colors, but it won’t be able to follow pattern rules.

Q: How can I tell if AI-generated visuals have structural problems?

A: Look for inconsistent lighting, impossible shadows, broken patterns, and violations of physical or geometric rules. Lighting is the biggest giveaway right now, but models keep improving as they absorb more data.

Q: Should designers worry about AI replacing them?

A: Yes, but not for the reason you think. AI changes the role but increases the value of human expertise in verifying, refining, and applying structural knowledge to visual work. You know what else totally changed the role for designers in the past? Photoshop. This is not the first time the design industry has flipped on its head, and it definitely won’t be the last. The best advice I ever got in design school was to keep learning because that would determine if I was able to survive as a graphic designer. Generative AI is just the latest tool you need to learn. It will never replace a good designer who keeps learning, but it will replace the ones who refuse to change.
