In this workflow, I used the Gentic Creative MCP server to generate ad assets across ChatGPT, Claude, Claude Code, Claude Cowork, and n8n. The cool part is that the same MCP-powered workflow works across multiple tools without needing to rebuild the system each time.

That’s the real promise of MCP: portability.

Instead of locking a workflow into one app, you expose it once as a service and then use it from wherever you want.

What the Gentic Creative MCP server does

The creative MCP server is built for agent-native ad generation. It includes a set of tools that let an AI agent go from brand understanding to finished ad assets.

The core tools in the workflow are:

  • Vectorize product images

  • Search product images

  • Fetch page

  • Search inspiration ads

  • Generate ad asset

  • List asset jobs

Together, these tools give the agent the context it needs to make better creative decisions.

The product image vectorization step is especially important. It lets the system understand what’s actually inside each product image and store that information in a vector database. In this case, the workflow uses multimodal embeddings so the agent can later search and retrieve the right visual assets when generating ads.

That matters because if you want agents to make creative choices on their own, they need more than just file names. They need to understand the images.
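As a rough illustration (not the server's actual implementation), this kind of image search can be sketched as storing one embedding per image and ranking by cosine similarity at query time. The tool names and the toy `embed` function below are stand-ins; the real server presumably uses a proper vector database and a multimodal embedding model:

```python
import math

# Hypothetical in-memory vector store standing in for a real vector database.
store = {}  # image_url -> embedding vector

def embed(item):
    # Stand-in for a multimodal embedding model: hashes characters into a
    # small fixed-size vector, purely for demonstration.
    vec = [0.0] * 8
    for i, ch in enumerate(item):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def vectorize_product_image(url, description):
    """Index an image so agents can find it by meaning, not filename."""
    store[url] = embed(description)

def search_product_images(query, top_k=3):
    """Return the indexed images most similar to a text query."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), url) for url, v in store.items()]
    return [url for _, url in sorted(scored, reverse=True)[:top_k]]

vectorize_product_image("https://example.com/mio-desk.png", "cute desktop robot on a desk")
vectorize_product_image("https://example.com/mio-home.png", "small home companion robot")
print(search_product_images("cute desktop robot on a desk", top_k=1))
```

The key idea survives the simplification: once images live in a vector index, "find me the minimal desktop shot" becomes a query the agent can run on its own.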

Sign up at gentic.co/creative to get free credits and create your assets.

Starting with the brand context

The first step in the workflow is simple: help the agent understand the brand.

For that, I use fetch page on the brand homepage and usually on the product page too. That gives the system context around:

  • what the brand does

  • what the product is

  • how the brand presents itself

  • what kind of language and positioning it uses

That context becomes the foundation for the ad generation step.
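A minimal sketch of what a fetch-page step boils down to: turning a brand's HTML into readable text the agent can reason over. In the real workflow the MCP tool fetches the URL itself; here the HTML is passed in directly so the sketch stays self-contained, and the behavior is an assumption about the tool, not its documented spec:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def fetch_page_text(html):
    """Reduce a page to plain text for the agent's brand-context window."""
    p = TextExtractor()
    p.feed(html)
    return " ".join(p.parts)

homepage = "<html><body><h1>Mio</h1><p>Your personal AI agent.</p><script>var x=1;</script></body></html>"
print(fetch_page_text(homepage))
```

The extracted text is what gives the agent its sense of the brand's language and positioning before any asset is generated.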

The demo brand: Mio

For this demo, I used Mio, another project of mine.

Mio is a personal AI agent. The challenge is that it’s not a physical product, so there isn’t a real product photo to use in an ad.

To solve that, I created fictional product imagery: a small, cute robotic version of Mio. Think of it like a desktop or home companion robot. Those images became the “product photos” for the workflow.

This was useful for the demo because it let the system treat Mio like a real consumer product and generate ad creatives around it.

Step 1: Vectorizing the product images

Once the Mio images were ready, I passed their URLs into the MCP server and asked it to vectorize them.

That step does two things:

First, it processes the images and understands what’s in them.

Second, it stores them in a way the agent can search later.

After that, I switched tools and verified the images were available through the search product images function. That alone is a nice example of why MCP is useful: one app can trigger the indexing, and another app can immediately use the indexed assets.
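The reason this cross-tool handoff works is that every MCP client sends the same JSON-RPC message shape when it calls a tool. The tool name and argument key below are illustrative rather than the server's documented schema, but the envelope is the MCP `tools/call` request any client would produce:

```python
import json

def tool_call_request(request_id, tool_name, arguments):
    """Build an MCP tools/call request (JSON-RPC 2.0): the same message
    shape regardless of which app the client lives in."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Illustrative tool name and arguments, not the server's documented schema.
req = tool_call_request(1, "search_product_images", {"query": "cute desktop robot"})
print(json.dumps(req, indent=2))
```

Because the wire format is identical everywhere, indexing from one app and searching from another is just two clients talking to the same server.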

Step 2: Letting ChatGPT generate the first ad

With the brand context loaded and the product images indexed, I went back into ChatGPT and asked it to generate a minimalistic ad asset for Mio.

What’s nice here is how little manual work is needed.

The agent was able to:

  • choose one of the product images

  • come up with minimalistic ad copy

  • choose the right aspect ratio and image settings

  • trigger the asset generation job

I didn’t need to tell it exactly which image to use or how to structure the ad. The system used the available context and made those choices on its own.

That’s the kind of workflow I care about most. Less prompting. More delegation.
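One way to picture what the agent assembled before triggering the job is a request like the one below. Every field name here is an assumption for illustration, not the server's documented `generate ad asset` schema; the point is that the agent, not the human, fills these in:

```python
# Illustrative only: these argument names are assumptions, not the
# server's documented schema for the ad-generation tool.
ad_request = {
    "product_image_url": "https://example.com/mio-desk.png",
    "copy": "Meet Mio. Your day, handled.",
    "aspect_ratio": "1:1",
    "style": "minimal",
}

def validate_ad_request(req):
    """Check that the agent filled in every field before triggering the job."""
    required = ("product_image_url", "copy", "aspect_ratio", "style")
    missing = [k for k in required if not req.get(k)]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return True

print(validate_ad_request(ad_request))
```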

Step 3: Checking the asset job in n8n

Ad generation takes a few minutes, so instead of waiting inside one interface, I switched to n8n and used the list asset jobs tool to check the latest result.

That’s where the MCP setup really starts to feel powerful.

The job was created in one tool, and I could inspect the result from another. In n8n, the agent pulled the latest job and showed the completed ad.

The output looked surprisingly strong. It used one of the fictional Mio robot images and turned it into a polished ad creative with very little human intervention.

That cross-tool continuity is the whole point.
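Because generation is asynchronous, any client can run the check-back loop, even one that didn't create the job. The sketch below stubs the job store in memory; in the real workflow the polling call is the list asset jobs MCP tool, and the field names are illustrative:

```python
import itertools

# Stubbed job backend: statuses a real job might move through over time.
_status = itertools.chain(["queued", "processing", "processing"],
                          itertools.repeat("completed"))

def list_asset_jobs():
    """Stand-in for the MCP tool; returns the latest job's state."""
    return [{"job_id": "job_123", "status": next(_status), "asset_url": None}]

def wait_for_latest_job(max_polls=10):
    """Poll until the newest job finishes. Any MCP client can run this,
    regardless of which app created the job."""
    for _ in range(max_polls):
        job = list_asset_jobs()[0]
        if job["status"] == "completed":
            return job
        # A real loop would sleep between polls instead of spinning.
    raise TimeoutError("job did not complete in time")

print(wait_for_latest_job()["status"])
```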

Step 4: Using inspiration ads from real ecommerce brands

The next part of the workflow is where it gets more interesting.

The MCP server includes a searchable inspiration library built from ads across more than 1,000 ecommerce brands. More than 20,000 ads have been vectorized so agents can search them by brand, style, angle, or creative pattern.

Instead of manually browsing ads one by one, the agent can search for relevant inspiration and use that as input when generating a new asset.
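A simplified sketch of that search step: the real tool presumably combines metadata filters with vector similarity, but even a plain filter over brand and style shows the shape of what the agent gets back. The ads, field names, and headlines below are invented for illustration:

```python
# Tiny stand-in for the inspiration library: each ad carries metadata the
# agent can filter on. Entries here are invented for illustration.
ADS = [
    {"brand": "Jones Road Beauty", "style": "minimal", "headline": "Clean beauty, simplified"},
    {"brand": "Jones Road Beauty", "style": "bold", "headline": "Color that speaks"},
    {"brand": "Other Brand", "style": "minimal", "headline": "Less is more"},
]

def search_inspiration_ads(brand=None, style=None):
    """Narrow the library by brand and/or creative style."""
    results = ADS
    if brand:
        results = [a for a in results if a["brand"] == brand]
    if style:
        results = [a for a in results if a["style"] == style]
    return results

hits = search_inspiration_ads(brand="Jones Road Beauty", style="minimal")
print([a["headline"] for a in hits])
```

The agent's job is then to pick from these hits and justify the pick, which is exactly what the next step asks Claude Cowork to do.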

For the demo, I asked Claude Cowork to find inspiration ads from Jones Road Beauty.

The goal wasn’t to manually pick an ad myself. I wanted the agent to:

  • review the Mio product images

  • review the Jones Road Beauty ads

  • decide which product image made the most sense

  • decide which inspiration ad was the best fit

  • explain its reasoning

  • generate the final ad asset

That’s a much better test of the system.

Letting the agent make creative decisions

Claude Cowork broke the task into a series of steps:

  1. search Mio product photos

  2. choose a Jones Road Beauty inspiration ad

  3. write the ad copy

  4. generate the ad asset

That breakdown is important. It shows the agent isn’t just executing a single function call. It’s turning the task into a structured mini-project.

In this case, it selected a Mio image that it thought worked best for a minimalistic creative direction. Then it picked a Jones Road Beauty ad because it matched that same minimal feel.

That’s where the workflow starts to feel less like prompt engineering and more like creative direction by an agent.

Why this matters

The coolest part of this demo is not the final image.

It’s the fact that the same MCP server worked across:

  • ChatGPT

  • Claude

  • Claude Code

  • Claude Cowork

  • n8n

That means the creative workflow is no longer trapped inside one interface.

You can use one tool to set context, another to trigger jobs, another to review outputs, and another to automate everything. The underlying service stays the same.

A lot of AI workflows today are still app-specific. You build something inside one environment and then have to rebuild the logic somewhere else if you want to use another tool.

MCP changes that.

My takeaway

This is the direction I’m most excited about: agent-native creative workflows.

Not just AI that responds to prompts, but AI that can:

  • understand a brand

  • understand product imagery

  • search for inspiration

  • make creative choices

  • generate multiple ad variations

  • work across different tools using the same backend service

That’s what makes this more than a demo.

It’s a glimpse of what creative production can look like when agents have real tools, shared context, and a consistent protocol underneath.

And honestly, once you see the same workflow run across ChatGPT, Claude, and n8n, it becomes hard to go back to isolated point solutions.

