PUBLISHED
October 13, 2025
5 minute read
Elisabetta Salatino on control, authorship, and what it really means to design with a machine that learns you back.

In fashion, the line between human and machine creativity is blurring fast. What started as a novelty is now an industry pivot. In 2025, the AI-in-fashion market is worth over $2.9 billion, projected to exceed $94 billion by 2034. Over 70 percent of fashion executives plan to integrate generative AI into their creative or marketing pipelines this year, from digital campaign development to material simulation and trend forecasting.
Consumers are adapting faster than the industry expected: 52 percent of shoppers say they’d use AI tools to help choose outfits; 71 percent believe virtual try-ons will make them more confident to buy.
Yet the creative tension is palpable. How much of creation still belongs to us?
When Balenciaga leans into AI, the goal is provocation, exaggeration, even parody. Gentle Monster, by contrast, treats it as a tool for dialogue, using AI to build interaction, curiosity, and community.
For young designers, the shift feels both electric and uneasy. Everyone’s using the same tools, the same interfaces. If Midjourney can produce a perfect campaign in seconds, what makes an image yours?
That’s where Elisabetta Salatino enters the frame: a young Italian designer exploring AI as both medium and message. Her work doesn’t chase hyper-polished perfection — it toys with friction and control. From her thesis Automated Fashion Design to her workshops on “AI Fashion Campaigns” at Accademia Italiana, she argues that real innovation lies not in letting AI think for us, but in teaching it to think with us.
We sat down with Salatino to talk about what it means to design with a machine that can imagine faster than you can, how prompting became the new form of writing, and why the future of fashion might depend less on who draws the line and more on who decides where it begins.
You’ve created campaign concepts across different brands. How do you balance the brand’s world with your own visual language?
I first study the visual identity of the brands, trying to find points of convergence with my personal creative vision. I then move on to an entirely experimental phase where I test different concepts until I find one that convinces me. However, I must admit that the brands I choose for my “imaginary” collaborations are always brands that are already in line with my tastes. The important thing for me is to ensure that my campaigns communicate messages and emotions that make viewers reflect, which is why most of the content I generate is never simple or purely commercial imagery.
“The important thing for me is to ensure that my campaigns communicate messages and emotions that make viewers reflect, which is why most of the content I generate is never simple or purely commercial imagery.”
Walk me through your process, from the first idea to the final image.
The first step is a study of the brand, accompanied by a conceptual study where I choose what I want to communicate with a specific campaign. Secondly, there is a visual research phase that draws on various sources, such as the brand's social media pages or website, but also platforms such as Pinterest or online archives where I find old campaigns or collections to seek inspiration. Thirdly, I combine all this work into various mixed prompt tests, pairing the text with reference images that also form part of the prompt, in order to understand the best way to translate my idea into an image; I call this the “prompt refinement” phase. After testing, testing and testing again, and having arrived at a vision I find suitable, I use it as a stylistic reference to develop the entire campaign. Beyond that, I don't want to spoil anything, because the real gems will be in the masterclass.
How do you edit or “direct” the AI once you’ve started generating images?
The real trick to getting AI (in my case, Midjourney) to generate images that reflect my ideas and style is to train your own “version of the machine”. Software such as Midjourney offers the possibility of training a personal version through a long process of choosing the images you like from a long list, and the system then continues to learn from you as you generate and select the best images. This is a very important step, because it makes each work personal: my own prompt, used on another user's model, will never generate the same result.
What makes a good prompt? What makes a bad one?
To be good, a prompt must be extremely descriptive, precise, and leave nothing to chance. Launching an approximate prompt is the equivalent of rolling a die. In this new “creative era”, whether we like it or not, creativity lies in the prompt, and the prompt designer is actually a writer in disguise. The creative essence lies in our description of the image, because that is exactly what the machine will produce. If the prompt leaves the details to chance, the result could be a beautiful work (for which we could not take full credit) or something completely different from our idea. AI should be conceived as a tool that, unlike predecessors such as Photoshop, offers “everything” directly; it is up to us to extract our idea from that “everything” and choose what to show.
What’s the first thing you think about before writing a prompt? The image? The feeling? The world you’re building?
The first thing I think about is always the message I want to convey. Beyond brands and marketing, I find that campaigns are a wonderful way to communicate important messages. You will always find references to technology, control and the impact it is having on the world. And I find it very amusing and contradictory to be able to express all this thanks to a sophisticated computing machine.
How much does the wording matter? Can two people write the same idea and get completely different results?
The words we choose undoubtedly change the result, but if the machine has not been trained on our personal model, it is actually very easy for two people to arrive at the same result if the idea is trivial. That is why it is important not to leave anything to chance and to make the description of the image we want to generate as detailed as possible. Writing simple one- or two-line prompts is like playing a game: nice and fun, but not very productive.
“It is important not to leave anything to chance and to make the description of the image we want to generate as detailed as possible.”
Are there any words that always ruin a prompt?
I don't think there are any words that can ruin a prompt. One thing I have experienced is prompting in languages other than English. Apparently, despite being very intelligent, Midjourney does not render ideas well in other languages (for example, my own, Italian): it translates those words into much simpler images, even if our description is super detailed. The important thing, however, is to provide as many details as our minds can produce.
How many iterations does it usually take before something feels right?
To arrive at a visual and stylistic concept I would call final, I need at least 50 to 100 different generations. The “prompt refinement” phase is the longest: it is where we work out which word should be expressed better, which should be replaced and by what, and so on. This also helps people understand how complex this job really is, contrary to popular belief.
What do you wish designers stopped doing when using AI tools?
With such a powerful tool, it's not just easy, it's really easy to become anesthetized and let the machine do all the work without even being the prompt writers anymore. Designers who do this also lose, in my opinion, the right to complain that “machines are replacing us”, because they are the first who will be replaced, no longer giving anything human or personal to the final work.