A Vibe Coding Story: The FWB Creative Builders Cohort Week 1
I was really lucky to be invited to take part in a creative coding cohort (I have no idea how to code). I've decided to share the process, hopefully without turning off my music fans :)
I’ve been a bit quiet on socials lately—because honestly, there’s only so much time in the day. Right now, I’ve decided to double down on the FWB Builders Cohort (I’ll explain what this is in a moment) and use it as a push to test out some bigger-picture ideas and maybe reshape how I think about being an artist entirely.
One of those ideas is a big one: What is the future of my creative career?
Earlier this year, I explored some thoughts on this in a series of posts called Re-imagining Me, where I started imagining what Sound of Fractures could be if it wasn't just a solo music project, but something more like a creative studio—or a container for multidisciplinary experiments that blur music, tech, art, and emotional connection.
The core idea: I think musicians have a ton of creative power that goes way beyond just making audio files. And since it's getting harder and harder to live off those files alone, I'm trying to figure out what else can be built around our creative identities and what a future version of this kind of career might actually look like. I know I love creative direction, and embedding the concepts of my music into content (and, well, everything) is a passion of mine… and honestly I don’t want to be dancing to my own songs in my bedroom on TikTok when I’m 60. So, I’m trying to think about what my future looks like and how I can create a life where making music is still at its core, but not everything.
This cohort feels like an early step toward that. Or maybe the third step, depending on how you count it.
Step one was doing the Re-imagining Me work—trying to understand who I am as an artist and where I want this all to go.
Step two was pitching the idea in MVP form to CY Lee, a friend and patron, and asking for help to bring Jade Garcia on board. She’s a friend and has a background in both music and tech, and I knew she was the right person to help me push Sound of Fractures into new territory.
I have to add that I’ve been lucky to have another brilliant manager and friend, Casper, who’s helped build the music side of all of this from day one. But for this other side of what I do (more art, more tech, more experimentation) I wanted to take a different approach and explore some ideas that other creators might want to replicate or learn from, as Casper puts more time into developing his own start-up.
So I wanted to try and bring Jade in.
In fact, I was going to say I applied for the FWB cohort, but actually, Jade applied for us. That in itself was a shift, and the first example of how this new thing could work. This isn’t just about me anymore. It’s about building a structure where Sound of Fractures can evolve into something more autonomous, collaborative, and flexible.
What Is Vibe Coding?
Ok so firstly I should quickly introduce the term ‘vibe coding’, sometimes called ‘creative coding’. Vibe coding is a creative approach to using code for making music, visuals, or interactive art—focused more on feeling and expression than technical precision. It’s about experimenting, improvising, and using code like an instrument to shape atmosphere and emotion. It has recently emerged as AI coding tools have made it easier for non-coders and creatives to jump in and start building without needing deep technical knowledge. The FWB cohort builds on this movement by supporting artists to build their own creative technology ideas, such as apps, websites and games, themselves at low cost. The application required us to pitch an idea to get in…
Week 1: From Pitch to Iteration
In Week 1, we pitched our idea to the cohort (image below), which FWB describe as being for creative technologists—people who write code and design interfaces, architect systems and consider their cultural implications.
The original idea was simple, but ambitious: what if fans could use their own memories to shape the music? I wanted to create a tool that gave people a way to emotionally connect to my sound—not just passively listen, but actively co-create. The spark came from SCENES and all the memories people submitted. I kept thinking: what if a listener could type or record a personal memory, and from that, a unique version of a Sound of Fractures song is generated—one that feels like them? Not a remix, not AI noise, but something recognisably mine, layered with their story. That feeling of “I’m in it”—that’s what I was chasing. It started as a question, then an experiment, and now it’s becoming something real. The idea developed into the SCENES: Memory Atlas. The idea is that you could listen to other people’s memory + music creations and feel connected with them, or reassured that there were others around the world with shared experiences.
A week later we were invited to take part!! We attended the initial welcome call.
My first thought was:
Ah shit. They’re not going to build this for us… they’re here to support us while we build it.
That small shift in understanding changed everything. It reframed the whole project, and was frankly pretty scary.
I immediately switched into “what’s actually possible?” mode. I started spending days digging into generative music models, trying to figure out how hard it would be to train my own model based on my own data—something that could let people create personalised versions of Sound of Fractures music. A little like what Suno is doing, but rooted entirely in my sound and style.
Building the Thing (Or Trying To)
To get going, I used ChatGPT as a kind of assistant/mentor/sounding board.
Here are a few of the core prompts / questions I dropped into ChatGPT that I started with:
“I’m an electronic music artist called Sound of Fractures. How would I create a simple AI music generator in the vein of Suno, but trained only on my own catalogue so people can make music that sounds like me?”
“I want the user input to be audio. The idea is they leave a voice note of a memory and I want that audio to become the prompt for the music creation.”
“Would the simple way be to create emotion tags—happy, contemplative, sad, etc.—then have AI interpret the text and assign a 1-to-10 score (like Cyanite.ai), and use those tags to pull matching audio stems from my dataset?”
“The issue is that the user input is text and Cyanite’s input is audio, that’s correct, right?”
These questions formed the backbone of the planning session: defining the goal, choosing text input as the creative surface, exploring emotion-tag retrieval vs. generative models, clarifying tooling constraints, and finally packaging the journey for this blog.
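To make the emotion-tag idea concrete, here's a rough sketch of how the text-scoring step could work in Python, using an off-the-shelf zero-shot classifier. The tag names and the 1-to-10 rescaling are placeholders I'm playing with, not the final build:

```python
# A rough sketch of the emotion-tag idea: take a memory as text and score it
# against a handful of tags on a 1-10 scale. The tags and model choice are
# placeholder assumptions, not the finished system.
from transformers import pipeline

EMOTION_TAGS = ["happy", "contemplative", "sad", "nostalgic", "hopeful"]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def score_memory(memory_text: str) -> dict:
    """Score a memory against each emotion tag, roughly 1 (weak) to 10 (strong)."""
    result = classifier(memory_text, candidate_labels=EMOTION_TAGS)
    # The classifier returns probabilities; rescale them to 1-10 so they read
    # like the Cyanite-style tag scores mentioned above.
    return {label: round(1 + score * 9)
            for label, score in zip(result["labels"], result["scores"])}

print(score_memory("The summer we drove to the coast and slept on the beach."))
```

Those scores would then be the bridge between a listener's words and the tagged stems in my catalogue.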
Together, ChatGPT and I next sketched out the bones of what an MVP might look like, broken into its separate components. Here's what we mapped (there's a rough sketch of how the pieces might fit together after the list):
Front-End Concept: A simple interface where a user records a memory, and receives a custom 30-second Sound of Fractures-style audio clip.
Plan A: A memory-to-music generator trained on my own catalogue, keeping the sonic identity intact. Music is influenced by the memory.
Dataset: Slice each track into short clips, auto-caption them, and train a generative model like MusicGen or RAVE.
Memory Recording: Take an audio memory from a listener and use a lightweight LLM to link it to musical parameters that influence the musical output (e.g. sad, happy, etc.).
Plan B (if AI music generation fails): Look at other examples of algorithmic music generation like Bronze.ai.
Playback: The user gets to hear their audio memory layered with unique music.
Memory Map: All music memories show on a map so users can listen to each other's creations / memories.
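And here's the rough sketch promised above of how those pieces might bolt together. It's a hedged, simplified version of Plan A: the transcription model, the folder layout of tagged stems, and the score_memory helper from the earlier sketch are all assumptions, not what we actually ended up building:

```python
# Sketch of the Plan A flow: voice-note memory -> emotion scores -> matching
# stems -> a 30-second layered clip. Paths, folder layout and helpers are
# assumptions for illustration only.
import random
from pathlib import Path

import whisper                  # openai-whisper, to turn the voice note into text
from pydub import AudioSegment  # simple audio slicing / layering

STEM_LIBRARY = Path("stems")    # e.g. stems/happy/*.wav, stems/sad/*.wav

def build_memory_clip(voice_note_path: str, output_path: str = "memory_clip.mp3"):
    # 1. Transcribe the voice note so it can be emotion-scored as text.
    memory_text = whisper.load_model("base").transcribe(voice_note_path)["text"]

    # 2. Score the text (score_memory comes from the earlier sketch) and keep
    #    the two strongest emotions.
    scores = score_memory(memory_text)
    top_tags = sorted(scores, key=scores.get, reverse=True)[:2]

    # 3. Pull one stem per matching tag and layer them over a 30-second bed.
    mix = AudioSegment.silent(duration=30_000)
    for tag in top_tags:
        stem_files = list((STEM_LIBRARY / tag).glob("*.wav"))
        if stem_files:
            stem = AudioSegment.from_file(random.choice(stem_files))[:30_000]
            mix = mix.overlay(stem)

    # 4. Layer the listener's own voice note (6 dB quieter) on top and export.
    voice = AudioSegment.from_file(voice_note_path)[:30_000] - 6
    mix.overlay(voice).export(output_path, format="mp3")
    return output_path
```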
ChatGPT was helping me ask better questions. It helped me get my head around how generative music works and what the options were. It helped me think through emotional resonance, stem matching, and how to bring fans into the creation process in a meaningful way.
But yeah—I got stuck.
One of the options it shared was a generative model called RAVE, which is a genuine generative AI music model, and I blindly dived in, using GPT to explain the steps based on some tutorial links I found.
I actually managed to train a RAVE model using my stems (which felt like a win), but I quickly found myself in what I now call a “vibe coding loop”. I could get to a certain point but would then get stuck, and the only answers GPT could give me were to keep reinstalling elements, so I soon realised I was going in circles. (I should add here that I have played with Cursor.ai and built a Chrome extension, so I had worked through some simple vibe coding before and got over the initial humps.)
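For anyone curious what the data side of that experiment roughly involves, here's a minimal sketch of the dataset step from Plan A: slicing stems into short, fixed-length clips before handing them to a model. The folder names and clip length are my own assumptions, and RAVE has its own preprocessing tooling on top of something like this:

```python
# Minimal sketch of the dataset prep step: slice every stem into fixed-length
# clips so a generative model has uniform training examples. Folder names and
# clip length are placeholder assumptions.
from pathlib import Path
from pydub import AudioSegment

CLIP_MS = 10_000  # 10-second clips

def slice_stems(stem_dir: str = "stems", out_dir: str = "dataset"):
    Path(out_dir).mkdir(exist_ok=True)
    for stem_path in Path(stem_dir).rglob("*.wav"):
        audio = AudioSegment.from_file(stem_path)
        for i, start in enumerate(range(0, len(audio), CLIP_MS)):
            clip = audio[start:start + CLIP_MS]
            if len(clip) == CLIP_MS:  # drop the short tail at the end of a stem
                clip.export(Path(out_dir) / f"{stem_path.stem}_{i}.wav", format="wav")

slice_stems()
```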
That’s when I realised I might need more help. The main aim of this first step was to answer: how hard will this actually be to do? The answer: too hard! I started thinking seriously about collaborating with someone like dav, a musician and artist I really love who has done some cool generative work, and potentially pulling in a developer who could help bring this idea to life. For all my use of GPT, I mainly realised I needed to speak to a human with experience in this area, and dav was kind enough to give me and Jade some time to ask questions. That has now led to him helping us make the project a reality using one of his code-based / algorithmic music models from a project titled Cycles (it’s amazing, check it out). We can repurpose his concept to draw from stems of my music to create interesting and unique audio versions of the Sound of Fractures world.
Learning Vibe Coding
Before the cohort workshops started, I was already experimenting with prompts and prototypes using GPT and Cursor.ai—but it was the sessions with Ohara.ai (designed to enable anyone to have a go at making apps) that pushed me to see what more I could do. The workshops on vibe coding and prompt design helped me stop treating prompts as questions and start treating them as creative tools.
I started thinking less like a coder and more like a world builder. I explored how to use language to capture energy, intention, audience, and emotional design, which the workshops explained are important when using AI tools: when writing your prompts, you need to include the problem you are trying to solve, the intended purpose, and the feeling it should have for the user. Ohara started turning the ideas into early functional prototypes (above). What I still find hard about these app-building tools is that they are very much focussed on function, while as an artist I’m thinking both visually and sonically about how this builds out my world. So although I was excited and happy I could create what I did, I quickly learnt that to put together all the components of the idea (music generator, audio recorder, visuals, map) we would need help from someone with proper coding experience.
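To show what that looks like in practice, here's the kind of prompt I've been shaping (my own rough template, not Ohara's exact format):

“I’m building a web app called SCENES: Memory Atlas. The problem: listeners can hear my music but can’t put themselves inside it. The purpose: let someone record a short voice-note memory and hear a 30-second Sound of Fractures-style clip built around it, then place it on a shared map. The feeling: intimate, reflective, a little nostalgic. Start with a single page that records audio and plays back a placeholder clip.”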
Explaining Myself
Visualising ideas is really important to me, so, knowing we would need help from others, I started working on a mood board and using AI to sketch out what the site might look like. I took this same approach with SCENES early on and built a mockup on a free Squarespace account; it really helped me communicate the idea and work out: what is the simplest way to do this and have it still look cool?
This was a prelude to what comes next, which is working as part of a team. Hopefully by next week we will have had some positive talks with dav and a developer that I can report back on :)
This is a really exciting step, because this is a future I wanted to experiment with: a group of people doing something they love, with the skills they enjoy, together. And instead of a company taking the credit, we can work collectively as a project team on something we can all share and talk about in our circles.
Up Next
In the next post, I’ll go deeper into the tools I’ve been testing, the visuals I’ve drafted, and how I’m continuing to explore this hybrid practice—somewhere between music, technology, storytelling, and emotion.
If you're building something weird, ambitious, or collaborative like this, hit me up. I think a lot of us are thinking about these same questions right now—and if we can share the load, we might just build something better.
⚡️