Lipless’s new collaboration with Kaskade, “State Of Mind,” hits a sweet spot that a lot of producers chase but few actually lock in – a mix that feels huge without crushing the emotion out of it. Built around Poppy Baskcomb’s vocal and a wave of layered synths, the track moves with patience, giving the energy time to build instead of flattening it at the mastering stage.
Behind the scenes, AI tools played a real role in shaping the final version – but not in the way most producers think. For GRAMMY-nominated Lipless, AI serves as a second set of ears, catching mix problems faster without compromising the character and tension of the original mix.
Tools like the Stabilizer in iZotope Ozone helped tighten the EQ balance on “State Of Mind.” At the same time, careful mid/side EQ, multiple limiters, and analog hardware touches kept the master dynamic, wide, and emotionally sharp.
In this interview, Lipless shares how he uses reference tracks today, how AI mastering fits into his workflow without compromising the vibe, and why mastering still comes down to making calls by ear, rather than just hitting the loudness target.
Jump to these sections:
- How Lipless uses reference tracks differently today vs. early in his career
- Why AI mastering tools are useful but not a final solution
- Techniques for protecting dynamic range on emotional dance tracks
- Using mid/side EQ and multiple limiters for cleaner, louder masters
- Practical tips for making AI tools work without flattening your mix
Stick around for real-world mastering advice from someone balancing big-room energy with real emotional movement – and not letting one kill the other.
Let’s start simple – how important are reference tracks in your workflow today compared to when you first started?
The way I use reference tracks today is much different from how I used them when I first started producing.
When I first started, I wanted so badly to sound like ‘this artist’ or ‘that artist’ and would try to copy or recreate their sound, which was a great way to learn.

However, as I matured and improved at producing, I found myself becoming more reliant on my own creativity. What I have seen, though, is that over the years, I started using reference tracks more and more in the mixing and mastering phase. Using reference tracks and breaking them down sonically helped me to understand the mechanics of sound and how certain elements work together.
I still use reference songs almost every time I mix and master, especially when trying to get a specific EQ shape, width, or loudness.
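For readers who want to put a number on the loudness part of that comparison, here's a minimal Python sketch using the open-source pyloudnorm library to measure how far a master sits from a reference in integrated LUFS. The file paths are placeholders, and this is just one way to run the check – not a tool Lipless specifically mentions:

```python
# Compare the integrated loudness (LUFS) of a work-in-progress master
# against a reference track. Minimal sketch; paths are placeholders.
import soundfile as sf
import pyloudnorm as pyln

def integrated_lufs(path: str) -> float:
    """Measure integrated loudness of an audio file in LUFS."""
    data, rate = sf.read(path)   # data: (samples, channels) float array
    meter = pyln.Meter(rate)     # ITU-R BS.1770 loudness meter
    return meter.integrated_loudness(data)

mix_lufs = integrated_lufs("my_master.wav")
ref_lufs = integrated_lufs("reference_track.wav")
print(f"Mix: {mix_lufs:.1f} LUFS | Reference: {ref_lufs:.1f} LUFS")
print(f"Gap to reference: {ref_lufs - mix_lufs:+.1f} dB")
```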
Mastering tip from Lipless: If Ozone’s Master Assistant has to make drastic adjustments to your mastering chain, it might be an indicator that you need to go back and address some things in the mix.
How early in the process do you bring in references – writing, mixing, mastering?
It depends on what I am working on. If I am working on my own music, I typically don’t bring in references until the mixing and mastering phase.
When I reach the mixing and mastering stage, I will reference some of my own earlier releases that I like the dynamics and sound of, that have also stood the test of time. It’s a good way to keep consistency in my sound.
When I am working on music for other artists, in the early production and writing stage I’ll often find a reference track that reflects their sound or the sound they are trying to achieve. That often gets us going in the right direction quickly and usually leads to a result that they are happy with.
And then once again, I repeat the reference process for the mixing and mastering phase.
Mastering tip from Lipless: AI-assistive technology is not your final stop when mastering; it just pushes you in the right direction. There are other manual processing steps and artistic choices to make after you use something like Master Assistant.
What’s been your experience using AI-powered mastering tools to match EQ, width, or loudness to your references?

It’s been a game changer for me. Using AI to mimic the EQ curve or width of my reference targets just speeds the process up by a huge margin. One tool that I find extremely useful is the Ozone Stabilizer. It continuously adjusts frequencies across the entire spectrum, boosting and cutting in real time to balance the overall EQ curve and achieve a clear and comfortable sound.
They have it down to a science; it’s incredible. It’s also a great source of information on how good your mix is tonally, as you can see what is being adjusted by the technology.

It’s a great source of instant feedback on whether you need to make mix adjustments. For example, if you have too much going on in the low end, or are lacking some brightness because of dull drums or synths, you can see it on the analyzer. Stabilizer shows you what it’s having to do to balance the mix or tame certain areas of the frequency spectrum.

I don’t like to rely solely on assistive tools, but they are extremely helpful for identifying problem areas quickly, and I can then go back to the mix and try to get things to where they should be before they hit the mastering chain.
The better your mix sounds before it hits the mastering chain, the better your overall results will be and the more AI can enhance them.
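Stabilizer’s internals aren’t public, but you can get a rough version of the tonal feedback Lipless describes by comparing the long-term average spectrum of your mix against a reference. Here’s a sketch using scipy – the file paths are placeholders and the band splits are arbitrary:

```python
# Rough tonal-balance check: compare the long-term average spectrum of a
# mix against a reference. This is NOT how Ozone's Stabilizer works
# internally, just a quick way to see where the two tracks differ.
import numpy as np
import soundfile as sf
from scipy.signal import welch

def avg_spectrum_db(path: str, nperseg: int = 8192):
    data, rate = sf.read(path)
    if data.ndim > 1:                 # fold stereo to mono for the estimate
        data = data.mean(axis=1)
    freqs, psd = welch(data, fs=rate, nperseg=nperseg)
    return freqs, 10 * np.log10(psd + 1e-12)

freqs, mix_db = avg_spectrum_db("my_mix.wav")
_, ref_db = avg_spectrum_db("reference_track.wav")
diff = mix_db - ref_db                # positive = mix is hotter here

# Summarize a few broad bands instead of staring at thousands of bins
for lo, hi, name in [(20, 120, "lows"), (120, 2000, "mids"),
                     (2000, 8000, "presence"), (8000, 20000, "air")]:
    band = (freqs >= lo) & (freqs < hi)
    print(f"{name:>8}: {diff[band].mean():+5.1f} dB vs reference")
```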
Mastering tip from Lipless: Try to implement mid/side EQ more in your mixes. It’s a great way to achieve width and loudness, and also a good way to get clear, clean mixes. It’s also a great way to carve out space in your mix to get clear vocals.
With something as lush and emotional as “State of Mind,” how do you make sure the mastering process doesn’t flatten the dynamic range or vibe?
That’s a great question. It can be tough to keep dynamic range when there are lots of elements going on and lots of headroom used up in the mix.
The key to that is a great mixdown.
I try to give every main element in the song its own pocket of headroom, so EQ is key here: boosting, cutting, etc. It’s also important to decide which elements of the song you want to stand out and be the focus, and how you can mix other sounds with similar frequencies around those. In the case of “State Of Mind,” we kept the main arpeggiated synth out in front of the mix and got everything to fit around it, except for when the vocal was in.
Then I actually automated a bell EQ on the lead arpeggio that would create a dip or hole in the frequencies where the vocal was most prominent. This really opens up some space and lets the vocal breathe without competing.
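As a rough illustration of that automated dip, here’s a minimal Python sketch: a bell cut on the lead whose depth follows the vocal’s level. The 2.5 kHz center frequency and -4 dB depth are illustrative, not the actual settings used on the record, and `lead` and `vocal` are assumed to be mono numpy arrays at the same sample rate:

```python
# Sketch of an automated bell dip: cut a bell in the lead synth only
# while the vocal is present. Values are illustrative placeholders.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """RBJ Audio EQ Cookbook peaking (bell) filter."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

def vocal_envelope(vocal, fs, release_ms=150):
    """One-pole follower on the rectified vocal, normalized to 0..1."""
    coef = np.exp(-1.0 / (fs * release_ms / 1000))
    env = lfilter([1 - coef], [1, -coef], np.abs(vocal))
    return env / (env.max() + 1e-12)

def duck_lead(lead, vocal, fs, f0=2500, depth_db=-4.0):
    """Crossfade between the dry lead and a bell-cut version, driven by
    the vocal's envelope. Approximates automating the bell depth."""
    dipped = peaking_eq(lead, fs, f0, depth_db)
    amt = vocal_envelope(vocal, fs)   # 0 = no vocal, 1 = loud vocal
    return (1 - amt) * lead + amt * dipped
```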

Also, one thing that is usually overlooked by producers, and which I used heavily on “State Of Mind,” is mid/side EQ. This is where you can really get things sounding wide and full, but without overpowering the focus elements and without introducing phase issues. It’s also a great way to increase perceived loudness.
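For anyone newer to the technique, mid/side is just a sum/difference transform on the stereo channels: process the mid and side independently, then decode back to left/right. A minimal sketch – the side-channel gain and high-pass frequency are illustrative, not settings from the record:

```python
# Mid/side processing in a nutshell: encode, treat the two channels
# independently, decode. Values here are illustrative only.
import numpy as np
from scipy.signal import butter, lfilter

def ms_encode(left, right):
    mid = (left + right) / 2.0    # what both speakers share
    side = (left - right) / 2.0   # what makes the image wide
    return mid, side

def ms_decode(mid, side):
    return mid + side, mid - side  # back to L/R

def widen(left, right, fs, side_gain_db=1.5, side_hp_hz=200):
    mid, side = ms_encode(left, right)
    # High-pass the side channel: keeps the low end mono (tighter and
    # more mono-compatible) while widening only the upper frequencies.
    b, a = butter(2, side_hp_hz / (fs / 2), btype="highpass")
    side = lfilter(b, a, side)
    side *= 10 ** (side_gain_db / 20)
    return ms_decode(mid, side)
```

The same transform applied to a single stem – a pad or a lead – is often the better move than widening the whole mix, a point Lipless returns to later in the interview.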
These days everyone wants their music very loud in order to compete with other music of the same genre.
Dance music is extremely guilty of this.
But unfortunately, quite often by pushing for loudness, you sacrifice some of that dynamic range, especially while mastering a song that was not mixed well. Mastering engineers can only do so much with a poor or average mix.
Mastering tip from Lipless: It can be helpful to use multiple limiters on your master. I typically run two or sometimes even three limiters at the end of my mastering chain so that not one single limiter is doing all the work. I find this helps with loudness while better retaining the dynamics and preventing distortion.
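To make the staged idea concrete, here’s a toy version: two simple limiters in series, each shaving only part of the peaks. Real mastering limiters add lookahead and far smarter gain smoothing, and the thresholds here are illustrative; `x` is assumed to be a mono numpy float array:

```python
# Toy two-stage limiting: no single limiter does all the work.
import numpy as np

def simple_limiter(x, threshold_db=-1.0, release_ms=50, fs=44100):
    """Toy peak limiter: instant attack, one-pole release on the gain.
    Real mastering limiters add lookahead and smarter smoothing."""
    thresh = 10 ** (threshold_db / 20)
    rel = np.exp(-1.0 / (fs * release_ms / 1000))
    gain, out = 1.0, np.empty_like(x)
    for n, s in enumerate(x):
        target = min(1.0, thresh / max(abs(s), 1e-12))
        # Clamp the gain down instantly, recover it slowly
        gain = target if target < gain else rel * gain + (1 - rel) * target
        out[n] = s * gain
    return out

def staged_limit(x, fs=44100):
    # First stage shaves the biggest peaks at -2 dB; the second only
    # catches what's left at the final -0.3 dB ceiling.
    stage1 = simple_limiter(x, threshold_db=-2.0, fs=fs)
    return simple_limiter(stage1, threshold_db=-0.3, fs=fs)
```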
Has using AI in the mastering stage sped up your workflow – or do you still spend time tweaking after it’s done?
Absolutely, it has sped up my workflow.
With just the click of a button and a 5-10 second wait, Ozone has already addressed some of the key things that are usually taken care of in the mastering phase (EQ, width, multiband compression, etc.).

Those things can sometimes take hours to dial in or get right without the aid of assistive technology. However, although AI tools can get me close to where I want my masters to be, I still do a lot of tweaking. This is where the engineer separates from AI.

I’ve found that although AI is capable of perfectly fine results that would be suitable in most instances, it doesn’t yet quite have the know-how or ability to get that final finesse or character. After Ozone is done analyzing a song, it will run a standard chain to try to match your reference target – generally EQ, Stabilizer, Impact, Imager, Dynamic EQ, and Maximizer.
It does a great job with these, but I always find myself making some adjustments and then adding the extra sauce or character that comes from other tools, like certain saturators, analog-style EQs, or compressors.
This is where you can achieve some of that extra warmth and character. I often run my mixes through some of my analog EQs or compressors before they reach any digital mastering tools, to get some of that warmth and character that you just can’t achieve using AI.
Mastering tip from Lipless: Check your mixes in mono, this can help identify phase issues and also helps dial in levels properly. A mix that sounds better balanced in mono will typically translate better across different speakers, headphones, etc.
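A quick way to run that mono check outside your DAW is to fold the file to mono and measure the correlation between the two channels. A minimal sketch – the path is a placeholder and a stereo file is assumed:

```python
# Quick mono/phase sanity check: fold the mix to mono and measure the
# correlation between channels. Near +1 is mono-safe; near 0 is very
# wide; negative values warn of cancellation when summed to mono.
import numpy as np
import soundfile as sf

data, rate = sf.read("my_master.wav")   # placeholder path
left, right = data[:, 0], data[:, 1]

mono = (left + right) / 2.0
corr = np.corrcoef(left, right)[0, 1]

print(f"Stereo correlation: {corr:+.2f}")
if corr < 0:
    print("Warning: channels are partly out of phase; expect mono cancellation.")
print(f"Mono peak: {20 * np.log10(np.max(np.abs(mono)) + 1e-12):.1f} dBFS")
```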
What’s your advice to a producer trying AI mastering for the first time, who’s afraid of losing the character of their mix?
For most producers starting to use AI tools like Ozone’s Master Assistant to master their music, my advice is to try not to rely too heavily on them. Use them as a means to enhance your mixes, and use them tastefully. Also, pay close attention and learn precisely what AI is doing to your sound.
Look at what it’s adjusting or shaping, then maybe go back into your mix and do some of those things (EQ, compression, adding width, etc.) to individual or grouped sounds that call for it. Sometimes AI will decide that your mix is too narrow and widen everything, when it’s possible that you only needed more width on that one pad or lead, for example.
At the end of the day, I see AI as my second set of ears that will catch things in the final stages of mixing and mastering that I may have overlooked earlier in the process. And finally, it’s essential to remember that it’s not a tool to fix a bad mix, but rather a tool that can enhance or inspire a good mix and aid your workflow in the process.
Mastering tip from Lipless: Check your mixes and masters on devices that most consumers use for everyday listening. I always check my final master on multiple systems, but I always check on AirPods last and make sure it translates well on them, since this is how most people will listen to it.
Start using AI assistive tools for mastering
Big thanks to Lipless for breaking down a workflow that’s fast without cutting corners. What stood out most is how clearly he treats AI as a tool, not a crutch – something that speeds up the mix-checking and polishing, but never replaces the ear-driven decisions that actually make a track land.
“State Of Mind” hits the way it does because of that balance. The vocal floats, the synths stay wide without getting messy, and the mix never loses the tension that drives the song forward. It’s a reminder that the goal isn’t to automate your instincts – it’s to protect them, even when you’re moving fast.
If you’re leaning on assistive technology in your mastering process, the takeaway is simple: use it to catch what you might miss, not to finish the track for you. The best tools just clear the path – what you do with it still has to come from you.