AI Mastering

“Should I use LANDR or not?” Replace LANDR with any other machine-based mastering service and you’ll get one of the most commonly asked questions among music makers in 2020, especially home studio owners.

We’ve been talking a lot about mixing lately; now it’s time to talk about mastering. In this article we’ll not only discuss AI mastering: we will unveil mastering for what it really is, ask whether you should master your own music or not AND whether artificial intelligence is a legitimate tool when it comes to mastering.

What is mastering?

There are many things that we need to discuss before we can really understand what AI can do for us in the context of mastering, so this is going to take a while.

Let’s start by saying what mastering isn’t: it is not making something louder, brighter, crispier, wider or more compressed. If you knew that already, good for you; if this is the first time you hear it you might be a little disappointed, but here’s the ugly truth: mastering is quality control.

There will be times when your mastered track will sound brighter than the original mix but there will also be times when it will sound darker, so let’s see why, when and how we’re going to make such decisions.

A mandatory QC process

If you have read The Mixing System, you are aware of the importance of reference tracks. These are perhaps even more important when mastering, because they will provide information about transients, EQ, stereo width and so on. The single most important concept that we need to internalize is that we are now checking a track, in a reliable listening environment, before it’s bounced into the official “master copy”. If the mix is perfect as it is, we just need to say so – export the master file – and we’ve mastered it! Or maybe it just needs +0.5 dB at 13 kHz and we do just that. Or we might need to do a ton of work before it’s ready, or anywhere in between. Reference tracks will guide our decision-making process, and if AI can help, it’s going to be exactly at this stage. Whether we are mastering our own music or someone else’s, we must check it against our reference material, understand its current state, then decide where we want to take it.

Mastering in 5 steps

In order to understand our options, and to put AI mastering into perspective, we need to learn a bit more about the process. Even though this guide won’t discuss each point in great detail, there are five steps:

  1. taming resonances
  2. EQ
  3. compression
  4. saturation and stereo width
  5. limiting

Taming resonances

We play back the mix and open the spectrum analyzer; if something pokes out too much throughout the song, we can use an EQ to make things right.
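As a rough sketch of what we are looking for on the analyzer, the snippet below (all function names and thresholds are illustrative assumptions, not taken from any real mastering tool) flags frequencies that poke out well above the broad shape of the spectrum:

```python
# Sketch: detect resonant peaks in a mono float signal.
# Illustrative only; a real analyzer also smooths over time.
import numpy as np

def find_resonances(signal, sample_rate, threshold_db=6.0):
    """Return frequencies whose level pokes out more than
    threshold_db above a smoothed version of the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    db = 20 * np.log10(spectrum + 1e-12)
    # Moving average gives the broad spectral shape.
    kernel = np.ones(101) / 101
    smooth = np.convolve(db, kernel, mode="same")
    # Note: on noisy material this simple version also flags
    # random noise bins that happen to exceed the threshold.
    return freqs[db - smooth > threshold_db]

# Usage: a steady 1 kHz tone riding on low-level noise should be flagged.
sr = 44100
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 1000 * t)
mix += 0.01 * np.random.default_rng(0).standard_normal(sr)
peaks = find_resonances(mix, sr)
```

If a flagged frequency rings throughout the whole song, a narrow EQ cut there is a reasonable first move.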

EQ

We need to establish whether the shape of the song matches that of the sonic result we have in mind (and our references). We can do that by ear, use a spectrum analyzer as well, and then decide, for example, that we need more highs, fewer highs and so on. This is why it’s so important that we don’t take anything for granted. Yes, many mixes sound a bit dark and will need brightening, but not all of them; some in fact are already too bright, and if we just apply a “nice mastering” stock EQ preset we might bring out some nasty highs, especially around 9 kHz where the hi-hat tends to be heard more. We need to take a deep breath, listen and ascertain the actual situation. For people making music at home, AI could potentially be a valid tool at this stage, and we’re going to see why in a bit.
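The “shape” comparison can be made concrete. This hypothetical snippet splits a mix and a reference into three broad bands (the band edges are arbitrary choices for illustration, not a standard) and reports where the mix carries more or less energy:

```python
# Sketch: compare the broad tonal shape of a mix against a reference,
# assuming both are mono float arrays at the same sample rate.
import numpy as np

def band_levels_db(signal, sample_rate, edges=(20, 250, 4000, 20000)):
    """Average level in dB for low / mid / high bands."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = power[(freqs >= lo) & (freqs < hi)]
        levels.append(10 * np.log10(band.mean() + 1e-12))
    return np.array(levels)

def tonal_difference(mix, reference, sample_rate):
    """Positive values: the mix has more energy than the reference."""
    return band_levels_db(mix, sample_rate) - band_levels_db(reference, sample_rate)
```

A strongly positive high-band value suggests the mix is brighter than the reference; whether to darken it is still a musical decision, not an automatic one.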

Compression

Our track might be very dull, and we might need a slow-attack/fast-release compressor to bring back some transient information. Or it might have too many transients and need a fast-attack/slow-release compressor to sound more coherent. Some tracks will need a bit of both: a fast compressor to make the track sound consistent and a second, slow compressor to bring back some grit. What’s important is that we compare our material to the references and learn how to hear these differences.
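To see what attack and release actually do, here is a minimal feed-forward compressor sketch, assuming mono float samples. The parameter names are illustrative; real mastering compressors add lookahead, knee shaping and program-dependent behaviour:

```python
# Sketch: a basic compressor gain computer with separate attack
# and release time constants. Illustrative only.
import numpy as np

def compress(signal, sample_rate, threshold_db=-12.0, ratio=4.0,
             attack_ms=5.0, release_ms=200.0):
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        level = abs(x)
        # The envelope rises at the attack rate, falls at the release rate.
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20 * np.log10(env + 1e-12)
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)  # reduce only the overshoot
        out[i] = x * 10 ** (gain_db / 20.0)
    return out
```

A slow attack lets the first few milliseconds of each hit through uncompressed, which is exactly why it restores “punch” on a dull track.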

Saturation and Stereo Width

Trying different types of saturation and exciters, seeing how they perform and what brings our track closer to the references, might be tricky: this step really is about trying different options and developing a personal taste. It is important to check the stereo width whilst bearing in mind that not every track needs widening! Sometimes you will have to narrow the stereo image, and the references will guide you. Most importantly, the low end needs to be checked thoroughly: music sounds more balanced and powerful when the low end is mono, so depending on how the track was mixed we might have to collapse the stereo image of the low end with a multiband stereo imager, or shave off some lows from the sides with a mid-side EQ.
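Collapsing the low end to mono can be sketched as a crossover split where only the mid (left+right sum) signal is kept below the crossover. This assumes a stereo pair of float arrays and uses SciPy; the 120 Hz crossover frequency is an arbitrary illustrative choice:

```python
# Sketch: mono the low end of a stereo track below a crossover.
# Illustrative only; real imagers use phase-matched crossovers.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def mono_low_end(left, right, sample_rate, crossover_hz=120.0):
    sos_lo = butter(4, crossover_hz, btype="low", fs=sample_rate, output="sos")
    sos_hi = butter(4, crossover_hz, btype="high", fs=sample_rate, output="sos")
    # Below the crossover, keep only the mid signal (identical in both channels).
    low_mono = sosfiltfilt(sos_lo, 0.5 * (left + right))
    return (low_mono + sosfiltfilt(sos_hi, left),
            low_mono + sosfiltfilt(sos_hi, right))
```

Out-of-phase bass, which can cancel badly on club systems and mono playback, simply disappears from the sides after this treatment.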

Limiting

Now it’s time to destroy our beautiful work by smashing everything into the most aggressive maximiser we can think of... no, sorry, we won’t let that happen, will we?

Yes, we will make our track sound more or less as loud as other commercial releases within the same genre, but it doesn’t really matter if it’s 1-2 dB quieter; remember that all streaming platforms will turn it down anyway. The reason we try to shoot for a reasonable target level within the given genre is that it’s going to help us shape the tone of the track. At this stage we might find out that our EQ needs further tweaking, and perhaps we will also use a multiband compressor or a dynamic EQ: if a given frequency only pokes out here and there, there’s no reason to apply a drastic EQ; multiband dynamic processors, instead, can bring it down only when it’s too loud, thus improving the clarity of our master and the limiter’s response without sacrificing the frequency in question too much. Also remember that substantial gain reduction sounds less obvious when spread over several processors, so don’t be afraid of using 2-3 different limiters in a row, with each limiter only doing a little bit of work.
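The idea of spreading gain reduction over several gentle stages can be shown with a toy example. Here tanh soft clipping stands in for a real lookahead limiter, purely to illustrate the staged approach; the ceilings are arbitrary:

```python
# Sketch: several gentle limiting stages instead of one aggressive one.
# Illustrative only; not a real lookahead limiter.
import numpy as np

def soft_limit(signal, ceiling):
    """Gentle saturation that keeps the output under the ceiling."""
    return ceiling * np.tanh(signal / ceiling)

def staged_limit(signal, ceilings=(1.3, 1.1, 1.0)):
    out = signal
    for c in ceilings:  # each stage only works a little
        out = soft_limit(out, c)
    return out
```

Small signals pass through almost untouched, while overshoots are caught a bit at every stage, which is why the cumulative gain reduction sounds less obvious than one hard stage doing all the work.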

If we are mastering a single release, we will base our decisions on the references as well as on previous releases from the artist for whom we are mastering the track. If we are mastering an album, we are obviously going to make sure that all the tracks sit nicely on the same record, with similar volume and tonal shape but also with enough variety to keep the listener interested: it is okay if some tracks are slightly brighter or darker than others, as long as it sounds intentional and expresses a musical vision.

What options do we have?

Depending on our budget and time we might decide to:

  • master our own mixes
  • get them mastered by a specialist
  • use AI mastering
  • combine AI and our own resources

Mastering our own mixes

Not the best solution, for a number of reasons. We are checking our own work and we will carry biases from the mixing stage; it doesn’t really matter if we rest our ears for a day, take a walk, do a space trip or whatever: we will come back to the same track in the same listening environment with the same ears and, most importantly, the same head. If our mix is too bright and it sounds good on our speakers, we won’t notice at this stage (unless we reference other tracks and trust the spectrum analyzer over our ears), and what’s worse, if we apply a preset EQ with boosted highs just because “the mastered track should sound brighter” we are going to ruin it completely. It takes a lot of time, dedication and effort to learn how to stay away from these mistakes, and it really is about learning to listen with a new mindset. It is definitely possible, and it’s the reality for many working composers and music producers, but it’s not ideal.

On the other hand, it’s a fantastic opportunity to learn more about our mixes. For example, when we run our mix through a limiter for the first time we will notice things that we weren’t able to hear before; when we compare the overall spectrum of the track to one of our references we might see a big hole in a given region or a big hump somewhere else. Why do these things happen? They happen as a result of our mixing decisions, and whilst some of these characteristics might sometimes be intentional, more often they reveal the imperfections of our listening environment and, in a few cases, poor mixing technique. Taking the time to master our own music as part of the learning process, making different versions (quieter, louder, darker, brighter etc.), is a great experiment and it’s going to teach us a lot.

Getting our music mastered

This is by far the best option. If the budget allows it, for less than £100 per track you can get a really good master done by a specialist in a room that was treated for the task. It may well be the best money you ever spend, but make sure you look around, do some research and find the right person. There is also a lot you can learn from the mastered track when you compare it to the original mix. If the music is good – the composition, the arrangement, the mixing and so on – a professional master will surely add the final 10-15% and make the music stand out. It’s also very liberating, because it is in fact what it’s meant to be: quality control done by someone else, whilst you, the mixer, can sit back and rest assured that undetected issues from the mixing stage are going to be addressed by the mastering engineer and the music will translate well on all devices.

AI mastering

Not good on its own. There, we’ve said it. You can use AIs, but at least try many, pick one that lets you make a reasonable number of tweaks, and be skeptical, because it doesn’t always sound that good: it can work, but in some cases it does a poor job. What’s worse about AI mastering is that it is going to prevent you from learning many things about your own mixes – things that you would learn if you were banging your head against the wall trying to make your music sound good, or if you had a mastering engineer who could give you meaningful feedback about your work. Sorry, Skynet, here’s the ugly truth: you’re not quite there yet.

Combining AI and DIY mastering

Now that’s something really interesting, something you should definitely explore once you’ve done your fair share of experimenting with different plug-ins, mastering the same tracks from scratch a few different times, and seeing how far you can get with stock plug-ins alone. The reason this is interesting is that it gives you the best of both worlds. When combining the two methods you are in control: you know you can’t trust your own ears, because you’ve mixed the track, but at the same time you can’t trust the AI blindly, otherwise it will take over the world... no wait, it might stray away from your artistic vision of the music (even if it lets you upload references).

The disadvantage of some online AI mastering services is that you upload a track outside your DAW and can’t see exactly what happens to it. But if you can afford something like Ozone 9 (note: this post is not sponsored by anyone) and use the built-in mastering assistant, you will be able to see what’s going on, tweak the settings, accept them, disregard them, or use them as a starting point. It really helps, especially for the first steps: resonances and overall EQ. As we said, once you’ve mixed a track your ears will be subject to bias and won’t necessarily identify the problematic areas straight away. Of course spectrum analyzers are very helpful and should be used at this stage, but listening to a different EQ curve, generated by the AI mastering assistant, can be quite refreshing and point you in the right direction.

This also helps you overcome, to some degree, the limitations imposed by your listening environment and monitoring system: if your room is absorbing a given frequency and you end up with too much energy in that region, the AI will see it, that’s for sure; then you can take it from there. It’s still very important that you learn how to tweak the EQ even further, and you really need to experiment with compression, saturation and stereo width. For loudness, again, AI-assisted mastering can give you a starting point, but you definitely want to compare the result to your reference material and see how it performs; you might also want to try different limiters, as they add different flavours to the music.

To sum up, AI mastering can be helpful, but you shouldn’t rely on it too much. Always try to learn how to master from scratch and experiment as much as possible; consider a real mastering engineer whenever possible; and if you have to use an AI, treat it as “a second pair of ears” to help you overcome the intrinsic limitations of DIY mastering. Take the AI-generated settings as a starting point, then use your judgment and artistic vision to complete the task.

Do you have any questions about this article? Contact Francesco.

Listen to Francesco’s Music