During the mixing step, you will balance instruments, place them in the sound space, set their volumes, shape their frequencies, and fix or hide flaws. The purpose is to define each instrument's location, keep instruments separate from one another and create a song that is pleasant to listen to.




Dynamics is the interval between the lowest and the highest volumes in a sound. OK, fine, but we are doing Rock music, who cares about low volume? We want to kick ass! Drums must beat, guitars must yell, the bass must make the walls shake and the vocals must be loud! Pump up the volume! Wooo... calm down!

If everything is loud, then nothing is

Yes, that's logical. If everything is loud all the time, nothing stands out. But it's the contrasts that bring both subtleties and raging moments to light. How do you expect to surprise your listeners with a sound hurricane if you've already given them thunder and lightning without a break? You have to spare some quiet moments before the storm. Let's compare this with drawing. A black dot on a white background will stand out more easily than the very same black dot on a dark-gray background.
Black dot
Obvious? Yes, but many musicians chase that "powerful sound" to such an extent that the result eventually becomes tiring to listen to.

You can increase the contrasts through the very writing of the song (guitar/vocal bars, followed by a passage with 3 guitars, 1 distorted bass, big drums...), or through interpretation (the musicians play louder, faster...), or through the use of effects (saturation, compression...).

Contrasts don't have to be huge, it all depends on what kind of emotion you wish to convey, on the music genre, on the topic of the song, on what you are looking for...
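One way to put a number on dynamics is the crest factor: the gap in dB between a signal's peak level and its average (RMS) level. Here is a small Python sketch of that idea (the two example signals are artificial, made up for the illustration):

```python
import math

def dynamics_db(samples):
    """Crest factor: the gap in dB between a signal's peak level and its
    average (RMS) level. A small gap means a compressed wall of sound;
    a large gap means quiet moments were spared before the storm."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# Artificial examples: a signal that is loud all the time...
flat = [1.0, -1.0] * 100
# ...and one that is mostly quiet, with occasional hits.
sparse = [1.0 if i % 50 == 0 else 0.01 for i in range(200)]
```

With these inputs, `flat` measures 0 dB of dynamics (everything is loud, so nothing is), while `sparse` measures about 17 dB: the hits stand out because of the quiet space around them.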

Nowadays, too many songs just look (and sound) like this:
Over-compression
Do your ears a favor: don't do this


EQ stands for equalization. It's a tool that lets you set the level of the frequencies that make up a sound. Basically, it's like a volume knob, but instead of changing the volume of a whole song, you lower or raise only the frequencies of your choice. Think of a stereo amplifier, for instance: it has "bass", "medium" and "treble" knobs. Those form a simple EQ that helps you tune the sound.

Well, you are going to apply this same principle to each of the instruments of your song.

The purpose of equalization is simple: shaping the sound so that each instrument primarily occupies the frequency range that defines it best, without encroaching on the other instruments.

In other words, lower the level of the frequencies an instrument doesn't need, without distorting it, and see to it that the various instruments' main frequencies do not overlap. Each instrument will thus sit alone in its own sound space, clearly distinct from the rest.

The hard part is to do this without distorting the sound of the instruments, and to make sure that instruments which naturally share frequencies remain clearly distinguishable.

It follows that the more instruments there are, the more complex the mixing becomes.
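To illustrate the "volume knob that only touches the frequencies of your choice" idea, here is a naive FFT-based sketch in Python. Real EQs use filters rather than FFT processing, and the signal, band and gain values below are made up for the example; this is a demonstration of the principle, not a production EQ:

```python
import numpy as np

def simple_eq(signal, sample_rate, band, gain_db):
    """A 'volume knob' that only touches the chosen band of frequencies."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    spectrum[mask] *= 10 ** (gain_db / 20.0)  # raise or lower only that band
    return np.fft.irfft(spectrum, len(signal))

# Made-up example: a 100 Hz tone (bass) mixed with a 1 kHz tone (mids)...
sr = 8000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
# ...cut the region around 1 kHz by 24 dB to make room for another instrument.
cleaned = simple_eq(mix, sr, band=(800.0, 1200.0), gain_db=-24.0)
```

The 100 Hz component comes out untouched while the 1 kHz component is heavily reduced: two instruments sharing that mix would no longer fight over the same range.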


Music is usually mixed in stereo. Not always, but we have two ears and we listen through headphones or speakers that usually come in pairs, so it only makes sense to use left and right channels when mixing.

That's what panning is: placing a sound in the stereo field, from left to right.

On top of the EQing described above, we can also separate instruments from one another by placing them in different spots. If two guitars play simultaneously in your song, you may want to place one on the left and the other on the right. This helps distinguish them and also balances the sound.
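A common way to implement this is the constant-power pan law, sketched below in Python. The sine/cosine mapping is one standard choice among several, and the sample values are made up for the example:

```python
import math

def pan(sample, position):
    """Constant-power panning: position runs from -1.0 (hard left)
    to +1.0 (hard right). The sine/cosine law keeps the perceived
    loudness steady as the sound moves across the stereo field."""
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] to [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Two rhythm guitars, one on each side, as suggested above:
g1_left, g1_right = pan(0.8, -0.7)  # mostly left
g2_left, g2_right = pan(0.8, +0.7)  # mostly right
```

At the center position both channels sit at about -3 dB, so the sound doesn't jump in volume as you sweep it from one side to the other.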


Compression affects the dynamic range of a sound signal.

It reduces the amplitude modulations of the sound, that is to say the gap between the lowest and the highest levels.
Kjaerhus Audio Classic compressor
I will use the above compressor as an example. All compressors have more or less the same basic functions, so it will be easy to reproduce the manipulations on alternative models.

Let's see how this works:

- Threshold: the level above which the compressor starts affecting the sound. 0 dB is the highest level.
- Ratio: the amount of compression you are going to apply. A 1:1 ratio has no effect, whereas an infinite ratio limits the sound to the threshold level; in that case the compressor acts as a limiter and nothing goes above the threshold. The higher the ratio, the more compressed the sound, and the overall level drops accordingly until you compensate with the output knob.
- Knee: this knob sets how gradually the compression kicks in around the threshold. The softer the knee, the smoother the transition; a hard knee gives a more abrupt one. The effect of this control is rather difficult to hear.
- Attack: this knob indicates the time it takes for the compression to start acting once the threshold is reached. Set it short (0 ms) to compress the sound as soon as the threshold is reached, and set it longer to avoid compressing the attack of an instrument. For example, with a solo guitar, you may want to preserve the string attack and thus set the attack knob on 10 or 20 ms. It all depends on what you wish to do.
- Release: this knob indicates the time it takes for the compression to stop acting once the volume level goes back lower than the threshold. A long release allows for a less brutal change of gain.
- Output: sometimes called "Gain". This knob sets the output level after compression and compensates for the volume lost to the compression.

So basically, compression takes all the sound energy between the threshold and the highest level of the original signal and squeezes it into a smaller gap. You then raise the whole signal with the output knob, which renders a denser, richer sound compared to the uncompressed signal. This can be subtle or obvious, depending on your settings and the original sound.
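As a rough sketch of how these controls interact, here is a minimal feed-forward compressor in Python. It only models a hard knee, for simplicity, and the parameter names and defaults are mine, not the Classic compressor's:

```python
import math

def db(x):
    return 20 * math.log10(max(x, 1e-9))

def lin(d):
    return 10 ** (d / 20)

def compress(samples, sample_rate, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0, output_db=0.0):
    # One-sample smoothing coefficients derived from the attack/release times.
    attack = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env, out = 0.0, []
    for s in samples:
        level = abs(s)
        # Envelope follower: reacts fast (attack) when the level rises,
        # slowly (release) when it falls back under the envelope.
        coeff = attack if level > env else release
        env = coeff * env + (1 - coeff) * level
        level_db = db(env)
        if level_db > threshold_db:
            # Everything above the threshold is divided by the ratio.
            gain_db = threshold_db + (level_db - threshold_db) / ratio - level_db
        else:
            gain_db = 0.0  # below the threshold: leave the sound alone
        out.append(s * lin(gain_db + output_db))
    return out
```

With the defaults above, a steady full-scale signal (0 dB) ends up 15 dB quieter: its 20 dB above the threshold become 5 dB above it, which is exactly the 4:1 behaviour the Ratio knob describes, while a signal below the threshold passes through untouched.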


Reverberation brings a little air to a recording that would otherwise sound hollow or confined. There is no need to add much of it, or it will drown the sound and give the impression that the recording took place in a cathedral, unless that is precisely the effect you are after. A lot of reverb can be nice at specific moments in a song, on one specific instrument, but I doubt anyone would apply it heavily to a whole song.

Putting some on vocals is recommended; it adds depth. The more you add, the more the voice seems to move away and drown in the mix; conversely, the less there is, the closer the vocals seem to the listener. A nice little touch of reverb makes vocals sound richer, deeper and more beautiful. Just find the right level.


Delay is a sound effect that lets you... delay an audio signal and have it repeated. It's very much like an echo: shout, and you will hear your voice first, then the echo of your voice repeated one or several times.

This effect is somewhat related to the reverb effect, with one major difference: delay repeats at regular intervals without generating a continuous surrounding sound. You can use it instead of the reverb to avoid overcrowding the mix.

As a general rule, set this effect according to the song tempo, choosing the interval at which you wish to hear the echoes. Here is a chart to help you:
Delay by tempo
Click on the chart to download the corresponding Excel file ("Delay by Tempo & Instrument Frequencies.xls" - 64 KB)
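The chart boils down to simple arithmetic: one quarter note lasts 60000 / BPM milliseconds, and the other note values are ratios of that. A minimal sketch (the set of note values shown is my own selection, not the full chart):

```python
def delay_times_ms(bpm):
    """Tempo-synced delay times: 60000 / BPM gives one quarter note in
    milliseconds; the other note values are simple ratios of it."""
    quarter = 60000.0 / bpm
    return {
        "1/4": quarter,
        "1/8": quarter / 2.0,
        "1/8 dotted": quarter * 0.75,
        "1/16": quarter / 4.0,
        "1/4 triplet": quarter * 2.0 / 3.0,
    }

# At 120 BPM a quarter note lasts 500 ms, an eighth note 250 ms, etc.
times = delay_times_ms(120)
```

Dialing one of these values into the delay plugin makes the echoes land on the beat instead of smearing across it.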


By changing not only the volume, but also the frequencies or some effects like reverb, you can give the impression that a sound is more or less near the listener. This adds depth to a recording.

In everyday life, how do you recognize a distant sound?

Its volume is low. At least relatively. A motorbike engine roaring in the distance will be much less loud than if it were 2 metres away from you. Thus, the first thing to do is to lower the volume of the instrument you want to place in the background.

Distant sounds don't carry much bass. Bass frequencies require a lot of energy to travel, so unless the sound is very powerful (like an explosion), the bass won't reach you. You can therefore cut low frequencies with a high-pass filter or an EQ.

At the other end of the sound spectrum, the high frequencies of a distant sound are also attenuated, because they are absorbed by obstacles along the way. So you also need to lower these high frequencies.

Distant sounds are indistinct, unintelligible. Take the human voice: if you hear someone speaking in the distance, you will of course recognize that it is a human voice (yes, your brain is that smart), and you will probably know whether it belongs to a man or a woman, but you will not understand a word. You are going to have to reproduce this lack of clarity to simulate distance. A little reverb, a little chorus or a touch of phaser will do. Give it a try and see what suits your music best.

You can't always tell exactly where a faint sound is coming from, so pan the background instruments to the middle and leave the sides to foreground instruments.

When listening to music, a listener mainly focuses on changing elements. A short, repeating musical pattern, treated as described above, will quickly be forgotten and settle into the background. The purpose of a background instrument is to add fullness to the song without being so interesting that it draws attention; keep it simple so that it blends into the song without being noticed too much.
And, conversely, the foreground instruments will be the ones bringing volume, the lowest and highest frequencies, melodies, and elements of surprise.
Example of a voice that gradually moves away from the listener
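The recipe above (lower the volume, cut the lows, soften the highs) can be sketched with simple one-pole filters in Python. The cutoff frequencies and the -10 dB volume drop are arbitrary starting points of mine, and the reverb/chorus step is left out:

```python
import math

def one_pole_lowpass(samples, sample_rate, cutoff_hz):
    """Very simple low-pass filter: softens everything above the cutoff."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for s in samples:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def push_to_background(samples, sample_rate):
    # 1. Lower the volume (about -10 dB here).
    quieter = [s * 0.3 for s in samples]
    # 2. Cut the lows: a crude high-pass is the signal minus its
    #    low-passed copy (everything under ~200 Hz removed).
    lows = one_pole_lowpass(quieter, sample_rate, 200.0)
    no_lows = [q - l for q, l in zip(quieter, lows)]
    # 3. Soften the highs (everything above ~4 kHz rolled off).
    return one_pole_lowpass(no_lows, sample_rate, 4000.0)
```

After this treatment, add a touch of reverb or chorus and pan the result to the middle, as described above, and the instrument will sit behind the others.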


When adding effects, check the result by toggling them on and off as the song (or the instrument) plays. This helps you hear whether an effect is too present or not present enough, whether it is necessary, whether it is too aggressive, and so on. You may like heavy reverb, but too much of it drowns everything and makes the song inaudible. Control is essential: too little is useless, you might as well remove the effect completely, whereas too much makes everything heavy and will surely ruin the whole piece.

An effect often turns out to be... effective if you miss it when it's gone, even though you couldn't swear you heard it when it was on.



01/12/2020, 23:06

So the chain goes:

DAW > Audio Interface Out > Amp > Speaker > Mic > DAW

This matches my understanding from what I've read and the few videos I've watched on creating IRs. My question, then: when plugging into the amp, I've seen people say to plug the interface output into the FX return, but you say the guitar input jack. What is the purpose of doing one or the other?

Side questions:

What channel should my amp be on? I'm assuming the clean channel.

What should my Amp settings be (EQ, Gain, Channel Volume, Presence, Master Volume)? I can't find a clear answer anywhere.

* * * * * * * * * * * * * * * *


About plugging into the FX return or the guitar jack, I don’t know. Actually, the amps I’ve used myself to make IRs don’t have any FX return, so I didn’t have a choice and had to plug into the guitar jack. I guess there’s no harm trying both (not at the same time!) and comparing if you have that possibility. Chances are there’s not much of a difference, but again, I may be wrong as I have not tried this myself.
About the choice of a channel, and the settings: the channel doesn’t actually matter. You’re not capturing the amp sound, but the speaker sound.
From what I’ve experienced, the EQ and Presence should be neutral, the gain/saturation should not be engaged (or set to a level where no distorsion can be heard). As for the volume, set it to a level that’s high enough for your microphone to be able to pick up a good signal (no need to record higher than -6 dB, by the way, give your signal a bit of headroom).
But you should also be careful not to set it too loud to protect your own ears. It doesn’t need to be pushed too high. I think a level high enough to cover your own conversational voice should be enough. I tried various volume levels, and it did not affect the results notably. I did not get better results with very high levels than with normal, humanely bearable levels. Don’t set it too low, though, because it’s better if your speaker does move some air.

Experiment, try different amp settings and see whether that changes the results.


10/20/2019, 17:06

Hey, I downloaded the plug-in and extracted it, then put it in the plugin folder, but it is not working: C:\Program Files\Common Files\Avid\Audio\Plug-Ins. Are these the right steps? Please let me know, thanks!

* * * * * * * * * * * * * * * *

As you explained to me by e-mail, you were using Pro Tools First, which doesn't support third-party plugins. The solution is to either upgrade to a paid version of Pro Tools, or use another free DAW such as Cakewalk by Bandlab (Windows only), or use Reaper, which is not free but can be used freely without constraints. These DAWs do support third-party plugins.


08/26/2019, 11:06

First of all, congratulations on this site.
I'm a beginner and I'm running into a few problems.
I have a Windows 10 PC (64-bit, 8 GB of RAM) with an integrated 5.1 sound card and a Realtek driver, and when I launch an amp-sim program like Amplitube 4, a horrible sound comes out. Is that normal? Is there a way to fix it?
I also tried Bandlab as a sequencer but I don't know how to integrate the cab and the simulator.
Thanks in advance

* * * * * * * * * * * * * * * *

Hello Dam40,
The horrible sound you get with a simulation program is not "normal", but it may be due to the fact that you are using your computer's integrated sound card. That type of card is not at all suited to recording and mixing music.
To record a guitar, for instance, you go through the guitar's jack plug, and integrated sound cards don't have that kind of socket. Moreover, the inputs of integrated cards don't have the right electrical impedance to get a decent signal level from the instrument, and even when they do work, they induce latency, that is, a delay between the moment you play the guitar and the moment you hear the sound on the computer.

To fix this, you need an audio interface, a type of sound card that comes as an external box connected to the computer, usually over USB (though other connections exist). These interfaces ship with a dedicated driver that handles sound through the ASIO protocol. This standard protocol provides latencies low enough to play guitar and hear the sound, with or without effects, without any disturbing delay.


08/16/2019, 04:18

Hello!

I've tested nearly every simulator listed here, for one reason: I can't open a DLL file!
My PC asks me to associate DLL files with a program, but I have nothing in particular to open them with...

I had this problem before, so I formatted my PC, since I hadn't done it in years (1.65 TB of data to re-download).
And I still have the same problem. I've tested on 6 different PCs and they all behave the same way... I assume some special software is needed, but nothing is mentioned. Could you help me? Thanks in advance!

* * * * * * * * * * * * * * * *

Hello Blastrax,
All the free amp simulators distributed as DLL files are "plugins", not standalone programs.
I explain this here.

These plugin files are not installed; you simply copy them into a folder on your hard drive. Note also that free amp simulators only simulate the head of an amp. To simulate the speaker as well, you need another plugin, called an "impulse loader", into which you load "impulse responses", or IRs. IRs are small audio files that reproduce the sound of a real speaker. You can find IRs reproducing the sound of Fender, Vox, Marshall, Orange, Mesa Boogie amps, and so on. Some are free, some are paid.


06/02/2019, 18:49

Thanks for the tutorial on creating impulses. I captured the signatures of my Markbass Traveler 121H 2x12 cab and my old Fender Rockpro 1000 1x12 combo, and it's exactly the sound I wanted. I'm blown away by the quality compared to what I've been able to download on the web...! I no longer need to blast my ears at high volume to record takes.
