Vocal Mixing: 6 Popular Styles of Vox in Pop and Rock Mixes; Mixing Vocals: How to Create Space for Vox.
The balance of level and frequency is what gives a mix its emotion and power, while a lack of balance can leave mixes falling flat. For new engineers, tonal balance can be a major challenge to get right. That's why today we're looking at five tips to improve the tonal balance of your mix.

What is tonal balance?

In his piece on tonal balance, fellow iZotope contributor Phillip Nichols explored how engineers, musicians, and recordists interpret the term "tonal balance." Though each interpretation differs slightly, they all come back to the idea that music with sophisticated tonal balance has a pleasing mix of frequencies across the spectrum. For mixers, if a song has too much low end, we say it's "boomy," but if there's not enough we call it "thin." There are similar bounds for mids and highs. Music that is tonally balanced sits somewhere in the middle. Now let's go a little deeper into the specifics of how to improve tonal balance.

Don't think in terms of individual tracks

As you take note of the various issues in a mix, you'll often find they span the entirety of the frequency spectrum. There might be a masking war between low-end elements, bloating in the mids, and uncontrolled transients in the highs that just about rip your ears off. This begs the question: how does one remedy them all? An excited engineer might jump from track to track in search of things to do, tackling issues as they hear them. Roll off some low end and the highs become brighter in response. Reduce those highs and the mix might feel muddy. If you go wherever your ears tell you, you end up putting out one fire after another instead of alleviating the core problems. For this reason, I have found it's often easier to resolve tonal balance issues by splitting a mix into frequency ranges and investigating each one from the bottom up. Don't think in terms of individual tracks, but rather in terms of whole groups of similar frequencies.
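One way to make "thinking in groups of frequencies" concrete is to write the band boundaries down before you start triaging. The ranges in this sketch are common rules of thumb, not fixed standards, so treat the exact numbers as assumptions:

```python
# Rough frequency bands for triaging a mix from the bottom up.
# Boundary values are common rules of thumb, not fixed standards.
BANDS = [
    ("sub-bass", 20, 60),
    ("bass", 60, 250),
    ("low-mids", 250, 500),
    ("mids", 500, 2000),
    ("high-mids", 2000, 6000),
    ("highs", 6000, 20000),
]

def band_of(freq_hz):
    """Return the name of the band a given frequency falls into."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(band_of(80))    # a kick-drum fundamental lands in "bass"
print(band_of(3500))  # vocal presence lands in "high-mids"
```

Working band by band like this (kick AND bass together in "bass", vocal harshness in "high-mids") keeps you from chasing individual tracks.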
Setting up busses and track groups can help you conceptualize your mix this way. By attacking groups of frequencies rather than individual tracks and instruments (by attenuating the boominess of the kick AND bass, for example), you'll find that it's far easier to manage the relationships between competing mix elements. With a plan in place, you reduce the number of decisions you have to make, use less processing, and save time trying to figure out why things sound the way they do. Granted, there will be some back and forth, but if you deal with the big issues in an ordered way, things will go more smoothly overall.

Reference with your ears AND eyes

As a means of sonic guidance, it's common practice for engineers to keep a few reference tracks nearby, occasionally checking in to compare whether their mix stacks up to professional standards. It's generally a very helpful practice, as long as the reference bears some similarity to your mix. We've discussed the merits of reference tracks before, so I won't dwell on this point much longer except to point you to an iZotope-curated Spotify playlist, the contents of which are bound together by their excellent frequency balance across the spectrum. There are other ways to reference your mix too. Since many of us spend our time mixing while looking at a screen, it's no surprise that plug-ins have been designed to give us visual feedback on the quality of our work. One of them is Tonal Balance Control (TBC), which juxtaposes the frequency content of a mix (as long as it's placed on the main output channel) with a customized target. This target can be a genre, a single song, or a collection of songs representing an era, aesthetic, or artist. Since we are dealing with music here, you should ultimately judge your mix based on what's coming out of the speakers. But in less-than-ideal studio spaces, and for new engineers, that trust isn't always there. For this reason, TBC can be very helpful in drawing our attention to mix problems we're not hearing, or simply in confirming that our mix is as balanced as can be.
Learn more about TBC in the video below.

Get mix suggestions with Neutron's EQ Learn

Like Tonal Balance Control, Neutron's EQ Learn suggests mixing decisions based on the frequency content of your song. Accessible from the EQ window in either Neutron 2 or Elements, EQ Learn analyzes your audio and places nodes at areas of interest, like rumble, sharp resonances, and other unpleasant build-ups. On a single track or submix that draws too much attention to itself, run EQ Learn (it only takes a few seconds) to see where you need to focus your EQ efforts. After the nodes settle, all you need to do is choose whether to boost or cut. This is particularly useful on instruments like piano, guitars, and vocals that span a wide frequency range and can easily overlap or overshadow other parts of the mix. A few strategic EQ moves are often all you need to balance things out. Since EQ Learn works on busses as well as individual instruments, be sure to try this technique in conjunction with technique #1 to locate areas of overall resonance and solve tonal balance issues.

Verify your mix with headphones

Unfortunately, the walls in my apartment are thin and incapable of absorbing sound well. I imagine many of you are in the same position; most apartments (both old and new) are just not designed for music production and mixing. If you need to remove your room from the mixing equation, try using a pair of studio headphones. By eliminating the distracting reflections and resonances hitting your ears, you'll get a more accurate reading of the tonal balance of your mix and be able to make judgments with more confidence. For obvious reasons, you'll want to avoid models that hype the low end or alter the frequency response in a significant way. To get the most out of your headphones, spend some time listening to well-balanced music to calibrate your ears when you're not mixing. Once again, the playlist mentioned earlier is a great starting point, though I imagine you have a few songs you know well too.
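Stepping back to EQ Learn for a moment: the idea of "placing nodes at areas of interest" can be caricatured as picking local peaks out of a measured spectrum. This toy sketch is not iZotope's algorithm, just an illustration of the concept; the spectrum data and the 6 dB margin are made up:

```python
# Toy resonance finder: flag spectrum bins that stand out above both
# neighbors by a margin. Illustrative only, not iZotope's algorithm.
def find_resonances(spectrum_db, margin_db=6.0):
    """spectrum_db: list of (freq_hz, level_db) pairs, ascending in freq."""
    peaks = []
    for i in range(1, len(spectrum_db) - 1):
        freq, level = spectrum_db[i]
        if (level - spectrum_db[i - 1][1] >= margin_db
                and level - spectrum_db[i + 1][1] >= margin_db):
            peaks.append(freq)
    return peaks

# A made-up vocal spectrum with a harsh bump around 3.2 kHz:
measured = [(200, -20), (800, -18), (3200, -8), (8000, -24), (12000, -30)]
print(find_resonances(measured))  # [3200]
```

A real analyzer works on far denser FFT data and smarter heuristics, but the output is the same kind of thing: a short list of frequencies worth an EQ node.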
Play with playback levels

During long sessions, we can easily nudge things up just a bit until our home studio sounds a lot closer to a club. Not only is this a direct ticket to ear fatigue, but we also have a preferential bias for loud tunes over quiet ones, which influences our perception of sound quality. With our monitors cranked up, it's also a lot harder to gauge differences in level between individual tracks, hiding problems that could prove hazardous later. Simply turning down the playback dial will reveal a lot about the tonal balance of a mix. As you return to safer listening levels and even further below, you'll find certain elements that were once bold and powerful slip into the background, while others are way overdone and in-your-face. I find this to be a humbling experience after getting carried away with loudness, as my attention is refocused on what needs to be taken care of. Another approach is to switch your playback output to your laptop speakers or a pair of earbuds, which have a more restricted frequency response and dynamic range. Don't do any mixing; just listen to how your mix sounds. You will notice some sounds really poke through while others get buried. Return to your monitor output and you will likely now hear these same imbalances in your music. When perspective is lost, this is one way to get it back.

Use smoother EQ curves on high-pass filters

When setting the cutoff on a high-pass filter, many plug-ins default to a steep curve. Though sometimes this is what you need to trim sounds that belong in a narrow, defined frequency range (808-style hi-hats come to mind), too many steep cuts can leave your mix sounding unnatural and thin, disrupting the tonal balance. A little bit of masking isn't such a bad thing, as it can bring character to your music and connect the disparate sounds in a mix.
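A filter's steepness is its slope, usually quoted in dB per octave, and for a Butterworth design each filter order contributes about 6 dB per octave. A small sketch of an ideal analog Butterworth high-pass makes the relationship visible; the 100 Hz cutoff is an arbitrary example, and real plug-in curves will differ near the cutoff:

```python
import math

def butter_hp_attenuation_db(freq_hz, cutoff_hz, order):
    """Attenuation (dB) of an ideal analog Butterworth high-pass filter."""
    magnitude = 1.0 / math.sqrt(1.0 + (cutoff_hz / freq_hz) ** (2 * order))
    return -20.0 * math.log10(magnitude)

# Far below the cutoff, each octave adds roughly 6 dB of attenuation
# per filter order: order 1 ~ 6 dB/oct, order 2 ~ 12, order 4 ~ 24.
for order, label in [(1, "6 dB/oct"), (2, "12 dB/oct"), (4, "24 dB/oct")]:
    per_octave = (butter_hp_attenuation_db(100 / 32, 100, order)
                  - butter_hp_attenuation_db(100 / 16, 100, order))
    print(f"order {order} ({label}): {per_octave:.1f} dB per octave")
```

So "opt for a 12 or 6 dB slope instead of a 24 dB slope" amounts to choosing a lower filter order: the gentler curve leaves more of the sound's natural low end intact.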
So before cutting the low end on everything that isn't a kick or bassline, try other approaches first, or just turn the level down. If you do need that EQ, opt for a 12 or 6 dB slope instead of a 24 dB slope; if 10 minutes pass and you still find the sound irksome, make the slope more aggressive.

Conclusion

With the tips and videos in this article in hand, you should have a good foundation for approaching tonal balance on your next mix. To leave you with one final piece of advice, remember that improving tonal balance is not solely an engineering task. The recording, production, and arrangement stages of a song all play into the way frequencies are distributed across the spectrum, and if you can influence this in a positive way, it's all to the benefit of the song.

OK, I'll tell you what I do. For the piano track, first I do a little bit of EQ, pushing up some highs and lows and cutting the middle tones, then I pan the high tones to the right and the low tones to the left. I usually give it a little bit of reverb, just to sound more huge. The vocals I take twice, doubled: one centered, one panned slightly (30%) and approximately 2 dB quieter. Then I use a Waves C4 on the vocals, and a TC Native compressor and de-esser, sometimes also a Waves vocal exciter and a little bit of reverb. Then I mix them to sound good, but what do I do next so the thing doesn't sound so cheap?

What is cheap sounding about it? Telling us what you did doesn't tell us anything.
We don't know what it sounded like before you did anything to it. There is no set thing you must do to a track to make it sound good. There is no preset that's going to work; there is no plugin that does it all, whatever "all" is. And that's the point. These devices are dumb. You have to know what something needs, and then you have to tell the dumb device to do something to correct it. Which means you have to know what is wrong, and you have to know what to tell the device to do to correct whatever is wrong. The first step is to identify the problem(s). We can't do that without hearing it. You have to be able to accurately describe what it is you don't like, what you are using, and how you are using it to correct the problem. Then maybe we can give you some kind of meaningful answer. Or you can buy that silly does-it-all program and hope that they have a "whiteboy playing piano" preset. Example: the piano I just mixed is sounding a bit thin and doesn't have much body. I have (insert plugin or outboard) to work with. I tried compressing the piano with (insert device), but it took some of the dynamics away and didn't give me any body. Would my (insert device) EQ work better if I were to boost the low mids and maybe some lows? You get the point. You have to be specific if you want specific answers. If you just tell a plugin to do something, anything, then it's going to do just that. It's not going to correct what is wrong, if there is something wrong; it's just going to do whatever it wants to do. Mastering is like pool, where you have to call your shot: 6 ball in the right corner pocket. You tell us where you are hitting the ball and the shot you are trying to make, and we can give you some advice on maybe where you can hit the ball to get it in, or maybe a choice of stick. If you are just hitting the cue ball and hoping one goes in, then we can't be much help.
It would be like someone asking a makeup artist, "What should my wife do to make her face look like a model's? Here's what she does now." He's not going to be able to tell you anything without seeing a picture, and even that won't help much, because the personality would make a difference too. Depending on the face, there may be no hope, but a different approach might yield the best result. If this stuff were cookbook, there'd just be a bunch of presets everyone would use. That ain't how it works. The piano, the room, the condition of the instrument, how it was played, the voice, the type of music, the mic placement: all of that stuff is going to impact what is done during mixing and mastering.

Hi dot,

Sorry so many are pulling your chain here. I think we got off track from your original question. Actually, mastering should be the last step, and you probably need to sort out the recording itself first of all. What's with all the gimmicks on the vocals? Is your singer that weak? Assuming this isn't just a track within a rock tune but more likely a jazz/pop/cabaret/live vocal-with-piano recording, you shouldn't have to do too much trickery to get it sounding good, assuming:

1. You have a good, tuned grand piano in a decent space (studio, hall, etc.).
2. You have a trained accompanist/piano player who's into doing a good job supporting the singer.
3. You have a good singer with pipes, chops (pitch and phrase control), and material.

Assuming you have all that, I'd simply mic the piano in a traditional stereo "classical" or jazz approach (stop over to the Acoustic Music forum for more info; there's a long thread over there about miking a piano). DO NOT go tweaking the high end brighter and the low end bassier. That's just silly; the piano strings already do that by themselves, naturally. You want accuracy first; all the tweaks and EQ (if any) should happen later. And don't try to go carving the "middle" out of the piano with EQ.
I don't know who gave you THAT advice, but all you're going to end up with is a cheesy-sounding piano. Use a pair of good quality condenser mics, of course (no dynamic mics on the piano). As for the vocalist, you'll probably want to check out a few different large-diaphragm condenser mics that suit your singer, and again, assuming it's jazz/classical/traditional singing, you'll probably want to go sparingly with any EQ or compression. Keep it light, and only for peaks, overs, etc. (Let the mastering house tell you if there's going to be anything else needed later on.) For mixing, you want to create a crisp, clean soundstage that will feature your vocalist front and center (hopefully with enough talent that the effects needed will be minimal), while the piano can spread out and "live" in the area behind and around the singer. Add reverb and/or room sims to taste; you may want one generic room-sim sound for both, plus a little extra plate or brighter hall sound for the vocal. You may want a LITTLE bit of peak limiting on the stereo bus as you mix/bounce it all down, but don't overdo it. Go for the cleanest, non-gimmicky sound you can get; believe me, all that goopy stuff gets dated really quickly. (If done properly, you may not need much EQ at all; maybe some low-end rolloff for the vocal mic, etc. Best to leave it out of the signal chain if you don't need it.) Assuming you're working at 24/44, make a hi-res copy at that rate, and then do a dithered-to-16-bit copy for the temp/reference copy. Bring BOTH to the mastering house if you're having someone outside do it. Get your proof copy when it's ready and enjoy the rest of the process. Every setting for a reason.

Guest post by: Jose David Irizarry. Jose does a great job of capturing the work required for vocal mixing. I call this the "shorter" guide because if you've read my full guide, you'll know why.
That being said, Jose brings in some great stuff not covered in the guide, such as mixing vocals like musical chords; read on to see what I mean.

In general, bass and drums are the cornerstone of a musical theme in a band. Then guitars, keyboards, and other instruments complement the harmonic setting of the musical arrangement. Finally, on top of all this, the vocals are the crown jewel of the song. I'd like to direct focus to vocal mixing.

The Source: A Human Voice

The human voice is one of the most—if not the most—common sources of sound. It has a wide frequency range (75 Hz – 8 kHz, including harmonics), comparable to that of the piano. The human voice also has a wide dynamic range covering an astonishing 80 dB (40 – 120 dB). Basic (fundamental) voice ranges:

- Bass: 75 – 300 Hz
- Baritone: 100 – 400 Hz
- Tenor: 135 – 500 Hz
- Alto: 180 – 700 Hz
- Soprano: 250 – 1100 Hz

You can expect a lot of variability and stability issues. A sung melody can contain a sample of the entire dynamic range in just a single verse, an example of how much its intensity can vary. Another attribute of the voice is its timbre, or tonal color. This is what distinguishes one instrument from another: a guitar from a banjo, or a violin from a flute. Now, in many cases it's quite difficult to distinguish one electric guitar from another, but the timbre of human voices distinguishes one singer from another.

Capturing the Vocal Source

To capture and reinforce vocals for live performances, the most foolproof technique is close mic'ing. You need to have, at the least, an idea of how the vocals sound (refer to my previous paragraph), as well as of the microphone-handling technique of the singer(s), in order to make a good decision on what mic(s) to choose (assuming you have this luxury).
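For a sense of scale, that 80 dB dynamic range can be converted to a linear amplitude ratio with the standard 20·log10 relationship. A quick sketch (the 2 dB example is just an arbitrary small level change for comparison):

```python
import math

def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to a linear amplitude ratio."""
    return 10.0 ** (db / 20.0)

# The voice's 80 dB range (40 - 120 dB SPL) spans a 10,000:1 amplitude ratio:
print(db_to_amplitude_ratio(80))   # 10000.0

# By contrast, a 2 dB drop leaves about 79% of the amplitude,
# a subtle change compared with what a singer can do in one verse:
print(round(db_to_amplitude_ratio(-2), 3))
```

That 10,000:1 swing is exactly why the compression discussed later in this article matters so much for vocals.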
Know your mics: know their characteristics, the types, the polar patterns, the strengths and weaknesses. Some voices may take advantage of the proximity effect that certain mics provide to accentuate the low end of a weak voice, but sometimes a mic with a more open polar pattern is necessary in order to be able to capture an excited, jumping singer. Using a single mic for two singers at the same time generally isn't a good idea unless you're using the right mic in the right setting. And in general for live applications, the tighter the polar pattern, the better and cleaner the vocal pick-up will be. It's advisable to use mics with a directional pattern as opposed to an omni-directional pattern. Another tactic that can help is to train vocalists on the proper use and handling of mics as well as on the various types. They should understand basic mic properties and polar patterns; this knowledge can help them do a better job. In addition, it's usually a good idea to use high-pass filters (roll-off at 70 – 100 Hz) on vocals. The HPF also helps in eliminating background/stage rumble and mic-handling noise. When mic'ing an ensemble or vocal group, it's a good idea to use the same mic for all vocals. Remember that different mics have different polar patterns and frequency responses, and this can complicate the EQ of your stage monitors when trying to eliminate problematic frequencies.

Processing the Voice

The second stage after capturing the voice is a good preamp. If your console/mixer lacks this, acquire and insert a quality external preamp. There is a huge variety to choose from, at many different price points. The preamp will define the quality of the signal being passed to the rest of the audio path. EQ for the voice can be tricky. My approach is to eliminate what is not needed, like a sculptor removing unnecessary material from the stone to uncover a masterpiece.
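Returning to the 70 – 100 Hz roll-off tip for a moment: that kind of filter is typically a second-order high-pass section. The coefficients below follow the widely used RBJ Audio EQ Cookbook formulas; the 80 Hz cutoff, 48 kHz sample rate, and Q are assumed example values, not a recommendation from this article:

```python
import math

def highpass_biquad(cutoff_hz, sample_rate_hz, q=0.707):
    """Second-order high-pass coefficients per the RBJ Audio EQ Cookbook."""
    w0 = 2.0 * math.pi * cutoff_hz / sample_rate_hz
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    b = [(1 + cos_w0) / 2, -(1 + cos_w0), (1 + cos_w0) / 2]
    a = [1 + alpha, -2 * cos_w0, 1 - alpha]
    return b, a

b, a = highpass_biquad(80.0, 48000.0)
# The numerator sums to zero, so DC (0 Hz rumble) is fully rejected,
# while the response approaches unity gain toward the top of the band:
print(sum(b))  # 0.0
```

The zero-sum numerator is the mathematical form of "eliminating background/stage rumble": anything at or near 0 Hz simply cannot pass.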
Keep the EQ as flat as possible, only eliminating those frequencies that cause trouble, particularly the mid-lows that cloud ("opaque") the voice. Between 250 Hz – 1.5 kHz, a notch filter can help in reducing nasal resonances, which most of the time are annoying. This frequency is different for each person and is created by a combination of the natural resonance of the nasal cavities and the skull. In certain cases, a boost at 3 kHz can add clarity to the voice, making it more intelligible and/or helping it stand out (cut through) in the mix. A boost at 5 kHz can add brilliance, while a boost in the 8 kHz range adds "air" or high end to the source material.

Another frequently used treatment is compression. Unfortunately, a lot of folks don't have the slightest idea of how to apply compression to a voice. To me, it's both science and art, and it took me years to really understand it. I continue to experiment and search for new approaches and techniques. As with any other piece of gear, fully understand your compressor (read the manual; it was printed for a reason). Compressors have personality. A particular unit may work beautifully for one thing, such as drums, but may be terrible for voice. After identifying the right compressor for your needs, use it on all vocals (especially on the worship leader or lead singer). Here are suggested values for common compression parameters:

- Ratio: very strong and aggressive voice (4:1 to 6:1); all others (2:1 to 3:1)
- Attack: 20 – 60 ms
- Release: 300 ms – 1 sec
- Threshold: begin at the max and reduce it until you get 3 – 5 dB of gain reduction during the louder passages

Use the output compensation (makeup gain) control if your mixer has this functionality! It's there for a reason (too). Use it to compensate for the gain reduction of the compression stage circuitry.
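The interaction of threshold, ratio, and output compensation can be sketched as a static compressor curve. The specific numbers below (a -12 dB threshold and a -6 dB peak) are hypothetical, chosen only to land inside the article's suggested 2:1 – 3:1 ratio and 3 – 5 dB gain-reduction windows:

```python
def compress_db(level_db, threshold_db, ratio, makeup_db=0.0):
    """Static compressor curve: above threshold, level rises at 1/ratio."""
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

# A 3:1 ratio with a -12 dB threshold. A -6 dB peak (6 dB over threshold)
# comes out at -10 dB: 4 dB of gain reduction, inside the 3-5 dB window.
loud_in, loud_out = -6.0, compress_db(-6.0, -12.0, 3.0)
print(loud_in - loud_out)  # 4.0 dB of gain reduction

# Quiet passages below threshold pass untouched:
print(compress_db(-20.0, -12.0, 3.0))  # -20.0

# Makeup gain restores the lost level so the compression goes unnoticed:
print(compress_db(-6.0, -12.0, 3.0, makeup_db=4.0))  # -6.0
```

Without that final makeup stage the loud passages simply end up quieter, which is the "only half-done" situation the next paragraph describes.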
Without this compensation, the gain reduction and pumping effect will be very noticeable to the ear. Remember that what the compressor does is reduce the dynamic range of a variable signal, confining it into a smaller range. The ratio and threshold define the upper limit of this range, and the output compensation defines the lower end. Without it you're only half done. Properly used compression should not be noticed.

Mixing Vocals

A lone singer in a band is usually rather easy to mix: make sure the lyrics can be heard intelligibly without overpowering the band. Toss in background vocals (BGV) in addition to the lead, and the mix can easily get out of control. The EQ approach for BGV is different than for lead vocals. You can be more aggressive with BGV EQ to keep feedback or bleed under control (given the increased number of open mics) without affecting their overall presence. Pay special attention to the lead vocal mic. The lead vocalist or worship leader has to be clearly distinguished without overpowering the others. The lyrics, as well as any spoken words during the performance, need to be heard clearly by the audience. The BGV and/or choir level must fit into the whole mix.

A little background in music theory: chords. When there's a lead singer and a BGV group, it's likely that the BGV group is singing a two-voiced harmony. By adding the lead's first voice (or melody), you now have a three-voiced chord for every note where the BGV sings. It's like a piano, where each finger playing a key represents a human voice. Fully defined chords are composed of at least three notes. Each note contributes to the quality of the chord (major, minor, seventh, ninth, augmented, suspended, etc.). The melody is always the first voice (typically the sopranos in a choir setting). If one of these notes can't be heard, then the intended quality of the chord is missed. Translating this into the mix: make sure that all voices are present.
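To make the chord picture concrete, each sung voice lands on a note with a definite fundamental frequency, given by the standard equal-temperament formula (A4 = 440 Hz). The triad below is just an example of three voices forming one chord:

```python
def note_freq(midi_note):
    """Equal-temperament frequency of a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

# Three voices on a C major triad: C4, E4, G4.
triad = {"C4": 60, "E4": 64, "G4": 67}
for name, midi in triad.items():
    print(f"{name}: {note_freq(midi):.2f} Hz")
# If any one of these fundamentals gets buried in the mix, the chord's
# "major" quality is lost to the audience, even though all three singers
# are performing correctly.
```

Note how close together the fundamentals sit (roughly 262, 330, and 392 Hz here): the voices share a narrow frequency region, which is why balancing them is a level exercise, not an EQ exercise.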
In live mixing, I've noticed that the BGV singing the first voice requires a little boost over the other vocals.

- Unison: one note (same frequency).
- Duo: two-voiced chord.
- Chords (fully qualified): three- and four-voiced (and up) chords.
- Modern voicing: makes some interesting alterations to the second and third voices (e.g., the third voice can sing notes one octave up or down during the entire song or during some verses).

Constructing the Mix: Balance

A) Identify each voice's voicing, especially the first voice or soprano. The harmony of each chord has to be heard (quality).
B) Pair equal voices (people singing exactly the same thing); try to make them sound like one (level-wise).
C) Balance each group relative to the other groups. Consult the musical director or group leader to get feedback about the balance between the different voicings. Sometimes a background vocalist may sing the melody to support the lead singer.
D) The group with the first voice, or the sopranos, should be a bit over the other groups (in perceived level). If there is a leader or lead, then the leader has to be slightly on top. Compressors (properly used) are a huge help in placing the lead voice in the mix, and they also free you from having to "ride the faders" all the time. Another benefit of compressors is that they help maintain the harmony (i.e., the relative level between the different voicings). After the initial compressor setup, use the compressor output level and the faders to fine-tune the balance.
E) The perfect balance is reached when you can't distinguish individual voices or singers (besides the leads). The entire choir or group sounds like one huge instrument, or an organ.
F) Once your voices are mixed, use the console's sub-groups to set the balance between the music and the vocals. In general, the vocals are set at around the same level as the music. What makes the vocals stand out in the mix should be how well you planned and managed the frequency distribution of the band and the vocals.
When EQing the band's instruments, remember to leave room (frequency-wise) for the singers.
G) You should hear a "sound" that is proportional to the size of the choir or vocal group (i.e., if you have a 6-piece choir, it should sound like a 6-piece choir, not like a quartet, a trio, or a duet).

Constructing the Mix: FX (Effects)

I have to confess that I'm a big fan of effects, but I can't use them all the time, for various reasons. A mix must be artistic, and part of this is refraining from certain things we like and being sensitive to each song, and even to each performance of the same song. Some songs may require long reverb, while other songs require nothing at all. Reverb and delay can be used subtly to provide depth to the vocals. In some particular (and extreme) cases, long delays can be used to create the illusion of a second BGV group repeating a small passage. Be creative and feel free to experiment. Not all songs require the same reverb or delay. As with any piece of gear, know your FX processor(s) and their parameters.

Psycho-acoustic Phenomena

Perceived balance of vocals and the band: when mixing the same group performing a song you know well on a regular basis, you might unconsciously tend to put the vocals at a level (relative to the rest of the band) that is too low, making it hard for the audience to hear them. You may "hear" or perceive the lyrics clearly because you already know them and/or have heard them many times. However, this is not the case for the audience. (I've seen this psycho-acoustic phenomenon happen with professional shows and tours many times.) Your ears are your most valuable possession. They are more important than your legs or arms (food for thought).
So take good care of your ears by visiting a hearing specialist at least twice a year, and don't ever introduce anything solid into your ear canal, not even to clean it (even a cotton-tipped stick can damage your ear drums). Every time you hear a sound, your brain tries to decipher what it is and what it means. This means that the structures of your brain dedicated to processing sound, communication, and speech are constantly working, even when you're just sitting around the house doing nothing, or even sleeping. Particularly when working with audio, this consumes energy, and eventually you get tired. Behold, you've reached the threshold of hearing fatigue. It happens even at lower sound pressure levels. A good way to prevent this fatigue is by taking regular breaks when exposed to continuous music, say, every 30 minutes or so. Flu, colds, allergies, and congestion affect your perceived sound field and also the frequency response of your ears. Avoid mixing if you have one of these conditions. And finally, don't fly by wire: use your EARS, not your eyes. Be prepared, plan ahead, and take notes. The goal is professionalism and excellence on your part.

Jose David Irizarry has been heavily involved in live sound for almost a decade, mixing Christian music and popular events, and he's also currently the technical director at his church in Canovanas, Puerto Rico. He can be contacted at.

One statement in point F reads: "When EQing the band's instruments remember to leave room (frequency wise) for the singers." This I agree with. With singers that use soundtracks, I consistently set the mid control on the tracks to 1 kHz and dip it by 6 dB. It makes room for the voices without the tracks masking, and competing with, the vocals. For everyone who has read the previous sentence: try it sometime; it's readily audible. If you don't use tracks, try putting all your music into a subgroup and making that EQ change to the group.
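The 1 kHz, -6 dB mid dip suggested in this comment can be expressed as a standard peaking EQ section. The coefficients below follow the RBJ Audio EQ Cookbook formulas; the 48 kHz sample rate and Q of 1.0 are assumed example values, not part of the commenter's recipe:

```python
import cmath
import math

def peaking_eq(center_hz, gain_db, sample_rate_hz, q=1.0):
    """Peaking EQ biquad coefficients per the RBJ Audio EQ Cookbook."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * center_hz / sample_rate_hz
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a, w0

# A -6 dB dip at 1 kHz, as the comment suggests for backing tracks:
b, a, w0 = peaking_eq(1000.0, -6.0, 48000.0)

# Evaluate the filter's response at the center frequency:
z = cmath.exp(1j * w0)
h = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
print(f"gain at 1 kHz: {20 * math.log10(abs(h)):.1f} dB")  # -6.0 dB
```

A bell cut like this leaves the extremes of the backing tracks untouched while clearing the region where vocal fundamentals and lower harmonics live.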
(Typically, those of us mixing with analog mixers don't have access to an extra sweepable EQ that can be put on a subgroup.) See if you like what it does for the vocals.

Quaid, this is an area I wrestle with in our current crop of up-and-coming audio techs at my church. "Make sonic room in the spectrum for the vocals" is very valid in concept but much more difficult in practice. It is the "Phil Spector" method, and it can quickly make or break a good live mix depending on the skill and ears of the audio tech. Let's explore this a bit to understand my reservations. If we have a mixed group of male and female singers, the sonic space they occupy is typically 100 Hz – 1.2 kHz for the fundamental notes, plus overtones out to about 8 kHz. That is essentially all of the audible frequencies except one octave at the upper and lower boundaries. A very big spectrum indeed, and this is where 95% of all music is heard. In order to make sonic space, my first-year techs will often pull down the guitars and keys broadly below 1 kHz, by as much as 6 – 8 dB. Now they have made sonic space for the fundamental vocal tones, but there is a problem. The side effect of doing this is that, when soloed, the guitars and keys have lost their natural tonal qualities, with the left hand on the keys essentially going away (fundamental chords) and the guitars sounding more like mandolins, with no warmth left. Many song intros and instrumental passages require full, natural-sounding instruments, but now these sound thin and weak, and their sonic impact is lost due to our overuse of EQ. The solution is to use subtlety and choose EQ to "create sonic space" if and only if we can preserve the natural tonal quality of the instruments when soloed on headphones. No easy trick. More than 3 dB is often too much, and we simply need to use fader control so the vocals can always be heard clearly. We must trust our ears and wield our digital EQ weapons graciously.
As an audio tech, my unwritten oath is to first "do no harm to the performance," and I think most behind the mixing desk agree with this in principle. Remember that in a fully acoustic performance without mics or electronics, the voices, guitars, piano, cello, violins, and many other instruments share the same sonic space, and if the performance is of high quality, there is no audible clash of overlapping frequencies from 100 Hz – 8 kHz. The music simply sounds very natural and beautiful, with a great natural mix. If we measured this with a real-time analyzer, we would find most of the sonic energy between 200 Hz – 3 kHz, and that is what our ears are accustomed to hearing. I actually experienced this while singing the Messiah last night with an orchestra. The room acoustics were excellent, and the only mic was for the director to speak to the audience between movements. Stepping off the soapbox now. Just one senior sound tech's opinion; others will surely disagree. :)

Sound Tech, thanks for your very interesting comments. I also agree with your approach to "sonic space," or better yet, frequency-band space; it's exactly the concept I was describing in the article. However, in your comparison with the choir and the orchestra in a natural acoustic setting, you are forgetting one fundamental detail: the sound system. It is nearly impossible to reproduce the perfect mixture of sounds found in a natural environment with a man-made sound system. The complex interactions between the different instruments' sound waves in the air behave differently when we try to capture them with mics, mix them in a console, and then attempt to reproduce them through a PA. That's why we need to create "sonic space" in the mix: to try to overcome the limitations of the PA (compared to the pure natural sound). This technique is used when we are close mic'ing. For an orchestra and choir ensemble, one or two ambient mics will do the job, and in such a case there is no need to worry about "sonic spaces."
The point is that we cannot compare the example of a symphonic orchestra in a natural and pure environment with an environment where many mics and inputs are being mixed together. Moreover, like any other mixing technique, it should be used with caution, and the final judgment is based on what we all hear.

Jose, I appreciate what you are saying, and we may be confusing several different issues here. If the system is introducing a bunch of midrange energy into the mix, my recommendation is to deal with this during system setup and room tuning rather than during a live mix, if it can be avoided. My first goal in a live mix is to reproduce and reinforce the voices and instruments as truly and accurately as possible, while leaving room to fix problem areas and enhance the overall sound. I still recommend using care when choosing EQ "to create sonic space" for voices. We choose EQ if and only if we can maintain the natural quality of the instruments and avoid making the guitars sound like mandolins and the piano sound like a harpsichord. I say this because, due to heavy-handed EQ, I have been hearing a great deal of soloed mandos and harpsichords lately where magnificent guitars and grand pianos once roamed free. Sometimes simply pulling the fader down 3 dB on the guitar reduces midrange energy and is a more effective approach to creating sonic space for the vocals while preserving the natural timbre of the guitar. JMHO. As always, there is more than one way to skin a cat, and each sound tech must choose the method that suits their situation best.

The most clear and concise how-to on mixing vocals yet. These are very close to the goals and methods I was trained to use many years ago.
Trust quality mics, trust your ears, use minimal EQ to fix problems only, use effects subtly, and make certain you can hear the lyrics and all the harmony parts as one. These days the technology at our fingertips is so much better, but it seems that the skills behind the console are becoming a lost art. I often suffer through a service where I cannot hear the lyrics under heavy-handed EQ or delay effects, or where the drums overpower everything and I cannot hear the vocal harmonies at all. So much work to do, so little time.