#1 - Matrixmusic

Mixing, Any additional suggestions?

MIXING

Most good mixers these days can start their mix at any point they like,
because of their years of experience and their relationship with their
monitors. When you are starting out as a mixer you do not have that
experience, so you need to start from a reference point that will produce
the results you want. I have designed this mixing segment for those with
little experience or who are new to the mixing process.

Before starting a mix you need a vision of how you want it to sound.
Refer to CDs that exemplify what you are trying to achieve; for both
creative and tactical purposes, this will give you guidance on where to
take your mix sonically and musically.


Near Field Monitors

Good near field monitors play an essential role in consistent
referencing. The monitors should be capable of reproducing frequencies
from 60 Hz to 17 kHz, able to handle high SPLs, and set up in a
triangular fashion 3-4 feet apart. Make sure the monitors are not too
close to the plane of the console, so as to minimize the high frequency
reflections that corrupt proper imaging. If you're using monitors that
are not true in frequency response, equalize them in the monitor stage
(post fade) to allow for the discrepancies. This keeps you from
incorrectly EQing your mix to compensate for inaccurate monitors. The
distance from your ears to the monitors should also be set so that the
room acoustics do not play a significant role in the sound of your mix.
For example, if the monitors are too far away and the room is
reflective, your mix will end up too dry, because the room's reflections
add ambience you will unconsciously compensate for.
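
A quick back-of-the-envelope sketch (in Python) of why those console
reflections matter. The path lengths below are made-up numbers, not
measurements of any particular setup: a copy of the sound bouncing off
the console top arrives a fraction of a millisecond late and
comb-filters against the direct sound.

    # Rough comb-filter estimate for a single early reflection
    # (illustrative numbers only). The first cancellation notch of a
    # delayed copy falls at f = 1 / (2 * delay).
    speed_of_sound = 343.0      # m/s, air at room temperature
    direct_path    = 1.20       # metres, ear to monitor (assumed)
    reflected_path = 1.35       # metres, bounce off the console top (assumed)

    delay = (reflected_path - direct_path) / speed_of_sound   # seconds
    first_notch = 1.0 / (2.0 * delay)                         # Hz

    print("extra path  : %.2f m"  % (reflected_path - direct_path))
    print("delay       : %.2f ms" % (delay * 1000))
    print("first notch : %.0f Hz" % first_notch)
    # ~0.44 ms of delay puts the first notch near 1.1 kHz, with more
    # notches every ~2.3 kHz above it - squarely in the range that
    # defines imaging and presence.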

Outboard Gear

I like to start my mixing sessions with at least three different
reverbs, three DDLs, a stereo chorus effect, and two extra stereo
effects processors loaded with assorted stereo effects such as phasing,
flanging, etc., as well as enough analog compression/limiting to process
the acoustic audio. One good stereo EQ and one stereo compressor are
necessary for mastering my final mix. You also need two audio storage
mediums for the final mix, one for the master and one for safety, such
as a hard drive, DAT machine, analog 2-track, etc. Storing audio
digitally should be done in the best sounding format available, e.g.
24-bit/96 kHz.


Setting up the console

1) Grouping - Assign all tracks of similar instruments close to each
other. For instance, put all drum and percussion channels side by side,
all guitars side by side, etc. Mark the different instruments with
different colors on the console strip; this makes it easy to recognize
and locate a given instrument at a glance. Try to group all hard drive
returns to the center part of the console. Things like solos and lead
vocals that require a lot of fader moves should be placed in the center
of the console for optimum monitoring. Patch all outboard gear to the
outside channels, e.g. 1-8 and 29-36, since they only need to be set to
one optimum level. If you have the time and will be mixing for more
than a couple of days, insert a 1 kHz tone at 0 VU into each input
strip with the fader at the 0 VU position to check line cleanliness and
continuity.

2) Setting up Line Amps - First bring up all channels to a basic rough
balance, with the priority tracks such as the lead vocal at a position
where they sound cleanly audible and still leave about 10 dB of fader
headroom. Now fine tune all line level amps (-5 dB to -20 dB) so all
faders sit in their maximum working range; it is very hard to make
detailed level changes when a fader is close to the bottom. Allow 10 dB
of headroom on all faders.


3) With a priority track such as a lead vocal, bring the vocal up on
one channel and buss it to another input. This lets you control the
level of the vocal before any processing. In the first vocal channel
you can roll off low frequencies such as rumble (60 Hz), proximity
effect, etc. In the second vocal channel insert limiting, equalization,
compression and de-essing, if necessary. If a vocal needs to be
compressed but the choruses were recorded significantly louder than the
verses, a compressor set for the choruses will not touch the verses at
all; set it for the verses instead and the choruses will be overly
compressed and very thin sounding (a quick numeric sketch of this
follows below). Remember that the more you compress, the more the
signal's quality tends to suffer. If all the verses are similar in
level and all the choruses are similar in level but a lot louder,
designate one channel for the verses and another for the choruses. This
same approach can be used for solo instruments or anything that will be
a priority in the mix.
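
Here is the promised sketch of the static (steady-state) maths of a
compressor, to put rough numbers on the verse/chorus problem. The
levels, threshold and ratio are assumed values for illustration only.

    # Static compressor maths only -- no attack/release. Above the
    # threshold, output rises 1 dB for every `ratio` dB of input.
    def compressed_level(level_db, threshold_db, ratio):
        if level_db <= threshold_db:
            return level_db                      # below threshold: untouched
        return threshold_db + (level_db - threshold_db) / ratio

    verse_db, chorus_db = -18.0, -8.0            # assumed: chorus 10 dB hotter
    threshold_db, ratio = -12.0, 4.0             # assumed: set for the chorus

    print("verse  in %6.1f dB -> out %6.1f dB" %
          (verse_db, compressed_level(verse_db, threshold_db, ratio)))
    print("chorus in %6.1f dB -> out %6.1f dB" %
          (chorus_db, compressed_level(chorus_db, threshold_db, ratio)))
    # verse  in  -18.0 dB -> out  -18.0 dB  (never touches the compressor)
    # chorus in   -8.0 dB -> out  -11.0 dB  (squashed 3 dB and thinned out)

One threshold simply cannot serve levels that far apart, which is why
splitting the vocal across two channels works so well.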


Starting the mix

At this stage you should have a basic idea of where the focus of the
mix resides. If it's Norah Jones, it will be the lead vocal and the
piano; for hip-hop it will be the groove, the bass and the vocal; for
rock it will be the guitars and the vocal. Whatever the focus is, it
should get the best treatment, such as good analog equalization and
compression. I have yet to hear digital equalization or compression
that sounds as good as analog. If dealing with someone like Norah
Jones, listen to similar sounding albums in that genre and try to
approximate the equalization, compression and reverb of the sound you
are after. Remember that you will most likely be processing it further;
the object here is not to emulate totally, but to start you in the
right direction. Next, bring in the piano, respecting the fact that the
vocal takes precedence in the high frequency range (presence). The
piano should sound clear but not override the high frequencies of the
vocal. A good way to test this is to listen to the piano without the
lead vocal; if you feel it is a little dull, you are on the right
track. As soon as you start to make the piano sound like the focus, you
will have to EQ more high frequencies into the lead vocal, which will
make the vocal sound too bright and thin - at that point you are
separating the sonic qualities of the vocal from its musical qualities.
From there you can turn the lead vocal off and build the sound of your
rhythm section.

Also at this stage you need to assign your instrument groups to group
fader masters. This lets you make level changes and mutes on whole
groups of instruments at once. If using a moving fader system, assign
your lead vocal channel to a group master even though it is only one
channel; if you have made a lot of fader moves on the vocal channel in
a verse and then realize you need to bring up the lead vocal for the
entire verse, having a group master makes that easy.

Drums

You need to decide where the drums should fit into your mix. Should the
bass drum be tight with the bass, providing the rhythmic or attack part
of the bottom end? This allows the bass guitar to be warm and full in
the bottom end, which tends to work for a lot of pop tracks. A common
mistake is to EQ too much low end into the bass drum and not enough
into the bass guitar. This gives you the illusion that your mix is
bottom light, because what you are really doing is shortening the
duration of the low frequency envelope in your mix. Also, the bass drum
tends to be more transient than the bass guitar, giving the impression
that the low frequency content of your mix is inconsistent. Should the
bass drum need more resonance and depth, adding in ambient mics or
short reverb programs will suffice. One thing to decide in your mix is:
do you want the bass drum to be felt or heard? EQing in the 30-60 Hz
range will produce a "feel" bass drum, but it will sound very thin on
smaller speakers. If you EQ the bass drum between 60-120 Hz it will be
heard on smaller speakers. With a "hear" bass drum you want plenty of
low end in that range, some attack between 2-4 kHz, and a dip in the
300-600 Hz range, which contains a lot of unnecessary overtones. If the
track has enough space in it, you can factor in a tight verb or a tight
ambient room, because you will actually be able to hear it. If the
track is dense, don't bother trying to create one; it will just take up
space and clutter the bottom end of the track.
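
If you want to experiment with those kick-drum frequency moves away
from the console, here is a rough sketch using the standard RBJ
"Audio EQ Cookbook" peaking-EQ biquad. The centre frequencies, gains
and Q below are assumptions pulled from the ranges above, not the
curve of any particular EQ.

    # Peaking EQ biquad (RBJ cookbook formulas): a "hear" boost near
    # 80 Hz, a dip in the 300-600 Hz mud, some attack around 3 kHz.
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(fs, f0, gain_db, q=1.0):
        a = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2.0 * q)
        b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
        den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
        return b / den[0], den / den[0]

    fs = 48000
    kick = np.random.randn(fs)          # stand-in signal; use a real kick track

    for f0, gain_db in [(80, +4.0), (450, -5.0), (3000, +3.0)]:  # assumed moves
        b, a = peaking_eq(fs, f0, gain_db, q=1.0)
        kick = lfilter(b, a, kick)

A wider Q (lower number) for the 300-600 Hz dip and a tighter Q for the
attack boost is a reasonable starting point; let your ears decide.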

What sound should the snare drum have? Should the snare have a lot of
reverb to make the backbeat sound longer in duration, or should it be
short and percussive? Do you want to mix in a lot of room ambience
triggered by the snare to make the snare drum sound bigger (see
Gating)? Do you want to compress the snare drum to get more sustain? If
you want that effect you will need to bring up the snare on two
channels: one for the attack sound, and another where you first gate
the snare and then compress it with a fast attack and fast release
time. You might want to gate the snare and compress the overhead mics
(keyed by the snare) to remove snare leakage from the overheads without
making the hi-hat sound too ambient. You might also want to gate the
toms against cymbal leakage, especially if you used condenser
microphones on the toms. Gating the snare reverb send will also keep
the hi-hat from washing out the reverb. If the drum transients are
random and excessive, you can buss comp/limit the drums to control the
transient excursions and rein in the dynamics of the performance so the
drums hold a consistent level. Adding rhythmic delays to the snare
might make the groove more interesting.
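
For anyone who has never set up a keyed gate, here is a bare-bones
sketch of the idea (threshold and time constants are assumed values):
the gate on the ambience only opens when the key signal - the snare
mic - crosses a threshold, so the room follows the backbeat instead of
the hi-hat.

    # Minimal keyed (sidechained) gate sketch.
    import numpy as np

    def keyed_gate(signal, key, fs, threshold=0.2,
                   attack_ms=1.0, release_ms=80.0):
        att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
        gain = 0.0
        out = np.empty_like(signal)
        for i, k in enumerate(np.abs(key)):
            target = 1.0 if k > threshold else 0.0       # decision from the key
            coeff = att if target > gain else rel
            gain = coeff * gain + (1.0 - coeff) * target  # smooth open/close
            out[i] = signal[i] * gain
        return out

    fs = 48000
    t = np.arange(fs) / fs
    room_mics = np.random.randn(fs) * 0.1                # stand-ins; use real tracks
    snare_mic = (np.sin(2 * np.pi * 200 * t) *
                 (np.arange(fs) % (fs // 2) < 2000))     # fake hits twice a second
    gated_room = keyed_gate(room_mics, snare_mic, fs)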


Bass

Once you have finished the drums, you can add in the bass. For pop
music it is best to have the bass drum provide the percussive nature of
the bottom while the bass fills out the sustain and the musical parts.
With the bass you will want to find a balance between the amp and the
direct sound: the amp sound gives you an edgier quality, where the
direct sound gives you a fuller one. EQ the bass's low end between
80-120 Hz, since you will want to hear the bass on smaller monitors.
Remember to check the phase between the DI and the amp signal.
Compression is a good idea, with a ratio of 2:1 - 4:1, a medium attack
time and a medium-slow release. The medium attack time lets the
percussive nature of the bass be heard; the slow release time keeps the
low end sustaining. The release time should be long enough to avoid
half cycle distortion. If you need the bass to sound more musical, EQ
in the 400-800 Hz range; for an edgier sound, EQ between 2-3 kHz.
Remember to EQ before you compress.

With hip-hop music the bass tends to be a feel bass, with a lot of
information in the 30-60 Hz range. Minimizing information in the
musical range and the mid range strips out most of the actual musical
content and the attack of the bass. Synth bass is very popular because
you can create an even balance in the 30-60 Hz range and elongate the
duration of the note to create the illusion that you have more bottom
end. On some of the better hip-hop records they raise the low frequency
target area slightly, to the 70-100 Hz range, and elongate the duration
to create the illusion of a lot of bass information, so it still sounds
full on smaller monitors. Be careful not to over-EQ the bottom end so
that it will sound good in clubs or in cars with huge bass drivers;
those kinds of audio systems already hype the feel frequency range of
the bottom end. In compressing hip-hop bass, do not be afraid to use a
lot, even at higher ratios. The goal is to have the bass loud and as
even as possible.

With rock bass the idea is to create an aggressive, in-your-face bass
sound. For this you will focus mainly on the amp sound; trying to mix
the DI sound in with the amp sound can cause phasing problems in the
mid range that work against what you want from your bass sound. You
need a consistent bottom end and a lot of mid range. Boost anywhere
between 50-100 Hz for the bottom end, dip between 400-800 Hz (this
gives the guitars and vocal more room to speak musically) and boost
between 1.5-2.5 kHz for mid range. Be aware that if the bass player
uses a pick instead of fingers it can create uncontrollable transients
in the mid range. With compression you need to use a lot (4:1 - 8:1).
If the player is using a pick you might need to limit the transients
before you compress; the attack and release times will have to be fast
in the limiting (listen for half cycle distortion) and medium to slow
in the compression. Sometimes it's a good idea to put a multiband
compressor over the bass to target specific frequency areas. If you
also recorded the bass direct and you need a more aggressive sound for
your mix, try sending the direct signal out to an amplifier in the
studio that can be miked; this lets you modify your bass guitar sound
on the spot to suit your needs.




Piano

In a situation like Norah Jones, the piano will be second in priority
behind the lead vocal, and it will be spread fully across the stereo
image. To make the piano present you will need to EQ the mid range and
high end. When starting the mix you will already have ballparked the
lead vocal EQ and roughly EQed the piano in relation to it. So when you
add the piano to the bass and drums and it sounds dull, EQ the piano
slightly brighter and you will most likely be okay, because when you
started out you left yourself a certain amount of headroom in the mid
and high frequency range for the lead vocal. If you find that the piano
needs a lot of high frequencies, you have obviously over-EQed the bass
and drums in the mid range and high frequencies; if that has happened,
pull back the boosts in the mid range and high frequencies on the bass
and drums. The problem will most likely be with the overheads and the
snare. Remember, with the snare you are dealing with a lot of high
frequency information over a short duration. So instead of adding more
high end EQ over the snare's transient, try limiting the snare, which
elongates the high frequency content of the snare's duration and
creates the illusion that it is brighter. Here's another solution: if
the snare sounds the way you want in the high end and you do not want
to reduce its level, try compressing the snare with a medium attack
time. This shortens the duration of the snare without sacrificing the
rhythmic transient that is integral to the overall drum performance.
The result is that the performance does not feel sacrificed
rhythmically or musically in the mix, but the snare drum still sounds
bright.

Guitar

In a situation like Norah Jones, the guitar performance on the bed
track was tailored to support the piano and vocal in a musical and
rhythmic fashion. Simply bringing the guitar track up and balancing it
in the track should be easy, since the guitar player designed his
performance rhythmically and harmonically around the vocal and piano
phrasing. The only potential problems are if the guitar is not present
enough and/or loud enough throughout the performance. A solution is to
add presence in the 3-5 kHz area, keeping in mind that you do not want
a build-up in that frequency range between the guitar and the piano. If
you notice the guitar getting lost in places, try compressing in the
2:1 - 4:1 range with medium attack and release times. This lets the
rhythmic transients through unobstructed while raising the sustain and
resonance of the guitar. If the guitar is soloing in an expressive
manner you might need a bit of limiting first. Also add processing to
create depth perception for the guitar, remembering that the piano
should sit forward of the guitar. A quick solution is to add a stereo
delay with settings of 40 ms hard left and 60 ms hard right, plus a
short reverb. Remember to roll off some of the high frequency content
on your delay returns. This creates the illusion that the guitar sits
further back in the mix than the piano without creating noticeable
level discrepancies between the two. With a pop track where the guitar
is not the main focus but is there to add rhythm and harmony, EQ it
over a range that is not as wide as the main instrument's, avoiding the
very low and very high frequency ranges. Balance its level against the
piano so it sits comfortably. If you feel it needs to sound further
back in the mix and you do not want to lower its level, try an
assortment of these effects: short delays (15-100 ms), unnoticeable
rhythmic delays (eighth note or quarter note), chorusing and reverbs
with little pre-delay.
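
Here is a rough sketch of that 40/60 ms trick, with the tops rolled off
on the delay returns. The delay times come from the paragraph above;
the return level and the roll-off frequency are assumptions.

    # Stereo delay sketch: 40 ms return panned hard left, 60 ms hard
    # right, with a one-pole low-pass on each return so the echoes sit
    # behind the dry guitar.
    import numpy as np

    def delay_with_rolloff(x, fs, delay_ms, cutoff_hz=4000.0):
        d = int(fs * delay_ms / 1000.0)
        delayed = np.concatenate([np.zeros(d), x])[:len(x)]
        a = np.exp(-2.0 * np.pi * cutoff_hz / fs)   # one-pole LP coefficient
        out = np.empty_like(delayed)
        y = 0.0
        for i, s in enumerate(delayed):
            y = (1.0 - a) * s + a * y
            out[i] = y
        return out

    fs = 48000
    guitar = np.random.randn(2 * fs) * 0.1          # stand-in mono guitar track
    left  = guitar + 0.4 * delay_with_rolloff(guitar, fs, 40.0)   # 40 ms return, left
    right = guitar + 0.4 * delay_with_rolloff(guitar, fs, 60.0)   # 60 ms return, right
    stereo = np.stack([left, right], axis=1)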


Mixing the Bed Track

(Norah Jones) Once you have EQ'd the drums, bass and guitar and placed
them in their proper perspective, get a balance going between the
drums, bass, piano, guitar and lead vocal. Start factoring in
processing such as reverb, chorusing and delays to create depth
perception in your mix, allowing yourself a little more headroom for
further enhancement. Remember that mixing is a building process that
requires constant sonic evaluation along the way. It is important that
you incorporate mutes and level changes at this stage through
automation. Once you have finished this basic mix of all the bed track
components with the lead vocal, you should have a mix that can stand on
its own, because these are the basic elements of the song. If you have
not achieved a satisfactory product by then, keep working on it; do not
expect that adding in additional musical elements will make it better,
because it won't. All you will do is create a confusing and
unprofessional mix. A good idea is to refer back to the monitor mix you
did on the date you recorded, because in a lot of cases there are
things about the monitor mix that will sound better than where you are
now. You will quickly discover whether you have over-EQed or
over-processed any elements in a way that separates the sonic
components from the musical components of the song. Remember that you
might need to continually reference your lead vocal sound against other
outstanding albums, and then prioritize what is important alongside the
lead vocal. In a case like John Mayer it will be the guitar and the
vocal; in hip-hop music it will be the drums, bass and vocal. If you
maintain this philosophy, mixing will always feel creative rather than
redundant. One critical component of creative mixing is staying in a
creative headspace. Once you have your bed track balanced with your
vocal, automate it to sound like a final mix. This removes the
repetitive, redundant moves that the brain should not be focusing on;
it is hard to be creative when you are preoccupied with making level
changes that you know could be automated. The strategy from here until
the end of the mix is to keep the creative process alive.

Backup Vocals

Recording backup vocals is fairly easy if the vocalist understands
their objective and how to work with the lead vocal performance. In the
case of the lead vocalist adding a double track in unison, record with
the identical setup that was used for the lead vocal. When adding in
the double track, mix it at a level below the lead vocal and be
prepared not to make it as present as the lead. The goal here is to add
more musical body to the vocal performance; if both vocals have the
same presence it can confuse the listener as to which vocal is the
lead. When adding in the vocal double you will lose some presence on
the lead vocal, but you will gain a vocal performance that is more
forgiving in pitch.

If the lead vocalist is adding a harmony to their lead vocal melody, it
will usually be the 3rd and/or the 5th, and sometimes the 7th. Record
the vocalist with the same setup used for recording the lead vocal.
When adding in the harmony, it should always sit at a slightly lower
level than the lead vocal.

With two or more singers singing harmony to the lead vocal, they can
perform in two ways. One is for the backup singers to sing the same
harmony part at the same time; the other is for the singers to split
the harmonies amongst themselves at the same time. Double or even
triple tracking harmony parts is very popular and can best be heard
from groups like The Bee Gees and The Eagles. If the backup vocals are
singing counterpoint to the lead vocal, you will want them as present
as the lead vocal. When recording three or more tracks of backup vocals
it is best to submix the parts to a stereo bus and bring the stereo bus
up on two additional channels. This lets you put exactly the right
amount of processing on all the backup vocal parts rather than guessing
at sends and EQ levels on each individual track. Remember to clean up
your backup vocal tracks before mixing, since backup vocalists like to
sing a pitch reference before they sing their part.
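
The submix idea in code form, as a rough sketch: several backup vocal
tracks summed to one stereo bus so a single fader and one processing
chain treat them all alike. Gains and pans are placeholder values.

    # Backup-vocal submix sketch: pan and gain each track, sum to a
    # stereo bus, then process/ride the bus as one thing.
    import numpy as np

    fs = 48000
    n = 4 * fs
    bvs = [np.random.randn(n) * 0.05 for _ in range(4)]   # stand-in BV tracks
    gains = [0.8, 0.8, 0.7, 0.7]
    pans  = [-0.6, -0.2, 0.2, 0.6]                        # -1 = hard L, +1 = hard R

    bus = np.zeros((n, 2))
    for track, g, p in zip(bvs, gains, pans):
        bus[:, 0] += track * g * (1.0 - p) / 2.0          # simple linear pan law
        bus[:, 1] += track * g * (1.0 + p) / 2.0

    bus *= 0.9   # one fader / one processing chain for the whole BV stack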

Solos

When an instrumentalist is soloing they should have the same
perspective as the lead vocalist; in other words, while they are
performing their solo they should stand forefront in the mix. The only
exception is when you want the soloist to sound like they are soloing
within a band performance, which usually happens when their bed track
performance is replaced by soloing - you can hear this in punk and rock
music. Whether the soloist is a lead guitar, saxophone or another
instrument, make sure all parts of their performance can be heard. This
usually requires a bit of limiting, EQ and compression. For effects, I
will usually use delays, reverbs with pre-delays and other forms of
processing. If the soloist is performing in a call and answer style,
make sure they are slightly less present than the lead vocalist but
more present than the rest of the instruments.


Adding-in additional instruments

Before embarking on the next step, review the status of your mix and
make sure it sounds finished. If, for example, you have decided that
the vocal performance in the second verse needs to be louder than in
the first verse and you don't make that level adjustment now, how will
you know what levels to set for any additional instruments coming in at
the start of the second verse? Say you add congas at the second verse
without having made the lead vocal level change: you will most likely
mix the congas in at a level relative to the drums and the lead vocal
as they stand. When you later automate the mix and raise the vocal
level in the second verse, the congas end up lower than they should be,
and in most cases you won't even notice. By the end of the mix the
conga performance sits at a level where it is just taking up space
instead of lifting the rhythm at the second verse. Automate all moves
and mutes when ready; this makes it easier to place additional
instruments in the proper perspective.

When adding in strings, be careful not to put too much reverb on them.
This keeps their performance from creating harmonic confusion and keeps
them sounding articulate. If you need to recess the perspective of the
strings, use a short reverb or even a DDL. You will most likely need to
ride the level of the strings, especially the violas and cellos, due to
their harmonic placement in the lower registers. If you need to
compress, use a 2:1 to 3:1 ratio with slow attack and release times.

If you're adding in horn sections, watch for transients, especially
from trumpets. Because of the complex frequencies of horns it is best,
as with all additional instruments, to try to ride the levels before
using any dynamic processing. In the case of horns, where the
transients are very fast, you will often have to use fast limiting. If
adding reverb, use short reverbs (1-2 seconds) that are bright
sounding.


With percussion the idea is to make sure that the attack part of the
performance comes through cleanly and relatively evenly. On parts like
congas, percussionists will perform with a dynamic range that often
cannot be translated into a mix. If the performance is 16th-note in
nature and played on two or more congas, you will most likely have
level discrepancies between the drums: solo the congas and they will
sound fine, but in the mix you will not hear an even balance between
the two. To solve this, use compression with fast attack and fast
release times to even out the dynamics.

Woodwinds such as flutes, oboes and clarinets are very warm sounding by
nature. They often don't need any dynamic processing, and if they do it
is very subtle; when a flute plays in a high register you might need to
compress. Piccolos, on the other hand, should be burned at first sight.
For perspective, medium to long reverbs with pre-delay work quite well
at keeping woodwinds sounding warm and natural.


Finalizing the mix

When you have finished your mix, make copies and audition them on other
monitoring systems: a ghetto blaster, a car stereo, home speakers. If
you have the time, give your ears a rest. I like to leave the mix set
up overnight and come in the next morning with fresh ears to make final
adjustments, which I almost always end up doing. Do not belabor your
mix, which means no endeavors to seek perfection; believe me, you'll
most likely be the only one who notices. Early in my career I would
present a mix to the client for comments, which would often be "sounds
great", and then inform them I had only a couple of minor adjustments
to make. After spending four hours on the mix I would spend another
eight hours making my minor adjustments, present the updated mix, and
hear "we can't tell the difference". Perfection, I have learned, is the
ability to present something in its simplest form so that it can be
appreciated to its fullest extent. Listening to some of my favorite
recordings I have noticed mistakes, but who am I to remix Sgt.
Pepper's? I might mix Sgt. Pepper's perfectly, but I know for certain
it would sound nowhere near as good as the original mix.
Try to play your mix for normal people who buy CDs because they like
the music - which means avoid your techy friends, who might steer you
in a direction of technical merit that makes no musical sense. If you
are having problems with your mix, by all means reach out for advice to
your trusted peers for subjective and constructive feedback. This is
not the time to be a sensitive new age drama queen worrying about your
feelings getting hurt; this is the time to be honest and open minded
and to welcome suggestions that you're willing to put into action.

Rock mixing

With rock mixing the goal is to get your song sounding big and powerful
by using the full frequency range and limiting the dynamic range. To
achieve this you will need to dynamically process each element on its
own. Try using the limit-EQ-compress process, which will allow you to
basically just set levels and keep them there. With drums, subgroup
into two stereo pairs containing all the original and perspective
elements. On one stereo subgroup, limit the transients - and do not be
afraid to do a lot of limiting. You will need a very fast attack time
and a release time that lets the signal return to unity gain before the
onset of the next transient; this process should sound as transparent
as possible. On the other stereo subgroup use massive limiting with a
very fast attack time and a very fast release time, with the goal of
elongating the duration of the drum sound. The idea is to limit so you
can master as much level onto the CD as possible, and to create a
bigger drum sound by sustaining the drums without adding any more level
to the transients. When you add the sustain limiting to the transparent
limiting, you will notice that the overall peak level of the drums does
not get any higher, but the drum sound gets bigger.
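
Here is a rough sketch of that two-subgroup idea: the same drum bus
limited twice, once transparently and once crushed for sustain, then
summed. The thresholds, release times and blend are assumed values, not
anyone's preset.

    # Parallel-limiting sketch for a rock drum bus.
    import numpy as np

    def limit(x, fs, threshold=0.5, release_ms=50.0):
        rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
        env, out = 0.0, np.empty_like(x)
        for i, s in enumerate(np.abs(x)):
            env = s if s > env else rel * env + (1.0 - rel) * s  # fast attack
            gain = min(1.0, threshold / env) if env > 0 else 1.0
            out[i] = x[i] * gain
        return out

    fs = 48000
    drum_bus = np.random.randn(2 * fs) * 0.6            # stand-in drum subgroup

    transparent = limit(drum_bus, fs, threshold=0.8, release_ms=120.0)  # catches peaks
    sustain     = limit(drum_bus, fs, threshold=0.2, release_ms=15.0)   # heavy, fast
    bigger_drums = transparent + 0.5 * sustain
    # The peaks stay roughly where the transparent copy puts them; the
    # crushed copy fills in between the hits, so the kit sounds bigger
    # without the transients getting any taller.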

With rock guitars, the idea is to have them big and "in-your-face".
This is accomplished by first limiting the transients out of the
signal, especially if it was recorded to a hard drive (recording to
analog tape largely solves this through tape compression). Try limiting
at ratios of 10:1 or higher and use a lot of it, but make sure the
sustained parts of the signal return to unity gain. Next, EQ the guitar
in the 3-5 kHz range for presence and around 80-120 Hz for low end.
When EQing the low end, listen for out-of-control bass levels from the
guitar, caused by turning the bass control on the amp up to 11: certain
low frequencies jump out at loud levels while others remain unaffected.
If you notice this happening, roll off the low bottom end first, before
any other processing, so that you can manage the dynamics of the
guitar. If you do not do this before compression you will most likely
corrupt the harmonic content of the guitar performance. For example, if
the guitar player goes from a G6 chord (with an open E on top) to an
open E chord, the low end might jump because of the open low E string.
In the open E chord you are also playing the open B and open E strings;
in the G6 chord those same open B and E strings were harmonically
balanced against the low G. Because the low open E is much louder than
the low G, the compression will pull down the open E and open B along
with it. So even though the change between the low G and low E notes
stays fairly even, the open E and open B notes of the two chords end up
at very different levels. Rock guitars tend not to require a lot of
perspective processing; if any processing is desired it will be effects
like chorusing, phasing, etc. These days some of the distortion
processing found in effects boxes sounds pretty decent, and mixing that
type of processing with an amp sound on a performance can produce huge
guitar sounds. The processing brings the "in-your-face" component of
the sound, with the amp adding the resonance. A major problem with this
is phasing: the return of the processed signal and the amp sound are
not exactly in phase over the entire frequency spectrum. A solution is
to double track your guitars and pan the processed tracks hard left and
hard right, while the amp sounds are reversed in panning. That way the
processed sound of each performance is panned to one side and the amp
sound from that performance to the other, ultimately removing any phase
discrepancies. This is great if you are working at home recording
processed guitar sounds while taking a direct signal at the same time,
which lets you re-run your performance through various amps while
mixing. With leads you might need to limit the transients first. For
processing, subtle stereo chorus, rhythmic delays and reverbs (with
pre-delays) will enhance the sound significantly.

With guitars and bass, the limit-EQ-comp approach works quite
effectively even on guitar sounds that already sound very compressed
coming from amps like Marshalls. With solos you will tend to limit a
lot due to their transient nature; stand in front of a Fender Twin
Reverb while a guitar player solos on a Strat and you will hear what I
mean.

With lead vocals, the limit-EQ-comp processing works quite effectively
when used extensively, especially if the singer recorded with a dynamic
mic. When rock singers sing out, their throat tightens, and recorded
with a dynamic mic this can produce transients between 1.2-2 kHz. What
might help here is a multiband dynamics processor. This will allow you
to turn your mix up to a level that will rival a 747 without the vocal
tearing your head off.

When processing for perspective in rock music, reverbs should be short
if used at all. A common effect on the lead vocal is a rhythmic digital
delay that enhances the rhythm of the performance and adds depth to the
vocal so it can sit further back in the mix. With bass and drums,
subtle use of DDLs and short reverbs will help place them in the right
perspective. Be careful about over-EQing the mid range and the high
end, especially if there is a lack of 3rds played on the guitars; at
some point you will start to separate the sonic elements from the
musical elements. A good exercise is to compare a song by Billy Talent,
Tea Party or Green Day with a song from Led Zeppelin, Tool or Dredg.
When you're finished, remember to compare your mix with successful
mixes.


Hip-hop mixing

Hip-hop music basically consists of grooves, bass, vocals and little
harmonic content. The goal in hip-hop is to make the rhythm the focal
point, with a good working relationship between the groove and the
vocals. With the groove, a lot of the EQ is spread over the entire
frequency spectrum, from 30 Hz to 17 kHz. The bass and bass drum are
designed more to be felt than heard, with little presence on either.
The duration of the bass drum is quite long in comparison to other
genres, creating the illusion that the track has a lot of bottom end.
There is a lot of dynamic processing on the bass and the groove to keep
them at one consistent level throughout the song. When starting a
hip-hop mix, begin with the bass, drums and vocal; you should achieve a
balance between these elements that lets the mix stand on its own. Next
mix in the harmonic elements of the song - in the case of Destiny's
Child's new song "Lose My Breath", there is an orchestra pad that plays
only two chords and is used periodically throughout the mix. I believe
that if you add in a lot of harmonic information it will require the
vocalist to sing in tune; a lot of hip-hop these days is sung on one
note in a rhythmic pattern based on the bpm of the song. It seems
fortunate that anyone with a sense of rhythm but tone deaf can be a
hip-hop singer. On the vocalist there is no perspective processing, and
if any EQ is used it is in the mid range and high end. A lot of hip-hop
singers like to hand-hold dynamic mics while rapping, which slots the
sonic nature of their vocal into the mid range because of the frequency
response of a hand-held dynamic mic. In the mastering of hip-hop a lot
of dynamic processing and EQing is done. If you follow this basic
formula you will not be surprised to discover that you can mix hip-hop
as well as anybody out there.



#2 - Jonny Durango

Kevin,

Wow!! Thanks a ton! This is a great tutorial and I'm sure it will
provide some wonderful ideas for amateur and seasoned mix engineers
alike. Nonetheless, you might want to preface this article by pointing
out that there is no absolute formula for mixing. Sometimes the most
"ridiculous" techniques sound great and the rules of thumb sound like
junk, and it would be a shame if people didn't experiment and break the
"rules" like so many great producers/engineers of yore. I remember
specifically an EQ setting that I had saved for kick drum that was all
over the place with high Q boosts and cuts....but even though it looked
like a rollercoaster, it sounded great in some mixes and with some kick
drums.

Don't get me wrong, this is probably the most helpful and educational
post I've seen in RAP during the years I've been hanging around. I'm in
the process of mixing down a rock demo right now (ala "The Strokes" or
"Jet")...I'm going to print up this tutorial and try some of the
techniques. Thanks a billion!! Keep 'em coming!

Jonny Durango
#3 - Arny Krueger


"Matrixmusic" wrote in message
ups.com...

MIXING


Actually, it's better titled Mixing in real time with a
traditional console.

I see nothing about nonlinear mixing on a DAW.

If all goes well, I'm getting an 02R96 (digital) delivered
next week.

I'm guessing that if I optimize my use of the 02R96, I'm
going to change how I mix in real time.



#7 - Jonny Durango

Agent 86 wrote:


Would you call a person who programs a sequencer one note at a time a
piano player? Even if they find real pianos tiring and frustrating to
work with?


Oh c'mon, this is an unfair comparison. A piano player can PLAY a
PIANO... a sequencer is neither a piano, nor can it be played (I
suppose that part might be arguable).

Just because a mix engineer is using a digital hardware and software
based mixer instead of an analog console doesn't mean they aren't
mixing....and it certainly doesn't mean that the results will
automatically be worse.

Music is a real-time phenomenon. Technology lets us fudge a bit on that in
recording/programming, but the listening experience will always be
in real-time.


Once automation is programmed, can it not be played back in real time?
That's beside the point; the listener doesn't give a damn if you
automated the reverb aux returns or if you paid Tom Dowd to come in and
twist the little knobbies in "real time"... I'm so tired of people who
can't let go of old **** and try to marginalize new technology because
they feel threatened by it. I'm sorry, not saying this is the case with
you, I don't mean to be an asshole... I'm just saying that a lot of
people who've worked very hard to be proficient mix engineers would
take offense at being compared to a "beat maker" just because they use
a DAW.

Jonny Durango
#8 - Arny Krueger


"Agent 86" wrote in message
news
On Thu, 19 May 2005 12:16:32 -0400, Arny Krueger wrote:


"Mike Rivers" wrote in message
news:znr1116511404k@trad...



Not to say that you can't get artistic and dynamic with

a DAW, but to
most users, it's more like a convenient funnel to
shove all those tracks through and get a 2-track stream

out.

Begs the question about people who don't have a

real-time mixing console.

Begs the question about people who find real-time mixing

consoles tiring
and frustrating to work with because after all, they

have to operate in
real-time.


Would you call a person who programs a sequencer one note

at a time a
piano player?


Would you say that someone who can play the piano isn't a
piano player if they spent too much time with a sequencer?

Even if they find real pianos tiring and frustrating to

work with?

Doesn't that come with the territory when you are a piano
player?

Music is a real-time phenomenon.


Then do you want attack the credentials of every engineer
who stopped his tape machine to splice the tape? It's
obviously *not* real time.

Technology lets us fudge a bit on that in
recording/programming, but the listening experience will

always be
in real-time.


Which applies to nonlinear editing exactly how?



#10 - Mike Caffrey


There are people who get frustrated working on a real console, but
don't in a DAW?

I've yet to meet anyone who preferred a DAW over a console for mixing,
except for a jingle guy who prefers a console but chooses a DAW because
of the volume of revisions he's asked to make.



#11 - Mike Rivers


In article writes:

Just because a mix engineer is using a digital hardware and software
based mixer instead of an analog console doesn't mean they aren't
mixing....and it certainly doesn't mean that the results will
automatically be worse.


I sort of said that in my initial response to the question. The
problem is that cheap computer-based mixing has brought the potential
to people who use it without having the experience or good taste to
use it in a musical manner. Some eventually learn how, some never do.
It's the same with inexpensive hardware mixing consoles (and the
people who eventually learn how to mix and graduate to better
sounding, perhaps even computer-based mixers).

Once automation is programmed, can it not be played back in real time?


Sure. But a "musical" mixer will program the automation in real time,
and then perhaps tweak it a bit in unreal time. You move the fader to
make the mix sound right and the automation system remembers what you
did and can reproduce it. The person who looks at waveforms and says
"this is a bit loud, I think it needs to go down 5 dB" and then draws
a volume envelope to do that doesn't get to hear what he's done until
he plays the track. Yeah, I know, composers don't get to hear what
they've done either until they pass out the music to the orchestra
members (unless they have home studios <g>), but composition and mixing
are really different things.

That's beside the point; the listener doesn't give a damn if you
automated the reverb aux returns or if you paid Tom Dowd to come in and
twist the little knobbies in "real time"... I'm so tired of people who
can't let go of old **** and try to marginalize new technology because
they feel threatened by it.


I work at my own pace. If I have to slow down too much to learn new
technology, I'll never get caught up. I have no problem with someone
learning the new technology from the start, or from learning it a bit
at a time while they're still working productively. But you have to
learn more than the application technology in order to mix. And
personally, I think it's easier to learn mixing as essentially a real
time process.

...I'm just saying that a lot of
people who've worked very hard to be proficient mix engineers would take
offense at being compared to a "beat maker" just because they use a DAW.


Some of the best old time engineers use DAWs today. But they learned
their craft the old way, on the old gear. The craft is what's
important. As long as the technology doesn't stand in the way, there's
nothing wrong with it. I need to be convinced that "the new way" is
really better for me. So far I haven't seen advantages for what I do.
I can see many advantages for those who do other things, however.


--
I'm really Mike Rivers )
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me he double-m-eleven-double-zero at yahoo
#13 - Arny Krueger


"Mike Rivers" wrote in message
news:znr1116543473k@trad...


Sure. But a "musical" mixer will program the automation in

real time,
and then perhaps tweak it a bit in unreal time. You move

the fader to
make the mix sound right and the automation system

remembers what you
did and can reproduce it.


I see nothing sacred there.

The person who looks at waveforms and says
"this is a bit loud, I think it needs to go down 5 dB" and

then draws
a volume envelope do to that doesn't get to hear what he's

done until
he plays the track.


Not so with Audition/CE. If you move an envelope line in
real time, the sound from that track changes in real time.
Given all the different flavors of envelope that can control
a given track at the same time...


#14 - reddred


"Mike Rivers" wrote in message
news:znr1116543473k@trad...

In article

writes:

Just because a mix engineer is using a digital hardware and software
based mixer instead of an analog console doesn't mean they aren't
mixing....and it certainly doesn't mean that the results will
automatically be worse.


I sort of said that in my initial response to the question. The
problem is that cheap computer-based mixing has brought the potential
to people who use it without having the experience or good taste to
use it in a musical manner. Some eventually learn how, some never do.
It's the same with inexpensive hardware mixing consoles (and the
people who eventually learn how to mix and graduate to better
sounding, perhaps even computer-based mixers).

Once automation is programmed, can it not be played back in real time?


Sure. But a "musical" mixer will program the automation in real time,
and then perhaps tweak it a bit in unreal time. You move the fader to
make the mix sound right and the automation system remembers what you
did and can reproduce it. The person who looks at waveforms and says
"this is a bit loud, I think it needs to go down 5 dB" and then draws
a volume envelope to do that doesn't get to hear what he's done until
he plays the track. Yeah, I know, composers don't get to hear what
they've done either until they pass out the music to the orchestra
members (unless they have home studios <g>), but composition and mixing
are really different things.


This is a step in the right direction, anyway:

http://www.tascam.com/Products/US-2400.html

All it really needs is another 120 or so knobs so I can eyeball the EQ, aux
or compressor settings for 24 channels at a time.

jb







#15 - Mike Rivers


In article writes:

This is a step in the right direction, anyway:
tascam US-2400

All it really needs is another 120 or so knobs so I can eyeball the EQ, aux
or compressor settings for 24 channels at a time.


Funny you should say that. Just a day or so ago, the product manager
for the Mackie dxb digital console responded to a poster on their
forum who had suggested that the TASCAM X-48 with their US-2400 was a
good alternative to the dxb for 1/3 the price, with a 48 track
recorder thrown in for free. Dan extolled the virtues of the console's
touch screen and (a few) more controls per channel than the US-2400.

While he didn't come right out and say it (and it's something I've
said in at least one of my articles), a recording console is not only
a set of controls, but also a set of indicators. What Dan neglected to
take into account with his (admittedly Mackie marketing-centric)
response is that the X-48 with a monitor attached has plenty of
indicators.

Still, my soul tells me that I'd really rather work at a console
that's fully integrated rather than have to do the integration myself.
And I'm still waiting for Mackie to send me my dxb for evaluation so
that I can prove it to myself and convince the rest of the world that
I'm right - at least for me.


--
I'm really Mike Rivers )
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me he double-m-eleven-double-zero at yahoo


#16 - Lorin David Schultz

"Mike Rivers" wrote:

I need to be convinced that "the new way" is really better for me.
So far I haven't seen advantages for what I do.




Fine. You didn't just say that you prefer a different method though.
You said working that way produces unmusical results. That may be true
for you, but that doesn't automatically extend to everyone else.

I find it much harder to get "musical" results with a guitar than a
piano. That doesn't give me license to denigrate guitar players for
choosing a newer approach to making music than the traditional way to
which I have become accustomed.

I happen to think that the DAW has opened up ways for me to do things
that actually improve the musical nature of the material. Don't confuse
the capabilities (and limitations) of the tool with the taste, skill and
decisions of the operator.

--
"It CAN'T be too loud... some of the red lights aren't even on yet!"
- Lorin David Schultz
in the control room
making even bad news sound good

(Remove spamblock to reply)


#17 - Agent 86

On Mon, 23 May 2005 16:23:34 +0000, Lorin David Schultz wrote:

I happen to think that the DAW has opened up ways for me to do things that
actually improve the musical nature of the material.


No argument there. But I have to agree with Mike that mixing's not one of
them. At least not when I'm the one doing the mixing.

#18 - reddred


"Mike Rivers" wrote in message
news:znr1116588550k@trad...

In article

writes:

This is a step in the right direction, anyway:
tascam US-2400

All it really needs is another 120 or so knobs so I can eyeball the EQ,
aux or compressor settings for 24 channels at a time.


Funny you should say that. Just a day or so ago, the product manager
for the Mackie dxb digital console responded to a poster on their
forum who had suggested that the TASCAM X-48 with their US-2400 was a
good alternative to the dxb for 1/3 the price, with a 48 track
recorder thrown in for free. Dan extolled the virtues of the console's
touch screen and (a few) more controls per channel than the US-2400.

While he didn't come right out and say it (and it's something I've
said in at least one of my articles), a recording console is not only
a set of controls, but also a set of indicators. What Dan neglected to
take into account with his (admittedly Mackie marketing-centric)
response is that the X-48 with a monitor attached has plenty of
indicators.

Still, my soul tells me that I'd really rather work at a console
that's fully integrated rather than have to do the integration myself.
And I'm still waiting for Mackie to send me my dxb for evaluation so
that I can prove it to myself and convince the rest of the world that
I'm right - at least for me.


The question is, when did music start being about staring at a screen?
It's sure to have an effect on the final product; I think we're seeing
that now. A console or a console-like control surface is pretty
unobtrusive, despite its size, and allows you to focus on whatever you
want - it doesn't hang in your face, shining brightly. There is also
real tactile feedback, and no screen is big enough to show what a
console does.

I wonder what the price point would have to be for an MMC control
surface with pretty much the same layout as a Mackie 8-Bus, or slightly
smaller - perhaps letting you switch between EQ view, aux, or effects,
but allowing control of 24 channels at a time and replacing the buss
faders with a jog wheel. That, paired with a capable DAW, would be the
best of both worlds IMO; I could put the PC monitor way off to the side
like I used to, and still have the dozens of tracks and effects the DAW
allows. Oh well. I keep wanting things that nobody makes.

jb


#19 - Mike Rivers


In article writes:

The question is, when did music start being about staring at a screen?


One could also ask when did music start being about recording at all?
But it's just another aspect of it, and if staring at a screen is the
way that one chooses to record, then so be it. There are some forms of
rhythmic and sometimes melodic sound and poetic verse that are called
"music" today that didn't exist in Beethoven's time, and many of those
forms of music came to be because of the technology that supports them.

I don't record that sort of music because I don't like it enough to
listen to it as much as I'd have to do in order to record it. That
doesn't mean it isn't music (to some) and that it can't be profitable
(one reason for its existence), but since I don't participate, I don't
have a good reason to apply the technology that's best applied to that
sort of music. When I try to apply it to the kind of music that I work
with, I find it to be cumbersome and time consuming, and just not very
enjoyable. I'd rather not switch than fight.

A console or a console-like control surface is pretty unobtrusive, despite
its size, and allows you to focus on whatever you want, it doesn't hang in
your face, shining brightly. There is also real tactile feedback, and no
screen is big enough to show what a console does.

I wonder what the price point would have to be for an MMC control surface
with pretty much the same layout as a Mackie 8-Bus, or slightly smaller -
perhaps letting you switch between EQ view, aux, or effects, but allowing
control of 24 channels at a time and replacing the buss faders with a jog
wheel.


The SSL AWS900 is about $90K. I think the Digidesign ICON runs about
2/3 that, with the ProTools hardware. In fact, a Mackie dxb-200 with
minimal I/O should be under $10K (some people are essentially using
their dxb as a control surface). All less expensive than an API, but
more expensive than a Soundcraft Ghost. But if you're dreaming of $2K,
for that you get a fader and a knob per channel, and a handful of
buttons.


--
I'm really Mike Rivers )
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me he double-m-eleven-double-zero at yahoo
#20 - reddred


"Mike Rivers" wrote in message
news:znr1116878822k@trad...

In article

writes:

The question is, when did music start being about staring at a screen?


One could also ask when did music start being about recording at all?
But it's just another aspect of it, and if staring at a screen is the
way that one chooses to record, then so be it.


I remember being kind of excited to use a more visual approach, but after a
solid seven years of pursuing that, I really want to go back to using my
ears more than my eyes.

There are some forms of
rhythmic and sometimes melodic sound and poetic verse that are called
"music" today that didn't exist in Beethoven's time, and many of those
forms of music came to be because of the technology that supports them.


I think cut and paste music is fine; I've spent a lot of time learning how
to piece things together. But at some point, after the editing is all done,
I want to do something that's more like playing an instrument, more physical
and visceral. I've never seen a good pianist that has to stare at the keys
the whole time he's playing, but screens are so demanding of attention, and
they now have to be there in the middle of everything.

I don't record that sort of music because I don't like it enough to
listen to it as much as I'd have to do in order to record it.


I think there is good and bad like anything. The big danger, I think, is
just making lifeless or ill-thought-out music, because if little snippets
are made up, recorded and then assembled, nothing gets internalized and
spat back out the way it does when you compose or learn a song. That
doesn't necessarily happen, it's just a danger, and there have been some
really high profile hit records that were awful because of it.

I wonder what the price point would have to be for an MMC control surface
with pretty much the same layout as a Mackie 8-Bus, or slightly smaller -
perhaps letting you switch between EQ view, aux, or effects, but allowing
control of 24 channels at a time and replacing the buss faders with a jog
wheel.


The SSL AWS900 is about $90K.


That's a house hereabouts.

But if you're dreaming of $2K,
for that you get a fader and a knob per channel, and a handful of
buttons.


A man can dream, can't he? I still say if Tascam put another four rows of
knobs on their surface and some LED meters, it would be a great and
relatively affordable product, but I think in order to sell as many as they
want to, and have the product make sense in a lineup with all their other
current products, they went for the lower price and lower capability.

jb




#22 - Blind Johnny



Mike Caffrey wrote:
There are people who get frustrated working on a real console, but
don't in a DAW?

I've yet to meet anyone who preferred a DAW over a console for mixing,
except for a jingle guy who prefers a console but chooses a DAW because
of the volume of revisions he's asked to make.


I run a small commercial studio with about two dozen active projects at
any given time... not to mention those coming back after a year or
two's absence and wanting to pick up where we left off. A console mix
is not going to let me give artists scratch mixes that we can go back
to and update next session - not without a ton of extra time resetting
everything. I also find I can learn more with instant total recall, as
I can make small changes quickly - without having to reset everything -
and learn which technique works better for me.
YMMV
