Twiggy, can you tell us more about what you did and learnt in 15 months of audio production? I'm not a musician or a recording engineer, but I find the process interesting. To me mastering is some weird process about moving faders and I can't tell what's so important about it.
You're thinking of mixing, not mastering. Crypto's description of mastering is correct.
If I had to summarize the recording process, you have tracking, mixing, then mastering.
I did all of the tracking and mixing for this album. I tried my hand at mastering it as well but didn't care for the results so I sent it off to get mastered by Carl Saff out of Chicago - he did a great job.
Tracking is about capturing the instruments and the performances - you want accurate timing, a good sound, etc. I learned a fair amount here, particularly with guitars - precisely where you mic up an amp changes the sound character drastically.
Most of my time was spent on mixing. When I started this project, I had been recording for about 6-7 years but had only messed around with mixing a little - doing essentially what you described - playing with faders - which, if the tracking is done well, can end up sounding pretty good.
However, I've since learned that you can take it to the next level by actually mixing and not just adjusting levels. I'd say the cornerstones of what mixing really is about boil down to:
1. Volume levels - setting the overall volume of one track. "Playing with faders," as it were.
2. Equalization - Adjusting the frequency spectrum of a track - boosting or cutting both for tonality and clarity.
3. Compression - Controlling the changes in the volume envelope of a track - "squishing" it so that it has less variance.
4. Panning left to right - controlling which speaker the track comes out of.
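Those four operations are simple enough to sketch in code. Here's a rough illustration in Python/NumPy - a toy sketch of the ideas, not how a real DAW or plugin implements them (the one-pole filter and the instantaneous compressor are deliberately minimal, and real compressors use attack/release envelopes):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def volume(x, gain_db):
    """1. Volume: scale the whole track by a gain in decibels."""
    return x * 10 ** (gain_db / 20)

def lowpass(x, cutoff_hz):
    """2. EQ (crude): a one-pole low-pass that rolls off highs above cutoff."""
    a = 1 - np.exp(-2 * np.pi * cutoff_hz / SR)
    y = np.zeros_like(x)
    acc = 0.0
    for n in range(len(x)):
        acc += a * (x[n] - acc)
        y[n] = acc
    return y

def compress(x, threshold_db=-12, ratio=4.0):
    """3. Compression: above the threshold, reduce level by `ratio` -
    the "squishing" that flattens out the volume envelope."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1 - 1 / ratio)
    return x * 10 ** (gain_db / 20)

def pan(x, pos):
    """4. Panning: pos=-1 is hard left, +1 is hard right (equal-power law)."""
    theta = (pos + 1) * np.pi / 4
    return np.stack([x * np.cos(theta), x * np.sin(theta)], axis=0)

# A 440 Hz test tone standing in for a recorded track.
t = np.arange(SR) / SR
track = np.sin(2 * np.pi * 440 * t)

# Chain all four: pull it down 6 dB, roll off the top, squish it, pan hard left.
stereo = pan(compress(lowpass(volume(track, -6), 5000)), pos=-1.0)
```

The order here (EQ into compression into panning) is one common choice, not a rule - chains vary per track.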
If you have a number of different tracks recorded and you want to hear them all, you need to make sure that they don't share the same "space". I think an example would probably be easier than trying to actually dive into the science of this without me drawing a bunch of crap.
In the case of track 1 - "Trapped By You," we have 2 rhythm guitar parts, 1 bass guitar part, lead vocals, background vocals, drums, and the lead guitar.
Guitars all tend to occupy the same frequency space, particularly when playing on the same part of the neck. To ensure you can hear both rhythm guitar parts, I panned them hard left and right - so if you pan the track left, you hear guitar A, right, guitar B.
The bass is a bit easier - a bass is an octave lower than a guitar, so it tends to occupy a lower frequency range than the guitars do. To help make this clearer, I used equalization - I boosted the lower frequency range (60-300 Hz ish) on the bass, and cut the mids and highs a bit (everything above about 1000 Hz). I also boosted the upper mid range on the guitars (2000-4000 Hz ish) to bring out the overdriven sound of the guitars a bit more and keep them away from the bass.
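That frequency-carving idea can be shown in a few lines. This toy sketch applies the band boosts/cuts from above in the frequency domain - real EQs use filters rather than FFTs, and the two sine tones are just stand-ins for bass and guitar content, but the complementary carving is the same idea:

```python
import numpy as np

SR = 44100

def fft_eq(x, bands):
    """Toy frequency-domain EQ: apply a dB gain to each (lo_hz, hi_hz, gain_db)
    band. Illustrative only - real EQs are filters, not FFT bin scaling."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / SR)
    for lo, hi, gain_db in bands:
        mask = (freqs >= lo) & (freqs < hi)
        X[mask] *= 10 ** (gain_db / 20)
    return np.fft.irfft(X, n=len(x))

t = np.arange(SR) / SR
bass = np.sin(2 * np.pi * 110 * t)     # stand-in for a bass note (A2)
guitar = np.sin(2 * np.pi * 2500 * t)  # stand-in for guitar upper-mid content

# Carve complementary spaces using the ranges from the text:
# lows up / everything above 1 kHz down on the bass, upper mids up on the guitar.
bass_eq = fft_eq(bass, [(60, 300, +4), (1000, SR / 2, -4)])
guitar_eq = fft_eq(guitar, [(2000, 4000, +3)])
```

The exact dB amounts here are made up for the example - in practice you boost and cut by ear until the two parts stop fighting.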
Drums are an interesting beast - probably what I spent most of my time on in the end. The drums are where most of the compression happens - I gated and compressed the attacks of the snare drum and bass drum so that you get a loud attack and then the volume falls off quickly, so you aren't missing out on what the rest of the band is doing. The cymbals / overheads are panned apart to spread the sound out a bit and also help clarity in the center channel. The overheads are also boosted in the highs and upper highs (anything above 7000 Hz ish) to get more clarity from the cymbals.
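A gate is easy to sketch: track the signal's envelope, pass audio while it's above a threshold, mute it below. Here's a minimal version in Python/NumPy (a crude peak follower with instant attack and exponential release - real gates add hysteresis and smooth fades, and the "snare" here is just a shaped noise burst):

```python
import numpy as np

SR = 44100

def envelope(x, release_ms=50):
    """Crude peak envelope follower: instant attack, exponential release."""
    coeff = np.exp(-1 / (SR * release_ms / 1000))
    env = np.zeros_like(x)
    e = 0.0
    for n, v in enumerate(np.abs(x)):
        e = v if v > e else e * coeff
        env[n] = e
    return env

def gate(x, threshold=0.1, release_ms=50):
    """Noise gate: pass audio while the envelope exceeds the threshold,
    mute it below - on a snare this keeps the hit and chops the ring/bleed."""
    return np.where(envelope(x, release_ms) > threshold, x, 0.0)

# Fake snare hit: a white-noise burst with a sharp attack and a long quiet tail.
rng = np.random.default_rng(0)
t = np.arange(SR // 2) / SR
hit = rng.uniform(-1, 1, len(t)) * np.exp(-t / 0.05)
gated = gate(hit, threshold=0.1)
```

The loud attack passes through untouched while the decaying tail gets cut to silence - which is exactly the "loud attack, then out of the way" behavior described above.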
The bass drum vs. the bass guitar is also very tough - in my case, I opted for a punchier bass drum sound than a boomier one - so I recorded both sides of the drum and ended up emphasizing the beater side a bit more, and boosted a frequency range a bit higher than the bass guitar's boost to try and keep things separate (again, equalization).
Vocals and the lead guitar - they need to be the focus. They're both just physically set louder than everything else, but also carefully placed within the frequency spectrum so they don't clash too hard with anything else out there - vocals getting a boost in between the guitars and bass; lead guitar parts generally being an octave higher than the rhythm guitars and generally also getting a slightly higher frequency EQ boost. Lead vocals were also compressed to ensure their volume doesn't vary too much, so they're always up front and center (not always desirable in more dynamic pieces, but you mix to the piece).
Background vocals were also hard panned to leave room in the "center channel" of both speakers, and also "pushed into the background" by using a little bit of reverb.
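The pan-plus-reverb trick can be sketched too. This uses a single feedback delay (a comb filter) as a stand-in for reverb - real reverbs chain many combs and allpasses (Schroeder-style) or use convolution, but even one delay smears a track enough to push it back - plus an equal-power pan:

```python
import numpy as np

SR = 44100

def pan(x, pos):
    """Equal-power pan: pos=-1 hard left, 0 center, +1 hard right."""
    theta = (pos + 1) * np.pi / 4
    return np.stack([x * np.cos(theta), x * np.sin(theta)], axis=0)

def toy_reverb(x, delay_ms=80, feedback=0.4, wet=0.3):
    """One feedback delay standing in for reverb: each echo feeds back into
    the line, and the wet/dry mix controls how 'far back' the track sits."""
    d = int(SR * delay_ms / 1000)
    y = np.copy(x)
    for n in range(d, len(x)):
        y[n] += feedback * y[n - d]
    return (1 - wet) * x + wet * y

t = np.arange(SR) / SR
bg_vocal = np.sin(2 * np.pi * 330 * t)          # stand-in for a background vocal
left_bg = pan(toy_reverb(bg_vocal), pos=-0.9)   # reverb, then almost hard left
right_bg = pan(toy_reverb(bg_vocal), pos=+0.9)  # mirrored on the right
```

With the background parts pushed wide and wet like this, the dry center stays clear for the lead vocal.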
On top of all of this - you need to be aware of how things change throughout a song as well, and make adjustments mid track sometimes. Trapped By You's guitar solo track is actually boosted a little bit louder at the start because it starts low on the guitar, and gets a bit quieter when the solo ends up at the top of the neck. As well, the descending lead guitar part right before the end "off the rails" solo (during the big OH YEAH's, if you will) is panned a bit to one side so it comes in more clearly without mucking everything else up - I had a hard time with it being in the center of everything without it either being mind numbingly loud or washing everything else out. The rest of the end solo is just in the center channel.
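Those mid-track level changes are what DAWs call automation: a gain envelope drawn over time and multiplied against the track. A minimal sketch - the breakpoint times and dB values here are invented for illustration, not the actual moves on the album:

```python
import numpy as np

SR = 44100

def automate(x, points):
    """Volume automation: `points` is [(time_s, gain_db), ...] breakpoints;
    gain is linearly interpolated between them and applied to the track."""
    times = np.array([p[0] for p in points])
    gains_db = np.array([p[1] for p in points])
    t = np.arange(len(x)) / SR
    gain_db = np.interp(t, times, gains_db)  # holds edge values outside range
    return x * 10 ** (gain_db / 20)

# Hypothetical solo track: +2 dB while the part sits low on the neck,
# easing back to 0 dB as it climbs (the kind of move described above).
solo = np.ones(SR * 4)  # 4 s of a unit-level placeholder signal
automated = automate(solo, [(0.0, +2.0), (2.0, +2.0), (3.0, 0.0)])
```

The same envelope idea covers pan automation too - interpolate a pan position instead of a gain.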
Hopefully that makes at least a little bit of sense - those are the very basic principles. Take into account that I didn't really know any of this beforehand: I had to (a) learn it all, mostly via trial and error, and (b) apply it to 9 tracks spanning 54 minutes of music, some with lots of instruments (there are 5 different guitar parts across 10 audio tracks on top of everything else at one spot in "Back to the Wall," etc.) - and I did the whole thing as a side project while working full time and occasionally working on maps for you guys, so the 15-month timeline makes sense.
If you have any more questions let me know.