How I multitrack is pretty much how I've always done it, even back in the 2" tape days. I'll have separate instruments on each track that comprise the basic parts of a piece, recorded in one pass via an Orion32 (if external to the DAW) or direct in Live (if I use internal sources), or both as needed. Once all of that's down, I'll start doing overdubs while working out what processing to layer onto the initial tracks. This is where we diverge from tape technique, though. In beginning the mixdown process, I try to combine related tracks as submixed stereo stems...say, all percussives on one group, bass and pads on another, "ear candy" bits on a third, and so on. Submixing and then subprocessing these stems gives you quite a bit of control over the main mix with a minimum of faders in play and a minimum of CPU load: once a stem is tracked, you can turn off all of the processors you used on the individual channels that fed it, the submix is now under the control of a single stereo fader pair, and, when needed, you only need to add processing across the two tracks of the stem.
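The gain math behind printing a stem is simple enough to sketch. Here's a minimal Python/numpy illustration (the track arrays, gain values, and the `submix` helper are all hypothetical, just to show the idea of baking per-channel gains into one stereo stem controlled by a single fader):

```python
import numpy as np

def db_to_gain(db):
    """Convert a dB value to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def submix(tracks, channel_gains_db, stem_fader_db=0.0):
    """Sum individual (n_samples, 2) stereo tracks into one stereo stem.

    Per-channel gains get baked in when the stem is printed, so the
    live mix only needs the single stem fader afterwards.
    """
    stem = np.zeros_like(tracks[0])
    for track, gain_db in zip(tracks, channel_gains_db):
        stem += track * db_to_gain(gain_db)
    return stem * db_to_gain(stem_fader_db)

# Hypothetical percussion group: three stereo tracks, one second at 48 kHz.
rng = np.random.default_rng(0)
tracks = [rng.standard_normal((48000, 2)) * 0.1 for _ in range(3)]
perc_stem = submix(tracks, channel_gains_db=[-3.0, -6.0, 0.0])
```

Once `perc_stem` is printed, the three source channels (and whatever processing sat on them) can be switched off; only the one stem fader stays in play.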

Stems are sort of a given these days with a lot of producers and engineers, but it wasn't that long ago that they were a rare thing, doable only if you had the massive budget needed for extra multitrack machines, tape, etc. With all of that tech out of the way, though, you can generate stems whenever you like, at whatever complexity your DAW of choice can handle. Then, once your stems and solo tracks are ready to go, the mix gets easier...you're not juggling a couple dozen faders all at once. Also, DO automate things such as levels within your stems so that the 2-channel result is exactly the way you need it.

Mind you, this tends to take buttloads of practice to get used to envisioning how your mix should work prior to even mixing it. Ability comes with time and diligence.

Now, as for normalization...that process raises your entire track by whatever amount brings the loudest peak up to your normalization target. So, if it takes +8.5 dB of change for the loudest peak to reach that target (and never, EVER normalize to 0 dB...always leave "excursion room" of 0.5 to 1 dB below 0 dB in case something gets raised by dithering, codec artifacting, etc), everything in the track gets raised by 8.5 dB. This doesn't equal apparent loudness, though; it just means that your peak level is where it goes and everything else went along with it.
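That single-gain behavior is easy to see in code. A minimal sketch (the `normalize` helper and sample values are hypothetical, not any DAW's actual implementation):

```python
import numpy as np

def normalize(samples, target_peak_db=-1.0):
    """Raise (or lower) a whole track so its loudest peak sits at
    target_peak_db below full scale -- never 0 dBFS, to leave some
    excursion room for dither/codec overshoot."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples
    target_peak = 10.0 ** (target_peak_db / 20.0)
    gain = target_peak / peak   # one fixed gain applied to every sample
    return samples * gain
```

Because every sample gets the same gain, the ratios between loud and quiet passages are untouched: a track that peaked at -9.5 dBFS gets +8.5 dB everywhere, and its dynamics (and apparent loudness) stay exactly as they were.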

To increase apparent loudness, you need to use some form of compression. So, let's say you're cutting a track and your peak levels are hitting -1 dB, but your overall level outside of those peaks is about -12 dB. That's a pretty wide dynamic swing between peak and average, so what you'll want to do is compress that 11 dB swing down to, say, 4.5 dB. Once properly compressed (with makeup gain applied), your peaks should be back at that -1 dB level, but your average will now be raised by 6.5 dB, ergo the apparent loudness of the track is higher. Ultimately, you could even "brickwall limit" a mix so that everything is squashed into 1-2 dB of swing (or less, if you're some kind of sadist), but when you lose your dynamic swing, the track will just sound like a loud band of sound with no variance.
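The arithmetic in that example can be worked through directly. A short sketch (the `loudness_gain` helper is hypothetical; it just does the dB bookkeeping, not actual compression):

```python
def loudness_gain(peak_db, avg_db, target_swing_db):
    """Compress the peak-to-average swing down to target_swing_db,
    then apply makeup gain so peaks land back where they started.
    Returns the new average level after makeup gain."""
    swing = peak_db - avg_db              # e.g. -1 - (-12) = 11 dB
    avg_lift = swing - target_swing_db    # how far the average rises: 6.5 dB
    return avg_db + avg_lift

new_avg = loudness_gain(peak_db=-1.0, avg_db=-12.0, target_swing_db=4.5)
# Peaks stay at -1 dB, the average moves from -12 dB up to -5.5 dB:
# 6.5 dB more apparent loudness without touching the peak level.
```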

Mind you, all of this is for nothing if you don't have adequate monitoring while tracking and mixing. Especially the latter. Trying to get a good result on a pair of computer crackerboxes is akin to trying to read an important document without reading glasses when you're blind as a bat at close distances. You simply won't have any real idea of what you're doing, beyond certain inferences about how the end result will land on everyone else's listening platforms.

And one other point along these lines: once you have your rough mix set up, you'll want to put your last set of processors on the DAW's mixbus, with the program compression last. That way, any changes to the signal levels caused by equalization, enhancers, stereo imagers, etc will still get dealt with properly just prior to being tracked or rendered.
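The ordering itself is the whole point here, and it can be shown in a few lines. A toy sketch (the `mixbus` helper and the example processors are hypothetical stand-ins, not real plugins):

```python
def mixbus(signal, processors, program_compressor):
    """Run the mixbus chain in order (EQ, enhancers, imagers, ...),
    then apply program compression LAST so that any level changes the
    earlier processors introduced get caught just before render."""
    for proc in processors:
        signal = proc(signal)
    return program_compressor(signal)

# Hypothetical chain: a broad EQ boost, then a brickwall-style clamp.
eq_boost = lambda s: [x * 1.2 for x in s]
clamp = lambda s: [max(min(x, 0.9), -0.9) for x in s]
out = mixbus([1.0, -1.0], [eq_boost], program_compressor=clamp)
```

If the compressor sat before the EQ instead, the boost would push levels past it after the fact, and nothing downstream would rein them back in.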

Ideally, I prefer to break out of digital for the initial mixing of stems and then for controlling stem levels; I simply like having the faders in hand for tiny adjustments. And doing this sort of mixing in analog on a quiet system with 24-bit audio (even at slower sample rates) still puts any noise and garbage signals way down in the mix, where they'll disappear into the Least Significant Bit ranges when the track is taken down to 16 bits for CD and other distribution methods. That is, if I want that; sometimes certain noises and noise amounts can actually add a bit of presence.
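The "disappears into the LSB" claim follows from the usual back-of-envelope figure of about 6.02 dB of dynamic range per bit. A quick check in Python (the helper name and the -110 dBFS analog noise figure are hypothetical):

```python
def lsb_floor_db(bits):
    """Rough level of the least-significant-bit step below full
    scale: about 6.02 dB per bit (ignoring dither and noise shaping)."""
    return -6.02 * bits

analog_noise_db = -110.0   # hypothetical noise floor of a quiet analog desk

# 16-bit floor is about -96.3 dBFS, 24-bit about -144.5 dBFS, so analog
# noise at -110 dBFS is captured at 24 bits but falls below the 16-bit LSB.
noise_survives_cd = analog_noise_db > lsb_floor_db(16)
```

So at 24 bits the desk's residual noise is still faithfully recorded, and whether it survives the trip to 16 bits is a choice you make at the dithering stage, not an accident.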