Feature

The recording component in A Level Music Tech

Making recordings is an enjoyable experience for students – but how to teach it? Tim Hallas has taught A Level Music Tech for nearly 20 years; here, he explores the basics of recording, editing and mixing
Adobe Stock / Interiorphoto

Formal teaching in audio recording and engineering has been around since the mid-1960s as part of media degrees and then as standalone qualifications. But the teaching of recording skills goes further back, through apprenticeships and working up through the ranks at professional studios.

Even so, when I began studying music technology in the 1990s, it felt like a new academic subject – although it had already been around for some time. Still today, parents at open evenings ask me if music tech is a new subject and I have to tell them it's been around a very long time. Which makes the fact that some of its skills aren't taught more widely slightly baffling.

Most music programmes in schools and colleges these days will include some element of music technology – usually recording parts into a DAW for a composition or similar. But this is only a very small part of music technology and doesn't include what my students consider the most exciting and fun element of the subject – the recording.

A Level Music Tech recording

At the launch event for the latest iteration of the Music Technology A Level in 2016, the chief examiner said that recording had been given a heavier weighting than in previous specifications because feedback from candidates, teachers and others was that it was the most enjoyable part of the course. It's also the bit that differs most from Music as a separate academic subject.

As a result of this, I try to encourage my students to record as much as possible, to enjoy the process and to learn the skills that will allow them to capture musicians and create high-quality recordings for the rest of their lives.

Capture

The key to a good recording is a good capture. If the initial recording is poor, then no amount of editing, processing and mixing will make it sound good. Get the initial capture right and half the battle is done. There are three tricks to getting a good capture:

  • a good performer
  • the right mic for the job
  • a good space to record.

A good performer will make the recording process a pleasure – if the performer knows their part, makes a good sound, and can perform confidently, the only job of the recording engineer is to capture their performance. By putting the performer at their ease and using the other two elements listed above to ensure a good capture, the engineer has already won half the battle.

The right mic for the job will depend on the circumstances of what you're recording – but, as a general rule, I use a condenser microphone for most jobs that require greater sensitivity, and a dynamic mic for loud tasks (snare drums, guitar amps etc.). The right mic does not necessarily mean a really expensive mic; there's a reason people use SM57s (pictured on page 17) on drums and guitar amps – they sound great. But a large diaphragm condenser, even a modestly priced one, will capture more nuances of a vocal performance than a dynamic. So the right mic for the job can make a big difference.

The space in which you record can have a huge impact on the capture. Somewhere quiet that isn't going to pick up loads of background noise or traffic is ideal. At our college we have an isolation booth for very dry sounds, or a bigger hall for more ambient recordings. They are not perfect spaces, but they are quiet, and when mic'd up appropriately capture the performers excellently.

Mixing and processing

Once the recording has been captured, the process of editing and mixing can begin. There's no way around it: editing audio can be tedious. Hours spent meticulously nudging the timing of individual drum hits is few people's idea of fun – but sometimes it needs to be done.

A mix of a small student-led recording project

The key to editing is not to overdo it. I have produced tracks that need to be absolutely metronomic, with every part locked to the grid. Other tracks, however, need the natural ebb and flow of a performer, and quantising those performances makes them sound awful and unmusical. The skill that students need to develop – and I'm still working on this with mine – is knowing when to edit and when to leave something alone: combining musical knowledge with audio engineering knowledge. The real skill is learning when and when not to edit.

When it comes to mixing, the key is again to start small and build up. It's very tempting for students to pile loads of processing onto tracks because they automatically assume that more is better. This is very rarely the case. I teach students about my usual basic processing chain, which contains:

  • compression
  • EQ (equalisation)
  • reverb.

That's it. That's the basics. Anything else is an artistic choice and isn't essential. And even then, I'm thinking about how much compression, EQ and reverb a track might need – and sometimes the decision is to leave them alone because, for instance, the amount of compression it needs is ‘none’.
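For departments that also dabble in code, the chain above can be sketched as a toy signal-processing exercise. This is a minimal illustration in Python with NumPy, not studio-grade processing: the compressor is a bare static gain curve, the 'EQ' is a single low-pass filter, and the 'reverb' is a single feedback comb filter – the names and parameters are all illustrative assumptions, not any particular plugin's design.

```python
import numpy as np

SR = 44100  # sample rate in Hz


def compress(x, threshold=0.5, ratio=4.0):
    """Very simple static compressor: any sample above the threshold
    is scaled down by the ratio (no attack/release smoothing)."""
    y = x.copy()
    over = np.abs(y) > threshold
    y[over] = np.sign(y[over]) * (threshold + (np.abs(y[over]) - threshold) / ratio)
    return y


def gentle_eq(x, alpha=0.1):
    """One-pole low-pass filter standing in for a subtle EQ cut of the top end."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y


def toy_reverb(x, delay_ms=50, decay=0.4):
    """Single feedback comb filter: the crudest possible 'reverb'."""
    d = int(SR * delay_ms / 1000)
    y = x.copy()
    for n in range(d, len(y)):
        y[n] += decay * y[n - d]
    return y


# A short, loud test tone: 0.1 s of a 440 Hz sine at full scale.
t = np.arange(int(SR * 0.1)) / SR
tone = np.sin(2 * np.pi * 440 * t)

# The chain in the order described: compression, then EQ, then reverb.
processed = toy_reverb(gentle_eq(compress(tone)))
print("peak after compression:", np.max(np.abs(compress(tone))))
```

Even a sketch like this makes the ordering visible: the compressor tames the peaks first, so the EQ and reverb work on a more controlled signal – the same reasoning applied at the mixing desk.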

When I teach mixing to my Year 12 students, I introduce it very slowly with a group project in which we start by adding an EQ to subtly alter the tonal characteristic of a sound – but they are taught about subtlety and to think about why they are making the change, the impact it will have on the sound, and the impact it will have on the other parts in the track.

The first true ‘effect’ we'll add is a simple reverb, to make a performance sound like it's in a natural ambience. We explore the natural acoustics of a space and try to use the reverb to blend the parts together. All tracks need reverb – although if we've used the large performance space, we'll capture the reverb live rather than adding anything artificial – and it's useful for students to understand that reverb isn't a plugin: it's a natural phenomenon that a plugin tries to recreate in the absence of natural acoustics.

Students still go too far – they turn things up too high and put too many processors on tracks – but they eventually learn and begin to apply things more naturally.

Teaching the concepts – back to analogue

I've talked a lot about capturing the natural sound and using the natural reverb, because that is essentially what recording should be – capturing and honing a brilliant performance. People have been using analogue reverb for hundreds (thousands?) of years (caves, rooms, halls, churches, etc.), and understanding the analogue phenomenon that DAWs are replicating really helps students grasp the concept.

I believe that my understanding of recording and music technology in general is based on the fact that I grew up working in the analogue environment. I'm not that old – and DAWs existed when I was studying, but they were rare and nobody I knew had one. My teenage bandmates and I had to cobble together recording equipment using cassettes, mini-disc machines, cheap mixers and so forth. We learned how to connect cables, why certain connections and signal routings worked and why others didn't – so when we then applied the same concepts to a DAW, it all made perfect sense to us.

However, today's students come to the DAW as an abstract object, with no understanding of why it operates the way it does, why audio is routed the way it is, or why the effects plugins are designed the way they are. There's no grounding in how audio works or in the reason that certain choices have the impact on the sound that they do. Therefore, I ensure that at least some of my practical teaching in audio goes back to the analogue, so that students can connect a mixer to a tape machine, get audio in and out of it, and understand how that applies to the DAWs they are using in their lessons and free time.

How can I apply this?

Not everyone has a background in audio engineering, and some might find the idea of recording daunting. But I strongly recommend attempting some audio recording with all students. Rather than just another MIDI project, why not connect a basic microphone, encourage your students to record a live performance, and teach them how to edit it? Start small: use the built-in audio recording apps on phones (if they're allowed in your school) and export the results into a DAW for further manipulation.

Students do seem to love recording when they are given the option – get students to record each other and see what happens. The attempts at recording don't need to be complex – in fact, the simpler the approach, the more likely the result will be something great to listen to. It is overprocessing that makes recording hard work, and often makes recordings unlistenable. There is such a wealth of knowledge available around recording that no matter how deep you and your students go, there is always more to learn. Good luck in your recording experiments!