Broadcast quality audio for video interviews


When it comes to audio for video, there is a huge difference between broadcast quality and podcasting or playback from a computer. Although computer sound cards are getting better, they still lag far behind professional audio equipment. This article covers a couple of pitfalls you have to avoid if there is a chance that your interview will not only be watched on the web but also played at trade fairs over professional PA systems.

Interviews often have to be conducted in all sorts of places, so you do not have much control over the environment. Two weeks ago, I went out to do six interviews as part of a digital presentation. The location was a busy office building with irritating server fans blowing in the hallway and quite a lot of street noise on top of that.
I had my Sony FX1 camcorder with me and a Rode NTG-2 shotgun mic, which can deliver broadcast-quality results if you pay attention to the details. The mic is directional, meaning it doesn't pick up much audio from the sides, which makes it a good choice for interviews.

The interviews themselves went well. I had an 800-watt spotlight and a softbox with me, and everything appeared to be fine, although I had to filter out some background noise in post-production. On the computer, everything sounded as expected, although the volume wasn't very dynamic, as you can see for yourself from the graphical representation of the audio waves below, which compares a dynamic sound wave with the audio produced during the interview (bottom wave):
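If you want to put a number on how "dynamic" a recording is rather than eyeball the waveform, you can compute its crest factor: the ratio of the peak level to the RMS (average) level. A flat, compressed recording has a low crest factor; a dynamic one scores higher. Here is a minimal sketch in Python using only the standard library, assuming a 16-bit PCM WAV file (the function name is mine, not from any tool mentioned in the article):

```python
import math
import struct
import wave

def crest_factor(path):
    """Return (peak, rms, crest factor in dB) for a 16-bit PCM WAV file."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "expects 16-bit samples"
        raw = w.readframes(w.getnframes())
    # Unpack the raw bytes into signed 16-bit samples (little-endian).
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak, rms, 20 * math.log10(peak / rms)
```

For reference, a pure sine wave has a crest factor of about 3 dB; natural speech typically sits well above 10 dB, so a value near 3 dB would suggest heavy compression.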

In my situation, disaster struck when the interviews were broadcast through a professional PA (public address) system to an audience of 250 people. I didn't know that beforehand, or I would have run a test on a similar system first. As it happened, the presentation was thrown before the lions without a proper check and, as Murphy's law predicts, the result was, to put it mildly, beyond belief.
The sound output was so low and distorted that we had to stop the presentation altogether. At first, I couldn't comprehend what was wrong, because playback on the computer gave no indication of a problem whatsoever, and tests with a hi-fi stereo set didn't reveal any problems either.

What caused the problem?

  1. Microphone not close enough to the interviewee?
  2. Conversion of the audio from stereo to mono before importing it into Adobe Flash? This is less common and might not concern you, but for multimedia enthusiasts it is interesting to know about.

Reason number 1: My first thought was that I hadn't kept the mic close enough to the interviewee. You should keep a directional mic within a range of 0.5 to 3 feet from the speaker. And here I did make a mistake: I had mounted the mic on the camcorder, which was close by at first, but then I moved the camera backwards to get the speaker's upper body into full view, forgetting that I was moving the mic too. A ridiculous mistake, I know, but I was dead tired from working weeks on end without a single day off, and that can kill one's eye for detail. As it turned out, this was not the cause of the problem, although it certainly didn't help either.
Shotgun mics are really not made to work over long distances, although the name is somewhat misleading in that respect.

Back home, it didn't sound too bad, but as I said earlier, I had to suppress the background noise with a noise-removal filter in the audio editor. That is fine for podcasting and web publication, but for broadcasting it can be a bad thing, because it creates weird artifacts that become noticeable on amplified systems.

If you can, avoid having to use filters at all by making sure that your original recording is as clean as possible. That is the best way forward, and related to that, you may want to read How to keep the mic close by without revealing it in video footage.

Reason number 2, the real problem: This is a bit more complicated. If you have never used Adobe Flash and are not planning to use it in the future, you may want to skip this part, although I explain the technical terms in detail.

I assembled the video and audio into a Flash project so that I could add effects and background music while being assured it would all play together without hiccups. Originally, the client asked me to do it in PowerPoint, but since it had to run as a standalone application with all the media stuffed into it, I decided to use Flash instead. In the past, I used authoring software like Macromedia Director for this sort of work, but Flash works faster, and if the budget is tight, working with Director is not really an option.

In order to embed the video in Flash, I first had to extract the sound from the video, then import the video and the extracted audio separately to avoid synchronization problems.
Synchronization means getting audio and video to play at exactly the same time and to stay in sync. On digital TVs you sometimes see that the voice doesn't follow the movement of the lips; that means they are out of sync. If you import a video into Flash, chances are that your audio will be out of sync too. Therefore, you import them separately, and in Flash you set the sound to Streaming in the properties box, so that it plays at the same pace as the frames of video footage pass by. In this context, the term streaming has nothing to do with video streaming on the internet.

To recap: Most video editors can export the sound on its own, and it is a straightforward process. As the sound was recorded in stereo, I should have exported it in stereo, except that I didn't. I exported it as mono, because I reasoned that it wouldn't make any difference and that it would reduce the size of the sound file considerably. The lighter a presentation is, the better it performs, which is especially important on older computers. The reason for converting the sound file was therefore a sound one, but it turned out to be a mistake, because it created a compatibility problem: the PA system didn't know what to do with the converted audio and produced a garbled sound, while other audio in the presentation played fine. Why was that?
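For the curious, a stereo-to-mono downmix is conceptually simple: each output sample is the average of the left and right samples, which also halves the amount of PCM data, hence the smaller file. A minimal sketch in Python with only the standard library, assuming 16-bit PCM WAV files (the function name and paths are my own, not from Flash or any video editor):

```python
import struct
import wave

def stereo_to_mono(src_path, dst_path):
    """Downmix a 16-bit stereo PCM WAV to mono by averaging the channels."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        framerate = src.getframerate()
        raw = src.readframes(src.getnframes())
    # Interleaved samples: L0, R0, L1, R1, ...
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    mono = [(samples[i] + samples[i + 1]) // 2
            for i in range(0, len(samples), 2)]
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(framerate)
        dst.writeframes(struct.pack("<%dh" % len(mono), *mono))
```

Averaging the channels halves the data, but as the story shows, a mono file dropped onto a playback path that expects stereo can behave unpredictably.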

The other sound files contained background music, and because of the music I had left them in stereo; as it turned out, that part of the presentation played just fine through the PA system. I honestly didn't know this was an issue until I had my nose rubbed in it, and I presume that most multimedia folks don't know it either.

How did I find this out?

Although I researched this problem for several days, I couldn't reproduce the garbled effect, and you cannot find a cure if you do not know what is causing the disease. I had actually given up and resigned myself to the idea that I would have to redo the interviews, when a small incident gave me the clue to what had really happened:

In Cubase, my audio editor, I had created a stereo channel because I wanted to compare a couple of sound waves (like the ones you see in the screenshot above). I imported one of the interview sound files into that channel and played it back. Exactly the same muffled sound came out of the Mac this time!
At that instant I still didn't know what was going on, but it didn't take me long to find out, because as soon as I dragged and dropped the audio into a mono channel, it played as it should.

This is such a typical story of falling down and getting up again that I couldn't resist telling it, although I realize it is more technical than my usual articles. Sorry about that.

Read also: How to keep the mic close by without revealing it in video footage.

