Here you will find technical words, abbreviations and concepts related to video, simplified to their bare essentials so as not to confuse beginners with irrelevant details. This is a living document. It is updated every week or so.
Current concepts covered:
Aspect ratio – Artifacts – Auto cue – Bandwidth – Chroma key, color key – Condenser microphone – Compression – Dynamic microphone – Embedding – Frames, Keyframes – FLV – Fps – Kbps – Leeching – Master video/audio – Movie – Phantom power – Pixels – Player – px – Podcast(ing) – Poster image – Post production/processing – Quepoint – RTMP streaming video/audio – Teleprompter – Video cast(ing)
Artifacts are distortions easily spotted in large areas of the same color in movies and images that have been compressed in order to minimize the weight of the file (usually expressed in Kb or Mb). The compression method reduces the color information needed to describe an image or movie on a computer screen. This way, the image or movie becomes much lighter in weight and, when it is well done, you don’t see much difference. But if you compress it too much, you get a blurry and discolored view with weird clusters of fuzzy rectangles. These are called artifacts.
Aspect ratio
The relation between width and height. A standard movie is 4 units wide and 3 units high, so if a normal movie is 400 pixels wide, it is 300 pixels high: 400 divided by 4 gives 100 pixels per unit, and 100 multiplied by 3 gives 300. In video, the aspect ratio is often written as 4:3. This concept becomes important when you want to resize your video, or when it runs in a player (a container in which a movie plays) that has a fixed aspect ratio different from your movie, because you run the risk of a distorted display. You see this often in podcasts where heads look as if they are squeezed. In that case, the player’s width is smaller in relation to its height than that of the actual movie. You also have panoramic movies with an aspect ratio of 16:9. You will see this type popping up more and more as time goes by.
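The arithmetic is easy to automate. Here is a tiny Python sketch (the function name is my own, just for illustration) that derives the height that keeps the aspect ratio intact:

```python
def height_for_width(width, ratio_w, ratio_h):
    """Return the height that keeps a ratio_w:ratio_h aspect ratio intact."""
    return width * ratio_h // ratio_w

# A 4:3 movie that is 400 pixels wide must be 300 pixels high.
print(height_for_width(400, 4, 3))    # 300
# A 16:9 panoramic movie that is 1280 pixels wide is 720 pixels high.
print(height_for_width(1280, 16, 9))  # 720
```

Resize a player to any width you like; as long as you derive the height this way, heads will not look squeezed.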
Auto cue, Autocue, Teleprompter
Also called a teleprompter, used in broadcasting and also in videocasting. A news reader can type in the text he/she has to read when going live on TV. The software then scrolls the text over the screen at a predetermined pace adapted to the natural speed for that reader, so that it will sound like speaking instead of reading from a text. The screen is placed directly next to the camera targeted at the news reader, so that it looks like the reader is addressing you directly instead of just reading the text out loud. It demands a bit of practice to do this right. The best result is always after rehearsing the text a couple of times so that you know a large part by heart. That way you will not appear to be reading.
There is a very cheap application that provides you with a teleprompter for your podcasts, namely Vlog It (since bought by Adobe). It works quite well when you put your camera just above the screen while reading the scrolling text.
Bandwidth is the amount of traffic on your site. Every piece of information on your website has a weight in bytes, kilobytes or megabytes. Text is the lightest form of information; it usually doesn’t use more than 10 kilobytes on a page. But you probably have a logo, and perhaps a picture of yourself, and then it starts to add up. These days it is not abnormal to have a page of 50 Kb to 100 Kb, especially if you have ads on your pages as well. If you have audio or video on your page, your page becomes much heavier.
So, every time someone visits your site, a page is loaded, and the total weight of that page (including text, graphics, animation, audio and video) is the bandwidth used by that one person at that moment. This usage is stored in the log files of the server, and every visit is added to the amount used so far. At the end of the month, you have an overview of the bandwidth used, and if you cross the limit set by the hosting provider, you have to pay extra. Many providers do not set a limit because they know that most site owners do not use much bandwidth. However, if you do use a lot of bandwidth, chances are that they will ask you either to pay extra or to go somewhere else. Therefore, it is better to have a provider who sets a limit, because that limit becomes an obligation for the provider: they cannot complain as long as you stay within it. Bandwidth limits vary widely, so it is worthwhile to spend some time shopping around. If your site has many local videos, bandwidth becomes an important issue.
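To get a feel for the numbers, here is a quick back-of-the-envelope calculation in Python (my own illustration, not a real hosting tool):

```python
def monthly_bandwidth_mb(page_weight_kb, visits_per_month):
    """Total monthly transfer in megabytes for one page."""
    return page_weight_kb * visits_per_month / 1024

# A 100 Kb page visited 10,000 times a month costs roughly 1 GB of bandwidth:
print(round(monthly_bandwidth_mb(100, 10_000)))  # 977 (MB)

# Add a 5 MB video that 1,000 of those visitors actually play:
print(round(monthly_bandwidth_mb(5 * 1024, 1_000)))  # 5000 (MB), five times as much
```

One modest video quickly dwarfs everything else on the page, which is why bandwidth limits matter most for sites hosting their own movies.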
Compression
A technique to make electronic files smaller, comparable with applications like WinZip or StuffIt. In the context of video and audio, you use a movie or image editor. Compression has the advantage of dramatically reducing file sizes, down to as little as 10% of the original, while maintaining reasonable quality. There is always a trade-off, of course: the more you compress, the lower the image quality. Rule of thumb: never compress the same file twice, because the effect becomes very obvious the second time. A compression method tries to calculate how it can reduce the amount of code needed to play the movie. In other words, it tries to create shortcuts to describe the color fields. You can instruct a compression method to use the color information of 1 frame and reuse it for the next 23 frames, for instance. That way, the movie does not have to keep the color information for all those next frames, thus reducing the file size dramatically. See also Keyframes in this glossary.
Chroma key, Color key
To make an area in an image or video transparent, you use a technique called color keying or chroma keying. A typical application is the weatherman showing what is going on in the North while a movie with clouds plays in the background. In reality, the weatherman is talking to a blank wall painted in a solid color, and the clouds video replaces that wall virtually through the use of software.
I hear you thinking: “Why would that be interesting to me?”. Imagine making a podcast with yourself as the main actor, explaining something about sports. While you talk about a running technique for athletes, for instance, you can show a video of a running athlete in the background demonstrating the technique you discuss. You remain in view as well, so you can brand yourself as a person, or you can hold a related product in your hands that you want to promote. To do that, you need a way to remove everything from the video that is not you and replace it with another video or still image, and color keying does that trick.
Apart from the fact that you can do lots of fun stuff with it, it is actually very useful to enhance the visual strength of your message. And the great thing is that it isn’t that difficult to do yourself. What you primarily need is a well-lit space and a piece of muslin or paper of about 10 x 7 feet on a wall, in green or blue. We call that a greenscreen or bluescreen. You can also paint a wall if you have a smooth surface. The color doesn’t really matter, except that it should be distinctly different from any color you are wearing during the shoot, because that background color will be used to determine the transparent areas in your video. Therefore, the color is best a bit unusual and easy to isolate.
Traditionally, in film they use blue while in video green is mostly used. Like I said before, it doesn’t really matter which color, although I really would avoid using black, white and gray because those color values will be found on your person as well and as a result, those parts of you will become transparent too.
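The keying decision itself is simple to sketch. The toy Python function below (my own illustration; real editors work on full frames with soft edges and spill suppression) treats a frame as a flat list of (r, g, b) tuples and swaps every “green enough” pixel for the matching background pixel:

```python
def chroma_key(frame, background, threshold=100):
    """Replace green-screen pixels in `frame` with pixels from `background`."""
    out = []
    for fg, bg in zip(frame, background):
        r, g, b = fg
        # A pixel counts as green screen when green clearly dominates red and blue.
        if g - max(r, b) > threshold:
            out.append(bg)   # transparent: the background video shows through
        else:
            out.append(fg)   # opaque: keep the foreground (you)
    return out

frame      = [(20, 230, 30), (180, 160, 150)]  # a green-screen pixel, a skin tone
background = [(90, 90, 200), (90, 90, 200)]    # pixels from the clouds video
print(chroma_key(frame, background))
# [(90, 90, 200), (180, 160, 150)]
```

This is also why black, white and gray make poor backgrounds: no single channel dominates, so there is no clean rule to separate you from the wall.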
Condenser microphone
A high-quality microphone which produces a weak signal, requiring an external power source: either a battery placed in the handle of the mic, or phantom power delivered via a mixing device. The quality of a condenser mic is generally better than that of a USB or mini-jack mic, but it cannot be connected directly to a computer unless the computer has a special sound card or an external mixing device (preferably FireWire).
Dynamic microphone
A regular type of microphone that doesn’t need extra power. They are generally cheaper than condenser microphones. They can have XLR, stereo jack, USB or mini-jack connections. Avoid mini-jack microphones, because the jack is easily damaged and it can create static in your audio.
Embedding, embedding code
Embedding means placing audio or video on a page or blog post. Most video editors and video network services provide the code to “embed” media on your site. Embedding code looks awfully complicated and generally, unless you understand the code, you best copy and paste it as it is. However, on Miracle Tutorials there are several articles on how to tweak embedding code. Here is an example of embedding code:
<object width="500" height="281"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=5367093&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=00adef&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=5367093&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=00adef&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="500" height="281"></embed></object><p><a href="http://vimeo.com/5367093">Prisoner of Rules – art video</a> from <a href="http://vimeo.com/rudolfboogerman">Rudolf Boogerman</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
It looks like a nightmare, doesn’t it? Luckily, you do not have to write this yourself. In any case, well-formed embedding code makes sure a video can play on your site.
Frames, Keyframes
Videos, or movies, consist of moving images. Funnily enough, those images don’t move at all. In reality, a movie substitutes one image for another, and it does that 29 times per second for high-quality movies and 15 to 24 times for web quality. The more images you have per second, the heavier the movie will be, but it will also move more smoothly. In the film theatre, 15 frames per second will appear jerky, while 29 frames per second is often overkill on the web (although there are exceptions). Every image is slightly different from the previous one, and as a result you get the impression that something is moving. Every image is called a frame, like a photo frame with a portrait. Keyframes are frames used to determine the amount of compression that is applied. A compression method tries to calculate how it can reduce the amount of code needed to play the movie. In other words, it tries to create shortcuts to describe the color fields.
In QuickTime, you can instruct a compression method to use the color information of 1 frame (the keyframe) and reuse it for the next 23 frames, for instance. That way, the movie does not have to keep the color information for 24 frames but only for 1 in every 24, thus reducing the file size dramatically. So, if you set Keyframes to 24, every 24th frame the compression method takes a snapshot of the colors in that keyframe and uses that information to create the next 23 frames, until a new keyframe starts. If you set Keyframes to 1, the compression method takes a snapshot of every frame, resulting in a big file size but much higher quality. There are no standard keyframe values, because every movie is different. Therefore, you have to experiment with the keyframe settings until you have the right balance between quality and weight. I know that many tutorials will tell you otherwise, but for video channels I would either set the keyframe to 1 (every frame is recalculated) or turn compression off altogether, because those video channels will compress your movie again regardless of what you do.
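The savings are easy to see in a simplified Python sketch (my own illustration: it counts only the full color snapshots and treats the much smaller in-between delta frames as free):

```python
import math

def stored_snapshots(total_frames, keyframe_interval):
    """Number of full color snapshots stored when every Nth frame is a keyframe."""
    return math.ceil(total_frames / keyframe_interval)

# A 10-second movie at 24 frames per second (240 frames in total):
print(stored_snapshots(240, 24))  # 10 full snapshots instead of 240
print(stored_snapshots(240, 1))   # 240: every frame recalculated, big but sharp
```

In other words, the keyframe interval trades file weight against quality exactly as described above.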
FLV
Short for Flash Video. A video compression format that is used by practically all video channels, like YouTube, Brightcove, VideoJug etc. Flash is widely used on the internet, and although it started out as a method to animate graphics, it has become a fully fledged video platform as well, with which you can play videos on the web. Since the quality of the video in relation to its weight is very good, it is the most popular method at this time. The greatest thing about the FLV format is that if you upload it to a video channel, it will not be compressed again by the video service (as happens in all other cases), so the quality of the video you upload is preserved. In other words, what you see on your own computer before you upload is what you get.
However, YouTube can give unpredictable results with FLV. If it doesn’t work, upload a QuickTime movie instead. Not all video software supports FLV, but Camtasia, Vlog It, Video Communicator, Flash and Adobe After Effects do, and it is likely to become a standard in most video editors soon.
Fps
Frames per second: the number of images passing by each second. A typical film format is 24 Fps, meaning that 24 images make up one second of film. TV broadcasts are generally 30 Fps.
Kbps
Kilobits per second. Relates to the speed at which video or audio information is transferred from a server to your browser. Slow connections have a low Kbps value, so if your target audience has a slow connection, you best limit the bandwidth of your video or audio by setting a maximum Kbps value. This influences the way compression is applied to your video/audio: a low rate results in lower quality, while a broadband connection can handle better quality and therefore a higher bitrate.
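As a rough yardstick, here is the download-time arithmetic in Python (my own sketch; it assumes Kbps counts kilobits, so one megabyte is 8 × 1024 kilobits):

```python
def seconds_to_load(file_size_mb, connection_kbps):
    """Rough time to transfer a file: megabytes -> kilobits, divided by the rate."""
    return file_size_mb * 8 * 1024 / connection_kbps

# A 5 MB video over a 400 Kbps connection:
print(round(seconds_to_load(5, 400)))  # 102 seconds
```

If your audience sits on connections that slow, a minute and a half of waiting is a good argument for encoding at a lower bitrate (or streaming).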
Leeching
The art of embedding videos from other sites while posing as if they are your own. When you have a video on your own server and play it through your site, someone else can pick up the URL of that video and place it on their own site. They then steal bandwidth from your server each time one of their readers clicks on the video, and they get the credit for the content if they mask its origin, for instance by imposing their own logo over it. Leeching is also used to fill up a site with content from various sources to make money from Google Ads, for instance; this way, the leecher does not need to do much about content, just collect the money. Luckily, such sites are seldom successful, because people get a feel for this sort of scam. However, it is still widely practiced in one form or other, so it is a question of protecting yourself against it. This practice is not to be confused with embedding videos from YouTube, MySpace, Metacafe and others, because those videos can be traced back to the originator without any effort.
A master video or master audio is the uncompressed end result after post processing. For video, it contains the full movie from start to end in its highest quality. Master videos can contain sound and voice-over, but if it is a multilingual movie, voice-over and text are kept separate from the master, so that native versions can be created from it. With the master video, you can create numerous compressed video formats without changing the master itself. The same goes for master audio: it is the full audio from start to end, but it might have separate voice-overs for multilingual use. You could create a podcast from the master, or a CD, without changing the master itself.
In the context of podcasting, a movie is a digital video. The words movie and video are interchangeable; they mean exactly the same, strangely enough. See also Video for more info.
Phantom power
An extra power source for condenser microphones. The word phantom indicates that this power source does not affect dynamic mics, which do not need extra power. In other words, dynamic mics will not notice the phantom power, while condenser mics will make use of it if they are not battery powered.
Pixels
One unit of light on your computer screen. That unit can take any color. We use pixels as a measurement to determine how big an image on the computer or the internet is. Actually, everything you see on your computer screen is built up from pixels. The abbreviation for pixels is px, so 400×300 pixels means the same as 400x300px.
Player
In the context of podcasting or video casting, a container that loads a movie and plays it on a web page. Usually, video networks have their own player, with their own requirements about how big a movie should be and in what format. Most players are created with Flash, because the Flash plugin is installed on 98% of the computers connected to the internet. Thus the problem of the past, when you had to install all sorts of native applications, is finally over. You can also place movies on a web page without a player and play the movie file directly, either with embedding code (the safest method) or just with a link (unreliable in many cases) to the path and name of the movie.
Podcast(ing)
Blog posts in the form of video or audio. As with regular blog sites, you can subscribe to the podcasts via RSS, which is short for “Really Simple Syndication” (originally it was called something very technical, but this term was soon adopted). But instead of viewing posts in a news reader or as a bookmark in your browser, you view or listen to video or audio in a media player like iTunes (www.apple.com) and others. As soon as new podcasts are available, you will find them in your player.
Poster image
An image shown on a video when it is not playing. In other words, a poster image is literally a poster for your video. Together with a descriptive title, it is the reason why a visitor decides to view your video or not. Therefore, your poster image is important. On many video channels you cannot select your own image, but if you can, it is best to create a really good-looking one.
Post production means working on your video or audio after you have transferred it onto your computer. It usually involves video or audio editing software. Adding a caption to your video, adding sound, or combining several video files into one is regarded as post processing. In fact, anything you do after the video is placed on your computer can be seen as post production. For instance, you can say: “Please add an intro to the video in post production”, or: “Remove the static noise in the audio during post production”.
px
Abbreviation for pixels.
Quepoint (cue point)
If you have been working with video editors, you may have come across the word quepoint, more commonly spelled “cue point”.
Basically, it is a point in time where something needs to happen, either within the video or outside of it.
For instance, cue points are used by sound studios for voice-overs. Professional sound editors can pick up cue points from a video or multimedia presentation and synchronize a recorded voice with them, so that the voice starts talking at the exact moment it should.
In short, instead of having to scroll through the timeline to find the right moment in time, you can jump from one cue point to the next without having to bother about the rest, which is a huge time saver for multimedia professionals.
Lately, cue points are even used in video networks like Viddix. While the video plays, something else shows up to the right of the video, or music can start to play, when a cue point passes by in the timeline.
RTMP Streaming Video/audio
RTMP means Real Time Messaging Protocol. What you really need to know about streaming comes down to five things:
- With RTMP streaming video, viewers/listeners do not have to download a complete movie or audio file before it starts to play. Instead, the movie literally streams in while you play it, comparable with a running water tap. Without streaming, it can take a while before a movie plays, and most viewers are impatient, so for big videos streaming is no luxury.
- With RTMP streaming, you can jump around in the timeline and the video plays almost immediately from that point onwards, without any preloading.
- 99 chances out of 100, you cannot do streaming video or audio directly from your own website, unless you rent a server with expensive streaming software on board, or you work via Amazon Web Services, which has an inexpensive solution to stream your videos. It is much cheaper than most video services.
- With RTMP streaming video, your movie is better protected against illegal downloads, although not bulletproof. Nothing is 100% bulletproof on the internet.
- Streaming video only plays over a network; it doesn’t play from a CD or your local computer. In other words, you need to be connected to the internet or an intranet to play it.
In the context of podcasting, a video is a digital movie. This is a bit confusing, because originally video was not digital at all but analogue, carried on tape (video tape). For some reason we adopted the word video from the moment it became possible to play moving images on a computer in the 90s. The words movie and video are interchangeable; they mean exactly the same, strangely enough.
A video cast is the same as a podcast, but the term indicates that it is exclusively about video, while a podcast can be either video or audio. Neither a video cast nor a podcast excludes audio in a movie.