

Video Basics (Part 1): Frame Rates and Resolution

Three Canon cameras lined up

We're going back to basics. Breaking down videography jargon, techniques and processes for photographers. Neale Head, Sydney Retail Manager at SUNSTUDIOS will get you started with this 4-part series.

Question: How long have you been putting off learning about video? Have you just been too busy to even think about learning a new skill? Are all those technical concepts and jargon too intimidating?

These articles are designed to get you up to speed quickly and to explain the fundamental concepts of video production in a simple, easy to grasp way.

Canon C70 screen open to the resolution settings

 Image by Jordan Allison

25 Frames Per Second and Beyond!


The first step to understanding video is to understand frame rates. Simply put, a frame rate refers to how many images (or frames) a camera captures in one second of recording. The other dimension of frame rate is the rate at which those frames are played back.


Without going into the long and storied history of film and video frame rates, just know that the standard frame rate for video cameras in Australia* is 25fps (that’s “frames per second”).


But in your camera settings you'll see it expressed as 25p ("p" being for "progressive" - which refers to the way the camera captures images from the sensor**). Most high-end cameras and DSLRs will let you shoot at 50p and even 100p and above. But why?


There are a few reasons. For instance, some prefer the crispness and lack of motion blur in the 50p format when played back at the same speed. However, faster frame rates are more often used to capture slow motion. Footage recorded at 50p but played back at 25fps will display at half the natural speed. 100p at 25fps will be at quarter speed.

And 50p played back at 25fps in particular gives a very pleasing dreamlike slo-mo that is much loved by DOPs and clients alike.

In fact, some operators will shoot everything in 50p, so they have the option of slo-mo or other frame rate manipulation in post-production.
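The capture-versus-playback arithmetic above boils down to a single division. A minimal sketch (the function name is my own, purely illustrative):

```python
def playback_speed(capture_fps: float, playback_fps: float) -> float:
    """Fraction of real-time speed when footage captured at
    capture_fps is conformed to a playback_fps timeline."""
    return playback_fps / capture_fps

# 50p conformed to a 25fps timeline plays at half speed:
print(playback_speed(50, 25))   # 0.5
# 100p conformed to 25fps plays at quarter speed:
print(playback_speed(100, 25))  # 0.25
```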


So, should everyone just shoot at 50p most of the time? Not really, no. Keep in mind that recording twice as many frames every second means a greater load on image processing in-camera and on your computer in post. There is often a price to pay for recording at high frame rates, and that price is usually resolution - and sometimes colour too!

Defining Definition Part 1 – Resolution

Remember the earlier, simpler days of good old-fashioned High Definition (HD)? How easy it seemed back then.

Well OK, not really, because an HD camera was anything that could record in the now laughable video resolution of 720p (the "p" in this case referring to progressive scanning in the delivery format - not to be confused with the capture frame rates above) or 1080i "interlaced". Cameras (and TVs) that could handle the newer, sharper 1080p format were designated (largely for marketing reasons) as "Full HD". Now of course the standard is UHD (that's ULTRA-HD) and 4K, as well as 6K, UHDTV2 and 8K.


So what do these numbers and K's actually mean when we talk about resolution? It used to be that the number referred to the pixel count on the vertical axis of the image. So 720p meant an image that was 1280 pixels across and 720 high. 1080p is 1920 x 1080 pixels. Easy enough. However, marketing got involved again and decided that "2160p" was not good enough for the new 4096 x 2160 resolution format, so they went with the higher horizontal pixel count and named it 4K.


And that convention has stuck as resolutions get bigger. Therefore 2K is actually 2048 x 1080 – so only a whisker wider than 1080p – and 8K is … well, look, here’s a nice picture:

UHDTV1 and UHDTV2 as shown here represent the terminology for consumer TV resolution. So, when you buy a “4K” TV you’re actually only getting 3.84K; same with these newer “8K” TVs, which top out at 7.68K. True 4K and 8K resolution (i.e. 4096 x 2160 and 8192 x 4320) is known as 4K or 8K DCI. DCI stands for “Digital Cinema Initiatives” and is an agreed-upon standard of the film industry for 4K/8K movie production. You’ll often see these two standards (UHD and DCI) in the resolution settings of a digital cinema camera.
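As a quick sketch of how the "K" naming tracks the horizontal pixel count (the dimensions below are the standard published figures for each format):

```python
# Standard video resolutions (width x height). The "K" label follows the
# horizontal pixel count; consumer UHD formats fall just short of it.
RESOLUTIONS = {
    "HD (720p)":       (1280, 720),
    "Full HD (1080p)": (1920, 1080),
    "2K DCI":          (2048, 1080),
    "UHD (UHDTV1)":    (3840, 2160),
    "4K DCI":          (4096, 2160),
    "UHDTV2":          (7680, 4320),
    "8K DCI":          (8192, 4320),
}

for name, (w, h) in RESOLUTIONS.items():
    megapixels = w * h / 1e6
    print(f"{name:16s} {w:4d} x {h:4d}  ({megapixels:5.2f} MP, {w / 1000:.4g}K wide)")
```

Note how UHD comes out at 3.84K wide and UHDTV2 at 7.68K - exactly the shortfall described above.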


The question that is often posed is - outside of Netflix, broadcast television and cinema - isn't the majority of content delivered and distributed still in HD? And the answer is yes – less and less so – but yes.

So why bother with the significantly larger file sizes, not to mention the cost of 4K-capable cameras, if I’m not going to deliver in 4K 95 per cent of the time, let alone 6K or 8K?


First of all, 4K deliverables are fast becoming a standard, but there are other things to consider, like oversampling (i.e. using the whole 4K sensor to record 1080p – or 6K to 4K – resulting in sharper images and greater colour precision) and digital cropping. A 4K image consists of, roughly, 4 x 1080p images. That gives you amazing latitude for reframing and composition in post.
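The "four 1080p images" claim is easy to check with a little pixel arithmetic (exact for consumer UHD, approximate for DCI 4K):

```python
full_hd = 1920 * 1080   # pixels in one Full HD frame
uhd     = 3840 * 2160   # consumer "4K" (UHD)
dci_4k  = 4096 * 2160   # true 4K DCI

# UHD holds exactly four Full HD frames' worth of pixels:
print(uhd // full_hd)               # 4
# DCI 4K holds slightly more than four:
print(round(dci_4k / full_hd, 2))   # 4.27
```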


For example, you can create two or more separate frames for an interview (e.g. a wide, a mid-shot and a close-up) from just ONE camera angle. Or create a digital pan or zoom from a static shot – all in glorious HD. The same can be said for 8K, but you can output in even glorious-er 4K.
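The digital pan described above is just a 1920 x 1080 crop window sliding across the larger frame. As an illustrative sketch (the function name and parameters are my own invention, not any particular editing application's API):

```python
def crop_window(src_w, src_h, out_w=1920, out_h=1080, t=0.0):
    """Return (x, y, out_w, out_h) for a left-to-right digital pan:
    t=0.0 is the far-left 1080p crop, t=1.0 the far-right.
    Assumes the source is at least out_w x out_h (e.g. UHD 3840x2160)."""
    x = round(t * (src_w - out_w))
    y = (src_h - out_h) // 2  # keep the pan vertically centred
    return x, y, out_w, out_h

# Start and end positions of a pan across a UHD frame:
print(crop_window(3840, 2160, t=0.0))  # (0, 540, 1920, 1080)
print(crop_window(3840, 2160, t=1.0))  # (1920, 540, 1920, 1080)
```

Animating `t` from 0 to 1 over the length of the shot gives a smooth pan, with every crop still a full-resolution 1080p frame.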


Ok, that's a good start, but what was that I was saying about the trade-off between resolution and frame rates? Let’s save that for next time, because first we need to get into “Defining Definition Part 2: Colour Depth”.


We’ll also delve into the fascinating world of Chroma Sub-Sampling – it’s even more thrilling than it sounds!

See you then.


Canon EOS C70 up close

 Image by Jordan Allison

*Based on the PAL broadcast system used here and in Europe – and most of the world. North America and Japan stuck to the early NTSC system, so they have fun frame rates like 29.97p and 59.94p – which they just call 30p/60p, but they really aren’t.
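Those "fun" NTSC rates are exactly the nominal rate divided by 1.001 (i.e. 30000/1001 and 60000/1001, a legacy of squeezing a colour signal into the old black-and-white broadcast standard), which Python's fractions module can represent precisely:

```python
from fractions import Fraction

# NTSC rates: the nominal rate divided by 1.001 - not a round 30 or 60.
ntsc_30 = Fraction(30000, 1001)
ntsc_60 = Fraction(60000, 1001)

print(round(float(ntsc_30), 2))  # 29.97
print(round(float(ntsc_60), 2))  # 59.94
```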

** If a camera records in Progressive mode it captures full-frame images “X” times per second, meaning that, for example, a 25p recording mode captures 25 full images per second. The other method is Interlaced. Interlaced means that two video fields are used to build one full image frame: one field contains the odd-numbered lines of an image and the other field contains the even-numbered lines.
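A toy sketch of that odd/even split - the six-line "frame" here is purely illustrative:

```python
# A toy 6-line "frame": each entry stands for one scan line.
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]

# Interlacing splits the frame into two fields:
odd_field = frame[0::2]   # lines 1, 3, 5
even_field = frame[1::2]  # lines 2, 4, 6

print(odd_field)   # ['line1', 'line3', 'line5']
print(even_field)  # ['line2', 'line4', 'line6']
```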