HEY VIDEOMAKER! LET'S SHARE THE KNOWLEDGE!
Character is destiny - We create our character through repeated actions.
Creation commands fate - Our character then commands the world around us, guiding the outcomes we experience: our fate.
1 The elements of a video file
Understanding the fundamentals of the video editor's raw material: video files! Part 1.
Hello, my name is Mikko Pakkala. I am a Blackmagic Design certified trainer for DaVinci Resolve in Editing, Color, Fairlight and Fusion, and I have over 20 years of experience in news and current affairs video making. I am now starting a short series on the topic of video files. My goal is to remove a hefty amount of potential stumbling blocks from your road to becoming a productive and successful content creator, and I will do my best to familiarise you with some of the most crucial aspects and concepts of content creation. Let's begin our exploration with the raw material itself: video files. What do they consist of, and what are the most essential attributes we should all have a grasp of? Let's get moving!
What are the elements of a video file?
Uncovering the mystery behind the video file! What are the elements of a video file? This is the first video of a long series of short videos on video creation, in which I present and explain some of the most basic, mostly technical, fundamentals of digital storytelling.
How to compress a video file without losing quality? Are you familiar with the error message “Player doesn’t support this video format”? Check out my video series on video files to find answers!
2 - Many video file types, containing many variables
Many video file types, containing many variables... There are numerous types of video files, and each file contains a long list of settings, so it's no wonder there can be trouble dealing with the technical complexity.
You might wonder why there have to be so many different kinds of video files. The main problem is that uncompressed video files place too high demands on transfer speed, storage capacity and processing. The solution is compression, in the form of video codecs, which group different methods of video compression to make the video files small enough for the hardware to handle. These compression methods can reduce video data significantly, and the image may still seem to remain the same as the original. Different files are also optimised for different purposes: some for quality, some for editability and some for compatibility. Differences also come from different devices, patents and manufacturers. This has led to us having more video file types than we care to count; the old ones remain and new ones are being built.
3 - With video files, it depends on what you need
The reason not many people talk about video files is mostly that there are no definite answers to most questions. Most answers concerning video files begin with “it depends…”. Take transcoding, for example, which means changing a video file from one form into another. Most people don't know what that is, because they have never needed to learn it: they record, edit and deliver video without any manual transcode operations, and that is good enough for most people. Another reason is that transcodes are becoming more automated. Most modern cameras have the option to create your proxy files automatically, in camera. That is, if you happen to need proxies from your video files to begin with; most people do not.
With video files, it depends on what you need. Video files are what you make them to be. Unfortunately we are often unaware of the origins and purpose of any given video file we are given to work on. Camera operators often prefer to record with the highest setting a camera can provide, but that may not be helping post production! Communication between all team members is obviously essential.
4 - Footage acquisition: Recording the video file
First we need to use our camera for capture. The most common bottleneck here is the limited write speed of the small memory card in your camera. This usually means that you have to use heavy compression on your camera files, which you then might need to transcode into something less compressed to be able to edit the material on your computer. Why not solve this bottleneck with an external recorder using SSD drives, you might ask? You could, but the reason I don't comes down to three points. First, because we shoot moving images, the camera often needs to move too, so it's important for your camera to be wireless. Secondly, when you're recording video, less gear means less time to set up your shots. Thirdly, the more devices you take on a shoot, the more points of failure you will have. A friend of a friend forgot to push the record button on his external recording device as well.
On location: Recording the video file. The age-old saying "keep it simple, stupid" rings true with videography as well. The KISS (Keep It Short and Simple) method may not be the best if you ask marketers of video gear, but it will save your skin in more situations than people think.
5 - Watchfolders for automated transcoding of video files
Once you have your footage, you move the video files from your camera's memory card into a watch folder, which is a directory on your computer that is monitored for changes. When a new file or folder is added to the watch folder, a predefined action is triggered, so you can automate repetitive tasks within your video production workflow: for example, automatically transcoding footage into different formats and uploading those files to different cloud storages for editing and for backup. In essence, this means that you can automate most of the operations involved in importing your video files.
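To make the idea concrete, here is a minimal watch-folder sketch in Python. It assumes the third-party watchdog package and ffmpeg are installed; the folder paths, file extensions and proxy settings are placeholders, not a recommendation.

```python
# A minimal watch-folder sketch: new camera files trigger an automatic proxy transcode.
import subprocess
from pathlib import Path

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WATCH_DIR = Path("/media/ingest")      # hypothetical ingest folder
PROXY_DIR = Path("/media/proxies")     # hypothetical output folder

class IngestHandler(FileSystemEventHandler):
    def on_created(self, event):
        # React only to new video files dropped into the watch folder.
        if event.is_directory or not event.src_path.lower().endswith((".mov", ".mp4", ".mxf")):
            return
        src = Path(event.src_path)
        dst = PROXY_DIR / (src.stem + "_proxy.mp4")
        # Hand the transcode off to ffmpeg; here a small H.264 proxy as an example.
        subprocess.run(
            ["ffmpeg", "-i", str(src), "-vf", "scale=-2:540",
             "-c:v", "libx264", "-crf", "28", "-c:a", "aac", str(dst)],
            check=True,
        )

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(IngestHandler(), str(WATCH_DIR), recursive=False)
    observer.start()
    observer.join()   # keep watching until interrupted
```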
Watch folders are super useful for automated transcoding of video files! Ingesting and importing video files starts the post-production phase of your video workflow. Being organized and adding metadata become crucial assets as the production moves onward.
6 - Proxy video files for cloud & mezzanine files for on-prem editing
If you strive for maximum image quality with your video files, or if you edit in a collaborative cloud environment, you might need to transcode your material into something less taxing on your network bandwidth or system processing power. Proxy files are your primary solution for easing network bandwidth demands: they are extremely light, low-quality video files linked to the full-quality original recordings, which means you can switch between the heavy full quality and the light proxy quality during editing, if you so choose. For non-cloud projects, you usually only have to worry about processing capability, which is where a “mezzanine”, meaning an “in-between”, format becomes useful. A less compressed format takes a huge amount of storage, as it retains full quality, but it is much easier for the GPU and processor to decode.
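As an illustration, here is a hedged sketch of both kinds of transcode driven from Python with ffmpeg (assumed to be installed); the source filename, scaling and codec settings are example values only.

```python
# Two example transcodes: a light proxy and a heavier mezzanine file.
import subprocess

SOURCE = "clip.mov"   # placeholder source file

# 1) A lightweight proxy for cloud / bandwidth-limited editing.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-vf", "scale=-2:720",            # shrink the frame
    "-c:v", "libx264", "-crf", "28",  # heavy compression, small file
    "-c:a", "aac", "-b:a", "128k",
    "proxy_clip.mp4",
], check=True)

# 2) A mezzanine ("in-between") file: big, lightly compressed, easy to decode while editing.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ
    "-c:a", "pcm_s16le",                     # uncompressed PCM audio
    "mezzanine_clip.mov",
], check=True)
```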
Sometimes we need a lighter version of our video file, and sometimes we need the most shareable version, or the highest-quality version of our video file; it all depends...
7 - Video file delivery for distribution takes a lot of versioning
The delivery of your finished video files is where most transcodes need to take place. The viewer has to have flawless playback on any device, at any setting, and this means a long list of different transcodes from your exported video file. Social media platforms have this versioning automated: as you send your video to YouTube, its automation creates the versions from your exported video automatically. The delivery version of your video is usually highly compressed, as the storage and distribution of video is actually quite expensive. This is a fact most YouTubers are relatively unaware of, since people can share their videos through YouTube for free. That may not be a lasting state of things, and it is nothing short of amazing.
As we send our video file to YouTube, a lot of transcoding and processing is being done to our file. It is important for us to understand this if we want to build systems of our own at some point.
8 - Video file archive to online, nearline or offline storage
The last operation in the video workflow is archiving, which can mean a lot of things depending on your needs. Will you repurpose your material a lot? Will there be multiple users utilising the archive on a daily basis? Then you might need to prune and leave some material on your online storage, meaning your fastest drives, the ones you edit your videos from. For the material you might reuse, you need nearline storage: fast archive storage that may be too slow to edit from, but quick enough for browsing and for transferring material to your online storage. For materials that you only need to preserve, you need offline storage, which is too slow for daily use but is reliably backed up and inexpensive, and thereby offers ample capacity and upgradeability. It is smart to locate this storage away from the production facilities.
9 - Media asset management and metadata for video files
Without proper cataloguing and organisation, your archived files would not be accessible through search, no one would find anything in your archive, and it would be rendered useless. MAM, meaning media asset management, is all about how your media is managed and organised in your nearline and offline archive storages. What makes your media searchable is metadata, which can be embedded within the video file or stored as a separate file: information that describes the file's video and audio data. Such metadata can be descriptive, such as filename, keywords, author, tags, notes and comments, or it can be technical data, meaning timecode, date & time and copyright data. Note that the settings and metadata of a video file can also contain data that is in conflict with the file's actual contents.
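A minimal sketch of what such metadata could look like as a "sidecar" file stored next to the clip; the field names and values here are illustrative, not a MAM standard.

```python
# Write a small JSON sidecar that carries descriptive and technical metadata for a clip.
import json
from pathlib import Path

def write_sidecar(video_path: str, descriptive: dict, technical: dict) -> Path:
    record = {
        "file": Path(video_path).name,
        "descriptive": descriptive,   # keywords, author, tags, notes...
        "technical": technical,       # timecode, date & time, copyright...
    }
    sidecar = Path(video_path).with_suffix(".json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

write_sidecar(
    "interview_2024-05-01.mov",   # placeholder clip name
    descriptive={"keywords": ["interview", "harbour"], "author": "M. Pakkala"},
    technical={"timecode": "10:21:30:12", "recorded": "2024-05-01"},
)
```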
10 - File extension: The Container Format of a video file
The extension of a video file, like .MOV, .MP4 or .MXF, just to name a few, refers to the video file's container format, which is used to store and transport the digital video content within the file. These container formats are also called wrappers, and they determine the order in which all the elements of a video file, the various types of data, are organised. A video file consists of image data as frames, audio as samples and all sorts of accompanying metadata. Each format has been developed for certain tasks: some for compression for distribution, some with an emphasis on quality. You could rename the extension of a video file, but it would most likely not work in any software, as the order of the data would be in conflict with what the new extension implies: the extension of the file would no longer match the format of the video within the file.
The file extension of a video file usually gives us clues about what the file might contain, but it does not directly tell us which video codec the file uses.
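A quick probe makes this visible: the container and the codecs inside it are reported separately. This sketch assumes ffprobe (which ships with ffmpeg) is installed and uses a placeholder filename.

```python
# Show the container format and the codec of each stream inside a video file.
import json
import subprocess

def probe(path: str) -> None:
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=format_name:stream=codec_type,codec_name",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    print("container:", info["format"]["format_name"])
    for stream in info["streams"]:
        print(stream["codec_type"], "codec:", stream["codec_name"])

probe("clip.mp4")
# e.g. container: mov,mp4,m4a,3gp,3g2,mj2
#      video codec: h264
#      audio codec: aac
```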
11 - Format of a video file
A format provides a standardised structure for organising and synchronising the different data streams within the file. The format of a video file is the combination of the video codec used within the file and the wrapper of the file. With still image files, the extension tells you the format in which the image is stored. For video files, things are not that simple, as a video file's video stream can be encoded in multiple different standards using a video codec: an encoder and decoder of video based on standardised video compression algorithms, which enable the storing, streaming and playback of the video stream within a video file. In addition, a video file usually contains more than just the encoded video stream. It can contain timecode data, multiple audio streams, subtitles and metadata, such as chapter marks and notes written in the camera. This is why the extension of a video file is called a container or a wrapper.
A video file usually contains more than just the encoded video stream. It can contain timecode data, multiple audio streams, subtitles and metadata, such as chapter marks and notes written in the camera.
12 - AV compression codecs: lossless & lossy
A codec can be software, hardware, or a combination of both. It translates the data from the digital format into a format that is easier to transmit. In short: codecs allow your video and audio to be compressed and played back without any noticeable loss of quality. Audio and video codecs are divided into lossless and lossy codecs. Lossless compression reduces the size of a video file by removing redundant data from the file, without sacrificing any quality. Lossy compression can reduce the data from the file significantly, resulting in a much smaller file size but also with reduced quality. Lossless compression is often used in archiving and lossy compression is often used in streaming and delivery.
Video codecs have many different purposes and users. You could be a live stream creator, or you might want to watch a video on your smartphone; each use case requires a different version for the various systems to function smoothly.
13 - Audio compression codec of a video file
The main function of a compression codec in an audio file is the same as in a video file: it aims to compress the continuous uncompressed data stream into a significantly smaller file size while degrading the quality as little as possible. Certain audio codecs can live both inside video files and as audio-only files: a video file can contain multiple audio streams utilising, for example, the MP3 codec, and as you know, MP3 audio can also exist as individual files. For consumers, an MP4 video file most often uses the AAC (Advanced Audio Coding) audio codec, which emphasises compression over quality and is thereby optimised for streaming and mobile devices. Creators most often use the Linear PCM (pulse code modulation) codec, which is uncompressed and is commonly used with the .wav file extension for separate audio-only files, in order to retain the highest possible audio quality for recording and post-production work.
The main function of a compression codec in an audio file is the same as in a video file: it aims to compress the continuous uncompressed data stream into a significantly smaller file size while degrading the quality as little as possible.
14 - Audio sample rate of a video file
Audio quality is determined by two factors: sample rate, such as 44.1 or 48 kilohertz, and bit depth, such as 16 or 32 bits. Audio is recorded as samples. The sample rate is the number of times per second that the audio signal is sampled, measured in hertz (Hz). A higher sample rate results in a more accurate representation of the original audio signal. A sampling rate of 48 kHz means that audio samples are taken at a frequency of 48 kilohertz: 48,000 cycles per second. The sample rate determines the number of snapshots taken to recreate the original sound wave. 48 kilohertz has long been the standard for AV productions, but the use of higher sample rates is increasing as the larger file sizes no longer pose any trouble for the hardware.
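A small worked example of what the sample rate and bit depth mean for the uncompressed audio data rate (the formula is simply sample rate × bit depth × channels):

```python
# Uncompressed audio data rate = sample rate × bit depth × channels.
def audio_data_rate(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Return the uncompressed data rate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

print(audio_data_rate(48_000, 16, 2))   # 1536.0 kb/s for 16-bit stereo at 48 kHz
print(audio_data_rate(48_000, 24, 2))   # 2304.0 kb/s for 24-bit stereo at 48 kHz
```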
15 Audio bit depth of a video file
Bit depth is the number of bits used to represent each sample of the audio signal. The higher the bit depth, the more accurate the representation of the amplitude of the original signal, and the wider the dynamic range of the audio file: where the sample rate determines how many snapshots are taken, the bit depth determines how many amplitude values each of those snapshots can contain. 16-bit audio was used with compact discs in the nineteen eighties, 24-bit audio has been the standard for digital cinema audio, and 32-bit audio is now the highest bit depth in common use.
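For fixed-point (integer) audio, the theoretical dynamic range grows by roughly 6 dB per bit, which a short calculation shows; 32-bit float recording, as used in some modern recorders, behaves differently because of its floating-point encoding.

```python
# Theoretical dynamic range of fixed-point audio grows by about 6.02 dB per bit (20·log10(2)).
import math

def dynamic_range_db(bit_depth: int) -> float:
    return 20 * math.log10(2 ** bit_depth)

for bits in (16, 24):
    print(f"{bits}-bit: about {dynamic_range_db(bits):.0f} dB")
# 16-bit: about 96 dB, 24-bit: about 144 dB
```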
The latest "prosumer" audio recorders can record audio at 32 bits. This means that the recorded audio has much wider headroom, meaning that you don't have to worry about the recording audio level being too high or too low as the audio level can be significantly increased or decreased without any quality loss in edit. Other significant benefits include lower noise floor and increased dynamic range.
The latest "prosumer" audio recorders can record audio at 32 bits. This means that the recorded audio has much wider headroom, meaning that you don't have to worry about the recording audio level being too high or too low as the audio level can be significantly increased or decreased without any quality loss in edit. Other significant benefits include lower noise floor and increased dynamic range.
16 Audio tracks within a video file contain audio channels
An audio track in a video file is a separate stream of audio data that is synchronised with the video. Dolby Atmos, the most recent standard, doesn't have a fixed number of tracks and speakers, but, for example, the MP4 video format can support up to 32 audio tracks and 255 audio channels per audio track. The audio tracks of a video file can be separate files using a specific folder structure, or they can be embedded in a single video file. Some video files can only contain one audio track, which can have one channel (mono) or two channels (which can be stereo). A stereo track becomes audibly stereo only after its channels have been assigned to the intended speakers by panning the channels to the left and to the right. A 5.1 audio track has six channels for the six speakers of the 5.1 sound system. Surround sound could be stored as one track of six channels, but it would not be as easy to decode; six separate tracks are easier to handle and allow a higher bitrate per channel than the channels of a single track.
A 5.1 audio track has six channels for the six speakers of the 5.1 sound system. Surround sound could be stored as one track of six channels, but it would not be as easy to decode; six separate tracks are easier to handle and allow a higher bitrate per channel than the channels of a single track.
17 Transcode video file to optimise it for different purposes
Transcode operations can take a lot of time and they can degrade the quality of the image, so good planning becomes central: the selection of your gear dictates which transcode operations are mandatory and which are optional. Consumer cameras record highly compressed video files in order to fit them onto the camera's small memory card. To be able to edit the footage on a basic laptop, you need to transcode the video files into a less compressed mezzanine, meaning in-between, form so that you can edit the material smoothly. Or, if you edit in the cloud, you can transcode super-light proxy versions of the camera clips so that the clips move swiftly within the network. Delivery of the finished video requires a list of transcodes for all the destinations and viewing settings. And finally, the archive requires a last transcode of the finished video and selected raw materials.
Because video files are so taxing on the computer, you sometimes need to change their form to fit whatever you need to do with the files. A certain video file form suits recording, some other form suits editing, and lastly, some video formats are best suited for distribution. Some transcode operations can be automated so that the user can concentrate on other tasks. In many cases, especially with non-professionals, the user doesn't have to do any manual transcodes at all.
18 Resolution of a video file
Video resolution is the number of pixels contained in each frame of a video, measured by the number of pixels in the horizontal and vertical directions. The 1080p, Full HD video frame has 1920 pixels horizontally and 1080 pixels vertically. The more pixels in each frame, the sharper and clearer the image. By our current standards, the Full HD resolution is quite small, and yet, for example, Star Wars: The Phantom Menace was shot in 1999 using less than Full HD resolution, and Attack of the Clones (Episode II) was the first Star Wars movie to be shot in Full HD, in 2002. YouTube added Full HD support in late 2009. In the beginning, it took hours to render files at a resolution that was back then considered nothing less than enormous. Now 8K resolution is nothing special, and Blackmagic Design's URSA Mini Pro 12K can record 12K.
Video resolution options have increased through different social media platforms and advances in video technology. Resolution can bring more clarity to the video image but it also can increase the file sizes considerably. Having lots of extra resolution when editing will enable the editor to re-frame the shots if need be.
19 Aspect ratio of a video file
Aspect ratio is the proportional relationship between the width and height of the video image. The most common aspect ratios now are the vertical 9:16 and the horizontal 16:9. If your display and the material you are watching have different aspect ratios, there will be black bars at the top and bottom, or on the sides, of the video. The 4:3 aspect ratio has historically been by far the most used: the earliest movies were filmed with it, and it was only in the 1950s that the first widescreen movies came to theatres, with the purpose of giving viewers a more immersive experience. Many modern videographers have returned to recording in the 4:3 aspect ratio, as its image is the easiest to reshape into both 16:9 and 9:16, the latter of which you are looking at right now.
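A small arithmetic sketch of why 4:3 reframes well into both 16:9 and 9:16; the 4000 × 3000 pixel source used here is a hypothetical example.

```python
# Reframing a 4:3 source into 16:9 and 9:16 crops.
width, height = 4000, 3000                  # hypothetical 4:3 frame

crop_16x9 = (width, int(width * 9 / 16))    # keep full width  -> (4000, 2250)
crop_9x16 = (int(height * 9 / 16), height)  # keep full height -> (1687, 3000)
print(crop_16x9, crop_9x16)
```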
This video uses the vertical aspect ratio of 9:16, optimal for phones. The option to record footage in the so-called “open gate” mode has also increased the popularity of the 4:3 aspect ratio, as it utilises the full surface area of the camera's image sensor. A full-frame camera sensor has an aspect ratio of 3:2, so when using open gate, that is also the aspect ratio of the recorded files. 3:2 is slightly wider than the 4:3 aspect ratio.
20 Fast Turnaround and longform content
Video production can be divided into two categories: fast-turnaround content and longform content. The majority of video production has shifted to fast-turnaround productions, meaning that where videos were once created in days or weeks, the same amount of published video content now takes hours or minutes to make. The majority of creators record, edit and deliver in a single MP4, H.264 video file, because they can and because it's good enough. Cameras have evolved to record much higher-quality video files, modern computers can edit them natively, and the platforms take care of transcoding our material for the audiences. Longform content strives for maximum image quality, and as it pushes technical boundaries, it will require transcoding the footage into different codecs.
The number of types of video production has increased. One can generalise that modern video production is made with a fraction of the crew size, time and cost of productions in the heyday of broadcast television. On the other hand, technology keeps evolving, and there will always be productions that aim for the cutting edge: that means big crews, more time and a big production budget.
21 Human vision vs. computer vision
The way a computer processes images and the way we humans see images is NOT the same. The first thing to note is that the way computers function and process images is logical and predictable, whereas human vision is not. Our vision does not function linearly. The CIE 1931 chromaticity diagram is a representation of color in terms of hue and saturation, but not brightness; it demonstrates colors as we see them. If our vision functioned linearly, this diagram would form a perfect triangle with equal amounts of red, green and blue.
Is my red your red? Probably not. The camera shows the recorded image on a small LCD or OLED display, and the image data is based on multiple camera settings. Then the image is viewed on multiple different monitors in the edit bay, where it may be transcoded several times over, until it is released and watched by the audience on all kinds of displays. Google "CIE 1931" to see the diagram.
22 Human vision has adapted to low light
The human eye sees more shades of green than it sees shades of blue and red. Evolution has shaped our vision to make us see better in dark conditions. This means that we perceive brightness as something more than what it actually is; in other words, to our eyes, things APPEAR much brighter than they actually are. If we were asked to point out the midpoint between pure black at zero and pure white at one hundred, we would not point to the mathematical value of fifty: our eyes make us point to a value of about eighteen.
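One way to see why the perceived midpoint sits near eighteen rather than fifty: a display gamma curve, which roughly mimics our perception, pushes 18% linear light close to the middle of the encoded range. A simple power-law gamma of 2.2 is assumed here purely for illustration.

```python
# 18 % linear reflectance, passed through an illustrative gamma of 2.2,
# lands near the middle of a 0-1 encoded scale.
linear_reflectance = 0.18
encoded = linear_reflectance ** (1 / 2.2)
print(round(encoded, 2))   # ~0.46
```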
More than 50% of the brain's functionality is taken up by seeing. The eyes also have the most active muscles in our body. We blink about 15-20 times per minute, and an average blink takes about one third of a second, so roughly 10% of our waking time is spent with our eyes closed. We can't trust our eyes alone to know which color value is which in any specific image. The lighting in the edit bay influences the way we see colors on our grading monitor. These are just some of the reasons why we rely so heavily on scopes as we color grade our image.
23 Chroma subsampling: less color data
Chroma subsampling is a compression method based on human vision: it compresses and reduces the data we are less capable of seeing, which is changes in color (chrominance), and it retains the data we detect well, which is changes in brightness (luminance). Thereby the number of bits used for the color data (the R, G and B values) of each pixel can be reduced without the human eye noticing. Chromaticity is about the ratios of red, green and blue, and if we know the values of two colors, we can deduce the value of the third: for example, if red is forty percent and green is forty percent, then blue must be twenty percent. This is why a two-dimensional x-y graph is enough to depict the range of color, as in the CIE 1931 chromaticity diagram.
Even though chroma subsampling of 4:2:0 cuts the total amount of image data by 50% compared to 4:4:4, we as viewers are usually not able to see anything unusual in the color reproduction of the image. This is because the color data is reduced in such a smart manner. The data is preserved in full, at a chroma subsampling value of 4:4:4, while grading the image, but it can be reduced to 4:2:0 for the distributable, finished version of the file, as it no longer needs to be graded or modified.
24 Chroma subsampling: settings
The chroma subsampling setting of 4:1:1 carries the same amount of data as 4:2:0, sampling the chroma at quarter horizontal resolution; it was built to suit interlaced broadcast material. In any other case, avoid this mode. 4:2:0 keeps the data level for luminance full, but the chrominance is sampled at quarter resolution. 4:2:2 keeps the luminance data full, and the chrominance is sampled at half resolution. With 4:4:4 the data levels are equally full and there is no chroma subsampling: the image data is sampled at full resolution for both the luminance and the chrominance of the image.
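The relative data rates are easy to verify with a little arithmetic over a 4×2 pixel reference block (uncompressed 8-bit video is assumed here):

```python
# Average bits per pixel for uncompressed 8-bit video under different subsampling schemes,
# counted over a 4×2 pixel block (8 luma samples plus two chroma planes, Cb and Cr).
def bits_per_pixel(chroma_samples_per_plane: int, bit_depth: int = 8) -> float:
    pixels = 8                                              # the 4×2 reference block
    luma_samples = 8                                        # luminance is never subsampled
    samples = luma_samples + 2 * chroma_samples_per_plane   # add both chroma planes
    return samples * bit_depth / pixels

for name, chroma in [("4:4:4", 8), ("4:2:2", 4), ("4:2:0", 2), ("4:1:1", 2)]:
    print(name, bits_per_pixel(chroma), "bits per pixel")
# 4:4:4 -> 24.0, 4:2:2 -> 16.0, 4:2:0 -> 12.0, 4:1:1 -> 12.0
```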
Chroma subsampling is a smart method of data reduction, but when overused, it can introduce artifacts and reduce the color accuracy of the video image. H.264 and H.265 files use chroma subsampling, but RAW video files, such as BRAW, don't: raw files are much less processed, hence the name “raw”, and they carry the sensor's color data without chroma subsampling, which gives more flexibility in post.
Obviously a video file using 4:2:2 takes up more space than a file using 4:1:1 chroma subsampling, but the quality difference can be visible, and for uncompressed 8-bit video the size difference is only about 25% (16 versus 12 bits per pixel). Using 4:2:0 chroma subsampling brings a significant drop in file sizes. In short, 4:2:0 is ideal for distribution, whereas 4:4:4 is ideal for post production.
25 Video compression codecs: intraframe
Video compression codecs can be divided into intraframe and interframe codecs. Interframe video codecs are lossy, while intraframe video codecs can also be lossless. Intraframe compression compresses each frame of video individually: the encoder analyses each frame and looks for ways to reduce the amount of data needed to represent it. This can be done by removing redundant information, such as similar pixels or areas of the image that are not moving. Intraframe compression typically produces higher-quality video than interframe compression, but it also produces larger file sizes.
Intraframe compression uses, among other techniques, run-length encoding (RLE), which identifies and replaces repeated sequences of pixels with a shorter code, and the discrete cosine transform (DCT) together with quantization, which break the image down into frequency components and compress the most visible frequencies less. As intraframe compression deals with individual frames, it enables random access, and possible errors don't necessarily span multiple frames as they do with interframe compression.
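As a toy illustration of the run-length idea, here is a minimal encoder; real intraframe codecs combine this kind of redundancy removal with DCT and quantization.

```python
# A toy run-length encoder: repeated pixel values collapse into (value, count) pairs.
def rle_encode(pixels):
    encoded = []
    for value in pixels:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([value, 1])   # start a new run
    return encoded

row = [255, 255, 255, 255, 17, 17, 200]
print(rle_encode(row))   # [[255, 4], [17, 2], [200, 1]]
```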
26 Video compression codecs: interframe
Interframe compression compresses video by comparing each frame to the previous frame: the encoder only needs to encode the differences between the frames. This can be done by using motion vectors to track the movement of objects in the scene. Interframe compression uses keyframes, which contain all the data of a full frame, and in between the keyframes there are delta frames, which only contain the data of the incremental changes between frames. This means that intraframe-compressed frames are more easily retrievable than frames in interframe compression, where most frames need to be reconstructed from other frames before they can be displayed individually.
Interframe compression’s weaknesses include the fact that errors in one frame can affect the decoding of subsequent frames, leading to a chain of errors throughout the video stream. Interframe-encoded video may not allow for random access, meaning that decoding a particular frame may require reference to previous frames. This method reduces file size by exploiting the temporal redundancy between frames of a video sequence. It works by identifying and encoding only the changes between successive frames, rather than storing the entire frame data independently. Interframe encoding relies on the concept of motion estimation, which involves identifying the motion vectors that represent the movement of objects between frames. These motion vectors are used to predict the current frame based on the previous frame, eliminating the need to store the entire current frame. The remaining information, known as the residual signal, represents the differences between the predicted and actual frames and is stored more efficiently.
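A toy sketch of the keyframe-plus-delta idea: store one frame in full, then store only what changed. Real interframe codecs use motion vectors and residuals rather than raw per-pixel differences.

```python
# Encode a frame as the set of pixels that differ from the previous frame.
def encode_delta(previous, current):
    return {i: value for i, (old, value) in enumerate(zip(previous, current)) if old != value}

def decode_delta(previous, delta):
    frame = list(previous)
    for i, value in delta.items():
        frame[i] = value
    return frame

keyframe = [10, 10, 10, 10, 10, 10]
frame_2  = [10, 10, 99, 10, 10, 10]    # only one pixel changed
delta = encode_delta(keyframe, frame_2)
print(delta)                           # {2: 99} -- far less data than a full frame
print(decode_delta(keyframe, delta))   # [10, 10, 99, 10, 10, 10]
```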
27 PAL & NTSC Broadcast television standards
The analog broadcast TV standards of NTSC and PAL have caused trouble for video creators around the globe for decades through their incompatibility. NTSC was developed in the United States in the 1950s, and the PAL system was developed in Europe in the 1960s. The most notable difference between the standards is the frame rate: 25 for PAL and 30 for NTSC. The color encoding and the resolution of the image are not the same either, and they also scan the lines of the image differently, all of which will continue to give failed imports and grey hair to those digging through the archives. In the 1990s, the ATSC and DVB-T broadcast standards were developed for digital broadcasting.
Conversions between NTSC and PAL have caused problems because of different color encoding schemes, different frame rates and different timing in the scanlines of the interlaced image structure. Sync issues, glitches, flickering, artifacts, color fringing and bleeding, you name it.
28 Progressive and interlaced video image
Interlaced video was developed in the 1920s because the CRT TVs of the past did not have enough bandwidth to display full images without flicker at the required frame rate. In interlaced video, each frame contains only half of the image, as odd and even interlaced lines, and playback creates the illusion of watching full frames. Interlaced video is thankfully going the way of the buffalo, as progressive video has now become the standard for high-definition video: it scans the entire image from top to bottom, as it was always supposed to be, giving a sharper and more detailed image than interlaced video.
Progressive video encodes all of the lines of the video frame in a single pass, from top to bottom. Interlaced video encodes the odd and even lines of the video frame separately: the odd-numbered lines are encoded in one pass, as one field, and the even-numbered lines in a separate pass, as the other field.
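A tiny sketch of the field split: the odd-numbered and even-numbered lines of a frame form the two interlaced fields.

```python
# Split a progressive frame (a list of lines) into its two interlaced fields.
frame = ["line0", "line1", "line2", "line3", "line4", "line5"]
field_a = frame[0::2]   # lines 0, 2, 4
field_b = frame[1::2]   # lines 1, 3, 5
print(field_a)          # ['line0', 'line2', 'line4']
print(field_b)          # ['line1', 'line3', 'line5']
```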
29 Color bit depth of a video file
Color bit depth refers to the number of bits used to store and represent the red, green and blue color channels for each pixel of the video image. It determines the number of possible values per channel: an eight-bit color channel can represent 256 different tones, a ten-bit channel 1,024 different tones and a twelve-bit channel 4,096 different tones. The more colors that can be represented, the more accurate the color reproduction will be. A bit depth of 8 bits per channel can be surprisingly decent, but it can lead to banding and other artifacts in high-contrast images. A bit depth of 10 bits per channel is a big jump from eight bits, and as it offers a wider tonal range, it opens up the possibility of color grading the image.
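The arithmetic behind those numbers is simply two to the power of the bit depth, per channel:

```python
# Tones per channel = 2 ** bit depth; total RGB colors = (tones per channel) ** 3.
for bits in (8, 10, 12):
    tones = 2 ** bits
    print(f"{bits}-bit: {tones} tones per channel, {tones ** 3:,} possible RGB colors")
# 8-bit: 256 -> 16,777,216 colors; 10-bit: 1024 -> ~1.07 billion; 12-bit: 4096 -> ~68.7 billion
```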
This topic of color, meaning “what lies inside a pixel in the image frame of a video file”, is perhaps the area of most complexity. In this video I go through one scenario out of countless possibilities, and I will offer more content on the “inner workings of an image pixel” later. An RGB color space has three channels, which contain both the luminance and the chrominance data, meaning the brightness and the color data. In the more advanced YCbCr color model, the luminance and the chrominance data are separated, allowing the chroma subsampling method of compression.
It is actually the color space used that largely defines how gradeable a video file will be in post.
30 Clipping highlights in a video file
When an HDR video is played on an SDR display, the display's brightness range is too limited to accommodate the brightest highlights of the video. The highlights are compressed, or "clipped," to fit within the SDR range, and the whites in the video appear washed out or blown out, losing some of their detail. One of the best solutions to this is HLG (Hybrid Log-Gamma), a high dynamic range (HDR) standard designed to be backward compatible with standard-dynamic-range (SDR) displays. One of the key advantages of HLG over other HDR standards is that it does not require any special metadata to be transmitted with the video signal, which makes it ideal for broadcast television, where metadata transmission can be problematic. Another advantage of HLG is that it is relatively easy to implement on both the encoding and decoding sides, which has made it a popular choice for streaming video services and gaming consoles.
Other popular HDR formats include HDR10+ and Dolby Vision.
To use HLG, you need to record your video in HLG. Before purchasing a camera, make sure it is able to record footage using HLG.
For the HDR signal to be compatible with both SDR and HDR displays, it needs to be compressed more than with other HDR formats. This means that the HLG format offers compatibility at the expense of quality.
31 Bit rate of a video file
Bitrate is the amount of data used to encode each second of video. A video file has a video bitrate, an audio bitrate, and the sum of these two as the total bitrate. Bitrate is measured in megabits per second (Mbps) or megabytes per second (MBps). This adjustable value is the biggest contributor to the size and quality of the video file. The same bitrate can be high or low, depending on the resolution, frame rate, color depth, chroma subsampling and video codec used. Bitrate can be constant (CBR) or variable (VBR), in which the data rate goes up or down depending on the amount of movement in the image, usually resulting in a smaller file size than constant bitrate.
“With video files, it depends…” Constant bitrate and variable bitrate are merely different methods of encoding the image frame data of the video. Depending on the settings, variable bitrate can also end up taking more space than constant bitrate, but historically, the constant bitrate option has resulted in larger file sizes than its variable bitrate counterparts.
32 Color space of a video file
The color space of a video file defines the range, meaning the gamut, of the color, determining how accurately colors are encoded and decoded in the video file. The wider the gamut, the more accurate the color reproduction will be, and also the bigger the file will end up being. Rec. 709 is currently the most common color space standard in use. It originates from broadcast, and it is also synonymous with so-called SDR, which stands for standard dynamic range. Rec. 2020 is a still emerging, more advanced color space standard designed for HDR, high dynamic range video. It is capable of a higher dynamic range, meaning that it can display a wider range of brightness levels, and it offers a much wider color gamut than the previous SDR standard, Rec. 709. Rec. 709 is commonly used at eight bits, whereas Rec. 2020 starts from ten bits.
The dynamic range of color in a video file refers to the difference between the darkest and lightest colors that the video can represent. Higher dynamic range means that the video can represent a wider range of colors, which can result in more vivid and realistic images.
Note that Rec.709 and BT.709 are two names for the same ITU-R standard, just as Rec.2020 and BT.2020 are; they are not older and newer variants of each other.
33 Filename of a video file
When we name a file, what helps most is following conventions. Here are some of the best: First, keep the filename short. Second, make the filename descriptive, so that even an outsider can deduce as much as possible about the contents of the file from its name. Third, whatever naming logic and conventions you follow, stick with them. Fourth, avoid using special characters. Fifth, remember to include a version number in the filename. Sixth, document your naming conventions so that others can understand your filenames better. Seventh, use unique keywords in your filenames, so that your search result lists are as short as possible and you find your own clips much faster.
Additional tips: Use only alphanumeric characters (a-z, A-Z, 0-9) in the file name to ensure compatibility with various operating systems and software. Avoid non-alphanumeric characters like symbols, spaces, or special characters that might cause issues.
Establish a logical and consistent folder structure to organize your video files within the project. This structure should mirror the file naming convention, making it easy to locate specific files. Create a detailed project documentation file outlining the file naming convention and folder structure, ensuring everyone involved in the project is aware of the guidelines. Communicate the file naming convention and folder structure clearly to all team members and stakeholders to maintain consistency and avoid confusion.
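As a sketch, a naming convention can even be enforced with a small script; the pattern below is one made-up example of a project_scene_take_version convention, not a standard.

```python
# Check filenames against a hypothetical "project_scNN_tNN_vNNN.ext" convention.
import re

NAME_PATTERN = re.compile(r"^[a-z0-9]+_sc\d{2}_t\d{2}_v\d{3}\.(mov|mp4|mxf)$")

def check_name(filename: str) -> bool:
    ok = bool(NAME_PATTERN.match(filename))
    print(f"{filename}: {'OK' if ok else 'does not follow the convention'}")
    return ok

check_name("harbour_sc04_t02_v003.mov")   # OK
check_name("Final Version (NEW)!!.mov")   # does not follow the convention
```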
34. Framerate of a video file
Video is recorded as frames, and the frame rate of a video file determines how smooth the motion will appear. The earliest films ran at roughly 15 frames per second, with inconsistent frame rates. 24 fps is the minimum frame rate that the human eye perceives as smooth motion. Capturing at higher frame rates enables slowing the video down without compromising image quality.
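A quick example of the overcranking arithmetic (the 120 and 24 fps values are just one example pairing):

```python
# Slow motion from overcranking: record at a high frame rate, play back at a lower one.
record_fps = 120
playback_fps = 24
print(record_fps / playback_fps)   # 5.0 -> the action plays back 5x slower
```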
The higher the frame rate, the smoother the motion in the video will be. We have grown accustomed to the subtle flicker that playback at 24 frames gives; we call it "the cinematic look". 24 frames per second is the current cinema standard for movies. The European PAL system uses 25 or 50 frames per second, and the American NTSC system uses 30 (or 29.97, to be precise) or 60 frames per second.
35. Filesize of a video file #1
The data rate of video being recorded onto a memory card is often presented in megabits per second, like 200 Mb/s, while the size of the files stored on a computer's hard drive is presented in megabytes (MB). 200 megabits per second is around 25 megabytes per second. To evaluate how much storage will be needed, we divide the bitrate by 8 to get the megabytes-per-second value: 200 / 8 = 25 MB/s. Then we multiply that result by 60 to see roughly how much one minute of this footage takes up: 25 × 60 = 1,500 MB, or 1.5 GB, per minute.
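The same arithmetic as a small reusable function:

```python
# File size from bitrate: megabits per second / 8 = megabytes per second, times duration.
def size_in_mb(bitrate_mbps: float, duration_s: float) -> float:
    return bitrate_mbps / 8 * duration_s

print(size_in_mb(200, 60))         # 1500.0 MB for one minute at 200 Mb/s
print(size_in_mb(200, 60) / 1000)  # 1.5 GB per minute
```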
36. Gigabit ethernet is the bottleneck in cloud editing: HEVC for proxy!
H.264 is already a heavily compressed codec, and yet a 10-minute HEVC (H.265) video file at 1080p resolution and 30 frames per second can be around 200 MB in size, while the same video in H.264 format would be around 500 MB. If you stream video over the internet, this difference is significant. Network bandwidth has become the major bottleneck for cloud editing. The main reason why HEVC is now the recommended format for proxy files in collaborative, cloud editing environments is its efficiency. Another reason is that, as the codec is no longer new, modern processors are able to decode and encode the files relatively easily.
The idea that HEVC files could be used as proxies is fairly recent: only some years ago, processing power wasn't sufficient for the smooth playback of even H.264 files, which are much less taxing on the processor than HEVC files.
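To see why the bandwidth matters, here is a rough back-of-the-envelope sketch in Python comparing how long the 200MB HEVC proxy and the 500MB H.264 file from the example above would take to move over a nominal 1 gigabit link; real-world throughput will be lower than the ideal figure used here.

```python
# Rough transfer-time comparison over a nominal 1 Gbit/s link (ideal case, no overhead).
LINK_MBIT_PER_S = 1000

def transfer_seconds(file_size_mb: float) -> float:
    """Seconds needed to move a file of the given size (in megabytes) over the link."""
    return file_size_mb * 8 / LINK_MBIT_PER_S

for label, size_mb in [("HEVC proxy", 200), ("H.264", 500)]:
    print(f"{label}: {size_mb} MB -> about {transfer_seconds(size_mb):.1f} s")
# HEVC proxy: about 1.6 s, H.264: about 4 s -- and the gap grows with every clip you share.
```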
37. Sequence formats of a video file
Sequence formats are a type of video file format that stores each frame of a video as a separate file, with one clip being one folder. This kind of format, like the EXR format released by Industrial Light & Magic in 2003, is used in VFX work. It is also well suited as a render cache format, as it handles interruptions well: if all image frames were compressed into a single file, as with most common codecs, and the building of the file were interrupted, the process would have to start over instead of continuing from the frame where it was interrupted. Render farms also operate on individual frame files to avoid conflicts.
ISF (image sequence format) files aim for maximum image quality. They are used mainly in VFX work, but also in pre-visualization and animation, and in projects where it is important to have precise control over the timing and sequencing of frames. Image sequences can be imported from and exported to a variety of different formats, including TIFF, PNG, and JPEG.
38. Flags of a video file
Flags are bits of data that are used to control the playback of the video. For example, they can be used to rotate the video, change its aspect ratio, speed up or slow down the playback, crop the video, or apply filters to it. Metadata, in turn, is data stored about the video, such as the title, author, and copyright information. When metadata is stored in the video file, it is usually stored in the flags, but it can also live in other parts of the file, such as the file header or the video track. Flags are usually stored in a binary format, while metadata can be stored in a variety of formats, such as text, XML, or JSON. Video and audio track flags can indicate the video's resolution, frame rate and playback speed, and the audio's sample rate, bit depth, and channels, while metadata flags can store other information about the video, such as the title, author, and copyright.
Metadata can be embedded in the flags of a video file. Flags are bits of information stored in the header of a video file. You can also apply flags to your video in your timeline in Resolve, and these flags can be assigned as chapter marks for your video. There are also "flags" within a video file itself, which can do a lot more than the flags we can see and use in Resolve.
A flag is a single bit or byte that indicates a particular condition or state of the file. Flags are typically used to store additional information about the file that is not directly related to the content itself. Flags are an efficient way to store additional information about a file because they take up very little space. They are also very fast to access, as they can be read and modified directly by the file system.
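As an illustration of how little space flags need, here is a small Python sketch of the general idea: single bits packed into one byte, each standing for an on/off condition. The flag names are invented for the example and do not correspond to any real container specification.

```python
# Hypothetical flag bits packed into a single byte (names invented for illustration only).
FLAG_ROTATED       = 0b0001  # bit 0: image should be rotated on playback
FLAG_HAS_SUBTITLES = 0b0010  # bit 1: an embedded subtitle track is present
FLAG_HAS_CHAPTERS  = 0b0100  # bit 2: chapter marks are present

header_flags = 0
header_flags |= FLAG_ROTATED        # set a flag
header_flags |= FLAG_HAS_CHAPTERS   # set another

if header_flags & FLAG_HAS_CHAPTERS:           # test a flag
    print("player should show a chapter menu")

header_flags &= ~FLAG_ROTATED                  # clear a flag
print(f"flags byte is now {header_flags:08b}")  # -> 00000100
```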
39. Embedding data in a video file
Embedded data means separate sets of data all residing within the wrapper of one video file. The benefit is that one video file, instead of multiple files and folder structures, contains everything you need. The downside is that if some element of the embedded data needs to be altered, you need to re-transcode the entire video file. For example, the benefit of non-embedded subtitles is that separate subtitle files and languages can be added later on, and the different language tracks can be altered without having to re-encode the entire video clip. With non-embedded subtitles, you first need to assign the subtitle file to be used with the video file, whereas with embedded subtitles, you can activate the subtitles directly from the video player's menu.
Embedding data in a video file is a technique for enhancing the usability, interoperability, and protection of digital video content. By embedding relevant information within the video file itself, organizations can improve data organization, facilitate automation, personalize user experiences, and protect their intellectual property.
40. Proxy file of original camera file
Proxies used to exist because a lack of processing power resulted in stuttery playback. Now proxies exist because of a lack of streaming bandwidth. The higher the compression of a codec, the harder it was for the editing software and processor to handle the workload with sufficient smoothness. Nowadays processing power has increased so much that the bottleneck is shifting towards the lack of network throughput. H.264 and H.265 used to be shunned as editing codecs because of their high toll on processing; now those same codecs are praised for their ability to compress decent quality into as small a bitrate as possible, thereby enabling things such as "cloud editing".
The use of proxy files was thought to be diminishing over time, but now, with the rise of collaborative cloud editing, the use of proxies has become more popular than ever before. With smaller local projects we can usually work with local, native files without the extra work that proxies bring, but in bigger productions with multiple users, proxy files can be a significant aid to the editors and other post-production professionals working with the heavy original material.
41. Variable frame rate, VFR & constant frame rate, CFR of a video file
The variable frame rate option in a camera means that you can choose different frame rates for capture and playback, resulting in a file that plays back either faster or slower than real time. A common use case is recording at a high frame rate like one hundred and eighty frames per second while the playback frame rate is set to thirty frames per second, giving you a slow motion video file. VFR video files can have no fixed playback frame rate, as the rate can change depending on the metadata timestamp contained in each frame. It tells how long or short a duration each frame should be played, making the playback either speed up or slow down within the file. The old broadcast tech was built on constant frame rates and is thereby mostly incompatible with VFR files.
The older legacy technology of broadcast and cinema used a constant frame rate. Modern digital technology can use a variable frame rate, which is a more flexible and advanced way of handling frames as they are needed, in either a fixed or unfixed manner.
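A minimal sketch of the slow-motion arithmetic described above: the capture frame rate divided by the playback frame rate gives the slowdown factor.

```python
def slow_motion(capture_fps: float, playback_fps: float, clip_seconds: float) -> None:
    """Show how off-speed capture stretches a clip on the timeline."""
    factor = capture_fps / playback_fps
    print(f"{capture_fps} fps played at {playback_fps} fps -> {factor:.0f}x slower, "
          f"a {clip_seconds}s recording lasts {clip_seconds * factor:.0f}s on the timeline")

slow_motion(180, 30, 10)   # the 180 fps example from the text: 6x slow motion
slow_motion(50, 25, 10)    # a PAL-style 2x slow motion
```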
42. Variable Bit Rate, VBR & Constant Bit Rate, CBR of a video file
Constant frame rate and constant bit rate are often used together in video files. This is because constant frame rate ensures that the video plays back at a smooth and consistent speed, while constant bit rate ensures that the video file has a predictable size. If you deliver your content to broadcast, you might need to transcode your files to a constant bitrate and a constant frame rate. If you don't deliver to broadcast, using variable frame rate and variable bit rate will give you more efficiency and more options down the line. With a constant bitrate and frame rate, the potential complexity of the image is not taken into account: if you record a minute of a static wall and a minute of a fast-moving waterfall, the bitrate remains identical even though the difference in the amount of image data is significant.
Bitrate is the most important setting of a video file. It could be argued that it is the same as the quality of a video file. Do you find this video helpful? Do drop me a comment!
43. Constant or Variable Bitrate for a video file?
Variable bitrate and constant bitrate won't by themselves dictate the quality of the image. Constant bitrate files tend to be bigger than variable bitrate files, and constant bitrate also uses the data less efficiently: for the same file size, constant bitrate will produce a lower quality image than a file using variable bitrate. Variable bitrate is simply a more advanced method, as it can allocate the bitrate based on the amount of changing visual data: static moments can do with less than the average bitrate, whereas moments with lots of movement need bitrates that are much higher than the average. The main reasons to choose constant bitrate are increased compatibility, a client who demands it, or a need to deliver your video file at a certain specified size.
Choosing between Constant Bitrate (CBR) and Variable Bitrate (VBR) for a video file involves balancing quality, file size, and compatibility. CBR ensures a consistent bitrate throughout the video, leading to predictable file sizes and stable quality. VBR is optimal for maximizing quality and efficiency, especially when file size constraints are present.
44. Modify camera settings in post with RAW
The most essential camera settings affecting the video image, such as aperture, ISO, shutter speed, and white balance, have a history of being more or less baked into the video file. But now, when working with Blackmagic RAW (BRAW) footage in DaVinci Resolve, settings such as ISO and white balance are carried in the metadata of the video file, so you can adjust them not only in-camera but also during post-production. This freedom to adjust camera settings in post has always been one of the biggest selling points of using a RAW format.
RAW video formats allow you to modify settings that are typically unchangeable once a video is recorded in standard formats. Raw video captures the unprocessed data directly from the camera sensor, preserving all the details without applying any in-camera processing, such as white balance, exposure, contrast, or sharpness adjustments.
45. White Balance setting of a video file
The color temperature of light changes most when we go from indoors to outdoors. If the camera doesn't have the correct color temperature information, the colors will be recorded inaccurately; we can see this from the white areas of the image having an unnatural orange or bluish tint. White balance neutralises the color casts, making white objects appear white and thereby making the rest of the tones accurately represented, allowing viewers to perceive the scene as it appeared in real life. With modern cameras the automated white balance works so well that users tend to forget its existence. However, with multicam work it's often a good idea to manually assign a common white balance setting for the cameras in order to ensure the visual consistency of the footage.
White balance in video cameras is a setting that adjusts the colors to ensure that whites appear white and all other colors look natural under different lighting conditions. Light sources, like sunlight, incandescent bulbs, and fluorescent lights, emit light with varying color temperatures, measured in Kelvin (K). For example, daylight is around 5600K, which is relatively neutral, while indoor tungsten lighting is around 3200K, which is warmer and more yellow.
46. Shutter Speed setting of a video file
Shutter speed determines how motion is captured in the video by setting how long the camera's sensor is exposed to light for each frame. A slower shutter speed allows more light to reach the sensor, also creating more motion blur, while a faster shutter speed can make the image seem more detailed and clear, and the movement potentially more stuttery. In low-light conditions you will need to use a slower shutter speed to allow more light to reach the sensor. In bright conditions you can use a faster shutter speed to reduce motion blur. Keep in mind that shutter speed works together with aperture: for the same exposure, a faster shutter speed pushes you towards a wider aperture and thus a shallower depth of field, while a slower shutter speed lets you close the aperture down so that more of the image is in focus.
Shutter speed setting refers to the amount of time each individual frame is exposed to light. It plays a crucial role in determining the motion blur and exposure of the video. Shutter speed is often set relative to the frame rate (frames per second, FPS) of the video. A common guideline is to use a shutter speed that is double the frame rate (known as the 180-degree shutter rule), such as 1/60 for 30 FPS video, to achieve natural-looking motion.
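A small sketch of the 180-degree shutter guideline mentioned above: the shutter duration is roughly one over twice the frame rate.

```python
from fractions import Fraction

def shutter_180(frame_rate: float) -> Fraction:
    """Return the shutter speed suggested by the 180-degree rule: 1 / (2 * fps)."""
    return Fraction(1, int(round(2 * frame_rate)))

for fps in (24, 25, 30, 60):
    print(f"{fps} fps -> shutter around 1/{shutter_180(fps).denominator}")
# 24 fps -> 1/48, 25 fps -> 1/50, 30 fps -> 1/60, 60 fps -> 1/120
```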
47. Aperture setting of a video file
Aperture is a bit like the iris in the human eye. It refers to the metal blades in the camera lens that control the amount of light reaching the sensor. In darker environments you need a wider aperture, which allows more light to reach the sensor, resulting in brighter images; by doing so, it also makes the depth of field shallower, meaning the range of the scene that can be in focus. In other words, you will get more blurred bokeh in your image. In brighter conditions you need to narrow down the aperture to block excess light from reaching the sensor, and by doing so you also widen the area in focus. In changing lighting conditions, auto aperture is often great, but if you have a certain look in mind you need manual aperture control, and when outside you may need to use ND filters to block excess light.
Aperture refers to the adjustable opening in the lens that controls the amount of light entering the camera. It plays a key role in determining the exposure and depth of field in a video. Aperture is crucial for controlling light intake, exposure levels, and the depth of field, thereby significantly affecting the aesthetic and technical quality of the video.
48. Blackmagic Cloud Workflow
You capture footage with an iPhone using the Blackmagic Camera app, which sends your footage to the Blackmagic Cloud, where the shared project files also reside for all the team members working on the same shared project. Or you capture footage with a Blackmagic cinema camera and upload proxy versions of your files to the Blackmagic Cloud; once the edit is finished, you can relink the footage to the original files, finalise grading and deliver the finished product.
The Blackmagic Camera app is no longer Apple exclusive, as a short list of recent Android phones now also supports the app. Blackmagic Cloud is a cloud storage solution that allows you to sync and share media files globally. It's like Google Docs for video editing, enabling seamless collaboration and efficient media sharing across projects. With Resolve, it gives many new options to do things differently, more collaboratively, if you so choose.
49. ISO setting of a video file
ISO controls the sensitivity of your camera's sensor to light. With ISO you set the optimal exposure and image quality for your videos. The ISO setting has the greatest impact on the exposure, noise, and dynamic range of the video image, and the more challenging the lighting conditions you work in, the more important it becomes. The darker the environment, the higher the ISO value needs to be in order to maintain a proper exposure. The downside is that a high ISO will narrow down the dynamic range of the image and also introduce more noise. Conversely, in a bright environment you need to lower the ISO to maintain proper exposure; a lower ISO value will also widen the dynamic range and reduce the amount of noise in the image.
ISO setting of a video file. Using my Panasonic GH5 camera, in extremely dark situations, 3200 is the highest ISO value I will use, as the image quality becomes too low. Choosing the appropriate ISO setting depends on the lighting conditions and the desired balance between exposure and image quality.
50. Should you use RAW for video?
Raw formats store the unprocessed image data from the camera's sensor. This means they contain the information captured by the sensor before a viewable image has been built from it. Conventional video formats are the other option: there the visible image is built and compressed in-camera, whereas with RAW that happens in software during post-production. Raw video formats give you the most flexibility in post-production and allow you, at least in theory, to produce the highest quality videos. The downside is that this takes a lot of time, skill, storage space and processing power. Raw video is also an option for archiving: in theory, the raw footage you shoot or archive today might look a lot better in ten years, as the debayering algorithms of post-production software evolve over time.
When the highest possible image quality is your top priority, then some RAW format might be your best choice. Keep in mind though that using some RAW format will add complexity to your workflow, including the need for data management and more time spent on post-production.
51. Full Intra keyframes & groups of partial inter frames of a video file
An inter-frame codec has groups of partial frames between the full key frames. The key frames need to contain the full data of an image frame, so they are built using intra-frame prediction: the frames are treated independently of each other, and their compression reduces the redundant information within the frame, similar to JPEG image compression. The rest of the frames, the so-called group of pictures, use inter-frame prediction, where the video data is compressed by saving only the changes between frames, found by comparing their differences. So the separation of intra- and inter-frame codecs is not an either-or type of deal: there are codecs that use only intra frames, and there are codecs that use both intra and inter frames.
An intra-frame codec treats every frame as an important, independent photo, while an inter-frame codec treats some frames as important photos and others as quick sketches that just capture what has changed. Some codecs use only intra-frame, some use both intra-frame and inter-frame, and that's why the separation isn't a simple either-or situation.
52. Spatial & Temporal, intra & inter frame compression of a video file
Predictive video coding methods come in two classes: inter-frame prediction exploits temporal redundancies, the similarities between a group of frames, while intra-frame prediction exploits spatial redundancies, meaning the pixels within the space of one individual frame are compressed. With inter-frame encoding we deal with a group of pictures, the so-called GOP, which can contain a varying number of frames. The idea with this compression method is that only the changes that occur in the image during the GOP are recorded to the incomplete frames as new data. The parts of the image that do not change, the so-called temporal redundancies, do not need to be written multiple times, as the data is brought over from the previous frames.
Inter-frame prediction looks at how things change from one frame to the next, saving space by not repeating unchanged parts. Intra-frame prediction looks at each frame on its own, reducing repeated information within that single frame.
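A toy Python sketch of the idea (not a real codec): the first frame of the GOP is stored in full as a keyframe, and each following frame only stores the pixels that changed.

```python
# Toy illustration of intra vs. inter frames: a "frame" is just a list of pixel values.
def encode_gop(frames):
    """Store the first frame fully (intra) and only per-pixel changes after that (inter)."""
    keyframe = list(frames[0])
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        # keep only (position, new_value) pairs for pixels that actually changed
        deltas.append([(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v])
    return keyframe, deltas

def decode_gop(keyframe, deltas):
    frames = [list(keyframe)]
    for delta in deltas:
        frame = list(frames[-1])
        for i, v in delta:
            frame[i] = v
        frames.append(frame)
    return frames

gop = [[10, 10, 10, 10], [10, 10, 12, 10], [10, 10, 12, 14]]  # mostly static pixels
key, deltas = encode_gop(gop)
assert decode_gop(key, deltas) == gop
print(deltas)  # [[(2, 12)], [(3, 14)]] -- only the changes were stored
```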
53. Video codec compression methods: inter frame & intra frame
Video codecs can encode with two main compression methods. The much more common and highly compressed inter-frame method is all about compression at the expense of editability; inter-frame compression is built for publishing to audiences. The other method is the less compressed, more editable, quality-focused intra-frame compression, which is aimed more at the needs of video creators and is thereby much less common. With intra-frame compression, every frame is a so-called keyframe, containing the full image data. This separation used to be a big deal, but nowadays, even when editing on a phone, you will have no problem editing inter-frame files as well.
Separation between these methods used to be significant; inter-frame for smaller file sizes and publishing, intra-frame for quality and editing. Inter-frame compression is good for small files and easy sharing, while intra-frame compression is better for high-quality editing. But with modern technology, you can edit both types effectively.
54. Video file formats & codecs: Features vs. Support
The more recent a video codec is, the better quality it can provide at smaller file sizes, using new, advanced features. The downside is that when a new codec is launched, most hardware lacks the processing power to even play back the files, and often support for the new codec is missing entirely. The older a video codec is, the larger the file sizes and the lower the quality, but on the upside, the files can be played on almost any device. This is why choosing a video codec and format is very much a compromise between efficiency, quality and compatibility.
Newer video codecs use advanced technology to deliver better quality videos with smaller file sizes. However, the downside is that when these new codecs first come out, most devices don't have the processing power to handle them, and many lack support to play them back at all. Older video codecs create larger files with lower quality, but they have the advantage of being compatible with almost any device. This means you can play these older video files anywhere without issues. Newer codecs offer better quality and smaller files but may not work on all devices, while older codecs work everywhere but don't look as good and take up more space.
55. Subtitle types of a video file
Subtitles in a video file can exist in three forms. You can have subtitles burnt into the video, which means they can no longer be altered or removed; this can be seen as a limitation or a simplification, as there are no additional processes or settings to deal with. The second option is to have a separate file containing the subtitles and their timings, to be run alongside a specific video file. This means there can be multiple files as language options, and the subtitles can be disabled or altered without altering the video file. The third option is to have the data from the separate subtitle file embedded within the video file. This is often considered the handiest of the three types of subtitling.
With embedding, subtitles are included within the video file itself but can be turned on or off as needed. This method is convenient because everything is in one file, and you still have the flexibility to enable, disable, or switch subtitles without having to deal with separate files. Burnt-in subtitles are permanent and simple, separate subtitle files offer flexibility and multiple language options, and embedded subtitles combine convenience with flexibility.
56. Live Stream Video Encoding options
Adaptive bitrate streaming, ABR, allows the encoder to dynamically adjust the bitrate of the video based on the available bandwidth. This ensures that the video will play smoothly even if the user's internet connection is slow or unreliable. There is also LL-ABR, Low Latency Adaptive Bit Rate encoding, which emphasises the start-up speed, or latency, of the video over its quality. For streaming there is also QVBR, Quality-based Variable Bit Rate encoding, which targets an assignable quality level with a flexible bitrate. You can also create your own hybrid encoding solution for your particular needs by mixing the features of different encoding methods.
Adaptive bitrate streaming (ABR) adjusts the quality of a video based on the viewer's internet speed, ensuring smooth playback even on slower connections. There's also LL-ABR (Low Latency Adaptive Bit Rate), which focuses on reducing the delay before the video starts, prioritizing quick playback over quality. Another method, QVBR (Quality-based Variable Bit Rate), targets a specific video quality with a flexible bitrate to maintain that quality. You can also create a custom encoding solution by combining features from different methods to suit your specific needs.
57. Live Stream Video: Target Bitrate
Usually a lower-quality video stream is much more tolerable as long as the quality remains consistent. The same applies to audio: low quality is more tolerable when the quality stays the same. For this, you can specify a target bitrate, in bits per second. The target bitrate determines the size of the video stream and the size of the resulting file, and it helps keep the quality level of the stream consistent. With completed video files, the encoder will first analyze the video content to determine the required bitrate for each section of the video, and then use this information to adjust the bitrate of the output file.
People are usually okay with lower-quality video or audio as long as it stays consistent. To achieve this, you can set a target bitrate, which controls the size and quality of the video stream. The target bitrate keeps the quality level steady. When creating a video file, the encoder first checks the video to see how much data each part needs. Then, it adjusts the bitrate to match those needs, ensuring consistent quality throughout the video.
58. H.264 video codec was a true gamechanger
In the early 2000s, the standards groups backed by the big corporations holding crucial video patents, the ISO/IEC MPEG and the ITU-T, jointly created H.264 as a way to improve the efficiency of video compression. The H.264 codec became the first globally accepted HD video codec standard, adopted by digital TV, Blu-ray Disc, and streaming video, changing the video industry. But for consumers, H.264 made an even bigger splash by making video more accessible and affordable, reducing the amount of data required to encode and decode video. H.264 enabled the streaming of video over the internet and the storing of video on mobile devices. Eventually it became the most widely used video codec in the world. And of course, codecs like H.265 and VP9 would not exist without the research and discoveries made with H.264.
Many of the innovations used in video codecs like H.265 and VP9 owe greatly to the research and discoveries made with the H.264 codec. Big corporations and standards bodies working through the MPEG and ITU-T groups created H.264 in the early 2000s to make video compression more efficient. H.264 became the first widely accepted standard for HD video, used in digital TV, Blu-ray Discs, and streaming, revolutionizing the video industry. For consumers, H.264 made video more accessible and affordable by reducing the data needed for video files. This allowed for smoother video streaming over the internet and easier storage of videos on mobile devices. Eventually, H.264 became the most widely used video codec worldwide.
59. RGB color space of a video file
RGB is a color space with three dimensions and three components, based on the three primary colors seen by the human eye: red, green and blue. Each pixel thereby has three component values of R, G and B, which with 8 bits can go from zero to two hundred and fifty-five. The colors work as additive primaries: when all components are at the maximum value you get pure white, and with all values at zero you get pure black. Using 10 bits, the value range runs from zero to one thousand and twenty-three. RGB is not as efficient a color space as YCbCr, as it carries overlapping data. You are usually dealing with RGB values when you choose a color from a color palette and get a hexadecimal value for it.
The additive color space of RGB is typically used for screens, such as computer monitors, TVs, and mobile devices. The raw data from cameras is also converted to RGB before final encoding. RGB cannot do any chroma subsampling, so it stores the full range of color, but without that compression it requires more storage space. For image transmission with video codecs we have the Y'CbCr color space, which can use chroma subsampling to compress the image data according to the capabilities of human vision. It is used for broadcast and storage, most commonly with the codecs H.264 and HEVC.
60. YCbCr is the color space used by the Rec. 709 standard for HDTV
YCbCr is a color model and a color space, in pro lingo YCC, with the components Y signifying luma, Cb signifying blue chroma, and Cr signifying red chroma. YCC is a more efficient color space than RGB: in RGB all three components contribute to both luma and chroma, whereas in YCC the luma (Y) and chroma (Cb and Cr) data are separate. This enables the chroma data to be compressed more heavily than the luma data, which aligns with human vision being less sensitive to chroma than to luma. Why is there no separate component for the third color, green, you might ask? When we have the chrominance data of two color components together with the luma, the data of the third color component can be derived from them to form the full color spectrum.
RGB is often used for editing, computer graphics, and displays because it directly corresponds to how displays emit light and how people see color. It's also preferred for image fidelity when working with uncompressed or raw video data. Y'CbCr is used for broadcast, compression, and storage because it reduces data size while maintaining high visual quality, making it better suited for transmission and storage.
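As a rough sketch of that separation, here is the BT.709-style conversion from normalized non-linear R'G'B' values to Y'CbCr in Python. The luma coefficients are the commonly published Rec. 709 ones; treat the snippet as an illustration of the principle, not a full implementation of the standard's quantization and ranges.

```python
# BT.709 luma coefficients (Kr, Kg, Kb); inputs are non-linear R'G'B' in the 0..1 range.
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_to_ycbcr(r: float, g: float, b: float):
    """Return (Y', Cb, Cr) with Y' in 0..1 and chroma centred on 0 (-0.5..0.5)."""
    y = KR * r + KG * g + KB * b      # luma: weighted sum of the three primaries
    cb = (b - y) / (2 * (1 - KB))     # blue-difference chroma
    cr = (r - y) / (2 * (1 - KR))     # red-difference chroma
    return y, cb, cr                  # green is not stored: it is recovered from Y', Cb and Cr

print(rgb_to_ycbcr(1.0, 1.0, 1.0))  # white -> luma 1.0, both chroma components 0.0
print(rgb_to_ycbcr(1.0, 0.0, 0.0))  # pure red -> most of the signal moves into Cr
```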
61. exFAT file system: eject USB drive safely
Safely eject a USB drive before disconnecting! You must have heard this, right? This is because many USB drives are formatted with the exFAT file system. Its most significant benefit is that it is natively supported by both Mac and Windows operating systems. Its biggest downside is that it does not use a feature called "journaling", which means that if your files get corrupted for some reason, your data is most likely lost for good. This means that when you use a USB drive with an exFAT file system, you must always use the "remove safely" feature. By ejecting the USB drive safely, you ensure that the operating system has time to flush all changes to the drive and complete the files before the drive is disconnected. And one other thing: have you backed up your data? No? You're gonna regret it!
Unlike the older FAT32 system with its 4GB file size limitation, exFAT has no practical size limitation, thankfully, making it usable with 4K or 8K footage. But, of course, cross-platform compatibility is the main reason people still use this format.
62. HDR Rec.2020 Color Space Transfer Functions for Youtube: PQ & HLG
To get HDR content to YouTube, the material must use at least ten-bit color depth in the Rec. 2020 color gamut, with the transfer function of either PQ, Perceptual Quantizer, one of the two main HDR transfer functions, or HLG, Hybrid Log-Gamma. PQ is considered to be more accurate in terms of how our eyes function, and it is the preferred transfer function for most HDR content. When you set the output color space to HDR PQ in DaVinci Resolve, you are telling the software to render your footage using the PQ transfer function. This ensures that your footage is displayed correctly on HDR, PQ-compatible displays. Lastly, make sure you're using an HDR-capable web browser when viewing HDR content.
Transfer Functions are mathematical formulas that tell the screen how to map the video signal (brightness, colors) to what you actually see. Different transfer functions adjust the image in different ways. PQ, Perceptual Quantizer is used to create very high-quality HDR images for modern displays, offering great detail in both dark and bright areas. HLG, Hybrid Log-Gamma is designed to work with both HDR and non-HDR displays, so it can be used on a wider range of devices, making it a bit more flexible.
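For the curious, here is a small Python sketch of the PQ curve: the inverse EOTF that maps a linear luminance level (with 1.0 standing for 10,000 nits) to a 0-1 signal value. The constants are the ones published for SMPTE ST 2084, but treat this as an illustration of the curve's shape rather than a reference implementation.

```python
# SMPTE ST 2084 (PQ) inverse EOTF: linear light (1.0 = 10,000 nits) -> 0..1 signal value.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(linear: float) -> float:
    """Map normalized linear luminance to a PQ-encoded signal value."""
    y = max(linear, 0.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

for nits in (0.1, 100, 1000, 10000):
    print(f"{nits:>7} nits -> PQ signal {pq_encode(nits / 10000):.3f}")
# Note how the lower decades of luminance get a large share of the signal range.
```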
63. Use Built in HDR 3D LUTs in camera
Look-up tables, LUTs, are useful when shooting RAW or when using "film" dynamic range, both of which generally look flat when captured. By using in-camera LUTs, the crew on location can get a better idea of what the footage will look like in post. Rec. 2020 PQ is a complex color space and gamma combination which uses the perceptual quantizer curve to compress the dynamic range of the image. This curve is designed to more closely match the way human eyes perceive contrast; details in dark areas are emphasised, resulting in images that are more natural and pleasing to the eye. Even when a LUT is embedded in the BRAW metadata, it can still be disabled or switched to another LUT using the BRAW sidecar file.
Built-in HDR 3D LUTs allow filmmakers to see a more accurate representation of how their footage will look in HDR during shooting. This helps in making creative decisions on the spot, enhancing the overall workflow and efficiency.
64. Video Latitude, the limits of image data
The higher the latitude, the more information, meaning detail, you can retrieve from the darkest and brightest parts of your captured video image. With a low-latitude image, like when using Rec. 709, the darkest and brightest parts of the image aren't very modifiable, as there is no retrievable data available. When you use a camera with a low-latitude setting, you are working in a "what you see is what you get" mode: the image you see in the viewfinder will more or less be the image you are able to publish, as there is not much wiggle room for fixing the image in post. So you have to get the shot right on location. Latitude refers to the image data that is not directly visible but is still available in the image, so it can be recovered and retrieved in post.
Video latitude refers to the range of exposure values that a camera can capture, effectively indicating the limits of image data in terms of highlight and shadow detail. It is crucial for content creators to understand video latitude when shooting to ensure optimal image quality and detail retention.
65. How to choose a video Format & Codec?
As processing power continually increases, newer video codecs and formats become more efficient and gain more advanced features than the older ones, which in turn enjoy better support in software, licensing and hardware, including CPU, GPU and multi-threaded processing. It can take a long time for a new codec to build a sufficient user base and wider support. The size and growth of the user community is essential when you need to troubleshoot a problem or have a question about a codec. Other things to consider when choosing a codec and a format are its future outlook and its ability to cope with the challenges of future video technologies. How many lawsuits is the codec facing? Who is developing the codec, and with what resources?
Choosing the right video format and codec requires balancing quality, file size, compatibility, and intended use. By considering these factors and staying informed about industry standards, you can make effective decisions that enhance your video production workflow.
66. QuickTime gamma shift bug
When the dark areas of your exported video look a bit lighter and washed out compared to what you see on the editor display, you are dealing with gamma shift, which may occur when using the MOV wrapper and/or the H.264 codec. Gamma shift means that the gamma value of your exported video file, using the Rec. 709 color space, is changed unintentionally during the encoding process. It occurs when there is a mismatch between how the system interprets color profiles and how the content is encoded or displayed. The most obvious fix for this problem is to export your files with something other than the MOV wrapper and the H.264 codec.
The QuickTime gamma shift bug refers to a known issue with how QuickTime handles gamma correction in video files, particularly in the context of exporting and playing back video. This bug can result in videos appearing with incorrect brightness or contrast levels, causing a noticeable difference in how the video looks on different devices or when viewed in different applications.
67. Live Streaming Encoder efficiency vs. Decoder complexity
The more efficient the encoding of a codec is, the smaller the file sizes will be, but also the heavier the codec will be for the processor to decode. Conversely, the simpler the decoding of a codec, the less power it consumes, as it requires fewer computing resources. So the goal is for the encoding of a codec to become as efficient as possible, while the decoding becomes as easy as possible. Some additional key aspects to consider are error resiliency: will the file open even when the recording has been interrupted? Latency: will the codec be usable for live streaming? And bitrate: will the bitrates be too high for live streaming?
What are the things to consider when choosing video file codec and format? More efficiency brings more complexity. More quality increases the file size etc.
Find out the difference between live streaming encoder efficiency and decoder complexity in Davinci Resolve 19.
68. BRAW file systems for a video file: OS X Extended & exFAT
It is recommended that you format your media cards from the camera menu, which gives you two options: OS X Extended and exFAT. The OS X option is better in the sense that, in case of a technical glitch, the data on the media is likely to be retrievable, whereas with the exFAT format your recorded media may be lost. The first option, then, is to use the OS X Extended format and have a Mac as your media ingest machine. If the media is ingested by someone else, you might be better off formatting your media cards to exFAT, despite the increased risk of data loss.
The flexibility of the Blackmagic Design RAW format is perhaps its most important asset for video makers.
69. Does the media mountain move to the editors or vice versa?
The options for collaborative editing are using your own servers on a local LAN, keeping your media in your own storage, or renting cloud resources and moving your media to the cloud. When you have terabytes of media, I think it's smarter to let the users come to the media and edit the original files through a VPN remote connection, rather than move the media to the users through the cloud and have to transcode proxy versions of it. Within your LAN, the media moves at 10-gigabit speed or faster. Through the cloud, your speed is capped by what your internet service provider can offer, which is usually around 1 gigabit.
70. ProRes format is not available on all systems
ProRes has been a popular codec, but it has one significant weakness compared to the other codecs available, and that is its Mac exclusivity in DaVinci Resolve. The DNxHR codec, developed by Avid, is the equivalent of what Mac users have in ProRes, an intra-frame codec, but it does not share that major weakness: DNxHR is available on all versions of Resolve. This limitation in the compatibility of the ProRes codec becomes increasingly significant when you're sharing your projects and media. Collaborative projects often have all three systems connected: Windows, Mac and Linux. Thankfully, we have options.
The ProRes video format from Apple is not bad, but its artificial limitations make it a less viable option for content creators.
As of version 19.1.4 of DaVinci Resolve, the ProRes format is now supported on Windows and Linux as well.
71. Artefacts in video file
Artefacts, meaning the visible glitches and pixelation in the video image, occur in most cases due to data compression and a lack of data. These artefacts can be divided into subcategories such as banding, which means visible lines where none should exist, usually in the sky, where bands of pixels appear instead of the original smooth color gradients. Or macroblocking, which results from insufficient bitrate, creating large, unified single-color blocks in areas that originally contained noise and fine detail. Or simply loss of detail, making the edges of objects overly defined while the textures within those objects disappear, smoothed out because not enough data was encoded to retain the details.
Artefacts or artifacts? I don't really know, but they are the same thing for sure.
72. Resolve Clone tool - ingest media with checksum
We all use a number of cloud services to store and transfer our video files. The problem with these services is that they may compress our data, and this can happen even when we distinctly choose not to compress it. Compression is fine in most use cases, but for video files it is something we do not want, because of potential quality loss and also because of security: we want to be able to verify that everyone is using the same video files. The clone tool adds an identifiable, unique checksum value to our files, by which we can verify that a file has remained unchanged. In short, a checksum helps us verify data integrity and detect unintentional data corruption.
When I use a cloud service, having the checksum for each file becomes important as with that I can verify that my files have remained unmodified.
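Checksums are easy to reproduce outside Resolve as well. Here is a short Python sketch using a standard hash to confirm that two copies of a file are identical; the file names in the commented-out usage are placeholders, not files from any real project.

```python
import hashlib

def file_checksum(path: str, algorithm: str = "sha256") -> str:
    """Compute a checksum of a file, reading in chunks so large media files fit in memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names, just for illustration:
# original = file_checksum("A001_0101_C001.braw")
# copy     = file_checksum("/Volumes/Backup/A001_0101_C001.braw")
# print("match" if original == copy else "file has changed or is corrupted")
```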
73. Variable frame rate, VFR in cameras
In most cameras, the VFR option is not about a fluctuating frame rate as it is with computer playback; in cameras it is synonymous with slow motion recording. With VFR you can choose different frame rates for recording and playback, whereas with a constant frame rate the recording and playback frame rates are fixed to the same value. To capture higher frame rates at sufficient quality, you often need to lower the resolution of the image significantly. The higher the frame rate, the more data is required to represent the video, and the lower the compression ratio. Frame rate is a bit like what sample rate is for audio, as video frames can be thought of as snapshots at specific points in time.
What about video at accelerated speed? Well, you could do old-fashioned interval recording photography. In most cases you can just speed up a clip in your chosen NLE; in my case that would be DaVinci Resolve.
74. Variable bitrate of a video file
H.264 was one of the first video codecs to make widespread use of variable bitrate. The majority of video codecs up to that point had relied on constant bitrate, which proved inefficient for high-resolution video, as it created over- and under-compression. Variable bit rate allows the encoder to allocate more bitrate to parts of the video that require it, such as scenes with a lot of motion or detail, and less bitrate to parts of the video with fewer visual changes. Variable bitrate gave consumer codecs such as MP3 for audio and H.264 for video such efficiency, meaning small file sizes at high quality, that the heyday of buying and selling physical audio and video products was soon over.
Variable bitrate made the streaming of video through the internet a much more viable option. Variable bitrate was also essential in the success of the H.264 video codec.
75. Variable bitrate encoding methods
With quality-based variable bitrate encoding, you specify a level of quality for the stream instead of assigning a bit rate. The codec will then encode the content so that all samples are of comparable quality. With unconstrained variable bitrate encoding, you specify a bit rate for the stream, as you would with constant bit rate encoding. However, the codec uses this value only as the average bitrate for the stream and encodes so that the quality is as high as possible while maintaining the average. In addition to specifying an average bit rate, you can also specify peak values, using peak-constrained variable bitrate encoding.
The settings of a codec have two main purposes: efficiency for Live streaming or quality for recording into fast (camera) storage.
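As a rough illustration of how these three modes map onto encoder settings (a sketch only, shown with ffmpeg and the libx264 encoder as an assumed example; file names and bitrate figures are placeholders):

```python
import subprocess

# Quality-based VBR: pick a quality level (CRF) and let the bitrate float freely.
quality_based = ["ffmpeg", "-i", "in.mov", "-c:v", "libx264", "-crf", "20", "out_crf.mp4"]

# Unconstrained VBR: target an average bitrate with no ceiling on momentary peaks.
unconstrained = ["ffmpeg", "-i", "in.mov", "-c:v", "libx264", "-b:v", "8M", "out_abr.mp4"]

# Peak-constrained VBR: same average, but cap the peaks with maxrate and bufsize.
peak_constrained = ["ffmpeg", "-i", "in.mov", "-c:v", "libx264",
                    "-b:v", "8M", "-maxrate", "12M", "-bufsize", "24M", "out_vbv.mp4"]

for cmd in (quality_based, unconstrained, peak_constrained):
    subprocess.run(cmd, check=True)
```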
76. Logarithmic curve of human vision, camera sensors and displays
Because our vision behaves logarithmically, meaning that we resolve differences better in darker tones, our cameras are built to encode the luminance values they capture using logarithmic compression. The same goes for our displays, which use a similar non-linear mapping to accommodate our vision, mimicking the way our eyes respond to light. The logarithmic curve means that the lower the luminance, the more it is emphasised in the curve. This enables us to perceive changes in brightness over a much wider range than if our vision behaved linearly, where an increase in stimulus would correspond to an equal increase in response.
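A toy sketch of the idea (Python with NumPy; the curve below is a generic log-style encoding invented for illustration, not any manufacturer's actual log formula):

```python
import numpy as np

def toy_log_encode(linear: np.ndarray) -> np.ndarray:
    """Map linear light (0..1) to a 0..1 signal with a generic log-style curve."""
    return np.log1p(1023 * linear) / np.log(1024)

linear = np.array([0.01, 0.1, 0.5, 1.0])
print(np.round(toy_log_encode(linear), 3))   # [0.349 0.669 0.9   1.   ]
# The shadows (0.01 to 0.1 linear) get far more of the output range than the
# highlights (0.5 to 1.0), mirroring how our eyes resolve more detail in the dark.
```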
77. HDR Dynamic metadata and display luminance
A typical older Standard Dynamic Range display's maximum brightness topped out at roughly three to four hundred nits, whereas the maximum brightness of a typical High Dynamic Range display starts from around a thousand nits. A typical older SDR display's minimum brightness was around zero point one nits, whereas a typical HDR display can go as low as zero point zero one nits. At first, the metadata assigning the brightness, contrast and color of the HDR image was static, meaning that once it was assigned, those values could not be changed within a program. Then came dynamic metadata, which meant that the values of the image could be changed down to the frame level.
SDR (Standard Dynamic Range) television uses a single range for brightness and color that's consistent across all scenes. HDR can provide dynamic metadata for scene-by-scene optimization using a standard like Dolby Vision.
78. Image sequence formats
Image sequence formats, such as CinemaDNG, DPX and TIFF, can now be seen as rather inefficient and non-optimal. These formats are not really dealing with video as such; they are merely groups of still images in a folder. They come from an era when it was thought to be impossible to have both uncompromised image quality and efficient, video-native compression. As the BRAW format proves, we can now have a sufficient level of quality WHILE being able to choose a suitable level of video-native compression.
79. Two-pass encoding for video file #1
When using a codec such as H.265, you have the option of two-pass encoding, where the first pass lets the encoder analyse the entire video file and take the distribution of the video data into account. This allows the encoder to allocate more bitrate to the parts of the video that require it and take bitrate away from the parts that don't need it. The second pass uses the data produced by the first pass to encode the video file efficiently. As a result, two-pass encoded videos typically have fewer compression artefacts and a higher overall quality than single-pass encoded videos.
80. Two-pass encoding for video file #2
Although two-pass encoding cannot be used for live streaming, many features of the encoder require it, for example accurate adaptive bitrate encoding: on the first pass the encoder gathers data about the file, its motion, complexity and bitrate, and based on this data, on the second pass, it allocates the bitrate optimally according to the specified bitrate level and applies its quality features. As a general rule, use two-pass for delivery, to max out the quality and efficiency of the file; for non-delivery files, such as proxies, you can use the single-pass option for much faster renders.
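Here is a minimal sketch of what a two-pass encode can look like in practice (assuming ffmpeg with the libx264 encoder; libx265 exposes its pass control through -x265-params instead, and all file names and bitrates here are placeholders):

```python
import subprocess

# Pass 1: analyse the whole file, write the stats log, throw the video output away.
subprocess.run([
    "ffmpeg", "-y", "-i", "master.mov", "-c:v", "libx264", "-b:v", "10M",
    "-pass", "1", "-an", "-f", "null", "/dev/null"   # on Windows use NUL instead
], check=True)

# Pass 2: encode for real, spending the 10 Mb/s budget according to the pass-1 stats.
subprocess.run([
    "ffmpeg", "-i", "master.mov", "-c:v", "libx264", "-b:v", "10M",
    "-pass", "2", "-c:a", "aac", "-b:a", "192k", "delivery.mp4"
], check=True)
```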
81. Resolve Ingest with Clone tool #1
It's a good idea to ingest all your material into Resolve using the clone tool. By doing so, your media files receive individual checksums, which act as verification of your media's integrity. When collaborating, it becomes increasingly important to make sure that everyone is working with the exact same original media and that it remains intact during the whole production process. Another asset of the clone tool is its ability to copy and transfer the media to multiple locations at once, for example to your fast edit drive and to a slower backup drive. Thirdly, with the clone tool you can create a work queue, a list of tasks that you can build and leave processing while moving on to other things.
Bring media into your project using the clone tool, found on the media page of DaVinci Resolve. It offers file integrity verification by giving each file its own checksum identifier. There are other benefits as well, which I will cover in the video.
82. Resolve Ingest with Clone tool #2
The clone tool comes in handy when you deal with non-single-file video formats, where audio, video and metadata live in separate folders and files; the clone tool preserves the file structure of such formats automatically. You can also choose between different checksum types: some offer more speed, some more security, and the default MD5 offers a good compromise between the two. A checksum can also become valuable in troubleshooting, as with it we can rule out media corruption as the cause of a technical error. Separate software for checksum creation is not cheap, so even many hardcore Premiere users have taken Resolve into their daily ingest workflow because of its ability to create checksums for the ingested media.
83. DSLR video cameras had terrible audio hardware #1
The consumer cameras we have been using for videography have a long history of including microphones and audio hardware of surprisingly low quality. The manufacturers reasoned that hobbyists wouldn't know the difference between good and bad audio, and professionals have always recorded their audio separately anyway. Professional audio components would also have brought a significant bump to the overall price of these relatively cheap cameras, which meant that the cameras actually sold better without decent audio hardware. Nowadays good-quality audio hardware has come down in price, and for example the Blackmagic cinema camera has excellent audio hardware built in.
We all know good audio quality is as important as, or even more important than, good video quality. To get good audio we have become accustomed to using a separate audio recorder, but this is changing, as it's now more common to have decent audio hardware even in iPhones and the like.
84. DSLR video cameras had terrible audio hardware #2
Another crucial point is that in attaining good audio, microphone placement becomes one of the most important things, and almost always, placing the mic inside the camera is absolutely terrible mic placement, almost guaranteeing bad audio. Despite this, these internal microphones did end up serving a reasonable purpose: the low-quality audio could easily be used for syncing and then be replaced by the high-quality audio recorded externally. Audio can be synced in Resolve either by using audio waveforms or by using timecode. Sync through waveform is nice, but it's not as reliable as sync through timecode, which means that for professional work, sync through timecode is most likely the better choice.
To get good-sounding audio in your videos you need decent audio gear and smart microphone placement. Syncing audio using waveforms is often the cheaper option, but it also requires more tweaking in post, which might erode the initial purchase-price gains.
85. Dynamic range of a video file
In videography, the dynamic range of the image refers to the range of light intensities, from the darkest shadows to the brightest highlights, that can be captured and represented. Dynamic range is measured in stops. Modern cameras typically offer dynamic ranges of around 11 to 15 stops. The higher the dynamic range, the harder it is for the user to "clip" the image data and the more information the camera can capture from the brightest and darkest areas of the image. How wide a dynamic range any given camera provides depends mostly on how advanced, in other words how new, the sensor in the camera is.
Dynamic range defines the span of brightness levels that can be represented, shaped by bit depth and the video format (SDR or HDR). A higher dynamic range, especially in HDR video, delivers greater detail, realism, and visual impact by capturing a broader spectrum of light and shadow.
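Since one stop is a doubling of light, the stop count is simply a base-2 logarithm of the contrast ratio; a quick sketch in plain Python, with illustrative ratios:

```python
import math

def stops(brightest: float, darkest: float) -> float:
    """Dynamic range in stops between the brightest and darkest usable levels."""
    return math.log2(brightest / darkest)

print(round(stops(1000, 1), 1))    # a 1000:1 scene spans roughly 10 stops
print(round(stops(16384, 1), 1))   # about 14 stops, in line with a modern cinema sensor
```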
86. Quantization in the compression of a video file
A process called quantization plays a crucial role in video compression by reducing the precision of the numerical values representing pixel colors or other image attributes. After applying techniques like motion estimation and transform coding, the resulting data is quantized to reduce the number of bits needed to represent each sample. H.265 refined adaptive quantization, which adjusts quantization parameters based on the characteristics of the video content; the higher the resolution used, the more useful this, like many other features of the codec, turns out to be. With increasing processing power, ever more sophisticated quantization methods are being provided, resulting in higher efficiency. Virtually all popular lossy codecs rely on quantization in some form, even wavelet-based designs such as BBC's Dirac.
Quantization in video codecs is the process of reducing the precision of transform coefficients by dividing them by a quantization step size and rounding to integers, controlled by a quantization parameter. This scalar quantization process enables lossy compression, allowing codecs to balance video quality and bitrate efficiently. In AV1 codec, the quantization is optimized to enhance web streaming performance, whereas in H.265, quantization aims to broaden compatibility and efficiency.
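A stripped-down sketch of scalar quantization (Python with NumPy; the coefficients and step size are made up for illustration, and real codecs derive the step from a quantization parameter per block):

```python
import numpy as np

coeffs = np.array([52.7, 31.2, 8.4, 2.1, 0.6, -0.3])   # example transform coefficients
q_step = 10.0                                           # larger step = stronger compression

quantized = np.round(coeffs / q_step).astype(int)       # what gets entropy coded and stored
reconstructed = quantized * q_step                      # what the decoder can rebuild

print(quantized)       # [5 3 1 0 0 0] -- the small values collapse to zero
print(reconstructed)   # [50. 30. 10.  0.  0.  0.] -- the fine precision is gone for good
```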
87. Limits of imaging tech vs. Limits in human vision #1
In chroma subsampling, the luma data is unique for every pixel, while the chroma data, carried in two color-difference channels (from which, together with luma, the red, green and blue values are reconstructed), is shared between neighbouring pixels. The human eye is far less sensitive to this loss of color resolution than it is to changes in luma. This is why an image using 4:2:2 chroma subsampling is difficult for us to tell apart from an image using 4:4:4, especially on a display made for the consumer market. The idea is that you can often compress the image data significantly while not being able to see any visual difference in the quality of the viewed image.
Each image pixel contains data for brightness, called "luma" and for color, called "chroma". Brightness helps us see the shapes, edges, and details, whether something is light or dark. Color makes these shapes have red, blue, green, and anything in between. Chroma subsampling brings the size of the image data down considerably without us seeing any difference.
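A bare-bones sketch of the 4:2:0 flavour of chroma subsampling (Python with NumPy; the random array below simply stands in for one chroma plane of a 1080p frame):

```python
import numpy as np

def subsample_420(chroma_plane: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of a chroma plane into one sample, 4:2:0 style."""
    h, w = chroma_plane.shape
    blocks = chroma_plane[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

chroma = np.random.randint(0, 256, size=(1080, 1920)).astype(np.float32)
print(chroma.shape, "->", subsample_420(chroma).shape)   # (1080, 1920) -> (540, 960)
# The luma plane keeps its full 1080x1920 resolution; only the two chroma planes
# shrink, roughly halving the raw data while the picture still looks the same.
```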
88. Limits of imaging tech vs. Limits in human vision #2
A video image using 8-bit color depth can be difficult for the human eye to tell apart from images using higher bit depths of 10, 12, 16 or 32 bits. The significant differences appear in post-production, as you try to tweak the image. The 8-bit image has little to no wiggle room for modifying and grading, which means you have to get the shot right on location; you are forced to go with WYSIWYG, what you see is what you get. An image using 10 bits or more gives quite significant room for tweaking, which means that footage recorded with the wrong ISO, wrong white balance or wrong exposure can often be pulled back into proper order with precision and without obvious loss in image quality.
8-bit video gives you 256 shades per color channel, which is enough for presentation and delivery, but it's not enough for altering the image values without the image "breaking apart" because of insufficient data. 10-bit video gives us 1,024 shades per channel to work with, making shades and gradients smoother. A 10-bit image used to be considered too big for delivery (not anymore!). It also gives us the ability to modify the image data while keeping the gradients from breaking apart into visible banding, which is most often seen in images of sky.
WYSIWYG originates from the 70s; it meant that the way content appears on your screen during editing or creation will closely match how it looks in the final output, whether that is a printed document, a web page, or another medium. So it was a marketing term for something positive; here I use it to signify something negative.
89. What is Floating Point in Video Color Processing? #1
In 8-bit fixed-point image representation, each color channel of red, green and blue can have one of 256 possible values, or steps of grey luminance. 10-bit color depth gives 1,024 possible values and 12-bit color depth gives 4,096 possible values, or steps of grey luminance, for each color channel. In fixed-point representation, numbers are stored with a specific number of digits before and after the decimal point. This means there is a fixed number of tonal steps that can be represented. It's like having a ruler with marks at every millimetre; you can't measure more precisely than the smallest mark.
With the older way of using integers, you're stuck with a fixed number of steps, so subtle changes, like brightening a scene, might skip some shades, causing visible jumps. Floating point fills in all those gaps, making adjustments smooth and natural. It's like upgrading from a piano with just a few keys to one with a full range, or even a trombone where you can slide to any note. In regular video, brightness is limited, and integers might work fine. But in HDR video, where you have dazzling highlights and deep shadows, floating point can manage both extremes and everything in between. It's like having a volume knob that goes from a whisper to a roar, with every level in between crystal clear.
Images for my videos are created mostly by midjourney v7 and videos from those images are created using Kling AI. The background music is always made by me, using Reaper and Ableton with synths from Arturia, UVI and Omnisphere.
90. What is Floating Point in Video Color Processing? #2
Fixed-point color information is allocated evenly across a fixed dynamic range, whereas floating-point color data is allocated unevenly: more precision can be given to the areas where it is needed and less to the areas of the image where there is less data to represent. So you could compare the logic of fixed versus floating-point color data to how constant and variable bitrates behave. Higher bit depths of 16 and 32 bits are often used with floating point to take full advantage of the much wider dynamic range available. Better numerical precision also offers better color accuracy and better adaptability to changes in brightness levels.
Fixed-point “integer” color uses a fixed number of evenly‑spaced steps over a set range (0–255 in 8‑bit, or 0–65 535 in 16‑bit). Every increment represents exactly the same change in light‑intensity, no matter whether you’re in very dark or very bright regions. This is like a constant bitrate in audio or video: every frame (or sample) gets the same number of bits, regardless of how much “detail” is actually there. Floating‑point color gives relative precision: more granular steps near zero (shadows) and progressively larger steps as brightness grows. Fixed‑point is like having 256 equally spaced marks on a ruler: each tick is the same. If you need to measure tinier detail, you can’t.
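You can see this uneven allocation directly in how closely floating-point values are spaced (Python with NumPy; np.spacing returns the gap to the next representable number, shown here for 32-bit floats):

```python
import numpy as np

# The gap between neighbouring 32-bit float values grows with the value itself.
for value in (0.01, 0.1, 1.0, 10.0):
    gap = np.spacing(np.float32(value))
    print(f"around {value:>5}: next representable step is {float(gap):.2e}")

# The step near 0.01 is roughly a thousand times finer than the step near 10:
# precision is spent in the shadows, unlike an integer scale with one fixed step.
```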
91. What is Floating Point in Video Color Processing? #3
If floating-point color information is better than fixed point, why use fixed point at all, you might ask. Firstly, the 8- and 10-bit color depths don't offer enough dynamic range for a floating-point system to take advantage of. Secondly, fixed-point information is easier to process, and in many use cases the fixed-point system is considered sufficient. The use of the floating-point color system began to take shape among those looking for maximum image quality, once processing power had increased to levels that enabled the use of High Dynamic Range images and as the wide color gamut standards began to take shape.
92. 10GbE vs. 25GbE networking speed for video editing: huge cost difference
Ten-gig network speed is sufficient for roughly five editors working at once. To saturate the roughly one-thousand-megabytes-per-second pipe that ten-gig speed provides, you can easily spend ten thousand dollars on the drives alone. For the next tier, twenty-five-gig network speed, you would need to create a throughput of well over two thousand megabytes per second; that's a big jump, as you would need professional rack-mounted gear for that, preferably with its own ventilated server room. Currently, twenty-five-gig network speed may be unnecessary for most of us, in a world of the two-pizza rule: if the team cannot be fed with two pizzas, it's starting to be too big.
10 Gig Ethernet, giving about a gigabyte per second, is enough bandwidth for about five people each streaming one high-quality 4K video stream without hiccups. 10GbE plus a decent NAS/SSD box is usually all you need. If you scale beyond that, or need huge throughput, that's when you look at 25 GbE and (much) bigger budgets.
I'm talking about the throughput speed within a LAN, for collaborative video editing. LAN speed depends on the hardware you buy. WAN speed is another thing; it is dictated by your ISP. My WAN speed tops out at one gigabit, so I leave video editing to happen within my LAN, which I can also access remotely.
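A back-of-the-envelope sketch of that editor count (plain Python; the per-editor data rate and the 80 percent efficiency figure are assumptions for illustration, since real numbers depend heavily on codec, resolution and protocol overhead):

```python
def editors_supported(link_gbit: float, per_editor_mb_s: float, efficiency: float = 0.8) -> int:
    """Rough number of simultaneous editors a network link can feed."""
    usable_mb_s = link_gbit * 1000 / 8 * efficiency   # line rate in MB/s minus overhead
    return int(usable_mb_s // per_editor_mb_s)

# Assume each editor pulls about 200 MB/s of high-quality 4K footage (placeholder figure).
print(editors_supported(10, 200))   # 10 GbE -> about 5 editors
print(editors_supported(25, 200))   # 25 GbE -> about 12 editors
```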
93. Blackmagic cinema camera 6K tips #1
When you format your memory card in camera, the reel number metadata will automatically increment based on the number of times you have formatted the card, until you tap the reset project data button. If you use a drive that has multiple partitions, the Blackmagic camera will only recognise the first partition of the media, but when you format a media card from the camera menu, all partitions of the card will be wiped. With the BRAW codec, you can choose the quality of your recorded video by choosing the amount of compression applied to it. The constant bitrate option is a good choice if and when you have limited space in your editing system's media storage and you need to be able to predict and calculate how much space a certain amount of footage will use.
Reel numbering helps you keep your clips organized, especially across different shoots or cards. Partition behavior can save you from accidentally losing data you didn’t mean to touch. BRAW compression settings let you balance image quality against file size. Constant bitrate is your friend when storage space is scarce or when you have a strict archiving workflow.
94. Blackmagic cinema camera 6K tips #2
If the highest possible video quality overrides your concerns about storage space, then you should choose constant quality, which alters the bitrate based on how much movement and detail the sensor captures, instead of basing the bitrate on a fixed level as with the constant bitrate option. The constant quality option offers the most efficiency for the file size, as bits are used only where needed, and it also offers the most reliability, as bits are spent to maintain quality, meaning that the image will pick up the least amount of artefacts even on the most demanding occasions, such as when filming bursts of confetti flying in the air.
If you care more about getting the absolute best image than saving space, go with “Constant Quality” instead of “Constant Bitrate.” Rather than sticking to a fixed data rate, the camera adjusts how much data it writes depending on what’s happening in the scene (lots of motion = more data; still shot = less). This mode preserves image fidelity even in chaotic, high‑motion moments—think confetti blasts or fast action—so you get minimal compression artifacts.
95. Blackmagic cinema camera 6K tips #3
The constant quality option is the same thing as variable bitrate. This means that video files of equal length will have significant, unpredictable differences in size; in short, the file size will be as high as it needs to be in order to maintain the desired quality. The naming of the bitrate settings, such as three-to-one, eight-to-one and twelve-to-one, describes the relation to the uncompressed raw image: three-to-one is one third of the size of fully uncompressed raw video, and twelve-to-one is roughly one twelfth of the size of a fully uncompressed raw file. Quantisation removes data from the video image that is deemed unchanged and thereby unnecessary; the more quantisation is used, the more the image degrades and certain tones disappear from the image.
Constant Quality = Variable Bitrate. The camera adjusts data rates moment‑to‑moment to keep image quality high, so clips of the same length can vary widely in file size. Quantisation tosses out “unchanged” data to shrink files. Light quantisation = small size savings, minimal quality loss. Heavy quantisation = bigger savings but noticeable loss of detail and subtle tones.
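To put those ratios into rough numbers, here is a quick sketch (plain Python; the sensor dimensions, bit depth and frame rate are assumptions for illustration, and real BRAW clips also carry audio and metadata):

```python
def storage_per_minute_gb(width: int, height: int, bits_per_sample: int,
                          fps: float, ratio: float) -> float:
    """Approximate gigabytes per minute of raw sensor data at a given compression ratio."""
    uncompressed_bytes_per_s = width * height * bits_per_sample / 8 * fps
    return uncompressed_bytes_per_s * 60 / ratio / 1e9

# Assumed example: a 6144 x 3456 sensor, 12-bit samples, 25 frames per second.
for ratio in (3, 8, 12):
    print(f"{ratio}:1 ->", round(storage_per_minute_gb(6144, 3456, 12, 25, ratio), 1), "GB/min")
```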
96.
With the BRAW format, Blackmagic Design has taken apart the features of raw formats and the features of video formats and recombined them in a way that uses the benefits of both. With BRAW we may no longer need to choose between the controllability and quality of a raw codec and the ease and speed of a heavily compressed video codec. So, are there any downsides to this codec? Well, one thing that got me thinking is that BRAW may not suit render farms well, but I could be wrong in that assumption, as I have no personal experience of it. I'm personally really excited about the BRAW format, but if you have found negatives in using it, I would be really interested to hear about those issues.
97.
As a one-file solution, BRAW is easier to use than any of the old raw formats, and at the same time it manages to offer the controllability of a raw format in post. A single file contains everything: images, sound, metadata and sensor information. This means that copying the files is much faster than copying a folder containing a large number of image files. You can also change the settings of a BRAW file without overwriting the original metadata by using sidecar files; more on those later.
98.
As we have come to experience, video compression using standard video formats loses color information through chroma subsampling, and when pushed hard enough, the compression starts to add visible artefacts to the image. With BRAW the negatives of compression are minimised significantly, as it does not compress the finished image data; instead, the compression is applied to the raw data coming from the camera sensor, before it is turned into visible imagery by the debayering process.
99.
With CinemaDNG raw, every frame is a single file, and being an image sequence format, it becomes quite heavy to play back. The heaviest burden in playing back raw formats is the debayer processing, which needs to be done in real time by the edit software. BRAW does a hardware-accelerated, partial debayer of the sensor data in camera, lessening the heavy burden of debayering in post, in Resolve. This is one of the key reasons why BRAW is much lighter to edit than other, traditional raw formats. In addition to this, Resolve does a faster, real-time debayer while you edit the footage and a much heavier, higher-quality debayer when exporting.
100.
Raw data from the camera's image sensor needs to be processed into video imagery through debayering, which is normally done in the edit software; this is why raw formats tend to be too heavy to edit natively and why editing is often done with proxy files. With Blackmagic RAW this is no longer necessarily the case. Like other raw formats, BRAW files contain unprocessed image data from the camera's image sensor, but unlike other raw video formats, BRAW files are partially debayered in camera, using hardware acceleration. In addition, BRAW files contain full color information, which means that compressing the image does not result in a loss of color data.
101.
One potential downside of BRAW is that it may not be as future-proof as other raw formats: with them, you may be able to reprocess old raw footage with some not-yet-invented, more advanced debayering algorithm in the latest software down the line. This may not be the case with BRAW, as its debayering is done partially in camera, so you are tied to finalising the debayer with the same algorithm. While this potential downside is real, I have to say that in my opinion it leans more towards the theoretical than the practical side of things.
102.
The fact that raw formats leave the demosaicing, or debayering, to be processed by external software in post-production has one negative aspect: as the software changes, so, potentially, does the quality of the debayer, meaning the image can look different in different software, which does sound problematic. As the demosaicing with BRAW is partial, it is not fully reliant on external software. Having the ability to use more than one debayering algorithm in any given post-production package, as with other raw formats, puts the unified look of the image within one project under considerable threat.
103.
In 1991, when the VHS cassette system was still going strong, a certain milestone was reached with the MPEG-1 standard, whose video coding, MPEG-1 Part 2, was finalised in 1993 and built on the earlier ITU-T H.261 videoconferencing codec. These were among the earliest practical coding standards for digital video, paving the way for popularising digital video storage and transmission. The goal of MPEG-1 compression was to achieve modest, roughly VHS-quality video by compressing the data stream to about 1.5 Mbit/s without excessive quality loss. MPEG-1's primary purpose was to enable video playback on Video CDs and CD-ROMs and in early digital media players.
104.
In 1994, MPEG-1 was succeeded by MPEG-2, whose video coding is also known as H.262. This codec became the standard video format for DVDs and SD digital television, offering improved quality and scalability compared to MPEG-1. Five years later, in 1999, the arrival of MPEG-4 Part 2, a close relative of the ITU-T H.263 codec, brought another leap forward in video compression technology. It introduced features such as object-based coding, which means that instead of treating the entire frame as a single entity, distinct objects could now be recognised and tracked within the frame, and each of these objects could be coded independently, allowing for more efficient compression and better handling of complex scenes. Another big addition was interactivity, meaning the ability to control playback and navigate through the video content.
105.
In 2003 we got the H.264 codec, which remains the most popular video codec even now. The naming of these codecs has been a bit confusing: the earlier MPEG-4 video codec sat alongside H.263, H.264 got the name MPEG-4 AVC, Advanced Video Coding, and on top of that, MPEG-4 is divided into numbered parts that tell which set of standards a codec complies with, for example MPEG-4 Part 10 for AVC. The previous codec generations had emphasised compression over quality, but with H.264 it became possible to reach a good balance between acceptable quality and efficient compression.
106.
The HEVC codec was launched in 2013. When Samsung released its fantastic mirrorless camera, the NX1, in 2015, it was one of the first consumer cameras to offer HEVC recording for its video files, and most computers did not yet have sufficient processing power to provide stutter-free playback of the clips. So it took years for the codec to shift from theory into reality and become widely used in consumer devices. It can be a worthwhile effort to encode your existing H.264 content into H.265, or HEVC, High Efficiency Video Coding, as it can significantly reduce file sizes without visible quality loss. So it's fair to say that HEVC was a significant leap forward from the older H.264 codec.
107.
The use of 4K footage spread alongside the use of the H.265 codec. Nowadays, 4K is nothing special, as even some consumer cameras can record 8K or above. As with most hardware technology, the most recent release is often the most capable, and the same goes for video codecs. Hardware compatibility is currently the H.265 codec's strongest asset, and that's a big one, because it means less stress for creators. Being a more recent codec, AV1 beats H.265 in encoding efficiency, and AV1 is also royalty-free, unlike H.265. Most likely, by the time you watch this video, there will be a growing number of software and devices offering support for the successor to H.265, the H.266 codec. More on that in some future episode.
108.
With Blackmagic's BRAW files, the metadata is handled better than in any other video or raw codec so far, as you don't have to choose between a format with baked-in metadata and a format that relies only on sidecar files to store the metadata. With formats like CinemaDNG raw and RED raw, the image frame files lack embedded metadata from the camera, so this data needs to be stored in separate sidecar files. When you change the original raw settings of a BRAW file, an additional sidecar file is created automatically to accompany the original, embedded metadata. The original metadata of the file remains intact but is overridden by the altered settings in the sidecar file, which uses a human-readable format, so you can add or edit metadata with any text editor.
109.
BRAW sidecar files can be used to automatically apply new settings simply by moving the sidecar file into the same folder as the BRAW file. If you remove the sidecar file and reopen the video file, it will use the original, embedded metadata settings. Any software that uses the BRAW SDK can access these settings. Changes in metadata, such as BRAW settings as well as information on iris, focus, focal length, white balance, tint, color space, project name, take number and more, are saved in the sidecar file. The metadata of a BRAW file is encoded frame by frame over the duration of the clip, which is important for changes made during capture, for example if the lens of the camera is adjusted during a shot.
110.
When someone talks about video gamma, they usually refer to the standardised gamma value of 2.2, which in turn refers to the Rec. 709 color space standard, which in turn equals SDR, standard dynamic range. Film gamma varies depending on what kind of film stock and development process is being simulated: it refers to software-controlled gamma curves that try to mimic a physical film stock's behaviour in color reproduction, tonal range and contrast.
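A minimal sketch of what that standard video gamma does to linear light (Python with NumPy; a simplified pure power function, leaving out the linear toe segment that the full Rec. 709 transfer curve actually has):

```python
import numpy as np

def video_gamma_encode(linear: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Encode linear light (0..1) with a simple power-law gamma for SDR display."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

linear = np.array([0.0, 0.05, 0.18, 0.5, 1.0])   # 0.18 is the classic mid-grey card
print(np.round(video_gamma_encode(linear), 3))   # [0.    0.256 0.459 0.73  1.   ]
# Mid-grey lands near 0.46 of the signal range, so the curve devotes far more code
# values to the shadows and midtones than a straight linear mapping would.
```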
111.
Even when you're shooting using the 2.2 video gamma, the BRAW file records and stays in film gamma. In other words, the codec preserves the full dynamic range of the sensor even when using video gamma; the metadata of the file merely instructs the BRAW processing to display the limited Rec. 709 video gamma range. As the wider dynamic range data stays in the BRAW file, it can be retrieved at any point, meaning the details in the darkest and lightest areas of the image are still recoverable.
112.
With the limited latitude of a Rec. 709 video file, the clipped image data in the brightest and darkest areas is not retrievable in post. So, as you can't really grade 8-bit Rec. 709 footage, what you see as you record is pretty much what you're left with in post. This can be seen as a positive if your material looks fine as shot, because the time-consuming color correction phase can then be skipped entirely from the post-production schedule, which suits situations where you need to deliver content quickly.
113.
With BRAW you don't need to make any compromises between using fast, light Rec. 709 footage and using heavier, gradable film gamma footage. You get the best of both worlds: you can work using the fast Rec. 709 workflow and still, if need be, go back at any time to retrieve the details, pulling up the black parts of the image or pulling down the white areas, with the detail retained. The video may look clipped, but only because you have chosen to display the footage using the Rec. 709 color space.
114.
Dynamic range in a video file is the difference between the brightest and darkest areas of a video image that can be recorded without losing detail. It is typically measured in stops, a logarithmic unit of exposure; one stop is equal to a doubling of brightness. Dynamic range and latitude are terms that are often used interchangeably: the dynamic range of the camera is what enables the practical latitude of the video image in post. The size of the camera sensor and the color bit depth of the video file have the biggest effect on how wide a dynamic range is provided.
115.
HDR and SDR viewing now run in parallel on televisions and mobile devices across the world. If you want, you can make images on an HDR screen identical to the range you had in SDR. Any camera that records using some kind of log setting can be graded into HDR in post: HDR is not technically achieved in the camera, rather the captured image is graded into High Dynamic Range in post-production. The reason for HDR's existence is the fact that human vision is much more capable than what SDR images and displays were able to provide. HDR offers a wider range of luminance levels, closer to what the human eye perceives in the real world.
116.
If you work on a collaborative project and have multiple Resolve Studio workstations on the same local area network, you can easily assign the Resolve workstations as either "artist" workstations for editing or "remote" workstations for processing render tasks, using a feature called remote rendering. This way, the artist workstations can hand their render jobs over to the remote workstations. This brings flexibility and saves time, as the editors don't have to wait for renders to finish on their own editing workstations: the heaviest rendering load is lifted away from the edit computers, and the time editors used to spend waiting for renders can now be used for editing.
117.
To assign a Resolve workstation as a remote render station, open the project browser, right-click on the panel and choose the "remote rendering" option. Resolve will automatically open the deliver page and start polling for remote render jobs. On an artist workstation, you then choose remote rendering from your render queue: by choosing "Any", the render workstation with the fewest render tasks will pick up the job, and from the same menu you can also choose to render the job locally or assign a specific render workstation to it. The ability to assign render tasks to the workstation with the most resources also eases the hardware requirements of the workstations used for editing.
118.
Just as delivery or recording place their own demands on the format and codec of video, so does VFX work. For VFX, an image sequence format is usually preferred over a native video format. PNG, JPEG 2000, Cineon, DPX, OpenEXR and TIFF are all still-image formats that can be used for video work as well. Image sequence formats render footage one frame per file, so the process can be continued from the last successfully rendered frame instead of starting the entire render from scratch.
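One way to picture that frame-by-frame resumability (a generic sketch in Python; render_frame and the file naming are hypothetical stand-ins for whatever actually produces each frame):

```python
import os

def render_sequence(out_dir: str, total_frames: int, render_frame) -> None:
    """Render numbered frames, skipping any that already exist from an earlier session."""
    os.makedirs(out_dir, exist_ok=True)
    for frame in range(total_frames):
        path = os.path.join(out_dir, f"shot010_{frame:06d}.exr")   # hypothetical naming
        if os.path.exists(path):
            continue                        # already rendered, nothing to redo
        render_frame(frame, path)           # only the missing frames get rendered
```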
119.
A hobbyist can make videos for a long time without ever caring about a thing called timecode. When you record solo, with in-camera audio, the audio is in sync as it is being recorded. But when we start to work professionally and collaboratively, timecode becomes essential. Timecode is a clock that counts in frames and acts as a unique identifier for each frame of video. It is made up of four parts: hours, minutes, seconds and frames. The idea of timecode is that it enables multiple devices to be synced at frame level; for example, when doing multicam work, the gear can receive external timecode from one source, onto which all the devices are jam-synced.
120.
In post-production, timecode is a more accurate and less time-consuming method of syncing audio than using waveforms. In production, however, timecode often requires external hardware, so the time you lose in production is won back in post. When recording video, we usually use free run or rec run modes for our timecode. In free run mode, the camera generates its own timecode and the timecode advances even when the camera is not recording; free run is useful when syncing footage from multiple devices in post. In rec run mode, the camera's timecode starts and stops along with the recording, so rec run gives you continuous, gap-free timecode across the clips recorded on a single camera.
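As a small sketch of that four-part structure (plain Python; whole frame rates only, leaving out the drop-frame timecode used for 29.97 and 59.94 fps):

```python
def frames_to_timecode(frame_count: int, fps: int) -> str:
    """Convert an absolute frame count into HH:MM:SS:FF non-drop-frame timecode."""
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = frame_count // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(90125, 25))   # 01:00:05:00 at 25 fps
```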