# videojitter examples gallery

This page shows some videojitter measurements of real-world systems and a discussion of the various phenomena they surface. It can be seen as a showcase of what videojitter is capable of, or as a guide on how to interpret the various patterns that sometimes appear in practical measurements.

All measurements shown here have been made using a cheap homemade instrument, specifically a Panasonic AMS3 light sensor feeding an ASUS Xonar U3 used as an ADC (unless otherwise noted). You can build a similar instrument for yourself in minutes.

## Perfect result

This result, obtained by playing a 120/1.001 FPS video on the built-in player of an LG G1 OLED TV, is the best result a videojitter user could possibly hope for.

The frame transitions form an extremely clear, sharp, straight horizontal line on the chart with no deviations whatsoever, except for the intentionally delayed transition (as expected). This means the time interval between frames (8.34 ms, as expected for 120/1.001 FPS) stayed exactly the same throughout the entire test signal with no outliers - no late or early frames.

This example also demonstrates the ability of both the playback and measurement system to handle very high FPS - much higher than typical video content - as well as their amazing timing accuracy, which as described in the "fine print" (the text below the chart) is in the order of 10 µs (yes, microseconds). This indicates the recording is virtually noise-free.

If you get this kind of result, your playback system is pretty much flawless as far as frame presentation timing is concerned.

## Isolated timing errors

This result was obtained by playing a 24/1.001 FPS video (typical of most movie content) using the madVR software video player running on a Windows 11 PC. The display device is the same as the previous example and is being driven at a 120/1.001 Hz refresh rate.

Compared to the perfect example above, we can spot a couple of differences.

The line is higher, which simply reflects the slower frame rate (longer transitions between frames). It's worth noting that, even though the display is running at 120 Hz, the test video is running at 24 FPS, so that's what videojitter sees - obviously it cannot observe individual display refreshes if the frame does not change (same goes for the human eye).

More importantly, one immediately spots a few unexpected outliers on the chart (the intentionally delayed transition is also an outlier, but it's expected). This means some transitions did not happen the expected 41.71 ms after the previous one: the frame was presented either too early (below the line) or too late (above the line).

This is a typical result when playing videos using PC software players, which suffer from a number of technical challenges that make it difficult for the player to perfectly time frame presentation.

In this particular case, the delayed transition at about 6 seconds into the recording was likely caused by the video output clock running too fast relative to the playback clock, forcing the system to delay a frame to compensate.

The second pair of late and early transitions around the 25 second mark was likely caused by the playback system "missing a beat" on a single frame (possibly due to a thread scheduling deadline miss), and then showing a later frame earlier to "catch up".

It's also worth noting the amount of time by which the transition interval deviated on these early and late frames: it turns out that every one of these outliers deviated by ±8.34 milliseconds. This is completely unsurprising given that the display refresh rate in this example was 120/1.001 Hz, i.e. a refresh interval of ~8.34 ms - the frame was simply displayed one refresh interval too early or too late, as one would naturally expect.
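
If you want to double-check that arithmetic, a couple of lines of plain Python (not part of videojitter) are enough to reproduce the numbers:

```python
# Sanity check: at 24/1.001 FPS the expected transition interval is ~41.71 ms,
# and at a 120/1.001 Hz refresh rate one refresh interval is ~8.34 ms - which
# matches the size of the deviations seen on the chart.
video_fps = 24 / 1.001
refresh_hz = 120 / 1.001

expected_interval_ms = 1000 / video_fps  # ~41.71 ms
refresh_interval_ms = 1000 / refresh_hz  # ~8.34 ms

print(f"expected transition interval: {expected_interval_ms:.2f} ms")
print(f"one refresh interval:         {refresh_interval_ms:.2f} ms")
print(f"a late frame would measure:   {expected_interval_ms + refresh_interval_ms:.2f} ms")
print(f"an early frame would measure: {expected_interval_ms - refresh_interval_ms:.2f} ms")
```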

## Systematic timing errors

The above result was obtained by playing a 24/1.001 FPS video on the built-in player of an LG G1 OLED TV.

Just like the previous example we observe timing errors, but this time they are not one-offs. The typical error is ±8.3 ms, which suggests that the TV is running at 120 Hz internally. The built-in video player seems unable to reliably meet individual refresh targets and is constantly "missing the mark". This is a surprisingly poor result for a built-in hardware video player.

While it may look like the errors follow a precise periodic pattern, this is not quite the case - if we zoom in we see the behavior is not exactly the same each time:

## Periodic timing errors

The above result was obtained by playing a 24/1.001 FPS video on the built-in "Nebula HDR" display of an ASUS ROG Flow X16 2023 laptop using Windows Media Player on Windows 11. The display is being driven at 240 Hz.

Aside from a couple of outliers near the 11-second mark, what is interesting in this example is a seemingly periodic pattern of delayed frames. The affected frames are delayed by exactly one refresh interval, which at 240 Hz is ~4.2 ms.

This phenomenon can be explained by the display refresh rate (240 Hz) not being a perfect whole multiple of the video frame rate (24/1.001 FPS). Most of the time each frame is displayed for exactly 10 refresh intervals, i.e. ~41.6667 ms, but that's slightly wrong: in a 24/1.001 FPS video, each frame should be displayed for slightly longer, ~41.7083 ms. As a result, a constant timing error of ~0.0417 ms is added with each new frame. This goes on until the accumulated error reaches the duration of a whole refresh interval (~4.1667 ms), which happens after exactly 100 frames or ~4.2 seconds. At that point, the system delays the next frame by one refresh interval so that the playback clock can "catch up" to the excessively fast display refresh clock. If this hypothesis is correct, we should expect to see a transition delayed by ~4.2 ms every ~4.2 seconds… and that is exactly what we see here!
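
For those who want to verify these numbers, here is the same back-of-the-envelope calculation as a minimal Python sketch (assuming ideal display and playback clocks, which is of course a simplification):

```python
# Error accumulation when showing a 24/1.001 FPS video on a 240 Hz display.
refresh_interval_ms = 1000 / 240             # ~4.1667 ms per display refresh
displayed_for_ms = 10 * refresh_interval_ms  # each frame shown for 10 refreshes
ideal_frame_ms = 1000 / (24 / 1.001)         # ~41.7083 ms per video frame

error_per_frame_ms = ideal_frame_ms - displayed_for_ms         # ~0.0417 ms
frames_until_delay = refresh_interval_ms / error_per_frame_ms  # exactly 100
seconds_between_delays = frames_until_delay * ideal_frame_ms / 1000

print(f"error accumulated per frame: {error_per_frame_ms:.4f} ms")
print(f"frames until the error spans a refresh interval: {frames_until_delay:.0f}")
print(f"time between delayed frames: {seconds_between_delays:.2f} s")
```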

This example also demonstrates that you don't need a fast instrument to accurately measure high refresh rate displays: here the display refresh interval is ~4.2 ms, but it was measured using an instrument that takes ~8.5 ms to settle. Even then, the instrument was still able to very precisely measure the timing errors shown above. This is because what really matters is not the display refresh rate, but the frame rate of the test video. In this example the test video is 24/1.001 FPS, which is well within the limits of the instrument.

## 3:2 "24p@60Hz" pattern

The above is a textbook example of a 24 FPS test video being played on a display being driven at a 60 Hz refresh rate, leading to the infamous "3:2 pattern" which is very commonplace when playing video on PC.

Basically, the problem is that 60 is not a whole multiple of 24, so the only way for the video player to make that work is to display a frame for 2 refresh intervals (2/60 s), then 3 refresh intervals (3/60 s), then 2, then 3, etc. This leads to the correct average frame duration of 5/120 s, i.e. 1/24 s, so the video plays at the proper overall speed, but at the cost of a very significant timing error on each individual frame.
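
A minimal sketch (plain Python, not part of videojitter) makes the arithmetic concrete:

```python
# 3:2 cadence: a 24 FPS video on a 60 Hz display alternates between frames
# shown for 2 refresh intervals and frames shown for 3 refresh intervals.
refresh_interval_ms = 1000 / 60
cadence = [2, 3]  # refresh intervals per frame, repeating

durations_ms = [n * refresh_interval_ms for n in cadence]
average_ms = sum(durations_ms) / len(durations_ms)
ideal_ms = 1000 / 24

print(f"frame durations:  {durations_ms[0]:.1f} ms and {durations_ms[1]:.1f} ms")
print(f"average duration: {average_ms:.2f} ms (ideal: {ideal_ms:.2f} ms)")
print(f"per-frame timing error: ±{ideal_ms - durations_ms[0]:.1f} ms")
```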

On a videojitter chart, this takes the form of a very obvious pattern where the frame transition intervals arrange themselves into two clear "lines": one at ~33 ms (two 60 Hz refresh intervals) and one at ~50 ms (three 60 Hz refresh intervals).

Zooming in, we can easily observe that frame durations indeed alternate between the two from one frame to the next, as expected:

The colours swap positions in the middle of the first chart because of the intentionally delayed transition, which inverts the alternating pattern of black and white frames.

## Severe presentation delays

The above result was obtained by playing a 60/1.001 FPS video on a Google Pixel 5 smartphone using the Android VLC media player app.

While the video is being played correctly most of the time, one can easily spot severe issues around the 10, 27 and 46 second marks. This is consistent with what could be observed during measurement, where the playback system was visibly struggling to the naked eye. The text below the chart also points out that hundreds of transitions are missing, suggesting that the playback system dropped many frames. It would appear this playback system is simply not capable of keeping up with the frame rate of the test video.

This example demonstrates that, even when the system under test is badly misbehaving, videojitter can still make sense of the data and produce reports that accurately reflect what went right and what went wrong.

## Recording noise

The above result was obtained by playing a 120/1.001 FPS video on the built-in player of an LG G1 OLED TV. Importantly, the recording was made using an instrument with especially bad noise performance: an OSRAM BPW 34 photodiode connected to the microphone jack of an ASUS Xonar U7 with the DC bias voltage applied in the forward mode, instead of the more suitable reverse mode.

Recording noise creates random variations in measured frame timestamps with no discernible discrete steps nor patterns. On the chart this manifests as points forming "fuzzy lines" instead of clean, neat alignments.

The "standard deviation" estimate in the fine print (the text below the chart) makes it possible to quantify the noise. In the above example, the standard deviation is about 850 µs which is very high. Good recordings made with high performance instruments will show a standard deviation of less than 50 µs. Note that this metric is only meaningful if the playback system being measured is well-behaved (no variations in frame timestamps).

In extreme cases, the noise will be so high as to make the chart unreadable.

This issue can be examined further by looking at the raw waveform, where the noise should be visually obvious. Here is an example from the above measurement:

The same noisy signal can also look like this if a lowpass filter was applied:

If the videojitter chart looks "noisy" but the waveform looks clean, it likely means you are not dealing with recording noise but with video clock jitter, which is discussed in the next section.

## Video clock jitter (e.g. VRR)

The above result was obtained by playing a 60/1.001 FPS video from a PC using mpv to an LG G1 OLED TV, with the entire chain using Variable Refresh Rate (VRR).

Playing video from a PC using VRR is a very interesting and peculiar case. When using VRR, frames are sent to the display as soon as the video player presents them, as opposed to waiting for the next VSync.

The major advantage is there cannot be any mismatch between the video frame rate and the display refresh rate, thus eliminating any highly noticeable "patterns" (such as 3:2 "24p@60Hz") as well as isolated discontinuities due to audio/video clock skew.

The disadvantage is that frame presentation timing now depends on the precise instant at which the video player decides to present a frame. Because Windows is not a real-time operating system, and therefore does not make strong timing guarantees, some amount of jitter (most likely video player thread scheduling jitter) is inevitable and will manifest as random variations in frame presentation times.
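
To get an intuition for how this shows up in a measurement, here is a small simulation (plain Python with NumPy; the 500 µs jitter figure is made up purely for illustration):

```python
# Simulate random presentation jitter on an otherwise perfect 60/1.001 FPS
# stream and look at the spread of the resulting transition intervals.
import numpy as np

rng = np.random.default_rng(0)
ideal_interval_s = 1 / (60 / 1.001)  # ~16.68 ms between frames
present_times_s = np.arange(600) * ideal_interval_s
jittered_times_s = present_times_s + rng.normal(0, 500e-6, present_times_s.size)

intervals_ms = np.diff(jittered_times_s) * 1000
print(f"mean interval: {intervals_ms.mean():.2f} ms")
print(f"interval standard deviation: {intervals_ms.std() * 1000:.0f} µs")
```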

videojitter is capable of measuring and characterizing this VRR jitter phenomenon; in fact, this is the original reason why videojitter was created in the first place, and where its name comes from.

In general, jitter creates random variations in measured frame timestamps with no discernible discrete steps nor patterns. On the chart this manifests as points forming "fuzzy lines" instead of clean, neat alignments. That said, in principle some patterns could emerge depending on what is causing the variations (e.g. thread scheduling timing is not necessarily always random).

A keen eye will point out that this case seems indistinguishable from the previous example involving recording noise. This is correct insofar as both cause random variations in measured frame timestamps, and so manifest in the same way in the resulting videojitter chart. However, it is usually possible to distinguish between the two cases by looking at the raw waveform. Indeed, recording noise causes the waveform itself to look "noisy". In contrast, in the case of video clock jitter, the recording will look "clean", but the resulting videojitter chart will not.

## Artefacts caused by overly slow display/instrument

The above result was obtained by playing a 240 FPS video on the built-in "Nebula HDR" display of an ASUS ROG Flow X16 2023 laptop using Windows Media Player on Windows 11. The display is being driven at 240 Hz.

The instrument used to make this measurement has a documented response time of 8.5 ms, making it challenging to measure a 240 FPS video which has frame intervals of ~4.2 ms. It is quite likely that even the display itself isn't fast enough anyway: indeed, current-technology LCD displays are unlikely to be able to complete a full black-white transition in such a short amount of time.

In this way the videojitter test video is arguably more demanding than real-world content, which rarely flips between 0% black and 100% white in the span of a single frame. Not to mention 240 FPS video is not exactly commonplace…

It is not possible, in general, to distinguish between a slow instrument and a slow display in a videojitter measurement, as both would look roughly the same in the raw recording. However, we can discuss the effects of a "slow recording" as shown in the above example.

The chart mostly looks right despite the equipment being driven to its limits. videojitter will still attempt to extract what it can from the recording, and is able to time the vast majority of transitions correctly.

However, the text at the bottom of the chart warns that 4 transitions went "missing". Also, things look a bit weird when sudden changes in frame durations occur, such as around the intentionally delayed transition and the outlier at around 35 seconds. Let's zoom into the latter:

One transition happened ~1.7 ms after the previous one, something that the display is not even capable of!

The problem with sudden changes in frame durations is that, if a frame is displayed for longer, the slow response time has less of an impact and the display and instrument are able to get further into the transition than they otherwise could. This may sound like a good thing, but it actually poses a challenge for the videojitter analyzer, because the waveforms of longer frames end up looking different from those of shorter frames - not just in duration, but also in amplitude and shape. This makes it harder for videojitter to determine where the true transition time is.
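
The effect can be reproduced with a toy model. The sketch below (plain Python with NumPy) approximates the combined display and instrument as a simple first-order low-pass response - a gross simplification, but enough to show that a frame held for longer produces a taller bump in the recording than its neighbours:

```python
# Toy model only: exponential settling response to ideal frame light levels.
import numpy as np

fs = 48000          # recording sample rate, Hz
frame_s = 1 / 240   # nominal frame duration (~4.2 ms)
tau_s = 0.0085 / 3  # settling time constant; rough stand-in for a display +
                    # instrument chain that takes ~8.5 ms to fully settle

# Alternating black (0) and white (1) frames, with one white frame held twice
# as long to mimic a delayed transition.
durations = [frame_s] * 5 + [2 * frame_s] + [frame_s] * 6
levels = [i % 2 for i in range(len(durations))]
ideal = np.concatenate(
    [np.full(int(round(d * fs)), float(level)) for d, level in zip(durations, levels)]
)

# Apply the exponential settling sample by sample.
alpha = 1 - np.exp(-1 / (tau_s * fs))
measured = np.empty_like(ideal)
acc = 0.0  # current (smoothed) light level seen by the instrument
for i, x in enumerate(ideal):
    acc += alpha * (x - acc)
    measured[i] = acc

long_frame_start = int(round(5 * frame_s * fs))
print(f"peak reached on normal white frames:  {measured[:long_frame_start].max():.2f}")
print(f"peak reached on the long white frame: {measured.max():.2f}")
```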

Consider, for example, what the intentionally delayed transition looks like in the raw recording:

One can see that, because the delayed transition is a longer frame, the signal peaks significantly higher than for the surrounding transitions. In this situation, there is ambiguity (illustrated by the highlighted area) as to what the precise transition time should be. Currently videojitter will tend to pick a time closer to the right of the highlighted area, which in this case happens to be wrong (but it might not be in other cases, which is what makes this problem difficult to solve). This leads to videojitter interpreting the transition as being excessively late. Consequently, the next transition is interpreted as being excessively early - hence impossibly quick transitions are being reported.

This also leads to videojitter getting confused as to which frames are black and which frames are white. videojitter uses the delayed transition to deduce this, as it knows what color the longer-lasting frame is supposed to be. Here the true delayed transition is the falling edge after the large peak, but videojitter thinks it's the (mistimed) rising edge before the peak. Therefore, videojitter incorrectly deduces that rising edges are transitions to white and falling edges are transitions to black - in reality it's the opposite in this particular measurement.

These issues can be more or less pronounced depending on the frame rate and instrument/display speed. Here's another example where the same 240 Hz display is measured playing a 60 FPS video, with the player (VLC in this case) missing its target very frequently, leading to many random shifts in frame durations:

In that example we can see that videojitter is slightly mistiming most transitions due to the sudden random shifts, which causes the lines on the chart to look fuzzy/noisy instead of exactly matching a multiple of the display refresh interval. Looking at the raw recording, it becomes clear why videojitter struggles to precisely time these transitions, given that their amplitude and shape are all over the place:

The overall takeaway here is: while videojitter is still mostly usable when measuring high frame rates using excessively slow displays and/or instruments, it is important to watch out for these artefacts while interpreting measurements. People making 24 FPS measurements can normally ignore these issues as that frame rate should be well within the limits of any reasonable display and instrument.