
Understanding video compression

By Staff Writer
19/06/2015

Compression allows video and audio to be stored or transmitted at much lower data rates than ‘raw’ digital video. Rod Sommerich explains.

The two main types of video compression used commercially in the audio-visual sector are MPEG and Motion-JPEG.


However, there are many versions and subsets of these standards, including JPEG 2000, MPEG-1, MPEG-2, MPEG-4, H.264 and others.

Each has characteristics tailored for specific applications, and different benefits and issues to consider.

 

M-JPEG and JPEG 2000

Compression based on M-JPEG uses a system that compresses the information within a single video frame.

This means each frame of video is stored as an independent frame, and the content is easily divisible and switchable frame by frame.

M-JPEG was the original compression standard used in non-linear editing, allowing editors to cut accurately between frames. Frame-accurate access to the information is important when making content.

Some of the common applications for M-JPEG are web browsers, media players, game consoles, digital cameras, internet protocol (IP) cameras, webcams, streaming servers and video cameras.

As stated, each frame or field of the video is compressed separately as an individual JPEG image. These frames are divided into ‘blocks’ of pixels, and a value is calculated for each block. The smaller the blocks, the higher the data rate and the better the image quality. In Motion JPEG the data rate is usually relatively constant, even with fast-moving content or fine detail in an image.
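
To make this concrete, here is a minimal Python sketch of the M-JPEG idea, using the Pillow imaging library and synthetic frames (both assumptions for illustration only): every frame is compressed as its own JPEG, so any frame can be decoded without reference to its neighbours.

    # M-JPEG-style coding: each frame is an independent JPEG, so any frame
    # decodes on its own - the property that allows frame-accurate editing
    # and synchronised multi-screen playback.
    import io

    import numpy as np
    from PIL import Image

    # Synthetic 720p frames stand in for captured video (illustration only).
    frames = [np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8) for _ in range(5)]

    encoded = []
    for frame in frames:
        buffer = io.BytesIO()
        Image.fromarray(frame).save(buffer, format="JPEG", quality=80)
        encoded.append(buffer.getvalue())

    # The third frame can be decoded with no reference to any other frame.
    print(Image.open(io.BytesIO(encoded[2])).size, len(encoded[2]), "bytes")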

M-JPEG is an ideal format for digital signage and other applications where multiple screens show the same content. Because M-JPEG is delivered as discrete frames of information, the content can be played on all screens at the same moment, without the delays and echoes often seen with MPEG compression.

Most video-over-IP matrix switchers and splitters use M-JPEG or JPEG 2000 to ensure that displays are showing the same frame at the same time. In MPEG systems, complex timing mechanisms are required to achieve synchronised output; otherwise the systems are non-synchronous and the same content can appear with different delays at different destinations.

M-JPEG has native support in applications such as QuickTime, in the PlayStation and other consoles, and in browsers such as Safari, Google Chrome and Firefox, which typically means no additional software is required for playback of M-JPEG video.

M-JPEG offers a good balance of data rate and picture quality, and the quality of the image is usually more consistent even when there are fast transitions or changes in the image.

As you would expect, if you increase the data rate (i.e. the block sizes are made smaller), compression artefacts become less obvious and subjective picture quality improves.
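
A rough way to see the trade-off is to encode the same frame at several JPEG quality settings and compare the resulting sizes; the snippet below (again Pillow, with a synthetic test frame) sketches that measurement.

    # The same frame encoded at several JPEG quality settings: higher quality
    # means more bytes per frame, and bytes per frame times frame rate gives
    # an approximate stream data rate.
    import io

    import numpy as np
    from PIL import Image

    # A smooth synthetic gradient stands in for a real frame (illustration only).
    row = np.linspace(0, 255, 1280, dtype=np.uint8)
    frame = np.stack([np.tile(row, (720, 1))] * 3, axis=-1)

    for quality in (30, 60, 90):
        buffer = io.BytesIO()
        Image.fromarray(frame).save(buffer, format="JPEG", quality=quality)
        print(f"quality {quality}: {len(buffer.getvalue())} bytes per frame")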

 

MPEG compression

This technology compresses video as groups of pictures and requires all the frames in a group to reconstruct the video for display. MPEG compares pixels across multiple frames and then transmits only the differences between them. This delivers big data savings, but the approach also has characteristics the user needs to consider when planning a system.

MPEG is used in many transmission and storage systems where the content is not changed or manipulated during transmission – for example digital TV (DVB, ATSC, etc), Blu-ray, DVD, CCTV, cable TV and other uses where the signal is received and limited synchronisation or manipulation is required.

MPEG stream bandwidth is usually lower than for an M-JPEG stream at equivalent picture quality; in some cases 10-20 times less for the same content. MPEG can be saved in much smaller files and distributed more effectively than M-JPEG.

One consideration of an MPEG system is that increased movement or changes in the video content will require more bandwidth to maintain the quality.

This happens if there is a cut or many changes in the image across a group of pictures (GOP), such as when there is a ‘pan’ or low-light noise in the picture.

MPEG splits the frames into GOPs, which can contain anywhere from three frames to many more.

The settings are chosen based on the application. An MPEG encoder will typically let the user select ‘constant quality’ or ‘constant bandwidth’ during the encoding phase, and this choice determines how the image looks on screen and how changes in the image are managed.
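
As an illustration of how that choice can look in practice, the sketch below drives FFmpeg's libx264 encoder from Python – a CRF value for constant quality, a capped bit rate for constant bandwidth. The file names are placeholders and FFmpeg must be installed; an actual project would use whatever encoder its hardware or software provides.

    # 'Constant quality' vs 'constant bandwidth' with FFmpeg's libx264 encoder.
    # File names are placeholders; FFmpeg must be installed and on the PATH.
    import subprocess

    SOURCE = "input.mov"  # placeholder source clip

    # Constant quality: CRF keeps visual quality roughly steady, so the bit
    # rate rises and falls with scene complexity (cuts, pans, noise).
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264",
         "-crf", "23", "constant_quality.mp4"],
        check=True,
    )

    # Constant bandwidth: a target bit rate with a capped maximum, so picture
    # quality drops during complex scenes rather than the data rate rising.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264",
         "-b:v", "4M", "-maxrate", "4M", "-bufsize", "8M", "constant_bitrate.mp4"],
        check=True,
    )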

A GOP has three main components:

  • ‘I frame’ or intra-coded picture, which is coded independently of all other pictures. This frame is compressed as a single frame of video and stored in the stream to be used as the reference for the GOP.
  • ‘P frame’ (predictive coded picture) holds motion-compensated difference information relative to the preceding I or P frame.
  • ‘B frame’ (bi-predictive coded picture, sometimes called a bi-directional frame) holds motion-compensated difference information relative to both earlier and later pictures in the GOP.

In simple terms, the I frame marks the beginning of a GOP, a P frame predicts movement from the preceding reference frame, and a B frame predicts changes using both the preceding and following reference frames.

An example of a typical 12-frame GOP is the sequence:
I-B-B-P-B-B-P-B-B-P-B-B
with the next I frame beginning the following GOP.
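
As a toy illustration of the principle (not real MPEG – motion compensation and B frames are left out), the sketch below stores a full reference frame at the start of each GOP and only differences for the frames that follow.

    # Toy GOP coding: store the I frame whole, then only differences for the
    # frames that follow. Decoding cannot start until an I frame arrives.
    import numpy as np

    GOP_LENGTH = 12

    def encode(frames):
        encoded, reference = [], None
        for i, frame in enumerate(frames):
            if i % GOP_LENGTH == 0:
                encoded.append(("I", frame.copy()))       # full reference frame
            else:
                encoded.append(("P", frame - reference))  # difference only
            reference = frame.copy()
        return encoded

    def decode(encoded):
        frames, reference = [], None
        for kind, data in encoded:
            if kind == "I":
                reference = data.copy()                   # decoding starts here
            else:
                reference = reference + data              # needs the previous frame
            frames.append(reference.copy())
        return frames

    # A slowly brightening synthetic scene (illustration only).
    frames = [np.full((4, 4), 100 + n, dtype=np.int16) for n in range(24)]
    print(all(np.array_equal(a, b) for a, b in zip(frames, decode(encode(frames)))))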

The first consideration is that it is impossible to generate a complete frame of video until an ‘I frame’ is received, and until a complete GOP has been received, the movement or changes in the content may not be accurate.

When multiple screens show the same content, it is not usually possible to have them playing video in perfect sync. In the above example a delay of up to 12 frames – almost half a second at 25 frames per second – may exist between two screens next to each other.

This is not an ideal situation in a video distribution system in which multiple displays are in the same area.

M-JPEG typically needs a player with lower processing requirements than MPEG, which requires the receiver or player to calculate and manage more information in order to replay the video.

 

Delivery

Now that we have compressed the video, we can choose how it is to be delivered.

It can be put on a hard drive or other storage device for replay via computer, DVD, Blu-ray, media player, etc.

We could also stream the media across a network or the internet for delivery to clients. There are many streaming protocols, and the most common are set out below.

  • Unicast sets up a direct one-to-one connection between the sending and receiving devices. A unicast stream is similar to pulling a rope from point A to point B, with both ends of the stream communicating to ensure the packets are received correctly. Video delivered over the internet typically uses unicast.
  • Multicast allows multiple users to see the content on the network. The stream is sent to a multicast group address; devices on the network ‘tune in’ to that address and users watch the content. Multicast is similar to how free-to-air TV works – the information is sent out all the time and the user can tune in to the channel they want.

The traffic load on the network is similar whether one person is watching or 1,000. Unlike unicast, in multicast systems the receiver does not communicate directly with the source to ask for information to be resent; if the stream is interrupted, the information is lost and the image can be disrupted.

If you want to use multicast on your local area network (LAN), a network switch that supports multicast is needed – typically a managed Layer 2 switch with IGMP snooping. Multicast is not routed across the public internet, and internet service providers (ISPs) will strip multicast streams from their traffic.
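
As an illustration of the ‘tuning in’ step, the sketch below uses Python's standard socket module to join a multicast group, which triggers the IGMP membership report that tells the switch to forward that group's traffic. The group address and port are placeholders.

    # Minimal multicast receiver: joining the group sends an IGMP membership
    # report, and the switch then forwards that group's traffic to this host.
    # Group address and port are placeholders.
    import socket
    import struct

    GROUP = "239.1.1.1"  # placeholder group in the administratively scoped range
    PORT = 5004

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group on the default interface.
    membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    while True:
        packet, sender = sock.recvfrom(2048)
        # UDP multicast has no retransmission: a lost packet is simply gone.
        print(f"{len(packet)} bytes from {sender}")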

As well as the above, there are protocols that manage the handshake and communication of these streams. There are many of these, but the common ones are set out below, followed by a short playback sketch:

  • HTTP – Hypertext Transfer Protocol is used to request and deliver media over the web, and underpins most browser-based streaming.
  • RTP – Real-Time Transport Protocol packetises real-time audio and video (including M-JPEG streams) for delivery over IP networks.
  • RTSP – Real-Time Streaming Protocol is designed for use in entertainment and communications systems to control streaming media servers.
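
As a simple illustration, the sketch below opens a stream by URL with OpenCV, which (when built with FFmpeg) handles the RTSP/RTP handshake behind a single call; the URL is a placeholder.

    # Opening a stream by URL; OpenCV built with FFmpeg negotiates RTSP/RTP
    # (or HTTP) behind this one call. The URL is a placeholder.
    import cv2

    STREAM_URL = "rtsp://192.0.2.10:554/stream1"  # placeholder camera/server address

    capture = cv2.VideoCapture(STREAM_URL)
    if not capture.isOpened():
        raise RuntimeError("Could not open stream")

    for _ in range(100):          # read a short burst of frames
        ok, frame = capture.read()
        if not ok:
            break                 # stream ended or was interrupted
        print("frame", frame.shape)

    capture.release()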

 

Summary

M-JPEG and MPEG each have applications and uses in the AV market, and they won’t disappear in the near future.

The information in this article is general and based on a broad description of these technologies. There are many variations on these standards, and several manufacturers have implemented proprietary changes that overcome some of the limitations of each compression scheme.

When choosing the products for a project, integrators should consult the supplier to understand the products and the advantages and limitations of each solution. System choices should be based on the features required by users to ensure an appropriate solution is provided for the application.
