Sistemas Multimedia - (Digital Re-)Quantization

Vicente González Ruiz - Departamento de Informática - UAL

August 4, 2024

Contents

 1 What is quantization?
 2 What is (digital re-)quantization?
 3 Basic terminology
 4 Types of quantizers
  4.1 Scalar quantizers (SQ) VS Vector Quantizers (VQ)
  4.2 Uniform VS non-uniform quantizers
  4.3 Static VS adaptive quantizers
 5 Deadzone quantization
 6 Lloyd-Max quantization
 7 Vector quantization applied to the spatial domain
 8 To-Do
 9 References

1 What is quantization?

In Information Theory [1], quantization [2, 3, 4, 5] is any mapping process between two sets of elements \(A\) and \(B\), where every element of \(A\) is associated with one (not necessarily the same) element of \(B\), with \(|A|>|B|\), where \(|\cdot |\) denotes the order (the number of elements) of a set.

Notice that, when \(|A|>|B|\), quantization is an irreversible process because at least two elements of \(A\) will be mapped to the same element of \(B\), and therefore, there is no way to find the reverse mapping1.
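For instance, a minimal Python sketch (the sets below are made up for illustration, not taken from the referenced documents) of such a many-to-one mapping between \(A=\{0,\dots ,255\}\) and \(B=\{0,\dots ,15\}\) could be:

  # Quantization as a many-to-one mapping between two sets A and B.
  A = range(256)                        # |A| = 256
  quantize = lambda a: a // 16          # each element of A gets one element of B
  B = sorted({quantize(a) for a in A})  # |B| = 16 < |A|
  print(quantize(0) == quantize(15))    # True: two elements of A share the same
                                        # element of B, so the mapping cannot be
                                        # reversed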

2 What is (digital re-)quantization?

In the quantization of digital signals [2, 5], the elements of \(A\) and \(B\) are digital samples [3]. Since, by definition, digital samples have already been quantized when they were converted from the analog world to the digital one, we are actually applying a re-quantization. Again, if \(|A|>|B|\), such quantization implies a loss of information in the quantized signal.

3 Basic terminology

When working with 1D signals, quantizers divide the range of possible input values that the samples can take into a collection of non-overlapping intervals2. The values that delimit these intervals are called decision levels, and the value that, in the quantized domain, represents all the possible input values that fall into an interval is called the representation level of the interval.
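As an illustration, the following Python sketch (the levels are hypothetical) quantizes a sample by finding the interval, delimited by two consecutive decision levels, in which it falls, and outputs the corresponding representation level:

  import numpy as np

  decision_levels = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # interval boundaries
  representation_levels = np.array([-1.5, -0.5, 0.5, 1.5])  # one per interval

  def quantize(x):
      # Find the interval in which x falls and output its representation level.
      i = np.searchsorted(decision_levels, x) - 1
      i = np.clip(i, 0, len(representation_levels) - 1)
      return representation_levels[i]

  print(quantize(0.3), quantize(0.9))  # both fall in [0.0, 1.0) -> 0.5 0.5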

4 Types of quantizers

4.1 Scalar quantizers (SQ) VS Vector Quantizers (VQ)

When quantization maps single elements of \(A\) to single elements of \(B\), the quantizer is said to be scalar [2]. When tuples of elements are mapped, VQ is used [5]. Notice that this classification is not related to the dimensionality of the signal, but to whether the samples are processed independently or quantized in tuples. In the latter case, we can exploit the correlation between samples.

4.2 Uniform VS non-uniform quantizers

In a uniform quantizer [2], all the intervals have the same size, which is equal to the quantization step size, \(\Delta \).3 In a non-uniform quantizer [2], at least one of the intervals differs from the rest. For example, a deadzone quantizer [2] has an interval for the representation level 0 whose size doubles that of the rest of the intervals, and for this reason a deadzone quantizer is a non-uniform quantizer. Another example of a non-uniform quantizer is a Lloyd-Max quantizer [2], because it divides the range of input values into a set of intervals whose size is inversely proportional to the probability of using such interval.
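A minimal Python sketch of a (mid-tread) uniform quantizer with step size \(\Delta \) (the function names are ours, not VCF's) could be:

  import numpy as np

  def quantize(x, Delta):
      return np.round(x / Delta).astype(int)  # quantization indexes

  def dequantize(k, Delta):
      return k * Delta                        # representation levels

  x = np.array([-3.2, -1.1, 0.4, 2.7])
  k = quantize(x, Delta=2)
  print(k, dequantize(k, Delta=2))            # [-2 -1  0  1] [-4 -2  0  2]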

4.3 Static VS adaptive quantizers

Static quantizers do not modify the partitioning of the input signal’s dynamic range (the decision levels), nor the representation level assigned to each interval. On the contrary, adaptive quantizers adapt these parameters to the sequence of samples that they are quantizing.4 A Jayant quantizer [2] is an example of adaptive quantization.
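The following Python sketch only illustrates the adaptation idea behind a Jayant quantizer (the multiplier table and the number of levels are hypothetical): after quantizing each sample, the step size is multiplied by a factor that depends on the produced quantization index, and the decoder can replicate the adaptation because it receives the same indexes.

  import numpy as np

  multipliers = [0.8, 0.9, 1.0, 1.2]  # one multiplier per index magnitude |k| = 0..3

  def jayant_quantize(samples, Delta=1.0):
      indexes = []
      for x in samples:
          k = int(np.clip(np.round(x / Delta), -3, 3))  # mid-tread quantizer, 7 levels
          indexes.append(k)
          Delta *= multipliers[abs(k)]                  # adapt the step size
      return indexes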

5 Deadzone quantization

Deadzone quantizers [2] are static quasi-uniform scalar quantizers. They are frequently used in lossy compression systems because, when the quantization step size is a power of two and the input sample values are integers5, the quantization indexes can be found simply by discarding the least significant bits of the input samples (in other words, we only need to perform a bit-shift operation to find the corresponding quantization index). This also means that it is possible to build a progressive encoding system using a deadzone quantizer.
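A minimal Python sketch of this idea (assuming integer samples and \(\Delta =2^n\); the function names are ours, not VCF's) could be:

  import numpy as np

  def deadzone_quantize(x, n):
      # k = sign(x) * floor(|x| / Delta), where Delta = 2**n; the division
      # becomes a bit shift of the magnitude of the sample.
      return np.sign(x) * (np.abs(x) >> n)

  def deadzone_dequantize(k, n):
      # Representation levels at the centers of the intervals; the zero
      # interval (the deadzone) is reconstructed as 0 because sign(0) = 0.
      Delta = 1 << n
      return np.sign(k) * (np.abs(k) * Delta + Delta // 2)

  x = np.array([-7, -3, 0, 3, 7])
  k = deadzone_quantize(x, n=2)        # Delta = 4
  print(k)                             # [-1  0  0  0  1]
  print(deadzone_dequantize(k, n=2))   # [-6  0  0  0  6]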

Another reason why deadzone quantization is popular in lossy encoding systems is that, where the signal is weaker, it tends to remove more electronic noise than other scalar quantizers. If we suppose that the signal has 0 average6 and that the deadzone is placed where the SNR is smaller, we basically replace electronic noise with quantization noise7. Obviously, this does not improve the RD performance of the encoder, but the perceived (subjective) quality is increased for the same bit rate.8

Resources

  1. Deadzone Quantization in VCF.

6 Lloyd-Max quantization

A Lloyd-Max quantizer [2] minimizes the quantization noise for a signal with a known probability distribution (histogram) of the input samples. As a result, the density of quantization intervals is higher where the probability of the samples is higher, and vice versa.

Lloyd-Max quantizers are considered non-uniform quantizers. Notice that the histogram must also be known by the decoder to “restore”9 the information.
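A minimal Python sketch of the iterative Lloyd algorithm, which alternates between (1) setting each representation level to the centroid of the samples that fall into its interval and (2) setting each decision level to the midpoint between two consecutive representation levels, could be (the input samples are synthetic):

  import numpy as np

  def lloyd_max(samples, num_levels, iterations=50):
      # Start with uniformly spaced representation levels.
      r = np.linspace(samples.min(), samples.max(), num_levels)
      for _ in range(iterations):
          d = (r[:-1] + r[1:]) / 2                 # decision levels (midpoints)
          idx = np.digitize(samples, d)            # interval index of each sample
          for i in range(num_levels):
              if np.any(idx == i):
                  r[i] = samples[idx == i].mean()  # centroid of the interval
      return d, r

  samples = np.random.laplace(scale=10, size=10_000)
  decision_levels, representation_levels = lloyd_max(samples, num_levels=8)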

Resources

  1. A partial implementation in VCF.

7 Vector quantization applied to the spatial domain

Vector quantizers [5] input blocks10 of samples and output one quantization index per block. For example, in most natural images, the spatial correlation [6] makes some blocks of the image similar to other blocks. When this happens, we can compute a set of centroids (blocks) and use them to represent the original blocks. As a result, we obtain a matrix of quantization indexes that can be entropy coded.

Note that VQ exploits the spatial correlation. For this reason, the encoding performance of a vector quantizer is usually superior to that of a scalar quantizer, because the number of output quantization indexes11 is smaller.
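A minimal Python sketch of VQ in the spatial domain (assuming that scikit-learn is available, that the image is grayscale, and that its sides are multiples of the block side; none of this is taken from the resources below) could be:

  import numpy as np
  from sklearn.cluster import KMeans

  def vq_encode(img, block_side=4, codebook_size=256):
      # Split the image into non-overlapping blocks (one row per block).
      h, w = img.shape
      blocks = (img.reshape(h // block_side, block_side, w // block_side, block_side)
                   .swapaxes(1, 2)
                   .reshape(-1, block_side * block_side))
      # Compute the centroids (the codebook) and, for each block, the index
      # of its closest centroid (the quantization index).
      kmeans = KMeans(n_clusters=codebook_size, n_init=4).fit(blocks)
      return kmeans.labels_, kmeans.cluster_centers_

  def vq_decode(indexes, codebook, img_shape, block_side=4):
      # Replace each quantization index by its centroid and rebuild the image.
      h, w = img_shape
      blocks = codebook[indexes].reshape(h // block_side, w // block_side,
                                         block_side, block_side)
      return blocks.swapaxes(1, 2).reshape(h, w)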

Resources

  1. Vector Quantization Example.
  2. Vector Quantization (in the 2D domain) of a grayscale image.
  3. Vector Quantization (in the 2D domain) of a color (RGB) image.
  4. Image compression using LBG.

8 To-Do

  1. Modify VCF to allow the use of Lloyd-Max quantization in the compression pipeline. Notice that VCF already implements this quantizer, but the compression pipeline is not finished. You must implement a complete pipeline (at least for compressing images). Complexity 2.
  2. Modify VCF to allow the use of VQ (applied to the spatial domain) in the compression pipeline. Notice that VQ used in the spatial domain can significantly reduce the advantage of using a spatial transform.12 For this reason, it can also be useful to implement a “fake” spatial transform where no transformation is performed at all. The same happens when VQ is used in the color domain. Complexity 4.

9 References

[1]   V. González-Ruiz. Information Theory.

[2]   V. González-Ruiz. Scalar Quantization.

[3]   V. González-Ruiz. Signal Quantization.

[4]   V. González-Ruiz. Trellis Coding Quantization.

[5]   V. González-Ruiz. Vector Quantization.

[6]   V. González-Ruiz. Visual Redundancy.

1Dequantization does not exist.

2If the signal has more than 1 dimension, the process is exactly the same, but we don’t speak of intervals, but of regions (for example, in the 2D case).

3Notice that, in a uniform quantizer, the distance between all the decision levels is also \(\Delta \).

4Notice that, if the quantizer depends on the characteristics of the signal but these are known a priori, it should be considered static.

5... and negative integers are represented using two’s complement ...

6The electronic noise has 0 average and a flat spectrum.

7Which also has 0 average and a flat spectrum.

8Obviously, only if the electronic noise is perceived as a source of distortion comparable to the quantization noise.

9Remember that quantization is an irreversible process and therefore the original signal is never restored (except if \(\Delta =1\) and we are using digital quantization).

10If we are removing spatial redundancy, the blocks are usually square tiles of pixels. If we remove color redundancy, the blocks are multicomponent pixels, for example, RGB values.

11Indexes of the centroids.

12Spatial transforms are studied later in this course.