Lossless and Lossy Compression Techniques for Corners and Contours
Date Issued
2015
Author(s)
Hsiao, Ching-Wen
Abstract
Shape is an important feature for object recognition, template matching, and image analysis. To represent it efficiently, techniques for encoding corners and contours at very low bit rates are necessary. In this thesis, three data compression techniques are therefore proposed to reduce the data size required for corners and contours. For corner compression, we rearrange the corners based on distance information and use an active matrix to encode their positions; adaptive arithmetic coding and context modeling are also adopted to improve efficiency. In addition, techniques to compress the other information in binary feature points are proposed: to encode the angles, the correlation between the distance information of feature points is exploited, and for descriptors, an asymmetric reference-point selection scheme is proposed to improve predictive coding. For lossless contour compression, the proposed algorithm first applies a morphology operation to shrink the contour slightly and then uses a concept similar to the angle Freeman chain code. The chain code, however, is split into a main-chain code and a sub-chain code. From observation, the angles between consecutive directions are mostly 0 degrees or 45 degrees to the right or left. To decrease symbol diversity, all angles other than 0, 45, and -45 degrees are represented by a single symbol in the main-chain code and are distinguished in the sub-chain code. Moreover, some symbol substitution is applied in the main-chain code to simplify the most common symbol combinations arising from digital contours. Huffman coding is then applied to the main-chain code as an intermediate code according to the probability statistics of the symbols, and to further improve compression efficiency, a distribution transform is applied to alter the distribution of zeros and ones: runs of the same bit become denser, and the probability of zeros increases.
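The main-chain/sub-chain split described above can be sketched as follows. This is a minimal illustration under stated assumptions: the symbol names (`'S'`, `'+'`, `'-'`, `'O'`), the helper functions, and the turn-sign convention are illustrative choices, not the thesis's actual encoder.

```python
# Sketch: angle-based chain coding with a main-chain / sub-chain split.
# Assumes an 8-connected contour traced as a list of (x, y) pixel coordinates,
# with Freeman direction codes 0..7 in 45-degree steps.

def freeman_directions(contour):
    """Map consecutive 8-connected pixel moves to Freeman codes 0..7."""
    step_to_code = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    return [step_to_code[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(contour, contour[1:])]

def split_chain(codes):
    """Encode turn angles between consecutive directions.

    Main chain: 'S' (0 degrees), '+' (+45), '-' (-45), and a single
    symbol 'O' for every other turn, which reduces symbol diversity.
    Sub chain: disambiguates each 'O' by its actual turn in 45-degree units.
    """
    main, sub = [], []
    for prev, cur in zip(codes, codes[1:]):
        turn = (cur - prev + 4) % 8 - 4  # signed turn in 45-degree steps
        if turn == 0:
            main.append('S')
        elif turn == 1:
            main.append('+')
        elif turn == -1:
            main.append('-')
        else:
            main.append('O')
            sub.append(turn)
    return main, sub
```

Because most turns on a digital contour are 0 or ±45 degrees, the main chain ends up dominated by a few frequent symbols, which is what makes the subsequent Huffman and arithmetic coding stages effective.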
Finally, adaptive arithmetic coding with context modeling reduces the data size of the contour considerably. The central concept of the proposed lossy contour compression is to approximate the original shape by a combination of vertices and polynomial curves. The vertices, also called dominant points, are found as follows. First, initial dominant points are chosen according to a curvature measure, and then a 3rd-order polynomial with optimization is used to approximate the original contour. After the error between the approximated and original contours is calculated, new dominant points are iteratively added at suitable positions until the error falls within tolerance. By tuning over the eight-connected neighborhood of each dominant point, the dominant points and the polynomial coefficients are replaced whenever a better position is found. When encoding the coordinates of the dominant points, a concept similar to the proposed corner compression techniques is adopted. We also found correlation between the polynomial coefficients, so their data size can be significantly decreased by improved adaptive arithmetic coding.
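The iterative dominant-point refinement can be sketched as follows. For brevity, this sketch measures error against straight line segments rather than the optimized 3rd-order polynomials used in the thesis (a Douglas-Peucker-style simplification); all function names are illustrative assumptions.

```python
# Sketch: iteratively insert dominant points at the position of maximum
# approximation error until every contour point is within tolerance.
import math

def point_to_segment_error(p, a, b):
    """Distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def refine_dominant_points(contour, tol):
    """Start from the endpoints; between each pair of dominant points,
    insert the contour point with the largest error, repeating until
    all errors are within `tol`."""
    dominant = [0, len(contour) - 1]
    changed = True
    while changed:
        changed = False
        new_dominant = [dominant[0]]
        for i, j in zip(dominant, dominant[1:]):
            worst, worst_err = None, tol
            for k in range(i + 1, j):
                err = point_to_segment_error(contour[k], contour[i], contour[j])
                if err > worst_err:
                    worst, worst_err = k, err
            if worst is not None:
                new_dominant.append(worst)
                changed = True
            new_dominant.append(j)
        dominant = new_dominant
    return [contour[i] for i in dominant]
```

In the thesis's version, each segment would instead be fitted with an optimized cubic polynomial, and a final tuning pass would test the eight-connected neighbors of each dominant point for a lower-error placement.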
Subjects
Predictive coding
Entropy coding
Data compression
Image compression
Binary feature point
Chain code
Contour approximation
Type
thesis
File(s)
Name
ntu-104-R02942031-1.pdf
Size
23.32 KB
Format
Adobe PDF
Checksum
(MD5):4750fa482d0376f35343d37a9fd959a1
