Introduction to Image Compression:-

Image compression is an application of data compression. It reduces redundancy in the image, that is, it avoids storing duplicate data, and thereby reduces the storage required for the image. Compression can be lossy or lossless. There are several techniques for image compression, such as the DCT (discrete cosine transform), the DWT (discrete wavelet transform), and PCA (principal component analysis).

The figure below depicts the general flow of image compression and decompression.

Discrete Wavelet Transformation:-

The wavelet transform decomposes a signal into a set of basis functions called wavelets. Wavelets are mathematical functions that separate data into different frequency components, each analyzed with a resolution matched to its scale. The DWT transforms a discrete-time signal into a discrete wavelet representation.
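As an illustration of this idea, the sketch below (an illustrative example, not the article's implementation) performs one level of the Haar DWT, the simplest wavelet: pairwise averages give a half-resolution approximation, and pairwise differences give the detail needed to reconstruct exactly.

```python
import math

def haar_dwt_1d(signal):
    """One Haar decomposition level: returns (approximation, detail).
    Assumes an even-length input, purely for illustration."""
    approx, detail = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))  # low-pass (scaling) output
        detail.append((a - b) / math.sqrt(2))  # high-pass (wavelet) output
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Perfect reconstruction of the original signal."""
    signal = []
    for a, d in zip(approx, detail):
        signal.append((a + d) / math.sqrt(2))
        signal.append((a - d) / math.sqrt(2))
    return signal
```

For compression, the detail coefficients of smooth regions are near zero and quantize away cheaply, while the approximation keeps a coarse copy of the signal.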

Methodology:

ψ_(a,b)(t) = (1/√|a|) ψ((t − b) / a)

In this equation, ψ is a function called the wavelet, a is the scale parameter, which measures the degree of compression, and b is the translation parameter, which measures the time location of the wavelet.

The discrete wavelet transform of a 2D function f(x, y) of size M×N is:

W_φ(j₀, m, n) = (1/√(MN)) Σ_(x=0..M−1) Σ_(y=0..N−1) f(x, y) φ_(j₀,m,n)(x, y)

W^i_ψ(j, m, n) = (1/√(MN)) Σ_(x=0..M−1) Σ_(y=0..N−1) f(x, y) ψ^i_(j,m,n)(x, y),  i ∈ {H, V, D}

Discrete Cosine Transformation:-

The discrete cosine transformation is used in most compression applications. The DCT converts a signal into its elementary frequency components, transforming digital image data from the spatial domain to the frequency domain. It is a fast transform with excellent compaction for highly correlated data, and it gives a good compromise between information-packing ability and computational complexity.

Methodology:-

The discrete cosine transform helps to separate the image into parts, or spectral sub-bands, of differing importance with respect to the image's visual quality. The general equation for a 1D DCT (N data items) is:

C(u) = α(u) Σ_(x=0..N−1) f(x) cos[ (2x + 1)uπ / 2N ],  u = 0, 1, …, N − 1

where α(0) = √(1/N) and α(u) = √(2/N) for u ≠ 0.
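The equation above can be computed directly. The sketch below assumes the common orthonormal normalization (α(0) = √(1/N), α(u) = √(2/N)); it is illustrative, not an optimized fast DCT.

```python
import math

def dct_1d(f):
    """DCT-II of a length-N sequence:
    C(u) = alpha(u) * sum_x f(x) * cos((2x + 1) * u * pi / (2N))."""
    N = len(f)
    def alpha(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    return [alpha(u) * sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                           for x in range(N))
            for u in range(N)]
```

A constant signal compacts into the single DC coefficient, with all higher coefficients zero, which is exactly the energy-compaction property the text describes.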

Implementation:-

I) Compression Procedure

(Figure: hybrid compression procedure)

II) Decompression Procedure

(Figure: hybrid decompression procedure)

Quantization:-

Quantization is the step where the actual reduction of image data is done. It is achieved by compressing a range of values to a single quantum value; when the number of distinct symbols in a given stream is reduced, the stream becomes more compressible.
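A uniform quantizer illustrates the idea: a whole range of values collapses onto one quantum value (here, the nearest multiple of a step size), so fewer distinct symbols remain. The step size and rounding rule are illustrative choices, not the article's codec parameters.

```python
def quantize(values, step):
    """Map each value to the nearest multiple of `step` (uniform quantizer).
    Many distinct inputs collapse onto few quantum values, so the stream
    becomes more compressible; the rounding error is the lossy part."""
    return [step * round(v / step) for v in values]
```

For example, with step 10 the four distinct inputs 12, 14, 17, 29 reduce to three distinct symbols.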

Encoding:-

In encoding, the results of quantization are encoded, for example with run-length encoding or Huffman coding. Encoding optimizes the representation of the information to further reduce the bit rate.

Result:-

(Figure: results of the hybrid method)
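Run-length encoding, one of the schemes mentioned, can be sketched as follows (an illustrative version; a real coder would pack the runs into bits):

```python
def rle_encode(symbols):
    """Run-length encode a list into [(symbol, run_length), ...]."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([s, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand the runs back into the original symbol list."""
    return [s for s, n in runs for _ in range(n)]
```

Quantized DCT/DWT coefficients contain long runs of zeros, which is why this step pays off after quantization.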

Conclusion :-

  • It is observed that the compression ratio is high for several images compressed and decompressed by the hybrid method.

 

Introduction to Image Compression:-

Image compression is an application of data compression. It reduces redundancy in the image, that is, it avoids storing duplicate data, and thereby reduces the storage required for the image. Compression can be lossy or lossless. There are several techniques for image compression, such as the DCT (discrete cosine transform), the DWT (discrete wavelet transform), and PCA (principal component analysis).

The figure below depicts the general flow of image compression and decompression.

Discrete Wavelet Transformation:-

The wavelet transform decomposes a signal into a set of basis functions called wavelets. Wavelets are mathematical functions that separate data into different frequency components, each analyzed with a resolution matched to its scale. The DWT transforms a discrete-time signal into a discrete wavelet representation.

Methodology:-  

ψ_(a,b)(t) = (1/√|a|) ψ((t − b) / a)

In this equation, ψ is a function called the wavelet, a is the scale parameter, which measures the degree of compression, and b is the translation parameter, which measures the time location of the wavelet.

The discrete wavelet transform of a 2D function f(x, y) of size M×N is:

W_φ(j₀, m, n) = (1/√(MN)) Σ_(x=0..M−1) Σ_(y=0..N−1) f(x, y) φ_(j₀,m,n)(x, y)

W^i_ψ(j, m, n) = (1/√(MN)) Σ_(x=0..M−1) Σ_(y=0..N−1) f(x, y) ψ^i_(j,m,n)(x, y),  i ∈ {H, V, D}

Implementation:-


The DWT factorizes the polyphase matrix of the wavelet filter into a sequence of alternating upper and lower triangular matrices and a diagonal matrix.
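Assuming this refers to the lifting scheme, the Haar case makes the factorization concrete: a predict step (lower-triangular), an update step (upper-triangular), and a final diagonal scaling replace direct filtering. This is an illustrative sketch, not the article's specific filter.

```python
import math

def haar_lifting_forward(x):
    """One Haar DWT level via lifting: predict, update, then scale.
    Assumes an even-length input, for illustration only."""
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]            # predict step
    approx = [e + d / 2 for e, d in zip(evens, detail)]      # update step
    k = math.sqrt(2)
    return [a * k for a in approx], [d / k for d in detail]  # diagonal scaling

def haar_lifting_inverse(approx, detail):
    """Undo the steps in reverse order for perfect reconstruction."""
    k = math.sqrt(2)
    approx = [a / k for a in approx]
    detail = [d * k for d in detail]
    evens = [a - d / 2 for a, d in zip(approx, detail)]
    odds = [d + e for e, d in zip(evens, detail)]
    return [v for pair in zip(evens, odds) for v in pair]
```

Lifting halves the arithmetic of the direct filter-bank implementation and allows in-place computation, which is why it is the usual DWT implementation route.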


Quantization:-

Quantization is the step where the actual reduction of image data is done. It is achieved by compressing a range of values to a single quantum value; when the number of distinct symbols in a given stream is reduced, the stream becomes more compressible.

Encoding:-

In encoding, the results of quantization are encoded, for example with run-length encoding or Huffman coding. Encoding optimizes the representation of the information to further reduce the bit rate.


Applications:-

  • Medical Application
  • Image Processing
  • Data Compression
  • Signal de-noising

Results:-

(Figure: DWT compression results)

 

Introduction to Image Compression:-

Image compression is an application of data compression. It reduces redundancy in the image, that is, it avoids storing duplicate data, and thereby reduces the storage required for the image. Compression can be lossy or lossless. There are several techniques for image compression, such as the DCT (discrete cosine transform), the DWT (discrete wavelet transform), and PCA (principal component analysis).

Figure below depicts the general flow of image compression and decompression.


Discrete Cosine Transformation:-

The discrete cosine transformation is used in most compression applications. The DCT converts a signal into its elementary frequency components, transforming digital image data from the spatial domain to the frequency domain. It is a fast transform with excellent compaction for highly correlated data, and it gives a good compromise between information-packing ability and computational complexity.

Methodology:-

The discrete cosine transform helps to separate the image into parts, or spectral sub-bands, of differing importance with respect to the image's visual quality.

The general equation for a 1D DCT (N data items) is:

C(u) = α(u) Σ_(x=0..N−1) f(x) cos[ (2x + 1)uπ / 2N ]

The general equation for a 2D DCT (an N by M image) is:

C(u, v) = α(u) α(v) Σ_(x=0..N−1) Σ_(y=0..M−1) f(x, y) cos[ (2x + 1)uπ / 2N ] cos[ (2y + 1)vπ / 2M ]

where α(0) = √(1/N) (resp. √(1/M)) and α(k) = √(2/N) (resp. √(2/M)) otherwise.
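Because the 2D kernel is a product of two 1D cosines, the 2D DCT is separable: transform every row, then every column of the result. The sketch below assumes the orthonormal normalization and is illustrative, not an optimized implementation.

```python
import math

def dct_1d(f):
    """Orthonormal DCT-II of a 1D sequence."""
    N = len(f)
    def alpha(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    return [alpha(u) * sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                           for x in range(N))
            for u in range(N)]

def dct_2d(img):
    """2D DCT via separability: 1D DCT on rows, then on columns."""
    rows = [dct_1d(r) for r in img]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

For a constant block all the energy lands in the single DC coefficient C(0, 0), which is what makes DCT blocks of smooth image regions so compressible.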

Quantization:-

Quantization is the step where the actual reduction of image data is done. It is a lossy technique, commonly used in DCT-based compression. It is achieved by compressing a range of values to a single quantum value; when the number of distinct symbols in a given stream is reduced, the stream becomes more compressible.

Encoding:-

In encoding, the results of quantization are encoded, for example with run-length encoding or Huffman coding. Encoding optimizes the representation of the information to further reduce the bit rate.

Common Applications:-

  • JPEG Format
  • MPEG-1 and MPEG-2
  • MP3
  • Advanced Audio Coding (AAC), etc.

Results:-

(Figure: DCT compression results)

Medical imaging is the process and technique of creating visual representations of the interior of a body for medical intervention and clinical analysis. Medical imaging helps to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. It also establishes a database of normal anatomy and physiology, making it possible to identify abnormalities.

Goals for MIP

Medical image processing (MIP) is meant to meet the following goals:

  • To develop computational method and algorithms to analyze biomedical data.
  • To develop tools to give our collaborators the ability to analyze biomedical data to support advancement of biomedical knowledge.

Need for MIP

Nowadays, imaging has become an essential component of many fields of biomedical research. Biologists study cells and generate 3D confocal microscopy data sets, virologists generate 3D reconstructions of viruses from micrographs, radiologists identify and quantify tumors from CT and MRI scans, and neuroscientists detect regional metabolic brain activity from MRI scans. Analysis of these diverse types of images requires sophisticated computerized quantification and visualization tools.

Benefits of Digital Image Processing for Medical Applications

  • Interfacing the analog output of sensors such as microscopes, endoscopes, and ultrasound to a digitizer and, in turn, to a digital image processing system.
  • Image enhancement.
  • Changing the density dynamic range of black-and-white images.
  • Color correction in color images.
  • Manipulation of colors within an image.
  • Contour detection.
  • Area calculation of the cells of a biomedical image.
  • Display of image line profiles.
  • Restoration of images.
  • Smoothing of images.
  • Construction of 3D images from 2D images.
  • Generation of negative images.
  • Zooming of images.
  • Pseudo-coloring.
  • Point-to-point measurements.
  • Getting relief effects.
  • Removal of artifacts from the image.

Operations Performed on Medical Images

Smoothing:-

Smoothing is the process of simplifying an image while preserving important information. The goal is to reduce noise or unwanted details without introducing too much distortion, so as to simplify subsequent analysis.
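A 3×3 box (mean) filter is the simplest example of such smoothing. The edge-clamping behavior below is an illustrative choice; real pipelines may use Gaussian or edge-preserving filters instead.

```python
def mean_filter(img):
    """3x3 box (mean) smoothing with edge clamping: each output pixel is
    the average of its neighborhood, which suppresses isolated noise."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out
```

An isolated noise spike of value 9 in a zero background becomes 1 after one pass, while flat regions are left unchanged: exactly "reduce noise without too much distortion".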

Image Registration:-

This is the process of bringing two or more images into spatial correspondence. In the context of medical imaging, image registration allows the concurrent use of images taken with different modalities, at different times, or with different patient positions. In surgery, images are acquired preoperatively as well as intraoperatively. Because of time constraints, the real-time intraoperative images have a lower resolution than the preoperative images obtained before surgery. Moreover, deformations that occur naturally during surgery make it difficult to relate the high-resolution preoperative images to the lower-resolution intraoperative anatomy of the patient. Image registration attempts to help the surgeon relate the two sets of images.

Image Segmentation:-

When looking at an image, a human observer cannot help seeing structures that can often be identified with objects. Segmentation is the process of creating a structured visual representation from an unstructured one. Image segmentation is the problem of partitioning an image into homogeneous, semantically meaningful regions that correspond to objects; segmentation itself is not concerned with determining what each partition represents. In the context of medical imaging, these regions have to be anatomically meaningful: a typical example is partitioning an MRI image of the brain into white and gray matter. Since it replaces continuous intensities with discrete labels, segmentation can be seen as an extreme form of smoothing and information reduction. Segmentation is useful for visualization, allows quantitative shape analysis, and provides an indispensable anatomical framework for virtually any subsequent automatic analysis.
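A minimal two-class segmentation shows the idea of replacing intensities with labels: pick a threshold (here by an ISODATA-style iteration, which is an illustrative choice, not the article's method) and label each pixel foreground or background.

```python
def isodata_threshold(pixels, eps=0.5):
    """Iterative threshold selection: start from the global mean, then
    repeatedly move t to the midpoint of the two class means."""
    t = sum(pixels) / len(pixels)
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        if not low or not high:
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

def threshold_segment(img, t):
    """Replace continuous intensities with discrete labels 0/1."""
    return [[1 if p > t else 0 for p in row] for row in img]
```

On a bimodal intensity distribution (say, dark gray matter vs. bright white matter), the threshold settles between the two modes and the labeling partitions the image into the two classes.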


Here we sketched some of the fundamental concepts of medical image processing.

DIFFERENT SURROUND VIEW CONSTRUCTION MODELS: A CASE STUDY

The new concept of the Advanced Driver Assistance System (ADAS) has rapidly emerged in the market to ensure the safety of people driving their vehicles. The system not only tries to ensure the safety of the people behind the wheel but also prevents accidents and provides assistance for better driving. One of the main components of this system is the “Top View” or “Surround View”. Automotive surround view, also called “around view” or “surround vision monitoring system”, provides the driver a 360-degree view of the area surrounding the vehicle.

In order to achieve the goals of ADAS, the image sensors should continuously input images that will be processed and monitored for precise maneuvering of vehicles. To start from the basics, a setup first needs to be installed on the vehicle to provide images to the image-processing unit, which are later displayed on the LCD or Central Information Display.

  • Four cameras are installed on the vehicle: one on the front bumper, a second on the rear bumper, and the other two on the side-view mirrors.
  • The cameras have fish-eye lenses because the images need to cover a large area of the surroundings without losing much data. Fish-eye lenses have a field of view as large as around 180°.
  • These cameras are also tilted at an angle from the horizontal so that the road areas next to the vehicle are covered as well, which increases the accuracy of the system.

The cameras have to work in real time and output video at 30 fps. Similarly, the surround view system should provide output at the same pace, so that the display is updated as the vehicle moves. The cameras input the images, and the image-processing unit (surround view system) takes these four images one at a time and stitches them together to provide a top-down view of the vehicle and the area around it.

SURROUND VIEW SOLUTION

The surround view system has three main components that are as follows:

  1. Geometric alignment
  2. Photometric alignment
  3. Composite view synthesis

Geometric Alignment itself has two parts: one, that includes the correction of distortion produced in each image due to the fish-eye lenses and second, the transformation of image from their individual perspective to top-down view perspective. The photometric alignment removes the varying intensity between the images captured by the cameras and lastly, composite view synthesis stitches the four images together to provide the driver a bird’s eye view as if the driver is present at a certain vertical height from the car’s roof.

GEOMETRIC ALIGNMENT

Due to the advantages of fish-eye lenses over other lenses, they are chosen for this system, but these lenses achieve extremely wide angles of view by causing straight lines to become distorted. Objects that are near the lens appear bigger, and those that are far away appear too small. The tangential distortion is neglected, as it is negligible compared to the radial distortion. The radial distortion caused by fish-eye lenses is barrel distortion, wherein image magnification decreases with distance from the optical axis. The apparent effect is that of an image that has been mapped around a barrel. Straight lines appear as curves, and points are moved in the radial direction from their original positions. Due to this distortion, fish-eye lenses are able to map an infinitely wide object plane into a finite image area. The distortion is removed by a lens-distortion correction algorithm.

  • Lens Distortion Correction: Here, inverse mapping can be done, where, for every output pixel, an input pixel is identified, and the output pixels accordingly pick up their intensity values. The mapping from input pixels to output pixels is one-to-many. The output image obtained is then vertically and horizontally corrected and restored to its initial input size.
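The inverse-mapping idea can be sketched with a simple one-parameter radial model, r_d = r_u(1 + k1·r_u²). This model and its parameters are assumptions for illustration, not a calibrated fish-eye model: for every output (corrected) pixel we look up the corresponding input (distorted) pixel.

```python
def undistort(img, k1, cx, cy):
    """Inverse mapping for a simple radial model: for every OUTPUT pixel,
    compute where it came from in the distorted INPUT image and copy that
    pixel (nearest-neighbor sampling). (cx, cy) is the distortion center."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ru2 = (x - cx) ** 2 + (y - cy) ** 2      # squared undistorted radius
            scale = 1 + k1 * ru2                     # r_d = r_u * (1 + k1*r_u^2)
            xs = int(round(cx + (x - cx) * scale))
            ys = int(round(cy + (y - cy) * scale))
            if 0 <= xs < w and 0 <= ys < h:          # outside source: leave 0
                out[y][x] = img[ys][xs]
    return out
```

Iterating over output pixels (rather than input pixels) is what guarantees every output pixel gets a value, which is the point of inverse mapping.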

(Figure: lens distortion correction)

(Figure: final corrected image after lens distortion correction)

The image obtained as output, is then taken for the next level of processing where the images are transformed to top-down views.

  • Perspective Mapping: Perspective transformation projects the image onto a new viewing plane; it is also called projective mapping. To get the bird's-eye view from the distortion-corrected images, four points are selected on each distortion-corrected image and, corresponding to them, four points are chosen on its top-down view. Then, the homography matrix is calculated for those two planes. Using this matrix, an inverse mapping is done, and again, for every output pixel, the corresponding input pixel is picked up, from where it takes its intensity value.
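With four point correspondences, the homography can be found by solving an 8×8 linear system (the direct linear transform with h33 fixed to 1). The sketch below, including its small Gaussian-elimination solver, is illustrative; production code would use a library routine.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                        # back substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Each of the 4 point pairs gives two linear equations in the 8 unknowns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Map a point through the homography (with perspective division)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Once H is known, the same inverse-mapping loop used for distortion correction fills the top-down output image by sampling the source through H.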

After the above two steps, the objects present in the overlapping field of views of every camera, are checked for their positions so that stitching can take place without any misalignment in the overlapping regions.

PHOTOMETRIC ALIGNMENT

Due to illumination differences, the brightness or intensity of the same objects captured by different cameras can be quite different. To get a seamless stitched top-down view, the photometric difference between adjacent views must be removed. With this correction, the composite view will appear as if it were taken by a single camera placed above the vehicle, which further minimizes any discrepancies in the overlapping regions of adjacent views.
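A minimal form of photometric alignment estimates one multiplicative gain per camera from the overlap region, so mean intensities match. Real systems estimate per-channel gains and blend them smoothly; this single-gain reduction is an illustrative assumption.

```python
def photometric_gain(overlap_a, overlap_b):
    """Gain for camera B so the mean intensity of its overlap pixels
    matches camera A's view of the same scene area."""
    mean_a = sum(overlap_a) / len(overlap_a)
    mean_b = sum(overlap_b) / len(overlap_b)
    return mean_a / mean_b

def apply_gain(pixels, g):
    """Scale camera B's pixels, clipping to the 8-bit range."""
    return [min(255, g * p) for p in pixels]
```

If camera B sees the shared road area at half the brightness of camera A, the estimated gain is 2 and the corrected overlap matches A's intensities.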

COMPOSITE VIEW SYNTHESIS

Synthesis function receives input video streams from four fish-eye cameras and creates a composite surround view. Synthesis creates the stitched output image using the mapping that is pre-decided.  There are overlapping regions in the output frame, where image data from two adjacent input frames are required. In these regions, each output pixel maps to pixel locations in two input images, which is managed accordingly.
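In the overlapping regions, a linear blending ramp is one simple way to manage the two candidate pixels per output location. This is an illustrative choice; the pre-decided mapping in a production system may weight pixels differently.

```python
def blend_overlap(row_a, row_b):
    """Linearly ramp the blend weight across the overlap so the seam fades
    from camera A's image into camera B's image."""
    n = len(row_a)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 0.5   # 0 at A's edge, 1 at B's edge
        out.append((1 - w) * row_a[i] + w * row_b[i])
    return out
```

At the left edge of the overlap only camera A contributes, at the right edge only camera B, and in between the two views are cross-faded, hiding residual misalignment.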


BENEFITS OF SURROUND VIEW

The benefits of the surround view can be summed up in the following points:

  1. Eliminates blind spots during critical maneuvers in crowded and narrow spaces
  2. Provides help during parking of vehicles
  3. Prevents accidents, including collision with another vehicle that is turning into or crossing a road, collision between vehicle and pedestrian, and collision with obstacles such as an animal crossing the road
  4. Assistance in changing or leaving a lane
  5. Increases the efficiency in traffic and transport
Computer Vision: A Benchmark for Computer Science Thesis/Dissertations

M.Tech dissertations and assignments cannot be treated lightly, keeping in mind the importance of research at that level. There are several domains, streams, or fields of study open to a master's student. If we talk about computer science, it is a hot topic to discuss.

What are the best possible streams of research for a computer science student? This question floats in the mind of every master's student before starting the master's thesis. Here we will let you know about the scope of computer vision in a master's thesis.

Technology Scaling in Electronic Circuits

In this highly advanced and competitive era of technology, portability and compactness of electronic devices are required, along with their being fully featured. Research titans like Intel, Texas Instruments, and IBM are presenting their work on chip implementation at around 7 nm these days. The need to reduce device size arises from the following requirements of an electronic device:

ROLE OF DIGITAL SIGNAL PROCESSING IN FPGA AND ASICs

Importance of Digital Signal Processors:-

Digital signal processors are microcomputers whose specifications are optimized for applications that process high-speed numeric data. When DSPs act as digital filters, they receive digital values of signal samples and calculate results based on filter functions.
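The filter computation a DSP performs in that role can be sketched as a direct-form FIR filter; the moving-average coefficients in the test are an illustrative choice, not a recommended filter design.

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR: y[n] = sum_k coeffs[k] * x[n - k], the multiply-
    accumulate loop a DSP runs when acting as a digital filter.
    Samples before the start of the signal are taken as 0."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]  # multiply-accumulate
        out.append(acc)
    return out
```

DSPs are built around exactly this multiply-accumulate pattern, typically with single-cycle MAC units and circular buffers for the delayed samples.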

DSPs have proved outstanding with respect to time to market. In the context of real-time applications, DSPs have also proved excellent in performance, and they are judged brilliant in terms of feature flexibility.

ASIC and FPGA:-

If we compare DSPs and FPGAs, FPGAs were best in the context of time to market. FPGAs provide field-ready alterations to attain functionality. Their flexibility is not as high as that of software-programmable options, so they are in less demand than DSPs in the context of time to market, although they are far better than ASSPs in terms of cycle time and support.

ASICs were rated fair in the context of time to market and exceptional in the matter of execution: for a specific application, an ASIC must offer high execution performance. ASICs were also considered good in the context of price; since an ASIC is application oriented, there is less room for cost efficiency, so its price is good but somewhat behind that of digital signal processors. As for power, ASICs are best, since an application-oriented design should consume less power; but because an ASIC tends to be optimized more for cost than for power, it remains behind the DSP in this respect.

ASSPs were judged fair in the matter of development ease. This assumes they will face some challenges in achieving differentiated features, which may slow down development somewhat. In terms of development help, an ASSP is expected to come with application-specific knowledge, but only as a turnkey-oriented solution; thus, ASSPs do not have good development help. ASSPs were judged poor in the matter of feature flexibility: they are by nature somewhat poor in flexibility, as they are inherently tied to their own specific application, and precisely to a unique solution approach for the targeted applications. This specific focus and optimization is a tradeoff against flexibility.

Application-specific ICs can be tailored to perform specific functions extremely well and can be optimized for quite good power efficiency. However, ASICs are not field-programmable, so their functionality cannot be iteratively changed or updated during product development: every new version of the product requires a redesign and a trip through the foundry, which is an expensive proposition and an impediment. Advanced programmable digital signal processing devices, on the other hand, can be enhanced without any change to the silicon; a small change in the software program efficiently reduces development costs and provides aftermarket feature improvements through small code downloads. Consequently, when ASICs are considered for real-time signal processing applications, they are typically employed as bus interfaces, glue logic, and/or functional accelerators for a programmable DSP-based system.

Verification and testing is the most crucial step of chip designing process in VLSI industry.

VERIFICATION

Extraction of all the parasitic information from the design is the first step in full-chip verification. The extraction of signal and power nets is followed by analysis of all the transistors for potential problems.

There are some full-chip verification techniques, which are as follows:

Introduction

Charge recycling techniques for digital circuits play a vital role in decreasing power. In this prominent digital era, ample research is going on to diminish power. MTCMOS (multi-threshold CMOS) circuits selectively connect or disconnect the low-threshold-voltage logic gates to or from the power supply, so the leakage current produced by an MTCMOS circuit is significantly lower. MTCMOS technology provides a solution to low-power design requirements, but with this boon of a low-power technique comes a problem of significant power dissipation during mode transition: the active-to-sleep and sleep-to-active transitions consume a significant amount of additional energy in conventional MTCMOS circuits.