- Open Access
- Authors : Paras Gupta, Surya Pratap Singh Shekhawat, Hitesh Singh
- Paper ID : IJERTCONV5IS10056
- Volume & Issue : ICCCS – 2017 (Volume 5 – Issue 10)
- Published (First Online): 24-04-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
The Innovative Approach for Image Conversion
Paras Gupta, CSE Student, HMRITM, Delhi, India
Surya Pratap Singh Shekhawat, CSE Student, HMRITM, Delhi, India
Hitesh Singh, Assistant Professor, Department of CSE, HMRITM, Delhi, India
Abstract: With the evolution of digital concepts, the need for image conversion arises. Many software tools are available for this purpose, but considerable improvements are still required. As digitization grows day by day, so do the challenges of file compression. In this regard, a Government of India department has taken up the responsibility of digitizing its records, and we have designed an image converter to cater to its requirements.
Keywords: Image Conversion, Image Compression
1 INTRODUCTION
File system
A file system is the set of techniques and data structures that an operating system uses to keep track of files on a disk; that is, the way the files are organized on the disk. The term is also used to refer to a partition or disk that is used to store files, or to the type of the file system.
In computing, a file system is used to control how data is stored and retrieved. Without a file system, data placed in a storage medium would be one large body of information with no way to tell where one piece of data stops and the next begins. By dividing the information into pieces and giving each piece a name, the data is easily separated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file". The structure and logical rules used to manage the groups of data and their names is called a "file system".
Image Properties
Pixel dimensions: Shows the image height and width in pixels, that is, the physical size of the image. It is represented as width x height.
ex: 1008 x 900 pixels.
Print size: Shows the size the image will have when it is printed, in the current units. This is the logical size of the image. It depends upon the physical size of the image and the screen resolution [1].
ex: 3.500 x 3.005 inches.
Resolution: Shows the print resolution of the image in pixels per inch. It is represented as horizontal resolution x vertical resolution.
ex: 299.974 x 299.974 ppi.
Bit depth: Bit depth refers to the color information stored in an image. The higher the bit depth of an image, the more colors it can store. The simplest image, a 1-bit image, can show only two colors, black and white, because the single bit can store only one of two values, 0 (white) and 1 (black). An 8-bit image can store 256 possible colors, while a 24-bit image can show around 16 million colors.
ex: 24
Compression: The process of minimizing the size of a file is referred to as file compression.
Resolution unit:
Color representation: Each color can have a range of appearances depending on its richness (saturation). In the RGB representation, adding white light to the image changes all three components.
Compressed bits/pixel: "Compressed" means that the size of the data file is made smaller; bits per pixel indicates how much data can be recorded for each pixel.
Color space: A variety of colors can be created from the primary colors of pigment, and these colors then define a specific color space. A color space, also called a color model, is an abstract mathematical model that simply describes the range of colors as tuples of numbers, typically as 3 or 4 values or color components (e.g. RGB). Fundamentally, a color space is an elaboration of the coordinate system and sub-space. Each color in the system is represented by a single point.
ex: RGB, CMY, HSV, HSI
File name: Path and name of the file that contains the image. ex: /Users/hmr.img.png.
File size: Size of the file that contains the image. ex: 197 KB.
File type/format: Format of the file that contains the image. ex: PNG image, JPEG image.
Size in memory: RAM consumption of the loaded image, including the image's journal. This information is also displayed in the image window. The size is quite different from the size of the file on disk, because the displayed image is decompressed and because GIMP keeps a copy of the image in memory for redo operations.
ex: 9.2 MB.
Undo steps: Number of actions you have performed on the image, that you can undo. You can see them in the History dialog.
Redo steps: Number of actions you have undone, that you can redo.
Number of pixels:
ex: 974150.
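Most of these properties can also be read programmatically. The following is a minimal sketch, using the standard javax.imageio API, that prints the pixel dimensions, bit depth, number of pixels, and file size of an image; the file name is a placeholder.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImagePropertiesDemo {
    public static void main(String[] args) throws IOException {
        // Placeholder path; replace with an actual image file.
        File file = new File("sample.png");
        BufferedImage img = ImageIO.read(file);

        // Pixel dimensions (the physical size of the image).
        System.out.println("Pixel dimensions: " + img.getWidth() + " x " + img.getHeight());

        // Bit depth: bits of color information stored per pixel.
        System.out.println("Bit depth: " + img.getColorModel().getPixelSize());

        // Number of pixels.
        System.out.println("Number of pixels: " + (long) img.getWidth() * img.getHeight());

        // File size on disk (differs from the decompressed size in memory).
        System.out.println("File size: " + file.length() + " bytes");
    }
}
```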
IMAGE COMPRESSION TECHNIQUES
Image compression may be lossy or lossless. Lossless image compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy image compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are particularly suitable for natural images, such as photographs, in applications where a minor loss of fidelity is acceptable to achieve a substantial reduction in bit rate. A lossy compression method that produces negligible differences may be called visually lossless.
Methods for lossless image compression are:
Run-length encoding is a very basic form of lossless data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs, for instance simple images such as icons and line drawings. It is not helpful for files that do not have many runs, as it can significantly increase the file size.
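A minimal sketch of run-length encoding over one row of pixel values; the input values and the (value, count) output pairs are simplifications chosen for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal run-length encoder: each run of equal values becomes a (value, count) pair.
public class RunLengthDemo {
    static List<int[]> encode(int[] data) {
        List<int[]> runs = new ArrayList<>();
        for (int i = 0; i < data.length; ) {
            int value = data[i], count = 1;
            // Extend the run while the same value repeats.
            while (i + count < data.length && data[i + count] == value) count++;
            runs.add(new int[] { value, count });
            i += count;
        }
        return runs;
    }

    public static void main(String[] args) {
        // A simple 1-bit scanline with long runs compresses well.
        int[] row = {0, 0, 0, 0, 0, 1, 1, 1, 0, 0};
        for (int[] run : encode(row)) {
            System.out.println("value=" + run[0] + " count=" + run[1]);
        }
    }
}
```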
DPCM and predictive coding: DPCM applied to signals with correlation between successive samples leads to good compression ratios. In images this means that there is a correlation between neighboring pixels; in video signals the correlation is between the same pixels in consecutive frames and within frames (which is the same as the correlation inside an image) [4]. Formally, the DPCM compression technique can be applied to intra-frame coding and inter-frame coding.
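A minimal sketch of intra-frame DPCM over one scanline, assuming the simplest predictor (the previous sample); practical codecs use more elaborate predictors and quantize the differences.

```java
// Differential PCM on a scanline: store the difference from the previous sample
// rather than the sample itself; neighboring pixels are correlated, so the
// differences are small and easier to compress.
public class DpcmDemo {
    static int[] encode(int[] samples) {
        int[] diffs = new int[samples.length];
        int previous = 0;                       // predictor starts at 0
        for (int i = 0; i < samples.length; i++) {
            diffs[i] = samples[i] - previous;   // prediction error
            previous = samples[i];
        }
        return diffs;
    }

    static int[] decode(int[] diffs) {
        int[] samples = new int[diffs.length];
        int previous = 0;
        for (int i = 0; i < diffs.length; i++) {
            samples[i] = previous + diffs[i];   // add the error back
            previous = samples[i];
        }
        return samples;
    }

    public static void main(String[] args) {
        int[] scanline = {120, 121, 121, 124, 130, 131};
        int[] encoded = encode(scanline);       // {120, 1, 0, 3, 6, 1}
        int[] decoded = decode(encoded);        // lossless round trip
        System.out.println(java.util.Arrays.toString(encoded));
        System.out.println(java.util.Arrays.toString(decoded));
    }
}
```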
Entropy encoding: One of the main types of entropy coding creates and assigns a unique prefix-free code to each unique symbol that occurs in the input [1]. These entropy encoders then compress data by substituting each fixed-length input symbol with the corresponding variable-length prefix-free output codeword. The length of each codeword is approximately proportional to the negative logarithm of its probability; in this way, the most common symbols use the shortest codes.
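The relation between a symbol's probability and its ideal codeword length, -log2(p), can be illustrated as follows; the message and its symbol frequencies are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Ideal prefix-free codeword lengths are roughly -log2(probability),
// so the most frequent symbols receive the shortest codes.
public class EntropyDemo {
    public static void main(String[] args) {
        String message = "AAAABBBCCD";
        Map<Character, Integer> counts = new LinkedHashMap<>();
        for (char c : message.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        int total = message.length();
        double entropy = 0.0;
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            double p = e.getValue() / (double) total;
            double idealLength = -Math.log(p) / Math.log(2);   // bits for this symbol
            entropy += p * idealLength;
            System.out.printf("%c  p=%.2f  ideal length=%.2f bits%n", e.getKey(), p, idealLength);
        }
        System.out.printf("Entropy (average bits/symbol): %.2f%n", entropy);
    }
}
```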
Adaptive dictionary algorithms such as LZW, used in GIF and TIFF: In the 1980s, many images had small color tables (on the order of 16 colors). For such a reduced alphabet, the full 12-bit codes yielded poor compression unless the image was large, so the idea of a variable-width code was introduced: codes typically begin one bit wider than the symbols being encoded, and as each code size is exhausted, the code width increases by 1 bit, up to some prescribed maximum (commonly 12 bits). When the maximum code value is reached, encoding proceeds using the existing table, but new codes are no longer generated for addition to the table.
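A minimal LZW encoder over a character string is sketched below; unlike GIF, it emits plain integer codes rather than variable-width codes, but the dictionary-growing behaviour is the same.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal LZW encoder: the dictionary starts with single characters and grows
// with each new phrase; the output codes index dictionary entries.
public class LzwDemo {
    static List<Integer> encode(String input) {
        Map<String, Integer> dictionary = new HashMap<>();
        for (int c = 0; c < 256; c++) {
            dictionary.put(String.valueOf((char) c), c);
        }
        int nextCode = 256;
        List<Integer> output = new ArrayList<>();
        String phrase = "";
        for (char c : input.toCharArray()) {
            String candidate = phrase + c;
            if (dictionary.containsKey(candidate)) {
                phrase = candidate;                     // keep extending the phrase
            } else {
                output.add(dictionary.get(phrase));     // emit code for known phrase
                dictionary.put(candidate, nextCode++);  // learn the new phrase
                phrase = String.valueOf(c);
            }
        }
        if (!phrase.isEmpty()) {
            output.add(dictionary.get(phrase));
        }
        return output;
    }

    public static void main(String[] args) {
        // Repetitive input compresses to fewer codes than characters.
        System.out.println(encode("ABABABABAB"));
    }
}
```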
Methods for lossy compression:
Reducing the color space to the most common colors in the image: The chosen colors are specified in the color palette in the header of the compressed image, and each pixel then just references the index of a color in that palette. This technique can be combined with dithering to avoid posterization; a sketch follows.
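A sketch of this palette technique, mapping each pixel to the index of the nearest color in a small palette; the three-color palette is arbitrary and no dithering is applied.

```java
import java.awt.Color;

// Map each pixel to the index of the nearest palette color (no dithering).
public class PaletteDemo {
    static int nearestIndex(Color pixel, Color[] palette) {
        int best = 0;
        long bestDistance = Long.MAX_VALUE;
        for (int i = 0; i < palette.length; i++) {
            long dr = pixel.getRed() - palette[i].getRed();
            long dg = pixel.getGreen() - palette[i].getGreen();
            long db = pixel.getBlue() - palette[i].getBlue();
            long distance = dr * dr + dg * dg + db * db;   // squared RGB distance
            if (distance < bestDistance) {
                bestDistance = distance;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Color[] palette = { Color.BLACK, Color.WHITE, Color.RED };
        System.out.println(nearestIndex(new Color(200, 30, 20), palette)); // -> 2 (red)
    }
}
```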
Chroma subsampling: This exploits the fact that the human eye perceives spatial changes of brightness more sharply than changes of color, by averaging or dropping some of the chrominance information in the image.
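A sketch of 4:2:0-style subsampling on a single chroma plane, averaging each 2x2 block into one value while the luma plane would be kept at full resolution; even dimensions are assumed to keep the example short.

```java
// Average each 2x2 block of a chroma plane (4:2:0 style); width and height
// are assumed to be even to keep the sketch short.
public class ChromaSubsampleDemo {
    static int[][] subsample(int[][] chroma) {
        int h = chroma.length, w = chroma[0].length;
        int[][] out = new int[h / 2][w / 2];
        for (int y = 0; y < h; y += 2) {
            for (int x = 0; x < w; x += 2) {
                int sum = chroma[y][x] + chroma[y][x + 1]
                        + chroma[y + 1][x] + chroma[y + 1][x + 1];
                out[y / 2][x / 2] = sum / 4;   // one chroma sample per 2x2 block
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] cb = { {100, 102, 90, 92}, {104, 106, 94, 96} };
        int[][] reduced = subsample(cb);       // 2x4 plane becomes 1x2
        System.out.println(reduced[0][0] + ", " + reduced[0][1]); // 103, 93
    }
}
```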
Transform coding: This is the most commonly used method. In particular, a Fourier-related transform such as the Discrete Cosine Transform (DCT) is widely used [3]. The DCT is sometimes referred to as "DCT-II" in the context of a family of discrete cosine transforms. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.
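The DCT-II can be computed directly from its definition, as in the naive O(N^2) sketch below for one row of samples; real encoders use fast 8x8 implementations and follow the transform with quantization and entropy coding.

```java
// Naive DCT-II of one row of samples: X[k] = sum_n x[n] * cos(pi/N * (n + 1/2) * k).
public class DctDemo {
    static double[] dct(double[] x) {
        int n = x.length;
        double[] out = new double[n];
        for (int k = 0; k < n; k++) {
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                sum += x[i] * Math.cos(Math.PI / n * (i + 0.5) * k);
            }
            out[k] = sum;
        }
        return out;
    }

    public static void main(String[] args) {
        // A smooth ramp concentrates its energy in the low-frequency coefficients.
        double[] row = {8, 16, 24, 32, 40, 48, 56, 64};
        for (double c : dct(row)) {
            System.out.printf("%8.2f%n", c);
        }
    }
}
```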
Fractal compression: Fractal compression is a lossy compression method for digital images based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image.
FILE FORMAT
The file format is the structure in which information is stored (encoded) in a computer file. File formats are designed to store particular types of data, for example JPEG and TIFF for image or raster data.
WORK DONE
In response to a requisition from the Botanical Survey of India (BSI), Ministry of Environment, Forest & Climate Change, Government of India, to prepare software that transforms 600 DPI TIFF images into 600 DPI and 300 DPI JPEG images, we have developed a software tool capable of converting images from the .tiff file format to the .jpg file format and setting them to 600 DPI and 300 DPI respectively.
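The converter's exact source is not reproduced in this paper; the following is a rough sketch of one way the TIFF-to-JPEG conversion and DPI setting can be performed with javax.imageio. The file names are placeholders, a TIFF-capable ImageIO plugin (Java 9+ or JAI Image I/O) is assumed to be available, and the standard metadata tree is assumed to be interpreted by the JPEG writer as pixels per millimetre.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageTypeSpecifier;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.metadata.IIOMetadataNode;
import javax.imageio.stream.ImageOutputStream;

public class TiffToJpegSketch {

    public static void main(String[] args) throws IOException {
        // Placeholder file names; a TIFF reader plugin must be on the class path.
        BufferedImage image = ImageIO.read(new File("input.tif"));
        writeJpegWithDpi(image, new File("output-300dpi.jpg"), 300);
        writeJpegWithDpi(image, new File("output-600dpi.jpg"), 600);
    }

    static void writeJpegWithDpi(BufferedImage image, File out, int dpi) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        IIOMetadata metadata = writer.getDefaultImageMetadata(
                ImageTypeSpecifier.createFromRenderedImage(image), param);

        // Resolution in the standard metadata tree, assumed here to be pixels per millimetre.
        double pixelsPerMm = dpi / 25.4;
        IIOMetadataNode horizontal = new IIOMetadataNode("HorizontalPixelSize");
        horizontal.setAttribute("value", Double.toString(pixelsPerMm));
        IIOMetadataNode vertical = new IIOMetadataNode("VerticalPixelSize");
        vertical.setAttribute("value", Double.toString(pixelsPerMm));

        IIOMetadataNode dimension = new IIOMetadataNode("Dimension");
        dimension.appendChild(horizontal);
        dimension.appendChild(vertical);
        IIOMetadataNode root = new IIOMetadataNode("javax_imageio_1.0");
        root.appendChild(dimension);
        metadata.mergeTree("javax_imageio_1.0", root);

        try (ImageOutputStream stream = ImageIO.createImageOutputStream(out)) {
            writer.setOutput(stream);
            writer.write(null, new IIOImage(image, null, metadata), param);
        } finally {
            writer.dispose();
        }
    }
}
```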
RESULT
On implementing the above method, we obtained the following results.
Table 4.1. CONVERTING .TIFF TO 300 DPI .JPEG IMAGE

| SIZE | TIME | STATUS |
|---|---|---|
| 2.58 MB | 1.5 s | Successful |
| 125 KB | 5.8 s | Successful |
| 75 KB | 2.3 s | Successful |
| 10.3 KB | 1.25 s | Successful |
| 5.2 MB | 2.5 s | Successful |
| 3.7 MB | 1.7 s | Successful |
| 250 KB | 1.5 s | Successful |
Table 4.2. CONVERTING .TIFF TO 600 DPI .JPEG IMAGE

| SIZE | TIME | STATUS |
|---|---|---|
| 2.58 MB | 1.0 s | Successful |
| 125 KB | 6.2 s | Successful |
| 75 KB | 4.3 s | Successful |
| 10.3 KB | 1.35 s | Successful |
| 5.2 MB | 2.7 s | Successful |
| 3.7 MB | 1.9 s | Successful |
| 250 KB | 1.4 s | Successful |
CONCLUSION
The .tiff to .jpeg converter and DPI setter uses an innovative approach to change the DPI of an image and simultaneously convert it into a .jpeg image. It is fast compared with its online counterparts and easy to use.
It is developed in the Java environment and uses the following libraries to achieve the desired result.
Table 4.3. Imported libraries
import com.sun.media.jai.codec.FileSeekableStream;
import com.sun.media.jai.codec.ImageCodec;
import com.sun.media.jai.codec.ImageDecoder;
import com.sun.media.jai.codec.ImageEncoder;
import com.sun.media.jai.codec.JPEGEncodeParam;
import com.sun.media.jai.codec.SeekableStream;
import com.sun.media.jai.codec.TIFFDecodeParam;
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.awt.image.RenderedImage;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import java.util.Iterator;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.imageio.ImageIO;
import javax.imageio.ImageTypeSpecifier;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.metadata.IIOInvalidTreeException;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.metadata.IIOMetadataNode;
import javax.imageio.stream.ImageOutputStream;
import static javax.print.attribute.ResolutionSyntax.DPI;
import javax.swing.JFileChooser;
import javax.swing.JOptionPane;
import javax.swing.filechooser.FileNameExtensionFilter;
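As an illustration of how the com.sun.media.jai.codec classes listed above fit together, the following rough sketch decodes a TIFF with JAI and re-encodes it as JPEG; the file names are placeholders, and the actual converter's quality settings, DPI handling, and error handling are not shown.

```java
import com.sun.media.jai.codec.FileSeekableStream;
import com.sun.media.jai.codec.ImageCodec;
import com.sun.media.jai.codec.ImageDecoder;
import com.sun.media.jai.codec.ImageEncoder;
import com.sun.media.jai.codec.JPEGEncodeParam;
import com.sun.media.jai.codec.SeekableStream;
import com.sun.media.jai.codec.TIFFDecodeParam;
import java.awt.image.RenderedImage;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class JaiTiffToJpegSketch {
    public static void main(String[] args) throws IOException {
        // Decode the first page of the TIFF (file name is a placeholder).
        SeekableStream input = new FileSeekableStream(new File("input.tif"));
        TIFFDecodeParam decodeParam = null;   // defaults suffice for single-page TIFFs
        ImageDecoder decoder = ImageCodec.createImageDecoder("tiff", input, decodeParam);
        RenderedImage page = decoder.decodeAsRenderedImage(0);

        // Re-encode as JPEG; DPI metadata would be written in a separate step.
        try (FileOutputStream output = new FileOutputStream(new File("output.jpg"))) {
            JPEGEncodeParam encodeParam = new JPEGEncodeParam();
            ImageEncoder encoder = ImageCodec.createImageEncoder("jpeg", output, encodeParam);
            encoder.encode(page);
        }
        input.close();
    }
}
```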
6. REFERENCES
[1] Data electronically accessed at https://docs.gimp.org/en/gimp-image-properties.html, 04/05/2017, 02:42 am.
[2] Data electronically accessed at https://en.wikipedia.org/wiki/Image_compression, 04/05/2017, 22:15.
[3] N. Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine Transform," IEEE Trans. Computers, pp. 90-93, Jan. 1974.
[4] Data electronically accessed at http://einstein.informatik.uni-oldenburg.de/rechnernetze/dpcm.htm, 04/07/2017, 20:12.
[5] D. Suarjaya and I. Made Agus, "A New Algorithm for Data Compression Optimization," International Journal of Advanced Computer Science & Applications, vol. 3, no. 8, 2012.