A Survey on Portable Virtual Keyboard Based on Threshold and Perspective Transform

DOI: 10.17577/IJERTV3IS21320




Priyanka1, I. Kogila2, M. Mohanapriya3
Department of Information Technology, Manakula Vinayagar Institute of Technology, Puducherry

Vijiya Kumar4
Assistant Professor, Department of Information Technology, Manakula Vinayagar Institute of Technology, Puducherry

Abstract: With technology evolving rapidly, devices are becoming ever more compact, and users want a simple way to enter text. A standard keyboard is difficult to carry and inconvenient to use on the move. Some existing virtual keyboards rely on CMOS sensing, on vibrations measured with dedicated software, or on additional hardware, but they are inefficient at locating the exact touch position. To solve this problem, we turn to an image processing technique that needs only a camera. In this paper, we show how a sheet of paper can be used as an efficient input device. The process runs through a series of phases, namely absolute difference, thresholding, contour extraction and perspective transformation, which makes our proposal more efficient.

Keywords: absolute difference, threshold, contour, perspective transformation.

1. INTRODUCTION

Today's world is in a race of miniaturization: cell phones, PDAs, pocket PCs and similar devices keep getting smaller day by day, but our hands and fingers cannot do the same. For this problem we have come forward with a practical solution. In this paper we describe an emerging technology that replaces bulky keyboards with virtual paper keyboards. These keyboards are based on images captured by a camera. The camera continuously captures images of the region where the printed paper keyboard is placed and checks these images for finger placements; image processing algorithms recognize the finger and its position, which maps directly to a key. The camera watches the finger movements and translates them into keystrokes on the device, which can be a palm PC, mobile phone, PDA, etc. The paper keyboard can be placed on any flat surface, such as a desktop, an airplane tray table or a kitchen counter, and can in principle be interfaced with any computing device that requires text entry.

2. RELATED WORK

A prototype app made for the iPhone 4 allows users to type on a keyboard made of paper using vibrations. Created by Florian Kräutli, a university student in London, the "Vibrative" app makes use of the iPhone's built-in accelerometer, reading vibrations as a finger taps a surface to work out which key is being pressed. According to the Daily Mail, the software works out the approximate location of a strike on a paper keyboard by analyzing the strength and frequency of tremors through the surface the iPhone is resting on. Currently it is only compatible with "jailbroken" iPhones, but it could work on other phones too.

      "Touch-screen devices, such as smart phones, lack a suitable method for text input which can compete with mechanical keyboards," Kräutli said in a statement on Goldsmith's website. "The Vibrative Virtual Keyboard aims to appease the frustration felt by smart phone users when faced with a drafting lengthy emails or notes on a small on- screen keyboard.The keyboard requires no additional hardware as it taps into an iPhone's built-in accelerometer, which is able to measure the vibrations caused by typing on any hard surface." Although not 100 per cent accurate every time a letter is struck on a makeshift paper keyboard, the

      app makes use of auto correct and can also be trained to work better if a user devotes some time to giving it intelligence.

A video of the concept uploaded to the video-sharing website Vimeo has been viewed 307,000 times. It is not the first concept Kräutli has worked on; his Vimeo page shows he has also worked on two other projects in the last year, one a deaf-blind robot drummer and the other a "human antenna". He developed the Vibrative app for his master's degree in cognitive computing. The drawback of this approach is that the vibrations have to be recorded for training each time, which takes time, and finger-touch vibrations vary from user to user.

Miroslav Hagara and Jozef Pucik proposed fingertip detection for a camera-based virtual keyboard. They introduced three methods for fingertip localization that have been used for camera-based virtual keyboards. The first algorithm is based on finding local maxima of the finger contours according to certain criteria. The advantage of this method is that it does not require a traced contour, which is sometimes quite difficult to determine correctly. Its disadvantage is that it does not localize the real fingertip but the finger point closest to the camera.
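As a generic illustration only (the exact criteria used by Hagara and Pucik are not given here), the following Python/OpenCV sketch marks contour points that are local extrema of the vertical coordinate along a hand contour, i.e. candidate finger points closest to the camera. The input file name, the step size k and the extremum test are assumptions made purely for this sketch.

import cv2

# Placeholder input: a binary mask of the hand region (assumed to exist).
binary = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)

# OpenCV 4 is assumed: findContours returns (contours, hierarchy).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
hand = max(contours, key=cv2.contourArea)

ys = hand[:, 0, 1]          # vertical coordinate of every contour point
k = 15                      # assumed neighbourhood step along the contour

# Candidate finger points: local minima of y compared with neighbours
# k steps away along the contour (local extrema of the finger contour).
candidates = [tuple(hand[i, 0]) for i in range(len(hand))
              if ys[i] < ys[i - k] and ys[i] < ys[(i + k) % len(hand)]]
print("candidate finger points:", candidates)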


3. ARCHITECTURE

Figure 1.1: System architecture. Video is captured and the current frame is compared with the previous frame for removal of noise (absolute difference), followed by image separation (threshold), extraction (contour) and location detection (perspective transform), which finally triggers a mouse event.

4. PROPOSED SOLUTION

This section describes in detail the implementation of the proposed solution shown in Figure 1.1. It is divided into four phases: (A) removal of noise, (B) image separation, (C) extraction and (D) location detection. The virtual keyboard is observed by a camera whose video consists of frames. The images taken from these frames are processed through the four phases to locate the point where a touch is made. The first step is to remove all noise, which is achieved in phase A; phases B, C and D then follow, by which the exact location of the touch is found.
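To show how the four phases fit together, a minimal Python/OpenCV sketch of the whole pipeline is given below. The function name locate_touch, the parameter values and the assumption that a homography from camera coordinates to the key-layout plane is already available are ours, not the paper's; each phase is illustrated in more detail in the subsections that follow.

import cv2
import numpy as np

def locate_touch(prev_gray, curr_gray, homography):
    # A. Removal of noise: absolute difference of consecutive frames.
    diff = cv2.absdiff(prev_gray, curr_gray)
    # B. Image separation: adaptive thresholding of the difference image.
    binary = cv2.adaptiveThreshold(diff, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    # C. Extraction: contour of the moving finger region (OpenCV 4 assumed).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    finger = max(contours, key=cv2.contourArea)
    tip = finger[finger[:, :, 1].argmin()][0].astype(np.float32)
    # D. Location detection: map the fingertip into the key-layout plane.
    point = cv2.perspectiveTransform(tip.reshape(1, 1, 2), homography)
    return tuple(point[0][0])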

A. Removal of noise

In the first phase, the noise present in the frames has to be removed. This can be achieved with the absolute difference.

The absolute difference (AD) is an algorithm for measuring the similarity between image blocks. It takes the absolute difference between each pixel in the original block and the corresponding pixel in the block being used for comparison. These differences can be summed to create a simple metric of block similarity.

In MATLAB:

Z = imabsdiff(X, Y)

subtracts each element of array Y from the corresponding element of array X and returns the absolute difference in the corresponding element of the output array Z.

In OpenCV:

absdiff(InputArray src1, InputArray src2, OutputArray dst)

Parameters:

src1: first input array or a scalar.
src2: second input array or a scalar.
src: single input array.
value: scalar value.
dst: output array that has the same size and type as the input arrays.

      Figure 1.2
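A minimal sketch of this phase in Python/OpenCV, assuming a webcam delivers the frames of Figure 1.1; the display loop is only there so the difference image can be inspected and is not part of the method itself.

import cv2

# Open the default camera; each grabbed frame is one "current frame".
cap = cv2.VideoCapture(0)

ret, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Phase A: pixel-wise absolute difference suppresses the static
    # background (the printed keyboard) and keeps the moving finger.
    diff = cv2.absdiff(prev_gray, gray)

    cv2.imshow("absolute difference", diff)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()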

B. Image separation

Once the noise has been removed, the frame is clearer and the touch is easier to detect. The next phase is to trace out the finger in the frame. Thresholding is one of the important techniques in image segmentation. It can be expressed as:

T = T[x, y, p(x, y), f(x, y)]

where T is the threshold value, (x, y) are the coordinates of a point in the image, f(x, y) is the gray level of the pixel at (x, y), and p(x, y) denotes some local property of that point, for example the average gray level of its neighbourhood.

In OpenCV, an adaptively thresholded image can be computed with:

adaptiveThreshold(InputArray src, OutputArray dst, double maxValue, int adaptiveMethod, int thresholdType, int blockSize, double C)

Parameters:

src: source 8-bit single-channel image.
dst: destination image of the same size and the same type as src.
maxValue: non-zero value assigned to the pixels for which the condition is satisfied.
adaptiveMethod: adaptive thresholding algorithm to use, either ADAPTIVE_THRESH_MEAN_C or ADAPTIVE_THRESH_GAUSSIAN_C.
thresholdType: thresholding type, which must be either THRESH_BINARY or THRESH_BINARY_INV.
blockSize: size of the pixel neighborhood used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
C: constant subtracted from the mean or weighted mean. Normally it is positive, but it may be zero or negative as well.
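A minimal Python/OpenCV sketch of this phase, assuming the difference image from phase A has been saved to a placeholder file; the blockSize and C values are illustrative, not values prescribed by the paper.

import cv2

# 'diff' is assumed to be the 8-bit single-channel difference image
# produced by the absolute-difference step (phase A).
diff = cv2.imread("diff_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Phase B: adaptive thresholding separates the finger from the background
# using a Gaussian-weighted local mean.
binary = cv2.adaptiveThreshold(diff, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY,
                               11,   # blockSize: odd neighbourhood size
                               2)    # C: constant subtracted from the mean

cv2.imwrite("binary_frame.png", binary)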

The edge maximization technique (EMT) offers an alternative way of choosing the threshold. It is used when there is more than one homogeneous region in the image, or when the illumination changes between the object and its background. This segmentation technique searches for the maximum edge threshold in the image and then segments the image with the help of edge detection operators.

C. Extraction

Among the edge detection methods proposed so far, the contour technique is the most rigorously defined and is widely used. Its popularity can be attributed to its optimality with respect to three criteria: good detection, good localization, and a single response per edge. The finger's discrete outline is converted into a list of consecutive coordinates representing the contour of the finger.

Figure 1.3

Figure 1.4
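A minimal Python/OpenCV sketch of this phase, assuming the binary image from phase B; the fingertip heuristic (topmost point of the largest contour) is our own illustrative assumption, not a rule given in the paper.

import cv2

binary = cv2.imread("binary_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Phase C: extract the contours of the thresholded finger region
# (OpenCV 4 assumed: findContours returns (contours, hierarchy)).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if contours:
    # Keep the largest contour, assumed to be the finger.
    finger = max(contours, key=cv2.contourArea)

    # Illustrative fingertip estimate: the contour point with the
    # smallest y coordinate (closest to the top of the frame).
    fingertip = tuple(finger[finger[:, :, 1].argmin()][0])
    print("fingertip at", fingertip)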

D. Location Detection

Perspective transformation produces perspective by viewing the 3-D space from an arbitrary eye point; in the DCL library it refers to the transformation from 3-D view coordinates to 2-D rendering coordinates. ImagePerspectiveTransformation[image, m] applies a linear fractional transform specified by a matrix m to the position of each pixel in the image, and ImagePerspectiveTransformation[image, m, size] gives an image of the specified size.

In OpenCV, a perspective transform is calculated from pairs of corresponding points with:

getPerspectiveTransform(InputArray src, InputArray dst)

When the human eye views a scene, objects in the distance appear smaller than objects close by; this is known as perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.

Figure 1.5
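A minimal Python/OpenCV sketch of this phase; the four keyboard corner points and the fingertip coordinates are hypothetical values standing in for the paper keyboard's corners in the captured frame and the point found in phase C.

import cv2
import numpy as np

# Hypothetical corners of the paper keyboard as seen by the camera.
src_corners = np.float32([[112, 84], [530, 96], [560, 410], [90, 400]])

# Corners of the rectified keyboard plane (e.g. a 400 x 150 layout).
dst_corners = np.float32([[0, 0], [400, 0], [400, 150], [0, 150]])

# Phase D: perspective transform computed from corresponding points.
M = cv2.getPerspectiveTransform(src_corners, dst_corners)

# Map the fingertip found in phase C into keyboard coordinates.
fingertip = np.float32([[[315, 240]]])            # hypothetical fingertip
key_point = cv2.perspectiveTransform(fingertip, M)
print("touch location on keyboard plane:", key_point[0][0])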

5. FUTURE WORK

In the future, input to the keyboard could be given without the use of any printed template. For example, two coins could be placed at the two ends of the typing area to mark the boundary of the keyboard. The camera captures this boundary as input to the system, after which the coins can be removed and the virtual keyboard layout displayed on the device. Typing could then be done without any physical object or special surface material.

6. CONCLUSION

The implementation cost is very low, and green computing can be achieved. Since paper is portable, the user can carry the keyboard anywhere and use it at any time. It reduces the space needed by a conventional keyboard and mouse and can be used with any device. It also removes the cost of buying a keyboard and mouse, and avoids the need to manufacture them.

7. REFERENCES

1. Erez Posner, Nick Starzicki, Eyal Katz, A Single Camera Based Floating Virtual Keyboard with Improved Touch Detection, IEEE 27th Convention of Electrical and Electronics Engineers in Israel, 2012.

2. Miroslav Hagara, Jozef Pucik, Fingertip Detection for Virtual Keyboard Based on Camera, 23rd Conference Radioelektronika 2013, April 16-17, Pardubice, Czech Republic.

3. Hafiz Adnan Habib, Muid Mufti, Real Time Mono Vision Gesture Based Virtual Keyboard System, IEEE Transactions on Consumer Electronics, Vol. 52, No. 4, November 2006.

4. Miroslav Hagara, Jozef Pucik, Peter Kulla, Specification of Camera Parameters for Virtual Keyboard, 23rd Conference Radioelektronika 2013, April 16-17, Pardubice, Czech Republic.

5. Heiko Hübert, Benno Stabernack, Frederik Zilly, Architecture of a Low Latency Image Rectification Engine for Stereoscopic 3-D HDTV Processing, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 23, No. 5, May 2013.

6. Silvia Valero, Philippe Salembier, Jocelyn Chanussot, Hyperspectral Image Representation and Processing With Binary Partition Trees, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 22, No. 4, April 2013.

7. W. Jendernalik, G. Blakiewicz, J. Jakusz, S. Szczepaski, R. Piotrowski, An Analog Sub-Milliwatt CMOS Image Sensor With Pixel-Level Convolution Processing, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 60, No. 2, February 2013.

8. Hafeez-Ur-R. Siddiqui, Stephen R. Alty, Michelle Spruce, Sandra E. Dudley, Automated Peripheral Neuropathy Assessment of Diabetic Patients Using Optical Imaging and Binary Processing Techniques, 2013 IEEE Point-of-Care Healthcare Technologies (PHT), Bangalore, India, January 16-18, 2013.

9. Santhosh K V, Tamal Dutta, Measurement of Elasticity Modulus Using Image Processing, 2013 International Conference on Computer Communication and Informatics (ICCCI-2013), January 4-6, 2013, Coimbatore.
