Augmented Reality Application for Mobile Phones

DOI : 10.17577/IJERTV3IS030672


Mayank Krishnatre, Punyatoya Soumya Darshinee, Meenu Kumari
Guide: Prof. G. M. Walunjkar, Assistant Professor
Department of Information Technology, Army Institute of Technology, India

Abstract: In today's world, mobile applications are triggering a fundamental shift in the way people experience computing and use mobile devices. The explosive growth of Android phones over the last three years has facilitated the development of hundreds of thousands of mobile applications. The overall goal of this project is to develop an Android-based augmented reality (AR) mobile application through which images and text can be placed on top of objects in the phone's camera view, based on information retrieved from a database after image/face recognition. Augmented reality has been a heavily researched topic in software development for many years. It is a technology that uses computer-vision-based recognition algorithms to augment real-world objects with graphics, sound, video, and other sensor-based input, using the camera of the device. The AR implementation contains two parts: the live data we are augmenting and the metadata used for the augmentation. Our application simply captures an image or a face from the real world while the camera is pointed at a person or object; the underlying platform detects and recognizes it, retrieves the related information from the database, and overlays it on the camera screen.

I. INTRODUCTION

NEED

The cameras on mobile phones have untapped potential as input devices. Mobile phones are becoming popular all around the world and, through convergence with digital cameras, music players, and PDAs, they are set to become the mobile computing platform of choice for most people. This computing potential, however, remains largely untapped: continuously improving communication abilities, practicality due to their small size, and their ubiquitous nature make mobile devices an ideal platform for conveying custom-tailored, context-based information to the user. Implementing such technology on limited devices like mobile phones poses a number of challenges that differ from those addressed in mainstream research, including limited computational resources with little possibility of upgrading.

We propose a system that provides the user with information about an image on the go, using the phone's camera as an input device and a novel kind of user interface called augmented reality. Augmented reality has been a heavily researched topic in software development for many years. The real world is observed with a camera and is augmented with virtual objects, which are spatially registered in the scene. It is one of the best ways to gather real-world information and present it interactively, and it allows virtual elements to become part of the real world around us. The need is to make image/face recognition on mobile phones more interactive, which can be achieved through augmented reality. Recently, mobile phones with cameras have become attractive as inexpensive AR devices. Information not only follows a person but also her very gaze: looking at an object is enough to retrieve and display relevant information, amplifying her intelligence. By mixing the real and virtual worlds, augmented reality is attracting a great deal of attention and research from science communities all over the world and is seen as one of the best ways to visualize context-related information.

APPLICATIONS:

INTERIOR DESIGN

One application of AR is in interior design. It allows you to see how furniture fits into a room before buying it. Another use case is planning a rearrangement of furniture: instead of moving the furniture around, you can place markers at the desired destinations and immediately see how the arrangement looks.

PRODUCT INFORMATION

Lego started to equip some of their stores with an AR terminal. It allows customers to see what is inside the box. To do so, they present the box to the webcam, and the terminal determines what is inside by analyzing the marker on it. A complete 3D model of the assembled vehicle is then shown on the display, on top of the box itself.

MEDICINE

There are many ways in which AR can be applied in medicine. For example, Computed Tomography (CT) is widespread today. During this process, a 3D model of body parts is created. AR makes it possible to display those models directly where they belong.

INDUSTRY

An application for factory planning and design has been developed for Volkswagen and was already used in the planning process of different industrial environments. Another imaginable application would be the annotation of parts for a repairman.

ENTERTAINMENT

AR can be found in the entertainment sector, too. Various applications have been developed during research projects, including a racing game, a tennis game, and a train game. The train game displays a virtual train on a wooden track, and the application itself runs on a PDA.

DESIGN INTERFACE:

FACE RECOGNITION

IMAGE RECOGNITION

Two feature-based image recognition algorithms are in common use: SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features). SIFT uses features based on the appearance of an object at certain interest points. SURF is based on SIFT, but the algorithm is faster and provides better invariance to image transformations; it detects key points in images using a Hessian matrix. Generalized color moments combine the pixel coordinates and the color intensities, and in this way give information about the spatial layout of the colors. Color moment invariants are picture features, based on generalized color moments, that are designed to be invariant to viewpoint and illumination.
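As a rough sketch of such a feature-based matching pipeline, the snippet below uses OpenCV's Java bindings. ORB is used here purely as a freely available stand-in for SIFT/SURF (SURF sits in OpenCV's non-free module); the class name, file paths, and the distance threshold are illustrative assumptions, not part of the original application.

```java
import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

public class FeatureMatchSketch {

    // Compares a captured image against one stored reference image and returns
    // the number of "good" descriptor matches; the reference with the highest
    // score would be taken as the recognized object.
    // (The OpenCV native library must be loaded first, e.g. via OpenCVLoader on Android.)
    static int matchScore(String capturedPath, String referencePath) {
        Mat captured = Imgcodecs.imread(capturedPath, Imgcodecs.IMREAD_GRAYSCALE);
        Mat reference = Imgcodecs.imread(referencePath, Imgcodecs.IMREAD_GRAYSCALE);

        // ORB: keypoint detector + binary descriptor, used in place of SIFT/SURF.
        ORB orb = ORB.create();
        MatOfKeyPoint kp1 = new MatOfKeyPoint();
        MatOfKeyPoint kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat();
        Mat desc2 = new Mat();
        orb.detectAndCompute(captured, new Mat(), kp1, desc1);
        orb.detectAndCompute(reference, new Mat(), kp2, desc2);

        // Brute-force Hamming matching suits ORB's binary descriptors.
        DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        int good = 0;
        for (DMatch m : matches.toArray()) {
            if (m.distance < 40) {   // crude, illustrative distance threshold
                good++;
            }
        }
        return good;
    }
}
```

In practice this score would be computed against every stored reference image, and the highest-scoring one taken as the match.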

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. The library contains more than 2,500 optimized algorithms, covering both classic and state-of-the-art computer vision and machine learning techniques.
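On the face recognition side, the sketch below shows how a face could be detected and a caption overlaid on the camera frame using OpenCV's Java bindings. It assumes a Haar cascade file such as haarcascade_frontalface_default.xml is available on the device and that the camera frame has already been decoded into a Mat; it covers detection and overlay drawing only, not the recognition step itself.

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class FaceOverlaySketch {

    private final CascadeClassifier faceDetector;

    public FaceOverlaySketch(String cascadePath) {
        // e.g. a copy of haarcascade_frontalface_default.xml stored on the device
        faceDetector = new CascadeClassifier(cascadePath);
    }

    // Detects faces in the frame and overlays a caption above each one.
    public void annotate(Mat frame, String caption) {
        MatOfRect faces = new MatOfRect();
        faceDetector.detectMultiScale(frame, faces);

        for (Rect face : faces.toArray()) {
            // Box around the detected face.
            Imgproc.rectangle(frame, face.tl(), face.br(), new Scalar(0, 255, 0), 2);
            // Retrieved information drawn just above the box.
            Imgproc.putText(frame, caption, new Point(face.x, face.y - 10),
                    Imgproc.FONT_HERSHEY_SIMPLEX, 0.8, new Scalar(0, 255, 0), 2);
        }
    }
}
```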

DATABASE

A NoSQL database provides mechanisms for storing, retrieving, updating, and deleting data using consistency models that are less constrained than those of traditional relational databases. This approach favours horizontal scaling, availability, simplicity of design, and finer control. The goal of NoSQL databases is to bring significant performance benefits in terms of throughput and latency. NoSQL databases are intended for simple retrieval and append operations and use highly optimized key-value stores.
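As a purely hypothetical illustration of this key-value style, recognized image or face identifiers could map directly to the text that is overlaid on the camera view; the in-memory map below merely stands in for whatever store is actually used.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key-value facade: keys are image/face identifiers,
// values are the text to overlay on the camera view.
public class AnnotationStore {

    private final Map<String, String> store = new HashMap<>();

    public void put(String imageId, String info) {
        store.put(imageId, info);     // append/update operation
    }

    public String get(String imageId) {
        return store.get(imageId);    // simple retrieval by key
    }

    public void remove(String imageId) {
        store.remove(imageId);        // delete when no information was supplied
    }
}
```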

IMPLEMENTATION

After focusing the camera on the object, we capture an image of it. The image is stored in the internal memory of the Android phone, and its path is stored in the SQLite database. Our image recognition algorithm then retrieves this image and compares it against all the available images in the database one by one. As soon as it finds a match, it retrieves all the information related to that image from the database and displays it on the user's screen. If the application finds that the captured image is new and no information about it is available in the database, it asks the user to enter information about the new image. If the user enters information, it is stored along with the image in the database; if the user does not enter any information, the image is deleted from the database. Another feature we have added is location awareness: when GPS detects that the phone is at a place where the user has previously interacted with friends, the application displays information about all the people the user met during the last stay at that place.
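A condensed sketch, in Android's SQLite API, of how the image path and its associated information could be stored and looked up along the lines described above; the database, table, and column names are illustrative assumptions rather than the actual schema.

```java
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class ImageInfoDb extends SQLiteOpenHelper {

    public ImageInfoDb(Context context) {
        super(context, "ar_images.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // One row per captured image: where it is stored and what we know about it.
        db.execSQL("CREATE TABLE images (path TEXT PRIMARY KEY, info TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS images");
        onCreate(db);
    }

    // Called when the user supplies information for a newly captured image.
    public void saveInfo(String imagePath, String info) {
        ContentValues values = new ContentValues();
        values.put("path", imagePath);
        values.put("info", info);
        getWritableDatabase().insertWithOnConflict("images", null, values,
                SQLiteDatabase.CONFLICT_REPLACE);
    }

    // Looks up the stored information once the recognizer has matched an image.
    public String findInfo(String imagePath) {
        Cursor c = getReadableDatabase().query("images", new String[]{"info"},
                "path = ?", new String[]{imagePath}, null, null, null);
        String info = c.moveToFirst() ? c.getString(0) : null;
        c.close();
        return info;
    }
}
```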

CONCLUSION

This is how we implemented an augmented reality application for Android phones and used it as a way to learn information about people very quickly. It also helped to quickly retrieve information about people associated with a particular place using GPS. This application is therefore an effective way to socialize with the friends around us.

CHALLENGES FACED

We faced two major problems in implementing this design. First, we had to choose a retrieval solution that would fetch images from the database as quickly as possible. Second, the face recognition algorithm had to recognize images under different lighting conditions. Despite these problems, we were able to run our application smoothly and detect faces efficiently.

ACKNOWLEDGEMENT

At the outset we would like to thank our Head of Department, Prof. Sangita Jadhav, for giving us the opportunity to go ahead with the research work related to this topic. We take this opportunity to express our profound gratitude and deep regards to our guide, Prof. G. M. Walunjkar, for his exemplary guidance, monitoring, and constant encouragement throughout the course of this research.

REFERENCES

  1. Ronald Azuma. A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(4):355–385, 1997.

  2. Arno Becker and Marcus Pant. Android – Grundlagen und Programmierung. dpunkt.verlag, Heidelberg, 2009.

  3. Dan Bornstein. Dalvik VM internals. Presentation, Google Inc.

  4. Patrick Brady. Anatomy & physiology of an Android. Presentation, Google Inc., May 2008. URL http://sites.google.com/site/io/anatomy-physiology-of-an-android.

  5. Tim Bray. On Android compatibility.

  6. Andrew I. Comport, Éric Marchand, and François Chaumette. A real-time tracker for markerless augmented reality.

  7. J. Fischer. Rendering Methods for Augmented Reality. Dissertation, University of Tübingen.

  8. Anders Henrysson, Mark Billinghurst, and Mark Ollila. Face to face collaborative AR on mobile phones.

  9. Wolfgang Höhl. Interactive Environments with Open-Source Software.

  10. Google Inc. Android compatibility definition: Android 1.6.

  11. Google Inc. Android 2.1 compatibility definition.

  12. Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. Augmented reality: A class of displays on the reality-virtuality continuum.

  13. Daniel Wagner and Dieter Schmalstieg. Making augmented reality practical on mobile phones, part 2.

  14. Richard S. Wright, Benjamin Lipchak, and Nicholas Haemel. OpenGL SuperBible: Comprehensive Tutorial and Reference.
