- Open Access
- Authors : Smitha. V, Suhasini.E.R
- Paper ID : IJERTCONV2IS13009
- Volume & Issue : NCRTS – 2014 (Volume 2 – Issue 13)
- Published (First Online): 30-07-2018
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Design and Implementation of Human Computer Interface for Smart Home Using Kinect Sensor
Smitha. V
M. Tech 2nd Year
The Oxford College of Engineering, Bangalore, Karnataka, India
smitha.chaitra@gmail.com

Suhasini. E. R
Assistant Professor
The Oxford College of Engineering, Bangalore, Karnataka, India
suhasiner@gmail.com
Abstract. Smart home technologies help elderly people and patients complete activities of daily living independently while saving time, money, and effort. The objective of this project was to design and implement an automated tracking and user identification system for disabled people, for use in a smart home environment. Most existing smart home systems require users either to carry some sort of object that the house can identify or to provide some form of identification when they issue a command. Our project seeks to eliminate these inconveniences and enable users to issue commands to their smart home through simple gesture movements. Microsoft's Kinect sensor unit was chosen as our design platform for its functionality and ease of use.
Keywords: Kinect, smart home, gesture.
INTRODUCTION
Monitoring and caring for elderly or disabled people at home is a major concern for working families, and the classical idea of an accompanying nurse may interfere with patient privacy. On the economic front, monitoring an elderly or disabled person while keeping him in his natural environment would reduce the cost of assistance compared to specialized staff at a hospital or a nursing home. Some solutions, such as biometric sensors that can be placed on the patient's body to monitor his medical state, are commonly used. In many cases, these devices must be attached to the body; they can disturb patients and limit their mobility as well as their comfort.
Our solution to the above problem is a smart home: the system detects the patient at home and recognizes his gestures in order to react to his needs, such as turning the lights or fans on or off.
The main contribution of this proposed work, compared with other proposals, is that the user himself uses his hands and body to control the intelligent monitoring system; the patient does not need remote-control switches or keywords to express his needs. This work focuses on the detection and recognition of gestures using the Kinect video sensor, which provides depth images. The system also includes devices to monitor the patient's health status, helping him not only function more independently but also receive feedback and communicate his activities over the network. Thus, the general idea is to design a smart environment which by itself controls and monitors the state of elderly or disabled persons.
KINECT SENSOR
The Microsoft Kinect sensor is a motion-sensing input device. Originally designed for the Microsoft Xbox 360 video game console, it allows devices to be controlled with the body itself, using hands or feet, so one can imagine what can be done with such features. The device consists of three entities: an RGB camera, a depth camera, and an infrared transmitter. First, the Kinect emits IR light. Reflections of the laser beam are generated across the field of depth (basically all the pixels covered by the Kinect), and the camera captures them in return; this technique is called light coding. Using a highly parallel SoC called the PS1080, the sensor takes the coded infrared (IR) light as input and produces a colour picture, an audio stream, and a depth image, all synchronized in VGA format. Of particular interest here is the depth camera: a CMOS sensor that captures the 3D scene, with the pixels of the depth image representing the coordinates (X, Y, Z) of the scene objects relative to the sensor. The depth image is insensitive to variations in brightness and contrast; indeed, it works at night thanks to the active infrared light, which makes it very useful for monitoring a person in the dark, since the light can be turned on by his gestures.
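As an illustration of how a depth pixel maps to the (X, Y, Z) coordinates just described, the short Python sketch below applies the standard pinhole camera model; the focal length and optical centre used here are typical published estimates for the Kinect depth camera, not values taken from this paper.

```python
# Approximate intrinsics for the Kinect depth camera (assumed values;
# a real deployment should calibrate the actual sensor).
FOCAL_LENGTH_PX = 575.8            # focal length in pixels (assumed estimate)
CENTER_X, CENTER_Y = 320.0, 240.0  # optical centre of the 640x480 depth image

def depth_pixel_to_xyz(u: int, v: int, depth_m: float) -> tuple[float, float, float]:
    """Convert a depth-image pixel (u, v) with depth in metres to camera-frame XYZ."""
    x = (u - CENTER_X) * depth_m / FOCAL_LENGTH_PX
    y = (v - CENTER_Y) * depth_m / FOCAL_LENGTH_PX
    return (x, y, depth_m)

# Example: a pixel near the image centre at 2 m depth lies close to the optical axis.
print(depth_pixel_to_xyz(330, 245, 2.0))
```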
Characteristics of the Kinect sensor:

| Property | Specification |
| --- | --- |
| View | 58° H, 45° V, 70° D |
| Depth image | VGA (640×480) |
| Spatial X/Y resolution | 3 mm |
| Depth Z resolution | 1 cm |
| Frame rate | 60 FPS |
| Colour image size | UXGA (1600×1200) |
| Data interface | USB 2.0 |
| Power consumption | 2.25 W |
[Block diagram: data processing by the Kinect. Depth image → skeleton point coordinates → control of the respective device]
After receiving the IR light reflected from the scene, the Kinect sensor sends data such as the depth image to the system, as shown in the block diagram above. From the depth image, the skeleton point coordinates are generated; these coordinates are sent to the system, which recognizes gestures and reacts by controlling the respective device.
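Read as a control loop, the block diagram amounts to the following minimal Python sketch; read_skeleton, recognize_gesture, and control_device are hypothetical placeholders standing in for the Kinect skeleton stream, the recognizers of the next section, and the hardware interface described later.

```python
import time

def read_skeleton():
    """Placeholder: return the latest skeleton joint coordinates from the
    Kinect, or None if no skeleton is tracked (assumed interface)."""
    return None

def recognize_gesture(skeleton):
    """Placeholder: map joint coordinates to a gesture name, or None."""
    return None

def control_device(gesture):
    """Placeholder: switch the device mapped to this gesture (e.g. light, fan)."""
    print(f"executing action for gesture: {gesture}")

def main_loop():
    # Depth image -> skeleton coordinates -> gesture -> device action.
    while True:
        skeleton = read_skeleton()
        if skeleton is not None:
            gesture = recognize_gesture(skeleton)
            if gesture is not None:
                control_device(gesture)
        time.sleep(1 / 30)  # skeleton stream runs at roughly 30 frames per second
```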
-
GESTURE RECOGNITION USING KINECT SENSOR
We first implemented a simple gesture recognition scheme based on hand positions. The Kinect sensor recognizes 20 joints of the human body. By calculating the coordinates (x, y, z) of each joint in space, we can determine the position of each part of the body relative to the head or spine joint. We programmed the sensor to recognize whether the hands are below or above the head/spine, and whether the patient is pointing to the right or to the left. We also calculated the coordinates of the right hand in every case. Some of the recognized postures are illustrated in the figure; a minimal sketch of this rule-based approach follows.
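In the Python sketch below, the joint names, the (x, y, z) tuple layout, and the pointing threshold are illustrative choices of ours, not the SDK's types; it assumes a coordinate frame in which y increases upwards.

```python
# Each joint is an (x, y, z) position in metres, derived from the depth image.

def classify_posture(joints: dict[str, tuple[float, float, float]]) -> str:
    """Classify a posture by comparing hand joints against the head/spine joints."""
    head_y = joints["head"][1]
    left_y = joints["hand_left"][1]
    right_y = joints["hand_right"][1]
    if left_y > head_y and right_y > head_y:
        return "both hands above head"
    if right_y > head_y:
        return "right hand above head"
    if left_y > head_y:
        return "left hand above head"
    # Pointing: compare hand x positions against the spine joint.
    spine_x = joints["spine"][0]
    if joints["hand_right"][0] - spine_x > 0.4:   # 0.4 m is an assumed threshold
        return "pointing right"
    if spine_x - joints["hand_left"][0] > 0.4:
        return "pointing left"
    return "idle"
```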
Through these examples, we see that the application can recognize many gestures made by the patient. In these images, the position is displayed correctly; it is calculated from the position of the hand in the three directions (x, y, z). We choose a threshold distance, so if the patient is not moving in the scene, the program stays in an idle mode. A push and pull gesture is detected when there is a 20 cm difference on the Z axis of the hands during 30 frames.
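The push and pull rule can be sketched as a small stateful detector; we read the text's criterion as a 20 cm difference between the hands' Z coordinates sustained for 30 consecutive frames, and the class below is illustrative.

```python
class PushPullDetector:
    """Detect a push/pull gesture: the hands' Z coordinates must differ by at
    least 20 cm for 30 consecutive frames (thresholds from the text)."""

    def __init__(self, z_threshold_m: float = 0.20, frames_required: int = 30):
        self.z_threshold_m = z_threshold_m
        self.frames_required = frames_required
        self.count = 0

    def update(self, left_hand_z: float, right_hand_z: float) -> bool:
        """Feed one frame of hand depths; return True when the gesture fires."""
        if abs(left_hand_z - right_hand_z) >= self.z_threshold_m:
            self.count += 1
        else:
            self.count = 0  # movement interrupted: fall back to the idle state
        if self.count >= self.frames_required:
            self.count = 0
            return True
        return False
```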
If we want to recognize other gestures that may be useful to our application, we can use either algorithmic or template-based approaches; the main challenge is to choose the most appropriate method. Algorithms such as the one dollar ($1) and N dollar ($N) recognizers can be implemented in any environment, even in a context of fast prototyping. A gesture made by a person is defined by a set of points, which is then compared to sets of points previously stored. We can guess very quickly that a simple point-by-point comparison is not sufficient to determine the best candidate among the stored sets. Two gestures made by the same person, but at different speeds and/or with different hardware, will not generate the same number of points, so the comparison cannot rest on this criterion alone. To this we can add the problems of orientation and scale of the gesture/figure, and the angle variations that result. This is why the $1 recognizer was constructed: it is insensitive to all these types of variation. The learning of a gesture is performed only once, which means a single pass suffices to create a template. It should also be noted that the $1 recognizer considers only "unistroke" movements, meaning a single figure formed by one continuous gesture, as opposed to "multistroke" movements.
As an example, the one dollar ($1) unistroke recognizer is defined in four steps. The first three steps are applied both when creating a template for the first time and when preprocessing a candidate gesture for comparison:
1. Resampling
2. Rotation based on the indicative angle
3. Scaling and translation
4. Getting the best score
This score is calculated using the equation:

score = 1 − d / (0.5 · √(size² + size²))

where d is the smallest average path distance between the candidate points and the template points over the searched rotation angles, and size is the side of the reference square to which all gestures are scaled in step 3, so that 0.5 · √(size² + size²) is half the diagonal of that square.
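To make these four steps concrete, here is a minimal Python sketch of a $1-style unistroke recognizer, following the published algorithm of Wobbrock et al.; as a simplification it sweeps a coarse set of rotation angles instead of the paper's golden-section search, and all function names are our own.

```python
import math

N_POINTS = 64          # resampling count suggested by the $1 paper
SQUARE_SIZE = 250.0    # side of the reference square

def resample(points, n=N_POINTS):
    """Step 1: resample the gesture path into n equidistant points."""
    pts = [tuple(p) for p in points]
    interval = sum(math.dist(pts[i-1], pts[i]) for i in range(1, len(pts))) / (n - 1)
    out, d, i = [pts[0]], 0.0, 1
    while i < len(pts):
        seg = math.dist(pts[i-1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i-1][0] + t * (pts[i][0] - pts[i-1][0]),
                 pts[i-1][1] + t * (pts[i][1] - pts[i-1][1]))
            out.append(q)
            pts.insert(i, q)      # q becomes the start of the next segment
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:           # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def centroid(pts):
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def rotate_by(pts, angle):
    cx, cy = centroid(pts)
    c, s = math.cos(angle), math.sin(angle)
    return [((x-cx)*c - (y-cy)*s + cx, (x-cx)*s + (y-cy)*c + cy) for x, y in pts]

def rotate_to_zero(pts):
    """Step 2: rotate so the first point lies at angle zero from the centroid."""
    cx, cy = centroid(pts)
    return rotate_by(pts, -math.atan2(pts[0][1] - cy, pts[0][0] - cx))

def scale_and_translate(pts, size=SQUARE_SIZE):
    """Step 3: scale to a size x size box and move the centroid to the origin."""
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    pts = [(x * size / w, y * size / h) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def preprocess(points):
    return scale_and_translate(rotate_to_zero(resample(points)))

def path_distance(a, b):
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(candidate, templates):
    """Step 4: return the best template name and its score in [0, 1]."""
    pts = preprocess(candidate)
    half_diag = 0.5 * math.sqrt(2 * SQUARE_SIZE ** 2)
    angles = [math.radians(a) for a in range(-45, 46, 2)]
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        d = min(path_distance(rotate_by(pts, a), tmpl) for a in angles)
        if d < best_d:
            best, best_d = name, d
    return best, 1.0 - best_d / half_diag
```

A template is built once per gesture with preprocess(); recognize() then compares a live point sequence (for example, 2D projections of tracked hand positions) against all stored templates and returns the best match with its score.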
Thus, it is possible to integrate another recognition algorithm, such as $1 or $N, into the application. The diagram of the recognition scenario proposed in our application is represented below:
[Flowchart: waiting for skeleton movement → waiting for push/pull gesture → gesture detection → recognition algorithm → device action]
HARDWARE IMPLEMENTATION
The implemented system, shown in the figure above, consists of a central FPGA controller that communicates with the computer interface via the RS-232 protocol. The controller connects the home automation devices to the system; these devices are controlled through the gesture recognition performed by the Kinect sensor. The system uses parallel communication so that the speed is increased. The FPGA controller is implemented on a Xilinx Spartan-3E using the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL). The devices that need to be controlled are connected directly to the FPGA controller.
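On the PC side, forwarding a recognized gesture to the FPGA over RS-232 can be sketched as follows in Python with the pyserial library; the byte-level command protocol, port name, and baud rate are assumptions for illustration, since the paper does not specify them.

```python
import serial  # pyserial

# Map recognized gestures to single-byte device commands. The byte values
# are assumed; they must match whatever the FPGA's UART decoder expects.
COMMANDS = {
    "light_on": b"\x01",
    "light_off": b"\x02",
    "fan_on": b"\x03",
    "fan_off": b"\x04",
}

def send_command(port: serial.Serial, gesture: str) -> None:
    """Forward a recognized gesture to the FPGA controller over RS-232."""
    cmd = COMMANDS.get(gesture)
    if cmd is not None:
        port.write(cmd)

if __name__ == "__main__":
    # 9600 baud, 8N1 is a common RS-232 default; adjust to match the FPGA UART.
    with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
        send_command(port, "light_on")
```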
CONCLUSION
The implementation of home automation using an FPGA has been achieved. Furthermore, the system can be expanded by cascading FPGAs or by multiplexing data coming from different devices, which makes the system scalable. The devices connected to the FPGA can use either a wired connection or a wireless one, such as ZigBee or infrared. In the implemented module, wired connections are used; however, the interface can easily be replaced by a wireless solution. The implemented modules interface with the system and control the connected devices by switching them on and off.
REFERENCES
[1] C. Wolf, "Maintaining older people at home: scientific issues and technologies related to computer vision," ETIA, 2011.
[2] M. Tölgyessy and P. Hubinský, "The Kinect sensor in robotics education," Institute of Control and Industrial Informatics, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology in Bratislava, Slovakia.
[3] P. Carner, "Project Domus: designing effective smart home systems."
[4] T. Kerthove, "What is the difference between Kinect for Windows and Kinect for Xbox 360?" Response of Microsoft to the UK independent review of intellectual property and growth, 4 March 2011.
[5] D. Joseph Camp and W. Edward Knightly, "The IEEE 802.11s extended service set mesh networking standard," Electrical and Computer Engineering, Rice University, Houston, TX, 2006.
[6] T. Hirofuchi, E. Kawai, K. Fujikawa and H. Sunahara, "USB/IP: a peripheral bus extension for device sharing over IP network," Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, 630-0192, Japan, in Proc. USENIX Annual Technical Conference, 2005.
[7] G. Song, Z. Wei, W. Zhang and A. Song, "Design of a networked monitoring system for home automation," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 933-937, Aug. 2007.
[8] F. Cayckon, "FPGA assisted Zigbee communication for low power home automation design."
[9] VHDL, http://www.eda.org/vhdl-200x/
[10] Verilog HDL, http://www.sutherlanddl.com/online_verilog_ref_guide/vlog_ref_top.html
[11] K. Gill, S.-H. Yang, F. Yao and X. Lu, "A ZigBee-based home automation system," IEEE Transactions on Consumer Electronics, vol. 55, no. 2, May 2009.