- Open Access
- Authors : Vineeth S , Renukumar B R , Sneha V C , Prashant Ganjihal, Rani B
- Paper ID : IJERTV9IS050513
- Volume & Issue : Volume 09, Issue 05 (May 2020)
- Published (First Online): 22-05-2020
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Automatic Pet Food Dispenser using Digital Image Processing
1 Vineeth S, 2 Renukumar B R, 3 Sneha V C, 4 Prashant Ganjihal, 5 Rani B
2 Assistant Professor, Electronics & Instrumentation Engineering, JSS Academy of Technical Education, Bengaluru, India
1,3,4,5 Students, Electronics & Instrumentation Engineering, JSS Academy of Technical Education, Bengaluru, India
Abstract: This paper presents a design that lets pet owners feed their pet or pets without being present or intervening, unlike older pet feeders. The aim of our work is to offer a simpler and more efficient way for pet owners to feed their pets, even when they are not at home and cannot control the feeder remotely. The system is implemented using digital image processing. First, a pet call is played through a speaker using a recorded voice to indicate that feed time has started. An ultrasonic sensor detects the pet in front of the system. Once the pet is detected, the camera is switched on, captures an image of the pet, and processes it. If the pet is recognized as the required pet, a DC motor is activated to dispense food. The implemented project is for two pets of different species, so two DC motors are employed to dispense two different kinds of food, and two food containers and two food bowls are provided in this design. Once the required pet is fed successfully, a message is sent to the owner's mobile number using an API.
Keywords: Automatic Feeder, Computer Vision, Digital Image Processing, Neural Networks, Pet Food Dispenser.
INTRODUCTION
Nowadays, people tend to be busier, and because of this they tend to overlook some of their responsibilities, which is the primary cause of trouble. One of those responsibilities is keeping a pet at home. Most people want a pet of their own for its appealing appearance, loyalty, and playful personality. Having a pet is a responsibility that does not go to waste, because a pet provides enjoyment and company at home. One major problem in present-day society is people's busyness, and it is one major reason why pet owners act irresponsibly when it comes to looking after their pets; the pets seem to be at the bottom of their list of priorities. One important aspect of pet care is feeding, and this is where this project comes into action: a food dispenser based on digital image processing is activated on pet detection and recognition. The aim of our work is to offer a simpler and more efficient way for pet owners to feed their pets, even when they are not at home and cannot control the feeder remotely. Specifically, the purpose is to build a design that can automatically detect specific pets, match the detected pets with the currently stored pet profiles, and dispense the right kind of food in the user-specified quantity. A key point is that the pet feeder can serve pets from different species: the food containers and food plates are all separate so that the user can put different foods for different pets.
Artificial intelligence has seen tremendous growth in bridging the gap between the capabilities of humans and machines. Researchers and enthusiasts alike work on various parts of the field to make remarkable things happen. One of many such areas is computer vision. The goal of this field is to enable machines to see the world as humans do, perceive it in a similar way, and use that knowledge for a multitude of tasks such as image and video recognition, image analysis and classification, media recreation, recommendation systems, natural language processing, and so on. The advances in computer vision with deep learning have been built and refined over time, primarily around one particular algorithm: the Convolutional Neural Network.
Objectives of the proposed work are to design a product:
- Which can automatically detect different pets from different species.
- Which dispenses the right kind of food for the right kind of pet.
- Which dispenses food at the user-specified time.
- Which completely avoids the need for the owner's presence and intervention.
LITERATURE SURVEY
[1] This food dispenser is controlled using an Android application, which controls the device through a Wi-Fi module to dispense food. The FRDM-KL25Z microcontroller is programmed to drive the motor. There are two basic parts to dispensing the food: a storage box with an opening is used to store the food, and a lid sits beneath the box. The lid is attached to a DC motor that is interfaced with the FRDM board, and the length of time for which the openings of the lid and the storage box coincide decides the amount of food dispensed. Once the food is dispensed, the motor is programmed to rotate back, closing the lid. The Android app controls how long the motor stays in the open position. Advantages of this system: it requires low maintenance, the quantity of food is controlled, and the food is fed at the proper time. Disadvantages: food can be fed to only one animal, and there is no water dispenser.
[2] In this system, the PCA algorithm with eigenfaces is used to propose an image-based animal recognition system. The images are resized, and the eigenfaces and eigenvalues of the images are calculated. The projection of each centered image onto the face space is then computed; the test image is projected in the same way, and the two projections are compared using the Euclidean distance. The dataset image with the smallest Euclidean distance is taken as the recognized image. Advantages of this system: recognition accuracy is good and less power is consumed. Disadvantage: it uses an outdated method for detection.
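To make the eigenface matching step concrete, the following is a minimal sketch of the projection and Euclidean-distance comparison described above, written in NumPy; the number of components, image shapes, and labels are illustrative assumptions rather than details from the referenced paper.

```python
# Minimal eigenface sketch: project images onto a face space learned by PCA
# and recognize a test image by nearest Euclidean distance.
import numpy as np

def fit_eigenfaces(train_images, n_components=10):
    # Flatten images into row vectors and center them on the mean face.
    X = np.array([img.ravel() for img in train_images], dtype=np.float64)
    mean_face = X.mean(axis=0)
    centered = X - mean_face
    # Eigenfaces are the top right-singular vectors of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    # Project every training image into the face space.
    projections = centered @ eigenfaces.T
    return mean_face, eigenfaces, projections

def recognize(test_image, mean_face, eigenfaces, projections, labels):
    # Project the centered test image and return the label of the training
    # image with the smallest Euclidean distance in face space.
    p = (test_image.ravel().astype(np.float64) - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(projections - p, axis=1)
    return labels[int(np.argmin(distances))]
```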
[3] Images are the evident source in image processing applications, and image processing will change human-computer interaction in the future. A huge number of image processing applications, tools, and techniques help to extract complex features from an image. Image processing works on single-dimensional as well as multidimensional images and examines what is actually in an image; it is the core of many techniques being developed for real-time use. Advantages of this approach: it improves the quality of the image and the distribution of intensity. Disadvantages: higher cost, and it is difficult to understand.
[4] This paper focuses on letting pet owners feed their pet in their absence by sending a message to the system from a mobile phone. GSM technology is adopted to receive the text from the owner. Once the message is received, a solenoid valve and a servo motor are activated: the servo motor rotates to dispense the food, and the valve opens so that water flows freely. Once the feeding process is done, the owner receives a confirmation message. The concept targets families with a busy schedule who are not able to feed their pet. Advantages of this system: the owner can feed the pet without being present, and the system is cost effective. Disadvantages: the owner still needs to intervene in the process, and the system fails where network coverage is poor.
[5] Different sensors are employed to make this pet feeder work efficiently. A proximity sensor is connected to an Arduino. Whenever the sensor detects motion at a distance from the feeder and the pet comes near the food bowl, food from a container is served into the bowl. A servo motor is employed in the system for locking. Together, these components determine the efficiency of the feeder. Advantage: the presence of the owner is not required. Disadvantages: food is wasted, and the system does not have a water dispenser.
[6] At present, most commercial pet feeders are stationary systems. The owner can control the feeder remotely with a smartphone to dispense food, and a few feeders have a camera that allows the owner to observe the pet at home. However, these machines are stationary and cannot move, and the photos are taken from a fixed angle while pets move around the house. Hence, this paper designs a remote-control system on a toy car equipped with a camera and with food and water feeding. It allows the owner to receive an image from the remote camera on an Android device and to control the car's movement through the MQTT protocol, so that food and water can be supplied wherever the pet is. Advantages: it can feed both cats and dogs, and it supplies both water and food. Disadvantage: the quantity of food that can be stored is small.
[7] In this system the pet feeder is controlled by a remote-control system. The food is dispensed at the exact time set by the owner. The user can adjust the feed time and the quantity of food, receive a refill alert, and call the pet using the owner's recorded voice. It can feed the pet even without the presence of the owner, and the refill alert is given with the help of a buzzer. Advantages: it avoids wastage of food, overcomes the overfeeding problem, and gives a refill alert to the owner. Disadvantage: the system is not cost effective.
[8] This system has an Arduino Uno controller that drives a stepper motor through a stepper motor driver. The controller is pre-programmed to operate the stepper motor, which delivers the food to the pet. A buzzer is attached to the Arduino to give a refill alert. The Arduino provides the control program and output for the stepper motor driver according to the set time and motor revolution. Advantage: it is cost effective. Disadvantage: the work exists only as a simulation, with no hardware implementation.
[9] Deep learning algorithms are designed to mimic the function of the human cerebral cortex. They are representations of deep neural networks, i.e., neural networks with many hidden layers. Convolutional neural networks are deep learning algorithms that can be trained on huge datasets with millions of parameters, taking 2D images as input and convolving them with filters to produce the desired outputs. In this work, CNN models are built to evaluate their performance on image recognition and detection datasets. The algorithm is applied to the MNIST and CIFAR-10 datasets and its performance is evaluated: the accuracy on MNIST is 99.6%, while the CIFAR-10 model uses real-time data augmentation and dropout on a CPU unit.
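As an illustration of this kind of CNN, the sketch below trains a small Keras convolutional network with dropout on MNIST; the architecture and hyperparameters are illustrative assumptions, not the exact model evaluated in the referenced paper.

```python
# Minimal sketch: small CNN with dropout trained on MNIST using Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel axis, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```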
[10] This paper describes a learning approach based on training convolutional neural networks (CNNs) for a traffic sign classification system. In addition, it presents preliminary classification results of applying this CNN to learn features and classify RGB-D images. To determine a suitable architecture, the authors explore the transfer learning method known as fine tuning, reusing layers trained on the ImageNet dataset in order to provide a solution for a four-class classification task on a new set of data.
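The fine-tuning idea can be sketched as follows: reuse a backbone pre-trained on ImageNet and train only a new four-class head. MobileNetV2 is chosen here purely for illustration; the referenced paper's exact architecture and datasets are not reproduced.

```python
# Minimal sketch of transfer learning: frozen ImageNet backbone + new 4-class head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet",
                                         include_top=False,
                                         input_shape=(224, 224, 3),
                                         pooling="avg")
base.trainable = False  # freeze the pre-trained feature extractor first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),  # four-class task
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```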
METHODOLOGY
Animal detection is the process of finding real-world animals in still images or videos. It allows for the recognition, localization, and detection of animals within an image. Animal detection can be done in several ways: feature-based object detection, Viola-Jones object detection, SVM classification with HOG features, and deep-learning object detection.
The block diagram below (Figure 1) illustrates the proposed system. The Raspberry Pi is the controller employed in the system; it is a basic embedded platform, and since it is a low-cost single-board computer that reduces the complexity of real-time applications, we have used this board.
Figure 1: Block Diagram (Raspberry Pi interfaced with the ultrasonic sensor, camera, load cell, DC motors, speaker, food bowls, and the Twilio API link to the owner's mobile)
First, a pet call is played using a recorded voice through a speaker to indicate that feed time has started. The ultrasonic sensor is placed so as to detect the pet in front of the system. Once the pet is detected by the ultrasonic sensor, the camera is switched on, captures an image of the pet, and processes it. If the pet is recognized as the required pet, a DC motor is activated to dispense food; the motor is rotated to serve the food, and the rotation is controlled by an H-bridge.
The pet's diet can be controlled by dispensing a proper amount of food, which is done by controlling the rotation of the DC motor. A load cell is used to detect the presence of food in the bowl; when the food falls below the set-point value, the load cell detects this and a message is sent saying that the pet has been fed. The system can be implemented to feed one pet or more than one pet of either the same or different species using image processing.
The implemented project is for two pets of different species. Hence, we have employed two DC motors to dispense two different kinds of food for the two pets, and two food containers and two food bowls are provided in this design. Once the required pet is fed successfully, a message is sent to the owner's mobile number using the Twilio API.
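A minimal sketch of this detection-and-dispense step on the Raspberry Pi is shown below, assuming an HC-SR04-style ultrasonic sensor and an L298N-style H-bridge; the GPIO pin numbers, the 30 cm detection distance, and the fixed dispensing time are illustrative assumptions.

```python
# Minimal sketch: detect a pet with an ultrasonic sensor, then run one motor
# through the H-bridge for a fixed time to dispense a portion of food.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24            # ultrasonic sensor pins (assumed wiring)
MOTOR_IN1, MOTOR_IN2 = 17, 27  # H-bridge inputs for one DC motor (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup([MOTOR_IN1, MOTOR_IN2], GPIO.OUT)

def distance_cm():
    # Send a 10 us trigger pulse and time the echo to estimate distance.
    GPIO.output(TRIG, True); time.sleep(0.00001); GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound ~343 m/s

def dispense(seconds=2.0):
    # Rotate the DC motor forward through the H-bridge for a fixed time,
    # which sets the dispensed amount.
    GPIO.output(MOTOR_IN1, True); GPIO.output(MOTOR_IN2, False)
    time.sleep(seconds)
    GPIO.output(MOTOR_IN1, False); GPIO.output(MOTOR_IN2, False)

# Main loop: wait until a pet is close to the feeder, then hand over to the
# camera/CNN recognition stage (not shown here) and dispense if it matches.
try:
    while True:
        if distance_cm() < 30:  # pet detected within ~30 cm (assumed)
            dispense()
            break
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```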
Pet detection and recognition is done using a Convolutional Neural Network (CNN). For model training we first need to create a dataset. The TensorFlow Object Detection API uses the TFRecord file format, so the dataset must ultimately be converted to this format. TensorFlow is a free, open-source software library for dataflow programming. To prepare the input file for the API, two things must be provided:
- Images in JPEG or PNG format.
- A list of bounding boxes for each image and the class of the object in each bounding box.
We scraped 200 dog and cat images (mainly JPEGs and a few PNGs) from Google Images and Pixabay and created the dataset in Pascal VOC format. We then labeled the images manually with LabelImg, a graphical image annotation tool written in Python. As the starting point for training we used a pre-trained model checkpoint, the ssd_mobilenet_v1_coco model.
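The conversion of one LabelImg (Pascal VOC) annotation into a TFRecord example can be sketched as follows; the file paths and label map are illustrative assumptions, and the feature keys follow the TensorFlow Object Detection API conventions.

```python
# Minimal sketch: convert one Pascal VOC annotation (as produced by LabelImg)
# into a TFRecord example for the TensorFlow Object Detection API.
import xml.etree.ElementTree as ET
import tensorflow as tf

LABEL_MAP = {"dog": 1, "cat": 2}  # assumed label map; must match the training config

def voc_to_tf_example(xml_path, image_path):
    root = ET.parse(xml_path).getroot()
    width = int(root.find("size/width").text)
    height = int(root.find("size/height").text)
    with tf.io.gfile.GFile(image_path, "rb") as f:
        encoded_image = f.read()

    xmins, xmaxs, ymins, ymaxs, names, labels = [], [], [], [], [], []
    for obj in root.findall("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        # Normalize pixel coordinates to [0, 1] as the API expects.
        xmins.append(float(box.find("xmin").text) / width)
        xmaxs.append(float(box.find("xmax").text) / width)
        ymins.append(float(box.find("ymin").text) / height)
        ymaxs.append(float(box.find("ymax").text) / height)
        names.append(name.encode("utf8"))
        labels.append(LABEL_MAP[name])

    feature = {
        "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_image])),
        "image/format": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"jpeg"])),
        "image/width": tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        "image/height": tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        "image/object/bbox/xmin": tf.train.Feature(float_list=tf.train.FloatList(value=xmins)),
        "image/object/bbox/xmax": tf.train.Feature(float_list=tf.train.FloatList(value=xmaxs)),
        "image/object/bbox/ymin": tf.train.Feature(float_list=tf.train.FloatList(value=ymins)),
        "image/object/bbox/ymax": tf.train.Feature(float_list=tf.train.FloatList(value=ymaxs)),
        "image/object/class/text": tf.train.Feature(bytes_list=tf.train.BytesList(value=names)),
        "image/object/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=labels)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# Write annotated images into a TFRecord file (the file names are assumed).
with tf.io.TFRecordWriter("train.record") as writer:
    writer.write(voc_to_tf_example("images/dog_001.xml", "images/dog_001.jpg").SerializeToString())
```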
The trained model is then exported to a single file (a TensorFlow graph proto) so that it can be used for inference.
To detect animals with OpenCV we need to (1) access our webcam/video stream in an efficient manner and (2) apply object detection to each frame. We use three command-line arguments: the path to the Caffe prototxt file (--prototxt), the path to the pre-trained model (--model), and the minimum probability threshold used to filter weak detections (--confidence), whose default is 20%.
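A short sketch of parsing these arguments and loading the serialized Caffe model with OpenCV's dnn module is given below; the default file names are illustrative assumptions.

```python
# Minimal sketch: command-line arguments and Caffe model loading with OpenCV.
import argparse
import cv2

ap = argparse.ArgumentParser()
ap.add_argument("--prototxt", default="MobileNetSSD_deploy.prototxt")
ap.add_argument("--model", default="MobileNetSSD_deploy.caffemodel")
ap.add_argument("--confidence", type=float, default=0.2,
                help="minimum probability to filter weak detections")
args = ap.parse_args()

# Load the serialized Caffe model from disk.
net = cv2.dnn.readNetFromCaffe(args.prototxt, args.model)
```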
We then initialize the CLASS labels and corresponding random COLORS, and load the serialized model, providing the references to our prototxt and model files. Next, we initialize the video stream (this can be from a video file or a camera): we start the video stream, wait for the camera to warm up, and start the frames-per-second counter. The VideoStream and FPS classes are part of the imutils package.
First, we read a frame from the stream and resize it. Since we will need the width and height later, we grab them now. The frame is then converted to a blob using the OpenCV dnn module.
Now, for the heavy lifting, we set the blob as the input to our neural network and feed it forward through the net, which gives us our detections. At this point, we have detected objects in the input frame. We then loop over the detections and check the confidence (i.e., probability) associated with each one. If the confidence is high enough (above the threshold), we display the prediction in the terminal and draw it on the image as text and a colored bounding box. If we break out of the loop (a 'q' key press or the end of the video stream), some housekeeping remains: we stop the FPS counter, print the frames-per-second information to the terminal, close the open window, and stop the video stream.
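The frame-by-frame loop described above can be sketched as follows, using the imutils VideoStream and FPS helpers together with OpenCV's dnn module; the class list, input size, and scaling constants follow the usual MobileNet-SSD Caffe setup, and the file names are assumptions.

```python
# Minimal sketch of the per-frame detection loop with OpenCV dnn and imutils.
import time
import numpy as np
import cv2
import imutils
from imutils.video import VideoStream, FPS

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
           "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
CONFIDENCE = 0.2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
vs = VideoStream(src=0).start()
time.sleep(2.0)      # let the camera warm up
fps = FPS().start()

while True:
    frame = imutils.resize(vs.read(), width=400)
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > CONFIDENCE:
            idx = int(detections[0, 0, i, 1])
            box = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype("int")
            (startX, startY, endX, endY) = box
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            cv2.rectangle(frame, (startX, startY), (endX, endY), COLORS[idx], 2)
            cv2.putText(frame, label, (startX, max(startY - 15, 15)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
    fps.update()

fps.stop()
print("approx. FPS: {:.2f}".format(fps.fps()))
cv2.destroyAllWindows()
vs.stop()
```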
Figure 2: Flow chart of methodology
RESULT
The Raspberry Pi 3 Model B is used as the controller in the project. The pet call was given at feed time for each pet; once the pets arrived in front of the system, the ultrasonic sensor detected both pets and the camera was initiated on detection.
Figure 3: Final product
Since we used the TensorFlow framework with a CNN, the recognition accuracy we obtained is more than 90%.
Figure 4: Recognition of pets
Once a pet is detected and recognized as the required pet, the corresponding DC motor dispenses the right kind of food. For dog food, the amount dispensed is determined by the rotation of the DC motor and measured using a load cell; the predefined amount dispensed is 40 g. Once the food in the bowl falls to half the predefined amount, a message is sent to the owner's mobile using the Twilio API.
Figure 5: Load Check for dog food
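The "half of 40 g" alert described above can be sketched with the Twilio Python client as follows; read_weight_grams is a hypothetical placeholder for the load-cell driver, and the credentials and phone numbers are placeholders.

```python
# Minimal sketch: send an SMS via Twilio once the bowl weight drops below
# half of the dispensed 40 g portion.
import time
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder credentials
AUTH_TOKEN = "your_auth_token"                       # placeholder credentials
client = Client(ACCOUNT_SID, AUTH_TOKEN)

PORTION_GRAMS = 40.0  # predefined dispensed amount for dog food

def notify_when_half_eaten(read_weight_grams):
    # read_weight_grams: callable returning the current bowl weight in grams
    # (on the real system this would wrap the HX711 load-cell driver).
    while True:
        if read_weight_grams() <= PORTION_GRAMS / 2:
            client.messages.create(
                body="Pet is fed: dog food is below half of the 40 g portion.",
                from_="+10000000000",  # Twilio number (placeholder)
                to="+10000000001")     # owner's number (placeholder)
            break
        time.sleep(5)
```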
For cat food, the amount dispensed is determined by the rotation of the DC motor, and the message is sent once the food is dispensed.
Figure 6: Product after food dispense
The message was sent using the Twilio API on successful dispensing of both the dog food and the cat food.
Figure 7: Message using Twilio API
CONCLUSION
The project we have implemented feeds two pets of different species; however, the design can feed one or more pets of either the same or different species, and the product can be altered depending on the number of pets required. This pet feeder design provides several features that make it more convenient for both owner and pet, such as the feed time, the time gap between consecutive feeds, a call for the pet at feed time, and control over the quantity of food served. The system also sends a text message to the owner on the successful feed of each pet using the Twilio API. As future work on the same design, a refill alert can be added to warn the owner when the feed container is about to become empty; this can be done by sounding a buzzer and sending a message to the owner's phone. A dual power supply with a battery charger can also be added in future work. This feature will allow the system to continue operating in case of a power cut or power failure, with the system using the battery for its basic functionality; the supply will also automatically recharge the battery when the AC source is available.
ACKNOWLEDGMENT
We would like to express our gratitude to our guide, Asst. Prof. B. R. Renukumar, for his continuous support of our project and the related study, and for his patience, encouragement, and vast knowledge. His advice helped us throughout the research and the writing of this content; we could not have imagined a better advisor and guide for our academic project work. Besides our guide, we would like to thank the rest of the faculty: Asst. Prof. Sowmya M. S., the project coordinator, and Dr. D. Mahesh Kumar, the Head of the Department, for their perceptive comments and motivation, and also for the hard questions that made us widen our research from various standpoints. Our heartfelt thanks also go to all the authors of the reference papers related to our research and to JSSATEB college, which gave us access to the laboratory and research facilities. Their valuable support made it possible to complete this project.
REFERENCES
[1] Hari N. Khatavkar, Rahul S. Kini, Suyash K. Pandey, Vaibhav V. Gijare, "Wayne Intelligent Food Dispenser (IFD)", 2019.
[2] Ankur Mahanty, Ashutosh Engavle, Taha Bootwala, Prof. Ichhanchu Jaiswal, "Proposed System for Animal Recognition Using Image Processing", 2019.
[3] R. Ravikumar, Dr. V. Arulmozhi, "Digital Image Processing - A Quick Review", 2019.
[4] Smruthi Kumar, "Pet Feeding Dispenser Using Arduino and GSM Technology", 2018.
[5] Aasavari Kank, Anjali Jakhariye, "Automatic Pet Feeder", 2018.
[6] Wen-Chuan Wu, Ke-Chung Cheng, Peiyu Lin, "A Remote Pet Feeder Control System via MQTT Protocol", 2018.
[7] Saurabh A. Yadav, Sneha S. Kulkarni, Ashwini S. Jadhav, Prof. Akshay R. Jain, "IoT Based Pet Feeder System", 2018.
[8] Dharanidharan J., R. Puviarasi, "Simulation of Automatic Food Feeding System for Pet Animals", 2018.
[9] Rahul Chauhan, Kamal Kumar Ghanshala, R. C. Joshi, "Convolutional Neural Network (CNN) for Image Detection and Recognition", 2018.
[10] Nadia Jmour, Sehla Zayen, Afef Abdelkrim, "Convolutional Neural Networks for Image Classification", IEEE, 2018.
AUTHOR PROFILE
Mr. Vineeth S is currently pursuing a Bachelor of Engineering (Electronics and Instrumentation) from JSS Academy of Technical Education, affiliated to VTU, Belagavi, Karnataka, India. He has been recruited by Infosys.
Mr. Renukumar B R is presently working as an Assistant Professor in the Department of Electronics and Instrumentation Engineering, JSS Academy of Technical Education. He has been working in the teaching field for the past 19 years.
Ms. Sneha V C is currently pursuing a Bachelor of Engineering (Electronics and Instrumentation) from JSS Academy of Technical Education, affiliated to VTU, Belagavi, Karnataka, India. She has been recruited by Yokogawa.
Mr. Prashant Ganjihal is currently pursuing a Bachelor of Engineering (Electronics and Instrumentation) from JSS Academy of Technical Education, affiliated to VTU, Belagavi, Karnataka, India.
Ms. Rani B is currently pursuing a Bachelor of Engineering (Electronics and Instrumentation) from JSS Academy of Technical Education, affiliated to VTU, Belagavi, Karnataka, India.