A software platform for building the virtual dressing room and integrating the 3D scanning technology and VR/AR technology

DOI : 10.17577/IJERTCONV11IS08023





Karthik S V1, Ranganath Sagar S2, Abhijith M3, Lavanya G N4, CSE Department, Sri Krishna Institute of Technology, Blore-560090, India

Aruna R5, CSE Department, Sri Krishna Institute of Technology, Blore-560090, India

Abstract:

This paper proposes a virtual dressing room application that uses the Microsoft Kinect sensor to improve the time efficiency of, and access to, clothing try-on. The proposed method is based on extracting the user from a video stream, aligning cloth models, and detecting skin color. The project is written in C# and is intended as a real-time Kinect application. The growth of the Internet and technology has enabled people to buy and use many items and services online rather than in person. As online shopping malls have grown in scale, additional features have been tested and implemented to compensate for the limitation of not being able to actually wear clothes in an online mall. Among these, 3D virtual try-on is an innovative service whose technology is constantly advancing, and growing interest in it has resulted in a number of related studies.

Keywords: Virtual dressing room, 3D scanning technology, Microsoft Kinect.

  1. Introduction

    There are two universes: the physical world and the digital world. Ever since people first learned about computers and began using digital technology, humanity has been working to merge the physical and digital worlds seamlessly. Numerous technologies have been created in an effort to bridge this gap; software that links the virtual and physical worlds includes virtual reality, augmented reality, and mixed reality. This has led to the development of many tools that let users experience the real and virtual worlds side by side.

    The fast-paced evolution of the technology industry has had a considerable impact on the smart technologies that expedite our daily activities. Online shopping, for instance, has evolved swiftly: people are increasingly accustomed to using the Internet, and customers can purchase the items they are interested in from shops, online auctions, and more. This style of transaction is now the most common and offers customers a great deal of convenience. A disadvantage of online apparel shopping, however, is that customers cannot try on clothing before buying it, and how a customer feels when dressed strongly affects their decision to buy. Virtual dressing rooms that can replicate the visual component of dressing are therefore becoming more and more necessary. Trying things on in a store typically takes a long time, and in situations such as online shopping it may not be possible at all. We aim to improve this by creating a virtual changing-room environment with easy access and quick try-on times. The main challenge is the precise alignment of the cloth models with the user in terms of placement, scale, rotation, and ordering. Identifying the user and their body parts is one of the first steps in solving this problem, and the literature has proposed a number of techniques for body-part detection and pose estimation. Thanks to the use of web cameras, online shoppers are also better able to control costs. As a result, our virtual trial room software may substantially change the way people shop today: people no longer need to worry about hidden cameras or spend hours queueing in front of a trial room to examine their clothing.

  2. Background Study (Literature)

    Virtual dressing room applications have attracted many researchers. In [1], a method and system were provided to recognize body gestures that represent commands, in order to initiate actions within an electronic marketplace on behalf of the user. Using a first set of spatial data, a model of the user's body is generated. A second model is then generated by the action machine based on a second spatial dataset. The difference between the first and second models, as determined by the action machine, is represented by a gesture, and this gesture represents a command by the user.

    In [2], a system and method were presented for virtually fitting a piece of clothing on an accurate representation of a user's body obtained by 3D scanning. The system allows the user to access a database of garments and accessories available for selection. Finite element analysis was applied to determine the shape of the combined user body and garment and to produce an accurate visual representation of the selected garment or accessory.

    In [3], Srinivasan K. and Vivek S. presented a Virtual Fitting Room using image processing. Virtually dressing a person requires separating the person from the background while accounting for changes in lighting and with minimal disruption from surrounding objects. A Laplacian filter is then used to detect the contour of the upper and lower body, followed by edge detection. Next, feature points are extracted based on basic human anatomy, and the sample shirt is warped to precisely fit the person using these points as a guide.

    In [4], the authors discuss real-time simulation of 3D clothes. The application uses the Microsoft Kinect V2 sensor to obtain body parameters and fits clothes virtually to customers using the Unity 3D game engine. Compared with the new VFR system, it has some similarities, such as obtaining body parameters and gender detection. Considered as a whole product, it gives customers a realistic fitting experience, but the application is somewhat expensive and therefore does not meet the cost-effectiveness requirement. In terms of features, the two applications use different techniques, and some features available in the new VFR system are absent from the system discussed in that paper.

  3. Methodology

The first step in the virtual fitting room is estimating the human pose and fitting clothing to the client's body.

A. Estimation of human pose

Human pose can be estimated using pre-trained models []. A Python API is used to run a pre-trained model that locates 17 human-body landmarks, which together form the skeleton of the human body. These landmarks include the facial landmarks, background, neck, shoulders, elbows, wrists, hips, knees, and ankles. In the new implementation, the background should be used as a landmark so that the human body shape can be efficiently extracted from the background image.

Fig 1. Pre-trained model

The OpenCV function called in the implemented API invokes the pre-trained model that performs landmark detection. The video is provided to the API frame by frame, and each frame is passed through the trained model. Landmarks are then created, together with connections between two landmarks, referred to as pairs. In this way the API builds a skeleton of the human body using the pre-trained model: as long as the user remains in front of the camera, the system builds a skeleton from their body.
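The following is a minimal sketch of the kind of OpenCV pipeline described above, not the authors' actual code. It assumes an OpenPose-style Caffe model (the file names `pose_deploy_linevec.prototxt` and `pose_iter_440000.caffemodel` are assumptions) and a webcam as the video source; it detects landmarks frame by frame and draws the skeleton by connecting landmark pairs.

```python
import cv2

# Assumed model files (not specified in the paper): an OpenPose-style Caffe pose model
PROTO_FILE = "pose_deploy_linevec.prototxt"
WEIGHTS_FILE = "pose_iter_440000.caffemodel"

N_POINTS = 18  # COCO-style landmark count; the paper reports 17 body landmarks
# Example landmark pairs (bones) to draw; a subset of the COCO skeleton
POSE_PAIRS = [(1, 2), (1, 5), (2, 3), (3, 4), (5, 6), (6, 7),
              (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13), (1, 0)]

net = cv2.dnn.readNetFromCaffe(PROTO_FILE, WEIGHTS_FILE)
cap = cv2.VideoCapture(0)  # webcam as the video stream

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # Prepare the frame and run the pre-trained model
    blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368), (0, 0, 0),
                                 swapRB=False, crop=False)
    net.setInput(blob)
    output = net.forward()  # one confidence map per landmark

    points = []
    for i in range(N_POINTS):
        prob_map = output[0, i, :, :]
        _, prob, _, point = cv2.minMaxLoc(prob_map)
        # Scale the landmark back to the original frame size
        x = int(w * point[0] / output.shape[3])
        y = int(h * point[1] / output.shape[2])
        points.append((x, y) if prob > 0.1 else None)

    # Draw the skeleton by connecting landmark pairs
    for a, b in POSE_PAIRS:
        if points[a] and points[b]:
            cv2.line(frame, points[a], points[b], (0, 255, 0), 2)
            cv2.circle(frame, points[a], 4, (0, 0, 255), -1)

    cv2.imshow("Skeleton", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```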

Fig 2. Output of webcam

B. Kinect Skeletal Tracking

Skeletal tracking deals with a large number of dimensions: its parameters must characterize distinctive people, including their shapes, sizes, hairstyles, and apparel.

The joints of the human body, including the head, neck, and arms, are displayed here. All body-part joints are displayed according to their 3D coordinates.

Stage 1: pixel-by-pixel classification of body parts. Stage 2: the hypothesized joints are mapped to skeletal joints, enforcing temporal continuity, and the skeleton is completed.
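A conceptual sketch of these two stages is shown below. It is only an illustration of the idea, not the Kinect runtime itself: `classify_pixels` is a hypothetical placeholder for the per-pixel body-part classifier (which the Kinect SDK performs internally), and each joint is proposed as the centroid of the pixels assigned to its part.

```python
import numpy as np

# Hypothetical body-part labels produced by Stage 1 (per-pixel classification)
BODY_PARTS = ["head", "neck", "left_arm", "right_arm", "torso"]

def classify_pixels(depth_frame):
    """Stage 1 placeholder: label every pixel with a body-part index.

    The Kinect runtime does this with a trained per-pixel classifier; here we
    simply return a dummy label map of the same shape for illustration.
    """
    return np.zeros(depth_frame.shape, dtype=np.int32)

def propose_joints(depth_frame, label_map):
    """Stage 2: map hypothesized body parts to skeletal joint positions.

    Each joint is proposed as the centroid of the pixels assigned to its part;
    tracking across frames then enforces temporal continuity (not shown).
    """
    joints = {}
    for part_idx, part_name in enumerate(BODY_PARTS):
        ys, xs = np.nonzero(label_map == part_idx)
        if len(xs) == 0:
            continue  # part not visible in this frame
        # 3D coordinate: image x, image y, and mean depth of the part's pixels
        z = float(depth_frame[ys, xs].mean())
        joints[part_name] = (float(xs.mean()), float(ys.mean()), z)
    return joints

# Example usage with a synthetic 424x512 depth frame (Kinect v2 depth resolution)
depth = np.full((424, 512), 2000, dtype=np.uint16)  # roughly 2 m everywhere
labels = classify_pixels(depth)
print(propose_joints(depth, labels))
```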

Fig 3. Architecture

C. Human Parsing

This section builds a dataset for a store that holds hundreds of 3D clothing models. To generate transparent models of clothing shapes, the pre-trained model must first be completed; that model is based on various clothing shapes. For example, the transparent model for a T-shirt should have exactly the same shape as the T-shirt.
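As a rough illustration of how such a transparent clothing model might be produced from a garment photo, the sketch below assumes the garment is photographed on a plain, near-white background; the file names are placeholders, since the paper's clothing dataset is not described in detail.

```python
import cv2
import numpy as np

# Placeholder file name; the actual dataset images are assumptions
garment = cv2.imread("tshirt.jpg")  # garment photographed on a white background
assert garment is not None, "could not read the garment image"

gray = cv2.cvtColor(garment, cv2.COLOR_BGR2GRAY)

# Pixels that are not near-white are treated as part of the garment
_, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

# Build a BGRA image: the mask becomes the alpha channel, so the background is
# fully transparent and the model keeps the exact shape of the T-shirt
bgra = cv2.cvtColor(garment, cv2.COLOR_BGR2BGRA)
bgra[:, :, 3] = mask
cv2.imwrite("tshirt_transparent.png", bgra)
```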

Fig 4. Human parsing using trained model

D. Cloth Warping: Fitting Clothes to the User

This section aligns the selected transparent clothing model with the user detected in the video stream. Using the skeleton landmarks, the cloth model is warped to the user's body in terms of placement, scale, rotation, and ordering, so that the garment appears to be worn by the user.
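A minimal sketch of this overlay step is given below, assuming the shoulder landmarks from the pose-estimation step are available; the scaling heuristic (garment width proportional to shoulder distance) and the anchoring rule are assumptions for illustration only. In the full system this function would run inside the same frame loop as the pose estimation above.

```python
import cv2
import numpy as np

def overlay_garment(frame, garment_bgra, left_shoulder, right_shoulder):
    """Scale and place a transparent garment image using two shoulder landmarks."""
    # Assumed heuristic: garment width = 1.8 x shoulder distance
    shoulder_dist = np.hypot(right_shoulder[0] - left_shoulder[0],
                             right_shoulder[1] - left_shoulder[1])
    scale = (1.8 * shoulder_dist) / garment_bgra.shape[1]
    g = cv2.resize(garment_bgra, None, fx=scale, fy=scale)

    # Anchor the garment around the shoulder midpoint, slightly above it
    cx = int((left_shoulder[0] + right_shoulder[0]) / 2)
    cy = int((left_shoulder[1] + right_shoulder[1]) / 2)
    x0 = max(cx - g.shape[1] // 2, 0)
    y0 = max(cy - g.shape[0] // 8, 0)
    x1 = min(x0 + g.shape[1], frame.shape[1])
    y1 = min(y0 + g.shape[0], frame.shape[0])
    g = g[: y1 - y0, : x1 - x0]

    # Alpha-blend the garment onto the frame using its transparency channel
    alpha = g[:, :, 3:4].astype(np.float32) / 255.0
    roi = frame[y0:y1, x0:x1].astype(np.float32)
    frame[y0:y1, x0:x1] = (alpha * g[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)
    return frame
```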

Fig 5. Cloth Wrapped to the user

  4. Results and Discussion

    Our examination of existing scanning technologies has identified a compact scanner that can be connected with virtual dressing rooms.

    This examination allowed us to (i) recognize that the approach in which the user performs the scanning task at home does not fit well with the project's purposes, due to the many limitations it implies; and (ii) identify a network of sensors, coupled with dedicated software, as the most natural choice for building a sufficiently precise digital body shape from the measured data.

    On the other hand, allowing clients to create their digital body shape in a real store and then download it at home at a later time seems to be the right compromise to make it possible for the fashion industry to actually deploy virtual dressing rooms.

  5. Conclusion

    This application achieves better joint positioning after the cloth model is applied. It is a suitable application to give users access to a virtual changing room:

    1. Human measurements are produced based on the user's position in front of the Kinect;

    2. A flexible and realistic-looking fabric replica that the user can "wear";

    3. A body-motion-based GUI that is simple to use, user-friendly, and trendy;

    4. Our program offers many enjoyable and practical features for users.


References

[1] A. Hilsmann and P. Eisert, "Realistic cloth augmentation in single view video," in Proc. Vis., Modell., Visualizat. Workshop, 2009.

[2] Lan Ziquan, "Augmented Reality Virtual Fitting Room using Kinect," Department of Computer Science, School of Computing, National University of Singapore, 2011/2012.

[3] Muhammed Kotan and Cemil Oz, "Virtual Dressing Room Application with Virtual Human Using Kinect Sensor," Journal of Mechanics Engineering and Automation, vol. 5, pp. 322-326, May 25, 2015.

[4] Nikita Deshmukh, Ishani Patil, Sudehi Patwari, Aarati Deshmukh, and Pradnya Mehta, "Real Time Virtual Dressing Room," IJCSN International Journal of Computer Science and Network, vol. 5, issue 2, April 2016.

[5] Vipin Paul, Sanju Abel J, Sudarshan S, and Praveen M, "Virtual Trial Room," South Asian Journal of Engineering and Technology, vol. 3, no. 5, pp. 87-96, 2017.

[6] Srinivasan K. and Vivek S., "Implementation of Virtual Fitting Room Using Image Processing," Department of Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India, ICCCSP-2017.