A Virtual Storage Platform for Rapid Cloud Computing and Utility Services

DOI : 10.17577/IJERTCONV4IS16003




A. Vijay Vasanth1, T. Gajalakshmy2, P. Anitha3, M. Pavithra4

1Senior Assistant Professor, Dept. of CSE, Christ College of Engineering & Technology, Pondicherry, India

2,3,4B.Tech, Dept. of CSE, Christ College of Engineering & Technology, Pondicherry-605110, India

Abstract:- In recent years, virtual storage has come to play an important role in e-commerce applications, banking, IT infrastructure and telecommunication, while cloud computing itself remains a rapidly growing and loosely defined technology. Cloud-based virtualization has progressed rapidly and driven a major change in IT architecture, allowing compute resources to be pooled and used more flexibly to manage software, applications and multiple operating systems, so that users can run their desired operating systems from centralized servers. It also frees companies from the burden of purchasing and maintaining hardware and software. Virtual storage further provides high security, good I/O performance and continuous access to data-centre operations for clients globally. Experimental results are presented to analyse the effectiveness of a virtual storage platform for cloud services, together with suggested techniques for load balancing, preserving data through backup and handling storage capacity issues.

Keywords:- VMS, Virtual storage, centralized server, cloud and utility services.

1. INTRODUCTION

A virtual disk stores a set of files in its own storage directory, which acts as a logical container, much like a distributed file system, that hides the particulars of each storage device and offers a uniform model for storing virtual machine files. These containers can be backed by either VMFS (Virtual Machine File System) or NFS (Network File System), depending on the storage type. In non-virtualized storage, by contrast, servers connect directly to storage devices, either internal to the server chassis or on an external storage server [1]. The major disadvantage is that a given server assumes complete ownership of the physical device, with a whole disk drive tied to a single server. Sharing storage resources in a non-virtualized environment therefore requires complex file systems or a move away from block-based disks to network-attached storage.

Cloud computing has recently become a widespread topic in academia and the IT industry, aiming to offer software, applications, data and infrastructure as services hosted in data centres. Cloud platforms deliver reliable performance and are available in many configurations [2], and custom machine storage can be created for specific needs. A distributed cloud system may pursue a common goal, such as solving a large computational problem; alternatively, each workstation may serve its own user with separate needs, in which case the purpose of the distributed computing system is to coordinate the use of shared resources.

Fig 1: Virtual data storage devices.

Distributed cloud computing can be defined as the use of a distributed system to solve a single large problem by breaking it down into several tasks, each of which is executed on a separate computer of the centralized system. Such a system consists of more than one multitasking computer interconnected through a network. All the workstations in the network communicate with each other to reach a common goal while making use of their own local memory.

On the other hand, multiple users of the workstations may have different needs, and the distributed computing system takes charge of the shared resources by helping them communicate with other systems to complete their specific tasks [3].

The principal task of a distributed computing system is to maximize performance by linking users and IT assets in a cost-effective and reliable way. It also provides fault tolerance, keeping resources available in the event that one of the components fails.

A virtual data centre is a pool of cloud infrastructure resources designed specifically for enterprise business requirements. These resources include compute, application, memory, storage, service assurance and bandwidth. Most e-commerce applications are moving towards public cloud data centres for service availability at a lower price, while also avoiding physical server maintenance.

Fig 2: Distributed computing system memory sharing.

The rest of the paper is organized as follows. Section 2 gives a system overview, Section 3 presents the proposed system, Section 4 discusses related work, and Section 5 concludes the paper.

2. SYSTEM OVERVIEW

TransCom follows a client-server model that manages a specific set of computer systems. The client systems are set up so that resources such as software, databases and operating systems reside on the virtual server and are made globally available; the system uses physical addresses to identify clients inside the network.

The entire setup must be present within a LAN. Clients request resources such as software, memory and data from the virtual server by issuing a disk access protocol (DAP) request. When a client system becomes active, the appropriate OS is loaded from the virtual server to that client; remote booting takes place, and separate requests are made for the OS and for resource utilisation. Initially a boot request is made and the OS is loaded into the virtual memory of the corresponding client system, which enables further access to the server. To prevent concurrent read/write operations on the same shared file, a file-redirector mechanism is used. Each block on the server is handled by a disk driver, and an appropriate mapping is maintained between the client's physical address, the corresponding logical address in the virtual server and the corresponding database; clients can access more than one disk according to their privileges. Once a request is made, the virtual server searches for the requested data in the Vdisk image, and as soon as the data is located a proper acknowledgement is sent along with the response, following a piggy-backing technique. A special key is used for each request and response message from the server in order to prevent packet loss.
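As a rough illustration of this request/response flow, the sketch below (in Python, with purely hypothetical names, addresses and data) maps a client's physical address to a logical block in a Vdisk image, piggy-backs an acknowledgement on the data response, and attaches a per-request key; it is a sketch of the idea, not the TransCom implementation.

```python
# Minimal sketch (not the TransCom implementation) of the request/response flow
# described above: a client's physical address is mapped to a logical block in
# the Vdisk image, and the reply piggy-backs an acknowledgement plus a
# per-request key so lost packets can be detected. All names are illustrative.
import secrets

class VirtualServer:
    def __init__(self):
        # physical (client) address -> logical base block in the Vdisk image
        self.address_map = {"00:1A:2B:3C:4D:5E": 0, "00:1A:2B:3C:4D:5F": 1024}
        self.vdisk_image = {}          # logical block number -> data bytes

    def handle_dap_request(self, client_phys_addr, block_offset):
        """Serve one disk-access-protocol (DAP) read request."""
        request_key = secrets.token_hex(8)           # special key per request/response
        base = self.address_map.get(client_phys_addr)
        if base is None:
            return {"ack": False, "key": request_key, "data": None}
        logical_block = base + block_offset          # physical-to-logical mapping
        data = self.vdisk_image.get(logical_block, b"")
        # the acknowledgement is piggy-backed on the data response
        return {"ack": True, "key": request_key, "data": data}

server = VirtualServer()
server.vdisk_image[3] = b"boot sector for client 1"
print(server.handle_dap_request("00:1A:2B:3C:4D:5E", 3))
```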

3. PROPOSED SYSTEM

The proposed system is a service-based model in which a virtual server is set up and multiple clients are connected to it. The server responds to all connected clients simultaneously. Setting up such a virtual server involves several critical considerations, including backup, storage size and network capacity.

        SIZING STORAGE:

Sizing storage involves compressing the original data and efficiently eliminating garbage and null space in it; sizing storage therefore enhances the effective storage capacity of the virtual server.
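As a small illustration of the idea, the sketch below strips trailing null padding and compresses a block before it is written to the virtual server; zlib is chosen here only as an example codec, since the paper does not name a specific compression algorithm.

```python
# Illustrative sketch of the "sizing storage" idea: strip null padding and
# compress a block before it is stored on the virtual server.
import zlib

def size_block(raw: bytes) -> bytes:
    trimmed = raw.rstrip(b"\x00")        # drop trailing null padding
    return zlib.compress(trimmed, 6)     # example codec; paper names none

block = b"customer-order-records" + b"\x00" * 4000   # sparse block with padding
stored = size_block(block)
print(len(block), "->", len(stored), "bytes stored on the virtual server")
```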

        CAPACITY ISSUE HANDLING:

Capacity-issue handling focuses on preventing duplicate copies of data from being stored in the virtual server. If the server holds a data item A in the virtual image and client 1 updates A while client 2 simultaneously attempts to update the same item, a unique key is generated and sent to one client based on its log time and request time, so that only one update can be applied to a given data item at a time.
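A minimal sketch of this key-based arbitration is shown below, assuming a hypothetical UpdateArbiter that grants a single update key per data item and refuses concurrent requests; the tie-breaking by log/request time is simplified here to first-come-first-served.

```python
# Hedged sketch of the duplicate-update guard described above: only one client
# at a time receives the update key for a given data item. Names, item ids and
# timestamps are illustrative, not taken from the paper.
import uuid

class UpdateArbiter:
    def __init__(self):
        self.active_keys = {}    # item id -> (key, holder, request_time)

    def request_key(self, item_id, client_id, request_time):
        if item_id in self.active_keys:
            return None                     # another client already holds the key
        key = uuid.uuid4().hex
        self.active_keys[item_id] = (key, client_id, request_time)
        return key                          # this client may update item_id

    def release(self, item_id, key):
        if self.active_keys.get(item_id, (None,))[0] == key:
            del self.active_keys[item_id]

arbiter = UpdateArbiter()
k1 = arbiter.request_key("data-A", "client-1", 100)   # granted
k2 = arbiter.request_key("data-A", "client-2", 101)   # refused while k1 is held
print(bool(k1), bool(k2))
arbiter.release("data-A", k1)
```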

        INCREMENTAL BACKUP:

Incremental backup is the simplest backup technique: it propagates only the changes made to the system in the recent past, and only the latest changes are pushed to the server at regular intervals. This reduces backup time compared with loading the entire content to the virtual server. RSYNC restores only the part of the content where the latest update was made by calculating a checksum for each block of data. During the backup process, KEMP compares the client system's checksums with the checksums already present in the virtual image; if the checksums are equal, no change is recorded for that block, and if they do not match, the changed bytes are moved from the client system to the server image.
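A minimal sketch of this checksum comparison, assuming fixed-size blocks and MD5 digests (the paper specifies neither), is given below; it illustrates the idea rather than reproducing the rsync or KEMP implementation.

```python
# Sketch of checksum-based incremental backup: hash each fixed-size block of
# the client file, compare with the checksums stored in the virtual image,
# and transfer only the blocks whose checksums differ.
import hashlib

BLOCK_SIZE = 4096

def block_checksums(data: bytes):
    return [hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def incremental_backup(client_data: bytes, server_checksums):
    """Return the (index, block) pairs that actually need to be transferred."""
    changed = []
    for i, csum in enumerate(block_checksums(client_data)):
        if i >= len(server_checksums) or csum != server_checksums[i]:
            changed.append((i, client_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]))
    return changed

old = b"A" * 8192
new = b"A" * 4096 + b"B" * 4096          # only the second block changed
print([i for i, _ in incremental_backup(new, block_checksums(old))])  # -> [1]
```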

        LOAD BALANCING:

Load balancing manages the traffic in the network using a weighted round-robin mechanism: the virtual server handles each request according to the weight assigned to it, and the weight precedence is calculated in advance by the server with respect to server capacity. If multiple requests are issued by clients simultaneously, the request with higher priority is served first; lower-priority requests wait for a specific time and are then served. A KEMP load balancer is used to balance the network load handed over to the virtual server; it serves requests based on the priority configured by the server.

        Algorithm for load balancing
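A minimal sketch of weighted round-robin dispatch as described above is given below; the server names and weights are hypothetical, and in practice the weights would be derived from server capacity as noted above. This is not the KEMP appliance's internal algorithm.

```python
# Sketch of weighted round-robin dispatch: servers with larger weights receive
# proportionally more requests from the rotation.
from itertools import cycle

def build_schedule(servers):
    """servers: dict of server name -> integer weight (capacity share)."""
    slots = []
    for name, weight in servers.items():
        slots.extend([name] * weight)     # heavier servers get more turns
    return cycle(slots)

schedule = build_schedule({"vserver-1": 3, "vserver-2": 1})
for req in ["req-%d" % i for i in range(8)]:
    target = next(schedule)
    print(req, "->", target)              # vserver-1 receives 3 of every 4 requests
```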

4. RELATED WORK

Research on transparent computing describes procedures for storing and manipulating user applications, programs and operating systems from a centralized server [1]. Virtualization decreases maintenance cost, meets demand, and maintains integrity and security; it hides the complexity of data and utilizes resources efficiently. Client, server and storage are the three kinds of virtualization, and a virtual system can gain access to physical resources through what is termed resource pooling; VM migration is the concept used for load balancing, transferring a VM to another machine [6]. Scalability is the main factor that has to be maintained in a server for efficient performance. A server can be viewed as a bucket, and certain keys are used for finding the appropriate records in the database; the scheme follows a client-redirection strategy in which, once a client request reaches the server, requests are handled depending on the size of the bucket. It makes use of a reference counter for checking the bucket size, and the replicas from the bucket are maintained carefully [2]. Other systems service requests based on priority: certain time intervals are maintained and control is transferred from one process to another based on deadlines [3]. To balance workload, a unique ID or priority status is set up to differentiate service requests, which are then served by the virtual server; work is directed to the server with the fewest pending requests, and load migration shares data between servers by transferring their respective processing demands [4]. Backup is the process of preserving data for future use; incremental backup is much faster than a full backup because only partial data is transferred, and certain parameters are used to check the consistency of the data already present in the virtual server [5].

5. CONCLUSION

We have developed a virtual server for a corporate-level work environment. The virtual server comprises the software and other necessary resources that can be utilized by clients inside the protected network. The proposed system is designed to handle load balancing and to back up updated data to the virtual server: the algorithm designed for incremental backup propagates only the changes or updates made within a specific period of time, while KEMP compression reduces the original size of the data and enables the server to accommodate as much data as possible. The system enhances the capacity of the server and avoids replication of data by generating a specific key, making it a more reliable kind of virtual server. Proper credentials are set up at the user level to protect the system from security issues.

REFERENCES

  1. Yaoxue Zhang and Yuezhi Zhou, "Transparent Computing: Spatio-Temporal Extension on von Neumann Architecture," Tsinghua Science and Technology, vol. 18, no. 1, pp. 10-21, February 2013.

  2. Grzegorz Łukawski and Krzysztof Sapiecha, "Balancing Workloads of Servers Maintaining Scalable Distributed Data Structures," in Proc. 19th International Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2011.

  3. G. Teodoro, T. Tavares, B. Coutinho, W. Meira Jr. and D. Guedes, "Load Balancing on Stateful Clustered Web Servers."

  4. Pedro Mejía-Alvarez, Rami Melhem, Daniel Mossé and Hakan Aydin, "An Incremental Server for Scheduling Overloaded Real-Time Systems," 2013 27th International Conference on Advanced Information Networking and Applications Workshops.

  5. Shih-Yu Lu, "Encrypted Incremental Backup without Server-Side Software."

  6. Durairaj M. and Kannan P., "A Study on Virtualization Techniques and Challenges in Cloud Computing," International Journal of Scientific & Technology Research, vol. 3, no. 11, November 2014.
