Decentralized Data Offloading in High Performance Computing Centers using Scratch Space
P. Nishmi Irin1, K. John Peter2, I. Nancy Jeba Jingle3
1P. Nishmi Irin, Computer Science, Vins Christian College of Engineering, Anna University, Nagercoil, India.
2K. John Peter, Information Technology, Vins Christian College of Engineering, Anna University, Nagercoil, India.
3I. Nancy Jeba Jingle, Computer Science, Vins Christian College of Engineering, Anna University, Nagercoil, India.
Manuscript received on July 01, 2012. | Revised Manuscript received on July 04, 2012. | Manuscript published on July 05, 2012. | PP: 302-305 | Volume-2, Issue-3, July 2012. | Retrieval Number: C0757062312/2012 ©BEIESP
© The Authors. Published By: Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: This paper addresses the issues involved in providing a decentralized data-offloading service for High Performance Computing (HPC) centers, which use parallel processing to run advanced applications reliably and efficiently. The central idea is to offload data from an HPC center to a destination site in a decentralized manner, so that the end user can retrieve the data even after the center's session ends. This is achieved by first moving the data from the center to scratch space; from scratch space the data is staged through intermediate storage nodes 1..n, and from the nth node it is transferred to the destination site within a deadline. These techniques are implemented within a production job scheduler, and BitTorrent is used for data transfer in the decentralized environment. As a result, total offload times are minimized, and data loss and offload delays are prevented.
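The staged offload path described in the abstract (center → scratch space → intermediate nodes 1..n → destination, bounded by a deadline) can be illustrated with a minimal sketch. All names, the fixed per-hop bandwidth model, and the function signature below are illustrative assumptions, not the paper's implementation; the actual system relies on a production job scheduler and BitTorrent's piece-level parallelism, which this simple hop-by-hop model does not capture.

```python
def offload(data_size_mb: float, n_nodes: int, bandwidth_mbps: float, deadline_s: float) -> float:
    """Sketch of a deadline-bounded staged offload: the data moves from the
    HPC center to scratch space, through intermediate nodes 1..n, and on to
    the destination site, aborting if the cumulative time exceeds the deadline.
    All parameters and node names are hypothetical, for illustration only."""
    path = ["center", "scratch"] + [f"node-{i}" for i in range(1, n_nodes + 1)] + ["destination"]
    elapsed = 0.0
    for src, dst in zip(path, path[1:]):
        # Simplified transfer-time model: size (megabits) / link rate (Mbps).
        hop_time = data_size_mb * 8 / bandwidth_mbps
        elapsed += hop_time
        if elapsed > deadline_s:
            raise TimeoutError(f"deadline missed on hop {src} -> {dst}")
        print(f"{src} -> {dst}: {hop_time:.1f}s (total {elapsed:.1f}s)")
    return elapsed

# Example: offload 1 GB through 3 intermediate nodes over 800 Mbps links
# with a 60-second deadline (5 hops of ~10.2 s each, so the deadline holds).
offload(1024, n_nodes=3, bandwidth_mbps=800, deadline_s=60)
```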
Keywords: High Performance Data Management, HPC Center Serviceability.