  1. #1
    Just Joined!
    Join Date
    Mar 2010
    Posts
    2

    Copy files from a large partition


    Hi,

    Our application server uses clustered GFS, and we want to upgrade to GFS2 because our backup vendor no longer supports GFS.

    We have an issue with this server: it has a large partition that stores attachments such as PDF, video, and document files.

    The partition is about 20TB, and we need to migrate all of its folders and sub-folders to a new SAN.

    What is the best solution? The current large partition lives on SAN storage.

  2. #2
    Trusted Penguin Irithori's Avatar
    Join Date
    May 2009
    Location
    Munich
    Posts
    3,392
    Use rsync.
    But for 20TByte this will take a while.
    Can you afford to take the app server out of service while rsync is running?
    If yes: that simplifies the procedure to a single rsync run.
    If no: then you will need multiple rsync runs to catch the latest changes. These are not cheap, because building two file lists over 20TByte takes time.
    An additional lsyncd setup may help mitigate the impact by replicating file changes as they happen.
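    A minimal sketch of the multi-pass approach, assuming both the old GFS mount and the new GFS2 mount are visible on one node as /mnt/gfs and /mnt/gfs2 (hypothetical paths):
    Code:
    # First pass while the service is still running; moves the bulk of the 20TByte.
    # -a preserves permissions, owners, timestamps and symlinks; -H keeps hardlinks.
    rsync -aH /mnt/gfs/ /mnt/gfs2/

    # Repeat as needed; each pass only transfers files changed since the previous one.
    rsync -aH /mnt/gfs/ /mnt/gfs2/

    # Final pass in the maintenance window, with the application stopped.
    # --delete removes files on the target that were deleted on the source meanwhile.
    rsync -aH --delete /mnt/gfs/ /mnt/gfs2/
    Note the trailing slash on the source: it copies the contents of /mnt/gfs rather than the directory itself.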
    You must always face the curtain with a bow.

  3. #3
    Just Joined!
    Join Date
    Jan 2007
    Posts
    9
    I already use rsync, and it takes a long time. We cannot afford more than 4 hours of downtime. I am looking for a method other than rsync; a plain sync approach is not efficient enough.

  4. #4
    Trusted Penguin Irithori's Avatar
    Join Date
    May 2009
    Location
    Munich
    Posts
    3,392
    The use case at hand is to copy between different filesystems.
    So one cannot use block-device or SAN-to-SAN copy mechanisms; it has to be a file copy.

    And for that, rsync is the most efficient tool.
    zabidin2 wants to copy ca. 20TByte, and yes, this will take long.


    Edit:
    Or let's put it another way:
    On an offline service, one could use cp instead of rsync.
    This would save the time of building two file lists.
    But:
    a) cp is only local, so one would need to mount both SANs and filesystems on one machine (or use a network filesystem in between).
    b) Should *anything* happen during the cp, you have no choice but to start over to guarantee a 100% copy, whereas an interrupted rsync can simply be re-run.
    c) rsync can be rate-limited, so it can run in the background at reduced speed, allowing the service to stay online (see the sketch below).
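    To illustrate point c), a sketch of a throttled run; the 50000 KByte/s limit and the paths are assumptions, tune them to the I/O headroom your service needs:
    Code:
    # --bwlimit caps transfer speed (in KBytes/s) so the live service keeps headroom.
    # ionice class 3 (idle) additionally yields disk access to the application.
    ionice -c3 rsync -aH --bwlimit=50000 /mnt/gfs/ /mnt/gfs2/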
    Last edited by Irithori; 11-09-2012 at 05:45 AM.
    You must always face the curtain with a bow.

  5. #5
    Just Joined!
    Join Date
    Sep 2012
    Location
    India
    Posts
    29
    Hi,

    I hope you have compressed the 20TB of files.

    One important advantage of rsync is that it works over the network, and if the connection drops at any point, it resumes from the file where it broke instead of copying the entire 20TB list again.

    Using rsync you can also preserve permissions as well as SELinux contexts and ACLs.

    As Irithori said, it will take time, but rsync is better than cp or scp here.
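    A sketch of such an invocation, again assuming local mounts at /mnt/gfs and /mnt/gfs2; -A copies POSIX ACLs and -X copies extended attributes, which is where SELinux contexts are stored:
    Code:
    # -a = archive mode (permissions, owners, timestamps, symlinks)
    # -A = preserve ACLs, -X = preserve extended attributes (incl. SELinux labels)
    rsync -aAX /mnt/gfs/ /mnt/gfs2/
    If a run is interrupted, re-running the same command skips files that already match on the target.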

    Best Wishes,
    Warm Regards.
