  1. #11
    Trusted Penguin

    Quote Originally Posted by ganitolngyundre View Post
    Thank you so much Atreyu,

    I've learned a lot from your inputs. I just modified the script you gave: instead of tar I used rsync, so as to have an incremental backup.
    Code:
    cd /backup
    users=$(find . -maxdepth 1 -type d -name 'user*')
    for user in $users; do
      rsync -avz ${user} /backup2;
    done
    this will give a directory structure like:
    /backup2/user1
    /backup2/user2


    This solved my problem of backing up new/changed files. However, the backup directories in /backup2 are not compressed yet. Is there a way to compress all the backup directories using rsync?

    Correct me if I'm wrong: as you can see, I used the -z option with rsync, which is said to be for compression. But I think it only compresses the data while the rsync transfer is taking place, not after rsync is done.

    Can you kindly give me your input, feedback, and expert suggestions on this one?

    I really appreciate what you did for me. Thanks

    J
    rsync was initially what I was thinking of, too, but it doesn't compress into a single file like you wanted. But you got there yourself. Yeah, the -z option will just compress the data en route to the destination machine. To compress the user data you've copied, you'll have to add a tar command to your script, e.g.:

    Code:
      rsync -avz ${user} /backup2;
      tar -zcf /backup2/${user}.tar.gz /backup2/${user}
    you can add --remove-files to the tar command, too, if you want to remove the duplicate files.

    Come to think of it, you could probably have the rsync command write to STDOUT and then pipe that to a file, but IMO it is better to have them as separate commands.
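    Something like this, assuming the /backup and /backup2 paths from your script (an untested sketch, with --remove-files doing the cleanup):

    Code:
    cd /backup
    for user in $(find . -maxdepth 1 -type d -name 'user*'); do
      # copy new/changed files (-z dropped: it only compresses data in transit)
      rsync -av ${user} /backup2
      # compress the synced copy, then drop the uncompressed duplicate
      tar --remove-files -zcf /backup2/${user}.tar.gz -C /backup2 ${user}
    done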

  2. #12
    Just Joined!
    Thanks Atreyu. I'm now one step closer to what I'm aiming for (which is a backup to Amazon S3). I know it's a hassle for you to teach a newbie like me. Right now I'm writing another script, similar to the first one, but this time it uses tar instead of rsync; then I'll try to add a split command for files larger than 5GB. I have the whole idea in my mind, all I need is to write it down; it's just that I'm lost with the commands. If you have suggestions, will you please share them with me again? Here's my script:

    Code:
    #!/bin/bash
    
    # The backup directory
    cd /to/backup2
    
    # find subdir of each users
    users=$(find . -maxdepth 1 -type d -name '*user')
    size=$(du -h '*.tar.gz')
    file=$(find . -maxdepth 1 -type f -name '*.tar.gz')
    # shows date
    date=`date '+%d%m%Y'`
    
    for user in $users; do tar zcpf ${user}$date.tar.gz $user; done
    rm -rf *user
    
    if [ $size gt 5GB ]; then
            split -b 4g ${file}
    fi
    It doesn't work. I don't know if I'm doing it right, but all I want is this: if a .tar.gz file is greater than 5GB, split it; if it isn't, leave it unchanged. I need your expert opinion on this. I'm open to improvements.

    Again, thank you and God bless.

  3. #13
    Trusted Penguin
    Quote Originally Posted by ganitolngyundre View Post
    Thanks Atreyu. I'm now one step closer to what I'm aiming for (which is a backup to Amazon S3). ... It doesn't work. I don't know if I'm doing it right, but all I want is this: if a .tar.gz file is greater than 5GB, split it; if it isn't, leave it unchanged. ...
    you are close, just do everything in the loop. e.g., try something like this:

    Code:
    #!/bin/bash
    
    # The backup directory
    cd /to/backup2
    
    # find subdir of each users
    users=$(find . -maxdepth 1 -type d -name '*user')
    #size=$(du -h '*.tar.gz')
    #file=$(find . -maxdepth 1 -type f -name '*.tar.gz')
    # shows date
    date=`date '+%d%m%Y'`
    
    maxGB=5
    
    # convert GB to bytes
    maxBytes=$(( 5 * 1073741824 ))
    
    for user in $users; do
      tarball="${user}${date}.tar.gz"
      tar zcpf $tarball $user || exit 1
      rm -rf ./$user
      bytes=$(stat -c %s $tarball)
      if [ $bytes -gt $maxBytes ]; then
        split --verbose -b 4G $tarball ${user}-
      fi
    done
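
    One note on the split output: with the ${user}- prefix above, the pieces get split's default two-letter suffixes (for a directory named user1, something like ./user1-aa, ./user1-ab, ...), and they can be stitched back together with cat before extracting. A minimal sketch, assuming that user1 naming:

    Code:
    # reassemble the split pieces into one archive, then unpack it
    cat ./user1-* > ./user1-restored.tar.gz
    tar -zxf ./user1-restored.tar.gz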

  4. #14
    Just Joined!
    Hi Atreyu,

    I hope it's not too late to say thanks to you. I got it working, thanks to your help. I followed your script and just added to it. Please feel free to write comments and suggestions.

    Code:
    #!/bin/bash
    
    # The backup directory
    cd /to/backup2
    
    # find subdir of each users
    users=$(find . -maxdepth 1 -type d -name '*user')
    #size=$(du -h '*.tar.gz')
    #file=$(find . -maxdepth 1 -type f -name '*.tar.gz')
    # shows date
    date=`date '+%d%m%Y'`
    
    maxGB=5
    
    # convert GB to bytes
    maxBytes=$(( 5 * 1073741824 ))
    
    for user in $users; do
      tarball="${user}${date}.tar.gz"
      tar zcpf $tarball $user || exit 1
      rm -rf ./$user
      bytes=$(stat -c %s $tarball)
      if [ $bytes -gt $maxBytes ]; then
        split --verbose -b 4G $tarball ${user}-
      fi
    done
    # to amazons3
    s3=(find . -type -f -name '*.tar.gz')
    for i in $3; do s3cmd put $i s3://mybucket; done
    rm -rf *.tar.gz
    I added a part that puts all the .tar.gz files on Amazon S3, then removes the .tar.gz files from my backup path.

    Again, thank you very much. Long live.

  5. #15
    Trusted Penguin
    Quote Originally Posted by ganitolngyundre View Post
    I hope it's not too late to say thanks to you. I got it working, thanks to your help. I followed your script and just added to it. Please feel free to write comments and suggestions.
    No, I'm still here, not banned for life yet... and you're welcome! I do have a couple of suggestions...

    Code:
    maxGB=5
    
    # convert GB to bytes
    maxBytes=$(( 5 * 1073741824 ))
    This was my bad. As long as we are defining maxGB, we should use it!
    Code:
    maxBytes=$(( $maxGB * 1073741824 ))
    In this next portion, the s3 assignment will not work: for command substitution you need to either use backticks:
    `command`

    or dollar sign parenthesis:
    $(command)

    so change this:
    Code:
    # to amazons3
    s3=(find . -type -f -name '*.tar.gz')
    to this:
    Code:
    s3=$(find . -type f -name '*.tar.gz')
    (Note: I also dropped the stray dash in -type -f; find expects -type f.) And in the loop, you are not calling the variable by the right name. Change this:
    Code:
    for i in $3; do s3cmd put $i s3://mybucket; done
    to this:
    Code:
    for i in $s3; do s3cmd put $i s3://mybucket; done
    that's pretty much it!
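
    Putting those fixes together, the whole script would look something like this (same /to/backup2 path and mybucket bucket as above; still a sketch, so test it on throwaway data first):

    Code:
    #!/bin/bash

    # The backup directory
    cd /to/backup2 || exit 1

    # find the subdir of each user
    users=$(find . -maxdepth 1 -type d -name '*user')
    # date stamp for the archive names
    date=$(date '+%d%m%Y')

    maxGB=5
    # convert GB to bytes (1 GB = 1024^3 bytes)
    maxBytes=$(( $maxGB * 1073741824 ))

    for user in $users; do
      tarball="${user}${date}.tar.gz"
      tar zcpf $tarball $user || exit 1
      rm -rf ./$user
      bytes=$(stat -c %s $tarball)
      if [ $bytes -gt $maxBytes ]; then
        split --verbose -b 4G $tarball ${user}-
      fi
    done

    # upload the archives to Amazon S3, then remove the local copies
    s3=$(find . -type f -name '*.tar.gz')
    for i in $s3; do s3cmd put $i s3://mybucket; done
    rm -rf *.tar.gz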
