  1. #1

    I want to back up my files before I edit them


    I am very careful about what I do, so I want to back up my config files every time before I edit them with vi. I am a Linux admin.

    I copy the file to the same name with .old added to the end. Then I move (mv command) the .old copy to an "old" directory.

    The problem is that it may overwrite, or fail to write, the file when a duplicate name already exists there.

    I do not want to cp and then delete the file, for fear of losing the change.

    Is there a way I can mv the file and have it take care of duplicate file names for me (like httpd.conf.old)?

    I use recent versions of Red Hat Linux.

  2. #2
    Linux User peteh's Avatar
    Join Date
    Oct 2006
    Location
    UK
    Posts
    432
    A short bash script instead of the move command would work. You would pass the existing filename, the date (say, mmdd) and the new folder path as parameters.
    Your script would see the existing name as $1 and build the new name from $3 + $1 + $2. If you open the terminal from within the appropriate folder, $1 would pick up the existing file without needing the full path.

    Or you could just copy the file, inserting the four-digit date before the '.old', then mv it. (Sorry, I'd missed the part where you copied the file first.)
    Last edited by peteh; 12-11-2016 at 01:44 PM. Reason: to add the simple copy/move

  3. #3
    Linux Guru Segfault's Avatar
    Join Date
    Jun 2008
    Location
    Acadiana
    Posts
    2,185
    You could use vim; it has a built-in backup feature.
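    For example, a few lines in ~/.vimrc turn that on (the backup directory is just a suggestion, and it must exist before vim can use it):

```vim
" keep a copy of each file as it was before the last write
set backup
" use .old instead of the default ~ suffix
set backupext=.old
" keep the backups out of the working directory (create this dir first)
set backupdir=~/.vim/backup//
```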

  5. #4
    The easiest way is to change how you name the backup so it carries a timestamp, e.g. from:

    cp /etc/fstab /etc/fstab.old

    to

    cp /etc/fstab /etc/fstab.old.$(date +"%Y-%m-%dT%H%M%S%:z")

  6. #5
    Trusted Penguin Irithori's Avatar
    Join Date
    May 2009
    Location
    Munich
    Posts
    4,031
    a) Don't directly edit config files on a server.
    Editing directly on the server leads to unique snowflakes and missing documentation.

    b) Let an automation tool of your choice (puppet/chef/ansible) execute *all* changes.

    c) Manage the manifests/cookbooks/playbooks via git.
    Git is the de facto standard for versioned text files.
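    As a sketch of what b) and c) can look like with ansible (file names, paths and host group are illustrative, not from the thread):

```yaml
# Hypothetical playbook, kept in git: the change is reviewable and repeatable
- hosts: webservers
  become: true
  tasks:
    - name: Deploy httpd.conf from a git-managed template
      template:
        src: httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
        backup: true   # ansible saves a timestamped copy before overwriting
      notify: reload httpd
  handlers:
    - name: reload httpd
      service:
        name: httpd
        state: reloaded
```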
    You must always face the curtain with a bow.

  7. #6
    Linux Guru budman7's Avatar
    Join Date
    Oct 2004
    Location
    Knee deep in Grand Rapids, Michigan
    Posts
    3,901
    Deleted post
    If you want to learn more about linux take a linux journey
    https://linuxjourney.com/
    Use CODE tags when posting output of commands. Thank you.

  8. #7
    Quote Originally Posted by Irithori View Post
    a) Don't directly edit config files on a server.
    Editing directly on the server leads to unique snowflakes and missing documentation.

    b) Let an automation tool of your choice (puppet/chef/ansible) execute *all* changes.

    c) Manage the manifests/cookbooks/playbooks via git.
    Git is the de facto standard for versioned text files.
    Do you have an example of how this would work? I am having a major issue with vfs_snapper which I am trying to unravel; some of it seems to have come from careless modification of the snapper configs.

    Sent from my SM-N920C using Tapatalk

  9. #8
    mv -i asks for confirmation before an overwrite.

    mv -b automatically creates a backup of the destination file if it would be overwritten.

    mv --backup=CONTROL lets you control how those backups are made (e.g. numbered).

    mv -n simply refuses to overwrite an existing file (note that it skips silently rather than failing).
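    For example, with numbered backups, repeated moves onto the same name never clobber anything (scratch directory and file names are just for the demo):

```shell
cd "$(mktemp -d)"            # scratch directory for the demo
echo "first edit"  > work.conf
mv --backup=numbered work.conf httpd.conf.old   # no clash yet
echo "second edit" > work.conf
mv --backup=numbered work.conf httpd.conf.old   # old copy -> httpd.conf.old.~1~
ls                           # httpd.conf.old  httpd.conf.old.~1~
```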

    Am I missing something, or is everyone just making this more complicated than it should be?

    Sent from my SAMSUNG-SM-G925A using Tapatalk

  10. #9
    Trusted Penguin Irithori's Avatar
    Join Date
    May 2009
    Location
    Munich
    Posts
    4,031
    You are missing something

    Even if there was only one machine:
    Let's look at an example use case: "Add a new virtualhost to a webserver"

    - So you edit DNS and reload it (there is only one machine, so DNS is on it)
    - You create a document root and set permissions
    - You copy content to DocRoot and set permissions
    - You add the virtualhost to the webserver and reload it

    These changes belong together; they form a changeset, because they only make sense in combination.

    Additionally, you want to know who, why and when a changeset was done.
    You also want to have not only one backup, but all changes available.

    Now scale that to 10 machines. Or 100, or 1,000.
    Not only can git track the what, who, when and why, but puppet/chef/ansible will execute the very same change on any number of machines.
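    The git side of that, as a quick sketch (the repo contents here are illustrative): every changeset is a commit, so the who/when/why is recorded for free:

```shell
cd "$(mktemp -d)" && git init -q
git config user.name  "admin"
git config user.email "admin@example.com"
echo "ServerName www1.example.com" > httpd.conf
git add httpd.conf
git commit -qm "Add virtualhost www1"
echo "ServerName www2.example.com" > httpd.conf
git commit -qam "Move site to www2"
git log --format='%h %an %ad %s' -- httpd.conf   # full history of the file
```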

    I agree that the initial learning curve can be high, but it is well worth the effort.
    You must always face the curtain with a bow.

  11. #10
    Quote Originally Posted by Irithori View Post
    You are missing something

    Even if there was only one machine:
    Let's look at an example use case: "Add a new virtualhost to a webserver"

    - So you edit DNS and reload it (there is only one machine, so DNS is on it)
    - You create a document root and set permissions
    - You copy content to DocRoot and set permissions
    - You add the virtualhost to the webserver and reload it

    These changes belong together; they form a changeset, because they only make sense in combination.

    Additionally, you want to know who, why and when a changeset was done.
    You also want to have not only one backup, but all changes available.

    Now scale that to 10 machines. Or 100, or 1,000.
    Not only can git track the what, who, when and why, but puppet/chef/ansible will execute the very same change on any number of machines.

    I agree that the initial learning curve can be high, but it is well worth the effort.
    Very belated response, lol, I know, but I'm 100% in agreement with that, although vSphere (along with the clients installed on every VM) takes care of that for me. Albeit not at a discount, when they charge per CPU and count logical processors as CPUs, which really forces you to put unnecessary thought into the CPU config of your servers: how many VMs exactly will we need; let's weigh the performance cost of less parallelism against more speed per thread; should we get a few K-series nVidia Teslas for $48,000 since they're license free; and so on. Since 90% of the workloads run either asynchronously and/or perfectly multi-threaded, normally I'd opt for maximum logical cores, to allow more flexibility in allocating logical cores to VMs. On the other hand, I've literally resorted to X99 and X299 hexacore Core-series processors mounted in Supermicro rack units with HT turned off for any 32-bit applications, since the free license allows 1 logical processor per VM, and there isn't a Xeon on the market with virtualization extensions that can comfortably run a VM off a single one of its logical cores. I mean, one of my R730s has 72 logical processors; try running anything on one of those and it's maxed NUMA CPU usage 100% of the time... I was thinking of just saying fck it and going with the free Linux hypervisor from here on out, but I don't want a heterogeneous environment.

    Sorry about the rant- I just have kinda grown to hate VMware and their GPL violating asses.

    Back to the topic at hand. You're definitely correct in that use case, my answer was just answering exactly as the question was phrased.

    Sent from my SM-N960U using Tapatalk
