Thread: restoring a bad logical volume
- Join Date
- Sep 2010
restoring a bad logical volume
The server runs on an LVM volume spanned across two hard drives, and one of those drives is failing. I need to move the LVM onto a single drive while retaining the email and file server settings. To complicate things, I am 1000 miles away from this server, so I need to keep things simple. I currently access the server through ssh.
I flew up to see the server and used ddrescue. On my first attempt I forgot to copy the separate boot partition, and on my second attempt the copy failed and I had to restore the Linux partitions to get the machine booting again. I ran ddrescue from a live Linux distro, and I think it would work, but I'm worried that it could fail again and kill the server.
If I backed up the /etc directory and saved the files, could I copy that /etc onto a fresh install?
I could also make a snapshot of the server, but there are only 100 megs of space to play with. Is that enough for a ddrescue run while the server is inactive?
What other methods would you all suggest for backing up and restoring this server before the inevitable happens?
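A /etc backup is quick to take over ssh before anything else. A minimal sketch, run as root on the server; the destination path and the "backup-host" name are examples, not anything from this thread:

```shell
# Create a dated, compressed copy of /etc (run as root).
tar czpf "/root/etc-backup-$(date +%F).tar.gz" /etc

# Copy it off the failing machine; user@backup-host is a placeholder.
scp /root/etc-backup-*.tar.gz user@backup-host:/backups/
```

Keep in mind /etc alone won't capture mail spools or user data; those typically live under /var/spool and /home.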
- Join Date
- Dec 2007
Backing up hard drive.
I would not recommend saving your data on the same hard drive that you want to replicate.
This is just my opinion, but perhaps you could acquire an external USB docking station and plug 1TB or larger internal hard drives into it. These internal hard drives are rugged and now cheap to purchase.
Just partition and format these hard drives using GParted in Linux. Make sure that you set the bootable flag on the drive that you want to boot when the system is restarted. You can then use a bit copier; it will replicate the old hard drive, bit for bit.
You can then install the hard drive(s) into the server box. The external USB docking station can be used as part of the hard drive team, if you so desire. It's an easy way to expand your storage capacity. Want a deal on internal hard drives? Keep checking TigerDirect.com for a sale.
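For the bit-for-bit copy, GNU ddrescue is usually a better choice than plain dd on a failing disk, because it skips unreadable areas on a first pass and records them in a mapfile for later retries. A sketch, with /dev/sda as the failing disk and /dev/sdb as the new one (example device names; verify with lsblk first, since the copy overwrites the target):

```shell
# Pass 1: grab everything readable quickly, skipping bad areas (-n).
# -f is required when the target is a block device.
ddrescue -f -n /dev/sda /dev/sdb /root/rescue.map

# Pass 2: go back and retry just the bad areas up to three times.
ddrescue -f -r3 /dev/sda /dev/sdb /root/rescue.map
```

The mapfile also lets an interrupted run resume where it left off, which matters on a remote box.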
First of all, yes, you could copy /etc to a fresh install, but it might not work exactly like you expect because of various dependencies throughout the system.
I might tend to disagree with onederer somewhat as I have had mixed results doing bit-level copies of failing storage devices. His approach might be simpler, but you might consider the following.
1) Perform a fresh install to a new disk using the same CentOS release as is currently installed.
2) Mount the partitions from the old LVM disk (you may have to use a live disk such as GParted or Clonezilla, from which you have mounted the partitions of both disks).
3) Use rsync to copy the complete contents of the old disk to the new disk. Rsync will give you a file-level copy. Just make sure to exclude /boot and /etc/fstab from the copy.
4) Boot to the new disk. The system should be nearly identical.
Just keep in mind you are probably best off not mounting /boot from either the new disk or the old disk for the copy, or, at the very least, you should exclude it from the rsync operation. The same goes for /etc/fstab. I think everything else is fair game.
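The rsync step might look like the following, assuming the old system is mounted at /mnt/old and the fresh install at /mnt/new (example mount points):

```shell
# -aHAX preserves permissions, hard links, ACLs, and extended attributes;
# --numeric-ids avoids remapping UIDs/GIDs between the two systems.
# /boot and /etc/fstab are excluded, along with pseudo-filesystems.
rsync -aHAXv --numeric-ids \
      --exclude=/boot --exclude=/etc/fstab \
      --exclude=/proc --exclude=/sys --exclude=/dev \
      /mnt/old/ /mnt/new/
```

The trailing slashes matter: /mnt/old/ copies the contents of the old root into /mnt/new rather than nesting a directory inside it.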
I actually used a similar method to migrate some old servers to new hardware from several hundred miles away across the WAN at 10Mbps. It took a while, but one cool thing about rsync is you can easily copy only the diffs since the last copy.
After I posted this I had another thought. If you have the hardware, you could actually build the new server where you are, boot it to a live disk, mount the partitions of the new hard drive, and then rsync the old system to the new over the network. The remote server's upload speed would determine the amount of time required, and you might find that time unacceptable, not just because of the amount of data that might change during the initial copy, but also because of the pending hardware failure.
Last edited by nplusplus; 08-02-2011 at 01:32 AM. Reason: additional thoughts
- Join Date
- Jan 2005
- Saint Paul, MN
It is advised that you back up the drives before proceeding with any migration of the data!!! You are FOREWARNED!!
Can you add the new drive(s) (internal or via USB)?
Does the old drive(s) contain partitions outside the LVM?
Does the computer boot from the old drive(s)?
I will answer where raid (hardware or software) is not in the picture.
If the old drive is partitioned, then the replacement needs to be partitioned to contain those partitions that are not part of the LVM (often "/boot"), and the non-LVM data needs to be transferred via rsync, cpio, etc. If the new drive(s) are going to replace the "boot" device, then grub will need to be installed on the new drive(s).
As long as the new drive(s) can be added while the old drive(s) are present, the new drives can be added into the existing "volume group". Once added, LVM has commands that say you would like to move the data off of the old drive(s); the time this takes depends on the amount of data being moved. After the data has been moved, the old drive(s) can be removed from the "volume group". Once you have copied the other (non-LVM) data if needed, and installed grub on the new drive(s) if needed, the machine can be powered off, the old drive(s) removed, and the new drives put in their place.
Now if you took proper care with the non-LVM partitions and installed grub correctly, the machine should boot up using the new drive(s).
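The add-then-migrate sequence described above boils down to a handful of LVM commands. A sketch, assuming the volume group is named vg00, the failing physical volume is /dev/sdb1, and the new disk's partition is /dev/sdc1 (all example names; check yours with vgs and pvs first):

```shell
pvcreate /dev/sdc1        # label the new partition as an LVM physical volume
vgextend vg00 /dev/sdc1   # add it to the existing volume group
pvmove /dev/sdb1          # migrate all extents off the failing PV (can take hours)
vgreduce vg00 /dev/sdb1   # drop the old PV from the volume group
pvremove /dev/sdb1        # wipe the LVM label from the old disk

# If the new disk will also boot the system, install grub on it too
# (device name is an example):
grub-install /dev/sdc
```

pvmove can be interrupted and resumed, but on a disk that is already throwing read errors it is worth having a backup before starting.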
See the LVM Howto web page, Common Tasks, and see "Adding physical volumes to a volume group" and "Removing physical volumes from a volume group"
For recovery of raid and LVM see Recovery of RAID and LVM2 Volumes | Linux Journal
Nplusplus suggests a scheme that I've used many times, and it's workable, except that I generally include /boot, but preserve a copy of it (and of /etc/fstab) for troubleshooting/resolving problems after the rsync. There's a risk of ending up with kernels, etc, in /boot that are not compatible with the rest of the system if you don't include it.
- Join Date
- Sep 2010
To clear a few questions:
The server is NOT set up with software RAID; the server's creator actually lied about that.
I can access a terabyte usb drive at /dev/sdc
The server still boots, but it suffered a fatal error when a file used by the email server landed in a bad segment of the hard drive. I fixed it, but the hard drive is likely going to start failing sooner or later.
Bit-by-bit copying sounds pretty similar to my plan with ddrescue, and the last time I used ddrescue the system lost its partition table; restoring the partition table without physical access to the server sounds like a headache.
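To guard against losing the partition table again, it is cheap to dump it to a text file before any copy. A sketch, assuming an MBR disk at /dev/sda (example device name):

```shell
# Save the partition table as a restorable text dump.
sfdisk -d /dev/sda > /root/sda-table.dump

# If the table is ever lost, it can be written back with:
#   sfdisk /dev/sda < /root/sda-table.dump
```

Keeping a copy of this dump off the machine means the table can be recreated over ssh without a trip to the server.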
I will look into adding a drive to the LVM and removing the old drive, since that sounds like the easiest method from where I am, and then I'll look into the rsync copy as well to see which one would work best for me.
In the event of the backup failing, what other areas besides the data and /etc should be backed up?