230+ missed updates on CentOS Trixbox Server
I was considering cloning the drive with dd and updating the original, keeping the clone as a backup in case something goes wrong, but I would have to wait for an approved maintenance window to do this. Otherwise, I considered running the updates one at a time and reverting if there were issues.
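For reference, a dd-based clone looks roughly like this. The device names are hypothetical (confirm yours with lsblk or fdisk -l first, and run from a live CD so nothing is writing to the source disk); the demo below clones a scratch file rather than a real disk:

```shell
# On real hardware this would be: dd if=/dev/sda of=/dev/sdb
# (sda/sdb are placeholders -- double-check with lsblk!).
# Demonstrated here on a scratch file instead of a device.
dd if=/dev/urandom of=source.img bs=1M count=4 status=none   # stand-in for the source disk
dd if=source.img of=clone.img bs=64K conv=noerror,sync status=none  # byte-for-byte clone
cmp source.img clone.img && echo "clone verified"
```

conv=noerror,sync keeps dd going past read errors (padding bad blocks), which is what you want when imaging a possibly aging disk.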
If a downtime of 1h is catastrophic, then you need redundancy.
A server that isn't updated is an easier target for attacks, and is therefore more likely to be down/unavailable for that reason alone.
So, in your case I would request at least another server.
Then install the new server with the latest CentOS, install your Trixbox CE application,
and test it with fake data in parallel with your production installation.
If everything works as expected, concentrate on how to sync the real data
(what, how, and how long will it take?).
If that is sorted, lower the TTL of the applicable dns records.
Shut down the application, sync the data as planned, and point DNS at the new server (with a normal TTL, of course).
As not all DNS servers and/or clients honor TTLs, you may need to keep the old machine running and port-forward the relevant ports to the new box.
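A minimal sketch of that port-forward on the old box, assuming iptables is in use; 192.0.2.10 is a placeholder for the new server's address, and 5060/udp (SIP) is just an example of a port Trixbox would care about:

```shell
# Run as root on the OLD server. 192.0.2.10 and port 5060/udp are
# placeholders -- substitute the new box's address and the ports
# your installation actually uses.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -p udp --dport 5060 \
         -j DNAT --to-destination 192.0.2.10:5060
iptables -t nat -A POSTROUTING -p udp -d 192.0.2.10 --dport 5060 \
         -j MASQUERADE
```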
This is the hack-ish way of doing this.
A better solution would be to loadbalance the whole setup and put the two machines in a pool, where the pool members can be drained.
But the LB approach is a different topic.
This is indeed an excellent solution to our lack of redundancy. However, these updates need to be run as soon as possible. For the time being, I am in need of opinions on running this many updates at one time.
Thanks for the quick reply
CentOS/RedHat are designed for reliability.
That means they won't do feature or version updates (with exceptions) but rather only fix bugs, even if that means backporting.
So there is a chance that it might Just Work(tm) to run a yum update.
However, this is the risky road.
If your boss/business demands that the update needs to be done without a safety net, then clearly point out the risk:
There is a chance of a longer downtime.
True. As the system is currently stable, I think I will wait and clone the drive as a safety net. Even if the clone isn't needed, it would be nice to use it in a test machine for future implementations.
Thanks for your help.
I don't know how stateful Trixbox is, but in general a clone gets out of sync very quickly on most platforms, and can easily take more than your hour maintenance window. The way I'd approach it is to prime an rsync target from the running system. Then take a maintenance window, boot a live CD, sync up the rsync target with everything quiescent, which will take very little time because only the changed blocks will be transferred. Boot back up and run your update. Short of the update munging up the MBR and/or partition table, which is extremely unlikely, reversing the rsync would suffice to reverse the updates.
Yes, I have done this successfully more than once, even reversing Fedora version upgrades this way. But I've also run CentOS updates at roughly your scale without any problems. YMMV.
Interesting. I actually cloned the server's drive over the weekend and tested the updates on the clone. Everything ran fine without problems so I'm adding the second drive to a test machine for future testing.
Thanks for the advice.