  1. #1

    HDD Cleaning server


    I work a lot with server hardware and need to erase used/old HDDs regularly. So far I have used USB-bootable tools like MagicPart and so on.
    I was wondering if it's possible to configure a Linux system to automatically erase a disk by overwriting it with zeros a number of times.

    The server disks are hot-swappable, so I wondered if it could work by just inserting (hot-swapping) a disk, having the server erase all its data automatically, and having the monitor show which bays are finished.

    I hope you understand what I'm asking for; sorry if I didn't express myself clearly, English is not my first language.

  2. #2
    Linux Newbie sarlacii's Avatar
    Join Date
    May 2005
    South Africa
    Hi there

    As a project, I'm sure you could get something like that going, assuming you can then detect, via udev or some such, that a new drive is available.

    However, why not just get an HDD cloning device, for example a Maiwo docking station or similar, and wipe drives that way? You load in a template drive that has been blanked and filled with random data, and the docking station then clones it to all the other inserted drives. There are versions that support multiple drives.
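    For the detect-on-insert part, a udev rule can fire a wipe script whenever a new disk appears. A minimal sketch, not a tested setup -- the rule match, script name, and the /dev/sda guard are all assumptions to adapt to your bays:

```shell
# Rule sketch (e.g. /etc/udev/rules.d/99-wipe.rules):
#   ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
#     KERNEL=="sd[b-z]", RUN+="/usr/local/bin/wipe_bay.sh /dev/%k"

# wipe_bay.sh body: overwrite the given device (or file, for a dry run) with zeros.
wipe_target() {
    target="$1"
    [ "$target" = "/dev/sda" ] && return 1   # crude guard for the system disk
    # Size in bytes: blockdev for real devices, stat as a fallback for a test file.
    size=$(blockdev --getsize64 "$target" 2>/dev/null || stat -c %s "$target")
    dd if=/dev/zero of="$target" bs=1M \
       count=$(( (size + 1048575) / 1048576 )) conv=notrunc,fsync 2>/dev/null
}

# Dry run on a 2 MiB scratch file standing in for a hot-swapped disk:
dd if=/dev/urandom of=/tmp/fakedisk bs=1M count=2 2>/dev/null
wipe_target /tmp/fakedisk
```

    Note that udev kills long-running RUN programs, so in practice the rule should hand off to a systemd unit or a job queue rather than run dd directly.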

  3. #3
    Right now I have one server plus two MSAs attached to it to clean the drives; a docking station wouldn't do the trick (disk-count wise), as I regularly have 10-20 disks in the cleaning queue. I was hoping to automate the process so I don't have to launch the program and select the disks and the wipe patterns every time.

    But thank you for the answer.
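    Once each bay maps to a stable device name, a shell loop can run the wipes in parallel instead of selecting disks by hand. A sketch, demonstrated on scratch files -- on real hardware the list would be block devices like /dev/sd[b-u] (an assumption), and the command is destructive:

```shell
# Stand-ins for hot-swap bays; on real hardware these would be block devices.
for i in 1 2 3; do
    dd if=/dev/urandom of="/tmp/bay$i" bs=1M count=1 2>/dev/null
done

# Wipe every "bay" in parallel, one dd per disk, and report as each finishes.
for d in /tmp/bay1 /tmp/bay2 /tmp/bay3; do
    dd if=/dev/zero of="$d" bs=1M count=1 conv=notrunc,fsync 2>/dev/null &&
        echo "$d done" &
done
wait
```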

  4. #4
    Just Joined!
    Join Date
    Oct 2017
    dd if=/dev/zero of=/dev/sd?

    You could use /dev/random instead. Quite frankly, though, it's slow as molasses. If the drives are being trashed rather than reused, a giant electromagnet (aka a degausser) will zero the media much faster.
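    Spelled out with a couple of useful flags, and demonstrated on a scratch file since on a real disk the same command is destructive (sdX is a placeholder; double-check the name with lsblk first):

```shell
# For a real disk the command would simply be:
#   dd if=/dev/zero of=/dev/sdX bs=4M status=progress conv=fsync
# (no count needed -- dd stops at end-of-device).  Demo on a 16 MiB file:
dd if=/dev/urandom of=/tmp/demo-disk bs=4M count=4 2>/dev/null
dd if=/dev/zero    of=/tmp/demo-disk bs=4M count=4 conv=notrunc,fsync status=progress
```

    bs=4M cuts per-call overhead, status=progress shows live throughput, and conv=fsync flushes to the device before dd exits.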

  6. #5
    I need to reuse those disks; for the broken ones I have a degausser.

  7. #6
    Linux Engineer drl's Avatar
    Join Date
    Apr 2006
    Saint Paul, MN, USA / CentOS, Debian, Slackware, {Free, Open, Net}BSD, Solaris
    Quote Originally Posted by PELinux64 View Post
    dd if=/dev/zero of=/dev/sd?

    You could use /dev/random instead. Quite frankly though, it's slow as molasses ...
    I usually follow the advice:
            "The kernel random-number generator is designed to produce
            a small amount of high-quality seed material to seed a
            cryptographic pseudo-random number generator (CPRNG). It
            is designed for security, not speed, and is poorly suited
            to generating large amounts of random data. Users should be
            very economical in the amount of seed material that they
            read from /dev/urandom (and /dev/random); unnecessarily
            reading large quantities of data from this device will have
            a negative impact on other users of the device."
            -- excerpt from man urandom (Linux); Solaris has a similar note.
    If I needed a lot of truly random data generated locally, I would probably get something like:

    For dd, the block size can make a, err, sizeable timing difference; see, for example, the discussion at
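    The block-size effect is easy to measure directly. A rough comparison writing the same 64 MiB of zeros to a scratch file at three block sizes -- on a real disk the gap between bs=512 and bs=4M is much larger, since per-call overhead dominates:

```shell
# Same 64 MiB payload each time; dd's summary line reports the throughput.
for spec in "512 131072" "4K 16384" "4M 16"; do
    set -- $spec
    printf '%-4s ' "$1"
    dd if=/dev/zero of=/tmp/bs-test bs="$1" count="$2" conv=fsync 2>&1 | tail -n 1
done
```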

    Best wishes ... cheers, drl

  8. #7
    Just Joined!
    Join Date
    Oct 2017
    Yeah, security "experts" will warn about random-number-generator reseeding and a minimum number of overwrites. In my experience, though, playing with flux analyzers, it's virtually impossible to recover the data after just two passes (random, then zeros). I guess if you're protecting nuclear secrets from North Korea you might need more than that, but otherwise...

    Also, if the device size isn't evenly divisible by the block size, you'll get an out-of-space error from dd. Since dd's default block size equals the abstraction layer's logical block size of 512 bytes (not the physical block size, which is usually 4096 or more), omitting the parameter always works without error.

    I recommend people dd the PELinux64 img file with bs=4M, though.
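    That two-pass scheme (random, then zeros) is also what coreutils shred can do in one command: shred -n 1 -z /dev/sdX runs one random pass plus a final zero pass. The same idea spelled out with dd, demonstrated on a scratch file -- a real run would target the block device, destructively:

```shell
disk=/tmp/fake-bay          # stand-in; would be /dev/sdX on real hardware
truncate -s 8M "$disk"

# Pass 1: random data.  Pass 2: zeros, so the drive reads blank afterwards.
dd if=/dev/urandom of="$disk" bs=4M count=2 conv=notrunc,fsync 2>/dev/null
dd if=/dev/zero    of="$disk" bs=4M count=2 conv=notrunc,fsync 2>/dev/null
```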
