  1. #1
    Just Joined!
    Join Date
    Sep 2011
    Posts
    6

    Largest dd block size?


    Hi everyone!

    I'm trying to figure out the largest 'dd' block size I can use to wipe an HDD with random data.

    So let's say I have a 1TB HDD and I want to wipe 260GB of it with random data.

    I tried

    dd if=/dev/urandom of=/dev/sdd count=130 bs=2G

    but dd gave me the error

    dd: invalid number '2G'

    How do I go about finding the maximum block size dd will accept?
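
    For example, if 1M turns out to be the limit, I'm guessing the command would look something like this (taking 260GB to mean 260 × 1024 MiB, so 266240 one-megabyte blocks):

    Code:
    dd if=/dev/urandom of=/dev/sdd bs=1M count=266240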

    Thanks!

  2. #2
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,601
    Trying to write a buffer of 2GB at a time is not a good idea, as your kernel probably doesn't support it, and it won't give you any performance benefit in any case. Writing 1M at a time is a good maximum. You also don't need to specify the number of blocks to write. Finally, /dev/urandom is vulnerable to cryptographic attacks under certain conditions, especially in the way you are trying to use it. So, try this:
    Code:
    dd if=/dev/random of=/dev/sdd obs=2M
    If you are trying to "shred" the drive so that data is totally unrecoverable, do a 7-pass overwrite, alternating between /dev/zero and /dev/random as the input file. Recovery of any recognizable data will then be pretty much impossible with any current forensic tools.
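    A rough sketch of what that could look like as a shell loop, assuming /dev/sdd is still the target and that you want to cover the whole drive rather than just the first 260GB:
    Code:
    #!/bin/sh
    # Illustrative 7-pass overwrite, alternating /dev/zero and /dev/random.
    # Assumes /dev/sdd is the drive to destroy; each pass runs until the drive
    # is full, so dd ending with "No space left on device" is expected here.
    for pass in 1 2 3 4 5 6 7; do
        if [ $((pass % 2)) -eq 1 ]; then
            src=/dev/zero
        else
            src=/dev/random
        fi
        echo "Pass $pass: writing from $src"
        dd if="$src" of=/dev/sdd bs=1M
    done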
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  3. #3
    Just Joined!
    Join Date
    Sep 2011
    Posts
    6
    Thanks for the ideas!

    a) Since the kernel probably doesn't support 2GB reads/writes at a time, is there a way to determine the maximum number of bytes it can read/write at a time?
    b) Won't writing 1GB at a time be faster than writing 1M at a time?
    c) Is there a reason you used "obs=2M" instead of "bs=2M"?
    d) I know ibs is the number of bytes to read at a time and obs is the number of bytes to write at a time. So does this mean the command you gave would still read at the default 512 bytes at a time, but only write once 2M have been read?

    I don't quite understand where the speed-up comes from between these three:

    1) dd if=/dev/random of=/dev/sdd obs=2M
    2) dd if=/dev/urandom of=/dev/sdd bs=2G
    3) dd if=/dev/urandom of=/dev/sdd
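
    To check whether I'm reading the man page right, here is how I think those block-size options relate (corrections welcome):

    Code:
    # my understanding: bs=2M sets both the read and the write size to 2 MiB (it overrides ibs/obs)
    dd if=/dev/urandom of=/dev/sdd bs=2M
    # obs=2M alone keeps the default 512-byte reads but collects them into 2 MiB writes
    dd if=/dev/urandom of=/dev/sdd obs=2M
    # with no block-size options at all, dd reads and writes 512 bytes at a time
    dd if=/dev/urandom of=/dev/sdd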

    Thanks!

  4. #4
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,601
    You can use bs=2M if you want, but I'm not sure how much data can be read from /dev/random at a time. This is one of those situations where experimenting with different parameters will help you find the "sweet spot" on your system. As for whether writing 1G vs 1M at a time is faster, I seriously doubt it. Even at 1M you will be saturating the drive hardware's capability.
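    If you want to measure it on your own hardware rather than take my word for it, a quick timing loop like the sketch below will show where the throughput levels off. It uses /dev/zero as the source so the random-number generator isn't the bottleneck, and it assumes /dev/sdd is a drive you are about to wipe anyway, since each run overwrites the start of it.
    Code:
    #!/bin/sh
    # Write 1 GiB at several block sizes and compare the throughput dd reports
    # on its final status line. conv=fdatasync makes dd flush to the disk before
    # reporting, so the page cache doesn't inflate the numbers.
    # WARNING: overwrites the start of /dev/sdd on every run.
    for spec in "64K 16384" "512K 2048" "1M 1024" "4M 256" "16M 64"; do
        set -- $spec
        bs=$1
        count=$2
        echo "block size: $bs"
        dd if=/dev/zero of=/dev/sdd bs="$bs" count="$count" conv=fdatasync 2>&1 | tail -n 1
    done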
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
