
Wiping Hard drive

Hi All,

The standard way I wipe an HDD is to use

Code:

dd if=/dev/zero of=/dev/sdX
Many places say that this is not enough and that you need to use /dev/(u)random instead. This, however, increases the time to wipe the disk from 90 minutes to 900-1600 minutes (500 GB disk).
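
For reference, here is a minimal sketch of the full-random wipe those guides describe, assuming the target really is /dev/sdX and that you have GNU dd; the bs and status=progress options are only there to cut overhead and show throughput while it runs:

Code:

# full-random wipe of the whole device; bs=4M reduces syscall overhead and
# status=progress (GNU dd) prints throughput so you can estimate the total time
dd if=/dev/urandom of=/dev/sdX bs=4M status=progress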

Ignoring whether this is a better way of wiping data, etc. (many other threads are dedicated to this), I was wondering how the following would compare.

If I create a file by doing
Code:

cat /dev/urandom > file.dat
and then let this grow to, say, 1 GB, and then run:
Code:

for i in $(seq -w 0 600); do cp file.dat $i.dat; done
This then fills the drive with repeating 1 GB blocks of random data. How would the two methods compare? Would this do the job about as well?
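
For comparison, here is a rough sketch of the same repeating-block idea written straight to the raw device rather than as files on a mounted filesystem (the /dev/sdX name and the 1 GB block size are assumptions, adjust as needed):

Code:

# generate a single 1 GB block of random data
dd if=/dev/urandom of=block.dat bs=1M count=1024
# write that same block back-to-back across the device; the loop stops once
# dd can no longer write (i.e. it has hit the end of the device)
i=0
while dd if=block.dat of=/dev/sdX bs=1M seek=$((i * 1024)) conv=notrunc 2>/dev/null; do
    i=$((i + 1))
done

Writing to the raw device sidesteps the filesystem, so areas such as metadata and reserved blocks get overwritten too, which copying files onto a mounted partition would not cover.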

(I do not really understand HDD recovery in detail; my first thoughts are that you could spot a rough pattern and subtract it from the detected level of magnetism, leaving behind some trace levels of variation. But this sounds as crazy as the idea that you need 7 wipes to clean a drive.)
