I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
The `shred' process is running at 100% CPU, presumably computing the special random patterns for erasure. Since I have 4 CPUs, would creating 4 unformatted partitions on the drive and then running something like:

shred -vz /dev/sdd1
shred -vz /dev/sdd2
shred -vz /dev/sdd3
shred -vz /dev/sdd4

in parallel cut my time? Would it be just as secure?
Thanks Dean
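The parallel variant asked about above can be sketched like this. The four scratch files are hypothetical stand-ins for /dev/sdd1 through /dev/sdd4, so the sketch is safe to run as-is; substitute the real partition paths, as root, for an actual wipe.

```shell
# Create four small scratch files standing in for the four partitions.
for f in part1.img part2.img part3.img part4.img; do
    dd if=/dev/urandom of="$f" bs=64k count=4 2>/dev/null
done
# One shred per target, all in the background; -n 1 keeps the demo quick
# (drop it for the 25-pass default), -z adds the final zeroing pass.
for f in part1.img part2.img part3.img part4.img; do
    shred -z -n 1 "$f" &
done
wait   # block until all four shreds have finished
```

Each partition gets the same pass schedule as a whole-disk run, so the coverage is equivalent; whether it is actually faster depends on where the bottleneck lies.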
On 02/09/09 21:32, Dean S. Messing wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
Angle-Grinder or Belt-Sander on the platters. Works every time.
On 09/02/2009 04:32 PM, Dean S. Messing wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
The `shred' process is running at 100% CPU, presumably computing the special random patterns for erasure. Since I have 4 CPUs, would creating 4 unformatted partitions on the drive and then running something like:

shred -vz /dev/sdd1
shred -vz /dev/sdd2
shred -vz /dev/sdd3
shred -vz /dev/sdd4

in parallel cut my time? Would it be just as secure?
Thanks Dean
I strongly suspect that would actually be a lot slower since the drive would be doing _a lot_ of seeking.
Jeff
On Wednesday 02 September 2009 21:32:32 Dean S. Messing wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
I have always wondered about this: why not just do an rm -rf * on the drive, then put one big file on it (some DivX movie or such) and copy it over and over under different names until the drive space gets exhausted completely? This can easily be scripted, and I believe it would work as fast as possible for a drive of given capacity.
The idea is that drive has a limited capacity overall, so if you do this a couple of times, there will be no way of recovering any sensitive previously deleted data. A drive can remember only so many bits, eventually all storage space will get exhausted/overwritten...
Or am I missing something?
Best, :-) Marko
2009/9/2 Frank Murphy (Frankly3D) frankly3d@gmail.com:
On 02/09/09 21:32, Dean S. Messing wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
Angle-Grinder or Belt-Sander on the platters. Works every time.
Too slow and few opportunities for parallelisation.
Now, a .50 cal - that's a "secure" erase, and it scales to 17 erase operations in parallel:
http://www.crunchgear.com/2009/03/22/video-50-caliber-armor-piercing-round-v...
On Wed, Sep 02, 2009 at 13:32:32 -0700, "Dean S. Messing" deanm@sharplabs.com wrote:
I have a terabyte sata drive that I need to securely wipe clean. It
How securely? (I.e. what order of magnitude is the budget an adversary is assumed to have?)
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
For many definitions of secure, one pass writing zeros will make the cost of recovering any data beyond the benefit to your assumed adversaries. Your biggest risk is probably going to be that you thought you overwrote the disk but made a mistake and didn't (or only partially did).
Note that in most cases where the adversary is assumed to be able to afford to try to recover spare blocks or use electron microscopes to try to figure out what may have been written previously, you should be physically destroying the drive (after wiping) rather than saving a few bucks by repurposing or selling it.
On Wed, Sep 2, 2009 at 4:32 PM, Dean S. Messing deanm@sharplabs.com wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
The `shred' process is running at 100% CPU, presumably computing the special random patterns for erasure. Since I have 4 CPUs, would creating 4 unformatted partitions on the drive and then running something like:

shred -vz /dev/sdd1
shred -vz /dev/sdd2
shred -vz /dev/sdd3
shred -vz /dev/sdd4

in parallel cut my time? Would it be just as secure?
The question is where the bottleneck lies.
If you think it's CPU bound because of random bit patterns, shred it with just the non-random patterns (IIRC you set this by limiting iterations; the first few iterations are standard patterns: all zeros, all ones, 1010...).
My other suggestion would be to use an old junker PC, plug in your drive and boot DBAN and let it churn away for a while. DBAN may be optimized and may run faster (and probably does a more secure job) than shred.
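The iteration-limiting suggestion above is easy to try. A minimal sketch on a hypothetical scratch file (substitute /dev/sdd for the real wipe): -n 1 caps shred at a single overwrite pass, and -z finishes with zeros.

```shell
# Make a scratch file with some "data" on it, then wipe it with one
# overwrite pass plus a final zero pass instead of the 25-pass default.
dd if=/dev/urandom of=scratch.bin bs=64k count=4 2>/dev/null
shred -v -n 1 -z scratch.bin
```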
On 02Sep2009 22:17, Marko Vojinovic vvmarko@gmail.com wrote:

On Wednesday 02 September 2009 21:32:32 Dean S. Messing wrote:

I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran

shred -vz /dev/sdd

The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.

I have always wondered about this: why not just do an rm -rf * on the drive, then put one big file on it (some DivX movie or such) and copy it over and over under different names until the drive space gets exhausted completely? This can easily be scripted, and I believe it would work as fast as possible for a drive of given capacity.
Copying /dev/zero is a fast way to get an arbitrary amount of data (my standard anecdote involves emptying it, which I did once on an ancient system). It will be faster than copying a real file since the "read" part is free. So you do the rm, then:
cat /dev/zero >/mnt/the-drive/ZEROES
On a conventional filesystem that will do what you outline.
Of course, since the OP is wiping the drive completely it will be even faster to do this:
umount /mnt/the-drive
cat /dev/zero >/dev/sdd
HOWEVER:
The purpose of shred is to rewrite the data many times with random data, since it is technically possible to read "old" patterns from the drive with the right (expensive and special) hardware.
If shred is compute bound (he says it is) he may be better off running:
cat /dev/urandom >/dev/sdd
25 times instead. It should be faster, possibly a lot faster, and be just as good for security purposes. (I would think; if the purpose is solely to erase the drive beyond recovery.) It may deplete your machine's random bit pool, so don't generate new ssh or GPG or SSL private keys during or soon after this process.
Cheers,
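Cameron's 25-urandom-passes idea, looped, might look like the sketch below. It runs against a hypothetical scratch file; for the real drive you would set TARGET=/dev/sdd and size the dd to cover the whole device (dd with an explicit count stops cleanly, instead of erroring out at end-of-device the way a bare cat does).

```shell
TARGET=scratch.img   # hypothetical stand-in; /dev/sdd for the real run
PASSES=3             # 25 to match shred's default schedule
for pass in $(seq "$PASSES"); do
    # conv=notrunc overwrites in place rather than truncating the file.
    dd if=/dev/urandom of="$TARGET" bs=1M count=2 conv=notrunc 2>/dev/null
    echo "random pass $pass of $PASSES done"
done
```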
Cameron Simpson wrote:
Copying /dev/zero is a fast way to get an arbitrary amount of data (my standard anecdote involves emptying it, which I did once on an ancient system). It will be faster than copying a real file since the "read" part is free. So you do the rm, then:
cat /dev/zero >/mnt/the-drive/ZEROES
On a conventional filesystem that will do what you outline.
I like "dd if=/dev/zero of=<drive to be zeroed>". In any case, you do not want to do this to a mounted drive. If you want to use cat to zero out a partition, try something like "cat /dev/zero > /dev/sde5" to zero out the 5th partition on drive e.
Mikkel
On 09-09-02 17:39:24, Cameron Simpson wrote: ...
The purpose of shred is to rewrite the data many times with random data, since it is technically possible to read "old" patterns from the drive with the right (expensive and special) hardware.
Proof? This /may/ have been true for drives of old, with their non- overlapping data tracks.
Bruno Wolff III wrote:
On Wed, Sep 02, 2009 at 13:32:32 -0700, "Dean S. Messing" deanm@sharplabs.com wrote:
I have a terabyte sata drive that I need to securely wipe clean. It
How securely? (I.e. what order of magnitude is the budget an adversary is assumed to have?)
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
For many definitions of secure, one pass writing zeros will make the cost of recovering any data beyond the benefit to your assumed adversaries. Your biggest risk is probably going to be that you thought you overwrote the disk but made a mistake and didn't (or only partially did).
Note that in most cases where the adversary is assumed to be able to afford to try to recover spare blocks or use electron microscopes to try to figure out what may have been written previously, you should be physically destroying the drive (after wiping) rather than saving a few bucks by repurposing or selling it.
That's just it. What is "secure"? It's a rather nebulous term and depends on your level of paranoia rather than a fixed definition.
Unless you physically destroy the drive in a manner where it cannot possibly be reassembled (e.g. sanding the oxide off the platters into dust and ensuring the dust spreads to the four corners of the world), then there is a possibility that some data can be recovered.
We do an 8-pass shred on all drives that may have seen sensitive data. Yes, someone with the resources of the NSA could probably recover the data at that point, but there are very few groups with that kind of firepower available to them and would they even bother?
To make everyone happy, though, we then give them to a certified company which puts the drives through a giant degaussing coil (appropriated from an old MRI scanner) before they're physically ground up by a big shredder that also eats cars for a living. The remnants get mingled with the chunks of countless Chevy Cavaliers, Ford Pintos and Chrysler K-cars and probably end up as part of someone's refrigerator. It's overkill in my opinion, but I've been wrong before.

Rick Stevens, Systems Engineer
On Wednesday 02 September 2009 22:39:24 you wrote:
On 02Sep2009 22:17, Marko Vojinovic vvmarko@gmail.com wrote:

On Wednesday 02 September 2009 21:32:32 Dean S. Messing wrote:

I have a terabyte sata drive that I need to securely wipe clean.

I have always wondered about this: why not just do an rm -rf * on the drive, then put one big file on it (some DivX movie or such) and copy it over and over under different names until the drive space gets exhausted completely? This can easily be scripted, and I believe it would work as fast as possible for a drive of given capacity.
Copying /dev/zero is a fast way to get an arbitrary amount of data (my standard anecdote involves emptying it, which I did once on an ancient system). It will be faster than copying a real file since the "read" part is free.
You are right, zeroing is faster of course. I mentioned a DivX movie just to make the data written more random than all zeros, which might be more secure, but the end result is the same, I guess. :-)
HOWEVER:
The purpose of shred is to rewrite the data many times with random data, since it is technically possible to read "old" patterns from the drive with the right (expensive and special) hardware.
This is the part that puzzles me. Consider the following thought experiment. Suppose I have all that state-of-the-art expensive and special equipment at my disposal, and unlimited free time. So I fill the drive with data1, zero it out, fill it with data2. Are you saying that I can use the equipment to recover the old layer of data1 (all or some part of it)? Then I could zero the drive again and fill it with data3. Can I use the equipment to recover both the data1 and data2 layers which have been deleted? Suppose I repeat the process arbitrarily many times. At some point the data1 layer would have to be lost completely, since otherwise it would mean that there is a way to read and write an infinite amount of data on the drive, which is impossible.
So the question is: if you suppose I have in my possession a yet-to-be- invented-most-expensive-CIA-NSA-dream-about-it-machine for data recovery, how many times should a typical drive be zeroed over and over in order to destroy that first layer of sensitive data beyond any chance of recovery, even in principle?
Given that I know so little about modern hard drives, I can only guess, but I guess the number of such rewrite cycles is ridiculously small, like 3 or maybe 4 tops. It would need a serious scientific study to convince me that it takes 5 passes to do it.
So what's all the fuss and hype about deleting drives, then? Create a script to zero out (or random out) the drive four times, let it run for a week, and be done with it. There should be some extremely serious arguments to convince me that this would not be completely effective on any drive.
Best, :-) Marko
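Marko's proposal above reads as a four-line script. Here is a sketch of it against a hypothetical scratch file; point DEV at the real device (after triple-checking the name) for an actual wipe.

```shell
DEV=scratch.img   # hypothetical stand-in for e.g. /dev/sdd
# Seed the "drive" with some old data, then zero it out four times.
dd if=/dev/urandom of="$DEV" bs=1M count=2 2>/dev/null
for pass in 1 2 3 4; do
    dd if=/dev/zero of="$DEV" bs=1M count=2 conv=notrunc 2>/dev/null
    echo "zero pass $pass of 4 complete"
done
```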
On 09/03/2009 07:34 AM, Bruno Wolff III wrote:
On Wed, Sep 02, 2009 at 13:32:32 -0700, "Dean S. Messing" deanm@sharplabs.com wrote:
I have a terabyte sata drive that I need to securely wipe clean. It
I've found over time that formatting with ext3 tends to remove most hidden partitions and data. Formatting can be faster than trying to remove files. With particularly troublesome drives I removed the partitions with DOS fdisk, created one, formatted it under DOS 5, then did the same again with ext3. The most troublesome took about 40 minutes to clean. Is there a way to set up, say, 3 or 4 partitions of various sizes, format each to ext3, then remove the partitions and reformat?

Roger
Thanks to all for the replies.
I'll answer most of the comments here.
0) The disk is unmounted.
1) The drive is (was) a backup drive with a great deal of sensitive corporate laboratory research data and algorithms on it. The monetary loss of the data being stolen would be significant, though it's hard to put a $$ value on it. More importantly, I'm following corporate policy.
2) The drive is under extended warranty and so I'm sending it back for a new drive. The Power Supply in the enclosure is bad. The actual drive is still good, but they want the whole thing back for a replacement. Sanding off the oxide and then melting the drive probably won't go over well with the manufacturer.
3) Writing zeros is not a good idea if the data is valuable. The small latent magnetic orientation info left from the previously written data is not _that_ hard to recover with $5000 equipment, so I've read. Multiple passes of random patterns are needed to make recovery costly.
Tony Nelson's remark about newer drives having overlapping data tracks is interesting and I don't know what current research says about the effects of that on recovery, but Gutmann's (slightly old) paper from 1996:
http://www.cs.auckland.ac.nz/%7Epgut001/pubs/secure_del.html
says in Section 2:
When all the above factors are combined it turns out that each track contains an image of everything ever written to it, but that the contribution from each "layer" gets progressively smaller the further back it was made. Intelligence organisations have a lot of expertise in recovering these palimpsestuous images.
Which is why 25 passes meets DoD (and my corporate) standards.
4) I don't know if the fact that the process running at 100% of one CPU means it is compute bound. Looking at the disk I/O meter in gkrellm I see bursts of writes followed by intervals of no transfer. I know that magnetic reorientation requires some time to "set" and that may be why the delays are there. Or it may be compute bound.
Thanks for all the interesting comments on my question. At this point I think I'll just let the thing run for the five days.
Dean
Dean S. Messing wrote:
Thanks to all for the replies.
I'll answer most of the comments here.
The disk is unmounted.
The drive is (was) a backup drive with a great deal of sensitive corporate laboratory research data and algorithms on it. The monetary loss of the data being stolen would be significant, though it's hard to put a $$ value on it. More importantly, I'm following corporate policy.
The drive is under extended warranty and so I'm sending it back for a new drive. The Power Supply in the enclosure is bad. The actual drive is still good, but they want the whole thing back for a replacement. Sanding off the oxide and then melting the drive probably won't go over well with the manufacturer.
Writing zeros is not a good idea if the data is valuable. The small latent magnetic orientation info left from the previously written data is not _that_ hard to recover with $5000 equipment, so I've read. Multiple passes of random patterns are needed to make recovery costly.
Tony Nelson's remark about newer drives having overlapping data tracks is interesting and I don't know what current research says about the effects of that on recovery, but Gutmann's (slightly old) paper from 1996:
http://www.cs.auckland.ac.nz/%7Epgut001/pubs/secure_del.html
says in Section 2:
When all the above factors are combined it turns out that each track contains an image of everything ever written to it, but that the contribution from each "layer" gets progressively smaller the further back it was made. Intelligence organisations have a lot of expertise in recovering these palimpsestuous images.
Which is why 25 passes meets DoD (and my corporate) standards.
I don't know if the fact that the process running at 100% of one CPU means it is compute bound. Looking at the disk I/O meter in gkrellm I see bursts of writes followed by intervals of no transfer. I know that magnetic reorientation requires some time to "set" and that may be why the delays are there. Or it may be compute bound.
Run "top" and you may find that the shred process is in a "D" state a lot of the time. That means it's in an I/O wait state, waiting on the drive to complete some operation. A "D" state can suck up a lot of CPU.
Thanks for all the interesting comments on my question. At this point I think I'll just let the thing run for the five days.
Dean
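Rick's state check above can also be done from a script: ps prints a one-letter scheduler state (R running, S sleeping, D uninterruptible I/O wait) for a given PID. The sketch below inspects its own shell as a stand-in; substitute the shred PID (e.g. from pgrep shred) to watch the real process.

```shell
PID=$$   # stand-in PID; use the shred PID for the real check
ps -o state= -p "$PID"   # prints a single state letter: R, S, D, ...
```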
On Wed, Sep 02, 2009 at 16:37:26 -0700, "Dean S. Messing" deanm@sharplabs.com wrote:
Thanks to all for the replies.
I'll answer most of the comments here.
The disk is unmounted.
The drive is (was) a backup drive with a great deal of sensitive corporate laboratory research data and algorithms on it. The monetary loss of the data being stolen would be significant, though it's hard to put a $$ value on it. More importantly, I'm following corporate policy.
The drive is under extended warranty and so I'm sending it back for a new drive. The Power Supply in the enclosure is bad. The actual drive is still good, but they want the whole thing back for a replacement. Sanding off the oxide and then melting the drive probably won't go over well with the manufacturer.
Given 1, this seems like a foolish policy. Just eat the cost as part of securing your data. It might be cheaper than having you keep an eye on multiple write passes covering several days.
- Writing zeros is not a good idea if the data is valuable. The small latent magnetic orientation info left from the previously written data is not _that_ hard to recover with $5000 equipment, so I've read. Multiple passes of random patterns are needed to make recovery costly.
There was some old documentation that claimed the remnants from previous writes were recoverable, though I don't remember seeing cost estimates that low. I would have expected a lot of human time to be needed to help deal with the incomplete recovery.
If one can recover a significant amount of data after writing zeros, one is going to be able to do it after writing a single pass of random data as well.
Rick Stevens wrote:
- I don't know if the fact that the process running at 100% of one CPU means it is compute bound. Looking at the disk I/O meter in gkrellm I see bursts of writes followed by intervals of no transfer. I know that magnetic reorientation requires some time to "set" and that may be why the delays are there. Or it may be compute bound.
Run "top" and you may find that the shred process is in a "D" state a lot of the time. That means it's in an I/O wait state, waiting on the drive to complete some operation. A "D" state can suck up a lot of CPU.
Thanks Rick. It's in "D" state only about 5-10% of the time. Yet disk writes are occurring (according to the spikes and numbers in gkrellm) for 1 second or so, every 2 seconds. So that either points to random number computation or the wait needed to let the new magnetic orientation 'set'. Not sure.
Dean
On Wed, 2 Sep 2009 17:51:00 -0700 (PDT) "Dean S. Messing" deanm@sharplabs.com wrote:
Bruno Wolff wrote:
Given 1, this seems like a foolish policy. Just eat the cost as part of securing your data. It might be cheaper than having you keep an eye on multiple write passes covering several days.
"I have no comment at this time." :-)
LOL
Bruno Wolff wrote:
Given 1, this seems like a foolish policy. Just eat the cost as part of securing your data. It might be cheaper than having you keep an eye on multiple write passes covering several days.
"I have no comment at this time." :-)
There was some old documentation that claimed the remnants from previous writes were recoverable, though I don't remember seeing cost estimates that low. I would have expected a lot of human time to be needed to help deal with the incomplete recovery.
I suppose the cost of such equipment is much cheaper now than in the olden days of 15 years ago. Also, given my comment below, a sensitive read head (rather than a magnetic-force or atomic-force microscope) may be all that's needed in the case of a "zero write-over".
If one can recover a significant amount of data after writing zeros, one is going to be able to do it after writing a single pass of random data as well.
I don't think this is correct, based on my reading. The situation is a little bit like trying to decrypt a file after adding an unknown constant numerical value to the data, vs. adding a "one-time pad" of random numbers. That's because writing zeros does not completely zero out the local magnetic orientations, so the variations can still be detected. A random pattern makes the problem much harder. Multiple passes (which is what /usr/bin/shred does) make it even harder.
Dean
Dean S. Messing wrote:
Thanks Rick. It's in "D" state only about 5-10% of the time. Yet disk writes are occurring (according to the spikes and numbers in gkrellm) for 1 second or so, every 2 seconds. So that either points to random number computation or the wait needed to let the new magnetic orientation 'set'. Not sure.
Dean
I would expect it to be write buffering. It may be the time it takes to fill the write buffers, and then write them to disk, and/or write buffering on the drive.
I know how to turn off write buffering on a mounted partition, but not when writing directly to a drive. You can probably turn off drive write buffering using hdparm.
You may be able to improve the time by tweaking the size of the writes to the drive. Take a look at the "USB I/O performance" thread.
Mikkel
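Mikkel's write-size suggestion is easy to measure. This sketch writes the same 8 MiB of zeros at three block sizes against a hypothetical scratch file; to benchmark the drive itself, point of= at the raw device and (assuming your dd supports it) add oflag=direct so the page cache doesn't mask the result.

```shell
# Same total data (8 MiB) at three block sizes; dd reports the elapsed
# time and throughput on stderr after each run.
dd if=/dev/zero of=bench.img bs=4k  count=2048
dd if=/dev/zero of=bench.img bs=64k count=128
dd if=/dev/zero of=bench.img bs=1M  count=8
```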
Terabyte disks are about $100. Erasing it to satisfy the corporate policy isn't possible for that much money. Don't throw the company's money away, throw the drive away because it is cheaper.
As for throwing the drive away, buy a set of small torx screwdrivers and take it apart. Keep the platter until it is scratched and dirty (you needn't remove the oxide, just make it too rough for the head to fly over). As a nice side effect you get some remarkable magnets.
On Wed, Sep 2, 2009 at 3:32 PM, Dean S. Messing deanm@sharplabs.com wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
I have been reading this thread wondering this: why do we have to shred the whole disk? Why not just find the parts that are actually used and write over them a few times? I seriously doubt you have 1 terabyte of precious data.
Another idea just hit me. What if you encrypt the data on the disk. Ubuntu has that thing now to create a Private encrypted partition. Do that, move your precious stuff in there. then unmount. That is supposed to be just about impossible to recover, even for the NSA kids.
Anybody know if it is easier to crack an encrypted file system than to recover shredded data?
pj
On Wed, Sep 2, 2009 at 5:32 PM, Dean S. Messing deanm@sharplabs.com wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
The `shred' process is running at 100% CPU, presumably computing the special random patterns for erasure. Since I have 4 CPUs, would creating 4 unformatted partitions on the drive and then running something like:

shred -vz /dev/sdd1
shred -vz /dev/sdd2
shred -vz /dev/sdd3
shred -vz /dev/sdd4

in parallel cut my time? Would it be just as secure?
Thanks Dean
Since when is formatting a CPU-intensive task?

Think about this... the heads ALL move in parallel on the same platter. There aren't 4 individual head actuator arms moving heads independently. If the 'arm' goes to track 255, all heads go to track 255...

Here, news for you: http://www.dansdata.com/images/io012/heads1280.jpg
FC
PS: this is also interesting reading....
Dual actuator HDs.... http://www.tomshardware.com/news/seagate-hdd-harddrive,8279.html
On Wed, Sep 2, 2009 at 5:32 PM, Dean S. Messing deanm@sharplabs.com wrote:
I have a terebyte sata drive that I need to securely wipe clean.
Just mail it to me, I don't care about what was in it, and I'll fill it with movies (Public Domain documentaries on religion) and mp3s (classical music with copyright expired).

Think about it as "recycling"....

-sorry, couldn't resist- FC
On 02Sep2009 17:07, Rick Stevens ricks@nerd.com wrote:

Dean S. Messing wrote:

4) I don't know if the fact that the process running at 100% of one CPU means it is compute bound. Looking at the disk I/O meter in gkrellm I see bursts of writes followed by intervals of no transfer. I know that magnetic reorientation requires some time to "set" and that may be why the delays are there. Or it may be compute bound.

Run "top" and you may find that the shred process is in a "D" state a lot of the time. That means it's in an I/O wait state, waiting on the drive to complete some operation.
During this time it should be consuming _no_ CPU.
| A "D" state can suck up a lot of CPU.
It should not. Historically, processes in D state have been counted towards the load average, because D states are normally very brief and the process will be back on the run queue Real Soon Now.
So while D state processes run up your load average, purely for the purpose of having that number better indicate how "busy" your system is, a process in D state does _not_ suck up CPU unless it's flickering in and out of D state so fast that the OS housekeeping becomes expensive.
Cheers,
After reading the entire thread, and watching the video, here is what I'd do.
Put the drive in a safe.
Go buy a new drive, and make use of it.
Drop the warranty claim even though it is valid. The company will save money in the end.
In about 10 years, or whenever the corporate data on the drive is deemed obsolete and nonsensitive, hold a Corporate Smash Day in which this drive and others are given to budding young technologists supplied with sledgehammers and other tools. Offer an all-expenses paid dinner to whoever reduces the drives to the smallest pieces.
Bob
On 09/02/2009 04:32 PM, Dean S. Messing wrote:
I have a terabyte sata drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
The `shred' process is running at 100% CPU, presumably computing the special random patterns for erasure. Since I have 4 CPUs, would creating 4 unformatted partitions on the drive and then running something like:

shred -vz /dev/sdd1
shred -vz /dev/sdd2
shred -vz /dev/sdd3
shred -vz /dev/sdd4

in parallel cut my time? Would it be just as secure?
Thanks Dean
Earlier on in one of the threads, someone compared encryption with an envelope. That is a pretty good analogy. You know the information is in there, but the only way to get it is to open the envelope. The question is how long it takes to open the envelope. No encryption is unbreakable. The value of encryption is in how long it takes to break. One benchmark that is often quoted is a "brute-force attempt". Although it is not literally an every-combination-of-input attempt, it is quite similar. If a single very high speed computer were used, and the algorithm was known or could be guessed, how long would it take to retrieve the message? This is where those long spans of years you see published come from. The purpose of encryption is simply to make the data harder to retrieve, not to conceal it indefinitely. Some algorithms are meant to conceal just until the message is delivered, some to conceal for days, and some for years; none shield for centuries, but attempts are being made daily.
Moreover, as encryption algorithms become better understood, the applicable means to break encoding become more numerous, and the power of the computer (about 100 billion times more powerful today than in 1967) makes encryption less and less secure at all levels. Of course, computer speed also lends more encryption methods to the person shielding information as well, but that is basically an efficiency gain, not applicable to the direct computation of breaking any particular code.
Alternate languages are the best bet. It is impossible to replicate the cultural differences on a computer (at least that is true today I think), so languages have distinct attributes that lend them to expressing ideas in a different cultural idiom, and until the language and/or culture are known, it is unfathomable, unless you find a decoded bit that you understand (the rosetta stone for example). Navajo code talkers were used by the US military for that same reason in the Second World War.
If you are a number or math nut, encryption, prime numbers, Fibonacci numbers, and transforms of all varieties will be a really interesting topic of study.
Your signature says that you are a professor of political science. Think about the political and cultural evolution of language, and then think of encryption as a means to code the thoughts of one culture to make it unique. What forces act on that to keep it quiet, and what forces work to weaken the culture. That is a form of code breaking.
Regards, Les H
On Wed, 2009-09-02 at 21:34 -0500, Paul Johnson wrote:
On Wed, Sep 2, 2009 at 3:32 PM, Dean S. Messing <deanm@sharplabs.com> wrote:
I have a terabyte SATA drive that I need to securely wipe clean. It originally had 2 partitions. I deleted them using `fdisk', rebooted, and then as root ran
shred -vz /dev/sdd
The drive is capable of about 60MB/sec, but shred is only "shredding" about 25MB every 5 seconds according to its output. Since the default number of passes is 25, this works out to about 5 days.
I have been reading this thread wondering this: why do we have to shred the whole disk? Why not just find the parts that are actually used and write over them a few times? I seriously doubt you have 1 terabyte of precious data.
Another idea just hit me. What if you encrypt the data on the disk? Ubuntu has that thing now to create a Private encrypted partition. Do that, move your precious stuff in there, then unmount. That is supposed to be just about impossible to recover, even for the NSA kids.
Anybody know if it is easier to crack an encrypted file system than to recover shredded data?
pj
--
Is shred CPU-bound? I see two ways to test. First, fill the drive from /dev/zero; cp is not CPU-bound. Second, run top while shred is running; top will tell you how much CPU time shred has used. After an hour, divide the number of shred's CPU seconds by 36 to get the percentage.

If random number generation is the problem, you can replace it by passing shred a named FIFO. Pick two relatively prime integers, both larger than one million. Use these as the sizes of two arrays, call them A and B. Fill them from /dev/random, then generate:

byte(j) = A[j % sizeof A] ^ B[j % sizeof B]

Unlike urandom, random can block, so the fill step could be slow. When random would block, urandom falls back to pseudo-random numbers, so either way you will be using pseudo-random numbers.
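The named-FIFO idea above works because GNU shred accepts an external source of random bytes via --random-source, bypassing its internal (possibly CPU-bound) generator. A minimal sketch, run against a scratch file rather than a real device (here /dev/urandom stands in for the XOR-of-two-arrays FIFO described above):

```shell
# Sketch only: scratch.img is a stand-in for /dev/sdd.
truncate -s 1M scratch.img

# One pass of externally supplied random data; shred skips its own generator.
shred -v -n 1 --random-source=/dev/urandom scratch.img

# Final pass of zeros, as with -z in the original command.
shred -v -n 0 -z scratch.img
```

On the real device you would substitute /dev/sdd (or the FIFO) and drop the scratch-file setup.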
IIRC shred doesn't just write different data patterns on different passes; it also writes blocks in different orders. That can affect the precise placement of the r/w heads and can cause shred to affect more of the platter area.
As others have noted, company policy seems just a bit silly. Using any of the following tools would seem cheaper and more effective: a degausser, a sledge hammer, an arc welder, or a NaK bath.
shred-ing would seem more appropriate for an internal drive. A preference for shred-ing over cracking a case seems rational to me.
On Thu, Sep 3, 2009 at 12:40 PM, Leshlhowell@pacbell.net wrote:
Earlier on in one of the threads, someone compared encryption with an envelope. That is pretty good. You know the information is in there, but the only way to get it is to open the envelope. The question is how long does it take to open the envelope. No encryption is unbreakable. The value of encryption is how long does it take to break it.
You shouldn't be developing nuclear weapons in your basement in the first place, nor storing your top-secret classified documents on your personal hard drive! Much less talking about it on a public mailing list.
Next time do the drawings with pencil on a sheet of paper, memorize them all, then shred it and eat the little pieces with some soup.
Oh wait, it wasn't you. It was Dean who started this thread....
<VBG>
Now seriously, speaking of data security: the other day I was looking for Word templates and found someone's hard drive fully shared on an http server, down to the desktop folder, Windows dir, etcetera. It seems he created a "secret" folder under his public http dir and mounted his Windows root there, and "somehow" Google ended up indexing it all.
Talk about privacy...But I bet he'll take great care to erase his hd before getting rid of it so no data falls in the wrong hands....
FC
On Thu, 2009-09-03 at 11:36 -0500, Michael Hennebry wrote:
Using any of the following tools would seem cheaper and more effective: a degausser, a sledge hammer, an arc welder
Oooh, I think the last one sounds the most interesting!
A scientist back in '07 decided to figure this out and did a study on drives current as of then as to the recoverability rate with scanning/force-probe microscopy.
He found that recovering more than one bit at a given location after a single zeros-wipe was statistically impossible. Multiple passes, bit patterns, the 35-pass erase, etc. were all further useless, the latter being based on some speculation in a government paper from the 90's.
He then built a generic magnetic recovery microscope using drive heads and a spin stand, and found that recovery of data that hadn't been erased was *very* good. This is the interesting part...
ATA has had a 'secure erase' command for several years, which will erase a drive in a couple hours to a degree that can't be recovered. The trick is, the standard 'secure erase' only does blocks that haven't been re-mapped by ATA bad-block reallocation. Some of the newer drives, Seagates having been the first, have an enhanced version of the ATA command that will also wipe out the re-mapped blocks.
So, go ahead and do your 35 different-bit pattern erases, and any data you had in an ATA-remapped block is easily readable by the spin-stand microscope because it never got re-written (the drive has declared those blocks unwritable and re-mapped them). A relatively-quick enhanced secure erase with hdparm will make the drive un-recoverable.
He also notes that with increasing densities, bulk de-gaussers are becoming unreliable. Drives shot or smashed can be pretty easily glued back together and the intact surfaces read. This was all in a series of posts on the SANS forensics blogs a while back.
I've been trying to get a list of the Seagate drives that have supported the enhanced secure erase (the manuals don't specify), but I'm caught in the hell of e-mailing with a rep who insists I just want to know how to use Seatools for DOS. Anybody know someone in the right department at Seagate?
The moral of the story is you should wipe your drives securely, if you know you can, and donate them to a charity or re-purpose them. Save the whales and all that.
-Bill
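For anyone who wants to try the enhanced erase described above, it is issued through hdparm's ATA security options. A rough sketch follows; /dev/sdX is a placeholder, the sequence is destructive, and it assumes the drive is not in the "frozen" state (some BIOSes freeze security on boot, which is why the hot-replug trick mentioned elsewhere in the thread is sometimes needed):

```shell
# DESTRUCTIVE sketch: do not run against a disk you care about.
# /dev/sdX is a placeholder for the target drive.

# 1. Confirm the drive supports secure erase (and enhanced erase) and
#    is "not frozen" in the Security section of the identify output.
hdparm -I /dev/sdX

# 2. Set a temporary user password; the ATA spec requires one before erasing.
hdparm --user-master u --security-set-pass Eins /dev/sdX

# 3. Issue the erase. Use --security-erase-enhanced instead if step 1
#    listed "supported: enhanced erase", to also cover re-mapped blocks.
hdparm --user-master u --security-erase Eins /dev/sdX
```

The password ("Eins" here) is cleared by a successful erase, leaving the drive unlocked and factory-blank.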
Dean S. Messing wrote:
Thanks to all for the replies.
I'll answer most of the comments here.
The disk is unmounted.
The drive is (was) a backup drive with a great deal of sensitive corporate laboratory research data and algorithms on it. The monetary loss from the data being stolen would be significant, though it's hard to put a $$ value on it. More importantly, I'm following corporate policy.
This is the most problematic issue: corporate policies that were written when drive sectors were visible with a home microscope.
That said, I would go with the dd recommendations, 25 times.
Also, the -v option will slow the progress due to screen writes. I have seen this in the past.
And, if the drive is mounted as ext3, then the data may not get erased as expected. See the man page on shred.
CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes: ...
Again, dd gets around this.
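The dd approach amounts to one or more random passes followed by a zero pass over the raw device. A sketch, run here against a scratch file standing in for /dev/sdd (on a real device you would drop count= and let dd run to the end of the disk):

```shell
# Sketch: wipe.img stands in for /dev/sdd; count=8 keeps the demo small.
# One pass of random data...
dd if=/dev/urandom of=wipe.img bs=1M count=8

# ...then a final pass of zeros, like shred's -z option.
dd if=/dev/zero of=wipe.img bs=1M count=8
```

With GNU dd, adding status=progress to each command prints a running byte count, which answers the "how far along is it" question without shred's -v screen-write overhead.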
As for the comments on the "secure erase" features of drives, a quick Google search came up with:
http://ata.wiki.kernel.org/index.php/ATA_Secure_Erase Which shows how to use hdparm.
http://advosys.ca/viewpoints/2006/07/hard-drive-secure-erase/ Which is a very interesting article, and this bit is really important:
We tried the secure erase utility on multiple old ATA drives and every one manufactured since 2000 supported the Security Erase command (the utility tells you if the drive does not). Drives older than 2000 don’t have the command so if you need to wipe very old drives, a software wipe is the best you can do.
Maybe run the secure erase 25 times.
CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes: ...
Again, dd gets around this.
Actually it doesn't because you've no idea how the disk itself lays out data.
Maybe run the secure erase 25 times.
If the drive supports it, a single secure erase should be fine. If not, then you need to pick an alternate disposal mechanism. The favourite these days sometimes appears to be to pay someone to do it, so your business can claim it's discharged its liabilities and anyone in a disaster should sue someone else ;)
Discs are smart enough to be considered small storage appliances these days, so they are quite capable of re-arranging data internally as and when they feel like it.
On Thu, 2009-09-03 at 09:23 +1000, Roger wrote:
I've found over time that formatting with ext3 tends to remove most hidden partitions
I don't see how it can do that. Formatting creates a file system within a partition. If you want to do something to a partition, you need to use a function that works outside of the partition.
You could format what's stored in the hidden partition, but that partition would still be there, unless you did something *else*.
On Mon, Jan 16, 2012 at 5:47 PM, Tim <ignored_mailbox@yahoo.com.au> wrote:
On Thu, 2009-09-03 at 09:23 +1000, Roger wrote:
I've found over time that formatting with ext3 tends to remove most hidden partitions
I don't see how it can do that. Formatting creates a file system within a partition. If you want to do something to a partition, you need to use a function that works outside of the partition.
You could format what's stored in the hidden partition, but that partition would still be there, unless you did something *else*.
Is it not possible to use secure erase directly against the drive firmware using hdparm? This is also available from some ISOs, such as PartedMagic, once booted independently of the drive to be erased. Some machines may need the drive to be re-hotplugged to gain access to the secure-erase settings, but I have done it with drives for some years now (though not terabyte ones). This method usually gives the fastest way to completely erase the drive, and it leaves the drive factory-fresh afterwards, presuming it is not actually damaged!