I'm looking to be able to delete a file as securely as possible in C#. Ideally, even forensic analysis of the disk would yield poor/no results. Instead of wiping free space for the whole drive, I want to just erase the exact portion of the disk used by that file - it's fine if there are parts in different places on the disk, as long as everything relating to that file is gone (contents, file attributes like name, date created etc).

There exists one 13-year-old solution to this problem, at least in part, at Securely Delete a File using .NET[^]

There's doubt as to its ability to actually wipe the file securely, and it's a very old project, so I thought it best to start a new conversation on the issue. Although it can get past recovery tools, I'm not confident it's sufficient for preventing forensic analysis. It also needs to work on both HDDs and SSDs.

Is this even possible, and what's the best approach?

What I have tried:

Securely Delete a File using .NET[^]
Posted; updated 14-Oct-21 9:56am (v4)
Comments
Gerry Schmitz 14-Oct-21 13:52pm
Common sense says that repeatedly writing x"FF" to the same location on disk should make the previous "data" unreadable, provided it's flushed properly. As for "file attributes", that's being paranoid (or malware) and requires accessing the file system's allocation tables and any other references.
Member 11450536 14-Oct-21 15:59pm
Common sense says that it's a bit more difficult to outsmart forensic analysis than that. "Repeatedly": how many times? "Flushed properly": what does that entail? Yes, the idea of wiping free space is to be paranoid. "And any other references": where?
Gerry Schmitz 15-Oct-21 15:44pm
"Other references": the Registry, for one; depending on how much of a trail the apps accessing the file leave.

1 solution

What does the age of the article have to do with its effectiveness?

NTFS hasn't really changed much in the last 13 years, so the article is still valid.

All it's doing is writing cryptographically random data over the entire file to be deleted, changing the dates on the file, then deleting it, which frees the space occupied by the file back to the file system.

That technique will not change. It's still a valid way to "securely delete" a file.
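The steps described above — overwrite the contents with cryptographically random data, scramble the timestamps, then delete — can be sketched in C# roughly as follows. This is a minimal illustration, not the linked article's actual code; the class name, pass count, buffer size, and the bogus date are my own choices:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class SecureDelete
{
    // Overwrite the file's current contents with cryptographically random
    // bytes, scramble its timestamps, then delete it. Note this only covers
    // the sectors the file occupies *right now* (see the caveats below).
    public static void WipeFile(string path, int passes = 1)
    {
        long length = new FileInfo(path).Length;
        var buffer = new byte[64 * 1024];

        using (var rng = RandomNumberGenerator.Create())
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Write,
                                       FileShare.None, buffer.Length,
                                       FileOptions.WriteThrough))
        {
            for (int pass = 0; pass < passes; pass++)
            {
                fs.Position = 0;
                long remaining = length;
                while (remaining > 0)
                {
                    int chunk = (int)Math.Min(buffer.Length, remaining);
                    rng.GetBytes(buffer);          // fresh random data each chunk
                    fs.Write(buffer, 0, chunk);
                    remaining -= chunk;
                }
                fs.Flush(flushToDisk: true);       // push past the OS write cache
            }
        }

        // Obscure the metadata before deleting.
        var bogus = new DateTime(2000, 1, 1);
        File.SetCreationTime(path, bogus);
        File.SetLastWriteTime(path, bogus);
        File.SetLastAccessTime(path, bogus);
        File.Delete(path);
    }
}
```

Opening with `FileMode.Open` (rather than `Create`) avoids truncating the file, so the writes land on the file's existing allocation rather than possibly new clusters.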

But overwriting the file as it currently exists does not guarantee you are overwriting every part of the drive that was previously occupied by it. Say a copy of the file was made and deleted: a "secure delete" has no way of knowing that ever happened, so it cannot overwrite the areas that copy occupied. The same goes for disk defragmentation. If the file, or parts of it, have been moved by a defragmentation tool that does not overwrite the sectors it moved, you have no way of knowing which sectors the file previously occupied.

This is why you don't really see "secure file delete" as a viable product, but instead see tools that overwrite all free space.
Comments
BillWoodruff 14-Oct-21 15:26pm
+5
Member 11450536 14-Oct-21 16:00pm
I don't deal with NTFS in particular on a regular basis, so I don't know whether it's changed much in the last 13 years. Many other things have; there are tens of thousands of obsolete articles online because of technology changing. Note: "There's doubt as to its ability to actually wipe the file securely".

In my particular scenario, let's say I currently have the file path of a file I wish to securely delete. I'm assuming the solution would involve figuring out exactly which parts of the disk are occupied, deleting the file and then wiping those bytes. The exact approach, I'm not sure about.
Dave Kreskowiak 14-Oct-21 16:02pm
The file system already knows which sectors, or blocks, are occupied by the file. It's why you can read the entire file without missing any data.

All you have to do is overwrite every byte in the file and you will destroy all of the content in the CURRENT incarnation of the file.
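The point about destroying the current incarnation of the file hinges on two details worth making explicit: open the file without truncating it (so the existing allocation is reused), and flush through the OS write cache so the bytes actually reach the device. A minimal illustration — the flag choices are my own reading, not quoted from this thread:

```csharp
using System;
using System.IO;

class OverwriteInPlace
{
    static void Main()
    {
        string path = Path.GetTempFileName();
        File.WriteAllText(path, "sensitive");          // 9 bytes of "secret" data
        long length = new FileInfo(path).Length;

        // FileMode.Open (not Create) keeps the existing allocation;
        // FileOptions.WriteThrough asks the OS to bypass its write cache.
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Write,
                                       FileShare.None, 4096,
                                       FileOptions.WriteThrough))
        {
            var zeros = new byte[length];
            fs.Write(zeros, 0, zeros.Length);          // overwrite every byte
            fs.Flush(flushToDisk: true);               // force any buffered bytes out
        }

        Console.WriteLine(new FileInfo(path).Length);  // length unchanged: 9
        File.Delete(path);
    }
}
```

Even with `WriteThrough`, the drive's own controller may still buffer or remap writes — which is exactly the SSD concern raised below.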
lmoelleb 15-Oct-21 2:27am
Isn't the "address space" seen by the file system mapped by the SSD controller for wear leveling? The file system might issue an order to overwrite a block, but the SSD controller can then shift that block to a new location in the flash memory instead of overwriting the original one. I would expect TRIM (or the SSD's internal garbage collection) to take the data out, but I have no idea whether it can still be recovered until an actual overwrite happens, or whether you can trust TRIM to execute 100% reliably.
Gerry Schmitz 15-Oct-21 16:01pm
Yes; in that case, one should use encryption at the file level. (I don't store "data" on my (OS) SSD; only programs.)

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)