This was already resolved (very poorly) a week ago, but I'm curious what the best tactic is for dealing with this in the future.
I had an Ubuntu build on a laptop. I then put VirtualBox OSE on it and installed Win7 along with several development tools.
Then I had a power outage and the battery died, so the disk, an ext4 filesystem, was not unmounted cleanly.
I messed with fsck for several hours but could not recover one huge file: the VHD image for my VM. I lost the work in the VM because I could not figure out a way to reattach the inodes that had been dumped into lost+found.
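For reference, here is roughly the kind of triage I attempted; the device name and mount point are placeholders, not my actual setup:

    # Run fsck only on the unmounted filesystem, answering yes to repairs.
    sudo umount /dev/sda1
    sudo fsck.ext4 -y /dev/sda1

    # Mount it and inspect what fsck salvaged. Entries in lost+found are
    # named after their inode numbers (e.g. #131074), not original paths.
    sudo mount /dev/sda1 /mnt
    ls -la /mnt/lost+found

    # Guess at file types, and hunt for a multi-gigabyte VHD by size.
    sudo file /mnt/lost+found/*
    sudo find /mnt/lost+found -size +1G -exec ls -lh {} \;

Even with that, mapping an orphaned inode back to its original filename was the part I never solved.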
I've been bitten by dirty unmounts on Linux before, too: several years ago on ext3, the large lost file was a MySQL database, which I had to restore from backup.
I never have this problem with Windows NTFS in its various incarnations.
How does one negotiate lost+found on a Linux filesystem? How does one avoid this kind of problem in the first place? It seems that huge amounts of file data on ext3/ext4 end up sitting dirty in the RAM cache, and after a hard reboot the on-disk copies are no longer trusted. There has got to be a mount switch or a tunable to limit this behavior, something like the sketch below.
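Is the answer something along these lines? This is just a guess at the relevant knobs; the fstab line and the sysctl values are illustrative assumptions, not tested recommendations:

    # /etc/fstab: journal file data as well as metadata (data=journal),
    # keep write barriers on (barrier=1 is the ext4 default), and flush
    # the journal every 5 seconds (commit=5), trading speed for safety.
    # UUID=<uuid>  /  ext4  data=journal,barrier=1,commit=5,errors=remount-ro  0  1

    # Shrink the dirty page cache so less unwritten data sits in RAM at
    # any moment (takes effect until reboot; persist in /etc/sysctl.conf):
    sudo sysctl -w vm.dirty_background_ratio=1
    sudo sysctl -w vm.dirty_ratio=5
    sudo sysctl -w vm.dirty_expire_centisecs=500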
What am I doing wrong? The sheer number of *nix-based FTP/Apache/etc. servers out there tells me that other people manage to hard-reboot them without losing the entire system or random large files.