I installed Recover My Files and recovered over 280GB of data from a friend's 160GB external drive after it crashed. I have opened several images, audio files, xls sheets, and movies to make sure they are intact, and have not found any corrupt files so far. The problem is that 280GB of data won't fit back onto a 160GB drive, so there must be duplicate files somewhere. Recover My Files created 1,387 numerically named folders containing 36,625 files. Sorting those by hand would take a lifetime.
So how can I consolidate ALL of the folders into one folder in Windows? From there I can easily separate the files by type and name and weed out the useless stuff. Apparently a DOS program named xxcopy can do this, but surely there is another way?! The only method I have come up with is creating one huge archive file and then extracting it without subfolders, but I don't have that kind of disk space. If this can be done while removing possible duplicates at the same time, let me know. The recovery took over 20 hours, so I'm not too worried if he loses SOME data at this point…
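For what it's worth, if you end up with access to a Linux (or Cygwin/MSYS) shell, flattening and de-duplicating can be done in one pass. Here's a sketch; the directory names are made up for the example, and duplicates are detected by md5sum content hash plus filename:

```shell
# Sketch: flatten a recovery tree into one folder while skipping exact
# duplicates (same name AND same content). Assumes md5sum is available;
# "recovered" and "flat" are invented example directory names.

# -- demo data: two numbered folders holding an identical file --
mkdir -p recovered/0001 recovered/0002
echo "holiday photo" > recovered/0001/PICT0001.jpg
echo "holiday photo" > recovered/0002/PICT0001.jpg   # exact duplicate
echo "spreadsheet"   > recovered/0002/budget.xls

mkdir -p flat
find recovered -type f | while read -r f; do
  h=$(md5sum "$f" | cut -d' ' -f1)      # content fingerprint
  dest="flat/${h}_$(basename "$f")"     # hash + name avoids collisions
  [ -e "$dest" ] || cp "$f" "$dest"     # copy only the first copy seen
done

ls flat | wc -l    # 2 files survive: one PICT0001.jpg, one budget.xls
```

Prefixing the hash means two *different* files that happen to share a name both survive, while true byte-for-byte duplicates are copied only once. Since it copies rather than moves, the originals stay safe until you've checked the result.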
No, that's not it. I have used this type of software before: it can recover the same file in one, two, or even three formats, which is why it found more than can fit on the hard drive.
Can you save to another, bigger hard drive?
cd
mkdir \newfolder
cd \recoveredfolder
for /R %x in (.) do xcopy "%x" \newfolder
The /R switch makes it recurse into folders within folders. The above assumes your new folder is called "newfolder" and sits at the root of the drive, and that all the recovered folders are under \recoveredfolder.
Are you putting the files back onto the same drive? Have you tried SpinRite?
Also, depending on how the drives are formatted (block size, etc), the same files may take differing amounts of space on two different drives.
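You can see the allocation effect on a Linux box by comparing a file's logical size with the space the filesystem actually sets aside for it. A quick sketch using GNU stat (the file name is invented):

```shell
# A 1-byte file still occupies at least one whole filesystem block, so
# the same set of files can consume different amounts of space on drives
# formatted with different block sizes. (GNU stat assumed.)
printf x > tiny.txt
stat -c 'logical size: %s bytes' tiny.txt           # 1 byte of data
stat -c 'allocated: %b blocks of %B bytes' tiny.txt # typically 4096 bytes total on ext4/NTFS
```

Multiply that slack by 36,625 recovered files and the totals on two differently formatted drives can diverge noticeably.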
Is there an xmove, for example, so that a copy isn't made? I just want to eliminate the file tree in this directory so that all the files reside in one folder. I won't have room to copy all of the files into a new folder. Do you know what I mean?
What are "for" and "%x in"? I have almost never used the command line, clearly.
The reason I suggested xcopy instead of 'move' is that if something goes wrong, you won't lose the originals. I wouldn't use 'move' unless it's a last resort. But go for it.
'for' just loops through the command. With /R and (.), it means: for every folder it finds (that's the . part), do this…
'%x' is a variable: it holds the first item the loop finds, then the second, and so on. So you can do things like move %x \newfolder and it will just substitute the name for the %x.
If I were a real nerd, I'd be using %foo for a variable name. But there's just something un-nerdy about DOS scripting. Real nerds use $ for variables, and the slashes go the opposite way.
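For the record, the same loop in a Unix shell, where the variables really do get a $ and the slashes lean the other way, might look like this; src and dest are invented example names:

```shell
# The cmd loop above rewritten for a Unix shell: walk src/ recursively,
# substituting each filename into $f, and copy it flat into dest/.
mkdir -p src/sub dest
echo one > src/a.txt
echo two > src/sub/b.txt

find src -type f | while read -r f; do
  cp "$f" dest/      # $f plays the role of %x
done

ls dest              # a.txt  b.txt
```

find's recursion replaces the /R switch, and the while-read loop replaces the for.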
Yeah, I love using the command line instead of a GUI too. I also prefer copy to move in Linux.
You can copy a whack of files in Linux from one directory to another using the copy command. Let's say I have a pictures directory within my home folder and I want to copy all of the .jpgs in it to a saved directory, also within my home directory. Open up a shell prompt and navigate to your pictures directory. Here's my example:
cd /home/hitest/pictures
From within that directory issue the copy command, specify what you want to copy, and give the path to your target directory, in this case saved. The wildcard *.jpg will match all files ending in .jpg:

cp *.jpg /home/hitest/saved
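Here's a self-contained version of that example you can try anywhere, with throwaway directories standing in for the /home/hitest paths:

```shell
# Copy every .jpg from pictures/ to saved/; the .txt file is there to
# show that the wildcard leaves non-matching files behind.
mkdir -p pictures saved
echo photo > pictures/one.jpg
echo photo > pictures/two.jpg
echo text  > pictures/notes.txt

cd pictures
cp *.jpg ../saved/   # *.jpg expands to one.jpg two.jpg only
cd ..

ls saved             # one.jpg  two.jpg
```

The shell expands *.jpg before cp ever runs, so cp just sees a plain list of matching filenames.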
This command would copy all files in the recovery tree, selecting only the newest file where there are duplicates, and deleting the source file after copying, like move.
In an image collection where the SD card was formatted and more than one PICT0001.jpg may exist, you could use SG instead of SGNO, which would sort the newest file first. There are several other variations as well.