NOTE: works with ext3 and ext4

Just edit the dst1, log1, and srcdev1 variables. This will list all of the files and subfolders on the root of srcdev1 and launch an rdump for each one (one after another, not in parallel). There is good logging involved as well. This should work on all systems.

TIP: read this whole page before running everything (the comments are useful)

CAVEAT: the lost+found folder is not extracted (if you want it, use the script at the very bottom; the only change needed was to the SHARES variable at the top, so that lost+found entries are not discarded)

NOTE: we use debugfs on ext vols that are too corrupt to mount, but not corrupt enough to be completely useless.

START of STEPS

First, test whether you can even access the filesystem via debugfs:

# assuming our ext volume is sda1 (sda1 has the ext filesystem on it)
debugfs -c /dev/sda1
# once inside, type "ls" - if you see the folders and files on the root of your volume, you're set. Also try "ls -p"
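You can also drive debugfs non-interactively with the -R option, which is exactly what the script below does. A minimal sketch; the ls -p field layout shown in the comment is approximate, so verify it against your own volume:

# run a single debugfs command and exit:
debugfs -c /dev/sda1 -R 'ls -p'
# ls -p prints one slash-delimited line per entry, roughly: /inode/mode/uid/gid/name/size/
# which is why the script below uses awk -F'/' '{print $6}' to pull out just the name field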

I recommend starting screen so that everything runs in the background:

apt-get update
apt-get install screen
screen bash
# don't forget to turn swap on, as this can be a memory hog (see the sketch below)
# also don't forget to mount your destination device; in this case it's mounted to /mnt (it's a simple 3TB USB external drive, but that doesn't matter - it could be an NFS or SAMBA share or SSHFS, as long as it's reliable)
# make sure the folder where the log will be saved (the log1 dir) exists, as the script doesn't create it in every case - here it will be created anyway, because the log1 dir is a parent of the dst1 dir, which mkdir -p creates
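A minimal sketch of that prep; the swap file size/path and the /dev/sdb1 destination device are assumptions, so adjust them for your setup:

# enable a swap file (size and path are just examples)
fallocate -l 4G /swapfile
# (use dd if=/dev/zero of=/swapfile bs=1M count=4096 instead if fallocate gives trouble)
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s    # confirm swap is active
# mount the destination drive (here assumed to be /dev/sdb1 - NFS/SAMBA/SSHFS work too)
mkdir -p /mnt
mount /dev/sdb1 /mnt
df -h /mnt   # confirm there is enough free space for the dump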

Here is the script (it's copy-pasteable, just edit the variables; oh, and also make sure swap is enabled, as this might eat a lot of RAM. Note that I have the log1 filename appended with the date; you can change that if you want. You can mod anything under the “-- variables --” section)

(#!/bin/bash
# debugfs rdump recursively; just change the variables, make sure something is mounted to dump to, and make sure swap is on just in case. (the #!/bin/bash line above is optional when copy-pasting)
# copy paste this script after you edit the variables
# -- variables (mod these) -- #
dst1="/mnt/backup/data/"; # data will dump here - folder will be created
log1="/mnt/backup/rdump`date +%s`.log"; # make sure to put on large space incase many files, loggin is append mode
srcdev1="/dev/sda1";
# -- script start -- #
mkdir -p "${dst1}"; # dont forget to create the folder for log dump, in this case its created
# -- dont worry about anything below this line -- #
OLDIFS=${IFS};IFS=$'\n';N=0
SHARES=`debugfs -c ${srcdev1} -R 'ls -p' 2> /dev/null | awk -F'/' '{print $6}'| grep . | egrep -v '^\.*$|lost\+found'`
TOTAL=`echo "${SHARES}" | wc -l`
{ echo "###############################################################################################################";
echo "START: debugfs rdump script - DATE `date` - `date +%s`";
echo "###############################################################################################################";
echo "- Going to from here: ${srcdev1}";
echo "- Going to extract here: ${dst1}";
echo "- Going to log here: ${log1}";
echo "- Going to extract/rdump these $TOTAL share[s] (including iscsi):"; 
echo "${SHARES}" | awk '{x+=1;print x ": " $0;}'; } | tee -a "${log1}"
for i in ${SHARES}; do
N=$((N+1))
echo "###############################################################################################################"
echo "#### ${N}: --- STARTING: ${i} --- DATE [`date`] [`date +%s`]  ####"
echo "- Peek into the files & 1st layer of subfolders in $i:"
debugfs -c ${srcdev1} -R "ls -p \"${i}\"" 2> /dev/null | awk -F'/' '{print $6}'| grep . | egrep -v '^\.*$' | awk '{x+=1;print x ": " $0;}';
echo "#### ${N}: Beginning data dump via debugfs's rdump: ${i}  ####"
debugfs -c ${srcdev1} -R "rdump \"${i}\" \"${dst1}\""
echo "#### ${N}: space reading ${i} ####"
df; df -h; df -BM;
echo "#### ${N}: --- DONE --- going to next share ####"
echo
done 2>&1 | tee -a "${log1}"
IFS=${OLDIFS})
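Once a run finishes, it is worth doing a quick sanity check on the destination; a rough sketch using the same paths as the variables above:

# count the extracted files and see how much space they take
find /mnt/backup/data/ -type f | wc -l
du -sh /mnt/backup/data/
# skim the log for anything debugfs complained about
grep -iE 'error|warn' /mnt/backup/rdump*.log | tail -n 20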

If you really want the lost+found folder, then use this script instead (the same rules and tips apply as for the script above; the only difference is that the SHARES filter no longer excludes lost+found):

(#!/bin/bash
# debugfs rdump recursively; just change the variables, make sure something is mounted to dump to, and make sure swap is on just in case. (the #!/bin/bash line above is optional when copy-pasting)
# copy paste this script after you edit the variables
# -- variables(mod these) -- #
dst1="/mnt/backup/data/"; # data will dump here - folder will be created
log1="/mnt/backup/rdump`date +%s`.log"; # make sure to put on large space incase many files, loggin is append mode
srcdev1="/dev/sda1";
# -- script start -- #
mkdir -p "${dst1}"; # dont forget to create the folder for log dump, in this case its created
# -- dont worry about anything below this line -- #
OLDIFS=${IFS};IFS=$'\n';N=0
SHARES=`debugfs -c ${srcdev1} -R 'ls -p' 2> /dev/null | awk -F'/' '{print $6}'| grep . | egrep -v '^\.*$'`
TOTAL=`echo "${SHARES}" | wc -l`
{ echo "###############################################################################################################";
echo "START: debugfs rdump script - DATE `date` - `date +%s`";
echo "###############################################################################################################";
echo "- Going to from here: ${srcdev1}";
echo "- Going to extract here: ${dst1}";
echo "- Going to log here: ${log1}";
echo "- Going to extract/rdump these $TOTAL share[s] (including iscsi):"; 
echo "${SHARES}" | awk '{x+=1;print x ": " $0;}'; } | tee -a "${log1}"
for i in ${SHARES}; do
N=$((N+1))
echo "###############################################################################################################"
echo "#### ${N}: --- STARTING: ${i} --- DATE [`date`] [`date +%s`]  ####"
echo "- Peek into the files & 1st layer of subfolders in $i:"
debugfs -c ${srcdev1} -R "ls -p \"${i}\"" 2> /dev/null | awk -F'/' '{print $6}'| grep . | egrep -v '^\.*$' | awk '{x+=1;print x ": " $0;}';
echo "#### ${N}: Beginning data dump via debugfs's rdump: ${i}  ####"
debugfs -c ${srcdev1} -R "rdump \"${i}\" \"${dst1}\""
echo "#### ${N}: space reading ${i} ####"
df; df -h; df -BM;
echo "#### ${N}: --- DONE --- going to next share ####"
echo
done 2>&1 | tee -a "${log1}"
IFS=${OLDIFS})
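If you only want to pull lost+found by itself rather than re-running the whole loop, a single rdump of that one directory should do it; a minimal sketch using the same device and destination assumed above:

debugfs -c /dev/sda1 -R 'rdump "lost+found" "/mnt/backup/data/"'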

After launching it, since it's running in screen, you can detach via “CONTROL-A d”; your processes will still run, as you can confirm with “ps ax”, “ps”, or “top -cbn1”. You can also “tail -f” your log file, or even run a watch command like the ones below, which show top info, df info, and the last few lines of the log file:

# with a specific log file:
watch "top -cbn1 | head -n12 | grep .; echo ---; df; df -BM; df -h; echo ---; tail -n10 /mnt/backup/rdump.log"

# to grab the latest rdump log file automatically:
watch "top -cbn1 | head -n12 | grep .; echo ---; df; df -BM; df -h; echo ---; tail -n10 /mnt/backup/`ls -1St /mnt/backup/ | grep rdump | head -1`"

 
