All posts for the month February, 2012
seq -s : 1 10
1:2:3:4:5:6:7:8:9:10
seq -s / 1 10
1/2/3/4/5/6/7/8/9/10
seq -s // 1 10
1//2//3//4//5//6//7//8//9//10
echo {a..z}
a b c d e f g h i j k l m n o p q r s t u v w x y z
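Both tricks are handy for building delimiter-joined strings. A minimal sketch combining them (the variable names are just for illustration):

```shell
# seq's -s flag joins the generated numbers with the given separator
joined=$(seq -s : 1 5)
echo "$joined"    # 1:2:3:4:5

# Brace expansion generates the words; printf joins them, then the
# trailing separator is trimmed with ${var%/}
dirs=$(printf '%s/' {a..c})
echo "${dirs%/}"  # a/b/c
```

Brace expansion is a bash feature, while seq is an external coreutils program, so the brace form works even where seq isn’t installed.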
In the past I’ve always removed empty lines using
:g/^$/d
However, I just found an easier way (:v runs the command on every line NOT matching the pattern, so this deletes every line containing no characters at all):
:v/./d
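The same cleanup works outside vim too; a couple of shell equivalents, sketched against a hypothetical input.txt:

```shell
# grep . keeps only lines containing at least one character
grep . input.txt > cleaned.txt

# sed deletes empty lines in place (-i as implemented by GNU sed)
sed -i '/^$/d' input.txt
```

Like the vim commands above, both leave whitespace-only lines alone, since those do contain characters.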
Got a ticket today about a possible “HDD issue” because the user was unable to run du -h on a partition. I looked and figured out it was just a directory with 12+ million files in it. Not an exaggeration: since the box was out of production, I was able to run ls | wc -l on it inside a screen session. Not sure how long it took, but the next morning I saw this after re-attaching to the screen session. (Still no clue where he got “possible HDD issue” from, but whatever.)
# ls | wc -l
12399466
After cleaning out my shorts I started looking for a way to delete the files, knowing full well that rm on that many names was going to fail with the “Argument list too long” error, which I’m sure we have all seen time and time again.
I also tried the following. They would actually have worked, but it would have taken about two days to remove all those files, which was too long.
find . -type f -delete
find . -type f | xargs rm -f
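As an aside, if any of those filenames had contained spaces or newlines, the xargs form would need null delimiters. A standard variant (not something this particular directory required):

```shell
# -print0 / -0 pass names NUL-separated, safe for any filename
find . -type f -print0 | xargs -0 rm -f
```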
Then I found a post by Randal L. Schwartz about using a perl one-liner, which bypasses the shell’s argument expansion completely. It is working, and it is deleting the files pretty quickly.
perl -e 'chdir "problem_dir" or die; opendir D, "."; while ($n = readdir D) { unlink $n }'
In relative path terms…
You will want to run this perl one-liner one level up from the directory that contains the millions of files. Change “problem_dir” to the name of the directory holding them.
In absolute path terms…
Run the perl one-liner from wherever you like, and change “problem_dir” to the absolute path, for example /usr/local/nas/logs/
I would think the relative-path version would be a tad faster, but I honestly don’t know. Either way, you should see the free space on the file system slowly increasing via df -k
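To keep an eye on the delete without re-running df by hand, a small polling loop works; this is just a sketch, and the mount point, sample count, and interval arguments are placeholders:

```shell
# Print the df line for a filesystem n times, sleeping between samples
watch_df() {
    mnt=${1:-.}       # filesystem to watch
    n=${2:-10}        # number of samples
    interval=${3:-30} # seconds between samples
    i=0
    while [ "$i" -lt "$n" ]; do
        df -k "$mnt" | tail -n 1
        i=$((i + 1))
        sleep "$interval"
    done
}

# e.g. watch_df /usr/local/nas 10 30
```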