file systems


Mileage will vary depending on your version of Linux, but first grab your disk device names:

$ df -h

Filesystem Size Used Avail Use% Mounted on
/dev/sda1 55G 1.9G 50G 4% /
/dev/sda5 126G 46G 80G 37% /home
/dev/sda9 26G 522M 24G 3% /tmp
/dev/sda7 103G 15G 84G 15% /usr
/dev/sda6 32G 1.4G 29G 5% /var

$ sudo file -s /dev/sda{,1,5,6,7,9}

/dev/sda: x86 boot sector
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=1bdf35a5-6ae0-402d-ab61-853ca3877d94 (needs journal recovery) (extents) (large files) (huge files)
/dev/sda5: Linux rev 1.0 ext4 filesystem data, UUID=b9646594-a7b2-4cfd-a909-d9b6df2ff698 (needs journal recovery) (extents) (large files) (huge files)
/dev/sda6: Linux rev 1.0 ext4 filesystem data, UUID=fc0355a1-05f8-416e-8bde-24287782d45b (needs journal recovery) (extents) (large files) (huge files)
/dev/sda7: Linux rev 1.0 ext3 filesystem data, UUID=64395184-64bf-4b52-90cb-e0afff7c678c (needs journal recovery) (large files)
/dev/sda9: Linux rev 1.0 ext4 filesystem data, UUID=9c56ff6c-052d-4329-a956-43d7df88e6d7 (needs journal recovery) (extents) (large files) (huge files)
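
On newer distros the same information is usually exposed through the util-linux tools as well; a quick alternative sketch (assuming lsblk and blkid are installed, and the device name is just an example):

$ lsblk -f                # filesystem type, label and UUID for every block device
$ sudo blkid /dev/sda1    # the same details for a single partition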

I’m using SLES 11 SP1; other distros should have similar files.

/proc/fs/nfsfs/servers
/proc/fs/nfsfs/volumes

/var/lib/nfs/etab:
contains information about which filesystems should be exported to whom at the moment.
/var/lib/nfs/rmtab:
contains a list of which filesystems are actually mounted by which clients at the moment.
/proc/fs/nfs/exports:
contains information about which filesystems are exported to actual clients (individual hosts, not subnets or netgroups) at the moment.
/var/lib/nfs/xtab:
contains the same information as /proc/fs/nfs/exports but is maintained by nfs-utils instead of directly by the kernel. It is only used if /proc isn’t mounted.
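
To poke at the current export state you can read those files directly or ask nfs-utils; a minimal sketch (run on the NFS server, assuming nfs-utils is installed):

$ sudo exportfs -v             # exports as nfs-utils currently sees them
$ cat /var/lib/nfs/etab        # what should be exported, and to whom
$ cat /proc/fs/nfsfs/volumes   # NFS volumes this host has mounted as a client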

Some helpful URLs

http://www.novell.com/communities/node/3787/configuring-nfsv4-server-and-client-suse-linux-enterprise-server-10
http://www.softpanorama.org/Commercial_linuxes/Suse/Networking/suse_nfs.shtml

(Taken from blog.makezine.com)

Linux Tip: super-fast network file copy

If you’ve ever had to move a huge directory containing many files from one server to another, you may have encountered a situation where the copy rate was significantly less than what you’d expect your network could support. Rsync does a fantastic job of quickly syncing two relatively similar directory structures, but the initial clone can take quite a while, especially as the file count increases.

The problem is that there is a certain amount of per-file overhead when using scp or rsync to copy files from one machine to another. This is not a problem under most circumstances, but if you are attempting to duplicate tens of thousands of files (think server or database backups), this per-file overhead can really add up. The solution is to copy the files over in a single stream, which normally means tarring them up on one server, copying the tarball, then untarring it on the destination. Unless the source server is below 50% disk utilization, this could cause you to run out of space.
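
For reference, that tar-copy-untar approach looks roughly like this (the hostname and paths are placeholders); it is the temporary tarball that eats the extra disk space:

$ tar -czf /tmp/backup.tgz /source/dir
$ scp /tmp/backup.tgz user@destination:/tmp/
$ ssh user@destination 'tar -xzf /tmp/backup.tgz -C /'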

Brett Jones has an alternative solution, which uses the handy netcat utility:

After clearing up 10 GB of log files, we were left with hundreds of thousands of small files that were going to slow us down. We couldn’t tarball the files because of a lack of space on the source server. I started searching around and found this nifty tip that takes out the encryption and streams all the files as one large file:

This requires netcat on both servers.

Destination box: nc -l -p 2342 | tar -C /target/dir -xzf -
Source box: tar -czf - /source/dir | nc Target_Box 2342

This causes the source machine to tar the files up and send them over the netcat pipe, where they are extracted on the destination machine, all with no per-file negotiation and no unnecessary disk space used. It’s also faster than the usual scp or rsync over ssh because there is no encryption overhead. If you are on a local protected network, this will perform much better, even for large single-file copies.

If you are on an unprotected network, however, you may still want your data encrypted in transit. You can perform about the same task over ssh:

Run this on the destination machine:
cd /path/to/extract/to/
ssh user@source.server 'tar -czf - -C /source/path/ .' | tar -zxvf -

This command will issue the tar command across the network on the source machine, causing tar’s stdout to be sent back over the network. This is then piped to stdin on the destination machine and the files magically appear in the directory you are currently in.
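
The same idea should work in the other direction if you’d rather push from the source machine (host and paths are placeholders again).

Run this on the source machine:
cd /source/path/
tar -czf - . | ssh user@destination.server 'tar -zxvf - -C /path/to/extract/'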

The ssh route is a little slower than using netcat, due to the encryption overhead, but it’s still way faster than scp’ing the files individually. It also has the added advantage of potentially working with Windows servers, provided you have a few of the Unix tools like ssh and tar installed on the Windows side (using the Cygwin binaries that are available).
