2008-11-22

Creating a home web and file server with encryption using Xubuntu 8.04, an old laptop, and some external drives

The goal of this project was to convert an old Dell Inspiron 2650 laptop into a combined home webserver and Samba fileserver with the data for the fileserver being stored on encrypted external hard drives. This post will give an overview of the process, and following posts will give the nitty-gritty details.

Here are the requirements I made up for this project, and the reasons behind them:

  • Home webserver. I work from home and my wife is also home all day, so we have a number of computers that we use throughout the day (her Vista laptop, my work Windows 2000 desktop, my personal Mac Mini, assorted old laptops with Ubuntu on them, and a MythTV machine running on a Via Epia SP8000e). Trying to store and retrieve information across 3 operating systems, different applications, and multiple computers got to be a pain, so a while ago I decided to write various web applications in PHP to keep track of basic things like family finances, to-do lists, and other information. That way we could both access information from any computer at any time, and there was no need to find and maintain compatible software across 3 operating systems and 3+ computers.
  • Samba fileserver. Given all our computers and operating systems I decided a long time ago to keep all of our files on one central fileserver so that we wouldn't have to deal with "oh, which machine is that file on?" and "oh, I forgot to back up that machine whose hard drive just failed." For years I had been using an NSLU2 NAS with two external hard drives attached for this purpose.
  • Old Dell Inspiron 2650 as the server hardware. Old laptops make the best home servers because:
    1. They are free or cheap,
    2. They pull a lot less electricity than a desktop, which is important in something that's on 24/7,
    3. They are small and you can fit them on a shelf somewhere, and
    4. They are usually quiet.
  • Data stored on external hard drives. The NSLU2 got me used to the idea of keeping data on an external hard drive, so that it's pretty easy to upgrade or replace a disk, and so that if the system drive fails it doesn't take the data down with it.
  • Encrypted data drives. Last week a neighbor's house got robbed in the middle of the day. This got me thinking about how I would feel if some thief grabbed the external drives from my NSLU2 in a generic burglary. Those drives have scans of bank statements and other financial information on them. Sure, the average thief is probably not going to dig through them for financial info, but if they were stolen I would feel obligated to post a fraud alert with the credit agencies, cancel all my credit cards, change the passwords on all my bank accounts, maybe close all my financial accounts and open new ones, and then keep a close eye on everything for months. With the data stored on encrypted drives, a burglary would just mean buying new hardware, restoring from my offsite backup, and moving on.
  • The real reason: For the fun of it. Of course the real reason for doing this project is that it sounded like a fun (hopefully) and educational challenge. I like learning new things, especially about computers, and especially the hard way. I have discovered that the best way for me to master a new skill is to set out to do some project where I have no idea what I am doing, and figure everything out as I go along. I find that having a concrete goal forces me to tackle the hard stuff head-on rather than skipping over it.

The first thing I had to do was buy a USB 2.0 PCMCIA (or is it CardBus?) card since the Inspiron 2650 only has USB 1.1 ports and I wanted faster USB ports for the external USB data drives. I went to newegg.com and ordered a $13 card that had lots of decent reviews and which a couple reviewers said worked with Linux.

I had an extra USB drive enclosure laying around and an extra 120 GB drive, so I put them together to make an external hard drive. I plugged it into my laptop using my new USB 2.0 CardBus card and Xubuntu recognized it. In a previous life the drive had been formatted as an Ubuntu system drive, so it had two partitions formatted with Linux filesystems. I wasn't sure if the underlying filesystem mattered with an encrypted drive (i.e. would I be able to mount it on a Mac or Windows machine using Truecrypt), so just to be sure I installed gparted and used it to delete the existing partitions and reformat the whole drive as FAT32. In hindsight reformatting the drive was probably not necessary and I could have gone straight to formatting it with Truecrypt.

Next I looked into increasing the memory on the Inspiron 2650. I had the original 128 MB of system memory, plus a 256 MB memory module I had installed in the second, user-accessible slot years ago, but I wanted more memory headroom to improve performance. I did some research on the web and found out that the maximum memory for the Inspiron 2650 is 512 MB, and that to achieve that you have to do some major surgery to get at memory slot 1 under the keyboard:

http://episteme.arstechnica.com/eve/forums/a/tpc/f/579009962631/m/972009296731

I dug through my box of old RAM and found the sibling of the 256 MB module already installed in the user-accessible slot. I had received two modules but had never used the second because I thought I could only add memory to the user-accessible slot. Following the directions from the link I removed the original 128 MB memory module, installed the 256 MB module, rebooted, and voilà: top showed that the system now had 515 MB of physical memory. Sweet.

Next I installed the desktop version of Xubuntu 8.04. I could have gone with the command line version, but I had an Xubuntu CD already and I didn't want to bother with downloading the alternate CD and burning it.

Then I installed the following packages using aptitude install:

  • openssh-server (so I could administer it from remote machines)
  • samba (so it could serve files to windows boxes)
  • smbfs (just seemed like a good idea, may not have been necessary)

I tried to install Truecrypt from the Ubuntu repositories but it's not there, so I went to the Truecrypt website and downloaded the Deb package for Ubuntu. All I had to do to install it was unpack the single file from the compressed archive and then click it to run a script that handled the whole install. I picked Truecrypt for my encryption because I was already familiar with it, and because it is available on Mac, Linux, and Windows it would let me mount the encrypted drive on any old machine if necessary.

Then I edited /etc/network/interfaces to give the laptop a fixed IP address.
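The stanza I ended up with in /etc/network/interfaces looked something like this (the address matches the server IP used later in this post; the netmask and gateway are assumptions about my network):

```
auto eth0
iface eth0 inet static
    address 10.10.10.160
    netmask 255.255.255.0
    gateway 10.10.10.1
```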

Next I verified that the system could turn off the laptop's LCD backlight by running:

xset dpms force off

I tested this because the system will be on 24/7 and I want to keep the backlight off most of the time to save watts. Then I configured the screensaver and power management options to turn off the backlight when the laptop is idle (see my recent post on this subject for how to do this).

To make sure the time is always correct on the Xubuntu laptop server I installed ntp and ntpdate:

sudo aptitude install ntp ntpdate

When I ran this it said it was only installing one package (ntp), so ntpdate was probably already installed and this step may not have been necessary.
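For the record, the time-server lines in /etc/ntp.conf look something like this (these pool hostnames are the standard Ubuntu pool servers; the stock 8.04 config may simply point at ntp.ubuntu.com instead, which also works fine):

```
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
```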

Next I encrypted the external hard drive using Truecrypt and copied over all the data from my existing NSLU2 NAS device. I was in Linux hell for days with this task because I had started out formatting the Truecrypt partition as FAT32, and the disk I was copying the data from was FAT32, and as a result I had all kinds of FAT32-related problems with permissions, accented characters, and preserving original timestamps on files and directories. All the gory details are posted in another blog entry. In the end I gave up on having the encrypted drive be FAT32 and just formatted it with ext3, and everything went smoothly after that. If I had spent more time on it I probably could have figured out how to make everything work with a FAT32 Truecrypt partition, but I just got tired of messing with it and decided to bail on the whole idea.

I have done a separate post with the details on creating an ext3 formatted Truecrypt partition on an external drive.

Next I set up a test Samba share. Since this file server is for family use, and since passwords and complications have a low spousal acceptance factor, I wanted to set up Samba so that no username or password is required to access it. Most of the how-to guides online only covered setting up Samba with security, but I finally found a couple of guides that went through setting up a public share:

Private and guest (no password prompt) Samba shares with security=user

guide to setting up Samba so it's wide open and no passwords are required

It ended up taking some fiddling and tweaking to get the Samba share to work properly with all the OSs in the house (Windows XP, Windows Vista, Mac OS X, Ubuntu), and to get permissions and ownership of files sorted out. The details of how I configured the Samba share are in another post.

Then I set up the laptop to be a CUPS print server, as detailed in this post.

Then I set up the laptop as a LAMP server and migrated my existing home web server applications over to the laptop as detailed in this post.

Once I had everything installed and configured on the Xubuntu laptop server it was time to decommission my existing home web and print server and replace it with the new one. To make things more complicated, I decided to give the new server the same IP as the old one so I wouldn't have to change any settings on any other computers on the network.

  • First I made backups of all of the SQL databases on the old server and imported them into the MySQL server on the new server to make sure the new server was up to date.
  • I changed the IP address and name on the old home server by editing /etc/network/interfaces and /etc/hostname and /etc/hosts so that it wouldn't cause conflicts if I needed to boot it up again to get something off of it.
  • Then I shut down my existing home server.
  • Next I edited /etc/network/interfaces, /etc/hostname and /etc/hosts on the new server to change its name and address to the name and address of the old server and then rebooted. Everything seemed to be working on the web server.
  • However, when I tried to SSH into the new server at the old server's address I got a message that the fingerprint for the host key had changed and so authentication failed.
  • A little googling revealed that the easy way to fix the problem was to delete the existing RSA key from the client known_hosts file and then ssh to the server again. That causes the client to see the server as a new server and prompt to download the host RSA key again. On my Mac Mini the file I edited to delete the old RSA key was /Users/andy/.ssh/known_hosts and I had to use the Open Hidden menu option on Smultron to be able to navigate to it.
  • Next I shut down the new server and physically moved it to take the place of the old server, hooked up the printer to it, and powered it up.
  • I tested the web server by pointing my browser at the server and verified it worked, and also tested that printing to the new server worked.
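Instead of editing known_hosts by hand, ssh-keygen's -R option can delete the stale entry for you. Here's a self-contained demo using a throwaway known_hosts file ("oldserver" stands in for the real hostname):

```shell
# Build a throwaway known_hosts file containing a demo entry
# ("oldserver" and the generated key are stand-ins for the real ones)
rm -rf /tmp/sshdemo && mkdir -p /tmp/sshdemo
ssh-keygen -t rsa -N '' -f /tmp/sshdemo/hostkey >/dev/null
echo "oldserver $(cat /tmp/sshdemo/hostkey.pub)" > /tmp/sshdemo/known_hosts

# Remove the stale "oldserver" entry from that known_hosts file
ssh-keygen -R oldserver -f /tmp/sshdemo/known_hosts
```

Without -f it operates on ~/.ssh/known_hosts by default, which would have saved me the trip through Smultron's Open Hidden menu.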
Once I got the new server up and running using the address of the old server I set up a script to back up the MySQL databases on it:
  • First I created two directories under my home directory on the server:
    mkdir /home/andy/backups
    mkdir /home/andy/scripts
  • Then I copied my old backup script into the scripts directory and set it to be executable:
    chmod 700 backup_script.sh
  • Then I tried running the backup script:
    ./backup_script.sh
    But that gave me an error:
    /bin/sh^M: bad interpreter: No such file or directory
  • A little googling showed that I needed to install the sysutils package:
    sudo aptitude install sysutils
  • And then run this utility on the file, apparently because I had copied it from a non-Linux machine:
    dos2unix backup_script.sh
    After I ran that utility the script ran just fine.
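The contents of my old backup script aren't shown here, so below is a hypothetical sketch of what such a script might look like (the database name, user, password, and paths are made-up placeholders, not my real values); it's written out to a file and syntax-checked without actually running mysqldump:

```shell
# Write out a hypothetical MySQL backup script; DB name, user, and
# password below are placeholders, not real values.
cat > /tmp/backup_script.sh <<'EOF'
#!/bin/sh
# Dump one MySQL database to a date-stamped, gzipped file
BACKUP_DIR=/home/andy/backups
DB_NAME=homeapps
STAMP=`date +%F`
mysqldump -u backup_user -pPLACEHOLDER "$DB_NAME" | gzip > "$BACKUP_DIR/$DB_NAME-$STAMP.sql.gz"
EOF

# Make it executable and check the syntax without executing it
chmod 700 /tmp/backup_script.sh
sh -n /tmp/backup_script.sh && echo "syntax OK"
```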

Once the web server had been migrated I migrated the files from my existing NSLU2 NAS device. Rather than try to copy from my NSLU2 to my new server over the network (which would be slow), I took a FAT32 external drive that had a backup of the NSLU2 files, connected it to the Xubuntu laptop server, and then mounted it:

sudo mkdir /media/heh
sudo chmod 0777 /media/heh
sudo mount -t vfat -o shortname=mixed,iocharset=utf8 /dev/sdc1 /media/heh
(after looking in /dev to see what device name the USB drive had received)

The vfat option "shortname=mixed" is necessary to prevent Linux from converting all short file and directory names that are all uppercase to all lowercase. The "iocharset=utf8" option is to make sure that accented characters in file and directory names don't get replaced with underscores by Linux.

Once I had the external hard drive with all the files on it mounted, then I copied all of them to the encrypted drive:

sudo -u nobody cp -arv /media/heh/zihuatanejo/"my documents"/* /media/truecrypt1/Documents

This copies all the files and directories from "my documents" into the existing directory "Documents". I did it this way because I wanted to get rid of the stupid "my documents" directory name, which was a holdover from long ago when all this data lived on a Windows machine.

Once the copy was done I set open permissions on all of the copied files:

sudo chmod -R a+rw /media/truecrypt1/Documents

Then I did some cursory testing to make sure that I could create, delete, copy, and modify files on the encrypted drive from a remote computer through a mounted Samba share.

Next came the acid test: I ran a robocopy between my NSLU2 and the new encrypted Samba share from a Windows box to see if all of the files were there like they were supposed to be.

robocopy "z:\my documents" p:\ /mir /XO /NP /log+:"C:\Documents and Settings\Owner\Data\logs\Z_to_P.1.txt"

Reviewing the log file after it completed showed that only the new files that should have been copied were copied, so everything looked good so far. I checked the owner and permissions on some of the files that robocopy had copied over to the encrypted Samba share, and they were as they were supposed to be.

Now that I had one encrypted external drive set up, it was time to set up a second encrypted external drive for a mirror of the first drive. I encrypted the second drive as I will detail in a later post.

Once the second external drive had been formatted as an encrypted drive using Truecrypt I set up directories on it. First I created, and set permissions on, a backup directory plus a directory to hold files and directories after they are removed from the backup directory:

cd /media/truecrypt2
sudo -u nobody mkdir Documents
sudo chmod -R a+rw Documents/
sudo -u nobody mkdir Documents-deleted
sudo chmod -R a+rw Documents-deleted

Then I did an initial copy from the main encrypted drive to the backup encrypted drive:

sudo -u nobody cp -arv /media/truecrypt1/Documents/* /media/truecrypt2/Documents > /home/andy/logs/2008-11-16-0853.log

I reviewed permissions on /media/truecrypt2 to make sure they seemed correct.

Then I wrote an rsync script to back up truecrypt1 to truecrypt2, with modified or deleted files getting kicked into Documents-deleted.

#!/bin/sh
# Script name: sync_truecrypt_mounts.sh
# Backup /media/truecrypt1 to /media/truecrypt2
echo '\n' >> /home/andy/logs/rsync.log
echo 'Start rsync: ' >> /home/andy/logs/rsync.log
date '+%Y-%m-%d_%H:%M:%S' >> /home/andy/logs/rsync.log
rsync -av --delete --stats --backup --backup-dir=/media/truecrypt2/Documents-deleted --suffix=`date +"_%F"` /media/truecrypt1/Documents/ /media/truecrypt2/Documents >> /home/andy/logs/rsync.log
echo 'Rsync finished: ' >> /home/andy/logs/rsync.log
date '+%Y-%m-%d_%H:%M:%S' >> /home/andy/logs/rsync.log
# End of script

I tested the script by running it:

sudo -u nobody ./sync_truecrypt_mounts.sh

And it did what I expected. So on to making it a cron job.

sudo -u nobody crontab -e

Then I set up another cron job to backup my MySQL databases every night a few minutes before the sync runs.
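Together, the two jobs looked roughly like the crontab lines below (the times are examples, with the MySQL backup a few minutes before the sync; note that the sync job lives in user nobody's crontab while the backup runs from mine, so they're shown together here only for illustration):

```
# m h dom mon dow   command
50 3 *   *   *      /home/andy/scripts/backup_script.sh
0  4 *   *   *      /home/andy/scripts/sync_truecrypt_mounts.sh
```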

Creating a Truecrypt partition that uses the ext3 filesystem

For reasons given in another post I decided that I wanted the external data hard drives on my Xubuntu laptop home server to be encrypted, and I wanted the encrypted partitions to use the ext3 filesystem. The only thing that makes this challenging at all is that there is no option in the Truecrypt GUI to create a volume with the ext3 filesystem. The only options in the GUI are FAT and None.

The first step was to install Truecrypt. For some reason it's not in the Ubuntu repositories, so I downloaded a Deb package from the Truecrypt website and used the script that came with it to install it. Pretty easy.

First I launched the truecrypt GUI by opening a terminal and running:

truecrypt

Then I plugged in the external drive and looked to see what device name it had received:

ls /dev/sd*

I knew sda and sda1 were the system hard drive, so sdb and sdb1 had to be the external drive, and I knew from experience that I wanted to create the Truecrypt partition on sdb1 and not sdb.

Using the Truecrypt GUI I told it to create a new encrypted partition on /dev/sdb1 but specified None for file system instead of FAT, and chose Quick Format (despite the warnings) since I knew the disk had previously been written over with random data on previous encryption efforts.

Once the new encrypted partition was created I used the GUI to mount it, being careful to click the button for options, and then checking the box for mounting without a file system.

Once the new encrypted partition was mounted without a file system I looked up its mount point in a terminal:

truecrypt -l

And saw that it was at /dev/mapper/truecrypt1.

Then I formatted it with the ext3 filesystem with the following command:

sudo mkfs.ext3 /dev/mapper/truecrypt1

Once it was done I dismounted the Truecrypt partition:

truecrypt -d

And then I remounted it from the command line:

truecrypt /dev/sdb1 /media/truecrypt1

Then I did chown and chmod on its mount point:

sudo chown nobody:nogroup /media/truecrypt1
sudo chmod -R a+rw /media/truecrypt1

That seemed to work for me!

Setting up a completely insecure Samba share in Xubuntu

As mentioned in another post, I wanted to set up a completely insecure Samba share on my Xubuntu laptop home server to serve as the family file server. What follows is an account of how I stumbled onto something that appeared to more or less work. I really don't know a thing about Samba so I am sure this is riddled with mistakes and bad advice.

Private and guest (no password prompt) Samba shares with security=user

This sounded like a good approach to me, and it seemed to work with my Mac Mini and Windows XP boxes as clients, but when I tried to mount the share on our Windows Vista laptop it kept prompting me for a username and password and wouldn't let me mount the share without one.

So I did some more googling and found this guide:

guide to setting up Samba so its wide open and no passwords are required.

In the end I took bits and pieces from both guides, mashed them into the default smb.conf file that comes with Xubuntu, then added some stuff I figured out along the way, and ended up with this as my final configuration in /etc/samba/smb.conf. I have edited out all the commented out options that bulk up the default smb.conf file.

[global]

# Change this to the workgroup/NT-domain name your Samba server will be part of
workgroup = MYWORKGROUPNAME

# server string is the equivalent of the NT Description field
server string = %h server (Samba, Ubuntu)

# This will prevent nmbd from searching for NetBIOS names through DNS.
dns proxy = no

# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
interfaces = 127.0.0.0/8 eth0

# This tells Samba to use a separate log file for each machine
# that connects
log file = /var/log/samba/log.%m

# Cap the size of the individual log files (in KiB).
max log size = 1000

# We want Samba to log a minimum amount of information to syslog. Everything
# should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log
# through syslog you should set the following parameter to something higher.
syslog = 0

# Do something sensible when Samba crashes: mail the admin a backtrace
panic action = /usr/share/samba/panic-action %d

# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
security = share

## You may wish to use password encryption. See the section on
# 'encrypt passwords' in the smb.conf(5) manpage before enabling.
encrypt passwords = true

# If you are using encrypted passwords, Samba will need to know what
# password database type you are using.
passdb backend = tdbsam

obey pam restrictions = yes

guest account = nobody
invalid users = root

# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
unix password sync = yes

# For Unix password sync to work on a Debian GNU/Linux system, the following
# parameters must be set (thanks to Ian Kahan for sending the correct chat
# script for the passwd program in Debian Sarge).
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

# This boolean controls whether PAM will be used for password changes
# when requested by an SMB client instead of the program listed in
# 'passwd program'. The default is 'no'.
pam password change = yes

# This option controls how unsuccessful authentication attempts are mapped
# to anonymous connections
map to guest = bad user

# Most people will find that this option gives better performance.
# See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html
# for details
# You may want to add the following on a Linux system:
# SO_RCVBUF=8192 SO_SNDBUF=8192
socket options = TCP_NODELAY

# Allow users who've been granted usershare privileges to create
# public shares, not just authenticated ones
usershare allow guests = yes

[public]
comment = Public Share
# Path to directory
path = /media/truecrypt1
# Allow writing to share
read only = No
# Force connections as guests
guest only = Yes
guest ok = Yes
# Optionally, specify guest account here
guest account = nobody
# These two are optional
# Force full permissions on files/directories created on this share
force create mode = 777
force directory mode = 777

Then I restarted samba by running:

sudo /etc/init.d/samba restart

Once I had Samba up and running I had to untangle all the various Linux permissions issues. In the past I have run into weird issues where only the Owner of a file could do certain things, even though all permissions were set to allow all users to do anything, so out of an abundance of caution I just recursively changed the owner of the Truecrypt mount point to "nobody" and the group to "nogroup", which is the user that Samba uses under my smb.conf:

sudo chown nobody:nogroup /media/truecrypt1

I also gave everyone read-write permissions on the mount point:

sudo chmod -R a+rw /media/truecrypt1/

When I copied files over to the encrypted drive from within the filesystem (i.e. not through the Samba server) I was always careful to prefix my commands with "sudo -u nobody", which runs the command as user nobody and made sure all the files I copied over had the right owner. Another way to do this would have been to just copy the files and then do a recursive chown on the encrypted drive. Also, after I copied files over to the encrypted drive through the filesystem I ran this again to ensure that all the copied files had the permissions I wanted:

sudo chmod -R a+rw /media/truecrypt1/

When I did some test copies of files from remote machines over the Samba share I kept having problems with files not getting the right permissions. It turns out that Linux permissions are not inherited from the parent directory; instead, the permissions of new files are determined by the umask system variable. I think it's possible to set a umask for each user (maybe). After a lot of research and some trial and error I learned that if these values are put in the share definition in /etc/samba/smb.conf:

force create mode = 777
force directory mode = 777

Then that tells Samba that all new files and directories have full read-write-execute permissions. In the long term I should probably go back and reconfigure all of the permissions settings, but at that point I just wanted to get it working in some fashion or another.
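This non-inheritance is easy to demonstrate in a terminal (the paths below are throwaway examples):

```shell
# Parent directory is wide open (777)...
mkdir -p /tmp/permdemo
chmod 777 /tmp/permdemo

# ...but with a typical umask of 022, a new file still comes out 644
umask 022
touch /tmp/permdemo/testfile
ls -l /tmp/permdemo/testfile    # -rw-r--r-- : permissions come from the umask, not the parent
```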

Setting up a CUPS server using Xubuntu 8.04

I plugged an HP LaserJet 1012 into my laptop server. Then I edited the CUPS configuration file:

sudo nano /etc/cups/cupsd.conf

And made the following changes in the file:
  • I changed: BrowseAllow @LOCAL
    to:
    BrowseAllow all
  • In the blocks: <Location />, <Location /admin> and <Location /admin/conf>
    I changed
    Order deny,allow
    to
    Allow all
  • I changed: Listen localhost:631
    to:
    Listen 631
  • I added the line: DefaultEncryption Never
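Pieced together, the edited parts of /etc/cups/cupsd.conf looked approximately like this (a reconstruction of the changes above, abbreviated rather than a verbatim copy of my file):

```
# Listen on all interfaces instead of just localhost
Listen 631

# Show shared printers to everyone and don't force encryption
BrowseAllow all
DefaultEncryption Never

<Location />
  Allow all
</Location>
<Location /admin>
  Allow all
</Location>
<Location /admin/conf>
  Allow all
</Location>
```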
Then I tried this to allow cupsys to prompt me for my Paridita password when doing admin tasks on the web interface:

sudo adduser cupsys shadow

But I got a message that user cupsys doesn't exist, and no amount of googling revealed an answer so I decided to plow on ahead.

I connected to the CUPS web interface by pointing my browser at http://10.10.10.160:631 and then I added a printer using the CUPS web interface specifying the correct driver for my printer.

Then on the CUPS web interface I went back, selected Modify Printer on my newly created printer, and changed the make and model to "Raw" so that each client computer can choose which printer driver to use. I did this because I have had problems with the Ubuntu driver for the HP LaserJet 1012 (the dreaded "unsupported personality" problem), so by setting the printer up as a "raw" printer I can specify the printer driver on the client machines and thus use a different printer driver.

Setting up a LAMP server using Xubuntu 8.04

In Ubuntu there is a handy-dandy little command line utility that allows you to install a number of predefined package collections (like LAMP server) from the command line:

sudo tasksel

I fired this up in a terminal and then selected "LAMP server" on the primitive GUI and clicked OK. After the install was done (maybe 2 minutes), I updated all of my packages to bring the new packages up to date.  Then I pointed my browser on another machine towards my laptop's IP address and got a page that says "It works!" which is just what I wanted to be told.

Was phpMyAdmin installed by default as part of the LAMP server package? I tried pointing my browser at http://10.10.10.160/phpmyadmin and got nothing, so I guess not. That was easy to fix:

sudo aptitude install phpmyadmin

I tried http://10.10.10.160/phpmyadmin again and it worked! I logged in as root using the MySQL root password I defined in response to a prompt during the LAMP install. Then I created a new user "andy@localhost" and gave this user all possible privileges on the MySQL database.

I like to have a special user called "www" and then have the web server's document root be that user's home directory. That way, when I want to work with the web server's files I can just log in as "www" and go to work. So first I set up the user and created a password:

sudo adduser www
sudo passwd www

And then I pointed apache to the home directory of user www:

sudo nano /etc/apache2/sites-available/default

I changed the DocumentRoot value to /home/www and saved the file and then restarted Apache:

sudo /etc/init.d/apache2 restart
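For reference, the relevant part of /etc/apache2/sites-available/default ended up looking something like this (a reconstruction, not a verbatim copy: only DocumentRoot needed to change, and I'm assuming the matching <Directory> block gets updated to the same path, which is how the stock file is laid out; the rest of the file is omitted):

```
<VirtualHost *:80>
    ServerAdmin webmaster@localhost

    DocumentRoot /home/www
    <Directory /home/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```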

Then I did an SSHFS mount of the Xubuntu laptop as user www, copied some HTML files over to /home/www, and verified that everything worked.

Next I put a symbolic link to the phpMyAdmin directory in /home/www so that I could still use phpMyAdmin:

sudo ln -s /usr/share/phpmyadmin /home/www/phpmyadmin

Then I logged into phpMyAdmin and created new user php_user with all data (but not structure or database administration) privileges.

Next I created a new database with the same name as the database on my existing home web server, and then imported last night's SQL backup using phpMyAdmin. I tested all of my home web applications (don't ask) successfully.

Then it was time to transfer my home Wordpress blog from the old home web server. I had no idea how to do this, so first I did some googling to see what's up. Luckily for me I found this How-To on migrating a WordPress installation to a new server:

http://maketecheasier.com/clone-and-migrate-wordpress-blog-to-new-server/2008/01/30

I went ahead and moved my Wordpress installation:
  • I copied all of the Wordpress php files over to the web server document root on the Xubuntu laptop server as described in those directions. However, I didn't search through the files and change anything; I just copied the files over.
  • I used phpMyAdmin to make a SQL file backup of the Wordpress database, and then I used phpMyAdmin on the Xubuntu laptop server to create a new database named wordpress and then I imported the SQL file.
  • Then I tested the transfer by successfully using my browser to go to the wordpress folder on the Xubuntu laptop server.

But then I got to thinking: Maybe I should also upgrade to the latest version of WordPress? I found this How-To on doing that:

http://codex.wordpress.org/Upgrading_WordPress

However, I decided to save that for another day.

Working with FAT32 in Ubuntu Linux

While I was working on my Xubuntu laptop server project I learned a lot about working with FAT32 drives on Linux, and I learned all of it the hard way.

Only the owner of a FAT32 mount can set file time stamps on it.

I tried using rsync to copy a directory from my mounted NSLU2 NAS to a FAT32 drive:

rsync -av /NASmountpoint /FAT32mountpoint

That caused all kinds of errors saying rsync couldn't set permissions on the encrypted drive. A little research showed that it's not possible for rsync to set permissions on FAT32. So then I tried:

rsync -rltDv /NASmountpoint /FAT32mountpoint

That did better, but then gave errors about not being able to set dates on directories. So then I tried:

rsync -rltDvO /NASmountpoint /FAT32mountpoint

Adding the -O switch tells it not to try and set the date on directories. That fixed the errors, but then I noticed that the date-time of all of the files being copied over was set to today, which is not at all what I wanted.

I thought maybe it was a problem with rsync, so I tried using cp:

cp -arv /NASmountpoint/ /FAT32mountpoint

Result: The files copied, but the timestamps were not preserved and cp gave this message for each file: "cp: preserving times for `/media/truecrypt1/Washing Machines': Operation not permitted"

I double checked the permissions on the Truecrypt mount:

drwxrwxrwx 3 nobody andy 32768 2008-10-30 03:59

It looks like every user should have full permissions, but apparently they don't, since it's not possible to set file timestamps!

Next experiment: Dismount the FAT32 partition, remount it with the options "uid=andy,umask=0000", and repeat the exact same cp operation.

Result: All the files copy over with the original timestamps properly preserved! Even the original timestamps were preserved on the directories, which is something rsync couldn't do under any circumstances.

Conclusion: In order to preserve original timestamps when using rsync or cp to copy files from a Samba share to a FAT32 mount you have to be logged in as the owner. Any other user will get an error, even if that user has full permissions to the FAT32 mount.
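As an aside, to make a vfat mount with those ownership options permanent across reboots, the equivalent /etc/fstab entry would look something like this (the device name and mount point are examples, and I've folded in the shortname/iocharset options from earlier):

```
/dev/sdc1  /media/fat32data  vfat  uid=andy,umask=0000,shortname=mixed,iocharset=utf8  0  0
```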

Next experiment: Dismount the FAT32 partition, mount it again with the options "uid=nobody,umask=1000", and then try copying to it from the original NAS mount using "sudo cp..."

Result: All the files copied successfully with original timestamps preserved, but cp spit out the following error for each file "failed to preserve ownership for ... operation not permitted."

Next experiment: Try using "sudo -u nobody cp..." to run the copy command as user "nobody" which is the owner of the FAT32 partition.

Result: All files copied successfully with original timestamps preserved with no errors.

Conclusion: Only the owner of a FAT32 mount can copy files to it from another mount without losing timestamps.

Experiment: Copy files from the NAS to my Xubuntu home directory, and then copy them to the Truecrypt partition (owner = nobody) as user andy.

Result: Original timestamps not preserved and cp gives errors.

Conclusion: Only the owner of a FAT32 mount can copy files to it without losing timestamps.
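A quick way to test this conclusion on any FAT32 mount (the path and file name here are placeholders) is to try stamping a file directly with touch:

```shell
# If you are not the mount's owner, setting the time fails with
# "Operation not permitted", just like cp -a's warning above.
touch -t 200810300359 /FAT32mountpoint/probe.txt \
    && echo "this user can set timestamps" \
    || echo "cannot set timestamps: not the mount owner"
```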

Once I understood the problem I was able to google other references to it:

"Preserving time stamps on FAT32 copies" (LinuxQuestions.org forum thread)

Even the owner of a FAT32 mount can't set directory times reliably

Although I was now able to copy over the original file time stamps correctly, I noticed that all of the directory modification times were set to the time I ran the rsync command, which is not what I wanted at all. A little googling uncovered this forum posting about the problem:

http://ubuntuforums.org/showthread.php?t=886048

The thread suggested adding the option "--modify-window=1", which gives rsync one second of slack when comparing file and directory times before it treats them as different, and one poster reported that this correctly preserved the original directory timestamps. The thread also referenced this article about Linux and FAT32:

http://www.osnews.com/story/9681/The_vfat_file_system_and_Linux/page1

Incidentally, that article recommends these options when rsyncing to a FAT32 drive:

rsync -rvt --modify-window=1
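The reason --modify-window matters is that FAT stores modification times with 2-second resolution, so a faithfully copied file can legitimately differ from its source by a second. A toy sketch of the comparison rsync is being asked to make (function and variable names are mine, not rsync's):

```shell
# same_mtime SECS_A SECS_B WINDOW: succeed when two Unix timestamps
# differ by no more than WINDOW seconds -- what --modify-window=1
# tells rsync to accept as "unchanged".
same_mtime() {
    d=$(( $1 - $2 ))
    [ "${d#-}" -le "$3" ]    # ${d#-} strips the sign: absolute value
}
```

With a window of 1, timestamps one second apart compare equal; with the default window of 0 they don't, so rsync recopies the file.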

So, I was finally ready to do a dry run test of rsync to synchronize two FAT32 partitions:

sudo -u nobody rsync -av --delete --modify-window=1 /media/FirstFAT32/ /media/SecondFAT32 > /home/andy/logs/rsync_test.log

Success! The original directory timestamps were preserved, the new files were copied from the source to the destination with the original time stamps, the files deleted from the source were deleted from the destination, and nothing else was changed.

However, once I changed the destination FAT32 drive's mount options to include "shortname=mixed" (see below) I noticed that directory times were no longer being preserved when I ran rsync. I thought I had solved that problem! I went back and ran the exact same rsync command that had worked before, but the directory times were still not preserved. As far as I can tell, the only thing that had changed since it worked correctly the first time was mounting the Truecrypt volumes with "shortname=mixed", but how could that make a difference?

Some more experimenting revealed that at least one empty directory had its time preserved. I tried cranking modify-window up to 2, but that still didn't work. I never did figure this one out.

The nightmare continues: The capitalization of short files names is messed up with FAT32 mounts

While I was researching the above, I came across this cryptic little tidbit:

Another lesson is that the FAT32 partition should be mounted with the "shortname=mixed" option. If not, the rsync gets messed up. The default option is "lower," which means that files and directories with short names will have their names forced to lower case. But then rsync will think they're different from the ones it sees in the source that are upper case, and it will send the upper case ones again, and then delete the lower case ones. But since it's really the same files, the end result is it transfers a bunch of data and then deletes it all! Not good.

Comparing my existing NAS with the duplicate on the FAT32 drive, sure enough I found that at least one directory name with an all caps name on the NAS had been converted to all lowercase on the encrypted drive. Doh!

So time to revise the FAT32 mount options to:

sudo mount -t vfat -o shortname=mixed /dev/sdc1 /media/SecondFAT32

When I tested it by copying over some new files with short all caps names it seemed to work properly.
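To make the option stick across reboots, the equivalent can go in /etc/fstab. The device name, mount point, and ownership options below are assumptions for illustration, not from my actual setup:

```shell
# Example /etc/fstab entry: mount the FAT32 volume with consistent
# options at boot.
# <device>   <mount point>       <type> <options>                              <dump> <pass>
# /dev/sdc1  /media/SecondFAT32  vfat   shortname=mixed,uid=nobody,umask=0000  0      0
```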

Haven't I suffered enough?: Rsync freaks out about capitalization changes when syncing two FAT32 drives

I now knew how to correctly preserve the capitalization of file names when copying to a FAT32 mount, but what to do about all the existing files and directories that had their case changed by my earlier attempts? I decided to run an rsync dry run to see if it would pick up the files and directories where the case of the name didn't match:

sudo -u nobody rsync -avn --delete --modify-window=2 --stats /media/FirstFAT32/ /media/SecondFAT32 > /home/andy/logs/test_sync.log

I verified that the dry-run log looked right: it showed that rsync planned to copy all the files and directories with a capitalization mismatch. Then I ran the same command without the n option to fire for effect. However, when I checked over the logs I noticed something very peculiar: not all of the file and directory names with the wrong case were fixed. Some were, but not all. Even weirder, for these unfixed files rsync's log said it was deleting them on the destination, but it never said it copied them over again, yet when I looked they were still on the destination. What's up with that? On top of that, there were big differences in the number of gigs copied between the dry-run log and the actual run's log.

By this point Linux had beaten me into submission, so rather than try to puzzle out this weird behavior I just deleted everything on /media/SecondFAT32 and started over, using cp to copy everything again.

When will it end: Accented characters in file names are lost with FAT32 mounts by default

I thought I was finally done. However, like in some Hollywood blockbuster where you think the villain is dead but then he suddenly springs back to life to threaten the heroes again, Linux was not done with me yet.

When I checked over the copy I had made, all the accented characters in the file names had been changed to underscores! Doh! I used a Windows box to run a Robocopy from the NAS to the encrypted drive, and it spotted those files and copied them over again with the correct accented characters in their names.

I found this article which discusses the missing accented characters problem with FAT32 mounts:

http://www.osnews.com/story/9681/The_vfat_file_system_and_Linux/page2/

Following the advice from this post I mounted my source FAT32 drive like so:

sudo mount -t vfat -o shortname=mixed,iocharset=utf8 /dev/sdc1 /media/FirstFAT32

I mounted the second FAT32 drive the same way. Thankfully that appeared to work and preserve accented characters in the file names.
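A quick round-trip check on the remounted drive (the mount point and file name are placeholders):

```shell
# Create a file with an accented name and make sure it survives a
# directory listing instead of degrading to underscores:
touch "/media/FirstFAT32/Café del Mar.mp3"
ls /media/FirstFAT32 | grep "Café"
```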

Incidentally, the same accented-characters problem happens when you mount a Samba share on a Linux box. I was never able to solve it, but this post claims a solution that apparently worked for other people:

http://ubuntuforums.org/showthread.php?t=728751

I spent a lot of time trying to solve this, but no matter what I did, I could not manage to mount my NSLU2 NAS so that it would show the accented characters in the file names. I suspect it's because my NSLU2 is a few years old and runs an older version of Samba, and the solution in that forum post only works with current versions of Samba.
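For completeness, the analogous option on a Samba/CIFS mount looks like this (the server, share, mount point, and username are placeholders); as noted, it never worked against my old NSLU2:

```shell
# Ask the CIFS client to translate file names to UTF-8 locally:
sudo mount -t cifs -o iocharset=utf8,username=andy //nslu2/share /mnt/nas
```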

2008-11-10

Mythbuntu on a Via Epia SP8000e

A few years ago I got a Via Epia SP8000e to be a MythTV box. Here are the stats on the SP8000e:

  • Processor: 800 MHz VIA C3 Eden (fanless). Ubuntu reports:
    processor : 0
    vendor_id : CentaurHauls
    cpu family : 6
    model : 9
    model name : VIA Nehemiah
    stepping : 8
    cpu MHz : 800.222
    cache size : 64 KB
    fdiv_bug : no
    hlt_bug : no
    f00f_bug : no
    coma_bug : no
    fpu : yes
    fpu_exception : yes
    cpuid level : 1
    wp : yes
    flags : fpu vme de pse tsc msr cx8 sep mtrr pge cmov pat mmx fxsr sse up rng rng_en ace ace_en
    bogomips : 1602.28
  • "Integrated VIA UniChrome AGP graphics with MPEG-4 accelerator"

I had this running with Ubuntu Edgy Eft and MythTV 0.20 (I think), but it was occasionally giving me problems, and the Edgy repositories had been shut down for a while, so it was impossible to upgrade packages anymore. I decided to cross my fingers and try upgrading to Mythbuntu.

I started out with the Mythbuntu 8.04 install CD. Everything seemed to install without problems, and once I figured out how to do the MythTV setup properly I got it running. However, the Via C3 Eden is not very powerful, and without XvMC enabled it ran at around 70-90% CPU utilization playing back recordings. So I tried to enable XvMC by selecting Via XvMC as the decoder in a new Playback Profile. It didn't work, and I got the errors/problems described in this forum post and bug listing:

Problem :: VIA XvMC / MythTV 0.21 / Upgrade to Ubuntu 8.04

http://bugs.gentoo.org/show_bug.cgi?id=228473

I tried changing the libraries referenced in /etc/X11/XvMCConfig to libchromeXvMC.so.1 and libchromeXvMCPro.so.1 and that didn't fix it.
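As I understand it, /etc/X11/XvMCConfig is just a one-line file naming the XvMC wrapper library to load, so the edit amounted to something like this (the library name is the one from the attempt above; it did not fix the problem in my case):

```shell
# Point X at the openChrome XvMC wrapper library:
echo 'libchromeXvMC.so.1' | sudo tee /etc/X11/XvMCConfig
```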

I tried doing an upgrade to Mythbuntu 8.10, and a fresh install of Mythbuntu 8.10, and still had the same problem with XvMC.

It seemed from the posts about this problem that it appeared with kernel 2.6.24, so I dug up the ISO for Mythbuntu 7.10 (it's not listed on the Mythbuntu site anymore, but with some creative googling for mythbuntu-7.10-i386.iso you can find it). Mythbuntu 7.10 installed just fine, and XvMC worked as soon as I selected Via XvMC in the appropriate MythTV frontend setup screen (Settings -> TV Settings -> Playback something or other, I think).

However, when I did a normal install of all updates, Mythbuntu installed MythTV 0.21 and XvMC stopped working. It turns out that in Mythbuntu 7.10 the backports repository is enabled by default, and that is where MythTV 0.21 came from. So I reinstalled Mythbuntu 7.10 from the CD, updated the software sources on the Mythbuntu desktop to remove backports, and was then able to update all packages without getting bumped up to MythTV 0.21.
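If you would rather do this from a terminal than from the Software Sources GUI, commenting out the backports lines looks roughly like this (Mythbuntu 7.10 is based on Gutsy; the sed pattern is an assumption about how sources.list is laid out on your system):

```shell
# Comment out gutsy-backports so updates stay on MythTV 0.20:
sudo sed -i 's/^\(deb.*gutsy-backports\)/# \1/' /etc/apt/sources.list
sudo apt-get update && sudo apt-get upgrade
```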

So, I finally have MythTV working on my Via Epia SP8000e using Mythbuntu 7.10.

In the forum posting and bug report listed above, Robert reports that this problem also exists on Gentoo and Slackware, and that he got MythTV 0.21 working with post-2.6.24 kernels by compiling his own kernel with the memory allocator set to SLAB instead of the default SLUB. I have not tried this yet, since I need some time for the scars to heal before I wade into battle with Linux again.