2008-12-22

Configuring my new MacBook White

Why a Macbook?

I decided to indulge myself and get a laptop, and of course it had to be a MacBook. In practical terms Windows is really just fine, and there is more and better open-source software for Windows, but after years of working with, and being frustrated by, the awkwardness and goofiness of Microsoft and Windows, not to mention the mysteries of viruses, worms, trojans and bears, oh my, there is so much bad history between me and Windows that I just can't bring myself to buy another Windows computer ever again.  The trust has been broken, and even if Microsoft produced a flawless OS that spontaneously made breakfast for me every morning I would still have an atavistic reluctance to give it a try.

For me, Mac OS X just feels solid and well designed. It's kind of like the difference between American-made cars and Japanese cars back in the 80s and 90s (less so today).  One feels kind of cobbled together, with lots of stuff you have to work around or learn to live with, and the other feels like a well-oiled machine. Even though there are lots of practical inconveniences to using a Mac (I still haven't found a file manager I like as much as Salamander for Windows, or a text editor as good as Notepad++), most everything is so well designed and solid feeling that I am willing to forgive its shortcomings.  With Windows there are so many years of negative history between us that every little thing that goes wrong reminds me of 10 other stupid battles I fought with Microsoft over the decades, and I end up seething even if it's really not that big a deal in practical terms.

Why a MacBook White instead of a new MacBook?

I chose a 2.4 GHz MacBook White with 2 GB of RAM, instead of one of the new MacBooks, because it seemed like more bang for the buck when I was comparing the various sales on Black Friday. Sure, the new MacBook is sexier, but I just couldn't bring myself to pay about $200 more for a slightly less powerful machine, especially given the super glossy screen on the new MacBook (which looks brighter, but is a real pain if you have any light sources behind you).

Tricking it out

In any event, this post is really about what I did to configure my new MacBook White. There is nothing I enjoy more than customizing and tricking out a new computer so it's just the way I want.  I am like those guys who spend every weekend working on their hobby hotrod that they never actually get around to driving much. Whenever my wife complains about my computer obsession I tell her she should be grateful that she doesn't have to deal with a perpetually partially disassembled Camaro in the carport.

The first thing I did after getting to the desktop was let it do all the Mac software updates.

Set up link to my home file server Samba Share

Mac OS X automatically detected my home Samba fileserver.  To make it easier to access regularly I opened it up in Finder and then dragged the top-level folder of the fileserver over to the bottom of the Places list on the Finder Sidebar.  Now it's just one click to get to my documents on the fileserver.

In order to have the fileserver mounted automatically when I boot up I went to System Preferences -> Accounts -> Login Items, clicked the plus sign underneath the list, navigated my way to the top level documents folder on my Samba server in the File Open dialog, and hit Add, then checked the box next to the entry for it on the list of items to open automatically on log in.

Installed Firefox and copied over my existing Firefox profile

I fired up Safari, went to the Firefox site, and then downloaded and installed Firefox, since that's the browser I am used to.

I copied my Firefox profile from my regular personal computer over to my file server, then copied it from there to the Firefox profile directory on the MacBook (/Users/andy/Library/Application Support/Firefox/Profiles/xhikl3o34.default) while Firefox was shut down, of course, and then restarted Firefox.  This gave me an exact duplicate of my existing Firefox configuration from my old computer.
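
The copy itself is just a recursive directory copy. Here is a sketch of the idea using throwaway temp directories in place of the real paths (the profile folder name is randomly generated per machine, so substitute your own, and make sure Firefox is closed while you copy):

```shell
# Demo of copying a Firefox profile directory; the mktemp dirs stand
# in for the file server and the Mac's Profiles directory
SRC=$(mktemp -d)    # stands in for the profile copied to the file server
DST=$(mktemp -d)    # stands in for ~/Library/Application Support/Firefox/Profiles
echo 'user_pref("example.pref", true);' > "$SRC/prefs.js"

# -R copies the whole directory tree; the quoting matters because the
# real path contains a space ("Application Support")
cp -R "$SRC" "$DST/xhikl3o34.default"
ls "$DST/xhikl3o34.default"
```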

Fixed the problem where you can't reliably tab between form controls in Firefox on Mac OS X

After I installed Firefox I noticed that I could no longer tab between controls on a form on a web page.  For example, I couldn't hit the tab key and get to the submit button on one form I used a lot.  I did some googling and found this post about it:

Tabbing problems in Firefox in Mac OS X

Following a tip in the comments on this page, I opened up about:config in Firefox, right-clicked on the list and selected New > Integer from the context menu, then entered accessibility.tabfocus as the name of the new preference and 3 as the value.  Then I restarted Firefox and I could tab between form controls like I was used to.
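
If you prefer not to click through about:config, Firefox also reads a user.js file in the profile directory at startup and applies any user_pref() lines it finds there. A sketch (the temp directory stands in for the real profile directory; edit the real one only while Firefox is closed):

```shell
# Append the tabfocus preference to user.js; a value of 3 makes the
# tab key move through text fields and other form controls
PROFILE=$(mktemp -d)   # stands in for ~/Library/Application Support/Firefox/Profiles/xhikl3o34.default
echo 'user_pref("accessibility.tabfocus", 3);' >> "$PROFILE/user.js"
cat "$PROFILE/user.js"
```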

Trimmed down and moved the Dock to the right hand side

I like an uncluttered desktop, so I got rid of almost all of the default application icons on the Dock by simply dragging them away from the Dock and letting go.  I left the icons for Finder and iTunes.  Then I used Finder to navigate to /Users/Andy/Applications and dragged Firefox, Terminal, and System Preferences over to the Dock, since those are the programs I use most frequently.

It seems crazy to me to have the Dock on the bottom of the screen on a laptop's widescreen monitor, since there is lots of horizontal real estate but vertical real estate is limited.  So I went to System Preferences -> Dock, chose Right for Position on Screen, set the Size on the small end of the scale, and set Magnification to Max. This gives me a non-3D Dock on the right-hand side of the screen that doesn't take up much space. It's not as pretty, but I hate having a narrow sliver of webpage showing because 20% of the vertical space of the desktop is taken up by the Dock.
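
The same Dock tweaks can also be scripted from the Terminal with the defaults command; these are the standard com.apple.dock preference keys, but adjust the numbers to taste (the changes take effect when the Dock restarts):

```shell
# Move the Dock to the right, shrink the tiles, and turn on magnification
defaults write com.apple.dock orientation -string right
defaults write com.apple.dock tilesize -int 24
defaults write com.apple.dock magnification -bool true
defaults write com.apple.dock largesize -int 128
killall Dock    # the Dock relaunches itself with the new settings
```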

Installed Quicksilver

I hate using a mouse to navigate through tree structures, and I hate peering at long lists to find things in folders, so I am always on the lookout for tools that will let me jump to what I want by typing a few characters of the name.  A while ago Lifehacker turned me on to Quicksilver, which does this for opening applications on the Mac (it does a lot more that I haven't begun to explore).  So I went to the Quicksilver homepage:

http://www.blacktree.com/

I downloaded the latest beta of Quicksilver and installed it. Then, to make it load on startup, I went into System Preferences -> Accounts -> Login Items, hit the plus sign under the list, navigated to Quicksilver in /Users/Andy/Applications, clicked Add, then checked the box for the new Quicksilver entry on the startup list.  Now, to open an application I just hit Control-Spacebar to invoke Quicksilver, start typing some of the application's name, and then hit Enter when it has selected the right app (usually it only takes 2-3 keystrokes).

Tweaked Finder

Finder is the thing I like least about Mac OS X, but I have worked out a number of tweaks that make it more to my liking.

I like to have the full path of the current directory showing in the title bar of Finder, so I opened up a terminal and ran this:

defaults write com.apple.finder _FXShowPosixPathInTitle -bool YES
killall Finder

Also, I enabled the Path Bar at the bottom of the Finder window by going Finder > View > Show Path Bar.  The Path Bar at the bottom allows you to jump to anywhere in the current path.

For some reason I hate being told a file's date is "today" or "yesterday", so I turned that off by going to Finder > View > Show View Options, which pops up a dialog where you can select what Finder shows about files, and unchecking the "Use relative dates" option. While I was in there I checked the "Calculate all sizes" option so Finder will show me the sizes of folders. Then I clicked "Use as Defaults" to make these options apply to all folders.

Installed muCommander for two pane file manager

There are two things I want to be able to do when managing files that Finder doesn't do: (1) jump to a particular file name by any fragment of the name (not just the beginning), and (2) copy files from one folder to another with one keystroke instead of a lot of clicking and mousing.  I have looked around a lot, and the only file manager I have been able to find that does both on Mac OS X is the open-source muCommander. I was initially resistant to trying it because it's Java based, and when you first fire it up it doesn't look very sexy, but its performance is actually pretty snappy. Another advantage for me is that it uses a lot of the Norton Commander style keyboard shortcuts that I am used to (F7 to make a folder, F5 to copy, Enter to open a file, etc.).

Here is the feature I like most about muCommander. If I am in a folder with dozens of files, I can just start typing some characters I think are somewhere in the file name. The letters I type show up at the bottom of the muCommander window, with a little red exclamation point icon if they are not found and a green check mark icon if they are found.  If the characters are found in a filename it jumps to the first instance, and hitting the up or down arrows jumps to the previous or next instance.  I am not sure why every file manager doesn't have a feature like this.  Certainly trying to find a string of characters in a file name in a particular directory is a bit of a clickapalooza in Finder.

Set the system date format to ISO 8601

I have this thing for using the ISO 8601 date format exclusively, so the next stop was System Preferences -> Formats to change all the date formats.  I selected Custom for Region, clicked Customize, then went through the Short, Medium, Long, and Full formats and changed them all to read like 2008-11-23 by dragging and dropping the date elements.

Put the date on the Menubar

I like to have the full date (ISO 8601 of course) showing on the Menubar, so I followed this Lifehacker post by Gina showing how to do it:

Mac Tip: Display the Date on the Menubar

It looks complicated, but it's really pretty easy. After I did that, I right-clicked on the clock on the Menubar and selected 24-hour time, and now it says "Fri 2008-12-05 13:33" on my Menubar.

Installed TinkerTool and tweaked font smoothing options

I don't like the default font smoothing choices on Mac OS X, especially when I have a non-Apple external monitor attached.  So I downloaded and installed TinkerTool, which allows you to turn off font smoothing for fonts under a certain size (with no cap on the cut-off size you set).  I set the cut-off size at 14 pt.

Installed and configured Synergy for running multiple computers with one keyboard and mouse

When I am at work (which is a shed off my carport) I like to have my personal computer sitting next to my work computer for my calendar, personal email, this blog, etc., and I like to control both my work computer and my personal computer with one keyboard and mouse.  A while ago I discovered an amazing open-source program called Synergy2 that works on Windows, Mac, and Linux and allows you to control any number of computers with one keyboard and mouse. Once you have it installed and configured properly on all your computers, the mouse and keyboard seamlessly shift from one computer to the next when you move the cursor off the edge of one monitor in the direction of the next computer.  You can even copy and paste text (though not files, I don't think) between the different computer desktops.  After using it for a while you almost forget that you are dealing with multiple computers; it all just seems like one big desktop.  In any event, here are the Synergy2 links:

Synergy2 home page (with Windows and Linux downloads)

SynergyKM home page (Synergy for Mac)

Configuring Synergy can be very, very confusing, so I am not going to cover it here, but you can use Google to find help.  Once it's configured it works so flawlessly I almost forget it's there, so it's worth struggling through the confusing configuration process. Right now, while I am configuring my MacBook, I am temporarily using Synergy to control a Windows 2000 machine (with two monitors), a Mac Mini, and the MacBook, with one keyboard and trackball connected to the Windows machine.
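
For what it's worth, the heart of a Synergy setup is the server's synergy.conf file, which names each screen and describes how their edges link together. A minimal sketch for two machines (the screen names here are made up, and they must match the names the clients announce):

```
section: screens
    winbox:
    macbook:
end

section: links
    winbox:
        right = macbook
    macbook:
        left = winbox
end
```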

Set up easy access to file systems on remote computers over SSH using MacFUSE and Macfusion

I have an Xubuntu server running on an old laptop that functions as a file server, home web server, and print server.  I like to be able to transfer files to and from it (outside of the Samba share) quickly and easily, so I downloaded the two open-source tools necessary to mount other computers' filesystems over SSH: MacFUSE and Macfusion.

Before I proceeded with installing and configuring these programs, I wanted to tell my MacBook the names of all the computers on my home network so that I could work with them by name instead of IP address.  I couldn't find a way to do it without using the terminal.  So I fired up the Terminal and opened the hosts file in the nano editor:

sudo nano /etc/hosts

Then I just appended the names and IP addresses of all my various machines to the file, hit Ctrl-O to save it, and then Ctrl-X to exit.
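
The entries are just the standard hosts file format, one IP address and name per line. For example (the names and addresses here are made up; use your own machines' fixed IPs):

```
192.168.1.10    fileserver
192.168.1.11    macmini
192.168.1.12    workpc
```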

MacFUSE is a Google project that gives other programs a way to mount non-native filesystems on a Mac. I don't think it can do anything by itself; once you download and install it, you need to install some other program written to work with a particular filesystem.

Macfusion is a program that uses MacFUSE to mount filesystems over SSH, and it is really slick.  I downloaded it from the site, unzipped the app, dragged it over to the Applications folder, and then ran it.  It said that its agent wasn't running and asked if I wanted to run it, with a checkbox to have the agent run at log in.  I said yes to both and then ran Macfusion again from the Applications folder.  This brought up a cryptic blank dialog box with only a plus sign and a gear icon button on the bottom. To add my home server I clicked the plus button and selected SSHFS (SSH filesystem), which brought up a dialog where I entered the name (see above) of the machine I wanted to connect to in both the top unlabeled box and the hosts box, entered my password for that machine in the password field, and hit Enter. Then I was back at the original dialog, but now there was an entry for the machine I had just entered, with Mount and Edit buttons.  I clicked Mount, but got an error.  I suspected that MacFUSE might not really function until after a reboot, so I restarted the MacBook, opened up Macfusion from Applications, and tried again; this time it mounted the SSHFS mount without problems.
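
Under the hood, Macfusion is driving an sshfs mount via MacFUSE. If you are comfortable in the Terminal you can do roughly the same thing by hand, assuming an sshfs binary is on your path (the machine name and mount point here are made up):

```shell
# Mount the remote home directory over SSH, then unmount when done
mkdir -p ~/mounts/fileserver
sshfs andy@fileserver: ~/mounts/fileserver
# ... work with the remote files as if they were local ...
umount ~/mounts/fileserver
```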

But where was the mount? Macfusion just tells you the remote machine's filesystem has been mounted.  It doesn't tell you how the heck to find the mount.  It doesn't show up on your desktop, it doesn't show up on the Sidebar in Finder. Where in the world is it?

I knew from trial and error on my Mac Mini that Macfusion mounts the remote filesystem under /Volumes, but it's not obvious how to get to /Volumes in Finder in its default configuration (at least I could never find a way). What I did discover is that if you open Finder, go to Preferences -> Sidebar, and check the box for "Computer" in the Devices section, you will get a new entry under Devices in the Finder Sidebar with your machine's name. If you then click on your machine's name, you will see a folder listing Macintosh HD, Network, and your Macfusion SSHFS mount (and any other mounts you have going).

Once you can find your SSHFS mount point you can navigate the remote computer's filesystem using Finder, and work with its files pretty much the same as any other files on the Mac.  I find this really handy when I am working on PHP code on my home web server.  Just mount the web server using Macfusion, then navigate to the web root on the server, open the file in a text editor, and get to work.

I did one last Macfusion tweak.  I started it up again using Quicksilver (it doesn't show up in the Dock by default), then went to Preferences and checked the box for starting the Macfusion menu item when I log in.  This puts a little icon on the Menubar for Macfusion (once you reboot), which makes mounting a remote machine's filesystem as easy as clicking on the Menubar icon, choosing Favorites, and then clicking the name of the machine. Pretty slick.

Tweaked Terminal

Since I use the Terminal a lot, I took some time to customize it, because I hate the microscopic font size and appearance it uses by default.  I started up Terminal, went to Preferences, and on the Startup tab selected "Homebrew" as the settings for new windows. Then I went to the Settings tab and customized the font and font size for Homebrew to my taste. The only thing that was a little cryptic was how to make the Terminal window opaque instead of the semi-transparent default.  I discovered that if you select the Window tab under the Settings tab and then click on the little square of color just under the Background label, it brings up a dialog where you can set the opacity.

Installed Plain Clip and set hot key combo to remove formatting from the clipboard

99% of the time when I copy some text from one application to another I just want the words and not the formatting.  For some odd reason the default copy-and-paste behavior (in both Windows and Mac) is to include the formatting.  I set up a keyboard shortcut to strip the formatting off of whatever is in the clipboard by:
  • Downloading and installing Plain Clip in the Applications folder.
  • Running it once from the Applications folder to get past the "This program was downloaded from the internet" warning.
  • Clicking on my Quicksilver Menubar icon and selecting Triggers to open the keyboard shortcut configuration pane.
  • Hitting the plus sign at the bottom of the pane to add a new trigger.
  • In the Command dialog, selecting Plain Clip as the Item (first box) and Open as the Action (second box), then clicking Save.
  • Clicking on the Trigger field to open the Trigger dialog, hitting my preferred key combo in the Hot Key box, and closing the pane.
  • Then, to test, copying some formatted text from a web page into another application, confirming that it included formatting by default, then hitting my defined keyboard shortcut, pasting again, and confirming that the formatting had been stripped out.
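
As an aside, you can get a similar plain-text effect with the Mac's built-in clipboard commands, since pbpaste writes the clipboard out as plain text and pbcopy reads plain text back in:

```shell
# Round-trip the clipboard through plain text, stripping any rich
# formatting in the process
pbpaste | pbcopy
```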

Installed VMware Fusion and Windows XP

I have a Microsoft Access application I wrote way back in 1997 that I still use from time to time, plus I have a scanner that I never figured out how to get working with Mac OS X, so I purchased VMware Fusion 2.0 from Amazon.com and a Windows XP Home OEM install disk from Newegg.com, then installed VMware Fusion and created a Windows XP virtual machine using the install disk.

Stopped by www.opensourcemac.org for the rest of the basic applications I use

www.opensourcemac.org is a great place to find all of the major open-source applications for the Mac.  What follows are the ones I downloaded and installed from this site.

Applications: Smultron for text editing

I do some PHP coding and edit various text files while messing around with my Linux machines, so I need a decent text editor (free or open-source, of course).  I have tried various ones out there, and right now I am using Smultron and have been pretty happy with it so far. It's not as good as the best freeware Windows editors, but it's the one for the Mac that I like best. One really nice feature is that you can choose Open Hidden from the File menu and get a file-open dialog that shows all the hidden files and directories on the Mac, which is very convenient when you are tweaking the Mac.

Applications: Installed VLC media player

A while ago I wanted to be able to start playing an internet radio stream on my Mac Mini just by going to a bookmark of the stream in Firefox.  I couldn't figure out how to get iTunes to automatically start playing the stream when I opened it in Firefox, but it worked out of the box with VLC. So I downloaded and installed VLC on my MacBook, went to the site of my local PBS station, and clicked the link to play the stream; VLC opened (after I clicked through some one-time warnings about software downloaded from the internet) and the radio started playing. Nice.

Applications: Installed Burn for burning CDs and DVDs

Burn worked great for me the few times I used it on my Mac Mini, so I added it to my MacBook.

Applications: Installed NeoOffice for word processing and spreadsheets

NeoOffice is OpenOffice optimized for Mac OS X.  It takes a while to load, but once it's running it has worked well for me for spreadsheets and occasional Word documents.

2008-12-17

Accessing a Truecrypt encrypted external drive that uses the ext3 filesystem from Mac OS X

As covered in another post, I have a laptop home file server set up that stores its files on two external drives that are encrypted using Truecrypt and use the ext3 filesystem. One of my goals for that project was to be able to grab one of the external drives and read it from another computer (i.e., so I am not hosed if the laptop dies).

I was pretty confident that I could meet that goal by just firing up an old laptop that has Xubuntu installed, so I didn't even test it. However, I wanted to be able to read my encrypted external drives using my new MacBook.

After lots of googling I couldn't find any program that would allow mounting an ext3 filesystem directly on Mac OS X. Maybe it's out there, but I couldn't find it. So it was on to plan B, which was to install a VMware Fusion Xubuntu virtual machine on my MacBook and use that.

I fired up VMware Fusion and went to File > New and then followed the wizard for setting up a new virtual machine. I already had the ISO file for Xubuntu 8.04 desktop, so I told it to use that instead of a CD. I went with the quick install option (I think that was the name) and everything went very smoothly with setting up the virtual machine.

After I got to the Xubuntu desktop I did a system update to bring it up to date, then I downloaded the Truecrypt installer for Ubuntu from the Truecrypt website, unarchived it, and then ran the installer script which worked without any hitches.

Then I logged onto my home file server using SSH and dismounted one of the encrypted drives with truecrypt -d /media/truecrypt2, then disconnected its USB cable, then its power, moved it over to my MacBook, plugged it into power, and plugged in the USB cable. At that point Mac OS X gave me a dialog about not being able to mount the drive, and I clicked the Ignore button. Then I went to the Xubuntu virtual machine window and clicked the little USB icon on the bottom to connect the external drive to the virtual machine.

Then I fired up a terminal in the Xubuntu virtual machine and checked for the external drive by running ls /dev/sd*. I saw there was an sdb and an sdb1, and I guessed that was the external drive, so I ran truecrypt /dev/sdb1 /media/truecrypt1 to mount it and entered the password for the drive at the prompt.

I then went to the Xubuntu file manager and navigated to /media/truecrypt1 and verified that I could open files on the external drive.

2008-12-10

Favorite Quotes

A collection of my favorite quotes.

I am always doing that which I can not do, in order that I may learn how to do it.
  - Pablo Picasso

2008-12-06

Manually run Mac OS X system maintenance tasks

Wow.  I just found out from an article in the NY Times that if you shut your Mac down at night, the routine system maintenance tasks never get done, because they are scheduled to run between 3:15 and 5:30 am!  Here is how to run all of them manually, according to the Apple website:

  1. Open Terminal (/Applications/Utilities).
  2. Type: sudo periodic daily weekly monthly
  3. Press Return.
  4. Enter your Admin password when prompted, then press Return.
  5. Quit Terminal when the task is complete.

You can run just the daily tasks by leaving off weekly and monthly.

2008-11-22

Creating a home web and file server with encryption using Xubuntu 8.04, an old laptop, and some external drives

The goal of this project was to convert an old Dell Inspiron 2650 laptop into a combined home webserver and Samba fileserver with the data for the fileserver being stored on encrypted external hard drives. This post will give an overview of the process, and following posts will give the nitty-gritty details.

Here are the requirements I made up for this project, and the reasons behind them:

  • Home webserver. I work from home and my wife is also home all day, so we have a number of computers that we use throughout the day (her Vista laptop, my work Windows 2000 desktop, my personal Mac Mini, assorted old laptops with Ubuntu on them, and a MythTV machine running on a Via Epia SP8000e). Trying to store and retrieve information across 3 operating systems, different applications, and multiple computers got to be a pain, so a while ago I decided to write various web applications in PHP to keep track of basic things like family finances, to-do lists, and other information. That way we could both access information from any computer at any time, and there was no need to find and maintain compatible software across 3 operating systems and 3+ computers.
  • Samba fileserver. Given all our computers and operating systems I decided a long time ago to keep all of our files on one central fileserver, so that we wouldn't have to deal with "oh, which machine is that file on?" and "oh, I forgot to back up that machine whose hard drive just failed." For years I had been using an NSLU2 NAS with two external hard drives attached for this purpose.
  • Old Dell Inspiron 2650 as the server hardware. Old laptops make the best home servers because:
    1. They are free or cheap,
    2. They pull a lot less electricity than a desktop, which is important in something that's on 24/7,
    3. They are small and you can fit them on a shelf somewhere, and
    4. They are usually quiet.
  • Data stored on external hard drives. The NSLU2 got me used to the idea of keeping data on an external hard drive, so that it's pretty easy to upgrade or replace a disk, and so that if the system drive fails it doesn't take the data down with it.
  • Encrypted data drives. Last week a neighbor's house got robbed in the middle of the day. This got me thinking about how I would feel if some thief grabbed the external drives from my NSLU2 in a generic burglary. Those drives have scans of bank statements and other financial information on them. Sure, the average thief is probably not going to dig through them for financial info, but if they were stolen I would feel obligated to post a fraud alert with the credit agencies, cancel all my credit cards, change the passwords on all my bank accounts, maybe close all my financial accounts and open new ones, and then keep a close eye on everything for months. With the data stored on encrypted drives, a burglary would just mean buying new hardware, restoring from my offsite backup, and moving on.
  • The real reason: For the fun of it. Of course the real reason for doing this project is that it sounded like a fun (hopefully) and educational challenge. I like learning new things, especially about computers, and especially the hard way. I have discovered that the best way for me to master a new skill is to set out to do some project where I have no idea what I am doing, and figure everything out as I go along. I find that having a concrete goal forces me to tackle the hard stuff head-on rather than skipping over it.

The first thing I had to do was buy a USB 2.0 PCMCIA (or is it CardBus?) card, since the Inspiron 2650 only has USB 1.1 ports and I wanted faster USB ports for the external USB data drives. I went to newegg.com and ordered a $13 card that had lots of decent reviews and which a couple of reviewers said worked with Linux.

I had an extra USB drive enclosure lying around and an extra 120 GB drive, so I put them together to make an external hard drive. I plugged it in to my laptop using my new USB 2.0 CardBus card and Xubuntu recognized it. In a previous life the drive had been formatted as an Ubuntu system drive, so it had two partitions formatted with Linux filesystems. I wasn't sure if the underlying filesystem mattered with an encrypted drive (i.e., would I be able to mount it on a Mac or Windows machine using Truecrypt?), so just to be sure I installed gparted and used it to delete the existing partitions and reformat the whole drive as FAT32. In hindsight, reformatting the drive was probably not necessary and I could have gone straight to formatting it with Truecrypt.

Next I looked into increasing the memory on the Inspiron 2650. I had the original 128 MB system module, plus a 256 MB memory module I had installed in the second, user-accessible slot years ago, but I wanted more memory headroom to improve performance. I did some research on the web and found out that the maximum memory for the Inspiron 2650 is 512 MB, and that to achieve that you have to do some major surgery to get at memory slot 1 under the keyboard:

http://episteme.arstechnica.com/eve/forums/a/tpc/f/579009962631/m/972009296731

I dug through my box of old RAM and found the sibling of the 256 MB module already installed in the user-accessible slot. I had received two modules but had never used the second because I thought I could only add memory to the user-accessible slot. Following the directions from the link, I removed the original 128 MB memory module, installed the 256 MB module, rebooted, and voila: top showed that the system now has 515 MB of physical memory. Sweet.

Next I installed the desktop version of Xubuntu 8.04. I could have gone with the command line version, but I had an Xubuntu CD already and I didn't want to bother with downloading the alternate CD and burning it.

Then I installed the following packages using aptitude install:

  • openssh-server (so I could administer it from remote machines)
  • samba (so it could serve files to Windows boxes)
  • smbfs (just seemed like a good idea, may not have been necessary)

I tried to install Truecrypt from the Ubuntu repositories, but it's not there, so I went to the Truecrypt website and downloaded the Deb package for Ubuntu. All I had to do to install it was unpack the one file from the compressed archive and then click it to run a script, which handled the whole install. I picked Truecrypt for my encryption because I was already familiar with it, and I thought that since it is available on Mac, Linux, and Windows it would enable me to mount the encrypted drive on any old machine if necessary.

Then I edited /etc/network/interfaces to give the laptop a fixed IP address.
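
For reference, a static address stanza in /etc/network/interfaces looks something like this (the addresses are made up, and eth0 is assumed to be the laptop's wired interface):

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```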

Next I verified that the system could turn off the laptop's LCD backlight by running:

xset dpms force off

I tested this because the system will be on 24/7 and I want to keep the backlight off most of the time to save watts. Then I configured the screensaver and power management options to turn off the backlight when the laptop is idle (see my recent post on this subject for how to do this).

To make sure the time is always correct on the Xubuntu laptop server, I installed ntp and ntpdate:

sudo aptitude install ntp ntpdate

When I ran this it said that it was only installing one package (ntp), so ntpdate was presumably already installed and this step may not have been necessary.

Next I encrypted the external hard drive using Truecrypt and copied over all the data from my existing NSLU2 NAS device. I was in Linux hell for days with this task, because I had started out formatting the Truecrypt partition with FAT32, and the disk I was copying the data from was FAT32, and as a result I had all kinds of FAT32-related problems with permissions, accented characters, and preserving the original timestamps on files and directories. All the gory details are posted in another blog entry. In the end I gave up on having the encrypted drive be FAT32 and just formatted it with ext3, and everything went smoothly after that. If I had spent more time on it I probably could have figured out how to make everything work using a FAT32 Truecrypt partition, but I just got tired of messing with it and decided to bail on the whole idea.

I have done a separate post with the details on creating an ext3 formatted Truecrypt partition on an external drive.

Next I set up a test Samba share. Since this file server is for family use, and since passwords and complications have a low spousal acceptance factor, I wanted to set up Samba so that no username or password is required to access it. Most of the how-to guides online only covered setting up Samba with security, but I finally found a couple of guides that went through setting up a public share:

Private and guest (no password prompt) Samba shares with security=user

guide to setting up Samba so it's wide open and no passwords are required

It ended up taking some fiddling and tweaking to get the Samba share to work properly with all the OSs in the house (Windows XP, Windows Vista, Mac OS X, Ubuntu), and to get permissions and ownership of files sorted out. The details of how I configured the Samba share are in another post.

Then I set up the laptop to be a CUPS print server, as detailed in this post.

Then I set up the laptop as a LAMP server and migrated my existing home web server applications over to the laptop as detailed in this post.

Once I had everything installed and configured on the Xubuntu laptop server it was time to decommission my existing home web and print server and replace it with the new one. To make things more complicated, I decided to give the new server the same IP as the old one so I wouldn't have to change any settings on any other computers on the network.

  • First I made backups of all of the SQL databases on the old server and imported them into the MySQL server on the new server to make sure the new server was up to date.
  • I changed the IP address and name on the old home server by editing /etc/network/interfaces and /etc/hostname and /etc/hosts so that it wouldn't cause conflicts if I needed to boot it up again to get something off of it.
  • Then I shut down my existing home server.
  • Next I edited /etc/network/interfaces, /etc/hostname and /etc/hosts on the new server to change its name and address to the name and address of the old server and then rebooted. Everything seemed to be working on the web server.
  • However, when I tried to SSH into the new server at the old server's address I got a message that the fingerprint for the host key had changed and so authentication failed.
  • A little googling revealed that the easy way to fix the problem was to delete the existing RSA key from the client known_hosts file and then ssh to the server again. That causes the client to see the server as a new server and prompt to download the host RSA key again. On my Mac Mini the file I edited to delete the old RSA key was /Users/andy/.ssh/known_hosts and I had to use the Open Hidden menu option on Smultron to be able to navigate to it.
  • Next I shut down the new server and physically moved it to take the place of the old server, hooked up the printer to it, and powered it up.
  • I tested the web server by pointing my browser at the server and verified it worked, and also tested that printing to the new server worked.
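Editing known_hosts by hand works, but the stale key can also be removed from the command line. A sketch, with the server address and key contents as assumptions, working on a scratch copy so nothing real gets touched:

```shell
# The stale key lives in the client's known_hosts file; ssh-keygen can
# remove it in place with:  ssh-keygen -R 10.10.10.160
# The same edit with grep, demonstrated on a scratch file:
KNOWN_HOSTS=/tmp/known_hosts.demo
printf '%s\n' \
    '10.10.10.160 ssh-rsa AAAAB3...oldserverkey' \
    '10.10.10.99 ssh-rsa AAAAB3...someotherkey' > "$KNOWN_HOSTS"
# Keep every line except the one for the old server's address:
grep -v '^10\.10\.10\.160 ' "$KNOWN_HOSTS" > "$KNOWN_HOSTS.tmp"
mv "$KNOWN_HOSTS.tmp" "$KNOWN_HOSTS"
cat "$KNOWN_HOSTS"
```

After that, the next ssh to the server prompts to accept the host key as if it were a brand new machine.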
Once I got the new server up and running using the address of the old server I set up a script to back up the MySQL databases on it:
  • First I created two directories under my home directory on the server:
    mkdir /home/andy/backups
    mkdir /home/andy/scripts
  • Then I copied my old backup script into the scripts directory and set it to be executable:
    chmod 700 backup_script.sh
  • Then I tried running the backup script:
    ./backup_script.sh
    But that gave me an error:
    /bin/sh^M: bad interpreter: No such file or directory
  • A little googling showed that I needed to install the sysutils package:
    sudo aptitude install sysutils
  • And then I ran this utility on the file, apparently needed because I had copied it from a non-Linux machine:
    dos2unix backup_script.sh
    After I ran that utility the script ran just fine.
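For what it's worth, the same fix can be done without dos2unix by stripping the carriage returns with sed. Here is a self-contained sketch (the path is just for illustration) that reproduces the ^M problem and then fixes it:

```shell
# A script saved with DOS line endings has a carriage return after
# /bin/sh, which is what produces the "bad interpreter: ^M" error.
printf '#!/bin/sh\r\necho hello\r\n' > /tmp/backup_script.sh
chmod 700 /tmp/backup_script.sh
# Strip the trailing carriage return from every line (what dos2unix does):
sed -i 's/\r$//' /tmp/backup_script.sh
/tmp/backup_script.sh   # prints "hello"
```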

Once the web server had been migrated I migrated the files from my existing NSLU2 NAS device. Rather than try to copy from my NSLU2 to my new server over the network (which would be slow), I took a FAT32 external drive that had a backup of the NSLU2 files, connected it to the Xubuntu laptop server, and then mounted it:

sudo mkdir /media/heh
sudo chmod 0777 /media/heh
sudo mount -t vfat -o shortname=mixed,iocharset=utf8 /dev/sdc1 /media/heh
(after looking in /dev to see what device name the USB drive had received)

The vfat option "shortname=mixed" is necessary to prevent Linux from converting all short file and directory names that are all uppercase to all lowercase. The "iocharset=utf8" option is to make sure that accented characters in file and directory names don't get replaced with underscores by Linux.

Once I had the external hard drive with all the files on it mounted, then I copied all of them to the encrypted drive:

sudo -u nobody cp -arv /media/heh/zihuatanejo/"my documents"/* /media/truecrypt1/Documents

This copies all the files and directories from "my documents" into the existing directory "Documents". I did it this way because I wanted to get rid of the stupid "my documents" directory name, which was a carryover from long ago when all this data lived on a Windows machine.
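The -a flag on cp is what carries the original timestamps and permissions along during these copies. A quick sanity check on a scratch directory (paths here are made up for illustration):

```shell
# Set up a source file with a known old timestamp...
rm -rf /tmp/cpdemo && mkdir -p /tmp/cpdemo/src /tmp/cpdemo/dst
echo data > /tmp/cpdemo/src/file.txt
touch -t 200811160853 /tmp/cpdemo/src/file.txt
# ...copy it with -a (archive mode) and the timestamp comes along:
cp -arv /tmp/cpdemo/src/* /tmp/cpdemo/dst/
stat -c '%y %n' /tmp/cpdemo/src/file.txt /tmp/cpdemo/dst/file.txt
```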

Once the copy was done I set open permissions on all of the copied files:

sudo chmod -R a+rw /media/truecrypt1/Documents

Then I did some cursory testing to make sure that I could create, delete, copy, modify files on the encrypted drive from a remote computer through a mounted Samba share.

Next came the acid test: I ran a robocopy between my NSLU2 and the new encrypted Samba share from a Windows box to see if all of the files were there like they were supposed to be.

robocopy "z:\my documents" p:\ /mir /XO /NP /log+:"C:\Documents and Settings\Owner\Data\logs\Z_to_P.1.txt"

Reviewing the log file after it completed showed that only the new files that should have been copied were copied, so everything looked good so far. I checked the owner and permissions on some of the files that robocopy had copied over to the encrypted Samba share to make sure they were as they were supposed to be, and they were.

Now that I had one encrypted external drive set up, it was time to set up a second encrypted external drive for a mirror of the first drive. I encrypted the second drive as I will detail in a later post.

Once the second external drive had been formatted as an encrypted drive using Truecrypt I set up directories on it. First I created and set permissions for a backup directory and a directory to put files and directories in when they were removed from the backup directory:

cd /media/truecrypt2
sudo -u nobody mkdir Documents
sudo chmod -R a+rw Documents/
sudo -u nobody mkdir Documents-deleted
sudo chmod -R a+rw Documents-deleted

Then I did an initial copy from the main encrypted drive to backup encrypted drive:

sudo -u nobody cp -arv /media/truecrypt1/Documents/* /media/truecrypt2/Documents > /home/andy/logs/2008-11-16-0853.log

I reviewed permissions on /media/truecrypt2 to make sure they seemed correct.

Then I wrote an rsync script to back up from truecrypt1 to truecrypt2 and designate that modified files get kicked into Documents-deleted.

#!/bin/sh
# Script name: sync_truecrypt_mounts.sh
# Backup /media/truecrypt1 to /media/truecrypt2
echo '\n' >> /home/andy/logs/rsync.log
echo 'Start rsync: ' >> /home/andy/logs/rsync.log
date '+%Y-%m-%d_%H:%M:%S' >> /home/andy/logs/rsync.log
rsync -av --delete --stats --backup --backup-dir=/media/truecrypt2/Documents-deleted --suffix=`date +"_%F"` /media/truecrypt1/Documents/ /media/truecrypt2/Documents >> /home/andy/logs/rsync.log
echo 'Rsync finished: ' >> /home/andy/logs/rsync.log
date '+%Y-%m-%d_%H:%M:%S' >> /home/andy/logs/rsync.log
# End of script

I tested the script by running it:

sudo -u nobody ./sync_truecrypt_mounts.sh

And it did what I expected. So on to making it a cron job.

sudo -u nobody crontab -e

Then I set up another cron job to backup my MySQL databases every night a few minutes before the sync runs.
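For the record, the pair of crontab entries ends up looking something like this (the times here are assumptions; the MySQL backup just needs to finish a few minutes before the sync starts):

```
# m h dom mon dow  command
50 3 * * *  /home/andy/scripts/backup_script.sh
0  4 * * *  /home/andy/scripts/sync_truecrypt_mounts.sh
```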

Creating a Truecrypt partition that uses the ext3 filesystem

For reasons given in another post I decided that I wanted the external data hard drives on my Xubuntu laptop home server to be encrypted, and I wanted the encrypted partitions to use the ext3 filesystem. The only thing that makes this challenging at all is that there is no option in the Truecrypt GUI to create a volume with the ext3 file system. The only options in the GUI are FAT and None.

The first step was to install Truecrypt. For some reason it's not in the Ubuntu repositories, so I downloaded a Deb package from the Truecrypt website and used the script that came with it to install it. Pretty easy.

First I launched the truecrypt GUI by opening a terminal and running:

truecrypt

Then I plugged in the external drive and looked to see where it mounted as a device:

ls /dev/sd*

I knew sda and sda1 were the system hard drive, so sdb and sdb1 had to be the external drive, and I knew from experience that I wanted to create the Truecrypt partition on sdb1 and not sdb.

Using the Truecrypt GUI I told it to create a new encrypted partition on /dev/sdb1 but specified None for file system instead of FAT, and chose Quick Format (despite the warnings) since I knew the disk had previously been written over with random data on previous encryption efforts.

Once the new encrypted partition was created I used the GUI to mount it, being careful to click the button for options, and then checking the box for mounting without a file system.

Once the new encrypted partition was mounted without a file system I looked up its mount point in a terminal:

truecrypt -l

And saw that it was at /dev/mapper/truecrypt1.

Then I formatted it with the ext3 filesystem with the following command:

sudo mkfs.ext3 /dev/mapper/truecrypt1

Once it was done I dismounted the Truecrypt partition:

truecrypt -d

And then I remounted it from the command line:

truecrypt /dev/sdb1 /media/truecrypt1

Then I did chown and chmod on its mount point.

sudo chown nobody:nogroup /media/truecrypt1
sudo chmod -R a+rw /media/truecrypt1

That seemed to work for me!

Setting up a completely insecure Samba share in Xubuntu

As mentioned in another post, I wanted to set up a completely insecure Samba share on my Xubuntu laptop home server to serve as the family file server. What follows is an account of how I stumbled onto something that appeared to more or less work. I really don't know a thing about Samba so I am sure this is riddled with mistakes and bad advice.

Private and guest (no password prompt) Samba shares with security=user

This sounded like a good approach to me, and it seemed to work with my Mac Mini and Windows XP boxes as clients, but when I tried to mount it on our Windows Vista laptop it kept prompting me for a user name and password and it wouldn't let me mount the share without it.

So I did some more googling and found this guide:

guide to setting up Samba so it's wide open and no passwords are required.

In the end I took bits and pieces from both guides, mashed them into the default smb.conf file that comes with Xubuntu, then added some stuff I figured out along the way, and ended up with this as my final configuration in /etc/samba/smb.conf. I have edited out all the commented out options that bulk up the default smb.conf file.

[global]

# Change this to the workgroup/NT-domain name your Samba server will be part of
workgroup = MYWORKGROUPNAME

# server string is the equivalent of the NT Description field
server string = %h server (Samba, Ubuntu)

# This will prevent nmbd to search for NetBIOS names through DNS.
dns proxy = no

# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
interfaces = 127.0.0.0/8 eth0

# This tells Samba to use a separate log file for each machine
# that connects
log file = /var/log/samba/log.%m

# Cap the size of the individual log files (in KiB).
max log size = 1000

# We want Samba to log a minimum amount of information to syslog. Everything
# should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log
# through syslog you should set the following parameter to something higher.
syslog = 0

# Do something sensible when Samba crashes: mail the admin a backtrace
panic action = /usr/share/samba/panic-action %d

# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
security = share

## You may wish to use password encryption. See the section on
# 'encrypt passwords' in the smb.conf(5) manpage before enabling.
encrypt passwords = true

# If you are using encrypted passwords, Samba will need to know what
# password database type you are using.
passdb backend = tdbsam

obey pam restrictions = yes

guest account = nobody
invalid users = root

# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
unix password sync = yes

# For Unix password sync to work on a Debian GNU/Linux system, the following
# parameters must be set (thanks to Ian Kahan for
# sending the correct chat script for the passwd program in Debian Sarge).
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

# This boolean controls whether PAM will be used for password changes
# when requested by an SMB client instead of the program listed in
# 'passwd program'. The default is 'no'.
pam password change = yes

# This option controls how unsuccessful authentication attempts are mapped
# to anonymous connections
map to guest = bad user

# Most people will find that this option gives better performance.
# See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html
# for details
# You may want to add the following on a Linux system:
# SO_RCVBUF=8192 SO_SNDBUF=8192
socket options = TCP_NODELAY

# Allow users who've been granted usershare privileges to create
# public shares, not just authenticated ones
usershare allow guests = yes

[public]
comment = Public Share
# Path to directory
path = /media/truecrypt1
# Allow writing to share
read only = No
# Force connections as guests
guest only = Yes
guest ok = Yes
# Optionally, specify guest account here
guest account = nobody
# These two are optional
# Sets the umask for files/directories created on this share
force create mode = 777
force directory mode = 777

Then I restarted samba by running:

sudo /etc/init.d/samba restart

Once I had Samba up and running I had to untangle all the various Linux permissions issues. In the past I have run into weird issues where only the Owner of a file could do certain things, even though all permissions were set to allow all users to do anything, so out of an abundance of caution I just recursively changed the owner of the Truecrypt mount point to "nobody" and the group to "nogroup", which is the user that Samba uses under my smb.conf:

sudo chown nobody:nogroup /media/truecrypt1

I also gave everyone read-write permissions on the mount point:

sudo chmod -R a+rw /media/truecrypt1/

When I copied files over to the encrypted drive from within the filesystem (i.e. not through the Samba server) I was always careful to preface my commands with "sudo -u nobody" which means run the command as user nobody, which made sure all the files I copied over had the right owner. Another way to do this would have been to just copy the file and then do a recursive chown on the encrypted drive. Also, after I copied files over to the encrypted drive through the filesystem I ran this again to ensure that all the copied files had the permissions I wanted:

sudo chmod -R a+rw /media/truecrypt1/

When I did some test copies of files from remote machines over the Samba share I kept having problems with files not getting the right permissions. It turns out that Linux permissions are not inherited from the parent directory; instead they are determined by the creating process's umask. I think it's possible to set a umask for each user (maybe). After a lot of research and some trial and error I learned that if these values are put in the share definition in /etc/samba/smb.conf:

force create mode = 777
force directory mode = 777

Then that tells Samba that all new files and directories have full read-write-execute permissions. In the long term I should probably go back and reconfigure all of the permissions settings, but at that point I just wanted to get it working in some fashion or another.
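The non-inheritance is easy to see locally: a new file's mode comes from the creating process's umask, not from the parent directory's permissions, which is why Samba needs its own force modes. A quick demonstration (paths are just for illustration):

```shell
# Parent directory is wide open, but that doesn't matter for new files:
rm -rf /tmp/umaskdemo && mkdir /tmp/umaskdemo && chmod 777 /tmp/umaskdemo
umask 022; touch /tmp/umaskdemo/a    # mode = 666 & ~022 = 644
umask 000; touch /tmp/umaskdemo/b    # mode = 666 & ~000 = 666
stat -c '%a %n' /tmp/umaskdemo/a /tmp/umaskdemo/b
```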

Setting up a CUPS server using Xubuntu 8.04

I plugged an HP LaserJet 1012 into my laptop server. Then I edited the CUPS configuration file:

sudo nano /etc/cups/cupsd.conf

And made the following changes in the file:
  • I changed: BrowseAllow @LOCAL
    to:
    BrowseAllow all
  • In the blocks: <location /> , <location /admin> and <location /admin/conf>
    I changed
    Order deny,allow
    to
    Allow all
  • I changed: Listen localhost:631
    to:
    Listen 631
  • I added the line: DefaultEncryption Never
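Pulled together, the relevant bits of my edited cupsd.conf look roughly like this (a sketch of just the changed lines; everything else in the file is left alone):

```
Listen 631
BrowseAllow all
DefaultEncryption Never

<Location />
  Allow all
</Location>
<Location /admin>
  Allow all
</Location>
<Location /admin/conf>
  Allow all
</Location>
```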
Then I tried to do this to allow cupsys to prompt me for my Paridita password when doing admin tasks on the web interface:

sudo adduser cupsys shadow

But I got a message that user cupsys doesn't exist, and no amount of googling revealed an answer so I decided to plow on ahead.

I connected to the CUPS web interface by pointing my browser at http://10.10.10.160:631 and then I added a printer using the CUPS web interface specifying the correct driver for my printer.

Then on the CUPS web interface I went back, selected modify printer on my newly created printer, and changed the make and model to "Raw" so that the client computers can choose the printer driver to use. I did this because I have had problems with the Ubuntu driver for the HP LaserJet 1012 (the dreaded "unsupported personality" problem), so by setting the printer up as a "raw" printer I can specify the printer driver on the client machines and thus use a different driver.

Setting up a LAMP server using Xubuntu 8.04

In Ubuntu there is a handy-dandy little command line utility that allows you to install a number of predefined package collections (like LAMP server) from the command line:

sudo tasksel

I fired this up in a terminal and then selected "LAMP server" on the primitive GUI and clicked OK. After the install was done (maybe 2 minutes), I updated all of my packages to bring the new packages up to date.  Then I pointed my browser on another machine towards my laptop's IP address and got a page that says "It works!" which is just what I wanted to be told.

Was phpMyAdmin installed by default as part of the LAMP server package? Tried pointing my browser at http://10.10.10.160/phpmyadmin and got nothing, so I guess not? That was easy to fix:

sudo aptitude install phpmyadmin

Try http://10.10.10.160/phpmyadmin again and it works! I logged in as root using the MySQL root password I defined in response to a prompt during the LAMP install. Then I created a new user "andy@localhost" and gave this user all possible privileges on the MySQL database.

I like to have a special user called "www" and have the web server's document root be that user's home directory. That way when I want to work with the web server's files I can just log in as "www" and go to work. So, first I set up the user and created a password:

sudo adduser www
sudo passwd www

And then I pointed apache to the home directory of user www:

sudo nano /etc/apache2/sites-available/default

I changed the DocumentRoot value to /home/www and saved the file and then restarted Apache:

sudo /etc/init.d/apache2 restart
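A sketch of the relevant part of the sites-available/default file after the edit. Note that the default file also has a <Directory> block for the old document root; pointing it at the new path as shown here is my assumption, not something spelled out above:

```
DocumentRoot /home/www
<Directory /home/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
</Directory>
```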

Then I did an SSHFS mount of the Xubuntu laptop as user www and copied some HTML files over to /home/www and verified that everything works.

Next I put a symbolic link to the phpMyAdmin directory in /home/www so that I could still use phpMyAdmin:

sudo ln -s /usr/share/phpmyadmin /home/www/phpmyadmin

Then I logged into phpMyAdmin and created new user php_user with all data (but not structure or database administration) privileges.

Next I created a new database with the same name as the database on my existing home web server, and then imported last night's SQL backup using phpMyAdmin. Tested all of my home web applications (don't ask) successfully.

Then it was time to transfer my home Wordpress blog from the old home web server. I had no idea how to do this, so first I did some googling to see what's up. Luckily for me I found this How-To on migrating a WordPress installation to a new server:

http://maketecheasier.com/clone-and-migrate-wordpress-blog-to-new-server/2008/01/30

I went ahead and moved my Wordpress installation:
  • I copied all of the Wordpress php files over to the web server document root on the Xubuntu laptop server as described in these directions. However, I didn't search through the files and change anything, I just copied the files over.
  • I used phpMyAdmin to make a SQL file backup of the Wordpress database, and then I used phpMyAdmin on the Xubuntu laptop server to create a new database named wordpress and then I imported the SQL file.
  • Then I tested the transfer by successfully using my browser to go to the wordpress folder on the Xubuntu laptop server.

But then I got to thinking: Maybe I should also upgrade to the latest version of WordPress? I found this How-To on doing that:

http://codex.wordpress.org/Upgrading_WordPress

However, I decided to save that for another day.

Working with FAT32 in Ubuntu Linux

While I was working on my Xubuntu laptop server project I learned a lot about working with FAT32 drives on Linux, and I learned all of it the hard way.

Only the owner of a FAT32 mount can set file time stamps on it.

I tried using rsync to copy a directory from my mounted NSLU2 NAS to a FAT32 drive:

rsync -av /NASmountpoint /FAT32mountpoint

That caused all kinds of errors saying rsync couldn't set permissions on the encrypted drive. A little research showed that it's not possible for rsync to set permissions on FAT32. So then I tried:

rsync -rltDv /NASmountpoint /FAT32mountpoint

That did better, but then gave errors about not being able to set dates on directories. So then I tried:

rsync -rltDvO /NASmountpoint /FAT32mountpoint

Adding the -O switch tells it not to try and set the date on directories. That fixed the errors, but then I noticed that the date-time of all of the files being copied over was set to today, which is not at all what I wanted.

I thought maybe it was a problem with rsync, so I tried using cp:

cp -arv /NASmountpoint/ /FAT32mountpoint

Result: The files copied, but the timestamps were not preserved and cp gave this message for each file: "cp: preserving times for `/media/truecrypt1/Washing Machines': Operation not permitted"

I double checked the permissions on the Truecrypt mount:

drwxrwxrwx 3 nobody andy 32768 2008-10-30 03:59

It looks like every user should have full permissions, but apparently they don't, since it's not possible to set file timestamps!

Next experiment: Dismount FAT32 partition and then remount it with option "uid=andy,umask=0000" and repeat the exact same cp operation.

Result: All the files copy over with the original timestamps properly preserved! Even the original timestamps were preserved on the directories, which is something rsync couldn't do under any circumstances.

Conclusion: In order to preserve original timestamps when using rsync or cp to copy files from a Samba share to a FAT32 mount you have to be logged in as the owner. Any other user will get an error, even if that user has full permissions to the FAT32 mount.

Next experiment: Dismounted the FAT32 partition, mounted it again with options "uid=nobody,umask=1000" and then tried copying to it from the original NAS mount using "sudo cp..."

Result: All the files copied successfully with original timestamps preserved, but cp spit out the following error for each file "failed to preserve ownership for ... operation not permitted."

Next experiment: Try using "sudo -u nobody cp..." to run the copy command as user "nobody" which is the owner of the FAT32 partition.

Result: All files copied successfully with original timestamps preserved with no errors.

Conclusion: Only the owner of a FAT32 mount can copy files to it from another FAT32 mount without losing timestamps.

Experiment: Copy files from the NAS to my Xubuntu home directory, and then copy them to the Truecrypt partition (owner = nobody) as user andy.

Result: Original timestamps not preserved and cp gives errors.

Conclusion: Only the owner of a FAT32 mount can copy files to it without losing timestamps.

Once I understood the problem I was able to google other references to it:

Preserving time stamps on FAT32 copies Linux questions forum thread

Even the owner of a FAT32 mount can't set directory times reliably

Although I was now able to copy over the original file time stamps correctly I noticed that all of the directory modification times were set to the time I ran the rsync command, which is not what I wanted at all. A little googling uncovered this forum posting about this problem:

http://ubuntuforums.org/showthread.php?t=886048

This forum posting suggested adding the option "--modify-window=1", which gives 1 second slack on how closely file and directory times have to match before rsync will see them as different, and someone said that worked to correctly preserve original directory timestamps. This forum posting also referenced this article about Linux and FAT32:

http://www.osnews.com/story/9681/The_vfat_file_system_and_Linux/page1

Incidentally, this article recommended using these options when using rsync to a FAT32 drive:

rsync -rvt --modify-window=1

So, I was finally ready to do a dry run test of rsync to synchronize two FAT32 partitions:

sudo -u nobody rsync -av --delete --modify-window=1 /media/FirstFAT32/ /media/SecondFAT32 > /home/andy/logs/rsync_test.log

Success! The original directory timestamps were preserved, the new files were copied from the source to the destination with the original time stamps, the files deleted from the source were deleted from the destination, and nothing else was changed.

However, once I changed the destination FAT32 drive's mount option to "shortname=mixed" (see below) I noticed that the times on directories were no longer being preserved when I ran rsync. I thought I had solved that problem! I went back and ran the exact same rsync command that had worked before, but the directory times were still not preserved. The only thing that had changed since it worked correctly the first time was mounting the Truecrypt volumes with "shortname=mixed", but how could that make a difference?

Some more experimenting revealed that at least one empty directory had its time preserved. I tried cranking modify-window up to 2, but that still didn't work. I never did figure this one out.

The nightmare continues: The capitalization of short file names is messed up with FAT32 mounts

While I was researching the above, I came across this cryptic little tidbit:

Another lesson is that the FAT32 partition should be mounted with the "shortname=mixed" option. If not, the rsync gets messed up. The default option is "lower," which means that files and directories with short names will have their names forced to lower case. But then rsync will think they're different from the ones it sees in the source that are upper case, and it will send the upper case ones again, and then delete the lower case ones. But since it's really the same files, the end result is it transfers a bunch of data and then deletes it all! Not good.

Comparing my existing NAS with the duplicate on the FAT32 drive, sure enough I found that at least one directory name with an all caps name on the NAS had been converted to all lowercase on the encrypted drive. Doh!

So time to revise the FAT32 mount options to:

sudo mount -t vfat -o shortname=mixed /dev/sdc1 /media/SecondFAT32

When I tested it by copying over some new files with short all caps names it seemed to work properly.

Haven't I suffered enough?: Rsync freaks out about capitalization changes when syncing two FAT32 drives

I now knew how to correctly preserve the capitalization of file names when copying to a FAT32 mount, but what to do about all the existing files and directories that had their case changed by my earlier attempts? I decided to run an rsync dry run to see if it would pick up on the files and directories where the case of the name didn't match:

sudo -u nobody rsync -avn --delete --modify-window=2 --stats /media/FirstFAT32/ /media/SecondFAT32 > /home/andy/logs/test_sync.log

Then I verified the dry run log looked right: it showed that rsync planned to copy all the files and directories with a capitalization mismatch. Then I ran the same command without the n option to fire for effect. However, when I checked over the logs I noticed something very peculiar: not all of the file and directory names with the wrong case were fixed. Some were, but not all. Even weirder, for these unfixed files rsync said in the log that it was deleting them on the destination, but then never said it copied them to the destination, and when I looked they were still there. What's up with that? Not only that, but there were big differences in the number of gigs copied between the dry run log and the actual run log.

By this point Linux had beaten me into submission, so rather than try to puzzle out this weird behavior I just deleted everything on /media/SecondFAT32 and started over by using cp to copy everything all over again.

When will it end: Accented characters in file names are lost with FAT32 mounts by default

I thought I was finally done. However, like in some Hollywood blockbuster where you think the villain is dead but then he suddenly springs back to life to threaten the heroes again, Linux was not done with me yet.

When I checked over the copy I had made, all the characters with accents in the file names had been changed to underscores! Doh! I used a Windows box to run a Robocopy from the NAS to the encrypted drive, and it spotted and re-copied the files with the correct accented characters in their file names.

I found this article which discusses the missing accented characters problem with FAT32 mounts:

http://www.osnews.com/story/9681/The_vfat_file_system_and_Linux/page2/

Following the advice from this post I mounted my source FAT32 drive like so:

sudo mount -t vfat -o shortname=mixed,iocharset=utf8 /dev/sdc1 /media/FirstFAT32

I mounted the second FAT32 drive the same way. Thankfully that appeared to work and preserve accented characters in the file names.

Incidentally, the same accented characters problem happens when you mount a Samba share on a Linux box. I was never able to solve the problem, but this post claims to have a solution that apparently worked for other people:

http://ubuntuforums.org/showthread.php?t=728751

I spent a lot of time trying to solve this, but no matter what I did, I could not manage to mount my NSLU2 NAS so that it would show the accented characters in the file names. I suspect it's because my NSLU2 is a few years old and uses an older version of Samba, and the solution posted in this forum post may only work on current versions of Samba.

2008-11-10

Mythbuntu on a Via Epia SP8000e

A few years ago I got a Via Epia SP8000e to be a MythTV box. Here are the stats on the SP8000e:

  • Processor: 800 MHz VIA C3 Eden (fanless). Ubuntu reports:
    processor : 0
    vendor_id : CentaurHauls
    cpu family : 6
    model : 9
    model name : VIA Nehemiah
    stepping : 8
    cpu MHz : 800.222
    cache size : 64 KB
    fdiv_bug : no
    hlt_bug : no
    f00f_bug : no
    coma_bug : no
    fpu : yes
    fpu_exception : yes
    cpuid level : 1
    wp : yes
    flags : fpu vme de pse tsc msr cx8 sep mtrr pge cmov pat mmx fxsr sse up rng rng_en ace ace_en
    bogomips : 1602.28
  • "Integrated VIA UniChrome AGP graphics with MPEG-4 accelerator"

I had this running with Ubuntu Edgy Eft and MythTV 0.20 (I think), but it was giving me problems occasionally and the Edgy repositories have been closed down for a while so it was impossible to upgrade packages anymore. I decided to cross my fingers and try upgrading to Mythbuntu.

I started out with the Mythbuntu 8.04 install CD. Everything seemed to install without problems, and once I figured out how to do the MythTV setup properly I got it running. However, the Via C3 Eden is not very powerful, and without XvMC enabled it runs at around 70-90% CPU utilization playing back recordings. So, I tried to enable XvMC by selecting the Via XvMC as the decoder in a new Playback Profile. It didn't work, and I got the errors/problems described in this forum post and bug listing:

Problem :: VIA XvMC / MythTV 0.21 / Upgrade to Ubuntu 8.04

http://bugs.gentoo.org/show_bug.cgi?id=228473

I tried changing the libraries referenced in /etc/X11/XvMCConfig to libchromeXvMC.so.1 and libchromeXvMCPro.so.1 and that didn't fix it.
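For reference, /etc/X11/XvMCConfig is just a one-line file naming the XvMC wrapper library to load, so each attempt amounted to making the file contain a single line like this (neither variant fixed it for me):

```
libchromeXvMC.so.1
```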

I tried doing an upgrade to Mythbuntu 8.10, and a fresh install of Mythbuntu 8.10, and still had the same problem with XvMC.

It seemed from the posts about this problem that it appeared with kernel 2.6.24, so I dug up the ISO for Mythbuntu 7.10 (it's not listed on the Mythbuntu site anymore, but with some creative googling for mythbuntu-7.10-i386.iso you can find it). Mythbuntu 7.10 installed just fine, and XvMC worked as soon as I selected Via XvMC in the appropriate MythTV Frontend setup screen (Settings -> TV Settings -> Playback something or other, I think).

However, when I did a normal install of all updates Mythbuntu installed MythTV 0.21 and XvMC stopped working. I determined that with Mythbuntu 7.10 the backports repository is enabled by default, and that is where MythTV 0.21 came from, so I reinstalled Mythbuntu 7.10 from the CD, then on the Mythbuntu desktop I updated the software sources to remove backports, and then I was able to update all packages without getting bumped up to MythTV 0.21.
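The backports removal can also be done from a terminal instead of the Software Sources GUI. Here is a sketch that comments out the backports lines with sed; it operates on a scratch copy of sources.list so it is safe to try (for real, point it at /etc/apt/sources.list with sudo and then run sudo aptitude update):

```shell
# Comment out every backports deb line so an update stays on MythTV 0.20.
SOURCES=./sources.list.copy        # for real: /etc/apt/sources.list (with sudo)
printf '%s\n' \
  'deb http://archive.ubuntu.com/ubuntu gutsy main restricted universe' \
  'deb http://archive.ubuntu.com/ubuntu gutsy-backports main restricted universe' \
  > "$SOURCES"
sed -i 's/^deb\(.*-backports\)/# deb\1/' "$SOURCES"
```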

So, I finally have MythTV working on my Via Epia SP8000e using Mythbuntu 7.10.

If you look at the forum posting and bug report listed above Robert reports that this problem exists with Gentoo and Slackware, and that he has been able to get MythTV 0.21 working with post 2.6.24 kernels by compiling his own kernels with the memory allocator set as SLAB instead of the default SLUB. I have not tried this yet since I need some time for the scars to heal before I wade into battle with Linux again.

2008-10-27

Getting backlight to turn off on Dell Inspiron 8100 with Xubuntu 8.04 Hardy

All I wanted was to get the backlight to turn off when my laptop (a Dell Inspiron 8100 running Xubuntu 8.04 Hardy) was idle for a few minutes. Is that so wrong? Why did it take over 3 hours of googling and trying various things without really understanding what I was doing?

I did a whole lot of things, and I have no idea which ones were truly necessary or not, so I will start at the end, with the final change that actually got it to work, and work backwards from there.

I had everything configured properly, but the screensaver and monitor power-off still would not work. Then I found an Ubuntu Forums post explaining that the gnome-screensaver module is turned off by default in Xubuntu 8.04, even though there is a place in the Settings Manager for configuring the screensaver! That explains why all the changes I made to the screensaver configuration had no apparent effect. To make matters worse, the gnome-power-manager settings for what to do when the computer is idle apparently depend on the gnome-screensaver module being active, so power management is hobbled by default as well. In any event, the fix is:

  • Settings -> Settings Manager -> Autostarted apps
  • Then click on Add to create a new entry
  • Then enter gnome-screensaver as the command (and any name and description you want)
  • Then exit out and restart the desktop (Ctrl-Alt-Backspace) to restart with the screensaver running.
So that was what finally got my gnome-power-manager settings to come to life.
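The same autostart entry can be created from a terminal. Xfce follows the XDG autostart spec, so a sketch like this should be equivalent to the GUI steps above (the file name is my choice, and the exact file the GUI writes may differ):

```shell
# Drop a .desktop entry in the XDG autostart directory so gnome-screensaver
# launches with the desktop session.
mkdir -p "$HOME/.config/autostart"
cat > "$HOME/.config/autostart/gnome-screensaver.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Screensaver
Exec=gnome-screensaver
EOF
```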

Working backwards, here is how I configured the screensaver settings:
  • Settings -> Settings Manager -> Screensaver
  • Select a screensaver theme.
  • Set the slider for "Regard the computer as idle after"
  • Check the box for "Activate screensaver when idle."
  • Click on the Power Management button
  • On both the On AC Power and On Battery Power tabs set the slider for "Put display to sleep when inactive for:"
This will result in the screen saver kicking on after the time you set, and then the backlight turning off after the time you set on the Power Management screen. I couldn't find a way on these screens to go straight to turning off the backlight without going through the screensaver first.

However, after much fiddling around I was able to get the backlight to turn off without going through the screensaver first (at least not for long) by doing the following:
  • Set the screensaver to "Blank" on the screensaver settings page.
  • Start up gconf-editor from a terminal, then navigate to Apps -> Gnome-Power-Manager -> Timeout and set the values for sleep_display_ac and sleep_display_battery to both be 1.
This seemed to work to make the backlight turn off at the time set for the screensaver to turn on. Your mileage may vary.

Another thing I did was use gconf-editor to set the settings for Apps -> Gnome-Power-Manager -> Backlight -> dpms_method_ac and dpms_method_battery to "off". I think this may have been necessary to ensure that the backlight was told to turn off when the idle time setting was reached.

Other things I did earlier, but which I am not sure were necessary, were to install libsmbios-bin (sudo aptitude install libsmbios-bin) and then add dcdbas to /etc/modules (sudo nano /etc/modules and then just put dcdbas on a line by itself). I read some posts that suggested these were necessary in order to allow Ubuntu to control the backlight on a Dell laptop, but who knows.
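The /etc/modules edit can also be done non-interactively. A sketch that appends the line only if it is missing, shown against a scratch copy so it is safe to try (point it at /etc/modules, with sudo, for real):

```shell
# Append "dcdbas" to a modules file, but only if it is not already there.
MODULES=./modules.copy             # for real: /etc/modules (edited with sudo)
touch "$MODULES"
grep -qx dcdbas "$MODULES" || echo dcdbas >> "$MODULES"
```

Because of the grep guard the line is idempotent; running it twice still leaves a single dcdbas entry.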

I learned that you can set more options of gnome-power-manager by installing gconf-editor (sudo aptitude install gconf-editor), starting it by entering gconf-editor in a terminal, and then going to Apps -> Gnome-Power-Manager.

Changing wireless cards on Xubuntu 8.04

I have Xubuntu 8.04 running on an old Dell Inspiron 8100. I started out with a Netgear WG511 v2 wireless card, but it kept having trouble connecting to my router in spots where there should have been no problem connecting. So, I decided to "just" switch to another wireless card I had lying around, a Linksys WPC54g v2 which uses the ACX111 chipset. It was a long and painful process to get it to work, so I thought I would write down what I think I learned along the way.

First, remove the existing wireless card and reboot with a wired ethernet connection and make sure it works. This way you will be able to do research on the web and download files as needed.

Next, make your computer forget it ever met your old wireless card; otherwise, as happened to me, it will assign the new card to wlan1, which could cause trouble. Linux keeps a list of every network device it has ever met and which interface it assigned it to (i.e. wlan0, wlan1, etc.). This list is at:

/etc/udev/rules.d/70-persistent-net.rules

Use sudo nano to edit this file and remove the entry for your old wireless card and then save the file. This will prevent Ubuntu from saving wlan0 for your old card and assigning your new card to wlan1. You could probably make everything work with your new card on wlan1, but you would have to find and change every configuration file that references wlan0 and to me it just seems easier to force your new card to be assigned to wlan0.
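If you know which line belongs to the old card, the edit can also be done with sed instead of nano. A sketch, run against a scratch copy of the rules file (the MAC address here is made up; the real file is /etc/udev/rules.d/70-persistent-net.rules and needs sudo):

```shell
# Delete the stale rule that pins the old card to wlan0.
RULES=./70-persistent-net.rules.copy
printf '%s\n' \
  '# PCI device (Netgear WG511 v2)' \
  'SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="wlan0"' \
  > "$RULES"
sed -i '/NAME="wlan0"/d' "$RULES"
```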

If your old card used ndiswrapper then you need to make your computer forget about the old card's drivers. To do this first find out the name of your old wireless card driver:

sudo ndiswrapper -l

It should show you the name of the existing wireless driver. Then delete that driver from ndiswrapper, substituting the driver name that -l reported:

sudo ndiswrapper -e drivername

These steps should eliminate your old wireless card configuration so that you can proceed with installing your new card without creating any conflicts. I followed this guide for the WPC54g v2:

http://ubuntuforums.org/showthread.php?t=324148


If you have a different card just search for a guide on how to install it.

2008-10-08

The rewards of being disorganized

I often wonder why so many people make little or no effort to get their lives organized. Everyone agrees that getting your environment and affairs organized makes life easier, but so few seem to do it.

Then I had an idea. Maybe it's psychologically more rewarding to be disorganized than it is to be organized, at least in the short term.

What gave me this idea is a New York Times opinion piece that described experiments regarding willpower:
http://www.nytimes.com/2008/04/02/opinion/02aamodt.html?partner=permalink&exprod=permalink
In this piece the authors assert that "the brain has a limited capacity for self-regulation, so exerting willpower in one area often leads to backsliding in others." They also said that modifying behavior in pursuit of goals uses up willpower: "The brain’s store of willpower is depleted when people control their thoughts, feelings or impulses, or when they modify their behavior in pursuit of goals." They discuss experiments showing that exercising willpower on one task results in reduced performance on a subsequent task requiring willpower.

So maybe getting organized, which naturally involves modifying behavior in pursuit of goals, is experienced by people as draining.

And maybe the surprises and emergencies that result from being disorganized are rewarding because they trigger an adrenaline rush (sorry, no articles yet to cite for that proposition). There is something stimulating about quickly reacting to emergency after emergency, and that mode of living is so rewarding that we have movies and TV shows, like Indiana Jones and 24, that depict it.

The rewards of reacting to crisis, and the draining nature of proper preparation, could lead to a dynamic where a person would experience getting organized as unrewarding because (1) the exercise of willpower is draining, and (2) the reduction in emergencies leads to fewer rewarding spurts of adrenaline. Someone attempting to pull their life together could simply find that the experience just feels wrong.

This could explain why the chronically disorganized tend to look down on the organized as living a dull and meaningless existence. In their experience attempts at organization left them feeling drained and unstimulated, and so they assume that the lives of organized people must be draining and unrewarding.

Well, are the disorganized people right? Are the organized not living life to the fullest? According to the New York Times piece people can grow their capacity for willpower by exercising it, just like you can make a muscle stronger with exercise. So maybe the organized people who have bulked up their willpower with regular exercises of willpower don't find being organized to be nearly as draining as the disorganized experience it to be.

2008-10-03

Why you need to over-save if your retirement funds are in the stock market

In recent decades the conventional wisdom has been that people should invest the majority of their retirement savings in the stock market. The conventional reasoning goes like this:


  • The stock market has a historic rate of return of 10% over the long term, which is much better than the other investment choices.

  • There is risk in stocks, and the stock market can drop in the short term, but it always comes back up in the long term.

  • Your retirement is off in the future, so you are investing for the long term and short term downturns don't matter.

  • Your retirement will last a long time so even in retirement you will be investing for the long term.

  • If you don't take risks your investments won't make enough money and you won't have enough money to retire when you want to, or to spend as much as you want to in your retirement years.

Do you find yourself nodding your head in sage agreement with this sound advice? Taking risks; getting high returns; thinking in the long term.

Now let's look at what the purveyors of the conventional wisdom have to say when there is a big downturn in the stock market:

"Q. But what if I am about to retire? Then what?

A. Leaving the work force at a time like this creates big problems. Not only is your portfolio down, but you need to start withdrawing from it. So you are essentially locking in your losses.
If your portfolio has taken a big hit, it may be time to seriously consider delaying retirement. Working just a few years more can make a big difference. Or, a part-time job may keep you from having to dip into your portfolio before it recovers."


From "Is My Money Safe" New York Times, 29 Sept 2008

"Fortunately, you can soften the blows of retiring in a slumping market. Here are some ways to help make sure your savings last as long as you do:

WORK LONGER AND SPEND LESS. This may sound obvious, and somewhat depressing, but working just a few years longer can make a big difference.
"

From "Retire Now, and Risk Falling Short on Your Nest Egg" New York Times, 16 August 2008. See also "Retirees Filling the Front Line in Market Fears" New York Times, 22 September 2008.

Now wait a second. You were investing in the stock market so that you wouldn't have to work in your golden years, and so that you wouldn't have to pinch pennies. And now they are telling you that you need to delay retirement and spend less in retirement exactly BECAUSE you were savvy and invested your retirement in the stock market? Our savvy strategy got us the exact result we were trying to avoid?

The problem with the conventional wisdom is that it doesn't consider one of the most important realities of retirement: You will need to withdraw money from your retirement fund on a fixed schedule over a long period. There are invariably one or more substantial stock market downturns during any given 20 year period, and so invariably the person with all of their retirement savings in the stock market will have to sell some portion of their portfolio when the market is down at some point during their retirement. And once you sell when the market is down it is impossible to realize the long term average return on your portfolio because that long term average assumes you never sell.

Since you will need to be withdrawing your money on a fixed schedule, the long term return of the stock market is irrelevant to you. You don't care if over time it will eventually return 10% for someone who never had to sell their stocks. What you care about is what the return will be if you have to start withdrawing a fixed amount every year starting at age 65. And guess what? No one can tell you what that rate of return will be because it's impossible to calculate or predict. If you are really lucky the market will stay up for your entire retirement. But what is more likely is that there will be a substantial downturn sometime during your retirement, and you will need to sell stocks at a loss just to keep up with your expenses, and then your retirement fund will be crippled for the rest of your retirement because you had to liquidate at a bad time, and you will be in the exact financial place you were trying to avoid.

Another reality that the conventional wisdom ignores is that things come up and you never know when you are going to need to suddenly make a larger than expected withdrawal from your retirement fund. People lose jobs. Family members get disabled. Your child gets mixed up with the law. The big house you bought with an adjustable rate mortgage loses value and you have to sell at a loss because you got transferred and in order to close the sale you have to pull money out of your retirement fund to make up the balance on your mortgage. Stuff happens, and if it happens during a market downturn you could be forced to sell stocks at bargain prices and be left in the exact situation you were trying to avoid.

At this point the conventional-wisdom follower is shaking her head with a knowing little smile, because she knows that even though stocks are risky, people who invest in stocks are still going to be better off in the long run than people who invest their retirement in bonds or (horrors) CDs: the stock market investors will still get a better rate of return on average, and so they will probably have more money in retirement no matter what. That will probably be true most of the time if you assume the same amount of money invested in stocks vs. CDs. However, what if the perceived high rate of return in the stock market causes someone to save less?

Imagine you are in your 20s and you are starting to make plans for retirement. You carefully calculate the annual income you want to have in retirement, and then you calculate how much you need to be saving now to reach that goal, assuming (of course) a 10% return since you are savvy and will invest in the stock market. Thanks to the high returns available to the savvy stock investor it turns out that you don't have to save that much of each paycheck, and so you can afford to spend lots of money on lattes and nice cars and big houses. Meanwhile your clueless neighbor invests everything in low yielding but safe investments, and since they need to save more of their paycheck they have a plainer car, make their own coffee, and spend less on their home. Fast forward 45 years. The stock market crashes just as you reach age 65 and suddenly your retirement fund shrinks to less than that of your clueless neighbor. You need to postpone your retirement, sell your nice car, drop the lattes, and pray the market recovers before you are 80. Your clueless neighbor's retirement starts right on schedule and she has exactly the amount of money for retirement that she was expecting.

Here are some illustrative numbers. I put together a spreadsheet using Robert Shiller's stock market data and compared two hypothetical people who steadily invested a fixed percentage of the US median household income every month from 1968 to 2008, the year they both plan to retire. One person put 10% of median household income in the S&P 500. The other invested 20% of median household income at long term interest rates. Here's how they compared last year:

October 2007
10% of income in S&P: $1.1 million
20% of income at long term rates: $762,000

The safe investor sure looks like a chump, and the stock investor is looking forward to a relatively lavish retirement. But let's look again a year later:

23 October 2008
10% of income in S&P: $646,000
20% of income at long term rates: $802,000

The stock investor has seen her nest egg diminish to almost half its size a year ago, and her life has been turned upside down. The safe investor has had no disruption. Note that the stock investor, even with the market downturn, got a higher rate of return than the safe investor. And the stock investor was able to spend more money during her working years. But that didn't protect the stock investor from having her retirement plans thrown out the window. And if the stock investor retires on schedule, she will pull money out of the market when it's down, which will reduce her overall rate of return in coming years.

The moral of the story: if you save for retirement assuming a high rate of return from stock investments you need to be prepared to suddenly find yourself with much less money than you were expecting to have.

So what's the answer? The answer is that if you want to be sure of having a certain annual income in retirement, then during your working years you have to save, and spend, at a rate that assumes a low rate of return. If you save like all your money was invested at 4%, and you spend in retirement like your money was invested at 4%, then you can probably afford to have some or all of your retirement fund in stocks, since you will probably be able to sell stocks at a loss during a downturn and still have enough left over.
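To see what "save like all your money was invested at 4%" means in dollars, here is a quick future-value-of-an-annuity check. The $500/month figure is just an illustration, not a number from my spreadsheet; the formula is the standard one for a stream of fixed monthly deposits:

```shell
# fv PMT RATE YEARS: future value of depositing PMT dollars every month
# for YEARS years at an annual return of RATE, compounded monthly.
fv() {
    awk -v pmt="$1" -v rate="$2" -v years="$3" 'BEGIN {
        r = rate / 12                  # monthly rate
        n = years * 12                 # number of monthly deposits
        printf "%d\n", pmt * ((1 + r)^n - 1) / r
    }'
}

fv 500 0.10 40    # the optimistic stock-market assumption
fv 500 0.04 40    # the conservative assumption argued for here
```

The gap between the two results is the extra saving the conservative plan requires: to end up with the nest egg the 10% assumption promises, the 4% saver has to put away several times as much each month.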

Another possibility is to start off investing all your retirement funds in the stock market when you are young, but then start moving money out of the stock market and into conservative investments starting 10-15 years before your planned retirement (when the market is up, of course) with the goal of having at least 5 years living expenses in very safe investments when you reach retirement.

2008-09-16

Bike racks at Tucson International Airport

I called the Tucson International Airport to find out if they have a bike rack there where I could leave my bike after biking to the airport to catch a flight. The woman said that there are bike racks "around back" near the rental car lot, and that if I follow the signs for rental car return I should see the bike racks.

2008-08-02

How to upgrade packages on command-line Ubuntu installation

To bring a command line Ubuntu installation up to date with the most current versions of all packages:

sudo aptitude update
sudo aptitude safe-upgrade
sudo aptitude full-upgrade

To upgrade your Ubuntu Server from 7.04 to 7.10 follow these few steps and you’ll be presented with a menu that will walk you through the upgrade process.
sudo aptitude install update-manager-core
sudo do-release-upgrade

How to fix unsupported personality problem with HP LaserJet 1012

Copied from a forum posting:

First, I downloaded the pxl1010.ppd from

http://linuxprinting.org/show_printer.cgi?recnum=HP-LaserJet_1012

and copied it to the /etc/cups/ppd/ directory. It's the third link in the "Drivers" section of the page, and there's a brief description of how the problem was fixed.

2008-07-20

Fix samba share not mounting at boot

I had this annoying problem with an Ubuntu box where a Samba share that was defined in /etc/fstab would not mount at boot, and then the mount point would get corrupted the first time a program tried to write to the unmounted Samba share, and then the only way to fix it was to unmount it, delete the mount point, recreate the mount point, and then use sudo mount -a to remount it.

How I finally solved the problem was to delete the Samba share from /etc/fstab and replace it with a command line to mount it in /etc/rc.local which apparently is a script that automatically runs at the very end of the boot process.

Specifically I added the following to /etc/rc.local (all one line):
sudo mount -t smbfs -o username=defaults,password=defaults,uid=usersname,gid=groupname,fmask=770,dmask=770 "//10.10.10.110/DISK 1" /media/mountpoint
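Two notes on that line: /etc/rc.local runs as root, so the sudo is harmless but unnecessary there; and at the very end of boot the network is occasionally still not ready, which can make the mount fail silently. A small retry helper (a sketch) guards against the latter:

```shell
# retry CMD ARGS...: run a command up to 5 times, pausing between attempts;
# returns 0 on the first success, 1 if every attempt fails.
retry() {
    for attempt in 1 2 3 4 5; do
        "$@" && return 0
        sleep 2
    done
    return 1
}

# In /etc/rc.local (no sudo needed, it runs as root), wrap the mount
# command from the entry above:
# retry mount -t smbfs -o <options> "//10.10.10.110/DISK 1" /media/mountpoint
```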

2008-07-18

A simple HTML-CSS scrolling data box with fixed column headers

I have a number of web pages where large tables of data are presented, and I wanted a way to display them so that when the user scrolled through the data the column headers would stay fixed and not scroll out of view.

My solution was to create a div with a fixed height and width and with its overflow property set to scroll, and then put the actual data table inside that div. Then I put the column headers in a completely separate table outside and right before the div. This way the column headers stay fixed on the page and the user can use the scroll bar that appears on the data div to scroll through the data without moving the whole web page.

The toughest challenge was figuring out how to get the column headers to consistently line up properly with the data columns, since the default behavior for HTML tables is for the columns to be sized based on the data in each cell, and so left to their own devices the columns of the data table would be different each time your data changed.

Just setting the width property of the table cells in CSS, alone, did not work, since it appears to me that the default behavior is to ignore the width property on td elements. After a lot of trial and error I discovered that the following combination appears to work for me:

- In CSS set the table-layout property for the data and column header tables to fixed.

- In CSS set the width properties for both the data and column header tables to be some value which is less than or equal to the sum of all of the column widths. If you don't do this (i.e. just leave the table width property blank or set it to more than the sum of the column widths) then the columns will not stay fixed in at least Firefox 2, though it will probably still work in IE7. I think this is because in FF2 even if you set table-layout: fixed in CSS it will expand the cells as needed to take up the whole width of the table, and if you don't set the table's width explicitly it will inherit the width of the div it is inside.

- Create a CSS class for each column and set the width property for each column to the right number of pixels, and then give each td of both tables the appropriate class.

Here is a rough outline of the code elements. This particular code is not designed to be fully cut-and-paste, and it's air-code, so just use it to understand the idea:


CSS

div.datagrid
{width: 1000px;
height: 400px;
overflow: scroll;
border: 1px solid black;
font-size: 11px;}

table.headers_and_data
{border: 1px solid black;
border-collapse: collapse;
table-layout: fixed;
width: 980px}

td.col_0
{width: 71px}

td.col_1
{width: 150px}

td.col_2
{width: 300px}

td.col_3
{width: 478px}

HTML

<!-- Output column headers table -->
<table border="1" class="headers_and_data">
<tr>
<td class="col_0">Column 0</td>
<td class="col_1">Column 1</td>
<td class="col_2">Column 2</td>
<td class="col_3">Column 3</td>
</tr>
</table>

<!-- Start a new div with specified height and overflow set to scroll so that the
table of items is scrollable (see the style for this div), then start the data table. -->
<div class="datagrid">
<table border="1" class="headers_and_data">
<tr>
<td class="col_0">Mississippi</td>
<td class="col_1">Jackson</td>
<td class="col_2">Delta swamps</td>
<td class="col_3">Possum</td>
</tr>
</table>