2010-12-15

How to run a Python script from Mac OS X Finder

Here is how to run a Python script from the Finder:

  • Make "#!/usr/bin/env python" the first line of your Python script.
  • Change the extension of the script file to ".command", e.g. my_python_script.command.
  • In Terminal make the Python script file executable by running "chmod +x my_python_script.command"
  • Now when you double click the Python script in Finder it will open a terminal window and run.
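
For example, a trivial my_python_script.command might contain nothing more than this (the print and raw_input lines are just placeholders):

#!/usr/bin/env python
print "This script was launched from the Finder"
raw_input("Press Return to close this window")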

2010-12-08

Checklist for making a home file, print and web server using Ubuntu and an old laptop

A couple of years ago I did a detailed set of posts on turning an old laptop into an Ubuntu file, print, and web server. That laptop started having hardware problems, so I decided to decommission it and replace it with another recently retired Dell Inspiron 1521 laptop. Here is a checklist of all the steps I took to convert the Windows Dell Inspiron 1521 laptop into a home file, print, and web server. If you are new to Linux you should read my original posts, since this checklist assumes you have done stuff like this before.

  • Downloaded and burned CD for Ubuntu LTS 10.04 standard desktop
  • Installed Ubuntu standard desktop 10.04 on Inspiron 1521. I followed the prompts and used the default options.
  • Wrote over existing Windows file system rather than doing dual boot.
  • After Ubuntu was installed I started Synaptic and updated packages
  • Installed openssh-server and samba using Synaptic
  • Opened a terminal and ran sudo tasksel to start a program that offers a number of options for installing groups of packages for different functions.
  • Selected install LAMP server and followed prompts to install.
  • Using Firefox on the new server I went to the Truecrypt website, downloaded the latest Linux version (7a) to the Desktop, extracted the file, double-clicked it, and followed the prompts to install Truecrypt.
  • Edited /etc/network/interfaces (sudo nano /etc/network/interfaces) to make computer use fixed IP address by adding:
auto eth0
iface eth0 inet static
address 10.10.10.123
netmask 255.255.255.0
gateway 10.10.10.1
  • Rebooted new server to put new static IP address in effect
  • sudo aptitude install ntp (ntpdate was already installed)
  • Set up mount points for encrypted external hard drives:
  • sudo mkdir /media/encrypted1
  • sudo mkdir /media/encrypted2
  • Changed owner of mount points to match what smb.conf will use for them
  • sudo chown nobody:nogroup /media/encrypted1
  • sudo chown nobody:nogroup /media/encrypted2
  • Created directory for my custom scripts for new server: sudo mkdir /home/andy/scripts
  • Mounted new server as volume on remote MacBook using MacFusion (SSHFS)
  • Copied backup of custom scripts over to new server from MacBook
  • Checked ownership and permissions of custom scripts and changed them as needed.
  • Ran my custom script to mount truecrypt volumes (/media/encrypted1 etc)
  • Made these changes to /etc/samba/smb.conf (sudo nano /etc/samba/smb.conf)
  • workgroup = MYWORKGROUP
  • removed semi-colon before interfaces = 127.0.0.0/8 eth0
  • removed # before security = user
  • Added these under Share Definitions
[public]
comment = Public Share
path = /media/encrypted1
read only = No
force create mode = 777
force directory mode = 777
force user = nobody

[backup]
comment = Public Share
path = /media/encrypted2
read only = No
force create mode = 777
force directory mode = 777
force user = nobody

[pictures]
comment = Public Share
path = "/media/encrypted1/Documents/My Pictures"
read only = Yes
guest only = Yes
guest ok = Yes
  • sudo adduser spouse
  • sudo smbpasswd -a andy
  • sudo smbpasswd -a spouse
  • Rebooted new server
  • Ran custom script for mounting truecrypt volumes
  • Mounted samba share from remote MacBook and opened file to make sure everything was working properly.
  • sudo aptitude install phpmyadmin
  • From remote MacBook browsed to http://10.10.10.123/phpmyadmin and logged in as user root
  • Using phpmyadmin created a new user and gave this user all possible privileges on the MySQL database.
  • sudo adduser www
  • sudo nano /etc/apache2/sites-available/default
  • Changed the DocumentRoot value to /home/www/public and saved the file
  • Restarted Apache: sudo service apache2 restart
  • From remote MacBook did SSHFS mount as user www using MacFusion
  • Copied all html/php files from backup to /home/www
  • Used phpMyAdmin to create new user web_app_user with all data (but not structure or database administration) privileges. This is the MySQL user used by the web apps.
  • Used phpmyadmin to create new MySQL databases with same names as were used on the old server (including wordpress).
  • Used phpmyadmin to go into each database and then import the SQL file backup of that database.
  • Edited /etc/php5/apache2/php.ini to change session.gc_maxlifetime = 1440 to 1814400 (this prevents web app users from being logged out of web app shortly after they sign in)
  • Edited the otherwise empty /etc/apache2/httpd.conf to add "ServerName myserversname". This prevents the annoying "Could not reliably determine the server's fully qualified domain name" message when you start apache2.
  • Restarted apache: sudo service apache2 restart
  • Tested that web applications work.
  • Got the crontab setups from the old server by doing crontab -e as each user that had a crontab and writing down what was there.
  • Changed permissions on custom scripts that would be run by different users via crontab: chmod a+rwx scriptname.sh
  • Created a logs directory in /home/andy and opened up its permissions: chmod -R a+rwx logs
  • Tested all backup scripts on new server.
  • Setup crontabs on new server using crontab -e
  • Configured CUPS for printer (not sure all of this is necessary)
  • sudo nano /etc/cups/cupsd.conf
  • In the <Location />, <Location /admin>, and <Location /admin/conf> blocks added:
  • Allow all
  • Commented out "Require user @OWNER @SYSTEM" wherever it appeared except for administrative tasks.
  • Changed:
  • Listen localhost:631
  • to:
  • Listen 631
  • Added the line: DefaultEncryption Never
  • sudo service cups restart
  • Browsed to 10.10.10.123:631 from a remote machine
  • Clicked Add Printer
  • Selected HP LaserJet 1012 (HP LaserJet 1012), not the 1012 printer with USB in the name.
  • Name: HP_Shed
  • Description: HP LaserJet 1012
  • Location: Shed
  • Selected Model: HP LaserJet 1012 - CUPS+Gutenprint v5.2.5(en)
  • Selected the default options
  • Selected Modify Printer for the printer just created
  • Clicked Select Another Make/Manufacturer
  • Selected Make: Raw

2010-11-30

Specifying different CSS for landscape and portrait orientations on the iPad

I have a web app that uses jQuery UI buttons. I noticed that on my iPad the buttons worked fine in portrait orientation, but in landscape orientation the buttons, and even regular links in a table, would not work properly when pressed (a different button or link than the one pressed would fire). I determined that this problem was caused by the following tag in my HTML:

<meta name='viewport' content='width=device-width' />

Using this tag causes Safari Mobile (the iPad browser) to zoom in on the page a bit in landscape orientation, making the fonts a bit bigger, which would be fine except that it apparently breaks link and button functionality in some cases. I got the buttons and links to work properly by using this tag instead:

<meta name='viewport' content='width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no' />

This locks down the page so it doesn't (and can't) zoom in or out.

That fixed my broken buttons and links in landscape orientation, but I had really liked the way the fonts got bigger in landscape orientation. It gave me a way to have two different zoom levels in my app. If I wanted to see more content in a small font I used portrait; if I wanted to see less content in a bigger font I used landscape.

So, I figured out how to get the exact same effect using CSS "media queries." Here is some CSS that first sets styles for the iPad regardless of orientation and then specifies different styles for portrait and landscape orientation:

@media only screen and (max-device-width: 1024px)
{
    /* This block specifies CSS that only applies to an iPad;
       max-device-width: 1024px seems to only select the iPad */

    body
    {
        margin-left: 0px;
        margin-top: 0px;
    }
}

@media only screen and (max-device-width: 1024px) and (orientation: portrait)
{
    /* This block provides CSS that only applies to the iPad in portrait orientation */

    body
    {
        font-size: 16px;
    }
}

@media only screen and (max-device-width: 1024px) and (orientation: landscape)
{
    /* This CSS kicks in only when an iPad is in landscape mode. It makes the font bigger, the table rows taller, etc. */

    body
    {
        font-size: 18px;
    }
}


You can put this block of CSS at the end of the regular stylesheet for a page and it will apply different CSS when the page is viewed on the iPad, allowing you to have a completely different layout on the iPad than you have on a regular browser. What is remarkable to me is how smoothly the styles change when I switch orientation on the iPad; I was afraid there would be lag while the page reformatted but I can't detect any.

2010-11-21

How to avoid typing the full path of a file in Mac OS X

Occasionally in Mac OS X I want to write the full path of a file in a Bash script or something. To avoid typing out the full path (and possibly making errors) just:

  • Go to the file in Finder.
  • Hit Cmd-I to bring up the Get Info window for the file.
  • Block and copy the path from the "General > Where:" section of the Get Info box into your text file.
  • Block and copy the file name from the Get Info box.

2010-11-06

Workaround for jQuery .live() event handler not working on Mobile Safari, iPad, iPod Touch, iPhone

I have a web app where I use the jQuery .live() method to attach a click event handler to a table td element (to convert it into an input field when the user clicks the table cell and then post the input using Ajax). Everything worked properly in all the mainstream browsers (Firefox, Safari, Chrome), but nothing happened when I tapped the table cell while viewing the app on my iPad. Some Googling led me to this blog post:

jQuery’s live click handler on mobile Safari

This guy discovered that the .live() click event will fire on Mobile Safari (i.e. iPad, iPod Touch, iPhone) when it is attached to certain elements (like an anchor tag), but for some other elements it will only fire if you add onclick="" to the element tag.

I added onclick="" to my td tag for my table cell and voila, tapping on the table cell works now to convert it to an input field like it should.
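
For reference, the cell markup ends up looking something like this (the class name is just a stand-in for whatever selector the .live() handler is bound to):

<td class="editable" onclick="">cell contents here</td>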

2010-11-04

The lineage model for hierarchical data in a SQL database

Recently I decided to make a web app to keep track of a number of different policies, all of which had their provisions numbered and organized in an outline hierarchy that often went three or more levels deep.

The first thing I researched was how to structure SQL databases to handle hierarchical data, like a product catalog with the products grouped into categories and subcategories, or an org chart where employees are grouped into divisions and departments. I quickly learned that most people use either the adjacency list model or the nested set model, both of which are described in detail in an article on the MySQL website: Managing Hierarchical Data in MySQL. Another detailed article on both models is Storing Hierarchical Data in a Database.

With the adjacency list model you have a table which gives each record a unique ID number, and then you have a "Parent ID" field where you record the ID number of the record's parent in the hierarchy tree.  If you were doing foods, you would have a record for "apples" and the parent ID for that record would be "fruits."  Then when you need to generate an output of the whole hierarchy tree you do SQL that organizes all the records by connecting the various parent ID values.  Some articles I found on this approach include Tree Drawing with the Adjacency List Model  and Hierarchical SQL.  The thing I didn't like about the adjacency list model was that the SQL required to generate a simple display of the data seemed unduly complicated (lots of recursion) and likely to consume a lot of database server processing power.

I never did really grasp the nested set model, so I won't try to explain it. Suffice it to say that it also involved very complicated SQL that seemed likely to put a big drain on a server.

Then I found an article about a variation on the adjacency list model: More Trees and Hierarchies in SQL.  The author's basic idea was to include a Lineage field in each record where you record the ID numbers of the full path to the record in the hierarchy with each ID number separated by a delimiter character.  For example, if you had a product tree with the iPod in it, the Parent ID field would be "MP3 Players" and the Lineage field would be "Electronics/Audio Equipment/MP3 Players."  This makes the SQL to output the basic tree much easier: you just do a simple SELECT and ORDER BY the Lineage field and you are done.

So here is the structure I ended up with using the lineage field model.  For a hierarchy like this:

  • Plants
    • Fruits
      • Apples
      • Bananas
      • Peaches
    • Vegetables
      • Broccoli
      • Asparagus
  • Animals
    • Poultry
      • Chicken
      • Turkey
    • Meat
      • Beef
      • Pork
The table structure would be like this (except you would use ID numbers instead of the full name of each item as the ID):

Node        Lineage
Plants      /
Animals     /
Fruits      /Plants/
Vegetables  /Plants/
Poultry     /Animals/
Meat        /Animals/
Apples      /Plants/Fruits/
Bananas     /Plants/Fruits/
Peaches     /Plants/Fruits/
Chicken     /Animals/Poultry/
Turkey      /Animals/Poultry/
Beef        /Animals/Meat/
Pork        /Animals/Meat/
Broccoli    /Plants/Vegetables/
Asparagus   /Plants/Vegetables/

Once you put your hierarchical data into a structure like this then working with it is easy-peasy. Want to display the whole tree?

SELECT * FROM Foods ORDER BY concat(Lineage, Node)

This will output the data organized by groups and subgroups. Want to see just the items in the group Plants, organized by subgroups?

SELECT * FROM Foods WHERE concat( Lineage, Node ) LIKE '/Plants%' ORDER BY concat( Lineage, Node )

This will return just the Plants, grouped by Fruits and then by Vegetables.

Want to indent each item in your output based on its depth in the hierarchy? In your PHP code just count the number of slashes in the Lineage field using substr_count() for each record and set the indent accordingly.
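
For example, here is a minimal sketch of that idea, assuming each row comes back with Node and Lineage columns like the table above (the variable names and the 20px-per-level indent are just illustrative):

<?php
// $rows is assumed to hold the result of the SELECT above, one associative
// array per record with 'Node' and 'Lineage' keys.
foreach ($rows as $row) {
    $depth  = substr_count($row['Lineage'], '/') - 1;  // "/Plants/Fruits/" has 3 slashes -> depth 2
    $indent = $depth * 20;                             // 20 pixels of indent per level
    echo '<div style="margin-left: ' . $indent . 'px">' . htmlspecialchars($row['Node']) . "</div>\n";
}
?>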

I know this approach seems too simple but I have been working with it for a while in a web application and I haven't run into any dead ends yet, and figuring out the SQL for various tasks has been very easy.

2010-10-28

How to make jQuery UI buttons shorter

I have been using jQuery and jQuery UI for a while now and they make it really easy to add fancy features and layouts to a web app. However, one thing I didn't like was that the default appearance of jQuery UI buttons is too tall for my taste. The jQuery buttons have lots of space above and below the button text, making them look square and blocky to me.

After playing around with it for a while I figured out how to adjust the height of jQuery UI buttons. Simply add the following to your CSS style sheet:

.ui-button-text-only .ui-button-text
{ padding-top: .1em;
padding-left: 1em;
padding-right: 1em;
padding-bottom: .2em; }

Just tweak the top and bottom padding until you like the way the buttons look. I had to make my bottom padding bigger than my top padding to make the button text look vertically centered.

I also had some buttons with little GIF images instead of text. jQuery had the images jammed up against the top of the button by default. To center the images I added this to my CSS stylesheet:

img {margin-bottom: -.25em;}

I could get away with just changing my img tag styles because these little GIFs were the only img tags on the whole page. If you have other img tags on your page you would need to give the button img tags their own class and apply this style to just that class.
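
For example, something along these lines (the class name is made up) would limit the fix to just the button icons:

img.button-icon {margin-bottom: -.25em;}

and then each of those little GIFs would get class="button-icon" in its img tag.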

2010-10-26

Google Chrome is particular about syntax of script tags

I was tearing my hair out because my jQuery UI code was working properly with Firefox and Safari but not with Google Chrome. When I looked at the Javascript console in Google Chrome I was getting an error like this:

Uncaught TypeError: Object # has no method 'datepicker'

Because it was working in Safari and Firefox but not in Google Chrome, and because the error message sounded like some problem in the jQuery code and/or my other Javascript, I spent hours trying to troubleshoot the problem. However, in the end the problem turned out to be with my script tags that reference the jQuery libraries:

<script src="/jquery-ui-1.8.5.custom/js/jquery-1.4.2.min.js" type="text/javascript"</script>

<script src="/jquery-ui-1.8.5.custom/js/jquery-ui-1.8.5.custom.min.js" type="text/javascript"</script>


Do you see the error in my syntax? I left off the right angle bracket after the type="text/javascript". For some reason Firefox and Safari don't care about that, but Google Chrome does. To make it even odder, Google Chrome didn't give me an error about problems loading a script, but instead an error about the execution of the script.

2010-10-22

Resources about designing web content for iOS devices (iPhone, iPod, iPad)

Apple has a thorough guide to developing web content for Safari on the iPhone; presumably most of it also applies to the iPad:

[iOS] Safari Web Content Guide

I haven't read all of this article yet but the title is promising:

The iPad Web Design and Development Toolbox

How to turn off php magic quotes on nearlyfreespeech.net

I recently set up a web app hosted on nearlyfreespeech.net that uses mysql_real_escape_string to escape user input. I noticed the other day that an entry in that web app displayed an escaped single quote, which led me to suspect that nearlyfreespeech.net has PHP magic quotes turned on and my user input was being double escaped, which turned out to be correct. Here is how I turned off magic quotes for my web app on nearlyfreespeech.net:

  • I connected to my nearlyfreespeech.net account via SSH.
  • I navigated to the public directory of my account (where the html and php files are).
  • At the command prompt I ran nano .htaccess to create a .htaccess file and open it in nano for editing
  • I put this in the .htaccess file: php_flag magic_quotes_gpc off
  • I saved the file, and voila, magic quotes were off!

2010-10-21

How to specify different CSS rules based on device screen size (i.e. for the iPad)

I was working on a web app where I wanted the appearance to be very different on my iPad than it was on my laptop. After some research I found this article:

Detecting device size & orientation in CSS

It describes how you can include code in a CSS stylesheet that will apply designated rules only if the device screen (or browser viewing area) meets certain criteria. For example (from the article):

@media only screen and (max-width: 999px) { /* rules that only apply for canvases narrower than 1000px */}

@media only screen and (device-width: 768px) and (orientation: landscape) { /* rules for iPad in landscape orientation */}

@media only screen and (min-device-width: 320px) and (max-device-width: 480px) { /* iPhone, Android rules here */}


CSS put inside the braces in these examples will only apply if the specified criteria are met. Also, the CSS rules apparently work on the fly, so if the user resizes their browser window the appropriate rules will be applied without a page refresh.

I also found a reference article on these "media queries" for Firefox. Much of what is in this article applies to other browsers since I believe it is derived from the CSS 3 specification.

Media Queries

Note that this article only uses these media queries to select an alternate style sheet, but as far as I can tell you can also use them to apply a block of rules enclosed in curly brackets after the query, as shown in the example above and in the first article.

There is also an Apple article that includes a section on conditional CSS:

Optimizing Web Content

After doing some experimenting, it appears that once a CSS property is specified, the only way to remove it using a media query is to explicitly specify a new value for the property. In other words, if you specify a bunch of properties for the ".main" class in your style sheet, and then in your media query block just specify:

@media only screen and (max-width: 999px) { .main {;}}

Then all the rules you specified for .main are still going to apply even when the media criteria are met, because they were not individually countermanded.
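
For example, if the main stylesheet gives .main a fixed width, you have to explicitly override it inside the media query block (the selector and values here are just illustrative):

.main {width: 800px;} /* main stylesheet */

@media only screen and (max-width: 999px) { .main {width: auto;} } /* countermands the fixed width on narrow canvases */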

I experimented with different media queries to distinguish between my iPad and my laptop, and in the end this seemed to work so that the rules were only applied on my iPad, regardless of the iPad's orientation:

@media only screen and (max-device-width: 1024px)
{/*max-device-width: 1024px seems to only select the iPad*/}

2010-10-20

Escaping square brackets in SQL Server queries from PHP

I have a PHP web app that pulls data from an SQL Server database using the ODBC functions. Earlier I came up with a PHP function to escape single quotes in user input by doubling each single quote (a single quote is the character you use to escape a single quote in SQL Server) as a defense against (inadvertent) SQL injection (the web app is behind a firewall).

The other day I accidentally discovered that including text inside a pair of square brackets [like this] in user input that was added to a LIKE clause of a WHERE clause resulted in a huge data dump being returned by SQL Server. I did some Googling and discovered that square brackets have special meaning in SQL Server: in a LIKE pattern they act as a wildcard that matches any single character in the enclosed set. I first tried updating my PHP function to escape square brackets with single quotes, but that didn't work for some reason. Then I did some more research and discovered that the way to escape a square bracket in SQL Server is to enclose it in square brackets like this [[]. So I updated my PHP function to do this on user input and it worked to stop the data dumps when a user included something like [fred] in their input.

Writing the PHP to do this was tricky because if you just do a straight str_replace on each square bracket, the second replace clobbers some of the brackets added by the first replace and messes it all up. The way I solved this was to write my function to:

  • first replace the left square bracket with an arbitrary three character string that is unlikely to be in user input,
  • then replace the right square bracket with []],
  • then do a third replace of my arbitrary three character string with [[].
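
Here is a rough sketch of that function in PHP (the function name and the placeholder string are my own, not what the actual app uses):

<?php
function escape_brackets_for_sqlserver($input)
{
    $placeholder = '~L~';                               // arbitrary string unlikely to appear in user input
    $input = str_replace('[', $placeholder, $input);    // step 1: stash the left brackets
    $input = str_replace(']', '[]]', $input);           // step 2: escape the right brackets
    $input = str_replace($placeholder, '[[]', $input);  // step 3: turn the stashed left brackets into [[]
    return $input;
}

echo escape_brackets_for_sqlserver('[fred]');  // prints [[]fred[]]
?>
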
And yes, I know I should be using stored procedures, and that hackers can get past any escaping routine, but this app is behind a firewall and I am only concerned about accidental SQL injection.

2010-10-03

SQL and sequences

I am working on an application that uses a sequence field to keep records in a specific order.

The first challenge was how to fill in a numbered sequence in the new sequence field without using auto-increment (the data table will contain a number of independent sequences for different sub-sets of records, so I need to be able to maintain multiple sequences in the same table).  I found the answer in this blog post:

How to Sequence each Sub-set of Records by David Soussan

I won't repeat that post here, but here is the SQL that I ended up with based on Mr. Soussan's technique.

This first SQL query creates a Temp table with Prov_ID and sequence number.

CREATE TABLE Temp
SELECT
    t1.Prov_ID, COUNT(t1.Prov_Sort_Num) AS sequence,
    t1.Prov_Sort_Num >= t2.Prov_Sort_Num AS flg
FROM
    tbl_Provisions AS t1
INNER JOIN
    tbl_Provisions AS t2 ON t1.Doc_ID = t2.Doc_ID
WHERE
    t1.Doc_ID = 1
GROUP BY
    t1.Doc_ID,
    t1.Prov_Sort_Num, flg
HAVING
    flg = TRUE
ORDER BY
    t1.Prov_Sort_Num

Then this second SQL query updates tbl_Provisions using the sequence numbers from the newly created Temp table:

UPDATE
    tbl_Provisions AS t1
JOIN
    Temp AS t2 ON t1.Prov_ID = t2.Prov_ID
SET t1.Prov_Sequence = t2.sequence

The next issue was how to make sure that gaps and duplicates didn't end up in the sequence for each sub-set of records.  I found the solution to the issue of detecting gaps in this blog post:

Sequence gaps in MySQL by Sameer

I still don't fully understand how Sameer's SQL works, but it does indeed work reliably.  Here is the SQL I ended up with based on Sameer's technique:

SELECT
    a.Prov_Sequence + 1 AS start,
    MIN(b.Prov_Sequence) - 1 AS end
FROM
    tbl_Provisions AS a,
    tbl_Provisions AS b
WHERE
    a.Prov_Sequence < b.Prov_Sequence
   AND
      a.Doc_ID=$Doc_ID
   AND
      b.DOC_ID=$Doc_ID
GROUP BY
   a.Prov_Sequence
HAVING
   start < MIN(b.Prov_Sequence)

This returns a two-column table, with each row giving the beginning of a gap in the start column and the end of that gap listed in the end column.

2010-08-07

PDFsam works for merging scans of double sided documents

Occasionally I need to scan a double-sided document. For short documents (1-4 pages) I just scan the fronts of the pages and then the backs, and then manually put them together using Preview in Mac OS X.  However, for longer documents the free PDF Split and Merge (PDFsam) quickly merges these types of scans:

http://www.pdfsam.org/

2010-05-31

How to write a script to reliably mount Truecrypt volumes on an external drive in Ubuntu

I have had an annoying problem for years.  I have two external hard drives hooked up to my Ubuntu server that are encrypted with Truecrypt.  In order to mount these Truecrypt volumes on the correct mount points after a reboot I had to figure out which external drive came up as /dev/sdb and which as /dev/sdc, since Ubuntu doesn't necessarily assign the same device name to the same external drive every time the system is restarted.  For a long time I deduced which drive was /dev/sdb and which was /dev/sdc by running sudo fdisk -l, which shows the device names and sizes of all attached drives.  However, I recently found directions written by Ubuntu Forums user B-Con on how to make a script to reliably mount each Truecrypt volume in the right place regardless of which /dev/sd* it gets:

http://ubuntuforums.org/showthread.php?t=468664

Basically, the script uses ls -l /dev/disk/by-id to get a listing of all of the attached drives that includes each drive's unique id code and the device name it points to, then it uses grep and cut to extract the device name and put it in a variable, which can then be used with the Truecrypt mount command. In the example below you will have to change the part in quotes to the unique id code of your drive.

#! /bin/bash

my_var=`ls -l /dev/disk/by-id | grep "scsi-1ATA_Maxtor_7H500F0_H81E4GAH-part1" | cut -d / -f 3`
vol_path="/dev/${my_var}"
# Use vol_path as the path for your drive, now.
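# For example (assuming Truecrypt's console syntax of "truecrypt <volume> <mount dir>"
# and the /media/encrypted1 mount point used above -- adjust both for your setup):
truecrypt "${vol_path}" /media/encrypted1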

2010-04-30

Mac OS X Shared Folders

I have had a lot of trouble getting Mac OS X "Shared Folders" to behave the way I want.  This page is for notes on things I have figured out.

Issue: Users connecting to a Shared Folder as Guest cannot even browse subfolders of the shared folder.

Resolution: It turns out that in order for someone connected to a Shared Folder as Guest to be able to browse a subfolder the subfolder has to have the eXecute permission set for Others, i.e. when you run ls -l on the shared folder the subfolder permissions should look like this:

drwx---rwx 11 ownersname groupname 12 Dec 30 06:19 My Subfolder

with the important one being the last x.  If the last permission is not x, then non-authenticated users will not be able to even browse the subfolder.

In order to set all permissions on all subfolders to allow Guest access I ran this from Terminal:

chmod -R o=rwx /SharedFolderPath

This recursively goes through every subfolder and file under /SharedFolderPath and sets permissions for "others" (aka everyone) to read, write, and execute. I am sure there are all kinds of security issues with this, but this is a machine behind a firewall on a home network so I think I don't care.

2010-04-11

Lessons learned from doing my own 2009 income taxes on Mac OS X

IRS PDF forms are "fillable" meaning you fill in values on screen and then save the form with the values you entered.

The IRS PDF forms do not format completely correctly if you use Mac OS X Preview (for example social security numbers are all bunched together instead of being spread out like the form intends), so the best bet is to just use Adobe Reader.

Once you open an IRS PDF form in Preview and then save it you can no longer open it in Adobe Reader in fillable format; it will tell you that you cannot save your changes.  So it is best to just use Adobe Reader from the outset on IRS PDF forms.

The Arizona PDF 140 form has its permissions set so that you cannot save the form with the values you entered.  However, if you open this form using the Foxit Reader instead of Adobe Reader you can save the form with your input.

Unfortunately, the Foxit Reader cannot print the saved form with the barcode on the front page which encodes all your input.

Fortunately, you can use Adobe Reader to open a form with saved values that you saved using Foxit Reader and then print it with the barcode on the front page.




2010-03-23

Making CDs of recordings using a Zoom H2 and Windows


Download and install Audacity (open source software for editing audio files) from http://audacity.sourceforge.net/

Download and install InfraRecorder (open source software for burning CDs) from http://sourceforge.net/projects/infrarecorder/

Record something using the Zoom H2 using MP3 recording format.

Connect the Zoom H2 to the Windows computer using USB cord and then put the Zoom H2 in Storage mode: Menu -> USB -> Storage.

Copy the MP3 file from the Zoom H2 to the Windows computer.

Use "Safely Remove Hardware" icon on Windows taskbar to dismount Zoom H2, disconnect USB cable, and exit USB mode by hitting Menu key on Zoom H2.

Start Audacity and then use File -> Open to open the MP3 file.

Edit the MP3 file as needed, and then export it to WAV file: File -> Export As WAV

Insert blank CD-R in computer

Start InfraRecorder

Click Audio Disc.

In the Explorer View window select the WAV file (one click) and then click the blue plus sign above the window.  This should make the WAV file show up in the bottom window. Or you can just drag the WAV file from the top window to the bottom window.

Drag as many WAV files as you want on the CD to the bottom window, and put them in the order you want them to be on the CD.

When ready to burn CD, click Action -> Burn Compilation then select the lowest write speed and click OK.




2010-03-21

Installing Mythbuntu backend on Virtualbox virtual machine running on Mac OS X

This is just a rough outline of steps, not a how to!

Install Sun Virtualbox on Mac.

Download ISO of Mythbuntu

Create a new Linux virtual machine in VirtualBox. I used 8 GB as the disk size (figuring I would store recordings on an external drive) and 500 MB RAM (no particular reason).

Change settings on the virtual machine's Details list:

  • Click on Storage then under the CD-ROM entry browse to and select the Mythbuntu ISO.
  • Click on Network and then select bridged as the network type instead of NAT (I actually did this after the Mythbuntu install was finished, so I am not sure if there would be problems selecting it initially).
Boot up the virtual machine and go through the Mythbuntu basic install process. I chose to install both a frontend and a backend even though I am not planning to watch recordings on the virtual machine; I did this because I believe some settings are only accessible through the frontend. Other than that I think I just accepted all the defaults.

When it got to the part of the process when the Mythbuntu backend setup program ran I escaped out of it to get to the desktop because I wanted to make some changes to the system before I set up the backend.

Set the virtual machine to use a static IP address:
  • Right click on the network manager icon on the right side of the virtual machine menu bar and select Edit Connections.
  • Select Auto eth0 under Wired
  • Click Edit
  • Select IPv4 Settings
  • Enter values in all fields; Apply button will grey out unless all required fields are completed.
  • Click Apply
Set up the virtual machine to use an external hard drive (for storing recordings).
  • Shut down virtual machine.
  • Go to Details list for VM and click on Shared Folders
  • Browse to external drive and designate it as a shared folder.
  • Install VirtualBox Guest additions on the VM by:
  • Restart VM
  • Click on the Window for the VM to make it active, and then on the Mac menubar click on Devices and then select Install Guest Additions (if the VirtualBox program window is the active window you can't see the Devices menu). This causes a virtual CD to be mounted to the VM.
  • Follow the instructions in How to install VirtualBox Guest Additions in Ubuntu with the following change: run the installer using "sudo ./VBoxLinuxAdditions-x86.run", otherwise you will get a command not found error.
  • Reboot VM.
  • Test mounting external hard drive by making mount point and then running this:
  • sudo mount -t vboxsf SharedFolderName /MountPointPath where SharedFolderName is the name you gave to the shared folder in VirtualBox and /MountPointPath is the path to the mount point you created.
  • However, if you mount the shared folder this way it will be mounted as owner root, group root and read only.
  • Dismounting the shared folder and setting the ownership and permissions on the mount point doesn't work to change this because the next time you mount the shared folder the ownership, group, and permissions will revert.
  • This is the mount command that finally made it possible to write to the shared folder: sudo mount -t vboxsf -o uid=andy,gid=users SharedFolderName /MountPointPath/
  • Once I got that working I added it to the /etc/fstab file so that the shared folder would mount on boot, by using sudo nano /etc/fstab to open the fstab file in a text editor and then adding this line to the bottom of the file: SharedFolderName /MountPointPath vboxsf gid=105,uid=103
  • The uid and gid are for the user mythtv and the group mythtv. I tried mounting it with uid=1000 and gid=1000 but mythbackend was not able to write recording files to it, so I just made mythtv the owner of the mount.
Once I got the shared folder working properly I finally ran Myth Backend Setup (Applications -> System -> Myth Backend Setup). Here are the fields that I set differently than the default:
  • General-Host Address Backend Setup-Local Backend-IP Address = static IP address of VM
  • General-Host Address Backend Setup-Local Backend-Security Pin = 0000
  • General-Host Address Backend Setup-Master Backend-IP Address = same
  • General-Locale Settings-Channel frequency table = us-cable (selected)
  • Capture Card Setup-Card type = HDHomeRun DTV tuner box (selected)
  • Capture Card Setup-Available Devices = 1019A9DB-0 (selected)
  • On Input Connections I scanned channels using Cable High.
  • After the channel scan was done and I added all the ATSC channels I used the Channel Editor to manually fill in the XMLTV ID field for each high def channel. For some reason SchedulesDirect.org doesn't link up to the high def channels correctly, so I just fill in the XMLid values for the broadcast versions of the channels, which I looked up on SchedulesDirect.org
  • Then I went to the Storage Groups section, clicked on default, and then filled in the mount point of my shared folder as the default storage location.
Unfortunately, the shared folder approach was too slow, and when I tried to play back recordings stored on the shared folder the video kept halting and stuttering. So it was on to plan B.

I shut down the VM and then in VirtualBox I disconnected the shared folder from the VM by clicking on Details - Shared Folders and then removing the shared folder.

Next in VirtualBox I created a new virtual disk on my external hard drive and made it as big as I could. I picked expandable, instead of fixed, because when I tried fixed it said it would take 20 hours to format the virtual disk!

Then in VirtualBox I attached the new virtual disk to my VM and then started the VM up. From there I used regular Linux commands to partition and format the new virtual disk, and then I added it to the /etc/fstab file so it would mount on boot to the same mount point as I had previously used for the shared folder (and obviously I removed the old /etc/fstab entry for the shared folder while I was in there).

Then I did a sudo mount -a to mount the new virtual drive and tested it by recording a show in MythTV, and everything worked great.


    mjpg-streamer documentation

    The documentation for mjpg-streamer is kind of scattered about.  I have dumped it all here for my own convenience.

    Usage introduction

    This example shows how to invoke mjpg-streamer from the command line

    export LD_LIBRARY_PATH="$(pwd)"
    ./mjpg_streamer -o "output_http.so -w ./www"

    pwd echoes the path you are currently working in; the backticks open a subshell to execute the pwd command first. The exported variable tells the dynamic loader (dlopen()) to search that folder for the *.so modules:

    export LD_LIBRARY_PATH=`pwd`

    This is the minimum command line to start mjpg-streamer with the web pages; default parameters are used for the input plugin:

    ./mjpg_streamer -o "output_http.so -w `pwd`/www"

    To query help for the core:
    ./mjpg_streamer --help

    To query help for the input-plugin "input_uvc.so":
    ./mjpg_streamer --input "input_uvc.so --help"

    To query help for the output-plugin "output_file.so":

    ./mjpg_streamer --output "output_file.so --help"

    To query help for the output-plugin "output_http.so":
    ./mjpg_streamer --output "output_http.so --help"

    To specify a certain device, frame rate, and resolution for the input plugin:

     ./mjpg_streamer -i "input_uvc.so -d /dev/video2 -r 320x240 -f 10"

    To start the HTTP output plugin and also write to files every 15 seconds:

    mkdir pics
    ./mjpg_streamer -o "output_http.so -w `pwd`/www" -o "output_file.so -f pics -d 15000"

    To protect the webserver with a username and password (!! can easily get sniffed and decoded, it is just base64 encoded !!)

    ./mjpg_streamer -o "output_http.so -w ./www -c UsErNaMe:SeCrEt"

    If you want to track down errors, use this simple testpicture plugin as input source. To use the testpicture input plugin instead of a webcam or folder:

    ./mjpg_streamer -i "./input_testpicture.so -r 320x240 -d 500" -o "./output_http.so -w www"

    Usage: mjpg_streamer

    mjpg_streamer
      -i | --input "<inputplugin.so> [parameters]"
      -o | --output "<outputplugin.so> [parameters]"
     [-h | --help ]........: display this help
     [-v | --version ].....: display version information
     [-b | --background]...: fork to the background, daemon mode

    Note: If you start mjpg-streamer in the background use this to stop it:
    kill -9 `pidof mjpg_streamer`

    Example #1:
     To open an UVC webcam "/dev/video1" and stream it via HTTP:
      mjpg_streamer -i "input_uvc.so d /dev/video1" -o "output_http.so"

    Example #2:
     To open an UVC webcam and stream via HTTP port 8090:
      mjpg_streamer -i "input_uvc.so" -o "output_http.so -p 8090"

    Example #3:
     To get help for a certain input plugin:
    mjpg_streamer -i "input_uvc.so help"

    In case the modules (=plugins) can not be found:
     * Set the default search path for the modules with:
       export LD_LIBRARY_PATH=/path/to/plugins,
     * or put the plugins into the "/lib/" or "/usr/lib" folder,
     * or instead of just providing the plugin file name, use a complete
       path and filename:
       mjpg_streamer i "/path/to/modules/input_uvc.so"

    Parameters for input_uvc.so

    This is the output from:
      #mjpg_streamer --input "input_uvc.so --help"

    The following parameters can be passed to this plugin:

     [-d | --device ].......: video device to open (your camera)
     [-r | --resolution ]...: the resolution of the video device,
                              can be one of the following strings:
                              QSIF QCIF CGA QVGA CIF VGA
                              SVGA XGA SXGA
                              or a custom value like the following
                              example: 640x480
     [-f | --fps ]..........: frames per second
     [-y | --yuv ]..........: enable YUYV format and disable MJPEG mode
     [-q | --quality ]......: JPEG compression quality in percent
                              (activates YUYV format, disables MJPEG)
     [-m | --minimum_size ].: drop frames smaller than this limit, useful
                              if the webcam produces small-sized garbage frames
                              that may happen under low light conditions
     [-n | --no_dynctrl ]...: do not initialize dynctrls of Linux-UVC driver
     [-l | --led ]..........: switch the LED "on", "off", let it "blink" or leave
                              it up to the driver using the value "auto"


    NSLU2 + Debian Lenny + wireless + webcam

    My webcam is an HP KQ246AA 8.0MP Deluxe Webcam.
    First I installed Debian on my NSLU2 following this guide:

    http://www.cyrius.com/debian/nslu2/install.html

    Then I connected my HP Webcam (purchased because it is supposed to be UVC compatible) and ran dmesg | tail and it looked like it was recognized.

    I tried installing mjpg-streamer using aptitude, but it said there was no package by the name in the repositories.

    I then downloaded the tar.gz file from the mjpg-streamer Sourceforge site:

    http://sourceforge.net/projects/mjpg-streamer/files/

    I had no idea what to do with it, so I double-clicked on it on my Mac OS X desktop, which uncompressed it to a directory.  Inside that directory was a file called README which had the following instructions:

    To compile and start the tool:
    tar xzvf mjpg-streamer.tgz
    cd mjpg-streamer
    make clean all
    export LD_LIBRARY_PATH=.
    ./mjpg_streamer -o "output_http.so -w ./www"
    So I used Macfusion to connect my Macbook to the NSLU2 as root and copied the tar.gz file over to the NSLU2.  Then I ran the tar xzvf command, which created a directory called mjpg-streamer-r63.  I went into that directory and ran make clean all, but that gave me a command not found error.

    So I used aptitude to install the make package, and tried again.  This time it ran for a bit and then gave me an error that gcc wasn't found.  So I used aptitude to install the gcc package, and tried again.

    This time I got a bunch of errors related to jpeg_utils. Looking at the README file I saw this:
    "In case of error: the input plugin "input_uvc.so" depends on libjpeg, make sure it is installed."
    So I tried aptitude install libjpeg, but that gave me an error saying there was no such package, though it listed the names of some packages with similar names.  I took a chance and installed libjpeg-dev, figuring that sounded the most promising. Aptitude then said it was installing a different package with 62 in the name.

    After the 62 variation of libjpeg was installed I ran "make clean all" again and it seemed to finish without any fatal errors, but with lots of warnings.

    However when I tried to run it I got this error:
    ./mjpg_streamer -o "output_http.so -w ./www"
    MJPG Streamer Version.: 2.0
     i: Using V4L2 device.: /dev/video0
     i: Desired Resolution: 640 x 480
     i: Frames Per Second.: 5
     i: Format............: MJPEG
     o: www-folder-path...: ./www/
     o: HTTP TCP port.....: 8080
     o: username:password.: disabled
     o: commands..........: enabled
    Unable to start capture: Cannot allocate memory
     i: Error grabbing frames
    I figured this meant there wasn't enough memory so I went to this web page I had found about Debian on the NSLU2:

    http://www.sunspot.co.uk/Projects/SWEEX/slug/notes/Debian_notes.html

    And tried his recommendation:
    "If you are not using IPv6, you can prevent the module from being automatically loaded by adding the line
    blacklist ipv6
    to /etc/modprobe.d/blacklist."
    And then rebooted using "shutdown -r now"

    Then by running the export command again, and going into the mjpg-streamer directory I was able to get it to work; pointing firefox at port 8080 of the IP address of the NSLU2 worked.

    Then I decided to try and install mjpg-streamer in the system itself so I wouldn't have to run it from the folder where I made it, and have to run the export library thing every time.  But I had no idea where to put the files.  So I downloaded the Ubuntu deb package from the mjpg-streamer Sourceforge site, and ran dpkg-deb -c (I think) to get a list of the tree structure of the files, and then I copied the files from my install folder to those folders on my system (I had to make one new folder /usr/www).

    Then I was able to run the program from any prompt just using mjpg_streamer without any ./ etc., but it kept giving me "Unable to start capture: Cannot allocate memory" errors and exiting.  Interestingly, I was still able to run it from the install folder putting ./ before the name, at least once, but then even that didn't work. So back to trying to open up more memory.

    I went through the ideas listed here:

    http://www.cyrius.com/debian/nslu2/reducing-memory.html

    I couldn't find any libdevmapper1.02 in /etc/init.d so I couldn't do that idea.

    Then I started reviewing kernel modules shown by lsmod.  The first one, evdev, appeared to just be used for managing input devices for X.org, so I went to the blacklist file:
    nano /etc/modprobe.d/blacklist
    and added "blacklist evdev"  (no quotes) to the end of the file and rebooted. evdev no longer appears when I run lsmod.

    Next I tried blacklisting both the usbhid and hid modules since they seem to be for mice. The Slug still boots.

    It seems that the ixp4xx_beeper module is for making the NSLU2 beep. I guess I won't blacklist it yet.

    Then I moved on to reducing the services (daemons?) loaded when the system starts.  This page mentioned a package called rcconf to change the services loaded.  It is apparently a utility that shows you what is loaded and allows you to specify what gets loaded and what doesn't.  So I installed it using aptitude.

    Using rcconf I turned off the following:
    • exim4 (apparently for handling email)
    • mountnfs.sh (I am guessing it's for the NFS file system)
    • nfs-common (same guess)

    After all that, now I am able to run mjpg-streamer from any command prompt:
    mjpg_streamer -o "output_http.so -w /usr/www"
    Note that all the mjpg_streamer commands below are for the way I manually installed it on my system.  To get them to work on a machine where mjpg_streamer wasn't manually installed you will have to modify the command to match the syntax in the original instructions.

    Next issue: the streamer stops working if you end the terminal session where you launched it.  So time to try this idea from this web page:

    "there is an option in mjpg-streamer to launch it in background, use it as a daemon:
    [-b | --background]...: fork to the background, daemon mode"
    So this lets me keep using the ttyS0 terminal -
    mjpg_streamer -i "input_uvc.so -r 960x720" -b -o "output_http.so -p 8080 -w /webcam_www"
    Note:- to make mjpg-streamer exit cleanly, use -
    kill -9 `pidof mjpg_streamer`"
    It works! After I launched mjpg-streamer with the -b option it continues running after closing the SSH terminal session.
    kill -9 `pidof mjpg_streamer`

    Seems to shut it down successfully.

    Now to explore options. Here is the output from mjpg_streamer --help
    Usage: mjpg_streamer
      -i | --input " [parameters]"
      -o | --output " [parameters]"
     [-h | --help ]........: display this help
     [-v | --version ].....: display version information
     [-b | --background]...: fork to the background, daemon mode
    -----------------------------------------------------------------------
    Example #1:
     To open an UVC webcam "/dev/video1" and stream it via HTTP:
      mjpg_streamer -i "input_uvc.so -d /dev/video1" -o "output_http.so"
    -----------------------------------------------------------------------
    Example #2:
     To open an UVC webcam and stream via HTTP port 8090:
      mjpg_streamer -i "input_uvc.so" -o "output_http.so -p 8090"
    -----------------------------------------------------------------------
    Example #3:
     To get help for a certain input plugin:
      mjpg_streamer -i "input_uvc.so --help"
    -----------------------------------------------------------------------
    In case the modules (=plugins) can not be found:
     * Set the default search path for the modules with:
       export LD_LIBRARY_PATH=/path/to/plugins,
     * or put the plugins into the "/lib/" or "/usr/lib" folder,
     * or instead of just providing the plugin file name, use a complete
       path and filename:
       mjpg_streamer -i "/path/to/modules/input_uvc.so"
    I am having a heck of a time finding any documentation on the syntax.  I found this forum post:

    mjpg_streamer -i "input_uvc.so -r 320x240 -f 6" -o "output_http.so -p 8080"  -b

    Which I modified to:

    mjpg_streamer -i "input_uvc.so -r 320x240 -f 6" -o "output_http.so -w /usr/www" -b

    That seemed to work to reduce the resolution, but how to increase it?

    After looking at the webcam specs I tried:

    mjpg_streamer -i "input_uvc.so -r 800x600 -f 20" -o "output_http.so -w /usr/www" -b

    But it still gave me a 640x480 image. I am giving up on trying to get a higher resolution for now.

    On to wireless.  I have a Belkin USB wifi adapter that uses the rt73usb driver which is built into Debian Lenny.

    I followed the directions here on how to set up a rt73 wireless device on Debian Lenny:


    http://wiki.debian.org/WiFi/rt73

    Everything seemed to go smoothly.

    I had a lot of trouble getting the wifi to connect to my network, but I think most of it was because I was trying to use the wrong wpa2 password (i.e. psk).  After I used the right wpa psk it worked using the following /etc/network/interfaces file:

    auto wlan0
    iface wlan0 inet dhcp
            wpa-ssid Chacala
            wpa-psk mywirelesspassword

    That is, I was able to get it working while the ethernet cable was also plugged in.   I could connect SSH to the IP address assigned to the wireless connection while the ethernet cable was plugged in, but once I unplugged ethernet the wireless connection dropped.  I did some googling and discovered this hint on this page:

    "If everything looks like it's working, don't just unplug your ethernet. use "ifconfig eth0 down" beforehand, otherwise you'll loose both connections."

    Once I followed that hint the wireless link stayed alive after I disconnected the ethernet.

    So next I tried rebooting with ethernet unplugged and just the wifi adapter.  The Slug appeared to boot, and the LED on the wireless was flashing, but I couldn't ping or SSH to the Slug.  So I connected the ethernet cable again, but I still wasn't able to ping or SSH to the Slug.  So I rebooted with both the wifi adapter and ethernet connected, and after reboot I was able to ping and ssh to both wifi and ethernet IP addresses.

    Next I tried a hint from this page:

    "Actually there's a strange behavior of networkingon NSLU, when i boot without the ethernet cable plugged, in the wlan interface is also not working.
    When i replug the ethernet cable, the wireless network also comes up. When i ifconfig eth0 down and unplug the ethernet cable the wlan does still work.
    Strange, after some meditation on the interfaces file I noticed the commented line # The primary network interface and had the idea to just change the sections for the interfaces, so lets wlan0 be my primary interface and eth0 an additional one."

    i.e. he swapped the order of eth0 and wlan0 in /etc/network/interfaces to have wlan0 come first.  I changed the order in my interfaces file and rebooted.

    It worked! After reboot I was able to connect to the Slug through the wireless connection with no ethernet cable attached.  Here is the final /etc/network/interfaces file that worked:

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto wlan0
    iface wlan0 inet dhcp
            wpa-ssid mywirelessnetworkname
            wpa-psk mywirelesspassword

    allow-hotplug eth0
    iface eth0 inet dhcp

    Now onto having the camera and the USB wireless adapter connected at the same time using a hub.

    I shut down the Slug, and then connected both the webcam and the wireless adapter using a cheap USB hub I had purchased from Walmart years ago on a trip.

    After reboot I was able to connect to the Slug via wireless

    Next I tried running the camera:

    mjpg_streamer -o "output_http.so -w /usr/www

    It works! I now have a wireless webcam.

    Poking around looking for something else, I discovered that the following command will list all of the supported modes of the webcam:

    lsusb -v

    Looking at this it seems that the only streaming mode higher than 640 x 480 is 1280 x 1024, which I haven't tried before, so I tried it:

    mjpg_streamer -i "input_uvc.so -f 1 -r 1280x1024" -o "output_http.so -w /usr/www"
     And here is what I got:

    MJPG Streamer Version.: 2.0
     i: Using V4L2 device.: /dev/video0
     i: Desired Resolution: 1280 x 1024
     i: Frames Per Second.: 1
     i: Format............: MJPEG
     o: www-folder-path...: /usr/www/
     o: HTTP TCP port.....: 8080
     o: username:password.: disabled
     o: commands..........: enabled
    Unable to start capture: Cannot allocate memory
     i: Error grabbing frames
    The good news is that it didn't reject the resolution saying it wasn't found.  The bad news is the memory error. On to troubleshooting the memory error.  dmesg shows a log of error messages, starting off with:
    mjpg_streamer: page allocation failure
    [42951766.100000] mjpg_streamer: page allocation failure. order:5, mode:0x1
    After a lot of googling it seems that this is some flavor of running out of memory, and one possible solution is to set overcommit to 2. Apparently you do that by running this command:
    echo 2 > /proc/sys/vm/overcommit_memory
    So I did. But I still got the Cannot allocate memory error when I tried to launch the streamer! Even after rebooting.

    Then I decided to try blacklisting the audio modules from the kernel:

    nano /etc/modprobe.d/blacklist

    And then added the following line to the bottom of the file
    blacklist snd_usb_audio
    After I rebooted I checked lsmod and it looked like all the sound modules that had previously been loading were no longer loading, and free showed about 6 MB of free memory with no use of swap. But I still got a memory error trying to run mjpg-streamer!

    Then I had a brainstorm.  Free showed no swap being used the first time I ran it after boot, but it showed more swap being used, and more memory free, after I tried and failed to run mjpg-streamer, so maybe it would run now that some programs had been moved to swap?  So I tried again, and success! The Slug is now streaming 1280 x 1024 at the rate of 1 frame per second!  Here is the command I used to get the high resolution streaming:
    mjpg_streamer -i "input_uvc.so -f 1 -r 1280x1024" -o "output_http.so -w /usr/www"
     The new USB 2.0 hub I ordered (a Belkin USB 2.0 4-Port Ultra Mini Hub F5U407) arrived from Amazon.  I bought it because it was under $10 and had good ratings on Amazon.  I plugged it in, hooked the webcam and the wifi dongle up to it, and rebooted.  It hung on the first reboot, so I rebooted with nothing connected and then added the new USB hub by itself, then the webcam, and then the wifi dongle, and things worked that time.

    The picture quality from the webcam seems much better with the USB 2.0 hub.  I suspect the webcam automatically detects when it is hooked up to a USB 1.1 hub and jacks up the compression on the video stream to compensate.

    Next up: reduce wear on flash drive. I bookmarked some pages about this in the Firefox NSLU2 folder.

    2010-02-16

    Notes on using Google Maps to show hiking trails

    I have been working on using Google Maps to show the routes of various hiking trails in my area. It can be done, but it's not very straightforward. Here are cursory notes on what I learned in the process (i.e. this is not a detailed how-to).

    Initially I used Google "My Maps." Here are the characteristics of Google "My Maps" relevant to mapping hiking trails:

    • Google "My Maps" displays the length of a path when you click on it, which is good for showing people how long a trail is.
    • You can upload a GPX track to it by the roundabout method of importing the GPX track into Google Earth and then exporting it from Google Earth as a KML file, which you can then import into Google My Maps. Or you can use the GPS Visualizer web site to convert GPX files to KML files: http://www.gpsvisualizer.com/
    • Unfortunately Google My Maps has a limit on the number of placemarks and lines it will display at one time. If you exceed this limit Google My Maps will automatically bump some of your items to a second page, so that it is impossible to see everything at once.
    • Another unfortunate characteristic of Google My Maps is that it apparently has a limit on the number of points that can be in a single line. If you exceed this limit Google My Maps will split the track into two lines of the same color. I searched for a long time and couldn't find any workaround. See: http://www.google.com/support/forum/p/maps/thread?tid=7d6964088b224b32&hl=en
    There is a workaround to avoid having your placemarks/lines broken up into multiple pages if you have a lot of them. This workaround also gets around long tracks being broken up into multiple lines. You can display a KML file using Google Maps by entering the http address of the KML file in the search box of Google Maps. When a KML file is displayed this way there is apparently no limit on the number of items that can be displayed (or at least the limit is higher). Also, when you display a KML on Google Maps using this method your long tracks/lines will not be broken up into separate lines.

    If you don't want to upload a KML file to a web server to display it you can also display all the items of a Google "My Maps" map that has too many items to fit on one page using this trick. Just right click on the View in Google Earth link in My Maps to get the http address of the KML file, and then enter that in the search field in regular Google Maps. I have seen forum postings that some people have had problems with this, so posting a KML file to a web server may be the most reliable approach.

    I decided to go with uploading my KML file to a web server and then searching for its URL in Google Maps. Once I did that I faced the choice of whether to continue to use Google My Maps to edit my KML and then export the KML file, or to use Google Earth to edit the KML. Although both Google Earth and Google My Maps are awkward to work with if you have a lot of items, I decided to go with Google Earth.

    The problem with using Google Maps to render a KML file is that you don't get distance measurements on the length of lines. I decided to work around that by uploading my KML to Google My Maps and then clicking on tracks/lines there to find out the distance. There is also this handy web site that will tell you the length of a Google Earth track/line: http://www.emaltd.net/google/gec/utilities/index.asp?l=en