Category Archives: programming

Unexpected problem with the App Store – App icons

Migrating to Xcode 5 has largely been very helpful. I’ve used many of the new features and I’m quite happy with how things are working out.

One thing caught me by surprise.

Submitting apps to the store is usually a painful experience. One of the goals of Xcode 5 is to lessen the pain, at least a bit. So, imagine my surprise when submitting my latest updates was painful! Neither of my apps would pass the verification step in the approval process. Basically, the problem was that the app icons couldn’t be found.

Huh?

I’m using Xcode’s latest feature, image assets. They are great for managing all of the icon and launch images required for an app. It was supposed to make all this painless. For example, to manage your app icon, you create an app icon image set, load in all of the images, and the rest is magic.

But, it wasn’t quite magic.

It turns out that when you build an app using image assets, Xcode modifies the app’s Info.plist file to list all of the app icons to be included in your app. However, if your project is older, like mine, you might already have icon references in your Info.plist. I suspect in most cases this won’t be a problem. In my case, however, when I looked at the Info.plist of a compiled build of the app, all of those old icon references were still there. The verification process couldn’t find those images, so the app was rejected.

Fortunately, it didn’t take TOO long to figure this out. I deleted all of the existing app icon references from my plist file, and after that things verified normally.
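
If you want to double-check for leftover references before submitting, here’s a quick sketch using Python’s plistlib (plistlib.load needs a reasonably recent Python 3; the path and the list of keys are just examples, not an exhaustive list of what Xcode writes):

import plistlib

# Look for old icon references in a project's Info.plist.
# "Info.plist" is a placeholder path; adjust it for your project.
with open("Info.plist", "rb") as f:
    info = plistlib.load(f)

for key in ("CFBundleIconFile", "CFBundleIconFiles", "CFBundleIcons"):
    if key in info:
        print(key, "->", info[key])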

Now that I knew the problem, I fixed it in my other app and the process really became much less painful.

Redesigning my OpenGL engine for Ancient Earth

In case you were wondering about updates to Ancient Earth, I’m working on them. Right now, I’m spending a good deal of time modernizing my OpenGL engine to take advantage of Apple’s updates in iOS 5 (yes, that means I’m dropping support for iOS 4). It turns out that Ancient Earth really pushes the limits of iOS, and for us to be able to add more data and map types, we need not only a more efficient rendering system but also a more flexible one.

Ancient Earth Released for iOS!

Last month, I released my iOS app “Ancient Earth: Breakup of Pangea”. Working with C.R. Scotese, we've brought his maps to the iOS platform! In this app, you can explore his continental plate reconstructions for the last 200 million years of Earth history.

You can see our home page for Ancient Earth here: Ancient Earth. Alternatively, you can simply visit our page on iTunes:
Ancient Earth: Breakup of Pangea - Thomas L. Moore

Reading SQL with PySqlite

About a year or so ago, I wrote a special script to run the FOAM climate model. The primary goal of this script, besides running the model, was to store a wide variety of information about the run, including settings, system information (like CPU temperature), and the timing and duration of the run. The script stores some of the information before the model starts and the rest after it ends. It gives me a great log of my model run and system performance history.

The drawback was the database itself. Up until today, I've been using a single database to store all of the run data. However, I've been wanting a separate database for each model.

I didn't take this approach in the first version of the script because I didn't know how to read the template SQL and insert it directly into the database. At the SQLite command line, you simply enter “.read mysqlfile.sql” or something similar. In Python, that's not possible, and PySqlite won't accept more than one SQL statement at a time. Without this ability, I couldn't automatically create a complete SQLite file with all of the required tables.

The solution turned out to be remarkably easy. The SQL file is a straightforward text file, and reading a text file into Python is simple:


data_file = open("path_to_my_SQL_", 'r')  # placeholder path to the SQL template file
theSQL = data_file.read()

Since PySqlite only handles one statement at a time, the commands need to be split into separate statements:


theStatements = theSQL.split(";")

The file can be split into discrete statements because the semicolon always marks the end of a statement.

At this point, you simply need to loop through the statements and execute each one against the SQLite database:


for statement in theStatements:
    sqlite_cursor.execute(statement + ";")

Keep in mind, you have to reattach the trailing semicolon at the end of each statement.
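
Putting it all together, a minimal sketch looks something like this, using the standard-library sqlite3 module (the file and database paths are just placeholders, not my actual script):

import sqlite3

# Placeholders: point these at your SQL template and target database.
sql_path = "create_tables.sql"
db_path = "model_runs.db"

connection = sqlite3.connect(db_path)
cursor = connection.cursor()

# Read the whole SQL template, then split it on semicolons since
# execute() only takes one statement at a time.
with open(sql_path, 'r') as data_file:
    theSQL = data_file.read()

for statement in theSQL.split(";"):
    if statement.strip():  # skip the empty chunk after the final semicolon
        cursor.execute(statement + ";")

connection.commit()
connection.close()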

There's probably even an easier way to do this, but it's good enough for me.

Using SQLite and Python to Store Model Metadata

As I continue to run a range of climate models, I've learned through painful lessons that I need to record as much information about each model run as possible. When I first started this process, I simply kept the files used to make the run (the geography and configuration files for the model) and the model output. At first, this seemed sufficient because, in the end, these were the most important data. As it turns out, however, having a history of everything you did during the model run, such as adjustments to the settings or geography, is also important, both as a record of the run and for sorting out problems later.

My initial solution to this problem was to create a log file. Every time I ran the model, the important settings were written to a simple flat-file log. This log turned out to be very important for debugging a model-run issue because it kept a record of how the model was initially run. I also started keeping hardware information in the log: along with the model information, I stored hardware temperature data from before and after the run, just in case I needed to debug hardware issues. However, these data turned out to be virtually useless in a flat log file. Something else I wanted in the log but hadn't been keeping was geography version control information. I use version control to track all my geography work, so I can both track how I change the geography and get an idea how much time I spend on it, and the exact geography used in a run is important to know. But cramming even more information into a flat log file would only make it harder to review.

My new solution is to dump the flat-file approach and go with SQLite. SQLite is a lightweight, public domain SQL database engine whose file format works well with a variety of languages. SQLite has become one of my preferred file formats over the years (nudging out XML for custom data structures). The Python scripting language seems a natural fit for working with SQLite as well.

So, how does this solution work? First, I have a simple run script for the model written in bash (for some reason, I could never get the model to run using Python). This script calls my Python script before the model starts and after the model ends. It sends two pieces of information: a uuid and the model directory path. The Python script assembles everything else it needs on its own.

Why a uuid? Each time I run the model, I need to identify the run in the database with a unique id that can be used to link across a number of SQL tables. A uuid ensures that the id is unique. I've considered using a uuid for the overall simulation but I haven't implemented that.
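
For what it's worth, the Python side of that hand-off is small; here's a sketch (the argument order and the uuid fallback are my assumptions, not necessarily what the real script does):

import sys
import uuid

# The bash run script passes a uuid and the model directory path.
# Generate a uuid here only if one wasn't supplied (an assumption,
# not necessarily how the original script behaves).
if len(sys.argv) >= 3:
    run_uuid, model_dir = sys.argv[1], sys.argv[2]
else:
    run_uuid, model_dir = str(uuid.uuid4()), "."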

To pull in settings data and temperature data, I've written parsers for each format. For the model I've been running, FOAM, I have parsers that read the atmos_params and run_params files, in addition to parsing the output of the temperature monitor software and the Subversion “svn info” command. The script then inserts these data into their own tables, keyed by the uuid. While most of these tables have a field for each value I pull out of the files, the temperature data is stored in a key->value style table, since the number of temperature sensors depends on the hardware and thus may change from machine to machine (and is also Mac only).

Here is the schema for the main table, “runs”:

CREATE TABLE runs (
    uuid text,
    starttime text,
    endtime text,
    runduration text,
    std_out blob,
    completed text,
    comments text,
    runcmd text,
    yearsPerDay text
);

Some of these fields are not yet used; std_out and runcmd are not yet implemented in the script. Right now, I fill in the comments field manually. My currently running simulation looks like this at the moment:

uuid = **deleted
starttime = 2010-01-16 23:33:45
endtime = 2010-01-17 10:29:57
runduration = 39372.0
std_out =
completed = NO
comments = manual shutdown because of memory problem
runcmd =
yearsPerDay = 14.2
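
For reference, here's roughly what the insert for a row like that looks like (the database path and the hard-coded values are placeholders, not pulled from the real script):

import sqlite3

# Sketch only: insert one row into the "runs" table defined above.
connection = sqlite3.connect("model_runs.db")
connection.execute(
    "INSERT INTO runs (uuid, starttime, endtime, runduration, std_out, "
    "completed, comments, runcmd, yearsPerDay) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
    ("example-uuid", "2010-01-16 23:33:45", "2010-01-17 10:29:57",
     "39372.0", "", "NO", "", "", "14.2"),
)
connection.commit()
connection.close()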

For the geography source location, here are the results for the run above:

uuid = **deleted
url = file:///Volumes/**deleted
type = svn
date = 2009-08-28 09:42:01 -0500 (Fri, 28 Aug 2009)
author = tlmoore
rev = 191
branch =

The branch field is empty in anticipation of moving from Subversion to Git.

For temperatures, I can now look at before and after values for specific sensors for a run:

uuid = *deleted
sensor = SMC CPU A HEAT SINK
temperature = 64.4

uuid = *deleted
sensor = SMC CPU A HEAT SINK
temperature = 98.6

One thing I'd change here is specifying pre- versus post-run measurements.
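
One simple way to do that would be to add a phase column to the key->value temperature table. Here's a sketch (the table and column names are hypothetical, not the script's current schema):

import sqlite3

# Sketch only: a key->value temperature table with an extra "phase"
# column to distinguish pre-run from post-run readings.
connection = sqlite3.connect("model_runs.db")
connection.execute(
    "CREATE TABLE IF NOT EXISTS temperatures ("
    "uuid text, sensor text, temperature text, phase text)"
)
connection.execute(
    "INSERT INTO temperatures (uuid, sensor, temperature, phase) "
    "VALUES (?, ?, ?, ?)",
    ("example-uuid", "SMC CPU A HEAT SINK", "64.4", "pre-run"),
)
connection.commit()
connection.close()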

So far, I'm happy with most of this new solution. It just needs refinement.

More Snow Leopard Geotagging With Services: Google Earth

As I've mentioned before, I feel geotagging is an important part of image metadata. In a previous post, I showed a simple AppleScript-based Snow Leopard service to set images to a commonly used location.

With a slight modification to the AppleScript in the service, you can pull the location out of Google Earth and embed it in your photos. I was really excited when Picasa supposedly got this feature, but it isn’t Mac compatible! Now iPhoto can do the same thing.

Step 1. Set up the Automator Workflow as described in a previous post.

Step 2. Use the following AppleScript:

 

on run {input, parameters}
 local myLongitude
 local myLatitude
 tell application "Google Earth"
  set myView to GetViewInfo
  set myLongitude to longitude of myView
  set myLatitude to latitude of myView
 end tell

 tell application "iPhoto"
  set mySelection to selection
  repeat with myImage in mySelection
   tell myImage
    set longitude of myImage to myLongitude
    set latitude of myImage to myLatitude
    reverse geocode myImage
   end tell
  end repeat
 end tell

 return input
end run

Refactoring your code!

My current programming project is an Objective-C application for Mac OS X that generates climate model results in the form of a web site and a PDF. It's actually a complete rewrite of an existing application.

Why rewrite? The original code was built over many months, is stringy, and is very hard to maintain. To get an idea of how hard, I've tried to rewrite this application five (yes, 5) times. The original application was designed to generate NCAR Command Language (NCL) scripts that generate hundreds of images based on climate model results. From there, the application generated makefiles that would continue the processing by running NCL for each script, converting the resulting PostScript images into JPEGs, and finally using FOP and DocBook to generate HTML and PDF documents with all the images and some related text.

There are several major weak points in this app. First, the conversion from PostScript to JPEG images was originally handled by ImageMagick. Unfortunately, I had a lot of trouble keeping this functionality working, so I switched to generating JavaScript code that uses Adobe Illustrator to do the conversions. In general, I was relatively happy with this solution (so much so that I gave up trying to get ImageMagick working). The unfortunate problem with using Illustrator for this work is that the simple makefile approach was no longer completely viable. I had to rewrite the makefiles to stop processing once the PostScript files were ready, and then pick up again once the JPEGs were in place.

Another problem was that the makefiles and scripts were location dependent. In other words, if I moved my files to another location, I had to regenerate everything before I could rebuild the image set.

The most significant problem was the Objective-C code itself. I'm a self-taught programmer, so I have a lot of bad programming habits. The code was stringy, repeated itself often, and was easily broken. Making simple changes, such as changing the contents of the makefiles, was very painful. So when I wanted to do something new in the project, I tended to opt for writing a new version of the code… but it's a LOT of code to rewrite!

What's different this time? I realized I needed some new images in the reports. Furthermore, I needed a way to start adding more images as I came up with new ideas. Plus, I was just plain sick of the old code not being what I really needed.

The new app is not yet complete, but I've changed how I'm writing the code. The main thing is that I'm taking the time to decide whether I should rework my code as I go, by asking some simple questions: Am I repeating myself? Can I simplify the code by breaking out new methods? Can I join classes into a simpler class hierarchy? These questions are quite painful in many ways, often because the answer is “yes”. When that happens, I stop moving forward and see how I can improve the code. This often breaks existing code, and it can take significant time to propagate the changes through it. In some ways, it is very much like the classic quote, “I would have written a shorter letter if I had had more time”.

However, the benefits have been huge. Instead of many classes repeating code, I now have a good class hierarchy where I need it, with all the common methods in the superclass. Instead of stringy code, I have much clearer and more readable code. Still not perfect, but greatly improved. Debugging has become easier with better code isolation in methods. So far, I've simplified my original 14 NCL template scripts down to 4.

I've also learned a few new things about debugging. Exceptions are something I've never fully understood, in terms of when they should be used and when I should watch for them in code. I was hit by an NSMutableDictionary exception from trying to insert a nil object. The problem is that I use a lot of dictionary calls in my code, and neither the exception nor gdb (AFAIK) tells you where in the code it occurs. Adding @try/@catch blocks to most methods lets me at least pinpoint the method. While not perfect, it certainly promotes the use of small and clear methods.

So, the lesson of this project is: don't be afraid of refactoring. Do it as soon as the need arises. It will likely take more time up front, but the payoff can include shorter overall development time, greater stability, better readability, and an application that's easier to extend.

Whither MacOS 9 Classic: Time to update my data

The following was a post I started a while ago and I'm not sure if I finished it. So, here's a quick re-write…

In practical terms, MacOS 9 is dead, again. Now that Leopard doesn't support “classic” mode, the Mac universe is going OSX… finally. On the other hand, Classic made me lazy. I have tons of stuff still floating about that is not OSX compatible. Now, I'm forced to do the unhappy task of data migration.

The Problem of Technology Creep and My Data

The one sore spot in the whole computer/workflow yadda yadda is the problem of migrating data. When I say data, I mean anything: graphics, data files, video, etc. Many of these files are in custom file formats tied to specific software. When your software no longer works with the OS, well, you can see the problem. Technology creep, the slow continuous progress in technology, can cause data loss for many reasons, but the loss of apps is hitting me hardest lately.

Some Problem Apps:

MapInfo – I bought MapInfo (GIS software) just before MapInfo cancelled the Mac version. It served me very well over the years (I still used it up to Leopard). Now MapInfo must be retired. Because of business concerns, I now use ArcGIS. With Classic going away, I'm forced to do the migration.

MacDraw Pro – No, I don't use this anymore, but oh man it was stable in Classic. A really well-written app. However, I have tons of MacDraw images from my dissertation. If I want to keep them, they must be converted.

Canvas – Another graphics app, one which only recently died on OSX. My images, however, were created at a university with version 3.5, and I didn't have a copy!

PageMaker – A similar problem to Canvas. I had just a few files, but no app.

Corel Draw – I had a few files in this format. Again, I had no app to convert them with.

Other problems:

Many apps work happily in OSX, such as Word and Excel. However, the Classic-era naming conventions (no file extensions, for example) sometimes need changing. The files will often still work, but updating the names and extensions keeps you current.

Solutions:

I was planning to detail all of my solutions to the data migration problem. However, I did this work a little while ago so I can only hit the highlights.

Spotlight: As it turns out, Spotlight is a great tool for data migration. Type codes are available to Spotlight, so you can search for all of the data files created with a particular software package, such as Corel Draw. Furthermore, files whose format you don't recognize often still have a type code, so even when you can't figure out the creator app, you can at least find all of the files created by the same app.

Automator: This was a great way to change file name extensions.

Demo Versions: If you don't have an app, some demo versions will let you read your old files (even very old OS 9 demos are still floating around). If you're fortunate, the demo will let you save or print the file as a PDF; if you're very fortunate, it will let you export to another format you currently use, such as Adobe Illustrator.

Assessment: No tool here, just advice: assess the value of your data. This is the stage I'm in now. I have a lot of data that don't need migrating, but I'm learning to delete, like free videos from iTunes. I've always been a data hoarder; after all, you never know when you might need it! I've got a basement full of books and reprints, and shelves full of backup DVDs and CDs. It's gotten worse with the climate modeling, where a single simulation I'm running has generated over 180 GB of data. The sheer volume of data I'm generating means I need to be more selective about what I keep, or else my house will be full of DVDs, CDs, and hard drives. So, if you don't really need it, don't bother to migrate it. And if you don't bother to migrate it, you might as well get rid of it, because you won't be able to use it anyway.

Open Source File Formats: I know that open source applications are handy and useful, but they don't always fit the bill. However, one way of protecting your data is to put them into open file formats, or at least well-documented file formats. These days, I commonly use XML, NetCDF, and SQLite. The advantage of this approach is that you're not tied to a particular application that may or may not be open source, and your data are safe because the file format itself is supported by open source (or, better yet, public domain) code that will likely be in the wild for a long time to come. This is, of course, not always practical, but where it works, it's just as much protection for your data as backing it up.

Final Words

I realize that I've done a little more data migration than most people. The worst is often when you change platforms. For me, the big migrations were C64/C128 -> MacOS6, MacOS9 -> MacOSX, and MacOS9 -> Windows (GIS stuff, mostly). Keep in mind, however, that these big migrations aren't the only ones you need to worry about. Apps become incompatible with MacOSX all the time, and sometimes you're faced with the choice of buying software updates or switching software altogether. It's best to plan your migration a little and get it done, or else your data will languish and possibly be lost forever.