Author Archives: admin

Why you need the hardware to develop for mobile…

Probably the second biggest roadblock for developing mobile apps is the cost of hardware (the first, of course, is time). Devices are expensive and it’s tough to keep up. My current testing livery of devices is actually quite limited. Indeed, some of my devices are no longer supported by their manufacturers or myself.

Despite the difficulty of maintaining an adequate device livery, you need the devices for testing. In a previous post, I mentioned that I’m migrating my app from OpenGL ES 1.1 to OpenGL ES 2.0. All of my currently-supported devices support 2.0 (and in some cases 3.0). That sounds like things should be just fine. Well, they don’t all have the exact same graphics hardware or OS version, which leads to horrible surprises.

I have one older iPod Touch which is on iOS 6. Having this particular device saved my bacon for the last release of one of my apps. It turns out that the software ran just fine on all of my devices except that iPod Touch. Talk about panic! It wasn’t an easy fix either; it required a rewrite of a major portion of my code to work correctly.

Well, in my testing for parts of my OpenGL ES transition code, I decided to try the old iPod again… and wouldn’t you know, code I thought was working fine rendered completely wrong! It turns out there was a bug in the OpenGL code that apparently didn’t manifest on the newer devices or in the simulator.

Realistically, you can get away with a smaller livery of devices under some conditions. For example, if you don’t use any low-level code and stick with the high-level user interface APIs, you’re more likely to still have a stable app on many devices. Getting into the lower-level stuff will likely cause more problems, particularly in the short run as you develop, and in the long run when devices you didn’t test start reporting issues.

Just to add to the above, I don’t count the Xcode simulator as a device. I have code that works perfectly in the simulator and does not run correctly AT ALL on any device. After all, the simulator actually runs on Mac OS X and Mac hardware, not mobile. As a consequence, successful rendering in the simulator will not always translate to a device. The first time you encounter this issue it is a rude awakening. However, it shouldn’t be a surprise, since all of Apple’s OpenGL ES debugging tools only work on devices, not the simulator.

This gets back around to why my apps have yet to be ported to Android. I only have a couple of devices that use that platform for testing, out of the many MANY devices out there… it’s a tad scary…



Working on migrating AE to OpenGL ES 2.0

Now that I have a break from other duties, I’ve been working on improving AE for iOS. One of the early design decisions I made in AE was to go with OpenGL ES 1.1. At the time, my only iOS device did not support OpenGL ES 2.0, so for practicality’s sake, 1.1 made sense. Furthermore, I needed some basic functionality that was stripped from OpenGL ES 2.0. ES 2.0 was clearly unattractive… at the time…

Today, the story is different.  All of my “supported” iOS devices support OpenGL ES 2.0 and in some cases 3.0.  Yes, they still support 1.1, and it’s arguable that the practical decision is to remain on 1.1.  However, it’s only practical if I don’t add new features that call on the graphics card.  I’m also considering the path to porting the application to other platforms, such as Android and Desktops.  From this perspective, ES 1.1 looks increasingly problematic.

The kicker that’s forcing the transition is the time scale control on the bottom of the screen. It turns out that feature is quite difficult to display. The time scale tends to be a memory hog and requires a great deal of code to provide full functionality. I’ve long known that the control should be moved to OpenGL rather than using higher-level functions. Originally, the time scale was created using simple bezier-path draw calls. In the last version, I moved the system to Apple’s layers. That transition was meant to make the code easier to maintain. However, it turned out, on older devices in particular, that the app became crash prone because the layers took up an unexpected amount of memory. I managed to work through this problem for a stable release, but I wasn’t happy with the situation.

Enter OpenGL ES 2.0. Now, the time scale is not done by any means, so these comments are initial impressions. Now that I’ve worked with it, I do regret not going with ES 2.0 from the beginning (though it still wouldn’t have been possible to release the original app with ES 2.0 support, since I had no capable hardware at the time). ES 2.0 is clearly superior to ES 1.1. Surprisingly, I found that while in some areas ES 2.0 requires more code, the transition from layers to ES 2.0 appears, at least in some areas, to be thinning out my code. That’s exciting! Since the code for the layers was extremely complex, it was hard to improve or add features. With the code simplifying, I feel I can at least consider adding functionality I wouldn’t have been too keen on adding before.

Now, back to work on the code.  We’ll see how this turns out…

Unexpected problem with the App Store – App icons

Migrating to Xcode 5 has largely been very helpful. I’ve used many of the new features and I’m quite happy with how things are working out.

One thing caught me by surprise.

Submitting apps to the store is usually a painful experience. One of the goals of Xcode 5 is to lessen the pain, at least a bit. So, imagine my surprise when submitting my latest updates was painful! Neither of my apps would pass the verification step in the approval process. Basically, the problem was that the app icons couldn’t be found.


I’m using Xcode’s new image assets feature. Image assets are great for managing all of the icon and launch images required for an app, and they were supposed to make all this painless. For example, to manage your app icon, you make an app icon image set, load in all of the images, and the rest is magic.

But, it wasn’t quite magic.

It turns out that when you build an app using image assets, Xcode will modify the app’s Info.plist file to list all of the app icons to be included in your app. However, if you’re like me and your project is older, you might have references to icons already in your Info.plist files. I suspect in most cases this won’t be a problem. In my case, however, when I looked at the Info.plist file for a compiled version of the app, all of those old icon references were still there. Thus, the verification process could not find those images and the app was rejected.
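If you’re hunting for the same problem, the sort of entry to look for is the old-style icon key in Info.plist. A hypothetical example (CFBundleIconFiles is the standard pre-asset-catalog key; the file names here are made up):

```xml
<!-- Stale icon references left over from before image assets.
     With an asset catalog in use, entries like these should be deleted. -->
<key>CFBundleIconFiles</key>
<array>
	<string>Icon.png</string>
	<string>Icon@2x.png</string>
	<string>Icon-72.png</string>
</array>
```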

Fortunately, it didn’t take TOO long to figure this out. So, I deleted all of the existing app icon references out of my plist file, and after that, things verified normally.

Now that I knew the problem, I fixed it in my other app and the process really became much less painful.

Redesigning my OpenGL engine for Ancient Earth

In case you were wondering about any updates to Ancient Earth, I’m working on them. Right now, I’m spending a good deal of time trying to modernize my OpenGL engine to take advantage of Apple’s updates in iOS 5 (yes, that means I’m dropping support for iOS 4). It turns out that Ancient Earth really pushes the limits of iOS, and for us to be able to add more data and map types, we need not only a more efficient rendering system but a more flexible one.

Ancient Earth Released for iOS!

Last month, I released my iOS app "Ancient Earth: Breakup of Pangea". Working with C.R. Scotese, we've brought his maps to the iOS platform! In this app, you can explore his continental plate reconstructions for the last 200 million years of Earth history.

You can see our home page for Ancient Earth here: Ancient Earth. Alternatively, you can simply visit our page on iTunes:
Ancient Earth: Breakup of Pangea - Thomas L. Moore

Reading SQL with PySqlite

About a year or so ago, I wrote a special script to run the FOAM climate model. The primary goal of this script, besides running the model, was to store a wide variety of information about the run, including settings, system information (like CPU temperature), and the timing and duration of the run. The script stores some of the information before the model starts and the rest after the model ends. It's a great log of my model runs and system performance history.

The drawback to this data was the database itself. Up until today, I've been using a single database to store all of the run data. However, I've been wanting a separate database for each model.

I didn't develop this approach in the first version of the script because I didn't know how to read the template SQL and directly insert it into the database. On the command line with SQLite, you simply enter ".read mysqlfile.sql" or something similar. In Python, that's not possible. Nor will PySqlite accept more than one SQL command at a time in a single execute() call. Without this ability, I couldn't automatically create a complete SQLite file with all of the required tables.

The solution turned out to be remarkably easy. The SQL file is a straightforward text file, and reading a text file in Python is very easy:

data_file = open("path_to_my_SQL_", 'r')
theSQL = data_file.read()
data_file.close()

Since pysqlite only handles one statement at a time, the commands need to be split into separate statements:

theStatements = theSQL.split(";")

The file can be split into discrete statements because the semicolon marks the end of each statement (assuming, of course, that no semicolons appear inside quoted strings in the SQL).

At this point, you simply need to loop through each of the statements and execute in the sqlite file:

for statement in theStatements:
    if statement.strip():  # skip the empty chunk left after the final semicolon
        sqlite_cursor.execute(statement + ";")

Keep in mind, you have to reattach the trailing semicolon at the end of each statement.

There's probably even an easier way to do this, but it's good enough for me.
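As it happens, there is an easier way in newer versions of the module: Connection.executescript() runs a whole multi-statement SQL string in one call, so no manual splitting is needed. A minimal sketch (the schema below is made up for illustration, not my actual run database):

```python
import sqlite3

# executescript() accepts a multi-statement SQL string, unlike
# execute(), which only takes one statement at a time.
schema = """
CREATE TABLE runs (id INTEGER PRIMARY KEY, started TEXT);
CREATE TABLE settings (run_id INTEGER, key TEXT, value TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)  # all statements execute in one call

# Confirm both tables were created.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['runs', 'settings']
```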

Finishing up DLD 2.0

It’s been a long time coming, but I’m finishing up DLD 2.0 for iOS. I hope to submit next week.

What’s new? Everything. Depending on whether it gets through the approval process, here’s a summary:

1. An entirely new map interface! Now, the map is actually useful and will make use of Google map data as well!

2. A custom Time Scale controller! Now you can change the age range of the data displayed on the map without leaving the map view!

3. An entirely new database back-end. Now, this isn’t exciting for the user in general, but the nasty torture I’ve been inflicting on the database is improving the overall quality of the data.

Do you have any other wants for future versions? Let me know.

As always, contact me if you want a copy of the raw database file.


Migrating the Devonian Lithological Database to a Fully Relational System: The Story So Far

The Devonian Lithological Database (DLD for short) is a database I published as part of my PhD work at the University of Arizona. As databases go, it was quite primitive, but it got the job done. Over the past year or so, I've been migrating the database to a more modern SQL format using SQLite. SQLite is a public domain database designed to work without a server. It is easy to use (for a SQL database) and the data file is generally cross-platform.

The migration from the original DLD format to the SQLite format has not been easy. DLD originally consisted of two basic tables: the data records and the source list. The data records were kept in Microsoft Excel with 34 columns of information. The reference list was just an EndNote database. Inserting these tables into SQLite is actually quite easy. However, early on, issues made themselves apparent.

The first issue was database normalization (making sure you don't repeat data more than once): it suggested that there were actually far more than two basic tables in the database. I had used various codes to represent information in the database. For example, I came up with a letter code to represent the error in position for each record; that is, how far off I thought I might be with the latitude and longitude. Thus, each of those code systems had to be a table so an end-user could at least translate the codes. These code systems added an additional 5 tables to the database.

I also discovered I had a few records that used more than a single source from the reference list. This meant I had to have yet another table to list all the references associated with each record.

So, the database which I thought was only 2 tables was now 8. It was more complicated than I had originally hoped, but it was far better than the original Excel/EndNote combination. This approach tied together all the diverse data into one generally easy-to-use file.

Of course, there were more problems. The next problem is that the file is slow in the iPod/iPhone version I created last year. The reasons for the speed issue are complicated and I'm not sure that I can fully resolve them. Two of the main causes are my letter code system and redundant data.

The letter codes are a nice, human-readable way to convey information. SQLite isn't human. In some cases, there is more than one letter code in the field (a one-to-many relationship). For example, the letter code system for lithology allows many letter codes in the same field, and the order in which they appear is important. Parsing and understanding that sort of text field is time consuming. So, I need to make a new table to replace this field. I haven't done this yet, mainly because it seems a bit scary to do with over 5000 records.
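To make the idea concrete, here is a sketch of what such a replacement table might look like, using a position column to preserve the order of the codes. All of the table names, column names, and codes below are hypothetical, not the actual DLD schema:

```python
import sqlite3

# Sketch: replace an ordered multi-code text field (e.g. 'sl')
# with a junction table, one row per code per record.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lithology_codes (code TEXT PRIMARY KEY, description TEXT);
CREATE TABLE record_lithologies (
    record_id INTEGER,
    position  INTEGER,          -- preserves the original code order
    code      TEXT REFERENCES lithology_codes(code),
    PRIMARY KEY (record_id, position)
);
INSERT INTO lithology_codes VALUES ('s', 'sandstone'), ('l', 'limestone');
""")

# Splitting a hypothetical old field value 'sl' into ordered rows
# for record 1:
for pos, code in enumerate("sl"):
    conn.execute("INSERT INTO record_lithologies VALUES (?, ?, ?)",
                 (1, pos, code))

rows = conn.execute("""
    SELECT code FROM record_lithologies
    WHERE record_id = 1 ORDER BY position""").fetchall()
print([r[0] for r in rows])  # ['s', 'l']
```

With the codes in their own rows, SQL can filter and join on them directly instead of parsing text.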

Data redundancy takes many forms in the lithological database. The prime example, however, is localities. Each record in the original database fully describes its location: place names, coordinates, etc. While there are over 5000 records in the database, there are fewer than 4000 unique localities. This leads to several problems. First, you have more data to sift through than you need: an obvious slowdown. Second, maintaining the information is harder than it needs to be. With each location entered only once, you only have to maintain that record in one place. If the same information appears in several places in the database, then I'd have to fix it everywhere, which has a greater chance of error.

As of today, the database has gone from 8 tables to 26, with a few more expected. Why so many? Going through this process has made it clear that there was also a quality assurance problem with the original database. Using a flat file like Excel was nice and easy. However, what it didn't do was force you to follow rules for data entry. Every time you enter something into a database, you have a chance to make a mistake. For example, I have formation names that are repeated in multiple records but written differently: e.g. "Ft Vermillion" and "Ft. Vermillion". In a search, it would be hard to find both. Using what are essentially look-up tables, the system can help force the use of consistent terms.
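As a sketch of how a look-up table enforces that consistency, a foreign-key constraint makes SQLite reject any spelling that isn't already in the look-up table. The table and column names below are hypothetical; note that foreign-key enforcement has to be switched on per connection in SQLite:

```python
import sqlite3

# Sketch: a look-up table of formation names, with records
# constrained to reference only names that exist in it.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.executescript("""
CREATE TABLE formations (name TEXT PRIMARY KEY);
CREATE TABLE records (
    id        INTEGER PRIMARY KEY,
    formation TEXT NOT NULL REFERENCES formations(name)
);
INSERT INTO formations VALUES ('Ft. Vermillion');
""")

conn.execute("INSERT INTO records VALUES (1, 'Ft. Vermillion')")  # accepted

try:
    conn.execute("INSERT INTO records VALUES (2, 'Ft Vermillion')")  # misspelled
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the inconsistent spelling never enters the table

print("rejected" if rejected else "accepted")  # rejected
```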

Designing a new database has been quite enlightening.

Why should scientific papers be "spatially enabled"?

Now that I'm starting to build the databases needed for my new lithological database, I'm coming back to how I created my Devonian database.  The papers I generally worked with contained reports from the field, including lithology, measurements, location, etc.  That can be a LOT of information.  Collecting it all from each paper is time consuming, to say the least.  However, there was another problem…

That problem is being overly focused on the data in front of you rather than the data you need.  A forest-for-the-trees problem, if you will.  In the earth sciences, there are a number of research biases.  North America and Europe are far better studied than Africa, for example.  Thus, most publications are focused on those regions.  Similarly, some specific localities can be studied extensively, because of location or because of something interesting, while others are rarely visited.  This becomes a problem when you keep entering papers from the same area but miss important work from more rarely studied areas.

To combat this problem for the Devonian database, I created a “recon” or “search” database.  I tried to find any paper that might be relevant to the project and collect some basic information such as time range, and the general lat/lon area of the field study.   I could then map these records in a GIS application (at the time, I was using MapInfo, Terra Mobilis, and PGIS). 

As an example, I found about 500 of these records remaining in my archives.  Here is a global map example:

The yellow dots are entries in the Devonian Lithological Database.  The blue rectangles are “coverages” for particular scientific papers.  Where papers overlap, the blue color gets darker.  This is more evident regionally, for example:

As you can see, I can now show the data I have versus the field areas represented by papers I've found.  Careful examination of this sort of map highlights both papers I might not need to bother with (blue rectangles with lots of yellow dots) versus papers I should prioritize (blue rectangles with few if any yellow dots). 

These maps by no means represent all the papers I looked at in developing the database.  I physically looked at at least 3000-4000 papers, but only 500 are represented in the above maps.  So, to include everything would take a great deal of work.

In any case, with this short example, I hope I've shown that, in at least one case, geospatially enabled papers can be very important.  Now, the question is how to implement it!