About a year or so ago, I wrote a special script to run the FOAM climate model. The primary goal of this script, besides running the model, was to store a wide variety of information about the run, including settings, system information (like CPU temperature), and the timing and duration of the run. The script stores some of the information before the model starts and the rest after the model ends. It's a great log of my model run and system performance history.
The drawback to this setup was the database itself. Up until today, I've been using a single database to store all of the run data. However, I've been wanting a separate database for each model.
I didn't take this approach in the first version of the script because I didn't know how to read the template SQL and execute it directly against the database. In the Sqlite command-line shell, you simply enter “.read mysqlfile.sql” or something similar. In Python, that's not possible, nor can PySqlite accept more than one SQL command at a time. Without this ability, I couldn't automatically create a complete Sqlite file with all of the required tables.
The solution turned out to be remarkably easy. The SQL file was a straightforward text file, and reading a text file into Python is very easy:
data_file = open("path_to_my_SQL_", 'r')
theSQL = data_file.read()
Since PySqlite only handles one statement at a time, the commands need to be split into separate statements:
theStatements = theSQL.split(";")
The file can be split into discrete statements because a semicolon marks the end of each statement. (This simple split assumes no semicolons appear inside quoted strings, which held true for my schema file.)
At this point, you simply need to loop through the statements and execute each one against the sqlite database:
for statement in theStatements:
    sqlite_cursor.execute(statement + ";")
Keep in mind that you have to reattach the trailing semicolon to the end of each statement, since split() removes it. (The split also leaves an empty fragment after the final semicolon, which you may want to skip.)
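Putting the pieces together, here is a minimal self-contained sketch of the whole technique. The schema text and table names here are hypothetical stand-ins (in practice the SQL would come from data_file.read(), and the connection would point at the real .db file rather than an in-memory database):

```python
import sqlite3

# Hypothetical schema text; in the real script this comes from
# data_file.read() on the template SQL file.
theSQL = """
CREATE TABLE runs (id INTEGER PRIMARY KEY, started TEXT, duration REAL);
CREATE TABLE settings (run_id INTEGER, key TEXT, value TEXT);
"""

connection = sqlite3.connect(":memory:")  # stand-in for the real .db file
sqlite_cursor = connection.cursor()

# Split on semicolons and execute each statement individually, skipping
# the whitespace-only fragment left over after the final semicolon.
for statement in theSQL.split(";"):
    if statement.strip():
        sqlite_cursor.execute(statement + ";")

connection.commit()
```

The strip() check is what guards against the empty trailing fragment produced by the split.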
There's probably even an easier way to do this, but it's good enough for me.
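There is indeed an easier way: the sqlite3 module in the Python standard library (the descendant of pysqlite) provides Cursor.executescript(), which accepts a multi-statement SQL string directly, so no splitting is needed at all. A sketch using the same hypothetical schema text as above:

```python
import sqlite3

# Same hypothetical schema text; executescript() handles multiple
# statements (and the trailing semicolon) in a single call.
theSQL = """
CREATE TABLE runs (id INTEGER PRIMARY KEY, started TEXT, duration REAL);
CREATE TABLE settings (run_id INTEGER, key TEXT, value TEXT);
"""

connection = sqlite3.connect(":memory:")
connection.cursor().executescript(theSQL)
```

Note that executescript() commits any pending transaction before running the script, so it behaves a little differently from a series of plain execute() calls.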