hm.Survey: data capture tool – how & why

Introduction

I have been doing survey automation systematically since 2000, on both regular computers and hand-helds (what would be cell phones and tablets today). More recently, I have generalized that bit of automation into a tool called hm.Survey, a template-based data entry tool that allows relatively quick generation of data entry programs and subsequent tabulation of the data.

The content under hm.Survey is in the process of being tidied up: moving from Google Docs to a wiki, updating documents, clarifying text and removing obsolete pages. This introduction gives some background to that work.

The lead up to hm.Survey

Before hm.Survey, there was another program, written in Delphi in Riyadh in 2005, where there was so little to do that, rather than twiddle my thumbs, I thought it best to write a program to automate table production instead of doing it by hand in SQL. I dubbed it hm.SurveyReport. It took an XML document as input and produced tabulated reports as output. The XML document described the fields to be used in each report, which allowed the Delphi program to construct the SQL statements and execute them. It was all very nice and dandy except, of course, that Delphi – the flagship product of the once popular Borland company, synonymous with top-notch compilers for C, C++ and Pascal – was dying off. At one point I was using Delphi in all my survey programs.
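To make that spec-to-SQL mechanism a little more concrete, here is a small sketch of the general idea. It is not the original Delphi code, and the table, field and aggregate names, as well as the XML layout, are invented for the example.

# Illustrative only: a report specification describing fields, from which
# a GROUP BY statement is assembled. Names and layout are hypothetical.
import xml.etree.ElementTree as ET

spec = ET.fromstring("""
<report table="establishments">
  <row field="governorate"/>
  <column field="activity"/>
  <value field="employees" aggregate="SUM"/>
</report>
""")

row = spec.find("row").get("field")
col = spec.find("column").get("field")
val = spec.find("value")
sql = (f"SELECT {row}, {col}, {val.get('aggregate')}({val.get('field')}) "
       f"FROM {spec.get('table')} GROUP BY {row}, {col}")
print(sql)
# SELECT governorate, activity, SUM(employees) FROM establishments GROUP BY governorate, activity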

Delphi’s slow death prompted me to start using .NET (C# Express and SQL Server) for my survey data entry and reporting work in 2006 in Lebanon, mainly because the Ministry of Industry there already had a Microsoft license for SQL Server. I pretty much rewrote the earlier Delphi data entry programs for Lebanon and reused them later for Tanzania and Iraq (there was a bit of generalization here as well, but not of the same order as the one I carried out for hm.SurveyReport, i.e., for reports).

In parallel, I had been working on a data entry/indexing system for inflation, first in Delphi (Paradox DB, later MySQL), then in Java (a desktop application with a MySQL database; it even has a servlet-based analysis engine that does all sorts of sophisticated regressions but has never been used). The Java version also had a handheld counterpart in the form of a Java/MIDP program (MIDP is now obsolete in favor of Android and iOS) that could run on cheap Nokias (and ran equally well on expensive ones).

To keep the work worthwhile, I wanted to generalize the survey program and its accompanying tabulations along the lines of the specification-based reporting work done earlier (in Delphi), but on a platform that was less ephemeral. Java was an option (Microsoft never was), but considering how the micro edition (Java/MIDP) fell out of favor with the market, I could not find any compelling reason why other versions of Java would not follow suit (and Oracle had acquired Sun, and hence Java, killing any remaining ideological bias toward Java).

Hence the idea of specifications for data entry and reports that could reside in text files. These specifications would ‘trigger’ instances of programs rooted in platforms that had weathered the vagaries of the marketplace. If and when a platform became obsolete, the specifications would not, and new instances could be spawned. That, at least, was the idea.
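A rough sketch of that separation, under entirely invented names and spec layout (none of this is hm.Survey's actual code): the specification is the durable artefact, and the generator that turns it into a running program is the replaceable part.

import json

def load_spec(path):
    # The specification lives in a plain text (JSON) file and outlives any
    # particular program generated from it.
    with open(path) as f:
        return json.load(f)

def generate_html_form(spec):
    # One possible 'instance': render the questionnaire as an HTML form.
    # If this platform fell out of favour, only this function would need
    # replacing; the specification file would stay as it is.
    rows = "\n".join(
        f'<label>{q["label"]} <input name="{q["name"]}"></label>'
        for q in spec["questionnaire"]["fields"]
    )
    return f'<form method="post">\n{rows}\n</form>'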

The Realization

I implemented this idea as the Iraq project was finishing up. By the time I wrote the programs (online data entry and tabulation, with an option for using Android devices for data entry) for a seasonal survey that was never carried out, official UNIDO and Iraqi commitment to the project was falling off. So I salvaged what I could and built upon it. This was the first quarter of 2012.

It was in Oman, in the third quarter of 2013, that I could put these ideas to use. Today, after more than three years of using the program there and, more recently, another installation in Laos, two instances have been realized and tested, a number of significant changes have been made, and new features are being proposed.

So that was the how and why. In terms of the technologies used, the data entry clients are browser-based, and the ‘canonical’ server uses Python scripts running on top of an Apache web server on Ubuntu (I have implemented a Windows port using Microsoft SQL Server, but it has not been tested; a virtual machine setup running Ubuntu on Windows is also operational in Laos). Two databases have been tested thoroughly: MySQL and Oracle. Underlying it all, the constants are specification files in text (JSON) format that describe the questionnaire, the validation checks and the output reports, including the formulas used in them.
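The spec format itself is not reproduced here, but as a purely hypothetical illustration of the kind of content such a JSON file carries, a fragment covering a questionnaire field, a validation check and a report formula might look roughly like this (all names, rules and formulas are invented).

import json

# Hypothetical fragment only; the real hm.Survey specification layout may
# differ. It shows the three kinds of content mentioned above: questionnaire
# fields, validation checks and report formulas.
spec = json.loads("""
{
  "questionnaire": {
    "fields": [
      {"name": "employees", "label": "Number of employees", "type": "integer"},
      {"name": "sales", "label": "Annual sales", "type": "decimal"}
    ]
  },
  "checks": [
    {"rule": "employees >= 0", "message": "Employee count cannot be negative"}
  ],
  "reports": [
    {"title": "Sales per employee", "formula": "SUM(sales) / SUM(employees)"}
  ]
}
""")

print(spec["reports"][0]["formula"])  # SUM(sales) / SUM(employees)

Whatever the exact layout, the point stands that these files, not the programs generated from them, are the long-lived part of the system.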
