- NLTK: Python Natural Language Toolkit
- Introduction to Orange: a data mining toolkit in Python
- Chapter on text classification using Python from Python for Social Science
- What are the best possible Python 2.7 tools for data mining? (Q&A on quora.com)
- Very simple text classification by machine learning (Q&A on stackoverflow.com)
- Who to listen to?
- Rural women poets
- Poets who spend the bulk of their lives trying to eke out daily wages, yet find time to write
- University students whose primary focus is a career but who still have an inexplicable urge to write poetry
- Unlettered poets
- Guiding rails
- Aim for the poetic lull behind the noise – the statistical asymptote – of unheard voices; here is where databases, categorization and visualization come in
- Chime with global and historical well-lit poetry of oppression/dissent: Latin American; black (esp. Africa and US); feminist; Bhakti etc.
- Work with mainstream poets, esp. regional ones, without highlighting their work
- Along the lines of A. K. Ramanujan’s collection of Indian folktales and P. Sainath’s People’s Archive of Rural India, but for verse
- Where to look?
- Rural spaces where marginalized poets speak
- Katchi abadis
- The unlit mushaira/gathering of poets
- Who to work with?
- Regional poets with social justice sensibilities
- Social justice activists with poetic sensibilities
- Academics with social justice and poetic leanings
- Creating effective quality filters which are open to the myriad voices without damping them down
- Identify the spaces in ‘Where to look?’
- Nuts & bolts
- Blog/wiki for poetry collection
- A platform for data collection via browsers and phones using the tools I have built (if blog or wiki is not up to the task)
- An incremental portal that learns its categories from the data collected
I have been doing survey automation systematically since 2000, on both regular computers and hand-helds (which translate into cell phones and tablets today). More recently, I have generalized that bit of automation into a tool called hm.Survey, a template-based data entry tool that allows relatively quick generation of data entry programs and then tabulation of the data.
The content under hm.Survey is in the process of being tidied up: moving from Google Docs to a wiki, updating documents, clarifying text and removing obsolete pages. This introduction gives the background to that work.
The lead-up to hm.Survey
Before hm.Survey there was another program, written in Delphi in Riyadh in 2005, where there was so little to do that rather than twiddle my thumbs I thought it best to write a program that automated the table production process instead of doing it by hand in SQL. I dubbed it hm.SurveyReport. It took an XML document as input and produced tabulated reports as output. The XML document described the fields to be used in each report, which allowed the Delphi program to construct the SQL statements and carry them out. It was all very fine and dandy, except of course that Delphi (the flagship product of the once popular Borland company, synonymous with top-notch compilers for C, C++ and Pascal) was dying off. At one point I was using Delphi in all my survey programs.
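The spec-to-SQL step can be sketched in a few lines. This toy Python version uses a made-up XML layout — the tag and attribute names below are my illustration, not hm.SurveyReport's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical tabulation spec in the spirit of hm.SurveyReport's XML input;
# element and attribute names are illustrative assumptions.
SPEC = """
<report table="survey2005">
  <row field="district"/>
  <column field="employment_status"/>
  <measure field="respondent_id" aggregate="COUNT"/>
</report>
"""

def sql_from_spec(spec_xml):
    """Turn a row/column/measure spec into a GROUP BY query."""
    report = ET.fromstring(spec_xml)
    row = report.find("row").get("field")
    col = report.find("column").get("field")
    measure = report.find("measure")
    agg = "%s(%s)" % (measure.get("aggregate"), measure.get("field"))
    return ("SELECT %s, %s, %s FROM %s GROUP BY %s, %s"
            % (row, col, agg, report.get("table"), row, col))

print(sql_from_spec(SPEC))
```

The point is only that the report definition lives in data, not code: the program reads the spec, builds the SQL, and runs it.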
Delphi’s slow death prompted me to start using .NET (C# Express and SQL Server) for my survey data entry and reporting work in 2006 in Lebanon, chiefly because the Ministry of Industry there already had a Microsoft license for SQL Server. I pretty much rewrote the earlier Delphi data entry programs for Lebanon and reused them later for Tanzania and Iraq (there was a bit of generalization here as well, but not of the same order as the one I carried out for hm.SurveyReport, i.e., for reports).
In parallel, I had been working on a data entry/indexing system for inflation, first in Delphi (Paradox DB, later MySQL), then in Java (a desktop program backed by MySQL; it even had a servlet-based analysis engine that could do all sorts of sophisticated regressions but was never used). The Java version also had a handheld counterpart in the form of a Java/MIDP program (a platform now obsolete in favor of Android and iOS) that could run on cheap Nokias (and ran equally well on expensive ones).
To keep the work worthwhile, I wanted to generalize the survey program and its accompanying tabulations along the lines of the specification-based reporting work done earlier in Delphi, but on a platform that was less ephemeral. Java was an option (Microsoft never was), but considering how the micro edition (Java/MIDP) had fallen out of favor with the market, I could not find any compelling reason why other versions of Java would not follow suit (besides, Oracle had acquired Sun and hence Java, killing any remnant of an ideological bias for Java).
Hence the idea of specifications for data entry and reports that could reside in text files. These specifications would ‘trigger’ instances of programs rooted in platforms that had weathered the vagaries of the marketplace. If and when those platforms became obsolete, the specifications would not, and new instances could be spawned. That, at least, was the idea.
I implemented this idea as the Iraq project was finishing up. By the time I had written the programs (online data entry and tabulation, with an option for using Androids for data entry) for a seasonal survey that was never implemented, both the official UNIDO and Iraqi commitments to the project were falling off. So I salvaged what I could and built upon it. This was the first quarter of 2012.
It was in Oman, in the third quarter of 2013, that I could put these ideas to use. Today, after three-plus years of using the program there and, more recently, another installation in Laos, two instances have been realized and tested, a number of significant changes have been made, and new things are being proposed.
So that was the how and why. In terms of the technologies used, the data entry clients are browser-based, and the ‘canonical’ server uses Python scripts running on top of an Apache web server on Ubuntu (I have implemented a Windows port using Microsoft SQL Server, but it has not been tested; a virtual machine implementation of Ubuntu running on Windows is operational in Laos). Two databases have been tested thoroughly: MySQL and Oracle. Underlying it all, the constants are specification files in text (JSON) format that describe the questionnaire, the validation checks and the output reports, including the formulas used in them.
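To make the idea of such specification files concrete, here is a toy sketch of one, together with a validator driven by it. The JSON keys, field names and check rules are illustrative assumptions on my part, not hm.Survey's actual schema:

```python
import json

# Hypothetical questionnaire spec in the spirit of hm.Survey's JSON files;
# every key and field name here is an assumption, not the tool's real layout.
SPEC_JSON = """
{
  "questionnaire": {
    "fields": [
      {"name": "district",  "type": "code",  "codes": ["MCT", "DHO", "SAL"]},
      {"name": "employees", "type": "int",   "min": 0, "max": 10000},
      {"name": "output",    "type": "float", "min": 0}
    ]
  }
}
"""

def validate(record, spec):
    """Run the per-field code-list and range checks from the spec."""
    errors = []
    for f in spec["questionnaire"]["fields"]:
        v = record.get(f["name"])
        if f["type"] == "code" and v not in f["codes"]:
            errors.append("%s: %r not in code list" % (f["name"], v))
        elif f["type"] in ("int", "float"):
            if "min" in f and v < f["min"]:
                errors.append("%s: below minimum" % f["name"])
            if "max" in f and v > f["max"]:
                errors.append("%s: above maximum" % f["name"])
    return errors

spec = json.loads(SPEC_JSON)
print(validate({"district": "MCT", "employees": 12, "output": 3.5}, spec))
print(validate({"district": "XXX", "employees": -1, "output": 3.5}, spec))
```

Because the checks are data, a new survey means a new JSON file, not a new program — which is the whole point of keeping the specifications outside the platform.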
I created hm.StructureMap, a Java-based categorization tool, somewhere around 2000. It can be used to tag content (files, URLs), hierarchically if necessary.
I last updated the user guide way back in 2004, but that document should still give a good idea of what the tool is about. I have used and updated the program since, but the changes are not significant.
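For a flavor of what hierarchical tagging means here, a toy sketch in Python — the data structures, item names and URL below are my assumptions, not hm.StructureMap's actual design:

```python
# Map a hierarchical tag path (a tuple) to the set of tagged items.
tags = {}

def tag(item, *path):
    """Tag an item (file, URL) under a path like ('poetry', 'bhakti');
    the item is also registered under every ancestor of the path."""
    for depth in range(1, len(path) + 1):
        tags.setdefault(tuple(path[:depth]), set()).add(item)

def items_under(*path):
    """All items tagged at or below the given tag path."""
    return tags.get(tuple(path), set())

# Illustrative items only.
tag("kabir.txt", "poetry", "bhakti")
tag("http://example.org/neruda", "poetry", "latin-american")
print(sorted(items_under("poetry")))
print(sorted(items_under("poetry", "bhakti")))
```

Registering each item under every ancestor is what makes a query on a broad tag (‘poetry’) pick up everything filed beneath it.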