Automated text categorization using Python: Links

Data preparation for HDI vs. IFF visualization

After coming across this Guardian article on Illicit Financial Flows by Jason Hickel, I thought of doing a visualization on the topic. His 2014 article on “Flipping the Corruption Myth” was part of my education in leaning towards social justice. I had written to him then as well to see if we could collaborate on a visualization project, but heard nothing back.

Since GFIntegrity.org had this PDF on their website, a visualization seemed doable, as there are a number of tables in the appendix. I played around with a few ideas for the visual and finally settled on the same template that I have used for a number of other visuals (HDI vs. SPI, Sindh Health and Literacy Comparison, & Pakistan HDI vs Election Results). This type of visual uses the general switch.js Javascript program, which I will describe in another post.

This is the end result: Africa: Human Development Index (HDI) versus Illicit Financial Flows (IFF)

Scraping table data from PDF to spreadsheet

Over the years, I have used a couple of ways of extracting table data from PDFs. Using Adobe Acrobat is one. I have also used online programs for quick conversions. This time I found a Java-based interactive program called Tabula which, after downloading and trying out, seemed to work fine. I won’t go into the details of using the program as it is pretty straightforward (it did require a bit of trial and error to get the tables extracted in CSV format, but nothing that requires elaboration).
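
For a purely scripted route, the tabula-py wrapper around the same extraction engine could do something like the following (just a sketch; the PDF file name and options are placeholders, and the interactive GUI is what was actually used here):

import tabula

# read_pdf returns a list of pandas DataFrames, one per table it detects.
# "gfi_report.pdf" is a placeholder name for the GFI report.
tables = tabula.read_pdf("gfi_report.pdf", pages="all", multiple_tables=True)
for i, table in enumerate(tables):
    table.to_csv("table_%d.csv" % i, index=False)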

Once I had the tables in CSV format, I had to convert the country names to codes, since the Africa map in SVG format that I use in the visual is keyed on three-letter ISO codes (ISO 3166-1 alpha-3). (There was a fair amount of preprocessing for the SVG file as well after downloading a suitable Africa map in SVG from the web; that is deferred to another post.) The three-letter codes can be collected from any number of places, and the Wikipedia page is as good as any.

To get the country codes into the spreadsheet, matching has to be done by country name, which is never an exact process, so a little manual matching is needed. The final result is here (note that we need only Table 2 of this for our visual; I also wrote an R script to take data from Tables 4 and 5 into a couple of CSV files ready to be imported into a MySQL table, but that was not needed for the visual I decided to do. It was a fun data conversion exercise though).
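
To give a flavour of that matching step, here is a rough sketch of the kind of name-to-code lookup involved; the file layout and the override spellings below are made-up examples, not the actual data:

import csv

# Hand-made overrides for names that never match the ISO list exactly
# (these spellings are illustrative, not taken from the real tables).
OVERRIDES = {
    "Congo, Dem. Rep.": "COD",
    "Cote d'Ivoire": "CIV",
    "Tanzania, United Rep.": "TZA",
}

def load_iso_codes(path):
    # Assumes a two-column CSV: country name, ISO 3166-1 alpha-3 code.
    with open(path) as f:
        return {name.strip(): code.strip() for name, code in csv.reader(f)}

def code_for(name, iso_codes):
    # Exact match first, then the manual override table, else flag for review.
    return iso_codes.get(name) or OVERRIDES.get(name) or "UNMATCHED"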

I then used HDI data downloaded from the UNDP site (containing 2015 estimates based on 2014 data). Again, we only need the first worksheet from this dataset for our visual. A little cleaning and country coding of the HDI data, and combining it with Table 2 from GFI, gives the final result needed for our visual.
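
The combining step itself is just a join on the country code. A sketch of it with pandas, with file and column names invented for illustration:

import pandas as pd

# Hypothetical file names; both tables are assumed to already carry an 'iso3' column.
hdi = pd.read_csv("hdi_2014.csv")      # e.g. columns: iso3, country, hdi
iff = pd.read_csv("gfi_table2.csv")    # e.g. columns: iso3, country, iff_avg

# An inner join keeps only the countries present in both tables.
combined = pd.merge(hdi[["iso3", "hdi"]], iff, on="iso3")
combined.to_csv("hdi_vs_iff.csv", index=False)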

Ramping up to programming

    • Scratch: “Scratch helps young people learn to think creatively, reason systematically, and work collaboratively — essential skills for life in the 21st century. Scratch is a project of the Lifelong Kindergarten Group at the MIT Media Lab. It is provided free of charge.”
    • Pre-processors for writing code in one’s native language: should be possible, I think, quite easily. In principle, each keyword and syntax construct would map to its corresponding version in the “actual” language (a rough sketch of the idea follows the examples below).

For example:

عمر = ۱؛
جب تک (عمر < ۱۲) {
لکھو "میری عمر " + عمر + " سال ہے مگر میں بڑا ہو کر بڑا آدمی بنوں گا! "
عمر++؛
}

(The Urdu reads roughly: age = 1; while (age < 12) { write "My age is " + age + " years, but when I grow up I will become a great man!"; age++; }.) Unfortunately, to get all the characters I needed, I had to use both the “standard” Urdu keyboard and the phonetic one.

Or, in French:
âge = 1;
jusqu_a(âge < 12){
écris("quand je serai grand, je serai un grand homme!");
âge++;
}
This, oddly enough, turned out to be non-trivial, as keys like < and > as well as the plus sign are not easy to find on the French keyboard (too many keys taken up by accented characters, in my opinion!).
It’s much worse in Urdu, as WP’s display mechanism at some point completely mangles the text direction.
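
For what it’s worth, here is a minimal sketch of what such a pre-processor could look like, assuming a naive keyword-for-keyword substitution and Python 3 as the “actual” target language (the mapping table and syntax are my own assumptions, not a real implementation):

#!/usr/bin/python3
# Minimal sketch: naively substitute French keywords with their Python
# equivalents, then run the result. A real pre-processor would tokenize the
# source instead of doing plain string replacement, so that keywords inside
# string literals are left alone.

KEYWORDS = {
    "jusqu_a": "while",   # assumed mapping for the loop keyword
    "écris": "print",     # assumed mapping for the output statement
}

def translate(source):
    for native, actual in KEYWORDS.items():
        source = source.replace(native, actual)
    return source

programme = """
âge = 1
jusqu_a âge < 12:
    écris("quand je serai grand, je serai un grand homme!")
    âge += 1
"""

exec(translate(programme))   # prints the line eleven times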

    • Junaid teaches the “intro to computing” course at Namal using a set of resources called Computer Science Unplugged. The idea is to introduce learners to “thinking like a machine” – a process which is non-intuitive (especially serial processing) – before even getting into programming (though they do go there). To quote at some length from their site:

The primary goal of the Unplugged project is to promote Computer Science (and computing in general) to young people as an interesting, engaging, and intellectually stimulating discipline. We want to capture people’s imagination and address common misconceptions about what it means to be a computer scientist. We want to convey fundamentals that do not depend on particular software or systems, ideas that will still be fresh in 10 years. We want to reach kids in elementary schools and provide supplementary material for university courses. We want to tread where high-tech educational solutions are infeasible; to cross the divide between the information-rich and information-poor, between industrialized countries and the developing world.
There are many worthy projects for promoting computer science. The main principles that distinguish the Unplugged activities are:

1 No Computers Required
2 Real Computer Science
3 Learning by doing
4 Fun
5 No specialised equipment
6 Variations encouraged
7 For everyone
8 Co-operative
9 Stand-alone Activities
10 Resilient

But how is that relevant to our work? I’m not entirely sure, but as we’re building a repository of ideas, I thought these belong together in terms of making information-processing skills accessible to a larger number of people.

NakedPunch visual navigation

This process was started some time ago, hit some snags, and is now back on track. The idea was to categorize the articles and create a visual navigation page.

This Python script scrapes the data off the NakedPunch site and writes ‘%’-separated records to the terminal, with URL, Title, Author and Blurb columns. Most of the work is done by the BeautifulSoup library.

#!/usr/bin/python
# Python 2 script; uses the older BeautifulSoup 3 library and requests.

import requests
from BeautifulSoup import BeautifulSoup

prefix = 'http://www.nakedpunch.com/site/archives'

# Header row for the '%'-separated output.
print 'url%title%author%blurb'

for page in range(1, 20):   # archive pages 1 through 19
    url = prefix + "?page=" + str(page)

    response = requests.get(url)
    html = response.content
    soup = BeautifulSoup(html)
    # Each article listing sits in a <div class="article-summary">.
    alldivs = soup.findAll('div', 'article-summary')

    for div in alldivs:
        url = div.h4.a['href'].encode('utf8').strip()
        title = div.h4.text.encode('utf8').strip()
        author = div.div.text[3:].encode('utf8').strip()  # remove leading 'by '
        blurb = '"' + div.p.text.encode('utf8').strip() + '"'
        print "%".join([url, title, author, blurb])

Each entry could then be tagged…