I attended a talk titled “Confessions of a Nerd Herder” given by someone who had been a Program Manager in Microsoft Research for twenty years or so. The talk itself was somewhat disappointing – he painted too rosy a picture of MSR, making it out to be a utopia where brains walk unbridled; where only the brightest of the bright get hired and, once there, go about changing the world and saving puppies.

The most interesting thing I took away from the talk was the question: “How would you design a computer for people who can’t read?” The idea is not novel – it sounds like the kind of design question you would get in an interview. But coming after the Hamming talk on great ideas, I translated it into a challenge: how would you design a computer for the developing world, for Africa in particular? Sure enough, many people in Africa can and do use computers in their present incarnation. However, the overwhelming pool of potential users is locked out by poor infrastructure (power and communication) and by the fact that barely any software or content is currently available in African languages. Working within these constraints makes for an interesting challenge.

To break down the software–people communication barrier, you obviously need the computer system to be accessible in the local language. This of course brings up the cost of localizing program content – rural Africa is unlikely to be a money-making venture for software makers, so they may not be willing to pay for human translators and the associated workflows (not for all languages, at least). Machine translation would be the way out, but the general translation problem remains unsolved so far as I know. However, if we reduce the scope to translation between African languages (I am thinking of Bantu languages here), we could exploit the similarities in grammar to build simple, efficient systems. I know for sure that given content in Swahili we can generate corresponding versions in Kikuyu, Kamba and Embu. So what we would need is to convince software companies to translate content into one Bantu language, a position many are already in favor of if we take the isiZulu, Xhosa and Swahili versions of software already produced as an indication.
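To make the idea concrete, here is a minimal sketch of the kind of rule-based transfer system this suggests: because Bantu languages share a noun-class prefix system, a translator can swap class prefixes and stems separately. The prefix and stem mappings below are illustrative placeholders, not verified Swahili–Kikuyu data.

```python
# Illustrative stem correspondences between a source and target language.
# These entries are placeholders, not verified translations.
STEM_MAP = {
    "tabu": "tabo",
}

# Illustrative noun-class prefix correspondences (e.g. singular/plural
# class markers). Again, placeholder data.
PREFIX_MAP = {
    "ki": "gi",
    "vi": "i",
}

def translate_word(word):
    """Translate one word by swapping its class prefix and its stem."""
    for src_prefix, tgt_prefix in PREFIX_MAP.items():
        if word.startswith(src_prefix):
            stem = word[len(src_prefix):]
            if stem in STEM_MAP:
                return tgt_prefix + STEM_MAP[stem]
    return word  # unknown words pass through unchanged

def translate(sentence):
    """Word-by-word transfer translation of a whitespace-split sentence."""
    return " ".join(translate_word(w) for w in sentence.split())
```

With the placeholder tables above, `translate("kitabu vitabu")` yields `"gitabo itabo"` – the singular and plural prefixes are mapped independently of the shared stem, which is exactly the regularity that makes translation between closely related Bantu languages tractable.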

With computers there is also the learning curve in figuring out input and output. I remember my own experience when I first started using computers (there were so many keys on the keyboard and they seemed randomly arranged). Speech recognition would be the ideal input system (backed by text-to-speech for output), but most systems are not robust to accents, a primary requirement in this domain. Computationally, accents are just noise, so we can get around them the same way we deal with other noise – throw more computation at it, use fuzzy matching, or map single inputs to sets of possible outputs. The computing power requirement could be met by providing processing grids where users can send speech-recognition jobs on the fly. This could be a service sold just like cellphone minutes (and, again like cellphone service, it could be delivered wirelessly).
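The fuzzy-matching idea can be sketched in a few lines: snap a possibly accent-distorted transcription onto the closest entry in a known command vocabulary by string similarity. The vocabulary below is a made-up example; a real recognizer would work on acoustic features, not final text, but the principle is the same.

```python
import difflib

# Tiny illustrative command vocabulary to match recognizer output against.
VOCABULARY = ["open", "close", "save", "delete", "print"]

def fuzzy_match(heard, vocabulary=VOCABULARY, cutoff=0.6):
    """Map a possibly distorted transcription to the closest known
    command, or return None if nothing is similar enough."""
    matches = difflib.get_close_matches(heard, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

So `fuzzy_match("opin")` recovers `"open"` despite the distortion, while an unrecognizable input like `"xyz"` returns `None` rather than a bad guess. Raising `n` in `get_close_matches` gives the other strategy mentioned above – mapping a single input to a set of candidate outputs for the user to confirm.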

The grid approach would also allow software to be provided as a service. The cost of laptops/terminals would be reduced, since individual terminals need little processing power, and the maintenance costs associated with running computers (viruses, patches, upgrades, etc.) would practically disappear for the end user. A computer would then be like a radio or a TV – you don’t have to fix it unless the electronics are broken.
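The thin-client model above can be sketched as a job queue: the terminal submits a job and polls for the result, and all heavy computation happens on the grid side. Here the “grid” is just a background thread standing in for remote servers, and the upper-casing step is a placeholder for real work such as speech recognition.

```python
import queue
import threading

class GridService:
    """Stand-in for a remote processing grid: terminals submit jobs
    and later fetch results, doing no heavy computation themselves."""

    def __init__(self):
        self.jobs = queue.Queue()
        self.results = {}
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def _run(self):
        while True:
            job_id, payload = self.jobs.get()
            # Heavy processing (e.g. speech recognition) would happen
            # here; upper-casing is a placeholder for that work.
            self.results[job_id] = payload.upper()
            self.jobs.task_done()

    def submit(self, job_id, payload):
        """Called by a thin terminal: enqueue a job for the grid."""
        self.jobs.put((job_id, payload))

    def result(self, job_id):
        """Block until queued jobs finish, then return the result."""
        self.jobs.join()
        return self.results[job_id]
```

A terminal session then reduces to `grid.submit("job1", "habari")` followed by `grid.result("job1")` – the terminal only needs enough power to capture input and display output, which is what keeps its cost and maintenance burden low.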

