The following is a list of some of the projects I’ve worked on that I’m allowed to publicly talk about. The list comprises projects in fields such as iOS development, medical visualization, custom multi-touch installations and open source.
For a client in California, I created an iOS app for medical classrooms. It performs real-time, three-dimensional rendering of medical volume data (CT, MRI) and supports extensive interaction between multiple networked instances of the app (students and teachers are each equipped with iPads).
Copyright 2012 Loma Linda University. Patent pending. Soon to be on the App Store.
I’m working with the incredibly talented people at Plausible Labs Cooperative, Inc. in New York City and San Francisco. For the most part, I’ve been adding features to their top-grossing (in the US) iPhone and iPad Comics app, which they develop for comiXology.
Zanther, Inc. develops Taposé, an initially crowd-funded iPad application. The app was originally developed by a Russian company, and its code base grew to a massive size over time. I was recently contracted to help fix some of the lower-level issues and to make the app more usable on the original iPad.
Python on iOS
As part of a research project, I ported the Python programming language¹ to iOS. The port allows Python to be used in any Objective-C application and does not require a jailbreak. Others have used this approach to great effect in their apps (which are available on the App Store).
I’ve done some work in the field of medical visualization. The main challenge in this field is to make vast sets of data visually comprehensible.
One such attempt is shown below: an OpenGL-based program I wrote to help neurosurgeons better understand the nerve fibers in the human brain. The input to the program is a diffusion MRI scan of a person’s brain. You then select a region of the brain in which to display the fibers, e.g. when planning the surgical removal of a tumor, where you don’t want to damage nerve fibers that become invisible once you ‘cut a patient open’.
I have co-authored two papers on this and related visualization techniques:
- “Advanced Line Visualization For HARDI”, Bildverarbeitung für die Medizin (BVM) (2012)
- “An Exploration and Planning Tool for Neurosurgical Interventions”, IEEE Visualization Contest 2010, Honorable Mention
Multi-Touch & Human Computer Interaction
Medical Demo Installation
Another project I created has a medical component, too, but centers on user interaction. It displays medical data on custom multi-touch installations, which can be used in medical meetings, or by doctors to explain and demonstrate findings to patients. The project was submitted to a CompVis competition and won first prize out of 30 entries.
Markerless Object Recognition
The following is the result of a relatively short (five-week) research and development project for a company in Paris. The goal was to recognize custom objects without the use of active or passive markers, based solely on an object’s contour.
Hand Recognition & Tracking
As a fun experiment, I implemented a hand recognition and tracking algorithm. The input to the program is a video stream showing my hand; the output is the positions of the fingertips and the center of the palm. The following are two frames of such a video (the algorithm operates on the stream in real time).
I’ve also founded or contributed to a number of open source projects: frameworks for custom multi-touch installations, such as Movid, Kivy, and PyMT; web software, such as Zine and MoinMoin; and well-known projects such as Ubuntu.
I wrote several articles for the (commercial) UNIX print magazine freeX as an independent author.
I was accepted into and successfully completed Google’s annual Summer of Code program four consecutive times, most recently for the Natural User Interface Group.
¹ More specifically, version 2.7 of the CPython interpreter. ↩