Cor Leonis 5.0 released

After a looooong break, I picked up development on my astronomy app, Cor Leonis, again. The latest version, 5.0, is available now for iOS devices in the Apple App Store. The one big feature that justifies the major version jump: the moon! While I was working on the moon info panel, I also beefed up the information about the planets in our solar system quite a bit. Hope you like it!

New demo: moon shader

When I was thinking about how to put the moon into the 3D view of my astronomy app, I figured it would be a waste to actually display a textured sphere. After all, we always see the same side of the moon. All that’s needed is a textured quad and a shader that emulates a lit sphere. In the end, the quad was reduced to a single vertex – a point sprite. Try it out in the demos section: WebGL moon shader
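
To illustrate the idea, here is a rough CPU-side sketch in Python of what such a shader does per pixel: reconstruct a sphere normal from the 2D position inside the sprite and apply simple diffuse lighting. The actual demo does this per fragment in GLSL and also samples the moon texture; the function name, resolution, and light direction below are made-up example values.

import numpy as np

def shade_moon_sprite(resolution=256, light_dir=(0.5, 0.3, 0.8)):
    """Fake a lit sphere on a flat sprite (illustrative stand-in for the GLSL shader)."""
    light = np.array(light_dir, dtype=float)
    light /= np.linalg.norm(light)

    # 2D coordinates inside the sprite, in [-1, 1]
    u = np.linspace(-1.0, 1.0, resolution)
    x, y = np.meshgrid(u, -u)
    r2 = x * x + y * y
    inside = r2 <= 1.0                      # pixels that fall on the moon's disc

    # reconstruct the sphere normal from the 2D position (z from the sphere equation)
    z = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
    normal = np.dstack((x, y, z))

    # simple Lambertian shading; a real shader would also apply the moon texture here
    diffuse = np.clip(normal @ light, 0.0, 1.0)
    return np.where(inside, diffuse, 0.0)   # black outside the disc

image = shade_moon_sprite()                 # grayscale intensity image of the lit "sphere"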

Animated IFS

This shows a combination of an IFS (iterated function system) and a particle system. In an IFS, points are attracted towards a fractal shape by iterating the positions over a set of affine transformations. The original algorithm starts with just one random point and plots its current position over a series of randomized transformations. In this example, I instead start out with a number of randomly distributed particles, which are then given target positions by running the IFS transformations.

When the general shape has settled, one additional iteration is performed every few seconds. It’s instructive to see how particles are warped from one place on the fractal shape to a completely different position.
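
In code terms, each iteration picks one random affine map per particle, and that map's output becomes the particle's new target position. Here is a rough Python sketch of the idea; the three maps below (the classic Sierpinski triangle) are just a stand-in for the transformations actually used in the demo:

import random

# example affine maps (Sierpinski triangle); the demo uses its own set
TRANSFORMS = [
    lambda x, y: (0.5 * x,        0.5 * y),
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def ifs_iterate(particles):
    """One IFS step: each particle gets a new target under a randomly chosen map."""
    return [random.choice(TRANSFORMS)(x, y) for (x, y) in particles]

# start with randomly distributed particles instead of a single point...
particles = [(random.random(), random.random()) for _ in range(5000)]

# ...and iterate until the overall shape settles on the fractal attractor
for _ in range(20):
    particles = ifs_iterate(particles)

# the demo then animates each particle smoothly toward its new target
# whenever an additional iteration is performed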

The demo is implemented in Processing using processing.js. With 5000 particles moving around, performance depends on your browser and hardware. Unfortunately, PShape support in processing.js is not yet complete; full support would probably help speed this up.

Run demo in a popup

Source

Processing(JS)

Recently, I dusted off my copy of “The Computational Beauty of Nature” and started rediscovering what is still a wonderful book. The chapter about IFS fractals inspired me to do some experimenting with animated fractal shapes. An opportune moment to learn more about Processing! After playing with it for a few hours, I have to say, this is a wonderful programming environment for this kind of visual experiment: virtually no boilerplate code, no cumbersome project setup, just start hacking away on your ideas. Based on Java, it is not a toy language either, so all the usual data structures and OOP constructs are readily available.

An additional treat: with processing.js, it is quite easy to run a Processing application in a browser. See the IFS animation demo in the new demo section on the left (hope to add more to that category soon 🙂 ). Support is not complete, though: I had to rewrite my demo to some extent, because processing.js doesn’t support the PShape class very well yet, and PShape is quite essential for good performance in a particle system demo… so I reduced the visuals a bit. Still, much easier than having to translate everything to JavaScript myself!

Smoke Simulation


As a first step towards a full-featured fluid simulator, I am currently working on smoke simulation, and I now have something running for the simplest case of smoke in an open volume, i.e., without any solid objects or boundaries. The attached video shows 10 seconds of simulation with a small heat/velocity source at the bottom left. Looks neat already!

The implementation follows the approach laid out in the SIGGRAPH 2007 Course Notes on Fluid Animation. In brief, this is a semi-Lagrangian advection scheme running on a 128^3 grid. My current single-threaded CPU implementation is ridiculously slow (about 4 frames per minute on my i7 laptop), so I am going to investigate parallelization, probably with OpenCL.
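
The core step is compact enough to sketch. Below is a hedged 2D numpy version of semi-Lagrangian advection of a scalar field on a collocated grid; the actual simulator works in 3D and, following the course notes, also needs pressure projection, buoyancy forces, and boundary handling. Names and the interpolation choice are mine:

import numpy as np

def advect(field, vel0, vel1, dt, dx):
    """Semi-Lagrangian advection of a 2D scalar field on a uniform grid.

    vel0 / vel1 are the velocity components along the two array axes.
    For every cell, trace a virtual particle backwards through the
    velocity field and sample the old field at the departure point.
    """
    n0, n1 = field.shape
    i0, i1 = np.meshgrid(np.arange(n0), np.arange(n1), indexing='ij')

    # backtrace: where did the material now in this cell come from?
    p0 = np.clip(i0 - dt * vel0 / dx, 0, n0 - 1.001)
    p1 = np.clip(i1 - dt * vel1 / dx, 0, n1 - 1.001)

    # bilinear interpolation of the old field at the departure point
    a0 = p0.astype(int); a1 = p1.astype(int)
    f0 = p0 - a0;        f1 = p1 - a1
    return ((1 - f0) * (1 - f1) * field[a0,     a1    ] +
            f0       * (1 - f1) * field[a0 + 1, a1    ] +
            (1 - f0) * f1       * field[a0,     a1 + 1] +
            f0       * f1       * field[a0 + 1, a1 + 1])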

Only the ray marching volume shader utilizes the GPU so far.
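
For context, volume ray marching boils down to stepping along each view ray through the density grid and accumulating opacity. A minimal CPU-side sketch of that loop, purely for illustration (the real shader runs per pixel on the GPU; step size, absorption model, and names are made up):

import numpy as np

def march_ray(density, origin, direction, step=0.5, max_steps=512):
    """Accumulate opacity front-to-back along one ray through a 3D density grid."""
    pos = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)

    transmittance = 1.0
    for _ in range(max_steps):
        i, j, k = np.floor(pos).astype(int)  # nearest-cell lookup for brevity
        if not (0 <= i < density.shape[0] and
                0 <= j < density.shape[1] and
                0 <= k < density.shape[2]):
            break                            # left the volume
        # Beer-Lambert absorption over one step
        transmittance *= np.exp(-density[i, j, k] * step)
        if transmittance < 1e-3:             # early ray termination
            break
        pos += d * step
    return 1.0 - transmittance               # accumulated opacity for this pixel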

I’ve had problems playing the video in Firefox – you might want to try another browser!

Thesis work in medical physics: volumes and surfaces

A long-running side project has recently come to fruition: my master’s thesis in the realm of medical physics has been accepted! You can find the full (German) text under the prosaic title “Vergleich von Volumenmodellen” (“Comparison of Volume Models”) in the publications section on this site. No surprise, it deals with 3D graphics – I used the chance to have a peek at a few interesting problems around volume representations.


The numerous representations of three-dimensional objects fall into two broad categories: surface models and volumetric models. Depending on the application, one typically chooses one or the other. Triangle meshes, for instance, allow for extremely fast visualization of surfaces on modern PC hardware. In medical imaging and scientific visualization, though, volumetric data often prevails.

Since many algorithms take either surface or volume data as input, it is often necessary to convert between representations in order to perform the desired operations on a data set. Mathematically, the task corresponds to finding an implicit form for a parametric surface description, or vice versa.

In this thesis, I discuss the connections between triangle meshes and different volumetric shape representations: binary voxel grids, discretized distance fields, and tetrahedral lattices. Algorithms for volume visualization, for conversion between representations, and for making cuts and selections are demonstrated. In the practical implementation, the focus was on using current GPU-based techniques to speed up not only the visualization, but also the geometric algorithms.
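
To make the relationship between these representations a little more concrete, here is a tiny, purely illustrative Python snippet (not code from the thesis): sampling an implicit shape on a regular grid gives a discretized distance field, thresholding that field gives a binary voxel grid, and a triangle mesh of the boundary could be extracted from the same samples again, e.g. with marching cubes.

import numpy as np

# illustrative only: a sphere as the implicit shape
N = 64
c = np.linspace(-1.0, 1.0, N)
x, y, z = np.meshgrid(c, c, c, indexing='ij')

# discretized signed distance field (negative inside, positive outside)
distance_field = np.sqrt(x**2 + y**2 + z**2) - 0.5

# binary voxel grid: inside/outside classification of the same shape
voxel_grid = distance_field <= 0.0

# a triangle mesh of the zero level set could now be extracted from
# distance_field with a marching cubes implementation, closing the loop
# back to a surface representation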

Qt and OpenGL programming in Python

Qt is a well-established framework for developing GUI applications in C++, and it has good support for OpenGL. It is relatively cumbersome, though, to set up a project for just a simple, experimental application. Moreover, even though Qt and OpenGL are portable, carrying over your project from, say, your Linux box to a Windows PC is not completely seamless, due to different compiler configurations, etc.

Being a friend of Python, I was looking for a way to make life simpler in that respect. Can you use Qt and OpenGL in Python? You can!


There are three packages you want to have a look at:

PyQt4 – the Python bindings for the Qt framework, including the QtOpenGL module
PyOpenGL – Python bindings for OpenGL and GLU
NumPy – numerical arrays, used here to hold the vertex data

The Windows binary installer for PyQt4 already contains a copy of the Qt libraries (currently 4.6), so you don’t have to install Qt separately.

Once all this is set up, writing the application itself becomes fairly easy. Here’s an example to get you started, the famous spinning color cube — complete with application menu and status text:

import sys
from PyQt4 import QtCore
from PyQt4 import QtGui
from PyQt4 import QtOpenGL
from OpenGL import GLU
from OpenGL.GL import *
from numpy import array

class GLWidget(QtOpenGL.QGLWidget):
    def __init__(self, parent=None):
        self.parent = parent
        QtOpenGL.QGLWidget.__init__(self, parent)
        self.yRotDeg = 0.0

    def initializeGL(self):
        self.qglClearColor(QtGui.QColor(0, 0, 150))
        self.initGeometry()

        glEnable(GL_DEPTH_TEST)

    def resizeGL(self, width, height):
        if height == 0: height = 1

        glViewport(0, 0, width, height)
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        aspect = width / float(height)

        GLU.gluPerspective(45.0, aspect, 1.0, 100.0)
        glMatrixMode(GL_MODELVIEW)

    def paintGL(self):
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        # place and spin the unit cube in front of the camera
        glLoadIdentity()
        glTranslate(0.0, 0.0, -50.0)
        glScale(20.0, 20.0, 20.0)
        glRotate(self.yRotDeg, 0.2, 1.0, 0.3)
        glTranslate(-0.5, -0.5, -0.5)

        # draw the cube from client-side vertex and color arrays
        glEnableClientState(GL_VERTEX_ARRAY)
        glEnableClientState(GL_COLOR_ARRAY)
        glVertexPointerf(self.cubeVtxArray)
        glColorPointerf(self.cubeClrArray)
        glDrawElementsui(GL_QUADS, self.cubeIdxArray)

    def initGeometry(self):
        # the eight corners of a unit cube...
        self.cubeVtxArray = array(
            [[0.0, 0.0, 0.0],
             [1.0, 0.0, 0.0],
             [1.0, 1.0, 0.0],
             [0.0, 1.0, 0.0],
             [0.0, 0.0, 1.0],
             [1.0, 0.0, 1.0],
             [1.0, 1.0, 1.0],
             [0.0, 1.0, 1.0]])
        # ...indexed as six quad faces...
        self.cubeIdxArray = [
            0, 1, 2, 3,
            3, 2, 6, 7,
            1, 0, 4, 5,
            2, 1, 5, 6,
            0, 3, 7, 4,
            7, 6, 5, 4]
        # ...with one color per vertex
        self.cubeClrArray = [
            [0.0, 0.0, 0.0],
            [1.0, 0.0, 0.0],
            [1.0, 1.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0],
            [1.0, 0.0, 1.0],
            [1.0, 1.0, 1.0],
            [0.0, 1.0, 1.0]]

    def spin(self):
        # advance the rotation, show it in the status bar, and repaint
        self.yRotDeg = (self.yRotDeg + 1) % 360.0
        self.parent.statusBar().showMessage('rotation %f' % self.yRotDeg)
        self.updateGL()


class MainWindow(QtGui.QMainWindow):

    def __init__(self):
        QtGui.QMainWindow.__init__(self)

        self.resize(300, 300)
        self.setWindowTitle('GL Cube Test')

        self.initActions()
        self.initMenus()

        glWidget = GLWidget(self)
        self.setCentralWidget(glWidget)

        # drive the animation with a 20 ms timer
        timer = QtCore.QTimer(self)
        timer.setInterval(20)
        QtCore.QObject.connect(timer, QtCore.SIGNAL('timeout()'), glWidget.spin)
        timer.start()

    def initActions(self):
        self.exitAction = QtGui.QAction('Quit', self)
        self.exitAction.setShortcut('Ctrl+Q')
        self.exitAction.setStatusTip('Exit application')
        self.connect(self.exitAction, QtCore.SIGNAL('triggered()'), self.close)

    def initMenus(self):
        menuBar = self.menuBar()
        fileMenu = menuBar.addMenu('&File')
        fileMenu.addAction(self.exitAction)

    def close(self):
        QtGui.qApp.quit()


app = QtGui.QApplication(sys.argv)

win = MainWindow()
win.show()

sys.exit(app.exec_())

Ok, so it’s still 120 lines for a spinning cube, but the overhead is considerably smaller. And you can run this program as is on any platform. To me, it doubles the fun 🙂

Ocean water simulation

At Scanline VFX, we were doing a whole lot of CG water for the movie “Megalodon – Hai-Alarm auf Mallorca”. I leave it to you to rate the movie, but, hey: the project got me a credit on The Internet Movie Database. My part in this was the R&D on ocean water surface simulation.

I ended up doing a variation of the FFT-based approach put forth by Jerry Tessendorf, and combined it with an implicit model representation to allow additional modification of the ocean waves by blended shapes. You can watch a short scene from the movie on YouTube illustrating the method. Around 0:38, you can nicely see how the simulated open water surface combines with the bulge of the shark-like shape under water.
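
For the curious, the basic FFT idea is compact enough to sketch in a few lines of numpy. This is only an illustration of the Tessendorf-style scheme (a Phillips spectrum combined with the deep-water dispersion relation), not the production code; grid size, wind parameters, and the amplitude constant below are made-up example values:

import numpy as np

N = 128          # grid resolution
size = 100.0     # patch size in meters
g = 9.81
wind = np.array([1.0, 0.0])   # wind direction (unit vector)
V = 20.0                      # wind speed in m/s
A = 3e-7                      # overall amplitude constant (tuning parameter)

# wave vector for each FFT bin
k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=size / N)
kx, ky = np.meshgrid(k1d, k1d)
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1e-8                # avoid division by zero at the DC bin

# Phillips spectrum: how wave energy is distributed over wave vectors
L = V * V / g
kdotw = (kx * wind[0] + ky * wind[1]) / k
P = A * np.exp(-1.0 / (k * L)**2) / k**4 * kdotw**2

# random initial spectrum h0(k)
rng = np.random.default_rng(0)
xi = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
h0 = xi * np.sqrt(P / 2.0)

def heightfield(t):
    """Ocean surface heights at time t via an inverse 2D FFT."""
    omega = np.sqrt(g * k)                  # deep-water dispersion relation
    h0_neg = np.roll(np.flip(h0), shift=(1, 1), axis=(0, 1))   # h0 sampled at -k
    hk = h0 * np.exp(1j * omega * t) + np.conj(h0_neg) * np.exp(-1j * omega * t)
    return np.real(np.fft.ifft2(hk)) * N * N

heights = heightfield(0.0)                  # (N, N) array of surface heights

Animating t yields the rolling open-water base surface, which the blended implicit shapes then modify.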

Some shots also used a real CFD simulation engine, which was separately developed at Scanline.

Aside from the interesting work on the algorithms, it was very enlightening to turn this into a 3ds Max plugin that could be used by the artists in production. And the renderings they produced were nothing short of amazing.

Facial Animation and Modeling

Facial animation is, despite all advances on the technical front, still a challenge and a rewarding research subject. This might be the one-line summary of my PhD thesis, the outcome of my time at the Max-Planck-Institut für Informatik in Saarbrücken. In a few more words, I developed an anatomy-based modeling approach in conjunction with a physically-based animation system.

The title of my thesis is “A Head Model with Anatomical Structure for Facial Modeling and Animation”. You can find the full text in the publications section. Please also visit the MPII’s facial animation and modeling pages to learn more about this project.

Media recognition

Following is a loose collection of references to appearances of our (award-winning!) facial animation project in the media.

Beware: the following links lead to German-language texts:

Front-page story in the Saarbrücker Zeitung of 25 April 2002 – paid archive access
SaarLB-Wissenschaftspreis für System zur Modellierung und Animation von Gesichtern
Software gibt Mordopfern ihr Gesicht zurück (Scienceticker.info)
Gesichter aus dem Computer (3sat, nano segment)
Wiederbelebung im Cyberspace (Focus Magazin, see also issue 33/2003, p. 86)
Neue Software gibt Toten “lebendige Gesichter” (ORF ON Science)
Das lachende Phantombild (Netzeitung.de Wissenschaft)

These articles are in English:

Skulls gain virtual faces (TRNmag.com)
Animation lets murder victims have final say (NewScientist.com)