VR user interface experiments

I’m currently experimenting with the UI for my upcoming Gear VR stargazing application. Virtual reality user interfaces are really interesting, since they have to work so differently from standard 2D UIs. One possible approach is to have interactive elements as actual 3D objects in the scene. This can be fun! For my app, I am thinking about putting a “time machine” into the scene, which will let you move forward and backward in time for different views of the sky. Much cooler than a 2D number selection thingy. Nothing to show yet, but stay tuned!

How to select and activate anything in a VR scene is a science in itself. For starters, I recommend having a look at Unity’s VR Sample Scenes project. It includes a bunch of useful scripts for reticles, selection radials, VR input handling, etc. It looks pretty convoluted at first, but once you get your head around it, it offers some nice ideas on how to architect an application UI.

New iOS app: Lightbox Trace

I’m currently spending a lot of time drawing on my iPad Pro, and needed a way to transfer my digital sketches to drawing paper. Essentially, a lightbox with the ability to display an image. Since I still had my old first-generation iPad lying around, I developed a simple little app to put it to use again: Lightbox Trace.

  • Load an image from Photos or the clipboard
  • Scale, position, and rotate it as desired
  • Lock the screen – the app now ignores all touch events, so you can put a piece of paper on the display and trace the image
  • Display brightness is automatically increased to the maximum
  • You can also just show plain white, for tracing from one paper to another

I’ve found it to be quite useful – please try it out (it’s free!) and let me know if there is anything you’d like to see added.

Arduino Prototyping: It’s a clock!

Over the past few months, I’ve dug deeper into the Arduino platform. One ongoing project is a clock with a moon phase display (since I had already implemented the computations for my astronomy app, Cor Leonis). I started out with an LED matrix and 7-segment displays like this:

Tons of wire!

Over time, I decided to use two 8×8 LED matrices, switched to a smaller Arduino-compatible board (the Adafruit Pro Trinket), and ran it on batteries:

There’s also a button to switch between views now.

It’s far from done, but I find it amazing how much I’ve already learned from this relatively simple project… a refresher on basic electronics (resistors, capacitors, etc.) and soldering, manual LED matrix multiplexing, more about LEDs than I ever wanted to know, RTC chips, LED display driver chips, shift registers, step-up/down voltage converters, debouncing hardware buttons, I2C bus wiring and communication, calculating power consumption and battery lifetime, and so on and so forth. Next up: sensors. I would like to switch views just by waving my hand (and see how robust that is), instead of having to walk over and press a button.
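Incidentally, the moon phase computation mentioned above doesn’t require much. Here is a minimal sketch of the standard approximation (not the actual code from Cor Leonis): divide the days elapsed since a known new moon by the mean length of a lunation.

import datetime

SYNODIC_MONTH = 29.530588853  # mean length of a lunation, in days
NEW_MOON_REF = datetime.datetime(2000, 1, 6, 18, 14)  # a known new moon (UTC)

def moon_phase(when):
    """Phase as a fraction in [0, 1): 0 = new moon, 0.5 = full moon."""
    days = (when - NEW_MOON_REF).total_seconds() / 86400.0
    return (days / SYNODIC_MONTH) % 1.0

print("current phase: %.2f" % moon_phase(datetime.datetime.utcnow()))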

Cor Leonis 5.0 released

After a looooong break, I picked up development on my astronomy app, Cor Leonis, again. Version 5.0 is now available for iOS devices in the Apple App Store. The one big feature that justifies the major version jump: the moon! While I was working on the moon info panel, I also considerably beefed up the information about the planets in our solar system. Hope you like it!

New demo: moon shader

When I was thinking about how to put the moon into the 3D view of my astronomy app, I figured it would be a waste to render an actual textured sphere. After all, we always see the same side of the moon. All that is needed is a textured quad and a shader that emulates a lit sphere. In the end, the quad was reduced to a single vertex – a point sprite. Try it out in the demos section: WebGL moon shader
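For reference, the idea can be sketched in a couple of formulas (my notation, not the shader’s actual variable names). Interpret the point sprite’s texture coordinates as a position $(u, v) \in [-1, 1]^2$ on the visible disc, discard fragments with $u^2 + v^2 > 1$ to get the circular outline, reconstruct a sphere normal from the remaining coordinates, and apply simple Lambert shading:

$$\mathbf{n} = \left(u,\ v,\ \sqrt{1 - u^2 - v^2}\right), \qquad I = \max(0,\ \mathbf{n} \cdot \mathbf{l})$$

where $\mathbf{l}$ is the direction towards the sun. Moving $\mathbf{l}$ over time automatically produces the correct moon phase.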

Cut-out shapes and image masking in Processing(JS)

In Processing, it is not exactly easy to construct complex 2D geometry by subtracting shapes from each other, i.e., creating cut-outs. You can resort to vertex contours, but if you just want to punch holes into a square, what can you do? Especially if you want to run in the browser using processing.js, things get tricky.

A pixel-based approach is to render the complex shape into an off-screen image and draw the result transparently onto the display. The example discussed below creates a rather simple planet with a ring:

See the full listing at the end for the complete code.

blendMode()

In Processing 2, the new blendMode() function can be used to overwrite part of a shape with alpha-0 pixels:

PGraphics planetImage;

void setup()
{
  // blendMode() doesn't work properly with the default renderer, use P2D here
  size(500, 500, P2D);

  // create off-screen buffer with transparent background
  planetImage = createGraphics(200, 200, P2D);
  planetImage.beginDraw();
  planetImage.background(0, 0);
  
  planetImage.translate(100, 100);
  planetImage.rotate(0.5);
  
  // draw circle
  planetImage.noStroke();
  planetImage.fill(255, 220, 0);
  planetImage.ellipse(0, 0, 200, 100);

  // replace part of the circle with alpha 0 - "make a hole"
  planetImage.blendMode(REPLACE);
  planetImage.fill(255, 255, 255, 0);
  planetImage.noStroke();
  planetImage.ellipse(0, -5, 160, 70);

  // add "planet"
  planetImage.fill(255, 220, 0, 255);
  planetImage.ellipse(0, 0, 120, 120);

  planetImage.endDraw();
} 

The result can be rendered nicely on top of a background with image(planetImage, …). Unfortunately, blendMode() doesn’t work yet with processing.js, and the blend() function doesn’t allow you to overwrite the alpha channel the way blendMode(REPLACE) does.

Image masking

Another approach to the problem is to render the shape onto a black background and then create a mask image to cut out the shape transparently. In Processing 2, this can be done using the PImage.mask() method. But again, mask() is not yet supported in processing.js. Instead, we can create a separate mask image and render our shape in two passes using blend():

PImage maskImage;

void setup()
{
  ...
  
  // create a white on black mask image by thresholding the off-screen image
  maskImage = planetImage.get();
  maskImage.filter(THRESHOLD, 0.1);
}

void draw()
{
  ...
  
  // subtract white planet image - results in a black planet
  blend(maskImage, 0, 0, 200, 200, xPos, yPos, 200, 200, SUBTRACT);
  
  // add colored planet image on top
  blend(planetImage, 0, 0, 200, 200, xPos, yPos, 200, 200, ADD);
} 

If you wanted to use black in the solid part of the shape, the creation of the mask would obviously have to be modified. Another disadvantage of this approach is that you cannot transform the images rendered with blend(). For example, it is usually easy to draw a rotated image using rotate(angle) followed by image(…); with blend(), the rotation simply has no effect.

Phew! So, as promised, here is the complete working example:

PGraphics planetImage;
PImage bgImage;
PImage maskImage;

void setup()
{
  size(500, 500);

  // create off-screen planet shape with black background (rendered transparently later on)

  // create off-screen buffer with black background
  planetImage = createGraphics(200, 200);
  planetImage.beginDraw();
  planetImage.background(0);

  planetImage.translate(100, 100);
  planetImage.rotate(0.5);

  // draw circle
  planetImage.noStroke();
  planetImage.fill(255, 220, 0);
  planetImage.ellipse(0, 0, 200, 100);

  // cut out part of the circle
  planetImage.fill(0);
  planetImage.noStroke();
  planetImage.ellipse(0, -5, 160, 70);

  // add "planet"
  planetImage.fill(255, 220, 0);
  planetImage.ellipse(0, 0, 120, 120);

  planetImage.endDraw();

  // create a white on black mask image by thresholding the off-screen image
  maskImage = planetImage.get();
  maskImage.filter(THRESHOLD, 0.1);

  // draw a background pattern...
  background(0, 0, 120);
  fill(255);
  stroke(255, 255, 255, 50);
  strokeWeight(2);
  for (int i = 0; i < 100; ++i)
  {
    float size = random(2, 5);
    ellipse(random(width), random(height), size, size);
  }

  // ... and save it to an image, so we can re-render it easily
  bgImage = get();
}

void draw()
{
  // draw background pattern
  image(bgImage, 0, 0);

  int xPos = frameCount % (width + 200) - 200;
  int yPos = 100;

  // subtract white planet image - results in a black planet
  blend(maskImage, 0, 0, 200, 200, xPos, yPos, 200, 200, SUBTRACT);

  // add colored planet image on top
  blend(planetImage, 0, 0, 200, 200, xPos, yPos, 200, 200, ADD);
}

Processing(JS)

Recently, I dusted off my copy of “The Computational Beauty of Nature” and started rediscovering this still wonderful book. The chapter about IFS fractals inspired me to do some experimenting with animated fractal shapes. An opportune moment to learn more about Processing! After playing with it for a few hours, I have to say, this is a wonderful programming environment for this kind of visual experiment. Virtually no boilerplate code, no cumbersome project setup; just start hacking away at your ideas. Based on Java, it is not a toy language either, so all the usual data structures and OOP constructs are readily available.

An additional treat: with processing.js, it is quite easy to run a Processing application in a browser. See the IFS animation demo in the new demo section on the left (hope to add more to that category soon 🙂 ). Support is not complete, though: I had to rewrite my demo to some extent, because processing.js doesn’t yet support the PShape class very well, which is quite essential for good performance in a particle system demo… so I toned down the visuals a bit. Still, much easier than translating everything to JavaScript myself!

Bouncing Ball Physics

To revive my Blender skills, I’ve been tinkering with a simple bouncing ball animation. How do you keyframe this properly, without running a physics simulation? There are tons of tutorials on the web about basic bouncing ball demos, but few go into detail about what a physically plausible bouncing ball trajectory looks like. As it turns out, with an ideal bouncing ball, there are only a few basic ingredients:

  • The path is obviously a series of parabolas
  • With each bounce, a roughly constant fraction of the energy is lost. The exact value depends on the material of the ball – the magic term is “coefficient of restitution” (COR). The height of each parabolic arc is f * previous_height, where f is in the range (0,1).
  • Assuming no slowdown in the horizontal direction, the distance between touchdown points (and likewise the duration of each bounce) shrinks with the square root of f, since the time of flight of a parabolic arc is proportional to the square root of its height (see the numeric sketch after this list).
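Here is a minimal numeric sketch of this model (the drop height, horizontal speed, and f below are assumed values for illustration, not measurements):

import math

g = 9.81   # gravity, m/s^2
h = 2.0    # initial drop height, m (assumed)
f = 0.55   # fraction of height retained per bounce (assumed)
vx = 1.0   # constant horizontal speed, m/s (assumed)

# initial fall: time to drop from height h
t = math.sqrt(2 * h / g)
print("drop: touchdown at t=%.2fs, x=%.2fm" % (t, vx * t))

# each subsequent arc is f times as high and sqrt(f) times as long
for bounce in range(1, 6):
    h *= f
    t += 2 * math.sqrt(2 * h / g)  # full up-and-down arc
    print("bounce %d: apex %.3fm, touchdown at t=%.2fs, x=%.2fm"
          % (bounce, h, t, vx * t))

The touchdown times and apex heights map directly onto keyframes.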

So far, so good, but is this model realistic? I did a few experiments tracing bouncing ball trajectories from video.

First, a tennis ball (at 50 fps):

Doing rough calculations based on the pixel positions of the ball’s center, the behavior is close enough to the model, with a COR of around 0.55. Great.

Second, a very squishy rubber ball:

Surprise: the same calculations show that this ball keeps bouncing a bit higher than expected every time! The COR rises from 0.34 to 0.55 over four bounces. I even repeated the experiment, with similar results. Apparently, a non-constant COR is not unusual at low speeds, as mentioned e.g. in the Wikipedia article on the subject.
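For reference, f can be read straight off a traced trajectory as the ratio of successive apex heights. A small sketch (the pixel heights below are made up to illustrate the rising trend, not my actual measurements):

# apex heights in pixels, traced frame by frame (hypothetical values)
heights = [100.0, 34.0, 14.0, 6.5, 3.6]

# f is the ratio of successive apex heights
for h_prev, h_next in zip(heights, heights[1:]):
    print("f = %.2f" % (h_next / h_prev))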

Qt and OpenGL programming in Python

Qt is a well-established framework for developing GUI applications in C++, and it has good support for OpenGL. It is relatively cumbersome, though, to set up a project for just a simple, experimental application. Moreover, even though Qt and OpenGL are portable, carrying over your project from, say, your Linux box to a Windows PC is not completely seamless, due to different compiler configurations, etc.

Being a friend of Python, I was looking for a way to make life simpler in that respect. Can you use Qt and OpenGL from Python? You can!


There are three packages you want to have a look at:

  • PyQt4 – Python bindings for the Qt framework
  • PyOpenGL – Python bindings for OpenGL
  • NumPy – efficient numeric arrays (used for the vertex data below)
The Windows binary installer for PyQt4 already contains a copy of the Qt libraries (currently 4.6), so you don’t have to install Qt separately.

Once all this is set up, writing the application itself becomes fairly easy. Here’s an example to get you started, the famous spinning color cube — complete with application menu and status text:

import sys
from PyQt4 import QtCore
from PyQt4 import QtGui
from PyQt4 import QtOpenGL
from OpenGL import GLU
from OpenGL.GL import *
from numpy import array

class GLWidget(QtOpenGL.QGLWidget):
    def __init__(self, parent=None):
        self.parent = parent
        QtOpenGL.QGLWidget.__init__(self, parent)
        self.yRotDeg = 0.0

    def initializeGL(self):
        self.qglClearColor(QtGui.QColor(0, 0, 150))
        self.initGeometry()

        glEnable(GL_DEPTH_TEST)

    def resizeGL(self, width, height):
        if height == 0: height = 1

        glViewport(0, 0, width, height)
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        aspect = width / float(height)

        GLU.gluPerspective(45.0, aspect, 1.0, 100.0)
        glMatrixMode(GL_MODELVIEW)

    def paintGL(self):
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        # center the unit cube, scale it up, and apply the current rotation
        glLoadIdentity()
        glTranslate(0.0, 0.0, -50.0)
        glScale(20.0, 20.0, 20.0)
        glRotate(self.yRotDeg, 0.2, 1.0, 0.3)
        glTranslate(-0.5, -0.5, -0.5)

        # draw the cube from the arrays set up in initGeometry()
        glEnableClientState(GL_VERTEX_ARRAY)
        glEnableClientState(GL_COLOR_ARRAY)
        glVertexPointerf(self.cubeVtxArray)
        glColorPointerf(self.cubeClrArray)
        glDrawElementsui(GL_QUADS, self.cubeIdxArray)

    def initGeometry(self):
        # unit cube: 8 corner vertices, 6 quad faces, one color per vertex
        self.cubeVtxArray = array(
            [[0.0, 0.0, 0.0],
             [1.0, 0.0, 0.0],
             [1.0, 1.0, 0.0],
             [0.0, 1.0, 0.0],
             [0.0, 0.0, 1.0],
             [1.0, 0.0, 1.0],
             [1.0, 1.0, 1.0],
             [0.0, 1.0, 1.0]])
        self.cubeIdxArray = [
            0, 1, 2, 3,
            3, 2, 6, 7,
            1, 0, 4, 5,
            2, 1, 5, 6,
            0, 3, 7, 4,
            7, 6, 5, 4]
        self.cubeClrArray = [
            [0.0, 0.0, 0.0],
            [1.0, 0.0, 0.0],
            [1.0, 1.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0],
            [1.0, 0.0, 1.0],
            [1.0, 1.0, 1.0],
            [0.0, 1.0, 1.0]]

    def spin(self):
        self.yRotDeg = (self.yRotDeg + 1) % 360.0
        self.parent.statusBar().showMessage('rotation %f' % self.yRotDeg)
        self.updateGL()

class MainWindow(QtGui.QMainWindow):

    def __init__(self):
        QtGui.QMainWindow.__init__(self)

        self.resize(300, 300)
        self.setWindowTitle('GL Cube Test')

        self.initActions()
        self.initMenus()

        glWidget = GLWidget(self)
        self.setCentralWidget(glWidget)

        # redraw (and spin) the cube every 20 ms
        timer = QtCore.QTimer(self)
        timer.setInterval(20)
        QtCore.QObject.connect(timer, QtCore.SIGNAL('timeout()'), glWidget.spin)
        timer.start()

    def initActions(self):
        self.exitAction = QtGui.QAction('Quit', self)
        self.exitAction.setShortcut('Ctrl+Q')
        self.exitAction.setStatusTip('Exit application')
        self.connect(self.exitAction, QtCore.SIGNAL('triggered()'), self.close)

    def initMenus(self):
        menuBar = self.menuBar()
        fileMenu = menuBar.addMenu('&File')
        fileMenu.addAction(self.exitAction)

    def close(self):
        QtGui.qApp.quit()

app = QtGui.QApplication(sys.argv)

win = MainWindow()
win.show()

sys.exit(app.exec_())

Ok, so it’s still 120 lines for a spinning cube, but the overhead is considerably smaller. And you can run this program as is on any platform. To me, it doubles the fun 🙂