Pocket Observatory: Finalizing!

Yes, I have decided on a name for my upcoming Gear VR astronomy app: Tadaa – Pocket Observatory! It seems to be getting to a decent stage… it’s hard to stop adding features when new ideas pop up every five minutes, but all of that has to wait for future releases. Now it’s all about polishing and optimizing!

I found a really, really helpful guide to optimizing Gear VR apps on the Oculus developer blog: https://developer3.oculus.com/blog/squeezing-performance-out-of-your-unity-gear-vr-game/. This certainly saved me a few headaches.

Here’s a very first video impression of the app: https://youtu.be/G4tHM2v0NyY. I was wondering how I could record a video like this, but it turned out to be super easy: integrate the platform menu provided with the OVR toolkit, and the capture function becomes available in that menu by pressing the back button 🙂

The sun, the stars, the sky

Slowly but steadily, it’s coming together. Since my upcoming stargazing app also features a nice landscape and daylight, I am currently spending quite some time adjusting the lighting and the complexity of the scene. After all, this has to run at a steady 60 fps on the Galaxy S6 with the Gear VR headset.

Mind you, the scene is not complex by today’s standards, but hey, this is still a phone!

Still loads of stand-in textures and placeholder objects, but getting there…

‘Simple’ property animation in Unity

Recently, I’ve spent a few hours with Unity’s new animation system, Mecanim. I wasn’t working on any complex animations; I only wanted to implement fading of a few elements in my scene. This turned out to be a real disaster. Why?

  • For simply fading a property in and out, I needed two animation clips and four states, with a pretty sensitive transition setup (I wanted the fading to be interruptible, etc.)
  • The animations cannot go from the current property value to the target at a certain speed; they always animate from start to end.
  • Adding a delay would require either modifying the animation clip or an even more complex state setup.

So, after fiddling with this for a while, I came to the conclusion that this kind of animation is better done in code (which seems to be what people in the forums think, too). I’ve set up a simple class for fading a property value, added the logic to my MonoBehaviour script, and had it working the way I wanted within 20 minutes. Go figure.

For anyone interested, here is the code. It could probably be improved in various ways, but it’s doing what I want for my project. Feel free to play with it.

ValueFader.cs:

 public struct FadeParameters
 {
   public float initialValue;    // value the fader starts at
   public float maxValue;        // value reached when fully faded in
   public float minValue;        // value reached when fully faded out
   public float fadeInDuration;  // seconds for a full min -> max fade
   public float fadeOutDuration; // seconds for a full max -> min fade
   public float fadeInDelay;
     // applies only when going from Idle to FadingIn
     // (not e.g. FadingOut -> FadingIn)
 }
 
 public class ValueFader
 {
   private enum FadeState
   {
     FadingIn,
     FadingOut,
     Idle
   }
 
   private float _remainingDelay;
   private float _currentValue;
   private FadeParameters _fadeParameters;
   private FadeState _currentFadeState = FadeState.Idle;
   private float _fadeInIncrement;
   private float _fadeOutIncrement;
   public float currentValue { get { return _currentValue; } }
 
   public ValueFader(FadeParameters fp)
   {
     _fadeParameters = fp;
     // Durations are assumed to be > 0; a zero duration
     // would make the increment infinite.
     _fadeInIncrement =
       (_fadeParameters.maxValue - _fadeParameters.minValue)
       / _fadeParameters.fadeInDuration;
     _fadeOutIncrement =
       (_fadeParameters.maxValue - _fadeParameters.minValue)
       / _fadeParameters.fadeOutDuration;
     _currentValue = fp.initialValue;
     _remainingDelay = fp.fadeInDelay;
   }
 
   public void FadeIn()
   {
     if (_currentValue < _fadeParameters.maxValue)
     {
       _currentFadeState = FadeState.FadingIn;
     }
     else
     {
       SetIdle();
     }
   }
 
   public void FadeOut()
   {
     if (_currentValue > _fadeParameters.minValue)
     {
       _currentFadeState = FadeState.FadingOut;
     }
     else
     {
       SetIdle(); // might have been in FadingIn, during delay
     }
   }
 
   // Returns true if currentValue was changed by this method
   public bool Update(float deltaTime)
   {
     if (_currentFadeState == FadeState.Idle)
     {
       return false;
     }
 
     bool retVal = false;
 
     if (_currentFadeState == FadeState.FadingIn)
     {
       if (_remainingDelay <= 0.0f)
       {
         _currentValue += _fadeInIncrement * deltaTime;
         if (_currentValue >= _fadeParameters.maxValue)
         {
           _currentValue = _fadeParameters.maxValue;
           _currentFadeState = FadeState.Idle;
         }
 
         retVal = true;
       }
       else
       {
         _remainingDelay -= deltaTime;
       }
     }
     else //if (_currentFadeState == FadeState.FadingOut)
     {
       _currentValue -= _fadeOutIncrement * deltaTime;
       if (_currentValue <= _fadeParameters.minValue)
       {
         _currentValue = _fadeParameters.minValue;
         SetIdle();
       }
 
       retVal = true;
     }
 
     return retVal;
   }
 
   private void SetIdle()
   {
     _currentFadeState = FadeState.Idle;
     _remainingDelay = _fadeParameters.fadeInDelay;
   }
 }
 

And here is how you would use it from within a MonoBehaviour script.

Set up the animation parameters like this, and store the ValueFader in a class member:

FadeParameters imageFP;
imageFP.fadeInDuration = 1.0f;
imageFP.fadeOutDuration = 0.7f;
imageFP.initialValue = 0.0f;
imageFP.maxValue = 0.1f;
imageFP.minValue = 0.0f;
imageFP.fadeInDelay = 1.0f;

_imageFader = new ValueFader(imageFP);  

How you control fading values in or out depends entirely on your logic. You might want to do it based on some event:

void OnSomethingHappened()
{
  ...
  if (startFadingIn)
  {
    _imageFader.FadeIn();
  }
  else if (startFadingOut)
  {
    _imageFader.FadeOut();
  }
}

You get the idea… In Update(), apply the animated value to a material color, or whatever you want to animate:

void Update()
{
  if (_imageFader.Update(Time.deltaTime))
  {
    _imageMaterial.SetColor(
      "_Color", new Color(1.0f, 1.0f, 1.0f, _imageFader.currentValue));
  }
  ...
}

And that’s all 🙂

3D Text with depth test in Unity

The 3D Text asset built into Unity always renders on top, regardless of z depth. This helps to avoid z-fighting issues if your text is coplanar with other geometry in the scene, but it is often not what you want. I have labels in my scene which I DO want to be hidden by geometry in front of them. Unfortunately, there is no simple switch to turn on depth testing – even though users have been struggling with this for years, as far as I can tell.

The solution requires a modified font shader, and to be able to use that, you also need a font texture in your project. I had to gather all the bits and pieces of information from various places, so I thought it might be a good idea to list all the steps together. Here we go:

  1.  Download the Unity built-in shader archive from https://unity3d.com/get-unity/download/archive.
  2. Extract the .zip (at the time of writing: builtin_shaders-5.4.1f1.zip) into some arbitrary folder.
  3. Import DefaultResources/Font.shader into your project.
  4. Rename it, e.g. to ZFont.shader.
  5. Edit the shader source and change “ZTest Always” to “ZTest LEqual”. Also change the shader name, e.g. to “GUI/ZText Shader” (see the sketch after this list).
  6. Create a new material, and link it to your new shader.
  7. Import a font into the project. This is as easy as dragging a .ttf into the project window. I used OpenSansRegular.ttf from a sample project.
  8. Show your material in the inspector.
  9. Unfold the font entry in the project window. You will see a “Font Texture” entry. Drag that into the “Font Texture” area of the material displayed in the inspector.
  10. In the Text Mesh where you want to use the new shader, change the Mesh Renderer material to your new material. Change the Text Mesh font to your imported font.
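
For orientation, here is roughly what the relevant part of the edited shader looks like. This is a sketch from memory, not a verbatim copy – the exact contents of Font.shader vary between Unity versions:

 Shader "GUI/ZText Shader"
 {
   Properties
   {
     _MainTex ("Font Texture", 2D) = "white" {}
     _Color ("Text Color", Color) = (1,1,1,1)
   }
   SubShader
   {
     Tags { "Queue"="Transparent" }
     Lighting Off Cull Off ZWrite Off
     ZTest LEqual // was: ZTest Always
     Blend SrcAlpha OneMinusSrcAlpha
     // ... rest of the original Font.shader pass, unchanged ...
   }
 }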

And you’re done!

Custom assets with Unity

Today I’ve taken the time to clean up the architecture of my VR stars Unity app (I really have to find a name for this project!). One really cool Unity feature is the ability to create custom assets with scripts run in the editor, so you don’t have to build complex structures at runtime.

Why is this so fantastic? Well, usually I would read star positions, constellation data, etc. from data files at application startup, and then create the appropriate data structures and meshes. Strictly speaking, this isn’t necessary, since that data is essentially static. With Unity, I can instead read all the data at edit time, and create a complete “star sphere” with meshes for the constellations as a ready-to-use asset.
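
As an illustration, here is a minimal sketch of the pattern – a ScriptableObject plus an editor menu command. The names (StarSphereAsset, the menu path, the asset path) are made up for this example, not the actual project code:

using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
#endif

// A ScriptableObject that holds the precomputed star data.
public class StarSphereAsset : ScriptableObject
{
  public Vector3[] starPositions;
}

#if UNITY_EDITOR
public static class StarSphereBuilder
{
  // Run from the editor menu; creates the asset once, at edit time.
  [MenuItem("Tools/Build Star Sphere")]
  public static void Build()
  {
    var asset = ScriptableObject.CreateInstance<StarSphereAsset>();
    // ... read the star catalog files and fill in the data here ...
    AssetDatabase.CreateAsset(asset, "Assets/StarSphere.asset");
    AssetDatabase.SaveAssets();
  }
}
#endif

The created .asset file can then be referenced from scene objects like any other asset.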

As a bonus, this also allows me to investigate the created geometry and data structures in the editor (see the picture), without having to resort to runtime debug utilities. Very nice!

To the stars with Gear VR!

Getting more serious about VR… the HTC Vive is extremely cool, but quite an investment for starters. I decided to get myself a decent Android phone and a Gear VR, and started porting my astronomy app Cor Leonis to Unity.

Good progress so far! Stars, planets, moon, and sun are all in, and reading the device location and compass is a breeze in Unity, so I can already view the current night sky in VR 🙂
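
In case you haven’t used these APIs before, here is a minimal sketch of reading location and compass in Unity (error handling and permission checks omitted):

using UnityEngine;

public class DeviceLocationReader : MonoBehaviour
{
  void Start()
  {
    Input.location.Start();       // begin location updates
    Input.compass.enabled = true; // enable compass readings
  }

  void Update()
  {
    if (Input.location.status == LocationServiceStatus.Running)
    {
      float latitude = Input.location.lastData.latitude;
      float longitude = Input.location.lastData.longitude;
      float heading = Input.compass.trueHeading; // degrees from north
      // ... use these to orient the virtual sky ...
    }
  }
}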

Now on to make things pretty and creating a cool experience!

Lightbox Trace 2.0 with In-App feature

I recently published version 2.0 of my iPad app Lightbox Trace, introducing a filter panel as an optional In-App feature. This really comes in handy when your sketches don’t have enough contrast, or when you want to desaturate a colored image before tracing. Check it out!

Unfortunately, I ran into a problem with enabling the In-App purchase, so it just didn’t work for about a week. If you tried and got an error message about not being able to connect to the App Store, please try again. It really should be working now. If there are still any problems, please send me an e-mail. I appreciate your support!

VR User interface experiments

I’m currently experimenting with the UI for my upcoming Gear VR stargazing application. Virtual reality user interfaces are really interesting, since they have to work so differently from a standard 2D UI. One possible approach is to have interactive elements as actual 3D objects in the scene. This can be fun! For my app, I am thinking about putting a “time machine” into the scene, which will allow you to move forward and backward in time for different views of the sky. Much cooler than a 2D number selection thingy. Nothing to show yet, but stay tuned!

How to select and activate anything in a VR scene is a science of its own. For starters, I recommend having a look at Unity’s VR Sample Scenes project. It includes a bunch of useful scripts for reticles, selection radials, VR input handling, etc. It looks pretty convoluted at first, but once you get your head around it, it offers some nice ideas on how to architect an application UI.
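
The core idea behind most of it is gaze selection: cast a ray from the head-tracked camera and see what it hits. A stripped-down version of that idea (my own simplification, not code from the sample project) could look like this:

using UnityEngine;

public class GazeSelector : MonoBehaviour
{
  public Camera vrCamera;          // the head-tracked camera
  public float maxDistance = 50.0f;

  void Update()
  {
    // Cast a ray from the center of the view.
    Ray gaze = new Ray(vrCamera.transform.position, vrCamera.transform.forward);
    RaycastHit hit;
    if (Physics.Raycast(gaze, out hit, maxDistance))
    {
      // Notify the hit object; sent every frame while it is looked at.
      hit.collider.SendMessage("OnGazedAt", SendMessageOptions.DontRequireReceiver);
    }
  }
}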

New iOS app: Lightbox Trace

I’m currently spending a lot of time drawing on my iPad Pro, and needed a way to transfer my digital sketches to drawing paper – essentially, a lightbox with the ability to display an image. Since I still had my old first-generation iPad lying around, I developed a simple little app to put it to use again: Lightbox Trace.

  • Load an image from Photos or the clipboard
  • Scale, position, and rotate it as desired
  • Lock the screen – the app then ignores all touch events, so you can put a piece of paper on the display and trace the image
  • Display brightness is automatically increased to the maximum
  • You can also just show white, for tracing from one paper to another

I’ve found it to be quite useful – please try it out (it’s free!) and let me know if there is anything you’d like to see added.

Arduino Prototyping: It’s a clock!

Over the past few months, I dug deeper into the Arduino platform. One ongoing project is a clock with a moon phase display (I had already implemented the computations for my astronomy app, Cor Leonis). I started out with an LED matrix and 7-segment displays like this:

Tons of wire!
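
As an aside, a moon phase display doesn’t need full-blown ephemeris computations. Estimating the moon’s age from the mean synodic month is plenty for driving a display – here is a sketch of that idea (the standard textbook approximation, not the actual Cor Leonis code):

using System;

static class MoonPhase
{
  const double SynodicMonth = 29.530589; // mean days between new moons

  // Returns the moon's age in days (0 = new moon, ~14.77 = full moon).
  public static double AgeInDays(DateTime utc)
  {
    // A commonly used reference new moon: 2000-01-06 18:14 UTC.
    var reference = new DateTime(2000, 1, 6, 18, 14, 0, DateTimeKind.Utc);
    double days = (utc - reference).TotalDays;
    double age = days % SynodicMonth;
    return age < 0.0 ? age + SynodicMonth : age;
  }
}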

Over time, I decided to use two 8×8 LED matrices, switched to a smaller Arduino-compatible board (the Adafruit Pro Trinket), and ran it on batteries:

There’s also a button to switch between views now.

It’s far from done, but I find it amazing how much I have already learned from this relatively simple project… a refresher on basic electronics (resistors, capacitors, etc.) and soldering, manual LED matrix multiplexing, more about LEDs than I ever wanted to know, RTC clock chips, LED display driver chips, shift registers, step-up/down voltage converters, debouncing hardware buttons, I2C bus wiring and communication, calculating power consumption and battery lifetime, and so on and so forth. Next up: sensors. I would like to switch views just by waving my hand (and see how robust that is), instead of having to walk over and press a button.