Pocket Observatory released for Gear VR

On 3/16, Pocket Observatory was released to the Oculus Store for Gear VR! It took a lot longer than expected, but in the end, the additional iterations and feedback improved the product tremendously. Of course, this is only version 1 – there are tons of additions on my list already, and I am open to suggestions 🙂

Here’s a link to the product page in the store.

Venturing into social VR with Pocket Observatory!

For the past few weeks I’ve been working on a really exciting feature for the upcoming Gear VR version of Pocket Observatory: you will be able to invite a friend (on the Oculus platform) and start a voice chat beneath the stars! GPS coordinates are exchanged between the app instances, so players can visit each other’s locations. The update is currently under review and will hopefully be up in the Oculus Store in a few weeks.

To my knowledge, this is the very first social VR astronomy app! I had been thinking about this for quite a while during initial development, but didn’t realize how easy it would be to integrate using the Oculus Platform SDK. Mind you, setting up peer-to-peer networking can still be nerve-wracking, given the unreliable nature of network communication, but I still managed to pull this off in just a few weeks. Happy!
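To give a rough idea of what the coordinate exchange involves, here is a minimal sketch of packing and unpacking a GPS position for a peer-to-peer packet. It only illustrates the serialization side; SendToPeer is a hypothetical placeholder for whatever send call your peer-to-peer layer provides (in the app this goes through the Oculus Platform SDK), and the packet format shown here is just an illustration, not the app’s actual protocol.

using System;

// Minimal sketch: serialize a latitude/longitude pair into a small byte
// packet for a peer-to-peer exchange. SendToPeer (see usage below) is a
// hypothetical placeholder, not an Oculus Platform SDK call.
public static class GpsPacket
{
  public static byte[] Pack(double latitude, double longitude)
  {
    var buffer = new byte[16];
    Buffer.BlockCopy(BitConverter.GetBytes(latitude), 0, buffer, 0, 8);
    Buffer.BlockCopy(BitConverter.GetBytes(longitude), 0, buffer, 8, 8);
    return buffer;
  }

  public static void Unpack(byte[] buffer, out double latitude, out double longitude)
  {
    latitude = BitConverter.ToDouble(buffer, 0);
    longitude = BitConverter.ToDouble(buffer, 8);
  }
}

// Usage (hypothetical transport call):
//   SendToPeer(GpsPacket.Pack(48.137, 11.576));
//   // on receive:
//   double lat, lon;
//   GpsPacket.Unpack(payload, out lat, out lon);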

Check out the updated page at https://pocketobservatory.com for the details. Here’s a screenshot of the chat UI (I’m thinking about avatars and a shared-space experience, too, but that’s for later):

p5.js Animated Grid

So here’s a simple Processing demo I did a while ago, ported to p5.js, a library that essentially lets you write Processing sketches in JavaScript, using an HTML canvas! As you will see when running the demo, JavaScript performance is not really where you’d like it to be for animation 🙂 The JavaScript engine of your browser makes a huge difference here. I recommend running this in Chrome – it’s noticeably faster than Firefox.

Open demo in new browser tab

Source code

Pocket Observatory for Google Cardboard

Just finished and submitted the iPhone / Google Cardboard version of Pocket Observatory! It really paid off to use Unity – porting from Android with the Oculus SDK to iPhone with GoogleVR turned out to be really easy.

Here are the quirks I encountered – they might be useful to know if you’re embarking on a similar project:

  • In Gear VR, system messages (e.g., permission requests) are displayed properly and can be confirmed while you’re in VR. On the iPhone, a standard system dialog pops up instead. To deal with the location-service permission, I trigger the request from within a special startup scene, before entering VR mode in the main scene (see the sketch after this list).
  • Texture compression support has to be adjusted per platform. On the iPhone, compression defaults to PVRTC, which requires square textures. The Unity importer stretches non-square textures to make them compressible with PVRTC, which results in awful artifacts, so I had to go over the compression options for all of my (non-square) textures.
  • Make sure the text for the camera-usage permission is set in the iOS player settings – GoogleVR shows a UI button that lets the user switch viewers, which activates the phone’s camera to scan the QR code on the viewer. Not setting the text will crash the app.
  • Unfortunately, the Cardboard app doesn’t run in the simulator – there is no suitable architecture of the gvr library, so the app crashes at startup. I guess it would be possible to build the library from source, but I haven’t tried that yet.
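As an illustration of the first point, here is a minimal sketch of such a startup scene, under a couple of assumptions: the main VR scene is called “Main” (adjust as needed), and Unity’s standard location service is used to trigger the iOS permission dialog before entering VR. The actual app handles a few more details.

using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch: attach to an object in a non-VR startup scene.
// Starting the location service brings up the iOS permission dialog;
// once the service has left the Initializing state, the VR scene loads.
// The scene name "Main" is an assumption for this example.
public class StartupPermissionFlow : MonoBehaviour
{
  IEnumerator Start()
  {
    Input.location.Start(); // triggers the system permission dialog on iOS

    // Wait until the user has answered and the service has settled.
    while (Input.location.status == LocationServiceStatus.Initializing)
    {
      yield return null;
    }

    // Proceed into VR either way; the main scene can check
    // Input.location.status again and degrade gracefully.
    SceneManager.LoadScene("Main");
  }
}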

Visit https://pocketobservatory.com for details regarding app features and release plans!

Pocket Observatory: wrapping up

Haven’t posted as many updates here as planned, because I was terribly busy finishing implementation. But I have started

Now it’s all about finishing the materials for submission to the Oculus Store, setting up a landing page, etc. Coming soon!

Pocket Observatory: Finalizing!

Yes, I have decided on a name for my upcoming Gear VR astronomy app: Tadaa – Pocket Observatory! It seems to be getting to a decent stage… it’s hard to stop adding features when new ideas pop up every five minutes, but all of that has to wait for future releases. Now it’s all about polishing and optimizing!

I found a really, really helpful guide to optimizing Gear VR apps on the Oculus developer blog: https://developer3.oculus.com/blog/squeezing-performance-out-of-your-unity-gear-vr-game/. This certainly saved me a few headaches.

Here’s a very first video impression of the app: https://youtu.be/G4tHM2v0NyY – I was actually wondering how I could record a video like this, but it turned out to be super easy: integrate the platform menu provided with the OVR toolkit, and the recording function is available there via the back button 🙂

The sun, the stars, the sky

Slowly but steadily, it’s coming together. Since my upcoming stargazing app also features a nice landscape and daylight, I am currently spending quite some time adjusting the lighting and the complexity of the scene. After all, this has to run at a steady 60 fps on the Galaxy S6 with the Gear VR headset.

Mind you, the scene is not complex by today’s standards, but hey, this is still a phone!

Still loads of stand-in textures and placeholder objects, but getting there…

‘Simple’ property animation in Unity

Recently, I spent a few hours with Unity’s new animation system, Mecanim. I wasn’t working on any complex animations – I only wanted to implement fading of a few elements in my scene. This turned out to be a real disaster. Why?

  • For simply fading a property in and out, I needed two animation clips and four states, with a pretty sensitive transition setup (I wanted the fading to be interruptible, etc.).
  • The animations cannot go from the current property value to the target at a certain speed. They will always animate from start to end.
  • Adding a delay would require either modifying the animation clip or an even more complex state setup.

So, after fiddling with this for a while, I came to the conclusion that this kind of animation is better done in code (which seems to be what people in the forums think, too). I set up a simple class for fading a property value, added the logic to my MonoBehaviour script, and had it working the way I wanted within 20 minutes. Go figure.

For anyone interested, here is the code. It could probably be improved in various ways, but it’s doing what I want for my project. Feel free to play with it.

ValueFader.cs:

 public struct FadeParameters
 {
   public float initialValue;
   public float maxValue;
   public float minValue;
   public float fadeInDuration;
   public float fadeOutDuration;
   public float fadeInDelay;
     // applies only when going from Idle to FadingIn
     // (not e.g. FadingOut -> FadingIn)
 }
 
 public class ValueFader
 {
   private enum FadeState
   {
     FadingIn,
     FadingOut,
     Idle
   }
 
   private float _remainingDelay;
   private float _currentValue;
   private FadeParameters _fadeParameters;
   private FadeState _currentFadeState = FadeState.Idle;
   private float _fadeInIncrement;
   private float _fadeOutIncrement;
   public float currentValue { get { return _currentValue; } }
 
   public ValueFader(FadeParameters fp)
   {
     _fadeParameters = fp;
     _fadeInIncrement =
       (_fadeParameters.maxValue - _fadeParameters.minValue)
       / _fadeParameters.fadeInDuration;
     _fadeOutIncrement =
       (_fadeParameters.maxValue - _fadeParameters.minValue)
       / _fadeParameters.fadeOutDuration;
     _currentValue = fp.initialValue;
     _remainingDelay = fp.fadeInDelay;
   }
 
   public void FadeIn()
   {
     if (_currentValue < _fadeParameters.maxValue)
     {
       _currentFadeState = FadeState.FadingIn;
     }
     else
     {
       SetIdle();
     }
   }
 
   public void FadeOut()
   {
     if (_currentValue > _fadeParameters.minValue)
     {
       _currentFadeState = FadeState.FadingOut;
     }
     else
     {
       SetIdle(); // might have been in FadingIn, during delay
     }
   }
 
  // Returns true if currentValue was changed by this method
   public bool Update(float deltaTime)
   {
     if (_currentFadeState == FadeState.Idle)
     {
       return false;
     }
 
     bool retVal = false;
 
     if (_currentFadeState == FadeState.FadingIn)
     {
       if (_remainingDelay <= 0.0f)
       {
         _currentValue += _fadeInIncrement * deltaTime;
         if (_currentValue >= _fadeParameters.maxValue)
         {
           _currentValue = _fadeParameters.maxValue;
           _currentFadeState = FadeState.Idle;
         }
 
         retVal = true;
       }
       else
       {
         _remainingDelay -= deltaTime;
       }
     }
     else //if (_currentFadeState == FadeState.FadingOut)
     {
       _currentValue -= _fadeOutIncrement * deltaTime;
       if (_currentValue <= _fadeParameters.minValue)
       {
         _currentValue = _fadeParameters.minValue;
         SetIdle();
       }
 
       retVal = true;
     }
 
     return retVal;
   }
 
   private void SetIdle()
   {
     _currentFadeState = FadeState.Idle;
     _remainingDelay = _fadeParameters.fadeInDelay;
   }
 }
 

And here is how you would use it from within a MonoBehaviour script.

Set up the animation parameters like this, and store the ValueFader in a class member:

FadeParameters imageFP;
imageFP.fadeInDuration = 1.0f;
imageFP.fadeOutDuration = 0.7f;
imageFP.initialValue = 0.0f;
imageFP.maxValue = 0.1f;
imageFP.minValue = 0.0f;
imageFP.fadeInDelay = 1.0f;

_imageFader = new ValueFader(imageFP);  

How you control fading values in or out depends entirely on your logic. You might want to do it based on some event:

void OnSomethingHappened()
{
  ...
  if (startFadingIn)
  {
    _imageFader.FadeIn();
  }
  else if (startFadingOut)
  {
    _imageFader.FadeOut();
  }
}

You get the idea… In Update(), apply the animated value to a material color, or whatever you want to animate:

void Update()
{
  if (_imageFader.Update(Time.deltaTime))
  {
    _imageMaterial.SetColor(
      "_Color", new Color(1.0f, 1.0f, 1.0f, _imageFader.currentValue));
  }
  ...
}

And that’s all 🙂

3D Text with depth test in Unity

The 3D Text asset built into Unity always renders on top, regardless of z depth. This helps avoid z-fighting issues if your text is coplanar with other geometry in the scene, but it is often not what you want. I have labels in my scene which I DO want to be hidden by geometry in front of them. Unfortunately, there is no simple switch to turn on depth testing – even though users have been struggling with this for years, as far as I can tell.

The solution requires a modified font shader, and to be able to use that, you also need a font texture in your project. I had to retrieve all the bits and pieces of information from various places, so I thought it might be a good idea to list all the steps together. Here we go:

  1. Download the Unity built-in shader archive from https://unity3d.com/get-unity/download/archive.
  2. Extract the .zip (at the time of writing: builtin_shaders-5.4.1f1.zip) into some arbitrary folder.
  3. Import DefaultResources/Font.shader into your project.
  4. Rename it, e.g. to ZFont.shader.
  5. Edit the shader source, and change “ZTest Always” to “ZTest LEqual”. Also change the name, e.g. to “GUI/ZText Shader”.
  6. Create a new material, and link it to your new shader.
  7. Import a font into the project. This is as easy as dragging a .ttf into the project window. I used OpenSansRegular.ttf from a sample project.
  8. Show your material in the inspector.
  9. Unfold the font entry in the project window. You will see a “Font Texture” entry. Drag that into the “Font Texture” area of the material displayed in the inspector.
  10. In the Text Mesh where you want to use the new shader, change the Mesh Renderer material to your new material, and change the Text Mesh font to your imported font (if you prefer to do this from a script instead of the inspector, see the sketch below).

And you’re done!
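For completeness, here is a minimal sketch of doing step 10 from a script instead of the inspector. The class and field names are assumptions for this example; it simply assigns the imported font and the material that uses the depth-tested shader to a TextMesh at startup.

using UnityEngine;

// Minimal sketch: assign the depth-tested font material and the imported
// font to a TextMesh from code. Both fields are assumed to be set in the
// inspector (the material uses the modified "GUI/ZText Shader").
[RequireComponent(typeof(TextMesh))]
public class DepthTestedLabel : MonoBehaviour
{
  public Font labelFont;         // font imported in step 7
  public Material zTextMaterial; // material created in step 6

  void Start()
  {
    // Set the font first, then override the renderer material, so the
    // font assignment doesn't clobber the custom material.
    GetComponent<TextMesh>().font = labelFont;
    GetComponent<MeshRenderer>().material = zTextMaterial;
  }
}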