Damian Mehers' Blog: Xamarin from Geneva, Switzerland.

23 Aug 2015

Using Android Wear to control Google Cardboard Unity VR

Using a VR headset, even one as simple as Google Cardboard, can be mind-blowing. Nevertheless, it is the little things that can be disconcerting: looking down and seeing you have no arms, for example, despite the fact that they still very much feel as though they exist.

I’m convinced that VR experiences are going to transform not just games, but interaction with computers in general, and I’ve been experimenting with some ideas I have about how to create truly useful VR experiences.

As I was working to implement one of my ideas, it occurred to me that I might be able to use the orientation sensors in the Android Wear device I was wearing.  Why not use them as input into the VR experience I was creating?  What if I could bring part of my body from the real world into the VR world?  How about an arm?

I decided to try to find out, and this was the answer:

The experience is nowhere near good enough for games.  But I don’t care about games.  I want to create genuinely useful VR experiences for interacting with computers in general, and I think this is good enough.  I can point to objects, and have them light up.  I can wear smart watches on both wrists (because I really am that cool) and have two arms available in the VR world. 

By tapping and swiping on the wearable screens I can activate in-world functionality, without being taken out of it.  It sure beats sliding a magnet on the side of my face, because it is my arm I am seeing moving in the virtual world.

In the rest of this article I’m going to describe some of the technical challenges behind implementing this, how I overcame them, and some of the resources I used along the way.

The tools

This is part of my workspace: Android Studio on the left, Unity on the top-right and MonoDevelop on the bottom-right:

my workspace

I had many reference browser windows open on other screens (obviously), and creating this solution required me to be very comfortable in Android, Java and C#. I’m relatively new to Unity.

Creating a Unity Java Plugin by overriding the Google Cardboard Plugin

The Unity Android Plugin documentation describes how you can create plugins by extending the UnityPlayerActivity Java class, and I experimented with this a little.  I created an Android Library using Android Studio, and implemented my own UnityPlayerActivity derived class.

After a little hassle, I discovered that Unity now supports the “aar” files generated when compiling libraries in Android Studio, although I found the documentation a little out of date in places. It was simply a question of copying my generated “aar” file into Unity under Assets|Plugins|Android:

image

image

When it came to a Google Cardboard Unity project, though, I discovered that Google had got there first. They had created their own UnityPlayerActivity subclass called GoogleUnityActivity. What I needed to do was override Google’s override:

image
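In outline the override is tiny; a minimal sketch, with illustrative package, class and log-tag names:

```java
package com.example.weararm;  // illustrative package name

import android.os.Bundle;
import android.util.Log;

// GoogleUnityActivity ships with the Cardboard Unity SDK; here we override Google's override.
import com.google.unity.GoogleUnityActivity;

public class WearArmActivity extends GoogleUnityActivity {

    private static final String TAG = "WearArmActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Log.d(TAG, "Cooey");  // shows up in logcat once Unity launches this activity
    }
}
```

For Unity to launch this activity rather than Google’s, the plugin’s AndroidManifest.xml also needs to declare it as the main launcher activity.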

I included Google’s Unity classes as dependencies in my library project:

image

Once I’d copied the aar file into the Unity Android Plugins folder and run the test app, I was delighted to see my activity say “Cooey” in the log.

image

Receiving the watch’s orientation on the phone

The next step was to receive Android Wear messages from the watch, containing its orientation readings.

I recreated my project, this time including support for Android Wear:

image

I made the Unity activity I’d created do a little more than say “Cooey”. 

First I used the Capabilities mechanism to tell other Android Wear devices that this device (the phone) was interested in arm orientation messages:

image

… and I set it up to receive Android Wear messages and pass them over to Unity using UnitySendMessage:

image
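Putting those two pieces together, the phone-side activity ends up looking roughly like this; the capability name, message path, and the Unity GameObject and method names are illustrative and simply have to match what the watch and the Unity script expect:

```java
package com.example.weararm;  // illustrative

import android.os.Bundle;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.MessageApi;
import com.google.android.gms.wearable.MessageEvent;
import com.google.android.gms.wearable.Wearable;
import com.google.unity.GoogleUnityActivity;
import com.unity3d.player.UnityPlayer;

public class WearArmActivity extends GoogleUnityActivity
        implements GoogleApiClient.ConnectionCallbacks, MessageApi.MessageListener {

    private GoogleApiClient apiClient;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        apiClient = new GoogleApiClient.Builder(this)
                .addApi(Wearable.API)
                .addConnectionCallbacks(this)
                .build();
        apiClient.connect();
    }

    @Override
    public void onConnected(Bundle connectionHint) {
        // Advertise that this node (the phone) wants arm orientation messages...
        Wearable.CapabilityApi.addLocalCapability(apiClient, "arm_orientation");
        // ...and listen for the messages themselves.
        Wearable.MessageApi.addListener(apiClient, this);
    }

    @Override
    public void onConnectionSuspended(int cause) { }

    @Override
    public void onMessageReceived(MessageEvent event) {
        if ("/arm_orientation".equals(event.getPath())) {
            // Payload is e.g. "azimuth,pitch,roll"; hand it straight to the
            // Unity script on the GameObject named "Wrist".
            UnityPlayer.UnitySendMessage("Wrist", "OnWatchOrientation",
                    new String(event.getData()));
        }
    }
}
```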

Sending the watch’s orientation to the phone

This was simply a question of looking out for Android Wear nodes that supported the right capability, listening for orientation sensor changes, and sending Android Wear messages to the right node.  This is the watch code:

image
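In outline it looks something like this, shown here as a bare-bones activity with illustrative names:

```java
package com.example.weararm.wear;  // illustrative

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.wearable.CapabilityApi;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

public class ArmTrackerActivity extends Activity
        implements SensorEventListener, GoogleApiClient.ConnectionCallbacks {

    private GoogleApiClient apiClient;
    private String phoneNodeId;  // node that advertised "arm_orientation"

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        apiClient = new GoogleApiClient.Builder(this)
                .addApi(Wearable.API)
                .addConnectionCallbacks(this)
                .build();
        apiClient.connect();

        // Not every wearable has a rotation vector sensor.
        SensorManager sensors = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor rotation = sensors.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        if (rotation != null) {
            sensors.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
        }
    }

    @Override
    public void onConnected(Bundle connectionHint) {
        // Find the phone: the reachable node that advertised the capability.
        Wearable.CapabilityApi
                .getCapability(apiClient, "arm_orientation", CapabilityApi.FILTER_REACHABLE)
                .setResultCallback(new ResultCallback<CapabilityApi.GetCapabilityResult>() {
                    @Override
                    public void onResult(CapabilityApi.GetCapabilityResult result) {
                        for (Node node : result.getCapability().getNodes()) {
                            phoneNodeId = node.getId();
                        }
                    }
                });
    }

    @Override
    public void onConnectionSuspended(int cause) { }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (phoneNodeId == null || !apiClient.isConnected()) return;

        // Convert the rotation vector into azimuth/pitch/roll (radians) and send it on.
        float[] rotationMatrix = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);
        String payload = orientation[0] + "," + orientation[1] + "," + orientation[2];
        Wearable.MessageApi.sendMessage(apiClient, phoneNodeId,
                "/arm_orientation", payload.getBytes());
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```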

I did discover that some wearables don’t support the required sensors, although I imagine more modern ones will.

Using the watch’s orientation to animate a block on the screen

Inside Unity I created a cube, which I tweaked into a rectangular block, and made it a child of the CardboardMain’s camera, so that it moved when I moved:

image

See the “Script” field on the bottom right-hand side? I have a script called “WristController” attached to the “wrist” (the white blob). This is where I receive the orientation messages sent from the watch, via the UnityPlayerActivity-derived Java class I’d created.

I started off by simply assigning the received orientation to the block’s orientation via transform.eulerAngles:

image
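Roughly speaking, the first version of the script boiled down to this; the method name is whatever the Java side passes to UnitySendMessage, and the payload format and axis mapping are illustrative:

```csharp
using UnityEngine;

// First pass: snap the block straight to whatever the watch reports.
// OnWatchOrientation must match the method named in UnitySendMessage on the Java side.
public class WristController : MonoBehaviour
{
    void OnWatchOrientation(string payload)
    {
        // payload: "azimuth,pitch,roll" in radians
        string[] parts = payload.Split(',');
        float azimuth = float.Parse(parts[0]) * Mathf.Rad2Deg;
        float pitch   = float.Parse(parts[1]) * Mathf.Rad2Deg;
        float roll    = float.Parse(parts[2]) * Mathf.Rad2Deg;

        // Which sensor axis maps to which Unity axis needs experimentation.
        transform.eulerAngles = new Vector3(pitch, azimuth, roll);
    }
}
```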

This worked, but was super-jerky.  I went searching and discovered Lerps and Slerps for smoothly moving from one rotation to another.  My updated code:

image
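In sketch form, the smoothed version keeps the latest reading as a target rotation and Slerps towards it every frame:

```csharp
using UnityEngine;

// Smoothed version: instead of snapping, remember the latest target rotation
// and Slerp towards it a little every frame.
public class WristController : MonoBehaviour
{
    public float smoothing = 5f;          // higher = follows the watch more tightly
    private Quaternion targetRotation;

    void Start()
    {
        targetRotation = transform.rotation;
    }

    void OnWatchOrientation(string payload)
    {
        string[] parts = payload.Split(',');
        float azimuth = float.Parse(parts[0]) * Mathf.Rad2Deg;
        float pitch   = float.Parse(parts[1]) * Mathf.Rad2Deg;
        float roll    = float.Parse(parts[2]) * Mathf.Rad2Deg;
        targetRotation = Quaternion.Euler(pitch, azimuth, roll);
    }

    void Update()
    {
        transform.rotation = Quaternion.Slerp(
            transform.rotation, targetRotation, smoothing * Time.deltaTime);
    }
}
```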

Animating an arm instead of a block

I was pleased to be smoothly animating a block, but my arm doesn’t quite look like that. It is more armish. I went looking for a model of an arm that I could import and use instead. I found a YouTube Unity video called ADDING ARMS by Asbjørn Thirslund, in which he explains how to import and use a free arms model by eafg.

It was simply a question of sizing and positioning the arms properly as a child of the Cardboard main camera, and then adding the script I’d used to animate the block.

I also removed the right-hand arm, since it looked a little weird to have a zombie arm doing nothing.

image

The ArmController script you see in this screen capture has the same contents as the WristController I’d used to move the block.

Final Thoughts

There is enough of a lag to make this technique impractical for games, but not enough to make it impractical for the kinds of non-game experiences I have in mind. 

I’d also need to add calibration, since the watch may be pointing in any direction initially – if I assume it always starts straight out, that would be good enough.  Detecting where the arm is pointing shouldn’t be too hard, since the Cardboard code already does gaze detection – so many possibilities, but so little time for side-projects such as this!
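One cheap way to do that calibration, sticking with the assumption that the arm starts out pointing straight ahead, would be to treat the first reading as the reference and apply everything else relative to it. A fragment of the idea, inside the same controller script (ParseRotation is a hypothetical helper doing the parsing shown earlier):

```csharp
// Calibration sketch: the first reading becomes the reference pose.
private Quaternion calibration = Quaternion.identity;
private bool calibrated;

void OnWatchOrientation(string payload)
{
    Quaternion reading = ParseRotation(payload);   // hypothetical helper
    if (!calibrated)
    {
        // Assume the arm is pointing straight out right now.
        calibration = Quaternion.Inverse(reading);
        calibrated = true;
    }
    targetRotation = calibration * reading;
}
```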

This has been a fun interlude on my way to creating what I hope to be a genuinely useful VR experience based around browsing books your friends have read … more on that later.

20 Oct 2014

Eyeglasses are broken

My eyeglasses are broken, and I want them fixed.

I vividly remember the morning I woke up, and could no longer read.

Everything was blurry, and no matter how much I blinked away the night, I still could not read.  I could see things further off, and if I moved my phone well back past my normal reading distance, I could still just about focus.

Eventually my eyes could focus as normal, and I put the experience down to tiredness.  But soon the blurriness came back, and didn't leave.  I was being abruptly welcomed into late middle age.  I needed reading glasses.

I picked up a pair of cheap glasses from the local supermarket, and miracle of miracles, I could read again.  Everything was fine and crisp, even when I used the smallest font on the kindle app.

There was, however, still an issue.  When I was wearing my reading glasses and looking at something further away than a book, say a person's face or a stop sign, everything was blurry.  I had to take my glasses off to see beyond the page in front of me.

So, in this age of miniaturized sensors, 3D printers and new material science, why can I not buy a pair of glasses that sense the distance to whatever they are pointed at, and physically deform the lenses to bring it into focus for the wearer?

For me the lenses would become clear glass when looking at something in the distance, and would deform to +0.5 reading glasses when looking at a page in front of me.

There have been similar attempts in the past, but as technology advances, sensors shrink and motors become miniaturized, I think it's time to look once again at eyeglasses.  The way they work now is broken.  If Google invested a fraction of the money they have put into Google Glass, I'm convinced they could bring this kind of glasses to the world, benefiting hundreds of millions.  And just perhaps, by incorporating Glass-like functionality along for the ride, they could bring Glass to the masses.

Filed under: Product-Ideas
12 Oct 2014

The inevitable evolution from wearables to embedables

The inevitable evolution from wearables to embedables is at once both exciting and horrifying.

Let's think about Bluetooth headsets. They are already becoming smaller, and will soon be invisible.

I believe that Bluetooth headsets will miniaturize to the point of being so tiny that they will be embedded subdermally, perhaps behind your ear. We'll solve the battery issues by harvesting the body's own heat, or its motion.

What will this give us? Only telepathy. You'll be able to communicate mind-to-mind with anyone on the planet through this device that is part of you, initially by voicing words sub-vocally, but perhaps one day through splicing directly into nerves.

It is as exciting as it is inevitable.

What is also inevitable is that a despotic regime somewhere will use such capabilities to pipe their propaganda directly into their citizens' minds. Can you imagine, from birth, having this incessant stream of brainwashing beamed directly to your brain? It's horrifying.

So, along with the best-case scenarios when dreaming of new technologies, let's also think of the nightmare worst-case scenarios, and make sure we do what we can to mitigate them. In this case, let's start with a physical off-switch.

Filed under: Product-Ideas
11 Oct 2014

A useful in-car app experience

OK, I admit it: I can't help it. Whenever I hit a problem in the real world, I automatically seek to solve it, often through the hammer in my virtual toolbox, which is creating apps.

So what does this have to do with driving my kids home from school? There are traffic lights on the route. Lots and lots of them. Like all of you, I am sure, I never look at my phone screen when I am driving and the car is in motion, but when the car is stopped in front of traffic lights, it is often hard to resist quickly checking my email, or twitter, or whatever.

Of course that is a trap. Before I know it I've been sucked into my digital world, and am oblivious to the real world, until I am rudely and abruptly pulled out of it by the honking horn of the person behind me.

So what I want is this: an app that lets me use my phone as normal while, in the background, using the phone's camera to lock on to the red light of the traffic light, detect when an orange light appears next to it, and alert me both audibly and visually that the lights are changing.

I'd even use it when I'm not looking at my phone, but am instead lost in dreamy reverie, in my own thoughts, and equally oblivious to the lights changing.

But this is only part of my master plan. Oh no, it is not all.

Sometimes I'm stopped while in the car, and it isn't a traffic light that has stopped me. Instead it is a traffic jam. Like all of you, I am sure, I dream of being able to launch a small drone from my car to fly overhead to the front of the jam, to understand what is happening, and how long I will be stuck for. The drone would be paired with my phone, letting me control it from my phone, and beam back images to my phone.

It occurs to me that the whole drone thing is unnecessary, potentially dangerous, and more than likely illegal. Instead all I need is an app that everyone in the traffic jam uses to broadcast the scene in front of them live. Then people far back from the front of the jam can zoom through the cameras, rushing forward car by car to the front, to understand what is happening.

With appropriate anonymizing safeguards in place (number plate blurring) it could also be used by news organizations and the emergency services.

Filed under: Product-Ideas
18 Jul 2010

Tilt to turn pages in Kindle Android: Nope!

I bought an HTC Desire Android mobile phone a couple of months ago.

When I first got it, there were two applications that were missing that I really really wanted: Audible and Kindle.

image

Both were recently released and yet, despite this, my life is in fact still not complete.  There must be some other app for that.

I’ve been thinking about what I could develop.

As I read using the Kindle app, the screen becomes smudged from my tapping it to change the page, and I thought it would be nice if I could change pages by tilting the phone to the right in a quick nudge.

I resisted the temptation to start coding, and did some research.  It looks like it isn’t possible to do what I want using the public SDK, and to save myself the pain of rediscovering why in six months, when I have forgotten, here are some notes.

For this to work I’d need three things: A background process (called a service on Android), a way for it to detect when the phone has been tilted, and a way for it to tell the Kindle App to change the page, when it is running.

Creating a background process is pretty straightforward, and there is a sensor API that can be used to detect when the phone has been tilted.
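A rough sketch of those two pieces, with an illustrative threshold for the “nudge”:

```java
import android.app.Service;
import android.content.Intent;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.IBinder;

// Background service watching the accelerometer for a quick tilt to the right.
// The threshold is illustrative and would need tuning against false positives.
public class TiltWatcherService extends Service implements SensorEventListener {

    private static final float NUDGE_THRESHOLD = 4.0f;  // m/s^2 along the x axis

    @Override
    public void onCreate() {
        super.onCreate();
        SensorManager sensors = (SensorManager) getSystemService(SENSOR_SERVICE);
        sensors.registerListener(this,
                sensors.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.values[0] < -NUDGE_THRESHOLD) {
            // Tilt detected: this is where the page turn would be triggered,
            // and exactly the step for which there is no public API.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}
```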

Telling Kindle to change the page isn’t so easy.  I thought I’d found a way when I discovered the IWindowManager.injectPointerEvent method, but I quickly found that it was not part of the documented Android API, and that using it would be very, very bad.

I was left with trying to find out whether the Kindle App itself had any way for other apps to tell it to change page.  The standard way of doing this would be to expose an Intent which other apps could invoke.

Unfortunately, judging by a dump of the Kindle app’s manifest from its installation (APK) file, it looks as though the only intents exposed are the standard MAIN, LAUNCHER, VIEW, DEFAULT and BROWSABLE intents.

Game over, unless I’m willing to go beyond the public API, which I’m not, right now at least.  Unless anyone else has any ideas?

11 Apr 2009

Windows Home Server needs a peer backup system

I've just set up a Windows Home Server on my home network, and so far I think it is fantastic. I've been able to collect an assortment of hard drives and plug them all into the same machine, and have them seamlessly presented as a single large virtual drive:

image

Our photos are stored in a shared folder hosted by the Windows Home Server, and by enabling the "Duplication" feature, I know that copies are kept on two physical disks, meaning that in the event of a hard disk failure, I'll still have a copy on another disk.  I've also been able to set up all of the computers in the house to be backed up.

There is, however, as others have noted, a big flaw in this setup. Although I have all my photos duplicated on two disks, and all my computers backed up, in the event of a fire or theft I'm screwed.  Someone could walk off with all the physical disks.

What I really need is off-site backup. I've been doing this using an excellent service called Mozy, which for US$5 a month offers unlimited backups for a single PC.  Unfortunately Windows Home Server is based on Windows Server 2003, and Mozy will not run on Server operating systems.

A modest idea: a peer backup service

What I'd like to have, and what I'm tempted to develop, is a peer backup system, implemented via a Windows Home Server Add-in I'd create, and a web site which serves to hook people's Windows Home Servers together.

My idea is this: I'd create a web site where people could register (probably automatically via the add-in) their need for an off-site backup, indicating how much space they need to back up. They would need to commit to making an equivalent amount of space available on their own computer for someone else.

My web site would match people up, and then they could use each other's systems to automatically perform offsite backups.  The add-ins could talk to each other, either peer-to-peer or via my web site. There are issues, of course. The backed-up data would have to be encrypted, which makes incremental backups problematic.

The next obvious step would be to allow the backups to be stored redundantly across the computers of multiple participants, so that you are not just reliant on one other person.  For this to work you'd need to volunteer to make available much more space for other people's backups than your own backups require - perhaps twice as much.

I'm tempted to develop this service; however, I'm not sure how I could cover my costs. Would you pay, say, US$25 a year for a peer-based secure offsite backup service?

Filed under: Product-Ideas