Damian Mehers' Blog: Xamarin from Geneva, Switzerland.

20 Jan 2016

Radical surgery: Slimming Pebble apps down to run on Aplite

A long way to go

In December 2015, when I first released Powernoter, an unofficial Evernote client for the Pebble Watch, I targeted only Pebble Time (codename Basalt) and Pebble Time Round (codename Chalk).

After all, there was already the official Evernote Pebble app (which I also created) for the original Pebble (codename Aplite).

Then Pebble released a firmware update and SDK for the original Pebble, which meant I could easily release Powernoter for it too, using the same SDK I'd already been using.

This is the build log from the first time I built Powernoter targeting Aplite (the original Pebble), Basalt (Pebble Time) and Chalk (Pebble Time Round):

-------------------------------------------------------
BASALT APP MEMORY USAGE
Total size of resources:        26461 bytes / 256KB
Total footprint in RAM:         25895 bytes / 64KB
Free RAM available (heap):      39641 bytes
------------------------------------------------------- 
...
-------------------------------------------------------
CHALK APP MEMORY USAGE
Total size of resources:        26461 bytes / 256KB
Total footprint in RAM:         25943 bytes / 64KB
Free RAM available (heap):      39593 bytes
------------------------------------------------------- 
...
-------------------------------------------------------
APLITE APP MEMORY USAGE
Total size of resources:        26341 bytes / 125KB
Total footprint in RAM:         23789 bytes / 24KB
Free RAM available (heap):      787 bytes
------------------------------------------------------- 

See the 787 bytes on the last line? That was how much free memory my app had before it even started running on an original Pebble. Before it created its first window or allocated memory to receive and send messages.

Although I successfully built Powernoter for Aplite, it couldn't even start up, crashing immediately as it ran out of memory.

Not so verbose with the error messages

The first thing I did was to run the pebble analyze-size command, which gave me a sense of where the memory was being used.

Like all good programmers, I very carefully and very consistently checked all OS calls for out of memory situations, and logged (very) verbose messages if I ran out of memory. Like this:

  bitmap_layer = bitmap_layer_create(image_layer_size);
  if(!bitmap_layer) {
    APP_LOG(APP_LOG_LEVEL_ERROR, "Couldn't allocate memory for the image");
    ...

All those strings had to be allocated somewhere. I went through my app and removed all those lovely descriptive messages. Instead I just logged the line number - that was enough to work out where it went wrong.

  bitmap_layer = bitmap_layer_create(image_layer_size);
  if(!bitmap_layer) {
    OOMCF();
    ...

I defined a couple of macros for Out Of Memory (OOM) situations:

#define OOM(s) log_oom(__FILE_NAME__, __LINE__, (int)s)
#define OOMCF() log_create_failed(__FILE_NAME__, __LINE__)
void log_create_failed(char* file, int line) {
  app_log(APP_LOG_LEVEL_DEBUG, file, line, "create failed %d free", (int)heap_bytes_free());
}

void log_oom(char* file, int line, int size) {
  app_log(APP_LOG_LEVEL_DEBUG, file, line, "oom %d, %d", size, (int)heap_bytes_free());
}

I also declared some handy logging macros, so that debug log strings were stripped out of shipping builds:

#ifdef SHIPPING
#define LOG_MEM_START()
#define LOG_MEM_END()
#define LOG_FUNC_START(name)
#define LOG_FUNC_END(name)
#define LOG_DBG(fmt, args...)
#define LOG_ERR(fmt, args...) app_log(APP_LOG_LEVEL_ERROR, __FILE_NAME__, __LINE__, " ")
#else
#define LOG_DBG(fmt, args...) app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, fmt, ## args)
#define LOG_MEM_START() app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "start %d", (int)heap_bytes_free())
#define LOG_MEM_END() app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "end %d", (int)heap_bytes_free())
#define LOG_FUNC_START(name) app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "%s invoked", name)
#define LOG_FUNC_END(name) app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "%s returning", name)
#define LOG_ERR(fmt, args...) app_log(APP_LOG_LEVEL_ERROR, __FILE_NAME__, __LINE__, fmt, ## args)
#endif
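
They are used like this (the function here is hypothetical, just to show the shape); in a SHIPPING build the debug lines compile away to nothing:

static void show_note(void) {
  LOG_FUNC_START("show_note");
  LOG_MEM_START();
  // ... create windows, layers, send messages, etc. ...
  LOG_MEM_END();
  LOG_FUNC_END("show_note");
}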

Use statics in moderation

Next I looked into how I was defining static variables. I like statics because they are only visible to the file in which they are declared: a primitive form of encapsulation. A typical C source file might have started with:

static CustomMenu* customMenu;
static CustomMenuItem* items;
static uint16_t itemCount;
static AppTimer *send_timeout_timer;
static NoteSelectedCallback noteSelectedCallback;

The types don't matter (CustomMenu is my own class that does things like automatically scrolling long menu items).

What matters is that I have four pointers and a short declared as statics, meaning I have a whole chunk of memory statically allocated just for this one file.

Powernoter is not a small app ... multiply this by tens of files and I had a load of memory statically allocated that was never used unless the user actually invoked the functionality those files implement.

The solution was to move to dynamically allocated structures:

typedef struct NoteList {
  CustomMenu *customMenu;
  CustomMenuItem *items;
  uint16_t itemCount;
  AppTimer *send_timeout_timer;
  NoteSelectedCallback noteSelectedCallback;
} NoteList;

I only allocate a NoteList when it is being used, and free it as soon as possible.
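
The pattern looks roughly like this; treat it as a sketch, since what the destroy function has to free depends on what the list owns at the time:

static NoteList *note_list_create(void) {
  NoteList *list = (NoteList *)calloc(1, sizeof(NoteList));  // zero-initialized
  if (!list) {
    OOM(sizeof(NoteList));
    return NULL;
  }
  return list;
}

static void note_list_destroy(NoteList *list) {
  if (!list) {
    return;
  }
  // free whatever the list owns (menu, items, timers) before the struct itself
  free(list->items);
  free(list);
}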

Omit needless code

Although the SDK includes definitions for things like DictationSession on Aplite, so that the code compiles regardless of platform (you do need to check the return values, though), it made no sense to include that code at all. I #ifdefed out whole chunks of code to reduce the app size:

#ifdef SUPPORTS_VOICE
static void dictation_session_callback(DictationSession *session, DictationSessionStatus status,
                                       char *transcription, void *context) {
  LOG_FUNC_START("dictation_session_callback");
  if(DictationSessionStatusSuccess == status) {
    if(!noteContext->waitingAnimation) {
      if(noteContext->customMenu) {
        layer_set_hidden(custom_menu_get_layer(noteContext->customMenu), true);
      }
...
}
#endif

SUPPORTS_VOICE is my own macro:

#ifndef PBL_PLATFORM_APLITE
#define SUPPORTS_VOICE
#else
#define LOW_MEMORY_DEVICE
#endif

Pebble has since added a PBL_MICROPHONE macro, so my SUPPORTS_VOICE macro is no longer necessary.
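
Equivalently, my macro could now be defined in terms of the SDK's one rather than by platform; a tiny sketch:

#if defined(PBL_MICROPHONE)
#define SUPPORTS_VOICE
#endif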

I did the same thing for animations and color support.

Although I think I am a decent enough software engineer, I am under few illusions as to my abilities as a designer. That is why Powernoter lets you choose your very own foreground and background colors, unless you are running on an original Pebble, in which case all that code, including the color names, is #ifdefed out.
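
On current SDKs the built-in PBL_COLOR define is the natural guard for that sort of thing; a minimal sketch (the window and settings variables are hypothetical):

#ifdef PBL_COLOR
  window_set_background_color(window, settings.backgroundColor);  // user-chosen color
#else
  window_set_background_color(window, GColorWhite);               // fixed black-and-white fallback
#endif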

Be careful what you ask for (when calling app_message_xyz_maximum)

Once upon a time we were limited to 120 or so bytes per message sent between the watch and the phone. I wrote inordinately complex code to dynamically page menu items in from the phone to the watch so that you could scroll through infinitely long menus. Then Pebble gave us what we wanted, with massive (8K-ish) message buffers.

When you only have a little memory free to start with, the last thing you want to do is go allocating 8K buffers. It won't work.

My code to determine the size of the input buffer looks like this now:

#ifdef LOW_MEMORY_DEVICE
#define MAX_INBOX_SIZE 512
#else
#define MAX_INBOX_SIZE 4096
#endif

The LOW_MEMORY_DEVICE macro is set on Aplite only. Users on the original Pebble won't see an enormous number of notes listed, or a lot of a note's content, but at least they'll see something.
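
The cap then gets handed to app_message_open() when AppMessage is initialized, rather than blindly asking for app_message_inbox_size_maximum(); a minimal sketch (the outbox size is an illustrative guess, not Powernoter's actual value):

  const uint32_t inbox_size = MAX_INBOX_SIZE;
  const uint32_t outbox_size = 256;
  app_message_open(inbox_size, outbox_size);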

Make long strings into Resources

There is an excellent Internationalization sample for the Pebble. Although Powernoter isn't internationalized, no strings are hardcoded throughout the code ... all strings are accessed via a single point. I include the strings in a single source file in the app, except for certain very long strings, such as the About page text. Those I load as resources from files:

static char* loadResource(uint32_t resourceId) {
  ResHandle handle = resource_get_handle(resourceId);
  size_t res_size = resource_size(handle);

  // Allocate a buffer for the resource plus a terminating NUL, then copy the resource into it
  char* result = (char*)malloc(res_size + 1);
  if(!result) {
    OOM(res_size);
    result = (char*)malloc(1);
    if(result) {
      *result = '\0';
    }
    return result;
  }
  resource_load(handle, (uint8_t*)result, res_size);
  result[res_size] = '\0';
  return result;
}

Once I'm done with them, I free them as quickly as possible.
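
A typical use, sketched with a hypothetical resource id and text layer: load the string when the window loads, hand it to the TextLayer (which keeps a pointer rather than a copy), and free it when the window unloads.

static char *s_about_text;

static void about_window_load(Window *window) {
  s_about_text = loadResource(RESOURCE_ID_ABOUT_TEXT);     // hypothetical resource id
  text_layer_set_text(s_about_text_layer, s_about_text);   // the layer references the buffer, so keep it alive
}

static void about_window_unload(Window *window) {
  free(s_about_text);
  s_about_text = NULL;
}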

Summary

In case you were wondering, this is how things look right now:

-------------------------------------------------------
CHALK APP MEMORY USAGE
Total size of resources:        27313 bytes / 256KB
Total footprint in RAM:         24244 bytes / 64KB
Free RAM available (heap):      41292 bytes
-------------------------------------------------------
BASALT APP MEMORY USAGE
Total size of resources:        27313 bytes / 256KB
Total footprint in RAM:         24176 bytes / 64KB
Free RAM available (heap):      41360 bytes
-------------------------------------------------------
APLITE APP MEMORY USAGE
Total size of resources:        13966 bytes / 125KB
Total footprint in RAM:         17353 bytes / 24KB
Free RAM available (heap):      7223 bytes
------------------------------------------------------- 

Getting from 787 bytes free to 7,223 bytes free, so that Powernoter can really run on Aplite, involved many changes, some of which I'd say were generally good practice (reducing statics in favour of structs that are allocated and freed on demand), and some less so (removing error log messages).

In general I don't think the code looks too unreadable as a result of supporting Aplite ... certainly I'd prefer not to have as many #ifdefs sprinkled throughout my code as I have, but it's not that bad.

You may also wish to check out this Pebble presentation on Pebble app memory usage.

One thing is for sure, the changes I had to make to Powernoter to get it to run on Aplite are nothing compared with the miracles the Pebble team pulled to get the original Pebble to support the same SDK as Pebble Time and Pebble Time Round.

About me

I'm an independent consultant and speaker, available for ad-hoc Pebble, Android, Android Wear, and Tizen consulting and development.

If you like and use Powernoter, please consider supporting it.

On the other hand if something is missing or doesn't work, check out this Trello board where you can comment to request enhancements or report bugs.

Filed under: Pebble, Wearables
13 Nov 2015

Making using TypeScript for Google Apps Scripts more convenient on OS X

I've started to use TypeScript in IntelliJ, and wanted to use it for a Google Apps Script App that I'm writing.

There are a couple of issues with using TypeScript for this: the first is that Google Apps Script doesn't directly support TypeScript, and the second is that the Apps Script editor is web-based.

The first issue isn't really an issue, since the TypeScript is transpiled directly into JavaScript. But the second one is an issue. It would be painful to have to open the generated JavaScript in IntelliJ, copy it into the clipboard, activate the web-based editor, select the old content, paste the new content from the clipboard, and save it, every time I make a change to the TypeScript.

Fortunately I've found a simple way to automate all of this using AppleScript.

Firstly, I ensure that the Apps Script editor is open in its own window. My project is called "Documote" and this is what the Google Chrome window looks like:
[Image: the Documote project in its own Google Chrome window]

Secondly I've created this AppleScript file to copy the generated JavaScript to that project:

try
    set project_name to "Documote"
    set file_name to "/Users/damian/.../documote/Code.js"
    set the_text to (do shell script "cat " & file_name)
    set the clipboard to the_text
    tell application "Google Chrome"
        set visible of window project_name to false
        set visible of window project_name to true
        activate window project_name
        tell application "System Events" to keystroke "a" using command down
        paste selection tab project_name of window project_name
        tell application "System Events" to keystroke "s" using command down
    end tell
on error errMsg
    display dialog "Error: " & errMsg
end try

You'd need to change the first couple of lines to reflect your situation. The reason for hiding and showing the window is to bring it to the front, so that the keystrokes go to it.

Once you have the AppleScript you can assign it a shortcut.

Filed under: Uncategorized
11 Nov 2015

Building an Amazon Echo Skill to create Evernote notes

First, a demo (video): "Alexa, tell Evernote to create a note: Remember to call my Mother."

I recently acquired an Amazon Echo, and although there is limited support for interacting with Evernote via IFTTT, I wanted to simply create Evernote notes as in the demo above.

I’m going to share how I created an Amazon Echo Skill to accomplish what is shown in the video above, and which roadblocks I hit along the way.

Updating the example

I started with the sample Amazon Echo skill which uses lambdas, and got that working pretty quickly.

To update it to work with Evernote, I changed the JavaScript code that recognized the intent to invoke saveNote when the intent is TakeANote (you'll see where this intent is set up later):

/**
 * Called when the user specifies an intent for this skill.
 */
function onIntent(intentRequest, session, callback) {
    console.log("onIntent requestId=" + intentRequest.requestId +
        ', sessionId=' + session.sessionId);
    var intent = intentRequest.intent, intentName = intentRequest.intent.name;
    // Dispatch to your skill's intent handlers
    if ("TakeANote" === intentName) {
        saveNote(intent, session, callback);
    }
    else {
        throw "Invalid intent: " + intentName;
    }
}

Creating the note

My code to create the Evernote note (invoked by saveNote above) is pretty much boilerplate. It pulls the content from the list of slots (defined below) and uses it to create a note using the Evernote API:

function saveNote(intent, session, callback) {
    var cardTitle = intent.name;
    var contentSlot = intent.slots["Content"];
    var repromptText = "";
    var sessionAttributes = [];
    var shouldEndSession = false;
    var speechOutput = "";
    if (contentSlot) {
        var noteText = contentSlot.value;
        sessionAttributes = [];
        speechOutput = "OK.";
        repromptText = "What was that?";
        shouldEndSession = true;
        var noteStoreURL = '...';
        var authenticationToken = '...';
        var noteStoreTransport = new Evernote.Thrift.NodeBinaryHttpTransport(noteStoreURL);
        var noteStoreProtocol = new Evernote.Thrift.BinaryProtocol(noteStoreTransport);
        var noteStore = new Evernote.NoteStoreClient(noteStoreProtocol);
        var note = new Evernote.Note();
        note.title = "New note from Alexa";
        var nBody = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
        nBody += "<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">";
        nBody += "<en-note>" + noteText + "</en-note>";
        note.content = nBody;
        noteStore.createNote(authenticationToken, note, function (result) {
            console.log('Create note result: ' + JSON.stringify(result));
            callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
        });
    }
    else {
        speechOutput = "I didn't catch that note, please try again";
        repromptText = "I didn't hear that note.  You can take a note by saying Take a Note followed by your content";
        callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
    }
}

Notice the hard-coded authenticationToken? That means this will only work with my account. To work with anyone's account, including yours, we'd obviously need to do something different. More on that in a moment.

Packaging it up

I zipped up my JavaScript file, together with my node_modules folder and a node package.json:

{
  "name": "AlexaPowerNoter",
  "version": "0.0.0",
  "private": true,
  "dependencies": {
    "evernote": "~1.25.82"
  }
}

Once done, I uploaded my zip to my Amazon Skill, and then published it.

The Skill information

This is the skill information I used:
[Image: Alexa Skill Information]
Obviously I couldn't use the trademarked term "Evernote" as the Invocation Name in something public, but for my own private testing I think I'm OK.

The Interaction Model

I defined the interaction model like this:
[Image: Alexa Interaction Model]
The sample utterances are way too limited here - Amazon recommends having several hundred utterances for situations where you allow free-form text. It would also be cool to have an intent that lets you search Evernote.

Once I'd done this, and set up my Echo to use my development account, I could create notes.

Authentication roadblock

The next step was to link anyone's Evernote account into the Skill. This is where I hit the roadblock: Amazon requires that the authentication support the OAuth 2.0 implicit grant, while Evernote supports OAuth 1. I could attempt to create a bridging service, but the security implications of doing so are scary, and doing it properly would require more time than I have right now.

The source is in GitHub

I've published the source to this app in my GitHub account here. If you are a developer and want to try it out, get an Evernote developer auth token and plug the note store URL and token into the noteStoreURL and authenticationToken variables above.

Filed under: Uncategorized
30 Aug 2015

Android 5.0 Media Browser APIs

When I read the release notes for the Android 5.0 APIs I was delighted to see this:

Android 5.0 introduces the ability for apps to browse the media content library of another app, through the new android.media.browse API.

I set out to try to browse the media in a variety of apps I had installed on my phone.

First I listed the apps that supported the MediaBrowserService:

  private void discoverBrowseableMediaApps(Context context) {
    PackageManager packageManager = context.getPackageManager();
    Intent intent = new Intent(MediaBrowserService.SERVICE_INTERFACE);
    List<ResolveInfo> services = packageManager.queryIntentServices(intent, 0);
    for(ResolveInfo resolveInfo : services) {
      if(resolveInfo.serviceInfo != null && resolveInfo.serviceInfo.applicationInfo != null) {
        ApplicationInfo applicationInfo = resolveInfo.serviceInfo.applicationInfo;
        String label = (String) packageManager.getApplicationLabel(applicationInfo);
        Drawable icon = packageManager.getApplicationIcon(applicationInfo);
        String packageName = resolveInfo.serviceInfo.packageName;
        String className = resolveInfo.serviceInfo.name;
        publishProgress(new AudioApp(label, packageName, className, icon));
      }
    }
  }

The publishProgress method updated the UI and soon I had a list of apps that supported the MediaBrowserService:

[Image: list of apps that support MediaBrowserService]

Next, I wanted to browse the media they exposed using the MediaBrowser classes:

…
public class BrowseAppMediaActivity extends ListActivity {
  private static final String TAG = "BrowseAppMediaActivity";
  …
  private final MediaBrowserConnectionListener mMediaBrowserListener =
      new MediaBrowserConnectionListener();
  private MediaBrowser mMediaBrowser;

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    …
    Log.d(TAG, "Connecting to " + packageName + " / " + className);
    ComponentName componentName = new ComponentName(packageName, className);

    Log.d(TAG, "Creating media browser ...");
    mMediaBrowser = new MediaBrowser(this, componentName, mMediaBrowserListener, null);

    Log.d(TAG, "Connecting ...");
    mMediaBrowser.connect();
  }

  private final class MediaBrowserConnectionListener extends MediaBrowser.ConnectionCallback {
    @Override
    public void onConnected() {
      Log.d(TAG, "onConnected");
      super.onConnected();
      String root = mMediaBrowser.getRoot();
      Log.d(TAG, "Have root: " + root);
    }

    @Override
    public void onConnectionSuspended() {
      Log.d(TAG, "onConnectionSuspended");
      super.onConnectionSuspended();
    }

    @Override
    public void onConnectionFailed() {
      Log.d(TAG, "onConnectionFailed");
      super.onConnectionFailed();
    }
  }
}

I’ve cut some code, but assume that the packageName and className are as they were when queried above. No matter what I did, and which app I queried, the onConnectionFailed method was invoked.

Here is the log from when I tried to query the Google Music App:

29195-29195/testapp D/BrowseAppMediaActivity﹕ Connecting to com.google.android.music / com.google.android.music.browse.MediaBrowserService
29195-29195/testapp D/BrowseAppMediaActivity﹕ Creating media browser …
29195-29195/testapp D/BrowseAppMediaActivity﹕ Connecting …
16030-16030/? I/MusicPlaybackService﹕ onStartCommand null / null
16030-16030/? D/MediaBrowserService﹕ Bound to music playback service
16030-16030/? D/MediaBrowserService﹕ onGetRoot fortestapp
16030-16030/? E/MediaBrowserService﹕ package testapp is not signed by Google
16030-16030/? I/MediaBrowserService﹕ No root for client testapp from service android.service.media.MediaBrowserService$ServiceBinder$1
724-819/? I/ActivityManager﹕ Displayed testapp/.BrowseAppMediaActivity: +185ms
29195-29195/testapp E/MediaBrowser﹕ onConnectFailed for ComponentInfo{com.google.android.music/com.google.android.music.browse.MediaBrowserService}
29195-29195/testapp D/BrowseAppMediaActivity﹕ onConnectionFailed

Notice the message about my app not being signed by Google on line 7?

I’m assuming that only authorized apps are allowed to browse Google’s music app, such as Google’s own apps supporting Android Wear and Android Auto, but not arbitrary third-party apps. Indeed the documentation for people implementing MediaBrowserService.onGetRoot indicates that:

The implementation should verify that the client package has permission to access browse media information before returning the root id; it should return null if the client is not allowed to access this information.

This makes sense, but it is disappointing. Just as users can grant specific apps access to notifications, it would be nice if they could also grant specific apps the right to browse other apps' media.

Please let me know if you discover I am wrong!

Filed under: Android
23 Aug 2015

Using Android Wear to control Google Cardboard Unity VR

Using a VR headset, even one as simple as Google Cardboard, can be mind-blowing. Nevertheless, it is the little things that can be disconcerting. For example, looking down and seeing you have no arms, despite the fact that they still very much feel as though they exist.

I’m convinced that VR experiences are going to transform not just games, but interaction with computers in general, and I’ve been experimenting with some ideas I have about how to create truly useful VR experiences.

As I was working to implement one of my ideas, it occurred to me that I might be able to use the orientation sensors in the Android Wear device I was wearing.  Why not use them as input into the VR experience I was creating?  What if I could bring part of my body from the real world into the VR world?  How about an arm?

I decided to try to find out, and this was the answer:

[Video: an Android Wear watch driving an arm in a Google Cardboard VR scene]

The experience is nowhere near good enough for games.  But I don’t care about games.  I want to create genuinely useful VR experiences for interacting with computers in general, and I think this is good enough.  I can point to objects, and have them light up.  I can wear smart watches on both wrists (because I really am that cool) and have two arms available in the VR world. 

By tapping and swiping on the wearable screens I can activate in-world functionality, without being taken out of it.  It sure beats sliding a magnet on the side of my face, because it is my arm I am seeing moving in the virtual world.

In the rest of this article I’m going to describe some of the technical challenges behind implementing this, how I overcame them, and some of the resources I used on the way.

The tools

This is part of my workspace: Android Studio on the left, Unity on the top-right and MonoDevelop on the bottom-left:

[Image: my workspace]

I had many reference browser windows open on other screens (obviously), and creating this solution required me to be very comfortable with Android, Java and C#. I’m relatively new to Unity.

Creating a Unity Java Plugin by overriding the Google Cardboard Plugin

The Unity Android Plugin documentation describes how you can create plugins by extending the UnityPlayerActivity Java class, and I experimented with this a little.  I created an Android Library using Android Studio, and implemented my own UnityPlayerActivity derived class.

After a little hassle, I discovered that Unity now supports the “aar” files generated when compiling libraries in Android Studio, although I found the documentation a little out of date on the matter in places. It was simply a question of copying my generated “aar” file into Unity under Assets|Plugins|Android:

[Images: the generated aar file copied into the Unity project under Assets|Plugins|Android]

When it came to a Google Cardboard Unity project, though, what I discovered is that Google had got there first. They had created their own UnityPlayerActivity called GoogleUnityActivity. What I needed to do was override Google’s override:

[Image: my activity class extending GoogleUnityActivity]

I included Google’s Unity classes as dependencies in my library project:

[Image: the library project’s dependencies on Google’s Cardboard Unity classes]

Once I’d copied the aar file into the Unity Android Plugins folder and ran the test app, I was delighted to see my activity say “Cooey” in the log.

[Image: the log showing my activity’s “Cooey” message]

Receiving the watch’s orientation on the phone

The next step was to receive Android Wear messages from the watch, containing its orientation.

I recreated my project, this time including support for Android Wear:

[Image: the project recreated with Android Wear support]

I made the Unity activity I’d created do a little more than say “Cooey”. 

First I used the Capabilities mechanism to tell other Android Wear devices that this device (the phone) was interested in arm orientation messages:

[Image: phone-side code advertising the arm-orientation capability]

… and I set it up to receive Android Wear messages and pass them over to Unity using UnitySendMessage:

[Image: phone-side listener forwarding Android Wear messages to Unity via UnitySendMessage]

Sending the watch’s orientation to the phone

This was simply a question of looking out for Android Wear nodes that supported the right capability, listening for orientation sensor changes, and sending Android Wear messages to the right node.  This is the watch code:

[Image: watch-side code sending orientation sensor readings as Android Wear messages]

I did discover that some wearables don’t support the required sensors, although I imagine more modern ones will.

Using the watch’s orientation to animate a block on the screen

Inside Unity I created a cube, which I tweaked into a rectangle, and made it a child of the CardboardMain’s camera, so that it moved when I moved:

[Image: the block as a child of the CardboardMain camera in the Unity hierarchy]

See the “Script” field on the bottom right-hand side?  I have a script called “WristController” that is attached to the “wrist” (white blob).  This is where I receive orientation messages sent from the watch, via the UnityPlayerActivity derived Java class I’d created.

I started off simply assigning the received orientation to the block’s orientation by assigning to transform.eulerAngles:

[Image: script assigning the received angles directly to transform.eulerAngles]

This worked, but was super-jerky.  I went searching and discovered Lerps and Slerps for smoothly moving from one rotation to another.  My updated code:

[Image: updated script smoothing the rotation with a Slerp]

Animating an arm instead of a block

I was pleased to be smoothly animating a block, but my arm doesn’t quite look like that. It is more armish. I went looking for a model of an arm that I could import and use instead. I found a YouTube Unity video called ADDING ARMS by Asbjørn Thirslund, in which he explains how to import and use a free arms model by eafg.

It was simply a question of sizing and positioning the arms properly as a child of the Cardboard main camera, and then adding the script I’d used to animate the block.

I also removed the right-hand arm, since it looked a little weird to have a zombie arm doing nothing.

[Image: the arm model in the scene with the ArmController script attached]

The ArmController script you see in this screen capture has the same contents as the WristController I’d used to move the block.

Final Thoughts

There is enough of a lag to make this technique impractical for games, but not enough to make it impractical for the kinds of non-game experiences I have in mind. 

I’d also need to add calibration, since the watch may be pointing in any direction initially – if I assume it always starts straight out, that would be good enough. Detecting where the arm is pointing shouldn’t be too hard, since the Cardboard code already does gaze detection – so many possibilities, but so little time for side-projects such as this!

This has been a fun interlude on my way to creating what I hope to be a genuinely useful VR experience based around browsing books your friends have read … more on that later.

29 May 2015

Updating the Pebble Emulator python code

I recently wanted to make some changes to the Pebble emulator, which uses the PyV8 Python-JavaScript bridge to emulate the phone environment running your phone-based JavaScript companion app.

[Image: the Pebble emulator]

These are some notes on how I did this, mainly so that I remember if I need to do it again, and also just in case it helps anyone else.

The first thing I did was to clone the Pebble Python PebbleKit JS implementation, used in the emulator. The original is at https://github.com/pebble/pypkjs and mine is at https://github.com/DamianMehers/pypkjs

Once I'd done that I cloned my fork locally onto my Mac, and followed the instructions to build it.

It needs a copy of the Pebble qemu open-source emulator to talk to, and I started off trying to clone the Pebble qemu and build it locally. Half-way through, it occurred to me that I already had a perfectly good qemu locally, since I already had the Pebble Dev Kit installed.

By running a pbw in the emulator, with the debug switch enabled, I was able to determine the magic command to start the emulator locally:

[Image: the qemu command line shown in the debug output]

I copied the command, added some quotes around parameters that needed them, and was able to launch the emulator in one window:

[Image: the emulator running in its own window]

The phone simulator in another window:

[Image: the phone simulator running in another window]

And then my app in another:

[Image: my app running in the emulator]

Once I was up and running I started making changes to the Python code. Since I'd never written a line of Python before, I made liberal use of existing code to make the changes I needed.

It all ended well when my pull request containing my changes to support sending binary data was accepted into the official Pebble codebase, meaning that Evernote now runs in the emulator.

Filed under: Pebble
26 May 2015

Capture your Mac screen activity into daily videos

[Image: MacBlackBox]

I know I'm not alone in wishing there was a TimeSnapper equivalent for the Mac. Among many other things, it lets you look back in time at what you were doing on your computer minutes, hours or days ago.

Perfect for remembering what you were doing yesterday, and even to recover stuff that was displayed on your screen.

Inspired by TimeSnapper, I've created a small bash script that I've called MacBlackBox, which takes a screenshot every few seconds. Every hour it combines the screenshots into an mp4 video, and every day it combines the hourly videos into daily videos, one per screen.

It is available on GitHub here. I'm happy to accept improvement suggestions.

Filed under: Uncategorized
24 May 2015

Keeping your Moto 360 alive while charging

[Image: Moto 360 on its charger]

If you are developing on the Moto 360 and debugging over Bluetooth, you'll notice the battery level plummeting quickly.

If you put the watch on a Qi charging pad, the Moto 360's charging screen kicks in, and you can no longer do anything on the watch, although if you launch your app via Android Studio, it will run.

If you still want to use your watch while it is charging, root it, and disable Motorola Connect on the watch using:

adb -s 'localhost:4444' shell
$ su
# pm disable com.motorola.targetnotif

This works for me, although I am sure it stops plenty of other things from working, so only do this on a development device, and at your own risk.

Filed under: Uncategorized
16 Mar 2015

On Pulse: Why your basal ganglia and wearables were made for each other

I just posted Why your basal ganglia and wearables were made for each other

Filed under: Wearables
22 Feb 2015

On Pulse – How I got my dream job: My wearables journey at Evernote

I just wrote on LinkedIn's Pulse about How I got my dream job: My wearables journey at Evernote

Filed under: Uncategorized