Damian Mehers' Blog: Evernote and Wearable devices. All opinions my own.


Making using TypeScript for Google Apps Scripts more convenient on OS X

I've started to use TypeScript in IntelliJ, and wanted to use it for a Google Apps Script App that I'm writing.

There are a couple of issues with using TypeScript for this: the first is that Google Apps Script doesn't directly support TypeScript, and the second is that the Apps Script editor is web-based.

The first issue isn't really an issue, since the TypeScript is transpiled directly into JavaScript. But the second one is an issue. It would be painful to have to open the generated JavaScript in IntelliJ, copy it into the clipboard, activate the web-based editor, select the old content, paste the new content from the clipboard, and save it, every time I make a change to the TypeScript.

Fortunately I've found a simple way to automate all of this using AppleScript.

Firstly, I ensure that the Apps Script editor is open in its own window. My project is called "Documote" and this is what the Google Chrome window looks like:
documote chrome window

Secondly I've created this AppleScript file to copy the generated JavaScript to that project:

    set project_name to "Documote"
    set file_name to "/Users/damian/.../documote/Code.js"
    try
        -- Read the generated JavaScript and put it on the clipboard
        set the_text to (do shell script "cat " & file_name)
        set the clipboard to the_text
        tell application "Google Chrome"
            -- Hide and then show the window to bring it to the front
            set visible of window project_name to false
            set visible of window project_name to true
            activate window project_name
            -- Select everything, paste the new code, and save
            tell application "System Events" to keystroke "a" using command down
            paste selection tab project_name of window project_name
            tell application "System Events" to keystroke "s" using command down
        end tell
    on error errMsg
        display dialog "Error: " & errMsg
    end try

You'd need to change the first couple of lines to reflect your own project name and file path. The reason for hiding and then showing the window is to bring it to the front so the keystrokes go to the Apps Script editor.

Once you have the AppleScript you can assign it a keyboard shortcut, so pushing the generated JavaScript to the Apps Script editor becomes a single keypress.

Filed under: Uncategorized

Building an Amazon Echo Skill to create Evernote notes

First, a demo: Alexa, tell Evernote to create a note "Remember to call my Mother":

I recently acquired an Amazon Echo, and although there is limited support for interacting with Evernote via IFTTT, I wanted to simply create Evernote notes as in the demo above.

I'm going to share how I created an Amazon Echo Skill to accomplish what is shown in the video above, and what roadblocks I hit on the way.

Updating the example

I started with the sample Amazon Echo skill, which runs as an AWS Lambda function, and got that working pretty quickly.

To make it work with Evernote, I changed the JavaScript intent-dispatching code to invoke saveNote when the intent is TakeANote (you'll see where this intent is set up later):

/**
 * Called when the user specifies an intent for this skill.
 */
function onIntent(intentRequest, session, callback) {
    console.log("onIntent requestId=" + intentRequest.requestId +
        ', sessionId=' + session.sessionId);

    var intent = intentRequest.intent, intentName = intentRequest.intent.name;

    // Dispatch to your skill's intent handlers
    if ("TakeANote" === intentName) {
        saveNote(intent, session, callback);
    } else {
        throw "Invalid intent: " + intentName;
    }
}

Creating the note

My code to create the Evernote note (the saveNote function invoked above) is pretty much boilerplate. It pulls the content from the list of slots (defined below) and uses it to create a note using the Evernote API:

function saveNote(intent, session, callback) {
    var cardTitle = intent.name;
    var contentSlot = intent.slots["Content"];
    var repromptText = "";
    var sessionAttributes = [];
    var shouldEndSession = false;
    var speechOutput = "";

    if (contentSlot) {
        var noteText = contentSlot.value;
        sessionAttributes = [];
        speechOutput = "OK.";
        repromptText = "What was that?";
        shouldEndSession = true;

        var noteStoreURL = '...';
        var authenticationToken = '...';
        var noteStoreTransport = new Evernote.Thrift.NodeBinaryHttpTransport(noteStoreURL);
        var noteStoreProtocol = new Evernote.Thrift.BinaryProtocol(noteStoreTransport);
        var noteStore = new Evernote.NoteStoreClient(noteStoreProtocol);

        var note = new Evernote.Note();
        note.title = "New note from Alexa";
        var nBody = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
        nBody += "<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">";
        nBody += "<en-note>" + noteText + "</en-note>";
        note.content = nBody;

        noteStore.createNote(authenticationToken, note, function (result) {
            console.log('Create note result: ' + JSON.stringify(result));
            callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
        });
    } else {
        speechOutput = "I didn't catch that note, please try again";
        repromptText = "I didn't hear that note. You can take a note by saying Take a Note followed by your content";
        callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
    }
}

Notice the hard-coded authenticationToken? That means this will only work with my account. To work with anyone's account, including yours, we'd obviously need to do something different. More on that in a moment.

Packaging it up

I zipped up my JavaScript file, together with my node_modules folder and a node package.json:

{
  "name": "AlexaPowerNoter",
  "version": "0.0.0",
  "private": true,
  "dependencies": {
    "evernote": "~1.25.82"
  }
}

Once done, I uploaded my zip to my Amazon Skill, and then published it.

The Skill information

This is the skill information I used:
Alexa Skill Information
Obviously I couldn't use the trademarked term "Evernote" as the Invocation Name in something that was public, but just for testing for myself, I think I'm OK.

The Interaction Model

I defined the interaction model like this:
Alexa Interaction Model
The sample utterances are way too limited here - Amazon recommends having several hundred utterances for situations where you allow free-form text. It would also be cool to have an intent that lets you search Evernote.

Once I'd done this, and set up my Echo to use my development account, I could create notes.

Authentication roadblock

The next step was to link anyone's Evernote account to the Skill. This is where I hit the roadblock: Amazon requires that the authentication support OAuth 2.0 implicit grant, while Evernote supports OAuth 1.0. I could attempt to create a bridging service, but the security implications of doing so are scary, and doing it properly would require more time than I have right now.

The source is in GitHub

I've published the source to this app in my GitHub account here. If you are a developer and want to try it out, get an Evernote Developer auth token and plug the URL and token into the noteStoreURL and authenticationToken variables above.

Filed under: Uncategorized

Android 5.0 Media Browser APIs

When I read the release notes for the Android 5.0 APIs I was delighted to see this:

Android 5.0 introduces the ability for apps to browse the media content library of another app, through the new android.media.browse API.

I set out to try to browse the media in a variety of apps I had installed on my phone.

First I listed the apps that supported the MediaBrowserService:

  private void discoverBrowseableMediaApps(Context context) {
    PackageManager packageManager = context.getPackageManager();
    Intent intent = new Intent(MediaBrowserService.SERVICE_INTERFACE);
    List<ResolveInfo> services = packageManager.queryIntentServices(intent, 0);
    for(ResolveInfo resolveInfo : services) {
      if(resolveInfo.serviceInfo != null && resolveInfo.serviceInfo.applicationInfo != null) {
        ApplicationInfo applicationInfo = resolveInfo.serviceInfo.applicationInfo;
        String label = (String) packageManager.getApplicationLabel(applicationInfo);
        Drawable icon = packageManager.getApplicationIcon(applicationInfo);
        String packageName = resolveInfo.serviceInfo.packageName;
        String className = resolveInfo.serviceInfo.name;
        publishProgress(new AudioApp(label, packageName, className, icon));
      }
    }
  }

The publishProgress method updated the UI and soon I had a list of apps that supported the MediaBrowserService:

Apps that support MediaBrowserService

Next, I wanted to browse the media they exposed using the MediaBrowser classes:

public class BrowseAppMediaActivity extends ListActivity {
  private static final String TAG = "BrowseAppMediaActivity";
  private final MediaBrowserConnectionListener mMediaBrowserListener =
      new MediaBrowserConnectionListener();
  private MediaBrowser mMediaBrowser;
  private String packageName;  // populated from the query shown earlier (code cut)
  private String className;

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Log.d(TAG, "Connecting to " + packageName + " / " + className);
    ComponentName componentName = new ComponentName(packageName, className);

    Log.d(TAG, "Creating media browser ...");
    mMediaBrowser = new MediaBrowser(this, componentName, mMediaBrowserListener, null);

    Log.d(TAG, "Connecting ...");
    mMediaBrowser.connect();
  }

  private final class MediaBrowserConnectionListener extends MediaBrowser.ConnectionCallback {
    @Override
    public void onConnected() {
      Log.d(TAG, "onConnected");
      String root = mMediaBrowser.getRoot();
      Log.d(TAG, "Have root: " + root);
    }

    @Override
    public void onConnectionSuspended() {
      Log.d(TAG, "onConnectionSuspended");
    }

    @Override
    public void onConnectionFailed() {
      Log.d(TAG, "onConnectionFailed");
    }
  }
}
I’ve cut some code, but assume that the packageName and className are as they were when queried above. No matter what I did, and which app I queried, the onConnectionFailed method was invoked.

Here is the log from when I tried to query the Google Music App:

29195-29195/testapp D/BrowseAppMediaActivity﹕ Connecting to com.google.android.music / com.google.android.music.browse.MediaBrowserService
29195-29195/testapp D/BrowseAppMediaActivity﹕ Creating media browser …
29195-29195/testapp D/BrowseAppMediaActivity﹕ Connecting …
16030-16030/? I/MusicPlaybackService﹕ onStartCommand null / null
16030-16030/? D/MediaBrowserService﹕ Bound to music playback service
16030-16030/? D/MediaBrowserService﹕ onGetRoot fortestapp
16030-16030/? E/MediaBrowserService﹕ package testapp is not signed by Google
16030-16030/? I/MediaBrowserService﹕ No root for client testapp from service android.service.media.MediaBrowserService$ServiceBinder$1
724-819/? I/ActivityManager﹕ Displayed testapp/.BrowseAppMediaActivity: +185ms
29195-29195/testapp E/MediaBrowser﹕ onConnectFailed for ComponentInfo{com.google.android.music/com.google.android.music.browse.MediaBrowserService}
29195-29195/testapp D/BrowseAppMediaActivity﹕ onConnectionFailed

Notice the message about my app not being signed by Google on line 7?

I'm assuming that only authorized apps, such as Google's own apps supporting Android Wear and Android Auto, are allowed to browse Google's music app, and that arbitrary third-party apps are not. Indeed the documentation for people implementing MediaBrowserService.onGetRoot indicates that:

The implementation should verify that the client package has permission to access browse media information before returning the root id; it should return null if the client is not allowed to access this information.

This makes sense, but it is disappointing. Just as users can grant specific apps access to notifications, it would be nice if they could also grant specific apps the right to browse other apps' media.
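
For what it's worth, here is a minimal sketch of my own (not code from Google or the Android samples) of how a MediaBrowserService can make that check. It hands out a root only to a hypothetical whitelist of client packages; everyone else gets null, which is what triggers onConnectionFailed on the client side:

import android.media.browse.MediaBrowser;
import android.os.Bundle;
import android.service.media.MediaBrowserService;

import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MyMediaBrowserService extends MediaBrowserService {
  // Hypothetical whitelist of client packages this service is willing to serve
  private static final Set<String> ALLOWED_PACKAGES =
      new HashSet<>(Arrays.asList("com.example.trustedclient"));

  @Override
  public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
    if (!ALLOWED_PACKAGES.contains(clientPackageName)) {
      // Unknown caller: returning null is what makes the client's onConnectionFailed() fire
      return null;
    }
    return new BrowserRoot("root", null);
  }

  @Override
  public void onLoadChildren(String parentId, Result<List<MediaBrowser.MediaItem>> result) {
    // A real implementation would return its browsable media hierarchy here
    result.sendResult(Collections.<MediaBrowser.MediaItem>emptyList());
  }
}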

Please let me know if you discover I am wrong!

Filed under: Android

Using Android Wear to control Google Cardboard Unity VR

Using a VR headset, even one as simple as Google Cardboard, can be mind-blowing.  Nevertheless, it is the little things that can be disconcerting: for example, looking down and seeing that you have no arms, despite the fact that they still very much feel as though they exist.

I’m convinced that VR experiences are going to transform not just games, but interaction with computers in general, and I’ve been experimenting with some ideas I have about how to create truly useful VR experiences.

As I was working to implement one of my ideas, it occurred to me that I might be able to use the orientation sensors in the Android Wear device I was wearing.  Why not use them as input into the VR experience I was creating?  What if I could bring part of my body from the real world into the VR world?  How about an arm?

I decided to try to find out, and this was the answer:

The experience is nowhere near good enough for games.  But I don’t care about games.  I want to create genuinely useful VR experiences for interacting with computers in general, and I think this is good enough.  I can point to objects, and have them light up.  I can wear smart watches on both wrists (because I really am that cool) and have two arms available in the VR world. 

By tapping and swiping on the wearable screens I can activate in-world functionality, without being taken out of it.  It sure beats sliding a magnet on the side of my face, because it is my arm I am seeing moving in the virtual world.

In the rest of this article I'm going to describe some of the technical challenges behind implementing this, how I overcame them, and some of the resources I used along the way.

The tools

This is part of my workspace: Android Studio on the left, Unity on the top-right and MonoDevelop on the bottom-left:

my workspace

I had many reference browser windows open on other screens (obviously), and creating this solution required being very comfortable in Android, Java and C#.  I'm relatively new to Unity.

Creating a Unity Java Plugin by overriding the Google Cardboard Plugin

The Unity Android Plugin documentation describes how you can create plugins by extending the UnityPlayerActivity Java class, and I experimented with this a little.  I created an Android Library using Android Studio, and implemented my own UnityPlayerActivity derived class.

After a little hassle, I discovered that Unity now supports the "aar" files generated when compiling libraries in Android Studio, although I found the documentation a little out of date on the matter in places.  It was simply a question of copying my generated "aar" file into Unity under Assets|Plugins|Android.



When it came to a Google Cardboard Unity project, though, I discovered that Google had got there first.  They had created their own UnityPlayerActivity called GoogleUnityActivity.  What I needed to do was override Google's override:
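
My class ended up being little more than this minimal sketch (WearUnityActivity is a placeholder name, and the com.google.unity package for GoogleUnityActivity is from memory, so treat it as an assumption):

import android.os.Bundle;
import android.util.Log;

import com.google.unity.GoogleUnityActivity; // Google's Cardboard Unity activity (package name assumed)

public class WearUnityActivity extends GoogleUnityActivity {
  private static final String TAG = "WearUnityActivity";

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Log.d(TAG, "Cooey"); // proof that my subclass, not the stock activity, is running
  }
}

The AndroidManifest.xml bundled with the plugin then needs to declare this activity as the one Unity launches, in place of the stock one.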


I included Google’s unity classes as dependencies in my library project:


Once I'd copied the aar file into the Unity Android Plugins folder and run the test app, I was delighted to see my activity say "Cooey" in the log.


Receiving the watch's orientation on the phone

The next step was to receive Android Wear messages from the watch containing its orientation.

I recreated my project, this time including support for Android Wear:


I made the Unity activity I’d created do a little more than say “Cooey”. 

First I used the Capabilities mechanism to tell other Android Wear devices that this device (the phone) was interested in arm orientation messages:
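
That step amounts to a few calls against the Wearable API; here is a minimal sketch, assuming the GoogleApiClient-based API that was current at the time, with "arm_orientation" as a made-up capability name:

import android.content.Context;
import android.os.Bundle;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Wearable;

public class ArmOrientationCapability {
  // Advertise a capability so the watch can discover this phone.
  // It could equally be declared statically in res/values/wear.xml.
  public static GoogleApiClient advertise(Context context) {
    final GoogleApiClient client = new GoogleApiClient.Builder(context)
        .addApi(Wearable.API)
        .build();
    client.registerConnectionCallbacks(new GoogleApiClient.ConnectionCallbacks() {
      @Override
      public void onConnected(Bundle connectionHint) {
        Wearable.CapabilityApi.addLocalCapability(client, "arm_orientation");
      }

      @Override
      public void onConnectionSuspended(int cause) {
      }
    });
    client.connect();
    return client;
  }
}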


… and I set it up to receive Android Wear messages and pass them over to Unity using UnitySendMessage:
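
Again as a sketch, with placeholder names (the "/arm_orientation" message path, and the "Wrist" GameObject with its "SetOrientation" method, are assumptions), the forwarding looks something like this:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.MessageApi;
import com.google.android.gms.wearable.MessageEvent;
import com.google.android.gms.wearable.Wearable;
import com.unity3d.player.UnityPlayer;

public class WatchMessageForwarder implements MessageApi.MessageListener {
  public WatchMessageForwarder(GoogleApiClient client) {
    // Listen for messages arriving from the watch
    Wearable.MessageApi.addListener(client, this);
  }

  @Override
  public void onMessageReceived(MessageEvent messageEvent) {
    if ("/arm_orientation".equals(messageEvent.getPath())) {
      // Payload is e.g. "azimuth,pitch,roll"; hand it to the script on the "Wrist" GameObject
      UnityPlayer.UnitySendMessage("Wrist", "SetOrientation", new String(messageEvent.getData()));
    }
  }
}

Constructing one of these from the Unity activity's onCreate, with the GoogleApiClient returned by the capability sketch above, wires the two together.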


Sending the watch’s orientation to the phone

This was simply a question of looking out for Android Wear nodes that supported the right capability, listening for orientation sensor changes, and sending Android Wear messages to the right node.  This is the watch code:
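
In outline it comes down to something like this sketch (using the same placeholder names as the phone-side sketches above; a real implementation would also handle connection failures and unregister the sensor listener in onPause):

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.wearable.CapabilityApi;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

import java.util.Locale;

public class ArmOrientationActivity extends Activity
    implements SensorEventListener, GoogleApiClient.ConnectionCallbacks {

  private GoogleApiClient googleApiClient;
  private SensorManager sensorManager;
  private String phoneNodeId;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    googleApiClient = new GoogleApiClient.Builder(this)
        .addApi(Wearable.API)
        .addConnectionCallbacks(this)
        .build();
    googleApiClient.connect();

    sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
    if (rotation == null) {
      finish(); // some wearables have no rotation-vector sensor at all
      return;
    }
    sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
  }

  @Override
  public void onConnected(Bundle connectionHint) {
    // Find the phone node that advertised the capability
    Wearable.CapabilityApi.getCapability(googleApiClient, "arm_orientation",
        CapabilityApi.FILTER_REACHABLE)
        .setResultCallback(new ResultCallback<CapabilityApi.GetCapabilityResult>() {
          @Override
          public void onResult(CapabilityApi.GetCapabilityResult result) {
            for (Node node : result.getCapability().getNodes()) {
              phoneNodeId = node.getId();
            }
          }
        });
  }

  @Override
  public void onConnectionSuspended(int cause) {
  }

  @Override
  public void onSensorChanged(SensorEvent event) {
    if (phoneNodeId == null || !googleApiClient.isConnected()) {
      return;
    }
    float[] rotationMatrix = new float[9];
    float[] orientation = new float[3];
    SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
    SensorManager.getOrientation(rotationMatrix, orientation); // azimuth, pitch, roll in radians
    String payload = String.format(Locale.US, "%f,%f,%f",
        orientation[0], orientation[1], orientation[2]);
    Wearable.MessageApi.sendMessage(googleApiClient, phoneNodeId, "/arm_orientation",
        payload.getBytes());
  }

  @Override
  public void onAccuracyChanged(Sensor sensor, int accuracy) {
  }
}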


I did discover that some wearables don’t support the required sensors, although I imagine more modern ones will.

Using the watch’s orientation to animate a block on the screen

Inside Unity I created a cube, which I tweaked into a rectangular block, and made it a child of the CardboardMain's camera, so that it moved when I moved:


See the “Script” field on the bottom right-hand side?  I have a script called “WristController” that is attached to the “wrist” (white blob).  This is where I receive orientation messages sent from the watch, via the UnityPlayerActivity derived Java class I’d created.

I started off simply assigning the received orientation to the block’s orientation by assigning to transform.eulerAngles


This worked, but was super-jerky.  I went searching and discovered Lerps and Slerps for smoothly moving from one rotation to another.  My updated code:


Animating an arm instead of a block

I was pleased to be smoothly animating a block, but my arm doesn't quite look like that.  It is more armish.  I went looking for a model of an arm that I could import and use instead.  I found a YouTube Unity video called ADDING ARMS by Asbjørn Thirslund, in which he explains how to import and use a free arms model by eafg.

It was simply a question of sizing and positioning the arms properly as a child of the Cardboard main camera, and then adding the script I’d used to animate the block.

I also removed the right-hand arm, since it looked a little weird to have a zombie arm doing nothing.


The ArmController script you see in this screen capture has the same contents as the WristController I’d used to move the block.

Final Thoughts

There is enough of a lag to make this technique impractical for games, but not enough to make it impractical for the kinds of non-game experiences I have in mind. 

I’d also need to add calibration, since the watch may be pointing in any direction initially – if I assume it always starts straight out, that would be good enough.  Detecting where the arm is pointing shouldn’t be too hard, since the cardboard code already does gaze detection – so many possibilities, but so little time for side-projects such as this!

This has been a fun interlude on my way to creating what I hope to be a genuinely useful VR experience based around browsing books your friends have read … more on that later.


Updating the Pebble Emulator python code

I recently wanted to make some changes to the Pebble emulator, which uses the PyV8 Python-JavaScript bridge to emulate the phone environment running your phone-based JavaScript companion app.

These are some notes on how I did this, mainly so that I remember if I need to do it again, and also just in case it helps anyone else.

The first thing I did was to clone the Pebble Python PebbleKit JS implementation, used in the emulator. The original is at https://github.com/pebble/pypkjs and mine is at https://github.com/DamianMehers/pypkjs

Once I'd done that, I cloned my fork locally onto my Mac and followed the instructions to build it.

It needs a copy of Pebble's open-source qemu emulator to talk to, and I started off trying to clone the Pebble qemu and build it locally.  Halfway through it occurred to me that I already had a perfectly good qemu locally, since I already had the Pebble Dev Kit installed.

By running a pbw in the emulator, with the debug switch enabled, I was able to determine the magic command to start the emulator locally:

Screenshot 2015-05-29 12.51.32

I copied the command, added some quotes around parameters that needed them, and was able to launch the emulator in one window:

Screenshot 2015-05-29 12.51.32

The phone simulator in another window:

Screenshot 2015-05-29 12.54.04

And then my app in another:

Screenshot 2015-05-29 12.55.08

Once I was up and running I started making changes to the Python code.  Since I'd never written a line of Python before, I made liberal use of existing code to make the changes I needed.

It all ended well when my pull request containing my changes to support sending binary data was accepted into the official Pebble codebase, meaning that Evernote now runs in the emulator.

Filed under: Pebble

Capture your Mac screen activity into daily videos

I know I'm not alone in wishing there was a TimeSnapper equivalent for the Mac.  Among many other things, it lets you look back in time at what you were doing on your computer minutes, hours or days ago.

Perfect for remembering what you were doing yesterday, and even to recover stuff that was displayed on your screen.

Inspired by TimeSnapper, I've created a small bash script that I've called MacBlackBox, which takes a screenshot every few seconds. Every hour it combines the screenshots into an mp4 video, and every day it combines the hourly videos into daily videos, one per screen.

It is available on GitHub here.  I'm happy to accept improvement suggestions.

Filed under: Uncategorized

Keeping your Moto 360 alive while charging


If you are developing using the Moto 360 and debugging over bluetooth, you'll notice the battery plummeting quickly.

If you put the watch on a Qi charging pad, the Moto 360's charging screen kicks in and you can no longer do anything on the watch, although if you launch your app via Android Studio, it will run.

If you still want to use your watch while it is charging, root it, and disable Motorola Connect on the watch using:

adb -s 'localhost:4444' shell
$ su
# pm disable com.motorola.targetnotif

This works for me, although I am sure it stops plenty of other things from working, so only do this on a development device, and at your own risk.

Filed under: Uncategorized

On Pulse: Why your basal ganglia and wearables were made for each other

I just posted Why your basal ganglia and wearables were made for each other

Filed under: Wearables

On Pulse – How I got my dream job: My wearables journey at Evernote

I just wrote on LinkedIn's Pulse about How I got my dream job: My wearables journey at Evernote

Filed under: Uncategorized

Scrolling long Pebble menu items

This is a technical blog post.  Warning: contains code.

We recently pushed version 1.2 of Evernote for the Pebble to the Pebble App Store.  It is a minor release, with one bug fix, and one new feature.

The bug fix is related to support for the additional character sets that Pebble can now display.

The enhancement is what this blog post is about.  Since we released the first version of the app, which was generally well received, we’ve received emails from people complaining that their note titles, notebook names, tag names etc. don’t fit on the Pebble screen.  They are cut off, and hard to read.  People asked if we could make menu items scroll horizontally if they didn’t fit.

My response was generally something along the lines of “sorry, but we use the Pebble’s built-in menuing system, and until they support scrolling menu items horizontally, we can’t do anything”.  I never felt great about this response, but it was the genuine situation.  However before I pushed the 1.2 release with the character-set bug-fix, I thought I’d take a look at scrolling the menu items.  Turns out, it was surprisingly easy.

You can see what I’m talking about here:


The funny thing about the Evernote Pebble watch app is that it knows almost nothing about Evernote.  The Evernote intelligence is all delegated to the companion app that runs on the phone.  The watch app knows how to display massive menus (paging items in and out as necessary), checkboxes, images, text etc.

When the user scrolls to a new menu item, we kick off a wait timer using app_timer_register waiting for one second.  If the user scrolls to another menu item before the timer has expired, we wait for a new second, this time using app_timer_reschedule:

static void selection_changed_callback(Layer *cell_layer, MenuIndex new_index, MenuIndex old_index,
                                       void *data) {
  WindowData* window_data = (WindowData*)data;
  window_data->moving_forwards_in_menu = new_index.row >= old_index.row;
  if(!window_data->menu_reloading_to_scroll) {
    initiate_menu_scroll_timer(window_data);
  } else {
    window_data->menu_reloading_to_scroll = false;
  }
}

The above method is called by the Pebble framework when the user scrolls to a new menu item.  The check for menu_reloading_to_scroll is there to work around some behavior I've seen.  This callback invokes the following method:

static void initiate_menu_scroll_timer(WindowData* window_data) {
  // If there is already a timer then reschedule it, otherwise create one
  bool need_to_create_timer = true;
  window_data->scrolling_still_required = true;
  window_data->menu_scroll_offset = 0;
  window_data->menu_reloading_to_scroll = false;
  if(window_data->menu_scroll_timer) {
    // APP_LOG(APP_LOG_LEVEL_DEBUG, "Rescheduling timer");
    need_to_create_timer = !app_timer_reschedule(window_data->menu_scroll_timer,
                                                 SCROLL_MENU_ITEM_WAIT_TIMER);
  }
  if(need_to_create_timer) {
    // APP_LOG(APP_LOG_LEVEL_DEBUG, "Creating timer");
    window_data->menu_scroll_timer = app_timer_register(SCROLL_MENU_ITEM_WAIT_TIMER,
                                                        scroll_menu_callback, window_data);
  }
}

As you can see it uses a WindowData structure, which is a custom structure associated with the current window via window_set_user_data.  Once the timer expires it calls scroll_menu_callback:

static void scroll_menu_callback(void* data) {
  WindowData* window_data = (WindowData*)data;
  if(!window_data->menu) {
    return;
  }
  window_data->menu_scroll_timer = NULL;
  if(!window_data->scrolling_still_required) {
    return;
  }

  // Redraw the menu with this scroll offset
  window_data->menu_scroll_offset++;
  MenuIndex menuIndex = menu_layer_get_selected_index(window_data->menu);
  if(menuIndex.row != 0) {
    window_data->menu_reloading_to_scroll = true;
  }
  window_data->scrolling_still_required = false;
  menu_layer_reload_data(window_data->menu);

  // Re-arm the (shorter) timer until the whole item has been shown
  window_data->menu_scroll_timer = app_timer_register(SCROLL_MENU_ITEM_TIMER, scroll_menu_callback,
                                                      window_data);
}

This code is called once when the timer initiated by initiate_menu_scroll_timer expires (after the one-second delay), and then it invokes itself repeatedly using a shorter delay (a fifth of a second), until the menu item is fully scrolled.  The call to menu_layer_reload_data is what causes the menu to be redrawn, using the menu_scroll_offset to indicate how much to scroll the text by.

This is the method that gets called by the draw_row_callback to get the text to be displayed for each menu item:

void get_menu_text(WindowData* window_data, int index, char** text, char** subtext) {
  MenuItem* menu_item = getMenuItem(window_data, index);
  *text = menu_item ? menu_item->text : NULL;
  *subtext = menu_item && menu_item->flags & ITEM_FLAG_TWO_LINER ?
             menu_item->text + strlen(menu_item->text) + 1 : NULL;
  if(*subtext != NULL && strlen(*subtext) == 0) {
    *subtext = NULL;
  }

  MenuIndex menuIndex = menu_layer_get_selected_index(window_data->menu);
  if(*text && menuIndex.row == index) {
    int len = strlen(*text);
    if(len - MENU_CHARS_VISIBLE - window_data->menu_scroll_offset > 0) {
      // "Scroll" by skipping the first menu_scroll_offset characters
      *text += window_data->menu_scroll_offset;
      window_data->scrolling_still_required = true;
    }
  }
}

The code at the end "scrolls" the text, if the row corresponds to the currently selected item, by indexing into the text to be displayed and indicating that scrolling is still required.  I'm not happy with using the fixed size MENU_CHARS_VISIBLE to decide whether or not to scroll – it would be much nicer to measure the text and see if it fits.  If you know of a simple way to do this please comment!

The final thing I needed to do was to actually send longer menu item text from the phone to the watch.  Since Pebble now supports sending more than 120 or so bytes per message, this was much easier.  I'm sending up to 32 characters now.

In summary I’m simply using a timer to redisplay the menu, each time scrolling the current menu item’s text by indexing into the character array, and I stop the timer once it has all been displayed.

Filed under: Pebble, Wearables