Damian Mehers' Blog. Evernote and Wearable devices. All opinions my own.


Making using TypeScript for Google Apps Scripts more convenient on OS X

I've started to use TypeScript in IntelliJ, and wanted to use it for a Google Apps Script App that I'm writing.

There are a couple of issues with using TypeScript for this: The first is that Google Apps Script doesn't directly support TypeScript, and the second is that the Apps Scripts editor is web based.

The first issue isn't really an issue, since the TypeScript is transpiled directly into JavaScript. But the second one is an issue. It would be painful to have to open the generated JavaScript in IntelliJ, copy it into the clipboard, activate the web-based editor, select the old content, paste the new content from the clipboard, and save it, every time I make a change to the TypeScript.

Fortunately I've found a simple way to automate all of this using AppleScript.

Firstly, I ensure that the Apps Script editor is open in its own window. My project is called "Documote" and this is what the Google Chrome window looks like:
[Screenshot: the Documote project open in its own Google Chrome window]

Secondly I've created this AppleScript file to copy the generated JavaScript to that project:

    try
        set project_name to "Documote"
        set file_name to "/Users/damian/.../documote/Code.js"
        set the_text to (do shell script "cat " & file_name)
        set the clipboard to the_text
        tell application "Google Chrome"
            set visible of window project_name to false
            set visible of window project_name to true
            activate window project_name
            tell application "System Events" to keystroke "a" using command down
            paste selection tab project_name of window project_name
            tell application "System Events" to keystroke "s" using command down
        end tell
    on error errMsg
        display dialog "Error: " & errMsg
    end try

You'd need to change the first couple of lines to reflect your own project name and generated-file path. Hiding and then re-showing the window is a trick to bring it to the front, so that the subsequent keystrokes land in it.

Once you have the AppleScript you can assign it a shortcut.


Building an Amazon Echo Skill to create Evernote notes

First, a demo: Alexa, tell Evernote to create a note "Remember to call my Mother":

I recently acquired an Amazon Echo, and although there is limited support for interacting with Evernote via IFTTT, I wanted to simply create Evernote notes as in the demo above.

I’m going to share how I created an Amazon Echo Skill to accomplish what is shown in the video above, and what roadblocks I hit on the way.

Updating the example

I started with the sample Amazon Echo skill which uses lambdas, and got that working pretty quickly.

To update it to work with Evernote, I changed the JavaScript code that dispatches intents so that it invokes saveNote when the intent is TakeANote (you'll see where this intent is set up later):

/**
 * Called when the user specifies an intent for this skill.
 */
function onIntent(intentRequest, session, callback) {
    console.log("onIntent requestId=" + intentRequest.requestId +
        ', sessionId=' + session.sessionId);
    var intent = intentRequest.intent, intentName = intentRequest.intent.name;
    // Dispatch to your skill's intent handlers
    if ("TakeANote" === intentName) {
        saveNote(intent, session, callback);
    } else {
        throw "Invalid intent: " + intentName;
    }
}

Creating the note

My code to create the Evernote note (invoked by saveNote above) is pretty much boilerplate. It pulls the content from the list of slots (defined below) and uses it to create a note using the Evernote API:

function saveNote(intent, session, callback) {
    var cardTitle = intent.name;
    var contentSlot = intent.slots["Content"];
    var repromptText = "";
    var sessionAttributes = {};
    var shouldEndSession = false;
    var speechOutput = "";
    if (contentSlot) {
        var noteText = contentSlot.value;
        speechOutput = "OK.";
        repromptText = "What was that?";
        shouldEndSession = true;
        var noteStoreURL = '...';
        var authenticationToken = '...';
        var noteStoreTransport = new Evernote.Thrift.NodeBinaryHttpTransport(noteStoreURL);
        var noteStoreProtocol = new Evernote.Thrift.BinaryProtocol(noteStoreTransport);
        var noteStore = new Evernote.NoteStoreClient(noteStoreProtocol);
        var note = new Evernote.Note();
        note.title = "New note from Alexa";
        var nBody = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
        nBody += "<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">";
        nBody += "<en-note>" + noteText + "</en-note>";
        note.content = nBody;
        noteStore.createNote(authenticationToken, note, function (result) {
            console.log('Create note result: ' + JSON.stringify(result));
            callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
        });
    } else {
        speechOutput = "I didn't catch that note, please try again";
        repromptText = "I didn't hear that note.  You can take a note by saying Take a Note followed by your content";
        callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
    }
}
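Both branches call buildSpeechletResponse, which comes from Amazon's sample skill. A minimal version, roughly as the sample defines it, looks like this:

```javascript
// Builds the speechlet portion of an Alexa response; field names follow
// the Alexa custom-skill response JSON format, as in Amazon's Node.js sample.
function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
  return {
    outputSpeech: { type: "PlainText", text: output },
    card: { type: "Simple", title: title, content: output },
    reprompt: { outputSpeech: { type: "PlainText", text: repromptText } },
    shouldEndSession: shouldEndSession
  };
}
```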

Notice the hard-coded authenticationToken? That means this will only work with my account. To work with anyone's account, including yours, we'd obviously need to do something different. More on that in a moment.
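Another caveat: the spoken text is embedded in the ENML body unescaped, so a note containing & or < would produce invalid XML and be rejected. A small helper (my addition, not part of the original code) would avoid that:

```javascript
// Escapes the characters that are significant in XML so that free-form
// spoken text can be embedded safely inside an ENML <en-note> element.
function escapeEnml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// In saveNote, the body line would then become:
//   nBody += "<en-note>" + escapeEnml(noteText) + "</en-note>";
```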

Packaging it up

I zipped up my JavaScript file, together with my node_modules folder and a node package.json:

{
  "name": "AlexaPowerNoter",
  "version": "0.0.0",
  "private": true,
  "dependencies": {
    "evernote": "~1.25.82"
  }
}

Once done, I uploaded my zip to my Amazon Skill, and then published it.

The Skill information

This is the skill information I used:
[Screenshot: Alexa Skill Information]
Obviously I couldn't use the trademarked term "Evernote" as the Invocation Name in something that was public, but just for testing it myself, I think I'm OK.

The Interaction Model

I defined the interaction model like this:
[Screenshot: Alexa Interaction Model]
The sample utterances are way too limited here; Amazon recommend having several hundred utterances for situations where you allow free-form text. It would also be cool to have an intent that lets you search Evernote.

Once I'd done this, and set up my Echo to use my development account, I could create notes.

Authentication roadblock

The next step was to link anyone's Evernote account into the Skill. This is where I hit the roadblock: Amazon require that the account-linking authentication support the OAuth 2.0 implicit grant, and Evernote supports OAuth 1.0. I could attempt to create a bridging service, but the security implications of doing so are scary, and doing it properly would require more time than I have right now.

The source is in GitHub

I've published the source to this app in my GitHub account here. If you are a developer and want to try it out, get an Evernote Developer auth token and plug the URL and token into the noteStoreURL and authenticationToken variables above.


Capture your Mac screen activity into daily videos

I know I'm not alone in wishing there was a TimeSnapper equivalent for the Mac.  Among many things, it lets you look back in time at what you were doing on your computer minutes, hours or days ago.

Perfect for remembering what you were doing yesterday, and even to recover stuff that was displayed on your screen.

Inspired by TimeSnapper, I've created a small bash script, which I've called MacBlackBox, that takes a screenshot every few seconds. Every hour it combines the screenshots into an mp4 video, and every day it combines the hourly videos into daily videos, one per screen.

It is available in GitHub here.  I'm happy to accept improvement suggestions.


Keeping your Moto 360 alive while charging


If you are developing using the Moto 360 and debugging over bluetooth, you'll notice the battery plummeting quickly.

If you put the watch on a QI charging pad, the Moto 360's charging screen kicks in, and you can no longer do anything on the watch, although if you launch your app via Android Studio, it will run.

If you still want to use your watch while it is charging, root it, and disable Motorola Connect on the watch using:

adb -s 'localhost:4444' shell
$ su
# pm disable com.motorola.targetnotif

This works for me, although I am sure it stops plenty of other things from working, so only do this on a development device, and at your own risk.


On Pulse – How I got my dream job: My wearables journey at Evernote

I just wrote on LinkedIn's Pulse about How I got my dream job: My wearables journey at Evernote


Interview for Connectedly on Evernote and Wearables

I recently gave a brief interview about Evernote and Wearables, with special focus on the Pebble, for Adam Zeis at Connectedly, part of the Mobile Nations group (Android Central, iMore, etc).

More here.


Ski Goggles and Sick Bags: The past, present and future of Virtual Reality

Note: This is derived from a speech I gave at Toastmasters last week, inspired by the arrival of my very own brand-new Oculus Rift VR headset.




A generation inspired.

In 1984 the author William Gibson penned his first book, called Neuromancer, and inspired a generation.

In it the protagonist navigates through cyberspace.

If you don’t know what cyberspace means, you are not alone.  At the time that William Gibson wrote Neuromancer, nobody else knew what it meant either.  He invented the term.

Cyberspace in that book was a virtual reality.  An immersive computer generated world which when you are in it, feels just like the real thing, beamed directly to the brain via a neural interface.

Our imaginations were fired.  We wanted it so badly.  Looking back, I’m not even sure why, but man was it cool.

There was no way anything like it was possible then.  A personal computer could barely output color, let alone create that kind of world.

Dreams dashed

Time passed, and by the 1990s my generation still hadn't forgotten the dream of Neuromancer.  Computers and computer graphics were getting more and more powerful. 

You even started to see video arcades with games with virtual reality headsets.  I still remember the day I tried one on, sickly smell of cigarette smoke, music from the arcade games pouring in my ears, almost as loud as the pounding of my heart.  This was it, I was going to experience Virtual Reality.  I placed the headset on my head, and looked around as it projected images into my eyes.

The disappointment was devastating. Not only did it feel like I was wearing a dustbin on my head, clumsy and heavy as it was, but the experience was terrible too: clunky objects drawn as outlines, which struggled to be re-drawn as I moved my head around.

The virtual reality dreams of a generation were dashed on those arcades, as I and many others consigned the idea of virtual reality to the dustbin.

A new hope

Time passed.  Whole new businesses sprang up, such as Amazon.  Not only did new businesses spring up, but new ways of doing business sprang up too.

In the old days if you had an idea for a hardware product, such as some kind of electronic gadget you’d need to go to a big company to get it funded.  Endless bureaucracy and meetings.  You’d likely have to give up the rights to your product, and compromise your soul in order to get something like your idea to market.

But the internet and the world wide web changed that.  Now, when someone has an idea for something, such as a new watch, they can go to sites such as Kickstarter, and pitch their idea not to a committee in a bureaucracy, but instead they can pitch their idea to the world.  They can describe what they want to make, what their experience is in the field, what it will cost to bring it to the market, and they can let thousands of individuals invest in their idea, in return for a sample of the product if it ever gets made.

The Pebble watch I’m wearing right now started on Kickstarter.  Their goal was to raise 100,000 dollars to bring it to market.  They didn’t raise 100,000 dollars.  They raised 10 million dollars.

So that's one thing that happened: decentralized "crowd funding", as it is called, a new way of bringing products to market.

The other thing that happened is mobile phones: incredibly powerful miniature computers that we all carry in our pockets.  Because they are made in massive quantities, the costs of the components that go into them have dropped massively too.  And those components are interesting.

These phones have small, but incredibly high resolution screens.  They have a vast array of sensors in them, such as gyroscopes so that they can tell when they have been turned, accelerometers to tell when they are moved, and magnetometers to tell which direction they are facing.

Can you imagine what would happen if you took those screens, attached them to some kind of a helmet, like ski goggles, included the sensors from phones to accurately track your head position, and hooked them up to a computer to generate slightly different images on each screen?  You’d have a virtual reality system. 

As it happens, someone in the states did have that idea.  Someone that knew enough about virtual reality headsets to put together a working prototype.

If only they had some way to bring their idea to market.  Of course they did, and the Oculus Rift Kickstarter was a massive success.

Those who have tried it on have been astounded by the results.  It creates a truly immersive virtual reality experience.

Anyone wondering what value virtual reality can possibly have beyond games need only watch a 90-year-old woman trying it on, screaming with joy, walking around an Italian villa, leaves blowing in the wind, butterflies flittering in the air.

There are plenty of people who for one reason or another are unable to travel, or even to move, yet they can experience the world through virtual reality.

School kids can watch the birth of the universe, or chemical reactions happening, and step into the reaction to see it from different perspectives. 

This technology is still young.  The Oculus Rift is still not publicly available; it's only available to software developers who wish to create for it.  But it's coming.

I’ve talked about the ski goggles, but what about the sick bag?  Well, all is not perfect with the Oculus Rift.  Many people report nausea after wearing it for a while.  Perhaps it's the eye strain, or perhaps the image still isn't moving quite fast enough and the body senses that.

I’m sure that they will lick the nausea, and soon, very soon indeed, you too will be visiting new parts of our world, or even other worlds, in virtual reality.


Some good books

I was at the speaker’s dinner after speaking at the excellent Reaktor conference in Helsinki, chatting about our favorite authors, and rather than just sending an email to the people that were there, I thought I’d instead write a blog post.

Good authors are hard to find.

These are books I’ve enjoyed over the last couple of years, culled from my Audible and Kindle accounts, skipping over many many “meh” books.  Bing links brought to you courtesy of Gmail.


Changing the Windows Amazon Cloud Drive app sync folder

Amazon just released the first version of their Windows app to sync Amazon Cloud Drive.  It’s very much a first version, with no ability to pause or resume sync, sync selected folders, or even (as far as I can see) change the default sync folder.

When I displayed the options dialog, I assumed that all you had to do was click on the location to change it, but that simply opens the folder in the Explorer.

It chose the smallest drive on my machine (of course), but I found a way to change it.



Do this entirely at your own risk, and if you don’t know what this means, then don’t do it.  You can use regedit to change the sync folder’s location: under the key "HKEY_CURRENT_USER\Software\Amazon\AmazonCloudDrive", change the "SyncRoot" value to point to a different folder.


Works for me, but no guarantees.


HTC Gingerbread–automatically switching from Wifi to costly data connection

I have an HTC Incredible S, and it’s a very nice phone indeed.

I recently upgraded to Android 2.3.3 (Gingerbread), and discovered that the Wifi connection was dropping in places at home where it had a perfectly usable (albeit weak) Wifi signal.  Places where previously it had worked.

I fiddled with my Wifi base station, repositioning it, to no avail.

Finally I googled and found that HTC had decided to switch from Wifi to data if the Wifi signal dropped below a certain strength (-88 dBm).  How nice of them to decide on my behalf that I wanted to switch from my (free) Wifi to my (expensive) data plan, even though I still had a perfectly usable (and free) Wifi connection, one that had worked perfectly well in the previous OS version.

This is annoying for a couple of reasons.  Firstly I can now run up horrendous data plan charges even though I’m within range of my Wifi.  Secondly, I have services I run on my local Wifi (IP Cams, remote control software) that can no longer connect when I’m off my Wifi.

I’ve been a big HTC fan for a long time, and have gone through many of their ‘phones.  This is a big disappointment for me – it stinks of paternalism/arrogance – deciding what is best for me without giving me a chance to override it.  I am sure that it isn’t arrogance/paternalism – I am sure it made perfectly good engineering sense, perhaps because less battery will be consumed on data than on Wifi when on a weak link, but give me a choice.

I contacted HTC support and was told that yes, this behavior is new and that no, there was no way to downgrade – the suggestion was to switch off the Data connection when I was at home. Right, as if I will remember to do that.

I’ve ended up installing Tasker, and setting up a rule to switch off my Data connection when within range of my home Wifi.  Not ideal, but it works.
