Damian Mehers' Blog: Android, VR and Wearables from Geneva, Switzerland.

10 Apr 2016

Getting Xamarin XAML IntelliSense when the binding context is set in code

I'm working on an app where I navigate from one page to another, passing data by setting the new page's binding context:

Navigation.PushAsync(new QuickNotePage() { BindingContext = quickNote});

When designing the XAML for QuickNotePage I was pained to see that IntelliSense wasn't working, because I wasn't setting the BindingContext for the page in XAML.

A quick search led me to this page which pre-dates the current version of Xamarin, but nevertheless reminded me of the old design-time namespaces that were auto-generated when I worked on WPF and Silverlight.

This is the XAML I'm using now to get IntelliSense auto-completion and the ability to navigate to properties:

Before:

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="QuickNoteForms.QuickNotePage">
    <Label Text="{Binding Title}"/>
</ContentPage>

After:

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="QuickNoteForms.QuickNotePage"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             mc:Ignorable="d"
             xmlns:quickNoteViewModels="clr-namespace:QuickNote.ViewModels;assembly=QuickNoteForms"
             d:DataContext="{d:DesignInstance quickNoteViewModels:QuickNoteViewModel}">
    <Label Text="{Binding Title}"/>
</ContentPage>

Where QuickNoteViewModel is the ViewModel class, an instance of which I set above when instantiating the page.

Filed under: Xamarin
8 Apr 2016

Visual Studio missing “Forms Xaml Page” from “Add|New Item” menu using Xamarin

Not sure why this is happening, but it's been happening on all my installations of Xamarin with Visual Studio 2015.

All the tutorials and web pages talk about using Project|Add New Item and adding a new "Forms Xaml Page". But whenever I install Xamarin and Visual Studio 2015, I only get the "Forms ContentPage" and "Forms ContentView" items, which generate just a C# file with no XAML.

To fix this, I copied XamlPage.zip from

C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Xamarin\Xamarin\4.0.3.214\T\IT\Cross-Platform\Code

to

C:\Users\your name here\Documents\Visual Studio 2015\Templates\ItemTemplates\Visual C#

Finally, it is there:
Screenshot 2016-04-08 11.10.19

Filed under: Xamarin
31 Jan 2016

Creating a Windows Universal app to talk Bluetooth LE, save to SQLite and expose a REST service

The Goal

I've had a couple of TI SensorTags sitting on my shelf for a couple of years. These are the original ones, which have been superseded by smaller ones that have additional sensors for light and sound.

Sensor Tag with no case / Sensor Tag with case

They are wonderful devices. They last for over a year on a watch battery, they talk Bluetooth LE, and they have loads of sensors including Temperature (both spot temperature of a nearby object, and overall ambient temperature), Gyroscope, Accelerometer, Magnetometer, Barometer, Humidity, etc.

Last, but not least, they cost less than US$30. Unless you actually enjoy wiring physical sensors into an Arduino or Raspberry Pi, I think Sensor Tags are a great way to start collecting all kinds of information.

Rather than have a phone sitting talking Bluetooth LE, I decided I wanted to use a Mac Mini server that I have running Windows, which I could run continuously to capture, store and serve the sensor information.

My goal was to:

  • Create a Windows Universal App that talks Bluetooth LE to the Sensor Tag
  • Save the captured information to an SQLite database
  • Serve the captured information using REST (/GetTemperatures?start=201501010000&end=201701010000)

At each step I hit roadblocks, and the purpose of this blog post is to try to capture what I did to overcome them, in the hope that other people may benefit from my pain.

Although I've been mainly writing Java/Android, C, TypeScript and JavaScript over the last three years, I still retain a soft spot for C# and the associated tooling of Visual Studio and ReSharper.

I really appreciate the C# syntax and associated features such as lambdas and LINQ.

I wanted to try my hand at creating a Windows app, to see how much I'd lost over the last few years.

Bluetooth LE, SensorTag and Windows Universal

I started off creating a new Windows Universal app in Visual Studio. I browsed the documentation, and found the classes associated with using Bluetooth LE. I liked the fact that my app would be able to run on desktops down to phones.

My initial code:

      _watcher = new BluetoothLEAdvertisementWatcher();
      _watcher.Received += BluetoothReceived;
      _watcher.Stopped += BluetoothStopped;
      _watcher.Start();

When I ran this, I got the following exception: onecoreuap\drivers\wdm\bluetooth\user\winrt\common\devicehandle.cpp(100)\ Windows.Devices.Bluetooth.dll!51D26D1B: (caller: 51D273AE) Exception(1) tid(13c0) 80070005 Access is denied.

Turns out I needed to add Bluetooth to my app's capabilities by double-clicking the Package.appxmanifest file in the Solution Explorer, going to Capabilities and checking Bluetooth.

Enabling Bluetooth in Windows Universal App
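
If you prefer to edit the manifest XML directly rather than use the designer, that checkbox corresponds, to the best of my knowledge, to a DeviceCapability entry like this:

  <Capabilities>
    <DeviceCapability Name="bluetooth" />
  </Capabilities>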

Once that was done, I was able to look for the SensorTag's Service UUID, and then check for the correct characteristics and enable the reception of the sensor's data:

    const string BaseUuidStart = "f000";
    const string BaseUuidEnd = "-0451-4000-b000-000000000000";

    const string TempData = "aa01";
    const string TempConfig = "aa02";
    const string AccelData = "aa11";
    const string AccelConfig = "aa12";
    const string HumidData = "aa21";
    const string HumidConfig = "aa22";
    const string MagnetData = "aa31";
    const string MagnetConfig = "aa32";
    const string BaromData = "aa41";
    const string BaromConfig = "aa42";
    const string GyroData = "aa51";
    const string GyroConfig = "aa52";

    private bool _attaching;
    private readonly List<BluetoothLEDevice> _devices = new List<BluetoothLEDevice>();
    private readonly List<GattCharacteristic> _characteristics = new List<GattCharacteristic>();

    private async void BluetoothReceived(BluetoothLEAdvertisementWatcher sender,
      BluetoothLEAdvertisementReceivedEventArgs args) {
      if (_attaching) return;
      try {
        var device = await BluetoothLEDevice.FromBluetoothAddressAsync(args.BluetoothAddress);
        _devices.Add(device);
        _attaching = true;
        device.ConnectionStatusChanged += DeviceConnectionStatusChanged;
        device.GattServicesChanged += DeviceGattServicesChanged;
        foreach (var service in device.GattServices) {
          var serviceUuid = service.Uuid.ToString().ToLowerInvariant();
          if (!serviceUuid.StartsWith(BaseUuidStart) || !serviceUuid.EndsWith(BaseUuidEnd)) {
            continue;
          }
          foreach (var characteristic in service.GetAllCharacteristics()) {
            var characteristicUuid = characteristic.Uuid.ToString().ToLowerInvariant();
            if (_characteristics.Any(c => c.Uuid.ToString() == characteristicUuid)) {
              continue;
            }
            var characteristicType = characteristicUuid.Substring(BaseUuidStart.Length, 4);
            switch (characteristicType) {
              case AccelData:
              case BaromData:
              case HumidData:
              case GyroData:
              case MagnetData:
              case TempData: {
                _characteristics.Add(characteristic);
                characteristic.ValueChanged += CharacteristicChanged;
                var status =
                  await characteristic.WriteClientCharacteristicConfigurationDescriptorAsync(
                    GattClientCharacteristicConfigurationDescriptorValue.Notify);
                Debug.WriteLine("Subscribed .... with status " + status);
                break;
              }
              case AccelConfig:
              case BaromConfig:
              case HumidConfig:
              case GyroConfig:
              case MagnetConfig:
              case TempConfig: {
                var status = await characteristic.WriteValueAsync(new byte[] {1}.AsBuffer());
                break;
              }

              default:
                Debug.WriteLine("Ignoring characteristic: " + characteristicType);
                break;
            }
          }
        }
        sender.Stop();
      }
      catch (Exception ex) {
        Debug.WriteLine("got " + ex);
      }
    }

I used the Sensor Tag documentation to learn the GUIDs used for the services and characteristics.
I found I needed to press the Advertise button on the side of my Sensor Tag to get it to be seen.
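
For what it's worth, the ValueChanged handler that receives each notification looks roughly like this; it's a simplified sketch rather than my exact code, in which I just pass the characteristic UUID along as the id and only show the temperature case:

    private async void CharacteristicChanged(GattCharacteristic sender, GattValueChangedEventArgs args) {
      // args.CharacteristicValue is an IBuffer; ToArray() comes from System.Runtime.InteropServices.WindowsRuntime
      var rawData = args.CharacteristicValue.ToArray();
      var characteristicUuid = sender.Uuid.ToString().ToLowerInvariant();
      var characteristicType = characteristicUuid.Substring(BaseUuidStart.Length, 4);
      switch (characteristicType) {
        case TempData:
          await ProcessTempData(characteristicUuid, rawData); // ProcessTempData is shown below
          break;
        // ... the other Data characteristics would be decoded similarly
      }
    }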

Capturing the values was pretty easy, but I did hit one stumbling block which was the temperature. There is an algorithm described in the documentation as to how to transform the series of bytes received into the spot and ambient temperature in degrees Celsius. When I tried using it I got garbage values, but eventually found this C# example showing how they can be calculated:

    private async Task ProcessTempData(string bluetoothId, byte[] rawData) {
      // Extract ambient temperature
      var ambTemp = BitConverter.ToUInt16(rawData, 2)/128.0;

      // Extract object temperature 
      int twoByteValue = BitConverter.ToInt16(rawData, 0);
      var vobj2 = twoByteValue*0.00000015625;
      var tdie = ambTemp + 273.15;
      const double s0 = 5.593E-14; // Calibration factor 
      const double a1 = 1.75E-3;
      const double a2 = -1.678E-5;
      const double b0 = -2.94E-5;
      const double b1 = -5.7E-7;
      const double b2 = 4.63E-9;
      const double c2 = 13.4;
      const double tref = 298.15;
      var s = s0*(1 + a1*(tdie - tref) + a2*Math.Pow(tdie - tref, 2));
      var vos = b0 + b1*(tdie - tref) + b2*Math.Pow(tdie - tref, 2);
      var fObj = vobj2 - vos + c2*Math.Pow(vobj2 - vos, 2);
      var tObj = Math.Pow(Math.Pow(tdie, 4) + (fObj/s), .25);
      var objTemp = tObj - 273.15;

      await SaveTemperature(bluetoothId, ambTemp, objTemp);
    }

SQLite and Windows Universal

Installing SQLite for Windows was pretty easy, but I couldn't find clear, complete instructions. In short, I used NuGet to install:

  • SQLite.Net-PCL
  • SQLite.Net.Async-PCL
  • SQLite.Net.Core-PCL

Once I had these installed, I could define classes corresponding to the tables I wanted to create, such as:

  public class Temperature
  {
    public int Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string BluetoothId { get; set; }
    public double Ambient { get; set; }
    public double Spot { get; set; }
  }

Then I could initialize the database:

    private SQLiteAsyncConnection _asyncConnection;
    private async Task InitializeDatabase() {
      Debug.WriteLine("Initializing database");
      var databasePath = Path.Combine(Windows.Storage.ApplicationData.Current.LocalFolder.Path, "sensortag.db");
      var connectionFactory = new Func<SQLiteConnectionWithLock>(() => new SQLiteConnectionWithLock(new SQLitePlatformWinRT(), new SQLiteConnectionString(databasePath, true)));
      _asyncConnection = new SQLiteAsyncConnection(connectionFactory);
      await _asyncConnection.CreateTablesAsync(typeof (Temperature));
      Debug.WriteLine("Initialized database");
    }

And then write the data:

    private async Task SaveTemperature(string bluetoothId, double ambTemp, double objTemp) {
      var temperature = new Temperature {
        Timestamp = DateTime.Now,
        BluetoothId = bluetoothId,
        Ambient = ambTemp,
        Spot = objTemp
      };
      Debug.WriteLine("Writing temperature");
      await _asyncConnection.InsertAsync(temperature);
      Debug.WriteLine("Wrote temperature");
    }

It turns out this was wrong, though it is what was shown in the Stack Overflow posts I found. The reason it is wrong is that it creates a new database connection each time the factory lambda is invoked. When I used this code everything would run fine for a while, until eventually I hit an SQLite Busy exception:

Exception thrown: 'SQLite.Net.SQLiteException' in mscorlib.ni.dll
SQLite.Net.SQLiteException: Busy
   at SQLite.Net.PreparedSqlLiteInsertCommand.ExecuteNonQuery(Object[] source)
   at SQLite.Net.SQLiteConnection.Insert(Object obj, String extra, Type objType)
   at SQLite.Net.SQLiteConnection.Insert(Object obj)
   at SQLite.Net.Async.SQLiteAsyncConnection.<>c__DisplayClass14_0.<InsertAsync>b__0()
   at System.Threading.Tasks.Task`1.InnerInvoke()
   at System.Threading.Tasks.Task.Execute()

The simple solution was to create a single database connection instance, and serve that, rather than continually serving new ones:

    private SQLiteAsyncConnection _asyncConnection;
    private SQLiteConnectionWithLock _sqliteConnectionWithLock;
    private async Task InitializeDatabase() {
      Debug.WriteLine("Initializing database");
      var databasePath = Path.Combine(Windows.Storage.ApplicationData.Current.LocalFolder.Path, "sensortag.db");
      _sqliteConnectionWithLock = new SQLiteConnectionWithLock(new SQLitePlatformWinRT(), new SQLiteConnectionString(databasePath, true));
      var connectionFactory = new Func<SQLiteConnectionWithLock>(() => _sqliteConnectionWithLock);
      _asyncConnection = new SQLiteAsyncConnection(connectionFactory);
      await _asyncConnection.CreateTablesAsync(typeof (Temperature));
      Debug.WriteLine("Initialized database");
    }

Exposing a REST Service from Windows Universal

This was supposed to be trivially easy. I've done plenty of WCF in the past, and know how ridiculously straightforward it should be to expose a REST service from an app. Except that Windows Universal doesn't currently support WCF.

I went searching and found Restup, currently in Beta, which aims to expose REST endpoints for Windows Universal apps.

I used NuGet to install it, checking the Include prerelease option because it was still in beta.

Setting up was pretty easy:

    private async Task InitializeWebServer() {
      await InitializeDatabase();
      var webserver = new RestWebServer(); //defaults to 8800
      webserver.RegisterController<SensorTagService>(_asyncConnection);

      await webserver.StartServerAsync();
    }

  [RestController(InstanceCreationType.Singleton)]
  class SensorTagService {
    private readonly SQLiteAsyncConnection _connection;

    public SensorTagService(SQLiteAsyncConnection sqLiteAsyncConnection) {
      _connection = sqLiteAsyncConnection;
    }

    [UriFormat("/GetTemperatures\\?start={start}&end={end}")]
    public async Task<GetResponse> GetTemperatures(string start, string end) {
      Debug.WriteLine("got temp request");
      ...
    }
  }

Note the escaping of the question mark in the UriFormat? I wanted to pass parameters to my endpoint rather than use values that are part of the path, but all the Restup examples showed values in the path. I eventually came up with this solution, though it may be unnecessary by the time you read this.
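
In case it helps, the body of that handler can be as simple as querying SQLite and handing the rows back. This is an illustrative sketch: the date format, the query and the GetResponse construction are my reconstruction, not necessarily what I shipped:

    [UriFormat("/GetTemperatures\\?start={start}&end={end}")]
    public async Task<GetResponse> GetTemperatures(string start, string end) {
      // Parse the yyyyMMddHHmm timestamps from the query string (format assumed from the example URL above);
      // DateTime.ParseExact needs using System.Globalization;
      var startTime = DateTime.ParseExact(start, "yyyyMMddHHmm", CultureInfo.InvariantCulture);
      var endTime = DateTime.ParseExact(end, "yyyyMMddHHmm", CultureInfo.InvariantCulture);
      var temperatures = await _connection.Table<Temperature>()
        .Where(t => t.Timestamp >= startTime && t.Timestamp <= endTime)
        .ToListAsync();
      return new GetResponse(GetResponse.ResponseStatus.OK, temperatures);
    }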

Once again the security model bit me, and I got the following exception:

An exception of type 'System.UnauthorizedAccessException' occurred in mscorlib.ni.dll but was not handled in user code
WinRT information: At least one of either InternetClientServer or PrivateNetworkClientServer capabilities is required to listen for or receive traffic
Additional information: Access is denied.

Once again I edited the app's capabilities by double-clicking the Package.appxmanifest file in the Solution Explorer, going to Capabilities and checking

  • Internet (Client),
  • Internet (Client & Server) and
  • Private Networks (Client & Server) (so that I could use my service on my home network).
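
In the manifest XML these map, I believe, to the following Capability entries:

  <Capabilities>
    <Capability Name="internetClient" />
    <Capability Name="internetClientServer" />
    <Capability Name="privateNetworkClientServer" />
  </Capabilities>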

Accessing a local Windows Universal app from your web browser

Try as I might, I was not able to use my local Chrome browser to access my service, and had to resort to using a totally separate machine to invoke it. I used the CheckNetIsolation tool, I ensured that the Allow Network Loopback option was set for my project in Visual Studio, and I turned off my firewalls. Nothing!
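
For reference, the loopback exemption I was attempting looks something like this, run from an elevated command prompt, substituting your own app's package family name (the -s form lists the current exemptions):

  CheckNetIsolation.exe LoopbackExempt -a -n="<your app's package family name>"
  CheckNetIsolation.exe LoopbackExempt -s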

Conclusions

The Bluetooth side of things was quite easy, but exposing a REST API was far too hard, despite the sterling work of Tom Kuijsten and the Restup project. Not being able to access my service locally was a complete pain - the Windows Universal restrictions on being accessible from the local host seem strange - almost as though they are trying to stop you from building traditional apps that talk to Windows Universal apps ...

In the end I'll likely use the Windows Universal app to capture the SensorTag data via Bluetooth LE, and then create a Node.js app to serve it over REST, sharing the same SQLite database, with code to handle retrying if the database is busy when inserting new values.

I'll also push the data to a Node-RED instance to act on it.

Filed under: Uncategorized
20 Jan 2016

Radical surgery: Slimming Pebble apps down to run on Aplite

A long way to go

In December 2015, when I first released Powernoter, an unofficial Evernote client for the Pebble Watch, I initially targeted Pebble Time (codename Basalt) and Pebble Time Round (codename Chalk).

After all, there was already the official Evernote Pebble app (which I also created) for the original Pebble (codename Aplite).

Then Pebble released a firmware update and SDK for the original Pebble which meant that I could easily release Powernoter for the original Pebble too, using the same SDK I'd already used.

This is the build log from the first time I built Powernoter targeting Aplite (the original Pebble), Basalt (Pebble Time) and Chalk (Pebble Time Round):

-------------------------------------------------------
BASALT APP MEMORY USAGE
Total size of resources:        26461 bytes / 256KB
Total footprint in RAM:         25895 bytes / 64KB
Free RAM available (heap):      39641 bytes
------------------------------------------------------- 
...
-------------------------------------------------------
CHALK APP MEMORY USAGE
Total size of resources:        26461 bytes / 256KB
Total footprint in RAM:         25943 bytes / 64KB
Free RAM available (heap):      39593 bytes
------------------------------------------------------- 
...
-------------------------------------------------------
APLITE APP MEMORY USAGE
Total size of resources:        26341 bytes / 125KB
Total footprint in RAM:         23789 bytes / 24KB
Free RAM available (heap):      787 bytes
------------------------------------------------------- 

See the 787 bytes on the last line? That was how much free memory my app had before it even started running on an original Pebble. Before it created its first window or allocated memory to receive and send messages.

Although I successfully built Powernoter for Aplite, it couldn't even start up, crashing immediately as it ran out of memory.

Not so verbose with the error messages

The first thing I did was to run the pebble analyze-size command, which gave me a sense of where the memory was being used.

Like all good programmers, I very carefully and very consistently checked all OS calls for out of memory situations, and logged (very) verbose messages if I ran out of memory. Like this:

  bitmap_layer = bitmap_layer_create(image_layer_size);
  if(!bitmap_layer) {
    APP_LOG(APP_LOG_LEVEL_ERROR, "Couldn't allocate memory for the image");
    ...

All those strings had to be allocated somewhere. I went through my app and removed all those lovely descriptive messages. Instead I just logged the line number - that was enough to work out where it went wrong.

  bitmap_layer = bitmap_layer_create(image_layer_size);
  if(!bitmap_layer) {
    OOMCF();
    ...

I defined a couple of macros for Out Of Memory (OOM) situations:

#define OOM(s) log_oom(__FILE_NAME__, __LINE__, (int)s)
#define OOMCF() log_create_failed(__FILE_NAME__, __LINE__)
void log_create_failed(char* file, int line) {
  app_log(APP_LOG_LEVEL_DEBUG, file, line, "create failed %d free", (int)heap_bytes_free());
}

void log_oom(char* file, int line, int size) {
  app_log(APP_LOG_LEVEL_DEBUG, file, line, "oom %d, %d", size, (int)heap_bytes_free());
}

I also declared some handy logging macros, so that debug log strings were stripped out of shipping builds:

#ifdef SHIPPING
#define LOG_MEM_START()
#define LOG_MEM_END()
#define LOG_FUNC_START(name)
#define LOG_FUNC_END(name)
#define LOG_DBG(fmt, args...)
#define LOG_ERR(fmt, args...) app_log(APP_LOG_LEVEL_ERROR, __FILE_NAME__, __LINE__, " ")
#else
#define LOG_DBG(fmt, args...) app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, fmt, ## args)
#define LOG_MEM_START() app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "start %d", (int)heap_bytes_free())
#define LOG_MEM_END() app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "end %d", (int)heap_bytes_free())
#define LOG_FUNC_START(name) app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "%s invoked", name)
#define LOG_FUNC_END(name) app_log(APP_LOG_LEVEL_DEBUG, __FILE_NAME__, __LINE__, "%s returning", name)
#define LOG_ERR(fmt, args...) app_log(APP_LOG_LEVEL_ERROR, __FILE_NAME__, __LINE__, fmt, ## args)
#endif

Use statics in moderation

Next I looked into how I was defining static variables. I like statics because they are only visible to the file in which they are declared: a primitive form of encapsulation. A typical C source file might have started with:

static CustomMenu* customMenu;
static CustomMenuItem* items;
static uint16_t itemCount;
static AppTimer *send_timeout_timer;
static NoteSelectedCallback noteSelectedCallback;

The types don't matter (CustomMenu is my own class that does things like automatically scrolling long menu items).

What matters is that I have four pointers and a short declared as statics, meaning I have a whole chunk of memory statically allocated just for this one file.

Powernoter is not a small app ... this multiplied by tens of files means that I had a load of memory statically allocated, which was never used unless the user was actually invoking the functionality represented by those files.

The solution was to move to dynamically allocated memory:

typedef struct NoteList {
  CustomMenu *customMenu;
  CustomMenuItem *items;
  uint16_t itemCount;
  AppTimer *send_timeout_timer;
  NoteSelectedCallback noteSelectedCallback;
} NoteList;

I only allocate a NoteList when it is being used, and free it as soon as possible.
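
As a sketch of the pattern (mine, not code from Powernoter), the struct gets a create/destroy pair, and everything hangs off a single pointer for as long as the screen is in use:

static NoteList* note_list_create(void) {
  NoteList* list = malloc(sizeof(NoteList));
  if (!list) {
    OOM(sizeof(NoteList));
    return NULL;
  }
  memset(list, 0, sizeof(NoteList));
  return list;
}

static void note_list_destroy(NoteList* list) {
  if (!list) return;
  // free anything the struct owns (custom menu, items array, ...) before the struct itself
  free(list);
}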

Omit needless code

Although the SDK includes definitions for things like DictationSession on Aplite, so that code can be compiled regardless of the platform (you do need to check return values though), it made no sense to include that code at all. I #ifdefed out whole chunks of code to reduce the app size:

#ifdef SUPPORTS_VOICE
static void dictation_session_callback(DictationSession *session, DictationSessionStatus status,
                                       char *transcription, void *context) {
  LOG_FUNC_START("dictation_session_callback");
  if(DictationSessionStatusSuccess == status) {
    if(!noteContext->waitingAnimation) {
      if(noteContext->customMenu) {
        layer_set_hidden(custom_menu_get_layer(noteContext->customMenu), true);
      }
...
}
#endif

SUPPORTS_VOICE is my own macro:

#ifndef PBL_PLATFORM_APLITE
#define SUPPORTS_VOICE
#else
#define LOW_MEMORY_DEVICE
#endif

Pebble have added a PBL_MICROPHONE macro so my use of SUPPORTS_VOICE is no longer necessary.

I did the same thing for animations and color support.

Although I think I am a decent enough software engineer, I am under few illusions as to my abilities as a designer, which is why I let you choose your very own foreground and background colors in Powernoter, except if you are running on an original Pebble, in which case all that code, including the color names, is #ifdefed out.

Be careful what you ask for (when calling app_message_xyz_maximum)

Once upon a time we were limited to 120 or so bytes per message sent between the watch and the phone. I wrote inordinately complex code to page menu items in dynamically from the phone to the watch so that you could scroll through infinitely long menus. Then Pebble gave us what we wanted, with massive (8Kish) message buffers.

When you only have a little memory free to start with, the last thing you want to do is go allocating 8K buffers. It won't work.

My code to determine the size of the input buffer looks like this now:

#ifdef LOW_MEMORY_DEVICE
#define MAX_INBOX_SIZE 512
#else
#define MAX_INBOX_SIZE 4096
#endif

The LOW_MEMORY_DEVICE macro is set on Aplite only. Users on the original Pebble won't see an enormous number of notes listed, or a lot of a note's content, but at least they'll see something.
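
For illustration (my sketch, not the actual Powernoter code), that macro then feeds straight into app_message_open, instead of asking for app_message_inbox_size_maximum():

// Open AppMessage with the capped inbox; 256 for the outbox is just an illustrative value
app_message_open(MAX_INBOX_SIZE, 256);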

Make long strings into Resources

There is an excellent Internationalization sample for the Pebble. Although Powernoter isn't internationalized, there are no strings hardcoded in code ... all strings are accessed via a single point. I include the strings in a single source file in the app, except for certain very long strings, such as the About page. These I load as resources from files:

static char* loadResource(uint32_t resourceId) {
  ResHandle handle = resource_get_handle(resourceId);
  size_t res_size = resource_size(handle);

  // Copy to buffer
  char* result = (char*)malloc(res_size + 1);
  if(!result) {
    OOM(res_size);
    result = (char*)malloc(1);
    if(result) {
      *result = '\0';
    }
    return result;
  }
  resource_load(handle, (uint8_t*)result, res_size);
  result[res_size] = '\0';
  return result;
}

Once I'm done with them, I free them as quickly as possible.

Summary

In case you were wondering, this is how things look right now:

CHALK APP MEMORY USAGE
Total size of resources:        27313 bytes / 256KB
Total footprint in RAM:         24244 bytes / 64KB
Free RAM available (heap):      41292 bytes
-------------------------------------------------------
BASALT APP MEMORY USAGE
Total size of resources:        27313 bytes / 256KB
Total footprint in RAM:         24176 bytes / 64KB
Free RAM available (heap):      41360 bytes
-------------------------------------------------------
APLITE APP MEMORY USAGE
Total size of resources:        13966 bytes / 125KB
Total footprint in RAM:         17353 bytes / 24KB
Free RAM available (heap):      7223 bytes
------------------------------------------------------- 

Getting from 787 bytes free to 7,223 bytes free, so that Powernoter can really run on Aplite, involved many changes: some I'd say were generally good practice (reducing statics and instead using structs which are allocated and freed), and some less so (removing error log messages).

In general I don't think the code looks too unreadable as a result of supporting Aplite ... certainly I'd prefer not to have as many #ifdefs sprinkled throughout my code as I have, but it's not that bad.

You may also wish to check out this Pebble presentation on Pebble app memory usage.

One thing is for sure, the changes I had to make to Powernoter to get it to run on Aplite are nothing compared with the miracles the Pebble team pulled to get the original Pebble to support the same SDK as Pebble Time and Pebble Time Round.

About me

I'm an independent consultant and speaker, available for ad-hoc Pebble, Android, Android Wear and Tizen consulting and development.

If you like and use Powernoter, please consider supporting it.

On the other hand if something is missing or doesn't work, check out this Trello board where you can comment to request enhancements or report bugs.

Filed under: Pebble, Wearables
13 Nov 2015

Making using TypeScript for Google Apps Scripts more convenient on OS X

I've started to use TypeScript in IntelliJ, and wanted to use it for a Google Apps Script App that I'm writing.

There are a couple of issues with using TypeScript for this: the first is that Google Apps Script doesn't directly support TypeScript, and the second is that the Apps Script editor is web-based.

The first issue isn't really an issue, since the TypeScript is transpiled directly into JavaScript. But the second one is an issue. It would be painful to have to open the generated JavaScript in IntelliJ, copy it into the clipboard, activate the web-based editor, select the old content, paste the new content from the clipboard, and save it, every time I make a change to the TypeScript.

Fortunately I've found a simple way to automate all of this using AppleScript.

Firstly, I ensure that the Apps Script editor is open in its own window. My project is called "Documote" and this is what the Google Chrome window looks like:
documote chrome window

Secondly I've created this AppleScript file to copy the generated JavaScript to that project:

try
    set project_name to "Documote"
    set file_name to "/Users/damian/.../documote/Code.js"
    set the_text to (do shell script "cat " & file_name)
    set the clipboard to the_text
    tell application "Google Chrome"
        set visible of window project_name to false
        set visible of window project_name to true
        activate window project_name
        tell application "System Events" to keystroke "a" using command down
        paste selection tab project_name of window project_name
        tell application "System Events" to keystroke "s" using command down
    end tell
on error errMsg
    display dialog "Error: " & errMsg
end try

You'd need to change the first couple of lines to reflect your situation. The reason for hiding and showing the window is to activate the window.

Once you have the AppleScript you can assign it a shortcut.

Filed under: Uncategorized
11 Nov 2015

Building an Amazon Echo Skill to create Evernote notes

First, a demo: Alexa, tell Evernote to create a note "Remember to call my Mother":

I recently acquired an Amazon Echo, and although there is limited support for interacting with Evernote via IFTTT, I wanted to simply create Evernote notes as in the demo above.

I'm going to share how I created an Amazon Echo Skill to accomplish what is shown in the video above, and what roadblocks I hit on the way.

Updating the example

I started with the sample Amazon Echo skill which uses lambdas, and got that working pretty quickly.

To update it to work with Evernote, I changed the JavaScript code that recognized the intent to invoke saveNote when the intent is TakeANote (you'll see where this intent is set up later):

/**
 * Called when the user specifies an intent for this skill.
 */
function onIntent(intentRequest, session, callback) {
    console.log("onIntent requestId=" + intentRequest.requestId +
        ', sessionId=' + session.sessionId);
    var intent = intentRequest.intent, intentName = intentRequest.intent.name;
    // Dispatch to your skill's intent handlers
    if ("TakeANote" === intentName) {
        saveNote(intent, session, callback);
    }
    else {
        throw "Invalid intent: " + intentName;
    }
}

Creating the note

My code to create the Evernote note (the saveNote function invoked above) is pretty much boilerplate. It pulls the content from the list of slots (defined below) and uses it to create a note using the Evernote API:

function saveNote(intent, session, callback) {
    var cardTitle = intent.name;
    var contentSlot = intent.slots["Content"];
    var repromptText = "";
    var sessionAttributes = [];
    var shouldEndSession = false;
    var speechOutput = "";
    if (contentSlot) {
        var noteText = contentSlot.value;
        sessionAttributes = [];
        speechOutput = "OK.";
        repromptText = "What was that?";
        shouldEndSession = true;
        var noteStoreURL = '...';
        var authenticationToken = '...';
        var noteStoreTransport = new Evernote.Thrift.NodeBinaryHttpTransport(noteStoreURL);
        var noteStoreProtocol = new Evernote.Thrift.BinaryProtocol(noteStoreTransport);
        var noteStore = new Evernote.NoteStoreClient(noteStoreProtocol);
        var note = new Evernote.Note();
        note.title = "New note from Alexa";
        var nBody = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
        nBody += "<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">";
        nBody += "<en-note>" + noteText + "</en-note>";
        note.content = nBody;
        noteStore.createNote(authenticationToken, note, function (result) {
            console.log('Create note result: ' + JSON.stringify(result));
            callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
        });
    }
    else {
        speechOutput = "I didn't catch that note, please try again";
        repromptText = "I didn't hear that note.  You can take a note by saying Take a Note followed by your content";
        callback(sessionAttributes, buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
    }
}

Notice the hard-coded authenticationToken? That means this will only work with my account. To work with anyone's account, including yours, we'd obviously need to do something different. More on that in a moment.

Packaging it up

I zipped up my JavaScript file, together with my node_modules folder and a node package.json:

{
  "name": "AlexaPowerNoter",
  "version": "0.0.0",
  "private": true,
  "dependencies": {
    "evernote": "~1.25.82"
  }
}

Once done, I uploaded my zip to my Amazon Skill, and then published it.

The Skill information

This is the skill information I used:
Alexa Skill Information
Obviously I couldn't use the trademarked term "Evernote" as the Invocation Name in something that was public, but just for testing for myself, I think I'm OK.

The Interaction Model

I defined the interaction model like this:
Alexa Interaction Model
The sample utterances are way too limited here - Amazon recommend having several hundred utterances for situations where you allow free-form text. It would also be cool to have an intent to let you search Evernote.
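
For context, the intent schema behind that screenshot looked something like this; I'm writing it from memory, and the AMAZON.LITERAL slot type in particular is an assumption on my part, so treat it as illustrative rather than authoritative:

{
  "intents": [
    {
      "intent": "TakeANote",
      "slots": [
        { "name": "Content", "type": "AMAZON.LITERAL" }
      ]
    }
  ]
}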

Once I'd done this, and set up my Echo to use my development account, I could create notes.

Authentication roadblock

The next step was to link anyone's Evernote account into the Skill. This is where I hit the roadblock. Amazon require that the authentication support the OAuth 2.0 implicit grant, and Evernote supports OAuth 1.0. I could attempt to create a bridging service, but the security implications of doing so are scary, and doing it properly would require more time than I have right now.

The source is in GitHub

I've published the source to this app in my GitHub account here. If you are a developer and want to try it out, get an Evernote Developer auth token and plug the URL and token into the noteStoreURL and authenticationToken above.

Filed under: Uncategorized
30 Aug 2015

Android 5.0 Media Browser APIs

When I read the release notes for the Android 5.0 APIs I was delighted to see this:

Android 5.0 introduces the ability for apps to browse the media content library of another app, through the new android.media.browse API.

I set out to try to browse the media in a variety of apps I had installed on my phone.

First I listed the apps that supported the MediaBrowserService:

  private void discoverBrowseableMediaApps(Context context) {
    PackageManager packageManager = context.getPackageManager();
    Intent intent = new Intent(MediaBrowserService.SERVICE_INTERFACE);
    List<ResolveInfo> services = packageManager.queryIntentServices(intent, 0);
    for(ResolveInfo resolveInfo : services) {
      if(resolveInfo.serviceInfo != null && resolveInfo.serviceInfo.applicationInfo != null) {
        ApplicationInfo applicationInfo = resolveInfo.serviceInfo.applicationInfo;
        String label = (String) packageManager.getApplicationLabel(applicationInfo);
        Drawable icon = packageManager.getApplicationIcon(applicationInfo);
        String packageName = resolveInfo.serviceInfo.packageName;
        String className = resolveInfo.serviceInfo.name;
        publishProgress(new AudioApp(label, packageName, className, icon));
      }
    }
  }

The publishProgress method updated the UI and soon I had a list of apps that supported the MediaBrowserService:

Apps that support MediaBrowserService

Next, I wanted to browse the media they exposed using the MediaBrowser classes:

...
public class BrowseAppMediaActivity extends ListActivity {
  private static final String TAG = "BrowseAppMediaActivity";
  ...
  private final MediaBrowserConnectionListener mMediaBrowserListener =
      new MediaBrowserConnectionListener();
  private MediaBrowser mMediaBrowser;

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    ...
    Log.d(TAG, "Connecting to " + packageName + " / " + className);
    ComponentName componentName = new ComponentName(packageName, className);

    Log.d(TAG, "Creating media browser ...");
    mMediaBrowser = new MediaBrowser(this, componentName, mMediaBrowserListener, null);

    Log.d(TAG, "Connecting ...");
    mMediaBrowser.connect();
  }

  private final class MediaBrowserConnectionListener extends MediaBrowser.ConnectionCallback {
    @Override
    public void onConnected() {
      Log.d(TAG, "onConnected");
      super.onConnected();
      String root = mMediaBrowser.getRoot();
      Log.d(TAG, "Have root: " + root);
    }

    @Override
    public void onConnectionSuspended() {
      Log.d(TAG, "onConnectionSuspended");
      super.onConnectionSuspended();
    }

    @Override
    public void onConnectionFailed() {
      Log.d(TAG, "onConnectionFailed");
      super.onConnectionFailed();
    }
  }
}

I’ve cut some code, but assume that the packageName and className are as they were when queried above. No matter what I did, and which app I queried, the onConnectionFailed method was invoked.

Here is the log from when I tried to query the Google Music App:

29195-29195/testapp D/BrowseAppMediaActivity﹕ Connecting to com.google.android.music / com.google.android.music.browse.MediaBrowserService
29195-29195/testapp D/BrowseAppMediaActivity﹕ Creating media browser …
29195-29195/testapp D/BrowseAppMediaActivity﹕ Connecting …
16030-16030/? I/MusicPlaybackService﹕ onStartCommand null / null
16030-16030/? D/MediaBrowserService﹕ Bound to music playback service
16030-16030/? D/MediaBrowserService﹕ onGetRoot fortestapp
16030-16030/? E/MediaBrowserService﹕ package testapp is not signed by Google
16030-16030/? I/MediaBrowserService﹕ No root for client testapp from service android.service.media.MediaBrowserService$ServiceBinder$1
724-819/? I/ActivityManager﹕ Displayed testapp/.BrowseAppMediaActivity: +185ms
29195-29195/testapp E/MediaBrowser﹕ onConnectFailed for ComponentInfo{com.google.android.music/com.google.android.music.browse.MediaBrowserService}
29195-29195/testapp D/BrowseAppMediaActivity﹕ onConnectionFailed

Notice the message about my app not being signed by Google on line 7?

I'm assuming that only authorized apps are allowed to browse Google's music app, such as Google apps supporting Android Wear and Android Auto, but not arbitrary third-party apps. Indeed the documentation for people implementing MediaBrowserService.onGetRoot indicates that:

The implementation should verify that the client package has permission to access browse media information before returning the root id; it should return null if the client is not allowed to access this information.

This makes sense, but it is disappointing. Just as users can grant specific apps access to notifications, it would be nice if they could also grant specific apps the right to browse other apps' media.

Please let me know if you discover I am wrong!

Filed under: Android
23 Aug 2015

Using Android Wear to control Google Cardboard Unity VR

Using a VR headset, even one as simple as Google Cardboard, can be mind-blowing. Nevertheless, it is the little things that can be disconcerting. For example, looking down and seeing you have no arms, despite the fact that they still very much feel as though they exist.

I’m convinced that VR experiences are going to transform not just games, but interaction with computers in general, and I’ve been experimenting with some ideas I have about how to create truly useful VR experiences.

As I was working to implement one of my ideas, it occurred to me that I might be able to use the orientation sensors in the Android Wear device I was wearing.  Why not use them as input into the VR experience I was creating?  What if I could bring part of my body from the real world into the VR world?  How about an arm?

I decided to try to find out, and this was the answer:

The experience is nowhere near good enough for games.  But I don’t care about games.  I want to create genuinely useful VR experiences for interacting with computers in general, and I think this is good enough.  I can point to objects, and have them light up.  I can wear smart watches on both wrists (because I really am that cool) and have two arms available in the VR world. 

By tapping and swiping on the wearable screens I can activate in-world functionality, without being taken out of it.  It sure beats sliding a magnet on the side of my face, because it is my arm I am seeing moving in the virtual world.

In the rest of this article I'm going to describe some of the technical challenges behind implementing this, how I overcame them, and some of the resources I used on the way.

The tools

This is part of my workspace: Android Studio on the left, Unity on the top-right and MonoDevelop on the bottom-left:

my workspace

I had many reference browser windows open on other screens (obviously), and creating this solution required being very comfortable in Android, Java and C#. I'm relatively new to Unity.

Creating a Unity Java Plugin by overriding the Google Cardboard Plugin

The Unity Android Plugin documentation describes how you can create plugins by extending the UnityPlayerActivity Java class, and I experimented with this a little.  I created an Android Library using Android Studio, and implemented my own UnityPlayerActivity derived class.

After a little hassle, I discovered that Unity now supports the "aar" files generated when compiling libraries in Android Studio, although I found the documentation a little out of date on the matter in places. It was simply a question of copying my generated "aar" file into Unity under Assets|Plugins|Android.

image

image

When it came to a Google Cardboard Unity project, what I discovered, though, was that Google had got there first. They had created their own UnityPlayerActivity called GoogleUnityActivity. What I needed to do was override Google's override:

image

I included Google’s unity classes as dependencies in my library project:

image

Once I’d copied the aar file into the Unity Android Plugins folder and ran the test app, I was delighted to see my activity say “Cooey” in the log.

image

Receiving the watch's orientation on the phone

The next step was to receive Android Wear messages from the watch containing its orientation.

I recreated my project, this time including support for Android Wear:

image

I made the Unity activity I’d created do a little more than say “Cooey”. 

First I used the Capabilities mechanism to tell other Android Wear devices that this device (the phone) was interested in arm orientation messages:

image

… and I set it up to receive Android Wear messages and pass them over to Unity using UnitySendMessage:

image

Sending the watch’s orientation to the phone

This was simply a question of looking out for Android Wear nodes that supported the right capability, listening for orientation sensor changes, and sending Android Wear messages to the right node.  This is the watch code:

image

I did discover that some wearables don’t support the required sensors, although I imagine more modern ones will.

Using the watch’s orientation to animate a block on the screen

Inside Unity I created a cube which I tweaked into a rectangle, and made it a child of the CardboardMain's camera, so that it moved when I moved:

image

See the “Script” field on the bottom right-hand side?  I have a script called “WristController” that is attached to the “wrist” (white blob).  This is where I receive orientation messages sent from the watch, via the UnityPlayerActivity derived Java class I’d created.

I started off simply setting the block's orientation to the received orientation by assigning to transform.eulerAngles:

image

This worked, but was super-jerky.  I went searching and discovered Lerps and Slerps for smoothly moving from one rotation to another.  My updated code:

image
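
Reconstructing it from memory, the script ended up looking something like this; it's a sketch, so the exact field names and the "x,y,z" message format are assumptions, but the Slerp is the important part:

using System.Globalization;
using UnityEngine;

// A sketch of a WristController receiving orientation data from the Java plugin.
// UnitySendMessage calls a method by name with a single string argument,
// so the watch's orientation arrives here as "x,y,z" text (my assumed format).
public class WristController : MonoBehaviour {
  public float smoothing = 5f;           // higher = snappier, lower = smoother
  private Quaternion targetRotation = Quaternion.identity;

  // Called from Java via UnityPlayer.UnitySendMessage("Wrist", "OnWatchOrientation", "x,y,z")
  public void OnWatchOrientation(string message) {
    string[] parts = message.Split(',');
    if (parts.Length != 3) return;
    targetRotation = Quaternion.Euler(
      float.Parse(parts[0], CultureInfo.InvariantCulture),
      float.Parse(parts[1], CultureInfo.InvariantCulture),
      float.Parse(parts[2], CultureInfo.InvariantCulture));
  }

  void Update() {
    // Slerp toward the latest watch orientation instead of snapping to it,
    // which removes the jerkiness of assigning transform.eulerAngles directly
    transform.localRotation = Quaternion.Slerp(transform.localRotation, targetRotation,
      Time.deltaTime * smoothing);
  }
}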

Animating an arm instead of a block

I was pleased to be smoothly animating a block, but my arm doesn't quite look like that. It is more armish. I went looking for a model of an arm that I could import and use instead. I found a YouTube Unity video called ADDING ARMS by Asbjørn Thirslund, in which he explains how to import and use a free arms model by eafg.

It was simply a question of sizing and positioning the arms properly as a child of the Cardboard main camera, and then adding the script I’d used to animate the block.

I also removed the right-hand arm, since it looked a little weird to have a zombie arm doing nothing.

image

The ArmController script you see in this screen capture has the same contents as the WristController I’d used to move the block.

Final Thoughts

There is enough of a lag to make this technique impractical for games, but not enough to make it impractical for the kinds of non-game experiences I have in mind. 

I’d also need to add calibration, since the watch may be pointing in any direction initially – if I assume it always starts straight out, that would be good enough.  Detecting where the arm is pointing shouldn’t be too hard, since the cardboard code already does gaze detection – so many possibilities, but so little time for side-projects such as this!

This has been a fun interlude on my way to creating what I hope to be a genuinely useful VR experience based around browsing books your friends have read … more on that later.

29 May 2015

Updating the Pebble Emulator python code

I recently wanted to make some changes to the Pebble emulator, which uses the PyV8 Python-JavaScript bridge to emulate the phone environment running your phone-based JavaScript companion app.
Screenshot 2015-05-29 12.46.07

These are some notes on how I did this, mainly so that I remember if I need to do it again, and also just in case it helps anyone else.

The first thing I did was to clone the Pebble Python PebbleKit JS implementation, used in the emulator. The original is at https://github.com/pebble/pypkjs and mine is at https://github.com/DamianMehers/pypkjs

Once I'd done that I cloned my fork locally onto my Mac, and followed the instructions to build it.

It needs a copy of the Pebble qemu open-source emulator to talk to, and I started off trying to clone the Pebble qemu and build it locally. Half-way through it occurred to me that I already had a perfectly good qemu locally, since I already had the Pebble Dev Kit installed.

By running a pbw in the emulator, with the debug switch enabled, I was able to determine the magic command to start the emulator locally:

Screenshot 2015-05-29 12.51.32

I copied the command, added some quotes around parameters that needed them, and was able to launch the emulator in one window:

Screenshot 2015-05-29 12.51.32

The phone simulator in another window:

Screenshot 2015-05-29 12.54.04

And then my app in another:

Screenshot 2015-05-29 12.55.08

Once I was up and running I started making changes to the Python code. Since I'd never written a line of Python before, I made liberal use of existing code to make the changes I needed.

It all ended well when my pull request containing my changes to support sending binary data was accepted into the official Pebble codebase, meaning that Evernote now runs in the emulator.

Filed under: Pebble
26 May 2015

Capture your Mac screen activity into daily videos

Screenshot 2015-05-26 14.42.37
I know I'm not alone in wishing there was a TimeSnapper equivalent for the Mac. Among many things, it lets you look back in time at what you were doing on your computer minutes, hours or days ago.

Perfect for remembering what you were doing yesterday, and even for recovering stuff that was displayed on your screen.

Inspired by TimeSnapper, I've created a small bash script that I've called MacBlackBox, which takes regular screenshots every few seconds. Every hour it combines the screenshots into an mp4 video, and every day it combines the hourly videos into daily videos, one per screen.
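
The core idea is nothing more than a loop around the built-in screencapture command plus an ffmpeg step; a highly simplified sketch (not the actual script, and covering only the main display) looks something like this:

#!/bin/bash
# Grab a screenshot of the main display every 5 seconds
mkdir -p "$HOME/MacBlackBox"
while true; do
  screencapture -x "$HOME/MacBlackBox/$(date +%Y%m%d-%H%M%S).png"
  sleep 5
done

# Later (e.g. from a scheduled job), combine the accumulated screenshots into a video
ffmpeg -framerate 6 -pattern_type glob -i "$HOME/MacBlackBox/*.png" -c:v libx264 -pix_fmt yuv420p hourly.mp4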

It is available on GitHub here. Happy to accept improvement suggestions.

Filed under: Uncategorized