Damian Mehers' Blog – Evernote and Wearable devices. All opinions my own.

28 May 2014

Interview for Connectedly on Evernote and Wearables

I recently gave a brief interview about Evernote and Wearables, with special focus on the Pebble, for Adam Zeis at Connectedly, part of the Mobile Nations group (Android Central, iMore, etc).

More here.

Filed under: Uncategorized
1 May 2014

Evernote on your Pebble: your desktop duplicated?

At first glance it might look as though Evernote on the Pebble is simply a clone of Evernote for the desktop.

That would make absolutely no sense whatsoever, given that the Pebble has an entirely different form factor, with very different uses.

I’d like to share some of the ways in which Evernote on the Pebble has been tailored to the wrist-based experience, and what you can do to get the most out of it.   But first …

A step back … why wearables?

Earlier this year at the MCE conference I presented a hierarchy of uses for wearable devices:

  • Notifications, especially smart notifications based on your context, for example based on your current location, or who you are with, such as those provided by Google Now;
  • Sensors, especially health sensors, but also environmental sensors. Very soon we will examine the devices of someone who just died, as a kind of black box to determine what happened.
  • Control of the environment around you, such as the music playing on your phone or your house lights. The key is that you have to be able to do it without thinking about it … maybe gesture-based controls.
  • Capture of information, such as taking audio notes, or photos from your watch or Glass.
  • Consumption of information, such as viewing Evernote notes.  The key to this being useful is that the effort to view the information on your watch must be significantly lower than the effort to pull out your phone, unlock it, start the appropriate app, and navigate/search for the information.  Ideally the information should be pre-prepared for easy consumption based on your context, such as where you are, or what you are doing.

How does Evernote fit in?

Notifications work without the Evernote Pebble app

The Pebble already provides notifications from apps, so that when an Evernote reminder notification fires on your phone …

… you’ll see that notification on your watch.

As the Evernote phone apps become more sophisticated about providing smarter, context-based notifications, you’ll get that for free on your watch. 

The Evernote app for the Pebble is very much focused on the last item in that list: consumption.

Easy access to your most important information: Your Shortcuts

On the desktop and mobile versions of Evernote, you use Shortcuts to give you easy, instant access to your most important information. Perhaps it’s information that you always need to have at your fingertips, or that you are working on right now.


It stands to reason that on the Pebble we’d give you an easy way to access those Shortcuts, and we do:


But wouldn’t it be cool if you could access your most important information, your shortcuts, as soon as you start Evernote? 


We thought so too, which is why you can put your Shortcuts in the top-level menu, before all the other Evernote menu items, so that you can see your most important stuff instantly:


Context-sensitive information: nearby notes

If you are about to walk into a meeting, or into a restaurant, then nearby notes are your friend:


This shows the notes that you created closest to your current location (yes, you can toggle between miles and kilometers), so that if you are about to go into a meeting with someone …


… you can quickly remind yourself about the person you are about to meet:


Activity-sensitive information: a custom checklist experience

Ideally Evernote for the Pebble would automatically detect that you are in the supermarket, and present you with your shopping list.  It doesn’t do that yet, but it does make it easy for you to check and uncheck checkboxes.

Specifically, it looks for all your notes that have unchecked checkboxes in them, and presents them as a list.  If you choose one, it displays just the checkboxes from that note, and lets you check and uncheck them.

This makes for a super-convenient shopping experience.  If you’ve ever had to juggle a small child in one hand, a supermarket trolley in the other hand, and a mobile phone in the other hand, you’ll really appreciate being able to quickly and easily check items off, as you buy them:


What’s more, if you remembered to use Evernote on your phone to take a photo of the yoghurt pot back home, because you knew that you were likely to be overwhelmed when faced with a vast array of dairy produce at the shop …


… then you can navigate to that note on your watch, and glance at the photo:


The Pebble’s screen is quite small, and black-and-white, so you may need to squint a little to make out the photo!

Easy access to your most important notes: Reminders

If you don’t make much use of Reminders, then you might be a little puzzled to see a dedicated Reminders menu item on the Pebble:


The reason is that many many people use Reminders as a way of “pinning” important notes to the top of their notes list.  Reminders are always shown at the top of the note list on the desktop apps:


On your Pebble you have quick and easy access to these important notes:


You can view a reminder:


And you can mark it as “done” by long-pressing:


Information at a glance.  When is it a chore, and when is it a glance?

The ideal Evernote experience on your watch gives you instant access to your most important information.  Evernote on the Pebble does this by giving you quick and easy access to your shortcuts, nearby notes, checklists and reminders.

But sometimes, that isn’t enough.  Then you have a choice: do you pull out your phone, unlock it, start Evernote, and search or navigate to the information you want? Or, if it is a small text note, might it be easier to navigate to it on your watch?

Depending on what kind of a person you are, and on how you use Evernote, the idea of navigating to your notes on your watch, by drilling down using Tags (for example) might seem either laughably complex, or super-cool and powerful.  If you are an early-adopter of wearable technology, for example if you were a Pebble Kickstarter backer, then chances are you fall into the second camp.

This is the reason for the other menu items I have not discussed above: Notebooks, Tags, and Saved Searches.  For some people, it would be much easier to quickly drill down to a note on their watch, than to pull out their phone.


Glancability may not be a real word, but if it were, it would be in the eye of the beholder.

The future of Evernote on wearables

By providing you with a customized experience on the Pebble, Evernote serves you information based on what is most important to you (shortcuts and reminders), what makes sense based on your current context (nearby notes, checklist notes) as well as the more traditional ways of accessing your notes (notebooks, tags, saved searches).

These are very early days for wearable technologies.  Evernote for the Pebble is a start … as the capabilities of wearable devices evolve, so will your Evernote wearable experience.  Evernote is very much about working in symbiosis with you, completing your thoughts for you, providing information to you before you even know you need it.  There is so much more to come.

Filed under: Evernote, Pebble
9 Feb 2014

Understanding the Chrome Sync Protocol

Chrome is a cool browser, but its secret sauce is that no matter whether you are using iOS, Windows, Mac, Android, Linux or ChromeOS, you can sync your bookmarks, passwords, recently viewed URLs and more.

Did you notice any OS missing?  No?  OK, so perhaps you don’t use Windows Phone.

But I do, as well as Android and iOS, and it bugged me that there was no way to sync all my Chrome goodness to Windows Phone, since Chrome is not available for Windows Phone.

So I implemented my own Chrome sync engine on Windows Phone, and in the process learned how Chrome sync works.

In this post I'll share what I learned, including how you authenticate in order to use it.

I'm going to do this by way of the free Chrome sync app I created for Windows Phone, called Chrync.


I reasoned that there must be a way of talking the Chrome sync protocol directly to Google's servers, since Chrome itself does it.

I started off by downloading the Chrome source code, building it, and running it with a debugger attached.

I also discovered the wonderful world of Chrome debug pages, which are very helpful, especially the sync internals page which you can access by navigating to chrome://sync-internals/

Protocol Buffers

I found that the Chrome sync protocol is layered on top of a Google technology called Protocol Buffers, with the Chrome sync structures being defined in a language-independent Protocol Buffers IDL.

The main source is at http://src.chromium.org/viewvc/chrome/trunk/src/sync/protocol/, and there you’ll find the message types that are sent to and from the Google servers when a sync occurs.

If you want to browse, I suggest starting with sync.proto which defines the SyncEntity message containing core sync item fields, including an EntitySpecifics (also defined in sync.proto). 

The EntitySpecifics message contains a load of optional fields such as BookmarkSpecifics (used for syncing bookmarks), TypedUrlSpecifics (recently browsed URLs), PasswordSpecifics (saved passwords), SessionSpecifics (open sessions) and NigoriSpecifics (used for decrypting all this stuff).


Over time various extensions have been defined.  Indeed every time I check the GIT source repository it seems that something new is happening, such as SyncedNotificationSpecifics.

Converting the protocol definitions to native code

I wanted to talk the Chrome protocol on Windows Phone, and went hunting for a C# implementation of Protocol Buffers that worked on Windows Phone.  I found two: protobuf-net by Marc Gravell and protobuf-csharp-port by Jon Skeet which I ended up using.

I was able to generate C# proxies for the Chrome sync protocol buffer files, and link in the .NET protocol buffers runtime.


The next step was to work out how to authenticate.

Requesting OAuth 2.0 access to Chrome sync data

Like many Google users, I use two factor authentication, and since I am especially paranoid, I have a custom Chrome sync passphrase defined.

Since I was making the app mainly for myself I needed to support both two factor authentication and custom passphrases.

Google has a standard OAuth 2.0 implementation, which they describe here.

You direct the user to a Google web site with an authentication request, specifying in the scope parameter what access you require; for example, you use userinfo.email to request access to the user’s email address.

You can indicate that your app requires access to all kinds of Google services using the Google Cloud Console.  You’ll notice, though, that there is no way to specify access to a user’s Chrome sync data.

After a little digging I discovered the magic string to put in the scope parameter to request access to Chrome sync data.  In fact I ask for access to both the user’s email address and their Chrome sync data. The scope I use is  https://www.googleapis.com/auth/userinfo.email+https://www.googleapis.com/auth/chromesync
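For what it’s worth, here is a minimal sketch of how such an authorization URL might be assembled for an installed app.  The endpoint and parameter names are Google’s standard OAuth 2.0 ones; the client ID and redirect URI are placeholders, not Chrync’s actual values.

// Sketch: building the Google OAuth 2.0 authorization URL that requests access to
// the user's email address and their Chrome sync data.
// The client ID and redirect URI below are placeholders for your own registered app.
using System;

static class AuthUrl {
  const string ClientId = "YOUR_CLIENT_ID.apps.googleusercontent.com";  // placeholder
  const string RedirectUri = "urn:ietf:wg:oauth:2.0:oob";               // installed-app flow
  const string Scope = "https://www.googleapis.com/auth/userinfo.email" +
                       " https://www.googleapis.com/auth/chromesync";

  public static string Build() {
    return "https://accounts.google.com/o/oauth2/auth" +
           "?response_type=code" +
           "&client_id=" + Uri.EscapeDataString(ClientId) +
           "&redirect_uri=" + Uri.EscapeDataString(RedirectUri) +
           "&scope=" + Uri.EscapeDataString(Scope);
  }
}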

The OAuth 2.0 process runs inside a web browser I host within the app.  You log in, using two-factor authentication if it is enabled, and then you are asked whether you want to give the app the access that it requests.

For some reason, Google’s OAuth prompts are always in German for me, despite the fact that I speak no German, and although I live in Switzerland, I live in a French speaking area.  If you don’t speak German you’ll have to take my word for it that it is prompting for permission to access your email address and your Chrome sync data.


The result of this authentication is two tokens: an access token, which is good for a certain amount of time, and a refresh token, which can be used to generate a new access token when the access token expires.
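When the access token expires, the refresh token is exchanged for a new one at Google’s standard OAuth 2.0 token endpoint.  A rough sketch follows; there is nothing Chrome-specific here, the client id and secret are placeholders, and JSON parsing is omitted.

// Sketch: exchanging a refresh token for a new access token at Google's standard
// OAuth 2.0 token endpoint.  Client id/secret are placeholders.
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static class TokenRefresher {
  public static async Task<string> RefreshAsync(string refreshToken) {
    using (var http = new HttpClient()) {
      var body = new FormUrlEncodedContent(new Dictionary<string, string> {
        { "client_id", "YOUR_CLIENT_ID" },          // placeholder
        { "client_secret", "YOUR_CLIENT_SECRET" },  // placeholder
        { "refresh_token", refreshToken },
        { "grant_type", "refresh_token" },
      });
      var response = await http.PostAsync("https://accounts.google.com/o/oauth2/token", body);
      response.EnsureSuccessStatusCode();
      // The JSON response contains "access_token" and "expires_in"; parse it with
      // whatever JSON library you prefer (omitted here).
      return await response.Content.ReadAsStringAsync();
    }
  }
}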

Building the sync request

Initiating the sync process involves making an HTTP request to https://clients4.google.com/chrome-sync, setting a “Bearer” HTTP header to the access token. The body of the message is an octet-stream containing the sync request.
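A minimal sketch of that request, assuming the usual Authorization: Bearer form of the header, and a BuildSyncRequest() method like the one shown further down:

// Sketch: POSTing the serialized ClientToServerMessage to the Chrome sync endpoint.
// Assumes an OAuth 2.0 access token and a BuildSyncRequest() helper (see below).
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class SyncClient {
  public static async Task<byte[]> PostSyncRequestAsync(string accessToken, byte[] requestBytes) {
    using (var http = new HttpClient()) {
      http.DefaultRequestHeaders.Authorization =
          new AuthenticationHeaderValue("Bearer", accessToken);
      var content = new ByteArrayContent(requestBytes);
      content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
      var response = await http.PostAsync("https://clients4.google.com/chrome-sync", content);
      response.EnsureSuccessStatusCode();
      // The response body is a serialized ClientToServerResponse (see "Handling the sync response").
      return await response.Content.ReadAsByteArrayAsync();
    }
  }
}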

The sync request itself is a GetUpdatesMessage wrapped in a ClientToServerMessage, both of which are defined in sync.proto:

GetUpdatesMessage
525    message GetUpdatesMessage {
526      // Indicates the client's current progress in downloading updates.  A
527      // from_timestamp value of zero means that the client is requesting a first-
528      // time sync.  After that point, clients should fill in this value with the
529      // value returned in the last-seen GetUpdatesResponse.new_timestamp.
530      //
531      // from_timestamp has been deprecated; clients should use
532      // |from_progress_marker| instead, which allows more flexibility.
533      optional int64 from_timestamp = 1;
534    
535      // Indicates the reason for the GetUpdatesMessage.
536      // Deprecated in M29.  We should eventually rely on GetUpdatesOrigin instead.
537      // Newer clients will support both systems during the transition period.
538      optional GetUpdatesCallerInfo caller_info = 2;
539    
540      // Indicates whether related folders should be fetched.
541      optional bool fetch_folders = 3 [default = true];
542    
543      // The presence of an individual EntitySpecifics field indicates that the
544      // client requests sync object types associated with that field.  This
545      // determination depends only on the presence of the field, not its
546      // contents -- thus clients should send empty messages as the field value.
547      // For backwards compatibility only bookmark objects will be sent to the
548      // client should requested_types not be present.
549      //
550      // requested_types may contain multiple EntitySpecifics fields -- in this
551      // event, the server will return items of all the indicated types.
552      //
553      // requested_types has been deprecated; clients should use
554      // |from_progress_marker| instead, which allows more flexibility.
555      optional EntitySpecifics requested_types = 4;
556    
557      // Client-requested limit on the maximum number of updates to return at once.
558      // The server may opt to return fewer updates than this amount, but it should
559      // not return more.
560      optional int32 batch_size = 5;
561    
562      // Per-datatype progress marker.  If present, the server will ignore
563      // the values of requested_types and from_timestamp, using this instead.
564      //
565      // With the exception of certain configuration or initial sync requests, the
566      // client should include one instance of this field for each enabled data
567      // type.
568      repeated DataTypeProgressMarker from_progress_marker = 6;
569    
570      // Indicates whether the response should be sent in chunks.  This may be
571      // needed for devices with limited memory resources.  If true, the response
572      // will include one or more ClientToServerResponses, with the frist one
573      // containing GetUpdatesMetadataResponse, and the remaining ones, if any,
574      // containing GetUpdatesStreamingResponse.  These ClientToServerResponses are
575      // delimited by a length prefix, which is encoded as a varint.
576      optional bool streaming = 7 [default = false];
577    
578      // Whether the client needs the server to provide an encryption key for this
579      // account.
580      // Note: this should typically only be set on the first GetUpdates a client
581      // requests. Clients are expected to persist the encryption key from then on.
582      // The allowed frequency for requesting encryption keys is much lower than
583      // other datatypes, so repeated usage will likely result in throttling.
584      optional bool need_encryption_key = 8 [default = false];
585    
586      // Whether to create the mobile bookmarks folder if it's not
587      // already created.  Should be set to true only by mobile clients.
588      optional bool create_mobile_bookmarks_folder = 1000 [default = false];
589    
590      // This value is an updated version of the GetUpdatesCallerInfo's
591      // GetUpdatesSource.  It describes the reason for the GetUpdate request.
592      // Introduced in M29.
593      optional SyncEnums.GetUpdatesOrigin get_updates_origin = 9;
594    
595      // Whether this GU also serves as a retry GU. Any GU that happens after
596      // retry timer timeout is a retry GU effectively.
597      optional bool is_retry = 10 [default = false];
598    };

 

This is my code to build this sync request:
/// <summary>
/// Builds a sync request to be sent to the server.  Initializes it based on the user's selected
/// sync options, and previous sync state
/// </summary>
/// <returns></returns>
private byte[] BuildSyncRequest() {
  D("BuildSyncRequest invoked");
  // This ClientToServerMessage is generated from the sync.proto definition
  var myRequest = ClientToServerMessage.CreateBuilder();
  myRequest.SetShare(_syncOptions.User);
  using (var db = _databaseFactory.Get()) {
    if (db == null) throw new Exception("User logged out");

    var syncState = db.GetSyncState();

    // We want to get updates, other options include COMMIT to send changes
    myRequest.SetMessageContents(ClientToServerMessage.Types.Contents.GET_UPDATES);

    var callerInfo = GetUpdatesCallerInfo.CreateBuilder();
    callerInfo.NotificationsEnabled = true;
    callerInfo.SetSource(GetUpdatesCallerInfo.Types.GetUpdatesSource.PERIODIC);
    var getUpdates = GetUpdatesMessage.CreateBuilder();
    getUpdates.SetCallerInfo(callerInfo);
    getUpdates.SetFetchFolders(true);

    // Tell the server what kinds of sync items we can handle

    // We need this in case the user has encrypted everything ... nigori is to get decryption
    // keys to decrypted encrypted items
    var nigoriDataType = InitializeDataType(db, EntitySpecifics.NigoriFieldNumber);
    getUpdates.FromProgressMarkerList.Add(nigoriDataType.Build());

    // We include bookmarks if the user selected them
    if ((_syncOptions.Flags & SyncFlags.Bookmarks) == SyncFlags.Bookmarks) {
      // The field is initialized with state information from the last sync, if any, so that
      // we only get changes since the latest sync
      var bookmarkDataType = InitializeDataType(db, EntitySpecifics.BookmarkFieldNumber);
      getUpdates.FromProgressMarkerList.Add(bookmarkDataType.Build());
    }

    if ((_syncOptions.Flags & SyncFlags.OpenTabs) == SyncFlags.OpenTabs) {
      var sessionDataType = InitializeDataType(db, EntitySpecifics.SessionFieldNumber);
      getUpdates.FromProgressMarkerList.Add(sessionDataType.Build());
    }

    if ((_syncOptions.Flags & SyncFlags.Omnibox) == SyncFlags.Omnibox) {
      var typedUrlDataType = InitializeDataType(db, EntitySpecifics.TypedUrlFieldNumber);
      getUpdates.FromProgressMarkerList.Add(typedUrlDataType.Build());
    }

    if ((_syncOptions.Flags & SyncFlags.Passwords) == SyncFlags.Passwords) {
      var passwordDataType = InitializeDataType(db, EntitySpecifics.PasswordFieldNumber);
      getUpdates.FromProgressMarkerList.Add(passwordDataType.Build());
    }

    if (syncState != null) {
      // ChipBag is "Per-client state for use by the server. Sent with every message sent to the server."
      // Soggy newspaper not included
      if (syncState.ChipBag != null) {
        var chipBag = ChipBag.CreateBuilder().SetServerChips(ByteString.CopyFrom(syncState.ChipBag)).Build();
        myRequest.SetBagOfChips(chipBag);
      }

      if (syncState.StoreBirthday != null) {
        myRequest.SetStoreBirthday(syncState.StoreBirthday);
      }
    }

    myRequest.SetGetUpdates(getUpdates);

    myRequest.SetClientStatus(ClientStatus.CreateBuilder().Build());
  }

  var builtRequest = myRequest.Build();
  return builtRequest.ToByteArray();
}

/// <summary>
/// For each item type we sync, this method initializes it
/// </summary>
private DataTypeProgressMarker.Builder InitializeDataType(IDatabase db, int fieldNumber) {
  var dataType = DataTypeProgressMarker.CreateBuilder();
  dataType.SetDataTypeId(fieldNumber);
  InitializeMarker(dataType, db);
  return dataType;
}

/// <summary>
/// Initializes the sync state for the item types we sync
/// </summary>
private void InitializeMarker(DataTypeProgressMarker.Builder dataType, IDatabase db) {
  var marker = db.GetSyncProgress(dataType.DataTypeId);
  if (marker == null) {
    return;
  }
  D("Initializing marker: " + marker);
  if (marker.NotificationHint != null) {
    dataType.SetNotificationHint(marker.NotificationHint);
  }

  dataType.SetToken(ByteString.CopyFrom(marker.Token));
  if (marker.TimestampForMigration != 0) {
    dataType.SetTimestampTokenForMigration(marker.TimestampForMigration);
  }
}

 

Handling the sync response

Once this request is sent off we get back a sync response, in the form of a ClientToServerResponse containing a GetUpdatesResponse, both of which are also defined in sync.proto:

GetUpdatesResponse
756    message GetUpdatesResponse {
757      // New sync entries that the client should apply.
758      repeated SyncEntity entries = 1;
759    
760      // If there are more changes on the server that weren't processed during this
761      // GetUpdates request, the client should send another GetUpdates request and
762      // use new_timestamp as the from_timestamp value within GetUpdatesMessage.
763      //
764      // This field has been deprecated and will be returned only to clients
765      // that set the also-deprecated |from_timestamp| field in the update request.
766      // Clients should use |from_progress_marker| and |new_progress_marker|
767      // instead.
768      optional int64 new_timestamp = 2;
769    
770      // DEPRECATED FIELD - server does not set this anymore.
771      optional int64 deprecated_newest_timestamp = 3;
772    
773      // Approximate count of changes remaining - use this for UI feedback.
774      // If present and zero, this estimate is firm: the server has no changes
775      // after the current batch.
776      optional int64 changes_remaining = 4;
777    
778      // Opaque, per-datatype timestamp-like tokens.  A client should use this
779      // field in lieu of new_timestamp, which is deprecated in newer versions
780      // of the protocol.  Clients should retain and persist the values returned
781      // in this field, and present them back to the server to indicate the
782      // starting point for future update requests.
783      //
784      // This will be sent only if the client provided |from_progress_marker|
785      // in the update request.
786      //
787      // The server may provide a new progress marker even if this is the end of
788      // the batch, or if there were no new updates on the server; and the client
789      // must save these.  If the server does not provide a |new_progress_marker|
790      // value for a particular datatype, when the request provided a
791      // |from_progress_marker| value for that datatype, the client should
792      // interpret this to mean "no change from the previous state" and retain its
793      // previous progress-marker value for that datatype.
794      //
795      // Progress markers in the context of a response will never have the
796      // |timestamp_token_for_migration| field set.
797      repeated DataTypeProgressMarker new_progress_marker = 5;
798    
799      // The current encryption keys associated with this account. Will be set if
800      // the GetUpdatesMessage in the request had need_encryption_key == true or
801      // the server has updated the set of encryption keys (e.g. due to a key
802      // rotation).
803      repeated bytes encryption_keys = 6;
804    };

 

SyncEntity

Note that at the start of GetUpdatesResponse there is a repeated series of SyncEntities.  SyncEntity is also defined in sync.proto:

134    message SyncEntity {
135      // This item's identifier.  In a commit of a new item, this will be a
136      // client-generated ID.  If the commit succeeds, the server will generate
137      // a globally unique ID and return it to the committing client in the
138      // CommitResponse.EntryResponse.  In the context of a GetUpdatesResponse,
139      // |id_string| is always the server generated ID.  The original
140      // client-generated ID is preserved in the |originator_client_id| field.
141      // Present in both GetUpdatesResponse and CommitMessage.
142      optional string id_string = 1;
143    
144      // An id referencing this item's parent in the hierarchy.  In a
145      // CommitMessage, it is accepted for this to be a client-generated temporary
146      // ID if there was a new created item with that ID appearing earlier
147      // in the message.  In all other situations, it is a server ID.
148      // Present in both GetUpdatesResponse and CommitMessage.
149      optional string parent_id_string = 2;
150    
151      // old_parent_id is only set in commits and indicates the old server
152      // parent(s) to remove. When omitted, the old parent is the same as
153      // the new.
154      // Present only in CommitMessage.
155      optional string old_parent_id = 3;
156    
157      // The version of this item -- a monotonically increasing value that is
158      // maintained by for each item.  If zero in a CommitMessage, the server
159      // will interpret this entity as a newly-created item and generate a
160      // new server ID and an initial version number.  If nonzero in a
161      // CommitMessage, this item is treated as an update to an existing item, and
162      // the server will use |id_string| to locate the item.  Then, if the item's
163      // current version on the server does not match |version|, the commit will
164      // fail for that item.  The server will not update it, and will return
165      // a result code of CONFLICT.  In a GetUpdatesResponse, |version| is
166      // always positive and indentifies the revision of the item data being sent
167      // to the client.
168      // Present in both GetUpdatesResponse and CommitMessage.
169      required int64 version = 4;
170    
171      // Last modification time (in java time milliseconds)
172      // Present in both GetUpdatesResponse and CommitMessage.
173      optional int64 mtime = 5;
174    
175      // Creation time.
176      // Present in both GetUpdatesResponse and CommitMessage.
177      optional int64 ctime = 6;
178    
179      // The name of this item.
180      // Historical note:
181      //   Since November 2010, this value is no different from non_unique_name.
182      //   Before then, server implementations would maintain a unique-within-parent
183      //   value separate from its base, "non-unique" value.  Clients had not
184      //   depended on the uniqueness of the property since November 2009; it was
185      //   removed from Chromium by http://codereview.chromium.org/371029 .
186      // Present in both GetUpdatesResponse and CommitMessage.
187      required string name = 7;
188    
189      // The name of this item.  Same as |name|.
190      // |non_unique_name| should take precedence over the |name| value if both
191      // are supplied.  For efficiency, clients and servers should avoid setting
192      // this redundant value.
193      // Present in both GetUpdatesResponse and CommitMessage.
194      optional string non_unique_name = 8;
195    
196      // A value from a monotonically increasing sequence that indicates when
197      // this item was last updated on the server. This is now equivalent
198      // to version. This is now deprecated in favor of version.
199      // Present only in GetUpdatesResponse.
200      optional int64 sync_timestamp = 9;
201    
202      // If present, this tag identifies this item as being a uniquely
203      // instanced item.  The server ensures that there is never more
204      // than one entity in a user's store with the same tag value.
205      // This value is used to identify and find e.g. the "Google Chrome" settings
206      // folder without relying on it existing at a particular path, or having
207      // a particular name, in the data store.
208      //
209      // This variant of the tag is created by the server, so clients can't create
210      // an item with a tag using this field.
211      //
212      // Use client_defined_unique_tag if you want to create one from the client.
213      //
214      // An item can't have both a client_defined_unique_tag and
215      // a server_defined_unique_tag.
216      //
217      // Present only in GetUpdatesResponse.
218      optional string server_defined_unique_tag = 10;
219    
220      // If this group is present, it implies that this SyncEntity corresponds to
221      // a bookmark or a bookmark folder.
222      //
223      // This group is deprecated; clients should use the bookmark EntitySpecifics
224      // protocol buffer extension instead.
225      optional group BookmarkData = 11 {
226        // We use a required field to differentiate between a bookmark and a
227        // bookmark folder.
228        // Present in both GetUpdatesMessage and CommitMessage.
229        required bool bookmark_folder = 12;
230    
231        // For bookmark objects, contains the bookmark's URL.
232        // Present in both GetUpdatesResponse and CommitMessage.
233        optional string bookmark_url = 13;
234    
235        // For bookmark objects, contains the bookmark's favicon. The favicon is
236        // represented as a 16X16 PNG image.
237        // Present in both GetUpdatesResponse and CommitMessage.
238        optional bytes bookmark_favicon = 14;
239      }
240    
241      // Supplies a numeric position for this item, relative to other items with the
242      // same parent.  Deprecated in M26, though clients are still required to set
243      // it.
244      //
245      // Present in both GetUpdatesResponse and CommitMessage.
246      //
247      // At one point this was used as an alternative / supplement to
248      // the deprecated |insert_after_item_id|, but now it, too, has been
249      // deprecated.
250      //
251      // In order to maintain compatibility with older clients, newer clients should
252      // still set this field.  Its value should be based on the first 8 bytes of
253      // this item's |unique_position|.
254      //
255      // Nerwer clients must also support the receipt of items that contain
256      // |position_in_parent| but no |unique_position|.  They should locally convert
257      // the given int64 position to a UniquePosition.
258      //
259      // The conversion from int64 to UniquePosition is as follows:
260      // The int64 value will have its sign bit flipped then placed in big endian
261      // order as the first 8 bytes of the UniquePosition.  The subsequent bytes of
262      // the UniquePosition will consist of the item's unique suffix.
263      //
264      // Conversion from UniquePosition to int64 reverses this process: the first 8
265      // bytes of the position are to be interpreted as a big endian int64 value
266      // with its sign bit flipped.
267      optional int64 position_in_parent = 15;
268    
269      // Contains the ID of the element (under the same parent) after which this
270      // element resides. An empty string indicates that the element is the first
271      // element in the parent.  This value is used during commits to specify
272      // a relative position for a position change.  In the context of
273      // a GetUpdatesMessage, |position_in_parent| is used instead to
274      // communicate position.
275      //
276      // Present only in CommitMessage.
277      //
278      // This is deprecated.  Clients are allowed to omit this as long as they
279      // include |position_in_parent| instead.
280      optional string insert_after_item_id = 16;
281    
282      // Arbitrary key/value pairs associated with this item.
283      // Present in both GetUpdatesResponse and CommitMessage.
284      // Deprecated.
285      // optional ExtendedAttributes extended_attributes = 17;
286    
287      // If true, indicates that this item has been (or should be) deleted.
288      // Present in both GetUpdatesResponse and CommitMessage.
289      optional bool deleted = 18 [default = false];
290    
291      // A GUID that identifies the the sync client who initially committed
292      // this entity.  This value corresponds to |cache_guid| in CommitMessage.
293      // This field, along with |originator_client_item_id|, can be used to
294      // reunite the original with its official committed version in the case
295      // where a client does not receive or process the commit response for
296      // some reason.
297      //
298      // Present only in GetUpdatesResponse.
299      //
300      // This field is also used in determining the unique identifier used in
301      // bookmarks' unique_position field.
302      optional string originator_cache_guid = 19;
303    
304      // The local item id of this entry from the client that initially
305      // committed this entity. Typically a negative integer.
306      // Present only in GetUpdatesResponse.
307      //
308      // This field is also used in determinging the unique identifier used in
309      // bookmarks' unique_position field.
310      optional string originator_client_item_id = 20;
311    
312      // Extensible container for datatype-specific data.
313      // This became available in version 23 of the protocol.
314      optional EntitySpecifics specifics = 21;
315    
316      // Indicate whether this is a folder or not. Available in version 23+.
317      optional bool folder = 22 [default = false];
318    
319      // A client defined unique hash for this entity.
320      // Similar to server_defined_unique_tag.
321      //
322      // When initially committing an entity, a client can request that the entity
323      // is unique per that account. To do so, the client should specify a
324      // client_defined_unique_tag. At most one entity per tag value may exist.
325      // per account. The server will enforce uniqueness on this tag
326      // and fail attempts to create duplicates of this tag.
327      // Will be returned in any updates for this entity.
328      //
329      // The difference between server_defined_unique_tag and
330      // client_defined_unique_tag is the creator of the entity. Server defined
331      // tags are entities created by the server at account creation,
332      // while client defined tags are entities created by the client at any time.
333      //
334      // During GetUpdates, a sync entity update will come back with ONE of:
335      // a) Originator and cache id - If client committed the item as non "unique"
336      // b) Server tag - If server committed the item as unique
337      // c) Client tag - If client committed the item as unique
338      //
339      // May be present in CommitMessages for the initial creation of an entity.
340      // If present in Commit updates for the entity, it will be ignored.
341      //
342      // Available in version 24+.
343      //
344      // May be returned in GetUpdatesMessage and sent up in CommitMessage.
345      //
346      optional string client_defined_unique_tag = 23;
347    
348      // This positioning system had a relatively short life.  It was made obsolete
349      // by |unique_position| before either the client or server made much of an
350      // attempt to support it.  In fact, no client ever read or set this field.
351      //
352      // Deprecated in M26.
353      optional bytes ordinal_in_parent = 24;
354    
355      // This is the fourth attempt at positioning.
356      //
357      // This field is present in both GetUpdatesResponse and CommitMessage, if the
358      // item's type requires it and the client that wrote the item supports it (M26
359      // or higher).  Clients must also be prepared to handle updates from clients
360      // that do not set this field.  See the comments on
361      // |server_position_in_parent| for more information on how this is handled.
362      //
363      // This field will not be set for items whose type ignores positioning.
364      // Clients should not attempt to read this field on the receipt of an item of
365      // a type that ignores positioning.
366      //
367      // Refer to its definition in unique_position.proto for more information about
368      // its internal representation.
369      optional UniquePosition unique_position = 25;
370    };

 

EntitySpecifics

What is most important in the SyncEntity is line 314, where you see that a SyncEntity contains an EntitySpecifics, which is where the good stuff is.  The EntitySpecifics looks like this:

64    message EntitySpecifics {
65      // If a datatype is encrypted, this field will contain the encrypted
66      // original EntitySpecifics. The extension for the datatype will continue
67      // to exist, but contain only the default values.
68      // Note that currently passwords employ their own legacy encryption scheme and
69      // do not use this field.
70      optional EncryptedData encrypted = 1;
71    
72      // To add new datatype-specific fields to the protocol, extend
73      // EntitySpecifics.  First, pick a non-colliding tag number by
74      // picking a revision number of one of your past commits
75      // to src.chromium.org.  Then, in a different protocol buffer
76      // definition, define your message type, and add an optional field
77      // to the list below using the unique tag value you selected.
78      //
79      //  optional MyDatatypeSpecifics my_datatype = 32222;
80      //
81      // where:
82      //   - 32222 is the non-colliding tag number you picked earlier.
83      //   - MyDatatypeSpecifics is the type (probably a message type defined
84      //     in your new .proto file) that you want to associate with each
85      //     object of the new datatype.
86      //   - my_datatype is the field identifier you'll use to access the
87      //     datatype specifics from the code.
88      //
89      // Server implementations are obligated to preserve the contents of
90      // EntitySpecifics when it contains unrecognized fields.  In this
91      // way, it is possible to add new datatype fields without having
92      // to update the server.
93      //
94      // Note: The tag selection process is based on legacy versions of the
95      // protocol which used protobuf extensions. We have kept the process
96      // consistent as the old values cannot change.  The 5+ digit nature of the
97      // tags also makes them recognizable (individually and collectively) from
98      // noise in logs and debugging contexts, and creating a divergent subset of
99      // tags would only make things a bit more confusing.
100    
101      optional AutofillSpecifics autofill = 31729;
102      optional BookmarkSpecifics bookmark = 32904;
103      optional PreferenceSpecifics preference = 37702;
104      optional TypedUrlSpecifics typed_url = 40781;
105      optional ThemeSpecifics theme = 41210;
106      optional AppNotification app_notification = 45184;
107      optional PasswordSpecifics password = 45873;
108      optional NigoriSpecifics nigori = 47745;
109      optional ExtensionSpecifics extension = 48119;
110      optional AppSpecifics app = 48364;
111      optional SessionSpecifics session = 50119;
112      optional AutofillProfileSpecifics autofill_profile = 63951;
113      optional SearchEngineSpecifics search_engine = 88610;
114      optional ExtensionSettingSpecifics extension_setting = 96159;
115      optional AppSettingSpecifics app_setting = 103656;
116      optional HistoryDeleteDirectiveSpecifics history_delete_directive = 150251;
117      optional SyncedNotificationSpecifics synced_notification = 153108;
118      optional SyncedNotificationAppInfoSpecifics synced_notification_app_info =
119          235816;
120      optional DeviceInfoSpecifics device_info = 154522;
121      optional ExperimentsSpecifics experiments = 161496;
122      optional PriorityPreferenceSpecifics priority_preference = 163425;
123      optional DictionarySpecifics dictionary = 170540;
124      optional FaviconTrackingSpecifics favicon_tracking = 181534;
125      optional FaviconImageSpecifics favicon_image = 182019;
126      optional ManagedUserSettingSpecifics managed_user_setting = 186662;
127      optional ManagedUserSpecifics managed_user = 194582;
128      optional ManagedUserSharedSettingSpecifics managed_user_shared_setting =
129          202026;
130      optional ArticleSpecifics article = 223759;
131      optional AppListSpecifics app_list = 229170;
132    }

 

BookmarkSpecifics

As you can see, the EntitySpecifics contains EncryptedData and an optional field for each of the data types.  A specific instance of an EntitySpecifics contains just one of them; for example, here is the BookmarkSpecifics from bookmarks_specifics.proto:

23    // Properties of bookmark sync objects.
24    message BookmarkSpecifics {
25      optional string url = 1;
26      optional bytes favicon = 2;
27      optional string title = 3;
28      // Corresponds to BookmarkNode::date_added() and is the internal value from
29      // base::Time.
30      optional int64 creation_time_us = 4;
31      optional string icon_url = 5;
32      repeated MetaInfo meta_info = 6;
33    }
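Since a given (unencrypted) entity has only one of those specifics fields populated, the client can simply check the generated Has* properties and dispatch accordingly.  A rough sketch; the handler methods are illustrative, not Chrync’s actual processors:

// Sketch: dispatching a received EntitySpecifics to a type-specific handler.
// Only one of the optional specifics fields is meaningfully set per entity; the
// generated proxies expose a Has* property for each one.  Handlers are placeholders.
private void Dispatch(SyncEntity entity, EntitySpecifics specifics) {
  if (specifics.HasEncrypted) {
    // Encrypted entity: decrypt it first (see "Decrypting sync data" below),
    // then dispatch the decrypted EntitySpecifics instead.
    return;
  }
  if (specifics.HasBookmark) {
    HandleBookmark(entity, specifics.Bookmark);
  } else if (specifics.HasTypedUrl) {
    HandleTypedUrl(entity, specifics.TypedUrl);
  } else if (specifics.HasPassword) {
    HandlePassword(entity, specifics.Password);
  } else if (specifics.HasSession) {
    HandleSession(entity, specifics.Session);
  } else if (specifics.HasNigori) {
    HandleNigori(entity, specifics.Nigori);
  }
  // Any other datatypes are simply ignored by this client.
}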

 

Decrypting sync data

What makes things tricky is that you get a set of sync entities, some of which may be encrypted (in the EncryptedData EntitySpecifics field), but they cannot be decrypted until the NigoriSpecifics sync entity is received, which may be some time later.  So I buffer the encrypted sync entities until they can be decrypted.
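The buffering itself can be as simple as parking encrypted entities in a list and replaying them once the keybag has been decrypted.  A minimal sketch of the idea, not the actual Chrync code, where Process() stands in for the real decrypt-and-handle logic:

// Sketch: deferring encrypted entities until the Nigori keybag has been decrypted.
using System.Collections.Generic;

class EncryptedEntityBuffer {
  private readonly List<SyncEntity> _pending = new List<SyncEntity>();
  private bool _keysAvailable;

  public void OnEntityReceived(SyncEntity entity) {
    if (entity.Specifics.HasEncrypted && !_keysAvailable) {
      _pending.Add(entity);   // can't decrypt this yet; park it
      return;
    }
    Process(entity);
  }

  // Called once the NigoriSpecifics entity has arrived and the keybag is decrypted.
  public void OnKeysAvailable() {
    _keysAvailable = true;
    foreach (var entity in _pending) {
      Process(entity);        // replay everything that had to be deferred
    }
    _pending.Clear();
  }

  private void Process(SyncEntity entity) {
    // Decrypt entity.Specifics.Encrypted if present, then hand off to the
    // appropriate datatype processor.
  }
}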

Encrypted data looks like this in its Protocol Buffers definition in encryption.proto:

EncryptedData
17    // Encrypted sync data consists of two parts: a key name and a blob. Key name is
18    // the name of the key that was used to encrypt blob and blob is encrypted data
19    // itself.
20    //
21    // The reason we need to keep track of the key name is that a sync user can
22    // change their passphrase (and thus their encryption key) at any time. When
23    // that happens, we make a best effort to reencrypt all nodes with the new
24    // passphrase, but since we don't have transactions on the server-side, we
25    // cannot guarantee that every node will be reencrypted. As a workaround, we
26    // keep track of all keys, assign each key a name (by using that key to encrypt
27    // a well known string) and keep track of which key was used to encrypt each
28    // node.
29    message EncryptedData {
30      optional string key_name = 1;
31      optional string blob = 2;
32    };

NigoriKey, NigoriKeyBag and NigoriSpecifics

The NigoriSpecifics (one of the entries in the EntitySpecifics) looks like this, including associated data types, in nigori_specifics.proto:

19    message NigoriKey {
20      optional string name = 1;
21      optional bytes user_key = 2;
22      optional bytes encryption_key = 3;
23      optional bytes mac_key = 4;
24    }
25    
26    message NigoriKeyBag {
27      repeated NigoriKey key = 2;
28    }
29    
30    // Properties of nigori sync object.
31    message NigoriSpecifics {
32      optional EncryptedData encryption_keybag = 1;
33      // Once keystore migration is performed, we have to freeze the keybag so that
34      // older clients (that don't support keystore encryption) do not attempt to
35      // update the keybag.
36      // Previously |using_explicit_passphrase|.
37      optional bool keybag_is_frozen = 2;
38    
39      // Obsolete encryption fields. These were deprecated due to legacy versions
40      // that understand their usage but did not perform encryption properly.
41      // optional bool deprecated_encrypt_bookmarks = 3;
42      // optional bool deprecated_encrypt_preferences = 4;
43      // optional bool deprecated_encrypt_autofill_profile = 5;
44      // optional bool deprecated_encrypt_autofill = 6;
45      // optional bool deprecated_encrypt_themes = 7;
46      // optional bool deprecated_encrypt_typed_urls = 8;
47      // optional bool deprecated_encrypt_extensions = 9;
48      // optional bool deprecated_encrypt_sessions = 10;
49      // optional bool deprecated_encrypt_apps = 11;
50      // optional bool deprecated_encrypt_search_engines = 12;
51    
52      // Booleans corresponding to whether a datatype should be encrypted.
53      // Passwords are always encrypted, so we don't need a field here.
54      // History delete directives need to be consumable by the server, and
55      // thus can't be encrypted.
56      // Synced Notifications need to be consumed by the server (the read flag)
57      // and thus can't be encrypted.
58      // Synced Notification App Info is set by the server, and thus cannot be
59      // encrypted.
60      optional bool encrypt_bookmarks = 13;
61      optional bool encrypt_preferences = 14;
62      optional bool encrypt_autofill_profile = 15;
63      optional bool encrypt_autofill = 16;
64      optional bool encrypt_themes = 17;
65      optional bool encrypt_typed_urls = 18;
66      optional bool encrypt_extensions = 19;
67      optional bool encrypt_sessions = 20;
68      optional bool encrypt_apps = 21;
69      optional bool encrypt_search_engines = 22;
70    
71      // Deprecated on clients where tab sync is enabled by default.
72      // optional bool sync_tabs = 23;
73    
74      // If true, all current and future datatypes will be encrypted.
75      optional bool encrypt_everything = 24;
76    
77      optional bool encrypt_extension_settings = 25;
78      optional bool encrypt_app_notifications = 26;
79      optional bool encrypt_app_settings = 27;
80    
81      // User device information. Contains information about each device that has a
82      // sync-enabled Chrome browser connected to the user account.
83      // This has been moved to the DeviceInfo message.
84      // repeated DeviceInformation deprecated_device_information = 28;
85    
86      // Enable syncing favicons as part of tab sync.
87      optional bool sync_tab_favicons = 29;
88    
89      // The state of the passphrase required to decrypt |encryption_keybag|.
90      enum PassphraseType {
91        // Gaia-based encryption passphrase. Deprecated.
92        IMPLICIT_PASSPHRASE = 1;
93        // Keystore key encryption passphrase. Uses |keystore_bootstrap| to
94        // decrypt |encryption_keybag|.
95        KEYSTORE_PASSPHRASE = 2;
96        // Previous Gaia-based passphrase frozen and treated as a custom passphrase.
97        FROZEN_IMPLICIT_PASSPHRASE  = 3;
98        // User provided custom passphrase.
99        CUSTOM_PASSPHRASE = 4;
100      }
101      optional PassphraseType passphrase_type = 30
102          [default = IMPLICIT_PASSPHRASE];
103    
104      // The keystore decryptor token blob. Encrypted with the keystore key, and
105      // contains the encryption key used to decrypt |encryption_keybag|.
106      // Only set if passphrase_state == KEYSTORE_PASSPHRASE.
107      optional EncryptedData keystore_decryptor_token = 31;
108    
109      // The time (in epoch milliseconds) at which the keystore migration was
110      // performed.
111      optional int64 keystore_migration_time = 32;
112    
113      // The time (in epoch milliseconds) at which a custom passphrase was set.
114      // Note: this field may not be set if the custom passphrase was applied before
115      // this field was introduced.
116      optional int64 custom_passphrase_time = 33;
117    
118      // Boolean corresponding to whether custom spelling dictionary should be
119      // encrypted.
120      optional bool encrypt_dictionary = 34;
121    
122      // Boolean corresponding to Whether to encrypt favicons data or not.
123      optional bool encrypt_favicon_images = 35;
124      optional bool encrypt_favicon_tracking = 36;
125    
126      // Boolean corresponding to whether articles should be encrypted.
127      optional bool encrypt_articles = 37;
128    
129      // Boolean corresponding to whether app list items should be encrypted.
130      optional bool encrypt_app_list = 38;
131    }

 

Note that the first item in the NigoriSpecifics is the encrypted NigoriKeyBag.  The NigoriKeyBag is a set of NigoriKeys, both defined above.  The NigoriKeys are used to decrypt things like the encrypted BookmarkSpecifics.

So the first thing to do is to decrypt the encrypted NigoriKeyBag.  I prompt the user for the custom passphrase:


Once I have the passphrase, I decrypt the encrypted_keybag’s bytes using the passphrase:

Decrypting data
    internal static byte[] Decrypt(string passwordText, string encryptedText) {
      try {
        // Key derivation: PBKDF2 (Rfc2898DeriveBytes) with fixed salts and iteration
        // counts.  First derive a user salt from HostUsername, then derive the user
        // key, encryption key and MAC key from the passphrase.
        var salt = Encoding.UTF8.GetBytes("saltsalt");
        var rb = new Rfc2898DeriveBytes(HostUsername, salt, 1001);
        var userSalt = rb.GetBytes(16);

        var password = Encoding.UTF8.GetBytes(passwordText);
        rb = new Rfc2898DeriveBytes(password, userSalt, 1002);
        var userKey = rb.GetBytes(16); // derived for completeness; not needed for decryption

        password = Encoding.UTF8.GetBytes(passwordText);
        rb = new Rfc2898DeriveBytes(password, userSalt, 1003);
        var encryptionKey = rb.GetBytes(16);

        rb = new Rfc2898DeriveBytes(password, userSalt, 1004);
        var macKey = rb.GetBytes(16);

        return Decrypt(encryptionKey, macKey, encryptedText);
      } catch (Exception) {
        return null;
      }
    }
    internal static byte[] Decrypt(byte[] encryptionKey, byte[] macKey, string encryptedText) {
      var input = Convert.FromBase64String(encryptedText);

      // Layout of the decoded blob: [16-byte IV][ciphertext][32-byte HMAC-SHA256 of the ciphertext]
      const int kIvSize = 16;
      const int kHashSize = 32;

      if (input.Length < kIvSize*2 + kHashSize) return null;

      var iv = new byte[kIvSize];
      Array.Copy(input, iv, iv.Length);
      var ciphertext = new byte[input.Length - (kIvSize + kHashSize)];
      Array.Copy(input, kIvSize, ciphertext, 0, ciphertext.Length);
      var hash = new byte[kHashSize];
      Array.Copy(input, input.Length - kHashSize, hash, 0, kHashSize);

      var hmac = new HMACSHA256(macKey);
      var calculatedHash = hmac.ComputeHash(ciphertext);

      if (!Enumerable.SequenceEqual(calculatedHash, hash)) {
        return null;
      }

      var aes = new AesManaged {IV = iv, Key = encryptionKey};
      var cs = new CryptoStream(new MemoryStream(ciphertext), aes.CreateDecryptor(), CryptoStreamMode.Read);
      var decryptedMemoryStream = new MemoryStream();
      var buf = new byte[256];
      while (cs.CanRead) {
        var count = cs.Read(buf, 0, buf.Length);
        if (count == 0) {
          break;
        }
        decryptedMemoryStream.Write(buf, 0, count);
      }
      return decryptedMemoryStream.ToArray();
    }
  }

I then convert the decrypted bytes to an actual keybag:

        var bag = NigoriKeyBag.ParseFrom(decrypted);

Each entry in the keybag is a NigoriKey, which can be used with the second Decrypt method above to decrypt encrypted EntitySpecifics entries:

var blob = encrypted.Blob;
var nigori = nigoris.ContainsKey(encrypted.KeyName)
                ? nigoris[encrypted.KeyName]
                : db.GetNigoriWithName(encrypted.KeyName);
if (nigori == null) {
  return null;
}
return Decryptor.Decrypt(nigori.EncryptionKey,
                          nigori.MacKey,
                          blob);

Processing the synced entities

After that it is pretty much plain sailing.  Here is the processing of the Bookmarks sync entity:

Processing bookmarks
 internal class BookmarkProcessor : EntityProcessor {
    public override bool Process(SyncEntity syncEntity, EntitySpecifics specifics) {
      if (!syncEntity.HasSpecifics || !syncEntity.Specifics.HasBookmark) return false;

      var bm = specifics == null ? syncEntity.Specifics.Bookmark : specifics.Bookmark;
      D("Processing bookmark " + bm.Title);

      var model = Db.GetSyncEntityWithId<BookmarkModel>(syncEntity.IdString);
      var isNew = model == null;

      if (isNew) {
        model = new BookmarkModel();
      }

      if (bm.HasFavicon) {
        model.Favicon = bm.Favicon.ToByteArray();
      }

      if (bm.HasTitle) {
        model.BookmarkTitle = bm.Title;
      }

      if (bm.HasUrl) {
        model.BookmarkUrl = bm.Url;
      }


      FillSyncEntityModel(syncEntity, model);

      if (isNew) {
        Db.InsertSyncEntity(model);
      } else {
        Db.UpdateSyncEntity(model);
      }

      return true;

    }
  }

 

I process the decrypted sync entities and store them in a database, which I then use to drive the UI to let the user view bookmarks, recently browsed URLs, saved passwords, and open Chrome sessions on other machines:


What’s next?

Chrync is read-only.  For example you can’t update your bookmarks.  Also when you tap on a bookmark it launches the built-in browser.

So obvious updates to the app would be to embed a browser within the app, pre-populate password fields, etc.

My biggest concern with investing too much more time in Chrync is that Google could easily pull the plug on the app by disallowing my use of the chrome sync scope in the OAuth 2.0 request.

Although I charged for the app initially, I don’t any more – it doesn’t seem ethical to charge for something that could disappear any day.

I also had grand dreams of bringing Chrome sync to iOS, and indeed got it working, reusing the sync engine via Xamarin, and with fantastic timing, was just looking to launch it when Google released Chrome for iOS …

So, I’ll continue to make minor updates, and if Google do decide to officially document and allow Chrome sync, maybe I’ll make a major update. 

Meanwhile people seem to like it.


Filed under: Chrync
25 Jan 2014

Evernote tip 9: Index the physical

Like a lot of people I rely on Evernote as my external brain, but my use of Evernote extends beyond the digital realm to the physical realm too.

But first, a quick quiz. Can you identify this?


No? You know what? Neither can I. But somehow this weird piece of plastic turned up in my home office one day, and I was left with a dilemma with which I am sure you too are familiar.

Throw it away?

On the one hand, I could throw it away. The thing is, if I did that, then you can be absolutely sure that within a week or so it would turn out that that piece of plastic was vital to the functioning of a critical piece of household equipment.


“File” it?

On the other hand, I could decide to "file" it in that drawer I have. You don't know which drawer I mean? Oh yes you do, it’s the same one you have, filled with cables for phones you no longer have, remote controls, batteries that may or may not be charged, and yes, nameless pieces of plastic.

If I decided to put it in that drawer then I can be equally sure I would never need it, and the only time I might touch it again is when I move house, although even that isn't a sure thing. There is a fair chance it might follow me to my grave...


Evernote to the rescue

So what do I do? I choose the second option, BUT before I "file" it in that drawer or box, I also file it in Evernote by taking a photo, and tag the note to say where it is. This means that whenever I find out that I need that piece of plastic, all I need to do is scan through my "real world" notes and I can quickly and easily retrieve it.


Of course this use of Evernote isn't restricted to anonymous pieces of plastic. I also use it to file other small objects that I can't quite bring myself to recycle, but which I know I won't be needing in the near future:


Filed under: Evernote
12 Jan 2014

Sony SmartWatch2 scrollable text

Sony have adopted an intriguing approach to development for their SmartWatch 2.  Unlike other smart watches, you don’t write code that runs on the watch itself.  Instead all your code runs on an Android phone.  You define the watch UI using standard Android layouts, and that UI is remoted onto the watch.

Your app can respond to events such as touches, since these events are sent from the watch to your phone, and then delivered to your app.

This is kind of cool, in that you don’t have to debug on the watch.  It is simpler, and I think it works well for relatively simple apps.

There are, however, limitations on the UI elements that you can display.  Lists work well, but it isn’t currently possible to create a scrollable text area.

For the experimental app I was working on, this was a big issue. I needed to display text that went on for more than one screen.

I eventually found a way around this restriction.  I render the text into a bitmap in memory on the phone, split the bitmap up into watch-screen-sized chunks, and use each chunk as an element in a list.  This works: you can scroll through your text, albeit a page at a time.

My list is derived from ManagedControlExtension, and in onResume I render the text to a bitmap member variable by calling renderTextToCanvas:

  private void renderTextToCanvas() {
    mBitmap = Bitmap.createBitmap(mScreenWidth, mScreenHeight * SCREEN_PAGES, Bitmap.Config.ARGB_8888);
    mBitmap.setDensity(DisplayMetrics.DENSITY_DEFAULT);
    mCanvas = new Canvas(mBitmap);
    mCanvas.setDensity(DisplayMetrics.DENSITY_DEFAULT);

    TextPaint tp = new TextPaint();
    tp.setColor(Color.WHITE);
    tp.setTextSize(18);

    String text = mNote.textContent;

    if(text == null) {
      Log.d(TAG, "Empty text ...");
      text = mContext.getString(R.string.empty_note);
      tp.setTextSkewX(-0.25f); // Italics
    }

    StaticLayout sl = new StaticLayout(text, tp, mScreenWidth, Layout.Alignment.ALIGN_NORMAL, 1.2f,
                                       0f, false);

    mCanvas.save();
    sl.draw(mCanvas);
    mCanvas.restore();
  }

Then, when a list item is requested, I extract the appropriate chunk of that bitmap and send it:

  @Override
  public void onRequestListItem(final int layoutReference, final int listItemPosition) {
    Log.d(TAG, "onRequestListItem() - position " + listItemPosition);
    if (layoutReference != -1 && listItemPosition != -1 && layoutReference == R.id.listView) {
      ControlListItem item = createControlListItem(listItemPosition);
      if (item != null) {
        sendListItem(item);
      }
    }
  }

  protected ControlListItem createControlListItem(int position) {
    Bitmap bitmap = Bitmap.createBitmap(mBitmap, 0, mScreenHeight * position,
                                                    mScreenWidth, mScreenHeight);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, byteArrayOutputStream);

    ControlListItem item = new ControlListItem();
    item.layoutReference = R.id.listView;
    item.dataXmlLayout = R.layout.note_content_item;
    item.listItemPosition = position;
    item.listItemId = position;

    Bundle imageBundle = new Bundle();
    imageBundle.putInt(Control.Intents.EXTRA_LAYOUT_REFERENCE, R.id.imageView);
    imageBundle.putByteArray(Control.Intents.EXTRA_DATA, byteArrayOutputStream.toByteArray());

    item.layoutData = new Bundle[] { imageBundle };

    return item;
  }

All this leads to scrollable text on the watch, one page at a time.

Filed under: Wearables No Comments
12Jan/141

So, how did you die? Wearables as the human black box.

Yesterday I gave a presentation at the excellent Mobile Central Europe conference in Warsaw, Poland, on Evernote and wearable devices.

When talking about the convergence of activity monitoring devices and smart watches I voiced a sudden thought.

How long will it be before someone dies of a heart attack, and we use the health sensors that are being incorporated into smart wearable devices to look at what was happening in the moments before they died?

We might then look for the same signs in others and warn them: “Lie down! Lie down! Medical assistance is on its way” … the human equivalent of “Terrain! Terrain! Pull up!” in the cockpit.

File:Flightrecorder.jpg

Filed under: Wearables 1 Comment
5Jan/142

Word for Mac Focus keyboard shortcut: here’s how

You may well be wondering why I am writing a blog post on how to enter Focus view in Word 2011 on the Mac, when clearly I should be focusing on writing something… let’s not go there.

There is no built-in keyboard shortcut, and it wasn’t obvious to me how to add one, but I got there eventually.

Use the Tools|Customize Keyboard menu item:

Screen Shot 2014-01-05 at 10.05.28 AM

Then go to the View menu on the left, find ToggleFull on the right, and enter the shortcut you wish to use.  This was the key for me: I’d never have guessed that Full meant Focus, since there is also a full-screen mode.

image

OK, so now you have no excuse not to focus!

Filed under: Fluff 2 Comments
10Nov/130

Evernote, JavaScript and the Pebble watch

Pebble just released a public beta of their new SDK,  version 2.0, and one of the more intriguing features is the ability to write JavaScript code that executes within the Pebble phone app.

No, that’s not a typo, the JavaScript executes on the phone.

Why would you possibly want to write JavaScript on your phone?  For very good reasons: Pebble watch apps can generally do very little by themselves.  Instead they communicate with a custom app that runs on the phone, and that custom phone app does stuff on behalf of the watch app.

For example, if I wanted to write an Evernote client that ran on the Pebble watch, I’d code the watch app in C, and then create a custom Android app in Java.  My watch app could talk to my Android app, which could in turn talk to Evernote.  This is in fact what I did in an experimental client I talked about recently at a conference.

Android and iOS

But the Pebble doesn’t just support Android: it also supports the iPhone.  If I wanted to extend my app to iOS I’d need to write an equivalent app in Objective-C or C#, and I’d have to duplicate my coding and testing.  There are also Bluetooth issues on iOS with the Pebble, which stop more than one app from communicating with the watch at a time, and the app needs to be whitelisted.

So when Pebble announced support for JavaScript apps that run on the phone, hosted within a JavaScript engine inside the Pebble phone app on both Android and iOS, this seemed appealing.  I’d not need to write both a custom Android app and a custom iOS app.  Instead I could code up the phone-side functionality once in JavaScript, and all would be good.
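
To give a flavour of what that looks like, here is a minimal sketch of a phone-side pebble-js-app.js.  It is illustrative only: the message keys (KEY_REQUEST, KEY_RESULT) are placeholders that would have to match the appKeys declared in appinfo.json, not keys from any real app.

// Runs inside the Pebble phone app's JavaScript engine, on both Android and iOS
Pebble.addEventListener('ready', function (e) {
  console.log('Phone-side JavaScript ready');
});

// The watch app sends an AppMessage; the phone-side JavaScript does the heavy lifting
Pebble.addEventListener('appmessage', function (e) {
  var request = e.payload.KEY_REQUEST;   // placeholder appKey
  console.log('Watch asked for: ' + request);
  // ... call out to a web service here, then hand the result back to the watch:
  Pebble.sendAppMessage({ KEY_RESULT: 'hello from the phone' },
    function () { console.log('Delivered to the watch'); },
    function () { console.log('Delivery to the watch failed'); });
});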

But first, I needed to check that I could talk to Evernote’s cloud API using the Pebble JavaScript implementation ...

Evernote and Thrift

The Evernote service API is exposed using a technology called Thrift, originally developed at Facebook and now hosted by the Apache Foundation.  You define your API using the Thrift Interface Definition Language (IDL).  This IDL is consumed by Thrift code generators, which generate language-specific bindings that let you access the interface from Java, C#, PHP, and many other languages, including, of course, JavaScript.  The generated code talks to a Thrift runtime, which sends and receives the bytes over the wire to the corresponding service.

I thought I’d start off by making the Pebble phone-based JavaScript app talk to Evernote, and then once I had that working, I could make it talk in turn to my C watch app.

I downloaded the Evernote JavaScript SDK from GitHub, which contains both the “proxy” classes generated from the Thrift IDL and the Thrift runtime classes, all in JavaScript.

I decided to start simple, and just list my Evernote notebooks:

// Get these by creating an account and logging in to sandbox.evernote.com and then
// going to https://sandbox.evernote.com/api/DeveloperToken.action
var authTokenEvernote = "...";
var noteStoreURL = "https://sandbox.evernote.com/shard/s1/notestore";

// We want to talk the Thrift Binary protocol over HTTP. These classes are in the
// Thrift runtime
var noteStoreTransport = new Thrift.BinaryHttpTransport(noteStoreURL);
var noteStoreProtocol = new Thrift.BinaryProtocol(noteStoreTransport);

// We want to talk to Evernote's NoteStore service. This is generated code
var noteStore = new NoteStoreClient(noteStoreProtocol);

// Ask Evernote what notebooks I have
noteStore.listNotebooks(authTokenEvernote, function (notebooks) {
  console.log(notebooks);
});

I put this into a file I called main.js, but I also needed to include the Thrift runtime (thrift.js) and the generated Evernote proxies (including NoteStore_types.js and NoteStore.js), all of which ultimately have to end up in pebble-js-app.js.

Since this release of the Pebble SDK only supports a single JavaScript file, named pebble-js-app.js, I borrowed a script volunteered by Matthew Tole that validates the JavaScript, merges all the files into that single file, and builds and runs the Pebble app:

clear
jshint js/main.js || { exit 1; }
jshint pebble/appinfo.json || { exit 1; }

#uglifyjs js/libs/evernote-sdk-js/evernote-sdk-js/thrift/lib/thrift.js ... js/main.js -o pebble/src/js/pebble-js-app.js
cat js/libs/evernote-sdk-js/evernote-sdk-js/thrift/lib/thrift.js ... js/main.js > pebble/src/js/pebble-js-app.js
cd pebble
pebble clean
pebble build || { exit 1; }
if [ "$1" = "install" ]; then
pebble install --logs
fi

The first error I hit when I tried running the code was that the Thrift JavaScript runtime expects to be able to use the DataView class, which is part of the JavaScript Typed Arrays mechanism, a work in progress:

[INFO    ] * :0 JS: pbnote: JavaScript app ready and running!
[INFO ] * :0 Error: pbnote: ReferenceError: Can't find variable: DataView at line 819 in pebble-js-app.js

It turns out that, although it is neither documented nor supported, there is a partial Typed Array implementation: the Pebble JavaScript engine has an implementation of the ArrayBuffer type, as well as Int8Array, Uint8Array, etc.  There are some peculiarities, such as ArrayBuffer supporting the slice method, but not the byteLength property.
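
A quick probe from the phone-side JavaScript shows the situation; this is just a sketch, and the commented results reflect what the Pebble engine reported, per the behaviour described above, not what a desktop browser would print:

var buf = new ArrayBuffer(8);
console.log(typeof Uint8Array);   // 'function'  - the typed array views are there
console.log(typeof buf.slice);    // 'function'  - slice is supported
console.log(buf.byteLength);      // undefined   - but byteLength is not
console.log(typeof DataView);     // 'undefined' - hence the error above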

So I went looking for an ArrayBuffer implementation I could use, and came across Joshua Bell’s implementation on GitHub.

I included it in the build script, but it did nothing.  Upon further examination, I discovered that it looks to see if there is an existing implementation and (very politely) does nothing if it finds one.  I didn’t want that:

global.ArrayBuffer = ArrayBuffer;
// global.ArrayBuffer = global.ArrayBuffer || ArrayBuffer;

I ran once again, but this time I hit a brick wall.  This is the code flow for the call to noteStore.listNotebooks above:

  • My code calls listNotebooks which is in the NoteStore.js generated class
  • listNotebooks calls send_listNotebooks in NoteStore.js
  • send_listNotebooks writes out a listNotebooks message to the BinaryProtocol Thrift Runtime type I initialized in my code above
  • listNotebooks continues with a call to send in the BinaryHttpTransport Thrift Runtime type I also initialized above
  • the BinaryHttpTransport initializes an XMLHttpRequest and sends off the message that was built up previously.

The trick, though, is how it sends the message, which you can see in the last line of this code:

var xhr = new XMLHttpRequest();
xhr.open('POST', this.url, /*async*/true);
xhr.setRequestHeader('Content-Type', 'application/x-thrift');
xhr.setRequestHeader('Accept', 'application/x-thrift');
xhr.responseType = 'arraybuffer';

xhr.onload = function (evt) {
  this.received = xhr.response;
  this.offset = 0;
  try {
    var value = recv_method.call(client);
  } catch (exception) {
    //console.log(JSON.stringify(exception));
    value = exception;
    callback = onerror;
  }
  callback(value);
}.bind(this);

xhr.onerror = function (evt) {
  //console.log(JSON.stringify(evt));
  onerror(evt);
};

xhr.send(postData.buffer);

It attempts to send a Uint8Array’s buffer.  I hacked up a dummy web server, and it turned out I was receiving the result of a toString call, which was something like “[object ArrayBuffer]”…

The Pebble JavaScript engine’s XMLHttpRequest only supports sending string data, not binary data.  It doesn’t support sending Typed Arrays.
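
To make that concrete, here is a minimal sketch of the coercion at work.  There is nothing Pebble-specific in it, just standard JavaScript string conversion, and the bytes are a stand-in rather than a real Thrift message, but it is roughly what my dummy web server ended up receiving:

var postData = new Uint8Array([0x80, 0x01, 0x00, 0x01]); // stand-in for the Thrift message bytes
console.log(String(postData.buffer)); // typically '[object ArrayBuffer]'
// A string-only send() transmits that tag rather than the raw bytes,
// so the binary Thrift payload never reaches the service intact.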

I tried all kinds of things, but finally admitted defeat, for now.

The concept of having your watch app talk to JavaScript code, which in turn talks to the outside world is very appealing.  It means that you don’t have to write, test and maintain separate iOS and Android “companion” apps for your watch app.

I am sure that for most web-based services, what Pebble provides will be more than enough.  Unfortunately, for binary interfaces such as Evernote’s, the current XMLHttpRequest support isn’t quite rich enough.

Yet.

Filed under: Evernote, Pebble No Comments
28Oct/130

Constraints foster creativity: Pebble watch app development

This is a recording of the presentation I gave at the Softshake Conference in Geneva in October 2013.

In this presentation I live-code a Pebble app from scratch and send it messages from a corresponding Android app. I also share my experience and insights from creating an experimental Evernote client for the Pebble watch. How do you deal with writing code in C, with no malloc available?

What do you do when the maximum message size between your phone and your watch is 120 bytes? What does it mean to create a useful app for your watch?

Filed under: Evernote, Pebble No Comments
27Oct/130

Ski Goggles and Sick Bags: The past, present and future of Virtual Reality

imageimage

Note: This is derived from a speech I gave at Toastmasters last week, inspired by the arrival of my very own brand-new Oculus Rift VR headset.

A generation inspired.

In 1984 the author William Gibson penned his first book, called Neuromancer, and inspired a generation.

In it the protagonist navigates through cyberspace.

If you don’t know what cyberspace means, you are not alone.  At the time that William Gibson wrote Neuromancer, nobody else knew what it meant either.  He invented the term.

Cyberspace in that book was a virtual reality: an immersive, computer-generated world which, when you are in it, feels just like the real thing, beamed directly to the brain via a neural interface.

Our imaginations were fired.  We wanted it so badly.  Looking back, I’m not even sure why, but man was it cool.

There was no way anything like it was possible then.  A personal computer could barely output color, let alone create that kind of world.

Dreams dashed

Time passed, and by the 1990s my generation still hadn't forgotten the dream of Neuromancer.  Computers and computer graphics were getting more and more powerful. 

You even started to see video arcades with games that had virtual reality headsets.  I still remember the day I tried one on: the sickly smell of cigarette smoke, music from the arcade games pouring into my ears, almost as loud as the pounding of my heart.  This was it, I was going to experience virtual reality.  I placed the headset on my head and looked around as it projected images into my eyes.

The disappointment was devastating. Not only did it feel like I was wearing a dustbin on my head, so clumsy and heavy was it, but the experience was terrible too: clunky objects drawn as outlines, which struggled to be redrawn as I moved my head around.

The virtual reality dreams of a generation were dashed in those arcades, as I and many others consigned the idea of virtual reality to the dustbin.

A new hope

Time passed.  Whole new businesses sprang up, such as Amazon.  Not only did new businesses spring up, but new ways of doing business sprang up too.

In the old days if you had an idea for a hardware product, such as some kind of electronic gadget you’d need to go to a big company to get it funded.  Endless bureaucracy and meetings.  You’d likely have to give up the rights to your product, and compromise your soul in order to get something like your idea to market.

But the internet and the world wide web changed that.  Now, when someone has an idea for something, such as a new watch, they can go to sites such as Kickstarter, and pitch their idea not to a committee in a bureaucracy, but instead they can pitch their idea to the world.  They can describe what they want to make, what their experience is in the field, what it will cost to bring it to the market, and they can let thousands of individuals invest in their idea, in return for a sample of the product if it ever gets made.

The Pebble watch I’m wearing right now started on Kickstarter.  Their goal was to raise 100,000 dollars to bring it to market.  They didn’t raise 100,000 dollars.  They raised 10 million dollars.

So that's one thing that happened: decentralized “crowd funding”, as it is called, a new way of bringing products to market.

The other thing that happened is mobile phones: incredibly powerful miniature computers that we all carry in our pockets.  Because they are being made in massive quantities, the cost of the components that go into them has dropped massively too.  And those components are interesting.

These phones have small, but incredibly high resolution screens.  They have a vast array of sensors in them, such as gyroscopes so that they can tell when they have been turned, accelerometers to tell when they are moved, and magnetometers to tell which direction they are facing.

Can you imagine what would happen if you took those screens, attached them to some kind of a helmet, like ski goggles, included the sensors from phones to accurately track your head position, and hooked them up to a computer to generate slightly different images on each screen?  You’d have a virtual reality system. 

As it happens, someone in the States did have that idea.  Someone who knew enough about virtual reality headsets to put together a working prototype.

image

If only they had some way to bring their idea to market.  Of course they did, and the Oculus Rift Kickstarter was a massive success.

Those who have tried it on have been astounded by the results.  It creates a truly immersive virtual reality experience.

Anyone who wonders what value virtual reality could possibly have beyond games need only watch a 90-year-old woman trying it on: screaming with joy, walking around an Italian villa, leaves blowing in the wind, butterflies flitting through the air.

There are plenty of people who, for one reason or another, are unable to travel, or even to move, yet they can experience the world through virtual reality.

School kids can watch the birth of the universe, or chemical reactions happening, and step into the reaction to see it from different perspectives. 

This technology is still young.  The Oculus Rift is still not publicly available; it’s only available to software developers who wish to create for it.  But it’s coming.

I’ve talked about the ski goggles, but what about the sick bag?  Well, all is not perfect with the Oculus Rift.  Many people report nausea after wearing it for a while.  Perhaps it’s the eye strain, or perhaps the image still isn’t moving quite fast enough and the body senses that.

I’m sure that they will lick the nausea, and soon, very soon indeed, you too will be visiting new parts of our world, or even other worlds, in virtual reality.

Filed under: Uncategorized No Comments