Damian Mehers' Blog Evernote and Wearable devices. All opinions my own.


On Pulse – How I got my dream job: My wearables journey at Evernote

I just wrote on LinkedIn's Pulse about How I got my dream job: My wearables journey at Evernote

Filed under: Uncategorized

Scrolling long Pebble menu items

This is a technical blog post.  Warning: contains code.

We recently pushed version 1.2 of Evernote for the Pebble to the Pebble App Store.  It is a minor release, with one bug fix, and one new feature.

The bug fix is related to support for the additional character sets that Pebble can now display.

The enhancement is what this blog post is about.  Since we released the first version of the app, which was generally well received, we’ve received emails from people complaining that their note titles, notebook names, tag names etc. don’t fit on the Pebble screen.  They are cut off, and hard to read.  People asked if we could make menu items scroll horizontally if they didn’t fit.

My response was generally something along the lines of “sorry, but we use the Pebble’s built-in menuing system, and until they support scrolling menu items horizontally, we can’t do anything”.  I never felt great about this response, but it was the genuine situation.  However before I pushed the 1.2 release with the character-set bug-fix, I thought I’d take a look at scrolling the menu items.  Turns out, it was surprisingly easy.

You can see what I’m talking about here:


The funny thing about the Evernote Pebble watch app is that it knows almost nothing about Evernote.  The Evernote intelligence is all delegated to the companion app that runs on the phone.  The watch app knows how to display massive menus (paging items in and out as necessary), checkboxes, images, text, etc.

When the user scrolls to a new menu item, we kick off a timer using app_timer_register, waiting for one second.  If the user scrolls to another menu item before that timer has expired, we restart the one-second wait, this time using app_timer_reschedule:

static void selection_changed_callback(Layer *cell_layer, MenuIndex new_index,
                                       MenuIndex old_index, void *data) {
  WindowData* window_data = (WindowData*)data;
  window_data->moving_forwards_in_menu = new_index.row >= old_index.row;
  if(!window_data->menu_reloading_to_scroll) {
    initiate_menu_scroll_timer(window_data);
  } else {
    window_data->menu_reloading_to_scroll = false;
  }
}
The above method is called by the Pebble framework when the user scrolls to a new menu item.  The check for menu_reloading_to_scroll is called to work around some behavior I’ve seen.  This callback invokes the following method:

static void initiate_menu_scroll_timer(WindowData* window_data) {
  // If there is already a timer then reschedule it, otherwise create one
  bool need_to_create_timer = true;
  window_data->scrolling_still_required = true;
  window_data->menu_scroll_offset = 0;
  window_data->menu_reloading_to_scroll = false;
  if(window_data->menu_scroll_timer) {
    // APP_LOG(APP_LOG_LEVEL_DEBUG, "Rescheduling timer");
    need_to_create_timer = !app_timer_reschedule(window_data->menu_scroll_timer,
                                                 SCROLL_MENU_ITEM_WAIT_TIMER);
  }
  if(need_to_create_timer) {
    // APP_LOG(APP_LOG_LEVEL_DEBUG, "Creating timer");
    window_data->menu_scroll_timer = app_timer_register(SCROLL_MENU_ITEM_WAIT_TIMER,
                                                        scroll_menu_callback, window_data);
  }
}

As you can see it uses a WindowData structure, which is a custom structure associated with the current window via window_set_user_data.  Once the timer expires it calls scroll_menu_callback:

static void scroll_menu_callback(void* data) {
  WindowData* window_data = (WindowData*)data;
  if(!window_data->menu) {
    return;
  }
  window_data->menu_scroll_timer = NULL;
  if(!window_data->scrolling_still_required) {
    return;
  }

  // Redraw the menu with this scroll offset
  window_data->menu_scroll_offset++;
  MenuIndex menuIndex = menu_layer_get_selected_index(window_data->menu);
  if(menuIndex.row != 0) {
    window_data->menu_reloading_to_scroll = true;
  }
  window_data->scrolling_still_required = false;
  menu_layer_reload_data(window_data->menu);
  window_data->menu_scroll_timer = app_timer_register(SCROLL_MENU_ITEM_TIMER, scroll_menu_callback,
                                                      window_data);
}

This code is called once when the timer initiated by initiate_menu_scroll_timer expires (after the one-second delay), and then it invokes itself repeatedly using a shorter delay (a fifth of a second), until the menu item is fully scrolled.  The call to menu_layer_reload_data is what causes the menu to be redrawn, using the menu_scroll_offset to indicate how much to scroll the text by.
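Stripped of the Pebble specifics, the register-or-reschedule pattern amounts to a debounce.  Here is a minimal, self-contained sketch in plain C; FakeTimer and the fake_timer_* helpers are invented stand-ins for the SDK's AppTimer, app_timer_register and app_timer_reschedule, with millisecond timestamps passed in explicitly so the logic can be followed without an event loop:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for Pebble's AppTimer: a one-shot timer that fires
 * at an absolute time. */
typedef struct {
  int fire_at_ms;   /* when the timer will fire */
  bool active;      /* false once it has fired */
} FakeTimer;

static FakeTimer g_timer;

/* Mimics app_timer_register: schedules a fresh one-shot timer. */
static FakeTimer* fake_timer_register(int now_ms, int delay_ms) {
  g_timer.fire_at_ms = now_ms + delay_ms;
  g_timer.active = true;
  return &g_timer;
}

/* Mimics app_timer_reschedule: pushes an active timer back; fails if the
 * timer already fired, in which case the caller must register a new one. */
static bool fake_timer_reschedule(FakeTimer* timer, int now_ms, int delay_ms) {
  if (!timer->active) return false;
  timer->fire_at_ms = now_ms + delay_ms;
  return true;
}

/* The debounce from the post: on every scroll event, reschedule if possible,
 * otherwise register afresh. */
static FakeTimer* on_menu_selection_changed(FakeTimer* timer, int now_ms, int wait_ms) {
  if (timer == NULL || !fake_timer_reschedule(timer, now_ms, wait_ms)) {
    return fake_timer_register(now_ms, wait_ms);
  }
  return timer;
}
```

The point is the fallback: app_timer_reschedule fails once the timer has already fired, and only then is a fresh timer registered.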

This is the method that gets called by the draw_row_callback to get the text to be displayed for each menu item:

void get_menu_text(WindowData* window_data, int index, char** text, char** subtext) {
  MenuItem* menu_item = getMenuItem(window_data, index);
  *text = menu_item ? menu_item->text : NULL;
  *subtext = menu_item && menu_item->flags & ITEM_FLAG_TWO_LINER ?
             menu_item->text + strlen(menu_item->text) + 1 : NULL;
  if(*subtext != NULL && strlen(*subtext) == 0) {
    *subtext = NULL;
  }

  MenuIndex menuIndex = menu_layer_get_selected_index(window_data->menu);
  if(*text && menuIndex.row == index) {
    int len = strlen(*text);
    if(len - MENU_CHARS_VISIBLE - window_data->menu_scroll_offset > 0) {
      *text += window_data->menu_scroll_offset;
      window_data->scrolling_still_required = true;
    }
  }
}

The last few lines “scroll” the text, if the row corresponds to the currently selected item, by indexing into the text to be displayed, and indicating that scrolling is still required.  I’m not happy with using the fixed-size MENU_CHARS_VISIBLE to decide whether or not to scroll – it would be much nicer to measure the text and see if it fits.  If you know of a simple way to do this, please comment!
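Both tricks in get_menu_text are plain C pointer arithmetic, and can be demonstrated outside the Pebble SDK.  In this sketch the MENU_CHARS_VISIBLE value and the helper names are mine, purely for illustration:

```c
#include <assert.h>
#include <string.h>

#define MENU_CHARS_VISIBLE 9  /* arbitrary value, for illustration only */

/* "Scrolls" a menu title by advancing the start of the string, exactly as
 * get_menu_text does with *text += menu_scroll_offset.  Sets *still_required
 * when a further scroll step would reveal more text. */
static const char* scrolled_text(const char* text, int scroll_offset,
                                 int* still_required) {
  int len = (int)strlen(text);
  *still_required = 0;
  if (len - MENU_CHARS_VISIBLE - scroll_offset > 0) {
    *still_required = 1;
    return text + scroll_offset;
  }
  return text;
}

/* The ITEM_FLAG_TWO_LINER layout packs title and subtitle into one buffer,
 * separated by a NUL ("Title\0Subtitle"): the subtitle starts just past the
 * title's terminator, which is what menu_item->text + strlen(...) + 1 finds. */
static const char* packed_subtext(const char* packed) {
  return packed + strlen(packed) + 1;
}
```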

The final thing I needed to do was to actually send longer menu item text from the phone to the watch.  Since Pebble now supports sending more than the 120 or so bytes it used to, this was much easier.  I’m sending up to 32 characters now.

In summary I’m simply using a timer to redisplay the menu, each time scrolling the current menu item’s text by indexing into the character array, and I stop the timer once it has all been displayed.

Filed under: Pebble, Wearables

WatchKit Error – unable to instantiate row controller class

Trying to create a simple WatchKit table, I hit the error shown in this blog post title.

Your mileage may vary, but the eventual cause was that when I added my custom RowController class I accidentally added it to the wrong module … I added it to the main iOS app (WatchTest) instead of the Watch extension:


The first hint of this was when I was trying to reference the RowController when calling rowControllerAtIndex, and my custom row controller class could not be found:

var rootRow = rootTable.rowControllerAtIndex(0) as RootRowController

By this time I’d already set it as the RowController class for my table’s row in the storyboard, and had inadvertently referenced the wrong module:


I fixed the compilation error by adding my custom RowController to the Watch extension module, but accidentally added it to both modules:


Everything compiled, but when I ran it the log showed the error from the title: Error - unable to instantiate row controller class


I eventually figured out my mistake, and made sure that the row controller only belonged to the extension module:


And I made sure the correct module was referenced when defining the RowController in the storyboard:


It would be nice if the Watch App’s storyboard only saw classes in the Watch Extension’s module.

Filed under: Apple Watch, Swift

Using the Evernote API from Swift

There is a fine Evernote iOS SDK complete with extensive Objective-C examples.  In this blog post I want to share what I did to get it working with Swift.

First I created a new Swift iOS app (called “orgr” below), then I copied the ENSDKResources.bundle and evernote-sdk-ios sources ….


… into the new project, and added references to MobileCoreServices and libxml2 per the SDK instructions.


In order for the Swift code to see the Evernote Objective C SDK, I enabled the compatibility header and pointed it to a header in the SDK that included all the other headers I needed.


I also found (YMMV) that I needed to add a reference to the libxml2 path under Header Search Paths


Once I’d done this, I was able to build.  Next it was simply a question of translating the Objective-C example code to Swift.  This is the minimal example I came up with:


You’ll need to replace “token” and “url” parameters with the values you can obtain using the developer token page. This simple example just logs my notebooks.  Next steps are for you …

Filed under: Evernote, iOS

Eyeglasses are broken

My eyeglasses are broken, and I want them fixed.

I vividly remember the morning I woke up, and could no longer read.

Everything was blurry, and no matter how much I blinked away the night, I still could not read.  I could see things further off, and if I moved my phone well back past my normal reading distance, I could still just about focus.

Eventually my eyes could focus as normal, and I put the experience down to tiredness.  But soon the blurriness came back, and didn't leave.  I was being abruptly welcomed into late middle age.  I needed reading glasses.

I picked up a pair of cheap glasses from the local supermarket, and miracle of miracles, I could read again.  Everything was fine and crisp, even when I used the smallest font on the kindle app.

There was, however, still an issue.  When I was wearing my reading glasses, and I was looking at something that wasn't a book, that was further away, say a person's face, or a stop sign, everything was blurry.  I had to take my glasses off to see beyond the page in front of me.

So, in this age of miniaturized sensors, 3D printers, and new material science, why can I not buy a pair of glasses that sense how far away the objects are that the glasses are pointing at, and physically deform the lenses appropriately to bring those objects into focus for the wearer?

For me the lenses would become clear glass when looking at something in the distance, and would deform to +0.5 reading glasses when looking at a page in front of me.

There have been similar attempts in the past, but as technology advances, sensors shrink, and motors become miniaturized, I think it's time to look once again at eyeglasses.  The way they work now is broken.  If Google invested a fraction of the money they have in Google Glass, I'm convinced they could bring this kind of glasses to the world, benefiting hundreds of millions.  And just perhaps, by incorporating Glass-like functionality along for the ride, they could bring Glass to the masses.

Filed under: Product-Ideas

The inevitable evolution from wearables to embeddables

The inevitable evolution from wearables to embeddables is at once both exciting and horrifying.

Let's think about Bluetooth headsets. They are already becoming smaller, and will soon be invisible.

I believe that Bluetooth headsets will miniaturize to the point of being so tiny that they will be embedded subdermally, perhaps behind your ear. We'll solve the battery issues by harvesting the body's own heat, or body motion.

What will this give us? Only telepathy. You'll be able to communicate mind-to-mind with anyone on the planet through this device that is part of you, initially by voicing words sub-vocally, but perhaps one day through splicing directly into nerves.

It is as exciting as it is inevitable.

What is also inevitable is that a despotic regime somewhere will use such capabilities to pipe their propaganda directly into their citizens' minds. Can you imagine, from birth, having this incessant stream of brainwashing beamed directly to your brain? It's horrifying.

So, along with the best-case scenarios when dreaming of new technologies, let's also think of the nightmare worst-case scenarios, and make sure we do what we can to mitigate them. In this case, let's start with a physical off-switch.

Filed under: Product-Ideas

A useful in-car app experience

OK, I admit it: I can't help it. Whenever I hit a problem in the real world, I automatically seek to solve it, often through the hammer in my virtual toolbox, which is creating apps.

So what does this have to do with driving my kids home from school? There are traffic lights on the route. Lots and lots of them. Like all of you, I am sure, I never look at my phone screen when I am driving and the car is in motion, but when the car is stopped in front of traffic lights, it is often hard to resist quickly checking my email, or twitter, or whatever.

Of course that is a trap. Before I know it I've been sucked into my digital world, and am oblivious to the real world, until I am rudely and abruptly pulled out of it by the honking horn of the person behind me.

So what I want is this: An app that lets me use my phone as normal, but in the background, using the camera on my phone, locks in on to the red light of the traffic light, detects when an orange light appears next to it, and alerts me both audibly and visually that the lights are changing.

I'd even use it when I'm not looking at my phone, but instead lost in dreamy reverie, lost in my own thoughts, and equally oblivious to the lights changing.

But this is only part of my master plan. Oh no, it is not all.

Sometimes I'm stopped while in the car, and it isn't a traffic light that has stopped me. Instead it is a traffic jam. Like all of you, I am sure, I dream of being able to launch a small drone from my car to fly overhead to the front of the jam, to understand what is happening, and how long I will be stuck for. The drone would be paired with my phone, letting me control it from my phone, and beam back images to my phone.

It occurs to me that the whole drone thing is unnecessary, potentially dangerous, and more than likely illegal. Instead, all I need is an app that everyone in the traffic jam uses to broadcast the scene in front of them live. Then people far back from the front of the jam can zoom through the cameras, rushing forward car by car to the front, to understand what is happening.

With appropriate anonymizing safeguards in place (number plate blurring) it could also be used by news organizations and the emergency services.

Filed under: Product-Ideas

Interview for Connectedly on Evernote and Wearables

I recently gave a brief interview about Evernote and Wearables, with special focus on the Pebble, for Adam Zeis at Connectedly, part of the Mobile Nations group (Android Central, iMore, etc).

More here.

Filed under: Uncategorized

Evernote on your Pebble: your desktop duplicated?

At first glance it might look as though Evernote on the Pebble is simply a clone of Evernote for the desktop.

That would make absolutely no sense whatsoever, given that the Pebble has an entirely different form factor, with very different uses.

I’d like to share some of the ways in which Evernote on the Pebble has been tailored to the wrist-based experience, and what you can do to get the most out of it.   But first …

A step back … why wearables?

Earlier this year at the MCE conference I presented a hierarchy of uses for wearable devices:

  • Notifications, especially smart notifications based on your context, for example based on your current location, or who you are with, such as those provided by Google Now;
  • Sensors, especially health sensors, but also environmental sensors. Very soon we will examine the devices of someone who just died, as a kind of black box to determine what happened.
  • Control of the environment around you, such as the music playing on your phone or your house lights. The key is that you have to be able to do it without thinking about it … maybe gesture-based controls.
  • Capture of information, such as taking audio notes, or photos from your watch or Glass.
  • Consumption of information, such as viewing Evernote notes.  The key to this being useful is that the effort to view the information on your watch must be significantly lower than the effort to pull out your phone, unlock it, start the appropriate app, and navigate/search for the information.  Ideally the information should be pre-prepared for easy consumption based on your context, such as where you are, or what you are doing.

How does Evernote fit in?

Notifications work without the Evernote Pebble app

The Pebble already provides notifications from apps, so that when an Evernote reminder notification fires on your Phone …

… you’ll see that notification on your watch …

As the Evernote phone apps become more sophisticated about providing smarter, context-based notifications, you’ll get that for free on your watch. 

The Evernote app for the Pebble is very much focused on the last item in that list: consumption.

Easy access to your most important information: Your Shortcuts

On the desktop and mobile versions of Evernote, you use Shortcuts to give you easy, instant access to your most important information. Perhaps it's information that you always need to have at your fingertips, or that you are working on right now.


It stands to reason that on the Pebble we’d give you an easy way to access those Shortcuts, and we do:


But wouldn’t it be cool if you could access your most important information, your shortcuts, as soon as you start Evernote? 


We thought so too, which is why you can put your Shortcuts at the top level menu, before all the other Evernote menu items, so that you can see your most important stuff instantly:


Context-sensitive information: nearby notes

If you are about to walk into a meeting, or into a restaurant, then nearby notes are your friend:


This shows the notes that you created closest to your current location (yes, you can toggle between miles and kilometers), so that if you are about to go into a meeting with someone …


… you can quickly remind yourself about the person you are about to meet:


Activity-sensitive information: a custom checklist experience

Ideally Evernote for the Pebble would automatically detect that you are in the supermarket, and present you with your shopping list.  It doesn’t do that yet, but it does make it easy for you to check and uncheck checkboxes.

Specifically it looks for all your notes that have unchecked checkboxes in them, and presents them as a list.  If you choose one, then it just displays the checkboxes from the notes, and lets you check/uncheck them.

This makes for a super-convenient shopping experience.  If you’ve ever had to juggle a small child in one hand, a supermarket trolley in the other hand, and a mobile phone in the other hand, you’ll really appreciate being able to quickly and easily check items off, as you buy them:


What’s more, if you remembered to use Evernote on your phone to take a photo of the yoghurt pot back home, because you knew that you were likely to be overwhelmed when faced with a vast array of dairy produce at the shop …


… then you can navigate to that note on your watch, and glance at the photo:


The Pebble’s screen is quite small, and black-and-white, so you may need to squint a little to make out the photo!

Easy access to your most important notes: Reminders

If you don’t make much use of Reminders, then you might be a little puzzled to see a dedicated Reminders menu item on the Pebble:


The reason is that many, many people use Reminders as a way of “pinning” important notes to the top of their notes list.  Reminders are always shown at the top of the note list in the desktop apps:


On your Pebble you have quick and easy access to these important notes:


You can view a reminder:


And you can mark it as “done” by long-pressing:


Information at a glance.  When is it a chore, and when is it a glance?

The ideal Evernote experience on your watch gives you instant access to your most important information.  Evernote on the Pebble does this by giving you quick and easy access to your shortcuts, nearby notes, checklists and reminders.

But sometimes, that isn’t enough.  Then you have a choice: do you pull out your phone, unlock it, start Evernote, and search or navigate to the information you want? Or, if it is a small text note, might it be easier to navigate to it on your watch?

Depending on what kind of a person you are, and on how you use Evernote, the idea of navigating to your notes on your watch, by drilling down using Tags (for example) might seem either laughably complex, or super-cool and powerful.  If you are an early-adopter of wearable technology, for example if you were a Pebble Kickstarter backer, then chances are you fall into the second camp.

This is the reason for the other menu items I have not discussed above: Notebooks, Tags, and Saved Searches.  For some people, it would be much easier to quickly drill down to a note on their watch, than to pull out their phone.


Glancability may not be a real word, but if it were, it would be in the eye of the beholder.

The future of Evernote on wearables

By providing you with a customized experience on the Pebble, Evernote serves you information based on what is most important to you (shortcuts and reminders), what makes sense based on your current context (nearby notes, checklist notes) as well as the more traditional ways of accessing your notes (notebooks, tags, saved searches).

These are very early days for wearable technologies.  Evernote for the Pebble is a start … as the capabilities of wearable devices evolve, so will your Evernote wearable experience.  Evernote is very much about working in symbiosis with you, completing your thoughts for you, providing information to you before you even know you need it.  There is so much more to come.

Filed under: Evernote, Pebble

Understanding the Chrome Sync Protocol

Chrome is a cool browser, but its secret sauce is that no matter whether you are using iOS, Windows, Mac, Android, Linux or ChromeOS, you can sync your bookmarks, passwords, recently viewed URLs and more.

Did you notice any OS missing?  No?  OK, so perhaps you don’t use Windows Phone. 

But I do, as well as Android and iOS, and it bugged me that there was no way to sync all my Chrome goodness to Windows Phone, since Chrome is not available for Windows Phone.

So I implemented my own Chrome sync engine on Windows Phone, and in the process learned how Chrome sync works.

In this post I'll share what I learned, including how you authenticate in order to use it.

I'm going to do this by way of the free Chrome sync app I created for Windows Phone, called Chrync.


I reasoned that there must be a way of talking the Chrome sync protocol directly to Google's servers, since Chrome itself does it.

I started off by downloading the Chrome source code, and building it, and running it with a debugger attached.

I also discovered the wonderful world of Chrome debug pages, which are very helpful, especially the sync internals page which you can access by navigating to chrome://sync-internals/

Protocol Buffers

I found that the Chrome sync protocol is layered on top of a Google technology called Protocol Buffers, with the Chrome sync structures being defined in a language-independent protocol buffers IDL.

The main source is at http://src.chromium.org/viewvc/chrome/trunk/src/sync/protocol/, and there you’ll find the message types that are sent to and from the Google servers when a sync occurs.

If you want to browse, I suggest starting with sync.proto which defines the SyncEntity message containing core sync item fields, including an EntitySpecifics (also defined in sync.proto). 

The EntitySpecifics message contains a load of optional fields such as BookmarkSpecifics (used for syncing bookmarks), TypedUrlSpecifics (recently browsed URLs), PasswordSpecifics (saved passwords), SessionSpecifics (open sessions) and NigoriSpecifics (used for decrypting all this stuff).


Over time various extensions have been defined.  Indeed every time I check the Git source repository it seems that something new is happening, such as SyncedNotificationSpecifics.

Converting the protocol definitions to native code

I wanted to talk the Chrome protocol on Windows Phone, and went hunting for a C# implementation of Protocol Buffers that worked on Windows Phone.  I found two: protobuf-net by Marc Gravell and protobuf-csharp-port by Jon Skeet which I ended up using.

I was able to generate C# proxies for the Chrome sync protocol buffer files, and link in the .NET protocol buffers runtime.


The next step was to work out how to authenticate.

Requesting OAuth 2.0 access to Chrome sync data

Like many Google users, I use two factor authentication, and since I am especially paranoid, I have a custom Chrome sync passphrase defined.

Since I was making the app mainly for myself I needed to support both two factor authentication and custom passphrases.

Google has a standard OAuth 2.0 implementation which they describe here

You direct the user to a Google web site with an authentication request to Google, specifying in the scope parameter what access you require, for example you use userinfo.email to request access to the user’s email address.

You can indicate that your app requires access to all kinds of Google services using the Google Cloud Console.  You’ll notice, though, that there is no way to specify access to a user’s Chrome sync data.

After a little digging I discovered the magic string to request access in the scope parameter to Chrome sync data.  In fact I ask for access to the user’s email address, and their Chrome sync data. The scope I use is  https://www.googleapis.com/auth/userinfo.email+https://www.googleapis.com/auth/chromesync
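To make this concrete, here is a sketch (not Chrync's actual code, which is C#) of assembling the authorization URL with that scope.  The endpoint and parameter names are the standard Google OAuth 2.0 ones; the client ID and redirect URI are placeholders, and a real implementation would also percent-encode each value:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Assembles an OAuth 2.0 authorization URL.  The endpoint and parameter
 * names are Google's standard OAuth 2.0 ones; client_id and redirect_uri
 * are placeholders here, and values are not percent-encoded. */
static int build_auth_url(char* out, size_t out_len, const char* client_id,
                          const char* redirect_uri, const char* scope) {
  return snprintf(out, out_len,
                  "https://accounts.google.com/o/oauth2/auth"
                  "?response_type=code&client_id=%s&redirect_uri=%s&scope=%s",
                  client_id, redirect_uri, scope);
}
```

Passing the scope string above requests both the email address and the Chrome sync data in a single authorization.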

Below you see the OAuth 2.0 process in progress inside a web browser I host within the app.  You login, using two factor authentication if it is enabled, and then you get prompted to ask whether you want to give the app the access that it requests. 

For some reason, Google’s OAuth prompts are always in German for me, despite the fact that I speak no German, and although I live in Switzerland, I live in a French speaking area.  If you don’t speak German you’ll have to take my word for it that it is prompting for permission to access your email address and your Chrome sync data.



The result of this authentication is two tokens: an access token, which is good for a certain amount of time, and a refresh token, which can be used to generate a new access token when the access token expires.

Building the sync request

Initiating the sync process involves making an HTTP request to https://clients4.google.com/chrome-sync, with an Authorization HTTP header of “Bearer ” followed by the access token. The body of the message is an octet-stream containing the sync request.

The sync request itself is a GetUpdatesMessage wrapped in a ClientToServerMessage, both of which are defined in sync.proto:

message GetUpdatesMessage {
  // Indicates the client's current progress in downloading updates.  A
  // from_timestamp value of zero means that the client is requesting a first-
  // time sync.  After that point, clients should fill in this value with the
  // value returned in the last-seen GetUpdatesResponse.new_timestamp.
  //
  // from_timestamp has been deprecated; clients should use
  // |from_progress_marker| instead, which allows more flexibility.
  optional int64 from_timestamp = 1;

  // Indicates the reason for the GetUpdatesMessage.
  // Deprecated in M29.  We should eventually rely on GetUpdatesOrigin instead.
  // Newer clients will support both systems during the transition period.
  optional GetUpdatesCallerInfo caller_info = 2;

  // Indicates whether related folders should be fetched.
  optional bool fetch_folders = 3 [default = true];

  // The presence of an individual EntitySpecifics field indicates that the
  // client requests sync object types associated with that field.  This
  // determination depends only on the presence of the field, not its
  // contents -- thus clients should send empty messages as the field value.
  // For backwards compatibility only bookmark objects will be sent to the
  // client should requested_types not be present.
  //
  // requested_types may contain multiple EntitySpecifics fields -- in this
  // event, the server will return items of all the indicated types.
  //
  // requested_types has been deprecated; clients should use
  // |from_progress_marker| instead, which allows more flexibility.
  optional EntitySpecifics requested_types = 4;

  // Client-requested limit on the maximum number of updates to return at once.
  // The server may opt to return fewer updates than this amount, but it should
  // not return more.
  optional int32 batch_size = 5;

  // Per-datatype progress marker.  If present, the server will ignore
  // the values of requested_types and from_timestamp, using this instead.
  //
  // With the exception of certain configuration or initial sync requests, the
  // client should include one instance of this field for each enabled data
  // type.
  repeated DataTypeProgressMarker from_progress_marker = 6;

  // Indicates whether the response should be sent in chunks.  This may be
  // needed for devices with limited memory resources.  If true, the response
  // will include one or more ClientToServerResponses, with the first one
  // containing GetUpdatesMetadataResponse, and the remaining ones, if any,
  // containing GetUpdatesStreamingResponse.  These ClientToServerResponses are
  // delimited by a length prefix, which is encoded as a varint.
  optional bool streaming = 7 [default = false];

  // Whether the client needs the server to provide an encryption key for this
  // account.
  // Note: this should typically only be set on the first GetUpdates a client
  // requests. Clients are expected to persist the encryption key from then on.
  // The allowed frequency for requesting encryption keys is much lower than
  // other datatypes, so repeated usage will likely result in throttling.
  optional bool need_encryption_key = 8 [default = false];

  // Whether to create the mobile bookmarks folder if it's not
  // already created.  Should be set to true only by mobile clients.
  optional bool create_mobile_bookmarks_folder = 1000 [default = false];

  // This value is an updated version of the GetUpdatesCallerInfo's
  // GetUpdatesSource.  It describes the reason for the GetUpdate request.
  // Introduced in M29.
  optional SyncEnums.GetUpdatesOrigin get_updates_origin = 9;

  // Whether this GU also serves as a retry GU. Any GU that happens after
  // retry timer timeout is a retry GU effectively.
  optional bool is_retry = 10 [default = false];
};
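The streaming comment above says that chunked responses are “delimited by a length prefix, which is encoded as a varint”.  A varint is protocol buffers’ standard base-128 integer encoding: seven payload bits per byte, with the high bit set on every byte except the last.  A minimal sketch in C (the helper names are mine):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Encodes a value as a protocol-buffers base-128 varint: low 7 bits per
 * byte, high bit set on all bytes except the last.  Returns bytes written. */
static size_t varint_encode(uint64_t value, uint8_t* out) {
  size_t n = 0;
  do {
    uint8_t byte = value & 0x7F;
    value >>= 7;
    if (value) byte |= 0x80;  /* more bytes follow */
    out[n++] = byte;
  } while (value);
  return n;
}

/* Decodes a varint, reporting how many input bytes were consumed. */
static uint64_t varint_decode(const uint8_t* in, size_t* consumed) {
  uint64_t value = 0;
  int shift = 0;
  size_t n = 0;
  uint8_t byte;
  do {
    byte = in[n++];
    value |= (uint64_t)(byte & 0x7F) << shift;
    shift += 7;
  } while (byte & 0x80);
  *consumed = n;
  return value;
}
```

So a streamed response chunk of 300 bytes would be prefixed by the two bytes 0xAC 0x02.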


This is my code to build this sync request:
/// <summary>
/// Builds a sync request to be sent to the server.  Initializes it based on the user's selected
/// sync options, and previous sync state
/// </summary>
private byte[] BuildSyncRequest() {
  D("BuildSyncRequest invoked");
  // This ClientToServerMessage is generated from the sync.proto definition
  var myRequest = ClientToServerMessage.CreateBuilder();
  using (var db = _databaseFactory.Get()) {
    if (db == null) throw new Exception("User logged out");

    var syncState = db.GetSyncState();

    // We want to get updates; other options include COMMIT to send changes

    var callerInfo = GetUpdatesCallerInfo.CreateBuilder();
    callerInfo.NotificationsEnabled = true;
    var getUpdates = GetUpdatesMessage.CreateBuilder();
    getUpdates.SetCallerInfo(callerInfo);

    // Tell the server what kinds of sync items we can handle

    // We need this in case the user has encrypted everything ... nigori is used to get the
    // keys to decrypt encrypted items
    var nigoriDataType = InitializeDataType(db, EntitySpecifics.NigoriFieldNumber);
    getUpdates.AddFromProgressMarker(nigoriDataType);

    // We include bookmarks if the user selected them
    if ((_syncOptions.Flags & SyncFlags.Bookmarks) == SyncFlags.Bookmarks) {
      // The field is initialized with state information from the last sync, if any, so that
      // we only get changes since the latest sync
      var bookmarkDataType = InitializeDataType(db, EntitySpecifics.BookmarkFieldNumber);
      getUpdates.AddFromProgressMarker(bookmarkDataType);
    }
    if ((_syncOptions.Flags & SyncFlags.OpenTabs) == SyncFlags.OpenTabs) {
      var sessionDataType = InitializeDataType(db, EntitySpecifics.SessionFieldNumber);
      getUpdates.AddFromProgressMarker(sessionDataType);
    }
    if ((_syncOptions.Flags & SyncFlags.Omnibox) == SyncFlags.Omnibox) {
      var typedUrlDataType = InitializeDataType(db, EntitySpecifics.TypedUrlFieldNumber);
      getUpdates.AddFromProgressMarker(typedUrlDataType);
    }
    if ((_syncOptions.Flags & SyncFlags.Passwords) == SyncFlags.Passwords) {
      var passwordDataType = InitializeDataType(db, EntitySpecifics.PasswordFieldNumber);
      getUpdates.AddFromProgressMarker(passwordDataType);
    }
    if (syncState != null) {
      // ChipBag is "Per-client state for use by the server. Sent with every message sent to the server."
      // Soggy newspaper not included
      if (syncState.ChipBag != null) {
        var chipBag = ChipBag.CreateBuilder().SetServerChips(ByteString.CopyFrom(syncState.ChipBag)).Build();
        myRequest.SetChipBag(chipBag);
      }
      if (syncState.StoreBirthday != null) {
        myRequest.SetStoreBirthday(syncState.StoreBirthday);
      }
    }
    myRequest.SetGetUpdates(getUpdates);
  }

  var builtRequest = myRequest.Build();
  return builtRequest.ToByteArray();
}

/// <summary>
/// Creates and initializes a progress marker for one of the item types we sync
/// </summary>
private DataTypeProgressMarker.Builder InitializeDataType(IDatabase db, int fieldNumber) {
  var dataType = DataTypeProgressMarker.CreateBuilder();
  dataType.DataTypeId = fieldNumber;
  InitializeMarker(dataType, db);
  return dataType;
}

/// <summary>
/// Restores the saved sync state, if any, so we only receive changes since the last sync
/// </summary>
private void InitializeMarker(DataTypeProgressMarker.Builder dataType, IDatabase db) {
  var marker = db.GetSyncProgress(dataType.DataTypeId);
  if (marker == null) return;
  D("Initializing marker: " + marker);
  // The token is the opaque, per-datatype value the server returned last time
  dataType.Token = ByteString.CopyFrom(marker.Token);
  if (marker.NotificationHint != null) {
    dataType.NotificationHint = marker.NotificationHint;
  }
  if (marker.TimestampForMigration != 0) {
    dataType.TimestampTokenForMigration = marker.TimestampForMigration;
  }
}

Handling the sync response

Once this request is sent off, we get back a sync response in the form of a ClientToServerResponse containing a GetUpdatesResponse, both of which are also defined in sync.proto:

756    message GetUpdatesResponse {
757      // New sync entries that the client should apply.
758      repeated SyncEntity entries = 1;
760      // If there are more changes on the server that weren't processed during this
761      // GetUpdates request, the client should send another GetUpdates request and
762      // use new_timestamp as the from_timestamp value within GetUpdatesMessage.
763      //
764      // This field has been deprecated and will be returned only to clients
765      // that set the also-deprecated |from_timestamp| field in the update request.
766      // Clients should use |from_progress_marker| and |new_progress_marker|
767      // instead.
768      optional int64 new_timestamp = 2;
770      // DEPRECATED FIELD - server does not set this anymore.
771      optional int64 deprecated_newest_timestamp = 3;
773      // Approximate count of changes remaining - use this for UI feedback.
774      // If present and zero, this estimate is firm: the server has no changes
775      // after the current batch.
776      optional int64 changes_remaining = 4;
778      // Opaque, per-datatype timestamp-like tokens.  A client should use this
779      // field in lieu of new_timestamp, which is deprecated in newer versions
780      // of the protocol.  Clients should retain and persist the values returned
781      // in this field, and present them back to the server to indicate the
782      // starting point for future update requests.
783      //
784      // This will be sent only if the client provided |from_progress_marker|
785      // in the update request.
786      //
787      // The server may provide a new progress marker even if this is the end of
788      // the batch, or if there were no new updates on the server; and the client
789      // must save these.  If the server does not provide a |new_progress_marker|
790      // value for a particular datatype, when the request provided a
791      // |from_progress_marker| value for that datatype, the client should
792      // interpret this to mean "no change from the previous state" and retain its
793      // previous progress-marker value for that datatype.
794      //
795      // Progress markers in the context of a response will never have the
796      // |timestamp_token_for_migration| field set.
797      repeated DataTypeProgressMarker new_progress_marker = 5;
799      // The current encryption keys associated with this account. Will be set if
800      // the GetUpdatesMessage in the request had need_encryption_key == true or
801      // the server has updated the set of encryption keys (e.g. due to a key
802      // rotation).
803      repeated bytes encryption_keys = 6;
804    };
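The progress-marker contract described in those comments boils down to a simple loop: keep requesting updates, persist any new markers, and stop when changes_remaining reaches zero. Here is a sketch of that loop in Python, where fetch_updates is a hypothetical stand-in for a real ClientToServerMessage round-trip (not actual Chrync code):

```python
# Sketch of the GetUpdates loop implied by the progress-marker comments above.
def sync_all(fetch_updates, saved_markers):
    """Repeatedly request updates until the server says no changes remain.

    saved_markers maps data_type_id -> the opaque token persisted from the
    previous sync; fetch_updates stands in for a real server round-trip.
    """
    entities = []
    while True:
        response = fetch_updates(saved_markers)
        entities.extend(response["entries"])
        # Persist any new markers; a datatype with no new marker means
        # "no change", so its previous token is simply retained
        for marker in response.get("new_progress_marker", []):
            saved_markers[marker["data_type_id"]] = marker["token"]
        # changes_remaining == 0 is a firm signal: this batch was the last
        if response.get("changes_remaining", 0) == 0:
            return entities
```

The important detail, easy to miss in the proto comments, is that the markers must be persisted across sessions: the next sync presents them back to the server as the starting point.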



Note that at the start of GetUpdatesResponse there is a repeated series of SyncEntities.  SyncEntity is also defined in sync.proto:

134    message SyncEntity {
135      // This item's identifier.  In a commit of a new item, this will be a
136      // client-generated ID.  If the commit succeeds, the server will generate
137      // a globally unique ID and return it to the committing client in the
138      // CommitResponse.EntryResponse.  In the context of a GetUpdatesResponse,
139      // |id_string| is always the server generated ID.  The original
140      // client-generated ID is preserved in the |originator_client_id| field.
141      // Present in both GetUpdatesResponse and CommitMessage.
142      optional string id_string = 1;
144      // An id referencing this item's parent in the hierarchy.  In a
145      // CommitMessage, it is accepted for this to be a client-generated temporary
146      // ID if there was a new created item with that ID appearing earlier
147      // in the message.  In all other situations, it is a server ID.
148      // Present in both GetUpdatesResponse and CommitMessage.
149      optional string parent_id_string = 2;
151      // old_parent_id is only set in commits and indicates the old server
152      // parent(s) to remove. When omitted, the old parent is the same as
153      // the new.
154      // Present only in CommitMessage.
155      optional string old_parent_id = 3;
157      // The version of this item -- a monotonically increasing value that is
158      // maintained by the server for each item.  If zero in a CommitMessage, the server
159      // will interpret this entity as a newly-created item and generate a
160      // new server ID and an initial version number.  If nonzero in a
161      // CommitMessage, this item is treated as an update to an existing item, and
162      // the server will use |id_string| to locate the item.  Then, if the item's
163      // current version on the server does not match |version|, the commit will
164      // fail for that item.  The server will not update it, and will return
165      // a result code of CONFLICT.  In a GetUpdatesResponse, |version| is
166      // always positive and identifies the revision of the item data being sent
167      // to the client.
168      // Present in both GetUpdatesResponse and CommitMessage.
169      required int64 version = 4;
171      // Last modification time (in java time milliseconds)
172      // Present in both GetUpdatesResponse and CommitMessage.
173      optional int64 mtime = 5;
175      // Creation time.
176      // Present in both GetUpdatesResponse and CommitMessage.
177      optional int64 ctime = 6;
179      // The name of this item.
180      // Historical note:
181      //   Since November 2010, this value is no different from non_unique_name.
182      //   Before then, server implementations would maintain a unique-within-parent
183      //   value separate from its base, "non-unique" value.  Clients had not
184      //   depended on the uniqueness of the property since November 2009; it was
185      //   removed from Chromium by http://codereview.chromium.org/371029 .
186      // Present in both GetUpdatesResponse and CommitMessage.
187      required string name = 7;
189      // The name of this item.  Same as |name|.
190      // |non_unique_name| should take precedence over the |name| value if both
191      // are supplied.  For efficiency, clients and servers should avoid setting
192      // this redundant value.
193      // Present in both GetUpdatesResponse and CommitMessage.
194      optional string non_unique_name = 8;
196      // A value from a monotonically increasing sequence that indicates when
197      // this item was last updated on the server. This is now equivalent
198      // to version. This is now deprecated in favor of version.
199      // Present only in GetUpdatesResponse.
200      optional int64 sync_timestamp = 9;
202      // If present, this tag identifies this item as being a uniquely
203      // instanced item.  The server ensures that there is never more
204      // than one entity in a user's store with the same tag value.
205      // This value is used to identify and find e.g. the "Google Chrome" settings
206      // folder without relying on it existing at a particular path, or having
207      // a particular name, in the data store.
208      //
209      // This variant of the tag is created by the server, so clients can't create
210      // an item with a tag using this field.
211      //
212      // Use client_defined_unique_tag if you want to create one from the client.
213      //
214      // An item can't have both a client_defined_unique_tag and
215      // a server_defined_unique_tag.
216      //
217      // Present only in GetUpdatesResponse.
218      optional string server_defined_unique_tag = 10;
220      // If this group is present, it implies that this SyncEntity corresponds to
221      // a bookmark or a bookmark folder.
222      //
223      // This group is deprecated; clients should use the bookmark EntitySpecifics
224      // protocol buffer extension instead.
225      optional group BookmarkData = 11 {
226        // We use a required field to differentiate between a bookmark and a
227        // bookmark folder.
228        // Present in both GetUpdatesMessage and CommitMessage.
229        required bool bookmark_folder = 12;
231        // For bookmark objects, contains the bookmark's URL.
232        // Present in both GetUpdatesResponse and CommitMessage.
233        optional string bookmark_url = 13;
235        // For bookmark objects, contains the bookmark's favicon. The favicon is
236        // represented as a 16X16 PNG image.
237        // Present in both GetUpdatesResponse and CommitMessage.
238        optional bytes bookmark_favicon = 14;
239      }
241      // Supplies a numeric position for this item, relative to other items with the
242      // same parent.  Deprecated in M26, though clients are still required to set
243      // it.
244      //
245      // Present in both GetUpdatesResponse and CommitMessage.
246      //
247      // At one point this was used as an alternative / supplement to
248      // the deprecated |insert_after_item_id|, but now it, too, has been
249      // deprecated.
250      //
251      // In order to maintain compatibility with older clients, newer clients should
252      // still set this field.  Its value should be based on the first 8 bytes of
253      // this item's |unique_position|.
254      //
255      // Newer clients must also support the receipt of items that contain
256      // |position_in_parent| but no |unique_position|.  They should locally convert
257      // the given int64 position to a UniquePosition.
258      //
259      // The conversion from int64 to UniquePosition is as follows:
260      // The int64 value will have its sign bit flipped then placed in big endian
261      // order as the first 8 bytes of the UniquePosition.  The subsequent bytes of
262      // the UniquePosition will consist of the item's unique suffix.
263      //
264      // Conversion from UniquePosition to int64 reverses this process: the first 8
265      // bytes of the position are to be interpreted as a big endian int64 value
266      // with its sign bit flipped.
267      optional int64 position_in_parent = 15;
269      // Contains the ID of the element (under the same parent) after which this
270      // element resides. An empty string indicates that the element is the first
271      // element in the parent.  This value is used during commits to specify
272      // a relative position for a position change.  In the context of
273      // a GetUpdatesMessage, |position_in_parent| is used instead to
274      // communicate position.
275      //
276      // Present only in CommitMessage.
277      //
278      // This is deprecated.  Clients are allowed to omit this as long as they
279      // include |position_in_parent| instead.
280      optional string insert_after_item_id = 16;
282      // Arbitrary key/value pairs associated with this item.
283      // Present in both GetUpdatesResponse and CommitMessage.
284      // Deprecated.
285      // optional ExtendedAttributes extended_attributes = 17;
287      // If true, indicates that this item has been (or should be) deleted.
288      // Present in both GetUpdatesResponse and CommitMessage.
289      optional bool deleted = 18 [default = false];
291      // A GUID that identifies the sync client who initially committed
292      // this entity.  This value corresponds to |cache_guid| in CommitMessage.
293      // This field, along with |originator_client_item_id|, can be used to
294      // reunite the original with its official committed version in the case
295      // where a client does not receive or process the commit response for
296      // some reason.
297      //
298      // Present only in GetUpdatesResponse.
299      //
300      // This field is also used in determining the unique identifier used in
301      // bookmarks' unique_position field.
302      optional string originator_cache_guid = 19;
304      // The local item id of this entry from the client that initially
305      // committed this entity. Typically a negative integer.
306      // Present only in GetUpdatesResponse.
307      //
308      // This field is also used in determining the unique identifier used in
309      // bookmarks' unique_position field.
310      optional string originator_client_item_id = 20;
312      // Extensible container for datatype-specific data.
313      // This became available in version 23 of the protocol.
314      optional EntitySpecifics specifics = 21;
316      // Indicate whether this is a folder or not. Available in version 23+.
317      optional bool folder = 22 [default = false];
319      // A client defined unique hash for this entity.
320      // Similar to server_defined_unique_tag.
321      //
322      // When initially committing an entity, a client can request that the entity
323      // is unique per that account. To do so, the client should specify a
324      // client_defined_unique_tag. At most one entity per tag value may exist
325      // per account. The server will enforce uniqueness on this tag
326      // and fail attempts to create duplicates of this tag.
327      // Will be returned in any updates for this entity.
328      //
329      // The difference between server_defined_unique_tag and
330      // client_defined_unique_tag is the creator of the entity. Server defined
331      // tags are entities created by the server at account creation,
332      // while client defined tags are entities created by the client at any time.
333      //
334      // During GetUpdates, a sync entity update will come back with ONE of:
335      // a) Originator and cache id - If client committed the item as non "unique"
336      // b) Server tag - If server committed the item as unique
337      // c) Client tag - If client committed the item as unique
338      //
339      // May be present in CommitMessages for the initial creation of an entity.
340      // If present in Commit updates for the entity, it will be ignored.
341      //
342      // Available in version 24+.
343      //
344      // May be returned in GetUpdatesMessage and sent up in CommitMessage.
345      //
346      optional string client_defined_unique_tag = 23;
348      // This positioning system had a relatively short life.  It was made obsolete
349      // by |unique_position| before either the client or server made much of an
350      // attempt to support it.  In fact, no client ever read or set this field.
351      //
352      // Deprecated in M26.
353      optional bytes ordinal_in_parent = 24;
355      // This is the fourth attempt at positioning.
356      //
357      // This field is present in both GetUpdatesResponse and CommitMessage, if the
358      // item's type requires it and the client that wrote the item supports it (M26
359      // or higher).  Clients must also be prepared to handle updates from clients
360      // that do not set this field.  See the comments on
361      // |server_position_in_parent| for more information on how this is handled.
362      //
363      // This field will not be set for items whose type ignores positioning.
364      // Clients should not attempt to read this field on the receipt of an item of
365      // a type that ignores positioning.
366      //
367      // Refer to its definition in unique_position.proto for more information about
368      // its internal representation.
369      optional UniquePosition unique_position = 25;
370    };



What is most important in the SyncEntity is line 314, where you see that a SyncEntity contains an EntitySpecifics, which is where the good stuff is.  The EntitySpecifics looks like this:

64    message EntitySpecifics {
65      // If a datatype is encrypted, this field will contain the encrypted
66      // original EntitySpecifics. The extension for the datatype will continue
67      // to exist, but contain only the default values.
68      // Note that currently passwords employ their own legacy encryption scheme and
69      // do not use this field.
70      optional EncryptedData encrypted = 1;
72      // To add new datatype-specific fields to the protocol, extend
73      // EntitySpecifics.  First, pick a non-colliding tag number by
74      // picking a revision number of one of your past commits
75      // to src.chromium.org.  Then, in a different protocol buffer
76      // definition, define your message type, and add an optional field
77      // to the list below using the unique tag value you selected.
78      //
79      //  optional MyDatatypeSpecifics my_datatype = 32222;
80      //
81      // where:
82      //   - 32222 is the non-colliding tag number you picked earlier.
83      //   - MyDatatypeSpecifics is the type (probably a message type defined
84      //     in your new .proto file) that you want to associate with each
85      //     object of the new datatype.
86      //   - my_datatype is the field identifier you'll use to access the
87      //     datatype specifics from the code.
88      //
89      // Server implementations are obligated to preserve the contents of
90      // EntitySpecifics when it contains unrecognized fields.  In this
91      // way, it is possible to add new datatype fields without having
92      // to update the server.
93      //
94      // Note: The tag selection process is based on legacy versions of the
95      // protocol which used protobuf extensions. We have kept the process
96      // consistent as the old values cannot change.  The 5+ digit nature of the
97      // tags also makes them recognizable (individually and collectively) from
98      // noise in logs and debugging contexts, and creating a divergent subset of
99      // tags would only make things a bit more confusing.
101      optional AutofillSpecifics autofill = 31729;
102      optional BookmarkSpecifics bookmark = 32904;
103      optional PreferenceSpecifics preference = 37702;
104      optional TypedUrlSpecifics typed_url = 40781;
105      optional ThemeSpecifics theme = 41210;
106      optional AppNotification app_notification = 45184;
107      optional PasswordSpecifics password = 45873;
108      optional NigoriSpecifics nigori = 47745;
109      optional ExtensionSpecifics extension = 48119;
110      optional AppSpecifics app = 48364;
111      optional SessionSpecifics session = 50119;
112      optional AutofillProfileSpecifics autofill_profile = 63951;
113      optional SearchEngineSpecifics search_engine = 88610;
114      optional ExtensionSettingSpecifics extension_setting = 96159;
115      optional AppSettingSpecifics app_setting = 103656;
116      optional HistoryDeleteDirectiveSpecifics history_delete_directive = 150251;
117      optional SyncedNotificationSpecifics synced_notification = 153108;
118      optional SyncedNotificationAppInfoSpecifics synced_notification_app_info =
119          235816;
120      optional DeviceInfoSpecifics device_info = 154522;
121      optional ExperimentsSpecifics experiments = 161496;
122      optional PriorityPreferenceSpecifics priority_preference = 163425;
123      optional DictionarySpecifics dictionary = 170540;
124      optional FaviconTrackingSpecifics favicon_tracking = 181534;
125      optional FaviconImageSpecifics favicon_image = 182019;
126      optional ManagedUserSettingSpecifics managed_user_setting = 186662;
127      optional ManagedUserSpecifics managed_user = 194582;
128      optional ManagedUserSharedSettingSpecifics managed_user_shared_setting =
129          202026;
130      optional ArticleSpecifics article = 223759;
131      optional AppListSpecifics app_list = 229170;
132    }



As you can see, the EntitySpecifics contains EncryptedData plus an optional field for each of the data types.  A specific instance of an EntitySpecifics contains just one; for example, here is the BookmarkSpecifics from bookmark_specifics.proto:

23    // Properties of bookmark sync objects.
24    message BookmarkSpecifics {
25      optional string url = 1;
26      optional bytes favicon = 2;
27      optional string title = 3;
28      // Corresponds to BookmarkNode::date_added() and is the internal value from
29      // base::Time.
30      optional int64 creation_time_us = 4;
31      optional string icon_url = 5;
32      repeated MetaInfo meta_info = 6;
33    }
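The comment on creation_time_us is worth unpacking: base::Time's internal value is microseconds since the Windows epoch (1601-01-01 UTC), so converting it to an ordinary timestamp means subtracting the offset between the Windows and Unix epochs. A small sketch (my own helper, not code from the app):

```python
# base::Time stores microseconds since the Windows epoch (1601-01-01 UTC);
# converting BookmarkSpecifics.creation_time_us to Unix time means subtracting
# the 11,644,473,600 seconds between the two epochs.
from datetime import datetime, timezone

WINDOWS_TO_UNIX_EPOCH_US = 11_644_473_600 * 1_000_000

def bookmark_creation_time(creation_time_us: int) -> datetime:
    unix_us = creation_time_us - WINDOWS_TO_UNIX_EPOCH_US
    return datetime.fromtimestamp(unix_us / 1_000_000, tz=timezone.utc)
```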


Decrypting sync data

What makes things tricky is that you get a set of sync entities, some of which may be encrypted (in the EncryptedData EntitySpecifics field), but they cannot be decrypted until the NigoriSpecifics sync entity is received, which may not arrive for some time.  So I buffer the encrypted sync entities until they can be decrypted.
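That buffering can be sketched as a small class: entities arriving before their key are parked, and drained once the key shows up. This is an illustrative sketch in Python; DeferredDecryptor and its method names are mine, not from Chrync, and decrypt stands in for the real Nigori decryption:

```python
# Sketch of deferred decryption: entities whose key has not arrived yet are
# buffered, and decrypted once the keybag delivers the named key.
class DeferredDecryptor:
    def __init__(self, decrypt):
        self._decrypt = decrypt
        self._keys = {}      # key_name -> key material
        self._pending = []   # (key_name, blob) pairs waiting for their key

    def add_entity(self, key_name, blob):
        """Returns the plaintext if the key is known, else buffers the entity."""
        if key_name in self._keys:
            return self._decrypt(self._keys[key_name], blob)
        self._pending.append((key_name, blob))
        return None

    def add_key(self, key_name, key):
        """Registers a key (e.g. from a decrypted keybag) and drains the buffer."""
        self._keys[key_name] = key
        ready = [(n, b) for (n, b) in self._pending if n == key_name]
        self._pending = [(n, b) for (n, b) in self._pending if n != key_name]
        return [self._decrypt(key, blob) for (_, blob) in ready]
```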

Encrypted data looks like this in its Protocol Buffers definition in encryption.proto:

17    // Encrypted sync data consists of two parts: a key name and a blob. Key name is
18    // the name of the key that was used to encrypt blob and blob is encrypted data
19    // itself.
20    //
21    // The reason we need to keep track of the key name is that a sync user can
22    // change their passphrase (and thus their encryption key) at any time. When
23    // that happens, we make a best effort to reencrypt all nodes with the new
24    // passphrase, but since we don't have transactions on the server-side, we
25    // cannot guarantee that every node will be reencrypted. As a workaround, we
26    // keep track of all keys, assign each key a name (by using that key to encrypt
27    // a well known string) and keep track of which key was used to encrypt each
28    // node.
29    message EncryptedData {
30      optional string key_name = 1;
31      optional string blob = 2;
32    };

NigoriKey, NigoriKeyBag and NigoriSpecifics

The NigoriSpecifics (one of the entries in the EntitySpecifics) looks like this, with its associated types, in nigori_specifics.proto:

19    message NigoriKey {
20      optional string name = 1;
21      optional bytes user_key = 2;
22      optional bytes encryption_key = 3;
23      optional bytes mac_key = 4;
24    }
26    message NigoriKeyBag {
27      repeated NigoriKey key = 2;
28    }
30    // Properties of nigori sync object.
31    message NigoriSpecifics {
32      optional EncryptedData encryption_keybag = 1;
33      // Once keystore migration is performed, we have to freeze the keybag so that
34      // older clients (that don't support keystore encryption) do not attempt to
35      // update the keybag.
36      // Previously |using_explicit_passphrase|.
37      optional bool keybag_is_frozen = 2;
39      // Obsolete encryption fields. These were deprecated due to legacy versions
40      // that understand their usage but did not perform encryption properly.
41      // optional bool deprecated_encrypt_bookmarks = 3;
42      // optional bool deprecated_encrypt_preferences = 4;
43      // optional bool deprecated_encrypt_autofill_profile = 5;
44      // optional bool deprecated_encrypt_autofill = 6;
45      // optional bool deprecated_encrypt_themes = 7;
46      // optional bool deprecated_encrypt_typed_urls = 8;
47      // optional bool deprecated_encrypt_extensions = 9;
48      // optional bool deprecated_encrypt_sessions = 10;
49      // optional bool deprecated_encrypt_apps = 11;
50      // optional bool deprecated_encrypt_search_engines = 12;
52      // Booleans corresponding to whether a datatype should be encrypted.
53      // Passwords are always encrypted, so we don't need a field here.
54      // History delete directives need to be consumable by the server, and
55      // thus can't be encrypted.
56      // Synced Notifications need to be consumed by the server (the read flag)
57      // and thus can't be encrypted.
58      // Synced Notification App Info is set by the server, and thus cannot be
59      // encrypted.
60      optional bool encrypt_bookmarks = 13;
61      optional bool encrypt_preferences = 14;
62      optional bool encrypt_autofill_profile = 15;
63      optional bool encrypt_autofill = 16;
64      optional bool encrypt_themes = 17;
65      optional bool encrypt_typed_urls = 18;
66      optional bool encrypt_extensions = 19;
67      optional bool encrypt_sessions = 20;
68      optional bool encrypt_apps = 21;
69      optional bool encrypt_search_engines = 22;
71      // Deprecated on clients where tab sync is enabled by default.
72      // optional bool sync_tabs = 23;
74      // If true, all current and future datatypes will be encrypted.
75      optional bool encrypt_everything = 24;
77      optional bool encrypt_extension_settings = 25;
78      optional bool encrypt_app_notifications = 26;
79      optional bool encrypt_app_settings = 27;
81      // User device information. Contains information about each device that has a
82      // sync-enabled Chrome browser connected to the user account.
83      // This has been moved to the DeviceInfo message.
84      // repeated DeviceInformation deprecated_device_information = 28;
86      // Enable syncing favicons as part of tab sync.
87      optional bool sync_tab_favicons = 29;
89      // The state of the passphrase required to decrypt |encryption_keybag|.
90      enum PassphraseType {
91        // Gaia-based encryption passphrase. Deprecated.
92        IMPLICIT_PASSPHRASE = 1;
93        // Keystore key encryption passphrase. Uses |keystore_bootstrap| to
94        // decrypt |encryption_keybag|.
95        KEYSTORE_PASSPHRASE = 2;
96        // Previous Gaia-based passphrase frozen and treated as a custom passphrase.
97        FROZEN_IMPLICIT_PASSPHRASE = 3;
98        // User provided custom passphrase.
99        CUSTOM_PASSPHRASE = 4;
100      }
101      optional PassphraseType passphrase_type = 30
102          [default = IMPLICIT_PASSPHRASE];
104      // The keystore decryptor token blob. Encrypted with the keystore key, and
105      // contains the encryption key used to decrypt |encryption_keybag|.
106      // Only set if passphrase_state == KEYSTORE_PASSPHRASE.
107      optional EncryptedData keystore_decryptor_token = 31;
109      // The time (in epoch milliseconds) at which the keystore migration was
110      // performed.
111      optional int64 keystore_migration_time = 32;
113      // The time (in epoch milliseconds) at which a custom passphrase was set.
114      // Note: this field may not be set if the custom passphrase was applied before
115      // this field was introduced.
116      optional int64 custom_passphrase_time = 33;
118      // Boolean corresponding to whether custom spelling dictionary should be
119      // encrypted.
120      optional bool encrypt_dictionary = 34;
122      // Boolean corresponding to whether favicon data should be encrypted.
123      optional bool encrypt_favicon_images = 35;
124      optional bool encrypt_favicon_tracking = 36;
126      // Boolean corresponding to whether articles should be encrypted.
127      optional bool encrypt_articles = 37;
129      // Boolean corresponding to whether app list items should be encrypted.
130      optional bool encrypt_app_list = 38;
131    }


Note that the first item in the NigoriSpecifics is the encrypted NigoriKeyBag.  The NigoriKeyBag is a set of NigoriKeys, both defined above.  The NigoriKeys are used to decrypt things like the encrypted BookmarkSpecifics.

So the first thing to do is to decrypt the encrypted NigoriKeyBag.  I prompt the user for the custom passphrase:


Once I have the passphrase, I use it to decrypt the encryption_keybag’s bytes:

Decrypting data
    internal static byte[] Decrypt(string passwordText, string encryptedText) {
      try {
        // Derive the keys the same way Chrome's Nigori implementation does:
        // a salt from the (fixed) username, then three keys from the passphrase
        var salt = Encoding.UTF8.GetBytes("saltsalt");
        var rb = new Rfc2898DeriveBytes(HostUsername, salt, 1001);
        var userSalt = rb.GetBytes(16);

        var password = Encoding.UTF8.GetBytes(passwordText);
        rb = new Rfc2898DeriveBytes(password, userSalt, 1002);
        var userKey = rb.GetBytes(16); // part of the scheme, but unused for decryption

        rb = new Rfc2898DeriveBytes(password, userSalt, 1003);
        var encryptionKey = rb.GetBytes(16);

        rb = new Rfc2898DeriveBytes(password, userSalt, 1004);
        var macKey = rb.GetBytes(16);

        return Decrypt(encryptionKey, macKey, encryptedText);
      } catch (Exception) {
        return null;
      }
    }

    internal static byte[] Decrypt(byte[] encryptionKey, byte[] macKey, string encryptedText) {
      var input = Convert.FromBase64String(encryptedText);

      // The blob layout is: IV (16 bytes), ciphertext, HMAC-SHA256 (32 bytes)
      const int kIvSize = 16;
      const int kHashSize = 32;

      if (input.Length < kIvSize*2 + kHashSize) return null;

      var iv = new byte[kIvSize];
      Array.Copy(input, iv, iv.Length);
      var ciphertext = new byte[input.Length - (kIvSize + kHashSize)];
      Array.Copy(input, kIvSize, ciphertext, 0, ciphertext.Length);
      var hash = new byte[kHashSize];
      Array.Copy(input, input.Length - kHashSize, hash, 0, kHashSize);

      // Verify the MAC (computed over the ciphertext) before decrypting
      var hmac = new HMACSHA256(macKey);
      var calculatedHash = hmac.ComputeHash(ciphertext);
      if (!Enumerable.SequenceEqual(calculatedHash, hash)) {
        return null;
      }

      // AES-128-CBC decryption of the ciphertext
      var aes = new AesManaged {IV = iv, Key = encryptionKey};
      var cs = new CryptoStream(new MemoryStream(ciphertext), aes.CreateDecryptor(), CryptoStreamMode.Read);
      var decryptedMemoryStream = new MemoryStream();
      var buf = new byte[256];
      int count;
      while ((count = cs.Read(buf, 0, buf.Length)) > 0) {
        decryptedMemoryStream.Write(buf, 0, count);
      }
      return decryptedMemoryStream.ToArray();
    }
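For reference, the same derivation and blob layout can be expressed with Python's standard library: Rfc2898DeriveBytes is PBKDF2 with HMAC-SHA1, and the blob is IV (16 bytes), then ciphertext, then an HMAC-SHA256 (32 bytes) over the ciphertext. This is a sketch, not Chrync code; the username parameter plays the role of HostUsername above, and the AES-CBC step itself needs a third-party crypto library, so it is omitted:

```python
# PBKDF2/HMAC sketch of the Nigori key derivation and blob parsing above.
import hashlib
import hmac

def derive_nigori_keys(username: str, passphrase: str):
    # A 16-byte salt from the username, then user/encryption/MAC keys from the
    # passphrase, using iteration counts 1001-1004 as in the C# code
    salt = hashlib.pbkdf2_hmac("sha1", username.encode(), b"saltsalt", 1001, 16)
    pw = passphrase.encode()
    user_key = hashlib.pbkdf2_hmac("sha1", pw, salt, 1002, 16)
    encryption_key = hashlib.pbkdf2_hmac("sha1", pw, salt, 1003, 16)
    mac_key = hashlib.pbkdf2_hmac("sha1", pw, salt, 1004, 16)
    return user_key, encryption_key, mac_key

def split_and_verify(blob: bytes, mac_key: bytes):
    """Split the blob into IV/ciphertext/MAC; return (iv, ciphertext) or None."""
    if len(blob) < 16 * 2 + 32:
        return None
    iv, ciphertext, mac = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):
        return None
    return iv, ciphertext
```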

I then parse the decrypted bytes back into an actual keybag:

        var bag = NigoriKeyBag.ParseFrom(decrypted);

Each entry in the keybag is a NigoriKey, which can be used with the second Decrypt method above to decrypt EntitySpecifics entries:

var blob = encrypted.Blob;
var nigori = nigoris.ContainsKey(encrypted.KeyName)
                ? nigoris[encrypted.KeyName]
                : db.GetNigoriWithName(encrypted.KeyName);
if (nigori == null) {
  return null;
}
return Decryptor.Decrypt(nigori.EncryptionKey, nigori.MacKey, blob); // MacKey: assumed counterpart to EncryptionKey
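The lookup above checks an in-memory dictionary of Nigori keys before falling back to the database. The same cache-then-fallback pattern, sketched in Python with a hypothetical `db` object (all names here are mine, not the app's):

```python
class NigoriLookup:
    """Check an in-memory cache first, then fall back to persistent storage."""

    def __init__(self, db):
        self.db = db
        self.cache = {}

    def get(self, key_name):
        if key_name in self.cache:
            return self.cache[key_name]
        nigori = self.db.get_nigori_with_name(key_name)  # may return None
        if nigori is not None:
            self.cache[key_name] = nigori  # remember it for next time
        return nigori

class FakeDb:
    """Stand-in database that counts how often it is hit."""

    def __init__(self):
        self.calls = 0

    def get_nigori_with_name(self, name):
        self.calls += 1
        return {"name": name} if name == "known" else None

db = FakeDb()
lookup = NigoriLookup(db)
assert lookup.get("known") == {"name": "known"}
assert lookup.get("known") == {"name": "known"}
assert db.calls == 1  # the second request was served from the cache
assert lookup.get("missing") is None
```

Caching matters here because the same key name tends to recur across many sync entities, so most decryptions never touch the database.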

Processing the synced entities

After that it is pretty much plain sailing.  Here is the processing of the Bookmarks sync entity:

Processing bookmarks
 internal class BookmarkProcessor : EntityProcessor {
    public override bool Process(SyncEntity syncEntity, EntitySpecifics specifics) {
      if (!syncEntity.HasSpecifics || !syncEntity.Specifics.HasBookmark) return false;

      var bm = specifics == null ? syncEntity.Specifics.Bookmark : specifics.Bookmark;
      D("Processing bookmark " + bm.Title);

      var model = Db.GetSyncEntityWithId<BookmarkModel>(syncEntity.IdString);
      var isNew = model == null;

      if (isNew) {
        model = new BookmarkModel();
      }

      if (bm.HasFavicon) {
        model.Favicon = bm.Favicon.ToByteArray();
      }

      if (bm.HasTitle) {
        model.BookmarkTitle = bm.Title;
      }

      if (bm.HasUrl) {
        model.BookmarkUrl = bm.Url;
      }

      FillSyncEntityModel(syncEntity, model);

      if (isNew) {
        Db.Insert(model);   // Insert/Update: assumed persistence helpers on Db
      } else {
        Db.Update(model);
      }

      return true;
    }
  }


I process the decrypted sync entities and store them in a database, which I then use to drive the UI to let the user view bookmarks, recently browsed URLs, saved passwords, and open Chrome sessions on other machines:
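The overall flow — hand each decrypted entity to the processors in turn until one claims it — can be sketched like this (hypothetical names; the real app's classes differ):

```python
class BookmarkProcessor:
    def process(self, entity):
        if entity.get("kind") != "bookmark":
            return False  # not ours; let another processor try
        # ...store the bookmark fields in the database here...
        return True

class PasswordProcessor:
    def process(self, entity):
        if entity.get("kind") != "password":
            return False
        # ...store the saved password here...
        return True

def dispatch(entity, processors):
    """Hand the decrypted entity to the first processor that accepts it."""
    return any(p.process(entity) for p in processors)

processors = [BookmarkProcessor(), PasswordProcessor()]
assert dispatch({"kind": "bookmark"}, processors)
assert dispatch({"kind": "password"}, processors)
assert not dispatch({"kind": "theme"}, processors)  # nobody claims it
```

Returning `False` from a processor (like `Process` returning `false` above) lets unrecognized entity types fall through harmlessly, so new Chrome data types don't break the sync loop.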



What’s next?

Chrync is read-only: you can’t, for example, update your bookmarks. Also, tapping a bookmark launches the built-in browser.

So obvious updates to the app would be to embed a browser within the app, pre-populate password fields, etc.

My biggest concern with investing too much more time in Chrync is that Google could easily pull the plug on the app by disallowing my use of the chrome sync scope in the OAuth 2.0 request.

Although I charged for the app initially, I don’t any more – it doesn’t seem ethical to charge for something that could disappear any day.

I also had grand dreams of bringing Chrome sync to iOS, and indeed got it working by reusing the sync engine via Xamarin. With fantastic timing, I was just about to launch it when Google released Chrome for iOS …

So, I’ll continue to make minor updates, and if Google do decide to officially document and allow Chrome sync, maybe I’ll make a major update. 

Meanwhile people seem to like it.


Filed under: Chrync 5 Comments