Damian Mehers' Blog – Xamarin from Geneva, Switzerland

9 Feb 2014

Understanding the Chrome Sync Protocol

Chrome is a cool browser, but its secret sauce is that no matter whether you are using iOS, Windows, Mac, Android, Linux or ChromeOS, you can sync your bookmarks, passwords, recently viewed URLs and more.

Did you notice any OS missing?  No?  OK, so perhaps you don’t use Windows Phone.

But I do, as well as Android and iOS, and it bugged me that there was no way to sync all my Chrome goodness to Windows Phone, since Chrome is not available for Windows Phone.

So I implemented my own Chrome sync engine on Windows Phone, and in the process learned how Chrome sync works.

In this post I'll share what I learned, including how you authenticate in order to use it.

I'm going to do this by way of the free Chrome sync app I created for Windows Phone, called Chrync.


I reasoned that there must be a way of talking the Chrome sync protocol directly to Google's servers, since Chrome itself does it.

I started off by downloading the Chrome source code, building it, and running it with a debugger attached.

I also discovered the wonderful world of Chrome debug pages, which are very helpful, especially the sync internals page which you can access by navigating to chrome://sync-internals/

Protocol Buffers

I found that the Chrome sync protocol is layered on top of a Google technology called Protocol Buffers, with the Chrome sync structures being defined in a language independent protocol buffers IDL.

The main source is at http://src.chromium.org/viewvc/chrome/trunk/src/sync/protocol/, and there you’ll find the message types that are sent to and from the Google servers when a sync occurs.

If you want to browse, I suggest starting with sync.proto which defines the SyncEntity message containing core sync item fields, including an EntitySpecifics (also defined in sync.proto). 

The EntitySpecifics message contains a load of optional fields such as BookmarkSpecifics (used for syncing bookmarks), TypedUrlSpecifics (recently browsed URLs), PasswordSpecifics (saved passwords), SessionSpecifics (open sessions) and NigoriSpecifics (used for decrypting all this stuff).


Over time various extensions have been defined.  Indeed, every time I check the Git source repository it seems that something new is happening, such as SyncedNotificationSpecifics.

Converting the protocol definitions to native code

I wanted to talk the Chrome protocol on Windows Phone, and went hunting for a C# implementation of Protocol Buffers that worked on Windows Phone.  I found two: protobuf-net by Marc Gravell, and protobuf-csharp-port by Jon Skeet, which is the one I ended up using.

I was able to generate C# proxies for the Chrome sync protocol buffer files, and link in the .NET protocol buffers runtime.


The next step was to work out how to authenticate.

Requesting OAuth 2.0 access to Chrome sync data

Like many Google users, I use two factor authentication, and since I am especially paranoid, I have a custom Chrome sync passphrase defined.

Since I was making the app mainly for myself I needed to support both two factor authentication and custom passphrases.

Google has a standard OAuth 2.0 implementation, which they describe here.

You direct the user to a Google web page with an authentication request, specifying in the scope parameter what access you require; for example, you use userinfo.email to request access to the user’s email address.

You can indicate that your app requires access to all kinds of Google services using the Google Cloud Console.  You’ll notice, though, that there is no way to specify access to a user’s Chrome sync data.

After a little digging I discovered the magic string to put in the scope parameter to request access to Chrome sync data.  In fact I ask for access to both the user’s email address and their Chrome sync data.  The scope I use is https://www.googleapis.com/auth/userinfo.email+https://www.googleapis.com/auth/chromesync
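
To make that concrete, here is roughly how the authorization URL that I send the hosted browser to gets built.  This is just a sketch: the client ID and redirect URI are placeholders for the values registered in the Google Cloud Console, and the endpoint and parameter names are the standard OAuth 2.0 ones.

// Sketch: composing the OAuth 2.0 authorization URL.  The client ID and
// redirect URI below are placeholders for the values issued by the Google Cloud Console.
private static string BuildAuthorizationUrl() {
  const string clientId = "YOUR_CLIENT_ID.apps.googleusercontent.com";  // placeholder
  const string redirectUri = "urn:ietf:wg:oauth:2.0:oob";               // installed-app flow
  const string scope = "https://www.googleapis.com/auth/userinfo.email " +
                       "https://www.googleapis.com/auth/chromesync";    // space-separated; the '+' above is just the URL-encoded space

  return "https://accounts.google.com/o/oauth2/auth" +
         "?response_type=code" +
         "&client_id=" + Uri.EscapeDataString(clientId) +
         "&redirect_uri=" + Uri.EscapeDataString(redirectUri) +
         "&scope=" + Uri.EscapeDataString(scope);
}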

Below you see the OAuth 2.0 process in progress inside a web browser I host within the app.  You log in, using two factor authentication if it is enabled, and then you are prompted to confirm whether you want to give the app the access it requests.

For some reason, Google’s OAuth prompts are always in German for me, despite the fact that I speak no German, and although I live in Switzerland, I live in a French speaking area.  If you don’t speak German you’ll have to take my word for it that it is prompting for permission to access your email address and your Chrome sync data.

[Screenshots: the Google sign-in and consent pages shown in the app's hosted browser]

The result of this authentication is two tokens: an access token, which is valid for a limited time, and a refresh token, which can be used to obtain a new access token when the old one expires.
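
When the access token expires, the refresh token is exchanged for a new one by POSTing to Google’s token endpoint.  A minimal sketch (I use HttpClient here for brevity; the client ID and secret are again the values from the Google Cloud Console, and parsing the JSON response is left out):

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

internal static class TokenRefresher {
  // Exchanges a refresh token for a new access token.  The JSON response
  // contains access_token and expires_in.
  internal static async Task<string> RefreshAccessTokenAsync(string clientId,
                                                             string clientSecret,
                                                             string refreshToken) {
    using (var http = new HttpClient()) {
      var body = new FormUrlEncodedContent(new Dictionary<string, string> {
        { "client_id", clientId },
        { "client_secret", clientSecret },
        { "refresh_token", refreshToken },
        { "grant_type", "refresh_token" }
      });
      var response = await http.PostAsync("https://accounts.google.com/o/oauth2/token", body);
      response.EnsureSuccessStatusCode();
      return await response.Content.ReadAsStringAsync();  // JSON containing the new access_token
    }
  }
}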

Building the sync request

Initiating the sync process involves making an HTTP POST request to https://clients4.google.com/chrome-sync, with the Authorization header set to “Bearer” followed by the access token.  The body of the request is an application/octet-stream containing the serialized sync request.
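
In code the request looks something like this sketch (HttpClient is used here for readability; BuildSyncRequest is the method shown later in this section):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

internal class SyncClient {
  private const string SyncUrl = "https://clients4.google.com/chrome-sync";

  // POSTs the serialized ClientToServerMessage and returns the raw response bytes
  internal async Task<byte[]> SendSyncRequestAsync(string accessToken, byte[] requestBytes) {
    using (var http = new HttpClient()) {
      http.DefaultRequestHeaders.Authorization =
          new AuthenticationHeaderValue("Bearer", accessToken);
      var content = new ByteArrayContent(requestBytes);
      content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
      var response = await http.PostAsync(SyncUrl, content);
      response.EnsureSuccessStatusCode();
      return await response.Content.ReadAsByteArrayAsync();
    }
  }
}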

The sync request itself is a GetUpdatesMessage wrapped in a ClientToServerMessage, both of which are defined in sync.proto:

GetUpdatesMessage
525    message GetUpdatesMessage {
526      // Indicates the client's current progress in downloading updates.  A
527      // from_timestamp value of zero means that the client is requesting a first-
528      // time sync.  After that point, clients should fill in this value with the
529      // value returned in the last-seen GetUpdatesResponse.new_timestamp.
530      //
531      // from_timestamp has been deprecated; clients should use
532      // |from_progress_marker| instead, which allows more flexibility.
533      optional int64 from_timestamp = 1;
534    
535      // Indicates the reason for the GetUpdatesMessage.
536      // Deprecated in M29.  We should eventually rely on GetUpdatesOrigin instead.
537      // Newer clients will support both systems during the transition period.
538      optional GetUpdatesCallerInfo caller_info = 2;
539    
540      // Indicates whether related folders should be fetched.
541      optional bool fetch_folders = 3 [default = true];
542    
543      // The presence of an individual EntitySpecifics field indicates that the
544      // client requests sync object types associated with that field.  This
545      // determination depends only on the presence of the field, not its
546      // contents -- thus clients should send empty messages as the field value.
547      // For backwards compatibility only bookmark objects will be sent to the
548      // client should requested_types not be present.
549      //
550      // requested_types may contain multiple EntitySpecifics fields -- in this
551      // event, the server will return items of all the indicated types.
552      //
553      // requested_types has been deprecated; clients should use
554      // |from_progress_marker| instead, which allows more flexibility.
555      optional EntitySpecifics requested_types = 4;
556    
557      // Client-requested limit on the maximum number of updates to return at once.
558      // The server may opt to return fewer updates than this amount, but it should
559      // not return more.
560      optional int32 batch_size = 5;
561    
562      // Per-datatype progress marker.  If present, the server will ignore
563      // the values of requested_types and from_timestamp, using this instead.
564      //
565      // With the exception of certain configuration or initial sync requests, the
566      // client should include one instance of this field for each enabled data
567      // type.
568      repeated DataTypeProgressMarker from_progress_marker = 6;
569    
570      // Indicates whether the response should be sent in chunks.  This may be
571      // needed for devices with limited memory resources.  If true, the response
572      // will include one or more ClientToServerResponses, with the frist one
573      // containing GetUpdatesMetadataResponse, and the remaining ones, if any,
574      // containing GetUpdatesStreamingResponse.  These ClientToServerResponses are
575      // delimited by a length prefix, which is encoded as a varint.
576      optional bool streaming = 7 [default = false];
577    
578      // Whether the client needs the server to provide an encryption key for this
579      // account.
580      // Note: this should typically only be set on the first GetUpdates a client
581      // requests. Clients are expected to persist the encryption key from then on.
582      // The allowed frequency for requesting encryption keys is much lower than
583      // other datatypes, so repeated usage will likely result in throttling.
584      optional bool need_encryption_key = 8 [default = false];
585    
586      // Whether to create the mobile bookmarks folder if it's not
587      // already created.  Should be set to true only by mobile clients.
588      optional bool create_mobile_bookmarks_folder = 1000 [default = false];
589    
590      // This value is an updated version of the GetUpdatesCallerInfo's
591      // GetUpdatesSource.  It describes the reason for the GetUpdate request.
592      // Introduced in M29.
593      optional SyncEnums.GetUpdatesOrigin get_updates_origin = 9;
594    
595      // Whether this GU also serves as a retry GU. Any GU that happens after
596      // retry timer timeout is a retry GU effectively.
597      optional bool is_retry = 10 [default = false];
598    };

 

This is my code to build this sync request:
/// <summary>
/// Builds a sync request to be sent to the server.  Initializes it based on the user's selected
/// sync options, and previous sync state
/// </summary>
/// <returns></returns>
private byte[] BuildSyncRequest() {
  D("BuildSyncRequest invoked");
  // This ClientToServerMessage is generated from the sync.proto definition
  var myRequest = ClientToServerMessage.CreateBuilder();
  myRequest.SetShare(_syncOptions.User);
  using (var db = _databaseFactory.Get()) {
    if (db == null) throw new Exception("User logged out");

    var syncState = db.GetSyncState();

    // We want to get updates, other options include COMMIT to send changes
    myRequest.SetMessageContents(ClientToServerMessage.Types.Contents.GET_UPDATES);

    var callerInfo = GetUpdatesCallerInfo.CreateBuilder();
    callerInfo.NotificationsEnabled = true;
    callerInfo.SetSource(GetUpdatesCallerInfo.Types.GetUpdatesSource.PERIODIC);
    var getUpdates = GetUpdatesMessage.CreateBuilder();
    getUpdates.SetCallerInfo(callerInfo);
    getUpdates.SetFetchFolders(true);

    // Tell the server what kinds of sync items we can handle

    // We need this in case the user has encrypted everything ... nigori is to get the decryption
    // keys used to decrypt encrypted items
    var nigoriDataType = InitializeDataType(db, EntitySpecifics.NigoriFieldNumber);
    getUpdates.FromProgressMarkerList.Add(nigoriDataType.Build());

    // We include bookmarks if the user selected them
    if ((_syncOptions.Flags & SyncFlags.Bookmarks) == SyncFlags.Bookmarks) {
      // The field is initialized with state information from the last sync, if any, so that
      // we only get changes since the latest sync
      var bookmarkDataType = InitializeDataType(db, EntitySpecifics.BookmarkFieldNumber);
      getUpdates.FromProgressMarkerList.Add(bookmarkDataType.Build());
    }

    if ((_syncOptions.Flags & SyncFlags.OpenTabs) == SyncFlags.OpenTabs) {
      var sessionDataType = InitializeDataType(db, EntitySpecifics.SessionFieldNumber);
      getUpdates.FromProgressMarkerList.Add(sessionDataType.Build());
    }

    if ((_syncOptions.Flags & SyncFlags.Omnibox) == SyncFlags.Omnibox) {
      var typedUrlDataType = InitializeDataType(db, EntitySpecifics.TypedUrlFieldNumber);
      getUpdates.FromProgressMarkerList.Add(typedUrlDataType.Build());
    }

    if ((_syncOptions.Flags & SyncFlags.Passwords) == SyncFlags.Passwords) {
      var passwordDataType = InitializeDataType(db, EntitySpecifics.PasswordFieldNumber);
      getUpdates.FromProgressMarkerList.Add(passwordDataType.Build());
    }

    if (syncState != null) {
      // ChipBag is "Per-client state for use by the server. Sent with every message sent to the server."
      // Soggy newspaper not included
      if (syncState.ChipBag != null) {
        var chipBag = ChipBag.CreateBuilder().SetServerChips(ByteString.CopyFrom(syncState.ChipBag)).Build();
        myRequest.SetBagOfChips(chipBag);
      }

      if (syncState.StoreBirthday != null) {
        myRequest.SetStoreBirthday(syncState.StoreBirthday);
      }
    }

    myRequest.SetGetUpdates(getUpdates);

    myRequest.SetClientStatus(ClientStatus.CreateBuilder().Build());
  }

  var builtRequest = myRequest.Build();
  return builtRequest.ToByteArray();
}

/// <summary>
/// For each item type we sync, this method initializes it
/// </summary>
private DataTypeProgressMarker.Builder InitializeDataType(IDatabase db, int fieldNumber) {
  var dataType = DataTypeProgressMarker.CreateBuilder();
  dataType.SetDataTypeId(fieldNumber);
  InitializeMarker(dataType, db);
  return dataType;
}

/// <summary>
/// Initializes the sync state for the item types we sync
/// </summary>
private void InitializeMarker(DataTypeProgressMarker.Builder dataType, IDatabase db) {
  var marker = db.GetSyncProgress(dataType.DataTypeId);
  if (marker == null) {
    return;
  }
  D("Initializing marker: " + marker);
  if (marker.NotificationHint != null) {
    dataType.SetNotificationHint(marker.NotificationHint);
  }

  dataType.SetToken(ByteString.CopyFrom(marker.Token));
  if (marker.TimestampForMigration != 0) {
    dataType.SetTimestampTokenForMigration(marker.TimestampForMigration);
  }
}

 

Handling the sync response

Once this request is sent off we get back a sync response, in the form of a ClientToServerResponse containing a GetUpdatesResponse, both of which are also defined in sync.proto (a sketch of handling the response follows the definition):

GetUpdatesResponse
756    message GetUpdatesResponse {
757      // New sync entries that the client should apply.
758      repeated SyncEntity entries = 1;
759    
760      // If there are more changes on the server that weren't processed during this
761      // GetUpdates request, the client should send another GetUpdates request and
762      // use new_timestamp as the from_timestamp value within GetUpdatesMessage.
763      //
764      // This field has been deprecated and will be returned only to clients
765      // that set the also-deprecated |from_timestamp| field in the update request.
766      // Clients should use |from_progress_marker| and |new_progress_marker|
767      // instead.
768      optional int64 new_timestamp = 2;
769    
770      // DEPRECATED FIELD - server does not set this anymore.
771      optional int64 deprecated_newest_timestamp = 3;
772    
773      // Approximate count of changes remaining - use this for UI feedback.
774      // If present and zero, this estimate is firm: the server has no changes
775      // after the current batch.
776      optional int64 changes_remaining = 4;
777    
778      // Opaque, per-datatype timestamp-like tokens.  A client should use this
779      // field in lieu of new_timestamp, which is deprecated in newer versions
780      // of the protocol.  Clients should retain and persist the values returned
781      // in this field, and present them back to the server to indicate the
782      // starting point for future update requests.
783      //
784      // This will be sent only if the client provided |from_progress_marker|
785      // in the update request.
786      //
787      // The server may provide a new progress marker even if this is the end of
788      // the batch, or if there were no new updates on the server; and the client
789      // must save these.  If the server does not provide a |new_progress_marker|
790      // value for a particular datatype, when the request provided a
791      // |from_progress_marker| value for that datatype, the client should
792      // interpret this to mean "no change from the previous state" and retain its
793      // previous progress-marker value for that datatype.
794      //
795      // Progress markers in the context of a response will never have the
796      // |timestamp_token_for_migration| field set.
797      repeated DataTypeProgressMarker new_progress_marker = 5;
798    
799      // The current encryption keys associated with this account. Will be set if
800      // the GetUpdatesMessage in the request had need_encryption_key == true or
801      // the server has updated the set of encryption keys (e.g. due to a key
802      // rotation).
803      repeated bytes encryption_keys = 6;
804    };

 

SyncEntity

Note that at the start of GetUpdatesResponse there is a repeated series of SyncEntities.  SyncEntity is also defined in sync.proto:

134    message SyncEntity {
135      // This item's identifier.  In a commit of a new item, this will be a
136      // client-generated ID.  If the commit succeeds, the server will generate
137      // a globally unique ID and return it to the committing client in the
138      // CommitResponse.EntryResponse.  In the context of a GetUpdatesResponse,
139      // |id_string| is always the server generated ID.  The original
140      // client-generated ID is preserved in the |originator_client_id| field.
141      // Present in both GetUpdatesResponse and CommitMessage.
142      optional string id_string = 1;
143    
144      // An id referencing this item's parent in the hierarchy.  In a
145      // CommitMessage, it is accepted for this to be a client-generated temporary
146      // ID if there was a new created item with that ID appearing earlier
147      // in the message.  In all other situations, it is a server ID.
148      // Present in both GetUpdatesResponse and CommitMessage.
149      optional string parent_id_string = 2;
150    
151      // old_parent_id is only set in commits and indicates the old server
152      // parent(s) to remove. When omitted, the old parent is the same as
153      // the new.
154      // Present only in CommitMessage.
155      optional string old_parent_id = 3;
156    
157      // The version of this item -- a monotonically increasing value that is
158      // maintained by for each item.  If zero in a CommitMessage, the server
159      // will interpret this entity as a newly-created item and generate a
160      // new server ID and an initial version number.  If nonzero in a
161      // CommitMessage, this item is treated as an update to an existing item, and
162      // the server will use |id_string| to locate the item.  Then, if the item's
163      // current version on the server does not match |version|, the commit will
164      // fail for that item.  The server will not update it, and will return
165      // a result code of CONFLICT.  In a GetUpdatesResponse, |version| is
166      // always positive and indentifies the revision of the item data being sent
167      // to the client.
168      // Present in both GetUpdatesResponse and CommitMessage.
169      required int64 version = 4;
170    
171      // Last modification time (in java time milliseconds)
172      // Present in both GetUpdatesResponse and CommitMessage.
173      optional int64 mtime = 5;
174    
175      // Creation time.
176      // Present in both GetUpdatesResponse and CommitMessage.
177      optional int64 ctime = 6;
178    
179      // The name of this item.
180      // Historical note:
181      //   Since November 2010, this value is no different from non_unique_name.
182      //   Before then, server implementations would maintain a unique-within-parent
183      //   value separate from its base, "non-unique" value.  Clients had not
184      //   depended on the uniqueness of the property since November 2009; it was
185      //   removed from Chromium by http://codereview.chromium.org/371029 .
186      // Present in both GetUpdatesResponse and CommitMessage.
187      required string name = 7;
188    
189      // The name of this item.  Same as |name|.
190      // |non_unique_name| should take precedence over the |name| value if both
191      // are supplied.  For efficiency, clients and servers should avoid setting
192      // this redundant value.
193      // Present in both GetUpdatesResponse and CommitMessage.
194      optional string non_unique_name = 8;
195    
196      // A value from a monotonically increasing sequence that indicates when
197      // this item was last updated on the server. This is now equivalent
198      // to version. This is now deprecated in favor of version.
199      // Present only in GetUpdatesResponse.
200      optional int64 sync_timestamp = 9;
201    
202      // If present, this tag identifies this item as being a uniquely
203      // instanced item.  The server ensures that there is never more
204      // than one entity in a user's store with the same tag value.
205      // This value is used to identify and find e.g. the "Google Chrome" settings
206      // folder without relying on it existing at a particular path, or having
207      // a particular name, in the data store.
208      //
209      // This variant of the tag is created by the server, so clients can't create
210      // an item with a tag using this field.
211      //
212      // Use client_defined_unique_tag if you want to create one from the client.
213      //
214      // An item can't have both a client_defined_unique_tag and
215      // a server_defined_unique_tag.
216      //
217      // Present only in GetUpdatesResponse.
218      optional string server_defined_unique_tag = 10;
219    
220      // If this group is present, it implies that this SyncEntity corresponds to
221      // a bookmark or a bookmark folder.
222      //
223      // This group is deprecated; clients should use the bookmark EntitySpecifics
224      // protocol buffer extension instead.
225      optional group BookmarkData = 11 {
226        // We use a required field to differentiate between a bookmark and a
227        // bookmark folder.
228        // Present in both GetUpdatesMessage and CommitMessage.
229        required bool bookmark_folder = 12;
230    
231        // For bookmark objects, contains the bookmark's URL.
232        // Present in both GetUpdatesResponse and CommitMessage.
233        optional string bookmark_url = 13;
234    
235        // For bookmark objects, contains the bookmark's favicon. The favicon is
236        // represented as a 16X16 PNG image.
237        // Present in both GetUpdatesResponse and CommitMessage.
238        optional bytes bookmark_favicon = 14;
239      }
240    
241      // Supplies a numeric position for this item, relative to other items with the
242      // same parent.  Deprecated in M26, though clients are still required to set
243      // it.
244      //
245      // Present in both GetUpdatesResponse and CommitMessage.
246      //
247      // At one point this was used as an alternative / supplement to
248      // the deprecated |insert_after_item_id|, but now it, too, has been
249      // deprecated.
250      //
251      // In order to maintain compatibility with older clients, newer clients should
252      // still set this field.  Its value should be based on the first 8 bytes of
253      // this item's |unique_position|.
254      //
255      // Nerwer clients must also support the receipt of items that contain
256      // |position_in_parent| but no |unique_position|.  They should locally convert
257      // the given int64 position to a UniquePosition.
258      //
259      // The conversion from int64 to UniquePosition is as follows:
260      // The int64 value will have its sign bit flipped then placed in big endian
261      // order as the first 8 bytes of the UniquePosition.  The subsequent bytes of
262      // the UniquePosition will consist of the item's unique suffix.
263      //
264      // Conversion from UniquePosition to int64 reverses this process: the first 8
265      // bytes of the position are to be interpreted as a big endian int64 value
266      // with its sign bit flipped.
267      optional int64 position_in_parent = 15;
268    
269      // Contains the ID of the element (under the same parent) after which this
270      // element resides. An empty string indicates that the element is the first
271      // element in the parent.  This value is used during commits to specify
272      // a relative position for a position change.  In the context of
273      // a GetUpdatesMessage, |position_in_parent| is used instead to
274      // communicate position.
275      //
276      // Present only in CommitMessage.
277      //
278      // This is deprecated.  Clients are allowed to omit this as long as they
279      // include |position_in_parent| instead.
280      optional string insert_after_item_id = 16;
281    
282      // Arbitrary key/value pairs associated with this item.
283      // Present in both GetUpdatesResponse and CommitMessage.
284      // Deprecated.
285      // optional ExtendedAttributes extended_attributes = 17;
286    
287      // If true, indicates that this item has been (or should be) deleted.
288      // Present in both GetUpdatesResponse and CommitMessage.
289      optional bool deleted = 18 [default = false];
290    
291      // A GUID that identifies the the sync client who initially committed
292      // this entity.  This value corresponds to |cache_guid| in CommitMessage.
293      // This field, along with |originator_client_item_id|, can be used to
294      // reunite the original with its official committed version in the case
295      // where a client does not receive or process the commit response for
296      // some reason.
297      //
298      // Present only in GetUpdatesResponse.
299      //
300      // This field is also used in determining the unique identifier used in
301      // bookmarks' unique_position field.
302      optional string originator_cache_guid = 19;
303    
304      // The local item id of this entry from the client that initially
305      // committed this entity. Typically a negative integer.
306      // Present only in GetUpdatesResponse.
307      //
308      // This field is also used in determinging the unique identifier used in
309      // bookmarks' unique_position field.
310      optional string originator_client_item_id = 20;
311    
312      // Extensible container for datatype-specific data.
313      // This became available in version 23 of the protocol.
314      optional EntitySpecifics specifics = 21;
315    
316      // Indicate whether this is a folder or not. Available in version 23+.
317      optional bool folder = 22 [default = false];
318    
319      // A client defined unique hash for this entity.
320      // Similar to server_defined_unique_tag.
321      //
322      // When initially committing an entity, a client can request that the entity
323      // is unique per that account. To do so, the client should specify a
324      // client_defined_unique_tag. At most one entity per tag value may exist.
325      // per account. The server will enforce uniqueness on this tag
326      // and fail attempts to create duplicates of this tag.
327      // Will be returned in any updates for this entity.
328      //
329      // The difference between server_defined_unique_tag and
330      // client_defined_unique_tag is the creator of the entity. Server defined
331      // tags are entities created by the server at account creation,
332      // while client defined tags are entities created by the client at any time.
333      //
334      // During GetUpdates, a sync entity update will come back with ONE of:
335      // a) Originator and cache id - If client committed the item as non "unique"
336      // b) Server tag - If server committed the item as unique
337      // c) Client tag - If client committed the item as unique
338      //
339      // May be present in CommitMessages for the initial creation of an entity.
340      // If present in Commit updates for the entity, it will be ignored.
341      //
342      // Available in version 24+.
343      //
344      // May be returned in GetUpdatesMessage and sent up in CommitMessage.
345      //
346      optional string client_defined_unique_tag = 23;
347    
348      // This positioning system had a relatively short life.  It was made obsolete
349      // by |unique_position| before either the client or server made much of an
350      // attempt to support it.  In fact, no client ever read or set this field.
351      //
352      // Deprecated in M26.
353      optional bytes ordinal_in_parent = 24;
354    
355      // This is the fourth attempt at positioning.
356      //
357      // This field is present in both GetUpdatesResponse and CommitMessage, if the
358      // item's type requires it and the client that wrote the item supports it (M26
359      // or higher).  Clients must also be prepared to handle updates from clients
360      // that do not set this field.  See the comments on
361      // |server_position_in_parent| for more information on how this is handled.
362      //
363      // This field will not be set for items whose type ignores positioning.
364      // Clients should not attempt to read this field on the receipt of an item of
365      // a type that ignores positioning.
366      //
367      // Refer to its definition in unique_position.proto for more information about
368      // its internal representation.
369      optional UniquePosition unique_position = 25;
370    };

 

EntitySpecifics

What is most important in the SyncEntity is line 314, where you see that a SyncEntity contains an EntitySpecifics, which is where the good stuff is.  The EntitySpecifics looks like this:

64    message EntitySpecifics {
65      // If a datatype is encrypted, this field will contain the encrypted
66      // original EntitySpecifics. The extension for the datatype will continue
67      // to exist, but contain only the default values.
68      // Note that currently passwords employ their own legacy encryption scheme and
69      // do not use this field.
70      optional EncryptedData encrypted = 1;
71    
72      // To add new datatype-specific fields to the protocol, extend
73      // EntitySpecifics.  First, pick a non-colliding tag number by
74      // picking a revision number of one of your past commits
75      // to src.chromium.org.  Then, in a different protocol buffer
76      // definition, define your message type, and add an optional field
77      // to the list below using the unique tag value you selected.
78      //
79      //  optional MyDatatypeSpecifics my_datatype = 32222;
80      //
81      // where:
82      //   - 32222 is the non-colliding tag number you picked earlier.
83      //   - MyDatatypeSpecifics is the type (probably a message type defined
84      //     in your new .proto file) that you want to associate with each
85      //     object of the new datatype.
86      //   - my_datatype is the field identifier you'll use to access the
87      //     datatype specifics from the code.
88      //
89      // Server implementations are obligated to preserve the contents of
90      // EntitySpecifics when it contains unrecognized fields.  In this
91      // way, it is possible to add new datatype fields without having
92      // to update the server.
93      //
94      // Note: The tag selection process is based on legacy versions of the
95      // protocol which used protobuf extensions. We have kept the process
96      // consistent as the old values cannot change.  The 5+ digit nature of the
97      // tags also makes them recognizable (individually and collectively) from
98      // noise in logs and debugging contexts, and creating a divergent subset of
99      // tags would only make things a bit more confusing.
100    
101      optional AutofillSpecifics autofill = 31729;
102      optional BookmarkSpecifics bookmark = 32904;
103      optional PreferenceSpecifics preference = 37702;
104      optional TypedUrlSpecifics typed_url = 40781;
105      optional ThemeSpecifics theme = 41210;
106      optional AppNotification app_notification = 45184;
107      optional PasswordSpecifics password = 45873;
108      optional NigoriSpecifics nigori = 47745;
109      optional ExtensionSpecifics extension = 48119;
110      optional AppSpecifics app = 48364;
111      optional SessionSpecifics session = 50119;
112      optional AutofillProfileSpecifics autofill_profile = 63951;
113      optional SearchEngineSpecifics search_engine = 88610;
114      optional ExtensionSettingSpecifics extension_setting = 96159;
115      optional AppSettingSpecifics app_setting = 103656;
116      optional HistoryDeleteDirectiveSpecifics history_delete_directive = 150251;
117      optional SyncedNotificationSpecifics synced_notification = 153108;
118      optional SyncedNotificationAppInfoSpecifics synced_notification_app_info =
119          235816;
120      optional DeviceInfoSpecifics device_info = 154522;
121      optional ExperimentsSpecifics experiments = 161496;
122      optional PriorityPreferenceSpecifics priority_preference = 163425;
123      optional DictionarySpecifics dictionary = 170540;
124      optional FaviconTrackingSpecifics favicon_tracking = 181534;
125      optional FaviconImageSpecifics favicon_image = 182019;
126      optional ManagedUserSettingSpecifics managed_user_setting = 186662;
127      optional ManagedUserSpecifics managed_user = 194582;
128      optional ManagedUserSharedSettingSpecifics managed_user_shared_setting =
129          202026;
130      optional ArticleSpecifics article = 223759;
131      optional AppListSpecifics app_list = 229170;
132    }

 

BookmarkSpecifics

As you can see, the EntitySpecifics contains an EncryptedData field and optional fields for each of the data types.  A specific instance of an EntitySpecifics contains just one of them; for example, here is the BookmarkSpecifics from bookmark_specifics.proto:

23    // Properties of bookmark sync objects.
24    message BookmarkSpecifics {
25      optional string url = 1;
26      optional bytes favicon = 2;
27      optional string title = 3;
28      // Corresponds to BookmarkNode::date_added() and is the internal value from
29      // base::Time.
30      optional int64 creation_time_us = 4;
31      optional string icon_url = 5;
32      repeated MetaInfo meta_info = 6;
33    }

 

Decrypting sync data

What makes things tricky is that you get a set of sync entities, some of which may be encrypted (in the EncryptedData EntitySpecifics field), but they cannot be decrypted until the NigoriSpecifics sync entity is received, which may take some time.  So I buffer the encrypted sync entities until they can be decrypted.
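
A rough sketch of that buffering (ProcessEntity and HaveKeyFor are hypothetical names standing in for the real processing and key-lookup code; the Specifics, HasEncrypted and KeyName property names follow what the protocol buffer generator produces):

// Entities whose specifics are still encrypted, parked until the Nigori keys arrive
private readonly List<SyncEntity> _pendingEncrypted = new List<SyncEntity>();

private void ProcessOrDefer(SyncEntity entity, IDatabase db) {
  if (entity.Specifics.HasEncrypted && !HaveKeyFor(entity.Specifics.Encrypted.KeyName, db)) {
    _pendingEncrypted.Add(entity);  // can't decrypt this yet
    return;
  }
  ProcessEntity(entity, db);
}

// Called once the NigoriSpecifics entity has been processed and the keybag decrypted
private void DrainPendingEntities(IDatabase db) {
  foreach (var entity in _pendingEncrypted) {
    ProcessEntity(entity, db);
  }
  _pendingEncrypted.Clear();
}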

Encrypted data looks like this in its Protocol Buffers definition in encryption.proto:

EncryptedData
17    // Encrypted sync data consists of two parts: a key name and a blob. Key name is
18    // the name of the key that was used to encrypt blob and blob is encrypted data
19    // itself.
20    //
21    // The reason we need to keep track of the key name is that a sync user can
22    // change their passphrase (and thus their encryption key) at any time. When
23    // that happens, we make a best effort to reencrypt all nodes with the new
24    // passphrase, but since we don't have transactions on the server-side, we
25    // cannot guarantee that every node will be reencrypted. As a workaround, we
26    // keep track of all keys, assign each key a name (by using that key to encrypt
27    // a well known string) and keep track of which key was used to encrypt each
28    // node.
29    message EncryptedData {
30      optional string key_name = 1;
31      optional string blob = 2;
32    };

NigoriKey, NigoriKeyBag and NigoriSpecifics

The NigoriSpecifics (one of the entries in the EntitySpecifics) looks like this, together with its associated types, in nigori_specifics.proto:

19    message NigoriKey {
20      optional string name = 1;
21      optional bytes user_key = 2;
22      optional bytes encryption_key = 3;
23      optional bytes mac_key = 4;
24    }
25    
26    message NigoriKeyBag {
27      repeated NigoriKey key = 2;
28    }
29    
30    // Properties of nigori sync object.
31    message NigoriSpecifics {
32      optional EncryptedData encryption_keybag = 1;
33      // Once keystore migration is performed, we have to freeze the keybag so that
34      // older clients (that don't support keystore encryption) do not attempt to
35      // update the keybag.
36      // Previously |using_explicit_passphrase|.
37      optional bool keybag_is_frozen = 2;
38    
39      // Obsolete encryption fields. These were deprecated due to legacy versions
40      // that understand their usage but did not perform encryption properly.
41      // optional bool deprecated_encrypt_bookmarks = 3;
42      // optional bool deprecated_encrypt_preferences = 4;
43      // optional bool deprecated_encrypt_autofill_profile = 5;
44      // optional bool deprecated_encrypt_autofill = 6;
45      // optional bool deprecated_encrypt_themes = 7;
46      // optional bool deprecated_encrypt_typed_urls = 8;
47      // optional bool deprecated_encrypt_extensions = 9;
48      // optional bool deprecated_encrypt_sessions = 10;
49      // optional bool deprecated_encrypt_apps = 11;
50      // optional bool deprecated_encrypt_search_engines = 12;
51    
52      // Booleans corresponding to whether a datatype should be encrypted.
53      // Passwords are always encrypted, so we don't need a field here.
54      // History delete directives need to be consumable by the server, and
55      // thus can't be encrypted.
56      // Synced Notifications need to be consumed by the server (the read flag)
57      // and thus can't be encrypted.
58      // Synced Notification App Info is set by the server, and thus cannot be
59      // encrypted.
60      optional bool encrypt_bookmarks = 13;
61      optional bool encrypt_preferences = 14;
62      optional bool encrypt_autofill_profile = 15;
63      optional bool encrypt_autofill = 16;
64      optional bool encrypt_themes = 17;
65      optional bool encrypt_typed_urls = 18;
66      optional bool encrypt_extensions = 19;
67      optional bool encrypt_sessions = 20;
68      optional bool encrypt_apps = 21;
69      optional bool encrypt_search_engines = 22;
70    
71      // Deprecated on clients where tab sync is enabled by default.
72      // optional bool sync_tabs = 23;
73    
74      // If true, all current and future datatypes will be encrypted.
75      optional bool encrypt_everything = 24;
76    
77      optional bool encrypt_extension_settings = 25;
78      optional bool encrypt_app_notifications = 26;
79      optional bool encrypt_app_settings = 27;
80    
81      // User device information. Contains information about each device that has a
82      // sync-enabled Chrome browser connected to the user account.
83      // This has been moved to the DeviceInfo message.
84      // repeated DeviceInformation deprecated_device_information = 28;
85    
86      // Enable syncing favicons as part of tab sync.
87      optional bool sync_tab_favicons = 29;
88    
89      // The state of the passphrase required to decrypt |encryption_keybag|.
90      enum PassphraseType {
91        // Gaia-based encryption passphrase. Deprecated.
92        IMPLICIT_PASSPHRASE = 1;
93        // Keystore key encryption passphrase. Uses |keystore_bootstrap| to
94        // decrypt |encryption_keybag|.
95        KEYSTORE_PASSPHRASE = 2;
96        // Previous Gaia-based passphrase frozen and treated as a custom passphrase.
97        FROZEN_IMPLICIT_PASSPHRASE  = 3;
98        // User provided custom passphrase.
99        CUSTOM_PASSPHRASE = 4;
100      }
101      optional PassphraseType passphrase_type = 30
102          [default = IMPLICIT_PASSPHRASE];
103    
104      // The keystore decryptor token blob. Encrypted with the keystore key, and
105      // contains the encryption key used to decrypt |encryption_keybag|.
106      // Only set if passphrase_state == KEYSTORE_PASSPHRASE.
107      optional EncryptedData keystore_decryptor_token = 31;
108    
109      // The time (in epoch milliseconds) at which the keystore migration was
110      // performed.
111      optional int64 keystore_migration_time = 32;
112    
113      // The time (in epoch milliseconds) at which a custom passphrase was set.
114      // Note: this field may not be set if the custom passphrase was applied before
115      // this field was introduced.
116      optional int64 custom_passphrase_time = 33;
117    
118      // Boolean corresponding to whether custom spelling dictionary should be
119      // encrypted.
120      optional bool encrypt_dictionary = 34;
121    
122      // Boolean corresponding to Whether to encrypt favicons data or not.
123      optional bool encrypt_favicon_images = 35;
124      optional bool encrypt_favicon_tracking = 36;
125    
126      // Boolean corresponding to whether articles should be encrypted.
127      optional bool encrypt_articles = 37;
128    
129      // Boolean corresponding to whether app list items should be encrypted.
130      optional bool encrypt_app_list = 38;
131    }

 

Note that the first item in the NigoriSpecifics is the encrypted NigoriKeyBag.  The NigoriKeyBag is a set of NigoriKeys, both defined above.  The NigoriKeys are used to decrypt things like the encrypted BookmarkSpecifics.

So the first thing to do is to decrypt the encrypted NigoriKeyBag.  I prompt the user for the custom passphrase:

[Screenshot: the custom passphrase prompt]

Once I have the passphrase, I decrypt the encrypted_keybag’s bytes using the passphrase:

Decrypting data
    internal static byte[] Decrypt(string passwordText, string encryptedText) {
      try {
        var salt = Encoding.UTF8.GetBytes("saltsalt");
        var rb = new Rfc2898DeriveBytes(HostUsername, salt, 1001);
        var userSalt = rb.GetBytes(16);

        var password = Encoding.UTF8.GetBytes(passwordText);
        rb = new Rfc2898DeriveBytes(password, userSalt, 1002);
        var userKey = rb.GetBytes(16);

        password = Encoding.UTF8.GetBytes(passwordText);
        rb = new Rfc2898DeriveBytes(password, userSalt, 1003);
        var encryptionKey = rb.GetBytes(16);

        rb = new Rfc2898DeriveBytes(password, userSalt, 1004);
        var macKey = rb.GetBytes(16);

        return Decrypt(encryptionKey, macKey, encryptedText);
      } catch (Exception) {
        return null;
      }
    }
    internal static byte[] Decrypt(byte[] encryptionKey, byte[] macKey, string encryptedText) {
      var input = Convert.FromBase64String(encryptedText);

      const int kIvSize = 16;
      const int kHashSize = 32;

      if (input.Length < kIvSize*2 + kHashSize) return null;

      var iv = new byte[kIvSize];
      Array.Copy(input, iv, iv.Length);
      var ciphertext = new byte[input.Length - (kIvSize + kHashSize)];
      Array.Copy(input, kIvSize, ciphertext, 0, ciphertext.Length);
      var hash = new byte[kHashSize];
      Array.Copy(input, input.Length - kHashSize, hash, 0, kHashSize);

      var hmac = new HMACSHA256(macKey);
      var calculatedHash = hmac.ComputeHash(ciphertext);

      if (!Enumerable.SequenceEqual(calculatedHash, hash)) {
        return null;
      }

      var aes = new AesManaged {IV = iv, Key = encryptionKey};
      var cs = new CryptoStream(new MemoryStream(ciphertext), aes.CreateDecryptor(), CryptoStreamMode.Read);
      var decryptedMemoryStream = new MemoryStream();
      var buf = new byte[256];
      while (cs.CanRead) {
        var count = cs.Read(buf, 0, buf.Length);
        if (count == 0) {
          break;
        }
        decryptedMemoryStream.Write(buf, 0, count);
      }
      return decryptedMemoryStream.ToArray();
    }

I then parse the decrypted bytes into an actual keybag:

        var bag = NigoriKeyBag.ParseFrom(decrypted);
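
The keys in the bag then get stored so they can be looked up by name later.  A sketch, assuming the KeyList property that the code generator produces for the repeated key field, and the KeyName, EncryptionKey and MacKey properties on NigoriModel that the lookup code below implies:

foreach (var key in bag.KeyList) {
  // Encrypted items reference, by name, the key that was used to encrypt them
  var model = new NigoriModel {
    KeyName = key.Name,
    EncryptionKey = key.EncryptionKey.ToByteArray(),
    MacKey = key.MacKey.ToByteArray()
  };
  nigoris[key.Name] = model;  // in-memory cache used by the lookup below
  db.InsertNigori(model);     // persisted for later syncs
}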

Each entry in the keybag is a NigoriKey, which can be used with the second Decrypt method above to decrypt encrypted EntitySpecifics entries:

var blob = encrypted.Blob;
var nigori = nigoris.ContainsKey(encrypted.KeyName)
                ? nigoris[encrypted.KeyName]
                : db.GetNigoriWithName(encrypted.KeyName);
if (nigori == null) {
  return null;
}
return Decryptor.Decrypt(nigori.EncryptionKey,
                          nigori.MacKey,
                          blob);
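
The decrypted bytes are themselves a serialized EntitySpecifics, so the last step is to parse them back into the generated class and hand them to the relevant processor.  A sketch, where DecryptSpecifics is a hypothetical wrapper around the lookup fragment above, and processor stands for the per-datatype processor instance:

var decryptedBytes = DecryptSpecifics(syncEntity.Specifics.Encrypted, db);
if (decryptedBytes != null) {
  // Re-parse the plaintext as an EntitySpecifics and dispatch it, for example to
  // the BookmarkProcessor shown below
  var decryptedSpecifics = EntitySpecifics.ParseFrom(decryptedBytes);
  processor.Process(syncEntity, decryptedSpecifics);
}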

Processing the synced entities

After that it is pretty much plain sailing.  Here is the processing of the Bookmarks sync entity:

Processing bookmarks
 internal class BookmarkProcessor : EntityProcessor {
    public override bool Process(SyncEntity syncEntity, EntitySpecifics specifics) {
      if (!syncEntity.HasSpecifics || !syncEntity.Specifics.HasBookmark) return false;

      var bm = specifics == null ? syncEntity.Specifics.Bookmark : specifics.Bookmark;
      D("Processing bookmark " + bm.Title);

      var model = Db.GetSyncEntityWithId<BookmarkModel>(syncEntity.IdString);
      var isNew = model == null;

      if (isNew) {
        model = new BookmarkModel();
      }

      if (bm.HasFavicon) {
        model.Favicon = bm.Favicon.ToByteArray();
      }

      if (bm.HasTitle) {
        model.BookmarkTitle = bm.Title;
      }

      if (bm.HasUrl) {
        model.BookmarkUrl = bm.Url;
      }


      FillSyncEntityModel(syncEntity, model);

      if (isNew) {
        Db.InsertSyncEntity(model);
      } else {
        Db.UpdateSyncEntity(model);
      }

      return true;

    }
  }

 

I process the decrypted sync entities and store them in a database, which I then use to drive the UI to let the user view bookmarks, recently browsed URLs, saved passwords, and open Chrome sessions on other machines:

[Screenshots: bookmarks, recently browsed URLs, passwords and open Chrome sessions in the app]

What’s next?

Chrync is read-only; you can’t, for example, update your bookmarks.  Also, when you tap on a bookmark it launches the built-in browser.

So obvious updates to the app would be to embed a browser within the app, pre-populate password fields, etc.

My biggest concern with investing too much more time in Chrync is that Google could easily pull the plug on the app by disallowing my use of the chrome sync scope in the OAuth 2.0 request.

Although I charged for the app initially, I don’t any more – it doesn’t seem ethical to charge for something that could disappear any day.

I also had grand dreams of bringing Chrome sync to iOS, and indeed got it working, reusing the sync engine using Xamarin, and with fantastic timing, was just looking to launch it when Google released Chrome for iOS …

So, I’ll continue to make minor updates, and if Google do decide to officially document and allow Chrome sync, maybe I’ll make a major update. 

Meanwhile people seem to like it.


24 Jul 2012

Porting a Windows Phone app to iOS

Yes, I know.  It’s not the most common direction.  Creating an app first on Windows Phone, and then porting it to iOS?

In my spare time I recently created and released a Windows Phone app that synchronizes your Google Chrome environment to Windows Phone, to access your Chrome bookmarks, passwords, recently viewed web pages, and (experimentally) open tabs from Windows Phone.

 

It does this by talking the Chrome sync protocol directly to Google’s servers just like Chrome itself does.

I created the app by downloading the Chromium source code, and then building and running Chrome on my PC (Chrome is written in C++), working out how it did the synchronization, and then I did the same thing from C# in my Windows Phone app.

I released the app and it was well received.

The next step in my master-plan was to release a similar app for the iPhone and iPad, by porting my app using MonoTouch to iOS.  I got to the point where it was working, and then, well … Google released Chrome for iOS … at which point the potential audience for my iOS product shrank to approximately zero.

Nevertheless I did get to port an app from Windows Phone to iOS using MonoTouch;  I thought I’d share my experience.

I’m going to:

  1. Explain how I set up my development environment to run Visual Studio on my Mac, with just a three-finger swipe to go between Visual Studio and the real-device debugger;
  2. Describe how I structured my project to share code between the two apps;
  3. Explain how I implemented different database access code, hidden behind a common interface;
  4. Look at a significant hurdle I hit where my code ran fine in the iPhone Simulator, but crashed and burned on a real device, and what I did to resolve this;
  5. Reflect on the overall experience.

About MonoTouch

First a word about MonoTouch.  If you are like me, you hate the idea of a porting framework because you want to create an app that has a native look and feel … not some generic bland UI that looks the same on all platforms, and is thus horrible on all platforms.

Here is what you need to know about MonoTouch: it provides C# bindings to the native iOS frameworks.  It does not provide any UI compatibility layer to let you run Silverlight on iOS.  You still design your UI using NIB files, create outlets, ViewControllers etc.  You use MonoTouch to create a native app that is indistinguishable from an app coded in Objective-C.

So if you can’t port the UI, what is the point?  It turned out that most of the challenging code in my app was the backend stuff – authenticating, syncing, storing in the DB, etc.  The UI was pretty straightforward.  I wanted to port the backend code, but put an authentic iOS UI on it.
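
To make that concrete, here is the kind of UI code you end up writing with MonoTouch: plain UIKit, just expressed in C#.  This is a trivial standalone view controller for illustration, not code from Chrync itself.

using System.Drawing;       // RectangleF, in classic MonoTouch
using MonoTouch.UIKit;

public class BookmarksViewController : UIViewController {
  public override void ViewDidLoad() {
    base.ViewDidLoad();
    View.BackgroundColor = UIColor.White;

    // Standard UIKit controls, created and configured from C#
    var label = new UILabel(new RectangleF(20, 60, 280, 40)) {
      Text = "Bookmarks",
      Font = UIFont.BoldSystemFontOfSize(24f)
    };
    View.AddSubview(label);
  }
}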

Learning iOS and MonoTouch

A few years ago Red Gate Software acquired a product I created, and consequently I am a Friend of Red Gate.  One of the perks was a free year’s subscription to the online video course company Pluralsight.

Before doing anything with MonoTouch I watched the available Pluralsight courses on iOS and MonoTouch.  On most devices, such as the iPad, you can watch them at 1.5x or even 2x the normal speed.  I found these courses to be excellent, and I now pay out of my own pocket to subscribe.

The half-life of the information gleaned through watching these videos is very short in my brain, so I needed to get my hands dirty very quickly after watching the videos.

Setting up the development environment

Although I did not know much about MonoTouch development, I did know that I wanted to continue using Visual Studio, and more specifically the ReSharper development/refactoring tool from JetBrains: .NET development without ReSharper is unthinkable for me.

One other thing I knew was that I didn’t want to fork over US$200 for a MonoTouch license without being sure that what I wanted to do would work.  Fortunately you can download and use MonoTouch for free, but you can only deploy apps to the iOS Simulator – not to real devices.  This seemed good enough to me: I thought that if it worked in the Simulator, it was very likely to work on a real device.

Little did I know how naïve I was.

Windows on Mac

I already had a MacBook Air running OS X, and Parallels hosting Windows 7.  I also already had Visual Studio installed within Windows 7 and Resharper installed.

MonoTouch

I downloaded and installed MonoTouch on the OS X environment, and made sure I could build and run a simple project.  Then I followed the instructions in this email to set up my Windows and Visual Studio environment to be able to edit MonoTouch projects.

Visual Studio and MonoTouch together

I’ve read a lot about people using Dropbox to automatically synchronize their PC-based Visual Studio projects with their Mac-based MonoTouch environment.  Instead, what I did was simply to open the MonoTouch solution from within Visual Studio running in the Windows virtual machine on the same Mac, using the ability to open the host OS’s files within the VM.

I set up the Mac’s file system to be available inside Windows:

[Screenshot: the Mac's file system shared into the Windows virtual machine]

I created a new solution using MonoDevelop on OS X:

[Screenshot: creating a new solution in MonoDevelop]

Then I opened that solution using Visual Studio running in the Parallels virtual machine, via the Mac’s drive mounted in the Windows virtual machine (notice the drive on the left-hand side):

[Screenshot: the solution open in Visual Studio, with the Mac's drive visible on the left]

I ended up being able to edit and build using Visual Studio, then use a four-finger swipe on the trackpad to switch back to MonoDevelop to run and debug the app.  Here is a quick video of the complete edit/debug/run cycle, using MonoDevelop to run the app in the iPhone Simulator and Visual Studio to develop:

 

Using Visual Studio to develop, and then MonoTouch to deploy and debug was almost totally painless.  I still needed to learn MonoTouch's debugger shortcuts, but that was the only pain-point.

Porting the code

The Windows Phone project structure

The original version of the Windows Phone project was not designed with the idea of porting it to iOS; however, I did use the standard MVVM pattern, which meant that my sync logic and database code were totally decoupled from my UI code.

I used two different Visual Studio solutions; however, the iOS solution references the same source control folders as the Windows Phone solution for the shared classes.  These are the classes that are shared between the solutions:

[Screenshot: the shared classes in the project tree]

The Engine namespace contains the classes used to talk the Chrome sync protocol to Google’s servers.  The Models namespace contains the classes used to represent entities written to, and read from the database.  The proto folder contains protocol-buffer definitions and generated classes, and the ProtocolBuffers folder contains the engine used to talk the protocol buffers protocol.  All of these classes are shared between the Windows Phone and iOS versions of the app.

Almost all my non-UI code could be reused between Windows Phone 7 and iOS; however, there were a couple of areas where I needed to rewrite code, namely settings storage and database code, which I hide behind interfaces (see IDatabase, IDatabaseFactory, and ISyncOptions in the picture above).
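
For reference, this is roughly the shape of the IDatabase interface, reconstructed from the calls that appear in the code in these two posts; SyncStateModel and SyncProgressModel are placeholder names for the corresponding model classes, and the real interface has a few more members.

using System;

internal interface IDatabase : IDisposable {
  void SubmitChanges();
  bool AnySyncProgress();

  // Per-datatype sync bookkeeping
  SyncStateModel GetSyncState();          // store birthday, chip bag, ...
  SyncProgressModel GetSyncProgress(int dataTypeId);

  // Encryption keys
  NigoriModel GetNigoriWithName(string keyName);
  void InsertNigori(NigoriModel nigori);

  // Synced entities (bookmarks, typed URLs, passwords, sessions)
  T GetSyncEntityWithId<T>(string idString) where T : class;
  void InsertSyncEntity(object model);    // actual signatures in the app may differ
  void UpdateSyncEntity(object model);
}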

Database access across platforms

Although I love using LINQ, Microsoft’s recent announcement that Windows Phone 8 will support SQLite was very welcome, since if I’d been able to use SQLite on Windows Phone my database code could have remained unchanged.  For this app, I ended up rewriting the database read/write code, with different implementations of an IDatabase interface used by the sync engine.

I use LINQ to SQL as my database implementation on Windows Phone, and I wanted to reuse the same database entities on iOS, even if they were stored using a different technology, namely SQLite.  I ended up using #IFs to allow me to use the same classes on both iOS and Windows Phone.

I’m not going to go into all the details of what I did, but I thought I’d give you a flavour by looking at the class used to represent encryption keys exchanged during synchronization.  I’ll show an extract of the class itself, and then the two different IDatabase implementations which read/write instances of these classes.

Shared database entity class

This is an example of the NigoriModel class, used to represent encryption keys. Note the #IFs used for Windows Phone specific classes. You’ll also see that I have not commented out the use of the Table and Column attributes – I simply defined my own TableAttribute class, #IFd to be only visible when building for iOS.
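
The stub attributes themselves are trivial: on iOS they exist only so that the shared entity classes compile, and the SQLite persistence code ignores them.  Something along these lines (a sketch; the exact namespace and property list just need to match whatever the entity classes use):

#if !WINDOWS_PHONE
using System;

namespace Chromarks.Models {
    // No-op stand-ins for the LINQ to SQL mapping attributes, so the shared
    // entity classes compile unchanged when building for iOS.
    [AttributeUsage(AttributeTargets.Class)]
    public class TableAttribute : Attribute { }

    public enum AutoSync { Default, Always, Never, OnInsert, OnUpdate }

    [AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
    public class ColumnAttribute : Attribute {
        public bool IsPrimaryKey { get; set; }
        public bool IsDbGenerated { get; set; }
        public bool IsVersion { get; set; }
        public bool CanBeNull { get; set; }
        public string DbType { get; set; }
        public AutoSync AutoSync { get; set; }
    }
}
#endif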

I used the Windows Phone ProtectedData class to encrypt sensitive information prior to committing it to the database.

using System;
using System.ComponentModel;
#if WINDOWS_PHONE
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Security.Cryptography;
#endif
namespace Chromarks.Models {
    [Table]
    public class NigoriModel : INotifyPropertyChanged
#if WINDOWS_PHONE
        , INotifyPropertyChanging
#endif
    {
        private int _id;
 
        [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "INT NOT NULL Identity", 
            CanBeNull = false, AutoSync = AutoSync.OnInsert)]
        public int Id
        {
            get
            {
                return _id;
            }
            set
            {
                if (_id != value)
                {
                    NotifyPropertyChanging("Id");
                    _id = value;
                    NotifyPropertyChanged("Id");
                }
            }
        }
 
        private byte[] _userKeyEncrypted;
 
        [Column]
        public byte[] UserKeyEncrypted
        {
            get { return _userKeyEncrypted; }
            set
            {
                if (_userKeyEncrypted != value)
                {
                    NotifyPropertyChanging("UserKeyEncrypted");
                    _userKeyEncrypted = value;
                    NotifyPropertyChanged("UserKeyEncrypted");
                }
            }
        }
 
        private static byte[] Encrypt(byte[]  plain) {
            byte[] bytes = null;
#if WINDOWS_PHONE
            bytes = ProtectedData.Protect(plain, null);
#else
            bytes = plain; // TODO: implement for iOS
#endif
            return bytes;
        }
 
        public byte[] UserKey
        {
            get { return Decrypt(UserKeyEncrypted); }
 
            set {
                UserKeyEncrypted = Encrypt(value);
            }
        }
 
                ...
        
        // Version column aids update performance.
#if WINDOWS_PHONE
        [Column(IsVersion = true)]
        private Binary _sqlVersion;
#endif
 
        #region INotifyPropertyChanged Members
 
        public event PropertyChangedEventHandler PropertyChanged;
 
        // Used to notify the page that a data context property changed
        protected void NotifyPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }
 
        #endregion
 
        #region INotifyPropertyChanging Members
 
#if WINDOWS_PHONE
        public event PropertyChangingEventHandler PropertyChanging;
#endif
        // Used to notify the data context that a data context property is about to change
        protected void NotifyPropertyChanging(string propertyName)
        {
#if WINDOWS_PHONE
            if (PropertyChanging != null)
            {
                PropertyChanging(this, new PropertyChangingEventArgs(propertyName));
            }
#endif
        }
 
        #endregion
    }
}

 

In this way I was able to use the same classes in my synchronization engine, whether running on iOS or Windows Phone.  Since all database access was hidden behind the IDatabase interface, all I needed to do was provide the sync engine with different IDatabase implementations depending on the platform:
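
The interface they satisfy looks roughly like this – a sketch inferred from the methods shown below, rather than the complete interface, which has many more members:

using System;
using Chromarks.Models;
 
namespace Chromarks.Engine
{
    // Sketch of the database abstraction the sync engine codes against,
    // inferred from the two implementations that follow; the real interface is larger.
    public interface IDatabase : IDisposable
    {
        void SubmitChanges();
        bool AnySyncProgress();
        NigoriModel GetNigoriWithName(string keyName);
        void InsertNigori(NigoriModel nigori);
        // ... plus similar members for bookmarks, typed URLs, passwords, etc.
    }
}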

Windows Phone 7 IDatabase implementation (LINQ to SQL)
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using Chromarks.Engine;
using Chromarks.Models;
 
namespace Chromarks.ViewModels
{
 
    class DatabaseImpl : IDatabase {
        private const String Tag = "DatabaseImpl";
        private readonly ChromarksDataContext _dataContext;
        
        public DatabaseImpl(ChromarksDataContext dataContext)
        {
            _dataContext = dataContext;
        }
 
        public void Dispose()
        {
            _dataContext.Dispose();
        }
 
        public void SubmitChanges()
        {
            _dataContext.SubmitChanges();
        }
 
        public bool AnySyncProgress()
        {
            return _dataContext.SyncProgress.Any();
        }
 
        public NigoriModel GetNigoriWithName(string keyName)
        {
            try
            {
                return _dataContext.Nigoris.SingleOrDefault(n => n.KeyName == keyName);
            }
            catch (Exception ex)
            {
                Log.Error(Tag, "Error invoking GetNigoriWithName with " + keyName, ex);
                return null;
            }
        }
 
        public void InsertNigori(NigoriModel nigori)
        {
            _dataContext.Nigoris.InsertOnSubmit(nigori);
        }
                ...
iOS IDatabase implementation (SQLite)

I replicated the Windows Phone behaviour in the iOS implementation, using equivalent mechanisms from SQLite.

using System;
using System.Collections.Generic;
using System.Data;
using System.Diagnostics;
using System.IO;
using System.Text;
using Chromarks.Engine;
using Chromarks.Models;
using Mono.Data.Sqlite;
using sync_pb;
 
// ReSharper disable CheckNamespace
namespace Chromarks {
// ReSharper restore CheckNamespace
    internal class Database : IDatabase {
        private readonly SqliteConnection _connection;
 
        private SqliteTransaction _transaction;
        private bool _disposed;
 
        internal Database()
        {
            _connection = GetConnection();
            _connection.Open();
        }
        
        public void Dispose()
        {
            Debug.Assert(!_disposed);
            _disposed = true;
            if(_transaction != null) {
                _transaction.Rollback();
                _transaction = null;
            }
            _connection.Dispose();
        }
 
 
        public void SubmitChanges()
        {
            Debug.Assert(!_disposed);
            if (_transaction != null) {
                _transaction.Commit();
                _transaction = null;
            }
        }
 
        public bool AnySyncProgress()
        {
            Debug.Assert(!_disposed);
            using (var cmd = _connection.CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandText = "select COUNT(*) FROM SyncProgressModel;";
                var count = (long)cmd.ExecuteScalar();
                return count > 0;
            }
        }
        
        public NigoriModel GetNigoriWithName(string keyName)
        {
            Debug.Assert(!_disposed);
            using (var cmd = _connection.CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandText =
@"SELECT [UserKeyEncrypted], [MacKeyEncrypted], [EncryptionKeyEncrypted] FROM [NigoriModel] WHERE " +
                    "[KeyName] = @KeyName";
                Log.Debug("Database", cmd.CommandText);
                AddParameter(cmd, "@KeyName", keyName);
                using (var reader = cmd.ExecuteReader())
                {
                    if (!reader.Read())
                    {
                        return null;
                    }
                    var result = new NigoriModel
                    {
                        UserKeyEncrypted = (byte[])reader["UserKeyEncrypted"],
                        MacKeyEncrypted = (byte[])reader["MacKeyEncrypted"],
                        EncryptionKeyEncrypted = (byte[])reader["EncryptionKeyEncrypted"],
                    };
                    return result;
                }
            }
        }
 
        public void InsertNigori(NigoriModel nigori)
        {
            Debug.Assert(!_disposed);
            using (var cmd = _connection.CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandText =
@"INSERT INTO [NigoriModel] ([KeyName],[UserKeyEncrypted],[MacKeyEncrypted],[EncryptionKeyEncrypted])" +
                   "VALUES (@KeyName, @UserKeyEncrypted, @MacKeyEncrypted, @EncryptionKeyEncrypted);";
                Log.Debug("Database", cmd.CommandText);
                AddParameter(cmd, "@KeyName", nigori.KeyName);
                AddParameter(cmd, "@UserKeyEncrypted", nigori.UserKeyEncrypted);
                AddParameter(cmd, "@MacKeyEncrypted", nigori.MacKeyEncrypted);
                AddParameter(cmd, "@EncryptionKeyEncrypted", nigori.EncryptionKeyEncrypted);
                cmd.ExecuteNonQuery();
            }
        }
 
                ...
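
The GetConnection and AddParameter helpers are among the elided members above.  A hypothetical sketch of what they might look like inside the Database class – assuming the SQLite file lives in the app’s Documents folder – is:

        private static SqliteConnection GetConnection()
        {
            // Hypothetical: keep the SQLite database file in the app's Documents folder.
            var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
            var path = Path.Combine(documents, "chromarks.db");
            return new SqliteConnection("Data Source=" + path);
        }
 
        private static void AddParameter(SqliteCommand cmd, string name, object value)
        {
            // Named parameters keep values out of the SQL text itself.
            var parameter = cmd.CreateParameter();
            parameter.ParameterName = name;
            parameter.Value = value ?? DBNull.Value;
            cmd.Parameters.Add(parameter);
        }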

Running the app

The iOS Simulator

I was amazed and delighted to find that all the networking code just compiled and ran using MonoTouch.

Once I got the database implementation working on iOS, I ran a simple iOS app using my Chrome sync email address, password and application-specific password (I have that option turned on for my account).  It worked – I was able to communicate with Google’s servers and dump out my synchronized bookmarks.

A real device

So far this had all been done in the iOS Simulator, but I was on a high – I took out my credit card and bought a license to use MonoTouch on physical devices instead of just virtual ones.  I also paid to become a registered Apple iOS developer.  There are very thorough instructions on how to set up your real-world iPhone as a developer device.

I rushed through the setup instructions, deployed my app to the iPhone, ran it and … it crashed.  The same code that had run fine in the Simulator failed on the real device.

Generic Functions – the problem

It turns out I should have read those warnings and release notes, rather than just diving in.  One of the restrictions MonoTouch faces is that it cannot dynamically generate code at runtime, and one of the C# constructs that can require this is generic functions.  And guess what – the protocol buffers code made liberal use of generic functions, such as this:

        /// <summary>
        /// Reads an enum field value from the stream. If the enum is valid for type T,
        /// then the ref value is set and it returns true.  Otherwise the unknown output
        /// value is set and this method returns false.
        /// </summary>   
        [CLSCompliant(false)]
        public bool ReadEnum<T>(ref T value, out object unknown)
            where T : struct, IComparable, IFormattable, IConvertible
        {
            int number = (int)ReadRawVarint32();
            if (Enum.IsDefined(typeof(T), number))
            {
                unknown = null;
                value = (T)(object)number;
                return true;
            }
            unknown = number;
            return false;
        }
Generic Functions – the solution

I wrote new, non-generic versions of these functions:

        public bool ReadEnumNonGeneric(Func<object, bool> isEnum, Action<int> setEnum, 
                                       Action<object> setUnknown)
        {
            int number = (int)ReadRawVarint32();
            if (isEnum(number)) {
                setUnknown(null);
                setEnum(number);
                return true;
            }
            setUnknown(number);
            return false;
        }

… and changed the calling code to invoke my non-generic functions:

// if(input.ReadEnum(ref result.deviceType_, out unknown)) {
if (input.ReadEnumNonGeneric(n => Enum.IsDefined(typeof(global::sync_pb.SessionHeader.Types.DeviceType), n), 
                             n => result.deviceType_ = (global::sync_pb.SessionHeader.Types.DeviceType)n, 
                             u => unknown = u))

Now my code not only compiled, but it also ran!

MonoTouch compiler crash – not a problem

One issue that I never got to the bottom of is that the MonoTouch compiler crashed when compiling my code.  My workaround was to always compile under Windows, and then let MonoTouch transform the compiled code into an iOS app and run it.  I suspect that leaving the original generic functions in place is what caused the MonoTouch compiler to crash.

Conclusion

Although Google cold-heartedly destroyed my ambitions to release a Chrome-syncing app for iOS when they released Chrome for iOS themselves, I still got a lot out of the experience of porting my app from Windows Phone, and I’m ready now for the next one.

Here are some final thoughts.

  • Being able to program in C#, and having a lot of the .NET framework library available is fantastic if you are an experienced .NET programmer
  • You’ll still need to invest significant effort into familiarizing yourself with the iOS programming frameworks, especially the UI to provide a truly native experience
  • There are restrictions to the magic that MonoTouch can do.  When your app works on the Simulator but not on a real device, don’t despair – read the FM and rewrite your code to work around the restrictions.  Better yet, read about the restrictions before you code.
  • It’s worth investing in getting your development environment set up properly – it was a joy to be able to edit, refactor and build in Visual Studio, and then just swipe Visual Studio out of the way to run and debug the app, all on the same MacBook Air
Filed under: Chrync, iOS, WP7, Xamarin
5Jul/12

A one-star marketplace review of my app: “Works fine. But …”

I don’t mind bad marketplace reviews of my Windows Phone Chrome sync app when they’re about bugs, but I’m not sure that I can do much about this one!

[Screenshot of the one-star review]

Filed under: Chrync
3Jul/12

Chrync 1.1 has been released

I’m delighted to announce that Chrync 1.1 was recently released.  Chrync syncs your Chrome environment to your Windows Phone.

Version 1.1 adds:

  • Support for syncing Encrypted Bookmarks, etc.
  • Optional support for syncing Passwords
  • The choice of what to sync when logging in
  • Pinning of bookmarks and folders
  • Sharing of bookmarks
  • Fixes for browser tabs - they should work properly now
  • Searching of bookmarks, recent URLs and passwords

Here is a quick (40 second) demo of the new release:

If you like it, do leave a review in the marketplace, and if you have suggestions for making it better, please vote at UserVoice (no registration required).

All About Windows Phone says “…this is a great example of engagement with users and giving them what they want.” and wpcentral says “If you use Chrome and need an easy, straight forward way to sync your bookmarks to your Windows Phone...Chrync is definitely worth a try.”

Filed under: Chrync No Comments