If you've been watching Best Buy closely, you already know that they are constantly coming up with new and creative ways to use App Engine to engage with their customers. In this guest blog post, Luke Francl, BBYOpen Developer, was kind enough to share Best Buy's latest App Engine project with us.
As part of Best Buy's Connected Store initiative, we have placed QR codes on our product information Fact Tags, in addition to the standard pricing and product descriptions already printed there. When a customer uses the Best Buy app, or any other QR code scanner, they are shown the product details for the product they have scanned, powered by the BBYOpen API or the m.bestbuy.com platform.
To track what stores and products are most popular, QR codes are also encoded with the store number. My project at Best Buy has been to analyze these scans and make new landing pages for QR codes easier to create.
Since we have the geo-location of the stores and product details from our API, it is a natural fit to display these scans on a map. We implemented an initial version of this idea, which used polling to visualize recent scans. To take this a step further, we thought it would be exciting to use the recently launched App Engine Channel API to update our map in real time.
Our biggest challenge was pushing the updates to multiple browsers, since we'd most certainly have more than one user at a time looking at our map. The Channel API does not currently support broadcasting a single update to many connected clients. In order to broadcast updates to multiple users, our solution was to keep a list of client IDs and send an update message to each of them.
To implement this, we decided to store the list of active channels in memcache. This solution is not ideal as there are race conditions when we modify the list of client IDs. However, it works well for our demo.
Here’s how we got it working. The code has been slightly simplified for clarity, including removing the rate limiting that we do. To play with a working demo, check out the channel-map-demo project from GitHub.
As customers in our stores scan QR codes, those scans are recorded by enqueuing a deferred. We defer all writes so we can return a response to the client as quickly as possible.
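For illustration, here is a minimal sketch of what that deferral can look like. The handler, the request parameters, and the create_scan helper are assumptions made for this example, not the actual BBYOpen code; see the full source in the demo project for the real implementation.

from datetime import datetime

from google.appengine.ext import deferred, webapp


def save_scan(store_number, sku, timestamp):
    # Runs later on the task queue: look up the store and product,
    # write the scan entity, then push it to every open channel.
    scan = create_scan(store_number, sku, timestamp)  # hypothetical helper
    push_to_channels(scan)


class ScanHandler(webapp.RequestHandler):
    def get(self):
        # Enqueue the work and return a response to the scanner right away.
        deferred.defer(save_scan,
                       self.request.get('store'),
                       self.request.get('sku'),
                       datetime.now())
        self.response.out.write('OK')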
In the deferred, we call a function to push the message to all the active channels (see full source).
def push_to_channels(scan):
    content = '<div class="infowindowcontent">(...) </div>' % {
        'product_name': scan.product.name,
        'timestamp': scan.timestamp.strftime('%I:%M %p'),
        'store_name': scan.store.name,
        'state': scan.store.state,
        'image': scan.product.image}

    message = {'lat': scan.store.lat,
               'lon': scan.store.lon,
               'content': content}

    channels = simplejson.loads(memcache.get('channels') or '{}')
    for channel_id in channels.iterkeys():
        encoded_message = simplejson.dumps(message)
        channel.send_message(channel_id, encoded_message)
The message is a JSON data structure containing the latitude and longitude of the store where the scan occurred, plus a snippet of HTML to display in an InfoWindow on the map. The product information (such as name and thumbnail image) comes from our BBYOpen Products API.
Then, when a user opens up the site and requests the map page, we create a channel, add its client ID to the serialized channels dictionary stored in memcache, and pass the token back to the client (see full source).
channel_id = uuid.uuid4().hex
token = channel.create_channel(channel_id)

channels = simplejson.loads(memcache.get('channels') or '{}')
channels[channel_id] = str(datetime.now())
memcache.set('channels', simplejson.dumps(channels))
On the map page, JavaScript creates the Google Map and uses the token to open a channel. When the onMessage callback is called by the Channel API, a new InfoWindow is displayed on the map using the HTML content, latitude, and longitude in the message (see full source).
function onMessage(message) {
  var scan = JSON.parse(message.data);
  var infoWindow = new google.maps.InfoWindow({
    content: scan.content,
    disableAutoPan: true,
    position: new google.maps.LatLng(scan.lat, scan.lon)
  });
  infoWindow.open(map);
  setTimeout(function() { infoWindow.close(); }, 10000);
}
Finally, since channels can only be open for two hours, we have a cron job that runs once an hour to remove old channels. Before deleting the client ID, a message is sent on the channel which triggers code in the JavaScript onMessage function to reload the page, thus giving it a new channel and client ID (see full source).
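A minimal sketch of what such a cleanup handler could look like is below, assuming the same memcache-backed channels dictionary shown above. The URL, the cutoff, the imports, and the 'reload' message format are illustrative choices for this sketch, not the production code.

# cron.yaml entry (illustrative):
#   cron:
#   - description: clean up old channels
#     url: /tasks/cleanup_channels
#     schedule: every 1 hours

from datetime import datetime, timedelta

from django.utils import simplejson
from google.appengine.api import channel, memcache
from google.appengine.ext import webapp


class CleanupChannelsHandler(webapp.RequestHandler):
    """Invoked hourly by cron to retire channels before the two-hour limit."""

    def get(self):
        channels = simplejson.loads(memcache.get('channels') or '{}')
        cutoff = datetime.now() - timedelta(hours=1)
        active = {}
        for channel_id, created in channels.iteritems():
            created_dt = datetime.strptime(created.split('.')[0],
                                           '%Y-%m-%d %H:%M:%S')
            if created_dt < cutoff:
                # Tell the client to reload, so it comes back with a
                # fresh channel and client ID, then drop this one.
                channel.send_message(channel_id,
                                     simplejson.dumps({'reload': True}))
            else:
                active[channel_id] = created
        memcache.set('channels', simplejson.dumps(active))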
You can see the end result on our map, watch a video about the BBYScan project, or check out the sample channel-map-demo project and create your own Channel API based application.
It’s only February and we’re already at our second release for the year! Today’s SDK release, 1.4.2, focuses on improving and updating a few existing App Engine APIs.
Improved XMPP API to help applications better interact with users. Notifications are sent when users sign in and out and when their status changes, and the application can now set presence details to be returned to the user. Subscription and Presence notifications are enabled as inbound services in the application configuration.
Task Queue performance and Task Queue API improvements. First, we’ve increased the maximum rate at which tasks can be processed to 100 tasks/second. Applications can also specify the maximum number of concurrent requests allowed per queue in their queue’s configuration file. This can help you more easily manage how many resources your task queue is consuming. We’ve also added an API that allows you to programmatically delete tasks, instead of managing this manually from the Admin Console.
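As a rough illustration of these Task Queue changes, a queue definition and a programmatic delete might look something like the following; the queue name and task name are made up for this sketch, not taken from a real app.

# queue.yaml (illustrative):
#   queue:
#   - name: scan-processing
#     rate: 100/s
#     max_concurrent_requests: 20

from google.appengine.api import taskqueue

# Delete a named task from a queue programmatically (new in 1.4.2),
# instead of removing it by hand in the Admin Console.
queue = taskqueue.Queue('scan-processing')
queue.delete_tasks(taskqueue.Task(name='process-scan-42'))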
As always, there are more features and issue fixes such as support for JAX-WS complete with a new article on how to build SOAP enabled App Engine apps, as well as support for Django 1.2, so be sure to read the release notes for Java and Python. We’ve also updated the App Engine Roadmap with a few new projects so take a look. And if you have any feedback, please visit the App Engine Groups.
Posted by the App Engine Team
Today we launched Google Art Project in collaboration with 17 of the world’s most renowned museums. Google Art Project is built on top of App Engine and lets you take virtual tours of famous museums using internal Street View technology, view high resolution images of famous art work, and create personal virtual artwork collections.
When Art Project started development several months ago, the team built the application using Java and the Master/Slave Datastore. However, as their launch date approached, we released the new High Replication Datastore configuration. With a scheduled maintenance period so soon after the site’s launch, they decided to switch over to the High Replication Datastore.
Before switching, they ran a load test to set a performance baseline for comparison after the application’s data was migrated. Now that the application has launched, we wanted to share the results of the test with you as an example of what to expect after a switch to the High Replication Datastore. Below are the mean numbers for latency of different parts of the site.
Here’s a description of what each page does behind the scenes:
Homepage: This is the landing page that just serves a static webpage for site navigation. Since this page does not pull information from the datastore, the latency is stable.
Collections: Art Project lets users create individual museum collections. These load tests specifically targeted adding and deleting paintings from a user’s personal collection, as well as rendering those collections. We notice the slightly increased latency from saving and deleting entities in the datastore.
Level Maps: These pages simply performed get() calls on the datastore using entity keys. Latency on these pages is consistent across instances.
Info Spots: This handler performs the most data-intensive calculations of all of the handlers. It calculates all line-of-sight interest points for a user’s map position in a museum gallery room and saves the points of interest to the datastore for that location. The good news is that this calculation doesn’t have to happen for every user: once this data has been calculated for a given spot, it can be reused for other visitors to the site.
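As an aside, that compute-once-and-reuse pattern can be sketched roughly as follows (in Python for brevity, although the Art Project application itself is written in Java). The model, the key scheme, and the compute_line_of_sight function are placeholders, not the actual Art Project code.

from google.appengine.ext import db


class InfoSpots(db.Model):
    """Cached points of interest for one map position, keyed by position id."""
    points_json = db.TextProperty()


def get_info_spots(position_id):
    # Reuse the stored result if another visitor already triggered the calculation.
    cached = InfoSpots.get_by_key_name(position_id)
    if cached is not None:
        return cached.points_json
    # Otherwise compute the line-of-sight points once and persist them.
    points_json = compute_line_of_sight(position_id)  # placeholder for the real work
    InfoSpots(key_name=position_id, points_json=points_json).put()
    return points_json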
As you can see, while there was some increased latency when switching to the High Replication datastore, the site latency is still very low. And the migration required no major code changes and no modification to the datastore structure between the two load tests.
For more information about the High Replication Datastore, see the Datastore documentation. The next scheduled maintenance period for the Master/Slave Datastore is February 7th, 2011 from 5pm - 6pm PST. Because it now runs on the High Replication Datastore, Google Art Project will not need to be read-only during this period, and neither will your application if it uses the High Replication Datastore.