Today’s guest post is from Daniel Peter, Senior Programmer Analyst at Safari Books Online. In this post, Daniel describes how his company uses Google BigQuery to power their dashboards and analytics.



Safari Books Online is a subscription service for individuals and organizations to access a growing library of over 30,000 technology & business books and videos. Our customers browse and search the library from web browsers and mobile devices, generating powerful usage data which we can use to improve our service and increase profitability. We wanted to quickly and easily build dashboards, improve the effectiveness of our sales teams and enable ad-hoc queries to answer specific business questions. With billions of records, we found it challenging to get the answers to our questions fast enough with our existing MySQL databases.



Looking for alternative solutions to build our dashboards and enable interactive ad-hoc querying, we played with several technologies, including Hadoop. In the end, we decided to use Google BigQuery.



Here’s how we pipe data into BigQuery:







Our data starts in our CDN and server logs, gets packaged up into compressed files, and runs through our ETL server before finishing in BigQuery.



Here’s one of the dashboards we built using the data:







You can see that with the help of BigQuery, we can easily categorize our books. This dashboard shows popular books by desktop and mobile, and with BigQuery, we are able to run quick queries to dive into other usage patterns as well.



BigQuery has been very valuable for our company, and we’re just scratching the surface of what is possible.



Check out the article for more details on how we manage our import jobs, transform our data, build our dashboards, detect abuse and improve our sales team's effectiveness.



Contributed by Daniel Peter, Senior Programmer Analyst, Safari Books Online



-Posted by Ryan Boyd, Developer Advocate

Today’s post comes from Kevin Whinnery, Developer Evangelist at Twilio. In this post, Kevin describes how to build a multi-channel chat application using Google App Engine and Twilio. You can follow Kevin on Twitter at @kevinwhinnery or on Google+.









Google App Engine enables developers to focus on their application’s logic by providing a scalable infrastructure and high-level APIs for persistence, file management, and other common web app needs. XMPP and Channels are among these APIs, making it ridiculously easy to write awesome real-time communications apps in the browser.



Today, we’re going to break down an example application (view it live, source code) that integrates these two App Engine services (plus SMS messaging from Twilio) in a group chat application that connects users via SMS, XMPP, and browser-based chat clients.



We won’t go through every line of code, but at a high level, this application is about receiving inbound messages and sending outbound messages. Let’s see how we do this via SMS, XMPP, and Channels.




Twilio SMS


Sending SMS text messages with the Twilio API requires signing up for a Twilio account. Once you’ve signed up for an account, you can use your account SID and auth token to make authenticated requests against the Twilio REST API. You could just use App Engine’s built-in URL fetch service to interact with the Twilio API, but our official helper library for Java makes authenticating requests and serializing data much easier, providing a POJO interface to Twilio resources and functionality. We’ll be using the Twilio helper in this example. If you’re looking for App Engine specific reference examples, our friends at Google included this reference documentation in their doc site.



In our chat application, all outbound communication and message dispatching is handled by the MultichannelChatManager class. In this application, we add subscribers to the chat room to an in-memory set. When it’s time to send out a message, we iterate over the members of this set and send out messages to all subscribers. We send out messages to SMS subscribers using the Twilio helper on line #56:

TwilioRestClient client = new TwilioRestClient("ACCOUNT_SID", "AUTH_TOKEN");

// Build the parameters for the outbound SMS
Map<String, String> params = new HashMap<String, String>();
params.put("Body", messageBody);
params.put("To", sub);
params.put("From", "+16122948105");

SmsFactory messageFactory = client.getAccount().getSmsFactory();

try {
    Sms message = messageFactory.create(params);
    System.out.println(message.getSid());
} catch (TwilioRestException e) {
    e.printStackTrace();
}

To receive inbound communication, you will need to purchase a Twilio phone number or use the one given to you when you signed up for a Twilio account. You can configure this phone number to send an HTTP POST to a URL that you choose when an SMS message is received (this callback pattern is called a webhook). In this sample application, we have a Java servlet with a web.xml file configured to accept inbound SMS. In your Twilio number configuration, you would enter https://yourappname.appspot.com/sms, as below:







In the actual servlet, we handle inbound SMS messages first by looking for a “STOP” command, which will indicate that this user no longer wants to receive text messages from the app. Then, we confirm that the user is subscribed (by looking for their telephone number). Finally, we send out a message using our MultichannelChatManager class.



When Twilio sends your app the details of an SMS message with an HTTP request, it expects your app to respond with an XML format called TwiML. TwiML is a simple set of fewer than 20 XML tags that tells Twilio how to respond to inbound communication. The output of our SMS servlet is an XML (TwiML) document, which will send an SMS back to a user if they unsubscribe:

public class TwilioSmsServlet extends HttpServlet {
    // Handle incoming SMS
    public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        try {
            TwiMLResponse twiml = new TwiMLResponse();

            // Parse the body, looking for a command
            String smsBody = request.getParameter("Body");
            String smsFrom = request.getParameter("From");

            // Unsubscribe, if requested
            if (smsBody.startsWith("STOP")) {
                MultichannelChatManager.removeSub(smsFrom);
                com.twilio.sdk.verbs.Sms bye = new com.twilio.sdk.verbs.Sms("You have been unsubscribed. Thank You!");
                twiml.append(bye);
            } else {
                // If they aren't subscribed, subscribe them
                if (!MultichannelChatManager.isSubscribed(smsFrom)) {
                    MultichannelChatManager.addSub(smsFrom);
                }
                MultichannelChatManager.sendMessage(smsBody, "sms");
            }

            response.setContentType("application/xml");
            response.getWriter().print(twiml.toXML());

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
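For the unsubscribe case, the serialized TwiML this servlet returns looks roughly like the following (a sketch of the expected output, using TwiML's `<Response>` and `<Sms>` verbs):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Sms>You have been unsubscribed. Thank You!</Sms>
</Response>
```

Twilio reads this document and sends the enclosed message back to the user as an SMS.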


App Engine XMPP Integration


App Engine provides a simple API for sending and receiving XMPP chat messages. Our chat application can receive new messages over XMPP and send them back out to all subscribed clients, similar to how our app behaves for SMS.



App Engine applications have an XMPP username associated with them by default, which takes the form of “appname@appspot.com”. The former part of the username is your unique App Engine app ID and the latter is the appspot domain that your app runs on. To send a message via XMPP to our chat app we need to send a chat message to “twiliosandbox@appspot.com” from a chat client that supports XMPP. If you used the desktop chat client Adium for Google Talk, the interaction might look something like this:







For our application to receive inbound XMPP messages, we need to configure an inbound message handler servlet in our web.xml configuration file. This webhook callback design is the same type of event mechanism used by Twilio to deliver SMS messages to our application. In this servlet, we receive an inbound POST request with information about an inbound chat message:

public class XMPPReceiverServlet extends HttpServlet {
    // Handle incoming XMPP chat messages
    public void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException {
        XMPPService xmpp = XMPPServiceFactory.getXMPPService();
        Message msg = xmpp.parseMessage(req);

        // The "JID" is the unique ID of this chat client, which we use to subscribe
        JID fromJid = msg.getFromJid();
        String body = msg.getBody();

        // Unsubscribe, if requested
        if (body.startsWith("STOP")) {
            MultichannelChatManager.removeSub(fromJid.getId());
        } else {
            // If they aren't subscribed, subscribe them
            if (!MultichannelChatManager.isSubscribed(fromJid.getId())) {
                MultichannelChatManager.addSub(fromJid.getId());
            }
            MultichannelChatManager.sendMessage(body, "xmpp");
        }
    }
}

To send outbound messages, we use the App Engine platform APIs to send an outbound message to a specific JID, which uniquely identifies a connected XMPP client:

JID jid = new JID(sub);
Message msg = new MessageBuilder().withRecipientJids(jid).withBody(messageBody).build();
XMPPService xmpp = XMPPServiceFactory.getXMPPService();
xmpp.sendMessage(msg);


Channel API


The Channel API allows server-side push to connected clients in an App Engine application. In our chat application, we will utilize this API to push new chat messages to browser-based clients.



In order for our server to push chat messages to a browser, the client needs to be issued an ID by our server. We configure a servlet to handle issuing these IDs (and to handle incoming chat messages created by the browser in JavaScript) in web.xml. The servlet generates a unique ID for a connected client, based on the current system time:

// Generate a client token for the App Engine Channel API
public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
    ChannelService channelService = ChannelServiceFactory.getChannelService();

    // The token must be based on some unique identifier for the client -
    // the current time is used here for demo purposes only
    String clientId = String.valueOf(System.currentTimeMillis());
    String token = channelService.createChannel(clientId);

    // Subscribe this client
    MultichannelChatManager.addSub(clientId);

    // Reply with the token
    res.setContentType("text/plain");
    res.getWriter().print(token);
}

In the browser, we get an ID for the current user via XHR. First, we include the Channel client JavaScript library by requesting a special URL on the App Engine server. Then we use jQuery to issue a GET request to our server to obtain a client ID:

// Get a client token to use with the Channel API
$.ajax('/chat', {
    method: 'GET',
    dataType: 'text',
    success: function(token) {
        console.log(token);
        var channel = new goog.appengine.Channel(token);
        var socket = channel.open();

        // Assign our handler function to the open socket
        socket.onmessage = onMessage;
    }
});

When we get our client ID, we use that to configure the App Engine channel service in the browser for data pushed from the server. Data pushed from the server is handled in a callback function, which updates the textarea on the page. When the user enters a chat message in the browser, we issue a POST request to our ChatServlet, which uses the MultichannelChatManager class to publish a message to all connected clients. This is where we use the channel API to push data to connected web browsers:

ChannelService channelService = ChannelServiceFactory.getChannelService();
channelService.sendMessage(new ChannelMessage(sub,messageBody));


Wrapping Up


In this walkthrough, we explored three messaging APIs that work nicely on App Engine: Twilio SMS, XMPP, and Channels. Our example used Java, but all three APIs will work with Python and Go as well (Twilio has a helper library you might use for Python also).



Using platforms like Twilio and App Engine, developers can create communications applications, which previously would have required expert knowledge and infrastructure to build, in a fraction of the time. I hope you’ll be able to use these APIs to engage with your users wherever they happen to be.



Application source code is available on GitHub here.



- Contributed by Kevin Whinnery, Developer Evangelist, Twilio

Today’s guest post is from Adam DuVander, Developer Communications Director at SendGrid. SendGrid is a cloud-based email service that delivers email on behalf of companies to increase deliverability and improve customer communications. Integration with new or existing email systems is done via SMTP or through a REST API. In this post, Adam shows how to integrate SendGrid into Google App Engine.



Whether you’re developing apps for the web or mobile environments, you need an effective way to communicate with your customers. Building and maintaining your own SMTP infrastructure can be resource intensive and costly. SendGrid eliminates the cost and complexity of maintaining your own email infrastructure so you can focus on developing the next extraordinary app.



Google App Engine developers can easily integrate SendGrid into their applications. In the example below, I’ll show you how to use our Python library. Java and PHP developers, we have you covered with libraries, too. Any App Engine developer can sign up for SendGrid and send 25,000 emails per month* for free, so get an account and follow along in the code editor of your choice.



First, copy the SendGrid Python library into your project by placing the files in a sendgrid sub-directory. When you import this library into your app you'll be able to create SendGrid instances and send mail with simple commands.



At the top of your app's .py file, import the Sendgrid library:

from sendgrid import Sendgrid
from sendgrid import Message

Now, from within your app, you can send email with the following few lines:

# make a secure connection to SendGrid
s = Sendgrid('', '', secure=True)
# make a message object
message = Message("from@mydomain.com", "message subject", "plaintext message body",
                  "<strong>HTML message body</strong>")
# add a recipient
message.add_to("someone@example.com", "John Doe")
# use the Web API to send your message
s.web.send(message)

In addition to working hard to get your email to an inbox, SendGrid also provides a lot of data about your emails. For example, our Event API can tell you when an email is delivered, opened, clicked or bounced, among several other events. The same way that Google App Engine is a platform that makes hosting and scaling apps easy, SendGrid simplifies your interaction with email.
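As a sketch of what consuming those event notifications might look like, the snippet below tallies a batch of events by type. The JSON shape and event names follow SendGrid's Event API conventions, but treat the exact field names as an assumption here; `handle_events` is an illustrative name, not part of the SendGrid library:

```python
import json

def handle_events(raw_body):
    """Tally a POSTed batch of SendGrid events by event type.

    SendGrid's Event API delivers a JSON array of event objects; each
    object carries an "event" field such as "delivered", "open",
    "click", or "bounce".
    """
    counts = {}
    for event in json.loads(raw_body):
        kind = event.get("event", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts

# A hypothetical payload, as your webhook endpoint might receive it:
sample = json.dumps([
    {"email": "someone@example.com", "event": "delivered"},
    {"email": "someone@example.com", "event": "open"},
    {"email": "other@example.com", "event": "delivered"},
])
print(handle_events(sample))
```

From counts like these you could track delivery and engagement rates per campaign.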



Sign up for SendGrid* to claim your 25,000 emails per month for free and combine the power of email and Google App Engine.



*Google will be compensated for customers who sign up for a non-free account



Contributed by Adam DuVander, Developer Communications Director, SendGrid



-Posted by Boyar Naito, Partner Development Manager

The latest release of the Google Plugin for Eclipse supports the creation of Dynamic Web Projects for Google App Engine. Applications created in this manner fully leverage Eclipse’s Web Tools Platform (WTP), which makes it easier to create and structure Java EE web applications and allows Google App Engine developers to benefit from advanced tooling the Eclipse Ecosystem offers for the Enterprise space.







Here are a few features enabled by the new, WTP-enabled, Plugin:




  • WTP/Maven integration enables complex projects to take advantage of the Maven headless build system and the developer-friendly Eclipse tooling.

  • WTP makes it much easier for App Engine projects to use the Eclipse Database and Dali JPA tooling to do either Google Cloud Datastore or Google Cloud SQL development.

  • WTP Enterprise Archive (EAR) support allows for the aggregation of multiple App Engine modules as part of an overall App Engine Application as a WTP EAR project.

  • Existing Eclipse WTP projects targeting another application server (e.g. Tomcat, Jetty, or GlassFish) can be ported to use App Engine as the runtime server.




To get started, head to the documentation or simply create a new Dynamic Web Project after updating to the latest version of the Google Plugin for Eclipse.



-Posted by Rajeev Dayal, Software Engineer

We’ve launched new features in Google Cloud Storage that make it easier to manage objects, and faster to access and upload data. With a tiny bit of upfront configuration, you can take advantage of these improvements with no changes to your application code — and we know that the one thing better than improving your app is improving your app transparently!



Today we’re announcing:



  • Object Lifecycle Management - Configure auto-deletion policies for your objects

  • Regional Buckets - Granular location specifications to keep your data near your computation

  • gsutil automatic parallel composite uploads - Faster uploads with gsutil



Object Lifecycle Management

Object Lifecycle Management allows you to define policies that allow Cloud Storage to automatically delete objects based on certain conditions. For example, you could configure a bucket so objects older than 365 days are deleted, or only keep the 3 most recent versions of objects in a versioned bucket. Once you have configured Lifecycle Management, the expected expiration time will be added to object metadata when possible, and all operations are logged in the access log.
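Both of the example policies above can be expressed in a single lifecycle configuration document and applied with gsutil. The JSON below follows the Cloud Storage lifecycle configuration format; the bucket name is a placeholder:

```shell
# Sketch: delete objects older than 365 days, and keep only the 3 most
# recent versions of each object in a versioned bucket.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 365}},
    {"action": {"type": "Delete"}, "condition": {"numNewerVersions": 3}}
  ]
}
EOF
# To apply it (bucket name is hypothetical):
#   gsutil lifecycle set lifecycle.json gs://your-bucket
```

Once applied, Cloud Storage enforces the policy in the background with no further action from your application.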



Object Lifecycle Management can be used with Object Versioning to limit the number of older versions of your objects that are retained. This can help keep your apps cost-efficient while maintaining a level of protection against accidental data loss due to user application bugs or manual user errors.



Regional Buckets

Regional Buckets allow you to co-locate your Durable Reduced Availability data in the same region as your Google Compute Engine instances. Since Cloud Storage buckets and Compute Engine instances within a region share the same network fabric, this can reduce latency and increase bandwidth to your virtual machines, and may be particularly appropriate for data-intensive computations. You can still specify the less-granular United States or European datacenter locations if you'd like your data spread over multiple regions, which may be a better fit for content distribution use cases.



gsutil - Automatic Parallel Composite Uploads

gsutil version 3.34 now automatically uploads large objects in parallel for higher throughput. Achieving maximum TCP throughput on most networks requires multiple connections, and gsutil now manages this for you automatically. The support is built using Composite Objects. For details about temporary objects and a few caveats, see the Parallel Composite Uploads documentation. To get started, simply use 'gsutil cp' as usual. Large files are automatically uploaded in parallel.
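The size cutoff above which gsutil splits an upload into parallel components is controlled from the boto configuration file. A sketch (the threshold and component size shown are illustrative values, not necessarily the defaults for your gsutil version):

```
[GSUtil]
parallel_composite_upload_threshold = 150M
parallel_composite_upload_component_size = 50M
```

Setting the threshold to a large value effectively disables the feature if, for example, your downstream readers cannot handle composite objects.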



We think there’s a little something here for everyone: If you’re managing temporary or versioned objects, running compute jobs over Cloud Storage data, or using gsutil to upload data, you’ll want to take advantage of these features right away. We hope you enjoy them!



-Posted by Brian Dorsey, Developer Programs Engineer

Yesterday we announced that dedicated memcache is in Preview. Now you can purchase in-memory data caching capacity exclusively for your application, cache more data and drive up cache hit rates. With higher cache hit rates, dedicated memcache can also reduce Datastore costs and make your application faster than ever. Dedicated memcache has been one of the top 100 features requested by App Engine customers.



With the 1.8.2 release, App Engine now has two classes of memcache service. Dedicated memcache is available in addition to the shared memcache service that has been offered on App Engine for years. No code changes are required when moving to dedicated memcache. Compared to the shared memcache service, dedicated memcache provides developers control over the cache space and performance available to an app.



Bobby Murphy at Snapchat, the rapidly growing mobile photo sharing application, said “We love dedicated memcache and it's already made a big impact on our business. Besides being able to reduce our costs substantially, our hit rates are up to the high 80s.”



By default, applications will continue to use shared memcache, and it will continue to be free. Starting today, billing enabled applications can select dedicated memcache on the App Engine admin console’s application settings page. With dedicated memcache, applications purchase a fixed capacity of RAM and operations-per-second just for that application. This gives developers the ability to plan for the needs of their applications. At this time, there is one class of dedicated memcache available:







  • Price: 12 cents per GB per hour
  • Self-service capacity: 1 to 20 GB
  • Performance: up to 10,000 operations per second per GB for items < 1KB



If you need more than 20GB, please contact us at cloud-accounts-team@google.com.







In addition to dedicated memcache, with the 1.8.1 release we added memcache operations per second monitoring for all memcache classes to the App Engine dashboard.











Stay tuned for more improvements to the memcache service over the coming months. As always, we’re interested in hearing where you’d like us to take caching on App Engine via the comments and the App Engine feature request tracker.



- Posted by Logan Henriquez, Product Manager

Today we are announcing the release of App Engine 1.8.2 which includes significant improvements for large-scale application development, developer tooling, runtimes and a new dedicated memcache service.



Dedicated Memcache

Dedicated memcache is now in Preview. App Engine developers already enjoy a free shared memcache that allows them to cache data in order to improve performance, but in some cases your application needs more control over your cache. With dedicated memcache you can purchase in-memory data caching capacity exclusively for your application, cache more data and drive up cache hit rates. With higher cache hit rates, dedicated memcache can also reduce Datastore costs and make your application faster than ever.



App Engine now has two classes of memcache service: shared and dedicated. No code changes are required when moving to dedicated memcache from shared. Starting today, billing enabled applications can select dedicated memcache on the App Engine admin console’s application settings page. Dedicated memcache is priced at 12 cents per GB per hour.



Git Support

Many developers have told us they work with standard development tools such as git and they don’t want to have to context switch to deploy to App Engine. Today we are making it even easier to deploy Python and PHP applications to App Engine with the Source Push-to-Deploy feature.

$ git push appengine master 



Below is what a user would see after enabling this feature for an example project ID ‘polar-automata-277’:







With this release of App Engine we are making Push-to-Deploy Preview available for anyone to try.



Modules

App Engine Modules is a new feature that allows developers to split out large-scale applications into logical components that are able to share stateful services and communicate in a secure fashion. Not all components within an application are equal and often times they require their own performance configurations, authorization, and versioning. With Modules, developers can start splitting their apps with a single configuration change:



mobile-frontend.yaml
----------------------
application: insta-lbs
module: mobile-frontend
version: 1
runtime: python27



You can create modules in any supported language on App Engine. For Java, the packaging system for modules is the Enterprise Archive (EAR) mechanism. See an example of a Maven EAR project with App Engine modules at https://github.com/GoogleCloudPlatform/appengine-modules-sample-java. For more information on this feature, be sure to check out our docs (Java | Python).



PHP Runtime

With over 1,000 developers building PHP apps, we’ve been thrilled with the early interest. Based on your feedback, in the 1.8.1 release of PHP we announced support for the much-requested mcrypt, iconv and mbstring extensions, as well as the ability to include and/or require PHP scripts directly from Google Cloud Storage - helpful when using templating systems such as Smarty.



In this release we’ve added a number of improvements to our Cloud Storage integration. We’ve also launched a drop-in plugin for WordPress that adds support for using Cloud Storage for storing uploaded content, and the Mail API for sending notifications.



Python Runtime

We’ve updated the Python 2.7 interpreter to Python 2.7.5, which was recently released by the Python community and includes a number of bugfixes from Python 2.7.3 and Python 2.7.4.



Java WTP Tooling Support

Based on feedback from developers, we have updated the Google Plugin for Eclipse to fully support the Eclipse-standard Web Tools Platform and Java EAR files. This system will be familiar to many Java developers, as it is the most common pattern used in Eclipse for on-premises and cloud environments. With WTP, EAR files and Maven support, Eclipse users can now enjoy the full ecosystem of Eclipse plugins from the open source community.







Please check out the documentation and download the latest GPE update.



The complete list of features and bug fixes for App Engine 1.8.2 can be found in our release notes.



If you’re looking to stay up to date with the latest Google Cloud Platform news, we suggest signing up for our monthly newsletter.



- Posted by Chris Ramsdale, Product Manager

We are constantly looking to improve our products by gathering real customer feedback from you. Often it's through G+ comments, 1:1 interactions, Twitter, conferences, and more.



Are you interested in learning more about Google Cloud Platform development as well as helping improve our products directly? You can by participating in a Google Cloud Platform Build Day Study! Selected participants will be invited to our Mountain View or Seattle campuses to work with Google engineers on a day-long build as a part of a User Experience Research study. Please fill out this questionnaire if you’re interested, and we’ll be in touch if you’re a good fit!



Thanks in advance for your interest and help.



- Posted by Andrew Macvean, UX Researcher

Recently, Eric Johnson released a guide to setting up a Cassandra cluster on Google Compute Engine. Cassandra is a NoSQL database that is designed around distributed principles. By distributing data across multiple nodes, your cluster becomes resilient to individual node failure, and scaling up your cluster is as trivial as adding new nodes.



The guide walks you through creating your nodes (instances), setting up Java, and creating and configuring a firewall. Included in the guide are several scripts that make the configuration and setup easy to understand and execute. Once you are finished with your cluster, a simple call to a teardown script cleans up your project’s environment.



Depending on your system requirements, you will want to make some adjustments to the setup in the guide. For instance, you should modify the global configuration file to meet the needs of your system. Cassandra runs best with plenty of CPU and memory, so you will likely want to choose one of our higher power instance types. You should also adjust the number of overall nodes and nodes per zone to match your requirements. Lastly, best practices highly recommend that you use persistent disks for your cluster.
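Those cluster-wide settings live in cassandra.yaml on each node. A minimal illustrative fragment (the cluster name and IP addresses are placeholders; check the Cassandra configuration reference for your version before relying on specific keys):

```yaml
# cassandra.yaml (illustrative fragment)
cluster_name: 'MyCluster'          # must match on every node in the cluster
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.240.0.2,10.240.0.3"   # a few stable nodes used for discovery
listen_address: 10.240.0.4         # this node's internal IP
```

Because the guide's setup scripts pull configuration from the Compute Engine metadata server, a change like this can be propagated to new and existing nodes without hand-editing each instance.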



Many of the core features of Google Compute Engine match up well with the requirements of a distributed database like Cassandra. Distributing instances across zones protects against individual node and zone failures. Using the metadata server means that your nodes can configure themselves, and a change to the configuration file can propagate easily to existing nodes. Consistent, fast disk I/O means that you can rely upon quick queries and reliable write throughput.



For more information about Cassandra, to download, or to contribute, visit the database’s site. Hear from experts about different approaches to distributed databases by watching the Google I/O industry panel.



- Posted by Julia Ferraioli, Developer Advocate

A frequent theme we have heard from customers and read in the community lately has been about lock-in. What does it mean to choose a cloud provider? What trade-offs are reasonable, and which are too proprietary? Is there a model that works or can work for you? Peter Magnusson, the engineering director responsible for Google App Engine, took on the topic recently on G+ and it is worth a read. But he’s not the only pundit discussing the topic and I’m sure he won’t be the last.



Take a look at Peter’s thoughts and let us know what you think.



- Posted by Brian Goldfarb, Head of Marketing

Today’s post is from Tyler Jewell, CEO at Codenvy. In this post, Tyler looks at how you can leverage cloud based development tools to build applications for Google Cloud Platform.



Codenvy is a cloud development environment for coding, building, and testing applications for Google Cloud Platform. In a recent LinkedIn survey, 1200 engineers indicated that they spend nearly ⅓ of their week administering their desktop. This includes configuring the IDE, build system, runtime, and plug-ins. By providing a cloud IDE that is pre-integrated with Google App Engine, we can change the developer workflow dramatically by automatically provisioning a cloud workbench that allows a developer to be immediately productive with fewer errors, and have a higher confidence that their application will deploy correctly to App Engine when pushed.



Each Codenvy workspace comprises an IDE, a code assistant service, a build system and a debugging runtime. These four components are integrated yet decoupled, so each can scale independently based on usage. This allows us to eliminate configuration needs while reducing compilation and deployment time. Projects in Codenvy can start through a project creation wizard or can be imported from a git repository. After coding operations are complete, code can be deployed directly to App Engine using continuous integration post-commit hooks on git, or through a jClouds-based direct deployment connection to App Engine.



The main challenge is setting up the development environment to match the production environment. Each language, framework and PaaS has its nuances that must be addressed. To reproduce the App Engine environment in Codenvy, we embed the Google App Engine SDK into the IDE (enabling compilation and auto-completion), and in the debugging runtime so that applications can be functionally tested in a cloud-local environment before being pushed to App Engine directly.









The App Engine SDK is provided automatically by Codenvy whenever an App Engine application is configured in the project space. Because deploying artifacts onto an App Engine instance takes time, developers who need to make many changes would otherwise wait repeatedly; with the SDK available in a low-latency, cloud-local environment, you can make a high number of iterative changes and test functionality before pushing artifacts to App Engine.



Other Google Services Used by Codenvy



Google technology has been instrumental in building our product and growing our user base:


  • First, we enable multi-cursor collaborative editing which is incredibly useful for pair programming, code reviews, or classroom teaching. This is powered by Google Web Toolkit, a framework to write optimized Ajax applications, and Collide, an open source collaboration system published by Google.

  • Second, OAuth makes it possible to register with a Google account and start coding in seconds.

  • Third, we support Chromebook development and have certified on Pixel tablets.

  • Finally, we are preparing Android support in Codenvy. Check out this sample demo of an Android app built in Codenvy and deployed to Manymo’s web-based Android emulator: http://vimeo.com/66157251.




Later this year, we will release production Android development support. In addition to editing, building and packaging Android applications, it will be possible to run them in the browser with a tenanted emulator. We’ll also be shipping an SDK that will allow the community to create programming language, deployment target, and framework extensions so that we can work to extend Codenvy to support PHP and Go in App Engine.



Please visit Codenvy today and get started building your App Engine application. Full documentation and tutorials are at docs.codenvy.com, and you can vote for the features that you want here. Finally, do not hesitate to contact support with any questions at support@codenvy.com.



- Contributed by Tyler Jewell, CEO, Codenvy




















A single US cent stretches quite far when experimenting with Google Compute Engine. Watch this video to see how affordable and quick it is. In this video, we walk through the steps required to create a Google Cloud Platform project and start up a virtual machine. Next, we install a web server (Apache) on the virtual machine and fetch a web page to confirm a successful installation. Finally we tear down the virtual machine to wrap up the exercise - all in less than 10 minutes, and under a cent!











Check out the video to see how easy it is to get started with Google Compute Engine and the Cloud Platform. For a self-paced guide on getting started with Compute Engine, visit the Hello World tutorial.



- Posted by Jonathan Simon, Developer Relations

The following post was contributed by Gary Read, CEO of Boundary, a modern IT operations management platform provider and Google Cloud Platform partner. With the Boundary service, operations teams can get early warnings about potential problems, before the dominoes start to fall, so that they can prevent application outages. Get a free, fully functional version of Boundary, available just for Google Compute Engine customers.



Cloud services like Google Compute Engine have fundamentally altered the IT landscape, and to monitor this landscape effectively, a new approach is required. To understand and manage performance, operations teams need a monitoring service architected for today’s realities:


  • Rapid rate of change. With today’s DevOps and Agile approaches, organizations are rolling out applications daily, if not hourly. With shifting workloads and resources, static reports and even hourly updates won’t suffice.

  • Highly distributed. As organizations take advantage of cloud services and new technologies like Hadoop and Cassandra, application environments continue to get more distributed.




It’s with these basic realities in mind that Boundary developed an entirely new SaaS-based monitoring service. Our platform supports dynamic environments by streaming data from every system, every second, and applying real-time analytics to that information. In distributed environments, we analyze the flows between every application and infrastructure component to give a comprehensive view of all the inter-dependencies and of how the distributed system is performing as a whole.



Once this data is collected, Boundary applies big data analytics to the task of IT operations management, and delivers insights through graphical views that are visually exciting and easy to understand. In other words, Boundary makes it practical for IT teams to quickly understand and optimize their modern IT environments.




Delivering Optimized Support for Google Compute Engine


We chose software, rather than appliances, to collect application flow data, as this enables us to provide deep visibility without access to the network. Streaming per-second views of performance data, deep visibility into latency, and support for cloud environments make Boundary uniquely equipped for Google Compute Engine monitoring.




Agile/DevOps support


Increasing responsiveness to changing technical and business demands often means using multiple tools to manage environments and service levels. Boundary can combine real-time streaming data with alerts from third-party platforms, including Google Compute Engine partners Opscode and Puppet Labs, to support fast-changing development environments. This means that as customers push out new configurations and automations in Chef and Puppet Enterprise, they can instantly see those changes correlated against the performance impact.




The Boundary Architecture


The graphic below provides an overview of Boundary’s architecture. Lightweight meters automatically stream data from Google Compute Engine instances to the Boundary analytics engine. We analyze this streaming “application chatter” in real-time to automatically build application topology maps, establish a baseline of normal behavior, and generate alerts when there is a deviation from that normal behavior. Events from third parties like Chef and Puppet are correlated against this streaming data, so the operation team has a consolidated view and can quickly do root cause analysis.









Please sign up to check out the free, full-function version of Boundary just for Google Compute Engine users. Take a look and let us know what you think.





- Contributed by Gary Read, CEO, Boundary

Delivering scalable, reliable mobile push notifications when hundreds of thousands of users have installed your app on their phones can be a major headache. Fortunately, Google App Engine’s support for sockets and accessible but powerful queues makes it easy to quickly build a mobile backend that can reliably scale to huge numbers of devices.





Get the code!

We’ve created a simple push notification application in our GitHub repository to help you get started; it uses all of the techniques described below. Download or fork the source code to get started.



Push notifications are the little pings your phone gives you to let you know that you’ve got a new message, your friend is waiting for you to take your turn on the latest game, or that band you like has just announced a concert in your town. As a developer, push notifications give you a new dimension to engage with your users in real time, any time, regardless of whether they have your app open or even have their phone in their hand.



On iOS devices, like iPhones and iPads, push notifications are handled by Apple’s Push Notification Service (APNS). APNS is hosted by Apple, and acts as a bridge between your server and your mobile clients. In brief, here’s how it works:


  • Your mobile application securely registers itself with Apple to be able to receive push notifications, usually when the app is being launched the first time. It receives a device token, which the mobile application passes to your mobile backend.

  • Your server opens a secure connection to APNS, and when an event occurs that requires a push notification, your server sends a short message, including the device token of the device that should receive the message, to APNS. APNS then handles the ‘last-mile delivery’ of the notification to the device.
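To make the second step concrete, here is a rough sketch of assembling a notification frame for APNS’s legacy binary “simple notification” interface (Apple has since moved to an HTTP/2-based API), using only the Python standard library. The function name and its arguments are illustrative, not part of any official SDK:

```python
import json
import struct

def build_apns_frame(device_token_hex, alert_text):
    """Assembles a notification in APNS's legacy 'simple' binary format:
    command byte (0), 2-byte token length, the 32-byte device token,
    2-byte payload length, then the JSON payload itself."""
    token = bytes.fromhex(device_token_hex)          # 32-byte device token
    payload = json.dumps({"aps": {"alert": alert_text}}).encode("utf-8")
    return (struct.pack("!BH", 0, len(token)) + token +
            struct.pack("!H", len(payload)) + payload)

frame = build_apns_frame("ab" * 32, "Your friend took their turn!")
```

A pooled worker (described below) would simply write these bytes over its already-open TLS connection to APNS.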


Although this seems relatively trivial, there are a few important things to consider when implementing push notifications in your application.



Connection pooling with backend instances and pull queues

If you have a popular application, you can quickly end up generating a large number of push notifications - even from a single event.



For performance reasons, you should avoid opening a large number of secure connections to APNS; instead, hold a few connections open and funnel any push notifications your applications generate through those. This approach is commonly called connection pooling.



Fortunately, App Engine provides the building blocks for scalable connection pooling. Resident backend instances are long-running App Engine containers that can be used as workers holding open APNS connections for sending notifications. These workers monitor a pull queue that signals when a notification should be sent. When an event occurs in another component of your application that should trigger a push notification (say, an action triggered by your mobile API in a frontend instance), that component can simply enqueue a task on the pull queue.



Each worker can then periodically read from the pull queue to see if any notifications need to be sent by the application and, if there are, lease a block of them, send them via the previously established APNS connection, and delete them.











As well as saving on opening many connections to APNS, this approach also improves the reliability of the app. If a worker is unable to deliver a message to APNS for some reason (e.g., because the TCP connection was severed), App Engine’s pull queues will release the lease on the task and allow another worker to retry it. You can also scale the solution simply by adding additional workers that read off the same pull queue.



Sending bulk notifications with push queues and cursors

You may find a need to send a push notification to a large number of devices at once. This requires querying your database/datastore for the list of relevant device tokens and then enqueuing, onto the pull queue described above, a request that includes the message you want to send along with each relevant device token.



If you were to attempt this in a single request, you could quickly run into problems as your list of device IDs becomes large. A simple but elegant solution is to use push queues and (if you’re storing device IDs in the App Engine datastore) query cursors.



A query cursor is an opaque string marking the index position of the last result retrieved; it can be used to iterate over a given query result set in small batches. The application can use the cursor as the starting point for a subsequent retrieval operation to obtain the next batch of results from the point where the previous retrieval ended.



Query cursors can be combined with App Engine push queues. A push queue handler is written to take a query and an optional cursor. The handler executes the query with a small result limit (say, 100 entities) and, for each result, adds a task to the pull queue described above. If the query result also includes a cursor, this indicates there are still unretrieved entities in the query. Once the task handler has cycled through the results it has retrieved, if it has a new cursor, it can initiate a new push task with that cursor’s value.
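The cursor-chained handler can be sketched like this. Here `fetch_page` is a stand-in for a cursor-based datastore query (the real datastore API returns opaque cursor objects rather than stringified indexes), and both queues are plain lists, so everything below is illustrative:

```python
def fetch_page(tokens, batch_size, cursor=None):
    """Stand-in for a cursor-based datastore query: returns one batch plus
    an opaque cursor string, or None once the result set is exhausted."""
    start = int(cursor) if cursor else 0
    batch = tokens[start:start + batch_size]
    next_cursor = str(start + batch_size) if start + batch_size < len(tokens) else None
    return batch, next_cursor

def broadcast_handler(tokens, message, pull_queue, push_queue,
                      cursor=None, batch_size=100):
    """Push-queue handler: enqueue one pull task per device in this batch,
    then chain a new push task carrying the next cursor, if any remain."""
    batch, next_cursor = fetch_page(tokens, batch_size, cursor)
    for token in batch:
        pull_queue.append((token, message))   # work for the APNS workers
    if next_cursor is not None:
        push_queue.append(next_cursor)        # re-enqueue self with the cursor

# Drive the chain to completion, as App Engine's push queue scheduler would.
all_tokens = ["tok%03d" % i for i in range(250)]
pull_queue, push_queue = [], [None]           # the seed task carries no cursor
while push_queue:
    broadcast_handler(all_tokens, "Concert announced!", pull_queue, push_queue,
                      cursor=push_queue.pop(0))
```

With 250 tokens and a batch size of 100, the handler runs three times, and no single request ever has to hold the full token list in memory.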



Connecting to APNS

While you can use App Engine’s outbound sockets support to talk to APNS from Java or Python directly, popular third-party libraries such as JavaPNS also work well, and often provide a cleaner, higher-level interface for sending notifications.
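As an illustration of the direct-sockets route in Python, a worker might establish its pooled TLS connection to Apple’s legacy binary gateway roughly as follows. The gateway host and port reflect the old binary interface, and the certificate paths are placeholders, so treat this as an assumption-laden sketch rather than production code:

```python
import socket
import ssl

APNS_GATEWAY = ("gateway.push.apple.com", 2195)   # Apple's legacy binary gateway

def apns_context(cert_path=None, key_path=None):
    """Builds the client-side TLS context; APNS authenticates your app via
    the provider certificate loaded here (the paths are illustrative)."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    if cert_path:
        ctx.load_cert_chain(cert_path, key_path)
    return ctx

def open_apns_connection():
    """Opens the long-lived pooled connection a worker would hold on to
    (not invoked here, since it needs network access and a certificate)."""
    raw = socket.create_connection(APNS_GATEWAY)
    return apns_context().wrap_socket(raw, server_hostname=APNS_GATEWAY[0])

ctx = apns_context()   # building the context itself requires no network
```

A worker would open one such connection at startup and reuse it for every leased notification, reconnecting only when the socket is severed.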



Putting it all together

Although this sounds like a lot, putting it all together on App Engine is remarkably straightforward, requiring only a simple batch query queue handler and a notification worker. Everything else is taken care of by App Engine’s robust queueing and datastore APIs.













If you’re feeling ready to add Push Notifications to your app, we’ve got some great resources to help you get started.




- Posted by Grzegorz Gogolowicz, Solutions Architect