2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



Seeing one of our customers top 100,000 requests per second was the highlight of the year. That is enough capacity to answer a request from every single person on the planet in a single day. It shows that people can take our platform and change the world with new business models, cool applications, and knowledge sharing at a truly global level. The really exciting part for me is that Google App Engine let the customer do it easily: they kept all their focus on a great mobile app and customer experience instead of worrying about the underlying infrastructure.



-Posted by Ophir Kra-Oz, Group Product Manager

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



At the beginning of this month, we made Compute Engine Generally Available. It’s wonderful to see the great products our customers (like Brightcove, Cooladata, Evite, Fishlabs and Mendelics) are building on the Cloud Platform, and that they’re already seeing the benefits of Google’s scalability, reliability and consistently high performance. I’m also excited to see all the great products that partners like DataStax, DataTorrent, Rightscale, SaltStack and Scalr, as well as many open-source projects, are bringing to our customers. And yet, what gets me out of bed in the morning is knowing that we’re only just getting started.



-Posted by Navneet Joneja, Senior Product Manager

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



The addition of the PHP runtime to Google App Engine was undoubtedly the high point of my year. When we launched PHP support at Google I/O 2013, it was the top customer-requested feature. By combining App Engine with Google Cloud SQL and Google Cloud Storage, we have already seen a number of pre-existing high-traffic PHP applications, like Motherboard, move to App Engine to take advantage of the worry-free scaling and zero administration overhead. And, within the last month, we hosted a live online quiz for the largest livestreamed music event in history - built using PHP on App Engine. With so many great users already, it’s exciting to think that the next Snapchat or Khan Academy could be written in PHP, hosted inside Google’s datacenters.



-Posted by Stuart Langley, Software Engineer

Tools for monitoring, analyzing and optimizing cost have become an important part of managing cloud services. But these tools are difficult to build if the usage data is available only in the Google Cloud Console. We are happy to announce a solution: the Billing Export feature, now available in Preview.



Once enabled, your daily Google Cloud Platform usage and cost estimates are automatically exported to a CSV or JSON file in a Google Cloud Storage bucket that you specify. You can then access the data via the Cloud Storage API, the command-line tool or the Cloud Console file browser. Usage data is labeled with project number and resource type, and you have full control over who can access it via ACLs on your Cloud Storage bucket.



"Billing Export is a great new feature of Google Cloud Platform. It allows us to analyze the detailed usage of all our cloud projects in one place and optimize our costs. It also gives us a great tool to monitor our applications over time and understand trends in our usage," said Dave Tucker, Director of Platform Development, WebFilings.



You can manage Billing Export from the Cloud Console.



View of the Cloud Storage bucket after enabling Billing Export



As you can see in the example output below, your billing data appears as simple JSON with all the important attributes, such as service name, date, project number, measurement and cost.



[ {
  "lineItemId" : "com.google.cloud/services/compute-engine/StoragePdCapacity",
  "startTime" : "2013-11-02T00:00:00-07:00",
  "endTime" : "2013-11-03T00:00:00-07:00",
  "projectNumber" : "176782591794",
  "measurements" : [ {
    "measurementId" : "com.google.cloud/services/compute-engine/StoragePdCapacity",
    "sum" : "66325032468480000",
    "unit" : "byte-seconds"
  } ],
  "cost" : {
    "amount" : "2.383101",
    "currency" : "USD"
  }
}, {
  "lineItemId" : "com.google.cloud/services/compute-engine/VmimageN1Highcpu_8",
  "startTime" : "2013-11-02T00:00:00-07:00",
  "endTime" : "2013-11-03T00:00:00-07:00",
  "projectNumber" : "176782591794",
  "measurements" : [ {
    "measurementId" : "com.google.cloud/services/compute-engine/VmimageN1Highcpu_8",
    "sum" : "44220",
    "unit" : "seconds"
  } ],
  "cost" : {
    "amount" : "6.4119",
    "currency" : "USD"
  }
} ]
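Because the export is plain JSON, it is easy to post-process. As a rough sketch (using only fields from the sample records above; `total_cost` is an illustrative helper, not part of the product), you could total your estimated spend like this:

```python
import json

# A trimmed version of the Billing Export records shown above
# (only the fields needed for a cost summary).
export = json.loads("""[
  {"lineItemId": "com.google.cloud/services/compute-engine/StoragePdCapacity",
   "cost": {"amount": "2.383101", "currency": "USD"}},
  {"lineItemId": "com.google.cloud/services/compute-engine/VmimageN1Highcpu_8",
   "cost": {"amount": "6.4119", "currency": "USD"}}
]""")

def total_cost(items, currency="USD"):
    """Sum the estimated cost of all line items in one currency."""
    return sum(float(item["cost"]["amount"])
               for item in items
               if item["cost"]["currency"] == currency)

print(round(total_cost(export), 6))  # 8.795001
```

The same approach extends naturally to grouping costs by `lineItemId` or `projectNumber` for per-service or per-project reports.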




We would love to hear your feedback and cool ideas on how to improve the Google Cloud Platform billing experience.



-Posted by Rae Wang, Product Manager

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



Seeing customers like Snapchat grow on Google Cloud Platform is what gets me up in the morning. It’s exciting to watch customers achieve new heights of scalability with less effort than was possible before. One of the features I worked on this year that was part of that scalability story is dedicated memcache, which lets customers scale their caching capacity indefinitely without having to manage a farm of memcached servers. Since it went into Preview in July, hundreds of customers have deployed many terabytes in production, including a single application using six terabytes. As we go into the new year, I can’t wait to see which startups use App Engine to bring innovative and fun applications to the world.



-Posted by Logan Henriquez, Product Manager

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



My highlight of this year was leading Google Cloud Platform talks and code labs in five cities around the world. My colleagues and I gave talks about Google Compute Engine, App Engine, and the services that glue the fabric of the platform together. We had a great time speaking with all of the attendees, but my favorite part of this tour happened during one of the Google Compute Engine code labs. Attendees were each processing astronomical data on a virtual machine and generating an image of a section of the universe, which was then pulled into an overall collage of the images. However, one of our attendees was feeling a bit mischievous, and replaced his image with -- what else? -- an image of a cat! Well played, very well played.



-Posted by Julia Ferraioli, Developer Advocate

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



My highlight from 2013 was speaking to developers from across the world at the Big Data Spain conference. I used the opportunity to share everything that we have accomplished with BigQuery. Although every tool has its limits, it was a joy to review how many big data limits we broke during 2013. One highlight was the growth of a database that can already be as tall as you need it to be: in 2012 each row could contain up to 64 KB of data; today, that number is up to 20 MB. Other updates include the ability to combine and join two insanely huge tables, aggregate values in cases previously considered to have too many groups to group by, and return results of arbitrary size (JOIN EACH, GROUP EACH BY, and the allowLargeResults flag). BigQuery not only got bigger this year, it also got smarter: the new window and analytic functions let users run richer queries, while the new correlation function surfaces relationships that were previously invisible. What a good way to close the year!



-Posted by Felipe Hoffa, Developer Programs Engineer

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



My highlight this year was enabling native connections for Cloud SQL instances and seeing how our use of open standards allows developers to use the whole ecosystem of MySQL tools and connectors with their cloud databases. Over the year, I have met many of our users and partners. It is always interesting to see how many developers are using Cloud SQL in applications that would previously have required proprietary or on-premises databases. (Oh, and as a native Brit, my euro-cheese highlight had to be the Eurovision Song Contest app running on Google Cloud Platform: 125 million viewers, 50,000 requests-per-second, 99% of requests completed in 35ms, and no headaches!)



-Posted by Joe Faith, Product Manager

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



In October 2013, we made Google Cloud Storage Offline Disk Import available in Limited Preview to users in several locations around the globe. It is personally gratifying to witness how GCS Offline Disk Import has enabled our customers to efficiently import tons of data to their GCS bucket without having to upload it over a slow or unreliable Internet connection. As we’ve built this product, it’s been fascinating to tap into the scalability and efficiency of the OmNomNom StreetView infrastructure, which is the backbone of GCS Offline Disk Import. I’m excited to bring these impressive systems and capabilities to our Google Cloud Platform customers.



-Posted by Lamia Youseff, Software Engineer

We have published two new articles about best practices for App Engine. Are you aware of the best ways to keep Memcache and Datastore in sync? The article Best Practices for App Engine Memcache discusses concurrency, performance and migration with Memcache to make you aware of potential pitfalls and to help you build more robust code.



Do you know how to make your App Engine application faster and more scalable by using eventual consistency? If not, take a look at a new article that explains the difference between eventual and strong consistency. The paper will help you leverage Datastore to scale your apps to millions of happy customers.



Concurrency, performance and migration in memcache

Memcache is a cache service for App Engine applications that is shared by multiple frontend instances and requests. It provides in-memory, temporary storage that is intended primarily as a cache for rapid retrieval of data that's backed by some form of persistent storage, such as Google Cloud Datastore.



Using Memcache will speed up your application's response to requests and reduce hits to the datastore (which in turn saves you money). However, keeping Memcache data synchronized with data in the persistent storage can be challenging when multiple clients modify the application data.



Transactional data sources, such as relational databases or Google Cloud Datastore, coordinate concurrent access by multiple clients. However, Memcache is not transactional, and there's a chance that two clients will simultaneously modify the same piece of data in Memcache. As a result, the data stored may be incorrect. Concurrency problems can be hard to detect because often they do not appear until the application is under load from many users.



With App Engine, you can use the compare-and-set function (Client.cas()) to coordinate concurrent access to Memcache. However, if your application uses compare and set, it must be prepared to handle errors and retry.



We recommend using the atomic Memcache functions, such as incr() and decr(), where possible, and the cas() function for coordinating concurrent access. Use the Python NDB API if your application uses Memcache to optimize reading and writing to Google Cloud Datastore. Read more in our newly published paper, Best Practices for App Engine Memcache.
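The compare-and-set retry loop described above can be sketched in a few lines. Note that FakeClient below is a hypothetical in-memory stand-in for illustration only; the real client is google.appengine.api.memcache.Client, whose gets()/cas() track the token internally rather than passing it explicitly:

```python
class FakeClient:
    """Hypothetical in-memory stand-in for a memcache client (illustration only)."""

    def __init__(self):
        self._data = {}
        self._version = {}

    def set(self, key, value):
        self._data[key] = value
        self._version[key] = self._version.get(key, 0) + 1

    def gets(self, key):
        # Return the value plus a token identifying the version we read.
        return self._data.get(key), self._version.get(key, 0)

    def cas(self, key, value, token):
        # Store only if no one else wrote the key since our gets().
        if self._version.get(key, 0) != token:
            return False
        self.set(key, value)
        return True


def bump_counter(client, key, retries=10):
    """Read-modify-write with compare-and-set, retrying on contention."""
    for _ in range(retries):
        value, token = client.gets(key)
        if client.cas(key, (value or 0) + 1, token):
            return True
    return False  # persistent contention: the caller must handle this


client = FakeClient()
bump_counter(client, "hits")
bump_counter(client, "hits")  # counter is now 2
```

If another request writes the key between gets() and cas(), the cas() call fails and the loop re-reads the fresh value, which is exactly the error handling and retry the article recommends.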



Balancing strong and eventual consistency

Web applications that require high scalability often use NoSQL databases, which offer eventual consistency for improved scalability. However, if you're used to the strong consistency that relational databases offer, it can be a bit of a mind shift to get your head around the eventual consistency of NoSQL data stores. Google Cloud Datastore allows you to choose between strong and eventual consistency, balancing the strengths of each.



Traditional relational databases provide strong consistency of their data, also called immediate consistency. This means that data viewed immediately after an update will be consistent for all observers of the entity. Use cases that require strong consistency include knowing “whether or not a user finished the billing process” or “the number of points a game player earned during a battle session.” It also means that all requests to view the updated data are blocked until all the writes required for strong consistency have finished.



Eventual consistency, on the other hand, means that all reads of the entity will eventually return the last updated value but might return inconsistent views of the data in the meantime. For example, knowing “who in your buddy list is online” or “how many users have +1’d your post” are cases where strong consistency is not required. Your application can get higher scalability and performance by leveraging eventual consistency, because your application won't have to wait for all the writes to complete before returning results.



The following two diagrams illustrate strong versus eventual consistency:




Eventual consistency




Strong consistency
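The behavior in the diagrams can be sketched as a toy model (this is an illustration of the concept, not the Datastore API):

```python
class ToyStore:
    """Toy model of a replicated store: writes land on a primary,
    reads are served from a replica that lags until replication runs."""

    def __init__(self):
        self.primary = {}
        self.replica = {}

    def write(self, key, value):
        self.primary[key] = value

    def replicate(self):
        # In a real system this happens asynchronously, some time after the write.
        self.replica.update(self.primary)

    def strong_read(self, key):
        # A strongly consistent read waits until replication has caught up.
        self.replicate()
        return self.replica.get(key)

    def eventual_read(self, key):
        # An eventually consistent read answers immediately; it may be stale.
        return self.replica.get(key)


store = ToyStore()
store.write("score", 10)
stale = store.eventual_read("score")  # None: the replica hasn't caught up yet
fresh = store.strong_read("score")    # 10: waited for replication
```

The trade-off is visible in the last two lines: the eventual read returns instantly but may miss the latest write, while the strong read always reflects it at the cost of waiting.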

To learn more about the differences between eventual and strong consistency, and how to take advantage of each, read our article on the technical solutions portal at cloud.google.com/resources.



-Posted by Alex Amies, Cloud Solutions Technical Account Manager

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



My highlight this year was bringing App Engine’s managed non-relational storage service, Datastore, to developers everywhere as Google Cloud Datastore. There are many use cases and applications where developers themselves want to manage the compute-side of the equation (lucky for them, we have world class VMs as well). That said, managing large distributed storage is no easy task, often times consuming precious development hours. For me, giving time back to developers by managing the complex aspects of a scalable service and, thus, allowing them to focus on creating amazing user experiences, is definitely a highlight of the year.



-Posted by Chris Ramsdale, Product Manager

2013 was a busy year for Google Cloud Platform. Watch this space: each day, a different Googler who works on Cloud Platform will be sharing his or her highlight from the past year.



You only get a few chances in your lifetime to see a major technology shift, and we are lucky enough to be experiencing two at the same time: mobile and the cloud. The rise of smartphones and tablets is fundamentally changing user expectations, and cloud computing is changing software development. My favorite moments this year were when we brought these trends together to enable developers to build amazing solutions. Early in the year we launched the Mobile Backend Starter, making it trivial to build no-code backends for Android (and later iOS) applications. Then, at Google I/O, we launched great developer features for cloud-connecting your Android app directly in Android Studio, and we topped off the year with the General Availability of Google Cloud Endpoints, which makes the infrastructure for the cloud-to-mobile bridge solid and dependable... and next year will be even better!



-Posted by Brad Abrams, Group Product Manager

With the recent release of App Engine 1.8.8 we are pleased to announce improvements to the Go App Engine SDK, including a new command-line interface, local unit testing facilities, and a configuration option to allow Go apps to handle more concurrent requests.



The goapp tool

The Go App Engine SDK now includes the "goapp" tool, an App Engine-specific version of the "go" tool. The new name permits users to keep both the regular "go" tool and the "goapp" tool in their system PATH.



In addition to the existing "go" tool commands, the "goapp" tool provides new commands for working with App Engine apps. The "goapp serve" command starts the local development server and the "goapp deploy" command uploads an app to App Engine.



The "goapp serve" and "goapp deploy" commands give you a simpler user interface and consistency with existing commands like "go get" and "go fmt". For example, to run a local instance of the app in the current directory, run:

$ goapp serve

To upload it to App Engine:

$ goapp deploy

You can also specify the Go import path to serve or deploy:

$ goapp serve github.com/user/myapp

You can even specify a YAML file to serve or deploy a specific module:

$ goapp deploy mymodule.yaml



These commands should replace most uses of "dev_appserver.py" and "appcfg.py", although the Python tools are still available for their less common uses.



Local unit testing

The Go App Engine SDK now supports local unit testing, using Go's native testing package and the "go test" command (provided as "goapp test" by the SDK).



Furthermore, you can now write tests that use App Engine services. The aetest package provides an appengine.Context value that delegates requests to a temporary instance of the development server.



For more information about using "goapp test" and the aetest package see the Local Unit Testing for Go documentation. Note that the aetest package is still in its early days; we hope to add more features over the coming months.



Better concurrency support

It is now possible to configure the number of concurrent requests served by each of your app's dynamic instances by setting the max_concurrent_requests option (available to Automatic Scaling modules only).



Here's an example app.yaml file:

application: maxigopher
version: 1
runtime: go
api_version: go1
automatic_scaling:
  max_concurrent_requests: 100



This configures each instance of the app to serve up to 100 requests concurrently (up from the default of 10). You can configure Go instances to serve up to a maximum of 500 concurrent requests.



This setting allows your instances to handle more simultaneous requests by taking advantage of Go's efficient handling of concurrency, which should yield better instance utilization and ultimately fewer billable instance hours.



With these changes Go on App Engine is more convenient and efficient than ever, and we hope you enjoy the improvements. Please join the google-appengine-go group to raise questions or discuss these changes with the engineering team and the rest of the community.



- Posted by Andrew Gerrand, Developer Programs Engineer

This guest post comes from Praveen Seluka, Software Engineer at Qubole, a leading provider of Hadoop-as-a-service. 



Qubole is a leading provider of Hadoop as a service with the mission of providing a simple, integrated, high-performance big data stack that businesses can use to derive actionable insights from their data sources quickly. The Qubole Data Service offers self-managed and auto-scaled Hadoop in the cloud along with an integrated library of data connectors and an easy-to-use GUI designed to help users focus on their data and transformations while enabling data teams to provide a superior service to the consumers of analysis. Now, Qubole is partnering with Google to bring its fully elastic Hadoop service to Compute Engine, with several advantages.



Auto-scaling and self-managed Hadoop

Elasticity is particularly useful for big data workloads because they are inherently bursty: a 10-node cluster may be sufficient during certain times of the day, while peak workload may require a 1,000-node cluster. With Qubole Data Service's auto-scaling abilities, this dynamic scaling up and down of clusters becomes a reality, leading to better resource utilization, so users pay only for the resources that they truly need.



Performance and reliability

By taking advantage of Compute Engine's fast spin up of virtual machines and consistent performance, Qubole Data Service brings increased data processing throughput to Hadoop workloads. A strong and performant infrastructure further amplifies the already superior performance of Apache Hadoop provided as part of the Qubole Data Service.



Fully integrated tools for Big Data

Qubole Data Service offers an integrated set of query tools, data pipeline and workflow tools and resource monitoring and management tools to enable a large number of analytic use cases. Qubole Data Service promotes the usage of data by a larger set of users in an organization by simplifying common analytics related tasks. Qubole Data Service can take advantage of the same cloud and datacenter infrastructure that powers Google’s services to handle large and ever-increasing workloads.



Below we present our findings from running Qubole Data Service and Hadoop on Compute Engine versus a leading cloud provider (referred to here as CloudX). For these performance experiments, we used the popular TPC-H dataset, generating a 75 GB dataset with the dbgen utility. The data was in delimited text format and was uploaded to both CloudX's object store and Google Cloud Storage.



We created external Hive tables against these datasets and used Hadoop's filesystem implementations to access the files in the object stores. Because Hive does not support the original form of the TPC-H queries, we ran a modified form of the queries sequentially against both clusters. The complete set of DDLs and Hive queries used is available in our public Bitbucket repository, which you can fetch with the following git command:

git clone 'https://bitbucket.org/qubole/tpch.git'



In the above graph, speedup is calculated as the ratio of execution time on CloudX to execution time on Compute Engine, so a value greater than 1 indicates that Compute Engine was faster. On average, Compute Engine was 1.21x faster than CloudX, and most queries consistently performed better on Compute Engine.
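For clarity, the speedup computation can be expressed in a few lines; the timings below are made-up placeholders, not the actual benchmark numbers:

```python
def speedup(cloudx_seconds, gce_seconds):
    """Execution time on CloudX divided by execution time on Compute Engine.

    A value greater than 1 means Compute Engine ran the query faster.
    """
    return cloudx_seconds / gce_seconds

# Hypothetical per-query timings in seconds: (CloudX, Compute Engine) pairs.
timings = [(120.0, 100.0), (90.0, 75.0), (60.0, 50.0)]

ratios = [speedup(cloudx, gce) for cloudx, gce in timings]
average = sum(ratios) / len(ratios)  # each made-up pair gives a 1.2x speedup
```

The reported 1.21x figure is the same kind of average, taken over the full modified TPC-H query set.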



In conclusion, Qubole brings Qubole Data Service to Compute Engine so that users looking for big data solutions can take advantage of Compute Engine's high-performance, reliable and scalable infrastructure alongside QDS's auto-scaling, self-managing, integrated Hadoop-as-a-Service offering, reducing the time and effort required to gain insights into their business.



Are you interested in running Hadoop on Google Compute Engine? Apply for our beta program.



Note: Hadoop is a trademark of the Apache Software Foundation



-Contributed by Praveen Seluka, Software Engineer, Qubole

Today’s guest blogger is Igor Lautar, senior director of technology at Outfit7 (Ekipa2 subsidiary), one of the fastest-growing media entertainment companies on the planet. Its flagship franchise Talking Tom and Friends has achieved over 1.2 billion downloads since its launch in 2010 and continues to grow with 170 million active users each month. In today’s post, Igor explains how the company has been successful building the backends of its entertainment apps on Google App Engine.



Outfit7 is one of the most downloaded mobile publishers in the world, most famous for creating Talking Tom, an app in which a cat named Tom responds to your touch and repeats what you say. Talking Tom and Friends are unique fully animated 3D animal characters who love to be petted and played with through an array of in-app user functions. Each with their own distinct personality, Talking Tom, Talking Angela, Talking Ginger, Talking Ben, Talking Gina and a host of lovable friends are fully-interactive and can engage in two-way conversations with users. Fans can even create and share videos of interactions with their favourite characters via Facebook and YouTube.



The popular characters started life in the digital world and now Talking Tom and Friends extend from mobile apps to chart-topping YouTube singles, animated web series, innovative merchandise and a soon to be released TV series. The company has published more than 20 apps to-date with users in every country in the world.



In order to run and maintain all of these apps, a robust backend is required to track the state of the app and virtual currency and push new content and promotions across multiple platforms, which is why we turned to Google Cloud Platform.



Why Google App Engine?

Outfit7 was founded by a group of entrepreneurs in 2009 whose mission is to bring fun and entertainment to all. Like most startups of its kind, much of Outfit7’s team were engineers who had first-hand experience with hardware and knew its limitations, specifically the dedicated resources needed to maintain it.



Thus, from the beginning, we knew the types of resources that would be required to grow the company. We started researching cloud solutions that would enable us to scale-up as we grew and have a backend that could handle any workload. After researching and interacting with a number of companies, the team decided to move forward with Google App Engine because of its low maintenance and ease of use. To put this in perspective, we looked at virtual solutions in addition to cloud; however, virtual machines would still require a dedicated IT administrator. Our developers could work directly with App Engine without IT support, which is why we went with Google.



From App Engine to Cloud Datastore and BigQuery

With App Engine, we were able to take advantage of Google’s infrastructure. As any mobile app developer knows, a scalable backend is essential from day one. Apps can go from zero to one million downloads faster than you think - which is one of the primary reasons that we went with Google. And we still run a significant amount of original code, some of it four years old, which proves the value of App Engine and allows us to focus on new features instead of maintaining old ones.



We also implemented Google Cloud Datastore alongside App Engine. It has supported our backend from the outset, through the migration from Master/Slave to the High Replication Datastore (HRD). The performance and scaling capabilities have proved excellent, and we have even seen a drop in access time as Cloud Datastore has improved over the past few years.



After building this foundation with App Engine and Cloud Datastore, we expanded our use of Cloud Platform to include Google Cloud Messaging, an Android-specific service that allows you to send data from your server to your users’ Android devices. While Talking Tom is our most popular app, we also have similar apps with different characters like Talking Angela and Talking Santa. We leveraged Cloud Messaging to increase user engagement by sending fun messages that attract users’ attention.



More recently, we started using Google BigQuery for data analytics. Some of the tables we have are quite large and growing fast, but the performance of our queries remains very consistent (i.e., seconds, not minutes). BigQuery’s scaling ability is just as impressive as the other platform tools, and we’re excited to expand our usage of the tool.



All of our apps communicate via App Engine. The state is stored in a number of datastores supported by memcache. Most of the processing is done directly by frontend instances, and some operations are delegated to task queues. We also have a few backend instances for long-running jobs or complex operations that require more memory.



Some of our apps use push notifications quite heavily. For Android, we send them directly via Cloud Messaging, whereas iOS push notifications are sent through our own forwarding service, which is still legacy.



Data is gathered from various sources, including logs, stats from stores, and download counts, processed on our own servers, and then pushed to BigQuery, which is queried directly by the data visualization tools that we use.



Our experience with Google Cloud Platform has been very positive. The benefits are obvious — consistent performance with great scalability, leaving us more time to focus on app development. We are very happy with the performance and reliability of the platform, and as our vice president of technology, Luka Renko, says, “It’s nice to have a platform that solves more problems than it creates. That’s rare!”



Google Cloud Platform has been the foundation that has enabled us to produce some of the most popular apps in the world. Throughout our relationship with Google the support team has been amazing, helping us unlock all the power of Cloud Platform. We’re excited to continue working with Google.



-Contributed by Igor Lautar, Senior Director of Technology, Outfit7 (Ekipa2 subsidiary)

Today’s guest post comes from Charlie Good, Chief Technology Officer and Co-founder of Wowza Media Systems.



We are excited to join the Google Cloud Platform Partner Program and delighted to be part of the GA release. Together, Wowza® Media Systems and Google provide a powerful, integrated streaming option for customers large and small, with the consistently high performance you expect from Google. The solution works for nearly any use case — from live sporting events or business meetings to on-demand recorded university lectures or high-production-value broadcast-quality shows. Anyone with content they want to stream through Wowza can now leverage Compute Engine’s powerful Linux-based virtual machines.



The integrated solution of Wowza Media Server® running on Google Compute Engine will meet the needs of the most demanding streaming customers, as well as those looking to get started and explore how streaming could work for them. By leveraging Wowza Media Server on Compute Engine, users can get up and running quickly while enjoying a cost-effective, flexible, easy-to-use solution enabling streaming from the cloud. We understand that for the technically savvy user, maintaining control of workflows, content and infrastructure is just as important as building dynamic and engaging streaming applications. That is why, with the help of Compute Engine, all of our APIs and customization capabilities are available on Google’s world-class infrastructure. Wowza is committed to providing our customers with both cloud and on-premises options for their media streaming workflow, and Google Compute Engine is a powerful new option.



The process is straightforward:


  • Compute Engine customers purchase a daily, monthly, or perpetual license for Wowza Media Server directly from www.wowza.com;

  • They then choose a prebuilt Wowza image within Compute Engine to run on their Compute Engine instance. With this prebuilt image, the initial Wowza installation and configuration steps are already completed, so customers can get started more quickly.




Below is a diagram to provide a high-level overview of how Wowza Media Server and Compute Engine work together, and here is the link to our web page that has more information on getting started.



At Wowza Media Systems, we’re passionate about our customers and helping them achieve their business goals with our media streaming software. Put simply, we enable our customers to reach their desired audience without worrying about the device or player technology or network connection each end user has. With this new Google Compute Engine offering, we’re providing a terrific new way to help customers get where they need to be.



To get started or to learn more please visit www.wowza.com/google-compute-engine.



-Contributed by Charlie Good, CTO and co-founder, Wowza Media Systems

With the recent announcement that Google Compute Engine is now Generally Available, we thought you might also like to know about the many popular open-source tools for interacting with Compute Engine. Now that Compute Engine support is built right into these tools, it’s that much easier for you to try it out in a familiar environment.



For programmatic access from popular programming languages, Google provides a general set of client APIs for accessing Compute Engine, as well as other Google services. However, you may have code or applications written against another language API that make migrating to Google’s client APIs impractical. In that case, you may be interested in the following:


  • Ruby: The fog.io cloud API has had support for Compute Engine since version 1.11.0 back in May. Take a look at the Compute Engine docs to get started with Compute Engine and fog. It primarily supports instance operations such as create, destroy and bootstrap.

  • Python: The Apache libcloud API project has been receiving solid support and updates for Compute Engine since July. It supports a broad set of Compute Engine features including instances, disks, networks/firewalls, and load-balancer support. The handy getting-started demo gives a good code example of how to use libcloud and Compute Engine.

  • Java: The jclouds cloud API does have Compute Engine support in “labs”. See the jclouds-labs-google repository for work being done to provide Compute Engine support and to elevate the lab into jclouds-core.




But perhaps you’re looking for a tool to automate configuration management of your Compute Engine instances. Below is a list of configuration management tools that provide that capability:


  • Puppet Labs’ Puppet: Puppet has been around since 2005 and has evolved from supporting on-premises and hosted data centers to also managing public cloud infrastructure. With the release of Puppet Enterprise 3.1 last month, Puppet’s Cloud Provisioner tool now supports Compute Engine. If you’re more comfortable with Puppet’s domain-specific-language manifest files, you can also use the gce_compute module available at the forge. Savvy Puppet users will also be pleased to see that the next version of facter will have extensive support for Compute Engine. Puppet’s roots are open source, and it continues to have a thriving open-source community.

  • Opscode’s Chef: Chef is another system that’s been around for many years with a strong open-source background and an active community. Chef is an automation platform with a modular design that has been extended to support Compute Engine through its knife-google plugin. The plugin gives you the power to create new Compute Engine instances, bootstrap them into your Chef environment, and easily manage those instances and their installed software. Chef’s ohai node attribute discovery tool has also been updated to support Compute Engine instances. Opscode provides a hosted solution that can make managing your Chef environment easier and more carefree.

  • SaltStack: Continuing in the vein of configuration management, one of the newer systems gaining popularity is Salt. One of Salt’s main design goals is to provide a highly scalable and fast data collection and execution framework for system administration. Recently, its cloud provisioner system was extended to support Compute Engine instances, and it includes documentation for getting started.

  • AnsibleWorks’ Ansible: Ansible is the newest configuration management solution on this list and it also embraces a unique design approach. Ansible does not utilize a centralized configuration server nor does it require any agents running on the managed instances. Ansible instead relies on SSH to remotely execute scripts on the managed nodes. As of 1.4, announced recently, Ansible has wide support for Compute Engine features through an inventory plugin and set of new modules.




Branching out from pure configuration management systems, there are a number of other open-source projects that support Compute Engine.


  • CoreOS: CoreOS can best be described as a very thin Linux system that provides just enough “OS” to enable the use of Linux containers. CoreOS is combined with etcd, docker, and systemd to allow you to build cluster-like infrastructure on top of standard physical and virtual machines. Thank you to the fine folks at CoreOS for building an image to support Compute Engine.

  • Docker: Compute Engine v1 has opened the door for additional operating systems, as well as applications like Docker, which require kernel customizations. Docker is an application for running Linux containers and can now be run on your Compute Engine instances. To get started quickly with docker on Compute Engine, take a look at the installation instructions to get up and running with a few simple commands.

  • Packer: Packer is one of Mitchell Hashimoto’s projects and its goal is to create machine images across multiple platforms from a single configuration. Compute Engine support is under active development and once complete will allow you to easily create a custom Compute Engine image that you can use as the basis for spinning up your Compute Engine instances. Kelsey Hightower, a primary Packer contributor, is leading the effort for Compute Engine support and if you’d like to help, you can find his work over on github.

  • Vagrant: Vagrant, another of Mitchell’s projects, is primarily a development tool for easily describing and replicating work environments. It differs a bit from a pure configuration management solution because the primary use-case is to quickly spin up a work environment, make changes, and tear it down again when you’re done. Compute Engine support is enabled by a new custom “provider” and instances can be configured with Vagrant’s “provisioner” plugins. Many thanks to Mitchell for hosting a new vagrant-google repository to house the Compute Engine support for Vagrant.




Google is committed to helping support the open-source ecosystem and we welcome your help in improving and extending the tools listed above in addition to any tools you feel should be added to the list.



-Posted by Eric Johnson, Program Manager

Today's guest post comes from Amol Kekre, CTO and co-founder of DataTorrent.



Scaling and performance are some of the most critical aspects when processing Big Data in real-time. When we started on Google Compute Engine we wanted to explore how the performance of a virtualized cloud environment would match the needs of our platform for high-throughput, Big Data computations while maintaining sub-second latency.



DataTorrent is a real-time stream analytics platform designed to support today’s most demanding, high-throughput, Big Data applications. Many of our use cases (particularly around processing machine-generated data, such as from sensors, logs, etc.) see over half a billion events processed per second.



For businesses to instantly analyze any volume of data as it comes in and respond in real-time, a solution must ensure scalability along with consistent sub-second latency – even while processing a massive volume of events.



While testing our platform, Google Compute Engine provided the necessary layer of compute and enabled us to scale linearly, without compromising latency. In one test, using identical instance configurations, 10 instances were able to process over 87 million events per second on the DataTorrent platform. When we scaled this test to 45 instances, again using the same instance configurations, we achieved over 400 million events per second. This showcases Google Cloud Platform’s excellent performance and suitability for mission-critical, high-throughput, big data applications.
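The linearity claim can be checked with the numbers quoted above: if scaling is linear, per-instance throughput should stay roughly constant as the cluster grows. A quick sketch of that arithmetic:

```python
# Per-instance throughput from the two test runs described above.
small = 87e6 / 10    # 10 instances processing 87M events/sec
large = 400e6 / 45   # 45 instances processing 400M events/sec

# Near-linear scaling: the per-instance rate barely changes
# (~8.7M vs ~8.9M events/sec per instance).
ratio = large / small
```

A ratio near 1.0 (here about 1.02) means adding instances added capacity almost perfectly proportionally, with no per-node degradation.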



In addition, gcutil allowed us to easily automate instance provisioning and configuration of our cluster. The ability to manage virtualized networking and firewall settings enabled us to set up a secure cluster, protecting our source code and data.



The flexibility and ease of use of Compute Engine enabled us to quickly launch our solution and run stability and auto-scaling tests almost immediately, which made for a pleasant deployment experience.



Get DataTorrent on Google Compute Engine today

With a focus on mission-critical, massively scalable applications as clear as Google’s, we are excited to offer Compute Engine users our platform for real-time computations at massive scale.




  • Use it free: simply download and install it on your Google Cloud Hadoop cluster.

  • Visit us to learn more and read up on our technology

  • Contact us at info@datatorrent.com




-Contributed by Amol Kekre, CTO and Co-founder, DataTorrent

Today’s guest post comes from Martin Van Ryswyk, Vice President of Engineering at DataStax.



The cloud promises many things for database users: transparent elasticity and scalability, high availability, lower cost and much more. As customers evaluate their cloud options -- from porting a legacy RDBMS to the cloud to solutions born in the cloud -- we would like to share our experience from running more than 300 customers’ live systems in a cloud-native way.



At DataStax, we drive Apache Cassandra™. Cassandra is a massively scalable, open-source NoSQL database designed from the ground up for the cloud and for serving modern online applications. Cassandra easily manages the distribution of data across multiple data centers and cloud availability zones, can add capacity to live systems without impacting your application’s availability, and provides extremely fast read/write operations.



One of the advantages of Google Compute Engine is its use of Persistent Disks. When an instance is terminated, the data is still persisted and can be re-connected to a new instance. This gives great flexibility to Cassandra users. For example, you can upgrade a node to a higher CPU/Memory limit without re-replicating the data or recover from the loss of a node without having to stream all of the data from other nodes in the cluster.
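The disk lifecycle described above can be sketched as a toy model: the persistent disk is a separate object whose data outlives any instance it was attached to. The class names, instance names, and data here are purely illustrative, not a Compute Engine API.

```python
class PersistentDisk:
    """Stand-in for a Compute Engine persistent disk: data outlives instances."""
    def __init__(self, data):
        self.data = data

class Instance:
    """Stand-in for a VM instance with an optionally attached disk."""
    def __init__(self, name, disk=None):
        self.name = name
        self.disk = disk

# A Cassandra node with its data on a persistent disk.
disk = PersistentDisk(data={"sstables": ["a", "b"]})
old_node = Instance("cassandra-1", disk)

# Terminate the instance; the disk and its data remain.
old_node = None

# Re-attach the same disk to a bigger replacement node:
# no re-replication or streaming from the rest of the ring needed.
new_node = Instance("cassandra-1-highmem", disk)
```

This is exactly the upgrade and recovery flexibility the paragraph above describes: only the instance is replaced, never the data.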



DataStax and Google engineers recently collaborated on running DataStax Enterprise (DSE) 3.2 on Google Compute Engine. The goal was to understand the performance customers can expect from Google’s Persistent Disk, for which new performance and pricing tiers were recently announced. DataStax Enterprise supports a purely cloud-native deployment and can span on-premises and cloud instances for customers wanting a hybrid solution.



Tests and results of DataStax Enterprise on Google Compute Engine

We were very interested to see how consistent latency would be on Persistent Disks, as they offer highly consistent storage with predictable and highly competitive pricing. Our tests started at the operational level and then moved on to testing the robustness of our cluster (Cassandra ring) during failure and under heavy I/O load. All tests were run by DataStax, with Google providing configuration guidance. The resulting configuration file and methodology can be found here.



The key to consistent latency in Google Compute Engine is sizing one’s cluster so that each node stays within the throughput limits. Taking that guidance with our recommended configuration, we believe the results are readily replicable and applicable to your application. We tested three scenarios, all with positive outcomes:


  1. Operational stability of 100 nodes spread across two physical zones.


    • Objective: longevity test at 6,000 records per second (60 records/sec/node) for 72 hours.

    • Results: trouble-free operation; data tests completed without issue, and replication completed with data streaming smoothly across both zones.


  2. Robustness during a reboot/failure through reconnecting Persistent Disks to an instance.


    • Objective: measure impact of terminating a node and re-connecting its disk to a new node.

    • Results: new nodes joined the Cassandra ring without having to be repaired and with no data loss (no streaming required). We did need to manage IP address changes for the new node.


  3. Push the limits of disk performance for a three-node cluster.


    • Objective: measure response under load when approaching the disk throughput limit.

    • Results: our tests showed a good distribution of latency, with 90% of I/O write times under 8ms (see the figures below depicting median latency and the latency distribution). These results were achieved while our load stayed within the published throughput (I/O) caps (see caps for thresholds).
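The cluster sizing in scenario 1 follows directly from the per-node limit: divide the target cluster rate by what each node should handle. A quick sketch using the numbers quoted above:

```python
import math

target_rate = 6000      # records/sec for the whole cluster (scenario 1)
per_node_limit = 60     # records/sec each node should stay within

# Round up so no node exceeds its throughput limit.
nodes = math.ceil(target_rate / per_node_limit)
```

This yields the 100-node cluster used in the longevity test, and the same arithmetic applies when sizing against Persistent Disk throughput caps.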



What’s next

We find Google Compute Engine and the implementation of Persistent Disks to be very promising as a platform for Cassandra. The next step in our partnership will be more extensive performance benchmarking of DataStax Enterprise. We look forward to publishing the results in a future blog post.



Figures for reference

The graph below shows median latency, a figure of merit indicating how much time it takes to satisfy a write request (in milliseconds):





The figure below depicts the distribution of write latencies (ms). As noted above, 90% of write latencies were below 8ms, indicating the consistency of performance. The tight clustering within 1-4ms speaks to the predictability of performance.





-Contributed by Martin Van Ryswyk, Vice President of Engineering, DataStax

Today's guest post comes from Jon Dahl, VP of Encoding Services at Brightcove.



Brightcove’s Zencoder transcodes millions of video and audio files each month, all in the cloud. We've worked hard to establish the Zencoder service as the cloud encoding performance leader and are constantly investigating ways to optimize the application and to architect the service around the best infrastructure available. So, we are very excited to announce the Beta availability of the Zencoder cloud encoding service running on Google Compute Engine.



The Zencoder service offers developers APIs for the fastest and most reliable live and file video encoding in the cloud. Thousands of customers, such as AOL, PBS, Khan Academy and the Wall Street Journal, have built their media workflows around our service.



Starting today, developers building applications and video workflows on the Google Cloud can use the Zencoder API to transcode video for a wide variety of Internet-connected devices. Users will be able to programmatically select a Cloud Platform region for their transcoding in addition to previously supported regions. Initially, transcoding will be limited to video-on-demand jobs, but we will expand to include live transcoding in the future.



Based on our usage and testing to date, there are a few specific things about Cloud Platform that we're most excited about, and as we scale up usage, we hope to release more metrics:



Multi-cloud

At a high level, it's fundamentally important to have a multi-cloud approach to our infrastructure. Having a diversity of cloud resources makes the service more reliable and resilient.



Fast launch times

We built the Zencoder service from the ground up to scale dynamically based on demand. Our goal is to obviate the notion of the queue, or to effectively have infinite lanes in which jobs can be slotted. When a server isn't available for a job, we have to spin one up. Compute Engine instances boot extremely fast, and the faster we spin up instances, the better the experience for our customers.



Consistent performance

It's one thing to be really fast; Compute Engine boot times are also consistently fast. For example, from our preliminary testing, we found that if you run 100 instances, each instance has the same characteristics.



Fast I/O

With a service running at the scale of Brightcove’s Zencoder, even the smallest performance advantages in underlying infrastructure help. Video encoding is fundamentally a CPU-bound process, but in aggregate, improvements in disk I/O make a difference. Video encoding jobs typically consist of a single input file going to multiple output renditions. Improvements in disk read/write time will reduce latency and decrease transcoding time.



Intelligent caching

Content providers should use Google Cloud Storage in conjunction with the Zencoder service on Compute Engine. Storing and processing content in the same cloud infrastructure ensures fast, reliable transfer. Additionally, Cloud Storage improves transfer performance by optimizing data placement and caching across its global infrastructure.



Super network

We're extremely impressed with the network's consistency and speed, which are particularly important for content ingest and egress, as well as for latency-sensitive video functions such as live streaming.



The GA release of Google Compute Engine is big news for those of us with our heads and services in the cloud. Google has released storage and compute services that raise the bar (and maybe set the standard) for performance and reliability in some key areas. We're excited to see what types of video apps developers build that take advantage of the Zencoder service and the unique characteristics of Google Cloud Platform.



-Contributed by Jon Dahl, VP of Encoding Services, Brightcove

Today's guest post is from Sebastian Stadil, CEO of Scalr. The company provides a web-based control panel for cloud infrastructure that serves as an interface between end users and the multiple cloud platforms that they use. In this post, Sebastian discusses benchmarks they conducted to analyze Compute Engine performance.



At Scalr, we build a web-based control panel for cloud infrastructure, which serves as an interface between end users and the multiple cloud platforms that they use. Engineers use Scalr to achieve significant productivity gains, and IT departments use it to drive and control cloud adoption.



One of our customers — grandcentrix — used Scalr with Google Compute Engine a few months back. They were building the backend of the companion mobile application for the Eurovision song contest using both Compute Engine and Scalr. Our experience was documented on the Google Cloud Platform blog.



No random hiccups with Google. Only high and stable performance.

In a few words: this was the first time Eurovision had a companion app, so they had no idea how much traffic they’d get. Fortunately, our load tests had shown that Compute Engine was a predictable high performer with fast provisioning times, so all we had to do was ensure that the application architecture would scale horizontally.



Eurovision was a success, and we’re looking forward to taking on such a challenge again. Why? Because we feel very comfortable using Google Compute Engine. It just doesn’t surprise you, and delivers extremely consistent performance.



We’ve recently conducted performance benchmarks on persistent volumes across multiple cloud providers. For volumes, performance is only part of the story. Stability matters a lot, too. What good is a high performing volume if it fails to perform 1/10th of the time? Not much!



Using Google Compute Engine, every single volume performs the same, every hour of every day. If their throughput is sufficient when you run your tests, you can know for sure that Google volumes won’t let you down when you need them.



By the numbers: the benchmarks

Below, you’ll find graphs that compare the performance dispersion for IOPS, bandwidth, and latency for Google Compute Engine and EC2 volumes. This is basically a measurement of how consistent disk performance has been over hundreds of 10-minute disk-performance measurements.
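One common way to quantify this kind of dispersion is the coefficient of variation (standard deviation divided by mean) across repeated measurement runs; a hedged sketch, with made-up sample values standing in for the hundreds of real 10-minute measurements:

```python
from statistics import mean, pstdev

def dispersion(samples):
    """Coefficient of variation of a set of measurements.
    Lower means more consistent performance run-to-run."""
    return pstdev(samples) / mean(samples)

# Illustrative (not measured) IOPS samples from repeated runs.
steady_volume = [1000, 1010, 990, 1005, 995]    # consistent performer
erratic_volume = [1000, 1400, 600, 1300, 700]   # same mean, wild swings

steady_cv = dispersion(steady_volume)
erratic_cv = dispersion(erratic_volume)
```

Both volumes average the same IOPS, but the first has a dispersion under 1% while the second is over 30%, which is exactly the distinction the graphs below draw between volume types.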



What you’ll see is that Google volumes offer significantly more consistent performance than their AWS counterparts, including PIOPS volumes!



Here are the graphs. Lower dispersion means more consistent performance.


Note: we’re still in the process of adding benchmarks on sequential workloads for PIOPS volumes, but they have been benchmarked for random workloads.



Are you trying out Google Compute Engine?

Google Compute Engine, EC2, and Rackspace aren’t API-compatible, so if you intend to start using Google’s Cloud for your business today, you’ll probably need to rewrite quite a few integrations.



If that’s the case, you might want to look at the Scalr Cloud Management Platform. Using Scalr lets you design infrastructure and policies once, and use them with any cloud platform.



We actually used Scalr to run the benchmarks presented here. If you’re interested, you can watch this talk from the OpenStack Summit, where we explained how we did it.



You can of course learn more about Scalr on our website, or request a POC.



-Contributed by Sebastian Stadil, CEO and Founder at Scalr

Today's guest post comes from Ivar Pruijn, Product Manager at Cloud9, a popular cloud-based IDE. In this post, Ivar discusses their work with Google Compute Engine.



Cloud9 IDE moves your entire development flow onto the cloud by offering an online development environment for web and mobile applications. With Cloud9 IDE developers write code, debug, and deploy their applications, and easily collaborate with others - all right in the cloud. We’ve worked hard to ensure that our online development environment is easy, powerful, and a thrill to use.



Google Compute Engine recently got us very excited about the possibilities for Cloud9 IDE. So much so that we built support for Compute Engine into the backend of the soon-to-be-released major update of Cloud9 IDE! We’ve seen major improvements in speed, provisioning, and the ability to automate deployment and management of our infrastructure. In this article we’ll talk about those experiences and the benefits Compute Engine offers to a complex web application like ours.



Speed

When building a hosted IDE, latency is a big concern. You want files to open instantly, to step through your code in the debugger without delays, and to interact with the terminal as if it were all running locally. Every millisecond of delay we shave off is an immediate usability improvement. Thanks to the global reach of Google’s fiber network and its huge ecosystem of peering partners, we’ve been able to achieve major speed improvements!

Let’s look at this in more detail. We’ve optimized our architecture to require just one hop between the hosted workspace and the browser running Cloud9. This intermediate layer is our virtual file system server (VFS). VFS connects to the hosted workspaces and provides a REST & WebSocket interface to the client running in the browser. Initially we expected the lowest latency when placing the VFS server close to the hosted workspaces, possibly in the same data center. Surprisingly, it turned out that placing the VFS server as close as possible to the user resulted in far lower latency. This is where Google’s advantage in networking really comes into play: a round trip from our Amsterdam office to a hosted workspace on the US East Coast is about 50% faster when connecting through a VFS server in a Google European data center!



To select the closest VFS server, we use latency-based DNS routing. This alone perceptibly improves the responsiveness of Cloud9 in general. Users who use SSH workspaces, live outside the US, or travel a lot will especially feel the difference.
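Conceptually, the routing decision reduces to picking the server with the lowest measured round-trip time. A hypothetical illustration (the server names and latencies are made up; in production this selection happens inside latency-aware DNS, not application code):

```python
# Measured round-trip times from one user's vantage point, in milliseconds.
measured_rtt_ms = {
    "vfs-europe-west": 18,
    "vfs-us-east": 95,
    "vfs-us-central": 110,
}

def closest_server(rtt_by_server):
    """Return the VFS server with the lowest measured round-trip time."""
    return min(rtt_by_server, key=rtt_by_server.get)

best = closest_server(measured_rtt_ms)
```

For an Amsterdam user, the European server wins by a wide margin, which is why placing VFS near the user matters so much.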



Fast & Easy Provisioning & Infrastructure Management

There are other things we like about Google Compute Engine. The management interface is great:

It is very responsive and offers live metrics on the running VM, such as CPU usage. Everything you can do in the web console can also be performed through Google’s REST interface or the very powerful command-line client. Throughout the UI you’ll find hints on how to perform the same action using REST or the command line. This allowed us to completely script the deployment of new servers and the management of our infrastructure.



The integrated login system is another strong point of Google Compute Engine (and of Google Cloud Platform as a whole). It allows us to assign a different set of permissions to each user in the c9.io domain. We can sign in to Google Compute Engine with our regular email/password credentials (we already use Google Apps), and a two-factor authentication policy is available for stronger security as well.



What’s next?

Our upcoming release is scheduled to go GA in Q1 of 2014. The backend of this new version supports Google Compute Engine alongside the infrastructure we already have and will allow our users to get a workspace in the region closest to them. So far we couldn't be happier: the performance of the VMs and virtual disks is great, the pricing is competitive, and Google has been really helpful and responsive. As we roll out the new version of Cloud9 IDE we’ll continue working on our Compute Engine support.



-Contributed by Ivar Pruijn, Product Manager, Cloud9 IDE

Today’s guest blog comes from Alex Pop, a cloud integration engineer at RightScale. RightScale is a Google Cloud Platform partner that enables leading enterprises to accelerate delivery of cloud-based applications while optimizing cloud usage to reduce risk and costs. With RightScale, IT organizations can deliver instant access to a portfolio of public, private, and hybrid cloud services across business units and development teams while maintaining enterprise control. Since 2007, leading enterprises including Pearson International, Intercontinental Hotels Group, and PBS have launched millions of servers through RightScale.



Fishlabs is a Hamburg-based developer and publisher of premium games for smartphones and tablets. They are building their next game, Galaxy on Fire - Alliances, with a new approach to their cloud strategy. Fishlabs’ infrastructure architects are working with Google and RightScale to ensure maximum performance and uptime. Key factors include:

  • A single reference deployment that can be redeployed across iOS and Android platforms

  • A scalable database layer that grows both vertically and horizontally with read slaves

  • Compute Engine performance and stability, particularly in a high-memory gaming scenario

  • Competitive pricing across compute and networking

  • Deep experience and knowledge of the gaming space



For this project, Fishlabs augmented their cloud strategy with the addition of Google to meet compute and DB needs and RightScale to provide consistent deployment across platforms. “With Google and RightScale, we were able to provide Fishlabs a new level of performance and cloud deployment consistency for this new title,” said Bruno Ciscato, Cloud Solutions Engineer at RightScale.



“We’ve chosen to work with the Google Compute Engine and RightScale teams to meet our DB scaling and repeatable deployment requirements across platforms. Using GCE and RightScale, the team had a fully functional deployment up and running in a week. We’re enthusiastic about measuring results and performance as we head from beta to launch,” said Uli Sesselmann, IT Supervisor at Fishlabs.



-Contributed by Alex Pop, Integration Engineer, RightScale



In addition to introducing Compute Engine in GA this week, we launched a new website for Google Cloud Platform and a new set of Cloud Platform logos:



Now, none of this changes anything for you. New logos aren’t going to help you serve more requests per second (good news: you can already top 1 million). They aren’t going to let you scale your caching capacity indefinitely or reduce your datastore costs (good news: dedicated memcache in App Engine already does that). And they aren’t going to provide you with an analytics tool that lets you query terabytes of data in seconds (that’s what BigQuery is for).



But, they do allow us to reflect on what we have been able to do over the past 5 years with Google Cloud Platform.



In April 2008, when we launched Google App Engine, we introduced the first modern Platform as a Service. In fact, the term barely existed at the time. And, given that we were launching a new product, we needed a new icon. The marketing team decided that what App Engine needed was an engine. And because it was Google, the engine should be in the shape of a ‘G’. So they did what marketers do, and made a bunch of versions of this idea:



There was only one problem: none of the people who had built App Engine actually liked the logo. It didn’t represent the next-generation technology they were building. In fact, it looked like a combination of a printing press (which Gutenberg first started tinkering with in 1436) and an internal combustion engine (conceived by Huygens in 1680 and first built by Rivaz in 1807). It didn’t reflect a product that allowed you to deploy an application with one click, scale it effortlessly to serve millions of users, and fundamentally change how web applications are developed.



So, three of App Engine’s early engineers went back to the whiteboard. Literally. Rafe Kaplan, Alon Levi and Brett Slatkin thought about what the product should look like. And it wasn’t a printing press. And it wasn’t an internal combustion engine. And it wasn’t shaped like a G. These are some of the concepts that Brett sketched on the whiteboard in Building 44 at the Googleplex in Mountain View:




The team was much happier with these; they felt inspiring. The whiteboard sketches were sent to a graphic designer, who came back with three concepts for a final logo:



Well, no one really loved these either. In the words of one member of the App Engine team, “A looks like a fan, B looks like a washing machine, and C looks like a washing machine with fins.” So, to cut a long story short, they did some more revisions, and eventually Google designer Micheal Lopez landed on a logo that everyone (well, most people) loved:



It evoked both the power and the simplicity of App Engine. To Rafe Kaplan, the new icon looked like a shark, so he called it ‘Sharkon’ - a name that quickly spread among the team.



And, for the past 5 years, Sharkon has been the face of App Engine. It’s found its way into many forms - whether made into a plush doll, knit into yarn, painted in acrylics by artist Nan Washare, rendered in a single brushstroke of calligraphy, poured into latte art, or made into a punch-card reader when we announced, on April Fools’ Day 2009, that App Engine would be supporting Fortran:





Over these 5 years, a lot else has happened. App Engine has grown to support new runtimes, including Python, Java, Go and PHP. And, we’ve introduced a host of other cloud computing products, including Compute Engine, Cloud Storage, BigQuery and others. Together, this family makes it easy for you to take advantage of the scale, speed, and consistency of Google’s infrastructure. And, these services work great together - so that you can truly take advantage of an integrated Cloud Platform.



The new App Engine logo is designed to fit in alongside the rest of the Cloud Platform family, while still paying homage to Sharkon. These new logos represent the toolkit in your garage. They’re the nuts and bolts with which you can create just about anything. So, pick up any one of them (or all of them) and start building.



Oh, and watch this space. We’ve got more exciting announcements coming up - the kind that go deeper than a logo refresh.



-Posted by Benjamin Bechtolsheim, Product Marketing Manager