Today’s guest blog comes from Steve James, CTO of GenieConnect, a digital solutions provider for the events industry with products ranging from mobile apps to attendee Web portals and meetings management.



In the events business, activity shifts between platforms over the course of an event: visitors tend to rely on the web for pre-planning before an event, whereas mobile app activity spikes during the event itself. As a digital solutions provider for major events, like Mobile World Congress in Barcelona, we need to be able to support a large number of users in concentrated periods of time, even when connectivity is weak.



Google Cloud Platform gave us the infrastructure we needed to hit the ground running. We use almost every component of Cloud Platform, including Google BigQuery, Google App Engine, Google Cloud Datastore, Google Cloud Storage, Google Compute Engine and Google Cloud SQL. If we weren't running Cloud Platform, we would have had to spend significant resources and several months building out comparable tools before releasing our platform.



BigQuery forms the heart of our analytics piece and lets us run queries against multi-terabyte datasets in mere seconds, which is key during huge events with short windows of time. At Mobile World Congress, where we can get up to 75,000 people using our app every day, we’re collecting tens of millions of data points for our clients. With BigQuery, our clients can use a simple GUI to make queries and get results right away. For example, we can find correlations between attendee activity and click-through data. This helps us immediately understand attendee behavior and preferences, and proactively point attendees to other parts of the event, instead of just reacting to what the user is doing in the app.
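
For a flavor of what such a query looks like programmatically, here is a minimal sketch using the BigQuery v2 API Python client; the project, dataset, table, and column names are all hypothetical rather than our production schema:

# Hedged sketch: run an ad-hoc aggregation over attendee click data and
# print the top sessions. All identifiers below are made up for illustration.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
bigquery = discovery.build('bigquery', 'v2', credentials=credentials)

query = """
SELECT session_id, COUNT(click_id) AS clicks
FROM [events_data.attendee_activity]
GROUP BY session_id
ORDER BY clicks DESC
LIMIT 10
"""

response = bigquery.jobs().query(
    projectId='my-events-project',  # hypothetical project ID
    body={'query': query}).execute()

# Each result row is a list of fields; 'v' holds the value.
for row in response.get('rows', []):
    print('%s %s' % (row['f'][0]['v'], row['f'][1]['v']))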



Cloud Platform lets us operate the way we like to: lean and cost-effective. We’re made up of small teams of engineers working on the platform at the same time. App Engine Modules allow us to partition our application into components based on different requirements for scale, service level, traffic, etc. This way, every team can tune the performance of their environment to its own needs and ultimately run more cost-effectively. We can also integrate modules easily if we want to.



Autoscaling is also critical to our business. While some capacity planning is possible, there are challenges in anticipating attendance as well as provisioning resources to meet peak demand. Cloud Platform expands and shrinks our server resources automatically, letting us avoid the risk of not being able to handle user load during an event. Resiliency is also built into the cloud, which is critical for event organizers since attendees still need to be able to access information even if there is a server outage.



Cloud Platform has allowed us to bypass the months we would otherwise spend procuring and setting up new hardware, tracking user load and performance, and managing our infrastructure. As an SME, we’re happy to let Google take on the responsibility of maintaining our infrastructure so we can focus on enhancing our platform.



-Contributed by Steve James, CTO, GenieConnect

SaltStack makes software to automate cloud factories, Internet assembly lines and Web-scale IT. It is used by thousands of systems administrators, engineers, developers, and data center operators to provide automated control of infrastructure, configuration management, and cloud orchestration. Initially developed to be an extremely fast and scalable systems management platform, Salt is now one of the top five largest and most vibrant open source projects in the world according to GitHub.



It is our inherent bias toward speed and scale that gets us really excited about the capabilities of Google Compute Engine. We think the combination of SaltStack and Google Compute Engine can help our customers get their big ideas into production faster while creating a massively scalable platform for future growth, or we can help migrate and manage existing large-scale projects.



SaltStack provides simple and direct integration with Google Compute Engine, with native support and no extraneous dependencies. We have been able to take advantage of the full power of the Google Compute Engine API, with the ability, for example, to modify firewalls and load balancers. It is quick and easy to create a SaltStack-managed Google Compute Engine environment.



[Diagram: Salt Cloud architecture]

Here is an example of how to use Salt Cloud to get a Google Compute Engine environment of any size effectively “Salted” in about a minute and a half:

# Note: This example is for /etc/salt/cloud.providers.d/gce.conf

gce-config:
  # Set up the Project name and Service Account authorization
  project: "your_project_name"
  service_account_email_address: "123-a5gt@developer.gserviceaccount.com"
  service_account_private_key: "/path/to/your/NEW.pem"

  # Set up the location of the salt master
  minion:
    master: saltmaster.example.com

  # Set up grains information, which will be common for all nodes
  # using this provider
  grains:
    node_type: broker
    release: 1.0.1

  provider: gce

Once SaltStack is controlling a Compute Engine environment, it is time to use SaltStack to deploy and control virtual machines in milliseconds while managing application configurations and doing things like continuous code integration and deployment. SaltStack Cloud States can be used to map the Compute Engine environment and apply configuration management and orchestration using the Salt state system for scalable orchestration between nodes. For example, one Salt minion can apply a Salt Highstate to another minion.
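
As a hedged illustration of that kind of orchestration, the Salt master can also drive Salted minions from Python via Salt's LocalClient API; the targets and grain values below are hypothetical:

# Hedged sketch: drive Salted Compute Engine minions from the Salt master
# using Salt's Python API. Targets and grain values are hypothetical.
import salt.client

local = salt.client.LocalClient()

# Run a command in parallel across all broker nodes, targeted by the
# 'node_type' grain set in the cloud provider config above.
results = local.cmd('node_type:broker', 'cmd.run', ['uptime'],
                    expr_form='grain')
for minion, output in results.items():
    print('%s: %s' % (minion, output))

# Apply the full highstate to the web tier, e.g. after a code push.
local.cmd('web*', 'state.highstate')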



In addition to SaltStack Cloud States, SaltStack provides a number of other options for creating virtual machines. They can be managed directly from profiles and the command line, or a more complex SaltStack Cloud Map can be created to control virtual machines in Google Compute Engine. The map file allows for a number of virtual machines to be created and associated with specific profiles.



SaltStack was created to provide extremely fast and malleable remote execution capabilities. Salt moves large volumes of information and controls not just dozens but thousands of individual servers in near real time, across data centers and cloud environments around the world.



The SaltStack topology is a simple server/client model with all needed functionality built into a single set of daemons. Once a Compute Engine environment is “Salted,” the core functions of SaltStack can be used to control it via:

- remote system commands called in parallel rather than serially;

- a secure and encrypted protocol;

- the smallest and fastest network payloads possible;

- a simple programming interface;

- targeting by hostname and system properties.



The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. While the default configuration will work with little to no modification, Salt can be fine-tuned to meet specific needs. SaltStack is as versatile as it is practical, making it a natural fit for Google Compute Engine.



To see the combination of Google Compute Engine and SaltStack in action, watch this talk from SaltConf14 by Eric Johnson, Google Technical Program Manager:







-Posted by Joseph Hall, Senior Software Engineer, SaltStack

Google Cloud SQL is a fully managed MySQL service hosted on Google Cloud Platform, providing a database backbone for applications running on Google App Engine or Google Compute Engine. Today, we are launching two nice new features that give you more visibility and control over your Cloud SQL databases: point-in-time recovery and custom MySQL flags.



Point-in-time recovery allows you to recover an instance to a specific point in time. For example, if an operator ‘fat finger’ error causes a loss of data, you can recover a database to the state it was in just before the error occurred. It’s also great for testing your application and diagnosing issues, since you can clone your live data to a testing database. See the point-in-time-recovery docs for more information.
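
As a rough sketch of how this can be automated, the SQL Admin API exposes a clone operation that accepts binary-log coordinates. This example assumes the v1beta4 API and the google-api-python-client; the instance names and binlog coordinates are placeholders, so check the docs for the exact request shape:

# Hedged sketch: recover to a point in time by cloning an instance up to a
# specific binary-log position (v1beta4 SQL Admin API assumed).
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
sqladmin = discovery.build('sqladmin', 'v1beta4', credentials=credentials)

body = {
    'cloneContext': {
        'destinationInstanceName': 'mydb-recovered',
        'binLogCoordinates': {
            # Stop replaying the binary log just before the bad write.
            'binLogFileName': 'mysql-bin.000023',
            'binLogPosition': 1234567,
        },
    },
}
operation = sqladmin.instances().clone(
    project='my-project', instance='mydb', body=body).execute()
print(operation['status'])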



Custom MySQL flags allow you to configure and tune your database to support particular applications and improve performance. For example, some applications require certain MySQL settings that are now supported by Cloud SQL, such as maximum packet sizes. You can also use custom flags to enable the MySQL slow query log to help spot performance issues, or put the database into read-only mode. There’s a full list of the flags you can set here.
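
Flags can likewise be set through the API rather than the console. Continuing the hedged v1beta4 sketch above (again, treat the exact request shape as an assumption and verify it against the docs):

# Hedged sketch: enable the MySQL slow query log via custom flags.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

sqladmin = discovery.build(
    'sqladmin', 'v1beta4',
    credentials=GoogleCredentials.get_application_default())

body = {
    'settings': {
        'databaseFlags': [
            {'name': 'slow_query_log', 'value': 'on'},
            {'name': 'long_query_time', 'value': '2'},  # seconds
        ],
    },
}
sqladmin.instances().patch(
    project='my-project', instance='mydb', body=body).execute()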



Hosting your data in Google Cloud SQL gives you convenience and peace of mind, and now you get more control, too. Get started with Cloud SQL today.



-Posted by Joe Faith, Product Manager

Today's guest blog post comes from Hadas Birin, Director of Product Management at Ravello.



Self-service tools are critical for developers to be agile and move forward quickly, prototype new ideas and test implementations. Google Compute Engine provides just that for single VMs – you can quickly spin up a VM on demand and use it. Ravello, paired with Compute Engine, lets you do the same for complex multi-VM environments without any networking or OS constraints.



Developers and test engineers can spin up multiple copies of production-like complex environments on demand in Compute Engine for development and test. An environment may include many VMs (Windows, Linux, Solaris, BSD…) and a complex network topology (static IPs, multiple subnets, firewalls, load balancers…).



Each instance of the same environment has a fully fenced network that uses the same IP addresses inside, and only differs by the DNS names that allow external access into the VMs. The network, storage and multi-VM encapsulation within each environment allows for one-click duplication of environments.


Here are a few common use cases where developers use Ravello:



Accelerating backend dev and test:




  • Every code commit can be tested instantly on a replica of the production environment as part of the Continuous Integration process (read more).

  • Multiple identical copies of the production environment can be spun up on Compute Engine to allow for parallel automatic/manual testing, resulting in a faster testing cycle. Environments can be shut down automatically when testing ends or left running for the developer to investigate in case a test has failed.

  • Developers can easily deploy into their own environment on Compute Engine from IDE/shell using Maven or REST API (with Ruby/Python bindings).


Accelerating mobile / web client dev and test:




  • Developers can run Android on Compute Engine to test their native and web mobile applications, do continuous integration and run scale tests of thousands of Android clients (read more).

  • Developers can test different browser versions including Chrome, Firefox, Safari - all in parallel using Selenium GRID with Selenium WebDriver - and save tremendously relative to specialized browser testing services (read more).




Accelerating dev and test for existing complex applications from private data centers without any modifications:




  • Existing VMs (say from VMware) can be uploaded onto Ravello and run on Compute Engine. Ravello provides the VMs with the same network and storage setup they expect to get in the private data center. No change to the VMs is required (read more).

  • Developers can utilize Compute Engine to develop and test on existing applications without the need to restructure them for public cloud.




The technology

Ravello was created by the team behind KVM (Kernel-based Virtual Machine), the Linux kernel virtualization infrastructure. Compute Engine itself is built upon the KVM platform and uses it as a high-performance hypervisor to run VMs.



Ravello has developed a high performance nested virtualization technology in order to abstract network, storage, and different cloud hypervisors. This abstraction enables existing virtual machines to be run on third-party hypervisors and on other clouds without any modifications.

The technology includes the following components:




  • High-performance nested hypervisor – uses dynamic binary translation to run guest VMs unmodified on top of any cloud hypervisor.

  • Software-defined network – provides a virtual fully fenced network that may include static IPs, multiple subnets, routers, switches, DHCP, DNS, firewalls and L2/L3 network/security appliances added by the user.

  • Storage overlay – manages a distributed object store and connects it to the cloud provider’s local block storage.

  • Management layer - provides the UI and API to create and manage multi VM environments and publish them to various cloud providers.




Getting started

You can check Ravello out on your own -- here’s what you need to do to get started:




  • Register and activate your account (it’s free for two weeks).

  • Export your existing VMs from VMware, and upload them into your account. Alternatively, use one of our publicly available images and add some Chef recipes to set it up.

  • Create a new ‘application’ – this would be your fully fenced environment. Then, drag and drop selected VMs from the library into the application.

  • Select each VM in the application, update its IP and mask, and choose which ports will be open for external access on this VM.

  • Publish the application to Compute Engine. After a few minutes, log in and play with your application.

  • Save this application as a ‘blueprint’ (a snapshot of the entire environment, including all VMs and the network definitions). Using this blueprint, you may spin up identical copies of this whole environment with one click.




The combination of Ravello and Compute Engine provides developers and test engineers with the freedom to spin up complex environments on demand and accelerate development and test efforts.

For a live demo of the use cases mentioned above, join our technical power session on April 29th with Google Cloud Platform.



-Posted by Hadas Birin, Director of Product Management at Ravello

Today’s guest blog comes from Dale Thoms, co-founder and CTO of Backflip Studios, the mobile game development studio. The company is behind popular titles such as DragonVale, Paper Toss and Ninjump, and has grown from three to over 100 employees in the past five years.



In the fast-changing mobile games market, speed of development is critical. We need to launch games quickly, but more importantly, we need to release frequent updates to existing games so that players keep coming back. In 2009, when we started the company, most mobile games had little or no server infrastructure behind them. But over time, games have grown to include frequent content updates, cross-device play, community events, player communication via ads, push notifications and sophisticated data analysis. Now it is crucial to have a server infrastructure that can handle all of that and more.



Google Cloud Platform gives us the peace of mind that comes from not having to worry about setting up and managing servers, or having a dedicated server engineer to ensure systems never go down. We wish downtime and latency issues didn't exist, but when they do occur, it's comforting to know Google will take care of them. We started by building our games’ server components on Google App Engine, but now our code uses other elements of Cloud Platform, namely Google BigQuery and Google Cloud Storage.



Autoscaling is critical to our business because we can't predict when our games will be featured on an app store or review site, and we wind up with a giant influx of new users. What we can predict is that whenever we push a new update to a game like DragonVale, users come back to the game in droves, doubling or even tripling normal traffic volume over the span of a few minutes. With App Engine in the background, we’ve been able to scale smoothly to meet every spike in demand. Best of all, we only pay for the capacity our application uses.



We’re also very data-driven and frequently do analysis against data we’ve collected. Our games live in App Engine and Datastore, where a player’s game state (player level, dragons owned, placement of items on islands, etc.) is stored in a format optimized for use by our game engine. In order for our marketing and analytics teams to make use of the data, we need to pull it into a system that they can run queries against.



Initially, we were pulling data into an in-house SQL database, but when BigQuery became available, we switched, as it imported data and ran queries many times faster than the previous SQL database. We now pull data out of Datastore and transform it via MapReduce from the game-engine-optimized format into a more traditional database form. The analytics team can then run queries in BigQuery to analyze what players are doing. With these insights, we can figure out where players are struggling and what needs improvement -- what new features and content to offer, how to better retain players, etc. -- to keep them coming back.



We evaluated other vendors’ cloud-based solutions, but we would have had to build additional services to get all the functionality we needed. In comparison, Cloud Platform took on a lot of the burden, freeing up our developers to focus on actual game development, and the ramp-up on App Engine was fast thanks to its simple architecture. Our engineering department was pleasantly surprised that all they needed to do was write the application logic; all the database, server and scaling components were taken care of for them.



The initial team creating DragonVale was fairly small, and we had only one developer building most of the server backend for the game. Yet, it took only six months from start to finish. If we didn't have Cloud Platform, we would have likely needed a larger team to work on the database and other components, which would have extended the development cycle.



Each game rollout is easier and faster than the previous one as we get better on Cloud Platform. We have several new games coming out this summer, all of which utilize Cloud Platform. We couldn’t move at anything close to this pace without the services provided by Cloud Platform, and thanks to Google, we can make games faster and sleep better at night knowing our infrastructure is in good hands.



-Contributed by Dale Thoms, co-founder and CTO, Backflip Studios

Today's guest blog post comes from Ryan Coleman, Puppet Forge Product Owner at Puppet Labs. This is the second in a series of guest blog posts following publication of Compute Engine Management with Puppet, Chef, Salt, and Ansible.



At Puppet Labs, we're all about enabling people to enact change quickly, predictably and consistently. It's at the core of everything we do, and one of the reasons we moved our Puppet Forge service to Google Compute Engine. Google Compute Engine immediately halved our service's response time on comparable instances and offers a lot of flexibility in how we deploy and manage our instances. Much of that flexibility comes from their gcutil command-line utility and their REST API.



As of Puppet Enterprise 3.1, we use these tools to provide native support for Google Compute Engine. The gce_compute module, available on the Puppet Forge, provides everything you need to manage compute instances, disk storage, networks and load balancers in Google Compute Engine with Puppet's declarative DSL. In this post, we'll run through a few examples of what you can do with it.



Here's a really simple example of what an instance looks like in Puppet's language, along with a local application of Puppet to manage the instance. Puppet is easy to install, so it's easy to follow along and create your own running Compute Engine instance. Simply save each example to a file, prepare gcutil by running `gcloud auth login`, and then run `puppet apply` against the example file.



# gce.pp
gce_instance { 'ryan-compute':
  ensure       => present,
  machine_type => 'n1-standard-1',
  zone         => 'us-central1-a',
  network      => 'default',
  image        => 'projects/centos-cloud/global/images/centos-6-v20131120',
}

ryan:gce ryan$ puppet apply gce.pp
Notice: /Stage[main]//Gce_instance[ryan-compute]/ensure: created





With this simple example, I have described the instance I want in Compute Engine. I can share it with my co-workers, who can treat it as documentation or use it in their own Google Cloud project to get an instance built just like mine. This concept becomes more useful the more complex your infrastructure is.



Here's an example much closer to the real world. It expresses two instances configured by Puppet to be proof-of-concept http servers complete with a Compute Engine load balancer and health checks.



gce_instance { ['web1', 'web2']:
  ensure       => present,
  description  => 'web server',
  machine_type => 'n1-standard-1',
  zone         => 'us-central1-a',
  network      => 'default',
  image        => 'projects/centos-cloud/global/images/centos-6-v20131120',
  tags         => ['web'],
  modules      => ['puppetlabs-apache', 'puppetlabs-stdlib',
                   'puppetlabs-concat', 'puppetlabs-firewall'],
  manifest     => 'include apache
    firewall { "100 allow http access on host":
      port   => 80,
      proto  => tcp,
      action => accept,
    }',
}

gce_firewall { 'allow-http':
  ensure      => present,
  network     => 'default',
  description => 'allows incoming HTTP connections',
  allowed     => 'tcp:80',
}

gce_httphealthcheck { 'basic-http':
  ensure      => present,
  require     => Gce_instance['web1', 'web2'],
  description => 'basic http health check',
}

gce_targetpool { 'web-pool':
  ensure        => present,
  require       => Gce_httphealthcheck['basic-http'],
  health_checks => 'basic-http',
  instances     => 'us-central1-a/web1,us-central1-b/web2',
  region        => 'us-central1',
}

gce_forwardingrule { 'web-lb':
  ensure      => present,
  description => 'Forward HTTP to web instances',
  port_range  => '80',
  region      => 'us-central1',
  target      => 'web-pool',
  require     => Gce_targetpool['web-pool'],
}





With Puppet Enterprise and Google Compute Engine, it becomes fairly simple to build and continuously manage complex services from the storage/network/compute resources in Google Compute Engine through operating system configuration and application management. Another cool feature is the relationship graph that Puppet automatically generates from the requirements you express. You can use this as a tool to communicate with your team on how your compute instances relate to each other or to express the dependencies in your application.



These examples demonstrate how to apply Puppet configuration directly in the gce_instance resource, but it's more practical in production to manage the configuration of your entire infrastructure through a Puppet Enterprise master and its agents. If you want to run yours in Compute Engine or just try it out, the gce_compute module makes it simple to bring up a fully-functional Puppet Enterprise Master and Console.



gce_instance { 'puppet-enterprise-master':
  ensure        => present,
  description   => 'An evaluation Puppet Enterprise Master and Console',
  machine_type  => 'n1-standard-1',
  zone          => 'us-central1-a',
  network       => 'default',
  image         => 'projects/centos-cloud/global/images/centos-6-v20131120',
  tags          => ['puppet', 'master'],
  startupscript => 'puppet-enterprise.sh',
  metadata      => {
    'pe_role'         => 'master',
    'pe_version'      => '3.2.0',
    'pe_consoleadmin' => 'admin@example.com',
    'pe_consolepwd'   => 'puppetize',
  },
  block_for_startup_script => true,
}

gce_instance { 'agent1':
  ensure        => present,
  zone          => 'us-central1-a',
  machine_type  => 'f1-micro',
  network       => 'default',
  image         => 'projects/centos-cloud/global/images/centos-6-v20131120',
  startupscript => 'puppet-enterprise.sh',
  metadata      => {
    'pe_role'    => 'agent',
    'pe_master'  => 'puppet-enterprise-master',
    'pe_version' => '3.2.0',
  },
  tags          => ['puppet', 'agent'],
  require       => Gce_instance['puppet-enterprise-master'],
}



This example will bring up a single master and agent, in sequence. The Puppet Master installation process may take a few minutes. When it's finished, you can browse over HTTPS to its external IP address and log in to the Puppet Enterprise Console. Once you have Puppet Enterprise installed, you also have access to our `node_gce` cloud provisioner, offering another way to manage Google Compute instances with Puppet.



From base compute, storage and networking all the way up to a consistently managed application serving your customers, Google Compute Engine and Puppet Enterprise offer a readable, reusable and shareable definition of how your cloud infrastructure is built and interrelated.



Learn More








-Contributed by Ryan Coleman, Puppet Forge Product Owner

Our guest post today comes from Olivier Devaux, co-founder of feedly, a reading app founded in 2008 in Palo Alto. feedly offers a free version as well as a Pro version that includes power search and integrations with other popular applications, including Evernote, LinkedIn and Hootsuite.



With over 15 million users, feedly is one of the most popular apps for purposeful reading in the world. People can tailor their feedly accounts to serve up their favorite collection of blogs, web sites, magazines, journals and more. Our goal is to deliver to readers the content that matters to them. Over the past year, we have focused on making feedly the reading app of choice for professionals.



For our first few years, we had around four million users, and we hosted all of the content we aggregated on our own servers. We ran a small instance of Google App Engine to extract picture URLs within articles.



In the middle of last year, our servers were overwhelmed with hundreds of thousands of new signups, and we experienced our first service outage. The first thing we did was move all of our static content to App Engine. Within an hour we were up and running again with 10 times the capacity we had before. This turned out to be a good thing – we added millions more users over the next few months and more than doubled in size.



It’s been almost a year since that day, and we’ve greatly expanded our service with Google Cloud Platform. We now use App Engine as a dynamic content delivery network (CDN) for all static content in feedly, as well as to serve formatted images displayed in the app or desktop.



A fast response time is even more important on mobile, and App Engine helps us load images immediately so that there’s no lag when users scroll through their feeds. As a feedly user scrolls through content, the app sends App Engine information in the background about what articles are coming next. App Engine then fetches images from the article page on the Web, determines the best image, stores it in Cloud Storage and receives a serving URL from the Image service. For users, this leads to a seamless scrolling experience.



To optimize the feedly user experience, we make heavy use of the Memcache API, App Engine Modules, and the Task Queue API. The combined result of these services allows us to cut our response time for user requests in the app down to milliseconds.
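
A condensed, hypothetical sketch of how these pieces fit together, using 2014-era App Engine Python APIs (the bucket and handler names are invented, and our production code differs):

# Hedged sketch of the image pipeline: fetch an article's best image,
# store it in Cloud Storage, mint a serving URL via the Images API, and
# cache the result in memcache. Bucket and handler names are invented.
import cloudstorage as gcs
from google.appengine.api import images, memcache, taskqueue, urlfetch

BUCKET = '/feedly-images'  # hypothetical GCS bucket

def prefetch(upcoming):
    # As the user scrolls, enqueue background work for upcoming articles.
    for article_url, image_url in upcoming:
        taskqueue.add(url='/worker/prefetch',  # hypothetical handler
                      params={'article': article_url, 'image': image_url})

def serving_url_for(article_url, image_url):
    # Serve straight from memcache once an article has been processed.
    url = memcache.get(article_url)
    if url:
        return url

    # Fetch the chosen image and stash the bytes in Cloud Storage.
    data = urlfetch.fetch(image_url).content
    filename = '%s/%d' % (BUCKET, hash(article_url))
    with gcs.open(filename, 'w', content_type='image/jpeg') as f:
        f.write(data)

    # Ask the Images service for a resizable serving URL, then cache it.
    url = images.get_serving_url(None, filename='/gs' + filename)
    memcache.set(article_url, url)
    return url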



As an engineer, one of my favorite things about App Engine is that it generates detailed usage reports so we can see the exact cost of our code, like CPU usage or the amount we’ve spent to date, and continue to optimize our performance.



We learned the hard way what happens when you don’t prepare for the unexpected. But this turned out to be a blessing in disguise, because it prompted us to move to Cloud Platform, and expand and improve our service. App Engine has taken pressure off our small team and allowed us to focus on building the best reading experience for our users. With Google’s infrastructure on the backend, today we only need to worry about pushing code.



- Posted by Olivier Devaux, co-founder of feedly

Today, we are making it easier for you to run Hadoop jobs directly against your data in Google BigQuery and Google Cloud Datastore with the Preview release of Google BigQuery connector and Google Cloud Datastore connector for Hadoop. The Google BigQuery and Google Cloud Datastore connectors implement Hadoop’s InputFormat and OutputFormat interfaces for accessing data. These two connectors complement the existing Google Cloud Storage connector for Hadoop, which implements the Hadoop Distributed File System interface for accessing data in Google Cloud Storage.



The connectors can be automatically installed and configured when deploying your Hadoop cluster using bdutil simply by including the extra “env” files:


  • ./bdutil deploy bigquery_env.sh

  • ./bdutil deploy datastore_env.sh

  • ./bdutil deploy bigquery_env.sh datastore_env.sh







Diagram of Hadoop on Google Cloud Platform





These three connectors allow you to directly access data stored in Google Cloud Platform’s storage services from Hadoop and other Big Data open source software that uses Hadoop's IO abstractions. As a result, your valuable data is available simultaneously to multiple Big Data clusters and other services, without duplication. This should dramatically simplify the operational model for your Big Data processing on Google Cloud Platform.



Here are some word-count MapReduce code samples to get you started:
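
As one minimal flavor of such a sample, a Hadoop Streaming word count can read and write gs:// paths directly once the Cloud Storage connector is installed on the cluster; the bucket and file names below are hypothetical:

# Hedged sketch: a word count runnable under Hadoop Streaming, with input
# and output living in Cloud Storage via the connector. Invoke with
# something like (bucket and jar paths are hypothetical):
#   hadoop jar hadoop-streaming.jar \
#     -input gs://my-bucket/input -output gs://my-bucket/output \
#     -mapper 'wordcount.py map' -reducer 'wordcount.py reduce' \
#     -file wordcount.py
import sys

def do_map():
    # Emit one tab-separated (word, 1) pair per word.
    for line in sys.stdin:
        for word in line.split():
            print('%s\t1' % word)

def do_reduce():
    # Streaming sorts by key, so counts for each word arrive contiguously.
    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip('\n').partition('\t')
        if word != current:
            if current is not None:
                print('%s\t%d' % (current, count))
            current, count = word, 0
        count += int(n)
    if current is not None:
        print('%s\t%d' % (current, count))

if __name__ == '__main__':
    (do_map if sys.argv[1:] == ['map'] else do_reduce)()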




As always, we would love to hear your feedback and ideas on improving these connectors and making Hadoop run better on Google Cloud Platform.



-Posted by Pratul Dublish, Product Manager

Today, we are announcing the release of App Engine 1.9.3.



This release offers stability and scalability improvements, themes that we will continue to build on with the next few releases. We know that you rely on App Engine for critical applications, and with the significant growth we’ve experienced over the past couple years we wanted to take a step back and spend a few release cycles with a laser focus on the core functionality that impacts your service and end users. As a result, new features and functionality may take a back seat to these improvements. That said, we fully expect to continue making progress with existing services, including Dedicated Memcache.



Dedicated Memcache

Today we are pleased to announce the General Availability of our dedicated memcache service in the European Union. Dedicated Memcache lets you provision additional, isolated memcache capacity for your application. For more details about this service, see our recent announcement.



Our goal is to make sure that App Engine is the best place to grow your application and business rapidly. As always, you can find the latest SDK on our release page along with detailed release notes, and you can share questions and comments with us on Stack Overflow.




When Applibot needed a flexible computing architecture to help them grow in the competitive mobile gaming market in Japan, they turned to Google Cloud Platform. When Tagtoo, an online content tagging startup, needed to tap into the power of analytics to better serve digital ads to customers in Taiwan, they turned to Google Cloud Platform. In fact, companies all over the world are turning to Cloud Platform to create great apps and build successful businesses.






Now, more developers in Asia Pacific can experience the speed and scale of Google’s infrastructure with the expansion of support for Cloud Platform. Today we switched on support for Compute Engine zones in Asia Pacific, and we have also deployed Cloud Storage and Cloud SQL in the region.






This region comes with our latest Cloud technology, including Andromeda - the codename for Google’s network virtualization stack - which provides blazing-fast networking performance, as well as transparent maintenance with live migration and automatic restart for Compute Engine.






In addition to local product availability, the Google Cloud Platform website and the developer console will also be available in Japanese and Traditional Chinese. These websites have updated use cases, documentation and all sorts of goodies and tools to help local developers get started with Google Cloud Platform. Developers interested in learning more about Google Cloud Platform can join one of the Google Cloud Platform Global Roadshow events coming up in Tokyo, Taipei, Seoul or Hong Kong.






The launch of Cloud Platform support in Asia Pacific is in line with our increasing investment in the region and our commitment to developers around the world. To all our customers in the region, we would like to say “THANK YOU / 謝謝 / ありがとう ” for your support of Google Cloud Platform.



-Posted by Howard Wu, Head of Asia Pacific Marketing and Ken Sim, Product Manager

Our friends at Google recently published a comprehensive overview of how to manage Google Compute Engine infrastructure via the various automation platforms available. The Compute Engine team invited us to add our perspective on this topic and what follows here is a look at why we love Compute Engine, how our customers are succeeding with Chef+Compute Engine, and technical details on automating Compute Engine resources with Chef.



Chef is betting on Compute Engine

You’ve often heard us reference the ‘coded business’. In short, we propose that technology has become the primary touch point for customers. Demand is relentless. And the only way to win the race to market is by automating delivery of IT infrastructure and software.



This macro shift began in part because of Google’s success in leveraging large-scale compute to rapidly deliver goods and services to market. And when we say ‘large-scale’, there aren’t many, if any, businesses with more compute resources, expertise, and experience than Google.



So it makes a ton of sense that Google would pivot their massive compute infrastructure into an ultra-scalable cloud service. Obviously they know what they’re doing and now everyone from startups to enterprises can tap into Google’s compute mastery for themselves.



Working with the Compute Engine team fits perfectly into not only our view of how the IT industry, and business itself, is changing, but also what our customers want. Choice. Speed (lots and lots of speed). Scale. Flexibility. Reliability.



Why customers love using Chef and Google Compute Engine



Cloud-based delivery



Like the Google Cloud Platform, Chef offers customers all the benefits of cloud-based delivery. New users get instant access to a powerful Enterprise Chef server hosted in the cloud; no credit card is required, and you can manage up to five instances for free.



When you want to use Chef to manage larger numbers of nodes, you add this capability on a simple, pay-as-you-go basis. Customers can get started using Chef to configure Compute Engine in minutes, start to finish. Ian Meyer, Technical Ops Manager at AdMeld (now part of Google), praises the SaaS delivery model of Hosted Chef:



“Prior to deploying Hosted Chef,” said Meyer, “we did everything manually. It generally took me a couple of weeks to get access to the servers I needed and at least a day to add a new developer. With Chef, I can now add a couple of developers within 20 minutes. Additionally, when we set up a new ad serving system with data bags, the set-up time goes from two to three days to an hour. This is simply one of those tools that you need regardless of what your environment is.”



Speed & Scale

Just as customers are choosing Compute Engine for its speed, our customers appreciate how Chef’s execution model pushes the heavy lifting to the Chef client(s) rather than compiling configuration instructions on the server. Chef stands well above the field with a single Chef server handling 10,000 nodes at the default 30-minute update interval.



Flexibility

Our customers tell us that Chef is more flexible than any other offering. When the situation calls for it, Chef allows advanced users to work directly with infrastructure primitives and a full-fledged modern Ruby-based programming language.



Community

Chef customers can tap into the shared knowledge, expertise, and helping hands of tens of thousands of Chef Community members, not to mention over 1,000 Chef Cookbooks. The Chef Community provides a vibrant, welcoming resource for learning best practices. In recent years, high-profile vendors have contributed to and built on top of Chef, including Google, Rackspace, Dell, HP, Facebook, VMware, AWS and IBM.



Google will be a featured partner at this year’s ChefConf. Join Google’s Eric Johnson as he shares technical details about Chef’s integration and future roadmap with Compute Engine.



Chef and Compute Engine: Under the Hood

Chef makes it easy to get started with Compute Engine. Once you’ve obtained a Compute Engine account and configured your Chef workstation, you can extend Chef’s knife command-line tool with the knife-google plugin:



gem install knife-google
knife google setup



That last command will walk you through a one-time configuration of your knife workstation with Compute Engine credentials.



Now you can use knife with the cookbooks on your Chef server to deploy infrastructure from Chef recipes to Compute Engine instances. Here’s an example where we use Chef to create a Jenkins master node hosted in Compute Engine. Note that this command assumes your local user has previously used 'gcutil' (bundled with the Cloud SDK), resulting in a valid SSH key that has been registered with the Compute Engine metadata service:





knife google server create jenkins1 -Z us-central1-a -m n1-highcpu-2 -I centos-6-v20140415 -r 'java,jenkins::master' -x $USER -i $HOME/.ssh/google_compute_engine





This command takes the following actions:




  • Creates a CentOS VM instance in Compute Engine's us-central1-a zone with machine type n1-highcpu-2

  • Registers it as a node named ‘jenkins1’ with the Chef Server

  • Configures the node’s run_list attribute as ‘java,jenkins::master’

  • Uses the ssh protocol to run chef-client with that ‘master’ recipe from the Jenkins community cookbook on the new system.


At the end of this process, you’ll see a message like the one below:



Chef Client finished, 19/21 resources updated in 40.207903203 seconds



And now you have a Jenkins master. This and similar knife commands may be integrated into automation that can also spin up Jenkins tester systems for a complete continuous integration pipeline backed by Compute Engine.



You can then use Chef Server features like search to manage the pipeline as long as you need it. But since Chef makes deployment so simple, and Compute Engine makes it so fast, you can just destroy part or all of it when it’s no longer needed...

# Commands like this destroy unneeded nodes
knife google server delete tester1 -y --purge



… and recreate nodes ‘just-in-time style’ when demand picks back up again.



The quick turnaround on deployment and convergent configuration updates via Chef + Compute Engine allows teams to experiment with developer automation at very low cost.



To get a deeper sense of how you can exploit the capabilities of Compute Engine, please visit our GCE page outlining details around Chef’s knife-google plugin and explore the community library of coded infrastructure.



-Contributed by Adam Edwards, Platform Engineering at Chef

We love seeing our developers create groundbreaking new applications on top of our infrastructure. To help our current and prospective users gain insight into the vast array of these applications, we recently added a new case study. Whether you’re interested in learning about how businesses are building on our platform or just looking for inspiration for your next project, we hope you find it informative.



Kahuna

Kahuna used App Engine to create an automated mobile-engagement engine that would turn people who downloaded a mobile app into truly engaged customers.



Check out cloud.google.com/customers to see the full list of case studies. You can read about companies of varying sizes, industries, and use cases that are using Google Cloud Platform to build their products and businesses.



To learn more about Kahuna, please visit www.usekahuna.com.



-Posted by Chris Palmisano, Account Manager

Today we are excited to announce a significantly updated Logs Viewer for App Engine users. Logs from all your instances can be viewed together in near real time, with greatly improved filtering, searching and browsing capabilities.



This release includes UI and functional improvements. We’ve added features that simplify navigation and make it easier to find the logs data you’re looking for.


(1) Filter on fields and use regular expressions in a single query

You can now use field filters (e.g., status:, protocol:) and regular expressions together in a single query. This is useful for filtering through events that might occur with high frequency. In addition, you can add and remove filters ad hoc to help you drill down and then zoom out again until you find what you’re looking for. Simply modify the query and press ‘enter’ to refresh the logs.



When you click the search bar we will show possible completions for filtering fields as you type. For example, typing ‘re’ would produce four possible completions as demonstrated in the screenshot below:

[Screenshot: filter-field autocompletions in the Logs Viewer search bar]

Note that filters of the same type are ORed to get results, while different filter types are ANDed together. So for example, status:400 status:500 regex:quota would produce all requests that returned HTTP status of either 400 OR 500, AND have the word quota in the log.



(2) Search or scroll through all of your logs

When you scroll through your logs in the new Logs Viewer, results are fetched until the console window is full. To retrieve additional logs that match the query, simply scroll down for newer results or up for older ones.



This provides you with a continuous view of your events, enabling you to move forward and backward in time without requiring you to click “next” or refresh the console. As related events frequently occur in close proximity to each other, this can help you home in on root causes faster. While results are being fetched, you will see a Loading… indicator at the top right corner of the viewer.



(3) Get it all in one place

With the Logs Viewer you can view and search logs from all your instances and apply filters to narrow in on a specific event, regardless of where it was generated. While this functionality exists in our old viewer, we are committed to making developers’ lives easier by making it simple to consume and analyze large amounts of data generated by highly scalable applications.



Those of you who have been using the old viewer should note that the same logs are available in both viewers. Additionally, the logs quota remains unchanged.



We’re working hard on additional improvements to make developers more productive and provide you with easier and more insightful access to your data. Stay tuned!



Your feedback is important!

Comments? Suggestions? Rants? Please send them to:

monitoring-and-logs-feedback@google.com



-Posted by Amir Hermelin, Product Manager

Today we are pleased to announce that Red Hat Enterprise Linux has exited Open Preview and is now Generally Available in two consumption models: on demand (pay by the hour), and Red Hat Cloud Access (pay by the year). This gives customers the ability to make use of Red Hat support, relationships, and technology on Google Cloud Platform, while maintaining a consistent level of service and support with consistent and predictable pricing from Red Hat. As an added benefit for subscribers of Red Hat Enterprise products, Red Hat Cloud Access enables qualified enterprise customers to migrate their current subscriptions for use on Google Cloud Platform. This starts with Red Hat Enterprise Linux (RHEL) subscriptions, with other Red Hat products to follow. You can learn more about Red Hat Cloud Access here, and find documentation for RHEL on Compute Engine here. Use of RHEL on Google Compute Engine is subject to additional terms and conditions (see the Google Cloud Platform Service Specific Terms here).



-Posted by Martin Buhr, Product Manager

At Google Cloud Platform Live last week, we announced Sustained Use Discounts for Google Compute Engine, which automatically lower the price of your virtual machines (VMs) when you use them to run sustained workloads. You still only pay for the minutes you use, but with sustained use discounts we give you the best price for every VM you run without you having to perform any additional planning, make any long-term commitments, or pay any upfront fees. Discounts increase with use, so the more you use a VM the greater the discount you get.



We make these discounts automatically, but I’d like to take a few minutes to explain how they’re calculated and highlight some hidden benefits.



Let’s start with an example to illustrate how we calculate usage levels to give you these discounts:



Say you’re running a Web application on Compute Engine using three virtual machines. Ten days into the month, you discover potential for further optimization. You deploy a new VM with the new code, and send a small percentage of traffic to it. After running in this mode for five days, you conclude the improvements are indeed meaningful and that you only need two VMs to serve all your traffic. So you spin up a second optimized VM and shut down the original three VMs. Your customers love the improved application, and traffic grows quickly. Five days later, you spin up another VM to handle the additional traffic, and you run with these three VMs for the rest of the month.



Here’s what your usage looks like:





As you can see above, this means you have:


  • 3 VMs running for the first 10 days,

  • 4 for the next 5 days,

  • 2 for the next 5 days,

  • and 3 for the last 10 days.






We translate your usage into that pattern, which now looks like:



From a Sustained Use perspective, we treat this exactly as above - i.e., as if you had two VMs running for 30 days (100% of the month), one running for 25 days (83.3% of the month) and one running for five days (16.7% of the month) - and we give you the appropriate discounts automatically. This approach lets us give you the greatest possible discount for a given usage pattern.
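
To make the arithmetic concrete, here is a small sketch of the tier math: each successive quarter of the month is billed at a lower fraction of the base rate, so a VM that runs the entire month nets a 30% discount. See the pricing page for the authoritative tiers; the base rate below is illustrative only.

# Sketch of the sustained use tier math. Each successive quarter of the
# month is billed at 100%, 80%, 60%, and 40% of the base rate, so a VM
# running the full month nets a 30% discount. Illustrative numbers only;
# the pricing page is authoritative.

def discounted_cost(base_hourly_rate, hours_used, hours_in_month=720.0):
    tiers = [1.00, 0.80, 0.60, 0.40]  # rate multiplier per usage quartile
    quartile = hours_in_month / 4.0
    cost, remaining = 0.0, float(hours_used)
    for multiplier in tiers:
        chunk = min(remaining, quartile)
        cost += chunk * base_hourly_rate * multiplier
        remaining -= chunk
    return cost

# The example above: two full-month VMs, one at 83.3%, one at 16.7%.
month = 720.0  # 30 days
for fraction in (1.0, 1.0, 25 / 30.0, 5 / 30.0):
    print('%5.1f%% of month -> $%.2f' %
          (100 * fraction, discounted_cost(0.07, fraction * month, month)))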





What that means for you is:


  • No lock-in or upfront minimum fee commitments

  • Greater agility (both financial and technical)

  • No complex planning required

  • No up-front payments; when you factor in the time value of money, large upfront payments are not only a source of lock-in, but also a significant hidden cost.

  • Automatically benefit from price reductions when they happen, rather than being contractually obligated to paying a rate that might, over time, be higher than the market rate.

  • No risk associated with over- or under-estimating your usage over a multi-month or multi-year period

  • No penalty for changing instance shapes as your needs change

  • Upgrade to newer instance types that may be better suited to your workload whenever you like.




With Sustained Use discounts, we’re going back to the original promises of the cloud - higher agility, simple pricing and lower risk.



- Posted by Navneet Joneja, Senior Product Manager

Our guest post today comes from Massimo Ilario, co-founder and principal engineer of SwiftIQ, a cloud-based API infrastructure to facilitate data accessibility and adaptive machine learning predictions.



At SwiftIQ, we unify and analyze vast amounts of disparate data and apply scalable algorithms to extract insights for our customers, enabling smarter, real-time decisions. For instance, we help lots of supermarkets collect and analyze customer transaction data to predict in-store shopper engagement, which helps retailers plan floor layouts, optimize promotions and recommend product upsells to their customers. Historically, most supermarkets could not even store detailed in-store basket information because of the size of this data.



Swift Predictions is our machine learning environment that makes predictions based on vast amounts of data. As we thought about scaling Swift Predictions, we knew we needed substantial infrastructure with reliable file storage, long-running processes and the ability to handle unpredictable scale. Google Cloud Platform met all our requirements for fast, powerful, cost-efficient cloud infrastructure. Starting with Google App Engine for development, we also use Google Compute Engine, Google Cloud Storage, Google BigQuery and Google Cloud Datastore.



One of the more data intensive algorithms, frequent pattern mining (FPM), is intended to analyze order combinations and return the top occurrences. For a supermarket with 50,000 items typically available in store, the FPM algorithm commonly creates a million rows of unique order combinations. Once processed, these results span tens of millions of records and are stored in BigQuery, which allows for rapid retrieval. Our infrastructure can then scale these models across dozens or hundreds of stores using Cloud Platform.
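
For intuition, a toy version of the idea (counting how often item pairs co-occur across baskets, not our production algorithm) might look like this:

# Toy sketch of the frequent-pattern idea: count co-occurring item pairs
# across baskets and keep the top combinations. Our production FPM handles
# larger itemsets and far larger data; this only illustrates the concept.
from collections import Counter
from itertools import combinations

baskets = [
    {'milk', 'bread', 'eggs'},
    {'milk', 'bread'},
    {'bread', 'butter'},
    {'milk', 'eggs'},
]

pair_counts = Counter()
for basket in baskets:
    # Every unordered pair of items purchased together in one transaction.
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The top occurrences, analogous to the result rows we store in BigQuery.
for pair, count in pair_counts.most_common(3):
    print('%s: %d' % (pair, count))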



Swift Predictions runs on Apache Hadoop, where we were writing and tuning MapReduce jobs to shuffle data between App Engine and Compute Engine. Google’s recent introduction of Google Cloud Storage Connector for Hadoop has removed this obstacle. We can now freely move data between our Web-facing project and our long-running, backend processes. As developers, this higher level of interoperability means we have the flexibility to pick the best tool to house our results from Hadoop: Cloud Storage or BigQuery.



One of our product features, a data mining algorithm for analyzing shopper buying patterns, creates a considerable amount of input data in a sparse matrix representation. Our input data can only be represented as large files split across the Hadoop Distributed File System (HDFS), and our output data is best represented as results within BigQuery. The Cloud Storage Connector for Hadoop has made this workflow much more manageable, since our Hadoop workflow communicates with Cloud Storage and BigQuery as if they were HDFS or any other native Hadoop InputFormat and OutputFormat.



App Engine modules are ideal for packaging and isolating functionality in our system. We benefit by dedicating our default module to serve the web application and then develop secondary modules to handle specific tasks that may be longer running or more intensive in certain areas. It has allowed us to decouple much of our source code from a main web application and simplify the maintenance for the long term. Coding changes to isolated modules need not require a more comprehensive QA cycle for a deployment.



Thanks to Cloud Platform, our development team spends a minimal amount of time managing the performance tuning and reliability of our entire machine learning piece, which allows us to focus on unlocking and delivering new insights for our customers.



-Contributed by Massimo Ilario, co-founder and Principal Engineer, SwiftIQ.

We have recently made the latest networking technology that powers our internal services available to Cloud Platform users across the world. Andromeda - the codename for Google’s network virtualization stack - now powers two Google Compute Engine zones: us-central1-b and europe-west1-a. Customers in these zones will automatically see major performance gains in throughput over our already fast network connections. We will be fully migrating all zones to Andromeda in the coming months.



At the Open Network Summit earlier this month, I presented Andromeda. In this presentation, I described some of the networking challenges introduced by virtualization. Delivering the highest level of performance, availability, and security requires orchestrating across virtual machines, hypervisors, operating systems, network interface cards, top of rack switches, fabric switches, border routers, and even our network peering edge. We are uniquely positioned to leverage Google's control and expertise over the entire hardware, software, LAN, and WAN to deliver a seamless experience for Cloud Platform customers.



At Google, we benefit from having programmable access to the entire network stack, from the lowest-level hardware to the highest-level software elements. Rather than being forced to create compromised solutions based on available insertion points, we can design end-to-end secure and performant solutions by coordinating across the stack.



Andromeda is a Software Defined Networking (SDN)-based substrate for our network virtualization efforts. It is the orchestration point for provisioning, configuring, and managing virtual networks and in-network packet processing. The figure below from my presentation shows Andromeda's high-level architecture:





Andromeda's goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls. We do this all while improving performance, with more enhancements coming.



Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security. For example, Cloud Platform firewalls, routing, and forwarding rules all leverage the underlying internal Andromeda APIs and infrastructure. Our site presents the details of these and other advanced network capabilities.



In addition, my presentation covered various scenarios such as the previously described Google Compute Engine 1M RPS Load balancing post. I also spoke about some forthcoming TCP stream performance improvements within Google Compute Engine (GCE), the most notable of which was a significant improvement to network-level latency, throughput, and CPU overhead. While these enhancements will lead to some of the best network performance available in the industry, we are most excited about the path moving forward. Andromeda will enable Cloud Platform to expose more and more of Google’s raw network infrastructure performance to all GCE virtual machines (VMs).



Some of the most valuable enhancements enable VMs built on supporting Linux kernels to exploit offload and multi-queue capabilities. I encourage interested customers to create new GCE VMs using the Debian backports image, which has the latest drivers needed to achieve the best performance.



To show the magnitude of the improvements rolling out, the Cloud Platform team performed a number of performance experiments. One benchmark evaluated throughput using netperf TCP_STREAM within the same GCE zone. Comparing the baseline performance (before Andromeda) against Andromeda highlights the benefits of the Andromeda architecture.



Additionally, we've started working on the next set of enhancements. In my talk, I highlighted some of the opportunities moving forward: high-speed access to low-latency, durable storage, APIs for NFV, and VM migration to deliver transparent availability in the face of system maintenance. Andromeda is a re-working of our underlying network virtualization architecture, and its SDN core enables us to rapidly iterate and deliver new functionality. This ensures that Cloud Platform's network will continue to be an agent of disruption to cloud computing moving forward.



-Posted by Amin Vahdat, Distinguished Engineer