In the upcoming Google App Engine 1.8.1 release, the Datastore default auto ID policy in production will switch to scattered IDs to improve performance. This change will take effect for all versions of your app uploaded with the 1.8.1 SDK.



You can try out the new behavior in the development application server, where scattered auto IDs are the default. These IDs are large, well-distributed integers, but are guaranteed to be small enough to be completely represented as 64-bit floats, so they can be stored as JavaScript numbers or in JSON. If you still need legacy IDs for some entities (e.g. because you want smaller numbers for user-facing IDs), we recommend you use the allocateIds() API, which will continue to behave as before. You can also override the default auto ID policy by setting the new auto_id_policy option in your app.yaml/appengine-web.xml to legacy, but please note that this option will be deprecated in a future release and will eventually be removed.
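For example, opting back into the legacy policy (with the deprecation caveat above) is a one-line setting; the fragment below is illustrative, and for Java apps the corresponding setting goes in appengine-web.xml:

```yaml
# app.yaml (fragment) - opt back into legacy auto IDs
# (deprecated; will eventually be removed)
auto_id_policy: legacy
```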



-Posted by Chris Ramsdale, Product Manager

Today’s guest post is from Alex Bertram of Bedatadriven, who helps clients leverage data and analysis to achieve their goals with software development, consulting and training. In this post, Alex describes why they chose to use Google App Engine and Google Cloud SQL.



One of Bedatadriven’s core projects is ActivityInfo, a database platform for humanitarian relief operations and development assistance.




Affected populations plotted by size and type on a base map of Health Zones in Eastern DRC

Originally developed for UNICEF’s emergency program in eastern Congo, today the system is used by over 75 organizations working in Africa and Asia, tracking relief and development activities, across more than 10,000 project sites. With ActivityInfo, project managers can quickly establish an online database that reports the results of educational projects, maps activities that improve water and hygiene, tracks the delivery of equipment to clinics or any other humanitarian activities a project undertakes.



Field offices are able to collect key data about a relief operation’s activities, either through an offline-capable web interface or by pushing results through a RESTful API. These results are then available to managers at the project or programme level and to the donor organisations that fund the operations and assistance.



Using ActivityInfo:




  • Reduces time spent on reporting and collecting data, leaving more for delivering practical aid and support to vulnerable people and communities

  • Builds a unified view of a humanitarian programme’s progress, across partners, regions and countries

  • Improves program quality, with faster and more accurate feedback into the project cycle




Choosing our Architecture



Although the code for ActivityInfo is open source, our vision is to offer the system as a central service to the UN, NGOs and others at ActivityInfo.org, allowing them to focus on delivering humanitarian programmes to some of the world’s most vulnerable populations. In choosing our infrastructure for ActivityInfo.org, we had several criteria:




  • Given the challenging environments that ActivityInfo users work in and the nature of the crises, we needed a platform that could ensure that the system was highly available.

  • Minimal system administration, allowing Bedatadriven’s focus to remain on product development - delivering the tools and functions users need to manage successful relief operations.

  • A platform that could scale up and down according to the load, with minimal human intervention. The scaling had to be automatic, because during a peak in a humanitarian crisis, load can increase by an order of magnitude or more.

  • Clear monitoring tools to help pinpoint performance problems. Physics imposes a minimum latency of nearly 900 milliseconds per request for satellite connections, so it’s essential for us to keep the server response time as low as possible to ensure a responsive experience for users.




As our user base grew, we moved first from a single machine to another Java PaaS meant to provide dynamic scaling. Unfortunately, we found we were still spending far too much time on server administration, fussing with auto scaling triggers and responding to alerts when the platform failed to scale up the number of application servers sufficiently. Our goal of minimal system administration had been overtaken by the need to keep the system up and running.



Even worse, we were lacking decent monitoring tools to identify and resolve the performance problems. There are some great Open Source tools out there like statsd and graphite, but the investment to get them up and running was more than we wanted to spend.



We had used Google App Engine for other projects and were impressed by its simplicity and stability. When the MySQL-based Google Cloud SQL service became available, we were quick to make the move.



App Engine has proved to be available and stable. Instances scale up and down with the load appropriately, without having to monkey with configuration or specify triggers through trial and error. New instances come online to serve requests in under 30 seconds, keeping request latency low even when we experience very sudden spikes in utilization.



More importantly, the strong monitoring tools have helped us quickly find and eliminate performance bottlenecks. App Engine collects logs from all running instances in near real time and has a clean interface that allows you to review and search logs, aggregated by request. This allows us to flag all requests that exceed a certain latency and drill down to the causes very quickly.



The App Engine metrics enabled us to pinpoint the MySQL queries that needed tuning, so they no longer tied up threads on the application servers. With a minimal investment of time, we now have ActivityInfo running better than ever before.



App Engine does impose some limitations in exchange for this reliability. Some of these, like the restrictions on the Java imaging libraries, we’ve been able to work around by using pure-Java libraries to render the images and PDF exports for users (See https://github.com/bedatadriven/appengine-export).



Others, like the 30-second request limit, have made us true believers. One of our problems turned out to be a few MySQL queries that worked fine in development, but degraded under load, requiring several minutes to complete. When we got hit with a few hundred of these queries concurrently, they quickly tied up all available threads on the application servers and maxed out the connection limits on MySQL, requiring manual intervention to avoid downtime. On App Engine, these cancerous requests were shut down after thirty seconds and flagged in the logs, allowing other requests to complete normally and giving us time to optimize the queries.



Our move to Google App Engine has proven to be a successful one, improving the quality of service to our users and allowing us to focus on software development.



-Contributed by Alexander Bertram, Partner, Bedatadriven

(Cross-posted on the Official Google Australia Blog)






Today’s guest blogger is Joshua Lowcock, Head of Commercial Platforms and Products for News Limited, an Australian media company. Joshua describes how his company used Google App Engine in Australia.



News Limited is one of Australia’s largest media companies, spanning newspapers, magazines, online, and subscription TV. We publish over 140 online and printed newspapers in major Australian cities including Sydney, Melbourne, Brisbane, Adelaide, and Perth, as well as in suburban areas.



Classified advertising is a key revenue stream across all our markets, but traditionally booking and billing classifieds had been a manual and time-consuming process. We wanted to implement a solution that would allow customers to serve themselves by placing ads online.



Google App Engine has enabled customers to do just that. We chose Google App Engine as the application platform because it is easy to build on, easy to maintain and simple to scale as the user base and data storage grow. Services available alongside Google App Engine, such as Google BigQuery, have also been useful: we can do in-depth analysis of our ads and item pricing, as well as provide an internal reporting tool, all using BigQuery.



The end result is a self-service booking and billing system - www.traderoo.com.au - which we have developed on Google App Engine. It’s proving to be a real winner for both our business and our customers. It’s fundamentally changed the way customers engage with our company, creating a more usable experience and superb responsiveness. It’s easy to use, and gives more control over ad content, as well as the ability to publish ads online immediately. Online ads are free, while print ads are optional and require a small fee, complementing online ads by extending the advertiser’s reach.



When customers book ads using the Traderoo website, they get automatic email notification from the platform that tells them how their advertisement is performing. Traderoo is optimised for PC, laptop, smartphone and tablet, so the browser and ad placement remain consistent, no matter what device our customers are using.



The real advantage for us is that our classified business has achieved faster time to market, lower costs and less overhead in the form of call centre time and manual data entry. The site has been a huge success, and we look forward to continuing to use Google App Engine as we develop Traderoo further.



-Contributed by Joshua Lowcock, Head of Commercial Platforms and Products for News Limited

Since its inception in 2011, Google App Engine High Replication Datastore (HRD) has grown and currently processes over 4.5 trillion transactions per month with 99.95% uptime. In addition, HRD serves as the basis of Google Cloud Datastore, which we announced last week at Google I/O.



We are always evaluating opportunities to create more value for you and today we are reducing Datastore prices by up to 25%. This price change impacts both App Engine HRD and Cloud Datastore.



Below is a breakdown of the new pricing:



Storage




Resource                    Old Unit Cost         New Unit Cost
Stored Data (Datastore)     $0.24 / GB / Month    $0.18 / GB / Month


Operations






Operation   Old Cost                      New Cost
Write       $0.10 per 100k operations     $0.09 per 100k operations
Read        $0.07 per 100k operations     $0.06 per 100k operations
Small       $0.01 per 100k operations     unchanged
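To put the reduction in concrete terms, here is a sketch of a hypothetical monthly bill at the old and new rates. Only the unit costs come from the tables above; the workload figures are invented for illustration:

```python
# Hypothetical workload: 100 GB stored, 50M writes, 200M reads per month.
GB_STORED = 100
WRITES = 50_000_000
READS = 200_000_000

def monthly_cost(storage_rate, write_rate, read_rate):
    """Cost in dollars; operation rates are per 100k operations."""
    return (GB_STORED * storage_rate
            + WRITES / 100_000 * write_rate
            + READS / 100_000 * read_rate)

old = monthly_cost(0.24, 0.10, 0.07)
new = monthly_cost(0.18, 0.09, 0.06)
print("old: $%.2f  new: $%.2f  savings: %.0f%%" % (old, new, 100 * (1 - new / old)))
```

The per-component savings range from 10% (writes) to 25% (storage), so the blended saving depends on your mix of storage and operations.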


If you are unfamiliar with Datastore, you can learn more about App Engine HRD and Cloud Datastore.



At the Game Developers Conference last month, we held a day of sessions showing developers how to take advantage of Google Cloud Platform to build all kinds of different games. We invited some of our top developers to share their stories and best practices, including LeanPlum, who is building a powerful mobile optimization platform, EA, who is building some really amazing games, and Staq, who is creating a unique game management platform. Check out the videos of the sessions below and let us know what you think.



Intro to Google Cloud Platform - PaaS, IaaS, Storage, Analytics (48:18 min)



Google Cloud Platform has everything needed to build highly scalable applications. Launch an app without system administrators, while having the ultimate flexibility of root on a virtual machine. Get high-performance asset hosting, and analyze terabyte-sized data to optimize games.









Connect Mobile Apps to the Cloud Without Breaking a Sweat (43:50 min)



Google Cloud Endpoints makes it easy to build OAuth 2-protected, RESTful APIs and instantly generate client libraries for Android, iOS, and JavaScript. See how you can use this feature to trivially connect your Android, iOS, and mobile browser applications to powerful backends built on App Engine.









Create Amazingly Scalable Games on Google Cloud Platform (40:35 min)



Quickly deliver compelling game experiences by leveraging the scalability of Google App Engine combined with the unlimited flexibility of virtual machines on Google Compute Engine. From 1 to 100,000 cores, learn how to unleash your next great game on Google Cloud Platform.









Understanding Your Players Using Near Real-time Data Analytics (41:20 min)



The volume of data generated by games can be immense, and the insights one can derive from it invaluable. Learn how to analyze player behavior and virality, segment users, and understand retention in near real-time using //staq and Google BigQuery.











How EA Builds Mobile Game Servers on Google App Engine (44:23 min)



Electronic Arts presents an overview of how Google App Engine propels the production of back-end servers required for connected, social games on mobile, with real-world applications of the platform's services and built-in automatic scaling.











Today’s guest post is from Thomas Orozco, Solutions Engineer at Scalr, which provides cloud management services and integrates with Google Cloud Platform. Thomas shares Scalr’s experience working with another Google partner, grandcentrix, to deliver the Eurovision companion app.



Eurovision is a song contest where each European country sends one singer to compete in a televised competition (similar to American Idol for our American readers). It is one of the most watched non-sporting TV events in the world, with an estimated 125 million live viewers every year!





This year, Eurovision created a second screen application that included singer biographies, real-time updates, contest voting and results. The “smartmrs” backend for the Eurovision companion app, developed by grandcentrix, was powered by Google Cloud Platform. grandcentrix leveraged Google Compute Engine for VMs and used our product at Scalr for orchestration.



Capacity planning without a target

Initially, Eurovision didn’t know how much traffic its companion app would receive, so they decided to work with Scalr and Compute Engine because of its flexibility. grandcentrix needed infrastructure that could scale up and down quickly, with instances that would instantly start serving user requests. Without knowing expected traffic levels, the objective was to take the backend service to a point where it could scale horizontally - that is, where adding twice the capacity would result in twice the throughput.



We had the following components running on Google Compute Engine:


  • Nginx as a load balancer

  • Apache running the app’s PHP code

  • Redis as a datastore for most queries

  • MySQL as a datastore for relationally heavy queries




Scalr was used as a control panel to launch instances and orchestrate the pieces through automated configuration and DNS management.



How Compute Engine helped us get there

The network

Google Compute Engine has a high performance network - packets move consistently and quickly. To take full advantage of this we went for Compute Engine’s largest compute offering and tuned our network settings a bit to accommodate more connections (think net.ipv4.tcp_tw_reuse, net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait, and net.nf_conntrack_max, among others).
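The exact values are workload- and kernel-dependent, but that tuning amounts to a few kernel settings along these lines (illustrative values, not the ones used for the show; apply with sysctl -p):

```
# /etc/sysctl.conf fragment - allow more concurrent connections
net.ipv4.tcp_tw_reuse = 1
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 30
net.nf_conntrack_max = 262144
```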



The elasticity, provisioning times, and billing

During the first Eurovision semifinal voting phase, traffic went up by a factor of 5. We were able to quickly spin up extra capacity in just a few minutes and handle the traffic that we were receiving.



During the finals, we were extra careful and decided to spin up 2x capacity just before the voting. We kept those instances up for 30 minutes, and shut them down as soon as the voting phase ended. Compute Engine’s sub hour billing was greatly appreciated by the grandcentrix team and saved them approximately 50% of what it would have cost on other providers.
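A back-of-the-envelope sketch shows where that saving comes from. Assuming a hypothetical price of $0.10 per instance-hour, a 30-minute burst of 20 extra instances costs half as much under per-minute billing as under whole-hour billing:

```python
RATE_PER_HOUR = 0.10     # hypothetical instance price, dollars
INSTANCES = 20
MIN_BILLED_MINUTES = 10  # Compute Engine's ten-minute minimum

def per_minute_cost(minutes):
    """Per-minute billing with a ten-minute minimum."""
    billed = max(minutes, MIN_BILLED_MINUTES)
    return INSTANCES * RATE_PER_HOUR * billed / 60.0

def per_hour_cost(minutes):
    """Whole-hour billing rounds any partial hour up."""
    hours = -(-minutes // 60)  # ceiling division
    return INSTANCES * RATE_PER_HOUR * hours

print("per-minute: $%.2f" % per_minute_cost(30))  # half the whole-hour price
print("whole-hour: $%.2f" % per_hour_cost(30))
```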



The (complete) flexibility

Google Compute Engine gives us full access to the instances, so we can understand what’s happening under the hood and optimize it. Here’s an example: DNS resolution.



Here, we connected to the DB instances by pointing the app to a Scalr-managed hostname that lists their IP addresses and gets updated when we add or remove DB servers.



Having low-level (socket) access let us understand the need for and implement randomization logic to distribute traffic evenly across our database servers and get consistent performance throughout the show.
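As a rough sketch of that idea (the function names and driver hookup are ours, not grandcentrix’s actual code): resolve the managed hostname to all of its A records, shuffle them, and try each address in turn so every app server spreads its connections across the DB tier instead of always hitting the first IP the resolver returns:

```python
import random
import socket

def db_hosts(hostname, port=3306):
    """Resolve hostname to all of its IPv4 addresses, in random order."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    ips = sorted({info[4][0] for info in infos})  # dedupe, stable base order
    random.shuffle(ips)
    return ips

def connect_to_any(hostname, connect):
    """Try each resolved address until one connection succeeds."""
    last_error = None
    for ip in db_hosts(hostname):
        try:
            return connect(ip)
        except OSError as e:
            last_error = e
    if last_error is None:
        raise OSError("no addresses found for %s" % hostname)
    raise last_error
```

Here connect would be the driver’s connect call, e.g. lambda ip: MySQLdb.connect(host=ip, ...), so a dead replica is skipped automatically.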



Ready for showtime!

In the end, the infrastructure was ready for the Eurovision finals on Saturday. Google Cloud Platform, grandcentrix and Scalr were able to deliver 50,000 RPS, with 99% of the requests completed within 35ms at the app server layer.



The traffic was higher than expected when voting started, but significantly lower than expected during the results phase (turns out people watch a TV show on TV!), and grandcentrix was able to shut down a large part of the cluster to save on cost and take advantage of Compute Engine’s sub-hour billing!



In the end, Google Cloud Platform provided the technology, pricing, and robustness that grandcentrix and Scalr needed to deliver a high performance solution for Eurovision.







At Google I/O, we announced PHP as the latest supported runtime for Google App Engine in Limited Preview. PHP is one of the world's most popular programming languages, used by developers to power everything from simple web forms to complex enterprise applications.



Now PHP developers can take advantage of the scale, reliability and security features of App Engine. In addition, PHP runs well with other parts of Google Cloud Platform. Let's look at how this works.



Connecting to Google Cloud SQL from App Engine for PHP



Many PHP developers start with MySQL when choosing a database to store critical information, and a wide variety of products and frameworks such as WordPress make extensive use of MySQL’s rich feature set. Google Cloud SQL provides a reliable, managed database service that is MySQL 5.5 compatible and works well with App Engine.



To set up a Cloud SQL database, sign in to the Google Cloud Console, create a new project, choose Cloud SQL and create a new instance.





After you create the instance, it's automatically associated with your App Engine app.

You will notice Cloud SQL instances don’t need an IP address. Instead, they can be accessed via a compound identifier made up of their project name and instance name, such as hello-php-gae:my-cloudsql-instance.



From within PHP, you can access Cloud SQL directly using the standard PHP MySQL libraries - mysql, mysqli or PDO_MySQL. Just specify your Cloud SQL database with its identifier, such as:

<?php

$db = new PDO(
    'mysql:unix_socket=/cloudsql/hello-php-gae:my-cloudsql-instance;dbname=demo_db;charset=utf8',
    'demo_user',
    'demo_password'
);

foreach ($db->query('SELECT * FROM users') as $row) {
    echo $row['username'].' '.$row['first_name']; // etc...
}
Methods such as query() work just as you’d expect with any MySQL database. This example uses the popular PDO library, although other libraries such as mysql and mysqli work just as well.



Storing files with PHP and Google Cloud Storage



Reading and writing files is a common task in many PHP projects, whether you are reading stored application state, or generating formatted output (e.g., writing PDF files). The challenge is to find a storage system that is as scalable and secure as Google App Engine itself. Fortunately, we have exactly this in Google Cloud Storage (GCS).



The first step in setting up Google Cloud Storage is to create a bucket.

With the PHP runtime, we’ve implemented native support for GCS. In particular, we’ve made it possible for PHP’s native filesystem functions to read and write to a GCS bucket.



This code writes all prime numbers less than 2000 into a file on GCS:



<?php

$handle = fopen('gs://hello-php-gae-files/prime_numbers.txt', 'w');

fwrite($handle, "2");
for ($i = 3; $i <= 2000; $i = $i + 2) {
    $j = 2;
    while ($i % $j != 0) {
        if ($j > sqrt($i)) {
            fwrite($handle, ", ".$i);
            break;
        }
        $j++;
    }
}

fclose($handle);
The same fopen() and fwrite() commands are used just as if you were writing to a local file. The difference is we’ve specified a Google Cloud Storage URL instead of a local filepath.



And this code reads the same file back into memory and pulls out the 100th prime number, using file_get_contents():



<?php

$primes = explode(",",
    file_get_contents('gs://hello-php-gae-files/prime_numbers.txt')
);

if (isset($primes[99])) {
    echo "The 100th prime number is ".trim($primes[99]);
}


And more features supported in PHP



Many of our most popular App Engine APIs are now supported in PHP, including our zero-configuration Memcache, Task Queues for asynchronous processing, Users API, Mail API and more. The standard features you’d expect from App Engine, including SSL support, Page Speed Service, versioning and traffic splitting are all available as well.



Open today in Limited Preview



Today we’re making App Engine for PHP available in Limited Preview. Read more about the runtime in our online documentation, download an early developer SDK, and sign up to deploy applications at https://cloud.google.com/appengine/php.



- Posted by Andrew Jessup, Product Manager

At Google I/O, we announced Google Cloud Datastore, a fully managed solution for storing non-relational data. Based on the popular Google App Engine High Replication Datastore (HRD), Cloud Datastore provides a schemaless, non-relational datastore with the same accessibility of Google Cloud Storage and Google Cloud SQL.



Cloud Datastore builds on the strong growth and performance of HRD, which has over 1PB of data stored, 4.5 trillion transactions per month and a 99.95% uptime. It also comes with the following features:

  • Built-in query support: near SQL functionality that allows you to search, sort and filter across multiple indexes that are automatically maintained 

  • ACID transactions: data consistency (both Strong and Eventual) that spans multiple replicas and requests 

  • Automatic scaling: built on top of Google’s BigTable infrastructure, the Cloud Datastore will automatically scale with your data 

  • High availability: by utilizing Google’s underlying Megastore service, the Cloud Datastore ensures that data is replicated across multiple datacenters and is highly available 

  • Local development environment: the Cloud Datastore SDK provides a full-featured local environment that allows you to develop, iterate and manage your Cloud Datastore instances efficiently 

  • Free to get started: 50k read & write operations, 200 indexes, and 1GB of stored data for free per month  



Getting started with Cloud Datastore 



To get started, head over to the Google Cloud Console and create a new project. After supplying a few pieces of information, you will have a Cloud Project that has the Cloud Datastore enabled by default. For this post we’ll use the project ID cloud-datastore-demo.





With the project created and the Cloud Datastore enabled, we’ll need to download the Cloud Datastore client library. Once extracted, it’s time to start writing some code. For the sake of this post, we’ll focus on accessing the Cloud Datastore from a Python application running on a Compute Engine VM (which is also now in Preview). We’ll assume that you’ve already created a new VM instance.

First create a new file called “demo.py”. Inside demo.py, we’ll add code to write and then read an entity from the Cloud Datastore:

import googledatastore as datastore

def main():
    writeEntity()
    readEntity()
Next include the writeEntity() and readEntity() functions:

def writeEntity():
    req = datastore.BlindWriteRequest()
    entity = req.mutation.upsert.add()
    path = entity.key.path_element.add()
    path.kind = 'Greeting'
    path.name = 'foo'
    message = entity.property.add()
    message.name = 'message'
    value = message.value.add()
    value.string_value = 'to the cloud and beyond!'
    try:
        datastore.blind_write(req)
    except datastore.RPCError as e:
        # remember to do something useful with the exception
        pass

def readEntity():
    req = datastore.LookupRequest()
    key = req.key.add()
    path = key.path_element.add()
    path.kind = 'Greeting'
    path.name = 'foo'
    try:
        resp = datastore.lookup(req)
        return resp
    except datastore.RPCError as e:
        # remember to do something useful with the exception
        pass
Finally we can update main() to print out the property values within the fetched entity:

def main():
    writeEntity()
    resp = readEntity()

    entity = resp.found[0].entity
    for p in entity.property:
        print 'Entity property name: %s' % p.name
        v = p.value[0]
        print 'Entity property value: %s' % v.string_value
Before we can run this code we need to tell the client library which Cloud Datastore instance we would like to use. This is done by exporting the following environment variable:

~$ export DATASTORE_DATASET=cloud-datastore-demo
Finally we’re able to run the application by simply issuing the following:

~$ python demo.py
Besides the output that we see in console window, we’re also able to monitor our interactions within the Cloud Console. By navigating back to Cloud Console, selecting our cloud-datastore-demo project, and then selecting the Cloud Datastore we’re taken to our instance’s dashboard page that includes number of entities, properties, and property types, as well as index management, ad-hoc query support and breakdown of stored data.



And that’s really just the beginning. To fully harness the features and functionality that the Cloud Datastore offers, be sure to check out the larger Getting Started guide and the Cloud Datastore documentation.



Cloud Datastore is the latest addition to the Cloud Platform storage family, joining Cloud Storage for storing blob data, Cloud SQL for storing relational data, and Persistent Disk for storing block data. All fully managed so that you can focus on creating amazing solutions and leave the rest to us.



And while this is a Preview Release, the team is off to a great start. As we move the service towards General Availability we’re looking forward to improving JSON support, more deeply integrating with the Cloud Console, streamlining our billing and driving every bit of performance that we can out of the API and underlying service.



Happy coding!



 -Posted by Chris Ramsdale, Product Manager

Last year we announced Google Compute Engine to enable any business or developer to use Google’s infrastructure for their applications. Now we’re taking the next step: Google Compute Engine is open to everyone in preview, and you can sign up online now.



Over the past year, we’ve launched several features and made significant improvements behind the scenes. We’re now announcing several new capabilities that make it easier and more economical to use Compute Engine for a broader set of applications.




  • Sub-Hour Billing: We heard feedback from our early users who wanted more granular billing increments so they could run short-lived workloads. Now all instances are charged in one-minute increments with a ten-minute minimum, so you don’t pay for compute minutes that you don’t use.

  • New shared-core instance types: Compute Engine’s new micro and small instance types are designed as a cost-effective option for running small workloads that don’t need a lot of CPU power, like development and test workloads.

  • Larger Persistent Disks: We’re increasing the size of Persistent Disks that can be attached to instances by up to 8,000%. You can now attach up to 10 terabytes of persistent disk to a Compute Engine virtual machine, giving you plenty of persistent storage for a wide variety of applications.

  • Advanced Routing Capabilities: Compute Engine now supports software-defined routing capabilities based on our broad SDN innovation. These capabilities are designed to handle your advanced network routing needs like configuring instances to function as gateways, configuring VPN servers and building applications that span your local network and Google’s cloud.

  • ISO 27001 Certification: We’ve also completed ISO 27001:2005 certification for Compute Engine, App Engine, and Cloud Storage to demonstrate that these products meet the international standard for managing information security.




To get started, go to the Google Cloud Console, select Compute Engine and click the “New Instance” button.





Fill out the required information and click “Create” on the right hand side. Your new virtual machine will be ready to use in about a minute.



To all of our customers who helped us evolve the product over the past months, thank you; your feedback has helped shape Compute Engine. To those of you who have been eager to try Compute Engine, the wait is over and you can sign up for Compute Engine online today.



- Posted by Navneet Joneja, Product Manager

Over the last fourteen years we have been developing some of the best infrastructure in the world to power Google’s global-scale services. With Google Cloud Platform, our goal is to open that infrastructure and make it available to any business or developer anywhere. Today, we are introducing improvements to the platform and making Google Compute Engine available for anyone to use.



Google Compute Engine - now available for everyone



Google Compute Engine provides a fast, consistently high-performance environment for running virtual machines. Later today, you’ll be able to go online to cloud.google.com and start using Compute Engine.



In addition, we’re introducing new Compute Engine features:



  • Sub-hour billing charges for instances in one-minute increments with a ten-minute minimum, so you don’t pay for compute minutes that you don’t use

  • Shared-core instances provide smaller instance shapes for low-intensity workloads

  • Advanced Routing features help you create gateways and VPN servers, and enable you to build applications that span your local network and Google’s cloud

  • Large persistent disks support up to 10 terabytes per volume, which translates to 10X the industry standard
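The sub-hour billing rule above is simple enough to sketch in a few lines. This is our own illustration of the stated policy (one-minute increments, ten-minute minimum), not official billing code:

```python
import math

def billable_minutes(runtime_seconds):
    """Bill in one-minute increments with a ten-minute minimum."""
    if runtime_seconds <= 0:
        return 0
    minutes = math.ceil(runtime_seconds / 60.0)
    return max(10, int(minutes))

# A VM that runs for 4 minutes is billed the 10-minute minimum;
# one that runs for 47.5 minutes is billed 48 minutes.
print(billable_minutes(4 * 60))      # 10
print(billable_minutes(47.5 * 60))   # 48
```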



We’ve also completed ISO 27001:2005 international security certification for Compute Engine, Google App Engine, and Google Cloud Storage.



Google App Engine adds the PHP runtime



App Engine 1.8.0 is now available and includes a Limited Preview of the PHP runtime - your top requested feature. We’re bringing one of the most popular web programming languages to App Engine so that you can run open source apps like WordPress. It also offers deep integration with other parts of Cloud Platform including Google Cloud SQL and Cloud Storage.



We’ve also heard that we need to make building modularized applications on App Engine easier. We are introducing the ability to partition apps into components with separate scaling, deployments, versioning and performance settings.



Introducing Google Cloud Datastore



Google Cloud Datastore is a fully managed and schemaless solution for storing non-relational data. Based on the popular App Engine High Replication Datastore, Cloud Datastore is a standalone service that features automatic scalability and high availability while still providing powerful capabilities such as ACID transactions, SQL-like queries, indexes and more.



Over the last year we have continued our focus on feature enhancement and developer experience across App Engine, Compute Engine, Google BigQuery, Cloud Storage and Cloud SQL. We also introduced Google Cloud Endpoints and Google Cloud Console.



With these improvements, we have seen increased usage with over 3 million applications and over 300,000 unique developers using Cloud Platform in a given month. Our developers inspire us every day, and we can’t wait to see what you build next.



-Posted by Urs Hölzle, Senior Vice President

Cross-posted with the Google Developers Blog



After last year's Google I/O conference, the Google Cloud Platform Developer Relations team started to think about how attendees experienced the event. We wanted to help attendees gain more insight about the conference space and the environment itself. Which developer Sandboxes were the busiest? Which were the loudest locations, and which were the best places to take a quick nap? We think about data problems all the time, and this looked like an interesting big data challenge that we could try to solve. So this year, we decided to try to answer our questions with a project that's a bit different, kind of futuristic, and maybe a little crazy.



Since we love open source hardware hacking as much as we love to share open source code, we decided to team up with the O'Reilly Data Sensing Lab to deploy hundreds of Arduino-based environmental sensors at Google I/O 2013. Using software built with the Google Cloud Platform, we'll be collecting and visualizing ambient data about the conference, such as temperature, humidity, and air quality, in real time! Altogether, the sensor network will provide over 4,000 continuous data streams over a ZigBee mesh network managed by Device Cloud by Etherios.



photo of sensors



In addition, our motes will be able to detect fluctuations in noise level, and some will be attached to footstep counters, to understand collective movement around the conference floor. Of course, since a key goal of Google I/O is to promote innovation in the open, the project's Cloud Platform code, the Arduino hardware designs, and even the data collected, will be open source and available online after the conference.



Google Cloud Platform, which provides the software backend for this project, has a variety of features for building applications that collect and process data from a large number of client devices - without having to spend time managing hardware or infrastructure. Google App Engine Datastore, along with Google Cloud Endpoints, provides a scalable front-end API for collecting data from devices. Google Compute Engine is used to process and analyze data with software tools you may already be familiar with, such as R and Hadoop. Google BigQuery provides fast aggregate analysis of terabyte datasets. Finally, App Engine's web application framework is able to surface interactive visualizations to users.
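To give a flavor of the processing step described above, here is a small, purely illustrative Python sketch (the sensor names and schema are hypothetical, not the project's actual code) that reduces a stream of per-sensor readings to the kind of per-sensor averages a dashboard visualization would consume:

```python
from collections import defaultdict

def aggregate_readings(readings):
    """Group raw (sensor_id, metric, value) readings and compute
    per-sensor, per-metric averages."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for sensor_id, metric, value in readings:
        key = (sensor_id, metric)
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

readings = [
    ('mote-17', 'temperature_c', 22.5),
    ('mote-17', 'temperature_c', 23.5),
    ('mote-42', 'noise_db', 61.0),
]
print(aggregate_readings(readings))
```

In the real pipeline, this sort of aggregation would run in BigQuery over the full dataset rather than in application code.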



Networked sensor technology is in the early stages of revolutionizing business logistics, city planning, and consumer products. We are looking forward to sharing the Data Sensing Lab with Google I/O attendees, because we want to show how using open hardware together with the Google Cloud Platform can make this technology accessible to anyone.



With the help of the Google Maps DevRel team, we'll be displaying visualizations of interesting trends on several screens around the conference. Members of the Data Sensing Lab will be on hand in the Google I/O Cloud Sandbox to show off prototypes and talk to attendees about open hardware development. Lead software developer Amy Unruh and Kim Cameron from the Cloud Platform Developer Relations team will talk about how we built the software involved in this project in a talk called "Behind the Data Sensing Lab". In case you aren't able to attend Google I/O 2013, this session will be available online after the conference. Learn more about the Google Cloud Platform on our site, and to dive in to building applications, check out our developer documentation.



-Posted by Michael Manoochehri, Developer Programs Engineer



Welcome to the Google Cloud Platform blog, the evolution of the Google App Engine blog, which continues as a key component of our broader Google Cloud Platform vision. On this blog, you can find product updates, developer tips, and other content related to Google Cloud Platform.



Our goal is to build the best cloud platform for developers; one that is comprised of multiple services that work together in harmony. A key component of delivering on this goal is creating a centralized communication channel to discuss updates across the entire Google Cloud Platform.



Moving forward, you can find all of the same content that we posted on the App Engine blog here. In addition to App Engine releases, updates, and customer stories, you can expect similar content for the rest of the platform - including Google Compute Engine, Google BigQuery, Google Cloud SQL, Google Cloud Storage, Google Cloud Endpoints and all future Cloud Platform products and services. 



Looking back at the first post on the App Engine blog is a reminder that our ambitious mission remains the same. That is, we want to give you access to the same building blocks that Google uses for its own applications, so you can continue to build amazing things. We are committed to providing the best possible technology for you to build your business in the cloud. 



Okay, time to get back to building. Subscribe here to get notifications for our new blog. And while you’re at it, follow us on Google+ and Twitter too.



-Posted by Chris Ramsdale, Product Manager

Google I/O 2013 is only a week away! We look forward to sharing updates across Google Cloud Platform. Here’s everything you need to know to keep up with the latest happenings at I/O.



This year, we will have a Google Cloud Platform track kickoff given by Urs Hölzle, Senior Vice President of Technical Infrastructure, on Wednesday, May 15th at 12:45 PM Pacific. You can watch the stream on the I/O Live Stream page. Urs will make a few special announcements, so you won’t want to miss it.



At I/O, we have an entire Cloud Platform track complete with code labs and conference sessions. Even if you aren’t attending, you can still tune in to the following sessions on the live stream, which you’ll also be able to find on the homepage of cloud.google.com:





All of our sessions (including the live ones above) will be available on demand as soon as we can get them posted. We’ll post live updates on Google+ and Twitter, so be sure to follow us and take part in the conversation.



Until I/O!



-Posted by Zafir Khan, Product Marketing Manager



This was an exciting week for the Debian community, which released Debian 7.0 “wheezy” with big improvements including hardened security, improved 32/64-bit compatibility, and fixes addressing a lot of community feedback. Today we’re adding Debian images for Google Compute Engine. Debian, in collaboration with us, is providing images for both Debian 7.0 “wheezy” and the previous stable release, Debian 6.0 “squeeze.” This support will make it easy for anyone using Debian today to migrate their workloads onto Compute Engine.



For fast performance and to reduce bandwidth costs, Google is hosting a Debian package mirror for use by Google Compute Engine Debian instances. We’ve updated our docs and will support Debian via our usual support options or you can also check out what Debian offers.



We are continually evaluating other operating systems that we can enable with Compute Engine. However, going forward, Debian will be the default image type for Compute Engine. We look forward to hearing your feedback.



-Posted by Jimmy Kaplowitz, Site Reliability Engineer and Debian developer

Do your customers upload files to Google Cloud Storage for your applications to process? For example, a photo app may want to create thumbnails of new images as soon as they are uploaded. Normally, you would have to poll for updated objects, which wastes resources and slows your response. Writing and deploying custom scripts to trigger your application is often cumbersome.



Today, we're releasing object change notification as a preview feature, allowing you to watch your Google Cloud Storage buckets for new, modified, or deleted objects with a webhook you provide. Now your application can be automatically triggered when an important change happens and start processing data immediately. We've also updated gsutil with a notifyconfig command. A Google App Engine webhook can be as simple as the following:

import json

import webapp2

class MainPage(webapp2.RequestHandler):
    def post(self):
        resource_state = self.request.headers['X-Goog-Resource-State']
        if resource_state == 'sync':
            # Initial message that the notification channel is active.
            pass
        elif resource_state == 'exists':
            # An object was created or updated.
            an_object = json.loads(self.request.body)
            bucket = an_object['bucket']
            object_name = an_object['name']
            # Take action!
        elif resource_state == 'not_exists':
            # Object was deleted.
            pass
We're also releasing an update to the Google Cloud Storage JSON API, bringing it into parity with our existing XML API, including exposing new methods such as Copy and Compose. As a part of this release, we are making the API available to everyone without requiring an invitation.



Enjoy, and as always, we watch StackOverflow.



- Posted by Dave Barth, Product Manager