Ezakus, a leading data management platform, relies on Hadoop to process 600 million digital touch points generated by 40 million web and mobile users.



Fast growth created challenges in managing Ezakus’s existing Hadoop installation, so they tested different alternatives for running Hadoop. Their benchmarks found that Hadoop on Google Compute Engine provided processing speed that was three to four times better than the next-best cloud provider.



“Our benchmark tests used the Cloudera Hadoop distribution,” said Olivier Gardinetti, CTO. “We were careful to use identical infrastructure - the same logical CPU count, the same memory capacity and so forth. We also ran each test several times to ensure that outliers weren't skewing the results.”



When using MapReduce for basic statistical processing of 20,469,283 user browsing-history entries spanning one month, Compute Engine computed the stats in 1 minute and 3 seconds, four times faster than the alternatives tested. When more complex queries were run in a second test, Compute Engine finished in 7 minutes and 47 seconds, three times faster than the closest alternative, which took 23 minutes and 31 seconds.



Ezakus can now deliver more performance and more predictions, and serve more clients, “because we can more easily deploy all the servers in a very short time,” said Gardinetti. To learn more about their migration to Google Cloud Platform and the subsequent results for their business, read the case study here.



-Posted by Ori Weinroth, Product Marketing Manager





Cross-posted from the Google Enterprise blog



No matter how you slice it, mobile and cloud are essential for future business growth and productivity. This is driving increases in security spending as organizations wrestle with threats and regulatory compliance: according to Gartner, worldwide spending on information security will reach $71 billion this year, a 7.9 percent increase over 2013.



To help organizations spend their money wisely, it’s essential that cloud companies are transparent about their security capabilities. Since we see transparency as a crucial way to earn and maintain our customers’ confidence, we ask independent auditors to examine the controls in our systems and operations on a regular basis. The audits are rigorous, and customers can use these reports to make sure Google meets their compliance and data protection needs.



We’re proud to announce that we have received an updated ISO 27001 certificate and SOC 2 and SOC 3 Type II audit reports, which are among the most widely recognized, internationally accepted independent security compliance reports. These audits refresh our coverage for Google Apps for Business and Education, as well as Google Cloud Platform, and we’ve expanded the scope to include Google+ and Hangouts. To make it easier for everyone to verify our security, we’re now publishing our updated ISO 27001 certificate and new SOC 3 audit report for the first time on our Google Enterprise security page.



Keeping your data safe is at the core of what we do. That’s why we hire the world’s foremost experts in security—the team now comprises more than 450 full-time engineers—to keep customers’ data secure from imminent and evolving threats. These certifications, along with our existing FISMA certification for Google Apps for Government, support for FERPA and COPPA compliance in Google Apps for Education, model contract clauses for Google Apps customers operating in Europe, and HIPAA business associate agreements for organizations handling protected health information, help assure our customers and their regulators that we’re committed to keeping their data, and that of their users, secure, private and compliant.



Editor's update February 22, 2016: Click to Deploy for the GitLab Community Server is no longer available, but you can launch GitLab on Google Cloud Platform here within Cloud Launcher.



Every software company today needs a place to store its code and collaborate with teammates, and today we are announcing a solution that can scale with your business. GitLab Community Server is a great way to get the benefits of collaborative development for your team, wherever you want to run it. While GitLab already provides simple application installers, we wanted to take it one step further.



Today, we’re announcing Click to Deploy for the GitLab Community Server built on the following open source stack:


  • Nginx, a fast, minimal web server

  • Unicorn, a Ruby on Rails application server

  • Redis, a scalable caching service

  • PostgreSQL, a popular SQL database




Get your own, dedicated code collaboration server today!



Learn more about running the GitLab Community Server on Google Compute Engine at https://developers.google.com/cloud/gitlab.



-Posted by Brian Lynch, Solutions Architect



GitLab is a registered trademark of GitLab B.V. All other trademarks cited here are the property of their respective owners.

Today we are announcing that Zync Render, the visual effects cloud rendering technology behind Star Trek Into Darkness and Looper, is joining the Google Cloud Platform team.



Creating amazing special effects requires a skilled team of visual artists and designers, backed by a highly powerful infrastructure to render scenes. Many studios, however, don’t have the resources or desire to create an in-house rendering farm, or they need to burst past their existing capacity.



Together, Zync + Cloud Platform will offer studios the rendering performance and capacity they need, while helping them manage costs. For example, with per-minute billing, studios aren’t trapped into paying for unused capacity when their rendering needs don’t fit neatly into hour-long increments.



We’re excited they're joining us. We’ll have more details to share in the coming months — stay tuned!



-Posted by Belwadi Srikanth, Product Manager





Two months ago, we announced Kubernetes, an open source cluster manager for Docker containers. Since then we’ve seen an impressive community develop around Kubernetes, and today we’re thrilled to welcome VMware to the Kubernetes community.



We’ve spent a lot of time talking about how we’re building Kubernetes to provide a unique infrastructure for easily building scalable, reliable systems, the way we do at Google. With the addition of VMware to the community, we thought we’d take the time to discuss the infrastructure side of cluster management and how VMware’s deep technical expertise in this area will make Kubernetes a more capable, powerful and secure platform beyond Google Cloud Platform.



One of the fundamental tenets of Kubernetes is the decoupling of application containers from the details of the systems on which they run. Google Cloud Platform provides a homogeneous set of raw resources to Kubernetes via virtual machines (VMs), and in turn, Kubernetes schedules containers to use those resources. This decoupling simplifies application development, since users only ask for abstract resources like cores and memory, and it also simplifies data center operations, since every machine is identical and isolated from the details of the applications that run on it.
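
To make “asking for abstract resources” concrete, here is a minimal sketch of a pod manifest expressed as a Python dict. The field layout follows the present-day Kubernetes API rather than the 2014-era schema, and the pod name, image and resource figures are illustrative assumptions, not anything from the original post.

# Illustrative sketch: a pod manifest as a Python dict. The container asks
# only for abstract resources (CPU and memory); Kubernetes decides which
# machine in the cluster actually runs it.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-web"},      # hypothetical name
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx",                  # illustrative image
            "resources": {
                "requests": {"cpu": "500m", "memory": "256Mi"},
                "limits": {"cpu": "1", "memory": "512Mi"},
            },
        }],
    },
}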



VMware will provide enhanced capabilities for running a reliable Kubernetes cluster, much like Google Cloud Platform. The core resources here are:




  • Machines: virtual machines on which containers run

  • Network: the physical or virtualized connectivity between containers in the cluster

  • Storage: reliable, cluster-level distributed storage that exists outside a container’s lifecycle




Providing machines for Kubernetes is necessary not only as a pool of raw cycles and bytes; it can also provide a critical extra layer of security. Security is a continuum on which you pick solutions based on threats and risk tolerance. While container security is an evolving area, VMs have a longer track record and present a smaller attack surface. Fundamentally, even in Kubernetes, the machine is a strong security domain. Linux containers can provide strong resource isolation, ensuring, for example, that one container has dedicated access to a specific core in the processor. For semi-trusted workloads, containers may be sufficient. However, because containers share the same kernel, there’s an expanded surface area that may make them insufficient as your only line of defense. For untrusted workloads or users, we highly recommend defense in depth, with virtual machine technology as a second layer of security. Indeed, this is how two different users’ Kubernetes clusters can safely co-exist on the same physical infrastructure in a Google data center. VMware will help Kubernetes implement this same pattern of using virtualization to secure physical machines when those machines are outside of Google’s data centers.



While running individual containers is sufficient for some use cases, the real power of containers comes from implementing distributed systems, and to do this you need a network. However, you don’t need just any network. Containers provide end users with an abstraction that makes each container a self-contained unit of computation. Traditionally, one place where this abstraction has broken down is networking, where containers are exposed on the network via the shared host machine’s address. In Kubernetes, we’ve taken an alternative approach: each group of containers (called a Pod) deserves its own, unique IP address that’s reachable from any other Pod in the cluster, whether or not they’re co-located on the same physical machine. To achieve this in the Google data center, we’ve taken advantage of the advanced routing features available via Google Compute Engine’s Andromeda network virtualization. VMware, with their deep knowledge of network virtualization, specifically Open vSwitch (OVS), will simplify network configuration in Kubernetes clusters running outside of Google’s data centers.



Finally, nearly every application you run needs some sort of storage, but storing that data on specific machines in your datacenter makes it difficult to schedule containers across the cluster for maximum efficiency and reliability, since pods are forced to co-locate with their data. When Kubernetes runs on Google Cloud Platform, you’ll soon be able to pair your container with a Persistent Disk (PD) volume, so that regardless of where your container is scheduled in the cluster, its storage follows it to the physical machine. VMware will work with Kubernetes to add integration points for distributed storage systems such as their Virtual SAN scalable virtual storage solution, enabling similar capabilities for users not running on Google Cloud Platform, in addition to simpler, less robust shared storage options for users who don’t have access to a reliable network storage system.
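
As a rough companion sketch, the manifest below attaches an existing Compute Engine persistent disk to a pod so that its data follows the container wherever it is scheduled. Again, the field names reflect today's Kubernetes API shape, and the disk name and mount path are placeholders of my own, not values from the post.

# Illustrative sketch: mounting an existing persistent disk ("my-pd",
# a placeholder) into a pod via a gcePersistentDisk volume.
pod_with_pd = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "data-service"},      # hypothetical name
    "spec": {
        "containers": [{
            "name": "app",
            "image": "example/app",            # illustrative image
            "volumeMounts": [{"name": "data", "mountPath": "/var/data"}],
        }],
        "volumes": [{
            "name": "data",
            "gcePersistentDisk": {"pdName": "my-pd", "fsType": "ext4"},
        }],
    },
}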



We developed and open sourced Kubernetes to give application developers and operations teams the ability to build and scale their applications like Google does. The addition of VMware’s technical expertise in cluster infrastructure will enable people to begin computing like Google, regardless of where they physically do that computation.



-Posted by Craig McLuckie, Product Manager

Today’s guest post is by Florian Leibert, Mesosphere Co-Founder & CEO. Prior to Mesosphere, he was an engineering lead at Twitter, where he helped introduce Mesos; it now runs every new service there. He then went on to help build the analytics stack at Airbnb on Mesos. He is the main author of Chronos, an Apache Mesos framework for managing and scheduling ETL systems.



Mesosphere enables users to manage their datacenter or cloud as if it were one large machine. It does this by creating a single, highly elastic pool of resources from which all applications can draw, building sophisticated clusters out of raw compute nodes (whether physical or virtual machines). These Mesosphere clusters are highly available and support scheduling diverse workloads on the same cluster, such as those from Marathon, Chronos, Hadoop, and Spark. Mesosphere is based on the open source Apache Mesos distributed systems kernel, used by companies like Twitter, Airbnb, and HubSpot to power internet-scale applications. Mesosphere makes it possible to develop and deploy applications faster with less friction, operate them at massive scale with lower overhead, and enjoy higher levels of resiliency and resource efficiency with no code changes.



We’re collaborating with Google to bring together Mesosphere, Kubernetes and Google Cloud Platform to make it even easier for our customers to run applications and containers at scale. Today, we are excited to announce that we’re bringing Mesosphere to the Google Cloud Platform with a web app that enables customers to deploy Mesosphere clusters in minutes. In addition, we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.



With our new web app, developers can spin up a Mesosphere cluster on Cloud Platform in just a few clicks, using either standard or custom configurations. The app automatically installs and configures everything you need to run a Mesosphere cluster, including the Mesos kernel, ZooKeeper and Marathon, as well as OpenVPN so you can log into your cluster. We’re also excited that this functionality will soon be incorporated into the Google Cloud Platform dashboard via the click-to-deploy feature. There is no cost for using this service beyond the charges for running the configured instances on your Google Cloud Platform account. To get started with our web app, simply log in with your Google credentials and spin up a Mesos cluster.











We are also incorporating Kubernetes into Mesos and our Mesosphere ecosystem to manage the deployment of Docker workloads. Our combined compute fabric can run anywhere: on Google Cloud Platform, in your own datacenter, or on another cloud provider. You can schedule Docker containers side by side on the same Mesosphere cluster as other Linux workloads, from data analytics frameworks like Spark and Hadoop to more traditional tasks like shell scripts and JAR files.









Whether you are running massive, internet-scale workloads like many of our customers, or you are just getting started, we think the combination of Mesos, Kubernetes, and Google Cloud Platform will help you build your apps faster, deploy them more efficiently, and run them with less overhead. We look forward to working with Google to make Cloud Platform the best place to run traditional Mesosphere workloads, such as Marathon, Chronos, Hadoop, or Spark—or newer Kubernetes workloads. And they can all be run together while sharing resources on the same cluster using Mesos. Please take Mesosphere for Google Cloud Platform for a test drive and let us know what you think.





- Contributed by Florian Leibert, Mesosphere Co-Founder & CEO

Editor's update February 22, 2016: Click to Deploy MEAN is no longer available, but you can launch the MEAN solution on Google Cloud Platform here within Cloud Launcher.



If you’re starting out today, there are a number of development stacks to choose from. From the original LAMP (Linux, Apache, MySQL, PHP) to the myriad of other choices, there is a development stack to match your language and experience. For the NodeJS fans out there, the MEAN stack is a great option. Wouldn’t it be awesome if you could launch your favorite development stack with the click of a button?



Today, we’re announcing the first Click to Deploy development stack on Google Compute Engine. MEAN provides you with the best of open source software today:




  • MongoDB, a leading NoSQL database

  • Express Web Framework, a minimal and flexible Node.js web application framework

  • AngularJS, an extensible JavaScript framework for responsive applications

  • Node.js, a platform built on Chrome’s JavaScript runtime for server-side JavaScript




With a single button click, you can launch a complete MEAN stack ready for development! Click to Deploy for MEAN handles all the software installation and sets up a sample app to get you started.



So, get out and click to deploy your MEAN development stack today!



Learn more about running the MEAN development stack on Google Compute Engine at https://developers.google.com/cloud/mean.



-Posted by Brian Lynch,  Solutions Architect



MEAN.io is a registered trademark of Linnovate Technologies Ltd, Inc. All other trademarks cited here are the property of their respective owners.

Two months ago, Kalev Leetaru of Georgetown University announced the availability of the entire quarter-billion-record GDELT Event Database in Google BigQuery. This dataset monitors the broadcast, print, and web news media from across the world in over 100 languages. It's a database of what’s happening throughout the globe - a continuously updated, computable catalog of human society compiled from the world’s news media.



With the GDELT database publicly accessible through BigQuery, you can query and dig through a quarter-billion records in real time. To explore what BigQuery can do, the GDELT team used its ability to compute correlations, which lets us, for example, take the timeline of events in Egypt before the revolution of 2011 and then search 35 years of history for other countries around the world with similar patterns.



With a single SQL query, the GDELT team has been doing exactly that: using BigQuery to run more than 2.5 million correlations in a few minutes to trace the patterns of global society captured in GDELT’s archive. Instead of examining only the small slices of data suggested by theory or domain expertise, this experiment uses the enormous power of BigQuery on GDELT's raw data to exhaustively sift out every correlation in the entire quarter-billion-record dataset, surfacing highly unexpected patterns and findings.



In their in-depth post, the GDELT team runs a query like this:

SELECT
STRFTIME_UTC_USEC(a.ending_at, "%Y-%m-%d") ending_at1,
STRFTIME_UTC_USEC(b.ending_at-60*86400000000, "%Y-%m-%d") starting_at2,
STRFTIME_UTC_USEC(b.ending_at, "%Y-%m-%d") ending_at2,
a.country, b.country, CORR(a.c, b.c) corr, COUNT(*) c
FROM (
SELECT country, date+i*86400000000 ending_at, c, i
FROM [gdelt-bq:sample_views.country_date_matconf_numarts] a
CROSS JOIN (SELECT i FROM [fh-bigquery:public_dump.numbers_255] WHERE i < 60) b
) b
JOIN (
SELECT country, date+i*86400000000 ending_at, c, i
FROM [gdelt-bq:sample_views.country_date_matconf_numarts] a
CROSS JOIN (SELECT i FROM [fh-bigquery:public_dump.numbers_255] WHERE i < 60) b
WHERE country='Egypt'
AND date+i*86400000000 = PARSE_UTC_USEC('2011-01-27')
) a
ON a.i=b.i
WHERE a.ending_at != b.ending_at
GROUP EACH BY ending_at1, ending_at2, starting_at2, a.country, b.country
HAVING (c = 60 AND ABS(corr) > 0.254)
ORDER BY corr DESC

This query has two subqueries: the smaller one extracts the 60-day timeline of events in Egypt ending on 2011-01-27, while the left side collects every 60-day window for every country in GDELT's ever-growing dataset. With a cross join between the Egypt window and all the windows on the left side, BigQuery sifts through millions of combinations in real time and calculates the Pearson correlation of each timeline pair. For a visual explanation, see the linked IPython notebook.
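
To illustrate what the CORR(a.c, b.c) step is computing, here is a small NumPy sketch that calculates the Pearson correlation between two aligned 60-day timelines of daily article counts. The arrays are randomly generated stand-ins, not GDELT data.

# Sketch: Pearson correlation between two 60-day timelines of daily counts.
import numpy as np

rng = np.random.default_rng(0)
egypt_window = rng.poisson(lam=50, size=60).astype(float)      # stand-in for Egypt's 60 days
candidate_window = rng.poisson(lam=50, size=60).astype(float)  # stand-in for another country/period

corr = np.corrcoef(egypt_window, candidate_window)[0, 1]
print(f"Pearson correlation: {corr:.3f}")
# BigQuery's CORR() does the same computation, but across millions of
# candidate windows in a single query.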



After running this query, the GDELT team obtained from BigQuery a list of all the periods worldwide from the last 35 years, as monitored by GDELT, that have been most similar to the two months in Egypt preceding the core of its revolution. Mathematically, these periods show a statistically significant correlation with that specific window, and the GDELT team went on to look into why, and what it means. Read Kalev's post on the official GDELT blog.



You can run your own experiments based on the GDELT database or other public datasets with the free monthly terabyte of queries included with Google BigQuery.
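
If you want to try this yourself from Python, here is a minimal sketch using the google-cloud-bigquery client against the same public table referenced above. It assumes you have installed the client library and configured application default credentials for a project of yours; the query itself is a simple illustrative aggregate, not the correlation query from the post.

# Minimal sketch: query the public GDELT sample table from Python.
# Assumes `pip install google-cloud-bigquery` and application default
# credentials; the legacy-SQL flag matches the bracketed table syntax.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT country, COUNT(*) AS days_observed
FROM [gdelt-bq:sample_views.country_date_matconf_numarts]
GROUP BY country
ORDER BY days_observed DESC
LIMIT 10
"""
job = client.query(sql, job_config=bigquery.QueryJobConfig(use_legacy_sql=True))
for row in job.result():
    print(row.country, row.days_observed)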



-Posted by Felipe Hoffa, Developer Advocate

Earlier this year, thousands of developers joined us online and in person at the first Google Cloud Platform Live. These developers came from a range of companies - from technology start-ups to Fortune 500s. Some of them had used Google Cloud Platform for years, while others were new to cloud computing.



At the event, we shared a vision for the future of development, where infrastructure just works and developers can get products to market faster. And we provided a first look at Managed VMs, new developer tools, updates to BigQuery and a new model for Cloud economics.



And now, we’re doing it again.



The second Google Cloud Platform Live is taking place on November 4. And once again, we’ll be broadcasting live from San Francisco.



The day will have two tracks. The first will cover hot topics in cloud computing - whether it’s bleeding-edge container technologies or best practices for privacy and security. In the second track, Google engineers will teach you how to build a scalable app powered by Google Cloud Platform. We’ll discuss the architectural tradeoffs of IaaS vs. PaaS, then we’ll build, deploy and monitor your app. We’ll show you an end-to-end development cycle - deep on code, with real examples to make your apps even better.



People joining us in person will get to visit our Cloud Techstop to talk to our support team and solutions architects, check out the great work being done by our technology partners in the Partner Sandbox, and meet, face to face, the Googlers who build the products you know.



And, if you can’t make it to San Francisco, you can get all the content from the comfort of your laptop or at our watch party at Google New York.



Registration is open now. Google Cloud Platform Live costs $200, and early-bird discounts are available until September 5. Check out cloud.google.com/LIVE to register and learn more.



I hope to see you there.



-Posted by Joerg Heilig, Vice President of Engineering

Accessing a Google Compute Engine VM instance via Secure Shell (SSH) is a common developer task, but when you’re configuring or managing your application, doing so can take you out of the context of what you’re currently working on. Worse yet, a problem might occur when you don’t have your normal computer handy and you have to try to access the VM from a different device without your developer tools installed.



So, we asked ourselves how we could make it quicker for you to access your Compute Engine VMs in any situation. Last month we introduced the ability to SSH into a VM using the gcloud compute command in the Google Cloud SDK, which works out of the box across all major operating systems. However, we wanted to simplify it even further.


Our answer was simple: make it possible for you to SSH directly into your VM without leaving the Developers Console in your browser. We set a few ground rules: it had to be secure, and we shouldn’t require an extension or additional software downloads.



The result? Recently we rolled out the ability for anyone with edit access to your project to open an SSH connection and terminal session directly within the Developers Console website, with no additional installations. To keep your session secure, we ensure that private keys are never transmitted over the wire and that all SSH traffic is encrypted before leaving your browser.



Opening up a session is easy. From inside the Developers Console, open your project, navigate to the VM instances tab under COMPUTE > COMPUTE ENGINE, and click the SSH button. A new window will appear with the connection progress displayed. This works in the current versions of major web browsers (Google Chrome, Mozilla Firefox and Microsoft Internet Explorer 11), with no additional download required.



We also support the common case where only one “frontend” VM in the project has an external IP address, and the rest of the “backend” VMs are not routable from the public Internet. To make SSHing into those instances possible from the browser, we support agent forwarding: you can SSH into the instance with an external IP from the Developers Console and then “ssh -A” into the instances without external IPs using their addresses on the private network.



We’ve also packed in a couple of extra goodies for you. First, we keep your connection safe and secure: we use only HTTPS, generate a private key for each session that is never transmitted over the wire, and encrypt all your SSH data before it leaves the browser (that’s SSH encryption on top of HTTPS). Under the gear icon, you can switch to a light theme if you prefer, navigate back to the instance details page in the console in case you closed it, or start a new connection to the same instance if you need multiple connections.


As more of your developer workflow moves into the web browser, we’re committed to bridging the gap between the command line and the browser as seamlessly as possible, and we’re interested in hearing more ways we can do so for you. As you can see under the gear icon, we’ve also included a way for you to send us your feedback -- please send us your thoughts.



-Posted by Cody Bratt, Product Manager

We’ve continued to ship features and tools to make it easier to build your application on Google Compute Engine. In addition, Compute Engine played a key role in a number of recent customer success stories - including CI&T and Coca-Cola, Screenz and ABC’s Rising Star, AllTheCooks, and Fastly and Brightcove. Here are a few more updates for Google Compute Engine we wanted to share.



New Zones in US and Asia

We've added a third zone to both the us-central1 and asia-east1 regions, making it easier to use Compute Engine to run systems like MongoDB that use a quorum-based architecture for high availability. The new zones, us-central1-f and asia-east1-c, both support transparent maintenance right out of the gate.



SSD Persistent Disk is generally available

On June 16th, we announced the limited preview of SSD-backed persistent disks, which give you great price and performance for high-IOPS workloads. On June 25th at Google I/O, we made SSD persistent disks generally available in all Google Compute Engine zones. For a great overview of Google Cloud Platform’s block storage options, including how to decide which one is best suited to your use case, watch this video by our storage guru, Jay Judkowitz. Visit the docs pages for additional details, including instructions on how to use persistent disks with Compute Engine. Finally, this whitepaper gives you a great overview of best practices for using persistent disks.



Easier image creation from persistent disk

Speaking of persistent disks, we've made it easier for developers to create custom images right from their root persistent disks. You can now specify an existing persistent disk as the source for your Images:insert API call or gcutil addimage CLI command. To get the full scoop, be sure to check out the image creation documentation. Image creation from persistent disk makes it possible to create custom images for your Windows instances too.
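
As a rough sketch of the API path, the snippet below uses the Google API Python client to call Images:insert with a source disk. The project, zone, disk and image names are placeholders of mine, and the exact request fields should be double-checked against the current Compute Engine API reference.

# Sketch: create a custom image from an existing root persistent disk.
# Assumes `pip install google-api-python-client` and application default
# credentials; all names below are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

project = "my-project"        # placeholder project ID
zone = "us-central1-f"        # zone where the source disk lives
source_disk = "my-root-disk"  # an existing persistent disk

body = {
    "name": "my-custom-image",
    "sourceDisk": f"projects/{project}/zones/{zone}/disks/{source_disk}",
}
operation = compute.images().insert(project=project, body=body).execute()
print("Started image creation:", operation.get("name"))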



-Posted by Scott Van Woudenberg, Product Manager

In case you happened to miss some of the Cloud Platform news in July, we’ve got a round-up for you:



Expanding the Kubernetes community

This month, we announced that Microsoft, Red Hat, IBM, Docker, Mesosphere, CoreOS and SaltStack are joining the Kubernetes community. Kubernetes is our open source container management solution. These companies are going to work with us to ensure that Kubernetes is a strong container management framework for any application and in any environment - whether in a private, public or hybrid cloud.



Cloud Platform predicts the World Cup

We kicked off the month with a focus on the World Cup. We used Google Cloud Dataflow to ingest touch-by-touch gameplay data from World Cup matches going back to 2006, as well as three years of the English Barclays Premier League, two seasons of Spanish La Liga, and two seasons of U.S. MLS. We then polished the raw data into predictive statistics using Google BigQuery. At the end of the day, we correctly predicted the final outcome as well as 11 of the 12 games leading up to it. You can read our posts after the round of 16, after the quarterfinals, and before the final.



A great new way to learn about App Engine

We launched a new course on Udacity: Developing Scalable Apps with Google App Engine. We’ve already gotten great feedback from developers, and a few of our favorite sections are Urs talking about what makes App Engine unique as well as a brief history of the data center (pizza boxes included).



More container news: Red Hat Enterprise Linux Atomic Host comes to Compute Engine

Jim Totton, Vice President and General Manager at Red Hat, wrote on our blog about Red Hat Enterprise Linux Atomic Host coming to Google Compute Engine. This provides a secure, lightweight and minimal footprint operating system optimized to run Linux Containers on Google’s infrastructure.



More great customers

We featured lots of great customers who are using Google Cloud Platform to power their business. Webydo, a B2B solution for professional web design, cut costs by 37% when they moved to Google Cloud Platform. And US Cellular is using BigQuery for “highly flexible analysis of large datasets.” This has allowed them to better measure the effectiveness of marketing campaigns.



David LaBine, Director of Education Software at SMART Technologies, wrote on our blog that using App Engine means “developers [at SMART Technologies] are more productive because they’re able to focus on writing new features rather than worrying about infrastructure…” Rafael Sanches, co-founder of Allthecooks, wrote on our blog that, “Google Cloud Platform played a key role in helping us grow... Since launching, we’ve grown to over 12 million users with a million monthly active users. Our application now sees millions of interactions daily that run through Google App Engine and Google Cloud Datastore.”



Finally, Brightcove and Fastly wrote on our blog that “because Google Cloud Platform launches instances in less than half the time of the rest of the industry, Fastly is able to launch new customers through Brightcove in a turnkey way.”



More product news

We introduced the Google Cloud Monitoring Read API, giving developers programmatic access to over 30 different metrics about their services, including CPU usage, disk IO and much more. The Cloud Monitoring Read API lets you query current and historical metric data from up to 30 days in the past.
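
As a hedged sketch of what programmatic access could look like, the snippet below calls the v2beta2 Cloud Monitoring Read API through the Google API Python client. The service name, version, metric identifier and parameter names are my assumptions from that era's documentation (the API has since been superseded by the Cloud Monitoring v3 API), so verify them against the current reference before relying on them.

# Hedged sketch: list recent timeseries for a CPU metric via the
# (now-superseded) Cloud Monitoring Read API v2beta2. Service, version,
# metric and parameter names here are assumptions -- check the reference.
from datetime import datetime, timezone
from googleapiclient import discovery

monitoring = discovery.build("cloudmonitoring", "v2beta2")

response = monitoring.timeseries().list(
    project="my-project",                                      # placeholder
    metric="compute.googleapis.com/instance/cpu/usage_time",   # assumed metric name
    youngest=datetime.now(timezone.utc).isoformat(),           # end of the query window
).execute()

for series in response.get("timeseries", []):
    print(series)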



Also, click-to-deploy Apache Cassandra makes it easy to launch a dedicated Apache Cassandra cluster on Google Compute Engine. All it takes is some basic information and one click, and in a matter of minutes you have a complete Cassandra cluster deployed and configured.



The roadshows kicked off

The Google Cloud Platform developer roadshow visited Los Angeles, San Francisco and Seattle in July. But, we’ve still got much of the tour coming up, so join us on the road to speak with the Cloud Platform team. You can still catch us in New York City (August 5), Cambridge (August 7), Boulder (August 12), Toronto (August 12), Austin (August 14), Atlanta (August 19), and Chicago (August 22). Click here to register.



-Posted by Benjamin Bechtolsheim, Product Marketing Manager