Google Cloud Platform (GCP) is growing all the time and we love introducing new products and features and getting them into your hands. This rapid pace of innovation does mean that there is always something new to learn about and this can take up a lot of your time. We also know that GCP isn’t the only cloud platform you’re using or have used, and it’s important that we help you leverage that experience to get up to speed fast.



Our goal is to make it easier for you to stay on top of the services we offer and help you map your existing expertise to GCP at the same time. To that end, we are happy to release a new whitepaper that we have created to help you apply your existing knowledge and expertise to GCP.



This document is the first part of an ongoing series. We start with the basics of how to complete base-level operations, followed by a deep dive into the virtual compute platforms and the underlying networks. In the coming months we’ll add more information on how to work with storage, data, containers, and much more.



We hope you find this useful in learning about GCP. Please tell us what you think and what else you would like us to add. And don't forget to use our free trial to try out the things you've learned!



- Posted by Peter-Mark Verwoerd, Cloud Solutions Architect

Last June, we kicked off Google Cloud Platform Next, and so many of you wanted to come that we had to move to a different location to accommodate everyone! Today, we’re excited to announce GCP NEXT 2016 in San Francisco: the event created specifically for you to learn about Google Cloud Platform directly from those who built it. You’ll also hear from developers and organizations that use Google Cloud Platform to build and run their businesses.



At GCP NEXT 2016, you’ll see how cutting-edge features in Google Cloud Platform will help you build powerful, reliable and intelligent applications at any scale.







Set your calendars (or your DeLorean) for March 23 - 24, 2016 to join us at GCP NEXT, where you’ll:




  • Hear about the latest in Google Cloud Platform product developments. Watch in-depth product demos and exclusive talks from SVPs Diane Greene and Urs Hölzle.



  • Try your hand at different parts of the platform in code labs. Get hands-on experience with our platform through immersive tutorials, led by Google engineers and advocates.



  • Enjoy the NEXT Playground. Play with fun demos powered by Google Cloud Platform, and find out how they were made to see what’s possible with cloud. Chat and exchange ideas with other technologists in the hallway track.



  • Learn the fundamentals of Google Cloud Platform. Need to learn the basics of our platform or want to refresh your skills? Join us before GCP NEXT for a full-day, instructor-led Bootcamp on March 22.



  • Join us for some fun at the after-party. Network with our community and attend our after-party, NEXT After Dark.




We’ll also host tracks that dive a little deeper into specific areas of cloud, so you can learn more about topics that interest you most. This year, we’re opening up a call for speakers, and we hope to see you submit a session proposal! Share your project or experiences using our platform in one of our featured tracks:




  • Data and Analytics. Data is key to intelligent applications and decision making. Learn how Google Cloud Platform can help you build more intelligent applications and make better, more timely decisions.



  • Infrastructure and Operations. Learn how Google’s infrastructure — including our networks, storage, security, data center operations and DevOps tools — gives you scale, security and reliability. Sessions in this track will cover popular tooling, common DevOps patterns and how to manage at any scale.



  • App and Services Development. Want to understand how different components of Google Cloud Platform can work together in a variety of configurations? In this track, we'll discuss topics such as app architecture, development, deployment and continuous integration.



  • Solutions Showcase. Listen to some of our customers talk about how they’re using Google Cloud Platform in production. From cloud-native startups to enterprises in the process of migrating to cloud, they'll tell you about their experiences powering everything from mobile applications to mission-critical deployments. Hear about practical solutions, patterns, and lessons that you can apply to your own applications.




We look forward to seeing you at GCP NEXT 2016. If you can’t make it in person, catch sessions via livestream. Registration opens today, and the call for speakers closes on January 15, 2016; make sure to get your proposal submitted in time!



cloud.google.com/Next2016



To keep up to date on GCP NEXT 2016, follow us on Google+, Twitter, and LinkedIn.



- Posted by Julia Ferraioli, Developer Advocate, Google Cloud Platform

Today’s guest post comes from Salvatore Sferrazza and Sebastian Just from FIS Global, an international provider of financial services and technology solutions. Salvatore and Sebastian tell us how Google Cloud Dataflow transforms fluctuating, large-scale financial services data so that it can be accurately captured and moved across systems.



Much software development in the capital markets (and enterprise systems in general) revolves around the transformation, enrichment and movement of data from one system to another. The unpredictable nature of financial market data volumes, often driven by volatility, exacerbates the pain of scaling and posting data when and where it’s needed for daily trade reconciliation, settlement and regulatory reporting. The implications of technology missteps within such crucial business processes range from missed business opportunities to undesired risk exposure to regulatory non-compliance. These activities must be relentlessly predictable, repeatable and measurable to yield maximum value to stakeholders.



While developers rely on the Extract, Transform and Load (ETL) activities that are so crucial to processing data, they now face limits on the speed and efficiency of ETL as the volume of transactions grows faster than it can be processed. As shortened settlement durations and the Consolidated Audit Trail (CAT) loom on the horizon, financial services institutions need simple, fast and powerful approaches to scale quickly and ultimately mitigate time-sensitive risks and operational costs.



Traditionally, developers have considered ETL an unglamorous yet necessary dimension of building software products, encapsulating functions that are core to every tier of computing. So when data-driven enterprises are tasked with harvesting insights from massive data sets, it’s quite likely that ETL, in one form or another, is lurking nearby. But in today’s world, data can come from anywhere and in any format, creating a series of labor, time and intellectual challenges. While there may be hundreds of ways to solve the problem, few provide the efficiency and effectiveness needed in our “big data” world — until recently.



The Google Cloud Dataflow service and its associated software development kit (SDK) provide a series of powerful tools for a myriad of data transformation duties. Designed to perform data processing tasks of any size in a managed services environment, Google Cloud Dataflow simplifies the mechanics of large-scale transformation and supports both batch and stream processing using the same programming model. In our latest white paper, we introduce some of the main concepts behind building and running applications that use Dataflow, then get “hands on” with a job to transform and ingest options market symbol data before storing the transformations within a Google BigQuery data set.
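

To make that pipeline shape concrete, here is a minimal, hedged sketch of a similar job written with the Apache Beam Python SDK (the white paper itself uses the Dataflow Java SDK); the bucket, table, schema and CSV layout below are hypothetical placeholders rather than the paper's actual code.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_symbol(line):
        # Hypothetical CSV layout: symbol,price,timestamp.
        symbol, price, ts = line.split(',')
        return {'symbol': symbol, 'price': float(price), 'ts': ts}

    # Runs locally by default; pass DataflowRunner, project and temp_location
    # options to run the same pipeline on the managed Dataflow service.
    with beam.Pipeline(options=PipelineOptions()) as p:
        (p
         | 'Read' >> beam.io.ReadFromText('gs://my-bucket/options_symbols.csv')
         | 'Parse' >> beam.Map(parse_symbol)
         | 'Write' >> beam.io.WriteToBigQuery(
             'my-project:market_data.symbols',
             schema='symbol:STRING,price:FLOAT,ts:TIMESTAMP',
             create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
             write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))

The same pipeline shape handles batch or streaming input by swapping the source, which is the "same programming model" property described above.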



In short, Google Cloud Dataflow allows you to focus on data processing tasks and not cluster management. Rather than asking you to guess the right cluster size, Dataflow automatically scales up or down horizontally as much as needed for your exact processing requirements. This includes scaling all the way down to zero when there is no work, so you’re never paying for an idle cluster. Dataflow also alleviates the pain of writing ETL jobs by standardizing the process of implementing application requirements. As a result, you’ll be able to focus on the data transformations you need to make rather than on the processing mechanics themselves. This not only provides greater flexibility, lower latency and enhanced control of ETL jobs; it offers built-in cost management and ties together other useful Google Cloud services. Beyond common ETL, Dataflow pipelines may also include inline computation ranging from simple counting to highly complex, multi-step analysis. In our experience with the service so far, it can potentially remove much of the work from engineers within financial institutions and regulatory organizations, while providing elasticity to the entire process and ensuring accuracy, scale, performance and cost efficiency.



As market volatility and reporting requirements drive the need for accuracy, low latency and risk reduction, transforming and interpreting market data in a big data world is imperative to trading efficiency and accessibility. Every second counts. With a more cost-effective, real-time and scalable method of processing an ever-increasing volume of data, financial institutions will be able to address specific requirements and volumes at hand while keeping up with the demands of a rapidly evolving global financial system. We hope our experience, as captured in the technical white paper, will prove useful to others in their quest for a more effective way to process data.



Please see this paper’s GitHub page for the complete and buildable project source code.



- Posted by Salvatore Sferrazza, Principal at FIS and Sebastian Just, Manager at FIS

You’ve decided to adopt a microservice architecture and containerize your application. Congrats! But how will you monitor it? To solve that problem, we've worked to make Google Container Engine and Google Cloud Monitoring fit together like peas in a pod.



When you launch your Container Engine cluster, you can enable Cloud Monitoring with one click. Check it out!



Information will be collected about the CPU usage, memory usage and disk usage for all of the containers in your cluster. This information is annotated and stored in Cloud Monitoring, where you can choose to either access it via the API or in the Cloud Monitoring UI. From Cloud Monitoring, you can easily examine not only the container level resource usage but also see this aggregated across pods and clusters.



If you head over to the Cloud Monitoring dashboard and click on the Infrastructure dropdown, you can see a new option for Container Engine.











If you have more than one cluster with monitoring enabled, you'll see a page listing the clusters in your project along with how many pods and instances are in them. However, if you only have one cluster, you'll be directed straight to details about it, as shown below.



This page gives you a view of your cluster. It lists all the pods running in your cluster, recent events from the cluster, as well as resource usage aggregated across the nodes in your cluster. In this case, you can see that this cluster has the system components in it (DNS, UI, logging and monitoring) as well as the frontend and redis pods from the guestbook tutorial in the Container Engine documentation.



From here, you can easily drill down to the details of individual pods and containers, where you'll see metadata about the pod and its containers, such as how many times they've been restarted, along with metrics about the pod's resource usage.



But this is just the first piece. Since Cloud Monitoring makes heavy use of tags (the equivalent of Container Engine's labels), you can create groups based on how you've labeled your containers or pods. For example, if you're running a web app in a replication controller, you may have all of your frontend web containers labeled with “role=frontend.” In Cloud Monitoring, you can now create a group “Frontend” that matches all resources with the tag role and the value frontend.
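

This grouping works because Cloud Monitoring imports the same labels you attach to your Kubernetes resources. As a hedged illustration of the label side (using today's kubernetes Python client, which postdates this post), the snippet below lists exactly the pods a "Frontend" group would match; the namespace and label are the example values above.

    from kubernetes import client, config

    # Load credentials from your local kubeconfig, e.g. after running
    # `gcloud container clusters get-credentials <cluster>`.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Select the same pods a Cloud Monitoring "Frontend" group would match.
    pods = v1.list_namespaced_pod('default', label_selector='role=frontend')
    for pod in pods.items:
        print(pod.metadata.name, pod.status.pod_ip)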



You can also make queries that aggregate across pods without needing to create a group, making it possible to visualize the performance of an entire replication controller or service on a single graph. You can do this by creating a new dashboard from the top-level menu option named Dashboards, and adding a chart. In the example below, you can see the aggregated memory usage of all the php-redis frontend pods in the cluster.









With these tools, you can create powerful alerting policies that trigger when the aggregate across the group, or any container within the group, violates a threshold (for example, by using too much memory). You can also tag your group as a cluster so that Cloud Monitoring's cluster insights detection will surface outliers across the set of containers, potentially helping you pinpoint cases where your load isn't evenly distributed or nodes don't have even workloads.











And since this is all based on tags, it will update automatically, even as your containers move across the nodes of your cluster, even if you're auto-scaling and adding and removing nodes over time.



We have a lot more work planned to further integrate Container Engine and Cloud Monitoring, making it easy to collect your application and service metrics alongside the system metrics you can use today.



Do you have ideas of what we should do to make things better? Let us know by sending feedback through the Cloud Monitoring console or directly at monitoring-and-logs-feedback@google.com. You can find more information on the available metrics in our docs.



- Posted by Alex Robinson, Software Engineer, Google Container Engine and Jeremy Katz, Software Engineer, Google Cloud Monitoring



















Not having a full view of administrative actions in your Google Cloud Platform projects can make troubleshooting slow and difficult when an important application breaks or stops working. It can also make it difficult to monitor access to sensitive data and resources managed by your project. That’s why we created Google Cloud Audit Logs, and today they’re available in beta for App Engine and BigQuery. Cloud Audit Logs help you with your audit and compliance needs by enabling you to track the actions of administrators in your Google Cloud Platform projects. They consist of two log streams: Admin Activity and Data Access.






Admin Activity audit logs contain an entry for every administrative action or API call that modifies the configuration or metadata for the related application, service or resource, for example, adding a user to a project, deploying a new version in App Engine or creating a BigQuery dataset. You can inspect these actions across your projects on the Activity page in the Google Cloud Platform Console.
















Data Access audit logs contain an entry for every one of the following events:



  • API calls that read the configuration or metadata of an application, service or resource



  • API calls that create, modify or read user-provided data managed by a service (e.g. inserting data into a dataset or launching a query in BigQuery)








Currently, only BigQuery generates a Data Access log as it manages user-provided data, but ultimately all Cloud Platform services will provide a Data Access log.






There are many additional uses of Audit Logs beyond audit and compliance needs. In particular, the BigQuery team has put together a collection of examples that show how you can use Audit Logs to better understand your utilization and spending on BigQuery. We’ll be sharing more examples in future posts.






Accessing the Logs


Both of these logs are available in Google Cloud Logging, which means that you’ll be able to view the individual log entries in the Logs Viewer as well as take advantage of the many logs management capabilities available, including exporting the logs to Google Cloud Storage for long-term retention, streaming to BigQuery for real-time analysis and publishing to Google Cloud Pub/Sub to enable processing via Google Cloud Dataflow. The specific content and format of the logs can be found in the Cloud Logging documentation for Audit Logs.






Audit Logs are available to you at no additional charge. Applicable charges for using other Google Cloud Platform services (such as BigQuery and Cloud Storage) as well as streaming logs to BigQuery will still apply. As we find more ways to provide greater insight into administrative actions in GCP projects, we’d love to hear your feedback. Share it here: gcp-audit-logging-feedback@google.com.





Posted by Joe Corkery, Product Manager, Google Cloud Platform




































In October, we announced the launch of Google Cloud Shell, a Google Cloud Platform feature that lets you manage your infrastructure and applications from the command line in any browser. At that time we committed that Cloud Shell beta would be free through 2015, and today we have extended this to the end of 2016!






With the holiday season upon us, you might not always have access to the computer you use to manage your application daily. With Cloud Shell, it just takes one click in the console to get quick, temporary access to a VM hosted and managed by Google, with the most common tools needed to manage GCP pre-installed. If you need to store something between sessions, you’ll have 5GB of storage space.




Cloud Shell in GCP Cloud Console 

















We’ve seen strong enthusiasm around these new capabilities from the community:




“Cloud shell, the new UI, and the depth of each service and it’s documentation puts @googlecloud on top for me. Quality over quantity” - @SageProgramming






“Cloud shell + container engine from @googlecloud make quick work of configuring @kubernetesio projects. Nothing to install but a browser!” - @nissyen






But you also told us that a free beta period through the end of 2015 was too short. With that in mind, we’re excited to extend the free beta period for another year, until the end of 2016.






Here are just a few of the things you can try out in Cloud Shell during this period:











We hope you give it a try. To share feedback or volunteer for a user experience research study, please email us at gcp-shell-feedback@google.com.





- Posted by Cody Bratt, Product Manager



Today we’re giving you better cost controls in BigQuery to help you manage your spend, along with improvements to the streaming API, a performance diagnostic tool, and a new way to capture detailed usage logs.



BigQuery is a Google-powered supercomputer that lets you derive meaningful analytics in SQL while paying only for what you use. This makes BigQuery an analytics data warehouse that’s both powerful and flexible. Those accustomed to a traditional fixed-size cluster – where cost is fixed, performance degrades with increased load, and scaling is complex – may find granular cost controls especially helpful in budgeting their BigQuery usage.



In addition, we’re announcing availability of BigQuery access logs in Audit Logs Beta, improvements to the Streaming API, and a number of UI enhancements. We’re also launching Query Explain to provide insight on how BigQuery executes your queries, how to optimize your queries and how to troubleshoot them.




Custom Quotas: No fear of surprise when the bill comes




Custom quotas allow you to set daily quotas that will help prevent runaway query costs. There are two ways you can set the quota:




  • Project wide: an entire BigQuery project cannot exceed the daily custom quota.

  • Per user: each individual user within a BigQuery project is subject to the daily custom quota.







Query Explain: understand and optimize your queries


Query Explain shows, stage by stage, how BigQuery executes your queries. You can now see if your queries are write, read or compute heavy, and where any performance bottlenecks might be. You can use Query Explain to optimize queries, troubleshoot errors or understand whether BigQuery Slots might benefit you.



In the BigQuery Web UI, use the “Explanation” button next to “Results” to see this information.






Improvements to the Streaming API


Data is most valuable when it’s fresh, but loading data into an analytics data warehouse usually takes time. BigQuery is unique among warehouses in that it can easily ingest a stream of up to 100,000 rows per second per table, available for immediate analysis. Some customers even stream 4.5 million rows per second by sharding ingest across tables. Today we’re bringing several improvements to BigQuery Streaming API.




  • Streaming API in EU locations. It’s not just for the US anymore: you may now use the Streaming API to load data into your BigQuery datasets residing in the EU.

  • Template tables are a new way to manage related tables used for streaming. They allow an existing table to serve as a template for a streaming insert request. The generated table will have the same schema, and be created in the same dataset and project as the template table. Better yet, when the schema of the template table is updated, the schema of the tables generated from it will also be updated (see the sketch after this list).

  • No more “warm-up” delay. After streaming the first row into a table, we no longer require a warm-up period of a couple of minutes before the table becomes available for analysis. Your data is available immediately after the first insertion.
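

To illustrate the template tables behavior, here is a hedged Python sketch, assuming the current google-cloud-bigquery client and its template_suffix argument (which maps to the insertAll API's templateSuffix field); the project, dataset, table and suffix names are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client(project='my-project')

    # 'trades' is the template table; the rows land in a generated table named
    # 'trades_20160101' that inherits the template's schema.
    rows = [{'symbol': 'GOOG', 'qty': 100, 'price': 742.95}]
    errors = client.insert_rows_json('my-project.market_data.trades', rows,
                                     template_suffix='_20160101')
    if errors:
        print('Streaming insert errors:', errors)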





Create a paper trail of queries with Audit Logs Beta




BigQuery Audit Logs form an audit trail of every query, every job and every action taken in your project, helping you analyze BigQuery usage and access at the project level, or down to individual users or jobs. Please note that Audit Logs is currently in Beta.



Audit Logs can be filtered in Cloud Logging, or exported back to BigQuery with one click, allowing you to analyze your usage and spend in real-time in SQL.
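

For example, once the logs are flowing into BigQuery, a usage roll-up is a single query away. The sketch below is hedged: the export dataset, table name and protopayload field paths are assumptions about the export layout, so adjust them to match what your own export produces.

    from google.cloud import bigquery

    client = bigquery.Client(project='my-project')

    # Count Data Access events per user in an assumed audit-log export table.
    query = """
        SELECT
          protopayload_auditlog.authenticationInfo.principalEmail AS principal,
          COUNT(*) AS calls
        FROM `my-project.auditlog_export.cloudaudit_googleapis_com_data_access`
        GROUP BY principal
        ORDER BY calls DESC
        LIMIT 10
    """
    for row in client.query(query).result():
        print(row.principal, row.calls)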



With today’s announcements, BigQuery gives you more control and visibility. BigQuery is already very easy to use, and with recently launched products like Datalab (a data science notebook integrated with BigQuery), just about anyone in your organization can become a big data expert. If you’re new to BigQuery, take a look at the Quickstart Guide, and the first 1TB of data processed per month is on us. To fully understand the power of BigQuery, check out the documentation and feel free to ask your questions using the “google-bigquery” tag on Stack Overflow.



- Posted by Tino Tereshko, Technical Program Manager

Google Cloud SQL is an easy-to-use service that delivers fully managed MySQL databases. It lets you hand off to Google the mundane, but necessary and often time consuming tasks — like applying patches and updates, managing backups and configuring replications — so you can put your focus on building great applications. And because we use vanilla MySQL, it’s easy to connect from just about any application, anywhere.



The first generation of Cloud SQL was launched in October 2011 and has helped thousands of developers and companies build applications. As Compute Engine and Persistent Disk have made great advancements since their launch, the second generation of Cloud SQL builds on their innovation to deliver an even better, more performant MySQL solution at a better price/performance ratio. We’re excited to announce the beta availability of the second generation of Cloud SQL — a new and improved Cloud SQL for Google Cloud Platform.




Speed, more speed and scalability




The two principal goals of the second generation of Cloud SQL are: better performance and scalability per dollar. The performance graph below speaks for itself. Second generation Cloud SQL is more than seven times faster than the first generation of Cloud SQL. And it scales to 10TB of data, 15,000 IOPS and 104GB of RAM per instance — well beyond the first generation.






Source: Google internal testing








Yoga for your database (Cloud SQL is flexible)




Cloud users appreciate flexibility. And while flexibility is not a word frequently associated with relational databases, with Cloud SQL we’ve changed that. Flexibility means easily scaling a database up and down. For example, a database that’s growing in size and number of queries per day might require more CPU cores and RAM. A Cloud SQL instance can be changed to allocate additional resources to the database with minimal downtime. Scaling down is just as easy.



Flexibility means easily connecting to your database from any client with Internet access, including Compute Engine, Managed VMs, Container Engine and your workstation. Connectivity from App Engine is only offered for Cloud SQL First Generation right now, but that will change soon. Because we embrace open standards by supporting MySQL Wire Protocol, the standard connection protocol for MySQL databases, you can access your managed Cloud SQL database from just about any application, running anywhere. For example:




  • Use all your favorite tools, such as MySQL Workbench, Toad and the MySQL command-line tool to manage your Cloud SQL instances

  • Get low latency connections from applications running on Compute Engine and Managed VMs

  • Use standard drivers, such as Connector/J, Connector/ODBC, and Connector/NET, making it exceptionally easy to access Cloud SQL from most applications
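

As one concrete illustration, here is a minimal connection sketch using MySQL Connector/Python (the Python sibling of the Connector/J and Connector/NET drivers mentioned above); the IP address, credentials and database name are placeholders.

    import mysql.connector

    # Connect over the standard MySQL wire protocol, exactly as you would to
    # any other MySQL server.
    conn = mysql.connector.connect(
        host='173.194.230.10',   # the Cloud SQL instance's IPv4 address
        user='appuser',
        password='change-me',
        database='guestbook')

    cursor = conn.cursor()
    cursor.execute('SELECT NOW()')
    print(cursor.fetchone())

    cursor.close()
    conn.close()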






Flexibility also means easily starting and stopping databases. Many databases must run 24x7, but some are used only occasionally for brief or infrequent tasks. Cloud SQL can be managed using the Cloud Console (our browser-based administration console), the command line (the gcloud tool in our Cloud SDK) or a RESTful API. The command line interface (CLI) and API make Cloud SQL administration scriptable and help users maximize their budgets by running their databases only when they’re needed.



The graph below shows the number of active Cloud SQL database instances running over time. Notice the clusters of five sawtooth-like ridges and then a drop for two additional ridges. These clusters show an increased number of databases running during business hours on Monday through Friday each week. Database activity, measured by the number of active databases, falls outside of business hours, especially on the weekends. This repeated rise and fall of database instances is a great example of flexibility. Its magnitude is helped significantly by first generation Cloud SQL’s ability to automatically sleep when it is not being accessed. While this is not a design goal of the second generation of Cloud SQL, users can quickly create and delete, or start and stop databases that only need to run on occasion. Cloud SQL users get the most from their budget because of the service’s flexibility.










What is a "managed" MySQL database?




Cloud SQL delivers fully managed MySQL databases, but what does that really mean? It means Google will apply patches and updates to MySQL, manage your backups, configure replication and provide automatic failover for High Availability (HA) in the event of a zone outage. It also means that you get Google’s operational expertise for your MySQL database. Google’s team of MySQL experts make configuring replication and automatic failover a breeze, so your data is protected and available. They also patch your database when important security updates are delivered. You choose when (day and time of week) the updates should be applied, and Google’s team takes care of the rest. This, combined with Cloud SQL’s automatic encryption of database tables, temporary files and backups, ensures your data is secure.



High Availability, replication and backups are configurable, so you can choose what's appropriate for each of your database instances. For development instances, you can choose to opt out of replication and automatic failover, while your production instances are fully protected. Even though we manage the database, you’re still in control.




Pricing: commitment issues




Getting the best Cloud SQL price doesn’t require you to commit to a one- or three-year contract. To get the best Cloud SQL price, just run your database 24x7 for the month. That’s it. If you use a database infrequently, you’ll be charged by the minute at the standard price. But there’s no need to decide upfront and Google helps find savings for you. No commitment, no strings attached. As a bonus, everyone gets the 100% sustained use discount during Beta, regardless of usage.




Ready to get started?




If you haven’t signed up for Google Cloud Platform, do so now and get a $300 credit to test drive Cloud SQL. The second generation Cloud SQL has inexpensive micro instances for small applications, and easily scales up and out to serve performance-intensive applications.



You can also take advantage of our growing partner ecosystem and tools to make working in Cloud SQL even easier. We’ve partnered with Talend, Attunity, Dbvisit and Xplenty to help you streamline the process of loading your data into Cloud SQL and with analytics products Tableau, Looker, YellowFin and Bime so you can easily create rich visualizations for meaningful insights. We’ve also integrated with ScaleArc and WebYog to help you monitor and manage your database and have partnered with service providers like Pythian, so you can have expert support during your Cloud SQL implementations. Reach out to any of our partners if you need help getting up and running.




Bottom Line




Cloud SQL Second Generation makes what customers love about Cloud SQL First Generation faster and more scalable, at a better price per performance.









- Posted by Brett Hesterberg, Product Manager, Google Cloud Platform

In February 2015, Google Cloud Platform and 30+ industry leaders and researchers launched PerfKit Benchmarker (PKB). PKB is an open source cloud benchmarking tool with more than 500 contributors from across the industry, including major cloud providers, hardware vendors and academia.



Today we're proud to announce our version 1.0 release. PKB supports nine cloud providers (AliCloud, Amazon Web Services, CloudStack, DigitalOcean, Google Cloud Platform, Kubernetes, Microsoft Azure, OpenStack and Rackspace), as well as the machine under your desk or in your datacenter. It fully automates 26 benchmarks covering compute, network and storage primitives, common applications like Tomcat and Cassandra, as well as cloud-specific services like object storage and managed MySQL. It also offers popular benchmarks such as EPFL EcoCloud Web Search and EPFL EcoCloud Web Serving.
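

For a sense of what a run looks like, here is a hedged sketch that drives PKB from Python; --cloud, --benchmarks and --machine_type are real PKB flags, but the checkout location and the benchmark selection here are illustrative.

    import subprocess

    # Assumes a local checkout of the PerfKitBenchmarker repository with its
    # dependencies installed. PKB provisions the VMs, runs the selected
    # benchmarks and tears everything down when it finishes.
    subprocess.run(
        ['./pkb.py',
         '--cloud=GCP',
         '--benchmarks=iperf,fio',
         '--machine_type=n1-standard-4'],
        cwd='PerfKitBenchmarker',
        check=True)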



Since we first released PKB, we've seen strong engagement from researchers, industry partners and universities, making it a real community effort. PKB is being used today to measure performance across public clouds, bare-metal hardware and hardware simulations.




Fig. 1 PerfKit Benchmarker architecture







We're now declaring PerfKit Benchmarker V1 because the community believes we have the right set of benchmarks to cover the most common usage scenarios, the framework provides the right abstractions to make it easy to extend and maintain, and we've achieved the right balance between variance and runtime. PKB will continue to evolve and improve, covering new workloads and scenarios to keep pace with ever-changing cloud development design patterns.



Javier Picorel, a researcher from EPFL EcoCloud, explained here why CloudSuite chose to integrate with PKB:


“Cloud computing has become the dominant computing platform for delivering scalable online services to a global user base all over the world. The constant emergence of new services, growing user bases and data deluge result in ever-increasing computational requirements for the service providers. Popular online services, such as web search, social networks and video streaming, are hosted by private or public cloud providers in large cloud-server systems, which comprise thousands of servers. Since its inception, CloudSuite has emerged as a popular suite of benchmarks, both in industry and among academics, to benchmark the performance of cloud servers. CloudSuite facilitates research in the field of servers specialized for cloud workloads (e.g., Scale-out Processors) and the development of real products in the server industry (e.g., Cavium Thunderx Processor).


We believe that PerfKit Benchmarker (PKB) is a step towards the standardization of cloud benchmarking. In essence, we envision PKB as the “SPEC for cloud-server systems.” Naturally, our goals match with PKB's and the strong consortium put together by Google convinced us to team up. On the technical side, we are excited about the standard APIs that PKB provides, enabling the automated deployment of our benchmarks into the most well-known cloud-server providers. We believe that PKB has all the potential to be established as the de-facto standard for cloud benchmarking. Therefore, we expect it to grab the attention of cloud providers, academics and industry, while integrating more and the most recent online services.”



Carlos Torres is a Performance Engineer at Rackspace. At Rackspace, Carlos and his team help other developers performance test their products. They identify critical performance scenarios, develop benchmarks and tools to facilitate the execution and analysis of those scenarios, and provide guidance to other teams to develop their performance tests. Here's what he said:


“There are two main cases where I use PKB. One is to provide data for comparative analysis of hardware/software configurations to understand their performance characteristics, and the other is for measuring and tracking performance across software releases. PKB has brought me multiple benefits, but if I had to choose three, I'd say speed, reproducibility and flexibility.


Speed: Before PKB, configuring and executing a complex benchmark that made use of a multi-node distributed system, such as a 9-node Hadoop cluster with HDFS, took hours of tedious setup, scripting and validation. Maintenance of those scripts, and knowing the current best practices for deploying such systems, was a nightmare. Once you executed a benchmark, gathering the data from the tests usually involved manually executing scripts to scrape, parse and copy the data from multiple machines. Now, with PKB, it is very easy to execute, not one, but even multiple of these benchmarks, against every major cloud, usually with just one command. I can rely on the community's expertise, and for the most part, trust the configurations provided with each of the benchmarks. Finally, PKB makes it really easy to consume the data gathered from the tests, since it produces JSON output of the results.


Reproducibility: Just like in science, reproducibility is a very important aspect of performance engineering. To confirm that either a bottleneck exists, or that it has been fixed, it is necessary to reliably reproduce a workload. Previous to PKB, it was tedious to keep track of all the software versions and configuration settings needed to replicate a benchmark, which sometimes were not documented and hence forgotten. This made reproducibility hard, and error prone. By using the same PKB version, with a single command, I can easily recreate complex benchmarks, and know that I'm executing the same set of software since PKB explicitly tracks versions of the applications, benchmarks and tools used. Also by just sharing the command I used for a test, other users can recreate the same test in no time. 


Flexibility: One of the best features of PKB is the ability to execute the same benchmarks across different cloud providers, machine types and compatible operating system images. While PKB ships with great defaults for most benchmarks, it makes it very easy to execute the benchmarks using custom settings, using command switches or configuration files that a benchmark might optionally accept. PKB doesn't just make executing benchmarks easy, but contributing new benchmarks is simple as well. It provides a set of libraries that benchmark writers can use to write, for the most part, OS-agnostic benchmark installations.”



Marcin Karkocha and Mateusz Blaszkowski from Intel are working in the software-defined infrastructure (SDI) business to make reference implementations of private clouds.


“We try to determine how specific cloud configuration options impact workloads running inside the instances. Based on this, we want to create reference architectures for different clouds. We also run Perfkit benchmarks in order to compare and calculate capabilities of reference architectures from different providers. In our case, PKB is used in a private cloud, so we have slightly different requirements and problems to solve. We do not focus on comparing public cloud offerings. Instead, we try to find out what is the most efficient HW/SW configuration for a specific cloud.


PKB as a framework gives us a possibility to create new plugins for providers and benchmarks. Thanks to this, we are able to easily build a custom benchmarking solution which meets most of our requirements.”



Daniel Sanchez, an Assistant Professor at MIT EECS, told us:


“We are using PKB to investigate new multicore systems. In particular, we are designing new hardware and software techniques that allow servers to provide low, predictable response latencies efficiently. 


PerfKit has made it much easier for us to simulate new hardware techniques on a broad array of cloud computing benchmarks. This is crucial for our work, because traditional benchmark suites are more focused on batch applications, which have quite different needs from cloud computing workloads.”



Our goal has always been to help customers make sense of the various cloud products and providers in a simple, transparent way. We want to be innovative, accountable and inclusive in our approach. We're happy to see this effort being welcomed by the industry and academia, and we welcome new partners and feedback.



We invite you to try PerfKit Benchmarker V1 to measure the cloud, and to join the Open Source Benchmarking effort on GitHub.

Do you love Python but hate tracking down bugs in production when time is of the essence? Cloud Debugger can help you identify the root-cause in a few clicks. With our lightning fast, low overhead debugger agent, you simply select the line of code and the debugger returns the local variables and a full stack trace when that line is next executed on any of your instances – all without halting your application or slowing down any requests.



Throughout this year we expanded support for Java projects for Google App Engine and Google Compute Engine. Recently we enabled support for Go projects on Compute Engine. Now Python developers can get in on the fun on App Engine and Compute Engine.
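

On Compute Engine, attaching the agent takes a few lines at startup. The snippet below is a hedged sketch that assumes the google-python-cloud-debugger package and its enable() entry point; on App Engine the debugger is enabled for you.

    # Early in your application's startup code:
    try:
        import googleclouddebugger
        googleclouddebugger.enable()
    except ImportError:
        # The app still runs normally if the debugger agent isn't installed.
        pass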





Cloud Debugger adds zero overhead on projects that are not being actively debugged. It also adds less than 10ms to request latency when capturing application state without blocking requests to your application.



With this release, Cloud Debugger is now available for Java, Python and Go projects. Try it out today. Support for additional programming languages and frameworks is in the works.



As always, we’d love direct feedback and will be monitoring Stack Overflow for issues and suggestions.



Posted by Keith Smith, Product Manager





For over a decade, we’ve helped evolve the landscape of cloud computing. In that time, we’ve seen plenty of changes — and the past 12 months have been no exception. From widespread adoption of containers to multi-cloud applications, 2015 was truly transformational.







Here, we’ve put together our top moments and themes of the year. Take a look, then tell us on G+ and Twitter what story or trend you’d add, using #CloudTop10.




1. Enterprise, meet Cloud.


For most organizations, Cloud is no longer a question of “if,” but “when”— and according to new estimates, it’ll be sooner than you might think: 34% of enterprises report plans to host over 60% of their apps on a cloud platform in the next two years. In anticipation, most vendors have taken steps to support enterprise workloads. Just look at Microsoft Azure’s partnership with HP and Google’s custom machine types.




2. Containers rush into the mainstream.


Even a year ago, many developers hadn’t yet given containers a try. Fast-forward to 2015 and we saw containers used not just in testing — but widely adopted in production. In fact, container adoption grew 5X in 2015 according to a recent survey. How did this happen so quickly? Part of the answer lies in the availability of robust open-source technologies like Docker and the Kubernetes project. With these technologies, vendors have been able to accelerate container adoption — from VMware’s vSphere integration to Microsoft’s Docker Client for Windows and our own Container Engine.




3. Big Data needs big insights.


In 2015, Big Data didn't live up to the hype. In a May survey, 77% of organizations felt their big data and analytics deployments were failing or had failed to meet expectations. Yet while the finding is clear, the cause is complex. Siloed teams, high maintenance gear and the need for better tools certainly play a part in the problem. What’s the solution? Most likely, it lies in making tooling and data more accessible to citizen data scientists — whose deep domain knowledge can unlock its true value.




4. Machine learning for all.


The potential benefits of machine learning have been evident for a while. Now, thanks to the increased processing power of computers and data centers, that potential is finally being realized. To help spur this evolution on, software libraries (like TensorFlow) are being open-sourced. This’ll allow ideas and insights to be rapidly exchanged through working code, not just research papers.




5. The Future of IoT.


When most of us hear “Internet of Things” (IoT), we think of the consumer: connecting the thermostat to the watch to the TV and so on. Yet surprisingly, the greatest adoption of IoT is happening in the enterprise. By 2019, it’s estimated that the enterprise market will account for 9.1 billion of 23.3 billion connected devices. That means scale of ingestion and stream-based data processing will become a critical part of every IT strategy—and interest in technologies like Google Cloud Dataflow and Apache Spark is spiking accordingly.




6. API as a business gets big.


Providing application services-on-demand to developers is now a validated business model — as evidenced by the presence of “unicorn” businesses, such as Twilio and Okta. Both companies closed rounds in 2015 at valuations north of $1 billion, and both provide services that developers can incorporate in their applications.




7. Hybrid clouds on the horizon.


Multi-cloud architecture isn’t new: it’s been used for years as a backup and disaster recovery solution. What is new is the rate at which we’re now seeing multi-cloud orchestration tools, like Kubernetes and Netflix’s Spinnaker, being widely deployed. This choice helps prevent lock-in to any one vendor — and with estimates that 50% of enterprises will have hybrid clouds by 2017, this trend shows no signs of slowing down.




8. Shifts and shut-downs.


As the Cloud Platform landscape evolves, we’re seeing increasing consolidation in the market. In part, this is likely due to the cloud’s tremendous hardware and engineering demands. Still, one of the biggest announcements of the year came when Rackspace confirmed it will shift focus from their own cloud offering to supporting third-party cloud infrastructures. With the news that HP will officially shut down Helion in January, this is one trend that’s sure to continue through 2016.




9. Going green.


Customers have spoken and they want their cloud green. What’s still up for debate, however, is how to bring the environmental efficiency of larger, pan-regional data centers to local ones — which may not have the scale to be environmentally efficient.




10. What’s yours?


What cloud story or trend would you add to our list? We want to hear from you: submit your idea on G+ and Twitter, using the hashtag #CloudTop10.



We’ll review all the entries, then select a story — and author — to be featured on our blog.

The cloud console that our customers use to configure and manage Google Cloud Platform resources provides a single comprehensive location for all GCP services, from App Engine instances to viewing Logs to data processing. But which parts of the platform do you use?



Last month, all GCP customers were invited to start using the new console, with features such as pinning and search. With overwhelmingly positive feedback, we’re pleased to announce its release to general availability.




“Wow! Looks great. I love the way you can pin stuff to the top menu. It makes switching between components much easier (notably App Engine and Datastore). I also like the way you can drill into components, so the UI is less cluttered.” - Gwyn Howell, Appogee HR



“The different areas are very well organized now. Very clean. I love that even the side menu can be searched. That is very useful since there are quite a lot of services.” - Noble Ackerson, LYNXFIT



Thank you for helping us improve the console by providing continual feedback during the beta. After some of you reported page loading latency, we discovered and fixed a bug in Angular Material. We also realigned the color palette to improve the experience after several of you noted that the original red palette for Storage could be misinterpreted as a warning bar.



To quickly review, the updated console now enables you to:




  • Pin each of your commonly-used services to the top of the console for fast access

  • Use the search box and its autocomplete options to easily locate the service you wish to manage

  • Access features in several different ways using the new global navigation options 


    • Open the hamburger menu to see all Cloud Platform offerings in one consolidated place

    • Use the keyboard shortcut (‘/’) to quickly enter into search based navigation


  • Focus solely on a single service and view all content within that service in one place



  • Identify and address issues from a configurable dashboard of your resources and services




When you click in the search bar and start typing, you'll see a dynamically populated set of results:






Figure 1: GCP cloud console search box



Clicking on the menu in the top left will expand the full list of available services and allow you to pin your commonly used items (by clicking on the Pin):




Figure 2: GCP console now allows you to pin favorite services.



We encourage you to give the new console a try and use the feedback button to let us know what you think.



- Posted by Stewart Fife, Product Manager, Google Cloud Platform

As part of our constant improvements to the Google Cloud Platform console we’ve recently updated our Google Compute Engine quotas page. Now you can easily see quota consumption levels and sort to find your most-used resources. This gives you a head start on determining and procuring any additional capacity you need so you hit fewer speed bumps on your road to growth and success.



We’ve also improved the process of requesting more quota, which can be initiated directly from the quotas page by clicking on the “Request increase” button. We’ve added additional checks to the request form that help speed up our response processing time; now most requests are completed in minutes. With these changes, we’re making it even easier to do more with Cloud Platform.



You can access your console at https://console.cloud.google.com and learn more about how GCP can help you build better applications faster at https://cloud.google.com.



Posted by Roy Peterkofsky, Product Manager