



We here at Google Cloud Platform have been busy working on resources to help you manage identity and security on GCP. Here’s what we’ve been up to.



First off, we’ve been listening to customers and have curated a Google Cloud Identity and Access Management FAQ that answers questions such as “What does a Cloud IAM policy look like?” and “To what identities can I grant IAM roles?” The FAQ already lists almost 40 questions, but if you think something is missing, please let us know.




Google Cloud Resource Manager’s new Organization resource


Several features of Google Cloud Resource Manager are now generally available, including the ability to use the Organization resource. When you use an Organization resource, the projects belong to the business instead of to the employee who created the project. This means that if that employee leaves the company, his or her projects will still belong to the organization. Further, because Organization admins can view and manage all your company's projects, this eliminates shadow projects and rogue admins.



You can grant roles at the Organization level that apply to all projects under the Organization resource. For example, if you grant the Network Admin role to your networking team at the Organization level, they'll be able to manage all the networks in all projects in your company, instead of having to grant them the role for individual projects.






Project provisioning fun with the Cloud Resource Manager API


The Google Cloud Resource Manager API now includes a project.create() method, which allows you to use scripts and applications to automate project provisioning. Maybe you want to plug into a self-service system that lets developers request new projects, or perhaps you want to integrate project creation into your CI/CD setup. Using the project.create() API allows you to standardize the configuration of your projects.



Developers should consider creating different templates for different kinds of projects. For example, a data analysis project will have a different composition than a compute project. Using different templates simplifies project creation and management: you simply run the appropriate script or template to set up the right project environment. Because these templates are code, you can also keep them under version control and revert to a known-good configuration if need be. A purely hypothetical sketch of the idea follows.
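
In that spirit, a template can be as simple as a dictionary that records which APIs each kind of project should have enabled. Everything below is a placeholder (the template names, API identifiers and the enable_apis() helper are illustrative, not part of any Google library), and create_project() is the function shown later in this post:

PROJECT_TEMPLATES = {
    'data-analysis': ['bigquery-json.googleapis.com', 'dataflow.googleapis.com'],
    'compute': ['compute.googleapis.com'],
}

def provision(template_name, proj_id):
    # Create the project, then enable the APIs the template calls for.
    create_project(proj_id)                                  # shown later in this post
    enable_apis(proj_id, PROJECT_TEMPLATES[template_name])   # hypothetical helper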



The Cloud Resource Manager project.create() API is available through the REST interface, the RPC interface, the client libraries and gcloud.




Automating project creation with Python


Let’s look at how to use the project.create() API with Python scripts or templates to automate project creation with a user or service account.



A common scenario for automating project creation is within large organizations that have set up an Organization resource. This example focuses on using a service account to automatically create projects.




  1. Create a service account in a designated project under your Organization resource. We recommend using a designated project to contain resources that will be used across the projects in your Organization resource. And because service accounts are associated with a project, creating them in a central, designated project will help you manage them.

  2. At a minimum, the service account needs the resourcemanager.projectCreator IAM role. If you need to enable APIs beyond the defaults, also grant the service account the Billing User role at the Organization level so that it can attach new projects to the organization’s billing account; the service account can then enable the required APIs on those projects. Note that the billing account must be associated with the Organization resource. A hedged sketch of granting the project-creator role follows this list.
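
Here's a minimal sketch of that grant, assuming the create_client() helper shown later in this post is built against the v1 Cloud Resource Manager API; the role and member strings are standard IAM formats:

def grant_project_creator(org_id, service_account_email):
    crm = create_client()  # defined later in this post
    resource = 'organizations/%s' % org_id
    # Read-modify-write the Organization-level IAM policy.
    policy = crm.organizations().getIamPolicy(resource=resource, body={}).execute()
    policy.setdefault('bindings', []).append({
        'role': 'roles/resourcemanager.projectCreator',
        'members': ['serviceAccount:%s' % service_account_email],
    })
    return crm.organizations().setIamPolicy(
        resource=resource, body={'policy': policy}).execute()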




Now that you have a service account that you can use to automatically create projects, go ahead and create a script that follows this flow:



Create a client with the correct scopes. The snippet below assumes that httplib2, googleapiclient.discovery and oauth2client are imported, and that CRM_SCOPES, CRM_SERVICE_NAME and CRM_VERSION are defined elsewhere in the script (for example, the https://www.googleapis.com/auth/cloud-platform scope and the cloudresourcemanager service, version v1). Here's a code snippet showing how to create a client:



def create_client(http=None):
    credentials = oauth2client.GoogleCredentials.get_application_default()
    if credentials.create_scoped_required():
        credentials = credentials.create_scoped(CRM_SCOPES)
    if not http:
        http = httplib2.Http()
    credentials.authorize(http)
    return discovery.build(CRM_SERVICE_NAME, CRM_VERSION, http=http)



Pass your organization ID and a uniquely generated project ID to a function that checks if the project exists by listing projects and looping through them:



organization_id = str(YOUR_ORG_NUMERIC_ID)
proj_prefix = "your-proj-prefix"  # must be lower case!
proj_id = proj_prefix + "-" + str(random_with_N_digits(6))
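
The random_with_N_digits() helper used above isn't defined in the original snippet; here's one minimal way to write it:

import random

def random_with_N_digits(n):
    # Return a random integer with exactly n digits (no leading zero).
    range_start = 10 ** (n - 1)
    range_end = (10 ** n) - 1
    return random.randint(range_start, range_end)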







Here's a snippet showing how to list the projects in your organization:



def list_projects(org_id):
    crm = create_client()
    # Only list projects that are direct children of the Organization.
    project_filter = 'parent.type:organization parent.id:%s' % org_id
    projects = crm.projects().list(filter=project_filter).execute()
    print(projects)



Create a project with the generated name if it does not already exist, with this code snippet:



def create_project(proj_id):
    crm = create_client()
    new_project = crm.projects().create(
        body={
            'project_id': proj_id,
            'name': proj_id,
            'parent': {
                'type': 'organization',
                'id': organization_id
            }
        }).execute()
    return new_project



And finally, programmatically launch the resources and assign IAM policies.
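
For the IAM piece, a minimal sketch that reuses create_client() might look like the following; the role and member in the example call are placeholders, not a recommendation:

def add_project_binding(proj_id, role, member):
    crm = create_client()
    # Read-modify-write the project's IAM policy.
    policy = crm.projects().getIamPolicy(resource=proj_id, body={}).execute()
    policy.setdefault('bindings', []).append({'role': role, 'members': [member]})
    return crm.projects().setIamPolicy(
        resource=proj_id, body={'policy': policy}).execute()

# For example (hypothetical group):
# add_project_binding(proj_id, 'roles/editor', 'group:devs@example.com')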



Now that you can use a script to automatically create projects, the next thing to do is to expand on these steps to automate setting of IAM policies and creating resources for your automation pipeline. Google Deployment Manager does that using declarative templates and is a good tool for automatically creating project resources. Stay tuned for a blog post on the topic.














When the World Series starts tonight, I'll be watching the game as a fan and also through the lens of a Google Cloud Platform developer advocate. As a data wrangler, I want to see if I can get a bit closer to the micro-moments of the game in near real-time.



Baseball is one of the most statistically driven sports. But fans, announcers, coaches and players also talk about “letting the game talk to them” to get insights beyond stats like batting averages, ERAs and WHIPs. What does this really mean? The “talk” can feel like 30 conversations happening all at once: lots of noise and lots of signal.



To try and decode it, I’ll be using Google Cloud Dataflow to transform data, Google BigQuery to store and query data, and Google Cloud Datalab to slice, dice and visualize it. Baseball data, in particular fine-grained play-by-play data, presents many challenges around ETL and interactive analysis, areas that GCP tools are particularly well suited to address for data of any size.



To get there I'm publishing a new public data set in BigQuery that contains every pitch from every at bat from all Major League Baseball 2016 regular season and postseason games. This data is a derivative of raw game logs from Sportradar, which graciously allowed me to denormalize and enrich them for this exercise. This open data set provides detailed pitch data (type, location, speed) and situational factors like runners on base, players in the field and so on. In essence, this dataset lets you replay each game as it happened, at the pitch level.
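
To give a flavor of what querying the data looks like from Python, here's a hedged sketch using the BigQuery client library; the table and column names are assumptions, so check the dataset's schema in the BigQuery UI before running it:

from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT pitchTypeDescription, COUNT(*) AS pitches
    FROM `bigquery-public-data.baseball.games_wide`   -- assumed table name
    GROUP BY pitchTypeDescription
    ORDER BY pitches DESC
"""
for row in client.query(sql).result():
    print(row.pitchTypeDescription, row.pitches)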




The Harry Doyle Method


During the World Series games, I'll run an analysis that calculates a score for the situational pressure facing the pitcher on each pitch, and a score for each pitch based on count management, location control and outcome. This analysis is inspired by the movie Major League and is called the Harry Doyle Method. I chose it mainly because I wanted to have some fun, and because no one is more fun than Mr. Baseball, aka Bob Uecker, aka Harry Doyle.



Interpretation of the Harry Doyle Method is based on two numbers: the Vaughn Score and the Haywood Score. The Vaughn Score is a pragmatic indication of how well a pitcher is performing. The Haywood Score is an indication of how much pressure the pitcher is under. The scores are aligned at the pitch and then at-bat levels. We can use these scores and their relationship to look at how pressure impacts performance, and then dive into factors within a score to gain deeper insight.



With this data and analysis technique you can do some fun things like compare a pitcher's ability to “control the count,” one factor in the Vaughn Score. For example, below is a comparison of Indians’ pitcher Corey Kluber vs. Cubs’ pitcher Jon Lester in their respective last 30 regular season starts. This Count Management example is based on tracking transitions between counts (not just the counts seen) and is then used to calculate the Vaughn Score, which is also affected by the at-bat outcome of out or on-base and other related outcomes like runs scored.



Higher Count Management scores mean that the pitcher is keeping the count to his advantage, for instance, 0 balls and 2 strikes rather than 3 balls and 1 strike. Over the course of a game, a pitcher who stays ahead of the count is more likely to prevent runs due to potentially fewer walks and reduced hits. This is a directional indicator, but it quickly helps pick out performance anomalies like games 9 and 16 for Kluber. And with a simple fit line you can see the overall difference and trend.



Another approach is to analyze the zone(s) where a batter is “hot” (has a high likelihood of getting a hit) by building odds ratios based on each pitch from each at bat. This is then fed back into the pitcher’s situational pressure calculation, the Haywood Score. If a pitcher is feeling “weak,” he may not want to throw in that zone.



The graphic below is the vertical plane over home plate, and 0,0 is dead down the middle of the strike zone. The bigger the dot, the higher the probability (based on previous performance) that the hitter will hit the ball if it's thrown there. The batter in the graphic below is right handed, so throwing to him anywhere in the middle, and especially inside at zone 1,0, could be bad news for the pitcher. If the pitcher is behind in the count at 3-1, he has more pressure to locate outside of the zone, but he also wants to avoid a walk. At the same time, he might be feeling super-confident and throw a 102 MPH fastball down the middle and let the batter take a cut.



As each World Series game progresses, we'll look at trends, anomalies and forthcoming risk and give @googlecloud Twitter followers a taste of what we're “hearing” from the game, answering questions like “Is this pitcher performing at his best?” “What was the probability of the triple play?” and “How strong is the Indians’ remaining bullpen?” I’ll also be publishing via Medium during the games, expanding further upon these tweets.



In addition, I’ve written a white paper that details how and why we built our Harry Doyle Method on GCP. It contains code snippets and detailed step-by-step instructions to help you build your own Harry Doyle Method. You can view it here.



If you want more data beyond the 2016 season, head over to Sportradar’s API page for a free trial. There are other amazing sources of baseball data too, like Retrosheet and MLB’s Baseball Savant, to name a few.



Armed with all that data and GCP tools, maybe you too can find some odd nuggets to impress the baseball fans in your life. Or better yet, even predict who’s going to win this series.
















Google Cloud Storage is pretty amazing. It offers near-infinite capacity, up to 99.95% availability and fees as low as $0.007 per GB per month. But storing data in the cloud has always had one drawback: you need to use specialized tools like gsutil to browse or access it. You can’t just treat Cloud Storage like a really, really, really big hard disk. That is, until now.




Navigating Cloud Storage with Cloud Tools for PowerShell


The latest release of Cloud Tools for PowerShell (included with the Cloud SDK for Windows) includes a PowerShell provider for Cloud Storage. PowerShell providers are a slick feature of Windows PowerShell that allow you to treat a data source as if it were a file system, to do things like browse the system registry or interact with a SQL Server instance. With a PowerShell provider for Cloud Storage, you can now use commands like cd, dir, copy, del, or even cat to navigate and manipulate your data in Cloud Storage.



To use the provider for Cloud Storage, first load the GoogleCloud PowerShell module by using any of its cmdlets, PowerShell’s lightweight commands. Then just cd into the gs:\ drive. You can now explore your data like you would any local disk. To see what buckets you have available in Cloud Storage, just type dir. The provider will use whatever credentials you have configured for the Cloud SDK (see gcloud init).




PS C:\> Import-Module GoogleCloud


WARNING: The names of some imported commands from the module 'GoogleCloud' include unapproved verbs that might make


them less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the


Verbose parameter. For a list of approved verbs, type Get-Verb.


PS C:\> cd gs:\
PS gs:\> dir | Select Name






Name


----


blog-posts


chrsmith-demos.appspot.com


chrsmith-pictures


database-snapshots-prod


staging.chrsmith-demos.appspot.com




...





To navigate your buckets and search for a specific object, just keep using cd and dir (which are aliases for the Set-Location and Get-ChildItem cmdlets, respectively). Note that just like the regular file system provider, you can use tab-completion for file and folder names.






Populating Google Cloud Storage


The following code snippet shows how to create a new bucket using mkdir and use the Set-Content cmdlet to create a new object. Notice that Get-Content (aliased as cat) takes an object name relative to the current folder in Google Cloud Storage, e.g. gs:\gootoso-test-bucket\folder in the example below.




PS gs:\> mkdir gootoso-test-bucket | Out-Null


PS gs:\> Set-Content gs:\gootoso-test-bucket\folder\file.txt `


   -Value "Hello, GCS!"


PS gs:\> Test-Path gs:\gootoso-test-bucket\folder\file.txt


True


PS gs:\> cd .\gootoso-test-bucket\folder


PS gs:\gootoso-test-bucket\folder> cat file.txt




Hello, GCS!





Of course you could do the same thing with the existing PowerShell cmdlets for Cloud Storage such as Get-GcsBucket, New-GcsObject, Copy-GcsObject and so on. But being able to use common commands like cd in the PowerShell provider provides a much more natural and productive experience.




Mixing Cmdlets and the PowerShell Provider


Since the PowerShell provider returns the same objects as other Cloud Storage cmdlets, you can intermix commands. For example:




PS gs:\gootoso-test-bucket\folder> $objs = dir


PS gs:\gootoso-test-bucket\folder> $objs[0].GetType().FullName


Google.Apis.Storage.v1.Data.Object


PS gs:\gootoso-test-bucket\folder> $objs | Read-GcsObject


Hello, GCS!






PS gs:\gootoso-test-bucket\folder> Write-GcsObject -Object $objs[0] -Contents "update"

PS gs:\> Remove-GcsBucket -Name gootoso-test-bucket





All of the objects returned are strongly typed, defined in the C# client library for the Cloud Storage API. That means you can use PowerShell’s particularly powerful pipelining features to access properties on the returned objects, for things like sorting and filtering.



This snippet shows how to find the largest object under the images folder of the blog-posts bucket.




PS gs:\> cd gs:\blog-posts\images


PS gs:\blog-posts\images> $objects = dir -Recurse


PS gs:\blog-posts\images> $objects |


   Sort-Object Size -Descending |




   Select-Object -First 1 -Property Name,TimeCreated,Size





In short, the PowerShell provider for Cloud Storage simplifies a lot of tasks, so give it a whirl. For more information on the provider as well as other PowerShell cmdlets, check out the PowerShell documentation.



Google Cloud Tools for PowerShell, including the new provider for Cloud Storage, is in beta. If you have any feedback on the cmdlet design, documentation, or have any other issues, please report it on GitHub. The code is open-source too, so pull requests are also welcome.









The Google Cloud Platform (GCP) team is working hard to make GCP the best environment to run enterprise Windows workloads. To that end, we're happy to announce support for Windows Server 2016 Datacenter Edition, the latest version of Microsoft’s server operating system, on Google Compute Engine. Starting this week, you can launch instances with Google Compute Engine VM images with Microsoft Windows Server 2016 preinstalled. In addition, we now also support images for Microsoft SQL Server 2016 with Windows Server 2016. Specifically, we now support the following versions in GA:




  • Windows Server 2016 Datacenter Edition

  • SQL Server Standard 2016 with Windows Server 2016

  • SQL Server Web 2016 with Windows Server 2016

  • SQL Server Express 2016 with Windows Server 2016

  • SQL Server Standard (2012, 2014, 2016) with Windows Server 2012 R2

  • SQL Server Web (2012, 2014, 2016) with Windows Server 2012 R2

  • SQL Server Express (2012, 2014, 2016) with Windows Server 2012 R2

  • and coming soon, SQL Server Enterprise (2012, 2014, 2016) with Windows Server (2012, 2016)




Enterprise customers can leverage Windows Server 2016’s advanced multi-layer security, powerful storage and management capabilities and support for Windows containers. Windows runs on Google’s world-class infrastructure, with dramatic price-to-performance advantages, customizable VM sizes, and state-of-the-art networking and security capabilities. In addition, pricing for Windows Server 2016 and SQL Server 2016 remains the same as previous versions of both products.






Getting started


Sign up for a free trial to deploy your Windows applications and receive a $300 credit. Use this credit toward spinning up instances with pre-configured images for Windows Server, Microsoft SQL Server and your .NET applications. You can create instances directly from the Cloud Console or launch a solution for Windows Server from Cloud Launcher. Here's the detailed documentation on how to create Microsoft Windows Server and SQL Server instances on GCP.











The team is continuing the momentum for Windows on GCP since we announced comprehensive .NET developer solutions back in August, including a .NET client library for all Cloud Platform APIs available through NuGet. The Cloud Platform team has hand-authored libraries for Cloud Platform APIs available as open source projects on GitHub, to which the community continues to contribute and add features. Learn how to build ASP.NET applications on GCP, or check out more resources on Windows Server and Microsoft SQL Server on GCP at cloud.google.com/windows and cloud.google.com/sql-server. If you need help migrating your Windows workloads, please contact the GCP team. We're eager to hear your feedback!












When it comes to cloud-based applications, traditional debugging tools are slow and cumbersome for production systems. When an issue occurs in production, engineers inspect the logs and try to reproduce the problem in a non-production environment. Once they successfully reproduce the problem, they attach a traditional debugger, set breakpoints, step through the code and inspect application state in an attempt to understand the issue. This is often followed up by adding log statements, rebuilding and redeploying code to production and sifting through logs again until the issue's resolved.



Google's been a cloud company for a long time, and over the years, we've built developer tools optimized for cloud development. Today we're happy to announce that one such tool, Stackdriver Debugger, is generally available.



Stackdriver Debugger allows engineers to inspect an application's state, its variables and call stack at any line of code without stopping the application or impacting the customer. Being able to debug production code cuts short the many hours engineers invest in finding and reproducing a bug.



Since our beta launch, we've added a number of new features including support for multiple source repositories, logs integration and dynamic log point insertion.



Stackdriver’s Debug page uses source code from repositories such as GitHub and Bitbucket, or local source, to display and take debug snapshots. You can also use the debugger without any source files at all, simply by typing in the filename and line number.



The debug snapshot allows you to examine the call-stack and variables and view the raw logs associated with your Google App Engine projects — all on one page.



Out of the box, Stackdriver Debugger supports the following languages and platforms:



Google App Engine (Standard and Flexible): Java, Python, Node

Google Compute Engine and Google Container Engine: Java, Python, Node (experimental), Go



All of this functionality is backed by a publicly accessible Stackdriver Debugger API that applications use to interact with the Stackdriver Debugger backend. The API enables you to implement your own agent to capture debug data for your favorite programming language. It also allows you to implement a Stackdriver Debugger UI integrated into your favorite IDE to directly set and view debug snapshots and logpoints. Just for fun, we used the same API to integrate the Stackdriver Debugger into the gcloud debug command line.
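
As a hedged sketch of what calling the API directly looks like from Python (using the Google API client library with Application Default Credentials), listing a project's debug targets might look like this; the clientVersion value is just an arbitrary identifier for the caller:

from googleapiclient import discovery

def list_debuggees(project_id):
    debugger = discovery.build('clouddebugger', 'v2')
    response = debugger.debugger().debuggees().list(
        project=project_id, clientVersion='example.com/sample/v1').execute()
    return response.get('debuggees', [])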



We're always looking for feedback and suggestions to improve Stackdriver Debugger. Please send us your requests and feedback. If you're interested in contributing to creating additional agents or extending our existing agents, please connect with the Debugger team.









Businesses seek the best price and performance to suit the storage needs of workloads ranging from multimedia serving, to data analytics and machine learning, to data backup/archiving, all of which drive demand for a variety of storage options. At Google, we aim to build a powerful cloud platform that can meet the needs of the most demanding customer workloads. Google Cloud Storage is a key part of that platform and offers developers and IT organizations durable and highly available object storage, with consistent APIs for ease of application integration, all at a low cost.



Today, we’re excited to announce a major refresh of Google Cloud Storage. We're introducing new storage classes, data lifecycle management tools, improved availability and lower prices, all to make it easy for our customers to store their data with the right type of storage. Whether a business needs to store and stream multimedia to its users, store data for machine learning and analytics or restore a critical archive without waiting for hours or days, Cloud Storage now offers a broad range of storage options to meet those needs.



We’re also excited to announce the continued expansion of our Google Cloud Platform (GCP) partner ecosystem, with partners already using the new Cloud Storage capabilities for use cases including content delivery, hybrid storage, archival, backup and disaster recovery.




New storage classes for Google Cloud Storage


We're announcing the general availability of four storage classes for Cloud Storage. These offer customers a consistent API and data access performance for all of their hot and cold data, with simple-to-understand and highly competitive pricing.







Cloud Storage Coldline: a low-latency storage class for long-term archiving

Coldline is a new Cloud Storage class designed for long-term archival and disaster recovery. It's perfect for the archival needs of big data or multimedia content, allowing businesses to archive years of data. Coldline provides fast, instant (millisecond) access to data and changes the way that companies think about storing and accessing their cold data.



At GCP, we believe that archival data should be as accessible as any other data. Coldline’s API and low latency data access are consistent with other storage classes. This means existing systems can now store and access Coldline data without any updates to the application, and can serve that data directly to end users in milliseconds. Priced at just $0.007 per gigabyte per month plus a simple and predictable access fee of $0.05 per GB retrieved, Coldline is the most economical storage class for data that's accessed less than once per year.
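
As a back-of-the-envelope example using the prices quoted above (ignoring operations charges, network egress and any early-deletion considerations), storing 10 TB in Coldline for a year and restoring it once works out roughly as follows:

gb = 10 * 1024               # 10 TB expressed in GB
storage = gb * 0.007 * 12    # $0.007 per GB per month, for 12 months
retrieval = gb * 0.05        # $0.05 per GB retrieved, one full restore
print(storage, retrieval)    # about $860 for storage, $512 for the restore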



Coldline also works well with Nearline to provide tiered storage for data as it cools. Our recent work on Nearline latency and throughput ensures comparable performance across all storage classes.



To help you migrate your data to Coldline, and other Cloud Storage classes, we offer an easy-to-use Google Cloud Storage Transfer Service and have extended our Switch and Save program to include Coldline. Depending on the amount of data you're bringing to Coldline, you can receive several months of free storage, for up to 100PB of data. To learn more about Switch and Save, please contact our sales team.



Google Cloud Storage Multi-Regional and Regional

GCP customers use Cloud Storage for a variety of demanding use cases. Some use cases require highly available storage close to the Google Compute Engine instances. Others need higher levels of availability and geo-redundancy. We’re updating our storage classes to address those needs:



Google Cloud Storage Multi-Regional is a highly available and geo-redundant storage class. It’s the best storage class for business continuity, or for serving multimedia content to geographically distributed users.



In the case of a regional outage, Cloud Storage transparently routes requests to another available region, ensuring that applications continue to function without disruption. Multi-Regional storage is priced at $0.026 per GB per month, including storage of all replicas, replication over the Google network and connection rerouting. It’s currently available in three locations: US, EU and Asia. All existing Standard storage buckets in a multi-regional location have been converted to Multi-Regional storage class.



Vimeo, a media hosting, sharing and streaming service, leverages Google Cloud Storage Multi-Regional to ensure high availability and low-latency access to data. Cloud Storage Nearline is used to minimize overall storage costs. To deliver the best possible experience, Vimeo leverages integration between Google Cloud Storage and Fastly, a real-time CDN service. With Fastly, Vimeo can deliver content from Google Cloud Storage to users instantly, with sub-150 millisecond response times.




We use Google Cloud Platform, including Google Cloud Storage and Compute Engine along with Fastly, for storing and delivering all popular and infrequently accessed content and to handle our peak transcode loads.  


- Naren Venkataraman, Senior Director of Engineering, Vimeo






Fastly customers need low-latency, high-throughput storage and fast, flexible, secure content delivery at the edge. The combined power of Google Cloud Storage and Fastly’s Cloud Accelerator allows customers like Vimeo to fully optimize content storage and delivery, controlling costs and improving global performance.  


- Lee Chen, Head of Strategic Partnerships, Fastly



Google Cloud Storage Regional is a highly available storage class redundant within a single region. It’s ideal for pairing storage and compute resources within a region to deliver low end-to-end latency and high throughput for workloads such as data transcoding or big data analytics running on Google Compute Engine, Google Cloud Dataproc, Google Cloud Machine Learning or BigQuery.



Regional storage class is priced at $0.02 per GB per month. All existing Standard storage buckets in a regional location have been converted to the Regional storage class. This is equivalent to a 23% price reduction, and the pricing change for converted buckets takes effect November 1st.



One of our customers who's already using Regional storage class is Spotify. Spotify streams music for over 100 million users with GCP and uses Cloud Storage, along with Compute Engine to scale while controlling costs in a reliable and highly durable environment. Cloud Storage stores more than 30 million songs.








Spotify uses Google Cloud Storage for storing and serving music. Using Regional storage class allowed us to run audio transcoding in Google Compute Engine close to production storage. Google also offers great networking with open and explicit peering setup, as well as interconnect and partnerships with all of our CDN providers.


- Jyrki Pulliainen, Software Engineer, Spotify



Effective November 1st we're also introducing new lower API operations pricing for both Multi-Regional and Regional storage classes. Class A operations will cost $0.005 per 1,000 operations (50% price reduction), and Class B will cost $0.004 per 10,000 operations (60% price reduction).



With the addition of Coldline and the refresh of our storage classes with Multi-Regional and Regional, GCP customers will continue to enjoy the same API and consistent data access performance for all of their hot and cold data. With Coldline, no application changes are needed to leverage archived data and there’s no compromise on access time for that data, while Multi-Regional makes it simple to ensure that your data is highly-available and geo-redundant. Plus, we're delivering all of this with simple to understand and highly competitive pricing:








New data management lifecycle capabilities


Many of our customers use multiple storage classes for their different workloads. Having a single API and consistent data access performance ensures applications can seamlessly leverage multiple storage classes. It should be easy for customers to also manage the appropriate storage tier for their data.



We're introducing the beta of new data lifecycle management capabilities to make it easier to manage data placement. Any Google Cloud Storage bucket can now hold data in different storage classes, and the lifecycle policy feature can automatically transition objects in-place to the appropriate colder storage class based on the age of the objects.
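
For example, a minimal lifecycle configuration that transitions objects to Coldline once they're a year old can be written as JSON and applied with gsutil; the 365-day threshold below is just an illustration:

import json

lifecycle = {
    "rule": [{
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 365},
    }]
}

with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)

# Then apply it to a bucket: gsutil lifecycle set lifecycle.json gs://YOUR_BUCKET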




Expanding the Cloud Storage partner ecosystem


Many customers already use multiple Cloud Storage classes and will benefit from these storage updates, both directly through us and through our partners, a number of whom have already integrated the new Coldline storage class. Starting today, these partners are available to help you use our new storage classes in your own environment:




  • Fastly: Fastly is a content delivery network that lets businesses control how they serve content, provides real-time performance analytics and caches frequently changing content at the edge. Fastly enables customers to configure Google Cloud Storage as the origin, and Fastly’s Origin Shield designates a single point-of-presence (POP) to handle cache-misses across our entire network.

  • Veritas: Building on existing support for GCP, Veritas is committed to supporting Cloud Storage Coldline. The unique combination of Veritas Information Map, Veritas NetBackup and the GCP ensures customers can gain greater controls on data visibility as they move to the Google Cloud at global enterprise scale. Veritas' collaboration with Google further demonstrates the shared commitment to helping organizations around the world manage information.

  • Cloudian: The Cloudian HyperStore smart data storage platform seamlessly integrates with Google Cloud Storage (including Coldline) to provide anywhere from terabytes to hundreds of petabytes of on-premises storage. Policy-based data migration lets you move data to Coldline based on rules such as data type, age and frequency of access.

  • Cloudberry Lab: CloudBerry Backup is a cloud backup solution that leverages Coldline. In addition to offering real-time and/or scheduled regular backups, encryption, local disk image or bare metal restore, CloudBerry employs block level backup for maximum efficiency and provides alerting features to track each backup and restore plan remotely.

  • Komprise: Komprise data management software enables businesses to seamlessly manage the lifecycle of data and cut costs by over 70% by leveraging all the tiers of Cloud Storage transparently with existing on-premises storage. In under 15 minutes, customers can get a free assessment of how much data can move to Cloud Storage and the projected ROI with a free Komprise trial.

  • StorReduce: StorReduce’s inline deduplication software enables you to move terabytes to petabytes of data into Coldline (or other Cloud Storage tiers) and then use cloud services such as search on that data.

  • Cohesity: The Cohesity hyper-converged secondary storage system for enterprise data consolidates fragmented, inefficient islands of secondary storage into a virtually limitless storage platform. Coldline can be used for any data protection workload via Cohesity’s policy-based administration capabilities.

  • Sureline: Sureline application mobility software delivers migration and recovery of virtual, cloud, physical or containerized applications and servers. It allows enterprises to use Coldline as the disaster recovery target for occasionally accessed DR images with SUREedge DR.




With an expanding partner ecosystem, more customers than ever before are now able to take advantage of the benefits of GCP.



To learn more about Cloud Storage and the new storage classes, visit our web page here, or do a deeper dive into our technical documentation. You can also sign up for a free trial, or contact our sales team.












Google Stackdriver is now generally available.



Since its inception, Stackdriver was designed to make ops easier by reducing the burden associated with keeping applications fast, error-free and available in the cloud.



We started with a single pane of glass to monitor and alert on metrics from Google Cloud Platform (GCP), Amazon Web Services1 and common application components such as Tomcat, Nginx, Cassandra and MySQL. We added Stackdriver Logging, Error Reporting, Trace and Debugger to help you get to the root cause of issues quickly. And we introduced a simple pricing model that bundles advanced monitoring and logging into a single low-cost package in Stackdriver Premium. Finally, we migrated the service to the same infrastructure that powers the rest of Google so that you can expect world-class reliability and scalability.



Companies of all sizes are already using Stackdriver to simplify ops. For example:

  • Uber uses Stackdriver Monitoring to monitor Google Compute Engine, Cloud VPN and other aspects of GCP. It uses Stackdriver alerts to notify on-call engineers when issues occur.

  • Khan Academy uses Stackdriver Monitoring dashboards to quickly identify issues within its online learning platform. It troubleshoots issues with our integrated Logging, Error Reporting and Tracing tools.

  • Wix uses Stackdriver Logging and Google BigQuery to analyze large volumes of logs from Compute Engine auto-scaled deployments. This gives them intelligence on system health and error rates, providing essential insight for running their operations.



If you’d like to learn more about Google Stackdriver, please check out our website or documentation. If you’re running on GCP or Amazon Web Services and want to join us on the journey to easier ops, sign up for a 30-day free trial of Stackdriver Premium today.



Happy Monitoring!






1 "Amazon Web Services" and "AWS" are trademarks of Amazon.com, Inc. or its affiliates in the United States













A new way for enterprises to capitalize on Google scale and innovation



Our goal for Google Cloud Platform (GCP) is to build the most open cloud for all businesses, and make it easy for them to build and run great software. This means being good stewards of the open source community, and having strong engineering partnerships with like-minded industry leaders.



Today, we're happy to announce more about our collaboration with Pivotal. Its cloud-native platform, Pivotal Cloud Foundry (PCF), is based on the open source Cloud Foundry project that it started many years ago. It was a natural fit for the two companies to start working together.




A differentiated Pivotal Cloud Foundry with Google


Customers can now deploy and operate Pivotal Cloud Foundry on GCP. This is a powerful combination that brings Pivotal’s enterprise cloud-native experience together with Google’s infrastructure and innovative technology.



So what does that mean in the real-world? Deployments of PCF on GCP can include:






Further, the combination of PCF and GCP allows customers to access Google’s data and machine learning (ML) services within customer applications via custom-built service brokers that expose GCP services directly into Cloud Foundry.



This level of integration with Google’s infrastructure enables the enterprise to build and deploy apps that can scale, store and analyze data quickly. The following data and machine learning services are now available in Pivotal Cloud Foundry today:





Customer collaboration - PCF and GCP in action


We pride ourselves on our “engineer to engineer” approach to working with customers and partners. And that’s exactly how we worked with The Home Depot as a shared customer of GCP and Pivotal Cloud Foundry.



The Home Depot software development team worked side-by-side with Google and Pivotal as they co-engineered the integration of PCF on GCP. Together, they’re building business systems for a digital strategy around this partnership, and will be running parts of homedepot.com on PCF and GCP in time for this year’s Black Friday.




Getting started


We've published a “Pivotal Cloud Foundry on Google Cloud Platform” solutions document that provides an example deployment architecture, as well as links to various setup guides. These links range from the lower-level OSS bits up through step-by-step installation guides with screenshots from our friends at Pivotal. It's a comprehensive guide to help you get started with PCF on GCP.








What’s next


Bringing more GCP services into the Cloud Foundry ecosystem is a priority, and we’re looking at how we can further contribute to the Spring community. Stay tuned for more news and updates, but in the meantime, reach out to your local Pivotal or Google Cloud sales team or contact Sales to talk to someone about this exciting partnership.










We’ve heard from a lot of Google Cloud Platform (GCP) users that they like to edit code and configuration files without leaving their browser. We're now making that easier by offering a new feature: an integrated code editor.



The new code editor is based on Eclipse Orion, and is part of Google Cloud Shell, a command line interface to manage GCP resources. You can access Cloud Shell via the browser from any computer with an internet connection, and it comes with the Cloud SDK and other essential tools pre-installed. The VM backing Cloud Shell is temporary, but each user gets 5GB of persistent storage for files and projects.



To open the new Cloud Shell code editor:


  1. Go to the Google Cloud Console

  2. Click on the Cloud Shell icon on the top right section of the toolbar



  3. Open the code editor from the Cloud Shell toolbar. You’ll also notice that we’ve introduced the ability to upload and download files from your Cloud Shell home directory.



  4. Start editing your code and configuration files.







Cloud Shell code editor in action


Here's an example of how you can use the Cloud Shell code editor to create a sample app, push your changes to Google Cloud Source Repository, deploy the app to Google App Engine Standard, and use Stackdriver Debugger:




Create a sample app



  1. On the Cloud Console website, select an existing project or create a new one from the toolbar.

  2. Open Cloud Shell and the code editor as described above and create a new folder (File->New->Folder). Name it ‘helloworldapp’.

  3. Inside the helloworldapp folder, create a new file and name it ‘app.yaml’.  Paste the following:



    runtime: python27
    api_version: 1
    threadsafe: yes

    handlers:
    - url: .*
      script: main.app

    libraries:
    - name: webapp2
      version: "2.5.2"




  4. Create another file in the same directory, name it ‘main.py’, and paste the following:



    #!/usr/bin/env python

    import webapp2

    class MainHandler(webapp2.RequestHandler):
        def get(self):
            self.response.write('Hello world!')

    app = webapp2.WSGIApplication([
        ('/', MainHandler)
    ], debug=True)




Save your source code in Cloud Source Repositories



  1. Switch to the tab with the open shell pane and go to your app’s directory:

    cd helloworldapp

  2. Initialize git and your repo. The first two steps aren't necessary if you've done them before:

    git config --global user.email "you@example.com"

    git config --global user.name "Your Name"

    git init

    git add . -A

    git commit -m "Initial commit"



  3. Authorize Git to access GCP:

    git config credential.helper gcloud.sh

  4. Add the repository as a remote named ‘google’ to your local Git repository, first replacing [PROJECT_ID] with your Cloud project ID:



    git remote add google https://source.developers.google.com/p/[PROJECT_ID]/r/default



    git push google master



Deploy to App Engine



  1. From the ~/helloworldapp directory, type: gcloud app deploy app.yaml

  2. Type ‘Y’ to confirm

  3. Visit your newly deployed app at https://[PROJECT_ID].appspot.com



Use Stackdriver Debugger


You can now go to the Debug page, take a snapshot and debug incoming traffic without actually stopping the app.


  1. Open main.py and click on a line number to set the debug snapshot location

  2. Refresh the website displaying the hello world page, and you'll see the request snapshot taken in the debugger

  3. Note that the Debug page displays the source code version of your deployed app





Summary


Now you know how to use Cloud Shell and the code editor to write a sample app, push it into a cloud source repository, deploy it to App Engine Standard and debug it with Stackdriver Debugger, all without leaving your browser. Note that the new Cloud Shell code editor is just a first step toward making Cloud Shell the go-to environment for developers for everything from simple DevOps tasks to end-to-end software development. We welcome your feedback (click on the gear icon in the Shell toolbar -> Send Feedback) on how to improve Google Cloud Shell. Stay tuned for new features and functionality.









Here at Google Cloud, our goal is to enable our users and customers to be successful with options, high performance and value. We're committed to open innovation, and look forward to working with industry partners on platform and infrastructure designs.



In fact, earlier this year, we announced that we would collaborate with Rackspace on the development of a new Open Compute Project (OCP) server based on the IBM POWER9 CPU. And we recently announced that we joined the OpenCAPI Consortium in support of the new open standard for a high-speed pathway to improve server performance. Today, we’re excited to share the first spec draft of our new server, Zaius P9 Server, which combines the benefits of IBM POWER9 and OpenCAPI for the OCP community.



Over the past few months, we’ve worked closely with Rackspace, IBM and Ingrasys to learn about the needs of the OCP community and help ensure that Zaius is useful for a broad set of users. With Zaius, Google is building upon the success of the Open Server specification and Barreleye platforms, while contributing the 12 years of experience we’ve gained from designing and deploying servers in our own data centers.



Zaius incorporates many design aspects that are new to Google and unique to OCP: POWER9 was designed to be an advanced accelerated computing platform for scale-out solutions, and will be available for components that use OpenCAPI and PCIE-Gen4 interfaces. The Zaius design brings out all possible PCIe Gen4 and OpenCAPI lanes from the processors to slots and connectors for an unprecedented amount of raw bandwidth compared to prior generation systems. Additionally, the updated package design reduces system complexity and the new microarchitecture provides increased efficiency and performance gains.




Block diagram of Zaius

The specifications

Zaius is a dual-socket platform based on the IBM POWER9 Scale Out CPU. It supports a host of new technologies including DDR4 memory, PCIE Gen4 and the OpenCAPI interface. It’s designed with a highly efficient 48V-POL power system and will be compatible with the 48v Open Rack V2.0 standard. The Zaius BMC software is being developed using Open BMC, the framework for which we’ve released on GitHub. Additionally, Zaius will support a PCIe Gen4 x16 OCP 2.0 mezzanine slot NIC.



We've shared these designs with the OCP community for feedback, and will submit them to the OCP Foundation later this year for review. Following this specification, we plan to release elements of the board’s design collateral, including the schematics and layout. If accepted, these standards will continue the goal of promoting 48V architectures. This is a draft specification of a preliminary, untested design, but we’re hoping that an early release will drive collaboration and discussion within the community.



We look forward to a future of heterogeneous architectures within our cloud. And, as we continue our commitment to open innovation, we’ll continue to collaborate with the industry to improve these designs and the product offerings available to our users.









One of our goals here on the Google Cloud Platform team is to support the broadest possible array of platforms and operating systems. That’s why we’re so excited about ASP.NET Core, the next generation of the open source ASP.NET web framework built on .NET Core. With it, .NET developers can run their apps cross-platform on Windows, Mac and Linux.



One thing that ASP.NET Core does is allow .NET applications to run in Docker containers. All of a sudden, we’ve gone from Windows-only web apps to lean cross-platform web apps running in containers. This has been great to see!




ASP.NET Core supports running apps across a variety of operating system platforms

Containers can provide a stable runtime environment for apps, but they aren’t always easy to manage. You still need to worry about how to automate deployment of containers, how to scale up and down and how to upgrade or downgrade app versions reliably. In short, you need a container management platform that you can rely on in production.



That’s where the open-source Kubernetes platform comes in. Kubernetes provides high-level building blocks such as pods, labels, controllers and services that collectively help you deploy and maintain containerized apps. Google Container Engine provides a hosted version of Kubernetes, which greatly simplifies creating and managing Kubernetes clusters.



My colleague Ivan Naranjo recently published a blog post that shows you how to take an ASP.NET Core app, containerize it with Docker and run it on Google App Engine. In this post, we’ll take a containerized ASP.NET Core app and manage it with Kubernetes and Google Container Engine. You'll be surprised how easy it is, especially considering that running an ASP.NET app on a non-Windows platform was unthinkable until recently.




Prerequisites


I am assuming a Windows development environment, but the instructions are similar on Mac or Linux.



First, we need to install .NET core, install Docker and install Google Cloud SDK for Windows. Then, we need to create a Google Cloud Platform project. We'll use this project later on to host our Kubernetes cluster on Container Engine.




Create a HelloWorld ASP.NET Core app


.NET Core comes with the .NET Core Command Line Tools, which make it really easy to create apps from the command line. Let’s create a HelloWorld folder and create a web app using the dotnet command:







$ mkdir HelloWorld


$ cd HelloWorld


$ dotnet new -t web





Restore the dependencies and run the app locally:








$ dotnet restore


$ dotnet run





You can then visit http://localhost:5000 to see the default ASP.NET Core page.




Get the app ready for publishing


Next, let’s pack the application and all of its dependencies into a folder to get it ready to publish.








$ dotnet publish -c Release





Once the app is published, we can test the resulting DLL with the following:







$ cd bin/Release/netcoreapp1.1/publish/


$ dotnet HelloWorld.dll






Containerize the ASP.NET Core app with Docker


Let’s now take our HelloWorld app and containerize it with Docker. Create a Dockerfile in the root of our app folder:









FROM microsoft/dotnet:1.1.0-runtime


COPY . /app


WORKDIR /app




EXPOSE 8080/tcp


ENV ASPNETCORE_URLS http://*:8080




ENTRYPOINT ["dotnet", "HelloWorld.dll"]









This is the recipe for the Docker image that we'll create shortly. In a nutshell, we're creating an image based on the microsoft/dotnet:1.1.0-runtime image, copying the current directory to the /app directory in the container, making /app the working directory, exposing port 8080, telling ASP.NET Core to listen on that port and setting the app's entry point.



Now we’re ready to build our Docker image and tag it with our Google Cloud project id:








$ docker build -t gcr.io/<PROJECT_ID>/hello-dotnet:v1 .





To make sure that our image is good, let’s run it locally in Docker:









$ docker run -d -p 8080:8080 -t gcr.io/<PROJECT_ID>/hello-dotnet:v1







Now when you visit http://localhost:8080, you'll see the same default ASP.NET Core page, this time running inside a Docker container.




Create a Kubernetes cluster in Container Engine


We're ready to create our Kubernetes cluster, but first let's install kubectl. In the Google Cloud SDK Shell:







$ gcloud components install kubectl





Once the cluster has been created (see below), configure kubectl command-line access to it with the following:







$ gcloud container clusters get-credentials hello-dotnet-cluster \


   --zone europe-west1-b --project <PROJECT_ID>





Now, let’s push our image to Google Container Registry using gcloud, so we can later refer to this image when we deploy and run our Kubernetes cluster. In the Google Cloud SDK Shell, type:










$ gcloud docker push gcr.io/<PROJECT_ID>/hello-dotnet:v1





Create a Kubernetes cluster with two nodes in Container Engine:










$ gcloud container clusters create hello-dotnet-cluster --num-nodes 2 --machine-type n1-standard-1






This will take a little while but when the cluster is ready, you should see something like this:









Creating cluster hello-dotnet-cluster...done.







Deploy and run the app in Container Engine


At this point, we have our image hosted on Google Container Registry and we have our Kubernetes cluster ready in Google Container Engine. There’s only one thing left to do: run our image in our Kubernetes cluster. To do that, we can use the kubectl command line tool.



Create a deployment from our image in Kubernetes:









$ kubectl run hello-dotnet --image=gcr.io/<PROJECT_ID>/hello-dotnet:v1 \


 --port=8080


deployment “hello-dotnet” created





Make sure the deployment and pod are running:









$ kubectl get deployments


NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE


hello-dotnet   1         1         1            0           28s






$ kubectl get pods


NAME                            READY     STATUS    RESTARTS   AGE


hello-dotnet-3797665162-gu99e   1/1       Running   0          1m





And expose our deployment to the outside world:










$ kubectl expose deployment hello-dotnet --type="LoadBalancer"


service "hello-dotnet" exposed






Once the service is ready, we can see the external IP address:









$ kubectl get services


NAME           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE


hello-dotnet   XX.X.XXX.XXX   XXX.XXX.XX.XXX   8080/TCP   1m






Finally, if you visit the external IP address on port 8080, you should see the default ASP.NET Core app managed by Kubernetes!



It’s fantastic to see the ASP.NET and Linux worlds are coming together. With Kubernetes, ASP.NET Core apps can benefit from automated deployments, scaling, reliable upgrades and much more. It’s a great time to be a .NET developer, for sure!