
In-between daydreaming about virtual reality and saying Allo to Duo, there were some fantastic Google Cloud Platform sessions and demos at Google I/O this year. In case you weren’t able to attend the show, here are the recordings.



In One Lap Around Google Cloud Platform, developer advocate Mandy Waite and Brad Abrams walk through the process of building a Node.js backend for an iOS- and Android-based game. Easily the best bit of the session is a demo that uses Kubernetes and Google Container Registry to deploy a Docker container to App Engine Flexible Environment, Google Container Engine and an AWS EC2 instance. It’s a simple demo of portability across clouds, a key differentiator for GCP.



Speaking of multi-cloud environments, developer advocate Aja Hammerly presented a great session on Stackdriver, called Just Enough Stackdriver To Sleep Well At Night, which will be music to the ears of any of you who carry a pager to support your site or application. In a nutshell, Aja shows how Stackdriver unifies a bunch of different monitoring and management tools into a single solution.



If you’ve made the leap to containers or are thinking about it, you’ll want to check out Carter Morgan's session, Best practices for orchestrating the cloud with Kubernetes. The session covers the basics of modern-day applications and how containers work, then shows how to package and distribute apps using Docker and how to up your game by running applications on Kubernetes.




Did you know that since January 2015, Kubernetes has seen 5,000 commits, with over 50% coming from unique contributors?

IoT ideas are a dime a dozen, but bringing them to life is another story. In Making sense of IoT data with the cloud, developer advocate Ian Lewis shows how to manage a large number of devices on GCP and how to ingest, store and analyze the data from those devices.



In Supercharging Firebase with Google Cloud Platform, developer advocates Sandeep Dinesh and Bret McGowen use Firebase to build a real-time game that interacts with virtual machines, big data and machine learning APIs on GCP. The coolest part of the demo involves the audience in the room and on the livestream interacting with the game via the Speech API, all yelling instructions at the same time to move a dot through a maze. The hallmark of Firebase, real-time data synchronization across connected devices in milliseconds, is on display here and fun to see. For more Firebase tips and tricks, check out Creating interactive multiplayer experiences with Firebase, from developer advocate Mark Mandel.



Switching gears to big data and the upcoming U.S. presidential election, developer advocate Felipe Hoffa and software engineer Jordan Tigani demo the power of Google BigQuery to uncover some intriguing campaign insights in Election 2016: The Big Data Showdown. By mashing together various public datasets in BigQuery, you'll learn which candidate is spending the most money and how efficient that spending is relative to their mentions on TV. Felipe and Jordan do a nice job showing how BigQuery can separate the signal from the noise to figure out what it all means.



Figuring out the right cloud storage option for each application in your business can be a daunting task. Dominic Preuss, group product manager, walks you through it in Scaling your data from concept to petabytes.



And of course, no event that includes Cloud Platform is complete without demos from developer advocates: Kaz Sato on How to build a smart RasPi bot with Cloud Vision and Speech API, and another crowd-pleaser, Google Cloud Spin: Stopping time with the power of Cloud, from Francesc Campoy Flores.



To find more tutorials, talks and demos on GCP beyond the sessions at I/O this year, check out our GCP YouTube channel and weekly podcast, and follow @GoogleCloud on Twitter for all the latest news and product announcements from the Cloud Platform team.










Is there a limit to what you can do with machine learning? Doesn’t seem like it.



At Moogfest last week, Google researchers presented Project Magenta, which uses TensorFlow, the open-source machine learning library that we developed, to see if computers can create original pieces of art and music.



Researchers have also shown that they can use TensorFlow to train systems to imitate the grand masters. With this implementation of neural style in TensorFlow, it becomes easy to render an image that looks like it was created by Vincent Van Gogh or Pablo Picasso — or a combination thereof. Take this image of Frank Gehry’s Stata Center in winter,



add style inputs from Van Gogh’s Starry Night and a Picasso Dora Maar:



and end up with:



Voilà! A Picasso-esque Van Gogh for the 21st century!



(The code for neural style was first posted to GitHub last December, but the author continues to update it and welcomes pull requests.)
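If you want to try it yourself, a run is a single command. Here's a minimal sketch, assuming the flags documented in that repository's README at the time of writing; the image file names are illustrative:

> python neural_style.py --content stata_center.jpg \
    --styles starry_night.jpg dora_maar.jpg \
    --style-blend-weights 0.5 0.5 \
    --output stata_restyled.jpg

Equal blend weights mix the two styles evenly; skew them to favor one painter over the other.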



Or maybe fine art isn’t your thing. This week, we also saw how to use TensorFlow to solve trivial programming problems, forecast demand — even predict the elections.



Because TensorFlow is open source, anyone can use it, on the platform of their choice. But it’s worth mentioning that running machine learning on Google Cloud Platform works especially well. You can learn more about GCP’s machine learning capabilities here. And if you’re doing something awesome with machine learning on GCP, we’d love to hear about it — just tweet us at @googlecloud.






Editor's note: Updated May 27, 2016 with guidance on running nodes in multiple zones.



Google Container Engine (GKE) aims to be the best place to set up and manage your Kubernetes clusters. When creating a cluster, users have always been able to select options like the nodes’ machine type, disk size, etc., but those choices applied to all the nodes, making the cluster homogeneous. Until now, it was very difficult to run a cluster with a heterogeneous machine configuration.



That’s where node pools come in, a new feature in Google Container Engine that's now generally available. A node pool is simply a collection, or “pool,” of machines with the same configuration. Now instead of a uniform cluster where all the nodes are the same, you can have multiple node pools that better suit your needs. Imagine you created a cluster composed of n1-standard-2 machines and then realize that you need more CPU. You can now easily add a node pool to your existing cluster composed of n1-standard-4 (or bigger) machines.



All this happens through the new “node-pools” commands available via the gcloud command line tool. Let’s take a deeper look at using this new feature.






Creating your cluster




A node pool must belong to a cluster, and all clusters have a default node pool named “default-pool”. So, let’s create a new cluster (we assume you’ve set the project and zone defaults in gcloud):



> gcloud container clusters create work
NAME  ZONE           MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
work  us-central1-f  1.2.3           123.456.789.xxx  n1-standard-1  1.2.3         3          RUNNING



Like before, you can still specify some node configuration options, like “--machine-type” to specify a machine type, or “--num-nodes” to set the initial number of nodes.




Creating a new node pool




Once the cluster has been created, you can see its node pools with the new “node-pools” top-level object. (Note: you may need to upgrade your gcloud commands via “gcloud components update” to use these new options.)



> gcloud container node-pools list --cluster=work
NAME          MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
default-pool  n1-standard-1  100           1.2.3



Notice that you must now specify a new parameter, “--cluster”. Recall that node pools belong to a cluster, so you must specify the cluster with which to use node-pools commands. You can also set it as the default in config by calling:



> gcloud config set container/cluster CLUSTER_NAME



Also, if you had an existing cluster on GKE, its nodes have been automatically migrated to “default-pool,” with the original cluster node configuration.



Let’s create a new node pool on our “work” cluster with a custom machine type of 2 CPUs and 12 GB of RAM:



> gcloud container node-pools create high-mem --cluster=work \
    --machine-type=custom-2-12288 --disk-size=200 --num-nodes=4



This creates a new node pool with 4 nodes, using custom machine VMs and 200 GB boot disks. Now, when you list your node pools, you get:



> gcloud container node-pools list --cluster=work
NAME          MACHINE_TYPE    DISK_SIZE_GB  NODE_VERSION
default-pool  n1-standard-1   100           1.2.3
high-mem      custom-2-12288  200           1.2.3



And if you list the nodes in kubectl:



> kubectl get nodes
NAME                                 STATUS  AGE
gke-work-high-mem-d8e4e9a4-xzdy      Ready   2m
gke-work-high-mem-d8e4e9a4-4dfc      Ready   2m
gke-work-high-mem-d8e4e9a4-bv3d      Ready   2m
gke-work-high-mem-d8e4e9a4-5312      Ready   2m
gke-work-default-pool-9356555a-uliq  Ready   1d



With Kubernetes 1.2, the nodes in each node pool are also automatically assigned the node label “cloud.google.com/gke-nodepool=NODE_POOL_NAME”. With node labels, it’s possible to have heterogeneous nodes within your cluster and schedule your pods onto the specific nodes that meet their needs. Perhaps a set of pods need a lot of memory — allocate a high-mem node pool and schedule them there, as in the sketch below. Or perhaps they need more local disk space — assign them to a node pool with a lot of local storage capacity. More configuration options for nodes are being considered.
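For example, here's a minimal sketch of pinning a pod to the high-mem pool created above; the pod name and container image are illustrative:

> cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: analytics-worker
spec:
  # Match the node pool label assigned by GKE.
  nodeSelector:
    cloud.google.com/gke-nodepool: high-mem
  containers:
  - name: worker
    image: gcr.io/my-project/worker:1.0
EOF

The scheduler will only place this pod on nodes carrying the high-mem pool label.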




More fun with node pools




There are also other, more advanced scenarios for node pools. Suppose you want to upgrade the nodes in your cluster to the latest Kubernetes release, but need finer-grained control over the transition (e.g., to perform A/B testing, or to migrate the pods slowly). When a new release of Kubernetes is available on GKE, simply create a new node pool; node pools are created at the same version as the cluster master, which is automatically updated to the latest Kubernetes release. Here’s how to create a new node pool with the appropriate version:





> gcloud container node-pools create my-1-2-4-pool --cluster=work \
    --num-nodes=3 --machine-type=n1-standard-4

> gcloud container node-pools list --cluster=work
NAME           MACHINE_TYPE    DISK_SIZE_GB  NODE_VERSION
default-pool   n1-standard-1   100           1.2.3
high-mem       custom-2-12288  200           1.2.3
my-1-2-4-pool  n1-standard-4   100           1.2.4



You can now use kubectl to update your replication controller to schedule its pods with the node selector “cloud.google.com/gke-nodepool=my-1-2-4-pool”; your pods will then be rescheduled from the old nodes onto the new pool's nodes (a sketch of this flow appears at the end of this section). After the verifications are complete, continue the transition with other pods until all of the old nodes are effectively empty. You can then delete your original node pool:



> gcloud container node-pools delete default-pool --cluster=work

> gcloud container node-pools list --cluster=work
NAME           MACHINE_TYPE    DISK_SIZE_GB  NODE_VERSION
high-mem       custom-2-12288  200           1.2.3
my-1-2-4-pool  n1-standard-4   100           1.2.4



And voilà, all of your pods are now running on nodes running the latest version of Kubernetes!
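Here's a rough sketch of that migration flow; the controller name and label are illustrative, and the patch syntax assumes Kubernetes 1.2-era kubectl:

# Add the node selector to the controller's pod template.
> kubectl patch rc my-frontend -p \
    '{"spec":{"template":{"spec":{"nodeSelector":{"cloud.google.com/gke-nodepool":"my-1-2-4-pool"}}}}}'

# Delete the old pods; the replication controller recreates them, and the
# scheduler places the replacements on the my-1-2-4-pool nodes.
> kubectl delete pods -l app=my-frontend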




Node pools across multiple zones




Many customers have requested the ability to run nodes in multiple zones to improve the availability of their applications in the unlikely event of a zone outage. Node pools support multi-zone clusters automatically. To create a multi-zone cluster, pass the “--additional-zones” flag to gcloud and specify one or more zones within the same region as your cluster:



> gcloud container clusters create multi-prod --zone us-central1-f \
    --additional-zones=us-central1-a,us-central1-b
NAME        ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
multi-prod  us-central1-f  1.2.4           xxx.xxx.xxx.xx  n1-standard-1  1.2.4         9          RUNNING





If you create additional node pools, they'll automatically span all of the zones in your cluster, so nodes will be created in those additional zones as well. Note that the “--num-nodes” option is per zone, so the total number of nodes created multiplies accordingly; be aware that you may hit your quota limits.



> gcloud container node-pools create larger-pool --cluster=multi-prod \
    --num-nodes=2
NAME        ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
multi-prod  us-central1-f  1.2.4           xxx.xxx.xxx.xx  n1-standard-1  1.2.4         6          RUNNING





When you list your nodes in the Kubernetes API, you’ll see that they span all of the zones you specified, and are automatically labeled with “failure-domain.beta.kubernetes.io/zone”:



# Use a go-template to filter to just the node name and zone.
> kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}, {{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}{{printf "\n"}}{{end}}'

gke-multi-prod-default-pool-29bvd5cf-o73u, us-central1-b
gke-multi-prod-default-pool-b14r04c2-blx8, us-central1-a
gke-multi-prod-default-pool-ef1gde41-hzx5, us-central1-f
gke-multi-prod-larger-pool-6e67u678-etsz, us-central1-a
gke-multi-prod-larger-pool-70b6y344-1rz9, us-central1-b
gke-multi-prod-larger-pool-8b25kaa0-k1e3, us-central1-f








Conclusion




The new node pools feature in GKE enables more powerful and flexible scenarios for your Kubernetes clusters. As always, we’d love to hear your feedback and help guide us on what you’d like to see in the product.






Last week at OSCON, a couple of us Google Cloud Platform developer advocates participated in a hackathon with the Kubernetes team and the Cloud Native Computing Foundation as part of the OSCON Contribute series. The idea behind these hackathons is to help people get involved with open source and grow from users into contributors. Every open source project has its own culture around submissions, which can be daunting to people on the outside. Having the people who want to contribute in the same room as the people who know the process can get new contributors up and running quickly.




John Mulhausen (standing left) from the Technical Writing team and Jeff Mendoza (standing right) from Developer Programs Engineering roam the tables and help Kubernetes hackers get started.

We had a diverse crowd of people from all around the Kubernetes community. One group was working on building drivers to connect EMC storage as a persistent storage solution for Kubernetes clusters. Another contributor helped improve documentation by submitting diagrams that illustrate complex configurations. Mike from Wisconsin participated because “I just love this project. I wanted to see if I could do something to give back.” We liked that attitude, so a few of us Googlers skipped out on the event lunch and took Mike out for high-end hot dogs at Frank Restaurant.



Around 75 people showed up to the event. By the end of the day, we had seven pull requests from four new contributors for Kubernetes documentation, and a couple of the 43 pull requests the main Kubernetes project fielded were from our efforts. We’re happy to have the contributions, and even more excited for the new contributors, who have gotten over the hurdle of putting in their first pull request. They're that much closer to their second and third contributions than they were before this event, and are now valued members of the Kubernetes contributor community.





We also weren’t above bribing people. We gave out lots of great new limited-edition “Kollaborator” t-shirts, Cloud Platform credits, and free, temporary accounts for non-GCP users. If you have to run some servers to test out your Kubernetes contributions, we want to foot the bill!



Just because the hackathon is over doesn’t mean you can’t still contribute. You can find out more about the Kubernetes project at our Getting Started pages. Then, visit the Kubernetes project on GitHub, click the “Fork” button and start messing around with the source. If you want to try your hand at a few areas where we need help, check out the issues on GitHub marked Help Wanted. Our documentation team is so keen on getting contributions that they've even put up bounties on particular issues. Every little bit counts!









If you’re looking for a fun way to get up-to-speed on Kubernetes and microservices without leaving the comfort of your desk, look no further. The new online course “Scalable Microservices with Kubernetes” on Udacity is designed for developers and systems administrators who want to hone their automation and distributed systems management skills, and is led by Google Cloud Platform’s very own Kelsey Hightower and Carter Morgan. There’s also a guest appearance from Battery Ventures Technology Fellow and DevOps legend Adrian Cockcroft!



This intermediate-level class consists of four lessons:


  • Introduction to Microservices

  • Building Containers with Docker

  • Kubernetes 

  • Deploying Microservices (from image to running service)




To excel at the course, you should be fluent in at least one programming language, be familiar with general virtualization concepts and know your way around a Linux command line. Udacity estimates that students can complete the course in about one month.



Successful completion of the course counts towards Udacity’s upcoming DevOps Engineer Nanodegree — one of several Nanodegrees co-developed by Google. Best of all, the course is free!



Here’s where to sign up.









With all eyes trained on Google I/O, our annual developer show, Google Cloud Platform took the opportunity to share some of our secret sauce on why GCP is the best possible cloud on which to build and run enterprise applications.



From the stealthy Tensor Processing Unit (TPU), a custom chip to power machine learning apps, to new GCP integrations with Firebase, the industry-default mobile backend-as-a-service, to new APIs for Sheets and Slides — it’s hard to keep your hands off this stuff if you have an inclination to build the next big thing in the enterprise.



In other news, did you know that GCP now recommends a container-optimized OS image based on Chromium OS, the open-source version of Chrome OS? With Container-VM Image, GCP users gain better control over build management, security and compliance, and customizations for services such as Google Compute Engine’s metadata framework and image packages, the Docker runtime and Kubernetes. The image is currently in beta; here’s some helpful documentation on how to get started.



Running containerized workloads may be the next big thing, but GCP customers still spend a lot of time and energy sending out email. A lot of email. To recipients all over the world. For those users with European customers, email marketing firm Mailjet announced this week that several of its services can be accessed directly from Google Compute Engine and Google App Engine. Furthermore, thanks to servers located on the Old Continent, it's fully compliant with U.S. as well as European Union privacy regulations. Whether you pronounce it /ˈpɹɪv.ə.si/ or /ˈpɹaɪ.və.si/, Mailjet and GCP have got you covered.



Indeed, with cloud services, location is almost as important as the service itself. Data Center Knowledge gives us a great Q&A with Google vice president of data center operations Joe Kava, in which he talks about GCP’s plans to launch new regions in the coming months, including Tokyo, and why, in the cloud, it’s so important to build data centers in major metropolitan areas.



Check back next week for our highlights reel of the best Cloud Platform sessions and demos from Google I/O.






In the short time since Firebase joined Google, the passionate community of developers using the backend-as-a-service to handle the heavy lifting of building an app has grown from 110,000 to over 470,000 developers around the world.



In that same span, Firebase has come to rely on Google Cloud Platform, leaning on GCP for core infrastructure as well as value-added services. For example, GCP figures prominently in several of the new Firebase features that we’re announcing at Google I/O 2016 today.



One of the most requested features by Firebase developers is the ability to store images, videos and other large files. The new Firebase Storage is powered by Google Cloud Storage, giving it massive scalability and allowing stored files to be easily accessed by other projects running on Google Cloud Platform.



Firebase now uses the same underlying account system as GCP, which means you can use any GCP product with your Firebase app. For example, you can export raw analytics data from the new Firebase Analytics to Google BigQuery to help you surface advanced insights about your application and users.



Going forward, we’ll continue to build out integrations between Firebase and Google Cloud Platform, giving you the functionality of a full public cloud as you add to your mobile application portfolio.



To learn more about Firebase, visit our new site. For all the new features we’re announcing at Google I/O today, click on over to the Firebase blog. We’re working quickly to close gaps, and we’d love to hear your feedback so we can improve. You can help by requesting a feature.






Machine learning provides the underlying oomph to many of Google’s most-loved applications. In fact, more than 100 teams are currently using machine learning at Google today, from Street View, to Inbox Smart Reply, to voice search.



But one thing we know to be true at Google: great software shines brightest with great hardware underneath. That’s why we started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications.



The result is called a Tensor Processing Unit (TPU), a custom ASIC we built specifically for machine learning — and tailored for TensorFlow. We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law).



TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly. A board with a TPU fits into a hard disk drive slot in our data center racks.




Tensor Processing Unit board

TPU is an example of how fast we turn research into practice — from first tested silicon, the team had them up and running applications at speed in our data centers within 22 days.



TPUs already power many applications at Google, including RankBrain, used to improve the relevancy of search results, and Street View, used to improve the accuracy and quality of our maps and navigation. AlphaGo was powered by TPUs in its matches against Go world champion Lee Sedol, enabling it to "think" much faster and look farther ahead between moves.




Server racks with TPUs used in the AlphaGo matches with Lee Sedol

Our goal is to lead the industry on machine learning and make that innovation available to our customers. Building TPUs into our infrastructure stack will allow us to bring the power of Google to developers across software like TensorFlow and Cloud Machine Learning with advanced acceleration capabilities. Machine Learning is transforming how developers build intelligent applications that benefit customers and consumers, and we're excited to see the possibilities come to life.






The jury’s still out whether that rectangle in the Google Maps image identified by 15-year old Canadian William Gadoury is a lost Mayan city . . . or merely an abandoned field.



Meanwhile, Google Cloud Platform customers have no doubts about the value of geospatial data. This week, Land O’Lakes announced its new WinField Data Silo tool, which runs on top of Google Compute Engine and Google Cloud Storage and integrates with the Google Maps API to display real-time agronomic data stored in the system to its users. The fact that those users can be anywhere, sitting at their desks or on the consoles of their combine harvesters, was cited as a unique differentiator for GCP.



Speaking of unique, cloud architect Janakiram MSV shares on Forbes the five unique things about GCE that no other IaaS provider can match. First on his list is Google Compute Engine’s sustained usage discount. No argument from us. The longer a VM runs on GCE, the greater the discount: up to 30% for instances that run an entire month. Further, customers don’t need to commit to the instance up front, and any discounts are automatically applied by Google on their bill.
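To make the math concrete, here's a back-of-the-envelope sketch in Ruby. It assumes the tiered rates GCE published at the time — each successive quarter of the month billed at 100%, 80%, 60% and 40% of the base rate, averaging out to a 30% discount for a full month — and the hourly rate is a made-up figure:

# Hypothetical base rate; real rates vary by machine type.
base_hourly = 0.05
hours_in_month = 730.0

# Incremental billing rate for each quarter of the month of sustained use.
tier_rates = [1.0, 0.8, 0.6, 0.4]

full_month = tier_rates.map { |r| r * base_hourly * hours_in_month / 4 }.reduce(:+)
puts format("full month: $%.2f vs. $%.2f at the base rate",
            full_month, base_hourly * hours_in_month)
# => full month: $25.55 vs. $36.50 at the base rate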



No argument from GCP customer Geofeedia either. According to the market intelligence provider, reserved instances have no place in provisioning cloud compute resources. “In the world of agile software, making a one, let alone a three year prediction about your hardware needs is extremely difficult,” writes Charlie Moad, Geofeedia director of production engineering. Moad also gives shout outs to GCP networking, firewall rules and its project-centric approach to building multi-region applications.



That’s it for this week on Google Cloud Platform. If you happen to be at Google I/O 2016 next week, check out the cloud sessions. And be sure to come back next Friday for the latest on the lost Mayan city/abandoned field debate.






In 2050, the world's population will require farms to feed upwards of 10 billion people. This means one farmer will need to feed 250 people. That’s 61 percent more than a farmer feeds today. To meet this exploding demand, Land O’Lakes, Inc. is turning to the cloud to revolutionize modern farming. This is coming to life through Land O’Lakes’ WinField brand, a leading provider of agricultural solutions. Using Google Cloud Platform, they’re launching the WinField Data Silo™, a cloud-based application to help farmers make better, data-driven decisions.



Farmers make dozens of important decisions throughout their crop’s growing cycle, from when to plant seeds to where and how much to water and fertilize. Traditionally, these decisions have largely been based on intuition. But with cloud-based big data tools that can capture, ingest and analyze data from multiple sources simultaneously, farmers can access precise information to optimize their yields. To improve decision-making tools for farmers, Land O’Lakes built Data Silo, a cloud-native system that connects farmers with data.



Data Silo is a data collection application that ingests, stores and shares information between farmers, retailers, and third-party providers. It connects previously disparate systems, letting users quickly share information about crops and farm operations. With it, farmers can easily upload data to the platform, build dashboards and search for information. In return, they receive guidance on agronomic best practices, such as which crop to grow in a particular field, while maintaining control over who owns and accesses the data.





Land O’Lakes worked with Google Cloud Platform Technology Partner, Cloud Technology Partners, to develop Data Silo from the ground up. In a first phase, CTP worked with Google App Engine and Google Cloud SQL to build a working prototype within weeks of starting the project. Eventually, Land O’Lakes migrated Data Silo to Google Compute Engine to run its web-based PHP application, power the mobile and web-based interfaces and integrate with existing monitoring and security systems. In addition, it implemented the PostgreSQL database and PostGIS libraries to run complex GIS functions.



A key differentiator for Cloud Platform is its geospatial sophistication. By integrating with the Google Maps API, Land O’Lakes is able to present Data Silo users with geospatial data overlaid with labels that are meaningful to them, on their mobile devices or desktops. Users can sort and view the data according to user-definable views such as type of crop, growing periods and yields. Maps update in real-time, along with any new data that users upload to the system.



Google Cloud Platform features also present unique possibilities for Land O’Lakes and Data Silo. Today, it functions primarily as a place for growers to store and share data about their farming operations. Tomorrow, Data Silo could evolve into a data hub for a variety of agricultural applications, for example, using Google Pub/Sub for data integration, or Google BigQuery and Google Cloud Bigtable to perform analytics that further drive crop yields.



Over the past 50 years, Land O’Lakes has grown and adapted to the changing needs of more than 300,000 farmers. To help them produce more food with fewer resources and less environmental impact, the company is investing millions of dollars in new technology. Having a flexible, secure cloud that can easily scale is critical to Land O’Lakes’ ability to launch today’s Data Silo technology and future innovations.



To hear more about how Land O’Lakes implemented Google Cloud Platform, watch their technical session at GCP NEXT.





Don’t let anyone tell you that Google Cloud Platform doesn’t support a wide range of platforms and programming languages. We kicked things off with Python and Java on Google App Engine, then PHP and Go. Now, we support .NET framework on Google Compute Engine.



Google recently published a .NET client library for services like Google Cloud Datastore and Windows virtual machines running on Compute Engine. With those pieces in place, it’s now possible to run an ASP.NET application directly on Cloud Platform.



To get you up and running fast, we published two new tutorials that show you how to build and deploy ASP.NET applications to Cloud Platform.



The Hello World tutorial shows you how to deploy an ASP.NET application to Compute Engine.



The Bookshelf tutorial shows you how to build an ASP.NET MVC application that uses a variety of Cloud Platform services to make your application reliable, scalable and easy to maintain. First, it shows you how to store structured data with .NET. Do you love SQL? Use Entity Framework to store structured data in Cloud SQL. Tired of connection strings and running ALTER TABLE statements? Use Cloud Datastore to store structured data. The tutorial also shows you how to store binary data and run background worker tasks.



Give the tutorials a try, and please share your feedback! And don’t think we’re done yet; this is just the beginning. Among many efforts, we're hand-coding open source libraries so that calling Google APIs feels familiar to .NET programmers. Stay tuned for more on running ASP.NET applications on Google Cloud Platform.






At Google we're always obsessed with speed, in our products and on the web. Faster sites create happy users and improve engagement. Faster sites also reduce operating costs. Like us, Google Cloud Platform customers place a lot of value in speed — that's why we decided to externalize some of the tools that Google engineers use to optimize sites, including Stackdriver Trace.



A member of the Google Stackdriver family, Stackdriver Trace is now generally available for Google App Engine, which receives over 100B requests per day. Stackdriver Trace automatically analyzes each of your applications running on App Engine to identify performance bottlenecks and emergent issues.










Impact of latency on application




Stackdriver Trace provides detailed insight into your application’s runtime performance and latency in near real time. The service continuously evaluates data from each traced request and checks for patterns that indicate performance bottlenecks. To remove the operational overhead of performance analysis, Stackdriver Trace automatically analyzes your application’s performance over time. You can also create reports to evaluate your application’s latency across versions or releases. With the latency shift detection feature, the service analyzes each report to determine whether there has been a significant shift in latency over time.





The Stackdriver Trace API can be used to add custom spans to a trace. A span represents a unit of work within a trace, such as an RPC request or a section of code. For custom workloads, you can define the start and end of your own spans using the Stackdriver Trace SDK. This data is uploaded to Stackdriver Trace, where you can leverage all of the trace insights and analysis features mentioned above.
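As a rough sketch of what uploading a custom span can look like against the Trace v1 REST API — the project ID, trace ID, span name and timestamps below are all placeholders, and field names should be checked against the current API reference:

> curl -X PATCH \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://cloudtrace.googleapis.com/v1/projects/my-project/traces" \
    -d '{
      "traces": [{
        "projectId": "my-project",
        "traceId": "0123456789abcdef0123456789abcdef",
        "spans": [{
          "spanId": "1",
          "name": "process-order",
          "startTime": "2016-05-01T12:00:00.000Z",
          "endTime": "2016-05-01T12:00:00.250Z"
        }]
      }]
    }'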







Trace is already integrated with other Stackdriver tools, such as Monitoring, Logging, Error Reporting and Debugger.



Today, Trace works seamlessly across your distributed environment and supports all language runtimes on the App Engine platform. Stay tuned for Trace coverage of other GCP platforms.












Get started today






We’re looking forward to this next step for Google Cloud Platform as we continue to help developers and businesses everywhere benefit from Google’s technical and operational expertise in application performance. Please visit Stackdriver Trace to learn more and contact us with your feedback and ideas.





Developers love application containers and the Docker and Rocket package formats because of the package-once, run-anywhere experience that simplifies their jobs. But even the easiest-to-use technologies can spiral out of control and become victims of their own success. Google knows this all too well. With our own internal systems, we realized a long time ago that the most efficient way to share compute resources was containers, and that the only way to run containers at scale is to use automation and orchestration. And so we developed cgroups, which we contributed to the Linux kernel and which helped establish the container ecosystem, and what we affectionately call Borg, our cluster management system.



Flash forward to the recent rise of containers, and it occurred to us that developers at large could benefit from the service discovery, configuration and orchestration that Borg provides, to simplify building and running multi-node container-based applications. Thus Kubernetes was born, an open-source derivative of Borg that anyone can use to manage their container environment.



Earlier this year, we transferred the Kubernetes IP to the Cloud Native Computing Foundation. Under the auspices of the CNCF, members such as IBM, Docker, CoreOS, Mesosphere, Red Hat and VMware work alongside Google to ensure that Kubernetes works not just in Google environments, but in whatever public or private cloud an organization may choose.



What does that mean for container-centric shops? Kubernetes builds on the workload portability that containers provide, by helping organizations to avoid getting locked into any one cloud provider. Today, you may be running on Google Container Engine, but there may come a time when you wish you could take advantage of IBM’s middleware. Or you may be a longtime AWS shop, but would love to use Google Cloud Platform’s advanced big data and machine learning. Or you’re on Microsoft Azure today for its ability to run .Net applications, but would like to take advantage of existing in-house resources running OpenStack. By providing an application-centric API on top of compute resources, Kubernetes helps realize the promise of multi-cloud scenarios.



If running across more than one cloud is in your future, choosing Kubernetes as the basis of your container orchestration strategy makes sense. Today, most of the major public cloud providers offer container orchestration and scheduling as a service. Our offering, Google Container Engine (GKE), is based on Kubernetes, and by placing Kubernetes in the hands of the CNCF, our goal is to ensure that your applications will run on any Kubernetes implementation that a cloud provider may offer, or that you run yourself.



Even today, it’s possible to run Kubernetes on any cloud environment of your choosing. Don’t believe us? Just look at CoreOS Tectonic, which runs on AWS, or Kubernetes for Microsoft Azure.



Stay tuned for a tutorial about how to set up and run Kubernetes to run multi-cloud applications, or get started right away with a free trial.








Sometimes, when doing a roundup of the week’s news, no clear theme emerges, and you’re left with a disjointed list of unrelated tidbits. That wasn’t a problem this week; both on this blog and in the Google Cloud Platform world at large, people had big data and analytics on the brain.



The week started out with a bang, with big data consultancy Mammoth Data releasing the results of a benchmark test comparing Google Cloud Dataflow with Apache Spark. Google’s data processing service did really well, outperforming Spark by two to five times, depending on the number of cores in the test.



Cloud Dataflow is a paid service, of course, but the platform’s API was recently accepted as an incubator project with the Apache Software Foundation, under the name Apache Beam. The rationale, according to Tyler Akidau, Google staff software engineer for Apache Beam, is to “provide the world with an easy-to-use, but powerful model for data-parallel processing, both streaming and batch, portable across a variety of runtime platforms.” You can read Tyler’s full post here. Data Artisans' Kostas Tzoumas also provides his organization’s take, and explains the relationship of Apache Beam to Apache Flink.



We were also treated to the next installment of big data guru Mark Litwintschik’s "A billion taxi rides" series, in which he analyzes data about 1.1 billion taxi and Uber rides in NYC with different data analytics tools. Up this week: Mark schooled us on how he got 33x Faster Queries on Google Cloud Dataproc; the Performance Impact of File Sizes on Presto Query Times; and how to build a 50-node Presto Cluster on Google Cloud's Dataproc.



If that’s not enough for you, be sure to register for a joint webinar with Bitnami, "Visualizing Big Data with Big Money" that uses election data from the Center for Responsive Politics. Using Google BigQuery and the open-source Re:Dash data visualization tool, citizens will be able to grok the enormity of this country’s campaign finance problems depressingly fast.






We’re excited to announce that the Ruby runtime on Google App Engine is going beta. Frameworks such as Ruby on Rails and Sinatra make it easy for developers to rapidly build web applications and APIs for the cloud. App Engine provides an easy-to-use platform for developers to build, deploy, manage and automatically scale services on Google’s infrastructure.




Getting started




To help you get started with Ruby on App Engine, we’ve built a collection of getting-started guides, samples and interactive tutorials that walk you through creating your code, using our APIs and services, and deploying to production.



When running Ruby on App Engine, you can use the tools and databases you already know and love. Use Rails, Sinatra or any other web framework to build your app. Use PostgreSQL, MySQL or Cloud Datastore to store your data. The runtime is flexible enough to manage most applications and services, but if you want more control over the underlying infrastructure, you can easily migrate to Google Container Engine or Google Compute Engine for more flexibility and control.
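As a sketch of what deployment looks like, here's a minimal app.yaml for the beta Ruby runtime and the deploy command; the entrypoint is illustrative, so adjust it to however your app starts its server:

# app.yaml
runtime: ruby
vm: true
entrypoint: bundle exec rackup --port $PORT

# Then, from your project directory:
> gcloud app deploy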




Using Google’s APIs & services




Using the gcloud Ruby gem, you can take advantage of Google’s advanced APIs and services, like our scalable NoSQL database Google Cloud Datastore, Google Cloud Pub/Sub and Google BigQuery:



require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

sql = "SELECT TOP(word, 50) as word, COUNT(*) as count " +
"FROM publicdata:samples.shakespeare"
job = bigquery.query_job sql

job.wait_until_done!
if !job.failed?
job.query_results.each do |row|
puts row["word"]
end
end



Services like BigQuery allow you to take advantage of Google’s unique technology in the cloud to bring life to your applications.




Commitment to Ruby and open source




At Google, we’re committed to open source. The new core Ruby Docker runtime, the gcloud gem, the Google API client: everything is open source.






We’re thrilled to welcome Ruby developers to Google Cloud Platform, and we’re committed to making further investments to help make you as productive as possible. This is just the start; stay tuned to the blog and our GitHub repositories to catch the next wave of Ruby support on GCP.



We can’t wait to hear what you think. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #ruby channel.






Since 2013, over four million people who needed a bit of help with Gmail, Calendar, Drive or Docs have turned to Synergyse, a virtual training coach for the Google Apps suite. Today, we announced that Synergyse is joining the Google family, and that all Google Apps users will be able to install the extension for free while the integration is underway. Run, don’t walk, to get your copy.



Free stuff aside, the Synergyse architecture serves as a powerful reminder of what’s possible when you choose Google as your application design partner: Synergyse reaches millions of consumers and business users who rely on the easy-to-use and integrated Google Apps suite. The Synergyse training modules make their way to users through a simple Google Chrome extension.



Moreover, the Synergyse back-end is powered by Google Cloud Platform, which takes care of delivering interactive, context-aware training on demand as new users come online. Because Synergyse is hosted in the cloud, the team can easily add or update its training modules, and because it's built on GCP, it’s easy to weave in advanced Google functionality like search and speech recognition. It’s a testament to the cool things you can do when you use openness, collaboration and integration as your guiding product development principles.