



As a web engineer at Google, I've been creating scaled systems for internal teams and customers for the past five years. Often these include both a web front-end and a back-end component. I would like to share with you a story about creating a bespoke machine learning (ML) system using the Google Cloud Platform stack, and hopefully inspire you to build some really cool web apps of your own.



The story starts with my curiosity for computer vision. I've been fascinated with this area for a long time. Some of you may have even seen my public posts from my personal experiments, where I strive to find the most simple solution to achieve a desired result. I'm a big fan of simplicity, especially as the complexity of my projects has increased over the years. A good friend once said to me, “Simplicity is a complex art,” and after ten years in the industry, I can say that this is most certainly true.






Some of my early experiments in computer vision attempting to isolate movement

My background is as a web engineer and computer scientist, getting my start back in 2004 on popular stacks of the day like LAMP. Then, in 2011 I joined Google and was introduced to the Google Cloud stack, namely Google App Engine. I found that having a system that dealt with scaling and distribution was a massive time saver, and have been hooked on App Engine ever since.



But things have come a long way since 2011. Recently, I was involved in a project to create a web-based machine learning system using TensorFlow. Let’s look at some of the newer Google Cloud technologies that I used to create it.




Problem: how to guarantee job execution for both long-running and shorter, time-critical tasks





Using TensorFlow to recognize custom objects via Google Compute Engine



Earlier in the year I was learning how to use TensorFlow, an open source software library for machine intelligence developed by Google (which is well worth checking out, by the way). Once I figured out how to get TensorFlow working on Google Compute Engine, I soon realized this thing was not going to scale on its own: several components needed to be split out into their own servers to distribute the load.




Initial design and problem


In my application, retraining parts of a deep neural network was taking about 30 minutes per job on average. Given the potential for long running jobs, I wanted to provide status updates in real-time to the user to keep them informed of progress.



I also needed to analyze images using classifiers that had already been trained, which typically takes less than 100ms per job. I could not have these shorter jobs blocked by the longer running 30-minute ones.



An initial implementation looked something like this:



There are a number of problems here:




  1. The Google Compute Engine server is massively overloaded, handling several types of jobs.

  2. It was possible to create a Compute Engine auto-scaling pool of up to 10 instances depending on demand, but if 10 long-running training tasks were requested, then there wouldn’t be any instances available for classification or file upload tasks.

  3. Due to budget constraints for the project, I couldn’t fire up more than 10 instances at a time.






Database options

In addition to supporting many different kinds of workloads, this application needed to store persistent data. There are a number of databases that support this, the most obvious of which is Google Cloud SQL. However, I had a number of issues with this approach:




  1. Time investment. Using Cloud SQL would have meant writing all that DB code to integrate with a SQL database myself, and I needed to provide a working prototype ASAP.

  2. Security. Cloud SQL integration would have required the Google Compute Engine instances to have direct access to the core database, which I did not want to expose.

  3. Heterogeneous jobs. It’s 2016; surely something already exists that solves this problem and can work with different job types.




My solution was to use Firebase, Google’s backend-as-a-service offering for creating mobile and web applications. Firebase allowed me to persist data as JSON objects using its existing API (perfect for my Node.js based server), allowed the client to listen for changes to the database (perfect for communicating status updates on jobs), and did not require tightly coupled integration with my core Cloud SQL database.




My Google Cloud Platform stack




I ended up splitting the server into three pools that were highly specialized for a specific task: one for classification, one for training, and one for file upload. Here are the cloud technologies I used for each task:



Firebase

I had been eyeing an opportunity to use Firebase on a project for quite some time after speaking with James Tamplin and his team. One key feature of Firebase is that it allows you to create a real-time database in minutes. That’s right, real time, with support for listening for updates to any part of it, just using JavaScript. And yes, you can write a working chat application in less than 5 minutes using Firebase! This would be perfect for real-time job status updates, as I could just have the front-end listen for changes to the job in question and refresh the GUI. What’s more, all the websockets and DB fun is handled for you, so I just needed to pass JSON objects around using a super easy-to-use API. Firebase even handles going offline, syncing when connectivity is restored.
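As an illustration, a job record in the real-time database might look like the JSON below, with the front end subscribed to changes on the `status` and `progress` fields. The exact schema here is my own invention for this sketch, not anything mandated by Firebase:

```json
{
  "jobs": {
    "job-1234": {
      "type": "training",
      "status": "running",
      "progress": 42,
      "startedAt": 1466000000
    }
  }
}
```

The worker updates `progress` as retraining proceeds, and every listening client receives the change automatically.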



Cloud Pub/Sub

My colleagues Robert Kubis and Mete Atamel introduced me to Google Cloud Pub/Sub, Google’s managed real-time messaging service. Cloud Pub/Sub essentially allows you to send messages to a central topic, from which your Compute Engine instances can create subscriptions and pull or push messages asynchronously, in a loosely coupled manner. This guarantees that every job will eventually run once capacity becomes available, and it all happens behind the scenes, so you don't have to worry about retrying jobs yourself. It’s a massive time-saver.
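The decoupling this buys you can be sketched with a toy in-memory analogue. To be clear, this is not the real Cloud Pub/Sub API, just the publish/subscribe pattern it implements, with made-up names throughout:

```python
from collections import deque

class Topic:
    """Toy stand-in for a Cloud Pub/Sub topic: publishers push messages,
    each subscription holds its own copy, and workers pull from it."""
    def __init__(self):
        self.subscriptions = {}

    def create_subscription(self, name):
        self.subscriptions[name] = deque()
        return self.subscriptions[name]

    def publish(self, message):
        # Every subscription receives the message; it stays queued until
        # a worker pulls it, so no job is ever dropped.
        for queue in self.subscriptions.values():
            queue.append(message)

topic = Topic()
training_jobs = topic.create_subscription("training-pool")

topic.publish({"job_id": 1, "type": "train", "dataset": "gs://bucket/images"})
topic.publish({"job_id": 2, "type": "train", "dataset": "gs://bucket/more"})

# A Compute Engine worker would loop, pulling one job at a time as
# capacity frees up; jobs queued while all workers are busy simply wait.
job = training_jobs.popleft()
print(job["job_id"])  # → 1
```

The publisher never needs to know how many workers exist or whether they're currently busy, which is exactly the property that kept the 30-minute training jobs from blocking everything else.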




Any number of endpoints can be Cloud Pub/Sub publishers and pull subscribers

App Engine

This is where I hosted and delivered my front-end web application: all of the HTML, CSS, JavaScript and theme assets are stored here and scaled automatically on demand. Even better, App Engine is a managed platform with built-in security and auto-scaling as you code against the App Engine APIs in your preferred language (Java, Python, PHP, etc.). The APIs also provide access to advanced functionality such as Memcache, Cloud SQL and more, without having to worry about how to scale them as load increases.



Compute Engine with AutoScaling

Compute Engine is probably what most web devs are familiar with. It’s a server on which you can install your OS of choice and get full root access to that instance. The instances are fully customizable (you can configure how many vCPUs you desire, as well as RAM and storage) and are charged by the minute, for added cost savings when you scale up and down with demand. Clearly, having root access means you can do pretty much anything you could dream of on these machines, and this is where I chose to run my TensorFlow environment. Compute Engine also benefits from autoscaling, increasing and decreasing the number of available Compute Engine instances with demand or according to a custom metric. For my use case, I had an autoscaler ranging from 2 to 10 instances at any given time, depending on average CPU usage.
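For reference, an autoscaler like the one described can be attached to a managed instance group with a gcloud command along these lines. The group name, zone and target utilization here are placeholders, and flags can differ between gcloud versions:

```shell
gcloud compute instance-groups managed set-autoscaling training-pool \
    --zone us-central1-b \
    --min-num-replicas 2 \
    --max-num-replicas 10 \
    --target-cpu-utilization 0.75
```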



Cloud Storage

Google Cloud Storage is an inexpensive place to store files that are large in both size and number; they're replicated to key edge server locations around the globe, closer to the requesting user. This is where I stored the uploaded files used to train the classifiers in my machine learning system until they were needed.





Network Load Balancer

My JavaScript application made use of a webcam, and therefore had to be served over a secure connection (HTTPS). Google’s Network Load Balancer allows you to route traffic to the different Compute Engine clusters that you have defined. In my case, I had one cluster for classifying images and one for training new classifiers, so depending on what was being requested, I could route that request to the right backend, all securely, via HTTPS.




Putting it all together




After putting all these components together, my system architecture looked roughly like this:



While this worked very well, some parts were redundant. I discovered that the Google Compute Engine Upload Pool code could be rewritten to run on App Engine in Java, pushing directly to Cloud Storage and eliminating the need for an extra pool of Compute Engine instances. Woohoo!



In addition, now that I was using App Engine, the custom SSL load balancer was also redundant as App Engine itself could simply push new jobs to Pub/Sub internally, and deliver any front-end assets over HTTPS out of the box via appspot.com. Thus, the final architecture should look as follows if deploying on Google’s appspot.com:





Reducing the complexity of the architecture makes it easier to maintain and adds to the cost savings.




Conclusion




By using Pub/Sub and Firebase, I estimate I saved well over a week’s development time, allowing me to jump in and solve the problem at hand in a short timeframe. Even better, the prototype scaled with demand, and ensured that all jobs would eventually be served even when at max capacity for budget.



Used together, the Google Cloud Platform stack provides the web developer with a great toolkit for rapidly prototyping full end-to-end systems, while aiding security and scalability for the future. I highly recommend you try it out for yourself.













Stackdriver Debugger is already a popular tool for troubleshooting issues in production applications. Now, based on customer feedback, we're announcing a new feature: logs panel integration.



With logs panel integration, not only can you gather production application state and link to its source, but you can also view the raw logs associated with your Google App Engine projects, all on one page.



We’ve integrated several useful features. For instance, you can:


  • Display log messages flat, in chronological order, for easy access, without having to expand the request log to see the text.

  • Easily navigate to the log statement in source code directly from the log message.

  • Quickly filter by text, log level, request or source file.

  • Show all logs while highlighting your log message of interest with the "Show in context" option.








For easier collaboration, simply copy and paste the URL to your team. The link highlights your log message of interest and includes your logs panel filter. You can also save this URL and reuse it later for easy retrieval with your tracking system.



We’re working hard to make Stackdriver Debugger an easy and intuitive tool for diagnosing application issues directly in production (check out our new feature that allows you to dynamically add log statements without having to write and re-deploy code). Start using the integrated Debugger and logs panel functionality today by navigating to the cloud console Debug page, and be sure to send us your feedback and questions!
















                   





Here at Google, we strive to make it easy for developers to use Google Cloud Platform (GCP). Today, we're excited to announce the beta release of two new build tool plugins for Java developers: one for Apache Maven, and another for Gradle. Together, these plugins allow developers to test applications locally and then deploy them to the cloud from the command line interface (CLI), or through integration with an Integrated Development Environment (IDE) such as Eclipse or IntelliJ (check out our new native plugin for IntelliJ as well).



Developed in the open, the plugins are available for both the standard and flexible Google App Engine environments and are based on the Google Cloud SDK. The new Maven plugin for App Engine standard is offered as an alternative to an existing plugin: users can choose the existing plugin if they wish to use tooling based on the App Engine Java SDK, or the new plugin if they wish to use tooling based on the Google Cloud SDK (all other plugins are fully based on the Google Cloud SDK).



After installing the Google Cloud SDK, you can install the plugins using the pom.xml or build.gradle file:



pom.xml




<plugins>
  <plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>appengine-maven-plugin</artifactId>
    <version>0.1.1-beta</version>
  </plugin>
</plugins>






build.gradle





buildscript {
  dependencies {
    classpath "com.google.cloud.tools:appengine-gradle-plugin:+"  // latest version
  }
}

apply plugin: "com.google.cloud.tools.appengine"







And then, to deploy an application:






$ mvn appengine:deploy


$ gradle appengineDeploy





Once the application is deployed, you'll see its URL in the output of the shell.



For enterprise users who wish to take their compiled artifacts, such as JARs and WARs, through a separate release process, both plugins provide a staging command that copies the final compiled artifacts to a target directory without deploying them to the cloud. Those artifacts can then be passed to a Continuous Integration/Continuous Delivery (CI/CD) pipeline (see here for some of the CI/CD offerings for GCP).










$ mvn appengine:stage


$ gradle appengineStage







You can check the status of your deployed applications in the Google Cloud Platform Console. Head to the Google App Engine tab and click on Instances to see your application’s underlying infrastructure in action.



For additional information on the new plugins, please see the documentation for App Engine Standard (Maven, Gradle) and App Engine Flexible (Maven, Gradle). If you have specific feature requests, please submit them at GitHub, for Maven and Gradle.



You can learn more about using Java on GCP at the Java developer portal, where you’ll find all the information you need to get up and running. And be on the lookout for additional plugins for Google Cloud Platform services in the coming months!



Happy Coding!












Google has a long and storied history running Linux, but Google Cloud Platform’s goal is to support a broad range of languages and tools. This week saw us significantly expand our support for the Microsoft ecosystem, with new support for ASP.NET, SQL Server, PowerShell and the like.



If you have apps developed in .NET, Microsoft’s application development framework, you’ll be happy to learn that you can run them efficiently on GCP, with support for several flavors of Windows Server, an ASP.NET image in Cloud Launcher, pre-loaded SQL Server images on Google Compute Engine, and a variety of Google APIs available for the .NET platform. And thanks to a new integration with Microsoft Visual Studio, the popular integrated development environment, developers in the Microsoft ecosystem can easily access that functionality from the comfort of their IDE.



But it’s not just about Google broadening its horizons. Microsoft, too, is taking its offerings outside of its traditional confines. This week, Microsoft open-sourced PowerShell, the command-line shell and scripting language for .NET, so that developers can use it to automate and administer Linux apps and environments, not just Windows ones.



And Kubernetes, Google’s open-source container management system, is also finding its way over to Microsoft’s Azure public cloud, thanks to its ability to provide a lingua franca for hosting and managing container-based environments. Check out this blog post about provisioning Azure Kubernetes infrastructure to see just how far things have come.









Last week, we introduced new tools and client libraries for .NET developers to integrate with Google Cloud Platform, including Google Cloud Client Libraries for .NET, a set of new client libraries that provide an idiomatic way for .NET developers to interact with GCP services. In this post, we'll explain what it takes to install the new client libraries for .NET in your project.



Currently, the new client libraries support a subset of GCP services, including Google BigQuery, Google Cloud Pub/Sub and Google Cloud Storage (for other services, you still need to rely on the older Google API Client Libraries for .NET). Both sets of libraries can coexist in your project and as more services are supported by the new libraries, dependencies on the older libraries will diminish.




Authentication


As you would expect, the new client libraries are published on NuGet, the popular package manager for .NET, so it's very easy to include them in your project. But before you can use them, you'll need to set up authentication.



The GitHub page for the libraries (google-cloud-dotnet) describes the process for each different scenario in the authentication section. Briefly, to authenticate for local development and testing, install Cloud SDK for Windows, which comes with Google Cloud SDK shell, and use the gcloud command line tool to authenticate.



If you haven’t initialized gcloud yet, run the following command in the Google Cloud SDK shell to set your default project and zone, and set up authentication along the way:



$ gcloud init



If you've already set up gcloud and simply want to authenticate, run this command instead:



$ gcloud auth login




Installation




Now, let's import and use the new libraries. Create a project in Visual Studio (but make sure it's not a .NET Core project, as those aren't supported by the libraries yet), right-click on the project references and select “Manage NuGet packages”:



In the NuGet window, select “Browse” and check “Include prerelease.” The full list of supported services and their NuGet package names can be found on the google-cloud-dotnet page. Let’s install the library for Cloud Storage by searching for Google.Storage:



The resulting list shows the new client library for Cloud Storage (Google.Storage) along with the low-level library (Google.Apis.Storage) that it depends on. Select Google.Storage and install it. When installation is complete, you'll see Google.Storage as a reference, along with its Google.Apis dependencies:
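If you prefer the Package Manager Console to the NuGet window, the equivalent should be roughly the standard NuGet install command below, with the prerelease flag since the library is still in beta (check the google-cloud-dotnet page for the current package name):

```shell
Install-Package Google.Storage -Pre
```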



That’s it! Now, you can use the new client library for Cloud Storage from your .NET application. If you're looking for a sample, check out the Cloud Storage section of the GitHub page for the libraries.



Give it a try and let us know what you think. Any issues? Report them here. Better yet, help us improve our support for .NET applications by contributing.



"Interested in helping us improve the Google Cloud User Experience? Click here!"









The promise of public cloud networking is about securely meeting the demand of customers even if your needs grow more quickly than expected.



To address this challenge, today we’re introducing expandable subnetworks, a new capability that lets you quickly and efficiently expand your subnetwork IP space without disrupting running services. This enables more efficient control of your network as the compute resources and number of users on your network grow.



In addition, you can extend your Google Cloud Platform subnetwork both geographically (diagram 2 below: growing across new regions) and within an existing region (diagram 3 below). You don’t have to make irreversible IP allocation planning decisions up front.



Our existing subnetwork capabilities already allow you to extend your private space across additional regions as needed. Now, with the introduction of expandable subnetworks, you can also extend the IP ranges of pre-configured subnetworks without any impact to existing instances and workloads. That means you can accommodate additional compute capacity within your existing subnet simply by expanding your IP ranges — without the need to reconfigure or recreate your existing workloads.



To illustrate the power of subnetworks, let’s consider three situations.




  • Specify deployment regions while enjoying a global private space



    Consider an initial deployment that requires your application to run only in the US West and US Central regions. Based on your requirements, you can choose to host your applications exclusively in those specific regions.



    Further, you can now customize the IP ranges of networks with regional subnetworks. The IP range configuration model provides maximum flexibility by allowing several subnetworks within the network to be configured with IP ranges that don’t need to be aggregated at the network level. Each subnetwork is configured regionally and covers between two and four availability zones, depending on the region, allowing workload mobility across zones while keeping a persistent IP address.










  • Grow your Virtual Private Cloud with subnetworks in new regions 



    Assume that customer demand now requires you to grow in the US East and Europe West regions. You can easily add new subnetworks in those regions within the same network by configuring a new IP range that's non-contiguous with IP ranges in other regions.










  • Expand the size of your subnetworks in existing regions non-disruptively



    You can now resize your subnetworks without disruption as demand for your application grows. No need to delete existing instances or services configured in that subnetwork. Simply grow in each region as your business grows without additional planning.



    In the example below, the IP ranges in US West and US Central are experiencing additional growth and require additional compute capacity. To accommodate that capacity, the IP range can be expanded from a subnetwork with a prefix mask of /20 to a prefix mask of /16 without having to reconfigure existing workloads. Machines using the same subnet in a region can be configured in any of the availability zones in that region; in this case, two machines in 10.132/16 in us-central1 are configured in two availability zones (A and B). This network flexibility is a byproduct of Google’s SDN.


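An expansion like the one described maps to a single gcloud command: `expand-ip-range` takes the subnetwork, its region and the new (shorter) prefix length. The subnetwork name here is a placeholder:

```shell
gcloud compute networks subnets expand-ip-range my-subnet \
    --region us-central1 \
    --prefix-length 16
```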



Google Cloud Virtual Network gives you complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and expansion of those subnets across regions and within a region.



GCP provides you with the elasticity to expand your network in the regions where your applications grow. These new features are available now and you can start using them today. And if you’re not already running on GCP, be sure to sign up for a free trial.












Building highly scalable, loosely coupled systems has always been tough. With the proliferation of mobile and IoT devices, burgeoning data volumes and increasing customer expectations, it's critical to be able to develop and run systems efficiently and reliably at internet scale.



In these kinds of environments, developers often work with multiple languages, frameworks and technologies, as well as multiple first- and third-party services. This makes it hard to define and enforce service contracts and to have consistency across cross-cutting features such as authentication and authorization, health checking, load balancing, logging, monitoring and tracing, all while maintaining efficiency of teams and underlying resources. It becomes especially challenging in today’s cloud-native world, where new services need to be added very quickly and each service is expected to be agile, elastic, resilient, highly available and composable.



For the past 15 years, Google has solved these problems internally with Stubby, an RPC framework that consists of a core RPC layer able to handle tens of billions of requests per second (yes, billions!). Now, this technology is available to anyone as part of the open-source project called gRPC. It's intended to provide the same scalability, performance and functionality that we enjoy at Google to the community at large.



gRPC can help make connecting, operating and debugging distributed systems as easy as making local function calls; the framework handles all the complexities normally associated with enforcing strict service contracts, data serialization, efficient network communication, authentication and access control, distributed tracing and so on. gRPC, along with protocol buffers, enables loose coupling, engineering velocity, higher reliability and ease of operations. gRPC also allows developers to write service definitions in a language-agnostic spec and generate clients and servers in multiple languages. The generated code is idiomatic, and hence feels native to the language you work in.
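Such a language-agnostic service definition is an ordinary protocol buffers file. The minimal, made-up service below shows the shape; from this one file, the gRPC tooling generates client and server stubs for each supported language:

```protobuf
syntax = "proto3";

// A single .proto file defines the contract; clients and servers in
// every supported language are generated from it.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```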



Today, the gRPC project has reached a significant milestone with its 1.0 release and is now ready for production deployments. As a high performance, open-source RPC framework, gRPC features multiple language bindings (C++, Java, Go, Node, Ruby, Python and C# across Linux, Windows and Mac). It supports iOS and Android via Objective-C and Android Java libraries, enabling mobile apps to connect to backend services more efficiently. Today’s release offers ease-of-use with single-line installation in most languages, API stability, improved and transparent performance with open dashboard, backwards compatibility and production readiness. More details on gRPC 1.0 release are available here.
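The single-line installs mentioned above look like the following in a couple of languages; package names can change over time, so check grpc.io for the current instructions for your language:

```shell
python -m pip install grpcio   # Python
npm install grpc               # Node.js
```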



Community interest in gRPC has seen tremendous pick-up from beta to 1.0, and it's been adopted enthusiastically by companies like Netflix to connect microservices at scale.




With our initial use of gRPC, we've been able to extend it easily to live within our opinionated ecosystem. Further, we've had great success making improvements directly to gRPC through pull requests and interactions with the Google team that manages the project. We expect to see many improvements to developer productivity, and the ability to allow development in non-JVM languages, as a result of adopting gRPC.

- Timothy Bozarth, engineering manager at Netflix



CoreOS, Vendasta and CockroachDB use gRPC to connect internal services and APIs. Cisco, Juniper, Arista and Ciena rely on gRPC to get streaming telemetry from network devices.




At CoreOS, we’re excited by the gRPC v1.0 release and the opportunities it opens up for people consuming and building what we like to call GIFEE (Google’s Infrastructure for Everyone Else). Today, gRPC is in use in a number of our critical open-source projects, such as the etcd consensus database and the rkt container engine.

- Brandon Philips, CTO of CoreOS



And Square, which has been working with Google on gRPC since the very early days, is connecting polyglot microservices within its infrastructure.



As a financial service company, Square requires a robust, high-performance RPC framework with end-to-end encryption. It chose gRPC for its open support of multiple platforms, demonstrated performance, the ability to customize and adapt it to its codebase, and most of all, to collaborate with a wider community of engineers working on a generic RPC framework.



You can see more details of the implementation on Square’s blog. You can also watch this video about gRPC at Square, or read more customer testimonials.



With gRPC 1.0, the next generation of Stubby is now available in the open for everyone and ready for production deployments. Get started with gRPC at grpc.io and provide feedback on the gRPC mailing list.









Enterprise customers are often surprised to learn that Google Cloud Platform is a great environment to run their Windows workloads. Thanks to GCP’s dramatic price-to-performance advantages, customizable virtual machines and state-of-the-art networking and security, customers can migrate key workloads, retire legacy hardware and focus on building and running great applications rather than on maintaining costly infrastructure.



Our goal is to make GCP the best place to run Windows workloads. Starting this week, you can launch Google Compute Engine VM images preinstalled with Microsoft SQL Server, with the full range of licensing options and administrative control. Specifically, we now have beta support for these SQL Server versions:

  • SQL Server Express (2016)

  • SQL Server Standard (2012, 2014, 2016)

  • SQL Server Web (2012, 2014, 2016)

  • and coming soon, SQL Server Enterprise (2012, 2014, 2016)

Why Google Compute Engine for SQL Server



Google Compute Engine on GCP has key advantages for running SQL Server. Custom Machine Types let you tailor CPU core and memory configurations on VMs, allowing enterprises to fine-tune configurations that can reduce the licensing cost of running Microsoft SQL Server compared to other cloud environments. Add in automatic sustained use discounts and the long-term prospect of retiring hardware and its associated maintenance, and customers can arrive at total costs lower than many other cloud alternatives.
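As an illustration of custom machine types, a VM with exactly the cores and memory a SQL Server license calls for can be created with a command along these lines. The instance name and machine shape are placeholders, and the image family name is an assumption; the SQL Server images live in the `windows-sql-cloud` image project:

```shell
gcloud compute instances create sql-server-1 \
    --custom-cpu 4 --custom-memory 15GB \
    --image-project windows-sql-cloud \
    --image-family sql-std-2016-win-2016
```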



Regarding speed, Compute Engine VMs’ fast startup times shorten the time it takes to boot the operating system, and Windows is no exception. On the I/O front, the standard and solid-state persistent disks attached to Microsoft SQL Server VMs deliver a blazing 20,000 IOPS on 16-core machines and up to 25,000 IOPS on 32-core machines, at no additional cost.



Licensing



Compute Engine VMs preinstalled with Microsoft SQL Server allow customers to spin up new databases on-demand without the need to purchase licenses separately. Enterprise customers can pay for premium software the same way they pay for cloud infrastructure: pay as you go, only for what you use. For customers with Software Assurance from Microsoft, your existing Microsoft SQL Server licenses transfer directly to GCP. In addition, support is available to customers from both Microsoft and from Google.



Learn more on our web page.



Getting started



It’s easy to get started with $300 in free trial credit using any of our supported versions of Microsoft SQL Server. Create a boot disk from ready-to-deploy images directly from the Cloud Console. Here's detailed documentation on how to create Microsoft Windows Server and SQL Server instances on GCP.



Enterprise migration

Customers can get help today with a range of partner-led and self-service migration options. For instance, our partner CloudEndure replicates Windows and Linux machines at the block level, so that all of your apps, data and configuration come along with your migration.



Contact the GCP team for a consultation around your Windows and enterprise workloads. Our team is committed to helping support your workloads today, paving the way to build what’s next tomorrow.





Java Integrated Development Environment (IDE) users prefer to stay in the same environment to develop and test their applications. Now, users of JetBrains’ popular IntelliJ IDEA can do this when they deploy to Google App Engine.



Starting today, IntelliJ IDEA users can use the new Google Cloud Tools for IntelliJ plugin to deploy their applications to the App Engine standard and flexible environments, and use Google Stackdriver Debugger and Google Cloud Source Repositories without leaving the IDE.



Stackdriver Debugger captures and inspects the call stack and local variables of a live cloud-based application without stopping the app or slowing it down, while Google Cloud Source Repositories are fully-featured, private Git repositories hosted on GCP. The plugin is available on IntelliJ versions 15.0.6 and above and can be installed through IntelliJ IDEA’s built-in plugin manager. It can also be downloaded as a binary from the JetBrains plugin repository, as described in the installation documentation. The entire plugin source code is available on GitHub, and we welcome contributions and issue reporting from the wider community.



To install the plugin, start IntelliJ IDEA, head to File > Settings (on Mac OS X, open IntelliJ IDEA > Preferences), select Plugins, click Browse repositories, search and select Google Cloud Tools and click Install (you may also be asked to install an additional Google plugin for authorization purposes).



Once installed, make sure you have a billing-enabled project on GCP under your Google account (new users can sign up for free credits here). Open any of your Java web apps that listen on port 8080 and choose Tools > Deploy to App Engine, where you’ll see a deployment dialog. Below is an example based on Maven (full quickstart instructions can be found here):





Once you click Run, the Google Cloud Tools for IntelliJ plugin deploys your application to the App Engine flexible environment (if this is the first deployment, this can take a few minutes). The deployment output in the IntelliJ shell will show the URL of the application to point to in your browser.



You can also deploy a JAR or WAR file using the same process, instead choosing Filesystem JAR or WAR file from the Deployment dropdown, as shown below.



You can check the status of your application in the Google Cloud Platform Console by heading to the App Engine tab and clicking on Instances to see the underlying infrastructure of your application in action.



We'll continue adding support for more GCP services to the plugin, so stay tuned for update notifications in the IDE. If you have specific feature requests, please submit them on the GitHub repository.



To learn more about Java on GCP, visit the GCP Java developers portal, where you can find all the information you need to get started and running your Java applications on GCP.



Happy Coding!





Google Cloud Platform is known for many things: big data, machine learning and the global infrastructure that powers Google. What you might not know is how well we support applications built on ASP.NET, the open-source web application framework developed by Microsoft. Let’s change that right now.






Windows Server on Google Compute Engine


To run ASP.NET 4.x, you need a Windows Server running IIS and ASP.NET. To do that, we support creating new Google Compute Engine VMs from both Windows Server Data Center 2008R2 and 2012R2 base images.







Once you have your Windows Server image of choice, which should only take minutes to create and boot, you can establish user credentials, open up the appropriate ports with firewall rules, use RDP to connect to the machine and install whatever software you’d like.



If that software includes the Microsoft IIS web server and ASP.NET, along with the appropriate firewall rules, you should definitely consider using the ASP.NET image in the Cloud Launcher.







Not only does it create a Windows Server instance for you, but it also installs SQL Server 2008 Express, IIS and ASP.NET 4.5.2, and opens the standard firewall ports to enable HTTP, HTTPS, WebDeploy and RDP.




SQL Server images on Compute Engine


The SQL Server Express that comes out of the box with the ASP.NET image in Cloud Launcher is useful for development, but when it comes to production workloads, you’re going to want production versions of SQL Server. For that, we’re happy to announce the following versions of SQL Server on Google Compute Engine:


  • SQL Server Standard (2012, 2014, 2016)

  • SQL Server Web (2012, 2014, 2016)

  • SQL Server Enterprise coming soon (2012, 2014, 2016)


As of this week, these editions of SQL Server are available on Google Compute Engine as base images alongside Windows Server. This is the first time we’ve offered production editions of SQL Server, so we’re excited to hear your feedback! Stay tuned next week for an in-depth post about SQL Server on Google Cloud Platform.






Google service libraries in NuGet


With Windows Server, ASP.NET and SQL Server, you’ve got everything you need to bring your ASP.NET 4.x sites and services to Google Cloud Platform, and we think you’re going to be happy that you did.



Further, we’ve heard from our customers how much they love the services provided across more than 100 Google APIs, all of which are available for a variety of languages and platforms, including .NET, in NuGet. We’ve also been working hard to ensure that our cloud-specific APIs are easy for .NET developers to understand. To that end, we’re pleased to announce that the vast majority of our Cloud API client library reference documentation has per-language examples, including for .NET.



To further improve usability of these libraries, we’ve created wrapper libraries for each of the Cloud APIs that are specific to each language. These libraries are in beta today, and include wrappers for Google BigQuery, Google Cloud Storage, Google Cloud Pub/Sub and Google Cloud Datastore, with more on the way. Google Stackdriver Logging now also supports the log4net library, providing simplified logging for your apps, with all the goodness of Stackdriver’s multi-machine, multi-app filtering and querying. These libraries are available in NuGet, as well as on GitHub, where you can log a bug, make a feature request or contribute back to the code!



These .NET library efforts are being led by none other than Jon Skeet, widely known for his C# books and for helping .NET developers on Stack Overflow. We’re very happy to have him helping us make sure that Google’s Cloud APIs are as good as they can be for .NET developers.




Cloud Tools for Visual Studio


One of the major reasons that we’ve made all of our libraries available via NuGet is so that you can bring them into your projects easily from inside Visual Studio. However, we know that you want to do more with your cloud projects than just write code: you also want to manage resources like VMs and storage buckets, and you want to deploy. That’s where Google Cloud Tools for Visual Studio comes in, available as of today in the Visual Studio Gallery.



It’s possible to deploy an ASP.NET 4.x app to Google Compute Engine via Visual Studio’s built-in Publish dialog, and with the Cloud Tools extension, we’ve also made it easy to administer the credentials associated with your VMs and to generate their publish settings files from within Visual Studio.







This functionality is available inside the Google Cloud Explorer, which allows you to browse and manage your Compute Engine, Cloud Storage and Google Cloud SQL resources.



This is just the beginning. We’ve got lots of plans for integrating Cloud Platform deeper into Visual Studio. If you’ve got suggestions, bug reports or if you’d like to help, Cloud Tools for Visual Studio is hosted on GitHub. We’d love to hear from you!




Cloud Tools for PowerShell


Visual Studio is a great way to interactively manage your cloud project resources, but it’s not great for automation. That’s why we’re announcing Google’s first PowerShell extensions, Cloud Tools for PowerShell. With our Google Cloud PowerShell cmdlets, you can manage your Compute Engine and Cloud Storage resources.





We started with cmdlets for the two most popular Cloud Platform products, Compute Engine and Cloud Storage, but we're quickly expanding support to cover other products as well. If you’ve got suggestions about what we should do next, bug reports for what we’ve already got or if you’d like to help, the Google Cloud PowerShell cmdlets are being developed on GitHub.




Migrating existing VMs


Compute Engine’s support for Windows Server and SQL Server, along with our integration with Visual Studio and PowerShell, help you bring your .NET apps and SQL Server data to the Google Cloud Platform. But what if you need more? What if you’d rather not set up new machines, configure them and migrate your apps and data? Sometimes, you just want to bring an entire machine over as it is in your data center and run it on the cloud as if nothing had changed.



A new partnership with CloudEndure does just that.





CloudEndure replicates Windows and Linux machines at the block level, so that all of your apps, data and configuration come along with your migration. To learn more about migration options for Windows workloads, or for help planning and executing a migration, check out these Google Cloud Platform migration resources.




Coming soon: support for ASP.NET Core


Many developers are exploring ASP.NET Core for their next-generation workloads. Because ASP.NET Core is fully supported on Linux, you can wrap it in a Docker container and deploy it via App Engine Flexible or Kubernetes running on Google Container Engine. ASP.NET Core is not yet fully supported on either of these platforms, but to give you a taste of where we’re headed, we’ve enabled all of the Google API Client Libraries to work on .NET Core (with the exception of our hand-crafted libraries; we’re still working on those). For example, here’s some ASP.NET Core code that pulls a random JPEG image from a Google Cloud Storage bucket:



public IActionResult Index() {
    var service = new StorageService(new BaseClientService.Initializer() {
        HttpClientInitializer =
            GoogleCredential.GetApplicationDefaultAsync().Result
    });

    // find all of the public JPGs in the project buckets
    var request = service.Objects.List("YOUR-GCS-BUCKET");
    request.Projection = ObjectsResource.ListRequest.ProjectionEnum.Full;
    var items = request.Execute().Items;
    var jpgs = items.Where(o => o.Name.EndsWith(".jpg") &&
                                o.Acl.Any(o2 => o2.Entity == "allUsers"));

    // pick a random jpg to show
    ViewData["jpg"] =
        jpgs.ElementAt((new Random()).Next(0, jpgs.Count())).MediaLink;
    return View();
}





We’re working to enable first-class support for container-based deployment as well as Linux-based ASP.NET Core. Until then, check out this sample code for running simple .NET apps on Cloud Platform.




We’re just getting started


First and foremost, we’re serious about supporting Windows and .NET workloads on Google Cloud Platform. Second, we’re just getting started. We have big plans across all areas of Windows/.NET support, and we’d love your feedback, whether it’s to report a bug, make a suggestion or contribute some code!



We’ll leave you with one more resource: .NET on Google Cloud Platform lists everything a developer needs to know to be successful with .NET on Cloud Platform. If there’s something you need that you can’t find, drop a note to the Google Cloud Developers group!




Cloud Datastore is a highly available, durable, fully managed NoSQL database service for serving data to your applications. This schema-less document database is geo-replicated and ideal for fast, flexible development of mobile and web applications. It automatically scales as your data and traffic grow—so you’ll never again worry about provisioning enough resources to handle your peak load. It already handles over 15 trillion queries per month.






The Cloud Datastore v1 API is now generally available for all customers, and the Cloud Datastore Service Level Agreement (SLA) now covers access from both App Engine and the v1 API, providing high confidence in the scalability and availability of the service for your toughest web and mobile workloads. Already, customers like Snapchat, Workiva, and Khan Academy have built amazing mobile and web applications with Cloud Datastore. Khan Academy, for instance, uses Datastore for user data — from user progress tracking to content management.



“It’s our primary database,” said Ben Kraft, Infrastructure Engineer at Khan Academy. “We depend on it being fast and reliable for everything we do.”



Now that the v1 API is generally available, we have deprecated the v1beta3 API with a twelve-month grace period before we decommission it fully on August 17th, 2017. Changes between v1beta3 and v1 are minor, so transitioning to the new version is quick and straightforward.



Cross-platform access




The v1 API for Cloud Datastore allows you to access your database from Google Compute Engine, Google Container Engine, or any other server via our RESTful or gRPC endpoints. You can access your existing App Engine data now from different compute environments, enabling you to select the best mix for your needs.
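
To give a feel for the RESTful surface, here's a minimal Python sketch of building a v1 runQuery request; the project ID and the "Task" kind are illustrative placeholders, not values from this post:

```python
import json

# Hypothetical project ID; substitute your own billing-enabled project.
project_id = "my-project"
endpoint = "https://datastore.googleapis.com/v1/projects/%s:runQuery" % project_id

# Query every entity of an illustrative kind named "Task".
body = {"query": {"kind": [{"name": "Task"}]}}
payload = json.dumps(body)

print("POST", endpoint)
print(payload)
```

POSTing that payload with an OAuth 2.0 bearer token returns the matched entities as JSON; the idiomatic client libraries wrap exactly this kind of call for you.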



You can use the v1 API via the idiomatic Google Cloud Client Libraries (in Node.js, Python, Java, Go, and Ruby), or alternatively via the low-level native client libraries for JSON and Protocol Buffers over gRPC. You can learn more about the various client libraries in our documentation.



Along with this cross-platform access, you can use Google Cloud Dataflow to execute a wide range of data processing patterns against Cloud Datastore, including batch and streaming computation. Take a look in the GitHub repository for examples of using the Dataflow SDK with Cloud Datastore.




New resources


We've also been busy making new resources available to enable you to make more effective use of Cloud Datastore.




  • Best Practices: The down-low on best practices, from transactions to strongly consistent queries.

  • Storage Size Calculations: A new, transparent method of calculating the size of your database, announced as part of our simplified pricing.

  • Limits: Information about production limits for Datastore, for example the maximum size of a transaction.

  • Multitenancy: Guidance on how you can use namespaces for multitenancy in your application.
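
To make the namespace idea concrete, here's a hedged Python sketch of how a v1 API query body can be scoped to a single tenant via its partitionId; the project, tenant IDs and "Invoice" kind are illustrative:

```python
def tenant_query(project_id, namespace_id, kind):
    """Build a v1 runQuery body that reads only one tenant's namespace."""
    return {
        "partitionId": {
            "projectId": project_id,
            "namespaceId": namespace_id,
        },
        "query": {"kind": [{"name": kind}]},
    }

# The same kind lives in each tenant's namespace; partitions keep the data apart.
body_a = tenant_query("my-project", "tenant-a", "Invoice")
body_b = tenant_query("my-project", "tenant-b", "Invoice")
```

Queries never cross namespace boundaries, so each tenant sees only its own entities while the application code stays identical.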






Cloud Console


Lastly, we've made numerous improvements to our Cloud Console interface. If you haven't used it before, get to know it by reading a new article on editing entities in the console. Some highlights:




  • App Engine Python users will be delighted to know that URL-Safe Keys are supported in the Key Filter field on the Entities page.

  • The entity editor supports properties with complex types such as Array and Embedded entity.




To learn more about Cloud Datastore, check out our getting started guide.





In the early 2000s, Google developed Bigtable, a petabyte-scale NoSQL database, to handle use cases ranging from low-latency real-time data serving to high-throughput web indexing and analytics. Since then, Bigtable has had a significant impact on the NoSQL storage ecosystem, inspiring the design and development of Apache HBase, Apache Cassandra, Apache Accumulo and several other databases.







Google Cloud Bigtable, a fully-managed database service built on Google's internal Bigtable service, is now generally available. Enterprises of all sizes can build scalable production applications on top of the same managed NoSQL database service that powers Google Search, Google Analytics, Google Maps, Gmail and other Google products, several of which serve over a billion users. Cloud Bigtable is now available in four Google Cloud Platform regions: us-central1, us-east1, europe-west1 and asia-east1, with more to come.



Cloud Bigtable is available via a high-performance gRPC API, supported by native clients in Java, Go and Python. An open-source, HBase-compatible Java client is also available, allowing for easy portability of workloads between HBase and Cloud Bigtable.



Companies such as Spotify, FIS, Energyworx and others are using Cloud Bigtable to address a wide array of use cases, for example:




  • Spotify has migrated its production monitoring system, Heroic, from storing time series in Apache Cassandra to Cloud Bigtable and is writing over 360K data points per second.

  • FIS is working on a bid for the SEC Consolidated Audit Trail (CAT) project, and was able to achieve 34 million reads/sec and 23 million writes/sec on Cloud Bigtable as part of its market data processing pipeline.

  • Energyworx is building an IoT solution for the energy industry on Google Cloud Platform, using Cloud Bigtable to store smart meter data. This allows it to scale without building a large DevOps team to manage its storage backend.
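
A schema pattern common to time-series workloads like these is packing the metric name and a timestamp into the Bigtable row key, so related points cluster under one key prefix. Here's a hedged Python sketch of that idea; the key layout and metric names are illustrative, not any customer's actual schema:

```python
MAX_TS = 2**63 - 1  # sentinel used to reverse timestamps

def row_key(metric, timestamp_ms, reverse=True):
    """Build a row key of the form metric#timestamp.

    Bigtable stores rows sorted by key, so a prefix scan on the metric
    name reads one series sequentially. Reversing the timestamp
    (MAX_TS - ts) makes the newest points sort first.
    """
    ts = MAX_TS - timestamp_ms if reverse else timestamp_ms
    return "%s#%020d" % (metric, ts)

newer = row_key("cpu.load", 1470000001000)
older = row_key("cpu.load", 1470000000000)
# With reversed timestamps, the newer point sorts before the older one.
```

Keeping writes spread across many such prefixes also avoids hotspotting a single tablet, which is part of how high write rates like Spotify's are sustained.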




Cloud Platform partners and customers enjoy the scalability, low latency and high throughput of Cloud Bigtable, without worrying about the overhead of server management, upgrades, or manual resharding. Cloud Bigtable is well-integrated with Cloud Platform services such as Google Cloud Dataflow and Google Cloud Dataproc as well as open-source projects such as Apache Hadoop, Apache Spark and OpenTSDB. Cloud Bigtable can also be used together with other services such as Google Cloud Pub/Sub and Google BigQuery as part of a real-time streaming IoT solution.



To get acquainted with Cloud Bigtable, take a look at the documentation and try the quickstart. We look forward to seeing you build what's next!