# Fire up PowerShell.
powershell

# Import the Cloud Tools for PowerShell module on OS X.
PS > Import-Module ~/Downloads/osx.10.11-x64/Google.PowerShell.dll

# List all of the images in a GCS bucket.
Get-GcsObject -Bucket "quoct-photos" | Select Name, Size | Format-Table
$ gcloud deployment-manager types list
$ gcloud deployment-manager deployments create net --config net-config.yaml
$ gcloud deployment-manager deployments delete net
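The net-config.yaml passed to the create command above is a Deployment Manager configuration file that declares the resources to create. A minimal sketch, assuming the goal is a single auto-subnet network (the resource name "my-network" and its properties are illustrative, not taken from the original post):

resources:
- name: my-network
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: true

The create command instantiates every resource declared in the file as one deployment, and the delete command tears that whole deployment down again.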
"These new instances of GPUs in the Google Cloud offer extraordinary performance advantages over comparable CPU-based systems and underscore the inflection point we are seeing in computing today. Using standard analytical queries on the 1.2 billion row NYC taxi dataset, we found that a single Google n1-highmem-32 instance with 8 attached K80 dies is on average 85 times faster than Impala running on a cluster of 6 nodes each with 32 vCPUs. Further, the innovative SSD storage configuration via NVME further reduced cold load times by a factor of five. This performance offers tremendous flexibility for enterprises interested in millisecond speed at over billions of rows."- Todd Mostak, MapD Founder and CEO
"At The Foundry, we're really excited about VFX in the cloud, and with the arrival of GPUs on Google Cloud Platform, we'll have access to the cutting edge of visualisation technology, available on-demand and charged by the minute. The potential ramifications for our industry are enormous.."- Simon Pickles, Lead Engineer, Pipeline-in-the-Cloud
“Every year we have to plan to provision computing resources for our High-Energy Physics experiments based on their overall computing needs for performing their science. Unfortunately, the computing utilization patterns of these experiments typically exhibit peaks and valleys during the year, which makes cost-effective provisioning difficult. To achieve this cost effectiveness we need our computing facility to be able to add and remove resources to track the demand of the experiments as a function of time. Our collaboration with commercial clouds is an important component of our strategy for achieving this elasticity of resources, as we aim to demonstrate with Google Cloud for the CMS experiment via the HEPCloud facility at SC16.”
- Panagiotis Spentzouris, Head of the Scientific Computing Division at Fermilab
$ mkdir HelloWorldAspNetCore
$ cd HelloWorldAspNetCore
$ dotnet new -t web
Created new C# project in /home/atameldev/HelloWorldAspNetCore.
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        // Listen on port 8080 on all network interfaces.
        .UseUrls("http://*:8080")
        .Build();

    host.Run();
}
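The UseStartup<Startup>() call points at the Startup class that the web template generates alongside Program.cs. A rough sketch of what that class looks like in an ASP.NET Core 1.0 project (the exact services and middleware wired up here are assumptions; the real template includes more):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // Register the services the app needs; the default web template uses MVC.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    // Build the HTTP request pipeline: serve static files, then route to MVC.
    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}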
$ dotnet restore
…
log : Restore completed in 16298ms.
$ dotnet run
…
Now listening on: http://*:8080
Application started. Press Ctrl+C to shut down.
“It ain’t what we don’t know that hurts us so much as the things we know that just ain’t so.”
Use promo code NEXT1720 to save $300 on general admission