Google Cloud Platform Blog
2014 Year in Review: Making open source #1 through Kubernetes and Google Container Engine
Wednesday, December 31, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until early January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
This year, we launched a pair of projects in the Linux application container orchestration space: Kubernetes, an OSS cluster manager, and Google Container Engine, its premium hosted analog. The velocity and agility of the sister projects have been quite inspiring. Some of the most memorable ‘achievement unlocked’ moments for the two projects in 2014 are:
Becoming the most popular orchestration framework on GitHub with ~5200 stars.
Watching the community around Kubernetes grow, including having Microsoft and IBM sign on to support Kubernetes within a month of launch.
Seeing multiple PaaS providers, including OpenShift, Deis and Cloud Foundry, embrace Kubernetes as a standard orchestration framework.
Watching Google Container Engine help you do more and more, accounting for ~2% of Google Compute Engine instance use within a month of launch.
Seeing an Apple job posting with Kubernetes listed as a desired skill!
Overall it has been an incredible year for Google Cloud Platform. We look forward to 2015, where we will continue to innovate but also create a rock solid new business for Google in the containers space.
- Posted by Craig McLuckie, Product Manager
2014 Year in Review: Awards Season
Tuesday, December 30, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
This year, Google Cloud Platform was named the Best Cloud Computing Provider in the Lifehacker Reader’s Choice Awards. Every year, Lifehacker Awards finalists are nominated by editors and selected by readers over a week-long voting process.
In addition, Google Compute Engine was named the IaaS Solution of the Year by the Storage, Virtualization, Cloud (SVC) Awards for its extensive array of features and unmatched customer results. The SVC Awards recognize exceptional products and services operating in the cloud, virtualization and storage sectors. Finalists are selected by the SVC Awards editorial board, and winners are determined by a public vote.
It’s been a great 2014, from setting the standard for container-based cloud computing with Kubernetes and Google Container Engine to being the first to combine live migration technology with data center innovation to our focus on security and providing encryption at rest. Looking forward to a great 2015 and continuing to help companies grow and innovate!
- Posted by Danielle Aronstam, Communications Manager
2014 Year in Review: Redefining Compute in the Cloud
Monday, December 29, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
I believe the cloud should be about the provider doing the hard things so that the user gets a simpler, easier to use model - whether that has to do with technical decisions or business ones. We made many strides in that direction this year, but the two that stand out most for me are:
We took the first steps towards ending the PaaS/IaaS dichotomy by giving developers the ease of use and productivity of App Engine and the flexibility of VMs with the beta launch of App Engine Managed VMs. Customers shouldn’t have to make a binding decision about what level of the stack they want to build their applications at. The launch of Managed VMs and Google Container Engine makes it possible to run the same code as a fully managed application, as a unit of code in a managed logical cluster, and in a self-managed virtual machine.
The launch of sustained use discounts makes it very easy for our customers to automatically benefit from up to 30% lower prices for their sustained workloads without any long-term commitments.
- Posted by Navneet Joneja, Product Manager
2014 Year in Review: Meeting face-to-face in Europe
Sunday, December 28, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
Communicating online may be efficient, but it’s more rewarding to meet in person. In 2014, the Europe, Middle East and Africa Google Cloud Platform team attended more than 130 events to support and learn from the developer community. A personal highlight was participating in the Web Summit in Dublin, where we got to meet and help many of the thousands of startups exhibiting or attending. In support of these entrepreneurs, we offered over 100 personal mentorship sessions, as well as workshops, Cloud Platform credits, and the best coffee at the Web Summit. Until we meet again!
- Posted by Ori Weinroth, Product Marketing Manager, EMEA
2014 Year in Review: A growing partner ecosystem
Saturday, December 27, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
Our partners took on an ever-expanding role in 2014. While you were hard at work building the next thing, our partners were in many cases by your side: helping architect your app, guiding migration strategy, or making it possible to get up and running with a tool in minutes via a click-to-deploy.
For the FIFA World Cup™ this past summer, Coca-Cola called on CI&T, a Google Cloud Platform services partner, to build something that would visually illustrate soccer’s global reach. The end result was the Happiness Flag, built using App Engine. It’s the world’s largest mosaic flag, crafted from thousands of crowdsourced images submitted by people in more than 200 countries.
Thousands of miles away back in San Francisco, Cloud Platform partner Q42 created a special Sandbox experience for Google I/O attendees to experience first hand the Philips Hue light bulb system it helped build using Google Cloud Platform. Philips Hue light bulbs, together with a bridge and an app, can wake you up, help protect your home, or even keep you informed about the weather.
Emind, headquartered in Israel, is re-thinking television viewing through its work with Screenz, a production company. With help from Emind, Screenz launched its Real Time Platform, used by the show Rising Star to enable real-time voting by viewers via a mobile app that is fully integrated into the television program.
Bringing more of your favorite developer resources to Google Cloud Platform was also an important part of 2014. MongoDB, Cassandra, RabbitMQ, and many others launched click-to-deploy functionality on Cloud Platform. Bitnami released its Launchpad for Cloud Platform featuring almost 100 cloud images. And Red Hat, SUSE, and Canonical/Ubuntu released images optimized for Compute Engine.
These were just a few of our favorite partner moments in 2014, and I know 2015 will hold many more as we expand our partner business worldwide. Thank you to all of our partners that make building on Google Cloud Platform the experience that it is today.
--Posted by Chris Rimer, Google Cloud Platform Global Partner Business Lead
2014 Year in Review: Launch of Zones in Asia Pacific
Friday, December 26, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
The Asia Pacific region saw a ton of growth in 2014, and my favorite event this year was the launch of the data center in Taiwan in May. The expansion of this region has also led to many conversations with analysts and customers in both Taipei and Tokyo, as there’s nothing better than being able to talk directly to local Google teams. Check out some sizzle videos of the APAC expansion events here and here.
- Posted by Howard Wu, Product Marketing Manager-Japan-Asia Pacific
2014 Year in Review: To our customers, “Happy coding to all, and to all a good night!”
Thursday, December 25, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
As the lucky person who makes our Cloud Platform customer stories heard, I witness innovation of all shapes and sizes from companies solving a range of problems. It amazes me to see just what can be built on our platform, often in a short amount of time and with limited resources.
In the spirit of this innovation, I wanted to add a bit of creative holiday flair to my Year in Review post. Please read the following with the well-known poem “A Visit from St. Nicholas” in mind, and learn about some of my favorite customer stories from the year.
‘Twas the year of 2014, and all through our cloud
Were innovative customers making us proud.
They build on our platform day after day,
Impacting their industries in tangible ways.
There’s Framestore, making movies like Gravity shine,
And DataStax, keeping customers running at all times.
Sony Music built an app for a YouTube livestream,
Allthecooks scaled up fast, even with a small team.
U.S. Cellular analyzed data for key sales insights,
And Aucor transitioned more than 70 websites.
Coca-Cola unveiled a big Happiness Flag,
While Channel 2 delivered news without any lag.
“We use multi-cloud”, said Eugene from Wix,
Slow radiology scans is what MNES did fix.
With Akselos, bridges and buildings won’t fall,
And switch.co transformed the conference call.
Workiva makes financial reporting a breeze,
And feedly lets you collect news stories with ease.
DotCloud built a reliable PaaS,
While Fastly and Brightcove deliver live video, fast.
More rapid than eagles, new products they came
Developers whistled, and shouted, and called them by name:
“App Engine! Cloud Storage! BigQuery! Cloud SQL!
Compute Engine, and Dataflow-- we love them all equal!”
And we exclaimed to our customers, who bring us such delight,
“Happy coding to all, and to all a good night!”
--Posted by Kelly Rice, Product Marketing Manager
2014 Year in Review: Ending Server Side Bottlenecks with Google Cloud Trace
Wednesday, December 24, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
Faster than the blink of an eye. That’s how fast you can trace requests with Google Cloud Trace, which helps you isolate performance issues in your application by giving you a detailed trace report of exactly where every millisecond is going in your request.
We were very happy to be able to find troublesome requests by generating reports that help track down changes in performance from release to release, with a graphical view of latency per request and example traces. In this example, version 42 has a long-tail latency issue, as seen by the spike around the 2000ms mark in the graph and in the 95th percentile of requests.
--Posted by Pratul Dublish, Technical Program Manager, and Qi Ke, Software Engineer
2014 Year in Review: Working to bring you the best in visual effects
Tuesday, December 23, 2014
Today’s post is the latest installment in our 2014 Google Cloud Platform Year in Review. Every day until January, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
My highlight was when our company Zync joined Google in late August. We've been busy at work bringing our technology to Google Cloud Platform and will have an exciting, scalable rendering solution to offer visual effects and animation studios in early 2015. I spent a great deal of time meeting with customers and spoke at our VFX rendering event in London in September where we heard about some of the great work Oscar winning studio Framestore was doing using our platform.
- Posted by Todd Prives, Product Manager
2014 Year in Review: Reaction to our second Google Cloud Platform Live
Monday, December 22, 2014
Welcome to the Google Cloud Platform 2014 Year in Review blog series. Each day for the next two weeks, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
We worked hard on finishing our features and demos around containers, mobile, developer tools and big data technology leading up to Google Cloud Platform Live. The product and marketing teams, with the help of engineering, put on an inspiring display of where Cloud Platform is going in 2015. Having joined Cloud Platform only this summer, after 7 years of working on Google’s Display Ads products, I found it the highlight of my year to see the reaction to our story: the nodding and excitement from customers and analysts alike. It motivates us all to work even harder to deliver on the next wave of innovation in the cloud.
- Posted by Joerg Heilig, Vice President, Engineering
2014 Year in Review: Evolved troubleshooting with Cloud Logging
Sunday, December 21, 2014
Welcome to the Google Cloud Platform 2014 Year in Review blog series. Each day for the next two weeks, we will be featuring a different Googler sharing their highlight from the past year in Cloud Platform.
Most developers spend more than 50% of their time tracking down and fixing issues in production. This year, Google Cloud Platform took a big step to get you some of that time back with Google Cloud Logging, which allows you to aggregate logs from your compute instances and other services in a unified Logs Viewer interface with search capabilities. We were stoked that we could just click on the logs (e.g. showing an error or high latency) and get to the exact version, file and line of code causing the problem.
We're looking forward to 2015 where we will save developers even more time!
--Posted by Deepak Tiwari, Product Manager, and Cody Bratt, Product Manager
Geofencing 340 million NYC taxi locations with Google Cloud Dataflow
Friday, December 19, 2014
Posted by Thorsten Schaeff, Sales Engineer Intern
Fun fact: around 170 million taxi journeys occur across New York City every year, and each one generates a wealth of information as someone steps in and out of one of those bright yellow cabs. How much information exactly? Being a not-so-secret maps enthusiast, I made it my challenge to visualize a NYC taxi dataset on Google Maps.
Anyone who’s tried to put a large number of data points on a map knows about the difficulties one faces when working with big geolocation data. That's why I want to share with you how I used Cloud Dataflow to spatially aggregate every single pick-up and drop-off location with the objective of painting the whole picture on a map. For background info, Google Cloud Dataflow is now in alpha and can help you gain insight into large geolocation datasets. You can try experimenting with it by applying for the alpha program or learn more with yesterday's update.
When I first sat down to think through this data visualization, I knew I needed to create a thematic map, so I built a simple pipeline that was able to geofence all 340 million pick-up and drop-off locations against 342 different polygons that resulted from converting the NYC neighbourhood tabulation areas into single-part polygons. You can find the processed data in this public BigQuery table. (In order to access BigQuery you need to have at least one project listed in your Google Developers Console. After creating a project you can access the table by following this link.)
Thematic map showing the distribution of taxi pick-up locations in NYC in 2013. Midtown South is New Yorkers’ favourite area to get a cab, with almost 28 million trips starting there, which is roughly 1 trip per second. You can find an interactive map here.
This open data, released by the NYC Taxi & Limo Commission, has been the foundation for some beautiful visualizations. By utilizing the power of Google Cloud Platform's tools, I’ve been able to spatially aggregate the data using Cloud Dataflow, and then do ad hoc querying on the results using BigQuery, to gain fast and comprehensive insight into this immense dataset.
With the Google Cloud Dataflow SDK, which parallelizes the data transformations across multiple Cloud Platform instances, I was able to build, test and run the whole processing pipeline in a couple of days. The actual processing, distributed across five workers, took slightly less than two hours.
The pipeline’s architecture is extremely simple. Since Cloud Dataflow offers a BigQuery reader and writer, most of the heavy lifting is already taken care of. The only thing I had to provide was the geofencing function, which could then be parallelised across multiple instances. For a detailed description of how to do complex geofencing using open source libraries, see this post on the Google Developers Blog.
When executing the pipeline, Cloud Dataflow automatically optimizes your data-centric pipeline code by collapsing multiple logical passes into a single execution pass and deploys the result to multiple Google Compute Engine instances. At the time of deploying the pipeline you can read in files from Google Cloud Storage that contain data you need for your transformations, e.g., shapefiles or GeoJSON formats. Alternatively you can call an external API to load in the geofences you want to test against.
I utilized an API I built on App Engine which exposes a list of geofences stored in Datastore. Using the Java Topology Suite, I created a spatial index, maintained in a class variable in the memory of each instance, for fast querying access.
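As a rough illustration of that spatial index, here is a minimal sketch using the Java Topology Suite's STRtree; the way the polygons are loaded and named is a hypothetical simplification.

import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Point;
import com.vividsolutions.jts.index.strtree.STRtree;
import java.util.List;
import java.util.Map;

/** In-memory spatial index over neighbourhood polygons for point-in-polygon lookups. */
public class NeighbourhoodIndex {
  private final GeometryFactory factory = new GeometryFactory();
  private final STRtree tree = new STRtree();

  /** polygons: neighbourhood name -> JTS geometry (e.g. parsed from GeoJSON). */
  public NeighbourhoodIndex(Map<String, Geometry> polygons) {
    for (Map.Entry<String, Geometry> e : polygons.entrySet()) {
      // Index each polygon by its bounding box; keep name and geometry together.
      tree.insert(e.getValue().getEnvelopeInternal(), e);
    }
    tree.build();
  }

  /** Returns the name of the polygon containing (lat, lng), or null if none does. */
  @SuppressWarnings("unchecked")
  public String lookup(double lat, double lng) {
    Point p = factory.createPoint(new Coordinate(lng, lat));  // JTS uses (x=lng, y=lat)
    List<Map.Entry<String, Geometry>> candidates = tree.query(p.getEnvelopeInternal());
    for (Map.Entry<String, Geometry> candidate : candidates) {
      if (candidate.getValue().contains(p)) {
        return candidate.getKey();
      }
    }
    return null;
  }
}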
Distributed across five workers, Cloud Dataflow was able to process an average of 25,000 records per second, each record having two locations, ploughing through more than 170 million table rows in just under two hours. The number of workers can be flexibly assigned at the time of deployment: the more workers you use, the more records can be processed in parallel and the faster your pipeline executes.
The interactive Cloud Dataflow graph of your pipeline, which helps you monitor and debug your pipeline in the Google Developers Console in the browser.
Having the data preprocessed and written back into BigQuery, we were then able to run super fast queries over the whole table answering questions like, “where do the best-paid trips start from?”.
Unsurprisingly, they start from JFK airport, with an average fare of $46 and an average tip of 20.7%*. Okay, this is probably not a secret, but did you know that, even though the average fare from LGA airport is $15 less, there are roughly 800,000 more trips starting from LGA? And with 22.2%*, passengers from LGA airport actually tip best.
Most of the taxi trips start in Midtown-South (28 million) with an average fare of $11. Carnegie Hill in the Upper East Side comes fourth with 12 million pick-ups, however these trips are fairly short. Journeys that start there mostly stay in the Upper East Side and therefore only generate an average fare of $9.80.
Here's an interactive map visualizing where people went, what they paid on average and how they tipped, along with some other visualizations of how people tip depending on where their trip started: (click to visit interactive map)
The processed data is publicly available in this BigQuery table. You can find some interesting queries to run against this data in this gist.
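To give a flavour of the kind of ad hoc analysis this enables, here is a minimal sketch that runs one such query from Java using the BigQuery client library; the table and column names are hypothetical stand-ins for the public table linked above.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class TopPickupAreas {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Hypothetical table and column names; substitute the public table linked above.
    String sql =
        "SELECT pickup_neighbourhood, COUNT(*) AS trips, AVG(fare_amount) AS avg_fare "
            + "FROM [my-project:taxi.trips_geofenced] "
            + "GROUP BY pickup_neighbourhood "
            + "ORDER BY trips DESC "
            + "LIMIT 10";

    QueryJobConfiguration query =
        QueryJobConfiguration.newBuilder(sql).setUseLegacySql(true).build();

    TableResult result = bigquery.query(query);
    for (FieldValueList row : result.iterateAll()) {
      System.out.printf("%s\t%d\t%.2f%n",
          row.get("pickup_neighbourhood").getStringValue(),
          row.get("trips").getLongValue(),
          row.get("avg_fare").getDoubleValue());
    }
  }
}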
Though NYC taxi cab journeys may not seem to amount to much, they conceal a ton of information, which Google Cloud Dataflow, as a powerful big data tool, helped reveal by making big data processing easy and affordable. Maybe I'll try London's black cabs next.
* As cash tips aren’t reported, only 52% of trips have a tip noted; therefore the values regarding tips could be inaccurate.
Google Announces Open-Source Cloud Dataflow SDK for Java
Thursday, December 18, 2014
The value of data lies in analysis and the intelligence one generates from it. Turning data into intelligence can be very challenging as data sets become large and distributed across disparate storage systems. Add to that the increasing demand for real-time analytics, and the barriers to extracting value from data sets become a huge challenge for developers.
In June 2014, we announced a significant step toward a managed service model for data processing. Aimed at relieving operational burden and enabling developers to focus on development, Google Cloud Dataflow was unveiled. We created Cloud Dataflow, currently an alpha release, as a platform to democratize large-scale data processing by enabling easier and more scalable access to data for data scientists, data analysts and data-centric developers. Regardless of role or goal, users can discover meaningful results from their data via simple and intuitive programming concepts, without the extra noise from managing distributed systems.
Today, we are announcing availability of the Cloud Dataflow SDK as open source. This will make it easier for developers to integrate with our managed service while also forming the basis for porting Cloud Dataflow to other languages and execution environments.
We’ve learned a lot about how to turn data into intelligence as the original FlumeJava programming models (the basis for Cloud Dataflow) have continued to evolve internally at Google. Why share this via open source? It’s so that the developer community can:
Spur future innovation in combining stream and batch based processing models:
Reusable programming patterns are a key enabler of developer efficiency. The Cloud Dataflow SDK introduces a unified model for batch and stream data processing. Our approach to temporal based aggregations provides a rich set of windowing primitives, allowing the same computations to be used with batch or stream based data sources (see the windowing sketch after this list). We will continue to innovate on new programming primitives and welcome the community to participate in this process.
Adapt the Dataflow programming model to other languages:
As the proliferation of data grows, so do programming languages and patterns. We are currently building a Python 3 version of the SDK to give developers even more choice and to make Dataflow accessible to more applications.
Execute Dataflow on other service environments:
Modern development, especially in the cloud, is about heterogeneous services and composition. Although we are building a massively scalable, highly reliable, strongly consistent managed service for Dataflow execution, we also embrace portability. As Storm, Spark, and the greater Hadoop family continue to mature, developers are challenged with bifurcated programming models. We hope to relieve developer fatigue and enable choice in deployment platforms by supporting execution and service portability.
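As a small taste of the windowing primitives mentioned above, here is a minimal sketch using the Cloud Dataflow SDK for Java that counts elements per one-minute fixed window; the input location and element type are hypothetical placeholders.

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.windowing.FixedWindows;
import com.google.cloud.dataflow.sdk.transforms.windowing.Window;
import com.google.cloud.dataflow.sdk.values.KV;
import com.google.cloud.dataflow.sdk.values.PCollection;
import org.joda.time.Duration;

public class WindowedCounts {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Hypothetical input: one event name per line. The same windowed Count would apply
    // unchanged to an unbounded (streaming) source; for a bounded text source you would
    // typically first assign element timestamps in a DoFn.
    PCollection<String> events = p.apply(TextIO.Read.from("gs://my-bucket/events*.txt"));

    PCollection<KV<String, Long>> perMinuteCounts = events
        // Assign each element to a one-minute fixed window based on its timestamp.
        .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
        // Count occurrences of each event name within each window.
        .apply(Count.<String>perElement());

    p.run();
  }
}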
We look forward to collaboratively building a system that enables distributed data processing for users from all backgrounds. We encourage developers to check out the Dataflow SDK for Java on GitHub and contribute to the community.
Interested in adding to the Cloud Dataflow conversation? Here’s how:
Apply for access to Cloud Dataflow's managed service
Learn more through the documentation
Take part in the conversation at StackOverflow [tag: google-cloud-dataflow]
- Posted by Sam McVeety, Software Engineer
Hello World: We are the Cloud Developer Advocate Team
Tuesday, December 16, 2014
What do you get when you combine a group of engineers obsessed with cutting-edge technology and add a hint (make that a ton) of geek? A bunch of tech enthusiasts who make up the Developer Advocate Team at Google. You may have already seen some of our work or seen us speak.
We love helping make all of you as successful as possible as you build apps that take full advantage of everything that Google Cloud Platform has to offer. We like talking to you, but even more than that, we like to listen to your feedback. We want to be your voice to the Google Cloud Platform product and engineering teams and use what we hear to help create the best possible developer experience.
You’ll often meet us at technology events (conferences, meetups, user groups, etc.), where we talk about the many products and technologies that get us excited about coming to work every day. If you do see us, don’t be shy--come say hi!
Ask us anything and everything regarding Google Cloud Platform on Twitter and learn more through our videos on the Google Developers and Google Cloud Platform channels.
Without further ado, please meet your friendly neighborhood Cloud Developer Advocates!
Aja Hammerly
@thagomizer_rb
Aja just joined Google as a Developer Advocate. Before Google she spent 10 years working as an engineer building websites at a variety of web companies. She came to Google in order to help people use Google's amazing cloud resources effectively on their own projects.
Fun Fact
Aja learned to solve a Rubik's Cube by racing the build at her first dev job.
Brian Dorsey
@briandorsey, +BrianDorsey
Brian Dorsey aims to help you build cool stuff with our APIs and focuses on Kubernetes and Containers. He loves Python and taught it at the University of Washington. He’s spoken at both PyCon & PyCon Japan. Brian is currently learning Go and enjoying it.
Fun Fact
Brian speaks Japanese.
David East
@_davideast
David is passionate about creating resources and speaking about them to help educate developers. A military brat, David has moved over a dozen times in his life.
Fun Fact
David once broke his leg in the middle of the wilderness and had to crawl back to civilization.
Felipe Hoffa
@felipehoffa, +FelipeHoffa
Felipe Hoffa is originally from Chile and joined Google as a Software Engineer. Since 2013 he's been a Developer Advocate on big data, inspiring developers around the world to leverage Google Cloud Platform tools to analyze and understand their data in ways they never could before. You can find him in several YouTube videos, blog posts, and conferences around the world.
Fun Fact
He once went to the New York Film Academy to produce his own 16mm short films.
Francesc Campoy Flores
@francesc, +FrancescCampoyFlores, Site
Francesc Campoy Flores focuses on Go for Google Cloud Platform. Since joining the Go team in 2014, he has written several didactic resources and traveled the world attending conferences, organizing live courses, and meeting fellow Go-phers. He joined Google in 2011 as a backend software engineer working mostly in C++ and Python, but it was with Go and Cloud Platform that he re-discovered how fun programming can be.
Fun Fact
Francesc celebrated his 30th birthday riding a bike wearing a red tutu from San Francisco to Los Angeles.
Greg Wilson
@gregsramblings, +GregWilsonDev, Site
Greg Wilson leads the Google Cloud Platform Developer Advocacy team and has over 25 years of software development experience spanning multiple platforms, including cloud, mobile, web, gaming, and various large-scale systems.
Fun Fact
Greg is a part-time pro-photographer and a struggling jazz piano player.
Jenny Tong
@baconatedgeek, +JennyMurphy, Site
Jenny comes from the Firebase family at Google and helps developers build realtime stuff on all sorts of platforms. If she's away from her laptop, she's probably skating around a roller derby track, or hanging from aerial silk.
Fun Fact
Jenny once ate discount fugu puffer fish from a supermarket. It was priced less than $0.10 per piece. Somehow, she survived.
Julia Ferraioli
@juliaferraioli, +JuliaFerraioli, Site
Julia helps developers harness the power of Google’s infrastructure to tackle their computationally intensive processes and jobs. She comes from an industrial background in software engineering and an academic background in machine learning and assistive technology.
Fun Fact
Julia once deleted her entire thesis with a malformed regular expression, which she blames on lack of sleep and bad coffee. One good night's sleep outside the sysadmin's door restored it from the tape backup, and luckily only a couple of paragraphs were lost!
Kazunori Sato
@kazunori_279, +KazunoriSato
Kazunori Sato recently joined the team after working as a Cloud Platform Solutions Architect for 2.5 years. During that time, he has produced over 10 solutions and has been hosting the largest Google Cloud Platform community event in Japan for the past 5 years, as well as hosting Docker Meetup in Tokyo. He will be one of our resident experts in Japan on BigQuery, BigData, Docker, Kubernetes, mBaaS and IoT.
Fun Fact
Kaz’s hobby is playing with littleBits, RasPi, Arduino and FPGA and having fun connecting them to BigQuery.
Mandy Waite
@tekgrrl, +MandyWaite, about.me
Mandy is working to make the world a better place for developers building applications for Cloud Platform. She came to Google from Sun Microsystems where she worked with partners on performance and optimisation of large scale applications and services before moving on to building an ecosystem of Open Source applications for OpenSolaris. In her spare time she is learning Japanese and plays the guitar.
Fun Fact
Mandy has been studying Japanese for some time now, in the hopes of one day working in Japan and travelling the country in search of cicadas.
Ossama Alami
@ossamaalami, +OssamaAlami
Ossama is focused on Firebase, making sure developers have a great experience building realtime apps on Google Cloud Platform. He has worked as a software engineer, consultant, developer advocate and engineering manager at a variety of small and big companies. Prior to Firebase he was Head of Developer Relations for Glass at Google[x]. In the winter he can be found snowboarding in the Sierras.
Fun Fact
Ossama has worked on 8 different Google developer products: Ads APIs, Geo APIs, Android, Commerce APIs, Google TV, Chromecast, Glass and now Firebase.
Paul Newson
@newsons_nybbles, +PaulNewson, Site
Paul currently focuses on helping developers harness the power of Google Cloud Platform to solve their big data problems. Previously, he was an engineer on Google Cloud Storage. Before joining Google, Paul founded a startup which was acquired by Microsoft, where he worked on DirectX, Xbox, Xbox Live, and Forza Motorsport, before spending time working on machine learning problems at Microsoft Research.
Fun Fact
Paul is a private pilot.
Ray Tsang
@saturnism, +RayTsang, about.me
During his time at Accenture, Ray gained extensive hands-on, cross-industry experience delivering and managing enterprise systems integration projects, and managed full stack application development, DevOps, and ITOps. At Red Hat, Ray specialized in middleware, big data, and PaaS products while contributing to open source projects such as Infinispan. Aside from technology, Ray enjoys traveling and adventures.
Fun Fact
Ray has been posting at least one picture a day on Flickr since 2010.
Sara Robinson
@srobtweets
Sara joins Google from the Firebase family. She previously worked as an analyst at Sandbox Industries, a venture firm and startup foundry. She's passionate about learning to code, running, and finding the best ice cream in town.
Fun Fact
Sara wrote her senior thesis on Harry Potter, and enjoys finding ways to relate Harry Potter to almost anything.
Terrence Ryan
@tpryan, +TerrenceRyan
Terrence (Terry) Ryan is a Developer Advocate for the Cloud Platform team. He has a passion for web standards and 15 years of experience working with both front- and back-end applications for both industry and academia.
Fun Fact
Before doubling down on technology in the early aughts, Terry was a semi-professional improv comic.
-Posted by Greg Wilson, Head of Developer Advocacy
RealMassive transforms commercial real estate with powerful data technology built on Google Cloud Platform
Monday, December 15, 2014
If you’ve hunted for new office space for your company in recent years, you know what a nightmare it can be: dealing with quickly outdated spreadsheets and flyers, finding inaccurate data on listings, or even missing out on a great spot because it wasn’t listed properly. The commercial real estate industry today is technologically behind, and RealMassive aims to fix that.
RealMassive uses Google App Engine, Google Compute Engine, Google Cloud Storage, and Google Maps to bring transparency to the commercial real estate industry. The company gives its customers accurate, up-to-the-minute digital real estate listings and eliminates conventional operating models. With more than 1 billion square feet of properties in its database, RealMassive is well on its way to transforming an old industry.
Read our new case study on RealMassive here to learn more about how Cloud Platform helped the company achieve 1,360% growth in data in just three months.
-Posted by Chris Palmisano, Senior Key Account Manager
Data scientists harness Google Cloud Platform to make social impacts at 24-hour Bayes Impact hackathon
Thursday, December 11, 2014
Can you change the world for the better in 24 hours? That was the challenge 39 teams tackled at the Bayes Hack data-science challenge in November.
Bayes Impact is a Y Combinator-backed nonprofit which runs programs to bring data-science solutions to high-impact social problems. In addition to a 12-month full-time fellowship supporting leading data scientists to work with civic and nonprofit organizations such as the Gates Foundation, Johns Hopkins and the White House, the organization runs an annual 24-hour hackathon to bring together data scientists and engineers to tackle social problems.
Starting from a set of 20 challenge problems proposed by government and non-profit organizations, teams drawn from Silicon Valley’s top data-science talent applied their skills to finding impactful ways to use already available data to solve pressing social problems.
Google Cloud Platform sponsored the event with a $500 Google Cloud Starter Pack credit for each team and a prize of $100K in Google Cloud Platform credits for the winning team.
With only 24 hours and large quantities of data to process, teams were able to leverage the power of tools such as Google Compute Engine and BigQuery to quickly chew through terabytes of information looking for ways to make meaningful impacts on people’s lives.
The winning team, comprised of five local Bay Area data scientists, used their data savvy and their Cloud Platform credits to identify prostitution rings by analyzing patterns of phone numbers and text in postings to adult escort websites. Using a cluster of Compute Engine nodes, the team processed a dataset provided by the non-profit group Thorn. They indexed 38,600 phone numbers and combined that with a heuristic phrase matching strategy to detect 143 separate networks or cells operating in the US.
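The post doesn't include the team's code, but as a rough sketch of the grouping idea, assuming that postings sharing a phone number belong to the same network, a simple union-find over postings and phone numbers could look like the following; the Posting type and field names are hypothetical, and the heuristic phrase matching the team also used is not shown.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Groups postings into candidate networks: postings sharing a phone number are linked. */
public class NetworkGrouper {
  // Hypothetical record type: a posting ID plus the phone numbers it advertises.
  static class Posting {
    final String id;
    final List<String> phoneNumbers;
    Posting(String id, List<String> phoneNumbers) {
      this.id = id;
      this.phoneNumbers = phoneNumbers;
    }
  }

  private final Map<String, String> parent = new HashMap<>();

  private String find(String x) {
    parent.putIfAbsent(x, x);
    String root = parent.get(x);
    if (!root.equals(x)) {
      root = find(root);
      parent.put(x, root);  // path compression
    }
    return root;
  }

  private void union(String a, String b) {
    parent.put(find(a), find(b));
  }

  /** Returns network root -> posting IDs, where sharing any phone number links two postings. */
  public Map<String, List<String>> group(List<Posting> postings) {
    for (Posting p : postings) {
      for (String phone : p.phoneNumbers) {
        union("posting:" + p.id, "phone:" + phone);
      }
    }
    Map<String, List<String>> networks = new HashMap<>();
    for (Posting p : postings) {
      networks.computeIfAbsent(find("posting:" + p.id), k -> new ArrayList<>()).add(p.id);
    }
    return networks;
  }
}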
“Realizing that it was going to take 76 days to process the data on a local laptop, we saw this as a place to use our Cloud Platform credits,” notes Peter Reinhardt, the lead for the winning team. “We found it really straightforward to get SSH access to our first compute instance right from the console. Once that was running, we were able to use that image to quickly bring up 10 machines, and went from nothing to a high powered compute cluster in just over half an hour.”
Paul Duan, President of Bayes Impact, observed that Cloud Platform “enabled the participants to get going quickly and focus on their application without having to spend too much time setting up infrastructure.”
It is estimated that 100,000 to 300,000 children are at risk of commercial sexual exploitation in the United States and one million children are exploited by the global commercial sex trade each year.* As the winning entry, the team’s work will be adopted and expanded as a resident Bayes Impact project.
Companies use data science and Google’s big data tools to quickly answer tough data-intensive questions. Bayes Impact and Google worked together to show what is possible when human and technology resources are brought to bear against social problems.
Posted by Preston Holmes, Google Cloud Platform Solutions Architect
*U.S. Department of State, The Facts About Child Sex Tourism, 2005.
MITx’s edX course uses Akselos for complex engineering simulations on Compute Engine
Wednesday, December 10, 2014
Today’s post is about Cloud Platform customer Akselos, a platform that enables engineers to design and assess critical infrastructure - such as bridges, buildings and aircraft - via advanced simulation software.
When you enter a tall office building or drive over a giant bridge, it’s likely you don’t think twice about the work that went into ensuring these massive structures stay standing.
Lucky for us, engineers answer myriad design questions before the structures are ever built: How thick do the beams need to be? How will different materials weather over time? Lucky for these engineers, software like Akselos helps answer these questions. And now, students around the world can use this same software when they participate in MIT’s massive open online course, Elements of Structure.
Akselos, which is built on Google Compute Engine, enables software-based large-scale simulations, allowing engineers to virtually prototype complex infrastructure, keeping us all safe on those bridges.
Computational simulations are a key tool in all engineering disciplines today. The current industry-standard technology is called Finite Element Analysis (FEA). However, large-scale 3D FEA simulations are computationally intensive. It can be unfeasible to use FEA for many applications of practical interest, such as modeling large infrastructures like bridges, buildings, port equipment, offshore structures or airframes in full 3D detail. These types of simulations require amounts of RAM that often exceed the capacity of a desktop workstation (sometimes over a terabyte). Even if the simulation does fit in RAM, it may require hours or even days of computation time. If time is at a premium, 3D FEA of large-scale systems is too slow.
Akselos aims to make high-end simulation technology faster and easier to access. Its software is based on new algorithms (developed at MIT and other universities in the US and Europe over the past decade) that are 1000x faster than FEA for large-scale, highly detailed simulations. Fast response times are crucial in practice because engineers typically need to do hundreds or even thousands of simulations to perform studies for a piece of critical infrastructure, such as analyzing the vibrational characteristics of an entire gas turbine under all operating frequencies. With Akselos, studies like this can be completed within one day.
With Akselos, each simulation model is composed of hundreds or even thousands of components. And each component contains various properties (e.g. density, stiffness) or geometry (length, curvature, crack depth) that can be changed with the click of a button. In order to handle this giant data footprint, Akselos’s software runs on Google Cloud Platform and utilizes Google’s storage solutions as well as Replica Pools to scale its computing resources.
Akselos’s initial deployment on Google Compute Engine occurred when Dr. Simona Socrate, a Senior Lecturer in the Mechanical Engineering Department at MIT, decided to leverage its fast simulation technology to help students in her structural analysis course, 2.01x, on edX. Dr. Socrate wished to integrate simulation apps that run in the web browser into her course so students could explore subtle effects in structural mechanics in an interactive and visual way. Previous attempts to integrate simulations within university courses had been unsuccessful because the tools are typically too complicated for students to master.
Following Dr. Socrate’s direction, Akselos developed a series of WebGL browser apps to support the course’s learning experience. To handle the scale required for the 7,500 students who were signed up for the course, Akselos deployed the simulation back-end on Compute Engine. The apps were tested to sustain up to 15,000 simulation queries per hour at 99.9% uptime. The simulations ran on Google Compute Engine without a hitch during the 4-month course, with a very positive response from the students.
In parallel with the edX deployment, Akselos has opened up its cloud-based simulation platform, which is now used by a growing community of engineers around the world. The company aims to put powerful simulation technology into the hands of as many people as possible to enhance design and analysis workflows across many engineering disciplines. With the software deployed on Compute Engine, Akselos is well on its way to providing faster, easier, more detailed simulations for every engineer.
Expanded Windows Support on Google Cloud Platform
Monday, December 8, 2014
Our customers, large and small, have put a number of things on their holiday wish lists, including better support of their Windows-based workloads, leveraging the performance and scale of Google datacenters. Today, we're releasing three additional enhancements to Google Compute Engine that make it a great place for customers to run highly performant Windows-based workloads at scale.
First, we’re happy to offer Microsoft License Mobility for Google Cloud Platform. This enables our customers to move their existing Microsoft server application software licenses, such as SQL Server, SharePoint and Exchange Server, from on-premises to Google Cloud Platform without any additional Microsoft software licensing fees. Not only does license mobility make the transition easier for existing customers, it provides customers who prefer to purchase perpetual licenses the ability to continue doing so while still taking advantage of the efficiencies of the cloud. You can learn more about Microsoft License Mobility for Google Cloud Platform here. Use of Microsoft products on Google Compute Engine is subject to additional terms and conditions (you can view the Google Cloud Platform service terms here).
Second, Windows Server 2008 R2 Datacenter Edition is now available to all Google Cloud Platform customers in beta on Google Compute Engine. We know our customers run some of their key workloads on Windows and want rapid deployment, high performance and the ability to stretch their datacenters to the cloud. And with awesome features like Local SSD (which also supports live migration) and multiple ways to connect your datacenter to the cloud, Google Cloud Platform is the best place to run your Windows workloads. And just so you know, we are working on support for Windows Server 2012 and 2012 R2; we’ll have more on this soon!
And lastly, a version of the popular Chrome RDP app from Fusion Labs, optimized for Google Cloud Platform, is now available for free to our customers for use with Windows in Google Compute Engine. This enables customers using the Chrome browser to create remote desktop sessions to their Windows instances in Google Compute Engine without the need for additional software, simply by clicking on the RDP button in the Google Developers Console. In addition, because the Google Developers Console stores and passes the Windows login credentials to the RDP app, customers can leave the complexity of managing unique user IDs and passwords for each Windows instance to Google.
We’re constantly amazed to see what our customers build and run on Google Cloud Platform, from high performance animated movie rendering to rapid scale distributed applications to near instant-on VMs to cloud bursting.
For example, IndependenceIT, a leading software provider of simplified IT management solutions for application and DaaS delivery, has been working to certify its Cloud Workspace Suite ("CWS") with Windows Server 2008 R2 Datacenter Edition running on Google Compute Engine. CWS is software that allows IT administrators to rapidly orchestrate and provision all elements necessary for automated, multi-platform, hypervisor/device agnostic workspaces for use with public, private or hybrid-cloud IT environments. The software offers a robust API set for ease of integration with existing customer business support systems, simplifying deployment while speeding time to market. IndependenceIT has been testing Windows on Google Compute Engine, and their customers will have the ability to use CWS to provision Windows Server based desktops and application deployments into Google Cloud Platform.
We’d love to hear feedback from our customers who use Windows, as well as how you’d like to see us expand support for the Windows ecosystem. What are you building next?
-Posted by Martin Buhr, Product Manager
dotCloud provides faster, more reliable PaaS with Google Cloud Platform
Friday, December 5, 2014
Today’s guest blog comes from Philipp Strube, founder and CEO at cloudControl, a Berlin-based Platform-as-a-Service (PaaS) provider. CloudControl provides the dotCloud Platform, which simplifies the deployment, management and scaling of web apps for developers. Launching today, the new version of dotCloud will run on Google Compute Engine, giving their customers a number of performance benefits as well as cost savings in the range of 50-80 percent.
CloudControl acquired dotCloud, the industry’s first multi-language PaaS, in August. At that time, we updated the underlying dotCloud technology and took the opportunity to pick the best technology vendors to work with dotCloud going forward. As this is our first foray into the U.S., we wanted to differentiate ourselves in this market by giving customers the fastest, most reliable service. And to do this, we needed to run on the highest-performing infrastructure, so we made the decision to move all of dotCloud’s 500 customers from Amazon Web Services (AWS) to Google Cloud Platform.
When you are a developer making architectural decisions, it’s important to have options. That’s why we picked Google. With Google Cloud Platform and dotCloud, customers get options: the choice of programming languages like Java, Scala, Clojure, NodeJS, PHP, Python, Ruby, and many more via industry standard buildpacks, and add-on services for relational databases, including Google Cloud SQL, NoSQL databases like MongoDB and in-memory solutions like Memcache and Redis, just to name a few. App developers also get the popular Git-based PaaS workflow, the flexibility to pick the right technology for their use cases and the scalability and reliability of Google’s infrastructure, without having to maintain development, staging and production environments themselves.
Google Cloud Platform offers an unparalleled global network infrastructure that lays the foundation for a robust and growing ecosystem, enabling developers to connect with partners and services anywhere. Our promise to developers building on top of the dotCloud PaaS is that they always have a platform they can trust. Google Cloud Platform gives us the flexible, reliable and fast infrastructure we require to fulfill this promise.
In addition to the performance benefits of Google Compute Engine, we are also pleased with the reliability and redundancy of Google Cloud Storage. To make sure customer applications are always available and can scale fast to meet current demand, the platform uses a robust and proven zero-downtime deployment process. First, during a push to the Git repository, the language- and framework-specific buildpack runs and builds an image of the application code, its dependencies and any additional assets required. The compressed image is then stored on Google Cloud Storage. This ensures that the latest image is always available to either replace a container, scale to more containers, or deploy a new version in a matter of seconds.
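As a rough illustration of that storage step (not dotCloud's actual implementation), here is a minimal sketch using the Google Cloud Storage client library for Java to store and later fetch a compressed build image; the bucket and object names are hypothetical.

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BuildImageStore {
  private static final String BUCKET = "my-paas-build-images";  // hypothetical bucket

  public static void main(String[] args) throws IOException {
    Storage storage = StorageOptions.getDefaultInstance().getService();

    // Store the compressed image produced by the buildpack run.
    byte[] image = Files.readAllBytes(Paths.get("app-v42.tar.gz"));
    BlobId blobId = BlobId.of(BUCKET, "images/app-v42.tar.gz");
    storage.create(BlobInfo.newBuilder(blobId).setContentType("application/gzip").build(), image);

    // Later, any container host can fetch the latest image to replace or scale containers.
    byte[] fetched = storage.readAllBytes(blobId);
    Files.write(Paths.get("/tmp/app-v42.tar.gz"), fetched);
  }
}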
Initially bootstrapping our technology on Google Cloud Platform took just a week, and preparing the platform for production took about six to eight weeks in total. The move was painless for us because our technology architecture is built from the ground up to be infrastructure agnostic by using containers. All customer application processes and 98 percent of our own platform components run inside the containers. We use n1-highmem-4 VMs to run the containers on.
We also benefitted from the fact that Google provides powerful, well understood abstractions on top of the raw compute, networking and storage infrastructure that were intuitive to use. The pricing model of our PaaS platform is completely consumption-based, so customers only pay for what they use; to provide this, we rely on underlying infrastructure pricing that follows the same model. With the sustained use discounts from Google, we have a cost-effective way to bake in enough headroom for customers to scale instantly without upfront commitments. This allowed us to reduce per-memory prices by 50-80 percent for customers who migrate from the old dotCloud services running on AWS to the new dotCloud infrastructure running on Google Cloud Platform.
With today’s launch we are also re-introducing a free tier and invite both existing dotCloud customers and new developers to try out the next dotCloud on Google Compute Engine.
Aerospike Hits One Million Writes Per Second with just 50 Nodes on Google Compute Engine
Thursday, December 4, 2014
Today’s guest blogger is Sunil Sayyaparaju, Director of Product & Technology at Aerospike, the open source, flash-optimized, in-memory NoSQL database.
What exactly is the speed of Google? We at Aerospike take pride in meeting our challenges of high throughput, consistently low latency, and real-time processing that we know will be characteristic of tomorrow’s cloud applications. That’s why, after we saw Ivan Santa Maria Filho, Performance Engineering Lead at Google, demonstrate 1 Million Writes Per Second with Cassandra on Google Compute Engine, our team at Aerospike decided to benchmark our product’s performance on Google Compute Engine and push the boundaries of Google’s speed.
And guess what we found out: Aerospike scaled on Google Compute Engine with consistently low latency, required smaller clusters and was simpler to operate. The combined Aerospike-Google Cloud Platform solution could fuel an entirely new category of applications that must process data in real time and at scale from the very start, enabling a new class of startups with business models that previously were not economically viable.
Our benchmark used a similar setup to the Cassandra benchmark: 100 million records at 200 bytes each, Debian 7 backports, servers on n1-standard-8 instances with data in memory and on-disk persistence on 500GB non-SSD persistent disks at $0.504/hr, clients on n1-highcpu-8 instances at $0.32/hr, following these steps. In addition to pure write performance, we also documented pure read and mixed read/write performance. Our findings:
High Throughput for both Reads and Writes
1 Million Writes per Second with just 50 Aerospike servers
1 Million Reads per Second with just 10 Aerospike servers
Consistent low latency, no jitter for both Reads and Writes
7ms median latency for Writes with 83% of writes < 16ms and 96% < 32ms
1ms median latency for Reads with 80% of reads < 4ms and 96.5% < 16ms
Note that latencies are measured on the server (latencies on the client will be higher)
Unmatched Price / Performance for both Reads and Writes
1 Million Writes Per Second for just $41.20/hour
1 Million Reads Per Second for just $11.44/hour
Aerospike is used as a front edge operational database for a variety of purposes: a session or user context store for real-time bidding, personalization, fraud detection, and real-time analytics. These applications must read and write billions of keys and terabytes, from click-streams to sensor data. Data in Aerospike is replicated synchronously in-memory to ensure immediate consistency and written to disk asynchronously.
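For readers who haven't used Aerospike before, here is a minimal write/read sketch using the Aerospike Java client, loosely mirroring the record shape used in the benchmark (a key plus a ~200-byte value); the seed host, namespace and set names are hypothetical.

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class AerospikeHelloWorld {
  public static void main(String[] args) {
    // Hypothetical seed node; the client discovers the rest of the cluster from it.
    AerospikeClient client = new AerospikeClient("10.240.0.10", 3000);
    try {
      Key key = new Key("test", "benchmark", "user-12345");  // namespace, set, user key
      Bin payload = new Bin("data", new byte[200]);          // ~200-byte record value

      // Write: replicated synchronously in memory, persisted to disk asynchronously.
      client.put(null, key, payload);

      // Read it back (a single network round trip to the node owning the key).
      Record record = client.get(null, key);
      System.out.println("read back " + ((byte[]) record.getValue("data")).length + " bytes");
    } finally {
      client.close();
    }
  }
}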
Here are details on our experiments with Aerospike on Google Compute Engine. Using 10 server nodes and 20 client nodes, we first examined throughput for a variety of different read and write ratios and documented latency results for those same workloads. Then, we documented how throughput scaled with cluster size as we pushed a 100% read load and a 100% write load onto Aerospike clusters, going from 2 nodes to 10.
High Throughput at different Read / Write ratios (10 server nodes, 20 client nodes)
For all read/write ratios, 80% of the TPS in this graph is achieved with 50% of the clients (10); adding more clients only marginally improves throughput. Disk IOPS depend on disk size, so we used 500GB non-SSD persistent disks to ensure high IOPS and keep the disk from being the bottleneck. For larger clusters, 500GB is a huge over-allocation and can be reduced for lower costs. To achieve this high performance, we did not need to use SSD persistent disks to get higher IOPS.
Consistent Low Latency for different Read / Write ratios (10 server nodes, 20 client nodes)
For a 100% read load, only ~20% of reads took more than 4ms and ~3.5% reads took more than 16ms. This is because reads in Aerospike are only 1 hop (network round trip) away from the client, while writes take 2 network-roundtrips for synchronous replication. Even with a 100% write load, only 16% of writes took more than 32ms. We ran Aerospike on 7 of 8 cores on each server node. Leaving 1 core idle helped latency; if all cores were busy, network latencies increased. Latencies are as measured on the server.
Linear Scalability for both Reads and Writes
This graph shows linear scalability for 100% reads and 100% writes, but you can expect linear scaling for mixed workloads too. For reads, a 2:1 client:server ratio was used, i.e. for a 6-node cluster we used 12 client machines to saturate the cluster. For writes, a 1:1 client:server ratio was enough because of the lower throughput of writes.
A new generation of applications with mixed read/write data access patterns sense and respond to what users do on websites and on mobile apps across the Internet. These applications perform data writes with every click and swipe, make decisions, record, and respond in real-time.
Aerospike running on Google Compute Engine showcases an example application that requires very high throughput and consistently low latency for both reads and writes. Aerospike processes 1 million writes per second with just 50 servers, a new level of price and performance for us. You too can follow these steps to see the results for yourself and maybe even challenge us.
- Posted by Sunil Sayyaparaju, Director of Product & Technology at Aerospike
Reach High Availability with a Multiple Cloud Deployment
Wednesday, December 3, 2014
This article is written by guest author Eugene Olshenbaum. Eugene is the Head of Media Platform at Wix, a cloud-based web development platform that makes it easy for everyone to create beautiful websites.
While some people are still debating whether to use a cloud service, we at Wix are debating how many to use. The more services we use, the more assurance we have that we can handle any failures. To help ensure business continuity by freeing developers from the constraints of a single provider, multi-cloud environments are becoming the next evolution in cloud platform architecture.
Dimensional Research recently interviewed 659 IT decision makers with cloud responsibilities in Australia, Brazil, Canada, Germany, the UK, US, and Singapore, and 77% of respondents said they either already have or plan to implement a multi-cloud infrastructure in the coming year. Only 8% are not planning to do so.
As a result of this growing trend, we thought it was time to revisit a recent blog post describing Wix’s disaster recovery strategy, as well as discuss our multi-cloud implementation at Wix.
At Wix.com, we provide a cloud-based web development platform that allows users to create HTML5 websites and mobile sites through the use of our online drag-and-drop tools. Wix Media Platform is one of the most important pieces of our infrastructure, supporting the 55 million websites running on Wix.com.
While providing tools for building functional websites like an eCommerce shop, hotel, or restaurant, we quickly realized that our customers care about only one thing: they want their site to always be online. And because we know that things fail no matter what, using multiple cloud providers is our solution to:
Achieve at least Five 9s uptime
Stay on top of the competition
Eliminate the risks associated with the business continuity of the infrastructure provider, as well as risks related to electricity suppliers, networking providers, and other "data center" issues (since each cloud provider will usually operate separately).
Wix Media Platform High-Level Architecture
The new multi-cloud configuration of Wix Media Platform’s system layout provides an active/active, strongly consistent setup on:
Google Cloud Platform (primary)
Amazon Web Services
Wix-managed data centers
These locations are logical in terms of operation. If one of them fails, traffic is re-routed to a healthy location. Instead of focusing on how to extend availability within the boundaries of one cloud provider, we’ve been concentrating on how to failover at the highest possible level, which is the user’s web browser.
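Wix hasn't published its failover code, and in its case the logic lives in the user's browser, but as a rough sketch of the general pattern, assuming an ordered list of equivalent serving locations, a client-side failover helper might look like this (the endpoints are hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

/** Tries each equivalent serving location in order until one answers successfully. */
public class MultiCloudFetcher {
  // Hypothetical equivalent endpoints: primary on Google Cloud Platform, then fallbacks.
  private static final List<String> LOCATIONS = Arrays.asList(
      "https://media-gcp.example.com",
      "https://media-aws.example.com",
      "https://media-dc.example.com");

  public static InputStream fetch(String path) throws IOException {
    IOException lastFailure = null;
    for (String base : LOCATIONS) {
      try {
        HttpURLConnection conn = (HttpURLConnection) new URL(base + path).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(5000);
        if (conn.getResponseCode() == 200) {
          return conn.getInputStream();  // healthy location found, serve from it
        }
        lastFailure = new IOException(base + " returned " + conn.getResponseCode());
      } catch (IOException e) {
        lastFailure = e;  // this location is unhealthy; try the next one
      }
    }
    throw lastFailure != null ? lastFailure : new IOException("no locations configured");
  }
}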
Wix’s platform relies on several subsystems, each of which provides its own service-level agreement (SLA). One of the key design guidelines is to keep each subsystem fully backed up by its independent equivalent on another location.
The Challenge
We want to provide close to 100% uptime for data serving while protecting users’ data against loss. We originally ran our service in one managed hosting environment. To improve data disaster recovery, we added a second one, running both services in active/active mode. Later, we added a third data center to run our services in 3x active/active mode.
As we explained in our previous blog post, we learned that maintaining three cross-data-center replicas was much more complex than managing two, especially with the data centers owned by different ISPs for ISP redundancy. One of the challenges in 3x active/active mode was database replication. To replicate across three data centers we had to configure our MySQL in a ring topology. The ring would break when one data center went down for a long time or failed completely.
To address this, instead of implementing 3x active/active mode with our current infrastructure, we decided to run in 2x active/active mode, with the third replica running on an entirely different technology platform. The third replica also added protection against data poisoning (when a faulty piece of code unintentionally corrupts data and remains undetected for some time).
We decided to develop a fully functional, logical data center natively on Google Cloud Platform. After six months, in April 2013, we started to serve Wix media from Google Cloud Platform in monitored geographies. By the end of 2013, 100% of production traffic was served from Google Cloud Platform. We also developed NORM (Not Only Replication Manager) on Google Cloud Platform, a generic replication bus that keeps the data in sync across all logical locations: Google Cloud Platform, Amazon Web Services, and Wix data centers.
Conclusion
As the leading cloud-based web development platform in the world, we have been paying very close attention to the string of recent cloud outages. Each minute of downtime is money our clients lose, so implementing a multi-cloud infrastructure to mitigate the risks associated with failures was a natural decision for us.
We believe the advantages of utilizing multiple cloud platforms heavily outweigh the challenges. Over time, we learned that the benefits go beyond extended capabilities, lower costs, and improved performance.
Operational efforts are way less stressful, and sleepless nights and crisis chat rooms are now in the past. In most cases we just switch traffic to a functional system and investigate failures afterwards. With this new implementation, our team can rest easier and still provide an exceptional customer experience.
- Posted by Eugene Olshenbaum, Director of Media Platform at Wix
Google Cloud Platform now PCI Data Security Standard Certified
Tuesday, December 2, 2014
Every year people spend billions of dollars buying goods online. Consumers who make these purchases and the companies that accept these online credit transactions need confidence that their data and processes are protected. We are pleased to announce that Google Cloud Platform has been validated for compliance with the Payment Card Industry (PCI) Data Security Standard (DSS). This validation enables our customers to hold, process, or exchange cardholder information from any branded credit card on Google Cloud Platform.
PCI DSS provides a comprehensive and robust security framework for securing credit card information and transactions. Google is using these third-party audited standards to deliver a platform on which application developers can create and operate their own secure and compliant solutions.
One of these developers is WePay. WePay helps its customers, such as online marketplaces and small business software providers, seamlessly facilitate payments between their users while avoiding all the operational downsides of fraud, compliance and support. "Google Cloud Platform will enable WePay to process our partners' transactions in a fully scalable, highly available environment with robust security features," said David Nye, Director of DevOps at WePay. "The new PCI DSS certification that Google Cloud Platform has achieved enables WePay to dynamically grow our infrastructure as fast as our business and our partners’ businesses demand."
We are looking forward to helping customers write great apps on Google Cloud Platform with PCI DSS compliance. If you need PCI compliance and would like more information, please contact our team.
Happy shopping!
-Posted by Matthew O’Connor, Product Manager