As users move mobile-first and increasingly mobile-only, Google aims to equip developers with solutions to address their unique challenges on mobile: from fully managed services where developers can focus exclusively on their app’s front-end user experience, to platform and infrastructure-as-a-service that give developers as much control as they need for their projects.








The Mobile Context






Firebase is an innovative leap forward that addresses the limitations of traditional programming models in a mobile context. The RESTful request-response model has taken us a long way, especially on the web. But consider the usage context around mobile: network connections can be intermittent or non-existent, especially in developing countries but also in developed regions where users may descend into a subway, toggle airplane mode, or step into areas with spotty coverage. Building a seamless user experience under these conditions is challenging, even as users increasingly expect their apps to work offline.













Seamless Offline Capability




Today at Google I/O, Firebase announced native support for offline usage on iOS and Android. Firebase handles data persistence entirely on the developer’s behalf, automatically storing data locally on the device when the network is unavailable. When connectivity is restored, Firebase automatically syncs application data back to the cloud. Contrast this with the RESTful model, where a developer trying to create a seamless offline experience might send a request to the server blindly, learn via a timeout or error code that something went wrong, and then devise a retry mechanism such as polling. Add the prospect of keeping application data in sync across diverse clients, from mobile and desktop web browsers to Android and iOS, and the complexity escalates quickly. Firebase manages data synchronization across devices completely on behalf of developers, regardless of their connected state.






Realtime


In addition to the offline use case, users expect a snappy, instant-response experience from today’s best apps. Whether it’s your ride-sharing car inching toward you on a map, social posts appearing instantly, or live collaboration in a Google Doc, realtime is becoming an important part of the user experience.






From the start, Firebase pioneered a realtime synchronization approach to mobile applications. Application data gets synchronized to the cloud and across client devices in realtime with no effort on the developer’s part. Clients are notified immediately of changes so they can take action.





Infrastructure & Compute


Finally, for mobile developers who wish to manage or migrate their existing backend, Cloud Platform offers a spectrum of options to power your mobile app with custom server-side code, from Google App Engine and Managed VMs to Google Compute Engine. These platforms are excellent choices to host long-running jobs, run analytics, or run custom business logic.





Google Cloud Platform ensures that mobile developers and the context around mobile usage are first-class considerations. But don’t just take our word for it, read about how Rovio adapted its backend for Angry Birds, how Feedly tailors content for purposeful reading on mobile, or how Citrix tackles remote collaboration in the enterprise.





Get started immediately on Firebase or dive into Cloud Platform and take the lead in building the next generation of great mobile experiences.




-Posted by Andy Tzou, Product Marketing Manager, Google Cloud Platform













Today's guest post comes from Cihan Biyikoglu, director of Product Management at Couchbase, maker of Couchbase Server, the high-performance, always-on document database, and a Google Cloud Platform partner.





Over the last year, technology partners have been reporting some exciting performance stats on Google Compute Engine, with Cassandra able to sustain 1 million writes per second. We at Couchbase took that as a challenge, and decided to see how much further we could push the scale to drive down price/performance. Now the results are in: we were able to sustain 1.1 million writes per second using only 50 n1-standard-16 VMs, each with a 500GB SSD Google Cloud Persistent Disk!





Couchbase Server is an open source, multi-model NoSQL distributed database that incorporates a JSON document database and key-value store. It’s built on a memory-centric architecture designed to deliver consistent high performance, availability, and scalability for enterprise web, mobile, and Internet of Things applications. It can be used as a document database, key-value store, and distributed cache supporting rich synchronization features to several platforms, including most mobile devices. It so happens that Couchbase Server also does very well on Google Compute Engine, offering superior price/performance.





Here are the additional details on Couchbase Server results:



  • Couchbase Server worked with 3 billion total items, each with a 200-byte value.



  • Couchbase Server was set up with 2 copies of data for availability and durability (one master and one additional replica).



  • The median latency was 15ms and the 95th percentile latency was 27ms.



  • The total cost of running the benchmark was $56.30 per hour.
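For a rough sense of price/performance, the throughput and hourly cost above can be combined into a cost per billion writes (a back-of-the-envelope figure, not part of the official benchmark results):

```python
# Back-of-the-envelope price/performance from the benchmark figures above.
writes_per_sec = 1.1e6        # sustained write throughput
cost_per_hour = 56.30         # total cluster cost, USD per hour

writes_per_hour = writes_per_sec * 3600
cost_per_billion_writes = cost_per_hour / (writes_per_hour / 1e9)

print(f"{writes_per_hour:.2e} writes/hour")
print(f"${cost_per_billion_writes:.2f} per billion writes")
```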





Sustained 1.1M writes/sec





The servers were configured to maintain data consistency, availability, and durability during the benchmark. Writes were acknowledged only after the write was received by two data nodes. The data was then flushed to the server Persistent Disk, which is a durable store. This replicated persistence is achieved by the durability flag “ReplicateTo=1” in Couchbase Server.
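To make the acknowledgement rule concrete, here is a toy Python model of replicated writes. This is purely illustrative, not Couchbase code; the `replicate_to` parameter only mirrors the flag described above:

```python
# Toy model of write acknowledgement with replication: the write is
# acknowledged only once the master plus `replicate_to` replicas hold it.

def write(key, value, nodes, replicate_to=1):
    """Write to the master, copy to replicas, and report whether the
    durability requirement (master + replicate_to replicas) was met."""
    master, replicas = nodes[0], nodes[1:]
    master[key] = value
    copies = 1  # the master's copy
    for replica in replicas:
        if copies >= 1 + replicate_to:
            break
        replica[key] = value
        copies += 1
    return copies >= 1 + replicate_to  # True means the write is acknowledged

nodes = [{}, {}]                       # one master, one replica
acked = write("user:42", {"name": "Ada"}, nodes, replicate_to=1)
print(acked)                           # True: the data is on two nodes
```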





Replicating data to two servers and flushing to Google Cloud Platform’s persistent disks offer a stronger durability guarantee, as it combines the durability efforts of both Couchbase and Google Cloud Platform. Couchbase’s memory-to-memory replication can also be used to reduce write latency while ensuring durability.





This level of performance and service quality is consistent with Couchbase’s promises to our customers, which include large enterprises with millions of customers or users (often in the advertising, technology, travel, finance, and telecommunication industries), to implement mission-critical services for personalization, profile management, fraud detection, and digital communication where the scale required is unprecedented.





We’re very excited about these benchmark results, and we believe our price/performance on Google Cloud Platform is a great value to our customers. You can find more detail on the Couchbase blog, including detailed instructions on how to run the benchmark and reproduce the numbers independently.




-Posted by Cihan Biyikoglu, Director of Product Management, Couchbase Server



















Everyone loves portability and speed, especially when it comes to cloud. Today we’re excited to introduce a new technical paper and open source reference implementation that will help your Google Compute Engine virtual machines boot even faster, and allow you to build portable images for Docker along with your Compute Engine images.





When you run an application on a Compute Engine instance, you first have to deploy one or more Virtual Machines and configure them so your application works. This configuration usually involves installing the application and its dependencies, then setting up any other configuration your application requires, such as database connection strings or API keys.





You could do this manually by connecting to each instance after it boots and configuring each element, but that is a slow and error-prone process that creates unique, inconsistent “snowflake” instances. Running configuration scripts automatically at startup is a better solution, as it’s repeatable and scalable, but it requires a lot of up-front work and is still subject to human error. Additionally, if a package you’re installing is temporarily unavailable, your instances simply won’t boot, and if the packages you’re installing are large or need to be compiled, boot times get slower and your ability to auto scale is affected.





Building custom images before you launch instances is a great way to reduce boot times and increase reliability. Today we’re introducing a solution paper and open source reference implementation that describes in detail how to Automate Image Builds with Jenkins, Packer, and Kubernetes. You’ll learn how to use popular open source technologies to continuously build images for your Compute Engine or Docker-based applications. You’ll build the images in a central project, share them with other projects in your organization, and integrate the image build as a step in your continuous integration (CI) pipeline.
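Packer's googlecompute builder is the piece that bakes the Compute Engine image. A minimal template might look like the following sketch; the project ID, source image name, and the nginx provisioner are placeholder examples, not values from the solution paper:

```json
{
  "builders": [{
    "type": "googlecompute",
    "project_id": "your-project-id",
    "source_image": "debian-7-wheezy-v20150526",
    "zone": "us-central1-a"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update", "sudo apt-get install -y nginx"]
  }]
}
```

In the pipeline described above, Jenkins runs `packer build` on templates like this one whenever your CI pipeline triggers an image build.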





The diagram below shows the Jenkins image builder serving as a hub that builds Compute Engine and Docker images for other projects in your organization:




Figure 1: Jenkins building images for other projects in your account.





In addition to creating a secure and scalable image building pipeline, you’ll learn how to run a reliable Jenkins installation on Kubernetes, including how to backup/restore Jenkins and scale your worker nodes.





Head on over to the Automated Image Builds solution page for all the details on using Jenkins, Packer, and Kubernetes with Google Cloud Platform to increase the speed and reliability of your instance and container launches. After that, go deploy the infrastructure yourself by following the tutorial in the reference implementation. We love feedback: open a GitHub pull request or issue with suggestions on the tutorial, comment here, or reach out to @evandbrown on Twitter to let me know how you’re using Google Cloud Platform!





-Posted by Evan Brown, Solutions Architect
































If you don’t have a second to spare, you soon will! On June 30, 2015 at precisely 23:59:60 UTC, the world will experience its 26th recorded leap second. It will be the third one experienced by Google. If you use Google Compute Engine, you need to be aware of how leap seconds can affect you.





What is a leap second?


It's sort of like a very small leap year. Generally, the Earth's rotation slows down over time, thus lengthening the day. In leap years, we add an extra day in February to sync the calendar year back up with the astronomical year. Similarly, an extra second is occasionally added to bring coordinated universal time in line with mean solar time.  Leap seconds in Unix time are commonly implemented by repeating the last second of the day.





When do leap seconds happen?


By convention, leap seconds happen at the end of either June or December. However, unlike leap years, leap seconds do not happen at regular intervals, because the Earth's rotation speed varies irregularly in response to climatic and geological events. For example, the 2011 earthquake in Japan shortened the day by 1.8 microseconds by speeding up the Earth's rotation.  





How does Google handle this event?


We have a clever way of handling leap seconds that we posted about back in 2011. Instead of repeating a second, we “smear” away the extra second. During a 20-hour “smear window” centered on the leap second, we slightly slow all our servers’ system clocks (by approximately 14 parts per million). At the end of the smear window, the entire leap second has been added and we are back in sync with civil time. (This method is a little simpler than the leap second handling we posted about in 2011; the outcome is the same: no time discontinuities.)
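The smear rate is easy to sanity-check: one extra second spread evenly over a 20-hour window works out to roughly 14 parts per million, matching the figure above.

```python
# Sanity-check the smear rate: one extra second spread over 20 hours.
smear_window_s = 20 * 3600      # 20-hour smear window, in seconds
extra_s = 1.0                   # the leap second being added

rate_ppm = extra_s / smear_window_s * 1e6
print(f"clock slowdown = {rate_ppm:.1f} parts per million")
```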





Why do we smear the extra second?


Any system that depends on careful sequencing of events could experience problems if it sees a repeated second. This problem is accentuated for multi-node distributed systems, because a one second jump dramatically magnifies time sync discrepancies between multiple nodes. Imagine two events going into a database under the same timestamp (or even worse, the later one being recorded under an earlier timestamp), when in reality one follows another. How would you know later what the real sequence was? Most software isn't written to explicitly handle leap seconds, including most of ours.  During the 2005 leap second, we noticed various problems like this with our internal systems. To avoid changing all time-using software to handle leaps correctly, we instead attempt to make leaps invisible by adding a little bit of the extra second to our servers' clocks over the course of a day, rather than all at once.





What services does this apply to on Google Cloud Platform?


Only Virtual Machines running on Google Compute Engine are affected by the time smear as they are the only entities that can manually sync time. All other services within Google Cloud Platform are unaffected as we take care of that for you.





How will I be affected?


All of our Compute Engine services will automatically receive this “smeared” time, so if you are using the default NTP service (metadata.google.internal) or the system clock, everything should be taken care of for you automatically (note that the default NTP service does not set the Leap Indicator bit). If, however, you are using an external time service, you may see a full-second “step”, or perhaps several small steps. We don’t know how external NTP services will handle the leap second, and thus cannot speculate on exactly how time will be kept in sync. If you use an external NTP service with your Compute Engine virtual machines, you should be prepared to understand how those time sources handle the leap second, and how that behavior might affect your applications and services. If possible, you should avoid using external NTP sources on Compute Engine during the leap event.





The worst possible configuration during a leap second is to use a mixture of non-smearing and smearing NTP servers (or servers that smear differently): behavior will be undefined, but probably bad.





If you run services on both Google Compute Engine and other providers that do not smear leap seconds, you should be aware that your services can see discrepancies in time during the leap second.





What is Google's NTP service?


From inside a virtual machine running on Google Compute Engine, you can use metadata.google.internal. You can also just use the system clock, which is automatically synced with the smeared leap second. Google does not offer an external NTP service that advertises smeared leap seconds.
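On a Linux instance running ntpd, pointing at this internal time source amounts to a single `server` line in the NTP configuration. This is a sketch; the exact file and syntax depend on your distribution and NTP daemon:

```
# /etc/ntp.conf on a Compute Engine instance: use only Google's internal
# (smeared) time source -- do not mix in external, non-smeared servers.
server metadata.google.internal iburst
```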





You can find documentation about configuring NTP on Compute Engine instances here. If you need any assistance, please visit the Help & Support center.




-Posted by Noah Maxwell & Michael Rothwell, Site Reliability Engineers




















Editor's Update February 22, 2016: The Click to Deploy solution for Crate is no longer available in Cloud Launcher.



Back in March, we announced the availability of Google Cloud Launcher where (at the time) you could launch more than 120 popular open source application packages that have been configured by Bitnami or Google Click to Deploy. Since then, we have received many customer requests for additional solutions. We heard you!





Today, less than three months after launch, we have added 25 new solutions to Cloud Launcher. Recent additions include Chef, OpenCart, Sharelock, Codiad, and SimpleInvoices, and new solutions are being added on an ongoing basis.





We are also announcing the addition of 14 new operating systems to Cloud Launcher, including Windows, Ubuntu, Red Hat, SUSE, Debian, and CentOS. Moreover, we’ve simplified the initial creation flow to make things even faster and simpler.








Figure 1 - The updated Cloud Launcher operating system section





To help users compare these solutions, we’ve updated the Cloud Launcher interface with detailed information on pricing, support (for OS), and free trial.








Figure 2 - The updated Cloud Launcher detailed solution interface





And finally, in line with our vision of providing customers with complete solutions that can be rapidly deployed, Google Cloud Monitoring is now integrated out of the box with 50 solutions. Built-in reports for components such as MySQL, Apache, Cassandra, Tomcat, PostgreSQL, and Redis provide DevOps an integrated view into their application.








Figure 3 - Google Cloud Monitoring Dashboard for Apache Web Server





You can get started with Cloud Launcher today to launch your favorite application packages on Google Cloud Platform in a matter of minutes. And do remember to give us feedback via the links in Cloud Launcher or join our mailing list for updates and discussions. Enjoy building!




- Posted by Ophir Kra-Oz, Group Product Manager

We know you have a choice of public cloud providers – and choosing the best fit for your application or workload can be a daunting task. Customers like Avaya, Snapchat, Ocado and Wix have selected Google Cloud Platform because of our innovation and proven performance, combined with flexible pricing models. We’ve recently made headlines for our latest product introductions like Google Cloud Storage Nearline and Google Cloud Bigtable, and today, we’re also raising the bar with our pricing options.



Compared to other public cloud providers, Google Cloud Platform is now 40% less expensive for many workloads. Starting today, we are reducing prices of all Google Compute Engine Instance types as well as introducing a new class of preemptible virtual machines that delivers short-term capacity for a very low, fixed cost. When combined with our automatic discounts, per-minute billing, no penalties for changing machine types, and no need to enter into long-term fixed-price commitments, it’s easy to see why we’re leading the industry in price/performance.




Price Reductions


Last year, we committed that Google Cloud Platform prices will follow Moore’s Law, and effective today we’re reducing prices of virtual machines by up to 30%.











Configuration    US Price Reduction

Standard         20%
High Memory      15%
High CPU         5%
Small            15%
Micro            30%









The price reductions in Europe and Asia are similar. Complete details on our compute pricing are available at our Compute Engine pricing page.




We have continued to lower our pricing since Google Compute Engine was launched in November of 2013; together, these price cuts have reduced VM prices by more than half.




Introducing Google Compute Engine Preemptible VMs


For some applications we can do even better: if your workload is flexible, our new Preemptible VMs will run your short-duration batch jobs 70% cheaper than regular VMs. Preemptible VMs are identical to regular VMs, except availability is subject to system supply and demand. Since we run Preemptible VMs on resources that would otherwise be idle, we can offer them at substantially reduced costs. Customers such as Descartes Labs have already found them to be a great option for workloads like Hadoop MapReduce, visual effects rendering, financial analytics, and other computationally expensive workloads.
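In the Compute Engine API, a preemptible instance is requested through the `scheduling` block of the instance resource. A minimal sketch follows; the instance name and machine type are placeholders, and the surrounding `instances.insert` call, disks, and network settings are omitted:

```python
# Sketch of the instance-resource body for a preemptible Compute Engine VM.
# Preemptible instances cannot automatically restart and must terminate
# (rather than live-migrate) on host maintenance.
instance_body = {
    "name": "batch-worker-1",                      # placeholder name
    "machineType": "zones/us-central1-a/machineTypes/n1-standard-1",
    "scheduling": {
        "preemptible": True,
        "automaticRestart": False,                 # required for preemptible VMs
        "onHostMaintenance": "TERMINATE",          # required for preemptible VMs
    },
    # disks, networkInterfaces, etc. omitted for brevity
}
print(instance_body["scheduling"]["preemptible"])  # True
```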



Importantly, unlike other clouds’ Spot Instances, the price of Preemptible VMs is fixed, making their costs predictable.










Regular n1-standard-1    Preemptible n1-standard-1    Savings

$0.050 /hour             $0.015 /hour                 70%







For further information about Preemptible VM pricing, please visit our website.




Google Cloud Platform costs 40% less for many workloads vs. other public cloud providers


Our continued price/performance leadership goes well beyond list prices. Our combination of sustained use discounting, no prepaid lock-in and per-minute billing offers users a structural price advantage which becomes apparent when we consider real-world applications. Consider a typical web application or mobile backend. Its development environment supports software builds and tests, presenting a bursty, daytime load on cloud computing resources. The production environment handles actual user traffic, with a diurnal cycle of demand, aggregate growth over time, and a larger overall footprint than the development environment. The developer environment would benefit from per-minute billing because it can be turned on and off more quickly and you only pay for what you use. The production environment would benefit from sustained use discounting, up to 30% additional discount with no upfront fee or commitment, because it always needs to be on.
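The "up to 30%" sustained use figure comes from the discount schedule published at the time, which billed each successive quarter of the month at a decreasing fraction of the base rate (100%, 80%, 60%, 40%). A small sketch of that arithmetic, under the assumption of that four-block schedule:

```python
# Blended price under the sustained use discount schedule: each successive
# 25% block of the month is billed at a lower fraction of the base rate.
BLOCK_RATES = [1.00, 0.80, 0.60, 0.40]   # rate for each 25% block of the month

def effective_rate(fraction_of_month):
    """Blended fraction of list price paid for running an instance for
    `fraction_of_month` (between 0 and 1) of the month."""
    billed = 0.0
    remaining = fraction_of_month
    for rate in BLOCK_RATES:
        block = min(remaining, 0.25)     # usage that falls in this block
        billed += block * rate
        remaining -= block
    return billed / fraction_of_month

print(f"full month: {1 - effective_rate(1.0):.0%} discount")   # 30% discount
print(f"half month: {1 - effective_rate(0.5):.0%} discount")   # 10% discount
```

A development environment that runs only part of the month lands in the cheaper end of its blocks automatically, with no upfront commitment, which is the point made above.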



Our customer-friendly billing, discounting, and lack of prepaid lock-in, combined with lower list prices, leads to a 40% lower price on Google Cloud Platform for many real-world workloads. Our TCO Tool lets you explore how different combinations of development and production instances, as well as environmental assumptions, change the total cost of a real-world application hosted in the cloud.



Many factors influence the total cost of a real-world application, including the likelihood of design changes, the rate of decrease of compute prices, and whether you’ve been locked into price contracts which are now above market rates, or on instances that don’t fit your current needs anymore. With Google Cloud Platform’s customer-friendly pricing model, you're not required to make a long-term commitment to a price, machine class, or region ahead of time.



This graphic illustrates how our lower list prices and customer-friendly pricing practices can combine to produce a 40% total savings.




Your exact savings depend on your specific application, and may be even greater than what is shown here. To see the impact of our customer-friendly pricing on your specific workload, explore our TCO Tool.



If you have specific pricing questions, please visit the updated pricing page on our website. To get started with testing your own workload, we’ve made it easy with our free trial program.



- Posted by Urs Hölzle, Senior Vice President, Technical Infrastructure