Google Cloud Platform Blog
Simple development of App Engine apps using Cloud SQL - Introducing Google Plugin for Eclipse 2.5
Monday, December 19, 2011
Since we added SQL support to App Engine in the form of Google Cloud SQL, the Google Plugin for Eclipse (GPE) team has been working hard on improving the developer experience for building App Engine apps that can use a Cloud SQL instance as the backing database. We are pleased to announce the availability of Google Plugin for Eclipse 2.5.
GPE 2.5 simplifies app development by eliminating the need for manual tasks like copying Cloud JDBC drivers, setting classpaths, typing in JDBC URLs or filling in JVM arguments for connecting to local or remote database instances.
GPE 2.5 provides support for:
Configuring Cloud SQL/MySQL instances
Auto-completion for JDBC URLs
Creating database connections in Eclipse database development perspective
OAuth 2.0 for authentication.
Configuring Cloud SQL/MySQL instances
App Engine provides a local development environment in which you can develop and test your application before deploying to App Engine. With GPE 2.5, you now have the ability to configure your local development server to use a local MySQL instance or a Cloud SQL instance for testing. When you choose to deploy your app, it will use the configured Cloud SQL instance for App Engine.
Auto-completion for JDBC URLs
GPE 2.5 supports auto-completion for JDBC URLs, and quick-fix suggestions for incorrect JDBC URLs.
Creating database connections in Eclipse database development perspective
The Eclipse database development perspective can be used to configure database connections, browse the schema and execute SQL statements on your database.
Using GPE 2.5, database connections are automatically configured in the Eclipse database development perspective for the Development SQL instance and the App Engine SQL instance.
You can also choose to manually create a new database connection for a Cloud SQL instance. In GPE 2.5, we have added a new connection profile for Cloud SQL.
OAuth 2.0 for authentication
GPE 2.5 now uses OAuth 2.0 (earlier versions used OAuth 1.0) to securely access Google services (including Cloud SQL) from GPE. OAuth 2.0 is the latest version of the OAuth protocol, focusing on simplicity of client development.
Can’t wait to get started? Download GPE here and write your first App Engine and Cloud SQL application using GPE by following the instructions here.
We hope GPE 2.5 will make cloud application development using App Engine and Cloud SQL a breeze. We always love to hear your feedback, and the GPE group is a great place to share your thoughts.
Posted on behalf of the Google Plugin for Eclipse Team
App Engine 1.6.1 Released
Tuesday, December 13, 2011
We have one more release this year to make our developers merry, and while some members of our team enjoy the summer sunshine down under, we’ll be taking a short winter break from releases. Don’t worry, we’ll be back to our normal schedule in January, but we couldn’t resist tempting you with some new features that will keep you up tinkering well past midnight on January 1st.
Platform Changes
Frontend Instance Classes - For applications that need more CPU and/or memory to serve requests, we’ve introduced two larger frontend instance classes. Before today, all apps were allocated a fixed instance size no matter what the app was computing in its requests. Now, apps that need more computing power can upgrade the size of their instances.
High Replication Datastore (HRD) Migration Tool Has Graduated - The HRD migration tool is now a fully supported feature. The tool allows you to easily migrate your data, limits the downtime required to complete the migration, and lets you choose its precise time. Every app can now start the new year off right, improving its uptime and reliability by migrating to HRD!
New APIs
Conversion API (Experimental) - Converting between formats within your application can be a pain, but with the experimental Conversion API you can now easily convert between PDF, HTML, text and images. Generating PDF invoices from HTML, displaying PDF menus as HTML or extracting text from images using OCR is now as simple as an API call.
Logs Reader API (Experimental) - Want to summarize latency by handler? Summarize request statistics by user? The new Logs Reader API allows you to programmatically access your logs to build reports, gather statistics, and analyze requests to your heart’s content.
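As an illustration of the kind of report this enables, here is a rough sketch of summarizing average latency per handler in Python. It is not from the release notes; logservice.fetch is the documented entry point, but the RequestLog attribute names and units used here (resource, latency in seconds) are assumptions about the experimental API.

import time
from collections import defaultdict
from google.appengine.api.logservice import logservice

def latency_by_handler(hours=1):
    # Average request latency per URL path over the last `hours` hours.
    end = time.time()
    start = end - hours * 3600
    totals = defaultdict(float)
    counts = defaultdict(int)
    for log in logservice.fetch(start_time=start, end_time=end):
        totals[log.resource] += log.latency   # latency assumed to be in seconds
        counts[log.resource] += 1
    averages = {}
    for path in totals:
        averages[path] = totals[path] / counts[path]
    return averages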
Read the full release notes for Java and Python to get all the details on 1.6.1. We always love to hear what you think, so keep the feedback on our groups coming. App Engine releases will resume again with our regular schedule around the end of January.
Posted by The App Engine Team
Whentotweet.com - Twitter analytics for the masses
Monday, December 5, 2011
Our post today comes from Stefan and Niklas of Whentotweet.com, a nifty site that recommends the best time of day to tweet based on your followers’ habits.
Twitter handles an amazing number of Tweets - over 200 million tweets are sent per day.
We saw that many Twitter users were tweeting interesting content but much of it was lost in the constant stream of tweets.
Whentotweet.com is born
While there were many tools for corporate Twitter users that performed deep analytics and provided insight into their tweets, there were none that answered the most basic question: what time of the day are my followers actually using Twitter?
And so the idea behind Whentotweet was born. In its current form, Whentotweet analyzes when your followers tweet and gives you a personalized recommendation of the best time of day to tweet to reach as many as possible.
Given the massive amount of data we needed to analyze, we knew it would be a huge engineering challenge to build what we wanted using the tools we had used previously. We also wanted to make sure we could offer at least a basic product for free. Not only did we need to process massive amounts of data - we also needed a way to do it without a second mortgage on our houses!
The Technology Used
As we went over the alternatives we started to sketch different ways of hosting our application. We had previous experience building web sites and knew that traditional cloud hosting would be expensive and difficult to manage for the kind of computing that we needed. After some quick back-of-the-envelope calculations it seemed clear that Google App Engine would give us both the kind of pricing we needed and a way to scale. We decided to write a quick test application to test our assumptions.
The test application blew our minds. Apart from proving our initial assumptions around pricing and scale we started appreciating the quick deploys. On previous projects we were used to one deploy per month. Almost immediately we shifted our schedule to one or sometimes several deploys per day to push new code to customers.
The main APIs that Whentotweet relies on are Google App Engine's task queues and Datastore. Whenever a new user requests a report it is added as a task. A typical report requires a huge number of interactions with external sites. By breaking down each external interaction into separate tasks in different queues it became easy to make sure we kept a steady rate of API calls to external sites without risking that a huge influx of users would break our API limits.
The initial task then spawns new tasks until finally one of the tasks decides that the report is complete and tweets a summary of the result and a link to a more detailed report. Whentotweet uses a "fail fast" technique so whenever any request fails, internal or external, the task terminates and puts itself back on the queue.
The Datastore saves a finished or ongoing analysis. Sometimes a single analysis will be updated several times a second by tasks as they finish and store their results.
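To make the pattern described above concrete, here is a rough sketch of one external interaction broken out into its own task; this is not Whentotweet’s actual code, and the queue name, URLs and external endpoint are hypothetical.

from google.appengine.api import taskqueue, urlfetch
from google.appengine.ext import webapp

EXTERNAL_API = 'https://api.example.com/followers'  # placeholder endpoint

def schedule_follower_fetch(report_id, page):
    # One task per external interaction, on a queue whose rate is tuned to
    # stay under the external API's rate limit.
    taskqueue.add(queue_name='twitter-api',
                  url='/tasks/fetch_followers',
                  params={'report': report_id, 'page': page})

class FetchFollowersHandler(webapp.RequestHandler):
    def post(self):
        report_id = self.request.get('report')
        page = self.request.get('page')
        result = urlfetch.fetch('%s?report=%s&page=%s' % (EXTERNAL_API, report_id, page))
        if result.status_code != 200:
            # "Fail fast": raising aborts the task, and App Engine puts it back
            # on the queue to be retried later.
            raise Exception('external call failed; task will be retried')
        # ...store the partial result and enqueue any follow-up tasks here.

Because push tasks are retried automatically when the handler fails, the fail-fast style described above falls out almost for free.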
The Result
After a few weeks of intense coding, we were ready to test our code on a small Twitter account with fewer than three hundred followers. The results came back in just a few minutes.
After verifying that everything had actually worked as well as we thought, we decided to try another account. This time one of the largest Twitter accounts on the planet: @techcrunch. Handling a Twitter account with over a million followers took the application one week. But after the analysis started, Whentotweet would quietly work in the background without us having to lift a finger.
Whentotweet got off to a better start than we imagined. During the initial launch thousands of people tested it on their Twitter accounts.
After a while blog posts appeared, recommending Whentotweet as an invaluable Twitter tool. Each post would generate a sudden huge spike in traffic. Sometimes, a blog owner would mail us and ask if we were ready for the sudden increase in traffic this would bring. But Whentotweet was built to scale and even massive sites such as Mashable.com didn't slow it down. The most amazing thing is that we didn't need to write a single extra line of code to handle these massive variations in load. Instead, as soon as we wrapped our head around the tools in the App Engine toolbox we knew that Whentotweet would easily scale. App Engine forced us to think outside the box and avoid the fallacies of traditional hosting that create bottlenecks.
Currently, over 38,000 people have tried Whentotweet and we see from the user feedback that they love it. Give it a try at www.whentotweet.com.
- Niklas Agevik (@niklas_a) and Stefan Ålund (@stefan_alund) of Whentotweet.com
Scaling with the Kindle Fire
Tuesday, November 29, 2011
Today’s blog post comes to us from Greg Bayer of Pulse, a popular news reading application for iPhone, iPad and Android devices. Pulse has used Google App Engine as a core part of their infrastructure for over a year, and they recently celebrated a significant launch. We hope you find their experiences and tips on scaling useful.
As part of the much-anticipated Kindle Fire launch, Pulse was announced as one of the only preloaded apps. When you first un-box the Fire, Pulse will be there waiting for you on the home row, next to Facebook and IMDB!
Scale
The Kindle Fire is projected to sell over five million units this quarter alone. This means that those of us who work on backend infrastructure at Pulse have had to prepare for nearly doubling our user base in a very short period. We also need to be ready for spikes in load due to press events and the holiday season.
Architecture
As I’ve discussed previously on the Pulse Engineering Blog, Pulse’s infrastructure has been designed with scalability in mind from the beginning. We’ve built our web site and client APIs on top of Google App Engine, which has allowed us to grow steadily from tens to many thousands of requests per second, without needing to re-architect our systems.
While restrictive in some ways, we’ve found App Engine’s frontend serving instances (running Python in our case) to be extremely scalable, with minimal operational support from our team. We’ve also found the datastore, memcache, and task queue facilities to be equally scalable.
Pulse’s backend infrastructure provides many critical services to our native applications and web site. For example, we cache and serve optimized feed and image data for each source in our catalog. This allows us to minimize latency and data transfer, and is especially important to providing an exceptional user experience on limited mobile connections. Providing this service for millions of users requires us to serve hundreds of millions of requests per day. As with any well designed App Engine app, the vast majority of these requests are served out of memcache and never hit the datastore. Another useful technique we use is to set public cache control headers wherever possible, to allow Google’s edge cache (shown as cached requests on the graph below) and ISP / mobile carrier caches to serve unchanged content directly to users.
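A minimal sketch of the cache-control technique, assuming a webapp handler; the handler name, payload and one-hour max-age are illustrative rather than Pulse’s actual values.

from google.appengine.ext import webapp

class FeedHandler(webapp.RequestHandler):
    def get(self, source_id):
        feed_json = '{"items": []}'  # in practice, loaded from memcache or the datastore
        # Mark the response as publicly cacheable for an hour so Google's edge
        # cache and ISP / carrier caches can answer repeat requests directly.
        self.response.headers['Cache-Control'] = 'public, max-age=3600'
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(feed_json)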
Costs
Based on App Engine’s projected billing statements leading up to the recent pricing changes, we were concerned that our costs might increase significantly. To prepare for these changes and the expected additional load from Kindle Fire users, we invested some time in diagnosing and reducing these costs. In most cases, the increases turned out to be an indicator of inefficiencies in our code and/or in the App Engine scheduler. With a little optimization, we have reduced these costs dramatically.
The new tuning sliders for the scheduler make it possible to rein in overly aggressive instance allocation. In the old pricing structure, idle instance time wasn’t charged for at all, so these inefficiencies were usually ignored. Now App Engine charges for all instance time by default. However, any time App Engine runs more idle instances than you’ve allowed, those hours are free. This acts as a hint to the scheduler, helping it reduce unneeded idle instances. By doing some testing to find the optimal cost vs spike latency tolerance and setting the sliders to those levels, we were able to reduce our frontend instance costs to near original levels. Our heavy usage of memcache (which is still free!) also helps keep our instance hours down.
Since datastore operations used to be charged under the umbrella of CPU hours, it was difficult to know the cost of these operations under the old pricing structure. This meant it was easy to miss application inefficiencies, especially for write-heavy workloads where additional indexes can have a multiplicative effect on costs. In our case, the new datastore write operations metric led us to notice some inefficiencies in our design and a tendency to overuse indexes. We are now working to minimize the number of indexes our queries rely on, and this has started to reduce our write costs.
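As one way to picture the index trade-off, here is a minimal sketch (not Pulse’s actual model) of a db model where properties that are never filtered or sorted on are marked indexed=False, so each put() skips those per-index write operations.

from google.appengine.ext import db

class CachedFeed(db.Model):
    source_url = db.StringProperty()                  # queried on, keep indexed
    fetched_at = db.DateTimeProperty(auto_now=True)   # queried on, keep indexed
    payload = db.TextProperty()                       # Text properties are never indexed
    etag = db.StringProperty(indexed=False)           # metadata only, no index writes
    content_type = db.StringProperty(indexed=False)   # metadata only, no index writes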
Preparing for the Kindle Fire Launch
We took a few additional steps to prepare for the expected load increase and spikes associated with the Fire’s launch. First, we contacted App Engine’s support team to warn them of the expected increase. This is recommended for any app at or near 10,000 requests per second (to make sure your application is correctly provisioned). We also signed up for a Premier account, which gives us additional support and simpler billing.
Architecturally, we decided to split our load across three primary applications, each serving different use cases. While this makes it harder to access data across these applications, those same boundaries serve to isolate potential load-related problems and make tuning simpler. In our case, we were able to divide certain parts of our infrastructure, where cross application data access was less important and load would be significant. Until App Engine provides more visibility into and control of memcache eviction policies, this approach also helps prevent lower priority data from evicting critical data.
I’m hopeful that in the near future such division of services will not be required. Individually tunable load isolation zones and memcache controls would certainly make it a lot more appealing to have everything in a single application. Until then, this technique works quite well, and helps to simplify how we think about scaling.
To learn more about Pulse, check out our website! If you have comments or questions about this post or just want to reach out directly, you can find me @gregbayer.
New Datastore client library for Python ready for a test drive
Wednesday, November 16, 2011
Last week we announced that App Engine has left preview and is now an officially supported product here at Google. And while the release (and the announcement) was chock-full of great features, one of the features that we’d like to call specific attention to is the new Datastore client library for Python (a.k.a. “NDB”).
NDB has been under development for some time, and this release marks its availability to a larger audience as an experimental feature. Some of the benefits of this new library include:
The StructuredProperty class, which allows entities to have nested structure
Integrated two-level caching, using both memcache and a per-request in-process cache
High-level asynchronous API using Python generators as coroutines (PEP 342)
New, cleaner implementations of Key, Model, Property and Query classes
The version of NDB contained in the 1.6.0 runtime and SDK corresponds to NDB 0.9.1, which is currently the latest NDB release.
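To give a flavor of the first and third items above, here is a minimal sketch of a StructuredProperty model and a generator-based tasklet. The model names are made up, and the import path is an assumption; the experimental package may be exposed differently in the 1.6.0 SDK.

from google.appengine.ext import ndb

class Address(ndb.Model):
    street = ndb.StringProperty()
    city = ndb.StringProperty()

class Contact(ndb.Model):
    name = ndb.StringProperty()
    # Nested structure stored inside the Contact entity itself.
    addresses = ndb.StructuredProperty(Address, repeated=True)

@ndb.tasklet
def get_city(contact_key):
    # The yield suspends this coroutine until the async get completes,
    # letting other tasklets make progress in the meantime.
    contact = yield contact_key.get_async()
    raise ndb.Return(contact.addresses[0].city if contact.addresses else None)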
Given that this feature is still experimental, it is subject to change, but that’s exactly why we encourage you to give it a test drive and send us any feedback that you might have.
The NDB project hosted on Google Code is the best place to send this feedback. Happy coding!
Posted by Guido van Rossum, Software Engineer on the App Engine Team
Google BigQuery Service: Big data analytics at Google speed
Monday, November 14, 2011
Our post today, cross-posted with the Google Enterprise Blog, comes from one of our sister projects, BigQuery. We know that many of you are interested in processing large volumes of data and we encourage you to try it out.
Rapidly crunching terabytes of big data can lead to better business decisions, but this has traditionally required tremendous IT investments. Imagine a large online retailer that wants to provide better product recommendations by analyzing website usage and purchase patterns from millions of website visits. Or consider a car manufacturer that wants to maximize its advertising impact by learning how its last global campaign performed across billions of multimedia impressions. Fortune 500 companies struggle to unlock the potential of data, so it’s no surprise that it’s been even harder for smaller businesses.
We developed Google BigQuery Service for large-scale internal data analytics. At Google I/O last year, we opened a preview of the service to a limited number of enterprises and developers. Today we’re releasing some big improvements, and putting one of Google’s most powerful data analysis systems into the hands of more companies of all sizes.
We’ve added a graphical user interface for analysts and developers to rapidly explore massive data through a web application.
We’ve made big improvements for customers accessing the service programmatically through the API. The new REST API lets you run multiple jobs in the background and manage tables and permissions with more granularity.
Whether you use the BigQuery web application or API, you can now write even more powerful queries with JOIN statements. This lets you run queries across multiple data tables, linked by data that tables have in common.
It’s also now easy to manage, secure, and share access to your data tables in BigQuery, and export query results to the desktop or to Google Cloud Storage.
Michael J. Franklin, Professor of Computer Science at UC Berkeley, remarked that BigQuery (internally known as Dremel) leverages “thousands of machines to process data at a scale that is simply jaw-dropping given the current state of the art.” We’re looking forward to helping businesses innovate faster by harnessing their own large data sets. BigQuery is available free of charge for now, and we’ll let customers know at least 30 days before the free period ends. We’re bringing on a new batch of pilot customers, so let us know if your business wants to test-drive BigQuery Service.
Posted by Ju-Kay Kwek, Product Manager
App Engine 1.6.0 Out of Preview Release
Monday, November 7, 2011
Three and a half years after App Engine’s first Campfire One, App Engine has graduated from Preview and is now a fully supported Google product. We started out with the simple philosophy that App Engine should be ‘easy to use, easy to scale, and free to get started.’ And with 100 billion+ monthly hits, 300,000+ active apps, and 100,000+ developers using our product every month, it’s clear that this philosophy resonates. Thanks to your support, Google is making a long term investment in App Engine!
When we announced our plans to leave preview earlier this year, we made a commitment to improving the service by adding support for Python 2.7, Premier Accounts and Backends, as well as several changes launching today:
Pricing: The new pricing structure announced in May (and updated based on feedback from the community) will be reflected in your bill starting on Nov 7th, as previously announced.
Terms of Service: We have a new business-friendly terms of service and acceptable use policy.
Service Level Agreement: All paid applications on the High Replication Datastore are covered by our 99.95% SLA.
We are also holding a series of App Engine office hours via Google+ this week for any users who have questions about how these changes impact their applications. The list of times can be found on the Google Developers events page, with links to join the hangout during the scheduled times. Also, please don’t hesitate to contact us at appengine_updated_pricing@google.com with any questions or concerns.
In addition to leaving Preview, we have several additional changes to announce today.
Production Changes
For billing enabled apps, we are offering two more scheduler controls and some additional changes:
Min Idle Instances: You can now adjust the minimum number of Idle Instances for your application, from 1 to 100. Users who had previously signed up for “Always On” can now set the number of idle instances for their applications using this setting.
Max Pending Latency: For applications that care about user-facing latency, this slider allows you to set a limit on the amount of time a request spends in the pending queue before a new instance is started.
Blobstore API: You can now use the Blobstore API without signing up for billing.
Datastore Changes
High Replication Datastore Migration Tool: We are releasing an experimental tool that allows you to easily migrate your data from Master/Slave to High Replication Datastore, and seamlessly switch your application’s serving to the new HRD application.
Query Planning Improvements: We’ve published an article that details recent improvements to our query planner that eliminate the need for exploding indexes.
Python
MapReduce: We are releasing the full MapReduce framework for Python as an experimental feature. The framework includes the Map, Shuffle, and Reduce phases.
Python 2.7 in the SDK: The SDK now supports the Python 2.7 runtime, so you can test out your changes before uploading them to production.
Java™
Memcache API Improvements: The Memcache API for Java now supports asynchronous calls. Additionally, putIfUntouched() and getIdentifiable() now support batch operations.
Capability Testing: We’ve added the ability to simulate the capability state of local API implementations to test your application’s behavior if a service is unavailable.
Datastore Callbacks: You can now specify actions to perform before or after a put() or delete() call.
The full list of changes with this release can be found in the release notes (Python, Java). We’d love to hear your feedback about this release in the groups. And we’d like to thank you all for investing in our platform for the last three years. We’re excited for this milestone in App Engine history, and we look forward to what the future will bring.
Posted by The App Engine Team
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
ProdEagle - Analyzing your App Engine apps in real-time
Thursday, October 27, 2011
Today’s post comes to us from Andrin von Rechenberg of MiuMeet, who has developed ProdEagle, an easy-to-use analysis framework for App Engine apps. ProdEagle enables you to easily count and visualize events to help you better understand both the performance and usage of your site.
ProdEagle allows you to monitor your system, lets you analyse in real-time what is going on, and can alert you if something goes wrong.
The story
We are a small start-up that created an incredibly fast growing social dating platform called MiuMeet. We used App Engine from the start, and within 6 months we had over 1 million registered users - the system scaled beautifully. One of the world’s leading online dating companies invested in our start-up and gave us a very good piece of advice: before you build new features, you need to understand what’s actually going on in your system. Of course we thought we knew exactly what was going on in our system. We had the App Engine Dashboard, we had Google Analytics, and we ran a couple of daily MapReduces to collect some statistics. But honestly, compared to today, we didn’t have any idea what was going on.
Understanding your system
From our MapReduces we knew that most users who accessed our dating platform used our Android app and only very few users used our iPhone app. We weren’t exactly sure what the reason for that was, but we wanted to improve the experience for our iPhone users. So we had to start measuring what was different for iPhone vs Android users.
But how do you do this? You start recording everything that happens in your system. And by “everything” we don’t mean the status code of an HTTP response but human-understandable events, like “an Android user starts a conversation”, “the average length of a conversation”, “how often iPhone users log in” and so on. We realized that iPhone users are much less active and reply less often - not because they don’t want to, but because they don’t know immediately that they got a new message. Android users are informed by push notifications, iPhone users via email. We did not expect that this would make such a big difference, but now it’s pretty obvious what feature we should build next to improve the iPhone experience. Our conclusions are based on hard numbers and not guesses.
We used ProdEagle to collect and analyse the data that allowed us to realize this. ProdEagle can count, display and compare events in real-time and can even alert you if something is going wrong.
How to measure events
Let’s start with a simple Python example. We would like to record the devices from which messages are sent in our system. All we have to do is add the import and the final counter call (shown in green in the original post) to our sendMessage function:

import prodeagle

def sendMessage(sender, to, text, device):
    # Note: "from" is a reserved word in Python, so the sender argument is named "sender".
    mailbox = getMailbox(to)
    mailbox.addMessage(sender, text)
    prodeagle.counter.incr("Message." + device)
This one line of code and a few clicks in the ProdEagle dashboard allow us to create the following graphs:
*The numbers in these graphs are just examples.
Analysing your system in “real-time”
With ProdEagle you can count whatever you want. The advantage over traditional analytics systems is that you don’t have to wait for a day for the data to appear. Data appears within 1 minute in your dashboard. This means that you can monitor your system almost in real-time and set up alerts if something goes wrong.
In the following example we measure how long it takes to execute a datastore query:
import time
import prodeagle

def searchPeople(query):
    timestamp = time.time()
    query.execute()
    prodeagle.counter.incr("Search.People.Count")
    prodeagle.counter.incr("Search.People.Latency",
                           time.time() - timestamp)
In the ProdEagle Dashboard we can specify that “Search.People.Latency” should be divided by “Search.People.Count” and plot it as a graph. Additionally we can set up an email alert that fires if the latency was more than 4000ms for 5 minutes:
If the latency is above all the red dots you get an email alert.
Being able to measure latency in real-time is particularly useful when you are trying to optimize your system and want to try out different strategies to answer queries.
The magic behind ProdEagle
Whenever you call prodeagle.counter.incr(counter_name), the ProdEagle library increments a counter in memcache - this shouldn’t add any noticeable latency to your app, unless it is the first call made by a serving instance. Every few minutes, the ProdEagle service that we run collects all these memcache counters and persists them, creates the graphs and alerts you if something is wrong.
Obviously, the system isn’t robust if your memcache gets flushed. To address this, ProdEagle uses 1024 dummy counters to check when your memcache gets flushed and to approximate the inaccuracy of the data in your graphs. This is based on the assumption that elements with the same size in memcache are freed in a “least-recently-used” fashion and that the 1024 dummy counters hit all memcache shards. We have used ProdEagle for MiuMeet for a couple of months now and our approximated accuracy was never below 99.8%.
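For readers curious how such counters can be kept cheap, here is a rough sketch of the memcache-counter idea described above; it is not ProdEagle’s actual implementation, and the key prefix and collector function are illustrative.

from google.appengine.api import memcache

def incr_counter(name, delta=1):
    # A single atomic memcache operation; creates the counter at 0 on first use.
    memcache.incr('counter:' + name, delta=delta, initial_value=0)

def collect_counter(name):
    # Called periodically by a collector job: read the current value and reset
    # the counter for the next window. (A production collector would also have
    # to handle the race between get and set, and detect cache flushes.)
    value = memcache.get('counter:' + name) or 0
    memcache.set('counter:' + name, 0)
    return value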
I like! How can I get it?
ProdEagle is currently completely free of charge. You can download the Python client libraries and sign up for your dashboard on www.prodeagle.com.
App Engine SSL for Custom Domains in Testing
Wednesday, October 19, 2011
The long-awaited SSL for Custom Domains feature is entering testing, and we are now looking for trusted testers. If you are interested in signing up to test this feature, please fill in this form.
We will be offering two types of SSL service: Server Name Indication (SNI) and Virtual IP (VIP). SNI will be significantly less expensive than VIP when this service is launched; however, unlike VIP, it does not work in all browsers that support SSL. VIP is a premium service with a dedicated IP and full browser support. Both VIP and SNI support wildcard certificates and certificates with alternate names.
We look forward to making this widely available as soon as possible, and as always we welcome your feedback in the group.
Posted by The App Engine Team
App Engine 1.5.5 SDK Release
Tuesday, October 11, 2011
2011 has seen some exciting releases for App Engine. As the days get shorter, the weather gets colder, and all that Halloween candy starts tempting everyone in the grocery store, we’ve been hard at work on our latest action-packed release.
Premier Accounts
We recognize that when you’re choosing a platform for your most critical business applications, uptime guarantees, easy management and paid support are often just as important as product features. So today we’re launching Google App Engine premier accounts. For $500 per month (not including the cost to provision internet services), you’ll receive:
Premium support (see the Technical Support Services Guidelines for details).
A 99.95% uptime Service Level Agreement (see the draft agreement; the final agreement will be in the signed offline agreement).
The ability to create an unlimited number of apps on your premier account domain.
No minimum monthly fees per app. Pay only for the resources you use.
Monthly billing via invoice.
To sign up for a premier account, please contact our sales team at appengine_premier_requests@google.com.
Python 2.7
PIL? NumPy? Concurrent requests? Python 2.7 has it all, and today we’re opening up Python 2.7 as an experimental release. We’ve put together a list of all the known differences between the current 2.5 runtime and the new runtime.
Overall Changes
We know that bumping up against hard limits can be frustrating, and we’ve talked all year about our continued push to lift our system limits. With this release we are raising several of these:
Request Duration: The frontend request deadline has been increased from 30 seconds to 60 seconds. We’ve also increased the maximum URLFetch deadline to match, from 10 seconds to 60 seconds.
File limits: We’ve increased the number of files you can upload with your application from 3,000 to 10,000 files, and the file size limit has also been increased from 10MB to 32MB.
API Limits: Post payloads for URLFetches are now capped at 5MB instead of 1MB.
We’re also announcing several limited preview features and trusted tester programs:
Cloud SQL Preview: We announced last week that we are offering a preview of SQL support in App Engine. Give it a try and let us know what you think.
Full-text Search: We are looking for early trusted testers for our long anticipated Full-Text Search API. Please fill out this form if you’re interested in trying it out.
Conversion API: Ever wanted to convert from text to PDF in your app? Then consider signing up as a trusted tester for the Conversion API.
Datastore
Cross Group (XG) Transactions: For those who need transactional writes to entities in multiple entity groups (and that's everyone, right?), XG Transactions are just the thing. This feature uses two-phase commit to make cross-group writes atomic, just like single-group writes.
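For illustration, here is a minimal sketch of what an XG transaction looks like from Python; the Account model and transfer function are hypothetical, not part of the release.

from google.appengine.ext import db

class Account(db.Model):
    balance = db.IntegerProperty(default=0)

def transfer(from_key, to_key, amount):
    def txn():
        # Two entities in two different entity groups, updated atomically.
        src = db.get(from_key)
        dst = db.get(to_key)
        src.balance -= amount
        dst.balance += amount
        db.put([src, dst])
    xg_options = db.create_transaction_options(xg=True)
    db.run_in_transaction_options(xg_options, txn)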
Platform Improvements
Experimental Google Cloud Storage Integration: As Google Cloud Storage, formerly Google Storage for Developers, graduates from labs, we are improving our integration by adding access via the files API.
Prediction API: Another of our friends graduating from labs is the Prediction API. Check out some of the App Engine examples in their getting started guide.
Of course, these are just the high level changes. This release is packed full of features and bug fixes, and as always, we welcome your feedback in the group.
Posted by The App Engine Team
Google Cloud SQL: Your database in the cloud
Thursday, October 6, 2011
Cross-posted from the Google Code Blog
One of App Engine’s most requested features has been a simple way to develop traditional database-driven applications. In response to your feedback, we’re happy to announce the limited preview of Google Cloud SQL.
You can now choose to power your App Engine applications with a familiar relational database in a fully-managed cloud environment. This allows you to focus on developing your applications and services, free from the chores of managing, maintaining and administering relational databases.
Google Cloud SQL brings many benefits to the App Engine community:
No maintenance or administration - we manage the database for you.
High reliability and availability - your data is replicated synchronously to multiple data centers. Machine, rack and data center failures are handled automatically to minimize end-user impact.
Familiar MySQL database environment with JDBC support (for Java-based App Engine applications) and DB-API support (for Python-based App Engine applications).
Comprehensive user interface for administering databases.
Simple and powerful integration with Google App Engine.
The service includes database import and export functionality, so you can move your existing MySQL databases to the cloud and use them with App Engine.
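As a taste of the Python side, here is a minimal sketch of what DB-API access from an App Engine app could look like; the instance name, database and table are placeholders, and the rdbms module shown is our assumption about the preview API rather than something confirmed in this post.

from google.appengine.api import rdbms

def list_guests():
    conn = rdbms.connect(instance='my-instance', database='guestbook')
    try:
        cursor = conn.cursor()
        cursor.execute('SELECT name, created FROM entries ORDER BY created DESC')
        return cursor.fetchall()
    finally:
        conn.close()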
Cloud SQL is available free of charge for now, and we will publish pricing at least 30 days before charging for it. The service will continue to evolve as we work out the kinks during the preview, but let us know if you’d like to take it for a spin.
Posted by Navneet Joneja, Product Manager for Google Cloud SQL
Project WOW
Monday, September 26, 2011
Today's post is contributed by Edward Hartwell Goose of PA Consulting, who is working on an app for the UK’s Met Office to report everyone's favorite bit of small talk, the weather. We hope you find the discussion of his team's experience using App Engine illuminating.
The UK’s Met Office is one of the world’s leading organisations in weather forecasting, providing reports throughout the day for the UK and the rest of the world. The weather information they provide is consumed by a variety of industries from shipping to aircraft, and powers some of the UK’s leading media organisations, such as the BBC.
Although the Met Office is the biggest provider of weather data, they aren’t the only ones collecting information. Thousands of enthusiasts worldwide collect their own weather data, from a wide variety of weather stations - either simple temperature sensors or highly sophisticated stations that rival the Met Office’s own equipment. The question of course is: how do you harness the power of this crowd?
Enter the Weather Observations Website
The Met Office and our team from PA Consulting worked together to answer this question late last year. The end result was The Weather Observations Website, or “WOW” for short. In the 3 months since launch on the 1st June 2011, WOW has recorded 5.5 million weather reports from countries throughout the world. Furthermore, we can retrieve current reports in sub-second times, providing a real-time map of worldwide weather. We haven’t got the whole globe covered just yet and the UK carries the most sites, but most countries in Western Europe are reporting. We also have reports from a medley of countries throughout the world, from Mauritius and Brazil to as far away as New Zealand. We even have one site reporting at regular intervals in Oman (it’s hot!).
Better yet, as a development team of 2, since launch, we’ve spent almost no time at all doing anything but casual monitoring of WOW. No one carries a pager, and the one time we did have problems (we underestimated demand, and our quota ran out), I was able to upgrade the quota in a minute or so. And I did it from my sofa. On my phone.
How good is that?
WOW - Showing Live Temperature Data Across Europe
Lessons Learnt Building for App Engine
We learnt a lot building WOW. A huge amount in fact. And we’d love to share our insights with you. We’d also love to tell you what we love about App Engine - and why you should use it too. And so you know we’re honest - we’ll tell you what we don’t like too!
Firstly - the good stuff. We think App Engine is a fantastic tool for prototyping and any team working in an agile environment. We’re big fans of SCRUM at PA Consulting, and App Engine was a dream to work with. Compared to some of our colleagues working with difficult build procedures and environments, our full release procedure never took more than 5 minutes. Better yet, App Engine’s deployment tools all hook up with ANT and CruiseControl (our continuous build system), allowing us to run unit tests and deploy new code on every check-in to our code repository. This allowed us as developers to get on with what we do best: develop!
The APIs are great too. The documentation is fantastic and is regularly updated. We use all but the Channel API at the moment (sadly, it uses Google Talk Infrastructure behind the scenes and this gets blocked by some corporate environments). If I could offer any advice to a budding App Engine developer it would be to thoroughly read the documentation, and place the following three questions and their solutions at the forefront of everything you do:
1. Can I do it in the background? - Task Queue API
2. Can I use the low-level API? - Datastore API
3. Can I cache the result? - Memcache API
These three principles have given WOW the performance it has, and will allow it to scale effectively over the coming years. We’ve got some big changes coming over the next couple of months too that should provide even higher performance as well as exciting new features.
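As a rough sketch of how those three questions fit together (this is illustrative, not WOW’s actual code; the model, key names and task URL are hypothetical): answer from memcache when possible, fall back to the datastore, and push expensive work onto the task queue.

from google.appengine.api import memcache, taskqueue
from google.appengine.ext import db

class SiteReport(db.Model):
    rendered = db.TextProperty()

def get_report(site_id):
    cache_key = 'report:%s' % site_id
    report = memcache.get(cache_key)               # 3. Can I cache the result?
    if report is not None:
        return report
    entity = SiteReport.get_by_key_name(site_id)   # 2. Direct datastore access
    if entity is not None:
        memcache.set(cache_key, entity.rendered, time=60)
        return entity.rendered
    # 1. Can I do it in the background? Build the report via the task queue
    # and return a placeholder for now.
    taskqueue.add(url='/tasks/build_report', params={'site': site_id})
    return None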
Future Developments
So, how about improvements App Engine could make? There are definitely a few, although how important they are will depend on the problems you’re trying to solve.
To begin with, improvements to some of the non technical elements are needed before App Engine becomes truly mainstream. Needing a credit card to pay is a showstopper for some organisations. Backup and restore is also missing, unless you implement it yourself. This is perfectly possible, but can add significant man days to your development effort if your data structure is complex and fast changing.
We also struggled with how to estimate quota to begin with. One of the brilliant features of App Engine is how easy it is to spool up (and down) new instances to deal with demand. Unfortunately, this also means it can be quite easy to accidentally spool up too many instances and burn quota quickly. Although this has never affected the actual data, it can cause an unpleasant spike in the amount of dollars spent. We also had a similar problem with a MapReduce job getting stuck overnight that caused a scare the next morning. Hopefully the new monitoring API should provide a bit more visibility of these issues, as well as automatic email notifications to help catch them.
Aside from that, other features will probably depend on the application you’re trying to build. Built-in support for geo-queries would be invaluable for WOW. Currently we use an external library, but this adds some extra overhead to our development. Another common feature request is full-text search, which is essential for projects dealing with large text corpora. Both of these features would allow us to provide better search facilities for our users - for example, search by site name or geographic location. These queries can be implemented in App Engine as it is now, but achieving optimal performance and optimal cost are difficult problems that we struggle to solve ourselves.
Final Thoughts
Overall, we’re really impressed by App Engine. The App Engine team regularly releases new versions, and although it does have limitations it has allowed us to concentrate on what really matters to us - the weather. We know WOW will scale without any problems, and we don’t have to worry about any of the hardware configuration or system administration that can easily consume time. Our small team of developers spends all of their time understanding the business problems and improving WOW.
We’re really looking forward to taking WOW forward in the future, and we hope you can join us: http://wow.metoffice.gov.uk.
Edward Hartwell Goose (@edhgoose)
Developer, PA Consulting