Google Cloud Platform Blog
App Engine 1.6.2 Released
Tuesday, January 31, 2012
Some of you may think of dragons as ferocious, treasure-hoarding, fire-breathing monsters. But the App Engine team is embracing the dragon as a symbol of fortune and good luck, and we are excited to announce our first release in the Year of the Dragon.
Experimental Datastore Backup/Restore
Using the Datastore Admin functionality in the Admin Console, you can now use the experimental Datastore Backup/Restore tool to back up your Datastore to Blobstore. You can also select a backup to restore from. The Datastore Backup/Restore feature runs as a MapReduce within your application and counts against your Instance, Datastore Ops, and Storage quotas.
Django® + Cloud SQL
For Python fans of Google’s Cloud SQL (currently available in limited preview), the long-awaited out-of-the-box support for the Django framework has arrived and is now available as an experimental feature. Now you can easily use Cloud SQL within the Django framework as you would any other SQL database.
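For illustration, a Django settings fragment for Cloud SQL might look roughly like the following. The engine path matches the SDK’s documented rdbms backend; the instance and database names are placeholders, not real values:

```python
# Hypothetical settings.py fragment -- instance and database names are placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'google.appengine.ext.django.backends.rdbms',
        'INSTANCE': 'my_project:my_instance',  # your Cloud SQL instance
        'NAME': 'my_database',                 # your database name
    }
}
```

With a setting along these lines in place, the usual Django ORM calls run against Cloud SQL instead of a local database.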
...And More
Additional features available in 1.6.2 include:
Channel API: Developers can now specify how long a channel token will last until it expires; the default remains two hours. Channel API quota is now measured both in calls to create a channel and in the number of hours of channel time requested.
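The expiry arithmetic behind a configurable token lifetime is simple to reason about. A minimal sketch, assuming creation time is tracked in Unix seconds and lifetime in minutes (the helper name here is ours, not part of the Channel API):

```python
import time

DEFAULT_DURATION_MINUTES = 120  # the two-hour default mentioned above

def token_expired(created_at, duration_minutes=DEFAULT_DURATION_MINUTES, now=None):
    """Return True if a channel token created at `created_at` (Unix seconds)
    has outlived its requested lifetime in minutes."""
    if now is None:
        now = time.time()
    return now - created_at > duration_minutes * 60
```

In the Python runtime itself, the lifetime is requested up front when the channel is created, rather than checked client-side like this.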
Task Queues: A new X-Appengine-TaskETA header has been added, which can be used to measure task delivery latency.
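Assuming the header carries the task’s scheduled execution time as Unix seconds, measuring delivery latency in a task handler reduces to a subtraction. The helper below is ours, for illustration only:

```python
import time

def task_delivery_latency(task_eta_header, now=None):
    """Return how many seconds late a task was delivered, given the value of
    the X-Appengine-TaskETA header (the task's scheduled ETA as Unix seconds)."""
    if now is None:
        now = time.time()
    return now - float(task_eta_header)
```

A handler would pull the header value out of the incoming task request and, say, log or aggregate the result.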
Blobstore: The Python API for the Blobstore now provides asynchronous API calls for creating upload URLs and fetching and deleting data.
The full list of our features and bug fixes can be found in our release notes (Java, Python). Join in the discussion about this release and all things App Engine related in our Google Group.
Posted by The App Engine Team
My summer with the Google App Engine Team
Thursday, January 26, 2012
Today’s post is contributed by our Summer 2011 team intern, Chris Bunch. Chris did some great work on our Logs and MapReduce APIs and is also the first “App Engine Triple Crown” winner for developing the Experimental Logs Reader API in Python, Java and Go simultaneously.
Four years ago, I was a brand-new Ph.D. student at the University of California, Santa Barbara, and when our research group (the RACELab) heard about Google App Engine, we were intrigued. We thought it presented a new model that enabled apps to scale the right way without severely constricting the types of programs users would write.
But we wanted to experiment with the core functionality of App Engine: the APIs, the scheduler, etc., and so we built AppScale, an open-source implementation of the Google App Engine APIs that allows users to deploy applications written in Python, Java, and Go to the infrastructure of their choice.
Wherever possible, we implement support for the App Engine APIs with alternative open-source technologies. We’ve added support for nine different databases, database-agnostic transactions, a REST interface that users of any programming language can communicate with (via an App Engine app), and the ability to run high-performance computing programs over the whole thing and talk to it from your App Engine app. And here’s my favorite part: it all deploys automatically! You don’t need to tell it what block size you want for the distributed file system, or the size of the read buffers; we configure the necessary services automatically. Since AppScale is completely open source, if you don’t like the defaults, change them!
After creating our own system to run Google App Engine apps, I wanted to see how Google does it. Therefore, I decided to become an intern on the App Engine team and see if I could give them (and, by extension, the App Engine community) something amazing over the summer. I started off with some work on the MapReduce API, making the sample app much easier to use and prettier all around. I also made a YouTube video showing how it all works and how easy it is to run MapReduce jobs over App Engine.
I then looked at a recurring question that App Engine users encounter: “How can I get at my application’s logging information to answer data analytics questions?” It was an excellent problem to tackle, as we have users who want to run application-specific queries that Google Analytics or the Admin Console don’t answer. Currently, users have to use appcfg to grab all their application’s data to a remote machine and run an analysis script over it.
To solve this problem, I created the Logs API, which gives applications programmatic access to their logs from within App Engine itself. Applications can use it to query small numbers of logs within a single request, and they can use the Pipeline, MapReduce, or Backends APIs if they have lots of logs they want to analyze. Logs contain both request-level information (e.g., the URL accessed, the HTTP response code returned) and logging info generated by the application (the logging module in Python, the Logger class in Java, and the logging methods that Go’s appengine package provides). The Logs API is available as of App Engine 1.6.1 to programmers using the Python, Java, or Go runtimes, in both the production environment and the local SDK.
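Once log records are in hand, the analysis itself is ordinary application code. As a sketch of the kind of question this makes answerable in-app, here is a tiny histogram of response codes; the plain dicts below stand in for the real request log objects the API returns:

```python
from collections import Counter

def status_histogram(records):
    """Count HTTP response codes across request log records.
    Each record here is a plain dict with a 'status' key -- a stand-in for
    the richer log record objects an API like this would actually return."""
    return Counter(r['status'] for r in records)

sample = [{'status': 200}, {'status': 200}, {'status': 404}]
# status_histogram(sample) -> Counter({200: 2, 404: 1})
```

For small windows this runs inline in a single request; for large log volumes the same per-record logic would be fed into a MapReduce or Pipeline job instead.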
I had a great time putting the Logs API together, and had a unique experience interning with the App Engine team. Programming in Python, Java, and Go on a daily basis was an exciting new challenge, and I loved it!
Interested in interning with the App Engine team? Check out google.com/students for more information on internships.
Google Cloud Storage: concurrency controls and deeper App Engine integration
Thursday, January 19, 2012
Cross-posted from the Google Code Blog
Google Cloud Storage is a robust, high-performance service that enables developers and businesses to use Google’s infrastructure to store and serve their data. Today, we’re announcing a new feature that gives you greater control over concurrent writes to the same object, and the availability of an App Engine Files API that makes it easier to read and write data from Java App Engine applications.
Write concurrency control
A number of our customers have asked us for greater control over concurrent writes, in order to implement features like strongly consistent write operations and distributed locking semantics in the cloud. In response to your feedback, we’re announcing the release of version-based concurrency control. Every time you update an object, it gets assigned a 32-bit, monotonically increasing sequence number. This version number is returned as a header with every GET or HEAD request. You can then use a conditional write operation to manage concurrent updates to the object (for example, when you want read-modify-write semantics). This feature is currently experimental.
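The read-modify-write pattern this enables can be modeled with a toy in-memory store. This sketch illustrates the compare-and-swap idea only; it is not the Cloud Storage API:

```python
class VersionedStore:
    """Toy in-memory model of version-based conditional writes: each object
    carries a monotonically increasing sequence number, and a write succeeds
    only if the caller's expected version matches the current one."""

    def __init__(self):
        self._objects = {}  # name -> (version, data)

    def read(self, name):
        """Return (version, data); a missing object reads as version 0."""
        return self._objects.get(name, (0, None))

    def conditional_write(self, name, data, expected_version):
        """Write only if nobody has written since `expected_version` was read."""
        version, _ = self._objects.get(name, (0, None))
        if version != expected_version:
            return False  # someone else wrote first; caller should re-read and retry
        self._objects[name] = (version + 1, data)
        return True
```

A writer reads the current version, computes new contents, and retries from the read step whenever the conditional write reports a conflict, which is exactly the loop a versioned GET/HEAD plus conditional PUT supports.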
App Engine Files API for Java applications
Last fall, we announced the ability to read and write your Google Cloud Storage data using the App Engine Files API for Python applications. Today, we’re making the Files API available to Java App Engine applications too. This feature is currently experimental, and we’ll continue to enhance it in the months to come.
As always, we welcome your feedback in our discussion group. If you haven’t tried Google Cloud Storage yet, you can sign up and get started here.
Happy New Year from the App Engine team
Tuesday, January 17, 2012
Happy New Year! As we return from our New Year's celebrations, brush the dust off our workstations and gear up for our first release of 2012, we thought it would be fun to take a look back at improvements we have made and what developers have accomplished with App Engine in 2011.
Let’s start with the features and functionality we added last year:
Language Support: We released the initial version of Python 2.7 support and added Go as an experimental runtime.
Storage: We launched the High Replication Datastore and added support for the Files API in Python and Java. We also announced the limited preview of Google Cloud SQL, a familiar relational database in a fully managed cloud environment.
Computation: We introduced Backends for building larger, long-lived, and/or memory-intensive infrastructure, Pull Queues to allow developers to “pull” tasks from a queue as applications are ready to process them, and two larger Frontend instance classes. We also released the GAE MapReduce framework as an experimental feature for Python.
Security: We successfully completed the audit process for the SAS 70 Type II, SSAE 16 Type II, and ISAE 3402 Type II standards.
Business Readiness: We modified our SLA, billing plans, and service limits, and now offer fully supported Premier Accounts.
Best of all, with your continued support we accomplished our goal of graduating from preview and became a full-fledged Google product.
We’ve seen excellent growth and adoption over the past year, with businesses like Pulse, Evite, and Best Buy choosing App Engine for their applications. Even St. James’s Palace chose App Engine to host the Royal Wedding site. We had so much fun collaborating with 17 of the world’s most renowned museums for the Google Art Project, and with other Googlers building iGoogle gadgets and Doodles on App Engine.
We’ve added more than 1 million registered applications and have more than 150,000 active developers on the App Engine platform generating more than 5 billion page hits per day.
Back in our first blog post in 2008, we asked you to “start your engines,” and what a ride we’ve taken. Thank you for making 2011 our best year yet, and here’s to making 2012 even better!
Posted by Peter Magnusson, Engineering Director
Happy Birthday High Replication Datastore: 1 year, 100,000 apps, 0% downtime
Thursday, January 5, 2012
Once upon a time, the only way to store persistent data in App Engine was to use the Master/Slave Datastore. Although it was a transactional, massively scalable, fully managed, distributed storage system running on Google’s world-class infrastructure, its availability was tied to the availability of a single datacenter, and when you’re serving hundreds of thousands of applications, relying on any single datacenter is simply not sufficient. One year ago today we unveiled a new offering that was specifically designed to address this weakness: the High Replication Datastore (HRD). Still transactional, still massively scalable, still fully managed, still running on Google’s world-class infrastructure, but with the ability to withstand multiple datacenter outages and no planned downtime!
By the time Google I/O came around last May, HRD was performing beautifully and our customers were happy, so we took the next step and made HRD the default option for all new App Engine applications.
In June we made HRD available in our SDK so that customers could easily experiment with the new consistency guarantees (Paxos on your laptop!), and we launched the first version of our migration tool to make it easy to move your apps from Master/Slave to HRD.
In October we released XG Transactions, our first HRD-only feature, which allows users to transact across entity groups.
In November we brought App Engine out of preview and added a 99.95% SLA for HRD applications.
In our most recent release we launched an updated version of our HRD migration tool that ties the duration of the read-only period to your write rate, rather than the size of your dataset. This makes your migration quick, simple, and easy to plan for, regardless of how much data you have. One App Engine customer recently migrated over 500 GB of Datastore data with only a 10-minute read-only period!
Throughout all this, HRD has had no system-wide downtime (planned or unplanned) and has grown to serve over 3 billion requests per day. Needless to say, it’s been a phenomenal year.
We realize that moving data requires planning, testing, coordination, and a strong stomach. However, we believe strongly that HRD provides a fundamentally better service than Master/Slave, and we encourage all our customers to migrate to HRD. Over the coming months you can expect to see further improvements to our migration tools (Blob migrations are on the way!), more HRD-only features like Full Text Search, and of course, more 9s than you can shake a stick at.
Posted by Max Ross, Datastore Tech Lead