BigQuery in Practice - Loading Data Sets that are Terabytes and Beyond
Friday, January 31, 2014
We all know the story of David and Goliath. But did you know that King Saul prepared David for battle by fully arming him? He put a coat of armor and a bronze helmet on him, and gave David his sword. David tried walking around in them but they didn't feel right to him. In the end, he decided to carry five small stones and a sling instead. These were the tools that he used to fight off lions as a shepherd boy. We know what the outcome was. David showed us that picking the right tools and using them well is one of the keys to success.
Let's suppose you are tasked to start a Big Data project. You decide to use Google BigQuery because:
- Its hosting model allows you to quickly run your data analysis without having to set up costly computing infrastructure.
- Its interactive speed allows your analysts to quickly validate hypotheses and uncover insights.
To get started, though, your Goliath is loading multiple terabytes of data into BigQuery. The technical article, BigQuery in Practice - Loading Data Sets that are Terabytes and Beyond, is intended for IT professionals and data architects who are planning to deploy large data sets to Google BigQuery. When dealing with multiple terabytes to petabytes of data, managing how that data is processed, including uploading, failure recovery, and cost and quota management, becomes paramount.
Just as David showed us the importance of using the right tools effectively, the paper presents various options and considerations to help you decide on the optimal solution. It follows the common ingestion workflow depicted in the following diagram and discusses the tools you can use during each stage: from uploading the data to Google Cloud Storage, to running your Extract, Transform, and Load (ETL) pipelines, to loading the data into BigQuery.
[Diagram: Scenarios for data ingestion into BigQuery]
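To make that workflow a little more concrete, here is a minimal sketch of the first and last stages using the google-cloud-storage and google-cloud-bigquery Python client libraries. The bucket, dataset, table, and file names are illustrative assumptions only; the paper itself is the place to go for the options and trade-offs at each stage.

```python
# A minimal sketch of the ingestion workflow described above, assuming the
# google-cloud-storage and google-cloud-bigquery client libraries are installed
# and credentials are configured. All bucket/dataset/file names are illustrative.
from google.cloud import bigquery, storage

# Stage 1: upload a source file to Google Cloud Storage. The client library
# performs resumable uploads for large files, which helps with failure recovery.
storage_client = storage.Client()
bucket = storage_client.bucket("my-ingest-bucket")            # hypothetical bucket
bucket.blob("events/events-0001.csv").upload_from_filename("events-0001.csv")

# Stage 2: load the staged files from Cloud Storage into a BigQuery table.
bq_client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,          # for production loads, supply an explicit schema
)
load_job = bq_client.load_table_from_uri(
    "gs://my-ingest-bucket/events/*.csv",  # wildcard: many files, one load job
    "my_dataset.events",                   # hypothetical dataset.table
    job_config=job_config,
)
load_job.result()             # block until the job completes; raises on failure
print("Loaded {} rows into my_dataset.events".format(load_job.output_rows))
```

Pointing the load job at a wildcard URI lets a single job pick up many staged files at once, which matters when a multi-terabyte data set is split across thousands of files.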
When dealing with large data sets, the correct implementation can mean savings of hours or days, while an improper design may mean weeks of re-work. David was so successful that King Saul gave him a high rank in the army. Similarly, we are here to help you use Google Cloud Platform successfully so that your Big Data project achieves the same level of success.
- Posted by Wally Yau, Cloud Solutions Architect