The Lazy Guide to Learning BigQuery SQL

Lover of laziness, connoisseur of lean-back capitalism, potentially the #1 user of Google Sheets in the world. The day will come when you reach the end of the spreadsheet.
When your Sheets become too clogged with data and formulas to carry on. When your Sheets pass the 5 million hard cap on cells.
But there is life after Sheets.

You don't need to pack up your Sheets, quit your job and burn your credit cards – you can just upgrade your toolbelt to work with larger datasets.
Enter BigQuery and SQL – offering unlimited data analysis power with lightning speed.
If you keep reading, I promise you will learn to write your first SQL query in BigQuery today, using the Google Analytics sample dataset.
Below are 13 video tutorials to get you up and running – but to really learn this stuff, we recommend diving into our free course, Getting Started with BigQuery.
The course includes a SQL cheat sheet, 2 quizzes to test your knowledge, and tons of other resources to help you analyze data in BigQuery.
Let's dive in!

Btw…if you're looking to jumpstart your BigQuery setup, check out our data pipeline services.

You may know more than you think

If you already know the Google Sheets query function, you're more than halfway to writing SQL in BigQuery.
The query function syntax is like so:

=QUERY(range, "SELECT * WHERE x = y")

In BigQuery SQL (and most other forms of SQL), the only key difference is that you reference a table (with a FROM parameter), instead of a spreadsheet range:

SELECT * FROM table WHERE x = y

Other than that, you'll find the logic (AND / OR) and math syntax to be very similar.
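
To make the parallel concrete, here's a hypothetical Sheets query and its BigQuery twin (the range and column names are made up for illustration):

=QUERY(A:C, "SELECT A, SUM(C) WHERE B = 'Organic Search' GROUP BY A")

SELECT date, SUM(visits) FROM table WHERE channel = 'Organic Search' GROUP BY date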

Access the Google Analytics sample dataset

Before starting to use BigQuery, you must create a project.
You should see the $300 free trial offer pop up if you're creating your first Google Cloud project, so there's no risk of you being billed as part of this tutorial.
Even if that offer doesn't show up, the data queried via the Google Analytics sample dataset is so small that it falls within BigQuery's free tier.
Once that's up and running, you can access the Google Analytics sample dataset here.

  • Note that if you’re using the classic BigQuery UI, always be sure to select ‘Show Options’ and uncheck ‘Use Legacy SQL’ to make sure that you’re using the Standard SQL dialect.

Writing your first SELECT query

Let's break down a basic SELECT query, pulling visits, transactions and revenue by channel from our Google Analytics dataset:

SELECT  
date,
channelGrouping as channel,
totals.visits,
totals.transactions,
totals.transactionRevenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
LIMIT 1000

Each SQL query must contain at least 2 parameters:

  • SELECT: defines the columns you’d like to pull
  • FROM: defines the table to pull them from

Throughout this walkthrough, we'll be focusing on the holy trinity of marketing metrics: visits, transactions and revenue (from which you can calculate conversion rate and AOV):

SELECT  
date,
channelGrouping as channel,
totals.visits,
totals.transactions,
totals.transactionRevenue

You can rename any column using 'as' (see channel above), if you'd rather use a column name different from the one present in the database.
For the FROM parameter, note that in BigQuery there are 3 layers included in each table name:

  • Project ID
  • Dataset
  • Table

They come together as project-id.dataset.table – in our case:

FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 

The LIMIT parameter above defines the number of rows to return – including a limit is just good SQL practice, even though for BigQuery it's not really necessary.
Keep in mind that order is critical with these parameters: there's an order of operations, just like arithmetic.
SELECT always comes first, then FROM, and so on as we go through these examples (the order in the examples is always the order you'll want to use).
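
As a cheat sheet, here's the full skeleton in the order the clauses must appear – we'll cover each of these below:

SELECT columns
FROM `project-id.dataset.table`
WHERE logic
GROUP BY columns
ORDER BY column desc
LIMIT 1000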

Filtering data with WHERE

.
Adding a WHERE parameter to our query allows us to filter our results based on specific logic.
For example, what if we wanted to pull GA sessions for only the "Organic Search" channel?
Adding to our basic SELECT statement above, we'd layer on a WHERE parameter:

SELECT  
date,
channelGrouping as channel,
totals.visits,
totals.transactions,
totals.transactionRevenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
WHERE channelGrouping = 'Organic Search'

Following the WHERE parameter, you can introduce any logic just like you would in an IF formula: !=, <, <=, >, >=.
You can even pull multiple values using "in":

WHERE channelGrouping in ('Direct', 'Organic Search')

To add a second logic statement after your initial WHERE, you simply add an AND or OR (WHERE is only for the first bit of logic):
WHERE channelGrouping in ('Direct', 'Organic Search')
AND date = '20170801'
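
You can also mix AND with OR by using parentheses – for example, this hypothetical filter pulls only revenue-generating sessions from either channel:

WHERE (channelGrouping = 'Direct' OR channelGrouping = 'Organic Search')
AND totals.transactionRevenue > 0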

Ordering results with ORDER BY

Often you'll want to display results in a specific order.
Building on our query above, what if we wanted to display our most lucrative (highest revenue) hits first?
You'd add an ORDER BY parameter to the end of your query, like so:

SELECT  
date,
channelGrouping as channel,
totals.visits,
totals.transactions,
totals.transactionRevenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
ORDER BY totals.transactionRevenue desc
LIMIT 1000

The basic structure of an ORDER BY parameter is:

ORDER BY columnname direction (either asc for ascending or desc for descending)

If you don't really need to order results in a certain way, you can leave out the ORDER BY – it can be an unnecessary drain on performance when running large queries.
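
You can also order by multiple columns – later columns break ties left by the earlier ones. For example, a hypothetical sort showing channels alphabetically, with the highest-revenue hits first within each channel:

ORDER BY channelGrouping asc, totals.transactionRevenue desc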

Calculating aggregate totals with GROUP BY

Most of the time, you won't just need to query out your raw data – you'll want to perform some aggregate math across a slice of your dataset (by channel, device type, etc).
For example, what if we want to sum visits, transactions and revenue by channel?
There are two changes required to your query to make this happen:

  • Wrap the columns you want to run math on in an aggregate function – SUM(), COUNT(), COUNT(DISTINCT()), MAX(), or MIN()
  • Add a GROUP BY parameter after your WHERE logic – all of the columns not being aggregated must be present in the GROUP BY

Let's take a look:

SELECT  
channelGrouping as channel,
sum(totals.visits) as visits,
sum(totals.transactions) as transactions,
sum(totals.transactionRevenue) as revenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
WHERE channelGrouping in ('Organic Search', 'Direct')
GROUP BY channel
ORDER BY transactions desc
LIMIT 1000

Notice how, since we're only grouping by channel, all of the other metrics (visits, transactions, revenue) are wrapped in a SUM function.
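
The other aggregate functions work the same way. For instance, a quick sketch counting sessions and unique visitors per channel (fullVisitorId is a column in the standard GA export schema):

SELECT  
channelGrouping as channel,
count(*) as sessions,
count(distinct fullVisitorId) as unique_visitors
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
GROUP BY channel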

Writing arithmetic within queries

You'll frequently want to calculate metrics based on your metrics: for example conversion rate (transactions / visits), or average order value (revenue / transactions).
You can do that math inline right in your query, using +, -, * or /.
See the conv_rate and aov columns below:

SELECT  
date,
channelGrouping as channel,
sum(totals.visits) as visits,
sum(totals.transactions) / sum(totals.visits) as conv_rate,
sum(totals.transactions) as transactions,
sum(totals.transactionRevenue) / sum(totals.transactions) as aov,
sum(totals.transactionRevenue) as revenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
GROUP BY date, channel
ORDER BY transactions desc
LIMIT 1000

Division can be tricky though, since if you divide by zero your query will throw an error.
To do division safely in queries, you can wrap it in what's called a CASE statement, to only run the math if the denominator is greater than 0:

CASE WHEN sum(totals.visits) > 0 THEN sum(totals.transactions) / sum(totals.visits) ELSE 0 END as conv_rate

CASE statements are very useful – basically the same as an IF statement in Sheets. You can add multiple WHEN / THEN conditions to mimic a nested IF statement.
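
For example, a hypothetical bucketing column – conditions are checked top to bottom, and the first match wins, just like a nested IF (note that GA stores revenue multiplied by 10^6):

CASE
WHEN totals.transactionRevenue > 100000000 THEN 'big spender'
WHEN totals.transactionRevenue > 0 THEN 'buyer'
ELSE 'browser'
END as visitor_type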
For now, to perform division you can just use that basic CASE syntax above, to check that the denominator is greater than 0 before running the math.
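
As an aside, BigQuery Standard SQL also offers a SAFE_DIVIDE function, which returns NULL instead of erroring when the denominator is zero – wrapped in IFNULL, it's a one-line alternative to the CASE approach:

IFNULL(SAFE_DIVIDE(sum(totals.transactions), sum(totals.visits)), 0) as conv_rate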
So the final query to calculate conversion rate and AOV would look like:

SELECT  
date,
channelGrouping as channel,
sum(totals.visits) as visits,
CASE WHEN sum(totals.visits) > 0
THEN sum(totals.transactions) / sum(totals.visits) 
ELSE 0 END as conv_rate,
sum(totals.transactions) as transactions,
CASE WHEN sum(totals.transactions) > 0 
THEN sum(totals.transactionRevenue) / sum(totals.transactions) 
ELSE 0 END as aov,
sum(totals.transactionRevenue) as revenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
GROUP BY date, channel
ORDER BY transactions desc
LIMIT 1000

Aggregating by day, week and month

If you're working with marketing data, looking at changes over time will be critical for you.
Thankfully, SQL has built-in date functions to make that easy. Let's try grouping sessions by day of the month, week of the year, and month + year.
The key functions are:

  • EXTRACT(DATE PART from column) – DATE PART can be DAY, WEEK, MONTH, YEAR, and more (full docs here)
  • FORMAT_DATE("date syntax", column) – date syntax can be %Y-%m for year and month (full docs here)

Let's see those in action:

SELECT
date,
EXTRACT(DAY from date) as day_of_month,
EXTRACT(WEEK from date) as week_of_year,
FORMAT_DATE("%Y-%m", date) AS yyyymm

Note that due to a nuance in the sample GA dataset (the date being formatted as a string instead of a date), you'll actually have to first use the PARSE_DATE function (docs here) to get the date column into a true date format before running the EXTRACT and FORMAT_DATE functions:

SELECT
date,
EXTRACT(DAY from date) as day_of_month,
EXTRACT(WEEK from date) as week_of_year,
FORMAT_DATE("%Y-%m", date) AS yyyymm,
totals.visits
FROM (
SELECT  
PARSE_DATE('%Y%m%d', date) as date,
channelGrouping,
totals.visits
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
WHERE channelGrouping in ('Organic Search', 'Direct')
ORDER BY totals.visits desc
LIMIT 1000
)

Let's talk a bit about this nested query structure – you'll find it comes in handy often when you have to run multiple layers of math or functions.

Nesting queries

In our date example, we first had to run the PARSE_DATE function on our date column, to make it a proper date field rather than a string:

SELECT 
PARSE_DATE('%Y%m%d', date) as date_value
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 

Once we had that done, we could then run our day, day_of_week, and yyyymm functions on that pre-processed date_value column – by simply adding a new SELECT statement around the query we'd already written.
In effect, we're querying the output of a previous query, rather than querying a BigQuery table directly:

SELECT
date_value,
EXTRACT(DAY from date_value) as day,
EXTRACT(WEEK from date_value) as day_of_week,
FORMAT_DATE("%Y-%m", date_value) AS yyyymm
FROM
(
SELECT 
PARSE_DATE('%Y%m%d', date) as date_value
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
)

This way, instead of having to repeat the PARSE_DATE function 3 times (once for each of the day, day_of_week and yyyymm columns), you can write it once, and then reference it in a later query.
Nesting is critical for keeping your queries simple, but beware – using more than 2 or 3 levels of nesting will make you want to pull your hair out later on.
If you find yourself needing to write a really complex, multi-level nested query, then I'd recommend learning to use a framework like DBT (getdbt.com) to be able to reference SQL queries within other queries.

Unnesting RECORD arrays

Remember those weird field types that contain sub-columns? Check out totals for example:

totals.visits,
totals.transactions,
totals.transactionRevenue

The column 'totals' is what's called a RECORD in BigQuery – long story short, it's an array of data within a single row of data.
Since the sample GA data is at the session level (each row = 1 session), and each session can have a number of hits, the 'hits' column is also structured like this.
To access these nested RECORD columns, there's a specific parameter to pass in your query:

CROSS JOIN UNNEST(hits)

This will flatten the array, and make it queryable using basic SQL (see BQ docs here).

SELECT  
date,
channelGrouping,
isEntrance,
page.pagePath,
totals.transactions,
totals.transactionRevenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
CROSS JOIN UNNEST(hits)
WHERE date = '20170801'

Once you unnest the hits RECORD, you're able to query the sub-columns by removing the 'hits.' before the column name (hits.page.pagePath becomes queryable as page.pagePath, hits.item.productName -> item.productName, etc).
For instance, let's say we wanted to filter for only entrance hits, when a user first lands on your site. There's a sub-column of the hits RECORD called hits.isEntrance. If it equals true, then that row is, er, an entrance.
Let's query out only entrance hits:

SELECT  
date,
channelGrouping,
isEntrance,
page.pagePath,
totals.transactions,
totals.transactionRevenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
CROSS JOIN UNNEST(hits)
WHERE date = '20170801'
AND isEntrance = true

Being able to wield CROSS JOIN UNNEST will open up the true power of BigQuery for you, as lots of other APIs (Shopify, FB Ads, etc) make use of BigQuery's nested array column functionality.
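
As a taste of that, hits in the GA export schema contain their own repeated product RECORD, so you can unnest twice to reach product-level rows (a sketch, assuming the standard GA export column names):

SELECT  
date,
channelGrouping,
prod.v2ProductName,
prod.productQuantity
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
CROSS JOIN UNNEST(hits) as h
CROSS JOIN UNNEST(h.product) as prod
WHERE date = '20170801'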

Quiz break 1!

Let's check in with your knowledge so far, and answer a few questions using the Google Analytics sample dataset for 8/1/2017.
To take the quiz, login or signup for the free course, Getting Started with BigQuery.

Joining tables

Our handy Google Analytics sample dataset lives within one BigQuery table, but the data you'll be working with generally won't be so clean.
It'll live in multiple tables across different datasets, and you'll have to do some gymnastics to join it together.
There are a number of ways to join tables together (INNER JOINS, FULL OUTER JOINS, AUSTRALIAN JOINS, BRAZILIAN JOINS), but in BigQuery we mainly use straight LEFT JOINS (you can read up on the rest of those join types at w3schools).
A LEFT JOIN is when you take all of one table (your first table), and join rows from a second table to it only where they match a certain logic. It's basically a VLOOKUP formula in Google Sheets.
Let's look at an example – what if we wanted to calculate the population by US state using BigQuery public datasets?
We'd have to join together the 2010 Census dataset by ZIP code with the US ZIP codes dataset, which will allow us to look up the state that each ZIP code belongs to.
The joining part of our SQL query comes when we select our tables:

FROM `bigquery-public-data.utility_us.zipcode_area` a
LEFT JOIN `bigquery-public-data.census_bureau_usa.population_by_zip_2010` b
ON (
a.zipcode = b.zipcode
)

To set up your join, you first give each table you're joining an alias (a and b in our case), to make referencing their columns easier.
Then, in the 'ON' parameter, you specify the logic for your join – the columns that need to equal each other to join the tables together.
You still SELECT and GROUP BY columns in the same way – except you now have access to columns from both table a (states by zipcode) and table b (population by zipcode), and you can select specific columns by adding the table alias (a. or b.) before the column name:

SELECT  
a.zipcode,
a.state_code,
sum(b.population) population
FROM `bigquery-public-data.utility_us.zipcode_area` a
LEFT JOIN `bigquery-public-data.census_bureau_usa.population_by_zip_2010` b
ON (
a.zipcode = b.zipcode
)
WHERE b.minimum_age is null
AND b.maximum_age is null
AND b.gender is null
GROUP BY a.zipcode, a.state_code

That query's a bit tough to read though – we're doing a bunch of other logic in the WHERE statement.
A helpful hint when joining tables is to use a WITH statement beforehand to declare your tables + pre-process them.

For example:

WITH zipcodes as (
SELECT
zipcode,
state_code 
FROM `bigquery-public-data.utility_us.zipcode_area`
),
census as (
SELECT
zipcode,
sum(population) as population
FROM `bigquery-public-data.census_bureau_usa.population_by_zip_2010`
WHERE minimum_age is null
AND maximum_age is null
AND gender is null
GROUP BY zipcode
)
SELECT 
zipcodes.zipcode,
zipcodes.state_code,
census.population
FROM zipcodes
LEFT JOIN census 
ON (
zipcodes.zipcode = census.zipcode
)

At the top of the query, you can define each table you'll use, and do any filtering + grouping beforehand.
Then, when you join your tables together, you're doing a straight join rather than also doing some math after the fact. That's just the style that we like to write SQL in – not critical if you prefer straight joining, but it helps a lot with readability after the fact.

Window (analytic) functions

It's pretty common when working with marketing datasets to want to calculate a % of total column (ie the % of total revenue coming from a given channel for the period), or the difference from the average (to filter for anomalies).
BigQuery allows you to use window (or analytic) functions to perform this type of math – where you calculate some math on your query in aggregate, but write the results to each row in the dataset.
Using our sample Google Analytics dataset, let's calculate each channel's percentage of total pageviews.
First, we'll query out total pageviews by channel:

SELECT
channelGrouping,
sum(totals.pageViews) as pageviews
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
GROUP BY channelGrouping

Then, we can wrap a window function around this query to calculate the total pageviews across all channels, as well as the percentage of total pageviews for a given channel.
The basic syntax of a window function is:

sum(pageviews) OVER (PARTITION BY date) as total_pageviews

The key elements here are the function (sum), which will aggregate the total for each partition in the window.
The PARTITION BY statement basically behaves like a GROUP BY – here we're saying group by date, since we want to know the total pageviews for each date.
Put the whole query together, and it looks like so:

SELECT
date,
channelGrouping,
pageviews,
sum(pageviews) OVER w1 as total_pageviews,
pageviews / sum(pageviews) OVER w1 as pct_of_pageviews
FROM (
SELECT
date,
channelGrouping,
sum(totals.pageViews) as pageviews
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
GROUP BY channelGrouping, date
)
WINDOW w1 as (PARTITION BY date)
ORDER BY pct_of_pageviews desc

Notice how, since we're using the same WINDOW (PARTITION BY xx) twice, we define it at the end of our query (WINDOW w1 as) and reference it with OVER w1, instead of re-writing it twice.
Once you have your feet wet in BigQuery, I highly recommend digging into these advanced analytic functions (and don't be afraid to read the docs). They'll open up an entire new world of analysis possibilities.
At CIFL, we most commonly end up using these analytic functions:

  • last_value()
  • first_value()
  • sum()
  • max()
  • min()
  • avg()
  • rank()
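
For example, here's a quick rank() sketch that orders channels by pageviews within each date, building on the pageviews query above:

SELECT
date,
channelGrouping,
pageviews,
rank() OVER (PARTITION BY date ORDER BY pageviews desc) as channel_rank
FROM (
SELECT
date,
channelGrouping,
sum(totals.pageViews) as pageviews
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
GROUP BY channelGrouping, date
)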

Deduping query results

BigQuery is an append-only database, meaning that as rows are updated, new rows are added to the database, rather than being updated in place.
This means that you can often end up with duplicate values for a given unique row – for example, if you're using Stitch to push Google Analytics (or any API's) data to BigQuery, you'll have to dedupe it before using it.
Fortunately, this is easy to do using window functions – the syntax can seem a bit complex at first, but bear with me.
From the sample Google Analytics dataset, let's say we want to pull out the last hit on a given day for each channelGrouping. Let's use a window (aka analytic) function:

first_value(visitStartTime) over (PARTITION BY channelGrouping ORDER BY visitStartTime desc) lv

The key elements here are the function (first_value), and the PARTITION BY of channelGrouping (which behaves like a GROUP BY).
The ORDER BY is required if you want to pull a first_value, last_value, or rank – since we want the latest timestamp, we're going to pull the first_value with visitStartTime descending.
To ultimately answer our question of what the last hit of the day was for each channelGrouping, we also have to SELECT only values where the visitStartTime is equal to the last value:

SELECT * FROM (
SELECT  
date,
channelGrouping,
totals.hits,
visitStartTime,
first_value(visitStartTime) over (PARTITION BY channelGrouping ORDER BY visitStartTime desc) lv
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` 
)
WHERE visitStartTime = lv

Tools like Stitch that write data from APIs to BigQuery will always have a system column that ticks up either a unique integer or timestamp for each row written to the database (in Stitch's case it's the _sdc_sequence column).
Similarly to how we used visitStartTime as the field to ORDER BY above, you can replicate the same query structure using _sdc_sequence to dedupe data from Stitch.
For example, this is how we deduplicate FB Ads data:

SELECT * FROM (
SELECT 
date_start, campaign_id, campaign_name, ad_id, account_name, spend, reach, inline_link_clicks, _sdc_sequence,
first_value(_sdc_sequence) OVER (PARTITION BY date_start, ad_id, campaign_id ORDER BY _sdc_sequence DESC) lv 
FROM {{ target.project }}.fb_ads.ads_insights 
)
WHERE lv = _sdc_sequence
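
An equivalent dedupe pattern, if you find it more readable, uses row_number() to number the rows within each partition, keeping only the first (same hypothetical Stitch table as above):

SELECT * EXCEPT(rn) FROM (
SELECT 
*,
row_number() OVER (PARTITION BY date_start, ad_id, campaign_id ORDER BY _sdc_sequence DESC) as rn
FROM {{ target.project }}.fb_ads.ads_insights 
)
WHERE rn = 1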

It may seem complex at first, but you'll end up using this same pattern to dedupe BigQuery data so often that it'll become second nature.

Quiz break 2!

Let's test your knowledge on some of these more advanced topics (joining + window functions), again using the Google Analytics sample dataset for 8/1/2017, and also layering in US 2010 census data and US zip code + state mappings.
To take the quiz, login or signup for the free course, Getting Started with BigQuery.

BigQuery Nuts and Bolts

When it comes time to put your BigQuery knowledge into practice, there are some practical concerns to go over:

  1. How much does it cost?
  2. How can you save your queries to be re-run in the future?
  3. How will you get data into BigQuery?

BigQuery billing

For the most part, the datasets we're using for marketing data analysis qualify as small data in the relative BigQuery sense.
It's a platform designed to quickly query very large volumes of data, so analyzing a few million rows of Google Analytics data is no biggie.
For that reason, running BigQuery queries is very cheap – they charge you by the query, rather than for the data you're storing in the database.
Your first 1TB of queries is free, and the rate is only $5.00 per TB after that (BQ docs here).
As an example, we have never incurred BigQuery costs of over $10 per month for any Agency Data Pipeline implementation we've done.
BigQuery does include the functionality of table clustering and partitioning to cut down on query costs – in our experience though, these haven't been truly necessary with marketing datasets.
The bottom line: BigQuery is very cheap relative to the speed + value it brings to your organization.

Saving queries with dbt

One thing we highly recommend doing to keep your query volume down is building any SQL queries that you'll use frequently into data models using a framework like DBT.
This will allow you to run them once a day, and create much smaller tables that you can then query directly, rather than having to query the full raw tables (and incur the cost) every time you want the data.
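
To make that concrete, a dbt model is just a SQL file with a config header. Here's a minimal sketch (the model name is hypothetical) that would materialize our daily channel totals as a table:

-- models/channel_daily.sql
{{ config(materialized='table') }}

SELECT
date,
channelGrouping as channel,
sum(totals.visits) as visits,
sum(totals.transactionRevenue) as revenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801`
GROUP BY date, channel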
For just a brief introduction to DBT, check out this excerpt from our Build your Agency Data Pipeline course:

If there's one next step I recommend, it'd be learning DBT – it'll put your SQL capabilities on steroids.

Pushing data to BigQuery from Sheets

At CIFL, we find ourselves pushing lots of data from Sheets up to BigQuery as part of our Agency Data Pipeline service.
For APIs like Google Analytics or FB Ads, we use off-the-shelf ETL tools to push data to BigQuery.
But there's always data that we need to manually push from Sheets to BigQuery:

  • Mappings between GA UTM tags (source / medium / campaign) and higher-level channel names
  • Lists of active data feeds (ie all FB Ads accounts) to be joined together
  • Lists of team member names + their client assignments, for team-level reporting

To help automate this process, we built a Sheets to BigQuery Connector script that does a few handy things for us:

It creates BigQuery tables, pushes data from Sheets to BQ, and allows us to easily write queries to pull data back down from BQ to Sheets (for QC or reporting).
Grab it for free from the CIFL BigQuery course here.

You made it!

Now that you're a master of SQL in BigQuery, what will you do – go to Disneyworld perhaps?
There are a few next destinations on CIFL we'd recommend.
Have other questions? Feel free to drop a note to help@codingisforlosers.com or find us on Twitter @losersHQ.
