The scope of this project is to build a data warehouse on Google Cloud Platform that will help answer common business questions as well as power dashboards.

Overview

Project Scopes

The scope of this project is to build a data warehouse on Google Cloud Platform that will help answer common business questions as well as power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; it is highly scalable and can store an effectively unlimited amount of raw data.
  • Google BigQuery: acts as the database engine for the data warehouse, the data marts and the ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale; a rough sketch of such a task follows this list.
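
As an illustration of the last point, an ETL step in this architecture reduces to Airflow handing a SQL statement to BigQuery, which does all of the processing. This is a minimal sketch, not the project's exact code: the operator choice and the staging table name are assumptions, while the dataset names come from the configuration shown later in this README.

# Hedged sketch (inside the project's DAG definition): Airflow only issues the query,
# BigQuery executes it. The staging table name is an assumption.
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

create_d_state = BigQueryInsertJobOperator(
    task_id='create_d_state',
    gcp_conn_id='google_cloud_default',
    configuration={
        'query': {
            'query': '''
                CREATE OR REPLACE TABLE `IMMIGRATION_DWH.D_STATE` AS
                SELECT DISTINCT state_id, state_name
                FROM `IMMIGRATION_DWH_STAGING.us_cities_demographics`
            ''',
            'useLegacySql': False,
        }
    },
)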

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables. A sample analytical query against this model is shown after the table descriptions below.

image

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimensions for the date column
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
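
To make the model concrete, here is the kind of business question the star schema is meant to answer, expressed with the google-cloud-bigquery client. The project and dataset names come from the DAG configuration later in this README; the join column on the fact table (country_id) is an assumption based on the D_COUNTRY columns.

# Hedged sketch: "which countries of origin account for the most arrivals?"
# country_id on the fact table is an assumed column name.
from google.cloud import bigquery

client = bigquery.Client(project='cloud-data-lake')

QUERY = '''
SELECT c.country_name, COUNT(*) AS arrivals
FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
JOIN `IMMIGRATION_DWH.D_COUNTRY` c ON f.country_id = c.country_id
GROUP BY c.country_name
ORDER BY arrivals DESC
LIMIT 10
'''

for row in client.query(QUERY).result():
    print(row.country_name, row.arrivals)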

Data pipeline

This project uses Airflow for orchestration.

image

A DummyOperator, start_pipeline, kicks off the pipeline, followed by 4 load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration_data is loaded as parquet files while the others are CSV formatted. Row-count checks run after each load to BigQuery; a sketch of one load task is shown below.
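
The exact DAG code is not reproduced here, but one of the load tasks could look roughly like this sketch. The bucket and dataset names come from this README; the operator choice and the staging table name are assumptions.

# Hedged sketch of the immigration_data load task (parquet from GCS into a staging table).
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load_immigration_data = GCSToBigQueryOperator(
    task_id='load_immigration_data',
    bucket='cloud-data-lake-gcp',
    source_objects=['immigration_data/*.parquet'],
    destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data',
    source_format='PARQUET',
    write_disposition='WRITE_TRUNCATE',
    autodetect=True,
    gcp_conn_id='google_cloud_default',
)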

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure there are no duplicates. The other dimension tables are also created, and the pipeline finishes. The duplicate check could look roughly like the sketch below.
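
A duplicate check can be expressed as a BigQuery check task that fails the pipeline when the query returns a falsy value. This is a sketch under assumptions: the operator choice and the key column (cicid) are not taken from the project's code; substitute whatever uniquely identifies a row in F_IMMIGRATION_DATA.

# Hedged sketch: fail the task if any key value appears more than once.
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

check_no_duplicates = BigQueryCheckOperator(
    task_id='check_f_immigration_data_duplicates',
    use_legacy_sql=False,
    gcp_conn_id='google_cloud_default',
    sql='''
        SELECT COUNT(*) = 0
        FROM (
            SELECT cicid
            FROM `IMMIGRATION_DWH.F_IMMIGRATION_DATA`
            GROUP BY cicid
            HAVING COUNT(*) > 1
        )
    ''',
)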

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery handle petabyte-scale data, and Airflow is not a bottleneck since it only issues commands to other services.

Pipelines are run at 7 AM daily: how would the dashboards be updated, and would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures. A configuration sketch is shown below.
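
A minimal sketch of that schedule, assuming the DAG id matches the file name shown later in this README and using a placeholder alert address (Slack alerting would need an additional callback or provider):

# Hedged sketch: daily 7 AM schedule, retries, and email alerts on failure.
from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    'owner': 'data-engineering',
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    'email': ['alerts@example.com'],      # placeholder address
    'email_on_failure': True,
}

dag = DAG(
    dag_id='cloud-data-lake-pipeline',
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 7 * * *',        # every day at 7 AM
    catchup=False,
)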

Make it available to 100+ people

BigQuery auto-scales, so it can easily serve 100+ people accessing it concurrently. If more people or services need access to the data, we can add steps to write to a horizontally scalable NoSQL database such as Datastore, Bigtable, or Cassandra.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the service account the Project Owner role and create a key for it. This is convenient for this project, but it is not recommended for a production system. Keep your key somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project and set up the local environment as follows:

Install Docker if it is not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following command to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

image

Navigate to Admin -> Connections and paste in the service account credentials for the following two connections: bigquery_default and google_cloud_default

image

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this: img
