Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards. To achieve that, a conceptual data model and a data pipeline are defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance can be deployed on Google Compute Engine or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; its object storage scales effectively without limit.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing CLI commands to load data into BigQuery and SQL queries for the ETL process (see the load sketch below). Airflow does not process any data itself, which allows the architecture to scale.
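
For example, the load step can be expressed as an Airflow task using the Google provider's GCSToBigQueryOperator. This is a minimal sketch assuming the bucket, project, and dataset names used later in this README, not the project's exact code:

from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

# Sketch: load the immigration Parquet files from the GCS data lake
# into a BigQuery staging table (names taken from the setup below).
load_immigration_data = GCSToBigQueryOperator(
    task_id='load_immigration_data',
    bucket='cloud-data-lake-gcp',
    source_objects=['immigration_data/*.parquet'],
    destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data',
    source_format='PARQUET',
    write_disposition='WRITE_TRUNCATE',
)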

Data Model

The database is designed following the star schema principle, with 1 fact table and 7 dimension tables (an illustrative DDL sketch of the fact table follows the list below).

[Image: star schema data model]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimensions for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
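
To make the model concrete, here is an illustrative DDL sketch for the fact table, run through the BigQuery Python client. The column names and types are assumptions inferred from the description above (cicid is the record key in the I94 source data), not the project's exact schema:

from google.cloud import bigquery

client = bigquery.Client(project='cloud-data-lake')

# Illustrative schema only; adjust the columns to the real staging data.
ddl = """
CREATE TABLE IF NOT EXISTS `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` (
    cicid          INT64,   -- record key from the I94 source data
    arrival_date   DATE,    -- joins to D_TIME
    departure_date DATE,    -- joins to D_TIME
    port_id        STRING,  -- joins to D_PORT
    state_id       STRING,  -- joins to D_STATE
    country_id     INT64,   -- joins to D_COUNTRY
    visa_type      STRING,
    gender         STRING
)
"""
client.query(ddl).result()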

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph]

A DummyOperator named start_pipeline kicks off the pipeline, followed by 4 load operations that copy data from the GCS bucket into BigQuery staging tables. The immigration data is loaded as Parquet files, while the others are CSV formatted. Row-count checks run after each load to BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure it contains no duplicates. The other dimension tables are also created, and the pipeline finishes.
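
Below is a minimal sketch of how this orchestration could be wired together, with cicid assumed to be the record key; the load and create steps are elided, and the real DAG chains them in between:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

with DAG(
    dag_id='cloud_data_lake_pipeline',
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
) as dag:
    start_pipeline = DummyOperator(task_id='start_pipeline')

    # Fails the pipeline unless the first row of the query result is
    # truthy, i.e. unless every record key in the fact table is unique.
    check_f_immigration_data = BigQueryCheckOperator(
        task_id='check_f_immigration_data',
        sql="""
            SELECT COUNT(*) = COUNT(DISTINCT cicid)
            FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
        """,
        use_legacy_sql=False,
    )

    start_pipeline >> check_f_immigration_data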

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.
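
One optional refinement at that scale, not part of the current project, would be to date-partition the fact table so dashboard queries only scan the days they need. An illustrative sketch using the BigQuery Python client:

from google.cloud import bigquery

client = bigquery.Client(project='cloud-data-lake')

# Illustrative only: a date-partitioned copy of the fact table keeps
# per-query scan costs flat as the data grows.
client.query("""
CREATE TABLE IF NOT EXISTS `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA_PARTITIONED`
PARTITION BY arrival_date AS
SELECT * FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
""").result()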

Pipelines run daily at 7 AM. How would the dashboards be updated? Would they still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures, as sketched below.
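
A minimal sketch of that configuration in the DAG definition, with illustrative values (the alert address is a placeholder; Slack notifications would typically hook in via an on_failure_callback):

from datetime import datetime, timedelta

from airflow import DAG

default_args = {
    'retries': 3,                         # re-run a failed task up to 3 times
    'retry_delay': timedelta(minutes=5),  # wait 5 minutes between retries
    'email_on_failure': True,             # requires SMTP configured in Airflow
    'email': ['data-team@example.com'],   # placeholder alert address
}

with DAG(
    dag_id='cloud_data_lake_pipeline',
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 7 * * *',  # daily at 7 AM
    default_args=default_args,
    catchup=False,
) as dag:
    ...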

Make it available to 100+ people

BigQuery auto-scales, so it can easily serve 100+ concurrent users. If more people or services need access to the data, we can add steps that write the serving tables to a NoSQL database such as Datastore, Cassandra, or Bigtable, or to a horizontally scalable SQL database such as Cloud Spanner.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project but not recommended for a production system. Create a key for the account and keep it somewhere safe (a gcloud sketch of these steps follows this list).
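
If you prefer the command line, the same setup can be scripted with gcloud; the service account and project names below are placeholders:

gcloud iam service-accounts create data-lake-sa
gcloud projects add-iam-policy-binding your-project-id \
    --member='serviceAccount:data-lake-sa@your-project-id.iam.gserviceaccount.com' \
    --role='roles/owner'
gcloud iam service-accounts keys create key.json \
    --iam-account=data-lake-sa@your-project-id.iam.gserviceaccount.com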

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the environment and required tooling as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI home page]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default (an environment-variable alternative is sketched after the screenshot).

[Image: Airflow connection settings]
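
Alternatively, connections can be supplied through environment variables instead of the UI. A sketch for google_cloud_default using Airflow's connection-URI format (the extra field names vary across Airflow versions, and the key path is a placeholder):

export AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT='google-cloud-platform://?extra__google_cloud_platform__key_path=/usr/local/airflow/key.json&extra__google_cloud_platform__project=cloud-data-lake'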

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and update the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this: [Image: data warehouse tables in BigQuery]
