
Overview

Project Scopes

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance, deployed on Google Compute Engine or locally, orchestrates the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake and scales to store raw files of virtually any size.
  • Google BigQuery: acts as the database engine for the data warehouse, the data marts, and the ETL processes. BigQuery is a serverless solution that can easily and efficiently process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing load commands and SQL queries to BigQuery. Airflow does not process any data itself, which allows the architecture to scale; a sketch of this pattern follows.
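
As a rough illustration of this pattern, the sketch below shows how a load from GCS into a BigQuery staging table and a subsequent ETL query could be expressed as Airflow tasks. The operator choices, staging table name, and query are illustrative assumptions, not the project's actual DAG.

# Sketch only: illustrative Airflow tasks, not the project's actual DAG.
# The staging table name and the ETL query below are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    "cloud_data_lake_sketch",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:

    # Airflow only issues a load job; GCS and BigQuery move the data.
    load_airports = GCSToBigQueryOperator(
        task_id="load_airports_to_staging",
        bucket="cloud-data-lake-gcp",
        source_objects=["airports/airport-codes_csv.csv"],
        destination_project_dataset_table="cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS",
        source_format="CSV",
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    )

    # The ETL step runs entirely inside BigQuery as a SQL job.
    build_d_airport = BigQueryInsertJobOperator(
        task_id="build_d_airport",
        configuration={
            "query": {
                "query": (
                    "CREATE OR REPLACE TABLE `cloud-data-lake.IMMIGRATION_DWH.D_AIRPORT` AS "
                    "SELECT * FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS`"
                ),
                "useLegacySql": False,
            }
        },
    )

    load_airports >> build_d_airport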

Data Model

The database is designed following star-schema principles, with one fact table and seven dimension tables (a sample query against this schema is sketched after the list below).

[Image: star-schema data model diagram]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains the date dimension attributes
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
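
To show how this model answers a business question, the sketch below counts monthly arrivals by country of origin by joining the fact table to two dimensions. The column names (country_id, country_name, arrival_date, date) are assumptions about the schema, and the call uses the google-cloud-bigquery client library.

# Sketch: a hypothetical analytical query against the star schema.
# Column names are assumptions; adjust them to the actual table definitions.
from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

query = """
SELECT c.country_name,
       FORMAT_DATE('%Y-%m', t.date) AS arrival_month,
       COUNT(*) AS arrivals
FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
JOIN `cloud-data-lake.IMMIGRATION_DWH.D_COUNTRY` c ON f.country_id = c.country_id
JOIN `cloud-data-lake.IMMIGRATION_DWH.D_TIME` t ON f.arrival_date = t.date
GROUP BY c.country_name, arrival_month
ORDER BY arrivals DESC
LIMIT 20
"""

for row in client.query(query).result():
    print(row.country_name, row.arrival_month, row.arrivals)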

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph view of the pipeline]

A DummyOperator named start_pipeline kicks off the pipeline, followed by four load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration data is loaded as Parquet files, while the others are in CSV format. Row-count checks run after each load to BigQuery.

Next, the pipeline loads three master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure it contains no duplicates (see the sketch below). The other dimension tables are also created, and the pipeline finishes.
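
The duplicate check could look like the sketch below, added inside the project DAG alongside the other tasks. The cicid key column is an assumption about what uniquely identifies an immigration record.

# Sketch: a duplicate check on F_IMMIGRATION_DATA using BigQueryCheckOperator.
# The operator fails the task if the query's first row is falsy (here: False).
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

check_no_duplicates = BigQueryCheckOperator(
    task_id="check_f_immigration_no_duplicates",
    use_legacy_sql=False,
    sql="""
        SELECT COUNT(*) = 0
        FROM (
            SELECT cicid
            FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
            GROUP BY cicid
            HAVING COUNT(*) > 1
        )
    """,
)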

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery both handle petabyte-scale data. Airflow is not a bottleneck, since it only issues commands to other services.

Pipelines run at 7 AM daily: how would the dashboards be updated, and would they still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures so that issues are caught before the dashboards are refreshed. A sketch of these settings follows.
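
A minimal sketch of those settings, assuming placeholder retry values and a placeholder alert address; Slack alerts could be wired up similarly through an on_failure_callback.

# Sketch: daily 7 AM schedule with retries and email alerts on failure.
# Retry counts and the email address are placeholders.
from datetime import datetime, timedelta

from airflow import DAG

default_args = {
    "owner": "airflow",
    "retries": 2,                          # retry a failed task before alerting
    "retry_delay": timedelta(minutes=5),
    "email": ["data-team@example.com"],    # placeholder address
    "email_on_failure": True,
}

dag = DAG(
    "cloud_data_lake_pipeline",
    default_args=default_args,
    schedule_interval="0 7 * * *",         # every day at 7 AM
    start_date=datetime(2022, 1, 1),
    catchup=False,
)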

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing it. If more people or services need access to the data, we can add steps to write to a horizontally scalable NoSQL database such as Datastore, Cassandra, or Bigtable.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project, but it is not recommended for a production system. Keep the service account key somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}
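
If you prefer to verify the copy from Python, here is a quick sketch using the google-cloud-storage client; the project id and bucket name are whatever you chose above.

# Sketch: list the copied objects to verify the bucket structure.
from google.cloud import storage

client = storage.Client(project="cloud-data-lake")  # your project id
for blob in client.list_blobs("your_bucket_name", prefix="immigration_data/"):
    print(blob.name, blob.size)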

Local setup

Clone the project, then set up the environment and install the required tooling as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI showing the DAG list]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[Image: Airflow connection configuration screen]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup (a sketch of how they feed a load task follows):

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'
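
These values are typically interpolated into the task definitions. As a short sketch (the table and object names here are hypothetical), a load task might consume them like this:

# Sketch: how the parameters above might feed a load task (names are hypothetical).
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load_airports = GCSToBigQueryOperator(
    task_id="load_airports_to_staging",
    bucket=gs_bucket,
    source_objects=["airports/airport-codes_csv.csv"],
    destination_project_dataset_table=f"{project_id}.{staging_dataset}.AIRPORTS",
    source_format="CSV",
)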

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[Image: final data warehouse datasets and tables in BigQuery]
