First-party data integration solution built for marketing teams to enable audience and conversion onboarding into Google Marketing products (Google Ads, Campaign Manager, Google Analytics).

Overview

Megalista

Sample integration code for onboarding offline/CRM data from BigQuery as custom audiences or offline conversions in Google Ads, Google Analytics 360, Google Display & Video 360 and Google Campaign Manager.

Disclaimer: This is not an officially supported Google product.

Supported integrations

  • Google Ads

    • Contact Info Customer Match (email, phone, address) [details]
    • ID-based Customer Match (device ID, user ID)
    • Offline Conversions through gclid [details]
    • Store Sales Direct (SSD) conversions [details]
  • Google Analytics (Universal Analytics)

  • Campaign Manager

    • Offline Conversions API (user id, device id, match id, gclid, dclid) [details]
  • Google Analytics 4

  • Appsflyer

    • S2S Offline events API (conversion upload), to be used for audience creation and in-app events with Google Ads and DV360 [details]

How it works

Megalista was designed to separate the configuration of conversion/audience upload rules from the engine, giving non-technical teams (e.g. Media and Business Intelligence) the freedom to set up multiple upload rules on their own.

The solution consists of (1) a Google Spreadsheet (template) in which all rules are defined by mapping a data source (BigQuery table) to a destination (data upload endpoint), and (2) an Apache Beam workflow running on Google Dataflow, scheduled to upload the data in batch mode.
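
For illustration, a single rule in the Spreadsheet might look like the sketch below. The dataset and table names are hypothetical, and the actual template defines the authoritative column layout:

    Source (BigQuery table)         Destination (upload endpoint)
    crm_dataset.customer_emails     Google Ads Contact Info Customer Match
    crm_dataset.store_sales         Google Ads Store Sales Direct (SSD) conversions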

Prerequisites

Google Cloud Services

  • Google Cloud Platform account
    • Billing enabled
    • BigQuery enabled
    • Dataflow enabled
    • Cloud Storage enabled
    • Cloud Scheduler enabled
  • At least one of:
    • Google Ads API Access
    • Campaign Manager API Access
    • Google Analytics API Access
  • Python3
  • Google Cloud SDK

Access Requirements

These are the minimum roles necessary to deploy Megalista:

  • OAuth Config Editor
  • BigQuery User
  • BigQuery Job User
  • BigQuery Data Viewer
  • Cloud Scheduler Admin
  • Storage Admin
  • Dataflow Admin
  • Service Account Admin
  • Logs Viewer
  • Service Consumer

APIs

The required APIs depend on the upload endpoints in use. We recommend enabling all of them:

  • Google Sheets (required for any use case) [link]
  • Google Analytics [link]
  • Google Analytics Reporting [link]
  • Google Ads [link]
  • Campaign Manager [link]
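
If you prefer the command line, these APIs can also be enabled with gcloud. The service names below are the standard Google Cloud identifiers for the APIs listed above; double-check them against the links if in doubt:

    gcloud services enable \
        sheets.googleapis.com \
        analytics.googleapis.com \
        analyticsreporting.googleapis.com \
        googleads.googleapis.com \
        dfareporting.googleapis.com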

Installation

Create a copy of the configuration Spreadsheet

WIP

Creating required access tokens

To access campaigns and user lists on Google's platforms, this Dataflow job will need OAuth tokens for an account that can authenticate in those systems.

To create them, follow these steps:

  • Access the GCP console
  • Go to the APIs & Services section in the top-left menu
  • Open the OAuth Consent Screen page and configure an Application name
  • Then go to Credentials and create an OAuth Client ID with the Application type set to Desktop App
  • This will generate a Client ID and a Client Secret
  • Run the generate_megalist_token.sh script in this folder, providing these two values, and follow the instructions
    • Sample: ./generate_megalist_token.sh client_id client_secret
  • This will generate the Access Token and the Refresh Token
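
The commands later in this README read these values from environment variables, so it can be convenient to export them right away. This is just a sketch; the variable names match the placeholders used in the Running locally section below:

    export CLIENT_ID=<your_client_id>
    export CLIENT_SECRET=<your_client_secret>
    export ACCESS_TOKEN=<access_token_from_script>
    export REFRESH_TOKEN=<refresh_token_from_script>

The run command also expects GOOGLE_ADS_DEVELOPER_TOKEN, CONFIGURATION_SHEET_ID, GCP_PROJECT_ID, and GCS_BUCKET to be set the same way.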

Creating a bucket on Cloud Storage

This bucket will hold the deployed code for this solution. To create it, navigate to the Storage link in the top-left menu on GCP and click Create bucket. You can use the Regional location and Standard storage class for this bucket.
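
If you prefer the command line, an equivalent bucket can be created with gsutil (the bucket name is a placeholder; choose the same region you plan to use for Dataflow):

    gsutil mb -l us-central1 -c standard gs://<your-megalista-bucket>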

Running Megalista

We recommend first running it locally to make sure that everything works. Create some sample tables on BigQuery for one of the uploaders and check that the data arrives correctly at the destination. After that is done, upload the Dataflow template to GCP and try running it manually via the UI to make sure it works. Lastly, configure Cloud Scheduler to run Megalista at the desired frequency, and you'll have a fully functional data integration pipeline.

Running locally

python3 megalist_dataflow/main.py \
  --runner DirectRunner \
  --developer_token ${GOOGLE_ADS_DEVELOPER_TOKEN} \
  --setup_sheet_id ${CONFIGURATION_SHEET_ID} \
  --refresh_token ${REFRESH_TOKEN} \
  --access_token ${ACCESS_TOKEN} \
  --client_id ${CLIENT_ID} \
  --client_secret ${CLIENT_SECRET} \
  --project ${GCP_PROJECT_ID} \
  --region us-central1 \
  --temp_location gs://${GCS_BUCKET}/tmp

Deploying Pipeline

To deploy, use the following command: ./deploy_cloud.sh project_id bucket_name region_name

Manually executing pipeline using Dataflow UI

To execute the pipeline, use the following steps:

  • Go to Dataflow on GCP console
  • Click on Create job from template
  • On the template selection dropdown, select Custom template
  • Find the megalist file in the bucket you created, inside the templates folder
  • Fill in the parameters required and execute

Scheduling pipeline

To schedule daily/hourly runs, go to Cloud Scheduler. A sample gcloud command is shown after the Service Account steps below.

Creating a Service Account

It's recommended to create a new Service Account to be used with Cloud Scheduler:

  • Go to IAM & Admin > Service Accounts
  • Create a new Service Account with the following roles:
    • Cloud Dataflow Service Agent
    • Dataflow Admin
    • Storage Objects Viewer
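
Below is a minimal sketch of a Cloud Scheduler job that launches the deployed template through the Dataflow REST endpoint. The job name, schedule, service account, and message body are illustrative; the pipeline parameters are the same ones used in the Running locally section:

    gcloud scheduler jobs create http megalista-daily \
        --schedule="0 3 * * *" \
        --http-method=POST \
        --uri="https://dataflow.googleapis.com/v1b3/projects/<project_id>/locations/<region>/templates:launch?gcsPath=gs://<bucket_name>/templates/megalist" \
        --oauth-service-account-email="<scheduler-sa>@<project_id>.iam.gserviceaccount.com" \
        --message-body='{"jobName": "megalista", "parameters": {"...": "..."}}'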

Usage

Every upload method expects as its source a BigQuery table with specific fields, in addition to specific configuration metadata. For details on how to set up your upload routines, refer to the Megalista Wiki or the Megalista user guide.
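
As a purely hypothetical illustration, a source table could be created with the bq CLI as below. The exact column names and required fields for each uploader are defined in the Wiki and user guide, not here:

    bq mk --table <project_id>:crm_dataset.customer_match_contact \
        email:STRING,phone:STRING,first_name:STRING,last_name:STRING,zip:STRING,country:STRING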

Comments
  • Add Firestore source

    Hello. I've implemented a Firestore source, which is meant to work as an alternative to Sheets for parametrization purposes.

    • Why Firestore? Some of our clients are unable to access the Spreadsheets domain for security reasons, and Firestore proved to be a great option. It provides reliable and dynamic storage, and is quite simple to use. Also, the expected usage level for Megalista should fall into the free tier.

    Additionally, Firestore has great integration with App Engine. In a future PR, I'd like to add a highly customizable App Engine form integrated with Firestore, providing an easy-to-use alternative to Sheets, especially for non-technical users unable to access it.

    • Requirements: For now, Firestore usage requires a GCP project with native Firestore mode.

    • Usage: The default fields for any upload type are: active (yes/no), bq_dataset, bq_table, source and type. Valid upload types and their required fields can be seen in the firestore_execution_source file.

    As with Sheets, account IDs are included separately. In this case, in a Firestore document called account_config, within the same collection. In other words, the hierarchy is: Firestore collection -> document entries for each schedule + account_config document.
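
    For example, a collection holding two schedules could be laid out as follows (the collection and document names here are illustrative, not prescribed):

        megalista_schedules (collection)
        ├── account_config          (document holding account IDs)
        ├── customer_match_daily    (active, bq_dataset, bq_table, source, type)
        └── offline_conversions_1h  (active, bq_dataset, bq_table, source, type)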

    To make Megalista read from Firestore, pass the setup_firestore_collection command line parameter. If setup_sheet_id is provided, Sheets will be used instead.

    • Improvement opportunities:
    1. For now, the Firestore source expects BigQuery parameters, as it is the only ingest option currently available. This should be made flexible in the future, to allow options such as GCS.

    2. The list of parameter metadata was included in firestore_execution_source, and could be modularized in the future.

    • Testing: I have only been able to test uploads to Google Ads and Google Analytics so far, as we generally lack access and test data for other platforms. Help with further testing would be greatly appreciated.
    opened by nivaldoh 10
  • Add partial failure support for Google Ads

    Hi, we've added support for partial failures in Google Ads conversion uploads. By default, if a single row contains errors, the entire batch is blocked. This change aims to allow any valid rows to be uploaded, regardless of errors in other rows in the same batch. For more information: https://developers.google.com/adwords/api/docs/guides/partial-failure

    Topics for future consideration:

    1. We could make this optional, if needed.
    2. Megalista currently shows only the number of rows that reached the uploader. We intend to contribute again soon with changes that display the number of rows that were, in fact, accepted by the API, as well as logs that register invalid rows individually and the reasons for their rejection.
    opened by nivaldoh 10
  • Not being able to upload Custom Variables as part of CM offline conversion data

    Hi There,

    We have been running a Megalista implementation for a client for quite a long time. It's been working wonderfully so far. However, in one of the scenarios, where we use the CM offline conversion upload functionality, it uploads all the data except Custom Variables. Below are the steps that we have performed and confirmed at our end.

    1. All these custom variables are available and enabled at the floodlight level.
    2. We also used regular API calls to push these same variables into the CM platform, and that worked fine. So the problem is not with the data or the setup on the CM side.

    We value your time and effort. However, it would be of huge help if it were possible to look at this from Megalista's end. Any help or guidance would be highly appreciated, as we are not sure how to proceed beyond this point.

    Additionally, if any input is needed from our end, we would be happy to contribute.

    Thanks & Regards, Sandhya

    opened by Sandhya-1988 7
  • Fixed error when saving the uploaded data to a BigQuery table

    The process fails to create a BigQuery table with the uploaded data:

    table_name = self._bq_ops_dataset.get() + '.' + execution.source.source_metadata[1] + "_uploaded"
    TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' 
    

    The fix was to change it to execution.source.source_metadata[0] instead of self._bq_ops_dataset.get(), which was returning None instead of the dataset name.

    opened by gabrielmpaula 7
  • Add dclid implementation to the CM connector and fixed a bug in the customVariables column

    Added a dclid implementation to the CM connector and fixed a bug where the customVariables (type/value) info was not retrieved from the table, since it did not match the regex for the filtered columns. Migrated the CM API to version 4, since the previous version will be deprecated in Feb 2023. Updated google-api-python-client to the latest version to support CM API version 4. All tests passed. Tested the CM connector and the Customer Match connector.

    opened by anaesqueda 3
  • Trouble uploading audience to Google Ads

    I'm trying to send audience lists to Google Ads, but I'm getting the following error in all forms of Customer Match:

    ERROR:megalista.GoogleAdsCustomerMatchAbstractUploader:'int' object has no attribute 'name'
    Traceback (most recent call last):
      File "/home/bianca_santos/google-marketing-data-sync/megalista-v2/megalista/megalista_dataflow/uploaders/utils.py", line 72, in inner
        return func(*args, **kwargs)
      File "/home/bianca_santos/google-marketing-data-sync/megalista-v2/megalista/megalista_dataflow/uploaders/google_ads/customer_match/abstract_uploader.py", line 209, in process
        execution.destination.destination_metadata))
      File "/home/bianca_santos/google-marketing-data-sync/megalista-v2/megalista/megalista_dataflow/uploaders/google_ads/customer_match/abstract_uploader.py", line 60, in _create_list_if_it_does_not_exist
        customer_id, list_name, list_definition)
      File "/home/bianca_santos/google-marketing-data-sync/megalista-v2/megalista/megalista_dataflow/uploaders/google_ads/customer_match/abstract_uploader.py", line 70, in _do_create_list_if_it_does_not_exist
        resource_name = self._get_user_list_resource_name(customer_id, list_name)
      File "/home/bianca_santos/google-marketing-data-sync/megalista-v2/megalista/megalista_dataflow/uploaders/google_ads/customer_match/abstract_uploader.py", line 109, in _get_user_list_resource_name
        query_aux = f"AND user_list.access_reason={ads_client.enums.AccessReasonEnum.OWNED.name}"
    AttributeError: 'int' object has no attribute 'name'
    ERROR:megalista.GoogleAdsCustomerMatchAbstractUploader:Error uploading data.
    (traceback identical to the one above)
    INFO:megalista.GoogleAdsOfflineUploader:Uploading 1000 rows...
    INFO:megalista.GoogleAdsOfflineConversionsUploader:Uploading 1000 offline conversions on customers/5852184472/conversionActions/792598949 to Google Ads.
    ERROR:megalista.GoogleAdsOfflineConversionsUploader:Error on uploading offline conversions: Multiple errors in 'details'. First error: The click or call is owned by a customer account that the uploading customer does not manage., at conversions[0].gclid.
    INFO:megalista:Completed successfully!

    The account is an MCC Google Ads account.

    opened by BiancaK3 2
  • [Docs] Possible outdated documentation

    We have identified 1 possible instance of outdated documentation:

    About

    This is part of a research project that aims to automatically detect outdated documentation in GitHub repositories. We are evaluating the validity of our approach by identifying instances of outdated documentation in real-world projects.

    We hope that this research will be a step towards keeping documentation up-to-date. If this has been helpful, consider updating the documentation to keep it in sync with the source code. If this has not been helpful, consider updating this issue with an explanation, so that we can improve our approach. Thanks!

    opened by wesleytanws 2
  • Customer Match Upload with login_customer_id

    Changed the code to use the AccountConfig Customer ID as the login_customer_id for Google Ads API requests. It uses each audience's Metadata 5 (Account), if it exists, as the customer_id value in requests. If this metadata does not exist, it uses the AccountConfig Customer ID for the customer_id as well.

    This allows uploading audiences to non-MCC accounts, where the manager account needs to be passed as the login_customer_id in the new Google Ads API.

    opened by alvarolamas10 2
  • Recode change and reformatted

    This recode and reformatting change contains:

    • [x] reformatted the code to be more readable and easier to maintain
    • [x] changed %s formatting to f-strings for better readability and fewer errors
    • [x] passed local flake8 checks
    opened by slowy07 2
  • Documentation mismatch in Google Ads Customer Match Device ID schema

    According to the documentation, the expected schema for the Google Ads Customer Match Device ID table is:

    | Column name      | Type   | Description                                            | Requirement |
    | :--------------: | :----: | :----------------------------------------------------: | :---------: |
    | mobile_device_id | STRING | Mobile device ID identifier (Android AdId or iOS IDFA) | required    |

    But in the source code, the field is mobileId.

    def get_row_keys(self) -> List[str]:
        return ['mobileId']
    
    opened by xfer 2
  • Question: project name is megalist or megalista?

    I have a question, @astivi and @caiotomazelli.

    The name of the repository and documentation is Megalista, but the folder structure and some parameters use the name megalist.

    We understand that the name of the solution is Megalista, and all coding must use the megalist nomenclature. Is that correct?

    opened by joaquimsn 2
  • Allow auth via manager access

    I noticed that the README states:

    Calls to the Google Ads API will fail if the user that generated the OAuth2 credentials (Access Token and Refresh Token) doesn't have direct access to the Google Ads account to which the calls are being directed. It's not enough for the user to have access to an MCC above this account and to be able to access the account through the interface; the user must have permissions on the account itself.

    However, the Google Ads API client library for Java supports auth via manager access by specifying login-customer-id as described in https://developers.google.com/google-ads/api/docs/client-libs/java/config-file and https://developers.google.com/google-ads/api/docs/concepts/call-structure#cid.

    opened by jradcliff 0
  • BigQuery column names schema should be "dimension" not "cd" for Data Import Destination

    Hi guys, I'm using Megalista to upload some audiences from BigQuery to Google Analytics Data Import.

    Right now you're checking the column names in the BigQuery data source against the pattern 'cd\d+', but it doesn't work when we upload the data into Google Analytics, since Data Import only accepts the 'dimension\d+' pattern.

    So, my recommendation is to change the following block in the script megalista_dataflow/data_sources/data_source.py:

    'GA_DATA_IMPORT': {
        'columns': [
            {'name': 'cd\\d+', 'required': True, 'data_type': 'string'},
            {'name': 'cd\\d+', 'required': True, 'data_type': 'string'},
            {'name': 'cd\\d+', 'required': False, 'data_type': 'string'},
        ],
        'groups': []
    },
    

    to:

    'GA_DATA_IMPORT': {
        'columns': [
            {'name': 'dimension\\d+', 'required': True, 'data_type': 'string'},
            {'name': 'dimension\\d+', 'required': True, 'data_type': 'string'},
            {'name': 'dimension\\d+', 'required': False, 'data_type': 'string'},
        ],
        'groups': []
    },
    

    Best,

    Gibran

    opened by gibrano 0
  • [GA4 MP] timestamp_micros should be sent inside the events array

    CHANGELOG:

    • Alter timestamp_micros location within the measurement protocol payload for GA4 to inside the events array

    OBSERVATIONS

    Looking at the Measurement Protocol Reference, it seems like timestamp_micros should be sent at the top level of the payload. However, empirically, this does not work. I performed experiments both by using Megalista's workflow and by sending individual POST requests to the collection endpoint.

    Intuitively, it makes sense that it works this way, as the same request can send multiple events for a given user with distinct timestamps.
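
    For reference, a minimal request with this change applied would look like the sketch below. The measurement ID, API secret, and event fields are placeholders; note timestamp_micros inside the event object rather than at the top level:

        curl -X POST \
          "https://www.google-analytics.com/mp/collect?measurement_id=<G-XXXXXXXXXX>&api_secret=<api_secret>" \
          -H "Content-Type: application/json" \
          -d '{
            "client_id": "123.456",
            "events": [{
              "name": "offline_purchase",
              "timestamp_micros": 1650000000000000,
              "params": {"value": 10.0, "currency": "USD"}
            }]
          }'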

    opened by fsalhani 2
Releases (v4.4)
  • v4.4 (Apr 12, 2022)

    Adds error notification by email capabilities.

    Every error that occurs inside uploaders can now be aggregated and sent by email.
    For setup instructions, refer to the "Errors notifications by email" section of the README file.

  • v4.3 (Feb 23, 2022)

    Updates the Google Ads API from v8 to v10.

    This update introduces resend controls for Google Ads Offline Conversions. The change was driven by a new error thrown starting with API v9. More information on possible changes required to source database tables can be found on the Update Instructions page.

  • v4.2 (Sep 15, 2021)

    Changes the execution of the Google Ads API Customer Match uploader to create a single job and append operations to that single job. It also splits user identifiers by operation.

  • v4.1 (Jun 22, 2021)

  • v4.0 (May 26, 2021)

Owner
Google (Google ❤️ Open Source)