Understanding the field usage of any object in Salesforce

Overview


One of the biggest problems I have faced while working with Salesforce is understanding and evaluating the field usage of a custom object. This application does the work for you, generating a CSV/Excel file with the created date of the last record that used each field, and the relative usage percentage across all of them.

To make this app work, you will need System Administrator credentials to log into Salesforce.
This app is currently developed and tested with the Spyder IDE, which is part of Anaconda.


Let's understand how it works!

Dependencies

First, we need our dependencies. We will use pandas, datetime, and Simple Salesforce.

from simple_salesforce import Salesforce
import pandas as pd
import datetime

Credentials

Next, we are going to connect to Salesforce with Simple Salesforce

sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')

Your organizationId should look like this: 00JH0000000tYml.
To find it, just follow these steps (Lightning Experience):

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • In the Quick Find bar (the one on the left), type Company Information
  • Click Company Information
  • Finally, look for Salesforce.com Organization ID. The ID will look like 00JH0000000tYml
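
Alternatively, once the connection from the step above works, you can fetch the ID with a quick SOQL query. A minimal sketch, assuming the logged-in user can read the standard Organization object:

# Fetch the org ID programmatically instead of digging through Setup
org = sf.query("SELECT Id FROM Organization")['records'][0]
print(org['Id'])  # looks like 00JH0000000tYml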

The Object

Now you will need to plug in the object name. The object name is the API Name of the object. Normally, if it is a custom object, it will end with __c.
To find the API Name, just follow these instructions:

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • Click Object Manager in the header of the page
  • Find your object by its name and copy the API Name, which is next to the object name

This part of the code is going to use the name of the object to bring back all of its fields:
  object_to_evaluate = "object"
  object_fields = getattr(sf, object_to_evaluate).describe()
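
A quick way to sanity-check the describe() result is to print each field's API name and label before running the full evaluation:

# List every field on the object: API name and label
for field in object_fields['fields']:
    print(field['name'], '-', field['label'])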

The Date

This part is important and will make you think. By default, the code brings the data from the last year, so it is important to understand what happened during that period. If you released a new field a week ago, the report will show that it was used a couple of days ago, but the usage will be really low, around 2% (7/365). You can change the period to evaluate by simply replacing 365 with the number of days you want.

last_year = (datetime.datetime.now() + datetime.timedelta(days=-365)).strftime("%Y-%m-%dT%H:%M:%SZ")
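
If you plan to run the report over different windows, you can wrap this in a small helper. A minimal sketch; soql_datetime and days_back are hypothetical names, and note that, like the line above, it uses local time with a Z suffix (datetime.datetime.utcnow() would be strictly UTC):

# Hypothetical helper: build a SOQL-ready datetime literal N days in the past
def soql_datetime(days_back):
    cutoff = datetime.datetime.now() - datetime.timedelta(days=days_back)
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

last_quarter = soql_datetime(90)  # evaluate only the last 90 days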

The Result

Now we are going to iterate over all the fields, getting the created date of the last record that used each field and the number of records that used it during the period (one year). Note that some field types cannot be filtered with != null in a SOQL WHERE clause (long text areas, for example), which is why each query is wrapped in a try/except and those fields are reported as not found.

# Define an empty list to collect the results for every field
data = []

for field in object_fields['fields']:
    print(field['name'])
    try:
        # Last record (by Id) in the period where the field was not empty
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, createddate, SystemModStamp \
                      FROM {} \
                      WHERE createddate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        # Number of records in the period where the field was not empty
        field_quantity = pd.DataFrame(
            sf.query("SELECT count(Id) \
                    FROM {} \
                    WHERE createddate > {} \
                    AND {} != null".format(object_to_evaluate, last_year, field['name'])
                    ))['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        # Field could not be queried (e.g. a non-filterable field type)
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)
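
Keep in mind that the loop issues two SOQL queries per field, so a wide object consumes API calls quickly. If you want to see how much of your daily allowance a run used, recent simple_salesforce versions expose the usage header after any request; a minimal sketch:

# Inspect API usage reported by the last Salesforce response
print(sf.api_usage)  # e.g. {'api-usage': Usage(used=512, total=15000)}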

Some Formatting

Formatting is a nice-to-have for understanding the result, especially if you are going to share the insights. We are going to rename some columns, format the date columns in a way that CSV/Excel can understand, and add a % of use column. Note that % of use is relative to the most-used field in the period, not to the total number of records.

data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value
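
If you are going to share the report, it can also help to sort the fields from most used to least used; a minimal sketch using plain pandas:

# Sort fields by usage, most used first (not-found fields have no Quantity and sort last)
data_to_csv = data_to_csv.sort_values('% of use', ascending=False)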

The Files

Finally, we are going to export the files to CSV and Excel, so you can choose whichever format you prefer. The files will be stored in the same folder as the app: if you are running it from your Desktop folder, that is where the CSV and Excel files will appear.

data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")
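
By default pandas also writes the dataframe index as a first unnamed column; if you don't want it in the report, pass index=False to both writers (standard pandas behavior, not specific to this app):

data_to_csv.to_csv('last Field Usage Date.csv', index=False)
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f", index=False)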


The final code will look like this:

from simple_salesforce import Salesforce
import pandas as pd
import datetime

# Connection to Salesforce
sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')


# Change the name to the object that you want to evaluate. If it is a custom object, remember to end it with __c
object_to_evaluate = "object"

# Get all the fields from the Object
object_fields = getattr(sf, object_to_evaluate).describe()

# Define an empty list to append the information
data = []

# Create a date variable to define from when we want to get the data
last_year = (datetime.datetime.now() + datetime.timedelta(days=-365)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Iterate over the fields and bring the created date of the last record where the field wasn't empty
# If no record is found, store the field in the CSV/Excel file as not found
for field in object_fields['fields']:
    print(field['name'])
    try:
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, createddate, SystemModStamp \
                      FROM {} \
                      WHERE createddate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year , field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        field_quantity = pd.DataFrame(
            sf.query("SELECT count(Id) \
                    FROM {} \
                    WHERE createddate > {} \
                    AND {} != null".format(object_to_evaluate, last_year , field['name'])
                    ))['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity                        
        data.append(field_detail)

        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']] , 
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']] , 
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)

# Format the CSV/Excel report
data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value

# Export the data to a CSV/Excel file
data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")

HOPE IT HELPS!

If you like it, remember to
Buy Me A Coffee

Owner
Sebastian Undurraga