Python library to make development of portfolio analysis faster and easier

Overview

Trafalgar is a Python library that makes the development of portfolio analysis faster and easier.

Installation 🔥

For the moment, Trafalgar is still in beta development. To install it you should:

  1. Download requirements.txt into the folder where you want to use the trafalgar library
  2. Go to that folder with the command prompt and run:
pip install -r requirements.txt
  3. Download trafalgars-0.0.1-py3-none-any.whl into the same folder
  4. Go to that folder with the command prompt and run:
pip install trafalgars-0.0.1-py3-none-any.whl

Features include 📈

  • Get close price, open price, adjusted close and volume data, plus graphs of each, in one line of code!
  • Build an efficient frontier program in 3 lines of code
  • Backtest a portfolio, see its stats and compare it to a benchmark

Here is the code of this article in a Google Colab notebook, which you can use to follow along: https://colab.research.google.com/drive/1qgFDDQneQP-oddbJVWWApfPKFMnbpj6I?usp=sharing

Documentation

Call the library

First, import the library:

from trafalgar import *

Graph of the closing price of a stock

#graph_close(stock, start_date, end_date)
graph_close(["FB"], "2020-01-01", "2021-01-01")

Graph of the closing price of multiple stocks

graph_close(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the volume

#graph_volume(stock, start_date, end_date)

#for one stock
graph_volume(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_volume(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the opening price

#graph_open(stock, start_date, end_date)

#for one stock
graph_open(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_open(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the adjusted closing price

#graph_adj_close(stock, start_date, end_date)

#for one stock
graph_adj_close(["FB"], "2020-01-01", "2021-01-01")

#for multiple stocks
graph_adj_close(["FB", "AAPL", "TSLA"], "2020-01-01", "2021-01-01")

Graph the returns (for each day)

#returns_graph(stock, start_date, end_date)

#this one only works for one stock at a time
returns_graph("FB", "2020-01-01", "2021-01-01")

Get closing price data (in dataframe format)

#close(stock, start_date, end_date)
close(["AAPL"], "2020-01-01", "2021-01-01")

Get volume data (in dataframe format)

#volume(stock, start_date, end_date)
volume(["AAPL"], "2020-01-01", "2021-01-01")

Get opening price data (in dataframe format)

#open(stock, start_date, end_date)
open(["AAPL"], "2020-01-01", "2021-01-01")

Get adjusted closing price data (in dataframe format)

#adj_close(stock, start_date, end_date)
adj_close(["AAPL"], "2020-01-01", "2021-01-01")

Covariance between stocks

#covariance(stocks, start_date, end_date, days) -> usually, days = 252
covariance(["AAPL", "DIS", "AMD"], "2020-01-01", "2021-01-01", 252)
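For reference, here is roughly what an annualized covariance matrix looks like when computed by hand with pandas. Treating the days argument as a scaling factor for daily-return covariance (252 trading days a year) is our assumption about the convention, not a statement about trafalgar's internals:

#illustrative only: annualized covariance of daily returns computed directly with pandas
prices = adj_close(["AAPL", "DIS", "AMD"], "2020-01-01", "2021-01-01")
daily_returns = prices.pct_change().dropna()
cov_matrix = daily_returns.cov() * 252   #scale daily covariance to an annual figure
print(cov_matrix)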

Get data from a stock in OHLCV format directly

#ohlcv(stock, start_date, end_date)
ohlcv("AAPL", "2020-01-01", "2021-01-01")

Graph the cumulative returns of a stock/portfolio

#cum_returns_graph(stocks, weights, start_date, end_date)
cum_returns_graph(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

Get cumulative returns data of a stock/portfolio (in a dataframe format)

#cum_returns(stocks, weights, start_date, end_date)
cum_returns(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")
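As a rough mental model of the portfolio calculation, daily returns are weighted, summed into a portfolio return and then compounded over time. A hand-rolled sketch assuming fixed weights and no rebalancing (trafalgar's exact method may differ):

import numpy as np

#weight each stock's daily return, sum into a portfolio return, then compound
prices = adj_close(["FB", "AAPL", "AMD"], "2020-01-01", "2021-01-01")
daily_returns = prices.pct_change().dropna()
portfolio_returns = daily_returns.dot(np.array([0.3, 0.4, 0.3]))
cumulative = (1 + portfolio_returns).cumprod() - 1
print(cumulative.tail())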

Disclaimer: from here on, the functions are designed for portfolios rather than individual stocks. There is still a way to use them on a single stock:

#let's say we want to calculate the annual volatility of Apple alone
#the stock list needs at least 2 elements; here they are Apple and Facebook
#to get Apple-only volatility, set Facebook's weight to 0 (no money allocated to it) and Apple's weight to 1 (all the money allocated to it)
annual_volatility(["FB", "AAPL"], [1, 0],"2020-01-01", "2021-01-01")

Annual Volatility of a portfolio/stock

#annual_volatility(stocks, weights, start_date, end_date)

#for your portfolio
annual_volatility(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

#for one stock (FB)
annual_volatility(["FB", "AAPL"], [1, 0],"2020-01-01", "2021-01-01")
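Annualized volatility is conventionally the standard deviation of daily portfolio returns scaled by the square root of 252 trading days. A sketch of that convention, which trafalgar is assumed (not guaranteed) to follow:

import numpy as np

#daily portfolio returns, then square-root-of-time scaling to an annual volatility
prices = adj_close(["FB", "AAPL", "AMD"], "2020-01-01", "2021-01-01")
daily = prices.pct_change().dropna().dot(np.array([0.3, 0.4, 0.3]))
annual_vol = daily.std() * np.sqrt(252)
print(annual_vol)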

Sharpe Ratio of a portfolio/stock

#sharpe_ratio(stocks, weights, start_date, end_date)

#for your portfolio
sharpe_ratio(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3],"2020-01-01", "2021-01-01")

#for one stock (FB)
sharpe_ratio(["FB", "AAPL"], [1, 0],"2020-01-01", "2021-01-01")
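The Sharpe ratio is usually the annualized mean return divided by the annualized volatility, after subtracting a risk-free rate. A sketch with the risk-free rate taken as 0 (whether trafalgar uses exactly this convention is an assumption):

import numpy as np

#annualized Sharpe ratio: mean daily return over daily volatility, scaled by sqrt(252)
prices = adj_close(["FB", "AAPL", "AMD"], "2020-01-01", "2021-01-01")
daily = prices.pct_change().dropna().dot(np.array([0.3, 0.4, 0.3]))
sharpe = (daily.mean() / daily.std()) * np.sqrt(252)   #risk-free rate assumed to be 0 here
print(sharpe)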

Compare the returns of a portfolio/stock to a benchmark

#returns_benchmark(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
returns_benchmark(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock (AAPL)
returns_benchmark(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")

Blue line: returns of your portfolio. Red line: returns of the benchmark.

Compare the cumulative returns of a portfolio/stock to a benchmark

#cum_returns_benchmark(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
cum_returns_benchmark(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock (AAPL)
cum_returns_benchmark(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")

Blue line: cumulative returns of your portfolio. Red line: cumulative returns of the benchmark.

Alpha and Beta of a portfolio/stock

#alpha_beta(stocks, weights, benchmark, start_date, end_date)

#for your portfolio
alpha_beta(["AAPL", "AMD", "MSFT"], [0.3, 0.4, 0.3], "SPY", "2020-01-01", "2021-01-01")

#for one stock (AAPL)
alpha_beta(["AAPL", "AMD"], [1,0], "SPY", "2020-01-01", "2021-01-01")
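Alpha and beta come from regressing portfolio returns on benchmark returns. A minimal numpy sketch; the use of daily returns and of ticker-named columns is our assumption:

import numpy as np

#regress daily portfolio returns on daily benchmark returns: slope = beta, intercept = daily alpha
prices = adj_close(["AAPL", "AMD", "SPY"], "2020-01-01", "2021-01-01")
returns = prices.pct_change().dropna()
portfolio = returns[["AAPL", "AMD"]].dot(np.array([1.0, 0.0]))
beta, alpha = np.polyfit(returns["SPY"], portfolio, 1)
print(alpha, beta)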

Efficient frontier to optimize allocation of shares in your portfolio

#efficient_frontier(stocks, start_date, end_date, iterations) -> iterations = 10000 is a good starting point
efficient_frontier(["AAPL", "FB", "TSLA", "BABA"], "2020-01-01", "2021-01-01", 10000)
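The iterations argument suggests a Monte Carlo approach: sample many random weight vectors, compute the risk and return of each and keep the best trade-offs. The sketch below illustrates that idea; it is not necessarily trafalgar's implementation:

import numpy as np

#sample random portfolios and keep the one with the best return-to-risk ratio
prices = adj_close(["AAPL", "FB", "TSLA", "BABA"], "2020-01-01", "2021-01-01")
returns = prices.pct_change().dropna()
mean_annual = returns.mean().values * 252
cov_annual = returns.cov().values * 252
best = None
for _ in range(10000):
    w = np.random.random(len(returns.columns))
    w /= w.sum()                                   #random weights that sum to 1
    ret = float(w @ mean_annual)
    vol = float(np.sqrt(w @ cov_annual @ w))
    if best is None or ret / vol > best[0]:
        best = (ret / vol, w)
print(best)   #(highest return-to-risk ratio found, corresponding weights)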

Graph individual cumulative returns for your portfolio

#individual_cum_returns_graph(stocks, start_date, end_date)
individual_cum_returns_graph(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Individual cumulative returns data for your portfolio (in dataframe format)

#individual_cum_returns(stocks, start_date, end_date)
individual_cum_returns(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Mean daily return of each stock in your portfolio

#individual_mean_daily_return(stocks, start_date, end_date)
individual_mean_daily_return(["FB", "AAPL", "AMD"],"2020-01-01", "2021-01-01")

Portfolio mean daily return

#portfolio_daily_mean_return(stocks, weights, start_date, end_date)
portfolio_daily_mean_return(["FB", "AAPL", "AMD"], [0.3, 0.4, 0.3], "2020-01-01", "2021-01-01")

Value at Risk of a stock (still in development)

#VaR(stock, start_date, end_date, confidence_level)
VaR("FB","2020-01-01", "2021-01-01", 98)

License

MIT

Comments
  • Issue with Pandas datareader


    Describe the bug: This seems to affect all your branches.

    RemoteDataError Traceback (most recent call last)
    ----> 1 oracle(portfolio)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/empyrial.py in oracle(my_portfolio, prediction_days, based_on)
        334
        335
    --> 336 df = web.DataReader(asset, data_source='yahoo', start = my_portfolio.start_date, end= my_portfolio.end_date)
        337 df = pd.DataFrame(df)
        338 df.reset_index(level=0, inplace=True)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
        197 else:
        198     kwargs[new_arg_name] = new_arg_value
    --> 199 return func(*args, **kwargs)
        200
        201 return cast(F, wrapper)

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/data.py in DataReader(name, data_source, start, end, retry_count, pause, session, api_key)
        374
        375 if data_source == "yahoo":
    --> 376     return YahooDailyReader(
        377         symbols=name,
        378         start=start,

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/base.py in read(self)
        251 # If a single symbol, (e.g., 'GOOG')
        252 if isinstance(self.symbols, (string_types, int)):
    --> 253     df = self._read_one_data(self.url, params=self._get_params(self.symbols))
        254 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT'])
        255 elif isinstance(self.symbols, DataFrame):

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/yahoo/daily.py in _read_one_data(self, url, params)
        151 url = url.format(symbol)
        152
    --> 153 resp = self._get_response(url, params=params)
        154 ptrn = r"root.App.main = (.*?);\n}(this));"
        155 try:

    ~/anaconda3/envs/empyr/lib/python3.8/site-packages/pandas_datareader/base.py in _get_response(self, url, params, headers)
        179     msg += "\nResponse Text:\n{0}".format(last_response_text)
        180
    --> 181     raise RemoteDataError(msg)
        182
        183 def _get_crumb(self, *args):

    RemoteDataError: Unable to read URL: https://finance.yahoo.com/quote/BABA/history?period1=1591671600&period2=1625972399&interval=1d&frequency=1d&filter=history
    Response Text: Yahoo's "Will be right back..." maintenance page ("Thank you for your patience. Our engineers are working quickly to resolve the issue.")


    opened by geofffoster 8
  • Failed to build scs ERROR: Could not build wheels for scs which use PEP 517 and cannot be installed directly


    I am using Python 3.8.10. I had a separate environment and I got the following error when pip installing empyrial: "Failed to build scs ERROR: Could not build wheels for scs which use PEP 517 and cannot be installed directly".

    Following this link https://github.com/pydata/bottleneck/issues/281, I tried pip install --upgrade pip setuptools wheel, but I am still getting the same bug when installing empyrial.

    • OS: Ubuntu 20.04
    • mini conda version and a separate environment for trading
    • Python 3.8.10
    • Let me know if there is any way around this bug. Thanks
    opened by gurusura 8
  • RemoteDataError: No data fetched using 'YahooDailyReader'


    Discussed in https://github.com/ssantoshp/Empyrial/discussions/27

    Originally posted by karim1104 July 3, 2021: Starting July 1, I'm getting the error "RemoteDataError: No data fetched using 'YahooDailyReader'". I've tried it in different Python environments (3.6, 3.8, 3.9). It seems like a Pandas DataReader issue (https://github.com/pydata/pandas-datareader/issues/868). How can we resolve this? I have a subscription to FMP; is there a way to use it instead of Yahoo Finance?

    opened by ssantoshp 6
  • Error when rebalancing with only one stock


    Hi, I have tried to reproduce test results and simulated a single stock over time by forcing the weight distribution as shown below:

    tickers = ["stock1", "stock2"]
    weights_new_ = [1.0, 0.0]

    No optimizer is used, so this just uses the quantstats calculations of ratios and returns. In the next example, we do the same but with a yearly rebalancer. The results should be exactly the same, yet there seems to be a slight error in the returns calculations over time, which gets bigger with more rebalancing.

    I will have another look at it and update if I find the bug. Btw, great work!

    opened by atobiese 5
  • Unlisted Stock Symbol Counted in Pie Chart


    Hi, awesome tool Santosh bhai. If a ticker symbol's data is not listed at the time of the start date, it still counts the ticker in the pie chart portfolio. Ideally it should not... or am I getting this wrong? Very new guy. Regards,

    opened by lawzeus 5
  • get_report error


    Describe the bug: The sample code (as per https://empyrial.gitbook.io/empyrial/save-the-tearsheet/get-a-report) is throwing an error.

    To Reproduce Steps to reproduce the behavior:

    1. Go to 'https://empyrial.gitbook.io/empyrial/save-the-tearsheet/get-a-report...'
    2. Run the sample code
    3. Scroll down to '....'
    4. See error

    NameError Traceback (most recent call last)
    /var/folders/41/q1hx0rjd5xzck1vl121t6b2m0000gn/T/ipykernel_11664/2518190224.py in
         10 empyrial(portfolio)
         11
    ---> 12 get_report(portfolio)

    NameError: name 'get_report' is not defined

    Expected behavior A clear and concise description of what you expected to happen.

    Screenshots If applicable, add screenshots to help explain your problem.

    Desktop (please complete the following information):

    • OS: [e.g. iOS]
    • Browser [e.g. chrome, safari]
    • Version [e.g. 22]

    Additional context using jypiterlab notebook

    opened by lawzeus 5
  • Support for custom data, or data from other exchanges


    Is your feature request related to a problem? Please describe. I want to analyze portfolios on other exchanges.

    Describe the solution you'd like: The ability to provide data from other exchanges.

    opened by suvojit-0x55aa 4
  • Error when running fundlens


    Anaconda3\lib\site-packages\empyrial.py", line 610, in fundlens
        ['Dividend yield', yahoo_financials.get_dividend_yield()],
        ['Payout ratio', yahoo_financials.get_payout_ratio()],
        ['Controversy', controversy],
        ['Social score', social_score],

    UnboundLocalError: local variable 'controversy' referenced before assignment

    opened by jaredre 4
  • rebalance has a bug


    When you set up quarterly rebalance with only one ticker, the strategy and benchmark show different values. This is a bug. They should be completely equal.

    The code below reproduces the issue. The EOY returns and the time-series plot of cumulative returns vs. benchmark show that the strategy and benchmark diverge.

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date = "2021-01-01",
        portfolio = ["BTC-USD", "GOOG"],
        weights = [1, 0.],          #equal weighting is set by default
        benchmark = ["BTC-USD"],    #SPY is set by default
        rebalance = 'quarterly'
    )
    empyrial(portfolio)

    opened by rgleavenworth 3
  • Graph styling


    Is there a way to override the default styling parameters used in your tearsheet? I understand that most of the styling is inherited from quantstats. Any way you can suggest to change things like facecolor, linewidth, etc.?

    opened by rgleavenworth 3
  • EM Optimizer fails if benchmark changed to Nifty50 (yahoo ticker used "^NSEI")

    Describe the bug: The EM optimizer fails when the default benchmark is changed to Nifty.

    However, if the default is restored, it works.

    To Reproduce Steps to reproduce the behavior: use this code

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date = "2015-01-01",   #start date for the backtesting
        portfolio = ["TCS.NS", "INFY.NS", "HDFC.NS", "KOTAKBANK.NS", "TITAN.NS", "NESTLEIND.NS"],   #assets in your portfolio
        benchmark = ["NSEI"]
        optimizer = "EF"
    )
    empyrial(portfolio)

    Expected behavior Error message:

    File "/var/folders/41/q1hx0rjd5xzck1vl121t6b2m0000gn/T/ipykernel_2924/1251204071.py", line 7
        optimizer = "EF"
                  ^
    SyntaxError: invalid syntax

    Screenshots If applicable, add screenshots to help explain your problem.

    Desktop (please complete the following information):

    • OS: MacOsx
    • Browser Chrome
    • jupyter

    Additional context Add any other context about the problem here.

    opened by lawzeus 3
  • assets value / non-stock based portfolio?


    Wondering if Empyrial can be used with a non-stock based portfolio. The example in the docs is like this:

    from empyrial import empyrial, Engine

    portfolio = Engine(
        start_date = "2018-06-09",
        portfolio = ["BABA", "PDD", "KO", "AMD", "^IXIC"],
        weights = [0.2, 0.2, 0.2, 0.2, 0.2],   #equal weighting is set by default
        benchmark = ["SPY"]                    #SPY is set by default
    )
    empyrial(portfolio)

    Is there any alternate way to define a portfolio, not as a list of stocks / weights but based on the value of the assets in the account?

    opened by andrew521 2
  • str and Timestamp error


    The code:

    from empyrial import empyrial, Engine
    portfolio = Engine(
                      start_date= "2021-01-01", #start date for the backtesting
                      end_date= "2022-05-01",
                      portfolio= tickers[:], #assets in your portfolio
                      weights = w2[:],
                      benchmark=["XU100.IS"]
    )
    print(empyrial(portfolio))
    print(portfolio)
    

    It gives an error like below.

    TypeError Traceback (most recent call last)
    ~\AppData\Local\Temp/ipykernel_10148/966461475.py in
         11 )
         12
    ---> 13 print(empyrial(portfolio))
         14 print(portfolio)

    ~\AppData\Roaming\Python\Python39\site-packages\empyrial.py in empyrial(my_portfolio, rf, sigma_value, confidence_value)
        304 empyrial.SR = SR
        305
    --> 306 CR = qs.stats.calmar(returns)
        307 CR = CR.tolist()
        308 CR = str(round(CR, 2))

    ~\AppData\Roaming\Python\Python39\site-packages\quantstats\stats.py in calmar(returns, prepare_returns)
        547 if prepare_returns:
        548     returns = _utils._prepare_returns(returns)
    --> 549 cagr_ratio = cagr(returns)
        550 max_dd = max_drawdown(returns)
        551 return cagr_ratio / abs(max_dd)

    ~\AppData\Roaming\Python\Python39\site-packages\quantstats\stats.py in cagr(returns, rf, compounded)
        500 total = _np.sum(total)
        501
    --> 502 years = (returns.index[-1] - returns.index[0]).days / 365.
        503
        504 res = abs(total + 1.0) ** (1.0 / years) - 1

    TypeError: unsupported operand type(s) for -: 'str' and 'Timestamp'

    opened by burakgulmez 1
Releases (v1.9.8)

Owner: Santosh Passoubady (the Copycat Coder)