This script is useful for downloading stock market data for a wide range of companies specified by their respective tickers. The script reads in the desired tickers and interacts with Yahoo Finance to download and save CSV files containing the following columns: Date, Open, High, Low, Close, Adjusted Close, and Volume. Once data for a ticker has been downloaded and stored, further requests for that ticker simply append the most recent information onto the existing CSV file. Additionally, each time a user requests downloads, a list of the successful and failed requests is generated.

A few important notes:

- Most importantly, a HUGE shoutout to https://github.com/bradlucas/get-yahoo-quotes-python for the repo on downloading historic data from Yahoo Finance. My code is built on top of the work done there, which was a huge time saver.
- Make sure to set up the directories for your ticker_location and csv_location.
- The default behavior is to download as much data as Yahoo Finance can provide.
- This data is daily historic data.

There are 5 command line arguments to facilitate the data download process. They may either be passed directly in the terminal, or have their defaults set by modifying the download_data.py script (a sketch of how these flags might be wired up appears after the setup steps below).

Command Line Arguments:

--ticker_location (path): specifies the file containing the list of tickers to download data for. The list should be saved as a text file with each ticker on its own line.

--csv_location (path): the directory where CSV files are saved. If this directory does not already exist, create it manually before running the script.

--add_tickers (string): adds more tickers to the existing list and database. Pass in a string of tickers separated by commas (no spaces) to add the tickers to the list and download their CSV files. The default list of tickers is updated to contain the newly specified tickers. If there is not already a default list of tickers, create one before running the script.

--remove_tickers (string): removes tickers from the list and database. Pass in a string of tickers separated by commas (no spaces) to remove the tickers from the list as well as the database (csv_location). If there is not already a default list of tickers, create one before running the script.

--verbose (bool): prints extra information while downloading data, useful for debugging. Set to False to see only the progress bar for the data being downloaded.

To use the script, follow these simple steps:

0. Install dependencies with pip install -r requirements.txt.
1. Set up a default list of tickers. This can be a blank text file, or a list of tickers each on their own line, saved as a text file (see the example below).
2. Set up a directory to save CSV files to.
3. Optionally, change the default ticker_location and csv_location file paths in the script itself.
4. Run download_data.py from the command line, or from your favorite IDE.
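For reference, the ticker list from step 1 is just a plain text file with one symbol per line, for example:

```
GME
AMC
AAPL
```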
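The five flags could be declared with argparse along the lines of the sketch below. This is an illustration under assumptions, not the script's exact code: the defaults shown are placeholders, and the --verbose handling is just one common way to accept a true/false string.

```python
# Hypothetical argparse wiring for the documented flags (defaults are placeholders).
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Download daily historic stock data from Yahoo Finance.")
    parser.add_argument("--ticker_location", default="tickers.txt",
                        help="Text file with one ticker per line.")
    parser.add_argument("--csv_location", default="CSVFiles/",
                        help="Existing directory where CSV files are saved.")
    parser.add_argument("--add_tickers", default="",
                        help='Comma-separated tickers to add, e.g. "GME,AMC,AAPL".')
    parser.add_argument("--remove_tickers", default="",
                        help="Comma-separated tickers to remove from the list and CSVs.")
    # argparse's type=bool treats any non-empty string as True, so parse explicitly.
    parser.add_argument("--verbose", default=False,
                        type=lambda s: str(s).lower() in ("true", "1", "yes"),
                        help="Print extra debugging output while downloading.")
    return parser.parse_args()
```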
Examples:

Download using a pre-saved list of tickers:

python download_data.py --ticker_location /home/user/Desktop/tickers.txt --csv_location /home/user/Desktop/CSVFiles/

Download data using a string of tickers, without referencing a tickers.txt file:

python download_data.py --csv_location /home/user/Desktop/CSVFiles/ --add_tickers "GME,AMC,AAPL,TSLA,SPY"

Download data using a string of tickers while also referencing a tickers.txt file:

python download_data.py --csv_location /home/user/Desktop/CSVFiles/ --ticker_location /home/user/Desktop/tickers.txt --add_tickers "GME,AMC,AAPL,TSLA,SPY"

From here, the rest is history (pun intended ;)). When downloading from a pre-saved list of tickers, the script opens as many threads as it can to speed up this highly parallelizable process and get you your data as quickly as possible (a sketch of this pattern appears below). Once it's finished, you'll find all the data in your csv_location folder!

Now that you have data, you can easily update the files with the latest information at the end of each day, week, or whatever time frame you prefer. Simply run the script the same way as described above, and the newest data will be appended to the existing files. If there is a new ticker in your list, its full history will be downloaded. Happy downloading!
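The threaded fan-out might look something like the following. This is a minimal sketch, not the script's actual internals: download_one stands in for whatever hypothetical helper fetches and saves a single ticker, and the success/failure lists mirror the report the script generates after each run.

```python
# Minimal sketch of a threaded ticker download, assuming a caller-supplied
# download_one(ticker) function (hypothetical; the real script's helper may differ).
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_all(tickers, download_one):
    """Run download_one(ticker) across a thread pool; return (succeeded, failed)."""
    succeeded, failed = [], []
    # With max_workers omitted, recent Pythons default to
    # min(32, os.cpu_count() + 4) threads -- plenty for I/O-bound downloads.
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(download_one, t): t for t in tickers}
        for future in as_completed(futures):
            ticker = futures[future]
            if future.exception() is None:
                succeeded.append(ticker)
            else:
                failed.append(ticker)
    return succeeded, failed
```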
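The incremental-update behavior could be implemented roughly as shown below. This is a hedged sketch assuming pandas and a hypothetical fetch_history(ticker, start) helper that returns a DataFrame with the documented columns; the actual script may do this differently.

```python
# Sketch of appending only new rows to an existing CSV (assumes pandas).
import os
import pandas as pd

def update_csv(ticker, csv_location, fetch_history):
    path = os.path.join(csv_location, f"{ticker}.csv")
    if os.path.exists(path):
        existing = pd.read_csv(path, parse_dates=["Date"])
        last_date = existing["Date"].max()
        # fetch_history is hypothetical: returns Date, Open, High, Low,
        # Close, Adjusted Close, Volume rows from `start` onward.
        new_rows = fetch_history(ticker, start=last_date)
        new_rows = new_rows[new_rows["Date"] > last_date]  # drop the overlap day
        new_rows.to_csv(path, mode="a", header=False, index=False)
    else:
        # New ticker: download the full history Yahoo Finance can provide.
        fetch_history(ticker, start=None).to_csv(path, index=False)
```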
Overview
Script used to download data for stocks.

Owner
Carmelo Gonzales