This script downloads stock market data for a wide range of companies specified by their ticker symbols. The script reads in the desired tickers and queries Yahoo Finance to download and save CSV files containing the following columns: Date, Open, High, Low, Close, Adjusted Close, and Volume. Once data for a ticker has been downloaded and stored, further requests simply append the most recent rows onto the existing CSV file. Additionally, each time a user requests downloads, a list of the successful and failed requests is generated.

A few important notes:
- Most importantly, HUGE shoutout to https://github.com/bradlucas/get-yahoo-quotes-python for the repo on downloading historic data from Yahoo Finance. My code is built on top of the work done there, which was a huge time saver.
- Make sure to set up the directories for your ticker_location and csv_location.
- The default behavior is to download as much data as Yahoo Finance can provide.
- The data is daily historic data.

There are 5 command line arguments that facilitate the download process. They may either be used directly in the terminal, or have their defaults set by modifying the download_data.py script (a sketch of these declarations appears after the setup steps below).

Command Line Arguments:
--ticker_location (path): the file containing the list of tickers to download data for. The list should be saved as a text file with each ticker on its own line.
--csv_location (path): the directory where CSV files are saved. If this directory does not already exist, create it manually before running the script.
--add_tickers (string): adds more tickers to your existing list and database. Pass in a string of tickers separated by commas (no spaces) to add them to the list and download their CSV files. The default list of tickers is updated to contain the new tickers. If a default list of tickers does not already exist, create one before running the script.
--remove_tickers (string): removes tickers from your list and database. Pass in a string of tickers separated by commas (no spaces) to remove them from the list as well as from the database (csv_location). If a default list of tickers does not already exist, create one before running the script.
--verbose (bool): prints extra information while downloading data, useful for debugging. Set to false to see only the progress bar for the data being downloaded.

To use the script, follow these simple steps:
0. Install dependencies using pip install -r requirements.txt
1. Set up a default list of tickers. This can be a blank text file, or a text file with each ticker on its own line.
2. Set up a directory to save the CSV files to.
3. Optionally, change the default ticker_location and csv_location file paths in the script itself.
4. Run download_data.py from the command line, or from your favorite IDE.
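For reference, here is a minimal sketch of how the five flags could be declared with argparse. The flag names come from the list above; the default paths (taken from the examples below) and the exact option types are assumptions, not the script's actual code:

```python
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Download daily historic stock data from Yahoo Finance."
    )
    # Default paths are placeholders -- point these at your own files.
    parser.add_argument("--ticker_location", type=str,
                        default="/home/user/Desktop/tickers.txt",
                        help="Text file with one ticker per line.")
    parser.add_argument("--csv_location", type=str,
                        default="/home/user/Desktop/CSVFiles/",
                        help="Directory where CSV files are saved.")
    parser.add_argument("--add_tickers", type=str, default=None,
                        help='Comma-separated tickers to add, e.g. "GME,AMC,AAPL".')
    parser.add_argument("--remove_tickers", type=str, default=None,
                        help="Comma-separated tickers to remove from the list and database.")
    # argparse does not parse the string "false" as False with type=bool,
    # so a small converter is used here instead.
    parser.add_argument("--verbose",
                        type=lambda s: s.lower() in ("true", "1", "yes"),
                        default=False,
                        help="Print extra information while downloading.")
    return parser.parse_args()
```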
Examples:

Download using a pre-saved list of tickers:
python download_data.py --ticker_location /home/user/Desktop/tickers.txt --csv_location /home/user/Desktop/CSVFiles/

Download data using a string of tickers, without referencing a tickers.txt file:
python download_data.py --csv_location /home/user/Desktop/CSVFiles/ --add_tickers "GME,AMC,AAPL,TSLA,SPY"

Download data using a string of tickers, while also referencing a tickers.txt file:
python download_data.py --csv_location /home/user/Desktop/CSVFiles/ --ticker_location /home/user/Desktop/tickers.txt --add_tickers "GME,AMC,AAPL,TSLA,SPY"

From here, the rest is history (pun intended ;)). When downloading from a pre-saved list of tickers, the script opens as many threads as it can to speed up this highly parallelizable process and get you your data as quickly as possible. Once it's finished, you'll find all the data in your csv_location folder!

Now that you have data, you can easily update the files with the latest information at the end of each day, week, or whatever time frame you prefer. Simply run the script the same way as described above, and the newest data will be appended to the existing files. If there is a new ticker in your list, its full history will be downloaded. The sketch below illustrates both the threading and the append-on-rerun behavior. Happy downloading!
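The fetch_history helper below is hypothetical (the real script builds on get-yahoo-quotes-python to talk to Yahoo Finance); the thread-pool and CSV-append logic shown is one plausible way to implement what is described above, not the script's actual code:

```python
import csv
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_history(ticker, start_date=None):
    """Hypothetical helper: return rows of
    (Date, Open, High, Low, Close, Adj Close, Volume) for `ticker`,
    optionally only rows after `start_date`."""
    raise NotImplementedError

def download_ticker(ticker, csv_location):
    path = os.path.join(csv_location, f"{ticker}.csv")
    last_date = None
    if os.path.exists(path):
        # The file already exists: find the last stored date so only
        # rows newer than what we already have get appended.
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        if len(rows) > 1:
            last_date = rows[-1][0]
    new_rows = fetch_history(ticker, start_date=last_date)
    mode = "a" if last_date else "w"
    with open(path, mode, newline="") as f:
        writer = csv.writer(f)
        if mode == "w":
            writer.writerow(["Date", "Open", "High", "Low",
                             "Close", "Adj Close", "Volume"])
        writer.writerows(new_rows)
    return ticker

def download_all(tickers, csv_location):
    succeeded, failed = [], []
    # Downloads are I/O bound, so a thread pool gives a big speedup.
    with ThreadPoolExecutor(max_workers=min(32, len(tickers) or 1)) as pool:
        futures = {pool.submit(download_ticker, t, csv_location): t
                   for t in tickers}
        for fut in as_completed(futures):
            try:
                succeeded.append(fut.result())
            except Exception:
                failed.append(futures[fut])
    # Mirrors the script's report of successful and failed requests.
    return succeeded, failed
```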
Overview
Script used to download data for stocks.

Owner
Carmelo Gonzales