A web scraper built with Beautiful Soup that fetches Udemy course information and exports it to a JSON, CSV, or XML file.

Overview


Udemy Scraper


A web scraper built with Beautiful Soup that fetches Udemy course information.

Installation

Virtual Environment

First, it is recommended to install and run this inside a virtual environment. You can do so with the virtualenv package and then activate it:

pip install virtualenv

virtualenv somerandomname

Activating for *nix

source somerandomname/bin/activate

Activating for Windows

somerandomname\Scripts\activate

Package Installation

pip install -r requirements.txt

Chrome setup

Be sure to have Chrome installed, along with the matching version of ChromeDriver. A Windows binary is already provided in this repository; if you need the Linux binary, you can download it from the ChromeDriver downloads page.

Approach

Web scraping most sites is fairly easy; some sites, however, are not that scrape-friendly. Scraping a site is, in itself, perfectly legal, yet there have been lawsuits over web scraping. Some companies *cough* Amazon *cough* consider scraping their website illegal, even though they themselves scrape other websites. And then there are sites like Udemy that actively try to prevent people from scraping them.

Using BS4 by itself doesn't return the required results, so I had to drive a real browser engine through Selenium to fetch the course information. Initially even that didn't work, but then I realised the courses were being fetched asynchronously, so I had to add a bit of delay. As a result, fetching the data can be a bit slow.
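That delay-and-retry workaround can be sketched in plain Python. Note this is only an illustration of the idea, not the scraper's actual code: `fetch_page` is a hypothetical stand-in for Selenium's `driver.page_source`, and `"course-card"` is an assumed marker class, not necessarily the one Udemy uses.

```python
import time

def fetch_with_delay(fetch_page, delay=2.0, retries=3):
    """Poll the page until the asynchronously loaded course list has rendered."""
    for _ in range(retries):
        html = fetch_page()
        if "course-card" in html:  # marker that the async content has arrived
            return html
        time.sleep(delay)  # fixed delay, as described above
    raise TimeoutError("course list never rendered")

# Simulated page source: empty on the first poll, populated afterwards.
pages = iter(["<div></div>", '<div class="course-card">...</div>'])
html = fetch_with_delay(lambda: next(pages), delay=0.01)
```

The fixed delay is the simplest fix, at the cost of waiting even when the page loads quickly; the issue tracker below discusses replacing it with explicit waits.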

Functionality

As of this commit, the script can search Udemy for the term you input and fetch the course's link along with all the other overview details: description, instructor, duration, rating, etc.

Here is a JSON representation of the data it can fetch as of now:

{
  "query": "The Complete Angular Course: Beginner to Advanced",
  "link": "https://udemy.com/course/the-complete-angular-master-class/",
  "title": "The Complete Angular Course: Beginner to Advanced",
  "headline": "The most comprehensive Angular 4 (Angular 2+) course. Build a real e-commerce app with Angular, Firebase and Bootstrap 4",
  "instructor": "Mosh Hamedani",
  "rating": "4.5",
  "duration": "29.5 total hours",
  "no_of_lectures": "376 lectures",
  "tags": ["Development", "Web Development", "Angular"],
  "no_of_rating": "23,910",
  "no_of_students": "96,174",
  "course_language": "English",
  "objectives": [
    "Establish yourself as a skilled professional developer",
    "Build real-world Angular applications on your own",
    "Troubleshoot common Angular errors",
    "Master the best practices",
    "Write clean and elegant code like a professional developer"
  ],
  "Sections": [
    {
      "name": "Introduction",
      "lessons": [{ "name": "Introduction" }, { "name": "What is Angular" }],
      "no_of_lessons": 12
    },
    {
      "name": "TypeScript Fundamentals",
      "lessons": [
        { "name": "Introduction" },
        { "name": "What is TypeScript?" }
      ],
      "no_of_lessons": 18
    },
    {
      "name": "Angular Fundamentals",
      "lessons": [
        { "name": "Introduction" },
        { "name": "Building Blocks of Angular Apps" }
      ],
      "no_of_lessons": 10
    }
  ],
  "requirements": [
    "Basic familiarity with HTML, CSS and JavaScript",
    "NO knowledge of Angular 1 or Angular 2 is required"
  ],
  "description": "\nAngular is one of the most popular frameworks for building client apps with HTML, CSS and TypeScript. If you want to establish yourself as a front-end or a full-stack developer, you need to learn Angular.\n\nIf you've been confused or frustrated jumping from one Angular 4 tutoria...",
  "target_audience": [
    "Developers who want to upgrade their skills and get better job opportunities",
    "Front-end developers who want to stay up-to-date with the latest technology"
  ],
  "banner": "https://foo.com/somepicture.jpg"
}
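Once exported, the structure above can be consumed with nothing but the standard library. A minimal sketch, using a trimmed copy of the sample payload (field names taken from the example above):

```python
import json

# A trimmed version of the sample course payload.
course = json.loads("""
{
  "title": "The Complete Angular Course: Beginner to Advanced",
  "rating": "4.5",
  "Sections": [
    {"name": "Introduction",
     "lessons": [{"name": "Introduction"}, {"name": "What is Angular"}],
     "no_of_lessons": 12}
  ]
}
""")

print(course["title"])
print(course["Sections"][0]["lessons"][1]["name"])  # "What is Angular"
```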

Usage

To use the scraper, import it as a module and then create a new course instance like so:

from udemyscraper import UdemyCourse

This will import the UdemyCourse class. You can then create an instance of it, passing the search query (preferably the exact course name).

from udemyscraper import UdemyCourse

javascript_course = UdemyCourse("Javascript course for beginners")

This will create an empty instance of UdemyCourse. To fetch the data, you need to call the fetch_course function.

javascript_course.fetch_course()

Now that you have the course, you can access all of the course's data as shown here.

print(javascript_course.Sections[2].lessons[1].name)  # Prints the 3rd section's 2nd lesson's name
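The package also ships exporters (see the release notes below for course_to_csv and course_to_xml). The idea behind dumping courses to CSV can be sketched with the standard csv module; the column list here is illustrative, not the exact columns udemyscraper emits:

```python
import csv
import io

def courses_to_csv(courses, fileobj):
    """Write one row per course; columns are a subset of the scraped fields."""
    fields = ["title", "instructor", "rating", "duration"]
    writer = csv.DictWriter(fileobj, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(courses)

buf = io.StringIO()
courses_to_csv(
    [{"title": "JS for beginners", "instructor": "A. Author",
      "rating": "4.5", "duration": "10 total hours", "extra": "dropped"}],
    buf,
)
print(buf.getvalue())
```

extrasaction="ignore" quietly drops any scraped fields that aren't in the chosen column list, which keeps the sketch robust to the richer course dictionaries shown earlier.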
Comments
  • pip install fails

    pip install fails

    Describe the bug Unable to install udemyscraper via pip install

    To Reproduce ERROR: Cannot install udemyscraper==0.8.1 and udemyscraper==0.8.2 because these package versions have conflicting dependencies.

    The conflict is caused by: udemyscraper 0.8.2 depends on getopt2==0.0.3 udemyscraper 0.8.1 depends on getopt2==0.0.3

    Desktop (please complete the following information):

    • OS: MAC OS
    bug 
    opened by nuggetsnetwork 5
  • udemyscraper timesout

    udemyscraper timesout

    Describe the bug When running the sample code all I get is timeout.

    To Reproduce Steps to reproduce the behavior: run the sample code:

        from udemyscraper import UdemyCourse

        course = UdemyCourse()
        course.fetch_course('learn javascript')
        print(course.title)

    Current behavior Timed out waiting for page to load or could not find a matching course

    OS: MACOS

    bug duplicate 
    opened by nuggetsnetwork 3
  • Switch to browser explicit wait

    Switch to browser explicit wait

    EXPERIMENTAL! Needs Testing.

    time.sleep() introduces a fixed, unconditional wait, even if the page has already loaded.

    By using expected_conditions, we can proceed as soon as the element loads. Using the Python time library, I measured the time taken by the search and course pages to load at roughly 2 seconds each.

    Theoretically speaking, after the change, execution time should have been reduced by 5 seconds (3 + 4 - 2). However, the gain was only 3 seconds instead of the expected 5.

    This behavior seems unexpected for the moment, unless we can find where the missing 2 seconds went. For reference, the original version, using time.sleep(), took 17 seconds to execute.

    (All times are measured for my internet connection, by executing the given example in readme file)

    Possibly need to dig further. I haven't yet had the time to read the full code.

    bug optimization 
    opened by flyingcakes85 3
  • Use explicit wait for search query

    Use explicit wait for search query

    Here 4 seconds have been hardcoded; it would be better to wait for the search results to load and only then get the source code.

    A basic way to do this is to check whether the search element is visible: once it's visible, proceed to fetch the source code. This way, on a really fast connection you wouldn't need to wait as long, and vice versa.

    bug optimization 
    opened by sortedcord 3
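Stripped of the browser, the explicit-wait pattern proposed in these issues is a poll-until-condition loop. This minimal sketch mimics what Selenium's WebDriverWait.until does; the condition callable stands in for an expected_conditions check such as visibility of the search results:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Return as soon as condition() is truthy, instead of sleeping a fixed
    4 seconds; raise if the deadline passes (WebDriverWait.until behaves
    the same way)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulate search results appearing on the third poll.
polls = iter([None, None, "<li class='search-result'>...</li>"])
element = wait_until(lambda: next(polls), timeout=1.0, poll=0.01)
```

On a fast connection the loop exits almost immediately; on a slow one it keeps polling up to the timeout, which is exactly the adaptive behavior the hardcoded sleep lacks.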
  • Classes Frequently Keep Changing

    Classes Frequently Keep Changing

    It seems that on the search page, the classes of the elements keep changing. So it would be best to fetch only the course URL and then fetch all the other data from the course page itself.

    bug 
    opened by sortedcord 3
  • Serialize to xml

    Serialize to xml

    Experimental!!

    Export the entire dictionary to an XML file using the dict2xml library.

    • [x] Make branch even with refactor base
    • [x] Switch to dicttoxml from dict2xml
    • [x] Object arrays of sections and lessons are not grouped under one root called Sections or Lessons. This is also the case for all of the other arrays.
    • [x] Rename List item
    • [x] Rename root tag to course
    enhancement area: module 
    opened by sortedcord 2
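The grouping fix in the checklist above can be illustrated with the standard library alone: wrap each array under a single parent element (here Sections) so list items don't end up as loose siblings, and name the root tag course. dicttoxml offers similar control; this sketch uses only xml.etree and invented element names:

```python
import xml.etree.ElementTree as ET

def sections_to_xml(sections):
    """Group every section under one <Sections> parent inside a <course> root,
    instead of emitting sections as loose sibling items."""
    root = ET.Element("course")
    group = ET.SubElement(root, "Sections")
    for section in sections:
        node = ET.SubElement(group, "section")
        ET.SubElement(node, "name").text = section["name"]
    return ET.tostring(root, encoding="unicode")

doc = sections_to_xml([{"name": "Introduction"}, {"name": "Angular Fundamentals"}])
print(doc)
```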
  • Automatically fetch drivers

    Automatically fetch drivers

    Setup a way to automatically fetch browser drivers based on user's choice (chromium/firefox) corresponding to the installed browser version.

    The hard part will be to find the version of the browser installed.

    enhancement help wanted 
    opened by sortedcord 2
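Finding the installed browser version usually means shelling out to the binary (e.g. google-chrome --version) and parsing its output. The parsing half can be sketched like this; the sample strings reflect what Chrome and Firefox typically print, and the subprocess call itself is left out:

```python
import re

def parse_browser_version(version_output):
    """Extract the major version from strings like 'Google Chrome 92.0.4515.131'
    or 'Mozilla Firefox 91.0' -- the number that driver selection would key off."""
    match = re.search(r"(\d+)(?:\.\d+)+", version_output)
    if match is None:
        raise ValueError("no version number in: %r" % version_output)
    return int(match.group(1))

print(parse_browser_version("Google Chrome 92.0.4515.131"))  # 92
print(parse_browser_version("Mozilla Firefox 91.0"))         # 91
```

The major version is then enough to pick the matching chromedriver/geckodriver release to download.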
  • Timed out waiting for page to load or could not find a matching course

    Timed out waiting for page to load or could not find a matching course

    Whenever I try to scrape a course from udemy I get this error-

    on 1: Timed out waiting for page to load or could not find a matching course
    Scraping Course |████████████████████████████████████████| 1 in 29.5s (0.03/s)
    

    It was working a couple of times before, but now it doesn't work.

    Steps to reproduce the behavior:

    1. This happens both when using the script and the module
    2. I used query argument
    3. Output- image

    Desktop (please complete the following information):

    • OS: Windows 10
    • Browser: Chromium
    • Version: 92

    I did check by manually opening Chromium and searching for the course. But when I use the scraper, it doesn't work.

    bug good first issue wontfix area: module 
    opened by sortedcord 1
  • Optimize element search

    Optimize element search

    Some tests have shown that it is considerably more efficient to use CSS selectors than find, especially compared with nested finds, which tend to be much slower. It would be better to replace all of the find statements with select and use a direct path.

    optimization 
    opened by sortedcord 1
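The difference can be shown on a toy document (markup and class names invented for illustration; requires beautifulsoup4). Both approaches below extract the same titles, but select expresses the whole path in one call instead of a chain of Python-level find calls:

```python
from bs4 import BeautifulSoup

html = """
<div class="card"><div class="meta"><span class="title">Course A</span></div></div>
<div class="card"><div class="meta"><span class="title">Course B</span></div></div>
"""
soup = BeautifulSoup(html, "html.parser")

# Nested finds: one Python-level call per level of the tree.
titles_find = [card.find("div", class_="meta").find("span", class_="title").text
               for card in soup.find_all("div", class_="card")]

# A single CSS selector: the whole traversal in one select() call.
titles_select = [tag.text for tag in soup.select("div.card div.meta span.title")]

print(titles_select)  # ['Course A', 'Course B']
```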
  • 🌐 Added browser selection argument

    🌐 Added browser selection argument

    Instead of editing the source code to select which browser to use, you can now specify it when initializing the UdemyCourse class, or simply pass an argument when using the standalone script.

        -b  --browser       Allows you to select the browser you would like to use for Scraping
                            Values: "chrome" or "firefox". Defaults to chrome if no argument is passed.
    

    Also provided a geckodriver.exe binary.

    enhancement optimization 
    opened by sortedcord 1
  • Implementation of Command Line Arguments

    Implementation of Command Line Arguments

    I assume that the main udemyScraper.py file will be used as a module, so I made another file, main.py, which can be used for such operations. As of now only some basic arguments have been added; more will follow.

        -h  --help          Displays information about udemyscraper and its usage
        -v  --version       Displays the version of the tool
        -n  --no-warn       Disables the warning when initializing the UdemyCourse class
    
    enhancement 
    opened by sortedcord 1
Releases(0.8.2)
  • 0.8.2(Oct 2, 2021)

  • Beta(Aug 29, 2021)

    The long-awaited (at least by me) distribution update for udemyscraper. Find this project on PyPI - https://pypi.org/project/udemyscraper/

    Added

    • Udemyscraper can now export multiple courses to csv files!

    • course_to_csv takes an array of courses as input and dumps them to a single CSV file.
    • Udemyscraper can now export courses to xml files!

    • course_to_xml is a function that can be used to export the course object to an XML file with the appropriate tags and format.
    • udemyscraper.export submodule for exporting scraped course.
    • Support for Microsoft Edge (Chromium Based) browser.
    • Support for Brave Browser.

    Changes

    • Udemyscraper.py has been refactored into 5 different files:

      • __init__.py - Contains the code which will run when imported as a library
      • metadata.py - Contains metadata of the package such as the name, version, author, etc. Used by setup.py
      • output.py - Contains functions for outputting the course information.
      • udscraperscript.py - Is the script file which will run when you want to use udemyscraper as a script.
      • utils.py - Contains utility related functions for udemyscraper.
    • Now using udemyscraper.export instead of udemyscraper.output.

      • quick_display function has been replaced with print_course function.
    • Now using setup.py instead of setup.cfg

    • Deleted the src folder, which is replaced by the udemyscraper folder, the source directory for all the modules

    • Installation Process

      Since udemyscraper is now used as a package, the installation process has naturally changed significantly.

      Installation process is documented here

    • Renamed the browser_preference key in Preferences dictionary to browser

    • Relocated browser determination to utils as set_browser function.

    • Removed requirements.txt and pyproject.toml

    Fixed

    • Fixed cache argument bug.
    • Fixed importing preferences bug.
    • Fixed Banner Image scraping.
    • Fixed Progressbar exception handling.
    • Fixed recognition of chrome as a valid browser.
    • Preferences will not be printed while using the script.
    • Fixed browser key error
    Source code(tar.gz)
    Source code(zip)
    udemyscraper-0.8.1-py3-none-any.whl(31.19 KB)
    udemyscraper-0.8.1.tar.gz(4.87 MB)
Owner
Aditya Gupta
🎓 Student🎨 Front end Dev & Part time weeb ϞϞ(๑⚈ ․̫ ⚈๑)∩