Dictionary - Application focused on word search through web scraping

Overview

Dictionary

Discord: GeimerDroiid | Spotify: GeimerDroiid | GitHub: GeimerDroiid | Email: jmanuelhv9@gmail.com

About

An application focused on looking up the meanings of words through web scraping, with additional functions such as Dictation, Spelling and Syllables.
I created this application as a way to test the knowledge I had started to acquire, so I decided to build a dictionary with some basic functions like spelling. From there more ideas came up, such as a method that would tell me the meaning of words I didn't understand, and a way to enter a word just by saying it to the computer instead of typing it. When I created this application I was just starting to learn Python (the language it is written in), so the code may contain many bad practices that I am correcting for future versions. While building it I learned how to make user interfaces, dabbled a bit in web scraping, investigated a way to convert text to speech and play it, and in the end used object-oriented programming to simplify building the interface.

[Screenshots: Dictionary GUI]

What's new in v1.5

  • Interface improvements

    A cleaner interface, with buttons and colors that contrast better with each other, improved typography, and more minimalist animations for a better user experience.

  • Bugs fixed

    Mostly grammar corrections. The most notable is the removal of "Gua" and "Guo", since those letter combinations do not belong to Spanish grammar. The application startup time has also been improved.

  • Code improvement

    The application has been almost completely rebuilt, so all of the code is new. To keep it readable, each function has been split into its own file, and I looked for the most efficient and simplest way to implement each one. All of the code is in English.

  • The dictation function has been disabled

    I decided to disable the dictation feature in the final version because it caused many problems when packaging the application. It will remain disabled until I find a way to build this feature with as few bugs as possible and proper functioning.


Functions

  • Dictation

    The dictation function listens to your voice and converts it into text that is entered into the application's search bar, so you can then apply any of the other functions to that text. It uses the SpeechRecognition library, which lets the program capture audio from the computer's microphone and convert it to text. All of the code is in the file spelling.py; a minimal sketch of the idea appears after this list.

  • Spelling

    The spelling function breaks a sentence into words and spells each word letter by letter; when it reaches the end of a word, it pronounces the word in full (see the sketch after this list).

  • Syllables

    The syllables function has a menu containing all the letter combinations and syllables, together with their respective sounds.

  • Meaning

    This function uses web scraping to look up a word in the DEM dictionary and reads back its meaning together with its examples; if the word is not found, it suggests search alternatives. It uses the BeautifulSoup4 library for the web scraping and pyttsx3 to convert the text to audio. A rough sketch of the approach appears after this list.
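
A minimal sketch of the dictation idea using SpeechRecognition is shown below. It is an illustration only, not the exact code in spelling.py, and it assumes a working microphone plus the PyAudio backend and an internet connection for the Google recognizer.

    # Illustrative dictation sketch (not the code shipped in spelling.py)
    import speech_recognition as sr

    recognizer = sr.Recognizer()

    with sr.Microphone() as source:              # requires the PyAudio backend
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)        # capture speech from the microphone

    # Convert the captured audio to Spanish text (needs an internet connection)
    text = recognizer.recognize_google(audio, language="es-MX")
    print(text)  # this is the string the app would place in the search bar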

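The spelling logic can be sketched roughly as follows; again this is an illustration of the description above rather than the project's actual code, using pyttsx3 for the voice output.

    # Illustrative spelling sketch: spell letter by letter, then say the whole word
    import pyttsx3

    engine = pyttsx3.init()

    def spell(sentence: str) -> None:
        for word in sentence.split():
            for letter in word:
                engine.say(letter)   # spell the word letter by letter
            engine.say(word)         # then pronounce the complete word
        engine.runAndWait()

    spell("hola mundo")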

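The meaning lookup could look roughly like the sketch below. The DEM URL pattern and the "definicion" CSS class are assumptions made only for illustration; the real site structure and the selectors used in the project may differ.

    # Illustrative lookup sketch; the URL pattern and the CSS class are assumptions
    import requests
    import pyttsx3
    from bs4 import BeautifulSoup

    def look_up(word: str) -> str:
        url = f"https://dem.colmex.mx/Ver/{word}"      # hypothetical URL pattern
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        entry = soup.find("div", class_="definicion")  # hypothetical selector
        return entry.get_text(strip=True) if entry else "No results found"

    meaning = look_up("casa")
    print(meaning)

    engine = pyttsx3.init()  # read the meaning aloud, as the application does
    engine.say(meaning)
    engine.runAndWait()
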
Requirements

  • It is important not to remove the executable file from its folder, as this will cause errors. The best option is to create a shortcut and move that to the desktop or wherever else you want to place it.

  • For good performance, I recommend downloading "Microsoft Sabina Desktop - Spanish (Mexico)", a voice that Microsoft provides for Windows devices.

How to download "Microsoft Sabina Desktop - Spanish (Mexico)".

In order to download the necessary voice for the program, the first thing to do is to go to:

Settings > Time and language > Voice > Manage voices > Add voices

In the search bar, type "Spanish" and download the voice labeled "Spanish (Mexico)". With that, everything is ready to use the application correctly and avoid pronunciation errors.
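
To confirm that the voice is installed, a quick sketch like the one below (an illustration, not part of the application) lists the voices pyttsx3 can see and switches to Sabina when it finds her:

    # Sketch: list installed voices and select Sabina if it is available
    import pyttsx3

    engine = pyttsx3.init()
    for voice in engine.getProperty("voices"):
        print(voice.id, voice.name)                # every voice installed on the system
        if "Sabina" in voice.name:
            engine.setProperty("voice", voice.id)  # use the Spanish (Mexico) voice

    engine.say("Hola, la voz está lista")
    engine.runAndWait()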

If you wish to contribute to the development of the application:

  • First clone the repository

      git clone https://github.com/GeimerDroiid/Dictionary.git
    
  • Then create a branch with your username

      git checkout -b <your-username>

  • And finally install the requirements

      py -m pip install -r requirements.txt
    

Contribution

Pull requests are welcome, and I would appreciate your support in improving the application. For major changes, please open an issue first to discuss what you would like to change.

Releases

  • v1.5 (Jan 3, 2022)

    Full Changelog: https://github.com/DawntDev/Dictionary/compare/v1.0...v1.5

    Source code (tar.gz)
    Source code (zip)
    Dictionary.1.5.zip (75.35 MB)

  • v1.0 (Jan 3, 2022)

    Full Changelog: https://github.com/DawntDev/Dictionary/commits/v1.0

    Source code (tar.gz)
    Source code (zip)
    dictionary.exe (50.35 MB)