Graphics Workshop

Learn computer graphics by writing GPU shaders!

This repo contains a selection of projects designed to help you learn the basics of computer graphics. We'll be writing shaders to render interactive two-dimensional and three-dimensional scenes.

Each of these projects aims to introduce an important graphics technique that is extensively used in real-world applications. The code is designed to run in real time on modern GPUs, without requiring any extra software. Feel free to take creative liberties to produce a result that you find aesthetically pleasing!


Contents

  - Getting started
  - Projects
      - Quilt patterns
      - Procedural landscapes
      - Rasterization and shading
      - Stylized rendering
      - Ray tracing
  - Workflow
  - Deployment
  - Debugging tips


Getting started

This workshop covers fragment shaders (GLSL), procedural texture generation, rasterization, lighting calculations, and real-time ray tracing.

All of the projects will be developed in a harness that runs the graphics code in a web browser, using a standard technology called WebGL. This allows us to use modern web development tools to iterate quickly and easily share your work with others. However, you will not need to write JavaScript code yourself, as all of the wrapper code has been provided for you.

To get started, make sure that you have Node.js v14 and NPM installed. Fork this repository, and then clone it to your computer. Then, run the following commands:

$ npm install
...
added 16 packages from 57 contributors and audited 16 packages in 1.459s

3 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities


$ npm run dev

  vite v2.1.5 dev server running at:

  > Local:    http://localhost:3000/
  > Network:  http://10.250.14.217:3000/

  ready in 555ms.

Explanation: The first command installs dependencies in the node_modules/ folder, and the second command starts the development server. You can now visit http://localhost:3000/ in your browser. Whenever you change a file and save it, the website will automatically update.


Projects

Here is an introduction to each of the projects.

If you are new to graphics or shading languages, I would recommend trying the "quilt patterns" project first. You should also complete "rasterization and shading" before starting the "stylized rendering" and "ray tracing" projects.

Quilt patterns

This project shows how you can use GPU shaders to render interesting, procedural patterns on a 2D grid. It also serves as a hands-on introduction to the wonderful yet frightening world of GPU shaders.

Before starting, make sure that you are able to open the development server. You should see an oscillating color gradient in your web browser.

  1. Look at the provided skeleton code. This is known as a fragment shader, and it runs on the GPU. The fragment shader runs once for every pixel on the screen, and that pixel's coordinates are passed into the code as gl_FragCoord.xy. The shader produces output by assigning to gl_FragColor, the color at that pixel: a vec4, or 4-dimensional vector, with red, green, blue, and transparency channels. (A minimal sketch of this structure follows this list.)
  2. Make sure you understand the starter code before continuing. What do the coord and color variables represent? What do the abs() and sin() functions do?
  3. Uncomment the numbered sections of the starter code, one at a time. Observe the change in output on your screen. Make sure that you understand what each block of code is doing before proceeding to the next one! When you're done, you should now see a moving black-and-white grid pattern.
  4. Task 1: Make the quilt pattern more colorful! Modify the code so that, instead of generating random greyscale colors with the variable c, it generates random RGB colors.
  5. Task 2: Right now, only a single pattern is ever drawn, but there could be many more. Find a way to use the seed variable, which you can modify using the "Parameters" panel at the top right of the screen, to change the shape of the pattern. (Hint: You'll probably want to adjust the condition if (mod(2.0 * cell.x + cell.y, 5.0) == 1.0) to change the quilt's geometry.)
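
For orientation, here is a minimal grid-based fragment shader in the same spirit as the skeleton. The uniform name (time) and the constants are assumptions for illustration; the actual starter code in shaders/quilt.frag.glsl may differ.

// Minimal sketch of a checkerboard fragment shader (illustrative only).
precision highp float;

uniform float time;   // assumed: elapsed time supplied by the harness

void main() {
    vec2 coord = gl_FragCoord.xy / 50.0;   // scale pixels into grid cells
    vec2 cell = floor(coord);              // integer index of the current cell
    float c = mod(cell.x + cell.y, 2.0);   // checkerboard value, 0.0 or 1.0
    c *= 0.5 + 0.5 * sin(time);            // oscillate brightness over time
    gl_FragColor = vec4(vec3(c), 1.0);     // greyscale color with full opacity
}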

Feel free to experiment and get creative! For example, add stripes or other shapes besides triangles. If you're looking for inspiration, try doing a Google search for "quilt patterns."

Project skeleton is located in shaders/quilt.frag.glsl.

Procedural landscapes

This project involves generating an organic procedural landscape, similar to what you might see in open world games like Minecraft.

To view this project in your browser, open the controls pane at the top right of the screen and select it under the "project" setting.

  1. To generate natural appearances, we will be using a popular graphics primitive called simplex noise. Feel free to read online about the theory behind this algorithm. The function float snoise(vec2) takes in a 2-vector on the screen and outputs a smooth scalar noise value at that location.
  2. Try changing the "seed" at the top-right of the screen. You should see the rendered output take a new shape, as noise values at different locations are roughly independent.
  3. Uncomment the first block of code, one line at a time. What do you notice visually as higher-frequency noise scales are added to the color? By combining several octaves of noise at different amplitudes, you can change the texture's appearance. (See the octave-summing sketch after this list.)
  4. Uncomment the second block of code, and observe how thresholding (in particular, the mix and smoothstep functions) is used to interpolate colors between elevation values.
  5. Finally, uncomment the third block of code to generate land objects.
  6. Task: In addition to elevation, add another parameter such as temperature that is independently sampled from the noise map. Experiment with using temperature to change the shading and indicate map biomes.
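
A common way to combine octaves is fractal Brownian motion. The sketch below assumes the snoise(vec2) function from the starter code; the octave count and falloff factors are illustrative choices, not the project's exact values.

// Fractal Brownian motion: sum several octaves of simplex noise.
// Assumes float snoise(vec2) from the starter code is in scope.
float fbm(vec2 p) {
    float value = 0.0;
    float amplitude = 0.5;
    float frequency = 1.0;
    for (int i = 0; i < 5; i++) {
        value += amplitude * snoise(p * frequency);
        frequency *= 2.0;   // each octave doubles the spatial frequency
        amplitude *= 0.5;   // ...and contributes half as much
    }
    return value;
}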

Project skeleton is located in shaders/landscape.frag.glsl.

Rasterization and shading

This project renders a 3D triangle mesh using the rasterization method that is very popular in real-time computer graphics. This is the same algorithm that most video games are built upon.

Rasterization works by breaking up a 3D surface into triangles, drawing each triangle on the screen independently, and interpolating values across each triangle from its vertices.

Three-dimensional graphics can be much more complex than two-dimensional graphics. Unlike in the past projects, the fragment shader will now be evaluated once for each fragment of a triangle, rather than once for every pixel. For this project, we will focus on shading. See the Scratchapixel article on the Phong illumination model for background reading.

  1. The project starts out by giving you a teapot. You can click and drag on the screen to move your camera position and see different angles of the teapot. This object is stored as a triangle mesh, which is a common format in computer graphics.
  2. Try changing the value of mesh to see your code rendering different shapes besides the teapot. Also, you can play with kd to change the color of the object.
  3. Task: Right now, the starter code has a function illuminate() which takes in the position of a light and returns how much color that light contributes to the current pixel. At the moment, the rendered result looks a little "flat" because the code only implements the diffuse component of the Phong reflectance model. Based on the Wikipedia article, and using the comments as a guide, update the code to also add the Phong specular component. Your task is complete when the teapot looks similar to the image above.
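
As a rough guide, the specular term can be sketched as below. The variable names (N, L, V, ks, shininess) are placeholders, not the starter code's exact identifiers; follow the skeleton's comments and the Wikipedia article for the precise form.

// Hedged sketch of the Phong specular term (names are illustrative).
// N: surface normal, L: direction toward the light, V: direction toward the camera,
// ks: specular color, shininess: Phong exponent. All directions are unit vectors.
vec3 phongSpecular(vec3 N, vec3 L, vec3 V, vec3 ks, float shininess) {
    vec3 R = reflect(-L, N);                          // light direction mirrored about the normal
    float spec = pow(max(dot(R, V), 0.0), shininess); // falls off as the view leaves the reflection
    return ks * spec;
}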

If you're curious to learn more about lighting models, try upgrading your Phong shader to a microfacet model such as the Cook-Torrance BRDF. If you do, you may also want to implement sRGB gamma correction.
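
If you go that route, the usual final step is converting the linear shading result to gamma space before output. A minimal sketch, using the common 1/2.2 approximation rather than the exact piecewise sRGB transfer function:

// Approximate sRGB gamma correction applied to a linear-space color.
vec3 toGamma(vec3 linearColor) {
    return pow(linearColor, vec3(1.0 / 2.2));
}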

Project skeleton is located in shaders/shading.frag.glsl.

Stylized rendering

This project explores stylized rendering, also known as non-photorealistic rendering. This area of graphics gives up faithfulness to real life in exchange for much more creative freedom in mimicking artistic styles.

The code for this project is very similar to that of the last rasterization and shading project. However, after doing lighting calculations, we do not output the color immediately. Instead, we discretize it and shade in different styles based on a thresholded value of the luma intensity. Here, we mimic the art style found in comic books.
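
As a rough illustration of the idea (not the project's actual code), the luma can be computed and quantized into a few discrete bands, which produces the flat, comic-book-like shading:

// Illustrative luma quantization; the band count and luma weights are assumptions.
vec3 toonShade(vec3 color) {
    float luma = dot(color, vec3(0.299, 0.587, 0.114)); // approximate perceptual luminance
    float bands = 4.0;
    float quantized = floor(luma * bands) / bands;      // collapse luma into discrete steps
    return color * (quantized / max(luma, 1e-4));       // rescale the color to the quantized level
}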

  1. Task: Once again, fill in your code for specular highlights in the illuminate() function, similar to last time. How does this change the output's appearance as you rotate the camera view? You may need to tweak some parameters to obtain an aesthetically pleasing result.

Project skeleton is located in shaders/contours.frag.glsl.

Ray tracing

This project is a real-time ray tracer implemented as a GPU shader. Ray tracing is the gold standard in photorealistic rendering. Here's how the Academy Award-winning textbook Physically Based Rendering describes it:

pbrt is based on the ray-tracing algorithm. Ray tracing is an elegant technique that has its origins in lens making; Carl Friedrich Gauß traced rays through lenses by hand in the 19th century. Ray-tracing algorithms on computers follow the path of infinitesimal rays of light through the scene until they intersect a surface. This approach gives a simple method for finding the first visible object as seen from any particular position and direction and is the basis for many rendering algorithms.

You might find this non-technical introductory video by Disney to provide helpful context.

  1. To parallelize our ray tracer and run it as a GPU shader, we revert to running our fragment shader once for every pixel on the screen, like in the 2D graphics examples. However, this time, our shader will do some geometry calculations by shooting rays from the camera for each pixel. The trace() function models this behavior by returning the color corresponding to a given ray.
  2. To model geometry, the starter code has set up a simple scene involving three spheres and one circle forming the checkered floor. This geometry is located in the intersect() function, which computes the first intersection point (represented by a Hit struct) of any ray in the space. Following the pattern shown here, add a fourth sphere to the scene with a different position, color, and radius.
  3. The calcLighting() function uses illuminate(), which contains a simple implementation of a Phong shader, to compute the lighting at a given point by adding up the contributions of two point lights. Modify this code to add a third point light at a different location, with your choice of intensity. (For inspiration, see three-point lighting.)
  4. Task: Implement shadows. In the illuminate() function, before doing lighting calculations, add a conditional statement that checks whether the ray from the point to the light source is obstructed. If it is obstructed, immediately return vec3(0.0) to simulate a shadow. [Hint: You will need to use the intersect() function with a new Ray object that you construct, starting from the point pos. A hedged sketch follows this list.]
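
The shadow test might look roughly like the following. The Ray and Hit field names (origin, dir, t) and the small normal offset are assumptions about the skeleton, so adapt them to the actual struct definitions in shaders/raytracing.frag.glsl.

// Hedged sketch of a shadow check inside illuminate() (field names assumed).
vec3 toLight = normalize(lightPos - pos);
Ray shadowRay;
shadowRay.origin = pos + 1e-3 * normal;        // nudge off the surface to avoid self-intersection
shadowRay.dir = toLight;
Hit blocker = intersect(shadowRay);
if (blocker.t > 0.0 && blocker.t < distance(lightPos, pos)) {
    return vec3(0.0);                          // something blocks the light: fully in shadow
}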

If you're interested in learning more about ray tracing, check out rpt, which is a physically-based path tracer written in Rust with a lot more features.

Project skeleton is located in shaders/raytracing.frag.glsl.


Workflow

Open your code editor in one window, and keep the website on the side while you're writing your shader code. It's extremely useful to see what happens to the output in real time as you edit your code. Your browser should instantly update (using Vite's hot module replacement feature) whenever you change a file.

The project has a Tweakpane component in the top-right corner of the screen. You can use this to change what project is being displayed and modify parameters for that display. It persists settings when you refresh your browser.

For 3D projects, the main element has an event listener for mouse and touch events, which should allow you to move the camera by clicking and dragging. Data is passed in through the appropriate uniform variables.

When you're done with the workshop, each project should render without errors or visual artifacts, and the outputs should be interesting.


Deployment

This repository comes with a Continuous Integration workflow (through GitHub Actions) that compiles the code on the main branch, then pushes the built HTML to your repository's gh-pages branch. It lets you share a web link that updates whenever you push code changes.

If this isn't enabled already, go to the "Pages" section under the "Settings" tab of your repository and set the deploy target to the gh-pages branch. Then, after the next git push, you will see your work at https://<username>.github.io/graphics-workshop/.


Debugging tips

If you're stuck debugging OpenGL shaders, try setting gl_FragColor to the result of an intermediate variable and visually examining the output.
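
For example, to inspect a unit vector such as a surface normal, you can remap it into the visible [0, 1] range and write it straight to the output (the variable name here is just for illustration):

// Visualize an intermediate value by writing it out as a color.
gl_FragColor = vec4(0.5 * normal + 0.5, 1.0);  // maps components from [-1, 1] to [0, 1]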

If there's a function you're confused about, you can search online through the Khronos documentation. You can also try reading The Book of Shaders, which is an excellent technical resource.


All code is licensed under the MIT license.