Thermal Control of Laser Powder Bed Fusion using Deep Reinforcement Learning

Overview


This repository is the implementation of the paper "Thermal Control of Laser Powder Bed Fusion Using Deep Reinforcement Learning", linked here. The project uses the deep reinforcement learning library stable-baselines3 to derive a control policy that maximizes melt pool depth consistency.

Simulation Framework

The Repeated Usage of Stored Line Solutions (RUSLS) method proposed by Wolfer et al. is used to simulate the temperature dynamics in this work. More detail can be found in the following paper:

  • Fast solution strategy for transient heat conduction for arbitrary scan paths in additive manufacturing, Additive Manufacturing, Volume 30, 2019 (link)

Prerequisites

The following packages are required in order to run the associated code:

  • gym==0.17.3
  • torch==1.5.0
  • stable_baselines3==0.7.0
  • numba==0.50.1

These packages can be installed independently, or all at once by running pip install -r requirements.txt. We recommend installing them in a fresh conda environment to avoid clashes with existing package installations. Instructions on defining a new conda environment can be found here.

Usage

The overall workflow for this project first defines a gym environment based on the desired scan path, then performs Proximal Policy Optimization (PPO) to derive a suitable control policy for that environment. This is done through the following files:

Overview

  • EagarTsaiModel.py: implements the RUSLS solution to the Rosenthal equation, as proposed by Wolfer et al.
  • power_square_gym.py, power_triangle_gym.py, velocity_square_gym.py, velocity_triangle_gym.py: define custom gym environments for the respective scan paths and control variables. square is shorthand for the predefined horizontal cross-hatching path, and triangle is shorthand for the predefined concentric triangular path.
  • RL_learn_square.py, RL_learn_triangle.py: perform Proximal Policy Optimization on the respective scan paths, with command-line arguments to select which control parameter is varied.
  • evaluate_learned_policy.py: runs a derived control policy on a specific environment. The environment is specified using the command-line arguments detailed below.
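
For orientation, the sketch below shows how one of these environments can be wired into stable-baselines3's PPO. It is a minimal illustration, not the repository's exact training script: the class name PowerSquareEnv and the file paths are assumptions, and the actual definitions live in power_square_gym.py and RL_learn_square.py.

    from stable_baselines3 import PPO
    from power_square_gym import PowerSquareEnv  # hypothetical class name; see power_square_gym.py

    # Environment for the horizontal cross-hatching path with power as the control parameter
    env = PowerSquareEnv()

    # Train a PPO policy on the thermal environment and log to TensorBoard
    model = PPO("MlpPolicy", env, verbose=1, tensorboard_log="./tensorboard_logs/")
    model.learn(total_timesteps=100_000)

    # Save the trained policy as a .zip archive
    model.save("trained_models/ppo_square_power")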

Testing a trained model

To test a trained model on a specific combination of scan path and control parameter, enter this command:

python evaluate_learned_policy.py --path [scan_path] --param [parameter]

Note: [scan_path] should be replaced by square for the horizontal cross-hatching scan path or triangle for the concentric triangular path. [parameter] should be replaced by power to use laser power as the control parameter, or velocity to use scan velocity.

Upon running this command, you will be prompted to enter the path to the .zip file for the trained model.

Once the evaluation is complete, the results are stored in the folder results/[scan_path]_[parameter]_control/. This folder will contain plots of the variation of the melt depth and control parameters over time, as well as their raw values for later analysis.

Pre-trained models for each of the four possible combinations of scan path and control parameter can be found in pretrained_models.
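
Conceptually, the evaluation amounts to loading the saved policy and rolling it out on the chosen environment, along the lines of the sketch below. This is a simplified illustration rather than the exact contents of evaluate_learned_policy.py; the class name PowerSquareEnv and the model path are assumed.

    from stable_baselines3 import PPO
    from power_square_gym import PowerSquareEnv  # hypothetical class name

    env = PowerSquareEnv()
    model = PPO.load("pretrained_models/ppo_square_power.zip")  # illustrative path

    # Roll the policy out for one episode (gym 0.17 API)
    obs = env.reset()
    done = False
    while not done:
        action, _states = model.predict(obs, deterministic=True)
        obs, reward, done, info = env.step(action)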

Training a new model

In order to train a new model based on the predefined horizontal cross-hatching scan path, enter the command:

python RL_learn_square.py --param [parameter]

Here, [parameter] should be replaced by the control parameter desired. The possible options are power and velocity.

The process is similar for the predefined concentric triangular scan path. To train a new model, enter the command:

python RL_learn_triangle.py --param [parameter]

Again, [parameter] should be replaced by the control parameter desired. The possible options are power and velocity.

During training, intermediate model checkpoints will be saved at

training_checkpoints/ppo_[scan_path]_[parameter]/best_model.zip

At the conclusion of training, the finished model is stored at

trained_models/ppo_[scan_path]_[parameter].zip
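
One plausible way to produce such checkpoints with stable-baselines3 is an EvalCallback, which periodically evaluates the policy and writes best_model.zip to the given directory. The sketch below is written under that assumption; the RL_learn_*.py scripts may wire this up differently.

    from stable_baselines3 import PPO
    from stable_baselines3.common.callbacks import EvalCallback
    from power_square_gym import PowerSquareEnv  # hypothetical class name

    env = PowerSquareEnv()

    # Periodically evaluate the current policy and keep the best checkpoint
    eval_callback = EvalCallback(
        PowerSquareEnv(),
        best_model_save_path="training_checkpoints/ppo_square_power/",
        eval_freq=10_000,
    )

    model = PPO("MlpPolicy", env)
    model.learn(total_timesteps=500_000, callback=eval_callback)
    model.save("trained_models/ppo_square_power")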

Defining a custom domain

Changing the powder bed features

To define a custom domain for a different problem configuration, edit the EagarTsaiModel.py file directly. The thermodynamic properties and domain dimensions are specified within the EagarTsai() class instantiation, while the resolution and boundary condition can be passed as arguments to the EagarTsai class: bc = 'flux' and bc = 'temp' implement an adiabatic and a constant-temperature boundary condition, respectively.
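
As a rough illustration only (the exact constructor signature should be checked in EagarTsaiModel.py, and the resolution value here is arbitrary), switching between the two boundary conditions might look like:

    from EagarTsaiModel import EagarTsai  # solver class defined in this repository

    # Adiabatic boundary condition
    solver_flux = EagarTsai(resolution=5e-6, bc="flux")

    # Constant-temperature boundary condition
    solver_temp = EagarTsai(resolution=5e-6, bc="temp")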

Changing the scan path

A new scan path can be defined by creating a new custom gym environment and writing a custom step() function that represents the desired scan path, similar to the [parameter]_[scan_path]_gym.py scripts in this repository. This function should describe both how the laser moves within a single segment and how each segment is placed within the overall path. More detail on the gym framework for defining custom environments can be found here.
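
The skeleton below sketches the general shape of such an environment against the gym 0.17 API. The class name, observation and action spaces, and segment bookkeeping are placeholders rather than code from this repository.

    import gym
    import numpy as np
    from gym import spaces

    class CustomPathEnv(gym.Env):
        """Placeholder environment for a user-defined scan path."""

        def __init__(self):
            super().__init__()
            # Example spaces: one continuous control (e.g. power or velocity), small observation vector
            self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
            self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)
            self.segment = 0
            self.n_segments = 10  # number of segments in the custom scan path

        def step(self, action):
            # 1) Map the action to the control parameter (laser power or scan velocity).
            # 2) Advance the laser along the current segment, updating the thermal model.
            # 3) Compute the reward from melt pool depth consistency.
            self.segment += 1
            obs = np.zeros(4, dtype=np.float32)
            reward = 0.0
            done = self.segment >= self.n_segments
            return obs, reward, done, {}

        def reset(self):
            self.segment = 0
            return np.zeros(4, dtype=np.float32)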

Monitoring the training process with TensorBoard

TensorBoard provides tools for monitoring various metrics of the PPO training process and can be installed with pip install tensorboard. To open the TensorBoard dashboard, enter the command:

tensorboard --logdir ./tensorboard_logs/ppo_[scan_path]_[parameter]/ppo_[scan_path]_[parameter]_[run_ID]

TensorBoard log files are saved periodically during training and record the cumulative reward as well as various loss metrics.
