Frame Expansion Compensation

A Moonraker plug-in for real-time compensation of frame thermal expansion.

Installation

Credit to protoloft, from whom I plagiarized in near entirety the install.sh script: Z Auto Calibration


Clone this repo into your home directory. For example:

cd /home/pi
git clone https://github.com/alchemyEngine/klipper_frame_expansion_comp

Copy the frame_expansion_compensation.py module to the Klippy extras folder:

cp /home/pi/klipper_frame_expansion_comp/frame_expansion_compensation.py /home/pi/klipper/klippy/extras/

[Optional] Configure Moonraker Updates

Run the install shell script:

bash /home/pi/klipper_frame_expansion_comp/install.sh

Configure the update manager. Add the following section to moonraker.conf:

[update_manager client frame_expansion]
type: git_repo
path: /home/pi/klipper_frame_expansion_comp
primary_branch: main
origin: https://github.com/alchemyEngine/klipper_frame_expansion_comp.git
install_script: install.sh

Configuration

[frame_expansion_compensation]
#temp_coeff:
#   The temperature coefficient of expansion, in mm/K. For example, a
#   temp_coeff of 0.01 mm/K will move the Z axis downwards by 0.01 mm for every
#   Kelvin/degree Celsius that the frame temperature increases. Defaults to 0.0,
#   no offset.
temp_sensor:
#   Temperature sensor to use for frame temp measurement. Use full config
#   section name without quotes. E.g. temperature_sensor frame
#smooth_time:
#   Smoothing window applied to the temp_sensor, in seconds. Can reduce motor
#   noise from excessive small corrections in response to sensor noise. The
#   default is 2.0 seconds.
#max_comp_z:
#   Disables compensation above this Z height [mm]. The last computed correction
#   will remain applied until the toolhead moves below the specified Z position
#   again. The default is 0.0mm (always on).
#max_z_offset:
#   Maximum absolute compensation that can be applied to the Z axis [mm]. The
#   default is 99999999.0mm (unlimited).
z_stepper:
#   The Z stepper motor linked with the Z endstop, as written in printer.cfg.
#   Used for triggering reference temperature measurement. Usually 'stepper_z'
#   unless otherwise defined.
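
As a concrete illustration, a filled-in section might look like the sketch below. The coefficient value and sensor name are placeholders, not recommendations: measure your own temp_coeff and point temp_sensor at a sensor that actually exists in your printer.cfg.

[frame_expansion_compensation]
# Placeholder coefficient: with 0.0089 mm/K the Z axis is lowered by
# 0.0089 mm for each 1 K rise in frame temperature
temp_coeff: 0.0089
# Assumes a [temperature_sensor frame] section is defined elsewhere
temp_sensor: temperature_sensor frame
smooth_time: 2.0
max_comp_z: 0.0
max_z_offset: 99999999.0
z_stepper: stepper_z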

G-Code Commands

The following commands are available when the frame_expansion_compensation config section is enabled:

  • SET_FRAME_COMP ENABLE=[<0:1>]: enable or disable frame expansion compensation. When disabled, the last computed compensation value will remain applied until next homing.
  • QUERY_FRAME_COMP: report current state and key parameters of the frame expansion compensation.
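
For example, to temporarily pause compensation during a test, check the reported state, and then re-enable it, a console session might look like the sketch below (the exact output of the query depends on the plugin version):

SET_FRAME_COMP ENABLE=0
QUERY_FRAME_COMP
SET_FRAME_COMP ENABLE=1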

Overview

TODO

Comments
  • QUERY_FRAME_COMP in klipper implementation...

    The new Klipper documentation doesn't say anything about a query function. Will it still work? If not, is there any reason I shouldn't just stay with the plugin?

    opened by PhilBaz 7
  • stepper_z for multiple Z steppers.

    I'm on a Voron 2.4 with 4 Z stepper motors, stepper_z to stepper_z3, defined as below.

    Is the config, z_stepper: stepper_z, still correct?

    The frame compensation appears as if it's functioning: it doesn't throw an error, and the query looks as it should. But I don't think it is actually doing anything. I cranked temp_coeff up to 0.03, producing -0.12mm over a 23 min first layer, and it appeared to have no effect. I previously used a manual correction of -0.06mm going into the second layer.

    So I'm at a bit of a loss. I suspect something is not working correctly.

    I'm also using 'virtual gantry backers' and have created a corresponding issue there as well. I would appreciate any thoughts or input.

    https://github.com/Deutherius/VGB/issues/3

    printer.cfg

    [frame_expansion_compensation]
    temp_coeff: 0.03 ##0.0009
    temp_sensor: temperature_sensor ToolHP
    max_z_offset: 0.12
    z_stepper: stepper_z

    [stepper_z]
    ## Z0 Stepper - Front Left
    ## In Z-MOT Position
    step_pin: PD14
    dir_pin: PD13
    enable_pin: !PD15
    rotation_distance: 40
    gear_ratio: 80:16
    microsteps: 16
    position_max: 330 ##<<<<<<<<<
    endstop_pin: ^PA0
    position_min: -5
    homing_speed: 32
    second_homing_speed: 3
    homing_retract_dist: 3

    [tmc2209 stepper_z]
    uart_pin: PD10
    interpolate: True
    run_current: 0.8
    hold_current: 0.8
    sense_resistor: 0.110
    stealthchop_threshold: 0

    [stepper_z1]
    ## Z1 Stepper - Rear Left
    ## In E1-MOT Position
    step_pin: PE6
    dir_pin: !PC13
    enable_pin: !PE5
    rotation_distance: 40
    gear_ratio: 80:16
    microsteps: 16

    [tmc2209 stepper_z1]
    uart_pin: PC14
    interpolate: True
    run_current: 0.8
    hold_current: 0.8
    sense_resistor: 0.110
    stealthchop_threshold: 0

    [stepper_z2]
    ## Z2 Stepper - Rear Right
    ## In E2-MOT Position
    step_pin: PE2
    dir_pin: PE4
    enable_pin: !PE3
    rotation_distance: 40
    gear_ratio: 80:16
    microsteps: 16

    [tmc2209 stepper_z2]
    uart_pin: PC15
    interpolate: true
    run_current: 0.8
    hold_current: 0.8
    sense_resistor: 0.110
    stealthchop_threshold: 0

    [stepper_z3]
    ## Z3 Stepper - Front Right
    ## In E3-MOT Position
    step_pin: PD12
    dir_pin: !PC4
    enable_pin: !PE8
    rotation_distance: 40
    gear_ratio: 80:16
    microsteps: 16

    [tmc2209 stepper_z3]
    uart_pin: PA15
    interpolate: true
    run_current: 0.8
    hold_current: 0.8
    sense_resistor: 0.110
    stealthchop_threshold: 0

    opened by PhilBaz 2
  • questions regarding temp_sensor & z_stepper configurations

    Hi,

    My chamber temp sensor is already defined in a [temperature_fan] section, since the chamber fan is controlled by this thermistor; I cannot also define it in a [temperature_sensor] section, or an error is raised. How can I deal with this issue? Any workaround?

    Also, how do I configure z_stepper for a Voron 2.4, since there are 4 Z steppers?

    Thanks.

    opened by dukeduck1984 1
  • Updated install.sh to no longer use dummy service

    The dummy service should no longer be needed for use with Moonraker. Updated the install.sh file to continue following the pattern used by Z Auto Calibration. In addition, updated the README, since copying the file into Klipper isn't needed; the install.sh file will just create a link.

    opened by randellhodges 0
  • Problem with process_frame_expansion

    Hello, I have a problem with the process_frame_expansion.py script. If I run measure_thermal_behavior.py and process_meshes.py everything works fine, but when I run the process_frame_expansion.py script I get this error:

    [email protected]:~/measure_thermal_behavior $ python3 process_frame_expansion.py thermal_quant_mark988#5325_2022-05-29_23-12-26.json
    Analyzing file: thermal_quant_mark988#5325_2022-05-29_23-12-26
    sys:1: RankWarning: Polyfit may be poorly conditioned

    And it doesn't create the temp_coeff_fitting.png

    I am attaching the edited measure_thermal_behavior.py, the out.txt, and the thermal_quant file.

    Thank you for your help

    Marco

    measure_thermal_behavior.zip

    opened by panik988 0
  • measure_thermal_behavior: Anything to be gained by adding klicky z_calibration between meshes?

    I have a klicky probe.

    My brain is telling me it would be nice to have the z-calibration routine/data added into the measure_thermal_behavior script.

    But I can't actually figure out what it would be useful for. The z-calibration does drift with temperature and time, over-squishing after long periods of a heated chamber.

    Is there anything to be gained here?

    https://github.com/protoloft/klipper_z_calibration

    opened by PhilBaz 0
  • Need methodology for different active lengths

    I'm trying to apply this to an i3 bedslinger-style frame, where the gantry is supported by twin stainless steel leadscrews and is inside an enclosure. The deviation from the expected Z position will depend on the thermal growth of the length of leadscrew that is supporting the gantry. When the nozzle is at Z=0 there is about 50 mm of active leadscrew, so if the chamber were heated from 20C to 40C the leadscrews would grow thermally by 0.0000173 mm/mm/C x 50 mm x (40C-20C) = 0.017 mm. But when the nozzle gets up to Z=100 mm there would be 100 + 50 = 150 mm of active leadscrew, so the total growth would be 0.0000173 x 150 mm x 20C = 0.052 mm. So the compensation needs to know the active length of the support element, which may change from layer to layer, as it does in the case of the i3. I don't think what you currently have set up here takes that into account. (A small sketch of this calculation follows this list of comments.)

    feature request 
    opened by cmgreyhounds 1
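
The calculation in the feature request above can be captured in a small, hypothetical Python sketch. It is illustrative only, reproducing the arithmetic from the comment; it is not something the current plugin implements, and the function name is made up for this example.

# Hypothetical sketch of the proposed active-length-dependent compensation.
def leadscrew_growth_mm(alpha_per_k: float, base_length_mm: float,
                        z_height_mm: float, delta_t_k: float) -> float:
    """Thermal growth of the active leadscrew length supporting the gantry."""
    active_length_mm = base_length_mm + z_height_mm
    return alpha_per_k * active_length_mm * delta_t_k

ALPHA_STAINLESS = 0.0000173  # mm/mm/K, value quoted in the comment

# ~0.017 mm at Z=0 (50 mm active leadscrew, 20 K chamber rise)
print(leadscrew_growth_mm(ALPHA_STAINLESS, 50.0, 0.0, 20.0))
# ~0.052 mm at Z=100 (150 mm active leadscrew, 20 K chamber rise)
print(leadscrew_growth_mm(ALPHA_STAINLESS, 50.0, 100.0, 20.0))
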
Releases (v0.0.2)
  • v0.0.2 (Aug 3, 2022)

    What's Changed

    • Updated install.sh to no longer use dummy service by @randellhodges in https://github.com/alchemyEngine/klipper_frame_expansion_comp/pull/4

    Re-run install.sh after updating and make any necessary changes to your Moonraker config (see README/Configuration).

  • v0.0.1 (Dec 18, 2021)
