A tight inclusion function for continuous collision detection

Overview

Tight-Inclusion Continuous Collision Detection

A conservative Continuous Collision Detection (CCD) method with support for minimum separation.

You can read more about this work in our ACM Transactions on Graphics paper:

"A Large Scale Benchmark and an Inclusion-Based Algorithm forContinuous Collision Detection"

@article{Wang:2021:Benchmark,
    title   = {A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection},
    author  = {Bolun Wang and Zachary Ferguson and Teseo Schneider and Xin Jiang and Marco Attene and Daniele Panozzo},
    year    = 2021,
    journal = {ACM Transactions on Graphics}
}

Compiling Instructions

To compile the code, first make sure CMake is installed.

To build the library on Linux or macOS:

mkdir build
cd build
cmake ../ -DCMAKE_BUILD_TYPE=Release
make

Then you can run a CCD example:

./Tight_Inclusion_bin 

We also provide an example that runs the Sample Queries with our CCD method. You may need to install GMP before compiling the code. Then set the CMake option TIGHT_INCLUSION_WITH_TESTS to ON when configuring:

cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_TESTS=ON
make

Then you can run ./Tight_Inclusion_bin to test the handcrafted queries in the Sample Queries.

Usage

Include the header: #include <tight_inclusion/inclusion_ccd.hpp>

To check edge-edge CCD, use bool inclusion_ccd::edgeEdgeCCD_double();

To check vertex-face CCD, use bool inclusion_ccd::vertexFaceCCD_double();

💡 If a collision is detected, the CCD function returns true; otherwise it returns false. Since our method is CONSERVATIVE, if the returned result is false, we guarantee that no collision happens. If the result is true, it is possible that there is no collision and we falsely report one, but we can guarantee that this happens only if the minimal distance between the two primitives in this time step is no larger than tolerance + ms + err. We explain these parameters below.

For both edge-edge CCD and vertex-face CCD, the input CCD query is represented by eight vertices given as Eigen::Vector3d. Please read our code in tight_inclusion/inclusion_ccd.hpp for the correct input order of the vertices.

Besides the input vertices, there are some input and output parameters for users to tune the performance or to get more CCD information. Here is a list of explanations of the parameters (a call sketch follows the list):

input:
    err                 The numerical filters of the x, y and z coordinates. It measures the error introduced by floating-point calculation when solving the inclusion functions.
    ms                  Minimum separation distance, no less than 0. It guarantees that a collision will be reported if the distance between the two primitives is less than ms.
    tolerance           User-specified solving precision. It is the target maximal x, y, and z length of the inclusion function. We suggest setting it to 1e-6.
    t_max               The time range [0, t_max] in which we detect collisions. Since the input query implies the motion in the time range [0, 1], t_max should be no larger than 1.
    max_itr             The parameter that enables early termination of the algorithm. If you set max_itr < 0, early termination is disabled, but this may cause longer runtime. We suggest setting max_itr = 1e6.
    CCD_TYPE            The parameter that selects the collision detection algorithm. By default CCD_TYPE = 1. If set to CCD_TYPE = 0, the code switches to a naive conservative CCD algorithm, which lacks our advanced features.

output:
    toi                 Time of impact. If multiple collisions happen in this time step, it returns the earliest collision time. If there is no collision, the returned toi value will be std::numeric_limits<double>::infinity().
    output_tolerance    The real solving precision. If early termination is enabled, the solving precision may not reach the target precision. This parameter returns the real solving precision reached when the code terminates.
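
To make the parameter list concrete, here is a minimal sketch of a single vertex-face query. The argument order (the eight vertices, then err, ms, toi, tolerance, t_max, max_itr, output_tolerance, CCD_TYPE) follows the calls shown in the issue discussion further below; please verify it against tight_inclusion/inclusion_ccd.hpp, and note that the geometry here is only placeholder data.

#include <array>
#include <iostream>
#include <limits>
#include <Eigen/Core>
#include <tight_inclusion/inclusion_ccd.hpp>

int main()
{
    // Placeholder geometry: a vertex moving straight down toward a static triangle.
    const Eigen::Vector3d v_t0(0.0, 0.5, 0.0), v_t1(0.0, -0.5, 0.0);
    const Eigen::Vector3d f0(-1.0, 0.0, -1.0), f1(1.0, 0.0, -1.0), f2(0.0, 0.0, 1.0);

    double toi = std::numeric_limits<double>::infinity();
    double output_tolerance = 1e-6;

    const bool hit = inclusion_ccd::vertexFaceCCD_double(
        v_t0, f0, f1, f2,      // vertex and triangle positions at t = 0
        v_t1, f0, f1, f2,      // vertex and triangle positions at t = 1
        {{-1, -1, -1}},        // err: let the library compute the numerical filters
        0.0,                   // ms: minimum separation distance
        toi,                   // output: time of impact
        1e-6,                  // tolerance: target solving precision
        1.0,                   // t_max: detect collisions over the whole step [0, 1]
        1e6,                   // max_itr: enable early termination
        output_tolerance,      // output: solving precision actually reached
        1);                    // CCD_TYPE: the default Tight-Inclusion algorithm

    if (hit)
        std::cout << "possible collision, toi = " << toi << "\n";
    else
        std::cout << "no collision\n";
    return 0;
}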

Tips

💡 The input parameter err is crucial to guarantee that our algorithm is a conservative method not affected by floating-point rounding errors. To run a single query, you can set err = {{-1, -1, -1}} to enable a sub-function that calculates the real numerical filters when solving CCD. If you are integrating our CCD into a simulator, you need to:

  • Include the header: #include <tight_inclusion/interval_root_finder.hpp>.
  • Call std::array<double, 3> err_vf = inclusion_ccd::get_numerical_error() and std::array<double, 3> err_ee = inclusion_ccd::get_numerical_error(), once with the vertex-face setting and once with the edge-edge setting.
  • Use the parameter err_ee each time you call bool inclusion_ccd::edgeEdgeCCD_double(), and err_vf each time you call bool inclusion_ccd::vertexFaceCCD_double().

The parameters of the function inclusion_ccd::get_numerical_error() are explained below (a usage sketch follows the list):

input:
    vertices            Vertices of the axis-aligned bounding box (AABB) of the simulation scene. Before you run the simulation, you need to conservatively estimate the AABB in which the meshes will be located during the whole simulation process, and the vertices should be the corners of this AABB.
    check_vf            A boolean flag indicating whether you are checking vertex-face (true) or edge-edge (false) CCD.
    using_minimum_separation    A boolean flag. If you are using minimum-separation CCD (the input parameter ms > 0), set it to true.
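
Below is a minimal sketch of that workflow, assuming get_numerical_error() takes the AABB corner vertices as a std::vector of Eigen::Vector3d followed by the two boolean flags; the exact container type and the AABB bounds used here are placeholders, so please check tight_inclusion/interval_root_finder.hpp before reusing it.

#include <array>
#include <vector>
#include <Eigen/Core>
#include <tight_inclusion/interval_root_finder.hpp>

// Hypothetical helper: compute the scene-wide numerical filters once, before time stepping.
// box_min / box_max are a conservative estimate of the scene AABB over the whole simulation.
void compute_scene_filters(
    const Eigen::Vector3d& box_min, const Eigen::Vector3d& box_max,
    bool using_minimum_separation,            // true if your queries use ms > 0
    std::array<double, 3>& err_vf,
    std::array<double, 3>& err_ee)
{
    // The eight corners of the scene AABB.
    std::vector<Eigen::Vector3d> corners;
    for (int i = 0; i < 8; ++i) {
        corners.emplace_back(
            (i & 1) ? box_max.x() : box_min.x(),
            (i & 2) ? box_max.y() : box_min.y(),
            (i & 4) ? box_max.z() : box_min.z());
    }

    err_vf = inclusion_ccd::get_numerical_error(corners, /*check_vf=*/true, using_minimum_separation);
    err_ee = inclusion_ccd::get_numerical_error(corners, /*check_vf=*/false, using_minimum_separation);
}

Every vertexFaceCCD_double() call in that simulation would then receive err_vf as its err argument, and every edgeEdgeCCD_double() call would receive err_ee.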

💡 Some simulators use a non-zero minimum separation distance (ms > 0) to keep each time step intersection free. For these, we provide a CMake option TIGHT_INCLUSION_WITH_NO_ZERO_TOI that prevents the returned collision time toi from being 0. Set TIGHT_INCLUSION_WITH_NO_ZERO_TOI to ON when compiling: cmake ../ -DCMAKE_BUILD_TYPE=Release -DTIGHT_INCLUSION_WITH_NO_ZERO_TOI=ON. Then, when you use the CCD functions, the code will continue the refinement at higher precision if the output toi is 0 under the given tolerance, so the eventual toi will not be 0.

For a better understanding of our Tight-Inclusion CCD algorithm, or for more details, please refer to our paper.

Comments
  • Improved No Zero ToI

    • I changed the options for avoiding a zero time of impact by making the option a parameter to the CCD functions.
    • I also changed from a recursive algorithm to an iterative one with a maximum number of iterations.
    opened by zfergus
  • a case returned 0 TOI

    For the PT CCD case below, TICCD returned 0 TOI.

    3.5000000000e-01 0.0000000000e+00 3.5000000000e-01
    3.4604643638e-01 2.7847887896e-03 3.4658121160e-01
    3.5000819263e-01 -1.5763377304e-06 3.5001214242e-01
    3.4903882032e-01 -1.5866575093e-03 3.4560164356e-01
    0.0000000000e+00 0.0000000000e+00 0.0000000000e+00
    1.0120300789e-07 -1.4009994248e-04 -8.5653091896e-05
    -9.2976239634e-07 1.4361165221e-06 5.5168830145e-07
    1.1649271683e-04 -3.6172565398e-05 -4.7128237917e-05
    

    The format is

    node_x node_y node_z
    triangle_node0_x triangle_node0_y triangle_node0_z
    triangle_node1_x triangle_node1_y triangle_node1_z
    triangle_node2_x triangle_node2_y triangle_node2_z
    node_displacement_x node_displacement_y node_displacement_z
    triangle_node0_displacement_x triangle_node0_displacement_y triangle_node0_displacement_z
    triangle_node1_displacement_x triangle_node1_displacement_y triangle_node1_displacement_z
    triangle_node2_displacement_x triangle_node2_displacement_y triangle_node2_displacement_z
    

    I used the TICCD from the CCDWrapper repo (latest commit), and I called it like this:

    template <class T>
    bool Point_Triangle_CCD_TI(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T eta, T thickness, T& toc)
    {
        T toc_prev = toc;
        T output_tolerance;
    
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
            p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
            eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness) + thickness,
            toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
        {
            if (toc < 1.0e-6) {
                std::cout << "PT CCD tiny!" << std::endl;
                if (inclusion_ccd::vertexFaceCCD_double(p, t0, t1, t2,
                    p + dp, t0 + dt0, t1 + dt1, t2 + dt2, { { -1, 0, 0 } },
                    thickness, toc, 1e-6, toc_prev, 1e6, output_tolerance, 1))
                {
                    toc *= (1.0 - eta);
                    return true;
                }
                else {
                    return false;
                }
            }
            return true;
        }
        return false;
    }
    
    printf("TICCD:\n");
    largestAlpha = 1;
    tstart = clock();
    if (Point_Triangle_CCD_TI(x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], 0.1, 0.0, largestAlpha)) {
        printf("%le\n", largestAlpha);
    }
    else {
        printf("no collision\n");
    }
    printf("%.1lems\n", double(clock() - tstart) / CLOCKS_PER_SEC * 1000);
    

    The output is

    TICCD:
    PT CCD tiny!
    0.000000e+00
    1.9e+00ms
    

    However, I implemented a simpler version of TICCD according to the paper as follows:

    template <class T>
    void Point_Triangle_Distance_Vector_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        T t, T lambda, T beta,
        Eigen::Matrix<T, 3, 1>& distVec)
    {
        const Eigen::Matrix<T, 3, 1> tp = (1 - lambda - beta) * t0 + lambda * t1 + beta * t2;
        const Eigen::Matrix<T, 3, 1> dtp = (1 - lambda - beta) * dt0 + lambda * dt1 + beta * dt2;
        distVec = p + t * dp - (tp + t * dtp);
    }
    
    template <class T>
    bool Point_Triangle_CheckInterval_Unclassified(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        const Eigen::Matrix<T, 3, 1>& dp, 
        const Eigen::Matrix<T, 3, 1>& dt0, 
        const Eigen::Matrix<T, 3, 1>& dt1,
        const Eigen::Matrix<T, 3, 1>& dt2,
        const std::array<T, 6>& interval,
        T gap)
    {
        Eigen::Matrix<T, 3, 1> distVecMax, distVecMin;
        distVecMax.setConstant(-2 * gap - 1);
        distVecMin.setConstant(2 * gap + 1);
        for (int t = 0; t < 2; ++t) {
            for (int lambda = 0; lambda < 2; ++lambda) {
                for (int beta = 0; beta < 2; ++beta) {
                    if (lambda == 1 && beta == 1) {
                        continue;
                    }
                    Eigen::Matrix<T, 3, 1> distVec;
                    Point_Triangle_Distance_Vector_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, 
                        interval[t], interval[2 + lambda], interval[4 + beta], distVec);
                    distVecMax = distVecMax.array().max(distVec.array());
                    distVecMin = distVecMin.array().min(distVec.array());
                }
            }
        }
        return (distVecMax.array() >= -gap).all() && (distVecMin.array() <= gap).all();
    }
    template <class T>
    bool Point_Triangle_TICCD(
        const Eigen::Matrix<T, 3, 1>& p, 
        const Eigen::Matrix<T, 3, 1>& t0, 
        const Eigen::Matrix<T, 3, 1>& t1,
        const Eigen::Matrix<T, 3, 1>& t2,
        Eigen::Matrix<T, 3, 1> dp, 
        Eigen::Matrix<T, 3, 1> dt0, 
        Eigen::Matrix<T, 3, 1> dt1,
        Eigen::Matrix<T, 3, 1> dt2,
        T eta, T thickness, T& toc)
    {
        T dist2_cur;
        Point_Triangle_Distance_Unclassified(p, t0, t1, t2, dist2_cur);
        T dist_cur = std::sqrt(dist2_cur);
        T gap = eta * (dist2_cur - thickness * thickness) / (dist_cur + thickness);
    
        T tTol = 1e-3;
    
        std::vector<std::array<T, 6>> roots;
        std::deque<std::array<T, 6>> intervals;
        intervals.push_back({0, toc, 0, 1, 0, 1});
        int iterAmt = 0;
        while (!intervals.empty()) {
            ++iterAmt;
    
            std::array<T, 6> curIV = intervals.front();
            intervals.pop_front();
    
            // check
            if (Point_Triangle_CheckInterval_Unclassified(p, t0, t1, t2, dp, dt0, dt1, dt2, curIV, gap)) {
                if (curIV[0] && curIV[1] - curIV[0] < tTol) {
                    // root found within tTol
                    roots.emplace_back(curIV);
                }
                else {
                    // split interval and push back
                    std::vector<T> intervalLen({curIV[1] - curIV[0], curIV[3] - curIV[2], curIV[5] - curIV[4]});
                    switch (std::max_element(intervalLen.begin(), intervalLen.end()) - intervalLen.begin()) {
                    case 0:
                        intervals.push_back({curIV[0], (curIV[1] + curIV[0]) / 2, curIV[2], curIV[3], curIV[4], curIV[5]});
                        intervals.push_back({(curIV[1] + curIV[0]) / 2, curIV[1], curIV[2], curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 1:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], (curIV[2] + curIV[3]) / 2, curIV[4], curIV[5]});
                        intervals.push_back({curIV[0], curIV[1], (curIV[2] + curIV[3]) / 2, curIV[3], curIV[4], curIV[5]});
                        break;
    
                    case 2:
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], curIV[4], (curIV[4] + curIV[5]) / 2});
                        intervals.push_back({curIV[0], curIV[1], curIV[2], curIV[3], (curIV[4] + curIV[5]) / 2, curIV[5]});
                        break;
                    }
                }
            }
        }
        
        if (roots.empty()) {
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return false;
        }
        else {
            for (const auto& rI : roots) {
                if (toc > rI[0]) {
                    toc = rI[0];
                }
            }
            printf("TICCD PT converged with %d iters\n", iterAmt);
            return true;
        }
    }
    

    and it kind of worked on this case, producing the output

    TICCD PT converged with 6143 iters
    4.882812e-04
    5.3e-01ms
    

    which, with a tighter convergence tolerance, might be able to reach the ground-truth solution of no collision.

    In my implementation, I didn't include the epsilons in formula (5) in the paper. Is it the main cause here?

    opened by liminchen
  • Refactored and cleaned up code

    I tested and timed it on the entire dataset. I get 0 false negatives, a couple fewer false positives on the handcrafted data, and the timing is a little faster on the simulation dataset and faster on the handcrafted one.

    opened by zfergus
Releases: v1.0.2

Owner: Continuous Collision Detection
Code for continuous collision detection methods benchmarked in "A Large Scale Benchmark and an Inclusion-Based Algorithm for Continuous Collision Detection"