Fully Adaptive Bayesian Algorithm for Data Analysis (FABADA) is a new approach to noise reduction. This repository contains the package developed for the method, based on Sanchez-Alarcon & Ascasibar Sequeiros (2022, arXiv:2201.05145).

Overview


Fully Adaptive Bayesian Algorithm for Data Analysis

FABADA

FABADA is a novel non-parametric noise reduction technique that arises from the point of view of Bayesian inference: it iteratively evaluates possible smoothed models of the data, obtaining an estimation of the underlying signal that is statistically compatible with the noisy measurements. Iterations stop based on the evidence $E$ and the $\chi^2$ statistic of the last smooth model, and the expected value of the signal is computed as a weighted average of the smooth models. The paper describing the method is available on arXiv (see the Cite section below).
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Method
  2. Getting Started
  3. Usage
  4. Results
  5. Contributing
  6. License
  7. Contact
  8. Cite

About The Method

This automatic method is aimed at astronomical data, such as images (2D) or spectra (1D). However, it can also be treated as a general noise reduction algorithm and applied to any kind of one- or two-dimensional data, producing reliable results. The only requisite of the input data is an estimation of its variance.
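
If the noise level of your data is not known a priori, a rough estimate is usually enough. The helper below is one common heuristic for approximately white Gaussian noise; it is not part of the fabada package, and the function name and the MAD-to-sigma factor 1.4826 are illustrative choices:

    import numpy as np

    def estimate_variance(data):
        """Rough noise-variance estimate for roughly white Gaussian noise.

        Uses the median absolute deviation (MAD) of the first differences:
        1.4826 converts the MAD into a standard deviation, and the sqrt(2)
        accounts for differencing two noisy samples.
        """
        diff = np.diff(np.asarray(data, dtype=float), axis=-1).ravel()
        sigma = 1.4826 * np.median(np.abs(diff - np.median(diff))) / np.sqrt(2)
        return sigma**2

You would then call, for example, fabada(data, estimate_variance(data)).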

(back to top)

Getting Started

We try to make the usage of FABADA as simple as possible. For that purpose, we have created a PyPI package to install the latest version of FABADA; a Conda package is on the way.

Prerequisites

The first requirement is a Python version greater than 3.5. Although pip installs the prerequisites automatically, FABADA has two dependencies.

Installation

To install fabada we can use the Python Package Index (PyPI) or Conda.

Using pip

  pip install fabada

We are currently working on uploading the package to the Conda system.
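
Once installed, you can check that everything works with a tiny synthetic example. This snippet is only a sanity check (not one of the bundled demos); the signal and noise level are arbitrary:

    import numpy as np
    from fabada import fabada

    # Denoise a noisy sine wave as a quick installation check
    x = np.linspace(0, 2 * np.pi, 200)
    signal = 50 * np.sin(x) + 100
    noisy = signal + np.random.normal(0, 5, signal.shape)

    recovered = fabada(noisy, 5**2)   # second argument is the noise variance
    print(recovered.shape)            # should match the input shape: (200,)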

(back to top)

Usage

Along with the package, two examples are given:

  • fabada_demo_image.py

Here we show how to use fabada on an astronomical greyscale image (two-dimensional). First of all, we have to import the previously installed library and some dependencies:

    from fabada import fabada
    import numpy as np
    from PIL import Image

Then we read the bubble image, borrowed from the Hubble Space Telescope gallery; in our case we use the Pillow library for that. We also add some random Gaussian white noise using numpy.random:

    # IMPORTING IMAGE
    y = np.array(Image.open("bubble.png").convert('L'))

    # ADDING RANDOM GAUSSIAN NOISE
    np.random.seed(12431)
    sig      = 15             # Standard deviation of noise
    noise    = np.random.normal(0, sig, y.shape)
    z        = y + noise
    variance = sig**2

Once the noisy image is generated, we can apply fabada to produce an estimation of the underlying image: we only have to call fabada and give it the noisy image and its variance

    y_recover = fabada(z, variance)

And it's done 😉

As easy as one line of code.

The results obtained running this example would be:

Image Results

The left, middle and right panels correspond to the true signal, the noisy measurements and the fabada estimation, respectively. The Peak Signal-to-Noise Ratio (PSNR) in dB and the Structural Similarity Index Measure (SSIM) are also shown at the bottom of the middle and right panels (PSNR/SSIM).
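
PSNR and SSIM values like those quoted in the figure can be computed along the following lines. This sketch is not part of fabada_demo_image.py; it assumes scikit-image is installed for the SSIM computation and reuses the y, z and y_recover arrays defined above:

    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def psnr(reference, estimate, data_range=255.0):
        # Peak Signal-to-Noise Ratio in dB
        mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
        return 10 * np.log10(data_range**2 / mse)

    y_true = y.astype(float)
    print("noisy  PSNR/SSIM: %.2f dB / %.3f" % (psnr(y_true, z), ssim(y_true, z, data_range=255.0)))
    print("fabada PSNR/SSIM: %.2f dB / %.3f" % (psnr(y_true, y_recover), ssim(y_true, y_recover, data_range=255.0)))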

  • fabada_demo_spectra.py

Here we show how to use fabada for an astronomical spectrum (one-dimensional). It is basically the same as the example above, since fabada works identically on one- and two-dimensional data. First of all, we have to import the previously installed library and some dependencies:

    from fabada import fabada
    import pandas as pd
    import numpy as np

Then we read the spectrum of the interacting galaxy pair Arp 256, taken from the ASTROLIB PYSYNPHOT package and stored in arp256.csv. Again, we add some random Gaussian white noise:

    # IMPORTING SPECTRUM
    y = np.array(pd.read_csv('arp256.csv').flux)
    y = (y/y.max())*255  # Normalize to 255

    # ADDING RANDOM GAUSSIAN NOISE
    np.random.seed(12431)
    sig      = 10             # Standard deviation of noise
    noise    = np.random.normal(0, sig, y.shape)
    z        = y + noise
    variance = sig**2

Once the noisy spectrum is generated, we can again apply fabada to produce an estimation of the underlying spectrum: we only have to call fabada and give it the noisy spectrum and its variance

    y_recover = fabada(z, variance)

And done again 😉

Which is exactly the same call as for two-dimensional data.

The results obtained running this example would be:

Spectra Results

The red, grey and black lines represent the true signal, the noisy measurements and the fabada estimation, respectively. The Peak Signal-to-Noise Ratio (PSNR) in dB and the Structural Similarity Index Measure (SSIM) are also shown in the legend of the figure (PSNR/SSIM).
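
A comparison plot like the one above can be drawn with a few lines of matplotlib. This sketch is not part of fabada_demo_spectra.py; it simply reuses the y, z and y_recover arrays from the example, with colours chosen to match the caption:

    import matplotlib.pyplot as plt

    plt.figure(figsize=(10, 4))
    plt.plot(z, color="grey", alpha=0.6, label="Noisy measurements")
    plt.plot(y, color="red", label="True signal")
    plt.plot(y_recover, color="black", label="FABADA estimation")
    plt.xlabel("Sample")
    plt.ylabel("Normalized flux")
    plt.legend()
    plt.tight_layout()
    plt.show()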

(back to top)

Results

All the results of the paper can be found in the results folder, along with a Jupyter notebook that lets you explore all of them through an interactive interface. You can run the Jupyter notebook in Google Colab through this link --> Explore the results.

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the GNU General Public License. See LICENSE.txt for more information.

(back to top)

Contact

Pablo M Sánchez Alarcón - [email protected]

Yago Ascasibar Sequeiros - [email protected]

Project Link: https://github.com/PabloMSanAla/fabada

(back to top)

Cite

Thank you for using FABADA.

Citations and acknowledgements are vital for continued work on this kind of algorithm.

Please cite the following record if you used FABADA in any of your publications.

@ARTICLE{2022arXiv220105145S,
author = {{Sanchez-Alarcon}, Pablo M and {Ascasibar Sequeiros}, Yago},
title = "{Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA}",
journal = {arXiv e-prints},
keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics, Computer Science - Computer Vision and Pattern Recognition, Physics - Data Analysis, Statistics and Probability},
year = 2022,
month = jan,
eid = {arXiv:2201.05145},
pages = {arXiv:2201.05145},
archivePrefix = {arXiv},
eprint = {2201.05145},
primaryClass = {astro-ph.IM},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220105145S}
}

Sanchez-Alarcon, P. M. and Ascasibar Sequeiros, Y., “Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA”, arXiv e-prints, 2022.

https://arxiv.org/abs/2201.05145

(back to top)

README file taken from the Best README Template.

Comments
  • chi2pdf

    https://github.com/PabloMSanAla/fabada/blob/44a0ae025d21a11235f6591f8fcacbf7c0cec1ec/fabada/init.py#L129

    The chi2pdf estimation depends on df, which in the example demos is set to data.size.

    In the case of fabada_demo_spectrum, data.size is 1430 samples.

    Per Wolfram Alpha, the gamma function of 715 is about 1x10^1729, which is well outside the range of double-precision arithmetic.

        chi2_data = np.sum(...)  # a float
        chi2_pdf  = stats.chi2.pdf(chi2_data, df=data.size)

    https://lost-contact.mit.edu/afs/inf.ed.ac.uk/group/teaching/matlab-help/R2014a/stats/chi2pdf.html

        chi2_pdf = chi2_data**((N - 2) / 2) * numpy.exp(-chi2_data / 2) / (2**(N / 2) * math.gamma(N / 2))

    As a result, evaluating this formula directly fails, and numpy/Python will happily propagate the NaN value that is returned, which then turns chi2_pdf_derivative, chi2_pdf_previous, chi2_pdf_snd_derivative and chi2_pdf_derivative_previous into NaN values as well (see the log-space sketch after this comment list).

    opened by falseywinchnet 0
  • data variance fixing unreachable

    https://github.com/PabloMSanAla/fabada/blob/master/fabada/init.py#L83 — this line of code is unreachable, since all the NaNs have already been set to 0 previously.

    opened by falseywinchnet 0
  • python equivalence

    https://github.com/PabloMSanAla/fabada/blob/44a0ae025d21a11235f6591f8fcacbf7c0cec1ec/fabada/init.py#L115 — this sets a reference, and afterwards any update to the array being referenced also modifies the array referencing it.

    opened by falseywinchnet 2
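
Regarding the chi2pdf comment above: the overflow it describes comes from evaluating the gamma function directly, and a common workaround is to do the whole computation in log space (for instance with scipy.special.gammaln). The snippet below is only an illustration of that point, not code taken from the package:

    import math
    import numpy as np
    from scipy.special import gammaln

    N = 1430          # number of samples in the spectrum demo
    x = float(N)      # a chi^2 value close to its expectation

    # Direct evaluation of the chi^2 pdf overflows: both x**((N - 2) / 2)
    # and math.gamma(N / 2) exceed the double-precision range.
    try:
        naive = x**((N - 2) / 2) * math.exp(-x / 2) / (2**(N / 2) * math.gamma(N / 2))
        print("naive formula:", naive)
    except OverflowError as err:
        print("naive formula overflows:", err)

    # The same quantity computed in log space stays finite (about 7e-3 here).
    log_pdf = (N / 2 - 1) * np.log(x) - x / 2 - (N / 2) * np.log(2) - gammaln(N / 2)
    print("log-space pdf:", np.exp(log_pdf))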