PEPit is a package enabling computer-assisted worst-case analyses of first-order optimization methods.

Overview

PEPit: Performance Estimation in Python


This open-source Python library provides a generic way to use the PEP framework in Python. Performance estimation problems were introduced in 2014 by Yoel Drori and Marc Teboulle, see [1]. PEPit is mainly based on the formalism and developments from [2, 3] by a subset of the authors of this toolbox. A friendly informal introduction to this formalism is available in this blog post, and a corresponding Matlab library is presented in [4] (PESTO).

Website and documentation of PEPit: https://pepit.readthedocs.io/

Source Code (MIT): https://github.com/bgoujaud/PEPit

Using and citing the toolbox

This code comes jointly with the following reference:

B. Goujaud, C. Moucer, F. Glineur, J. Hendrickx, A. Taylor, A. Dieuleveut (2022).
"PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python."

When using the toolbox in a project, please cite this reference via the following BibTeX entry:

@article{pepit2022,
  title={{PEPit}: computer-assisted worst-case analyses of first-order optimization methods in {P}ython},
  author={Goujaud, Baptiste and Moucer, C\'eline and Glineur, Fran\c{c}ois and Hendrickx, Julien and Taylor, Adrien and Dieuleveut, Aymeric},
  journal={arXiv preprint arXiv:2201.04040},
  year={2022}
}

Demo

This notebook provides a demonstration of how to use PEPit to obtain a worst-case guarantee on a simple algorithm (gradient descent), and more advanced analyses of three other examples.

Installation

The library has been tested on Linux and macOS. It relies on the following Python modules:

  • NumPy
  • SciPy
  • CVXPY
  • Matplotlib (for the demo notebook)

Pip installation

You can install the toolbox through PyPI with:

pip install pepit

or get the very latest version by running:

pip install -U https://github.com/bgoujaud/PEPit/archive/master.zip # with --user for user install (no root)

Post installation check

After a correct installation, you should be able to import the module without errors:

import PEPit
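
To additionally check which version is installed, the sketch below queries the package metadata through the standard-library importlib.metadata (the distribution name "PEPit" is assumed here to match the PyPI project name):

from importlib.metadata import version  # standard library (Python >= 3.8)

import PEPit  # succeeds after a correct installation

# Query the installed distribution version; "PEPit" is the PyPI project name.
print(version("PEPit"))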

Online environment

You can also try the package in this Binder repository.

Example

The folder Examples contains numerous introductory examples to the toolbox.

Among the other examples, the following code (see GradientMethod) generates a worst-case scenario for n iterations of the gradient method, applied to the minimization of a smooth (possibly strongly) convex function f(x). More precisely, this code snippet computes the worst-case value of f(x_n) - f_* when x_n is generated by gradient descent, and when ||x_0 - x_*||^2 <= 1.

from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction


def wc_gradient_descent(L, gamma, n, verbose=True):
    """
    Consider the convex minimization problem

    .. math:: f_\\star \\triangleq \\min_x f(x),

    where :math:`f` is :math:`L`-smooth and convex.

    This code computes a worst-case guarantee for **gradient descent** with fixed step-size :math:`\\gamma`.
    That is, it computes the smallest possible :math:`\\tau(n, L, \\gamma)` such that the guarantee

    .. math:: f(x_n) - f_\\star \\leqslant \\tau(n, L, \\gamma) || x_0 - x_\\star ||^2

    is valid, where :math:`x_n` is the output of gradient descent with fixed step-size :math:`\\gamma`, and
    where :math:`x_\\star` is a minimizer of :math:`f`.

    In short, for given values of :math:`n`, :math:`L`, and :math:`\\gamma`, :math:`\\tau(n, L, \\gamma)` is computed as the worst-case
    value of :math:`f(x_n)-f_\\star` when :math:`||x_0 - x_\\star||^2 \\leqslant 1`.

    **Algorithm**:
    Gradient descent is described by

    .. math:: x_{t+1} = x_t - \\gamma \\nabla f(x_t),

    where :math:`\\gamma` is a step-size.

    **Theoretical guarantee**:
    When :math:`\\gamma \\leqslant \\frac{1}{L}`, the **tight** theoretical guarantee can be found in [1, Theorem 1]:

    .. math:: f(x_n)-f_\\star \\leqslant \\frac{L||x_0-x_\\star||^2}{4nL\\gamma+2},

    which is tight on some Huber loss functions.

    **References**:

    `[1] Y. Drori, M. Teboulle (2014). Performance of first-order methods for smooth convex minimization: a novel
    approach. Mathematical Programming 145(1–2), 451–482.
    <https://arxiv.org/pdf/1206.3209.pdf>`_

    Args:
        L (float): the smoothness parameter.
        gamma (float): step-size.
        n (int): number of iterations.
        verbose (bool): if True, print conclusion

    Returns:
        pepit_tau (float): worst-case value
        theoretical_tau (float): theoretical value

    Example:
        >>> L = 3
        >>> pepit_tau, theoretical_tau = wc_gradient_descent(L=L, gamma=1 / L, n=4, verbose=True)
        (PEPit) Setting up the problem: size of the main PSD matrix: 7x7
        (PEPit) Setting up the problem: performance measure is minimum of 1 element(s)
        (PEPit) Setting up the problem: initial conditions (1 constraint(s) added)
        (PEPit) Setting up the problem: interpolation conditions for 1 function(s)
                 function 1 : 30 constraint(s) added
        (PEPit) Compiling SDP
        (PEPit) Calling SDP solver
        (PEPit) Solver status: optimal (solver: MOSEK); optimal value: 0.16666666497937685
        *** Example file: worst-case performance of gradient descent with fixed step-sizes ***
            PEPit guarantee:		 f(x_n)-f_* <= 0.166667 ||x_0 - x_*||^2
            Theoretical guarantee:	 f(x_n)-f_* <= 0.166667 ||x_0 - x_*||^2

    """

    # Instantiate PEP
    problem = PEP()

    # Declare a strongly convex smooth function
    func = problem.declare_function(SmoothStronglyConvexFunction, param={'mu': 0, 'L': L})

    # Start by defining its unique optimal point xs = x_* and corresponding function value fs = f_*
    xs = func.stationary_point()
    fs = func.value(xs)

    # Then define the starting point x0 of the algorithm
    x0 = problem.set_initial_point()

    # Set the initial constraint that is the distance between x0 and x^*
    problem.set_initial_condition((x0 - xs) ** 2 <= 1)

    # Run n steps of the GD method
    x = x0
    for _ in range(n):
        x = x - gamma * func.gradient(x)

    # Set the performance metric to the function values accuracy
    problem.set_performance_metric(func.value(x) - fs)

    # Solve the PEP
    pepit_tau = problem.solve(verbose=verbose)

    # Compute theoretical guarantee (for comparison)
    theoretical_tau = L / (2 * (2 * n * L * gamma + 1))

    # Print conclusion if required
    if verbose:
        print('*** Example file: worst-case performance of gradient descent with fixed step-sizes ***')
        print('\tPEPit guarantee:\t\t f(x_n)-f_* <= {:.6} ||x_0 - x_*||^2'.format(pepit_tau))
        print('\tTheoretical guarantee:\t f(x_n)-f_* <= {:.6} ||x_0 - x_*||^2'.format(theoretical_tau))

    # Return the worst-case guarantee of the evaluated method (and the reference theoretical value)
    return pepit_tau, theoretical_tau


if __name__ == "__main__":

    L = 3
    pepit_tau, theoretical_tau = wc_gradient_descent(L=L, gamma=1 / L, n=4, verbose=True)
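
As a sanity check on the run above: with L = 3, gamma = 1/L, and n = 4, the theoretical bound evaluates to L/(4nLγ + 2) = 3/18 = 1/6 ≈ 0.166667, which matches the worst-case value returned by the SDP solver. The snippet below (a usage sketch, not part of the original example) compares the PEPit worst-case value with the theoretical bound over a few step-sizes:

L = 3
for gamma in [0.5 / L, 1 / L]:
    pepit_tau, theoretical_tau = wc_gradient_descent(L=L, gamma=gamma, n=4, verbose=False)
    print('gamma = {:.4f}: PEPit tau = {:.6f}, theoretical tau = {:.6f}'.format(gamma, pepit_tau, theoretical_tau))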

Included tools

Many common optimization methods can be studied through this framework, combining numerous primitive steps (often referred to as "oracles") with a large variety of function and operator classes.

PEPit provides extensive collections of such steps, function classes, and operator classes; the complete lists are available in the documentation. As an illustration, a minimal sketch of analyzing another method is shown below.
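
The following sketch (illustrative only, assuming the ConvexFunction class and the proximal_step primitive listed in the documentation; exact signatures may vary across versions) sets up a worst-case analysis of the proximal point method:

from PEPit import PEP
from PEPit.functions import ConvexFunction
from PEPit.primitive_steps import proximal_step

# Instantiate PEP and declare a closed convex function
problem = PEP()
func = problem.declare_function(ConvexFunction)

# Optimal point xs = x_* and corresponding value fs = f_*
xs = func.stationary_point()
fs = func(xs)

# Initial point, constrained by ||x_0 - x_*||^2 <= 1
x0 = problem.set_initial_point()
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

# Run n steps of the proximal point method with step-size gamma
gamma, n = 1, 5
x = x0
for _ in range(n):
    x, _, fx = proximal_step(x, func, gamma)

# Worst-case value of f(x_n) - f_* under the constraint above
problem.set_performance_metric(fx - fs)
pepit_tau = problem.solve(verbose=False)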

Authors

This toolbox has been created by Baptiste Goujaud, Céline Moucer, François Glineur, Julien Hendrickx, Adrien Taylor, and Aymeric Dieuleveut.

Acknowledgments

The authors would like to thank Rémi Flamary for his feedback on preliminary versions of the toolbox, as well as for his support with the continuous integration.

Contributions

All external contributions are welcome. Please read the contribution guidelines.

References

[1] Y. Drori, M. Teboulle (2014). Performance of first-order methods for smooth convex minimization: a novel approach. Mathematical Programming 145(1–2), 451–482.

[2] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2), 307-345.

[3] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.

[4] A. Taylor, J. Hendrickx, F. Glineur (2017). Performance Estimation Toolbox (PESTO): automated worst-case analysis of first-order optimization methods. In 56th IEEE Conference on Decision and Control (CDC).

[5] A. d’Aspremont, D. Scieur, A. Taylor (2021). Acceleration Methods. Foundations and Trends in Optimization: Vol. 5, No. 1-2.

[6] O. Güler (1992). New proximal point algorithms for convex minimization. SIAM Journal on Optimization, 2(4):649–664.

[7] Y. Drori (2017). The exact information-based complexity of smooth convex minimization. Journal of Complexity, 39, 1-16.

[8] E. De Klerk, F. Glineur, A. Taylor (2017). On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions. Optimization Letters, 11(7), 1185-1199.

[9] B.T. Polyak (1964). Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics.

[10] E. Ghadimi, H. R. Feyzmahdavian, M. Johansson (2015). Global convergence of the Heavy-ball method for convex optimization. European Control Conference (ECC).

[11] E. De Klerk, F. Glineur, A. Taylor (2020). Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation. SIAM Journal on Optimization, 30(3), 2053-2082.

[12] O. Gannot (2021). A frequency-domain analysis of inexact gradient methods. Mathematical Programming.

[13] D. Kim, J. Fessler (2016). Optimized first-order methods for smooth convex minimization. Mathematical Programming, 159(1-2), 81-107.

[14] S. Cyrus, B. Hu, B. Van Scoy, L. Lessard (2018). A robust accelerated optimization algorithm for strongly convex functions. American Control Conference (ACC).

[15] Y. Nesterov (2003). Introductory lectures on convex optimization: A basic course. Springer Science & Business Media.

[16] S. Boyd, L. Xiao, A. Mutapcic (2003). Subgradient Methods (lecture notes).

[17] Y. Drori, M. Teboulle (2016). An optimal variant of Kelley's cutting-plane method. Mathematical Programming, 160(1), 321-351.

[18] B. Van Scoy, R. A. Freeman, K. M. Lynch (2018). The fastest known globally convergent first-order method for minimizing strongly convex functions. IEEE Control Systems Letters, 2(1), 49-54.

[19] P. Patrinos, L. Stella, A. Bemporad (2014). Douglas-Rachford splitting: Complexity estimates and accelerated variants. In 53rd IEEE Conference on Decision and Control (CDC).

[20] Y. Censor, S.A. Zenios (1992). Proximal minimization algorithm with D-functions. Journal of Optimization Theory and Applications, 73(3), 451-464.

[21] E. Ryu, S. Boyd (2016). A primer on monotone operator methods. Applied and Computational Mathematics 15(1), 3-43.

[22] E. Ryu, A. Taylor, C. Bergeling, P. Giselsson (2020). Operator splitting performance estimation: Tight contraction factors and optimal parameter selection. SIAM Journal on Optimization, 30(3), 2251-2271.

[23] P. Giselsson, and S. Boyd (2016). Linear convergence and metric selection in Douglas-Rachford splitting and ADMM. IEEE Transactions on Automatic Control, 62(2), 532-544.

[24] M. Frank, P. Wolfe (1956). An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2), 95-110.

[25] M. Jaggi (2013). Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In 30th International Conference on Machine Learning (ICML).

[26] A. Auslender, M. Teboulle (2006). Interior gradient and proximal methods for convex and conic optimization. SIAM Journal on Optimization, 16(3), 697-725.

[27] H.H. Bauschke, J. Bolte, M. Teboulle (2017). A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications. Mathematics of Operations Research, 42(2), 330-348.

[28] R. Dragomir, A. Taylor, A. d’Aspremont, J. Bolte (2021). Optimal complexity and certification of Bregman first-order methods. Mathematical Programming, 1-43.

[29] A. Taylor, J. Hendrickx, F. Glineur (2018). Exact worst-case convergence rates of the proximal gradient method for composite convex minimization. Journal of Optimization Theory and Applications, 178(2), 455-476.

[30] B. Polyak (1987). Introduction to Optimization. Optimization Software New York.

[31] L. Lessard, B. Recht, A. Packard (2016). Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization 26(1), 57–95.

[32] D. Davis, W. Yin (2017). A three-operator splitting scheme and its optimization applications. Set-valued and variational analysis, 25(4), 829-858.

[33] A. Taylor (2017). Convex interpolation and performance estimation of first-order methods for convex optimization. PhD thesis, UCLouvain.

[34] H. Abbaszadehpeivasti, E. de Klerk, M. Zamani (2021). The exact worst-case convergence rate of the gradient method with fixed step lengths for L-smooth functions. arXiv preprint arXiv:2104.05468.

[35] J. Bolte, S. Sabach, M. Teboulle, Y. Vaisbourd (2018). First order methods beyond convexity and Lipschitz gradient continuity with applications to quadratic inverse problems. SIAM Journal on Optimization, 28(3), 2131-2151.

[36] A. Defazio (2016). A simple practical accelerated method for finite sums. Advances in Neural Information Processing Systems (NIPS), 29, 676-684.

[37] A. Defazio, F. Bach, S. Lacoste-Julien (2014). SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems (NIPS).

[38] B. Hu, P. Seiler, L. Lessard (2020). Analysis of biased stochastic gradient descent using sequential semidefinite programs. Mathematical Programming (to appear).

[39] A. Taylor, F. Bach (2019). Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions. Conference on Learning Theory (COLT).

[40] D. Kim (2021). Accelerated proximal point method for maximally monotone operators. Mathematical Programming, 1-31.

[41] W. Moursi, L. Vandenberghe (2019). Douglas–Rachford Splitting for the Sum of a Lipschitz Continuous and a Strongly Monotone Operator. Journal of Optimization Theory and Applications 183, 179–198.

[42] G. Gu, J. Yang (2020). Tight sublinear convergence rate of the proximal point algorithm for maximal monotone inclusion problem. SIAM Journal on Optimization, 30(3), 1905-1921.

[43] F. Lieder (2021). On the convergence rate of the Halpern-iteration. Optimization Letters, 15(2), 405-418.

[44] F. Lieder (2018). Projection Based Methods for Conic Linear Programming Optimal First Order Complexities and Norm Constrained Quasi Newton Methods. PhD thesis, HHU Düsseldorf.

[45] Y. Nesterov (1983). A method for solving the convex programming problem with convergence rate O(1/k^2). Doklady Akademii Nauk SSSR, 269, 543-547.

[46] N. Bansal, A. Gupta (2019). Potential-function proofs for gradient methods. Theory of Computing, 15(1), 1-32.

[47] M. Barre, A. Taylor, F. Bach (2021). A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives. arXiv:2106.15536v2.

[48] J. Eckstein and W. Yao (2018). Relative-error approximate versions of Douglas–Rachford splitting and special cases of the ADMM. Mathematical Programming, 170(2), 417-444.

[49] M. Barré, A. Taylor, A. d’Aspremont (2020). Complexity guarantees for Polyak steps with momentum. In Conference on Learning Theory (COLT).

[50] D. Kim, J. Fessler (2017). On the convergence analysis of the optimized gradient method. Journal of Optimization Theory and Applications, 172(1), 187-205.

[51] S. Diamond, S. Boyd (2016). CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research (JMLR), 17(83), 1-5.

[52] A. Agrawal, R. Verschueren, S. Diamond, S. Boyd (2018). A rewriting system for convex optimization problems. Journal of Control and Decision (JCD), 5(1), 42-60.

[53] A. Taylor, B. Van Scoy, L. Lessard (2018). Lyapunov functions for first-order methods: tight automated convergence guarantees. International Conference on Machine Learning (ICML).

[54] B. Goujaud, C. Moucer, F. Glineur, J. Hendrickx, A. Taylor, A. Dieuleveut (2022). PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python. arXiv preprint arXiv:2201.04040.

Comments
  • Algo steps


    A few remarks:

    • the examples have been reformatted into the format we had discussed,
    • the file celine_test.py makes it possible to test (almost) all of the examples (following the model of adri_test.py and ay_test.py),
    • one modification in pep.py, on one of the inequality functions: it was returning itself (which created an infinite loop),
    • there will be quite a few conflicts when merging, @bgoujaud.

    Happy to discuss the examples!

    opened by CMoucer 8
  • Fix things for PR78 on coordinate descent.

    Fix things for PR78 on coordinate descent. Two tests are commented out: one in test_examples.py and one in test_uselessly_complexified_examples.py; they should be removed.

    opened by bgoujaud 1
  • Easy Partitioning for Coordinate Descent & co


    Note 1: this PR is not entirely ready to be validated. Note 2: class and function names can be discussed and updated ("partition" alone might be cleaner than "block_partition", for instance).

    Todo:

    opened by AdrienTaylor 1
  • Problem in eval()


    Hello,

    Reporting a minor evaluation bug: it looks like Points that are not associated with functions might not be correctly evaluable after solving the PEP. The example below is essentially the exact line search example, where I try to evaluate a worst-case dx after solving the PEP.

    --------------------------------------------------------- Main file:

    from PEPit import PEP
    from PEPit.functions import SmoothStronglyConvexFunction
    from PEPit.primitive_steps import exact_linesearch_step
    from PEPit.primitive_steps import inexact_gradient_step
    
    def wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose=1):
    
    
        # Instantiate PEP
        problem = PEP()
    
        # Declare a strongly convex smooth function
        func = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
    
        # Start by defining its unique optimal point xs = x_* and corresponding function value fs = f_*
        xs = func.stationary_point()
        fs = func(xs)
    
        # Then define the starting point x0 of the algorithm as well as corresponding gradient and function value g0 and f0
        x0 = problem.set_initial_point()
    
        # Set the initial constraint that is the distance between f0 and f_*
        problem.set_initial_condition(func(x0) - fs <= 1)
    
        # Run n steps of the inexact gradient method with ELS
        x = x0
        for i in range(n):
            _, dx, _ = inexact_gradient_step(x, func, gamma=0, epsilon=epsilon, notion='relative')
            x, gx, fx = exact_linesearch_step(x, func, [dx])
    
        # Set the performance metric to the function value accuracy
        problem.set_performance_metric(func(x) - fs)
    
        # Solve the PEP
        pepit_verbose = max(verbose, 0)
        pepit_tau = problem.solve(verbose=pepit_verbose)
    
        # Compute theoretical guarantee (for comparison)
        Leps = (1 + epsilon) * L
        meps = (1 - epsilon) * mu
        theoretical_tau = ((Leps - meps) / (Leps + meps)) ** (2 * n)
    
        # Print conclusion if required
        if verbose != -1:
            print('*** Example file: worst-case performance of inexact gradient descent with exact linesearch ***')
            print('\tPEPit guarantee:\t f(x_n)-f_* <= {:.6} (f(x_0)-f_*)'.format(pepit_tau))
            print('\tTheoretical guarantee:\t f(x_n)-f_* <= {:.6} (f(x_0)-f_*)'.format(theoretical_tau))
    
        # Return the worst-case guarantee of the evaluated method (and the reference theoretical value)
        return pepit_tau, theoretical_tau, dx.eval()
    

    --------------------------------------------------------- Calling the function:

    L = 1
    mu = .1
    epsilon = .5
    n = 1
    
    wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose=1)
    

    --------------------------------------------------------- Output + Error

    (PEPit) Setting up the problem: size of the main PSD matrix: 6x6
    (PEPit) Setting up the problem: performance measure is minimum of 1 element(s)
    (PEPit) Setting up the problem: Adding initial conditions and general constraints ...
    (PEPit) Setting up the problem: initial conditions and general constraints (1 constraint(s) added)
    (PEPit) Setting up the problem: interpolation conditions for 1 function(s)
             function 1 : Adding 9 scalar constraint(s) ...
             function 1 : 9 scalar constraint(s) added
    (PEPit) Compiling SDP
    (PEPit) Calling SDP solver
    (PEPit) Solver status: optimal (solver: MOSEK); optimal value: 0.8751298780724822
    (PEPit) Postprocessing: solver's output is not entirely feasible (smallest eigenvalue of the Gram matrix is: -1.05e-08 < 0). Small deviation from 0 may simply be due to numerical error. Big ones should be deeply investigated. In any case, from now the provided values of parameters are based on the projection of the Gram matrix onto the cone of symmetric semi-definite matrix.
    *** Example file: worst-case performance of inexact gradient descent with exact linesearch ***
        PEPit guarantee:         f(x_n)-f_* <= 0.87513 (f(x_0)-f_*)
        Theoretical guarantee:   f(x_n)-f_* <= 0.87513 (f(x_0)-f_*)


    ValueError                                Traceback (most recent call last)
    /tmp/ipykernel_265884/2910429637.py in <module>
          4 n = 1
          5
    ----> 6 wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose=1)

    /tmp/ipykernel_265884/1085922103.py in wc_inexact_gradient_exact_line_search(L, mu, epsilon, n, verbose)
         48
         49     # Return the worst-case guarantee of the evaluated method (and the reference theoretical value)
    ---> 50     return pepit_tau, theoretical_tau, dx.eval()
         51
         52

    ~/anaconda3/lib/python3.9/site-packages/PEPit/point.py in eval(self)
        268         # If leaf, the PEP would have filled the attribute at the end of the solve.
        269         if self._is_leaf:
    --> 270             raise ValueError("The PEP must be solved to evaluate Points!")
        271         # If linear combination, combine the values of the leaf, and store the result before returning it.
        272         else:

    ValueError: The PEP must be solved to evaluate Points!

    bug 
    opened by AdrienTaylor 1
  • Postprocessing issue


    For the record: a postprocessing issue occurs when no eigenvalue is set to zero. Error below: the problem is in "pep.py", in "get_nb_eigenvalues_and_corrected_matrix(M)". A sketch of a possible guard follows the traceback.

    This is particularly problematic when trying to find low-dimensional instances.


    ValueError                                Traceback (most recent call last)
    /tmp/ipykernel_635912/2564841825.py in <module>
         41
         42 # Solve the PEP
    ---> 43 pepit_tau = problem.solve(verbose=1, dimension_reduction_heuristic="logdet2")

    ~/anaconda3/lib/python3.9/site-packages/PEPit/pep.py in solve(self, verbose, return_full_cvxpy_problem, dimension_reduction_heuristic, eig_regularization, tol_dimension_reduction, **kwargs)
        369
        370         # Print the estimated dimension before dimension reduction
    --> 371         nb_eigenvalues, eig_threshold, corrected_G_value = self.get_nb_eigenvalues_and_corrected_matrix(G.value)
        372         if verbose:
        373             print('(PEPit) Postprocessing: {} eigenvalue(s) > {} before dimension reduction'.format(nb_eigenvalues,

    ~/anaconda3/lib/python3.9/site-packages/PEPit/pep.py in get_nb_eigenvalues_and_corrected_matrix(M)
        466
        467         # Get the highest eigenvalue that have been set to 0.
    --> 468         eig_threshold = max(np.max(eig_val[non_zero_eig_vals == 0]), 0)
        469
        470         return nb_eigenvalues, eig_threshold, corrected_S

    <__array_function__ internals> in amax(*args, **kwargs)

    ~/anaconda3/lib/python3.9/site-packages/numpy/core/fromnumeric.py in amax(a, axis, out, keepdims, initial, where)
       2731     5
       2732     """
    -> 2733     return _wrapreduction(a, np.maximum, 'max', axis, None, out,
       2734                           keepdims=keepdims, initial=initial, where=where)
       2735

    ~/anaconda3/lib/python3.9/site-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
         85         return reduction(axis=axis, out=out, **passkwargs)
         86
    ---> 87     return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
         88
         89

    ValueError: zero-size array to reduction operation maximum which has no identity
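
    A hedged sketch (not the maintainers' actual fix) of guarding the empty-selection case above: when no eigenvalue is set to zero, the boolean mask selects an empty array, on which np.max has no identity, so one can fall back to a threshold of 0.

    import numpy as np

    def highest_zeroed_eigenvalue(eig_val, non_zero_eig_vals):
        # Eigenvalues that the postprocessing step set to zero.
        zeroed = eig_val[non_zero_eig_vals == 0]
        # Empty selection: no eigenvalue was zeroed, so the threshold is 0.
        if zeroed.size == 0:
            return 0
        return max(np.max(zeroed), 0)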

    bug 
    opened by AdrienTaylor 1
  • Examples verbosity is incorrectly documented


    Most examples say that

    Level of information details to print. 1: No verbose at all. 0: This example’s output. 1: This example’s output + PEPit information. 2: This example’s output + PEPit information + CVXPY details.

    where the first 1 should be -1

    opened by rkm0959 1
  • Feature/doctest


    Add warnings on parameters for functions that are differentiable by default, plus warnings in the QuickStart (size of G, feasible set, and the reuse_gradient parameter).

    opened by CMoucer 1
  • Refactor/pep


    The full set of changes is described in the commit messages: everything is done in PEP; the rest concerns the use of those changes. I have checked:

    • the tests
    • the examples
    • the documentation
    • the readme
    • the notebook
    • the paper

    But if you want to verify that I have not forgotten anything, that would be great! ;)

    opened by bgoujaud 1
  • Broken link


    Hi, and thanks for the great package!

    The link to the demo in the readme seems broken: https://github.com/bgoujaud/PEPit/blob/master/ressources/educational/PEPit_demo.ipynb

    opened by pierreablin 1
  • Feature/constraint class


    This PR creates the Constraint class. Note that it is not a child class of Expression, as I had first expected; I changed that plan. An inequality between two Expressions is no longer an Expression, and the use of '<= 0' is now necessary (see the sketch below). I made the appropriate changes in some primitive-steps files.

    PS: Please review the previous PR first as I already rebased this branch onto the previous one.
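
    A minimal illustrative sketch of the convention described above (the setup mirrors a standard PEPit example; class names and signatures may vary across versions):

    from PEPit import PEP
    from PEPit.functions import ConvexFunction

    problem = PEP()
    func = problem.declare_function(ConvexFunction)
    xs = func.stationary_point()
    x0 = problem.set_initial_point()

    # Under this PR, the comparison below builds a Constraint object (it is
    # no longer an Expression), written in the canonical '<= 0' form.
    problem.set_initial_condition((x0 - xs) ** 2 - 1 <= 0)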

    opened by bgoujaud 1
  • Simplifying PEP.eval_dual method for easier maintenance of PEPit.


    The changes are transparent for the authors. They are twofold:

    • Keep track of the order of the constraints in PEP.solve, to make any modification easier. There is no need to maintain both the solve and _eval_constraint_dual_values methods.
    • Factor out the updates of the lists of constraints. They are now handled by the send_constraint_to_cvxpy and send_lmi_constraint_to_cvxpy methods instead of solve; these two methods now return None.
    opened by bgoujaud 0
  • CLN discussion w/ Aymeric


    Hello, this looks like a promising package! Here are some random remarks I collected while running the tutorial with Aymeric.

    • In CI, don't comment out the pep8 test :stuck_out_tongue:
    • In CI, run the tutorial notebook to make sure it is not broken by new changes.
    • It can be annoying for imports to have the capitalized name PEPit; using pepit avoids having to remember what is capitalized and what is not.
    • I was surprised that SmoothStronglyConvexFunction does not raise a ValueError when given mu=0. Using an internal base class with clearly split cases would keep users from shooting themselves in the foot (a hedged sketch of such a check follows this list).
    • PEP.set_initial_point: in terms of API, it is weird to call a set_* method that does not take any argument. I would call this create_initial_point or get_initial_point.
    • In the tutorial, section Algorithm Implementation, it would be nice to define n there.
    • In the tutorial, section Solving the PEP, you could rewrite the equation $|f(x_t) - f(x*)| \le \tau |x_0 - x*|$. It would help to compare with the equation below, saying $\tau = L / 4 ...$.
    • In the tutorial, the comparison of analytical results (obtained in the literature) could be moved to a follow-up notebook to avoid scaring people (stuff after the conclusion is always scary :) )
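
    A hedged sketch (hypothetical; not PEPit's actual code) of the parameter check suggested above:

    def check_strong_convexity_parameter(mu):
        # Reject non-positive mu for the strongly convex class; a plain
        # (non-strongly) convex function class should be used when mu == 0.
        if mu <= 0:
            raise ValueError("SmoothStronglyConvexFunction requires mu > 0; "
                             "use a smooth convex function class for mu == 0.")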
    opened by tomMoral 0
Releases: 0.2.1