A library for nonlinear optimization, wrapping many algorithms for global and local, constrained or unconstrained optimization.

Overview


NLopt is a library for nonlinear local and global optimization, for functions with and without gradient information. It is designed as a simple, unified interface and packaging of several free/open-source nonlinear optimization libraries.

The latest release can be downloaded from the NLopt releases page on GitHub, and the NLopt manual is hosted on Read the Docs.

NLopt is compiled and installed with the CMake build system (see CMakeLists.txt file for available options):

git clone https://github.com/stevengj/nlopt
cd nlopt
mkdir build
cd build
cmake ..
make
sudo make install

(To build the latest development sources from git, you will need SWIG to generate the Python and Guile bindings.)

Once it is installed, #include <nlopt.h> in your C/C++ programs and link it with -lnlopt -lm. You may need to use a C++ compiler to link in order to include the C++ libraries (which are used internally by NLopt, even though it exports a C API). See the C reference manual.
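For reference, the objective callback in the C API has the shape sketched below. This is a self-contained illustration: `fake_driver` is a hypothetical stand-in for NLopt's own optimizer loop (it only shows how the `void *data` pointer round-trips to user code) and is not part of the library.

```cpp
#include <cassert>

// The NLopt C objective signature: return f(x) and, if grad is
// non-NULL, also fill in the gradient. `data` carries user state.
typedef double (*objective_fn)(unsigned n, const double *x,
                               double *grad, void *data);

struct SphereData { double shift; };  // example user state passed via void*

// f(x) = sum_i (x_i - shift)^2
double sphere(unsigned n, const double *x, double *grad, void *data) {
    double shift = static_cast<SphereData *>(data)->shift;
    double val = 0.0;
    for (unsigned i = 0; i < n; ++i) {
        double d = x[i] - shift;
        val += d * d;
        if (grad) grad[i] = 2.0 * d;  // gradient only when requested
    }
    return val;
}

// Hypothetical stand-in for the optimizer loop: evaluate f once.
double fake_driver(objective_fn f, unsigned n, const double *x, void *data) {
    return f(n, x, /*grad=*/nullptr, data);
}
```

With the real library you would instead pass `sphere` and `&data` to `nlopt_set_min_objective` and link with `-lnlopt -lm` as above.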

There are also interfaces for C++, Fortran, Python, Matlab or GNU Octave, OCaml, GNU Guile, GNU R, Lua, Rust, and Julia. Interfaces for other languages may be added in the future.

Comments
  • nlopt compilation failed at "make" step on AIX7.2

    I am trying to compile NLopt on AIX 7.2. The first "cmake" step finished successfully. However, the second "make" step failed with ERROR: Undefined symbol: __tls_get_addr. Can you help me figure out the issue? Thanks.

    [ 66%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/global.cc.o
    [ 68%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/linalg.cc.o
    [ 70%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/local.cc.o
    [ 71%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/stogo.cc.o
    [ 73%] Building CXX object CMakeFiles/nlopt.dir/src/algs/stogo/tools.cc.o
    [ 75%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/evolvent.cc.o
    [ 76%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/solver.cc.o
    [ 78%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/local_optimizer.cc.o
    [ 80%] Building CXX object CMakeFiles/nlopt.dir/src/algs/ags/ags.cc.o
    [ 81%] Linking CXX shared library libnlopt
    ld: 0711-317 ERROR: Undefined symbol: __tls_get_addr
    ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
    collect2: error: ld returned 8 exit status
    make[2]: *** [CMakeFiles/nlopt.dir/build.make:767: libnlopt.a] Error 1
    make[2]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make[1]: *** [CMakeFiles/Makefile2:179: CMakeFiles/nlopt.dir/all] Error 2
    make[1]: Leaving directory '/software/thirdparty/nlopt-master/build'
    make: *** [Makefile:163: all] Error 2
    
    opened by bergen288 29
  • Prefer target_include_directories in CMake build script


    This PR facilitates the inclusion of nlopt into other CMake projects.

    • The "private" include directories are defined in a per-target basis via target_include_directories instead of include_directories. This has the advantage that it doesn't "pollute" parent projects.
    • Public include directories are set via the INTERFACE argument in target_include_directories. To keep it simple the original header nlopt.h is simply copied in the ${PROJECT_BINARY_DIR}/api folder, which is used as the interface include directory for nlopt. Unfortunatelly, absolute paths cannot be given as interface include directories for installed targets, hence the need for the trick with $<BUILD_INTERFACE:...>. However this generator expression has been introduced in cmake 3.0, so the minimum required version is bumped to 3.0, but maybe you don't want that?
    • Similarly, use target_compile_definitions instead of a global add_definitions.
    • I couldn't make per-target include directories work with SWIG, so there is still an include_directories call in swig/CMakeLists.txt. This is not ideal, but if you have any idea how to improve this, I'm all ears.

    Now, building nlopt as part of other projects is as simple as

    add_subdirectory(ext/nlopt)
    target_link_libraries(my_program nlopt)
    
    opened by jdumas 15
  • Implement C++ style functors as targets for objectives


    This PR implements a wrapper nlopt::functor_wrapper for C++ style functors via std::function, and two new overloads of nlopt::set_min_objective, nlopt::set_max_objective.

    In order to allow that, a new member field in the myfunc_data struct is added: functor_type functor;, where functor_type is defined as

    typedef std::function<double(unsigned, const double*, double*)> functor_type;
    

    This is not introduced as a pointer (like the other function-pointers are) because std::function is already a container that stores a pointer, and abstracts it away.

    Important: note that the signature for the functor does not include void* data unlike all other function-pointers. That is because it is assumed that the functor already has all the data it needs.

    This PR allows now to write the following:

    class UserDefinedObjective {
      private:
        ImportantData data;
      public:
        UserDefinedObjective() = delete;
        UserDefinedObjective(ImportantData data) :
          data(std::move(data)) {}
        double operator()(unsigned n, const double* x, double* grad) const
        {
          // compute objective(x) and ∇objective(x) using this->data
        }
    };
    
    int main()
    {
      ImportantData data;
      UserDefinedObjective objective(std::move(data));
    
      nlopt::opt optimizer;
      // other nlopt settings
      optimizer.set_max_objective(std::move(objective));
    
      optimizer.optimize(...);
    
      return 0;
    }
    

    Same with C++ lambdas, regular functions and even class member functions (check out std::function).
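    The bridge from std::function back to the C-style callback can be sketched with the usual trampoline pattern. The names below (`functor_trampoline`, `evaluate_via_c_api`) are illustrative stand-ins, not the PR's actual identifiers, and the driver merely mimics a single evaluation by the optimizer:

```cpp
#include <cassert>
#include <functional>

// Same signature as the PR's functor_type: no void* in the user-facing call.
typedef std::function<double(unsigned, const double *, double *)> functor_type;

// C-style trampoline: the functor object itself travels through the
// existing void* channel and is invoked on the other side.
double functor_trampoline(unsigned n, const double *x, double *grad,
                          void *data) {
    return (*static_cast<functor_type *>(data))(n, x, grad);
}

// Hypothetical stand-in for set_min_objective plus one evaluation by
// the optimizer; a real wrapper must keep `f` alive while optimizing.
double evaluate_via_c_api(functor_type &f, unsigned n, const double *x) {
    return functor_trampoline(n, x, nullptr, &f);
}
```

    A capturing lambda can then be used with no user-visible void* at all; the only lifetime requirement is that the stored functor_type object outlives the optimizer that calls it.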

    This PR also introduces a CMake macro NLOPT_add_cpp_test to quickly add cpp tests, and creates a test cpp_functor.cxx to actually test the new functionality.

    Closes #219 .

    opened by dburov190 14
  • Added example of automatic tests


    • Added new_test target to generate test executable
    • Updated CMakeLists.txt to add test as subfolder
    • Added two test using executable new_test
    • Now you can run test suite by just typing either "make test" or "ctest" in the build directory
    opened by boris-il-forte 14
  • C++11 idiom


    I find NLopt a great library, but using it through the C++ API is really frustrating. You need to provide a void* for passing data to the objective/constraint functions, with all the problems that may cause.

    Also, you need to pass a function pointer, so you can't use lambda functions with captures (which would avoid passing the void*).

    I've written a thin wrapper on top of NLopt which tries to offer a more modern C++ API, enabling the use of lambdas and hiding the use of void* from the API.

    Would you be interested in this being merged?

    opened by jjcasmar 13
  • DIRECT takes impossibly long to reach xtol


    Unless I'm mistaken, the XTOL stopping criterion for DIRECT (the cdirect version) can't be used when searching large multi-dimensional spaces, because it requires all hyper-rectangles everywhere to be divided down to below the x-tolerances before stopping. This will take an impossibly long time.

    Wouldn't it make more sense to stop as soon as one (or a few) of the rectangles is small? This could be done by inverting some of the logic for the xtol_reached variable within the cdirect.c function divide_good_rects().
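    To make the difference concrete, here is a toy sketch of the two stopping tests; it mimics only the tolerance check, not cdirect.c's actual rectangle bookkeeping:

```cpp
#include <cassert>
#include <vector>

// Current rule as described above: stop only when every
// hyper-rectangle's width is below the x-tolerance.
bool xtol_reached_all(const std::vector<double> &widths, double xtol) {
    for (double w : widths)
        if (w >= xtol) return false;
    return true;
}

// Proposed rule: stop as soon as at least one rectangle is small,
// i.e. some region has been resolved down to the tolerance.
bool xtol_reached_any(const std::vector<double> &widths, double xtol) {
    for (double w : widths)
        if (w < xtol) return true;
    return false;
}
```

    In a large search space most rectangles stay coarse for a long time, so the first rule can trail the second by a huge number of subdivisions.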

    I can attach some test code here or work towards a pull request if that would be helpful.

    Cheers, Joel

    opened by jcottrell-ellex 11
  • Website & download URLs down


    I get the following while trying to install:

    configure: Need to download and build NLopt
    trying URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Warning in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  :
      unable to connect to 'ab-initio.mit.edu' on port 80.
    Error in download.file(url = "http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz",  : 
      cannot open URL 'http://ab-initio.mit.edu/nlopt/nlopt-2.4.2.tar.gz'
    Execution halted
    /bin/tar: This does not look like a tar archive
    
    gzip: stdin: unexpected end of file
    /bin/tar: Child returned status 1
    /bin/tar: Error is not recoverable: exiting now
    

    The website, http://ab-initio.mit.edu/nlopt/, also does not display in my browser (ERR_CONNECTION_TIMED_OUT)

    opened by IljaKroonen 11
  • Prevent a conditional jump based on uninitialized value in nlopt_create.


    nlopt_set_lower_bounds1 reads from opt->ub before it has ever been written.

    We caught this in our nightly memcheck CI build for RobotLocomotion/drake: https://drake-cdash.csail.mit.edu/viewDynamicAnalysisFile.php?id=14355

    Contributes to RobotLocomotion/drake#3873


    This change is Reviewable

    opened by david-german-tri 11
  • New release


    I'm running into issue https://github.com/stevengj/nlopt/issues/33. Could you make a new release that includes that fix, please? It would fix dozens of R packages on NixOS.

    Looks like the last one was in 2014!

    opened by langston-barrett 11
  • Generate missing nlopt.hpp and nlopt.f (CMake), use GNUInstallDirs (CMake), fix for MSVC 2015


    • Create missing nlopt.hpp and nlopt.f when building with CMake
    • Use CMake's GNUInstallDirs (e.g., ${CMAKE_INSTALL_LIBDIR} instead of lib) depending on the platform
    • Fix for the MSVC 2015 compiler

    opened by rickertm 11
  • Running tests with CTest is broken


    If I check out NLopt, build it, and try to run the tests with ctest, it fails horribly because testopt was not built.

    The way this works is a non-standard workflow and prevents running the tests e.g. when NLopt is built as a CMake external project.

    Please, either just build testopt by default (i.e. remove EXCLUDE_FROM_ALL), or else add an option (e.g. NLOPT_ENABLE_TESTS) that controls whether testopt is built by default and whether any add_test are invoked.

    opened by mwoehlke-kitware 10
  • undefined reference to `nlopt_get_errmsg'


    Hi,

    I'm getting the following linker error message :

    in function `nlopt::opt::get_errmsg() const':
    Hamiltonian.cpp:(.text._ZNK5nlopt3opt10get_errmsgEv[_ZNK5nlopt3opt10get_errmsgEv]+0x5b): undefined reference to `nlopt_get_errmsg'
    collect2: error: ld returned 1 exit status

    when trying to compile code that calls NLopt.

    I compiled NLopt 2.7.1 using the make install command and got:

    Install the project...
    -- Install configuration: "Release"
    -- Installing: /usr/local/lib/pkgconfig/nlopt.pc
    -- Installing: /usr/local/include/nlopt.h
    -- Installing: /usr/local/include/nlopt.hpp
    -- Installing: /usr/local/include/nlopt.f
    -- Installing: /usr/local/lib/libnlopt.so.0.11.1
    -- Installing: /usr/local/lib/libnlopt.so.0
    -- Set runtime path of "/usr/local/lib/libnlopt.so.0.11.1" to "/usr/local/lib"
    -- Installing: /usr/local/lib/libnlopt.so
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptLibraryDepends-release.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfig.cmake
    -- Installing: /usr/local/lib/cmake/nlopt/NLoptConfigVersion.cmake
    -- Installing: /usr/local/share/man/man3/nlopt.3
    -- Installing: /usr/local/share/man/man3/nlopt_minimize.3
    
    

    and my CMakeLists.txt looks like

    cmake_minimum_required(VERSION 3.13.4)
    project(myProject)
    
    set(CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
    set(CMAKE_CXX_STANDARD 20)
    
    add_executable(myProject main.cpp)
    target_link_libraries(myProject PUBLIC nlopt ${CPLEX_LIBRARIES} ${CMAKE_DL_LIBS})
    

    I found that someone already had a similar issue, but the proposed fix does not seem to apply to my setup.

    opened by Griset 0
  • Reduce number of gradient calculations in LD-MMA


    The Svanberg MMA paper notes that for the CCSA algorithms described, gradients are only required in the outer iterations. "Each new inner iteration requires function values, but no derivatives."

    However, it appears that the implementation of LD-MMA calculates a gradient in the inner as well as the outer iterations. I request that the implementation be updated to reduce gradient calculations.

    I believe this is a two-line change: This line could be changed to something like fcur = f(n, xcur, NULL, f_data);, and then after line 299 in the same file one could add the code if (inner_done) { fcur = f(n, xcur, dfdx_cur, f_data); }.

    This would duplicate objective calls once per outer iteration, but since gradient calculations tend to dominate run-time in objective function calls, there should be overall net savings whenever more than one inner iteration is used.
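    The intended call pattern can be sketched with a toy stand-in (hypothetical names; this is not mma.c itself): value-only evaluations inside the inner loop, and one gradient evaluation once the inner loop is done.

```cpp
#include <cassert>

static int n_grad_evals = 0;  // counts gradient computations

// Toy objective in the NLopt C style: grad may be NULL.
double f(unsigned n, const double *x, double *grad, void *) {
    double val = 0.0;
    for (unsigned i = 0; i < n; ++i) val += x[i] * x[i];
    if (grad) {
        ++n_grad_evals;
        for (unsigned i = 0; i < n; ++i) grad[i] = 2.0 * x[i];
    }
    return val;
}

// One outer iteration with `inner` inner iterations, under the
// proposed scheme: fcur = f(n, xcur, NULL, f_data) inside, then a
// single gradient-refreshing call once the inner loop converges.
double outer_iteration_proposed(unsigned n, const double *x, double *grad,
                                int inner) {
    double fcur = 0.0;
    for (int k = 0; k < inner; ++k)
        fcur = f(n, x, nullptr, nullptr);  // inner: no derivatives
    fcur = f(n, x, grad, nullptr);         // inner done: one gradient call
    return fcur;
}
```

    As the issue notes, this costs one duplicated objective evaluation per outer iteration but saves one gradient evaluation per inner iteration beyond the first.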

    I regret not being able to try this out myself. I don't have C set up on my machine and have never coded in C, so I would be extremely slow at running tests (I'm using the Python API). Thanks for considering!

    opened by cpixton 0
  • Simple Academic Use Case with unexpected MMA behavior.


    Hello, doing some tests with multiple starting guesses and multiple optimization algorithms, I found a case in which your solver behaves unexpectedly. This is luckily a simple analytical example that is easily reproducible.

    import nlopt
    from numpy import array

    def f(x, grad):
        if grad.size > 0:
            grad[0] = 2. * (x[0] - 1.)
            grad[1] = 2. * (x[1] - 1.)
        return (x[0] - 1.) ** 2. + (x[1] - 1.) ** 2.

    def g(x, grad):
        if grad.size > 0:
            grad[0] = 1.
            grad[1] = 1.
        return x[0] + x[1] - 1.

    if __name__ == "__main__":
        algorithm = nlopt.LD_MMA
        n = 2
        opt = nlopt.opt(algorithm, n)
        lb = array([0., 0.])
        ub = array([1., 1.])
        x0 = array([0.25, 1])
        opt.set_min_objective(f)
        opt.set_lower_bounds(lb)
        opt.set_upper_bounds(ub)
        opt.add_inequality_constraint(g, 1e-3)
        tol = 1e-6
        maxeval = 50
        opt.set_ftol_rel(tol)
        opt.set_ftol_abs(tol)
        opt.set_xtol_rel(tol)
        opt.set_xtol_rel(tol)
        opt.set_maxeval(maxeval)
        opt.set_param("verbosity", 10000)
        opt.set_param("inner_maxeval", 10)
        xopt = opt.optimize(x0)
        print(xopt)
        opt_val = opt.last_optimum_value()
        print(opt_val)
        result = opt.last_optimize_result()
        print(result)

    The solution to this problem is simply [0.5, 0.5], correctly found by LD_MMA from most initial guesses, but not from the starting guess [0.25, 1].

    In this case the log looks like this:

    MMA dual converged in 6 iterations to g=0.914369:
    MMA y[0]=1e+40, gc[0]=0.116025
    MMA outer iteration: rho -> 0.1
    MMA rhoc[0] -> 0.1
    MMA dual converged in 3 iterations to g=1.34431:
    MMA y[0]=1e+40, gc[0]=-0.269712
    MMA outer iteration: rho -> 0.01
    MMA rhoc[0] -> 0.01
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.6
    MMA dual converged in 3 iterations to g=2.23837:
    MMA y[0]=1e+40, gc[0]=-0.378669
    MMA outer iteration: rho -> 0.001
    MMA rhoc[0] -> 0.001
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.72
    MMA dual converged in 3 iterations to g=2.79213:
    MMA y[0]=1e+40, gc[0]=-0.454075
    MMA outer iteration: rho -> 0.0001
    MMA rhoc[0] -> 0.0001
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 0.864
    MMA dual converged in 3 iterations to g=3.13745:
    MMA y[0]=1e+40, gc[0]=-0.524222
    MMA outer iteration: rho -> 1e-05
    MMA rhoc[0] -> 1e-05
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 1.0368
    MMA dual converged in 3 iterations to g=2.46249:
    MMA y[0]=1e+40, gc[0]=-0.587037
    MMA outer iteration: rho -> 1e-05
    MMA rhoc[0] -> 1e-05
    MMA sigma[0] -> 0.6
    MMA sigma[1] -> 1.24416
    MMA dual converged in 3 iterations to g=1.81718:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.0001
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81722:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.001
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.81766:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.01
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.82206:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.1
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=1.86611:
    MMA y[0]=1e+40, gc[0]=-0.625775
    MMA inner iteration: rho -> 0.410949
    MMA rhoc[0] -> 1e-05
    MMA dual converged in 3 iterations to g=2.01828:
    MMA y[0]=1e+40, gc[0]=-0.625775
    [0.1160254 0.8660254]
    0.7993602791855875
    3

    Your solver stops at the design point [0.1160254 0.8660254], which is neither a local minimum, nor a saddle point of the objective, nor a KKT point. I would like to have your insight on this behavior. BRs, Simone Coniglio
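    The claim about the reported point can be checked against the KKT conditions by hand: there g(x) = x0 + x1 - 1 ≈ -0.018 < 0, so the inequality is inactive and its multiplier must vanish, which would require ∇f(x) = 0; but ∇f = (2(x0-1), 2(x1-1)) ≈ (-1.77, -0.27). (The box bounds are inactive at both points considered, so their multipliers are omitted.) A minimal sketch of that arithmetic:

```cpp
#include <cassert>
#include <cmath>

// f(x) = (x0-1)^2 + (x1-1)^2 and g(x) = x0 + x1 - 1 <= 0, as in the report.
double f_val(double x0, double x1) {
    return (x0 - 1.0) * (x0 - 1.0) + (x1 - 1.0) * (x1 - 1.0);
}
double g_val(double x0, double x1) { return x0 + x1 - 1.0; }

// KKT check for min f s.t. g <= 0 (bounds assumed inactive):
// inactive constraint forces grad f = 0; active constraint requires
// grad f + lambda*(1,1) = 0 with lambda >= 0.
bool is_kkt(double x0, double x1, double tol) {
    double g = g_val(x0, x1);
    double d0 = 2.0 * (x0 - 1.0), d1 = 2.0 * (x1 - 1.0);
    if (g < -tol)  // inactive: multiplier is 0, need a stationary point
        return std::fabs(d0) < tol && std::fabs(d1) < tol;
    double lambda = -d0;  // active: solve first component for lambda
    return lambda >= -tol && std::fabs(d1 + lambda) < tol;
}
```

    This confirms the report: (0.5, 0.5) satisfies the conditions while (0.1160254, 0.8660254) does not, even though it reproduces the logged objective value 0.7993602791855875.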

    opened by SimoneConiglio 0
  • How to set discrete values for NLopt


    Hi, I have an optimization problem to solve. I have hundreds of input variables, and the values of these variables can only be 1 or 0.
    How can I tell this to the NLopt package? I tried it with an equality constraint, but it does not work. Here is my little code in R:

    opt_test<-function(boundary){
      
      target<-rep(c(0,1),100)
      sum_of_square<-0
      for (i in 1:length(boundary)){
        sum_of_square<-sum_of_square+sum((boundary[i]-target[i])^2)
      }
      #print(boundary)
      #print(sum_of_square)
      return(sum_of_square)
    }
    
    opt_test(rep(c(1,0),100))
    
    eval_g_eq_test<-function(x) {
      
      ret<-1
      for (i in 1:length(x)){
        if (x[i]==1) {ret<-0}
        if (x[i]==0) {ret<-0}
      }
      return(ret)
    }
    
    opts <- list("algorithm"="NLOPT_GN_ISRES"   # NLOPT_GN_ORIG_DIRECT, NLOPT_GN_DIRECT_NOSCAL, NLOPT_GN_DIRECT_L_NOSCAL, NLOPT_GN_DIRECT_L_RAND_NOSCAL, NLOPT_GD_STOGO, or NLOPT_GD_STOGO_RAND
                 # works well: NLOPT_LN_PRAXIS, NLOPT_LN_COBYLA
                 # NLOPT_LN_NEWUOA !!!!! +bound
                 # NLOPT_LN_BOBYQA   !si only
                 # nloptr.print.options() lists all possible options
                 ,xtol_rel=1e-8
                 #stopval=as.numeric(stopval),
                 ,maxeval=2000
                 ,print_level=1
    )
    x0<-rep(0,200)
    lb<-rep(0,200)
    ub<-rep(1,200)
    
    jo<- nloptr(x0=x0
                ,eval_f=opt_test
                ,lb = lb
                ,ub = ub
                #,eval_g_eq=eval_g_eq_test()==0
                ,opts=opts
    )
    
    opened by Axyxo 0
  • Error installing `nloptr` from source on CentOS cluster


    I am trying to install nloptr from source in R version 4.1.3 on a cluster (CentOS). However I receive the following error:

    /cvmfs/argon.hpc.uiowa.edu/2022.1/prefix/usr/lib/gcc/x86_64-pc-linux-gnu/9.4.0/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lnlopt
    collect2: error: ld returned 1 exit status
    make: *** [/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/r-4.1.3-ljofaul/rlib/R/share/make/shlib.mk:10: nloptr.so] Error 1
    ERROR: compilation failed for package ‘nloptr’
    

    I am on a university cluster and cannot run sudo commands. After encouragement from @eddelbuettel to contact my sys admin, he figured out the issue. Here's what he wrote:


    The build environment for nloptr uses pkg-config to get information about nlopt. It turns out that the pkg-config file has an error. It has

    libdir=${exec_prefix}/lib

    but the library is actually located in

    libdir=${exec_prefix}/lib64

    That does not show up in the packaging environment because LIBRARY_PATH is set for the dependency chain. I will need to fix the pkg-config file in the package recipe, but you can work around it as follows:

    1. load environment modules:
    module load stack/2022.1
    module load nlopt
    
    2. set LIBRARY_PATH so the linker can find the library while launching the R session (single line below):
    LIBRARY_PATH=$ROOT_NLOPT/lib64:$LIBRARY_PATH R
    
    3. install nloptr in the R console (single line below):
    install.packages(verbose=1,'nloptr')
    

    I originally posted this issue in the nloptr repo: https://github.com/astamm/nloptr/issues/123. However, @eddelbuettel encouraged me to post an issue here because we suspect that the issue may be the pkg-config file created by nlopt.

    Here's the output of some of my commands in CentOS:

    [[email protected] ~]$ module load stack/2022.1
    
    The following have been reloaded with a version change:
      1) stack/2020.1 => stack/2022.1
    
    [[email protected] ~]$ module load r/4.1.3_gcc-9.4.0
    [[email protected] ~]$ module load nlopt
    [[email protected] ~]$ R CMD config --all | grep lib64
    LIBnn = lib64
    [[email protected] ~]$ pkg-config --libs nlopt
    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib -lnlopt
    

    We think we would want that to be (https://github.com/astamm/nloptr/issues/123#issuecomment-1317199965):

    -L/cvmfs/argon.hpc.uiowa.edu/2022.1/apps/linux-centos7-broadwell/gcc-9.4.0/nlopt-2.7.0-u5x4377/lib64 -lnlopt
    

    That is, /lib64 in my case instead of /lib. @eddelbuettel, please clarify if I missed anything or got anything wrong!

    opened by isaactpetersen 4
  • Nonlinear constraints get violated in the result


    Hi there: During my optimization, I applied LN_COBYLA since it supports arbitrary nonlinear constraints. My "constraint function" is actually a collision-avoidance function. The function returns 10.0 when a collision occurs, so that the constraint should be viewed as unsatisfied. However, the experimental results show the constraint is not satisfied, yet the algorithm still finishes and converges. Can anyone give me some advice? Thanks sincerely!

    opened by Lbaron980810 2
Releases(v2.7.1)
Owner
Steven G. Johnson