Optimization routines for hyperparameter tuning

Related tags

Deep Learning, optunity
Overview

Optunity


Optunity is a library containing various optimizers for hyperparameter tuning. Hyperparameter tuning is a recurrent problem in many machine learning tasks, both supervised and unsupervised. Tuning examples include optimizing regularization or kernel parameters.

From an optimization point of view, the tuning problem can be considered as follows: the objective function is non-convex, non-differentiable and typically expensive to evaluate.

This package provides several distinct approaches to solve such problems, along with helpful facilities such as cross-validation and a plethora of score functions.

The Optunity library is implemented in Python and allows straightforward integration in other machine learning environments, including R and MATLAB.

If you have any comments or suggestions, you can get in touch with us on Gitter:

Join the chat at https://gitter.im/claesenm/optunity

To get started with Optunity on Linux, issue the following commands:

git clone https://github.com/claesenm/optunity.git
echo "export PYTHONPATH=$PYTHONPATH:$(pwd)/optunity" >> ~/.bashrc

Afterwards, importing optunity should work in Python:

#!/usr/bin/env python
import optunity
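
Once the import works, a minimal tuning run looks roughly like this; a sketch using the documented optunity.maximize API, with a toy quadratic objective standing in for a real model:

import optunity

# toy objective: maximize a concave quadratic over the box constraint x in [-5, 5]
def f(x):
    return -(x - 1.0) ** 2

# maximize returns the best parameters, solver details and the solver suggestion
optimal_pars, details, _ = optunity.maximize(f, num_evals=100, x=[-5, 5])
print(optimal_pars)  # expected to be close to {'x': 1.0}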

Optunity is developed at the STADIUS lab of the Department of Electrical Engineering (ESAT) at KU Leuven. Optunity is free software, released under a BSD license.

For more information, please refer to http://www.optunity.net.

Contributors

The main contributors to Optunity are:

  • Marc Claesen: framework design & implementation, communication infrastructure, MATLAB wrapper and all solvers.
  • Jaak Simm: R wrapper.
  • Vilen Jumutc: Julia wrapper.
Comments
  • Unable to get Optunity to work on Windows 10

    I have tried to get Optunity to work on two of my Windows 10 PCs without success. I followed the instructions to install from Git, added the PYTHONPATH to my system environment, and also added the optunity folder to my path. However, optunity only works for the Python files that are inside the optunity example folders. I even tried to drag in my own files that I need to process or tune with Optunity, but then the Python import call does not find the package. Is there any way to have more detailed Windows directions, with line-by-line steps, for a dummy like myself? Thank you for your wonderful code and solution to hyperparameter tuning.

    Here is a screen capture of two separate folders with the same code, where one recognizes the import and the other does not: Link to png sample setup

    opened by webzest 8
  • An issue was being caused where x2, y1 or y2 was being set to None du…

    An issue was being caused where x2, y1 or y2 was being set to None during calculation of roc_auc. When I saw the line where x1 being None is reinterpreted as 0.0, I added corresponding lines for the other variables as well. I am not sure if this is the correct interpretation, as I haven't taken the time to fully explore what a None value really means. If incorrect, it could mean that my AUC is being calculated incorrectly.

    opened by navjotk 5
  • usage of identity in cross-validation

    I am getting the following error when setting the aggregator option to opt.cross_validation.identity:

    ---------------------------------------------------------------------------
         33 # Define Parameter Tuning
    ---> 34 optimal_pars_clf_sgd, _, _ = opt.maximize(clf_sgd_cv, num_evals=n_hyperparams_evals, alpha=[0.001, .1], l1_ratio=[0., 1.])
         35 
         36 # Train model on the Inner Training Set with Tuned Hyperparameters
    
    ../local/lib/python2.7/site-packages/optunity/api.pyc in maximize(f, num_evals, solver_name, pmap, **kwargs)
        179     solver = make_solver(**suggestion)
        180     solution, details = optimize(solver, f, maximize=True, max_evals=num_evals,
    --> 181                                  pmap=pmap)
        182     return solution, details, suggestion
        183 
    
    ../local/lib/python2.7/site-packages/optunity/api.pyc in optimize(solver, func, maximize, max_evals, pmap)
        243     time = timeit.default_timer()
        244     try:
    --> 245         solution, report = solver.optimize(f, maximize, pmap=pmap)
        246     except fun.MaximumEvaluationsException:
        247         # early stopping because maximum number of evaluations is reached
    
    ../local/lib/python2.7/site-packages/optunity/solvers/ParticleSwarm.pyc in optimize(self, f, maximize, pmap)
        257             fitnesses = pmap(evaluate, list(map(self.particle2dict, pop)))
        258             for part, fitness in zip(pop, fitnesses):
    --> 259                 part.fitness = fit*fitness
        260                 if not part.best or part.best_fitness < part.fitness:
        261                     part.best = part.position
    
    TypeError: can't multiply sequence by non-int of type 'float'
    

    Here is my code:

    import optunity as opt
    from optunity.metrics import _recall, contingency_table
    from sklearn.linear_model import SGDClassifier
    import numpy as np
    
    n_in = 1
    k_in = 2
    n_hyperparams_evals = 10
    
    clf_sgd = SGDClassifier(
                penalty="elasticnet",
                shuffle=True,
                n_iter=500,
                fit_intercept=True,
                learning_rate="optimal")
    
    # Define Inner CV
    cv_decorator = opt.cross_validated(x=X, y=Y.values, 
                                       num_folds=k_in, num_iter=n_in,
                                       strata=[Y[Y==1].index.values], 
                                       regenerate_folds=True,
                                       aggregator=opt.cross_validation.identity)
    
    def obj_fun_clf_sgd(x_train, y_train, x_test, y_test, alpha, l1_ratio):
        model = clf_sgd.set_params(l1_ratio=l1_ratio, alpha=alpha).fit(x_train, y_train)
        y_pred = model.predict(x_test)
        score = _recall(contingency_table(y_test,y_pred))
        return score
    
    clf_sgd_cv = cv_decorator(obj_fun_clf_sgd)
    
    # Define Parameter Tuning
    optimal_pars_clf_sgd, _, _ = opt.maximize(clf_sgd_cv, num_evals=n_hyperparams_evals, alpha=[0.001, .1], l1_ratio=[0., 1.])
    
    # Train model on the Inner Training Set with Tuned Hyperparameters
    optimal_model_clf_sgd = clf_sgd.set_params(**optimal_pars_clf_sgd).fit(X, Y.values)
    

    The objective is to keep track of all the scores from the various folds. Is this a bug, or am I using the API incorrectly?

    Thanks in advance
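
    For reference, optunity's cross_validation.py also defines a mean_and_list aggregator right next to mean; if that aggregator is the intended way to keep per-fold scores while still reporting an aggregate to the solver (an assumption on my part), the decorator call above would become:

    cv_decorator = opt.cross_validated(x=X, y=Y.values,
                                       num_folds=k_in, num_iter=n_in,
                                       strata=[Y[Y==1].index.values],
                                       regenerate_folds=True,
                                       aggregator=opt.cross_validation.mean_and_list)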

    opened by updiversity 5
  • Optunity not working with Octave on Linux Debian

    The typical error when I try to run any function is:

    octave:1> optunity_example
    error: 'optunity' undefined near line 5 column 11
    error: called from:
    error: /home/andrew/my_source_makes/optunity/wrappers/matlab/optunity_example.m at line 5, column 9

    The output of the debug flag is:

    octave:2> global DEBUG_OPTUNITY
    octave:3> DEBUG_OPTUNITY=true
    DEBUG_OPTUNITY = 1

    All the relevant folders in the optunity directory are in Octave's path environment. Any suggestions?

    opened by Dekalog 5
  • Better example for CV

    In the docs, the first CV example returns 0.0. Maybe we could have a more practical example there (a sketch follows the snippet below)? http://optunity.readthedocs.org/en/latest/user/cross_validation.html

    @opt.cross_validated(x=data, y=labels, num_folds=3)
    def cved(x_train, y_train, x_test, y_test):
        train(x_train, y_train)
        predict(x_test)
        return 0.0
    
    cved()
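
    A sketch of what a more practical version might look like; scikit-learn's LogisticRegression and accuracy_score are illustrative choices here, not something the docs prescribe:

    import optunity as opt
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    data, labels = make_classification(n_samples=300, random_state=0)

    @opt.cross_validated(x=data, y=labels, num_folds=3)
    def cved(x_train, y_train, x_test, y_test):
        # train on the training fold and score on the held-out fold
        model = LogisticRegression(max_iter=1000).fit(x_train, y_train)
        return accuracy_score(y_test, model.predict(x_test))

    print(cved())  # mean accuracy over the 3 folds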
    
    Python documentation 
    opened by jaak-s 4
  • lambda cannot be used as an input name

    A Python problem: lambda is a reserved keyword. It would be nice to have a workaround, at least for API calls (a possible user-side workaround is sketched after the traceback below).

    echo '{"optimize" : {"max_evals": 0}, "solver": {"solver_name" : "grid search", "lambda":[0,10]}}' | python -m optunity.piped
    Exception in thread FutureThread:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 763, in run
        self.__target(*self.__args, **self.__kwargs)
      File "optunity/parallel.py", line 131, in Wrapper
        self.__result=func(*param)
      File "optunity/communication.py", line 157, in wrap
        result = f(*args)
      File "optunity/functions.py", line 357, in wrapped_f
        return f(**dict([(k, v) for k, v in zip(keys, args)]))
      File "optunity/functions.py", line 232, in wrapped_f
        wrapped_f.argtuple = collections.namedtuple('args', wrapped_f.keys)
      File "/usr/lib/python2.7/collections.py", line 334, in namedtuple
        'keyword: %r' % name)
    ValueError: Type names and field names cannot be a keyword: 'lambda'
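
    One user-side workaround (an assumption on my part, not an official fix) is to expose the hyperparameter under a non-reserved name in the Python objective and translate it internally; the piped/JSON interface would still need a rename or escaping on the wrapper side:

    import optunity

    def f(lam):
        # "lam" stands in for the model's lambda regularization parameter
        return -(lam - 3.0) ** 2

    optimal_pars, _, _ = optunity.maximize(f, num_evals=50, lam=[0, 10])
    print(optimal_pars)  # expected to be close to {'lam': 3.0}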
    
    enhancement Python 
    opened by jaak-s 4
  • running optunity on windows

    We checked the Windows setup with Dusan and found an issue:

    • the Python installation in Windows does not put python.exe into the system path
    • however, it links .py files to python, so running any .py file will work

    So python -m optunity.piped will not work on Windows, but a simple solution is to call optunity.piped from a separate script:

    1. make optunity.piped's main code into a separate function (currently it sits directly under if __name__=='__main__'), as explained here: http://www.artima.com/weblogs/viewpost.jsp?thread=4829 (a sketch follows below)
    2. then create a script in the top folder that launches the new main function, e.g. run.py:
    #!/usr/bin/env python
    import optunity.piped
    optunity.piped.main()
    

    Then we would just execute run.py on Windows (and it also works on other systems). Note: the shebang is ignored on Windows.
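
    A sketch of the refactor described in step 1, assuming the code currently under the __main__ guard is moved verbatim into a main() function (names illustrative):

    # optunity/piped.py (sketch, not the actual file contents)
    def main():
        ...  # body that currently lives directly under the __main__ guard

    if __name__ == '__main__':
        main()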

    @claesenm if the solution looks fine to you let me know, I can easily do the implementation or you can do it :).

    enhancement MATLAB R 
    opened by jaak-s 4
  • piped make_solver error on "random search"

    An error occurs with make_solver and random search; a JSON message was expected instead.

    echo '{"make_solver":{"solver_name":"random search"}}' | python -m optunity.piped
    Traceback (most recent call last):
      File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
        "__main__", fname, loader, pkg_name)
      File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
        exec code in run_globals
      File "/home/jaak/git/optunity/optunity/piped.py", line 482, in <module>
        make_solver(startup_msg['make_solver'])
      File "/home/jaak/git/optunity/optunity/piped.py", line 389, in make_solver
        optunity.make_solver(**solver_config)
      File "optunity/api.py", line 282, in make_solver
        return solvercls(*args, **kwargs)
    TypeError: __init__() takes exactly 2 arguments (1 given)
    

    When using grid search, make_solver works:

    echo '{"make_solver":{"solver_name":"grid search"}}' | python -m optunity.piped
    {"success": "true"}
    
    bug Python 
    opened by jaak-s 4
  • writing to named pipe in windows with python

    On Linux/Mac we have a solution: python -m optunity > /tmp/py2r. However, Windows does not support that (http://superuser.com/questions/430466/in-windows-can-i-redirect-stdout-to-a-named-pipe-in-command-line).

    So an option is to pass the name of the pipe to python with a parameter, like:

    python -m optunity.piped -p py2r
    

    And in python use

    f = open(r'\\.\pipe\py2r', 'w', 0)
    ...
    f.write(...)
    

    This would only be used on Windows. It seems like the easiest approach; are there other options?

    Python 
    opened by jaak-s 4
  • IPython crashes with optunity.parallel.pmap

    IPython crashes if the parallelized function outputs anything to stdout or stderr. This is an IPython issue that we can't fix ourselves.

    More info at: https://github.com/ipython/ipython/issues/2438/

    A workaround is to use IPython's own parallel features: http://nbviewer.ipython.org/github/vals/scilife-python-course/blob/master/parallel%20python.ipynb

    bug wontfix 
    opened by claesenm 4
  • Patch to handle cross-validation when X is a sparse matrix

    Hi,

    Thanks for the awesome package. Optunity, by default, does not handle the case when X is sparse, since it tries to figure out the shape of X by calling len(X).

    I added a small patch that replaces calls to len(X) with X.shape[0], which is valid even when X is a SciPy sparse matrix.
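
    A minimal illustration of the difference (the num_rows helper is hypothetical, not part of the patch):

    import numpy as np
    import scipy.sparse as sp

    def num_rows(X):
        # X.shape[0] works for dense arrays and SciPy sparse matrices alike;
        # len(X) raises TypeError for a sparse matrix.
        return X.shape[0]

    print(num_rows(np.zeros((10, 3))))       # 10
    print(num_rows(sp.csr_matrix((10, 3))))  # 10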

    opened by FedericoV 3
  • None Type error on optunity.maximize function

    This is my function which needs to be maximized:

    def performance_lr(x_train, y_train, x_test, y_test, penalty=None, tol=None, C=None, intercept_scaling=None, solver=None):
    
        def mapper(f, breakpoint=[], cat=[]):
            return cat[bisect(breakpoint, f)]
    
        penalty=mapper(penalty, breakpoint=[0.25, 0.5, 0.75],
                       cat=['none', 'l1', 'l2', 'elasticnet'])
    
        solver=mapper(solver, breakpoint=[0.2, 0.4, 0.6, 0.8], cat=['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'])
        print(f'penalty:{penalty}, solver:{solver}')
        
        model = LogisticRegression(penalty=penalty, tol=tol, C=C,
                                   intercept_scaling=intercept_scaling,
                                   solver=solver, n_jobs=-1, random_state=42)
        
        scores = np.mean(cross_val_score(model, X, y, cv=3, n_jobs=-1,
                                        scoring='accuracy'))
    

    but the following error occurs:

    TypeError                                 Traceback (most recent call last)
    ----> 1 optimal_confg, info, _ = optunity.maximize(performance_lr,
          2                                            solver_name='particle swarm',
          3                                            num_evals=50,
          4                                            **search)

    /usr/local/lib/python3.8/dist-packages/optunity/api.py in maximize(f, num_evals, solver_name, pmap, **kwargs)
        178     suggestion = suggest_solver(num_evals, solver_name, **kwargs)
        179     solver = make_solver(**suggestion)
    --> 180     solution, details = optimize(solver, f, maximize=True, max_evals=num_evals,
        181                                  pmap=pmap)
        182     return solution, details, suggestion

    /usr/local/lib/python3.8/dist-packages/optunity/api.py in optimize(solver, func, maximize, max_evals, pmap, decoder)
        243     time = timeit.default_timer()
        244     try:
    --> 245         solution, report = solver.optimize(f, maximize, pmap=pmap)
        246     except fun.MaximumEvaluationsException:
        247         # early stopping because maximum number of evaluations is reached

    /usr/local/lib/python3.8/dist-packages/optunity/solvers/ParticleSwarm.py in optimize(self, f, maximize, pmap)
        269     for g in range(self.num_generations):
        270         fitnesses = pmap(evaluate, list(map(self.particle2dict, pop)))
    --> 271         for part, fitness in zip(pop, fitnesses):
        272             part.fitness = fit * util.score(fitness)
        273             if not part.best or part.best_fitness < part.fitness:

    /usr/local/lib/python3.8/dist-packages/optunity/solvers/ParticleSwarm.py in evaluate(d)
        257     @functools.wraps(f)
        258     def evaluate(d):
    --> 259         return f(**d)
        260
        261     if maximize:

    /usr/local/lib/python3.8/dist-packages/optunity/functions.py in wrapped_f(*args, **kwargs)
        299     value = wrapped_f.call_log.get(*args, **kwargs)
        300     if value is None:
    --> 301         value = f(*args, **kwargs)
        302         wrapped_f.call_log.insert(value, *args, **kwargs)
        303     return value

    /usr/local/lib/python3.8/dist-packages/optunity/functions.py in wrapped_f(*args, **kwargs)
        354     else:
        355         wrapped_f.num_evals += 1
    --> 356         return f(*args, **kwargs)
        357     wrapped_f.num_evals = 0
        358     return wrapped_f

    /usr/local/lib/python3.8/dist-packages/optunity/constraints.py in wrapped_f(*args, **kwargs)
        149     def wrapped_f(*args, **kwargs):
        150         try:
    --> 151             return f(*args, **kwargs)
        152         except ConstraintViolation:
        153             return default

    /usr/local/lib/python3.8/dist-packages/optunity/constraints.py in wrapped_f(*args, **kwargs)
        127     if violations:
        128         raise ConstraintViolation(violations, *args, **kwargs)
    --> 129     return f(*args, **kwargs)
        130     wrapped_f.constraints = constraints
        131     return wrapped_f

    /usr/local/lib/python3.8/dist-packages/optunity/constraints.py in func(*args, **kwargs)
        264     @functions.wraps(f)
        265     def func(*args, **kwargs):
    --> 266         return f(*args, **kwargs)
        267     return func
        268

    /usr/local/lib/python3.8/dist-packages/optunity/cross_validation.py in __call__(self, *args, **kwargs)
        402         kwargs['y_test'] = select(self.y, rows_test)
        403         scores.append(self.f(**kwargs))
    --> 404     return self.reduce(scores)
        405
        406     def __getattr__(self, name):

    /usr/local/lib/python3.8/dist-packages/optunity/cross_validation.py in mean(x)
        235
        236 def mean(x):
    --> 237     return float(sum(x)) / len(x)
        238
        239 def mean_and_list(x):

    TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'

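    The traceback ends with cross_validation.mean receiving a None, which is consistent with the objective above never returning its score; a likely fix (my reading, not a confirmed resolution from this thread) is to return the computed value, sketched here in a simplified, self-contained form:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=200, random_state=42)

    def performance_lr(x_train, y_train, x_test, y_test, C=1.0, tol=1e-4):
        model = LogisticRegression(C=C, tol=tol, random_state=42)
        # the original function computed this value but never returned it,
        # so optunity received None from every evaluation
        return np.mean(cross_val_score(model, X, y, cv=3, scoring='accuracy'))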

    opened by hsuecu 0
  • Python: cross_validated assert on y is None, but it's not None.

    Following the example from https://optunity.readthedocs.io/en/latest/notebooks/notebooks/sklearn-automated-classification.html#, I receive this assert:

    Traceback (most recent call last):
      File "./test.py", line 57, in <module>
        @optunity.cross_validated(x=data, y=labels, num_folds=5)
      File "/x/x/x/x/venv/lib/python3.8/site-packages/optunity/cross_validation.py", line 484, in cross_validated
        assert y is None
    

    However, in cross_validation.py, if I print(y) before the assert, y is not None; it is a list populated with data.

    Any ideas?

    opened by adamwelsh 0
  • Octave Install on Windows fails at "optunity_example"

    After following the installation procedure as defined on the Optunity website, I get the below error when running optunity_example.m:

    /usr/bin/python.exe: Error while finding module specification for 'optunity.standalone' (ModuleNotFoundError: No module named 'optunity')

    at line 46 of comm_launch.m: cmd = ['python -m optunity.standalone ', num2str(port)];

    Following issues #72 and #110, I uninstalled all other versions of Python on my machine and reinstalled the latest version (3.9.5). I then installed Optunity, attempting to clone the git repository, to download the git repository, and to use both the python and pip install methods from the Optunity website. The Octave path was appended via addpath(genpath('C:\Users~~~\optunity-master\wrappers\octave')); savepath; and the Sockets Octave package has been both installed and loaded prior to each attempted run of optunity_example.

    My Path user variable includes my Python location, and my PYTHONPATH system variable contains the optunity location. I know at least some of this is working, because I can run test_standalone.py via the Windows command prompt and see it execute successfully. Furthermore, from the Windows command prompt I can execute python -c "import optunity" successfully. While I can call python commands from my Octave command window, "python import optunity" and "python import optunity.standalone" fail.

    Has anyone successfully installed optunity on Windows 10 GNU Octave?

    opened by Riley-Brooksher 0
  • Notebook Example: Sklearn SVR generates runtime error

    opened by ahmedshahriar 0
  • Unable to test Optunity with the provided sample GitHub code

    Hello,

    I would really like to test this library so I can use it in my research; however, I am not even able to get the sample code from Git to work in my R environment. Besides all the required installation steps for Optunity, is there another step that I missed to get it to work? Please take a look at the image below for my setup and test code from Optunity/Docs/Examples.

    image

    opened by webzest 0
  • How to tune hyperparameter, where data is passing through model train function step by step?

    I am training an ML model where, instead of passing in the data as a whole, I want to feed the data step by step.

    Similar to saving weights in a deep learning model: can we save the parameters learned from part of the training data and then load them again to further tune the hyperparameters?

    opened by gunjannaik 0
Releases(1.1.1)
  • 1.1.1(Sep 30, 2015)

    This minor release has the same features as 1.1.0, but incorporates some bug fixes, specifically to the specification of structured search spaces.

    Source code(tar.gz)
    Source code(zip)
  • 1.1.0(Jul 19, 2015)

    The second release of Optunity (stable). For documentation, please refer to http://docs.optunity.net.

    The following features have been added:

    • new solvers
      • tree of Parzen estimators (requires Hyperopt)
      • Sobol sequences
    • Octave wrapper
    • support for structured search spaces, which can be nested (see the sketch after this list)
    • improved cross-validation routines to return more detailed results
    • most Python examples are now available as notebooks
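
    A sketch of a structured (nested) search space, based on the documented optunity.maximize_structured interface; the toy objective and the particular ranges are illustrative only:

    import optunity

    search = {'kernel': {'linear': {'C': [0, 10]},
                         'rbf': {'gamma': [0, 1], 'C': [0, 10]}}}

    def performance(kernel=None, C=None, gamma=None):
        # hyperparameters of branches that are not chosen arrive as None
        score = -abs(C - 1.0)
        if kernel == 'rbf':
            score -= abs(gamma - 0.5)
        return score

    best, _, _ = optunity.maximize_structured(performance, search_space=search,
                                              num_evals=100)
    print(best)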

    This release provides Optunity functionality in the following environments:

    • MATLAB
    • R
    • Octave
    • Jython
    Source code(tar.gz)
    Source code(zip)
    Optunity-1.1.0-py2-none-any.whl(70.78 KB)
    Optunity-1.1.0-py2.py3-none-any.whl(70.78 KB)
    Optunity-1.1.0.tar.gz(3.37 MB)
  • v1.0.1(Dec 2, 2014)

    The first major release of Optunity (stable). For documentation, please refer to http://docs.optunity.net.

    The following features are available:

    • wide variety of solvers
      • particle swarm optimization
      • Nelder-Mead
      • grid search
      • random search
      • CMA-ES (requires DEAP and NumPy)
    • generic cross-validation functionality
      • support for strata and clusters
      • folds are reusable for multiple learning algorithm/solver combinations
    • various quality metrics for models (score/loss functions)
    • univariate domain constraints on hyperparameters
    • support for parallel objective function evaluations (see the sketch after this list)
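
    A sketch of the parallel-evaluation feature from this list, assuming optunity.pmap is the parallel map described in the docs and using a toy objective:

    import optunity

    def f(x, y):
        return -(x - 1.0) ** 2 - (y + 2.0) ** 2

    if __name__ == '__main__':
        # pmap=optunity.pmap evaluates candidate points in parallel processes
        pars, _, _ = optunity.maximize(f, num_evals=100, pmap=optunity.pmap,
                                       x=[-5, 5], y=[-5, 5])
        print(pars)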

    This release provides Optunity functionality in the following environments:

    • MATLAB
    • R
    Source code(tar.gz)
    Source code(zip)
    Optunity-1.0.1.win32.exe(241.09 KB)
    Optunity-1.0.1.win32.msi(160.00 KB)
Owner
Marc Claesen
Proud father of Kiara & Christophe and husband to Joanne. PhD in machine learning. Computer nerd. Love bioinformatics & open source.