Pyomo is an object-oriented algebraic modeling language in Python for structured optimization problems.

Overview

Project Status: Active - The project has reached a stable, usable state and is being actively developed.

a COIN-OR project

Pyomo Overview

Pyomo is a Python-based open-source software package that supports a diverse set of optimization capabilities for formulating and analyzing optimization models. Pyomo can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. Pyomo supports a wide range of problem types, including:

  • Linear programming
  • Quadratic programming
  • Nonlinear programming
  • Mixed-integer linear programming
  • Mixed-integer quadratic programming
  • Mixed-integer nonlinear programming
  • Mixed-integer stochastic programming
  • Generalized disjunctive programming
  • Differential algebraic equations
  • Mathematical programming with equilibrium constraints

Pyomo supports analysis and scripting within a full-featured programming language. Further, Pyomo has also proven an effective framework for developing high-level optimization and analysis tools. For example, the mpi-sppy package provides generic solvers for stochastic programming. mpi-sppy leverages the fact that Pyomo's modeling objects are embedded within a full-featured high-level programming language, which allows for transparent parallelization of subproblems using Python parallel communication libraries.

Pyomo was formerly released as the Coopr software library.

Pyomo is available under the BSD License; see the LICENSE.txt file.

Pyomo is currently tested with the following Python implementations:

  • CPython: 3.6, 3.7, 3.8, 3.9
  • PyPy: 3

Installation

PyPI

pip install pyomo

Anaconda

conda install -c conda-forge pyomo

Tutorials and Examples

Getting Help

To get help from the Pyomo community, ask a question on one of the following:

Developers

Pyomo development moved to this repository from Sandia National Laboratories in June 2016. Developer discussions are hosted on Google Groups.

By contributing to this software project, you are agreeing to the following terms and conditions for your contributions:

  1. You agree your contributions are submitted under the BSD license.
  2. You represent you are authorized to make the contributions and grant the license. If your employer has rights to intellectual property that includes your contributions, you represent that you have received permission to make contributions and grant the required license on behalf of that employer.

Related Packages

See https://pyomo.readthedocs.io/en/latest/related_packages.html.

Comments
  • Add kaug dsdp mode into sens.py

    Add kaug dsdp mode into sens.py

    Summary/Motivation:

The current sens.py supports only sipopt; this PR adds the k_aug dsdp mode as another option for sensitivity analysis.

    Changes proposed in this PR:

    • Add an ipopt solver option as an input, optarg=None (lines 229, 375-376)
    • k_aug requires variable initialization (lines 311-316)
    • k_aug doesn’t support inequalities; an exception is raised (line 363)
    • This function requires ipopt, k_aug, and dotsens (lines 374-384)
    • Declare Suffixes (lines 386-412)
    • ipopt.solve -> kaug.solve -> dotsens.solve (lines 415-428)
    • fixes #2047

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by JanghoPark-LBL 48
  • APPSI: 'Could not import gurobipy'

    APPSI: 'Could not import gurobipy'

    Dear all,

    I tried to use APPSI with Gurobi as the solver, but I got the error 'Could not import gurobipy' when running this example:

    from gurobipy import GRB 
    import pyomo.environ as pe
    from pyomo.core.expr.taylor_series import taylor_series_expansion
    from pyomo.contrib import appsi
    
    m = pe.ConcreteModel()
    m.x = pe.Var(bounds=(0, 4))
    m.y = pe.Var(within=pe.Integers, bounds=(0, None))
    m.obj = pe.Objective(expr=2*m.x + m.y)
    m.cons = pe.ConstraintList()  # for the cutting planes
    
    def _add_cut(xval):
        # a function to generate the cut
        m.x.value = xval
        return m.cons.add(m.y >= taylor_series_expansion((m.x - 2)**2))
    
    _c = _add_cut(0)  # start with 2 cuts at the bounds of x
    _c = _add_cut(4)  # this is an arbitrary choice
    
    opt = appsi.solvers.Gurobi()
    opt.config.stream_solver = True
    opt.set_instance(m) 
    opt.gurobi_options['PreCrush'] = 1
    opt.gurobi_options['LazyConstraints'] = 1
    
    def my_callback(cb_m, cb_opt, cb_where):
        if cb_where == GRB.Callback.MIPSOL:
            cb_opt.cbGetSolution(vars=[m.x, m.y])
            if m.y.value < (m.x.value - 2)**2 - 1e-6:
                cb_opt.cbLazy(_add_cut(m.x.value))
    
    opt.set_callback(my_callback)
    res = opt.solve(m) 
    


    I have the packages:

    • pyomo 6.3.0
    • gurobipy 9.5.1
    • Gurobi 9.5.1 (with its respective license)

    Please let me know if I did something wrong.

    Best regards, Erik

    bug pyomo.contrib 
    opened by erikfilias 45
  • Pyomo Network

    Pyomo Network

    Summary/Motivation:

    This originated out of the desire to have IDAES Streams inherit from a Pyomo component, in order to genericize the sequential modular simulator I'm working on. However, it's also a useful Pyomo component in its own right: it provides a simple API for equating everything in two Connectors, and expanding Connections is less expensive than the current ConnectorExpander since it can search the model for the specific Connection ctype.

    Basically a Connection is a component on which you can define either a source/destination pair for a directed Connection or simply pass a list/tuple of two Connectors for an undirected Connection. After expanding, simple equality constraints are added onto a new block and the connection is deactivated.

    Except it's all called ports and arcs now and it's in a new package and it doesn't have to be just an equality relationship.

    Changes proposed in this PR:

    • Introduce Pyomo network package
      • Ports and Arcs

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    IDAES related 
    opened by gseastream 28
  • Dropping Support for Python 2.7

    Dropping Support for Python 2.7

    This may seem premature, but many major packages in the Python community are planning to drop support for Python 2.7 in or before 2020. In fact, many major projects are planning to only support bug fixes sometime beforehand:

    • http://python3statement.org/

    Since we often plan releases in early/mid-fall, I think it's reasonable to plan our last major release supporting Python 2.7 in the fall of 2019.

    design discussions testing_and_ci 
    opened by whart222 26
  • Optimizations to minimize use of NumericConstant objects

    Optimizations to minimize use of NumericConstant objects

    Fixes N/A.

    Summary/Motivation:

    Clean up the use of as_numeric and redefine it with more restrictive semantics.

    Changes proposed in this PR:

    The as_numeric() function is used to create NumericConstant objects, which are used to wrap numeric values in Pyomo components.

    This PR also changes the caching mechanism. Values are not coerced to floats, but instead they are cached separately for each type.
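The per-type caching idea can be illustrated with a hypothetical sketch (the names here are illustrative, not Pyomo's actual internals):

```python
# Hypothetical sketch of per-type constant caching (illustrative only):
# values are cached in a separate pool per Python type instead of being
# coerced to float first.
_const_cache = {}

class Constant:
    def __init__(self, value):
        self.value = value

def as_constant(value):
    pool = _const_cache.setdefault(type(value), {})
    if value not in pool:
        pool[value] = Constant(value)
    return pool[value]

# Because 1 == 1.0 and they hash identically, a single dict would conflate
# them; separate per-type pools keep the int and float constants distinct.
print(as_constant(1) is as_constant(1))    # True: cached wrapper is reused
print(as_constant(1) is as_constant(1.0))  # False: different type pools
```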

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by whart222 24
  • Merge Expression Branch

    Merge Expression Branch

    This PR can be referenced to discuss issues that need to be resolved to merge the expressions branch: expr_dev.

    Dependencies:

    • [x] Resolve #276 (Kernel subclasses)
    • [x] Resolve #212 (GDP Rework) and merge into this branch
    IDAES related pyomo.core 
    opened by whart222 23
  • [IDEA] Generation of problem files with mixed-representation index sets

    [IDEA] Generation of problem files with mixed-representation index sets

    Fixes #567 (partial).

    Summary/Motivation:

    We recently made a small change that makes the default representation of Sets in Pyomo rely on insertion order. This PR completes that activity by resolving the simple test failures outlined in #567. Specifically, these changes allow mixed-representation sets to be used as index sets. The result is that problem files can be generated with determinism=0, which does no sorting of index values.

    However, these changes only work with CPython (3.6, 3.7) and PyPy. Starting with Python 3.6, the CPython and PyPy implementations have deterministic key ordering in their dictionary representations, and as of Python 3.7 this property is part of the Python language specification. This feature is exploited to provide deterministic file generation without sorting index values.

    NOTE: This is a partial fix of #567, since it only applies to more recent versions of Python; there is no clear motivation for extending the fix to older versions. A further benefit is that no sorting is done during file generation, so files are generated more quickly (especially for models with constraints over large index sets).

    NOTE: Sorted ordering of mixed-representation sets often works for Python 2.7 (using determinism=1), since that version allows for comparison of more data types.

    NOTE: This PR does not simply use the sorted_robust() function to sort when generating problem files. The sorted_robust() function is significantly slower than sorted() with mixed-representation data. Thus using sorted_robust() with Python 3.x would make it appear that Python3 is slower than Python2, when in fact there are faster alternatives.
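The Python behavior this PR relies on can be demonstrated directly:

```python
# Dict iteration in CPython >= 3.7 follows insertion order by the language
# spec, so writing index values in that order is deterministic without any
# sorting.
index = {}
for key in (3, 'a', (1, 2), 1):
    index[key] = None
print(list(index))  # [3, 'a', (1, 2), 1] -- insertion order, every run

# sorted() on the same mixed-representation keys raises TypeError in
# Python 3, which is why plain sorting cannot be used here.
try:
    sorted(index)
except TypeError:
    print('mixed types are not orderable')
```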

    Changes proposed in this PR:

    • Changing iteration in Set objects to use the insertion order by default.
    • Adding tests that confirm that mixed-representation sorts can be solved.

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    AT: STALE 
    opened by whart222 21
  • New ShortNameLabeler, used to limit GAMS symbol names

    New ShortNameLabeler, used to limit GAMS symbol names

    Resolves #488 .

    Create a new labeler, ShortNameLabeler, which can take a limit size, a prefix, a start, and even a custom labeler. If no custom labeler is provided, AlphaNumericTextLabeler is used, and the final labels are shortened from that labeler's output. This was applied to the GAMS writer in order to enforce a symbol-name limit. One caveat I thought of so far is the rare possibility of a name overlap/conflict: it would only happen if the user has a 63-character component name that happens to look like mycomponent_17, and another component over 63 characters, similarly named mycomponentbutdifferent, happens to be the 17th component with a long name, so it also gets shortened to mycomponent_17. This feels very rare to me, and even if it happened, the user could create their own ShortNameLabeler with a different prefix and pass it to the writer via the labeler keyword.
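The collision scenario can be made concrete with a hypothetical shortening scheme (illustrative only, not the actual ShortNameLabeler code): names over the limit are truncated and suffixed with a counter, so an unrelated name that already sits exactly at the limit can coincide with a shortened one.

```python
# Hypothetical sketch of limit-based label shortening (illustrative only,
# not the actual ShortNameLabeler implementation).
def make_shortener(limit=63, prefix='_'):
    count = 0
    def shorten(name):
        nonlocal count
        if len(name) <= limit:
            return name          # short names pass through unchanged
        count += 1
        suffix = prefix + str(count)
        return name[:limit - len(suffix)] + suffix
    return shorten

shorten = make_shortener(limit=15)
short = shorten('a_very_long_component_name')
print(short)  # 'a_very_long_c_1'

# A distinct component whose name is already exactly 'a_very_long_c_1'
# (at the limit) passes through unchanged and collides with `short`:
print(shorten('a_very_long_c_1') == short)  # True -> the rare conflict
```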

    Also let me know if the tests I added are appropriate or if something else should be done to test the new functionality. I ran it as a solver test to make sure that, by default, the GAMS writer produces output that can be successfully run by GAMS.

    opened by gseastream 19
  • Fix so that generate_cuid_names descends into Disjunct objects as well

    Fix so that generate_cuid_names descends into Disjunct objects as well

    This is a quick fix to allow generate_cuid_names to descend into "Block-like" components. I am not sure why generate_cuid_names doesn't use the _tree_iterator approach that things like block_data_objects use, but I wasn't comfortable enough with the iteration methods to refactor the function that way.

    opened by qtothec 19
  • Document PR Process

    Document PR Process

    Summary/Motivation:

    Add documentation of pull request expectations and conventions.

    This fixes #304 and fixes #267.

    Legal Acknowledgement

    By contributing to this software project, I agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by qtothec 18
  • add logfile option to GAMS solver

    add logfile option to GAMS solver

    This would add an option for specifying a custom logfile for the GAMS command-line solver. The other solvers seem to support such an option, and it is quite useful if one wants more control over the logfile's destination.

    opened by daviskirk 18
  • Unit test for QCQO

    Unit test for QCQO

    Change

    • Added a small unit test for QCQO problems for Pyomo-MOSEK.

    ( I did not want to obfuscate the purpose of #2647, so I made this a separate PR)

    Summary/Motivation:

    A while ago there was a bug caused by mosek_direct passing upper-triangular elements when setting Q matrix elements. This was fixed, but no unit test was added at the time; this PR adds one.

    Changes proposed in this PR:

    • Unit test (small qcqo problem).

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by Utkarsh-Detha 0
  • Manage Gurobi environments in GurobiDirect

    Manage Gurobi environments in GurobiDirect

    Fixes #2408

    Summary/Motivation:

    There are currently several limitations in the GurobiDirect interface:

    1. Some Gurobi parameters cannot be used with the current approach, as they require explicitly creating a gurobipy Env object. This includes connection parameters for compute servers, token servers, and instant cloud (can be worked around via a license file, but this isn't always an ideal approach) and special parameters such as MemLimit.
    2. There is no clean way to close Gurobi models and environments, which leaves license tokens in use and compute server connections open longer than a user needs them.
    3. A user cannot retry acquiring a Gurobi license token (important in shared license environments) since the GurobiDirect class caches errors in global state.

    Changes proposed in this PR:

    Introduces a constructor flag manage_env for GurobiDirect (defaults to False), and two public methods .close() and .close_global(). If users set manage_env=True:

    • GurobiDirect explicitly creates a Gurobi environment bound to the solver instance. This enables Gurobi resources to be properly freed by the solver object:
    with SolverFactory('gurobi', solver_io='python', manage_env=True) as opt:
        opt.solve(model)
    # All Gurobi models and environments are freed
    
    • Calling .close() achieves the same result as the context manager:
    opt = SolverFactory('gurobi', solver_io='python', manage_env=True)
    try:
        opt.solve(model)
    finally:
        opt.close()
    # All Gurobi models and environments are freed
    
    • Internally, solver options are passed to the Env constructor (instead of the Model, as is currently done) to allow environment-level connection parameters to be used:
    options = {
        "CSManager": "<url>",
        "CSAPIAccessID": "<access-id>",
        "CSAPISecret": "<api-key>",
    }
    with SolverFactory('gurobi', solver_io='python', manage_env=True, options=options) as opt:
        opt.solve(model)  # Solved on compute server
    # Compute server connection terminated
    

    If manage_env=False (the default) is set, then users will get the old behaviour, which uses the Gurobi default/global environment. There are some minor changes:

    • Calling .close(), or exiting the context properly disposes of all models created by the solver
    with SolverFactory('gurobi', solver_io='python') as opt:
        opt.solve(model)
    # Gurobi models created by `opt` are freed; the default/global Gurobi environment is still active
    
    • Calling .close_global() disposes of models created by the solver, and disposes the Gurobi default environment. This will free all Gurobi resources assuming the user did not create any other models (e.g. via another GurobiDirect object with manage_env=False):
    opt = SolverFactory('gurobi', solver_io='python')
    try:
        opt.solve(model)
    finally:
        opt.close_global()
    # Gurobi models created by `opt` are freed, the default/global Gurobi environment is closed
    

    Finally, the available() call no longer stores errors globally and repeats them back if users retry the check. So users can do the following to queue requests if they are using a shared license (regardless of whether manage_env is set to True or False):

    with SolverFactory('gurobi', solver_io='python') as opt:
        while not opt.available(exception_flag=False):
            time.sleep(1)
        opt.solve(model)
    

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by simonbowly 0
  • Fixing some bugs in scaling transformation

    Fixing some bugs in scaling transformation

    Fixes None .

    Summary/Motivation:

    In testing the Pyomo scaling transformation on some IDAES models, a couple of bugs were encountered that this PR aims to address.

    1. The scaling transformation tool has a check to see if there is a scaling suffix defined on the top-level block to be scaled. However, after the changes to suffix behavior proposed in #2641 and implemented for scaling in #2619, this is no longer required.
    2. When testing the scaling transformation on models with References, it was discovered that these were not being properly remapped by the rename_components function: only the Reference itself was renamed, while its _data dict was left pointing to the old data objects (which had since been deleted).
    3. Further, scaling factors were being applied to all components in the model, resulting in potential duplication of effort and confusion of scaling factors when References were present.
    4. The propagate_solution method checks that the model has exactly one active objective function; however, this is only required when calculating duals or reduced costs, and it precludes using the method on models with no objective function. There was also a bug in the code that raises an exception when there is not exactly one objective.
    5. I also found an unrelated edge case in calculate_variable_from_constraint where assuming the function was linear resulted in an OverflowError when evaluating the function, causing the method to fail.

    Changes proposed in this PR:

    • Remove check for top-level scaling suffix in scaling transformation.
    • Update rename_components method to collect and remap References during renaming.
    • Add check to skip References when applying scaling factors to models.
    • Update propagate_solution method to only check for the number of active objective functions if a dual or reduced cost suffix is present.
    • Add some additional tests to cover the fixes.
    • Add a try/except to calculate_variable_from_constraint to catch OverflowErrors in the linear stage and to move onto the non-linear stage.
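The OverflowError guard in the last bullet can be sketched generically (hypothetical code, not the actual calculate_variable_from_constraint implementation): try a fast linear (secant-style) step first, and fall back to a slower, safer routine if evaluating the expression overflows.

```python
import math

# Hypothetical sketch of the try/except guard described above: attempt a
# one-shot linear step; on OverflowError, defer to a robust fallback
# instead of failing outright.
def linear_then_robust(f, x0, robust):
    try:
        f0 = f(x0)
        f1 = f(x0 + 1.0)
        return x0 - f0 / (f1 - f0)  # exact if f really is linear
    except OverflowError:
        return robust(f)

# For a truly linear residual, the fast path solves it in one step:
print(linear_then_robust(lambda x: 2 * x - 4, 0.0, robust=None))  # 2.0

# exp(1000) overflows a float, so the fallback handles the nonlinear case:
root = linear_then_robust(lambda x: math.exp(x) - 2.0, 1000.0,
                          robust=lambda g: math.log(2.0))
print(round(root, 4))  # 0.6931
```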

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by andrewlee94 0
  • `Bunch.__delattr__` does not (necessarily) remove attribute from `Bunch` object

    `Bunch.__delattr__` does not (necessarily) remove attribute from `Bunch` object

    Summary

    When attempting attribute deletion on a Bunch object (from pyomo.common.collections), such as through delattr or del, the attribute does not seem to have been removed from the underlying dict structure. Complete removal of the attribute seems to require calls to both Bunch.__delattr__ and Bunch.__delitem__ successively.

    Steps to reproduce the issue

    # example.py
    from pyomo.common.collections import Bunch
    
    
    def display_bunch_attr_val(bunch_obj, attr):
        # attribute value remains unchanged after delattr
        print("-" * 30)
        print("Attribute value check:")
        print("Getattr:", getattr(bunch_obj, attr, None))
        print("Getitem:", bunch_obj[attr])
    
    
    def display_bunch_attr_in_keys(bunch_obj):
        print("-" * 30)
        print("Bunch keys check:")
        print(bunch_obj.keys())  # remains unchanged after delattr
        print(f"Attribute in Bunch keys: {attr_name in bunch_obj.keys()}")
        print(
            "Attribute in Bunch.__dict__ keys: "
            f"{attr_name in bunch_obj.__dict__}"
        )
    
    
    bunch = Bunch()
    attr_name = "example"
    val = 300
    
    print(f"Setting attribute name {attr_name!r} to value {val}")
    setattr(bunch, attr_name, val)
    print(f"Bunch keys: {bunch.keys()}")
    
    print("-" * 30)
    print("Invoking delattr")
    delattr(bunch, attr_name)
    
    # attribute value remains unchanged after delattr
    display_bunch_attr_val(bunch, attr_name)
    
    # bunch keys check
    display_bunch_attr_in_keys(bunch)
    
    print("-" * 30)
    print("Invoking delitem")
    del bunch[attr_name]
    
    # keys and the attribute value
    display_bunch_attr_val(bunch, attr_name)
    
    # bunch keys check
    display_bunch_attr_in_keys(bunch)
    

    Error Message

    $ python example.py
    Setting attribute name 'example' to value 300
    Bunch keys: dict_keys(['example'])
    ------------------------------
    Invoking delattr
    ------------------------------
    Attribute value check:
    Getattr: 300
    Getitem: 300
    ------------------------------
    Bunch keys check:
    dict_keys(['example'])
    Attribute in Bunch keys: True
    Attribute in Bunch.__dict__ keys: False
    ------------------------------
    Invoking delitem
    ------------------------------
    Attribute value check:
    Getattr: None
    Getitem: None
    ------------------------------
    Bunch keys check:
    dict_keys([])
    Attribute in Bunch keys: False
    Attribute in Bunch.__dict__ keys: False
    

    Information on your system

    • Pyomo version: 6.4.5dev0
    • Python version: 3.9.13
    • Operating system: Ubuntu 20.04
    • How Pyomo was installed (PyPI, conda, source): source
    • Solver (if applicable): N/A

    Additional information

    bug 
    opened by shermanjasonaf 0
  • Switch default NL writer to nlv2

    Switch default NL writer to nlv2

    Fixes # .

    Summary/Motivation:

    This switches the default NL writer to the new "NLv2" writer.

    Changes proposed in this PR:

    • Change default NL writer

    Legal Acknowledgement

    By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

    1. I agree my contributions are submitted under the BSD license.
    2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.
    opened by jsiirola 1
  • Address `AttributeError` raised when `Constraint` with `SumExpression` is declared after `replace_expressions`

    Address `AttributeError` raised when `Constraint` with `SumExpression` is declared after `replace_expressions`

    Summary

    In the event a Constraint expression contains a SumExpression obtained through the core.util.replace_expressions function, an AttributeError may be raised, as the args attribute of the sum expression is set to a tuple rather than a list.

    Steps to reproduce the issue

    import pyomo.environ as pyo
    from pyomo.core.expr.visitor import replace_expressions
    
    m = pyo.ConcreteModel()
    
    m.p = pyo.Param(range(3), initialize=1, mutable=True)
    
    m.x = pyo.Var()
    m.v = pyo.Var(range(3), initialize=1)
    
    # note: the lower bound is a SumExpression here
    m.c = pyo.Constraint(expr=2 + 3 * m.p[2] == m.x)
    
    lower_expr = replace_expressions(m.c.lower, {id(m.p[2]): m.v[2]})
    body_expr = replace_expressions(m.c.body, {id(m.p[2]): m.v[2]})
    
    # build constraint with v[2] substituted for p[2].
    # causes error, as the `args` attribute of a SumExpression
    # somewhere is a tuple (not list)
    m.c2 = pyo.Constraint(expr=lower_expr == body_expr)
    

    Error Message

    $
    ERROR: Rule failed when generating expression for Constraint c2 with index
        None: AttributeError: 'tuple' object has no attribute 'append'
    ERROR: Constructing component 'c2' from data=None failed: AttributeError:
        'tuple' object has no attribute 'append'
    Traceback (most recent call last):
      File "/home/jasherma/Documents/vim_example/pyomo_features_examples/test_err_constraint_add.py", line 34, in <module>
        m.c2 = pyo.Constraint(expr=lower_expr == body_expr)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/block.py", line 649, in __setattr__
        self.add_component(name, val)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/block.py", line 1219, in add_component
        val.construct(data)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/disable_methods.py", line 116, in construct
        return base.construct(self, data)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/constraint.py", line 763, in construct
        self._setitem_when_not_present(index, rule(block, index))
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/indexed_component.py", line 1005, in _setitem_when_not_present
        obj.set_value(value)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/constraint.py", line 922, in set_value
        return super(ScalarConstraint, self).set_value(expr)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/base/constraint.py", line 589, in set_value
        self._body = args[0] - args[1]
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/expr/numvalue.py", line 673, in __sub__
        return _generate_sum_expression(_sub,self,other)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/expr/numeric_expr.py", line 1335, in _generate_sum_expression
        return _self.add(-_other)
      File "/home/jasherma/Documents/cmu/phd-project/pyomo_repo/pyomo/pyomo/core/expr/numeric_expr.py", line 642, in add
        self._args_.append(new_arg)
    AttributeError: 'tuple' object has no attribute 'append'
    

    Information on your system

    • Pyomo version: 6.4.3dev0
    • Python version: 3.9.13
    • Operating system: Ubuntu 20.04
    • How Pyomo was installed (PyPI, conda, source): source
    • Solver (if applicable): N/A

    Additional information

    • This exception is also raised in the event the expression is added to a ConstraintList (such as through ConstraintList.add). The PyROS solver (contrib.pyros) adds constraints to subproblems of two-stage RO models in this way. So PyROS users may be affected.
    bug 
    opened by shermanjasonaf 0
Releases: 6.4.4