Iris is a highly configurable and flexible service for paging and messaging.

Overview

This repository contains the Iris core: API, UI, and sender service. For third-party integration support, see iris-relay, a stateless proxy designed to sit at the edge of a production network and allow external traffic to pass through. We also have an Iris mobile app for iOS/Android in the iris-mobile repo.

Setup database

  1. remove ONLY_FULL_GROUP_BY from MySQL config sql_mode
  2. create mysql schema: mysql -u USER -p < ./db/schema_0.sql
  3. import dummy data: mysql -u USER -p -o iris < ./db/dummy_data.sql
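If you manage sql_mode through a MySQL config file, step 1 can be sketched like this (the file path and the remaining mode list are assumptions; start from your server's current sql_mode and drop only ONLY_FULL_GROUP_BY):

```
# e.g. /etc/mysql/my.cnf (location varies by distro)
[mysqld]
sql_mode = "STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION"
```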

dummy_data.sql contains the following entities:

  • user demo with password demo
  • team demo_team
  • application Autoalerts with key: a7a9d7657ac8837cd7dfed0b93f4b8b864007724d7fa21422c24f4ff0adb2e49

Setup dev environment

  1. create & source your virtualenv
  2. install build dependencies: libssl-dev libxml2-dev libxslt1-dev libsasl2-dev python-dev libldap2-dev
  3. run pip install -e '.[dev,kazoo]'
  4. edit ./configs/config.dev.yaml to set up database credentials and other settings
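The database settings edited in step 4 look roughly like this (key names and nesting are assumptions based on typical Iris/Oncall configs; the commented examples in ./configs/config.dev.yaml are authoritative):

```yaml
# Hypothetical sketch -- defer to ./configs/config.dev.yaml.
db:
  conn:
    kwargs:
      scheme: mysql+pymysql
      user: USER
      password: PASSWORD
      host: 127.0.0.1
      database: iris
```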

To install iris with extra features, you can pass a feature flag to pip:

pip install -e '.[prometheus]'

For the list of extra features, see the extras_require setting in setup.py.
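pip resolves those flags against the extras_require mapping; a hypothetical sketch of its shape (the package lists here are illustrative, not the project's actual dependencies):

```python
# Hypothetical shape of setup.py's extras_require mapping.
extras_require = {
    'dev': ['pytest', 'flake8'],          # illustrative dev tooling
    'kazoo': ['kazoo'],                   # ZooKeeper coordination extra
    'prometheus': ['prometheus-client'],  # metrics exporter extra
}

# `pip install -e '.[prometheus]'` installs the base package plus
# every entry in extras_require['prometheus'].
```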

Run everything

forego start

Run web server

make serve

Run sender

iris-sender configs/config.dev.yaml

Tests

Run tests:

make test  # all tests, e2e + unit
make e2e  # e2e tests
make unit  # unit tests

Generate test coverage reports:

make e2e-cov
make unit-cov

Adding new plugins

  1. create the plugin file under src/iris/plugins dir
  2. edit src/iris/plugins/__init__.py to add plugin module to __all__ list
Comments
  • iris Plan issue

    With the new images, new or existing plans do not get saved.

    The error pops up as:

    Error: Invalid plan - Priority not found for step 1

    even when the priorities are set up. (A screenshot of the error was attached to the original issue.)

    opened by allwyn-pradip 12
  • Add (Prometheus) Alertmanager support

    This is done directly in Iris in order to minimize dependencies for such a crucial function.

    In Alertmanager, one would simply use the alert label "iris_plan" to select the Iris plan.

    HMAC authentication is disabled, as this isn't supported by Alertmanager. Instead, application name and key are passed in the query string.

    An example snippet that can be added to Alertmanager's config:

            receivers:
            - name: 'iris-team1'
              webhook_configs:
                - url: http://iris:16649/v0/alertmanager?application=test-app&key=sdffdssdf
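Since authentication here is just query-string parameters, building the webhook URL can be sketched like this (the application/key values are the throwaway ones from the snippet above; the helper itself is illustrative, not part of Iris):

```python
from urllib.parse import urlencode

def alertmanager_webhook_url(base, application, key):
    # Illustrative helper: Iris's Alertmanager endpoint with the
    # application name and key passed in the query string (no HMAC).
    return '%s/v0/alertmanager?%s' % (
        base, urlencode({'application': application, 'key': key}))

url = alertmanager_webhook_url('http://iris:16649', 'test-app', 'sdffdssdf')
```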
    

    Related to issue https://github.com/linkedin/iris/issues/110

    opened by wleese 10
  • Enable support for both Python 2.x and 3.x

    As subsequent changes are no longer forthcoming to the 2.x branch of Python, projects should begin the process of cross-compatibility. This issue is being opened to collect outstanding issues and note blockers, should other users attempt to run Iris with Python 3.x.

    opened by brianredbeard 10
  • css paths missing

    Tried an installation of Iris and got it running, but no CSS is displayed in the browser. The static logo works. I looked a bit at the source code, and the CSS link path is empty when I inspect the page source, so my guess is that something is wrong in the templating where it says ASSET_URL. Not sure how to fix it, though.

    opened by kjetilmjos 10
  • IRIS ldap not working

    Hi,

    My goal is to enable LDAP authentication for Iris.

    I'm using Iris in a Docker container built from the master branch (commit on Apr 11, 2022: 6a8a70b1a377636dacdd580e2ba226880be2d79a).

    Users are already created in Oncall, LDAP authentication works in Oncall, and user synchronization from Oncall to Iris is also functional.

    I added an auth section to the Iris configuration, very similar to what I already have in Oncall, but this makes Iris unavailable.

    #LDAP Auth
    auth:
      ldap_url: 'ldap://ldap.mycorp.intra'
      ldap_user_suffix: ''
      ldap_bind_user: 'MYCORP\svc-iris'
      ldap_bind_password: 'XXXXXXXXX'
      ldap_base_dn: 'DC=mycorp,DC=intra'
      ldap_search_filter: '(&(objectClass=person)(sAMAccountName=%s))'
    

    The web interface returns the error message Internal Server Error

    Logfile /home/iris/var/log/uwsgi/error.log contains --- no python application found, check your startup logs for errors ---

    I'm not sure it's a bug; maybe I wrote the configuration wrong. Can you help me?
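As a side note on the ldap_search_filter above: the %s placeholder is filled with the login name at lookup time. A minimal, purely illustrative sketch of that substitution (this helper is hypothetical, not Iris's actual implementation; real code must escape LDAP special characters per RFC 4515):

```python
def build_ldap_filter(template, username):
    # Hypothetical helper: substitute the login name into a '%s'
    # search-filter template. Real code should escape the username
    # (RFC 4515) before substitution.
    return template % username

f = build_ldap_filter('(&(objectClass=person)(sAMAccountName=%s))', 'svc-iris')
```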

    opened by bla-ckbox 6
  • Packer docker only build is failing

    [10:07:59] ip-192-168-2-110:~/git/iris/ops/packer 1 % git pull                                                               
    Already up to date.
    [10:08:07] ip-192-168-2-110:~/git/iris/ops/packer % git status                                                             
    On branch master
    Your branch is up to date with 'origin/master'.
    
    nothing to commit, working tree clean
    [10:08:12] ip-192-168-2-110:~/git/iris/ops/packer % git rev-parse HEAD                                                     
    f607fe4c5595469766470b899950d97570e8aebe
    [10:08:19] ip-192-168-2-110:~/git/iris/ops/packer % python gen_packer_cfg.py ./iris.yaml | tail -n +2 > ./output/iris.json
    [10:08:25] ip-192-168-2-110:~/git/iris/ops/packer % packer build -only=docker ./output/iris.json
    ...
        docker: Collecting python-ldap==2.4.9 (from iris==0.0.14)
        docker:   Downloading https://files.pythonhosted.org/packages/75/f5/344cb326a9ba48ee31d58bb1b685f538c3e73954e08a0b81e7dcf48304e2/python-ldap-2.4.9.tar.gz (133kB)            
        docker:     Complete output from command python setup.py egg_info:                                                                                                           
        docker:     defines: HAVE_SASL HAVE_TLS HAVE_LIBLDAP_R
        docker:     extra_compile_args:
        docker:     extra_objects:
        docker:     include_dirs: /opt/openldap-RE24/include /usr/include/sasl /usr/include
        docker:     library_dirs: /opt/openldap-RE24/lib /usr/lib
        docker:     libs: ldap_r
        docker:     Traceback (most recent call last):
        docker:       File "<string>", line 1, in <module>
        docker:       File "/tmp/pip-install-_v6LcX/python-ldap/setup.py", line 178, in <module>
        docker:         **kwargs
        docker:       File "/home/iris/env/local/lib/python2.7/site-packages/setuptools/__init__.py", line 144, in setup
        docker:         _install_setup_requires(attrs)
        docker:       File "/home/iris/env/local/lib/python2.7/site-packages/setuptools/__init__.py", line 137, in _install_setup_requires
        docker:         dist.parse_config_files(ignore_option_errors=True)
        docker:       File "/home/iris/env/local/lib/python2.7/site-packages/setuptools/dist.py", line 702, in parse_config_files
        docker:         self._parse_config_files(filenames=filenames)
        docker:       File "/home/iris/env/local/lib/python2.7/site-packages/setuptools/dist.py", line 599, in _parse_config_files
        docker:         (parser.read_file if six.PY3 else parser.readfp)(reader)
        docker:       File "/usr/lib/python2.7/ConfigParser.py", line 324, in readfp
        docker:         self._read(fp, filename)
        docker:       File "/usr/lib/python2.7/ConfigParser.py", line 479, in _read
        docker:         line = fp.readline()
        docker:       File "/home/iris/env/lib/python2.7/encodings/ascii.py", line 26, in decode
        docker:         return codecs.ascii_decode(input, self.errors)[0]
        docker:     UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 378: ordinal not in range(128)
        docker:
        docker:     ----------------------------------------
        docker: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-_v6LcX/python-ldap/
    ==> docker: Killing the container: 94466af71d7a4ac3b428f47728568bcb5cad842adb61c0c1143fc7e285af2e46
    Build 'docker' errored: Script exited with non-zero exit status: 1
    
    ==> Some builds didn't complete successfully and had errors:
    --> docker: Script exited with non-zero exit status: 1
    
    ==> Builds finished but no artifacts were created.
    

    Looks like this version of python-ldap is broken? The current version is 3.1.0. Related to #255?

    I'm able to install python-ldap==2.4.9 on Python 2.7.10 without issue though, so I'm unsure what the source of the problem is.

    opened by ashafer01 6
  • Add documentation on Oncall/Iris integration for messaging

    It is not clear to me from the documentation or config comments which config settings are required for Oncall to pull from notification_queue and then send a message to the recipient via Iris. I'm using the annotated config provided via a553c1e from #334, which seems to call out the default values related to SMTP, vendor, etc., and likewise I have an entry in the plugin section of Oncall's config defined for iris_messenger. Yet I'm unable to get email messages sent from either Iris (via incident creation) or Oncall (via Reminders/Notifications); notification_queue continues to hold a ton of rows, all marked with NULL in the SENT column. Long story short, additional documentation on using Iris as the messaging system would be appreciated.

    opened by HIT2022 6
  • API key

    Hi, I'm trying to trigger an incident using the API, but I can't figure out how to get the API key. Is it created automatically when an application is created, or how does it work? I was also unable to create a new application from the UI. Is this something that has to be done via the config file?

    opened by kjetilmjos 6
  • Bug: Plans with a name containing a '/' character cannot be deleted or updated.

    If a plan has a slash in its name, e.g. "test-name-with-/-character", it cannot be deleted. The DELETE http request fails, generating a 404:

    Request URL: https://iris.dev.bol.io/v0/plans/test-name-with-/-character
    Request method: DELETE
    Status code: 404 Not Found
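One client-side workaround is to percent-encode the slash so it isn't treated as a path separator (whether Iris's routing layer then matches the plan is a separate question; this snippet only shows the encoding):

```python
from urllib.parse import quote

plan = 'test-name-with-/-character'
# safe='' forces '/' to be encoded as %2F instead of being left alone
path = '/v0/plans/' + quote(plan, safe='')
```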

    opened by tvlieg 5
  • Support oncall schedules that are not 24/7

    At our company, we have the following use case:

    • Alert is created
    • If office hours, TeamA is notified
    • If outside office hours, TeamB is notified

    I've set this up in Iris with a plan that:

    • In step 1 has 2 Notifications
    • The first points to TeamA in oncall as primary
    • The second points to TeamB in oncall as primary

    In Oncall:

    • TeamA has someone oncall during office hours
    • TeamB has someone oncall outside of office hours.

    This works, but results in notifications going to the plan creator:

    You are receiving this as you created this plan and we can't resolve oncall-primary of TeamA at this time.
    

    Now I could suppress this message by making some code changes, but before I take that route (and potentially lose visibility into actual issues resolving oncall-primaries), I'd like to know if there is a better way to achieve what I want.

    Edit: alternatively, I could add a dummy user with the drop mode and use it to fill the gaps in the calendar for both teams.

    opened by wleese 5
  • Creating a Plan fails with sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, u"Column 'user_id' cannot be null")

    When creating the docker containers with:

    cd ops/packer
    packer build -only=docker ./output/iris.json
    

    Running them with the configs/config.dev.yaml against a DB that imported schema_0.sql and dummy_data.sql and then trying to create a Plan:

    [2017-11-09 14:38:46 +0000] [14] [ERROR] iris.db SERVER ERROR
    Traceback (most recent call last):
      File "./iris/db.py", line 37, in guarded_session
        yield session
      File "./iris/api.py", line 1182, in on_post
        plan_id = session.execute(insert_plan_query, plan_dict).lastrowid
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1046, in execute
        bind, close_with_result=True).execute(clause, params or {})
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
        return meth(self, multiparams, params)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
        return connection._execute_clauseelement(self, multiparams, params)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
        compiled_sql, distilled_params
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
        context)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
        exc_info
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
        reraise(type(exception), exception, tb=exc_tb)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
        context)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
        cursor.execute(statement, parameters)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 158, in execute
        result = self._query(query)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 308, in _query
        conn.query(q)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 820, in query
        self._affected_rows = self._read_query_result(unbuffered=unbuffered)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1002, in _read_query_result
        result.read()
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1285, in read
        first_packet = self.connection._read_packet()
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 966, in _read_packet
        packet.check_error()
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 394, in check_error
        err.raise_mysql_exception(self._data)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/err.py", line 120, in raise_mysql_exception
        _check_mysql_exception(errinfo)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/err.py", line 112, in _check_mysql_exception
        raise errorclass(errno, errorvalue)
    IntegrityError: (pymysql.err.IntegrityError) (1048, u"Column 'user_id' cannot be null") [SQL: u"INSERT INTO `plan` (\n    `user_id`, `name`, `created`, `description`, `step_count`,\n    `threshold_window`, `threshold_count`, `aggregation_window`,\n    `aggregation_reset`, `tracking_key`, `tracking_type`, `tracking_template`\n) VALUES (\n    (SELECT `id` FROM `target` where `name` = %(creator)s AND `type_id` = (\n      SELECT `id` FROM `target_type` WHERE `name` = 'user'\n    )),\n    %(name)s,\n    %(created)s,\n    %(description)s,\n    %(step_count)s,\n    %(threshold_window)s,\n    %(threshold_count)s,\n    %(aggregation_window)s,\n    %(aggregation_reset)s,\n    %(tracking_key)s,\n    %(tracking_type)s,\n    %(tracking_template)s\n)"] [parameters: {u'step_count': 1, u'name': u'tes', u'threshold_count': 10, u'creator': u'root', u'created': datetime.datetime(2017, 11, 9, 14, 38, 46, 445930), u'aggregation_window': 300, u'tracking_type': None, u'aggregation_reset': 300, u'tracking_template': None, u'tracking_key': None, u'threshold_window': 900, u'description': u'tes'}]
    Traceback (most recent call last):
      File "/home/iris/env/local/lib/python2.7/site-packages/beaker/middleware.py", line 155, in __call__
        return self.wrap_app(environ, session_start_response)
      File "/home/iris/env/local/lib/python2.7/site-packages/falcon/api.py", line 209, in __call__
        responder(req, resp, **params)
      File "./iris/api.py", line 1182, in on_post
        plan_id = session.execute(insert_plan_query, plan_dict).lastrowid
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1046, in execute
        bind, close_with_result=True).execute(clause, params or {})
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
        return meth(self, multiparams, params)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
        return connection._execute_clauseelement(self, multiparams, params)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
        compiled_sql, distilled_params
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
        context)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
        exc_info
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
        reraise(type(exception), exception, tb=exc_tb)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
        context)
      File "/home/iris/env/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
        cursor.execute(statement, parameters)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 158, in execute
        result = self._query(query)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 308, in _query
        conn.query(q)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 820, in query
        self._affected_rows = self._read_query_result(unbuffered=unbuffered)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1002, in _read_query_result
        result.read()
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1285, in read
        first_packet = self.connection._read_packet()
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 966, in _read_packet
        packet.check_error()
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 394, in check_error
        err.raise_mysql_exception(self._data)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/err.py", line 120, in raise_mysql_exception
        _check_mysql_exception(errinfo)
      File "/home/iris/env/local/lib/python2.7/site-packages/pymysql/err.py", line 112, in _check_mysql_exception
        raise errorclass(errno, errorvalue)
    sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, u"Column 'user_id' cannot be null") [SQL: u"INSERT INTO `plan` (\n    `user_id`, `name`, `created`, `description`, `step_count`,\n    `threshold_window`, `threshold_count`, `aggregation_window`,\n    `aggregation_reset`, `tracking_key`, `tracking_type`, `tracking_template`\n) VALUES (\n    (SELECT `id` FROM `target` where `name` = %(creator)s AND `type_id` = (\n      SELECT `id` FROM `target_type` WHERE `name` = 'user'\n    )),\n    %(name)s,\n    %(created)s,\n    %(description)s,\n    %(step_count)s,\n    %(threshold_window)s,\n    %(threshold_count)s,\n    %(aggregation_window)s,\n    %(aggregation_reset)s,\n    %(tracking_key)s,\n    %(tracking_type)s,\n    %(tracking_template)s\n)"] [parameters: {u'step_count': 1, u'name': u'tes', u'threshold_count': 10, u'creator': u'root', u'created': datetime.datetime(2017, 11, 9, 14, 38, 46, 445930), u'aggregation_window': 300, u'tracking_type': None, u'aggregation_reset': 300, u'tracking_template': None, u'tracking_key': None, u'threshold_window': 900, u'description': u'tes'}]
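The failing subselect resolves the creator (root here, which is likely not one of the dummy-data users) to a target id; when it matches no row it yields NULL, which trips the NOT NULL constraint on plan.user_id. A minimal, self-contained reproduction of that failure mode (illustrative schema, not Iris's real one):

```python
import sqlite3

# Reproduce the failure mode: a scalar subselect that matches no row
# yields NULL, violating the NOT NULL constraint on plan.user_id.
con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE plan (user_id INTEGER NOT NULL, name TEXT);
    INSERT INTO target (name) VALUES ('demo');  -- no 'root' user exists
""")
try:
    con.execute(
        "INSERT INTO plan (user_id, name) VALUES "
        "((SELECT id FROM target WHERE name = 'root'), 'tes')")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: plan.user_id
```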
    
    opened by wleese 5
  • Bump ujson from 1.35 to 5.4.0

    Bumps ujson from 1.35 to 5.4.0.

    Release notes

    Sourced from ujson's releases.

    ... (truncated)

    Commits
    • 9c20de0 Merge pull request from GHSA-fm67-cv37-96ff
    • b21da40 Fix double free on string decoding if realloc fails
    • 67ec071 Merge pull request #555 from JustAnotherArchivist/fix-decode-surrogates-2
    • bc7bdff Replace wchar_t string decoding implementation with a uint32_t-based one
    • cc70119 Merge pull request #548 from JustAnotherArchivist/arbitrary-ints
    • 4b5cccc Merge pull request #553 from bwoodsend/pypy-ci
    • abe26fc Merge pull request #551 from bwoodsend/bye-bye-travis
    • 3efb5cc Delete old TravisCI workflow and references.
    • 404de1a xfail test_decode_surrogate_characters() on Windows PyPy.
    • f7e66dc Switch to musl docker base images.
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies python 
    opened by dependabot[bot] 0
  • Applications are not properly loaded when using uwsgi

    When running Iris in combination with uwsgi (e.g. with the container image), the applications are not properly loaded at startup, leading to 401 errors when using the webhooks API. Once the applications are fully loaded (e.g. by opening the Iris UI in a web browser), the authentication error goes away without modifying the API call.

    Steps to reproduce:

    • clone the master branch of the repo
    • docker-compose build && docker-compose up -d
    • Try to POST Alertmanager data to the webhook API
      • curl -v -X POST -d @prom-alert.json 'http://localhost:16649/v0/webhooks/alertmanager?application=oncall&key=magic'
      • Will return HTTP/401: {"title":"Authentication failure","description":"Application not found"}
    • Open Iris UI in a browser and login: http://localhost:16649 with demo/any password
      • or call the application api endpoint: curl -v http://localhost:16649/v0/applications
    • Try to POST the same Alertmanager data to the webhook API again
      • curl -v -X POST -d @prom-alert.json 'http://localhost:16649/v0/webhooks/alertmanager?application=oncall&key=magic'
      • The incident is properly created and return HTTP/1.1 201 Created

    Some additional logging information: I've modified https://github.com/linkedin/iris/blob/master/src/iris/cache.py#L52 to print when the applications are loaded.

    uwsgi error log:

    [2022-06-13 13:13:38 +0000] [10] [INFO] iris.cache Loaded applications: Autoalerts, test-app, iris, oncall
    [2022-06-13 13:13:38 +0000] [20] [INFO] iris.cache Loaded applications: Autoalerts, test-app, iris, oncall
    

    uwsgi access log:

    13/Jun/2022:13:11:54 +0000 [401] POST /v0/webhooks/alertmanager?application=oncall&key=magic 172.25.0.1 [curl/7.83.1] RT:0 REF:- SZ:248 HTTP/1.1
    13/Jun/2022:13:11:57 +0000 [401] POST /v0/webhooks/alertmanager?application=oncall&key=magic 172.25.0.1 [curl/7.83.1] RT:0 REF:- SZ:248 HTTP/1.1
    13/Jun/2022:13:13:38 +0000 [302] GET / 172.25.0.1 [Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:101.0) Gecko/20100101 Firefox/101.0] RT:36 REF:- SZ:583 HTTP/1.1
    13/Jun/2022:13:13:38 +0000 [200] GET /v0/applications 127.0.0.1 [python-requests/2.28.0] RT:42 REF:- SZ:3668 HTTP/1.1
    13/Jun/2022:13:13:38 +0000 [200] GET /incidents 172.25.0.1 [Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:101.0) Gecko/20100101 Firefox/101.0] RT:1729 REF:- SZ:15859 HTTP/1.1
    13/Jun/2022:13:14:00 +0000 [201] POST /v0/webhooks/alertmanager?application=oncall&key=magic 172.25.0.1 [curl/7.83.1] RT:24 REF:- SZ:195 HTTP/1.1
    

    It looks like the application cache is not properly populated at application startup.

    I wasn't able to reproduce the issue with a local setup with gunicorn.
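One way to reason about the behaviour: if each worker populates its application cache lazily rather than eagerly at fork time, the first webhook calls race the initial load. A tiny, hypothetical illustration of the eager-vs-lazy distinction (not Iris's actual cache code):

```python
class AppCache:
    """Hypothetical cache that can be populated eagerly at worker
    startup instead of on first use."""
    def __init__(self, loader):
        self._loader = loader  # callable returning the applications
        self._apps = None

    def ensure_loaded(self):
        # Idempotent: the loader runs at most once per worker.
        if self._apps is None:
            self._apps = self._loader()
        return self._apps

# Calling ensure_loaded() once at worker startup (e.g. from a
# postfork hook) would avoid serving requests before the load.
cache = AppCache(lambda: ['Autoalerts', 'test-app', 'iris', 'oncall'])
apps = cache.ensure_loaded()
```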

    Used Sample Data

    {
      "version": "4",
      "groupKey": "{}:{alertname=\"high_memory_load\"}",
      "status": "firing",
      "receiver": "teams_proxy",
      "groupLabels": {
          "alertname": "high_memory_load",
          "iris_plan": "Oncall test"
     },
      "commonLabels": {
          "alertname": "high_memory_load",
          "monitor": "master",
          "severity": "warning"
      },
      "commonAnnotations": {
          "summary": "Server High Memory usage"
      },
      "externalURL": "http://docker.for.mac.host.internal:9093",
      "alerts": [
          {
              "labels": {
                  "alertname": "high_memory_load",
                  "instance": "10.80.40.11:9100",
                  "job": "docker_nodes",
                  "monitor": "master",
                  "severity": "warning"
              },
              "annotations": {
                  "description": "10.80.40.11 reported high memory usage with 23.28%.",
                  "summary": "Server High Memory usage"
              },
              "startsAt": "2018-03-07T06:33:21.873077559-05:00",
              "endsAt": "0001-01-01T00:00:00Z"
          }
      ]
    }
    
    opened by roock 0
  • add sync-targets daemon

    Environment deployment: Kubernetes
    Deployment tools: Helm chart
    User management: manual user administration

    Issue: syncing targets from Oncall users to Iris requires triggering a manual command.

    It should trigger automatically when a user is added to the Oncall platform, since we are not using LDAP for user management.

    I couldn't see a sync-target config for manual insertion, only for LDAP, so I added a sync-targets daemon process.

    opened by furiatona 0
  • Error during make on Quickstart setup

    I have made it with zero issues through all of the Quickstart setup steps, up to running make in the local iris project directory, where make fails with the following error:

    (venv) [email protected]:~/Projects/iris$ make
    iris-dev ./configs/config.dev.yaml
    [2022-04-26 15:31:42 -0500] [55669] [INFO] Starting gunicorn 20.1.0
    [2022-04-26 15:31:42 -0500] [55669] [INFO] Listening at: http://0.0.0.0:16649 (55669)
    [2022-04-26 15:31:42 -0500] [55669] [INFO] Using worker: gevent
    [2022-04-26 15:31:42 -0500] [55670] [INFO] Booting worker with pid: 55670
    [2022-04-26 15:31:42 -0500] [55671] [INFO] Booting worker with pid: 55671
    [2022-04-26 15:31:42 -0500] [55672] [INFO] Booting worker with pid: 55672
    [2022-04-26 15:31:42 -0500] [55673] [INFO] Booting worker with pid: 55673
    [2022-04-26 15:31:42 -0500] [55670] [ERROR] Exception in worker process
    Traceback (most recent call last):
      File "/home/chris/Projects/iris/venv/lib/python3.9/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
        worker.init_process()
      File "/home/chris/Projects/iris/venv/lib/python3.9/site-packages/gunicorn/workers/ggevent.py", line 146, in init_process
        super().init_process()
      File "/home/chris/Projects/iris/venv/lib/python3.9/site-packages/gunicorn/workers/base.py", line 134, in init_process
        self.load_wsgi()
      File "/home/chris/Projects/iris/venv/lib/python3.9/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
        self.wsgi = self.app.wsgi()
      File "/home/chris/Projects/iris/venv/lib/python3.9/site-packages/gunicorn/app/base.py", line 67, in wsgi
        self.callable = self.load()
      File "/home/chris/Projects/iris/src/iris/bin/run_server.py", line 36, in load
        app = iris.api.get_api(config)
      File "/home/chris/Projects/iris/src/iris/api.py", line 5515, in get_api
        spawn(update_cache_worker)
      File "src/gevent/greenlet.py", line 663, in gevent._greenlet.Greenlet.spawn
    AttributeError: type object 'function' has no attribute 'start'
    [2022-04-26 15:31:42 -0500] [55670] [INFO] Worker exiting (pid: 55670)

    Environment details: Debian 11 (x86_64) Python 3.9.2 mysql Ver 8.0.29 for Linux on x86_64 (MySQL Community Server - GPL)

    opened by grimmeanor 0
  • Status of Container Images

    I'd like to raise a few questions around the current container images of Iris:

    • Currently there are two Dockerfiles, one in the root directory and one in the ops/Docker directory. The second one isn't maintained anymore; should it be removed?
      • Bonus question: the ops/packer, ops/terraform and ops/charts directories are also outdated and don't seem to be maintained. Should those go away, too? (I might look into providing an updated Helm chart for version 2 in the next weeks.)
    • Is there any plan to provide pre-built container images? There seems to be an "official" registry (https://quay.io/repository/iris/iris), but the images are 3 years old. If there's no plan, I will start building "unofficial" images.
    • I'm currently trying to improve the container image by running it "rootless" (https://github.com/linkedin/iris/commit/c4123e450cef36a4c5271e4f2550d55f5f73d434), which means the iris user has only limited write access inside the container. I'm having trouble finding a good solution for the sender rpc log file. Any hints/comments here? I'm currently patching the config file to update the path, but this doesn't seem to be the best way to handle it.
    • Was there a specific reason for the initializedfile? It doesn't seem to provide any real benefit in a container environment:
      • In production setups, the database gets initialized once, either manually from outside a container or by starting a container with DOCKER_DB_BOOTSTRAP.
      • In development setups, containers are rebuilt quite often, and reinitialization happens anyway since the initializedfile doesn't exist.
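    For the rootless-image question above, one pattern that avoids patching the config at container start is to prepare, at build or entrypoint time, a log location the unprivileged user owns. A minimal sketch, assuming a hypothetical IRIS_LOG_DIR variable and default path (neither is an actual Iris config key):

    ```python
    import os
    from pathlib import Path

    # Hypothetical: a directory the unprivileged "iris" user can write to.
    # IRIS_LOG_DIR and the default path are assumptions, not Iris settings.
    log_dir = Path(os.environ.get("IRIS_LOG_DIR", "/tmp/iris-log"))
    log_dir.mkdir(parents=True, exist_ok=True)

    # Pre-create the sender rpc log file so the config can point at a path
    # that is guaranteed writable without root.
    rpc_log = log_dir / "sender_rpc.log"
    rpc_log.touch(exist_ok=True)
    print(rpc_log)
    ```

    With something like this run once before dropping privileges, the config only ever references a path the runtime user can write, instead of being rewritten on every start.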

    Thanks and keep up the great work!

    opened by roock 1
  • Feature request: Extract fields from AlertManager labels into top level context

    Feature request: Extract fields from AlertManager labels into top level context

    Hey folks, I'm liking Iris so far. I'd like to check receptiveness to this change - I'm happy to put up a PR for this, as it would help us.

    Problem

    The AlertManager integration doesn't currently support the "override incident title" and "plan dynamic targets" features.

    AlertManager sends a payload in this (slightly redacted) format:

    {
      "groupKey": "Demo system down",
      "status": "firing",
      "groupLabels": {"iris_plan": "page-engineer", "severity":"critical", "service":"demo-system"},
      "commonLabels": {"iris_plan": "page-engineer", "severity":"critical", "service":"demo-system"},
      "commonAnnotations": {"team":"demo-team", "runbook_url": "https://example.org", "description": "The service is down!"}
    }
    

    Since groupLabels.iris_plan is set to an existing plan (page-engineer), the plan will run, and the context (the above payload) will be available in the incident summary, the context templates, and the message templates.

    Unfortunately, if we want to react to all incidents in the same way (i.e. page an on-call primary and, if they don't answer, the secondary) but want pages to be received by different teams, we must create a plan (or potentially several plans) for each team and set the iris_plan label on each alert to something team-specific (e.g. iris_plan: "page-engineer-demo-team"). Ideally we'd be able to use the existing dynamic_targets feature on plans so that we only need to create one plan that could be reused by all teams.

    Also, each incident gets the title "alertmanager" (the name of the application). Though we can set an <h1>{{groupLabels.alertname}}</h1> in the context templates, it would be nice if overriding the title were supported for the AlertManager integration.

    Feature request

    In the AlertManager integration it'd be great if we could pull the alertname field up from groupLabels when it exists, and allow users to set dynamic targets in alert labels.

    labels:
        alertname: "Demo service is down"
        iris_plan: page-engineer
        iris_targets: '[{"role": "oncall-primary","target": "demo-team"},{"role": "oncall-secondary","target": "demo-team"}]'
    
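    The requested extraction can be sketched in a few lines. The function and context key names below are hypothetical illustrations of the proposal, not Iris internals: promote alertname from groupLabels to the incident title, and parse an optional iris_targets label (a JSON string, since label values must be strings) into dynamic targets.

    ```python
    import json

    def promote_alertmanager_fields(payload):
        """Hypothetical sketch of the requested behavior; not Iris code."""
        labels = payload.get("groupLabels", {})
        context = dict(payload)
        # Use the Prometheus alertname as the incident title when present.
        if "alertname" in labels:
            context["title"] = labels["alertname"]
        # Allow per-alert dynamic targets, encoded as a JSON string label.
        if "iris_targets" in labels:
            context["targets"] = json.loads(labels["iris_targets"])
        return context

    payload = {
        "groupKey": "Demo system down",
        "status": "firing",
        "groupLabels": {
            "alertname": "Demo service is down",
            "iris_plan": "page-engineer",
            "iris_targets": '[{"role": "oncall-primary", "target": "demo-team"},'
                            ' {"role": "oncall-secondary", "target": "demo-team"}]',
        },
    }
    context = promote_alertmanager_fields(payload)
    print(context["title"])       # Demo service is down
    print(context["targets"][0])  # {'role': 'oncall-primary', 'target': 'demo-team'}
    ```

    Alerts without these labels would pass through unchanged, so existing integrations keep working.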
    opened by bilbof 1
Releases (v1.0.37)