Line-by-line profiling for Python

Overview

line_profiler and kernprof


NOTICE: This is the official line_profiler repository. The most recent version of line-profiler on PyPI points to this repo. The original line_profiler package by @rkern is currently unmaintained. This fork seeks simply to maintain the original code so it continues to work in new versions of Python.


line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

They are available under a BSD license.

Installation

Releases of line_profiler can be installed using pip:

$ pip install line_profiler

Source releases and any binaries can be downloaded from the PyPI link.

http://pypi.python.org/pypi/line_profiler

To check out the development sources, you can use Git:

$ git clone https://github.com/pyutils/line_profiler.git

You may also download source tarballs of any snapshot from that URL.

Building line_profiler from source requires a C compiler. Git checkouts will additionally require Cython >= 0.10 to generate the C sources. Source releases on PyPI contain the pregenerated C sources, so Cython should not be required in that case.
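
For example, a minimal build from a git checkout might look like this (assuming a working C compiler and a recent pip; adapt as needed for your platform):

$ git clone https://github.com/pyutils/line_profiler.git
$ cd line_profiler
$ pip install Cython
$ pip install .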

kernprof is a single-file pure Python script and does not require a compiler. If you wish to use it to run cProfile and not line-by-line profiling, you may copy it to a directory on your PATH manually and avoid trying to build any C extensions.

line_profiler

The current profiling tools supported in Python 2.7 and later only time function calls. This is a good first step for locating hotspots in one's program and is frequently all one needs to do to optimize the program. However, sometimes the cause of the hotspot is actually a single line in the function, and that line may not be obvious from just reading the source code. These cases are particularly frequent in scientific computing. Functions tend to be larger (sometimes because of legitimate algorithmic complexity, sometimes because the programmer is still trying to write FORTRAN code), and a single statement without function calls can trigger lots of computation when using libraries like numpy. cProfile only times explicit function calls, not special methods called because of syntax. Consequently, a relatively slow numpy operation on large arrays like this,

a[large_index_array] = some_other_large_array

is a hotspot that never gets broken out by cProfile because there is no explicit function call in that statement.

LineProfiler can be given functions to profile, and it will time the execution of each individual line inside those functions. In a typical workflow, one only cares about line timings of a few functions because wading through the results of timing every single line of code would be overwhelming. However, LineProfiler does need to be explicitly told what functions to profile. The easiest way to get started is to use the kernprof script.

$ kernprof -l script_to_profile.py

kernprof will create an instance of LineProfiler and insert it into the __builtins__ namespace with the name profile. That instance is written to be used as a decorator, so in your script you decorate the functions you want to profile with @profile.

@profile
def slow_function(a, b, c):
    ...

The default behavior of kernprof is to put the results into a binary file named script_to_profile.py.lprof. You can tell kernprof to immediately view the formatted results in the terminal with the [-v/--view] option. Otherwise, you can view the results later like so:

$ python -m line_profiler script_to_profile.py.lprof

For example, here are the results of profiling a single function from a decorated version of the pystone.py benchmark (the first two lines are output from pystone.py, not kernprof):

Pystone(1.1) time for 50000 passes = 2.48
This machine benchmarks at 20161.3 pystones/second
Wrote profile results to pystone.py.lprof
Timer unit: 1e-06 s

File: pystone.py
Function: Proc2 at line 149
Total time: 0.606656 s

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   149                                           @profile
   150                                           def Proc2(IntParIO):
   151     50000        82003      1.6     13.5      IntLoc = IntParIO + 10
   152     50000        63162      1.3     10.4      while 1:
   153     50000        69065      1.4     11.4          if Char1Glob == 'A':
   154     50000        66354      1.3     10.9              IntLoc = IntLoc - 1
   155     50000        67263      1.3     11.1              IntParIO = IntLoc - IntGlob
   156     50000        65494      1.3     10.8              EnumLoc = Ident1
   157     50000        68001      1.4     11.2          if EnumLoc == Ident1:
   158     50000        63739      1.3     10.5              break
   159     50000        61575      1.2     10.1      return IntParIO

The source code of the function is printed with the timing information for each line. There are six columns of information.

  • Line #: The line number in the file.
  • Hits: The number of times that line was executed.
  • Time: The total amount of time spent executing the line in the timer's units. In the header information before the tables, you will see a line "Timer unit:" giving the conversion factor to seconds. It may be different on different systems.
  • Per Hit: The average amount of time spent executing the line once in the timer's units.
  • % Time: The percentage of time spent on that line relative to the total amount of recorded time spent in the function.
  • Line Contents: The actual source code. Note that this is always read from disk when the formatted results are viewed, not when the code was executed. If you have edited the file in the meantime, the lines will not match up, and the formatter may not even be able to locate the function for display.

If you are using IPython, there is an implementation of a %lprun magic command which will let you specify functions to profile and a statement to execute. It will also add its LineProfiler instance into __builtins__, but typically you would not use it that way.

For IPython 0.11+, you can install it by editing the IPython configuration file ~/.ipython/profile_default/ipython_config.py to add the 'line_profiler' item to the extensions list:

c.TerminalIPythonApp.extensions = [
    'line_profiler',
]

To get usage help for %lprun, use the standard IPython help mechanism:

In [1]: %lprun?
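
As a quick sketch of a typical session (my_module and my_func are hypothetical names; -f tells %lprun which function to line-profile, and the trailing statement is what actually gets executed; loading the extension interactively with %load_ext also works instead of the configuration-file approach):

In [1]: %load_ext line_profiler

In [2]: from my_module import my_func

In [3]: %lprun -f my_func my_func(42)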

These two methods are expected to be the most frequent user-level ways of using LineProfiler and will usually be the easiest. However, if you are building other tools with LineProfiler, you will need to use the API. There are two ways to inform LineProfiler of functions to profile: you can pass them as arguments to the constructor or use the add_function(f) method after instantiation.

profile = LineProfiler(f, g)
profile.add_function(h)

LineProfiler has the same run(), runctx(), and runcall() methods as cProfile.Profile as well as enable() and disable(). It should be noted, though, that enable() and disable() are not entirely safe when nested. Nesting is common when using LineProfiler as a decorator. In order to support nesting, use enable_by_count() and disable_by_count(). These functions will increment and decrement a counter and only actually enable or disable the profiler when the count transitions from or to 0.
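
Continuing the snippet above, a minimal sketch of timing a region of code by hand (f, g, and h stand in for your own functions) could look like:

profile.enable_by_count()        # nesting-safe: only enables on the 0 -> 1 transition
try:
    f()
    g()
    h()
finally:
    profile.disable_by_count()   # only actually disables when the count returns to 0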

After profiling, the dump_stats(filename) method will pickle the results out to the given file. print_stats([stream]) will print the formatted results to sys.stdout or whatever stream you specify. get_stats() will return a LineStats object, which just holds two attributes: a dictionary containing the results and the timer unit.
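
For example, a sketch of saving results and formatting them again later ('my_script.lprof' is a placeholder filename; load_stats() and show_text() are the helpers used by the python -m line_profiler viewer):

import line_profiler

profile.print_stats()                                 # print the results now
profile.dump_stats('my_script.lprof')                 # or pickle them to disk

# later, possibly from another process:
lstats = line_profiler.load_stats('my_script.lprof')  # returns a LineStats object
line_profiler.show_text(lstats.timings, lstats.unit)  # same formatting as print_stats()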

kernprof

kernprof also works with cProfile, its third-party predecessor lsprof, or the pure-Python profile module, depending on what is available. It has a few main features:

  • Encapsulation of profiling concerns. You do not have to modify your script in order to initiate profiling and save the results, unless you want to use the advanced __builtins__ features, of course.
  • Robust script execution. Many scripts require things like __name__, __file__, and sys.path to be set relative to the script itself. A naive approach to encapsulation would just use execfile(), and scripts that rely on that information would fail. kernprof sets those variables correctly before executing the script.
  • Easy executable location. If you are profiling an application installed on your PATH, you can just give the name of the executable. If kernprof does not find the given script in the current directory, it will search your PATH for it.
  • Inserting the profiler into __builtins__. Sometimes, you just want to profile a small part of your code. With the [-b/--builtin] argument, the Profiler will be instantiated and inserted into your __builtins__ with the name "profile". Like LineProfiler, it may be used as a decorator, enabled/disabled with enable_by_count() and disable_by_count(), or even used as a context manager with the "with profile:" statement (see the sketch after this list).
  • Pre-profiling setup. With the [-s/--setup] option, you can provide a script which will be executed without profiling before executing the main script. This is typically useful for cases where imports of large libraries like wxPython or VTK are interfering with your results. If you can modify your source code, the __builtins__ approach may be easier.
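
As a sketch of how the injected profile object from [-b/--builtin] might be used inside a script (the function names here are placeholders):

$ kernprof -b script_to_profile.py

# inside script_to_profile.py; kernprof -b injects "profile" into __builtins__
def untimed_setup():
    ...                          # placeholder setup work, not profiled

@profile                         # decorator form
def hot_function():
    ...

def main():
    untimed_setup()
    with profile:                # context-manager form: profile just this block
        hot_function()

main()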

The results of profiling script_to_profile.py this way will be written to script_to_profile.py.prof by default. This is a typical marshalled file that can be read with pstats.Stats() and may be interactively viewed with the command:

$ python -m pstats script_to_profile.py.prof

Such files may also be viewed with graphical tools like KCachegrind (via the converter program pyprof2calltree) or RunSnakeRun.

Frequently Asked Questions

  • Why the name "kernprof"?

    I didn't manage to come up with a meaningful name, so I named it after myself.

  • Why not use hotshot instead of line_profiler?

    hotshot can do line-by-line timings, too. However, it is deprecated and may disappear from the standard library. Also, it can take a long time to process the results, while I want quick turnaround in my workflows. hotshot pays this processing time in order to make itself minimally intrusive to the code it is profiling. Code that does network operations, for example, may even go down different code paths if profiling slows down execution too much. For my use cases, and I think those of many other people, line-by-line profiling is not affected much by this concern.

  • Why not allow using hotshot from kernprof.py?

    I don't use hotshot, myself. I will accept contributions in this vein, though.

  • The line-by-line timings don't add up when one profiled function calls another. What's up with that?

    Let's say you have function F() calling function G(), and you are using LineProfiler on both. The total time reported for G() is less than the time reported on the line in F() that calls G(). The reason is that I'm being reasonably clever (and possibly too clever) in recording the times. Basically, I try to prevent recording the time spent inside LineProfiler doing all of the bookkeeping for each line. Each time Python's tracing facility issues a line event (which happens just before a line actually gets executed), LineProfiler will find two timestamps, one at the beginning before it does anything (t_begin) and one as close to the end as possible (t_end). Almost all of the overhead of LineProfiler's data structures happens in between these two times.

    When a line event comes in, LineProfiler finds the function it belongs to. If it's the first line in the function, we record the line number and t_end associated with the function. The next time we see a line event belonging to that function, we take t_begin of the new event and subtract the old t_end from it to find the amount of time spent in the old line. Then we record the new line number and t_end as the active line for this function. This way, we are removing most of LineProfiler's overhead from the results. Well, almost. When one profiled function F calls another profiled function G, the line in F that calls G basically records the total time spent executing the line, which includes the time spent inside the profiler while inside G.

    The first time this question was asked, the questioner had the G() function call as part of a larger expression, and he wanted to try to estimate how much time was being spent in the function as opposed to the rest of the expression. My response was that, even if I could remove the effect, it might still be misleading. G() might be called elsewhere, not just from the relevant line in F(). The workaround would be to modify the code to split it up into two lines, one which just assigns the result of G() to a temporary variable and the other with the rest of the expression.
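
    For example (all names here are hypothetical):

    # before: the time spent in G() is folded into the timing of the whole expression
    result = H(G(x)) + y

    # after: the call to G() gets a line of its own in the report
    g_value = G(x)
    result = H(g_value) + y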

    I am open to suggestions on how to make this more robust. Or simple admonitions against trying to be clever.

  • Why do my list comprehensions have so many hits when I use the LineProfiler?

    LineProfiler records the line with the list comprehension once for each iteration of the list comprehension.
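
    For example, in a sketch like the following, the comprehension line will typically show a hit count on the order of the number of iterations rather than 1:

    @profile
    def squares(n):
        return [i * i for i in range(n)]   # recorded roughly once per iteration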

  • Why is kernprof distributed with line_profiler? It works with just cProfile, right?

    Partly because kernprof.py is essential to using line_profiler effectively, but mostly because I'm lazy and don't want to maintain the overhead of two projects for modules as small as these. However, kernprof.py is a standalone, pure Python script that can be used to do function profiling with just the Python standard library. You may grab it and install it by itself without line_profiler.

  • Do I need a C compiler to build line_profiler? kernprof.py?

    You do need a C compiler for line_profiler. kernprof.py is a pure Python script and can be installed separately, though.

  • Do I need Cython to build line_profiler?

    You should not have to if you are building from a released source tarball. It should contain the generated C sources already. If you are running into problems, that may be a bug; let me know. If you are building from a git checkout or snapshot, you will need Cython to generate the C sources. You will probably need version 0.10 or higher. There is a bug in some earlier versions in how it handles NULL PyObject* pointers.

    As of version 3.0.0, manylinux wheels containing the binaries are available on PyPI. Work is still needed to publish OSX and win32 wheels. (PRs for this would be helpful!)

  • What version of Python do I need?

    Both line_profiler and kernprof have been tested with Python 2.7 and 3.5-3.9. Older versions of line_profiler support older versions of Python.

To Do

cProfile uses a neat "rotating trees" data structure to minimize the overhead of looking up and recording entries. LineProfiler uses Python dictionaries and extension objects thanks to Cython. This mostly started out as a prototype that I wanted to play with as quickly as possible, so I passed on stealing the rotating trees for now. As usual, I got it working, and it seems to have acceptable performance, so I am much less motivated to use a different strategy now. Maybe later. Contributions accepted!

Bugs and Such

Bugs and pull requests can be submitted on GitHub.

Changes

See CHANGELOG.

Comments
  • ModuleNotFoundError: No module named 'line_profiler._line_profiler'

    I installed version 3.2.0 with pip and I'm getting this error on this line: https://github.com/pyutils/line_profiler/blob/master/line_profiler/line_profiler.py#L28 (there is no _line_profiler file). There was no error with version 3.1.0.

    pip 21.0.1, line_profiler 3.2.0, Python 3.7.0

    opened by mikekeda 25
  • No source is visible in line_profiler output in Jupyter notebook

    I'm currently having an issue identical to https://github.com/rkern/line_profiler/issues/23. After I use %lprun in a Jupyter notebook no source is shown for the profiling.

    %load_ext line_profiler
    import numpy as np
    
    def func(x):
        y = np.cos(x)
        z = np.sin(x)
        return y+z
    
    %lprun -f func func(np.linspace(0, 1, 1001))
    
    Timer unit: 1e-06 s
    
    Total time: 0.000416 s
    
    Could not find file /var/folders/m8/gwtcncws12jf60xw5n6knwnw0000gn/T/ipykernel_18955/3242911465.py
    Are you sure you are running this program from the same directory
    that you ran the profiler from?
    Continuing without the function's contents.
    
    Line #      Hits         Time  Per Hit   % Time  Line Contents
    ==============================================================
         1                                           
         2         1         69.0     69.0     16.6  
         3         1         15.0     15.0      3.6  
         4         1        332.0    332.0     79.8
    

    This was in a newly created conda environment, Python 3.8.10. I'm on macOS 11.4, with Google Chrome version 91.0.4472.114. However, all packages were installed via pip. I've tried installing line profiler from source at the main branch of this repository, as well as from PyPI. !pip freeze gives:

    appnope==0.1.2
    argon2-cffi==20.1.0
    async-generator==1.10
    attrs==21.2.0
    backcall==0.2.0
    beniget==0.4.0
    bleach==3.3.0
    certifi==2021.5.30
    cffi==1.14.6
    cycler==0.10.0
    Cython==0.29.24
    debugpy==1.3.0
    decorator==5.0.9
    defusedxml==0.7.1
    entrypoints==0.3
    gast==0.5.0
    ipykernel==6.0.1
    ipython==7.25.0
    ipython-genutils==0.2.0
    ipywidgets==7.6.3
    jedi==0.18.0
    Jinja2==3.0.1
    jsonschema==3.2.0
    jupyter==1.0.0
    jupyter-client==6.1.12
    jupyter-console==6.4.0
    jupyter-core==4.7.1
    jupyterlab-pygments==0.1.2
    jupyterlab-widgets==1.0.0
    kiwisolver==1.3.1
    line-profiler @ file:///Users/andrew/Documents/Andy/programming/line_profiler
    MarkupSafe==2.0.1
    matplotlib==3.4.2
    matplotlib-inline==0.1.2
    mistune==0.8.4
    nbclient==0.5.3
    nbconvert==6.1.0
    nbformat==5.1.3
    nest-asyncio==1.5.1
    notebook==6.4.0
    numpy==1.21.0
    packaging==21.0
    pandocfilters==1.4.3
    parso==0.8.2
    pexpect==4.8.0
    pickleshare==0.7.5
    Pillow==8.3.1
    ply==3.11
    prometheus-client==0.11.0
    prompt-toolkit==3.0.19
    ptyprocess==0.7.0
    pybind11==2.6.2
    pycparser==2.20
    Pygments==2.9.0
    pyparsing==2.4.7
    PyQt5==5.15.4
    PyQt5-Qt5==5.15.2
    PyQt5-sip==12.9.0
    pyrsistent==0.18.0
    python-dateutil==2.8.2
    pythran==0.9.12
    pyzmq==22.1.0
    qtconsole==5.1.1
    QtPy==1.9.0
    scipy==1.7.0
    Send2Trash==1.7.1
    six==1.16.0
    terminado==0.10.1
    testpath==0.5.0
    tornado==6.1
    traitlets==5.0.5
    wcwidth==0.2.5
    webencodings==0.5.1
    widgetsnbextension==3.5.1
    
    opened by andyfaff 20
  • Significantly decrease profiling overhead & update build process (4.0.0)

    This PR is a large amalgamation of work I've been doing over the past few months to make line_profiler better. The main improvements are:

    • The Python trace callback that is called for every line of profiled code has been rewritten to make more extensive use of C++ and to reduce Python interaction. Cython can generate annotated HTML files which indicate Python interaction by making the line more yellow. An explanation for this can be found here. Generally, the more yellow the line is, the more Python interaction there will be, so the slower the line/function will be. I have attached screenshots of the HTML files plus their raw originals for easy viewing. The callback is now approximately 2-3x faster. I have noticed in my other projects that lines that used to say they took 0.7µs (microseconds) to execute now say they execute in 0.2-0.3µs. This is because line_profiler isn't able to completely get rid of its own overhead when tracking lines. The overhead has been reduced from approximately 0.4µs to <0.1µs on my laptop, running Ubuntu 20.04.5 on Linux 5.17.9, with an Intel i7-11800H CPU, with turbo-boost disabled, and when plugged into its charger. This translates into some of my example programs going from a 3.5x slowdown with profiling enabled to a 1.8x slowdown.
    • The build system has been switched to pure Cython. Scikit-build's Cython build system is woefully outdated, and it doesn't support nearly as many options as Cython's native build system. In addition, the CMake process is unintuitive for many Python contributors, and the C-extension compilation process for local development is undocumented and complex. With the new system, one can just run pip install . inside the cloned directory, and the build and installation are done automatically. An extra anecdotal benefit is that it seems to install a bit faster.
    • The -i option has been added. This was introduced in an earlier PR of mine, but rejected due to not being an asyncio-based approach. At the moment, profiling asyncio code continues to work in 4.0.0 according to tests of other async code of mine, but I'd appreciate it if you could test with any async code you deem relevant. In addition, basic benchmarks at the bottom of the post indicate no noticeable slowdown when using the -i 1 option, and it doesn't mess with the GIL. If this needs to be split into a separate PR, I can do that.

    This release would be 4.0.0 because in order to implement the C++ optimizations, I had to change how the code_map and last_time attributes on the LineProfiler object work. The attributes are still accessible from pure-Python code, but they contain different objects, so any code relying on specific behavior from those attributes may break. However, I don't think those attributes were ever meant to be relied upon by external code anyway.

    The Cython code is a bit more complex now, but it should be manageable. You'll notice I had to do some manipulation of the code objects of functions, because in order to avoid Python interaction I couldn't store the function code objects in a Python dictionary, so I had to hash the function objects. This originally caused problems when there were two different functions with the exact same code; however, that was fixed by making line_profiler add no-op instructions to any duplicate functions, so that they could be profiled as separate functions. This still

    import asyncio
    import math
    
    @profile
    async def testing(x):
        for i in range(x):
            y = math.pow(100, 150) + math.pow(100, 150)
    
    asyncio.run(testing(7500000))
    

    Above code without -i 1, 7.5m repetitions: 5.28s 5.30s 5.34s 5.31s

    Above code with -i 1, 7.5m repetitions: 5.27s 5.32s 5.28s 5.30s

    There is essentially no difference in performance with -i, as long as it's not done like every .001 seconds. Luckily, it only allows intervals in multiples of 1 second, and 0 seconds is the same as disabled, so there isn't really much chance of it breaking. The timings are approximately the same for both cases when removing asyncio.

    opened by Theelx 16
  • AttributeError after upgrading to line-profiler 4.0.0 from 3.5.1

    I do not see anything in the changelog that might cause this.

    Traceback (most recent call last):
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django/contrib/staticfiles/handlers.py", line 76, in __call__
        return self.application(environ, start_response)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django/core/handlers/wsgi.py", line 133, in __call__
        response = self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django/core/handlers/base.py", line 130, in get_response
        response = self._middleware_chain(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django/core/handlers/exception.py", line 49, in inner
        response = response_for_exception(request, exc)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django/core/handlers/exception.py", line 114, in response_for_exception
        response = handle_uncaught_exception(request, get_resolver(get_urlconf()), sys.exc_info())
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django/core/handlers/exception.py", line 149, in handle_uncaught_exception
        return debug.technical_500_response(request, *exc_info)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django_extensions/management/technical_response.py", line 41, in null_technical_500_response
        raise exc_value
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
        response = get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/middleware.py", line 58, in __call__
        response = toolbar.process_request(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/timer.py", line 65, in process_request
        return super().process_request(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/headers.py", line 46, in process_request
        return super().process_request(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      [Previous line repeated 1 more time]
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/template_profiler_panel/panels/template.py", line 250, in process_request
        response = super(TemplateProfilerPanel, self).process_request(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/staticfiles.py", line 116, in process_request
        return super().process_request(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/logging.py", line 95, in process_request
        return super().process_request(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar/panels/__init__.py", line 206, in process_request
        return self.get_response(request)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar_line_profiler/panel.py", line 206, in process_request
        self._unwrap_closure_and_profile(self.view_func)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar_line_profiler/panel.py", line 198, in _unwrap_closure_and_profile
        self._unwrap_closure_and_profile(value)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar_line_profiler/panel.py", line 198, in _unwrap_closure_and_profile
        self._unwrap_closure_and_profile(value)
      File "/Users/user/.pyenv/versions/v_ver/lib/python3.8/site-packages/debug_toolbar_line_profiler/panel.py", line 179, in _unwrap_closure_and_profile
        self.line_profiler.add_function(func)
      File "line_profiler/_line_profiler.pyx", line 196, in line_profiler._line_profiler.LineProfiler.add_function
      File "line_profiler/_line_profiler.pyx", line 219, in line_profiler._line_profiler.LineProfiler.add_function
    AttributeError: 'method' object has no attribute '__code__'
    

    Connected with https://github.com/mikekeda/django-debug-toolbar-line-profiler/issues/9

    opened by Mogost 15
  • Add option to automatically output a file every n seconds

    This PR does what the title says. It's tested on Python 3.8.6, 3.10.0a6, and 2.7.18. The snippet for the RepeatedTimer class was copied from Stack Overflow and adapted to the specific requirements of this PR; the link is in the docstring for the class. I also took the chance to define a return type (void) for one of the most-called functions in the Cython file. Apologies if I'm missing any checkboxes or whatever; I'm making this PR from my PuTTY SSH terminal because I do my developing on a remote machine, as my laptop sucks :(.

    opened by Theelx 15
  • ModuleNotFoundError: No module named 'line_profiler._line_profiler'

    OS: win10; Python: 3.6.8 (64-bit); line_profiler: 3.2.1

    # trial.py
    @profile
    def func():
        for i in range(10):
            print('hello world')
    
    func()
    

    When running the above code with kernprof -l .\trial.py, this error occurs:

    File "c:\users\a\appdata\local\programs\python\python36\lib\runpy.py", line 193, in run_module_as_main "main", mod_spec) File "c:\users\a\appdata\local\programs\python\python36\lib\runpy.py", line 85, in run_code exec(code, run_globals) File "C:\Users\A\AppData\Local\Programs\Python\Python36\Scripts\kernprof.exe_main.py", line 9, in File "c:\users\a\appdata\local\programs\python\python36\lib\site-packages\kernprof.py", line 211, in main import line_profiler File "c:\users\a\appdata\local\programs\python\python36\lib\site-packages\line_profiler_init.py", line 10, in from .line_profiler import version File "c:\users\a\appdata\local\programs\python\python36\lib\site-packages\line_profiler\line_profiler.py", line 26, in from ._line_profiler import LineProfiler as CLineProfiler ModuleNotFoundError: No module named 'line_profiler._line_profiler'

    opened by ICYPOLE 12
  • command-line option to control unit of time

    https://github.com/rkern/line_profiler/issues/136#issue-387949622: add a command-line option -u/--unit to specify the unit of time displayed by line_profiler, for ease of use and ease of reading the results. Windows defaults to 1e-7, whereas specifying 1e-6 would be easier to read and mentally parse. The user can also easily change the unit if the runtimes are expected to be long, e.g. seconds.

    For example, to specify a time unit of 1 µs when viewing the output of kernprof ("script.lprof"): python -m line_profiler ./script.lprof -u 1e-6

    opened by ta946 12
  • Requirements do not install, wheel missing

    For some reason (I think related to installing from the .tar.gz instead of the .whl), pip isn't picking up the requirements in line_profiler 3.2, causing GitHub Actions to fail. Compare the logs for 3.1 installation:

    Run pip install line_profiler==3.1
    Collecting line_profiler==3.1
      Downloading line_profiler-3.1.0-cp38-cp38-manylinux2010_x86_64.whl (65 kB)
    Collecting IPython
      Downloading ipython-7.22.0-py3-none-any.whl (785 kB)
    Collecting decorator
      Downloading decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
    Collecting traitlets>=4.2
      Downloading traitlets-5.0.5-py3-none-any.whl (100 kB)
    Collecting pickleshare
      Downloading pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
    [...]
    

    with the logs for 3.2:

    Run pip install line_profiler
    Collecting line_profiler
      Downloading line_profiler-3.2.0.tar.gz (17 kB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
        Preparing wheel metadata: started
        Preparing wheel metadata: finished with status 'done'
    Building wheels for collected packages: line-profiler
    

    I can also reproduce this locally in a clean Conda environment on Ubuntu 20.04:

    finzi:~> conda create -n lp3.1 python=3.8
    finzi:~> conda activate lp3.1
    finzi:~> pip install line_profiler==3.1
    Collecting line_profiler==3.1
      Using cached line_profiler-3.1.0-cp38-cp38-manylinux2010_x86_64.whl (65 kB)
    Collecting IPython
      Downloading ipython-7.22.0-py3-none-any.whl (785 kB)
         |████████████████████████████████| 785 kB 2.3 MB/s 
    Collecting decorator
      Using cached decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
    Collecting pexpect>4.3
      Using cached pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
    Collecting pickleshare
      Using cached pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
    Collecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
      Downloading prompt_toolkit-3.0.18-py3-none-any.whl (367 kB)
         |████████████████████████████████| 367 kB 19.6 MB/s 
    Collecting backcall
      Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB)
    Collecting traitlets>=4.2
      Using cached traitlets-5.0.5-py3-none-any.whl (100 kB)
    Requirement already satisfied: setuptools>=18.5 in /software/anaconda3/envs/tmpb2/lib/python3.8/site-packages (from IPython->line_profiler==3.1) (52.0.0.post20210125)
    Collecting jedi>=0.16
      Using cached jedi-0.18.0-py2.py3-none-any.whl (1.4 MB)
    Collecting pygments
      Using cached Pygments-2.8.1-py3-none-any.whl (983 kB)
    Collecting parso<0.9.0,>=0.8.0
      Using cached parso-0.8.1-py2.py3-none-any.whl (93 kB)
    Collecting ptyprocess>=0.5
      Using cached ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
    Collecting wcwidth
      Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
    Collecting ipython-genutils
      Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
    Installing collected packages: wcwidth, ptyprocess, parso, ipython-genutils, traitlets, pygments, prompt-toolkit, pickleshare, pexpect, jedi, decorator, backcall, IPython, line-profiler
    Successfully installed IPython-7.22.0 backcall-0.2.0 decorator-4.4.2 ipython-genutils-0.2.0 jedi-0.18.0 line-profiler-3.1.0 parso-0.8.1 pexpect-4.8.0 pickleshare-0.7.5 prompt-toolkit-3.0.18 ptyprocess-0.7.0 pygments-2.8.1 traitlets-5.0.5 wcwidth-0.2.5
    

    vs.

    finzi:~> conda create -n lp3.2 python=3.8
    finzi:~> conda activate lp3.2
    finzi:~> pip install line_profiler==3.2
    Collecting line_profiler==3.2
      Using cached line_profiler-3.2.0-py3-none-any.whl
    Installing collected packages: line-profiler
    Successfully installed line-profiler-3.2.0
    
    finzi:~> python
    Python 3.8.8 (default, Feb 24 2021, 21:46:12) 
    [GCC 7.3.0] :: Anaconda, Inc. on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import line_profiler
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/software/anaconda3/envs/tmpbuild/lib/python3.8/site-packages/line_profiler/__init__.py", line 10, in <module>
        from .line_profiler import __version__
      File "/software/anaconda3/envs/tmpbuild/lib/python3.8/site-packages/line_profiler/line_profiler.py", line 23, in <module>
        from IPython.core.magic import (Magics, magics_class, line_magic)
    ModuleNotFoundError: No module named 'IPython'
    

    I can't see any obvious changes in requirements.txt or setup.py that would've caused this, so guessing it has something to do with the wheel uploaded to pypi.

    opened by cliffckerr 11
  • Add Python 3.9 testing and support

    Python 3.9 dropped gettimeofday configure checks

    In bpo-38068, cPython assumes gettimeofday exists and takes two arguments. That's a reasonable assumption that we can repeat.

    See: python/[email protected]

    opened by stefanor 10
  • Make IPython optional

    The only runtime dependency is IPython, which needs many other libraries as well. This makes for a heavy installation for a feature that is completely optional. It was raised before in https://github.com/rkern/line_profiler/issues/112, with some PRs that were never merged, apparently due to lack of time: https://github.com/rkern/line_profiler/pull/139, https://github.com/rkern/line_profiler/pull/114. There are also comments from @rkern that it should be optional: https://github.com/rkern/line_profiler/pull/90. This PR is my take on the subject; since I don't use IPython, I had the motivation to get rid of it. It is still an optional dependency for pip, to be able to check the version: pip install line_profiler[ipython]

    I think that the approach in https://github.com/rkern/line_profiler/pull/139 is better: the IPython import is only made in the load_ipython_extension call. But since LineProfilerMagics is added to the "public API" in __init__.py, I couldn't use it easily. The question is: can it be removed, or is it needed somewhere?

    opened by Nodd 9
  • Use ci build wheel to build more wheels

    This PR implements the GitHub Action from joerick/cibuildwheel to build and test more wheels (the additional wheels are Windows, macOS, and aarch64). It also unifies the workflows python-publish.yml, python-sdist-test.yml and python-test.yml into tests.yml. Having a single workflow to run all tests allows jobs to require other jobs to pass before running, which can save CI time. For example, in this workflow the tests only run if the linting passes, and deployment only runs if all tests pass. Another restriction for deployment to run is that the triggering event has to be the push of a tag (see the example run on my fork's master branch).

    Built files
    $ ls -la dist
    total 2456
    drwxr-xr-x  2 runner docker  4096 May  2 17:08 .
    drwxr-xr-x 10 runner docker  4096 May  2 17:08 ..
    -rw-r--r--  1 runner docker 52085 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-macosx_10_9_x86_64.whl
    -rw-r--r--  1 runner docker 64414 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux1_i686.whl
    -rw-r--r--  1 runner docker 62724 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux1_x86_64.whl
    -rw-r--r--  1 runner docker 64417 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux2010_i686.whl
    -rw-r--r--  1 runner docker 62723 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux2010_x86_64.whl
    -rw-r--r--  1 runner docker 64632 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-manylinux2014_aarch64.whl
    -rw-r--r--  1 runner docker 46505 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-win32.whl
    -rw-r--r--  1 runner docker 50744 May  2 17:08 line_profiler-3.2.6-cp35-cp35m-win_amd64.whl
    -rw-r--r--  1 runner docker 53532 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-macosx_10_9_x86_64.whl
    -rw-r--r--  1 runner docker 65920 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux1_i686.whl
    -rw-r--r--  1 runner docker 64391 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux1_x86_64.whl
    -rw-r--r--  1 runner docker 65922 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux2010_i686.whl
    -rw-r--r--  1 runner docker 64392 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux2010_x86_64.whl
    -rw-r--r--  1 runner docker 65388 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-manylinux2014_aarch64.whl
    -rw-r--r--  1 runner docker 47075 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-win32.whl
    -rw-r--r--  1 runner docker 51285 May  2 17:08 line_profiler-3.2.6-cp36-cp36m-win_amd64.whl
    -rw-r--r--  1 runner docker 52206 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-macosx_10_9_x86_64.whl
    -rw-r--r--  1 runner docker 65042 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux1_i686.whl
    -rw-r--r--  1 runner docker 63822 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux1_x86_64.whl
    -rw-r--r--  1 runner docker 65045 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux2010_i686.whl
    -rw-r--r--  1 runner docker 63824 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux2010_x86_64.whl
    -rw-r--r--  1 runner docker 65082 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-manylinux2014_aarch64.whl
    -rw-r--r--  1 runner docker 47096 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-win32.whl
    -rw-r--r--  1 runner docker 51091 May  2 17:08 line_profiler-3.2.6-cp37-cp37m-win_amd64.whl
    -rw-r--r--  1 runner docker 52964 May  2 17:08 line_profiler-3.2.6-cp38-cp38-macosx_10_9_x86_64.whl
    -rw-r--r--  1 runner docker 67184 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux1_i686.whl
    -rw-r--r--  1 runner docker 65953 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux1_x86_64.whl
    -rw-r--r--  1 runner docker 67188 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux2010_i686.whl
    -rw-r--r--  1 runner docker 65953 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux2010_x86_64.whl
    -rw-r--r--  1 runner docker 67838 May  2 17:08 line_profiler-3.2.6-cp38-cp38-manylinux2014_aarch64.whl
    -rw-r--r--  1 runner docker 47645 May  2 17:08 line_profiler-3.2.6-cp38-cp38-win32.whl
    -rw-r--r--  1 runner docker 52063 May  2 17:08 line_profiler-3.2.6-cp38-cp38-win_amd64.whl
    -rw-r--r--  1 runner docker 53109 May  2 17:08 line_profiler-3.2.6-cp39-cp39-macosx_10_9_x86_64.whl
    -rw-r--r--  1 runner docker 67385 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux1_i686.whl
    -rw-r--r--  1 runner docker 66015 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux1_x86_64.whl
    -rw-r--r--  1 runner docker 67385 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux2010_i686.whl
    -rw-r--r--  1 runner docker 66017 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux2010_x86_64.whl
    -rw-r--r--  1 runner docker 67853 May  2 17:08 line_profiler-3.2.6-cp39-cp39-manylinux2014_aarch64.whl
    -rw-r--r--  1 runner docker 47837 May  2 17:08 line_profiler-3.2.6-cp39-cp39-win32.whl
    -rw-r--r--  1 runner docker 51996 May  2 17:08 line_profiler-3.2.6-cp39-cp39-win_amd64.whl
    -rw-r--r--  1 runner docker 35620 May  2 17:08 line_profiler-3.2.6.tar.gz
    

    See the last CI run before reactivating the branch restriction. Talking about branch restrictions, how about allowing the tests to run on all push events? As a contributor, I like to know that the CI passes before I bother someone with a PR that might fail. With the current restriction on which branches the CI runs on, I only have two options:

    1. Comment out the restriction to later drop that commit
    2. Merge the feature branch into the fork's master over and over again

    Dropping the restrictions would make it easier for contributors to check that all is fine before making a PR.

    Since the Linux wheels are built inside of a Docker image and the paths don't match up with the paths on the host, uploading the coverage is quite hacky. But it works, and maybe someone has a better solution (see). Building the wheels for Linux could of course also be done as it was before, without the hacks for the coverage, but IMHO saving the mental capacity of thinking about and keeping up with changing build requirements (leaving that to a widely used and specialized project) is worth it.

    Added bonus, since kernprof.py isn't spatially tested from source and installation anymore, the coverage went up 8% 😄

    No emojis this time, sorry for that again 😞 .

    closes #63

    opened by s-weigand 9
  • Bump pypa/cibuildwheel from 2.11.2 to 2.11.4

    Bumps pypa/cibuildwheel from 2.11.2 to 2.11.4.

    Release notes

    Sourced from pypa/cibuildwheel's releases.

    v2.11.4

    • 🐛 Fix a bug that caused missing wheels on Windows when a test was skipped using CIBW_TEST_SKIP (#1377)
    • 🛠 Updates CPython 3.11 to 3.11.1 (#1371)
    • 🛠 Updates PyPy 3.7 to 3.7.10, except on macOS which remains on 7.3.9 due to a bug. (#1371)
    • 📚 Added a reference to abi3audit to the docs (#1347)

    v2.11.3

    • ✨ Improves the 'build options' log output that's printed at the start of each run (#1352)
    • ✨ Added a friendly error message to a common misconfiguration of the CIBW_TEST_COMMAND option - not specifying path using the {project} placeholder (#1336)
    • 🛠 The GitHub Action now uses PowerShell on Windows to avoid occasional incompatibilities with bash (#1346)
    Changelog

    Sourced from pypa/cibuildwheel's changelog.

    v2.11.4

    24 Dec 2022

    • 🐛 Fix a bug that caused missing wheels on Windows when a test was skipped using CIBW_TEST_SKIP (#1377)
    • 🛠 Updates CPython 3.11 to 3.11.1 (#1371)
    • 🛠 Updates PyPy to 7.3.10, except on macOS which remains on 7.3.9 due to a bug on that platform. (#1371)
    • 📚 Added a reference to abi3audit to the docs (#1347)

    v2.11.3

    5 Dec 2022

    • ✨ Improves the 'build options' log output that's printed at the start of each run (#1352)
    • ✨ Added a friendly error message to a common misconfiguration of the CIBW_TEST_COMMAND option - not specifying path using the {project} placeholder (#1336)
    • 🛠 The GitHub Action now uses PowerShell on Windows to avoid occasional incompatibilities with bash (#1346)
    Commits
    • 27fc88e Bump version: v2.11.4
    • a7e9ece Merge pull request #1371 from pypa/update-dependencies-pr
    • b9a3ed8 Update cibuildwheel/resources/build-platforms.toml
    • 3dcc2ff fix: not skipping the tests stops the copy (Windows ARM) (#1377)
    • 1c9ec76 Merge pull request #1378 from pypa/henryiii-patch-3
    • 22b433d Merge pull request #1379 from pypa/pre-commit-ci-update-config
    • 98fdf8c [pre-commit.ci] pre-commit autoupdate
    • cefc5a5 Update dependencies
    • e53253d ci: move to ubuntu 20
    • e9ecc65 [pre-commit.ci] pre-commit autoupdate (#1374)
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 1
  • Bump actions/setup-python from 4.3.0 to 4.4.0

    Bumps actions/setup-python from 4.3.0 to 4.4.0.

    Release notes

    Sourced from actions/setup-python's releases.

    Add support to install multiple python versions

    In scope of this release we added support to install multiple python versions. For this you can try to use this snippet:

        - uses: actions/[email protected]
          with:
            python-version: |
                3.8
                3.9
                3.10
    

    Besides, we changed logic with throwing the error for GHES if cache is unavailable to warn (actions/setup-python#566).

    Improve error handling and messages

    In scope of this release we added improved error message to put operating system and its version in the logs (actions/setup-python#559). Besides, the release

    Commits

    dependencies 
    opened by dependabot[bot] 1
  • Draft: mark function to profile from the CLI

    This is a draft that allows profiling some functions without having to modify the source.

    It does so by inserting an import hook that will be active only for the module/function pairs that are passed on the command line; for the given functions it will insert the @profile decorator.

    The code is not the best, but I'm mostly opening this to get feedback as to whether you believe this is an interesting possibility to push forward, or if it is something I should just use locally or publish as a separate package.

    You can try it with for example

    $ python -m kernprof -l  --prof bar:f -v line_profiler/foo.py
    
    $ cat foo.py
    from bar import f, g
    
    def main():
        for i in range(60):
            f()
            g()
    
    main()
    
    $ cat bar.py
    import time
    
    def f():
        5
        time.sleep(0.001)
        7
    
    def g():
        pass
    

    And you will get the following without having to add the decorator.

    Timer unit: 1e-06 s
    
    Total time: 0.081675 s
    File: /Users/bussonniermatthias/dev/line_profiler/bar.py
    Function: f at line 3
    
    Line #      Hits         Time  Per Hit   % Time  Line Contents
    ==============================================================
         3                                           def f():
         4        60         37.0      0.6      0.0      5
         5        60      81574.0   1359.6     99.9      time.sleep(0.001)
         6        60         64.0      1.1      0.1      7
    
    opened by Carreau 0
  • AttributeError: Can't get attribute 'xx' on <module '__main__' from 'path/to/kernprof'>

    I want to profile a method of a class with kernprof, but it seems that pickle is not supported. Any help is appreciated.

    kernprof version: 4.0.1

    Below is the code to reproduce the issue.

    import pickle
    
    class Data:
      pass
    
    class Test:
      @profile
      def a(self):
        data = Data()
        pickle.dump(data, open("tmp.pkl", "wb"))
    
    Test().a()
    

    If we save the code into test.py and call kernprof -lv test.py, the error is:

    _pickle.PicklingError: Can't pickle <class '__main__.Data'>: attribute lookup Data on __main__ failed
    

    It seems that neither pickle.dump() nor pickle.load() is currently supported by line_profiler.

    opened by SysuJayce 2
  • Customizing line_profiler

    I love the module and have been using it for years, for which many thanks.

    A StackOverflow question appeared yesterday asking - wait for it - about type checking (32-bit vs 64-bit) for every function call. Here is the Q:

    https://stackoverflow.com/questions/74516195/detecting-unexpected-type-conversion-in-python

    I suggested line_profiler (or memory_profiler) might be hackable along the lines of https://stackoverflow.com/a/74517678/1021819

    Do you have any advice on whether customizing line_profiler in this way would be feasible, and if so how one might go about it?

    Thanks again!

    opened by jtlz2 2
  • PyPy build error: 'struct PyCodeObject' has no member named 'co_code'

    I'm running into an issue building line_profiler on PyPy 3.8. In particular, I am seeing an error with this line, where co_code appears to be undefined:

    https://github.com/pyutils/line_profiler/blob/83e4332c065d8f8b8bc602b99c908955a11aacc2/line_profiler/_line_profiler.pyx#L20

    Here's the CI build. Also have attached the log file. This affects PyPy 3.9 as well (CI build and attached log file).

    opened by jakirkham 4
Releases (v4.0.2)
  • v4.0.2(Dec 9, 2022)

  • v4.0.1(Nov 16, 2022)

  • v4.0.0(Nov 12, 2022)

    • ENH: Python 3.11 is now supported.
    • ENH: Profiling overhead is now drastically smaller, thanks to reimplementing almost all of the tracing callback in C++. You can expect to see reductions of between 0.3 and 1 microseconds per line hit, resulting in a speedup of up to 4x for codebases with many lines of Python that only do a little work per line.
    • ENH: Added the -i <# of seconds> option to the kernprof script. This uses the threading module to output profiling data to the output file every n seconds, and is useful for long-running tasks that shouldn't be stopped in the middle of processing (see the usage sketch after this list).
    • CHANGE: Cython's native cythonize function is now used to compile the project, instead of scikit-build's convoluted process.
    • CHANGE: Due to optimizations done while reimplementing the callback in C++, the profiler's code_map and last_time attributes now are indexed by a hash of the code block's bytecode and its line number. Any code that directly reads (and processes) or edits the code_map and/or last_time attributes will likely break.
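
    As a usage sketch of the interval option mentioned above (the script name is a placeholder), the following asks kernprof to rewrite the .lprof output roughly every 10 seconds while the script is still running:

    $ kernprof -l -i 10 long_running_script.py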

    Thanks to @Theelx and others for all of their hard work on this!

  • v3.5.1(Apr 1, 2022)

  • v3.5.0(Mar 23, 2022)

    • FIX: #109 kernprof fails to write to stdout if stdout was replaced
    • FIX: Fixes max of an empty sequence error #118
    • Make IPython optional
    • FIX: #100 Exception raise ZeroDivisionError

    Thanks to @Nodd @yarula @ctw

  • v3.4.0(Dec 30, 2021)

    • Drop support for Python <= 3.5.x
    • FIX: #104 issue with new IPython kernels
    • Wheels for musllinux are now included

    Notes

    There was a minor issue in the CI release process. Wheels were produced correctly on the CI, but the upload to pypi step expected them to be in the wheelhouse (not dist) directory. So most wheels were uploaded manually a few hours after the main release. This should be fixed for future releases.

  • v3.3.1(Sep 24, 2021)

    • FIX: Fix bug where lines were not displayed in Jupyter>=6.0 via #93

    • CHANGE: moving forward, new pypi releases will be signed with the GPG key 2A290272C174D28EA9CA48E9D7224DAF0347B114 for PyUtils-CI [email protected]. For reference, older versions were signed with either 262A1DF005BE5D2D5210237C85CD61514641325F or 1636DAF294BA22B89DBB354374F166CFA2F39C18.

  • 3.2.4(Apr 30, 2021)

  • 3.2.3(Apr 30, 2021)

  • 3.0.2(Jan 12, 2020)
