Add lockstat tool

Adding lockstat tool to trace kernel mutex lock events and
display lock statistics. The tool displays the following data:

                                  Caller   Avg Spin  Count   Max spin Total spin
                      psi_avgs_work+0x2e       3675      5       5468      18379
                     flush_to_ldisc+0x22       2833      2       4210       5667
                       n_tty_write+0x30c       3914      1       3914       3914
                               isig+0x5d       2390      1       2390       2390
                   tty_buffer_flush+0x2a       1604      1       1604       1604
                      commit_echoes+0x22       1400      1       1400       1400
          n_tty_receive_buf_common+0x3b9       1399      1       1399       1399

                                  Caller   Avg Hold  Count   Max hold Total hold
                     flush_to_ldisc+0x22      42558      2      76135      85116
                      psi_avgs_work+0x2e      14821      5      20446      74106
          n_tty_receive_buf_common+0x3b9      12300      1      12300      12300
                       n_tty_write+0x30c      10712      1      10712      10712
                               isig+0x5d       3362      1       3362       3362
                   tty_buffer_flush+0x2a       3078      1       3078       3078
                      commit_echoes+0x22       3017      1       3017       3017

Each caller using a kernel mutex is displayed on its own line.

The first portion of lines shows the lock acquisition data: the amount
of time it took to acquire a given lock.

  'Caller'     - symbol acquiring the mutex
  'Avg Spin'   - average time to acquire the mutex
  'Count'      - number of times the mutex was acquired
  'Max spin'   - maximum time to acquire the mutex
  'Total spin' - total time spent acquiring the mutex

The second portion of lines shows the lock holding data: the amount
of time a given lock was held.

  'Caller'     - symbol holding the mutex
  'Avg Hold'   - average time the mutex was held
  'Count'      - number of times the mutex was held
  'Max hold'   - maximum time the mutex was held
  'Total hold' - total time the mutex was held
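
In each table 'Avg' is simply 'Total' divided by 'Count'; for example,
in the spin table psi_avgs_work averages 18379 / 5 ≈ 3675 per
acquisition.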

This works by tracing mutex_lock/unlock kprobes, updating the lock stats
in BPF maps, and processing them in the Python part.
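
A minimal sketch of that mechanism follows, keyed per thread rather than
per caller stack for brevity (function and map names here are
hypothetical, and the real tool does considerably more bookkeeping, such
as stack collection and filtering):

    from time import sleep
    from bcc import BPF

    prog = """
    #include <uapi/linux/ptrace.h>

    struct stats {
        u64 count;
        u64 total;
        u64 max;
    };

    BPF_HASH(start, u32, u64);          // tid -> mutex_lock() entry time
    BPF_HASH(acquired, u32, u64);       // tid -> time the lock was taken
    BPF_HASH(spin, u32, struct stats);  // tid -> acquisition statistics
    BPF_HASH(hold, u32, struct stats);  // tid -> hold statistics

    static inline void account(struct stats *s, u64 delta) {
        s->count++;
        s->total += delta;
        if (delta > s->max)
            s->max = delta;
    }

    // mutex_lock() entry: remember when the caller started spinning
    int lock_enter(struct pt_regs *ctx) {
        u32 tid = bpf_get_current_pid_tgid();
        u64 ts = bpf_ktime_get_ns();
        start.update(&tid, &ts);
        return 0;
    }

    // mutex_lock() return: the mutex is now held, account the spin time
    int lock_exit(struct pt_regs *ctx) {
        u32 tid = bpf_get_current_pid_tgid();
        u64 ts = bpf_ktime_get_ns(), *tsp = start.lookup(&tid);
        struct stats zero = {}, *s;
        if (!tsp)
            return 0;
        s = spin.lookup_or_try_init(&tid, &zero);
        if (s)
            account(s, ts - *tsp);
        acquired.update(&tid, &ts);
        return 0;
    }

    // mutex_unlock() entry: account the hold time
    int unlock_enter(struct pt_regs *ctx) {
        u32 tid = bpf_get_current_pid_tgid();
        u64 ts = bpf_ktime_get_ns(), *tsp = acquired.lookup(&tid);
        struct stats zero = {}, *s;
        if (!tsp)
            return 0;
        s = hold.lookup_or_try_init(&tid, &zero);
        if (s)
            account(s, ts - *tsp);
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event="mutex_lock", fn_name="lock_enter")
    b.attach_kretprobe(event="mutex_lock", fn_name="lock_exit")
    b.attach_kprobe(event="mutex_unlock", fn_name="unlock_enter")

    sleep(5)
    for tid, s in b["spin"].items():
        print("tid %d: count %d, avg spin %d, max spin %d" %
              (tid.value, s.count, s.total / s.count, s.max))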

Examples:
    lockstat                           # trace system wide
    lockstat -d 5                      # trace for 5 seconds only
    lockstat -i 5                      # display stats every 5 seconds
    lockstat -p 123                    # trace locks for PID 123
    lockstat -t 321                    # trace locks for thread 321
    lockstat -c pipe_                  # display stats only for lock callers with 'pipe_' substring
    lockstat -S acq_count              # sort lock acquired results by acquired count
    lockstat -S hld_total              # sort lock held results by total held time
    lockstat -S acq_count,hld_total    # combination of the above
    lockstat -n 3                      # display 3 locks
    lockstat -s 3                      # display 3 levels of stack

Signed-off-by: Jiri Olsa <[email protected]>

BPF Compiler Collection (BCC)

BCC is a toolkit for creating efficient kernel tracing and manipulation programs, and includes several useful tools and examples. It makes use of extended BPF (Berkeley Packet Filter), formally known as eBPF, a new feature that was first added to Linux 3.15. Much of what BCC uses requires Linux 4.1 and above.

eBPF was described by Ingo Molnár as:

One of the more interesting features in this cycle is the ability to attach eBPF programs (user-defined, sandboxed bytecode executed by the kernel) to kprobes. This allows user-defined instrumentation on a live kernel image that can never crash, hang or interfere with the kernel negatively.

BCC makes BPF programs easier to write, with kernel instrumentation in C (and includes a C wrapper around LLVM), and front-ends in Python and Lua. It is suited for many tasks, including performance analysis and network traffic control.
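
The classic hello-world illustrates that split; the following sketch (a kprobe on the clone syscall, as in the standard BCC tutorial) compiles the embedded C with LLVM at runtime and attaches it from Python:

    from bcc import BPF

    # kernel instrumentation written in C, compiled by BCC/LLVM at runtime
    prog = """
    int hello(void *ctx) {
        bpf_trace_printk("clone() called\\n");
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
    b.trace_print()   # stream messages from the kernel trace pipe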

Screenshot

This example traces a disk I/O kernel function, and populates an in-kernel power-of-2 histogram of the I/O size. For efficiency, only the histogram summary is returned to user-level.

# ./bitehist.py
Tracing... Hit Ctrl-C to end.
^C
     kbytes          : count     distribution
       0 -> 1        : 3        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 211      |**********                            |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 1        |                                      |
     128 -> 255      : 800      |**************************************|

The above output shows a bimodal distribution, where the largest mode of 800 I/O was between 128 and 255 Kbytes in size.

See the source: bitehist.py. What this traces, what this stores, and how the data is presented, can be entirely customized. This shows only some of many possible capabilities.
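
In outline, the approach looks like the following sketch (condensed; it assumes blk_account_io_done exists as a probe point on your kernel, which the real bitehist.py handles more carefully):

    from time import sleep
    from bcc import BPF

    b = BPF(text="""
    #include <uapi/linux/ptrace.h>
    #include <linux/blkdev.h>

    BPF_HISTOGRAM(dist);    // in-kernel power-of-2 histogram

    // account each completed block I/O by its size, in kbytes
    int kprobe__blk_account_io_done(struct pt_regs *ctx, struct request *req)
    {
        dist.increment(bpf_log2l(req->__data_len / 1024));
        return 0;
    }
    """)

    print("Tracing... Hit Ctrl-C to end.")
    try:
        sleep(99999999)
    except KeyboardInterrupt:
        print()
    b["dist"].print_log2_hist("kbytes")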

Installing

See INSTALL.md for installation steps on your platform.

FAQ

See FAQ.txt for the most common troubleshooting questions.

Reference guide

See docs/reference_guide.md for the reference guide to the bcc and bcc/BPF APIs.

Contents

The tracing and networking examples, and the tracing tools, are a mix: some are single files that contain both C and Python, others have a pair of .c and .py files, and some are directories of files.
BPF Introspection:

Tools that help to introspect BPF programs.

  • introspection/bps.c: List all BPF programs loaded into the kernel. ‘ps’ for BPF programs. Examples.

Motivation

BPF guarantees that the programs loaded into the kernel cannot crash, and cannot run forever, yet BPF is general purpose enough to perform many arbitrary types of computation. Currently, it is possible to write a program in C that will compile into a valid BPF program, yet it is vastly easier to write a C program that will compile into invalid BPF (C is like that). The user won't know until trying to run the program whether it was valid or not.

With a BPF-specific frontend, one should be able to write in a language and receive feedback from the compiler on the validity as it pertains to a BPF backend. This toolkit aims to provide a frontend that can only create valid BPF programs while still harnessing its full flexibility.
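
To make that concrete, here is a sketch (the probe point and exact verifier message are illustrative) of C that compiles cleanly to bytecode yet is refused by the kernel at load time, because the verifier cannot bound the loop:

    from bcc import BPF

    prog = """
    int kprobe__do_nanosleep(void *ctx)
    {
        int i = 0;
        // valid C, invalid BPF: the verifier cannot prove this
        // loop terminates, so the program is rejected at load time
        while (bpf_get_prandom_u32() != 0)
            i++;
        return i;
    }
    """

    BPF(text=prog)   # fails verification, e.g. with a "back-edge" error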

Furthermore, current integrations with BPF have a kludgy workflow, sometimes involving compiling directly in a Linux kernel source tree. This toolchain aims to minimize the time that a developer spends getting BPF compiled, and instead focus on the applications that can be written and the problems that can be solved with BPF.

The features of this toolkit include:

  • End-to-end BPF workflow in a shared library
    • A modified C language for BPF backends
    • Integration with llvm-bpf backend for JIT
    • Dynamic (un)loading of JITed programs
    • Support for BPF kernel hooks: socket filters, tc classifiers, tc actions, and kprobes
  • Bindings for Python
  • Examples for socket filters, tc classifiers, and kprobes (a socket-filter sketch follows this list)
  • Self-contained tools for tracing a running system
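
The socket-filter hook mentioned above, for example, takes only a few lines to exercise; this sketch (the interface name and the ARP match are placeholders) loads a filter that passes only ARP frames to a raw socket:

    from bcc import BPF

    prog = """
    int keep_arp(struct __sk_buff *skb) {
        // load the ethertype, offset 12 in the Ethernet header
        u16 proto = load_half(skb, 12);
        if (proto == 0x0806)    // ARP
            return -1;          // pass the full packet to the socket
        return 0;               // drop everything else
    }
    """

    b = BPF(text=prog)
    fn = b.load_func("keep_arp", BPF.SOCKET_FILTER)
    BPF.attach_raw_socket(fn, "eth0")   # placeholder interface name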

In the future, more bindings besides Python will likely be supported. Feel free to add support for the language of your choice and send a pull request!

Tutorials

Networking

At Red Hat Summit 2015, BCC was presented as part of a session on BPF. A multi-host vxlan environment is simulated and a BPF program used to monitor one of the physical interfaces. The BPF program keeps statistics on the inner and outer IP addresses traversing the interface, and the userspace component turns those statistics into a graph showing the traffic distribution at multiple granularities. See the code here.

Contributing

Already pumped up to commit some code? Here are some resources to join the discussions in the IOVisor community and see what you want to work on.

External links

Looking for more information on BCC and how it's being used? You can find links to other BCC content on the web in LINKS.md.