Initial check in
Bug: 137197907
diff --git a/src/llvm-project/clang/tools/scan-build-py/README.md b/src/llvm-project/clang/tools/scan-build-py/README.md
new file mode 100644
index 0000000..01e3454
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/README.md
@@ -0,0 +1,146 @@
+scan-build
+==========
+
+A package designed to wrap a build so that all calls to gcc/clang are
+intercepted and logged into a [compilation database][1] and/or piped to
+the clang static analyzer. It includes the intercept-build tool, which
+logs the build, as well as the scan-build tool, which logs the build and
+runs the clang static analyzer on it.
+
+Portability
+-----------
+
+It should work on UNIX-like operating systems.
+
+- It has been tested on FreeBSD, GNU/Linux and OS X.
+- It is prepared to work on Windows, but help is needed to finish the port.
+
+
+Prerequisites
+-------------
+
+1. **python** interpreter (version 2.7, 3.2, 3.3, 3.4, 3.5).
+
+
+How to use
+----------
+
+To run the Clang static analyzer against a project:
+
+ $ scan-build <your build command>
+
+To generate a compilation database file:
+
+ $ intercept-build <your build command>
+
+To run the Clang static analyzer against a project with an existing
+compilation database:
+
+ $ analyze-build
+
+Use `--help` to learn more about these commands.
+
+
+How to use the experimental Cross Translation Unit analysis
+-----------------------------------------------------------
+
+To run the CTU analysis, a compilation database file has to be created:
+
+ $ intercept-build <your build command>
+
+To run the Clang Static Analyzer against a compilation database
+with CTU analysis enabled, execute:
+
+ $ analyze-build --ctu
+
+CTU analysis requires an additional collection phase for external
+definitions. For debugging purposes, it is possible to execute the
+collection and the analysis phases separately. In that case, the
+intermediate files used for the analysis are kept on disk in `./ctu-dir`.
+
+ # Collect and store the data required by the CTU analysis
+ $ analyze-build --ctu-collect-only
+
+ # Analyze using the previously collected data
+ $ analyze-build --ctu-analyze-only
+
+Use `--help` to get more information about the commands.
+
+
+Limitations
+-----------
+
+Generally speaking, the `intercept-build` and `analyze-build` tools together
+do the same job as `scan-build` does. So, you can expect the same output
+from this line as plain `scan-build` would give:
+
+ $ intercept-build <your build command> && analyze-build
+
+The major difference is how and when the analyzer is run. The `scan-build`
+tool has three distinct modes for running the analyzer:
+
+1. Use compiler wrappers to run the analyzer.
+   The compiler wrappers run both the real compiler and the analyzer.
+   This is the default behaviour; it can be enforced with the
+   `--override-compiler` flag.
+
+2. Use a special preloaded library to intercept compiler calls during the
+   build process. The analyzer runs against each module after the build
+   has finished. Use the `--intercept-first` flag to get this mode.
+
+3. Use compiler wrappers to intercept compiler calls during the build
+   process. The analyzer runs against each module after the build has
+   finished. Use the `--intercept-first` and `--override-compiler` flags
+   together to get this mode.
+
+Modes 1 and 3 use compiler wrappers, which work only if the build process
+respects the `CC` and `CXX` environment variables. (Some build processes
+can override these variables only as command line parameters. In that case
+you need to pass the compiler wrappers manually, e.g. `intercept-build
+--override-compiler make CC=intercept-cc CXX=intercept-c++ all` where the
+original build command would have been just `make all`.)
+
+Mode 1 runs the analyzer right after the real compilation. So even if the
+build process removes intermediate modules (generated sources), the analyzer
+output is still kept.
+
+Modes 2 and 3 generate the compilation database first and filter out those
+modules which no longer exist. This makes them suitable for incremental
+analysis during development.
+
+Mode 2 is available only on FreeBSD and Linux, where library preloading is
+supported by the dynamic loader. It is not supported on OS X (unless the
+System Integrity Protection feature is turned off).
+
+The `intercept-build` command uses only modes 2 and 3 to generate the
+compilation database. `analyze-build` only runs the analyzer against the
+captured compiler calls.
+
+
+Known problems
+--------------
+
+Because the interception sets the `LD_PRELOAD` or `DYLD_INSERT_LIBRARIES`
+environment variable, it overrides any existing value rather than appending
+to it. So builds which already use these variables might not work. (I don't
+know of any build tool which does that, but please let me know if you do.)
+
+
+Problem reports
+---------------
+
+If you find a bug in this documentation or elsewhere in the program, or
+would like to propose an improvement, please use the project's
+[issue tracker][3]. Please describe the bug and where you found it. If you
+have a suggestion for how to fix it, include that as well. Patches are also
+welcome.
+
+
+License
+-------
+
+The project is licensed under the University of Illinois/NCSA Open Source
+License. See LICENSE.TXT for details.
+
+ [1]: http://clang.llvm.org/docs/JSONCompilationDatabase.html
+ [2]: https://pypi.python.org/pypi/scan-build
+ [3]: https://llvm.org/bugs/enter_bug.cgi?product=clang
diff --git a/src/llvm-project/clang/tools/scan-build-py/bin/analyze-build b/src/llvm-project/clang/tools/scan-build-py/bin/analyze-build
new file mode 100755
index 0000000..991cff0
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/bin/analyze-build
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import multiprocessing
+multiprocessing.freeze_support()
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import analyze_build
+sys.exit(analyze_build())
diff --git a/src/llvm-project/clang/tools/scan-build-py/bin/analyze-c++ b/src/llvm-project/clang/tools/scan-build-py/bin/analyze-c++
new file mode 100755
index 0000000..df1012d
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/bin/analyze-c++
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import analyze_compiler_wrapper
+sys.exit(analyze_compiler_wrapper())
diff --git a/src/llvm-project/clang/tools/scan-build-py/bin/analyze-cc b/src/llvm-project/clang/tools/scan-build-py/bin/analyze-cc
new file mode 100755
index 0000000..df1012d
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/bin/analyze-cc
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import analyze_compiler_wrapper
+sys.exit(analyze_compiler_wrapper())
diff --git a/src/llvm-project/clang/tools/scan-build-py/bin/intercept-build b/src/llvm-project/clang/tools/scan-build-py/bin/intercept-build
new file mode 100755
index 0000000..2c3a26e
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/bin/intercept-build
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import multiprocessing
+multiprocessing.freeze_support()
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.intercept import intercept_build
+sys.exit(intercept_build())
diff --git a/src/llvm-project/clang/tools/scan-build-py/bin/intercept-c++ b/src/llvm-project/clang/tools/scan-build-py/bin/intercept-c++
new file mode 100755
index 0000000..67e076f
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/bin/intercept-c++
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.intercept import intercept_compiler_wrapper
+sys.exit(intercept_compiler_wrapper())
diff --git a/src/llvm-project/clang/tools/scan-build-py/bin/intercept-cc b/src/llvm-project/clang/tools/scan-build-py/bin/intercept-cc
new file mode 100755
index 0000000..67e076f
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/bin/intercept-cc
@@ -0,0 +1,14 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.intercept import intercept_compiler_wrapper
+sys.exit(intercept_compiler_wrapper())
diff --git a/src/llvm-project/clang/tools/scan-build-py/bin/scan-build b/src/llvm-project/clang/tools/scan-build-py/bin/scan-build
new file mode 100755
index 0000000..f0f3469
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/bin/scan-build
@@ -0,0 +1,17 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import multiprocessing
+multiprocessing.freeze_support()
+
+import sys
+import os.path
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.dirname(this_dir))
+
+from libscanbuild.analyze import scan_build
+sys.exit(scan_build())
diff --git a/src/llvm-project/clang/tools/scan-build-py/libear/__init__.py b/src/llvm-project/clang/tools/scan-build-py/libear/__init__.py
new file mode 100644
index 0000000..421e2e7
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libear/__init__.py
@@ -0,0 +1,260 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module compiles the intercept library. """
+
+import sys
+import os
+import os.path
+import re
+import tempfile
+import shutil
+import contextlib
+import logging
+
+__all__ = ['build_libear']
+
+
+def build_libear(compiler, dst_dir):
+ """ Returns the full path to the 'libear' library. """
+
+ try:
+ src_dir = os.path.dirname(os.path.realpath(__file__))
+ toolset = make_toolset(src_dir)
+ toolset.set_compiler(compiler)
+ toolset.set_language_standard('c99')
+ toolset.add_definitions(['-D_GNU_SOURCE'])
+
+ configure = do_configure(toolset)
+ configure.check_function_exists('execve', 'HAVE_EXECVE')
+ configure.check_function_exists('execv', 'HAVE_EXECV')
+ configure.check_function_exists('execvpe', 'HAVE_EXECVPE')
+ configure.check_function_exists('execvp', 'HAVE_EXECVP')
+ configure.check_function_exists('execvP', 'HAVE_EXECVP2')
+ configure.check_function_exists('exect', 'HAVE_EXECT')
+ configure.check_function_exists('execl', 'HAVE_EXECL')
+ configure.check_function_exists('execlp', 'HAVE_EXECLP')
+ configure.check_function_exists('execle', 'HAVE_EXECLE')
+ configure.check_function_exists('posix_spawn', 'HAVE_POSIX_SPAWN')
+ configure.check_function_exists('posix_spawnp', 'HAVE_POSIX_SPAWNP')
+ configure.check_symbol_exists('_NSGetEnviron', 'crt_externs.h',
+ 'HAVE_NSGETENVIRON')
+ configure.write_by_template(
+ os.path.join(src_dir, 'config.h.in'),
+ os.path.join(dst_dir, 'config.h'))
+
+ target = create_shared_library('ear', toolset)
+ target.add_include(dst_dir)
+ target.add_sources('ear.c')
+ target.link_against(toolset.dl_libraries())
+ target.link_against(['pthread'])
+ target.build_release(dst_dir)
+
+ return os.path.join(dst_dir, target.name)
+
+ except Exception:
+ logging.info("Could not build interception library.", exc_info=True)
+ return None
+
+
+def execute(cmd, *args, **kwargs):
+ """ Make subprocess execution silent. """
+
+ import subprocess
+ kwargs.update({'stdout': subprocess.PIPE, 'stderr': subprocess.STDOUT})
+ return subprocess.check_call(cmd, *args, **kwargs)
+
+
[email protected]
+def TemporaryDirectory(**kwargs):
+ name = tempfile.mkdtemp(**kwargs)
+ try:
+ yield name
+ finally:
+ shutil.rmtree(name)
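This context manager mirrors `tempfile.TemporaryDirectory` from Python 3.2+, carried here as a shim for older interpreters. A minimal usage sketch (the shim is restated so the example is self-contained):

```python
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def TemporaryDirectory(**kwargs):
    # Same shape as the shim above: create the directory, hand it to the
    # caller, and always clean it up when the block exits.
    name = tempfile.mkdtemp(**kwargs)
    try:
        yield name
    finally:
        shutil.rmtree(name)


with TemporaryDirectory(prefix='libear-') as tmp:
    path = os.path.join(tmp, 'check.c')
    with open(path, 'w') as handle:
        handle.write('int main() { return 0; }')
    assert os.path.isfile(path)
# the directory (and everything in it) is gone once the block exits
assert not os.path.isdir(tmp)
```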
+
+
+class Toolset(object):
+ """ Abstract class to represent different toolset. """
+
+ def __init__(self, src_dir):
+ self.src_dir = src_dir
+ self.compiler = None
+ self.c_flags = []
+
+ def set_compiler(self, compiler):
+ """ part of public interface """
+ self.compiler = compiler
+
+ def set_language_standard(self, standard):
+ """ part of public interface """
+ self.c_flags.append('-std=' + standard)
+
+ def add_definitions(self, defines):
+ """ part of public interface """
+ self.c_flags.extend(defines)
+
+ def dl_libraries(self):
+ raise NotImplementedError()
+
+ def shared_library_name(self, name):
+ raise NotImplementedError()
+
+ def shared_library_c_flags(self, release):
+ extra = ['-DNDEBUG', '-O3'] if release else []
+ return extra + ['-fPIC'] + self.c_flags
+
+ def shared_library_ld_flags(self, release, name):
+ raise NotImplementedError()
+
+
+class DarwinToolset(Toolset):
+ def __init__(self, src_dir):
+ Toolset.__init__(self, src_dir)
+
+ def dl_libraries(self):
+ return []
+
+ def shared_library_name(self, name):
+ return 'lib' + name + '.dylib'
+
+ def shared_library_ld_flags(self, release, name):
+ extra = ['-dead_strip'] if release else []
+ return extra + ['-dynamiclib', '-install_name', '@rpath/' + name]
+
+
+class UnixToolset(Toolset):
+ def __init__(self, src_dir):
+ Toolset.__init__(self, src_dir)
+
+ def dl_libraries(self):
+ return []
+
+ def shared_library_name(self, name):
+ return 'lib' + name + '.so'
+
+ def shared_library_ld_flags(self, release, name):
+ extra = [] if release else []
+ return extra + ['-shared', '-Wl,-soname,' + name]
+
+
+class LinuxToolset(UnixToolset):
+ def __init__(self, src_dir):
+ UnixToolset.__init__(self, src_dir)
+
+ def dl_libraries(self):
+ return ['dl']
+
+
+def make_toolset(src_dir):
+ platform = sys.platform
+ if platform in {'win32', 'cygwin'}:
+ raise RuntimeError('not implemented on this platform')
+ elif platform == 'darwin':
+ return DarwinToolset(src_dir)
+ elif platform in {'linux', 'linux2'}:
+ return LinuxToolset(src_dir)
+ else:
+ return UnixToolset(src_dir)
+
+
+class Configure(object):
+ def __init__(self, toolset):
+ self.ctx = toolset
+ self.results = {'APPLE': sys.platform == 'darwin'}
+
+ def _try_to_compile_and_link(self, source):
+ try:
+ with TemporaryDirectory() as work_dir:
+ src_file = 'check.c'
+ with open(os.path.join(work_dir, src_file), 'w') as handle:
+ handle.write(source)
+
+ execute([self.ctx.compiler, src_file] + self.ctx.c_flags,
+ cwd=work_dir)
+ return True
+ except Exception:
+ return False
+
+ def check_function_exists(self, function, name):
+ template = "int FUNCTION(); int main() { return FUNCTION(); }"
+ source = template.replace("FUNCTION", function)
+
+ logging.debug('Checking function %s', function)
+ found = self._try_to_compile_and_link(source)
+ logging.debug('Checking function %s -- %s', function,
+ 'found' if found else 'not found')
+ self.results.update({name: found})
+
+ def check_symbol_exists(self, symbol, include, name):
+ template = """#include <INCLUDE>
+ int main() { return ((int*)(&SYMBOL))[0]; }"""
+ source = template.replace('INCLUDE', include).replace("SYMBOL", symbol)
+
+ logging.debug('Checking symbol %s', symbol)
+ found = self._try_to_compile_and_link(source)
+ logging.debug('Checking symbol %s -- %s', symbol,
+ 'found' if found else 'not found')
+ self.results.update({name: found})
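Each check works by substituting the probed name into a tiny C program and seeing whether it compiles and links. A sketch of the substitution step only (no compiler involved), for `check_function_exists('execve', 'HAVE_EXECVE')`:

```python
# The probe template declares the function and references it from main,
# so linking fails if the symbol does not exist in the C library.
template = "int FUNCTION(); int main() { return FUNCTION(); }"
source = template.replace("FUNCTION", "execve")
print(source)
# -> int execve(); int main() { return execve(); }
```

The check then writes this string to `check.c`, invokes the configured compiler on it, and records the success or failure under the given key in `self.results`.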
+
+ def write_by_template(self, template, output):
+ def transform(line, definitions):
+
+ pattern = re.compile(r'^#cmakedefine\s+(\S+)')
+ m = pattern.match(line)
+ if m:
+ key = m.group(1)
+ if key not in definitions or not definitions[key]:
+ return '/* #undef {0} */{1}'.format(key, os.linesep)
+ else:
+ return '#define {0}{1}'.format(key, os.linesep)
+ return line
+
+ with open(template, 'r') as src_handle:
+ logging.debug('Writing config to %s', output)
+ with open(output, 'w') as dst_handle:
+ for line in src_handle:
+ dst_handle.write(transform(line, self.results))
+
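The `#cmakedefine` handling above mimics CMake's `configure_file`: each such line either becomes a `#define` or is commented out, depending on the recorded check results, and all other lines pass through unchanged. A standalone sketch of the same transform:

```python
import os
import re


def transform(line, definitions):
    # Mirror of the nested helper inside write_by_template above.
    pattern = re.compile(r'^#cmakedefine\s+(\S+)')
    m = pattern.match(line)
    if m:
        key = m.group(1)
        if key not in definitions or not definitions[key]:
            return '/* #undef {0} */{1}'.format(key, os.linesep)
        return '#define {0}{1}'.format(key, os.linesep)
    return line


results = {'HAVE_EXECVE': True, 'HAVE_EXECT': False}
assert transform('#cmakedefine HAVE_EXECVE', results).startswith(
    '#define HAVE_EXECVE')
assert transform('#cmakedefine HAVE_EXECT', results).startswith(
    '/* #undef HAVE_EXECT */')
assert transform('#pragma once', results) == '#pragma once'
```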
+
+def do_configure(toolset):
+ return Configure(toolset)
+
+
+class SharedLibrary(object):
+ def __init__(self, name, toolset):
+ self.name = toolset.shared_library_name(name)
+ self.ctx = toolset
+ self.inc = []
+ self.src = []
+ self.lib = []
+
+ def add_include(self, directory):
+ self.inc.extend(['-I', directory])
+
+ def add_sources(self, source):
+ self.src.append(source)
+
+ def link_against(self, libraries):
+ self.lib.extend(['-l' + lib for lib in libraries])
+
+ def build_release(self, directory):
+ for src in self.src:
+ logging.debug('Compiling %s', src)
+ execute(
+ [self.ctx.compiler, '-c', os.path.join(self.ctx.src_dir, src),
+ '-o', src + '.o'] + self.inc +
+ self.ctx.shared_library_c_flags(True),
+ cwd=directory)
+ logging.debug('Linking %s', self.name)
+ execute(
+ [self.ctx.compiler] + [src + '.o' for src in self.src] +
+ ['-o', self.name] + self.lib +
+ self.ctx.shared_library_ld_flags(True, self.name),
+ cwd=directory)
+
+
+def create_shared_library(name, toolset):
+ return SharedLibrary(name, toolset)
diff --git a/src/llvm-project/clang/tools/scan-build-py/libear/config.h.in b/src/llvm-project/clang/tools/scan-build-py/libear/config.h.in
new file mode 100644
index 0000000..6643d89
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libear/config.h.in
@@ -0,0 +1,23 @@
+/* -*- coding: utf-8 -*-
+// The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
+#pragma once
+
+#cmakedefine HAVE_EXECVE
+#cmakedefine HAVE_EXECV
+#cmakedefine HAVE_EXECVPE
+#cmakedefine HAVE_EXECVP
+#cmakedefine HAVE_EXECVP2
+#cmakedefine HAVE_EXECT
+#cmakedefine HAVE_EXECL
+#cmakedefine HAVE_EXECLP
+#cmakedefine HAVE_EXECLE
+#cmakedefine HAVE_POSIX_SPAWN
+#cmakedefine HAVE_POSIX_SPAWNP
+#cmakedefine HAVE_NSGETENVIRON
+
+#cmakedefine APPLE
diff --git a/src/llvm-project/clang/tools/scan-build-py/libear/ear.c b/src/llvm-project/clang/tools/scan-build-py/libear/ear.c
new file mode 100644
index 0000000..0e7093a
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libear/ear.c
@@ -0,0 +1,605 @@
+/* -*- coding: utf-8 -*-
+// The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
+/**
+ * This file implements a shared library. This library can be pre-loaded by
+ * the dynamic linker of the Operating System (OS). It implements a few
+ * functions related to process creation. By pre-loading this library, the
+ * executed process uses these functions instead of those from the standard
+ * library.
+ *
+ * The idea here is to inject logic before calling the real methods. The
+ * injected logic dumps the call into a file. To call the real method, this
+ * library does the job of the dynamic linker.
+ *
+ * The only input for the log writing is the destination directory, which
+ * is passed as an environment variable.
+ */
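Seen from the caller's side, the mechanism amounts to exporting the preload variable and the report directory before starting the build. A hedged sketch (the helper name and call shape here are illustrative, not the actual `libscanbuild.intercept` API):

```python
import os
import subprocess


def run_intercepted(build_cmd, libear_path, target_dir):
    # Hypothetical helper: inject the interception library and tell it
    # where to drop the per-process report files.
    env = dict(os.environ)
    env['LD_PRELOAD'] = libear_path  # DYLD_INSERT_LIBRARIES on OS X
    env['INTERCEPT_BUILD_TARGET_DIR'] = target_dir
    return subprocess.call(build_cmd, env=env)
```

Note that this sets `LD_PRELOAD` outright rather than appending to any existing value, which is exactly the limitation listed under "Known problems" in the README.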
+
+#include "config.h"
+
+#include <stddef.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <dlfcn.h>
+#include <pthread.h>
+
+#if defined HAVE_POSIX_SPAWN || defined HAVE_POSIX_SPAWNP
+#include <spawn.h>
+#endif
+
+#if defined HAVE_NSGETENVIRON
+# include <crt_externs.h>
+#else
+extern char **environ;
+#endif
+
+#define ENV_OUTPUT "INTERCEPT_BUILD_TARGET_DIR"
+#ifdef APPLE
+# define ENV_FLAT "DYLD_FORCE_FLAT_NAMESPACE"
+# define ENV_PRELOAD "DYLD_INSERT_LIBRARIES"
+# define ENV_SIZE 3
+#else
+# define ENV_PRELOAD "LD_PRELOAD"
+# define ENV_SIZE 2
+#endif
+
+#define DLSYM(TYPE_, VAR_, SYMBOL_) \
+ union { \
+ void *from; \
+ TYPE_ to; \
+ } cast; \
+ if (0 == (cast.from = dlsym(RTLD_NEXT, SYMBOL_))) { \
+ perror("bear: dlsym"); \
+ exit(EXIT_FAILURE); \
+ } \
+ TYPE_ const VAR_ = cast.to;
+
+
+typedef char const * bear_env_t[ENV_SIZE];
+
+static int bear_capture_env_t(bear_env_t *env);
+static int bear_reset_env_t(bear_env_t *env);
+static void bear_release_env_t(bear_env_t *env);
+static char const **bear_update_environment(char *const envp[], bear_env_t *env);
+static char const **bear_update_environ(char const **in, char const *key, char const *value);
+static char **bear_get_environment();
+static void bear_report_call(char const *fun, char const *const argv[]);
+static char const **bear_strings_build(char const *arg, va_list *ap);
+static char const **bear_strings_copy(char const **const in);
+static char const **bear_strings_append(char const **in, char const *e);
+static size_t bear_strings_length(char const *const *in);
+static void bear_strings_release(char const **);
+
+
+static bear_env_t env_names =
+ { ENV_OUTPUT
+ , ENV_PRELOAD
+#ifdef ENV_FLAT
+ , ENV_FLAT
+#endif
+ };
+
+static bear_env_t initial_env =
+ { 0
+ , 0
+#ifdef ENV_FLAT
+ , 0
+#endif
+ };
+
+static int initialized = 0;
+static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
+
+static void on_load(void) __attribute__((constructor));
+static void on_unload(void) __attribute__((destructor));
+
+
+#ifdef HAVE_EXECVE
+static int call_execve(const char *path, char *const argv[],
+ char *const envp[]);
+#endif
+#ifdef HAVE_EXECVP
+static int call_execvp(const char *file, char *const argv[]);
+#endif
+#ifdef HAVE_EXECVPE
+static int call_execvpe(const char *file, char *const argv[],
+ char *const envp[]);
+#endif
+#ifdef HAVE_EXECVP2
+static int call_execvP(const char *file, const char *search_path,
+ char *const argv[]);
+#endif
+#ifdef HAVE_EXECT
+static int call_exect(const char *path, char *const argv[],
+ char *const envp[]);
+#endif
+#ifdef HAVE_POSIX_SPAWN
+static int call_posix_spawn(pid_t *restrict pid, const char *restrict path,
+ const posix_spawn_file_actions_t *file_actions,
+ const posix_spawnattr_t *restrict attrp,
+ char *const argv[restrict],
+ char *const envp[restrict]);
+#endif
+#ifdef HAVE_POSIX_SPAWNP
+static int call_posix_spawnp(pid_t *restrict pid, const char *restrict file,
+ const posix_spawn_file_actions_t *file_actions,
+ const posix_spawnattr_t *restrict attrp,
+ char *const argv[restrict],
+ char *const envp[restrict]);
+#endif
+
+
+/* Initialization method to capture the relevant environment variables.
+ */
+
+static void on_load(void) {
+ pthread_mutex_lock(&mutex);
+ if (!initialized)
+ initialized = bear_capture_env_t(&initial_env);
+ pthread_mutex_unlock(&mutex);
+}
+
+static void on_unload(void) {
+ pthread_mutex_lock(&mutex);
+ bear_release_env_t(&initial_env);
+ initialized = 0;
+ pthread_mutex_unlock(&mutex);
+}
+
+
+/* These are the methods we try to hijack.
+ */
+
+#ifdef HAVE_EXECVE
+int execve(const char *path, char *const argv[], char *const envp[]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ return call_execve(path, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECV
+#ifndef HAVE_EXECVE
+#error can not implement execv without execve
+#endif
+int execv(const char *path, char *const argv[]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ char * const * envp = bear_get_environment();
+ return call_execve(path, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECVPE
+int execvpe(const char *file, char *const argv[], char *const envp[]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ return call_execvpe(file, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECVP
+int execvp(const char *file, char *const argv[]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ return call_execvp(file, argv);
+}
+#endif
+
+#ifdef HAVE_EXECVP2
+int execvP(const char *file, const char *search_path, char *const argv[]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ return call_execvP(file, search_path, argv);
+}
+#endif
+
+#ifdef HAVE_EXECT
+int exect(const char *path, char *const argv[], char *const envp[]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ return call_exect(path, argv, envp);
+}
+#endif
+
+#ifdef HAVE_EXECL
+# ifndef HAVE_EXECVE
+# error can not implement execl without execve
+# endif
+int execl(const char *path, const char *arg, ...) {
+ va_list args;
+ va_start(args, arg);
+ char const **argv = bear_strings_build(arg, &args);
+ va_end(args);
+
+ bear_report_call(__func__, (char const *const *)argv);
+ char * const * envp = bear_get_environment();
+ int const result = call_execve(path, (char *const *)argv, envp);
+
+ bear_strings_release(argv);
+ return result;
+}
+#endif
+
+#ifdef HAVE_EXECLP
+# ifndef HAVE_EXECVP
+# error can not implement execlp without execvp
+# endif
+int execlp(const char *file, const char *arg, ...) {
+ va_list args;
+ va_start(args, arg);
+ char const **argv = bear_strings_build(arg, &args);
+ va_end(args);
+
+ bear_report_call(__func__, (char const *const *)argv);
+ int const result = call_execvp(file, (char *const *)argv);
+
+ bear_strings_release(argv);
+ return result;
+}
+#endif
+
+#ifdef HAVE_EXECLE
+# ifndef HAVE_EXECVE
+# error can not implement execle without execve
+# endif
+// int execle(const char *path, const char *arg, ..., char * const envp[]);
+int execle(const char *path, const char *arg, ...) {
+ va_list args;
+ va_start(args, arg);
+ char const **argv = bear_strings_build(arg, &args);
+ char const **envp = va_arg(args, char const **);
+ va_end(args);
+
+ bear_report_call(__func__, (char const *const *)argv);
+ int const result =
+ call_execve(path, (char *const *)argv, (char *const *)envp);
+
+ bear_strings_release(argv);
+ return result;
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWN
+int posix_spawn(pid_t *restrict pid, const char *restrict path,
+ const posix_spawn_file_actions_t *file_actions,
+ const posix_spawnattr_t *restrict attrp,
+ char *const argv[restrict], char *const envp[restrict]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ return call_posix_spawn(pid, path, file_actions, attrp, argv, envp);
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWNP
+int posix_spawnp(pid_t *restrict pid, const char *restrict file,
+ const posix_spawn_file_actions_t *file_actions,
+ const posix_spawnattr_t *restrict attrp,
+ char *const argv[restrict], char *const envp[restrict]) {
+ bear_report_call(__func__, (char const *const *)argv);
+ return call_posix_spawnp(pid, file, file_actions, attrp, argv, envp);
+}
+#endif
+
+/* These are the methods which forward the call to the standard implementation.
+ */
+
+#ifdef HAVE_EXECVE
+static int call_execve(const char *path, char *const argv[],
+ char *const envp[]) {
+ typedef int (*func)(const char *, char *const *, char *const *);
+
+ DLSYM(func, fp, "execve");
+
+ char const **const menvp = bear_update_environment(envp, &initial_env);
+ int const result = (*fp)(path, argv, (char *const *)menvp);
+ bear_strings_release(menvp);
+ return result;
+}
+#endif
+
+#ifdef HAVE_EXECVPE
+static int call_execvpe(const char *file, char *const argv[],
+ char *const envp[]) {
+ typedef int (*func)(const char *, char *const *, char *const *);
+
+ DLSYM(func, fp, "execvpe");
+
+ char const **const menvp = bear_update_environment(envp, &initial_env);
+ int const result = (*fp)(file, argv, (char *const *)menvp);
+ bear_strings_release(menvp);
+ return result;
+}
+#endif
+
+#ifdef HAVE_EXECVP
+static int call_execvp(const char *file, char *const argv[]) {
+ typedef int (*func)(const char *file, char *const argv[]);
+
+ DLSYM(func, fp, "execvp");
+
+ bear_env_t current_env;
+    bear_capture_env_t(&current_env);
+    bear_reset_env_t(&initial_env);
+    int const result = (*fp)(file, argv);
+    bear_reset_env_t(&current_env);
+    bear_release_env_t(&current_env);
+
+ return result;
+}
+#endif
+
+#ifdef HAVE_EXECVP2
+static int call_execvP(const char *file, const char *search_path,
+ char *const argv[]) {
+ typedef int (*func)(const char *, const char *, char *const *);
+
+ DLSYM(func, fp, "execvP");
+
+ bear_env_t current_env;
+    bear_capture_env_t(&current_env);
+    bear_reset_env_t(&initial_env);
+    int const result = (*fp)(file, search_path, argv);
+    bear_reset_env_t(&current_env);
+    bear_release_env_t(&current_env);
+
+ return result;
+}
+#endif
+
+#ifdef HAVE_EXECT
+static int call_exect(const char *path, char *const argv[],
+ char *const envp[]) {
+ typedef int (*func)(const char *, char *const *, char *const *);
+
+ DLSYM(func, fp, "exect");
+
+ char const **const menvp = bear_update_environment(envp, &initial_env);
+ int const result = (*fp)(path, argv, (char *const *)menvp);
+ bear_strings_release(menvp);
+ return result;
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWN
+static int call_posix_spawn(pid_t *restrict pid, const char *restrict path,
+ const posix_spawn_file_actions_t *file_actions,
+ const posix_spawnattr_t *restrict attrp,
+ char *const argv[restrict],
+ char *const envp[restrict]) {
+ typedef int (*func)(pid_t *restrict, const char *restrict,
+ const posix_spawn_file_actions_t *,
+ const posix_spawnattr_t *restrict,
+ char *const *restrict, char *const *restrict);
+
+ DLSYM(func, fp, "posix_spawn");
+
+ char const **const menvp = bear_update_environment(envp, &initial_env);
+ int const result =
+ (*fp)(pid, path, file_actions, attrp, argv, (char *const *restrict)menvp);
+ bear_strings_release(menvp);
+ return result;
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWNP
+static int call_posix_spawnp(pid_t *restrict pid, const char *restrict file,
+ const posix_spawn_file_actions_t *file_actions,
+ const posix_spawnattr_t *restrict attrp,
+ char *const argv[restrict],
+ char *const envp[restrict]) {
+ typedef int (*func)(pid_t *restrict, const char *restrict,
+ const posix_spawn_file_actions_t *,
+ const posix_spawnattr_t *restrict,
+ char *const *restrict, char *const *restrict);
+
+ DLSYM(func, fp, "posix_spawnp");
+
+ char const **const menvp = bear_update_environment(envp, &initial_env);
+ int const result =
+ (*fp)(pid, file, file_actions, attrp, argv, (char *const *restrict)menvp);
+ bear_strings_release(menvp);
+ return result;
+}
+#endif
+
+/* This method writes a log entry about the process creation. */
+
+static void bear_report_call(char const *fun, char const *const argv[]) {
+ static int const GS = 0x1d;
+ static int const RS = 0x1e;
+ static int const US = 0x1f;
+
+ if (!initialized)
+ return;
+
+ pthread_mutex_lock(&mutex);
+ const char *cwd = getcwd(NULL, 0);
+ if (0 == cwd) {
+ perror("bear: getcwd");
+ exit(EXIT_FAILURE);
+ }
+ char const * const out_dir = initial_env[0];
+ size_t const path_max_length = strlen(out_dir) + 32;
+ char filename[path_max_length];
+ if (-1 == snprintf(filename, path_max_length, "%s/%d.cmd", out_dir, getpid())) {
+ perror("bear: snprintf");
+ exit(EXIT_FAILURE);
+ }
+ FILE * fd = fopen(filename, "a+");
+ if (0 == fd) {
+ perror("bear: fopen");
+ exit(EXIT_FAILURE);
+ }
+ fprintf(fd, "%d%c", getpid(), RS);
+ fprintf(fd, "%d%c", getppid(), RS);
+ fprintf(fd, "%s%c", fun, RS);
+ fprintf(fd, "%s%c", cwd, RS);
+ size_t const argc = bear_strings_length(argv);
+ for (size_t it = 0; it < argc; ++it) {
+ fprintf(fd, "%s%c", argv[it], US);
+ }
+ fprintf(fd, "%c", GS);
+ if (fclose(fd)) {
+ perror("bear: fclose");
+ exit(EXIT_FAILURE);
+ }
+ free((void *)cwd);
+ pthread_mutex_unlock(&mutex);
+}
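
For reference, each `.cmd` record written above is delimited with the ASCII group (0x1d), record (0x1e) and unit (0x1f) separator characters. A minimal Python sketch of a parser for this format (the function name and sample values are hypothetical, not part of this check-in):

```python
# Hypothetical parser for the ".cmd" records emitted by bear_report_call.
# A record is: pid RS ppid RS function RS cwd RS (arg US)* GS, where
# RS = 0x1e, US = 0x1f and GS = 0x1d are ASCII separator characters.
GS, RS, US = '\x1d', '\x1e', '\x1f'

def parse_cmd_record(text):
    """Parse one record into (pid, ppid, function, cwd, argv)."""
    pid, ppid, fun, cwd, args = text.rstrip(GS).split(RS)
    argv = [arg for arg in args.split(US) if arg]  # drop the trailing empty
    return int(pid), int(ppid), fun, cwd, argv
```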
+
+/* Update the environment to ensure that child processes inherit the
+ * desired behaviour. */
+
+static int bear_capture_env_t(bear_env_t *env) {
+ int status = 1;
+ for (size_t it = 0; it < ENV_SIZE; ++it) {
+ char const * const env_value = getenv(env_names[it]);
+ char const * const env_copy = (env_value) ? strdup(env_value) : env_value;
+ (*env)[it] = env_copy;
+ status &= (env_copy) ? 1 : 0;
+ }
+ return status;
+}
+
+static int bear_reset_env_t(bear_env_t *env) {
+ int status = 1;
+ for (size_t it = 0; it < ENV_SIZE; ++it) {
+ if ((*env)[it]) {
+ setenv(env_names[it], (*env)[it], 1);
+ } else {
+ unsetenv(env_names[it]);
+ }
+ }
+ return status;
+}
+
+static void bear_release_env_t(bear_env_t *env) {
+ for (size_t it = 0; it < ENV_SIZE; ++it) {
+ free((void *)(*env)[it]);
+ (*env)[it] = 0;
+ }
+}
+
+static char const **bear_update_environment(char *const envp[], bear_env_t *env) {
+ char const **result = bear_strings_copy((char const **)envp);
+ for (size_t it = 0; it < ENV_SIZE && (*env)[it]; ++it)
+ result = bear_update_environ(result, env_names[it], (*env)[it]);
+ return result;
+}
+
+static char const **bear_update_environ(char const *envs[], char const *key, char const * const value) {
+ // find the key if it's there
+ size_t const key_length = strlen(key);
+ char const **it = envs;
+ for (; (it) && (*it); ++it) {
+ if ((0 == strncmp(*it, key, key_length)) &&
+ (strlen(*it) > key_length) && ('=' == (*it)[key_length]))
+ break;
+ }
+ // allocate an environment entry
+ size_t const value_length = strlen(value);
+ size_t const env_length = key_length + value_length + 2;
+ char *env = malloc(env_length);
+ if (0 == env) {
+ perror("bear: malloc [in env_update]");
+ exit(EXIT_FAILURE);
+ }
+ if (-1 == snprintf(env, env_length, "%s=%s", key, value)) {
+ perror("bear: snprintf");
+ exit(EXIT_FAILURE);
+ }
+ // replace or append the environment entry
+ if (it && *it) {
+ free((void *)*it);
+ *it = env;
+ return envs;
+ }
+ return bear_strings_append(envs, env);
+}
+
+static char **bear_get_environment() {
+#if defined HAVE_NSGETENVIRON
+ return *_NSGetEnviron();
+#else
+ return environ;
+#endif
+}
+
+/* Utility functions to deal with string arrays. The environment and process
+ * arguments are both represented as string arrays. */
+
+static char const **bear_strings_build(char const *const arg, va_list *args) {
+ char const **result = 0;
+ size_t size = 0;
+ for (char const *it = arg; it; it = va_arg(*args, char const *)) {
+ result = realloc(result, (size + 1) * sizeof(char const *));
+ if (0 == result) {
+ perror("bear: realloc");
+ exit(EXIT_FAILURE);
+ }
+ char const *copy = strdup(it);
+ if (0 == copy) {
+ perror("bear: strdup");
+ exit(EXIT_FAILURE);
+ }
+ result[size++] = copy;
+ }
+ result = realloc(result, (size + 1) * sizeof(char const *));
+ if (0 == result) {
+ perror("bear: realloc");
+ exit(EXIT_FAILURE);
+ }
+ result[size++] = 0;
+
+ return result;
+}
+
+static char const **bear_strings_copy(char const **const in) {
+ size_t const size = bear_strings_length(in);
+
+ char const **const result = malloc((size + 1) * sizeof(char const *));
+ if (0 == result) {
+ perror("bear: malloc");
+ exit(EXIT_FAILURE);
+ }
+
+ char const **out_it = result;
+ for (char const *const *in_it = in; (in_it) && (*in_it);
+ ++in_it, ++out_it) {
+ *out_it = strdup(*in_it);
+ if (0 == *out_it) {
+ perror("bear: strdup");
+ exit(EXIT_FAILURE);
+ }
+ }
+ *out_it = 0;
+ return result;
+}
+
+static char const **bear_strings_append(char const **const in,
+ char const *const e) {
+ size_t size = bear_strings_length(in);
+ char const **result = realloc(in, (size + 2) * sizeof(char const *));
+ if (0 == result) {
+ perror("bear: realloc");
+ exit(EXIT_FAILURE);
+ }
+ result[size++] = e;
+ result[size++] = 0;
+ return result;
+}
+
+static size_t bear_strings_length(char const *const *const in) {
+ size_t result = 0;
+ for (char const *const *it = in; (it) && (*it); ++it)
+ ++result;
+ return result;
+}
+
+static void bear_strings_release(char const **in) {
+ for (char const *const *it = in; (it) && (*it); ++it) {
+ free((void *)*it);
+ }
+ free((void *)in);
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/__init__.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/__init__.py
new file mode 100644
index 0000000..903207c
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/__init__.py
@@ -0,0 +1,208 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is a collection of methods commonly used in this project. """
+import collections
+import functools
+import json
+import logging
+import os
+import os.path
+import re
+import shlex
+import subprocess
+import sys
+
+ENVIRONMENT_KEY = 'INTERCEPT_BUILD'
+
+Execution = collections.namedtuple('Execution', ['pid', 'cwd', 'cmd'])
+
+CtuConfig = collections.namedtuple('CtuConfig', ['collect', 'analyze', 'dir',
+ 'extdef_map_cmd'])
+
+
+def duplicate_check(method):
+ """ Predicate to detect duplicated entries.
+
+ A unique hash method can be used to detect duplicates. Entries are
+ represented as dictionaries, which have no default hash method.
+ This implementation uses a set datatype to store the unique hash values.
+
+ This function returns a predicate which can detect the duplicate values. """
+
+ def predicate(entry):
+ entry_hash = predicate.unique(entry)
+ if entry_hash not in predicate.state:
+ predicate.state.add(entry_hash)
+ return False
+ return True
+
+ predicate.unique = method
+ predicate.state = set()
+ return predicate
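
The closure-based predicate above keeps its set of seen hashes as a function attribute. A small usage sketch (the hash function below is a hypothetical example, not part of this check-in):

```python
def duplicate_check(method):
    """Return a predicate that reports whether an entry was seen before."""
    def predicate(entry):
        entry_hash = predicate.unique(entry)
        if entry_hash not in predicate.state:
            predicate.state.add(entry_hash)
            return False
        return True

    predicate.unique = method
    predicate.state = set()
    return predicate

# Hypothetical hash: identify a compilation entry by its file and directory.
is_duplicate = duplicate_check(lambda entry: (entry['file'], entry['directory']))
```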
+
+
+def run_build(command, *args, **kwargs):
+ """ Run and report build command execution
+
+ :param command: array of tokens
+ :return: exit code of the process
+ """
+ environment = kwargs.get('env', os.environ)
+ logging.debug('run build %s, in environment: %s', command, environment)
+ exit_code = subprocess.call(command, *args, **kwargs)
+ logging.debug('build finished with exit code: %d', exit_code)
+ return exit_code
+
+
+def run_command(command, cwd=None):
+ """ Run a given command and report the execution.
+
+ :param command: array of tokens
+ :param cwd: the working directory where the command will be executed
+ :return: output of the command
+ """
+ def decode_when_needed(result):
+ """ check_output returns bytes or string depending on the Python version """
+ return result.decode('utf-8') if isinstance(result, bytes) else result
+
+ try:
+ directory = os.path.abspath(cwd) if cwd else os.getcwd()
+ logging.debug('exec command %s in %s', command, directory)
+ output = subprocess.check_output(command,
+ cwd=directory,
+ stderr=subprocess.STDOUT)
+ return decode_when_needed(output).splitlines()
+ except subprocess.CalledProcessError as ex:
+ ex.output = decode_when_needed(ex.output).splitlines()
+ raise ex
+
+
+def reconfigure_logging(verbose_level):
+ """ Reconfigure logging level and format based on the verbose flag.
+
+ :param verbose_level: number of `-v` flags received by the command
+ :return: no return value
+ """
+ # Exit when nothing to do.
+ if verbose_level == 0:
+ return
+
+ root = logging.getLogger()
+ # Tune logging level.
+ level = logging.WARNING - min(logging.WARNING, (10 * verbose_level))
+ root.setLevel(level)
+ # Be verbose with messages.
+ if verbose_level <= 3:
+ fmt_string = '%(name)s: %(levelname)s: %(message)s'
+ else:
+ fmt_string = '%(name)s: %(levelname)s: %(funcName)s: %(message)s'
+ handler = logging.StreamHandler(sys.stdout)
+ handler.setFormatter(logging.Formatter(fmt=fmt_string))
+ root.handlers = [handler]
+
+
+def command_entry_point(function):
+ """ Decorator for command entry methods.
+
+ The decorator initializes/shuts down logging and guards against
+ programming errors (catches exceptions).
+
+ The decorated method can have arbitrary parameters; the return value
+ will be the exit code of the process. """
+
+ @functools.wraps(function)
+ def wrapper(*args, **kwargs):
+ """ Do housekeeping tasks and execute the wrapped method. """
+
+ try:
+ logging.basicConfig(format='%(name)s: %(message)s',
+ level=logging.WARNING,
+ stream=sys.stdout)
+ # This is a hack to get the executable name as %(name).
+ logging.getLogger().name = os.path.basename(sys.argv[0])
+ return function(*args, **kwargs)
+ except KeyboardInterrupt:
+ logging.warning('Keyboard interrupt')
+ return 130 # Signal received exit code for bash.
+ except Exception:
+ logging.exception('Internal error.')
+ if logging.getLogger().isEnabledFor(logging.DEBUG):
+ logging.error("Please report this bug and attach the output "
+ "to the bug report")
+ else:
+ logging.error("Please run this command again and turn on "
+ "verbose mode (add '-vvvv' as argument).")
+ return 64 # An otherwise unused exit code for internal errors.
+ finally:
+ logging.shutdown()
+
+ return wrapper
+
+
+def compiler_wrapper(function):
+ """ Implements compiler wrapper base functionality.
+
+ A compiler wrapper executes the real compiler, then implements some
+ extra functionality, then returns with the real compiler's exit code.
+
+ :param function: the extra functionality that the wrapper wants to
+ do on top of the compiler call. If it throws an exception, it will be
+ caught and logged.
+ :return: the exit code of the real compiler.
+
+ The :param function: will receive the following arguments:
+
+ :param result: the exit code of the compilation.
+ :param execution: the command executed by the wrapper. """
+
+ def is_cxx_compiler():
+ """ Determine whether this was a C++ compiler call. Compiler wrapper
+ names contain the compiler type. C++ compiler wrappers end with `c++`,
+ but might have an `.exe` extension on Windows. """
+
+ wrapper_command = os.path.basename(sys.argv[0])
+ return re.match(r'(.+)c\+\+(.*)', wrapper_command)
+
+ def run_compiler(executable):
+ """ Execute compilation with the real compiler. """
+
+ command = executable + sys.argv[1:]
+ logging.debug('compilation: %s', command)
+ result = subprocess.call(command)
+ logging.debug('compilation exit code: %d', result)
+ return result
+
+ # Get relevant parameters from environment.
+ parameters = json.loads(os.environ[ENVIRONMENT_KEY])
+ reconfigure_logging(parameters['verbose'])
+ # Execute the requested compilation. Crash if anything goes wrong.
+ cxx = is_cxx_compiler()
+ compiler = parameters['cxx'] if cxx else parameters['cc']
+ result = run_compiler(compiler)
+ # Call the wrapped method and ignore its return value.
+ try:
+ call = Execution(
+ pid=os.getpid(),
+ cwd=os.getcwd(),
+ cmd=['c++' if cxx else 'cc'] + sys.argv[1:])
+ function(result, call)
+ except Exception:
+ logging.exception('Compiler wrapper failed to complete.')
+ finally:
+ # Always return the real compiler exit code.
+ return result
+
+
+def wrapper_environment(args):
+ """ Set up the environment for the interposed compiler wrapper. """
+
+ return {
+ ENVIRONMENT_KEY: json.dumps({
+ 'verbose': args.verbose,
+ 'cc': shlex.split(args.cc),
+ 'cxx': shlex.split(args.cxx)
+ })
+ }
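
The wrapper settings travel to the child compiler processes as a single JSON blob in the environment; `compiler_wrapper` above reads them back with `json.loads`. A round-trip sketch (the `Args` namespace below is a hypothetical stand-in for the parsed command-line arguments):

```python
import json
import shlex

ENVIRONMENT_KEY = 'INTERCEPT_BUILD'

class Args:
    # Hypothetical stand-in for the argparse namespace.
    verbose = 2
    cc = 'clang -m32'
    cxx = 'clang++ -m32'

def wrapper_environment(args):
    """Serialize the wrapper settings into a single environment entry."""
    return {
        ENVIRONMENT_KEY: json.dumps({
            'verbose': args.verbose,
            'cc': shlex.split(args.cc),
            'cxx': shlex.split(args.cxx)
        })
    }

# What compiler_wrapper does on the other side of the exec boundary:
parameters = json.loads(wrapper_environment(Args)[ENVIRONMENT_KEY])
```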
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/analyze.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/analyze.py
new file mode 100644
index 0000000..ab8ea62
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/analyze.py
@@ -0,0 +1,781 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module implements the 'scan-build' command API.
+
+Running the static analyzer against a build is done in multiple steps:
+
+ -- Intercept: capture the compilation command during the build,
+ -- Analyze: run the analyzer against the captured commands,
+ -- Report: create a report from the analyzer outputs. """
+
+import re
+import os
+import os.path
+import json
+import logging
+import multiprocessing
+import tempfile
+import functools
+import subprocess
+import contextlib
+import datetime
+import shutil
+import glob
+from collections import defaultdict
+
+from libscanbuild import command_entry_point, compiler_wrapper, \
+ wrapper_environment, run_build, run_command, CtuConfig
+from libscanbuild.arguments import parse_args_for_scan_build, \
+ parse_args_for_analyze_build
+from libscanbuild.intercept import capture
+from libscanbuild.report import document
+from libscanbuild.compilation import split_command, classify_source, \
+ compiler_language
+from libscanbuild.clang import get_version, get_arguments, get_triple_arch
+from libscanbuild.shell import decode
+
+__all__ = ['scan_build', 'analyze_build', 'analyze_compiler_wrapper']
+
+COMPILER_WRAPPER_CC = 'analyze-cc'
+COMPILER_WRAPPER_CXX = 'analyze-c++'
+
+CTU_EXTDEF_MAP_FILENAME = 'externalDefMap.txt'
+CTU_TEMP_DEFMAP_FOLDER = 'tmpExternalDefMaps'
+
+
+@command_entry_point
+def scan_build():
+ """ Entry point for scan-build command. """
+
+ args = parse_args_for_scan_build()
+ # will re-assign the report directory as new output
+ with report_directory(args.output, args.keep_empty) as args.output:
+ # Run against a build command. There are cases when an analyzer run
+ # is not required, but we still need to set up everything for the
+ # wrappers, because 'configure' needs to capture the CC/CXX values
+ # for the Makefile.
+ if args.intercept_first:
+ # Run build command with intercept module.
+ exit_code = capture(args)
+ # Run the analyzer against the captured commands.
+ if need_analyzer(args.build):
+ govern_analyzer_runs(args)
+ else:
+ # Run build command and analyzer with compiler wrappers.
+ environment = setup_environment(args)
+ exit_code = run_build(args.build, env=environment)
+ # Cover report generation and bug counting.
+ number_of_bugs = document(args)
+ # Set exit status as it was requested.
+ return number_of_bugs if args.status_bugs else exit_code
+
+
+@command_entry_point
+def analyze_build():
+ """ Entry point for analyze-build command. """
+
+ args = parse_args_for_analyze_build()
+ # will re-assign the report directory as new output
+ with report_directory(args.output, args.keep_empty) as args.output:
+ # Run the analyzer against a compilation db.
+ govern_analyzer_runs(args)
+ # Cover report generation and bug counting.
+ number_of_bugs = document(args)
+ # Set exit status as it was requested.
+ return number_of_bugs if args.status_bugs else 0
+
+
+def need_analyzer(args):
+ """ Check the intent of the build command.
+
+ When the static analyzer is run against a project's configure step, it
+ should be silent: there is no need to run the analyzer or generate a
+ report.
+
+ Running `scan-build` against the configure step might be necessary
+ when compiler wrappers are used. That's the moment when the build setup
+ checks the compiler and captures its location for the build process. """
+
+ return len(args) and not re.search('configure|autogen', args[0])
+
+
+def prefix_with(constant, pieces):
+ """ From a sequence create another sequence where every second element
+ is from the original sequence and the odd elements are the prefix.
+
+ e.g.: prefix_with(0, [1,2,3]) creates [0, 1, 0, 2, 0, 3] """
+
+ return [elem for piece in pieces for elem in [constant, piece]]
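
The interleaving above can be checked directly; the same helper later turns analyzer options into `-Xclang`-prefixed pairs. A self-contained sketch (the plugin file names are made-up examples):

```python
def prefix_with(constant, pieces):
    """Interleave the constant before every element of the sequence."""
    return [elem for piece in pieces for elem in [constant, piece]]

# Example: plugin paths become repeated '-load' arguments.
flags = prefix_with('-load', ['alpha.so', 'beta.so'])
```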
+
+
+def get_ctu_config_from_args(args):
+ """ CTU configuration is created from the chosen phases and dir. """
+
+ return (
+ CtuConfig(collect=args.ctu_phases.collect,
+ analyze=args.ctu_phases.analyze,
+ dir=args.ctu_dir,
+ extdef_map_cmd=args.extdef_map_cmd)
+ if hasattr(args, 'ctu_phases') and hasattr(args.ctu_phases, 'dir')
+ else CtuConfig(collect=False, analyze=False, dir='', extdef_map_cmd=''))
+
+
+def get_ctu_config_from_json(ctu_conf_json):
+ """ CTU configuration is created from the chosen phases and dir. """
+
+ ctu_config = json.loads(ctu_conf_json)
+ # Recover namedtuple from json when coming from analyze-cc or analyze-c++
+ return CtuConfig(collect=ctu_config[0],
+ analyze=ctu_config[1],
+ dir=ctu_config[2],
+ extdef_map_cmd=ctu_config[3])
+
+
+def create_global_ctu_extdef_map(extdef_map_lines):
+ """ Takes iterator of individual external definition maps and creates a
+ global map keeping only unique names. We leave conflicting names out of
+ CTU.
+
+ :param extdef_map_lines: Contains the id of a definition (mangled name) and
+ the originating source (the corresponding AST file) name.
+ :type extdef_map_lines: Iterator of str.
+ :returns: Mangled name - AST file pairs.
+ :rtype: List of (str, str) tuples.
+ """
+
+ mangled_to_asts = defaultdict(set)
+
+ for line in extdef_map_lines:
+ mangled_name, ast_file = line.strip().split(' ', 1)
+ mangled_to_asts[mangled_name].add(ast_file)
+
+ mangled_ast_pairs = []
+
+ for mangled_name, ast_files in mangled_to_asts.items():
+ if len(ast_files) == 1:
+ mangled_ast_pairs.append((mangled_name, next(iter(ast_files))))
+
+ return mangled_ast_pairs
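
The conflict-dropping behaviour above can be seen on a small input. A self-contained sketch of the same logic (the mangled names below are made-up examples):

```python
from collections import defaultdict

def create_global_ctu_extdef_map(extdef_map_lines):
    """Keep only mangled names that map to exactly one AST file."""
    mangled_to_asts = defaultdict(set)
    for line in extdef_map_lines:
        mangled_name, ast_file = line.strip().split(' ', 1)
        mangled_to_asts[mangled_name].add(ast_file)
    return [(name, next(iter(asts)))
            for name, asts in mangled_to_asts.items()
            if len(asts) == 1]

lines = [
    '_Z3foov ast/a.c.ast',
    '_Z3barv ast/b.c.ast',
    '_Z3foov ast/c.c.ast',  # second definition of _Z3foov: conflicting, dropped
]
```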
+
+
+def merge_ctu_extdef_maps(ctudir):
+ """ Merge individual external definition maps into a global one.
+
+ As the collect phase runs parallel on multiple threads, all compilation
+ units are separately mapped into a temporary file in CTU_TEMP_DEFMAP_FOLDER.
+ These definition maps contain the mangled names and the source
+ (AST generated from the source) which had their definition.
+ These files should be merged at the end into a global map file:
+ CTU_EXTDEF_MAP_FILENAME."""
+
+ def generate_extdef_map_lines(extdefmap_dir):
+ """ Iterate over all lines of input files in a determined order. """
+
+ files = glob.glob(os.path.join(extdefmap_dir, '*'))
+ files.sort()
+ for filename in files:
+ with open(filename, 'r') as in_file:
+ for line in in_file:
+ yield line
+
+ def write_global_map(arch, mangled_ast_pairs):
+ """ Write (mangled name, ast file) pairs into final file. """
+
+ extern_defs_map_file = os.path.join(ctudir, arch,
+ CTU_EXTDEF_MAP_FILENAME)
+ with open(extern_defs_map_file, 'w') as out_file:
+ for mangled_name, ast_file in mangled_ast_pairs:
+ out_file.write('%s %s\n' % (mangled_name, ast_file))
+
+ triple_arches = glob.glob(os.path.join(ctudir, '*'))
+ for triple_path in triple_arches:
+ if os.path.isdir(triple_path):
+ triple_arch = os.path.basename(triple_path)
+ extdefmap_dir = os.path.join(ctudir, triple_arch,
+ CTU_TEMP_DEFMAP_FOLDER)
+
+ extdef_map_lines = generate_extdef_map_lines(extdefmap_dir)
+ mangled_ast_pairs = create_global_ctu_extdef_map(extdef_map_lines)
+ write_global_map(triple_arch, mangled_ast_pairs)
+
+ # Remove all temporary files
+ shutil.rmtree(extdefmap_dir, ignore_errors=True)
+
+
+def run_analyzer_parallel(args):
+ """ Runs the analyzer against the given compilation database. """
+
+ def exclude(filename):
+ """ Return true when any excluded directory prefixes the filename. """
+ return any(re.match(r'^' + directory, filename)
+ for directory in args.excludes)
+
+ consts = {
+ 'clang': args.clang,
+ 'output_dir': args.output,
+ 'output_format': args.output_format,
+ 'output_failures': args.output_failures,
+ 'direct_args': analyzer_params(args),
+ 'force_debug': args.force_debug,
+ 'ctu': get_ctu_config_from_args(args)
+ }
+
+ logging.debug('run analyzer against compilation database')
+ with open(args.cdb, 'r') as handle:
+ generator = (dict(cmd, **consts)
+ for cmd in json.load(handle) if not exclude(cmd['file']))
+ # when verbose output is requested, execute sequentially
+ pool = multiprocessing.Pool(1 if args.verbose > 2 else None)
+ for current in pool.imap_unordered(run, generator):
+ if current is not None:
+ # display error message from the static analyzer
+ for line in current['error_output']:
+ logging.info(line.rstrip())
+ pool.close()
+ pool.join()
+
+
+def govern_analyzer_runs(args):
+ """ Governs multiple runs in CTU mode or runs once in normal mode. """
+
+ ctu_config = get_ctu_config_from_args(args)
+ # If we do a CTU collect (1st phase) we remove all previous collection
+ # data first.
+ if ctu_config.collect:
+ shutil.rmtree(ctu_config.dir, ignore_errors=True)
+
+ # If the user asked for a collect (1st) and analyze (2nd) phase, we do an
+ # all-in-one run where we deliberately remove collection data before and
+ # also after the run. If the user asks only for a single phase data is
+ # left so multiple analyze runs can use the same data gathered by a single
+ # collection run.
+ if ctu_config.collect and ctu_config.analyze:
+ # CTU strings are coming from args.ctu_dir and extdef_map_cmd,
+ # so we can leave it empty
+ args.ctu_phases = CtuConfig(collect=True, analyze=False,
+ dir='', extdef_map_cmd='')
+ run_analyzer_parallel(args)
+ merge_ctu_extdef_maps(ctu_config.dir)
+ args.ctu_phases = CtuConfig(collect=False, analyze=True,
+ dir='', extdef_map_cmd='')
+ run_analyzer_parallel(args)
+ shutil.rmtree(ctu_config.dir, ignore_errors=True)
+ else:
+ # Single runs (collect or analyze) are launched from here.
+ run_analyzer_parallel(args)
+ if ctu_config.collect:
+ merge_ctu_extdef_maps(ctu_config.dir)
+
+
+def setup_environment(args):
+ """ Set up environment for build command to interpose compiler wrapper. """
+
+ environment = dict(os.environ)
+ environment.update(wrapper_environment(args))
+ environment.update({
+ 'CC': COMPILER_WRAPPER_CC,
+ 'CXX': COMPILER_WRAPPER_CXX,
+ 'ANALYZE_BUILD_CLANG': args.clang if need_analyzer(args.build) else '',
+ 'ANALYZE_BUILD_REPORT_DIR': args.output,
+ 'ANALYZE_BUILD_REPORT_FORMAT': args.output_format,
+ 'ANALYZE_BUILD_REPORT_FAILURES': 'yes' if args.output_failures else '',
+ 'ANALYZE_BUILD_PARAMETERS': ' '.join(analyzer_params(args)),
+ 'ANALYZE_BUILD_FORCE_DEBUG': 'yes' if args.force_debug else '',
+ 'ANALYZE_BUILD_CTU': json.dumps(get_ctu_config_from_args(args))
+ })
+ return environment
+
+
+@command_entry_point
+def analyze_compiler_wrapper():
+ """ Entry point for `analyze-cc` and `analyze-c++` compiler wrappers. """
+
+ return compiler_wrapper(analyze_compiler_wrapper_impl)
+
+
+def analyze_compiler_wrapper_impl(result, execution):
+ """ Implements analyzer compiler wrapper functionality. """
+
+ # don't run the analyzer when the compilation fails or when it's not requested
+ if result or not os.getenv('ANALYZE_BUILD_CLANG'):
+ return
+
+ # check whether this is a compilation
+ compilation = split_command(execution.cmd)
+ if compilation is None:
+ return
+ # collect the needed parameters from environment, crash when missing
+ parameters = {
+ 'clang': os.getenv('ANALYZE_BUILD_CLANG'),
+ 'output_dir': os.getenv('ANALYZE_BUILD_REPORT_DIR'),
+ 'output_format': os.getenv('ANALYZE_BUILD_REPORT_FORMAT'),
+ 'output_failures': os.getenv('ANALYZE_BUILD_REPORT_FAILURES'),
+ 'direct_args': os.getenv('ANALYZE_BUILD_PARAMETERS',
+ '').split(' '),
+ 'force_debug': os.getenv('ANALYZE_BUILD_FORCE_DEBUG'),
+ 'directory': execution.cwd,
+ 'command': [execution.cmd[0], '-c'] + compilation.flags,
+ 'ctu': get_ctu_config_from_json(os.getenv('ANALYZE_BUILD_CTU'))
+ }
+ # call static analyzer against the compilation
+ for source in compilation.files:
+ parameters.update({'file': source})
+ logging.debug('analyzer parameters %s', parameters)
+ current = run(parameters)
+ # display error message from the static analyzer
+ if current is not None:
+ for line in current['error_output']:
+ logging.info(line.rstrip())
+
+
[email protected]
+def report_directory(hint, keep):
+ """ Responsible for the report directory.
+
+ hint -- could specify the parent directory of the output directory.
+ keep -- a boolean value to keep or delete the empty report directory. """
+
+ stamp_format = 'scan-build-%Y-%m-%d-%H-%M-%S-%f-'
+ stamp = datetime.datetime.now().strftime(stamp_format)
+ parent_dir = os.path.abspath(hint)
+ if not os.path.exists(parent_dir):
+ os.makedirs(parent_dir)
+ name = tempfile.mkdtemp(prefix=stamp, dir=parent_dir)
+
+ logging.info('Report directory created: %s', name)
+
+ try:
+ yield name
+ finally:
+ if os.listdir(name):
+ msg = "Run 'scan-view %s' to examine bug reports."
+ keep = True
+ else:
+ if keep:
+ msg = "Report directory '%s' contains no report, but is kept."
+ else:
+ msg = "Removing directory '%s' because it contains no report."
+ logging.warning(msg, name)
+
+ if not keep:
+ os.rmdir(name)
+
+
+def analyzer_params(args):
+ """ A group of command line arguments can be mapped to command
+ line arguments of the analyzer. This method generates those. """
+
+ result = []
+
+ if args.store_model:
+ result.append('-analyzer-store={0}'.format(args.store_model))
+ if args.constraints_model:
+ result.append('-analyzer-constraints={0}'.format(
+ args.constraints_model))
+ if args.internal_stats:
+ result.append('-analyzer-stats')
+ if args.analyze_headers:
+ result.append('-analyzer-opt-analyze-headers')
+ if args.stats:
+ result.append('-analyzer-checker=debug.Stats')
+ if args.maxloop:
+ result.extend(['-analyzer-max-loop', str(args.maxloop)])
+ if args.output_format:
+ result.append('-analyzer-output={0}'.format(args.output_format))
+ if args.analyzer_config:
+ result.extend(['-analyzer-config', args.analyzer_config])
+ if args.verbose >= 4:
+ result.append('-analyzer-display-progress')
+ if args.plugins:
+ result.extend(prefix_with('-load', args.plugins))
+ if args.enable_checker:
+ checkers = ','.join(args.enable_checker)
+ result.extend(['-analyzer-checker', checkers])
+ if args.disable_checker:
+ checkers = ','.join(args.disable_checker)
+ result.extend(['-analyzer-disable-checker', checkers])
+
+ return prefix_with('-Xclang', result)
+
+
+def require(required):
+ """ Decorator for checking the required values in state.
+
+ It checks the required attributes in the passed state and stops when
+ any of those is missing. """
+
+ def decorator(function):
+ @functools.wraps(function)
+ def wrapper(*args, **kwargs):
+ for key in required:
+ if key not in args[0]:
+ raise KeyError('{0} not passed to {1}'.format(
+ key, function.__name__))
+
+ return function(*args, **kwargs)
+
+ return wrapper
+
+ return decorator
+
+
+@require(['command', # entry from compilation database
+ 'directory', # entry from compilation database
+ 'file', # entry from compilation database
+ 'clang', # clang executable name (and path)
+ 'direct_args', # arguments from command line
+ 'force_debug', # kill non debug macros
+ 'output_dir', # where generated report files shall go
+ 'output_format', # it's 'plist', 'html', both or plist-multi-file
+ 'output_failures', # generate crash reports or not
+ 'ctu']) # ctu control options
+def run(opts):
+ """ Entry point to run (or not) static analyzer against a single entry
+ of the compilation database.
+
+ This complex task is decomposed into smaller methods which call
+ each other in a chain. If the analysis is not possible, the given
+ method just returns and breaks the chain.
+
+ The passed parameter is a Python dictionary. Each method first checks
+ that the needed parameters were received. (This is done by the 'require'
+ decorator. It's like an 'assert' to check the contract between the
+ caller and the called method.) """
+
+ try:
+ command = opts.pop('command')
+ command = command if isinstance(command, list) else decode(command)
+ logging.debug("Run analyzer against '%s'", command)
+ opts.update(classify_parameters(command))
+
+ return arch_check(opts)
+ except Exception:
+ logging.error("Problem occurred during analysis.", exc_info=1)
+ return None
+
+
+@require(['clang', 'directory', 'flags', 'file', 'output_dir', 'language',
+ 'error_output', 'exit_code'])
+def report_failure(opts):
+ """ Create report when analyzer failed.
+
+ The major report is the preprocessor output. The output filename is
+ generated randomly. The compiler output is also captured into a
+ '.stderr.txt' file, and some more execution context is saved into an
+ '.info.txt' file. """
+
+ def extension():
+ """ Generate preprocessor file extension. """
+
+ mapping = {'objective-c++': '.mii', 'objective-c': '.mi', 'c++': '.ii'}
+ return mapping.get(opts['language'], '.i')
+
+ def destination():
+ """ Creates the failures directory if it does not exist yet. """
+
+ failures_dir = os.path.join(opts['output_dir'], 'failures')
+ if not os.path.isdir(failures_dir):
+ os.makedirs(failures_dir)
+ return failures_dir
+
+ # Classify the error type: when Clang is terminated by a signal, it's a
+ # 'Crash'. (The Python subprocess Popen.returncode is negative when the
+ # child is terminated by a signal.) Everything else is 'Other Error'.
+ error = 'crash' if opts['exit_code'] < 0 else 'other_error'
+ # Create preprocessor output file name. (This is blindly following the
+ # Perl implementation.)
+ (handle, name) = tempfile.mkstemp(suffix=extension(),
+ prefix='clang_' + error + '_',
+ dir=destination())
+ os.close(handle)
+ # Execute Clang again, but run the syntax check only.
+ cwd = opts['directory']
+ cmd = get_arguments(
+ [opts['clang'], '-fsyntax-only', '-E'
+ ] + opts['flags'] + [opts['file'], '-o', name], cwd)
+ run_command(cmd, cwd=cwd)
+ # write general information about the crash
+ with open(name + '.info.txt', 'w') as handle:
+ handle.write(opts['file'] + os.linesep)
+ handle.write(error.title().replace('_', ' ') + os.linesep)
+ handle.write(' '.join(cmd) + os.linesep)
+ handle.write(' '.join(os.uname()) + os.linesep)
+ handle.write(get_version(opts['clang']))
+ # write the captured output too
+ with open(name + '.stderr.txt', 'w') as handle:
+ handle.writelines(opts['error_output'])
+
+
+@require(['clang', 'directory', 'flags', 'direct_args', 'file', 'output_dir',
+ 'output_format'])
+def run_analyzer(opts, continuation=report_failure):
+ """ Assembles the analysis command line and executes it. Captures the
+ output of the analysis and returns it. If failure reports are
+ requested, it calls the continuation to generate them. """
+
+ def target():
+ """ Creates output file name for reports. """
+ if opts['output_format'] in {
+ 'plist',
+ 'plist-html',
+ 'plist-multi-file'}:
+ (handle, name) = tempfile.mkstemp(prefix='report-',
+ suffix='.plist',
+ dir=opts['output_dir'])
+ os.close(handle)
+ return name
+ return opts['output_dir']
+
+ try:
+ cwd = opts['directory']
+ cmd = get_arguments([opts['clang'], '--analyze'] +
+ opts['direct_args'] + opts['flags'] +
+ [opts['file'], '-o', target()],
+ cwd)
+ output = run_command(cmd, cwd=cwd)
+ return {'error_output': output, 'exit_code': 0}
+ except subprocess.CalledProcessError as ex:
+ result = {'error_output': ex.output, 'exit_code': ex.returncode}
+ if opts.get('output_failures', False):
+ opts.update(result)
+ continuation(opts)
+ return result
+
+
+def extdef_map_list_src_to_ast(extdef_src_list):
+ """ Turns a textual external definition map list with source files into an
+ external definition map list with AST files. """
+
+ extdef_ast_list = []
+ for extdef_src_txt in extdef_src_list:
+ mangled_name, path = extdef_src_txt.split(" ", 1)
+ # Normalize path on windows as well
+ path = os.path.splitdrive(path)[1]
+ # Make relative path out of absolute
+ path = path[1:] if path[0] == os.sep else path
+ ast_path = os.path.join("ast", path + ".ast")
+ extdef_ast_list.append(mangled_name + " " + ast_path)
+ return extdef_ast_list
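
On a POSIX system the rewrite above maps an absolute source path to a relative AST path under `ast/`. A self-contained sketch of the same transformation (the mangled name is a made-up example; the POSIX path separator is assumed):

```python
import os

def extdef_map_list_src_to_ast(extdef_src_list):
    """Rewrite 'mangled-name source-path' lines to point at AST files."""
    extdef_ast_list = []
    for extdef_src_txt in extdef_src_list:
        mangled_name, path = extdef_src_txt.split(' ', 1)
        path = os.path.splitdrive(path)[1]              # normalize on Windows
        path = path[1:] if path[0] == os.sep else path  # make relative
        extdef_ast_list.append(mangled_name + ' ' +
                               os.path.join('ast', path + '.ast'))
    return extdef_ast_list
```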
+
+
+@require(['clang', 'directory', 'flags', 'direct_args', 'file', 'ctu'])
+def ctu_collect_phase(opts):
+ """ Preprocess source by generating all data needed by CTU analysis. """
+
+ def generate_ast(triple_arch):
+ """ Generates ASTs for the current compilation command. """
+
+ args = opts['direct_args'] + opts['flags']
+ ast_joined_path = os.path.join(opts['ctu'].dir, triple_arch, 'ast',
+ os.path.realpath(opts['file'])[1:] +
+ '.ast')
+ ast_path = os.path.abspath(ast_joined_path)
+ ast_dir = os.path.dirname(ast_path)
+ if not os.path.isdir(ast_dir):
+ try:
+ os.makedirs(ast_dir)
+ except OSError:
+ # In case another process already created it.
+ pass
+ ast_command = [opts['clang'], '-emit-ast']
+ ast_command.extend(args)
+ ast_command.append('-w')
+ ast_command.append(opts['file'])
+ ast_command.append('-o')
+ ast_command.append(ast_path)
+ logging.debug("Generating AST using '%s'", ast_command)
+ run_command(ast_command, cwd=opts['directory'])
+
+ def map_extdefs(triple_arch):
+ """ Generate external definition map file for the current source. """
+
+ args = opts['direct_args'] + opts['flags']
+ extdefmap_command = [opts['ctu'].extdef_map_cmd]
+ extdefmap_command.append(opts['file'])
+ extdefmap_command.append('--')
+ extdefmap_command.extend(args)
+ logging.debug("Generating external definition map using '%s'",
+ extdefmap_command)
+ extdef_src_list = run_command(extdefmap_command, cwd=opts['directory'])
+ extdef_ast_list = extdef_map_list_src_to_ast(extdef_src_list)
+ extern_defs_map_folder = os.path.join(opts['ctu'].dir, triple_arch,
+ CTU_TEMP_DEFMAP_FOLDER)
+ if not os.path.isdir(extern_defs_map_folder):
+ try:
+ os.makedirs(extern_defs_map_folder)
+ except OSError:
+ # In case another process already created it.
+ pass
+ if extdef_ast_list:
+ with tempfile.NamedTemporaryFile(mode='w',
+ dir=extern_defs_map_folder,
+ delete=False) as out_file:
+ out_file.write("\n".join(extdef_ast_list) + "\n")
+
+ cwd = opts['directory']
+ cmd = [opts['clang'], '--analyze'] + opts['direct_args'] + opts['flags'] \
+ + [opts['file']]
+ triple_arch = get_triple_arch(cmd, cwd)
+ generate_ast(triple_arch)
+ map_extdefs(triple_arch)
+
+
+@require(['ctu'])
+def dispatch_ctu(opts, continuation=run_analyzer):
+ """ Execute only one phase of 2 phases of CTU if needed. """
+
+ ctu_config = opts['ctu']
+
+ if ctu_config.collect or ctu_config.analyze:
+ assert ctu_config.collect != ctu_config.analyze
+ if ctu_config.collect:
+ return ctu_collect_phase(opts)
+ if ctu_config.analyze:
+ cwd = opts['directory']
+ cmd = [opts['clang'], '--analyze'] + opts['direct_args'] \
+ + opts['flags'] + [opts['file']]
+ triarch = get_triple_arch(cmd, cwd)
+ ctu_options = ['ctu-dir=' + os.path.join(ctu_config.dir, triarch),
+ 'experimental-enable-naive-ctu-analysis=true']
+ analyzer_options = prefix_with('-analyzer-config', ctu_options)
+ direct_options = prefix_with('-Xanalyzer', analyzer_options)
+ opts['direct_args'].extend(direct_options)
+
+ return continuation(opts)
+
+
+@require(['flags', 'force_debug'])
+def filter_debug_flags(opts, continuation=dispatch_ctu):
+ """ Filter out nondebug macros when requested. """
+
+ if opts.pop('force_debug'):
+ # lazy implementation: just append an undefine macro at the end
+ opts.update({'flags': opts['flags'] + ['-UNDEBUG']})
+
+ return continuation(opts)
+
+
+@require(['language', 'compiler', 'file', 'flags'])
+def language_check(opts, continuation=filter_debug_flags):
+ """ Find out the language from command line parameters or file name
+ extension. The decision also influenced by the compiler invocation. """
+
+ accepted = frozenset({
+ 'c', 'c++', 'objective-c', 'objective-c++', 'c-cpp-output',
+ 'c++-cpp-output', 'objective-c-cpp-output'
+ })
+
+ # language can be given as a parameter...
+ language = opts.pop('language')
+ compiler = opts.pop('compiler')
+ # ... or find out from source file extension
+ if language is None and compiler is not None:
+ language = classify_source(opts['file'], compiler == 'c')
+
+ if language is None:
+ logging.debug('skip analysis, language not known')
+ return None
+ elif language not in accepted:
+ logging.debug('skip analysis, language not supported')
+ return None
+ else:
+ logging.debug('analysis, language: %s', language)
+ opts.update({'language': language,
+ 'flags': ['-x', language] + opts['flags']})
+ return continuation(opts)
+
+
+@require(['arch_list', 'flags'])
+def arch_check(opts, continuation=language_check):
+ """ Do run analyzer through one of the given architectures. """
+
+ disabled = frozenset({'ppc', 'ppc64'})
+
+ received_list = opts.pop('arch_list')
+ if received_list:
+ # filter out disabled architectures and -arch switches
+ filtered_list = [a for a in received_list if a not in disabled]
+ if filtered_list:
+ # There should be only one arch given (or the same one multiple
+ # times). Multiple, different archs should not change the
+ # pre-processing step, and that is the only pass we have before
+ # running the analyzer.
+ current = filtered_list.pop()
+ logging.debug('analysis, on arch: %s', current)
+
+ opts.update({'flags': ['-arch', current] + opts['flags']})
+ return continuation(opts)
+ else:
+ logging.debug('skip analysis, no supported arch found')
+ return None
+ else:
+ logging.debug('analysis, on default arch')
+ return continuation(opts)
+
+
+# To get good results from the static analyzer, certain compiler options
+# shall be omitted. The compiler flag filtering only affects the static
+# analyzer run.
+#
+# Keys are the option names; values are the number of subsequent arguments
+# to skip.
+IGNORED_FLAGS = {
+ '-c': 0, # compile option will be overwritten
+ '-fsyntax-only': 0, # static analyzer option will be overwritten
+ '-o': 1, # will set up own output file
+ # flags below are inherited from the Perl implementation.
+ '-g': 0,
+ '-save-temps': 0,
+ '-install_name': 1,
+ '-exported_symbols_list': 1,
+ '-current_version': 1,
+ '-compatibility_version': 1,
+ '-init': 1,
+ '-e': 1,
+ '-seg1addr': 1,
+ '-bundle_loader': 1,
+ '-multiply_defined': 1,
+ '-sectorder': 3,
+ '--param': 1,
+ '--serialize-diagnostics': 1
+}
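The skip counts above are consumed by walking the argument list with an explicit iterator, so a flag and its operands are dropped together. A minimal sketch of that pattern, using a reduced table of our own choosing rather than the full one:

```python
# Reduced skip-count table, for illustration only.
IGNORED = {'-o': 1, '-g': 0, '-sectorder': 3}

def filter_flags(arguments):
    kept = []
    args = iter(arguments)
    for arg in args:
        if arg in IGNORED:
            # Swallow the flag's operands too, by advancing the iterator.
            for _ in range(IGNORED[arg]):
                next(args)
        else:
            kept.append(arg)
    return kept
```

So `filter_flags(['-g', '-O2', '-o', 'out', 'main.c'])` keeps only `['-O2', 'main.c']`.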
+
+
+def classify_parameters(command):
+ """ Prepare compiler flags (filters some and add others) and take out
+ language (-x) and architecture (-arch) flags for future processing. """
+
+ result = {
+ 'flags': [], # the filtered compiler flags
+ 'arch_list': [], # list of architecture flags
+ 'language': None, # compilation language, None, if not specified
+ 'compiler': compiler_language(command) # 'c' or 'c++'
+ }
+
+ # iterate on the compile options
+ args = iter(command[1:])
+ for arg in args:
+ # take arch flags into a separate basket
+ if arg == '-arch':
+ result['arch_list'].append(next(args))
+ # take language
+ elif arg == '-x':
+ result['language'] = next(args)
+ # parameters which look like source files are not flags
+ elif re.match(r'^[^-].+', arg) and classify_source(arg):
+ pass
+ # ignore some flags
+ elif arg in IGNORED_FLAGS:
+ count = IGNORED_FLAGS[arg]
+ for _ in range(count):
+ next(args)
+ # drop flags that enable extra warnings, but keep the -Wno- flags
+ # that suppress warnings we don't want to see.
+ elif re.match(r'^-W.+', arg) and not re.match(r'^-Wno-.+', arg):
+ pass
+ # and consider everything else as compilation flag.
+ else:
+ result['flags'].append(arg)
+
+ return result
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/arguments.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/arguments.py
new file mode 100644
index 0000000..58c56d2
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/arguments.py
@@ -0,0 +1,502 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module parses and validates arguments for command-line interfaces.
+
+It uses the argparse module to create the command-line parser. (This library
+has been in the standard Python library since 3.2 and was backported to 2.7,
+but not earlier.)
+
+It also implements basic validation methods related to the commands.
+Validation mostly means calling specific help methods or mangling values.
+"""
+from __future__ import absolute_import, division, print_function
+
+import os
+import sys
+import argparse
+import logging
+import tempfile
+from libscanbuild import reconfigure_logging, CtuConfig
+from libscanbuild.clang import get_checkers, is_ctu_capable
+
+__all__ = ['parse_args_for_intercept_build', 'parse_args_for_analyze_build',
+ 'parse_args_for_scan_build']
+
+
+def parse_args_for_intercept_build():
+ """ Parse and validate command-line arguments for intercept-build. """
+
+ parser = create_intercept_parser()
+ args = parser.parse_args()
+
+ reconfigure_logging(args.verbose)
+ logging.debug('Raw arguments %s', sys.argv)
+
+ # short validation logic
+ if not args.build:
+ parser.error(message='missing build command')
+
+ logging.debug('Parsed arguments: %s', args)
+ return args
+
+
+def parse_args_for_analyze_build():
+ """ Parse and validate command-line arguments for analyze-build. """
+
+ from_build_command = False
+ parser = create_analyze_parser(from_build_command)
+ args = parser.parse_args()
+
+ reconfigure_logging(args.verbose)
+ logging.debug('Raw arguments %s', sys.argv)
+
+ normalize_args_for_analyze(args, from_build_command)
+ validate_args_for_analyze(parser, args, from_build_command)
+ logging.debug('Parsed arguments: %s', args)
+ return args
+
+
+def parse_args_for_scan_build():
+ """ Parse and validate command-line arguments for scan-build. """
+
+ from_build_command = True
+ parser = create_analyze_parser(from_build_command)
+ args = parser.parse_args()
+
+ reconfigure_logging(args.verbose)
+ logging.debug('Raw arguments %s', sys.argv)
+
+ normalize_args_for_analyze(args, from_build_command)
+ validate_args_for_analyze(parser, args, from_build_command)
+ logging.debug('Parsed arguments: %s', args)
+ return args
+
+
+def normalize_args_for_analyze(args, from_build_command):
+ """ Normalize parsed arguments for analyze-build and scan-build.
+
+ :param args: Parsed argument object. (Will be mutated.)
+ :param from_build_command: Boolean value telling whether the command is
+ supposed to run the analyzer against a build command or a compilation
+ database. """
+
+ # make plugins always a list. (it might be None when not specified.)
+ if args.plugins is None:
+ args.plugins = []
+
+ # make exclude directory list unique and absolute.
+ uniq_excludes = set(os.path.abspath(entry) for entry in args.excludes)
+ args.excludes = list(uniq_excludes)
+
+ # because code is shared between all the tools, some commonly used
+ # methods expect certain arguments to be present. so, instead of
+ # querying the args object about the presence of a flag, we fake it
+ # here to make those methods more readable. (it's an arguable choice,
+ # taken only for flags which have a good default value.)
+ if from_build_command:
+ # add cdb parameter invisibly to make report module working.
+ args.cdb = 'compile_commands.json'
+
+ # Make ctu_dir an abspath as it is needed inside clang
+ if not from_build_command and hasattr(args, 'ctu_phases') \
+ and hasattr(args.ctu_phases, 'dir'):
+ args.ctu_dir = os.path.abspath(args.ctu_dir)
+
+
+def validate_args_for_analyze(parser, args, from_build_command):
+ """ Command line parsing is done by the argparse module, but semantic
+ validation still needs to be done. This method does it for the
+ analyze-build and scan-build commands.
+
+ :param parser: The command line parser object.
+ :param args: Parsed argument object.
+ :param from_build_command: Boolean value telling whether the command is
+ supposed to run the analyzer against a build command or a compilation
+ database.
+ :return: No return value, but this call might throw when validation
+ fails. """
+
+ if args.help_checkers_verbose:
+ print_checkers(get_checkers(args.clang, args.plugins))
+ parser.exit(status=0)
+ elif args.help_checkers:
+ print_active_checkers(get_checkers(args.clang, args.plugins))
+ parser.exit(status=0)
+ elif from_build_command and not args.build:
+ parser.error(message='missing build command')
+ elif not from_build_command and not os.path.exists(args.cdb):
+ parser.error(message='compilation database is missing')
+
+ # If the user wants CTU mode
+ if not from_build_command and hasattr(args, 'ctu_phases') \
+ and hasattr(args.ctu_phases, 'dir'):
+ # If CTU analyze_only, the input directory should exist
+ if args.ctu_phases.analyze and not args.ctu_phases.collect \
+ and not os.path.exists(args.ctu_dir):
+ parser.error(message='missing CTU directory')
+ # Check CTU capability via checking clang-extdef-mapping
+ if not is_ctu_capable(args.extdef_map_cmd):
+ parser.error(message="""This version of clang does not support CTU
+ functionality or clang-extdef-mapping command not found.""")
+
+
+def create_intercept_parser():
+ """ Creates a parser for command-line arguments to 'intercept'. """
+
+ parser = create_default_parser()
+ parser_add_cdb(parser)
+
+ parser_add_prefer_wrapper(parser)
+ parser_add_compilers(parser)
+
+ advanced = parser.add_argument_group('advanced options')
+ group = advanced.add_mutually_exclusive_group()
+ group.add_argument(
+ '--append',
+ action='store_true',
+ help="""Extend existing compilation database with new entries.
+ Duplicate entries are detected and not present in the final output.
+ The output is not continuously updated, it's done when the build
+ command finished. """)
+
+ parser.add_argument(
+ dest='build', nargs=argparse.REMAINDER, help="""Command to run.""")
+ return parser
+
+
+def create_analyze_parser(from_build_command):
+ """ Creates a parser for command-line arguments to 'analyze'. """
+
+ parser = create_default_parser()
+
+ if from_build_command:
+ parser_add_prefer_wrapper(parser)
+ parser_add_compilers(parser)
+
+ parser.add_argument(
+ '--intercept-first',
+ action='store_true',
+ help="""Run the build commands first, intercept compiler
+ calls and then run the static analyzer afterwards.
+ Generally speaking it has better coverage on build commands.
+ With '--override-compiler' it use compiler wrapper, but does
+ not run the analyzer till the build is finished.""")
+ else:
+ parser_add_cdb(parser)
+
+ parser.add_argument(
+ '--status-bugs',
+ action='store_true',
+ help="""The exit status of '%(prog)s' is the same as the executed
+ build command. This option ignores the build exit status and sets to
+ be non zero if it found potential bugs or zero otherwise.""")
+ parser.add_argument(
+ '--exclude',
+ metavar='<directory>',
+ dest='excludes',
+ action='append',
+ default=[],
+ help="""Do not run static analyzer against files found in this
+ directory. (You can specify this option multiple times.)
+ Could be useful when project contains 3rd party libraries.""")
+
+ output = parser.add_argument_group('output control options')
+ output.add_argument(
+ '--output',
+ '-o',
+ metavar='<path>',
+ default=tempfile.gettempdir(),
+ help="""Specifies the output directory for analyzer reports.
+ A subdirectory will be created if the default directory is targeted.""")
+ output.add_argument(
+ '--keep-empty',
+ action='store_true',
+ help="""Don't remove the build results directory even if no issues
+ were reported.""")
+ output.add_argument(
+ '--html-title',
+ metavar='<title>',
+ help="""Specify the title used on generated HTML pages.
+ If not specified, a default title will be used.""")
+ format_group = output.add_mutually_exclusive_group()
+ format_group.add_argument(
+ '--plist',
+ '-plist',
+ dest='output_format',
+ const='plist',
+ default='html',
+ action='store_const',
+ help="""Cause the results as a set of .plist files.""")
+ format_group.add_argument(
+ '--plist-html',
+ '-plist-html',
+ dest='output_format',
+ const='plist-html',
+ default='html',
+ action='store_const',
+ help="""Cause the results as a set of .html and .plist files.""")
+ format_group.add_argument(
+ '--plist-multi-file',
+ '-plist-multi-file',
+ dest='output_format',
+ const='plist-multi-file',
+ default='html',
+ action='store_const',
+ help="""Cause the results as a set of .plist files with extra
+ information on related files.""")
+
+ advanced = parser.add_argument_group('advanced options')
+ advanced.add_argument(
+ '--use-analyzer',
+ metavar='<path>',
+ dest='clang',
+ default='clang',
+ help="""'%(prog)s' uses the 'clang' executable relative to itself for
+ static analysis. One can override this behavior with this option by
+ using the 'clang' packaged with Xcode (on OS X) or from the PATH.""")
+ advanced.add_argument(
+ '--no-failure-reports',
+ '-no-failure-reports',
+ dest='output_failures',
+ action='store_false',
+ help="""Do not create a 'failures' subdirectory that includes analyzer
+ crash reports and preprocessed source files.""")
+ parser.add_argument(
+ '--analyze-headers',
+ action='store_true',
+ help="""Also analyze functions in #included files. By default, such
+ functions are skipped unless they are called by functions within the
+ main source file.""")
+ advanced.add_argument(
+ '--stats',
+ '-stats',
+ action='store_true',
+ help="""Generates visitation statistics for the project.""")
+ advanced.add_argument(
+ '--internal-stats',
+ action='store_true',
+ help="""Generate internal analyzer statistics.""")
+ advanced.add_argument(
+ '--maxloop',
+ '-maxloop',
+ metavar='<loop count>',
+ type=int,
+ help="""Specify the number of times a block can be visited before
+ giving up. Increase for more comprehensive coverage at a cost of
+ speed.""")
+ advanced.add_argument(
+ '--store',
+ '-store',
+ metavar='<model>',
+ dest='store_model',
+ choices=['region', 'basic'],
+ help="""Specify the store model used by the analyzer. 'region'
+ specifies a field- sensitive store model. 'basic' which is far less
+ precise but can more quickly analyze code. 'basic' was the default
+ store model for checker-0.221 and earlier.""")
+ advanced.add_argument(
+ '--constraints',
+ '-constraints',
+ metavar='<model>',
+ dest='constraints_model',
+ choices=['range', 'basic'],
+ help="""Specify the constraint engine used by the analyzer. Specifying
+ 'basic' uses a simpler, less powerful constraint model used by
+ checker-0.160 and earlier.""")
+ advanced.add_argument(
+ '--analyzer-config',
+ '-analyzer-config',
+ metavar='<options>',
+ help="""Provide options to pass through to the analyzer's
+ -analyzer-config flag. Several options are separated with comma:
+ 'key1=val1,key2=val2'
+
+ Available options:
+ stable-report-filename=true or false (default)
+
+ Switch the page naming to:
+ report-<filename>-<function/method name>-<id>.html
+ instead of report-XXXXXX.html""")
+ advanced.add_argument(
+ '--force-analyze-debug-code',
+ dest='force_debug',
+ action='store_true',
+ help="""Tells analyzer to enable assertions in code even if they were
+ disabled during compilation, enabling more precise results.""")
+
+ plugins = parser.add_argument_group('checker options')
+ plugins.add_argument(
+ '--load-plugin',
+ '-load-plugin',
+ metavar='<plugin library>',
+ dest='plugins',
+ action='append',
+ help="""Loading external checkers using the clang plugin interface.""")
+ plugins.add_argument(
+ '--enable-checker',
+ '-enable-checker',
+ metavar='<checker name>',
+ action=AppendCommaSeparated,
+ help="""Enable specific checker.""")
+ plugins.add_argument(
+ '--disable-checker',
+ '-disable-checker',
+ metavar='<checker name>',
+ action=AppendCommaSeparated,
+ help="""Disable specific checker.""")
+ plugins.add_argument(
+ '--help-checkers',
+ action='store_true',
+ help="""A default group of checkers is run unless explicitly disabled.
+ Exactly which checkers constitute the default group is a function of
+ the operating system in use. These can be printed with this flag.""")
+ plugins.add_argument(
+ '--help-checkers-verbose',
+ action='store_true',
+ help="""Print all available checkers and mark the enabled ones.""")
+
+ if from_build_command:
+ parser.add_argument(
+ dest='build', nargs=argparse.REMAINDER, help="""Command to run.""")
+ else:
+ ctu = parser.add_argument_group('cross translation unit analysis')
+ ctu_mutex_group = ctu.add_mutually_exclusive_group()
+ ctu_mutex_group.add_argument(
+ '--ctu',
+ action='store_const',
+ const=CtuConfig(collect=True, analyze=True,
+ dir='', extdef_map_cmd=''),
+ dest='ctu_phases',
+ help="""Perform cross translation unit (ctu) analysis (both collect
+ and analyze phases) using default <ctu-dir> for temporary output.
+ At the end of the analysis, the temporary directory is removed.""")
+ ctu.add_argument(
+ '--ctu-dir',
+ metavar='<ctu-dir>',
+ dest='ctu_dir',
+ default='ctu-dir',
+ help="""Defines the temporary directory used between ctu
+ phases.""")
+ ctu_mutex_group.add_argument(
+ '--ctu-collect-only',
+ action='store_const',
+ const=CtuConfig(collect=True, analyze=False,
+ dir='', extdef_map_cmd=''),
+ dest='ctu_phases',
+ help="""Perform only the collect phase of ctu.
+ Keep <ctu-dir> for further use.""")
+ ctu_mutex_group.add_argument(
+ '--ctu-analyze-only',
+ action='store_const',
+ const=CtuConfig(collect=False, analyze=True,
+ dir='', extdef_map_cmd=''),
+ dest='ctu_phases',
+ help="""Perform only the analyze phase of ctu. <ctu-dir> should be
+ present and will not be removed after analysis.""")
+ ctu.add_argument(
+ '--use-extdef-map-cmd',
+ metavar='<path>',
+ dest='extdef_map_cmd',
+ default='clang-extdef-mapping',
+ help="""'%(prog)s' uses the 'clang-extdef-mapping' executable
+ relative to itself for generating external definition maps for
+ static analysis. One can override this behavior with this option
+ by using the 'clang-extdef-mapping' packaged with Xcode (on OS X)
+ or from the PATH.""")
+ return parser
+
+
+def create_default_parser():
+ """ Creates command line parser for all build wrapper commands. """
+
+ parser = argparse.ArgumentParser(
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+ parser.add_argument(
+ '--verbose',
+ '-v',
+ action='count',
+ default=0,
+ help="""Enable verbose output from '%(prog)s'. A second, third and
+ fourth flag increases the verbosity.""")
+ return parser
+
+
+def parser_add_cdb(parser):
+ parser.add_argument(
+ '--cdb',
+ metavar='<file>',
+ default="compile_commands.json",
+ help="""The JSON compilation database.""")
+
+
+def parser_add_prefer_wrapper(parser):
+ parser.add_argument(
+ '--override-compiler',
+ action='store_true',
+ help="""Always resort to the compiler wrapper even when better
+ intercept methods are available.""")
+
+
+def parser_add_compilers(parser):
+ parser.add_argument(
+ '--use-cc',
+ metavar='<path>',
+ dest='cc',
+ default=os.getenv('CC', 'cc'),
+ help="""When '%(prog)s' analyzes a project by interposing a compiler
+ wrapper, which executes a real compiler for compilation and do other
+ tasks (record the compiler invocation). Because of this interposing,
+ '%(prog)s' does not know what compiler your project normally uses.
+ Instead, it simply overrides the CC environment variable, and guesses
+ your default compiler.
+
+ If you need '%(prog)s' to use a specific compiler for *compilation*
+ then you can use this option to specify a path to that compiler.""")
+ parser.add_argument(
+ '--use-c++',
+ metavar='<path>',
+ dest='cxx',
+ default=os.getenv('CXX', 'c++'),
+ help="""This is the same as "--use-cc" but for C++ code.""")
+
+
+class AppendCommaSeparated(argparse.Action):
+ """ argparse Action class to support multiple comma separated lists. """
+
+ def __call__(self, __parser, namespace, values, __option_string):
+ # getattr(obj, attr, default) does not really return default, but None
+ if getattr(namespace, self.dest, None) is None:
+ setattr(namespace, self.dest, [])
+ # once it's initialized, we can use it as expected
+ actual = getattr(namespace, self.dest)
+ actual.extend(values.split(','))
+ setattr(namespace, self.dest, actual)
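The action above can be demonstrated with a tiny parser. Below is a self-contained copy wired up for illustration; the checker names are made up:

```python
import argparse

class AppendCommaSeparated(argparse.Action):
    """Collect repeated, comma-separated option values into one flat list."""
    def __call__(self, parser, namespace, values, option_string=None):
        # The attribute exists on the namespace with value None, so
        # initialize it to an empty list before extending.
        if getattr(namespace, self.dest, None) is None:
            setattr(namespace, self.dest, [])
        getattr(namespace, self.dest).extend(values.split(','))

parser = argparse.ArgumentParser()
parser.add_argument('--enable-checker', action=AppendCommaSeparated)
args = parser.parse_args(['--enable-checker', 'alpha.core,alpha.unix',
                          '--enable-checker', 'nullability'])
```

After parsing, `args.enable_checker` holds the flattened list of all values from both occurrences of the option.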
+
+
+def print_active_checkers(checkers):
+ """ Print active checkers to stdout. """
+
+ for name in sorted(name for name, (_, active) in checkers.items()
+ if active):
+ print(name)
+
+
+def print_checkers(checkers):
+ """ Print verbose checker help to stdout. """
+
+ print('')
+ print('available checkers:')
+ print('')
+ for name in sorted(checkers.keys()):
+ description, active = checkers[name]
+ prefix = '+' if active else ' '
+ if len(name) > 30:
+ print(' {0} {1}'.format(prefix, name))
+ print(' ' * 35 + description)
+ else:
+ print(' {0} {1: <30} {2}'.format(prefix, name, description))
+ print('')
+ print('NOTE: "+" indicates that an analysis is enabled by default.')
+ print('')
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/clang.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/clang.py
new file mode 100644
index 0000000..0cbfdb6
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/clang.py
@@ -0,0 +1,179 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible for the Clang executable.
+
+Since the Clang command-line interface is rich but this project uses only
+a subset of it, it makes sense to create function-specific wrappers. """
+
+import subprocess
+import re
+from libscanbuild import run_command
+from libscanbuild.shell import decode
+
+__all__ = ['get_version', 'get_arguments', 'get_checkers', 'is_ctu_capable',
+ 'get_triple_arch']
+
+# regex for activated checker
+ACTIVE_CHECKER_PATTERN = re.compile(r'^-analyzer-checker=(.*)$')
+
+
+def get_version(clang):
+ """ Returns the compiler version as string.
+
+ :param clang: the compiler we are using
+ :return: the version string printed to stderr """
+
+ output = run_command([clang, '-v'])
+ # the relevant version info is in the first line
+ return output[0]
+
+
+def get_arguments(command, cwd):
+ """ Capture Clang invocation.
+
+ :param command: the compilation command
+ :param cwd: the current working directory
+ :return: the detailed front-end invocation command """
+
+ cmd = command[:]
+ cmd.insert(1, '-###')
+
+ output = run_command(cmd, cwd=cwd)
+ # The relevant information is in the last line of the output.
+ # Don't check whether finding the last line fails; it would throw anyway.
+ last_line = output[-1]
+ if re.search(r'clang(.*): error:', last_line):
+ raise Exception(last_line)
+ return decode(last_line)
+
+
+def get_active_checkers(clang, plugins):
+ """ Get the active checker list.
+
+ :param clang: the compiler we are using
+ :param plugins: list of plugins which were requested by the user
+ :return: list of checker names which are active
+
+ To get the default checkers, we execute Clang to print how this
+ compilation would be called, and take the enabled checkers out of the
+ arguments. As the input file we specify stdin and pass only the
+ language information. """
+
+ def get_active_checkers_for(language):
+ """ Returns a list of active checkers for the given language. """
+
+ load_args = [arg
+ for plugin in plugins
+ for arg in ['-Xclang', '-load', '-Xclang', plugin]]
+ cmd = [clang, '--analyze'] + load_args + ['-x', language, '-']
+ return [ACTIVE_CHECKER_PATTERN.match(arg).group(1)
+ for arg in get_arguments(cmd, '.')
+ if ACTIVE_CHECKER_PATTERN.match(arg)]
+
+ result = set()
+ for language in ['c', 'c++', 'objective-c', 'objective-c++']:
+ result.update(get_active_checkers_for(language))
+ return frozenset(result)
+
+
+def is_active(checkers):
+ """ Returns a method, which classifies the checker active or not,
+ based on the received checker name list. """
+
+ def predicate(checker):
+ """ Returns True if the given checker is active. """
+
+ return any(pattern.match(checker) for pattern in predicate.patterns)
+
+ predicate.patterns = [re.compile(r'^' + a + r'(\.|$)') for a in checkers]
+ return predicate
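The predicate matches a checker name when it equals one of the requested names or is one of its sub-checkers. A minimal standalone sketch of the same idea (the checker names in the example are ours):

```python
import re

def is_active(checkers):
    """Return a predicate telling whether a checker name is active."""
    # One anchored pattern per configured name; '(\.|$)' accepts either
    # an exact match or a sub-checker of that name.
    patterns = [re.compile(r'^' + name + r'(\.|$)') for name in checkers]
    return lambda checker: any(p.match(checker) for p in patterns)

active = is_active(['core', 'unix.API'])
```

`active('core.DivideZero')` is truthy (a sub-checker of `core`), while `active('cplusplus.NewDelete')` is not.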
+
+
+def parse_checkers(stream):
+ """ Parse clang -analyzer-checker-help output.
+
+ Below the line 'CHECKERS:' are the name-description pairs. Many of
+ them fit on one line, but long checker names have the name and the
+ description on separate lines.
+
+ The checker name is always prefixed with two space characters and
+ contains no whitespace. It is followed either by a newline (if the
+ name is too long) or by more spaces, and then by the description of
+ the checker. The description ends with a newline character.
+
+ :param stream: list of lines to parse
+ :return: generator of tuples
+
+ (<checker name>, <checker description>) """
+
+ lines = iter(stream)
+ # find checkers header
+ for line in lines:
+ if re.match(r'^CHECKERS:', line):
+ break
+ # find entries
+ state = None
+ for line in lines:
+ if state and not re.match(r'^\s\s\S', line):
+ yield (state, line.strip())
+ state = None
+ elif re.match(r'^\s\s\S+$', line.rstrip()):
+ state = line.strip()
+ else:
+ pattern = re.compile(r'^\s\s(?P<key>\S*)\s*(?P<value>.*)')
+ match = pattern.match(line.rstrip())
+ if match:
+ current = match.groupdict()
+ yield (current['key'], current['value'])
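The parser above can be exercised on a fabricated listing. The sketch below is a self-contained copy of the same state machine; the sample lines are invented for illustration, not real clang output:

```python
import re

def parse_checkers(stream):
    # Scan past the header, then yield (name, description) pairs; a long
    # checker name carries its description on the following line.
    lines = iter(stream)
    for line in lines:
        if re.match(r'^CHECKERS:', line):
            break
    state = None
    for line in lines:
        if state and not re.match(r'^\s\s\S', line):
            yield (state, line.strip())
            state = None
        elif re.match(r'^\s\s\S+$', line.rstrip()):
            state = line.strip()
        else:
            match = re.match(r'^\s\s(?P<key>\S*)\s*(?P<value>.*)', line.rstrip())
            if match:
                yield (match.group('key'), match.group('value'))

sample = [
    'OVERVIEW: Clang Static Analyzer Checkers List',
    'CHECKERS:',
    '  core.DivideZero       Check for division by zero',
    '  alpha.security.taint.TaintPropagation',
    '                        Generate taint information',
]
```

Running `list(parse_checkers(sample))` pairs the long checker name with the description from the following line.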
+
+
+def get_checkers(clang, plugins):
+ """ Get all the available checkers from default and from the plugins.
+
+ :param clang: the compiler we are using
+ :param plugins: list of plugins which were requested by the user
+ :return: a dictionary of all available checkers and its status
+
+ {<checker name>: (<checker description>, <is active by default>)} """
+
+ load = [elem for plugin in plugins for elem in ['-load', plugin]]
+ cmd = [clang, '-cc1'] + load + ['-analyzer-checker-help']
+
+ lines = run_command(cmd)
+
+ is_active_checker = is_active(get_active_checkers(clang, plugins))
+
+ checkers = {
+ name: (description, is_active_checker(name))
+ for name, description in parse_checkers(lines)
+ }
+ if not checkers:
+ raise Exception('Could not query Clang for available checkers.')
+
+ return checkers
+
+
+def is_ctu_capable(extdef_map_cmd):
+ """ Detects if the current (or given) clang and external definition mapping
+ executables are CTU compatible. """
+
+ try:
+ run_command([extdef_map_cmd, '-version'])
+ except (OSError, subprocess.CalledProcessError):
+ return False
+ return True
+
+
+def get_triple_arch(command, cwd):
+ """Returns the architecture part of the target triple for the given
+ compilation command. """
+
+ cmd = get_arguments(command, cwd)
+ try:
+ separator = cmd.index("-triple")
+ return cmd[separator + 1]
+ except (IndexError, ValueError):
+ return ""
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/compilation.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/compilation.py
new file mode 100644
index 0000000..ef906fa
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/compilation.py
@@ -0,0 +1,141 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible for to parse a compiler invocation. """
+
+import re
+import os
+import collections
+
+__all__ = ['split_command', 'classify_source', 'compiler_language']
+
+# Ignored compiler options map for compilation database creation.
+# The map is used in the `split_command` method, which ignores and
+# classifies parameters. Please note that these are not the only
+# parameters which might be ignored.
+#
+# Keys are the option name, value number of options to skip
+IGNORED_FLAGS = {
+ # compile-only flag, ignored because the creator of the compilation
+ # database will explicitly set it.
+ '-c': 0,
+ # dependency-generation flags, ignored because they would cause
+ # duplicate entries in the output (the only difference would be these
+ # flags). this is an actual finding from users, who suffered longer
+ # execution times caused by the duplicates.
+ '-MD': 0,
+ '-MMD': 0,
+ '-MG': 0,
+ '-MP': 0,
+ '-MF': 1,
+ '-MT': 1,
+ '-MQ': 1,
+ # linker options, ignored because the compilation database will
+ # contain compilation commands only. so, the compiler would ignore
+ # these flags anyway. the benefit of getting rid of them is a more
+ # readable output.
+ '-static': 0,
+ '-shared': 0,
+ '-s': 0,
+ '-rdynamic': 0,
+ '-l': 1,
+ '-L': 1,
+ '-u': 1,
+ '-z': 1,
+ '-T': 1,
+ '-Xlinker': 1
+}
+
+# Known C/C++ compiler executable name patterns
+COMPILER_PATTERNS = frozenset([
+ re.compile(r'^(intercept-|analyze-|)c(c|\+\+)$'),
+ re.compile(r'^([^-]*-)*[mg](cc|\+\+)(-\d+(\.\d+){0,2})?$'),
+ re.compile(r'^([^-]*-)*clang(\+\+)?(-\d+(\.\d+){0,2})?$'),
+ re.compile(r'^llvm-g(cc|\+\+)$'),
+])
+
+
+def split_command(command):
+ """ Returns a value when the command is a compilation, None otherwise.
+
+ The value on success is a named tuple with the following attributes:
+
+ files: list of source files
+ flags: list of compile options
+ compiler: string value of 'c' or 'c++' """
+
+ # the result of this method
+ result = collections.namedtuple('Compilation',
+ ['compiler', 'flags', 'files'])
+ result.compiler = compiler_language(command)
+ result.flags = []
+ result.files = []
+ # quit right now, if the program was not a C/C++ compiler
+ if not result.compiler:
+ return None
+ # iterate on the compile options
+ args = iter(command[1:])
+ for arg in args:
+ # quit when compilation pass is not involved
+ if arg in {'-E', '-S', '-cc1', '-M', '-MM', '-###'}:
+ return None
+ # ignore some flags
+ elif arg in IGNORED_FLAGS:
+ count = IGNORED_FLAGS[arg]
+ for _ in range(count):
+ next(args)
+ elif re.match(r'^-(l|L|Wl,).+', arg):
+ pass
+ # some parameters could look like a filename, take them as compile options
+ elif arg in {'-D', '-I'}:
+ result.flags.extend([arg, next(args)])
+ # a parameter which looks like a source file is taken as one...
+ elif re.match(r'^[^-].+', arg) and classify_source(arg):
+ result.files.append(arg)
+ # and consider everything else as compile option.
+ else:
+ result.flags.append(arg)
+ # do extra check on number of source files
+ return result if result.files else None
+
+
+def classify_source(filename, c_compiler=True):
+ """ Return the language from file name extension. """
+
+ mapping = {
+ '.c': 'c' if c_compiler else 'c++',
+ '.i': 'c-cpp-output' if c_compiler else 'c++-cpp-output',
+ '.ii': 'c++-cpp-output',
+ '.m': 'objective-c',
+ '.mi': 'objective-c-cpp-output',
+ '.mm': 'objective-c++',
+ '.mii': 'objective-c++-cpp-output',
+ '.C': 'c++',
+ '.cc': 'c++',
+ '.CC': 'c++',
+ '.cp': 'c++',
+ '.cpp': 'c++',
+ '.cxx': 'c++',
+ '.c++': 'c++',
+ '.C++': 'c++',
+ '.txx': 'c++'
+ }
+
+ __, extension = os.path.splitext(os.path.basename(filename))
+ return mapping.get(extension)
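One detail worth noting: `os.path.splitext` preserves case, so `.C` and `.c` resolve to different languages in the mapping. A minimal standalone sketch (reduced mapping, for illustration only):

```python
import os.path

# Reduced copy of the extension mapping above, for illustration only.
mapping = {'.c': 'c', '.C': 'c++', '.cc': 'c++', '.ii': 'c++-cpp-output'}

def classify(filename):
    _, extension = os.path.splitext(os.path.basename(filename))
    return mapping.get(extension)

print(classify('src/main.c'))   # 'c'
print(classify('src/Main.C'))   # 'c++' -- splitext keeps the case
print(classify('README'))       # None -- no extension, not a source file
```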
+
+
+def compiler_language(command):
+ """ A predicate to decide whether the command is a compiler call.
+
+ Returns 'c' or 'c++' when it matches, None otherwise. """
+
+ cplusplus = re.compile(r'^(.+)(\+\+)(-.+|)$')
+
+ if command:
+ executable = os.path.basename(command[0])
+ if any(pattern.match(executable) for pattern in COMPILER_PATTERNS):
+ return 'c++' if cplusplus.match(executable) else 'c'
+ return None
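The gcc pattern in COMPILER_PATTERNS accepts cross-compiler prefixes and version suffixes; a standalone check of that single regex (copied from the set above):

```python
import re

# The gcc/g++ pattern from COMPILER_PATTERNS: optional cross-compiler
# prefixes, then [mg]cc or [mg]++, then an optional version suffix.
pattern = re.compile(r'^([^-]*-)*[mg](cc|\+\+)(-\d+(\.\d+){0,2})?$')

assert pattern.match('gcc')
assert pattern.match('g++-7')
assert pattern.match('arm-linux-gnueabi-gcc-4.9.3')
assert not pattern.match('gcc-ar')  # binutils tools do not match
```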
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/intercept.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/intercept.py
new file mode 100644
index 0000000..b9bf9e9
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/intercept.py
@@ -0,0 +1,263 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible for capturing the compiler invocations of any
+build process. The result of that should be a compilation database.
+
+This implementation is using the LD_PRELOAD or DYLD_INSERT_LIBRARIES
+mechanisms provided by the dynamic linker. The related library is implemented
+in C language and can be found under 'libear' directory.
+
+The 'libear' library captures all child process creations and logs the
+relevant information about them into separate files in a specified directory.
+The parameter of this process is the output directory name, where the report
+files shall be placed. This parameter is passed as an environment variable.
+
+The module also implements compiler wrappers to intercept the compiler calls.
+
+The module implements the build command execution and the post-processing of
+the output files, which are condensed into a compilation database. """
+
+import sys
+import os
+import os.path
+import re
+import itertools
+import json
+import glob
+import logging
+from libear import build_libear, TemporaryDirectory
+from libscanbuild import command_entry_point, compiler_wrapper, \
+ wrapper_environment, run_command, run_build
+from libscanbuild import duplicate_check
+from libscanbuild.compilation import split_command
+from libscanbuild.arguments import parse_args_for_intercept_build
+from libscanbuild.shell import encode, decode
+
+__all__ = ['capture', 'intercept_build', 'intercept_compiler_wrapper']
+
+GS = chr(0x1d)
+RS = chr(0x1e)
+US = chr(0x1f)
+
+COMPILER_WRAPPER_CC = 'intercept-cc'
+COMPILER_WRAPPER_CXX = 'intercept-c++'
+TRACE_FILE_EXTENSION = '.cmd' # same as in ear.c
+WRAPPER_ONLY_PLATFORMS = frozenset({'win32', 'cygwin'})
+
+
+@command_entry_point
+def intercept_build():
+ """ Entry point for 'intercept-build' command. """
+
+ args = parse_args_for_intercept_build()
+ return capture(args)
+
+
+def capture(args):
+ """ The entry point of build command interception. """
+
+ def post_processing(commands):
+ """ To make a compilation database, it needs to filter out commands
+ which are not compiler calls, find the source file names from the
+ arguments, and do shell escaping on the commands.
+
+ To support incremental builds, it is desired to read elements from
+ an existing compilation database from a previous run. These elements
+ shall be merged with the new elements. """
+
+ # create entries from the current run
+ current = itertools.chain.from_iterable(
+ # creates a sequence of entry generators from an exec,
+ format_entry(command) for command in commands)
+ # read entries from previous run
+ if 'append' in args and args.append and os.path.isfile(args.cdb):
+ with open(args.cdb) as handle:
+ previous = iter(json.load(handle))
+ else:
+ previous = iter([])
+ # filter out duplicate entries from both
+ duplicate = duplicate_check(entry_hash)
+ return (entry
+ for entry in itertools.chain(previous, current)
+ if os.path.exists(entry['file']) and not duplicate(entry))
+
+ with TemporaryDirectory(prefix='intercept-') as tmp_dir:
+ # run the build command
+ environment = setup_environment(args, tmp_dir)
+ exit_code = run_build(args.build, env=environment)
+ # read the intercepted exec calls
+ exec_traces = itertools.chain.from_iterable(
+ parse_exec_trace(os.path.join(tmp_dir, filename))
+ for filename in sorted(glob.iglob(os.path.join(tmp_dir, '*.cmd'))))
+ # do post processing
+ entries = post_processing(exec_traces)
+ # dump the compilation database
+ with open(args.cdb, 'w+') as handle:
+ json.dump(list(entries), handle, sort_keys=True, indent=4)
+ return exit_code
+
+
+def setup_environment(args, destination):
+ """ Sets up the environment for the build command.
+
+ It sets the required environment variables and executes the given command.
+ The exec calls will be logged by the 'libear' preloaded library or by the
+ 'wrapper' programs. """
+
+ c_compiler = args.cc if 'cc' in args else 'cc'
+ cxx_compiler = args.cxx if 'cxx' in args else 'c++'
+
+ libear_path = None if args.override_compiler or is_preload_disabled(
+ sys.platform) else build_libear(c_compiler, destination)
+
+ environment = dict(os.environ)
+ environment.update({'INTERCEPT_BUILD_TARGET_DIR': destination})
+
+ if not libear_path:
+ logging.debug('intercept will use compiler wrappers')
+ environment.update(wrapper_environment(args))
+ environment.update({
+ 'CC': COMPILER_WRAPPER_CC,
+ 'CXX': COMPILER_WRAPPER_CXX
+ })
+ elif sys.platform == 'darwin':
+ logging.debug('intercept will preload libear on OSX')
+ environment.update({
+ 'DYLD_INSERT_LIBRARIES': libear_path,
+ 'DYLD_FORCE_FLAT_NAMESPACE': '1'
+ })
+ else:
+ logging.debug('intercept will preload libear on UNIX')
+ environment.update({'LD_PRELOAD': libear_path})
+
+ return environment
+
+
+@command_entry_point
+def intercept_compiler_wrapper():
+ """ Entry point for `intercept-cc` and `intercept-c++`. """
+
+ return compiler_wrapper(intercept_compiler_wrapper_impl)
+
+
+def intercept_compiler_wrapper_impl(_, execution):
+ """ Implement intercept compiler wrapper functionality.
+
+ It generates an execution report into the target directory.
+ The target directory name comes from an environment variable. """
+
+ message_prefix = 'execution report might be incomplete: %s'
+
+ target_dir = os.getenv('INTERCEPT_BUILD_TARGET_DIR')
+ if not target_dir:
+ logging.warning(message_prefix, 'missing target directory')
+ return
+ # write current execution info to the pid file
+ try:
+ target_file_name = str(os.getpid()) + TRACE_FILE_EXTENSION
+ target_file = os.path.join(target_dir, target_file_name)
+ logging.debug('writing execution report to: %s', target_file)
+ write_exec_trace(target_file, execution)
+ except IOError:
+ logging.warning(message_prefix, 'io problem')
+
+
+def write_exec_trace(filename, entry):
+ """ Write execution report file.
+
+ This method shall be kept in sync with the execution report writer in the
+ interception library. Each entry in the file is a record whose fields are
+ separated by ASCII control characters.
+
+ :param filename: path to the output execution trace file,
+ :param entry: the Execution object to append to that file. """
+
+ with open(filename, 'ab') as handler:
+ pid = str(entry.pid)
+ command = US.join(entry.cmd) + US
+ content = RS.join([pid, pid, 'wrapper', entry.cwd, command]) + GS
+ handler.write(content.encode('utf-8'))
+
+
+def parse_exec_trace(filename):
+ """ Parse the file generated by the 'libear' preloaded library.
+
+ The given filename points to a file which contains the basic report
+ generated by the interception library or the wrapper command. A single
+ report file _might_ contain info about multiple process creations. """
+
+ logging.debug('parse exec trace file: %s', filename)
+ with open(filename, 'r') as handler:
+ content = handler.read()
+ for group in filter(bool, content.split(GS)):
+ records = group.split(RS)
+ yield {
+ 'pid': records[0],
+ 'ppid': records[1],
+ 'function': records[2],
+ 'directory': records[3],
+ 'command': records[4].split(US)[:-1]
+ }
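The record format written by `write_exec_trace` and read back here can be exercised in isolation; this in-memory sketch mirrors the separators used above (the real code goes through files):

```python
# In-memory sketch of the trace record format: fields separated by RS,
# command words separated by US, records terminated by GS.
GS, RS, US = chr(0x1d), chr(0x1e), chr(0x1f)

def encode_record(pid, cwd, cmd):
    command = US.join(cmd) + US
    return RS.join([str(pid), str(pid), 'wrapper', cwd, command]) + GS

def decode_records(content):
    for group in filter(bool, content.split(GS)):
        records = group.split(RS)
        yield {'pid': records[0], 'directory': records[3],
               'command': records[4].split(US)[:-1]}

blob = encode_record(42, '/tmp', ['cc', '-c', 'a.c'])
entry = next(decode_records(blob))
print(entry['command'])  # ['cc', '-c', 'a.c']
```

The trailing US after the last command word explains the `[:-1]` in the parser: splitting on US always yields one empty trailing element.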
+
+
+def format_entry(exec_trace):
+ """ Generate the desired fields for compilation database entries. """
+
+ def abspath(cwd, name):
+ """ Create normalized absolute path from input filename. """
+ fullname = name if os.path.isabs(name) else os.path.join(cwd, name)
+ return os.path.normpath(fullname)
+
+ logging.debug('format this command: %s', exec_trace['command'])
+ compilation = split_command(exec_trace['command'])
+ if compilation:
+ for source in compilation.files:
+ compiler = 'c++' if compilation.compiler == 'c++' else 'cc'
+ command = [compiler, '-c'] + compilation.flags + [source]
+ logging.debug('formatted as: %s', command)
+ yield {
+ 'directory': exec_trace['directory'],
+ 'command': encode(command),
+ 'file': abspath(exec_trace['directory'], source)
+ }
+
+
+def is_preload_disabled(platform):
+ """ Library-based interposition will fail silently if SIP is enabled,
+ so this should be detected. You can detect whether SIP is enabled on
+ Darwin by checking whether (1) there is a binary called 'csrutil' in
+ the path and, if so, (2) whether the output of executing 'csrutil status'
+ contains 'System Integrity Protection status: enabled'.
+
+ :param platform: name of the platform (returned by sys.platform),
+ :return: True if library preloading would be blocked by the dynamic linker. """
+
+ if platform in WRAPPER_ONLY_PLATFORMS:
+ return True
+ elif platform == 'darwin':
+ command = ['csrutil', 'status']
+ pattern = re.compile(r'System Integrity Protection status:\s+enabled')
+ try:
+ return any(pattern.match(line) for line in run_command(command))
+ except Exception:
+ return False
+ else:
+ return False
+
+
+def entry_hash(entry):
+ """ Implement unique hash method for compilation database entries. """
+
+ # For faster lookup in a set, the filename is reversed
+ filename = entry['file'][::-1]
+ # For faster lookup in a set, the directory is reversed
+ directory = entry['directory'][::-1]
+ # On OS X the 'cc' and 'c++' compilers are wrappers for
+ # 'clang', therefore both calls would be logged. To avoid
+ # this, the hash does not contain the first word of the
+ # command.
+ command = ' '.join(decode(entry['command'])[1:])
+
+ return '<>'.join([filename, directory, command])
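The compiler-name exclusion can be demonstrated with a simplified copy of `entry_hash` (a plain whitespace split stands in for the shell-aware `decode()` used by the real code):

```python
# Simplified version of entry_hash: whitespace split instead of shell decode.
def entry_hash(entry):
    filename = entry['file'][::-1]
    directory = entry['directory'][::-1]
    command = ' '.join(entry['command'].split()[1:])  # drop the compiler name
    return '<>'.join([filename, directory, command])

a = {'file': 'a.c', 'directory': '/src', 'command': 'cc -c a.c'}
b = {'file': 'a.c', 'directory': '/src', 'command': 'clang -c a.c'}
assert entry_hash(a) == entry_hash(b)  # only the compiler name differs
```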
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/report.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/report.py
new file mode 100644
index 0000000..b3753c1
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/report.py
@@ -0,0 +1,506 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module is responsible for generating 'index.html' for the report.
+
+The input for this step is the output directory, where individual reports
+could be found. It parses those reports and generates 'index.html'. """
+
+import re
+import os
+import os.path
+import sys
+import shutil
+import plistlib
+import glob
+import json
+import logging
+import datetime
+from libscanbuild import duplicate_check
+from libscanbuild.clang import get_version
+
+__all__ = ['document']
+
+
+def document(args):
+ """ Generates cover report and returns the number of bugs/crashes. """
+
+ html_reports_available = args.output_format in {'html', 'plist-html'}
+
+ logging.debug('count crashes and bugs')
+ crash_count = sum(1 for _ in read_crashes(args.output))
+ bug_counter = create_counters()
+ for bug in read_bugs(args.output, html_reports_available):
+ bug_counter(bug)
+ result = crash_count + bug_counter.total
+
+ if html_reports_available and result:
+ use_cdb = os.path.exists(args.cdb)
+
+ logging.debug('generate index.html file')
+ # common prefix for source files to have shorter paths
+ prefix = commonprefix_from(args.cdb) if use_cdb else os.getcwd()
+ # assemble the cover from multiple fragments
+ fragments = []
+ try:
+ if bug_counter.total:
+ fragments.append(bug_summary(args.output, bug_counter))
+ fragments.append(bug_report(args.output, prefix))
+ if crash_count:
+ fragments.append(crash_report(args.output, prefix))
+ assemble_cover(args, prefix, fragments)
+ # copy additional files to the report
+ copy_resource_files(args.output)
+ if use_cdb:
+ shutil.copy(args.cdb, args.output)
+ finally:
+ for fragment in fragments:
+ os.remove(fragment)
+ return result
+
+
+def assemble_cover(args, prefix, fragments):
+ """ Put together the fragments into a final report. """
+
+ import getpass
+ import socket
+
+ if args.html_title is None:
+ args.html_title = os.path.basename(prefix) + ' - analyzer results'
+
+ with open(os.path.join(args.output, 'index.html'), 'w') as handle:
+ indent = 0
+ handle.write(reindent("""
+ |<!DOCTYPE html>
+ |<html>
+ | <head>
+ | <title>{html_title}</title>
+ | <link type="text/css" rel="stylesheet" href="scanview.css"/>
+ | <script type='text/javascript' src="sorttable.js"></script>
+ | <script type='text/javascript' src='selectable.js'></script>
+ | </head>""", indent).format(html_title=args.html_title))
+ handle.write(comment('SUMMARYENDHEAD'))
+ handle.write(reindent("""
+ | <body>
+ | <h1>{html_title}</h1>
+ | <table>
+ | <tr><th>User:</th><td>{user_name}@{host_name}</td></tr>
+ | <tr><th>Working Directory:</th><td>{current_dir}</td></tr>
+ | <tr><th>Command Line:</th><td>{cmd_args}</td></tr>
+ | <tr><th>Clang Version:</th><td>{clang_version}</td></tr>
+ | <tr><th>Date:</th><td>{date}</td></tr>
+ | </table>""", indent).format(html_title=args.html_title,
+ user_name=getpass.getuser(),
+ host_name=socket.gethostname(),
+ current_dir=prefix,
+ cmd_args=' '.join(sys.argv),
+ clang_version=get_version(args.clang),
+ date=datetime.datetime.today(
+ ).strftime('%c')))
+ for fragment in fragments:
+ # copy the content of fragments
+ with open(fragment, 'r') as input_handle:
+ shutil.copyfileobj(input_handle, handle)
+ handle.write(reindent("""
+ | </body>
+ |</html>""", indent))
+
+
+def bug_summary(output_dir, bug_counter):
+ """ Bug summary is an HTML table to give a better overview of the bugs. """
+
+ name = os.path.join(output_dir, 'summary.html.fragment')
+ with open(name, 'w') as handle:
+ indent = 4
+ handle.write(reindent("""
+ |<h2>Bug Summary</h2>
+ |<table>
+ | <thead>
+ | <tr>
+ | <td>Bug Type</td>
+ | <td>Quantity</td>
+ | <td class="sorttable_nosort">Display?</td>
+ | </tr>
+ | </thead>
+ | <tbody>""", indent))
+ handle.write(reindent("""
+ | <tr style="font-weight:bold">
+ | <td class="SUMM_DESC">All Bugs</td>
+ | <td class="Q">{0}</td>
+ | <td>
+ | <center>
+ | <input checked type="checkbox" id="AllBugsCheck"
+ | onClick="CopyCheckedStateToCheckButtons(this);"/>
+ | </center>
+ | </td>
+ | </tr>""", indent).format(bug_counter.total))
+ for category, types in bug_counter.categories.items():
+ handle.write(reindent("""
+ | <tr>
+ | <th>{0}</th><th colspan=2></th>
+ | </tr>""", indent).format(category))
+ for bug_type in types.values():
+ handle.write(reindent("""
+ | <tr>
+ | <td class="SUMM_DESC">{bug_type}</td>
+ | <td class="Q">{bug_count}</td>
+ | <td>
+ | <center>
+ | <input checked type="checkbox"
+ | onClick="ToggleDisplay(this,'{bug_type_class}');"/>
+ | </center>
+ | </td>
+ | </tr>""", indent).format(**bug_type))
+ handle.write(reindent("""
+ | </tbody>
+ |</table>""", indent))
+ handle.write(comment('SUMMARYBUGEND'))
+ return name
+
+
+def bug_report(output_dir, prefix):
+ """ Creates a fragment from the analyzer reports. """
+
+ pretty = prettify_bug(prefix, output_dir)
+ bugs = (pretty(bug) for bug in read_bugs(output_dir, True))
+
+ name = os.path.join(output_dir, 'bugs.html.fragment')
+ with open(name, 'w') as handle:
+ indent = 4
+ handle.write(reindent("""
+ |<h2>Reports</h2>
+ |<table class="sortable" style="table-layout:automatic">
+ | <thead>
+ | <tr>
+ | <td>Bug Group</td>
+ | <td class="sorttable_sorted">
+ | Bug Type
+ | <span id="sorttable_sortfwdind"> ▾</span>
+ | </td>
+ | <td>File</td>
+ | <td>Function/Method</td>
+ | <td class="Q">Line</td>
+ | <td class="Q">Path Length</td>
+ | <td class="sorttable_nosort"></td>
+ | </tr>
+ | </thead>
+ | <tbody>""", indent))
+ handle.write(comment('REPORTBUGCOL'))
+ for current in bugs:
+ handle.write(reindent("""
+ | <tr class="{bug_type_class}">
+ | <td class="DESC">{bug_category}</td>
+ | <td class="DESC">{bug_type}</td>
+ | <td>{bug_file}</td>
+ | <td class="DESC">{bug_function}</td>
+ | <td class="Q">{bug_line}</td>
+ | <td class="Q">{bug_path_length}</td>
+ | <td><a href="{report_file}#EndPath">View Report</a></td>
+ | </tr>""", indent).format(**current))
+ handle.write(comment('REPORTBUG', {'id': current['report_file']}))
+ handle.write(reindent("""
+ | </tbody>
+ |</table>""", indent))
+ handle.write(comment('REPORTBUGEND'))
+ return name
+
+
+def crash_report(output_dir, prefix):
+ """ Creates a fragment from the compiler crashes. """
+
+ pretty = prettify_crash(prefix, output_dir)
+ crashes = (pretty(crash) for crash in read_crashes(output_dir))
+
+ name = os.path.join(output_dir, 'crashes.html.fragment')
+ with open(name, 'w') as handle:
+ indent = 4
+ handle.write(reindent("""
+ |<h2>Analyzer Failures</h2>
+ |<p>The analyzer had problems processing the following files:</p>
+ |<table>
+ | <thead>
+ | <tr>
+ | <td>Problem</td>
+ | <td>Source File</td>
+ | <td>Preprocessed File</td>
+ | <td>STDERR Output</td>
+ | </tr>
+ | </thead>
+ | <tbody>""", indent))
+ for current in crashes:
+ handle.write(reindent("""
+ | <tr>
+ | <td>{problem}</td>
+ | <td>{source}</td>
+ | <td><a href="{file}">preprocessor output</a></td>
+ | <td><a href="{stderr}">analyzer std err</a></td>
+ | </tr>""", indent).format(**current))
+ handle.write(comment('REPORTPROBLEM', current))
+ handle.write(reindent("""
+ | </tbody>
+ |</table>""", indent))
+ handle.write(comment('REPORTCRASHES'))
+ return name
+
+
+def read_crashes(output_dir):
+ """ Generate a unique sequence of crashes from the given output directory. """
+
+ return (parse_crash(filename)
+ for filename in glob.iglob(os.path.join(output_dir, 'failures',
+ '*.info.txt')))
+
+
+def read_bugs(output_dir, html):
+ # type: (str, bool) -> Generator[Dict[str, Any], None, None]
+ """ Generate a unique sequence of bugs from the given output directory.
+
+ Duplicates can occur in a project if the same module was compiled multiple
+ times with different compiler options. These are better shown in the
+ final report (cover) only once. """
+
+ def empty(file_name):
+ return os.stat(file_name).st_size == 0
+
+ duplicate = duplicate_check(
+ lambda bug: '{bug_line}.{bug_path_length}:{bug_file}'.format(**bug))
+
+ # get the right parser for the job.
+ parser = parse_bug_html if html else parse_bug_plist
+ # get the input files, which are not empty.
+ pattern = os.path.join(output_dir, '*.html' if html else '*.plist')
+ bug_files = (file for file in glob.iglob(pattern) if not empty(file))
+
+ for bug_file in bug_files:
+ for bug in parser(bug_file):
+ if not duplicate(bug):
+ yield bug
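`duplicate_check` is imported from the `libscanbuild` package; a plausible sketch of its behavior, used here only to make the filtering above concrete (an assumption, not the actual implementation):

```python
def duplicate_check(hash_function):
    """Return a predicate that reports whether an entry was seen before."""
    seen = set()

    def predicate(entry):
        key = hash_function(entry)
        if key in seen:
            return True
        seen.add(key)
        return False

    return predicate

is_duplicate = duplicate_check(lambda bug: (bug['bug_line'], bug['bug_file']))
assert not is_duplicate({'bug_line': 5, 'bug_file': 'a.c'})  # first sighting
assert is_duplicate({'bug_line': 5, 'bug_file': 'a.c'})      # repeat
```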
+
+
+def parse_bug_plist(filename):
+ """ Returns the generator of bugs from a single .plist file. """
+
+ content = plistlib.readPlist(filename)
+ files = content.get('files')
+ for bug in content.get('diagnostics', []):
+ if len(files) <= int(bug['location']['file']):
+ logging.warning('Parsing bug from "%s" failed', filename)
+ continue
+
+ yield {
+ 'result': filename,
+ 'bug_type': bug['type'],
+ 'bug_category': bug['category'],
+ 'bug_line': int(bug['location']['line']),
+ 'bug_path_length': int(bug['location']['col']),
+ 'bug_file': files[int(bug['location']['file'])]
+ }
+
+
+def parse_bug_html(filename):
+ """ Parse out the bug information from HTML output. """
+
+ patterns = [re.compile(r'<!-- BUGTYPE (?P<bug_type>.*) -->$'),
+ re.compile(r'<!-- BUGFILE (?P<bug_file>.*) -->$'),
+ re.compile(r'<!-- BUGPATHLENGTH (?P<bug_path_length>.*) -->$'),
+ re.compile(r'<!-- BUGLINE (?P<bug_line>.*) -->$'),
+ re.compile(r'<!-- BUGCATEGORY (?P<bug_category>.*) -->$'),
+ re.compile(r'<!-- BUGDESC (?P<bug_description>.*) -->$'),
+ re.compile(r'<!-- FUNCTIONNAME (?P<bug_function>.*) -->$')]
+ endsign = re.compile(r'<!-- BUGMETAEND -->')
+
+ bug = {
+ 'report_file': filename,
+ 'bug_function': 'n/a', # compatibility with < clang-3.5
+ 'bug_category': 'Other',
+ 'bug_line': 0,
+ 'bug_path_length': 1
+ }
+
+ with open(filename) as handler:
+ for line in handler.readlines():
+ # do not read the file further
+ if endsign.match(line):
+ break
+ # search for the right lines
+ for regex in patterns:
+ match = regex.match(line.strip())
+ if match:
+ bug.update(match.groupdict())
+ break
+
+ encode_value(bug, 'bug_line', int)
+ encode_value(bug, 'bug_path_length', int)
+
+ yield bug
+
+
+def parse_crash(filename):
+ """ Parse out the crash information from the report file. """
+
+ match = re.match(r'(.*)\.info\.txt', filename)
+ name = match.group(1) if match else None
+ with open(filename, mode='rb') as handler:
+ # workaround: read in binary mode so Windows '\r\n' line endings are handled.
+ lines = [line.decode().rstrip() for line in handler.readlines()]
+ return {
+ 'source': lines[0],
+ 'problem': lines[1],
+ 'file': name,
+ 'info': name + '.info.txt',
+ 'stderr': name + '.stderr.txt'
+ }
+
+
+def category_type_name(bug):
+ """ Create a new bug attribute from bug by category and type.
+
+ The result will be used as CSS class selector in the final report. """
+
+ def smash(key):
+ """ Make value ready to be HTML attribute value. """
+
+ return bug.get(key, '').lower().replace(' ', '_').replace("'", '')
+
+ return escape('bt_' + smash('bug_category') + '_' + smash('bug_type'))
+
+
+def create_counters():
+ """ Create counters for bug statistics.
+
+ Two entries are maintained: 'total' is an integer representing the
+ number of bugs. 'categories' is a two-level categorisation of bug
+ counters: the first level is 'bug category', the second is 'bug type'.
+ Each entry in this classification is a dictionary of 'count', 'type'
+ and 'label'. """
+
+ def predicate(bug):
+ bug_category = bug['bug_category']
+ bug_type = bug['bug_type']
+ current_category = predicate.categories.get(bug_category, dict())
+ current_type = current_category.get(bug_type, {
+ 'bug_type': bug_type,
+ 'bug_type_class': category_type_name(bug),
+ 'bug_count': 0
+ })
+ current_type.update({'bug_count': current_type['bug_count'] + 1})
+ current_category.update({bug_type: current_type})
+ predicate.categories.update({bug_category: current_category})
+ predicate.total += 1
+
+ predicate.total = 0
+ predicate.categories = dict()
+ return predicate
+
+
+def prettify_bug(prefix, output_dir):
+ def predicate(bug):
+ """ Make these values safe to embed into HTML. """
+
+ bug['bug_type_class'] = category_type_name(bug)
+
+ encode_value(bug, 'bug_file', lambda x: escape(chop(prefix, x)))
+ encode_value(bug, 'bug_category', escape)
+ encode_value(bug, 'bug_type', escape)
+ encode_value(bug, 'report_file', lambda x: escape(chop(output_dir, x)))
+ return bug
+
+ return predicate
+
+
+def prettify_crash(prefix, output_dir):
+ def predicate(crash):
+ """ Make these values safe to embed into HTML. """
+
+ encode_value(crash, 'source', lambda x: escape(chop(prefix, x)))
+ encode_value(crash, 'problem', escape)
+ encode_value(crash, 'file', lambda x: escape(chop(output_dir, x)))
+ encode_value(crash, 'info', lambda x: escape(chop(output_dir, x)))
+ encode_value(crash, 'stderr', lambda x: escape(chop(output_dir, x)))
+ return crash
+
+ return predicate
+
+
+def copy_resource_files(output_dir):
+ """ Copy the javascript and css files to the report directory. """
+
+ this_dir = os.path.dirname(os.path.realpath(__file__))
+ for resource in os.listdir(os.path.join(this_dir, 'resources')):
+ shutil.copy(os.path.join(this_dir, 'resources', resource), output_dir)
+
+
+def encode_value(container, key, encode):
+ """ Run 'encode' on 'container[key]' value and update it. """
+
+ if key in container:
+ value = encode(container[key])
+ container.update({key: value})
+
+
+def chop(prefix, filename):
+ """ Create 'filename' from '/prefix/filename' """
+
+ return filename if not len(prefix) else os.path.relpath(filename, prefix)
+
+
+def escape(text):
+ """ Paranoid HTML escape method. (Python version independent) """
+
+ escape_table = {
+ '&': '&amp;',
+ '"': '&quot;',
+ "'": '&apos;',
+ '>': '&gt;',
+ '<': '&lt;'
+ }
+ return ''.join(escape_table.get(c, c) for c in text)
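A standalone copy of the character-by-character escaping approach, with the HTML entity table spelled out:

```python
# Standalone sketch of the escape helper: each special character is
# replaced by its HTML entity, everything else passes through.
escape_table = {
    '&': '&amp;',
    '"': '&quot;',
    "'": '&apos;',
    '>': '&gt;',
    '<': '&lt;'
}

def escape(text):
    return ''.join(escape_table.get(c, c) for c in text)

print(escape('<a href="x">'))  # &lt;a href=&quot;x&quot;&gt;
```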
+
+
+def reindent(text, indent):
+ """ Utility function to format html output and keep indentation. """
+
+ result = ''
+ for line in text.splitlines():
+ if len(line.strip()):
+ result += ' ' * indent + line.split('|')[1] + os.linesep
+ return result
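The `|` marker convention used throughout this module can be seen with a standalone copy of `reindent`: each template line keeps only what follows the marker, prefixed with the requested indentation.

```python
import os

# Standalone copy of reindent: blank lines are dropped, and everything
# before the '|' marker is replaced with the requested indentation.
def reindent(text, indent):
    result = ''
    for line in text.splitlines():
        if len(line.strip()):
            result += ' ' * indent + line.split('|')[1] + os.linesep
    return result

print(reindent("""
    |<ul>
    |  <li>one</li>
    |</ul>""", 2))
```

This lets the triple-quoted HTML templates above stay readable in the Python source while still producing consistently indented output.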
+
+
+def comment(name, opts=dict()):
+ """ Utility function to format meta information as comment. """
+
+ attributes = ''
+ for key, value in opts.items():
+ attributes += ' {0}="{1}"'.format(key, value)
+
+ return '<!-- {0}{1} -->{2}'.format(name, attributes, os.linesep)
+
+
+def commonprefix_from(filename):
+ """ Create a file prefix from compilation database entries. """
+
+ with open(filename, 'r') as handle:
+ return commonprefix(item['file'] for item in json.load(handle))
+
+
+def commonprefix(files):
+ """ Fixed version of os.path.commonprefix.
+
+ :param files: list of file names.
+ :return: the longest path prefix that is a prefix of all files. """
+ result = None
+ for current in files:
+ if result is not None:
+ result = os.path.commonprefix([result, current])
+ else:
+ result = current
+
+ if result is None:
+ return ''
+ elif not os.path.isdir(result):
+ return os.path.dirname(result)
+ else:
+ return os.path.abspath(result)
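A standalone copy of the fixed `commonprefix` shows what the "fix" buys: unlike `os.path.commonprefix`, it never returns a partial path component (the example assumes `/fake_root` does not exist on disk, so the `isdir` branch falls through to `dirname`).

```python
import os.path

# Standalone copy of the fixed commonprefix above.
def commonprefix(files):
    result = None
    for current in files:
        if result is not None:
            result = os.path.commonprefix([result, current])
        else:
            result = current
    if result is None:
        return ''
    elif not os.path.isdir(result):
        return os.path.dirname(result)
    else:
        return os.path.abspath(result)

print(commonprefix(['/fake_root/lib/a.c', '/fake_root/lib/b.c',
                    '/fake_root/main.c']))
# '/fake_root' (os.path.commonprefix alone would return '/fake_root/')
```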
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/scanview.css b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/scanview.css
new file mode 100644
index 0000000..cf8a5a6
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/scanview.css
@@ -0,0 +1,62 @@
+body { color:#000000; background-color:#ffffff }
+body { font-family: Helvetica, sans-serif; font-size:9pt }
+h1 { font-size: 14pt; }
+h2 { font-size: 12pt; }
+table { font-size:9pt }
+table { border-spacing: 0px; border: 1px solid black }
+th, table thead {
+ background-color:#eee; color:#666666;
+ font-weight: bold; cursor: default;
+ text-align:center;
+ font-weight: bold; font-family: Verdana;
+ white-space:nowrap;
+}
+.W { font-size:0px }
+th, td { padding:5px; padding-left:8px; text-align:left }
+td.SUMM_DESC { padding-left:12px }
+td.DESC { white-space:pre }
+td.Q { text-align:right }
+td { text-align:left }
+tbody.scrollContent { overflow:auto }
+
+table.form_group {
+ background-color: #ccc;
+ border: 1px solid #333;
+ padding: 2px;
+}
+
+table.form_inner_group {
+ background-color: #ccc;
+ border: 1px solid #333;
+ padding: 0px;
+}
+
+table.form {
+ background-color: #999;
+ border: 1px solid #333;
+ padding: 2px;
+}
+
+td.form_label {
+ text-align: right;
+ vertical-align: top;
+}
+/* For one line entries */
+td.form_clabel {
+ text-align: right;
+ vertical-align: center;
+}
+td.form_value {
+ text-align: left;
+ vertical-align: top;
+}
+td.form_submit {
+ text-align: right;
+ vertical-align: top;
+}
+
+h1.SubmitFail {
+ color: #f00;
+}
+h1.SubmitOk {
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/selectable.js b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/selectable.js
new file mode 100644
index 0000000..53f6a8d
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/selectable.js
@@ -0,0 +1,47 @@
+function SetDisplay(RowClass, DisplayVal)
+{
+ var Rows = document.getElementsByTagName("tr");
+ for ( var i = 0 ; i < Rows.length; ++i ) {
+ if (Rows[i].className == RowClass) {
+ Rows[i].style.display = DisplayVal;
+ }
+ }
+}
+
+function CopyCheckedStateToCheckButtons(SummaryCheckButton) {
+ var Inputs = document.getElementsByTagName("input");
+ for ( var i = 0 ; i < Inputs.length; ++i ) {
+ if (Inputs[i].type == "checkbox") {
+ if(Inputs[i] != SummaryCheckButton) {
+ Inputs[i].checked = SummaryCheckButton.checked;
+ Inputs[i].onclick();
+ }
+ }
+ }
+}
+
+function returnObjById( id ) {
+ if (document.getElementById)
+ var returnVar = document.getElementById(id);
+ else if (document.all)
+ var returnVar = document.all[id];
+ else if (document.layers)
+ var returnVar = document.layers[id];
+ return returnVar;
+}
+
+var NumUnchecked = 0;
+
+function ToggleDisplay(CheckButton, ClassName) {
+ if (CheckButton.checked) {
+ SetDisplay(ClassName, "");
+ if (--NumUnchecked == 0) {
+ returnObjById("AllBugsCheck").checked = true;
+ }
+ }
+ else {
+ SetDisplay(ClassName, "none");
+ NumUnchecked++;
+ returnObjById("AllBugsCheck").checked = false;
+ }
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/sorttable.js b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/sorttable.js
new file mode 100644
index 0000000..32faa07
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/resources/sorttable.js
@@ -0,0 +1,492 @@
+/*
+ SortTable
+ version 2
+ 7th April 2007
+ Stuart Langridge, http://www.kryogenix.org/code/browser/sorttable/
+
+ Instructions:
+ Download this file
+ Add <script src="sorttable.js"></script> to your HTML
+ Add class="sortable" to any table you'd like to make sortable
+ Click on the headers to sort
+
+ Thanks to many, many people for contributions and suggestions.
+ Licenced as X11: http://www.kryogenix.org/code/browser/licence.html
+ This basically means: do what you want with it.
+*/
+
+
+var stIsIE = /*@cc_on!@*/false;
+
+sorttable = {
+ init: function() {
+ // quit if this function has already been called
+ if (arguments.callee.done) return;
+ // flag this function so we don't do the same thing twice
+ arguments.callee.done = true;
+ // kill the timer
+ if (_timer) clearInterval(_timer);
+
+ if (!document.createElement || !document.getElementsByTagName) return;
+
+ sorttable.DATE_RE = /^(\d\d?)[\/\.-](\d\d?)[\/\.-]((\d\d)?\d\d)$/;
+
+ forEach(document.getElementsByTagName('table'), function(table) {
+ if (table.className.search(/\bsortable\b/) != -1) {
+ sorttable.makeSortable(table);
+ }
+ });
+
+ },
+
+ makeSortable: function(table) {
+ if (table.getElementsByTagName('thead').length == 0) {
+ // table doesn't have a tHead. Since it should have, create one and
+ // put the first table row in it.
+ the = document.createElement('thead');
+ the.appendChild(table.rows[0]);
+ table.insertBefore(the,table.firstChild);
+ }
+ // Safari doesn't support table.tHead, sigh
+ if (table.tHead == null) table.tHead = table.getElementsByTagName('thead')[0];
+
+ if (table.tHead.rows.length != 1) return; // can't cope with two header rows
+
+ // Sorttable v1 put rows with a class of "sortbottom" at the bottom (as
+ // "total" rows, for example). This is B&R, since what you're supposed
+ // to do is put them in a tfoot. So, if there are sortbottom rows,
+ // for backward compatibility, move them to tfoot (creating it if needed).
+ sortbottomrows = [];
+ for (var i=0; i<table.rows.length; i++) {
+ if (table.rows[i].className.search(/\bsortbottom\b/) != -1) {
+ sortbottomrows[sortbottomrows.length] = table.rows[i];
+ }
+ }
+ if (sortbottomrows.length) {
+ var tfo = table.tFoot;
+ if (tfo == null) {
+ // table doesn't have a tfoot. Create one.
+ tfo = document.createElement('tfoot');
+ table.appendChild(tfo);
+ }
+ for (var i=0; i<sortbottomrows.length; i++) {
+ tfo.appendChild(sortbottomrows[i]);
+ }
+ delete sortbottomrows;
+ }
+
+ // work through each column and calculate its type
+ headrow = table.tHead.rows[0].cells;
+ for (var i=0; i<headrow.length; i++) {
+ // manually override the type with a sorttable_type attribute
+ if (!headrow[i].className.match(/\bsorttable_nosort\b/)) { // skip this col
+ mtch = headrow[i].className.match(/\bsorttable_([a-z0-9]+)\b/);
+ if (mtch) { override = mtch[1]; }
+ if (mtch && typeof sorttable["sort_"+override] == 'function') {
+ headrow[i].sorttable_sortfunction = sorttable["sort_"+override];
+ } else {
+ headrow[i].sorttable_sortfunction = sorttable.guessType(table,i);
+ }
+ // make it clickable to sort
+ headrow[i].sorttable_columnindex = i;
+ headrow[i].sorttable_tbody = table.tBodies[0];
+ dean_addEvent(headrow[i],"click", function(e) {
+
+ if (this.className.search(/\bsorttable_sorted\b/) != -1) {
+ // if we're already sorted by this column, just
+ // reverse the table, which is quicker
+ sorttable.reverse(this.sorttable_tbody);
+ this.className = this.className.replace('sorttable_sorted',
+ 'sorttable_sorted_reverse');
+ this.removeChild(document.getElementById('sorttable_sortfwdind'));
+ sortrevind = document.createElement('span');
+ sortrevind.id = "sorttable_sortrevind";
+ sortrevind.innerHTML = stIsIE ? ' <font face="webdings">5</font>' : ' ▴';
+ this.appendChild(sortrevind);
+ return;
+ }
+ if (this.className.search(/\bsorttable_sorted_reverse\b/) != -1) {
+ // if we're already sorted by this column in reverse, just
+ // re-reverse the table, which is quicker
+ sorttable.reverse(this.sorttable_tbody);
+ this.className = this.className.replace('sorttable_sorted_reverse',
+ 'sorttable_sorted');
+ this.removeChild(document.getElementById('sorttable_sortrevind'));
+ sortfwdind = document.createElement('span');
+ sortfwdind.id = "sorttable_sortfwdind";
+ sortfwdind.innerHTML = stIsIE ? ' <font face="webdings">6</font>' : ' ▾';
+ this.appendChild(sortfwdind);
+ return;
+ }
+
+ // remove sorttable_sorted classes
+ theadrow = this.parentNode;
+ forEach(theadrow.childNodes, function(cell) {
+ if (cell.nodeType == 1) { // an element
+ cell.className = cell.className.replace('sorttable_sorted_reverse','');
+ cell.className = cell.className.replace('sorttable_sorted','');
+ }
+ });
+ sortfwdind = document.getElementById('sorttable_sortfwdind');
+ if (sortfwdind) { sortfwdind.parentNode.removeChild(sortfwdind); }
+ sortrevind = document.getElementById('sorttable_sortrevind');
+ if (sortrevind) { sortrevind.parentNode.removeChild(sortrevind); }
+
+ this.className += ' sorttable_sorted';
+ sortfwdind = document.createElement('span');
+ sortfwdind.id = "sorttable_sortfwdind";
+ sortfwdind.innerHTML = stIsIE ? ' <font face="webdings">6</font>' : ' ▾';
+ this.appendChild(sortfwdind);
+
+ // build an array to sort. This is a Schwartzian transform thing,
+ // i.e., we "decorate" each row with the actual sort key,
+ // sort based on the sort keys, and then put the rows back in order
+ // which is a lot faster because you only do getInnerText once per row
+ row_array = [];
+ col = this.sorttable_columnindex;
+ rows = this.sorttable_tbody.rows;
+ for (var j=0; j<rows.length; j++) {
+ row_array[row_array.length] = [sorttable.getInnerText(rows[j].cells[col]), rows[j]];
+ }
+ /* A stable sort is used by default; to use the faster but
+ unstable built-in sort instead, swap which of these two
+ lines is commented out. */
+ sorttable.shaker_sort(row_array, this.sorttable_sortfunction);
+ //row_array.sort(this.sorttable_sortfunction);
+
+ tb = this.sorttable_tbody;
+ for (var j=0; j<row_array.length; j++) {
+ tb.appendChild(row_array[j][1]);
+ }
+
+ delete row_array;
+ });
+ }
+ }
+ },
+
+ guessType: function(table, column) {
+ // guess the type of a column based on its first non-blank row
+ sortfn = sorttable.sort_alpha;
+ for (var i=0; i<table.tBodies[0].rows.length; i++) {
+ text = sorttable.getInnerText(table.tBodies[0].rows[i].cells[column]);
+ if (text != '') {
+ if (text.match(/^-?[£$¤]?[\d,.]+%?$/)) {
+ return sorttable.sort_numeric;
+ }
+ // check for a date: dd/mm/yyyy or dd/mm/yy
+ // can have / or . or - as separator
+ // can be mm/dd as well
+ possdate = text.match(sorttable.DATE_RE);
+ if (possdate) {
+ // looks like a date
+ first = parseInt(possdate[1]);
+ second = parseInt(possdate[2]);
+ if (first > 12) {
+ // definitely dd/mm
+ return sorttable.sort_ddmm;
+ } else if (second > 12) {
+ return sorttable.sort_mmdd;
+ } else {
+ // looks like a date, but we can't tell which, so assume
+ // that it's dd/mm (English imperialism!) and keep looking
+ sortfn = sorttable.sort_ddmm;
+ }
+ }
+ }
+ }
+ return sortfn;
+ },
+
+ getInnerText: function(node) {
+ // gets the text we want to use for sorting for a cell.
+ // strips leading and trailing whitespace.
+ // this is *not* a generic getInnerText function; it's special to sorttable.
+ // for example, you can override the cell text with a customkey attribute.
+ // it also gets .value for <input> fields.
+
+ hasInputs = (typeof node.getElementsByTagName == 'function') &&
+ node.getElementsByTagName('input').length;
+
+ if (node.getAttribute("sorttable_customkey") != null) {
+ return node.getAttribute("sorttable_customkey");
+ }
+ else if (typeof node.textContent != 'undefined' && !hasInputs) {
+ return node.textContent.replace(/^\s+|\s+$/g, '');
+ }
+ else if (typeof node.innerText != 'undefined' && !hasInputs) {
+ return node.innerText.replace(/^\s+|\s+$/g, '');
+ }
+ else if (typeof node.text != 'undefined' && !hasInputs) {
+ return node.text.replace(/^\s+|\s+$/g, '');
+ }
+ else {
+ switch (node.nodeType) {
+ case 3:
+ if (node.nodeName.toLowerCase() == 'input') {
+ return node.value.replace(/^\s+|\s+$/g, '');
+ }
+ case 4:
+ return node.nodeValue.replace(/^\s+|\s+$/g, '');
+ break;
+ case 1:
+ case 11:
+ var innerText = '';
+ for (var i = 0; i < node.childNodes.length; i++) {
+ innerText += sorttable.getInnerText(node.childNodes[i]);
+ }
+ return innerText.replace(/^\s+|\s+$/g, '');
+ break;
+ default:
+ return '';
+ }
+ }
+ },
+
+ reverse: function(tbody) {
+ // reverse the rows in a tbody
+ newrows = [];
+ for (var i=0; i<tbody.rows.length; i++) {
+ newrows[newrows.length] = tbody.rows[i];
+ }
+ for (var i=newrows.length-1; i>=0; i--) {
+ tbody.appendChild(newrows[i]);
+ }
+ delete newrows;
+ },
+
+ /* sort functions
+ each sort function takes two parameters, a and b
+ you are comparing a[0] and b[0] */
+ sort_numeric: function(a,b) {
+ aa = parseFloat(a[0].replace(/[^0-9.-]/g,''));
+ if (isNaN(aa)) aa = 0;
+ bb = parseFloat(b[0].replace(/[^0-9.-]/g,''));
+ if (isNaN(bb)) bb = 0;
+ return aa-bb;
+ },
+ sort_alpha: function(a,b) {
+ if (a[0]==b[0]) return 0;
+ if (a[0]<b[0]) return -1;
+ return 1;
+ },
+ sort_ddmm: function(a,b) {
+ mtch = a[0].match(sorttable.DATE_RE);
+ y = mtch[3]; m = mtch[2]; d = mtch[1];
+ if (m.length == 1) m = '0'+m;
+ if (d.length == 1) d = '0'+d;
+ dt1 = y+m+d;
+ mtch = b[0].match(sorttable.DATE_RE);
+ y = mtch[3]; m = mtch[2]; d = mtch[1];
+ if (m.length == 1) m = '0'+m;
+ if (d.length == 1) d = '0'+d;
+ dt2 = y+m+d;
+ if (dt1==dt2) return 0;
+ if (dt1<dt2) return -1;
+ return 1;
+ },
+ sort_mmdd: function(a,b) {
+ mtch = a[0].match(sorttable.DATE_RE);
+ y = mtch[3]; d = mtch[2]; m = mtch[1];
+ if (m.length == 1) m = '0'+m;
+ if (d.length == 1) d = '0'+d;
+ dt1 = y+m+d;
+ mtch = b[0].match(sorttable.DATE_RE);
+ y = mtch[3]; d = mtch[2]; m = mtch[1];
+ if (m.length == 1) m = '0'+m;
+ if (d.length == 1) d = '0'+d;
+ dt2 = y+m+d;
+ if (dt1==dt2) return 0;
+ if (dt1<dt2) return -1;
+ return 1;
+ },
+
+ shaker_sort: function(list, comp_func) {
+ // A stable sort function to allow multi-level sorting of data
+ // see: http://en.wikipedia.org/wiki/Cocktail_sort
+ // thanks to Joseph Nahmias
+ var b = 0;
+ var t = list.length - 1;
+ var swap = true;
+
+ while(swap) {
+ swap = false;
+ for(var i = b; i < t; ++i) {
+ if ( comp_func(list[i], list[i+1]) > 0 ) {
+ var q = list[i]; list[i] = list[i+1]; list[i+1] = q;
+ swap = true;
+ }
+ } // for
+ t--;
+
+ if (!swap) break;
+
+ for(var i = t; i > b; --i) {
+ if ( comp_func(list[i], list[i-1]) < 0 ) {
+ var q = list[i]; list[i] = list[i-1]; list[i-1] = q;
+ swap = true;
+ }
+ } // for
+ b++;
+
+ } // while(swap)
+ }
+}
+
+/* ******************************************************************
+ Supporting functions: bundled here to avoid depending on a library
+ ****************************************************************** */
+
+// Dean Edwards/Matthias Miller/John Resig
+
+/* for Mozilla/Opera9 */
+if (document.addEventListener) {
+ document.addEventListener("DOMContentLoaded", sorttable.init, false);
+}
+
+/* for Internet Explorer */
+/*@cc_on @*/
+/*@if (@_win32)
+ document.write("<script id=__ie_onload defer src=javascript:void(0)><\/script>");
+ var script = document.getElementById("__ie_onload");
+ script.onreadystatechange = function() {
+ if (this.readyState == "complete") {
+ sorttable.init(); // call the onload handler
+ }
+ };
+/*@end @*/
+
+/* for Safari */
+if (/WebKit/i.test(navigator.userAgent)) { // sniff
+ var _timer = setInterval(function() {
+ if (/loaded|complete/.test(document.readyState)) {
+ sorttable.init(); // call the onload handler
+ }
+ }, 10);
+}
+
+/* for other browsers */
+window.onload = sorttable.init;
+
+// written by Dean Edwards, 2005
+// with input from Tino Zijdel, Matthias Miller, Diego Perini
+
+// http://dean.edwards.name/weblog/2005/10/add-event/
+
+function dean_addEvent(element, type, handler) {
+ if (element.addEventListener) {
+ element.addEventListener(type, handler, false);
+ } else {
+ // assign each event handler a unique ID
+ if (!handler.$$guid) handler.$$guid = dean_addEvent.guid++;
+ // create a hash table of event types for the element
+ if (!element.events) element.events = {};
+ // create a hash table of event handlers for each element/event pair
+ var handlers = element.events[type];
+ if (!handlers) {
+ handlers = element.events[type] = {};
+ // store the existing event handler (if there is one)
+ if (element["on" + type]) {
+ handlers[0] = element["on" + type];
+ }
+ }
+ // store the event handler in the hash table
+ handlers[handler.$$guid] = handler;
+ // assign a global event handler to do all the work
+ element["on" + type] = handleEvent;
+ }
+};
+// a counter used to create unique IDs
+dean_addEvent.guid = 1;
+
+function removeEvent(element, type, handler) {
+ if (element.removeEventListener) {
+ element.removeEventListener(type, handler, false);
+ } else {
+ // delete the event handler from the hash table
+ if (element.events && element.events[type]) {
+ delete element.events[type][handler.$$guid];
+ }
+ }
+};
+
+function handleEvent(event) {
+ var returnValue = true;
+ // grab the event object (IE uses a global event object)
+ event = event || fixEvent(((this.ownerDocument || this.document || this).parentWindow || window).event);
+ // get a reference to the hash table of event handlers
+ var handlers = this.events[event.type];
+ // execute each event handler
+ for (var i in handlers) {
+ this.$$handleEvent = handlers[i];
+ if (this.$$handleEvent(event) === false) {
+ returnValue = false;
+ }
+ }
+ return returnValue;
+};
+
+function fixEvent(event) {
+ // add W3C standard event methods
+ event.preventDefault = fixEvent.preventDefault;
+ event.stopPropagation = fixEvent.stopPropagation;
+ return event;
+};
+fixEvent.preventDefault = function() {
+ this.returnValue = false;
+};
+fixEvent.stopPropagation = function() {
+ this.cancelBubble = true;
+}
+
+// Dean's forEach: http://dean.edwards.name/base/forEach.js
+/*
+ forEach, version 1.0
+ Copyright 2006, Dean Edwards
+ License: http://www.opensource.org/licenses/mit-license.php
+*/
+
+// array-like enumeration
+if (!Array.forEach) { // mozilla already supports this
+ Array.forEach = function(array, block, context) {
+ for (var i = 0; i < array.length; i++) {
+ block.call(context, array[i], i, array);
+ }
+ };
+}
+
+// generic enumeration
+Function.prototype.forEach = function(object, block, context) {
+ for (var key in object) {
+ if (typeof this.prototype[key] == "undefined") {
+ block.call(context, object[key], key, object);
+ }
+ }
+};
+
+// character enumeration
+String.forEach = function(string, block, context) {
+ Array.forEach(string.split(""), function(chr, index) {
+ block.call(context, chr, index, string);
+ });
+};
+
+// globally resolve forEach enumeration
+var forEach = function(object, block, context) {
+ if (object) {
+ var resolve = Object; // default
+ if (object instanceof Function) {
+ // functions have a "length" property
+ resolve = Function;
+ } else if (object.forEach instanceof Function) {
+ // the object implements a custom forEach method so use that
+ object.forEach(block, context);
+ return;
+ } else if (typeof object == "string") {
+ // the object is a string
+ resolve = String;
+ } else if (typeof object.length == "number") {
+ // the object is array-like
+ resolve = Array;
+ }
+ resolve.forEach(object, block, context);
+ }
+};
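The `shaker_sort` function above is a stable cocktail sort: alternating forward and backward bubble passes over a shrinking window, so equal keys keep their relative order. A minimal Python sketch of the same algorithm (a translation for illustration, not part of the shipped file):

```python
def shaker_sort(items, comp):
    # Mirrors sorttable.shaker_sort: forward then backward bubble passes,
    # shrinking the active window [bottom, top] after each pass.
    bottom, top, swapped = 0, len(items) - 1, True
    while swapped:
        swapped = False
        for i in range(bottom, top):            # forward pass
            if comp(items[i], items[i + 1]) > 0:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        top -= 1
        if not swapped:
            break
        for i in range(top, bottom, -1):        # backward pass
            if comp(items[i], items[i - 1]) < 0:
                items[i], items[i - 1] = items[i - 1], items[i]
                swapped = True
        bottom += 1
    return items
```

Because only strictly out-of-order neighbours are swapped, rows that compare equal are never reordered, which is what makes multi-column sorting behave predictably.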
diff --git a/src/llvm-project/clang/tools/scan-build-py/libscanbuild/shell.py b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/shell.py
new file mode 100644
index 0000000..a575946
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/libscanbuild/shell.py
@@ -0,0 +1,66 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+""" This module implements basic shell escaping/unescaping methods. """
+
+import re
+import shlex
+
+__all__ = ['encode', 'decode']
+
+
+def encode(command):
+ """ Takes a command as list and returns a string. """
+
+ def needs_quote(word):
+ """ Returns true if arguments needs to be protected by quotes.
+
+ Previous implementation was shlex.split method, but that's not good
+ for this job. Currently is running through the string with a basic
+ state checking. """
+
+ reserved = {' ', '$', '%', '&', '(', ')', '[', ']', '{', '}', '*', '|',
+ '<', '>', '@', '?', '!'}
+ state = 0
+ for current in word:
+ if state == 0 and current in reserved:
+ return True
+ elif state == 0 and current == '\\':
+ state = 1
+ elif state == 1 and current in reserved | {'\\'}:
+ state = 0
+ elif state == 0 and current == '"':
+ state = 2
+ elif state == 2 and current == '"':
+ state = 0
+ elif state == 0 and current == "'":
+ state = 3
+ elif state == 3 and current == "'":
+ state = 0
+ return state != 0
+
+ def escape(word):
+ """ Do protect argument if that's needed. """
+
+ table = {'\\': '\\\\', '"': '\\"'}
+ escaped = ''.join([table.get(c, c) for c in word])
+
+ return '"' + escaped + '"' if needs_quote(word) else escaped
+
+ return " ".join([escape(arg) for arg in command])
+
+
+def decode(string):
+ """ Takes a command string and returns as a list. """
+
+ def unescape(arg):
+ """ Gets rid of the escaping characters. """
+
+ if len(arg) >= 2 and arg[0] == arg[-1] and arg[0] == '"':
+ arg = arg[1:-1]
+ return re.sub(r'\\(["\\])', r'\1', arg)
+ return re.sub(r'\\([\\ $%&\(\)\[\]\{\}\*|<>@?!])', r'\1', arg)
+
+ return [unescape(arg) for arg in shlex.split(string)]
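The contract of this module is that `decode` inverts `encode` for any argument list. A property sketch of that round trip, built on the stdlib `shlex` helpers rather than the module's own state machine (so the quoting details differ, but the invariant is the same):

```python
import shlex

def encode(command):
    # Quote each argument so the joined string survives shell-style splitting.
    return ' '.join(shlex.quote(arg) for arg in command)

def decode(string):
    # Split the string back into the original argument list.
    return shlex.split(string)

cmd = ['cc', '-c', 'my file.c', '-DMSG=hello world']
assert decode(encode(cmd)) == cmd
```

The tests below exercise exactly this invariant through the real `libscanbuild.shell` implementation.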
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/__init__.py b/src/llvm-project/clang/tools/scan-build-py/tests/__init__.py
new file mode 100644
index 0000000..bde2376
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/__init__.py
@@ -0,0 +1,18 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import unittest
+
+import tests.unit
+import tests.functional.cases
+
+
+def suite():
+ loader = unittest.TestLoader()
+ suite = unittest.TestSuite()
+ suite.addTests(loader.loadTestsFromModule(tests.unit))
+ suite.addTests(loader.loadTestsFromModule(tests.functional.cases))
+ return suite
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/__init__.py b/src/llvm-project/clang/tools/scan-build-py/tests/functional/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/__init__.py
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/__init__.py b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/__init__.py
new file mode 100644
index 0000000..8fb8465
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/__init__.py
@@ -0,0 +1,71 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import re
+import os.path
+import subprocess
+
+
+def load_tests(loader, suite, pattern):
+ from . import test_from_cdb
+ suite.addTests(loader.loadTestsFromModule(test_from_cdb))
+ from . import test_from_cmd
+ suite.addTests(loader.loadTestsFromModule(test_from_cmd))
+ from . import test_create_cdb
+ suite.addTests(loader.loadTestsFromModule(test_create_cdb))
+ from . import test_exec_anatomy
+ suite.addTests(loader.loadTestsFromModule(test_exec_anatomy))
+ return suite
+
+
+def make_args(target):
+ this_dir, _ = os.path.split(__file__)
+ path = os.path.normpath(os.path.join(this_dir, '..', 'src'))
+ return ['make', 'SRCDIR={}'.format(path), 'OBJDIR={}'.format(target), '-f',
+ os.path.join(path, 'build', 'Makefile')]
+
+
+def silent_call(cmd, *args, **kwargs):
+ kwargs.update({'stdout': subprocess.PIPE, 'stderr': subprocess.STDOUT})
+ return subprocess.call(cmd, *args, **kwargs)
+
+
+def silent_check_call(cmd, *args, **kwargs):
+ kwargs.update({'stdout': subprocess.PIPE, 'stderr': subprocess.STDOUT})
+ return subprocess.check_call(cmd, *args, **kwargs)
+
+
+def call_and_report(analyzer_cmd, build_cmd):
+ child = subprocess.Popen(analyzer_cmd + ['-v'] + build_cmd,
+ universal_newlines=True,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT)
+
+ pattern = re.compile('Report directory created: (.+)')
+ directory = None
+ for line in child.stdout.readlines():
+ match = pattern.search(line)
+ if match and match.lastindex == 1:
+ directory = match.group(1)
+ break
+ child.stdout.close()
+ child.wait()
+
+ return (child.returncode, directory)
+
+
+def check_call_and_report(analyzer_cmd, build_cmd):
+ exit_code, result = call_and_report(analyzer_cmd, build_cmd)
+ if exit_code != 0:
+ raise subprocess.CalledProcessError(
+ exit_code, analyzer_cmd + build_cmd, None)
+ else:
+ return result
+
+
+def create_empty_file(filename):
+ with open(filename, 'a'):
+ pass
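`call_and_report` above recovers the report directory by scanning the analyzer's verbose output for a marker line. A small sketch of that parse (the sample log line is an assumption for illustration; only the `Report directory created:` prefix comes from the code above):

```python
import re

# The same pattern call_and_report compiles to locate the report directory.
pattern = re.compile('Report directory created: (.+)')

sample = 'scan-build: Report directory created: /tmp/scan-build-2019-08-01'
match = pattern.search(sample)
directory = match.group(1) if match else None
assert directory == '/tmp/scan-build-2019-08-01'
```

If no line matches, `directory` stays `None`, which is why callers such as `check_call_and_report` also check the child's exit code.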
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_create_cdb.py b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_create_cdb.py
new file mode 100644
index 0000000..c26fce0
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_create_cdb.py
@@ -0,0 +1,191 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear
+from . import make_args, silent_check_call, silent_call, create_empty_file
+import unittest
+
+import os.path
+import json
+
+
+class CompilationDatabaseTest(unittest.TestCase):
+ @staticmethod
+ def run_intercept(tmpdir, args):
+ result = os.path.join(tmpdir, 'cdb.json')
+ make = make_args(tmpdir) + args
+ silent_check_call(
+ ['intercept-build', '--cdb', result] + make)
+ return result
+
+ @staticmethod
+ def count_entries(filename):
+ with open(filename, 'r') as handler:
+ content = json.load(handler)
+ return len(content)
+
+ def test_successful_build(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = self.run_intercept(tmpdir, ['build_regular'])
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(5, self.count_entries(result))
+
+ def test_successful_build_with_wrapper(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = os.path.join(tmpdir, 'cdb.json')
+ make = make_args(tmpdir) + ['build_regular']
+ silent_check_call(['intercept-build', '--cdb', result,
+ '--override-compiler'] + make)
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(5, self.count_entries(result))
+
+ @unittest.skipIf(os.getenv('TRAVIS'), 'ubuntu make returns -11')
+ def test_successful_build_parallel(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = self.run_intercept(tmpdir, ['-j', '4', 'build_regular'])
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(5, self.count_entries(result))
+
+ @unittest.skipIf(os.getenv('TRAVIS'), 'ubuntu env removes clang from path')
+ def test_successful_build_on_empty_env(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = os.path.join(tmpdir, 'cdb.json')
+ make = make_args(tmpdir) + ['CC=clang', 'build_regular']
+ silent_check_call(['intercept-build', '--cdb', result,
+ 'env', '-'] + make)
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(5, self.count_entries(result))
+
+ def test_successful_build_all_in_one(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = self.run_intercept(tmpdir, ['build_all_in_one'])
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(5, self.count_entries(result))
+
+ def test_not_successful_build(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = os.path.join(tmpdir, 'cdb.json')
+ make = make_args(tmpdir) + ['build_broken']
+ silent_call(
+ ['intercept-build', '--cdb', result] + make)
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(2, self.count_entries(result))
+
+
+class ExitCodeTest(unittest.TestCase):
+ @staticmethod
+ def run_intercept(tmpdir, target):
+ result = os.path.join(tmpdir, 'cdb.json')
+ make = make_args(tmpdir) + [target]
+ return silent_call(
+ ['intercept-build', '--cdb', result] + make)
+
+ def test_successful_build(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ exitcode = self.run_intercept(tmpdir, 'build_clean')
+ self.assertFalse(exitcode)
+
+ def test_not_successful_build(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ exitcode = self.run_intercept(tmpdir, 'build_broken')
+ self.assertTrue(exitcode)
+
+
+class ResumeFeatureTest(unittest.TestCase):
+ @staticmethod
+ def run_intercept(tmpdir, target, args):
+ result = os.path.join(tmpdir, 'cdb.json')
+ make = make_args(tmpdir) + [target]
+ silent_check_call(
+ ['intercept-build', '--cdb', result] + args + make)
+ return result
+
+ @staticmethod
+ def count_entries(filename):
+ with open(filename, 'r') as handler:
+ content = json.load(handler)
+ return len(content)
+
+ def test_overwrite_existing_cdb(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = self.run_intercept(tmpdir, 'build_clean', [])
+ self.assertTrue(os.path.isfile(result))
+ result = self.run_intercept(tmpdir, 'build_regular', [])
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(2, self.count_entries(result))
+
+ def test_append_to_existing_cdb(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ result = self.run_intercept(tmpdir, 'build_clean', [])
+ self.assertTrue(os.path.isfile(result))
+ result = self.run_intercept(tmpdir, 'build_regular', ['--append'])
+ self.assertTrue(os.path.isfile(result))
+ self.assertEqual(5, self.count_entries(result))
+
+
+class ResultFormattingTest(unittest.TestCase):
+ @staticmethod
+ def run_intercept(tmpdir, command):
+ result = os.path.join(tmpdir, 'cdb.json')
+ silent_check_call(
+ ['intercept-build', '--cdb', result] + command,
+ cwd=tmpdir)
+ with open(result, 'r') as handler:
+ content = json.load(handler)
+ return content
+
+ def assert_creates_number_of_entries(self, command, count):
+ with libear.TemporaryDirectory() as tmpdir:
+ filename = os.path.join(tmpdir, 'test.c')
+ create_empty_file(filename)
+ command.append(filename)
+ cmd = ['sh', '-c', ' '.join(command)]
+ cdb = self.run_intercept(tmpdir, cmd)
+ self.assertEqual(count, len(cdb))
+
+ def test_filter_preprocessor_only_calls(self):
+ self.assert_creates_number_of_entries(['cc', '-c'], 1)
+ self.assert_creates_number_of_entries(['cc', '-c', '-E'], 0)
+ self.assert_creates_number_of_entries(['cc', '-c', '-M'], 0)
+ self.assert_creates_number_of_entries(['cc', '-c', '-MM'], 0)
+
+ def assert_command_creates_entry(self, command, expected):
+ with libear.TemporaryDirectory() as tmpdir:
+ filename = os.path.join(tmpdir, command[-1])
+ create_empty_file(filename)
+ cmd = ['sh', '-c', ' '.join(command)]
+ cdb = self.run_intercept(tmpdir, cmd)
+ self.assertEqual(' '.join(expected), cdb[0]['command'])
+
+ def test_filter_preprocessor_flags(self):
+ self.assert_command_creates_entry(
+ ['cc', '-c', '-MD', 'test.c'],
+ ['cc', '-c', 'test.c'])
+ self.assert_command_creates_entry(
+ ['cc', '-c', '-MMD', 'test.c'],
+ ['cc', '-c', 'test.c'])
+ self.assert_command_creates_entry(
+ ['cc', '-c', '-MD', '-MF', 'test.d', 'test.c'],
+ ['cc', '-c', 'test.c'])
+
+ def test_pass_language_flag(self):
+ self.assert_command_creates_entry(
+ ['cc', '-c', '-x', 'c', 'test.c'],
+ ['cc', '-c', '-x', 'c', 'test.c'])
+ self.assert_command_creates_entry(
+ ['cc', '-c', 'test.c'],
+ ['cc', '-c', 'test.c'])
+
+ def test_pass_arch_flags(self):
+ self.assert_command_creates_entry(
+ ['clang', '-c', 'test.c'],
+ ['cc', '-c', 'test.c'])
+ self.assert_command_creates_entry(
+ ['clang', '-c', '-arch', 'i386', 'test.c'],
+ ['cc', '-c', '-arch', 'i386', 'test.c'])
+ self.assert_command_creates_entry(
+ ['clang', '-c', '-arch', 'i386', '-arch', 'armv7l', 'test.c'],
+ ['cc', '-c', '-arch', 'i386', '-arch', 'armv7l', 'test.c'])
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py
new file mode 100644
index 0000000..d58a612
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_exec_anatomy.py
@@ -0,0 +1,50 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear
+import unittest
+
+import os.path
+import subprocess
+import json
+
+
+def run(source_dir, target_dir):
+ def execute(cmd):
+ return subprocess.check_call(cmd,
+ cwd=target_dir,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT)
+
+ execute(['cmake', source_dir])
+ execute(['make'])
+
+ result_file = os.path.join(target_dir, 'result.json')
+ expected_file = os.path.join(target_dir, 'expected.json')
+ execute(['intercept-build', '--cdb', result_file, './exec',
+ expected_file])
+ return (expected_file, result_file)
+
+
+class ExecAnatomyTest(unittest.TestCase):
+ def assertEqualJson(self, expected, result):
+ def read_json(filename):
+ with open(filename) as handler:
+ return json.load(handler)
+
+ lhs = read_json(expected)
+ rhs = read_json(result)
+ for item in lhs:
+ self.assertTrue(rhs.count(item))
+ for item in rhs:
+ self.assertTrue(lhs.count(item))
+
+ def test_all_exec_calls(self):
+ this_dir, _ = os.path.split(__file__)
+ source_dir = os.path.normpath(os.path.join(this_dir, '..', 'exec'))
+ with libear.TemporaryDirectory() as tmp_dir:
+ expected, result = run(source_dir, tmp_dir)
+ self.assertEqualJson(expected, result)
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_from_cdb.py b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_from_cdb.py
new file mode 100644
index 0000000..5026400
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_from_cdb.py
@@ -0,0 +1,182 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear
+from . import call_and_report
+import unittest
+
+import os.path
+import string
+import glob
+
+
+def prepare_cdb(name, target_dir):
+ target_file = 'build_{0}.json'.format(name)
+ this_dir, _ = os.path.split(__file__)
+ path = os.path.normpath(os.path.join(this_dir, '..', 'src'))
+ source_dir = os.path.join(path, 'compilation_database')
+ source_file = os.path.join(source_dir, target_file + '.in')
+ target_file = os.path.join(target_dir, 'compile_commands.json')
+ with open(source_file, 'r') as in_handle:
+ with open(target_file, 'w') as out_handle:
+ for line in in_handle:
+ temp = string.Template(line)
+ out_handle.write(temp.substitute(path=path))
+ return target_file
+
+
+def run_analyzer(directory, cdb, args):
+ cmd = ['analyze-build', '--cdb', cdb, '--output', directory] \
+ + args
+ return call_and_report(cmd, [])
+
+
+class OutputDirectoryTest(unittest.TestCase):
+ def test_regular_keeps_report_dir(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('regular', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+ self.assertTrue(os.path.isdir(reportdir))
+
+ def test_clear_deletes_report_dir(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('clean', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+ self.assertFalse(os.path.isdir(reportdir))
+
+ def test_clear_keeps_report_dir_when_asked(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('clean', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, ['--keep-empty'])
+ self.assertTrue(os.path.isdir(reportdir))
+
+
+class ExitCodeTest(unittest.TestCase):
+ def test_regular_does_not_set_exit_code(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('regular', tmpdir)
+ exit_code, __ = run_analyzer(tmpdir, cdb, [])
+ self.assertFalse(exit_code)
+
+ def test_clear_does_not_set_exit_code(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('clean', tmpdir)
+ exit_code, __ = run_analyzer(tmpdir, cdb, [])
+ self.assertFalse(exit_code)
+
+ def test_regular_sets_exit_code_if_asked(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('regular', tmpdir)
+ exit_code, __ = run_analyzer(tmpdir, cdb, ['--status-bugs'])
+ self.assertTrue(exit_code)
+
+ def test_clear_does_not_set_exit_code_if_asked(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('clean', tmpdir)
+ exit_code, __ = run_analyzer(tmpdir, cdb, ['--status-bugs'])
+ self.assertFalse(exit_code)
+
+ def test_regular_sets_exit_code_if_asked_from_plist(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('regular', tmpdir)
+ exit_code, __ = run_analyzer(
+ tmpdir, cdb, ['--status-bugs', '--plist'])
+ self.assertTrue(exit_code)
+
+ def test_clear_does_not_set_exit_code_if_asked_from_plist(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('clean', tmpdir)
+ exit_code, __ = run_analyzer(
+ tmpdir, cdb, ['--status-bugs', '--plist'])
+ self.assertFalse(exit_code)
+
+
+class OutputFormatTest(unittest.TestCase):
+ @staticmethod
+ def get_html_count(directory):
+ return len(glob.glob(os.path.join(directory, 'report-*.html')))
+
+ @staticmethod
+ def get_plist_count(directory):
+ return len(glob.glob(os.path.join(directory, 'report-*.plist')))
+
+ def test_default_creates_html_report(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('regular', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+ self.assertTrue(
+ os.path.exists(os.path.join(reportdir, 'index.html')))
+ self.assertEqual(self.get_html_count(reportdir), 2)
+ self.assertEqual(self.get_plist_count(reportdir), 0)
+
+ def test_plist_and_html_creates_html_report(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('regular', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, ['--plist-html'])
+ self.assertTrue(
+ os.path.exists(os.path.join(reportdir, 'index.html')))
+ self.assertEqual(self.get_html_count(reportdir), 2)
+ self.assertEqual(self.get_plist_count(reportdir), 5)
+
+ def test_plist_does_not_create_html_report(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('regular', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, ['--plist'])
+ self.assertFalse(
+ os.path.exists(os.path.join(reportdir, 'index.html')))
+ self.assertEqual(self.get_html_count(reportdir), 0)
+ self.assertEqual(self.get_plist_count(reportdir), 5)
+
+
+class FailureReportTest(unittest.TestCase):
+ def test_broken_creates_failure_reports(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('broken', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+ self.assertTrue(
+ os.path.isdir(os.path.join(reportdir, 'failures')))
+
+ def test_broken_does_not_create_failure_reports(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('broken', tmpdir)
+ exit_code, reportdir = run_analyzer(
+ tmpdir, cdb, ['--no-failure-reports'])
+ self.assertFalse(
+ os.path.isdir(os.path.join(reportdir, 'failures')))
+
+
+class TitleTest(unittest.TestCase):
+ def assertTitleEqual(self, directory, expected):
+ import re
+ patterns = [
+ re.compile(r'<title>(?P<page>.*)</title>'),
+ re.compile(r'<h1>(?P<head>.*)</h1>')
+ ]
+ result = dict()
+
+ index = os.path.join(directory, 'index.html')
+ with open(index, 'r') as handler:
+ for line in handler.readlines():
+ for regex in patterns:
+ match = regex.match(line.strip())
+ if match:
+ result.update(match.groupdict())
+ break
+ self.assertEqual(result['page'], result['head'])
+ self.assertEqual(result['page'], expected)
+
+ def test_default_title_in_report(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('broken', tmpdir)
+ exit_code, reportdir = run_analyzer(tmpdir, cdb, [])
+ self.assertTitleEqual(reportdir, 'src - analyzer results')
+
+ def test_given_title_in_report(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ cdb = prepare_cdb('broken', tmpdir)
+ exit_code, reportdir = run_analyzer(
+ tmpdir, cdb, ['--html-title', 'this is the title'])
+ self.assertTitleEqual(reportdir, 'this is the title')
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_from_cmd.py b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_from_cmd.py
new file mode 100644
index 0000000..0eee4bb
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/cases/test_from_cmd.py
@@ -0,0 +1,118 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear
+from . import make_args, check_call_and_report, create_empty_file
+import unittest
+
+import os
+import os.path
+import glob
+
+
+class OutputDirectoryTest(unittest.TestCase):
+
+ @staticmethod
+ def run_analyzer(outdir, args, cmd):
+ return check_call_and_report(
+ ['scan-build', '--intercept-first', '-o', outdir] + args,
+ cmd)
+
+ def test_regular_keeps_report_dir(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ make = make_args(tmpdir) + ['build_regular']
+ outdir = self.run_analyzer(tmpdir, [], make)
+ self.assertTrue(os.path.isdir(outdir))
+
+ def test_clear_deletes_report_dir(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ make = make_args(tmpdir) + ['build_clean']
+ outdir = self.run_analyzer(tmpdir, [], make)
+ self.assertFalse(os.path.isdir(outdir))
+
+ def test_clear_keeps_report_dir_when_asked(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ make = make_args(tmpdir) + ['build_clean']
+ outdir = self.run_analyzer(tmpdir, ['--keep-empty'], make)
+ self.assertTrue(os.path.isdir(outdir))
+
+
+class RunAnalyzerTest(unittest.TestCase):
+
+ @staticmethod
+ def get_plist_count(directory):
+ return len(glob.glob(os.path.join(directory, 'report-*.plist')))
+
+ def test_interposition_works(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ make = make_args(tmpdir) + ['build_regular']
+ outdir = check_call_and_report(
+ ['scan-build', '--plist', '-o', tmpdir, '--override-compiler'],
+ make)
+
+ self.assertTrue(os.path.isdir(outdir))
+ self.assertEqual(self.get_plist_count(outdir), 5)
+
+ def test_intercept_wrapper_works(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ make = make_args(tmpdir) + ['build_regular']
+ outdir = check_call_and_report(
+ ['scan-build', '--plist', '-o', tmpdir, '--intercept-first',
+ '--override-compiler'],
+ make)
+
+ self.assertTrue(os.path.isdir(outdir))
+ self.assertEqual(self.get_plist_count(outdir), 5)
+
+ def test_intercept_library_works(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ make = make_args(tmpdir) + ['build_regular']
+ outdir = check_call_and_report(
+ ['scan-build', '--plist', '-o', tmpdir, '--intercept-first'],
+ make)
+
+ self.assertTrue(os.path.isdir(outdir))
+ self.assertEqual(self.get_plist_count(outdir), 5)
+
+ @staticmethod
+ def compile_empty_source_file(target_dir, is_cxx):
+ compiler = '$CXX' if is_cxx else '$CC'
+ src_file_name = 'test.cxx' if is_cxx else 'test.c'
+ src_file = os.path.join(target_dir, src_file_name)
+ obj_file = os.path.join(target_dir, 'test.o')
+ create_empty_file(src_file)
+ command = ' '.join([compiler, '-c', src_file, '-o', obj_file])
+ return ['sh', '-c', command]
+
+ def test_interposition_cc_works(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ outdir = check_call_and_report(
+ ['scan-build', '--plist', '-o', tmpdir, '--override-compiler'],
+ self.compile_empty_source_file(tmpdir, False))
+ self.assertEqual(self.get_plist_count(outdir), 1)
+
+ def test_interposition_cxx_works(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ outdir = check_call_and_report(
+ ['scan-build', '--plist', '-o', tmpdir, '--override-compiler'],
+ self.compile_empty_source_file(tmpdir, True))
+ self.assertEqual(self.get_plist_count(outdir), 1)
+
+ def test_intercept_cc_works(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ outdir = check_call_and_report(
+ ['scan-build', '--plist', '-o', tmpdir, '--override-compiler',
+ '--intercept-first'],
+ self.compile_empty_source_file(tmpdir, False))
+ self.assertEqual(self.get_plist_count(outdir), 1)
+
+ def test_intercept_cxx_works(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ outdir = check_call_and_report(
+ ['scan-build', '--plist', '-o', tmpdir, '--override-compiler',
+ '--intercept-first'],
+ self.compile_empty_source_file(tmpdir, True))
+ self.assertEqual(self.get_plist_count(outdir), 1)
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/CMakeLists.txt b/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/CMakeLists.txt
new file mode 100644
index 0000000..42ee1d1
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/CMakeLists.txt
@@ -0,0 +1,32 @@
+project(exec C)
+
+cmake_minimum_required(VERSION 3.4.3)
+
+include(CheckCCompilerFlag)
+check_c_compiler_flag("-std=c99" C99_SUPPORTED)
+if (C99_SUPPORTED)
+ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c99")
+endif()
+
+include(CheckFunctionExists)
+include(CheckSymbolExists)
+
+add_definitions(-D_GNU_SOURCE)
+list(APPEND CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE)
+
+check_function_exists(execve HAVE_EXECVE)
+check_function_exists(execv HAVE_EXECV)
+check_function_exists(execvpe HAVE_EXECVPE)
+check_function_exists(execvp HAVE_EXECVP)
+check_function_exists(execvP HAVE_EXECVP2)
+check_function_exists(exect HAVE_EXECT)
+check_function_exists(execl HAVE_EXECL)
+check_function_exists(execlp HAVE_EXECLP)
+check_function_exists(execle HAVE_EXECLE)
+check_function_exists(posix_spawn HAVE_POSIX_SPAWN)
+check_function_exists(posix_spawnp HAVE_POSIX_SPAWNP)
+
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in ${CMAKE_CURRENT_BINARY_DIR}/config.h)
+include_directories(${CMAKE_CURRENT_BINARY_DIR})
+
+add_executable(exec main.c)
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/config.h.in b/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/config.h.in
new file mode 100644
index 0000000..6221083
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/config.h.in
@@ -0,0 +1,20 @@
+/* -*- coding: utf-8 -*-
+// The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
+#pragma once
+
+#cmakedefine HAVE_EXECVE
+#cmakedefine HAVE_EXECV
+#cmakedefine HAVE_EXECVPE
+#cmakedefine HAVE_EXECVP
+#cmakedefine HAVE_EXECVP2
+#cmakedefine HAVE_EXECT
+#cmakedefine HAVE_EXECL
+#cmakedefine HAVE_EXECLP
+#cmakedefine HAVE_EXECLE
+#cmakedefine HAVE_POSIX_SPAWN
+#cmakedefine HAVE_POSIX_SPAWNP
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/main.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/main.c
new file mode 100644
index 0000000..830cf37
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/exec/main.c
@@ -0,0 +1,307 @@
+/* -*- coding: utf-8 -*-
+// The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+*/
+
+#include "config.h"
+
+#include <sys/wait.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <paths.h>
+
+#if defined HAVE_POSIX_SPAWN || defined HAVE_POSIX_SPAWNP
+#include <spawn.h>
+#endif
+
+// ..:: environment access fixer - begin ::..
+#ifdef HAVE_NSGETENVIRON
+#include <crt_externs.h>
+#else
+extern char **environ;
+#endif
+
+char **get_environ() {
+#ifdef HAVE_NSGETENVIRON
+ return *_NSGetEnviron();
+#else
+ return environ;
+#endif
+}
+// ..:: environment access fixer - end ::..
+
+// ..:: test fixtures - begin ::..
+static char const *cwd = NULL;
+static FILE *fd = NULL;
+static int need_comma = 0;
+
+void expected_out_open(const char *expected) {
+ cwd = getcwd(NULL, 0);
+ fd = fopen(expected, "w");
+ if (!fd) {
+ perror("fopen");
+ exit(EXIT_FAILURE);
+ }
+ fprintf(fd, "[\n");
+ need_comma = 0;
+}
+
+void expected_out_close() {
+ fprintf(fd, "]\n");
+ fclose(fd);
+ fd = NULL;
+
+ free((void *)cwd);
+ cwd = NULL;
+}
+
+void expected_out(const char *file) {
+ if (need_comma)
+ fprintf(fd, ",\n");
+ else
+ need_comma = 1;
+
+ fprintf(fd, "{\n");
+ fprintf(fd, " \"directory\": \"%s\",\n", cwd);
+ fprintf(fd, " \"command\": \"cc -c %s\",\n", file);
+ fprintf(fd, " \"file\": \"%s/%s\"\n", cwd, file);
+ fprintf(fd, "}\n");
+}
+
+void create_source(char *file) {
+ FILE *fd = fopen(file, "w");
+ if (!fd) {
+ perror("fopen");
+ exit(EXIT_FAILURE);
+ }
+ fprintf(fd, "typedef int score;\n");
+ fclose(fd);
+}
+
+typedef void (*exec_fun)();
+
+void wait_for(pid_t child) {
+ int status;
+ if (-1 == waitpid(child, &status, 0)) {
+ perror("wait");
+ exit(EXIT_FAILURE);
+ }
+ if (WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE) {
+ fprintf(stderr, "child process has non-zero exit code\n");
+ exit(EXIT_FAILURE);
+ }
+}
+
+#define FORK(FUNC) \
+ { \
+ pid_t child = fork(); \
+ if (-1 == child) { \
+ perror("fork"); \
+ exit(EXIT_FAILURE); \
+ } else if (0 == child) { \
+ FUNC fprintf(stderr, "child process failed to exec\n"); \
+ exit(EXIT_FAILURE); \
+ } else { \
+ wait_for(child); \
+ } \
+ }
+// ..:: test fixtures - end ::..
+
+#ifdef HAVE_EXECV
+void call_execv() {
+ char *const file = "execv.c";
+ char *const compiler = "/usr/bin/cc";
+ char *const argv[] = {"cc", "-c", file, 0};
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execv(compiler, argv);)
+}
+#endif
+
+#ifdef HAVE_EXECVE
+void call_execve() {
+ char *const file = "execve.c";
+ char *const compiler = "/usr/bin/cc";
+ char *const argv[] = {compiler, "-c", file, 0};
+ char *const envp[] = {"THIS=THAT", 0};
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execve(compiler, argv, envp);)
+}
+#endif
+
+#ifdef HAVE_EXECVP
+void call_execvp() {
+ char *const file = "execvp.c";
+ char *const compiler = "cc";
+ char *const argv[] = {compiler, "-c", file, 0};
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execvp(compiler, argv);)
+}
+#endif
+
+#ifdef HAVE_EXECVP2
+void call_execvP() {
+ char *const file = "execv_p.c";
+ char *const compiler = "cc";
+ char *const argv[] = {compiler, "-c", file, 0};
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execvP(compiler, _PATH_DEFPATH, argv);)
+}
+#endif
+
+#ifdef HAVE_EXECVPE
+void call_execvpe() {
+ char *const file = "execvpe.c";
+ char *const compiler = "cc";
+ char *const argv[] = {"/usr/bin/cc", "-c", file, 0};
+ char *const envp[] = {"THIS=THAT", 0};
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execvpe(compiler, argv, envp);)
+}
+#endif
+
+#ifdef HAVE_EXECT
+void call_exect() {
+ char *const file = "exect.c";
+ char *const compiler = "/usr/bin/cc";
+ char *const argv[] = {compiler, "-c", file, 0};
+ char *const envp[] = {"THIS=THAT", 0};
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(exect(compiler, argv, envp);)
+}
+#endif
+
+#ifdef HAVE_EXECL
+void call_execl() {
+ char *const file = "execl.c";
+ char *const compiler = "/usr/bin/cc";
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execl(compiler, "cc", "-c", file, (char *)0);)
+}
+#endif
+
+#ifdef HAVE_EXECLP
+void call_execlp() {
+ char *const file = "execlp.c";
+ char *const compiler = "cc";
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execlp(compiler, compiler, "-c", file, (char *)0);)
+}
+#endif
+
+#ifdef HAVE_EXECLE
+void call_execle() {
+ char *const file = "execle.c";
+ char *const compiler = "/usr/bin/cc";
+ char *const envp[] = {"THIS=THAT", 0};
+
+ expected_out(file);
+ create_source(file);
+
+ FORK(execle(compiler, compiler, "-c", file, (char *)0, envp);)
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWN
+void call_posix_spawn() {
+ char *const file = "posix_spawn.c";
+ char *const compiler = "cc";
+ char *const argv[] = {compiler, "-c", file, 0};
+
+ expected_out(file);
+ create_source(file);
+
+ pid_t child;
+ if (0 != posix_spawn(&child, "/usr/bin/cc", 0, 0, argv, get_environ())) {
+ perror("posix_spawn");
+ exit(EXIT_FAILURE);
+ }
+ wait_for(child);
+}
+#endif
+
+#ifdef HAVE_POSIX_SPAWNP
+void call_posix_spawnp() {
+ char *const file = "posix_spawnp.c";
+ char *const compiler = "cc";
+ char *const argv[] = {compiler, "-c", file, 0};
+
+ expected_out(file);
+ create_source(file);
+
+ pid_t child;
+ if (0 != posix_spawnp(&child, "cc", 0, 0, argv, get_environ())) {
+ perror("posix_spawnp");
+ exit(EXIT_FAILURE);
+ }
+ wait_for(child);
+}
+#endif
+
+int main(int argc, char *const argv[]) {
+ if (argc != 2)
+ exit(EXIT_FAILURE);
+
+ expected_out_open(argv[1]);
+#ifdef HAVE_EXECV
+ call_execv();
+#endif
+#ifdef HAVE_EXECVE
+ call_execve();
+#endif
+#ifdef HAVE_EXECVP
+ call_execvp();
+#endif
+#ifdef HAVE_EXECVP2
+ call_execvP();
+#endif
+#ifdef HAVE_EXECVPE
+ call_execvpe();
+#endif
+#ifdef HAVE_EXECT
+ call_exect();
+#endif
+#ifdef HAVE_EXECL
+ call_execl();
+#endif
+#ifdef HAVE_EXECLP
+ call_execlp();
+#endif
+#ifdef HAVE_EXECLE
+ call_execle();
+#endif
+#ifdef HAVE_POSIX_SPAWN
+ call_posix_spawn();
+#endif
+#ifdef HAVE_POSIX_SPAWNP
+ call_posix_spawnp();
+#endif
+ expected_out_close();
+ return 0;
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/broken-one.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/broken-one.c
new file mode 100644
index 0000000..f055023
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/broken-one.c
@@ -0,0 +1,6 @@
+#include <notexisting.hpp>
+
+int value(int in)
+{
+ return 2 * in;
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/broken-two.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/broken-two.c
new file mode 100644
index 0000000..7b4c12f
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/broken-two.c
@@ -0,0 +1 @@
+int test() { ;
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/build/Makefile b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/build/Makefile
new file mode 100644
index 0000000..a8c0aaf
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/build/Makefile
@@ -0,0 +1,42 @@
+SRCDIR := ..
+OBJDIR := .
+
+CFLAGS = -Wall -DDEBUG -Dvariable="value with space" -I $(SRCDIR)/include
+LDFLAGS =
+PROGRAM = $(OBJDIR)/prg
+
+$(OBJDIR)/main.o: $(SRCDIR)/main.c
+ $(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/main.c
+
+$(OBJDIR)/clean-one.o: $(SRCDIR)/clean-one.c
+ $(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/clean-one.c
+
+$(OBJDIR)/clean-two.o: $(SRCDIR)/clean-two.c
+ $(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/clean-two.c
+
+$(OBJDIR)/emit-one.o: $(SRCDIR)/emit-one.c
+ $(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/emit-one.c
+
+$(OBJDIR)/emit-two.o: $(SRCDIR)/emit-two.c
+ $(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/emit-two.c
+
+$(OBJDIR)/broken-one.o: $(SRCDIR)/broken-one.c
+ $(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/broken-one.c
+
+$(OBJDIR)/broken-two.o: $(SRCDIR)/broken-two.c
+ $(CC) $(CFLAGS) -c -o $@ $(SRCDIR)/broken-two.c
+
+$(PROGRAM): $(OBJDIR)/main.o $(OBJDIR)/clean-one.o $(OBJDIR)/clean-two.o $(OBJDIR)/emit-one.o $(OBJDIR)/emit-two.o
+ $(CC) $(LDFLAGS) -o $@ $(OBJDIR)/main.o $(OBJDIR)/clean-one.o $(OBJDIR)/clean-two.o $(OBJDIR)/emit-one.o $(OBJDIR)/emit-two.o
+
+build_regular: $(PROGRAM)
+
+build_clean: $(OBJDIR)/main.o $(OBJDIR)/clean-one.o $(OBJDIR)/clean-two.o
+
+build_broken: $(OBJDIR)/main.o $(OBJDIR)/broken-one.o $(OBJDIR)/broken-two.o
+
+build_all_in_one: $(SRCDIR)/main.c $(SRCDIR)/clean-one.c $(SRCDIR)/clean-two.c $(SRCDIR)/emit-one.c $(SRCDIR)/emit-two.c
+ $(CC) $(CFLAGS) $(LDFLAGS) -o $(PROGRAM) $(SRCDIR)/main.c $(SRCDIR)/clean-one.c $(SRCDIR)/clean-two.c $(SRCDIR)/emit-one.c $(SRCDIR)/emit-two.c
+
+clean:
+ rm -f $(PROGRAM) $(OBJDIR)/*.o
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/clean-one.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/clean-one.c
new file mode 100644
index 0000000..08c5f33
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/clean-one.c
@@ -0,0 +1,13 @@
+#include <clean-one.h>
+
+int do_nothing_loop()
+{
+ int i = 32;
+ int idx = 0;
+
+ for (idx = i; idx > 0; --idx)
+ {
+ i += idx;
+ }
+ return i;
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/clean-two.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/clean-two.c
new file mode 100644
index 0000000..73bc288
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/clean-two.c
@@ -0,0 +1,11 @@
+#include <clean-one.h>
+
+#include <stdlib.h>
+
+unsigned int another_method()
+{
+ unsigned int const size = do_nothing_loop();
+ unsigned int const square = size * size;
+
+ return square;
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in
new file mode 100644
index 0000000..104a419
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_broken.json.in
@@ -0,0 +1,43 @@
+[
+{
+ "directory": "${path}",
+ "command": "g++ -c -o main.o main.c -Wall -DDEBUG -Dvariable=value",
+ "file": "${path}/main.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "cc -c -o broken-one.o broken-one.c -Wall -DDEBUG \"-Dvariable=value with space\"",
+ "file": "${path}/broken-one.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "g++ -c -o broken-two.o broken-two.c -Wall -DDEBUG -Dvariable=value",
+ "file": "${path}/broken-two.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "cc -c -o clean-one.o clean-one.c -Wall -DDEBUG \"-Dvariable=value with space\" -Iinclude",
+ "file": "${path}/clean-one.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "g++ -c -o clean-two.o clean-two.c -Wall -DDEBUG -Dvariable=value -I ./include",
+ "file": "${path}/clean-two.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "cc -c -o emit-one.o emit-one.c -Wall -DDEBUG \"-Dvariable=value with space\"",
+ "file": "${path}/emit-one.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "g++ -c -o emit-two.o emit-two.c -Wall -DDEBUG -Dvariable=value",
+ "file": "${path}/emit-two.c"
+}
+]
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in
new file mode 100644
index 0000000..aa4dcde
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_clean.json.in
@@ -0,0 +1,19 @@
+[
+{
+ "directory": "${path}",
+ "command": "g++ -c -o main.o main.c -Wall -DDEBUG -Dvariable=value",
+ "file": "${path}/main.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "cc -c -o clean-one.o clean-one.c -Wall -DDEBUG \"-Dvariable=value with space\" -Iinclude",
+ "file": "${path}/clean-one.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "g++ -c -o clean-two.o clean-two.c -Wall -DDEBUG -Dvariable=value -I ./include",
+ "file": "${path}/clean-two.c"
+}
+]
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in
new file mode 100644
index 0000000..0200c1d
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/compilation_database/build_regular.json.in
@@ -0,0 +1,31 @@
+[
+{
+ "directory": "${path}",
+ "command": "g++ -c -o main.o main.c -Wall -DDEBUG -Dvariable=value",
+ "file": "${path}/main.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "cc -c -o clean-one.o clean-one.c -Wall -DDEBUG \"-Dvariable=value with space\" -Iinclude",
+ "file": "${path}/clean-one.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "g++ -c -o clean-two.o clean-two.c -Wall -DDEBUG -Dvariable=value -I ./include",
+ "file": "${path}/clean-two.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "cc -c -o emit-one.o emit-one.c -Wall -DDEBUG \"-Dvariable=value with space\"",
+ "file": "${path}/emit-one.c"
+}
+,
+{
+ "directory": "${path}",
+ "command": "g++ -c -o emit-two.o emit-two.c -Wall -DDEBUG -Dvariable=value",
+ "file": "${path}/emit-two.c"
+}
+]
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/emit-one.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/emit-one.c
new file mode 100644
index 0000000..6cbd9ce
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/emit-one.c
@@ -0,0 +1,23 @@
+#include <assert.h>
+
+int div(int numerator, int denominator)
+{
+ return numerator / denominator;
+}
+
+void div_test()
+{
+ int i = 0;
+ for (i = 0; i < 2; ++i)
+ assert(div(2 * i, i) == 2);
+}
+
+int do_nothing()
+{
+ unsigned int i = 0;
+
+ int k = 100;
+ int j = k + 1;
+
+ return j;
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/emit-two.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/emit-two.c
new file mode 100644
index 0000000..faea771
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/emit-two.c
@@ -0,0 +1,13 @@
+
+int bad_guy(int * i)
+{
+ *i = 9;
+ return *i;
+}
+
+void bad_guy_test()
+{
+ int * ptr = 0;
+
+ bad_guy(ptr);
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/include/clean-one.h b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/include/clean-one.h
new file mode 100644
index 0000000..695dbd0
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/include/clean-one.h
@@ -0,0 +1,6 @@
+#ifndef CLEAN_ONE_H
+#define CLEAN_ONE_H
+
+int do_nothing_loop();
+
+#endif
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/main.c b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/main.c
new file mode 100644
index 0000000..905869d
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/functional/src/main.c
@@ -0,0 +1,4 @@
+int main()
+{
+ return 0;
+}
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/__init__.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/__init__.py
new file mode 100644
index 0000000..6b7fd9f
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/__init__.py
@@ -0,0 +1,24 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+from . import test_libear
+from . import test_compilation
+from . import test_clang
+from . import test_report
+from . import test_analyze
+from . import test_intercept
+from . import test_shell
+
+
+def load_tests(loader, suite, _):
+ suite.addTests(loader.loadTestsFromModule(test_libear))
+ suite.addTests(loader.loadTestsFromModule(test_compilation))
+ suite.addTests(loader.loadTestsFromModule(test_clang))
+ suite.addTests(loader.loadTestsFromModule(test_report))
+ suite.addTests(loader.loadTestsFromModule(test_analyze))
+ suite.addTests(loader.loadTestsFromModule(test_intercept))
+ suite.addTests(loader.loadTestsFromModule(test_shell))
+ return suite
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_analyze.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_analyze.py
new file mode 100644
index 0000000..768a3b6
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_analyze.py
@@ -0,0 +1,415 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import unittest
+import re
+import os
+import os.path
+import libear
+import libscanbuild.analyze as sut
+
+
+class ReportDirectoryTest(unittest.TestCase):
+
+ # Test that successive report directory names ascend in lexicographic
+ # order. This is required so that report directories from two runs of
+ # scan-build can be easily matched up to compare results.
+ def test_directory_name_comparison(self):
+ with libear.TemporaryDirectory() as tmpdir, \
+ sut.report_directory(tmpdir, False) as report_dir1, \
+ sut.report_directory(tmpdir, False) as report_dir2, \
+ sut.report_directory(tmpdir, False) as report_dir3:
+ self.assertLess(report_dir1, report_dir2)
+ self.assertLess(report_dir2, report_dir3)
+
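The ordering property that `test_directory_name_comparison` relies on can be sketched outside the diff: if report directory names embed a zero-padded timestamp, plain string comparison matches chronological order. The `report_dir_name` helper and its format below are hypothetical illustrations, not the actual naming scheme used by `libscanbuild.analyze.report_directory`.

```python
import datetime

def report_dir_name(stamp, seq):
    # Zero-padded fields ("%H%M%S", not "%-H") keep lexicographic order
    # aligned with chronological order, so sorted() lists runs in the
    # order they were created.
    return '{0}-{1}'.format(stamp.strftime('%Y-%m-%d-%H%M%S'), seq)

first = report_dir_name(datetime.datetime(2016, 2, 16, 19, 0, 5), 1)
second = report_dir_name(datetime.datetime(2016, 2, 16, 19, 0, 6), 1)
assert first < second  # string comparison suffices
```

This is why the test only needs `assertLess` on the returned paths rather than parsing timestamps back out of them.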
+
+class FilteringFlagsTest(unittest.TestCase):
+
+ def test_language_captured(self):
+ def test(flags):
+ cmd = ['clang', '-c', 'source.c'] + flags
+ opts = sut.classify_parameters(cmd)
+ return opts['language']
+
+ self.assertEqual(None, test([]))
+ self.assertEqual('c', test(['-x', 'c']))
+ self.assertEqual('cpp', test(['-x', 'cpp']))
+
+ def test_arch(self):
+ def test(flags):
+ cmd = ['clang', '-c', 'source.c'] + flags
+ opts = sut.classify_parameters(cmd)
+ return opts['arch_list']
+
+ self.assertEqual([], test([]))
+ self.assertEqual(['mips'], test(['-arch', 'mips']))
+ self.assertEqual(['mips', 'i386'],
+ test(['-arch', 'mips', '-arch', 'i386']))
+
+ def assertFlagsChanged(self, expected, flags):
+ cmd = ['clang', '-c', 'source.c'] + flags
+ opts = sut.classify_parameters(cmd)
+ self.assertEqual(expected, opts['flags'])
+
+ def assertFlagsUnchanged(self, flags):
+ self.assertFlagsChanged(flags, flags)
+
+ def assertFlagsFiltered(self, flags):
+ self.assertFlagsChanged([], flags)
+
+ def test_optimizations_pass(self):
+ self.assertFlagsUnchanged(['-O'])
+ self.assertFlagsUnchanged(['-O1'])
+ self.assertFlagsUnchanged(['-Os'])
+ self.assertFlagsUnchanged(['-O2'])
+ self.assertFlagsUnchanged(['-O3'])
+
+ def test_include_pass(self):
+ self.assertFlagsUnchanged([])
+ self.assertFlagsUnchanged(['-include', '/usr/local/include'])
+ self.assertFlagsUnchanged(['-I.'])
+ self.assertFlagsUnchanged(['-I', '.'])
+ self.assertFlagsUnchanged(['-I/usr/local/include'])
+ self.assertFlagsUnchanged(['-I', '/usr/local/include'])
+ self.assertFlagsUnchanged(['-I/opt', '-I', '/opt/otp/include'])
+ self.assertFlagsUnchanged(['-isystem', '/path'])
+ self.assertFlagsUnchanged(['-isystem=/path'])
+
+ def test_define_pass(self):
+ self.assertFlagsUnchanged(['-DNDEBUG'])
+ self.assertFlagsUnchanged(['-UNDEBUG'])
+ self.assertFlagsUnchanged(['-Dvar1=val1', '-Dvar2=val2'])
+ self.assertFlagsUnchanged(['-Dvar="val ues"'])
+
+ def test_output_filtered(self):
+ self.assertFlagsFiltered(['-o', 'source.o'])
+
+ def test_some_warning_filtered(self):
+ self.assertFlagsFiltered(['-Wall'])
+ self.assertFlagsFiltered(['-Wnoexcept'])
+ self.assertFlagsFiltered(['-Wreorder', '-Wunused', '-Wundef'])
+ self.assertFlagsUnchanged(['-Wno-reorder', '-Wno-unused'])
+
+ def test_compile_only_flags_pass(self):
+ self.assertFlagsUnchanged(['-std=C99'])
+ self.assertFlagsUnchanged(['-nostdinc'])
+ self.assertFlagsUnchanged(['-isystem', '/image/debian'])
+ self.assertFlagsUnchanged(['-iprefix', '/usr/local'])
+ self.assertFlagsUnchanged(['-iquote=me'])
+ self.assertFlagsUnchanged(['-iquote', 'me'])
+
+ def test_compile_and_link_flags_pass(self):
+ self.assertFlagsUnchanged(['-fsigned-char'])
+ self.assertFlagsUnchanged(['-fPIC'])
+ self.assertFlagsUnchanged(['-stdlib=libc++'])
+ self.assertFlagsUnchanged(['--sysroot', '/'])
+ self.assertFlagsUnchanged(['-isysroot', '/'])
+
+ def test_some_flags_filtered(self):
+ self.assertFlagsFiltered(['-g'])
+ self.assertFlagsFiltered(['-fsyntax-only'])
+ self.assertFlagsFiltered(['-save-temps'])
+ self.assertFlagsFiltered(['-init', 'my_init'])
+ self.assertFlagsFiltered(['-sectorder', 'a', 'b', 'c'])
+
+
+class Spy(object):
+ def __init__(self):
+ self.arg = None
+ self.success = 0
+
+ def call(self, params):
+ self.arg = params
+ return self.success
+
+
+class RunAnalyzerTest(unittest.TestCase):
+
+ @staticmethod
+ def run_analyzer(content, failures_report):
+ with libear.TemporaryDirectory() as tmpdir:
+ filename = os.path.join(tmpdir, 'test.cpp')
+ with open(filename, 'w') as handle:
+ handle.write(content)
+
+ opts = {
+ 'clang': 'clang',
+ 'directory': os.getcwd(),
+ 'flags': [],
+ 'direct_args': [],
+ 'file': filename,
+ 'output_dir': tmpdir,
+ 'output_format': 'plist',
+ 'output_failures': failures_report
+ }
+ spy = Spy()
+ result = sut.run_analyzer(opts, spy.call)
+ return (result, spy.arg)
+
+ def test_run_analyzer(self):
+ content = "int div(int n, int d) { return n / d; }"
+ (result, fwds) = RunAnalyzerTest.run_analyzer(content, False)
+ self.assertEqual(None, fwds)
+ self.assertEqual(0, result['exit_code'])
+
+ def test_run_analyzer_crash(self):
+ content = "int div(int n, int d) { return n / d }"
+ (result, fwds) = RunAnalyzerTest.run_analyzer(content, False)
+ self.assertEqual(None, fwds)
+ self.assertEqual(1, result['exit_code'])
+
+ def test_run_analyzer_crash_and_forwarded(self):
+ content = "int div(int n, int d) { return n / d }"
+ (_, fwds) = RunAnalyzerTest.run_analyzer(content, True)
+ self.assertEqual(1, fwds['exit_code'])
+ self.assertTrue(len(fwds['error_output']) > 0)
+
+
+class ReportFailureTest(unittest.TestCase):
+
+ def assertUnderFailures(self, path):
+ self.assertEqual('failures', os.path.basename(os.path.dirname(path)))
+
+ def test_report_failure_create_files(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ # create input file
+ filename = os.path.join(tmpdir, 'test.c')
+ with open(filename, 'w') as handle:
+ handle.write('int main() { return 0')
+ uname_msg = ' '.join(os.uname()) + os.linesep
+ error_msg = 'this is my error output'
+ # execute test
+ opts = {
+ 'clang': 'clang',
+ 'directory': os.getcwd(),
+ 'flags': [],
+ 'file': filename,
+ 'output_dir': tmpdir,
+ 'language': 'c',
+ 'error_type': 'other_error',
+ 'error_output': error_msg,
+ 'exit_code': 13
+ }
+ sut.report_failure(opts)
+ # verify the result
+ result = dict()
+ pp_file = None
+ for root, _, files in os.walk(tmpdir):
+ keys = [os.path.join(root, name) for name in files]
+ for key in keys:
+ with open(key, 'r') as handle:
+ result[key] = handle.readlines()
+ if re.match(r'^(.*/)+clang(.*)\.i$', key):
+ pp_file = key
+
+ # preprocessor file generated
+ self.assertUnderFailures(pp_file)
+ # info file generated and content dumped
+ info_file = pp_file + '.info.txt'
+ self.assertTrue(info_file in result)
+ self.assertEqual('Other Error\n', result[info_file][1])
+ self.assertEqual(uname_msg, result[info_file][3])
+ # error file generated and content dumped
+ error_file = pp_file + '.stderr.txt'
+ self.assertTrue(error_file in result)
+ self.assertEqual([error_msg], result[error_file])
+
+
+class AnalyzerTest(unittest.TestCase):
+
+ def test_nodebug_macros_appended(self):
+ def test(flags):
+ spy = Spy()
+ opts = {'flags': flags, 'force_debug': True}
+ self.assertEqual(spy.success,
+ sut.filter_debug_flags(opts, spy.call))
+ return spy.arg['flags']
+
+ self.assertEqual(['-UNDEBUG'], test([]))
+ self.assertEqual(['-DNDEBUG', '-UNDEBUG'], test(['-DNDEBUG']))
+ self.assertEqual(['-DSomething', '-UNDEBUG'], test(['-DSomething']))
+
+ def test_set_language_fall_through(self):
+ def language(expected, input):
+ spy = Spy()
+ input.update({'compiler': 'c', 'file': 'test.c'})
+ self.assertEqual(spy.success, sut.language_check(input, spy.call))
+ self.assertEqual(expected, spy.arg['language'])
+
+ language('c', {'language': 'c', 'flags': []})
+ language('c++', {'language': 'c++', 'flags': []})
+
+ def test_set_language_stops_on_not_supported(self):
+ spy = Spy()
+ input = {
+ 'compiler': 'c',
+ 'flags': [],
+ 'file': 'test.java',
+ 'language': 'java'
+ }
+ self.assertIsNone(sut.language_check(input, spy.call))
+ self.assertIsNone(spy.arg)
+
+ def test_set_language_sets_flags(self):
+ def flags(expected, input):
+ spy = Spy()
+ input.update({'compiler': 'c', 'file': 'test.c'})
+ self.assertEqual(spy.success, sut.language_check(input, spy.call))
+ self.assertEqual(expected, spy.arg['flags'])
+
+ flags(['-x', 'c'], {'language': 'c', 'flags': []})
+ flags(['-x', 'c++'], {'language': 'c++', 'flags': []})
+
+ def test_set_language_from_filename(self):
+ def language(expected, input):
+ spy = Spy()
+ input.update({'language': None, 'flags': []})
+ self.assertEqual(spy.success, sut.language_check(input, spy.call))
+ self.assertEqual(expected, spy.arg['language'])
+
+ language('c', {'file': 'file.c', 'compiler': 'c'})
+ language('c++', {'file': 'file.c', 'compiler': 'c++'})
+ language('c++', {'file': 'file.cxx', 'compiler': 'c'})
+ language('c++', {'file': 'file.cxx', 'compiler': 'c++'})
+ language('c++', {'file': 'file.cpp', 'compiler': 'c++'})
+ language('c-cpp-output', {'file': 'file.i', 'compiler': 'c'})
+ language('c++-cpp-output', {'file': 'file.i', 'compiler': 'c++'})
+
+ def test_arch_loop_sets_flags(self):
+ def flags(archs):
+ spy = Spy()
+ input = {'flags': [], 'arch_list': archs}
+ sut.arch_check(input, spy.call)
+ return spy.arg['flags']
+
+ self.assertEqual([], flags([]))
+ self.assertEqual(['-arch', 'i386'], flags(['i386']))
+ self.assertEqual(['-arch', 'i386'], flags(['i386', 'ppc']))
+ self.assertEqual(['-arch', 'sparc'], flags(['i386', 'sparc']))
+
+ def test_arch_loop_stops_on_not_supported(self):
+ def stop(archs):
+ spy = Spy()
+ input = {'flags': [], 'arch_list': archs}
+ self.assertIsNone(sut.arch_check(input, spy.call))
+ self.assertIsNone(spy.arg)
+
+ stop(['ppc'])
+ stop(['ppc64'])
+
+
[email protected]([])
+def method_without_expecteds(opts):
+ return 0
+
+
[email protected](['this', 'that'])
+def method_with_expecteds(opts):
+ return 0
+
+
[email protected]([])
+def method_exception_from_inside(opts):
+ raise Exception('here is one')
+
+
+class RequireDecoratorTest(unittest.TestCase):
+
+ def test_method_without_expecteds(self):
+ self.assertEqual(method_without_expecteds(dict()), 0)
+ self.assertEqual(method_without_expecteds({}), 0)
+ self.assertEqual(method_without_expecteds({'this': 2}), 0)
+ self.assertEqual(method_without_expecteds({'that': 3}), 0)
+
+ def test_method_with_expecteds(self):
+ self.assertRaises(KeyError, method_with_expecteds, dict())
+ self.assertRaises(KeyError, method_with_expecteds, {})
+ self.assertRaises(KeyError, method_with_expecteds, {'this': 2})
+ self.assertRaises(KeyError, method_with_expecteds, {'that': 3})
+ self.assertEqual(method_with_expecteds({'this': 0, 'that': 3}), 0)
+
+ def test_method_exception_not_caught(self):
+ self.assertRaises(Exception, method_exception_from_inside, dict())
+
+
+class PrefixWithTest(unittest.TestCase):
+
+ def test_gives_empty_on_empty(self):
+ res = sut.prefix_with(0, [])
+ self.assertFalse(res)
+
+ def test_interleaves_prefix(self):
+ res = sut.prefix_with(0, [1, 2, 3])
+ self.assertListEqual([0, 1, 0, 2, 0, 3], res)
+
+
+class MergeCtuMapTest(unittest.TestCase):
+
+ def test_no_map_gives_empty(self):
+ pairs = sut.create_global_ctu_extdef_map([])
+ self.assertFalse(pairs)
+
+ def test_multiple_maps_merged(self):
+ concat_map = ['c:@F@fun1#I# ast/fun1.c.ast',
+ 'c:@F@fun2#I# ast/fun2.c.ast',
+ 'c:@F@fun3#I# ast/fun3.c.ast']
+ pairs = sut.create_global_ctu_extdef_map(concat_map)
+ self.assertTrue(('c:@F@fun1#I#', 'ast/fun1.c.ast') in pairs)
+ self.assertTrue(('c:@F@fun2#I#', 'ast/fun2.c.ast') in pairs)
+ self.assertTrue(('c:@F@fun3#I#', 'ast/fun3.c.ast') in pairs)
+ self.assertEqual(3, len(pairs))
+
+ def test_not_unique_func_left_out(self):
+ concat_map = ['c:@F@fun1#I# ast/fun1.c.ast',
+ 'c:@F@fun2#I# ast/fun2.c.ast',
+ 'c:@F@fun1#I# ast/fun7.c.ast']
+ pairs = sut.create_global_ctu_extdef_map(concat_map)
+ self.assertFalse(('c:@F@fun1#I#', 'ast/fun1.c.ast') in pairs)
+ self.assertFalse(('c:@F@fun1#I#', 'ast/fun7.c.ast') in pairs)
+ self.assertTrue(('c:@F@fun2#I#', 'ast/fun2.c.ast') in pairs)
+ self.assertEqual(1, len(pairs))
+
+ def test_duplicates_are_kept(self):
+ concat_map = ['c:@F@fun1#I# ast/fun1.c.ast',
+ 'c:@F@fun2#I# ast/fun2.c.ast',
+ 'c:@F@fun1#I# ast/fun1.c.ast']
+ pairs = sut.create_global_ctu_extdef_map(concat_map)
+ self.assertTrue(('c:@F@fun1#I#', 'ast/fun1.c.ast') in pairs)
+ self.assertTrue(('c:@F@fun2#I#', 'ast/fun2.c.ast') in pairs)
+ self.assertEqual(2, len(pairs))
+
+ def test_space_handled_in_source(self):
+ concat_map = ['c:@F@fun1#I# ast/f un.c.ast']
+ pairs = sut.create_global_ctu_extdef_map(concat_map)
+ self.assertTrue(('c:@F@fun1#I#', 'ast/f un.c.ast') in pairs)
+ self.assertEqual(1, len(pairs))
+
+
+class ExtdefMapSrcToAstTest(unittest.TestCase):
+
+ def test_empty_gives_empty(self):
+ fun_ast_lst = sut.extdef_map_list_src_to_ast([])
+ self.assertFalse(fun_ast_lst)
+
+ def test_sources_to_asts(self):
+ fun_src_lst = ['c:@F@f1#I# ' + os.path.join(os.sep + 'path', 'f1.c'),
+ 'c:@F@f2#I# ' + os.path.join(os.sep + 'path', 'f2.c')]
+ fun_ast_lst = sut.extdef_map_list_src_to_ast(fun_src_lst)
+ self.assertTrue('c:@F@f1#I# ' +
+ os.path.join('ast', 'path', 'f1.c.ast')
+ in fun_ast_lst)
+ self.assertTrue('c:@F@f2#I# ' +
+ os.path.join('ast', 'path', 'f2.c.ast')
+ in fun_ast_lst)
+ self.assertEqual(2, len(fun_ast_lst))
+
+ def test_spaces_handled(self):
+ fun_src_lst = ['c:@F@f1#I# ' + os.path.join(os.sep + 'path', 'f 1.c')]
+ fun_ast_lst = sut.extdef_map_list_src_to_ast(fun_src_lst)
+ self.assertTrue('c:@F@f1#I# ' +
+ os.path.join('ast', 'path', 'f 1.c.ast')
+ in fun_ast_lst)
+ self.assertEqual(1, len(fun_ast_lst))
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_clang.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_clang.py
new file mode 100644
index 0000000..7d625c6
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_clang.py
@@ -0,0 +1,106 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear
+import libscanbuild.clang as sut
+import unittest
+import os.path
+import sys
+
+
+class ClangGetVersion(unittest.TestCase):
+ def test_get_version_is_not_empty(self):
+ self.assertTrue(sut.get_version('clang'))
+
+ def test_get_version_throws(self):
+ with self.assertRaises(OSError):
+ sut.get_version('notexists')
+
+
+class ClangGetArgumentsTest(unittest.TestCase):
+ def test_get_clang_arguments(self):
+ with libear.TemporaryDirectory() as tmpdir:
+ filename = os.path.join(tmpdir, 'test.c')
+ with open(filename, 'w') as handle:
+ handle.write('')
+
+ result = sut.get_arguments(
+ ['clang', '-c', filename, '-DNDEBUG', '-Dvar="this is it"'],
+ tmpdir)
+
+ self.assertTrue('NDEBUG' in result)
+ self.assertTrue('var="this is it"' in result)
+
+ def test_get_clang_arguments_fails(self):
+ with self.assertRaises(Exception):
+ sut.get_arguments(['clang', '-x', 'c', 'notexist.c'], '.')
+
+ def test_get_clang_arguments_fails_badly(self):
+ with self.assertRaises(OSError):
+ sut.get_arguments(['notexist'], '.')
+
+
+class ClangGetCheckersTest(unittest.TestCase):
+ def test_get_checkers(self):
+ # this test only checks that the call does not crash
+ result = sut.get_checkers('clang', [])
+ self.assertTrue(len(result))
+ # check the result types
+ string_type = unicode if sys.version_info < (3,) else str
+ for key, value in result.items():
+ self.assertEqual(string_type, type(key))
+ self.assertEqual(string_type, type(value[0]))
+ self.assertEqual(bool, type(value[1]))
+
+ def test_get_active_checkers(self):
+ # this test only checks that the call does not crash
+ result = sut.get_active_checkers('clang', [])
+ self.assertTrue(len(result))
+ # check the result types
+ for value in result:
+ self.assertEqual(str, type(value))
+
+ def test_is_active(self):
+ test = sut.is_active(['a', 'b.b', 'c.c.c'])
+
+ self.assertTrue(test('a'))
+ self.assertTrue(test('a.b'))
+ self.assertTrue(test('b.b'))
+ self.assertTrue(test('b.b.c'))
+ self.assertTrue(test('c.c.c.p'))
+
+ self.assertFalse(test('ab'))
+ self.assertFalse(test('ba'))
+ self.assertFalse(test('bb'))
+ self.assertFalse(test('c.c'))
+ self.assertFalse(test('b'))
+ self.assertFalse(test('d'))
+
+ def test_parse_checkers(self):
+ lines = [
+ 'OVERVIEW: Clang Static Analyzer Checkers List',
+ '',
+ 'CHECKERS:',
+ ' checker.one Checker One description',
+ ' checker.two',
+ ' Checker Two description']
+ result = dict(sut.parse_checkers(lines))
+ self.assertTrue('checker.one' in result)
+ self.assertEqual('Checker One description', result.get('checker.one'))
+ self.assertTrue('checker.two' in result)
+ self.assertEqual('Checker Two description', result.get('checker.two'))
+
+
+class ClangIsCtuCapableTest(unittest.TestCase):
+ def test_ctu_not_found(self):
+ is_ctu = sut.is_ctu_capable('not-found-clang-extdef-mapping')
+ self.assertFalse(is_ctu)
+
+
+class ClangGetTripleArchTest(unittest.TestCase):
+ def test_arch_is_not_empty(self):
+ arch = sut.get_triple_arch(['clang', '-E', '-'], '.')
+ self.assertTrue(len(arch) > 0)
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_compilation.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_compilation.py
new file mode 100644
index 0000000..124feba
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_compilation.py
@@ -0,0 +1,122 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.compilation as sut
+import unittest
+
+
+class CompilerTest(unittest.TestCase):
+
+ def test_is_compiler_call(self):
+ self.assertIsNotNone(sut.compiler_language(['clang']))
+ self.assertIsNotNone(sut.compiler_language(['clang-3.6']))
+ self.assertIsNotNone(sut.compiler_language(['clang++']))
+ self.assertIsNotNone(sut.compiler_language(['clang++-3.5.1']))
+ self.assertIsNotNone(sut.compiler_language(['cc']))
+ self.assertIsNotNone(sut.compiler_language(['c++']))
+ self.assertIsNotNone(sut.compiler_language(['gcc']))
+ self.assertIsNotNone(sut.compiler_language(['g++']))
+ self.assertIsNotNone(sut.compiler_language(['/usr/local/bin/gcc']))
+ self.assertIsNotNone(sut.compiler_language(['/usr/local/bin/g++']))
+ self.assertIsNotNone(sut.compiler_language(['/usr/local/bin/clang']))
+ self.assertIsNotNone(
+ sut.compiler_language(['armv7_neno-linux-gnueabi-g++']))
+
+ self.assertIsNone(sut.compiler_language([]))
+ self.assertIsNone(sut.compiler_language(['']))
+ self.assertIsNone(sut.compiler_language(['ld']))
+ self.assertIsNone(sut.compiler_language(['as']))
+ self.assertIsNone(sut.compiler_language(['/usr/local/bin/compiler']))
+
+
+class SplitTest(unittest.TestCase):
+
+ def test_detect_cxx_from_compiler_name(self):
+ def test(cmd):
+ result = sut.split_command([cmd, '-c', 'src.c'])
+ self.assertIsNotNone(result, "wrong input for test")
+ return result.compiler == 'c++'
+
+ self.assertFalse(test('cc'))
+ self.assertFalse(test('gcc'))
+ self.assertFalse(test('clang'))
+
+ self.assertTrue(test('c++'))
+ self.assertTrue(test('g++'))
+ self.assertTrue(test('g++-5.3.1'))
+ self.assertTrue(test('clang++'))
+ self.assertTrue(test('clang++-3.7.1'))
+ self.assertTrue(test('armv7_neno-linux-gnueabi-g++'))
+
+ def test_action(self):
+ self.assertIsNotNone(sut.split_command(['clang', 'source.c']))
+ self.assertIsNotNone(sut.split_command(['clang', '-c', 'source.c']))
+ self.assertIsNotNone(sut.split_command(['clang', '-c', 'source.c',
+ '-MF', 'a.d']))
+
+ self.assertIsNone(sut.split_command(['clang', '-E', 'source.c']))
+ self.assertIsNone(sut.split_command(['clang', '-c', '-E', 'source.c']))
+ self.assertIsNone(sut.split_command(['clang', '-c', '-M', 'source.c']))
+ self.assertIsNone(
+ sut.split_command(['clang', '-c', '-MM', 'source.c']))
+
+ def test_source_file(self):
+ def test(expected, cmd):
+ self.assertEqual(expected, sut.split_command(cmd).files)
+
+ test(['src.c'], ['clang', 'src.c'])
+ test(['src.c'], ['clang', '-c', 'src.c'])
+ test(['src.C'], ['clang', '-x', 'c', 'src.C'])
+ test(['src.cpp'], ['clang++', '-c', 'src.cpp'])
+ test(['s1.c', 's2.c'], ['clang', '-c', 's1.c', 's2.c'])
+ test(['s1.c', 's2.c'], ['cc', 's1.c', 's2.c', '-ldep', '-o', 'a.out'])
+ test(['src.c'], ['clang', '-c', '-I', './include', 'src.c'])
+ test(['src.c'], ['clang', '-c', '-I', '/opt/me/include', 'src.c'])
+ test(['src.c'], ['clang', '-c', '-D', 'config=file.c', 'src.c'])
+
+ self.assertIsNone(
+ sut.split_command(['cc', 'this.o', 'that.o', '-o', 'a.out']))
+ self.assertIsNone(
+ sut.split_command(['cc', 'this.o', '-lthat', '-o', 'a.out']))
+
+ def test_filter_flags(self):
+ def test(expected, flags):
+ command = ['clang', '-c', 'src.c'] + flags
+ self.assertEqual(expected, sut.split_command(command).flags)
+
+ def same(expected):
+ test(expected, expected)
+
+ def filtered(flags):
+ test([], flags)
+
+ same([])
+ same(['-I', '/opt/me/include', '-DNDEBUG', '-ULIMITS'])
+ same(['-O', '-O2'])
+ same(['-m32', '-mmms'])
+ same(['-Wall', '-Wno-unused', '-g', '-funroll-loops'])
+
+ filtered([])
+ filtered(['-lclien', '-L/opt/me/lib', '-L', '/opt/you/lib'])
+ filtered(['-static'])
+ filtered(['-MD', '-MT', 'something'])
+ filtered(['-MMD', '-MF', 'something'])
+
+
+class SourceClassifierTest(unittest.TestCase):
+
+ def test_sources(self):
+ self.assertIsNone(sut.classify_source('file.o'))
+ self.assertIsNone(sut.classify_source('file.exe'))
+ self.assertIsNone(sut.classify_source('/path/file.o'))
+ self.assertIsNone(sut.classify_source('clang'))
+
+ self.assertEqual('c', sut.classify_source('file.c'))
+ self.assertEqual('c', sut.classify_source('./file.c'))
+ self.assertEqual('c', sut.classify_source('/path/file.c'))
+ self.assertEqual('c++', sut.classify_source('file.c', False))
+ self.assertEqual('c++', sut.classify_source('./file.c', False))
+ self.assertEqual('c++', sut.classify_source('/path/file.c', False))
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_intercept.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_intercept.py
new file mode 100644
index 0000000..583d1c3
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_intercept.py
@@ -0,0 +1,90 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear
+import libscanbuild.intercept as sut
+import unittest
+import os.path
+
+
+class InterceptUtilTest(unittest.TestCase):
+
+ def test_format_entry_filters_action(self):
+ def test(command):
+ trace = {'command': command, 'directory': '/opt/src/project'}
+ return list(sut.format_entry(trace))
+
+ self.assertTrue(test(['cc', '-c', 'file.c', '-o', 'file.o']))
+ self.assertFalse(test(['cc', '-E', 'file.c']))
+ self.assertFalse(test(['cc', '-MM', 'file.c']))
+ self.assertFalse(test(['cc', 'this.o', 'that.o', '-o', 'a.out']))
+
+ def test_format_entry_normalize_filename(self):
+ parent = os.path.join(os.sep, 'home', 'me')
+ current = os.path.join(parent, 'project')
+
+ def test(filename):
+ trace = {'directory': current, 'command': ['cc', '-c', filename]}
+ return list(sut.format_entry(trace))[0]['file']
+
+ self.assertEqual(os.path.join(current, 'file.c'), test('file.c'))
+ self.assertEqual(os.path.join(current, 'file.c'), test('./file.c'))
+ self.assertEqual(os.path.join(parent, 'file.c'), test('../file.c'))
+ self.assertEqual(os.path.join(current, 'file.c'),
+ test(os.path.join(current, 'file.c')))
+
+ def test_sip(self):
+ def create_status_report(filename, message):
+ content = """#!/usr/bin/env sh
+ echo 'sa-la-la-la'
+ echo 'la-la-la'
+ echo '{0}'
+ echo 'sa-la-la-la'
+ echo 'la-la-la'
+ """.format(message)
+ lines = [line.strip() for line in content.split('\n')]
+ with open(filename, 'w') as handle:
+ handle.write('\n'.join(lines))
+ os.chmod(filename, 0o777)
+
+ def create_csrutil(dest_dir, status):
+ filename = os.path.join(dest_dir, 'csrutil')
+ message = 'System Integrity Protection status: {0}'.format(status)
+ return create_status_report(filename, message)
+
+ def create_sestatus(dest_dir, status):
+ filename = os.path.join(dest_dir, 'sestatus')
+ message = 'SELinux status:\t{0}'.format(status)
+ return create_status_report(filename, message)
+
+ ENABLED = 'enabled'
+ DISABLED = 'disabled'
+
+ OSX = 'darwin'
+
+ with libear.TemporaryDirectory() as tmpdir:
+ saved = os.environ['PATH']
+ try:
+ os.environ['PATH'] = tmpdir + ':' + saved
+
+ create_csrutil(tmpdir, ENABLED)
+ self.assertTrue(sut.is_preload_disabled(OSX))
+
+ create_csrutil(tmpdir, DISABLED)
+ self.assertFalse(sut.is_preload_disabled(OSX))
+ finally:
+ os.environ['PATH'] = saved
+
+ saved = os.environ['PATH']
+ try:
+ os.environ['PATH'] = ''
+ # shall be False when the tool is not on the PATH
+ self.assertFalse(sut.is_preload_disabled(OSX))
+
+ self.assertFalse(sut.is_preload_disabled('unix'))
+ finally:
+ os.environ['PATH'] = saved
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_libear.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_libear.py
new file mode 100644
index 0000000..f5b9280
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_libear.py
@@ -0,0 +1,30 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear as sut
+import unittest
+import os.path
+
+
+class TemporaryDirectoryTest(unittest.TestCase):
+ def test_creates_directory(self):
+ dirname = None
+ with sut.TemporaryDirectory() as tmpdir:
+ self.assertTrue(os.path.isdir(tmpdir))
+ dirname = tmpdir
+ self.assertIsNotNone(dirname)
+ self.assertFalse(os.path.exists(dirname))
+
+ def test_removes_directory_when_exception(self):
+ dirname = None
+ try:
+ with sut.TemporaryDirectory() as tmpdir:
+ self.assertTrue(os.path.isdir(tmpdir))
+ dirname = tmpdir
+ raise RuntimeError('message')
+ except RuntimeError:
+ self.assertIsNotNone(dirname)
+ self.assertFalse(os.path.exists(dirname))
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_report.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_report.py
new file mode 100644
index 0000000..c943699
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_report.py
@@ -0,0 +1,148 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libear
+import libscanbuild.report as sut
+import unittest
+import os
+import os.path
+
+
+def run_bug_parse(content):
+ with libear.TemporaryDirectory() as tmpdir:
+ file_name = os.path.join(tmpdir, 'test.html')
+ with open(file_name, 'w') as handle:
+ handle.writelines(content)
+ for bug in sut.parse_bug_html(file_name):
+ return bug
+
+
+def run_crash_parse(content, preproc):
+ with libear.TemporaryDirectory() as tmpdir:
+ file_name = os.path.join(tmpdir, preproc + '.info.txt')
+ with open(file_name, 'w') as handle:
+ handle.writelines(content)
+ return sut.parse_crash(file_name)
+
+
+class ParseFileTest(unittest.TestCase):
+
+ def test_parse_bug(self):
+ content = [
+ "some header\n",
+ "<!-- BUGDESC Division by zero -->\n",
+ "<!-- BUGTYPE Division by zero -->\n",
+ "<!-- BUGCATEGORY Logic error -->\n",
+ "<!-- BUGFILE xx -->\n",
+ "<!-- BUGLINE 5 -->\n",
+ "<!-- BUGCOLUMN 22 -->\n",
+ "<!-- BUGPATHLENGTH 4 -->\n",
+ "<!-- BUGMETAEND -->\n",
+ "<!-- REPORTHEADER -->\n",
+ "some tails\n"]
+ result = run_bug_parse(content)
+ self.assertEqual(result['bug_category'], 'Logic error')
+ self.assertEqual(result['bug_path_length'], 4)
+ self.assertEqual(result['bug_line'], 5)
+ self.assertEqual(result['bug_description'], 'Division by zero')
+ self.assertEqual(result['bug_type'], 'Division by zero')
+ self.assertEqual(result['bug_file'], 'xx')
+
+ def test_parse_bug_empty(self):
+ content = []
+ result = run_bug_parse(content)
+ self.assertEqual(result['bug_category'], 'Other')
+ self.assertEqual(result['bug_path_length'], 1)
+ self.assertEqual(result['bug_line'], 0)
+
+ def test_parse_crash(self):
+ content = [
+ "/some/path/file.c\n",
+ "Some very serious Error\n",
+ "bla\n",
+ "bla-bla\n"]
+ result = run_crash_parse(content, 'file.i')
+ self.assertEqual(result['source'], content[0].rstrip())
+ self.assertEqual(result['problem'], content[1].rstrip())
+ self.assertEqual(os.path.basename(result['file']),
+ 'file.i')
+ self.assertEqual(os.path.basename(result['info']),
+ 'file.i.info.txt')
+ self.assertEqual(os.path.basename(result['stderr']),
+ 'file.i.stderr.txt')
+
+ def test_parse_real_crash(self):
+ import libscanbuild.analyze as sut2
+ import re
+ with libear.TemporaryDirectory() as tmpdir:
+ filename = os.path.join(tmpdir, 'test.c')
+ with open(filename, 'w') as handle:
+ handle.write('int main() { return 0')
+ # produce failure report
+ opts = {
+ 'clang': 'clang',
+ 'directory': os.getcwd(),
+ 'flags': [],
+ 'file': filename,
+ 'output_dir': tmpdir,
+ 'language': 'c',
+ 'error_type': 'other_error',
+ 'error_output': 'some output',
+ 'exit_code': 13
+ }
+ sut2.report_failure(opts)
+ # find the info file
+ pp_file = None
+ for root, _, files in os.walk(tmpdir):
+ keys = [os.path.join(root, name) for name in files]
+ for key in keys:
+ if re.match(r'^(.*/)+clang(.*)\.i$', key):
+ pp_file = key
+ self.assertIsNot(pp_file, None)
+ # read the failure report back
+ result = sut.parse_crash(pp_file + '.info.txt')
+ self.assertEqual(result['source'], filename)
+ self.assertEqual(result['problem'], 'Other Error')
+ self.assertEqual(result['file'], pp_file)
+ self.assertEqual(result['info'], pp_file + '.info.txt')
+ self.assertEqual(result['stderr'], pp_file + '.stderr.txt')
+
+
+class ReportMethodTest(unittest.TestCase):
+
+ def test_chop(self):
+ self.assertEqual('file', sut.chop('/prefix', '/prefix/file'))
+ self.assertEqual('file', sut.chop('/prefix/', '/prefix/file'))
+ self.assertEqual('lib/file', sut.chop('/prefix/', '/prefix/lib/file'))
+ self.assertEqual('/prefix/file', sut.chop('', '/prefix/file'))
+
+ def test_chop_when_cwd(self):
+ self.assertEqual('../src/file', sut.chop('/cwd', '/src/file'))
+ self.assertEqual('../src/file', sut.chop('/prefix/cwd',
+ '/prefix/src/file'))
+
+
+class GetPrefixFromCompilationDatabaseTest(unittest.TestCase):
+
+ def test_with_different_filenames(self):
+ self.assertEqual(
+ sut.commonprefix(['/tmp/a.c', '/tmp/b.c']), '/tmp')
+
+ def test_with_different_dirnames(self):
+ self.assertEqual(
+ sut.commonprefix(['/tmp/abs/a.c', '/tmp/ack/b.c']), '/tmp')
+
+ def test_no_common_prefix(self):
+ self.assertEqual(
+ sut.commonprefix(['/tmp/abs/a.c', '/usr/ack/b.c']), '/')
+
+ def test_with_single_file(self):
+ self.assertEqual(
+ sut.commonprefix(['/tmp/a.c']), '/tmp')
+
+ def test_empty(self):
+ self.assertEqual(
+ sut.commonprefix([]), '')
diff --git a/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_shell.py b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_shell.py
new file mode 100644
index 0000000..a2904b0
--- /dev/null
+++ b/src/llvm-project/clang/tools/scan-build-py/tests/unit/test_shell.py
@@ -0,0 +1,42 @@
+# -*- coding: utf-8 -*-
+# The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+
+import libscanbuild.shell as sut
+import unittest
+
+
+class ShellTest(unittest.TestCase):
+
+ def test_encode_decode_are_same(self):
+ def test(value):
+ self.assertEqual(sut.encode(sut.decode(value)), value)
+
+ test("")
+ test("clang")
+ test("clang this and that")
+
+ def test_decode_encode_are_same(self):
+ def test(value):
+ self.assertEqual(sut.decode(sut.encode(value)), value)
+
+ test([])
+ test(['clang'])
+ test(['clang', 'this', 'and', 'that'])
+ test(['clang', 'this and', 'that'])
+ test(['clang', "it's me", 'again'])
+ test(['clang', 'some "words" are', 'quoted'])
+
+ def test_encode(self):
+ self.assertEqual(sut.encode(['clang', "it's me", 'again']),
+ 'clang "it\'s me" again')
+ self.assertEqual(sut.encode(['clang', "it(s me", 'again)']),
+ 'clang "it(s me" "again)"')
+ self.assertEqual(sut.encode(['clang', 'redirect > it']),
+ 'clang "redirect > it"')
+ self.assertEqual(sut.encode(['clang', '-DKEY="VALUE"']),
+ 'clang -DKEY=\\"VALUE\\"')
+ self.assertEqual(sut.encode(['clang', '-DKEY="value with spaces"']),
+ 'clang -DKEY=\\"value with spaces\\"')