Snapshot fb3d71c562a0c9a822e3cd0473c641d8449e7acd from idea/132.839 of git://git.jetbrains.org/idea/community.git
fb3d71c: clear RootIndex cache less often, don't report package inconsistencies with dot-containing dirs
a75754d: reorder params
5c6eedd: cleanup
b51af8c: cleanup
8b798ca: cleanup
045fc8b: handle IDEA_PROPERTIES environment variable in Windows launcher (IDEA-115231) [r=nicity]
acf05c7: limit location filtering by method name to java files only
0bb8064: IDEA-115560 UI Designer creates invalid xml .form
bd14c78: Merge remote-tracking branch 'origin/master'
936f0ea: IDEA-115550 Not corresponding icons in the search results + transfer predefined text in other choose by
02ccb1a: remove getList()
7f5a8c4: cleanup
1a663cc: [log] IDEA-115521 Better implement "Copy Hash" from the log
3808295: Merge remote-tracking branch 'origin/master'
1f6c4b3: Merge remote-tracking branch 'origin/master'
e14d27e: remove servers view: NPE fixed
661faef: provide html tag for html files #WEB-444 fixed
1549d90: Merge remote-tracking branch 'origin/master'
54a9b58: cleanup
9e75645: treat WHOLE_FILE_INSPECTIONS the same as local inspection pass
e62f352: IDEA-115468 (for loop variables, catch block parameters and resource variables not reported by "local variable or parameter can be final" inspection)
3110925: already disposed
0024cf32: [log] Don't store Hash in CommitCell to avoid memory waste.
a9f1c74: fixing tests
bcd1bc6: remove servers view: presentation for 'deploy' action corrected
72b5704: remove servers view: fixed 'start' action for server
d5699c6: remove servers view: corrected presentation for 'Redeploy' action
f01c367: remove servers view: update view after undeploy
637e3d0: remove servers view: deploy/redeploy actions implemented
0f4a6f3: cleanup
2a1ebb5: IDEA-115516 (Multiple warnings local foreach parameter can be final)
f214acc2: revert to PlatformTestCase
694db8f: Merge branch 'createPR'
1eaaff7: Lens improvements
1061800: Merge remote-tracking branch 'origin/master'
7ec28c2: more granular "too many usages" warning
38d1e48: [log] Draw multi-repo flag for ref groups
d9a5010: [log] Group branches in the filter and on the BranchesPanel
83c5cf8: [log] Teach RefPainter to draw just some given text, not VcsRef
8401e62: fix NPE
4c24188: fix location filtering in constructors
c666801: Merge remote-tracking branch 'origin/master'
3afb3f3: FinderRecursivePanelTest: move to platform-tests
1d75a16: avoid using slow Windows drives on every local file path check +review
963157f: cleanup
fb1f5b5: Merge remote-tracking branch 'origin/master'
f609ba8: rollback to make BPs work in constructors
7231e34: IDEA-113139 Groovy: Introduce Variable refactoring: correct place to insert variable
fd43ccf: already disposed
375d8ad: do not check infos by default
9bf386f: IDEA-115474 Maven Dependency template broken in latest Idea Cardea
bfdfd74: xml-analysis added
07ec346: return sensible value
35d6710: remote servers view: supported 'edit configuration' action for deployments
e06284d: cleanup
6306a41: remote servers view: wrap long error message
04fef0a: remote servers view: show all deployments when 'deploy' configuration is started
eb0249d: incorrect extension name
05d570f: IDEA-115037 restrict 1st column movement
37a39de: IDEA-115507 Create test: Groovy JUnit: "Fix" button should add groovy-all.jar to the corresponding module dependencies
729f20d: temporarily disabled
729765f: Merge remote-tracking branch 'origin/master'
503d26f: check for idle states
4686c64: correct presentation of network errors in debugger console; reduce usage of BrowsersConfiguration.BrowserFamily (OpenUrlHyperlinkInfo)
06a7410: Merge remote-tracking branch 'origin/master'
7980883: use constants
727f0e4: fix stupid flipped boolean mistake
4896b5e: [log] IDEA-115481 Fix permanent "loading" state
9f1b28b: [log] IDEA-115480 Correctly identify details of selected rows
3e0c2d6: [log] save details loaded from the filter in the cache
94a7babe: Merge remote-tracking branch 'origin/master'
832b3db: WEB-9771 Emmet preview showing in strange scenarios
ebe0d82: assertion fix
3f1620e: [log] Remove temp icons, use some existing ones
7586cf2: [log] Better progress indicator text
1f83087: [log] Disable highlighting of current branch commits via background
82a93b8: [log] Disable loading more details for filtering
edbfd4d: [git] Fix NPE if old log doesn't return data and new one is not enabled
1d2a10c: synchronization between Disposer and focusSettlesDown
52bde7e: Merge remote-tracking branch 'origin/master'
716cf07: mem leak fix
6d5af62: Search: Comments only: several words target is not found in javadocs (IDEA-113951)
9d07ec2: lambda support in smart step into
4861333: setting line breakpoint requests: added additional locations filtering by method name if method name is available
ee66d8d: proper fix for finding stuff in java literals within jsp (IDEA-114105)
29d4353: test for IDEA-114108
2a37757: IDEA-115445 (Unreasonable simplification of code)
deba1c4: @NotNull annotations for *DateFormat
a0a5570: IDEA-115452 ("Local variable or parameter can be final" -- "Report local variables" setting doesn't work)
86b0984: [log] Remember users which were recently used in the user filter.
fd263bc: [log] Split VcsLogSettings
3967c21: [log] Special "me" value for the user filter
09cedec: [log] Copy hash action
3a0a796: [log] Make arrows indicating long branches be not so long
21a1093: when searching nested class do not take into account locations corresponding to synthetic, bridge and native methods
38021a4: RootIndex: support dirs by package without in lib sources
52f11be: RootIndex: cache excluded/ignored status
af0ad8b: add to DirectoryIndexTest the controversial library-exclude inside module content
b7f6cd2: RootIndex: ensure transitive dependents end up in order entries cache
2248228: Fix OC-5992 Formatter: block comments indentation +review CR-OC @Anton.Makeev
c05a74d: EA-50832 - NPE: GotoActionAction$.run
94b2d81: EA-42489 - NPE: InspectionToolsConfigurable.getPreferredFocusedComponent
deb8ca8: EA-44224 - NPE: ImplementationViewComponent.updateEditorText
0241905: EA-51233 - assert: InheritanceToDelegationProcessor.<init>
c96e307: fix wrong import
8d4324e: Minor changes: remove unnecessary qualifier.
ab17b2c: Minor changes: replace string concatenation with StringBuilder.
388da27: Minor changes: avoid warnings: make methods static if possible.
2e234d2: Minor changes: remove unnecessary qualifier.
0355fe8: Minor changes: remove unnecessary cast.
b128478: Remove unused class.
895743b: catch abstract method error and report incompatible plugin (IDEA-114712)
31c4654: fix race condition: IDEA-115325 NPE on compile
a802b79: [log] Fix NPE
93ee340: WEB-9760 Allow convert CSS colors to textual ones WEB-9759 CSS color "Convert to hex" intention missing
b3e9304: [log] make edges thinner, circle smaller, looks more accurate
926d212: [log] store text filter history
f31f667: [log] fix details filters: proper case-insensitivity
0230c93: IDEA-115341 (EA JVM warning notification + suppression)
e5854b5: Cleanup (code duplication; arrangement)
f0911b1: Cleanup (annotations; pointless code)
39a6915: platform: hide-on-click balloon builder option
9b940be: Cleanup (possible NPE; missing modifiers; dead code; typos)
c273570: Cleanup (formatting)
24f3c56: Dictionary extended
f96b485: [github] Add "Open on Github" to new log.
346cd09: [log] Add Cherry-pick action for Git to new log
7474a8c: Merge remote-tracking branch 'origin/master'
973e828: IDEA-114951 (.idea/inspectionProfiles/Project_Default.xml is always auto-modified by IJ)
868603f: forbid multiple selection for find action, run inspection (IDEA-115390)
ade364d: fix up/down scrolling in choose by name for single selection list
fd4a0ce: WEB-9647 magic mappings
a235f55: IDEA-113152 Groovy: In-Place Introduce Refactoring: PIEAE in name suggestion
29fb0e9: smart type pointers for Groovy class types
b07d6af: IDEA-113196 Groovy: Introduce Field Refactoring: disable "initialize in method" when occurrences are some different methods
e921e18: better state machine for double shift
386016d: Merge branch 'types-db-to-skeletons'
15193bc: Removed 'math' from stdlib types database
1d80c11: Lens mode ("black area" & foldings fixes)
81f6cca: Errors
49ea461: IDEA-103320 Android/XML code style: add "Insert line break after last attribute" option
019a43e: IDEA-106488 Separate highlighting of namespace prefix and tag name
6932a14: matched braces
544d48e: Merge remote-tracking branch 'origin/master'
dc906a5: Slightly update logic in CertificatesManager and add note about keystore path and password in dialog
0f7f4e0: fixing method collecting visitor for smart step: only arguments of method call expression were visited
71b8b54: allow to throw PCE
4f6cbe4: XML inspections moved to xml-analysis-* modules
177e812: IDEA-106488 Separate highlighting of namespace prefix and tag name
c02ba9f: EA-50579 (assert: TextRange.<init>)
cda0ea9: resolve conflict on type param rename (IDEA-57326)
54daf97: suppression warnings inspection: allow to accept some suppressions (IDEA-23392)
43113e6: IDEA-106488 Separate highlighting of namespace prefix and tag name
7af4952: Fixed resolving unqualified classes with user-defined skeletons in type strings
04fa6dd: Removed collections.Iterator from stdlib types database
e9a7241: [log] check for isDisposed in invokeAndWait()
c2a228d: [log] More descriptive text on the progress indicator
7fab0aa3: [log] Load filtered details from the VCS if nothing found
67ecd40: [log] Move VcsLogFilter to vcs-log-api
610d18e: [log] Store information about root in VcsShortCommitDetails
77a4b67: [log] optimize check if hash corresponds to a visible node
876f751: [log] cleanup
deb302b: [log] Load more details if there is not enough for current filter.
406e7a7: [log] refactor: split loadFirstPart in 2 methods to simplify the logic
bccc213: [log] trivial refactoring: extract several utility methods
e5fa9dc: Remove (zen coding) string from template name and wrap-balloon title
1ec54d8: Context menu item to open Dart-aware HTML files in Dartium browser
5724abe: set default recent files to 50 (Search Everywhere needs this)
7fa9c8c: better presentation for text search results and injected lang background
19e2670: Update untrusted server's certificate dialog to align labels in all sections equally
66ddb29: EA-48210 - IAE: ProjectFileIndexImpl.isInLibraryClasses
2e6046c: EA-48232 - RE: PostprocessReformattingAspect.beforeDocumentChanged
a5b96e4: EA-50520 - assert: ProblemDescriptorBase.<init>
35f2369: cleanup
81d5407: remove editor from PROCESSED_EDITORS eventually
325b30b: revert
fc58b7e: cleanup
b5f9105: moved to analysis
4f47c9d: moved to analysis
6d524ff: make sort more stable: EA-50594 - IAE: UsageViewImpl.rulesChanged
8ad0f31: checkCanceled
90e5604: cleanup
bf32208: EA-46449 - IOE: PsiJavaParserFacadeImpl.createExpressionFromText
4dd0d44: EA-48556 - IAE: SmartPointerManagerImpl.fastenBelts
58161af: EA-48654 - IAE: PsiTreeUtil.isAncestor
e488235: EA-48943 - IAE: VfsUtilCore.isAncestor
44cdee4: EA-48992 - assert: CaretModelImpl.validateCallContext
e85708a: EA-49302 - NPE: UnusedDeclarationInspection.checkForReachables
e59b7da: EA-49472 - assert: SmartPointerManagerImpl.createSmartPsiElementPointer
e31fc5d: EA-49821 - assert: CaretModelImpl.validateCallContext
b8f83aa: EA-49841 - IOE: CheckUtil.checkWritable
02f97f2: diagnostics
89ce3f5: EA-50143 - CME: EditorFactoryImpl.getEditors
a01fe2a: EA-50145 - EIIE: FileTypeManagerImpl$.process
1252157: EA-50208 - NPE: ControlFlowAnalyzer.visitAssignmentExpression
71d2675: allow to fork on two cores (virtualized environments often offer as many)
632df82: make sort more stable: EA-50594 - IAE: UsageViewImpl.rulesChanged
46c37ec: EA-50688 - assert: FileManagerImpl.findFile
69db615: EA-50704 - assert: SmartPointerManagerImpl.createSmartPsiElementPointer
69abbb8: EA-50733 - IAE: PsiElementUsageTargetAdapter.<init>
2851286: EA-50885 - CCE: MagicConstantInspection.parseBeanInfo
c06a3d4: allow to fork on two cores (virtualized environments often offer as many)
e2de601: moved to proper package
5117c2a: cleanup
9e6668a: fix testdata to reflect adding shell comment type to groovy comments set
bd95dec: fix name suggester for introduce variable dialog
141747e: Forgot to fill owner's data panel
d1e4716: Update certificate manager to use new dialog
2d30d2f: IDEA-113270 Groovy: In-Place Introduce Variable: please add a hint about possibility to edit type to the refactoring preview
221dfa3: correct suggest names in inplace introduce variable
6960277: IDEA-115270 Extract variable in Groovy - resulting variable always strongly typed
05a3f08: IDEA-115240 Tags from scheme helptopic.xsd are not recognized WEB-7228 fix rolled back
eddcfe0: Dartium support
f8fd05d: framework detection excludes: one more method extracted to API
1ebcc6b: remote servers: download client libraries in background
c8bddaf: FileDownloader reworked: allow to download in background, get rid of unnecessary VFS access
b95a17f: split tests for community and ultimate parts
728a889: add mSH_COMMENT to comments set
8bfcab9: Remove write action (EA-50711 - assert: WriteAction.execute)
29dc1d8: SimpleDateFormat and TimeZone @NonNls
dc35e46: mem leak
c052b72: IDEA-115401 Tab (Shift+Tab) should select next (prev) group in SearchEverywhere
1708225: ESC doesn't work
8002672: AnnotationAttributeChildLink: remove obsolete psiAnnotation.getText() calls (Stubs now)
bda6f9e: IDEA-114478 (refresh a directory before attempting to open a project from it)
1cb3c6f: platform: respect synchronization setting in file chooser
651a774: fix NPE in SearchEverywhere
f96f953: fix toolbar icon painting
e51e002: done WEB-9647 (is not yet fully tested (case / vs /index.html)) but works
0702c0e: don't create semaphore — check early
d60d5e6: move console message handling form ext to idea
ec1a863: check disposed
9947207: simplify code
d3834f3: allow to check if DirectoryIndexImpl and RootIndex return same results
71e03bb: more fixes to align RootIndex and DirectoryIndexImpl
bf52d77: RootIndex: honor ignored files
a55f4b1: RootIndex: save order entries for the owner module and its dependents
9c87cfc: RootIndex: support libs inside modules and vice versa
89f4853: inline RootIndex.initialize
c45930e: minor field grouping in DirectoryIndexTest
001723f: fixed parameters matcher and methods chains comparing
5d5b7f3: scopes: restore selection and model (IDEA-115393)
98a3b3a: plugin advertiser: support no extension files
2a6b6e0: Prevent possible NPE.
58a0a3b: cleanup
19f68fa: Merge remote-tracking branch 'origin/master'
2fc6eca: project mem leak fix
504e209: Make the terminal a project component (PY-10353).
a6a42b5: unused local: introduce new setting and deprecate old one to prevent profile modifications
6b4c416: restore icon for scopes list
99b9be8: IDEA-115381 NotNull bytecode instrumenter does not update visibility scope for 'this' and parameter local variables
0c841cd: IDEA-88460 support deployment of custom artifact in Android run configuration
45877eb: use class instead of interface for step target descriptor
bceab92: Set to null on dispose.
1bab20e: fix containing field detection
db64633: nullable stuff: leave buggy assignment to be warned by dfa (IDEA-100172)
84e8629: IDEA-114105 Find String literals only java strings in JSP not found
3bb9fb6: IDEA-71794 smart step into: provide some visual indication in editor
c472d17: Update untrusted certificate warning dialog: use labels and FormBuilder instead of tables
e5208be: an incomplete and bug-ridden alternative implementation of DirectoryIndex (adapted from upsource)
b74a981: WEB-7228 HTML5 Boilerplate breaks HTML completion in .html file re-fixed
b4d4dfb: IDEA-114710 Groovy: @Language annotation is not inserted before concatenated string on Alt+Enter
55f9f51: treat foreach/catch block params as local variables (IDEA-115326)
a7316ca: lambda: do not suggest to replace with lambda when refs to final fields exist in body (IDEA-111026); final initializer
0b79057: Merge remote-tracking branch 'origin/master'
1b5cba7: cleanup
3d68304: IDEA-114762 ShowContainerDeclaration for Groovy
91e3c30: IDEA-114289 Groovy: Pull Members Up: 'implements library interface' clause is not suggested in 'Members to be pulled up' list
c6f3fba: fixed PY-11109 Incorrect "Instance attribute defined outside __init__" in local classes
0553c87: fix order for adding / removing children
eeb8d97: PY-11099 Test runner is trashed
3b9d915: IDEA-115240 Tags from scheme helptopic.xsd are not recognized WEB-7228 fix rolled back
de44ee0: OC-8602
626bc98: IDEA-114094 Search-replace in files does not happen if comments only is enabled
b107c73: Add another variant of untrusted certificate dialog
4c5364f: SearchEverywhere is not accessible when no tabs are open
81591ab: Merge remote-tracking branch 'origin/master'
102647e: extracted constants, util methods from generator, forced PEP8 style
d944e3f: Merge branch 'svn-history-regex-filter' of https://github.com/pkrcah/intellij-community
c79cabb: fix directory search
cbebdec: getter for provider
8b3f0aa: External system: debug info added for orphan modules processing
ca4af15: selection after methods chains completion fixed and chain relevance improved
80106fc: selection after methods chains completion and checking parameters in context
0fb461e: IDEA-115359 Gradle: generate apply plugin dsl code with alt-insert
a57dfa4: remote servers: extracted api for downloading client libraries
314f3ff: show balloon a bit higher
29487a1: smart-step-into targets rendering: show method and parameter name for 'closure' step targets.
cb2a7a8: we don't need picocontainer to create DefaultActionGroup
2e92afb: minor scrolling fix
b99519e: NPE fix: invokeLater without isDisposed check
92d4163: fix IncorrectOperationException
cbc2aeb: typo fix: mainFragmentIndex problem
061ede2: speed-search highlights in Changes view
67dbff6: speed-search highlights in standard trees
049dd64: do not search/highlight in non-main fragments
0ffe882: cleanup
fa8eca5: quick doc: fix class type params around <> (IDEA-113445)
59d9a71: testng: accept non-recursive package declarations (IDEA-113479)
0287b53: Cleanup (readability)
3b38f59: IDEA-115305 (requirements to temp directory relaxed)
520bb4b: [by max medvedev]: eliminating TopLevelParentClassProvider extension point and moving its functionality to GroovyPositionManager
cce9bd8: "smart step into" action offers the possibility to specify methods of anonymous class objects as a step target
72ddd04: Project View: do not freeze on large files in split mode with Structure
1378c06: speed-search highlights in switcher
30718f6: invoke checkCancelled just over instance of the indicator
b874726: IDEA-113526
51bfa11: do not treat annotated local variable as unused by default (IDEA-108925)
cbe42de: testdata for IDEA-65377
1d5f624: Merge branch 'python-fixes'
168f3eb: EA-48784 (assert: ConditionalExpressionWithIdenticalBranchesInspection$CollapseConditional.doFix)
198c872: Don't use relative import resolving for references in user skeletons
cd68157: avoid O(N^2) for processing create or delete events (N number of such additions / removals)
824868b: don't createNewFile in assertion, running without -ea will ruin side effect
aa281e4: DB: unify editor & console tables header menus
4f1a3e5: be prepared to null highlight infos (EA-49900)
5094f95: Add presentation providers for some MavenDom element.
59a0722: Fix warning: import icons.MavenIcons
b9f77b8: [git] IDEA-115318 Clicking "amend" shouldn't revert what user has typed
40568da: refactor ChooseByName (from ChooseByNameBase to Model)
ee5aec5: Merge remote-tracking branch 'origin/master'
6a4d26a: fix inspection serialization for SpringFacetInspection
d206064: renderer for dirs
aa310e7: WEB-9517 Npm: Error loading package list
a79ea18: white text on empty frame
e90954b: Cleanup (distinct logging)
c8bc340: platform: file permissions copying for not owned files fixed
e9eda0d: don't select 2nd element if no files are opened
d360216: better empty text positioning
60dffe3: provide common quickfixes for ambiguous constructor calls (IDEA-115255)
e3a799d: take into account containing class type params when accepting assignability (IDEA-71582)
02a4914: inline with flatten array creation: do not lose leading comment (IDEA-112161)
0573611: merge p4 fstat requests for all incoming changes (IDEA-99185)
52e9d7f: framework detection excludes: method extracted to API
b48c2ea: public API for pattern transformation
b9987ac: support directory search
6f3285d: public API for pattern transformation
5af63c2: public API for getting name pattern from input
20d08fc: IDEA-115260: when loading root model, for "inherited-sdk" order entry add implicit dependency to java sdk if needed
f27422e: memory leak
b7654b2: IDEA-114606 Replace with for-in should replace return with continue
9104204: IDEA-114606 Replace with for-in should replace return with continue
14b0f28: testdata for IDEA-18343
87be7fc: NPE
9e64ba0: plugin advertiser: no border over plugin name
cf140ec: plugin advertiser: sort suggested plugins by name
64feb9f: plugin advertiser: do not suggest enabling plugins for already ignored facets
dbc3985: plugin advertiser: do not suggest to restart if no plugins were installed/enabled
9ba5f92: testdata for IDEA-54197
1d08cd4: merge completion variants with same method erasures
9084fdd: NPE
e2aaa54:
  1. Project import uses the "chain-of-responsibility" pattern for GradleProjectResolverExtension.
  2. Project import can reuse the Gradle Tooling connection. In fact, an extension doesn't need to deal with it in a GradleProjectResolverExtension, since it already has all the information it needs about Gradle build models.
  3. GradleProjectResolverExtension can provide extra JVM args for the core logic to deal with the Tooling API; see com.android.tools.idea.gradle.project.AndroidGradleProjectResolver#getExtraJvmArgs.
  4. GradleProjectResolverExtension may provide extra project model classes to retrieve for the core logic, e.g. com.android.builder.model.AndroidProject; see com.android.tools.idea.gradle.project.AndroidGradleProjectResolver#getExtraProjectModelClasses.
  5. GradleProjectResolverExtension may contribute an extension-specific UserFriendlyError parser; see com.android.tools.idea.gradle.project.AndroidGradleProjectResolver#getUserFriendlyError.
  6. BuildActionExecuter is used if supported by the user's Gradle version; otherwise fall back to ModelBuilder.
  7. Extension point added for ExternalSystemTaskNotificationListener to listen to any task events.
  8. onSuccess and onFailure methods added to ExternalSystemTaskNotificationListener.
17d057e: IDEA-107733 run scripts without make. Add . to classpath
8efc0ea: IDEA-115116 Correct highlighting range in case of generics
9d01b80: cleanup
640d3c8: IDEA-115015 Ctrl-Shift-T doesnt offer to create test class
b8303a2: IDEA-115272 Lots of [partial] incoming changes, and NPE in CommittedChangesCache
8183283: notnull
946ae12: do not call commit for closed project
745f73c: cleanup
c7622f7: do not log PCE
35d4aa9: do not commit in closed project
4f72b63: do not create empty jars
4730ae3: IDEA-114047: "Create Class" quickfix places new class in "target/generated-source" by default
208e24f: support cross-fragment speed-search highlighting
fcd8b45: IDEA-115260: correctly identify "java-compatible" sdk
b3e99ae: remove ConcurrentSoftArrayHashMap
0fc9cbe: repaint speed-search highlighting on pattern change
448f07e: testdata for IDEA-60836
8b4c147: testdata for IDEA-63331
e9fd705: testdata for IDEA-60818
18ee2d8: forbid references on final fields from another ones (IDEA-100237)
34f23fb: Merge remote-tracking branch 'origin/master'
a5c7947: Merge remote-tracking branch 'origin/master'
923916b: init WEB-1561 Debug: explicitly execute Dartium browser instead of Chrome
eb09709: use PortField
efb4101: finally IDEA-64374 Speed search not working in Find Usages toolwindow
dfa0b15: recover from PersistentHashMap storage format change
cfbaedc: extract class -> extract interface (IDEA-114821; IDEA-113989)
2b326ee: eclipse: default natures simple registration
c7304c6: cleanup
c3dba5e: javaFX: unresolved fx:id in fx:include supported (IDEA-114920)
5d26c68: Merge remote-tracking branch 'origin/master'
125f066: [log] fix a typo lead to a CCE
c8770e0: fix extract method. revert engine changes.
33d50cb: +goto symbol
aee2167: recover from PersistentHashMap storage format change
6e76b39: jzlib was missing from required_for_dist
796ec6e: do not log ProcessCanceledException
81f1503: reduce delay between pressed shifts
8d72ccd: 'thread dump' is replaced with 'stacktrace' to be clearer for single-threaded environments
70f532b: Removed type annotations for StringIO family in favor of python-skeletons
4dc762d: label disabled foreground
53f4af7: proper empty text
247da91: testdata fixed
80d9d58: plugins advertisement: suggest to restart when only enable is requested
15523e7: plugins advertisement: suggest to enable disabled plugins
5ca293d: EA-51245 - IAE: InspectionEP.getLocalizedString
b331897: extract interface: allow to pull up default implementations implicitly (IDEA-114918)
0024f74: relax types
41e1124: rewrite using pooled thread
e62b8de: IDEA-114703 Groovy: "Unnecessary qualified reference" quick fix could be renamed
2602032: IDEA-114409 Wrong type inference in Groovy takeWhile on List<String>
c13aacf: IDEA-114427 Using $this in GString marked as invalid (Identifier or code block expected)
b783e1e: selection color
6124207: move toolwindow on top. Instant cancel
5d0f4c0: Add highlighting restart after successful test connection in YouTrackRepositoryEditor
9ab8a42: Add several improvements in certificate manager
3ab2bad: frameworks support step suppressed for ruby and python
ae66327: [log] Don't show "1.01.1970" as date for commits which are not loaded yet.
9123848: [log] cleanup
e7034f0: [log] Make branch colors a bit more calm
5a78887: IDEA-115232 Search Everywhere shouldn't work in modal dialogs
7558ecf: focus color for arrow button
ec5f9f0: wrong coordinates for focus ring
1112119: Merge remote-tracking branch 'origin/master'
532d51c: IDEA-114976 Incorrect syntax highlight of Groovy dollar slashy strings
78d5c90: cleanup lexer
d1299a0: IDEA-114474, IDEA-114473 fix override/implement groovy methods
a55dd8a: notnull, cleanup
35a698b: IDEA-115230 Gradle: settings.gradle script codeInsight
a72c225: FinderRecursivePanel: restore list filtering on speed search, disable speed search
feb19b9: text is not centered under Darcula and IntelliJ laf
7b3af34: move static: correctly process ambiguity between moved members (IDEA-114908)
7dc625f: Abstract method overrides abstract method: do not treat default as abstract (IDEA-114841)
058705e: wording
77c3bf5: remove redundant cast
675f663: proper height of combobox under Darcula & IntelliJ
76e2039: First version of self-signed certificates support for task subsystem
31527c6: border for editor text field
7796b17: ring color for IntelliJ laf
d7b9179: color for text field border in IntelliJ laf
f92a04e: 1px offset for Darcula and IntelliJ lafs
a4777ec: FinderRecursivePanel: restore list filtering on speed search
c750608: Arrangement: add shortcuts for rules manipulations
fe1985e: fix tests
0fc685d: fix icon paths in PyCharm Community Edition installer
894d3b0: Properly calculate short name for inner class with dollars
e27a3c1: Gradle: code insight
551579c: use configured @Nullable in inspection descriptions (IDEA-114884)
2f551f8: memory leak fixed
b764706: Merge remote-tracking branch 'origin/master'
6fab344: use maps with embedded reachability tracking to handle reference queue automatically
f57c764: provide container name to searchers to improve find usages responsiveness
6bcad28: dataflow to/from containers support
86ad4d8: old wizard fixed
999a2e8: added option 'for generated source' for java source roots, generated roots won't be suggested as target for refactorings (IDEA-112680)
ede9457: cleanup
8cd2201: IDEA-115218 ComboboxEditorTextField has inner border (IntelliJ laf)
b1eca26: test fixed
8a816ff: test that cached completion copy file caches are correctly cleared
8343988: avoid unnecessary ClassNotFoundExceptions in plugin class loader hierarchy
84d7707: Merge remote-tracking branch 'origin/master'
031d8c0: protect action objects against unnecessary sorting
a41e00a8: IDEA-114955 Use existing helper method to get the default background color instead of storing it in a field.
1cd7777: PY-9343
cbd9919: cosmetics (follow-up to PY-9356)
936e77c: PY-11175 Add speed search support for Python Integrated Tools
04b7219: PY-9356 Clean .pyc should include $py.class files if jython interpreter is used
2ab6ffc: PY-10020 Dracula: Python External Documentation Settings panel have not "blue"-dracula styled elements
e7c8286: remove useless skipped plugin check
6dc596f: more specific locators for non-text editors
42004a0: EA-51163 - ISE: JsonElement.getAsJsonArray
e1822c5: EA-51122 - assert: TextRange.<init>
3aef8cd: EA-51183 - NPE: UnknownFeature.hashCode
185c50e: to have community build run correctly as part of professional one, include community layouts relative to ch
1973d0b: module and run configuration for IDEA with Python plugin
535ce56: add notice about skeletons
7d727d4: Merge remote-tracking branch 'origin/master'
b35a1e6: Added 'tuples' to Python dictionary
0bedaf8: missed comments on Simplify (IDEA-114798)
c505f98: IG: allow to proceed with replaced statement
4ec547f: type migration: avoid migration from primitive to boxed and vice versa (IDEA-114902)
64284c1: cache scope text for deep scopes (IDEA-114816)
835549c: escape initializer text for quick javadoc (IDEA-114911)
d86e31f: testdata for IDEA-114894
026f911: scopes order: do not override stored order (IDEA-46994; IDEA-115020)
b17024a: skip sonar natures (IDEA-114957)
4742432: Added python-helpers to .gitignore
b397fd5: support and instructions for building Python plugin from command line
0ba9860: load namespaces for document even if present in tag, as they may provide a default namespace #WEB-7228 fixed
01e97b8: Github: add information about exception context
9da662f: don't fail community plugin build if we don't have help
ae8bbe6: if sub server cannot be bound, log it as warning
3c0f475: Merge remote-tracking branch 'origin/master'
d0448ed: correctly handle internal structure of .tar.gz
4ec818d: IDEA-115009 Gradle: Invalid DSL processing for custom maven repo
136a3ef: [log] Store the number of recent commits to load in the settings
4675b0f: backspace doesn't work on Mac
a183547: distinct name for welcomeCaption.png in community resources (even though it's not used)
a106358: use .tar.gz distribution of ideaIC as dependency
cb41e24: Merge remote-tracking branch 'origin/master'
7ec4bb1: Merge remote-tracking branch 'origin/master'
46ac22c: move testdata for env tests back to professional repo
31f9f52: delete duplicate app info file
dc862f8: Github: check DiffInfo before performing request
e61f000: IDEA-115187: Support Saxon-specific XSLT extension functions
55378a0: Lens mode (speed-up)
582fce2: [log] Introduce filtering
35879f8: [log] Create NoGraphTableModel for future filtering
53da3b5: [log] GraphModel#getNodeIfVisible by Hash
6eebae8: [log] move a method to VcsLogUtil.
e757543: [log] Introduce CommitCellRender for rendering without painting graph
4fa54fe: [log] cleanup: remove unused Kind
f69b51d: [log] refactor: move getting selected changes from table to table model
2373d8e: [log] Store top details in the cache forever, fix top commits number.
c46ac92: [log] Improve synchronization & data flow in the VcsLogDataHolder
b629894: [log] Initialize the content pane in the EDT
3fe07df: [log] Support empty log (which can happen after filtering)
1c44602: [log] optimize RefsModel#refsToCommit
6581308: [log] Show branches in BranchesPanel instead of all refs.
7d52e6d: [log] cleanup
99195dc: [log] Protect against cases when root color is undefined.
5950de0: [log] Allow empty log (for filtering purposes).
74642d3: Github: use FutureTask instead of FutureResult
dbac23a: fix search field glow border
bf812b3: instructions for building and Ant build script
70e6289: Apache 2 copyright headers
22eb244: avoid hard referencing of filecontent (including psi file in user data) in indexed stamp update runnable [r=Peter.Gromov]
07edea4: hide settings for intentions without category (IDEA-114995)
43b4bad: eclipse: skip jRebel nature (IDEA-114956)
38d2ec1: disable testNG addition for jsp files
15777aa: missed SuppressActions (IDEA-115027)
320dcce: fix paths in Python community layout
6fe6422: IDEA-115062 New Project Wizard: multi-selection should not be allowed
a658595: Merge remote-tracking branch 'origin/master'
77592ae: IDEA-115056 NewProjectWizard: for Flash module, no specific settings are available
13e46e9: extract com.intellij.util.xml.CanonicalPsiTypeConverterImpl#getReferences
9e08a76: delete duplicate artwork
17c2b42: IDEA-115050 NewProjectWizard: JavaEE: module is not created
3752346: correctly delegate to base IDEA layout in PyCharm CE build
7e71511: regexp error highlighting in Darcula
c5b355c: CR-IC-2706 (logging dropped)
73b37cc: CR-IC-2653 (cleanup)
9d206b3: Github: clean
7bd6e10: IDEA-115050 NewProjectWizard: JavaEE: module is not created
1ba2352: replace action sorter to action promoter
831c230: do not show frameworks twice
97fb00a: WEB-9173 JS Debugger should use symbol maps from Chrome (Scripts tab)
1cdfc94: use SingleAlarm
a0f228b: Merge remote-tracking branch 'origin/master'
306295c: IDEA-115008 Gradle: Make 'apply' button enabled at settings dialog only if there are changes
e07df8a: rename IdFilter.contains into containsFileId
d3eca3e: avoid volatile setter in API, we do not need it because of DefaultChooseByNameProvider sets IdFilter at FindSymbolParameters creation
315db08: NPE
db19bf7: NPE
f3dc513: duplicate class file deleted
9856419: fix paths in community plugin build script
b883580: fix paths in community build script
8467d98: delete test fixture class for professional part of code
8afadbc: Git: add multiple selection for commits in 'Compare branches' and 'Branch not fully merged' dialogs
4c4d961: Github: suppress warning
e832fa9: Github: fix typo
deaf380: Github: fix yellow code producing by CommonDataKeys
d328373: Git: add 'fetch'
e3f46b9: Git: allow to use JTextArea
2a52aad: Github: substitute deprecated field
6d28a89: Remove newline at the end of xml file
c8c7f04: remove incorrectly migrated plugin.xml section
e734291: remove duplicate community plugin.xml after sources move
df8409b: Merge remote-tracking branch 'origin/master'
8562c28: IDEA-115008 Gradle: Make 'apply' button enabled at settings dialog only if there are changes
06dc4c0: fix "AssertionError: Wrong line separators:"
b7e90e5: restore missed module in community project
4bbd3d3: fix dependencies of IntelliLang-python module
6ccd71d: RUBY-14390: redundant escape character in regexp highlighted by annotator, not by regular highlighting
14ce958: Fix OC-3040 TODO[ik] implementation: need to be fixed to correctly process indent inside indent +review CR-OC @Anton.Makeev
c135d71: cleanup: introduced method to add source root with default properties
a17f3fd: Github: do not spin busy on fetch error
003fca8: Github: add current branch to dialog title
d1e13a3: cleanup
c0127b8: highlight completed word in the correct editor (IDEA-115153)
ee4e642: don't clone text attributes for each console hyperlink (IDEA-114275)
27787aa: incoming cache: get revisions for all changelist files in a bulk way (IDEA-99185)
ac9d0fc: make IncomingChangeState immutable, introduce ProcessingResult
5a8f5e2: make RefreshIncomingChangesOperation single-assignment fields final
f5509ad: move RefreshIncomingChangesOperation.myCurrentRevisions initialization to constructor
41f68931: less project-related indirection in RefreshIncomingChangesOperation
eb841471: make RefreshIncomingChangesOperation static
1d7f430: [log] IDEA-64583 Don't invokeAndWait from runProcessWithProgressSync
10f646d: Lens mode (better mouse wheel support)
84d809e: reliably re-throw PCE
7710e54: remove redundant cast
7cdf46b: "show source" action leaves focus in breakpoints dialog
d879f97: Merge branch 'master' of git.labs.intellij.net:idea/community
791a72a: IDEA-112731 - Cannot start WebSphere v6.1 server anymore - fix reappear
de53c88: processElements only in source scope
19aedae: better spreading nameIds / caching strings using integer hash from http://burtleburtle.net/bob/hash/integer.html with previous nameId % 16 striping sampling was like 7542400, 16103753, 1844054, 5237640, 6218814, 11005156, 3395732, 8183208, 1897660, 3119959, 2961691, 7768377, 2177568, 3889462, 6146853, 2765254, with new one 8207442, 6253680, 14184952, 4657166, 5186040, 25115754, 10426752, 15750418, 11239308, 5293340, 16883296, 16750920, 6545096, 23295592, 5969384, 4756022,
96e10cc: soft border for checkboxes
7902d4c: combobox ui for IntelliJ laf
437a1bd: remove darcula background
40f0d43: tune status bar separators for IntelliJ laf
5c6e28e: fix white text in combobox
f8de7fa: several changes in order to get vfs more fault tolerant: - write all file attributes under write lock, - delete file from parent first and after it delete from vfs - when move: delete from old parent first and add to new parent last
0491567: fixed race for concurrent read access of byte buffers from vfs
06ee80f: support Class<?> in loggers (IDEA-105064)
c9c171a: use fileEditor name if no file available
c222858: Merge remote-tracking branch 'origin/master'
49871d0: run configuration for PyCharm Community Edition
599a203: Copy/Paste fix
6a5ed3f: import PyCharm Community Edition .iml files into IntelliJ IDEA Community Edition project
a6d91c2: python-community-tests depends on python-helpers
99147e2: EA-44967 - assert: ComponentManagerImpl.getMessageBus
1a77c55: WEB-9656 BrowserConnectionManager Exception in console on shutdown
fd9da87: correctly locate helpers under new repo layout
e46168d: update community path location for new repo layout
eaea8db: delete accidentally checked in .class files
bed53d1: Merge remote-tracking branch 'origin/master'
92cd05a: remove tests for non-public stuff
fe0dce1: delete build scripts for non-public part of code
44afbb5: remove IMLs of non-public modules
6ae34e7: Merge remote-tracking branch 'pycharmce/master'
d7e6756: don't throw exceptions on contracts with several clauses
c0631e8: OC-8590
f84668f: warn in more cases
4fcb928: possible NPE fix
1ae1cf2: ExternalSystem: project wizard refactoring (related bug - http://youtrack.jetbrains.com/issue/IDEA-115072)
c5f60d1: correct navigation to fields in anonymous classes
02338e1: fixed ProjectFileIndex#getOrderEntriesForFile for files which are library roots themselves (in javascript libraries)
fad7649: WEB-9549 When in CSS context, hyphenated words should NOT be treated at single entities when double clicked
257e013: Restore creation/paste selection after reparse during updateRenderer()
4cacdc9: AppCode: temporarily fix for broken environment (OC-8606, OC-8593)
f533de0: Merge remote-tracking branch 'origin/master'
af655c9: Gradle: code cleanup
fccbb7c: Merge remote-tracking branch 'origin/master'
e74e111: Merge remote-tracking branch 'origin/master'
2472d91: password fields for Darcula and IntelliJ lafs are 2px taller than text fields
261b3c6: password field bg
2ffdbba: a bit faster TaskManager initialization
9e8e97b: avoid expensive class loading in resource bundles
ffe59e6: don't use rotten things in constructor dfa cache
fe6e5e0: a bit faster startup by more efficient extension registration
397585d: ExternalSystem: project wizard refactoring (related bug - http://youtrack.jetbrains.com/issue/IDEA-115072)
b1d8d18: IDEA-114952 Eclipse code style import: would be nice to remember imported file location
c7fbd91: Make "Problems" icon gray if there is no problem inside
8233a03: use native bytes order for bytebuffers in vfs / enumerator / persistent hash map
01782db: avoid O(N^2) filtering where N number of deleted directories
75f48fc: injection pattern building - whitespace matching fix
fa05415: remote servers view extracted to platform
2197063: make text field lower
15475c9: progress bar for intellij laf
3c85664: double dispose
430df85: json icon updated
2080dca: <!--> is proper comment in html #WEB-7733 fixed
75c384b: Groovy Move members: fix doc formatting
0050935: Report @SafeVarargs and System.lineSeparator() usages
ab9a368: WEB-9343 nodejs debugging hangs on rerun
bbf3618: WEB-9669 Add combobox with history in 'Surround with Emmet' dialog
8661388: cleanup unused import
8865bcf: provide default implementation for new processClasses / processFields / processMethods
247ea84: call 'textAvailable' to restore compatibility with 3rd-party ColoredProcessHandler inheritors
2548844: IDEA-113515 "To many events posted" diagnostic message if create remote sdk
5a400bf: live template macro that expands to clipboard contents (IDEA-67895)
81e36b8: toString() template for commons-lang 3 (IDEA-94260)
0ddc96a: "open in opposite group" action (IDEA-84182)
44b7c27: tab placement actions need to be dumb-aware
03c4da6: rename "Alphabetical Mode" action to reflect what it does
9c0da2a: UI cosmetics (IDEA-114953 Eclipse code style import: dialog to select a profile to import has no title)
8a3a087: Merge remote-tracking branch 'origin/master'
4dcb516: cleanup
9859422: cleanup, thread safety
98387e0: cleanup
a03f863: fix menu separator color
d8a9bd4: tooltips background
785ba9b: softer border in tooltips
0dca368: i18n
003f046: Fixed IDEA-115084 Color and Fonts: can't change colors for certain elements
6bc71fb: IDEA-114936 (Good code is yellow: Confusing floating point constants)
a53515b: IDEA-114337 Incorrect HTML templates highlighting
f587d58: fix surround tests
e77c9b3: parallel make: changed default value for max worker threads count
8dc3eab: better wording
d752c7c: support freemarker iso_* date builtins (IDEA-114985)
50b3116: IDEA-115070 Remove "exit" as possible @Contract method effect
57b3cfd: IDEA-115044 Change @Contract retention policy to class
09f9827: make it more clear that @Contract doesn't instrument the code
ec2d2d3: IDEA-105064 Is it possible to have even smarter code completion in case of Logger.getLogger(Class)?
ce10d38: IDEA-103266 Suggest names for vars by method names without 'get' prefix
293e4c5: IDEA-40780 When code is compiled with -g, use the argument name for the @NotNull error.
99e97f3: build constructor dfa to check final field not-nullability (IDEA-114828)
f9b7944: dfa: assert statement throws an exception; distinguish exceptional returns
eb6455e: don't make var nullable if it's not equal to a constant (IDEA-114791)
32d29a0: remove TemplateImplUtil method used only in tests
e01df0a: IDEA-114874 Live templates: don't allow editing $END$ and $SELECTION$ variables
eaf018e: restart highlighting after contract change
2958970: IDEA-114877 Checks using java.lang.Class#isInstance should guarantee argument is not null
e411701: RUBY-14397: reverting unintentional API change
3a0c9aa: RUBY-14397: (refactoring) RemoteRunUserInfo doesn't use projectOrComponent anymore
d0db9be: properly enter bulk mode by fixing calculation of number of created directories when created directory does not exist (VirtualFileCreateEvent has null file but proper directory flag)
17736e9: IDEA-115069 Maven Artifact Search shows not recent version of the artifacts
f725b93: Try to fix JavaAutoPopupTest.
414a189: clear icons should be lighter
bf5f08d: better fix
d763ada: leave bold on Win and Linux only
d29d6f2: SOE fix IDEA-72618 IDEA-98891
9ae0ae7: Lens mode (better glass effect)
17b6cab: remote servers: call 'connect' method on EDT so it can show error dialog if needed
7993314: Lens mode (mouse wheel support fix)
3ef9371: introduce marker interface DifferentSerializableBytesImplyNonEqualityPolicy for KeyDescriptor implementations
e0ddb24: ability to set native bytes order for PagedFileStorage
1725aeb: Code cleanup
84f6e6e: Revert bad fix: don't refresh plugin jar in MavenPluginDomUtil.getPluginXmlFile()
535b92c: not so annoying logging
91740c8: fixed WEB-8096 Breakpoint is ignored in ExtJS app — But is very slow now (due to >100 scripts) So, I will continue working on it WEB-4429 Add ability to debug the GWT 2.5.0 super dev mode inside IDEA — real reason of this commit. We set breakpoint by url, but GWT reports sourceUrl in case of redirected script, so... oh, ma! init WEB-6659 JS Debugger stops at arbitrary point in code — well, set by url is not reliable way due to Chrome bugs Now we set breakpoint by script id. This way is slow, but solves our problems init WEB-6413 sourcemap backed breakpoints do not work until page is loaded — we pause script on first line now, so, we can check sourcemap too.
ca91b61: Fix testLiveTemplateWithoutDescription for Mac
07d6c7d: 'middleware' added
ca4292af: clear cache on Darcula On/Off
81b271f: cosmetics
2050fc0: spaces for better reading
fb28609: after the user presses Expand All in Usages view, subsequent usage searches are shown as always expanded (IDEA-82552)
5455f1c: IDEA-115042 Support audio notifications under Mac OS X (command "say")
943caa2: buttons and search component for intellij laf
ee0033e: customize buttons and search component for darcula
d554916: bigger buttons, bold text on default buttons
64e07f6: make buttons bigger & fix text centering
002b871: ability to load icons from path
b13ae44: customize search icons
fdd15e4: icons for Search component
dcaebf8: Bug fix: generate <type> and <classifier> for managed dependencies.
e31e1be: ChooseItemAction should handle default shortcut char
0e6c7a2: rename
872c421: cleanup
d7f49fe: extract TextTransferrable ctor for the same values
40350d8: Merge remote-tracking branch 'origin/master'
86aaf2f: There are no problems with flickering under Darcula on Linux
03e63d7: IDEA-115034 (Good code red: hex float zero)
52557fa: XmlFormatting
ecf0ebe: IDEA-113936 Dependency autocompletion causes confusing behaviour when editing POMs
48dd11c: IDEA-115019 include Android project templates to IDEA build
32625a1: Merge remote-tracking branch 'origin/master'
ebcb37e: validate editors closed: 1) do not invokeLater the check because it doesn't fire on application close and 2) release editors as early as possible - on project close
af27f56: EA-51089 - IAE: ServiceManager.getService
3710d76: IDEA-114997 (IDEA 13: inspection to replace StringBuilder with String is incorrect)
5380822: cleanup
e6d0ac7: do not block job scheduler thread waiting for read action
746736f: removed assertion
e62520b: cleanup
c61ed4a: cleanup
205cb0d: cleanup
25a8a52: Fixed IDEA-114949 Eclipse code style import: "Align fields in columns" is ignored
4fa9351: Arrangement: allow to remove rule while editing
f649c66: WEB-9384 Emmet expansion collides with PHP completion using Tab IDEA-110640 Tab autocompletes to tag name in XML
ddb830b: WI-16768 Rearranger: exception during reformatting of large file
eab93a3: Add some details to Exceptions to be able to respond to those pesky bad filter behavior/input reports (i.e. SOEs) (cherry picked from commit 100a017)
b4d9541: IDEA-114955 Show red background in the search box if it doesn't match any commits or if regex is invalid
22268d7: IDEA-114955 Display an error message when invalid regex is entered
96c629f: correctly calculate the path to tools.jar in case of "IntelliJ Platform SDK" (IDEA-114975)
5001929: IDEA-114998 Rename Thread Dump Action
f2800a9: IDEA-114955 Cosmetic code style changes
37b896e: Merge remote-tracking branch 'origin/master'
e6e34d9: fix check box colors
8aeaef5: IDEA-114982 (Feature: warn if something is calculated inside assert statement)
faf4bc1: better colors
eff0d88: radio buttons colors
ddb84d5: customize radio button painting
586e93d: wrong background
032f884: Merge branch 'smevok' of https://github.com/erokhins/intellij-community
dfcec99: [git] Save amended message to restore it if edited by user
b0999c0: Darcula: remove custom colors for EL language (inherits from Language Defaults)
a40a4af: XmlFormatting
b5de17b: fix deadlock IDEA-114726 [r=Peter.Gromov]
18ab831: use default.css from jre
83ee8d5: fix menu bar border
e170860: generify border colors
360e7ea: don't load css if there is no css file
0f3388f: Minor code change: simplifying to remove warning.
4e0aded: Fix typo.
23b6b01: Minor code change: optimization.
05aa825: Call completion after inserting of "dependency"
0f474fc: Fixed IDEA-114601 Settings | Editor | Colors & Fonts: value of majority of color scheme settings is not shown (explicit inheritance checkbox)
68e1214: IDEA-114885 ("Class with too many fields" inspection reports both enum and enum fields)
d1bf532: Merge branch 'master' of git.labs.intellij.net:idea/community
c67d3c1: Heroku integration - framework under new cloud api
9ca2b4c: IDEA-114965 ${pom.parent.version} red in pom.xml file
ee18a27: IDEA-114556 Add the ability to have maven targets run before/after "rebuild" Add "before rebuild" action.
078a405: work with supplied project because some GlobalSearchScope could not have it
6aab7a0: fixes in EL
ab10880: Merge remote-tracking branch 'origin/master'
efe5d11: Merge remote-tracking branch 'origin/master'
1a07e85: Jediterm updated.
b7c06c3: Merge remote-tracking branch 'origin/master'
42e59eb: PY-11106 Wrong error message: "accessing protected member of class" for module._member (PyCharm3)
2ab4414: PY-11071 File contains non-ASCII character: layout for inspection settings has invalid grid
f7b1bee: move QualifiedName class from Python to platform
07f7df7: move QualifiedName class from Python to platform
5bba53d: DSGN-467 Draw separate icon for Json files
162b80f: fix tests
939d116: IDEA-25908 (Add Find Usages for imports)
584f605: After-review refactoring
38f3dc4: [git] Better fix for IDEA-64583 get message from git log -1 if amending
9f5d7c3: use card layout for switching tree and text views
b958ac3: IDEA-14881 Ant integration: 'Make build in background' should only show output window on error, show task progress in status bar/background tasks window
d580831: IDEA-114955 Add support for filtering by regular expressions in Subversion repository view
edc4a1c: use unescaped property values for paths calculations and custom elements definition
5da545a: OC-8588 Rename Refactoring renames unrelated method
9b3cffc: convert indents action: correctly calculate end of indent when line contains nothing but whitespace characters (IDEA-79905)
88b0df1: use tab text color in all tabs popup (IDEA-84267)
b9577ac: show correct filename in checkout prompt (IDEA-83617)
6d5552e: honor "hide extensions in editor tabs" when calculating unique filename (IDEA-85737)
541844e: don't block welcome screen showing if 'nosplash' command line argument is specified (IDEA-106794)
fd78a45: IDEA-114556 Add the ability to have maven targets run before/after "rebuild" Add "after Rebuild" action.
b148283: Emmet: fix tests
d08ee7a: WEB-8934 Preview for Emmet live templates
f2f0e41: pep8.py doesn't like our EOF marker (PY-11094)
0409731: remap shortcuts in Visual Studio keymap to better match Visual Studio 2010 behavior: Goto Class is Ctrl-comma, Goto Line is Ctrl-G (IDEA-108504)
7074552: Bug fix: FindJar does not work.
4411232: Download thread must be a daemon.
86f2022: [vcs] IDEA-33094 Unify commit dialog labels text
ed110ba: IDEA-114890 Find Jar On The Web: Looking For Libraries progress can't be cancelled
1f7ba86: Merge branch 'svn1_8_new'
74b0471: IDEA-114837 Should add library in test scope if adding a maven dependency from test class
c8afe58: Merge two listeners.
d4f7442: IDEA-114815 maven: new project from archetype: checkbox enabled but treeview inactive
0a68d79: restore compatibility
060dd0f4: IDEA-108202 Create maven configured workspace. Support VCS properties.
7cb8231: IDEA-108202 Create maven configured workspace. Support "autoscrollToSource" and "autoscrollFromSource" properties.
0c96bf1: Lens mode (glass effect is Darcula-only)
b949eee: svn: Refactored CommandRuntime - renames, method extractions, @Nullable/@NotNull
6e093cd: Lens mode (don't ignore two right pixels)
e1acdfc: pass complete find options via processElements style API when retrieving elements by name via ChooseByNameContributorEx this allows us to suggest directories in goto file contributor when pattern starts with / or \
4b01b2c: new project wizard: associated frameworks
edbc353: borders removed
8b1421c: svn: Refactored CommandUtil - logic to resolve working directory and repository url for command moved to CommandRuntime
e61f4d3: disable "toggle column mode" action in one-line editors (IDEA-82403)
6d6b551: match "Id" in thread dumps case-insensitively (IDEA-114150)
b088ebf: svn: Refactored InfoCommandRepositoryProvider - use instance methods (instead of static)
22fae5e: svn: Repository providers from CommandUtil extracted to separate classes
552683c: svn: Refactored CommandRuntime.runWithAuthenticationAttempt to accept Command parameter
2d7a2e2: IDEA-108202 Create maven configured workspace. Support "assertNotNull" property.
f95705d: Fix javadoc: use Messages.YES and Messages.NO instead of 0 and 1.
def8104: IDEA-108202 Create maven configured workspace. Support "downloadJavadocs" and "downloadSources" property.
6f9978f: IDEA-108202 Create maven configured workspace. Support "jdkLevel" property.
f4136f1: IDEA-113995 Jump list: can't open recent projects that have spaces in paths
dcfe8e2: svn: Made command result builder to be Command class field
978494a: svn: Refactored CommandRuntime - moved Command creation to runWithAuthenticationAttempt
21a0622: svn: Renamed variables in CommandRuntime
0bd8948: svn: Removed unused code from command line logic
50482d9: moved LibraryDependentToolwindow to openapi
c012d1d: svn: Moved SvnBindUtil.correctUpToExistingParent to CommandUtil
52cac77: svn: Replaced SvnBindUtil.changelistsToCommand with CommandUtil.putChangeLists
d6f9b5a: svn: Refactored CommandExecutor - parameters extracted to Command class
5635060: IDEA-114907 Rendering problems in SearchEverywhere under Darcula
7ed02a5: Merge remote-tracking branch 'origin/master'
a749e81: fix assertion
7859049: [git] IDEA-64583 read commit message from .git if "amend" is selected
07af6c9: [git] load commit message from .git/COMMIT_EDITMSG as well
6f0f73f: Updated versions of ReST and GetText files plug-ins
9132454: PY-11103 Method can be static: false positive for aliased class abstractproperty(property):
bd10ad9: Merge branch 'regexp-format-inject'
56b1a9b: Moved Python string literal injection methods into PyInjectionUtil
ac0c562: IDEA-108202 Create maven configured workspace. Support 'jdkName' property.
ab7d520: Refresh plugin jar.
4171246: NPE (IDEA-107442)
1de751a: selectLineAtCaret() implemented for text component (IDEA-112383)
f57b36d: "indent line or selection" action (IDEA-53093)
ea7646c: use same bundle name for each build (IDEA-100897)
f8d0fdd: OS-specific name for settings menu (IDEA-106855)
d74ab42: Don't inject Regexp language into {}-fragments of str.format() literals
1adfd0f: Fixed off-by-1 bug in parsing constant chunks of new style formatted strings
8351721: instrumentnotnull property for javac2 task allowing to switch off @NotNull instrumentation (IDEA-49419)
cfeb285: OS-specific name for settings menu (IDEA-106855)
c00c067: Revert "Improved XmlSerializerUtil#copyBean -- moved type compatibility checks to compile time instead runtime."
520d1c1: Merge branch 'impove_XmlSerializerUtil' of git://github.com/bashor/intellij-community into pull106
e8664e8: force restart after import settings (IDEA-50255)
0e11ad8: svn: Renamed SvnCommand to CommandExecutor
db4c59d: svn: Moved CommandRuntime.Executor logic to SvnCommand
5d36b36: fixed PY-11099 Test runner is trashed
8b1cedc: svn: Refactored ResultBuilderNotifier - extends ProcessAdapter, code simplified
c96b381: svn: Moved SvnCommand.ResultBuilderNotifier to separate class
a117a19: svn: Renamed listeners from SvnCommand
62b8ba8: svn: Refactored SvnCommand - moved exit code tracking logic to SvnCommand.ErrorTracker
20fc436: Removed unused method
2efaf73: Extracted name of str.format() function
cb9634c: svn: Refactored SvnCommand - explicitly specify command result builder (in constructor)
07aef2a: Don't inject Regexp language into %-formatted fragments of string literals
5ef51a3: svn: Removed unused SvnCommand constructor
a49c60b: fixed PY-10337 Web2Py: propagate variables from for cycles to view context
0d330d5: Simplified interface for PyStringFormatParser
2733819: Improved XmlSerializerUtil#copyBean -- moved type compatibility checks to compile time instead of runtime. Fixed bug when copying an instance of Base class to an instance of Derived class. Added XmlSerializerUtil#mergeBeans. Suppressed some warnings.
374833d: svn: Refactored CommandRuntime.Executor - logging logic extracted to separate listener
6ec1ef5: svn: Refactored CommandRuntime.Executory - process started/finished logic moved to SvnCommand
ac51e84: svn: Refactored SvnCommand - encapsulated myListeners field
ad1b5e8: svn: Refactored SvnCommand - command error tracking code extracted to separate listener
5aee765: svn: Removed unused code from SvnCommand
c189c10: svn: Moved command output line handling logic to ProcessEventTracker
d4f94c3: svn: Removed SvnSimpleCommand
32442ef: svn: Refactored SvnCommandLineStabilityTest - use SvnCommand instead of SvnSimpleCommand
e3c0845: svn: Refactored CommandRuntime - one-time command execution extracted to inner class
8a661c0: svn: Refactored CommandRuntime - make command line path instance field
1e17c7a: fixed PY-10848 Implement abstract method: do not insert super call for abstract method implementations
27448db: svn: Replaced SvnLineCommand with SvnCommand
1e97ddf: svn: Moved all SvnLineCommand logic to SvnCommand
31c8f94: svn: Refactored LineCommandListener to be interface that supports currently implemented cancel support
3ac2523: svn: Moved exit code tracking from SvnLineCommand to SvnCommand
18a45f2: svn: Refactored SvnCommand - process event tracking logic extracted to inner class
22cc128: svn: Moved error detection logic from CommandRuntime to SvnLineCommand
d625fd5: svn: Replaced SvnLineCommand.myStdErr with SvnCommand.getErrorOutput()
59f488a: svn: Removed unnecessary myStdOut from SvnLineCommand
0dd2981: svn: Refactored CommandRuntime - use its own logger (instead of SvnCommand logger)
4972309: Find a font that can print a character (PY-10872).
c42f75a: Github: IDEA-113816 show default title/description on creating Pull Request
45921c9: Github: remove deprecated tooltip message
49234a9: Github: @NotNull
cfc1c10: remove newline at the end of xml file
e22af4f: Github: rename confusing variable
22f24dc: Github: capitalise message title
9f28f38: Github: fix warning
99b6333: Github: optimise imports
296c29a: Github: remove static import
75e3036: Github: change error message
d1ef492: Github: change labels
9a72efd: Merge branch 'concat-inject'
217d97d: Added regexp injection for concatenated string literals
b013a83: Added regexp injection for parenthesized string literals
6b008df: typo
8fb7535: svn: Refactored CommandRuntime - use AuthenticationCallback as instance field
6f83245: Merge remote-tracking branch 'origin/master'
ec5f16c: fixed PY-11058 False positive Statement expected, found statement break
7f7c448: Use blinking caret setting (PY-10927).
ddad7f2: svn: Moved command execution logic to separate CommandRuntime class
bfaf87b: fixed PY-11071 File contains non-ASCII character: layout for inspection settings has invalid grid
69e319a: svn: Encapsulated SvnLineCommand fields
ed9a81b: svn: Moved AuthCallbackCase and subclasses to separate files
3503ddb: build win zip and sit for PyCharm community
cac4b7a: Fixed IntelliLang default injections for multi-part Python string literals (PY-10983)
f810244: svn: Refactored command authentication - make AuthCallbackCase instance decide if it can handle given error
8b5f995: IDEA-114518 Log only "original" command parameters - without added authentication data
8a19ae2: svn: Moved SvnSimpleCommand to tests (as it is currently used only in one test)
da2bf24: we don't actually need jython in layout
f75ab9b: Jython is used by tests in community, so it needs to be in community as well
fe2852a: PY-11075 Class must implement all abstract methods: false negative for abstract decorators defined with fqn moved test data to the proper place
75c7e61: correct product code in build.txt is needed for patch building to work
dfc63b5: [log] Fix VcsLogJoiner: remove deleted commits, timestamp sort
e45c2b0: svn: Refactored command execution - removed unused/unnecessary code
33a3ecf: PY-9365 Make function from method: leads to unresolved attribute reference for method usages in class
2527f73: PY-11065 Preferences search: "PEP8" vs "PEP 8" (PyCharm3)
f1d031a: fixed tests
ddb0d98: extracted augment assignment tests
1fb43fb: Github: do not hang on cancel initial showTargetDialog
bb361b9: Merge branch 'master' into createPR
dd75fa1: main_pycharm_ce.iml moved to community
809fcb7: python-plugin-tests moved to community along with some .iml files
543d576: move testdata to community
3f72f71: fixed unicode for python 3 in tests
4a7938d: flipped default value for pep8 naming in test functions
6efecb7: fixed PY-10993 Instance attribute defined outside __init__, which is strictly true, but it's an invalid issue.
227ec9f: Merge remote-tracking branch 'origin/master'
c8fc103: fixed test data
b94a55c: Merge branch 'python-fixes'
8102966: Added test for PY-3991
c4ab47c: Added test for PY-4200
ff9b7e7: another go at PY-8077: increase timeout, log message if pep8 times out, add PYTHONUNBUFFERED=1
22be935: Merge remote-tracking branch 'origin/master'
bd9fe15: fixed PY-10918 disable method may be static for TestCase inheritors
5490ff4: fixed PY-10857 UnicodeEncodeError when run doctest
978dd00: Fixed message in executable validation logic - removed "git" from message
eff2e75: added class_method_versions.xml
193659b: fixed PY-10947 Moving lines splits multiple statements
0e3f000: Fixed injecting Python regexp language into multi-part string literals (PY-11057)
6ecf618: moved method to core
ba71aab: index function throws ValueError not KeyError
cd85b16: fixed PY-10906 Error in nosetest runner (Python 3.3)
70d3040: fixed PY-10976 Method override in nested class generated with incorrect super call
ececc6c: Merge remote-tracking branch 'origin/master'
e181a02: Add browse history button.
ef73535: fixed PY-11047 "Python version 3.3 does not have module exceptions" on relative import
de5c809: numpy no longer exists as a separate plugin
f44003d: Merge remote-tracking branch 'origin/master'
d866446: used proper function for detecting autopopup
2536173: version updated
db87a87: version updated
75a8c6b: fixed PY-10982 Autopopup code completion interferes in docstring comments
0a38c71: fixed PY-10964 Extract variable doesn't work as expected inside brackets fixed PY-10221 Refactor->Extract->Variable may break square bracket symmetry and may break user input
ecec49d: simplified attr outside init search
fd24faa: Merge remote-tracking branch 'origin/master'
e213f36: used proper context
f3a97d4: added cache
03d6eba: removed dot
ee6a21f: Reimplement select word tests for CSS, CoffeeScript, Ruby, Python, Sass, Scss and Less
33d8aee: Merge remote-tracking branch 'origin/master'
bfbeda0: stoptrace method added (PY-4711, PY-4714).
ab671c0: quote function can raise an error (PY-3265).
ac6fe8b: Don't show deprecation warning for IPython 1.1.0 (PY-10800).
ba9cab5: Ports of remote console should be obtained regardless of other output generated by the process (PY-11028).
aadc421: Cleanup.
c48b4bf8: Github: use prefix for @TestOnly functions
c8d8f70: Github: fix tests
dae8bd8: Github: use getRequestTitle() instead of getTitle() - typo
7b6d42c: Github: use dialog manager
1ed98b1: Github: change default value only after successful loading
b1ccaba: Github: rename variable
9692be1: Github: move ProjectSettings usage
01af98e: Github: disable 'ShowDiff' button on fetch fault
1b68335: Github: rename variable
3698a19: Github: get GithubAuthData on worker init
8915d0b: Github: remove useless 'final'
14f55f9: PY-10988 Instance attribute defined outside __init__ with property setter
0c90a2d: extracted tests for PyConvertFormatOperatorToMethodIntention
01b776c: extracted tests for PyConvertFormatOperatorToMethodIntention
fdb3742: PY-10989 str.format intentions don't handle unicode properly in Python 2.x
37dd664: PY-9633 Docrunner raise AttributeError when running tests
b52acac: PY-11012 Web2py project: notify error when import module in folder web2py/site-packages
182bc37: Merge remote-tracking branch 'origin/master'
1153f37: Fixed StringLiteralEscaper.getOffsetInHost() for host substrings
01b5f6d: Added javadocs for TextRangeConsumer
98eef3b: Github: remove inspection warning
ef619d8: IDEA-114360 Github: remove double borders for TextArea inside ScrollPane
6b8a52c: Refactored PyStringLiteralTest.testLiteralEscaper
fe5d5ab: myShouldKillProcessSoftly
c412235: fixed PY-11002 "Variable in function should be lowercase" when overriding class
c05694e: Cleanup
ce40977: Fixed NPE
f02c7a7: Merge remote-tracking branch 'origin/master'
7ed9df6: Nullity annotations
c3e39fa: autotest delay is customizable now; default is back to 3 seconds #RUBY-10517 fixed #PY-10711 fixed
34b3712: fixed test data paths
b98d5e9: Merge remote-tracking branch 'origin/master'
b5165a7: fixed PY-10995 Wrapping on right margin deletes code with slashes at the end of multiline statements.
baec2af: fixed PY-10993 Instance attribute defined outside __init__, which is strictly true, but it's an invalid issue.
be7a549: fixed PY-10991 pytestrunner fails with TypeError
963f53c: fix yellow code produced by CommonDataKeys
63d671e: fix yellow code produced by CommonDataKeys
599f121: fix yellow code produced by CommonDataKeys
344bb2d: fix yellow code produced by CommonDataKeys
94ed15c: fix yellow code produced by CommonDataKeys
8a08827: IDEA-113594 (OpenJDK notification refined)
ac6ec08: Merge remote-tracking branch 'origin/master'
4d40b48: language level pushers can skip individual directories in order to allow e.g. JavaLanguageLevelPusher to avoid nonsource roots
651e4dc: reverted temp fix
4b70fdf: removed debug prints
ccc33e5: Merge remote-tracking branch 'origin/master'
16e2bbb: Merge branch 'python-fixes'
a75960a: fixed NPE
403f40d: Github: rewrite CreatePullRequest: Part2
9e728e3: Github: rewrite CreatePullRequestAction
9be35c8: don't advertise technologies we don't support in community edition
57e7abe: Merge remote-tracking branch 'origin/master'
bbda445: Reset PythonDialectsTokenSetProvider properly in tests
da75c51: add license to pycharm community layout (PY-10921)
d5dda08: separate artifacts dir for community python plugin
b642f81: separate zip name for community edition plugin
359171a: cleanly separate resources of pycharm community and professional
5c1acdc: fix path correctly
963e3a7: running tests in classpath of python-community-tests
c03cf33: add new module to layout
bd6cd15: avoid having two separate plugin.xml files for Python plugin in classpath when running idea from idea
d39610f: temp fix for user skeletons path
0a69cab: moving tests to community
1bab1fc: fix pydev helpers path
5a3bb5e: Do not call synchronous refresh from inside read action except for event dispatch thread. This will eventually cause deadlock if there are
fa9c70f4: Fixed analyzing calls of properties that return callables (PY-9605)
ce1013e: fixed is modified check in dependencies configurable
18d0d62: fix path
df9af4c: correct path for CE zip
f5918b0: don't depend on broken detection of home
7a6dc3e: separate directory for unpacking idea CE
b55af59: python plugin for community edition
3e3c580: move PythonFileTypeFactory to correct place
0492963: restore unjustly excluded "invert boolean" refactoring in pycharm community
eaba9bd: register psi tree change preprocessor as extension, not as project component
3caf699: fixed PY-10933 PyCharm 3.0 - PEP8 in test method names
ca435d0: python: plugin layout fixed
539ba37: If we cannot infer the type of '__new__', return a weak class type (PY-10893)
feea24c: Try to get parameter type from annotations/docstrings, then check if it is 'self' (PY-8953)
c9b8ad0: fixed PY-10902 PyCharm 3 no __init__ class introspection for django Meta classes
7a253cb: fixed PY-10908 SkipTest from nose is reported as test failure
4555da3: Merge remote-tracking branch 'origin/master'
872e697: Merge branch 'python-fixes'
2a07265: Use the last import statement in the current scope if a name is available via several imports (PY-10667)
34c5235: Implicitly imported package members are already processed in PyModuleType instead of PyFile (PY-10819)
52ca435: prepare to release python plugin for intellij community (work in progress)
e5c0741: moving stuff that will be part of community edition sources
c5fc462: fixed PY-10881 Method may be static false positive on abc.abstractproperty
8fed6c9: Fixed function doesn't return anything inspection for decorated and overridden methods (PY-10883)
c81e155f: fixed PY-9403 Good code is red: backslash before empty line
caaa8c9: Merge remote-tracking branch 'origin/master'
ee2dbfc: Fix multiprocess debug for Python 3.3 on Windows (PY-8258).
b0eefb3b: JavaBreakpointType — isSuspendThreadSupported true
dbfe830: Merge remote-tracking branch 'origin/master'
847ae9e: another place where we need to check for empty extraSysPath
fa35ead: copy help to unix layout dir; use correct icon for pycharm.png on linux
28ee757: copy windows help to layout dir
30d4c71: separate picture for welcome screen; dmg background updated again
28d91b4: extract XLineBreakpointTypeBase
ac22df2: Fixed false positive in redeclaration inspection for nested comprehensions (PY-10839)
1990ed0: Merge remote-tracking branch 'origin/master'
dfc2718: Merge remote-tracking branch 'origin/master'
78e74c5: correctly sized dmg background
338b8d5: use correct product code for pycharm CE
e706777: ignore pep8 error if we end up with end offset being smaller than start offset (EA-49122)
e6fed67: don't pass empty extra sys path
3b6b7ca: fixed PY-6801 Remove pypi repository
b64aa0c: Merge remote-tracking branch 'origin/master'
6e563e5: fixed PY-9365 Make function from method: leads to unresolved attribute reference for method usages in class
6691885: fixed PY-9262 Instance attribute defined outside init: false positive with superclass constructor call in class
fd7a566: fixed PY-9448 Add ability to directly create pdf's from sphinx documentation
b9a1c53: Merge branch 'safe-sudo-escaping'
7d973ec: no yourkit in pycharm ce
f26b2f4: Fixed escaping in install packages with superuser privileges on Mac and Linux (PY-9029, PY-10124)
2cd2f53: fixed EA-48905 - SIOOBE: ParameterInfoComponent$OneLineComponent.buildLabelText
a8074fa: installer artwork
879afaf: pycharm 3 artwork
d15de80: Merge remote-tracking branch 'origin/master'
72b7ace: merge numpy support into main pycharm-community code
9cc2deb: Merge remote-tracking branch 'origin/master'
0c865e1: no "Core" in app info
15b82dc: "Community Edition" is too long to look good in the launcher
50f3bcf: remove dependency which is no longer appropriate
004497a: notnull
e164e63: Merge remote-tracking branch 'origin/master'
002dbc3: revert unneeded change
ae64527: explicitly include java-runtime in classpath for building searchable options
96d2926: correct set of used jars for CE build
b3c8ef5: Merge remote-tracking branch 'origin/master'
938f5c7: improved duplicates search in python extract method
8d4d936: correct platform prefix for CE DMG
e548b4c: separate background for community DMG
f7a17a2: moved NPE to InvalidSdkException
ebdbe5f: Merge remote-tracking branch 'origin/master'
86ce6e0: Fixed type of 'iter()' built-in function (PY-10854)
f0a46d4: Fixed generation of skeletons for built-in modules of PyPy (PY-9546, PY-8194, PY-3441)
1010c2e: Updated test data
c5085638: Merge branch 'python-fixes'
1696ffe: Fixed skeleton generation for remote interpreters.
18bc13f: Try all possible importable names of module for resolving implicit members (PY-10002)
f54e7be: Added QualifiedNameFinder.findImportableQNames() for finding all possible importable names
a04d2de: simplify some constant conditions and greenify
fe39c06: fixes for building CE as part of main pycharm build
7271b32: Don't allow empty names when looking for shortest importable names (PY-10002)
3560937: fixed title for the implement methods qFix
d6529f3: Merge remote-tracking branch 'origin/master'
0c29311: in implement abstract methods qFix show only abstract methods
19fa00c: fixed PY-10788 Possibility to disable PEP8 naming convention violation inspection for overridden functions
5e1b982: fix path
0ada8c0: Merge branch 'python-fixes'
95507a1: Don't warn about unresolved references with null or empty names
fe1b2d4: added description for the pep8 -naming inspection
dcc6f81: declare property
c6b8dc6: new imagery
48fa5bf: try to build DMG and NSIS for PyCharm CE
af1b464: @Nullable XBreakpoint.getProperties reverted +review CR-IC-2418
e8e05bf: Merge remote-tracking branch 'origin/master'
3bb3a09: fixed PY-7713 PyCharm asks me to convert set literals back and forth
b21be95: correct app info path for community layout invoked from professional build
2e6bf2b: fixed PY-10843 Converting from concatenation to str.format intention does not work in PyCharm
398c5b1: fixed PY-8484 "Insert documentation string stub" should prompt to choose docstring format if it's currently set to plain
b7ee7d8: Merge remote-tracking branch 'origin/master'
72e1541: Merge branch 'python-fixes'
bc21402: Fixed false positive attribute assignment warning for classes that inherit '__slots__', but don't define their own slots (PY-10158)
df135fc: Disabled call arguments inspection for decorated functions (PY-10601)
a56383f: Moved and inverted hasCustomDecorators() to PyUtil
8a05973: fixed PY-8946 Classmethod docstring creation
b1b5450: fixed PY-9847 Class has no init: disable inspection for classes with unresolved BaseClass
e875e65: XDebugger: @Nullable XBreakpoint.getProperties()
faa22da: Merge remote-tracking branch 'origin/master'
f5b6204: Fixed type inference for nested tuples in 'for' loop targets (PY-9334)
214e07f: fixed PY-10320 Chameleon: resolve request variable to view
835323c: xdebugger: supported rebuilding of standalone variables view
13fe819: forbid including Ant jars
80d1e10: build platform-main as part of main pycharm build too
b765fc3: include python-rest into layout
848bc00: moved python-rest under community, add to build
8ee628a: python-community-tests module initially extracted
4ce0f0a: remove hard dependency of python-rest on django
3dcce7d: move intellilang-python and rest to community, include in community build
ace41d4: paths fixed
61b2464: Merge remote-tracking branch 'origin/master'
b7a2c6a: fix
1199bf7: Merge branch 'python-fixes'
7b6b2da: Test for types in nested tuple unpacking inside 'for' loops (PY-9334)
71e635f: try to build pycharm CE as part of main build
8ce9f60: fix prefix in launcher
e8d67b5: Added test for 'isinstance' check inside conditional expression (PY-10416)
58f85ba: Fixed 'isinstance' analysis for expressions that are resolved to tuples (PY-7573)
11e200c: separate icon for PyCharm CE
8941c1c: fixed PY-10442 Set the working dir to the project root if nothing is specified in the settings
4b2f6a8: fixed PY-10457 Chameleon: missing completion for TALES expression types prefixes
09e5c95: Fixed 'isinstance' analysis for expressions that are resolved to tuples (PY-7573)
3d232a9: Merge remote-tracking branch 'origin/master'
b0ff03e: Python variables view class extracted.
7252188: Merge remote-tracking branch 'origin/master'
c5e734d: use proper type eval context for parameter info popup
36394cb: do not map implicit arguments to **args
c5fd56d: Method renamed.
ebea9ea: Merge remote-tracking branch 'origin/master'
7c35d71: Renamed isUnreachable() to hasAnyInterruptedControlFlowPaths()
b55a8b4: Fixed unbound variable inspection for unreachable code (PY-6114)
c4795bb: Typo
fade29f: Fixed potential NPE
03d1a6b: Extracted isFirstInstruction()
3afada5: Fixed unresolved reference inspection for unreachable code (PY-10006)
b8927f4: fixed PY-10458 Chameleon: comment with block comment does nothing inside html and pt files
fcb06c9: handle nulls in superclasstypes
3c2c528: xdebugger: implemented standalone variables view
e4e5d6a: fixed EA-48890 - NPE: PyUnresolvedReferencesInspection$Visitor.addAddSelfFix
6141f23: Merge remote-tracking branch 'origin/master'
6d0bf60: do not use Sun's internal NotImplementedException, use the standard UnsupportedOperationException instead
89bd18f: do not use Sun's internal NotImplementedException, use the standard UnsupportedOperationException instead
653e6a1: JB license service support, initial
64a094c: icon classes regenerated: copyright added to generated files, more instructive comment added
fbda697: separate set of images for pycharm community edition (copies of original ones for now)
63e7f74: launcher for pycharm community; extract layout to allow calling from main build
f9fc6f8: Merge branch 'python-fixes'
e2dc3c8: Fixed docstring
09fdfb6: Removed redundant PyClass.getSuperClassElements() method
966829c: build zip and tar.gz for pycharm CE
9ba8bb6: Docs for super classes and ancestors-related methods of PyClass
140ecdb: skeletons path fixed
feec73a: fixed EA-49997 - IOOBE: SegmentArray.getSegmentEnd
a2b76b7: Docs and the base class for Python token set contributors
639d213: forgot a flatten
4f4ea89: Merge branch 'python-fixes'
9680fa8: Fixed CCE in PyImportElementImpl.getImportReferenceExpression (EA-50007)
02a7791: fix path again
d3b3b94: Merge remote-tracking branch 'origin/master'
be227e0: Provide language injection only for the first text node of the string literal (PY-10691)
c38fa43: Added PyDocstringTest to all tests suite
688e10d: fix path
5c96f2b: fix module name
1c05edc: build community-main module in pycharm ce
f59cd3a: fixed PY-10777 super() should be allowed for PyQT classes
0684eee: avoid deprecated methods
3a17ba8: Merge remote-tracking branch 'origin/master'
e6f5489: declarations of icons from ultimate part of python moved to separate class
7137cb9: pycharm ce build script
196b3ce: Fixed getting superclass from 'six.with_metaclass()' call
f2b43db: added request code insight for pyramid
9fa7036: Merge remote-tracking branch 'origin/master'
c022f25: added failing test for with_metaclass
78120f4: reverted fix for PY-6512
9237028: fixed PY-10798 False positive on "must implement all abstract methods" inspection
0930c9c: Merge remote-tracking branch 'origin/master'
f498657: Merge remote-tracking branch 'origin/master'
b75fdaa: Add change variable env-test.
d2a5271: Make action dumb-aware.
707986e: Allow None return value.
8cbaec6: Set value: fix import
3e86714: Test for get variable in console variables view.
474228b: Get variable: fix import
e4b6958: Merge branch 'python-fixes'
a1e5aa1: Updated Python package management tools (PY-9700, PY-8681)
01caab8: python: added ability to move multiline statements
9466db5: Suppress unchecked cast warning
da456ea: Updated static member access
51d3a54: Use parameters provided by function type for method signature checks in inspections
79395fe: Fixed overriding instance and subclass checks for types (PY-10229)
bd55989: simplify API — TextBrowseFolderListener JSDebug settings editor — file chooser must use the same logic as RC producer
dce5c8e: Use parameters provided by function type for generating override method
4ade2a6: Use parameters provided by function type in override methods list
37e6ba7: Use parameters provided by function type in completion variants list
9855b71: A minimal interface sufficient for XVariablesView usage extracted from XDebugSession.
19668ef: Search for missing class attributes in user skeletons (PY-9011)
b3d8ec8: Added skeleton for 'datetime' module
94b6506: Removed unfinished test
42f4022: Fixed potential NPEs and other warnings
5c8f686: Don't map call receiver to arguments for static methods
7ffe201: Don't resolve references in types to functions
eeff2a8: fixed PY-6512 Augmented assignment quickfix treats polyadic expressions inconsistently
2e5cb2c: fixed PY-6511 Augmented assignment quickfix treats & and | operators as non-commutative
6928f0d: continue extract "Start Browser" functionality from J2EE to platform xml — JavaEEJavaScriptDebugStarter unified
8f36e9e: Merge remote-tracking branch 'origin/master'
15cc235: suggest to create a file when user creates a directory named "foo.txt" (IDEA-113072)
6e01aa6: Merge branch 'python-fixes'
617b79c: Weaker type for round() built-in (PY-9072)
d81aac6: Added 'io' modules for mock Python 2.7 SDK
a3b5e67: Added skeletons for io.FileIO, io.TextIOWrapper and several others
d085fdf: Use canonical module names for user skeletons
4469bcd: More examples for open() type tests
1907bb8: fixed PY-10549 Unresolved reference: duplicated warning for unresolved param with type in docstring
edd289b: Test of str.startswith() type (PY-10095)
22e1074: Enabled test of constructor signature when there are unresolved superclasses (PY-4419)
4ee194d: fixed PY-10549 Unresolved reference: duplicated warning for unresolved param with type in docstring
f7c144d: Test of float() signature (PY-9664)
d5c0756: Separate 'builtins' skeleton for Python 3
fc1e8c2: Test for checking types of builtins for Python 3
539917d: suggest folder ended with test as test folder
efca2be: Added mock class for <type 'generator'>
40d61dc: Merge remote-tracking branch 'origin/master'
93e4f8e: Added test for types of <type 'generator'> methods
ce16562: fixed PY-10661 Chameleon (general?): Cannot resolve directory
7052782: Added skeletons for built-in 'file' class
ead47ca: Added skeletons for built-in 'list', 'tuple' and 'dict' classes
e655e29: fixed PY-10668 Test Runner: do not propose unittest run configuration on every folder in project
0ff157c: Added skeletons for built-in string and bytes types
86035d2: fixed PY-10653 Chameleon: expression expected
414cb94: fixed PY-10697 Instance attribute defined outside init false positive for django models
54379c3: fixed PY-10701 False positive "missing closing triple quotes" when in docstring code
8fb3009: Merge remote-tracking branch 'origin/master'
1d5bfd2: set use module sdk in tests producer
a6f2fb7: fixed PY-10714 Reformat of string creates error
89a1cb9: Merge remote-tracking branch 'origin/master'
a3e7400: Merge branch 'python-fixes'
fe3540e: Added skeletons for built-in numeric types
129f5ba: fixed PY-10735 Map help button of the Invert Boolean dialog
2f94136: Merge remote-tracking branch 'origin/master'
2f7d6f7: Fix ipython console.
4c2a160: Merge remote-tracking branch 'origin/master'
1292695: fixed PY-10738 Description of an inspections is incomplete
0a6ba82: Removed python-skeletons from the main repository
1c96630: Fixed return type of struct.Struct.unpack() in Python 3 (PY-10660)
cbf36a2: Possibility to select Maya.app on Mac in add interpreter dialog.
1b5fcd6: Added skeleton for 'struct' module
92415ca: fixed PY-9263 Move attribute to init method: add super class call when moving to not yet existing init
81e236a: Set value implemented for console variables view.
9473179: continue WEB-1171 javascript live console: add history actions
bbec7ea: fixed PY-10755 PyCharm mark some statements with side effects as "has no effect"
557a260: fixed PY-9488 Local python interpreter configure
a5e1a57: prefer to use interface LanguageConsoleView instead of impl class
6f6c0e6: continue WEB-1171 javascript live console — works now, but UI is ugly
06c68fd: Merge remote-tracking branch 'origin/master'
bac7a1e: Dispose variables view.
ba7848f: Rebuild view after command execution.
853eb17: Merge remote-tracking branch 'origin/master'
645c85f: added search for executable if sdk home path is a symlink
79d4777: Locals should be empty in console by default.
0924caf: Implement get variable for console variables view.
9a15c3c: Debugger refactoring: frame accessor extracted.
179fd4e: Merge remote-tracking branch 'origin/master'
fd54e0f: Merge remote-tracking branch 'origin/master'
0fe72c7: Check that path is absolute equally for different OSs in tests.
78de90b: fixed PY-10740 Error trying to create new Pyramid project
3a590b3: system dependent paths
54a8a2e: Merge branch 'python-fixes'
818459a: Detect redeclarations in top- and class-level loops (PY-4650)
4788262: Merge remote-tracking branch 'origin/master'
6df221c: Corrected inspection name in test
1432c4a: Fixed adding a path to PYTHONPATH.
c4186d3: Merge branch 'python-fixes'
77d87ec: Merge remote-tracking branch 'origin/master'
de8cc97: Split shadowing inspection into built-in and names from outer scopes shadowing
1e7606a: when parsing settings.py, go into true branch of if statements (PY-8493)
3f060dc: correctly merge declarations when handling imports in settings.py (PY-9640)
cc75e99: Merge remote-tracking branch 'origin/master'
d618c9f: fixed PY-6932 Support for Gtk+ 3 dynamic modules
3bb20bf: init WEB-1171 javascript live console
1b6b4a6: Ignore class-level names in shadowed names inspection (PY-10164)
e9b39a4: Merge branch 'python-fixes'
2311681: ICallback -> Function
96c0b39: Fixed NPE in PyClassImpl.createElementFromExpression()
243fa85: fixed EA-48592 - IAE: PyExtractMethodUtil.<unknown>
6863e4b: fixed EA-48280 - assert: CompositeElement.addChild
5723e5a: fixed EA-49219 - IOE: CheckUtil.checkWritable
968fb4a: fixed EA-49613 - AIOOBE: PythonNoseTestConfigurationProducer.isAvailable
e6ba7c6: Fixed AIOOBE in PyNamedParameterImpl.isSelf (EA-44966)
92929f1: Merge remote-tracking branch 'origin/master'
b92de48: set home in community build script
3e3f920: Updated messages in test
1c3691e: Use new serializer for Python shadowing inspection
e77667d: Fixed shadowing inspection for names defined in comprehensions
586af0f: Changed name and warning message of redeclaration inspection
6982ac4: Fixed AIOOBE at PyRedeclarationInspection.processElement
04c8b34: Ignore 'global' and 'nonlocal' variables in shadowing inspection
682d2ad: Changed priority of shadowed names inspection to weak warning
89f96c0: Fixed toggling variables view.
208c44b: Shadowing inspection now works for shadowed names from outer scopes (PY-10746)
85ef684: Merge remote-tracking branch 'origin/master'
e19a70d: Added rename quick-fix for redeclaration inspection
4a9f96c: Updated static method call
0681f64: Fixed text ranges for unresolved warnings for operator references
22f5cd6: Unused imports
0fec9b9: Merge branch 'python-fixes'
3e3ac10: Note about unconditional redeclarations
aa71404: Don't warn about conditionally redeclared items
2dedd15: fixed high CPU usage in qt type provider (getReferenceExpressionType used in Inspections)
03d185c: Refactored PyRedeclarationInspection tests
d55e3c0: Re-introduced redeclaration inspection
fd689cb: Variable view in Python Console.
46bebeb: Added ignore shadowed built-in quick-fix (PY-8672)
f674870: Merge remote-tracking branch 'origin/master'
01b0901: Merge remote-tracking branch 'origin/master'
e8f2e34: fixed PY-4120 PyQt: unresolved reference: false positive for new-style signals
47ecc5f: merge
ec68461: Merge remote-tracking branch 'origin/master'
c48e3a7: fix path to skeletons
24d0f90: fixed PY-5131 Unresolved reference in PyQt for QtGui module
d6faed2: Merge branch 'python-skeletons'
46199cb: Typos
cbed76e: Renamed user-skeletons to python-skeletons
4386ab9: Search for user skeletons in PyCharm config
9d36b46: parameter info generics: check resulted html
7df9501: Merge remote-tracking branch 'origin/master'
f499c86: temp fix for build script
c176205: Asynchronous call to debugger in consoleExec command.
1cde3f6b: fix registration of CythonFileTypeFactory
21b566c: pycharm community build script initial
5c033c2: introduce property
7ad2460: better to keep appinfo under resources
9570005: extract pycharm community resources
0c74a24: tweaking dependencies to allow running pycharm ce
6cd63be: we're using separate file type factory for Cython, no need to use it here
d44732d: extract community part out of python-ide module
ed02fb5: fixed PY-8485 Move module: breaks imports in doctests
5dc8c31: fixed references in from import in doctests
9ac5a47: Fix tests: we should declare resource bundle only once
faa5af1: fixed PY-8823 Enforce PEP 8 naming conventions
0e29f6e: pycharm community edition run configuration (work in progress)
2de8a31: separate community and ultimate parts of python's plugin.xml
ceb32a3: remove cyclic dependency between python-community and python
dfd482e: in Django, look for references to local variables in template files (PY-8204)
4c7704c: Moved Cython-specific inspection checks to CythonInspectionExtension
2e88c54: Merge remote-tracking branch 'origin/master'
445622e: fixed PY-9724 Notice about abstract methods and properties
273bb1a: include resources from community/src into plugin build (PY-10695)
a886faf: Merge branch 'remove-cython'
8fdfa4c: Extracted CythonReferenceExpression with its own getReference()
029c0b4: include community/src in python plugin build; increment version
cdd33e8: decouple PyDebugProcess.runToCursor() from DjangoTemplateLineBreakpointType
9447b56: extension point for adding options of extra consoles
2b27267: extract Django console settings to a separate service
6a8555c: avoid unnecessary storage of project in PyConsoleSettings, pass it as parameter instead
7e32568: rename PyConsoleOptionsProvider to PyConsoleOptions
904d0f9: Removed explicit references to REFERENCE_EXPRESSION
8fec120: Get unbalanced braces recovery tokens via PythonDialectsTokenSetProvider
771dae3: Added Cython lexer test
abe323d: decouple PythonSdkType from PyRemoteSdkAdditionalData
6f39dc2: decouple debugger from PyRemotePositionConverter; replace some usages of PyRemoteSdkAdditionalData with PyRemoteSdkData
31896f9: PyRemoteInterpreterManager moved to python-community
bbd463a: decouple AddFunctionQuickFix from Django
b627ec8: rename UnusedLocalFilter to PyInspectionExtension, added method for ignoring missing docstrings
7544db6: fixed PY-5654 Uninstall package should warn if the package I try to uninstall is a dependency for another installed package
77a675e: Moved 'cppclass' in PyNamedParameterImpl.isSelf() to CythonNamedParameter
c0f5b5a: No need to check for CythonStructType in canQualifyAnImplicitName()
d09a7bc: Don't use resolvesToSameLocal() shortcut for Cython elements
7f0743b: update until-build according to new branch number
5b63ca9: decouple QualifiedNameResolver from DjangoFacetType
2069d45: Removed unnecessary Cython check
9b0c92d: Low-rate import resolve results generalized as PyImportedNameDefiners
789d186: Name definers collected in ScopeImpl made PyImportedNameDefiner instances
366711e: don't like yellow and duplicated code
9a1c7fe: Updated javadoc
5e8b8e6: fixed PY-9598 Add docstring parameter **kwargs
919531a: fixed "no params" message for python parameter info popup (since it's highlighted as disabled text)
4f6e10a: Python plugin update layout fixed
dee033a: Merge remote-tracking branch 'origin/master'
73d5198: register additional breakpoint handlers via extension point
fb2e538: fix helpers locator when running pycharm from idea
d20eb67: delete meaningless code for passing in parent PATH as well as meaningless OSUtil class
6458ab4: ProcessRunner is not Django-specific => move it to python-community; kill some dead code
8dba7a2: PyConsoleType is a class, not an enum
00756c6: move Django-specific logic from PyStringReferencesSearch to DjangoTemplateStringReferencesSearch
a69b1a3: avoid usage of django VirtualFileUtil
e3c2930: avoid usage of django VirtualFileUtil
328b8f8: decouple PythonTRunnerConsoleProperties from Django
f677396: decouple PyRerunFailedTestsAction from DjangoTestsRunConfiguration
c85bf40: missing dependencies
a3f9061: typo fixed
3df14cd: separate Python and Django live template providers
96d5976: use platform base class for reference implementation
21ab09c: remove nonsense code from BuildoutPartReference
e0d025d: initial extraction of python-community module (for now with a few cyclic dependencies)
9d89437: use same 'no params' message to avoid escape problems
3b4835c: 1) methods "void setPresentation(@NonNls String name, @Nullable Icon icon, @NonNls @Nullable String type, @NonNls @NotNull String separator, @NonNls @NotNull String value, boolean hasChildren);" and "void setPresentation(@NonNls String name, @Nullable Icon icon, @NonNls @Nullable String type, @NonNls @NotNull String value, boolean hasChildren);"
f038a6c: Merge branch 'python-fixes'
cb2a064: fixed PY-7535 Specify return type: intention is not available at the end of the function name
2277c75: fixed PY-7591 Unclear naming for unittest run configurations
f5748d2: Skeleton properties have read-write-delete access by default (PY-9797)
72d5432: Merge remote-tracking branch 'origin/master'
073a417: bundle stylus with pycharm
1af5f7d: Merge remote-tracking branch 'origin/master'
a7eca4d: fixed PY-8437 Nosetest runner is set as default for newly created projects when interpreter doesn't have nose installed.
13150eb: Types for known stdlib properties in skeletons
bd4f8a2: fixed PY-8970 Missing docstring inspection should ignore test classes
500cdb6: fixed PY-9329 Instance attribute defined outside init: disable inspection in testCases methods which start with setUp
f3430e2: fixed PY-9400 Not added super class call with suggestion box
3be529a: fixed test data for move statement
7f7d101: fixed offset for move pass statement into
846eed9: added ability to move continue/break/pass
d3103c0: fixed PY-10665 Move Statement: IOOBE at com.intellij.openapi.editor.impl.EditorImpl.a
f1a71ee: moved template web2py editor stuff to the proper place
828e848: fixed PY-10642 Run Configuration: NPE at com.jetbrains.python.testing.PythonTestConfigurationProducer.isConfigurationFromContext
d915d44: fixed test data for duplocator test
95dea60: Merge remote-tracking branch 'origin/master'
2550490: meet brand new python move statement
755f348: renamed
9e53670: removed duplicated code
b9e892f: Merge remote-tracking branch 'origin/master'
1393f08: notnull
5159214: Merge branch 'target-docstring-stubs'
2e8150a: Added stubs for docstrings of target expressions
ecba7b3: Merge remote-tracking branch 'origin/master'
14bb2de: Pulled getVirtualFileByName() up to PyTestCase
b040b18: Extract attribute type from class docstring (PY-6584)
af9d579: Extracted PyUtil.isAttribute()
74e2668: Made PyUtil.isInstanceAttribute() stub-safe
5f5b83d: Updated misplaced docstring test
9486c83: Removed unused method
ece2216: Don't show statement effect warnings for variable docstrings even if the docstring type isn't set
ce194d6: Merge remote-tracking branch 'origin/master'
be2ba61: fixed PY-10460 Chameleon: missing completion of code blocks opening and closing tag
a09bc02: cleanup: added default (empty) implementation for SettingsEditor#disposeEditor
8e19b68: Merge branch 'python-fixes'
25e55b1: Property.getGetter() doesn't depend on stub/AST switch, explicit Property.getType()
81458c1: Allow any PsiElement as an anchor in TypeEvalContext.maySwitchToAST()
361aa08: honor 'resolve collection items' flag for values assigned via subscription expression (PY-10542)
ec685df: find usages for django template parameters passed from view functions and class-based views (PY-7000)
18f98d4: better presentation of template variables in usage view (PY-6999)
95f758b: class name completion in string literals inserts qualified names after imports; disable class name completion after dot in string literals (PY-10526)
b0f6a39: introduce PyQualifiedNameOwner interface
7bcad34: fixed PY-10443 Chameleon: metal namespace prefix should be available without explicit namespace declaration
f248ea6: Merge remote-tracking branch 'origin/master'
be280d0: Allow stub->AST in TypeEvalContext only if explicitly allowed or in the origin file
78ea35b: init WI-19609 PHP-CGI built-in web server console is needed TextConsoleBuilder.filters
b762f2c: Merge remote-tracking branch 'origin/master'
4121323: Merge branch 'python-fixes'
95cd6a6: Fix missing selection handlers
6ccfeed: Use Python 3 'metaclass=' class as a metaclass, not as a superclass (PY-10208)
785485d: PyType.getCompletionVariants() takes location as PsiElement, not PyExpression
eba0eda: resolve context data of class-based views (PY-10542)
e082a72: introduce and use PyAssignmentStatement.isAssignmentTo()
69030cb: python-tests depends on intellilang-js
4512d1a: separate stub-based and non-stub-based tests for property detection
d035e60: fix compilation due to API-change
040d910: IDEA-109465 make run configuration's unique name unique for module based run configurations
33f05d3: moved django tests producer to the new API
375299d: fix PyDeprecationTest: allow to turn off the new behavior of looking for an injected fragment under the caret in CodeInsightTestFixture
9634fca: moved all python run configurations to new API
cebc332: fixed PY-10510 Web2Py: Incorrect auto-indent in views
72d45a5: Merge remote-tracking branch 'origin/master'
60cd6a2: clean up statements after move
c7a9b6d: rename PyFileEvaluator to PyBlockEvaluator
a830e41: use PyFileEvaluator to parse get_context_data() function and provide context for class-based views (PY-6775)
e8efeb6: PyFileEvaluator can evaluate functions as well
109e28f: understand dict.update() calls in evaluators
197ad1a: understand {} and dict[a]=b in evaluators
201f8d3: teach PyFileEvaluator to keep track of declarations; get rid of old code for parsing aug assignments and 'extend' calls
6f3c795: use proper CachedValue for caching of Django settings values
0e47994: use PyPathEvaluator for all variables, get rid of per-attribute evaluator factories
9e826b0: PsiUtilBase.asVirtualFile, Overrides, remove duplicated code — please use PsiUtilCore.getVirtualFile
27215d0: cleanup — don't implement deprecated method
b435ed1: added possibility to specify missing methods for (class, version) pair. Added updater for new versions.
8275d4c: Merge remote-tracking branch 'origin/master'
b3acfc6: convert single values to one-element lists; test for recognizing string concatenation in TEMPLATE_DIRS (PY-7521)
2ceb00a: handle os.path.realpath in PyPathEvaluator (PY-9787)
0a949a9: handle os.path.normpath in PyPathEvaluator (PY-10194)
735c0c8: full-file interpreter for settings.py; more settings converted to caching framework; understand concatenation in INSTALLED_APPS (PY-8413)
066803c: PyEvaluator returns objects and not just strings; evaluate binary expressions and sequence literals
f78593b: PyPathEvaluator split into general and path-specific parts
013c954: collect all variants of string list building in one pass
341cae1: adjust pattern in structuredDocstring to work with reStructured class definitions in docstring
15f18c1: tests were already fixed, sorry; remove duplicate fix
9c198e0: fix PyToJavaResolveTest
0ad1c6b: Merge remote-tracking branch 'origin/master'
8df2107: fixed tests
11e7dfc: notnull
c04d3c2: Merge remote-tracking branch 'origin/master'
5588578: fixed PY-8845 false positives like "Function 'x' does not have a parameter 'y'"
5cefc84: Extensible comment injector API
7d559c0: moved python's test run configuration producers to new API
58f1a28: Don't provide metaclass user skeletons for classes
b7b65e3: pass location to PyClassMembersProvider; complete request.user (PY-10452)
1e2f44c: allow resolving a PyDynamicMember to an assignment statement
8ec523a: don't lose location when resolving members of union type
2845542: EA-47930 - NPE: PyInstalledPackagesPanel.getSelectedSdk
debd07c: Don't complete files and directories available as user skeleton module members
de39993: Don't complete class-private names from module members providers
afe27cc: Removed redundant type casts
0a710c1: Merge branch 'user-skeleton-signatures'
7b8845c: Merge branch 'python-fixes'
443bbd7: Fixed updating gutter icons for user skeletons (PY-10161)
8972661: fixed tests
5a85409: Merge remote-tracking branch 'origin/master'
dff34d6: fixed EA-45596 - UOE: ASTDelegatePsiElement.delete
8e9e4de: Don't show parameters for multi-parameter types if all parameters are unknown
a6609f0: Fixed IAE at PyClassImpl.getMROAncestorTypes (EA-47519)
46ebd57: Converted anonymous classes to static inner classes
3d03c89: Fixed signatures of slice() and xrange() builtins (PY-9978)
e05d1e2: fixed EA-46419 - SIOOBE: PyDocumentationBuilder.removeCommonIndentation
53db124: fixed EA-46972 - IAE: LookupElementBuilder.create
010715d: fixed EA-47141 - IAE: PyDocumentationSettings.getInstance
5b7dbb4: Use parameters from function type when inspecting function calls
c483f67: Use parameters from function type in keyword completion
5706861: Use parameters from function type in parameter info pop-up
d7fad24: Removed unused method
a77dcfc: Use parameters from function type in documentation pop-ups
ab034cf: Override function signatures in user skeletons
dd43169: Extracted getParameters(Callable, TypeEvalContext)
af58432: added more information for EA-47950 - assert: PyStringLiteralLexer.start
c83f529: fixed EA-48087 - IAE: TemplateBuilderFactoryImpl.createTemplateBuilder
2deee29: fixed EA-48155 - NPE: PythonTestConfigurationProducer.createConfigurationFromFile
3d2c8f5: Evaluate parameter types only when needed
e5b5cb4: fixed EA-41700 - RE: PythonIndentingProcessor.checkStartState
bd49b75: set docstring type for tests
6752083: notnull
5adf641: fixed broken completion tests
ec5ad15: Merge remote-tracking branch 'origin/master'
1dcc55d: Chameleon: improved highlighting
540e3b7: CR-IC-1713 Make SpacingBuilder use correct language settings
8b0b589: Use parameters from function type for matching arguments
a460641: fixed identifiers completion for plain docstring
8a0f011: Fixed nullness annotations
95cd5d0: Made Callable a typed element
bbcee93: Use object instead of pair for describing function type parameters
afd5c68: Fixed possible NPE
f0222e8: Added dependencies to main_pycharm on TextMate bundles and CoffeeScript plugins
7eafba6: added forgotten ini4idea to build.gant
32601ca: Merge remote-tracking branch 'origin/master'
5840e9e: fixed PY-10444 Chameleon: autocomplete closing curly bracket for ${...} operator
7305ae4: fixed PY-10446 Chameleon: when enabled breaks auto indentation for html tags
a27a93d: Merge branch 'python-fixes'
b11d6be: Fixed signature of PyCallableType.getParameters() for inheritors
3c13648: PyCompilationFix
de56b56: Chameleon: added completion for predefined statements
8837eea: Merge branch 'python-fixes'
c0a7dd2: Renamed TypeNameVisitor.processListCommaSeparated to processList
bb34310: Show more details about types of functions (PY-10413)
aded3ba: fixed PY-3413 Complete identifiers from context anywhere in a plain-format docstring
bc19f97: fixed PY-10389 Replace with function call: invalid replacement with backslash after print statement
288f8d3: removed unused intention
08a4fe3: Fixed type signature for 'str.replace' for Python 3 (PY-10402)
5069bc0: Don't show unknown unions in type description strings
ebbabcf: fixed PY-10439 Test Runner: not able to specify setup.py test options: IOError: [Errno 2] No such file or directory
4d17f7a: added ini plugin to pycharm Ini files inspection: highlight only duplicated keys in section, not the random text range in file
3bffbc9: Switched from type checking to comparing type strings in PyTypeTest
0d29b7b: Fixed unification of parameters in function types
8369845: Deterministic ordering in return types of 'yield' generator functions
af6829b: Return weak parameter types and search for all local usages
25028e5: reverted buildout->ini files due to found Ini plugin
99ab256: Fixed type checking of 'for' loop targets for weak unions
712f387: added ini files to buildout file type
6ba2901: Spaces in tuple types
a16af3a: Extracted constant for unknown type
ccd56af: added run configuration for Pyramid
a7b015b: Use '[]' operator instead of 'of' for parameterized types (PY-10423)
375938e: Use '|' operator instead of 'or' for union types (PY-10423)
95df6dc: fixed PY-10420 False positive in redundant parentheses inspection with yield
87bb05b: Merge remote-tracking branch 'origin/master'
3128b7f: Fixed type name for bounded generic types
501d8eb: fixed script_name for manifest_maker
697c7ad: fixed PY-2388 Make PyCharm's testrunner aware of setup.py
4e2f11d: pull up most commonly used APIs from RunManagerEx to RunManager; remove some unnecessary usages of RunManagerEx
3fe4685: Merge branch 'python-fixes'
03a4da2: Fallback to arrow return type only after structured docstring formats (PY-9849)
262f51f: Use the type of the original element for Ctrl-Hover info (PY-10386)
5544c92: fixed PY-10390 Locate Duplicates: do not anonymize parameters in function definition
12f9f24: Merge remote-tracking branch 'origin/master'
30c597c: Pyramid: added navigation from template to view
01511dd: Merge branch 'fix-basestring-type'
0085154: Merge branch 'python-fixes'
5ca099d: Pyramid: added navigation from view to template
4a5c637: Moved open() type signature to user skeletons
745cb4a: Fixed qualified type annotations for 'datetime' module (PY-10365)
e4f55bd: Merge remote-tracking branch 'origin/master'
a1db25a: added type provider for pyramid request
5f604e8: fixed NPE (create run configuration and then set script name)
facef3b: IDEA-108183 XML: auto-popup completion for attribute values with enum
f93540c: new jira connector
732aa2e: deprecate RuntimeConfiguration and remove most of its remaining usages
1dfc2ee: fixed PY-10367 Locate Duplicates: do not anonymize literals along with class methods
f935a65: Merge branch 'python-fixes'
1bdd820: Code insight for dynamic members of 'nose.tools' (PY-7614)
1a754de: Concatenation instead of StringBuilder
e6faf26: Ask module members providers when resolving names in import statements
8f51d96: fixed PY-7648 Wrong "docstring seems to be misplaced" warning in editor
631eb33: Merge remote-tracking branch 'origin/master'
02ff7bd: fixed PY-10366 Locate Duplicates: do not anonymize classnames
c909b18: build scripts migrated to new JPS
f5d9112: fix Python run configurations test
d11d758: platform code takes care of updating generated name of run configuration after refactoring; remove code that does it from specific run configuration implementations
99deeb6: keep track of whether the name of a run configuration was changed by the user on the platform level; introduce LocatableConfigurationBase class for this purpose; deprecate getGeneratedName() and replace its usages with suggestedName(); delete implementations of isGeneratedName() which are now redundant
0220af9: @NotNull RunConfiguration.getSettingsEditor()
a840901: Fixed searching for a user skeleton file for a Python package
61fa16c: ColoredTextContainer — avoid Swing dependency, ability to copy frames (as in JS debugger, now it is supported for all debuggers)
f0e9dea: fixed PY-10296 Access to protected member should have different severity levels
d4ea1ff: fixed PY-10364 Print statement -> function inspection is not PEP-8 compatible
31faaaa: Merge remote-tracking branch 'origin/master'
17135d4: passing a name to ModuleBasedConfiguration constructor is optional; kill a lot of code for passing around empty names
73728e8: Merge remote-tracking branch 'origin/master'
000c261: fixed PY-9676 Unresolved attribute reference 'key' for class '*Property'
19c5431: fixed PY-9661
83c9616: provide default implementation for ModuleBasedConfiguration.createInstance(), delete a ton of existing implementations
fc02fb5: fixed PY-9496 Don't show warnings about base classes without __init__ PY-9302 Class has no __init__ method: false positive for child class without one
718b14a: Merge remote-tracking branch 'origin/master'
78721c0: remove usage of SyntaxHighlighterColors
e0794c1: fixed PY-10152 Web2py: restore parser after not closed tags
be9089a: tiny bit of dead code
0eb4c59: Merge remote-tracking branch 'origin/master'
8f0dc8f: fixed PY-10152 Web2py: restore parser after not closed tags
9bc4191: Project in ExecutionEnvironment is @NotNull; remove some redundant usages of Executor
3849dce: store Executor instance in ExecutionEnvironment; don't pass it to ProgramRunner.execute() separately
acd48d3: test console view receives entire ExecutionEnvironment, not separate RunnerSettings and ConfigurationPerRunnerSettings
b5969ca: fixed PY-10289 False positive in compatibility inspection for 'readline'
8720eae: Merge remote-tracking branch 'origin/master'
390a1e4: fixed tests after fix for IDEA-90750 (Virtual Space)
4f4ef25: fixed PY-10331 Web2Py: unresolved reference: false positive for imported modules inside controllers
98c654f: fixed PY-10329 Web2Py: missing completion of default web2py environment inside models, controllers and views
9ecccef: fixed PY-10330 Web2Py: do not add __init__.py on moving view to some directory
fb890c8: fixed PY-10334 Web2Py: empty tag: false positive for tags with python comments
8be7c8c: fixed PY-10346 __div__ still highlighted as special method in Python3
f26865e: Merge branch 'function-types'
0fd24bb: Fixed type inference for iteration over union types
0a6f172: Function type in type signature of map() (PY-4285)
ad8b160: Added function types syntax
146df04: remove deprecated methods, cleanup idetalk (idetalk must use our built-in web server, will be done in next commits)
f18b2b5: Merge remote-tracking branch 'origin/master'
62b6560: Removed type signature overloading for open() functions
703a0d0: Function types overloading for open() stdlib functions only
90ce101: Fixed type signature of StringIO.StringIO
0613122: Lowered priority of types of default values
174632b: Removed overloaded signatures from type database
c9e9084: 'Less than' arrow syntax for upper type bounds
00bfe44: Merge remote-tracking branch 'origin/master'
5c8122f: anonymize function calls
afc5466: fixed PY-10307 Locate Duplicates: anonymize fields is not taken into account
b242033: track list selection change in inspection's options panel
0b0e81c: remove unnecessary indirection in accessing runner ID
33fa552: inject SQL by regexp
16f323e: added initial resolve for chameleon templates
91b17f5: initial support for Chameleon template language
be4bacd: add intellilang to pycharm release build
36f033f: inject SQL in arguments of sqlite3 module (PY-4260 work in progress)
58d28e4: Merge branch 'union-types-syntax'
dbaa838: Support for pipe operator syntax for union types
495a932: Rebuild stubs on creation of mock Python SDKs in unit tests
9238717: intellilang support for python (work in progress)
d0f5d9a: Support for square bracket syntax for parameterized types
1bf7a3c: python-tests also needs a dependency on duplicates-xml
9a442aa: delete old launcher script for pycharm
3dfb50c: add missing dependency which is required for installer
77a4936: Don't try to unify types of *args and **kwargs
2a302e1: Fixed duplicate requirements if several requires args are specified (PY-10297)
9b0b8b6: duplicates in pycharm build
40af644: Merge remote-tracking branch 'origin/master'
94f93f5: get rid of identifier type in duplicates search
97689fc: Merge remote-tracking branch 'origin/master'
646378b: webide-impl module extracted
0665096: do not pass panel of already closed dialog as owner
288b736: Merge remote-tracking branch 'origin/master'
4aa0c22: Parsing qualified type names should return class types for instances
a53f27c: added initial duplocator integration for python code
ba1dafd: fixed merge errors
a632ef1: Merge remote-tracking branch 'origin/master'
cfa01be: Check type of 'for' loop source expression (PY-6728)
a228b2a: Get type of iteration from '__getitem__'
1cbb9a3: Updated iteration types for bytes and str in Python 3
a3220d5: Typo
4cd9d5a: Use __builtin__.py user skeleton for builtins in Python 3
34698a7: added intelligence for duplicate processor in python extract method
38fe7ac: Merged stdlib type databases for Python 2 and 3
00c1740: Merge remote-tracking branch 'origin/master'
0649e32: Merge branch 'remove-type-reference'
10aa101: Updated type database for StringIO
8dae2f4: Continue searching for local usages if the type of the argument is unknown
ed31b62: Removed unneeded PyTypeReference
2cd9205: Don't use unicode type for Python 3 stdlib annotations
a7b693e: Updated stdlib annotations to use weaker union types instead of unknown types
dbaa029: Weaker union types in type checking
f90d30f: fixed PY-10104 Web2Py: missing completion of base template keywords (block, extend etc)
7ddb812: Merge remote-tracking branch 'origin/master'
8be7af5: Merge branch 'python-fixes'
5843999: Fixed parsing implicitly available qualified types (PY-10140)
ad693e3: query project value before returning defaults
89aff86: fixed PY-10144 Compatibility inspection: dict.itervalues() and dict.iteritems() are not highlighted for python3.3
5ddd350: fixed PY-10146 Packaging: Do not try to install distutils or pip simultaneously with other packages
20757bb: Do not double check for unresolved basestring
29e2a55: added checks to VFSTestFrameworkListener
1a0876d: Merge remote-tracking branch 'origin/master'
3dac1fbb: refactored quick documentation, removed duplicated code. Now we use the same code to run external doc renderer
0c64dc8: Used the same way to encode params for the reStructuredText docstrings as for the epydoc Quick Doc
9eee7ce: fixed PY-10186 "Method could be static" should ignore methods annotated with @abstractmethod
0d58e7a: fixed regression after PY-9920
c427f5f: Docstrings highlighting: Do not stop on the first line after backslash
b00d24e: Do not reuse RunConfigurationModule for different configurations
7f45e1a: fixed PY-10230 Decorator reference in doctest is unresolved
dd0c565: Merge remote-tracking branch 'origin/master'
abace23: fixed PY-10233 "Method may be static" should ignore methods that have overrides
cda2749: fixed sphinx-quickstart for multi-module project
5ace3a2: search for sphinx-quickstart in default location
e390abf: fixed stupid mistake comparing strings
7e0906f: Merge remote-tracking branch 'origin/master'
6c9a7fb: extracted duplicates processing
258ebcc: extracted duplicates finder for extract method
5f59958: refactored
d372247: syntax highlighting for extract method dialog
8b0257f: fixed PY-10207 Inner classes should not fire up "access to protected member" inspections
2127cb6: fixed PY-10211 false docstring notice when using PEP3102
bde41de: fixed PY-10174 Specifying types of local variables requires # noinspection PyStatementEffect
486d0f0: Merge remote-tracking branch 'origin/master'
12cc93c: Merge remote-tracking branch 'origin/master'
ce4e3de: added sys import for pytests
c215607: Merge remote-tracking branch 'origin/master'
96b1fce: Write full stack trace.
8319b8a: fixed PY-10150 Web2Py: highlight comments inside python code blocks of views
e4a2204: fixed PY-10145 Web2Py: disable smart type completion inside views
ae0d7f63: Merge remote-tracking branch 'origin/master'
bf019e3: fixed PY-10130 Web2Py: Throwable at com.intellij.psi.impl.source.resolve.reference.impl.providers.FileReference.rename
75085bf: pycharm 3.0 eap version number and artwork
6719326: Merge remote-tracking branch 'origin/master'
8b8eb18: Merge branch 'type-parser'
8a47706: fixed PY-10102 Web2Py: missing completion of paired symbols inside python code in templates
991094e: Added numeric ABC checks
5f51e22: Added bounded generic types inference and checking
82b0465: made search for package case insensitive
8c892b2: Bounded generic types parsing
9765972: Merge remote-tracking branch 'origin/master'
fbde173: added completion for web2py keywords
3c8d82f: Merge branch 'type-parser'
145f40b: Fixed NPE in PyTypeParser.MakeSimpleType.fun()
ac2e97e: fixed PY-10107 Web2Py: update views on renaming controller function
ef868bb: Merge remote-tracking branch 'origin/master'
0ea0efd: Fixed usage of parentheses for priority
706595a: Fixed unions of collection types with different element types
18324d4: Names of parsers for better debug messages
ba6b014: Fixed NPE in PyIntentionTest.tearDown()
4f6faef: Merge branch 'python-fixes'
9f53d08: Fixed PyStubsTest.testRenamingUpdatesTheStub()
82b3364: fixed tests (documentation format)
99877c1: initial approach to project view for Web2Py projects
e05172c: Merge remote-tracking branch 'origin/master'
cb9218a: fixed tests
2c54f0a: Merge branch 'type-parser'
a160497: New more formal and declarative Python type parser
3e9df38: fixed PY-10081 Insert documentation string stub: Do not insert rst annotations when generating docstring in plain format
2cd503e: fixed PY-10070 Do not report "method can be static" for overwritten methods
397268f: Merge remote-tracking branch 'origin/master'
755b16d: ability to provide custom structure view wrapper (editor action "Select in -> File Structure")
c61aeeb: Merge remote-tracking branch 'origin/master'
98851d0: fixed PY-10077 Web2Py: disable ineffective statement inspection for language files
8323461: fixed PY-7083 Lists in Epytext don't render properly in Ctrl+Q pop-up
10fd1ee: fixed PY-7083 Lists in Epytext don't render properly in Ctrl+Q pop-up
4ac79a3: refactor "change signature" and JBListTable to reduce the amount of copy/paste
6dc7537: pass owner component instead of project for showing virtual environment creation errors (EA-47068 - IAE: PackagesNotificationPanel.showError)
6964c9f: remove unnecessary spaces before pasting text
35cd1be: do not anyhow change text pasted to a string literal
e826946: do not use the same formatting rules inside string literals
e3493ce: Fixed completion with type inference from usages for empty attribute name after call expression
4ffeef8: Merge remote-tracking branch 'origin/master'
6b6062d: fixed PY-10007 Epytext @raise should include type name
7fdc938: Overloaded type signatures for functions are not supported in user skeletons yet
5c6ffa5: fixed PY-10023 Fill Paragraph: do not remove first space in paragraph
3bdfc4f: Merge branch 'user-skeletons'
8c26f83: fixed PY-10025 Run/Debug when cursor is at the end of line of test function runs whole test class
0936d36: fixed PY-7794 Docstring after comment - syntax highlight
6795fa8: fixed PY-10032 PyCharm No Longer Highlights Module Docstrings after Comments
dfac6ae: Provide types for built-ins using user skeletons based on cached docstrings
e95e41a: Merge remote-tracking branch 'origin/master'
110983a: Web2Py: added colors highlighting
74bef37: added initial support for custom references in web2py templates
4290a4d: added initial support for web2py templates
1fc0e46: don't include null elements in collectUsedNames()
2a75931: Merge remote-tracking branch 'origin/master'
bce0812: Web2Py : organize code into packages
e759d85: Web2Py : added navigation from views to controllers
aa6dd87: added navigation from controllers to views
a57d1f1: refactoring to separate state and result classes; a bit more diagnostics
0b3e027: Merge branch 'user-skeletons'
3ff968c: Cached PyFunction.getStructuredDocString()
11526bb: Extended PyDocStringOwner interface and moved docstring utilities into DocStringUtil
da11e6a: fixed broken test
8a15e3e: fixed skeleton generator for ubuntu 13.04 PY-9129
dc00f2d: Extracted StructuredDocString interface and moved to PSI API
e247b6e: Merge remote-tracking branch 'origin/master'
f7e44f7: added extension point for reference resolve
85c9e82: use correct plugin name (WEB-8084)
98ddc00: Merge remote-tracking branch 'origin/master'
6991115: Merge branch 'user-skeletons'
49e7309: Merge remote-tracking branch 'origin/master'
312238b: additional diagnostics for PY-8077
8c0b8a3: Moved 'user-skeletons' to 'helpers'
11585e3: User skeleton for 're' module (PY-1219)
76e5ac7: EA-46978 - IOOBE: SegmentArray.getSegmentEnd
f9adab8: to avoid SOE, don't check validity for light elements (PY-9620)
838275e: always suggest to run tests in directory (it costs too much to check big directories)
2e11b5b: Basic support for user-defined skeletons
29894dd: cleanup
f3f1c8a: fixed PY-9960 Restructuredtext doc strings produce poorly formatted quick help
2b025f6: Return weak types for target expressions if we are not allowed to switch from stubs to AST
fda444e: fixed PY-9970 Changing settings about treating txt files as rst doesn't take effect until project reload
64cc818: do not search for init in test classes
316b038: fixed PY-9971 Django Tests: invalid default test run configuration is created for django 1.6 fixed PY-9244 Django Tests: invalid default test run configuration is created for test class and test method with django nose test runner
eaa3368: fixed PY-9568 Doctest match pattern doesn't work with py extension
ff716ed: fixed PY-9956 Join Lines fails to preserve space between tokens
0d51c2a: Merge remote-tracking branch 'origin/master'
52d1e42: fixed PY-9963 cannot configure options for pytest that contain spaces
a757d5f: Merge remote-tracking branch 'origin/master'
23d3144: fixed missing imports
c97f24e: moved all configurations in Python Integrated Tools to module base
367fef5: Merge remote-tracking branch 'origin/master'
9f1d0dc: reverted fix for PY-9920 according to discussion CR-IC-1139
e6951fa: Merge remote-tracking branch 'origin/master'
65e7e86: added element manipulator for rest line (as injection host for doctest language)
584035e: Merge remote-tracking branch 'origin/master'
ad9f507: fixed PY-9527 unittests miss tcunittest
7b88d41: Convert line separators in test.
95ce9fd: provide scope to detect location more precisely (IDEA-107895)
1e10af9: fixed PY-9845 Doctests: missing completion for import statement on first line of rst file
f6f185a: cleanup
9031840: fixed PY-9847 Class has no init: disable inspection for classes with unresolved BaseClass
4c723be: improved search for duplicates in class
0bfae05: fixed PY-9652 Replace print with function call: does not put argument to call in case print is on one line with compound statement
36cd516: fixed PY-9653 Extract Method: Replace Duplicates: do not replace occurrences with class fields
f2496b0: fixed PY-9871 TestRunner not working with Django 1.6
1ad949e: Merge remote-tracking branch 'origin/master'
2afdd5c: fixed PY-9656 Make function from method: update import statements
6b24c81: Merge remote-tracking branch 'origin/master'
5a4ec12: Merge remote-tracking branch 'origin/master'
7782f88: fix exception from pep8 ea
075dc7e: Fixed NPE (PY-9882).
ebb6f70: move ModuleAwareProjectConfigurable to platform
536d978: fixed PY-9654 Make function from method/Make method static: correctly update class calls with first instance argument
b2a2abb: fixed PY-9654 Make function from method/Make method static: correctly update class calls with first instance argument
4d3a63d: updated test data
38677e6: notnull
ff63c30: fixed highlighting for not closed quote PY-9617
a0dbf20: Merge remote-tracking branch 'origin/master'
0f0c7bc: do highlighting for file-level docstrings which are not at the beginning of file PY-9617
d3d3743: Merge remote-tracking branch 'origin/master'
c8e9390: Some comments.
02a11c2: Added terminal to PyCharm distribution.
4f3cbd7: Merge remote-tracking branch 'origin/master'
c3d4128: fixed tests for PY-9617
08d2b4d: fixed tests for PY-9617
237864f: PsiUtilBase usages removed
62dcafa: fixed PY-9819 Wrong highlight for property in metaclass
75e7be3: moved to core
b69de82: Merge remote-tracking branch 'origin/master'
1c47a61: moved to analysis
8f877cd: detect 'del x' as write access (PY-9784)
0b5a2ab: since/until and version number for Python plugin in 130 branch
bc57a3e: when possible, use document instead of raw text for mapping between offsets and line/column numbers (PY-9519)
7042802: Merge branch 'python-fixes' into user-skeletons
dc19c6e: Fixed type inference for class fields in stubs holding 'None' values (PY-7340)
2672d86: Merge remote-tracking branch 'origin/master'
d0156f3: Include Terminal plugin.
ec0d7c1: fixed PY-9617 Unusably slow when editing a large (12kLOC) file
61f590d: Merge remote-tracking branch 'origin/master'
3e49a9a: fixed PY-9617 Unusably slow when editing a large (12kLOC) file
09602cc: removed apache commons library usage
8615717: proper fix for PY-9840
e563ecd: Resolve module members by providers only if resolve failed
a528430: Merge remote-tracking branch 'origin/master'
1fbad51: fixed PY-9661 Fill paragraph generally bad
f8b1c19: fixed PY-9840 Enter in comment leads to inconsistent caret position
20d7ba5: NotNull
4c9459c: Merge remote-tracking branch 'origin/master'
4954874: fixed PY-9730 Method can be static: disable inspection for methods which simply raise NotImplementedError
e71cda6: fixed PY-9796 Replace <name> with self.<name> inside staticmethods: AIOOBE at com.jetbrains.python.inspections.PyUnresolvedReferencesInspection$Visitor.a
82d288d: fixed PY-9794 Change signature: loses parameter in function call when rearranging arguments so simple parameter gets after keyword parameter in call
504e76c: Don't evaluate getTokenText() when it is not used for performance reasons
f33411c: Read content lines lazily
40562ed: Added analyze parameter types action in development mode
d3525c5: Infer parameter types from file-local function usages during code completion
fd0476b: Updated TypeEvalContext in tests
58fc2e7: Use original file as origin in TypeEvalContext during completion
fce1b98: Added 'origin' parameter to TypeEvalContext.userInitiated()
17abf2d: Merge branch 'type-eval-contexts'
968f550: Don't perform extra computations in type checker inspection if formal parameter type is unknown
a55e505: Renamed deepCodeAnalysis() context to deepCodeInsight()
3748482: fixed PY-9657 Make function from method: function call is not updated correctly when imported with qualified reference
6840669: fixed PY-9659 Convert method to property: intention should be available for methods with yield
3a3f31d: fixed PY-9660 Convert Method to Property: provokes Getter should return something when converting function which has return without return value
465a0b0: fixed PY-9690 Show 'Insert documentation string stub' only on function name
76e34ee: fixed PY-9691 Doctest configuration producer loads contents of all files in clicked directory synchronously in EDT
5e60dfb: fixed PY-9715 Inconsistent "Too broad exception clauses" inspection
3c6fdca: fixed PY-9715 Inconsistent "Too broad exception clauses" inspection
d154569: fixed PY-9721 "Replace <name> with self.<name>" quickfix does not handle classmethods or staticmethods correctly
0c1ab3e: fixed PY-9730 Method can be static: disable inspection for methods which simply raise NotImplementedError
0e5eed9: fixed PY-9735 Disable access to a protected member inspection for double-underscored names
49f4176: fixed PY-9753 Change Signature Refactor - removing an argument creates a mess in the calls to the refactored method
c29387f: Merge remote-tracking branch 'origin/master'
fd54bc9: Reused available TypeEvalContext instead of codeInsightFallback()
9ffac76: Renamed TypeEvalContext constructors
0eb7ccf: Slow TypeEvalContext for completion variants, current context for resolve
c1b3956: Share TypeEvalContext among several iterations
c8502bb: Added TypeEvalContext.codeInsightFallback() constructor
a75cb96: checking SDK flavor only makes sense for Python SDK type
e6e730d: Use inspection session type eval context
4f0702e: Current TypeEvalContext for comparing references to methods
23ed6bc: Slow TypeEvalContext for completion, documentation, find usages, quick-fixes
990006b: Fast stub only TypeEvalContext for checking intentions availability and for resolve
deaf4a0: TypeEvalContext for docstring types inspection from inspection session
cce09ec: cleanup: obsolete CvsFileFilter removed
4da0295: Merge remote-tracking branch 'origin/master'
353ddcb: Fixed NPE at DocumentationBuilderKit.combUp (PY-9731)
fc174bc: Added some inspections to PythonAllTestsSuite
a2962fd: InstalledPackagesPanel moved to webide-api
6a87f08: get rid of InstalledPackagesPanel.mySelectedSdk
0766497: InstalledPackagesPanel is filled with data through PackageManagementService
ea6fe72: uninstalling packages works through PackageManagementService
66e081b: InstalledPackagesPanel runs package upgrade through PackageManagementService
941b3c8: push out creation of PackageManagementService to clients of InstalledPackagesPanel
319ce3e: push down Python-specific constants
9cc210c: introduce InstalledPackage class as superclass for PyPackage; use it on PackageManagementService API
2ddcc41: initial split of PyPackagesPanel into general and Python-specific parts
b6d5b4a: remove python-specific diagnostics from general manage packages dialog
e2264be: Merge branch 'python-fixes'
a521fe1: Infer types of logical operators
e122e60: Infer parameter types from default values (PY-7063)
5f0bb19: Cache resolve results from
877ce88: Renamed PyCythonImportResolver to CythonImportResolver
b1b3dc3: Added type annotations for several 'dict' methods
0dacabd: Merge remote-tracking branch 'origin/master'
5343b8f: implement getRangeInElement: fixes inject language action
59292a8: Merge branch 'python-fixes'
df86d66: fixed PY-9646 Invert Boolean: do not invert reference in import statement
2d7abf0: fixed test runner for Django 12
7b576ab: Fixed completion for 'namedtuple' fields in Python 3.3 (PY-8904)
a44550c: Fixed missing name declaration for Cython functions (PY-8806)
18007fe: decoupled ManagePackagesDialog from the rest of Python-specific stuff, moved it to webcore.packaging
608b668: PackageManagerController is not a controller, rename it accordingly
a081a5d: moved handling of 'install to user' checkbox to PackageManagerController
6f904e4: package manager refactoring step 2: move core classes to com.intellij.webcore.packaging; decouple ManageRepoDialog from Python service
b5def8c: package manager refactoring work in progress (goal is to allow reusing same package management UI for different package management systems such as PyPI, NPM, Bower etc.)
5652cbf: replace CodeInsightUtilBase.prepareXXXForWrite usages with FileModificationService in core-api
c56272b: avoid HintAction if not necessary
6c5b8fc: notnull
dbc5150: moved to OpenAPI
4267faf: NotNull
aa85c4e: Command line API simplified for common use case
ae57486: Fixed regression in resolving 'namedtuple' members
3c1271b: Merge branch 'python-fixes'
129f176: do replace duplicates in different functions in class during extract method
1c8c0c6: Fixed regression in Django completion tests
8ac4b9c: Fixed NPE in PyJavaMethodType.getName()
8a8aa06: Nullable annotations
9bc46fa: Fixed regression in resolving 'namedtuple' members
3d49b60: added convert method to property intention
8a60a26: moved intention tests to special place
9bc89de: update usages after converting static method to function
bf71eaa: update usages after converting method to function
b6c5477: Command line API cleaned, take 2
6fa9dcd: Throw exception with stacktrace if couldn't run remote process instead of message.
019dc4d: Merge remote-tracking branch 'origin/master'
7592dfd: Package requirements and skeletons tests should use fixture from test task as it is recreated every run.
64668ea: added convert static method to function intention
ca3c43c: do not invert boolean in builtins
f283890: do not rename if name was not changed
6ce131b: do not add already presented path to sys.path
f6e9626: Merge remote-tracking branch 'origin/master'
f872287: added invert boolean action
c6610f6: Command line API cleaned
d54c6ba: Merge branch 'python-fixes'
1421279: Added 'isinstance' checks for qualified references (PY-5614)
12e464c: Merge remote-tracking branch 'origin/master'
1df8d5b: Pass 'inherited' resolve flag to PyType ancestors
c56a56c: Don't try to resolve members of PyClassType via PyType.resolveMember()
4d04cfc: Added 'inherited' flag for PyType.resolveMember
320c8be: Run console progresses from EDT.
e9fccfb: Fixed NPE in PyQualifiedReference.getVariants()
f562850: Merge branch 'cfg-negative-assert-type'
51f3ec1: Merge branch 'python-fixes'
83a912a: Added negative type assertions for 'else' and continue after 'if' then exit (PY-5084, PY-7694, PY-9118)
86d1036: Merge remote-tracking branch 'origin/master'
832243a: fixed PY-9604 No option to run unit tests in the context menu
a4e5ce8: [vcs] Remove "Move to Changelist" from all places but the Changes View
904196e: Inlined PyClassImpl.calculateAncestorTypes()
a37f048: Fixed type inference for parenthesized yield expressions (PY-9590)
230de54: Merge branch 'new-mro'
59f0cf2: Merge remote-tracking branch 'origin/master'
8285e19: Added class ancestors cache
b2816e5: NotNull
da62cc4: Use TypeEvalContext for getting ancestor classes if it's available
2eab025: Added test for MRO in complicated diamond hierarchy (PY-4183)
e2f2bc6: Use 'object' for resolving only specific attributes of 'module' (PY-7823)
d6972ab: Fixed several bugs in new-style MRO algorithm
f8eb0dd: Fixed resolve of multiple inherited constructors for new-style MRO (PY-9080)
62532f3: Merge remote-tracking branch 'origin/master'
f9f63c1: Fixed broken condition.
cfc8d92: Added PyClass.getAncestorClasses(TypeEvalContext)
d0de628: added Replace duplicates in Extract Method
563104c: Changed PyClass.iterateAncestorClasses() to getAncestorClasses()
eb3aa61: NotNull
fb3b838: Merge remote-tracking branch 'origin/master'
73f0ba3: Fix getpass in pydevconsole.
862ccef: Fixed getpass in debugger (PY-4012).
7845bc9: Removed PyClassRef and PyClass.iterateAncestors() in favor of PyClass.getAncestorTypes()
1d8313a: We should override getpass for Unix too.
04837e3: make auto focus for all consoles with history
e6d6625: Removed members provider for SQLAlchemy declarative_base
56203f9: use standard focus management instead of later invocators
4972079: Fixed createsuperuser manage.py task (PY-6683).
d7b81f0: Fixed test data files capitalization
ebab3b5: Fixed Jython resolve test for Java superclasses
55e1379: Added common ancestor PyClassLikeType of PyClassType and PyJavaClassType
ffc180d: manage.py tasks work for remote interpreters (PY-9513).
297fb2c: Added warning about threadframe module (PY-9172).
4cab00f: Fixed exception inheritance inspection for unresolved base classes of exceptions (PY-5811)
57621dd: Refactored exception inheritance inspection tests to highlighting tests
0c75d0a: Removed PyClassRef-based ancestors iterators
700e7db: Iteration over ancestors via PyClass.getAncestorTypes()
a1afc33: Merge remote-tracking branch 'origin/master'
d576b7f: include resources directories in the build of l10n and rest plugins
1de53dc: Fix resolving of python members (PY-9512).
188f394: fixed PY-9416 hybrid_property of sqlalchemy is only recognized as write-only property
24eaf50: added getter for class property
9e7cc02: fixed PY-9499 Django: Not able to run django tests because of circular imports in test runner
261d507: fixed env tests
31a51a9: Merge remote-tracking branch 'origin/master'
04e9957: fixed PY-9470 Parameter Info: provide reasonable view for functions without parameters
50f455e: Merge branch 'py-6805'
34221a6: Fixed recursion in evaluating return type of __new__ while evaluating type of cls (PY-9493)
bbd0c61: Cleanup.
98ec453: Don't convert sys.path's to system dependent style (PY-9492).
072b388: Fixed unresolved references inspection for fields defined in __new__ (PY-6805)
98ac387: Ansi escapes decoding for python remote interpreters (PY-7992). ColoredProcessHandler refactored.
1399493: Disable 'Encoding not specified' inspection in console.
3abb146: Don't pass action event to deep (PY-9465).
4d1dea0: Changed case of the message (PY-7800).
2d6673e: Merge remote-tracking branch 'origin/master'
b28c2d5: Convert paths used to setup python console for remote interpreters (PY-9173).
8d7bd52: Merge branch 'python-fixes'
82303ad: fixed PY-9466 Wrong current parameter highlighting for the very first parameter in function
528f673: fixed PY-5151 Unclear message on positional parameter assigned both with keyword arg and *args
f052489: fixed PY-4757 Mako: python code blocks and variable blocks lose background color once highlighted as errors
bc181f5: Merge remote-tracking branch 'origin/master'
f5e6ba5: Fix circular import problem in Django console (PY-9030).
eb3081f: Refactored collecting instance attributes
ab9a378: Wait for the next connection after process is terminated in remote debug (PY-9340).
6aff7c8: Removed unnecessary recursion in assignments visitor
1c09a17: Better 'can't execute' message (PY-9456).
7de2d43: Correct import completion in console.
367f6d4: fixed test data
7def3e1: fixed PY-9407 Method can be static: false positive for methods with first parameter other than self
9ccbc08: fixed PY-9408 Method can be static: handle properties and classmethods friendlier
6134dd9: Fixed types for FileIO.write() for Python 3 (PY-9289)
f190c0a: Regenerate _collections skeleton for Mac (PY-2292)
6f7ef39: Fixed skeleton signature for seek() methods of cStringIO classes (PY-9056)
24e5198: switch to declarative API for bundled color schemes in PyCharm
553598b: Merge remote-tracking branch 'origin/master'
d01629e: Remove extra prefix from IPython completion variants (PY-9393).
1b8fe93: additionalInfo should never be None (PY-9420).
adf8104: Focus problems with console refixed (PY-5560).
e4bfec6: fixed PY-9446 False positive: Class has no __init__ method inspection doesn't understand __new__
0145f33: fixed PY-9440 False positive in function doesn't return anything inspection for binary skeletons
1472b0e: fixed PY-9302 Class has no __init__ method: false positive for child class without one
0e0a670: fixed PY-4449 Wrap parameter info
2ebae24: do not corrupt non-python files by our copy-paste processor
ae3e438: fixed PY-9337 Copy paste doesn't always preserve the relative indentation of what is pasted.
b16f001: fixed PY-9337 Copy paste doesn't always preserve the relative indentation of what is pasted.
5d8c3a5: Fixed type signature of chr() for Python 3 (PY-9042)
89d58d4: Correctly execute code in console when there is something in console (PY-9428).
7f362d5: Other name for execute selection action when there is no selection (PY-7800).
9f5d02f: Warn that process is not stopped on code execution (PY-9431).
a9dffdc: Merge branch 'python-fixes'
75b825c: Set type eval origin for references in find usages if it isn't set (PY-9047)
6eb69aa: fixed PY-9405 Method can be static: handle existing decorators
913e96e: fixed PY-9406 Make function from method: when removing last method from class add pass statement
46e58f1: Merge branch 'python-fixes'
f7d3b43: Dummy parameter list for Cython functions if there is no real list during reparse (PY-8806)
4455f0e: Only processes executed for Debug.
3bad56e: full set of images in the pycharm icon
b646365: Open debug console action in Tools menu (PY-5449).
62be77a: Merge remote-tracking branch 'origin/master'
763630a: Execute in console should send current line if there is no selection (PY-7800).
595595e: Save all documents on run python console action (PY-7783).
4b8113b: Less yellow.
0ff792c: EA-45050 - IVFAE: PersistentFSImpl.c
0a7ee08: EA-45275 - PIEAE: PsiElementBase.getContainingFile
526b6fd: specify icon path for windows launcher (PY-9383)
16ada37: Usage of GeneralCommandLine.getEnvParams() replaced by getEnvParamsNotNull() analogue in cases where it was a) NPE possibility, b) !=null assertions and c) checking for null and setting new map
bf638c2: Gevent debugging compatibility option (PY-9372).
768f4b8: Used alias for Queue module to avoid collisions with user imports of Queue class (PY-9360).
0dc8f671: Rename processor for django static prefix (PY-9362).
5faf163: Merge remote-tracking branch 'origin/master'
5c40d64: use new launcher for PyCharm
f660573: Fixed step over breakpoints with conditions (PY-9213).
65e23a2: Add types to generated docstring on smart-enter (PY-8781).
417372e: Fixed indentation in generated docstring (PY-9127).
89f19c9: Fixed unresolved false positive for GET/POST/FILES in Django request (PY-7482).
7a13134: Url filter for all python run configurations.
459f971: Merge remote-tracking branch 'origin/master'
dc1a086: Fixed renaming prefix for static roots (PY-9307).
ac96fe6: fixed PY-9301 Instance attribute defined outside init: false positive with init only in superclass
afb490c: fixed PY-9329 Instance attribute defined outside init: disable inspection in TestCase methods which start with setUp
c47a5d4: fixed PY-9331 Method can be static: false positive for static methods
ab9622f: fixed PY-9274 Broken raw bytes literal parsing for Python 3.3
ade6f46: fixed PY-9136 The TeamCity plugin breaks tests with current Python 2.7
4f1a53c: Fixed losing tracing on creating new process in multiprocess debugging.
536d12a: Merge remote-tracking branch 'origin/master'
4342b7f: fixed target references in static method inspection
0c60543: Fixed multiprocess debugging using multiprocessing module on Windows (PY-6649).
ba2efd7: fixed tests
6a6dc8a: updated test data for constructor/__init__ inspection
843bf0e: fixed PY-9322 Django: not able to run django tests without nose installed
3fffafb: fixed PY-8520 Refill Paragraph should not move first line in docstring to the line with quotes
6430040: Fixed resolving of absolute path on Windows (PY-8256).
89cbab5: Merge remote-tracking branch 'origin/master'
d64e31e: SQL quoted identifiers handling reworked & IDEA-103850
ccd116d: Fixed type naming for dynamically evaluated types (PY-8898).
10355b4: Fixed hanging of the debugger (PY-9291).
e444b04: Fixed closing socket in debugger.
9970779: Fixed closing socket in debugger.
0297cc1: Better error message when distribute install fails.
e615e47: Merge remote-tracking branch 'origin/master'
844efbe: make sure RubyRunner and PythonRunner are used to run Ruby and Python run configurations respectively, instead of DefaultJavaProgramRunner (IDEA-103903)
d921352: uncomment tests
e1299cd: added fill paragraph test for Denis
025c63a: fixed PY-9259 Unwrap/Remove action is not available with caret/selection inside statement with caret on empty line
e2da503: fixed PY-9260 Unwrap/Remove action is not available inside statement with caret on indented space
1fa5b3b: added Method may be static or a function (+ quickfix)
8ac60d9: Merge remote-tracking branch 'origin/master'
70530c9: Merge remote-tracking branch 'origin/master'
7380ea1: Fixed renaming of django file references (PY-9146).
47c38ee: Fixed renaming file reference with defined context (PY-9147).
cfdb655: Fixed path evaluation.
7dd847d: Merge remote-tracking branch 'origin/master'
b6c860f: fixed PY-9262 Instance attribute defined outside init: false positive with superclass constructor call in class
097ed48: fixed PY-9263 Move attribute to init method: add super class call when moving to not yet existing init
24cef77: fixed PY-9264 Unify naming for inspections related to class constructors
036752a: fixed PY-9266 Running doctest for __test__ in module from PyCharm returns "TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'"
9750faf: fixed PY-9253 Broken "add field to class" adds the field to the instance instead.
505f300: Merge remote-tracking branch 'origin/master'
00eafcb: added tests for PyUtil.addElementToStatementList
d7281a8: implemented pylint W0601
d74c7f2: implemented pylint E1111
eafde14: implemented W0232: Class has no __init__ method
e6d9c74: used proper way to find __init__ method
03cddf5: Merge remote-tracking branch 'origin/master'
e985159: fixed test data
5fac997: automatically clear templateTesting flag
ab41bf4: Allow to save a choice to download remote sources.
40b94ce: Merge remote-tracking branch 'origin/master'
51ca237: added quick fix for the PyAttributeOutsideInitInspection
d95b343: implemented W0212 pylint inspection
6c0fe5e: implemented W0201 pylint inspection
9ff26d2: Merge remote-tracking branch 'origin/master'
4bff901: Maya support plugin. Send to Maya action.
2d212e2: Generate skeletons for module if error occurs.
687a0ee: fixed PY-9181 Django tests are unrunnable
9ab386a: Cosmetic in usage.
82c6b51: fixed PY-7663 Newline ignored in the quick documentation
96cbe1c: Maya flavor (PY-4945).
ec9c094: fixed PY-7015 Auto-detect docstring format while rendering
0e4b372: fixed PY-5673 Incomplete/incorrect rendering of Sphinx/rst docstrings
7165599: use Python 2 sdk for documentation builder
b99e8f3: fixed ugly docstring in case of unknown node visiting
04f922f: fixed rest documentation for non-project modules
f418607: Merge remote-tracking branch 'origin/master'
f0fab26: fixed PY-9084 Not able to run Django Tests: AttributeError: 'NoneType' object has no attribute 'ismethod'
201007b: fixed PY-9079 Enable PYTHONPATH controlling checkboxes by default in Python Run configurations
4c08703: 128 -> 130
fd5654e: Don't stop remote debug server after timeout (PY-9142).
c9a23f6: include fileWatcher plugin in PyCharm build
9c80125: added built-in members for declarative_base inheritors
9595e25: Merge remote-tracking branch 'origin/master'
f50c158: Cache for calculation of django web urls (PY-9000).
5c329b2: fixed PY-9070 Indentation is broken when pasting code.
5cf7377: fixed PY-9086 False positive for "exec" by Compatibility Inspection
2e272c2: record execution of PyPathEvaluatorTest
730a7c1: Fixed working dir selection for remote interpreter call in package manager (PY-9090).
0e77675: Fixed hanging debugger connection (PY-8154).
df8db49: Merge remote-tracking branch 'origin/master'
7c0d256: HACK: make UsefulTestCase.checkForSettingsDamage happy again
2b792ef: Quote parameters in package management for remote interpreters (PY-9029).
362945d: Fixed handling prefixes for staticfiles dirs.
ca83eb8: moved python smart copy-paste logic to python module
78a7f66: Merge remote-tracking branch 'origin/master'
d857f5a: fixed PY-9084 Not able to run Django Tests: AttributeError: 'NoneType' object has no attribute 'ismethod'
a25704c: Merge remote-tracking branch 'origin/master'
c00d589: fixed PY-9079 Enable PYTHONPATH controlling checkboxes by default in Python Run configurations
0240c4e: fix plugin versions
d7c830d: Merge remote-tracking branch 'origin/master'
e8460df: Python-specific intentions: do not perform isAvailable on non-python files
4598b6c: Merge remote-tracking branch 'origin/master'
e8003b3: fixed empty suite in doctest runner tests
86e6eb7: fixed PY-9063 Package manager doesn't seem to use proxy to access https://pypi.python.org/pypi
99e7b04: use thread-safe list for listeners
e3aeb19: Fixed NPE in multiprocess debug termination (PY-9048).
f34f661: fixed tests after PY-8961 fix (formatter for py 3 parameter annotations)
f09ff98: Merge remote-tracking branch 'origin/master'
5797669: fixed import_system_module function. Used already imported os module
5e80806: don't highlight Java methods as not callable (PY-9037)
7439ebe3: Merge remote-tracking branch 'origin/master'
952fa4d: added content/source root checkboxes to all run configurations
5433c59: restore my version of the colors
63081ae: move Python to new highlighter API, extract Python and ReST colors out of platform color scheme (PY-8841)
1f28f83: update pep8.py again
30110c2: "space within method declaration parentheses" code style option (PY-8818)
fc8a8bd: pull in latest pep8.py from github
9419709: surprisingly enough, it looks like up until now we didn't have any mechanism for inserting a backslash when Format wraps code in a position where an implicit wrap doesn't happen (PY-9032)
f29a942: Select Word handles escape sequences in Python string literals (PY-9014)
680a841: correctly highlight __nonzero__ and __bool__ depending on language level (PY-9023)
e22e740: Make some ReST highlighting attributes depend on Language Defaults (new API)
778edaf: take search scope obtained from run configuration into account to navigate to correct class when user clicks on stacktrace printed in console or log (IDEA-63362)
0384e21: fix confusion between SDK paths when "use module SDK" option is selected and a Python SDK is configured (PY-9021)
95c61f7: revert more usages of new API to fix compilation
53abb4e: temp revert usage of new API to make sure the Python plugin compiles against EAP 128.18
d5aac1e: include resources from python-psi-api module in Python plugin build (PY-9015)
8f0892e: Merge remote-tracking branch 'origin/master'
f44b5bb: removed border from common options form
7f7ea52: cleanup
a2a7ed7: nicer API for plugins to modify inspection tool settings
368f8cd: merge common options form
925c25e: Merge remote-tracking branch 'origin/master'
9171a3d: fixed details and versions in install package dialog
c6236f1: move static method closer to its usage
066ddf1: allow each instance of PyCallableType to tell whether it's actually callable or not; distinguish class definition and class instance for Java class types; correctly treat Java constructors as callable (PY-4269)
f4ad359: show flavor icons in SDK choosers of Python plugin (PY-9015)
a5d078b: if run configuration settings specify -Dpython.path explicitly, don't overwrite it (PY-8044)
4578f38: hide "set as project interpreter" in "create virtualenv" dialog in python plugin (PY-8082)
a92a2db: advance version and since/until build for python plugin
040b623: don't swallow exceptions
711b773: sort packages in Manage Python Packages (PY-6918); disprefer remote interpreters when sorting by preference
c18cade: remote Python interpreters are always considered valid (PY-8079)
1c4463c: show path mappings combobox in Python plugin run configurations (PY-8868)
132b4a0: OpenAPI for accessing PythonModuleBuilder
d841b84: allow using Flask plugin when running IntelliJ IDEA under debugger
9b598f9: moving core part of PythonModuleBuilder to common code between PyCharm and the plugin, so that it can be used by the Flask project generator in the Python plugin
f64ae0c: spacing in py3 annotations (PY-8961)
02e70bb: improved usability for run/debug configuration panel
59ef183: PY-8924 "Ignore errors like this" option is not saved
7eaf625: PY-6287 Changes to settings of inspections applied via quickfix are not persisted
f87645a: Merge remote-tracking branch 'origin/master'
01a152e: Change SSL certificate validation only for PYPI connection
9455f5d: Merge remote-tracking branch 'origin/master'
82b6b24: Merge remote-tracking branch 'origin/master'
8278ad5: Took into account Parenthesized Expressions
4571ef9: branch 127
494c538: do not escape all quotes in triple quoted string
473b152: fixed PY-8962 No packages are listed on the "Available Packages" screen
fe58bce: fixed improper cleanup from 2/18/13
44abc79: Don't mention eclipse.
66c7b9b: fixed PY-8965 Inconsistent behavior for "Specify type for reference in docstring"
381908f: Merge remote-tracking branch 'origin/master'
e17436e: improved readability
4a603dd: cleanup duplicate code
48eeac8: Merge remote-tracking branch 'origin/master'
e8fc047: set docstring format for test PythonHighlightingTest
7e5421d: Merge remote-tracking branch 'origin/master'
49e6a5c: Fixed checking whether file is in scope.
3776580: cleanup
22ec0dd: Keep else part on unwrap for and while
6ed91c8: Keep else part extracting try
4ea7a52: do not propose unwrap for if in elif branch
4ca0e98: trying to avoid race conditions [r=traff]
923d66d: do not search for element on previous line
a56a52a: Merge remote-tracking branch 'origin/master'
0792f73: updated test data for parsing
ce6298e: API for conformant stub processing skip methods
a27c345: Database IDE initial: fix ruby&python gant's
7c3b0cb8: Merge remote-tracking branch 'origin/master'
60d0628: fixed PY-8948 Not closed quote in subscription breaks parsing for the whole file
3de2401: Merge remote-tracking branch 'origin/master'
c024597: fixed PY-7151 Convert triple-quoted string to single-quoted string: do not wrap string with parenthesis if initial string is already inside them
1e99754: fixed PY-7152 Convert triple-quoted string to single-quoted string: missing intention for strings with prefixes
3d9c6a7: Fixed generation of type in docstring in case it is already present as @param (PY-8930).
9ebaeef: proper fix for PY-7883
7e407b7: fixed PY-8943 Specify type in docstring chooses wrong function for parameter
727878b: fixed PY-7857 Doctest: missing completion and highlighting for python keywords
0983dff: fixed PY-7883 Doctest: decrease severity for errors in doctest to warnings
2d801a8: fixed PY-8025 Enable rest, epytext and doctest highlighting in strings literal assigned to __doc__
4713082: added few tests for reflow paragraph
6859758: Merge remote-tracking branch 'origin/master'
925853f: excluded unused intentions/inspections from doctest dialect PY-8939 redundant inspections in doctests
9dfd89f: Merge remote-tracking branch 'origin/master'
5e73f3f: Send only signature from project scope (PY-8844).
2815efb: fixed PY-8581 Fill Paragraph should not corrupt indentation; PY-8937 fill paragraph works wrong on simple strings
6a79ca6: reparse python files if docstring analysis settings changed; rerun code analysis on apply; fixed PY-8925 "Analyze Python code in docstrings" check box always returns to selected state
a52bcf8: used proper type check; fixed PY-6756 PyCharm erroneously reports "too many arguments" for certain string formatting lines.
be28564: Merge remote-tracking branch 'origin/master'
0f392dd: fixed PY-7318 Converting dictionary creation to dictionary literal does not handle duplicate items correctly
1e22104: fixed PY-8926 Python: Conversion of docstring into recommended triple double-quoted form keeps double quote at last position
9dc0a80: Merge remote-tracking branch 'origin/master'
450750f: Fixed CME (PY-8686).
ca546f2: Fixed memory leak in debug console.
7ac2023: Fixed thread leak in multiprocess debugger.
ba19e0d: Fixed overriding method signature inspection for property setters (PY-7725)
5b471c8: Split method overriding inspection tests
35114bf: fixed PY-8654 "Add super class call" quickfix should delete "pass" statement if it was the only body of __init__ before quickfix was invoked
d870894: Fixed unresolved 'os.error' on Linux (PY-7650)
1155e5d: fixed PY-8704 Refactoring: Rename/Change Signature: inconsistent base class is proposed
3776623: Fixed parsing ellipsis as an expression in Python 3 (PY-8257)
105fe15: do not count "\n" as part of indent
8b5b707: Updated Python 3 compatibility checks to include Python 3.3 and future 3.x versions
da88f9c: Fixed augmented assignment inspection for non-commutative operations on weak union types (PY-7605)
6db0302: Cleanup
f49320f: Fixed broken debug of new threads. Better fix for jython.
f7a163d: Merge remote-tracking branch 'origin/master'
8aacebd: Merge remote-tracking branch 'origin/master'
20a3e88: do not rename param usages if param was not renamed
dea2b78: Env tests work with remote python interpreters.
756caa3: Python debugger: set socket timeout.
a0ff9df: Merge branch 'python-fixes'
ae86efa: Moved Cython built-ins check to a separate CythonReference class (CR-PY-5857)
85aa4c8: Removed obsolete nullable check
65e7a19: Shorter token type and text comparison in Cython parser (CR-PY-5856)
7ebf234: Split Python unbound variable tests into separate test cases (CR-PY-5855)
e8e2017: align children of generator expressions (PY-8822)
ad6105d: Don't report pep8.py issue about tabs if code style settings use tabs (PY-8864)
f48f885: SOE protection in PyPathEvaluator wasn't complete (PY-8880)
7012d62: fix PY-8909 change method signature doesn't change argument names in function body
08217ae: Merge remote-tracking branch 'origin/master'
2a21660: fix CR-IC-97 (indent is counted in tab units so need multiplication instead of division); handle empty insertion line properly
4c2d80e: nicer UI for Python Integrated Tools
db602c2: Merge remote-tracking branch 'origin/master'
0dc3dcc: Generate correct string for union type (PY-8896).
963b431: Don't stop in debugger internals on exception breakpoint in jython.
2f6fefe: Merge remote-tracking branch 'origin/master'
1dbce62: Fixed debugger for some cases in jython.
14555ed: Fixed tests. Refactored setting python path.
df62698: We shouldn't set pythonpath var when it is empty.
377ea4f: Merge branch 'python-fixes'
80d7377: Fixed type signatures for globals(), locals(), vars() in Python 2 (PY-8888)
eab6133: Quickfix for renaming a reference that shadows a built-in name (PY-8788)
f41f98d: Merge remote-tracking branch 'origin/master'
9da8b5d: Merge remote-tracking branch 'origin/master'
236242d: properly update imports in doctests; needs improved lexing; partial fix for 2.7.x; PY-8485 Move module: breaks imports in doctests
74c308c: notnull
0ee5725: Fixed rename of elements that are imported via alias (PY-8857)
e484284: Fixed PIEAE when imported file is renamed and then renamed back using undo
c7fbbc6: Updated PyStructureViewTest to Python 2.7
b5c1704: Updated mock SDK version for Python 2 in light project descriptors to fix tests
550eee9: added option to Integrated Tools; fixed PY-8815 Feature Request: Disable Docstring Analysis
2d7a87f: take into account tab characters for file type; fixed PY-8580 "Smart indent pasted lines" does not work with tabs in python
8a5be29: regenerated versions.xml; fix for PY-8696 False positive for bz2 module (code compatibility inspection)
8b46e78: Merge remote-tracking branch 'origin/master'
abbcacc: Merge branch 'python-fixes'
55f0826: Don't show private module names from builtins in completion and mark them as unresolved (PY-7805)
0c13944: Fixed django manage completeness test.
8603259: Fixed callable inspection for 'namedtuple' instances (PY-8801)
9031552: Updated Python mock SDK from 2.5 to 2.7
01f84fa: Merge remote-tracking branch 'origin/master'
84d8f8c: Fixed resolve for Cython compiler directives and memory view flags (PY-8675)
0b41a2b: Merge remote-tracking branch 'origin/master'
0f4a97b: new python file type
07b93b3: Docstring intention is now available if type is not specified in docstring, and annotation intention if type is not specified in annotation; fixed PY-8782 Specify return type in docstring: intention is not available on function call with collected types
aeb7b07: use the proper caret offset in case of selection; do not overindent when replacing selection; fixed PY-8744 Smart indent pasted lines doesn't replace selection when pasting multi-line code
2dc50cd: fixed PY-8783 Specify return type using annotation: intention seems to do nothing
340dbd6: TeamcityTestRunner is now a new-style class (it's possible that a user defined an old-style suite runner in settings instead of DjangoTestSuiteRunner); fixed PY-8821 can't run unit tests with pycharm 2.7
926293d: there is no need to use import_module (no relative import possible); fixed PY-8636 Getting 'No module named importlib' error when trying to run remote tests (with django)
07862a5: RUBY-13071: (refactoring) RegExpPropertiesProvider iface merged into RegExpLanguageHost
1555ef9: Added 'gil' Cython builtin and several Cython-level types that shouldn't be marked as unresolved
615ebf1: Fixed unbound local for a variable after 'with' statement after 'raise' (PY-7966)
bbc85bd: fix docstring generation tests
458b7ae: struggling with line breaks in generated docstrings
177134e: 24-bit instead of 32-bit bmp (PY-8777), now for the correct image
68ef974: 24-bit instead of 32-bit bmp (PY-8777)
b502cd2: Todo about caching.
e13cee9: Method moved to class.
f85f8af: Cleanup conditions 2.
383b75c: Cleanup conditions.
ed5090a: @NotNull.
4056438: Generate docstring intention: simplified and fixed.
a93b58b: Generate docstring: remove unnecessary limit for params > 0.
95ac0b9: Better checking whether parameter is self.
92f88b5: Merge remote-tracking branch 'origin/master'
ccda6d3: Added a cache for collecting signatures.
a2ab96d: PyParameter.isSelf(); highlight first argument of @classmethod as self (PY-6881)
8169496: use ScopeUtil.getScopeOwner()
053674c: Project scope is taken from a better place.
433876c: Don't generate docstring param for self (PY-8759).
83cb4c1: Merge remote-tracking branch 'origin/master'
f57562e: Collect types information for remote interpreters (PY-8733).
8aff6e5: Clear package caches after installing package management tools (PY-8739)
bedd71b: Merge remote-tracking branch 'origin/master'
23daa1c: Merge branch 'python-fixes'
5716bb1: Fixed search in settings (PY-8658).
627b6b7: Resolve library modules first, then ask import resolver extensions (PY-8664)
6428f1a: continue name search in upper scopes if the name found in current scope is defined in a comprehension scope and not actually visible from the reference location (PY-8725)
b6550f3: extract isInnerComprehension() method
d1f61b7: fix for compatibility with token type caching in PsiBuilder
9c4a398: Ask Vagrant executable path if we couldn't get its version (PY-8653).
4c7d6e4: workaround for http://bugs.python.org/issue17143 (PY-8706)
f00ece8: don't insert docstring with types if we don't have any parameters (PY-8768)
6fe9a19: @NotNull name in PyType.resolveMember()
a880593: PY-8764
f1b3a39: Collecting run-time types information isn't supported for Jython (PY-8737).
93a6630: Fixed console for ipython >0.10 (PY-8758).
171b464: List remote interpreter connection in servers list (PY-8589).
aa14a2e: Reverted default value of unknown objects to None (PY-8703)
0b9c257: add import element before end-of-line comment (PY-8034)
6d1b33c: auto-import fix for module uses correct qualified name (PY-6677)
4c7a2fd: second half of fix for PY-7887
baed9c4: look at correct scope owner
2be20a6: 'with' statement also requires continuation indent for its condition (PY-8743)
be342bc: correctly look for scope owner when checking name shadowing
cc5af2c: don't show duplicate names in completion list inside __all__ (PY-6483)
ef9fb33: Merge remote-tracking branch 'origin/master'
6bedecf: Merge branch 'python-fixes'
f4d2b40: find usages considers a reference to be a usage if it resolves to an element that shadows our name or is shadowed by ours (PY-6241)
108b42b: when completing members of imported module, look at local imports, not only file-level ones (PY-3668)
55b2824: don't cache file template in 'create setup.py' action (PY-6681)
764c6b4: log failures to run pep8.py
adeed1b: 2013 copyright year
e0b317d: ReformatFix prompts to make file writable (EA-43561 - IOE: CheckUtil.checkWritable)
b6326c6: EA-43567 - assert: UsageDescriptor.<init>
07670e0: read action when collecting package usages (PY-8723)
20fea10: Fixed python console for remote interpreters (PY-8734).
a9bb526: Added TypeEvalContext.Key to PyTypedElement.getType to statically force type evaluation via TypeEvalContext only
165a805: Use cache in TypeEvalContext when choosing left/right binary operator (PY-8731)
004ae30: Merge remote-tracking branch 'origin/master'
2db8c93: Merge remote-tracking branch 'origin/master'
036efdc: Merge remote-tracking branch 'origin/master'
b22d76f: Merge branch 'python-fixes'
e3a95c9: Merge remote-tracking branch 'origin/master'
34ce7ab: Fixed str.format() return type for Python 3 (PY-7684)
81a58c0: Merge branch 'emmet'
24ff343: rerun failed tests: use before tasks options from the initial run (IDEA-100417)
73779c4: Merge remote-tracking branch 'origin/master'
3cbfde1: proper fix for PY-8699 "Specify return type/docstring" option modifies python sources
065f9ae: Merge remote-tracking branch 'origin/master'
f821545: fix showing external documentation configurable (EA-41157 - assert: ShowSettingsUtilImpl.showSettingsDialog)
4c72015: commit document before inserting import for lookup element (EA-42224 - assert: PsiToDocumentSynchronizer.doSync)
9469120: EA-43308 - SOE: PyPathEvaluator.evaluate
dc246da: Fixed time import.
5054211: Added missing 'else'. Correct default name for remote sdk.
9878ca9: Added a progress indicator (CR-PY-5819).
a00a35a: Cleanup (CR-PY-5818).
d5aa7c3f: Cleanup (CFR-28898).
219dff7: Merge remote-tracking branch 'origin/master'
c366236: Cleanup.
1bdc323: Mark incomplete remote python interpreter (PY-8711).
138f047: use old keyword parameter name fixed PY-8705 Change Signature: usages of renamed parameter in function call are not updated if it was reordered
fe38e41: Fixed NPE (PY-8710).
8fd966b: GenerateDocstringWithTypes intention merged to PyGenerateDocstringIntention (PY-8010).
8b7f57f: Merge remote-tracking branch 'origin/master'
3ff146e: don't bother searching for implicit resolve results if we're going to reject them anyway (PY-6559)
baade0a: do not modify library files fixed PY-8699 "Specify return type/docstring" option modifies python sources
84cd44d: used fast context for intention
1469a3d: Moved isResolvedToSeveralMethods() to PyTypeChecker
e6b419e: fixed PY-8682 Disable Non-Ascii character warning
ef0c840: do not move caret to the line beginning in case of selection on paste fixed PY-8693 Smart indent pasted lines deletes part of string if pasting Django ORM query
31c3b75: Python debugger settings are project settings. Button to delete dynamic types cache (PY-8136).
c52b5a8: Merge dynamic signature types (PY-8624).
d9a5305: Merge remote-tracking branch 'origin/master'
4b8e63e: don't show class-private names in Ctrl-O (CR-PY-5749)
1a73093: tiny cleanup
ac824f4: publish chrome ext to chrome webstore
2de819f: Ability to save unfinished remote interpreter settings (PY-7503).
f2fcc50: statistics for Python interpreter versions and used packages
033b0c6: Merge branch 'python-fixes'
73241a2: Added dynamic member for os.error (PY-7650)
b3068df: Fixed type signature for 'struct' module functions in Python 3 (PY-7961)
a169ed06: Fixed type signature for abs() (PY-7983)
c56d127: Fixed type evaluation for list comprehensions with tuple targets assigned in the RHS of assignment (PY-8063)
3b17157: Fixed skeletons for collections on Mac OS X (PY-8051)
950095b: Fixed type checker inspection for classes with unresolved ancestors (PY-8181)
965abda: Moved hasUnresolvedAncestors() to PyUtil
1b812fa: Removed duplicate method of PyClass.isSubclass()
4c8aa8d: correctly insert import when classname completion is used on a module (PY-7887)
20ed34a: run pep8.py under CPython when possible (PY-8276)
f8bbeea: Merge remote-tracking branch 'origin/master'
dd219d1: Merge remote-tracking branch 'origin/master'
602653c: Merge branch 'python-fixes'
b8ec51c: change pyside docs url (PY-7819)
c4e1141: failing test for PY-8252
86ad22c: ignore leading underscores in pattern when searching in override method (PY-8375)
c5446aa: correctly apply command line patchers for debug run configuration (PY-6740)
6114dfb: default external doc for kivy
9b71962: .com -> .net; remove useless plugin DTD declarations in plugin.xml files
6d6da65: escaping # in verbose regexps is not redundant (PY-6545)
9e57727: correctly detect verbose syntax when flags is passed as keyword argument (PY-8143)
ef65985: Python regexp syntax allows omitting numbers in quantifiers (PY-8304)
38ed156: report missing expression in subscription (PY-8652)
2da6abf: more precise external annotation range for multiline elements (PY-8614)
8f72827: if strip trailing spaces is enabled, ignore trailing whitespace errors reported by pep8.py (PY-8326)
7b9eb1e: in argument list, complete 'args' after * and 'kwargs' after ** (PY-7208)
f10670b: don't ask to find usages of super method if super method comes from 'object' (PY-8602)
6e43d1a: Select stdlib overloaded type signature ignoring weak argument types (PY-8261)
6ba0a02: Disabled type checking for method calls of union members if there are several call alternatives (PY-8182)
62b23ed: Fixed escaping % with %% when replacing string concatenation with %-formatting (PY-8588)
4e3e3e2: Fixed escaping backslash and quotes when replacing string concatenation with formatting (PY-8366)
8722e6a: Merge remote-tracking branch 'origin/master'
14021b9: Provides fine-grained control on fallback indent options to be used during formatting
0d68eea: Provides fine-grained control on fallback indent options to be used during formatting
7bfe9f4: Fixed false positive in shadowing built-ins for qualified targets (PY-8646)
0d5b931: code style settings to force new line after colon in single-clause and multi-clause compound statements
99241be: don't fail the test just yet
af51692: failing test for PY-8195 #4
7e180ce: binary expressions in statement parts use continuation indent instead of alignment (PY-8195 #3)
f85fc09: better detection of incomplete blocks in formatter (PY-6360)
2cb47f6: preview for space before backslash
950c820: optional space before backslash (on by default) in python code style settings (PY-5674)
b796386: optional alignment for multiline import statements (PY-7394)
e7e7a8f: follow-up fix for indent after backslash (PY-6359)
16a7346: auto-indent after backslash (PY-2759)
819d1d5: os.path.pardir in path evaluator (PY-8245).
cdc2340: Merge branch 'python-fixes'
70bf895: Fixed text/binary types of open() functions for different values of 'mode' (PY-7757, PY-7708, PY-8235, PY-7710)
8ac22cc: Fixed callable inspection for return values of getattr() (PY-7625)
f678966: fix PyIndentTest
b0d492f: We don't support Jython older than 2.5.3.
9e1b5e5: Merge remote-tracking branch 'origin/master'
c7ca3b7: EA-42614 - PIEAE: LeafPsiElement.invalid
dad08e3: EA-42592 - NPE: PyUnresolvedReferencesInspection$Visitor.addAddSelfFix
48c7d23: EA-43020 - assert: ComponentManagerImpl.getComponent
47db6a3: EA-43057 - IOOBE: PyPsiUtils.removeSlash
4228947: Merge remote-tracking branch 'origin/master'
c25125b: Fixed callable check for reference to lambda in property definition (PY-7680)
124be9e: Cleanup
4ac91fa: Moved isCallable() to PyTypeChecker
fd47c0f: Removed unnecessary 'anchor' for isCallable()
fb57830: Support for @DynamicAttrs annotation in the docstring of a class (PY-4600)
8e4270e: Fixed parsing ellipsis in slice lists (PY-8606)
899e2e7: uncomment and fix commented out test
bdff852: option to turn off alignment in collections and comprehensions (PY-8516)
3919f17: use continuation indent for parameters of call in expression part of control statement (PY-8577)
4dfef3e: Merge remote-tracking branch 'origin/master'
035efc6: indent, rather than align, child expression of subscription expression (PY-8572)
70949d0: don't allow wrapping before operand in slice expression (PY-8572)
7a14043: show third-party licenses link in about box (IDEA-92269)
1c00750: Made type signatures of datetime.timedelta operators less strict (PY-8617)
bd508c2: Fixed parsing VCS requirements with '/' in revision (PY-8623)
abbbfce: Merge branch 'python-fixes'
2733787: Merge remote-tracking branch 'origin/master'
43d78ce: Ask the user if he wants to rename the old-style property itself or its getter/setter function
a9a2672: Fixed rename for old-style properties and properties with lambdas (PY-8315)
a766c20: likely fix for PY-8598
3b30b30: log command lines for packaging tool execution
7549dad: fix testdata; add PyCommenterTest to all tests suite
2d1ffb5: PEP8-compliant line commenter behavior in Python (PY-3153)
ae22e63: correctly implement PyNamedParameter.getTextOffset() (PY-8339)
54dbca4: Fixes spoiling remote sdk data (PY-8593).
20aacf1: Fixed skeleton signatures for format() methods of string and bytes classes (PY-8328)
451e925: Fixed false positive in unbound vars inspection for assert False with argument (PY-7784)
8f6d399: Merge remote-tracking branch 'origin/master'
a0bf2ef: added FileUtilRt.getExtension method which doesn't convert to lower case to ensure correct behavior on case-sensitive FS (inspired by Jeka)
6e50b73: Merge remote-tracking branch 'origin/master'
5b06d53: Merge remote-tracking branch 'origin/master'
1fa353e: real-life example of IntelliJ team interview task #3
374d4b3: Merge remote-tracking branch 'origin/master'
0a7115b: Merge remote-tracking branch 'origin/master'
d3e1b42: Merge branch 'python-fixes'
f3beb1f: Inspection for shadowing built-in names (PY-5807)
7143207: bundle puppet with pycharm 2.7 (PY-8218)
2fdcab1: Merge remote-tracking branch 'origin/master'
6931b8f: since/until for rest and l10n plugins in master
6ce7527: Fixed false negative in callable inspection for lambda decorators
0019fc7: Made PyFunctionType more generic accepting any callable element
28ddfc8: Allocate pty for remote console.
cf8696c: Moved PyCallExpressionImpl.getCallType() to PyCallExpressionHelper
9dc0255: Fixed false positive in callable inspection for instances of callable classes as decorators (PY-5905)
90ca9c8: fixed PY-8439 Unresolved reference: false positive for separate doctests in rest files
e68c750: PY-7335 take 2
1f9bbfb: fixed PY-8567 Unwrap/Remove action is not available with caret/selection at the very end of the statement
a08116e: fixed PY-8575 Unwrap for try should not remove content of the finally clause
45ebd7c: update until-build to new trunk 126
810f2c5: fixed PY-8566 Unwrap/remove action is not available for with statement
d2c5e01: fixed PY-8565 Unwrap/Remove is not available in for loop
6f3d0a2: fixed PY-8568 Unwrap for try statement is called Unwrap while
43db6b1: fixed PY-8570 It should be possible to remove elif suite
a7b18e3: Merge remote-tracking branch 'origin/master'
165c525: fixed PY-8571 Unwrap for elif suite should highlight the entire if statement
cc8acf4: Connection settings for ssh console.
cc7b318: do not look for topmost parent fixed PY-8563 Incorrect "Unwrap else" preview and behavior for inner if statement
85d6cbd: do not inject doctest language to variable docstring PY-8558 PyCharm use 200-400% CPU when working with long files
b0b64b4: dots and >>> are not significant tokens in python doctest language fixed PY-8505 Doctest: Indentation error: false positive for comment after colon
3fd0851: fixed PY-8541 Modules are not sorted alphabetically in the list of SDKs
a291861: Merge remote-tracking branch 'origin/master'
3565004: Import error in case of custom django project structure caused test runner failure fixed PY-8545 Django test runner fails due to wrong import
aa0db58: Updated test data
1d7a5b7: Fixed known type of dir() for Python 3.x (PY-8347)
fedb2b9: Set unknown values in skeletons to 'object()' and don't typecheck 'object' types (PY-7340)
d0e1af2: Known properties of datetime classes
cdcb255: Correctly parse unresolved types in unions (PY-7950)
3b5f5e6: Fixed parsing VCS package requirement specs without '#egg' (PY-7583)
86bee08: Extracted PyRequirementTest
e875fea: Fixed 'int.from_bytes' skeleton (PY-8417)
5f014d1: Merge remote-tracking branch 'origin/master'
9003881: Fixed false positive in callable inspection when __call__ is defined as an attribute (PY-8416)
06c64f5: Rename PyCallingNonCallableInspectionTest
c44251a: Merge branch 'python-fixes'
41d7ccc: Test for unresolved attributes of base class assigned to a variable (PY-5427)
bb91645: Completion for classes with non-class expressions in their base class list (PY-4345)
073b92d: Merge remote-tracking branch 'origin/master'
b514459: Return an instance type provided by reference type provider for callees
45ca0f5: fixed EA-41755 - IAE: ParamsGroup.addParameter
efc5066: fixed EA-42254 - CCE: PyElementGeneratorImpl.createDocstring
b909c35: Disabled namedtuple stubs test
28d1458: Fixed false positive unresolved attribute for classes with unresolved bases (PY-7301)
1075bf3: Merge remote-tracking branch 'origin/master'
17005c9: fixed part of the unicode raw string decoding problem (some symbols were missing from the output, and there was a wrong offset in case of a unicode symbol in a UR"" string)
48591d6: SSH console refactored.
f5a844f: fixed regression in PY-8127 Change Signature: show just one dialog with proposition to change base method with several base classes
aae6db1: fixed PY-8131 Change Signature: popup about inability to perform refactoring is not shown for lambda expressions
e13a5d0: fixed PY-8539 Change Signature: usages of renamed parameter are not updated if it was reordered
9a51afa: fixed prefix for python comments
74d6e83: fixed PY-8404 Change signature: update validation on changing Use default value in signature checkbox
f0d8fe4: fixed PY-8400 Change Signature: do not allow removing last argument after star when refactoring function with keyword-only arguments
72e5d82: fixed PY-8483 Disable rename reference quick-fix for unresolved references in doctests in rst files
3ea64d5: fixed PY-8490 Change signature: validation error about missing default value is not shown once some of the parameters are selected
7684016: improved goto test action (PY-7522 Navigate to test does not find existing test )
35bae29: fixed part two of PY-5554 "Statement can be replaced with functional call": usability improvements
b4e5774: fixed part one of PY-5554 "Statement can be replaced with functional call": usability improvements
1312042: extracted python interpreter inspection
cf2acd8e: excluded encoding inspection from docstring test language dialect
f318482: Merge remote-tracking branch 'origin/master'
c5fa661: improved reflow paragraph action
e025c74: improved reflow for reStructuredText
7f93d98: Platform: got rid of hardcoded platform prefixes
cb087a5: added initial reflow for reStructuredText
2c36a6c: cleanup pyfill paragraph action
7d8047a: extract fill paragraph action
986f15f: prepared to extract fill paragraph action
89564da: fixed PY-8519 Refill Paragraph should stop on empty line
4c25306: fixed PY-8520 Refill Paragraph should not move first line in docstring to the line with quotes
3e2b3df: Merge remote-tracking branch 'origin/master'
ac559c4: fixed PY-8522 "Unwrap if" doesn't handle 'elif' correctly
c09022f: Merge remote-tracking branch 'origin/master'
8857aea: fixed PY-8519 Refill Paragraph should stop on empty line
ddd471f: fixed PY-8519 for comments
0bae9c1: fixed PY-8518 Undo for Refill paragraph doesn't work properly
0ff6496: fixed PY-8511 "Fill paragraph" action should be available in the main menu
2a0a536: fixed unicode regexp for u""
558268d: Merge remote-tracking branch 'origin/master'
480c5df: Fixed magic notification in ipython for remote console (PY-8298).
061d507: remove wrongly placed icons
ac69c58: Removed unneeded dependency on SASS
2ed4c7b: new structure for python icons
9c1cfbf: fixed UR"" string escape token highlighting
d8346b2: [^nik] jsch updated up to 0.1.49
e8fa442: fixed PY-6319 Don't show repository name in "Available Packages" if I only have one repo configured
ac92f11: Merge remote-tracking branch 'origin/master'
719c416: Code cleanup: IDEA's warnings have been fixed
716bfc3: Merge remote-tracking branch 'origin/master'
cd0e90b: fixed PY-7699 Wrongly reported missed call to superclass constructor using prefixed/embedded class
a232523: Merge remote-tracking branch 'origin/master'
d54f4f0: Added SSH Console for remote interpreter.
0f3e4fe: fixed PY-7715 replace with augmented assignment doesn't work in case of subscription
a5ecadc: Merge branch 'refactor-icons'
2cad8eb: Moved Python-related icons to resources/icons/ folders
df5ef4a: fixed Convert triple-quoted string: loses data in case of double quote inside the string and leads to syntax error
5cc6f9d: python icons
519b7a9: fixed PY-7829 Ignore missing docstring inspection feedback for Django model inner Meta classes
7a05a83: advertise ctrl+dot
81104c9: Typo in inspection results (PY-8469)
c0c9f9a: Use the same fast type eval context as in inspections for the same type inference results
d5c10b3: Try to find target element in stubs first
b07db08: Don't try to find function annotations in AST for Python 2
1385c15: Metaclass attribute type for standard use cases can be inferred without AST
5831789: Try to find decorators in stubs first
018f9fe: Removed transitive resolve for globals not needed any more after switching to scope crawl up
35ee12f: fixed PY-8151 Don't highlight deprecation on symbols imported as fallbacks after ImportError
0bbefa7: fixed PY-8200 Introduce parameter: do not allow refactoring when there is local variable in the selection
869b28c: Support for subclasses of namedtuple defined in the current file (PY-4345)
e72fa6e: fixed PY-8129 Change Signature: exception on processing incorrect call
e9d7677: fixed PY-8461 Change Signature: unable to correctly process refactoring with invalid calls: AIOOBE at
6145b9d: fixed PY-8130 Change Signature: do not allow to change signature for built-in functions
f233414: Fixed non-deterministic detection of unresolved attribute of named tuple defined in other file
154d1b1: Removed unnecessary switches from stubs to AST
fdad757: fixed PY-8311 Packages list UI looks bad under Darcula
6e9c98c: Fixed false positives for fields of classes with unresolved bases (PY-7301)
9341e40: fixed PY-8324 Create parameter: disable intention for unresolved imports inside function
d45fb95: fixed PY-8368 Introduce parameter: disable refactoring with caret at the function definition: Throwable at com.intellij.refactoring.rename.inplace.InplaceRefactoring.a
e3720de: fixed PY-8370 Introduce Parameter: disable refactoring for nonlocal and global statements
3ca6510: fixed PY-8374 Unnecessary backslash (line continuation) when moving import to next lines
8e53596: fixed PY-8376 "Tools > Sphinx quickstart" command missing from main menu
727f35e: fixed PY-8376 "Tools > Sphinx quickstart" command missing from main menu
61dbb04: fixed PY-8440 Doctest: Unused import: false positive if there is an empty-line between usage and import
860891a: fixed PY-8442 Add import alias: references in docstrings are not updated
fe6bf79: fixed PY-8443 Rename module: reference in doctest is not updated: Throwable at com.intellij.refactoring.rename.RenameUtil.doRenameGenericNamedElement
1efeab5: fixed PY-8444 Disable import quick-fix for unresolved references in doctests in rst files
30f70e6: Cleanup
7809540: fixed PY-8398 Change Signature: disable refactor and preview buttons in case of validation error in the dialog
66a708b: Merge remote-tracking branch 'origin/master'
fb12881: fixed PY-8400 Change Signature: do not allow removing last argument after star when refactoring function with keyword-only arguments
27ce105: fixed PY-8401 Change Signature: breaks code when changing default value of an argument before star in case of keyword-only arguments
0cadde6: Merge branch 'introduce-for-substrings'
b5874c6: Introduce substring for new-style formatted strings (PY-8372)
17dfe42: fixed PY-8403 Change signature: breaks code on making keyword only argument regular
85d093e: removed unusable shortcut for fillparagraph action
3e0784b: Fixed braces escaping for new-style formatted strings (PY-8372)
7230e05: Don't allow selection to break new-style formatting when extracting substrings (PY-8372)
ecfe234: fixed PY-8404 Change signature: update validation on changing Use default value in signature checkbox
ddd187f: Simple parser of new-style string formatting ranges (PY-8372)
6ad0f3b7: added replace-all option to fill paragraph action in PyCharm
998bf50: fixed PY-8405 Change Signature: invalid base method is proposed to be refactored for multiple inheritance
d2292a1: fixed NPE in change signature with keyword-only param
213e66a: fixed PY-8402 Insert documentation string stub: strange null parameter for functions with keyword-only argument
2c1c299: fixed PY-4706 for Python 2.7
bce962d: fixed nose in django with nose plugin
e954239: Merge remote-tracking branch 'origin/master'
87c1a5d: make FillParagraphAction actually reformat short strings
a9d18c0: EA-42015
db9caa1: EA-40890
0270be3: EA-42571
5a0577a: EA-41497
d667527: recording test execution via Chronon (controlled by @RecordExecution annotation)
3f0f04d: Merge remote-tracking branch 'origin/master'
c69d609: change signature: fix param name hides field (IDEA-98123)
665235d: Fixed build.
96dfaad: Try N to make py3k debug egg.
adcd1d1: Fixed exclude call.
4f0125d: Merge remote-tracking branch 'origin/master'
9733431: Fixed string.
7d9940f: fixed PY-3777 ReST: Field lists do not wrap correctly (added formatter)
edd98e6: Py3K debug egg.
125ae1e: Fixed path.
fd743d0: Merge remote-tracking branch 'origin/master'
8209ae0: Move modules to the top-level in pycharm-debug.egg
e81dcf0: Remote debug Python 3 compatible (PY-7193).
de2cd20: Python 2.4 compatibility.
e21a930: Introduce refactoring for substrings of %-formatted strings with single value (PY-3654)
c229061: Introduce refactoring for substrings of %-formatted strings with named substitutions (PY-3654)
d37cc7d: Cleanup
1d13811: Merge branch 'introduce-for-substrings'
16ac7aa: Introduce refactoring for strings with %-based positional formatting (PY-3654)
04eafc4: %-based formatting for plain string literals, concat-based formatting for concat strings (PY-3654)
e3d2070: Fixed nullable annotation
f60d2d0: Moved replace expression to PyReplaceExpressionUtil
f2bc06d: Protection against breaking escaping and formatting of substrings in introduce refactoring (PY-3654)
a525181: BrowserUtil.browse can open url with fragments
a647dd4: fixed PY-8004 Doctest: support doctests in restructured text
81c3713: fixed PY-8270 reStructuredText highlight error
ade0209: fix testdata
10ff264: Merge remote-tracking branch 'origin/master'
6621066: EA-41503 - IAE: AddImportHelper.getImportPriority
ba7de6a: EA-41823 - IAE: ReferencesSearch.search
c8ca737: fix assertion on adding import to empty file (EA-42079 - assert: AddImportHelper.getInsertPosition)
2c0672c: EA-42183 - NPE: PythonReferenceImporter.isImportableModule
ac2a452: Merge remote-tracking branch 'origin/master'
76ec940: Merge remote-tracking branch 'origin/master'
5b4f917: fixed PY-4775 fill-paragraph for comments and docstrings
73c769e: Added error handling to remote ports obtaining (PY-8298).
c7848c2: Fix reload console action for remote interpreter (PY-8170).
ebc756c: Don't try to kill remote process by means of local OS (PY-8171).
82727c3: Longer timeout for ports waiting.
bf59e2d: Nullable annotations and restricted visibility for setters
f92a90e: fixed PY-1478 Support Unwrap action (Ctrl-Shift-Del) in Python code
41eadd1: include restClient plugin in pycharm build
24dd481: Merge remote-tracking branch 'origin/master'
677bbe5: Merge branch 'introduce-for-substrings'
9d80d6f: Initial implementation of introduce refactoring for substrings (PY-3654)
c9e423d: Fixed Python string utils for triple-quoted and triple-double quoted string and bytes literals
4163e62: rename run configuration group (PY-8310)
724dccd: resolve references to WSGI apps from app.yaml (PY-5186)
ff8d993: select Python 2 interpreter when selecting a project type that doesn't support Python 3; report Python 3 as incompatible when creating App Engine project
1a4c24f: Merge remote-tracking branch 'origin/master'
ee559b9: Merge remote-tracking branch 'origin/master'
4d2f60b: fixed PY-6595 Add "Rename" intention for unresolved identifier
4f80af1: Merge remote-tracking branch 'origin/master'
131a1ab: Merge remote-tracking branch 'origin/master'
cbce1d0: if Django project root points to a directory outside of module, make sure we have a module from data context when gathering manage.py tasks (PY-8288)
c361716: rename "URL Pattern" to "URL/Path Pattern"
495e0ef: Merge remote-tracking branch 'origin/master'
c9d7238: fixed test data according to code style
b640bc9: fixed tests according to code style
5402576: fixed tests for convert format operator
4a49190: fixed PY-4706 Modernize and improve the "Replace + with string operator" refactoring tool.
2e3cf07: fixed inappropriate align in Replace format with str.format intention
ff428fd: Add path completion to os.path.join elements.
4fc46fc: Suppress warning.
41d2352: Fixed broken highlighting for file path references ended by \\ (PY-7434).
ad211a3: Merge remote-tracking branch 'origin/master'
cfb289f: Merge remote-tracking branch 'origin/master'
c636115: fixed PY-8247 Django: not able to run django tests for django 1.3
3be3b75: changed nose test runner patching to nose plugin
f519433: parse ellipsis only in the context where it can actually be used (PY-7763)
59d5c27: don't parse compound statements after semicolon at top level of file (PY-7660)
2b9d275: Merge remote-tracking branch 'origin/master'
bea9778: Understand installed apps added by += and extend (PY-7471)
c7eeb6d: highlight assignments as 'with' statement target as error (PY-7529)
279a136: change option "between top-level classes and functions" to "around ...", ensure that we put two lines between import and class (PY-7743)
218239d: build fix (PY-8275)
c4dc3c2: align multiline parameters in calls by default (PY-5700)
a6c03d3: Merge remote-tracking branch 'origin/master'
00d40f2: delete .pyc when moving Python file (PY-7951)
ed4cf28: build fix
ff9b2b7: don't indent closing paren of tuple (PY-7946)
dbe5942: take "space within braces" setting from correct place (PY-8069)
e57ceea: don't delete space between . and 'import' (PY-8112)
dd80f4d: Merge remote-tracking branch 'origin/master'
8bd83aa: Fixed .egg debugging for python 2.7
bbc1b62: include textmate plugin in pycharm dist
4ebec8f: highlighting for Python keyword arguments (PY-5418)
da7dfc1: Unresolved django template references inspection should be shown even if Django support is unset, as we have a separate setting for templates.
8acf08e: Fixed broken resolve of absolute paths (PY-8256).
ce0face: Merge remote-tracking branch 'origin/master'
eb22c80: correct import priority for files under virtualenv lib root (PY-8231)
911365b: optimize imports
edf74b4: no need for 'create parameter' to be HighPriorityAction (PY-8237)
3a7862c: run pep8 annotator only on pure python files (PY-8178)
9385eec: Fixed losing egg interpreter path after reload, as well as removal of user-added paths (PY-7430).
286cc42: fixed PY-8200 Introduce parameter: do not allow refactoring when there is local variable in the selection
e69aa6a: fixed tests
b471a8b: fixed PY-8202 Introduce parameter: breaks code when introducing parameter for last ineffective statement
e3e172d: fixed PY-8216 Create parameter for reference: disable intention for unresolved decorator
8740a01: fixed PY-8215 Signature refactor seems to duplicate arguments that wrap over 120 chars
ce098b3: Env test for egg debugging.
ade13eb: Merge remote-tracking branch 'origin/master'
5677bfa: Debugging .egg files (PY-7528).
9f56692: Merge branch 'python-fixes'
494fd6c: Merge remote-tracking branch 'origin/master'
0116e7b: fixed PY-8224 Django: TestRunner: not able to run django tests under django 1.5 on py3: ValueError: level must be >= 0 PY-8223 Django: Test Runner: DeprecationWarning on trying to run django tests under django 1.5b
a5ee266: Merge remote-tracking branch 'origin/master'
bf1143b: Fixed django file reference default roots and rebind.
14c30ea: fixed PY-8175 Rename quickfix "Create function for reference" to "Create function <name>"
cbd78c0: Fixed file references from python string literals (PY-7433, PY-6234, PY-7434)
9719bc9: Merge remote-tracking branch 'origin/master'
bef3b08: line separators normalized
3403e4c: Merge branch 'python-fixes'
f9f8aae: Refactored getScopeOwner(), made it more precise by default
727d081: Fixed resolve scope for unresolved augmented assignments
49b88f4: fixed PY-8165 Running individual tests fails with pytest 2.3.4
30738e5: Fixed resolve for augmented assignments in cycles and for undefined variables (PY-7970)
cd9b6bd: Fixed create template dialog in case of absence of template dirs (PY-7761).
f3f1309: fixed PY-8201 Create parameter for reference: breaks code in case code style requires spaces around = in named parameter
ceb078d: fixed PY-8189 Doctest: Import this name quick-fix breaks doctest when invoked inside one
e665f54: fixed PY-5038 Optimize imports does not keep doctests in mind
0725d6d: fixed specify type intention for multi resolved references
fb8d54e: fixed Doctest: Package requirement inspection highlight entire doctest in every project file : PY-8190
b0b1d8f: fixed EA-41736
c1af423: Merge remote-tracking branch 'origin/master'
c0ba8ca: Fixed debugging for Jython 2.5.3 (PY-8164).
3bbb915: Put jrubyparser.jar to build (try 2).
f1cc99c: Add maximal depth for type structures visiting (PY-7749).
5bfc1a0: Merge remote-tracking branch 'origin/master'
00e30e0: release notes generator checks which issues affect a product based on files in commit
c6f9e50: Option renamed (PY-8012).
0ab2825: Fixed debugger process termination in case of daemon threads.
0fbab35: dock lists with results in usages toolwindow (IDEA-94094)
6c0aa1f: Merge remote-tracking branch 'origin/master'
20af1b2: Fixed running PEP-8 inspection on Mac OS X and Linux (PY-7982, PY-8002)
d27104b: Merge remote-tracking branch 'origin/master'
cc7d20d: Added jrubyparser lib to layout.
4180b56: pass SDK creation callback one step further
1226c22: fixed PY-8163 Create parameter for reference: PIEAE at com.intellij.psi.impl.PsiElementBase.getContainingFile when trying to add parameter with last comma at declaration of the function
3992c46: fixed doctest injection place
6670116: more test data update
deccad0: updated test data
a90c82f: fixed PY-8159 Lines continued with \ in doctest are marked as errors
0d3609a: Merge remote-tracking branch 'origin/master'
66ca8b0: notnull, cleanup, performance
22c3dde: make sure dialog is disposed
820a2ee: Merge remote-tracking branch 'origin/master'
a424eec: welcome screen logo for pycharm
0804597: allow oauth2 login for appcfg.py (PY-7843)
28668b9: refactored type intentions
7775e31: Merge remote-tracking branch 'origin/master'
de3fd21: Merge remote-tracking branch 'origin/master'
c260e64: Changed wrong id.
1256e06: fixed PY-8061 False positive for Python 2.x-only builtins in 3.x code
7e704ce: fixed PY-8062 Code compatibility inspection should allow selecting individual Python versions
f69ca67: Platform: WelcomeScreen and background images in ApplicationInfo.xml AppCode: artwork
87e0e87: Fixed super-class iteration for six.with_metaclass decoration in Django 1.5 (PY-8137).
5a26932: Fixed super-class iteration for six.with_metaclass decoration in Django 1.5 (PY-8137).
9166f4c: Merge remote-tracking branch 'origin/master'
04486be: added "Create parameter" quickfix on unresolved reference
7ab1654: fixed PY-8125 Change Signature: do not allow to set default value for self parameter in class methods
0d9a70b: fixed PY-8127 Change Signature: show just one dialog with proposition to change base method with several base classes
8198c30: fixed PY-8131 Change Signature: popup about inability to perform refactoring is not shown for lambda expressions
89a6087: fixed PY-8129 Change Signature: exception on processing incorrect call
cf9065a: fixed PY-8128 Change Signature: signature for child method is not updated when changing method name
05ec32d: fixed PY-8130 Change Signature: do not allow to change signature for built-in functions
d8bdb86: fixed PY-8126 Change Signature: do not propose to change signature for base method if base class is a built-in class
f42a3e8: fixed PY-8104 Change Signature: refactoring is invoked for method not at the caret when caret is inside other method
94ec439: fixed PY-8096 Change Signature: produces invalid code on adding new parameters for function calls with keyword arguments
d789caa: Merge remote-tracking branch 'origin/master'
787afa1: fixed tests
085c08f: fixed NPE in check test configuration
2e5049b: made messages more friendly
f3c6db7: fixed PY-8124 Change Signature: do not allow invalid IDs as parameters: NPE at com.jetbrains.python.psi.impl.PyKeywordArgumentManipulator.handleContentChange
b60b7f7: added extract parameter refactoring
1715bfa: fixed add parameter to parameter list
35f4e73: fixed PY-8084 Change signature: correctly process keyword only arguments
df2bf57: fixed PY-8115 inspection detects lack of encoding magic comment for doctests
449e82a: Merge remote-tracking branch 'origin/master'
11c330b: fixed PY-8104 Change Signature: refactoring is invoked for method not at the caret when caret is inside other method
5fdafb7: fixed PY-8100 Change Signature: changing signature for __init__ method doesn't take effect in code # forbid refactoring object class
452f14f: fixed PY-8098 Change Signature: not able to add new parameter before non-keyword parameter without adding default value to the signature
b87ab25: fixed PY-8105 Change Signature: CCE at com.jetbrains.python.codeInsight.intentions.SpecifyTypeInPy3AnnotationsIntention.a
cf126bc: JSDK -> JDK
8ca9729: Merge remote-tracking branch 'origin/master'
b3a40f2: _subprocess import problem under Python 3.3 (PY-7868).
657b5ad: Python 2.4 compatibility (PY-8014).
b1965b2: Bundle Vagrant plugin in PyCharm.
e56eee0: fixed PY-8095 Change Signature: IOE at com.jetbrains.python.psi.impl.PyElementGeneratorImpl.createExpressionFromText
179f7a3: fixed PY-8094 Change signature: do not allow to change signature when name of the parameter is undefined
fb6bfe7: Merge remote-tracking branch 'origin/master'
9479c45: fixed PY-8092 Change Signature: removes py3 annotations on parameter rename
0599288: fixed rename function in case no usages found
3805a35: fixed PY-8091 Change Signature: refactoring is not available at the very end of the called function name to be refactored
caed123: fixed PY-8096 Change Signature: produces invalid code on adding new parameters for function calls with keyword arguments
ecf8157: update since/until
8746529: fixed PY-8064 Incorrect joining of two-line raw string literal
3e33798: fixed docstring update while change signature
7dbfef5: forbid change signature for functions with tuple params
b2c7647: Merge remote-tracking branch 'origin/master'
7eb9cc4: fixed method call in change signature
8f231cc: hacky workaround to ensure we _always_ have owner for remote transfer (PY-8075)
53c9a44: hide "make available to other projects" checkbox in python plugin (PY-8076)
378dd09: reworked app signing to support IC build
331ec3d: fixed PY-8046 Change signature: docstring
323706a: fixed ClassCastException in change signature
3c7126c: ensure we have owner for remote connection when refreshing python sdk skeletons
c681fa9: correct order of dependencies to enable creating remote interpreters in IDEA under debugger
ed5006a: fix IDs of plugins we depend on
b23b6ff: Python plugin depends on remote run
bf442b1: fixed PY-8047 Change Signature: choose method to refactor
ddc5e54: sign app in sit archive [r=amakeev]
0940123: fixed PY-8048 Change Signature: imports
0a0cc1b: remote-run must be in classpath
a89137d: since remote run is packaged as a separate plugin, no need to include its classes in python plugin archive (PY-8059)
228f97d: fix creating remote python interpreters from new project wizard (PY-8054)
1c43383: remove @Override annotations to maintain compatibility with IDEA EAP
67d8ef4: if no base SDK for virtualenv was passed, use best of the defined ones
71e65a9: creating virtualenv in python plugin (work in progress)
d8e9bf7: group all Python SDK flavors in a single package
7cd928d: fix misunderstanding of PyExternalProcessException (EA-40942 - UOE: PyPackageManagerImpl.getPackagesFast)
6070453: moving part of virtualenv creation code from PythonSdkConfigurable to CreateVirtualEnvDialog
c2e8fc4: API in SdkType for custom creation of SDK [r=romka]
22229bd: @Nullable Project in RemoteTransfer
2d990a7: extract interpreter path chooser popup out of PythonSdkConfigurable
c738f60: Merge remote-tracking branch 'origin/master'
f672a57: fixed PY-1443 Change method signature refactoring (initial support added)
8536353: add python-remote-interpreter to the list of connector plugins
8a11d6e: delete separate build scripts for rest and localization
d701ef0: fix output path
4ff2001: fix dependency of jarLocalization task
707ce92: l10n/rest jar names
9b2d0a5: fix the real reason why the build started failing
49effd0: cleanup of cleanup; diagnostics
da36919: idea.build.number -> ideaBuildNumber
f8d022a: merge rest and localization build scripts into python one
194fc5f: unzip specific file, not fileset, for IDEA dependency
34d44f4: build doesn't depend on clean in small plugins as well
f4f2e8f: don't include unnecessary stuff in the javac2 task classpath
43e2fa3: including Flask and NumPy into Python plugin build
7b8ce3e: clean does delete the artifacts but build does not depend on it
7340c99: don't clean dist before plugin build
3e74985: enable python-rest and python-localization in python plugin build
f1c3cb5: trying to make sure connector plugins are compiled correctly
39a4088: fixed docstring file name
9c93109: Merge remote-tracking branch 'origin/master'
9f6a030: fixed broken resolve for import with class attribute reassignment
f3b1303: reenable darcula color scheme
9976b93: typo fix
9a62708: avoid triggering update notifications for rest and i18n plugins
e909a9e: trying to package connector plugins inside Python plugin jar
4fa2554: module dependencies for rest and i18n
ea45283: restore code broken by incorrect refactoring (PY-7841)
b752395: specify version for ReST and localization plugins
9f9c1d5: logging and minor fixes for helper upload and skeleton download
2118406: don't clean output
ec72da5: build script for localization plugin
ca7dc82: build script for REST plugin
ef3cd70: commons-vfs is also a dependency for python plugin
2830b13: optimize imports
caca1ee: python plugin depends on js-debugger and webDeployment
7d62d26: Merge remote-tracking branch 'origin/master'
d7d545a: fixed resolve to imported names in docstrings; turned on unresolved reference inspection in docstrings
7abe8da: upload helpers before refreshing skeletons for remote interpreter (PY-7996)
6036f9f: no need to clear package cache on VFS change for remote SDK - we won't get any FS notifications on actual package install/remove anyway
dcd3e33: moving skeleton-related stuff to a separate package
e6d824c: don't access remote packages synchronously from inspections
cf1bf4f: PyCharm — build JB Chrome extension
2b9f634: fixed PY-7895 pytest: Errors evaluating pytest expressions should be displayed as failed tests, not as unfinished ones
3918ab5: fixed PY-7941 Running a certain unittest causes the test runner never to finish
f418091: fixed PY-7971 Insert type assertion: produces invalid assertion in case of caret at the second reference in expression
fffb49f: Code cleanup: unused imports have been removed
8b5732e: Merge remote-tracking branch 'origin/master'
d1584e8: don't package two copies of idea.properties into Mac zip, package only the Mac generated one
0b58bab: Merge remote-tracking branch 'origin/master'
48ec58e: Added "Vagrant init in project" action.
94f549e: fixed PY-7962 Doctest: IndentationError: false positives for tests with empty lines
16dd3f0: fixed resolve in sub injected fragments of doctests
964a85f: Merge remote-tracking branch 'origin/master'
065fe2a: fixed PY-7975 isTestFrameworkInstalled throws exception for remote interpreter
c3ca9ef: Merge remote-tracking branch 'origin/master'
82b1321: update version to 2.7
d8c9655: Create directory for skeletons if it doesn't exist.
39d6c20: Fixed usage of xstream as update to version 1.4.3 has broken xml parsing.
306ead4b: Fix compilation.
a538504: Merge remote-tracking branch 'origin/master'
94bcfeb: Cleanup logging.
356c0ec: Fixed usage of xstream as update to version 1.4.3 has broken xml parsing.
54d11df: fixed PY-7968 Replace + with string formatting operator: disable intention for expressions not only with +
2c025a3: fixed PY-7969 Replace + with string formatting operator: disable intention for expression with undefined types
723180f: fixed PY-7971 Insert type assertion: produces invalid assertion in case of caret at the second reference in expression
d8514c8: fixed PY-7973 Specify return type in docstring: disable intention for multi-resolve reference
1ea2329: cleanup
02de9cf: Merge remote-tracking branch 'origin/master'
c7bf83b: assorted refactorings
ad03146: look for pythons under mac under /usr/local/bin (PY-7977)
260c313: fixed tests for type in docstring intention
1a90b48: Merge remote-tracking branch 'origin/master'
54457d6: fixed PY-7965 Doctest: Missing closing quote: false positive for the last doctest with string assignment
06ff6aa: fixed PY-7974 NPE at com.jetbrains.python.codeInsight.intentions.SpecifyTypeInPy3AnnotationsIntention.isAvailable
184ff41: fixed PY-7974 NPE at com.jetbrains.python.codeInsight.intentions.SpecifyTypeInPy3AnnotationsIntention.isAvailable
89b6637: fix compile
d0e2c9d: Merge remote-tracking branch 'origin/master'
197ef37: Checking that docstringOwner is a function.
7538b8a: Cleanup.
84a4093: Fix skeleton name generation (PY-7955).
e211ddf: Merge remote-tracking branch 'origin/master'
062fef5: fixed PY-3089 String literal assigned to __doc__ should be highlighted as a docstring
a52b032: fixed PY-1611 Go To Next/Previous method should skip class fields
6377e07: move service declaration to correct place (PY-7953)
cfe3e62: Merge remote-tracking branch 'origin/master'
6f3988a: Don't add extra slash (PY-7955).
129fd74: fixed PY-7952 Doctest: AE: start (42) must not be greater than end (41) at com.intellij.openapi.editor.ex.util.EditorUtil.calcOffset on hovering multi-line doctest
5b4bc0e: fixed PY-7657 Specify type in docstring: AE at com.jetbrains.python.codeInsight.intentions.SpecifyTypeInDocstringIntention.invoke
02781a3: Merge remote-tracking branch 'origin/master'
23f8155: merge
04f5acf: remote-sdk-api moved to the platform (python plugin should work now); remote-sdk-impl renamed to remote-run
e38b35a: Links for function types in quickdoc (PY-3404).
023df74: Links for types in quickdoc.
14767c1: Correct quick doc for parameters.
018694a: Fixed stack-overflow on type name evaluation (EA-40207).
22a4e38: Return old element if new one is null.
0513430: Fixed an NPE.
c80d70b: Fixed NPE.
211a230: Restore originalElement that resolves to element for correct quick documentation.
7aa0d08: Merge remote-tracking branch 'origin/master'
f903459: Correct cast.
86f886a: Show type for named parameters in quickdoc.
e0022a0: key can duplicate
5b9c2c1: type can be null
fb949d3: Merge remote-tracking branch 'origin/master'
4695047: Edit injection language fragment action is not available for Python Docstring language
edc1b4a: fixed PY-7933 Specify type in docstring intention not available
69cf06e: Clean up.
289be47: Inspection to highlight wrong docstring types.
182924e: Merge remote-tracking branch 'origin/master'
dd16686: RUBY-12335: path mappings added to ruby run configurations
703f74d: failing test for PY-7929
15efff8: typo fixed
7cc024a: restore move without d&d
0d05324: NotNull
6e61b0c: If we know the type, fill it in on Specify type intention.
a1f451c: Add types to existing docstring.
6d1bbb7: Merge remote-tracking branch 'origin/master'
020396b: fixed doctest lexer
bca8830: Fixed inspection.
ffb3b74: Merge remote-tracking branch 'origin/master'
19feb64: fixed doctest tests
bfda294: Fixed highlighting bug.
5a782e0: Javadoc added.
7d1d010: Method extracted.
fa935e0: Merge remote-tracking branch 'origin/master'
85e01b0: Writer thread is not daemon.
1f63cd1: Merge remote-tracking branch 'origin/master'
95ea672: fixed exception EA-40239
4732c24: don't use Color.BLUE
ebec3d5: Intention to generate docstring with types based on dynamically collected signatures.
79c63fb: cleanup, notnull
dc92a87: sdks filtering
35c943e: Pydev shouldn't trace itself (PY-7849).
00ad6b4: IDEA-93744 IDEA-93743 Create Project From Template: Simple Web: facets creating is enabled but doesn't work
79bd3db: Pull Up refactoring
3a77de55: this method is not used, but let's make it correct
362a72c: Merge remote-tracking branch 'origin/master'
2a2f009: added tests for pytest runner
cbd86c9: fixed PY-7864 Doctest: Statement expected: false positive for empty doctest
9d01390: Import.
d8a8518: fixed PY-7860 Doctest: respect python language level in doctest syntax highlighting
e538f59: Merge remote-tracking branch 'origin/master'
97d93c1: Post-mortem exception stop handling moved to exithook as sometimes it is not working if invoked from tracing.
8ea7fbe: Added logging.
d500e0c: avoid NPE on import statements with relative-only source (EA-39281 - NPE: ImportCandidateHolder.getPresentableText)
45f9d75: read action (EA-39671 - assert: ModuleManagerImpl.moduleDependencyComparator)
2bdc59a: fixed docstring parser name
ae611d5: fixed PY-7859 Doctest: Unexpected Indent: false negative for improperly indented code
8997ab9: Ignore errors while generating coverage xml.
ce1b4c8: fixed PY-7858 Doctest: Indent Expected: false positive for compound statements; fixed highlighting part of PY-7858
e6ce21c: Logging of errors.
b57610b: Correct socket shutdown. Also thread stop method added.
d6fa99f: space around more keywords (PY-7863)
5322e43: fixed PY-7886 Doctest: Missing resolve and completion for names imported in doctest
d37c997: Merge remote-tracking branch 'origin/master'
b9815ab: Merge remote-tracking branch 'origin/master'
ba56707: fixed tests
f8bbc43: Merge remote-tracking branch 'origin/master'
2c0685e: fixed PY-7878 py.test: ValueError: Plugin already registered: terminalreporter
7e0cf91: Stop button in run manage.py task tool window (PY-5849).
9f7ed63: A bit of cleanup.
6072204: Don't wait process termination in EDT.
643c187: Use local sdk only if default is remote.
9d8325a: Merge remote-tracking branch 'origin/master'
febc8c9: Rerun button in console (PY-6863).
d0398c5: more straightforward code structure, don't add same results to set twice
d60042f: RUBY-10872: RM and PyCharm should use "dot-directories" for helpers to be more user-friendly
50a26a2: fixed PY-7816 No context menu to run nose tests
606a52f: Resolve to project sources first instead of libraries and stubs (PY-7775).
d210d12: Use only local pythons for pep-8.
3b0d04c: Added error logging.
879f982: fixed PY-6877 double clicking on a test from the test results opens up the wrong test file
0ab1d23: Merge remote-tracking branch 'origin/master'
b97b4d7: Fixed install distribute action for remote interpreter: run it in specified working dir.
6d929c3: More common remote run marker environment variable name.
b7b1c60: Fixed storing of edited helpers path (PY-7672).
ed0cd98: Create remote interpreter dialog little redesign (PY-6073).
724482a: Create helpers folder automatically (PY-7552).
c9f496d: fixed PY-7637 Unittest stacktrace contains unittest-calls
6751e60: fixed PY-7744 Rerun failed tests should not rerun skipped tests
de123dc: Merge remote-tracking branch 'origin/master'
e2902a1: fixed PY-7746 Not able to rerun failed doctest tests: No tests were found: NameError: NameError: Module has no class
da60d43: fixed PY-7745 Test Runner: NPE at com.jetbrains.python.testing.PyRerunFailedTestsAction$FailedPythonTestCommandLineStateBase.getTestSpecs
75b8810: fixed PY-7750 Test Runner: unable to rerun tests under py2.4: AttributeError: 'TestLoader' object has no attribute 'makeTest'
1c1baa3: fixed PY-7751 Test Runner: pytest: skipped with skipIf decorator tests are shown as unfinished ones
dcc74c5: Merge remote-tracking branch 'origin/master'
425d579: fixed PY-7655 Error in setUpClass messes up the nose test runner
ffbb03d: fixed PY-7356 Django: missing newline in test command output without valid manage.py file specified
387c8fd: fixed PY-6974 Data created in south migrations is not available to Django tests
4a9ce1f: fixed PY-5234 Please add a TEST_EXCLUDE
1b3f2aa: fixed PY-2808 Add possibility to specify additional command line parameters in test run configuration
394eb0e: fixed PY-6402 noinput option in the django tests
e68dcd3: Merge remote-tracking branch 'origin/master'
41542e2: Fixed PackageRequirementsInspection test to work when Django and nose are installed to interpreter.
6dbd3fa: fixed PY-6252 Run single py.test from the editor context menu
ccb2c9d: Merge remote-tracking branch 'origin/master'
a898ec3: initial doctests support
af2e2fd: Platform: icons consolidated: Refresh, Settings, ProjectSettings
76405c5: .ts extension is used for CoffeeScript, don't use it for our never-really-implemented PyQt support
8cf46cf: simplify (due to now correct behavior of PyStatementList.add())
5427a2a: Merge remote-tracking branch 'origin/master'
2a54d62: RUBY-10872: now we do copy ruby files to remote system
432be62: Fixed execute in console for arguments on different rows (PY-6791).
8244abb: Merge remote-tracking branch 'origin/master'
7855d0f: Missing whitespaces around operator (PY-5496).
7aa94bd: RUBY-10872: one more step to correctly calculate params for remote sdk
97a0ed7: fixed adding first JDK in 'New Project' wizard
4aeff43: Show full exception traceback if pip import fails but pip exists (PY-7782).
d3eb444: Better place for import os
8e05253: Merge remote-tracking branch 'origin/master'
070d167: Add pluginResources to plugin build.
d94d4ff: include python-psi-api in plugin build
5ec9b04: Platform: standard refresh action, standard 'change details' VCS action icon
5df1921: Merge remote-tracking branch 'origin/master'
c60aff1: RUBY-10872: configure remote sdk dialog now uses resource bundles to select appropriate for RubyMine and PyCharm names of components
1c055c6: RUBY-10872: RubyRemoteSdkAdditionalData implemented so we could edit remote sdk
0700051: Merge remote-tracking branch 'origin/master'
2962594: take right margin for pep8.py from code style settings
e9596fe: quickfixes to ignore errors and configure the PEP8 inspection (this completes PY-2766)
86ecc47: cleanup
302d807: possibility to configure PEP8 inspection to ignore specific errors
4d04cdb: possibility to disable PEP8 inspection and configure its severity
2123a8d: 'Reformat File' quickfix for pep8 inspection
8e8f406: initial implementation of pep8 external annotator
3dbc0cd: more keywords which allow only 1 space around them
aec0179: since pep8.py complains about blank lines after decorators, add rule to remove them
6bb8627: optimize imports sorts them according to PEP-8 (PY-2367)
543d2a6: avoid project disposed exception
098687b8: from should be keyword in load tag (PY-2958).
45b219e: Merge remote-tracking branch 'origin/master'
6c780b5: fixed EA-38806
537d06b: RUBY-12161: partial fix - RegExpLexer should accept \0 as valid octal char if allowOctalNoLeadingZero is set - Ruby19RegExp language needs its own highlighter
ad0e4ad: Merge remote-tracking branch 'origin/master'
a7ca2be: Merge remote-tracking branch 'origin/master'
0493cd7: fixed PY-4245 Way to rerun failed tests
34e2b77: allow custom isIdentifierPart for different languages
c3bdc58: action to delete .pyc files (PY-1871)
bf5a8e4: move inspection quickfixes to inspections.quickfix package
9089e30: delete .pyc when .py file is renamed (PY-1519)
0790ceb: delete .pyc when .py file is deleted (PY-6000)
71bf889: cosmetics (PY-7444)
546858b: PropertyBunch.resolvesLocally() performs local-only resolve correctly (PY-7695)
5c5274c: more straightforward logic of honoring the resolveImportElement flag in PyImportElement.getElementNamed()
63a5241: correct fix for PY-2087
fd07b2c: Refactoring: only SyntaxHighlighterFactory should be used to create highlighters. - SyntaxHighlighter.PROVIDER moved to SyntaxHighlighterFactory. - PROVIDER.create() encapsulated into SyntaxHighlighterFactory.getSyntaxHighlighter(FileType, Project, VirtualFile)
db33fba: Code cleanup: IDEA's warning has been fixed
8cd15f2: Skipping non-existent files when listing source for remote interpreter. See last comment in PY-7682 for details.
4335aff: Put remote-sdk-api to pycharm.jar
b19ee15: Merge remote-tracking branch 'origin/master'
24bb418: Fixed python plugin build.
b1eedc5: Don't spoil new process command line in case of non-python executable (PY-7464).
1ba67e0: Merge remote-tracking branch 'origin/master'
30afbc0: Bundle remote-sdk-impl.
2fb4cff: Common building code extracted
9d5565a: IDEA-92081 Reformat causes syntax error when "Ensure margin not exceeded"
3192bf2: Fixed PyCharm build.
96e8f52: Merge remote-tracking branch 'origin/master'
86fe357: Removed obsolete setting.
e80f6db: Remove Plastic L&Fs from small IDEs
a6b7cd5: Cleanup
1614abc: Python remote interpreters refactored. Common code moved to remote-sdk-api and remote-sdk-impl modules.
28a4482: Merge remote-tracking branch 'origin/master'
30d2124: Python remote interpreters refactored. Common code moved to remote-sdk-api and remote-sdk-impl modules.
b38409f: Python remote interpreters refactored. Common code moved to remote-sdk-api and remote-sdk-impl modules.
3c92158: Merge remote-tracking branch 'origin/master'
db80dd7: netty 3.5.7 bundled
c1876b3: Merge remote-tracking branch 'origin/master'
917036e: Merge remote-tracking branch 'origin/master'
a94b580: Drop usages of now-unneeded HtmlListCellRenderer
26652db: Deprecate and replace some TokenSet methods
b48f089: fixed PY-6282 Move statement: adds newline between parts of compound statement
f9ebaf1: fixed PY-5927 Make search in the "Available Packages" dialog box more convenient
cdcb4dc: Merge remote-tracking branch 'origin/master'
dff5f05: refactored tests for PyAugmentedAssignment Inspection
115299e: fixed PY-7591 Unclear naming for unittest run configurations
0d144a6: Fixed a hang of Python console options dialog.
d8668c5: xml mappings for configurables
981c7a0: fixed PY-7599 Conflict with bin/pycharm64.exe.vmoptions upgrading from PyCharm 2.6.1 to 2.6.2
0cb3f45: fixed PY-7599 Conflict with bin/pycharm64.exe.vmoptions upgrading from PyCharm 2.6.1 to 2.6.2
ef87923: Merge remote-tracking branch 'origin/master'
4b26657: IDEA-89915 Notification on stderror output in Run panel / Console
e51dc63: Merge remote-tracking branch 'origin/master'
a158922: cleanup
245df23: Merge branch 'python-fixes'
c34d3d1: Cleanup
0f2dcaa: Fixed unresolved attributes of namedtuple classes (PY-4349)
01ab405: Updated Python 3 mock SDK from 3.1 to 3.2
1df7a00: Fixed false positive in unreachable code inspection for 'raise' inside 'with' that suppressed errors (PY-7420)
b3a190e: Fixed regression in unreachable inspection for 'with self.assertRaises' blocks
f608182: Converted unreachable inspection tests to highlighting tests
b8867b2: fixed tests
513665d: Merge remote-tracking branch 'origin/master'
b077303: fixed AssertionError: Already disposed in PythonPathCache.<init> (61) (fast open/close project)
4243afc: Merge remote-tracking branch 'origin/master'
95d8e47: fixed PY-7524 Dragged/pasted code in wrong location; fixed PY-7470 Smart copy-paste unindents next line (in place insert)
6b2b424: zen coding in jinja2 templates (PY-7476)
7e84d9d: PY-7549
c1e30f8: the option has moved, update explanatory text
6caec36: avoid infinite loop when calculating target element (PY-7538)
76357c8: Don't resolve target expression to latest defs syntactically below it in the PSI (PY-7541)
31cffdd: Fixed extracting requirements from 'tests_require' (PY-7525)
617613d: Fixed unresolved reference in list comprehension inside default value of parameter (PY-7516)
9a859a4: Fixed false positive in unused locals for condition vars in while loops with if statement at the end (PY-7517)
5cc3c90: Merge branch 'python-fixes'
f9a7e06: Added envtest of skeleton header parsing and resolve for builtins (PY-7451)
2751df9: Make every language have its own preview page by default (no need to specify useSharedPreview() for each)
67c743e: Merge remote-tracking branch 'origin/master'
413a97b: fixed PY-7403 Insert type assertion: disable intention for variable defined in with statement
0c109a6: Adapter
4dddf8f: Kill plugin logos
f674470: fixed forgotten nullable in docstring intention
8473dbb: fixed unused code (produced NPE) and tests
11fc2f4: Merge remote-tracking branch 'origin/master'
5e86f13: Merge remote-tracking branch 'origin/master'
bdbc9cc: npe fix
f102b7e: Merge branch 'python-fixes'
87a8838: Merge remote-tracking branch 'origin/master'
76429aee: fixed PY-7410 Remove redundant parenthesis: false positive for yield from in return statement
61dd175: one-line PyStatementList handles insertion correctly (PY-149)
5650a08: fixed PY-7086 Specify type for reference using annotation: removes default parameter value
8b6b618: Fixed NPE in PySkeletonRefresher.refreshSkeletons
8057534: test for PY-7439
60ebdc9: fixed PY-7463 Intention "Replace + with string formatting operator" doesn't work for unicode strings
5714709: dead icons
2de09ad: IDEA-90765 When a completion popup is visible in Python code, typing . does not popup next completion
c7b31da: Don't clean up builtins skeleton file even if we cannot parse its header (PY-7451)
df134af: Don't clean up skeleton file for built-ins (PY-7451)
0148558: Merge branch 'python-fixes'
91dd500: PythonAutoPopupTest
9543f08: Fixed bug in headers of built-in skeletons that caused unresolved built-ins (PY-7451)
39e6609: Merge remote-tracking branch 'origin/master'
05320ed: fixed PY-7441 Docstring references behave strangely
4502389: fixed PY-7435 Specify return type using annotation: annotation is added to current function instead of one at caret
e4ce323: Merge remote-tracking branch 'origin/master'
c2297e5: Merge remote-tracking branch 'origin/master'
cd3be32: fixed PY-7438 Specify return type in docstring: disable intention for unresolved reference: NPE at com.jetbrains.python.codeInsight.intentions.SpecifyTypeInDocstringIntention.invoke
5f24af5: Inline some static fields referencing newly introduced generated *Icons classes
450755f: Merge remote-tracking branch 'origin/master'
22280bd: fixed PY-7436 Specify return type in docstring: inserted type breaks code in case imported function
28d65cf: Merge remote-tracking branch 'origin/master'
a8e82c2: don't silently choose existing import statement when importing from project (PY-7426)
8694205: Merge remote-tracking branch 'origin/master'
cc41599: fixed PY-7354 "Add return type annotation" quickfix on function to add :rtype: to docstring
86ef8c7: Merge remote-tracking branch 'origin/master'
b71c670: fixed PY-6978 Cannot pass more than one command line option to py.test
c581d9d: Fixed deleting up-to-date skeletons when their path contains whitespaces (PY-7421)
66f2079: Refactored skeleton header parsing
3b24596: Merge branch 'python-fixes'
173f8be: Merge remote-tracking branch 'origin/master'
1d3a782: Fixed parsing error in StatementParsing.parseSimpleStatement (EA-30244)
5188a07: Removed open/closed icon on module types. API left untouched for compatibility reasons
4d7446b: DB console: Ctrl-E
6f0f860: Fixed SOE in PyStringFormatInspection$Visitor$Inspection.inspectArguments (EA-38828)
560032d: Fixed resolve after back slashes (PY-3831).
11a0a6a: Keep user-added paths after package management setup paths (PY-7038).
c12e226: Deprecate unnecessary ListCellRendererWrapper constructor parameter
e859be2: IDEA-90860 Reformat Code breaks Copyright/Header
89362c2: IDEA-90860 Reformat Code breaks Copyright/Header
5df24fb: tweak range in which ctrl-u goes to superclass attribute
2b32c2a: fix completion of imported subpackages (PY-7409)
91e67c7: Migrate CollectionFactory to ContainerUtil
2fd0abd: netty-3.5.5 bundled
fe9b346: fix regression in override method with **kwargs
6ec35c6: use smarter, higher-level API for adding import elements to an import statement (PY-7400)
f322ccc: disable silent reference importer (PY-2191)
db667f0: correct scope for updating references in "add alias to import" (PY-3680)
ea842b3: goto super navigates to superclass attributes (PY-6707)
6c1df23: show continuation indent in preview (PY-2048)
9dd4b8b: a test just in case
93671bc: PY-7370 take 2
09f7e97: one more piece needed for correct import usage highlighting (PY-7379)
4663c58: Merge branch 'python-fixes'
898c488: Fixed scope for list comprehension targets for Python 2.x (PY-7389)
69e17ba: proper fix for PY-7316 Wrong text ranges for type references that have qualified canonical names; fixed tests
81510ca: Merge remote-tracking branch 'origin/master'
d40cc5a: fixed PY-7354 "Add return type annotation" quickfix on function to add :rtype: to docstring
6387477: Fixed extracting a single 'yield from ...' expression (PY-7399)
9e42d4d: fixed PY-7091 Specify type intentions are not available when cursor is at the very beginning of the reference with undefined type
bd9c80c: cleanup, notnull
e03ffb5: Merge remote-tracking branch 'origin/master'
7d868ff: never generate return statement when overriding __init__ method (PY-2171)
961ab09: complete 'else' after loop (PY-6755)
0134a6c: performance: don't ask next TemplateContextProvider for references if previous one gave a satisfactory result (PY-7384)
36ebf9e: findReferencesToHighlight() correctly handles dir/__init__.py duality (PY-1514)
c7d355d: Merge remote-tracking branch 'origin/master'
0e7bad6: Added ability to extract sub-generator in Python 3.3 for extract method refactoring (PY-7382)
fb233cb: Disabled extract method refactoring for code fragments that contain 'yield' (PY-7381)
67be1e4: Added intention for replacing explicit iteration with delegation to sub-generator for Python 3.3 (PY-7383)
c017c73: Merge remote-tracking branch 'origin/master'
db169f4: tune 'in function header' condition (PY-7370)
0032210: Merge remote-tracking branch 'origin/master'
eb3d956: a binary expression without a right operand is an incomplete block (PY-7360)
868bae4: fixed PY-7316 Wrong text ranges for type references that have qualified canonical names
0df6752: Use plugin reflective icons
76ffc0f: Refer to originating class so IconLoader doesn't have to guess via stack frame reflection
5d5deee: Fixed NFE (EA-38726).
a7e111a: resolve of target references doesn't go to functions or classes shadowed by the variable (PY-7342)
5a0d7cd: Fixed multiple compatibility messages for 'yield from' syntax (PY-7374)
1ab8a2f: Merge remote-tracking branch 'origin/master'
93d0b80: fixed PY-7319 "Join lines" does not remove trailing backslash correctly
4c1384e: Merge remote-tracking branch 'origin/master'
81672eb: Fixed unresolved __qualname__ for instances and functions in Python 3.3 (PY-6745)
5e314db: Added runWithLanguageLevel() for Python tests
e392e3b: Fixed path for Python 3.3 interpreter installed via official installer on Mac OS X (PY-7336)
b58eb73: consistent UI between creating virtualenv and specifying options for added virtualenv (PY-7365)
84d0677: correct formatting of relative imports (PY-7367)
02b1dc2: fixed PY-7355 Specify type using annotation: quick-fix is not available for top-level statements
cfbe5ac: redundant imports removed
ec23357: Don't log interruption here, as it happens and it's ok (EA-36726).
1d7e209: Fixed CCE (EA-38206).
97b7eb7: Fixed IOOBE (EA-38501).
f6b5bea: Fixed IOOBE.
3475886: Fixed CME.
4b46f3a: Fixed NPE.
090dd25: Merge remote-tracking branch 'origin/master'
a2eb93c: don't jump to function when looking for rename target (PY-7342)
5992da6: allow specifying local paths in Settings | External Documentation (PY-7335)
69ca3b7: avoid completing statements in function name (PY-5567)
f2495b7: Merge branch 'python-fixes'
4ecfdfc: More diagnostics for PIEAE in parsing types from docstrings for EA-38697
648008c: to enable user editing, store stdlib properties files outside of jar
79e0105: Fixed find usages of namespace packages and imported elements (PY-7348)
4334423: compilation fixed
a1370f8: Merge remote-tracking branch 'origin/master'
754e30a: Fixed filtering in TreeClassChooserDialog (IDEA-90723).
75f6672: more references highlighted as warnings rather than errors (PY-7253)
5eabb7a: Merge remote-tracking branch 'origin/master'
4f39f76: Fixed false positive 'Element is not closed' in Django (PY-2837).
1b709d3: correct context for completion of 'as' keyword in 'with' (PY-3701)
3871749: Icons classes generated for every plugin. Those aren't used yet though
9542b40: Merge remote-tracking branch 'origin/master'
05d3c1f: fixed PY-7089 Insert type assertion: leads to syntactically incorrect code when invoked for one-line function
4273a29: one more original element issue in completion (PY-7327); cleanup
5de3656: a better fix for renaming reassigned things (PY-3698)
6804db3: pull up getReference(PyResolveContext) to PyQualifiedReference interface
c5945dd: Nullable
2548e35: Fixed unused imports that are used for declaring types in docstrings (PY-7315)
cd2a799: Merge remote-tracking branch 'origin/master'
ab91ba0: add openapi modules to layout of python plugin
1ef18d5: since/until for python plugin in trunk
40b8413: Merge remote-tracking branch 'origin/master'
3b17169: fixed PY-7089 Insert type assertion: leads to syntactically incorrect code when invoked for one-line function
03f348d: Merge remote-tracking branch 'origin/master'
359b11d: Fixed unresolved entity highlight for Django templates inherited from html5 base (PY-6084).
8801ab8: Merge branch 'python-fixes'
bc43da0: when looking for target element, if the previous write of a reference is an augmented assignment, resolve to original declaration (PY-3698)
c7580aa: skip building skeleton for pynestkernel (PY-2087)
67633f3: complete 'as' keyword in 'except' (PY-1846)
7723851: Fixed unresolved reference false negative when referencing a class within its definition's suite (PY-5995)
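For context on PY-5995: a class name is only bound once its `class` statement completes, so a direct reference within the class body itself (outside method bodies) fails at runtime — the kind of unresolved reference the inspection previously missed. A minimal sketch:

```python
try:
    class Broken:
        # The class name is not bound until the class statement
        # finishes executing, so this raises NameError at
        # definition time:
        sibling = Broken
except NameError:
    caught = True

assert caught
```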
78d0043: Merge branch 'python-fixes'
5db6ed1: Support for implicit namespace packages in Python 3.3 (PY-7156)
d203cf5: Merge remote-tracking branch 'origin/master'
21941c4: fixed PY-7089 Insert type assertion: leads to syntactically incorrect code when invoked for one-line function
5ee402d: Don't cache temporary SDK packaging manager for setting up Python 3.3 virtualenv
7d0e6a2: Fixed platform-independent SDK flavor for Python 3.3 virtualenvs
7eaf3be: highlight first parameter of __new__ (PY-5942)
1fae7b9: don't highlight first parameter of staticmethods in metaclasses (PY-6648)
90c31fc: Merge remote-tracking branch 'origin/master'
fb2957f: fixed PY-7096 Insert type assertion should be disabled for references introduced in list comprehensions
bdc87a2: added predefined live template for 'super' call (PY-4230)
1d9b255: Merge branch 'python-fixes'
3d6d9d3: fixed PY-7294 Invalid warning about encodings in Python files
ea30e90: Merge remote-tracking branch 'origin/master'
5ab77e7: Create Python 3.3 virtual environments using the standard 'pyvenv' tool (PY-6701)
e95e099: Detect Python 3.3 virtual environments created using 'pyvenv' (PY-6701)
6dbebc9: fixed PY-7306 Packaging: do not run separate background tasks for multi-selection upgrade
1633266: Merge remote-tracking branch 'origin/master'
6ecdd10: Use VFS visitor in place of recursion (python)
d120f25: Actually killing 'open' icons
6bbdbcf: Merge remote-tracking branch 'origin/master'
842e236: do not overwrite user-specified value of PATH with virtualenv PATH (PY-2237)
73a602d: Killing open/closed icons.
e84b0c6: Command-line tests fixed.
b266a04: Fixed wrong text ranges for type references that have qualified canonical names (PY-7316)
702f1f6: Async VFS refresh works well again
b472a48: Merge remote-tracking branch 'origin/master'
3d06543: Merge remote-tracking branch 'origin/master'
d2166d0: Removing a lot of Icon static fields, that refer to other static fields, mostly from AllIcons
dcc5391: Substitutor is not default for JS.
163964c: Save call signatures option added.
931fad5: Attach to subprocess settings moved to Settings | Debugger | Python. That allows to debug subprocesses in tests (PY-7111).
e6e0bfd: Merge remote-tracking branch 'origin/master'
e9a77c1: Return weak unions for stdlib return types when we are unsure about the type (PY-7302)
a4767a5: move live templates-related code to its own package; implement pyClassName() macro (PY-7241)
6929070: when joining comments, we skipped over whitespace to the left of caret twice, which caused characters to be deleted (PY-7286)
0d9f715: UI fix (PY-6593)
29deff4: prefer wrapping after open paren, not before it (PY-7230)
1392184: Fix attributes resolve.
90e762c: Merge branch 'python-fixes'
6065082: Fixed refreshing SDK roots and updating current files after installing/uninstalling packages (PY-7250)
8dcc218: Merge remote-tracking branch 'origin/master'
44aaf83: Settings to select file types for templates (PY-6024, PY-4267).
608ebd0: better alignment in nested binary expressions (PY-5710)
21d6cb7: don't wrap before comma
d358309: correctly check if we need to insert a backslash when wrapping (PY-7039)
3490280: a PyTargetExpression will itself call PyTypeProvider.getReferenceType() when asked for its type (PY-7270)
cda3a70: testdata
2f5c1b1: dedicated interface for type callbacks of instructions
7a117f7: indent brace after comma only in adjust line indent mode (PY-6751)
5f15d99: put the data which are shared between all PyBlocks in a formatting model into a single PyBlockContext class
d17bca6: fixed PY-7130 Specify type for reference using annotation: disable intention with multi-resolve reference for return values
98b245e: fix regression in resolve: PyKeywordArgument is not a declaration
355e26a: correct alignment for nested lists in argument list; pending test for PY-6751
f051c6c: Merge remote-tracking branch 'origin/master'
5c7ef4d: less broken formatting for literals in argument list (PY-6672)
1835d32: allow specifying singleton option when creating a Python script run configuration
eea6cb3: Merge remote-tracking branch 'origin/master'
d65bb8e: pull up path calculation logic on rename to TemplateFileReference (PY-7263)
8b9e798: keyword argument can be renamed
b25a6e0: Clear installed packages cache if files under SDK paths had been modified (PY-6767)
cb4c41e: don't allow cross-file inplace rename
a72c563: fixed PY-7255 String value in dict literal incorrectly formatted as comment/docstring; PY-5224 String literal after for statement erroneously highlighted as docstring
c4af136: EA-38364 - IAE: PsiTreeUtil.isAncestor
f8fdb0e: EA-38373 - NPE: PythonReferenceImporter.addImportViaElement
4727a34: do not indent if caret is currently at beginning of line (PY-3009)
249d6eb: Fixed ISE in ScopeImpl.getNameDefiners (EA-38404)
044bbe7: Merge remote-tracking branch 'origin/master'
aa3215d: handle escape sequences in Python string literal spellchecker (PY-6794)
ba10b39: Follow assignments when looking for 'requires' arguments in setup.py (PY-5828)
d65ea55: fixed PY-7252 EAP - PyCharm running py.test shows green success icons on left side of run dialog even when py.test reports an error.
137ef5c3: Merge remote-tracking branch 'origin/master'
9c6cd48: don't rename configuration on renaming python script if its name is different from suggested name (PY-7154)
caa62f3: optimize imports quickfix checks out the files (PY-7195)
1e6cd6a: template file references should be highlighted as warnings (PY-7253)
ea74f4e: fixed PY-7151 Convert triple-quoted string to single-quoted string: do not wrap string with parenthesis if initial string is already inside them
6c09ad5: unresolved parameter references in docstrings are weak warnings
86270ca: Merge remote-tracking branch 'origin/master'
60b18d7: Fixed PIEAE caused by PyBuiltinCache.getStdlibType() (EA-37595)
604fecf: Fixed type checking for 'math' module functions and arguments that define '__float__' (PY-7231)
b495ad1: don't miss highlighting partially resolved references in from/import statements
55ed25d: Don't warn about using values with generic types as normal Python values (PY-7244)
57532f7: don't complain about comparison with none for overloaded operators (PY-952)
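The PY-952 change above matters because a class that overloads `__eq__` can make `== None` a meaningful expression rather than a mistake — SQLAlchemy-style column filters are the classic case. A toy sketch; the `Column` class here is purely illustrative:

```python
class Column:
    """Toy stand-in for an ORM column that overloads '=='."""
    def __eq__(self, other):
        # Returns a filter expression instead of a bool, so
        # 'col == None' is deliberate and meaningful here.
        if other is None:
            return "IS NULL"
        return ("=", other)

col = Column()
assert (col == None) == "IS NULL"   # noqa: E711 — intentional
assert (col == 3) == ("=", 3)
```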
d7b891f: Included NumPy plugin in installer
34b57d1: Merge branch 'numpy-support'
9806fb1: Merge branch 'python-fixes'
8ac36af: Added methods for parsing type strings and creating union and tuple types to PyPsiFacade
75d9dad: Fixed SOE in PyTypeChecker.collectGenerics() (PY-34597)
99d1735: type provider for SQLAlchemy columns (PY-7243)
4b5bd21: fixed PY-7128 Specify type for reference using annotation: IOOBE at com.intellij.openapi.editor.impl.EditorImpl.a
080c1ef: Turn off WriterThread writing till the end, because sometimes the end is never reached.
053db76: Merge remote-tracking branch 'origin/master'
f5be981: Special gevent constant.
2f43ad2: Merge remote-tracking branch 'origin/master'
fbeb34e: Merge remote-tracking branch 'origin/master'
02da198: Fixed indentation problem in Execute in console (PY-7180).
1a30d19: include OpenAPI source code in PyCharm distribution
df96a76: EA-36803 - assert: PsiToDocumentSynchronizer.doSync
adfe23a: unnecessary code removed from build scripts
58b98c3: don't allow nested unions when calculating readable name of a type (EA-38222 - SOE: PythonDocumentationProvider.getTypeName)
116f963: pull up diagnostics (EA-38220 - PIEAE: ResolveImportUtil.resolveInDirectory)
2d018f9: API for loading requirements only from requirements.txt
4588a06: preparing migration to new JPS: types removed
d6a5d05: Created NumPy module
4b369ed: Merge remote-tracking branch 'origin/master'
af35913: "configure template directories" quickfix for unresolved template reference is also available in Flask
f15b147: Merge branch 'python-fixes'
c9b1dc9: Merge remote-tracking branch 'origin/master'
7ec3cf6: Merge branch 'python-fixes'
817db80: Fixed type inference for generator expressions (PY-7020)
59c975f: Refactored searching for fake '__generator' class
37e98ee: Merge remote-tracking branch 'origin/master'
7be6203: include Flask plugin in installer
e14a7aa: Fixed type inference for list comprehensions (PY-7021)
f026238: Removed PyWeakType in favor of PyDynamicallyEvaluatedType
fc2208c: Merge branch 'python-fixes'
9fa1d24: Fixed type inference for 'self' parameters of derived parameterized classes (PY-7214)
87897df: Fixed return type of function with nested generator (PY-7215)
f2574be: Fixed type of 'sys.exit()' parameter (PY-7217)
b18ceb4: Merge remote-tracking branch 'origin/master'
0428956: some PSI for template languages in python-psi-api; common superinterface for string literals in Python code and templates
7cfe219: Merge remote-tracking branch 'origin/master'
2ee455c: Merge remote-tracking branch 'origin/master'
3e12cfd: extension point for resolving context variables in template files
357de3d: improve and simplify "auto-import via existing import statement" logic
51428d5: cleanup
524ee4a: build scripts refactored to prepare for migration to new JPS - 2
271bea4: include netty in pycharm dist (for LiveEdit plugin)
6677bdb: star import elements correctly resolve source of from import statement (PY-7204)
c2ac36d: remove meaningless logic for calculating unique names of imported elements: auto-import works only on unresolved identifiers, which means they won't clash with anything (in the worst case they will be shadowed in a specific narrower scope, which is OK because it doesn't change the meaning of the code)
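The worst-case shadowing the message above describes can be sketched as follows (names are illustrative): a local binding hides a module-level import only inside its own narrower scope, so the import keeps its meaning everywhere else and the code's behavior is unchanged:

```python
# Module-level import — what auto-import would add:
import os

def f():
    # This local 'os' shadows the import only inside this
    # narrower scope; the module-level name is unaffected.
    os = "local value"
    return os

assert f() == "local value"
assert os.path.join("a", "b")  # the import still works here
```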
810f08d: Merge remote-tracking branch 'origin/master'
8925f57: build scripts refactored to prepare for migration to new JPS
1b018a0: Merge remote-tracking branch 'origin/master'
a43a9fa: Allow qualified references and ignore patterns in unresolved references inspection (PY-1942)
9d7d6be: Fixed NPE in PyDynamicMember
b0a0e01: highlight urls in Python console output
c565da9: Renamed AddIgnoredIdentifierQuickFix
8352e59: tune collectionElementName() live template macro to work better in Jinja2 templates
b097438: Fixed CCE in PythonDocumentationProvider (PY-7202)
6d4632c: Fixed NPE in version comparator inside PyRequirement
9b9d11b: annotations
3c6c502: javadoc
79a71c1: pass foothold to PyCanonicalPathProvider; javadoc
116a4e2: don't insert parentheses when completing a function call if the function has any custom decorators (PY-4859)
d2bd505: fix testdata according to change in highlighting severity
b523566: fix alignment after first argument in call (PY-6360)
adfc447: let's try red wavy underline highlighting for unresolved unqualified references
956a4dc: correctly check if name to auto-import was exported from a higher-level package (PY-7203)
6547c94: a bit more test code excluded from auto import
2795e02: Fixed adding project path to PYTHONPATH in console (PY-5622).
2d8dfc6: some types for sqlite3 (PY-6507)
411951b: hook for providing type of 'with' block variables; provide type for contextlib.closing
20b193d: Merge remote-tracking branch 'origin/master'
14c84a4: load SQL plugin when running PyCharm under a debugger
4fff579: PyUtil.strValue() moved to PyPsiUtils
346ea96: Fixed updating skeletons for old-style module names 'foomodule.so' for 'foo' (PY-7132)
e7799fe: extract PyPackageManagerImpl.getRequirements() to python API
0658678: Merge remote-tracking branch 'origin/master'
5a7f2e1: PyPackage and PyRequirement moved to python-openapi
bcc92a8: Fixed false positive in unused locals for 'while' loops with interrupted control flow at the end (PY-7072)
f3b0731: keyword argument completion refactored a bit
5b77e35: Disabled type checker for operators that are assigned, not defined via functions (PY-6925)
8ad1116: Cancel now cancels connection to remote host in packaging panel.
b5fc1db: Refactored call signature provider in order not to switch from stubs to AST
a35bac7: Removed unnecessary check that forced switch from stubs to AST
19f5047: Removed obsolete not null checks
7800296: Merge branch 'python-fixes'
ad1e026: Don't highlight __package__ as unresolved (PY-7043)
9f9a115: Don't show "Introduce variable for statement" quickfix for simple reference expressions (PY-7189)
6b2fa6f: reducing the scariness of ResolveImportUtil
477117e: Show the type of the resolved element in quick documentation pop-up, not the type of the original element (PY-7127)
b194599: ResolveImportUtil.resolveImportElement() -> PyImportElement.resolve()
91da528: Not null check
613b414: Guess the return type of __enter__ for completion as a weak type (PY-7168)
a4c151a: PyClassType.getClass() is now always @NotNull
e1f97d3: moving to python-psi-api
fbb5e5c: remove PointInImport from PyModuleMembersProvider API
a587d92: stupid typo fixed
85d753f: resolve foreign imports in QualifiedNameResolver; simplified PyImportResolver API
00aacad: extract QualifiedNameResolveContext to a separate class
2a6a458: Revert "Don't highlight unused parameters of special methods with double underscore (PY-7178)"
dba8137: Revert "Don't ignore unused parameters in __new__ and __init__ (PY-7178)"
8278e10: Merge remote-tracking branch 'origin/master'
2379f75: remove unnecessary point in import check; don't duplicate completion items returned from module members provider
4455bd8: QualifiedNameResolver and ResolveImportUtil is back from resolving PsiFileSystemItems to PsiElements
57a62bc: Merge remote-tracking branch 'origin/master'
1123c7b0: documentation link provider -> python-openapi
0dd8ae9: pull out qualified name resolve logic from ResolveImportUtil into a separate QualifiedNameFinder class
2afc743: remove stdlib specific logic from core ResolveImportUtil class
ae0de9f: cleanup
5671021: don't lose valuable diagnostic message (EA-38136 - PIEAE: PyFunctionImpl.a)
1156f2a: PIEAE diagnostics (EA-38139 - PIEAE: PsiElementBase.getContainingFile)
40928ff: Don't ignore unused parameters in __new__ and __init__ (PY-7178)
3a07231: Merge branch 'python-fixes'
3998c88: make python completion weigher work only in python contexts
c802a63: False positive in type checker for decorated functions (PY-7179)
1a395ba: Don't highlight unused parameters of special methods with double underscore (PY-7178)
a49f4b6: fix build again
1e81610: Don't ignore unused parameters of an empty function as opposed to an empty method (PY-7126)
7badb19: Ignore unused parameters for function with a single empty return statement (PY-7028)
71c02bf: Cleanup
fde38fc: Don't highlight missing call to super class constructor for exceptions (PY-7176)
cb7dd4d: Refactored built-in Exception equality check
eb53afa: Put back logic for highlighting only the first read access to a Python unbound variable
da4de62: get rid of commons-lang usages in python
4e845ca: one more fix for tests
60636aa: Switched off and made optional 'Simplify comparison to zero' (PY-6876)
9c45553: test fix
2a6a07f: to avoid packaging an extra copy of helpers inside pycharm.jar, move helpers to a separate module
1627170: package y.jar inside pycharm.jar
65f729c: PIEAE diagnostics (EA-38108 - PIEAE: ResolveResultList.poke)
ca18737: Merge remote-tracking branch 'origin/master'
5cf8ae9: Fixed NPE (EA-37680)
65052b6: Fixed NPE.
4493647: Don't highlight unresolved attributes for decorated classes (PY-7173)
825d83b: Don't highlight unresolved attributes of decorated functions (PY-7173)
858a0a6: Fixed false positive for attributes of class that has __getattr__ as attribute, not method
e54985d: Don't hold references to PSI elements in 'Add to requirements' quickfix
4b87ba4: Removed unused code
62b9779: Merge branch 'python-fixes'
b8faf7c: Fixes of overriding signature inspection for Cython
ffc07ef: Merge remote-tracking branch 'origin/master'
c8ab31f: PY-7124
5454b6f: Fixed false positive in overriding method signature inspection for overriding signatures with less parameters and '**kwargs'
9c73604: Merge remote-tracking branch 'origin/master'
f4c4eaf: More details in warning message of overriding method signature inspection
8960baf: PyClassMembersProvider && PyModuleMembersProvider -> python-psi-api
72dbe42: avoid stub to AST switches when using class name completion for variables
af0456c: reduced usages of StringUtilRt
8ff184d: PyDynamicMember -> python-psi-api
408b9e0: extract interface from PyClassType
ae18478: PyPsiPath -> python-psi-api
ac2c9bf: Merge branch 'python-fixes'
1512978: Fixed false negative in overridden signature inspection for extra required parameter in overriding method with '**kwargs' (PY-7159)
400bd5d: Fixed false negative in overridden signature inspection for extra parameter with default value (PY-7162)
9e4e595: correctly implement getFamilyName() for Python quickfixes; remove some of PSI elements stored in quickfix instances
846c4952: move PyUnresolvedReferenceQuickFixProvider to python-openapi
bf257b3: Removed Darcula color scheme from bundle list until IDEA-89181 is fixed.
5314a75: Merge remote-tracking branch 'origin/master'
41b56c9: it's not abstract
2649b91: TemplateFileReferenceSet extracted to python-openapi
5b4da49: PythonStringUtil -> python-psi-api
38f20a6: PyPsiUtils -> python-psi-api
39bea5b: Merge branch 'python-fixes'
e28a1bc: Fixed false positive in overriding method signature inspection (PY-7157)
c518835: TemplatesService in python-openapi
ea8faa2: Merge remote-tracking branch 'origin/master'
00ff230: PyRunConfigurationFactory in OpenAPI
9c8516b: If someone debugs, don't kill softly - that doesn't help. Kill very hard instead.
57c8c4c: Signature tracing is temporarily turned off.
d90212e: Fixed wrapping inside Django tags (PY-7134).
5835399: refresh VFS for site-packages directory after installing or uninstalling a package
bad82a7: PyPackageManager.getInstance() in python-openapi
dff0e7f: new project settings -> python-openapi
951350a: a bit of PyPackageManage in python-openapi
ec03766: pass Project to isFrameworkInstalled(); working implementation of qname resolution in SDK only
01897c8: cleanup; allow QualifiedNameResolver to work with only an SDK
46fee0c: introduce PyPsiFacade; extract interface from QualifiedNameResolver
b359381: Fixed false positive in unused import for class attribute reassignment (PY-7136)
2bc06ba: decouple PyNewDirectoryProjectDialog from Django
86a0bb4: Flask module created
718ec7e: let's have separate modules for python-psi-api (that will go into upsource later) and python-openapi (which will contain UI stuff and more)
e4df653: Merge remote-tracking branch 'origin/master'
3b94251: Merge branch 'python-fixes'
233aebb: Fixed false positives in inherited method signature inspection (PY-6700)
bdcda79: Merge remote-tracking branch 'origin/master'
1403d0d: Fix builds
7d22e41: include openapi in plugin build sourcepath
df9f3f8: Merge remote-tracking branch 'origin/master'
60b6094: some more stuff moved to python-openapi
75fecdc: python-openapi module extracted
0988915: Removed optimisation, which broke skeleton update (PY-6676)
b9a833b: Refactored tests
bb59f0a: return weak type of class as return type of constructor call if the actual return type of __new__ is unknown (PY-6671)
e735ebc: introduce variable handles multiline string literals (PY-6698)
f475408: don't suggest keywords as names in introduce refactoring (PY-6605)
c66feb7: turn off auto-import popup by default
5ce3afc: Fail jar task on duplicates by default
d7eacea: __qualname__ is a well-known built-in attribute (PY-6745)
c1dcb39: fix handling of star arguments in override method (PY-6455)
8c19781: Merge branch 'python-fixes'
e04ae9a: check for multiple * parameters in parameter list (PY-6456)
3395553: test fix
af2fc97: handle backslashes in introduce variable initializer correctly (PY-6908); formatter removes spaces around dot
154b10f: diagnostics
ef8b773: show variables in second completion (PY-7066)
9e3c799: move implementation details to inner class
3a0ede9: Removed another print.
0c11023: Remove print.
ac79b9b: highlight "Generator expression must be parenthesized if not sole argument" (PY-6926)
844281a: cleanup
59bd0dd: fix parsing of generator in argument list to actually match Python grammar (PY-6926)
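The grammar rule behind PY-6926: a generator expression may appear unparenthesized only when it is the sole argument of a call; with any other argument present, CPython itself rejects it at compile time. A quick demonstration:

```python
# Sole argument — no parentheses needed:
assert sum(x * x for x in range(4)) == 14

# More than one argument — the generator must be parenthesized:
assert sum((x * x for x in range(4)), 0) == 14

# Without the parentheses it is a SyntaxError:
try:
    compile("sum(x for x in range(4), 0)", "<test>", "eval")
except SyntaxError:
    parenthesization_required = True

assert parenthesization_required
```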
166b126: show module names in second completion (PY-7066)
757f8bc: artwork for 2.6 EAP (DSGN-256)
bf8faa8: toolwindow icon size according to the new politics of the party (DSGN-255)
3ad7bbd: Fixed false positive in unreachable code inspection for 'break' from 'else' part of nested 'while' loop (PY-6062)
da66e6d: Merge remote-tracking branch 'origin/master'
3de1fd2: don't shorten paths to ~ on Windows
37cc731: use correct SDK (module rather than project) to exclude stdlib tests from auto-import and class name completion
a310c1c: Containing class.
89d310f: We send real call signatures from the debugger and save them as attributes to project files; weak types introduced (types we can complete, but that don't participate in inspections)
4b72823: introduce -> extract (PY-7103)
7f1ff4e: store cache of exported names on soft reference (PY-7005)
28b930b: dead code
9241229: don't run test runner updater in non-Python projects
59b9bff: Merge remote-tracking branch 'origin/master'
f321886: Resolve parameter in docstring to *args or **kwargs if it is not found among the function parameters (PY-7032)
3c84d4b: Don't mark parameters of empty methods with no body as unused (PY-7028)
def421c: Comment.
7c4589b: update channels in ApplicationInfo.xml don't need to be fixed because they aren't used for anything any more; remove them to avoid confusion
5bd778f: Merge remote-tracking branch 'origin/master'
0bb08d3: Merge branch 'python-fixes'
c06c0b4: Fixed parsing '-' in package names inside requirement URLs (PY-7034)
a1f8c72: Merge remote-tracking branch 'origin/master'
9570c47: Fixed IntellijIdeaRulezzz in completion of assignment to instance attribute (PY-7102)
3cf5776: Fixed resolve for class and instance attributes with the same name (PY-7040)
4e9bf67: Merge remote-tracking branch 'origin/master'
1352a0f: restrict Python handler to Python file type
25c85ac: fixed PY-6994 Paste leads to syntactically incorrect code in case pasting copied text instead of selection
dcc9173: Cleanup
0d413cc: Fixed remote output lost (PY-2420).
e9486b3: Fixed false positive in unreachable code inspection for 'break' inside 'else' in nested for loop (PY-6062)
655b8f5: fixed PY-7085 Specify type for reference using annotation: missing intention for specifying annotation for return values
aa77742: Merge remote-tracking branch 'origin/master'
023095b: fixed PY-7086 Specify type for reference using annotation: removes default parameter value
9ae2649: fixed PY-7089 Insert type assertion: leads to syntactically incorrect code when invoked for one-line function
92ea411: Removed unused code
855cf59: fixed PY-7088 Insert docstring stub: leads to syntactically incorrect code when invoked for one-line function
44bf303: fixed PY-7090 Insert type assertion: do not propose under 'if isinstance'; fixed PY-7092 Insert type assertion: disable intention after already existing type assertion
2c5a8e4: fixed PY-7093 Disable type specifying intentions when already specified type is unresolved
7314bd9: fixed PY-7094 Insert type assertion: leads to unresolved reference when invoked in lambdas
2128819: fixed PY-7095 Specify type for reference using annotation: leads to syntactically incorrect code when invoked in lambdas
5469754: fixed PY-7096 Insert type assertion should be disabled for references introduced in list comprehensions
82df8af: fixed PY-7097 Insert type assertion should not be available for unresolved references
1ed3996: fixed PY-7098 Type specifying intentions should not be available for references which look like callables
ddfd858: Fixed SOE in searching for recursive package requirements (PY-7074)
9635fcc: Merge branch 'python-fixes'
54dec94: Fixed skeleton of bytearray() (PY-7078)
2de9a33: IDEA-89501 IDEA sometimes freeze after selecting item in a completion list
28e5bbb: Fixed indirect star imports of Cython symbols (PY-6289)
8f30323: fixed PY-6995 Extra indents when pasting code with non-rectangle selection between top-level statements
caf2af4: fixed PY-6997 Not able to paste some multi-line text into string
569b12d: fixed PY-7060 RST parsing problem
c99816a: Fixed resolve to constructor for Python classes inside Cython files (PY-6017)
f6e16a1: Added support for Cython memoryview syntax (PY-6420)
14048fe: Merge remote-tracking branch 'origin/master'
2595afd: refactored PyCharm build script to use common layoutPlugin
2f72d8b: npe fix, api cleanup
80740fc: Merge branch 'python-fixes'
8c473b4: Disabled highlighting of the first method parameter as 'self' for Cython C++ classes
270bbb4: fixed PY-7045 Intention for specifying type of reference in Python 3 annotation
ff0707e: fixed settings and manage.py location for tests/django console/manage.py tasks in case of complicated project structure
806a580: Merge remote-tracking branch 'origin/master'
7ff637c: faster sorting for second completion
c977676: fixed PY-6543 Django manage.py commands not working
d5918b4: fixed PY-7068 Test run config doesn't include project sources in PYTHONPATH; correctly compute settings file
509a22e: fixed PY-7068 Test run config doesn't include project sources in PYTHONPATH
610a6aa: Disabled method parameters inspection for Cython cppclasses
7972145: Unified parameter token set via Python dialects token set provider (EA-35980)
4a133b8: move ext to correct place
036e056: Fixed type inference for type() calls (PY-7058)
614c51a: Merge branch 'python-fixes'
cdcbe98: Fixed Python builtins file detection and lambda accessor value for PropertyStubStorage
8511454: Merge remote-tracking branch 'origin/master'
1c48307: cleanup python-rest dependencies
8472b6c: fixed PY-6868 No parameter info popup for parameter lists without closed parentheses
cb7c973: Merge remote-tracking branch 'origin/master'
05faf3b: Merge remote-tracking branch 'origin/master'
e25293c: moved rest dependent code from python to python-rest
f670ae3: moved localization dependent code from python to python-localization
b7eeecb: Revert "Return property with unknown calls for target expression stubs with assigned calls to 'property'"
e3d3164: Type database for 'shutil' stdlib module (PY-7023)
06ef432: fixed EA-37702 - assert: ComponentManagerImpl.getPicoContainer
e362572: Merge branch 'python-fixes'
c7edeb8: Added test for types of properties of values having union types
f20820d: Merge branch 'lambda-properties' into python-fixes
6ba66ec: Fixed resolve of properties for values of union types
0c3e62b: merge
eff6e9d: Merge remote-tracking branch 'origin/master'
231aef5: fixed tests
5043741: Cleanup
8c06e0a: Return property with unknown calls for target expression stubs with assigned calls to 'property'
1e4239d: Fix dependencies
6145301: fixed test dependency
5c8141d: IDEA-87938 Completion: extend 'middle-matching' by '*word*parts*' in completion
2137ec2: Revert "Revert "Allow lambdas in classic property declarations (PY-5916)""
8849599: Remove parentheses when analysing a call site in type checker
afe7369: Merge remote-tracking branch 'origin/master'
3bca0ad: Use information from stub for finding decorator in decorators list
6a9d721: Merge remote-tracking branch 'origin/master'
6ad840a: Merge remote-tracking branch 'origin/master'
9ce7f48: fixed PY-6947 Separate RST into plugin suitable for inclusion into PhpStorm/WebStorm
89c5572: Fixed callable check for PyFunctionType in the type checker
d878fa2: Type database for 'os.path' (PY-7023)
0f1a223: Don't use overloaded return type with null type matches only
e01cd93: Fixed types of os.readlink and os.walk
05ce16e: Type database for the 'os' module (PY-7023)
9b24cbd: Fixed boolean 'or' operator
c4886e5: Merge remote-tracking branch 'origin/master'
96c541b: fixed PY-6946 Make separate plugin for: GNU GetText (.po) files
18d3362: Added type for sys.exit()
dafa3f8: Merge branch 'python-fixes'
2f4bd06: Don't show warnings for unused imports listed in __all__ (PY-2668)
d064b94: Merge remote-tracking branch 'origin/master'
4552393: Don't show warnings for unused imports of sub-modules inside a package (PY-2668)
af7301f: Fixed false positive in unresolved references for qualified references of values returned by imported functions (PY-7022)
ad37ddb: Try to resolve a class-level name in the outer context if it exists, but it is not found in the latest defs (PY-6540)
55383be: Fixed unresolved reference for reassignment of a variable after star import (PY-7026)
fe3e5b0: fixed PY-6868 No parameter info popup for parameter lists without closed parentheses
3ce7d1e: fixed PY-5986 Provide an option to disable automatic docstring generation on """[space]
f304907: Resolve os.path to ntpath or posixpath based on the current platform (PY-7024)
ba4b198: fixed PyTypeParser
df48e4d: Known signature of itertools.groupby() for Jython 2.x (PY-6816)
c4c0b21: Merge remote-tracking branch 'origin/master'
96f59f6: fixed test data
71de46e: Merge remote-tracking branch 'origin/master'
e02021f: fixed PY-1413 Show first line of docstring in Ctrl-hover popup
fe592b8: typo
d7a8af6: look for ClassType in imported identifiers
0a8a3c3: fixed PY-4646 Complete class names after @type/:type in docstrings (added imported classes to completion)
81886e1: fixed PY-4646 Complete class names after @type/:type in docstrings
b80603d: fixed PY-7006 An Epytext @type rendering issue in Quick Documentation
60ee2f3: Fixed NPE in PyPackageRequirementsInspection (EA-37562)
2e7eba4: Assist in specifying types via docstrings (Intention for docstring and intention for assertion)
f4d822a: Merge remote-tracking branch 'origin/master'
e9ccabd: #PY-5507 Fixed
28cd869: Meta attributes completion #PY-4118 Fixed
15e0db7: Merge remote-tracking branch 'origin/master'
7c9f6c9: optimization: use event.getPath() instead of event.getFile()
412fc46: Merge remote-tracking branch 'origin/master'
2364ee2: Cleanup
a8d021d: Fixed NSEE in PyReferenceExpressionImpl (EA-37415, PY-6921)
0357f61: #PY-6352 Fixed: Foreign keys relations from abstract models.
bf0be02: #PY-6979 Fixed: Fixed rename of template parameter
8be4f7c: Merge remote-tracking branch 'origin/master'
a5c84fa: fixed PY-6982 Looks like PyTestRunnerUpdater is trying to check for library installation in a non-Python SDK
7dc4914: fixed PY-6982 Looks like PyTestRunnerUpdater is trying to check for library installation in a non-Python SDK
ab5c426: Merge remote-tracking branch 'origin/master'
81eb0e5: Merge remote-tracking branch 'origin/master'
6aa655c: logic for indenting after pasting is no longer based on copying PY-6884, PY-6965, PY-6966, PY-6907
d9236b6: Merge remote-tracking branch 'origin/master'
5cc0a41: restore TextAttributesKey API back
6899581: Autodetect docstring format
84ecc62: Complete only valid identifiers as related_name (PY-5520).
fde72d4: Fixed django template line marker for raw/unicode string literals (PY-6233).
714172a: Autodetect testrunner
7c771fd: Merge remote-tracking branch 'origin/master'
83f40f7: turn on middle matching in completion tests
d346e64: get_next_by_FOO and get_previous_by_FOO (PY-6050).
5daf6d4: dmg backgrounds mountain lion ready
c85708d: Ask providers for ancestors after real introspection.
272fb97: Merge remote-tracking branch 'origin/master'
b1b3bcc: Merge remote-tracking branch 'origin/master'
98e5444: fixed PY-6884 Paste changes code logic in case empty line inside indented block
6706085: Merge branch 'upsource-master'
27ac3cb: Merge remote-tracking branch 'origin/master'
ec5c008: fixed PY-6889 Incorrect indent on pasting class method on module-level right after the class itself
97026d7: Merge branch 'python-fixes'
b7f6529: Fixed django dynamic methods resolve in case of override (PY-6664).
a60a5c8: Improved code insight for simple assignments in PSI stubs (PY-4748, PY-6116, PY-4306)
6861400: Django get_FOO_display completion (PY-6050).
4775d2a: fixed PY-6907 Paste leads to syntactically incorrect code in case of dedent in copied text
12fd578: Merge branch 'master' into upsource-master
3e022d8: Merge remote-tracking branch 'origin/master'
8c122bf: Fixed handling of raw/unicode strings (PY-6673).
a36c66e: move TextAttributesKey to core-api
2b1223c: Extract default TextAttributes from TextAttributesKey into separate class TextAttributesKeyDefaults
5f77453: replaced VirtualFile == comparisons with .equals()
05e4fb3: fixed PY-6927 Paste always pastes copied text to the second indent level regardless caret position when pasting to the end of the function
ea6b7bf: Merge remote-tracking branch 'origin/master'
9d28280: fixed PY-6928 Paste inserts code one line after caret and messes up code in case for as the last compound statement in function PY-6907 Paste leads to syntactically incorrect code in case of dedent in copied text
f12252b: Fixed save of exception breakpoints state (PY-6920).
b32b5fe: sort python class name completion variants
33c425635: Merge remote-tracking branch 'origin/master'
326ac10: Merge remote-tracking branch 'origin/master'
f0b5a22: fixed exception in smart copy-paste
821358e: Merge branch 'python-fixes'
7a9628c: Fixed resolving to skeleton modules implicitly added to the package via imports in referenced files (PY-6796, PY-6905)
26201c9: Merge remote-tracking branch 'origin/master'
b41a551: Use IdeaAwareWebServer to avoid threads leaks in Python Console.
ff1984c: fixed PY-6886 Paste leads to syntactically incorrect code with caret beyond actual indent
940cb3b: replace NotImplementedException (from sporadic packages) with java.lang.UnsupportedOperationException
c117626: initial step in phasing out commons-lang library: removed separate library, added Velocity dependency instead (Velocity library forces this dependency).
845389d: Merge branch 'python-fixes'
294e738: Updated package requirements inspection test
84cf35f: fixed PY-6887 Paste changes code logic when making top-level function inner
fa51cb9: Merge remote-tracking branch 'origin/master'
0bd83df: fixed PY-6889 Incorrect indent on pasting class method on module-level right after the class itself
902d053: removed commons-collections
685f8fd: fixed PY-6843 package install confused by whitespace in directory name?
80a75ac: Merge remote-tracking branch 'origin/master'
51bf641: Merge branch 'python-fixes'
d8a58bb: Disabled class name completion for import elements
ccb017b: Fixed generator methods for various Python versions (PY-6892, PY-6893)
193bce3: Fixed wrong completion variant for module containing function with the same name (PY-6878)
bc35273: Disabled class name completion for qualified references (PY-6879)
d47d4a0: Fixed import.
f00a6d1: Fixed import.
3193d0b: Fixed import.
524e15f: Fixed import.
10929da: No USE_LIB_COPY at the moment.
b78430e: Fixed ipython 0.10 multiline input.
4a700e6: Fixed imports.
aa5af83: Merge branch 'python-fixes'
bda1435: Added fake __generator class for emulating <type 'generator'> (PY-6758)
482f940: Fixed wrong __init__ attribute in completion for packages (PY-6603)
0c6e76a: Cleanup
43c4aa0: Merge remote-tracking branch 'origin/master'
6e5412f: Merge remote-tracking branch 'origin/master'
c46253f: Merge branch 'master' into upsource-master
cd1b798: Merge remote-tracking branch 'origin/master'
e8f76fb: fixed exception and inner function in copy-paste
4037d93: python group
817b8aa: Fixed method call.
e5be6c4: Fixed import.
0256525: Merge remote-tracking branch 'origin/master'
b422202: Correct order of declarations.
5f8c21a: Merge branch 'python-fixes'
c36e0d0: USE_LIB_COPY only for python 2.6 and 2.7
2060c0c: Fixed type inference for properties of objects created by factory functions (PY-6803)
38088c4: Merge branch 'master' into upsource-master
b5555a3: Merge remote-tracking branch 'origin/master'
3cc766a: Navigation to block inheritors (PY-6852).
cfb15ce: Fixed skeleton signature of itertools.groupby (PY-6816)
4140c25: Fixed wrong resolve to submodule instead of function with the same name (PY-6866)
b48d46a: IDEA-88147 Add PROJECT_NAME variable to file templates
65388e2: Updated to distribute-0.6.27 and virtualenv-1.7.2 (PY-6855, PY-5896)
828b875: Merge remote-tracking branch 'origin/master'
80e3584: Fixed parsing a single backslash line inside brackets (PY-6722)
469ce0c: fixed inner function bug in copy-paste
c47f87d: Fixed IntellijIdeaRulezzz in completion of qualified references in target expressions (PY-6829)
8adeac7: Merge branch 'master' into upsource-master
b9989d1: Fixed exception in exception handling (PY-6828).
cea24f0: Merge remote-tracking branch 'origin/master'
a67f912: Smart copy-paste fixed PY-6410 Adding/Pasting text at the end of a class method causes the next class method to be moved outside the class
2a2fe5e: remove incorrect extension registration (IDEA-87117)
560d889: remove AUTOCOMPLETE_ON_CLASS_NAME_COMPLETION usages in tests as they make no sense anymore
f0f0af8: Merge remote-tracking branch 'origin/master'
8876c45: Completion and resolve for Django url tag: named patterns and view methods inside str.literals
8b7832e: test 2nd basic instead of class name completion: python
f66a60c: fixed PY-6845 Class Diagrams: redundant metaclass relation for child classes
7c7c476: Fixed wrong import.
c55ba62: fixed PY-6756 PyCharm erroneously reports "too many arguments" for certain string formatting lines.
275232b: Merge remote-tracking branch 'origin/master'
b38145a: Merge remote-tracking branch 'origin/master'
f5c65bd: Merge remote-tracking branch 'origin/master'
01fa4d4: EA-36929 - NFE: PyRequirement.replace
eb42140: Merge remote-tracking branch 'origin/master'
8317410: ensure that everything suggested by class name completion is also suggested on second basic completion invocation (IDEA-86517)
7b5a9b4: Merge remote-tracking branch 'origin/master'
9af8b46: Fixed django identifier parsing (PY-6831).
8717819: fixed EA-36720 - NPE: StatementMover.getStatementParts
b9df7eb: End block name completion, resolve and matching open-close inspection (PY-420).
a46b3d4: fixed EA-36853 - NPE: PyTransformConditionalExpressionIntention.invoke
8385ba1: Merge remote-tracking branch 'origin/master'
06ab377: Merge remote-tracking branch 'origin/master'
72fedaa: bring back old name to make plugins happy
39f9411: fixed PY-6834 Convert lambda to function: check existing names when creating functions
683d060: Merge branch 'master' into upsource-master
f6cdefe: Merge remote-tracking branch 'origin/master'
a284320: fixed PY-6835 Convert lambda to function: leads to syntactically incorrect code when lambda expression is the only statement in function
1eb202e: Merge branch 'master' into upsource-master
f6915b2: Merge remote-tracking branch 'origin/master'
0126b16: EA-36833 - NPE: PyRedeclarationInspection$Visitor._checkAbove
4611eb1: don't return invalid types from stdlib type cache (EA-36870 - PIEAE: PyFunctionImpl.getGenericReturnType)
89c8242: Merge remote-tracking branch 'origin/master'
4433adc: Should work with gevent monkey-patching now (PY-1681).
a4aa20f: Fixed template references rename for raw and unicode strings (PY-6673).
c33bdf3: Merge remote-tracking branch 'origin/master'
f4dbd4b: Merge remote-tracking branch 'origin/master'
802d906: refixed PY-6589
7c041e3: Merge branch 'master' into upsource-master
9ed4882: invoke on PsiFile made final
eb25502: editor color scheme for darcula
a277710: use less memory for file attributes
afd58be: rename InjectedLanguageFacadeImpl back to InjectedLanguageUtil. Move method getInjectedPsiFiles from InjectedLanguageUtil to injectedLanguageManager. Move InjectedLanguageManager to core-api
730cd45: fixed PY-6528 "Install to user's site packages directory" shows wrong path for remote interpreter
4d95db9: fixed PY-6548 @type in class docstrings does not recognize class variables
97e5631: fixed PY-6589 Unnecessary backslash in expression: false negative for double parenthesis
4f1aa81: fixed PY-6610 Convert lambda to function: leads to unresolved reference when using local vars in lambda
20f7fc5: fixed PY-6678 Assignment can be replaced with augmented assignment: does nothing if there is a function in the assignment
7ed6a43: fixed PY-6743 Not able to see the difference for failed tests in py3 - link is not available
d27c2ec: Merge remote-tracking branch 'origin/master'
1ac1000: fixed PY-6071 Django Testing in PyCharm Doesn't Load Fixtures
756b52b: IDEA-87003
d23e137: Merge branch 'master' into upsource-master
61b4589: fixed PY-6770 Back slash is not inserted when pressing Enter in "if" condition
2ab9e20: Merge remote-tracking branch 'origin/master'
150dd26: EA-36609 - AIOOBE: PyFileImpl$ExportedNameCache.findNameInNameDefiners
63ed5dc: test fixed
509c515: EA-36661 - NPE: PyFileImpl$ExportedNameCache.addDeclaration
1fcc952: Merge remote-tracking branch 'origin/master'
510314a: fixed PY-6723 Entering new line inside empty string in parenthesis leads to syntactically incorrect code
55803b0: Fixed false negative in parsing for empty 'if' in list comprehensions (PY-6781)
0c47dc2: Merge branch 'python-fixes'
7f5220c: Fixed type parameters of xrange and range in Python 3 (PY-6725)
379fa94: Fixed datetime skeleton name for Python 3.1 (PY-6473)
b8a1735: Merge remote-tracking branch 'origin/master'
aa5de6a: added updater for versions.xml fixed PY-6704 Update compatibility inspection for Python 3.3
e538b25: Fixed error in parsing backslashes before whitespace lines inside expressions with brackets (PY-6722)
106e6d4: fixed PY-6766 Compatibility inspection doesn't detect 'raise from'
181a0d6: Merge branch 'master' into upsource-master
6c00607: Complete 'from' in 'yield' statements for Python 3 (PY-6727)
ac91546: Complete 'from' in 'raise' statements for Python 3 (PY-6735)
7761964: Fixed false negative in parsing incomplete 'raise ... from' expression (PY-6734)
c282007: Fixed false negative in parsing incomplete 'yield from' expression (PY-6733)
fb56ad5: Merge remote-tracking branch 'origin/master'
0c3a1ce: fixed PY-6731 Convert list comprehensions to for loop: bug with nested comprehensions
ddaeafe: Merge branch 'python-fixes'
97f6360: Show owners of variables and fields in completion pop-up (PY-6585)
dfdfd67: Move PsiSearchHelperImpl to "indexing-impl". Introduce components for supplementary stuff
608af89: fixed PY-6761 Django: create template from usage: strip quotes from the name of created template
8427e5b: improve logic for following assignments in Ctrl-Q (PY-6502)
782847f: trunk is PyCharm 2.6
5eb27cd: correct title for goto super popup (PY-6706)
9639c39: no longer insert colon when completing keywords that require text before colon (PY-6709)
095d7d4: allow selecting files with no extension in Python script run configuration (PY-6739)
8084204: Removed print-statements (PY-6675).
52def9d: don't return null from element generator on invalid text, throw detailed message instead (EA-33112 - CCE: IntroduceHandler.createDeclaration)
7e31a87: diagnostics for EA-36209
4606d84: more assertions for EA-35463 - PIEAE: TypeEvalContext.getType
093274e: delete old-style resolve of exported names and some other code which isn't used any more
928fea4: mouseClicked() mostly replaced with ClickListener and DoubleClickListener, since original mouseClicked doesn't respect minor movements between press and release.
ba2d379: fixed PY-6723 Entering new line inside empty string in parenthesis leads to syntactically incorrect code
e9edea7: Remove debug print statements (PY-6675).
6860822: notnull
c4f2cce: Merge remote-tracking branch 'origin/master'
1fcbb41: Merge branch 'python3.3-support'
11740d0: Allow 'return' with arguments inside generators in Python 3.3 (PY-6702)
e61f959: Type inference for delegating generator functions (PY-6702)
b9939f1: Support 'yield from' syntax (PY-6702)
56a0c67: Don't highlight Unicode literals in Python 3.3 as errors (PY-6703)
4ae25d2: notnull
b0f0148: Merge remote-tracking branch 'origin/master'
610da1b: Fixed completion for relatively imported submodule attributes (PY-6674)
0cfed32: Merge remote-tracking branch 'origin/master'
89d5a01: Reference AllIcons' fields from xml, rather than resource files directly
7a55b70: Merge branch 'python-fixes'
6a8d534: Updated structure view test to include object.__module__ (PY-6634)
d744571: Fixed skeletons for fields of metaclass types
87ef7d9: Fixed duplicated __module__ attribute for old-style classes (PY-6634)
c2db96a: Better syntax error recovery for unmatched braces (PY-3067)
f9612a3: Configurables no longer have icons.
c39bded: Unified access to icons.jar resources
550dbdd: Merge remote-tracking branch 'origin/master'
415375b: "java.util.List" ->CommonClassNames.JAVA_UTIL_LIST
62c8ef1: Merge branch 'python-fixes'
167d546: @NotNull added to UsageTarget[] targets
eee791f: Added the __module__ attribute to new-style classes (PY-6634)
82b8047: Added the exceptions module to the builtins
74ba455: Check the inner scope and all the outer scopes for name clashes when extracting a method (PY-6626)
600c566: fixed ancestors in Python class diagram
3bbb31e: fixed pycharm build script
c5d3152: Merge remote-tracking branch 'origin/master'
ed55ef5: fixed exception in python diagram plugin
4d2bce7: Merge branch 'type-eval-context'
55495f6: Merge branch 'python-fixes'
8cd92f4: Fixed updating references when 'import ...' style is selected (PY-6590)
8532199: Cleanup
f9f42a4: Fixed resolve of imported module references in submodules with relative imports (PY-6575)
c5f5d59: Fixed isPackage() check for module files
72b1067: Moved inSameFile() to PyUtil
be7a410: @NotNull contextElement in CompletionConfidence
397dca4: Merge remote branch 'origin/master'
b71ac6b: fixed PY-6643
df4c9b2: Removed fast() type eval context with origin as unused
3b98a94: Fixed type eval context for PySuperArgumentsInspection
51ed668: Merge remote-tracking branch 'origin/master'
a0e4917: IdeBorderFactory. Redundant methods removed.
11ead27: fixed failed tests for callable
0e8bb3f: fixed failed tests for statement mover
b5a3011: Fixed dictionary keys representation (PY-5834).
766f999: added python-uml to build script
94e62f8: Fixed CreateProcess monkey-patching for 3.3 (PY-6642).
0e95db7: Merge remote branch 'origin/master'
aec5764: added initial support for Python Class diagram
981c567: Test for PY-6575
c7d5d2b: Fixed unresolved references when moving modules and functions with 'import ...' style enabled (PY-6590, PY-6592)
b5ea1ee: Merge branch 'python-fixes'
bd42933: Fixed extract method for fragments with nonlocal variables (PY-6625)
f4f66b0: Fixed unnecessary cls argument when extracting @classmethod (PY-6624)
9544579: Fixed function scopes in name clash detection in extract method (PY-6626)
f3dd384: Fixed extracting fragments with inner classes without constructors (PY-6622)
aa5d3f0: Fixed detection of loop parts with continue and break in extract method refactoring (PY-6623)
20b7a97: has_key usage in debugger code (PY-6635).
557f56d: Fixed exceptions skeletons for Python 2.4 (PY-6581).
4b4dec7: Merge remote-tracking branch 'origin/master'
c10a567: Refactored extract method tests
cf23be8: Merge branch 'python-fixes'
ff69ebb: Split type checker tests
862bd16: IDEA-86431 Rename methods LookupElementBuilder.setXXX to withXXX
14df77a: Fixed type checker for classes with builtin base classes (PY-6606)
1bea08c: Extracted some type checking tests
6bcc4d3: Merge branch 'python-fixes'
b2e5d4d: Fixed global variable detection at the toplevel in extract method (PY-6619)
16448be: Don't mark global symbols as inputs of code fragment (PY-6616)
caef87f: Merge branch 'python-fixes'
d97b0fa: Merge branch 'extract-method-rewrite'
3942f9e: Disabled test for PY-2903
b07cae0: Fixed indentation of original method after extracting method before comment (PY-6416)
eed45d2: Add 'global' statement when extracting modification of global variable (PY-6417)
72f3103: Removed old Python code fragment builder
51ca6d5: Rewritten Python code fragment extraction based on CFG (PY-4156, PY-6414)
f97d2c5: Python 2.4 exceptions correct filtering (PY-6581).
192a4dc: Format keys also (PY-5834).
0807848: Merge remote-tracking branch 'origin/master'
0dd0d82: Correct slash escaping in debugger var view (PY-5834).
a800b59: Python 2.4 doesn't have BaseException (PY-5466).
30c444d: Improved control flow analysis before code fragment extraction (PY-5865, PY-6413)
12d21d8: Fixed out-of-order for-loop body instruction in Python CFG
227fa93: Merge remote-tracking branch 'origin/master'
6fa18cd: Coverage works with remote interpreter (PY-6431).
ee39fc0: fixed CopyPaste
6441607: reducing dependencies of ModuleImpl
bd3730e: SDK -> projectModel-api
e9d2cb8: extract ValidatableSdkAdditionalData
c089145: SdkTypeId extracted
4119b49: Fixed updating star imports of usages in move refactoring (PY-6571)
37d0ce3: Fixed subscription types for dict literals with members of unknown type (PY-6570)
345f132: Moved type checker inspection tests to a separate test case
110daa8: Tests for current bugs of extract method refactoring
ee02005: Merge remote-tracking branch 'origin/master'
89ff88f: Special handling of strange case when traceback is None in excepthook in GTK-based project (PY-6556).
0da0bd3: Merge remote branch 'origin/master'
f9de665: fixed PY-3245 Should complain about named param after *args on Python ≤2.5
2820eda: Merge remote-tracking branch 'origin/master'
cd27230: fixed PY-6117 Show flavor icons in "Add" popup when adding a new Python interpreter
27763c3: fixed PY-6529 "Manage repositories" dialog should use ToolbarDecorator for add/remove buttons
ea8cbd9: fixed PY-5106 Unnecessary backslash is added on enter inside string in parenthesis
01fc893: fixed PY-5534 PyCharm fails to show difference for failed unit test
b997c75: fixed PY-6548 @type in class docstrings does not recognize class variables
f5300f3: fixed EA-35467 - assert: PsiFileImpl.getStubTree
9da83b9: Merge branch 'python-fixes'
99dfa6e: Fixed generic types for dict literals (PY-6542)
a2b9053: Merge remote branch 'origin/master'
6b106e2: fixed PY-6505 "Replace + with string formatting operator" is available in a context which has nothing to do with strings
1f8c178: fixed PY-6509 Refresh packages table when packages list is loaded from the network
0e88084: fixed PY-6513 Filtering the Install Packages dialog does not update description correctly
6ae9a56: fixed PY-6522 List of suggested paths for new virtualenv interpreter should not include multiple pythons from same virtualenv
42d5a3d: build script refactoring (work in progress)
5e7680e: SDKs -> interpreters
0bc27b3: cleanup for cleanup
1292fa7: Merge branch 'python-fixes'
87dd3cf: Added move refactoring tests to all Python tests suite
088ef9d: 'datetime' binary module is called '_datetime' in Python 3 (PY-6473)
4540311: fixed PY-6410 Adding/Pasting text at the end of a class method causes the next class method to be moved outside the class
a7f1969: Get pip and distribute errors from packaging_tool.py as separate return codes (PY-6493)
01282b7: Use single quotes for requirements added to 'requires=' kwarg (PY-5842)
ccf9c56: Replace element usage in move refactoring via element generator (PY-6464)
edab40f: Fixed updates of qualified references to modules in move module refactoring (PY-6466)
5be8730: Fixed inserting imports for usages of the moved element from the same file (PY-6465)
d14a19c: fixed item 3 of PY-5673 Incomplete/incorrect rendering of Sphinx/rst docstrings
5cc260e: Select any named element for move refactoring, not only scope owner (PY-6461)
ff81273: fixed PY-6500 Don't allow to run multiple install or upgrade processes for the same package
2cf42a9: updated bundled version of pip (didn't for distribute because virtualenv 1.7 uses 0.6.24)
ac71f3f: fixed PY-6505 "Replace + with string formatting operator" is available in a context which has nothing to do with strings
9130b04: Merge remote branch 'origin/master'
42e1da4: fixed PY-6492 PyCharm hangs on trying to find package when viewing details for some other package
451d6b6: ModuleRootListeners cleanup
df506fa: fixed PY-6501 Exception from ChainedComparison inspection
dd0689d: fixed PY-6504 Rename quickfix for augmented assignment inspection
715c0fe: fixed PY-6331 Invalid refactoring suggestion: augmented assignment
b661eca: tests for PY-6364
26a3f34: Merge remote-tracking branch 'origin/master'
02fa412: check for empty value of WORKON_HOME
3b34059: don't do deep recursive search for virtualenv home (PY-6459)
95350db: a reference in a slice expression in the LHS of an assignment is always a reference expression, not a target expression (PY-6468)
418d724: Fixed for docstrings also (PY-4644).
5c458b6: cleanup
1337eb6: Fixed windows file references handling for raw strings (PY-6233).
e6300b7: removed pycharm's helpers from PYTHONPATH
b2b3dfb: fixed PY-6029 Module pyparsing from PyCharm.app/helpers included in test run execution
b5f2dae: fixed PY-6223 Default location for new virtualenvs on Mac OS X is based on filesystem root (/)
c63b907: fixed PY-6492 PyCharm hangs on trying to find package when viewing details for some other package (again network from UI)
d3eb832: Merge remote branch 'origin/master'
190fb4e: IDEA-85546 Introduce constants for the persistent components macros
5f4d4f6: Merge remote-tracking branch 'origin/master'
de077a6: "manage packages" action in Python plugin (PY-6494)
1399eed: one renderer for the SDK combo box is quite sufficient
fda6c63: don't show 'create virtualenv' link if we don't have a handler for this
9d1b907: decouple CreateVirtualEnvDialog from PyConfigurableInterpreterList, move it to python core
5114978: move installManagementTool() to PyPackagesPanel
74b68b0: move updateNotifications() method to PyPackagesPanel
fde144f: EA-35924 - ICNE: PythonFileType.extractCharsetFromFileContent
2bc4f18: Merge remote-tracking branch 'origin/master'
12dd871: fixed PY-6363 mako tag in javascript trigger unexpected "expression expected" lint
519ab3a: PYTHONIOENCODING for Python console (PY-6481).
d6a1e23: fixed PY-6490 False positive for "Assignment can be replaced with augmented assignment"
0a42282: Merge remote branch 'origin/master'
0bc9e0d: fixed PY-6285 Install pip fails for python 2.4: TypeError: an integer is required
e586ddd: fixed PY-6286 Invalid output format for virtualenvs on py2.4
8dbb1b0: Console env params for remote interpreters (PY-5890).
b18589a: fixed PY-6364 Pycharm adds extra backward slash at the end of line
05186ce: fixed PY-6376 testrunner attempts to run migrations even if south is not listed in INSTALLED_APPS
4e7244a: Merge remote-tracking branch 'origin/master'
532d40c: extract packages table from PySdkConfigurable, move it along with ManagePackagesDialog to python core
befb56a: extract cell renderer to a top-level class
2615865: moving stuff around
2f8bf50: Merge remote-tracking branch 'origin/master'
3479388: Merge remote branch 'origin/master'
dac294d: fixed PY-6482 Move statement: ineffective move down to else statement in try/except/else
7baedc9: Don't put breakpoints at strings (PY-4644).
936c7f4: fixed PY-6467 Simplify chained comparison suggests strange solutions
eb33aeb: fixed PY-6451 .js files improperly autocommented
3e9550f: Fixed unresolved references for namespace packages (PY-2813)
ff7c083: Refactored isPackage() checks
c83bca1: Refactored multi-file tests for PyUnresolvedReferencesInspection
02f747b: Merge branch 'python-fixes'
b3048ef: Don't insert imports in move refactoring for elements that are not imported (PY-5612)
920c842: Put focus on the file name in move refactoring dialog, suggest current file name (PY-4924)
c387549: Fixed updating usages of sub-modules in move refactoring (PY-5850)
aa15b8b: Insert moved function before all its usages at destination (PY-6447)
24f6714: Merge remote branch 'origin/master'
fe61210: added environment tests for python sys.path in django
2453928: Refactored isTopLevel() function for Python
c34c480: Refactored PyMoveClassOrFunctionProcessor
d8cf8d7: @NotNull
49390ea: Fixed test username for all Python move tests
1ec65fa: Delete imports when moving function to file containing its usages (PY-6447)
b05db21: Refactored PyMoveClassOrFunctionProcessor
7475135: Merge remote-tracking branch 'origin/master'
6c6f87c: harmonize module type descriptions (IDEA-83167)
4d5c064: Merge remote-tracking branch 'origin/master'
3d25acb: Step-into in case of arguments on different lines (PY-6278).
a2dfd6d: Merge remote branch 'origin/master'
97f04fe: Merge remote-tracking branch 'origin/master'
02bdee4: fixed PY-3852 Since 1.5 I can not launch the django runserver via the PyCharm fixed PY-5076 Unable to run manage.py commands from PyCharm 2.0 beta
e9c3095: Merge remote-tracking branch 'origin/master'
ca79ee6: Python 2.4 compatibility (PY-6403).
693e3cc: Merge branch 'python-fixes'
71aebc4: Fixed move refactoring for symbols that are used via star-import (PY-6432)
09e95d3: enable some more plugins in pycharm started under debugger
ebe1b5c: Merge remote-tracking branch 'origin/master'
5d90d66: Missing builtins for remote interpreter (PY-6191).
35d6c52: Suppress warning for USE_CACHE in PyReferenceImpl
6d01adf: Fixed local resolve priorities for star imports (PY-6380)
8b3d3d1: Removed unused return value
cb6f0de: fix stub/AST mismatches and IOOBE in indices: use correct file content when building stubs for unsaved document (IDEA-85266) [r=peter, jeka]
1deb0c5: Refactored tests of unused imports
2a21d89: Package names translate '_' in requirements to '-' (PY-6438)
9eff8e7: Fixed unresolved reference in local variable of lambda inside default parameter value (PY-6435)
f6ef6c6: Merge remote-tracking branch 'origin/master'
32bca31: fix SOE in new-style resolve (PY-6305) (oh god the contract of findExportedName() is so confusing)
35c2477: Remote interpreter: correct interpreter version (PY-6430).
0e089d2: find usages of class does not do text search on __init__ (PY-5406)
69688bf: correct semantics of __slots__ with inheritance (PY-5939)
d7fecf6: Merge remote-tracking branch 'origin/master'
2da4446: don't offer top-level packages in completion of relative imports (PY-6304)
0a7e0a6: don't try to append any import elements to a star import (PY-6302)
2b64b8d: complete imported names inside __all__ (PY-6306)
73e5c67: more strict condition for injecting method references inside __all__ (PY-6370)
22e3067: fixed regression caused by PY-6366 fix (unable to install distribute/pip)
c0819d3: Merge remote branch 'origin/master'
f70e2c2: fixed collect pythonpath for tests
7bb6c02: fixed broken order in pythonpath (append value to the end), which fixed PY-6312 Unit Tests no longer run in PyCharm 2.x and probably (can't reproduce) helped with PY-3852 Since 1.5 I can not launch the django runserver via the PyCharm, PY-5076 Unable to run manage.py commands from PyCharm 2.0 beta
0253ecc: Fixed pip installation (PY-6321).
4459fea: Removed error message (PY-6324).
e3167e6: fixed PY-6248 django 1.4 manage.py error running shell fixed PY-2921 Django shell problem django console execute users manage.py
dcb0292: Merge remote-tracking branch 'origin/master'
6de7622: Don't create '__init__.py' when moving files if 'Search for references' is not selected (PY-6253)
35e5e5c: Merge remote branch 'origin/master'
d1f1cbb: Merge remote-tracking branch 'origin/master'
ea16177: PNG resources repack
b732b86: Merge branch 'python-fixes'
c01de31: new PyCharm Mac icon (DSGN-56)
1c8d1c9: Don't suggest defined unread and unmodified variables in extract method (PY-6391)
0099417: Parse URLs without the editable flag in requirements.txt (PY-6328)
b74965d: Merge remote-tracking branch 'origin/master'
1e286cb: Fixed position conversion for remote debug (PY-6406).
4168e43: Fixed assignment balance inspection for sliced tuples (PY-6315)
5b9bcfb: Fixed parsing trailing zeroes in version numbers of requirements (PY-6355)
4659d71: fixed testdata
48c7eff: Wait for output before remote process termination (PY-6318).
f35d432: Fixed missing last part of output from remote process (PY-6318).
04a8991: Fixed resolve for nested comprehensions (PY-6316)
1c7a09b: merge
33bb122: Merge remote branch 'origin/master'
7f66090: Fixed --build-dir for remote interpreter
7102189: Refactoring: strings->consts
c778aa9: RI: fixed install management(pip or distribute) (PY-6366).
002711a: useUserSite param removed (detect whether to use sudo or not based on additional args)
737fedc: Re-ask sudo password if it was incorrect (PY-6324).
2c36625: Remove zip extension (hide implementation).
e467dd9: 1) Fixed bug in remote process termination. 2) For remote Django we allocate pty and send ctrl+C now. 3) Possibly exception from PY-6347 is fixed
d20d49a: cleanup
14f7074: fixed PY-6271 Move statement: breaks code in case of empty line between parts of compound statement
92ad063: Command line is constructed with getInterpreterPath method, so it should be used in other places as well.
0550206: Fixed import (PY-6368).
99ef2ac: fixed PY-6329 Selecting a package to install causes PyCharm to hang forever
efacac2: Print exception in list_sources.
eb94a97: Removed extra connection checks.
470d386: Merge remote branch 'origin/master'
56b1589: Add numbers to interpreter name for disambiguation (PY-6339).
2b9c474: Cache external process exception.
8ec061e: fixed PY-6331 Invalid refactoring suggestion: augmented assignment
509e34a: Merge remote-tracking branch 'origin/master'
7836e82: Merge remote-tracking branch 'origin/master'
d362548: Wrong @NotNull annotation (PY-6343).
3e2a4de: Refactoring.
0e543ec: Merge remote-tracking branch 'origin/master'
d65aac9: Fixed remote skeletons transfer.
f7e4ab7: File chooser for remote project root.
889ce02: Fixed possible problem with paths in skeleton generator
6714de1: Merge remote-tracking branch 'origin/master'
6b1a93e: Merge remote-tracking branch 'origin/master'
263297b: Fixed breakpoints management for remote libraries.
16723ad: Merge remote-tracking branch 'origin/master'
8033938: native mac help building
b8c3e12: Merge remote branch 'origin/master'
7d6120e: Merge remote-tracking branch 'origin/master'
1537bdb: inherit parent PATH environment variable when running tasks (PY-6264)
a023d3b: Merge remote branch 'origin/master'
1c3215d: fixed PY-6248 but PY-2921 actually still doesn't work
47f7cb7: Merge remote-tracking branch 'origin/master'
002d0b1: PyClassStubImpl.toString()
801a5a6: Merge remote-tracking branch 'origin/master'
07e5769: Merge remote-tracking branch 'origin/master'
e0d40af: when "associate with current project" is selected in SDK chooser from new project dialog, associate virtualenv with project being created instead (PY-6272)
a21ed9b: Create remote project dialog. Disabled yet.
7891f2e: Merge remote branch 'origin/master'
19e124d: Merge remote-tracking branch 'origin/master'
9ad8d77: fixed PY-6271 Move statement: breaks code in case of empty line between parts of compound statement
abaf1ac: ignore parse errors when trying to do fancy mapping from one LHS to multiple RHS (EA-35507 - IOE: PyElementGeneratorImpl.createExpressionFromText)
40c88bd: EA-35107 - NPE: PydevConsoleRunner.getConsoleCommunication
4bf0c7b: clear cache of builtin types if builtins file has been reparsed (EA-35359 - NPE: StubBasedPsiElementBase.getNode)
9180ddb: EA-35436 - NPE: PythonDocumentationProvider.getUrlFor
0a8a504: chasing down the PIEAE (EA-35463 - PIEAE: TypeEvalContext.getType)
dceac09: chasing down the PIEAE (EA-35499 - PIEAE: ResolveImportUtil.resolveNameInFromImport)
dcf16bc: Merge remote-tracking branch 'origin/master'
f739698: icon for virtualenv; use toolbar button instead of regular button for creating virtualenv
c84ae60: recognize Python 3.3 as 3.2 (PY-6284)
8e9010e: Merge remote branch 'origin/master'
1642729: Merge remote branch 'origin/master'
ab36043: Merge remote-tracking branch 'origin/master'
ab0ce28: @NotNull
e115d61: Corrected the type of 'map' Python builtin
9de6877: rename coverage-impl -> coverage-common; exclude jacoco and src from coverage-common distr
3f547a8: Merge remote branch 'origin/master'
7fe3c39: fixed PY-6283 Introspection: "This dictionary creation could be rewritten as a dictionary literal" produces invalid code
5fb322a: typo
214b8ca: fixed PY-6271 Move statement: breaks code in case of empty line between parts of compound statement
25e286f: Merge branch 'python-fixes'
e373b22: Ignore modules with names not found in PyPI (PY-6276)
ebfea88: Added cache of lower-cased PyPI package names
bc13922: Fixed wrong 'unknown' type sub-parts in type name strings
1883ad9: Type database for 'subprocess' module (PY-6170)
da485a0: Python return type references are considered unknown in isUnknown() check
1947c31: fixed PY-6238 Missing create virtualenv link for py2.4: cut off Traceback
90cd2d9: Merge remote branch 'origin/master'
a975c14: Fixed exception inheritance inspection for exceptions defined by class name (PY-5811)
f7c5c82: Fixed bug in resolving relative imports of skeletons from source modules inside packages (PY-6254)
be137ab: fixed PY-6271 Move statement: breaks code in case of empty line between parts of compound statement
dc6c583: Removed redundant search in the Python skeletons directory in resolveInDirectory()
bfc922f: Merge remote-tracking branch 'origin/master'
3111385: Fixed NPE and added @Nullable annotations in PyClassType (PY-6267)
b874d9c: More readable `toString()` for stubs of Python import elements
b4765cf: Merge remote-tracking branch 'origin/master'
dae6913: Merge remote branch 'origin/master'
a767277: fixed PY-6259 PyCharm suggests to install pip, though it is already installed
83575c8: IDEA-84195 single line of code has multiple methods on it; IDEA displays the interface icon multiple times, causing display to be funky
646d8ff: fixed EA-35340 - NPE: PyTestUtil.isPyTestClass
d7dab94: fixed continue/break moving
03c13c1: Merge remote branch 'origin/master'
194383c: fixed PY-6133 Move statement: strange comment moving down to def or class statements
148b163: Merge remote-tracking branch 'origin/master'
26501ff: Fixed CME in PyUnresolvedReferencesInspection$Visitor.registerUnresolvedReferenceProblem (EA-35232)
abe2353: Fixed assertion error in PyRequirement.toOptions (EA-35333)
1e593db: Fixed assertion error: return null for type of elements with invalid anchor (EA-34289)
a99a116: Nullness checks
09f7b1f: Django paths in raw strings (PY-6233).
3bc5625: Fixed completion of local variables on empty input (PY-6226)
34e5a6a: Fixed evaluation of instance types from strings
b2e3d61: fixed PY-6163 Moving blocks of code
5c05e7f: Merge remote-tracking branch 'origin/master'
27f6352: @NotNull
892a5e3: Cleanup
ef38c7d: Cleanup.
bac966a: Step only current thread (PY-6243).
f32cd3a: Merge remote-tracking branch 'origin/master'
661d470: Merge remote-tracking branch 'origin/master'
ffab07b: Added warning when a new process is started under debugger.
0ab3b4b: Merge remote branch 'origin/master'
da8e4ad: fixed not moving statements in case of selection
be11f60: Fixed multiprocess debug for Windows (PY-6157).
6c2be36: Merge remote-tracking branch 'origin/master'
ec34352: Merge remote-tracking branch 'origin/master'
49b4309: more PIEAE diagnostics
4ecc34e: Merge remote branch 'origin/master'
8871d25: fixed PY-6043 Joining Lines causes discrete elements to be joined
cac9496: Merge remote-tracking branch 'origin/master'
3bb5a17: Pass exception to stderr.
770a8aa: Added special-case support for the django-nose PY-3168
d6518a3: store path of associated project for VirtualEnv
8ffd6ec: file type for Jinja2 files (PY-6110); more consistent naming for file types
fde3a3a: Merge branch 'python-fixes'
bbfa9b1: Fixed TypeError when skeletons generation fails (PY-5810)
abb6ad9: Fixed stub and AST mismatch: don't create stubs for named parameters inside lambda
15e78e3: Merge remote-tracking branch 'origin/master'
b7eaf50: don't use atexit, causes problems.
64138dc: Skeletons for builtin exceptions should go to the 'exceptions' module (PY-5882, PY-6136)
cdeacd9: Fixed path case normalization on remote win with pycharm on unix (PY-4244).
b8c8e49: Fixed detaching for remote debug.
cc96f01: Merge remote branch 'origin/master'
03c82ed: fixed PY-5623 "Chained comparisons can be simplified" fix gives wrong result
0663554: fixed PY-6223 Default location for new virtualenvs on Mac OS X is based on filesystem root (/)
fe7898e: Merge remote-tracking branch 'origin/master'
aa960f8: remove duplicate code for preferring resolve results with non-empty __init__.py files
1c1d237: Merge branch 'python-fixes'
a28d3a4: Fixed bug in extract method refactoring for vars defined before and redefined inside the fragment (PY-6081)
3d0af58: Merge remote branch 'remotes/origin/master'
68aec0f: fix test for fixed formatter
307a118: Merge remote branch 'origin/master'
a4f57ef: Merge remote-tracking branch 'origin/master'
2a20595: fixed test for editing
3e2e2a4: remove unhelpful @Nullable annotation
b246a82: make sure we don't add the import below the element which needs it (PY-4951)
3e78aec: don't suggest auto-import from module which is shadowed by a same-named package (PY-6208)
d0c96a9: Merge remote branch 'origin/master'
aa7434f: fixed PY-5489 Refactor/Move: leads to syntax error with multi-line imports
03eb939: don't show auto-import hint if the text of the reference has changed (PY-6205)
e20ffc7: use correct names for files imported via star import (PY-5813)
8f39d27: allow creating new Python packages at top level (PY-2085)
6ca70dc: provide type also for functions annotated as setter or deleter (PY-5951 take 2)
be3978f: Fixed references to qualified classes in docstring type annotations (PY-6022)
9f65c2e: take formatting options from correct place (PY-6008)
201a186: PyAssignmentStatement.iterateNames() does not iterate qualified names (PY-6121)
f001e65: correct type for static access to properties (PY-5915)
ef1a3de: Merge remote-tracking branch 'origin/master'
5c018ef: Locale safe toUpperCase().
ed0e7ca: correctly rename all accessors of a property (PY-5948)
9f1616d: In case of remote debugging on Win from Unix, paths shouldn't be lowercased (PY-4244).
55f1abb: better formatting for generator expressions (PY-6219)
3fb379c: Removed duplicated methods.
7ce4f9b: Fixed highlighting of unused local classes (PY-5086)
426755a: fixed PY-5106 Unnecessary backslash is added on enter inside string in parenthesis
72c061d: Fixed false positive for arguments passed to class method without explicit 'self' (PY-6108)
ff7e134: Fixed django file completion in unicode strings (PY-5998).
882f187: Don't highlight first argument as 'self' if it is *args or **kwargs (PY-6108)
5e1de01: Show both stderr and stdout in external tools error messages (PY-5896)
5cefa51: Fixed debugger path normalization problem on Win (PY-5943).
4d4559c: fixed PY-5039 reStructuredText docstring and doctest conflicting for inspection
44be38a: Merge remote-tracking branch 'origin/master'
4c1eec6: Fixed false positive in unreachable code inspection for while-else loops (PY-6159)
f4fefe0: Use system-independent paths to recursive requirements files (PY-6201)
2b241d1: Merge branch 'python-fixes'
6b611fd: Compress skeletons by default.
96a610c: Don't complain about transitive dependencies not being listed in requirements.txt (PY-6016)
b187edf: Flushing remote output in skeleton generation (PY-6190, PY-6191).
9b5c274: fixed PY-4376 Split lines: do not add another backslash if there is one already
7a48113: Merge remote-tracking branch 'origin/master'
3196621: fixed PY-5489 Refactor/Move: leads to syntax error with multi-line imports
70bcc70: more PIEAE diagnostics
60531d2: Added HTTP proxy support for 'pip install' (PY-5989)
d3d92d7: Flushing remote output in skeleton generation (PY-6190, PY-6191).
083b5a4: fixed EA-35243 - PIEAE: PsiElementBase.getContainingFile
42b6218: Merge remote branch 'origin/master'
e202232: fixed EA-35203 - NPE: PyDictDuplicateKeysInspection$Visitor.isDict
7597881: fixed filtering of annotations for mako, cython and so on
4ac0a2c: fixed EA-35193 - IOOBE: PyParameterInfoHandler.updateUI
e856a64: small cleanup
abe3426: tests PY-6101 assignment can be replaced false negative
2cdd813: tests PY-6102 Call to constructor of super is missed
55daee0: fixed PY-6102 Call to constructor of super is missed
e03703b: Merge remote branch 'origin/master'
1c157d7: fixed PY-6124 Disable uninstall button for user-site packages for vEnv
b55712d: Merge remote-tracking branch 'origin/master'
0f4a1c0: Sudo execution for remote interpreters.
0f23505: Merge branch 'python-fixes'
0b22536: Parse recursive requirements files (PY-6155)
dcf92eb: Merge remote-tracking branch 'origin/master'
0eb6c29: Path to package requirements file in module settings (PY-6094)
01174d9: fixed PY-6186 compatibility inspection does not believe in intern()
226cef2: Merge remote branch 'origin/master'
add4359: skip autopopup in string literal prefix (PY-6004)
9decb95: don't override locally resolved module with results from global resolve cache (PY-6011)
51f4ec4: fixed tests
34a31d0: Do not show Install package quickfix if 'pip' tool is not found (PY-6156)
44f09ea: fixed PY-6166 Show warning on checking Install Django check-box for interpreter without pip
df075bf: Fixed path with spaces in skeleton generation (PY-6177).
6939c8b: Reuse connection in remote skeletons building.
7b3a095: packaging_tool.py made compatible with Jython 2.2 (PY-6097)
2c5d840: small cleanup
ebea9ee: Cleanup
efa6d0c: fixed PY-6053 "Statement seems to have no effect" should be disabled for autocomplete hints
e1d114b: fixed PY-6148 Wrong "Package installed successfully" message if the superuser password dialog has been closed
71bccec: fixed PY-6161 Do not propose to add pythonw.exe for virtualenv located in project directory
f4f7ed0: Merge remote-tracking branch 'origin/master'
e78b853: Fixed packaging for remote interpreters.
1343785: Merge branch 'python-fixes'
c53c2c7: Merge remote-tracking branch 'origin/master'
633a04f: Upload helpers if version is changed (PY-6085).
88dcaef: Revert "Allow lambdas in classic property declarations (PY-5916)"
3ef0e14: Merge remote-tracking branch 'origin/master'
2208403: Switch from stubs to AST in types statelessly: in the current file or if explicitly allowed (PY-6116)
6b51ae5: Merge remote branch 'origin/master'
86c6df8: fixed PY-6150 django_test_manage.py has incorrect import for south's MigrateAndSyncCommand
63824d7: Ask credentials for remote run if they aren't stored(PY-6127).
fd4c9ec: Merge remote-tracking branch 'origin/master'
1cb0fde: Nullable annotations.
e2419e5: Not so busy wait.
fd2aa63: Project can be null for setup interpreter paths.
9a8beae: fixed duplicated path in suggestions for Add interpreter
ab08471: Merge remote-tracking branch 'origin/master'
c0ec992: Zipping remote skeletons.
fc52138: Don't generate skeleton if one already exists.
728d640: removed dependencies on django from core
1f83b43: Merge remote branch 'origin/master'
6dda929: make sure setupSdkPaths() has a module when creating virtualenv (PY-6106)
90f2ddb: fixed privileges in uninstall
1391181: fixed PY-6067 Offer to install pip and distribute if they are not installed in the current interpreter
d31e2fd: micro-optimization
4c881c7: Merge remote-tracking branch 'origin/master'
e59a407: don't depend on inconsistent behavior of resolving PyImportElements in findExportedName()
f01cb98: encapsulate cache of exported names; handle case when same name is imported via multiple import elements
1640b58: declarations in 'except' part should not override those in 'try'
51f13be: Merge remote-tracking branch 'origin/master'
2235935: fixed PY-6066 Ask for admin permissions when installing packages to system directories
ab480ab: IDEA-79927: "Create project from existing sources" doesn't work with HTML/JS only projects (cherry picked from commit d701ff1)
a40abd2: Fixed store password checkbox (PY-5722)
6648180: Merge branch 'python-fixes'
825dfd6: Fixed ISE in ScopeImpl.getNameDefiners (EA-34621)
442d008: Merge remote-tracking branch 'origin/master'
162487b: global cache for names not found in any NameDefiners imported from current file
96a70a2: Fixed IAE in PythonSdkType.createInvalidSdkNotification (EA-34536)
0a23dfe: Fixed SOE in PyDefUseUtil.getLatestDefs (EA-34435)
5f77873: Fixed CCE in AddImportHelper.addImportStatement (EA-33577)
ef32df0: Merge branch 'python-fixes'
ffd2ebd: Dialog to choose packages to install in install requirements quickfix (PY-6048)
6294a6a: Merge remote branch 'origin/master'
03b716e: fixed emptiness of exception in PyPackageManager on finished() action
24aa3c3: Disabled package requirements inspection for Cython 'cimport' statements
3e21177: Merge remote-tracking branch 'origin/master'
cd0854a: Fixed unresolved reference to parameter inside lambda in decorator (PY-6083)
e9e9b8f: Cleanup
d124125: Python skeletons update for remote interpreters.
2046a33: Merge branch 'python-fixes'
e17c0b8: RI: fixed cleaning of skeletons.
713753a: Initial remote skeleton building and local copying.
9f0a8a2: Install packages one at a time in order to skip failed installations and track progress (PY-6048)
32220e9: fixed PY-5268 Move statement: breaks code in case of indented one-line compound statements
c4928a6: fixed PY-5527 Move statement: unnecessary pass statement on moving one-line comment out of indented block
5caf729: fixed PY-5270 Move statement: unexpected move on trying to move one-line comment within indented block
1a3228c: Merge remote-tracking branch 'origin/master'
ab0df25: fixed PY-5269 Move statement: inconsistent moving of continue and break statement within nested loops
7e19170: fixed PY-5192 Move Statement: breaks code in case moving down to nested try statement
e3aa684: Merge remote branch 'origin/master'
71c4ddd: fixed PY-6068 "No nosetest runner found in selected interpreter" should have quickfix to install nose
388fe4a: Delegate to util-rt to avoid yellow code
665eb43: Merge branch 'python-fixes'
ff640ae: Resolve to locally defined 'global' variables (PY-5617)
fe09ec0: PyFile exported name caching rewrite
211e92f: Merge remote-tracking branch 'origin/master'
fd18586: fixed PY-6045 Incorrect simplification of chained comparison
2e59b1b: Fixed PATH env variable for virtualenvs (PY-5962)
52178c8: fix PyCharm and RubyMine build
6a9c4b7: Merge remote-tracking branch 'origin/master'
030da48: Merge remote branch 'origin/master'
a5f332c: fixed PY-5852 Install Django when creating a new Django project
d3923bb: Merge remote-tracking branch 'origin/master'
a6ee9ad: Move util-rt utility classes to a more familiar place
d66cbfea: cleanup
b4b867f: util-rt module introduced
080639b: Merge remote-tracking branch 'origin/master'
39e4b73: Improved detection of for-loop and context manager variable types
5a6e5bb: Fixed stdlib type database for 'str' and 'bytes' in Python 3 (PY-5901)
602055e: Cleanup
1136fbe: Separate error messages for missing 'pip' and 'distribute' (PY-5931)
e0ae1ff: dead code
46b0888: inline unused overloads of treeCrawlUp()
96f1bca: remove parameter list from name definer tokens too
3242b5a: PyParameterList is no longer a NameDefiner
c33a6da: dead code
9871bc0: use findParameterByName()
67c86ba: don't create stubs for parameter lists of lambdas
abacb96: explicit api to find parameter by name
98c83da: diagnostics for EA-34808 - assert: PyStringLiteralLexer.getTokenEnd
de30668: Merge remote-tracking branch 'origin/master'
2c8cd60: EA-31486 - assert: PyParameterInfoHandler.updateUI
924eea6: diagnostics for EA-30244 - assert: StatementParsing.parseSimpleStatement
8a5e7db: Merge remote-tracking branch 'origin/master'
14a7c8b: EA-34432 - NPE: QualifiedNameResolver.fromElement
fc33caa: EA-34823 - RE: PyBaseElementImpl.childToPsiNotNull
a53953d: Merge remote branch 'origin/master'
3f80fd5: Merge branch 'python-fixes'
52b205b: Fixed bug in type checker for functions that raise exceptions instead of returning a value (PY-5873)
9e5720f: Fixed false positive in property definition inspection with 'raise' and docstring (PY-5048)
3647495: Allow lambdas in classic property declarations (PY-5916)
b6a1535: Fixed method decorators of 'fromhex' and 'maketrans' methods of 'bytes' and 'bytearray' (PY-5922)
5f684c0: Specified types for 'struct' stdlib module (PY-5923)
ad52d47: Merge remote branch 'origin/master'
19934c6: Special-cased 'setuptools' package requirement (PY-5924)
9fc8089: Added 'abc' to stdlib packages list (PY-5935)
13766c9: Set instead of List as stdlib packages collection
c3863c2: Changed problem highlighting of unspecified package requirement to weak warning
72cf7f6: Single quotes for 'requires' argument in setup.py by default
1d3bbd7: @NotNull
a0aaec5: Added Ignore requirements quickfix for package requirements inspection (PY-5988)
1407dad: Merge remote-tracking branch 'origin/master'
518fa73: Fixed IntellijIdeaRulezzz in completion of empty identifier names (PY-5821, PY-6037)
4b56108: Merge remote branch 'origin/master'
736eb3a: Merge remote-tracking branch 'origin/master'
843e823: Merge remote-tracking branch 'origin/master'
fb207b8: added detection of virtualEnv for the open directory action
ab8f1bb: added test for join lines PY-6043
14c87dd: fixed PY-6013 False positive for "Statement has no effect" in import statements in Mako templates
bc10434: Merge branch 'python-fixes'
17cb8bc: Fixed IntellijIdeaRulezzz in completion (PY-5821, PY-6037)
8199315: Step into: add parenthesis (PY-4542)
ce9b63f: Fixed parsing editable package requirements (PY-5929)
07fd2da: fixed PY-5738 False positive on string formatting from class.__dict__
17c7459: fixed PY-5953 Duplicate menuitems in context menu on Ctrl-Shift-R on folder
fde1467: Merge remote-tracking branch 'origin/master'
839e196: fixed PY-5967 manage changepassword broken according to new helpers path (helpers/pycharm)
ccbe225: fixed PY-6018 Problem running py.test (ImportError: No module named teamcity) (according for new python helpers path)
0fc0bff: Merge remote-tracking branch 'origin/master'
483f5e1: Merge remote branch 'origin/master'
7079cae: Drop no longer needed path APIs
9bdd509: fixed PY-6015 Wrong import resolution in Mako
78d3999: fixed EA-32644
c1600a5: fixed EA-34265 - NPE: PyDictKeyNamesCompletionContributor.createResult
6f1a3d1: fixed EA-34478
334e11c: fixed EA-34532
bceafe0: fixed EA-34551 PyPIPackageUtil.parsePyPIList already got service
cd7a737: Merge remote branch 'origin/master'
b4a4127: fixed PY-5525 Move statement: loses code on moving one-line comment up to function with nested indent
b8a11d4: fixed PY-3818 Current parameter blinks for starred arguments
c8b0fb5: fixed PY-5546 Too many quotes when replacing text by dict key insertion handler.
1583af0: add module wizard icon for sdk chooser
36190b5: added weight for dict keys in completion
5f529ca: Merge remote branch 'origin/master'
0334bb9: fixed PY-5588 Compatibility inspection: highlight using keyword argument in the list of base classes as errors under py2
143e2ec: Merge remote-tracking branch 'origin/master'
0d8205a: fixed PY-5732 code compatibility false positive ("python version does not have email.Utils"); lower-case problem fixed with patch
5f1466d: Some UI fixes
d853291: fixed mako tests
881a05e: fixed PY-5990 Code compatibility inspection false positive when using relative imports
81a4ce2: fixed PY-5960 Mako auto-format incorrectly removes \ characters
453a07d: PY-3902
85a7011: Merge remote-tracking branch 'origin/master'
be69445: Merge remote-tracking branch 'origin/master'
6b4d0ec: Merge remote-tracking branch 'origin/master'
5262c5a: Exception breakpoints UI changed (PY-5244).
eee55c5: even more fine-grained checks for "import resolves to its containing file" (PY-5945)
382f78c: Run Manage.py Task correct capitalization (PY-5805).
f192e49: Pass env from settings to console (PY-5890).
bfc1aa8: Allow multiple mapping settings in Remote debug configuration (PY-4778).
f00c0ad: provide type, and therefore completion, for function decorated with @property (PY-5951)
f37fa4c: Recursive killing for remote process.
8a8fbfb: Merge remote branch 'origin/master'
8573bbd: Changed logging level for invalid Python interpreters
1d6f483: Merge remote-tracking branch 'origin/master'
8b7f2d8: compilation fix
d4d7933: Merge remote branch 'origin/master'
c4064c9: Merge remote-tracking branch 'origin/master'
64636f8: Added error logging for invalid Python interpreters
1be235d: Merge branch 'python-fixes'
0e9b39d: Fixed handling of invalid Python SDKs (PY-5855, EA-32393)
e101f87: Merge remote-tracking branch 'origin/master'
3b8322f: test fix
7410c1e: Merge remote-tracking branch 'origin/master'
de7f22c: getter/setter/deleter icons in structure view also for classic style properties (PY-5949)
67488be: one-pass property caching in PyClassImpl
8775255: split tests for property access inspection
632b857: fix bugs in stub-based property PSI, refactor and code cleanup
9fd873a: Removed pre-1.0 SDK settings compatibility code that breaks virtualenv interpreters (PY-5855)
3c853e4: Merge remote branch 'origin/master'
4db2581: Merge branch 'python-fixes'
12cfe1a: Fixed double pydevd appending to command line.
c957dff: fixed PY-5930 Rename quickfix "Statement can be replaced with function call"
62f34c5: Merge remote-tracking branch 'origin/master'
b9a49c9: PYTHONUNBUFFERED should also be set in remote env.
0eb783c: support __metaclass__ attribute on new-style classes (PY-1392)
f84f5f8: some refactoring; delete duplicate code for collecting completion variants in "from ... import <caret>" context
cd5492b: heuristic completion for members of qualified expressions (PY-5629)
b1d9b1d: Small refactoring.
da1ac53: don't shadow built-in names in introduce variable (PY-5626)
1dddb53: Extra separator removed.
56a9440: Interpreter options are used now in django and appengine.
83ae3d1: show auto-import only as quickfix, not as balloon, if same name is available in class context (PY-5597)
66993b7: don't show auto-import fix if we're trying to reference a variable which is defined below in the same scope (PY-5667)
eeb881f: don't insert any imports before from __future__ import (PY-5795)
3539404: correctly handle tuples as function arguments when inlining (PY-5832)
98860a2: don't auto-import module references in superclass context (PY-5806)
1f17231: auto-import doesn't hide parameter info (PY-5764)
b819179: completion of setup.py keyword arguments (PY-5706)
ec8c30c: Merge remote branch 'origin/master'
dc6f9cb: Merge remote-tracking branch 'origin/master'
ce0f2be: Merge remote-tracking branch 'origin/master'
c764af0: disable "run setup.py" if no python interpreter configured (PY-5814)
bd043d2: delete old setup.py task runner UI
1897f5c: no need to look at both source roots and content roots at the same time when collecting module list (PY-5846)
96884f7: source roots containing __init__.py should still be displayed as source roots
db4f46e: Merge remote-tracking branch 'origin/master'
452d521: Fixed path conversion to work under Win.
72f5c38: Remove dead tails.
4d5fe52: Name 'other' for second arguments of datetime operators in Python 3 (PY-5886)
7b40633: Merge remote-tracking branch 'origin/master'
3b2d964: Fixed compilation.
7fe02f8: Merge remote branch 'origin/master'
643fe2c: __next__ for Iterator in Python 3 (PY-5661)
3adbc5a: - Fixed test run configurations with remote interpreter (PY-5903) - Usage of deployment mappings when running remote interpreter (PY-5685).
cd4bfbd: fixed PY-5898 Exception when closing a project very soon after opening
a39e03f: Merge remote branch 'origin/master'
6cfcbda: NPE.
6796233: Merge remote-tracking branch 'origin/master'
3cc756a: Path mappings UI fix.
edc825c: Merge remote branch 'origin/master'
558a593: Merge remote-tracking branch 'origin/master'
c70032f: Don't throw error to log.
3bebf18: make sure the Setup Script template is visible in File Templates configurable
d49ec6b: IAE (PY-5984)
3557ccb: Merge remote-tracking branch 'origin/master'
85979d4: Merge remote-tracking branch 'origin/master'
cab50a7: Merge remote-tracking branch 'origin/master'
3f514e7: Multiprocess debugging works with remote interpreters. Port obtaining reworked.
1a7e5f4: PyCharm 2.1 branding and licensing
8e8bda9: Fixed CCE.
3a00baf: Merge remote branch 'origin/master'
75342f0: added python interpreter selection to the "Create new project"
b94a467: Borders. Deprecated methods to undeprecated.
ef2c242: notnull
bdbe926: NotNull
cd18742: Merge remote-tracking branch 'origin/master'
bbe83cf: RI: UI reworked; debugging depends on path mappings, not remote debug run configuration.
9c2e0ec: Merge branch 'python-fixes'
cb1fdc9: Fixed false positive in unused locals when variable is used inside class (PY-5755)
36870c1: Cleanup
ec44815: Converted unused local variable test to highlighting test
521e2a5: Type of yield expression is unknown (PY-5831)
3f3f948: Return type references should not contain generic types (PY-5661)
c044f5e: Remote interpreter debugging.
5dbe692: Resolve to canonical datetime module instead of _datetime (PY-5636)
58f90c6: Fixed skeleton for thread.start_new() (PY-5575)
1c4e77a: Fixed NPE in ScopeUtil.getReadWriteElements() (EA-33612)
a9f067c: Merge remote-tracking branch 'origin/master'
76e0fa8: Merge remote-tracking branch 'origin/master'
3262aa4: Merge branch 'python-fixes'
29518d4: IDEA-81740 (detect 64-bit JRE in a Windows startup script)
e832f8d: No sleep.
8d64836: Console works with remote interpreter (PY-5716).
b33bea4: Copy all files from helpers.
ddc76b5: Merge remote-tracking branch 'origin/master'
a93db8d: there is sense in separating process from process handler creation for Python
ed16e91: Fixed tests for PY-5837
835deeb: Merge branch 'python-fixes'
41859e2: Don't suggest to add Python module files inside project to requirements (PY-5837)
2546dd9: Package requirements now supports 'requires' and 'setup_requires' arguments of setup() (PY-5826)
196b837: Merge remote branch 'origin/master'
35629fe: Cleanup
80b1b31: Try PyPy and Jython executables first, fall back to CPython
ba4d5ab: Merge remote-tracking branch 'origin/master'
43592cc: Merge remote branch 'origin/master'
6ed4023: Merge remote branch 'origin/master'
2524bdc: fixed PY-5670 Remove virtualenvs from the list of base interpreters for creating a new virtualenv
8d86f49: fixed PY-5772 Packaging: add system-site-packages check-box to create virtualenv dialog
9d32a0d: extracted PY-5868 Inspection and autofix for missing encoding declaration in file with non-ASCII characters
672cb99: fixed PY-5656 Show packages installed in user's site-packages directory
ecc619e: Multiple versions for Python package in requirements spec (PY-5843)
6a29039: Check comparison specifiers for Python package requirements (PY-5843)
7c054ea: Merge branch 'master' into python-fixes
95c239b: Merge remote branch 'origin/master'
2d83a09: rename all components of svn4idea as they were called before in the old plugin; all products to use new plugin instead of old plugin references
392eaff: Plain-text version of Python stdlib modules list instead of XML
2798fb1: Cleanup
4bc4192: Fixed IAE at PyImportElementImpl.iterateNames() (PY-5851)
20d953c: Merge remote branch 'origin/master'
149c8ae: fixed PY-5767 Python package cache should be stored per-SDK in application, not in project
09ff3a8: RI: fixed install packages from win (PY-5707).
2f14e35: fixed part of PY-5811 Exception doesn't inherit from base exception class: false positive for classes inherited from Exception
7ca24ed: Create helpers directory on remote host (PY-5679).
3107995: Merge remote-tracking branch 'origin/master'
4941688: Ignore settings and quickfix for Python package requirements inspection (PY-5671)
28daacc: @NotNull
744f635: EA-33852 - NPE: PyQualifiedName.fromDottedString
d55ec52: Merge remote branch 'origin/master'
793c6e2: Fixed threads state in multiprocess mode.
c84a738: Debugging for multiprocessing module (PY-5766).
8980525: Merge remote branch 'origin/master'
5001cc1: Correct virtual name under Win (PY-5729).
ca64868: Merge remote-tracking branch 'origin/master'
79bb39a: Merge branch 'packaging'
00d1d45: To PyCharm dictionary
737866e: Added result parameter to finished() method of installation listener interface
740dce6: Merge remote branch 'origin/master'
55767ab: Fixed suspend in remote debug (PY-5790)
3a99c65: Merge remote branch 'origin/master'
96881bf: fixed PY-5742 Virtualenv: remember directory where virtualenv is created
bb8d96f: Fixed remote debug connection reloading and process termination.
34f02fe: added mako file type factory
ad41d5b: Merge branch 'packaging'
04fc682: Quickfix for adding imported package to requirements.txt or setup.py
3a1527f: Quickfix for installing an unresolved imported Python module
61f963c: Show package name in progress if a single package is being installed
b0d13ef: Fixed PIEAE in PyClassImpl and PyBuiltinCache
2c85d34: fixed PY-5789 Encoding declaration hint missed for comments
56be8c5: filter out unnecessary options from the dialog
8e79be3: inline setup.py action group
0890437: expand/collapse options in setup.py dialog
a314418: chooser action for running setup.py tasks
5e18d78: pull SetupTask to top-level class
daf479a: task introspection fixes
1d85f8b: Check if imported packages are listed in requirements.txt/setup.py (PY-5671)
d5d8080: Moved common methods related to packaging to PyPackageUtil
82c9ff5: Cleanup
5ef4b0c: Merge remote-tracking branch 'origin/master'
4a7f539: initial, scary version of autogenerated UI for setuptools commands
97cc082: autogenerated dialogs for running setup.py tasks
d562bd4: let only those who need CommandLineArgumentsProvider use it
3f8cd56: there seems to be no value in separating process creation from OsProcessHandler construction
b18dcd4: load list of setup.py tasks via introspection
5a49129: Merge branch 'packaging'
3faa5d6: Progress and re-index/skeletons update for install/upgrade/uninstall Python packages
8a9f28d: Merge remote-tracking branch 'origin/master'
6b65ec1: dummy hardcoded action to run setup.py task
16f6125: extract PythonTask class for rerunnable tasks which display their output in run toolwindow
e821138: coverage: add top level action to hide coverage; hide if disabled; cleanup (IDEA-75656)
83d9587: if needed, generate package_dir based on source root configuration
4c173a6: don't return duplicates from PyUtil.getSourceRoots() if a content root is also marked as source root
4e01a35: fill 'packages' keyword argument when creating setup.py
b5f3c01: fix case (2)
12911d7: fix case (1)
bc9e4c2: Merge remote-tracking branch 'origin/master'
6405683: Merge branch 'master' into packaging
dce8473: Refactored code related to UI for installation process
e08932c: Cleanup
f349202: Depending on product show key language color settings at the top
197166b: moved packaging logic from python-ide module
9fb0764: Typos fixed (PY-5717).
c070c28: Refactored PyPackageManager.install()
87292c5: Typo
8fdd38f: Merge remote branch 'origin/master'
14c29ac: Merge remote-tracking branch 'origin/master'
448ea3a: action to create setup.py (PY-5709)
5c4470b: Merge branch 'packaging'
b37b2aa: Correctly handle installation progress and results in the package requirements quickfix
713451e: Merge remote branch 'origin/master'
cbc8714: added PyPI updatable cache
0ab24ad: Merge branch 'packaging'
b2129d5: Initial version of install requirements quickfix (PY-5671)
bfc9ed1: added generated list of all std python modules
32fd7e3: Added tests for package requirements inspection
275fe13: Merge remote branch 'origin/master'
86038a6: fixed PY-5681 Use ~ instead of user home directory in "Select Interpreter Path"
4ce9344: Merge remote-tracking branch 'origin/master'
6dd7ecc: RI: fixed exception on update (PY-5690).
511b05f: RI: fixed flavor detection (PY-5687).
4b16fa5: PyDebugger: fixed breakpoint path normalization (PY-5702).
c1c0683: Cleanup.
da97536: Change debug params according to param names (PY-5701).
4c6a8cd: Equals for sdk data.
7ed5af1: Detect default sdk as remote (PY-5694).
cfa5fc1: Initial version of package requirements inspection (PY-5671)
7a4826f: Merge remote-tracking branch 'origin/master'
92cdc31: Refactored.
9983d10: Exception fixed (PY-5690).
8430c67: RI: editing of interpreter.
11041d2: Merge remote branch 'origin/master'
275fb3b: Merge remote-tracking branch 'origin/master'
5ac6f97: per-flavor SDK icons (visuals to be improved)
15e45d4: Merge remote-tracking branch 'origin/master'
ca3ef2d: Remote Debug: path mapping as tree.
42fcefa: Cleanup
2ede4db: 2012
67ce818: Merge remote branch 'origin/master'
a1f15e6: use of WORKON_HOME
d684380: added search for venv in .virtualenvs
ece482a: Merge remote branch 'origin/master'
60f8e4e: added UI for creating a virtual environment
fbd19f5: Merge remote branch 'origin/master'
f5a467f: correct fix for compilation fix
32e07ac: Add python-specific plugins to layout.
13d11ef: RI: show dialog if running was unsuccessful
1a16449: RI: fixed flavor detection for Windows.
97bd25d: RI: bundle remote interpreters
4c461bf: Remove unused class.
9f462ec: RI: use path mappings for remote script name.
0d673d6: fix compilation
acb1d66: Merge remote-tracking branch 'origin/master'
969c851: RI: remote debug tuned.
6adf2a5: Fixed resolving '/' to __truediv__ when available (PY-5499, PY-4460)
0cfcfe6: Fixed NPE in PyFunctionImpl
81ddd03: Cleanup
13a800e: @Nullable annotations
44f075c: Fixed NPE in PyTypeParser
f4e31bf: Merge branch 'python-fixes'
240f28f: Additional diagnostics for determining Python's sys.path errors (EA-32393)
8a8f9ea: Merge remote-tracking branch 'origin/master'
97119b3: Fixed PIEAE in PyTargetExpressionImpl.findAssignedValue (EA-32560)
524695f: Fixed PIEAE in PsiElementBase.getContainingFile (EA-33461)
483ca46: Fixed PIEAE in PsiElementBase.getContainingFile (EA-32594)
c68b291: Custom folding for Ruby (RUBY-7875) and API changes
0ce203c: Python custom folding enabled
8bcc6a3: RI: validation of paths.
eb00883: Fixed resolving implicit submodules of imported modules defined in the current file (PY-5621)
a881f0a: RI: exception moved.
3687087: Remote interpreter exception.
6d13aca: Merge remote-tracking branch 'origin/master'
0a0168a: Remove unnecessary StringBuilder usages
bef9e9f: RI: Correct password saving.
2843b80: Merge remote-tracking branch 'origin/master'
47fce8e: Remote interpreters: helpers copying, debugging, packaging.
e7f3792: Fixed completion of classobj attributes for old-style classes (PY-5486)
c88fa94: added installation and new view for packages
8fb3e04: Merge branch 'python-fixes'
152cead: Resolve package attributes first, then submodules contained in import statements (PY-3626, PY-3597, PY-5589)
9626811: EA-33113 - CCE: PyCallExpressionImpl.getCallee
50ce261: EA-33227 - NPE: PyDecoratorListImpl.findDecorator
59eb0f2: EA-33281 - NFE: PyNumericLiteralExpressionImpl.getBigIntegerValue
90e42d5: Cleaned up.
0b3d68f: Merge remote-tracking branch 'origin/master'
7e3c894: don't highlight named groups in Java 1.7 regular expressions as errors (IDEA-80456)
999ca3d: Fixed AE in PyKeywordArgumentReference (PY-5586)
0fb5d32: Merge branch 'python-fixes'
e0ab774: Merge branch 'new-style-resolve'
0abd6af: Cache found exported names in PyFileImpl
054a750: Scope-based instead of tree-based crawl up
3ced078: Improved performance of filtering Python inspections
0c1b5c2: NPE
9313070: Flex build configurations in IDEA project setup UI
316983e: Revert "Fixed names of class references in exception inheritance inspection"
1a44b0e: Argument equals default value inspection is disabled by default
34c7501: Stub-only type eval context by default
2a7f9b9: Fixed false positive in star-imported unbound toplevel names (PY-5592)
7650fce: Merge remote-tracking branch 'origin/master'
153f83e: Replace PsiManager.getInstance(psiElement.getProject()) with psiElement.getManager() and add an inspection for this case.
3daa365: Removed unnecessary expensive PSI traversal in PyImportedModule
88f17df: Moved to env.
89a2af1: Merge remote-tracking branch 'origin/master'
aa51275: Fixed resolve when qname starts with django project name (PY-5568).
ea54d47: Packaging test fixed.
d43ff52: split tests for missing constructor inspection (2)
fdd88fe: Fixed return type of 'math.fsum' (PY-5488)
d9c297a: Unified idea.properties
0620c6c: split tests for missing constructor inspection (1)
a4f7cec: Merge remote-tracking branch 'origin/master'
dbe42ea: Fixed start_new_thread patching (PY-5513, PY-3578).
7a1acb5: Merge branch 'new-style-resolve'
ff43a4c: Fixed input exception in debug console (PY-5170).
cc5a3b6: Cleanup
029b06f: Fixed resolve of target expression to previous targets
af6bb0d: Almost all Scope methods now use caches of visited scope variables instead of CFG and DFA
879a997: Updated tests to match new resolve rules
c5d2f50: Refactored new resolve to be more compatible with old interfaces
a3b2408: Unified startup classpath
5d5115b: Fixed new resolve for implicit subpackage imports in 'from .. import' statements
683f205: Removed debug logging.
818db86: Fixed new resolve for augmented assignments
84b13d8: Removed old resolve
0310fa9: Fall back to scope-level resolve for Cython elements if CFG-based resolve has failed
a7373f6: Removed unnecessary checks
ec6da5e: Fixed console input processing.
9d68f63: Resolve to globals and nonlocals in outer scopes
06e46dc: Using ConsoleFilterVisitor to prevent PyStatementEffectInspection running in console.
8a4a7b1: Don't trace ipython history saver thread and threads newly created by console command (PY-5490).
1385ff5: Merge remote-tracking branch 'origin/master'
7d3df01: Unified build scripts
ba1b32b: Cleanup and unify idea.properties
f61a6ed: Moved superclasses list scope owner test to ScopeUtil
9e17d10: Short-circuit resolve for target expressions
c3b7896: Fixed resolve for Cython named elements
57e816c: Do not resolve to variables defined in inner comprehensions
39baa12: Added some Cython tests into all Python tests suite
288f8a8: Resolve via collected declarations first, then via control flow
21e0183: Fixed broken setting of console interpreter (PY-5529).
dc6e033: Do not run for python console (PY-5524).
b45543e: Merge remote branch 'origin/master'
51cd0a2: Fixed remote sdk interpreter path load.
dd5ce70: Merge remote branch 'origin/master'
8bbcbe1: completion inside __all__ (PY-5502)
437d9af: don't iterate through names in __all__ which are not valid Python identifiers (PY-5503)
88a8f35: distinct icons for property getter, setter and deleter (PY-5145)
480fd07: Merge remote-tracking branch 'origin/master'
625a6d8: Fixed exception.
d7156dd: Cleanup.
51b9b92: IronPython does not support it (trace is always None).
c608471: Merge remote branch 'origin/master'
6a29a3b: Merge remote branch 'origin/master'
0a64b1b: pack *.dic in Python plugin distribution
b50b3be: Merge remote-tracking branch 'origin/master'
2d5740f: correctly handle keyword parameters when building index for foreign key fields (PY-5518)
cea0553: Merge remote-tracking branch 'origin/master'
4c9a14d: Fixed NPE.
47d2667: correctly resolve class ref pointing to built-in class
a33f57c: Merge remote branch 'origin/master'
77b2550: add trove4j as in bat/sh to fix launching
f57e879: added upgrade action. Added options for installing packages
3c485c3: Merge remote-tracking branch 'origin/master'
386a15c: Console: exception fixed.
88f0c3a: Remote interpreter initial.
b2dc0d0: Cleanup
6f45474: added checkbox for package version. Fixed filtering and PackageModel (use CollectionListModel)
9b53f52: added sorting of sdks packages
33064b2: Merge remote branch 'origin/master'
176f45e: changed notifications type for python package install/uninstall. Added options in UI
4ede75f: Merge remote branch 'origin/master'
9c9f3d1: optional treeCrawlUp-less resolve for Python
039ce8f: accepting type assertions as definitions in PyDefUseUtil is optional
23d4962: cleanup
482d66a: visit type annotations in control flow builder
e57db64: to make sure CFG for bad code is built correctly, visit raw targets of assignment
85257c0: include reads of decorators in control flow graph
2c36d0f: correctly return import element as definition of a name; getLatestDefs() accepts name instead of element as search parameter
8dc99be: correct order of assertEquals() parameters
ef6c586: include read of lambda default parameter value in control flow graph
3f94da5: cleanup
43b6357: include read of function default parameter value in control flow graph
60d7a4e: include read of superclass expression in control flow graph
05e75cf: dead code
d44a73d: Install Python packages via package requirements
1b2b101: Fixed merge conflicts
b5dea16: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
82a79a4: added install/uninstall packages for python
8c7e195: Merge remote branch 'origin/master'
957a944: Merge branch 'packaging'
e392063: Experimental install of Python packages, small refactoring
cb46f4c: Merge remote branch 'origin/master'
71f706d: Merge branch 'packaging'
b831a4d: Removed PyPackagingTest from envtests due to problems with testing environment
6c59487: Merge remote branch 'origin/master'
473d49f: Merge branch 'packaging'
1a2e694: Delete virtualenvs using Python's rmtree() for symlinks on Windows
f2c71f4: Merge remote branch 'origin/master'
1bcf705: Resolve .clear() and delete() for foreign key object set (PY-2820)
308aea9: Merge branch 'packaging'
9af2ace: Experimental uninstall of Python packages
71fb022: Merge remote branch 'origin/master'
c6ede05: Merge remote-tracking branch 'origin/master'
bea854d: Merge remote branch 'origin/master'
c22f621: Merge remote-tracking branch 'origin/master'
df841b3: Merge remote branch 'origin/master'
572c48b: support keyword arguments in foreign key calls (PY-5309)
c816836: Merge branch 'packaging'
f835e9e: Return path to interpreter after creating virtualenv
6bb3e0a: Fixed searching for virtualenv for Windows
8b85f28: Fixed small bugs in packaging tools
ce52382: Initial support for creating and deleting Python virtualenvs
4ad4328: Fixed property serialization.
8fc3ca5: Merge remote-tracking branch 'origin/master'
7b2a3ba: Fixed exception breakpoints handling for Jython (PY-5468).
1f93de0: complete keyword arguments also for implicitly resolved function calls (PY-5126)
f853ae1: don't highlight members of class as unresolved if a superclass of the class is also unresolved (PY-5427)
37ab584: PyPackagingTest converted to envtests
2c106ab: Trim trailing spaces in console executed lines.
68e4abc: correct Find Usages for wrapped method calls (PY-5458)
c530e43: inplace: decouple introduces from rename (I)
32b90dc: Merge remote-tracking branch 'origin/master'
0bf7b44: exclude known built-in methods from super method completion (PY-5494)
3a206f3: add __get__, __set__ and __delete__ to built-in methods (PY-5304)
aa3ad94: add __enter__ and __exit__ to built-in methods (PY-5303)
21a42f3: use correct isDefinition value when gathering completion variants from super class (PY-5311)
48001c2: space around 'in' keyword (PY-5379)
d1c0964: correctly handle qualified names in Python goto symbol (PY-5441)
291a586: Merge remote branch 'origin/master'
39535f4: Merge branch 'packaging'
ce275a8: Remove print statement.
d165a96: Initial classes for Python package management
e469752: Merge remote branch 'origin/master'
a6679b3: Added type database for StringIO and cStringIO (PY-5426)
93b9c06: Fixed NPE (PY-5474)
70d8696: changed options name in py.test run configuration
6a9ad58: fixed PY-5439 Renaming a script file does not rename associated run configuration
d0e52fe8: fixed EA-32963 - SIOOBE: PyConvertTripleQuotedStringIntention.invoke
bd6aafe: Merge remote branch 'origin/master'
09dc37c: 2012
5a4c9bd: English
30c1805: Cleanup.
939d3d8: Fixed exception classes filtering for python 3.
229ad3d: Don't trace debugger internals in Python 3.
64f6b2c: Added new exception breakpoint activation case (PY-5244). Fixed on-terminate handling.
3a49bed: Merge remote-tracking branch 'origin/master'
b8ef994: Fixed ipython 0.12 compatibility (PY-5403).
80347c3: detect Ruby file type by hashbang line
e961117: Merge remote-tracking branch 'origin/master'
070a371: Print correct version of IPython.
540d3fc: Don't show strange exceptions on shutdown (PY-5417).
5a84a0c: Configure Interpreters link added to console settings (PY-5404).
681432b: JS-debugger for AppEngine (PY-5424).
d41c16b: Merge remote-tracking branch 'origin/master'
83d98d5: Fixed SOE in PyReturnTypeReference
f226a4b: Fixed names of class references in exception inheritance inspection
4baa24a: fixed PY-5271 Move statement: one-line comment isn't indented on moving into function
b30697c: fixed EA-32847 - NPE: BaseQuoteHandler.isClosingQuote
63e95a8: Fixed AssertionError
dd80fe5: Merge branch 'python-fixes'
2548823: Added types database for 'math' stdlib module
f162c6f: Fixed update of imports in move refactoring (PY-5331)
33acc7d: Added method isExclusive to SurroundDescriptor
42f9ecd: Fixed bug in inferring type of enumerate().next() (PY-4702)
6d6acc6: Fixed parameter analysis of Python binary operators
e6b2522: Merge remote branch 'origin/master'
5d099a0: fixed PY-5365 Incorrect "Expression can be simplified" inspection for "== 0" expressions.
c9e35d4: fixed PY-5412 Intention "Convert between single-quoted and double-quoted strings" is missing on unicode strings
ae616c4a: Merge remote-tracking branch 'origin/master'
fe42d90: Merge branch 'generic-types'
39fc0fc: Merge remote branch 'origin/master'
b846d41: fixed PY-5389 Docstring autocompletion does not work if more than one tag specified for same function argument
6d5a946: Added more tests and fixed a couple of bugs in Python generics
cbb4961: fixed PY-5419 py.test INTERNAL ERROR if launch with --doctest-modules parameter
0260994: use SystemProperties.getUserHome() where possible
048adcf: Weak warnings for Python generic types
a193c2c: Some UI fixes
c30c61b: Merge remote branch 'origin/master'
de503e4: fixed EA-32650 - SIOOBE: DictCreationQuickFix.applyFix
f2c12eb: JGoodies Looks 2.4.2 (IDEA-79237)
7b16af8: Fixed type evaluation for right Python operators, including 'str.__rmul__'
04d26ea: Generic and overloaded types for Python operators
f74a5e1: Merge remote-tracking branch 'origin/master'
f7fac3f: Merge remote branch 'origin/master'
06976a4: fixed EA-31332 - SIOOBE: DocstringQuickFix.createMissingReplacement
6416bbb: Cleaned up python path handling.
3a680ef: fixed EA-32245 - CCE: PyDictLiteralFormToConstructorIntention.replaceDictLiteral
71bea78: Initial support for generic types in Python
df6ba12: create correct type of PyImportReference in Mako
3ec17f7: Merge remote branch 'origin/master'
04067da: fixed EA-32535 - SIOOBE: TextRange.substring
3830b4f: fixed EA-32648 - IAE: ASTDelegatePsiElement.addBefore
a15067c: Merge remote branch 'origin/master'
6b238c0: Merge remote-tracking branch 'origin/master'
154dde8: added test for completion after has_attr
6504acf: Merge remote branch 'origin/master'
845fbff: fixed PY-5351 "Add encoding declaration" must be shown in case of unicode usage in arbitrary strings. Not only doc comments.
6cd4766: using specific API when possible
5df1dbf: split PyImportReference into three distinct references depending on location; separate resolve logic for each
e31dfeb: put all Python references in a common package
53ea423: don't resolve import elements in PyFromImportStatementImpl.processDeclarations, there's code later that knows how to resolve them (PY-5295)
9dcbbfb: move processing of __all__ from iterateNames() to processDeclarations() (PY-5346)
adf88a4: remove findExportedName() from the public API of PyFile
d20564a: another bit of duplicated logic
0843ebe: replacing usages of findExportedName() with getElementNamed() when appropriate
7a2c500: QualifiedNameResolver resolves to PsiFileSystemItems, not arbitrary PsiElements
ac22a73: getElementNamed() never looks in builtins
c307234: don't look in builtins when resolving names in __init__.py of a package
1c5fe81: don't look in builtins twice
d9ce55d: optimize imports
a28fd5a: delete unused implementations of processDeclarations()
1b844b6: delete unused implementations of processDeclarations()
a576889: use iterateNames() for gathering completion variants of a module type
0d7a641: lazily calculate control flow in ScopeImpl
1224088: test for 'attributes assigned nearby' logic in completion
8b6d7b5: "resolve to attribute assigned nearby" logic should also work when qualifier type is unknown; provide test
86d20ed: Clean-up.
aa29341: no point in doing the exact same thing twice
3daae95: remove unnecessary usage of DataManager.getDataContext()
0a19af1: basic test for 'from ... import to import' intention
178130c: ResolveImportUtil.resolveFromImportStatementSource() -> PyFromImportStatement.resolveImportSource()
40cf4c7: don't look up SDK for directory on every step of resolveInDirectory(), use the information we already have
22afd28: Added warning if TEMPLATE_DEBUG set to False.
099e229: fixed PY-5192 Move Statement: breaks code in case moving down to nested try statement
81a21be: fixed test data
a89dd1b: Merge remote branch 'origin/master'
7df9c94: Ignore commented lines in .vmoptions
3349c6c: Merge remote-tracking branch 'origin/master'
9adab52: an action to copy PSI to clipboard from PsiViewer, removed similar action in python
3c05f4a: Merge remote-tracking branch 'origin/master'
035ec1f: Fixed tests.
d376e35: kill some code duplication
eceb8e0: rename ImportResolver to QualifiedNameResolver (to match what it actually does); delete some more code which is thankfully not used anymore
115350a: nicer API for ImportResolver
8a1f266: use standard ImportResolver for checking django/coverage presence, avoid code duplication
0883915: push relative resolve logic into ImportResolver
cf193b7: crlf to messages.
e37c497: some more code which is not really needed
614558e: Merge remote branch 'origin/master'
65bea98: fix accidental usage of old method
e5312fe: a smarter ImportResolver; use it directly when convenient
719be97: Fixed console indentation parse error (PY-5333, PY-4493)
47dc6db: Better error messages.
e81efff: extract ImportResolver class
b11b58d: move code related to accepting RootVisitors from ResolveImportUtil to a separate class
421966f: shuffling some code around
8e786ac: no point in having Impl as part of class name when there is no interface class
a6cc4be: getImportReference() -> getImportReferenceExpression()
270e99c: cleanup
ba9c13d: cleanup
afb47cc: cleanup
2d78fea: classmethod/staticmethod refactoring: set -> single value, Flag -> Modifier, static method in PyUtil -> instance method on PyFunction
3acfd90: refactor PyUnresolvedReferencesInspection.registerUnresolvedReferenceProblem() to a more manageable size
21d063d: honor fileOnly flag correctly when resolving import references (PY-1896)
a1e5d2e: fixed PY-5200 Move statement: ineffective moving of nested if block
5c866de: disable native mac clipboard in PyCharm
3741753: added last statement break to the python lexing
1e1eca4: fixed tuple assignment with numeric literal
81139a0: generate python spellchecker dict -> internal
2e90d60: Merge branch 'python-fixes'
70eea82: Changed priority of some inspections from warning to weak warning
9916946: Fixed test of call-by-class Python inspection
f862fce: Merge branch 'python-fixes'
5f1e612: Fixed SOE in PythonDocumentationProvider (EA-32588)
86811f0: Cleanup
fd72745: fixed tuple assignment
7245e4f: fixed PY-5299 Insert docstring: respect class indentation
98c005a: Merge remote-tracking branch 'origin/master'
cb37e2a: EA-32578 - NPE: SkeletonErrorsDialog.getHTMLColor
1cbf057: EA-32562 - NPE: CreateClassQuickFix.applyFix
4540379: EA-32548 - PIEAE: PsiElementBase.getContainingFile
91b9bdf: Fixed NPE in PySkeletonRefresher (EA-32558)
94c1dc4: Merge remote branch 'origin/master'
bef2d1e: Added interpreter model listening (PY-5173).
b26fa2c: Zipfile import moved to usage.
9107687: Merge remote-tracking branch 'origin/master'
a22eae7: Merge remote branch 'origin/master'
bbddb6b: Merge branch 'python-fixes'
357381d: Bytes and unicode types for files opened in binary and text modes
342975c: Type of iterator variable via 'iterator.next()'
a2a9971: Fixed console interpreter and module selection (PY-5273).
9794e16: fixed PY-5294 "Replace with str.format method call": do not add unnecessary parenthesis with one argument
40c3ba4: NPE (EA-29947).
60d2e38: Merge remote branch 'origin/master'
8a4f219: Merge remote-tracking branch 'origin/master'
d2f06c4: Fixed import.
1ed6af9: canonical import for defaultdict (PY-5287)
d0c7b13: in-place refactorings work correctly when "replace all occurrences" is not selected (PY-5292)
a4ce732: NPE (EA-31914).
6512d30: Merge remote branch 'origin/master'
0363a24: fixed PY-4358 Tuple assignment balance is incorrect: extend inspection for sequence literals other than tuples
ce732f3: highlighting for slashy strings (\u1234)
d80285a: fixed PY-5291 Tuple assignment balance is incorrect: false positive for builtins
2f2ed64: fixed PY-5290 Tuple assignment balance is incorrect: false positive for namedtuples
120386e: Fixed ISE in adding a new Python SDK (EA-32393)
077b5ff: Debugger: fixed step into eggs (PY-4669).
1499785: fixed PY-5264 Tuple assignment balance false positives for function calls
2131605: Merge remote-tracking branch 'origin/master'
2acfe2d: notnull
c3e7182: fixed PY-5139 Missing docstring inspection should have a quickfix
672f486: fixed PY-4606 Applying quickfix discards end-of-line comment
3c5d0cd: Merge remote-tracking branch 'origin/master'
a80df9b: optimize imports doesn't remove imports for which inspection is suppressed (PY-5228)
f741966: Merge remote branch 'origin/master'
c9dc1a2: if no interpreter is configured for the project, show big warning and suppress auto imports (PY-5247)
eecad15: Merge remote-tracking branch 'origin/master'
0e4a38a: Merge remote branch 'origin/master'
6738d66: Merge branch 'python-fixes'
480adbf: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f8e52b9: making it clearer what actually happens
70e61fa: Fixed NPE.
61eb719: the mechanism for setting an SDK through project userdata isn't needed anymore, and most likely was never used
932fd0f: Confusing red message removed (PY-5272).
73590cc: Merge remote branch 'origin/master'
c2b5cb9: Merge remote-tracking branch 'origin/master'
594c94c: Xml namespace resolve for django extends inheritance (PY-1257).
7b96064: avoid more stub to AST switches
20c1e92: don't look in builtins when trying to find name in 'from ... import *' (PY-5231)
b62b120: Merge remote branch 'origin/master'
1ec74bc: Updated fix for PY-5265
4b39252: Merge remote branch 'origin/master'
c4bd812: add some checkCanceled() calls for good measure
c8f07f3: title case
b8dc9c2: fixed PY-2523 "Convert lambda to function" makes code unclear - lambda as an argument
5ee245a: Fixed import.
f2d22e3: Better printing.
20c3941: Merge branch 'python-fixes'
9ed5c12: fixed PY-2314 Doctype declaration not picked up from base template if template uses {% extends %}
f991af7: More debug info for django templates debugging.
a68f65b: Merge remote-tracking branch 'origin/master'
18f63ad: Fixed false positive in unreachable code inspection in nested try/except (PY-5266)
71485cf: don't use treeCrawlUp in findExportedName(), use same code path for stub- and PSI-based cases (PY-5231)
54c03a1: Added debug env var.
632ecf9: Fixed python detection for Mac.
5aac647: Fixed wrong warning about importing the current file in 'from import' statement (PY-5265)
111d458: fixed PY-4423 Missing completion for attribute checked in hasattr
1b7785d: Merge remote branch 'origin/master'
571e28c: changed compatibility inspection
4fd5c0f: EA-31614 - NPE: PySkeletonRefresher.cleanUpSkeletons
8e2320b: store class docstring in stub
32b3258: avoid stub to AST switch if we don't have a default parameter value
2eede75: store entire docstring text in PyFunction stub
7584b21: Merge remote branch 'origin/master'
07bcdcc: store annotations in stubs
d2dbc19: Merge branch 'python-fixes'
e168f77: towards reducing stub -> AST switches
5d7778a: check staticmethod/classmethod wrappers without loading AST
8f16f88: Fixed bug in unresolved references inspection for '__dict__' in '__slots__' (PY-5255)
4684f47: delete WRAPPED flag that no one cares about; test for staticmethod as wrapping call
a271e81: javadoc
4c2fddd: faster overload of resolveCallee() that doesn't try to calculate implicit offset and other stuff that no one cares about
3453c1f: PyCharm 2.0 artwork
30bc878: Merge remote branch 'origin/master'
74b7e27: Fixed ignore settings for unresolved operator names (PY-4451)
8b430aa: don't refresh skeletons if nothing was generated
299ea90: Abrupt control flow on 'self.fail' not just 'fail' (PY-4864)
54d66d5: Type database for 'datetime' stdlib module (PY-4273)
0f5bde9: Show type mismatch message for the first resolved operator containing errors
81b8847: Merge remote-tracking branch 'origin/master'
8eb1fe3: create project from existing sources: select suitable project sdk
dc52bb2: Merge remote branch 'origin/master'
da0c806: only Python SDK is acceptable for PythonModuleBuilder
b96d8b8: Merge branch 'python-fixes'
3180a7f: Added '__debug__' keyword for Python 3
36d9ea1: Types for tuple iterator vars in for loops and list comprehensions (PY-5184)
38a7654: Try to infer new Python object type from constructor return type: via type providers, etc.
7510b49: Don't switch to AST for files except the current one while evaluating types
c7733cb: Don't switch to AST while evaluating function return type using Python 3 annotations
22b0626: LAHT tests fixed (who broke'em?)
43b06fb: Renamed.
544b37c: Merge remote-tracking branch 'origin/master'
5b0c732: fixed tuple assignment balance inspection (use types now)
55d75f8: Added overloaded types to Python stdlib type provider (PY-4401)
cd5de1d: notnull
6862c11: Test fixes
7aa745d: Merge remote-tracking branch 'origin/master'
c85c51b: Due to broken compilation commit reverted: Test fixes The problem was that FormatterUtil held FormatterUtilHelper instances at the ... (0b6daee5d2122c72395e94ac47d7cc57f6ede8d8)
58ed6d1: Test fixes
0415d3d: Merge remote-tracking branch 'origin/master'
dded710: Merge remote-tracking branch 'origin/master'
95d26b5: Renamed settings group Notifications -> Activations Policy (PY-3740).
a311ede: Web preview for django (PY-3186).
ad5f180: add libraries to pythonpath; more correctly add content roots to pythonpath (PY-5241)
c5f32e14: Merge remote branch 'origin/master'
150634f: logging in PyProjectStructureDetector
273447c: fixed PY-5226 "replace + with string formatting operator" produces bad code
967f260: yet another compatibility fix
c01788b: Merge remote branch 'origin/master'
b371a6e: fixed PY-5203 Move statement: statement should not jump into docstring
2c44e3c: Merge remote-tracking branch 'origin/master'
39496c8: Merge remote branch 'origin/master'
d891865: fixed PY-5193 Move Statement: do not allow to move break and continue statements out of a loop
ac3e2ea: Fixed twice-monkey-patch (PY-5223).
120efe6: fixed PY-5221 Move statement: Throwable at com.intellij.psi.impl.PsiToDocumentSynchronizer.a
82d0472: Merge remote branch 'origin/master'
7b901b5: Fixed run to cursor temp breakpoint removal (PY-4465).
4f42f02: Python project structure detector (PY-4967)
f64673c: Merge remote branch 'origin/master'
8d0387b: fixed PY-4083 Commentdocs support in manual types declaration
9ea3f3c: fixed PY-4423 Missing completion for attribute checked in hasattr
f9ae3ea: fixed nonempty docstring in init
1bae379: fixed PY-3927 Types autodetection should search types for __init__ function parameters in class docstring
510aee8: Tree-based PyClassImpl instance may not have a parent element (EA-32381)
361574d: Nullness annotations for Python element types
bab6e96: Updated Python builtin skeletons version in order to re-generate them (EA-32363)
ccd38d9: Fixed debugger disconnection.
b2b56bd: Merge remote-tracking branch 'origin/master'
56afc0c: Python: don't kill softly in suspend debug mode (PY-5185).
8600b4c: notnull
8435c2e: Fixed console input in debug mode (PY-5209).
3ac4fb6: Merge remote-tracking branch 'origin/master'
07640e8: fixed docstring qFix name
3f92821: Merge remote-tracking branch 'origin/master'
07f9fa9: Merge remote-tracking branch 'origin/master'
1474418: fixed PY-4358 Tuple assignment balance is incorrect: extend inspection for sequence literals other than tuples
2efcc7f: Merge branch 'python-fixes'
07efb8b: fixed PY-5139 Missing docstring inspection should have a quickfix
28faa96: Python language levels associated with SDK flavors
79ddbf2: fixed PY-5157 Convert dict literal to dict constructor: invalid code with space in keys
1de2fba: fixed PY-5201 Move statement: changes code logic on moving one-line compound statement outside nested one
6004dee: Merge remote-tracking branch 'origin/master'
a5a774f: fixed PY-5203 Move statement: statement should not jump into docstring
14d89ce: fixed PY-5202 Move statement: breaks code on moving outside with statement
b30327b: Empty file to put directory under version control
d94ce92: Merge branch 'python-fixes'
5b6ccd5: Fixed non-deterministic bug in updating imports in move Python package refactoring
2d56553: fixed PY-5191 Move Statement: not able to move down the very last indented statement with EOF right after it
5f35151: Merge remote-tracking branch 'origin/master'
ef2640a: fixed PY-5197 Move statement: inconsistent skipping of empty lines
72760de: fixed PY-5199 Move statement: do not perform action with cursor on empty line
a0b8b18: fixed PY-5198 Move statement: breaks code in case of one-line compound statements on module level
3c98006: fixed PY-5196 Move statement: gets stuck on moving class after compound statement
fbee7c3: fixed PY-5192 Move Statement: breaks code in case moving down to nested try statement
2219082: fixed PY-5193 Move Statement: do not allow to move break and continue statements out of a loop
0c455eb: fixed PY-5195 Move Statement: loses code on moving nested function
e337155: fixed PY-5183 Test only created for first method
a2d295a: Empty file to put directory under version control
0696b04: Merge remote-tracking branch 'origin/master'
d22ca23: rewritten move statement for python (see PY-3284, PY-1834, PY-3283)
0757865: correctly specify language for sandbox inspection
ad07253: Removed debug prints.
cbf8979: Fixed test.
676aed4: Merge remote-tracking branch 'origin/master'
aadf23e: Merge branch 'python-fixes'
e6e5189: Fixed headers of Python skeleton files with errors
122b48c: Tooltip about extracting method from empty selection (PY-1697)
28a5f10: Create packages in destination dirs while moving Python modules (PY-5168)
a36ecfd: Moved getSourceRoots() for Python to PyUtil
fae20e1: Fixed path normalization for windows.
0916e57: line
387cf15: Some debug prints.
4e89740: Added debug info.
5584eda: Write Python skeleton generation errors to skeleton files
613d35e: Added debug logging.
a58ab84: Fixed os patching for win (PY-5175).
f2b65cd: Merge remote branch 'origin/master'
00c27a3: cache for class properties
d322fc9: micro-optimization to avoid Class.isInstance()
79311d7: visitor filter performance optimization
9cbbf8e: visitor filter performance optimization
03f0bb1: Merge branch 'python-fixes'
6ddf4a6: Updated types for startswith() and endswith() of str/unicode (PY-5035)
9ae5e36: Type database for decimal.Decimal (PY-4748)
2aa8aba: Django custom settings are written to DJANGO_SETTINGS_MODULE env var.
5e62a50: Multi-resolve to both left and right binary operators, updated type checker (PY-4748)
e6a2b35: Fixed type assertions for named tuples (PY-4611)
94c9479: If any exception happens on ipython_010 import, try to import ipython_011.
7d893bf: Merge branch 'python-fixes'
c6bf564: Fixed determining PyPy interpreter name (PY-5097)
8cc8253: Fixed bug in generating builtin skeletons in case of errors
0a02bb2: proper fix for django tests
904a1be: Fixed extract method for assert type instructions in CFG (PY-5123)
35874cc: Merge remote-tracking branch 'origin/master'
e53e8d8: Cleanup
8acce04: Merge branch 'python-fixes'
28cb205: Copy of skeletons for virtualenv from its base SDK
eb7ca48: since/until for Python plugin in branch 112
1d16c06: PyTargetExpression (which is a PsiNamedElement) is most definitely not a scope when renaming (PY-5146)
0036b53: Django 1.4: Fixed test run (PY-5160).
4c458a0: Fixed debug console for ipython 0.10 (PY-5125).
d57ff71: pregenerated skeletons for MacOS X 10.6
1a0488c: added "pass" to PyNames
44bb1dd: Show skeleton generation errors after adding Python interpreter
6a38063: Merge remote-tracking branch 'origin/master'
d0b6c06: fixed PY-3659 Don't offer to replace set function with set literal when parameter is a string literal
cc4ac48: Exit status for generator of builtin skeletons
1adf261: fixed PY-5140 Exclude doctests from quick documentation content
ba9cf0e: Fixed types of numbers for Python string formatting (PY-3478)
19a0cf6: Fixed 'import' keyword completion (PY-5144)
307c7ae: Merge branch 'python-fixes'
0b19c62: added test for PY-5041
b13eca7: fixed PY-5041 Closing pair quote not inserted if caret is before pair of quotes
c9f7a2c: PyCharm specific project scope building (PY-4187)
3048823: no qualifier - no resolve (PY-4980)
f0f02e6: show full name of module in toggle alias quickfix (PY-5142); handle language level correctly
6d87883: allow configuring project interpreter for default (template) project
a421b5c: define environment variable for all configurations started from PyCharm (PY-4853)
6e0df56: fix parameter highlighting in python colors preview
7e9bc71: fixed PY-5097 PyPy interpreter is marked as Unknown in interpreters list
571f98d: Merge remote-tracking branch 'origin/master'
e7d419b: Fixed hyperlinks in skeletons generation failure messages
44f174d: Always create blacklists for bad modules while generating skeletons (PY-4709)
f8c0de7: notnull
3856b5f: Update references when moving Python modules (PY-4379)
d7bc0ea: Moved findUsages() to PyRefactoringUtil
c515faf: Removed several warnings
047eb34: Fixed find usages handler for Python modules
0ac43b3: fixed PY-5130 Inspection 'Code compatibility inspection' false for Python version 2.6
26eaea9: Merge remote branch 'origin/master'
996047b: Merge remote-tracking branch 'origin/master'
727fe2a: Added inplace imports as some modules disappear after fork.
251ce37: don't show "toggle import alias" for unresolved imports (PY-1978); drop custom AskNameDialog and use standard Messages class instead
feea481: renaming __init__.py renames its containing directory instead (PY-3856)
805e6b5: trailing comma after star parameter is a syntax error (PY-4039)
d12724b: [^kb] no mac corner for lookups (IDEA-75965), a productivity feature for changing lookup sorting
2cf6b57: Multiprocess debugging. Flask supported. (PY-5132).
7a918be: Merge remote-tracking branch 'origin/master'
fb98092: Merge remote branch 'origin/master'
f380915: don't update the same SDK twice on initial startup
5402d7f: more binaries to skip
05cd382: remove version number from PyCharm bundle name (PY-2376)
6eb5664: test fix
2475b82: Deprecated call.
62a7687: don't try to generate skeletons for files under helpers (PY-5030)
2ba71dc: extract method
433bf23: Merge remote-tracking branch 'origin/master'
a6b5edd: Merge remote branch 'origin/master'
3682a7f: Inspection's option panels layout
d114c09: A lot of configurables are unscrollable now.
7329f12: cleanup
0e09b15: to handle built-in modules correctly, unpack pregenerated skeletons entirely; refactoring
1d540c6: Merge remote-tracking branch 'origin/master'
5b99cb5: cleanup
16e67f9: colors for parameter and 'self' parameter in Python (PY-2610)
18a23da: added console filter for inspections
21f0801: Merge remote branch 'origin/master'
5bd83cb: initial support for pregenerated skeletons
bcf792e: Merge remote-tracking branch 'origin/master'
647ac0f: removed mako/cython dependent code from python inspections. Created extension point to filter out some python visitors for custom languages
1c16d2e: I love xkcd but there's no reason to open it in the browser when building PyCharm skeletons
d921b24: Fork handling in multiprocess debugger.
e98480e: Added checkbox for running debug in multiprocess mode.
30e3de69: disabled coverage and debug for documentation configurations
355d625: Merge remote branch 'origin/master'
4e6177f: more correct calculation of super call type (PY-2320)
8646732: fixed PY-4242 Reduce suggestion list of parameter values as the param tags are filled in.
181f899: distinct icon for Python packages (PY-1838)
4b1f976: fixed PY-4900 Mako: parameter unfilled: false positive calls from namespace from regular python module
d851ecd: Ctrl-] at end of block jumps to end of enclosing block
71b2a93: don't show auto-import hint if other hints are active now (PY-2100)
8abc841: more adequate test and proper fix for PY-4437
cca4e00: Better name.
5873dad: Better name.
70da939: distinct icons for functions and properties (PY-5122)
7ea04ee: fixed PY-4138 Would be nice to have some diagnostic for test runners
1baab5a: Merge remote-tracking branch 'origin/master'
e9ff6d9: fixed redundant relexing (future import unicode_literals)
f54d743: Merge remote-tracking branch 'origin/master'
c5c7e26: fixed vfstestframework listener
7b7c853: removed registration of TestFramework service
3fbed7e: fixed quotes for rtype/type in options
7b21a59: Renamed Python move refactoring test
b5767fc: Merge remote-tracking branch 'origin/master'
358d23d: changed test framework listener to application component
04d77cd: Merge remote branch 'origin/master'
e2a48b5: generator works better with IronPython (PY-5021)
1ac7c17: indeterminate progress for "generate binary stub"
8b7334d: fixed test data for string literal wrap
02f508c: fixed test name for smart enter test
2acd90d: Merge remote-tracking branch 'origin/master'
20f0fb9: Merge branch 'python-fixes'
f4a1778: Removed redundant equals() and hashCode() for PyImportReferenceImpl
bf024a0: Fixed race condition in evaluating set of TypeEvalContext and resolve results cache
25fe2ab: fixed PY-5106 Unnecessary backslash is added on enter inside string in parentheses
0479ddc: fixed PY-3036 Unnecessary backslash: missing for split strings
d146c22: Merge remote-tracking branch 'origin/master'
85973ac: diagnostics for EA-29910
d300946: Fixed bug in detecting old-style iterables (PY-4890)
e502a7a: Merge remote branch 'origin/master'
b44fba5: extract method
80436c3: typo
4adc354: always place caret at occurrence when in-place introduce is used (PY-5098)
c7d68fc: Merge remote-tracking branch 'origin/master'
65dd2df: Fixed NPE (PY-5101).
7194cc6: Merge remote-tracking branch 'origin/master'
6af5a66: better SOE protection when evaluating types (EA-31867 - SOE: PySubscriptionExpressionImpl.getType)
85cfb20: remove completely unnecessary double recursion (EA-31868 - SOE: PyMethodParametersInspection$Visitor.ultimatelyListsInBases)
65974d1: Fixed ipython magic completions.
da70657: IPython completion in debug console.
9b1d746: IPython: Magic functions are parsed correctly (PY-4497, PY-4468)
c0b2b41: fixed PY-5029 Editable auto stub string
51444ef: Merge branch 'python-fixes'
e0fa320: Special-cased signature of getattr in skeletons (PY-4509)
e70cb1b7: Fixed Jython compatibility of generator3.py
2e28ae5: Fixed completion of magic commands (PY-4475).
a0b40c7: Fixed bug in handling union types in argument list inspection (PY-4968)
4b52bf5: Fixed bug in checking for sequence instead of iterable in '*args' (PY-5057)
6ddfa16: fixed qFix tests
2b33065: Merge remote-tracking branch 'origin/master'
2bb62bb: generate indent token if first line of file is indented (PY-4941)
ccb7e2f: suppress unresolved reference error under PyImportedModuleType (PY-2075)
6bdfe52: Ctrl-[ and Ctrl-] work for Python (PY-2045)
83dc76c: detect Python file type by hashbang line (PY-4865)
d25ceb4: inplace introduce variable: if we have no occurrences, put caret into target (PY-4482)
7f7479a: if no project SDK is specified, "open directory" uses latest configured version of CPython
43df68c: call in qualifier of assignment LHS should be parsed as reference expression, not target expression (PY-5062)
b670fc8: include empty statement list in PSI if colon is missing (EA-31846 - RE: PyBaseElementImpl.childToPsiNotNull)
13dcd27: correctly calculate super call type when completing; allow completing dunder-prefixed attributes in context of super call (PY-5066)
6596f55: fixed PY-5058 When unicode/raw string literals are line-split, the u/r/ur prefix is missing
cde4653: EA-31529 - NPE: PythonDocumentationProvider.pyVersion
d3c5d7a: don't access data context from wrong thread (EA-31629 - assert: FocusManagerImpl.isFocusTransferReady)
2e01e60: don't require JDK where it can be null (EA-31649 - IAE: JdkChooserPanel.<init>), move declaration and registration of OrderEntryAppearanceService to correct place
2312915: avoid storing PsiElement reference in a quickfix (EA-31783 - PIEAE: PsiElementBase.getContainingFile)
c41305a: fixed PY-5073 Django test can not run if settings are not at the top level of the project
288ae5e: - Fixed parsing in console a bit - On %edit open file in PyCharm rather than in external editor (PY-4507).
baa501c: Don't ask about destroying python console on close.
86750a2: Merge branch 'python-fixes'
724d537: Merge remote-tracking branch 'origin/master'
f6961f5: testframework service is an application service now
579ba5b: Special-cased default arguments check for 'dict.get' and 'dict.pop' (PY-4158)
9e676c4: Fixed resolve and completion for locally assigned instance fields (PY-4279)
df923b1: IDEA-74433 (allow to switch safe write off)
7e83954: Cleanup and typos in idea.properties
f3b4f00: Merge branch 'python-fixes'
3019a47: Shared TypeEvalContext for all inspections in the current session
2bc400f: Merge remote-tracking branch 'origin/master'
b7821dd: changed queue type in test framework listener
e217fcc: Merge remote-tracking branch 'origin/master'
301513c: fixed manage.py options for jython (made params group in command line)
0d64956: to lower case for attest paths
870b657: Merge remote-tracking branch 'origin/master'
f501f36: fixed exception in test framework listener
3a8bca1: Fixed parsing of ipython code starting with ? (PY-4494).
d51330b: Merge branch 'python-fixes'
2be69fc: Help reference (PY-5047).
c2c24b5: fixed Test Framework listener
ba3bf31: Merge remote-tracking branch 'origin/master'
21bb875: changed VFSTestFramework listener type to BulkFileListener
94848e2: Added message.
f758617: PY-5031
6b66d41: No message.
ef335d0: Fixed false positive in unused locals for 'try' with multiple 'except' (PY-4378)
601a45d: Merge remote-tracking branch 'origin/master'
28b6041: Notify that console is started.
7313702: Merge remote branch 'origin/master'
243613c: Merge remote-tracking branch 'origin/master'
f54ac0e: Console joined debugger in their common package.
9c792ef: Merge remote branch 'origin/master'
693ebf8: Merge remote-tracking branch 'origin/master'
e64cf53: Merge remote branch 'origin/master'
45a71ff: svn 1.7 as additional plugin
7001adc: completion lookup shows type of value shown in lookup, not type of assignment RHS (PY-4350)
bbc3eb2: Merge remote-tracking branch 'origin/master'
c0d66f6: fixed bug in django tests with south
489ac7c: Fixed false positive in unused locals for class factories (PY-4147)
63f7f91: Merge remote-tracking branch 'origin/master'
d16b390: IPython console embedded to debug console (PY-4504).
8128e7a: 'self' is not a local scope dependency (PY-4492)
ee395e1: Merge remote-tracking branch 'origin/master'
6b0b6b4: Tests cleanup
14c4f6d: Python unused locals inspection cleanup
94c760a: 2.0 Beta
a443e03: IPython-specific parsing if only IPython is running (PY-4496).
87fb922: tweak parsing of incomplete statements (PY-3792)
40f66a7: Merge remote branch 'origin/master'
c19ce52: Merge remote-tracking branch 'origin/master'
c7992d7: Merge branch 'python-fixes'
8ae4f0a: Merge remote-tracking branch 'origin/master'
599f05f: Remove vfs access here.
e3499fd: added logging information when searching for test runner
d76792c: changed unnecessary list to more suitable set of SDKs
d47fdf8: Merge remote-tracking branch 'origin/master'
e95e83f: changed error type to warning if couldn't find nosetests in interpreter
5c98fe5: Merge branch 'python-fixes'
0a01cf1: Return unknown type for old-style properties if we cannot infer their type
71675cc: fixed PY-4735 "mandatory encoding declaration" inspection+quick fix on python files
037d110: use continuation indent in parameter list (PY-4356)
ee438d4: SearchableConfigurable (PY-5042)
1a0c24d: fixed PY-4916 Mako: unresolved template reference: false negative for templates in nested folders
5a77bea: Merge branch 'python-fixes'
0ab1341: Merge remote-tracking branch 'origin/master'
cf9c3bf: Fixed overridden properties lookup (PY-2313)
ec5c7b2: correctly check isIncomplete() for unclosed argument lists (PY-4863)
22bf9a1: don't fail trying to create a directory which already exists (PY-5021)
c4bd72fd: action to mark a directory as template directory (PY-4848)
3f165de: correctly spell check format strings with escaped characters (PY-4440)
ac0f85f: findInitOrNew() doesn't find fake __init__ in fake superclass of old-style classes (PY-4897)
e1f23fa: fix test name to match testdata name
ba2daf9: fix silly typo in previous commit
fab96fd: don't replace line breaks with spaces inside string literals (PY-4962)
d8a2126: passing test for PY-4947
fe2efca: an empty return statement does not return a value (PY-4502)
67c17b8: handle Enter in a string injection inside a Python file by looking at non-injected context (PY-4982)
5d992e3: escaping single quotes is not redundant in PyCharm (PY-5030)
3fcac94: Fixed english.
741324c: Error->LOG.
4492318: Merge remote-tracking branch 'origin/master'
fdaad8c: Converted Python property access inspection test to highlighting test
3249c7b: fixed PY-5037 "Assignment can be replaced with augmented assignment" quick fix doesn't work
d14c9132: Merge branch 'python-fixes'
67aee02: Fixed some occurrences of type eval context without the origin file in Python inspections
50913e1: Removed broken equals() and hashCode() overrides for operator references (PY-5016)
87cc807: Merge remote-tracking branch 'origin/master'
57dd7ee: fixed PY-491 Inspection to detect non-ASCII characters in source files with no encoding specified
9c56947: Merge remote-tracking branch 'origin/master'
ee4526d: Merge remote-tracking branch 'origin/master'
852c6d9: Fixed callable inspection for union types (PY-4608)
8cc55ab: Fixed callable inspection for decorators (PY-4090)
cc7d031: Cython token contributor for running tests via PythonAllTestsSuite
06f7463: fixed PY-5018 "Inherit from object" gives messed up code for child-class
f7468df: Merge branch 'python-fixes'
23fa48b: fixed PY-5014 Short Chinese characters still show "Byte literal contains characters > 255"
783c421: Fixed loop-exit CFG edges in for-else loops (PY-4239)
68d5c3a: Don't count type assertion CFG vertices in DFA (PY-4609)
a6b4168: Removed unused 'self' attributes access vertices from CFG and cleaned up unbound variables inspection (PY-4623)
554968b: fixed PY-5022 Test Runner does not work with django South
d08bdb7: fixed PY-4697 "join 2 if's" inspection suggestion is invalid
120122c: fixed PY-4647 False positive for "Too few arguments for format string"
55d78b5: fixed PY-977 "Replace with str.format method call" doesn't handle escape sequences.
fd00e4f: fixed PY-4376 Split lines: do not add another backslash if there is one already
779d803: fixed PY-4375 Join lines: delete escaping backslash when joining one string
56c1f28: fixed PY-3814 Current parameter info: remove popup on moving cursor out of the parenthesis
7cddecd: fixed PY-3817 Current parameter info: inconsistent current parameter before and after comma in case of list/tuple parameters
fa64585: Merge remote-tracking branch 'origin/master'
032e761: Fixed NPE (EA-30277, PY-4512).
b3c8fb0: Fixed NPE (EA-31270).
315b9ca: Fixed NPE (EA-31450).
46e2f24: fixed PY-4534 PyCharm can't find nosetests runner
0e8a006: Merge remote-tracking branch 'origin/master'
c88a15d: NO NPE.
4709c3c: Fixed false positive in unbound locals for outer functions and classes (PY-4297)
331f84d28: Changed console settings UI from tabs to tree.
06802ac: Fixed false positive in unbound locals for self-like arguments of nested functions (PY-4229)
4b5e13f: Merge remote-tracking branch 'origin/master'
e2e5b55: put primary module on top; accept UnnamedConfigurable as child configurable; select first module in list by default
155d358: Merge remote-tracking branch 'origin/master'
e7c95d7: Multiprocess debug: fixed thread hang-up (PY-4979).
c30c9ef: Merge remote-tracking branch 'origin/master'
7efc02d: Merge branch 'python-fixes'
06b25d7: fixed PY-4360 Tuple assignment balance is incorrect: false negative for parenthesized tuples on the left side
b8a94d7: Fixed CCE in move refactoring for base classes (PY-4545)
ff9665c: Removed commented code
4a5667f: Fixed debug autoreload in case in interpreter like '/usr/bin/python27' (PY-5004).
7cd102c: fixed PY-4357 Tuple assignment balance is incorrect: inspection doesn't affect variables
c5fd88d: Merge remote-tracking branch 'origin/master'
a600509: Merge branch 'python-fixes'
e0f825f: Fixed resolve for toplevel references to cdef functions and modules defined below (PY-4991)
4f5720f: Setting to remove separator line in console (PY-2573).
a65a25e: Merge remote-tracking branch 'origin/master'
32f6794: fixed PY-4964 Broken "Add docstring parameter" quickfix
d01b898: Python console: fixed case with module default sdk (PY-4996).
5a4e065: fixed EA-30553 NPE AddCallSuperQuickFix.applyFix
e5d35e3: disabled docstring inspection in mako files
b65dada: fixed test configuration panel
7d6fa48: crash-proof the generator a bit more (PY-4369)
16553df: update roots in EDT; update skeletons on first activation of a certain interpreter in the project
568bb46: EA-30956 - assert: PyClassNameIndex.findClass
9372073: trying to solve the SOE at EA-31262 - SOE: PythonDocumentationProvider.getTypeName
8fafc66: EA-31378 - NPE: PyTargetExpressionImpl.getType
970afd5: EA-31379 - NPE: PyClassImpl$NameFinder.process
16ed064: better timing diagnostics; don't walk into directories that don't contain __init__.py (such as assorted locale directories)
561156e: per-SDK refresh of skeletons
523440f: register PythonSdkUpdater as startupActivity extension
41ec8c8: Merge remote-tracking branch 'origin/master'
32271e8: - Added python console settings. - Python console now can run even if no python module present
8ed9204: fixed **options in django test runner
da09db5: Merge remote-tracking branch 'origin/master'
fbbad2e: DataContext needed for correct scopes selection is not available in background thread (EA-31516)
0f9bcbf: Merge remote-tracking branch 'origin/master'
1d953ac: fixed PY-4983 incorrect line break on enter for strings with prefix
fbfcb33: implements SearchableConfigurable
100c565: fixed enter in identifiers
9167075: Added formatting for Cython
60d4774: ConfigurablesModifiedTest fixed
ca08a34: Merge branch 'attachproject'
b02fb2a: per-module template language configuration
eaed9c7: Merge branch 'cython'
401a263: IDE-specific buildout configuration moved to python-ide
7560f96: project (module) chooser in run configurations
100ec53: Merge remote-tracking branch 'origin/master'
3f87d51: Python coverage settings moved to general settings (PY-4712).
ca698fd: Added structure view for Cython elements
12932c5: fixed PY-4970 Broken test runner for py.test with test generators
e0847c0: fixed PY-4191 'Show command line' option for unittest runners
3f9ccfd: Merge remote-tracking branch 'origin/master'
eee63c6: allow to add support for framework from Project Structure dialog (for Groovy, JSF and Spring libraries for now)
d470f03: Scripts: Java home detection on FreeBSD and Solaris; fail gracefully on missing readlink
444bc3a: fixed PY-4220 Inspection: False positive on "Key ... has no following argument" (string formatting)
662afc9: understood and fixed unresolved reference NPE
4ba0dbd: Merge remote-tracking branch 'origin/master'
1fcb298: Merge remote-tracking branch 'origin/master'
65e9944: tweak Python formatting rules to fix tests
0ede4e2: JavaScript debugger in PyCharm (PY-3021).
c462a23: rule-based API for spacing calculation in formatter; use it in Python formatter
d912f45: fixed PY-4952 Wrong code compatibility inspection for json module
b100d44: fixed PY-4692 "Goto test" in test file shows "Choose test" popup when multiple options are available
78e3c5e: fixed PY-4731 Create new test for class: respect user indentation settings on creation
d0c4c8d: fixed PY-4726 Create test for class: navigate to created TestClass
40ef94f: fixed failed django test
2b92490: fixed PY-4738 Create test for class: generate failing test not one which raises errors
31ec051: fixed PY-4693 "Create test" has empty lines in the list of methods
75155d4: fixed PY-4732 Tests Run Configurations: unnecessary pattern checkbox in per-script configurations
6c7c69f: Merge remote-tracking branch 'origin/master'
91f20bb: stopped faking django_settings module in tests (it leads to wrong project name) PY-4217 Error: Caught NoReverseMatch while rendering when running tests
464a4f0: WI-8073 (6x is better than 4x)
5d8b9b5: fixed PY-4218 Test runner incorrectly shows passing run when py.test gets an error collecting
9cc589c: keep formatter stuff in formatter package
6ba0092: fixed PY-4691 Test runner should check filename pattern before importing module
c1425e3: Merge remote-tracking branch 'origin/master'
349bae9: fixed PY-4832 Test runner can't attach to pinax django tests
596c83a: Merge remote-tracking branch 'origin/master'
9792a9f: Merge remote-tracking branch 'origin/master'
ff30f49: Fixed resolve for attributes of Cython classes defined in *.pxd files (PY-4946)
908d67d: Fixed resolve for references to Cython modules in the first part of 'from ... import' (PY-4944)
663ed07: Disabled unresolved references inspection for Cython include files
1031ce1: Enabled unused imports inspection for Cython files
e93b3aa: fixed PY-4925 noserunner.py fails when arguments contain space [with patch]
e4674c0: Merge remote-tracking branch 'origin/master'
e632c67: fixed PY-4928 Mako: object is not callable: false positive for functions from re module
765638f: correctly resolve class private names when completion is used (PY-4589)
fcfe166: don't allow to invoke in-place refactoring if another template is currently active
2e45853: don't suggest any used names as name candidates for introduce variable (PY-4605)
a058121: Merge remote-tracking branch 'origin/master'
aaf4529: Merge remote-tracking branch 'origin/master'
3612bf1: Merge branch 'cython'
8380efc: yet another attempt to fix spellchecker SIOOBE (PY-4906)
703cbd8: Resolve for Cython '*.pyx' modules imported from Python files (PY-4934)
d596b73: Merge remote-tracking branch 'origin/master'
4cca7c2: put forceDelete() function to common build scripts, use it in IDEA build script
ac144b2: Merge remote-tracking branch 'origin/master'
1c8da6f: Python plugin version advanced
8122bbc: Hidden some methods of ResolveImportUtil for Python
7815fb4: Merge remote-tracking branch 'origin/master'
c5afd96: Merge remote-tracking branch 'origin/master'
37d4335: Merge remote-tracking branch 'origin/master'
e151d49: It's better to synchronize on socketObject.
feb0c23: Merge branch 'cython'
a95f624: Fixed debug command-line tests.
fc49dec: Merge remote-tracking branch 'origin/master'
3f40ecd: Added synchronization on socket.
8800c99: Fixed NPE in PyClassImpl
c39e409: fixed PY-4911 Mako: unnecessary backslash in expression: false positive for control structures
3c530dd: Merge remote-tracking branch 'origin/master'
c705649: PyDebugger: added CR to frame string.
300c5d6: Fixed resolve for attributes of Cython class instances (PY-4931)
9545b3f: rename stuff around to allow moving more template language specific stuff into TemplateLanguageTagLibrary (formerly known as TemplateLanguageCoreTags)
7327ffa: Some more print info.
d72eae1: Extended timeout for jython in tests.
6159d50: Added some debug information for tests.
26f0072: Completion for Cython reserved keywords
bec5a5f: Merge remote-tracking branch 'origin/master'
83987f5: fixed PY-4823 Wrap with trans tag: navigation by gutter icon
0dd6045: commenter for Jinja2 (PY-4874)
a4d26ca: Temporarily disabled call by class inspection for Cython
b0f651a: Disabled unresolved references inspection for Cython builtin types
107781a: Temporarily disabled callable inspection for Cython files
3b65a76: Fixed resolve for Cython includes + forward declarations
8c5f6b3: Enabled unresolved references inspection for Cython
e699e0c: Merge branch 'cython'
b05f31b: Bug in 'cimport' of starred 'extern' symbol is obsolete (PY-4844)
261e58a: Fixed bug in absolute 'cimport' resolve in submodules (PY-4843)
2107987: Merge remote-tracking branch 'origin/master'
1998863: fixed PY-4796 Mako: provide completion for attributes of default mako template tags
3f2f73d: fixed PY-4801 Mako: Structure view for mako template files
5388e17: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0532f6a: Merge branch 'cython'
a50e997: Resolve and completion for Cython cdef classes
3789e6d: fixed PY-4802 Mako: Formatter for mako template
b48a029: Find Usages mostly works for Jinja2 macros
3c115ef: cleanup: avoid invalid elements
de85eea: don't validate disabled buildout facet
57861be: Extracted base functionality from OSProcessHandler; moved relevant classes to 'util' module
9734817: Merge remote-tracking branch 'origin/master'
2fb6fc5: fix another case of SIOOBE in Python spellchecker (PY-4784)
d4e0e8b: synchronize FactoryMap access (EA-30786 - IAE: TObjectHash.throwObjectContractViolation)
5938860: EA-30775 - PIEAE: PsiElementBase.getContainingFile
f9c14be: RunnerMediator: send ctrl-break before closing stream (last change was incorrect and should be reverted)
fcad21f: replaced FileTypeRegistry.getFileTypeByFile() with file.getFileType()
b411710: a few refactorings, removed redundant code; fixed PY-4810 Mako: namespaces are not available in completion inside python code blocks
c01132a: inspections language
3d688f1: Merge remote-tracking branch 'origin/master'
922b706: 1. OSProcessHandler & Python process handlers were refactored (code duplication, etc.) 2. RubyProcessHandler - Kill process tree recursively 3. "soft-kill" feature for Python & Ruby process handlers
780d315: cosmetics
18021b6: diagnostics
cb7b657: diagnostics
dd42a88: EA-29682 - assert: ProblemsHolder.registerProblem
818c1ed: EA-29683 - assert: PyCallExpressionImpl.getSuperCallType
746b672: coverage-agent is also needed on the classpath
915b3fb: EA-30661 - PIEAE: PsiElementBase.getContainingFile
0dd134b: add coverage plugin to Python plugin build classpath
664438d: to avoid recursive union types appearance, PyUnionType constructor is now private (EA-29942 - SOE: PythonDocumentationProvider.getTypeName)
2820513: EA-30246 - IAE: ChainIterable.add
852abf5: EA-30280 - SOE: PyKeywordArgumentReference.resolveToFunction
e276025: EA-30486 - AIOOBE: PropertyBunch.fillFromCall
ef3c6ed: make sources jdk7 compilable
74074b9: Merge remote branch 'origin/master'
83d8b7f: Initial support for types in Cython
19f8959: Fixed typo.
c84dd21: Merge remote branch 'origin/master'
efb6557: Merge remote-tracking branch 'origin/master'
10c1ab8: fix exception in Python spellchecker (PY-4784)
a6a7f4f: Fixed typo.
2449047: Merge remote-tracking branch 'origin/master'
9ae650a: PyTypedElement for PyReferenceExpressionImpl and TypeEvalContext
f83c9bc: Fixed an AttributeError (PY-4627).
c481e3b: shortcut for coverage action in PyCharm (PY-4745)
f1fd03a: simpler names for testcase classes
a353c63: Extracted PyTypedElement interface from PyExpression
b581935: Merge remote branch 'origin/master'
16378dbe: Merge remote-tracking branch 'origin/master'
51b7c79: Merge remote branch 'origin/master'
4e5ba51: disabled unused import inspection in mako (fixed PY-4755 Mako: unused imports: imports are always unused in module-level blocks)
05df1ba: fixed PY-4759 Mako: import keyword not available in completion for module-level blocks
2f46f67: fixed PY-4773 Mako: SyntaxError: false positive for comments inside python code blocks
ca24661: Merge branch 'cython'
eb68b1f: Updated highlighting and unresolved references for Cython builtins
abd9037: Rename refactoring for Cython elements
1f63875: fixed NPE in addCall qFix
06e063e: Merge remote-tracking branch 'origin/master'
2e5fc05: added django trans tag wrap intention
8961723: Not needed now. Using gant.
aaae747: Merge remote branch 'origin/master'
07ce783: Fixed some bugs in debugger reload: PY-4772 and PY-4770
5f79858: Run configuration for Cucumber with steps written in JRuby & Java
abac4b2: added references from trans and transblock tags of django templates to django.po file ids
01abe75: Merge remote branch 'origin/master'
acd722e: Added test for multiprocess django debug. Some bugs fixed.
7347103: Added quotes to args (PY-4761).
19b168f: Less yellow code.
0ff96e3: Fixed generic parameter.
dc19589: Fixed concurrent list modification (PY-4760).
7faaf0e: Restored exception breakpoints.
a41bc43: Python debugger: get completions restored. Test added.
9420ea2: WebIDE startup script fixed; scripts unified once again
feb6ead: Merge branch 'cython'
c5d8628: PsiNamedElement and NameDefiner elements are not disjoint sets
ef8dd2d: Merge remote branch 'origin/master'
20877e8: Multiprocess debug for Django in autoreload mode.
a6cd2d0: Merge remote-tracking branch 'origin/master'
cdeff49: re-used python indenting lexer in mako templates. fixed PY-4572 Resolve Python imports in Mako templates to corresponding Python code
80aac22: Merge remote branch 'origin/master'
b0bc8b0: per-language registration of spellchecker strategies; move HAML spellchecking support to HAML plugin
2499ee1: get rid of SplitterFactory; each splitter can create itself without outside help
2fb1b68: spell checker thoroughly refactored, part 2: tokenizer receives a TokenConsumer instead of returning an array of tokens; use names validator from the right language to check if an element is a keyword instead of asking all of them
5da05fa: IDEA-75407 (recognize Mac JDK 1.7 preview release)
a1786a1: spellchecker refactoring: Tokenizer.split() no longer creates huge amount of temp objects
80f3436: restore parity of the number of resourceBundle declarations in Python plugin xml files
4a060f4: fixed PY-4573 Find Usages should work for Mako functions
d3071ff: fixed PY-4694 Completion and resolve for loop variables in Mako templates
0197c3e: fixed PY-4695 Provide completion after % in Mako templates
60e0dfd: Fixed NPE in Python string format inspection
89d6f6a: Merge branch 'cython'
d8c42a1: exclude testFramework from pycharm release build
f1e04e9c: skip duplicated bundle definition
6f04577: fixed
bb2844e: bundles fixed
951c13e: specify bundle and since/until build
f15775b: Resolve and references for Cython 'include' statement
6cde6a6: Merge remote-tracking branch 'origin/master'
0d51a9d: add completion features to all copies of ProductivityFeaturesRegistry (PY-4725)
a9c7430: Resolve for implicit imports of Cython definition files
a587a94: Added a check function to explicitly mark the code that knows about Cython
014b37c: Merge remote-tracking branch 'origin/master'
b34a3ec: Resolve for Cython references inside 'cimport' statements
2b93bfa: Merge remote branch 'origin/master'
9ef8a1a: IDEA-74625 (secure temp file creation in startup scripts)
0641868: Resolve for starred 'cimport' elements in Cython
9aa9ec0: Moved IElementType classes for Cython to the elementTypes package
b3abc4a: Same "No JRE check" policy
4973b62: Resolve for all forms of 'cimport' except star import
6809dbd: Fixed misspelled Python.
a8ed646: skip completion popup in floating point literal (PY-4316)
92a2ec7: help topic (PY-4665)
8551fca: don't show a single radio button in dialog (PY-4687)
608e30e: Merge remote branch 'origin/master'
7e30cb0: Merge remote branch 'origin/master'
53e379b: fixed unittest producer
eec97f1: Merge remote-tracking branch 'origin/master'
865ae64: Added resolve for Cython elements imported via simple 'cimport'
9cfd20f: Merge remote branch 'origin/master'
1282960: improved lexer and parser for substitutions and attributes, fixing some bugs: PY-4578 Validate parameter count for calls to <%def > blocks in Mako templates; PY-4574 Better highlighting for function definitions in Mako
f6eb363: inspections as extension points: ruby & python
11dd6bb: fixed NPE in dict key names completion contributor
76b89b8: ASTWrapperPsiElement -> core-impl
a38da17: run configuration extensions: remove duplicates
9ef3bd3: coverage: cleanup
126627d: coverage: replace check if coverage is enabled for configuration with executor check
f9add0e: coverage as separate runner (~IDEA-74522)
87cb411: Merge branch 'master' into cython
2761157: Prevent recursive resolving for return types of recursive functions (PY-4621)
852eea6: fixed function name field in test run configuration
06532e6: duplicates replaced
a26b303: fixed failed env tests
312643f: spelling
3f00bad: kill old deprecated modules inspection which is now superseded by a general deprecation inspection
0dcc234: fixed NPE in dict keys completion
8070b31: Merge remote-tracking branch 'origin/master'
2e3844f: Macro.getDescription() is misleading, rename to getPresentableName
49b879d: live template Macro -> abstract class
a857c1c: Python coverage: file resolve from sys.path
2f8bad4: Moved Cython PSI tests to cython/psi
4aadafb: Merge remote branch 'origin/master'
9581615: coverage.py runner renamed to be sure that Python won't go crazy having module coverage and package coverage.
77096a0: Fixed Jython compatibility of generator3 (PY-3505)
138d79b: Merge remote branch 'origin/master'
2969439: fixed PY-4556 Quickfix to add missing "self." should also work for functions
fb3cc96: Fixed method parameters inspection for 'self' with Cython type declaration (PY-4641)
6036901: Merge remote-tracking branch 'origin/master'
2f0766f: Show detailed progress for generating Python skeletons (PY-3505)
0fb3f19: Let's have branch id specified in just one place -> build.txt
954497b: done PY-4617 Add help button of the Create Test dialog and map it to the specified id
4944a16: removed methods starts with __ from create test action
b901f14: fixed PY-4581 "Old-style class contains call for super method" false positive when superclass is unresolved
8cadf81: Merge remote-tracking branch 'origin/master'
a933fce: fixed PY-4567 Don't suggest to run tests as 'Python docs' when there are no Python docs in that package
b231ca1: Merge branch 'cython'
34a2f8a: fixed PY-4599 Nosetest runner not found
c382d3b: Merge remote-tracking branch 'origin/master'
c640139: magic incantation to have cython element types serialized correctly in stub indices
b83c61b: also register token set contributor manually
5d00b61: register extension point manually in Python parsing test
732bf97: Disable coverage for Django server configuration.
c80a052: Fixed run.
6c6357d: Python coverage: use sdk coverage.py
87e9b08: Python Coverage: report generation, covered line highlighting style toggle, handling of multi-line covered statements and whitespaces between covered statements.
c9845a4: include CoffeeScript in pycharm 2.0 build
5a7e1cd: Temporarily disabled type checker inspection for Cython
2985ff4: Temporarily disabled unused imports inspection for Cython
0ef2b14: fixed PY-4624 Comparison can be simplified: false positive for math equations
fdd0fa4: Fixed wrong test condition in disabling inspections for Cython
79c021d: Fixed bugs after merging with branch 'cython'
eaff2c1: Revert "no cython for EAP"
03feb9e: Merge remote-tracking branch 'origin/master'
cbd93e0: Merge branch 'cython'
e6350c0: Minor fixes.
70b4ec8: Temporarily disabled some inspections for Cython files
f5c6382: Some type cast checks and Nullable annotations for Python element generator
d5d095d: Added Cython builtins and non-reserved keywords highlighting
d888c1f: suggestHomePath() prefers suggested paths from first flavor (regular Python), not subsequent ones (IronPython)
632ca3a: mention GAE in Python module type description
5921678: move Django framework detector to plugin; correctly set up template folders when detecting Django framework
82869f7: TemplateLanguageSubstitutor handles Django substitutions as well; TemplatesService is a module-level service
82877da: option to enable templates in JavaScript moved to template language configurable
cc1eb87: move stuff for selecting a template language from Mako to common templateLanguages package; separate configurable for template language
8f5e78e: Added Cython keywords highlighting
54010b3: Fixed bug in ParsingScope.isSuite flag handling for Python
499180e: Added parsing of Cython integer for-loops in list comprehensions
3b8ade6: Fixed bug in parsing declared Cython cdef functions
9cf10cf: extract language-dependent part out of DjangoTagLibrary; correct scope for parameters of {% macro %}
cd806b6: add coverage to PyCharm build
bfae1b8: Python plugin depends on coverage
06b7613: Pass default project to Python/Ruby SDK select wizard step
dbcec36: last batch of dependencies to core-api/impl
66cbf27: Added parsing of Cython character literals
8dc96d8: correctly register CoverageDataManager
aef05c6: Added parsing of Cython property declarations
0011856: correctly register SendStatisticsProjectComponent
50e9d0a: pull up getRelativePath() to VfsUtilCore
515294d: introduce IndexingDataKeys class to decouple PsiJavaFileBaseImpl from FileBasedIndex
27579e3: Merge remote branch 'origin/master'
1155d16: added python test generator
9a98e8d: Added parsing of Cython cdef class options
763a268: Merge remote branch 'origin/master'
086abe1: Added initial support for Python code coverage. Ruby coverage refactored.
dc4631e: moving lexer stuff to core-api
ffed96d: use PsiShortNamesCache as independent service
802e05d: Added parsing of Cython include statements
6ac9119: Added parsing of Cython integer for-loops
9872048: Fixed PSI tree for Cython function pointers in type declarations
5ad379f: fix yjpagent in vmoptions
83e3604: fixed function name in python test configuration
bd9dc5d: no cython for EAP
d218f6d: Merge remote branch 'origin/master'
ba62c3e: unified borders in python test configurations
8c15d7f: use common VM options in Linux build of PyCharm (PY-4568)
93e539b: added goto test for python
a64c75f: Platform: PsiElementProcessor.execute takes @NotNull element
f415283: Merge remote branch 'origin/master'
6ba7fde: test fix
1833b27: soft keywords in Jinja2 templates
8d58893: Added parsing of Cython cdef blocks, fixed parsing of cdef extern blocks
64f7479: fixed false merge on python test configurations
3f31343: Merge remote branch 'origin/master'
1176233: refactored python tests run configurations
f16ec18: PsiUtil extends PsiUtilCore rather than PsiUtilBase
604d4a6: refactoring to retrieve editor
bbb19ea: Merge remote branch 'origin/master'
13463d8: Added parsing of Cython typedefs
7d1b727: unified Sphinx working directory name
aa8b42b: Added parsing of Cython enums
b7878dd: Added parsing of Cython structs and unions
42e38e2: rename ComponentWithAnchor -> PanelWithAnchor; added interface AnchorableComponent for JBLabel, JBCheckBox, etc. Before, they implemented ComponentWithAnchor.
dc570fb: reverted python test type hint
52d3e11: Merge remote branch 'origin/master'
6e354fe: fixed PY-4397 "Remove argument equal to default" should reformat argument list in a nicer way
a39232c: fixed PY-4551 "Remove redundant parenthesis" suggestion yields to mistake
bd17841: Added 'cdef extern' Cython blocks parsing
61219d6: fixed PY-4392 Python docs run configurations: messed up slashes for input\output fields
91a389a: added sphinx producer. PY-4391 Python docs run configurations / sphinx: provide temporary run configuration on Ctrl+Shift+F10 for docs sources directory
518e265: fixed PY-4548 "Fix all 'Single quoted docstring' problems" results in 5 quotes
0f389fb: Merge remote branch 'origin/master'
f6af80f: fixed docstring statement effect inspection
c4fa7e0: Added builtin and qualified Cython types parsing, cimport statements parsing
ee27c71: Added several Cython decls and macro statements parsing, corrected Cython stubs
32296f9: fixed PY-4398 "Statement seems to have no effect" should handle misplaced docstrings better
b560fe5: Merge remote branch 'origin/master'
8ab8c16: fixed PY-4481 Incorrect highlighting for the last value of a multi-line dict if not followed by comma
0c059bc: fixed PY-4518 "Default argument value is mutable" inspection is not triggered for a dict constructor
c830df0: order independent
b950d5c: register AST factory in Jinja2 test, update testdata accordingly
a4473ef: Jinja parsing initial; parse whitespace control directives correctly
4742f44: Jinja syntax highlighter initial
aafed9c: optimize imports
eeaaf08: Mako template language substitutor handles Jinja2 as well
36ade7d: finishing lookup with a smart enter should actually work like a smart enter
dedf7dd: fixed missed id in python test configuration converter
57ff2e7: fixed PY-4348 update documentation for python tests configuration
e6c6579: added functions completion for mako
1a73327: Removed unnecessary print stmt.
8cae717: fixed PY-4363 Map help button to correct Help reference in Python test Run/Debug configurations
b586dab: Merge remote branch 'origin/master'
abb4ad6: Parsing of Cython variables and functions without complex declarators
46f40b1: chooser made abstract
da70a8e: Merge remote branch 'origin/master'
25c96a5: fixed PY-4534 PyCharm can't find nosetests runner.
a764691: Python code style settings grouped by language
4a1b9a4: Merge remote branch 'origin/master'
b4fb46a: fixed PY-4521 Mako: Element not closed: false negative for doc tag
6879c88: get rid of PySeeingOriginalCompletionContributor
74e1094: get rid of ParentMatcher and Matcher altogether
a4f77c5: simplify UnindentingInsertHandler by not using ParentMatcher
e8eff77: simplify PyBreakContinueGotoProvider by not using ParentMatcher
0a005ca: get rid of SyntaxMatchers and MatcherBasedFilter
ab7329d: don't use SyntaxMatchers.IN_FUNCTION in ReturnAnnotator; more interesting test
ed30d32: it makes no sense to restrict completion of keywords in class context, so get rid of IN_DEFINITION matcher (it was working inconsistently anyway)
117c71d: get rid of InFunctionBodyFilter
2438e76: get rid of LoopControlMatcher
4379729: restore the pattern - we do need it
a31d75a: get rid of IN_FINALLY_NO_LOOP matcher (it didn't serve any purpose in completion)
67a3c2d: cleanupification
234620f: got rid of PrecededByFilter (commented out some patterns which are either not used or don't seem to be applicable)
d46f02b: getting rid of PrecededByFilter
7d2f1e5: got rid of InSequenceFilter
eb63ddb: getting rid of InSequenceFilter
d54447e5: getting rid of InSequenceFilter
df36a7c: getting rid of InSequenceFilter
56e24f3: XDebugger: creating different editors for expression and code fragment mode AppCode:Debugger: do not show incorrect parsing errors in evaluate dialog (OC-1434)
9cae604: XDebugger: less intrusive 'show more' link for the full value AppCode:Debugger: Displaying a full value of the variable in the popup
f0e11f1: added tag completion for mako
dc6486a: Merge remote branch 'origin/master'
94ec71a: added braces handler for mako (substitution and control structure autoinsert)
c98472c: Correct step-into for Django template debugger (PY-3819).
2d4863e: Removed strange empty space in console toolbar (PY-3612).
51ab731: Smart step into support for Python debugger (PY-1124).
70dd80c: added quote handler for mako templates
4cd1863: register ResolveCache as independent service, no need to get it through PsiManager
6340d05: Added very basic Cython variable declarations parser
bfe94d0: use constant for idea marker class
6d70049: Merge remote branch 'origin/master'
c85f394: Enabled Cython lexer and parser
3b1be1f: Merge remote branch 'origin/master'
10cb974: added references instead of goto provider for mako templates
8271937: reorganized mako package
0f9963e: simplify completion contributor registration; gather completion-related stuff in a single package
c048f4e: simplify completion contributor registration
a4b7554: fix read access assertions when looking for inheritors (PY-4478)
dd4c424: SOE protection when resolving type references (EA-29174 - SOE: PyReturnTypeReference.isBuiltin)
c9a28ce: ignore InstructionNotFoundException (EA-29524 - PDUUINFE: PyDefUseUtil.getLatestDefs)
e0ec98a: added brace matcher for mako templates
39a1b54: Added initial Cython processing classes
90bc816: Merge remote branch 'origin/master'
ec1ebbc: new update channel for PyCharm 2 EAP
ca76b73: added commenter for mako template language
1829a14: NPE on reading run configurations from old PyCharm
b689c20: since IdeaProjectManagerImpl went to platform, use a different class to detect platform prefix
cd19310: Merge remote branch 'origin/master'
504bf2a: fixed PY-4352 Replace + with string formatting operator: SIOOBE at java.lang.AbstractStringBuilder.substring
a117797: simplify
e67303e: converter for old test configurations
313ef01: check for local scope dependencies when selecting initializer place (PY-4459)
dd6ac2c: refactoring: get rid of PyIntroduceSettings
38136f8: no need for OccurrencesChooser to be abstract
96b2dac: refactoring: pass entire IntroduceOperation to checkEnabled()
783124c: refactoring: use EnumSet of available introduce places instead of boolean flags
7961d7e: correct name uniqueness check for constants (PY-4409)
266002e: in-place refactorings don't suggest names of built-in types (PY-4474)
34a677b: don't delete the statement containing the introduced field if the field is not initialized in constructor (PY-4437)
4aab676: fixed PY-4402 Code compatibility inspection doesn't know cPickle.load and cPickle.dump in Python 2.4 and 2.5
d8b0430: Completion for magic functions (PY-4470).
1de34a5: Merge remote branch 'origin/master'
341f905: Fixed highlighting when arguments are passed to magic commands (PY-4468). Also exception fixed (PY-4467).
e91dd98: fixed PY-4426 Mako Template: Changing integrated tool to Mako takes place only after PyCharm restart
dca44df: Merge remote branch 'origin/master'
05c09d3: fixed CR-PY-259
d7f2cb7: added caching for test framework state and VFS listener for update (CR-PY-228)
bc3d5bb: fix PY-4414 for Introduce Constant; cleanup
e426ebe: merge in-place and regular tests for introduce field
149747b: fix PY-4414 comment #1
cbcabb5: tests for in-place introduce field; fix PY-4453
72b1cef: another case of meaningless introduce (PY-4456)
7f7b9d8: SIOOBE (PY-4455)
9645530: Python parsing tuned for IPython console special characters (PY-1017).
a9c8c0e: Merge remote branch 'origin/master'
2f8d994: @NotNull
41d1dca: .bat scripts fixed (support for paths with spaces)
1d2ae3c: IndentHelper refactoring, get rid of HelperFactory/JavaHelperFactory
34e5c24: Merge remote branch 'origin/master'
01ba9db: added goto provider for mako templates
0b3a208: Merge remote branch 'origin/master'
ad0c70d: Little cleanup.
f185708: Hyperlinks for IPython output.
fd82732: Always convert to ANSI colors text attributes.
25505f9: Better IPython processing.
b5f81fd: Fallback to Python 2 stdlib types for Python 3
3817ffe: Disabled add field quickfix for unresolved operators (PY-4364)
936c4c0: Source highlighting for IPython console. Added ANSI highlighting(PY-885).
c28b5da: Type checking fixes: Python strings are iterable, classes are callable
26a05cf: Type annotations for Python builtin functions
9a9a892: get rid of identical method implementations in PsiLanguageInjectionHost interface, delegate clients directly to InjectedLanguageUtil
a2ca468: about/splash for PyCharm 2.0 EAP
a000722: Type assertions for assertIsInstance (PY-4383)
8d83323: PyTypeTest cleanup
e3d1523: Added type assertions for callable() builtin (PY-3892)
2c8a328: Added support for not isinstance and not None type assertions (PY-2140)
dedcf13: Fixed bug in type evaluation for Python references inside isinstance checks
6c3bfd3: run configurations layout with anchors
ef07afe: Merge remote branch 'origin/master'
63f28f7: Fixed some issues with line-by-line input in console.
8b0af11: Corrected help name for mac.
5f12d8f: Added help for mac.
9d28b87: Help restored.
33f1c9f: Merge remote branch 'origin/master'
015bfa5: Added Env tests for Python console.
551e008: occurrences chooser generalized
011636d: merge two typed handlers that are active in the same context
e78b20ae5: cleanup
8de21b4: PythonSpaceHandler is now a TypedHandlerDelegate
2d92035: always group usages by file in PyCharm and *Storm (IDEA-63711)
78d1802: separate group for usages with mismatched signature
6e66300: assorted cleanup around CallArgumentsMapping
c6c5522: pull AnalysisResult to upper level, rename to CallArgumentsMapping
7d674cc: cleanup
22f31e8: getImplicitArgumentCount() refactoring
feb73c7: no ex
256988e: accidental commit
0ec1edf: separate group for usages with mismatched signature
43fd108: fix declaration range handler registration for Python (PY-4010)
6a93985: named tuple extends tuple, so members of tuple and other class members should be resolved (PY-4349)
899a596: fix parsing Jython version string (PY-4355)
b2caf6c: one more case for suggesting to import a file into itself (PY-979)
ade9198: Fixed console paging for ipython. Fixed completion.
c68af1a: Python interpreter paths added by user should be in PYTHONPATH of Console too.
6cdf40a: Merge remote branch 'origin/master'
12d991b: Changes console API.
fc52035: test for PY-4371
53cb337: more correct way to check for imports that resolve to their containing file (PY-4372)
95be2d0: more robust code to put caret on field name (PY-4414)
10a59a7: to ensure correct undo, perform the refactoring in moveOffsetAfter() and not in finish() (PY-4416)
2f12ad0: fix name uniqueness checks for introduce constant and introduce field (PY-4409); refactoring
3c8799d: balloon cosmetics (PY-4413)
9a6764a: failing test for PY-4419
b48bc5b: test fix: support tracing in TypeEvalContext; move stdlib type cache from PyStdlibTypeProvider to PyBuiltinCache
0fc37dd: finish splitting test data for PyArgumentListInspection
06bc7c1: .bat scripts fixed and unified
c525653: Merge remote branch 'origin/master'
c5b6c61: Merge remote branch 'origin/master'
0aac47d: Changes from Pydev got backported. Optimization performed.
f7743ce: Merge remote branch 'origin/master'
35f3457: no implicits if possible
cca6593: optimize and refactor (fast checks first)
6dc56e5: qualified references are not going to resolve to built-in set
9696571: resolve without implicits for injection
704d2ae: optimize inspection (fast checks first)
91fce0a: analyzeCall(), resolveCallee() & friends accept as parameters resolve context, not just type eval context
745f1a4: cache results of findShortestImportableName()
cb31873: Moved backwards to console package.
98ec4fd: Moved to upper level.
c87e252: Moved to upper level as it is in Pydev.
767cc32: No need to import django_settings here.
74d4958: Merge remote branch 'origin/master'
89532d7: correct equals()/hashCode() for PyReturnTypeReference
752b7b5: first parameter of @classmethod method is callable
d637fce: Merge remote branch 'origin/master'
11debc3: fix test
a72c04b8: filtering imports in Python usage view (PY-4386)
602439b: Find Usages of Python method asks to find usages of super method if one is present (PY-4384)
d639907: fix formatting of lambda expressions
c5142c9: tweak lambda parsing: don't put colon inside parameter list (for consistency with regular functions)
5d902df: read access
76e0ae3: Merge remote branch 'origin/master'
23bc3d7: Revert file normalizing.
83fe264: in-place introduce field & constant
ca58c1d: consistent naming of classes
a147d32: refactor introduce refactorings to store all parameters in a bean class; support in-place introduce variable (PY-2141)
9cda6ff: fixed Python Integrated Tools panel
1ffa897: Merge remote branch 'origin/master'
afc0d0f: added lexer level for mako template
0292bd7: Merge remote branch 'origin/master'
9454edf: Fixed default task. Removed *.pyc and *.class from egg.
28c6a17: Added pycharm-debug.egg to plugin build.
2fc9174: Add JNA to startup path
64aba7b: Add JNA to startup path
8166f95: Merge remote branch 'origin/master'
ee127da: Fixed paths.
496738d: Gant build script for python plugin.
d1d26aa: decouple ItemPresentation from TextAttributesKey
9be32ba: separate interface and implementation for ProjectScopeBuilder
7e561b7: rename and new border factories for borders with large font
e99ab0e: setting tips to labels
2a9ab33: rename
75d3090: Merge remote branch 'origin/master'
dbc9fbd: added icon for python tests
8dbd956: Run configuration's form refactored and some other forms too
fd55e21: Removed unneeded normcase in remote debugger (PY-4244).
33476ff: Merge remote branch 'origin/master'
fd6d7ad: improved python test run configurations
bbf9a29: allow to include RelaxNG in used jars (PY-4331)
c6a2e3b: read action
4ae583c: optimize imports
6cf7939: NPE
75a25f4: Merge remote branch 'origin/master'
432264c: remove redundant pydev console check (CR-PY-227)
d894e0a: fixed PyUtil (fast/slow checks CR-PY-151 )
23e8a32: Added check for reference names equality.
e8d8d31: Detecting whether we are in console or not, by CONSOLE_KEY in user data.
8b50464: Fixed find usages for reassigned class and instance attributes (PY-4338).
49ba12d: Merge remote branch 'origin/master'
1c81465: very big and long compilation fix
11ad50b: Fixed find usages of reassigned instance attributes (PY-4338).
80d2f32: fixed PY-1738 Pressing Enter inside string literal should split it in two, as in Java
cee5cce: fixed PY-4255 Run default for working-directory is seemingly not applied to temporary configurations
228a6d1: Fixed strange condition in console code execution (PY-4329).
81cd66d: fixed py.test configuration to be subclass of AbstractPythonTestConfiguration
9301df8: fixed py.test producer (existing configurations)
859d698: fixed PY-4293 Conditional expression in return clause is not flagged as incompatible with Python 2.4; removed duplicated tests
56939f7: fixed PY-4212 Triple double-quoted strings inspector erases original string
d3283fb: Remote debugger: fixed attribute access in stdout redirect.
f77658d: Merge remote branch 'origin/master'
55c93ba: fixed python test runner tests according to new configuration type
1499e47: moved all python test configuration types to one
2067bef: fixed PY-4165 restructuredtext tasks should be associated to the current file; added configuration producer for docutils tasks
228aa55: delete declaration of service that no longer exists
f37be5e: Fixed import resolve from console.
c294d44: Fixed 2 NPEs.
5151eec: Run configuration's form refactored and some other forms too
deee46c: add core-api and core-impl to build scripts
848bf3e: Merge remote branch 'origin/master'
1b09b07: PY-3647 improved docutils run configuration, added sphinx run configuration
c88fbe4: Python console: added wait for previous execution ending (PY-3573).
f98e401: Fixed type eval context for unresolved references inspection (PY-2308)
0faac83: Fixed bug in call arguments matching after isinstance, new type assertions in Python CFG
0d92215: moved set labelFor to UI designer
cc28086: fix build again
1085271: fix dependencies for building searchable options
6b16c5e: Merge remote branch 'origin/master'
b445122: Remote debug: now there is no need to restart server (PY-2631).
281fc79: fix tests?
d715a57: correctly update language level for subdirectories of SDK path (PY-4307)
6889784: add relaxng to classpath
d312590: move relaxng support to platform
7741c06: bundle rng to pycharm and rubymine
029d3f8: added tests for enter handler (CR-PY-144)
ff5dff6: moved oldStyleClasses inspection tests
1198084: Some code style API changes and doc comments
24e559d: fixes according to CR-PY-151
61406fa: removed ugly configuration names in py.test
e5b0adc: added Alt-key accelerators (CR-PY-149)
a8cc632: removed duplicated code (CR-PY-147)
2b67ae3: Allow null values as unknown types in Python union types
25fd234: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c5408c4: calling non callable inspection recognizes callable type (CR-PY-203)
7adce93: cleanup
b42a2f3: split test for 'calling non-callable' inspection
b342044: add some more methods (PY-4305)
a6c9f9b: NPE (PY-4303)
e69cb44: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3538747: Added references and types for unary Python operators
c29623d: Merge remote branch 'origin/master'
e3e332f: Added hashCode for PyOperatorReferenceImpl
b2cfc62: Merge remote branch 'origin/master'
63c8cf5: fix tests: the completion-related typed handlers should really be the first, sync means sync
866f9a4: Fixed bug in arguments order for reversed Python operators, added type annotations for them
4dcfada: Better messages for unresolved operators (PY-4298)
947bc78: Merge remote branch 'origin/master'
898ca32: Python console: take into account Python language level (PY-2694).
5b976b2c: Added reference, type checking and default type for "in" operator in Python
ef98dfd: Removed redundant code in type checking inspection
2ac77af: implicit resolve prefers results imported in current file (PY-1310)
1ee8db0: fix location string calculation for non-stub-backed elements
6da3b95: add __isub__ and friends to built-in methods (PY-4290)
07420b4: don't expand Python project view nodes on double-click (PY-4295)
1ed931a: cleanup
a798d3d: use scope without stdlib tests for implicit resolve
213cb9d: include class name in location string for instance attributes
d2693b6: let's advance stub version to make sure indices are correctly updated
a6f9e82: implicit resolve works also for instance attributes (PY-4292)
9845ac2: missing closing paren in code style example
f21702d: prioritization of implicit resolve results (initial)
a619953: Unresolved and untyped Python comparison operators return bool value
d254d50: Fixed CCE in PyQualifiedName caused by unsafe casts inside unwindQualifiers (PY-4286)
0e58281: Fixed console prompt inconsistency (PY-3307).
5f6f329: use 'instance' attribute instead of deprecated 'implementation' for configurables
0eafd40: Fixed cursor position with help prompt (PY-3306).
fb6ca13: Merge remote branch 'origin/master'
0646b9e: Return type of elements in sequence if __getattr__ returns None, required for Python stdlib type database
1b42f83: Type checking for main builtin subscriptable types (PY-4164)
7df77a6: Cleanup
c0ac51b: References and type checking for subscription expressions in Python
c2365f0: Added test for slow Python type eval context during type checking
4626376: named tuple support (PY-1360)
257951e: added possibility setting large and small font for titled borders and separators
200e6a0: Merge remote branch 'origin/master'
7ca3432: allow passing null to PyType.union()
a539421: PyTypeCheckerInspection always uses context from inspection
61597cd: provide origin in TypeEvalContext also for binary operation reference
164142c: Merge remote branch 'origin/master'
ce1e5db: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cedfedf: Moved to lang-impl.
f0d42e3: no argument default values in auto-import popup (PY-3588)
8632a44: always allow return type analysis in owner file (PY-2007)
a85f585: store origin file in TypeEvalContext; always allow DFA in origin file; provide origin file in references found with findReferenceAt() (PY-4278)
6c962c8: Merge remote branch 'origin/master'
ed05b70: Validation of fields in facet panels (PY-1699).
98cb4b5: support "show members" option for Python project view (PY-1865)
efc2abc: rename keyword arguments in calls when a named parameter is renamed (PY-3890)
e28cb81: options for alignment in function declarations and calls (PY-3995)
84dd1aa: Added new reference type for Python operators, support for operators with reflected arguments
3587c4f: IdeBorderFactory refactored for new titled borders
048f026: Changed CodeStyleSettings.IndentOptions to CommonCodeStyleSettings.IndentOptions
cd9a7dd: Django tags matcher (PY-2812).
a47aa41: No Django breakpoint tabs for non-django projects (PY-3905).
47e091f: Removed wrong code.
5e153ee: Test fixed.
3b78c94: improve Ctrl-hover popups, show name of function that owns the parameter (PY-4155)
cac2b54: Merge remote branch 'origin/master'
059d692: resolve target expression to self if trying to resolve it got a result in another file (e.g. builtins) (PY-4256)
2898448: Ignore Python union types with unknown components during type checking
694dadf: Fixed nondeterministic ordering in union types, added test data (PY-3496, CR-PY-170)
ddb080e: Merge remote branch 'origin/master'
6bfaa7b: correct scope for search in project and libraries (PY-4225)
4c7567a: show super icon for overridden class attributes (PY-4207)
577e716: Sorting of Python union type members is performed during latest definitions search
c6f3922: Python old-style classes are subclasses of "object" from the type checking point of view
68b0309: Added CFG type assertions for "elif", no union types in type assertions for Python
8cc4f4f: Slow type evaluation context for Python binary operators by default
bf7b3c8: Added quickdoc and tooltips with inferred types for variables (PY-3496)
2f945e9: handle inheritance
7e86bdf: update __slots__ on rename (PY-4195)
ae532c1: annotate stacktrace only in "analyze stacktrace" and thread dump console. recent changes annotation setting also put into "analyze stacktrace" dialog
3616297: added NotNull annotations for EA-28713 - NPE: DefaultJavaProgramRunner.addDefaultActions
7a4bd98: ResolveImportUtil.visitRoots() handles module dependencies (PY-4270)
d94c116: add source roots and compiler output paths of dependencies to PYTHONPATH (PY-4238)
d96d067: support "Spaces before method declaration parentheses" option for Python (PY-4241)
e43654e: allow suppressing PyCharm inspections from toolwindow (PY-2103)
8e584b7: Narrowed return types of Python builtins for better user experience in type checking inspection
8687c52: Added references and type checking for binary operators in Python
80802c5: rounded borders removed
144f05b: new framework detection: migrating facet detectors, part 2
aea21a1: new framework detection: migrating facet detectors
b85aaa9: border's font switched from Bold to Simple and some refactoring
13a385e: Merge remote branch 'origin/master'
4961269: PsiUtilBase.originalElement => CompletionUtil.originalElement
0363e79: More code style API changes
7450c7f: Added Java EE names tab to Java code style settings, some API changes
d3d99d0: Django templates debugger: step-into = step-over (PY-3819).
4a01c8a: RunToCursor implemented for Django templates debugger. Relates to PY-3419.
9874279: Race condition fixed (PY-4234).
880c14d: Merge remote branch 'origin/master'
a54221d: forms refactoring: run configuration forms
d222418: Added checkbox to show inherited members in the file structure popup (PY-3936)
9a6641b: Keywords "and" and "or" don't correspond to "__and__" and "__or__" in Python
c9c620e: Merge remote branch 'origin/master'
74ba85b: added method to choose single file in FileChooser; calls to FileChooserDescriptor constructor replaced by FileChooserDescriptorFactory methods
3252ca5: Merge remote branch 'origin/master'
55e2af0: PyPy interpreter support (PY-1625)
ee7d005: Type inference for with statements according to context management protocol (PY-4198)
8338191: Show types of Python variables in tooltips
f4f19c4: More Sphinx cross-references syntax in docstrings (PY-4223)
0b9d42c: PY-3647 Run Configurations for restructured text files; added docutils run configuration; removed docutils actions
9c154a9: avoiding debug mode
b01cac3: do not insert id for Storage when only one can be used - provide default value
1e47c8b: Fixed false positive in unreachable code inspection in try-raise-finally without except (PY-4208)
e433374: Refactored Python docstring parsing, more careful handling of formatting inside type tags
93996d1: Django templates: completion of variables from extra_context for direct_to_template (PY-3956).
e04200f: Stick selection in stack frame after step over (PY-3943).
9730776: Added xml escaping of log message (PY-3998).
d057e3d: Merge remote branch 'origin/master'
a7435b3: Configurable script names for Create Launcher action; avoiding sudo if path is accessible but incomplete
f84db12: Merge remote branch 'origin/master'
302ab11: Grouped spacing/blank lines/wrapping panels to a single panel with tabs for each type of settings.
e3b2feb: Ordering code style tabs/panels according to key IDE language (see IDEA-72177)
4caab09: Merge remote branch 'origin/master'
97cdef1: pushing faster + @NotNulls
2f5e111: light parser testcase
7098b25: method of border factory renamed and some forms refactored
3a37f2d: Rewritten Sphinx tag parser for docstrings in order to support complex type expressions in :param tags
360da2c: References to subparts of types in docstrings for move and rename refactorings (PY-4127, PY-4129)
9331bd0: Merge remote branch 'origin/master'
d9409a1: added PY-4040 Inspection on using super in old-style classes
77e6253: Multi-tab code style settings for Python
b5807aa: improved py.test configuration producer
d7cf08f: unified pytest run configuration, added parameters, fixed browse listener in target; fixed PY-4063 Improved pytest Configuration Pane
103fd9a: send non-physical change event + track invalidation of elements in JavaResolveCache
547ed34: fixed PY-4205 Doctest runner ignores match pattern
02d190b: Merge remote branch 'origin/master'
3b8a6e5: Startup scripts unified a bit; regression in Java lookup on Macs fixed
b29f549: fixed problem with highlighting escape sequences in concatenation of 2 string literals
613b51a: Merge remote branch 'origin/master'
97551a9: Fixed renaming module usages (PY-4200)
324f5f8: Fixed renaming package and subpackage usages (PY-3991)
ccff5e0: Added some forgotten changes to a bugfix for PY-4173
f596ef0: Fixed false positive in unbound locals for assignments to builtins in classes and at the toplevel (PY-4197)
4487985: Show all definitions of elements with the same name in structure view (PY-4173)
596cf7a8: Merge remote branch 'origin/master'
f794e3d: fixed runnable script filter to use PyIntegratedTools selected test runner
2e4a10d: correctly break string on Enter (insert \\ in single-double quoted)
5c4c311: Updated Python CFG builder tests related to try-finally statements
9993d8b: Removed special handling of try-except in unused locals inspection because of new try-except-finally CFG (PY-4151, PY-4154, PY-4157)
97fa494: Rewritten unreachable code inspection using CFG backwards traversal (PY-4149)
fdaab2c: Fixed bugs in unbound local inspection related to try-finally CFG rewrite (PY-4150, PY-4151, PY-4152, PY-4157)
9ecf8af: removed non-ASCII chars from pytestrunner
4bb9f28: fixed PY-4192 Python Console window shows "super()" syntax as error even when using Python 3 (disabled compatibility visitor in console)
3b20fbb: added py.test runner. fixed PY-4168 PyTest: detect pytest in virtualenv installed globally for some interpreter on Windows and other detection problems
2b7ba6d: remove javadoc for non-existing parameter
54c9f5a: Fix startup scripts to work on Solaris
7dd4d56: fixed PY-4181 Strange behavior when completing dict keys
6cb3279: Merge remote branch 'origin/master'
c468fff: Create new files in move refactoring according to Python Script file template (PY-4092)
adfb060: Fixed handling inner imports in move refactoring (PY-4182)
7f116ca: fixed PY-4169 Docstrings are not highlighted properly right before EOF
f098e80: Remove unused imports from source file after move refactoring (PY-3529)
46ebe6f: Update type references in docstrings during move refactoring (PY-4130, PY-4131)
5208baa: PyClass can now get its method resolution order.
6f1eeb7: PyClass can now get its method resolution order.
48820ac: removed redundant code for colors page (CR-PY-133)
ce101bd: merged 2 if statements in PyStringLiteralLexer (CR-PY-134)
87ca199: Show error message if move refactoring cannot be performed on selected elements (PY-4093)
550c8c3: More careful selection of elements to move in move refactoring (PY-3883)
f8879c7: Fixed "Preview" button in move refactoring dialog (PY-4170)
1f60761: Fixed updating "import as" in target module during move refactoring (PY-4095)
4ce86ad: Canonical import name os.path for ntpath Python module
221f0d8: Fixed move refactoring for multidotted imports (PY-4098)
d1ac12b: added IdeBorderFactory in test runner configuration
183d969: added docstringUtil class
00d34ae: fixed pystringliterallexer test
1ed997c: fixed stringliterallexer constructor
8f93598: fixed comment CR-PY-127
81f0b01: improved PyStringLiteralLexer according to new token types
a0c3c01: fixed string literal layered lexer
b098e90: fixed comment for CR-PY-108
d0bf1c2: fixed PY-4134 Console: Single quoted docstring inspection: looks not at place in console
7603f08: fixed PY-4135 Single quoted docstring: duplicated with cursor between opening and closing quote
39f43e8: fixed PY-4129 Refactor/Rename: type names are not updated in rst docstrings if type specified in docstring as :class:`SomeClass` or :py:class:`SomeClass`
5e972d9: fixed ugly view in test run configuration
713f05e: fixed PY-3916 One line doc-strings are difficult to achieve for existing methods/functions
e7f7318: Merge remote branch 'origin/master'
0f3d297: Better guess java location from path (may point to JDK instead of JRE)
a3e229c: fixed getType for PyStringLiteralExpression
95450fc: optimized docStringAnnotator (do not analyze docstring if we don't have structured docstrings)
ba1ae08: fixed document updating to work with layeredlexer
9493de2: removed redundant test code
37a9885: added highlightinglexertest to AllTestsSuite
1d4e0ed: removed redundant UnicodeOrbyteLiteralAnnotator, revert highlighting to the LayeredLexer
3b1307b: honor parent value of JYTHONPATH (PY-4109)
2adb07f: correctly parse tuple as dictionary key (PY-4144)
1d94bf0: help ID (PY-4107)
254076c: help ID (PY-4108)
3ea5bf8: help ID (PY-4110)
ce89d9a: help ID (PY-4111)
257f2c8: help ID (PY-4112)
80b61f8: help ID (PY-4113)
666266a: help ID (PY-4114)
eb637ce: help ID (PY-4116)
bc43c65: help ID (PY-4115)
0c59c2e: fixed name for Sphinx documentation sources
a39f18d: replace constructor with factory method
3f98cb8: fixed PY-4131 Refactor\Move: return type names are not updated in docstrings
43ac25d: Merge remote branch 'origin/master'
ef175a3: separated token types for single/double and triple quoted strings; added lexer-level bytes/unicode detection; added lexer-level docstring detection
1cf4b1d: Separated success and failure control flow for finally clause if no except clauses found (PY-4102)
a5e38c8: Refactored Python try-except CFG builder
9636037: PyCharm: let's bundle all the tips of the day in resources_en.jar
77526fa: Applying new titled border
fc73496: PY-4082: correctly map things like (*args, a=... b=...) in Py3k.
978ff21: Merge remote branch 'origin/master'
5583aa6: Added button to show fields in structure view (PY-3936)
2c3269c: PY-4103: Python forgets about start param in enumerate().
46a358d: Show inherited members in structure view (PY-3936)
5aa0bfa: Show Python underscored variables in structure view as protected
487045f: rename correctly updates type names in epydoc docstrings (PY-4101)
d5f7364: help ID (PY-4091)
75b78dc: help for Python extract method dialog (PY-4097)
718394a: prepend system pythonpath to pycharm generated one (PY-3541)
2edd245: implement folding for docstrings and "collapse by default | method bodies" option (PY-2068)
6a51d99: commas don't need to be aligned (PY-4034)
05ec420: fixed a case when formatter breaks code (PY-4034 comment)
cbd9245: correct skipChildProcessing check when switching from Python stubs to AST (PY-4056)
5a74d64: search for ipy.exe in path (PY-4035)
760268e: help topics for plugin (PY-4087, PY-4088, PY-4089)
c9a2641: fixed PY-4084 Invalid manual variable types declaration syntax for rst
75ed190: fixed PY-3849 Support for type detection when type specified in docstring as :class:`SomeClass` or :py:class:`SomeClass` sphinx role
8f36cca: Merge remote branch 'origin/master'
cf5a3c8: PY-4073: correct signature of print().
8129586: Allow moving classes/function to new Python modules (PY-4074)
d9b0a5f: Public placeFile() method for creating Python modules in packages
d327049: delete obsolete help sources
41e68a5: one more fix to use correct library table model (PY-4080)
320f300: better clean (another attempt to fix PY-4072)
bd70c07: fix
de6ec37: add help to plugin build (PY-3780)
54f5cd0: don't use Sun internal class
b4e2127: Temporarily ignore preferred import style settings for updating imports of moved functions (PY-3971)
b79f528: Use canonical names of some stdlib modules in imports for move refactoring (PY-3969)
c0e8905: Fixed invocation of move refactoring on function parameters via F6 (PY-4079)
1b77b84: a working syntax for specifying variable type in Sphinx docstrings
9cd2ddc: rename test to match what it actually tests
b50d7ac: dead code (fix for PY-4072?)
5e5a767: added tests for PY-4017
1758259: Merge remote branch 'origin/master'
29b4ccf: fixed PY-4017 Add super call quickfix should not introduce additional parameters to the inheritors constructor if possible
031128f: Don't check if callee is callable for Python type references
e1dafff: Fixed type checker for assignments to callee (PY-4025)
5dc6710: Fixed updating qualified names in move class/function refactoring (PY-3929)
c825c0c: fixed CR-PY-7
a0d1523: update SDK combo in Python facet editor when list of SDKs changes (PY-3875)
da609be: couple of fixes for correct update of Python SDK library (PY-3878)
7a0fa64: don't show empty line in interpreter list (PY-3879)
9feaf34: include Python integrated tools configurable in plugin (PY-4021)
17178af: tweak Python parser to handle incomplete code better (PY-4053)
4bec759: let's not have Ruby code in Python control flow builder testdata
e8aabad: Moved Python ABC utils into a separate class
5da806b: added tests for PY-4038 (CR-PY-8)
91f6ac2: Merge remote branch 'origin/master'
e5ef82f: Fixed calling non callable inspection for class assignments (PY-4061)
c3c5b3a: Merge remote branch 'origin/master'
e4651ee: A naive fix for PY-4043
3b9db69: Fixed callable inspection for both direct and __class__ constructor calls
bf4fbae: Fixed move class or function refactoring for "import as" constructs (PY-3929)
f817a11: Merge remote branch 'origin/master'
bf76340: fixed PY-4038 Call to constructor of super class is missed: false positive for super call with self.__class__
6fa716e: fixed PY-3836 With default python interpreter on Mac OS PyCharm is not able to find valid py.test and PY-4045 pytest does not work with virtualenv (when pytest is installed globally)
e6ee221: Merge remote branch 'origin/master'
0fd6235: Handle bin module paths as per PEP-3149
93a112c: String format inspection compatible with Python stdlib type annotations for integer types
295ea72: fixed PY-4048 Bugs in type detection with sphinx combined parameter type and description
7ea187c: Calling non callable inspection now uses Callable ABC check
f112bd6: Highlight attempts to call a module (PY-3872)
b6c0688: Moved well-known Python names to PyNames
2a56833: More types for Python sequences and files
a216217: Weak warnings for functions that cannot be analyzed by DFA (PY-3823)
baff936: Basic support for parametrized collection ABCs from Python stdlib
db6f89c: Extended method types for str in Python 2 to str or unicode (PY-3986)
eaffdce: better parser error recovery for keyword as default argument value (PY-3713)
10c5c2a: complete/resolve members of 'type' type on definitions of new-style classes (PY-3989)
b6e322b: colon is a tail type, not part of the lookup text (PY-2652)
d6b38bf: no stubs for variables in list comprehensions (PY-4029)
8c042c0: don't include declarations under if __name__ == "__main__" in stubs (PY-4008)
b16acc5: accept either 'self' or 'cls' for all methods in metaclass (PY-1224)
4062d5a: cleanup
309ba33: no 'print' keyword in completion list under Py3k (PY-4028)
14a4f8e: use Python 2 SDK if possible for running epydoc/sphinx (PY-3804)
dba64fb: a trailing comma in argument list is not an error (PY-4016)
5ed2aec: separate main module for PyCharm run configuration
57aeb3f: Merge remote branch 'origin/master'
4d93d5a: fixed CR-PY-1, added test
48b9054: abort rename when unexpected exception occurs (check didn't detect e.g. rename file which is used by another app: IDEA-71413)
6eb9420: fixed failed tests and added test for PY-4027
f293151: Merge remote branch 'origin/master'
8e25d04: fixed PY-4027 Call to constructor of super class is missed: false negative for wrong super call
32a1c60: fixed PY-3942 Test runner doesn't seem to notice py.test tests have completed when using classes
aaaf9ec: Don't offer to move outer functions/classes if an inner element is selected (PY-3883)
2730c31: fixed PY-3942 Test runner doesn't seem to notice py.test tests have completed when using classes
823b3dc: fail() call breaks control flow (PY-3886)
6fe317f: fixed PY-4020 Byte literal contains characters > 255: gets duplicated on every symbol > 255
5dab392: optimization: do not encode whole file content to bytes on reparse
0595c1d: Fixed false positive for fallback classes in try-except ImportError (PY-3919)
08ea6dc: Merge remote branch 'origin/master'
283216c: 'show implementations' for groovy implicit accessors
ba2878e: Merge remote branch 'origin/master'
8bca870: fixed registerProblem for new style classes in PY-3315
93fbf45: fixed PY-3315 Call to constructor of super class is missed: quick-fix for inspection (comment)
e038fc1: fixed PY-3907 Missing empty or incorrect docstring: not updated properly on changing settings
10ca0aa: perform parameter rename without using BaseRefactoringProcessor, avoiding its invokeLater() call (PY-3914)
70767a4: EA-28293 - IAE: FileUtil.pathsEqual
f12b869: Merge remote branch 'origin/master'
39e5fea: Added modules strictly requiring generator 1.94
7d97b7a: Removed unused KNOWN_FAKE_EXPORTS data.
c88f8d8: skip wrapper-only base classes, fix 'failed to find module', sqlite as fake reexporter.
9be544b: Merge remote branch 'origin/master'
24c3cbd: WIP: ignore sip fake bases
5812115: Fixed worst Qt / pygame import issues (PY-3985)
975569c: name of library root doesn't have to be a valid identifier (PY-3933)
8be451d: PY-3968
233472e: honor "around class" option for blank lines before non-top-level class (PY-3729)
1b21cbb: declaration range handler for PyCharm
558ee2c: compile .qrc action actually works
79d6c02: compile .qrc action
f8e2cd9: .qrc files are XML
16bbbb2: double-click on .ts file opens it in Qt Linguist
778f3ec: double-click on .ui file opens it in Qt Designer
4356759: ignore max.intellisense.filesize for skeletons
0fdf5e8: imports fixed
5042838: inspection highlights deprecated members
f3c219c: Fixed django exception breakpoints handling for Django before 1.3.
c845376: Fixed python and django breakpoints expression logging (PY-3961).
680278a: redundant casts: cleanup
1b11e99: platform icons
4e58847: Debug console execution fixed (PY-3964).
a0ebede: Django template debugger: step over fixed.
d5ebccf: Condition and log expression handling in Django template line breakpoints (PY-3962).
cf26b43: platform icons
3e6d7c8: PY-3898: bumped generator3 -L timeout to 4 minutes.
60fb8a8: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9bda869: Django template debugging: added filename normalizing.
dbeaef3: Refactored a bit.
1c7cac9: IDEA-70559 Add "-Dsun.java2d.pmoffscreen=false" line to idea.vmoptions file of linux/unix build of IDEA by default
35f65de: Special case of super(AnUnresolvedClass).something not to be marked belonging to super or unresolved.
b3360ec: Merge remote branch 'origin/master'
5060301: fixed PY-3924 PyTest: IAE at com.intellij.execution.configurations.ParamsGroup.addParameter
d2aa04e: fixed PY-3849 Support for type detection when type specified in docstring as :class:`SomeClass` or :py:class:`SomeClass` sphinx role
3090dd4: Merged ruby/python spellchecking modules with main language modules.
e5de7f4: Merge remote branch 'origin/master'
e1233a0: Added __exception__ to variables frame when stopped on exception break (PY-3742). Fixed TemplateDoesNotExists exception break handling (PY-3738).
fc80e15: Fixed debugger to work with python 2.4 with django exception breakpoints on.
72c0d63: Django isn't importable error message not updated when selecting interpreters (PY-3841).
2ed2c18: remove duplicate class
7d70899: diagnostics for part 2 of PY-3909
5f47232: set JYTHONPATH to the same string as -Dpython.path (for better compat with Jython 2.5) (PY-3249)
2c4f453: fix possible NPE case
bff677d: Fixed scope for references in function decorators (PY-3895)
f59b0ed: Added CFG write instructions for function definitions (PY-3866, PY-3869)
16fbe0d: PY-3818: stick last param to *arg in param info
726bf66: Show warning in unbound locals inspection if DFA limit is exceeded (PY-3823)
c45c14c: Move spellchecker plugin to a core, rendering unnecessary spellchecker plugins for each language/ide.
4117a08: fix compilation
5ce6c6e: Template debugger: exception breakpoint icon is missing on suspending on exception in templates (PY-3901).
453b6cf: Debugger: children view after set value raised exception (PY-3772)
c37b1dd: Resolve type references in quick info for Python functions
3c5449e: Revert "Added unknown type for some Python stdlib return types"
651edbf: Resolve PyTypeReferences during type checking
d49ab22: Added unknown type for some Python stdlib return types
b7369fe: Different stdlib type databases for Python 2 and 3
e504906: Initial type checker inspection for Python
922480d: fixed PY-3885 "Dictionary contains duplicate keys" inspection should highlight the duplicate keys, not the entire dictionary
2d247fe: fixed PY-3899 Statement can be replaced with function call breaks print statement
9f38345d: fixed lineno search for attests
9397283: Merge remote branch 'origin/master'
973da79: added attest support PY-3700 PyCharm should support Attest
b3b6c2b: J2D_PIXMAPS=shared (seems to fix menu painting under GTK+ L&F; NetBeans uses this as well) (PY-2559)
b459b5b: Special case of super().whatever_method() resolution
d9e4025: added dialog for sphinx make/quickstart action to choose working directory (if not specified in integrated tools)
0727a52: fixed comments for PY-3854
9bf60aa: Merge remote branch 'origin/master'
6ea0ab9: added test for PY-3848
3a94f05: Merge remote branch 'origin/master'
0faacee: Merge remote branch 'origin/master'
778fd68: fixed error message for Type error in unittests
4430464: fixed compatibility with python2.4 in unittests
2f1bf9f: Merge remote branch 'origin/master'
15a2fbe: added PY-3868 Per-project option to treat *.txt files as reStructuredText
77f79fb: Throw exception in case of dfa engine time/count limit exceeded
edf55d0: fixed PY-3848 Support for type detection with sphinx combined parameter type and description
c52e226: Merge remote branch 'origin/master'
683e474: fixed PY-3854 Code compatibility inspection should highlight usages of 'basestring' in py3 code as error
a5950ec: Py & Django exception breakpoints icon (PY-2528).
8c5bb13: fixed PY-3853 Error in utrunner.py when running unit tests
f139d40: Merge remote branch 'origin/master'
8e937ae: Fixed resolve returning duplicate elements.
20a0832: Merge remote branch 'origin/master'
a15bba0: trunk is 2.0 now
2192341: Merge remote branch 'origin/master'
4831b68: NPE in PyPathEvaluator
b606a86: suppress parentheses when completing function names inside Django templates (PY-3816)
f9f87a8: ask type providers about function return type even if allowReturnTypes is disabled
5400354: handle variants from all files when doing an 'from ... import *' from module with binary skeletons
d368f25: avoid CCE when collecting variants in 'from ... import *'
5556543: Merge remote branch 'origin/master'
d31f60c: fixed py.test location for windows
d6666e1: fixed PY-3811 AE at com.intellij.execution.testframework.sm.FileUrlProvider.createLocationFor
8c4761b: Merge remote branch 'origin/master'
755dc22: Merge remote branch 'origin/master'
e41c292: Fix a degenerate case in isRaw()
32a06a9: Merge remote branch 'origin/master'
8241940: A real fix for PY-3600, with tests.
f19eb92: PY-3686
04a4445: if we don't find name in one source, it's a good idea to look for it in other sources we've resolved to
d297c65: PY-3440: don't mind the third argument to getattr().
2e05c03: Forgotten test data for PY-3363
3dad88c: Merge remote branch 'origin/master'
a24da3f: NumberFormatException in iterateRawCharacterRanges when a unicode escape is invalid.
a01af2e: PY-3363: don't highlight escapes in raw strings
8900505: I love antiviruses, too. Really.
dce7de4: oh gant how I love thee
7650199: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f51f3f5: try #3
26c31a9: module -> fileset
c926113: avoid packaging duplicate entries in resources-en.jar
fd6eb54: Merge remote branch 'origin/master'
34a2547: PY-710: Auto-detect virtualenv when creating a new project from one (via open directory)
422ac85: Renderers fixed for GTK+; typos
83b16eb: Merge remote branch 'origin/master'
4db34c9: fixed PY-1964 comment
0e86a6b: fixed python test configuration producer
c90361e: PyPathEvaluator: fixed join for args > 2.
b87e822: Merge remote branch 'origin/master'
35c0bbf: Merge remote branch 'origin/master'
bd0dc87: PY-3791 IOOBE: PyParameterInfoHandler.updateUI in the case of parameterless functions.
0258c8d: Merge remote branch 'origin/master'
fe0b327: fixed PY-3783 Docstring: caret position is out of the docstring when generating plain docstring with space
8706c0a: Merge remote branch 'origin/master'
e725494: Bug in resolveInRoots(): use fromDottedString() to cope with qualified names.
08d2706: Merge remote branch 'origin/master'
0c5ec21: conciseness
300c8f2: stack overflow protection (EA-28087 - SOE: PyFileImpl.processDeclarations)
12bf52f: Merge remote branch 'origin/master'
5e058c9: fixed PY-3782 Docstring: not generated in case EOF is right after opening triple-quote
57f32ae: fixed PY-3582 Navigate to error source from console stacktrace
bd55fb5: Merge remote branch 'origin/master'
67777ff: added traceback filter for py.tests. PY-3781
9137166: If path is absolute just return it.
620e4d0: fixed PY-3637 for Django > 1.1
948ec7d: fixed PY-3786 Test runner doesn't work with arbitrary file names
16551d8: fixed PY-3637
f571f53: evaluate paths in STATIC_ROOT/MEDIA_ROOT/STATIC_URLS (PY-3767)
1e005e3: implement path evaluator; use it for database paths (PY-3756)
ad2d63e: close console actions are dumb-aware (PY-3611)
f76c62a: provide preview text (PY-3766); show only meaningful options
e4f2f6f: an importable path is not valid if it contains non-identifier components
c77ee74: Merge remote branch 'origin/master'
405ff78: Constants are placed after docstrings in introduce constant refactoring (PY-3657)
f9f2b99: Merge remote branch 'origin/master'
a05b50b: removed redundant code in singleQuotedDocstringInspection
dbddc4f: fixed PY-3768 StringIndexOutOfBoundsException in DocStringReferenceProvider
f77655e: Merge remote branch 'origin/master'
b2b3a8b: fixed PY-3764 Incorrect highlighting in triple double-quoted docstrings inspection
d3c5a70: Don't highlight attributes guarded by hasattr() as unresolved (PY-2309)
34d50c1: a few refactorings in test runner (removed unusable classes)
eb0f2e2: avoiding stub to AST switch
d43a082: avoiding stub to AST switch (safe)
527ca9e: cleanup
98220c0: refactoring to avoid unnecessary stub to AST switch
ac471e0: EA-28027 - IOE: CheckUtil.checkWritable
99e9ccc: PIEAE (EA-28034 - PIEAE: PsiElementBase.getContainingFile)
3fbe125: Exception breakpoints don't work if there is no other breakpoints.
a61e365: Merge remote branch 'origin/master'
283b3b7: fixed PY-3126 "Chained comparison" doesn't react if one of the sides is flipped
09d552e: Merge remote branch 'origin/master'
b592885: No exception on completer import when using GAE (PY-3486).
bbd12e4: Fixed exception breakpoints.
116053e: Removed unnecessary import leading to PY-3718.
caad797: Fixed regexp injection in url call (PY-3749).
7726f3c: Perf opt and PY-3699.
0f04e7c: avoid mutating cached value
258a0f4: avoid unnecessary stub->AST switch
df08ac4: allow custom target expression stubs to provide callee name (PY-2981); looser detection of Django facet
16b1761: auto import is HighPriorityAction
e830f25: fixed NPE in docstring quickfix
0d2962e: fixed **args params in docstring
9525096: Merge remote branch 'origin/master'
9d52650: fixed PY-3723 Missing docstring: do not highlight all docstring when some parameters is missing
3ba393a: IDEA-70194 Javadoc: Provide support for completing javadoc parameters description
4de3c45: Added type info to debug quick eval.
c874090: Merge remote branch 'origin/master'
d3aa039: PY-1462: better caching of zope Interface references.
19248b7: fixed PY-3730 Ignore unresolved identifiers does not ignore parameters in docstring
988ec5a: Merge remote branch 'origin/master'
7472ff7: Merge remote branch 'origin/master'
0af6dfb: PY-1462: rather crude detection on Zope interfaces.
a2ee9aa: Merge remote branch 'origin/master'
ffba11a: Show names in structure view for unresolved base classes (PY-3731)
5738dab: xdebugger: added parameter to allow different presentation of value in tree and tooltip, type is shown in tooltip by default
405ba33: simplified expression in doc comment util
5a88a3f: fixed PY-3720 SIOOBE java.lang.String.substring and PY-3724 Generate docstring leads to syntactically incorrect code in case of single-quoted string
b892e1d: Merge remote branch 'origin/master'
d1c42df: fixed PY-3722 Missing docstring: duplicated word parameters in qf
b849f39: Show instance attributes in structure view (PY-3371)
a0e2c0a: Merge remote branch 'origin/master'
cc898d0: Show qualified base class names in structure view (PY-3714)
adc97e2: fixed test data for single quoted quick fix
a38f913: fixed test data for single quoted inspection
25825f9: fixed PY-3706 Debugging py.test tests doesn't work
1952bdc: Don't report possibly used conditional imports as unused (PY-983)
9bc2b71: Merge remote branch 'origin/master'
8ca2341: PY-3689: continue to propose *arg in parameter popup even if further params are available.
cf67a59: Django template rendering exception handling.
a029cef: Django template rendering exception handling.
6ee59ce: Fixes to exception breakpoints handling.
a551022: advance required gen version for builtins
919b058: Bumped skeleton generator version for PY-3655
ff17a99: Fixed wrong generated constructor argument in introduce field refactoring for classic classes (PY-3655)
c38455e: Merge remote branch 'origin/master'
97de8be: Fixed false positive in unresolved references inspection for try-except ImportError (PY-3678)
a029961: fixed PY-3586 Docstring quotes inspection should highlight only the incorrect quotes, not the entire docstring
a3c2541: PY-3690: Now first non-implicit parameter is highlighted in parameter popup if no arguments are typed yet.
44294f5: Merge remote branch 'origin/master'
c8c2458: added test for exception PY-3683
d7a8aef: fixed during completion method for PyDictKeyNamesCompletionContributor
9785c84: fixed PY-3683 Dict keys code completion exception
9825bb7: avoid CCE (EA-27567 - CCE: PyClassImpl.calculateNewStyleClass)
5ab9485: PyClassType with unresolved class has no completion variants (EA-27879 - NPE: PyClassType.addOwnClassMembers)
05bbe39: check for reference validity (EA-27969 - PIEAE: PsiElementBase.getContainingFile)
28e5a47: Fixed false positive in unbound locals inspection with "sys.exit" (PY-3702)
6caec0e: fixed NPE in missing constructor inspection
15b797e: Merge remote branch 'origin/master'
bcea76f: fixed folder name checking in python unittests
28ee690: Fixed bug in import aliases inside try blocks guarded by ImportError (PY-3675)
4f84efb: fixed number of django tests
1e37ddb: Merge remote branch 'origin/master'
83ca88c: PY-3222: Add Function quickfix works correctly if a function is mentioned outside a call. Special handling of Django's urls.py
6b3e935: PY-3222: Add Function quickfix works correctly if a function is mentioned outside a call.
18263b6: Merge remote branch 'origin/master'
038d650: PY-3552: now parameter popup info is correctly updated while typing.
afef970: Fixed bug in unbound locals inspection for augmented assignments (PY-3651)
5cd4809: Fixed bug in CFG for qualified imports (PY-3665)
3536f5f: Removed duplicated alerts in unbound locals inspection for qualified imports (PY-3666)
da686d2: fixed wrong text data for PY-3673 in PyEditing
a4b3aa5: Merge remote branch 'origin/master'
1ee5272: fixed PY-3650 Missing docstring: move highlighting for module level to scrollbar only
b59dae3: Fixed false negative for unbound locals inspection for "import as" (PY-3671)
eab90ff: fixed PY-3089 String literal assigned to __doc__ should be highlighted as a docstring
85b999c: fixed PY-3649 Missing docstring: suppressing on module level does nothing (made it impossible to suppress docstring inspection for a statement)
1a6d5ec: Merge remote branch 'origin/master'
32fc212: fixed PY-3660 Assertion Error Expected/Actual output is incorrect
9f2d8c3: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4f79a77: fixed PY-3673 SIOOBE at java.lang.String.substring
c3713f9: Merge remote branch 'origin/master'
f604696: Merge remote branch 'origin/master'
7b3dbb7: Simplified fake reexporters data in generator.
425418e: Python smart keys are optional (PY-3263, PY-2798)
685e765: report attempts to rename a Python file for which imports exist to a name which is not a valid Python identifier (PY-3664)
b90199a: failing test for PY-3673
1e30440: use as name for completion of import elements (PY-3672)
093ea5c: remove unneeded code
75bbd0e: Django debug: set value implemented (PY-3418).
899b2ae: Fixed django search in run django configuration.
3a4a00b: Special warnings for unresolved references in try-except ImportError (PY-3639)
016c563: another weird spacing fixed
18e4a23: split PyUnresolvedReferencesInspection test into individual testcases
02827c5: completion variants from 'from ... import *' are now handled in a uniform way, not ad-hoc in PyReferenceImpl (PY-3658)
55f45e5: cut .py extension from file completion variants (PY-3595)
8ea5bec: testdata for previous commit
2117a06: PY-3201 is also correctly handled by unused imports inspection
1a19cbf: subpackage names are visible in the __init__.py of a package even if something was imported from the subpackages, and the subpackages themselves weren't imported (related to PY-3201)
940a1a2: typo in javadoc
4f58832: couple of toString()
e38ab46: Jython part of PyType refactoring
2c4047f: refactoring: PyType.resolveMember() returns list of ResolveResult, not list of PsiElement
65415ce: names of stdlib doc pages are all lowercase (PY-3656)
8956b6e: non-failing test for PY-3595
1888546: missed test
93712d8: one more indent case (PY-1947)
3adfec9: parentheses fail (PY-3624)
53e7e31: a non-class function is never qualified by instance (PY-3623)
7aa77ed: don't suggest auto import if name of file is not a valid Python identifier (PY-3627)
8a97673: better recovery for syntax errors in parameter list (PY-3635); fix old bug with tuple parameters parsing
79c0c49: index corruption recovery
4e21928: Merge remote branch 'origin/master'
bda3047: contentRootPath / create content entry code moved to ModuleBuilder, allow instantiation of WebModule
9742f2a: added form validation for unittest PY-3652 test files
6ee7cc9: fixed PY-3652 Test runner tries to run tests from improperly named module in producer
73fb0e7: Search in zip files (PY-3269).
cc76fb2: Merge remote branch 'origin/master'
f3d26fa: fixed: 'convert set literal' could not be invoked in py3
a70756b: fixed PY-3653 Spurious extra quote on docstring close-quote autocomplete
a16bbcb: Show error message if introduce refactoring is invoked for substrings (PY-3619)
32275f7: fixed PY-3636 for classmethod and cls param
0b5d5f1: fixed failed test_runner tests
4220c54: fixed PY-3642 Insert documentation stub: do not generate docs for self param in class methods
af9a3ce: Merge remote branch 'origin/master'
d63d28a: fixed PY-3636 missing parameter self
3d4c875: fixed PY-3637 test runner progress in django
6c6c02b: Now skeletons import modules needed for base classes while respecting 'fake reexports'.
c9e0434: Fixed unbound variables inspection for local import expressions (PY-3583)
a878769: changed docstring TypeReference to use pyTypeParser
7840b70: fixed compatibility for nonlocal
f301bdf: Fixed NPE in PyReferenceImpl (PY-3616)
067ab10: Fixed unbound variables inspection for nonlocals (PY-3603)
4ae7a9b: added tests for PY-3281
240949c: fixed PY-3281 DemorganIntention works incorrectly
6a633d4: Merge remote branch 'origin/master'
c9549a0: fixed text (because of new compatibility in py3k)
bf44bbc: fixed PY-3305 Unclear remove redundant parenthesis behavior
562b4e5: fixed failed test
03b1fce: fixed PY-3328 Argument equals to default parameter value: in non-keyword form qf can be invoked only on the last argument
e10f13f: fixed PY-3613 Code compatibility inspection: add nonlocal reference to the inspection
d260302: fixed PY-3315 Call to constructor of super class is missed: quick-fix for inspection
1e2bf81: Merge remote branch 'origin/master'
3940153: PY-3291, PY-3589: fix imports of base classes, hone fake re-export code; generator 1.91
ac5d278: Correct resolving and searching for nonlocals, updated appropriate inspections (PY-3603)
bbebbc2: fixed PY-3601 "Assignment can be replaced with augmented assignment" provides no action in case of compound argument
38c5774: Correct Jython termination (PY-2591).
5aec784: Transitive resolve during references search only for globals (PY-3547)
75bfbd0: GotoRelated API used for django view line markers (PY-1326).
e12ee56: Merge remote branch 'origin/master'
793aaad: fixed PY-3367 Render non-default epytext fields
0634584: fixed rendering for rest parameters with type inside
dc44a7b: Fixed bug with searching for usages of global variables (PY-3547)
b264e81: added PY-3488 :type name:, :rtype: and :param type name: info fields: try matching (and linking) types
e7b53e5: resolve prefers directories to files (PY-3194)
ccf89f4: ResolveImportUtil.multiResolveImportElement() returns all successfully resolved variants for qualified name, not just the first one (PY-2443)
00b3390: don't hard-code name
75007f1: PY-2713: special case of OS X
18c81d2: Merge remote branch 'origin/master'
4925696: correct 'is definition' value for provided FieldFile/Image types (PY-3356)
b491f4d: fix incorrect path calculation in auto-import
a2465cc: tweak logic for detecting incomplete formatter blocks (PY-3572)
c12957e: complete imported submodules of a module (PY-3227)
a555889: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
751450c: fixed PY-3487 Incomplete support for Sphinx's :param: info field
a87088e: fixed PY-3511 False positive "Unresolved reference" on :param RST tag with underscores in
7dedfdd: dead code
85aa8c4: fixed calculation of relative level in incomplete from ... import statements (PY-3409)
9e5e7b3: several more cases of inconsistent indentation (PY-1947)
7867bd3: fixed PY-3554 Unresolved reference: false positive for imported superclass with the same name as derived one
f218584: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
341ba29: finally stop bundling jython with python plugin
f29ee11: don't inspect qualifier of target reference twice (PY-3460)
2cb6d99: references in superclass list of a class should be resolved outside of the class (PY-3554)
c20b983: add missing test
0944a34: allow completion of True/False/None in default values of parameters under Python 3 (PY-3464)
2cb4f4e: /usr/bin is unlikely to be a valid Python interpreter path
1f8f479: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ee8274f: IDEA-69724 Formatter exception on Extract Method in Python
b1a63cc: fixed PY-3346 "Argument value equals to default parameter value" must not trigger for implicitly resolved calls
c681735: Merge remote branch 'origin/master'
13d8d69: PY-3567: _socket.error circular import in skeletons.
0308ee5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d5a7274: fixed test data for unittest runner test
a0273d0: Merge remote branch 'origin/master'
9c19268: PY-1894: frozenset signature fixed
33e061f: Improved GUI captions and added help ID for move class refactoring (PY-3561)
d6ab1ad: Lambda expressions became ScopeOwners and got their own CFG (PY-3532)
f0b3668: Fixed default arguments scoping in unbound locals inspection (PY-3550)
7764556: Fixed bug in scoping rules for references in default function argument values (PY-3550)
24c1a13: Fixed bug in control flow ordering for named function arguments
9929417: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
63bd43d: fixed PY-3560 utrunner.py throws exception when running over python nose library
fe941b1: addition to fix: [IDEA-69141] config dir named .${PRODUCT_SETTINGS_DIR}
78b7921: Env tests: added correct python_path for jython.
6033d81: Django template parser: error on wrong ids in member parsing (PY-2342).
bd2e686: Added stdoutBuf reinit if absent (PY-2758).
72d0789: Fixed buildout parsing breaking completion (PY-3515).
c5ec6da: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
1c6bf58: added env tests for doc test runner
2224fae: fixed PY-3556 Error in helpers/pycharm/docrunner.py while running doctest containing exception
6ebf005: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
27bbb3e: fixed: [IDEA-69141] config dir named .${PRODUCT_SETTINGS_DIR}
4aa4c9a: Merge remote branch 'origin/master'
486f42e: Removed a dysfunctional non-__dict__ test.
566d209: Fixed control flow break edges for try-except-finally (PY-3503)
3080753: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a482e72: Merge remote branch 'origin/master'
1dce00b: PY-3539: allow empty str(), bytes(), and unicode() constructors
7611cba: changed role annotator for undefined roles to role inspection
09b0664: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8a36e3d: Version inquire shouldn't raise errors even if it fails (PY-3522).
19c166d: Fixed condition.
d44cc1f: IDEA-69153 Editor: Improve performance of caret movement via arrow keys
68a4ca4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2ce3fc7: add database and db-config to PyCharm build
81c9e31: create django-db-config plugin (work in progress)
8b93d0b: Do not import the same module several times while inserting import statements (PY-3530)
81e2b13: Check for valid module names in move class refactoring (PY-3527)
8415bef: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8b5d3ab: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
53def8a: test_generator updated to match current generator.
9e09cb9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0c805c5: Forgotten annotator was breaking some tests.
b9402b0: Merge remote branch 'origin/master'
f850aa4: Merge remote branch 'origin/master'
19cdac8: added tests for nose
fbe1466: fixed basestring bug in tcunittest runner
30b2cb0: fixed PY-3507 Test runner shouldn't launch func with name containing word 'test' as unit test (do not collect test function with arguments)
589b8b5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c227556: Move class or function refactoring for Python (PY-991)
4fa0c9d: Added method for updating import statements of a moved Python PSI named element
5dde251: Removed unused code
2574121: Reusable methods to update references in the moved Python code
74e61df: More general argument types for insertImport() in order to cover functions and variables
cfbbfff: Merge remote branch 'origin/master'
0ad89fa: Removed local interpreter reference.
de552ea: Fixed NPE with console start in case of timeout in interpreter run (PY-3469).
d5022fa: Fixed EDT bug (PY-3434).
3e1fac2: Fixed import error in pydevd (PY-3462).
33a5cb6: Debugger version check switched off for py plug-in.
946ab2b: PY-3514: never return a null builtins version.
01b821f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6df77de: *.sh scripts updated: [[ .. ]] -> [ .. ]
1115858: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f266e69: removed redundant noinspection action
0fbd9d8: Merge remote branch 'origin/master'
1178cfb: Unicode highlighting: fixes in octals, tests, updated colors page
c946d7b: added noinspection quickfix (PY-3501 )
4c7c3a1: fixed PY-3501 "Add global statement" quickfix for "Local variable might be referenced before assignment" puts "global" statement before class method's docstring
b452b3e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b2c2ac1: inspection description
4d8c042: EA-27326 - SOE: PyReturnTypeReference.isBuiltin
f1919b6: EA-27525 - NPE: PythonDocumentationConfigurable$PythonDocumentationPanel$1.customizeCellRenderer
874499f: minor
17d2b5d: 1) bye-bye idea.properties in MacOS ides bundles 2) [ide_name].sh scripts were updated to work on MacOS + fetch system and vmoptions from Info.plist
431c0e5: fixed failure in helper PY-2658 Can't launch my "dynamic" unit-tests in PyCharm 1.1 (but they work fine in 1.0 and 1.0.1)
82ab67d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
777f61c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
93b2cd0: Unicode highlighting moved to annotator, added a bit of tests; LanguageLevel linked to FutureFeature.
4d073d6: added PY-3345 Would be nice to have history of recent entries in the doc generator fields for docutils
e8a9153: Python env test: added testing failed in UT.
245d607: fix compilation, missing deps
2ab1310: Do not run for unix: as copy paste need x11.
2bcaaf0: Env tests: added test for UT-runner.
daf651a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9e2f7cf: fixed test data for doc stub intention
ba067ed: added PY-3348 Inspection to highlight occurrences of @classmethod and @staticmethod on methods outside of a class
f9178a1: added PY-3421 Generate docstring stub also on pressing Space after triple quotes
6d643ac: fixed PY-3278 False positive for 'Super constructor call is missing'
858147e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
dca0b1a: PyFile stub test for import from future
01b7fec: added check for existence of raise/return statement in function in Generate docStub intention
2d31187: Merge remote-tracking branch 'origin/master'
3741ae6: Updated default Python colors to distinguish Unicode.
e4f362b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c115af8: fixed PY-3364 Quick-doc: Align base field tag params to the right
3399a20: Merge remote branch 'origin/master'
3f155fd: fixed PY-3476 Reported element PyDictLiteralExpression is not from the file
de38cb7: PY-3478: return type of divmod() must be (0, 0).
a3894dc: added annotation for undefined references in ReST
70d9e04: Fixed unicode literals coloring, updated colors page.
5c50eb8: added PY-3451 Support Structure View for reStructuredText
25ac1ca: Merge remote-tracking branch 'origin/master'
d9bda71: Updated PyFile stub to handle all future features uniformly.
205f2037: Merge remote branch 'origin/master'
37f4dba: nsis installer fixed
0014623: improved argumentEqualDefaulsInspection for string literals
af9a279: fixed PY-3455 False positive for 'Argument value equals default parameter value' with same-named classes in different modules
4a3fb45: Merge remote branch 'origin/master'
817d8ba: include ivar/cvar/var description and type in Ctrl-Q documentation
4430466: extract types from attribute docstrings
a6a16d7: "analyze type parser" also looks at instance variable types
96d4654: recognize __docformat__ which also specifies language
19ca879: support for class and instance attributes in Ctrl-Q
bab8754: extract a couple of methods
f1913c2: extract HTML building utility functions from DocumentationBuilder and PythonDocumentationProvider to separate class
93725cb: extract DocumentationBuilder out of PythonDocumentationProvider
d87ab8b: allow instance variable docstrings also when sphinx format is used
25eedf8: implement injector for ReST language into docstrings but disable it for now; not too much stuff highlighted and I don't like the green background inside triple quotes
ec84100: recognize __docformat__ tag for specifying docstring format
84eead3: nicer formatting for Sphinx docstrings
437870d: add docutils modules to helpers
30bd5ed: NPE diagnostics & fix
0b983e9: text cosmetics
f846c2d: WIP: unicode literals colored, from future imports detection extended.
aa7a59e: Fixed equals check for CFG instruction name in unused locals
e3b43d2: added autocompletion for links, footnotes, citations and substitutions in ReST
6073da2: Merge remote branch 'origin/master'
4710db8: build scripts: system_selector property access fixed
1bddb96: load *ApplicationInfo.xml before first use
86f6c10: 1. Now default names for settings folders paths are defined in java code 2. Paths in idea.properties are optional and should be used to change default behaviour 3. Settings folder selector now is defined in one place for mac/linux/win/installer binaries - see system_selector property in gant build script (also it can fetch ide major version from *ApplicationInfo.xml)
a77fd0a: Merge remote branch 'origin/master'
e87b654: fixed test data for docStub intention
c2896db: WIP: unicode literals colored, from future imports detection extended.
c4c7646: Prohibited setting breakpoints to lines without Django tags.
5aba45a: Check for unbound local variables in nested list comprehension expressions (PY-3407)
13716d3: Check for unbound local function variables in call expressions (PY-3343)
9d29d78: Merge remote branch 'origin/master'
2a9e0bc: fixed PY-3420 "Insert documentation string stub" intention should place caret in the docstring; fixed PY-3426 Documentation stub for plain text is generated as a string of six quotes.
128aa5d: build number range for plugin in trunk
0a1320d: added unresolved reference inspection for class inherited from itself.
5747cda: use iterateAnsectors in PyMissingConstructorInspection. Added test data for PY-3395
c9e8dba: added highlighting for python code in rest files
09f05d8: fixed PY-3395 Stack overflow if a class is inherited from itself
156ff09: Merge remote branch 'origin/master'
ecf74a4: Merge remote branch 'origin/master'
fac5759: PY-3213: fix wrong 'can't set property value' diagnostic.
cc65d68: parameter references after @type (PY-3400)
4270543: StructuredDocString.parse(): null in -> null out (PY-3401)
6edef16: manage app engine library when app engine facet is created or deleted in Python plugin
3d188bd: extract non-facet-specific code for configuring a facet-provided library
c8d2693: more correct working with modifiable models
9e90079: almost working implementation of buildout facet detection and support in Python plugin
b790ba5: Fixed false unused locals inside if statements with isinstance checks (PY-2418)
f5e037e: Don't include already blacklisted skeletons as errors.
d2756f5: Python debugger tests added. Cidr debugger tests refactored.
f927520: Merge remote branch 'origin/master'
6d4fa64: Fixed bugs in unused local variables inspection for nested functions (PY-3076, PY-3118)
63dcf72: NPE of PY-3415, some cleanup.
3318670: branch number 108
7825162: Merge remote branch 'origin/master'
1f67fae: added goto element definition provider for hyperlinks, citations, footnotes and substitutions in ReST
9353e66: Merge remote branch 'origin/master'
148d7fc: Handle skeletons in manually added dirs (PY-3085), keep blacklisted items unless successfully regenerated.
051985e: Merge remote branch 'origin/master'
c5cbaf1: added completion for restructured text options in directives. Added directive block to parsing
0496002: UI: bold thingy in find dialog
99941e0: Fixed bugs in control flow for Python list comprehensions (PY-2574)
46edd8e: added completion for restructured text directives
def2d8a: fixed PY-3368 Insert documentation string stub: omit blank lines right after method definition
e3fbb64: fixed PY-3405 Incorrect indent on generating docstring for classes.
b25dc09: fixed test data for docstring inspection
72b2888: added PY-3394 QuickFix for docstring
49a37f1: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
1e2d643: fixed PY-3397 The test runner cannot handle lists that contain unicode characters
996875d: Merge remote branch 'origin/master'
26351e2: Better Ctrl+P handler: PY-3383 and more.
5d9455e: Base classes in Structure View only for valid Psi elements
a5fb8b1: added parameters recognized by sphinx
eb3f5f7: EA-26859 - assert: PyClassType.resolveMember
2bfa170: EA-27069 - SIOOBE: ConvertDocstringQuickFix.applyFix
dd8e399: EA-27110 - assert: SmartPointerManagerImpl.createSmartPsiElementPointer
5370f69: PY-3309
a01c876: resolve @param tags in class doc comment to constructor parameters
f59e157: highlighting of tags in docstrings
cb2e4d4: completion of tag names in epydoc and sphinx docstrings
a04d5b8: references to parameters in docstrings, with rename and completion support
784c08f: cleanup
23398cf: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
af10f59: Fixed uppercase chars in tests for Unix platforms
b11beb0: Fixed test for unix (; -> pathSeparator)
3e9abcd: Added step-over in django template debugger.
99facc4: Added Django Template line breakpoints handling.
083bc05: added tests for PY-3373
92623aa: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ef6079a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
70eef0e: added PY-3373 Inspection to detect incorrect parameters in docstrings
df66f7f: Show base classes in structure view (PY-3370)
3a633fb: support for keyword arguments and other minor improvements to epydoc formatter
aa746b8: mostly complete type parser
6014a65: cleanup
d2c373d: typo fixed
df2b406: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b56ffc7: PY-3518 (deceptive imported names); removed duplicate imports (harmless but silly).
7d21c5a: move calculation of parameter type from docstring to appropriate place
1aeb89c: PyTypeParser work in progress
7131543: Merge remote branch 'origin/master'
0f6584b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2cf574e: avoid test flickering: restore settings after change
3fbe6dd: PY-3365
c096341: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9a71415: Merge remote branch 'origin/master'
f792422: PY-3341, PY-3329, refactoring of skeleton refreshing code.
3a68441: fixed doc stub generation
3547da9: cleanup
5b22a19: added dry run for docutils task
213a357: added doc string completion for class and modules
7f13653: fixed doc stub creation in function
7d65a80: Merge remote branch 'origin/master'
d32c3a0: WIP: factored out PySkeletonRefresher, initial blacklisting support
12b9719: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
1d5e9ae: working solution (a bit incomplete) for nice formatting of epydoc docstrings
9305028: set language level for 'with' test
6887955: added intention for doc stub generation
f61c794: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
1af1c52: EA-25762 - IAE: XValueContainerNode.addChildren
42ea17e: added empty line to comment generation
8432376: added tests for generating docstrings in python
7f2d56b: added simple documentation stub generation for python functions
98f6f3c: fixed PY-3295 "Convert single-quoted string to double-quoted string" intention changes the meaning of the code on escaped backslashes
e4a26f5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
bcbf5ac: added Restructured text formatter for docstrings
7328ebf9: changed docstring format names in integrated tool
f3f63ff: PY-3341: only refresh skeletons the first time a project is loaded.
f3e4c7d: IDEA-67384 Emacs Tab does not work properly
aad608e: line separators
8aff328: fixed PY-3350 "Code Compatibility" false positive on logging.config (for Python versions >= 2.4) [not fixed in 1.2.1]
8c7c49f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b70cb99: put build number of used IDEA build as since-build in plugin.xml
a3be5fe: correct signature preview for extract method with @classmethod or @staticmethod (PY-3294)
5f8d5bd: don't highlight exit points when inside the return expression (PY-3333)
cca0545: create function quickfix doesn't take arguments from call expression if function being created is itself an argument
cc62359: completion for keyword arguments in Java class constructors (PY-584)
aaa2c48: auto-import Java classes in Python code
2bf9358: provide types for Java fields
c0cb682: resolve Java methods in a Python class inherited from a Java class
8e8e2ab: restore resolve from Python to Java, introduce Java package type
9cba5d7: enable running tests with classpath of module
a04e605: move the test to a different package, to make sure it will run as part of regular IDEA tests
6747bd7: accept epydoc variable docstrings also at file level
21eb5b4: treat usages qualified by type reference as untyped usages
68bb6fa: use query optimizer in Python reference searchers
eb59fe3e: use QueryExecutorBase
34dcbc9: fixed PY-3342 Change command name in the Sphinx actions menu
1254b52: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
95e45c7: introducing AbstractDocumentationProvider
f7da092: fixed PY-3327 RST: Throwable at com.intellij.openapi.diagnostic.Logger.error
b0ab1e1: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
02889e3: Merge remote branch 'origin/master'
154197d: couple more usage types implemented
dced258: highlight epydoc docstrings of class and instance variables as docstrings, not as statements with no effect
2641b71: EDT is not the best place to sleep for 7 seconds just in case. Really.
f4879a3: diagnose errors from epydoc, correctly use ByteBuffer
0e8d9cc: show root url if pattern substitution failed (PY-3273 #5)
6a56131: fix stdlib detection for IronPython, which doesn't have 'Lib/test' directory (PY-3273 #3)
1a3a418: restore missing stdlib modules (PY-3273 #4)
bc1f196: provide doc urls for Jython (PY-3273 #2)
cbc7c8f: fix doc url for pre-2.6 Python (PY-3273 #1)
bd27a93: help topic (PY-3290)
f29ed29: format docstring with epydoc itself
9e81128: fixed PY-3313 Call to constructor of super class is missed: check constructor presence for all super classes in mro
df84f4a: Jython 2.5.2
58ebce3: changed unclear name for path to the ReST files
1ed8d42: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ad07465: Added optimization checks before running hyperlinks highlighting code. Added possibility to turn off folding updating (it was a bottleneck and unnecessary in Python Console).
e479fb6: fixed tests failed because of wrong language level
6bf043f: PY-3317
c4ef79c: EA-26796 - NPE: ResolveImportUtil.findShortestImportableQName
d69ff69: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
71bdd66: PY-3310: stub generation fails after adding an interpreter (NPE)
2fc9b2a: fixed closing tag in versions.xml
cef2000: fixed PY-3314 False positive on compatibility check for stdlib module copy
93212e6: fixed PY-3303 run_tests() got an unexpected keyword argument 'failfast' when testing an app in a buildout-based project.
476a546: Integrated a new skeleton generator subsystem. PY-2243, PY-3252, PY-3235, PY-2883
374a9ca: Message changed.
27679fe: Python Console refactored a bit.
76a303c: Python console performance (namely output processing) GREATLY improved.
eba1a36: 8 is not an octal digit (PY-3287)
6c121f0: included groovy files to sources.zip
ea32e6c: Fixed help() exiting in python console (PY-2362).
c4cecfd: Python console improvements: command invocation timelimit removed, added possibility to interrupt previous command by Ctrl+C (PY-3084).
f8cb5bb: fixed PY-3239 "Remove redundant parentheses" quickfix doesn't work
b86253f: fixed PY-3197 Quick fix doesn't work for string with string formatting symbols
2224b94: fixed PY-3238 Do not highlight "Call to constructor of super class is missed" if there is no super class constructor
f5cfcee: fixed PY-2310 "Redundant parentheses" should not be reported when expression spans multiple lines
68eea35: fixed PY-3278 False positive for 'Super constructor call is missing'
ee37728: moved ReST preferences to PyIntegratedToolsCofigurable
ce50ec0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fe3315c: fixed PY-3261 Argument equals to default parameter value: qf changes code logic; PY-3260 Argument equals to default parameter value: not available for keyword argument in function call; added tests
725ada9: fixed PY-3257 AIOOBE from 'argument equal default' inspection
2a79e8e: work in progress on rendering epydoc docstrings to HTML
0e221ca: configure docstring format on "integrated tools" page
cd16f51: "test runner" configurable -> "python integrated tools"
c1d2ba5: converting epydoc markup to HTML
0f42c09: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
81eae34: added possibility to run sphinx tasks to Tools->Restructuredtext->Run sphinx...
2f67af7: dump heap on out of memory under Mac in EAP builds
3c4d733: productivity for horizontal scrolling
e008bdd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e4ae301: added possibility to run docutils tasks to Tools->Run docutils
59584ba: Fixed sorting unicode dictionary keys in watches bug (PY-3265).
f408a5e: Fixed possible NPE.
929a821: Fixed multiline comment handling (PY-2396).
8e5839d: PY-3156: allow yield in property getters.
e34d16b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
be395ac: Merge remote branch 'origin/master'
3972c5e: preliminary support for Sphinx/reStructuredText docstrings
3597dda: extract superclass from EpydocString
82397ca: show parameter description from epydoc in Ctrl-Q
1cac405: extract parameter types from epydoc docstrings
c2e834b: initial implementation of epydoc docstring parser; extract function return type from epydoc
b7bdccc: no longer need to have stdlib modules in PythonDocumentationMap
fe7cc5b: reimplement doc provider for python stdlib
c31213c: prompt to configure external documentation location if none is configured
94b5529: "extract method" handles @classmethod and @staticmethod correctly (PY-3228)
edecdf5: exclude __ names when calculating the list of completion variants for a module (PY-3256)
7986ab8: exclude import elements when calculating the list of completion variants for a module (PY-2385)
0562a83: correct withinOurClass check for completing class members (PY-3246)
96fcbcd: one more check to avoid duplicate imports (PY-3167)
3983308: imports go together
c2fdba4: complain about missing default value of a function argument (PY-3253)
d9c43d1: remove meaningless copyright comments
17a877a: Merge commit 'origin/master'
1eb3ed9: GotoDeclarationHandler now can return multiple targets
28a2ffd: fixed PY-3244 Replace function with set literal: do not propose intention for empty sets; added conversion from string
6c49113: fixed PY-3033 Auto-completion removes quote
5cef0fe: added colors page for ReST files
f558577: fixed python-plugin-common (returned lost documentation link provider)
3552ca9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3654c46: added initial RestructuredText lexer for syntax highlighting
384a203: smart enter shouldn't insert parentheses into decorator calls (PY-3209)
c4a7bdb: don't show meaningless auto-import variants (we don't typically access attributes of functions or call modules) (PY-2598)
8c206b8: link to pycharm instead of idea
a1e08b3: enable in-place rename for locally defined classes and functions (PY-3029)
428ce72: ditto
7336860: correct module-qualified Create Class (PY-3211)
ee7a3bc: Separated history for Python and Django consoles (PY-3241).
7fe517a: Execute code in python console when an empty line is entered.
7f40dbe: Import completion in console now works identically to the normal python editor (PY-2408).
41a291f: Fixed python code indentation in the editor of Evaluate code fragment dialog (PY-3045).
869368a: Backspace unindent restored in Python console (PY-3005).
430cdbd: nicer presentation of parameter list in completion list (PY-3220)
96aadb6: memcache methods have an implicit first parameter (PY-3131)
78288d4: use canonical name in class name completion (PY-3240)
459ab64: when searching for __init__ doc, look for class doc instead
fcd2c89: check for existence of pages built from map
7fde70c: check for existence of pages when showing external docs for Django and GAE
06152b7: external documentation for Google App Engine
b94070a: scipy
b1397e6: external documentation for Django
ccad106: numpy docs; search for canonical name further up the module hierarchy (needed for numpy); fix NPE in substituting macros
3cec184: correct fix for not saving default state of PyDocumentationMap
166ef4a: rollback class.qname; working link for wx
40183c0: add class.qname macro; add equals() for PythonDocumentationMap state; add non-working binding for wx
7932295: undo and rename/move class used in run configuration (IDEA-36423)
ca16003: sync collections used in ReferencesSearch
571cbba: cleanup
218f136: fail build if licenses for libraries aren't specified
e0348bf: implements SearchableConfigurable
d0a386b: use canonical import path for finding external documentation; other tweaks
6f800bc: prefer canonical names for import candidates (PY-2882)
7b82242: don't show redundant 'add import' text in auto import popup
026a083: ImportCandidateHolder.path is a PyQualifiedName
ca30e15: cleanup and refactoring
ef4cc3e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b4582ac: fixed PY-3206 Code Compatibility false positive
b76d0eb: correct and unduplicated code for checking if a method is in stdlib
31040e3: highlighting for macros in documentation URLs
edf5a36: tweaks for multiprocessing and __init__.py
0f4cb89: external documentation for Python, take 1 (basic framework and configuration UI)
df4641d: unused import removed
9445f53: move controller to a more appropriate package
d672914: NPE fixed.
d5bdb6a: PyDev code cleanup.
1b0433e: Fixed help() command in python console (PY-2362).
500f979: console history controller with persistence
66466a6: WI-5680 Register 'Introduce Variable' refactoring in usage statistics + some more in usage statistics + some common constants
b839dee: Unindent on entering backspace in python console (PY-3005).
786011b: Output handling for python console reworked. An attempt to fix messed up console output (PY-2113).
bf8f302: Fixed bug in console init without Django (PY-3204).
a109e02: added PY-2670 Django Tests: Interactive Console
895b91c: more convenient method
ed09b1b: Django manage.py shell now executes manage.py properly (PY-2921).
bdd574e: Fixed wrong string comparison in .sh (PY-3140).
c2b765d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4bfa81e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b6e5937: test runners: limited stdin support in testrunner console (by default turned on only for Django test framework).
3077f50: moved auto rerun tests project service to the platform
d888284: fixed ReplaceListComprehensionWithForIntention to be compatible with new comprehensions PSI
6f3dbf3: added PY-3120 Inspection to replace set built-in function with set literal
6fd64fd: compilation fix
630ae20: prefer shorter names when sorting import candidate list
559db4c: presentation cleaned up
cd80ee8: modules under root are suggested as imports even if there's no __init__.py there
9dd34c0: this goes to codeInsight.imports too
969cd5e: group imports-related stuff in a package
1248293: get rid of AddImportAction
8880189: cleanup
8b98901: don't try to create statements like 'from<nothing> import django'
23364d2: auto-import for module references (PY-1323)
73eda47: restore commented out caching
79db9af: get rid of ResolveImportUtil.resolvePythonImport2()
6cda66f: 'run manage.py task' uses the same logic for collecting PYTHONPATH as normal run configurations (PY-2930)
8db17a4: refactoring: get rid of PythonCommandLineState.getConfig()
ac6c054: false positive on escaped backslash in string literal (PY-2994)
31491c4: change PSI for nested comprehensions (PY-3030)
00d5bc4: no import fixes inside string literals (PY-3123)
21cd4ff: don't show auto-import popup if the reference is no longer unresolved (PY-3167)
db5297d: change parsing of generator in argument list so that parentheses are part of argument list, not generator (PY-3172)
971d0aa: added PY-3119 Support auto-rerun of tests in PyCharm
2cbb3ee: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
62c3cf2: fixed PY-3121 "Statement has no effect" inspection should not react on statements that contain error elements
563b6f9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
71e5ac1: fixed PY-3125 Inspection: Argument value is equal to default parameter value plus quickFix
fbebcdc: Fixed test data.
45592dc: Refactored a bit.
5efccf8: Revert "testdata updated according to highlighting changes"
ef084ee: missing testdata
7751257: fixed PY-3127 "Default argument value is mutable" inspection should provide a quickfix
555870e: SDK roots must be only classes, not sources (PY-2891)
19534ec: PythonSdkUpdater looks for manually excluded paths in PythonSdkAdditionalData
10651c1: rollback error severity for now
199ad37: don't add content or source roots to buildout library
a966f4e: load buildout paths from Buildout 1.5 site.py (PY-3169)
6c5fa4b: correctly add packed egg paths to library roots
e945f83: Correct setting of python path in console.
d1ee070: Correct python path in django console.
15ab304: Handling of indentations in console execute action (PY-3104).
e747ba7: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f6a2e6a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
45abd45: correct isReferenceTo() for variables used in nested functions (part of PY-3118)
bbe17ef: statement effect inspection adjusted for tuples (PY-3143)
53aa5df: testdata updated according to highlighting changes
5a8922d: change context in which 'lambda' keyword is completed (PY-3150)
7951182: add _functools to KNOWN_FAKE_REEXPORTERS (PY-3164)
138e854: propagate error severity to ProblemDescriptor (PY-3146)
bd7f3cb: fixed failed tests for testRunner
7cd1c0b: added support for new service messages in TeamCity
97adfc9: fixed PY-3128 Bogus assertEquals diff if value being compared is not a string
b9f6a73: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
777fcc3: added support for pytest 2.0.2
de1f21b: Execute in py console: fixing keymap for NetBeans 6.5
bd2e7cb: Execute in console terminates input no matter what code is executed. (cherry picked from commit be5af2e284c753d75cbea5a2d21b20a1c6425ba3)
bbffe9b: Handling of indentations in console execute action (PY-3104).
92157ca: EA-25851 - NPE: OptimizeImportsQuickFix.applyFix
ca8dd53: fixed PY-1158 "Run Test" test context menu action should copy all current run configuration setup to the temporary run configuration
94af316: fixed PY-3094 Intention to transform conditional expression into if/else statement
66c1a35: fixed PY-3095 "Redundant parentheses" should react on parentheses around side of 'and' or 'or' boolean expression
96b13fb: fixed failed compatibility tests
5b2c3a0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0dbfb21: added warning about ignored TEST_RUNNER variable
d43b5ab: fixed PY-3086 Version compatibility inspection shouldn't highlight any imports as errors if they're under an if statement
279a7f2: smarter evaluation of __all__ to have hashlib functions resolved again (PY-2302)
22e33dd: advance skeletons version
556c3b8: synchronously refresh skeletons after generation (may help against PY-2990)
7c8f22d: toString()
5887265: resolve works correctly in nested comprehensions (temp fix until parser is rewritten) (PY-3068)
d64d1bf: better behavior of introduce field on LHS of assignment
a751ad7: cleanup language level after each test
21e218f: fix spelling of name
442515c: check for element validity (EA-25785)
98a2141: don't put null refs in superclass list (EA-25783)
f39315b: NPE and cleanup (EA-25970)
4bad266: IAE (EA-26005)
a2d3356: NPE (EA-26073)
bfde9a8: add _collections to list of known reexporters (PY-283)
ddd2979: don't walk into Python 3.2 __pycache__ directories when collecting list of binary modules
fbfd4d8: duplicated methods removed
66ab70f: added PY-3051 true to True, false to False, etc intentions
7e17a7dd: fixed PY-3055 Missing super constructor call Inspection
d1a130f: fixed exception in django test runner with custom test_runner function
31b9fd8: fixed PY-3065 Wrong "Strange argument" warning.
02add59: fixed PY-3064 Wrong "Too few arguments" warning.
8d63a49: fixed PY-3066 Custom TEST_RUNNER not being used by django_test_runner. Works for custom TEST_RUNNER in django > 1.1.
1fa548b: in django -- settings file can be set as environment variable
7302654: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fb70845: added PY-1445 Inspection to highlight docstrings using single quotes
9d9c3f6: missing testdata
a0f0f62: Fixed duplicated short-cut on Eclipse for execute in console action.
d974900: Fixed duplicated short-cut on Eclipse for execute in console action.
ff4f77c: don't insert 'import' keyword when completing in the middle of a package hierarchy (PY-3016)
6faab1b: refactoring: remove duplicate code for caching control flow from each ScopeOwner implementation
2a1e707: don't resolve from outside of generator into generator (PY-3030)
1bd0441: don't insert import keyword when completing import statement (PY-3034)
704d2bf: Moved from distribution to internal tool folder.
29651ae: Fixed indentation in console (PY-3006).
a38684a: Do not execute in stopped console.
4b76bbd: Python console: all roots are added to sys.path (PY-2786).
0885990: Python execute in console: console is started if there is none (PY-3035).
ff4b749: Fixed indentation bug in console (PY-3006).
5b38f15: Python execute in console: added support of python debug console.
09d8706: Python execute in console: added selection of the console in case of more than one.
b7cf03c: fixed situation with several triple-quoted strings in PyConvertTripleQuotedStringIntention
c1503f3: fixed PY-3036 Unnecessary backslash: missing for split strings
1eceb89: added PY-2697 Quickfix to transform a triple-quoted multiline literal into a literal with many quoted strings
6412cda: fixed failed BackslashInspection test
63df5c4: fixed PY-2912 Add red curly underline to name in tab for po files when there're errors in file
691bb66: fixed PY-3020 "Code Compatibility" false positive on logging.config (for Python versions >= 2.4)
be3e456: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
bae885d: added PY-2952 "Unnecessary backslash" inspection
1d60367: Execute code fragment in console action(PY-2633).
44828b4e: removed duplicated python keywords container from dict literal to constructor intention
3c039a1: added test for caught ImportError in compatibility inspection
09a7658: fixed PY-1409 Django test runner hangs on error creating the test database
59bb325: added string diff for failure in non-UnittestCase-like functions. Added unescaping of string diff for python 3
6c6314a: added check for caught ImportError to CompatibilityInspection::visitPyImport. (It means that the user already knew that the module is not available in some versions.)
de15800: fixed failed test in SimplifyBooleanCheck
bc8382c: fixed doctest failure with python 3.2 (encoding in pyc)
bd4e382: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
219e715: improved problem descriptions in SimplifyBooleanCheckQuickFix
06efa76: Create template quick fix added (PY-2850).
9de1711: Method renamed as old name was misleading.
0fefc3e: Fixed some exceptions.
73cf986: Fixed few exception breakpoint bugs and AttributeError on debugger shut down (PY-2451).
549098d: fixed unittest producer. (to match pure unit test checkbox)
8925932: fixed failed tests. (builtins module in compatibility inspection)
92ef590: fixed test data for compatibility inspection
230cdc0: improved module crawling for versions analyser. Improved compatibility inspection for builtins. Updated versions.xml
287fbd3: fixed false positives in unsupported features util
209ed62: fixed debug logging to be compatible with py3k
9f57959: added python 3.2 language level
4e6c750: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
845364a: fixed failed compatibility test; removed duplication in compatibility message
8770417: build Windows zip for PyCharm (necessary for patches)
05a6b2a: update since/until for Python plugin in trunk
bb20713: fixed PY-2977 "Code Compatibility" false positives. updated versions.xml for Compatibility Visitor.
17b0dbb: fixed PY-2976 Django test runner doesn't work.
11aedbd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e114502: fixed failed doctest
59f8449: delete commented out code
ef08e91: restore paths for files being imported
c5cc5d2: fix completion of top-level packages
39a0983: rollback incorrect change
e17d3d9: show variants from __init__.py in completion of relative imports; remove duplicate logic for calculating completion variants for a directory; simplify ResolveImportUtil.getPointInImport() (PY-2816)
14626e2: fix off-by-one error in counting target directory for relative imports (part of PY-2816)
288072a: fix AIOOBE in removeTail()
604d8dc: StringConstantAnnotator performs all checks per node, doesn't do lexer's job again (PY-2802)
51aefc7: made the triple quote string lexer a bit uglier but a bit more correct (PY-1777)
803b545: string constant annotator checks triple quote pairing separately for each string literal node (PY-2806)
e500402: report colon with no following statement list as error (PY-2790)
5379195: report missing value expression in dict literal as error (PY-2791)
eb02840: handle rename of working directory
b16baff: refactor and clean up PYTHONPATH initialization (PY-2626)
9f6ed71: update Python run configurations on rename of target (PY-2787)
3c30581: introduce variable handles multiline expressions better (PY-2862)
67e8e63: Python console now handles multiline statement after entering an empty string (PY-2727).
b011506: Console init moved out of the event dispatch thread.
09afbde: Python console: an attempt to use commons xmlrpc transport instead of the default one, which hangs on getting inputStream.
b7976c8: separate language for python regexps with verbose flag set, set commentMode to true by default (PY-2941)
c5ae14c: refactoring: enum set of regexp capabilities instead of a bunch of boolean flags
74b7c06: allow dangling metacharacters in Python regexps (PY-2430)
29e9f1f: rbracket after caret at the beginning of character class doesn't close it (PY-1929 part 2)
aefabe0: fix parsing of character classes so that it's actually correct (PY-1929)
1e821e1: fix parsing of octal characters in Python regexps (PY-2906)
4ed294e: Python also has its own regexp dialect; correct parsing of character classes for Python (PY-2908)
7f10517: use new format string parser in string format inspection
b0124e8: new format string parser
436f51e: added test count to django test runner
780ca12: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4326688: PyClass implements PsiNameIdentifierOwner
a59f1d2: Revert "More consistent API"
4dc8c0f: More consistent API RUBY-7865 feature request: ability to collapse commented out code
d598c1f: fixed PY-2970 Join 2 if's intention alters behavior of code
030a961: Restored 3 secs sleep for jython as it can't connect sometimes otherwise.
3d8ee5c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9fb6601: fixed suite location in nosetests
6d72141: Fixed 3 problems with console: xmlrpc hangs up without timelimit, close action doesn't work, disconnect on ide close doesn't work too.
d491fae: fixed suite location in doctests
98269a2: fix race condition in updateVersionsToProcess() (no caching is needed here)
5348ff9: provided scope for Groovy library in Python and Ruby
40d2e3e: Web IDE api module: modified PyCharm/RubyMine build
515858e: advance skeletons version
abfe8a1: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
735220a: added support for multiprocess plugin in nosetests.
52174dc: toPsiElementArray, toPsiFileArray
191a33e: testFramework & junit excluded from classpath for build_searchable_options task
d5492e1: Traceback collision fixed in one more place.
314f2c1: Resolve fixed once more, to fix PY-2211
651ad43: Fix multi-line import blunder
d72aa3f: fix Python 3 incompatibility (PY-2955)
7184774: merged with master
5845bc2: Fix py2.7 + PyQt 4.7.3 on win32 failing on isinstance()
8f1d502: fix testdata
ccfda73: even further crash-proof import restoration
2cc29dd: fix another incompatibility of generator3 with PyQt4
9fe7661: diagnostics for ImportError
795031f: in debug mode, exit if failed to import name
5d691e6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4cba4f7: Fixed resolveModulesInRoots to make resolve work for django project name. Needed to fix PY-2239.
bb8bc20: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
409cd08: PY-1868: auto-insert 'import' after qualified import refs
8c0ef87: Fix for the strange case of unhashable functions in py27/win
5cbfad8: fix 'unhashable instance' exception in generator
992b5a3: @NotNull, fix calculating SDK for out of project files during completion (PY-1975)
30e03ef: fixed test run configuration producer for functions
b5c058f: fixed enabled by default test functionRB in unittests.
8470e21: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d51fa95: PY-2846: only allow root as top-level package resolution in modules with Django facets
146bf86: Added isatty() for IORedirector, also added smart method dispatcher, which will prevent such errors in the future (PY-2933)
4fdf1b9: Removed stupid 3 seconds wait on python console launch (PY-2282)
4f4dfeb: Support of from keyword in django load tag (PY-2866)
c7e0324: correctly find names in import statements under 'except' clause (PY-2853)
52abaf5: performance: don't calculate the enclosing function every single time in refersFromMethodToClass()
68beefc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
93e4efd: performance: don't check isValid() on each element
f1e4fb9: performance: a bit faster PyTargetExpression.getQualifier()
424a3f7: performance: PyResolveUtil.getPrevNodeOf() works over AST rather than PSI
898c2fa: improved models module name
92773d9: fixed testrunner problem from the forum. (Running unittests and doctests under Django)
840bb56: performance: remove unnecessary getParentOfType()
c34c0e3: fast path for PyAssignmentStatementImpl.getElementNamed()
7b29063: performance: quickly return false for PyQualifiedReference.isReferenceTo(local variable or parameter)
c8a445c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f5f0396: fixed failed test
882e3b5: build fix
813097f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4409191: UnsupportedFeatures and PyCompatiblityInspection refactored.
529e227: branch number 106
529f620: do not ignore .pyc in generator
df69640: Extract haml plugin and haml common parts for RubyMine build
f79f77c: fixed PY-2807 for test functions and for test generators (Running and re-running single test cases or test methods from nosetests runner)
daebcb6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0964286: fixed PY-2914: RE at com.jetbrains.python.psi.impl.PyBaseElementImpl.childToPsiNotNull
0045492: fixed PY-2898 PyListCreationInspection ignores conditional changes, forgets to clean up
38d4edf: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b26b44c: Updated generator3 datetime to force skeleton rebuild.
8da49aa: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7afb2b1: Assume source roots to be top-level packages: PY-2846
d763642: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3ef18a8: fixed receiving of string literal text in quoted string intention
5dbe9b9: fixed PY-2915 Convert double-quoted string to single-quoted string: breaks code in case of escaped characters
8e73e97: usage view for text usages; smart pointers to the range inside PsiFile; use stub index to restore psiElement for files without tree loaded
804e3f1: fixed duplicated failure in unittest runner
e68497f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
52b345d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
35aaf62: changed scope to 'test' for dependencies on junit
166f636: fixed PY-2807 Running and re-running single test cases or test methods from nosetests runner
a85923e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
af40f87: Nicer property accessors formatting
67cd45a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a7b89cd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
84ed1a2: CIDR: using selected 'Xcode' location to start 'iPhone Simulator' XDebugger: api for pre-selecting expressions in evaluate dialog CIDR: tests refactored
aae62dd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
31a87c5: fixed PY-1121 Django testing - removing database (Now django test runs as --noinput)
22ffbe5: changed scope to 'test' for testFramework dependencies
e2c7d39: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a1bbc307: added syntax highlighting for django localization files. (PY-1302 File type and syntax highlighting for .po files )
04ff78b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
67f8ab4: improved collecting used modules in layout
6c71c74: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
63c5f1b: fixed failed test (compatibility inspection)
5e5bb00: PY-2713, PY-2588, ignore .pyc, little cleanup
abd5761: Don't return deceptively named functions as class instances
8e1ad73: Works under py3k
8e69e36: Sane imports, fails on py3k._ctypes, line 1117, in fmtValue: unorderable types: _ctypes.PyCSimpleType()
3bb2087: Renamed -l to -v (verbose) for consistency.
2577ae4: No crazy imports, better duplicate handling, works in py3k; reimport broken!
7b14718: Fixed output in generator, renames, cosmetics.
e8ebc05: Proper logging in find_binaries.py
ea96296: generator3 switched to buffer list output, tests updated
4bc3973: Add haml plugin to the layout
4d1d33c: fixed falsely enabled button in test runner.
b5e104b: added quick fix for print replace with print() to compatibility inspection
854bc6a: Rename.
a57489f: fixed PY-2809 After installing unittest2, setUpClass/tearDownClass not called
6edce02: fixed PY-2836 Inspection for Dictionary Argument in String Formatting Statement Yields Incorrect "Key Has No Following Argument" Error
64188b0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7513dbe: fixed PY-2873 PyDictLiteralFormToConstructorIntention breaks code if there are keys named as python keywords
7152787: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
dbc7684: added support for string diff in python 3
59f7ce8: fixed PY-2828 nosetest runner intermittently fails to catch errors
9893f9e: fixed null assignment value in compatibility inspection
cae1fcf: fixed problem with nosetest runner with python3
6fa93ba: added tests for list creation inspection
a25f95b: fixed PY-2832 Python 3.x - "Statement can be replaced with function call ignores" arguments
fd0cf78: fixed PY-2823 PyListCreationInspection fails on recursive lists
7b2195a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3c90334: generate 3rd party list for pycharm
d30bd6d: Normalize memory defaults across products build scripts.
3be226a: Common Info.plist for all products.
6ae6981: fix compilation
60b0808: @NotNull, sorry
b4871a9: rewrite PyBuiltinCache, fix memory leaks, tie to PythonSdkPathCache
23118fb: include versions.xml in plugin build
67bcf88: Revert "Revert "Merge branch 'master' of git.labs.intellij.net:idea/ultimate""
8197316: Revert "Merge branch 'master' of git.labs.intellij.net:idea/ultimate"
c3426b6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
89622c0: DjangoTagLibrary is a project service
8c3bfb3: provide return type for get_object_or_404 function (PY-2822)
2cf5941: more useful message
a5b5106: runtime dependency from python-ide on colorSchemes (fixes building searchable options)
f2aa7ed: wording
9e53499: PyCharm version number updated to 1.2
275d049: no JRE check when building searchable options in RubyMine or PyCharm (PY-2829)
547fd75: one more test
deff48c: don't try to import references that resolve in the method being pulled up (PY-2810)
2c36976: pycharm: unescape "\n" in expected/actual texts provided by test unit framework
915f169: PY-2783 Assignment incorrectly highlighted as unused. Better fix
656e1c3: PY-2783 Assignment incorrectly highlighted as unused
fc89e72: Added existing import check.
3be829c: extract django.manage package
cd194cc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
91ef339: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9d36f08: app_directories template loader support is much closer to correctness now :)
18b8744: support resolving to classes inside a function
ea70262: correctly handle implicit argument count for constructors qualified by module (previously failing test now passes)
8bb4a81: start splitting test for PyArgumentListInspection into many separate testcases (one test fails, work in progress)
96c92a3: work in progress on PY-2820
e1b7b4f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8955ba0: Added initial support for Django 1.3 class based views migration quick-fix intention.
84ba0a9: fix expected highlighting
3952c34: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
644793e: Fixed parameter info tests
3dbda5f: PY-2755 'Referenced before assignment' inspection when using loop variable after the loop
6beca8a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6675f5b: Join comments + test.
927655c: String joiner fixes, tests.
4442015: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fdce943: navigate to ListView
b6fee4c: understand template_name attribute in generic view classes inherited from TemplateView
87e0493: view navigation markers in urlpatterns, step 1
fb23afc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4fe3dbd: [r:kirillk] make 64bit a default, since there seem to be no other way to have common configuration for 10.5 and 10.6
a985a1b: understand _id column for ForeignKey fields (PY-2359)
b71d584: remove LookupElementFactory
58fc00c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8be81af: Added as_view method arguments completion.
c77e7bd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9cff7ee: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7b783a5: PY-190: initial support for joining lines; needs JoinRawLinesHandlerDelegate.
449d379: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7f71f41: Refactoring and many fixes of the django completion in string literals. Also tests added.
7df180a: replaced vector with arraylist.
092d8ec: fixed failed test in compatibility inspection.
eb2db01: fixed PY-2793 Py2.4 compatibility: relative imports: false negative on relative imports from modules
bebe92c: fixed PY-2797 Py2.4 compatibility: highlight that with statement isn't available in python 2.4
a2cd70e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d3dc22f: added tests for PY-2792, PY-2796, PY-2795
f829adc: API for supporting known wrappers of @classmethod and @staticmethod
a5c6fd9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
aef6f44: fixed PY-2796 Py2.4 compatibility: missing highlighting when yield value is assigned to a variable
fdfbe18: use PyNames where appropriate
68762c3: fixed PY-2795 Python 2.4 compatibility: Unified try/except/finally: missing inspection
965ddc5: fixed PY-2792 Missing SyntaxError highlighting in case of conditional expression in python 2.4
421bdb0: PY-2794
1a5a291: restore changes reverted because of failed push
282a649: fix compile
13444c3a: fix compile
13bea54: slice list parsing improved (PY-1928)
ca1f5b8: tweak parsing of incomplete dict literals, consistent alignment/indentation when pressing Enter between dict key and value (PY-1469)
38d3fe6: fix alignment when pressing Enter in list literal (PY-2407); thankfully the logic for calculating whether a child needs alignment can be much cleaner now
4fc3b64: Ctrl-Enter splits line without adding \ (PY-2442)
cb0fca1: consistent behavior for pressing Enter inside a parentheses pair (PY-1947)
c9136f1: PEP 8 compliant option for blank lines between top-level classes/functions (PY-2765)
f76e57d: fixed PY-2698 'Too broad exception clause' inspection
1965a07: Django upgraded to 1.3beta1.
a000306: Fixed failed test for completion of inherited class attributes.
85e22fc: Request focus of debugger console editor on show action (PY-2723).
1cba18c: fixed print in unsupported features util and in compatibility inspection.
8378e03: fixed list creation quick fix. Added tests.
0cd3f9d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
872ad54: fixed memory leak in ConvertVariadicParamIntention.
593a87f: fixed PY-2776 Python version compatibility inspection ignores selected version range
8bb1c25: fix the missing versions.xml
618beef: allow searching for text occurrences of Python classes and methods (PY-2345)
1422b20: statelessness
a982862: intention made truly stateless
e4997f0: cleanup
ffbfee8: fix project leak in ImportToggleAliasIntention
5f44cee: special case for import completion of os.path (PY-2433)
5ce7895: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b0e5269: auto-import for variables (PY-2343)
31c9eca: Goto Symbol works for file-level variable declarations (PY-564)
a2f76aa: added PY-2752 New Intention: Replace list creation with list literal
27aaddf: cleanup
7f98153: storing name variants in a HashSet is not a very good idea (PY-1511)
a495eeb: remove quotes from string when suggesting name (last remaining part of PY-1276)
b24337d: use PyClassRef in resolveSuperclassesFromStub()
c9a033c: fixed django test. added notification about import errors (import settings/manage files)
0e38f47: PyClass.iterateAncestors() split into iterateAncestors() (returning class refs, which can be not classes or unresolved classes) and iterateAncestorClasses() (returning actual resolved classes); fix PY-817
b6c0bd1: fixed ReplaceNotEqOperator quick fix test.
6de2c7e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b39b94c: fixed PY-2774 'from . import x' not highlighted as syntax error under Python 2.4
eafb6cf: no import reference completion in incomplete/invalid import statements
7fbc8a7: complete 'import' after 'from .' (PY-2772)
01649cd: PY-2520: add spaces between params in Ctrl+P hint
7a2f0eb: isQualifiedByInstance: handle long module qualifiers.
c282af0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7062ac2: Allow callable class instances to be passed to property()
8519231: branch number = 104
67ee2f0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c766239: Prepared fix for PY-1634 Provide link to diff for string assertEquals failures in Python unit tests
2f007d2: fixed PY-2334 Buildout custom manage.py for unit tests. (All custom stuff should be defined outside main block)
748b32b: fixed failed django test.
fd23c77: fixed failed test.
dc65c8f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a2a537d: Added tests for doc test runner.
201898f: Added tests for compatibility inspections.
fa047ec: fixed PY-2757 Nosetest runner failing on code error
03f04c5: fixed failed tests.
2161e2f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c9e5520: updated python unsupported features information xml.
8f31b76: added processing of subsections in python documentation
0e4ecff: a few refactorings in test runner; moved getTestSpec to PythonTestCommandLineStateBase, moved getSuggestedName to AbstractPythonTestRunConfiguration.
3b8315f: option to rename parameters in hierarchy when renaming a method parameter (PY-2374)
59d1b07: offer to rename inheritors when renaming Python class (PY-2373)
7670b4f: Added Unsupported features Inspection. (PY-1820, PY-2719)
21d0b48: offer to rename containing file when renaming Python class (PY-2372)
765dd18: added tool to collect python functions and modules for Compatibility Inspection.
3b512ad: fixed failed ManageTaskModelTest.testTaskListCompleteness
c5f9870: fix PyIndentTest
430a47d: missing testdata
82924aa: name conflicts in Python rename (PY-2390)
95dfe01: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ce70c75: cleanup for resolve of class members; prefer resolving to definition of instance variable in containing method rather than in first method of class (PY-2740)
7c7e1d1: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7463982: Added super class attributes completion (PY-2744).
604fa13: Architecture priorities updated according to the latest Apple guidelines.
2bae176: Added template_name reference to Django 1.3 views classes and as_view method.
93ac3f7: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
dc5a698: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8e3b034: IDEA-52274
c4e1537: python debugger adapted to changes in XValueNode/XCompositeNode
89e2cfc: Autoscroll for python consoles
c292bd2: no 'make before run' for Python run configurations (related to IDEA-64261)
2999b5d: names prefixed with __ are not imported with 'from ... import *' (PY-2717)
da80a41: 1. Smart expand API for structure view trees 2. Improved Ruby structure view for Test::Unit tests definition via closures
668f0a4: couple of fixes for docstring indentation (PY-2667)
8a3e68f: PyCharm doesn't need jcip-annotations
13fbe40: fixed SKIP parameter from docrunner
126aa39: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d1e77e7: fixed PY-2718 Cannot run django application under python 2.4 and 2.5
4d768ce: remove deprecated ProblemHighlightType.INFO usages
39d2355: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
577db7c: 'AttributeError' in console is fixed (PY-2711)
f6aecf6: fixed PY-2716 Can't run Doctest under python 2.4: AttributeError: 'module' object has no attribute 'SKIP'
f2e19e8: removed duplicated code for django tests.
7d066d3: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8541af2: fixed PY-2715 Couldn't run Django Test on python 2.4
a8cfdd5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
eb3a2c6: correct resolve roof for private names in superclass list of a class (PY-2618)
6dda0bc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
eb1cea0: fixed failed tests. removed unused code.
80f3978: clean up calculation of isQualifiedByInstance() (PY-2622)
5450533: fixed BaseException usage in case of python 2.4 (PY-2688)
5e45e65: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
83402a5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8c038ec: fixed PY-2706 Preserve original traceback if creation of testcase fails
1a411e9: classes with __metaclass__ are not old-style (PY-2699)
ad69290: refactor test to new style
bdaef9c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b97072e: fixed debugger eval problem (PY-2695)(cherry picked from commit a3b39826383bf96607168e3895f71de8c499e8a7)
d560ebc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
772e240: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8f2e839: crlf
157e2d8: refactored PythonDocTestUtil.isDocTestClass
70b860a: fixed wrong condition in DocTestUtil. DocParser moved to Util as a function.
f1ed6e7: cosmetics: avoid exposing internals to user
56c7d30: replace HighlightSeverity.INFO usages with weak warnings; deprecate api
c4f8a83: fix test
838a58f: avoid inserting duplicate colon when adding 'self' (PY-2652)
2875c12: when building PYTHONPATH, add path to jar file itself instead of path inside it (PY-2625)
a5edf81: correctly handle return annotations in 'override' and formatter (PY-2690)
39b4c5c: correctly handle parenthesized expressions in 'as' clause of with statement (PY-2691)
15b2459: created separate django_manage file for Django runner and for Django Tests.
9b89e71: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
aebad6f: fixed PY-2689 Convert double-quoted string to single-quoted string: loses data with multi-line strings
cc5a2b0: 2011
68cacd7: fixed 64 bit check for python 2.4 (PY-2688)
a944f87: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d180c5d: fixed var eval presentation with errors (PY-2671)
7dd52129: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c92379c: fixed PY-2684 "Too broad exception clause" false positive
6c2d02f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
bfc4522: PY-2665: first params of metaclass methods as per pylint.
42375bc: added tests for PY-2264
6373377: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7079cc5: fixed removing of kwargs in ConvertVariadicParamIntention
3507935: added PY-2264 Add refactoring to convert parameter(s) between normal and variadic
7b40aa9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0426476: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5f75b12: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
410fda2: fixed PY-2678 Test runner catches files that have nothing to do with testing in case of Ctrl + Shift + F10 on docsection
eb51948: fixed exception handling while evaluating breakpoint conditions (PY-2672)
b86b2b8: API allowing StructureViewModel to control node auto-expand behavior (PY-2369)
ed34716: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
80837f1: fixed PY-2676 Django Tests: Postgres: invalid ImproperlyConfigured exception within PyCharm (Added support for django >= 1.2 tests)
146b23c: fixed PY-2679 doubled django test cases
1c58670: fixed pydev for python <=2.5
966eac1: correctly handle pressing Enter in parenthesized 'from ... import' statements (PY-2661)
81089d6: since/until build for current trunk
e9dee4f: fixed PY-2674 Assignment can be replaced with augmented assignment breaks context
6b54436: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5227d52: fixed PY-2669 Django Tests: NameError: global name 'build_test' is not defined
87ee23e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6996135: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d94f6d0: fixed pycharm.sh (PY-2612)
b4f47df: fixed failed tests from testRunner
85b75d8: removed ()
750bc86: redundant parenthesis removed
b5f75e9: debugger got fixed for python 2.4 (PY-2616)
ef0a9b4: added option to Python's unit test run configuration to run only tests inherited from unittest.TestCase (as in pure Python unittest)
5c66d7e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
db8cf19: fixed lineno value
9f4771e: fixed PY-2658 Can't launch my "dynamic" unit-tests in PyCharm 1.1 (but they work fine in 1.0 and 1.0.1)
003560f: fixed PY-2659 missing Tuple Assignment Balance inspection
ec2be1d: fixed PY-2656 "Convert double quoted string to single quoted" intention incorrectly changes escaped quotes inside the string
b2710de: fixed second part of PY-2648, and PY-2649
686acdd: fixed PY-2648 Dictionary creation intention breaks code when a value is a tuple
c3c54b0: python doctests refactored
62aaa77: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
47208c0: fixed PY-2608 Test runner does not draw test hierarchy for django versions >= 1.2
ed90415: fixed PY-2608 Test runner does not draw test hierarchy for django versions < 1.2
67adf72: refactored unittestRunner and noseTestRunner
26534f7: added im_class for functions in python3
9aae24c: removed unused parameters passed to function
fcafd87: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0fa8ea1: fixed unit tests in generator functions with python3
d3678bd: fixed PY-2595 Unittests runner - can't run test case on Ctrl + Shift + F10 if filename has no underscores
98c2009: fixed PY-2594 Can't run simple test using nosetests
2bfb03a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
74652f9: fixed PY-2593 Redundant parentheses inspection should have option to ignore tuple used in 'return' statement
e004b1a: fixed 64bit awareness
f8ff09a: fixed literals quick evaluation (PY-2538, PY-2496) (cherry picked from commit 958db58d634b2e89b2c21f6e8fc7f74df968dd73)
b244fbe: fixed py2.5 debugger compatibility issue (cherry picked from commit 33d886a2866d2fa3029559b7e7aac8c1fde9dd82)
787304f: psyco warning removed for 2.7 and 64 bit (PY-2298) (cherry picked from commit 40d641b073e7b4672598015a4ae8cc295c1fe42b)
728104a: fixed XValueHint value trimming (cherry picked from commit e9a835f038395a6bbc5d0fe09580ed35986feab1)
0b41827: fixed line conversion and breakpoint settings in case of continuations (PY-2347) (cherry picked from commit 1945349ce2653c4b9df8116c23b4667734f11c80)
7ea8d89: added wavy error highlighting to buildout.cfg files (PY-2525) (cherry picked from commit a82a5ab946cb78b2d69006412938b8d32fcd839b)
b2ec0b6: reregistering exceptions removed (cherry picked from commit 96a0ca212d634a6874e2f6248bbf43a8b6b36252)
d6d70ae: fix compatibility of generator with python 3
bdb6930: advance skeletons version
8cb7221: fix exception on handling overloads in generator
628c77d: in internal mode, log errors when building skeletons
b0ab628: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b6aaa01: remove Gant library
ad4b75e: use correct language level for createCallExpression() (PY-2587)
7db7477: fixed dict completion contributor exception
01ac225: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
15d9dc6: failing test for dict key completion
f043dd1: fixed doubled path's in unittests
e0198c5: EA-24550 - NPE: PythonUnitTestCommandLineState.a
b2f2833: EA-24791 - NPE: PyTypeProviderBase$ReturnTypeDescriptor.get
430277c: EA-24842 - SIOOBE: PyDictLiteralFormToConstructorIntention.isAvailable
26fd6cd: splash tweaked
d497629: fixed PY-2585 statement has no effect break code
f281c63: fixed PY-2583 wrong list comprehension to for loop
7ca42ce: @NotNull
01fd456: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4da445e: Added a test case where join ifs intention is *not* activated.
0d68392: removed leftover print statement
560a2be: fixed PY-2412 test runner inappropriately runs abstract trial TestCase classes
ca79da6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate python/codeInsight/intentions/PyJoinIfIntention.java: notice outer 'else'.
0401cdc: don't suppress other completion contributors in dict keys (PY-2558)
dc90d60: fixed PY-2503 Tests with errors and failed tests have the same icon
b54b224: fixed PY-2516 Split into 2 ifs: broken code with suites on the same line to if and else clauses
592e929: fixed PY-2517 Split into 2 ifs: don't duplicate comments
58ed490: added TeamcityNose runner instead of Plugin
7226751: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
90854b2: fixed PY-2545 Incorrect intention for string literal concatenation
91beaca: branch number = 102
a46910b: perf fix: don't go looking for fields assigned nearby if we have a good result from the type itself
ffa0956: correctly highlight "can't assign to operator" in chained assignments (PY-2491)
4c420d8: isQualifiedByInstance() returned incorrect results for nested classes (PY-2460)
f0025c6: use correct language level when building overridden method (PY-2547)
04550c1: AIOOBE in folding
5ef49c4: don't include whitespace after statement list in fold region (PY-2544)
1ee4822: likely fix for PY-2531
779781f: year in copyright in info.plist
da805a6: for out-of-project files, resolve imports from SDK selected for project
c4ce296: PY-2542: only join ifs without else parts.
995897f: add new line in msg
0fe9b63: fixed bug in version comparison
533e15f: fixed children sorting in py3k
581c7aa: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
169c53f5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f28f879: fixed PY-2535 Test Runner: nose_helper.failure.Failure with py3
bec2c2b: fixed display children for python 3 (PY-2534)
0f256ae: fixed sort of float keys(PY-2537)
e230f01: fixed broken build
5016d9c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6f66f60: fixed broken build
1c9dd82: fixed PY-2523 "Convert lambda to function" makes code unclear - lambda as an argument
d250c36: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9955cd4: focuskiller loading fix (IDEA-62107)
a7a48ca: fixed PY-2533 CCE at com.jetbrains.python.codeInsight.intentions.PyDictLiteralFormToConstructorIntention.isAvailable
9fb24d3: fixed PY-2511 Dictionary contains duplicate keys - dict constructor uncovered
57bea1a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cd1889b: fixed PY-2518 Join 2 ifs: merge comments if both presented
07764c9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cfb853b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f6236ff: added diagnostic of debugger version (PY-2526)
e302308: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7859568: fixed sorting of children in variable view (PY-2471)
90a1767: twilight color scheme for PyCharm, converted by https://github.com/yole/colorSchemeTool
716f337: partly fixed PY-2518 Join 2 ifs: merge comments if both presented (Still have problem if comment is on the same level as Statement List)
94c9edc: monokai color scheme for PyCharm, converted by https://github.com/yole/colorSchemeTool
44d430e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6fbcb1b: fixed 'Unable to display children' in debugger (PY-2508)
e00afc8: fixed PY-2519 Join 2 ifs: disable intention if nested if contains also elif/else clause
7a7d9e7: fixed PY-2514 False positive for augmented assignment
08a9ea7: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0d69b1e: fixed saving test run configuration
6bf3bc0: fixed PY-2512 "Convert dict literal to dict constructor" breaks code
98c20d6: 'ignore unresolved identifier' is a low-priority action
ed06afa: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
bdff22e: added settings saving
bd79647: fixed some msgs, label changed to editor pane (PY-2470)
d43ab43: remove bogus InspectionColorSettingsPage declarations
9a576d8: EA-24321 - AIOOBE: PropertyBunch.fillFromCall
ef9474c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
37d3889: EA-24594 - IOOBE: PyQualifiedName.removeTail
7d503bd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e54f04f: added highlighting to chained comparison inspection when different operators are used, e.g. a >= b >= c and c > d (fixed PY-2489, PY-2490, PY-2492)
f02aa78: fixed property in unresolved reference expression; added property of the form x = property(getx, setx, delx, "I'm the 'x' property.")
a9179bf: PyCharm 1.1 splash
a290c0f: added tests for PY-2488
f5bceef: fixed PY-2488 "Augmented assignment" - some operators unsupported
7457972: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
10537a4: fixed ListSelectionHandler (removed unneeded whitespace from selection)
a4a4f07: fixed property quickfix creation in Unresolved Reference Inspection
118332c: fixed operator checking in Augmented Assignment QuickFix
0a3e54d: fixed Augmented Assignment (added shift operations)
361b1fd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ddadcf0: fixed remote debug run configuration ui (PY-2474)
f046287: fixed failed test (convert lambda)
5e237cc: fixed (PY-2484) "Replace assignment with augmented assignment" - variables uncovered
d41b203: fixed (PY-2483) "Replace assignment with augmentet assignment" - data structures uncovered
d7d50e6: added tests for PY-2481, PY-2482
16c8972: fixed PY-2482 "Replace assignment with augmented assignment" - support commutativity
bf210a8: fixed (PY-2481) "Replace assignment with augmented assignment" breaks code
49bc240: fixed test count in doctest runner
dc9d6f7: added py.test to test runner settings
52ca7fb: fixed error on breakpoints update (PY-2473)
f11a561: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a2e8f5f: DMG background for PyCharm
0866f2c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7f9205c: fixed bugs related to lambda
5075461: fixed (PY-2467) PyCharm's own unittest / unittest2 helpers for PyDev get into the way while debugging a test suite which is run as a script
c0c2377: added checks to catch PY-2305 next time
7379199: change connecting state in variables view (PY-2449)
d48d606: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2fd311a: augmented assignment to __all__ marks it as dynamic (PY-2346, or maybe part thereof)
17ea720: added re-registering exceptions after changing of path mappings (PY-2450)
110857c: fixed remote debug run conf. ui. added suspend on connect option (PY-2452)
66c61f8: refactoring. also 'Set value' in debugger (PY-2453 and PY-2456) is fixed.
1510c25: partly fixed "No tests found" in doctests
8db419b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
712a32c: fixed debug position update for remote files
e401127: fixed list selection handler (to select in parameter/argument list)
8071d1c: build fix
07477f2: fix exception from auto-import (PY-2445)
ba7abd4: fixed comma selection handler (to select comma in function definition and function call)
0b018ed: fix Unix build
d6745d5: fixed saving in nose run configuration
6ed9d06: fixed (PY-2447) Quickfix: Create function for reference: function in if clause on module level is created in that if clause
c03283c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b7e7164: added Nosetest configuration. Added selection of default test configuration to project settings
9860a19: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
014bcc2: add colorSchemes to modules filter list
6f90a9b: fixed 3 bugs
8bfcfad: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0a57012: bring back restarter (PY-2435)
3170a77: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a5fd7bb: include colorSchemes in layout of PyCharm and dev update of Python plugin (PY-2416)
d339977: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
39c9fac: small fixes in remote debugger
984b3bd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
424dd79: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4f4c147: added handling of absent remote files in python remote debug
d69053f: Infer some object return types in functions, imported or local.
c1ffbbc: fix incorrect import
a7777b1: python plugin build updates
c209b5c: fixed (PY-2422) Code analysis wrongly marks print() in Python as having unnecessary parentheses
dbf9397: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c1296ad: added error output for nose tests
3a375b0: removed unused code
29beae6: added nose test run configuration with plugins support
1c3b9c4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4426433: added egg
97bb53d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
575662c: added debugger egg
517c0ec: correct since-until versions for CFML, Mercurial, Python, Ruby, Ruby-coverage plugins [r=Oleg.Shpynov]
b3c947b: branch number = 100
58f38f4: some minor fixes in py remote debug
2b3d2f7: Added buildout cfg colors page
10b4f6b: fixed failed tests
0f3dce6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
46d9d8b: fixed (PY-2409) Exception from dict keys completion
bc1eefc: Mind the fair warning from another inspection.
52b792f: Don't show custom color schemes in unit test mode.
6b7394d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b9dc895: Adding a bundled color scheme provider; see WarmNeon.xml in community/.
20d0e72: [peter] preserveMarkup refactoring
0bde140: added support for IronPython to testRunner
09ebe71: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0b03943: added description to remote run configuration
a3c1993: fixed file name finding
da5348a: added stdout and stderr redirection in remote debug mode
63233b1: false positive in inspection
980f12c: added (PY-2404) Offer "add self." qualifier when property with the same name exists
eec5350: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fff5610: runnerw.exe moved.
cee4569: added python remote debug
85fcda4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cf8b489: fix path to pycharm.sh
e4add85: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
1173be7: added (PY-2204) Unresolved References inspection should highlight imports that resolve to their containing file
9a247c2: fixed function being added twice
18046b4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3d77c24: fixed (PY-2403) Allow multiple patterns in run configuration
f6f2891: runnerw.exe moved. Mediator manager removed.
14dd806: added dict key names completion to python plugin common
97d2256: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3b7370f: fixed python plugin common
7244e9d: fixed PY-2393 (rename 'self' adding quickfix) and PY-2389 ("Create function" quickfix should not create nested function)
ea6796c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f77528b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4fed177: show more descriptive name in rename dialog when renaming module 'as' name
71785b9: no completion after 'as' in import statements (PY-2384)
6d9f136: take types from type annotations (PY-1750)
97f0e67: show function signatures in completion list (PY-235)
fe826fb: restore and fix completion of 'else' in conditional expressions (PY-2397)
d6a869b: fixed tests and description for byte literal inspection
a21408f: complete super method names (PY-170)
14b434b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
82f4f58: fixed (PY-2395) "Assignment can be replaced with augmented assignment" false positive
df4c869: added tests for selection
8cd51ca: reference inside __all__ (PY-986)
e08de42: fixed dict completion
6b04b7b: diagnostics for EA-24292 - CCE: PyElementGeneratorImpl.createCallExpression
ab3978a: refactoring of runner mediator
d68ea56: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c9fdd7b: fixed test in dict keys completion
4673024: added tests for dict key completion
0c8066b: added (PY-1378) Option for select word to select commas around a selection before widening the selection to a few words
e6e3a93: added (PY-1686) Ctrl-W inside argument or parameter list should have step to select list contents without parentheses
aa82508: added dict keys from dictionary constructor to dictNamesCompletion
faaff61: Do not launch jython with -u option
975ef57: build script tweaks
631c2a7: fix build (?), better root name for Mac release build
747a546: enable YourKit only in EAP builds
985caf1: Fix problem with try/except assignments#2
1cf6953: Fix problem with try/except assignments
b46f064: simpler and more correct fix for PY-1958
c40ee4d: fix regression in import completion (PY-1956)
e8041e0: use correct name when completing imported module (PY-1955); add icon
709cec7: fix adding backslash when pressing enter near comments (PY-1958)
3fa4e98: delete a bunch of dead code
1a7d7de: fix regression in import completion (PY-1956)
17d7cef: drop key which isn't used anywhere
10e055b: fixed process handler
0cd9c5f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a5a1c61: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b04729e: PY-1767: auto-add a correct first parameter to a method + tests.
e9148ca: added (PY-2245) Complete known keys for dictionaries
88b259c: fixed (PY-2365) Incorrect range for bytes inspection
dcd9f1a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
af3bc32: added completion to settings.py
0526f6e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9c8b0f3: fixed tests (CreateFunctionQuickFix, LambdaToFunctionIntention)
fdee21f: changed name extraction in ConvertLambdaToFunctionIntention; added function builder
9fb60e0: added (PY-1451) "Boolean expression can be simplified" should detect additional patterns
da601df: added param visitor to createfunctionQuickFix
6932563: fixed unresolved reference create function
4a10814: advance min Java version to 1.6 in all launchers
ef6026f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4ec15b2: PY-2294: map known *arg to lone *param.
94c70f7: added template to create function in unresolved reference
9366f89: removed dialog from statement effect introduce var quick fix
c459b5d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d1d69a4: added test for unresolved ref
5873685: simplified code for unresolved reference, statement effect
10e770c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0d07668: class and fs references from settings.py (PY-2316)
9436935: PY-2292: deque.__init__ skeleton.
7faaa09: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
301867f: added (PY-2092) Quickfix to create a function for an unresolved unqualified reference
6a5254c: Generator tests include return types, pass on C/J/Iron Pythons
ba6943c: don't return invalid skeletons root (EA-23850 - assert: FileManagerImpl.findDirectory)
fd9038d: added (PY-2083) Offer 'from __future__ import with_statement' when 'with' is unresolved in Python versions older than 2.5
fb0661d: fixed names in dialogs
25a8e92: added (PY-1265) "Introduce Variable" quickfix for "Statement has no effect" inspection; changed dialog in ConvertLambda inspection
978aba1: fixed (PY-2375) Doctest run configuration is suggested for non-Python files
77358c2: added (PY-1242) Refactoring: Convert lambda to function
0b778db: fixed 'add to watches' for set: now the whole set is added, which also fixes PY-2340
d677c83: Fix generator tests; allow classes without __dict__.
d984cab: debugger fixed for py3: relative imports, flushing out streams, builtins
a9437d4: refactored
aca5c269: fixed bug after refactoring
69ad54c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
22a944f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cce0319: fixed intentions description
e89da8e: separated dictConstructor intention into 2 intentions (back and forth)
00dff6d: adjust test according to changes of behavior in platform
99d32d0: added (PY-1589) Intention to convert between single-quoted and double-quoted strings
af06ed5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ef661d4: fixed exception breakpoint handling
e8b7def: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4882970: added notify always exception breakpoint type, debugger commands refactored
58fb0c3: don't use deprecated method
9b38460: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8a0514d: added (PY-1405) PyCharm could handle translation back and forth between literal dict and dict(**kwargs)
268b67e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9f447e6: correctly calculate import priority for skeletons and site-packages
8bea0bd8: don't try to get containing file for directories
4b51f26: one more fix on re-importing imports from imported files
fbb0e60: Add Import chooses location to add import statement according to PEP-8 (part of PY-482)
34841b9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
72b1aca: fixed join if's intention
078d4b5: added search for if statement in children
72322c3: added (PY-1406) Missing "join ifs" intention
7fd7563: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5548845: test fix (major typo in resolve)
154db10: no line breaks from text
7978fd0: type for exception variable in try/except
64fa4b6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7c76697: handle iteration of stdlib files (PY-1836)
6ac8962: Skeleton date updated
1456b92: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
faa95c3: correctly delegate dynamic class members from socket to _socket (PY-2170)
6e4b7bb: added full value evaluator instead of evaluating full value on copy every time (PY-1072)
f30d9d36: Better handling of open(), fixed unicode/str version dependency.
ff67211: when doing stub-based resolve, resolve names into except block if there's no declaration elsewhere (PY-2302)
bf7670e: robust
ade9182: copy value in debugger returns now full evaluated value(PY-1072)
9311796: Add return types to skeletons
f6138dd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
af4e8be: removed some unused code
9c87888: added tests for ReplaceListComprehensionWithFor intention
7526a51: refactor PyDynamicMember implementation to use PyPsiPath instead of string-based ResolveData
11a6e99: fixed false positive redundant parentheses in print expression in python3
b9c5f4c: fixed some bugs in UnresolvedReferenceAddSelf
91c7bc3: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b0ac5ea: PyModuleMembersProvider refactoring
916c459: fixed quickfix for unresolved reference adding self
4c0d89e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f10e564: since-build advanced
b67877a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fa46769: fixed (PY-1423) Don't show 'Python requires 0o prefix for octal literals' if literal value is 0
c9ffb08: (PY-2315) Option to ignore argument of % operator in redundant parentheses inspection Fixed with checkbox
75b5ffe: (PY-1356) Quickfix to qualify unresolved local variable with 'self.' if field with same name exists in the containing class
1051baa: fixed (PY-2317) Doctest run configuration isn't suggested when a file has a module-level doctest
55ea1a4: fixed (PY-2319) Tests: Do not allow to run test unless run configuration has been validated
4281eb7: fixed loading unittest from class in python 3
517f021: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3a416dd: 'nonlocal' is also a global name (PY-2296)
379c287: testdata file
75ffc7c: completion for 'nonlocal' keyword (PY-2289)
6ec1621: fixed failing redundant parentheses test
7efd446: force release evaluation selector; early access preview
e701787: Fix negative implicit offset when a badly incorrect source is parsed.
5b2f30a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8f9f560: don't show auto-import popup if the reference is not a call and we've only found functions (PY-2312)
2316b3d: exception
b8f49ac: fixed doctests for python 3
2c80fef: performance fix: don't try to search for top-level autoimport variants if a qualified reference is unresolved
4e1ea24: (PY-2315) fixed highlighting
da2d5cf: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
dd1c272: fixed: (PY-2310) "Redundant parentheses" should not be reported when expression spans multiple lines (PY-2311) "Redundant parentheses" should also handle return statements
37bde0e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
86a9996: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
47f80ca: Go2Inspection
e2fe7d6: typo fix: AIOOBE when mapping arguments
9141b0c: use common code for searching classmethod and staticmethod decorators, recognize not only the first one
82cd72b: run manage.py as __main__
e12ccbc: add to PYTHONPATH only the paths added by the user, not all paths of Python interpreter (in particular, skeletons must never be on PYTHONPATH)
42b4e44: fixed testData for BroadException inspection
338faf2: fixed built-in Exceptions in BroadException inspection
72867bd: fixed highlighting in BroadException inspection
3d89602: provide members for memcache module in GAE (PY-1278)
9cdb2cd: dead code
abc1202: if we have <module>.<something>, don't try to resolve <something> in builtins
f7d8b1c: added quickfix to add parentheses if the statement is a function reference to Statement has no effect inspection, added tests for that quickfix
8ab0f01: Fix erroneous 'self-reference' strings. Fix a 'dictionary updated in iteration' under IronPython.
f49241b: added PyOldStyleClass inspection to detect occurrences of __slots__ and __getattribute__() features in old-style classes (Py-1212)
521ce29: avoid creating many custom PyClassType subclasses, use PyClassType userdata instead
80070b3: PyTypeProvider implementation refactoring continued; provide return type of gql() method
1314aa1: PyTypeProvider implementation refactoring
6bcb3cb: fixed PY-2301
6d4fc49: added quick fix tests for Redundant Parentheses, Augment Assignment, Chained Comparisons
2291b99: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
488a733: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e28d43e: configure App Engine library in App Engine tests
2d100ac: improved DictDuplicateKeys inspection: the same functions can be used as keys in a dictionary.
b753968: Added tests for inspections (DuplicateKeys, RedundantParentheses, BroadException, AugmentAssignment, ChainedComparison). Fixed false positive in RedundantParenthesesInspection for except clause.
8d46348: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a45f490: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f2738fb: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d3709df: test for type of slice
6b273ac: initial version of calculating return type for objects.all() in Django (PY-1053)
325b1c7: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
742c61d: Renaming failed test folders. Second iteration.
51d6da8: pass optional call site to PyFunction.getReturnType()
1a407a8: cleanup
a55f2ce: less read actions
c635b81: pass TypeEvalContext to Callable.getReturnType(); add getReturnType() to PyTypeProvider
0c3d8b7: improved Redundant parentheses inspection (PY-1362)
c3886f3: improved Broad Exception clause inspection (PY-1452)
56620f0: type for subscription of collection
2d1fe43: propagate the confusion about the semantics of getStringLiteralElements() return value by returning both lexer token type and parser element type of Python string literals
196ebad: iteration element type initial implementation
b1417d5: collection type initial implementation
e6fcef4: PySetLiteralExpression extends PySequenceExpression
0308fe8: improved inspection for chained comparison (PY-1020)
cd8b233: cleanup
0562a52: include webDeployment in PyCharm build
a91c9f2: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ed8f75d: code clean-up; added django 'for' tag synthetic vars (PY-2181)
684fb45: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0f3b6c4: (PY-1020) added inspection and quick fix for simple chained comparison to replace: a < b and b < c --> a < b < c
9330747: use non-strict getParentOfType() when searching for suppression comment (PY-2240)
48ccbf6: added inspection and quick fix for redundant parentheses in except clause
951cc65: added inspection and quick fix for augmented assignment (PY-1415)
e9cff52: plugin-filter
801a617: PyCharm spellchecker (PY-2258)
b1cbc7f: fixed 'Add to Watches' (PY-2267) and 'Set Value' on dictionary (PY-2267)
7a874dc: PY-2222: allow tuple initializers in skeletons.
981a0f1: added inspection and quick fix for redundant parentheses in if and while statements (PY-1470)
a38a3d7: added inspection for too broad exception clauses
243a810: added inspection for duplicate keys in dictionary
ec13508: added TextFieldWithBrowseButton to py.test configuration to select script, fixed keywords problem in py.test
d05eadb: minus ; in python code
b7e7a21: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b6a2ca6: better use of python syntax
19881b11: add github and tasks to pycharm build; advance version to 1.1
fe8e74d: added forgotten files
976ab6a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
dc7c3a7: Make pyTestProducer, DocTestProducer check whether we need to produce a configuration.
913a469: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a536bae: fixed python debug console highlighting
4913e31: fixed locations for tests
213fa08: Fix Python SDK chooser combo for GTK+ L&F
34bb1b1: Typos
c7f77de: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
bc9cf31: improved context test producer in unittests (to search for assert or yield occurrences in functions)
3d21206: fixed suite names in 'All tests in folder' test configuration; added locations for all types of tests
a22cd55: @NotNull location in weigher
86fd364: added suspend option and 'log to console' for exception breakpoints
fd11204: Completion regression fix.
3666c90: Fixed suspend option and log to console in breakpoints processing. Added log expression evaluation to python breakpoints.
fd12277: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4ccf3a1: Fix completion test now that underscored name completion changed.
1b501c0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e3172f1: Added python exception breakpoints. TreeClassChooserDialog refactored.
fc250193: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b823c96: PY-1240: don't hide __names__, put them at the end of completion list.
4cb26c4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
501f204: avoid duplicate skeleton generation on startup
3e3b153: fixed some test names from doctest configuration (if we load a test from context); added the ability to load doctests from non-Python files
6d8e7bb: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
212535a: fixed size records
80ef393: don't try to build skeletons for IDLE
cceea3b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cae1de5: don't generate skeletons for stdlib tests on Mac
12b6e3a: added doctest configuration producer
2f8a7c5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8c772b2: testdata discoverability
aa7b2be: one more null check
a8f1e16: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f085cc8: added doctest configuration to testRunner
d47a515: do not create instances of FacetType's if it is registered as extension
5270483: removed internal invocations from py debug console stack trace
ab92ded: Exit application startup scripts if no JDK was found (Unix)
9d01f6f: Having a directory in initial startup costs >5 MB in the ClasspathCache resource map, filled with results of recursive symlink walking in the JWS plugin.
48e6e31: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
55884b6: fixed django live templates insertion
0acda27: IDEA-21530 (restart IDEA in Linux)
fcbdbb8: fixed django template id lexeme (PY-2227); improved url tag parser
21e03b0: PY-2183: lambda as a method correctly understands 'self'.
eaff872: Missing added test.
a67392a: added py.test support for versions >= 1.0; fixed location for unittests
a508e05: separators
e64f419: fixed something
630d4fe: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
b54ddb8: added restriction to debug command line to execute only on paused process
e0feb50: PY-2223: Ctrl+P works in tuple or list args of plain params.
45b0fc3: python icons moved to a proper place
60f699a: fixed debug console highlighting
d2382c4: 1) added 'show command line' action to debug console; 2) fixed multi-line command-line console input in debug
bcb0485: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a0ed2cb: Correctly handle __new__'s first param, staticmethod or not.
f26fa89: Fixed __new__ erroneously considered classmethod (now staticmethod).
d783bff: Special-cased __new__ for descendants of 'type' (see PY-1862).
fd94ea2: PY-2229: time.ctime() signature.
b42e324: A better fix for PY-1265 plus force skeleton regeneration.
97ae253: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
217a26c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
df290a3: get rid of PsiFile-based filters in facet detectors
f31bb3b: added completion to debug console
f61a1dc: added nosetests support for jython >= 2.2; fixed failing tests
44bd6bd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e8d8c81: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fa7a3ef: if the same name is declared in the 'try' and 'except' parts of a try/except statement, prefer the one in the 'try' part (fixes PyGame: PY-2197)
d9de0e7: test for PY-1284
f38c72c: don't copy entire testdata dir into temp fs in PySelectWordTest
f05662d: tweak alignment of comments between classes (PY-1598)
0655a8f: don't run the same test twice
dcd94cb: skip preceding comments with same indent also when generating INDENT tokens (PY-2108)
6a4acd9: attach trailing comments to statement list (PY-2137)
1abe89e: test for PY-2209
70f9294: move DEDENT tokens before comment tokens if comment-only lines preceding a significant token have the same indent as that token (PY-2209 and friends)
15f5149: make it easier to specify expected data for new lexer tests
67b8107: merge PythonIndentingLexer and PythonFutureAwareLexer, delete some dead code
bd46ea9: don't add parentheses when completing single-arg decorator (PY-2210)
98dfa24: don't auto-import from current file (PY-2212)
a6137e7: Optimize Imports shouldn't delete unused imports (PY-2201)
fb106d6: PY-2165: correct parameters in type.__init__ under python3. Better omitted initializers (the likes of sys.argv) preserving types. Omit certain known names altogether.
67f03eb: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
87b41b9: added the ability to run a function test directly; added run configuration producer for nose-like tests
c6fb4cc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6ddb251: Initial support of python debug console. +some refactorings
cf4cb1b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d238765: added support for nosetests in python3
842c2d5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
11574e2: Better argument mapping, fix py3k regression, fix errors in test data.
12ccebe: Don't show "Cannot analyze argument list" as useless.
bd8edc5: added support for nosetests in 'Run All Tests in Folder': functions, generators, and test classes that are not subclasses of TestCase but look like tests.
5d8c737: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
204a62d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4d8bc82: CRLF
d89d18f: crlf
6f21878: fix path search in Extract Superclass (PY-2119)
7984d55: Override generates correct reference to superclass (PY-2171)
8ee8298: add "from __future__ import print_function" to skeleton for builtins under Python 2.6 and 2.7 (PY-2143)
ad69fcb: NPE
698e933: set PYTHONIOENCODING env variable on all platforms; handle encoding in Python console more nicely (PY-2154)
8e60970: type of super() call is union of superclass types (PY-2133)
0993530: dead code
e7b6285: invoke scanOuterContext() also for lambdas (PY-2182)
0fa152a: Introduce Constant adds element after imports (PY-2149)
72681d0: no statement - no backslash (PY-2194)
bb40fc7: yet another case of unneeded \ on enter (PY-2138)
9ea8995: handle parenthesized expressions in 'statement has no effect' inspection (PY-2144)
26ea44c: improve sample in code style preview (PY-2168)
9cd0a7e: space after colon in dict literals (PY-2169)
fdd79f9: testdata discoverability, adjust expected formatting
30a16be: align pieces of string literal
303d745: py conditional breakpoints (PY-1826)
e3c1236: fixed npe
2175b67: semicolon removed
4ffb182: pydev update
7eaa95b: pydev update
1d8cf7c: filtered out stubs from interpreter paths (PY-2062)
f69ed13: Add a staticmethod case to ^P test.
d7ac2cb: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e8d8b95: Now nested-tuple arguments autocomplete without failing an assertion.
9f96ed5: Removed dead code.
d1853db: PY-1268, PY-2005, a bunch of fixes to the broken logic of parameter mapper.
4e1735a: PY-2189 Do not perform autopopup handler in case of REPL console
72a9982: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7f29959: Application.assertWriteAccess does really assert in tests, tests now run in EDT
bc3816db: Fixed PY-1433 ("All tests in folder" allows to specify pattern for files to be considered tests) Fixed PY-1964 (The setUpClass fixture from unittest2 is supported.) Fixed PY-2124 (Running all tests in folder works with Python 3)
94b5443: 1) add Python interpreter paths without removing them on reload; 2) Python interpreter paths added to the PYTHONPATH variable on Python run (PY-2136)
c135c9e: Updated arglist inspection, related to PY-1268.
9d93dec: PY-1268, PY-2005, PY-312: new by-instance call detection logic. Raw, needs cleanup, but passes all tests.
2a984a8: jdk->sdk rename refactoring
e222a72: do not break Python plugin for a while
f26495d: fixed Django check in Django run configuration for bundled Django
70ae3ba: added 'is Django importable' check for the SDK selected in run configuration (PY-2153)
f310d8c: IDEA-60125 False positive: static field is unused IDEA-58929 flex: fields, variables, functions occasionally marked as unused (while they're actually used)
f971df9: django run: invocation of custom manage.py, usage of settings.py path from settings
9933888: copy runnerw to correct place
35d7244: copy runnerw from sources rather than binaries
094e773: advance since-build
6ab17b2: fixed stupid error (PY-2101)
d79c67c: buildout icon for buildout cfg file type
2cd8a04: added completion to urls.py view reference string literals (PY-1051)
8f70534: missing space added
0f97299: fixed extra } in js django (PY-2095)
f9c8bbd: help topics for sdks (IDEA-59895 )
558a592: fixed resolve of django parameters in case of locals()
165702a: django template parameters resolve in case of locals() as argument of render function (PY-1982)
7086d78: added runnerw.exe to python plugin
23b0caa: removed ;
54fc232: include pycharm.sh instead of idea.sh in PyCharm Mac dmg
9601aed: Register problems on a correct local element (not in foreign file).
e540bd4: fixed eval of compound global vars (PY-2101)
040fc13: do not move class comments
583262d: fix path validation in Extract Superclass
2a74a61: __Classobj is a PyCharm internal thing, don't expose it via Override feature (PY-2098)
c13622a: PY-2102 Extract method fails
970765f: reindex django templates on enabling django facet
b031cb15: fixed run sh syntax
0c8e941: we don't need to have any specific file to check if Django is importable, and also don't need to cache the availability
6dcdb67: mention Django in module type description
abb23f7: advance version number
ed1d69c: update since/until; include file templates in plugin resources (PY-1803)
a193c5e: enter in subscription expression (PY-1992)
8ed8bb2: python plugin depends on java
7af9319: "dump psi to clipboard" is internal action
309c8ab: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a51b8ba: enter PyClass in structure view by default (PY-1907)
ca8074c: cosmetic
a6d106c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
65be529: PY-2046: correctly infer tuple type in inspection.
350757b: PyCopyPasteTest fixed
c68babe: Do not launch jython with -u option
b684d10: preserving indent on paste is optional and off by default (PY-1998)
ce0b276: handle dynamically built __all__ (PY-2030)
b909f0e: create package action uses "Python Script" template for __init__.py (PY-2047)
8a9aa63: a dash of guava
6bf12f8: fix test runner compatibility with old pythons and weird tests (PY-1976)
d40f942: fix PyWrapTest
c19573f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
61a04bf: Hopefully enough for PY-2014 and other recursive data structures.
235c03a: fix scrambling with new cglib placement
4ce6700: hack to ensure patched cglib classes are loaded before the standard ones (PY-936)
be8b832: wrapping in slice expressions (PY-1992)
8122c25: do not add backslashes on pressing enter between function and its decorator (PY-1985)
348a271: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fa2a708: PY-2027: better handling of dicts, bundled messages, updated test.
e23ffa4: More numpy-related fixes: ignore more spurious reexports.
84e38ec: fixed python introduce field refactoring (PY-1983)
5c60c05: fix properties and VM options path for NSIS installer
2bffeee: tar root for isEAP=false
a09df05: build script tweaks
260dfa4: don't check for existence of tools.jar
0282890: @NotNull
ef413fc: build fix
d1502b2: fix build again
e18769f: fix build (?), better root name for Mac release build
2b27c1e: isEAP = true, sorry, wrong branch
dc84c9e1: isEAP = false
38b12a4: enable YourKit only in EAP builds
50bd028: remove irrelevant options, enable error reporting only in EAP builds
6b14953: diagnostics for EA-20138 - RE: PyBaseElementImpl.childToPsiNotNull
9810415: NPE (EA-21822 - NPE: LexerEditorHighlighter$HighlighterIteratorImpl.getDocument)
e73c48a: cleanup
470b45d: removed dependency on JavaScript, fixed installer build
ce06c93: Fix problem with try/except assignments #2
6bbe6f8: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6b2b54d: IDEA-59220 IAE at com.intellij.openapi.editor.impl.DocumentImpl.a
dc630f5: fixed PYTHONPATH inheritance in test runners (PY-868)
23d12b1: File separator blunder
0f076df: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7f1a805: PY-1968: allow parent classes without warnings
23dae3f: Minimal numpy support
6dcd2aa: Fix problem with try/except assignments
bc4b933: PY-1996 Local variable usage inspection misses some cases for *args parameter
90acfce: Tests for Den's change of control flow building (processing self.reference assignments as write access)
4408a9d: @Override inserted
7bde887: Django Templates in Javascript files
ea51c51: Better fixes and tests for PY-1788.
3d0db62: IDEA-49574 Type inference for hover tooltip
b8ea35e: PY-1499: don't add hardcoded system libs to virtualenv that refuses it.
94f2fd8: PY-1788: create missing __init__.py when extracting to an existing dir.
95245c2: PY-1965: allow certain keyword-only arguments past the star argument.
f32abd5: Files removed by mistake, reverted
2a2d3c5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6ddf6bd: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ebc098c: PyCallByClassInspection test updated
b388e71: Allow A.__init__(self) where self is a subclass of A; add description.
c31f628: rewrite and fix test
ce7a1d8: don't overwrite existing visitor when inspecting injected files (PY-1932)
6f61761: simpler and more correct fix for PY-1958
3f8b8dc: advance branch number to 98
e644607: moved to proper place
c376830: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a7dbea5: one point oh (PY-1930)
7140f23: don't add another backslash if we already have one (PY-1960)
7e31afb: fix adding backslash when pressing enter near comments (PY-1958)
6a0f0d3: separate path caches for module and SDK; clear module path cache on changing SDK paths
7bff14d: delete a bunch of dead code
b96424ae: fix regression in import completion (PY-1956)
3e06978: fix from dcherysasov
c2babae: test for wrapping inside argument list
a85ab03: drop key which isn't used anywhere
011c1e4: use correct name when completing imported module (PY-1955); add icon
bf7d533: prepend linebreaks to TeamCity service messages in unittest runner (PY-1737)
2a3c63c: fix find usages for module directories (PY-1444)
05c13c3: fix regression in resolve (losing found variant)
33515fc: accept trailing comma in list literal
40d7797: test for wrapping in docstrings
2bceae0: IDEA-59034 Editor: Modify default wrap strategy in order to prefer wrap on comma if possible
8b5da60: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
61dcbf0: Test resolution collision (submodule vs __init__.py)
b929f6a: fixed app engine script reference refresh (PY-1884), added test
643a66d: added django template parameter rename refactoring (PY-1866)
ea0e086: enter handler that inserts \ at line break position
517c44d: don't forbid to replace all occurrences for call expression (PY-1900)
92bb78d: Partially closes PY-1438, adds call-by-class first argument type check.
2f2a744: failing test for alignment in binary expressions
f878a60: fix target element detection for Inline Local (PY-1585); rewrite test so that it doesn't introduce logic which is not there in production
b5bec6a: remove junk 'throws Exception'
5f236bc: normalize file names when collecting suggested interpreters (PY-1931)
845db20: comma between list literal elements is required (PY-1933)
e7edc31: PY-1065 Reformat and auto-import do not respect space-after-comma preference in multiline imports
bd309de: don't offer to introduce field from keyword or positional container (PY-1927)
c24e3f4: improve Introduce Variable naming suggestions a little bit (PY-1510, PY-1549)
dac19b3: we don't need to bundle jython anymore
560e5a0: remove bogus jython imports
2a3f8f8: SOE protection in PyClassType.resolveMember() (PY-1920)
53f3712: highlight files with bad code in project view (PY-1923)
729a234: refresh VFS for skeletons at predictable time (PY-1617)
d313828: remove junk throws clauses
c401ffc: escaping " in regexp is not redundant (PY-1620)
c02c196: Partial fix for PY-1438
c939b81: Signature of frozenset: PY-1894
3439889: PY-1065 Reformat and auto-import do not respect space-after-comma preference in multiline imports
8d37aeb: PY-1065 Reformat and auto-import do not respect space-after-comma preference in multiline imports
fac0e03: copy license to system path
67c2772: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2d0baa5: IDEA-58743 Formatter: Add ability to configure formatter's 'white space symbols'
fd2b989: failing test for PY-1065
7f86a1e: quickfix to ignore unresolved identifier (PY-804)
f600ec5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
bab4af9: Several small resolver and completion fixes, PY-280.
2648103: don't import module in itself (PY-1895)
c7c3b42: fixed breakpoint removing (PY-1288)
9dfbac9: Fixed a stray EOL in test file
b6740c6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
17894ad: Correct handling of alignment and multiple modifiers when converting to format(). Tests altered / added.
815cdf5: Handle dict() and arbitrary functions after "%".
e1c5653: PY-977, PY-1312, PY-1340: converter to str.format() rewritten. No test yet.
85b26d5: PY-1465: "create function" quickfix for modules.
96fb446: new Ant wants new jsch
f38efd7: NPE (PY-1878)
88373a8: don't allow wrapping in non-parenthesized tuple (PY-1792)
b4e93f2: icons in completion list
d60e5d5: rewrite completion in import statements (PY-1632), provide icons and paths for completion variants
5a6b06f: reimplement findShortestImportableName() in terms of PyQualifiedName
d8eed3a: allow resolveModulesInRoots() for empty qName
318b9fc: added django root parent folder to django tests command line state (PY-1864)
06b2fe9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
572fe0e: added handling bundled django and django in gae (PY-1296, PY-1854)
b4a2493: find Django foreign key fields using index
1cb0aa2: refactor target expression stubs to support pluggable custom stubs
d7057a1: More fixes to inspection and its test. Do look at staticmethods' first arg.
edc76c4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
1489ebc: don't try to preserve indentation when pasting single-line fragment (PY-1815)
20539b7: ImmutableSet
abba792: toString(), normalize filenames
ebb6379: refactor Create Package to make sure undo works correctly (PY-1829)
fa5baf5: PY-1799 Unused local variable: false positive with locals used as function parameter
0b67ff8: A silly typo in test (classmethod vs staticmethod).
be555aa: PY-1848 Unable to extract method from "else" part of try-except-else block
9cf765c: PY-1842 Extract method within while breaks content and generates endless loop
cad6632: special case logic for detecting completion in empty file (PY-1845)
4629473: restore correct logic for folding text range calculation (PY-1847)
c8a505b: introduce fixes: try smart introduce before selecting whole line, use correct language level when creating elements from text (PY-1840)
9487399: "statement has no effect" handles conditional expressions correctly (PY-1841)
09e24e5: Avoid ClassCastException with tuple parameters.
65a1f81: Test nested decorators.
4fbb043: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cd01b98: adapt PyTestRunnerTest to new test runner functionality
747745f: optimization for previous checkin: calculate relative path directly if we know the root we're in
45ae29c: search for matching library in skeletons even if we don't have a .pyd or .so file (PY-1655); assorted resolve tweaks
8a38f79: 128x128 icons (PY-1583)
db1661b: disable incorrectly working and unneeded highlighting (PY-1424)
39d1822: group test results by class when running test file with multiple classes (PY-1418)
a2e33b0: test location can be class as well as method
858db57: simplify code
7f909d8: consistent nullability
56616a9: SDK path cache must be per-project
90dd349: EA-21957 - NPE: PythonUtil.getConstructorClass
932503c: fix indent in specific case after PSI modifications (PY-1796)
e20f472: use correct first parameter name (PY-1811)
ae1e80b: Skeleton signatures: PY-1688, PY-1818
115f3c1: Moved test_generator.py
6c69714: autodetect App Engine installation path (PY-1429)
107fb01: search for Python interpreters in %PATH% (PY-1593)
4faa2ec: PY-1788 create __init__ when extracting to new package
26090ae: PY-1695 python imports reworked again
30c0a0c: remove buggy removeIndentation method
a220d0c: dispose path cache for sdk when project is disposed
3cdb94e: store 'absolute import enabled' flag in PyFile stubs
e28829e: use path cache also for files in SDK
a546ad4: more reliable check for searching parent outside of file
d3c5b63: more refactoring
4b79291: extract common code out of Python run configuration tests
7b2a239: "Create Python Package" action (PY-1237)
07ecba2: Tests and cosmetics for quickdoc.
aac5c19: EA-21928 - CCE: PyUnusedLocalInspectionVisitor.registerProblems
6b3b834: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d310b20: PY-1787: don't use HTML in ctrl+hover hints.
9e4cd75: remove bogus indent > 0 check
3418fea: added enter handling between django tags (PY-1529)
a9526b1: detectDecorationsAndWrappersOf() doesn't use PyDecorator.isBuiltin()
cd1ddcd: don't load PSI tree if we have stubs but failed to resolve superclasses via stubs
2d5290e: PyBuiltinCache.getByName() works via stubs
4e2394d: getConcealingParent() works via stubs
9adb8f8: don't look inside function if not allowed
00c3bba: for perf reasons, don't check isBuiltin() for staticmethod and classmethod decorators
7a11d1f: refactoring: avoid weird copy() call, use ImmutableMap
bd7aa88: flip getType() calls to make sure evaluated type cache is actually used
06c1fc8: option to disallow stub -> AST switching in TypeEvalContext; more TypeEvalContext propagation
94b8d68: reuse TypeEvalContext and cache of evaluated types between all inspections in a session
6f659ef: don't look at properties when resolving superclasses (EA-21895 - SOE: PyReferenceImpl$CachingResolver.resolve)
2f61da1: added autoinsert }} and %} option in settings (PY-1590)
117e8e1: advance since/until and version
684520b: imports for all class refactorings
bdba2f8: PY-1695 save class references
957e94e: PY-1756 do not add imports from builtins + refactoring
8cfcc91: fixed python list literal indentation (PY-1522)
371f72d: don't mention SDK when validating run configurations in PyCharm (PY-816)
dcb07f2: Optimize imports
48f238b: initialize module for every created Python run configuration
0efa014: simpler interpreter selection UI in PyCharm (PY-736)
412dd12: factory and interface for common options form
9fec6f5: an incomplete triple-quoted string is still a triple-quoted string (PY-1768)
06b5ce3: correctly detect unicode strings by language level
f4be839: added replace-all option to shortcut element from plugin configuration xml, also fixed shortcut for run manage.py task on Mac OS X 10.5+ keymap
774e5c1: add extension for controlling file types in which indents are preserved
ada0733: fixed python process termination on mac and linux
a3b4629: return the new value from changeVariable in Python debug
3df00a6: little fixes in debug
2dd6904: don't show classes from standard library test folders in import and completion (PY-1628)
2c4264c: generify a little bit
b48ce0d: enable YourKit in Mac EAP builds
00ffce3: enable YourKit in Mac EAP builds
29f2167: get original file when creating PyClassType from PyClass (PY-1748)
e311373: smart indent on paste for Python (PY-208)
2b4b4f3: fixed python process termination issues on mac and linux (PY-651)
6160d0f: added caching of stack frames due to a pdb bug in frame loading => fixed Set Value in python debug (PY-1730).
cc0e8a7: multiple threads implemented
9b5576e: no debug arg
ce8d980: pycharm: added multiple thread suspend (PY-1746, PY-1707)
e1a68ee: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5b610df: PY-1755 Python Console: broken autoindent in case of multi-line if clause
32e9bee: intellilang dependency no longer needed
a29530f: search for inconsistent indentation only in Python files (PY-1763)
953a719: fix issue with parsing 'return' at EOF (PY-1759)
7447dd6: The mysteriously missing PyNestedDecoratorsInspection
8cee7ea: Inspection to warn about decorators nested over @{class,static}method.
e1ca0a5: PY-1747: allow non-builtin decorators before @classmethod | @staticmethod
a1e52d9: Improved quickdoc tests.
6b3f853: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0a77fa9: clean up dependencies to make sure that Python tests run after JS->IntelliLang dependency has been added
32bfb80: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c3ba4d0: one more case of dedenting after control flow break (PY-289)
7ec0817: Do not add unnecessary space when entering newline inside comment (PY-1739)
2a6368d: PY-1416: saner missing docstring inspection.
e79ce74: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d747f29: Data for updated quickdoc tests.
86da4b3: PY-895: show constructor's quickdoc on constructor call. Also: links to classes inside method and class quickdocs. Adds tests for property quickdocs.
5ef9682: pydev updated to 1.6.1
661610b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4d9eb5b: Timeout for establishing socket connection for pydevconsole.py
95f643d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4a5189f: auto-dedent after 'return', 'raise' or 'pass' (PY-289)
d429ba3: option to ignore unused loop iteration variables (PY-1432); fix unused lambda parameter check
0119b38: option to ignore unused parameters of lambdas (PY-1306)
b94a01e: tweak to fix test
3af03a7: Enter inside comment works correctly in Python (PY-1482)
eb2d109: quickfixes for inconsistent indentation inspection (PY-1674)
f0cc785: tweak inconsistent indentation detection logic
19b6efc: don't use threadlocal
c9830e1: don't use threadlocal
ced47de: allow access to LocalInspectionToolSession in buildVisitor(); use it to fix Python inspections
9121cb1: inconsistent indentation inspection
4b701b5: PY-1733 assertion fixed
e457c5c: max connections increased
72910fe: hidden templates
1d0dd26: Make pycharm work with corresponding version of yjpagent in EAP builds
78092a9: PY-1724 Python Console: missing ellipsis on the line after backslash
8de992c: AutoImportsOptionsProvider extension reworked to create new instance each time when Settings dialog is opened
d48db7f: PY-1471 'Unused local variable' false negative for variadic list unpacking
4f2a529: PY-1695
686aaa7: PY-1725 Python Console: IOOBE com.intellij.openapi.editor.ex.util.SegmentArray.getSegmentStart
cd17c41: PY-1643, PY-1644
581696b: AIOOBE (PY-1719)
24f6f55: Personal test for not calling getText for PyNamedParameter
571ce7a: PY-1245 "Unused local variable" false positive and false negative with try/finally
dd9ca31: revert incomplete and unhelpful fix for PY-1178
74097ba: restore deleted line once again
659c06f: help ID (PY-856)
e6ac200: PY-1178 Different highlighting for variables which are referenced before assignment on some control flow paths
b1b9afe: PY-719 Highlighting artifacts on executed lines in console
9576b43: Move timeout to more appropriate place
d4864a3: PY-542 Extract method - doesn't extract code with comments
160f131: Support single quote multiline strings
82d4872: Remove way too complicated logic for unused local variable inspection in case of first parameter
c62fce1: PY-1546 Python console: Incorrect highlighting for multi-line strings
dc900d8: PY-1547 Python console: SyntaxError in multi-line string with backslashes
06b59d5: Better handling of multiline strings indentation
f00f96f: fix testdata
bda9fcf: Consoles colouring fixes for PY-1631 Console coloring is patchy
3835b5a: fix Oleg's latest fix
bcc217e: don't go above file level when looking for introduce variants (PY-1694)
c1a7c7c: Waiting for REPL communication before destroying process handler on closing console
6a133df: PY-922 Python console - autoindentation support and output formatting
dcf0a4c: Better UI updates after changing pydev console prompt
842f249: PY-1698 Python Console: don't terminate console before user reaction to appeared dialog
71f8d71: PY-1225 When working in the shell, code completion doesn't add parentheses after methods
0058d30: fix buildout autodetection: refresh VFS before trying to detect
c75a680: Better cls and self parameters handling by unused inspection
03bd18e: fix NPE on project initialization (PY-1690)
711dd96: PY-1693 Unused variable: false positive in case of cls as first argument of metaclass
609acb7: PY-1553 False "local variable is not used"
26d26d6: PY-1635 Extract Method in middle of expression generates broken code
bfb05b4: Per-language code style settings, initial refactoring
91f80c6: understand buildout script names without extension
a953986: fix buildout autodetection on non-Windows platforms
27503b6: added buildout unresolved part inspection
e5bd04e: added completion in buildout parts
e4e8808: EA-21514 - CCE: PyQualifiedReferenceImpl.getUntypedVariants
4c37ff3: check for valid VirtualFile (EA-21638 - IVFAE: PersistentFS.getFileId)
d978bea: select buildout script from combobox
05439ef: update buildout egg paths when contents of selected script is changed
4b6d227: delete method which has been moved away
5e1920b: automatically create Django tests run configuration when opening buildout+django project
7d2e4bb: added buildout config parts reference
28373d7: take paths from correct script when autoconfiguring django+buildout project; configure library when autoconfiguring buildout facet; don't rely on 'src' directory to find project root
4966191: auto-configure source roots based on the package_dir definitions in setup.py
451a576: prepare for beta 2
ff66eff: merge
70fa292: added django facet detection in case of buildout
b69ec6a: renaming
1256e8f: PY-1656: sys.exc_value, sys.last_traceback were unresolved
038fbb0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0d2244b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4b3db59: Disable non-applicable options in Wrapping and Braces panel
9eb4d9f: automatically set settings file in django tests run configuration if buildout is used
867a394: fixed misprint
0f9939f: buildout.cfg PSI tree
ba74235: fixed buildout.cfg lexer and parser
42f5e5c: An attempt at buildout support in manage.py actions, not yet working.
9ba9477: comprehension element defines a separate scope (PY-1618)
edf8e67: fix implementation of refersFromMethodToClass() (PY-1654)
4c0053a: performance: cache results of looking up module in roots (PY-1592)
229e65f: to avoid unnecessary tree loading in Django code, when the value assigned to a target expression is a call, store the callee qualified name in stubs
d96b79c: performance: avoid unnecessary PSI loading
87ef2a2: removed deprecated in Idea 6 method FlexLexer.reset(CharSequence, int)
23bf5cb: Factored out command line patching logic.
20fa923: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c8c10c7: fixed hang of py process because of hidden buffered output (PY-1409)
afbeb02: icon
aa968a6: added buildout.cfg lexer and parser (PY-1678)
23dfe0a: allow specifying non-standard Django settings file for Django tests run configuration
6ee70d7: really fix python 2.5 compatibility
0b70212: fix NPE on unnamed function (PY-1677)
43cdb17: added support for run commands from installed apps (PY-1358)
f5a0e38: no Create Class fix in import statement (PY-1629)
87bc498: don't print stacktrace in case of expected exception
e7ec8c4: remove debugger console output (PY-341)
a813b10: type of property is null, rather than 'property', if its accessor is defined as a lambda
32af3bd: optimize imports
3ea03bc: advance skeletons version
214201e: fix regression with detection of @classmethod and @staticmethod decorators in binary modules (PY-1657)
53346b5: Add buildout paths when running Python and Django consoles.
f6020bc: somewhat more sensible behavior of buildout facet autodetection (it doesn't pick the right script yet, but at least it doesn't crash)
1612bdd: apply correct spacing when adding class to existing file (PY-1649)
71043eb: Extract Method uses correct name of method first arg (PY-1647)
4bd365c: unused function deleted
c9e9d57: fix AIOOBE on trivial Extract Superclass (PY-1639)
8fe52c2: forgot 'return' statement when refactoring cannot be performed (PY-1646)
de23068: don't perform Extract Method if no elements are in range (PY-1658)
082af60: handle functions without name identifier (PY-1669)
f5e783f: use utf-8 encoding for writing skeletons in py3
cd449ee: restore py3 compatibility once more
7caf4ff: propagate language level when creating file from text (PY-1659)
43172ab: toString()
8096690: if a single binary module defines multiple Python modules, build skeletons for all of them
c8cfd23: correctly show name of defining class when completing class members (PY-1653)
984c3d5: tweak resolve logic to work better with PyGObject
c563ca8: tweaks for better PyGTK stub generation
b48d31d: "Reload" button in Python Interpreters dialog uses correct SDK modificator
ef3f1dd: refreshAndFind() instead of simply find()
e90fa9b: associate .pyw extension with Python file type
e09a5d4: Fixes NPE in command line state and in test
9e320a0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6d5dbce: Fix PY-1642 (wrong debug parameters order; a silly typo!)
e596191: create test for py debug runner (PY-1642)
437556b: fixed test configuration (PY-1534)
972dc2f: restore Py3 compatibility
02808e7: regenerate correct equals() and hashCode()
b0f727f: if resolve has led us to a binary module, look for matching .py file under skeletons
4ef90fb: toString()
6aadf36: restore imports performed by PyModule_Import of a binary module; generate qualified names for base classes when appropriate
623a31b: restore __init__ signature for skeleton classes
6915a63: fix generation of skeletons for overloaded methods which already have default values specified
8fc15d3: fix assertion when building test runner command line; add test; delete some unused code
f29cb88: show binary modules in completion list for import reference
bf2363e: NPE fixed
4553da1: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
715aeae: Initial and incomplete Django+Buildout support: server (+debugger) only. Consoles still lack buildout support. PythonCommandLineState and patchers refactored.
c121ff8: more reasonable handling of overloaded PyQt4 methods (part of PY-1563)
235b48e: better handling of existing superclass argument list in Extract Superclass (PY-1633)
d603cf6: preferred focused component
0dd1601: ignore literal . in sys.path (happens with IronPython)
f9a7f61: suggest interpreter home paths for IronPython
d1f69ec: fix use scope calculation for variables inside class methods (PY-1619)
7400f4f: NPE avoidance (EA-21165 - NPE: PyOverrideImplementUtil.write)
686d3c9: SOE protection when calculating completion variants (EA-21405 - SOE: VirtualFileSystemEntry.appendPathOnFileSystem)
a511a79: nullability cleanup
a9f47e6: consistent nullability of PyBinaryExpression.getOperator (EA-21510 - NPE: PyStatementEffectInspection$Visitor.visitPyExpressionStatement)
9fd2906: Code Style settings: refactoring
d20b2fd: Code Style settings: cosmetics
9a704d9: cleanup
099dee0: advance skeletons version
cb1e388: debug output removed
287606e: reformat to 4-space indents
b670c3b: allow dots in parameter names (needed for PyQt4)
4f2a5c4: better PyQt4 compatibility in generator3 (part of PY-1563)
9074c6e: freshness check in generateBinarySkeletons() is not appropriate because we check for necessity of skeleton rebuild in PythonSdkUpdater
06a1a60: re-inherit DjangoServerRunConfiguration, fix saving of environment variables (PY-1613)
6f99b1c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e71f515: fix PyStubsTest
c8a6d37: advance version and since-build
43b5622: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0d409c5: look for names inside __init__.py even if it represents a directory (part of PY-1463)
a5d2dac: perf: don't go looking at virtual files if we have just one candidate anyway
ca4794f: import reference can be resolved to multiple candidates; prefer the one which has a non-empty __init__.py file (part of PY-1463)
02e5578: guava ftw
9308c80: use correct name for parameter in Unused Variable inspection (PY-1602, PY-1603)
771becc: __init__ method from generated stub is known to return None (PY-1601)
c7c81a5: disallow Introduce Field inside static methods (PY-1596)
769c402: don't perform meaningless refactorings on decorators (PY-1597)
d6acc41: correct display of arguments in structure view for functions with decorators (PY-1599)
a8a57c6: Python is not Ruby
df7bc4d: added django template structure view (PY-1588)
0a0fd61: removed wrong lib
d97abd4: remove invalid community lib reference
bf585fb: Introduce Field makes sense only inside methods (PY-1550)
681990e: suppress Smart Introduce in parameter lists (PY-1551)
a725575: correct isReferenceTo() for references to files (PY-1514)
da0aadd: propagate type eval context via resolve context (PY-1565)
d320ced: unused constructor parameters have quickfix to initialize field (PY-1398)
23788a4: Introduce Field honors name of method argument (PY-1580)
7233c2b: generate 'return' before super method call in Override Methods if appropriate (PY-1537)
a42cb20: step to select string literal without quotes in Ctrl-W (PY-1489)
8365c3c: remove implicit superclass walking code in PyClassType which is no longer necessary
97cfc99: correctly return implicit superclass in the list of superclasses for both Py2 and Py3 (PY-1494); add mock SDK for Python 3.1
cb59df9: validate 'assignment to keyword' in Python 3 (PY-1524)
6fc5759: unfold star expressions when iterating names in 'for' and similar statements (PY-1525)
e446210: no space after star in star expression (PY-1523)
4684b8c: return non-null descriptive name for elements that don't have a name (PY-1552)
b12bf87: Made intention instance stateless (PY-1591).
bd05528: method renamed
7765558: DefaultRefactoringSupportProvider inlined
dac66ce: correct handling of template moving (PY-1371)
6e19854: clean
4803a4b: fixed django url reference resolve logic (PY-1367)
832ea84: PY-1541: use innermost builtin decorator in parameter inspections.
40ba6a5: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8225942: Fixes PY-1543: correct autoimport in "from ... import (foo, bar)"
19cd394: Fixed comment on JMX 1.0 in idea.properties
14bf7ef: [r=romeo] use outNamePrefix variable for nsis instead of duplicating it in strings
7a286a2: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c996d6e: Added datetime signatures. Use cls, not self, in inferred classmethods.
53f5db7: Removed premature hint message.
1b0d9c5: added django endcomment completion (PY-1562)
58f18de: EA-21070
cc03278: removed comment annotator as block comment handling implemented in lexer
d9aa588: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
fd2e514: Missing files; PY-1512.
bd9460f: Resolve __dict__ of modules.
2893a22: Changed the way the notice text is shown. Still it is not updated timely.
bbc97b7: Fix NPE
08e4de4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
42c1965: removed debug output
961581a: Signature fixes: PY-949, PY-1404, PY-1420.
15fdbe5: PY-1188: resolve __builtins__.
19afc5a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ecc121e: UI: borders
6de8aff: fixed again
64103bb: fixed plugin build.xml
a4982354: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0f0cc5e: added community lib to classpath
d877383: empty tags
952031b: added pythonunbuffered=1 to appengine run configuration
5f2cbac: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4a8161eb4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
536c428: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4d8b061: Create JavaEE Web Page action
383c620: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5c18d4a: Simple cases of buildout support.
5df3f5b: inspection fixes
561eb76: fix NPE in Introduce Field (PY-1487)
bd044e6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d07f6e1: Path copying in a simpler robust way.
2103d14: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f0b491b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f344ba5: Removed silly buildout+sdk, added a not-yet-working buildout facet.
4f65752: added Guava library
9b77fba: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
131e7b9: refactoring: TestConsoleProperties now holds a reference to Executor
15b3f09: PY-1480 Exception building control flow
5801fde: removed runners for mac and linux
db04945: typo in version number
7cea03f: version and since/until build updated
1322fcd: remove pySrc from Python plugin build
b5736e4: add pydevSrc to Python plugin build
d58feb0: fixing plugin build for IDEA 10
a80ccb1: highlight raise without arguments outside of except block in Py3k (PY-1410)
26e17e2: correct implicit superclass for Py3k classes (PY-1468)
4ceb5da: isReferenceTo() by name equality only applies to unqualified references (PY-1472)
9a5169c: highlight use of starred argument outside of list or tuple (PY-1474)
a7c2946: operand of prefix expression is nullable (PY-1467)
04c1354: improved wording (PY-1473)
d674110: more correct check for intention availability (PY-1479)
0a75bad: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2772deb: added django template variable completion of dynamic members
d47e52b: fix regression in binary stubs search (PY-1393)
6ad9329: more tweaks for dict literal formatting (PY-1441)
6cac8c5: don't include leading underscores in quick search text of Python method chooser (PY-1396)
6de6fe6: remove unnecessary usages of PyUtil.getAllSuperClasses(), iterate super method signatures in deterministic order (PY-1397)
0186ed5: handle PyStarExpression in resolve (PY-1459)
cdd633c: fix dict alignment in PyIndentTest
ed916ba: nicer formatting for dict literals (PY-1461)
675ae89: no editor - no inline (PY-1460)
66babfb: EA-21252 - NPE: PythonReferenceImporter.proposeImportFix
7739788: EA-21266 - NPE: PyUnionType.isBuiltin
745bae9: fix CCE in DFA (EA-21286 - CCE: PyDefUseUtil.getLatestDefs)
f0f7c88: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4583e5f: added member completion in django templates (PY-1274)
edc44bf: refactoring
875a6a3: provide correct type for bytes and unicode literals (PY-1427)
4462aa5: no use inferring a variable's type if its initializer is None; this means a value of a different type is assigned elsewhere (PY-1425)
33ebe6d: LHS of assignment expression can be any sequence, not just a tuple (PY-1419)
5841388: implements DumbAware (PY-1417)
ffa8df5: Introduce Variable ignores leading whitespace of selection (PY-1338)
210c9cb: draw method separators at correct offset (PY-1440)
cc3647f: don't apply loose isReferenceTo() logic of qualified references with unknown qualifier type to __init__ calls (otherwise we get very weird results when searching usages of class) (PY-1450)
8471792: simplify boolean check inspection ignores 'is' comparisons
df7f4a9: performance: check for super/subclasses only after we've found out that function actually has unused parameters
a6ab5ef: cleanup
3b501b9: ignore implicitly resolved methods in string format inspection
d54c638: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3ba7d39: PY-1430 Recognize that 'assert False' breaks control flow
3422c0e: PY-1426 Code after 'with self.assertRaises' block in Python 2.7 unit tests is incorrectly highlighted as unreachable
0777d88: PY-1435 Incorrect highlighting of 'Local variable referenced before assignment' when variable is defined on module level
a3b9b57: PY-1434 "Local variable referenced before assignment" should not work in class bodies
7a48c1f: PY-1431 Too much highlighting for variables that are potentially unassigned
73d1faf: Empty tags are back.
37cd2f5: logic for creating class attribute stubs was completely bogus (PY-1436)
52f5157: formatter handles *args correctly (PY-1350)
0776fae: add new module to Python plugin layout
c1d1b35: for now, python 3.2a0 has lang level of python 3.1
465bcf3: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
bd828cf: PY-1408 'variable referenced before assignment' false negative with try/except (Py3)
4943be3: added no-reload in debug (PY-1369)
105d0e1: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3653042: RubyMine & PyCharm: Native MacOS file chooser disabled
0e868e5: Merge branch 'master' of [email protected]:idea/ultimate
d18bf64: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2247ebc: un-confuse class and __init__() when renaming classes (PY-1364)
4962a85: correct testdata
7401314: don't suggest to rewrite dictionary as literal if RHS of dict element assignment references dict itself (PY-1347)
f3acc7c: ignore unused 'request' parameter for Django request methods
8296ae1: allow evaluating method return type while calculating Introduce Variable name suggestions (PY-1336)
d2e07e7: PY-1359 "Unassigned local variable" inspection doesn't handle empty loops correctly
521b1aa: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4290b9c: http://ea.jetbrains.com/browser/ea_problems/21199
9e58225: test duration must be an int (PY-1350)
75be1d8: Merge branch 'master' of [email protected]:idea/ultimate
65df657: PY-1333 Throwable at com.intellij.openapi.diagnostic.Logger.error
25497bc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6c2207b: PY-1348
e6e0089: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6a30e2a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9330d27: process termination for linux and mac
b3d2bd7: Merge branch 'master' of [email protected]:idea/ultimate
fba43e7: Better canceling #2
9cf56d1: Better canceled check
17e6eab: Merge branch 'master' of [email protected]:idea/ultimate
fcb2a7f: provide fixes for App Engine run configuration problems
12e0140: NPE
0b41b82: tips (PY-1292)
206bf65: recognize \N escape in Unicode string literals (PY-1313)
d9dd5f8: remove meaningless 'throws Exception'
f343a3b: provide correct type for Python 3 no-args super call (PY-1330)
c0aca39: override doesn't pass default parameter values to super method (PY-1332)
ef95297: to ensure version is saved correctly, mkdir python_stubs file before saving the version
393d357: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5d9c59a: Ignore /usr/bin/python*-config in SDK list.
23ea00f: Initial buildout run configuration support. Still lame.
a0159fe: Updated version of pydevconsole.py
5629b11: Do not block AWT if we get no response from server
e76806b: refactoring
4563d95: Correctly change input prompt after sending an answer for REPL Python
b9d6333: Pydev sources cleanup
efbdc86: Use Tasks instead of synchronous calls for Python REPL console
55247c8: PY-1315 Unused imports inspection shouldn't work in python repl console
1a779e6: Correct way of registering XML RPC handler
69e7da5: Change python console prompt when input is required
9dd15da: Merge branch 'master' of [email protected]:idea/ultimate
8d48ca4: idea config cleanup
c06da6e: if collection name has dots, element name is derived from text after last dot
025a9a4: don't show parameter info for implicitly resolved calls (PY-1309)
4ccb24d: include runners in distribution
4ca5297: don't miss new Python installations: refresh VFS while looking
6ffffef: fix test
bfc1497: more reasonable behavior of auto-importing reference at cursor (PY-1300)
03b43ff: Merge branch 'master' of [email protected]:idea/ultimate
09c0e75: reimplement PropertyBunch.fillFromCall() without touching PSI or stubs for builtins (fix OOME while building indices?)
ea6470a: added native mediator runner for django run configuration to solve ctrl+break sending problem. added no-reload check-box
986ea17: accept either self or cls for the name of first parameter in metaclass method (PY-1224)
1a445bc: Introduce inside argument list suggests names based on argument names (PY-1260)
60e1764: don't create unittest run configuration inside "if __name__ == '__main__'" block (PY-877)
ad885be: correct handling of decorators in move statement up/down (PY-1222)
96e698a: help topics for Python find usages (PY-856)
02ee72c: action to create Python file from template, templates for regular class and unit test (PY-829)
3c83fd3: display module variables in structure view (PY-1166); don't show [+] for variable nodes
9f6c0b1: Initial buildout support as an SDK (PY-500)
4d7a94b: fix module name
afe6954: add containing directory of test script to PYTHONPATH (PY-546)
29bfc3b: add python-pydev to the list of modules to compile
fa618ab: recognize Python 2.7 testcase classes (PY-1298)
e7649cb: add LICENSE.TXT for EPL-licensed code
88f14f0: package CPL-licensed code from Pydev as a separate module, include its source in PyCharm distribution
f2e3759: simple perf fix: don't visit the same roots twice
62378b8: check for root validity (EA-20440 - assert: FileManagerImpl.findDirectory)
691b1cc: lazier creation of myNewStyle cached value (EA-21153 - PIEAE: PsiElementBase.getContainingFile)
68bc812: NPE? (EA-20912 - NPE: PyUnboundLocalVariableInspection$1.visitPyReferenceExpression)
694cb2b: less false-positive-proof checking of getter returns (PY-1287)
ecba496: set PYTHONIOENCODING for stdin/stdout encoding (PY-834)
1e6da6e: typo in build script
c9c3dba: fix Python tests
fc40038: correctly package Python tips (PY-429, PY-772)
bccfa1c: NPE fixed in MacOS Python SDK suggester
67861a1: Native Open dialog enabled in PyCharm
87833e1: PY-959 Inspection to detect unused inner lambdas/functions
69a1914: Propagate all the improvements within launching sh scripts
33b59a4: PY-996
e49b9bf: preparations for PY-996
cb3da1f: PY-349
e29ae8f: beta version number
be3c883: beta images for PyCharm
b8da160: correct Mac app name (PY-1250)
fba06d1: icons in bin folder of Linux distribution (PY-754)
607bce1: even better logging
1f1dc55: missed testdata
c2d6646: Python backspace unindent works correctly when tabs are used (PY-1270)
d47b6ac: unused local variable inspection has option to ignore vars used in tuple unpacking (PY-1235)
4e06698: log skeleton updates
1849576: no PythonSdkUpdater in unit test mode
685135c: PythonSdkUpdater takes care of skeletons monitoring and updating, so it's no longer necessary to update them from loadAdditionalData() (PY-1226)
ada28fb: add new paths to sys.path automatically on PyCharm startup (PY-758)
4ea3cbd: refactor skeleton generation
ece5857: real fix for PY-883
621d5cb: auto-import bugfixes and some cleanup (PY-1202, PY-833)
748043b: correctly build argument list for super method call (PY-1269)
c32f9f8: cleanup
0ed24eb: PY-1176 "Local variable referenced before assignment" incorrectly reported if local function has same name as var
144ae1f: cleanup
85b85b2: PY-1209 Unused local variable inspection doesn't handle name conflicts correctly
bbf7041: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
f8e4d6f: Fix edge case NPEs.
80bcf04: Method has_key() absent from python 3.x. Added open() signature for python 3.x.
d3bbdc9: rename refactoring handles class inheritance (PY-1236)
4d69d8d: parameter is not unused if function has either super methods or inherited methods (PY-1234)
8b3b17c: class name completion also works for top-level functions (PY-539)
5565a2e: 'Override method' inserts super method call instead of 'pass' (PY-302)
11d598a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4373804: Check assignments to properties, too.
28e4e05: Allow detection of properties in __builtins__.
1ff5d0c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ddf284c: Skeletons of builtins now have correct properties declared.
842cfd1: Updated quickdoc to support class decorators.
ecb99e7: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2dbae7c: PY-1244 "Unreachable code" false positive with try/except/else/finally
e476a07: PY-1243 "Unreachable code" false positive with try/finally and raised exception
56b55d5: PY-1193 "Unused parameter" false negative with lambdas
fad62da: completion in case of null in dataflow now works, also fixed case with isinstance with tuple (PY-1133)
2e406fa: completion in case of null in dataflow now works, also fixed case with isinstance with tuple (PY-1133)
a8be806: completion in case of null in dataflow now works, also fixed case with isinstance with tuple (PY-1175)
772b040: fixed annotations
919d03a: search for Mac Python installations under /System/Library/Frameworks as well as /Library/Frameworks
c050ec0: "Add interpreter" shows list of interpreters which are detected but haven't been added yet (PY-847)
9df0105: build skeletons for library classes only available as .pyc/.pyo (PY-944)
fd93083: fix bug in building class hierarchy structure (PY-236)
23dbd5e: implement findExistingByElement() for Python script run configuration (PY-324)
867d126: PyTestRunnableScriptFilter is applied only if py.test is actually installed (PY-1220)
b054fe7: PY-1192 "Unused local variable" false positive with try/finally
fd5fe7d: CodeInsightTestFixture rethrows all checked exceptions as RuntimeException
d299cc1: support inspection suppressions for Python (PY-1229)
1a3de35: PyUnionType allows null members
e05f184: zencoding surround to django templates (PY-1170)
0ea208a: fixed bug in formatter and surrounders moved to another package
2c7b6e9: search for modules in roots from django urls.py (PY-1189)
4b2cc1c: more forbidden jars, apply them to pycharm as well
144e510: performance
4a5f36b: Don't report a binary op as ineffective if its custom handler is defined.
aa8be43: don't copy help files to archive twice
eb48d74: build script updates (towards working RubyMine gant installers, add PyCharm icon, package less and sass as separate plugins)
fe5222e: PyCharm reference card (PY-515)
b07f454: fixes in django formatter and close tag inspection
ef957e4: highlight assignment of attributes not mentioned in __slots__ (PY-1211)
b1ede4b: Forgotten test data
2ff02c3: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e412217: Now getType() knows about property types (getters).
bf58611: completion of instance attributes shows variants from __slots__ (PY-1211)
fc19442: PyClass.processDeclarations() is its own method, not inherited from PsiElement
9a9ac3d: delete unused implementations of processDeclarations()
246b011: new icon for Python file
cf2ed20: NPE in getReturnType() fixed; getStatementList() made nullable.
fcd23c1: store value of __slots__ in stub for PyClass; introduce PySequenceExpression as common superclass for PyTupleExpression and PyListLiteralExpression
d057729: Use ThreadLocal for inspection-global values.
b27ee69: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
2216370: fix testdata
c95ef5d: don't highlight dict comprehensions as unsupported in Python 2.7
7a8df8f: add __rand__ to built-in methods
82ea74d: highlight multiple context managers in with statement as error in Python < 2.7
329dad8: include parameter default value in control flow (PY-1208)
212d532: don't resolve default value of parameter to parameter itself (PY-1207)
e29b4aa: Forgotten PyBundle.
886f080: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a327ea5: Factored out peelArgument() utility method.
bc2a6bc: Property declaration inspection + tests + fixes to PyClass.scanProperties.
b3d85a7: fix tests?
60b0a5b: isReferenceTo() handles lambda expressions more correctly
6ab05bb: types for set literal and set comp expressions
a8bf9dc: clean up comprehensions PSI, don't highlight variables in set comp expressions as unused
f13d977: highlight set comprehensions as unsupported in Python older than 2.7 and 3.1
011a70d: cleanup
dff2467: don't report problems in a different file (EA-20134 - assert: ProblemsHolder.registerProblem)
a6e1b73: honor __all__ when resolving references imported via 'from ... import *' (PY-98)
441d7b4: completion of names imported via 'import *' honors __all__ attribute (PY-96)
8d3bd05: resolve names which are mentioned in __all__ but cannot be found statically (PY-839)
afda537: copyDirectoryToProject() has suddenly become recursive
5db097d: fix PythonCompletionTest.testSeenMembers()
b5a03fa: optimize imports
24a3ff8: better diagnostics in except clauses order inspection
9f0f1b8: from __future__ imports aren't unused
78894d8: fix false positive of 'from __future__ import' inspection
052d7d2: never autocomplete seen member variants; remove duplicate names
cdbb08b: show seen members of same qualifier in completion list (PY-1181)
aef26f6: better disambiguate between multiple type alternatives, correctly filter out duplicate names
07f64f0: completion that guesses type name by variable name (PY-1010)
1d2a80a: meaningful use of ThreadLocal
56d6fcd: encapsulate all access to PyClassNameIndex
34a4aba: recognize super calls via self.__class__ (PY-1190)
66ffa93: findExportedName() returns import element as exported name only if it's resolved (fixes 'import os.path', where there are a number of 'import xxxpath as path' imports, only one of which is resolved)
172a3bb: recognize new-style classes based on __metaclass__ declaration
9735f9f: some refactoring of the test suite
cdf76f7: to have same behavior for stub-based and non-stub-based resolve, findExportedName() walks the list of stubs in reverse order
5b5edf8: stubborn stubs
a833190: More fixes to the unfortunate test.
95bb54f: Fixed typos in test (definitely this time?)
0f4762b: remember state of checkboxes in rename dialog (PY-1165)
77e4a15: don't resolve target expressions outside of their defining function (PY-1179)
a76a852: rewrite global name checks in terms of Scope.isGlobal()
99ee30e: too broad scope and isReferenceTo()
5ff2322: improve interaction of Find Usages and global statements (PY-1167); changed PSI so that names declared in global statement are now PyTargetExpression rather than PyReferenceExpression instances
88a19e9: don't replace a more specific imported module with a more broad one
0d6040b: resolve qualified references imported inside a function (PY-1185)
62d87ef: resolve qualified reference when there's both import .. from and regular import for same module (PY-1183)
29604ee: resolve reimported members of imported module (PY-1153)
85e104a: PyImportElement.toString()
170dc42: include YourKit agent in PyCharm Mac distribution
d5f04ef: - regexp colors page
f400973: django db completion fixed (PY-1163, PY-1169)
c70b9bb: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
33660bc: Fixed typos in test.
c8d5dcf: usages of ModuleRootModel.processOrder migrated to new api
dcd9ea6: Forced Python 2.6+ in property test where due.
63d3daf: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
535b7e1: Property API tests; fixes named args to property() not being handled.
0b96edb: Property access inspection.
90fad9b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
be1718c: Some unnecessary JScrollPane -> JBScrollPane conversions have been done, unrolling.
f675c65: - gql colors page - regexp colors page - added id to django colors page (PY-1115)
26076c5: All scrollpanes replaced with JBScrollPane.
81f3c84: JList->JBList
ed0d414: Property access inspection (no test yet). Small updates to quickdoc.
54fcb8c: Stub and callables regressions fixed.
58a64f1: Broken compilation after git malfunction.
062b2f1: quickfixed broken compilation
855b12c: added django template formatter
7b4ae86: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4469b30: Better support for properties.
8e37939: Extract accessors (e.g. lambdas) from parens.
1db93d0: Refactor resolveMember() to mark several NameDefiners (e.g. of properties).
79c51de: unfold list literals when iterating names in list comprehensions (PY-1143)
8614de2: more correct checking for builtins module (PY-1145)
08098ad: inspection to highlight attempts to call a non-callable object (PY-1006)
5e4d385: Create Test: refactoring + small fixes
4855f89: common superclass for Python inspections
5117518: remove dependencies on deleted module
7d0af54: unthrown ConfigurationException declaration removed
c60a370: IAE
40c7898: IAE
a22e2eb: EA-20880 - NPE: VariantsProcessor.execute
37d4efb: EA-20892 - assert: DocumentImpl.replaceString
96c7873: "Analyze stacktrace" does something useful in PyCharm (PY-1128)
60b86d0: include source roots in test runner PYTHONPATH (PY-798)
f8ef9e2: don't add extra parens when completing property call (PY-1037)
5ad043b: include app engine libs path in PYTHONPATH when running tests for app engine apps (PY-1123)
b34b16f: fix building control flow for assertions (PY-1138)
20dcf1f: cleanup
767247b: IAE
56dc60b: NPE (PY-1137)
e6f4db2: better diagnostics for inconsistent unindent (PY-890)
a679f3a: fix logic of enabling Override Methods in Python (PY-1132)
c4265ea: support Ruby 1.9 named groups in regular expressions (RUBY-5822)
8b9b16b: type inference for isinstance() should not touch resolve logic (PY-1133)
d9b6502: don't register 'add import' fix if there's nothing to import (PY-901)
e8f4474: ImportFromExistingAction doesn't require an editor
bf982a4: class name completion adds class name to existing import statement if possible (PY-1003)
6791a42: correct type for call expression when callee is class definition (PY-1013)
97e0d41: don't show import popup if something is imported from module via unresolved import (PY-956)
3ed5861: don't validate arguments of decorator list in case of implicit resolve
4de66e4: added surround with {{ }} in django templates
39cd728: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
750cf54: Exclude resolve() from property detection, needed to build stubs right. Special case of built-in call analysis added.
2b973a0: make getConcealingParent() respect decorators' scope
9ac3cba: fixed broken test; test added for PY-1083
3a16b51: added django template quote handler (PY-1088)
ea093f5: fixed false positive of base method signature matching inspection in case of named params and **kwargs (PY-1083)
52453b6: fix PyInlineLocalTest?
c3a6795: Cosmetics (keymap name)
86878a8: added class field type to qualified reference completion (PY-742)
8236d8e: little fix
fe652a8: advance stubs version
013e980: merged correctly that time
01a5e27: PY-922 Python console - autoindentation support and output formatting
9ceab81: PY-993 "Method name clashes with already existing name" should be a warning, not an error if existing method is in superclass
106dd3a: fixed some cases of file references renaming; added find usages to django templates (PY-1081); added navigation to files
7d3771c: delete python-py again; cleanup
8a6d9cd: delete python-py module
5029b32: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
4851de7: Fixed a typo in javadoc.
f0fa973: Added isCaseSensitive().
844ae79: Fix for case-sensitive file systems. (Trivial.)
94228df: Fixed resolve test.
4529619: support unittest2 test skipping functionality (PY-1062)
dd5d425: enable drag & drop from Variables to Watches in PyCharm (PY-1054)
67fc51d: PY-1059
eaf3bf8: rewrite PyDemorganIntention in Java, get rid of initializing Jython in IDEA process (PY-792)
1a82eec: remove dead code for writing inspections in Python
d2f79bb: cleanup
da1b499: PY-992 Extract Method doesn't handle conditional returns correctly
5a15882: PY-1086 False positive for "Cannot perform refactoring when execution flow is interrupted"
5b7b00e: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
ddd37b8: junit: tests by pattern - quick start
0359be8: EA-20542 - NPE: PyClassType.getClassQName
9c41990: build fix?
e65a237: EA-20679 - NPE: PyUnresolvedReferencesInspection$Visitor.overridesGetAttr
00bfaef: EA-20733 - IOOBE: ImportFromExistingAction.execute
bb5618c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
86c6e3f: add hg4idea to PyCharm
0c5b157: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d06db6f: added assert case for isinstance type flow (PY-845)
a226ca5: fix PyMultiFileResolveTest
280701f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate. Adds language level check for @foo.setter, updates stub test.
b62f959: fixed some cases with **kwargs completion (PY-1050)
5cb9e92: added isinstance type inference (PY-1075); added correct completion to django model fields (PY-1076); fixed bugs in django fixture test cases
8b18cbd: Changes to manage.py model, improved usability of ctrl+alt+r (PY-450), added some tests
b65648e: Django 1.2
12baccd: Jython 2.5.1 and Django 1.2
2cfb673: Preliminary full support for properties. Still slow.
2281ef0: refactored
a1614ce: refactored
6a8211e: added django model foreign keys incoming completion support
bce02e2a: @property decorators preliminarily work (PY-828).
0d6c619: added django model foreign keys incoming reference support to undefined inspection (PY-638)
6840960: added completion resolving kwargs from code usage (PY-1002)
5565b19: added completion resolving kwargs from code usage (PY-1002)
2717675: added model '_meta' field support (PY-561)
e7035f3: objects completion fixed in case of redefined objects model field; added strict navigation from add_to_class cases to corresponding add_to_class invocations
5e6e883: added DoesNotExist and MultipleObjectsReturned completion to django model subclasses (PY-945)
3dcd356: move class to correct module
345a39e: cleanup
0753d94: correct testdata for fixed test
e7b97de: correctly check for first missing comma in dict literal (PY-1025)
0144adf: reduce scope of simplifiable boolean expression inspection (PY-1021)
3ca7166: don't offer the callee of a call as a smart introduce variant (PY-1026)
cd4c10b: don't include context in the list of completion variants (PY-1033)
323018b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
76820e1: Added param inspection test. Allowed (*args) signature in the inspection.
be54583: Updated test.
e4b2abe: RUBY-6097 AssertionError: Wrong line separators: '...ller::Base\r\n' at offset
06b85d2: Builtins highlighting: fixed a regression, added a test.
f5c4a5c: Highlight predefined assignment targets, too, not only references.
d910636: Rename parameter references, not only the parameter itself.
d1c7640: Different naming logic for first params of metaclass methods (PY-578).
b31279b: revert
cd7071d: PY-1018 Quickfix to rename unused variable in tuple expression to _
4769cb8: Do not add *import* keyword on completion if it's already there.
12be507: PY-992 Cannot perform refactoring when execution flow is interrupted
214917a: Added completion providing references for MEDIA_ROOT and TEMPLATE_DIRS string literals in settings.py
a489540: PyKeyword completions removed from string literal scope (PY-1029)
b1d3d05: move dummy action to correct place
170de34: EA-20469 - IAE: PsiTreeUtil.getNonStrictParentOfType
cdacca8: EA-20502 - NPE: PyQualifiedName.matchesPrefix
e3b8a61: don't show empty Python code style settings page; fix Python preview text
7464e3f: merge CodeStyleCustomizationsConsumer into LanguageCodeStyleSettingsProvider
9b32d39: added Surround With to Django Templates (PY-272)
477911e: code style options customizers
de573f8: Test Runner API and Semantics updated. See http://jetbrains-feed.appspot.com/message/261002
4f2b9ad: Merge branch 'master' of [email protected]:idea/ultimate
f9f064e: PyPushDownTest fixed?
dc111b9: Error logging/reporting improved: each error/warning now contains the test framework id; log errors are thrown in debug mode and warnings in default normal mode
863d38d: test data fixed
ced4776: Merge branch 'master' of [email protected]:idea/ultimate
7c6bf89: cleanup: use Arrays.asList() & friends instead of ArrayIterable/ArrayIterator
fd1bcb6: cleanup: use Collections.singleton() instead of SingleIterable/SingleIterator
f8324bc: cleanup: use PsiTreeUtil.getParentOfType() instead of custom-made getContainingElement() methods
8bb7c16: cleanup: use StringUtil.join() instead of custom-made join functions in PyUtil
d873403: I see dead code (TM)
45756ad: cleanup: avoid duplicate implementation of ensureWritable(), remove calls from PSI implementation
c3f88e4: cleanup: use PyFunctionBuilder
1e23efb: Fixed renaming of django template file references in string literals (PY-1009)
16dc72c: Multi-range references. Proper ctrl+mouseover link highlight for objective-c selector references.
544195f: auto-import shouldn't re-import top-level modules from other modules (PY-978)
3cf986a: guard against infinite loops in followAssignmentsChain() (PY-1014)
e140714: handle escape sequences in "convert format operator to method" (PY-977)
e5c9600: better icon for local variables in completion list (PY-1001)
ddf5beb: blank line between statement and function (PY-1007)
1a4d4fd: correct version check for 'raising new-style class' inspection (PY-981)
fec30bb: test fixed
0521976: rename refactoring for variables in django templates (PY-989)
544d2cc: added django template tag name validator for correct spellchecking
a874892: Merge branch 'master' of [email protected]:idea/ultimate
7632a3a: correctly prefer regular unittest to py.test
ae0f208: Forgotten test file.
aa3f5ef: Fixed a typo, moved a message to resources.
2a402f8: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e73e9cf: Make lambdas and functions equally Callable (PY-958, part PY-201).
655cd59: PY-995 fix introduce* incorrectly working with assignment left part
1f56dab: PY-994 don't put parentheses if putting into non-expression element
572c6a9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
549a853: Don't autocomplete *import* keyword if submodules are present (PY-1000)
b103824: restore after bad merge
bba7cd2: class insert handler handles Ctrl-Shift-Enter completion char (PY-998)
3439ea2: use slow type calculation for building completion variants
c40a090: initial support for Smarter Introduce (PY-955)
cc1aedd: no overriding method marker on __init__ (PY-430)
ae8fa5d: don't create super call in __init__ for classes that extend object
e118ec0: testdata corrected
ca35be6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
742d47c: Force regeneration of skeletons if script is newer than the skeleton file
cd9d976: Create Class quickfix (initial) (PY-197)
c0c4b19: testdata fix
b4b0999: introduce variable: fix broken formatting, add test, set explicit caret position after refactoring, cleanup
1b335b4: honor "blank lines after imports" option in Py formatter (PY-987)
5f6952e: honor "blank lines after imports" option in Py formatter (PY-987)
700ee42: honor "blank lines after imports" option in Py formatter (PY-987)
dab60f2: don't fold import statements if there's just one of them
04875f5: import render_to_response when creating a view method if not already imported
ee323fe: if line comment prefix is followed by a single space, uncomment line action deletes that space (PY-980)
96f6e7d: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
01ada3b: Special-case signature of datetime.timedelta (PY-954)
46f0a8b: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
9b73e30: use interface RunnerAndConfigurationSettings where possible in order to move RuntimeConfigurationProducer and Ko -> lang-api
1bd4a48: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
dd62f54: get rid of LocatableConfigurationType (I)
731e657: fix tests
c6e6635: Auto-import should not suggest to import the file itself (PY-979)
4642f83: more on syncdb usability
708dfd3: workaround to ensure correct getpass() operation in Django manage.py (PY-386)
75485bf: shortcut for 'run manage.py' (PY-382); cleanup action groups
057270a: indent when pressing Enter inside tuples
5a7fc0a: add new test classes to suite
23ba7ae: insert matching parenthesis and autopopup parameter info when closing class completion list with ( character (PY-750)
6e827d6: keyword parameter completion looks in superclass if **kwargs are used (PY-778)
fccc234: test fix
d64591e: don't show keyword argument completion inside parameter default value (PY-973)
eec2a91: show keyword arguments on top of completion list; don't suggest 'self=' (PY-972)
20e4473: relax parameter list compatibility checks so that inspection wouldn't complain about App Engine projects
8df16e1: suppress CSS and JS errors in Django templates (PY-600, PY-902)
f086510: fix "Run tests" from project view context menu on file containing tests (PY-961)
d45f63a: fixed rename in string literals while renaming of files and directories
d32e141: added selection handlers for django template tags (PY-934)
c58d7cc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cc98448: added: django color page (PY-475); django template comment tag highlighting as comment (PY-421); single-quoted string literals highlighting (PY-968)
03762a6: more improvements for python statement mover
0f5aa45: fix for PY-948 that doesn't break resolve of os.path
ea8e88c: remove redundant supers check completely
5665818: fix testdata; fix formatting for unary operators
7eb8a34: Fix for PY-948
8651e3a: Fixed regression in argument inspection.
401ca12: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
6c3e0ad: fixed django find usages and rename refactoring
acfaf93: Forgotten file
797b3ca: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
68b2b14: improved python statement mover
55b6e40: Detect parameters of bound static methods (PY-50). Added tests.
b97bee1: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
55de157: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
31981b7: fixed PY-950 Exception from Move Statement, more tests
b28c907: spacing options in Python formatter (PY-951)
74672e8: correctly disambiguate between os and os.path when both are imported
e347201: PY-947 False positive for "Statement seems to have no effect"
80455d5: cleanup: remove incorrect and unnecessary overload
29c20bb: SOE protection in PyFileImpl.getElementNamed()
1ac390b: ignore INRE when checking for new-style class: EA-20288 - INRE: FileBasedIndex.handleDumbMode
9ce5cfb: don't highlight empty import elements as unused: EA-20300 - assert: ProblemDescriptorImpl.<init>
74ee44b: fixed NPE
33f608a: PY-745 Split if - no action provided
b178460: cleanup and new test
e1af90d: PY-943 Override method should insert @classmethod decorator for classmethods
17a7992: Python statement mover works only in Python files
242606a: provide type for 'cls' parameter of classmethods (PY-833)
929c46c: PY-296 'Move statement up/down' on method name line should move the whole method
d6bc7f8: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7885ad9: Class-private name resolution (PY-50): a small fix and tests.
dfb2bce: check for existence of __init__.py in directories when resolving imports (PY-942)
389b8a0: Merging PyQualifiedName.toString()
b807159: Add .toString() to qualified names; easier debugging.
1a99249: better code completion for imported modules (PY-874)
f069e74: a qualified reference is not a reference to an unqualified element with the same name (PY-939)
328e79e: PY-50: only resolve class-private names within the class. (No tests yet.)
b900dba: Move AbstractConsoleRunnerWithHistory related actions to platform
e87427a: Highlight whatever.__doc__ as predefined.
dfbe2c6: fix tests; add hidden option to disable highlighting of unused imports
9fdc915: Manual conflict resolve after long rebase
5d17d1d: Restored Reload Rails console action
3da2bbe: Revert "Revert "Get rid of unnecessary language consoles (they were used as prototypes for ruby)""
e4740f6: Revert "Revert "Initial version of IRB/Rails consoles based on new LanguageConsole API""
0a5aa15: Revert "Revert "PY-840""
8935f0f: Revert "Revert "Extracted AbstractConsoleRunnerWithHistory to platform""
a23d227: Revert "yet another back compatibility fix"
c71c4e4: Revert "Fix compatibility with Maia"
e8e84f7: Python imports folding (PY-928)
1df360c: imports inside try/except statements shouldn't be optimized away
bf294f8: "optimize imports" quickfix for unused imports (PY-268)
847800c: initial implementation of 'optimize imports' for Python
de15f4a: convert RatedResolveResult to class from interface; initial implementation of unused import highlighting
6447876: Do not run inspections in console
d565fdf: PY-909 False positive of "Local variable referenced before assignment" after raise
1c23e9b: cleanup
480b875: PY-849 Global variable recognized as local variable
5b198ec: PY-850 Unused variable for _
a75b73a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
51e9a5e: PY-870: add 'import' keyword in 'from module' completion.
9f95edc: PY-870: complete relatively-imported names.
9283833: store docstring-extracted return type in stubs; unresolved type references don't block implicit resolve (PY-919)
85e0e02: take completion list icon from the correct element (PY-875)
ebb6d48: correctly resolve parameter names for tuples in parameter list (PY-882)
ed03a68: keyword completion for py3k literals (PY-896)
2328454: don't highlight unresolved target expressions (PY-906)
75db8db: don't look at implicits when resolving superclasses (http://youtrack.jetbrains.net/issue/PY-915)
1bd3f5e: do a fast check before a slow one
d5702f8: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
7ededbc: @Nullable
8d8a314: keep context while resolving docstring
5e17f97: redundant null check
a7fa102: elements -> expressions
e4318b7: PY-911 If statements consists of single expression, to which "introduce variable" refactoring is applied, one statement should be created instead of two
87555a3: PY-891 Ctrl + Shift + Enter doesn't move to next line in case of method definition
9393ccd: reuse type eval context for all returns in a function
bf0fa5e: fix getting operation sign for prefix expressions
51df70f: SOE avoidance in resolving member of class with recursive inheritance
ee84a43: ignore DFA exceptions
73f8ce7: NPE
4a4eb58: cache to avoid reevaluating already evaluated types
6dcc627: don't highlight unused values in qualified expressions
fd46373: handle qualified refs in control flow builder
5e469ad: don't highlight unresolved members if left hand type is a type reference
0be4482: types for bool literals
6864c91: don't interpret 'pass' in generated stub as meaningful information about method return type
750df6d: correct handling of aug assignments when inferring types
33b214de: don't try to use DFA for type inference if reference and target are in different scopes
b96182c: one more SOE guard for type eval
19633ac: don't bother evaluating implicit resolve results if we're going to reject them later anyway
4f9b7b7: Part of PY-870: autocompletion in from _ import
f36da9b: Made USE_CACHE non-final so that it could be used again while debugging.
a866d42: do refactoring from conflicts view ( IDEA-52320 )
323d581: Line separators for python
f28f295: NPE related to ColoredOutputTypeRegistry was fixed
8ae978c: enable showing inferred return types for methods in quick doc; fix quick doc tests
937842f: types
c932611: don't take types from implicit resolve results
68b9a0e: test fix
0059d9b: don't take types from implicit resolve results
0056fda: list comp expr has a type
f11713a: more types
5e485f5: union of tuples is tuple of unions
ecfa894: couple more known aliases for types in builtins
dc64ae6: type for tuple assignment
87af145: auto-resolve references if context allows
8f2e9c1: some initial code for type references and extracting types from docstrings
72adfd7: more types
a17cee5: some work on evaluating types of binary expressions
0e46e4f: support local type inference via DFA; introduce TypeEvalContext
7635ed2: @NotNull getCanonicalText()
9da0af0: EA-20248 - assert: PyNamedParameterImpl.getUseScope
d9758a0: remove redundant calls to getInterpreterPath()
b7ac503: EA-20246 - CCE: PyPsiUtils.removeIndentation
afdda27: initial support for calculating method return types; in internal mode show inferred method return type in quick documentation
23a40ec: type for tuple expressions
349c38d: Remove redundant -u argument in console runners
88f44f5: PY-880: updated signatures of min() and max().
52b4b3d: PY-860 "Introduce Variable" at top level in python script inserts variable declaration on top of file
a55721f: Enable history actions in console in plugin for Maia
2214c44: Python plugin compatibility fix
8761fc9: hide irrelevant roots for Python SDKs
e25faeb: test fix
5c6625a: PyMissingBracesFixer now works with slice expressions and subscription expressions
28a0ddc: PY-439 Ctrl-O should add methods at caret location, not at end of class
7e60220: PY-873 Show method parameter lists in Ctrl-O dialog
056ba164: PY-181 Complete Statement should work for Python
1e2ef89: Maia API compatibility for Python plugin
eaf69d0: plugin version updated
88622a1: Merge branch 'master' of [email protected]:idea/ultimate
c9062ab: big fix for completion of imported names (PY-866)
16db40a: PY-871 False positive on unused inspection
ffe03c7: PY-863 Incorrect error about value being referenced before assignment
838d688: PY-862 'cls' parameter should not be highlighted as unused in methods annotated with @classmethod
91e5e2f: PY-861 "Unused variable" false positive in case of multiple nested functions
a929243: Fixes problem with extracting documentation data
9d5da4e: Fix problem with nullable working directory in python console
bbd1616: speedup check
c8402b9: Compilation fix
fcec3b1: Revert "Revert "PY-840""
90499c8: Revert "Revert "PY-842""
32741e2: Make python related consoles handle ANSI highlighting
bd969ff: plugin compilation fix?
0836e91: unreverting parts of console
bd1c70a: NPE fix
add25e2: console vs. localhost fix
fa2392d: Fix compatibility with Maia
7cdf772: Merge branch 'master' of [email protected]:idea/ultimate
793ae13: yet another back compatibility fix
81fed22: Merge branch 'master' of [email protected]:idea/ultimate
4b3e1ec: Revert "Extracted AbstractConsoleRunnerWithHistory to platform"
1d2f967: Revert "PY-840"
6e39d33: Revert "PY-842"
020cb8f: Revert "Initial version of IRB/Rails consoles based on new LanguageConsole API"
0f7a673: Revert "Get rid of unnecessary language consoles (they were used as prototypes for ruby)"
90047f6: Get rid of unnecessary language consoles (they were used as prototypes for ruby)
8a11692: Merge branch 'master' of [email protected]:idea/ultimate
b76cb36: don't throw exception on invalid Python SDK
0ab9d87: Initial version of IRB/Rails consoles based on new LanguageConsole API
872137a: more tolerant condition for availability of 'create binary stubs' fix
1ea7726: PY-853 Backspace action should respect indentation settings in non-empty lines
4849d7e: protect against SOE when iterating superclasses (PY-846)
32c33bf: test runner works under IronPython
4e5c169: SOE protection during resolve (PY-843, PY-841)
13bddbf: force preview when renaming Python functions
e1bd744: usage type for untyped usages
df7eb2a: add dom-impl to python-ide runtime classpath
2aaffb3: PY-842 Make Quick documentation lookup work in pydev console
333f9c8: find usages works for implicit resolve results
9bf1adf: python-tests depend on DOM
df89ff8: include DOM in PyCharm
763d724: argument list inspection does not check implicitly resolved method calls
ae02b88: closing braces in dict literals do need alignment (PY-814)
86745b6: PY-840 Provide Django runtime console
23dac84: changes in function signature are out-of-code-block changes (PY-821)
6139b22: read/write access detector for Python (PY-832)
0e7799b: in-place rename for local variables and parameters (PY-831)
4961e38: test for PY-727
f9d7360: keep things simple and provide self-reference in target expression
efd9e36: variations on the theme of Find Usages for local variables (PY-527)
b47a960: more correct rename/find usages for references to multiple definitions of same var (PY-385)
245c52a: Extracted AbstractConsoleRunnerWithHistory to platform
902f32d: bugfixes
d79b719: improve PyParameterList.isCompatibleTo(), fix PY-669
46e9cb3: disable PyClassicStyleClassInspection by default
8cfe5a9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5462765: PY-825: don't propose to import unknown identifiers in 'global' statements.
7231b7a: PY-830 Do not highlight strings in python consoles as doc-comments
9e413e7: Merge branch 'master' of [email protected]:idea/ultimate
85cc627: cleanup
ea86b8f: PY-824 "Raising a string exception" incorrectly reported when string is passed as exception constructor parameter
5fdd1db: don't create Python tests run configurations for directories that have nothing to do with Python
0edb791: provide a reference inside qualified target expressions
f92fc0a: refactoring: PyReferenceImpl depends on PyQualifiedExpression, not specifically on PyReferenceExpressionImpl
4b006cd: Get rid of copy-paste round 1
7458eae0d: PY-803 Do not run "DocString" inside REPL consoles
d9e29c6: PY-802 Do not run "Effective Statement" inspections inside REPL consoles
2eabb95: PY-823 Octal prefix inspection erroneously complains about floating point literals
c6bf573: Added Pydev console check method
87b0970: better parameter list for overloaded methods
1f6b436: reduce size of stubs generated for .NET assemblies
41f1e33: generate empty __init__.py for middle packages in hierarchy
3234400: improve perf on IronPython, build correct stub filenames for package tree (System and System.Windows.Forms)
3106b8b: option to turn off auto import popups in Python
08a35b3: Sexy runtime completion elements quick documentation lookup.
c7d98a0: PY-822 Make Quick documentation extract data from runtime in case of REPL console completion
3fb2eb1: PY-763 Make python console extracting completion data from REPL
57d494f: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cece327: PY-805: check Django availability in Django run configuration.
9c4e43b: mkdir artifacts before zipping sources
41242cf: EA-19707 - NPE: DFAMap.get
8672e72: EA-18745 - Throwable: SelectionModelImpl.setSelection
48a34b5: cleanup
a1235bb: EA-19412 - IAE: ResolveImportUtil.resolveChild
14758e2: EA-19749 - NPE: PyImportElementImpl.getVisibleName
9ffd2a9: build sources.zip for pycharm
ac4313a: Next version of python repl console: internal web server launch and stop
a436778: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cb7d90d: Updated to match changes of 53706e7f6177c1deac08ba8684b76b0f7d1a733d.
cd224f5: Complete class-private names only in right contexts (PY-570). Only complete underscore-starting names if explicitly asked to (PY-568).
7d19bef: Fix regression with empty namefilter (obvious).
97873ff: Don't autocomplete magic and private names unless asked (PY-568).
9b6526d: Correct qualified import completion (PY-777). Don't propose underscored names unless expected (part of PY-586).
3a09ce5: Improve PySplitIfIntention
7572c7a: Initial version of Python XMLRPC based REPL console
d502b27: PY-812 Octal literal inspection erroneously triggers on hex literals
a1cfd7b: test fix
7ae412d: Show path and highlight non-root part of it in module's quickdoc.
7cf9883: initial version of quickfix to trigger generation of IronPython binary stubs
baecd72: IronPython-aware skeletons generation (initial)
20eec38: built-in support for CLR profiler
497f043: option for importing CLR references with specified names
947a350: remove invalid characters from repr() output
ad58560: XXL timeout
56325a9: initial debugging support for IronPython
dabd114: escape line breaks in object repr values
ea8f451: bump the timeout even further
9905b59: SDK flavor for IronPython
49e89f2: split SDK type into multiple flavors
53081f6: correct fix
8bf415c: http://ea.jetbrains.com/browser/ea_problems/19719
4d1aa3a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e541b02: Correctly cache and switch SDK's builtins on SDK change. Bumped scripts timeout to 30 sec, skeletons version to 2.
5ec7d1b: don't allow using standard unit test configuration for Django unit tests
bfe9024: initial version of Django test runner
50e59dc: don't highlight entire class as old-style
a48eda4: PY-800 Map help button of the Python and Django consoles
a1847df: cleanup
161966d: Generalizing
4f1d316: final fields
4ca2c8d: final fields
c45a110: PY-793 Control flow builder should know about exit() function and consider it a dead end
af25259: Merge branch 'master' of [email protected]:idea/ultimate
709c4dc: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
57ca261: PY-303: disallow invalid SDKs.
d05f456: string format inspection improve, fix PY-734
752af44: disable docstring inspection by default
31c9cf8: add new module to platformImplementationModules
570f312: enabled by default
06a1895: Do not perform Unused lv inspection in case of expression code fragment
7e7d765: Make PyUnbound Local Variable inspection enabled by default
33c9fe6: make sure we don't add /usr/bin to SDK path (PY-785)
76be0e8: EA-19593
ff74d46: EA-19543
d3c4fe6: PY-344 tuple assignment inspection - direct element assignment check
03ecfa5: PY-318 Inspection to highlight characters > 127 in byte literals
97cc0d7: delete obsolete TODO
8f8688c: PY-233 Inspection for super() arguments
c07026d: performance: don't go into elements which have their own scope
d7ecb8d: performance: cache assignment targets
9c359fd: performance: don't use PsiTreeUtil.getParentOfType() in hotspots
84ad124: rewrite inspection to avoid double-recursive visitor
fd398ff: SOE protection when evaluating types
63c243b: correctly process escape sequences in raw unicode string literals; escape sequence map must be static
aeea10b: do not process escape sequences in raw string literals
46f4f44: performance: don't load AST for the sake of checking whether an element is a valid auto-import suggestion
93208b7: sorry, I broke resolve caching: provide equals() and hashCode() for PyReferenceImpl
2fc335f: performance: use standard PsiTreeUtil.isDeepestLast() function without isValid() checks
e9978f8: performance: use Condition<PsiElement> instead of Class.isInstance()
2fb541b: take idea.properties from correct place
720f4c4: __unicode__ should be highlighted as built-in method
61c3a87: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
c4a4561: PY-776: check that __new__ and __init__ signatures are compatible.
ddbba96: Function signature compatibility check logic factored out.
d035044: scanMethods() stops when result has been found
d11a0a6: http://ea.jetbrains.com/browser/ea_problems/19352
6199af3: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
78cfe6e: Don't highlight __new__ as predefined in old-style classes.
05261ea: Consider __new__ a static method in new-style classes (PY-282).
1364124: Adds special handling of __new__ signature (PY-217).
4ac4682: fix test
4dcda77: better target descriptions in Find Usages
5e6ef30: searching for class usages also searches for constructor usages (PY-774), stub for usage type provider
d71303d: Find Usages works on __init__ method (PY-292)
92029df: do not create whitespace from text
d1fb3e2: implement PyTargetExpression.getTextOffset() (PY-726)
a75e44d: initial implementation of classname completion for Python (PY-267)
046c22a: cleanup and test fix for autoimports
2769ff9: more predictable auto import hints in Python (PY-427)
0a393c3: Python auto import style settings
5f40a1e: PY-746
d7c0530: cleanup, remove Apache 2 license headers from all source files
9583c2e: cleanup
610d0a1: build number = 96
0fa1be4: PyElementGenerator refactoring
c7325fd: parameter info highlights next parameter to be filled as current
a8279cd: when completing method calls, auto-insert parentheses and show parameter info (PY-437)
dfc9f20: separate implementation class for qualified references
0a8f91b: separate implementation class for references inside import statements
77cfefa: PY-741 RegExp - no regexp support after extract method refactoring
f7a97fe: fix build
00880a5: add JGoodies Looks to PyCharm dist
e51d0db: correct resolve when same module name (PyQt4 for example) is found under multiple roots in the project (PY-275)
9bc100b: Flip comparison intention
3692995: Improve negate comparison intention
ce488d7: PyStringExceptionInspection
8b71ea3: fix StringConcatenationToFormatIntention is available for all binary expressions
bdb3567: PY-704 String formatting inspection must check that a mapping is used with %(name)s syntax
1b3528f: PY-698 Inspection to check that comparisons with None are performed with 'is' rather than '=='
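The rationale behind this inspection: `==` dispatches to a user-defined `__eq__`, while `is` tests object identity and cannot be overridden. The invented class below shows how `== None` can lie:

```python
class AlwaysEqual:
    # Hypothetical class whose __eq__ claims equality with anything.
    def __eq__(self, other):
        return True

obj = AlwaysEqual()
assert (obj == None) is True   # fooled by the overridden __eq__
assert (obj is None) is False  # identity check is reliable
```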
cfdba33: Using PyNames
02bdf4a: fix minor bug
0221df6: initial version of completing Django tag names from loaded tag libraries
e882626: use reference instead of completion contributor for tag name completion
bb02b41: filter macro and live template
f19a925: live template macro for {% load %}
d9669c9: predefined live templates for Django
6cbc763: live template macro for Django block ID
db31e23: cleanup
a803500: Fix strings highlighting within console
a02f143: PY-738 Make Python console already entered context aware
8ba44b0: Forgotten test for PY-590
9f900ac: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
3b37a90: PY-590: resolve method parameter default values from class context.
94e4f8d: Support hex numbers in highlighter
5f706ed: cleanup
b0cea06: CCE (PY-744)
0631d4c: resolve for {% load %} tags
e06b56e: PY-260 Inspection to check "from __future__ import"
7ba6e30: Inspections enabled by default
32b6f61: fix DocString highlight as having no effect
c98b435: PY-705 Intention to convert % style formatting to usages of string.format method
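The PY-705 intention presumably maps `%` conversions onto equivalent `str.format` calls; an equivalent pair (the exact rewriting rules are not shown in the log, and the strings are invented):

```python
name, count = "widget", 3
percent_style = "found %d of %s" % (count, name)
format_style = "found {0} of {1}".format(count, name)
assert percent_style == format_style == "found 3 of widget"
```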
ee5792e: cleanup
9392046: PY-740 Make completion and resolve inside evaluate/watches while debugging context aware
d3edd3a: Cleanup
69bee0c: VIM-22 Add/Subtract not working at end of line
591f520: resolve and completion for Django settings (PY-563)
568e094: split PyReferenceExpressionImpl and PyReferenceImpl
8ee4c1e: provide type for ForeignKey fields in Django (PY-416)
bef7855: cleanup
b66fedf: correctly resolve when name with same prefix is found under several roots (PY-327)
50b264b: Support PyExpressionCodeFragment as real context provider
09d75ae: Make PyExpressionCodeFragment context aware
bb4716b: Optimization
94e4a25: fix logic for skipping over doc comments in add import fix (PY-728)
49641e0: fix duplicate keyword completion (PY-737)
76e6142: Python add import actions clear read-only status (PY-735)
ed1616f: PY-716 all imports in python console shown as unresolved
0932003: support Python conditional references in regexps
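Conditional references, the `(?(id)yes|no)` syntax, let a pattern branch on whether an earlier group participated in the match; Python's `re` module supports them, as this small sketch (invented pattern) shows:

```python
import re

# If group 1 (an opening quote) matched, require a closing quote;
# otherwise accept the bare word. This is the (?(1)...) conditional.
pattern = re.compile(r'^(")?\w+(?(1)")$')

assert pattern.match('"quoted"')
assert pattern.match('bare')
assert pattern.match('"unclosed') is None
```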
9ca650d: inject regexp language in re methods, take Django regexp injector out of internal
45653b5: PY-700 Python console - python 3 support missing
75cc8d6: Cleanup
94d625a: PY-701: autocompletion fix
1d62b95: since build and version number
6a4d39f: 19351 - ISE: DialogWrapper.ensureEventDispatchThread (http://ea.jetbrains.com/browser/ea_problems/19351)
1e52d14: 19353 - NPE: PythonSdkType.getVirtualEnvRoot (http://ea.jetbrains.com/browser/ea_problems/19353)
94401a9: 19360 - CCE: PyElementListCellRenderer.getElementText (http://ea.jetbrains.com/browser/ea_problems/19360)
a29a0b0: more specific inspection reporting
1112857: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
a059c29: Updated skeleton generator to be compatible with python 2.2 (PY-702).
38224c8: PY-696
b162baa: PY-679
2ee94e8: don't highlight unresolved reference if class overrides __getattr__ or __getattribute__ (PY-574)
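Why suppress the highlighting here: a class overriding `__getattr__` (or `__getattribute__`) can produce attributes at runtime that no static analysis can see, so "unresolved" references against it may be perfectly valid. A hypothetical example:

```python
class DynamicAttrs:
    # __getattr__ runs only when normal attribute lookup fails,
    # so any attribute name "resolves" at runtime.
    def __getattr__(self, name):
        return "<dynamic %s>" % name

d = DynamicAttrs()
# Statically this looks like an unresolved reference, yet it works:
assert d.anything_at_all == "<dynamic anything_at_all>"
```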
8683b66: fix indent after inserting newline after comment in function body (PY-641)
b0ddfa2: honor space after comma option in formatter (PY-486)
7c314e9: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
aa894c1: PY-496, PY-697: complete a keyword after a comment.
1cedc23: keyword completion works after a comment (PY-697)
08b5f44: parameter info works for inherited class constructors (PY-256)
c9bb113: do not indent closing braces of argument list and list literal (PY-433)
709caf3: complete keyword arguments taken from __init__ of superclass (PY-505)
17badef: add __and__ to builtin descriptors (PY-557)
0b115b0: some handling for "blank lines around classes" (PY-295)
352d05d: possible NPE
1f13a5f: @NotNull
65c5149: cleanup
94ce59c: Initial version of PY-688 Behavior of Enter key in Python console should match that of the standard Python interpreter
de1b46f: recognize encoding declarations in Python source files (PY-503)
9c86a36: return correct type for super() calls (PY-573)
993b892: update test: the operation is no longer stub-based
8a05e03: search in strings option works correctly for Python refactorings (PY-670)
24cb0b4: during index building, there is no way to determine whether a decorator resolves to a built-in or not, so we don't store this information in the stubs
65923d5: PY-687 "Run Python console" action should focus the console toolwindow
d3838eb: fix lexing of backslash before empty line (PY-678)
49ff093: class is not a member of itself
72cbe97: cleanup
0245d88: Py CF improvements: handle except parts more accurately
9fc35b4: show correct qualified name for classes in libraries (PY-556)
ded5bc0: honor "blank lines around functions" for class followed by a function (PY-675)
8bb6937: correctly resolve components of import element (PY-676)
2c099c9: indent options for Python (PY-619)
5315768: fix test
601c2f6: PY-680
377f9fa: negate comparison intention
d9c7bfc: PY-659
c7d5721: Fix of unbound local variable inside except parts
ea166a8: Python reaching defs fix leading to unbound local variable inspection false positives
2d0d11e: force restore language level after all tests that change it
b6bd02b: correct implementation of PyAssignmentStatement.getTargets() (PY-671)
41fd6e8: NPE (PY-672)
cc917c2: fix blinking test
46ca24b: one more case for stub-based resolve of qualified names
c304205: Fixes in CF - unbound lv inspection false positives
5f4ae43: do not provide class type for parameters decorated with @staticmethod (PY-663)
da48c76: python-ide has a runtime-only dependency on sass
d685812: remove System.out
0853230: offer correct quickfix for @classmethod with no parameters
7e55f7e: infinite loop fix
ef7414e: one case of resolving qualified names in superclass list
0ec1477: PyFile implements NameDefiner
93d0d01: Fixes PSI
0945a3a: Removed broken scrolling code
fe125f1: Better UI updates on process termination.
f86e753: Optimization
889ae46: Cleanup
8b99454: False positives fix
19b1fc7: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
03db911: PY-660: commit document before an attempt of auto-unindenting.
e9eb361: fix testdata
3dc0c0d: store initializer for target expressions in stubs
84dd320: PsiTreeUtil.isAncestor() avoids loading tree when working with stub-based elements
9087d72: store hasDefaultValue flag in parameter stubs
9991c22: stub-based PyClass.getQualifiedName()
04d6090: disable broken inspection by default
7b0c556: PyFileImpl.findExportedName() processes star imports correctly
d57e3fe: refactoring towards using PyQualifiedName during import resolve
e443df5: assert stub-based resolve in test
d3b72b0: stub-based resolve of builtin file members
3fb7b2d: stub-based resolve of PyClassType members
34fb6bd: PyClass.processDeclarations() uses stubs when possible
d934c2b: initial fast path for resolving superclass via stubs
0949f23: don't load tree for the sake of checking if an expr is in the superclass expressions list
d3a63ae: convert to instance method
6420f00: tweak classpath
54fae05: store qualified names of superclasses in stubs
5389173: refactoring: introduce PyQualifiedName class
89a7ca1: add yaml plugin to python plugin build classpath
321d55e: Fixed presentation of run console actions
146275b: Tests on add global quickfix added
c8c1da4: statement has no effect inspection
f8f775f: Python docstring inspection
c818989: PY-477 Inspection to highlight locals referenced before assignment
1a450c1: PyRaisingNewStyleClassInspection
75490d0: PY-653 Python console - assign shortcut for executing action
c48dfd4: Disable execute action if process was stopped #2
c1855d4: Disable execute action if process was stopped
986b10e: PyDefaultArgumentInspection
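PyDefaultArgumentInspection most likely targets the classic mutable-default pitfall: a default value is evaluated once, at function definition time. A sketch of the bug and the conventional fix (function names are invented):

```python
def append_shared(item, bucket=[]):   # default list is created once and shared
    bucket.append(item)
    return bucket

def append_fresh(item, bucket=None):  # conventional fix: a None sentinel
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

# Successive calls to the buggy version mutate one shared list ...
assert append_shared(1) is append_shared(2)
assert append_shared(3) == [1, 2, 3]
# ... while the fixed version starts from a fresh list on each call.
assert append_fresh(1) == [1]
assert append_fresh(2) == [2]
```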
48882a1: PyExceptionInheritInspection
659b752: PY-652 AssertionError - Wrong line separators: '...] on win32\r\n' at offset 22
1d2b62a: Improved update of presentation of Run Python Console action
be4570a: cleanup
3c709b6: Fixed one space console problem
c35552c: Improved number regexp
d79a665: Remove unnecessary read action
07fec49: New style console colouring added
efea7f1: tests fixed
02b6ec9: Django console added
f067c49: Exception and possible NPE fixed
94ad96b: Better handling of completion at EOF, when parsing is not complete.
fd73f4a: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
5098035: Unsupported features inspection is an annotator now
7494126: PyAssigningFunctionInspection
e20466a: fix PY-631
351fd7c: Test for auto-unindent of 'else' and friends. Only typing (no completion).
46b08a1: NPE
9552dff: Allow completion of 'finally' after 'except'.
b86d850c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0f946f7: Auto-unindent matching parts like 'else' or 'finally' (PY-234). Works both for hand typing, once ":" is typed, and completion.
a98b63d: assert using stubs during multi-file resolve
ba357ca: towards stub-based import resolve
4ffbf9b: remove obsolete copyright
7d67f2a: NPE
118aaad: rewrite PyImportElement.getElementNamed() in a form that can work without touching AST
65bd485: store imported Qname in ImportElement stub
b24114c: store import source qualified name in PyFromImportStatement stub
110e6fe: store relative import level in PyFromImportStatement stub
1e94aaf: decoupling import resolve from PyReferenceExpression, step 4
9adcc9c: decoupling import resolve from PyReferenceExpression, step 3
bb12170: decoupling import resolve from PyReferenceExpression, step 2
96cbfcf: decoupling import resolve from PyReferenceExpression, step 1
18ca58c: stubs for PyImportStatement
d9b439c: PY-634 Unused inspection false positive
0b0f408: Preparations for new style console output colouring
7f28595: test fix
534d638: stubs for from .. import statements
0f6a020: cleanup
e9ecfce: Hopefully tests fix
a2317c7: cleanup
7f2a86f: Custom prompts support
e9c06e2: Initial version of python interactive console #3
6e15b64: Initial version of python interactive console
cd59686: avoid unnecessary tree loading
2b80856: PyClassNameIndex.findClass()
c72c4bf: PyClass.getQualifiedName()
6889f86: cleanup
d9f7f39: improved checking of call expression in string format inspection
d6bdcf3: Use PyNames
00b50b5: PY-577
6b59f49: PY-632
4f45af9: tests for PyExceptClausesOrderInspection
57f863c: package YAML inside of core for pycharm
f3c9b3e: PY-618 Variables used only in a format string are marked as unused
18ca5af: PY-625 Unused assignment: false positive
0a5d91b: initial implementation of type provider for App Engine models
5e639cb: ask type from providers before looking at resolve result
dd36bc8: resolve to instance variables not only in constructor (PY-281)
d726d88: PY-624 Unused assignment false positive
ca49f98: PY-623 "Unused assignment" false positive
b5321c6: Python plugin depends on YAML plugin
4a5c7a0: typo
d4f44b5: choose Python interpreter when creating App Engine project
69b6c50: add resource files
9588511: initial import of Google App Engine support
b4cad16: braces are not structural in python
1847986: improve diagnostics for validity of SDK home
72e699f: Possible NPE
090e22d: control flow test fix
79c5b9c: Merge branch 'master' of [email protected]:idea/ultimate
db82efa: PyDebugRunner command line patching fixed
9d66e6c: PY-616 Unused assignment false positive
57727ad: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
15108e8: Moved mock SDK to test data dir. Added forgotten files.
6d7bee9: advance version number
2e7931a: Corrected filename to make the test pass on linux.
d522e8c: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
d966aaa: The only true VisibleName of an import element is the FQN of the imported module.
6a4bc56: Added a simple builtin names highlighting test.
ebe1b63: Added a blank line between methods in the sample text.
713353f: PyBuiltinCache actually compares files in isBuiltin().
7828815: Mock SDK really accessing mock stubs; a builtin-oriented test that passes.
bbbc923: Send all diagnostics to stderr and none to stdout.
d4ec5cb: added checking on number of starred expressions in assignment
bb6fcf5: PY-576
627bc4e: Tuple balance inspection (PY-233)
c486da7: Unsupported features in Py2 update
f31c046: Python 3 star expressions parsing
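The star-expression commits concern PEP 3132 extended unpacking, where exactly one starred target is allowed per assignment; a sketch of the rule the checker enforces:

```python
# Valid: a single starred target soaks up the middle elements.
first, *middle, last = [1, 2, 3, 4, 5]
assert (first, middle, last) == (1, [2, 3, 4], 5)

# Invalid: more than one starred target is rejected at parse time.
try:
    compile("*a, *b = [1, 2]", "<snippet>", "exec")
    raised = False
except SyntaxError:
    raised = True
assert raised
```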
5dffade: fix building searchable options
65dd374: fix assertion
78e3156: Bad except clauses order inspection
291eae7: Make unused local variable inspection check functions only
ab2b4b8: fix compilation
29d782c: PY-583
d733ee1: cleanup
46e079c: messages cleanup
5e806a0: add intention test to all tests suite
1b02b30: Annotator to highlight unsupported features in Python2
a1db395: parens
0c094c0: PyDeprecatedModulesInspection (PY-580)
7f4510b: fix bug in PySliceExpression
743b499: PY-278
5d39123: PY-559 Unused assignment false positive in case of nested scopes
aba68d9: support for Python named groups in regexp lexer and parser
f7aee38: fix spacing in Introduce refactorings
c1df971: Django regexp injector (isInternal for now)
77c68d6: refactoring: store Python script in separate file, not inline in Java code
0a594e0: one more check required for correct SDK conversion
e528030: work in progress of PythonSdkType conversion and fixes
efbbe43: use line marker provider instead of annotator for view method navigation markers
c007dd4: line marker for navigation from template to views that reference it (PY-588)
0f49b80: create field fix shows live template for field initializer
eda4944: avoiding exceptions when running WebIDE from IDEA
9eacf15: PY-558 Unused assignment should not highlight qualified references or class context references
ce7b27e: PY-582 Unused Assignment inspection must not highlight import statements
28e2720: PY-579 Incorrect "code is unreachable" warning
139529a: http://ea.jetbrains.com/browser/ea_problems/18809
0d8381d: tests fix, further English cleanup
e611936: language cleanup
8ceee9d: test fix
26320e1: fix testdata
a46cf23: fix tests?
e4dc8aa: diagnostics for failing tests
71bbb70: NPE
58ae4d0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0295da6: Intention did not work on long-qualified imports.
04f4086: cleanup
10f5514: cleanup
f2792ad: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
022d114: compilation
b1231cc: PostprocessReformattingAspect rewrite
d1ceea0: Revert until core formatting is ready.
0d69836: Let mock SDK only use predefined paths and not run binaries.
2da5f44: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
8bc5a39: Adds virtualenv support to all run configuration types. Pytest configuration uses a lame hack, but still seems supported.
9e16448: PY-552 'self' parameter should not be highlighted as unused
3f9d0da: fix the fix (PY-544)
652d03b: NPE: PyControlFlowBuilder.visitPyListCompExpression
bfb032e: PY-547 Extract method - class name handled as parameter which corrupts context
d1cc234: added PyUnsupportedFeaturesInspection (PY-501)
2e3a359: PY-474 Extract method - disable ability to extract "from" statement
345be73: Added names validator for Python Extract method refactoring
c81053f: http://ea.jetbrains.com/browser/ea_problems/18530
ff1fd05: http://ea.jetbrains.com/browser/ea_problems/18602
b9ef332: do not push members if there's nowhere to push
5dc3b9e: NotNull assertion fix
dcf0809: fix objects method resolve for models in django/contrib
36af9bb: tooltips for Python gutter icons (PY-530)
fcc57e1: macros needed for a smart 'for' live template in Django templates (PY-523)
ea383df: dumb-aware annotators for Python (PY-522)
7594fb4: advance version
d2191dc: correctly handle triple apostrophe strings in annotator (PY-502)
2974942: Preliminary support of virtualenv (PY-397, PY-244), various runner fixes. Only Run and Debug configurations understand virtualenv currently.
e7d84d5: please ignore
d290f38: PY-436: Unused local variables and parameters inspection
29748ba: Fixed scope update on subtree change
c4391b5: Merge branch 'master' of [email protected]:idea/ultimate
4dc7d24: Revert "PY-520"
8a91a97: PY-490 Support Ctrl-Shift-F7 on 'return' statement in Python to highlight exit points
bec3f58: PY-520 Method parameters inspection fails in case of static method
19974ef: PY-443 Incorrect extract method in case of multiple inheritance
3d69278: PY-464 Pycharm does not detect ubuntu installed python-django
6dfc1ce: correctly package sass in pycharm (PY-516)
748f165: python inline: double definition detection
a8e4d59: reworked inline for new instructions, updated tests
9d34323: tests for inline local
7087468: inline local snake
ba51c21: PY-514 Single instruction for augmented assignment needed
f7155f9: create view method fix fixed and refactored (PY-487)
a5441cb: PY-509 Extract Method wants to pass a class as a method parameter
2980e0c: live template context type for Django templates (PY-489)
30c7c4f: some test for 'override methods'
b002c2a: Dependency on module sass. Fix tests
747f4b3: run skeleton updater under progress
a313d78: Read instructions inside control flow
c1b9c5d: SASS inside PyCharm
515778a: Renamed variables, made PyBlock engaged to formatting.
fda1a25: Last extract method reported bug fixes: PY-479, PY-474, PY-480, PY-471
06441f6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
95ef7cb: PY-274: add folding of Django template tags.
665ada6: dynamic members for models PY-390
a1d452f: PY-470: java.util.ConcurrentModificationException - checkForComodification
3ad7bd9: correct system selector in Mac build
d2e7dd7: fix default Python path on Mac
32f68d0: auto-update and feedback URLs for PyCharm
a549198: don't try to propose import fixes from out-of-project SDKs (PY-458)
7448046: provide types for keyword and positional containers (PY-460)
4d7bbf7: no profiler by default in Linux PyCharm build
fa0244c: Tests fix
eded2a9: PY-467: Do not treat self as method parameter in call in method case
fae1fc8: PY-461: Extract Method doesn't handle 'self' parameter
a62b6d5: API for custom generation of synthetic whitespace between tokens; fix all Py tests
ca981ea: magic incantations for inserting line breaks between tokens at PSI modification time
4981cf93: don't do anything if there's nothing to do
eeaa35e: reformat generated method before inserting
572a7e6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
0f5ec9f: Removed QuoteTypeHandler; PythonQuoteHandler does it all, and overtyping works. Added tests for overtyping, auto-closing (or not) quotes, triple quotes.
4552fdc: fix expected testdata for new formatter behavior
500e410: fix assertion in extract method tests
a871b1b: Merge branch 'master' of [email protected]:idea/ultimate
8d88c74: PyUnreachableCode test fixed
54d6b70: testdata discoverability, update testdata for new formatter behavior
f38a8f9: gentler PSI building in quick fix
7228667: formatter honors "blank lines between methods" option
58005b8: tweak PSI generation logic in AddFieldQuickFix
d767e18: tests are closer to working condition
6f49862: rename of qualified target expression replaces correct element with new name (PY-457)
7ceae97: PY-459 Unreachable code positively fails - if (try.. except..) else return
9e1d3dd: PY-453: List of possible parameters in extract method can't be reset
16f0c5d: sorting by source order works correctly in Python structure view (PY-257)
cbd9787: set correct working dir for run configurations created by Ctrl-Shift-F10 (PY-324)
76c3aa0: Ctrl-O works in whitespace after end of last class (PY-440)
584f6ef: value tooltips should work for parameters (PY-456)
159f281: accept single expression in 'while' statement (PY-452)
7f67838: fix possible SOE in multi-file resolve
f722eba: correct hasInBuiltins() check for py3k
6451d26: cleanup run configuration UI code
96d6239: correct filtering of duplicated names in completion list (PY-438)
a34ba47: AIOOBE in inspection (PY-428)
c977c8e: component for updating version of skeletons when generator3.py is updated
4477b1a: Extract method fixes. PY-442 in particular
d6f7bf1: fixed tests PY-371
4ffc70d: Unreachable code inspection uses control flow
9a26ccc: autodetect @classmethod and @staticmethod for native methods
0bbaa34: no need to keep old code in VCS
8656333: PY-425
611db7c: file structure grouping in PyCharm (PY-423)
25b3590: move common UsageGroup base class to platform
1951f7d: kill a weird file structure grouping provider (should be a usage grouping provider instead)
7520b5b: ElementDescriptionProvider for Python
7e02fec: Finally extract method generation fixed
08e3b24: fix test
8973a64: PY-419 and much more. First bunch of tests on extract method.
24badb0: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cfad7e0: PY-380 and more: smarter quote auto-closing, including triples.
b1404bc: disable inapplicable controls
d4c45c7: correctly track argument count for inherited __init__ calls (PY-312)
616e3e9: special case of show implementations for Python: show initializer for target expressions (PY-237)
67d486e: show class quick doc if constructor isn't documented (part of PY-381)
143e266: disambiguate between importing file and name more correctly (PY-381)
8b50307: no need to register PyBuiltinCache as project service any more
9dab650: some more optimizations for iterateAncestors()
c6fd7c8: minor
bfe7375: help id's
da11e8d: introduce field
99c686c: introduce field
5738de3: fix build of missing skeletons
e099a51: fix two big blunders
42819d5: Changed tail recursion to explicit loop to save some stack.
46a24cc: NPE (PY-377)
7875d11: Fix compilation for case insensitive file systems
9393b15: Fixed extract method in case of expression with selection that breaks AST
f420478: Fixed extract method in case of expression
e362d47: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
48d2079: Moved "Dump PSI to clipboard" where it belongs in Tools menu.
91bcecf: Auto-close Django interpolation braces (PY-271).
84acb06c: Merge branch 'master' of [email protected]:idea/ultimate
da0dee1: cleanup
9ec0668: Fix extract statements within functions
d53580c: Better statements boundaries handling
22f5ca3: Initial version of extract method for python
0ad3af5: use setErrorText instead of custom label. it has nice icon, too!
2739628: PyCharm license text
783caa7: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
423a0e0: Moved a message to the resource bundle
ba61a80: NPE (PY-372)
2458ab5: who the f. is Diana? :)
df1cf52: "create field" fix works for creating brand new __init__ method
9ee8d6f: use type text instead of tail text
882dc92: nicer looking menu for import candidates, sort candidates by relevance
b4bf6a6: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
e076819: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
94a6f92: missing testdata file
f8d2771: testdata path fixed
7441d89: pressing Backspace in virtual space doesn't delete innocent characters in the preceding line (PY-254)
2b1eda9: correct insertion of paired quotes in raw and unicode Python literals (PY-263)
2a136e4: do not insert paired ) when typing before identifier (PY-290)
5168c00: really-really-really fix lexer for long strings
df81556: really-really fix lexer for long strings
91648ac: correctly parse trailing semicolon in single-line suite (PY-363)
bc13ff6: long string parsing fixed again
b3923d5: help topic (PY-348)
2dcc6d5: PY-255 parse __init__.py for * imports reworked
ded250d: PY-255 parse __init__.py for * imports
6f9cfa1: PY-255 parse __init__.py for * imports
8c23e4a: help topic (PY-348)
d16d1c4: PyCodeFragmentBuilder with imports support and tests
53a9005: PY-255 parse __init__.py for * imports reworked
05d4545: PY-255 parse __init__.py for * imports
ddcbe14: PY-255 parse __init__.py for * imports
2e89ac4: Fix bundle properties
0a8cdc3: Revert "Revert "First step of extract method: CodeFragment""
d2107bd: Merge branch 'master' of [email protected]:idea/ultimate
23edab2: Revert "First step of extract method: CodeFragment"
072bcf3: version and since-build
db709aa: support Unicode characters in identifiers (http://www.python.org/dev/peps/pep-3131/)
fe94cda: one more fix for slice parsing, some cleanup
10a22ea: add //= to augmented assignment operators
9d313c6: support multiple targets in 'with' statement (Python 3.1 language feature)
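Multiple context managers in one `with` statement parse as a comma-separated list of manager/target pairs; an invented illustration of what the new parser must accept:

```python
from contextlib import contextmanager

events = []

@contextmanager
def scope(name):
    events.append("enter " + name)
    yield name
    events.append("exit " + name)

# Two managers, two 'as' targets, one statement (Python 3.1 syntax).
with scope("a") as x, scope("b") as y:
    assert (x, y) == ("a", "b")

# Managers are entered left-to-right and exited right-to-left.
assert events == ["enter a", "enter b", "exit b", "exit a"]
```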
db33330: First step of extract method: CodeFragment
5f9505d: Allow whitespaces in installation folders
53dfd2d: parser for 'nonlocal' statements in Python 3
56f02d9: parser and PSI for Py3k annotations (http://www.python.org/dev/peps/pep-3107/)
98a1dda: fix regression in slice parsing
108b305: Propagate RUBY-5703 fix
398651f: correct fix for scrambling
72b9a45: fix scrambling
7d260f1: fix PyMultiFileResolveTest: ellipsis must be handled by parser, not by lexer
26054ce: scrambling in PyCharm build
9b0728a: PyCharm images
de17783: Scope implementation for Python
d084ece: cleanup
c03175f: fix parsing of extended slices, change PSI for slices, select word handler for slices is now redundant
baa08ee: correctly parse default values for tuple parameters (PY-350)
2f9e89c: tests for extract superclass formatting for extract superclass
77bafe8: [PY-346] fix formatting
837ef22: imports
86cbe14: imports for extract superclass
4400949: fixed testdata
c6c9be3: yet another fix
aaad64a: lexer and parser for ellipsis tokens
8f16549: support raise ... from in Python 3
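`raise ... from` (PEP 3134) attaches the original exception as `__cause__`, so both tracebacks survive; a small sketch with an invented helper:

```python
def parse_port(text):
    try:
        return int(text)
    except ValueError as exc:
        # PEP 3134: chain the low-level cause onto the domain error.
        raise RuntimeError("bad port: %r" % text) from exc

caught = False
try:
    parse_port("eighty")
except RuntimeError as err:
    # The original ValueError travels along as the explicit cause.
    caught = isinstance(err.__cause__, ValueError)
assert caught
```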
3ff98b5: traceback filter condition was too narrow (PY-301)
b9edb719: more correct fix for NPE
2179607: Initial PyReachingDefs implementation
c9dbb1c: Import statements support in Python control flow
9e56af3: extract supersnake
cc7ecad: fix formatting issues
4db3a0c: fixed NPE
b0c7c02: assign target annotator checks set literals, set and dict comprehensions
4528252: parsing Python 3 dict comprehensions
a927e91: parsing Python 3 set comprehensions
38cd9c6: parsing Python 3 set literals
6cd30db: support keyword arguments in superclass list (http://www.python.org/dev/peps/pep-3115/)
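PEP 3115 lets the class header carry keyword arguments (most importantly `metaclass=`), which are forwarded to the metaclass; a hypothetical metaclass shows the syntax the parser must accept:

```python
class Options(type):
    # Hypothetical metaclass: collect extra class keywords from the header.
    def __new__(mcls, name, bases, namespace, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace)
        cls.options = kwargs
        return cls

    def __init__(cls, name, bases, namespace, **kwargs):
        # Consume the keywords so type.__init__ only sees the usual triple.
        super().__init__(name, bases, namespace)

# Keyword arguments in the superclass list reach the metaclass.
class Model(object, metaclass=Options, table="users"):
    pass

assert Model.options == {"table": "users"}
```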
562ef6b: improve handling of keyword-only parameters in argument list inspection
f899453: None, True and False literals are also expressions
80ed1c4: push language level for files under SDK roots
a150573: exec is not a keyword in Py3
03a3ec2: correct builtins file name for Py3
2669f97: highlight tuple parameter unpacking as error in Py3 code (http://www.python.org/dev/peps/pep-3113/)
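PEP 3113 removed tuple unpacking in parameter lists; the Py3-valid spelling unpacks inside the body, and the old form is the syntax error the annotator flags:

```python
# Python 2 allowed:  def area((w, h)): return w * h
# In Python 3 (PEP 3113) the unpacking moves into the body:
def area(size):
    w, h = size
    return w * h

assert area((3, 4)) == 12

# The removed form no longer even parses:
try:
    compile("def area((w, h)): return w * h", "<snippet>", "exec")
    parses = True
except SyntaxError:
    parses = False
assert not parses
```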
843f6ce: Merge branch 'master' of ideahost:idea/ultimate
ab6954a: push down tests and formatting
0f143a9: fix display of execution point in Python debugger
d25e051: kill old pycharm build script
9cb32fd: specify help path
bf522bc: typo fix
6e88a4d: mac launcher fix
f9e0aa3: package PyCharm help
3b05865: PyCharm icons, cleaner Mac build
162fdef: True, False and None are keywords in Python 3
ba97387: continue work on annotation of keyword-only args
3c8137a: testdata discoverability
c1ab665: Refactor Ruby & Python control flow building
7237446: aid for testdata discoverability
ca0922a: correctly handle offset at end of string in Python literal escaper
eb96a58: tweaks for testdata discoverability
28df4ca: push that snake down
4f34e4d: cleanup
7238fc9: Fix import statements to accept PyVisitor
f3afb97: Support for try-except statements
98b1f0f: Support for named parameters in control flow
42b5c7f: Tests on PyControlFlowBuilder
a70e8e4: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
cf04021: Removes duplicate name constants, renames instance fields properly.
1c43162: Fixes PY-294 and PY-297, adds relative import intentions.
df885d6: smart handling for slices in select word (PY-288)
a9cdeae: Yet another piece of Python control flow
c341f5f: rename parameter so that it says what it means; fix PySurroundWithTest
12e3e01: missing part of testdata
083b3b9: refactor command line state so that debugging for Python tests is now supported
35ef49d: pull up snake
79c2e21: debugger patch from Roman
b40bbe0: unused import
87fcf29: not used anymore
92e8b87: one more optimization of PyStringLiteralExpression
83fceca: don't use regular expressions for calculating text range in string literal; revert injector implementation
5770c18: honor the contract of LiteralTextEscaper
bf17fe6: controlflow prototype
e081684: Fixed python editor
2c4f28b: Merge branch 'master' of ideahost:idea/ultimate
b51cbe0: sanity check for PyStringLiteral injection host
8310e40: initial (possibly incorrect) implementation of PsiLanguageInjectionHost for PyStringLiteralExpression
1647544: PyTypeProvider.getReferenceExpressionType()
1e5bf89: merge idea90 into trunk
517bd8e: add missing checkResultByFile()
58d999e: cleanup
470fa54: PythonSdkType fixed: do not create an SDK with an empty binary path
cc7b0ef: NPE fix
fc96d3e: Fixed python interpreter search strategy
b4a6ea3: dependent members collector pre-selection
9380553: bunch of pull up stuff for python
6fc75e1: bunch of class-related refactoring stuff for python
44be31a: initial work in progress support for Py3K keyword-only arguments
445b18d: Good code is red: nested 'if's in list comprehension (PY-322)
01ee9b0: Good code is red: empty superclasses list for a class (PY-321)
4999096: support class decorators in Python 2.6 (PY-320)
bea11cc: support new literal types in Python 2.6 (PY-317)
ff3439e: Fix bunch of NPEs
84e9cfe: fix NPE when building searchable options (PY-315)
b26aa62: Handle 'from __future__ import print_function' in Python 2.6 (PY-314)
51a4f1d: include vmoptions and correct sh script in pycharm tar.gz
079776f: fix tests
6528173: Python language level support (PY-261), correct parsing of with statements in Python 2.6 (PY-259)
4f9a806: more fixes for unclosed string literal highlighting
0ad02d6: Relative import (PY-92) with three test cases.
2ecca6c: Merged python/src/com/jetbrains/python/parsing/StatementParsing.java (No change really)
85fe290: Merge branch 'master' of git.labs.intellij.net:idea/ultimate
106d78f: Tests to PY-286 related __doc__ resolution.
a2e32cd: correct highlighting of string literals with mixed quotes (PY-299)
7f7953b: add explicit dependency on XmlPullParser so that it won't be missing from the installer (PY-300)
89e7aa7: lexer correctly handles sequences of multiple backslashes (PY-287)
d27e523: cleanup
1c947c1: no.jre.check
0944ffd: support tuples in 'except' part of try ... except (PY-293)
42498c9: PY-92: relative imports work. No relevant test cases yet. Much code rewritten, some old code can be cleaned up and/or eliminated.
db217b6: correct parsing for 'as' in except clause (part of PY-297)
275d661: inject inspection colors dynamically into pages that implement InspectionColorSettingsPage
f6d967ad: try to build dmg
ad9bd92: build tar.gz for pycharm
88b1092: pack license jars inside pycharm.jar
3b3673e: include missing win-specific stuff
fc4c82e: fix artifacts path and library dependency scopes
b19a0b5: stub image for about dialog
f52b1be: another round of installer fixes
9949100: build searchable options, couple more installer fixes
fdf82c0: pycharm installer improvements
a3a0539: support creating Django projects in PyCharm
3b40e2a: python build works now (?)
e477f34: one more missing piece
5ad8a2f: one more missing piece
180bdcc: one more missing piece
0b1cca6: images for PyCharm welcome screen
2729753: one more missing piece
caa643f: one more missing piece
c1b834a: fix build
b06d923: executeExternalAnt
8948d01: try to build installer
c080d5f: fix path
22d1f7f: try to build exe for pycharm
e28b1fc: rename build script
dc1ff79: gant-based build (initial)
9785dad: Fix NPE when the name is empty.
31139bd: Preliminary resolution of __doc__: no tests, no reassignment in functions yet.
5a42132: PY-286: resolve __dict__ attribute. Added crude debug mode, too.
f0034f8: PY-282: now first parameter of object.__new__ is named 'cls'.
91485f6: Merge branch 'master' of [email protected]:idea/ultimate
b2702a9: PY-265: reworked completion of module names.
72d6cde: reference from render_to_response to template file
5ead73e: PythonCompletionTest fixed
ef05a7b: PythonHighlightingTest fixed
733d9a3: really-really, we do need to process remaining modules if we fail to generate a skeleton for one
016389e: do not copy helper scripts to temp dir, run them directly from helpers
0ad756b: tests for forthcoming functionality, cleanup and refactoring
f27cd6f: catch SIOOBE in PythonNamesValidator
a5916b1: incorrect dependency on junit3 removed
8409d343: allow running tests with classpath of module python-tests
e84db7d: remove dependencies which are no longer necessary
9ab912b: extract Java-related tests to a separate module
84bb87e: convert test to PyLightFixtureTestCase
c6ab7f7: convert test to PyLightFixtureTestCase
8ed733e: convert test to PyLightFixtureTestCase
882e334: convert test to PyLightFixtureTestCase
544b329: convert test to PyLightFixtureTestCase
3927018: convert test to PyLightFixtureTestCase
85894d2: convert test to PyLightFixtureTestCase
5904980: convert test to PyLightFixtureTestCase
4300cb6: convert test to PyLightFixtureTestCase
4118879: convert test to PyLightFixtureTestCase
c47b89b: convert test to PyLightFixtureTestCase
cf08a86: convert test to PyLightFixtureTestCase
98f1355: convert test to PyLightFixtureTestCase
cca9d5c: convert test to PyLightFixtureTestCase
7cc74ac: convert test to PyLightFixtureTestCase
408bc8b: convert test to PyLightFixtureTestCase
6d7ab28: Python debugger
33812f6: hoping to have tests for Django some time soon
14e98ae: more generic name for the class
258ec0e: nicer looking completion items: use type text instead of tail text
360f36f: advance stub version
c801fc5: PY-253: make both foo and bar refs in assignments like 'foo.bar[1] = 1'
8f80611: Fixes PY-253: foo[1]=2 now has foo as reference, not target
6893ed7: correctly pass environment options to Django run configuration
b96c328: initial integration of Alexei Orischenko's Django plugin
5fc100f: Tests missing from previous commit
970d8da: Fixes wrong mapping of subscripted assignment targets (PY-247); adds tests.
e3cbb52: PY-249
64797ac: corrected roots markup
7546967: APIs for django
29a8e7c: add ultimate dependency
04d7a9c: add ultimate dependency
1d9d4bf: revert incorrect fix for PY-152 and fix the original overriding markers calculation logic
21c9c77: correct scope for Python goto class and goto symbol contributors (PY-242)
f27a14d: python surrounders brought to mostly working condition
77a3189: work in progress on fixing python surround
9e8db5b: update link to release notes
31cce76: out of code block modification tracker for Python
619a99e: Restored highlighting of predefined methods (regression).
9a7f7b8: update link to release notes
5829b54: Merge branch 'master' of [email protected]:idea/ultimate
06e926f: Added tests for predefined methods name completion.
a21681b: Added predefined method names completion. PyNames refactored.
b37296f: remove incorrect properties (IDEADEV-41823)
feabd7d: Adding SDK under progress bar; closes PY-180
4c7dfdb: Fixes PY-234: find skeletons of built-in modules
1ade6e5: Fixed a javadoc typo
6ac1c52: Renamed tests more uniformly; extended PythonAllTestsSuite
2dc82aa: Merge branch 'master' of [email protected]:idea/ultimate
d92d046: Updated overlooked test data in quickdoc/Module.html
c98d306: Refactorings, cleanup of PyUtil, no new functionality
7b64652: advance python stubs version
33c8671: Enhanced parameter renaming quickfixes.
e6760a3: Fixes the rest of PY-231: inspections understand staticmethod() and classmethod() wrappers. Some cleanup and refactoring, too.
93e979b: Removed unneeded info in module's quickdoc
676b224: Rewrote QuickDoc code, cleaner HTML creation, closes PY-231 partly.
570d449: FP stuff refactored and extended
d30e10f: pluggable quickfixes for Python unresolved references
261af0b: show framework support providers in Python module wizard
d319bdc: query reference providers for references in Python string literal expressions
ed9418e: Added a test suite to include all actual tests in fail-fast order
79122ab: Reimplemented PythonBuiltinsCache, made it per-module. PyCallExpression.resolveCallee() now understands wrapping methods in calls to classmethod() and staticmethod() with reassignment, without decorators; parameter info and inspections correctly handle this. PythonDocumentationProvider understands such reassigned methods, too.
493d377: Merge branch 'master' of [email protected]:idea/ultimate
c7054d3: Rewrote PyReturnFromInitInspection in Java. Removed the Jython version. The Jython inspection provider stopped working for some reason (PY-240) and is now disused.
296dc59: cleanup: toArray() -> toStringArray
27b9288: deadlock in initialization: inject dependency on application service
384c584: Tiny fixes in PyQuickDocTest data
4165004: Merge branch 'master' of [email protected]:idea/ultimate
ea9b154: Multiple targets support fixed in followAssignmentsChain(); some tests added.
07a96cf: Inspections - pass onTheFly into ProblemDescriptors & use it to create LAZY refs in batch run.
540e705: Added direct assignment tracking. Quick doc and parameter info now follow assignments of classes and functions. FP-related stuff was factored away to toolbox.
50d29f0: Factored out FP-like code to toolbox.
2a2b619: Fixed NPE in the break and continue ctrl+click handler. NPE happened while hovering cursor over unrelated elements; the incoming source element happened to be null, it seems.
dbd564b: Added a break and continue ctrl+click handler.
704c521: Autocomplete names defined below function definition (PY-230). Also enhanced icons in autocompletion popup, and probably removed some 'false positive' completions in class contexts.
1aaf36d: Correctly indent statements on pressing Enter after a colon.
a216228: Merge branch 'master' of [email protected]:idea/ultimate
db681f5: Better named params handling in 'Create method from usage'.
a6a8488: git doesn't have build.vcs.number
5077c49: version 2.2, correct since-build
9e81e44: let's try not forking javac
1814dd0: Fix PY-196, also create classmethods from usage, name parameters.
fd0fb19: try to fix pycharm release build
2fca1dd: fix python plugin build
0549083: Don't propose to add methods or fields to builtin classes.
051a1d4: PY-206: add PYTHONUNBUFFERED=1 to newly created run configurations
c79f421: Signature restoration: name clashes removed.
d7c553f: Merge branch 'master' of [email protected]:idea/ultimate
1fb5d8e: PY-134: correct type of __class__ and __dict__.
ce89448: Removed needless commented-out code.
7266e95: Adds detection of super(Class, instance).some_name; closes PY-62 and PY-114. No test case yet. Also, super(Class, AnotherClass) is not handled.
a1054af: Fixed problem with broken backspace/delete on MacOS
663d9f4: Cheaper ProgressManager.checkCanceled(). Mostly, it's call to abstract method eliminated.
19dfc1b: Improved class inheritors search performance. Fixes PY-152.
c2cb825: verifier for Python and Ruby plugins
e9f4052: Added qualification of names imported via star import to import conversion intention.
0873e68: Merge branch 'master' of [email protected]:idea/ultimate
9aa98a9: Fixed PY-191: don't propose to import unresolved names within an import statement.
11dcd87c: separators
a94f550: Strings went to resources; only one copy of isIdentifier() left.
7e6f824: Merge branch 'master' of [email protected]:idea/ultimate
fa8f26f: Removed Jython-generated files, updated pyparsing with right EOLs.
c6f61bb: Added "Toggle alias", conflict panel, descriptions; refactored intentions.
75f56b5: Added updated PyUtil missing from previous
b035f5c: Import intentions are more correct and detect clashes.
7647071: remove *.classes from version control (*.class is in .gitignore)
86e406e: Strange bug with backspace handler fixed
5b1c958: before accessing index data forcibly update only those 'dirty' files that can affect the result (update the file only if it is accepted by search scope)
7099b91: classes
b8873b8: multiple conflicts per element
1624ddf: Updated utilities used by import intentions.
789a765: always use getIcon instead of findIcon
ef0808d: Merge branch 'master' of [email protected]:idea/ultimate
139eb7b: @NotNull
6abc005: Ruby and Python Smart backspace fix: RUBY-5097
b40b140: fix idea home path
52746af: fix idea home path
50d173b: fix idea home path
97113b9: ignore *.class
bef7a77: ignore cachedir
ef316bf: fix paths to python testdata
60f4d0f: building from sources fixed
6b6a672: test fixed
fa4fffa: API for specifying test source root path in RubyLightProjectDescriptor
2ca251d: update for TeamCity API changes
96fd790: Makes default auto-imports non-qualified, gives user a choice how to import importable names. Fixes PY-203 and several small bugs.
7ce758a: get rid of deprecated constructor usages
7e593bb: added icons to frameworks tree
154bcbe: decorate->withRenderer, createDecorator->withTail, remove Visagiste
5a6bb8a: 'add framework support' ui imporved & FrameworkSupportProvider moved to open api
1aadb2e: Added validity checks in treeCrawlUp(). Fixes PY-155 hopefully.
7233dea: Extended test just in case.
7ecbbad: Now ^Q works for unqualified imported names. Fixes PY-229.
78a47f8: fileForDocumentCheckedOutSuccessfully #2
a6f9367: fileForDocumentCheckedOutSuccessfully
1cbf0d2: Adds checks before setting a problem descriptor. Hopefully fixes PY-210.
a7a19e2: fix Ant path
e0e16d8: cleanup LookupElementBuilder API, non-invasive groovy insert handler setting, no semicolons after break/continue, better 'delegate' name
e12728d: Changed skeleton generator's parser
a3215bc: requestWriting method
ffab699: remove LookupElementFactory#builder
fbae2dd: no LookupItem usages in python
af8bdc8: annoying 'unversioned files'
1b67955: more LookupElementBuilder clients
97abd8a: build paths
a84caa5: idea-ui module extracted
d11dcae: module psi-extapi merged with lang-impl
7fe898b: testFramework-java
68b823a: minor error
1b4e455: fixed IllegalStateException
767eb7e: added surround with functionality (PY-220)
ee873fc: added Introduce constant for Python, fix some bugs in introduce variable
a5ca5c4: avoid shared lexer usage from different threads
67a26c4: Fixes PY-222. Restores @classmethod logic.
056ef23: sorry, fixed light fixture root init
381c099: added PyGoToSuperHandler
2749457: correct work of PyStringFormatInspection with PySliceExpression and PyListLiteralExpression
b59d8a0: first test for dependency scopes; allow to specify root of LightTempDirTestFixture
dea8c70: proper use of visitor pattern
f0ff379: added Introduce variable for Python
46a6fd2: PY-221 - Method overriding inspection checks __init__ and __new__ and it should not.
42cac9c: Added SliceExpression as allowed on RHS, so that things like %d % foo[1:2] pass as correct. Please write a test for it already!
4a10bfa: Added SubscriptionExpression as allowed on RHS, so that things like '%d' % foo[1] pass as correct.
4850ea5: added python methods override
a0a0c0e: Python type hierarchy crash with stack overflow on cyclic inheritance
6123cdc: PY-214 fixed
7ec0ea8: use more standard implementation of delete() for Python PSI elements
2b2fd5b: delete deprecated code
eafea28: optimization: do not try to eval python for every inspection
0e1bddb: remove wrong code
fa07c1d: register hierarchy actions in correct place
cda49bc: Minor update
dc3149f: don't register same actions twice
8009ce1: Added type hierarchy for Python (PY-209)
56e3e95: Adds proper lookup in both classes and sources of module roots. Fixes PY-216.
a8495a3: diagnostics for blinking test
cb4b117: fixed PyStringFormatInspection + added some new tests
1b84d8a: generator3.py minor update
0108c25: use unified API for capturing output; update assertion for compatibility with Jython 2.5
1a3f690: add PyTrailingSemicolonInspection and RemoveTrailingSemicolonQuickFix (PY-189)
bb24ece: upgrade to Jython 2.5.0
f221494: correctly run jython.bat on Windows
c43da5c: use unified implementation of capturing stdout and stderr
4dffb26: add PyMethodOverridingInspection (PY-140)
484e7ae: PyLightFixtureTestCase introduced and used for PythonInspectionsTest; CodeInsightTestFixture is capable of testing inspections
8218bb7: fix stubs generation for Jython 2.2
c55ef84: (no message)
d7d5ecd: mock SDK for Python inspections test
da7f6cf: bumped idea.max.intellisense.filesize to 2500 (ext.all.debug.js is larger than previous limit)
1bf7421: Fix completion to use new parameter interface, update tests.
4168517: missing test data for PyParameterInfoTest
a714346: Forgotten change of testMethod for corrected decorators rendering.
90ce4a1: Adds a notion of a tuple parameter. Adapts parameter and argument inspections and Ctrl+P handler to use it. Adds tests for Ctrl+P handler. Closes PY-200.
3fdb9d6: Don't show empty arglist at decorators if there's none. Fixes PY-194.
7516ce9: QuickFixTest fixed
5918703: initial OpenAPI for TemplateBuilder
e5cb9e3: 1. Missing test cases. 2. A typo that swapped opening and closing tags in quickdoc.
740433b: Added completion of parameter names. Removed wrong double-underscore completion, added relevant. Closes PY-192.
9ea1e44: Closes Py-184 somehow. Added arglist to function's hover info, class name for methods, ancestor list for classes. Decorators are omitted as they seem to add lots of visual clutter.
cd91c4c: Show "no documentation" instead of an empty page for modules that don't have a proper doc string. Closes PY-198.
aa4c7e5: recognize skipped tests; cleanup
464b460: validate existence of py.test runner script
e785f42: NPE
0e8bd34: use runnable script filter instead of producer for unittest; fix creating tests for directory in py.test
4bf6e9e: Clarified "unresolved reference" message for unresolved references qualified by things other than a class. (Fixes PY-188.)
bad9caa: py.test: keywords support, creating configuration from location
faaf72c: initial version of test runner for py.test
b5da816: Added an override for unicode constructor for python 2.x.
38e3858: show template builder for parameters of newly created method
a6a9dd6: Closes Py-184 somehow. Added arglist to function's hover info. Decorators and class info are not added as they seem to add lots of visual clutter. (Code for this is present but commented out.)
86265f1: Do not displace file's doc comment when adding an import.
b8f2106: Now if ^P fails, it fails silently, without a '<failed to determine>' hint.
d4acbfa: dumb-aware folding builders (IDEADEV-38126)
9c00226: Adds submodules to "from ... import" completion. Removes duplicates from variant lists in imports of any kind. Closes PY-182.
3830acd: Don't mark assignment to None as error in __builtins__.py where it's actually defined.
76ecf73: simplified TextEditorBasedStructureViewModel API
e37ad7e: per-language completion contributors
4270b8e: lexers refactoring, part 1.
8157b1d: reuse new test runner UI between JUnit and SM runners
70b51b7: PY-173 - [#14797] ProblemDescriptorImpl.<init> (don't mark empty elements)
23e88de: Adhere to new CommandLineState API
f28bcd1: JUnit UI redesign, part 4: "rerun failed tests" button moved to proper place
3112b7a: Introduces an inspection marking reassignments of 'self' in methods.
0dc2594: Enhanced assignment target annotator (closes PY-172 and some similar issues). Added a test.
1b47211: Fixed typos in javadoc.
a4e6443: recursive copy
ce689fe: copy helpers to zip
0117481: one more fix for parent envs
ba9eb83: copy helpers to plugin dir
562fb72: use different element name to make sure changed defaults are picked up
62105fc: passParentEnvs is true by default
f29dc1c: fix builtins generation under Python 2.6
77cb17a: use module SDK in new Python script run configurations
484c5a2: force building plugin with JDK 1.5
ad30a83: disambiguate version
d0c09f6: change notes link added
9c3f0f7: live template context type for Python (PY-81)
1857bd8: correct fix for PyTestRunnerTest
8d66477: since/until for Maia
d814269: version 2.1
848fa73: ignore Jython's garbage in stdout
77cb93c: Fixes 'assert' with no arguments being parsed as correct (PY-162).
0c7c087: Only show Python SDKs in runtime config for a Python module (closes PY-129).
68a7632: cleanup; system-dependent names
996fe6b: unit test configuration is created with correct module
f5f9e9f: put module content roots in PYTHONPATH
581367f: running all tests in folder rewritten in Jython compatible way; test added
6649968: tests for class and method
bbccd4e: tests for Python unit test runner; utrunner.py is compatible with old version of unittest library from Jython 2.2.1
42b7f05: disable debug output by default
6c88498: fix exception on creating stub for incomplete class declaration
3b92533: add jython stdlib
074a447: avoid nested rootsChanged: create library in invokeLater()
71123af: fix sys.path retrieval for Jython
553a951: facet autodetection and framework support provider for Python
3977915: Show SDK selection panel in unit test run configuration.
84947e9: Adds a basic inspection for unreachable code.
f9c2eb3: facet-based run configurations actually sorta work now
d646242: Enhanced 'return' and 'yield' annotator. Closes PY-168.
d9a69b1: Fixed an assertion error when indentation is wrong. (This whole thing begs to be rewritten.)
4fb1bdb: Fixed a false positive in a test.
2cdd275: Efficient doc string annotator; tests.
5825238: Exclude files with non-identifier names from import suggestions.
801ea7b: Enhance quick doc generation; copy inherited docs for overridden methods. Add module quickdoc (PY-166). Add tests.
eeb16ad: Fixing compilation: previous commit did include these files, but perforce was acting out.
2655cec: Re-application of 236134 which was reverted.
43498ac: Don't show keyword completion inside comments (PY-130).
3cc36c3: Revert: Prevent inherited names from appearing in completion several times. Prevent non-identifier file names from appearing as import suggestions. Show defining class names for inherited attributes in completion popup.
a6453aa: Prevent inherited names from appearing in completion several times. Prevent non-identifier file names from appearing as import suggestions. Show defining class names for inherited attributes in completion popup.
66cb652: Reintroduced test setup that got lost during PythonPyInspectionToolProvider refactoring.
ffb3fa9: Fixed statement keywords completion right after classes and functions. Fixed PyClass and PyFunction not being statements :)
e5f6b3d: Removed bracket pairs handling, now that an IDEA-wide handler does this.
9ed98bb: For built-in decorators, highlight not an entire decorator as builtin, but only the "@", to prevent conflicts with decorators annotator.
896d5ce: fix tests: Python typed handler works only in Python code
b85e064: FileBasedIndex receives non-null getProject() in all queries
7481196: Made to match current openapi (failed compilation).
06271f5: compilation fix
a88fb50: Fixed a typo in dict() constructor signature. Problems with generation of a skeleton for a binary module now produce a warning instead of error.
10aec64: Keyword completion made far better (PY-51), added overtyping of bracket pairs and of the colon (used by completion), better tree pattern matching, enhanced break and continue annotation (PY-164).
b90b8cf: performance
a27ad9f: ProjectJdkTable listener converted to MessageBus topic
9c3906e: register tools before profile is loaded
ec68272: restore Python plugin API compatibility
6e4506f: java resolve optimizations
462d85a: dropped old and unused method
17f13c2: Saner module name completion in import statements.
2ea496c: 1. Run generator3 in update mode, greatly speeds up startup; 2. Don't look for binary modules in Jython SDK initialization; there can't be any.
b28cc5b: Basic string sub-parsing and highlighting: valid and invalid escapes. No test. No format strings (tbd). Fixes PY-39.
d08e17f: Python run configurations work in progress
e3a1913: Fixed a crazy (but simple) timeout-related blunder that prevented skeletons from generating.
332ccef: Don't ever resolve to a PsiDir, only to PsiFile('__init__.py') inside it. Partly fixes PY-147. See discussion there.
b27e251: Don't show an empty Ctrl+P hint if analyzeCall() returned null.
96d389f: Alleviated PY-108 effects somehow: param lists with tuples are just ignored by inspection, for now.
49d368c: Classes that merely inherit from built-in classes are no more highlighted as built-in.
80f0d5a: Check for PsiFile instead of PyFile as top level; possibly injected Python statements might reside not in a PyFile.
8cd8253: Revamped doc provider. Now it shows function or class signature, class for methods, function decorators, doc string if present, or, if absent, builtin object's docstrings for certain predefined methods (__init__, etc). Output uses html and looks better.
df6834a: Fixed an assertion error when a function is being defined and a colon is not yet typed in.
11bf8fa: python facet support: look for imports not only in SDK roots but also in library roots
c5ef007: initial version of Python facet (mostly copy-pasted from Ruby :( )
3c04062: Prevents incorrect resolution of unqualified names to qualified. Closes PY-153.
56bc589: Prevent NPE if no identifier is entered yet after 'class'.
3a4684e: Makes 'as' clauses in autoimport avoid unwanted name redefinitions. Removes '__init__.py' from autoimport candidates. No tests for these yet due to stub index not being available at test time; to be added later. Fixed stupid error in previous autoimport tests that gave false positives.
6146379: A more complete fix for PY-147. Proposes several ways to import potentially importable names, adds necessary import statements, handles name clashes. Also, bits of semi-related refactoring here and there.
11be068: Adds a kind of auto-import for names defined in already imported files. Hopefully fixes PY-119. Also refactors resolve processors and adds a bunch of small code improvements.
f74e85b: Python: added helpers directory to pycharm
d87fa6c: Python: include serviceMessages.jar in release build
72e09d6: Python: unittests from folder and fixes
2a5f1e6: Python: locate test method in hierarchy
e38aeef: cosmetics
12bf808: Python: location provider for tests
370d075: Python: fixed stacktrace filter
bc795bc: inspections ui rework (I)
72fbf1b: unneeded tag removed
75f390a: JSF jam models, annoted members toolwindow and others
4f336df: Python: unittest helpers fixes
6fbe538: Python: unittest helpers
5b54934: Introduces 'Add field' and 'Add method' quickfixes for unresolved attribute names. Fixes PY-34. Adds groundwork for future 'Implement constructor'.
4fe7a73: Python: @author
4eeb401: Python: create unittest cfg from directory
0751c76: Python: run configuration from location
de3e178: Python: reworked run configurations, unittest runcfg UI only
186db78: folding in injected
8504fd3: Removed debug output from PyBlock.
c4abd84: Added interpreter options to run configuration. Closes PY-145.
578cf59: Rather crude fixes to make Jython runnable on Linux; to be rewritten later.
13cd8a8: More correct constructors for strings, ints, and tuples.
3bcabf5: Fixes PY-71, introduces parts in multi-part statements. Slight fixes to autoindenter.
321dd54: IDEADEV-25498
ede0734: Fix for NPE in PY-144. Fix for hangs caused by badly unimportable binary modules. Fix for import errors caused by unimportable binary modules that don't hang.
1075419: Fix for PY-142 (hopefully).
28ab3c3: made final; Perforce integration failed to commit all files last time, sorry
7bc46a8: made couple of fields final Sorry
9963149: customizable URL for "Submit Feedback" (RUBY-2926)
b487577: Fixes endless 'add import' (PY-126), slightly refactors roots handling.
962ee61: eof space
fcfbb92: @NotNull getOriginalFile()
d9c5721: made ThreadLocal fields static as they must be
0eee629: Added a check for PSI validity in import resolution. Hopefully closes PIEAEs in PY-116 and PY-131.
3fe7f63: Compatibility with Python 3.0 release (previous version only worked with alphas); builtin module is renamed in 3.0 release, this causes PY-138 (not fixed yet). Proper constructor signatures for several built-in types (fixes PY-137 and PY-94). Decorators for restored methods (fixes PY-65).
cf9ec0a: Fixes NPE on highlights without actions.
ac1b2de: Adds highlighting of class and function definitions, decorators, and builtins. Fixes PY-2. No test for builtin highlighting yet.
d494181: Fixed a silly issue of non-recursive mkdir().
d68a72e: Fixes a stupid exit value bug. Possibly fixes PY-132.
f3103c6: Adds proper inspection of decorator argument lists. Changes parsing slightly, updates call analysis. Fixes PY-97.
7366967: Enhanced callee resolution for decorators. Tests for decorators.
f0aefda: Freshened javadoc.
a3b40ba: Change the way decorators are parsed, add stub infrastructure.
25245d6: More proper decorator support: parsing, stubs, some uses; work in progress. No tests for new functionality yet; other tests pass. (Resubmitting 218601 that seemed to miss things.)
ec42d22: More proper decorator support: parsing, stubs, some uses; work in progress. No tests for new functionality yet; other tests pass.
2b460fa: Fixed SOE in old/new superclass finding logic.
598ca00: garbage reduced
a6efb4e: object.__init__ and old-style classes' __init__ to have empty paramlist (PY-75 and more).
57a8c9f: Multi-resolve at a constructor call resolves to constructor, not class, or both class and constructor if the constructor is inherited.
9badd97: Fixed a stupid regression in getType() introduced in 216282.
80d7945: Adds multi-resolve to imported names; fixes PY-120.
c7ca892: Allow **arg to map to nothing (PY-103).
17e8c11: Test data missing from 215620.
a3ccfec: Refactored and extended tests for inspections and quickfixes.
badca2a: Missing from change 215588.
5e8d558: Add "Add 'self' parameter" quickfix. Use properties instead of bare strings. Move/rename PyInspectionToolProvider.
b3e1ff7: move more plugin tags to python plugin.xml
a954e34: build number range
c2e3d7e: consistent inspection naming
1b1cae0: CCE fix
26e9261: don't show unresolved members of type None
212b6b0: checkCanceled in python resolve
840c7da: Cache resolution misses using multi-resolve as caching resolver's source.
088391f: Turns method params annotator to inspection.
b6b2eac: all inspections have warning severity by default
3849b25: temp fix for 'super' skeleton
eb0d599: correct type for None reference (PY-113)
d6b6cf0: special-case set constructor similar to list and dict
e0105d7: class and function declarations also need to be separated by line breaks
0f1c872: naming cleanup
ff91579: Changed ImportAction into a HintAction; still got both 'lamp hint' and 'import hint'.
26fb075: Removed tests of highlighting that is now in inspections (and covered by its own tests).
18a9d55: Moves argument list, unresolved names, and redefinition checks to inspections (PY-109).
7c07cc9: Slightly changes skeletons for __builtins__ and sys; fixes PY-104 (for py2.x) and PY-106.
a70e1e5: Fixes PY-105: "with" statement becomes a name definer and resolutions to its 'as' name works.
c090f15: Added a fallback: use project root as source root if no source root entries found (related to PY-98).
a7f8e6f: Fixes PY-98 in a yet different, more Java-like, way (thanks yole).
3e66b22: link to Confluence page from plugin.xml
8ac7c6c: different icon for Python file type
0f3077d: Fixes PY-98 in a better way.
5b672ef: Use module's root paths in resolution (PY-98).
21aafca: fix compilation: add missing dependency
1c65af9: Python module type added
9ac2325: add jython.jar to Python plugin build
dc9b3a8: ruby -> python
c86447d: build scripts for Python plugin
d86b1d9: plugin.xml tweaks
0f7d389: Slight improvement of completion inside loop conditions.
b43031e: test commit under P4 ...
deeefc3: Was missing from 211718 for mysterious reasons.
1f799ca: Makes structure view sensible: classes, methods, class fields; visibility modifiers. Sort of closes PY-74.
ee3e2c76: Added known special __identifiers__.
7bbe02d: use JDK 1.6 API for setting frame icons; provide frame icons for RubyMine [r=romeo]
77d356d: Added resolver caching (PY-90), along with a bit of cleanup.
61061c1: Fixed an NPE in getName().
910bac9: Turn qualifiers of assignment targets into references.
6a7cc50: compilation
d6910ce: Files missing from 210897. (Show type of unresolved qualified reference.)
ca1d512: Fixes keyword completion somehow (PY-72, PY-76, PY-80). Still leaves much to be desired.
ceda337: general settings: cycle buffer -> idea.properties
f5cfdd7: Adds in-place defined attributes to completion variants of qualified references. (See PY-82.)
ac37eb8: Added resolution of in-place assigned attributes (PY-82 and more).
2ffee56: Fixes PY-92 (star imports not included in completion).
f562f1e: Fixes PY-89: makes project root one of resolution roots. (Only good for EAP, but enough for it.)
3e1402d: Hopefully fixes NPE in word selector (PY-88); no test case was provided in the ticket, but the code was NPE-prone.
7a1febc: include fsnotifier in pycharm build
8d2056d: Fixes triple-quoted string parsing (PY-84, PY-87).
755d17b: Fixes PY-77 (Completion after "self." inside methods must not include the name of the class)
f0849cf: Fixes PY-73 (report exit code after running a python script).
5c03276: Was missing from 208001 (PY-87).
0cdacf5: Doesn't compile
2d188ef: Fixes PY-78 (empty ^Q doc), adds a slight refactoring.
8f908d5: Fixes PY-83 (**args not being correctly mapped).
ff986ad: Fixed a silly rare NPE condition.
e30de17: Restores lost fixes to PY-32 and PY-64 (^P handler), adds a fallback.
0613b69: Fixes PY-85 (names in tuples in "for" stmt).
a72594a: indexing uncommitted documents reviewed.
ad957aa: performance
89392e4: Removed "Cancel" button in SDK refresh progress. (Re PY-54.)
33862ad: Closes PY-54 for most important cases, and PY-8.
1fc64fa: Fixes PY-25, in an amazingly trivial way.
f7d2246: Fixed small glitch in module names autocompletion display.
c1f74cd: Fixes PY-68.
bedb7c3: Fixes PY-52.
32ea9aa: completion progress reworked
fcb4977: Adds changes missing from 205368 that made the build fail.
3742a8b: compilation fix - changes reverted
6d7fa5b: Partial fix for PY-51. Adds better completion.
000d6a3: Fixes PY-69
b07a4c8: Closes PY-32, PY-64 (a nice number coincidence).
a47bf47: clean temp files
d1d37e3: backspace unindent handler from Python integrated into Ruby
dee1868: Fixes PY-66 and similar unfiled issues.
0a520c6: Fixes PY-63.
a63f6eb: Closes PY-56. Prevents generation of inheritors list for certain very base classes. Fixes an NPE (happens in unit testing only, might happen when __builtins__ can't be found).
ab3eb00: Fixes more NPE cases.
f154e29: Fixes PY-48. finishes PY-53, and removes a bunch of obscure and possible NPEs.
3f56bc0: refactor FileTypeFactory API; associate Rakefile with Ruby file type by default (RUBY-1756)
d928d3a: Fixes PY-57, PY-59, and partially PY-53. Sorry, no tests yet due to rather complex test setup required.
b7282f1: Closes PY-58.
1774933: Fixes PY-49, PY-33, partially fixes PY-32. Provides test cases for PY-33 and (rudimentary) for PY-49 and PY-32.
0bdb534: change default
bff62d1: solution for too long classpath
c68fa2c: Fixes PY-48, PY-44, PY-28 (for this no test yet).
6c2c2f1: Added circular import resolution test that now passes.
33e8f6691: Removes some dead code, adds some @NonNls to make the code look slightly prettier. Discovers that PY-27 has been actually fixed in change 201808 :)
ffae877: Fixes PY-43.
39d08b9: Tests for fixes of PY-46 and PY-47 in submitted in 201808.
f86dcc6: Adds "self" to restored methods even if not in __doc__; closes PY-38. Sorry, no meaningful test yet %)
17cbaf3: Fixes PY-46, PY-47, and attempts to fix PY-27 (to no avail).
69a4de0: PY-45
16431a0: add new required libraries
364178e: enable test
c411ef1: Closes PY-30, PY-37, PY-41, and a subtle import resolution bug introduced by the previous commit which current test suite fails to detect. Introduces a base class for list-comprehension-like classes, puts some life into generators.
8104001: <orderEntriesProperties> removed
dfd8e8b: Renewed resolver, closes PY-19 and supposedly PY-13. Introduces a new internal interface for easier resolution. Corrects fromQualifiedPackageImportFile and LookAhead tests. There is a known problem with Jython resolution using "import *", to be fixed later.
37de291: Use 'append' bootclasspath instead of 'prepend', that's faster.
9fdf823: Make index stamp file attributes shorter, thus optimize "scanning files to index" phase of the startup.
14f0da9: stub serialization refactored
c0ae3b1: Revert: Run Configurations: correctly copying 'before run' actions + some ant/maven generalization
1cddeeb: Run Configurations: correctly copying 'before run' actions + some ant/maven generalization
78a1b50: suppress test for now
b0f195b: Annotators for python sources, and tests for them.
f2533d9: PyTargetExpressionImpl is presentable (PY-36)
e1db15d: Closes PY-31 "Unclosed string literals are not highlighted as errors".
234f403: IDEADEV-23162 Ctrl-Shift-F7 includes trailing whitespace
acc58bc: GotoTestAction
f0c2b75: fix AIOOBE in formatter
60efec0: PY-29 fixed
72cf000: Generate stubs for binary python modules and resolve to them. Sort of closes PY-5 and PY-11, but not completely, because of broken resolution in non-project files, see PY-25. Also a progress bar is desperately needed.
2b9730d: In generator3: - Added "update only mode", - Fixed failures on missing __doc__, - Fixed failures on certain optional arg patterns.
247ec85: New generator3.py: - handles fancy doc comments and non-None default values, - handles command line for generating skeleton files, - produces skeletons that cpython and jython seem to parse happily, - closes PY-23 and makes PY-5 easier to close.
c5536dc: Many improvements to generator3.py. Closes PY-24.
9dcf6f8: A source-restoring generator, instead of current. (Still very crude.)
603582f: Forgotten file for change 196451. Closes PY-9.
8a48e7d: Slightly modified type system now allows easier access to built-in types. PyClassType and friends allow to peek at known instance fields that cannot be directly resolved to. Annotator does not highlight such names as unresolved.
0e1b796: Don't propose to import unknown fields, etc.
f498b8c: implicit StubSerializer registration
c105d46: Optimizing IndexingStamp.
08fcde9: lazy loading of python intentions
2487d96: Added a commented-out test case that fails currently due to inadequate stubs. In a true-Psi environment (e.g. IDE) circular resolution works and creates no SOEs.
ff8e829: Changed PyCallExpression to handle complex callees, with implementation changes and tests.
2220bfe: Published getReadableRepr(); fixed minor NPE possibility in ResolveProcessor.approve().
18c80a0: Removed unneeded anti-recursive filtering.
d538ddb: Preliminarily closes PY-12 and PY-8. Import resolution is still quite suboptimal, though. PyFile and PyFileImpl: added getUrl() method. PyResolveUtil and PyFile: more of (lame) means to squash SOEs on circular imports. ResolveImportUtil: actual PY-12 resolve order fix. PyMultiFileResolveTest and friends: a test case for name resolution inside subpackage.
6ad081b: Fixes __init__/py and other smaller issues; almost closes PY-8, but not PY-12 yet.
43dcb0c: Moved code in hopes of better reuse; still not reused.
e4a9aac: Tag synthetic __init__.py to prevent SOE and for PY-8
e765989: Another case of import action inside an import statement quashed.
5072d67: An attempt to prevent infinite loops working with qualified imports in __init__.py, not entirely successful.
4acc786: Transitive resolution (when an imported name is re-exported).
efbc5c7: Resolve in files that have no directly associated module (e.g. in stdlib)
77f021b: Incomplete assignment stmt does not produce a SOE.
12e46c0: Fixed name case glitch (importAs.py to ImportAs.py).
fe223be: Removed useless debug logging.
fb628bf: Understand __init__.py resolution.
a6c035f: Fixed test file case issues.
19f6a82: Resolve to imported file in "import x as y"
8039afd: remove Language.associateFileType()
3be31bf: StringRef (lazy loading string data for stubs)
d6ba5b2: restore ability to create Python SDK on Windows
6c976f2: First attempts in multi-resolve
f8af655: First attempts in multi-resolve
a12a397: Better imported modules resolution
3457e35: Better imported modules resolution
0f295cf: Better imported modules resolution
3559205: Better imported modules resolution
613bc0c: Do not propose to add import in import statement context
e47e834: test bombed
f853af9: revert incorrect change to Python resolve logic; fix Python tests
d18a4f9: Changed walk-up logic to be more correct (up the tree, not backwards through the tree).
f44cb2f: Changed resolution logic to be more correct. Still, references to qualified imports do not work yet.
c7b123b: Slightly refactored to make it comprehensible during debugging.
16afd25: LineMarkerProvider -> lang-api
f966401: Better handling of SDK import paths and nicer subprocess utils; Jython support still seems broken, maybe on mswin too.
d276d2d: A crude but working attempt at resolving to C libs. Needs skeleton support.
998f9d8: Another attempt to fix PY-4; bad PyClassTypes not created anymore.
9d08e9e: Another attempt to fix PY-4; bad PyClassTypes not created anymore.
71d44a1: Assertion to fail on creating types with empty myClass.
02353b5: Detect and validate SDK under linux.
6400eaa: comment out test which won't pass now
348bc88: Accommodated python context-dependent parsing to the caching PsiBuilderImpl.
ff72ac6: killed DefaultBraceMatcher
0720d54: importing 'with_statement' not as first in list (demonstrative case for PY-1).
42249bb: Implemented parsing-dependent PSI building for 'import as', 'with ... as' and 'from __future__ import' constructs; added tests.
2b4ae16: 1. stub index fault tolerance and versioning 2. correctly update stub indices on unsaved data 3. file content storage performance optimization - do not attempt to load file if it is not in the storage
31bbbc2: stubs for RConstant; show constants in Ruby goto symbol; occurence -> occurrence
5ee849c: Faster stub serialization plus library classes traversal.
7e9d8e4: enable error reporter
9cc6e6e: remove incorrect null check
7b90846: work around for missed (?) py files
b3cfefd: Ruby IDE SDK configurator; extract code shared between Ruby and Python to lang-impl
e74cd79: move IDE-specific configurable to ideSrc
bbcd716: fix incorrect resolve to self in tuple expressions; fix possible SOE
6c6c25c: RubySdkType; more flexible SDK editing API for plugins
b5d63e9: overriding methods in Python Ctrl-Shift-I and Ctrl-Alt-B
dc2e54a: python overriding method navigation
b5e4019: python subclass navigation
1f483e4: OverriddenMarkersPass -> platform
828e997: correctly process dotted names in superclass index
495fb1b: deep search for inheritors in python
dc5a91c: improve presentation for Python items in "goto symbol" and "show implementations"
c91d18e: show containing file in renderer
39b1616: renderer for Ctrl-Alt-B in Python
f451cf6: definitions search for Python
693e7fa: fix extension point declarations
1881f54: Fix indices with multiple key hits in same file.
26a91dd: python inheritors search (doesn't work yet: Max will investigate)
8de4f19: Don't push nulls into index.
ee2dbbf: Speed up python parsing a bit
280c22f: It seems it makes more sense to keep original linebreaks when displaying python documentation.
969ef27: Drop old pyclassindex
05d7d1b: Draft of stub indexing. Python "goto class" feature converted, "goto symbol" feature implemented.
ffe9be9: some more builder removal
ac29427: yellow highlighting for unresolved references when qualifier type is known
e66ce89: delete obsolete annotation
d211e7c: correctly resolve assignment to multiple instance fields as a tuple
86d9f1b: testdata fixed
6df4a2ab: more diagnostics
6d7c929: python stub-based resolve cont'd
97bfe5f: compilation fix
c309266: another SOE on resolve: a = a; introduce PyNoneType
01f3752: stack overflow fix: adjust parser so that list comp variable is PyTargetExpression rather than PyReferenceExpression
4c7388b: API for correct stubs to PSI binding when only part of elements of stubbable type are actually saved as stubs
ebee321: stub-based resolve in python, work in progress
b5b7b43: stub implementation simplification refactoring, cont'd
6523e61: PyStubsTest is less flickery
7886397: stub implementation simplification refactoring
496975b: refactoring: merge StubSerializer into IStubElementType
87f685c: draft of stubs for PyTargetExpression
31aa99c: Index building in progress.
36edbde: do not skip __init__ method when resolving
d91c442: a bit more PsiBuilder removal
6941e71: cleanup
a5459f1: resolve to conditionally assigned fields
c937cfd: correct resolve to superclass members
53bc348: filter duplicates by name in completion lists
f0fc34c: resolve to superclass member
e49ce1e: propagate parameter type from Java super method to Python method
7d8c03e: find Java super methods for Python methods
be77334: assertion fixed
6ecab68: allow selecting interpreter for run configuration
c944356: refactor super methods search for Python into ExtensibleQueryFactory
7f8e5e2: move lexer to a separate package
38a9a8d: initial implementation of python overridden method markers (to be refactored)
6c30167: a bit more of PsiBuilder removal in the parser
4c75035: py->java resolve: get type from Java method return value
02a4a29: Stubs building, Stub/AST binding works.
b6312bc: resolve to Java fields; more tests
dafa938: test and a bit of refactoring for py->java resolve
666a29e: initial implementation of python to java resolve
c9d94c8: python-plugin module extracted
3c4dd89: import things declared in __init__.py
82678cf: one more case of import resolving
299e0b1: rename test
b86c456: python stubs regeneration on startup if needed
2497c07: drop old fallback for searching imports by filename index; search for SDK imports in sourcepath rather than classpath
35f3a75: fix "read access violation" plus some serialization works.
1eba002: split __future__ processing and indent tracking; use future-aware lexer for syntax highlighting
5ce1169: auto-choose Python SDK when opening a directory project
4a4ec2f: rename python-resources module to python-ide
fb9c010: fix for AST operations
438ab2b: resolve for lambda parameters
8c3cee0: a bit more parser cleanup
cb4d37c: use correct interpreter path for stubs generation
fb19dea: python-py module extracted
d73aaff: fix resolve for 'global' statements
e281dc1: Building new repository. Not using it yet though.
de947d4: initial resolve for import ... as
00cb882: parser cleanup: less passing around of PsiBuilder
b702187: NPE
68e62cc: do not process imports in imported files
be0f768: test fixed
cd04513: remove cobertura support
958fa81: use shared interpreter service; suppress cachedir creation
6faabc7: xdebugger toolwindow + some execution refactorings
8c82f23: getContainingClass() uses stubs
80eaee0: First version of stub builder
a113291: Repository stuff other way round.
f3f4ceb: completion for class fields
c5a49b3: register single import fix
7623695: test: type for 'self' parameter
50af391: type for 'self' parameter
db473ae: 'yield' parsing fixed
ae32aa3: find imported files under jdk content roots
26022c1: prototype of completion for qualified references
6a676dd: Convert to interface
fd0d6f3: type for slice expressions
80b866c: first steps towards a type system
0ae3dc0: python demorgan intention tests; extract commenter from language for tests
42fca3c: NPE
42800ae: better fix for annotator race condition
4d15651: python demorgan intention tests; extract commenter from language for tests
036adbb: resolving imports cont'd
4a18e35: Some work on multi-language repository based PSI.
2a83c86: tests for multiple file resolve
ad8cef1: better resolve for imported elements in python
d08098d: python completion test added
64de88a: processDeclarations() for 'from <module> import *'
c6c0030: highlighting unresolved references
b859910: python: local variables completion
0010688: correctly handle lastParent in 'try' statement processDeclarations()
6b8fec0: fix resolve on file level
7b35f09: getOperator() refactoring
26d7704: process builtin declarations
a749103: correct resolve for loop variable of 'for' statement
11659fb: restore PyTargetExpression in LHS of assignment statements
0fc5fdc: cleanup
c8e4cf4: revert
17b9f09: stubs generation
4431381: some resolve in from ... import
cc1b1fd: reformat
8e76017: demorgan py intention
99faef1: demorgan py intention
bdeb23f: attach sources to jython
cf38d29: include jython.jar in python plugin layout
6a17551: update pycharm release build script
5e30214: python inspections; framework for writing inspections in jython
e9df2db: remove redundant PyElementEx interface
7873371: context run action for python
228aa70: support for selecting jython as "python installation" and running scripts under jython
9cec15e: PyPresentableElementImpl: common implementation of getPresentation()
7be8e42: getPresentation() for PyClass
5572a44: cleanup
7018b19: Ctrl-Q works for Python classes
7c4e632: python sdk configurable
62db126: ctrl-w for python
a157949: extract python-tests module, remove extra dependencies from python module
90d6824: python searchable options
893dcb2: use platform resources for pycharm build
b935217: reformat
d93cfdb: do not recreate PyAnnotator instances for every element
d65b8d9: delete unused exception class
6e4a988: more correct docstring annotator: ignore comments and whitespace when checking whether docstring is the first child
008e071: oromatcher.jar is required for completion
842188e: pycharm splash
60a1b9f: python formatter fixed
c84022e: tiny execution refactoring
07add4d: fix layout
801ff14: clean build output dirs
231b169: two phase build
69807b3: put javac2.jar in dist dir
246267e: release build for pycharm
15351fc: ru.yole.pythonid -> com.jetbrains.python
7387b85: move python tests to correct package
b2da01f: load ApplicationInfo by platform prefix
48425ef: python-resources module; hack for loading XInclude files by relative path in classpath
4c83426: Python color settings page
f4b5d85: python ~resolve
8d8b85d: project jdk -> lang (impl)
6f7ba2c: initial implementation of Ctrl-P for Python
f9df953: Ctrl-N for Python
506aa3b: cleanup: less passing around of PythonLanguage; @NotNull
d2b1054: parsing and PSI for Python 2.5 'with' keyword
942eff8: support for Python 2.5 yield expressions
326e677: highlighting tests; fix try/finally parsing; drop old try/finally PSI; fix "continue in finally" highlighting
e181e0e: parsing for Python 2.5 try/except/finally
afb2d37: parsing Python 2.5 conditional expressions
f91a331: stub for parsing tests
1977d68: Pythonid tests
808c0bf: Python SDK type
3247fba: quote handler and smart backspace handler for python
282f9c2: Ctrl-Q implementation for python; move Ctrl-Q components and actions registration to correct places
db9c90f: Python runner
fef363f: Python run config
2df554d: layout for Python plugin in platform
c76193b: todo highlighting works in python
54989d4: Python structure view implementation
2dc5af6: Python plugin
Change-Id: I4bf87c7f529f060d0ad5e2fd4d259449cd43b2f2
diff --git a/python/helpers/StdlibTypes.properties b/python/helpers/StdlibTypes.properties
new file mode 100644
index 0000000..657e8f0
--- /dev/null
+++ b/python/helpers/StdlibTypes.properties
@@ -0,0 +1,921 @@
+# Python stdlib
+
+
+## 9.4. decimal
+
+decimal.Decimal.as_tuple = \
+ :rtype: decimal.DecimalTuple \n\
+
+decimal.Decimal.__new__ = \
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__add__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__sub__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__mul__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__floordiv__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__mod__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__pow__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__div__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__truediv__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__radd__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__rsub__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__rmul__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__rfloordiv__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__rmod__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__rpow__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__rdiv__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__rtruediv__ = \
+ :type other: decimal.Decimal or int or long or float or complex \n\
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__pos__ = \
+ :rtype: decimal.Decimal \n\
+
+decimal.Decimal.__neg__ = \
+ :rtype: decimal.Decimal \n\
+
+
+## 10.1. os.path
+
+os.path.abspath = \
+ :type path: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.basename = \
+ :type p: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.commonprefix = \
+ :type m: collections.Iterable of T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.dirname = \
+ :type p: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.exists = \
+ :type path: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.lexists = \
+ :type path: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.expanduser = \
+ :type path: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.expandvars = \
+ :type path: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.getatime = \
+ :type filename: bytes or unicode \n\
+ :rtype: int or float \n\
+
+os.path.getmtime = \
+ :type filename: bytes or unicode \n\
+ :rtype: int or float \n\
+
+os.path.getctime = \
+ :type filename: bytes or unicode \n\
+ :rtype: int or float \n\
+
+os.path.getsize = \
+ :type filename: bytes or unicode \n\
+ :rtype: int or long \n\
+
+os.path.isabs = \
+ :type s: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.isfile = \
+ :type path: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.isdir = \
+ :type s: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.islink = \
+ :type path: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.ismount = \
+ :type path: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.join = \
+ :type a: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.normcase = \
+ :type s: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.normpath = \
+ :type path: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.realpath = \
+ :type filename: T <= bytes or unicode \n\
+    :rtype: T \n\
+
+os.path.relpath = \
+ :type path: T <= bytes or unicode \n\
+ :type start: bytes or unicode \n\
+ :rtype: T \n\
+
+os.path.samefile = \
+ :type f1: bytes or unicode \n\
+ :type f2: bytes or unicode \n\
+ :rtype: bool \n\
+
+os.path.sameopenfile = \
+ :type fp1: int \n\
+ :type fp2: int \n\
+ :rtype: bool \n\
+
+os.path.samestat = \
+ :type s1: os.stat_result or tuple \n\
+ :type s2: os.stat_result or tuple \n\
+ :rtype: bool \n\
+
+os.path.split = \
+ :type p: T <= bytes or unicode \n\
+ :rtype: (T, T) \n\
+
+os.path.splitdrive = \
+ :type p: T <= bytes or unicode \n\
+ :rtype: (T, T) \n\
+
+os.path.splitext = \
+ :type p: T <= bytes or unicode \n\
+ :rtype: (T, T) \n\
+
+os.path.splitunc = \
+ :type p: T <= bytes or unicode \n\
+ :rtype: (T, T) \n\
+
+os.path.walk = \
+ :type top: bytes or unicode \n\
+ :rtype: None \n\
+
+
+## 10.10. shutil
+
+shutil.copyfile = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+    :rtype: None \n\
+
+shutil.copymode = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+    :rtype: None \n\
+
+shutil.copystat = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+    :rtype: None \n\
+
+shutil.copy = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+    :rtype: None \n\
+
+shutil.copy2 = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+    :rtype: None \n\
+
+shutil.copytree = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+ :type symlinks: bool \n\
+ :type ignore: collections.Callable or None \n\
+    :rtype: None \n\
+
+shutil.rmtree = \
+ :type path: bytes or unicode \n\
+    :type ignore_errors: bool \n\
+ :type onerror: collections.Callable or None \n\
+    :rtype: None \n\
+
+shutil.move = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+    :rtype: None \n\
+
+shutil.make_archive = \
+ :type base_name: bytes or unicode \n\
+ :type format: bytes or unicode \n\
+ :type root_dir: bytes or unicode or None \n\
+ :type base_dir: bytes or unicode or None \n\
+ :type verbose: bool or int \n\
+ :type dry_run: bool or int \n\
+ :type owner: bytes or unicode or int or None \n\
+ :type group: bytes or unicode or int or None \n\
+ :rtype: bytes or unicode \n\
+
+shutil.get_archive_formats = \
+ :rtype: list of (string, string) \n\
+
+shutil.register_archive_format = \
+ :type name: bytes or unicode \n\
+ :type function: collections.Callable \n\
+ :type extra_args: None or collections.Sequence of (string, object) \n\
+ :type description: bytes or unicode \n\
+ :rtype: None
+
+shutil.unregister_archive_format = \
+ :type name: bytes or unicode \n\
+ :rtype: None \n\
+
+
+## 11.13. sqlite3
+
+_sqlite3.connect = \
+ :type database: bytes or unicode \n\
+ :rtype: _sqlite3.Connection
+
+_sqlite3.Connection.cursor = \
+ :rtype: _sqlite3.Cursor
+
+## 15.1. os
+
+os.ctermid = \
+ :rtype: unicode \n\
+
+os.getegid = \
+ :rtype: int \n\
+
+os.geteuid = \
+ :rtype: int \n\
+
+os.getgid = \
+ :rtype: int \n\
+
+os.getgroups = \
+ :rtype: list of int \n\
+
+os.initgroups = \
+ :type username: string \n\
+ :type gid: int \n\
+ :rtype: None \n\
+
+os.getlogin = \
+ :rtype: unicode \n\
+
+os.getpgid = \
+ :type pid: int \n\
+ :rtype: int \n\
+
+os.getpgrp = \
+ :rtype: int \n\
+
+os.getpid = \
+ :rtype: int \n\
+
+os.getresuid = \
+ :rtype: (int, int, int) \n\
+
+os.getuid = \
+ :rtype: int \n\
+
+os.getenv = \
+ :type key: string \n\
+ :type default: object \n\
+ :rtype: string \n\
+
+os.putenv = \
+ :type key: bytes or unicode \n\
+ :type value: bytes or unicode \n\
+ :rtype: None \n\
+
+os.setegid = \
+ :type gid: int \n\
+ :rtype: None \n\
+
+os.seteuid = \
+ :type uid: int \n\
+ :rtype: None \n\
+
+os.setgid = \
+ :type gid: int \n\
+ :rtype: None \n\
+
+os.setgroups = \
+ :type p_list: list of int \n\
+ :rtype: None \n\
+
+os.setpgrp = \
+ :rtype: None \n\
+
+os.setpgid = \
+ :type pid: int \n\
+ :type pgrp: int \n\
+ :rtype: None \n\
+
+os.setregid = \
+ :type rgid: int \n\
+ :type egid: int \n\
+ :rtype: None \n\
+
+os.setresgid = \
+ :type rgid: int \n\
+ :type egid: int \n\
+ :type sgid: int \n\
+ :rtype: None \n\
+
+os.setresuid = \
+ :type ruid: int \n\
+ :type euid: int \n\
+ :type suid: int \n\
+ :rtype: None \n\
+
+os.setreuid = \
+ :type ruid: int \n\
+ :type euid: int \n\
+ :rtype: None \n\
+
+os.getsid = \
+ :type pid: int \n\
+ :rtype: int \n\
+
+os.setsid = \
+ :rtype: None \n\
+
+os.setuid = \
+ :type uid: int \n\
+ :rtype: None \n\
+
+os.strerror = \
+ :type code: int \n\
+ :rtype: unicode \n\
+
+os.umask = \
+ :type new_mask: int \n\
+ :rtype: int \n\
+
+os.uname = \
+ :rtype: (unicode, unicode, unicode, unicode, unicode) \n\
+
+os.unsetenv = \
+ :type key: string \n\
+ :rtype: None \n\
+
+os.fdopen = \
+ :type fd: int \n\
+ :type mode: string \n\
+ :type bufsize: int \n\
+ :rtype: file \n\
+
+os.popen = \
+ :type command: string \n\
+ :type mode: string \n\
+ :type bufsize: int \n\
+ :rtype: io.FileIO \n\
+
+os.tmpfile = \
+ :rtype: io.FileIO \n\
+
+os.popen2 = \
+ :type cmd: string \n\
+ :type mode: string \n\
+ :type bufsize: int \n\
+ :rtype: (io.FileIO, io.FileIO) \n\
+
+os.popen3 = \
+ :type cmd: string \n\
+ :type mode: string \n\
+ :type bufsize: int \n\
+ :rtype: (io.FileIO, io.FileIO, io.FileIO) \n\
+
+os.popen4 = \
+ :type cmd: string \n\
+ :type mode: string \n\
+ :type bufsize: int \n\
+ :rtype: (io.FileIO, io.FileIO) \n\
+
+os.close = \
+ :type fd: int \n\
+ :rtype: None \n\
+
+os.closerange = \
+ :type fd_low: int \n\
+ :type fd_high: int \n\
+ :rtype: None \n\
+
+os.dup = \
+ :type fd: int \n\
+ :rtype: int \n\
+
+os.dup2 = \
+ :type old_fd: int \n\
+ :type new_fd: int \n\
+ :rtype: None \n\
+
+os.fchmod = \
+ :type fd: int \n\
+ :type mode: int \n\
+ :rtype: None \n\
+
+os.fchown = \
+ :type fd: int \n\
+ :type uid: int \n\
+ :type gid: int \n\
+ :rtype: None \n\
+
+os.fdatasync = \
+ :type fildes: int \n\
+ :rtype: None \n\
+
+os.fpathconf = \
+ :type fd: int \n\
+    :type name: int or string \n\
+    :rtype: int \n\
+
+os.fstat = \
+ :type fd: int \n\
+ :rtype: os.stat_result \n\
+
+os.fstatvfs = \
+ :type fd: int \n\
+ :rtype: os.statvfs_result \n\
+
+os.fsync = \
+ :type filedes: int \n\
+ :rtype: None \n\
+
+os.ftruncate = \
+ :type fd: int \n\
+ :type length: int or long \n\
+ :rtype: None \n\
+
+os.isatty = \
+ :type fd: int \n\
+ :rtype: bool \n\
+
+os.lseek = \
+ :type fd: int \n\
+ :type pos: int or long \n\
+ :type how: int \n\
+    :rtype: int or long \n\
+
+os.open = \
+    :type filename: string \n\
+    :type flag: int \n\
+    :type mode: int \n\
+ :rtype: int \n\
+
+os.openpty = \
+ :rtype: (int, int) \n\
+
+os.pipe = \
+ :rtype: (int, int) \n\
+
+os.read = \
+    :type fd: int \n\
+ :type buffersize: int or long \n\
+ :rtype: bytes \n\
+
+os.tcgetpgrp = \
+ :type fd: int \n\
+ :rtype: int \n\
+
+os.tcsetpgrp = \
+ :type fd: int \n\
+ :type pgid: int \n\
+ :rtype: None \n\
+
+os.ttyname = \
+ :type fd: int \n\
+ :rtype: unicode \n\
+
+os.write = \
+ :type fd: int \n\
+ :type string: bytes \n\
+ :rtype: int \n\
+
+os.access = \
+ :type path: bytes or unicode \n\
+ :type mode: int \n\
+ :rtype: bool \n\
+
+os.chdir = \
+ :type path: bytes or unicode \n\
+ :rtype: None \n\
+
+os.fchdir = \
+ :type filedes: int \n\
+ :rtype: None \n\
+
+os.getcwd = \
+ :rtype: str \n\
+
+os.getcwdu = \
+ :rtype: unicode \n\
+
+os.chroot = \
+ :type path: bytes or unicode \n\
+ :rtype: None \n\
+
+os.chmod = \
+ :type path: bytes or unicode \n\
+ :type mode: int \n\
+ :rtype: None \n\
+
+os.chown = \
+ :type path: bytes or unicode \n\
+ :type uid: int \n\
+ :type gid: int \n\
+ :rtype: None \n\
+
+os.lchown = \
+ :type path: bytes or unicode \n\
+ :type uid: int \n\
+ :type gid: int \n\
+ :rtype: None \n\
+
+os.link = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+ :rtype: None \n\
+
+os.listdir = \
+ :type path: T <= bytes or unicode \n\
+ :rtype: list of T \n\
+
+os.lstat = \
+ :type path: bytes or unicode \n\
+ :rtype: os.stat_result \n\
+
+os.mkfifo = \
+ :type filename: bytes or unicode \n\
+ :type mode: int \n\
+ :rtype: None \n\
+
+os.mknod = \
+ :type filename: bytes or unicode \n\
+ :type mode: int \n\
+ :type device: int \n\
+ :rtype: None \n\
+
+os.major = \
+ :type device: int \n\
+ :rtype: int \n\
+
+os.minor = \
+ :type device: int \n\
+ :rtype: int \n\
+
+os.makedev = \
+ :type major: int \n\
+ :type minor: int \n\
+ :rtype: int \n\
+
+os.mkdir = \
+ :type path: bytes or unicode \n\
+ :type mode: int \n\
+ :rtype: None \n\
+
+os.makedirs = \
+ :type name: bytes or unicode \n\
+ :type mode: int \n\
+ :rtype: None \n\
+
+os.pathconf = \
+ :type path: bytes or unicode \n\
+    :type name: int or string \n\
+    :rtype: int \n\
+
+os.readlink = \
+ :type path: T <= bytes or unicode \n\
+ :rtype: T \n\
+
+os.remove = \
+ :type path: bytes or unicode \n\
+ :rtype: None \n\
+
+os.removedirs = \
+ :type name: bytes or unicode \n\
+ :rtype: None \n\
+
+os.rename = \
+ :type old: bytes or unicode \n\
+ :type new: bytes or unicode \n\
+ :rtype: None \n\
+
+os.renames = \
+ :type old: bytes or unicode \n\
+ :type new: bytes or unicode \n\
+ :rtype: None \n\
+
+os.rmdir = \
+ :type path: bytes or unicode \n\
+ :rtype: None \n\
+
+os.stat = \
+ :type path: bytes or unicode \n\
+ :rtype: os.stat_result \n\
+
+os.stat_float_times = \
+ :type newval: bool or None \n\
+ :rtype: bool \n\
+
+os.statvfs = \
+ :type path: bytes or unicode \n\
+ :rtype: os.statvfs_result \n\
+
+os.symlink = \
+ :type src: bytes or unicode \n\
+ :type dst: bytes or unicode \n\
+ :rtype: None \n\
+
+os.tempnam = \
+ :type dir: bytes or unicode \n\
+ :type prefix: bytes or unicode \n\
+ :rtype: string \n\
+
+os.tmpnam = \
+ :rtype: string \n\
+
+os.unlink = \
+ :type path: bytes or unicode \n\
+ :rtype: None \n\
+
+os.utime = \
+ :type path: bytes or unicode \n\
+ :type atime: int or float \n\
+ :type mtime: int or float \n\
+ :rtype: None \n\
+
+os.walk = \
+ :type top: T <= bytes or unicode \n\
+ :type topdown: bool \n\
+ :type followlinks: bool \n\
+ :rtype: collections.Iterable of (T, list of T, list of T) \n\
+
+os.execl = \
+ :type file: bytes or unicode \n\
+ :rtype: None \n\
+
+os.execle = \
+ :type file: bytes or unicode \n\
+ :rtype: None \n\
+
+os.execlp = \
+ :type file: bytes or unicode \n\
+ :rtype: None \n\
+
+os.execlpe = \
+ :type file: bytes or unicode \n\
+ :rtype: None \n\
+
+os.execv = \
+ :type path: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :rtype: None \n\
+
+os.execve = \
+ :type path: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :type env: collections.Mapping of (string, string) \n\
+ :rtype: None \n\
+
+os.execvp = \
+ :type file: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :rtype: None \n\
+
+os.execvpe = \
+ :type file: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :type env: collections.Mapping of (string, string) \n\
+ :rtype: None \n\
+
+os._exit = \
+ :type status: int \n\
+ :rtype: None \n\
+
+os.fork = \
+ :rtype: int \n\
+
+os.forkpty = \
+ :rtype: (int, int) \n\
+
+os.kill = \
+ :type pid: int \n\
+ :type sig: int \n\
+ :rtype: None \n\
+
+os.killpg = \
+ :type pgid: int \n\
+ :type sig: int \n\
+ :rtype: None \n\
+
+os.nice = \
+ :type inc: int \n\
+ :rtype: int \n\
+
+os.spawnl = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :rtype: int \n\
+
+os.spawnle = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :rtype: int \n\
+
+os.spawnlp = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :rtype: int \n\
+
+os.spawnlpe = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :rtype: int \n\
+
+os.spawnv = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :rtype: int \n\
+
+os.spawnve = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :type env: collections.Mapping of (string, string) \n\
+ :rtype: int \n\
+
+os.spawnvp = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :rtype: int \n\
+
+os.spawnvpe = \
+ :type mode: int \n\
+ :type file: bytes or unicode \n\
+ :type args: collections.Iterable of string \n\
+ :type env: collections.Mapping of (string, string) \n\
+ :rtype: int \n\
+
+os.system = \
+ :type command: bytes or unicode \n\
+ :rtype: int \n\
+
+os.times = \
+ :rtype: (float, float, float, float, float) \n\
+
+os.wait = \
+ :rtype: (int, int) \n\
+
+os.waitpid = \
+ :type pid: int \n\
+ :type options: int \n\
+ :rtype: (int, int) \n\
+
+os.wait3 = \
+ :type options: int \n\
+ :rtype: (int, int, resource.struct_rusage) \n\
+
+os.wait4 = \
+ :type pid: int \n\
+ :type options: int \n\
+ :rtype: (int, int, resource.struct_rusage) \n\
+
+os.WCOREDUMP = \
+ :type status: int \n\
+ :rtype: bool \n\
+
+os.WIFCONTINUED = \
+ :type status: int \n\
+ :rtype: bool \n\
+
+os.WIFSTOPPED = \
+ :type status: int \n\
+ :rtype: bool \n\
+
+os.WIFSIGNALED = \
+ :type status: int \n\
+ :rtype: bool \n\
+
+os.WIFEXITED = \
+ :type status: int \n\
+ :rtype: bool \n\
+
+os.WEXITSTATUS = \
+    :type status: int \n\
+    :rtype: int \n\
+
+os.WSTOPSIG = \
+    :type status: int \n\
+    :rtype: int \n\
+
+os.WTERMSIG = \
+    :type status: int \n\
+    :rtype: int \n\
+
+os.urandom = \
+ :type n: int \n\
+ :rtype: bytes \n\
+
+
+## 17.1. subprocess
+
+subprocess.Popen.__init__ = \
+ :type args: string or collections.Sequence of string \n\
+ :type executable: string or None \n\
+ :type preexec_fn: collections.Callable or None \n\
+ :type close_fds: bool or int \n\
+ :type shell: bool or int \n\
+ :type cwd: string or None \n\
+ :type env: collections.Mapping of (string, string) \n\
+ :type universal_newlines: bool or int \n\
+
+subprocess.Popen.poll = \
+ :rtype: int \n\
+
+subprocess.Popen.wait = \
+ :rtype: int \n\
+
+subprocess.Popen.communicate = \
+    :type input: string or None \n\
+ :rtype: (bytes, bytes) \n\
+
+subprocess.Popen.send_signal = \
+ :type sig: int \n\
+
+
+## 18.2. json
+
+json.loads = \
+ :type s: string \n\
+ :type encoding: string \n\
+ :rtype: object or unknown \n\
+
+
+## 18.12. base64
+
+base64.b64encode = \
+ :type s: bytes \n\
+ :rtype: bytes \n\
+
+base64.b64decode = \
+ :type s: bytes \n\
+ :rtype: bytes \n\
+
+
+## 27.1. sys
+
+sys.exit = \
+ :type status: int or object \n\
+ :rtype: None \n\
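The `:type`/`:rtype:` entries above are Sphinx-style docstring tags that the IDE reads for type inference. As a hedged illustration (the function below is hypothetical, not part of the skeletons), the same tags in an ordinary function docstring look like this:

```python
def waitpid_status(pid, options):
    """Illustrative function carrying the same Sphinx-style tags as the skeletons.

    :type pid: int
    :type options: int
    :rtype: (int, int)
    """
    # Stand-in body; real code would call os.waitpid(pid, options).
    return (pid, options << 8)
```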
diff --git a/python/helpers/coverage/__init__.py b/python/helpers/coverage/__init__.py
new file mode 100644
index 0000000..d8dbc0f
--- /dev/null
+++ b/python/helpers/coverage/__init__.py
@@ -0,0 +1,88 @@
+"""Code coverage measurement for Python.
+
+Ned Batchelder
+http://nedbatchelder.com/code/coverage
+
+"""
+
+__version__ = "3.5" # see detailed history in CHANGES.txt
+
+__url__ = "http://nedbatchelder.com/code/coverage"
+if max(__version__).isalpha():
+ # For pre-releases, use a version-specific URL.
+ __url__ += "/" + __version__
+
+from coverage.control import coverage, process_startup
+from coverage.data import CoverageData
+from coverage.cmdline import main, CoverageScript
+from coverage.misc import CoverageException
+
+
+# Module-level functions. The original API to this module was based on
+# functions defined directly in the module, with a singleton of the coverage()
+# class. That design hampered programmability, so the current api uses
+# explicitly-created coverage objects. But for backward compatibility, here we
+# define the top-level functions to create the singleton when they are first
+# called.
+
+# Singleton object for use with module-level functions. The singleton is
+# created as needed when one of the module-level functions is called.
+_the_coverage = None
+
+def _singleton_method(name):
+ """Return a function to the `name` method on a singleton `coverage` object.
+
+ The singleton object is created the first time one of these functions is
+ called.
+
+ """
+ def wrapper(*args, **kwargs):
+ """Singleton wrapper around a coverage method."""
+ global _the_coverage
+ if not _the_coverage:
+ _the_coverage = coverage(auto_data=True)
+ return getattr(_the_coverage, name)(*args, **kwargs)
+ return wrapper
+
+
+# Define the module-level functions.
+use_cache = _singleton_method('use_cache')
+start = _singleton_method('start')
+stop = _singleton_method('stop')
+erase = _singleton_method('erase')
+exclude = _singleton_method('exclude')
+analysis = _singleton_method('analysis')
+analysis2 = _singleton_method('analysis2')
+report = _singleton_method('report')
+annotate = _singleton_method('annotate')
+
+
+# COPYRIGHT AND LICENSE
+#
+# Copyright 2001 Gareth Rees. All rights reserved.
+# Copyright 2004-2010 Ned Batchelder. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the
+# distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# HOLDERS AND CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
+# DAMAGE.
diff --git a/python/helpers/coverage/__main__.py b/python/helpers/coverage/__main__.py
new file mode 100644
index 0000000..af5fa9f
--- /dev/null
+++ b/python/helpers/coverage/__main__.py
@@ -0,0 +1,3 @@
+"""Coverage.py's main entrypoint."""
+from coverage.cmdline import main
+main()
diff --git a/python/helpers/coverage/annotate.py b/python/helpers/coverage/annotate.py
new file mode 100644
index 0000000..a556d85
--- /dev/null
+++ b/python/helpers/coverage/annotate.py
@@ -0,0 +1,101 @@
+"""Source file annotation for Coverage."""
+
+import os, re
+
+from coverage.report import Reporter
+
+class AnnotateReporter(Reporter):
+ """Generate annotated source files showing line coverage.
+
+ This reporter creates annotated copies of the measured source files. Each
+ .py file is copied as a .py,cover file, with a left-hand margin annotating
+ each line::
+
+ > def h(x):
+ - if 0: #pragma: no cover
+ - pass
+ > if x == 1:
+ ! a = 1
+ > else:
+ > a = 2
+
+ > h(2)
+
+ Executed lines use '>', lines not executed use '!', lines excluded from
+ consideration use '-'.
+
+ """
+
+ def __init__(self, coverage, ignore_errors=False):
+ super(AnnotateReporter, self).__init__(coverage, ignore_errors)
+ self.directory = None
+
+ blank_re = re.compile(r"\s*(#|$)")
+ else_re = re.compile(r"\s*else\s*:\s*(#|$)")
+
+ def report(self, morfs, config, directory=None):
+ """Run the report.
+
+ See `coverage.report()` for arguments.
+
+ """
+ self.report_files(self.annotate_file, morfs, config, directory)
+
+ def annotate_file(self, cu, analysis):
+ """Annotate a single file.
+
+ `cu` is the CodeUnit for the file to annotate.
+
+ """
+ if not cu.relative:
+ return
+
+ filename = cu.filename
+ source = cu.source_file()
+ if self.directory:
+ dest_file = os.path.join(self.directory, cu.flat_rootname())
+ dest_file += ".py,cover"
+ else:
+ dest_file = filename + ",cover"
+ dest = open(dest_file, 'w')
+
+ statements = analysis.statements
+ missing = analysis.missing
+ excluded = analysis.excluded
+
+ lineno = 0
+ i = 0
+ j = 0
+ covered = True
+ while True:
+ line = source.readline()
+ if line == '':
+ break
+ lineno += 1
+ while i < len(statements) and statements[i] < lineno:
+ i += 1
+ while j < len(missing) and missing[j] < lineno:
+ j += 1
+ if i < len(statements) and statements[i] == lineno:
+ covered = j >= len(missing) or missing[j] > lineno
+ if self.blank_re.match(line):
+ dest.write(' ')
+ elif self.else_re.match(line):
+ # Special logic for lines containing only 'else:'.
+ if i >= len(statements) and j >= len(missing):
+ dest.write('! ')
+ elif i >= len(statements) or j >= len(missing):
+ dest.write('> ')
+ elif statements[i] == missing[j]:
+ dest.write('! ')
+ else:
+ dest.write('> ')
+ elif lineno in excluded:
+ dest.write('- ')
+ elif covered:
+ dest.write('> ')
+ else:
+ dest.write('! ')
+ dest.write(line)
+ source.close()
+ dest.close()
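The margin characters `annotate_file` writes are: `>` executed, `!` missed, `-` excluded. A simplified sketch of the per-line decision, using plain sets instead of the sorted-list two-pointer scan above and ignoring the special `else:` handling, so it is an approximation rather than the real algorithm:

```python
def margin_for_line(lineno, statements, missing, excluded):
    """Return the two-character annotation margin for one source line.

    `statements` are measurable lines, `missing` are measurable lines that
    never ran, `excluded` are lines excluded from consideration.
    """
    if lineno in excluded:
        return '- '
    if lineno in statements:
        return '! ' if lineno in missing else '> '
    # Non-statement lines: the real reporter carries the last statement's
    # status forward; this sketch just leaves them blank.
    return '  '
```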
diff --git a/python/helpers/coverage/backward.py b/python/helpers/coverage/backward.py
new file mode 100644
index 0000000..f0a34ac
--- /dev/null
+++ b/python/helpers/coverage/backward.py
@@ -0,0 +1,110 @@
+"""Add things to old Pythons so I can pretend they are newer."""
+
+# This file does lots of tricky stuff, so disable a bunch of lintisms.
+# pylint: disable=F0401,W0611,W0622
+# F0401: Unable to import blah
+# W0611: Unused import blah
+# W0622: Redefining built-in blah
+
+import os, sys
+
+# Python 2.3 doesn't have `set`
+try:
+ set = set # new in 2.4
+except NameError:
+ from sets import Set as set
+
+# Python 2.3 doesn't have `sorted`.
+try:
+ sorted = sorted
+except NameError:
+ def sorted(iterable):
+ """A 2.3-compatible implementation of `sorted`."""
+ lst = list(iterable)
+ lst.sort()
+ return lst
+
+# Pythons 2 and 3 differ on where to get StringIO
+try:
+ from cStringIO import StringIO
+ BytesIO = StringIO
+except ImportError:
+ from io import StringIO, BytesIO
+
+# What's a string called?
+try:
+ string_class = basestring
+except NameError:
+ string_class = str
+
+# Where do pickles come from?
+try:
+ import cPickle as pickle
+except ImportError:
+ import pickle
+
+# range or xrange?
+try:
+ range = xrange
+except NameError:
+ range = range
+
+# Exec is a statement in Py2, a function in Py3
+if sys.version_info >= (3, 0):
+ def exec_code_object(code, global_map):
+ """A wrapper around exec()."""
+ exec(code, global_map)
+else:
+ # OK, this is pretty gross. In Py2, exec was a statement, but that will
+ # be a syntax error if we try to put it in a Py3 file, even if it is never
+ # executed. So hide it inside an evaluated string literal instead.
+ eval(
+ compile(
+ "def exec_code_object(code, global_map):\n"
+ " exec code in global_map\n",
+ "<exec_function>", "exec"
+ )
+ )
+
+# ConfigParser was renamed to the more-standard configparser
+try:
+ import configparser
+except ImportError:
+ import ConfigParser as configparser
+
+# Python 3.2 provides `tokenize.open`, the best way to open source files.
+try:
+ import tokenize
+ open_source = tokenize.open # pylint: disable=E1101
+except AttributeError:
+ def open_source(fname):
+ """Open a source file the best way."""
+ return open(fname, "rU")
+
+# Python 3.x is picky about bytes and strings, so provide methods to
+# get them right, and make them no-ops in 2.x
+if sys.version_info >= (3, 0):
+ def to_bytes(s):
+ """Convert string `s` to bytes."""
+ return s.encode('utf8')
+
+ def to_string(b):
+ """Convert bytes `b` to a string."""
+ return b.decode('utf8')
+
+else:
+ def to_bytes(s):
+ """Convert string `s` to bytes (no-op in 2.x)."""
+ return s
+
+ def to_string(b):
+ """Convert bytes `b` to a string (no-op in 2.x)."""
+ return b
+
+# Md5 is available in different places.
+try:
+ import hashlib
+ md5 = hashlib.md5
+except ImportError:
+ import md5
+ md5 = md5.new
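The try/except import dance used throughout this module is the standard idiom for papering over renamed stdlib modules. A self-contained sketch of the same idiom for one rename handled above (Python 2's `ConfigParser` vs. Python 3's `configparser`):

```python
# Prefer the Python 3 name, fall back to the Python 2 name; the rest of
# the code then uses a single spelling regardless of interpreter version.
try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2

parser = configparser.ConfigParser()
```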
diff --git a/python/helpers/coverage/bytecode.py b/python/helpers/coverage/bytecode.py
new file mode 100644
index 0000000..ab522d6
--- /dev/null
+++ b/python/helpers/coverage/bytecode.py
@@ -0,0 +1,81 @@
+"""Bytecode manipulation for coverage.py"""
+
+import opcode, sys, types
+
+class ByteCode(object):
+ """A single bytecode."""
+ def __init__(self):
+ self.offset = -1
+ self.op = -1
+ self.arg = -1
+ self.next_offset = -1
+ self.jump_to = -1
+
+
+class ByteCodes(object):
+ """Iterator over byte codes in `code`.
+
+ Returns `ByteCode` objects.
+
+ """
+ def __init__(self, code):
+ self.code = code
+ self.offset = 0
+
+ if sys.version_info >= (3, 0):
+ def __getitem__(self, i):
+ return self.code[i]
+ else:
+ def __getitem__(self, i):
+ return ord(self.code[i])
+
+ def __iter__(self):
+ return self
+
+ def __next__(self):
+ if self.offset >= len(self.code):
+ raise StopIteration
+
+ bc = ByteCode()
+ bc.op = self[self.offset]
+ bc.offset = self.offset
+
+ next_offset = self.offset+1
+ if bc.op >= opcode.HAVE_ARGUMENT:
+ bc.arg = self[self.offset+1] + 256*self[self.offset+2]
+ next_offset += 2
+
+ label = -1
+ if bc.op in opcode.hasjrel:
+ label = next_offset + bc.arg
+ elif bc.op in opcode.hasjabs:
+ label = bc.arg
+ bc.jump_to = label
+
+ bc.next_offset = self.offset = next_offset
+ return bc
+
+ next = __next__ # Py2k uses an old-style non-dunder name.
+
+
+class CodeObjects(object):
+ """Iterate over all the code objects in `code`."""
+ def __init__(self, code):
+ self.stack = [code]
+
+ def __iter__(self):
+ return self
+
+ def __next__(self):
+ if self.stack:
+ # We're going to return the code object on the stack, but first
+ # push its children for later returning.
+ code = self.stack.pop()
+ for c in code.co_consts:
+ if isinstance(c, types.CodeType):
+ self.stack.append(c)
+ return code
+
+ raise StopIteration
+
+ next = __next__
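On Python 3.4+, the stdlib `dis` module provides a ready-made equivalent of the `ByteCodes` iterator; the manual opcode/argument decoding above predates it. A sketch using `dis.get_instructions` to spot the same jump instructions whose targets `ByteCodes` records in `jump_to` (checking the opcode name rather than `hasjrel`/`hasjabs`, since those lists have shifted across CPython versions):

```python
import dis

def jump_instructions(code):
    """List the names of jump instructions in `code`, roughly the ones for
    which the ByteCodes iterator above fills in a `jump_to` target."""
    return [instr.opname for instr in dis.get_instructions(code)
            if 'JUMP' in instr.opname]

def sample(x):
    if x:              # compiles to a conditional jump
        return 1
    return 2
```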
diff --git a/python/helpers/coverage/cmdline.py b/python/helpers/coverage/cmdline.py
new file mode 100644
index 0000000..1ce5e0f
--- /dev/null
+++ b/python/helpers/coverage/cmdline.py
@@ -0,0 +1,677 @@
+"""Command-line support for Coverage."""
+
+import optparse, re, sys, traceback
+
+from coverage.backward import sorted # pylint: disable=W0622
+from coverage.execfile import run_python_file, run_python_module
+from coverage.misc import CoverageException, ExceptionDuringRun, NoSource
+
+
+class Opts(object):
+ """A namespace class for individual options we'll build parsers from."""
+
+ append = optparse.make_option(
+ '-a', '--append', action='store_false', dest="erase_first",
+ help="Append coverage data to .coverage, otherwise it is started "
+ "clean with each run."
+ )
+ branch = optparse.make_option(
+ '', '--branch', action='store_true',
+ help="Measure branch coverage in addition to statement coverage."
+ )
+ directory = optparse.make_option(
+ '-d', '--directory', action='store',
+ metavar="DIR",
+ help="Write the output files to DIR."
+ )
+ help = optparse.make_option(
+ '-h', '--help', action='store_true',
+ help="Get help on this command."
+ )
+ ignore_errors = optparse.make_option(
+ '-i', '--ignore-errors', action='store_true',
+ help="Ignore errors while reading source files."
+ )
+ include = optparse.make_option(
+ '', '--include', action='store',
+ metavar="PAT1,PAT2,...",
+ help="Include files only when their filename path matches one of "
+ "these patterns. Usually needs quoting on the command line."
+ )
+ pylib = optparse.make_option(
+ '-L', '--pylib', action='store_true',
+ help="Measure coverage even inside the Python installed library, "
+ "which isn't done by default."
+ )
+ show_missing = optparse.make_option(
+ '-m', '--show-missing', action='store_true',
+ help="Show line numbers of statements in each module that weren't "
+ "executed."
+ )
+ old_omit = optparse.make_option(
+ '-o', '--omit', action='store',
+ metavar="PAT1,PAT2,...",
+ help="Omit files when their filename matches one of these patterns. "
+ "Usually needs quoting on the command line."
+ )
+ omit = optparse.make_option(
+ '', '--omit', action='store',
+ metavar="PAT1,PAT2,...",
+ help="Omit files when their filename matches one of these patterns. "
+ "Usually needs quoting on the command line."
+ )
+ output_xml = optparse.make_option(
+ '-o', '', action='store', dest="outfile",
+ metavar="OUTFILE",
+ help="Write the XML report to this file. Defaults to 'coverage.xml'"
+ )
+ parallel_mode = optparse.make_option(
+ '-p', '--parallel-mode', action='store_true',
+ help="Append the machine name, process id and random number to the "
+ ".coverage data file name to simplify collecting data from "
+ "many processes."
+ )
+ module = optparse.make_option(
+ '-m', '--module', action='store_true',
+ help="<pyfile> is an importable Python module, not a script path, "
+ "to be run as 'python -m' would run it."
+ )
+ rcfile = optparse.make_option(
+ '', '--rcfile', action='store',
+ help="Specify configuration file. Defaults to '.coveragerc'"
+ )
+ source = optparse.make_option(
+ '', '--source', action='store', metavar="SRC1,SRC2,...",
+ help="A list of packages or directories of code to be measured."
+ )
+ timid = optparse.make_option(
+ '', '--timid', action='store_true',
+ help="Use a simpler but slower trace method. Try this if you get "
+ "seemingly impossible results!"
+ )
+ version = optparse.make_option(
+ '', '--version', action='store_true',
+ help="Display version information and exit."
+ )
+
+
+class CoverageOptionParser(optparse.OptionParser, object):
+ """Base OptionParser for coverage.
+
+ Problems don't exit the program.
+ Defaults are initialized for all options.
+
+ """
+
+ def __init__(self, *args, **kwargs):
+ super(CoverageOptionParser, self).__init__(
+ add_help_option=False, *args, **kwargs
+ )
+ self.set_defaults(
+ actions=[],
+ branch=None,
+ directory=None,
+ help=None,
+ ignore_errors=None,
+ include=None,
+ omit=None,
+ parallel_mode=None,
+ module=None,
+ pylib=None,
+ rcfile=True,
+ show_missing=None,
+ source=None,
+ timid=None,
+ erase_first=None,
+ version=None,
+ )
+
+ self.disable_interspersed_args()
+ self.help_fn = self.help_noop
+
+ def help_noop(self, error=None, topic=None, parser=None):
+ """No-op help function."""
+ pass
+
+ class OptionParserError(Exception):
+ """Used to stop the optparse error handler ending the process."""
+ pass
+
+ def parse_args(self, args=None, options=None):
+ """Call optparse.parse_args, but return a triple:
+
+ (ok, options, args)
+
+ """
+ try:
+ options, args = \
+ super(CoverageOptionParser, self).parse_args(args, options)
+ except self.OptionParserError:
+ return False, None, None
+ return True, options, args
+
+ def error(self, msg):
+ """Override optparse.error so sys.exit doesn't get called."""
+ self.help_fn(msg)
+ raise self.OptionParserError
+
+
+class ClassicOptionParser(CoverageOptionParser):
+ """Command-line parser for coverage.py classic arguments."""
+
+ def __init__(self):
+ super(ClassicOptionParser, self).__init__()
+
+ self.add_action('-a', '--annotate', 'annotate')
+ self.add_action('-b', '--html', 'html')
+ self.add_action('-c', '--combine', 'combine')
+ self.add_action('-e', '--erase', 'erase')
+ self.add_action('-r', '--report', 'report')
+ self.add_action('-x', '--execute', 'execute')
+
+ self.add_options([
+ Opts.directory,
+ Opts.help,
+ Opts.ignore_errors,
+ Opts.pylib,
+ Opts.show_missing,
+ Opts.old_omit,
+ Opts.parallel_mode,
+ Opts.timid,
+ Opts.version,
+ ])
+
+ def add_action(self, dash, dashdash, action_code):
+ """Add a specialized option that is the action to execute."""
+ option = self.add_option(dash, dashdash, action='callback',
+ callback=self._append_action
+ )
+ option.action_code = action_code
+
+ def _append_action(self, option, opt_unused, value_unused, parser):
+ """Callback for an option that adds to the `actions` list."""
+ parser.values.actions.append(option.action_code)
+
+
+class CmdOptionParser(CoverageOptionParser):
+ """Parse one of the new-style commands for coverage.py."""
+
+ def __init__(self, action, options=None, defaults=None, usage=None,
+ cmd=None, description=None
+ ):
+ """Create an OptionParser for a coverage command.
+
+ `action` is the slug to put into `options.actions`.
+ `options` is a list of Option's for the command.
+ `defaults` is a dict of default value for options.
+ `usage` is the usage string to display in help.
+ `cmd` is the command name, if different than `action`.
+ `description` is the description of the command, for the help text.
+
+ """
+ if usage:
+ usage = "%prog " + usage
+ super(CmdOptionParser, self).__init__(
+ prog="coverage %s" % (cmd or action),
+ usage=usage,
+ description=description,
+ )
+ self.set_defaults(actions=[action], **(defaults or {}))
+ if options:
+ self.add_options(options)
+ self.cmd = cmd or action
+
+ def __eq__(self, other):
+ # A convenience equality, so that I can put strings in unit test
+ # results, and they will compare equal to objects.
+ return (other == "<CmdOptionParser:%s>" % self.cmd)
+
+GLOBAL_ARGS = [
+ Opts.rcfile,
+ Opts.help,
+ ]
+
+CMDS = {
+ 'annotate': CmdOptionParser("annotate",
+ [
+ Opts.directory,
+ Opts.ignore_errors,
+ Opts.omit,
+ Opts.include,
+ ] + GLOBAL_ARGS,
+ usage = "[options] [modules]",
+ description = "Make annotated copies of the given files, marking "
+ "statements that are executed with > and statements that are "
+ "missed with !."
+ ),
+
+ 'combine': CmdOptionParser("combine", GLOBAL_ARGS,
+ usage = " ",
+ description = "Combine data from multiple coverage files collected "
+ "with 'run -p'. The combined results are written to a single "
+ "file representing the union of the data."
+ ),
+
+ 'debug': CmdOptionParser("debug", GLOBAL_ARGS,
+ usage = "<topic>",
+ description = "Display information on the internals of coverage.py, "
+ "for diagnosing problems. "
+ "Topics are 'data' to show a summary of the collected data, "
+ "or 'sys' to show installation information."
+ ),
+
+ 'erase': CmdOptionParser("erase", GLOBAL_ARGS,
+ usage = " ",
+ description = "Erase previously collected coverage data."
+ ),
+
+ 'help': CmdOptionParser("help", GLOBAL_ARGS,
+ usage = "[command]",
+ description = "Describe how to use coverage.py"
+ ),
+
+ 'html': CmdOptionParser("html",
+ [
+ Opts.directory,
+ Opts.ignore_errors,
+ Opts.omit,
+ Opts.include,
+ ] + GLOBAL_ARGS,
+ usage = "[options] [modules]",
+ description = "Create an HTML report of the coverage of the files. "
+ "Each file gets its own page, with the source decorated to show "
+ "executed, excluded, and missed lines."
+ ),
+
+ 'report': CmdOptionParser("report",
+ [
+ Opts.ignore_errors,
+ Opts.omit,
+ Opts.include,
+ Opts.show_missing,
+ ] + GLOBAL_ARGS,
+ usage = "[options] [modules]",
+ description = "Report coverage statistics on modules."
+ ),
+
+ 'run': CmdOptionParser("execute",
+ [
+ Opts.append,
+ Opts.branch,
+ Opts.pylib,
+ Opts.parallel_mode,
+ Opts.module,
+ Opts.timid,
+ Opts.source,
+ Opts.omit,
+ Opts.include,
+ ] + GLOBAL_ARGS,
+ defaults = {'erase_first': True},
+ cmd = "run",
+ usage = "[options] <pyfile> [program options]",
+ description = "Run a Python program, measuring code execution."
+ ),
+
+ 'xml': CmdOptionParser("xml",
+ [
+ Opts.ignore_errors,
+ Opts.omit,
+ Opts.include,
+ Opts.output_xml,
+ ] + GLOBAL_ARGS,
+ cmd = "xml",
+ defaults = {'outfile': 'coverage.xml'},
+ usage = "[options] [modules]",
+ description = "Generate an XML report of coverage results."
+ ),
+ }
+
+
+OK, ERR = 0, 1
+
+
+class CoverageScript(object):
+ """The command-line interface to Coverage."""
+
+ def __init__(self, _covpkg=None, _run_python_file=None,
+ _run_python_module=None, _help_fn=None):
+ # _covpkg is for dependency injection, so we can test this code.
+ if _covpkg:
+ self.covpkg = _covpkg
+ else:
+ import coverage
+ self.covpkg = coverage
+
+ # For dependency injection:
+ self.run_python_file = _run_python_file or run_python_file
+ self.run_python_module = _run_python_module or run_python_module
+ self.help_fn = _help_fn or self.help
+
+ self.coverage = None
+
+ def help(self, error=None, topic=None, parser=None):
+ """Display an error message, or the named topic."""
+ assert error or topic or parser
+ if error:
+ print(error)
+ print("Use 'coverage help' for help.")
+ elif parser:
+ print(parser.format_help().strip())
+ else:
+ # Parse out the topic we want from HELP_TOPICS
+            topic_list = re.split(r"(?m)^=+ (\w+) =+$", HELP_TOPICS)
+ topics = dict(zip(topic_list[1::2], topic_list[2::2]))
+ help_msg = topics.get(topic, '').strip()
+ if help_msg:
+ print(help_msg % self.covpkg.__dict__)
+ else:
+ print("Don't know topic %r" % topic)
+
+ def command_line(self, argv):
+ """The bulk of the command line interface to Coverage.
+
+ `argv` is the argument list to process.
+
+ Returns 0 if all is well, 1 if something went wrong.
+
+ """
+ # Collect the command-line options.
+
+ if not argv:
+ self.help_fn(topic='minimum_help')
+ return OK
+
+ # The command syntax we parse depends on the first argument. Classic
+ # syntax always starts with an option.
+ classic = argv[0].startswith('-')
+ if classic:
+ parser = ClassicOptionParser()
+ else:
+ parser = CMDS.get(argv[0])
+ if not parser:
+ self.help_fn("Unknown command: '%s'" % argv[0])
+ return ERR
+ argv = argv[1:]
+
+ parser.help_fn = self.help_fn
+ ok, options, args = parser.parse_args(argv)
+ if not ok:
+ return ERR
+
+ # Handle help.
+ if options.help:
+ if classic:
+ self.help_fn(topic='help')
+ else:
+ self.help_fn(parser=parser)
+ return OK
+
+ if "help" in options.actions:
+ if args:
+ for a in args:
+ parser = CMDS.get(a)
+ if parser:
+ self.help_fn(parser=parser)
+ else:
+ self.help_fn(topic=a)
+ else:
+ self.help_fn(topic='help')
+ return OK
+
+ # Handle version.
+ if options.version:
+ self.help_fn(topic='version')
+ return OK
+
+ # Check for conflicts and problems in the options.
+ for i in ['erase', 'execute']:
+ for j in ['annotate', 'html', 'report', 'combine']:
+ if (i in options.actions) and (j in options.actions):
+ self.help_fn("You can't specify the '%s' and '%s' "
+ "options at the same time." % (i, j))
+ return ERR
+
+ if not options.actions:
+ self.help_fn(
+ "You must specify at least one of -e, -x, -c, -r, -a, or -b."
+ )
+ return ERR
+ args_allowed = (
+ 'execute' in options.actions or
+ 'annotate' in options.actions or
+ 'html' in options.actions or
+ 'debug' in options.actions or
+ 'report' in options.actions or
+ 'xml' in options.actions
+ )
+ if not args_allowed and args:
+ self.help_fn("Unexpected arguments: %s" % " ".join(args))
+ return ERR
+
+ if 'execute' in options.actions and not args:
+ self.help_fn("Nothing to do.")
+ return ERR
+
+ # Listify the list options.
+ source = unshell_list(options.source)
+ omit = unshell_list(options.omit)
+ include = unshell_list(options.include)
+
+ # Do something.
+ self.coverage = self.covpkg.coverage(
+ data_suffix = options.parallel_mode,
+ cover_pylib = options.pylib,
+ timid = options.timid,
+ branch = options.branch,
+ config_file = options.rcfile,
+ source = source,
+ omit = omit,
+ include = include,
+ )
+
+ if 'debug' in options.actions:
+ if not args:
+ self.help_fn("What information would you like: data, sys?")
+ return ERR
+ for info in args:
+ if info == 'sys':
+ print("-- sys ----------------------------------------")
+ for label, info in self.coverage.sysinfo():
+ if info == []:
+ info = "-none-"
+ if isinstance(info, list):
+ print("%15s:" % label)
+ for e in info:
+ print("%15s %s" % ("", e))
+ else:
+ print("%15s: %s" % (label, info))
+ elif info == 'data':
+ print("-- data ---------------------------------------")
+ self.coverage.load()
+ print("path: %s" % self.coverage.data.filename)
+ print("has_arcs: %r" % self.coverage.data.has_arcs())
+ summary = self.coverage.data.summary(fullpath=True)
+ if summary:
+ filenames = sorted(summary.keys())
+ print("\n%d files:" % len(filenames))
+ for f in filenames:
+ print("%s: %d lines" % (f, summary[f]))
+ else:
+ print("No data collected")
+ else:
+ self.help_fn("Don't know what you mean by %r" % info)
+ return ERR
+ return OK
+
+ if 'erase' in options.actions or options.erase_first:
+ self.coverage.erase()
+ else:
+ self.coverage.load()
+
+ if 'execute' in options.actions:
+ # Run the script.
+ self.coverage.start()
+ code_ran = True
+ try:
+ try:
+ if options.module:
+ self.run_python_module(args[0], args)
+ else:
+ self.run_python_file(args[0], args)
+ except NoSource:
+ code_ran = False
+ raise
+ finally:
+ if code_ran:
+ self.coverage.stop()
+ self.coverage.save()
+
+ if 'combine' in options.actions:
+ self.coverage.combine()
+ self.coverage.save()
+
+ # Remaining actions are reporting, with some common options.
+ report_args = dict(
+ morfs = args,
+ ignore_errors = options.ignore_errors,
+ omit = omit,
+ include = include,
+ )
+
+ if 'report' in options.actions:
+ self.coverage.report(
+ show_missing=options.show_missing, **report_args)
+ if 'annotate' in options.actions:
+ self.coverage.annotate(
+ directory=options.directory, **report_args)
+ if 'html' in options.actions:
+ self.coverage.html_report(
+ directory=options.directory, **report_args)
+ if 'xml' in options.actions:
+ outfile = options.outfile
+ self.coverage.xml_report(outfile=outfile, **report_args)
+
+ return OK
+
+
+def unshell_list(s):
+ """Turn a command-line argument into a list."""
+ if not s:
+ return None
+ if sys.platform == 'win32':
+ # When running coverage as coverage.exe, some of the behavior
+ # of the shell is emulated: wildcards are expanded into a list of
+ # filenames. So you have to single-quote patterns on the command
+ # line, but (not) helpfully, the single quotes are included in the
+ # argument, so we have to strip them off here.
+ s = s.strip("'")
+ return s.split(',')
+
+
+HELP_TOPICS = r"""
+
+== classic ====================================================================
+Coverage.py version %(__version__)s
+Measure, collect, and report on code coverage in Python programs.
+
+Usage:
+
+coverage -x [-p] [-L] [--timid] MODULE.py [ARG1 ARG2 ...]
+ Execute the module, passing the given command-line arguments, collecting
+ coverage data. With the -p option, include the machine name and process
+ id in the .coverage file name. With -L, measure coverage even inside the
+ Python installed library, which isn't done by default. With --timid, use a
+ simpler but slower trace method.
+
+coverage -e
+ Erase collected coverage data.
+
+coverage -c
+ Combine data from multiple coverage files (as created by -p option above)
+ and store it into a single file representing the union of the coverage.
+
+coverage -r [-m] [-i] [-o DIR,...] [FILE1 FILE2 ...]
+ Report on the statement coverage for the given files. With the -m
+ option, show line numbers of the statements that weren't executed.
+
+coverage -b -d DIR [-i] [-o DIR,...] [FILE1 FILE2 ...]
+ Create an HTML report of the coverage of the given files. Each file gets
+ its own page, with the file listing decorated to show executed, excluded,
+ and missed lines.
+
+coverage -a [-d DIR] [-i] [-o DIR,...] [FILE1 FILE2 ...]
+ Make annotated copies of the given files, marking statements that
+ are executed with > and statements that are missed with !.
+
+-d DIR
+ Write output files for -b or -a to this directory.
+
+-i Ignore errors while reporting or annotating.
+
+-o DIR,...
+ Omit reporting or annotating files when their filename path starts with
+ a directory listed in the omit list.
+ e.g. coverage -i -r -o c:\python25,lib\enthought\traits
+
+Coverage data is saved in the file .coverage by default. Set the
+COVERAGE_FILE environment variable to save it somewhere else.
+
+== help =======================================================================
+Coverage.py, version %(__version__)s
+Measure, collect, and report on code coverage in Python programs.
+
+usage: coverage <command> [options] [args]
+
+Commands:
+ annotate Annotate source files with execution information.
+ combine Combine a number of data files.
+ erase Erase previously collected coverage data.
+ help Get help on using coverage.py.
+ html Create an HTML report.
+ report Report coverage stats on modules.
+ run Run a Python program and measure code execution.
+ xml Create an XML report of coverage results.
+
+Use "coverage help <command>" for detailed help on any command.
+Use "coverage help classic" for help on older command syntax.
+For more information, see %(__url__)s
+
+== minimum_help ===============================================================
+Code coverage for Python. Use 'coverage help' for help.
+
+== version ====================================================================
+Coverage.py, version %(__version__)s. %(__url__)s
+
+"""
+
+
+def main(argv=None):
+ """The main entrypoint to Coverage.
+
+ This is installed as the script entrypoint.
+
+ """
+ if argv is None:
+ argv = sys.argv[1:]
+ try:
+ status = CoverageScript().command_line(argv)
+ except ExceptionDuringRun:
+ # An exception was caught while running the product code. The
+ # sys.exc_info() return tuple is packed into an ExceptionDuringRun
+ # exception.
+ _, err, _ = sys.exc_info()
+ traceback.print_exception(*err.args)
+ status = ERR
+ except CoverageException:
+ # A controlled error inside coverage.py: print the message to the user.
+ _, err, _ = sys.exc_info()
+ print(err)
+ status = ERR
+ except SystemExit:
+ # The user called `sys.exit()`. Exit with their argument, if any.
+ _, err, _ = sys.exc_info()
+ if err.args:
+ status = err.args[0]
+ else:
+ status = None
+ return status
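For context, the `main()` above maps the different exception families to process exit statuses. A minimal, self-contained sketch of the same pattern (the `ControlledError` class and `run_cli` helper here are illustrative stand-ins, not part of coverage.py's API):

```python
import sys

ERR = 1  # assumed error status, mirroring coverage.py's convention


class ControlledError(Exception):
    """Stand-in for a CoverageException-style controlled error."""


def run_cli(argv, command):
    """Sketch of the main() pattern above: map exceptions to exit statuses."""
    try:
        return command(argv)
    except ControlledError as exc:
        # A controlled error: show the message, return a generic failure.
        print(exc)
        return ERR
    except SystemExit as exc:
        # Propagate the code the user passed to sys.exit(), if any.
        return exc.args[0] if exc.args else None


print(run_cli([], lambda argv: 0))            # 0
print(run_cli([], lambda argv: sys.exit(3)))  # 3
```

The key point is that `SystemExit` is caught rather than allowed to propagate, so the caller decides how the process exits.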
diff --git a/python/helpers/coverage/codeunit.py b/python/helpers/coverage/codeunit.py
new file mode 100644
index 0000000..55f44a2
--- /dev/null
+++ b/python/helpers/coverage/codeunit.py
@@ -0,0 +1,117 @@
+"""Code unit (module) handling for Coverage."""
+
+import glob, os
+
+from coverage.backward import open_source, string_class, StringIO
+from coverage.misc import CoverageException
+
+
+def code_unit_factory(morfs, file_locator):
+ """Construct a list of CodeUnits from polymorphic inputs.
+
+ `morfs` is a module or a filename, or a list of same.
+
+ `file_locator` is a FileLocator that can help resolve filenames.
+
+ Returns a list of CodeUnit objects.
+
+ """
+ # Be sure we have a list.
+ if not isinstance(morfs, (list, tuple)):
+ morfs = [morfs]
+
+ # On Windows, the shell doesn't expand wildcards. Do it here.
+ globbed = []
+ for morf in morfs:
+ if isinstance(morf, string_class) and ('?' in morf or '*' in morf):
+ globbed.extend(glob.glob(morf))
+ else:
+ globbed.append(morf)
+ morfs = globbed
+
+ code_units = [CodeUnit(morf, file_locator) for morf in morfs]
+
+ return code_units
+
+
+class CodeUnit(object):
+ """Code unit: a filename or module.
+
+ Instance attributes:
+
+ `name` is a human-readable name for this code unit.
+ `filename` is the os path from which we can read the source.
+ `relative` is a boolean.
+
+ """
+ def __init__(self, morf, file_locator):
+ self.file_locator = file_locator
+
+ if hasattr(morf, '__file__'):
+ f = morf.__file__
+ else:
+ f = morf
+ # .pyc files should always refer to a .py instead.
+ if f.endswith('.pyc'):
+ f = f[:-1]
+ self.filename = self.file_locator.canonical_filename(f)
+
+ if hasattr(morf, '__name__'):
+ n = modname = morf.__name__
+ self.relative = True
+ else:
+ n = os.path.splitext(morf)[0]
+ rel = self.file_locator.relative_filename(n)
+ if os.path.isabs(n):
+ self.relative = (rel != n)
+ else:
+ self.relative = True
+ n = rel
+ modname = None
+ self.name = n
+ self.modname = modname
+
+ def __repr__(self):
+ return "<CodeUnit name=%r filename=%r>" % (self.name, self.filename)
+
+ # Annoying comparison operators. Py3k wants __lt__ etc, and Py2k needs all
+ # of them defined.
+
+ def __lt__(self, other): return self.name < other.name
+ def __le__(self, other): return self.name <= other.name
+ def __eq__(self, other): return self.name == other.name
+ def __ne__(self, other): return self.name != other.name
+ def __gt__(self, other): return self.name > other.name
+ def __ge__(self, other): return self.name >= other.name
+
+ def flat_rootname(self):
+ """A base for a flat filename to correspond to this code unit.
+
+ Useful for writing files about the code where you want all the files in
+ the same directory, but need to differentiate same-named files from
+ different directories.
+
+ For example, the file a/b/c.py might return 'a_b_c'
+
+ """
+ if self.modname:
+ return self.modname.replace('.', '_')
+ else:
+ root = os.path.splitdrive(self.name)[1]
+ return root.replace('\\', '_').replace('/', '_').replace('.', '_')
+
+ def source_file(self):
+ """Return an open file for reading the source of the code unit."""
+ if os.path.exists(self.filename):
+ # A regular text file: open it.
+ return open_source(self.filename)
+
+ # Maybe it's in a zip file?
+ source = self.file_locator.get_zip_data(self.filename)
+ if source is not None:
+ return StringIO(source)
+
+ # Couldn't find source.
+ raise CoverageException(
+ "No source for code %r." % self.filename
+ )
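The `flat_rootname()` method above flattens a module name or path into a token safe to use as a single file name. A small sketch of that mapping (a standalone function rather than the `CodeUnit` method, using `ntpath.splitdrive` so drive-letter handling is deterministic on any platform):

```python
import ntpath


def flat_rootname(name, modname=None):
    """Sketch of CodeUnit.flat_rootname above: flatten a module name or
    path into a token usable as a single file name."""
    if modname:
        return modname.replace('.', '_')
    # Drop any drive letter, then replace separators and dots.
    root = ntpath.splitdrive(name)[1]
    return root.replace('\\', '_').replace('/', '_').replace('.', '_')


print(flat_rootname('a/b/c'))          # a_b_c
print(flat_rootname(None, 'pkg.mod'))  # pkg_mod
```

Note that in `CodeUnit` the extension has already been stripped by `os.path.splitext`, which is why `a/b/c.py` arrives here as `a/b/c`.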
diff --git a/python/helpers/coverage/collector.py b/python/helpers/coverage/collector.py
new file mode 100644
index 0000000..9c40d16
--- /dev/null
+++ b/python/helpers/coverage/collector.py
@@ -0,0 +1,305 @@
+"""Raw data collector for Coverage."""
+
+import sys, threading
+
+try:
+ # Use the C extension code when we can, for speed.
+ from coverage.tracer import Tracer
+except ImportError:
+ # Couldn't import the C extension, maybe it isn't built.
+ Tracer = None
+
+
+class PyTracer(object):
+ """Python implementation of the raw data tracer."""
+
+ # Because of poor implementations of trace-function-manipulating tools,
+ # the Python trace function must be kept very simple. In particular, there
+ # must be only one function ever set as the trace function, both through
+ # sys.settrace, and as the return value from the trace function. Put
+ # another way, the trace function must always return itself. It cannot
+ # swap in other functions, or return None to avoid tracing a particular
+ # frame.
+ #
+ # The trace manipulator that introduced this restriction is DecoratorTools,
+ # which sets a trace function, and then later restores the pre-existing one
+ # by calling sys.settrace with a function it found in the current frame.
+ #
+ # Systems that use DecoratorTools (or similar trace manipulations) must use
+ # PyTracer to get accurate results. The command-line --timid argument is
+ # used to force the use of this tracer.
+
+ def __init__(self):
+ self.data = None
+ self.should_trace = None
+ self.should_trace_cache = None
+ self.warn = None
+ self.cur_file_data = None
+ self.last_line = 0
+ self.data_stack = []
+ self.last_exc_back = None
+ self.last_exc_firstlineno = 0
+ self.arcs = False
+
+ def _trace(self, frame, event, arg_unused):
+ """The trace function passed to sys.settrace."""
+
+ #print("trace event: %s %r @%d" % (
+ # event, frame.f_code.co_filename, frame.f_lineno))
+
+ if self.last_exc_back:
+ if frame == self.last_exc_back:
+ # Someone forgot a return event.
+ if self.arcs and self.cur_file_data:
+ pair = (self.last_line, -self.last_exc_firstlineno)
+ self.cur_file_data[pair] = None
+ self.cur_file_data, self.last_line = self.data_stack.pop()
+ self.last_exc_back = None
+
+ if event == 'call':
+ # Entering a new function context. Decide if we should trace
+ # in this file.
+ self.data_stack.append((self.cur_file_data, self.last_line))
+ filename = frame.f_code.co_filename
+ tracename = self.should_trace_cache.get(filename)
+ if tracename is None:
+ tracename = self.should_trace(filename, frame)
+ self.should_trace_cache[filename] = tracename
+ #print("called, stack is %d deep, tracename is %r" % (
+ # len(self.data_stack), tracename))
+ if tracename:
+ if tracename not in self.data:
+ self.data[tracename] = {}
+ self.cur_file_data = self.data[tracename]
+ else:
+ self.cur_file_data = None
+ # Set the last_line to -1 because the next arc will be entering a
+ # code block, indicated by (-1, n).
+ self.last_line = -1
+ elif event == 'line':
+ # Record an executed line.
+ if self.cur_file_data is not None:
+ if self.arcs:
+ #print("lin", self.last_line, frame.f_lineno)
+ self.cur_file_data[(self.last_line, frame.f_lineno)] = None
+ else:
+ #print("lin", frame.f_lineno)
+ self.cur_file_data[frame.f_lineno] = None
+ self.last_line = frame.f_lineno
+ elif event == 'return':
+ if self.arcs and self.cur_file_data:
+ first = frame.f_code.co_firstlineno
+ self.cur_file_data[(self.last_line, -first)] = None
+ # Leaving this function, pop the filename stack.
+ self.cur_file_data, self.last_line = self.data_stack.pop()
+ #print("returned, stack is %d deep" % (len(self.data_stack)))
+ elif event == 'exception':
+ #print("exc", self.last_line, frame.f_lineno)
+ self.last_exc_back = frame.f_back
+ self.last_exc_firstlineno = frame.f_code.co_firstlineno
+ return self._trace
+
+ def start(self):
+ """Start this Tracer.
+
+ Return a Python function suitable for use with sys.settrace().
+
+ """
+ sys.settrace(self._trace)
+ return self._trace
+
+ def stop(self):
+ """Stop this Tracer."""
+ if hasattr(sys, "gettrace") and self.warn:
+ if sys.gettrace() != self._trace:
+ msg = "Trace function changed, measurement is likely wrong: %r"
+ self.warn(msg % sys.gettrace())
+ sys.settrace(None)
+
+ def get_stats(self):
+ """Return a dictionary of statistics, or None."""
+ return None
+
+
+class Collector(object):
+ """Collects trace data.
+
+ Creates a Tracer object for each thread, since they track stack
+ information. Each Tracer points to the same shared data, contributing
+ traced data points.
+
+ When the Collector is started, it creates a Tracer for the current thread,
+ and installs a function to create Tracers for each new thread started.
+ When the Collector is stopped, all active Tracers are stopped.
+
+ Threads started while the Collector is stopped will never have Tracers
+ associated with them.
+
+ """
+
+ # The stack of active Collectors. Collectors are added here when started,
+ # and popped when stopped. Collectors on the stack are paused when not
+ # the top, and resumed when they become the top again.
+ _collectors = []
+
+ def __init__(self, should_trace, timid, branch, warn):
+ """Create a collector.
+
+        `should_trace` is a function, taking a filename and a frame, and
+        returning a canonicalized filename if the file should be traced, or
+        False if it should not.
+
+        If `timid` is true, then a slower, simpler trace function will be
+        used. This is important for some environments where manipulation of
+        tracing functions makes the faster, more sophisticated trace function
+        operate improperly.
+
+ If `branch` is true, then branches will be measured. This involves
+ collecting data on which statements followed each other (arcs). Use
+ `get_arc_data` to get the arc data.
+
+ `warn` is a warning function, taking a single string message argument,
+ to be used if a warning needs to be issued.
+
+ """
+ self.should_trace = should_trace
+ self.warn = warn
+ self.branch = branch
+ self.reset()
+
+ if timid:
+ # Being timid: use the simple Python trace function.
+ self._trace_class = PyTracer
+ else:
+ # Being fast: use the C Tracer if it is available, else the Python
+ # trace function.
+ self._trace_class = Tracer or PyTracer
+
+ def __repr__(self):
+ return "<Collector at 0x%x>" % id(self)
+
+ def tracer_name(self):
+ """Return the class name of the tracer we're using."""
+ return self._trace_class.__name__
+
+ def reset(self):
+ """Clear collected data, and prepare to collect more."""
+ # A dictionary mapping filenames to dicts with linenumber keys,
+ # or mapping filenames to dicts with linenumber pairs as keys.
+ self.data = {}
+
+ # A cache of the results from should_trace, the decision about whether
+ # to trace execution in a file. A dict of filename to (filename or
+ # False).
+ self.should_trace_cache = {}
+
+ # Our active Tracers.
+ self.tracers = []
+
+ def _start_tracer(self):
+ """Start a new Tracer object, and store it in self.tracers."""
+ tracer = self._trace_class()
+ tracer.data = self.data
+ tracer.arcs = self.branch
+ tracer.should_trace = self.should_trace
+ tracer.should_trace_cache = self.should_trace_cache
+ tracer.warn = self.warn
+ fn = tracer.start()
+ self.tracers.append(tracer)
+ return fn
+
+ # The trace function has to be set individually on each thread before
+ # execution begins. Ironically, the only support the threading module has
+ # for running code before the thread main is the tracing function. So we
+ # install this as a trace function, and the first time it's called, it does
+ # the real trace installation.
+
+ def _installation_trace(self, frame_unused, event_unused, arg_unused):
+ """Called on new threads, installs the real tracer."""
+ # Remove ourselves as the trace function
+ sys.settrace(None)
+ # Install the real tracer.
+ fn = self._start_tracer()
+ # Invoke the real trace function with the current event, to be sure
+ # not to lose an event.
+ if fn:
+ fn = fn(frame_unused, event_unused, arg_unused)
+ # Return the new trace function to continue tracing in this scope.
+ return fn
+
+ def start(self):
+ """Start collecting trace information."""
+ if self._collectors:
+ self._collectors[-1].pause()
+ self._collectors.append(self)
+ #print >>sys.stderr, "Started: %r" % self._collectors
+ # Install the tracer on this thread.
+ self._start_tracer()
+ # Install our installation tracer in threading, to jump start other
+ # threads.
+ threading.settrace(self._installation_trace)
+
+ def stop(self):
+ """Stop collecting trace information."""
+ #print >>sys.stderr, "Stopping: %r" % self._collectors
+ assert self._collectors
+ assert self._collectors[-1] is self
+
+ self.pause()
+ self.tracers = []
+
+ # Remove this Collector from the stack, and resume the one underneath
+ # (if any).
+ self._collectors.pop()
+ if self._collectors:
+ self._collectors[-1].resume()
+
+ def pause(self):
+ """Pause tracing, but be prepared to `resume`."""
+ for tracer in self.tracers:
+ tracer.stop()
+ stats = tracer.get_stats()
+ if stats:
+ print("\nCoverage.py tracer stats:")
+ for k in sorted(stats.keys()):
+ print("%16s: %s" % (k, stats[k]))
+ threading.settrace(None)
+
+ def resume(self):
+ """Resume tracing after a `pause`."""
+ for tracer in self.tracers:
+ tracer.start()
+ threading.settrace(self._installation_trace)
+
+ def get_line_data(self):
+ """Return the line data collected.
+
+ Data is { filename: { lineno: None, ...}, ...}
+
+ """
+ if self.branch:
+ # If we were measuring branches, then we have to re-build the dict
+ # to show line data.
+ line_data = {}
+ for f, arcs in self.data.items():
+ line_data[f] = ldf = {}
+ for l1, _ in list(arcs.keys()):
+ if l1:
+ ldf[l1] = None
+ return line_data
+ else:
+ return self.data
+
+ def get_arc_data(self):
+ """Return the arc data collected.
+
+ Data is { filename: { (l1, l2): None, ...}, ...}
+
+ Note that no data is collected or returned if the Collector wasn't
+ created with `branch` true.
+
+ """
+ if self.branch:
+ return self.data
+ else:
+ return {}
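The heart of `PyTracer` above is a `sys.settrace` trace function that records `'line'` events and always returns itself, for the DecoratorTools-compatibility reasons explained in its comment. A stripped-down sketch of that idea, recording executed line numbers for a single file (class and function names here are illustrative, not coverage.py's):

```python
import sys


class LineRecorder(object):
    """Minimal sketch of the PyTracer idea: record executed line numbers
    for one file, always returning the same trace function."""
    def __init__(self, filename):
        self.filename = filename
        self.lines = set()

    def _trace(self, frame, event, arg):
        if event == 'line' and frame.f_code.co_filename == self.filename:
            self.lines.add(frame.f_lineno)
        # Never swap in another function: tools like DecoratorTools
        # depend on the trace function returning itself.
        return self._trace

    def start(self):
        sys.settrace(self._trace)

    def stop(self):
        sys.settrace(None)


def sample():
    a = 1
    if a:
        a += 1
    return a


rec = LineRecorder(sample.__code__.co_filename)
rec.start()
sample()
rec.stop()
print(len(rec.lines) >= 3)  # True: sample()'s executed lines were recorded
```

Only frames entered after `start()` are traced, which is why the `Collector` above must also hook `threading.settrace` to reach threads started later.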
diff --git a/python/helpers/coverage/config.py b/python/helpers/coverage/config.py
new file mode 100644
index 0000000..6b441dd
--- /dev/null
+++ b/python/helpers/coverage/config.py
@@ -0,0 +1,156 @@
+"""Config file for coverage.py"""
+
+import os
+from coverage.backward import configparser # pylint: disable=W0622
+
+# The default line exclusion regexes
+DEFAULT_EXCLUDE = [
+ '(?i)# *pragma[: ]*no *cover',
+ ]
+
+# The default partial branch regexes, to be modified by the user.
+DEFAULT_PARTIAL = [
+ '(?i)# *pragma[: ]*no *branch',
+ ]
+
+# The default partial branch regexes, based on Python semantics.
+# These are any Python branching constructs that can't actually execute all
+# their branches.
+DEFAULT_PARTIAL_ALWAYS = [
+ 'while (True|1|False|0):',
+ 'if (True|1|False|0):',
+ ]
+
+
+class CoverageConfig(object):
+ """Coverage.py configuration.
+
+ The attributes of this class are the various settings that control the
+ operation of coverage.py.
+
+ """
+
+ def __init__(self):
+ """Initialize the configuration attributes to their defaults."""
+ # Defaults for [run]
+ self.branch = False
+ self.cover_pylib = False
+ self.data_file = ".coverage"
+ self.parallel = False
+ self.timid = False
+ self.source = None
+
+ # Defaults for [report]
+ self.exclude_list = DEFAULT_EXCLUDE[:]
+ self.ignore_errors = False
+ self.include = None
+ self.omit = None
+ self.partial_list = DEFAULT_PARTIAL[:]
+ self.partial_always_list = DEFAULT_PARTIAL_ALWAYS[:]
+ self.precision = 0
+
+ # Defaults for [html]
+ self.html_dir = "htmlcov"
+
+ # Defaults for [xml]
+ self.xml_output = "coverage.xml"
+
+ def from_environment(self, env_var):
+ """Read configuration from the `env_var` environment variable."""
+ # Timidity: for nose users, read an environment variable. This is a
+ # cheap hack, since the rest of the command line arguments aren't
+ # recognized, but it solves some users' problems.
+ env = os.environ.get(env_var, '')
+ if env:
+ self.timid = ('--timid' in env)
+
+ def from_args(self, **kwargs):
+ """Read config values from `kwargs`."""
+ for k, v in kwargs.items():
+ if v is not None:
+ setattr(self, k, v)
+
+ def from_file(self, *files):
+ """Read configuration from .rc files.
+
+ Each argument in `files` is a file name to read.
+
+ """
+ cp = configparser.RawConfigParser()
+ cp.read(files)
+
+ # [run]
+ if cp.has_option('run', 'branch'):
+ self.branch = cp.getboolean('run', 'branch')
+ if cp.has_option('run', 'cover_pylib'):
+ self.cover_pylib = cp.getboolean('run', 'cover_pylib')
+ if cp.has_option('run', 'data_file'):
+ self.data_file = cp.get('run', 'data_file')
+ if cp.has_option('run', 'include'):
+ self.include = self.get_list(cp, 'run', 'include')
+ if cp.has_option('run', 'omit'):
+ self.omit = self.get_list(cp, 'run', 'omit')
+ if cp.has_option('run', 'parallel'):
+ self.parallel = cp.getboolean('run', 'parallel')
+ if cp.has_option('run', 'source'):
+ self.source = self.get_list(cp, 'run', 'source')
+ if cp.has_option('run', 'timid'):
+ self.timid = cp.getboolean('run', 'timid')
+
+ # [report]
+ if cp.has_option('report', 'exclude_lines'):
+ self.exclude_list = \
+ self.get_line_list(cp, 'report', 'exclude_lines')
+ if cp.has_option('report', 'ignore_errors'):
+ self.ignore_errors = cp.getboolean('report', 'ignore_errors')
+ if cp.has_option('report', 'include'):
+ self.include = self.get_list(cp, 'report', 'include')
+ if cp.has_option('report', 'omit'):
+ self.omit = self.get_list(cp, 'report', 'omit')
+ if cp.has_option('report', 'partial_branches'):
+ self.partial_list = \
+ self.get_line_list(cp, 'report', 'partial_branches')
+ if cp.has_option('report', 'partial_branches_always'):
+ self.partial_always_list = \
+ self.get_line_list(cp, 'report', 'partial_branches_always')
+ if cp.has_option('report', 'precision'):
+ self.precision = cp.getint('report', 'precision')
+
+ # [html]
+ if cp.has_option('html', 'directory'):
+ self.html_dir = cp.get('html', 'directory')
+
+ # [xml]
+ if cp.has_option('xml', 'output'):
+ self.xml_output = cp.get('xml', 'output')
+
+ def get_list(self, cp, section, option):
+ """Read a list of strings from the ConfigParser `cp`.
+
+ The value of `section` and `option` is treated as a comma- and newline-
+ separated list of strings. Each value is stripped of whitespace.
+
+ Returns the list of strings.
+
+ """
+ value_list = cp.get(section, option)
+ values = []
+ for value_line in value_list.split('\n'):
+ for value in value_line.split(','):
+ value = value.strip()
+ if value:
+ values.append(value)
+ return values
+
+ def get_line_list(self, cp, section, option):
+ """Read a list of full-line strings from the ConfigParser `cp`.
+
+ The value of `section` and `option` is treated as a newline-separated
+ list of strings. Each value is stripped of whitespace.
+
+ Returns the list of strings.
+
+ """
+ value_list = cp.get(section, option)
+ return list(filter(None, value_list.split('\n')))
+
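The `get_list()` splitting rule above accepts values separated by commas, newlines, or both. A Python 3 sketch of how that plays out with a hypothetical `.coveragerc` (the vendored code itself goes through `coverage.backward` for Python 2/3 compatibility; `read_string` is the Python 3 spelling):

```python
import configparser

# A hypothetical .coveragerc; `omit` mixes commas and newlines, as
# CoverageConfig.get_list allows.
RC = """\
[run]
branch = True
omit =
    */tests/*, */vendor/*
    setup.py
"""


def get_list(cp, section, option):
    """Same splitting rule as CoverageConfig.get_list above."""
    values = []
    for line in cp.get(section, option).split('\n'):
        for value in line.split(','):
            value = value.strip()
            if value:
                values.append(value)
    return values


cp = configparser.RawConfigParser()
cp.read_string(RC)
print(cp.getboolean('run', 'branch'))  # True
print(get_list(cp, 'run', 'omit'))     # ['*/tests/*', '*/vendor/*', 'setup.py']
```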
diff --git a/python/helpers/coverage/control.py b/python/helpers/coverage/control.py
new file mode 100644
index 0000000..5ca1ef9
--- /dev/null
+++ b/python/helpers/coverage/control.py
@@ -0,0 +1,673 @@
+"""Core control stuff for Coverage."""
+
+import atexit, os, random, socket, sys
+
+from coverage.annotate import AnnotateReporter
+from coverage.backward import string_class
+from coverage.codeunit import code_unit_factory, CodeUnit
+from coverage.collector import Collector
+from coverage.config import CoverageConfig
+from coverage.data import CoverageData
+from coverage.files import FileLocator, TreeMatcher, FnmatchMatcher
+from coverage.files import find_python_files
+from coverage.html import HtmlReporter
+from coverage.misc import CoverageException, bool_or_none, join_regex
+from coverage.results import Analysis, Numbers
+from coverage.summary import SummaryReporter
+from coverage.xmlreport import XmlReporter
+
+class coverage(object):
+ """Programmatic access to Coverage.
+
+ To use::
+
+ from coverage import coverage
+
+ cov = coverage()
+ cov.start()
+ #.. blah blah (run your code) blah blah ..
+ cov.stop()
+ cov.html_report(directory='covhtml')
+
+ """
+ def __init__(self, data_file=None, data_suffix=None, cover_pylib=None,
+ auto_data=False, timid=None, branch=None, config_file=True,
+ source=None, omit=None, include=None):
+ """
+ `data_file` is the base name of the data file to use, defaulting to
+ ".coverage". `data_suffix` is appended (with a dot) to `data_file` to
+ create the final file name. If `data_suffix` is simply True, then a
+ suffix is created with the machine and process identity included.
+
+ `cover_pylib` is a boolean determining whether Python code installed
+ with the Python interpreter is measured. This includes the Python
+ standard library and any packages installed with the interpreter.
+
+ If `auto_data` is true, then any existing data file will be read when
+ coverage measurement starts, and data will be saved automatically when
+ measurement stops.
+
+ If `timid` is true, then a slower and simpler trace function will be
+ used. This is important for some environments where manipulation of
+ tracing functions breaks the faster trace function.
+
+ If `branch` is true, then branch coverage will be measured in addition
+ to the usual statement coverage.
+
+ `config_file` determines what config file to read. If it is a string,
+ it is the name of the config file to read. If it is True, then a
+ standard file is read (".coveragerc"). If it is False, then no file is
+ read.
+
+ `source` is a list of file paths or package names. Only code located
+ in the trees indicated by the file paths or package names will be
+ measured.
+
+ `include` and `omit` are lists of filename patterns. Files that match
+ `include` will be measured, files that match `omit` will not. Each
+ will also accept a single string argument.
+
+ """
+ from coverage import __version__
+
+ # A record of all the warnings that have been issued.
+ self._warnings = []
+
+ # Build our configuration from a number of sources:
+ # 1: defaults:
+ self.config = CoverageConfig()
+
+ # 2: from the coveragerc file:
+ if config_file:
+ if config_file is True:
+ config_file = ".coveragerc"
+ try:
+ self.config.from_file(config_file)
+ except ValueError:
+ _, err, _ = sys.exc_info()
+ raise CoverageException(
+ "Couldn't read config file %s: %s" % (config_file, err)
+ )
+
+ # 3: from environment variables:
+ self.config.from_environment('COVERAGE_OPTIONS')
+ env_data_file = os.environ.get('COVERAGE_FILE')
+ if env_data_file:
+ self.config.data_file = env_data_file
+
+ # 4: from constructor arguments:
+ if isinstance(omit, string_class):
+ omit = [omit]
+ if isinstance(include, string_class):
+ include = [include]
+ self.config.from_args(
+ data_file=data_file, cover_pylib=cover_pylib, timid=timid,
+ branch=branch, parallel=bool_or_none(data_suffix),
+ source=source, omit=omit, include=include
+ )
+
+ self.auto_data = auto_data
+ self.atexit_registered = False
+
+ # _exclude_re is a dict mapping exclusion list names to compiled
+ # regexes.
+ self._exclude_re = {}
+ self._exclude_regex_stale()
+
+ self.file_locator = FileLocator()
+
+ # The source argument can be directories or package names.
+ self.source = []
+ self.source_pkgs = []
+ for src in self.config.source or []:
+ if os.path.exists(src):
+ self.source.append(self.file_locator.canonical_filename(src))
+ else:
+ self.source_pkgs.append(src)
+
+ self.omit = self._prep_patterns(self.config.omit)
+ self.include = self._prep_patterns(self.config.include)
+
+ self.collector = Collector(
+ self._should_trace, timid=self.config.timid,
+ branch=self.config.branch, warn=self._warn
+ )
+
+ # Suffixes are a bit tricky. We want to use the data suffix only when
+ # collecting data, not when combining data. So we save it as
+ # `self.run_suffix` now, and promote it to `self.data_suffix` if we
+ # find that we are collecting data later.
+ if data_suffix or self.config.parallel:
+ if not isinstance(data_suffix, string_class):
+ # if data_suffix=True, use .machinename.pid.random
+ data_suffix = True
+ else:
+ data_suffix = None
+ self.data_suffix = None
+ self.run_suffix = data_suffix
+
+ # Create the data file. We do this at construction time so that the
+ # data file will be written into the directory where the process
+ # started rather than wherever the process eventually chdir'd to.
+ self.data = CoverageData(
+ basename=self.config.data_file,
+ collector="coverage v%s" % __version__
+ )
+
+ # The dirs for files considered "installed with the interpreter".
+ self.pylib_dirs = []
+ if not self.config.cover_pylib:
+ # Look at where some standard modules are located. That's the
+ # indication for "installed with the interpreter". In some
+ # environments (virtualenv, for example), these modules may be
+ # spread across a few locations. Look at all the candidate modules
+ # we've imported, and take all the different ones.
+ for m in (atexit, os, random, socket):
+ if hasattr(m, "__file__"):
+ m_dir = self._canonical_dir(m.__file__)
+ if m_dir not in self.pylib_dirs:
+ self.pylib_dirs.append(m_dir)
+
+ # To avoid tracing the coverage code itself, we skip anything located
+ # where we are.
+ self.cover_dir = self._canonical_dir(__file__)
+
+ # The matchers for _should_trace, created when tracing starts.
+ self.source_match = None
+ self.pylib_match = self.cover_match = None
+ self.include_match = self.omit_match = None
+
+ # Only _harvest_data once per measurement cycle.
+ self._harvested = False
+
+ # Set the reporting precision.
+ Numbers.set_precision(self.config.precision)
+
+ # When tearing down the coverage object, modules can become None.
+ # Saving the modules as object attributes avoids problems, but it is
+ # quite ad-hoc which modules need to be saved and which references
+ # need to use the object attributes.
+ self.socket = socket
+ self.os = os
+ self.random = random
+
+ def _canonical_dir(self, f):
+ """Return the canonical directory of the file `f`."""
+ return os.path.split(self.file_locator.canonical_filename(f))[0]
+
+ def _source_for_file(self, filename):
+ """Return the source file for `filename`."""
+        if not filename.endswith(".py"):
+            # A compiled file such as .pyc or .pyo: drop the trailing
+            # character to get the name of the .py source file.
+            if filename[-4:-1] == ".py":
+                filename = filename[:-1]
+ return filename
+
+ def _should_trace(self, filename, frame):
+ """Decide whether to trace execution in `filename`
+
+ This function is called from the trace function. As each new file name
+ is encountered, this function determines whether it is traced or not.
+
+ Returns a canonicalized filename if it should be traced, False if it
+ should not.
+
+ """
+ if os is None:
+ return False
+
+ if filename.startswith('<'):
+ # Lots of non-file execution is represented with artificial
+ # filenames like "<string>", "<doctest readme.txt[0]>", or
+ # "<exec_function>". Don't ever trace these executions, since we
+ # can't do anything with the data later anyway.
+ return False
+
+ if filename.endswith(".html"):
+ # Jinja and maybe other templating systems compile templates into
+ # Python code, but use the template filename as the filename in
+ # the compiled code. Of course, those filenames are useless later
+ # so don't bother collecting. TODO: How should we really separate
+ # out good file extensions from bad?
+ return False
+
+ self._check_for_packages()
+
+ # Compiled Python files have two filenames: frame.f_code.co_filename is
+ # the filename at the time the .pyc was compiled. The second name is
+ # __file__, which is where the .pyc was actually loaded from. Since
+ # .pyc files can be moved after compilation (for example, by being
+ # installed), we look for __file__ in the frame and prefer it to the
+ # co_filename value.
+ dunder_file = frame.f_globals.get('__file__')
+ if dunder_file:
+ filename = self._source_for_file(dunder_file)
+
+ # Jython reports the .class file to the tracer, use the source file.
+ if filename.endswith("$py.class"):
+ filename = filename[:-9] + ".py"
+
+ canonical = self.file_locator.canonical_filename(filename)
+
+ # If the user specified source, then that's authoritative about what to
+ # measure. If they didn't, then we have to exclude the stdlib and
+ # coverage.py directories.
+ if self.source_match:
+ if not self.source_match.match(canonical):
+ return False
+ else:
+ # If we aren't supposed to trace installed code, then check if this
+ # is near the Python standard library and skip it if so.
+ if self.pylib_match and self.pylib_match.match(canonical):
+ return False
+
+ # We exclude the coverage code itself, since a little of it will be
+ # measured otherwise.
+ if self.cover_match and self.cover_match.match(canonical):
+ return False
+
+ # Check the file against the include and omit patterns.
+ if self.include_match and not self.include_match.match(canonical):
+ return False
+ if self.omit_match and self.omit_match.match(canonical):
+ return False
+
+ return canonical
+
+ # To log what should_trace returns, change this to "if 1:"
+ if 0:
+ _real_should_trace = _should_trace
+ def _should_trace(self, filename, frame): # pylint: disable=E0102
+ """A logging decorator around the real _should_trace function."""
+ ret = self._real_should_trace(filename, frame)
+ print("should_trace: %r -> %r" % (filename, ret))
+ return ret
+
+ def _warn(self, msg):
+ """Use `msg` as a warning."""
+ self._warnings.append(msg)
+ sys.stderr.write("Coverage.py warning: %s\n" % msg)
+
+ def _prep_patterns(self, patterns):
+ """Prepare the file patterns for use in a `FnmatchMatcher`.
+
+ If a pattern starts with a wildcard, it is used as a pattern
+ as-is. If it does not start with a wildcard, then it is made
+ absolute with the current directory.
+
+ If `patterns` is None, an empty list is returned.
+
+ """
+ patterns = patterns or []
+ prepped = []
+        for p in patterns:
+ if p.startswith("*") or p.startswith("?"):
+ prepped.append(p)
+ else:
+ prepped.append(self.file_locator.abs_file(p))
+ return prepped
+
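The `_prep_patterns()` method above leaves wildcard-led patterns untouched and absolutizes the rest, so they can later be matched with `FnmatchMatcher`. A sketch of that behavior (using `os.path.abspath` as a stand-in for `FileLocator.abs_file`, and plain `fnmatch` as a stand-in for the matcher):

```python
import fnmatch
import os


def prep_patterns(patterns):
    """Sketch of _prep_patterns above: leave wildcard-led patterns
    alone, absolutize the rest."""
    prepped = []
    for p in patterns or []:
        if p.startswith("*") or p.startswith("?"):
            prepped.append(p)
        else:
            prepped.append(os.path.abspath(p))
    return prepped


pats = prep_patterns(["*/tests/*", "src/app.py"])
print(pats[0])                                               # */tests/*
print(pats[1] == os.path.abspath("src/app.py"))              # True
print(fnmatch.fnmatch("/home/me/proj/tests/x.py", pats[0]))  # True
```

The asymmetry matters: `*/tests/*` should match test directories anywhere, while a bare `src/app.py` should mean one specific file relative to the current directory.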
+ def _check_for_packages(self):
+ """Update the source_match matcher with latest imported packages."""
+ # Our self.source_pkgs attribute is a list of package names we want to
+ # measure. Each time through here, we see if we've imported any of
+ # them yet. If so, we add its file to source_match, and we don't have
+ # to look for that package any more.
+ if self.source_pkgs:
+ found = []
+ for pkg in self.source_pkgs:
+ try:
+ mod = sys.modules[pkg]
+ except KeyError:
+ continue
+
+ found.append(pkg)
+
+ try:
+ pkg_file = mod.__file__
+ except AttributeError:
+ self._warn("Module %s has no Python source." % pkg)
+ else:
+ d, f = os.path.split(pkg_file)
+ if f.startswith('__init__.'):
+ # This is actually a package, return the directory.
+ pkg_file = d
+ else:
+ pkg_file = self._source_for_file(pkg_file)
+ pkg_file = self.file_locator.canonical_filename(pkg_file)
+ self.source.append(pkg_file)
+ self.source_match.add(pkg_file)
+
+ for pkg in found:
+ self.source_pkgs.remove(pkg)
+
+ def use_cache(self, usecache):
+ """Control the use of a data file (incorrectly called a cache).
+
+ `usecache` is true or false, whether to read and write data on disk.
+
+ """
+ self.data.usefile(usecache)
+
+ def load(self):
+ """Load previously-collected coverage data from the data file."""
+ self.collector.reset()
+ self.data.read()
+
+ def start(self):
+ """Start measuring code coverage."""
+ if self.run_suffix:
+ # Calling start() means we're running code, so use the run_suffix
+ # as the data_suffix when we eventually save the data.
+ self.data_suffix = self.run_suffix
+ if self.auto_data:
+ self.load()
+ # Save coverage data when Python exits.
+ if not self.atexit_registered:
+ atexit.register(self.save)
+ self.atexit_registered = True
+
+ # Create the matchers we need for _should_trace
+ if self.source or self.source_pkgs:
+ self.source_match = TreeMatcher(self.source)
+ else:
+ if self.cover_dir:
+ self.cover_match = TreeMatcher([self.cover_dir])
+ if self.pylib_dirs:
+ self.pylib_match = TreeMatcher(self.pylib_dirs)
+ if self.include:
+ self.include_match = FnmatchMatcher(self.include)
+ if self.omit:
+ self.omit_match = FnmatchMatcher(self.omit)
+
+ self._harvested = False
+ self.collector.start()
+
+ def stop(self):
+ """Stop measuring code coverage."""
+ self.collector.stop()
+ self._harvest_data()
+
+ def erase(self):
+ """Erase previously-collected coverage data.
+
+ This removes the in-memory data collected in this session as well as
+ discarding the data file.
+
+ """
+ self.collector.reset()
+ self.data.erase()
+
+ def clear_exclude(self, which='exclude'):
+ """Clear the exclude list."""
+ setattr(self.config, which + "_list", [])
+ self._exclude_regex_stale()
+
+ def exclude(self, regex, which='exclude'):
+ """Exclude source lines from execution consideration.
+
+ A number of lists of regular expressions are maintained. Each list
+ selects lines that are treated differently during reporting.
+
+ `which` determines which list is modified. The "exclude" list selects
+ lines that are not considered executable at all. The "partial" list
+ indicates lines with branches that are not taken.
+
+ `regex` is a regular expression. The regex is added to the specified
+ list. If any of the regexes in the list is found in a line, the line
+ is marked for special treatment during reporting.
+
+ """
+ excl_list = getattr(self.config, which + "_list")
+ excl_list.append(regex)
+ self._exclude_regex_stale()
+
+ def _exclude_regex_stale(self):
+ """Drop all the compiled exclusion regexes, a list was modified."""
+ self._exclude_re.clear()
+
+ def _exclude_regex(self, which):
+ """Return a compiled regex for the given exclusion list."""
+ if which not in self._exclude_re:
+ excl_list = getattr(self.config, which + "_list")
+ self._exclude_re[which] = join_regex(excl_list)
+ return self._exclude_re[which]
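`join_regex` comes from `coverage.misc`, which is not part of this hunk; a minimal sketch of the idea, so that each source line needs only one `re.search` against the whole exclusion list (the empty-list behavior here is this sketch's choice, not necessarily the library's):

```python
import re

def join_regex(regexes):
    # Combine the patterns into one alternation.
    if not regexes:
        return "(?!)"  # a regex that never matches
    return "(" + ")|(".join(regexes) + ")"

excl = join_regex(["pragma: no cover", "def __repr__"])
```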
+
+ def get_exclude_list(self, which='exclude'):
+ """Return a list of excluded regex patterns.
+
+ `which` indicates which list is desired. See `exclude` for the lists
+ that are available, and their meaning.
+
+ """
+ return getattr(self.config, which + "_list")
+
+ def save(self):
+ """Save the collected coverage data to the data file."""
+ data_suffix = self.data_suffix
+ if data_suffix is True:
+ # If data_suffix was a simple true value, then make a suffix with
+ # plenty of distinguishing information. We do this here in
+ # `save()` at the last minute so that the pid will be correct even
+ # if the process forks.
+ data_suffix = "%s.%s.%06d" % (
+ self.socket.gethostname(), self.os.getpid(),
+ self.random.randint(0, 99999)
+ )
+
+ self._harvest_data()
+ self.data.write(suffix=data_suffix)
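The suffix built above yields data files like `.coverage.myhost.1234.042817`. The same recipe using the standard-library modules directly, rather than the instance attributes the class keeps for atexit safety:

```python
import os, random, socket

# hostname.pid.random, unique enough to keep parallel runs apart
suffix = "%s.%s.%06d" % (
    socket.gethostname(), os.getpid(), random.randint(0, 99999)
)
# Hostnames may themselves contain dots, so split from the right.
host, pid, rand = suffix.rsplit(".", 2)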
+
+ def combine(self):
+ """Combine together a number of similarly-named coverage data files.
+
+ All coverage data files whose name starts with `data_file` (from the
+ coverage() constructor) will be read, and combined together into the
+ current measurements.
+
+ """
+ self.data.combine_parallel_data()
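A sketch of the file-name convention that combine() relies on: every sibling whose name is the base name plus a dot and a suffix is merged; the base file itself and unrelated files are left alone. The file names here are made up for illustration:

```python
import os, tempfile

base = ".coverage"
d = tempfile.mkdtemp()
for name in [".coverage", ".coverage.host.1.000001",
             ".coverage.host.2.000002", "unrelated"]:
    open(os.path.join(d, name), "w").close()

# combine_parallel_data() considers every file named base + "." + suffix:
localdot = base + "."
parallel = sorted(f for f in os.listdir(d) if f.startswith(localdot))
```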
+
+ def _harvest_data(self):
+ """Get the collected data and reset the collector.
+
+ Also warn about various problems collecting data.
+
+ """
+ if not self._harvested:
+ self.data.add_line_data(self.collector.get_line_data())
+ self.data.add_arc_data(self.collector.get_arc_data())
+ self.collector.reset()
+
+ # If there are still entries in the source_pkgs list, then we never
+ # encountered those packages.
+ for pkg in self.source_pkgs:
+ self._warn("Module %s was never imported." % pkg)
+
+ # Find out if we got any data.
+ summary = self.data.summary()
+ if not summary:
+ self._warn("No data was collected.")
+
+ # Find files that were never executed at all.
+ for src in self.source:
+ for py_file in find_python_files(src):
+ self.data.touch_file(py_file)
+
+ self._harvested = True
+
+ # Backward compatibility with version 1.
+ def analysis(self, morf):
+ """Like `analysis2` but doesn't return excluded line numbers."""
+ f, s, _, m, mf = self.analysis2(morf)
+ return f, s, m, mf
+
+ def analysis2(self, morf):
+ """Analyze a module.
+
+ `morf` is a module or a filename. It will be analyzed to determine
+ its coverage statistics. The return value is a 5-tuple:
+
+ * The filename for the module.
+ * A list of line numbers of executable statements.
+ * A list of line numbers of excluded statements.
+ * A list of line numbers of statements not run (missing from
+ execution).
+ * A readable formatted string of the missing line numbers.
+
+ The analysis uses the source file itself and the current measured
+ coverage data.
+
+ """
+ analysis = self._analyze(morf)
+ return (
+ analysis.filename, analysis.statements, analysis.excluded,
+ analysis.missing, analysis.missing_formatted()
+ )
+
+ def _analyze(self, it):
+ """Analyze a single morf or code unit.
+
+ Returns an `Analysis` object.
+
+ """
+ if not isinstance(it, CodeUnit):
+ it = code_unit_factory(it, self.file_locator)[0]
+
+ return Analysis(self, it)
+
+ def report(self, morfs=None, show_missing=True, ignore_errors=None,
+ file=None, # pylint: disable=W0622
+ omit=None, include=None
+ ):
+ """Write a summary report to `file`.
+
+ Each module in `morfs` is listed, with counts of statements, executed
+ statements, missing statements, and a list of lines missed.
+
+ `include` is a list of filename patterns. Modules whose filenames
+ match those patterns will be included in the report. Modules matching
+ `omit` will not be included in the report.
+
+ """
+ self.config.from_args(
+ ignore_errors=ignore_errors, omit=omit, include=include
+ )
+ reporter = SummaryReporter(
+ self, show_missing, self.config.ignore_errors
+ )
+ reporter.report(morfs, outfile=file, config=self.config)
+
+ def annotate(self, morfs=None, directory=None, ignore_errors=None,
+ omit=None, include=None):
+ """Annotate a list of modules.
+
+ Each module in `morfs` is annotated. The source is written to a new
+ file, named with a ",cover" suffix, with each line prefixed with a
+ marker to indicate the coverage of the line. Covered lines have ">",
+ excluded lines have "-", and missing lines have "!".
+
+ See `coverage.report()` for other arguments.
+
+ """
+ self.config.from_args(
+ ignore_errors=ignore_errors, omit=omit, include=include
+ )
+ reporter = AnnotateReporter(self, self.config.ignore_errors)
+ reporter.report(morfs, config=self.config, directory=directory)
+
+ def html_report(self, morfs=None, directory=None, ignore_errors=None,
+ omit=None, include=None):
+ """Generate an HTML report.
+
+ See `coverage.report()` for other arguments.
+
+ """
+ self.config.from_args(
+ ignore_errors=ignore_errors, omit=omit, include=include,
+ html_dir=directory,
+ )
+ reporter = HtmlReporter(self, self.config.ignore_errors)
+ reporter.report(morfs, config=self.config)
+
+ def xml_report(self, morfs=None, outfile=None, ignore_errors=None,
+ omit=None, include=None):
+ """Generate an XML report of coverage results.
+
+ The report is compatible with Cobertura reports.
+
+ Each module in `morfs` is included in the report. `outfile` is the
+ path to write the file to, "-" will write to stdout.
+
+ See `coverage.report()` for other arguments.
+
+ """
+ self.config.from_args(
+ ignore_errors=ignore_errors, omit=omit, include=include,
+ xml_output=outfile,
+ )
+ file_to_close = None
+ if self.config.xml_output:
+ if self.config.xml_output == '-':
+ outfile = sys.stdout
+ else:
+ outfile = open(self.config.xml_output, "w")
+ file_to_close = outfile
+ try:
+ reporter = XmlReporter(self, self.config.ignore_errors)
+ reporter.report(morfs, outfile=outfile, config=self.config)
+ finally:
+ if file_to_close:
+ file_to_close.close()
+
+ def sysinfo(self):
+ """Return a list of (key, value) pairs showing internal information."""
+
+ import coverage as covmod
+ import platform, re
+
+ info = [
+ ('version', covmod.__version__),
+ ('coverage', covmod.__file__),
+ ('cover_dir', self.cover_dir),
+ ('pylib_dirs', self.pylib_dirs),
+ ('tracer', self.collector.tracer_name()),
+ ('data_path', self.data.filename),
+ ('python', sys.version.replace('\n', '')),
+ ('platform', platform.platform()),
+ ('cwd', os.getcwd()),
+ ('path', sys.path),
+ ('environment', [
+ ("%s = %s" % (k, v)) for k, v in os.environ.items()
+ if re.search("^COV|^PY", k)
+ ]),
+ ]
+ return info
+
+
+def process_startup():
+ """Call this at Python startup to perhaps measure coverage.
+
+ If the environment variable COVERAGE_PROCESS_START is defined, coverage
+ measurement is started. The value of the variable is the config file
+ to use.
+
+ There are two ways to configure your Python installation to invoke this
+ function when Python starts:
+
+ #. Create or append to sitecustomize.py to add these lines::
+
+ import coverage
+ coverage.process_startup()
+
+ #. Create a .pth file in your Python installation containing::
+
+ import coverage; coverage.process_startup()
+
+ """
+ cps = os.environ.get("COVERAGE_PROCESS_START")
+ if cps:
+ cov = coverage(config_file=cps, auto_data=True)
+ if os.environ.get("COVERAGE_COVERAGE"):
+ # Measuring coverage within coverage.py takes yet more trickery.
+ cov.cover_dir = "Please measure coverage.py!"
+ cov.start()
diff --git a/python/helpers/coverage/data.py b/python/helpers/coverage/data.py
new file mode 100644
index 0000000..3263cb3
--- /dev/null
+++ b/python/helpers/coverage/data.py
@@ -0,0 +1,266 @@
+"""Coverage data for Coverage."""
+
+import os
+
+from coverage.backward import pickle, sorted # pylint: disable=W0622
+
+
+class CoverageData(object):
+ """Manages collected coverage data, including file storage.
+
+ The data file format is a pickled dict, with these keys:
+
+ * collector: a string identifying the collecting software
+
+ * lines: a dict mapping filenames to sorted lists of line numbers
+ executed:
+ { 'file1': [17,23,45], 'file2': [1,2,3], ... }
+
+ * arcs: a dict mapping filenames to sorted lists of line number pairs:
+ { 'file1': [(17,23), (17,25), (25,26)], ... }
+
+ """
+
+ def __init__(self, basename=None, collector=None):
+ """Create a CoverageData.
+
+ `basename` is the name of the file to use for storing data.
+
+ `collector` is a string describing the coverage measurement software.
+
+ """
+ self.collector = collector or 'unknown'
+
+ self.use_file = True
+
+ # Construct the filename that will be used for data file storage, if we
+ # ever do any file storage.
+ self.filename = basename or ".coverage"
+ self.filename = os.path.abspath(self.filename)
+
+ # A map from canonical Python source file name to a dictionary in
+ # which there's an entry for each line number that has been
+ # executed:
+ #
+ # {
+ # 'filename1.py': { 12: None, 47: None, ... },
+ # ...
+ # }
+ #
+ self.lines = {}
+
+ # A map from canonical Python source file name to a dictionary with an
+ # entry for each pair of line numbers forming an arc:
+ #
+ # {
+ # 'filename1.py': { (12,14): None, (47,48): None, ... },
+ # ...
+ # }
+ #
+ self.arcs = {}
+
+ self.os = os
+ self.sorted = sorted
+ self.pickle = pickle
+
+ def usefile(self, use_file=True):
+ """Set whether or not to use a disk file for data."""
+ self.use_file = use_file
+
+ def read(self):
+ """Read coverage data from the coverage data file (if it exists)."""
+ if self.use_file:
+ self.lines, self.arcs = self._read_file(self.filename)
+ else:
+ self.lines, self.arcs = {}, {}
+
+ def write(self, suffix=None):
+ """Write the collected coverage data to a file.
+
+ `suffix` is a suffix to append to the base file name. This can be used
+ for multiple or parallel execution, so that many coverage data files
+ can exist simultaneously. A dot will be used to join the base name and
+ the suffix.
+
+ """
+ if self.use_file:
+ filename = self.filename
+ if suffix:
+ filename += "." + suffix
+ self.write_file(filename)
+
+ def erase(self):
+ """Erase the data, both in this object, and from its file storage."""
+ if self.use_file:
+ if self.filename and os.path.exists(self.filename):
+ os.remove(self.filename)
+ self.lines = {}
+ self.arcs = {}
+
+ def line_data(self):
+ """Return the map from filenames to lists of line numbers executed."""
+ return dict(
+ [(f, self.sorted(lmap.keys())) for f, lmap in self.lines.items()]
+ )
+
+ def arc_data(self):
+ """Return the map from filenames to lists of line number pairs."""
+ return dict(
+ [(f, self.sorted(amap.keys())) for f, amap in self.arcs.items()]
+ )
+
+ def write_file(self, filename):
+ """Write the coverage data to `filename`."""
+
+ # Create the file data.
+ data = {}
+
+ data['lines'] = self.line_data()
+ arcs = self.arc_data()
+ if arcs:
+ data['arcs'] = arcs
+
+ if self.collector:
+ data['collector'] = self.collector
+
+ # Write the pickle to the file.
+ fdata = open(filename, 'wb')
+ try:
+ self.pickle.dump(data, fdata, 2)
+ finally:
+ fdata.close()
+
+ def read_file(self, filename):
+ """Read the coverage data from `filename`."""
+ self.lines, self.arcs = self._read_file(filename)
+
+ def raw_data(self, filename):
+ """Return the raw pickled data from `filename`."""
+ fdata = open(filename, 'rb')
+ try:
+ data = pickle.load(fdata)
+ finally:
+ fdata.close()
+ return data
+
+ def _read_file(self, filename):
+ """Return the stored coverage data from the given file.
+
+ Returns two values, suitable for assigning to `self.lines` and
+ `self.arcs`.
+
+ """
+ lines = {}
+ arcs = {}
+ try:
+ data = self.raw_data(filename)
+ if isinstance(data, dict):
+ # Unpack the 'lines' item.
+ lines = dict([
+ (f, dict.fromkeys(linenos, None))
+ for f, linenos in data.get('lines', {}).items()
+ ])
+ # Unpack the 'arcs' item.
+ arcs = dict([
+ (f, dict.fromkeys(arcpairs, None))
+ for f, arcpairs in data.get('arcs', {}).items()
+ ])
+ except Exception:
+ pass
+ return lines, arcs
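The on-disk format described in the class docstring can be exercised directly. A small round-trip under the same assumptions (a protocol-2 pickled dict, with the 'lines' lists unpacked into per-line dicts exactly as `_read_file` does):

```python
import os, pickle, tempfile

data = {
    'collector': 'example',
    'lines': {'mod.py': [1, 2, 5]},
}
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    pickle.dump(data, f, 2)
with open(path, 'rb') as f:
    raw = pickle.load(f)
os.remove(path)

# Unpack as _read_file does: line-number lists become {lineno: None} maps.
lines = dict(
    (fn, dict.fromkeys(lnums, None))
    for fn, lnums in raw.get('lines', {}).items()
)
```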
+
+ def combine_parallel_data(self):
+ """Combine a number of data files together.
+
+ Treat `self.filename` as a file prefix, and combine the data from all
+ of the data files starting with that prefix plus a dot.
+
+ """
+ data_dir, local = os.path.split(self.filename)
+ localdot = local + '.'
+ for f in os.listdir(data_dir or '.'):
+ if f.startswith(localdot):
+ full_path = os.path.join(data_dir, f)
+ new_lines, new_arcs = self._read_file(full_path)
+ for filename, file_data in new_lines.items():
+ self.lines.setdefault(filename, {}).update(file_data)
+ for filename, file_data in new_arcs.items():
+ self.arcs.setdefault(filename, {}).update(file_data)
+ if f != local:
+ os.remove(full_path)
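The merge above is a plain per-file union; the setdefault/update idiom in isolation:

```python
# Existing measurements, plus data read from one parallel file.
lines = {'a.py': {1: None, 2: None}}
new_lines = {'a.py': {2: None, 7: None}, 'b.py': {3: None}}

# Union the line sets per file; unseen files are created on the fly.
for filename, file_data in new_lines.items():
    lines.setdefault(filename, {}).update(file_data)
```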
+
+ def add_line_data(self, line_data):
+ """Add executed line data.
+
+ `line_data` is { filename: { lineno: None, ... }, ...}
+
+ """
+ for filename, linenos in line_data.items():
+ self.lines.setdefault(filename, {}).update(linenos)
+
+ def add_arc_data(self, arc_data):
+ """Add measured arc data.
+
+ `arc_data` is { filename: { (l1,l2): None, ... }, ...}
+
+ """
+ for filename, arcs in arc_data.items():
+ self.arcs.setdefault(filename, {}).update(arcs)
+
+ def touch_file(self, filename):
+ """Ensure that `filename` appears in the data, empty if needed."""
+ self.lines.setdefault(filename, {})
+
+ def measured_files(self):
+ """A list of all files that had been measured."""
+ return list(self.lines.keys())
+
+ def executed_lines(self, filename):
+ """A map containing all the line numbers executed in `filename`.
+
+ If `filename` hasn't been collected at all (because it wasn't executed)
+ then return an empty map.
+
+ """
+ return self.lines.get(filename) or {}
+
+ def executed_arcs(self, filename):
+ """A map containing all the arcs executed in `filename`."""
+ return self.arcs.get(filename) or {}
+
+ def add_to_hash(self, filename, hasher):
+ """Contribute `filename`'s data to the Md5Hash `hasher`."""
+ hasher.update(self.executed_lines(filename))
+ hasher.update(self.executed_arcs(filename))
+
+ def summary(self, fullpath=False):
+ """Return a dict summarizing the coverage data.
+
+ Keys are based on the filenames, and values are the number of executed
+ lines. If `fullpath` is true, then the keys are the full pathnames of
+ the files, otherwise they are the basenames of the files.
+
+ """
+ summ = {}
+ if fullpath:
+ filename_fn = lambda f: f
+ else:
+ filename_fn = self.os.path.basename
+ for filename, lines in self.lines.items():
+ summ[filename_fn(filename)] = len(lines)
+ return summ
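summary() collapses the line maps to counts; the same transformation standalone. Note that with `fullpath=False`, basename keys can collide if two measured files share a name:

```python
import os

lines = {
    '/proj/pkg/mod.py': {1: None, 2: None, 3: None},
    '/proj/pkg/util.py': {10: None},
}
# Key by basename, value is the number of executed lines.
summ = dict(
    (os.path.basename(f), len(lmap)) for f, lmap in lines.items()
)
```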
+
+ def has_arcs(self):
+ """Does this data have arcs?"""
+ return bool(self.arcs)
+
+
+if __name__ == '__main__':
+ # Ad-hoc: show the raw data in a data file.
+ import pprint, sys
+ covdata = CoverageData()
+ if sys.argv[1:]:
+ fname = sys.argv[1]
+ else:
+ fname = covdata.filename
+ pprint.pprint(covdata.raw_data(fname))
diff --git a/python/helpers/coverage/execfile.py b/python/helpers/coverage/execfile.py
new file mode 100644
index 0000000..71227b7
--- /dev/null
+++ b/python/helpers/coverage/execfile.py
@@ -0,0 +1,133 @@
+"""Execute files of Python code."""
+
+import imp, os, sys
+
+from coverage.backward import exec_code_object, open_source
+from coverage.misc import NoSource, ExceptionDuringRun
+
+
+try:
+ # In Py 2.x, the builtins were in __builtin__
+ BUILTINS = sys.modules['__builtin__']
+except KeyError:
+ # In Py 3.x, they're in builtins
+ BUILTINS = sys.modules['builtins']
+
+
+def rsplit1(s, sep):
+ """The same as s.rsplit(sep, 1), but works in 2.3"""
+ parts = s.split(sep)
+ return sep.join(parts[:-1]), parts[-1]
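On Pythons that have it, this is exactly `s.rsplit(sep, 1)`; its behavior on a dotted module path:

```python
def rsplit1(s, sep):
    """The same as s.rsplit(sep, 1), but works in 2.3."""
    parts = s.split(sep)
    return sep.join(parts[:-1]), parts[-1]

split_result = rsplit1("pkg.sub.mod", ".")
builtin_result = "pkg.sub.mod".rsplit(".", 1)  # list, not tuple
```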
+
+
+def run_python_module(modulename, args):
+ """Run a python module, as though with ``python -m name args...``.
+
+ `modulename` is the name of the module, possibly a dot-separated name.
+ `args` is the argument array to present as sys.argv, including the first
+ element naming the module being executed.
+
+ """
+ openfile = None
+ glo, loc = globals(), locals()
+ try:
+ try:
+ # Search for the module - inside its parent package, if any - using
+ # standard import mechanics.
+ if '.' in modulename:
+ packagename, name = rsplit1(modulename, '.')
+ package = __import__(packagename, glo, loc, ['__path__'])
+ searchpath = package.__path__
+ else:
+ packagename, name = None, modulename
+ searchpath = None # "top-level search" in imp.find_module()
+ openfile, pathname, _ = imp.find_module(name, searchpath)
+
+ # Complain if this is a magic non-file module.
+ if openfile is None and pathname is None:
+ raise NoSource(
+ "module does not live in a file: %r" % modulename
+ )
+
+ # If `modulename` is actually a package, not a mere module, then we
+ # pretend to be Python 2.7 and try running its __main__.py script.
+ if openfile is None:
+ packagename = modulename
+ name = '__main__'
+ package = __import__(packagename, glo, loc, ['__path__'])
+ searchpath = package.__path__
+ openfile, pathname, _ = imp.find_module(name, searchpath)
+ except ImportError:
+ _, err, _ = sys.exc_info()
+ raise NoSource(str(err))
+ finally:
+ if openfile:
+ openfile.close()
+
+ # Finally, hand the file off to run_python_file for execution.
+ run_python_file(pathname, args, package=packagename)
+
+
+def run_python_file(filename, args, package=None):
+ """Run a python file as if it were the main program on the command line.
+
+ `filename` is the path to the file to execute, it need not be a .py file.
+ `args` is the argument array to present as sys.argv, including the first
+ element naming the file being executed. `package` is the name of the
+ enclosing package, if any.
+
+ """
+ # Create a module to serve as __main__
+ old_main_mod = sys.modules['__main__']
+ main_mod = imp.new_module('__main__')
+ sys.modules['__main__'] = main_mod
+ main_mod.__file__ = filename
+ main_mod.__package__ = package
+ main_mod.__builtins__ = BUILTINS
+
+ # Set sys.argv and the first path element properly.
+ old_argv = sys.argv
+ old_path0 = sys.path[0]
+ sys.argv = args
+ sys.path[0] = os.path.abspath(os.path.dirname(filename))
+
+ try:
+ # Open the source file.
+ try:
+ source_file = open_source(filename)
+ except IOError:
+ raise NoSource("No file to run: %r" % filename)
+
+ try:
+ source = source_file.read()
+ finally:
+ source_file.close()
+
+ # We have the source. `compile` still needs the last line to be clean,
+ # so make sure it is, then compile a code object from it.
+ if not source or source[-1] != '\n':
+ source += '\n'
+ code = compile(source, filename, "exec")
+
+ # Execute the source file.
+ try:
+ exec_code_object(code, main_mod.__dict__)
+ except SystemExit:
+ # The user called sys.exit(). Just pass it along to the upper
+ # layers, where it will be handled.
+ raise
+ except:
+ # Something went wrong while executing the user code.
+ # Get the exc_info, and pack them into an exception that we can
+ # throw up to the outer loop. We peel two layers off the traceback
+ # so that the coverage.py code doesn't appear in the final printed
+ # traceback.
+ typ, err, tb = sys.exc_info()
+ raise ExceptionDuringRun(typ, err, tb.tb_next.tb_next)
+ finally:
+ # Restore the old __main__
+ sys.modules['__main__'] = old_main_mod
+
+ # Restore the old argv and path
+ sys.argv = old_argv
+ sys.path[0] = old_path0
diff --git a/python/helpers/coverage/files.py b/python/helpers/coverage/files.py
new file mode 100644
index 0000000..a68a0a7
--- /dev/null
+++ b/python/helpers/coverage/files.py
@@ -0,0 +1,131 @@
+"""File wrangling."""
+
+from coverage.backward import to_string
+import fnmatch, os, sys
+
+class FileLocator(object):
+ """Understand how filenames work."""
+
+ def __init__(self):
+ # The absolute path to our current directory.
+ self.relative_dir = self.abs_file(os.curdir) + os.sep
+
+ # Cache of results of calling the canonical_filename() method, to
+ # avoid duplicating work.
+ self.canonical_filename_cache = {}
+
+ def abs_file(self, filename):
+ """Return the absolute normalized form of `filename`."""
+ return os.path.normcase(os.path.abspath(os.path.realpath(filename)))
+
+ def relative_filename(self, filename):
+ """Return the relative form of `filename`.
+
+ The filename will be relative to the current directory when the
+ `FileLocator` was constructed.
+
+ """
+ if filename.startswith(self.relative_dir):
+ filename = filename.replace(self.relative_dir, "")
+ return filename
+
+ def canonical_filename(self, filename):
+ """Return a canonical filename for `filename`.
+
+ An absolute path with no redundant components and normalized case.
+
+ """
+ if filename not in self.canonical_filename_cache:
+ f = filename
+ if os.path.isabs(f) and not os.path.exists(f):
+ if self.get_zip_data(f) is None:
+ f = os.path.basename(f)
+ if not os.path.isabs(f):
+ for path in [os.curdir] + sys.path:
+ if path is None:
+ continue
+ g = os.path.join(path, f)
+ if os.path.exists(g):
+ f = g
+ break
+ cf = self.abs_file(f)
+ self.canonical_filename_cache[filename] = cf
+ return self.canonical_filename_cache[filename]
+
+ def get_zip_data(self, filename):
+ """Get data from `filename` if it is a zip file path.
+
+ Returns the string data read from the zip file, or None if no zip file
+ could be found or `filename` isn't in it. The data returned will be
+ an empty string if the file is empty.
+
+ """
+ import zipimport
+ markers = ['.zip'+os.sep, '.egg'+os.sep]
+ for marker in markers:
+ if marker in filename:
+ parts = filename.split(marker)
+ try:
+ zi = zipimport.zipimporter(parts[0]+marker[:-1])
+ except zipimport.ZipImportError:
+ continue
+ try:
+ data = zi.get_data(parts[1])
+ except IOError:
+ continue
+ return to_string(data)
+ return None
+
+
+class TreeMatcher(object):
+ """A matcher for files in a tree."""
+ def __init__(self, directories):
+ self.dirs = directories[:]
+
+ def __repr__(self):
+ return "<TreeMatcher %r>" % self.dirs
+
+ def add(self, directory):
+ """Add another directory to the list we match for."""
+ self.dirs.append(directory)
+
+ def match(self, fpath):
+ """Does `fpath` indicate a file in one of our trees?"""
+ for d in self.dirs:
+ if fpath.startswith(d):
+ if fpath == d:
+ # This is the same file!
+ return True
+ if fpath[len(d)] == os.sep:
+ # This is a file in the directory
+ return True
+ return False
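The `fpath[len(d)] == os.sep` check is what keeps `/src/pkg` from matching `/src/pkgx`; the logic in isolation, with a fixed separator for portability of the example:

```python
def tree_match(dirs, fpath, sep="/"):
    # A path matches if it is one of the directories or lives under one.
    for d in dirs:
        if fpath.startswith(d):
            if fpath == d or fpath[len(d)] == sep:
                return True
    return False

in_tree = tree_match(["/src/pkg"], "/src/pkg/mod.py")
prefix_only = tree_match(["/src/pkg"], "/src/pkgx/mod.py")  # prefix, not subpath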
+
+
+class FnmatchMatcher(object):
+ """A matcher for files by filename pattern."""
+ def __init__(self, pats):
+ self.pats = pats[:]
+
+ def __repr__(self):
+ return "<FnmatchMatcher %r>" % self.pats
+
+ def match(self, fpath):
+ """Does `fpath` match one of our filename patterns?"""
+ for pat in self.pats:
+ if fnmatch.fnmatch(fpath, pat):
+ return True
+ return False
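fnmatch uses shell-style wildcards (and `*` is not stopped by path separators), so an omit pattern like `*/tests/*` behaves as:

```python
import fnmatch

in_tests = fnmatch.fnmatch("/src/tests/test_x.py", "*/tests/*")
in_pkg = fnmatch.fnmatch("/src/pkg/mod.py", "*/tests/*")
```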
+
+
+def find_python_files(dirname):
+ """Yield all of the importable Python files in `dirname`, recursively."""
+ for dirpath, dirnames, filenames in os.walk(dirname, topdown=True):
+ if '__init__.py' not in filenames:
+ # If a directory doesn't have __init__.py, then it isn't
+ # importable and neither are its files
+ del dirnames[:]
+ continue
+ for filename in filenames:
+ if fnmatch.fnmatch(filename, "*.py"):
+ yield os.path.join(dirpath, filename)
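A quick check of the pruning rule against a throwaway tree: a subdirectory without `__init__.py`, and everything below it, is skipped. The function is repeated here so the sketch is self-contained:

```python
import fnmatch, os, tempfile

def find_python_files(dirname):
    for dirpath, dirnames, filenames in os.walk(dirname, topdown=True):
        if '__init__.py' not in filenames:
            # Not importable: prune this subtree from the walk.
            del dirnames[:]
            continue
        for filename in filenames:
            if fnmatch.fnmatch(filename, "*.py"):
                yield os.path.join(dirpath, filename)

root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")
os.makedirs(os.path.join(pkg, "scripts"))  # scripts/ has no __init__.py
for rel in ["__init__.py", "mod.py", os.path.join("scripts", "tool.py")]:
    open(os.path.join(pkg, rel), "w").close()

found = sorted(os.path.basename(f) for f in find_python_files(pkg))
```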
diff --git a/python/helpers/coverage/html.py b/python/helpers/coverage/html.py
new file mode 100644
index 0000000..fffd9b4
--- /dev/null
+++ b/python/helpers/coverage/html.py
@@ -0,0 +1,325 @@
+"""HTML reporting for Coverage."""
+
+import os, re, shutil
+
+import coverage
+from coverage.backward import pickle
+from coverage.misc import CoverageException, Hasher
+from coverage.phystokens import source_token_lines
+from coverage.report import Reporter
+from coverage.templite import Templite
+
+# Disable pylint msg W0612, because a bunch of variables look unused, but
+# they're accessed in a Templite context via locals().
+# pylint: disable=W0612
+
+def data_filename(fname):
+ """Return the path to a data file of ours."""
+ return os.path.join(os.path.split(__file__)[0], fname)
+
+def data(fname):
+ """Return the contents of a data file of ours."""
+ data_file = open(data_filename(fname))
+ try:
+ return data_file.read()
+ finally:
+ data_file.close()
+
+
+class HtmlReporter(Reporter):
+ """HTML reporting."""
+
+ # These files will be copied from the htmlfiles dir to the output dir.
+ STATIC_FILES = [
+ "style.css",
+ "jquery-1.4.3.min.js",
+ "jquery.hotkeys.js",
+ "jquery.isonscreen.js",
+ "jquery.tablesorter.min.js",
+ "coverage_html.js",
+ "keybd_closed.png",
+ "keybd_open.png",
+ ]
+
+ def __init__(self, cov, ignore_errors=False):
+ super(HtmlReporter, self).__init__(cov, ignore_errors)
+ self.directory = None
+ self.template_globals = {
+ 'escape': escape,
+ '__url__': coverage.__url__,
+ '__version__': coverage.__version__,
+ }
+ self.source_tmpl = Templite(
+ data("htmlfiles/pyfile.html"), self.template_globals
+ )
+
+ self.coverage = cov
+
+ self.files = []
+ self.arcs = self.coverage.data.has_arcs()
+ self.status = HtmlStatus()
+
+ def report(self, morfs, config=None):
+ """Generate an HTML report for `morfs`.
+
+ `morfs` is a list of modules or filenames. `config` is a
+ CoverageConfig instance.
+
+ """
+ assert config.html_dir, "must provide a directory for html reporting"
+
+ # Read the status data.
+ self.status.read(config.html_dir)
+
+ # Check that this run used the same settings as the last run.
+ m = Hasher()
+ m.update(config)
+ these_settings = m.digest()
+ if self.status.settings_hash() != these_settings:
+ self.status.reset()
+ self.status.set_settings_hash(these_settings)
+
+ # Process all the files.
+ self.report_files(self.html_file, morfs, config, config.html_dir)
+
+ if not self.files:
+ raise CoverageException("No data to report.")
+
+ # Write the index file.
+ self.index_file()
+
+ # Create the once-per-directory files.
+ for static in self.STATIC_FILES:
+ shutil.copyfile(
+ data_filename("htmlfiles/" + static),
+ os.path.join(self.directory, static)
+ )
+
+ def file_hash(self, source, cu):
+ """Compute a hash that changes if the file needs to be re-reported."""
+ m = Hasher()
+ m.update(source)
+ self.coverage.data.add_to_hash(cu.filename, m)
+ return m.digest()
+
+ def html_file(self, cu, analysis):
+ """Generate an HTML file for one source file."""
+ source_file = cu.source_file()
+ try:
+ source = source_file.read()
+ finally:
+ source_file.close()
+
+ # Find out if the file on disk is already correct.
+ flat_rootname = cu.flat_rootname()
+ this_hash = self.file_hash(source, cu)
+ that_hash = self.status.file_hash(flat_rootname)
+ if this_hash == that_hash:
+ # Nothing has changed to require the file to be reported again.
+ self.files.append(self.status.index_info(flat_rootname))
+ return
+
+ self.status.set_file_hash(flat_rootname, this_hash)
+
+ nums = analysis.numbers
+
+ missing_branch_arcs = analysis.missing_branch_arcs()
+ n_par = 0 # accumulated below.
+ arcs = self.arcs
+
+ # These classes determine which lines are highlighted by default.
+ c_run = "run hide_run"
+ c_exc = "exc"
+ c_mis = "mis"
+ c_par = "par " + c_run
+
+ lines = []
+
+ for lineno, line in enumerate(source_token_lines(source)):
+ lineno += 1 # 1-based line numbers.
+ # Figure out how to mark this line.
+ line_class = []
+ annotate_html = ""
+ annotate_title = ""
+ if lineno in analysis.statements:
+ line_class.append("stm")
+ if lineno in analysis.excluded:
+ line_class.append(c_exc)
+ elif lineno in analysis.missing:
+ line_class.append(c_mis)
+ elif self.arcs and lineno in missing_branch_arcs:
+ line_class.append(c_par)
+ n_par += 1
+ annlines = []
+ for b in missing_branch_arcs[lineno]:
+ if b < 0:
+ annlines.append("exit")
+ else:
+ annlines.append(str(b))
+ annotate_html = " ".join(annlines)
+ if len(annlines) > 1:
+ annotate_title = "no jumps to these line numbers"
+ elif len(annlines) == 1:
+ annotate_title = "no jump to this line number"
+ elif lineno in analysis.statements:
+ line_class.append(c_run)
+
+ # Build the HTML for the line
+ html = []
+ for tok_type, tok_text in line:
+ if tok_type == "ws":
+ html.append(escape(tok_text))
+ else:
+ tok_html = escape(tok_text) or ' '
+ html.append(
+ "<span class='%s'>%s</span>" % (tok_type, tok_html)
+ )
+
+ lines.append({
+ 'html': ''.join(html),
+ 'number': lineno,
+ 'class': ' '.join(line_class) or "pln",
+ 'annotate': annotate_html,
+ 'annotate_title': annotate_title,
+ })
+
+ # Write the HTML page for this file.
+ html_filename = flat_rootname + ".html"
+ html_path = os.path.join(self.directory, html_filename)
+ html = spaceless(self.source_tmpl.render(locals()))
+ fhtml = open(html_path, 'w')
+ try:
+ fhtml.write(html)
+ finally:
+ fhtml.close()
+
+ # Save this file's information for the index file.
+ index_info = {
+ 'nums': nums,
+ 'par': n_par,
+ 'html_filename': html_filename,
+ 'name': cu.name,
+ }
+ self.files.append(index_info)
+ self.status.set_index_info(flat_rootname, index_info)
+
+ def index_file(self):
+ """Write the index.html file for this report."""
+ index_tmpl = Templite(
+ data("htmlfiles/index.html"), self.template_globals
+ )
+
+ files = self.files
+ arcs = self.arcs
+
+ totals = sum([f['nums'] for f in files])
+
+ fhtml = open(os.path.join(self.directory, "index.html"), "w")
+ try:
+ fhtml.write(index_tmpl.render(locals()))
+ finally:
+ fhtml.close()
+
+ # Write the latest hashes for next time.
+ self.status.write(self.directory)
+
+
+class HtmlStatus(object):
+ """The status information we keep to support incremental reporting."""
+
+ STATUS_FILE = "status.dat"
+ STATUS_FORMAT = 1
+
+ def __init__(self):
+ self.reset()
+
+ def reset(self):
+ """Initialize to empty."""
+ self.settings = ''
+ self.files = {}
+
+ def read(self, directory):
+ """Read the last status in `directory`."""
+ usable = False
+ try:
+ status_file = os.path.join(directory, self.STATUS_FILE)
+ status = pickle.load(open(status_file, "rb"))
+ except IOError:
+ usable = False
+ else:
+ usable = True
+ if status['format'] != self.STATUS_FORMAT:
+ usable = False
+ elif status['version'] != coverage.__version__:
+ usable = False
+
+ if usable:
+ self.files = status['files']
+ self.settings = status['settings']
+ else:
+ self.reset()
+
+ def write(self, directory):
+ """Write the current status to `directory`."""
+ status_file = os.path.join(directory, self.STATUS_FILE)
+ status = {
+ 'format': self.STATUS_FORMAT,
+ 'version': coverage.__version__,
+ 'settings': self.settings,
+ 'files': self.files,
+ }
+ fout = open(status_file, "wb")
+ try:
+ pickle.dump(status, fout)
+ finally:
+ fout.close()
+
+ def settings_hash(self):
+ """Get the hash of the coverage.py settings."""
+ return self.settings
+
+ def set_settings_hash(self, settings):
+ """Set the hash of the coverage.py settings."""
+ self.settings = settings
+
+ def file_hash(self, fname):
+ """Get the hash of `fname`'s contents."""
+ return self.files.get(fname, {}).get('hash', '')
+
+ def set_file_hash(self, fname, val):
+ """Set the hash of `fname`'s contents."""
+ self.files.setdefault(fname, {})['hash'] = val
+
+ def index_info(self, fname):
+ """Get the information for index.html for `fname`."""
+ return self.files.get(fname, {}).get('index', {})
+
+ def set_index_info(self, fname, info):
+ """Set the information for index.html for `fname`."""
+ self.files.setdefault(fname, {})['index'] = info
+
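The `HtmlStatus` class above supports incremental reporting by pickling a dict stamped with a format number and the coverage.py version, and discarding it on any mismatch. A minimal standalone sketch of the same persist-and-validate pattern (the function names and the version string here are illustrative, not part of the patch):

```python
# Round-trip sketch of the HtmlStatus idea: persist a format/version-
# stamped dict with pickle and treat any mismatch as "no usable cache".
import os
import pickle
import tempfile

STATUS_FORMAT = 1
VERSION = "3.5"  # stands in for coverage.__version__

def write_status(directory, files):
    status = {'format': STATUS_FORMAT, 'version': VERSION,
              'settings': '', 'files': files}
    with open(os.path.join(directory, "status.dat"), "wb") as fout:
        pickle.dump(status, fout)

def read_status(directory):
    try:
        with open(os.path.join(directory, "status.dat"), "rb") as fin:
            status = pickle.load(fin)
    except IOError:
        return {}
    # A stale format or a different coverage.py version makes the
    # cached data unusable, so start from scratch.
    if status.get('format') != STATUS_FORMAT or status.get('version') != VERSION:
        return {}
    return status['files']

d = tempfile.mkdtemp()
write_status(d, {'mod_py': {'hash': 'abc'}})
print(read_status(d))  # {'mod_py': {'hash': 'abc'}}
```

Tying the cache to the exact version is a deliberately blunt invalidation rule: regenerating every page after an upgrade is cheap compared to debugging a report built from mixed-version data.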
+
+# Helpers for templates and generating HTML
+
+def escape(t):
+ """HTML-escape the text in `t`."""
+ return (t
+ # Convert HTML special chars into HTML entities.
+ .replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
+ .replace("'", "&#39;").replace('"', "&quot;")
+ # Convert runs of spaces: "......" -> "&nbsp;.&nbsp;.&nbsp;."
+ .replace("  ", "&nbsp; ")
+ # To deal with odd-length runs, convert the final pair of spaces
+ # so that "....." -> "&nbsp;.&nbsp;&nbsp;."
+ .replace("  ", " &nbsp;")
+ )
+
+def spaceless(html):
+ """Squeeze out some annoying extra space from an HTML string.
+
+ Nicely-formatted templates mean lots of extra space in the result.
+ Get rid of some.
+
+ """
+ html = re.sub(r">\s+<p ", ">\n<p ", html)
+ return html
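The `escape()` helper in this hunk turns HTML-special characters into entities and converts runs of spaces into alternating `&nbsp;` so browsers preserve source indentation. A standalone, runnable sketch of that behavior with sample inputs:

```python
# Standalone copy of the escape() helper above, exercised on sample
# input. Replacing "&" first matters: the "&" introduced by the later
# entities ("&lt;", "&quot;", ...) must not itself be escaped again.
def escape(t):
    return (t
        # Convert HTML special chars into HTML entities.
        .replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
        .replace("'", "&#39;").replace('"', "&quot;")
        # Convert runs of spaces so the browser keeps them.
        .replace("  ", "&nbsp; ")
        # Handle the leftover pair in odd-length runs.
        .replace("  ", " &nbsp;")
    )

print(escape('<a href="x">'))  # &lt;a href=&quot;x&quot;&gt;
print(escape("a  b"))          # a&nbsp; b
```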
diff --git a/python/helpers/coverage/htmlfiles/coverage_html.js b/python/helpers/coverage/htmlfiles/coverage_html.js
new file mode 100644
index 0000000..da3e22c
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/coverage_html.js
@@ -0,0 +1,372 @@
+// Coverage.py HTML report browser code.
+/*jslint browser: true, sloppy: true, vars: true, plusplus: true, maxerr: 50, indent: 4 */
+/*global coverage: true, document, window, $ */
+
+coverage = {};
+
+// Find all the elements with shortkey_* class, and use them to assign a shortcut key.
+coverage.assign_shortkeys = function () {
+ $("*[class*='shortkey_']").each(function (i, e) {
+ $.each($(e).attr("class").split(" "), function (i, c) {
+ if (/^shortkey_/.test(c)) {
+ $(document).bind('keydown', c.substr(9), function () {
+ $(e).click();
+ });
+ }
+ });
+ });
+};
+
+// Create the events for the help panel.
+coverage.wire_up_help_panel = function () {
+ $("#keyboard_icon").click(function () {
+ // Show the help panel, and position it so the keyboard icon in the
+ // panel is in the same place as the keyboard icon in the header.
+ $(".help_panel").show();
+ var koff = $("#keyboard_icon").offset();
+ var poff = $("#panel_icon").position();
+ $(".help_panel").offset({
+ top: koff.top-poff.top,
+ left: koff.left-poff.left
+ });
+ });
+ $("#panel_icon").click(function () {
+ $(".help_panel").hide();
+ });
+};
+
+// Loaded on index.html
+coverage.index_ready = function ($) {
+ // Look for a cookie containing previous sort settings:
+ var sort_list = [];
+ var cookie_name = "COVERAGE_INDEX_SORT";
+ var i;
+
+ // This almost makes it worth installing the jQuery cookie plugin:
+ if (document.cookie.indexOf(cookie_name) > -1) {
+ var cookies = document.cookie.split(";");
+ for (i = 0; i < cookies.length; i++) {
+ var parts = cookies[i].split("=");
+
+ if ($.trim(parts[0]) === cookie_name && parts[1]) {
+ sort_list = eval("[[" + parts[1] + "]]");
+ break;
+ }
+ }
+ }
+
+ // Create a new widget which exists only to save and restore
+ // the sort order:
+ $.tablesorter.addWidget({
+ id: "persistentSort",
+
+ // Format is called by the widget before displaying:
+ format: function (table) {
+ if (table.config.sortList.length === 0 && sort_list.length > 0) {
+ // This table hasn't been sorted before - we'll use
+ // our stored settings:
+ $(table).trigger('sorton', [sort_list]);
+ }
+ else {
+ // This is not the first load - something has
+ // already defined sorting so we'll just update
+ // our stored value to match:
+ sort_list = table.config.sortList;
+ }
+ }
+ });
+
+ // Configure our tablesorter to handle the variable number of
+ // columns produced depending on report options:
+ var headers = [];
+ var col_count = $("table.index > thead > tr > th").length;
+
+ headers[0] = { sorter: 'text' };
+ for (i = 1; i < col_count-1; i++) {
+ headers[i] = { sorter: 'digit' };
+ }
+ headers[col_count-1] = { sorter: 'percent' };
+
+ // Enable the table sorter:
+ $("table.index").tablesorter({
+ widgets: ['persistentSort'],
+ headers: headers
+ });
+
+ coverage.assign_shortkeys();
+ coverage.wire_up_help_panel();
+
+ // Watch for page unload events so we can save the final sort settings:
+ $(window).unload(function () {
+ document.cookie = cookie_name + "=" + sort_list.toString() + "; path=/";
+ });
+};
+
+// -- pyfile stuff --
+
+coverage.pyfile_ready = function ($) {
+ // If we're directed to a particular line number, highlight the line.
+ var frag = location.hash;
+ if (frag.length > 2 && frag[1] === 'n') {
+ $(frag).addClass('highlight');
+ coverage.set_sel(parseInt(frag.substr(2), 10));
+ }
+ else {
+ coverage.set_sel(0);
+ }
+
+ $(document)
+ .bind('keydown', 'j', coverage.to_next_chunk_nicely)
+ .bind('keydown', 'k', coverage.to_prev_chunk_nicely)
+ .bind('keydown', '0', coverage.to_top)
+ .bind('keydown', '1', coverage.to_first_chunk)
+ ;
+
+ coverage.assign_shortkeys();
+ coverage.wire_up_help_panel();
+};
+
+coverage.toggle_lines = function (btn, cls) {
+ btn = $(btn);
+ var hide = "hide_"+cls;
+ if (btn.hasClass(hide)) {
+ $("#source ."+cls).removeClass(hide);
+ btn.removeClass(hide);
+ }
+ else {
+ $("#source ."+cls).addClass(hide);
+ btn.addClass(hide);
+ }
+};
+
+// Return the nth line div.
+coverage.line_elt = function (n) {
+ return $("#t" + n);
+};
+
+// Return the nth line number div.
+coverage.num_elt = function (n) {
+ return $("#n" + n);
+};
+
+// Return the container of all the code.
+coverage.code_container = function () {
+ return $(".linenos");
+};
+
+// Set the selection. b and e are line numbers.
+coverage.set_sel = function (b, e) {
+ // The first line selected.
+ coverage.sel_begin = b;
+ // The next line not selected.
+ coverage.sel_end = (e === undefined) ? b+1 : e;
+};
+
+coverage.to_top = function () {
+ coverage.set_sel(0, 1);
+ coverage.scroll_window(0);
+};
+
+coverage.to_first_chunk = function () {
+ coverage.set_sel(0, 1);
+ coverage.to_next_chunk();
+};
+
+coverage.is_transparent = function (color) {
+ // Different browsers return different colors for "none".
+ return color === "transparent" || color === "rgba(0, 0, 0, 0)";
+};
+
+coverage.to_next_chunk = function () {
+ var c = coverage;
+
+ // Find the start of the next colored chunk.
+ var probe = c.sel_end;
+ while (true) {
+ var probe_line = c.line_elt(probe);
+ if (probe_line.length === 0) {
+ return;
+ }
+ var color = probe_line.css("background-color");
+ if (!c.is_transparent(color)) {
+ break;
+ }
+ probe++;
+ }
+
+ // There's a next chunk, `probe` points to it.
+ var begin = probe;
+
+ // Find the end of this chunk.
+ var next_color = color;
+ while (next_color === color) {
+ probe++;
+ probe_line = c.line_elt(probe);
+ next_color = probe_line.css("background-color");
+ }
+ c.set_sel(begin, probe);
+ c.show_selection();
+};
+
+coverage.to_prev_chunk = function () {
+ var c = coverage;
+
+ // Find the end of the prev colored chunk.
+ var probe = c.sel_begin-1;
+ var probe_line = c.line_elt(probe);
+ if (probe_line.length === 0) {
+ return;
+ }
+ var color = probe_line.css("background-color");
+ while (probe > 0 && c.is_transparent(color)) {
+ probe--;
+ probe_line = c.line_elt(probe);
+ if (probe_line.length === 0) {
+ return;
+ }
+ color = probe_line.css("background-color");
+ }
+
+ // There's a prev chunk, `probe` points to its last line.
+ var end = probe+1;
+
+ // Find the beginning of this chunk.
+ var prev_color = color;
+ while (prev_color === color) {
+ probe--;
+ probe_line = c.line_elt(probe);
+ prev_color = probe_line.css("background-color");
+ }
+ c.set_sel(probe+1, end);
+ c.show_selection();
+};
+
+// Return the line number of the line nearest pixel position pos
+coverage.line_at_pos = function (pos) {
+ var l1 = coverage.line_elt(1),
+ l2 = coverage.line_elt(2),
+ result;
+ if (l1.length && l2.length) {
+ var l1_top = l1.offset().top,
+ line_height = l2.offset().top - l1_top,
+ nlines = (pos - l1_top) / line_height;
+ if (nlines < 1) {
+ result = 1;
+ }
+ else {
+ result = Math.ceil(nlines);
+ }
+ }
+ else {
+ result = 1;
+ }
+ return result;
+};
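The arithmetic in `line_at_pos` infers the line height from the first two line divs and maps a pixel position to a 1-based line number. The same calculation as a small Python sketch (parameter names are illustrative):

```python
import math

# Pixel-to-line arithmetic from line_at_pos(): given the top offset of
# line 1 and a uniform line height, map a y position to a 1-based line
# number, clamping anything above line 1 to 1.
def line_at_pos(pos, l1_top, line_height):
    nlines = (pos - l1_top) / line_height
    return 1 if nlines < 1 else math.ceil(nlines)

print(line_at_pos(100, 40, 16))  # ceil(60/16) = 4
print(line_at_pos(30, 40, 16))   # above line 1, clamps to 1
```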
+
+// Returns 0, 1, or 2: how many of the two ends of the selection are on
+// the screen right now?
+coverage.selection_ends_on_screen = function () {
+ if (coverage.sel_begin === 0) {
+ return 0;
+ }
+
+ var top = coverage.line_elt(coverage.sel_begin);
+ var next = coverage.line_elt(coverage.sel_end-1);
+
+ return (
+ (top.isOnScreen() ? 1 : 0) +
+ (next.isOnScreen() ? 1 : 0)
+ );
+};
+
+coverage.to_next_chunk_nicely = function () {
+ coverage.finish_scrolling();
+ if (coverage.selection_ends_on_screen() === 0) {
+ // The selection is entirely off the screen: select the top line on
+ // the screen.
+ var win = $(window);
+ coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop()));
+ }
+ coverage.to_next_chunk();
+};
+
+coverage.to_prev_chunk_nicely = function () {
+ coverage.finish_scrolling();
+ if (coverage.selection_ends_on_screen() === 0) {
+ var win = $(window);
+ coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop() + win.height()));
+ }
+ coverage.to_prev_chunk();
+};
+
+// Select line number lineno, or if it is in a colored chunk, select the
+// entire chunk
+coverage.select_line_or_chunk = function (lineno) {
+ var c = coverage;
+ var probe_line = c.line_elt(lineno);
+ if (probe_line.length === 0) {
+ return;
+ }
+ var the_color = probe_line.css("background-color");
+ if (!c.is_transparent(the_color)) {
+ // The line is in a highlighted chunk.
+ // Search backward for the first line.
+ var probe = lineno;
+ var color = the_color;
+ while (probe > 0 && color === the_color) {
+ probe--;
+ probe_line = c.line_elt(probe);
+ if (probe_line.length === 0) {
+ break;
+ }
+ color = probe_line.css("background-color");
+ }
+ var begin = probe + 1;
+
+ // Search forward for the last line.
+ probe = lineno;
+ color = the_color;
+ while (color === the_color) {
+ probe++;
+ probe_line = c.line_elt(probe);
+ color = probe_line.css("background-color");
+ }
+
+ coverage.set_sel(begin, probe);
+ }
+ else {
+ coverage.set_sel(lineno);
+ }
+};
+
+coverage.show_selection = function () {
+ var c = coverage;
+
+ // Highlight the lines in the chunk
+ c.code_container().find(".highlight").removeClass("highlight");
+ for (var probe = c.sel_begin; probe > 0 && probe < c.sel_end; probe++) {
+ c.num_elt(probe).addClass("highlight");
+ }
+
+ c.scroll_to_selection();
+};
+
+coverage.scroll_to_selection = function () {
+ // Scroll the page if the chunk isn't fully visible.
+ if (coverage.selection_ends_on_screen() < 2) {
+ // Need to move the page. The html,body trick makes it scroll in all
+ // browsers, got it from http://stackoverflow.com/questions/3042651
+ var top = coverage.line_elt(coverage.sel_begin);
+ var top_pos = parseInt(top.offset().top, 10);
+ coverage.scroll_window(top_pos - 30);
+ }
+};
+
+coverage.scroll_window = function (to_pos) {
+ $("html,body").animate({scrollTop: to_pos}, 200);
+};
+
+coverage.finish_scrolling = function () {
+ $("html,body").stop(true, true);
+};
+
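The chunk navigation in `to_next_chunk`/`to_prev_chunk` above is a run-boundary scan: skip "transparent" lines, then extend through the run of lines sharing one background color. The same algorithm as a standalone Python sketch (the list-of-colors model is illustrative; the JS works against live CSS values):

```python
# Run-boundary scan mirroring to_next_chunk(): starting at the line
# just past the current selection, skip transparent lines, then extend
# through the run sharing the next chunk's color. Returns a (begin,
# end) pair of 1-based line numbers, end exclusive, or None when no
# further chunk exists.
def next_chunk(colors, sel_end, transparent="transparent"):
    probe = sel_end
    while probe <= len(colors) and colors[probe - 1] == transparent:
        probe += 1
    if probe > len(colors):
        return None
    begin, color = probe, colors[probe - 1]
    while probe <= len(colors) and colors[probe - 1] == color:
        probe += 1
    return (begin, probe)

colors = ["transparent", "red", "red", "transparent", "green"]
print(next_chunk(colors, 1))  # (2, 4)
print(next_chunk(colors, 4))  # (5, 6)
print(next_chunk(colors, 6))  # None
```

Scanning by color rather than keeping an explicit chunk list means the JS needs no extra data from the report generator: the highlighting CSS already encodes the chunk boundaries.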
diff --git a/python/helpers/coverage/htmlfiles/index.html b/python/helpers/coverage/htmlfiles/index.html
new file mode 100644
index 0000000..04b314a
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/index.html
@@ -0,0 +1,101 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+<html>
+<head>
+ <meta http-equiv='Content-Type' content='text/html; charset=utf-8'>
+ <title>Coverage report</title>
+ <link rel='stylesheet' href='style.css' type='text/css'>
+ <script type='text/javascript' src='jquery-1.4.3.min.js'></script>
+ <script type='text/javascript' src='jquery.tablesorter.min.js'></script>
+ <script type='text/javascript' src='jquery.hotkeys.js'></script>
+ <script type='text/javascript' src='coverage_html.js'></script>
+ <script type='text/javascript' charset='utf-8'>
+ jQuery(document).ready(coverage.index_ready);
+ </script>
+</head>
+<body id='indexfile'>
+
+<div id='header'>
+ <div class='content'>
+ <h1>Coverage report:
+ <span class='pc_cov'>{{totals.pc_covered_str}}%</span>
+ </h1>
+ <img id='keyboard_icon' src='keybd_closed.png'>
+ </div>
+</div>
+
+<div class='help_panel'>
+ <img id='panel_icon' src='keybd_open.png'>
+ <p class='legend'>Hot-keys on this page</p>
+ <div>
+ <p class='keyhelp'>
+ <span class='key'>n</span>
+ <span class='key'>s</span>
+ <span class='key'>m</span>
+ <span class='key'>x</span>
+ {% if arcs %}
+ <span class='key'>b</span>
+ <span class='key'>p</span>
+ {% endif %}
+ <span class='key'>c</span> change column sorting
+ </p>
+ </div>
+</div>
+
+<div id='index'>
+ <table class='index'>
+ <thead>
+ {# The title='' attr doesn't work in Safari. #}
+ <tr class='tablehead' title='Click to sort'>
+ <th class='name left headerSortDown shortkey_n'>Module</th>
+ <th class='shortkey_s'>statements</th>
+ <th class='shortkey_m'>missing</th>
+ <th class='shortkey_x'>excluded</th>
+ {% if arcs %}
+ <th class='shortkey_b'>branches</th>
+ <th class='shortkey_p'>partial</th>
+ {% endif %}
+ <th class='right shortkey_c'>coverage</th>
+ </tr>
+ </thead>
+ {# HTML syntax requires thead, tfoot, tbody #}
+ <tfoot>
+ <tr class='total'>
+ <td class='name left'>Total</td>
+ <td>{{totals.n_statements}}</td>
+ <td>{{totals.n_missing}}</td>
+ <td>{{totals.n_excluded}}</td>
+ {% if arcs %}
+ <td>{{totals.n_branches}}</td>
+ <td>{{totals.n_missing_branches}}</td>
+ {% endif %}
+ <td class='right'>{{totals.pc_covered_str}}%</td>
+ </tr>
+ </tfoot>
+ <tbody>
+ {% for file in files %}
+ <tr class='file'>
+ <td class='name left'><a href='{{file.html_filename}}'>{{file.name}}</a></td>
+ <td>{{file.nums.n_statements}}</td>
+ <td>{{file.nums.n_missing}}</td>
+ <td>{{file.nums.n_excluded}}</td>
+ {% if arcs %}
+ <td>{{file.nums.n_branches}}</td>
+ <td>{{file.nums.n_missing_branches}}</td>
+ {% endif %}
+ <td class='right'>{{file.nums.pc_covered_str}}%</td>
+ </tr>
+ {% endfor %}
+ </tbody>
+ </table>
+</div>
+
+<div id='footer'>
+ <div class='content'>
+ <p>
+ <a class='nav' href='{{__url__}}'>coverage.py v{{__version__}}</a>
+ </p>
+ </div>
+</div>
+
+</body>
+</html>
diff --git a/python/helpers/coverage/htmlfiles/jquery-1.4.3.min.js b/python/helpers/coverage/htmlfiles/jquery-1.4.3.min.js
new file mode 100644
index 0000000..c941a5f
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/jquery-1.4.3.min.js
@@ -0,0 +1,166 @@
+/*!
+ * jQuery JavaScript Library v1.4.3
+ * http://jquery.com/
+ *
+ * Copyright 2010, John Resig
+ * Dual licensed under the MIT or GPL Version 2 licenses.
+ * http://jquery.org/license
+ *
+ * Includes Sizzle.js
+ * http://sizzlejs.com/
+ * Copyright 2010, The Dojo Foundation
+ * Released under the MIT, BSD, and GPL Licenses.
+ *
+ * Date: Thu Oct 14 23:10:06 2010 -0400
+ */
+(function(E,A){function U(){return false}function ba(){return true}function ja(a,b,d){d[0].type=a;return c.event.handle.apply(b,d)}function Ga(a){var b,d,e=[],f=[],h,k,l,n,s,v,B,D;k=c.data(this,this.nodeType?"events":"__events__");if(typeof k==="function")k=k.events;if(!(a.liveFired===this||!k||!k.live||a.button&&a.type==="click")){if(a.namespace)D=RegExp("(^|\\.)"+a.namespace.split(".").join("\\.(?:.*\\.)?")+"(\\.|$)");a.liveFired=this;var H=k.live.slice(0);for(n=0;n<H.length;n++){k=H[n];k.origType.replace(X,
+"")===a.type?f.push(k.selector):H.splice(n--,1)}f=c(a.target).closest(f,a.currentTarget);s=0;for(v=f.length;s<v;s++){B=f[s];for(n=0;n<H.length;n++){k=H[n];if(B.selector===k.selector&&(!D||D.test(k.namespace))){l=B.elem;h=null;if(k.preType==="mouseenter"||k.preType==="mouseleave"){a.type=k.preType;h=c(a.relatedTarget).closest(k.selector)[0]}if(!h||h!==l)e.push({elem:l,handleObj:k,level:B.level})}}}s=0;for(v=e.length;s<v;s++){f=e[s];if(d&&f.level>d)break;a.currentTarget=f.elem;a.data=f.handleObj.data;
+a.handleObj=f.handleObj;D=f.handleObj.origHandler.apply(f.elem,arguments);if(D===false||a.isPropagationStopped()){d=f.level;if(D===false)b=false}}return b}}function Y(a,b){return(a&&a!=="*"?a+".":"")+b.replace(Ha,"`").replace(Ia,"&")}function ka(a,b,d){if(c.isFunction(b))return c.grep(a,function(f,h){return!!b.call(f,h,f)===d});else if(b.nodeType)return c.grep(a,function(f){return f===b===d});else if(typeof b==="string"){var e=c.grep(a,function(f){return f.nodeType===1});if(Ja.test(b))return c.filter(b,
+e,!d);else b=c.filter(b,e)}return c.grep(a,function(f){return c.inArray(f,b)>=0===d})}function la(a,b){var d=0;b.each(function(){if(this.nodeName===(a[d]&&a[d].nodeName)){var e=c.data(a[d++]),f=c.data(this,e);if(e=e&&e.events){delete f.handle;f.events={};for(var h in e)for(var k in e[h])c.event.add(this,h,e[h][k],e[h][k].data)}}})}function Ka(a,b){b.src?c.ajax({url:b.src,async:false,dataType:"script"}):c.globalEval(b.text||b.textContent||b.innerHTML||"");b.parentNode&&b.parentNode.removeChild(b)}
+function ma(a,b,d){var e=b==="width"?a.offsetWidth:a.offsetHeight;if(d==="border")return e;c.each(b==="width"?La:Ma,function(){d||(e-=parseFloat(c.css(a,"padding"+this))||0);if(d==="margin")e+=parseFloat(c.css(a,"margin"+this))||0;else e-=parseFloat(c.css(a,"border"+this+"Width"))||0});return e}function ca(a,b,d,e){if(c.isArray(b)&&b.length)c.each(b,function(f,h){d||Na.test(a)?e(a,h):ca(a+"["+(typeof h==="object"||c.isArray(h)?f:"")+"]",h,d,e)});else if(!d&&b!=null&&typeof b==="object")c.isEmptyObject(b)?
+e(a,""):c.each(b,function(f,h){ca(a+"["+f+"]",h,d,e)});else e(a,b)}function S(a,b){var d={};c.each(na.concat.apply([],na.slice(0,b)),function(){d[this]=a});return d}function oa(a){if(!da[a]){var b=c("<"+a+">").appendTo("body"),d=b.css("display");b.remove();if(d==="none"||d==="")d="block";da[a]=d}return da[a]}function ea(a){return c.isWindow(a)?a:a.nodeType===9?a.defaultView||a.parentWindow:false}var u=E.document,c=function(){function a(){if(!b.isReady){try{u.documentElement.doScroll("left")}catch(i){setTimeout(a,
+1);return}b.ready()}}var b=function(i,r){return new b.fn.init(i,r)},d=E.jQuery,e=E.$,f,h=/^(?:[^<]*(<[\w\W]+>)[^>]*$|#([\w\-]+)$)/,k=/\S/,l=/^\s+/,n=/\s+$/,s=/\W/,v=/\d/,B=/^<(\w+)\s*\/?>(?:<\/\1>)?$/,D=/^[\],:{}\s]*$/,H=/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g,w=/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g,G=/(?:^|:|,)(?:\s*\[)+/g,M=/(webkit)[ \/]([\w.]+)/,g=/(opera)(?:.*version)?[ \/]([\w.]+)/,j=/(msie) ([\w.]+)/,o=/(mozilla)(?:.*? rv:([\w.]+))?/,m=navigator.userAgent,p=false,
+q=[],t,x=Object.prototype.toString,C=Object.prototype.hasOwnProperty,P=Array.prototype.push,N=Array.prototype.slice,R=String.prototype.trim,Q=Array.prototype.indexOf,L={};b.fn=b.prototype={init:function(i,r){var y,z,F;if(!i)return this;if(i.nodeType){this.context=this[0]=i;this.length=1;return this}if(i==="body"&&!r&&u.body){this.context=u;this[0]=u.body;this.selector="body";this.length=1;return this}if(typeof i==="string")if((y=h.exec(i))&&(y[1]||!r))if(y[1]){F=r?r.ownerDocument||r:u;if(z=B.exec(i))if(b.isPlainObject(r)){i=
+[u.createElement(z[1])];b.fn.attr.call(i,r,true)}else i=[F.createElement(z[1])];else{z=b.buildFragment([y[1]],[F]);i=(z.cacheable?z.fragment.cloneNode(true):z.fragment).childNodes}return b.merge(this,i)}else{if((z=u.getElementById(y[2]))&&z.parentNode){if(z.id!==y[2])return f.find(i);this.length=1;this[0]=z}this.context=u;this.selector=i;return this}else if(!r&&!s.test(i)){this.selector=i;this.context=u;i=u.getElementsByTagName(i);return b.merge(this,i)}else return!r||r.jquery?(r||f).find(i):b(r).find(i);
+else if(b.isFunction(i))return f.ready(i);if(i.selector!==A){this.selector=i.selector;this.context=i.context}return b.makeArray(i,this)},selector:"",jquery:"1.4.3",length:0,size:function(){return this.length},toArray:function(){return N.call(this,0)},get:function(i){return i==null?this.toArray():i<0?this.slice(i)[0]:this[i]},pushStack:function(i,r,y){var z=b();b.isArray(i)?P.apply(z,i):b.merge(z,i);z.prevObject=this;z.context=this.context;if(r==="find")z.selector=this.selector+(this.selector?" ":
+"")+y;else if(r)z.selector=this.selector+"."+r+"("+y+")";return z},each:function(i,r){return b.each(this,i,r)},ready:function(i){b.bindReady();if(b.isReady)i.call(u,b);else q&&q.push(i);return this},eq:function(i){return i===-1?this.slice(i):this.slice(i,+i+1)},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},slice:function(){return this.pushStack(N.apply(this,arguments),"slice",N.call(arguments).join(","))},map:function(i){return this.pushStack(b.map(this,function(r,y){return i.call(r,
+y,r)}))},end:function(){return this.prevObject||b(null)},push:P,sort:[].sort,splice:[].splice};b.fn.init.prototype=b.fn;b.extend=b.fn.extend=function(){var i=arguments[0]||{},r=1,y=arguments.length,z=false,F,I,K,J,fa;if(typeof i==="boolean"){z=i;i=arguments[1]||{};r=2}if(typeof i!=="object"&&!b.isFunction(i))i={};if(y===r){i=this;--r}for(;r<y;r++)if((F=arguments[r])!=null)for(I in F){K=i[I];J=F[I];if(i!==J)if(z&&J&&(b.isPlainObject(J)||(fa=b.isArray(J)))){if(fa){fa=false;clone=K&&b.isArray(K)?K:[]}else clone=
+K&&b.isPlainObject(K)?K:{};i[I]=b.extend(z,clone,J)}else if(J!==A)i[I]=J}return i};b.extend({noConflict:function(i){E.$=e;if(i)E.jQuery=d;return b},isReady:false,readyWait:1,ready:function(i){i===true&&b.readyWait--;if(!b.readyWait||i!==true&&!b.isReady){if(!u.body)return setTimeout(b.ready,1);b.isReady=true;if(!(i!==true&&--b.readyWait>0)){if(q){for(var r=0;i=q[r++];)i.call(u,b);q=null}b.fn.triggerHandler&&b(u).triggerHandler("ready")}}},bindReady:function(){if(!p){p=true;if(u.readyState==="complete")return setTimeout(b.ready,
+1);if(u.addEventListener){u.addEventListener("DOMContentLoaded",t,false);E.addEventListener("load",b.ready,false)}else if(u.attachEvent){u.attachEvent("onreadystatechange",t);E.attachEvent("onload",b.ready);var i=false;try{i=E.frameElement==null}catch(r){}u.documentElement.doScroll&&i&&a()}}},isFunction:function(i){return b.type(i)==="function"},isArray:Array.isArray||function(i){return b.type(i)==="array"},isWindow:function(i){return i&&typeof i==="object"&&"setInterval"in i},isNaN:function(i){return i==
+null||!v.test(i)||isNaN(i)},type:function(i){return i==null?String(i):L[x.call(i)]||"object"},isPlainObject:function(i){if(!i||b.type(i)!=="object"||i.nodeType||b.isWindow(i))return false;if(i.constructor&&!C.call(i,"constructor")&&!C.call(i.constructor.prototype,"isPrototypeOf"))return false;for(var r in i);return r===A||C.call(i,r)},isEmptyObject:function(i){for(var r in i)return false;return true},error:function(i){throw i;},parseJSON:function(i){if(typeof i!=="string"||!i)return null;i=b.trim(i);
+if(D.test(i.replace(H,"@").replace(w,"]").replace(G,"")))return E.JSON&&E.JSON.parse?E.JSON.parse(i):(new Function("return "+i))();else b.error("Invalid JSON: "+i)},noop:function(){},globalEval:function(i){if(i&&k.test(i)){var r=u.getElementsByTagName("head")[0]||u.documentElement,y=u.createElement("script");y.type="text/javascript";if(b.support.scriptEval)y.appendChild(u.createTextNode(i));else y.text=i;r.insertBefore(y,r.firstChild);r.removeChild(y)}},nodeName:function(i,r){return i.nodeName&&i.nodeName.toUpperCase()===
+r.toUpperCase()},each:function(i,r,y){var z,F=0,I=i.length,K=I===A||b.isFunction(i);if(y)if(K)for(z in i){if(r.apply(i[z],y)===false)break}else for(;F<I;){if(r.apply(i[F++],y)===false)break}else if(K)for(z in i){if(r.call(i[z],z,i[z])===false)break}else for(y=i[0];F<I&&r.call(y,F,y)!==false;y=i[++F]);return i},trim:R?function(i){return i==null?"":R.call(i)}:function(i){return i==null?"":i.toString().replace(l,"").replace(n,"")},makeArray:function(i,r){var y=r||[];if(i!=null){var z=b.type(i);i.length==
+null||z==="string"||z==="function"||z==="regexp"||b.isWindow(i)?P.call(y,i):b.merge(y,i)}return y},inArray:function(i,r){if(r.indexOf)return r.indexOf(i);for(var y=0,z=r.length;y<z;y++)if(r[y]===i)return y;return-1},merge:function(i,r){var y=i.length,z=0;if(typeof r.length==="number")for(var F=r.length;z<F;z++)i[y++]=r[z];else for(;r[z]!==A;)i[y++]=r[z++];i.length=y;return i},grep:function(i,r,y){var z=[],F;y=!!y;for(var I=0,K=i.length;I<K;I++){F=!!r(i[I],I);y!==F&&z.push(i[I])}return z},map:function(i,
+r,y){for(var z=[],F,I=0,K=i.length;I<K;I++){F=r(i[I],I,y);if(F!=null)z[z.length]=F}return z.concat.apply([],z)},guid:1,proxy:function(i,r,y){if(arguments.length===2)if(typeof r==="string"){y=i;i=y[r];r=A}else if(r&&!b.isFunction(r)){y=r;r=A}if(!r&&i)r=function(){return i.apply(y||this,arguments)};if(i)r.guid=i.guid=i.guid||r.guid||b.guid++;return r},access:function(i,r,y,z,F,I){var K=i.length;if(typeof r==="object"){for(var J in r)b.access(i,J,r[J],z,F,y);return i}if(y!==A){z=!I&&z&&b.isFunction(y);
+for(J=0;J<K;J++)F(i[J],r,z?y.call(i[J],J,F(i[J],r)):y,I);return i}return K?F(i[0],r):A},now:function(){return(new Date).getTime()},uaMatch:function(i){i=i.toLowerCase();i=M.exec(i)||g.exec(i)||j.exec(i)||i.indexOf("compatible")<0&&o.exec(i)||[];return{browser:i[1]||"",version:i[2]||"0"}},browser:{}});b.each("Boolean Number String Function Array Date RegExp Object".split(" "),function(i,r){L["[object "+r+"]"]=r.toLowerCase()});m=b.uaMatch(m);if(m.browser){b.browser[m.browser]=true;b.browser.version=
+m.version}if(b.browser.webkit)b.browser.safari=true;if(Q)b.inArray=function(i,r){return Q.call(r,i)};if(!/\s/.test("\u00a0")){l=/^[\s\xA0]+/;n=/[\s\xA0]+$/}f=b(u);if(u.addEventListener)t=function(){u.removeEventListener("DOMContentLoaded",t,false);b.ready()};else if(u.attachEvent)t=function(){if(u.readyState==="complete"){u.detachEvent("onreadystatechange",t);b.ready()}};return E.jQuery=E.$=b}();(function(){c.support={};var a=u.documentElement,b=u.createElement("script"),d=u.createElement("div"),
+e="script"+c.now();d.style.display="none";d.innerHTML=" <link/><table></table><a href='/a' style='color:red;float:left;opacity:.55;'>a</a><input type='checkbox'/>";var f=d.getElementsByTagName("*"),h=d.getElementsByTagName("a")[0],k=u.createElement("select"),l=k.appendChild(u.createElement("option"));if(!(!f||!f.length||!h)){c.support={leadingWhitespace:d.firstChild.nodeType===3,tbody:!d.getElementsByTagName("tbody").length,htmlSerialize:!!d.getElementsByTagName("link").length,style:/red/.test(h.getAttribute("style")),
+hrefNormalized:h.getAttribute("href")==="/a",opacity:/^0.55$/.test(h.style.opacity),cssFloat:!!h.style.cssFloat,checkOn:d.getElementsByTagName("input")[0].value==="on",optSelected:l.selected,optDisabled:false,checkClone:false,scriptEval:false,noCloneEvent:true,boxModel:null,inlineBlockNeedsLayout:false,shrinkWrapBlocks:false,reliableHiddenOffsets:true};k.disabled=true;c.support.optDisabled=!l.disabled;b.type="text/javascript";try{b.appendChild(u.createTextNode("window."+e+"=1;"))}catch(n){}a.insertBefore(b,
+a.firstChild);if(E[e]){c.support.scriptEval=true;delete E[e]}a.removeChild(b);if(d.attachEvent&&d.fireEvent){d.attachEvent("onclick",function s(){c.support.noCloneEvent=false;d.detachEvent("onclick",s)});d.cloneNode(true).fireEvent("onclick")}d=u.createElement("div");d.innerHTML="<input type='radio' name='radiotest' checked='checked'/>";a=u.createDocumentFragment();a.appendChild(d.firstChild);c.support.checkClone=a.cloneNode(true).cloneNode(true).lastChild.checked;c(function(){var s=u.createElement("div");
+s.style.width=s.style.paddingLeft="1px";u.body.appendChild(s);c.boxModel=c.support.boxModel=s.offsetWidth===2;if("zoom"in s.style){s.style.display="inline";s.style.zoom=1;c.support.inlineBlockNeedsLayout=s.offsetWidth===2;s.style.display="";s.innerHTML="<div style='width:4px;'></div>";c.support.shrinkWrapBlocks=s.offsetWidth!==2}s.innerHTML="<table><tr><td style='padding:0;display:none'></td><td>t</td></tr></table>";var v=s.getElementsByTagName("td");c.support.reliableHiddenOffsets=v[0].offsetHeight===
+0;v[0].style.display="";v[1].style.display="none";c.support.reliableHiddenOffsets=c.support.reliableHiddenOffsets&&v[0].offsetHeight===0;s.innerHTML="";u.body.removeChild(s).style.display="none"});a=function(s){var v=u.createElement("div");s="on"+s;var B=s in v;if(!B){v.setAttribute(s,"return;");B=typeof v[s]==="function"}return B};c.support.submitBubbles=a("submit");c.support.changeBubbles=a("change");a=b=d=f=h=null}})();c.props={"for":"htmlFor","class":"className",readonly:"readOnly",maxlength:"maxLength",
+cellspacing:"cellSpacing",rowspan:"rowSpan",colspan:"colSpan",tabindex:"tabIndex",usemap:"useMap",frameborder:"frameBorder"};var pa={},Oa=/^(?:\{.*\}|\[.*\])$/;c.extend({cache:{},uuid:0,expando:"jQuery"+c.now(),noData:{embed:true,object:"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000",applet:true},data:function(a,b,d){if(c.acceptData(a)){a=a==E?pa:a;var e=a.nodeType,f=e?a[c.expando]:null,h=c.cache;if(!(e&&!f&&typeof b==="string"&&d===A)){if(e)f||(a[c.expando]=f=++c.uuid);else h=a;if(typeof b==="object")if(e)h[f]=
+c.extend(h[f],b);else c.extend(h,b);else if(e&&!h[f])h[f]={};a=e?h[f]:h;if(d!==A)a[b]=d;return typeof b==="string"?a[b]:a}}},removeData:function(a,b){if(c.acceptData(a)){a=a==E?pa:a;var d=a.nodeType,e=d?a[c.expando]:a,f=c.cache,h=d?f[e]:e;if(b){if(h){delete h[b];d&&c.isEmptyObject(h)&&c.removeData(a)}}else if(d&&c.support.deleteExpando)delete a[c.expando];else if(a.removeAttribute)a.removeAttribute(c.expando);else if(d)delete f[e];else for(var k in a)delete a[k]}},acceptData:function(a){if(a.nodeName){var b=
+c.noData[a.nodeName.toLowerCase()];if(b)return!(b===true||a.getAttribute("classid")!==b)}return true}});c.fn.extend({data:function(a,b){if(typeof a==="undefined")return this.length?c.data(this[0]):null;else if(typeof a==="object")return this.each(function(){c.data(this,a)});var d=a.split(".");d[1]=d[1]?"."+d[1]:"";if(b===A){var e=this.triggerHandler("getData"+d[1]+"!",[d[0]]);if(e===A&&this.length){e=c.data(this[0],a);if(e===A&&this[0].nodeType===1){e=this[0].getAttribute("data-"+a);if(typeof e===
+"string")try{e=e==="true"?true:e==="false"?false:e==="null"?null:!c.isNaN(e)?parseFloat(e):Oa.test(e)?c.parseJSON(e):e}catch(f){}else e=A}}return e===A&&d[1]?this.data(d[0]):e}else return this.each(function(){var h=c(this),k=[d[0],b];h.triggerHandler("setData"+d[1]+"!",k);c.data(this,a,b);h.triggerHandler("changeData"+d[1]+"!",k)})},removeData:function(a){return this.each(function(){c.removeData(this,a)})}});c.extend({queue:function(a,b,d){if(a){b=(b||"fx")+"queue";var e=c.data(a,b);if(!d)return e||
+[];if(!e||c.isArray(d))e=c.data(a,b,c.makeArray(d));else e.push(d);return e}},dequeue:function(a,b){b=b||"fx";var d=c.queue(a,b),e=d.shift();if(e==="inprogress")e=d.shift();if(e){b==="fx"&&d.unshift("inprogress");e.call(a,function(){c.dequeue(a,b)})}}});c.fn.extend({queue:function(a,b){if(typeof a!=="string"){b=a;a="fx"}if(b===A)return c.queue(this[0],a);return this.each(function(){var d=c.queue(this,a,b);a==="fx"&&d[0]!=="inprogress"&&c.dequeue(this,a)})},dequeue:function(a){return this.each(function(){c.dequeue(this,
+a)})},delay:function(a,b){a=c.fx?c.fx.speeds[a]||a:a;b=b||"fx";return this.queue(b,function(){var d=this;setTimeout(function(){c.dequeue(d,b)},a)})},clearQueue:function(a){return this.queue(a||"fx",[])}});var qa=/[\n\t]/g,ga=/\s+/,Pa=/\r/g,Qa=/^(?:href|src|style)$/,Ra=/^(?:button|input)$/i,Sa=/^(?:button|input|object|select|textarea)$/i,Ta=/^a(?:rea)?$/i,ra=/^(?:radio|checkbox)$/i;c.fn.extend({attr:function(a,b){return c.access(this,a,b,true,c.attr)},removeAttr:function(a){return this.each(function(){c.attr(this,
+a,"");this.nodeType===1&&this.removeAttribute(a)})},addClass:function(a){if(c.isFunction(a))return this.each(function(s){var v=c(this);v.addClass(a.call(this,s,v.attr("class")))});if(a&&typeof a==="string")for(var b=(a||"").split(ga),d=0,e=this.length;d<e;d++){var f=this[d];if(f.nodeType===1)if(f.className){for(var h=" "+f.className+" ",k=f.className,l=0,n=b.length;l<n;l++)if(h.indexOf(" "+b[l]+" ")<0)k+=" "+b[l];f.className=c.trim(k)}else f.className=a}return this},removeClass:function(a){if(c.isFunction(a))return this.each(function(n){var s=
+c(this);s.removeClass(a.call(this,n,s.attr("class")))});if(a&&typeof a==="string"||a===A)for(var b=(a||"").split(ga),d=0,e=this.length;d<e;d++){var f=this[d];if(f.nodeType===1&&f.className)if(a){for(var h=(" "+f.className+" ").replace(qa," "),k=0,l=b.length;k<l;k++)h=h.replace(" "+b[k]+" "," ");f.className=c.trim(h)}else f.className=""}return this},toggleClass:function(a,b){var d=typeof a,e=typeof b==="boolean";if(c.isFunction(a))return this.each(function(f){var h=c(this);h.toggleClass(a.call(this,
+f,h.attr("class"),b),b)});return this.each(function(){if(d==="string")for(var f,h=0,k=c(this),l=b,n=a.split(ga);f=n[h++];){l=e?l:!k.hasClass(f);k[l?"addClass":"removeClass"](f)}else if(d==="undefined"||d==="boolean"){this.className&&c.data(this,"__className__",this.className);this.className=this.className||a===false?"":c.data(this,"__className__")||""}})},hasClass:function(a){a=" "+a+" ";for(var b=0,d=this.length;b<d;b++)if((" "+this[b].className+" ").replace(qa," ").indexOf(a)>-1)return true;return false},
+val:function(a){if(!arguments.length){var b=this[0];if(b){if(c.nodeName(b,"option")){var d=b.attributes.value;return!d||d.specified?b.value:b.text}if(c.nodeName(b,"select")){var e=b.selectedIndex;d=[];var f=b.options;b=b.type==="select-one";if(e<0)return null;var h=b?e:0;for(e=b?e+1:f.length;h<e;h++){var k=f[h];if(k.selected&&(c.support.optDisabled?!k.disabled:k.getAttribute("disabled")===null)&&(!k.parentNode.disabled||!c.nodeName(k.parentNode,"optgroup"))){a=c(k).val();if(b)return a;d.push(a)}}return d}if(ra.test(b.type)&&
+!c.support.checkOn)return b.getAttribute("value")===null?"on":b.value;return(b.value||"").replace(Pa,"")}return A}var l=c.isFunction(a);return this.each(function(n){var s=c(this),v=a;if(this.nodeType===1){if(l)v=a.call(this,n,s.val());if(v==null)v="";else if(typeof v==="number")v+="";else if(c.isArray(v))v=c.map(v,function(D){return D==null?"":D+""});if(c.isArray(v)&&ra.test(this.type))this.checked=c.inArray(s.val(),v)>=0;else if(c.nodeName(this,"select")){var B=c.makeArray(v);c("option",this).each(function(){this.selected=
+c.inArray(c(this).val(),B)>=0});if(!B.length)this.selectedIndex=-1}else this.value=v}})}});c.extend({attrFn:{val:true,css:true,html:true,text:true,data:true,width:true,height:true,offset:true},attr:function(a,b,d,e){if(!a||a.nodeType===3||a.nodeType===8)return A;if(e&&b in c.attrFn)return c(a)[b](d);e=a.nodeType!==1||!c.isXMLDoc(a);var f=d!==A;b=e&&c.props[b]||b;if(a.nodeType===1){var h=Qa.test(b);if((b in a||a[b]!==A)&&e&&!h){if(f){b==="type"&&Ra.test(a.nodeName)&&a.parentNode&&c.error("type property can't be changed");
+if(d===null)a.nodeType===1&&a.removeAttribute(b);else a[b]=d}if(c.nodeName(a,"form")&&a.getAttributeNode(b))return a.getAttributeNode(b).nodeValue;if(b==="tabIndex")return(b=a.getAttributeNode("tabIndex"))&&b.specified?b.value:Sa.test(a.nodeName)||Ta.test(a.nodeName)&&a.href?0:A;return a[b]}if(!c.support.style&&e&&b==="style"){if(f)a.style.cssText=""+d;return a.style.cssText}f&&a.setAttribute(b,""+d);if(!a.attributes[b]&&a.hasAttribute&&!a.hasAttribute(b))return A;a=!c.support.hrefNormalized&&e&&
+h?a.getAttribute(b,2):a.getAttribute(b);return a===null?A:a}}});var X=/\.(.*)$/,ha=/^(?:textarea|input|select)$/i,Ha=/\./g,Ia=/ /g,Ua=/[^\w\s.|`]/g,Va=function(a){return a.replace(Ua,"\\$&")},sa={focusin:0,focusout:0};c.event={add:function(a,b,d,e){if(!(a.nodeType===3||a.nodeType===8)){if(c.isWindow(a)&&a!==E&&!a.frameElement)a=E;if(d===false)d=U;var f,h;if(d.handler){f=d;d=f.handler}if(!d.guid)d.guid=c.guid++;if(h=c.data(a)){var k=a.nodeType?"events":"__events__",l=h[k],n=h.handle;if(typeof l===
+"function"){n=l.handle;l=l.events}else if(!l){a.nodeType||(h[k]=h=function(){});h.events=l={}}if(!n)h.handle=n=function(){return typeof c!=="undefined"&&!c.event.triggered?c.event.handle.apply(n.elem,arguments):A};n.elem=a;b=b.split(" ");for(var s=0,v;k=b[s++];){h=f?c.extend({},f):{handler:d,data:e};if(k.indexOf(".")>-1){v=k.split(".");k=v.shift();h.namespace=v.slice(0).sort().join(".")}else{v=[];h.namespace=""}h.type=k;if(!h.guid)h.guid=d.guid;var B=l[k],D=c.event.special[k]||{};if(!B){B=l[k]=[];
+if(!D.setup||D.setup.call(a,e,v,n)===false)if(a.addEventListener)a.addEventListener(k,n,false);else a.attachEvent&&a.attachEvent("on"+k,n)}if(D.add){D.add.call(a,h);if(!h.handler.guid)h.handler.guid=d.guid}B.push(h);c.event.global[k]=true}a=null}}},global:{},remove:function(a,b,d,e){if(!(a.nodeType===3||a.nodeType===8)){if(d===false)d=U;var f,h,k=0,l,n,s,v,B,D,H=a.nodeType?"events":"__events__",w=c.data(a),G=w&&w[H];if(w&&G){if(typeof G==="function"){w=G;G=G.events}if(b&&b.type){d=b.handler;b=b.type}if(!b||
+typeof b==="string"&&b.charAt(0)==="."){b=b||"";for(f in G)c.event.remove(a,f+b)}else{for(b=b.split(" ");f=b[k++];){v=f;l=f.indexOf(".")<0;n=[];if(!l){n=f.split(".");f=n.shift();s=RegExp("(^|\\.)"+c.map(n.slice(0).sort(),Va).join("\\.(?:.*\\.)?")+"(\\.|$)")}if(B=G[f])if(d){v=c.event.special[f]||{};for(h=e||0;h<B.length;h++){D=B[h];if(d.guid===D.guid){if(l||s.test(D.namespace)){e==null&&B.splice(h--,1);v.remove&&v.remove.call(a,D)}if(e!=null)break}}if(B.length===0||e!=null&&B.length===1){if(!v.teardown||
+v.teardown.call(a,n)===false)c.removeEvent(a,f,w.handle);delete G[f]}}else for(h=0;h<B.length;h++){D=B[h];if(l||s.test(D.namespace)){c.event.remove(a,v,D.handler,h);B.splice(h--,1)}}}if(c.isEmptyObject(G)){if(b=w.handle)b.elem=null;delete w.events;delete w.handle;if(typeof w==="function")c.removeData(a,H);else c.isEmptyObject(w)&&c.removeData(a)}}}}},trigger:function(a,b,d,e){var f=a.type||a;if(!e){a=typeof a==="object"?a[c.expando]?a:c.extend(c.Event(f),a):c.Event(f);if(f.indexOf("!")>=0){a.type=
+f=f.slice(0,-1);a.exclusive=true}if(!d){a.stopPropagation();c.event.global[f]&&c.each(c.cache,function(){this.events&&this.events[f]&&c.event.trigger(a,b,this.handle.elem)})}if(!d||d.nodeType===3||d.nodeType===8)return A;a.result=A;a.target=d;b=c.makeArray(b);b.unshift(a)}a.currentTarget=d;(e=d.nodeType?c.data(d,"handle"):(c.data(d,"__events__")||{}).handle)&&e.apply(d,b);e=d.parentNode||d.ownerDocument;try{if(!(d&&d.nodeName&&c.noData[d.nodeName.toLowerCase()]))if(d["on"+f]&&d["on"+f].apply(d,b)===
+false){a.result=false;a.preventDefault()}}catch(h){}if(!a.isPropagationStopped()&&e)c.event.trigger(a,b,e,true);else if(!a.isDefaultPrevented()){e=a.target;var k,l=f.replace(X,""),n=c.nodeName(e,"a")&&l==="click",s=c.event.special[l]||{};if((!s._default||s._default.call(d,a)===false)&&!n&&!(e&&e.nodeName&&c.noData[e.nodeName.toLowerCase()])){try{if(e[l]){if(k=e["on"+l])e["on"+l]=null;c.event.triggered=true;e[l]()}}catch(v){}if(k)e["on"+l]=k;c.event.triggered=false}}},handle:function(a){var b,d,e;
+d=[];var f,h=c.makeArray(arguments);a=h[0]=c.event.fix(a||E.event);a.currentTarget=this;b=a.type.indexOf(".")<0&&!a.exclusive;if(!b){e=a.type.split(".");a.type=e.shift();d=e.slice(0).sort();e=RegExp("(^|\\.)"+d.join("\\.(?:.*\\.)?")+"(\\.|$)")}a.namespace=a.namespace||d.join(".");f=c.data(this,this.nodeType?"events":"__events__");if(typeof f==="function")f=f.events;d=(f||{})[a.type];if(f&&d){d=d.slice(0);f=0;for(var k=d.length;f<k;f++){var l=d[f];if(b||e.test(l.namespace)){a.handler=l.handler;a.data=
+l.data;a.handleObj=l;l=l.handler.apply(this,h);if(l!==A){a.result=l;if(l===false){a.preventDefault();a.stopPropagation()}}if(a.isImmediatePropagationStopped())break}}}return a.result},props:"altKey attrChange attrName bubbles button cancelable charCode clientX clientY ctrlKey currentTarget data detail eventPhase fromElement handler keyCode layerX layerY metaKey newValue offsetX offsetY pageX pageY prevValue relatedNode relatedTarget screenX screenY shiftKey srcElement target toElement view wheelDelta which".split(" "),
+fix:function(a){if(a[c.expando])return a;var b=a;a=c.Event(b);for(var d=this.props.length,e;d;){e=this.props[--d];a[e]=b[e]}if(!a.target)a.target=a.srcElement||u;if(a.target.nodeType===3)a.target=a.target.parentNode;if(!a.relatedTarget&&a.fromElement)a.relatedTarget=a.fromElement===a.target?a.toElement:a.fromElement;if(a.pageX==null&&a.clientX!=null){b=u.documentElement;d=u.body;a.pageX=a.clientX+(b&&b.scrollLeft||d&&d.scrollLeft||0)-(b&&b.clientLeft||d&&d.clientLeft||0);a.pageY=a.clientY+(b&&b.scrollTop||
+d&&d.scrollTop||0)-(b&&b.clientTop||d&&d.clientTop||0)}if(a.which==null&&(a.charCode!=null||a.keyCode!=null))a.which=a.charCode!=null?a.charCode:a.keyCode;if(!a.metaKey&&a.ctrlKey)a.metaKey=a.ctrlKey;if(!a.which&&a.button!==A)a.which=a.button&1?1:a.button&2?3:a.button&4?2:0;return a},guid:1E8,proxy:c.proxy,special:{ready:{setup:c.bindReady,teardown:c.noop},live:{add:function(a){c.event.add(this,Y(a.origType,a.selector),c.extend({},a,{handler:Ga,guid:a.handler.guid}))},remove:function(a){c.event.remove(this,
+Y(a.origType,a.selector),a)}},beforeunload:{setup:function(a,b,d){if(c.isWindow(this))this.onbeforeunload=d},teardown:function(a,b){if(this.onbeforeunload===b)this.onbeforeunload=null}}}};c.removeEvent=u.removeEventListener?function(a,b,d){a.removeEventListener&&a.removeEventListener(b,d,false)}:function(a,b,d){a.detachEvent&&a.detachEvent("on"+b,d)};c.Event=function(a){if(!this.preventDefault)return new c.Event(a);if(a&&a.type){this.originalEvent=a;this.type=a.type}else this.type=a;this.timeStamp=
+c.now();this[c.expando]=true};c.Event.prototype={preventDefault:function(){this.isDefaultPrevented=ba;var a=this.originalEvent;if(a)if(a.preventDefault)a.preventDefault();else a.returnValue=false},stopPropagation:function(){this.isPropagationStopped=ba;var a=this.originalEvent;if(a){a.stopPropagation&&a.stopPropagation();a.cancelBubble=true}},stopImmediatePropagation:function(){this.isImmediatePropagationStopped=ba;this.stopPropagation()},isDefaultPrevented:U,isPropagationStopped:U,isImmediatePropagationStopped:U};
+var ta=function(a){var b=a.relatedTarget;try{for(;b&&b!==this;)b=b.parentNode;if(b!==this){a.type=a.data;c.event.handle.apply(this,arguments)}}catch(d){}},ua=function(a){a.type=a.data;c.event.handle.apply(this,arguments)};c.each({mouseenter:"mouseover",mouseleave:"mouseout"},function(a,b){c.event.special[a]={setup:function(d){c.event.add(this,b,d&&d.selector?ua:ta,a)},teardown:function(d){c.event.remove(this,b,d&&d.selector?ua:ta)}}});if(!c.support.submitBubbles)c.event.special.submit={setup:function(){if(this.nodeName.toLowerCase()!==
+"form"){c.event.add(this,"click.specialSubmit",function(a){var b=a.target,d=b.type;if((d==="submit"||d==="image")&&c(b).closest("form").length){a.liveFired=A;return ja("submit",this,arguments)}});c.event.add(this,"keypress.specialSubmit",function(a){var b=a.target,d=b.type;if((d==="text"||d==="password")&&c(b).closest("form").length&&a.keyCode===13){a.liveFired=A;return ja("submit",this,arguments)}})}else return false},teardown:function(){c.event.remove(this,".specialSubmit")}};if(!c.support.changeBubbles){var V,
+va=function(a){var b=a.type,d=a.value;if(b==="radio"||b==="checkbox")d=a.checked;else if(b==="select-multiple")d=a.selectedIndex>-1?c.map(a.options,function(e){return e.selected}).join("-"):"";else if(a.nodeName.toLowerCase()==="select")d=a.selectedIndex;return d},Z=function(a,b){var d=a.target,e,f;if(!(!ha.test(d.nodeName)||d.readOnly)){e=c.data(d,"_change_data");f=va(d);if(a.type!=="focusout"||d.type!=="radio")c.data(d,"_change_data",f);if(!(e===A||f===e))if(e!=null||f){a.type="change";a.liveFired=
+A;return c.event.trigger(a,b,d)}}};c.event.special.change={filters:{focusout:Z,beforedeactivate:Z,click:function(a){var b=a.target,d=b.type;if(d==="radio"||d==="checkbox"||b.nodeName.toLowerCase()==="select")return Z.call(this,a)},keydown:function(a){var b=a.target,d=b.type;if(a.keyCode===13&&b.nodeName.toLowerCase()!=="textarea"||a.keyCode===32&&(d==="checkbox"||d==="radio")||d==="select-multiple")return Z.call(this,a)},beforeactivate:function(a){a=a.target;c.data(a,"_change_data",va(a))}},setup:function(){if(this.type===
+"file")return false;for(var a in V)c.event.add(this,a+".specialChange",V[a]);return ha.test(this.nodeName)},teardown:function(){c.event.remove(this,".specialChange");return ha.test(this.nodeName)}};V=c.event.special.change.filters;V.focus=V.beforeactivate}u.addEventListener&&c.each({focus:"focusin",blur:"focusout"},function(a,b){function d(e){e=c.event.fix(e);e.type=b;return c.event.trigger(e,null,e.target)}c.event.special[b]={setup:function(){sa[b]++===0&&u.addEventListener(a,d,true)},teardown:function(){--sa[b]===
+0&&u.removeEventListener(a,d,true)}}});c.each(["bind","one"],function(a,b){c.fn[b]=function(d,e,f){if(typeof d==="object"){for(var h in d)this[b](h,e,d[h],f);return this}if(c.isFunction(e)||e===false){f=e;e=A}var k=b==="one"?c.proxy(f,function(n){c(this).unbind(n,k);return f.apply(this,arguments)}):f;if(d==="unload"&&b!=="one")this.one(d,e,f);else{h=0;for(var l=this.length;h<l;h++)c.event.add(this[h],d,k,e)}return this}});c.fn.extend({unbind:function(a,b){if(typeof a==="object"&&!a.preventDefault)for(var d in a)this.unbind(d,
+a[d]);else{d=0;for(var e=this.length;d<e;d++)c.event.remove(this[d],a,b)}return this},delegate:function(a,b,d,e){return this.live(b,d,e,a)},undelegate:function(a,b,d){return arguments.length===0?this.unbind("live"):this.die(b,null,d,a)},trigger:function(a,b){return this.each(function(){c.event.trigger(a,b,this)})},triggerHandler:function(a,b){if(this[0]){var d=c.Event(a);d.preventDefault();d.stopPropagation();c.event.trigger(d,b,this[0]);return d.result}},toggle:function(a){for(var b=arguments,d=
+1;d<b.length;)c.proxy(a,b[d++]);return this.click(c.proxy(a,function(e){var f=(c.data(this,"lastToggle"+a.guid)||0)%d;c.data(this,"lastToggle"+a.guid,f+1);e.preventDefault();return b[f].apply(this,arguments)||false}))},hover:function(a,b){return this.mouseenter(a).mouseleave(b||a)}});var wa={focus:"focusin",blur:"focusout",mouseenter:"mouseover",mouseleave:"mouseout"};c.each(["live","die"],function(a,b){c.fn[b]=function(d,e,f,h){var k,l=0,n,s,v=h||this.selector;h=h?this:c(this.context);if(typeof d===
+"object"&&!d.preventDefault){for(k in d)h[b](k,e,d[k],v);return this}if(c.isFunction(e)){f=e;e=A}for(d=(d||"").split(" ");(k=d[l++])!=null;){n=X.exec(k);s="";if(n){s=n[0];k=k.replace(X,"")}if(k==="hover")d.push("mouseenter"+s,"mouseleave"+s);else{n=k;if(k==="focus"||k==="blur"){d.push(wa[k]+s);k+=s}else k=(wa[k]||k)+s;if(b==="live"){s=0;for(var B=h.length;s<B;s++)c.event.add(h[s],"live."+Y(k,v),{data:e,selector:v,handler:f,origType:k,origHandler:f,preType:n})}else h.unbind("live."+Y(k,v),f)}}return this}});
+c.each("blur focus focusin focusout load resize scroll unload click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup error".split(" "),function(a,b){c.fn[b]=function(d,e){if(e==null){e=d;d=null}return arguments.length>0?this.bind(b,d,e):this.trigger(b)};if(c.attrFn)c.attrFn[b]=true});E.attachEvent&&!E.addEventListener&&c(E).bind("unload",function(){for(var a in c.cache)if(c.cache[a].handle)try{c.event.remove(c.cache[a].handle.elem)}catch(b){}});
+(function(){function a(g,j,o,m,p,q){p=0;for(var t=m.length;p<t;p++){var x=m[p];if(x){x=x[g];for(var C=false;x;){if(x.sizcache===o){C=m[x.sizset];break}if(x.nodeType===1&&!q){x.sizcache=o;x.sizset=p}if(x.nodeName.toLowerCase()===j){C=x;break}x=x[g]}m[p]=C}}}function b(g,j,o,m,p,q){p=0;for(var t=m.length;p<t;p++){var x=m[p];if(x){x=x[g];for(var C=false;x;){if(x.sizcache===o){C=m[x.sizset];break}if(x.nodeType===1){if(!q){x.sizcache=o;x.sizset=p}if(typeof j!=="string"){if(x===j){C=true;break}}else if(l.filter(j,
+[x]).length>0){C=x;break}}x=x[g]}m[p]=C}}}var d=/((?:\((?:\([^()]+\)|[^()]+)+\)|\[(?:\[[^\[\]]*\]|['"][^'"]*['"]|[^\[\]'"]+)+\]|\\.|[^ >+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g,e=0,f=Object.prototype.toString,h=false,k=true;[0,0].sort(function(){k=false;return 0});var l=function(g,j,o,m){o=o||[];var p=j=j||u;if(j.nodeType!==1&&j.nodeType!==9)return[];if(!g||typeof g!=="string")return o;var q=[],t,x,C,P,N=true,R=l.isXML(j),Q=g,L;do{d.exec("");if(t=d.exec(Q)){Q=t[3];q.push(t[1]);if(t[2]){P=t[3];
+break}}}while(t);if(q.length>1&&s.exec(g))if(q.length===2&&n.relative[q[0]])x=M(q[0]+q[1],j);else for(x=n.relative[q[0]]?[j]:l(q.shift(),j);q.length;){g=q.shift();if(n.relative[g])g+=q.shift();x=M(g,x)}else{if(!m&&q.length>1&&j.nodeType===9&&!R&&n.match.ID.test(q[0])&&!n.match.ID.test(q[q.length-1])){t=l.find(q.shift(),j,R);j=t.expr?l.filter(t.expr,t.set)[0]:t.set[0]}if(j){t=m?{expr:q.pop(),set:D(m)}:l.find(q.pop(),q.length===1&&(q[0]==="~"||q[0]==="+")&&j.parentNode?j.parentNode:j,R);x=t.expr?l.filter(t.expr,
+t.set):t.set;if(q.length>0)C=D(x);else N=false;for(;q.length;){t=L=q.pop();if(n.relative[L])t=q.pop();else L="";if(t==null)t=j;n.relative[L](C,t,R)}}else C=[]}C||(C=x);C||l.error(L||g);if(f.call(C)==="[object Array]")if(N)if(j&&j.nodeType===1)for(g=0;C[g]!=null;g++){if(C[g]&&(C[g]===true||C[g].nodeType===1&&l.contains(j,C[g])))o.push(x[g])}else for(g=0;C[g]!=null;g++)C[g]&&C[g].nodeType===1&&o.push(x[g]);else o.push.apply(o,C);else D(C,o);if(P){l(P,p,o,m);l.uniqueSort(o)}return o};l.uniqueSort=function(g){if(w){h=
+k;g.sort(w);if(h)for(var j=1;j<g.length;j++)g[j]===g[j-1]&&g.splice(j--,1)}return g};l.matches=function(g,j){return l(g,null,null,j)};l.matchesSelector=function(g,j){return l(j,null,null,[g]).length>0};l.find=function(g,j,o){var m;if(!g)return[];for(var p=0,q=n.order.length;p<q;p++){var t=n.order[p],x;if(x=n.leftMatch[t].exec(g)){var C=x[1];x.splice(1,1);if(C.substr(C.length-1)!=="\\"){x[1]=(x[1]||"").replace(/\\/g,"");m=n.find[t](x,j,o);if(m!=null){g=g.replace(n.match[t],"");break}}}}m||(m=j.getElementsByTagName("*"));
+return{set:m,expr:g}};l.filter=function(g,j,o,m){for(var p=g,q=[],t=j,x,C,P=j&&j[0]&&l.isXML(j[0]);g&&j.length;){for(var N in n.filter)if((x=n.leftMatch[N].exec(g))!=null&&x[2]){var R=n.filter[N],Q,L;L=x[1];C=false;x.splice(1,1);if(L.substr(L.length-1)!=="\\"){if(t===q)q=[];if(n.preFilter[N])if(x=n.preFilter[N](x,t,o,q,m,P)){if(x===true)continue}else C=Q=true;if(x)for(var i=0;(L=t[i])!=null;i++)if(L){Q=R(L,x,i,t);var r=m^!!Q;if(o&&Q!=null)if(r)C=true;else t[i]=false;else if(r){q.push(L);C=true}}if(Q!==
+A){o||(t=q);g=g.replace(n.match[N],"");if(!C)return[];break}}}if(g===p)if(C==null)l.error(g);else break;p=g}return t};l.error=function(g){throw"Syntax error, unrecognized expression: "+g;};var n=l.selectors={order:["ID","NAME","TAG"],match:{ID:/#((?:[\w\u00c0-\uFFFF\-]|\\.)+)/,CLASS:/\.((?:[\w\u00c0-\uFFFF\-]|\\.)+)/,NAME:/\[name=['"]*((?:[\w\u00c0-\uFFFF\-]|\\.)+)['"]*\]/,ATTR:/\[\s*((?:[\w\u00c0-\uFFFF\-]|\\.)+)\s*(?:(\S?=)\s*(['"]*)(.*?)\3|)\s*\]/,TAG:/^((?:[\w\u00c0-\uFFFF\*\-]|\\.)+)/,CHILD:/:(only|nth|last|first)-child(?:\((even|odd|[\dn+\-]*)\))?/,
+POS:/:(nth|eq|gt|lt|first|last|even|odd)(?:\((\d*)\))?(?=[^\-]|$)/,PSEUDO:/:((?:[\w\u00c0-\uFFFF\-]|\\.)+)(?:\((['"]?)((?:\([^\)]+\)|[^\(\)]*)+)\2\))?/},leftMatch:{},attrMap:{"class":"className","for":"htmlFor"},attrHandle:{href:function(g){return g.getAttribute("href")}},relative:{"+":function(g,j){var o=typeof j==="string",m=o&&!/\W/.test(j);o=o&&!m;if(m)j=j.toLowerCase();m=0;for(var p=g.length,q;m<p;m++)if(q=g[m]){for(;(q=q.previousSibling)&&q.nodeType!==1;);g[m]=o||q&&q.nodeName.toLowerCase()===
+j?q||false:q===j}o&&l.filter(j,g,true)},">":function(g,j){var o=typeof j==="string",m,p=0,q=g.length;if(o&&!/\W/.test(j))for(j=j.toLowerCase();p<q;p++){if(m=g[p]){o=m.parentNode;g[p]=o.nodeName.toLowerCase()===j?o:false}}else{for(;p<q;p++)if(m=g[p])g[p]=o?m.parentNode:m.parentNode===j;o&&l.filter(j,g,true)}},"":function(g,j,o){var m=e++,p=b,q;if(typeof j==="string"&&!/\W/.test(j)){q=j=j.toLowerCase();p=a}p("parentNode",j,m,g,q,o)},"~":function(g,j,o){var m=e++,p=b,q;if(typeof j==="string"&&!/\W/.test(j)){q=
+j=j.toLowerCase();p=a}p("previousSibling",j,m,g,q,o)}},find:{ID:function(g,j,o){if(typeof j.getElementById!=="undefined"&&!o)return(g=j.getElementById(g[1]))&&g.parentNode?[g]:[]},NAME:function(g,j){if(typeof j.getElementsByName!=="undefined"){for(var o=[],m=j.getElementsByName(g[1]),p=0,q=m.length;p<q;p++)m[p].getAttribute("name")===g[1]&&o.push(m[p]);return o.length===0?null:o}},TAG:function(g,j){return j.getElementsByTagName(g[1])}},preFilter:{CLASS:function(g,j,o,m,p,q){g=" "+g[1].replace(/\\/g,
+"")+" ";if(q)return g;q=0;for(var t;(t=j[q])!=null;q++)if(t)if(p^(t.className&&(" "+t.className+" ").replace(/[\t\n]/g," ").indexOf(g)>=0))o||m.push(t);else if(o)j[q]=false;return false},ID:function(g){return g[1].replace(/\\/g,"")},TAG:function(g){return g[1].toLowerCase()},CHILD:function(g){if(g[1]==="nth"){var j=/(-?)(\d*)n((?:\+|-)?\d*)/.exec(g[2]==="even"&&"2n"||g[2]==="odd"&&"2n+1"||!/\D/.test(g[2])&&"0n+"+g[2]||g[2]);g[2]=j[1]+(j[2]||1)-0;g[3]=j[3]-0}g[0]=e++;return g},ATTR:function(g,j,o,
+m,p,q){j=g[1].replace(/\\/g,"");if(!q&&n.attrMap[j])g[1]=n.attrMap[j];if(g[2]==="~=")g[4]=" "+g[4]+" ";return g},PSEUDO:function(g,j,o,m,p){if(g[1]==="not")if((d.exec(g[3])||"").length>1||/^\w/.test(g[3]))g[3]=l(g[3],null,null,j);else{g=l.filter(g[3],j,o,true^p);o||m.push.apply(m,g);return false}else if(n.match.POS.test(g[0])||n.match.CHILD.test(g[0]))return true;return g},POS:function(g){g.unshift(true);return g}},filters:{enabled:function(g){return g.disabled===false&&g.type!=="hidden"},disabled:function(g){return g.disabled===
+true},checked:function(g){return g.checked===true},selected:function(g){return g.selected===true},parent:function(g){return!!g.firstChild},empty:function(g){return!g.firstChild},has:function(g,j,o){return!!l(o[3],g).length},header:function(g){return/h\d/i.test(g.nodeName)},text:function(g){return"text"===g.type},radio:function(g){return"radio"===g.type},checkbox:function(g){return"checkbox"===g.type},file:function(g){return"file"===g.type},password:function(g){return"password"===g.type},submit:function(g){return"submit"===
+g.type},image:function(g){return"image"===g.type},reset:function(g){return"reset"===g.type},button:function(g){return"button"===g.type||g.nodeName.toLowerCase()==="button"},input:function(g){return/input|select|textarea|button/i.test(g.nodeName)}},setFilters:{first:function(g,j){return j===0},last:function(g,j,o,m){return j===m.length-1},even:function(g,j){return j%2===0},odd:function(g,j){return j%2===1},lt:function(g,j,o){return j<o[3]-0},gt:function(g,j,o){return j>o[3]-0},nth:function(g,j,o){return o[3]-
+0===j},eq:function(g,j,o){return o[3]-0===j}},filter:{PSEUDO:function(g,j,o,m){var p=j[1],q=n.filters[p];if(q)return q(g,o,j,m);else if(p==="contains")return(g.textContent||g.innerText||l.getText([g])||"").indexOf(j[3])>=0;else if(p==="not"){j=j[3];o=0;for(m=j.length;o<m;o++)if(j[o]===g)return false;return true}else l.error("Syntax error, unrecognized expression: "+p)},CHILD:function(g,j){var o=j[1],m=g;switch(o){case "only":case "first":for(;m=m.previousSibling;)if(m.nodeType===1)return false;if(o===
+"first")return true;m=g;case "last":for(;m=m.nextSibling;)if(m.nodeType===1)return false;return true;case "nth":o=j[2];var p=j[3];if(o===1&&p===0)return true;var q=j[0],t=g.parentNode;if(t&&(t.sizcache!==q||!g.nodeIndex)){var x=0;for(m=t.firstChild;m;m=m.nextSibling)if(m.nodeType===1)m.nodeIndex=++x;t.sizcache=q}m=g.nodeIndex-p;return o===0?m===0:m%o===0&&m/o>=0}},ID:function(g,j){return g.nodeType===1&&g.getAttribute("id")===j},TAG:function(g,j){return j==="*"&&g.nodeType===1||g.nodeName.toLowerCase()===
+j},CLASS:function(g,j){return(" "+(g.className||g.getAttribute("class"))+" ").indexOf(j)>-1},ATTR:function(g,j){var o=j[1];o=n.attrHandle[o]?n.attrHandle[o](g):g[o]!=null?g[o]:g.getAttribute(o);var m=o+"",p=j[2],q=j[4];return o==null?p==="!=":p==="="?m===q:p==="*="?m.indexOf(q)>=0:p==="~="?(" "+m+" ").indexOf(q)>=0:!q?m&&o!==false:p==="!="?m!==q:p==="^="?m.indexOf(q)===0:p==="$="?m.substr(m.length-q.length)===q:p==="|="?m===q||m.substr(0,q.length+1)===q+"-":false},POS:function(g,j,o,m){var p=n.setFilters[j[2]];
+if(p)return p(g,o,j,m)}}},s=n.match.POS,v=function(g,j){return"\\"+(j-0+1)},B;for(B in n.match){n.match[B]=RegExp(n.match[B].source+/(?![^\[]*\])(?![^\(]*\))/.source);n.leftMatch[B]=RegExp(/(^(?:.|\r|\n)*?)/.source+n.match[B].source.replace(/\\(\d+)/g,v))}var D=function(g,j){g=Array.prototype.slice.call(g,0);if(j){j.push.apply(j,g);return j}return g};try{Array.prototype.slice.call(u.documentElement.childNodes,0)}catch(H){D=function(g,j){var o=j||[],m=0;if(f.call(g)==="[object Array]")Array.prototype.push.apply(o,
+g);else if(typeof g.length==="number")for(var p=g.length;m<p;m++)o.push(g[m]);else for(;g[m];m++)o.push(g[m]);return o}}var w,G;if(u.documentElement.compareDocumentPosition)w=function(g,j){if(g===j){h=true;return 0}if(!g.compareDocumentPosition||!j.compareDocumentPosition)return g.compareDocumentPosition?-1:1;return g.compareDocumentPosition(j)&4?-1:1};else{w=function(g,j){var o=[],m=[],p=g.parentNode,q=j.parentNode,t=p;if(g===j){h=true;return 0}else if(p===q)return G(g,j);else if(p){if(!q)return 1}else return-1;
+for(;t;){o.unshift(t);t=t.parentNode}for(t=q;t;){m.unshift(t);t=t.parentNode}p=o.length;q=m.length;for(t=0;t<p&&t<q;t++)if(o[t]!==m[t])return G(o[t],m[t]);return t===p?G(g,m[t],-1):G(o[t],j,1)};G=function(g,j,o){if(g===j)return o;for(g=g.nextSibling;g;){if(g===j)return-1;g=g.nextSibling}return 1}}l.getText=function(g){for(var j="",o,m=0;g[m];m++){o=g[m];if(o.nodeType===3||o.nodeType===4)j+=o.nodeValue;else if(o.nodeType!==8)j+=l.getText(o.childNodes)}return j};(function(){var g=u.createElement("div"),
+j="script"+(new Date).getTime();g.innerHTML="<a name='"+j+"'/>";var o=u.documentElement;o.insertBefore(g,o.firstChild);if(u.getElementById(j)){n.find.ID=function(m,p,q){if(typeof p.getElementById!=="undefined"&&!q)return(p=p.getElementById(m[1]))?p.id===m[1]||typeof p.getAttributeNode!=="undefined"&&p.getAttributeNode("id").nodeValue===m[1]?[p]:A:[]};n.filter.ID=function(m,p){var q=typeof m.getAttributeNode!=="undefined"&&m.getAttributeNode("id");return m.nodeType===1&&q&&q.nodeValue===p}}o.removeChild(g);
+o=g=null})();(function(){var g=u.createElement("div");g.appendChild(u.createComment(""));if(g.getElementsByTagName("*").length>0)n.find.TAG=function(j,o){var m=o.getElementsByTagName(j[1]);if(j[1]==="*"){for(var p=[],q=0;m[q];q++)m[q].nodeType===1&&p.push(m[q]);m=p}return m};g.innerHTML="<a href='#'></a>";if(g.firstChild&&typeof g.firstChild.getAttribute!=="undefined"&&g.firstChild.getAttribute("href")!=="#")n.attrHandle.href=function(j){return j.getAttribute("href",2)};g=null})();u.querySelectorAll&&
+function(){var g=l,j=u.createElement("div");j.innerHTML="<p class='TEST'></p>";if(!(j.querySelectorAll&&j.querySelectorAll(".TEST").length===0)){l=function(m,p,q,t){p=p||u;if(!t&&!l.isXML(p))if(p.nodeType===9)try{return D(p.querySelectorAll(m),q)}catch(x){}else if(p.nodeType===1&&p.nodeName.toLowerCase()!=="object"){var C=p.id,P=p.id="__sizzle__";try{return D(p.querySelectorAll("#"+P+" "+m),q)}catch(N){}finally{if(C)p.id=C;else p.removeAttribute("id")}}return g(m,p,q,t)};for(var o in g)l[o]=g[o];
+j=null}}();(function(){var g=u.documentElement,j=g.matchesSelector||g.mozMatchesSelector||g.webkitMatchesSelector||g.msMatchesSelector,o=false;try{j.call(u.documentElement,":sizzle")}catch(m){o=true}if(j)l.matchesSelector=function(p,q){try{if(o||!n.match.PSEUDO.test(q))return j.call(p,q)}catch(t){}return l(q,null,null,[p]).length>0}})();(function(){var g=u.createElement("div");g.innerHTML="<div class='test e'></div><div class='test'></div>";if(!(!g.getElementsByClassName||g.getElementsByClassName("e").length===
+0)){g.lastChild.className="e";if(g.getElementsByClassName("e").length!==1){n.order.splice(1,0,"CLASS");n.find.CLASS=function(j,o,m){if(typeof o.getElementsByClassName!=="undefined"&&!m)return o.getElementsByClassName(j[1])};g=null}}})();l.contains=u.documentElement.contains?function(g,j){return g!==j&&(g.contains?g.contains(j):true)}:function(g,j){return!!(g.compareDocumentPosition(j)&16)};l.isXML=function(g){return(g=(g?g.ownerDocument||g:0).documentElement)?g.nodeName!=="HTML":false};var M=function(g,
+j){for(var o=[],m="",p,q=j.nodeType?[j]:j;p=n.match.PSEUDO.exec(g);){m+=p[0];g=g.replace(n.match.PSEUDO,"")}g=n.relative[g]?g+"*":g;p=0;for(var t=q.length;p<t;p++)l(g,q[p],o);return l.filter(m,o)};c.find=l;c.expr=l.selectors;c.expr[":"]=c.expr.filters;c.unique=l.uniqueSort;c.text=l.getText;c.isXMLDoc=l.isXML;c.contains=l.contains})();var Wa=/Until$/,Xa=/^(?:parents|prevUntil|prevAll)/,Ya=/,/,Ja=/^.[^:#\[\.,]*$/,Za=Array.prototype.slice,$a=c.expr.match.POS;c.fn.extend({find:function(a){for(var b=this.pushStack("",
+"find",a),d=0,e=0,f=this.length;e<f;e++){d=b.length;c.find(a,this[e],b);if(e>0)for(var h=d;h<b.length;h++)for(var k=0;k<d;k++)if(b[k]===b[h]){b.splice(h--,1);break}}return b},has:function(a){var b=c(a);return this.filter(function(){for(var d=0,e=b.length;d<e;d++)if(c.contains(this,b[d]))return true})},not:function(a){return this.pushStack(ka(this,a,false),"not",a)},filter:function(a){return this.pushStack(ka(this,a,true),"filter",a)},is:function(a){return!!a&&c.filter(a,this).length>0},closest:function(a,
+b){var d=[],e,f,h=this[0];if(c.isArray(a)){var k={},l,n=1;if(h&&a.length){e=0;for(f=a.length;e<f;e++){l=a[e];k[l]||(k[l]=c.expr.match.POS.test(l)?c(l,b||this.context):l)}for(;h&&h.ownerDocument&&h!==b;){for(l in k){e=k[l];if(e.jquery?e.index(h)>-1:c(h).is(e))d.push({selector:l,elem:h,level:n})}h=h.parentNode;n++}}return d}k=$a.test(a)?c(a,b||this.context):null;e=0;for(f=this.length;e<f;e++)for(h=this[e];h;)if(k?k.index(h)>-1:c.find.matchesSelector(h,a)){d.push(h);break}else{h=h.parentNode;if(!h||
+!h.ownerDocument||h===b)break}d=d.length>1?c.unique(d):d;return this.pushStack(d,"closest",a)},index:function(a){if(!a||typeof a==="string")return c.inArray(this[0],a?c(a):this.parent().children());return c.inArray(a.jquery?a[0]:a,this)},add:function(a,b){var d=typeof a==="string"?c(a,b||this.context):c.makeArray(a),e=c.merge(this.get(),d);return this.pushStack(!d[0]||!d[0].parentNode||d[0].parentNode.nodeType===11||!e[0]||!e[0].parentNode||e[0].parentNode.nodeType===11?e:c.unique(e))},andSelf:function(){return this.add(this.prevObject)}});
+c.each({parent:function(a){return(a=a.parentNode)&&a.nodeType!==11?a:null},parents:function(a){return c.dir(a,"parentNode")},parentsUntil:function(a,b,d){return c.dir(a,"parentNode",d)},next:function(a){return c.nth(a,2,"nextSibling")},prev:function(a){return c.nth(a,2,"previousSibling")},nextAll:function(a){return c.dir(a,"nextSibling")},prevAll:function(a){return c.dir(a,"previousSibling")},nextUntil:function(a,b,d){return c.dir(a,"nextSibling",d)},prevUntil:function(a,b,d){return c.dir(a,"previousSibling",
+d)},siblings:function(a){return c.sibling(a.parentNode.firstChild,a)},children:function(a){return c.sibling(a.firstChild)},contents:function(a){return c.nodeName(a,"iframe")?a.contentDocument||a.contentWindow.document:c.makeArray(a.childNodes)}},function(a,b){c.fn[a]=function(d,e){var f=c.map(this,b,d);Wa.test(a)||(e=d);if(e&&typeof e==="string")f=c.filter(e,f);f=this.length>1?c.unique(f):f;if((this.length>1||Ya.test(e))&&Xa.test(a))f=f.reverse();return this.pushStack(f,a,Za.call(arguments).join(","))}});
+c.extend({filter:function(a,b,d){if(d)a=":not("+a+")";return b.length===1?c.find.matchesSelector(b[0],a)?[b[0]]:[]:c.find.matches(a,b)},dir:function(a,b,d){var e=[];for(a=a[b];a&&a.nodeType!==9&&(d===A||a.nodeType!==1||!c(a).is(d));){a.nodeType===1&&e.push(a);a=a[b]}return e},nth:function(a,b,d){b=b||1;for(var e=0;a;a=a[d])if(a.nodeType===1&&++e===b)break;return a},sibling:function(a,b){for(var d=[];a;a=a.nextSibling)a.nodeType===1&&a!==b&&d.push(a);return d}});var xa=/ jQuery\d+="(?:\d+|null)"/g,
+$=/^\s+/,ya=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/ig,za=/<([\w:]+)/,ab=/<tbody/i,bb=/<|&#?\w+;/,Aa=/<(?:script|object|embed|option|style)/i,Ba=/checked\s*(?:[^=]|=\s*.checked.)/i,cb=/\=([^="'>\s]+\/)>/g,O={option:[1,"<select multiple='multiple'>","</select>"],legend:[1,"<fieldset>","</fieldset>"],thead:[1,"<table>","</table>"],tr:[2,"<table><tbody>","</tbody></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],col:[2,"<table><tbody></tbody><colgroup>","</colgroup></table>"],
+area:[1,"<map>","</map>"],_default:[0,"",""]};O.optgroup=O.option;O.tbody=O.tfoot=O.colgroup=O.caption=O.thead;O.th=O.td;if(!c.support.htmlSerialize)O._default=[1,"div<div>","</div>"];c.fn.extend({text:function(a){if(c.isFunction(a))return this.each(function(b){var d=c(this);d.text(a.call(this,b,d.text()))});if(typeof a!=="object"&&a!==A)return this.empty().append((this[0]&&this[0].ownerDocument||u).createTextNode(a));return c.text(this)},wrapAll:function(a){if(c.isFunction(a))return this.each(function(d){c(this).wrapAll(a.call(this,
+d))});if(this[0]){var b=c(a,this[0].ownerDocument).eq(0).clone(true);this[0].parentNode&&b.insertBefore(this[0]);b.map(function(){for(var d=this;d.firstChild&&d.firstChild.nodeType===1;)d=d.firstChild;return d}).append(this)}return this},wrapInner:function(a){if(c.isFunction(a))return this.each(function(b){c(this).wrapInner(a.call(this,b))});return this.each(function(){var b=c(this),d=b.contents();d.length?d.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){c(this).wrapAll(a)})},
+unwrap:function(){return this.parent().each(function(){c.nodeName(this,"body")||c(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.appendChild(a)})},prepend:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this)});else if(arguments.length){var a=
+c(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this.nextSibling)});else if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,c(arguments[0]).toArray());return a}},remove:function(a,b){for(var d=0,e;(e=this[d])!=null;d++)if(!a||c.filter(a,[e]).length){if(!b&&e.nodeType===1){c.cleanData(e.getElementsByTagName("*"));
+c.cleanData([e])}e.parentNode&&e.parentNode.removeChild(e)}return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++)for(b.nodeType===1&&c.cleanData(b.getElementsByTagName("*"));b.firstChild;)b.removeChild(b.firstChild);return this},clone:function(a){var b=this.map(function(){if(!c.support.noCloneEvent&&!c.isXMLDoc(this)){var d=this.outerHTML,e=this.ownerDocument;if(!d){d=e.createElement("div");d.appendChild(this.cloneNode(true));d=d.innerHTML}return c.clean([d.replace(xa,"").replace(cb,'="$1">').replace($,
+"")],e)[0]}else return this.cloneNode(true)});if(a===true){la(this,b);la(this.find("*"),b.find("*"))}return b},html:function(a){if(a===A)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(xa,""):null;else if(typeof a==="string"&&!Aa.test(a)&&(c.support.leadingWhitespace||!$.test(a))&&!O[(za.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(ya,"<$1></$2>");try{for(var b=0,d=this.length;b<d;b++)if(this[b].nodeType===1){c.cleanData(this[b].getElementsByTagName("*"));this[b].innerHTML=a}}catch(e){this.empty().append(a)}}else c.isFunction(a)?
+this.each(function(f){var h=c(this);h.html(a.call(this,f,h.html()))}):this.empty().append(a);return this},replaceWith:function(a){if(this[0]&&this[0].parentNode){if(c.isFunction(a))return this.each(function(b){var d=c(this),e=d.html();d.replaceWith(a.call(this,b,e))});if(typeof a!=="string")a=c(a).detach();return this.each(function(){var b=this.nextSibling,d=this.parentNode;c(this).remove();b?c(b).before(a):c(d).append(a)})}else return this.pushStack(c(c.isFunction(a)?a():a),"replaceWith",a)},detach:function(a){return this.remove(a,
+true)},domManip:function(a,b,d){var e,f,h=a[0],k=[],l;if(!c.support.checkClone&&arguments.length===3&&typeof h==="string"&&Ba.test(h))return this.each(function(){c(this).domManip(a,b,d,true)});if(c.isFunction(h))return this.each(function(s){var v=c(this);a[0]=h.call(this,s,b?v.html():A);v.domManip(a,b,d)});if(this[0]){e=h&&h.parentNode;e=c.support.parentNode&&e&&e.nodeType===11&&e.childNodes.length===this.length?{fragment:e}:c.buildFragment(a,this,k);l=e.fragment;if(f=l.childNodes.length===1?l=l.firstChild:
+l.firstChild){b=b&&c.nodeName(f,"tr");f=0;for(var n=this.length;f<n;f++)d.call(b?c.nodeName(this[f],"table")?this[f].getElementsByTagName("tbody")[0]||this[f].appendChild(this[f].ownerDocument.createElement("tbody")):this[f]:this[f],f>0||e.cacheable||this.length>1?l.cloneNode(true):l)}k.length&&c.each(k,Ka)}return this}});c.buildFragment=function(a,b,d){var e,f,h;b=b&&b[0]?b[0].ownerDocument||b[0]:u;if(a.length===1&&typeof a[0]==="string"&&a[0].length<512&&b===u&&!Aa.test(a[0])&&(c.support.checkClone||
+!Ba.test(a[0]))){f=true;if(h=c.fragments[a[0]])if(h!==1)e=h}if(!e){e=b.createDocumentFragment();c.clean(a,b,e,d)}if(f)c.fragments[a[0]]=h?e:1;return{fragment:e,cacheable:f}};c.fragments={};c.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){c.fn[a]=function(d){var e=[];d=c(d);var f=this.length===1&&this[0].parentNode;if(f&&f.nodeType===11&&f.childNodes.length===1&&d.length===1){d[b](this[0]);return this}else{f=0;for(var h=
+d.length;f<h;f++){var k=(f>0?this.clone(true):this).get();c(d[f])[b](k);e=e.concat(k)}return this.pushStack(e,a,d.selector)}}});c.extend({clean:function(a,b,d,e){b=b||u;if(typeof b.createElement==="undefined")b=b.ownerDocument||b[0]&&b[0].ownerDocument||u;for(var f=[],h=0,k;(k=a[h])!=null;h++){if(typeof k==="number")k+="";if(k){if(typeof k==="string"&&!bb.test(k))k=b.createTextNode(k);else if(typeof k==="string"){k=k.replace(ya,"<$1></$2>");var l=(za.exec(k)||["",""])[1].toLowerCase(),n=O[l]||O._default,
+s=n[0],v=b.createElement("div");for(v.innerHTML=n[1]+k+n[2];s--;)v=v.lastChild;if(!c.support.tbody){s=ab.test(k);l=l==="table"&&!s?v.firstChild&&v.firstChild.childNodes:n[1]==="<table>"&&!s?v.childNodes:[];for(n=l.length-1;n>=0;--n)c.nodeName(l[n],"tbody")&&!l[n].childNodes.length&&l[n].parentNode.removeChild(l[n])}!c.support.leadingWhitespace&&$.test(k)&&v.insertBefore(b.createTextNode($.exec(k)[0]),v.firstChild);k=v.childNodes}if(k.nodeType)f.push(k);else f=c.merge(f,k)}}if(d)for(h=0;f[h];h++)if(e&&
+c.nodeName(f[h],"script")&&(!f[h].type||f[h].type.toLowerCase()==="text/javascript"))e.push(f[h].parentNode?f[h].parentNode.removeChild(f[h]):f[h]);else{f[h].nodeType===1&&f.splice.apply(f,[h+1,0].concat(c.makeArray(f[h].getElementsByTagName("script"))));d.appendChild(f[h])}return f},cleanData:function(a){for(var b,d,e=c.cache,f=c.event.special,h=c.support.deleteExpando,k=0,l;(l=a[k])!=null;k++)if(!(l.nodeName&&c.noData[l.nodeName.toLowerCase()]))if(d=l[c.expando]){if((b=e[d])&&b.events)for(var n in b.events)f[n]?
+c.event.remove(l,n):c.removeEvent(l,n,b.handle);if(h)delete l[c.expando];else l.removeAttribute&&l.removeAttribute(c.expando);delete e[d]}}});var Ca=/alpha\([^)]*\)/i,db=/opacity=([^)]*)/,eb=/-([a-z])/ig,fb=/([A-Z])/g,Da=/^-?\d+(?:px)?$/i,gb=/^-?\d/,hb={position:"absolute",visibility:"hidden",display:"block"},La=["Left","Right"],Ma=["Top","Bottom"],W,ib=u.defaultView&&u.defaultView.getComputedStyle,jb=function(a,b){return b.toUpperCase()};c.fn.css=function(a,b){if(arguments.length===2&&b===A)return this;
+return c.access(this,a,b,true,function(d,e,f){return f!==A?c.style(d,e,f):c.css(d,e)})};c.extend({cssHooks:{opacity:{get:function(a,b){if(b){var d=W(a,"opacity","opacity");return d===""?"1":d}else return a.style.opacity}}},cssNumber:{zIndex:true,fontWeight:true,opacity:true,zoom:true,lineHeight:true},cssProps:{"float":c.support.cssFloat?"cssFloat":"styleFloat"},style:function(a,b,d,e){if(!(!a||a.nodeType===3||a.nodeType===8||!a.style)){var f,h=c.camelCase(b),k=a.style,l=c.cssHooks[h];b=c.cssProps[h]||
+h;if(d!==A){if(!(typeof d==="number"&&isNaN(d)||d==null)){if(typeof d==="number"&&!c.cssNumber[h])d+="px";if(!l||!("set"in l)||(d=l.set(a,d))!==A)try{k[b]=d}catch(n){}}}else{if(l&&"get"in l&&(f=l.get(a,false,e))!==A)return f;return k[b]}}},css:function(a,b,d){var e,f=c.camelCase(b),h=c.cssHooks[f];b=c.cssProps[f]||f;if(h&&"get"in h&&(e=h.get(a,true,d))!==A)return e;else if(W)return W(a,b,f)},swap:function(a,b,d){var e={},f;for(f in b){e[f]=a.style[f];a.style[f]=b[f]}d.call(a);for(f in b)a.style[f]=
+e[f]},camelCase:function(a){return a.replace(eb,jb)}});c.curCSS=c.css;c.each(["height","width"],function(a,b){c.cssHooks[b]={get:function(d,e,f){var h;if(e){if(d.offsetWidth!==0)h=ma(d,b,f);else c.swap(d,hb,function(){h=ma(d,b,f)});return h+"px"}},set:function(d,e){if(Da.test(e)){e=parseFloat(e);if(e>=0)return e+"px"}else return e}}});if(!c.support.opacity)c.cssHooks.opacity={get:function(a,b){return db.test((b&&a.currentStyle?a.currentStyle.filter:a.style.filter)||"")?parseFloat(RegExp.$1)/100+"":
+b?"1":""},set:function(a,b){var d=a.style;d.zoom=1;var e=c.isNaN(b)?"":"alpha(opacity="+b*100+")",f=d.filter||"";d.filter=Ca.test(f)?f.replace(Ca,e):d.filter+" "+e}};if(ib)W=function(a,b,d){var e;d=d.replace(fb,"-$1").toLowerCase();if(!(b=a.ownerDocument.defaultView))return A;if(b=b.getComputedStyle(a,null)){e=b.getPropertyValue(d);if(e===""&&!c.contains(a.ownerDocument.documentElement,a))e=c.style(a,d)}return e};else if(u.documentElement.currentStyle)W=function(a,b){var d,e,f=a.currentStyle&&a.currentStyle[b],
+h=a.style;if(!Da.test(f)&&gb.test(f)){d=h.left;e=a.runtimeStyle.left;a.runtimeStyle.left=a.currentStyle.left;h.left=b==="fontSize"?"1em":f||0;f=h.pixelLeft+"px";h.left=d;a.runtimeStyle.left=e}return f};if(c.expr&&c.expr.filters){c.expr.filters.hidden=function(a){var b=a.offsetHeight;return a.offsetWidth===0&&b===0||!c.support.reliableHiddenOffsets&&(a.style.display||c.css(a,"display"))==="none"};c.expr.filters.visible=function(a){return!c.expr.filters.hidden(a)}}var kb=c.now(),lb=/<script\b[^<]*(?:(?!<\/script>)<[^<]*)*<\/script>/gi,
+mb=/^(?:select|textarea)/i,nb=/^(?:color|date|datetime|email|hidden|month|number|password|range|search|tel|text|time|url|week)$/i,ob=/^(?:GET|HEAD|DELETE)$/,Na=/\[\]$/,T=/\=\?(&|$)/,ia=/\?/,pb=/([?&])_=[^&]*/,qb=/^(\w+:)?\/\/([^\/?#]+)/,rb=/%20/g,sb=/#.*$/,Ea=c.fn.load;c.fn.extend({load:function(a,b,d){if(typeof a!=="string"&&Ea)return Ea.apply(this,arguments);else if(!this.length)return this;var e=a.indexOf(" ");if(e>=0){var f=a.slice(e,a.length);a=a.slice(0,e)}e="GET";if(b)if(c.isFunction(b)){d=
+b;b=null}else if(typeof b==="object"){b=c.param(b,c.ajaxSettings.traditional);e="POST"}var h=this;c.ajax({url:a,type:e,dataType:"html",data:b,complete:function(k,l){if(l==="success"||l==="notmodified")h.html(f?c("<div>").append(k.responseText.replace(lb,"")).find(f):k.responseText);d&&h.each(d,[k.responseText,l,k])}});return this},serialize:function(){return c.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?c.makeArray(this.elements):this}).filter(function(){return this.name&&
+!this.disabled&&(this.checked||mb.test(this.nodeName)||nb.test(this.type))}).map(function(a,b){var d=c(this).val();return d==null?null:c.isArray(d)?c.map(d,function(e){return{name:b.name,value:e}}):{name:b.name,value:d}}).get()}});c.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "),function(a,b){c.fn[b]=function(d){return this.bind(b,d)}});c.extend({get:function(a,b,d,e){if(c.isFunction(b)){e=e||d;d=b;b=null}return c.ajax({type:"GET",url:a,data:b,success:d,dataType:e})},
+getScript:function(a,b){return c.get(a,null,b,"script")},getJSON:function(a,b,d){return c.get(a,b,d,"json")},post:function(a,b,d,e){if(c.isFunction(b)){e=e||d;d=b;b={}}return c.ajax({type:"POST",url:a,data:b,success:d,dataType:e})},ajaxSetup:function(a){c.extend(c.ajaxSettings,a)},ajaxSettings:{url:location.href,global:true,type:"GET",contentType:"application/x-www-form-urlencoded",processData:true,async:true,xhr:function(){return new E.XMLHttpRequest},accepts:{xml:"application/xml, text/xml",html:"text/html",
+script:"text/javascript, application/javascript",json:"application/json, text/javascript",text:"text/plain",_default:"*/*"}},ajax:function(a){var b=c.extend(true,{},c.ajaxSettings,a),d,e,f,h=b.type.toUpperCase(),k=ob.test(h);b.url=b.url.replace(sb,"");b.context=a&&a.context!=null?a.context:b;if(b.data&&b.processData&&typeof b.data!=="string")b.data=c.param(b.data,b.traditional);if(b.dataType==="jsonp"){if(h==="GET")T.test(b.url)||(b.url+=(ia.test(b.url)?"&":"?")+(b.jsonp||"callback")+"=?");else if(!b.data||
+!T.test(b.data))b.data=(b.data?b.data+"&":"")+(b.jsonp||"callback")+"=?";b.dataType="json"}if(b.dataType==="json"&&(b.data&&T.test(b.data)||T.test(b.url))){d=b.jsonpCallback||"jsonp"+kb++;if(b.data)b.data=(b.data+"").replace(T,"="+d+"$1");b.url=b.url.replace(T,"="+d+"$1");b.dataType="script";var l=E[d];E[d]=function(m){f=m;c.handleSuccess(b,w,e,f);c.handleComplete(b,w,e,f);if(c.isFunction(l))l(m);else{E[d]=A;try{delete E[d]}catch(p){}}v&&v.removeChild(B)}}if(b.dataType==="script"&&b.cache===null)b.cache=
+false;if(b.cache===false&&h==="GET"){var n=c.now(),s=b.url.replace(pb,"$1_="+n);b.url=s+(s===b.url?(ia.test(b.url)?"&":"?")+"_="+n:"")}if(b.data&&h==="GET")b.url+=(ia.test(b.url)?"&":"?")+b.data;b.global&&c.active++===0&&c.event.trigger("ajaxStart");n=(n=qb.exec(b.url))&&(n[1]&&n[1]!==location.protocol||n[2]!==location.host);if(b.dataType==="script"&&h==="GET"&&n){var v=u.getElementsByTagName("head")[0]||u.documentElement,B=u.createElement("script");if(b.scriptCharset)B.charset=b.scriptCharset;B.src=
+b.url;if(!d){var D=false;B.onload=B.onreadystatechange=function(){if(!D&&(!this.readyState||this.readyState==="loaded"||this.readyState==="complete")){D=true;c.handleSuccess(b,w,e,f);c.handleComplete(b,w,e,f);B.onload=B.onreadystatechange=null;v&&B.parentNode&&v.removeChild(B)}}}v.insertBefore(B,v.firstChild);return A}var H=false,w=b.xhr();if(w){b.username?w.open(h,b.url,b.async,b.username,b.password):w.open(h,b.url,b.async);try{if(b.data!=null&&!k||a&&a.contentType)w.setRequestHeader("Content-Type",
+b.contentType);if(b.ifModified){c.lastModified[b.url]&&w.setRequestHeader("If-Modified-Since",c.lastModified[b.url]);c.etag[b.url]&&w.setRequestHeader("If-None-Match",c.etag[b.url])}n||w.setRequestHeader("X-Requested-With","XMLHttpRequest");w.setRequestHeader("Accept",b.dataType&&b.accepts[b.dataType]?b.accepts[b.dataType]+", */*; q=0.01":b.accepts._default)}catch(G){}if(b.beforeSend&&b.beforeSend.call(b.context,w,b)===false){b.global&&c.active--===1&&c.event.trigger("ajaxStop");w.abort();return false}b.global&&
+c.triggerGlobal(b,"ajaxSend",[w,b]);var M=w.onreadystatechange=function(m){if(!w||w.readyState===0||m==="abort"){H||c.handleComplete(b,w,e,f);H=true;if(w)w.onreadystatechange=c.noop}else if(!H&&w&&(w.readyState===4||m==="timeout")){H=true;w.onreadystatechange=c.noop;e=m==="timeout"?"timeout":!c.httpSuccess(w)?"error":b.ifModified&&c.httpNotModified(w,b.url)?"notmodified":"success";var p;if(e==="success")try{f=c.httpData(w,b.dataType,b)}catch(q){e="parsererror";p=q}if(e==="success"||e==="notmodified")d||
+c.handleSuccess(b,w,e,f);else c.handleError(b,w,e,p);d||c.handleComplete(b,w,e,f);m==="timeout"&&w.abort();if(b.async)w=null}};try{var g=w.abort;w.abort=function(){w&&g.call&&g.call(w);M("abort")}}catch(j){}b.async&&b.timeout>0&&setTimeout(function(){w&&!H&&M("timeout")},b.timeout);try{w.send(k||b.data==null?null:b.data)}catch(o){c.handleError(b,w,null,o);c.handleComplete(b,w,e,f)}b.async||M();return w}},param:function(a,b){var d=[],e=function(h,k){k=c.isFunction(k)?k():k;d[d.length]=encodeURIComponent(h)+
+"="+encodeURIComponent(k)};if(b===A)b=c.ajaxSettings.traditional;if(c.isArray(a)||a.jquery)c.each(a,function(){e(this.name,this.value)});else for(var f in a)ca(f,a[f],b,e);return d.join("&").replace(rb,"+")}});c.extend({active:0,lastModified:{},etag:{},handleError:function(a,b,d,e){a.error&&a.error.call(a.context,b,d,e);a.global&&c.triggerGlobal(a,"ajaxError",[b,a,e])},handleSuccess:function(a,b,d,e){a.success&&a.success.call(a.context,e,d,b);a.global&&c.triggerGlobal(a,"ajaxSuccess",[b,a])},handleComplete:function(a,
+b,d){a.complete&&a.complete.call(a.context,b,d);a.global&&c.triggerGlobal(a,"ajaxComplete",[b,a]);a.global&&c.active--===1&&c.event.trigger("ajaxStop")},triggerGlobal:function(a,b,d){(a.context&&a.context.url==null?c(a.context):c.event).trigger(b,d)},httpSuccess:function(a){try{return!a.status&&location.protocol==="file:"||a.status>=200&&a.status<300||a.status===304||a.status===1223}catch(b){}return false},httpNotModified:function(a,b){var d=a.getResponseHeader("Last-Modified"),e=a.getResponseHeader("Etag");
+if(d)c.lastModified[b]=d;if(e)c.etag[b]=e;return a.status===304},httpData:function(a,b,d){var e=a.getResponseHeader("content-type")||"",f=b==="xml"||!b&&e.indexOf("xml")>=0;a=f?a.responseXML:a.responseText;f&&a.documentElement.nodeName==="parsererror"&&c.error("parsererror");if(d&&d.dataFilter)a=d.dataFilter(a,b);if(typeof a==="string")if(b==="json"||!b&&e.indexOf("json")>=0)a=c.parseJSON(a);else if(b==="script"||!b&&e.indexOf("javascript")>=0)c.globalEval(a);return a}});if(E.ActiveXObject)c.ajaxSettings.xhr=
+function(){if(E.location.protocol!=="file:")try{return new E.XMLHttpRequest}catch(a){}try{return new E.ActiveXObject("Microsoft.XMLHTTP")}catch(b){}};c.support.ajax=!!c.ajaxSettings.xhr();var da={},tb=/^(?:toggle|show|hide)$/,ub=/^([+\-]=)?([\d+.\-]+)(.*)$/,aa,na=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]];c.fn.extend({show:function(a,b,d){if(a||a===0)return this.animate(S("show",3),a,b,d);else{a=
+0;for(b=this.length;a<b;a++){if(!c.data(this[a],"olddisplay")&&this[a].style.display==="none")this[a].style.display="";this[a].style.display===""&&c.css(this[a],"display")==="none"&&c.data(this[a],"olddisplay",oa(this[a].nodeName))}for(a=0;a<b;a++)this[a].style.display=c.data(this[a],"olddisplay")||"";return this}},hide:function(a,b,d){if(a||a===0)return this.animate(S("hide",3),a,b,d);else{a=0;for(b=this.length;a<b;a++){d=c.css(this[a],"display");d!=="none"&&c.data(this[a],"olddisplay",d)}for(a=
+0;a<b;a++)this[a].style.display="none";return this}},_toggle:c.fn.toggle,toggle:function(a,b,d){var e=typeof a==="boolean";if(c.isFunction(a)&&c.isFunction(b))this._toggle.apply(this,arguments);else a==null||e?this.each(function(){var f=e?a:c(this).is(":hidden");c(this)[f?"show":"hide"]()}):this.animate(S("toggle",3),a,b,d);return this},fadeTo:function(a,b,d,e){return this.filter(":hidden").css("opacity",0).show().end().animate({opacity:b},a,d,e)},animate:function(a,b,d,e){var f=c.speed(b,d,e);if(c.isEmptyObject(a))return this.each(f.complete);
+return this[f.queue===false?"each":"queue"](function(){var h=c.extend({},f),k,l=this.nodeType===1,n=l&&c(this).is(":hidden"),s=this;for(k in a){var v=c.camelCase(k);if(k!==v){a[v]=a[k];delete a[k];k=v}if(a[k]==="hide"&&n||a[k]==="show"&&!n)return h.complete.call(this);if(l&&(k==="height"||k==="width")){h.overflow=[this.style.overflow,this.style.overflowX,this.style.overflowY];if(c.css(this,"display")==="inline"&&c.css(this,"float")==="none")if(c.support.inlineBlockNeedsLayout)if(oa(this.nodeName)===
+"inline")this.style.display="inline-block";else{this.style.display="inline";this.style.zoom=1}else this.style.display="inline-block"}if(c.isArray(a[k])){(h.specialEasing=h.specialEasing||{})[k]=a[k][1];a[k]=a[k][0]}}if(h.overflow!=null)this.style.overflow="hidden";h.curAnim=c.extend({},a);c.each(a,function(B,D){var H=new c.fx(s,h,B);if(tb.test(D))H[D==="toggle"?n?"show":"hide":D](a);else{var w=ub.exec(D),G=H.cur(true)||0;if(w){var M=parseFloat(w[2]),g=w[3]||"px";if(g!=="px"){c.style(s,B,(M||1)+g);
+G=(M||1)/H.cur(true)*G;c.style(s,B,G+g)}if(w[1])M=(w[1]==="-="?-1:1)*M+G;H.custom(G,M,g)}else H.custom(G,D,"")}});return true})},stop:function(a,b){var d=c.timers;a&&this.queue([]);this.each(function(){for(var e=d.length-1;e>=0;e--)if(d[e].elem===this){b&&d[e](true);d.splice(e,1)}});b||this.dequeue();return this}});c.each({slideDown:S("show",1),slideUp:S("hide",1),slideToggle:S("toggle",1),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"}},function(a,b){c.fn[a]=function(d,e,f){return this.animate(b,
+d,e,f)}});c.extend({speed:function(a,b,d){var e=a&&typeof a==="object"?c.extend({},a):{complete:d||!d&&b||c.isFunction(a)&&a,duration:a,easing:d&&b||b&&!c.isFunction(b)&&b};e.duration=c.fx.off?0:typeof e.duration==="number"?e.duration:e.duration in c.fx.speeds?c.fx.speeds[e.duration]:c.fx.speeds._default;e.old=e.complete;e.complete=function(){e.queue!==false&&c(this).dequeue();c.isFunction(e.old)&&e.old.call(this)};return e},easing:{linear:function(a,b,d,e){return d+e*a},swing:function(a,b,d,e){return(-Math.cos(a*
+Math.PI)/2+0.5)*e+d}},timers:[],fx:function(a,b,d){this.options=b;this.elem=a;this.prop=d;if(!b.orig)b.orig={}}});c.fx.prototype={update:function(){this.options.step&&this.options.step.call(this.elem,this.now,this);(c.fx.step[this.prop]||c.fx.step._default)(this)},cur:function(){if(this.elem[this.prop]!=null&&(!this.elem.style||this.elem.style[this.prop]==null))return this.elem[this.prop];var a=parseFloat(c.css(this.elem,this.prop));return a&&a>-1E4?a:0},custom:function(a,b,d){function e(h){return f.step(h)}
+this.startTime=c.now();this.start=a;this.end=b;this.unit=d||this.unit||"px";this.now=this.start;this.pos=this.state=0;var f=this;a=c.fx;e.elem=this.elem;if(e()&&c.timers.push(e)&&!aa)aa=setInterval(a.tick,a.interval)},show:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.show=true;this.custom(this.prop==="width"||this.prop==="height"?1:0,this.cur());c(this.elem).show()},hide:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.hide=true;
+this.custom(this.cur(),0)},step:function(a){var b=c.now(),d=true;if(a||b>=this.options.duration+this.startTime){this.now=this.end;this.pos=this.state=1;this.update();this.options.curAnim[this.prop]=true;for(var e in this.options.curAnim)if(this.options.curAnim[e]!==true)d=false;if(d){if(this.options.overflow!=null&&!c.support.shrinkWrapBlocks){var f=this.elem,h=this.options;c.each(["","X","Y"],function(l,n){f.style["overflow"+n]=h.overflow[l]})}this.options.hide&&c(this.elem).hide();if(this.options.hide||
+this.options.show)for(var k in this.options.curAnim)c.style(this.elem,k,this.options.orig[k]);this.options.complete.call(this.elem)}return false}else{a=b-this.startTime;this.state=a/this.options.duration;b=this.options.easing||(c.easing.swing?"swing":"linear");this.pos=c.easing[this.options.specialEasing&&this.options.specialEasing[this.prop]||b](this.state,a,0,1,this.options.duration);this.now=this.start+(this.end-this.start)*this.pos;this.update()}return true}};c.extend(c.fx,{tick:function(){for(var a=
+c.timers,b=0;b<a.length;b++)a[b]()||a.splice(b--,1);a.length||c.fx.stop()},interval:13,stop:function(){clearInterval(aa);aa=null},speeds:{slow:600,fast:200,_default:400},step:{opacity:function(a){c.style(a.elem,"opacity",a.now)},_default:function(a){if(a.elem.style&&a.elem.style[a.prop]!=null)a.elem.style[a.prop]=(a.prop==="width"||a.prop==="height"?Math.max(0,a.now):a.now)+a.unit;else a.elem[a.prop]=a.now}}});if(c.expr&&c.expr.filters)c.expr.filters.animated=function(a){return c.grep(c.timers,function(b){return a===
+b.elem}).length};var vb=/^t(?:able|d|h)$/i,Fa=/^(?:body|html)$/i;c.fn.offset="getBoundingClientRect"in u.documentElement?function(a){var b=this[0],d;if(a)return this.each(function(k){c.offset.setOffset(this,a,k)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);try{d=b.getBoundingClientRect()}catch(e){}var f=b.ownerDocument,h=f.documentElement;if(!d||!c.contains(h,b))return d||{top:0,left:0};b=f.body;f=ea(f);return{top:d.top+(f.pageYOffset||c.support.boxModel&&
+h.scrollTop||b.scrollTop)-(h.clientTop||b.clientTop||0),left:d.left+(f.pageXOffset||c.support.boxModel&&h.scrollLeft||b.scrollLeft)-(h.clientLeft||b.clientLeft||0)}}:function(a){var b=this[0];if(a)return this.each(function(s){c.offset.setOffset(this,a,s)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);c.offset.initialize();var d=b.offsetParent,e=b.ownerDocument,f,h=e.documentElement,k=e.body;f=(e=e.defaultView)?e.getComputedStyle(b,null):b.currentStyle;
+for(var l=b.offsetTop,n=b.offsetLeft;(b=b.parentNode)&&b!==k&&b!==h;){if(c.offset.supportsFixedPosition&&f.position==="fixed")break;f=e?e.getComputedStyle(b,null):b.currentStyle;l-=b.scrollTop;n-=b.scrollLeft;if(b===d){l+=b.offsetTop;n+=b.offsetLeft;if(c.offset.doesNotAddBorder&&!(c.offset.doesAddBorderForTableAndCells&&vb.test(b.nodeName))){l+=parseFloat(f.borderTopWidth)||0;n+=parseFloat(f.borderLeftWidth)||0}d=b.offsetParent}if(c.offset.subtractsBorderForOverflowNotVisible&&f.overflow!=="visible"){l+=
+parseFloat(f.borderTopWidth)||0;n+=parseFloat(f.borderLeftWidth)||0}f=f}if(f.position==="relative"||f.position==="static"){l+=k.offsetTop;n+=k.offsetLeft}if(c.offset.supportsFixedPosition&&f.position==="fixed"){l+=Math.max(h.scrollTop,k.scrollTop);n+=Math.max(h.scrollLeft,k.scrollLeft)}return{top:l,left:n}};c.offset={initialize:function(){var a=u.body,b=u.createElement("div"),d,e,f,h=parseFloat(c.css(a,"marginTop"))||0;c.extend(b.style,{position:"absolute",top:0,left:0,margin:0,border:0,width:"1px",
+height:"1px",visibility:"hidden"});b.innerHTML="<div style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;'><div></div></div><table style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;' cellpadding='0' cellspacing='0'><tr><td></td></tr></table>";a.insertBefore(b,a.firstChild);d=b.firstChild;e=d.firstChild;f=d.nextSibling.firstChild.firstChild;this.doesNotAddBorder=e.offsetTop!==5;this.doesAddBorderForTableAndCells=
+f.offsetTop===5;e.style.position="fixed";e.style.top="20px";this.supportsFixedPosition=e.offsetTop===20||e.offsetTop===15;e.style.position=e.style.top="";d.style.overflow="hidden";d.style.position="relative";this.subtractsBorderForOverflowNotVisible=e.offsetTop===-5;this.doesNotIncludeMarginInBodyOffset=a.offsetTop!==h;a.removeChild(b);c.offset.initialize=c.noop},bodyOffset:function(a){var b=a.offsetTop,d=a.offsetLeft;c.offset.initialize();if(c.offset.doesNotIncludeMarginInBodyOffset){b+=parseFloat(c.css(a,
+"marginTop"))||0;d+=parseFloat(c.css(a,"marginLeft"))||0}return{top:b,left:d}},setOffset:function(a,b,d){var e=c.css(a,"position");if(e==="static")a.style.position="relative";var f=c(a),h=f.offset(),k=c.css(a,"top"),l=c.css(a,"left"),n=e==="absolute"&&c.inArray("auto",[k,l])>-1;e={};var s={};if(n)s=f.position();k=n?s.top:parseInt(k,10)||0;l=n?s.left:parseInt(l,10)||0;if(c.isFunction(b))b=b.call(a,d,h);if(b.top!=null)e.top=b.top-h.top+k;if(b.left!=null)e.left=b.left-h.left+l;"using"in b?b.using.call(a,
+e):f.css(e)}};c.fn.extend({position:function(){if(!this[0])return null;var a=this[0],b=this.offsetParent(),d=this.offset(),e=Fa.test(b[0].nodeName)?{top:0,left:0}:b.offset();d.top-=parseFloat(c.css(a,"marginTop"))||0;d.left-=parseFloat(c.css(a,"marginLeft"))||0;e.top+=parseFloat(c.css(b[0],"borderTopWidth"))||0;e.left+=parseFloat(c.css(b[0],"borderLeftWidth"))||0;return{top:d.top-e.top,left:d.left-e.left}},offsetParent:function(){return this.map(function(){for(var a=this.offsetParent||u.body;a&&!Fa.test(a.nodeName)&&
+c.css(a,"position")==="static";)a=a.offsetParent;return a})}});c.each(["Left","Top"],function(a,b){var d="scroll"+b;c.fn[d]=function(e){var f=this[0],h;if(!f)return null;if(e!==A)return this.each(function(){if(h=ea(this))h.scrollTo(!a?e:c(h).scrollLeft(),a?e:c(h).scrollTop());else this[d]=e});else return(h=ea(f))?"pageXOffset"in h?h[a?"pageYOffset":"pageXOffset"]:c.support.boxModel&&h.document.documentElement[d]||h.document.body[d]:f[d]}});c.each(["Height","Width"],function(a,b){var d=b.toLowerCase();
+c.fn["inner"+b]=function(){return this[0]?parseFloat(c.css(this[0],d,"padding")):null};c.fn["outer"+b]=function(e){return this[0]?parseFloat(c.css(this[0],d,e?"margin":"border")):null};c.fn[d]=function(e){var f=this[0];if(!f)return e==null?null:this;if(c.isFunction(e))return this.each(function(h){var k=c(this);k[d](e.call(this,h,k[d]()))});return c.isWindow(f)?f.document.compatMode==="CSS1Compat"&&f.document.documentElement["client"+b]||f.document.body["client"+b]:f.nodeType===9?Math.max(f.documentElement["client"+
+b],f.body["scroll"+b],f.documentElement["scroll"+b],f.body["offset"+b],f.documentElement["offset"+b]):e===A?parseFloat(c.css(f,d)):this.css(d,typeof e==="string"?e:e+"px")}})})(window);
diff --git a/python/helpers/coverage/htmlfiles/jquery.hotkeys.js b/python/helpers/coverage/htmlfiles/jquery.hotkeys.js
new file mode 100644
index 0000000..09b21e0
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/jquery.hotkeys.js
@@ -0,0 +1,99 @@
+/*
+ * jQuery Hotkeys Plugin
+ * Copyright 2010, John Resig
+ * Dual licensed under the MIT or GPL Version 2 licenses.
+ *
+ * Based upon the plugin by Tzury Bar Yochay:
+ * http://github.com/tzuryby/hotkeys
+ *
+ * Original idea by:
+ * Binny V A, http://www.openjs.com/scripts/events/keyboard_shortcuts/
+*/
+
+(function(jQuery){
+
+ jQuery.hotkeys = {
+ version: "0.8",
+
+ specialKeys: {
+ 8: "backspace", 9: "tab", 13: "return", 16: "shift", 17: "ctrl", 18: "alt", 19: "pause",
+ 20: "capslock", 27: "esc", 32: "space", 33: "pageup", 34: "pagedown", 35: "end", 36: "home",
+ 37: "left", 38: "up", 39: "right", 40: "down", 45: "insert", 46: "del",
+ 96: "0", 97: "1", 98: "2", 99: "3", 100: "4", 101: "5", 102: "6", 103: "7",
+ 104: "8", 105: "9", 106: "*", 107: "+", 109: "-", 110: ".", 111 : "/",
+ 112: "f1", 113: "f2", 114: "f3", 115: "f4", 116: "f5", 117: "f6", 118: "f7", 119: "f8",
+ 120: "f9", 121: "f10", 122: "f11", 123: "f12", 144: "numlock", 145: "scroll", 191: "/", 224: "meta"
+ },
+
+ shiftNums: {
+ "`": "~", "1": "!", "2": "@", "3": "#", "4": "$", "5": "%", "6": "^", "7": "&",
+ "8": "*", "9": "(", "0": ")", "-": "_", "=": "+", ";": ": ", "'": "\"", ",": "<",
+ ".": ">", "/": "?", "\\": "|"
+ }
+ };
+
+ function keyHandler( handleObj ) {
+ // Only care when a possible input has been specified
+ if ( typeof handleObj.data !== "string" ) {
+ return;
+ }
+
+ var origHandler = handleObj.handler,
+ keys = handleObj.data.toLowerCase().split(" ");
+
+ handleObj.handler = function( event ) {
+ // Don't fire in text-accepting inputs that we didn't directly bind to
+ if ( this !== event.target && (/textarea|select/i.test( event.target.nodeName ) ||
+ event.target.type === "text") ) {
+ return;
+ }
+
+ // Keypress represents characters, not special keys
+ var special = event.type !== "keypress" && jQuery.hotkeys.specialKeys[ event.which ],
+ character = String.fromCharCode( event.which ).toLowerCase(),
+ key, modif = "", possible = {};
+
+ // check combinations (alt|ctrl|shift+anything)
+ if ( event.altKey && special !== "alt" ) {
+ modif += "alt+";
+ }
+
+ if ( event.ctrlKey && special !== "ctrl" ) {
+ modif += "ctrl+";
+ }
+
+ // TODO: Need to make sure this works consistently across platforms
+ if ( event.metaKey && !event.ctrlKey && special !== "meta" ) {
+ modif += "meta+";
+ }
+
+ if ( event.shiftKey && special !== "shift" ) {
+ modif += "shift+";
+ }
+
+ if ( special ) {
+ possible[ modif + special ] = true;
+
+ } else {
+ possible[ modif + character ] = true;
+ possible[ modif + jQuery.hotkeys.shiftNums[ character ] ] = true;
+
+ // "$" can be triggered as "Shift+4" or "Shift+$" or just "$"
+ if ( modif === "shift+" ) {
+ possible[ jQuery.hotkeys.shiftNums[ character ] ] = true;
+ }
+ }
+
+ for ( var i = 0, l = keys.length; i < l; i++ ) {
+ if ( possible[ keys[i] ] ) {
+ return origHandler.apply( this, arguments );
+ }
+ }
+ };
+ }
+
+ jQuery.each([ "keydown", "keyup", "keypress" ], function() {
+ jQuery.event.special[ this ] = { add: keyHandler };
+ });
+
+})( jQuery );
diff --git a/python/helpers/coverage/htmlfiles/jquery.isonscreen.js b/python/helpers/coverage/htmlfiles/jquery.isonscreen.js
new file mode 100644
index 0000000..0182ebd
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/jquery.isonscreen.js
@@ -0,0 +1,53 @@
+/* Copyright (c) 2010
+ * @author Laurence Wheway
+ * Dual licensed under the MIT (http://www.opensource.org/licenses/mit-license.php)
+ * and GPL (http://www.opensource.org/licenses/gpl-license.php) licenses.
+ *
+ * @version 1.2.0
+ */
+(function($) {
+ jQuery.extend({
+ isOnScreen: function(box, container) {
+			//ensure numbers come in as integers (not strings) and remove 'px' if it's there
+ for(var i in box){box[i] = parseFloat(box[i])};
+ for(var i in container){container[i] = parseFloat(container[i])};
+
+ if(!container){
+ container = {
+ left: $(window).scrollLeft(),
+ top: $(window).scrollTop(),
+ width: $(window).width(),
+ height: $(window).height()
+ }
+ }
+
+ if( box.left+box.width-container.left > 0 &&
+ box.left < container.width+container.left &&
+ box.top+box.height-container.top > 0 &&
+ box.top < container.height+container.top
+ ) return true;
+ return false;
+ }
+ })
+
+
+ jQuery.fn.isOnScreen = function (container) {
+ for(var i in container){container[i] = parseFloat(container[i])};
+
+ if(!container){
+ container = {
+ left: $(window).scrollLeft(),
+ top: $(window).scrollTop(),
+ width: $(window).width(),
+ height: $(window).height()
+ }
+ }
+
+ if( $(this).offset().left+$(this).width()-container.left > 0 &&
+ $(this).offset().left < container.width+container.left &&
+ $(this).offset().top+$(this).height()-container.top > 0 &&
+ $(this).offset().top < container.height+container.top
+ ) return true;
+ return false;
+ }
+})(jQuery);
diff --git a/python/helpers/coverage/htmlfiles/jquery.tablesorter.min.js b/python/helpers/coverage/htmlfiles/jquery.tablesorter.min.js
new file mode 100644
index 0000000..64c7007
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/jquery.tablesorter.min.js
@@ -0,0 +1,2 @@
+
+(function($){$.extend({tablesorter:new function(){var parsers=[],widgets=[];this.defaults={cssHeader:"header",cssAsc:"headerSortUp",cssDesc:"headerSortDown",sortInitialOrder:"asc",sortMultiSortKey:"shiftKey",sortForce:null,sortAppend:null,textExtraction:"simple",parsers:{},widgets:[],widgetZebra:{css:["even","odd"]},headers:{},widthFixed:false,cancelSelection:true,sortList:[],headerList:[],dateFormat:"us",decimal:'.',debug:false};function benchmark(s,d){log(s+","+(new Date().getTime()-d.getTime())+"ms");}this.benchmark=benchmark;function log(s){if(typeof console!="undefined"&&typeof console.debug!="undefined"){console.log(s);}else{alert(s);}}function buildParserCache(table,$headers){if(table.config.debug){var parsersDebug="";}var rows=table.tBodies[0].rows;if(table.tBodies[0].rows[0]){var list=[],cells=rows[0].cells,l=cells.length;for(var i=0;i<l;i++){var p=false;if($.metadata&&($($headers[i]).metadata()&&$($headers[i]).metadata().sorter)){p=getParserById($($headers[i]).metadata().sorter);}else if((table.config.headers[i]&&table.config.headers[i].sorter)){p=getParserById(table.config.headers[i].sorter);}if(!p){p=detectParserForColumn(table,cells[i]);}if(table.config.debug){parsersDebug+="column:"+i+" parser:"+p.id+"\n";}list.push(p);}}if(table.config.debug){log(parsersDebug);}return list;};function detectParserForColumn(table,node){var l=parsers.length;for(var i=1;i<l;i++){if(parsers[i].is($.trim(getElementText(table.config,node)),table,node)){return parsers[i];}}return parsers[0];}function getParserById(name){var l=parsers.length;for(var i=0;i<l;i++){if(parsers[i].id.toLowerCase()==name.toLowerCase()){return parsers[i];}}return false;}function buildCache(table){if(table.config.debug){var cacheTime=new Date();}var totalRows=(table.tBodies[0]&&table.tBodies[0].rows.length)||0,totalCells=(table.tBodies[0].rows[0]&&table.tBodies[0].rows[0].cells.length)||0,parsers=table.config.parsers,cache={row:[],normalized:[]};for(var i=0;i<totalRows;++i){var 
c=table.tBodies[0].rows[i],cols=[];cache.row.push($(c));for(var j=0;j<totalCells;++j){cols.push(parsers[j].format(getElementText(table.config,c.cells[j]),table,c.cells[j]));}cols.push(i);cache.normalized.push(cols);cols=null;};if(table.config.debug){benchmark("Building cache for "+totalRows+" rows:",cacheTime);}return cache;};function getElementText(config,node){if(!node)return"";var t="";if(config.textExtraction=="simple"){if(node.childNodes[0]&&node.childNodes[0].hasChildNodes()){t=node.childNodes[0].innerHTML;}else{t=node.innerHTML;}}else{if(typeof(config.textExtraction)=="function"){t=config.textExtraction(node);}else{t=$(node).text();}}return t;}function appendToTable(table,cache){if(table.config.debug){var appendTime=new Date()}var c=cache,r=c.row,n=c.normalized,totalRows=n.length,checkCell=(n[0].length-1),tableBody=$(table.tBodies[0]),rows=[];for(var i=0;i<totalRows;i++){rows.push(r[n[i][checkCell]]);if(!table.config.appender){var o=r[n[i][checkCell]];var l=o.length;for(var j=0;j<l;j++){tableBody[0].appendChild(o[j]);}}}if(table.config.appender){table.config.appender(table,rows);}rows=null;if(table.config.debug){benchmark("Rebuilt table:",appendTime);}applyWidget(table);setTimeout(function(){$(table).trigger("sortEnd");},0);};function buildHeaders(table){if(table.config.debug){var time=new Date();}var meta=($.metadata)?true:false,tableHeadersRows=[];for(var i=0;i<table.tHead.rows.length;i++){tableHeadersRows[i]=0;};$tableHeaders=$("thead th",table);$tableHeaders.each(function(index){this.count=0;this.column=index;this.order=formatSortingOrder(table.config.sortInitialOrder);if(checkHeaderMetadata(this)||checkHeaderOptions(table,index))this.sortDisabled=true;if(!this.sortDisabled){$(this).addClass(table.config.cssHeader);}table.config.headerList[index]=this;});if(table.config.debug){benchmark("Built headers:",time);log($tableHeaders);}return $tableHeaders;};function checkCellColSpan(table,rows,row){var arr=[],r=table.tHead.rows,c=r[row].cells;for(var 
i=0;i<c.length;i++){var cell=c[i];if(cell.colSpan>1){arr=arr.concat(checkCellColSpan(table,headerArr,row++));}else{if(table.tHead.length==1||(cell.rowSpan>1||!r[row+1])){arr.push(cell);}}}return arr;};function checkHeaderMetadata(cell){if(($.metadata)&&($(cell).metadata().sorter===false)){return true;};return false;}function checkHeaderOptions(table,i){if((table.config.headers[i])&&(table.config.headers[i].sorter===false)){return true;};return false;}function applyWidget(table){var c=table.config.widgets;var l=c.length;for(var i=0;i<l;i++){getWidgetById(c[i]).format(table);}}function getWidgetById(name){var l=widgets.length;for(var i=0;i<l;i++){if(widgets[i].id.toLowerCase()==name.toLowerCase()){return widgets[i];}}};function formatSortingOrder(v){if(typeof(v)!="Number"){i=(v.toLowerCase()=="desc")?1:0;}else{i=(v==(0||1))?v:0;}return i;}function isValueInArray(v,a){var l=a.length;for(var i=0;i<l;i++){if(a[i][0]==v){return true;}}return false;}function setHeadersCss(table,$headers,list,css){$headers.removeClass(css[0]).removeClass(css[1]);var h=[];$headers.each(function(offset){if(!this.sortDisabled){h[this.column]=$(this);}});var l=list.length;for(var i=0;i<l;i++){h[list[i][0]].addClass(css[list[i][1]]);}}function fixColumnWidth(table,$headers){var c=table.config;if(c.widthFixed){var colgroup=$('<colgroup>');$("tr:first td",table.tBodies[0]).each(function(){colgroup.append($('<col>').css('width',$(this).width()));});$(table).prepend(colgroup);};}function updateHeaderSortCount(table,sortList){var c=table.config,l=sortList.length;for(var i=0;i<l;i++){var s=sortList[i],o=c.headerList[s[0]];o.count=s[1];o.count++;}}function multisort(table,sortList,cache){if(table.config.debug){var sortTime=new Date();}var dynamicExp="var sortWrapper = function(a,b) {",l=sortList.length;for(var i=0;i<l;i++){var c=sortList[i][0];var order=sortList[i][1];var 
s=(getCachedSortType(table.config.parsers,c)=="text")?((order==0)?"sortText":"sortTextDesc"):((order==0)?"sortNumeric":"sortNumericDesc");var e="e"+i;dynamicExp+="var "+e+" = "+s+"(a["+c+"],b["+c+"]); ";dynamicExp+="if("+e+") { return "+e+"; } ";dynamicExp+="else { ";}var orgOrderCol=cache.normalized[0].length-1;dynamicExp+="return a["+orgOrderCol+"]-b["+orgOrderCol+"];";for(var i=0;i<l;i++){dynamicExp+="}; ";}dynamicExp+="return 0; ";dynamicExp+="}; ";eval(dynamicExp);cache.normalized.sort(sortWrapper);if(table.config.debug){benchmark("Sorting on "+sortList.toString()+" and dir "+order+" time:",sortTime);}return cache;};function sortText(a,b){return((a<b)?-1:((a>b)?1:0));};function sortTextDesc(a,b){return((b<a)?-1:((b>a)?1:0));};function sortNumeric(a,b){return a-b;};function sortNumericDesc(a,b){return b-a;};function getCachedSortType(parsers,i){return parsers[i].type;};this.construct=function(settings){return this.each(function(){if(!this.tHead||!this.tBodies)return;var $this,$document,$headers,cache,config,shiftDown=0,sortOrder;this.config={};config=$.extend(this.config,$.tablesorter.defaults,settings);$this=$(this);$headers=buildHeaders(this);this.config.parsers=buildParserCache(this,$headers);cache=buildCache(this);var sortCSS=[config.cssDesc,config.cssAsc];fixColumnWidth(this);$headers.click(function(e){$this.trigger("sortStart");var totalRows=($this[0].tBodies[0]&&$this[0].tBodies[0].rows.length)||0;if(!this.sortDisabled&&totalRows>0){var $cell=$(this);var i=this.column;this.order=this.count++%2;if(!e[config.sortMultiSortKey]){config.sortList=[];if(config.sortForce!=null){var a=config.sortForce;for(var j=0;j<a.length;j++){if(a[j][0]!=i){config.sortList.push(a[j]);}}}config.sortList.push([i,this.order]);}else{if(isValueInArray(i,config.sortList)){for(var j=0;j<config.sortList.length;j++){var 
s=config.sortList[j],o=config.headerList[s[0]];if(s[0]==i){o.count=s[1];o.count++;s[1]=o.count%2;}}}else{config.sortList.push([i,this.order]);}};setTimeout(function(){setHeadersCss($this[0],$headers,config.sortList,sortCSS);appendToTable($this[0],multisort($this[0],config.sortList,cache));},1);return false;}}).mousedown(function(){if(config.cancelSelection){this.onselectstart=function(){return false};return false;}});$this.bind("update",function(){this.config.parsers=buildParserCache(this,$headers);cache=buildCache(this);}).bind("sorton",function(e,list){$(this).trigger("sortStart");config.sortList=list;var sortList=config.sortList;updateHeaderSortCount(this,sortList);setHeadersCss(this,$headers,sortList,sortCSS);appendToTable(this,multisort(this,sortList,cache));}).bind("appendCache",function(){appendToTable(this,cache);}).bind("applyWidgetId",function(e,id){getWidgetById(id).format(this);}).bind("applyWidgets",function(){applyWidget(this);});if($.metadata&&($(this).metadata()&&$(this).metadata().sortlist)){config.sortList=$(this).metadata().sortlist;}if(config.sortList.length>0){$this.trigger("sorton",[config.sortList]);}applyWidget(this);});};this.addParser=function(parser){var l=parsers.length,a=true;for(var i=0;i<l;i++){if(parsers[i].id.toLowerCase()==parser.id.toLowerCase()){a=false;}}if(a){parsers.push(parser);};};this.addWidget=function(widget){widgets.push(widget);};this.formatFloat=function(s){var i=parseFloat(s);return(isNaN(i))?0:i;};this.formatInt=function(s){var i=parseInt(s);return(isNaN(i))?0:i;};this.isDigit=function(s,config){var DECIMAL='\\'+config.decimal;var exp='/(^[+]?0('+DECIMAL+'0+)?$)|(^([-+]?[1-9][0-9]*)$)|(^([-+]?((0?|[1-9][0-9]*)'+DECIMAL+'(0*[1-9][0-9]*)))$)|(^[-+]?[1-9]+[0-9]*'+DECIMAL+'0+$)/';return RegExp(exp).test($.trim(s));};this.clearTableBody=function(table){if($.browser.msie){function 
empty(){while(this.firstChild)this.removeChild(this.firstChild);}empty.apply(table.tBodies[0]);}else{table.tBodies[0].innerHTML="";}};}});$.fn.extend({tablesorter:$.tablesorter.construct});var ts=$.tablesorter;ts.addParser({id:"text",is:function(s){return true;},format:function(s){return $.trim(s.toLowerCase());},type:"text"});ts.addParser({id:"digit",is:function(s,table){var c=table.config;return $.tablesorter.isDigit(s,c);},format:function(s){return $.tablesorter.formatFloat(s);},type:"numeric"});ts.addParser({id:"currency",is:function(s){return/^[£$€?.]/.test(s);},format:function(s){return $.tablesorter.formatFloat(s.replace(new RegExp(/[^0-9.]/g),""));},type:"numeric"});ts.addParser({id:"ipAddress",is:function(s){return/^\d{2,3}[\.]\d{2,3}[\.]\d{2,3}[\.]\d{2,3}$/.test(s);},format:function(s){var a=s.split("."),r="",l=a.length;for(var i=0;i<l;i++){var item=a[i];if(item.length==2){r+="0"+item;}else{r+=item;}}return $.tablesorter.formatFloat(r);},type:"numeric"});ts.addParser({id:"url",is:function(s){return/^(https?|ftp|file):\/\/$/.test(s);},format:function(s){return jQuery.trim(s.replace(new RegExp(/(https?|ftp|file):\/\//),''));},type:"text"});ts.addParser({id:"isoDate",is:function(s){return/^\d{4}[\/-]\d{1,2}[\/-]\d{1,2}$/.test(s);},format:function(s){return $.tablesorter.formatFloat((s!="")?new Date(s.replace(new RegExp(/-/g),"/")).getTime():"0");},type:"numeric"});ts.addParser({id:"percent",is:function(s){return/\%$/.test($.trim(s));},format:function(s){return $.tablesorter.formatFloat(s.replace(new RegExp(/%/g),""));},type:"numeric"});ts.addParser({id:"usLongDate",is:function(s){return s.match(new RegExp(/^[A-Za-z]{3,10}\.? 
[0-9]{1,2}, ([0-9]{4}|'?[0-9]{2}) (([0-2]?[0-9]:[0-5][0-9])|([0-1]?[0-9]:[0-5][0-9]\s(AM|PM)))$/));},format:function(s){return $.tablesorter.formatFloat(new Date(s).getTime());},type:"numeric"});ts.addParser({id:"shortDate",is:function(s){return/\d{1,2}[\/\-]\d{1,2}[\/\-]\d{2,4}/.test(s);},format:function(s,table){var c=table.config;s=s.replace(/\-/g,"/");if(c.dateFormat=="us"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{4})/,"$3/$1/$2");}else if(c.dateFormat=="uk"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{4})/,"$3/$2/$1");}else if(c.dateFormat=="dd/mm/yy"||c.dateFormat=="dd-mm-yy"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{2})/,"$1/$2/$3");}return $.tablesorter.formatFloat(new Date(s).getTime());},type:"numeric"});ts.addParser({id:"time",is:function(s){return/^(([0-2]?[0-9]:[0-5][0-9])|([0-1]?[0-9]:[0-5][0-9]\s(am|pm)))$/.test(s);},format:function(s){return $.tablesorter.formatFloat(new Date("2000/01/01 "+s).getTime());},type:"numeric"});ts.addParser({id:"metadata",is:function(s){return false;},format:function(s,table,cell){var c=table.config,p=(!c.parserMetadataName)?'sortValue':c.parserMetadataName;return $(cell).metadata()[p];},type:"numeric"});ts.addWidget({id:"zebra",format:function(table){if(table.config.debug){var time=new Date();}$("tr:visible",table.tBodies[0]).filter(':even').removeClass(table.config.widgetZebra.css[1]).addClass(table.config.widgetZebra.css[0]).end().filter(':odd').removeClass(table.config.widgetZebra.css[0]).addClass(table.config.widgetZebra.css[1]);if(table.config.debug){$.tablesorter.benchmark("Applying Zebra widget",time);}}});})(jQuery);
\ No newline at end of file
diff --git a/python/helpers/coverage/htmlfiles/keybd_closed.png b/python/helpers/coverage/htmlfiles/keybd_closed.png
new file mode 100644
index 0000000..6843abf
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/keybd_closed.png
Binary files differ
diff --git a/python/helpers/coverage/htmlfiles/keybd_open.png b/python/helpers/coverage/htmlfiles/keybd_open.png
new file mode 100644
index 0000000..5a681ea
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/keybd_open.png
Binary files differ
diff --git a/python/helpers/coverage/htmlfiles/pyfile.html b/python/helpers/coverage/htmlfiles/pyfile.html
new file mode 100644
index 0000000..ee0a3b1
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/pyfile.html
@@ -0,0 +1,87 @@
+<!doctype html PUBLIC "-//W3C//DTD html 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
+<html>
+<head>
+ <meta http-equiv='Content-Type' content='text/html; charset=utf-8'>
+ {# IE8 rounds line-height incorrectly, and adding this emulateIE7 line makes it right! #}
+ {# http://social.msdn.microsoft.com/Forums/en-US/iewebdevelopment/thread/7684445e-f080-4d8f-8529-132763348e21 #}
+ <meta http-equiv='X-UA-Compatible' content='IE=emulateIE7' />
+ <title>Coverage for {{cu.name|escape}}: {{nums.pc_covered_str}}%</title>
+ <link rel='stylesheet' href='style.css' type='text/css'>
+ <script type='text/javascript' src='jquery-1.4.3.min.js'></script>
+ <script type='text/javascript' src='jquery.hotkeys.js'></script>
+ <script type='text/javascript' src='jquery.isonscreen.js'></script>
+ <script type='text/javascript' src='coverage_html.js'></script>
+ <script type='text/javascript' charset='utf-8'>
+ jQuery(document).ready(coverage.pyfile_ready);
+ </script>
+</head>
+<body id='pyfile'>
+
+<div id='header'>
+ <div class='content'>
+ <h1>Coverage for <b>{{cu.name|escape}}</b> :
+ <span class='pc_cov'>{{nums.pc_covered_str}}%</span>
+ </h1>
+ <img id='keyboard_icon' src='keybd_closed.png'>
+ <h2 class='stats'>
+ {{nums.n_statements}} statements
+ <span class='{{c_run}} shortkey_r' onclick='coverage.toggle_lines(this, "run")'>{{nums.n_executed}} run</span>
+ <span class='{{c_mis}} shortkey_m' onclick='coverage.toggle_lines(this, "mis")'>{{nums.n_missing}} missing</span>
+ <span class='{{c_exc}} shortkey_x' onclick='coverage.toggle_lines(this, "exc")'>{{nums.n_excluded}} excluded</span>
+ {% if arcs %}
+ <span class='{{c_par}} shortkey_p' onclick='coverage.toggle_lines(this, "par")'>{{n_par}} partial</span>
+ {% endif %}
+ </h2>
+ </div>
+</div>
+
+<div class='help_panel'>
+ <img id='panel_icon' src='keybd_open.png'>
+ <p class='legend'>Hot-keys on this page</p>
+ <div>
+ <p class='keyhelp'>
+ <span class='key'>r</span>
+ <span class='key'>m</span>
+ <span class='key'>x</span>
+ <span class='key'>p</span> toggle line displays
+ </p>
+ <p class='keyhelp'>
+ <span class='key'>j</span>
+ <span class='key'>k</span> next/prev highlighted chunk
+ </p>
+ <p class='keyhelp'>
+ <span class='key'>0</span> (zero) top of page
+ </p>
+ <p class='keyhelp'>
+ <span class='key'>1</span> (one) first highlighted chunk
+ </p>
+ </div>
+</div>
+
+<div id='source'>
+ <table cellspacing='0' cellpadding='0'>
+ <tr>
+ <td class='linenos' valign='top'>
+ {% for line in lines %}
+ <p id='n{{line.number}}' class='{{line.class}}'><a href='#n{{line.number}}'>{{line.number}}</a></p>
+ {% endfor %}
+ </td>
+ <td class='text' valign='top'>
+ {% for line in lines %}
+ <p id='t{{line.number}}' class='{{line.class}}'>{% if line.annotate %}<span class='annotate' title='{{line.annotate_title}}'>{{line.annotate}}</span>{% endif %}{{line.html}}<span class='strut'> </span></p>
+ {% endfor %}
+ </td>
+ </tr>
+ </table>
+</div>
+
+<div id='footer'>
+ <div class='content'>
+ <p>
+ <a class='nav' href='index.html'>« index</a> <a class='nav' href='{{__url__}}'>coverage.py v{{__version__}}</a>
+ </p>
+ </div>
+</div>
+
+</body>
+</html>
diff --git a/python/helpers/coverage/htmlfiles/style.css b/python/helpers/coverage/htmlfiles/style.css
new file mode 100644
index 0000000..c40357b
--- /dev/null
+++ b/python/helpers/coverage/htmlfiles/style.css
@@ -0,0 +1,275 @@
+/* CSS styles for Coverage. */
+/* Page-wide styles */
+html, body, h1, h2, h3, p, td, th {
+ margin: 0;
+ padding: 0;
+ border: 0;
+ outline: 0;
+ font-weight: inherit;
+ font-style: inherit;
+ font-size: 100%;
+ font-family: inherit;
+ vertical-align: baseline;
+ }
+
+/* Set baseline grid to 16 pt. */
+body {
+ font-family: georgia, serif;
+ font-size: 1em;
+ }
+
+html>body {
+ font-size: 16px;
+ }
+
+/* Set base font size to 12/16 */
+p {
+ font-size: .75em; /* 12/16 */
+ line-height: 1.3333em; /* 16/12 */
+ }
+
+table {
+ border-collapse: collapse;
+ }
+
+a.nav {
+ text-decoration: none;
+ color: inherit;
+ }
+a.nav:hover {
+ text-decoration: underline;
+ color: inherit;
+ }
+
+/* Page structure */
+#header {
+ background: #f8f8f8;
+ width: 100%;
+ border-bottom: 1px solid #eee;
+ }
+
+#source {
+ padding: 1em;
+ font-family: "courier new", monospace;
+ }
+
+#indexfile #footer {
+ margin: 1em 3em;
+ }
+
+#pyfile #footer {
+ margin: 1em 1em;
+ }
+
+#footer .content {
+ padding: 0;
+ font-size: 85%;
+ font-family: verdana, sans-serif;
+ color: #666666;
+ font-style: italic;
+ }
+
+#index {
+ margin: 1em 0 0 3em;
+ }
+
+/* Header styles */
+#header .content {
+ padding: 1em 3em;
+ }
+
+h1 {
+ font-size: 1.25em;
+}
+
+h2.stats {
+ margin-top: .5em;
+ font-size: 1em;
+}
+.stats span {
+ border: 1px solid;
+ padding: .1em .25em;
+ margin: 0 .1em;
+ cursor: pointer;
+ border-color: #999 #ccc #ccc #999;
+}
+.stats span.hide_run, .stats span.hide_exc,
+.stats span.hide_mis, .stats span.hide_par,
+.stats span.par.hide_run.hide_par {
+ border-color: #ccc #999 #999 #ccc;
+}
+.stats span.par.hide_run {
+ border-color: #999 #ccc #ccc #999;
+}
+
+/* Help panel */
+#keyboard_icon {
+ float: right;
+ cursor: pointer;
+}
+
+.help_panel {
+ position: absolute;
+ background: #ffc;
+ padding: .5em;
+ border: 1px solid #883;
+ display: none;
+}
+
+#indexfile .help_panel {
+ width: 20em; height: 4em;
+}
+
+#pyfile .help_panel {
+ width: 16em; height: 8em;
+}
+
+.help_panel .legend {
+ font-style: italic;
+ margin-bottom: 1em;
+}
+
+#panel_icon {
+ float: right;
+ cursor: pointer;
+}
+
+.keyhelp {
+ margin: .75em;
+}
+
+.keyhelp .key {
+ border: 1px solid black;
+ border-color: #888 #333 #333 #888;
+ padding: .1em .35em;
+ font-family: monospace;
+ font-weight: bold;
+ background: #eee;
+}
+
+/* Source file styles */
+.linenos p {
+ text-align: right;
+ margin: 0;
+ padding: 0 .5em;
+ color: #999999;
+ font-family: verdana, sans-serif;
+ font-size: .625em; /* 10/16 */
+ line-height: 1.6em; /* 16/10 */
+ }
+.linenos p.highlight {
+ background: #ffdd00;
+ }
+.linenos p a {
+ text-decoration: none;
+ color: #999999;
+ }
+.linenos p a:hover {
+ text-decoration: underline;
+ color: #999999;
+ }
+
+td.text {
+ width: 100%;
+ }
+.text p {
+ margin: 0;
+ padding: 0 0 0 .5em;
+ border-left: 2px solid #ffffff;
+ white-space: nowrap;
+ }
+
+.text p.mis {
+ background: #ffdddd;
+ border-left: 2px solid #ff0000;
+ }
+.text p.run, .text p.run.hide_par {
+ background: #ddffdd;
+ border-left: 2px solid #00ff00;
+ }
+.text p.exc {
+ background: #eeeeee;
+ border-left: 2px solid #808080;
+ }
+.text p.par, .text p.par.hide_run {
+ background: #ffffaa;
+ border-left: 2px solid #eeee99;
+ }
+.text p.hide_run, .text p.hide_exc, .text p.hide_mis, .text p.hide_par,
+.text p.hide_run.hide_par {
+ background: inherit;
+ }
+
+.text span.annotate {
+ font-family: georgia;
+ font-style: italic;
+ color: #666;
+ float: right;
+ padding-right: .5em;
+ }
+.text p.hide_par span.annotate {
+ display: none;
+ }
+
+/* Syntax coloring */
+.text .com {
+ color: green;
+ font-style: italic;
+ line-height: 1px;
+ }
+.text .key {
+ font-weight: bold;
+ line-height: 1px;
+ }
+.text .str {
+ color: #000080;
+ }
+
+/* index styles */
+#index td, #index th {
+ text-align: right;
+ width: 5em;
+ padding: .25em .5em;
+ border-bottom: 1px solid #eee;
+ }
+#index th {
+ font-style: italic;
+ color: #333;
+ border-bottom: 1px solid #ccc;
+ cursor: pointer;
+ }
+#index th:hover {
+ background: #eee;
+ border-bottom: 1px solid #999;
+ }
+#index td.left, #index th.left {
+ padding-left: 0;
+ }
+#index td.right, #index th.right {
+ padding-right: 0;
+ }
+#index th.headerSortDown, #index th.headerSortUp {
+ border-bottom: 1px solid #000;
+ }
+#index td.name, #index th.name {
+ text-align: left;
+ width: auto;
+ }
+#index td.name a {
+ text-decoration: none;
+ color: #000;
+ }
+#index td.name a:hover {
+ text-decoration: underline;
+ color: #000;
+ }
+#index tr.total {
+ }
+#index tr.total td {
+ font-weight: bold;
+ border-top: 1px solid #ccc;
+ border-bottom: none;
+ }
+#index tr.file:hover {
+ background: #eeeeee;
+ }
diff --git a/python/helpers/coverage/misc.py b/python/helpers/coverage/misc.py
new file mode 100644
index 0000000..fd9be85
--- /dev/null
+++ b/python/helpers/coverage/misc.py
@@ -0,0 +1,139 @@
+"""Miscellaneous stuff for Coverage."""
+
+import inspect
+from coverage.backward import md5, sorted # pylint: disable=W0622
+from coverage.backward import string_class, to_bytes
+
+
+def nice_pair(pair):
+ """Make a nice string representation of a pair of numbers.
+
+ If the numbers are equal, just return the number, otherwise return the pair
+ with a dash between them, indicating the range.
+
+ """
+ start, end = pair
+ if start == end:
+ return "%d" % start
+ else:
+ return "%d-%d" % (start, end)
+
+
+def format_lines(statements, lines):
+ """Nicely format a list of line numbers.
+
+ Format a list of line numbers for printing by coalescing groups of lines as
+ long as the lines represent consecutive statements. This will coalesce
+ even if there are gaps between statements.
+
+ For example, if `statements` is [1,2,3,4,5,10,11,12,13,14] and
+ `lines` is [1,2,5,10,11,13,14] then the result will be "1-2, 5-11, 13-14".
+
+ """
+ pairs = []
+ i = 0
+ j = 0
+ start = None
+ while i < len(statements) and j < len(lines):
+ if statements[i] == lines[j]:
+ if start == None:
+ start = lines[j]
+ end = lines[j]
+ j += 1
+ elif start:
+ pairs.append((start, end))
+ start = None
+ i += 1
+ if start:
+ pairs.append((start, end))
+ ret = ', '.join(map(nice_pair, pairs))
+ return ret
+
+
+def expensive(fn):
+ """A decorator to cache the result of an expensive operation.
+
+ Only applies to methods with no arguments.
+
+ """
+ attr = "_cache_" + fn.__name__
+ def _wrapped(self):
+ """Inner fn that checks the cache."""
+ if not hasattr(self, attr):
+ setattr(self, attr, fn(self))
+ return getattr(self, attr)
+ return _wrapped
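The `expensive` decorator memoizes per instance by stashing the result in a `_cache_`-prefixed attribute. A small self-contained sketch (the `Demo` class is made up for illustration) shows the wrapped method running only once:

```python
# Re-stated from the definition above: cache a no-argument method's result
# in an instance attribute named after the function.
def expensive(fn):
    attr = "_cache_" + fn.__name__
    def _wrapped(self):
        if not hasattr(self, attr):
            setattr(self, attr, fn(self))
        return getattr(self, attr)
    return _wrapped

class Demo(object):
    calls = 0                 # counts real invocations across instances
    @expensive
    def value(self):
        Demo.calls += 1
        return 42

d = Demo()
print(d.value(), d.value(), Demo.calls)   # → 42 42 1
```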
+
+
+def bool_or_none(b):
+ """Return bool(b), but preserve None."""
+ if b is None:
+ return None
+ else:
+ return bool(b)
+
+
+def join_regex(regexes):
+ """Combine a list of regexes into one that matches any of them."""
+ if len(regexes) > 1:
+ return "(" + ")|(".join(regexes) + ")"
+ elif regexes:
+ return regexes[0]
+ else:
+ return ""
+
+
+class Hasher(object):
+ """Hashes Python data into md5."""
+ def __init__(self):
+ self.md5 = md5()
+
+ def update(self, v):
+ """Add `v` to the hash, recursively if needed."""
+ self.md5.update(to_bytes(str(type(v))))
+ if isinstance(v, string_class):
+ self.md5.update(to_bytes(v))
+ elif isinstance(v, (int, float)):
+ self.update(str(v))
+ elif isinstance(v, (tuple, list)):
+ for e in v:
+ self.update(e)
+ elif isinstance(v, dict):
+ keys = v.keys()
+ for k in sorted(keys):
+ self.update(k)
+ self.update(v[k])
+ else:
+ for k in dir(v):
+ if k.startswith('__'):
+ continue
+ a = getattr(v, k)
+ if inspect.isroutine(a):
+ continue
+ self.update(k)
+ self.update(a)
+
+ def digest(self):
+ """Retrieve the digest of the hash."""
+ return self.md5.digest()
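The key property of `Hasher` is that the value's *type* is mixed into the digest before its content, so structurally equal values hash equally while, say, `1` and `"1"` differ. A sketch of that idea as a plain function, assuming the stdlib `hashlib.md5` in place of `coverage.backward`:

```python
import hashlib

def hash_value(v, h=None):
    # Mix in the type first, then recurse over the contents,
    # sorting dict keys so equal dicts hash equally.
    h = h or hashlib.md5()
    h.update(str(type(v)).encode("utf8"))
    if isinstance(v, str):
        h.update(v.encode("utf8"))
    elif isinstance(v, (int, float)):
        hash_value(str(v), h)
    elif isinstance(v, (tuple, list)):
        for e in v:
            hash_value(e, h)
    elif isinstance(v, dict):
        for k in sorted(v):
            hash_value(k, h)
            hash_value(v[k], h)
    return h.hexdigest()

assert hash_value({"a": 1, "b": 2}) == hash_value({"b": 2, "a": 1})
assert hash_value(1) != hash_value("1")
```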
+
+
+class CoverageException(Exception):
+ """An exception specific to Coverage."""
+ pass
+
+class NoSource(CoverageException):
+ """We couldn't find the source for a module."""
+ pass
+
+class NotPython(CoverageException):
+ """A source file turned out not to be parsable Python."""
+ pass
+
+class ExceptionDuringRun(CoverageException):
+    """An exception happened while running user code.
+
+ Construct it with three arguments, the values from `sys.exc_info`.
+
+ """
+ pass
diff --git a/python/helpers/coverage/parser.py b/python/helpers/coverage/parser.py
new file mode 100644
index 0000000..cbbb5a6
--- /dev/null
+++ b/python/helpers/coverage/parser.py
@@ -0,0 +1,800 @@
+"""Code parsing for Coverage."""
+
+import glob, opcode, os, re, sys, token, tokenize
+
+from coverage.backward import set, sorted, StringIO # pylint: disable=W0622
+from coverage.backward import open_source
+from coverage.bytecode import ByteCodes, CodeObjects
+from coverage.misc import nice_pair, expensive, join_regex
+from coverage.misc import CoverageException, NoSource, NotPython
+
+
+class CodeParser(object):
+ """Parse code to find executable lines, excluded lines, etc."""
+
+ def __init__(self, text=None, filename=None, exclude=None):
+ """
+ Source can be provided as `text`, the text itself, or `filename`, from
+ which the text will be read. Excluded lines are those that match
+ `exclude`, a regex.
+
+ """
+ assert text or filename, "CodeParser needs either text or filename"
+ self.filename = filename or "<code>"
+ self.text = text
+ if not self.text:
+ try:
+ sourcef = open_source(self.filename)
+ try:
+ self.text = sourcef.read()
+ finally:
+ sourcef.close()
+ except IOError:
+ _, err, _ = sys.exc_info()
+ raise NoSource(
+ "No source for code: %r: %s" % (self.filename, err)
+ )
+
+ self.exclude = exclude
+
+ self.show_tokens = False
+
+ # The text lines of the parsed code.
+ self.lines = self.text.split('\n')
+
+ # The line numbers of excluded lines of code.
+ self.excluded = set()
+
+ # The line numbers of docstring lines.
+ self.docstrings = set()
+
+ # The line numbers of class definitions.
+ self.classdefs = set()
+
+ # A dict mapping line numbers to (lo,hi) for multi-line statements.
+ self.multiline = {}
+
+ # The line numbers that start statements.
+ self.statement_starts = set()
+
+ # Lazily-created ByteParser
+ self._byte_parser = None
+
+ def _get_byte_parser(self):
+ """Create a ByteParser on demand."""
+ if not self._byte_parser:
+ self._byte_parser = \
+ ByteParser(text=self.text, filename=self.filename)
+ return self._byte_parser
+ byte_parser = property(_get_byte_parser)
+
+ def lines_matching(self, *regexes):
+ """Find the lines matching one of a list of regexes.
+
+ Returns a set of line numbers, the lines that contain a match for one
+ of the regexes in `regexes`. The entire line needn't match, just a
+ part of it.
+
+ """
+ regex_c = re.compile(join_regex(regexes))
+ matches = set()
+ for i, ltext in enumerate(self.lines):
+ if regex_c.search(ltext):
+ matches.add(i+1)
+ return matches
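`lines_matching` is the hook that turns exclusion regexes into 1-based line numbers. A self-contained miniature (taking source text directly rather than `self.lines`; the pragma pattern is illustrative):

```python
import re

def lines_matching(source, *regexes):
    # Return the 1-based numbers of lines containing a match for any regex;
    # only part of the line needs to match, as in CodeParser.lines_matching.
    regex_c = re.compile("(" + ")|(".join(regexes) + ")")
    return set(i + 1 for i, ltext in enumerate(source.split("\n"))
               if regex_c.search(ltext))

src = "x = 1\nif debug:  # pragma: no cover\n    log(x)\n"
print(lines_matching(src, r"#\s*pragma: no cover"))   # → {2}
```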
+
+ def _raw_parse(self):
+ """Parse the source to find the interesting facts about its lines.
+
+ A handful of member fields are updated.
+
+ """
+ # Find lines which match an exclusion pattern.
+ if self.exclude:
+ self.excluded = self.lines_matching(self.exclude)
+
+ # Tokenize, to find excluded suites, to find docstrings, and to find
+ # multi-line statements.
+ indent = 0
+ exclude_indent = 0
+ excluding = False
+ prev_toktype = token.INDENT
+ first_line = None
+ empty = True
+
+ tokgen = tokenize.generate_tokens(StringIO(self.text).readline)
+ for toktype, ttext, (slineno, _), (elineno, _), ltext in tokgen:
+ if self.show_tokens: # pragma: no cover
+ print("%10s %5s %-20r %r" % (
+ tokenize.tok_name.get(toktype, toktype),
+ nice_pair((slineno, elineno)), ttext, ltext
+ ))
+ if toktype == token.INDENT:
+ indent += 1
+ elif toktype == token.DEDENT:
+ indent -= 1
+ elif toktype == token.NAME and ttext == 'class':
+ # Class definitions look like branches in the byte code, so
+ # we need to exclude them. The simplest way is to note the
+ # lines with the 'class' keyword.
+ self.classdefs.add(slineno)
+ elif toktype == token.OP and ttext == ':':
+ if not excluding and elineno in self.excluded:
+ # Start excluding a suite. We trigger off of the colon
+ # token so that the #pragma comment will be recognized on
+ # the same line as the colon.
+ exclude_indent = indent
+ excluding = True
+ elif toktype == token.STRING and prev_toktype == token.INDENT:
+ # Strings that are first on an indented line are docstrings.
+ # (a trick from trace.py in the stdlib.) This works for
+ # 99.9999% of cases. For the rest (!) see:
+ # http://stackoverflow.com/questions/1769332/x/1769794#1769794
+ for i in range(slineno, elineno+1):
+ self.docstrings.add(i)
+ elif toktype == token.NEWLINE:
+ if first_line is not None and elineno != first_line:
+ # We're at the end of a line, and we've ended on a
+ # different line than the first line of the statement,
+ # so record a multi-line range.
+ rng = (first_line, elineno)
+ for l in range(first_line, elineno+1):
+ self.multiline[l] = rng
+ first_line = None
+
+ if ttext.strip() and toktype != tokenize.COMMENT:
+ # A non-whitespace token.
+ empty = False
+ if first_line is None:
+ # The token is not whitespace, and is the first in a
+ # statement.
+ first_line = slineno
+ # Check whether to end an excluded suite.
+ if excluding and indent <= exclude_indent:
+ excluding = False
+ if excluding:
+ self.excluded.add(elineno)
+
+ prev_toktype = toktype
+
+ # Find the starts of the executable statements.
+ if not empty:
+ self.statement_starts.update(self.byte_parser._find_statements())
+
+ def first_line(self, line):
+ """Return the first line number of the statement including `line`."""
+ rng = self.multiline.get(line)
+ if rng:
+ first_line = rng[0]
+ else:
+ first_line = line
+ return first_line
+
+ def first_lines(self, lines, ignore=None):
+ """Map the line numbers in `lines` to the correct first line of the
+ statement.
+
+ Skip any line mentioned in `ignore`.
+
+ Returns a sorted list of the first lines.
+
+ """
+ ignore = ignore or []
+ lset = set()
+ for l in lines:
+ if l in ignore:
+ continue
+ new_l = self.first_line(l)
+ if new_l not in ignore:
+ lset.add(new_l)
+ return sorted(lset)
+
+ def parse_source(self):
+ """Parse source text to find executable lines, excluded lines, etc.
+
+ Return values are 1) a sorted list of executable line numbers, and
+ 2) a sorted list of excluded line numbers.
+
+ Reported line numbers are normalized to the first line of multi-line
+ statements.
+
+ """
+ self._raw_parse()
+
+ excluded_lines = self.first_lines(self.excluded)
+ ignore = excluded_lines + list(self.docstrings)
+ lines = self.first_lines(self.statement_starts, ignore)
+
+ return lines, excluded_lines
+
+ def arcs(self):
+ """Get information about the arcs available in the code.
+
+ Returns a sorted list of line number pairs. Line numbers have been
+ normalized to the first line of multiline statements.
+
+ """
+ all_arcs = []
+ for l1, l2 in self.byte_parser._all_arcs():
+ fl1 = self.first_line(l1)
+ fl2 = self.first_line(l2)
+ if fl1 != fl2:
+ all_arcs.append((fl1, fl2))
+ return sorted(all_arcs)
+ arcs = expensive(arcs)
+
+ def exit_counts(self):
+ """Get a mapping from line numbers to count of exits from that line.
+
+ Excluded lines are excluded.
+
+ """
+ excluded_lines = self.first_lines(self.excluded)
+ exit_counts = {}
+ for l1, l2 in self.arcs():
+ if l1 < 0:
+ # Don't ever report -1 as a line number
+ continue
+ if l1 in excluded_lines:
+ # Don't report excluded lines as line numbers.
+ continue
+ if l2 in excluded_lines:
+ # Arcs to excluded lines shouldn't count.
+ continue
+ if l1 not in exit_counts:
+ exit_counts[l1] = 0
+ exit_counts[l1] += 1
+
+ # Class definitions have one extra exit, so remove one for each:
+ for l in self.classdefs:
+ # Ensure key is there: classdefs can include excluded lines.
+ if l in exit_counts:
+ exit_counts[l] -= 1
+
+ return exit_counts
+ exit_counts = expensive(exit_counts)
+
+
+## Opcodes that guide the ByteParser.
+
+def _opcode(name):
+ """Return the opcode by name from the opcode module."""
+ return opcode.opmap[name]
+
+def _opcode_set(*names):
+ """Return a set of opcodes by the names in `names`."""
+ s = set()
+ for name in names:
+ try:
+ s.add(_opcode(name))
+ except KeyError:
+ pass
+ return s
+
+# Opcodes that leave the code object.
+OPS_CODE_END = _opcode_set('RETURN_VALUE')
+
+# Opcodes that unconditionally end the code chunk.
+OPS_CHUNK_END = _opcode_set(
+ 'JUMP_ABSOLUTE', 'JUMP_FORWARD', 'RETURN_VALUE', 'RAISE_VARARGS',
+ 'BREAK_LOOP', 'CONTINUE_LOOP',
+ )
+
+# Opcodes that unconditionally begin a new code chunk. By starting new chunks
+# with unconditional jump instructions, we neatly deal with jumps to jumps
+# properly.
+OPS_CHUNK_BEGIN = _opcode_set('JUMP_ABSOLUTE', 'JUMP_FORWARD')
+
+# Opcodes that push a block on the block stack.
+OPS_PUSH_BLOCK = _opcode_set(
+ 'SETUP_LOOP', 'SETUP_EXCEPT', 'SETUP_FINALLY', 'SETUP_WITH'
+ )
+
+# Block types for exception handling.
+OPS_EXCEPT_BLOCKS = _opcode_set('SETUP_EXCEPT', 'SETUP_FINALLY')
+
+# Opcodes that pop a block from the block stack.
+OPS_POP_BLOCK = _opcode_set('POP_BLOCK')
+
+# Opcodes that have a jump destination, but aren't really a jump.
+OPS_NO_JUMP = _opcode_set('SETUP_EXCEPT', 'SETUP_FINALLY')
+
+# Individual opcodes we need below.
+OP_BREAK_LOOP = _opcode('BREAK_LOOP')
+OP_END_FINALLY = _opcode('END_FINALLY')
+OP_COMPARE_OP = _opcode('COMPARE_OP')
+COMPARE_EXCEPTION = 10 # just have to get this const from the code.
+OP_LOAD_CONST = _opcode('LOAD_CONST')
+OP_RETURN_VALUE = _opcode('RETURN_VALUE')
+
+
+class ByteParser(object):
+ """Parse byte codes to understand the structure of code."""
+
+ def __init__(self, code=None, text=None, filename=None):
+ if code:
+ self.code = code
+ else:
+ if not text:
+ assert filename, "If no code or text, need a filename"
+ sourcef = open_source(filename)
+ try:
+ text = sourcef.read()
+ finally:
+ sourcef.close()
+
+ try:
+ # Python 2.3 and 2.4 don't like partial last lines, so be sure
+ # the text ends nicely for them.
+ self.code = compile(text + '\n', filename, "exec")
+ except SyntaxError:
+ _, synerr, _ = sys.exc_info()
+ raise NotPython(
+ "Couldn't parse '%s' as Python source: '%s' at line %d" %
+ (filename, synerr.msg, synerr.lineno)
+ )
+
+ # Alternative Python implementations don't always provide all the
+ # attributes on code objects that we need to do the analysis.
+ for attr in ['co_lnotab', 'co_firstlineno', 'co_consts', 'co_code']:
+ if not hasattr(self.code, attr):
+ raise CoverageException(
+ "This implementation of Python doesn't support code "
+ "analysis.\n"
+ "Run coverage.py under CPython for this command."
+ )
+
+ def child_parsers(self):
+ """Iterate over all the code objects nested within this one.
+
+ The iteration includes `self` as its first value.
+
+ """
+ return map(lambda c: ByteParser(code=c), CodeObjects(self.code))
+
+ # Getting numbers from the lnotab value changed in Py3.0.
+ if sys.version_info >= (3, 0):
+ def _lnotab_increments(self, lnotab):
+ """Return a list of ints from the lnotab bytes in 3.x"""
+ return list(lnotab)
+ else:
+ def _lnotab_increments(self, lnotab):
+ """Return a list of ints from the lnotab string in 2.x"""
+ return [ord(c) for c in lnotab]
+
+ def _bytes_lines(self):
+ """Map byte offsets to line numbers in `code`.
+
+ Uses co_lnotab described in Python/compile.c to map byte offsets to
+ line numbers. Returns a list: [(b0, l0), (b1, l1), ...]
+
+ """
+ # Adapted from dis.py in the standard library.
+ byte_increments = self._lnotab_increments(self.code.co_lnotab[0::2])
+ line_increments = self._lnotab_increments(self.code.co_lnotab[1::2])
+
+ bytes_lines = []
+ last_line_num = None
+ line_num = self.code.co_firstlineno
+ byte_num = 0
+ for byte_incr, line_incr in zip(byte_increments, line_increments):
+ if byte_incr:
+ if line_num != last_line_num:
+ bytes_lines.append((byte_num, line_num))
+ last_line_num = line_num
+ byte_num += byte_incr
+ line_num += line_incr
+ if line_num != last_line_num:
+ bytes_lines.append((byte_num, line_num))
+ return bytes_lines
+
+ def _find_statements(self):
+ """Find the statements in `self.code`.
+
+ Return a set of line numbers that start statements. Recurses into all
+ code objects reachable from `self.code`.
+
+ """
+ stmts = set()
+ for bp in self.child_parsers():
+ # Get all of the lineno information from this code.
+ for _, l in bp._bytes_lines():
+ stmts.add(l)
+ return stmts
+
+ def _disassemble(self): # pragma: no cover
+ """Disassemble code, for ad-hoc experimenting."""
+
+ import dis
+
+ for bp in self.child_parsers():
+ print("\n%s: " % bp.code)
+ dis.dis(bp.code)
+ print("Bytes lines: %r" % bp._bytes_lines())
+
+ print("")
+
+ def _split_into_chunks(self):
+ """Split the code object into a list of `Chunk` objects.
+
+ Each chunk is only entered at its first instruction, though there can
+ be many exits from a chunk.
+
+ Returns a list of `Chunk` objects.
+
+ """
+
+ # The list of chunks so far, and the one we're working on.
+ chunks = []
+ chunk = None
+ bytes_lines_map = dict(self._bytes_lines())
+
+ # The block stack: loops and try blocks get pushed here for the
+ # implicit jumps that can occur.
+ # Each entry is a tuple: (block type, destination)
+ block_stack = []
+
+ # Some op codes are followed by branches that should be ignored. This
+ # is a count of how many ignores are left.
+ ignore_branch = 0
+
+ # We have to handle the last two bytecodes specially.
+ ult = penult = None
+
+ for bc in ByteCodes(self.code.co_code):
+ # Maybe have to start a new chunk
+ if bc.offset in bytes_lines_map:
+ # Start a new chunk for each source line number.
+ if chunk:
+ chunk.exits.add(bc.offset)
+ chunk = Chunk(bc.offset, bytes_lines_map[bc.offset])
+ chunks.append(chunk)
+ elif bc.op in OPS_CHUNK_BEGIN:
+ # Jumps deserve their own unnumbered chunk. This fixes
+ # problems with jumps to jumps getting confused.
+ if chunk:
+ chunk.exits.add(bc.offset)
+ chunk = Chunk(bc.offset)
+ chunks.append(chunk)
+
+ if not chunk:
+ chunk = Chunk(bc.offset)
+ chunks.append(chunk)
+
+ # Look at the opcode
+ if bc.jump_to >= 0 and bc.op not in OPS_NO_JUMP:
+ if ignore_branch:
+ # Someone earlier wanted us to ignore this branch.
+ ignore_branch -= 1
+ else:
+ # The opcode has a jump, it's an exit for this chunk.
+ chunk.exits.add(bc.jump_to)
+
+ if bc.op in OPS_CODE_END:
+ # The opcode can exit the code object.
+ chunk.exits.add(-self.code.co_firstlineno)
+ if bc.op in OPS_PUSH_BLOCK:
+ # The opcode adds a block to the block_stack.
+ block_stack.append((bc.op, bc.jump_to))
+ if bc.op in OPS_POP_BLOCK:
+ # The opcode pops a block from the block stack.
+ block_stack.pop()
+ if bc.op in OPS_CHUNK_END:
+ # This opcode forces the end of the chunk.
+ if bc.op == OP_BREAK_LOOP:
+ # A break is implicit: jump where the top of the
+ # block_stack points.
+ chunk.exits.add(block_stack[-1][1])
+ chunk = None
+ if bc.op == OP_END_FINALLY:
+ if block_stack:
+ # A break that goes through a finally will jump to whatever
+ # block is on top of the stack.
+ chunk.exits.add(block_stack[-1][1])
+ # For the finally clause we need to find the closest exception
+ # block, and use its jump target as an exit.
+ for iblock in range(len(block_stack)-1, -1, -1):
+ if block_stack[iblock][0] in OPS_EXCEPT_BLOCKS:
+ chunk.exits.add(block_stack[iblock][1])
+ break
+ if bc.op == OP_COMPARE_OP and bc.arg == COMPARE_EXCEPTION:
+ # This is an except clause. We want to overlook the next
+                    # branch, so that except clauses don't count as branches.
+ ignore_branch += 1
+
+ penult = ult
+ ult = bc
+
+ if chunks:
+ # The last two bytecodes could be a dummy "return None" that
+ # shouldn't be counted as real code. Every Python code object seems
+ # to end with a return, and a "return None" is inserted if there
+ # isn't an explicit return in the source.
+ if ult and penult:
+ if penult.op == OP_LOAD_CONST and ult.op == OP_RETURN_VALUE:
+ if self.code.co_consts[penult.arg] is None:
+ # This is "return None", but is it dummy? A real line
+ # would be a last chunk all by itself.
+ if chunks[-1].byte != penult.offset:
+ ex = -self.code.co_firstlineno
+ # Split the last chunk
+ last_chunk = chunks[-1]
+ last_chunk.exits.remove(ex)
+ last_chunk.exits.add(penult.offset)
+ chunk = Chunk(penult.offset)
+ chunk.exits.add(ex)
+ chunks.append(chunk)
+
+ # Give all the chunks a length.
+ chunks[-1].length = bc.next_offset - chunks[-1].byte
+ for i in range(len(chunks)-1):
+ chunks[i].length = chunks[i+1].byte - chunks[i].byte
+
+ return chunks
+
+ def _arcs(self):
+ """Find the executable arcs in the code.
+
+ Returns a set of pairs, (from,to). From and to are integer line
+ numbers. If from is < 0, then the arc is an entrance into the code
+ object. If to is < 0, the arc is an exit from the code object.
+
+ """
+ chunks = self._split_into_chunks()
+
+ # A map from byte offsets to chunks jumped into.
+ byte_chunks = dict([(c.byte, c) for c in chunks])
+
+ # Build a map from byte offsets to actual lines reached.
+ byte_lines = {}
+ bytes_to_add = set([c.byte for c in chunks])
+
+ while bytes_to_add:
+ byte_to_add = bytes_to_add.pop()
+ if byte_to_add in byte_lines or byte_to_add < 0:
+ continue
+
+ # Which lines does this chunk lead to?
+ bytes_considered = set()
+ bytes_to_consider = [byte_to_add]
+ lines = set()
+
+ while bytes_to_consider:
+ byte = bytes_to_consider.pop()
+ bytes_considered.add(byte)
+
+ # Find chunk for byte
+ try:
+ ch = byte_chunks[byte]
+ except KeyError:
+ for ch in chunks:
+ if ch.byte <= byte < ch.byte+ch.length:
+ break
+ else:
+ # No chunk for this byte!
+ raise Exception("Couldn't find chunk @ %d" % byte)
+ byte_chunks[byte] = ch
+
+ if ch.line:
+ lines.add(ch.line)
+ else:
+ for ex in ch.exits:
+ if ex < 0:
+ lines.add(ex)
+ elif ex not in bytes_considered:
+ bytes_to_consider.append(ex)
+
+ bytes_to_add.update(ch.exits)
+
+ byte_lines[byte_to_add] = lines
+
+ # Figure out for each chunk where the exits go.
+ arcs = set()
+ for chunk in chunks:
+ if chunk.line:
+ for ex in chunk.exits:
+ if ex < 0:
+ exit_lines = [ex]
+ else:
+ exit_lines = byte_lines[ex]
+ for exit_line in exit_lines:
+ if chunk.line != exit_line:
+ arcs.add((chunk.line, exit_line))
+ for line in byte_lines[0]:
+ arcs.add((-1, line))
+
+ return arcs
+
+ def _all_chunks(self):
+ """Returns a list of `Chunk` objects for this code and its children.
+
+ See `_split_into_chunks` for details.
+
+ """
+ chunks = []
+ for bp in self.child_parsers():
+ chunks.extend(bp._split_into_chunks())
+
+ return chunks
+
+ def _all_arcs(self):
+ """Get the set of all arcs in this code object and its children.
+
+ See `_arcs` for details.
+
+ """
+ arcs = set()
+ for bp in self.child_parsers():
+ arcs.update(bp._arcs())
+
+ return arcs
+
+
+class Chunk(object):
+ """A sequence of bytecodes with a single entrance.
+
+ To analyze byte code, we have to divide it into chunks, sequences of byte
+ codes such that each basic block has only one entrance, the first
+ instruction in the block.
+
+ This is almost the CS concept of `basic block`_, except that we're willing
+ to have many exits from a chunk, and "basic block" is a more cumbersome
+ term.
+
+ .. _basic block: http://en.wikipedia.org/wiki/Basic_block
+
+ An exit < 0 means the chunk can leave the code (return). The exit is
+ the negative of the starting line number of the code block.
+
+ """
+ def __init__(self, byte, line=0):
+ self.byte = byte
+ self.line = line
+ self.length = 0
+ self.exits = set()
+
+ def __repr__(self):
+ return "<%d+%d @%d %r>" % (
+ self.byte, self.length, self.line, list(self.exits)
+ )
+
+
+class AdHocMain(object): # pragma: no cover
+ """An ad-hoc main for code parsing experiments."""
+
+ def main(self, args):
+ """A main function for trying the code from the command line."""
+
+ from optparse import OptionParser
+
+ parser = OptionParser()
+ parser.add_option(
+ "-c", action="store_true", dest="chunks",
+ help="Show basic block chunks"
+ )
+ parser.add_option(
+ "-d", action="store_true", dest="dis",
+ help="Disassemble"
+ )
+ parser.add_option(
+ "-R", action="store_true", dest="recursive",
+ help="Recurse to find source files"
+ )
+ parser.add_option(
+ "-s", action="store_true", dest="source",
+ help="Show analyzed source"
+ )
+ parser.add_option(
+ "-t", action="store_true", dest="tokens",
+ help="Show tokens"
+ )
+
+ options, args = parser.parse_args()
+ if options.recursive:
+ if args:
+ root = args[0]
+ else:
+ root = "."
+ for root, _, _ in os.walk(root):
+ for f in glob.glob(root + "/*.py"):
+ self.adhoc_one_file(options, f)
+ else:
+ self.adhoc_one_file(options, args[0])
+
+ def adhoc_one_file(self, options, filename):
+ """Process just one file."""
+
+ if options.dis or options.chunks:
+ try:
+ bp = ByteParser(filename=filename)
+ except CoverageException:
+ _, err, _ = sys.exc_info()
+ print("%s" % (err,))
+ return
+
+ if options.dis:
+ print("Main code:")
+ bp._disassemble()
+
+ if options.chunks:
+ chunks = bp._all_chunks()
+ if options.recursive:
+ print("%6d: %s" % (len(chunks), filename))
+ else:
+ print("Chunks: %r" % chunks)
+ arcs = bp._all_arcs()
+ print("Arcs: %r" % sorted(arcs))
+
+ if options.source or options.tokens:
+ cp = CodeParser(filename=filename, exclude=r"no\s*cover")
+ cp.show_tokens = options.tokens
+ cp._raw_parse()
+
+ if options.source:
+ if options.chunks:
+ arc_width, arc_chars = self.arc_ascii_art(arcs)
+ else:
+ arc_width, arc_chars = 0, {}
+
+ exit_counts = cp.exit_counts()
+
+ for i, ltext in enumerate(cp.lines):
+ lineno = i+1
+ m0 = m1 = m2 = m3 = a = ' '
+ if lineno in cp.statement_starts:
+ m0 = '-'
+ exits = exit_counts.get(lineno, 0)
+ if exits > 1:
+ m1 = str(exits)
+ if lineno in cp.docstrings:
+ m2 = '"'
+ if lineno in cp.classdefs:
+ m2 = 'C'
+ if lineno in cp.excluded:
+ m3 = 'x'
+ a = arc_chars.get(lineno, '').ljust(arc_width)
+ print("%4d %s%s%s%s%s %s" %
+ (lineno, m0, m1, m2, m3, a, ltext)
+ )
+
+ def arc_ascii_art(self, arcs):
+ """Draw arcs as ascii art.
+
+ Returns a width of characters needed to draw all the arcs, and a
+ dictionary mapping line numbers to ascii strings to draw for that line.
+
+ """
+ arc_chars = {}
+ for lfrom, lto in sorted(arcs):
+ if lfrom < 0:
+ arc_chars[lto] = arc_chars.get(lto, '') + 'v'
+ elif lto < 0:
+ arc_chars[lfrom] = arc_chars.get(lfrom, '') + '^'
+ else:
+ if lfrom == lto - 1:
+ # Don't show obvious arcs.
+ continue
+ if lfrom < lto:
+ l1, l2 = lfrom, lto
+ else:
+ l1, l2 = lto, lfrom
+ w = max([len(arc_chars.get(l, '')) for l in range(l1, l2+1)])
+ for l in range(l1, l2+1):
+ if l == lfrom:
+ ch = '<'
+ elif l == lto:
+ ch = '>'
+ else:
+ ch = '|'
+ arc_chars[l] = arc_chars.get(l, '').ljust(w) + ch
+
+ if arc_chars:
+ arc_width = max([len(a) for a in arc_chars.values()])
+ else:
+ arc_width = 0
+
+ return arc_width, arc_chars
+
+if __name__ == '__main__':
+ AdHocMain().main(sys.argv[1:])
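The `co_lnotab` decoding in `_bytes_lines` above reproduces what the standard library's `dis.findlinestarts` computes from the same data; a minimal CPython-only sketch of the byte-offset-to-line mapping (the demo source is made up):

```python
import dis

def bytes_lines(code):
    # Pairs of (first byte offset, line number): the same mapping
    # ByteParser._bytes_lines derives by hand from co_lnotab.
    return list(dis.findlinestarts(code))

source = "x = 1\ny = 2\n"
code = compile(source, "<demo>", "exec")
# Keep only real source lines; newer CPythons may also report line 0
# or None for synthetic setup instructions.
lines = sorted({l for _, l in bytes_lines(code) if l})
```

Each of the two statements contributes one line start, so `lines` comes out as `[1, 2]`.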
diff --git a/python/helpers/coverage/phystokens.py b/python/helpers/coverage/phystokens.py
new file mode 100644
index 0000000..fc4f2c9
--- /dev/null
+++ b/python/helpers/coverage/phystokens.py
@@ -0,0 +1,108 @@
+"""Better tokenizing for coverage.py."""
+
+import keyword, re, token, tokenize
+from coverage.backward import StringIO # pylint: disable=W0622
+
+def phys_tokens(toks):
+ """Return all physical tokens, even line continuations.
+
+ tokenize.generate_tokens() doesn't return a token for the backslash that
+ continues lines. This wrapper provides those tokens so that we can
+ re-create a faithful representation of the original source.
+
+    Returns the same values as generate_tokens().
+
+ """
+ last_line = None
+ last_lineno = -1
+ last_ttype = None
+ for ttype, ttext, (slineno, scol), (elineno, ecol), ltext in toks:
+ if last_lineno != elineno:
+ if last_line and last_line[-2:] == "\\\n":
+ # We are at the beginning of a new line, and the last line
+ # ended with a backslash. We probably have to inject a
+ # backslash token into the stream. Unfortunately, there's more
+ # to figure out. This code::
+ #
+ # usage = """\
+ # HEY THERE
+ # """
+ #
+ # triggers this condition, but the token text is::
+ #
+ # '"""\\\nHEY THERE\n"""'
+ #
+ # so we need to figure out if the backslash is already in the
+ # string token or not.
+ inject_backslash = True
+ if last_ttype == tokenize.COMMENT:
+ # Comments like this \
+ # should never result in a new token.
+ inject_backslash = False
+ elif ttype == token.STRING:
+ if "\n" in ttext and ttext.split('\n', 1)[0][-1] == '\\':
+ # It's a multiline string and the first line ends with
+ # a backslash, so we don't need to inject another.
+ inject_backslash = False
+ if inject_backslash:
+ # Figure out what column the backslash is in.
+ ccol = len(last_line.split("\n")[-2]) - 1
+ # Yield the token, with a fake token type.
+ yield (
+ 99999, "\\\n",
+ (slineno, ccol), (slineno, ccol+2),
+ last_line
+ )
+ last_line = ltext
+ last_ttype = ttype
+ yield ttype, ttext, (slineno, scol), (elineno, ecol), ltext
+ last_lineno = elineno
+
+
+def source_token_lines(source):
+ """Generate a series of lines, one for each line in `source`.
+
+ Each line is a list of pairs, each pair is a token::
+
+ [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]
+
+ Each pair has a token class, and the token text.
+
+ If you concatenate all the token texts, and then join them with newlines,
+ you should have your original `source` back, with two differences:
+ trailing whitespace is not preserved, and a final line with no newline
+ is indistinguishable from a final line with a newline.
+
+ """
+ ws_tokens = [token.INDENT, token.DEDENT, token.NEWLINE, tokenize.NL]
+ line = []
+ col = 0
+ source = source.expandtabs(8).replace('\r\n', '\n')
+ tokgen = tokenize.generate_tokens(StringIO(source).readline)
+ for ttype, ttext, (_, scol), (_, ecol), _ in phys_tokens(tokgen):
+ mark_start = True
+ for part in re.split('(\n)', ttext):
+ if part == '\n':
+ yield line
+ line = []
+ col = 0
+ mark_end = False
+ elif part == '':
+ mark_end = False
+ elif ttype in ws_tokens:
+ mark_end = False
+ else:
+ if mark_start and scol > col:
+ line.append(("ws", " " * (scol - col)))
+ mark_start = False
+ tok_class = tokenize.tok_name.get(ttype, 'xx').lower()[:3]
+ if ttype == token.NAME and keyword.iskeyword(ttext):
+ tok_class = "key"
+ line.append((tok_class, part))
+ mark_end = True
+ scol = 0
+ if mark_end:
+ col = ecol
+
+ if line:
+ yield line
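The gap `phys_tokens` fills can be seen directly: the standard tokenizer never emits a token for a backslash line continuation, so rebuilding source from token text alone would lose it. A small standalone check, independent of the helper above:

```python
import io
import tokenize

source = "x = 1 + \\\n    2\n"
toks = list(tokenize.generate_tokens(io.StringIO(source).readline))
texts = [t[1] for t in toks]

# No token carries the continuation backslash itself...
assert "\\\n" not in texts
# ...yet the trailing operand really starts on physical line 2.
two = [t for t in toks if t[1] == "2"][0]
assert two[2][0] == 2
```

Joining the token texts would therefore yield `x = 1 + 2` on one line, which is why a faithful source renderer has to inject the missing backslash token.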
diff --git a/python/helpers/coverage/report.py b/python/helpers/coverage/report.py
new file mode 100644
index 0000000..6c5510a
--- /dev/null
+++ b/python/helpers/coverage/report.py
@@ -0,0 +1,89 @@
+"""Reporter foundation for Coverage."""
+
+import fnmatch, os
+from coverage.codeunit import code_unit_factory
+from coverage.misc import CoverageException, NoSource, NotPython
+
+class Reporter(object):
+ """A base class for all reporters."""
+
+ def __init__(self, coverage, ignore_errors=False):
+ """Create a reporter.
+
+        `coverage` is the coverage instance. `ignore_errors` controls whether
+        errors encountered while processing files are raised or ignored.
+
+ """
+ self.coverage = coverage
+ self.ignore_errors = ignore_errors
+
+ # The code units to report on. Set by find_code_units.
+ self.code_units = []
+
+ # The directory into which to place the report, used by some derived
+ # classes.
+ self.directory = None
+
+ def find_code_units(self, morfs, config):
+ """Find the code units we'll report on.
+
+ `morfs` is a list of modules or filenames. `config` is a
+ CoverageConfig instance.
+
+ """
+ morfs = morfs or self.coverage.data.measured_files()
+ file_locator = self.coverage.file_locator
+ self.code_units = code_unit_factory(morfs, file_locator)
+
+ if config.include:
+ patterns = [file_locator.abs_file(p) for p in config.include]
+ filtered = []
+ for cu in self.code_units:
+ for pattern in patterns:
+ if fnmatch.fnmatch(cu.filename, pattern):
+ filtered.append(cu)
+ break
+ self.code_units = filtered
+
+ if config.omit:
+ patterns = [file_locator.abs_file(p) for p in config.omit]
+ filtered = []
+ for cu in self.code_units:
+ for pattern in patterns:
+ if fnmatch.fnmatch(cu.filename, pattern):
+ break
+ else:
+ filtered.append(cu)
+ self.code_units = filtered
+
+ self.code_units.sort()
+
+ def report_files(self, report_fn, morfs, config, directory=None):
+ """Run a reporting function on a number of morfs.
+
+ `report_fn` is called for each relative morf in `morfs`. It is called
+ as::
+
+ report_fn(code_unit, analysis)
+
+ where `code_unit` is the `CodeUnit` for the morf, and `analysis` is
+ the `Analysis` for the morf.
+
+ `config` is a CoverageConfig instance.
+
+ """
+ self.find_code_units(morfs, config)
+
+ if not self.code_units:
+ raise CoverageException("No data to report.")
+
+ self.directory = directory
+ if self.directory and not os.path.exists(self.directory):
+ os.makedirs(self.directory)
+
+ for cu in self.code_units:
+ try:
+ report_fn(cu, self.coverage._analyze(cu))
+ except (NoSource, NotPython):
+ if not self.ignore_errors:
+ raise
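`find_code_units` above filters with plain `fnmatch` globs against absolute paths: `include` keeps a file only if some pattern matches it, `omit` drops it if any pattern matches. A standalone sketch of that logic (the file names are hypothetical):

```python
import fnmatch

def filter_files(files, include=None, omit=None):
    # include: keep only files matching at least one pattern.
    if include:
        files = [f for f in files
                 if any(fnmatch.fnmatch(f, p) for p in include)]
    # omit: drop files matching any pattern.
    if omit:
        files = [f for f in files
                 if not any(fnmatch.fnmatch(f, p) for p in omit)]
    return files

files = ["/src/app.py", "/src/tests/test_app.py", "/lib/util.py"]
kept = filter_files(files, include=["/src/*"], omit=["*/tests/*"])
```

Note that `fnmatch`'s `*` matches path separators too, so `/src/*` matches files in subdirectories of `/src` as well; only the omit pattern removes the test file here.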
diff --git a/python/helpers/coverage/results.py b/python/helpers/coverage/results.py
new file mode 100644
index 0000000..adfb8f4
--- /dev/null
+++ b/python/helpers/coverage/results.py
@@ -0,0 +1,245 @@
+"""Results of coverage measurement."""
+
+import os
+
+from coverage.backward import set, sorted # pylint: disable=W0622
+from coverage.misc import format_lines, join_regex, NoSource
+from coverage.parser import CodeParser
+
+
+class Analysis(object):
+ """The results of analyzing a code unit."""
+
+ def __init__(self, cov, code_unit):
+ self.coverage = cov
+ self.code_unit = code_unit
+
+ self.filename = self.code_unit.filename
+ ext = os.path.splitext(self.filename)[1]
+ source = None
+ if ext == '.py':
+ if not os.path.exists(self.filename):
+ source = self.coverage.file_locator.get_zip_data(self.filename)
+ if not source:
+ raise NoSource("No source for code: %r" % self.filename)
+
+ self.parser = CodeParser(
+ text=source, filename=self.filename,
+ exclude=self.coverage._exclude_regex('exclude')
+ )
+ self.statements, self.excluded = self.parser.parse_source()
+
+ # Identify missing statements.
+ executed = self.coverage.data.executed_lines(self.filename)
+ exec1 = self.parser.first_lines(executed)
+ self.missing = sorted(set(self.statements) - set(exec1))
+
+ if self.coverage.data.has_arcs():
+ self.no_branch = self.parser.lines_matching(
+ join_regex(self.coverage.config.partial_list),
+ join_regex(self.coverage.config.partial_always_list)
+ )
+ n_branches = self.total_branches()
+ mba = self.missing_branch_arcs()
+ n_missing_branches = sum([len(v) for v in mba.values()])
+ else:
+ n_branches = n_missing_branches = 0
+ self.no_branch = set()
+
+ self.numbers = Numbers(
+ n_files=1,
+ n_statements=len(self.statements),
+ n_excluded=len(self.excluded),
+ n_missing=len(self.missing),
+ n_branches=n_branches,
+ n_missing_branches=n_missing_branches,
+ )
+
+ def missing_formatted(self):
+ """The missing line numbers, formatted nicely.
+
+ Returns a string like "1-2, 5-11, 13-14".
+
+ """
+ return format_lines(self.statements, self.missing)
+
+ def has_arcs(self):
+ """Were arcs measured in this result?"""
+ return self.coverage.data.has_arcs()
+
+ def arc_possibilities(self):
+ """Returns a sorted list of the arcs in the code."""
+ arcs = self.parser.arcs()
+ return arcs
+
+ def arcs_executed(self):
+ """Returns a sorted list of the arcs actually executed in the code."""
+ executed = self.coverage.data.executed_arcs(self.filename)
+ m2fl = self.parser.first_line
+ executed = [(m2fl(l1), m2fl(l2)) for (l1,l2) in executed]
+ return sorted(executed)
+
+ def arcs_missing(self):
+ """Returns a sorted list of the arcs in the code not executed."""
+ possible = self.arc_possibilities()
+ executed = self.arcs_executed()
+ missing = [
+ p for p in possible
+ if p not in executed
+ and p[0] not in self.no_branch
+ ]
+ return sorted(missing)
+
+ def arcs_unpredicted(self):
+ """Returns a sorted list of the executed arcs missing from the code."""
+ possible = self.arc_possibilities()
+ executed = self.arcs_executed()
+ # Exclude arcs here which connect a line to itself. They can occur
+ # in executed data in some cases. This is where they can cause
+ # trouble, and here is where it's the least burden to remove them.
+ unpredicted = [
+ e for e in executed
+ if e not in possible
+ and e[0] != e[1]
+ ]
+ return sorted(unpredicted)
+
+ def branch_lines(self):
+ """Returns a list of line numbers that have more than one exit."""
+ exit_counts = self.parser.exit_counts()
+ return [l1 for l1,count in exit_counts.items() if count > 1]
+
+ def total_branches(self):
+ """How many total branches are there?"""
+ exit_counts = self.parser.exit_counts()
+ return sum([count for count in exit_counts.values() if count > 1])
+
+ def missing_branch_arcs(self):
+ """Return arcs that weren't executed from branch lines.
+
+ Returns {l1:[l2a,l2b,...], ...}
+
+ """
+ missing = self.arcs_missing()
+ branch_lines = set(self.branch_lines())
+ mba = {}
+ for l1, l2 in missing:
+ if l1 in branch_lines:
+ if l1 not in mba:
+ mba[l1] = []
+ mba[l1].append(l2)
+ return mba
+
+ def branch_stats(self):
+ """Get stats about branches.
+
+ Returns a dict mapping line numbers to a tuple:
+ (total_exits, taken_exits).
+ """
+
+ exit_counts = self.parser.exit_counts()
+ missing_arcs = self.missing_branch_arcs()
+ stats = {}
+ for lnum in self.branch_lines():
+ exits = exit_counts[lnum]
+ try:
+ missing = len(missing_arcs[lnum])
+ except KeyError:
+ missing = 0
+ stats[lnum] = (exits, exits - missing)
+ return stats
+
+
+class Numbers(object):
+ """The numerical results of measuring coverage.
+
+ This holds the basic statistics from `Analysis`, and is used to roll
+ up statistics across files.
+
+ """
+    # A class attribute controlling the precision of coverage percentages:
+    # the number of decimal places.
+ _precision = 0
+ _near0 = 1.0 # These will change when _precision is changed.
+ _near100 = 99.0
+
+ def __init__(self, n_files=0, n_statements=0, n_excluded=0, n_missing=0,
+ n_branches=0, n_missing_branches=0
+ ):
+ self.n_files = n_files
+ self.n_statements = n_statements
+ self.n_excluded = n_excluded
+ self.n_missing = n_missing
+ self.n_branches = n_branches
+ self.n_missing_branches = n_missing_branches
+
+ def set_precision(cls, precision):
+ """Set the number of decimal places used to report percentages."""
+ assert 0 <= precision < 10
+ cls._precision = precision
+ cls._near0 = 1.0 / 10**precision
+ cls._near100 = 100.0 - cls._near0
+ set_precision = classmethod(set_precision)
+
+ def _get_n_executed(self):
+ """Returns the number of executed statements."""
+ return self.n_statements - self.n_missing
+ n_executed = property(_get_n_executed)
+
+ def _get_n_executed_branches(self):
+ """Returns the number of executed branches."""
+ return self.n_branches - self.n_missing_branches
+ n_executed_branches = property(_get_n_executed_branches)
+
+ def _get_pc_covered(self):
+ """Returns a single percentage value for coverage."""
+ if self.n_statements > 0:
+ pc_cov = (100.0 * (self.n_executed + self.n_executed_branches) /
+ (self.n_statements + self.n_branches))
+ else:
+ pc_cov = 100.0
+ return pc_cov
+ pc_covered = property(_get_pc_covered)
+
+ def _get_pc_covered_str(self):
+ """Returns the percent covered, as a string, without a percent sign.
+
+ Note that "0" is only returned when the value is truly zero, and "100"
+ is only returned when the value is truly 100. Rounding can never
+ result in either "0" or "100".
+
+ """
+ pc = self.pc_covered
+ if 0 < pc < self._near0:
+ pc = self._near0
+ elif self._near100 < pc < 100:
+ pc = self._near100
+ else:
+ pc = round(pc, self._precision)
+ return "%.*f" % (self._precision, pc)
+ pc_covered_str = property(_get_pc_covered_str)
+
+ def pc_str_width(cls):
+ """How many characters wide can pc_covered_str be?"""
+ width = 3 # "100"
+ if cls._precision > 0:
+ width += 1 + cls._precision
+ return width
+ pc_str_width = classmethod(pc_str_width)
+
+ def __add__(self, other):
+ nums = Numbers()
+ nums.n_files = self.n_files + other.n_files
+ nums.n_statements = self.n_statements + other.n_statements
+ nums.n_excluded = self.n_excluded + other.n_excluded
+ nums.n_missing = self.n_missing + other.n_missing
+ nums.n_branches = self.n_branches + other.n_branches
+ nums.n_missing_branches = (self.n_missing_branches +
+ other.n_missing_branches)
+ return nums
+
+ def __radd__(self, other):
+ # Implementing 0+Numbers allows us to sum() a list of Numbers.
+ if other == 0:
+ return self
+ return NotImplemented
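The `__radd__` shim at the end of `Numbers` exists so that plain `sum()` can roll up per-file totals: `sum` starts from the int `0`, and `0 + Numbers` only resolves through `__radd__`. A simplified stand-in class showing the same pattern (not the real `Numbers`):

```python
class Nums:
    def __init__(self, n_statements=0, n_missing=0):
        self.n_statements = n_statements
        self.n_missing = n_missing

    def __add__(self, other):
        return Nums(self.n_statements + other.n_statements,
                    self.n_missing + other.n_missing)

    def __radd__(self, other):
        # sum() begins with the int 0; hand back self unchanged.
        if other == 0:
            return self
        return NotImplemented

total = sum([Nums(10, 2), Nums(5, 1)])
```

Without `__radd__`, the first addition `0 + Nums(10, 2)` would raise a `TypeError`, forcing every caller to pass an explicit start value.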
diff --git a/python/helpers/coverage/summary.py b/python/helpers/coverage/summary.py
new file mode 100644
index 0000000..599ae78
--- /dev/null
+++ b/python/helpers/coverage/summary.py
@@ -0,0 +1,81 @@
+"""Summary reporting"""
+
+import sys
+
+from coverage.report import Reporter
+from coverage.results import Numbers
+
+
+class SummaryReporter(Reporter):
+ """A reporter for writing the summary report."""
+
+ def __init__(self, coverage, show_missing=True, ignore_errors=False):
+ super(SummaryReporter, self).__init__(coverage, ignore_errors)
+ self.show_missing = show_missing
+ self.branches = coverage.data.has_arcs()
+
+ def report(self, morfs, outfile=None, config=None):
+ """Writes a report summarizing coverage statistics per module.
+
+ `outfile` is a file object to write the summary to. `config` is a
+ CoverageConfig instance.
+
+ """
+ self.find_code_units(morfs, config)
+
+ # Prepare the formatting strings
+ max_name = max([len(cu.name) for cu in self.code_units] + [5])
+ fmt_name = "%%- %ds " % max_name
+ fmt_err = "%s %s: %s\n"
+ header = (fmt_name % "Name") + " Stmts Miss"
+ fmt_coverage = fmt_name + "%6d %6d"
+ if self.branches:
+ header += " Branch BrPart"
+ fmt_coverage += " %6d %6d"
+ width100 = Numbers.pc_str_width()
+ header += "%*s" % (width100+4, "Cover")
+ fmt_coverage += "%%%ds%%%%" % (width100+3,)
+ if self.show_missing:
+ header += " Missing"
+ fmt_coverage += " %s"
+ rule = "-" * len(header) + "\n"
+ header += "\n"
+ fmt_coverage += "\n"
+
+ if not outfile:
+ outfile = sys.stdout
+
+ # Write the header
+ outfile.write(header)
+ outfile.write(rule)
+
+ total = Numbers()
+
+ for cu in self.code_units:
+ try:
+ analysis = self.coverage._analyze(cu)
+ nums = analysis.numbers
+ args = (cu.name, nums.n_statements, nums.n_missing)
+ if self.branches:
+ args += (nums.n_branches, nums.n_missing_branches)
+ args += (nums.pc_covered_str,)
+ if self.show_missing:
+ args += (analysis.missing_formatted(),)
+ outfile.write(fmt_coverage % args)
+ total += nums
+ except KeyboardInterrupt: # pragma: no cover
+ raise
+ except:
+ if not self.ignore_errors:
+ typ, msg = sys.exc_info()[:2]
+ outfile.write(fmt_err % (cu.name, typ.__name__, msg))
+
+ if total.n_files > 1:
+ outfile.write(rule)
+ args = ("TOTAL", total.n_statements, total.n_missing)
+ if self.branches:
+ args += (total.n_branches, total.n_missing_branches)
+ args += (total.pc_covered_str,)
+ if self.show_missing:
+ args += ("",)
+ outfile.write(fmt_coverage % args)
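`SummaryReporter.report()` builds its row format in two stages: `"%%- %ds "` is itself a `%`-format whose expansion produces a width-parameterized `%-Ns` spec sized to the longest module name. A sketch of that two-stage formatting, with hypothetical file names:

```python
# Stage 1: compute the name-column width and bake it into a format string.
names = ["alpha.py", "beta.py", "Name"]
max_name = max(len(n) for n in names)      # 8
fmt_name = "%%- %ds " % max_name           # expands to "%- 8s "

# Stage 2: use the generated format for the header and each data row.
header = (fmt_name % "Name") + " Stmts   Miss"
row = fmt_name + "%6d %6d"

print(header)
print(row % ("alpha.py", 120, 7))
```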
diff --git a/python/helpers/coverage/templite.py b/python/helpers/coverage/templite.py
new file mode 100644
index 0000000..c39e061
--- /dev/null
+++ b/python/helpers/coverage/templite.py
@@ -0,0 +1,166 @@
+"""A simple Python template renderer, for a nano-subset of Django syntax."""
+
+# Coincidentally named the same as http://code.activestate.com/recipes/496702/
+
+import re, sys
+
+class Templite(object):
+ """A simple template renderer, for a nano-subset of Django syntax.
+
+ Supported constructs are extended variable access::
+
+ {{var.modifier.modifier|filter|filter}}
+
+ loops::
+
+ {% for var in list %}...{% endfor %}
+
+ and ifs::
+
+ {% if var %}...{% endif %}
+
+ Comments are within curly-hash markers::
+
+ {# This will be ignored #}
+
+ Construct a Templite with the template text, then use `render` against a
+ dictionary context to create a finished string.
+
+ """
+ def __init__(self, text, *contexts):
+ """Construct a Templite with the given `text`.
+
+ `contexts` are dictionaries of values to use for future renderings.
+ These are good for filters and global values.
+
+ """
+ self.text = text
+ self.context = {}
+ for context in contexts:
+ self.context.update(context)
+
+ # Split the text to form a list of tokens.
+ toks = re.split(r"(?s)({{.*?}}|{%.*?%}|{#.*?#})", text)
+
+ # Parse the tokens into a nested list of operations. Each item in the
+ # list is a tuple with an opcode, and arguments. They'll be
+ # interpreted by TempliteEngine.
+ #
+ # When parsing an action tag with nested content (if, for), the current
+ # ops list is pushed onto ops_stack, and the parsing continues in a new
+ # ops list that is part of the arguments to the if or for op.
+ ops = []
+ ops_stack = []
+ for tok in toks:
+ if tok.startswith('{{'):
+ # Expression: ('exp', expr)
+ ops.append(('exp', tok[2:-2].strip()))
+ elif tok.startswith('{#'):
+ # Comment: ignore it and move on.
+ continue
+ elif tok.startswith('{%'):
+ # Action tag: split into words and parse further.
+ words = tok[2:-2].strip().split()
+ if words[0] == 'if':
+ # If: ('if', (expr, body_ops))
+ if_ops = []
+ assert len(words) == 2
+ ops.append(('if', (words[1], if_ops)))
+ ops_stack.append(ops)
+ ops = if_ops
+ elif words[0] == 'for':
+ # For: ('for', (varname, listexpr, body_ops))
+ assert len(words) == 4 and words[2] == 'in'
+ for_ops = []
+ ops.append(('for', (words[1], words[3], for_ops)))
+ ops_stack.append(ops)
+ ops = for_ops
+ elif words[0].startswith('end'):
+ # Endsomething. Pop the ops stack
+ ops = ops_stack.pop()
+ assert ops[-1][0] == words[0][3:]
+ else:
+ raise SyntaxError("Don't understand tag %r" % words)
+ else:
+ ops.append(('lit', tok))
+
+ assert not ops_stack, "Unmatched action tag: %r" % ops_stack[-1][0]
+ self.ops = ops
+
+ def render(self, context=None):
+ """Render this template by applying it to `context`.
+
+ `context` is a dictionary of values to use in this rendering.
+
+ """
+ # Make the complete context we'll use.
+ ctx = dict(self.context)
+ if context:
+ ctx.update(context)
+
+ # Run it through an engine, and return the result.
+ engine = _TempliteEngine(ctx)
+ engine.execute(self.ops)
+ return "".join(engine.result)
+
+
+class _TempliteEngine(object):
+ """Executes Templite objects to produce strings."""
+ def __init__(self, context):
+ self.context = context
+ self.result = []
+
+ def execute(self, ops):
+ """Execute `ops` in the engine.
+
+ Called recursively for the bodies of if's and loops.
+
+ """
+ for op, args in ops:
+ if op == 'lit':
+ self.result.append(args)
+ elif op == 'exp':
+ try:
+ self.result.append(str(self.evaluate(args)))
+ except:
+ exc_class, exc, _ = sys.exc_info()
+ new_exc = exc_class("Couldn't evaluate {{ %s }}: %s"
+ % (args, exc))
+ raise new_exc
+ elif op == 'if':
+ expr, body = args
+ if self.evaluate(expr):
+ self.execute(body)
+ elif op == 'for':
+ var, lis, body = args
+ vals = self.evaluate(lis)
+ for val in vals:
+ self.context[var] = val
+ self.execute(body)
+ else:
+ raise AssertionError("TempliteEngine doesn't grok op %r" % op)
+
+ def evaluate(self, expr):
+ """Evaluate an expression.
+
+ `expr` can have pipes and dots to indicate data access and filtering.
+
+ """
+ if "|" in expr:
+ pipes = expr.split("|")
+ value = self.evaluate(pipes[0])
+ for func in pipes[1:]:
+ value = self.evaluate(func)(value)
+ elif "." in expr:
+ dots = expr.split('.')
+ value = self.evaluate(dots[0])
+ for dot in dots[1:]:
+ try:
+ value = getattr(value, dot)
+ except AttributeError:
+ value = value[dot]
+ if hasattr(value, '__call__'):
+ value = value()
+ else:
+ value = self.context[expr]
+ return value
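The tokenizer above relies on a detail of `re.split`: because the delimiter pattern is wrapped in a capturing group, the delimiters themselves are kept in the result list rather than discarded. A standalone sketch of that first parsing step, using the same regex on a made-up template string:

```python
import re

# Same pattern as Templite.__init__: (?s) lets . match newlines,
# and the capturing group keeps each {{...}}/{%...%}/{#...#} token.
TOKEN_RE = r"(?s)({{.*?}}|{%.*?%}|{#.*?#})"
text = "Hello {{name}}!{# greeting #}{% if x %}yes{% endif %}"
toks = re.split(TOKEN_RE, text)
print(toks)
```

Literal text and action tags alternate in the output, which is what lets the parse loop dispatch on each token's first two characters.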
diff --git a/python/helpers/coverage/tracer.pyd b/python/helpers/coverage/tracer.pyd
new file mode 100644
index 0000000..a13aa03
--- /dev/null
+++ b/python/helpers/coverage/tracer.pyd
Binary files differ
diff --git a/python/helpers/coverage/xmlreport.py b/python/helpers/coverage/xmlreport.py
new file mode 100644
index 0000000..5f6cc87
--- /dev/null
+++ b/python/helpers/coverage/xmlreport.py
@@ -0,0 +1,147 @@
+"""XML reporting for coverage.py"""
+
+import os, sys, time
+import xml.dom.minidom
+
+from coverage import __url__, __version__
+from coverage.backward import sorted # pylint: disable=W0622
+from coverage.report import Reporter
+
+def rate(hit, num):
+ """Return the fraction of `hit`/`num`, as a string."""
+ return "%.4g" % (float(hit) / (num or 1.0))
+
+
+class XmlReporter(Reporter):
+ """A reporter for writing Cobertura-style XML coverage results."""
+
+ def __init__(self, coverage, ignore_errors=False):
+ super(XmlReporter, self).__init__(coverage, ignore_errors)
+
+ self.packages = None
+ self.xml_out = None
+ self.arcs = coverage.data.has_arcs()
+
+ def report(self, morfs, outfile=None, config=None):
+ """Generate a Cobertura-compatible XML report for `morfs`.
+
+ `morfs` is a list of modules or filenames.
+
+ `outfile` is a file object to write the XML to. `config` is a
+ CoverageConfig instance.
+
+ """
+ # Initial setup.
+ outfile = outfile or sys.stdout
+
+ # Create the DOM that will store the data.
+ impl = xml.dom.minidom.getDOMImplementation()
+ docType = impl.createDocumentType(
+ "coverage", None,
+ "http://cobertura.sourceforge.net/xml/coverage-03.dtd"
+ )
+ self.xml_out = impl.createDocument(None, "coverage", docType)
+
+ # Write header stuff.
+ xcoverage = self.xml_out.documentElement
+ xcoverage.setAttribute("version", __version__)
+ xcoverage.setAttribute("timestamp", str(int(time.time()*1000)))
+ xcoverage.appendChild(self.xml_out.createComment(
+ " Generated by coverage.py: %s " % __url__
+ ))
+ xpackages = self.xml_out.createElement("packages")
+ xcoverage.appendChild(xpackages)
+
+ # Call xml_file for each file in the data.
+ self.packages = {}
+ self.report_files(self.xml_file, morfs, config)
+
+ lnum_tot, lhits_tot = 0, 0
+ bnum_tot, bhits_tot = 0, 0
+
+ # Populate the XML DOM with the package info.
+ for pkg_name in sorted(self.packages.keys()):
+ pkg_data = self.packages[pkg_name]
+ class_elts, lhits, lnum, bhits, bnum = pkg_data
+ xpackage = self.xml_out.createElement("package")
+ xpackages.appendChild(xpackage)
+ xclasses = self.xml_out.createElement("classes")
+ xpackage.appendChild(xclasses)
+ for class_name in sorted(class_elts.keys()):
+ xclasses.appendChild(class_elts[class_name])
+ xpackage.setAttribute("name", pkg_name.replace(os.sep, '.'))
+ xpackage.setAttribute("line-rate", rate(lhits, lnum))
+ xpackage.setAttribute("branch-rate", rate(bhits, bnum))
+ xpackage.setAttribute("complexity", "0")
+
+ lnum_tot += lnum
+ lhits_tot += lhits
+ bnum_tot += bnum
+ bhits_tot += bhits
+
+ xcoverage.setAttribute("line-rate", rate(lhits_tot, lnum_tot))
+ xcoverage.setAttribute("branch-rate", rate(bhits_tot, bnum_tot))
+
+ # Use the DOM to write the output file.
+ outfile.write(self.xml_out.toprettyxml())
+
+ def xml_file(self, cu, analysis):
+ """Add to the XML report for a single file."""
+
+ # Create the 'lines' and 'package' XML elements, which
+ # are populated later. Note that a package == a directory.
+ dirname, fname = os.path.split(cu.name)
+ dirname = dirname or '.'
+ package = self.packages.setdefault(dirname, [ {}, 0, 0, 0, 0 ])
+
+ xclass = self.xml_out.createElement("class")
+
+ xclass.appendChild(self.xml_out.createElement("methods"))
+
+ xlines = self.xml_out.createElement("lines")
+ xclass.appendChild(xlines)
+ className = fname.replace('.', '_')
+ xclass.setAttribute("name", className)
+ ext = os.path.splitext(cu.filename)[1]
+ xclass.setAttribute("filename", cu.name + ext)
+ xclass.setAttribute("complexity", "0")
+
+ branch_stats = analysis.branch_stats()
+
+ # For each statement, create an XML 'line' element.
+ for line in analysis.statements:
+ xline = self.xml_out.createElement("line")
+ xline.setAttribute("number", str(line))
+
+ # Q: can we get info about the number of times a statement is
+ # executed? If so, that should be recorded here.
+ xline.setAttribute("hits", str(int(line not in analysis.missing)))
+
+ if self.arcs:
+ if line in branch_stats:
+ total, taken = branch_stats[line]
+ xline.setAttribute("branch", "true")
+ xline.setAttribute("condition-coverage",
+ "%d%% (%d/%d)" % (100*taken/total, taken, total)
+ )
+ xlines.appendChild(xline)
+
+ class_lines = len(analysis.statements)
+ class_hits = class_lines - len(analysis.missing)
+
+ if self.arcs:
+ class_branches = sum([t for t,k in branch_stats.values()])
+ missing_branches = sum([t-k for t,k in branch_stats.values()])
+ class_br_hits = class_branches - missing_branches
+ else:
+ class_branches = 0.0
+ class_br_hits = 0.0
+
+ # Finalize the statistics that are collected in the XML DOM.
+ xclass.setAttribute("line-rate", rate(class_hits, class_lines))
+ xclass.setAttribute("branch-rate", rate(class_br_hits, class_branches))
+ package[0][className] = xclass
+ package[1] += class_hits
+ package[2] += class_lines
+ package[3] += class_br_hits
+ package[4] += class_branches
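The `rate()` helper at the top of this file sidesteps division by zero with `(num or 1.0)` and rounds to four significant digits via `%.4g`. A quick self-contained sketch of its behavior:

```python
def rate(hit, num):
    # Four significant digits; (num or 1.0) avoids ZeroDivisionError
    # when a package has no statements or branches at all.
    return "%.4g" % (float(hit) / (num or 1.0))

print(rate(45, 60))   # → 0.75
print(rate(0, 0))     # → 0
print(rate(2, 3))     # → 0.6667
```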
diff --git a/python/helpers/docutils/__init__.py b/python/helpers/docutils/__init__.py
new file mode 100644
index 0000000..ec0f725
--- /dev/null
+++ b/python/helpers/docutils/__init__.py
@@ -0,0 +1,204 @@
+# $Id: __init__.py 6364 2010-07-07 14:56:53Z grubert $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This is the Docutils (Python Documentation Utilities) package.
+
+Package Structure
+=================
+
+Modules:
+
+- __init__.py: Contains component base classes, exception classes, and
+ Docutils version information.
+
+- core.py: Contains the ``Publisher`` class and ``publish_*()`` convenience
+ functions.
+
+- frontend.py: Runtime settings (command-line interface, configuration files)
+ processing, for Docutils front-ends.
+
+- io.py: Provides a uniform API for low-level input and output.
+
+- nodes.py: Docutils document tree (doctree) node class library.
+
+- statemachine.py: A finite state machine specialized for
+ regular-expression-based text filters.
+
+- urischemes.py: Contains a complete mapping of known URI addressing
+ scheme names to descriptions.
+
+- utils.py: Contains the ``Reporter`` system warning class and miscellaneous
+ utilities.
+
+Subpackages:
+
+- languages: Language-specific mappings of terms.
+
+- parsers: Syntax-specific input parser modules or packages.
+
+- readers: Context-specific input handlers which understand the data
+ source and manage a parser.
+
+- transforms: Modules used by readers and writers to modify DPS
+ doctrees.
+
+- writers: Format-specific output translators.
+"""
+
+__docformat__ = 'reStructuredText'
+
+__version__ = '0.8'
+"""``major.minor.micro`` version number. The micro number is bumped for API
+changes, for new functionality, and for interim project releases. The minor
+number is bumped whenever there is a significant project release. The major
+number will be bumped when the project is feature-complete, and perhaps if
+there is a major change in the design."""
+
+__version_details__ = 'snapshot 2010-09-01, r6395'
+"""Extra version details (e.g. 'snapshot 2005-05-29, r3410', 'repository',
+'release'), modified automatically & manually."""
+
+class ApplicationError(StandardError): pass
+class DataError(ApplicationError): pass
+
+
+class SettingsSpec:
+
+ """
+ Runtime setting specification base class.
+
+ SettingsSpec subclass objects are used by `docutils.frontend.OptionParser`.
+ """
+
+ settings_spec = ()
+ """Runtime settings specification. Override in subclasses.
+
+ Defines runtime settings and associated command-line options, as used by
+ `docutils.frontend.OptionParser`. This is a tuple of:
+
+ - Option group title (string or `None` which implies no group, just a list
+ of single options).
+
+ - Description (string or `None`).
+
+ - A sequence of option tuples. Each consists of:
+
+ - Help text (string)
+
+ - List of option strings (e.g. ``['-Q', '--quux']``).
+
+ - Dictionary of keyword arguments sent to the OptionParser/OptionGroup
+ ``add_option`` method.
+
+ Runtime setting names are derived implicitly from long option names
+ ('--a-setting' becomes ``settings.a_setting``) or explicitly from the
+ 'dest' keyword argument.
+
+ Most settings will also have a 'validator' keyword & function. The
+ validator function validates setting values (from configuration files
+ and command-line option arguments) and converts them to appropriate
+ types. For example, the ``docutils.frontend.validate_boolean``
+ function, **required by all boolean settings**, converts true values
+ ('1', 'on', 'yes', and 'true') to 1 and false values ('0', 'off',
+ 'no', 'false', and '') to 0. Validators need only be set once per
+ setting. See the `docutils.frontend.validate_*` functions.
+
+ See the optparse docs for more details.
+
+ - More triples of group title, description, options, as many times as
+ needed. Thus, `settings_spec` tuples can be simply concatenated.
+ """
+
+ settings_defaults = None
+ """A dictionary of defaults for settings not in `settings_spec` (internal
+ settings, intended to be inaccessible by command-line and config file).
+ Override in subclasses."""
+
+ settings_default_overrides = None
+ """A dictionary of auxiliary defaults, to override defaults for settings
+ defined in other components. Override in subclasses."""
+
+ relative_path_settings = ()
+ """Settings containing filesystem paths. Override in subclasses.
+ Settings listed here are to be interpreted relative to the current working
+ directory."""
+
+ config_section = None
+ """The name of the config file section specific to this component
+ (lowercase, no brackets). Override in subclasses."""
+
+ config_section_dependencies = None
+ """A list of names of config file sections that are to be applied before
+ `config_section`, in order (from general to specific). In other words,
+ the settings in `config_section` are to be overlaid on top of the settings
+ from these sections. The "general" section is assumed implicitly.
+ Override in subclasses."""
+
+
+class TransformSpec:
+
+ """
+ Runtime transform specification base class.
+
+ TransformSpec subclass objects are used by `docutils.transforms.Transformer`.
+ """
+
+ def get_transforms(self):
+ """Transforms required by this class. Override in subclasses."""
+ if self.default_transforms != ():
+ import warnings
+ warnings.warn('default_transforms attribute deprecated.\n'
+ 'Use get_transforms() method instead.',
+ DeprecationWarning)
+ return list(self.default_transforms)
+ return []
+
+ # Deprecated; for compatibility.
+ default_transforms = ()
+
+ unknown_reference_resolvers = ()
+ """List of functions to try to resolve unknown references. Unknown
+ references have a 'refname' attribute which doesn't correspond to any
+ target in the document. Called when the transforms in
+ `docutils.transforms.references` are unable to find a correct target. The
+ list should contain functions which will try to resolve unknown
+ references, with the following signature::
+
+ def reference_resolver(node):
+ '''Returns boolean: true if resolved, false if not.'''
+
+ If the function is able to resolve the reference, it should also remove
+ the 'refname' attribute and mark the node as resolved::
+
+ del node['refname']
+ node.resolved = 1
+
+ Each function must have a "priority" attribute which will affect the order
+ the unknown_reference_resolvers are run::
+
+ reference_resolver.priority = 100
+
+ Override in subclasses."""
+
+
+class Component(SettingsSpec, TransformSpec):
+
+ """Base class for Docutils components."""
+
+ component_type = None
+ """Name of the component type ('reader', 'parser', 'writer'). Override in
+ subclasses."""
+
+ supported = ()
+ """Names for this component. Override in subclasses."""
+
+ def supports(self, format):
+ """
+ Is `format` supported by this component?
+
+ To be used by transforms to ask the dependent component if it supports
+ a certain input context or output format.
+ """
+ return format in self.supported
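The `supports()` protocol lets a transform query the dependent component by format name, with `supported` overridden per subclass. A toy illustration of the same pattern (the `HtmlishWriter` subclass is hypothetical, not a real Docutils component):

```python
class Component:
    # Names for this component; subclasses override.
    supported = ()

    def supports(self, format):
        return format in self.supported

class HtmlishWriter(Component):
    # Hypothetical writer claiming two format names.
    supported = ('html', 'xhtml')

w = HtmlishWriter()
print(w.supports('html'))   # → True
print(w.supports('latex'))  # → False
```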
diff --git a/python/helpers/docutils/_compat.py b/python/helpers/docutils/_compat.py
new file mode 100644
index 0000000..87f46da
--- /dev/null
+++ b/python/helpers/docutils/_compat.py
@@ -0,0 +1,36 @@
+# $Id: _compat.py 5908 2009-04-21 13:43:23Z goodger $
+# Author: Georg Brandl <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Python 2/3 compatibility definitions.
+
+This module currently provides the following helper symbols:
+
+* bytes (name of byte string type; str in 2.x, bytes in 3.x)
+* b (function converting a string literal to an ASCII byte string;
+ can be also used to convert a Unicode string into a byte string)
+* u_prefix (unicode repr prefix, 'u' in 2.x, nothing in 3.x)
+* BytesIO (a StringIO class that works with bytestrings)
+"""
+
+import sys
+
+if sys.version_info < (3,0):
+ b = bytes = str
+ u_prefix = 'u'
+ from StringIO import StringIO as BytesIO
+else:
+ import builtins
+ bytes = builtins.bytes
+ u_prefix = ''
+ def b(s):
+ if isinstance(s, str):
+ return s.encode('latin1')
+ elif isinstance(s, bytes):
+ return s
+ else:
+ raise TypeError("Invalid argument %r for b()" % (s,))
+ # using this hack since 2to3 "fixes" the relative import
+ # when using ``from io import BytesIO``
+ BytesIO = __import__('io').BytesIO
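On Python 3, the `b()` helper above encodes `str` via latin-1 and passes `bytes` through unchanged; anything else is rejected. A sketch mirroring just that branch, runnable on Python 3:

```python
def b(s):
    # Mirror of the Python-3 branch of _compat.b().
    if isinstance(s, str):
        return s.encode('latin1')
    elif isinstance(s, bytes):
        return s
    else:
        raise TypeError("Invalid argument %r for b()" % (s,))

print(b("abc"))    # → b'abc'
print(b(b"xyz"))   # → b'xyz'
```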
diff --git a/python/helpers/docutils/_string_template_compat.py b/python/helpers/docutils/_string_template_compat.py
new file mode 100644
index 0000000..38929c2
--- /dev/null
+++ b/python/helpers/docutils/_string_template_compat.py
@@ -0,0 +1,133 @@
+#!/usr/bin/env python
+# -*- coding: utf8 -*-
+
+# string_template_compat.py: string.Template for Python <= 2.4
+# ============================================================
+
+# This is just an excerpt of the standard string module to provide backwards
+# compatibility.
+
+import re as _re
+
+class _multimap:
+ """Helper class for combining multiple mappings.
+
+ Used by .{safe_,}substitute() to combine the mapping and keyword
+ arguments.
+ """
+ def __init__(self, primary, secondary):
+ self._primary = primary
+ self._secondary = secondary
+
+ def __getitem__(self, key):
+ try:
+ return self._primary[key]
+ except KeyError:
+ return self._secondary[key]
+
+
+class _TemplateMetaclass(type):
+ pattern = r"""
+ %(delim)s(?:
+ (?P<escaped>%(delim)s) | # Escape sequence of two delimiters
+ (?P<named>%(id)s) | # delimiter and a Python identifier
+ {(?P<braced>%(id)s)} | # delimiter and a braced identifier
+ (?P<invalid>) # Other ill-formed delimiter exprs
+ )
+ """
+
+ def __init__(cls, name, bases, dct):
+ super(_TemplateMetaclass, cls).__init__(name, bases, dct)
+ if 'pattern' in dct:
+ pattern = cls.pattern
+ else:
+ pattern = _TemplateMetaclass.pattern % {
+ 'delim' : _re.escape(cls.delimiter),
+ 'id' : cls.idpattern,
+ }
+ cls.pattern = _re.compile(pattern, _re.IGNORECASE | _re.VERBOSE)
+
+
+class Template:
+ """A string class for supporting $-substitutions."""
+ __metaclass__ = _TemplateMetaclass
+
+ delimiter = '$'
+ idpattern = r'[_a-z][_a-z0-9]*'
+
+ def __init__(self, template):
+ self.template = template
+
+ # Search for $$, $identifier, ${identifier}, and any bare $'s
+
+ def _invalid(self, mo):
+ i = mo.start('invalid')
+ lines = self.template[:i].splitlines(True)
+ if not lines:
+ colno = 1
+ lineno = 1
+ else:
+ colno = i - len(''.join(lines[:-1]))
+ lineno = len(lines)
+ raise ValueError('Invalid placeholder in string: line %d, col %d' %
+ (lineno, colno))
+
+ def substitute(self, *args, **kws):
+ if len(args) > 1:
+ raise TypeError('Too many positional arguments')
+ if not args:
+ mapping = kws
+ elif kws:
+ mapping = _multimap(kws, args[0])
+ else:
+ mapping = args[0]
+ # Helper function for .sub()
+ def convert(mo):
+ # Check the most common path first.
+ named = mo.group('named') or mo.group('braced')
+ if named is not None:
+ val = mapping[named]
+ # We use this idiom instead of str() because the latter will
+ # fail if val is a Unicode containing non-ASCII characters.
+ return '%s' % (val,)
+ if mo.group('escaped') is not None:
+ return self.delimiter
+ if mo.group('invalid') is not None:
+ self._invalid(mo)
+ raise ValueError('Unrecognized named group in pattern',
+ self.pattern)
+ return self.pattern.sub(convert, self.template)
+
+ def safe_substitute(self, *args, **kws):
+ if len(args) > 1:
+ raise TypeError('Too many positional arguments')
+ if not args:
+ mapping = kws
+ elif kws:
+ mapping = _multimap(kws, args[0])
+ else:
+ mapping = args[0]
+ # Helper function for .sub()
+ def convert(mo):
+ named = mo.group('named')
+ if named is not None:
+ try:
+ # We use this idiom instead of str() because the latter
+ # will fail if val is a Unicode containing non-ASCII
+ return '%s' % (mapping[named],)
+ except KeyError:
+ return self.delimiter + named
+ braced = mo.group('braced')
+ if braced is not None:
+ try:
+ return '%s' % (mapping[braced],)
+ except KeyError:
+ return self.delimiter + '{' + braced + '}'
+ if mo.group('escaped') is not None:
+ return self.delimiter
+ if mo.group('invalid') is not None:
+ return self.delimiter
+ raise ValueError('Unrecognized named group in pattern',
+ self.pattern)
+ return self.pattern.sub(convert, self.template)
+
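This module backports `string.Template` for old interpreters; the modern standard-library class behaves the same way, including the `$$` escape and the difference between `substitute` (raises `KeyError` on a missing name) and `safe_substitute` (leaves unknown placeholders intact). A sketch using the stdlib class directly:

```python
from string import Template

t = Template("Hi $name, balance: ${amount} ($$5 fee)")
print(t.substitute(name="Ada", amount="42"))
# → Hi Ada, balance: 42 ($5 fee)

# safe_substitute leaves unresolved placeholders in place instead of raising.
print(Template("$known and $unknown").safe_substitute(known="yes"))
# → yes and $unknown
```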
diff --git a/python/helpers/docutils/core.py b/python/helpers/docutils/core.py
new file mode 100644
index 0000000..68928fb
--- /dev/null
+++ b/python/helpers/docutils/core.py
@@ -0,0 +1,642 @@
+# $Id: core.py 6294 2010-03-26 10:47:00Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Calling the ``publish_*`` convenience functions (or instantiating a
+`Publisher` object) with component names will result in default
+behavior. For custom behavior (setting component options), create
+custom component objects first, and pass *them* to
+``publish_*``/`Publisher`. See `The Docutils Publisher`_.
+
+.. _The Docutils Publisher: http://docutils.sf.net/docs/api/publisher.html
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import pprint
+from docutils import __version__, __version_details__, SettingsSpec
+from docutils import frontend, io, utils, readers, writers
+from docutils.frontend import OptionParser
+from docutils.transforms import Transformer
+import docutils.readers.doctree
+
+
+class Publisher:
+
+ """
+ A facade encapsulating the high-level logic of a Docutils system.
+ """
+
+ def __init__(self, reader=None, parser=None, writer=None,
+ source=None, source_class=io.FileInput,
+ destination=None, destination_class=io.FileOutput,
+ settings=None):
+ """
+ Initial setup. If any of `reader`, `parser`, or `writer` are not
+ specified, the corresponding ``set_...`` method should be called with
+ a component name (`set_reader` sets the parser as well).
+ """
+
+ self.document = None
+ """The document tree (`docutils.nodes` objects)."""
+
+ self.reader = reader
+ """A `docutils.readers.Reader` instance."""
+
+ self.parser = parser
+ """A `docutils.parsers.Parser` instance."""
+
+ self.writer = writer
+ """A `docutils.writers.Writer` instance."""
+
+ for component in 'reader', 'parser', 'writer':
+ assert not isinstance(getattr(self, component), str), (
+ 'passed string "%s" as "%s" parameter; pass an instance, '
+ 'or use the "%s_name" parameter instead (in '
+ 'docutils.core.publish_* convenience functions).'
+ % (getattr(self, component), component, component))
+
+ self.source = source
+ """The source of input data, a `docutils.io.Input` instance."""
+
+ self.source_class = source_class
+ """The class for dynamically created source objects."""
+
+ self.destination = destination
+ """The destination for docutils output, a `docutils.io.Output`
+ instance."""
+
+ self.destination_class = destination_class
+ """The class for dynamically created destination objects."""
+
+ self.settings = settings
+ """An object containing Docutils settings as instance attributes.
+ Set by `self.process_command_line()` or `self.get_settings()`."""
+
+ def set_reader(self, reader_name, parser, parser_name):
+ """Set `self.reader` by name."""
+ reader_class = readers.get_reader_class(reader_name)
+ self.reader = reader_class(parser, parser_name)
+ self.parser = self.reader.parser
+
+ def set_writer(self, writer_name):
+ """Set `self.writer` by name."""
+ writer_class = writers.get_writer_class(writer_name)
+ self.writer = writer_class()
+
+ def set_components(self, reader_name, parser_name, writer_name):
+ if self.reader is None:
+ self.set_reader(reader_name, self.parser, parser_name)
+ if self.parser is None:
+ if self.reader.parser is None:
+ self.reader.set_parser(parser_name)
+ self.parser = self.reader.parser
+ if self.writer is None:
+ self.set_writer(writer_name)
+
+ def setup_option_parser(self, usage=None, description=None,
+ settings_spec=None, config_section=None,
+ **defaults):
+ if config_section:
+ if not settings_spec:
+ settings_spec = SettingsSpec()
+ settings_spec.config_section = config_section
+ parts = config_section.split()
+ if len(parts) > 1 and parts[-1] == 'application':
+ settings_spec.config_section_dependencies = ['applications']
+ #@@@ Add self.source & self.destination to components in future?
+ option_parser = OptionParser(
+ components=(self.parser, self.reader, self.writer, settings_spec),
+ defaults=defaults, read_config_files=1,
+ usage=usage, description=description)
+ return option_parser
+
+ def get_settings(self, usage=None, description=None,
+ settings_spec=None, config_section=None, **defaults):
+ """
+ Set and return default settings (overrides in `defaults` dict).
+
+ Set components first (`self.set_reader` & `self.set_writer`).
+ Explicitly setting `self.settings` disables command line option
+ processing from `self.publish()`.
+ """
+ option_parser = self.setup_option_parser(
+ usage, description, settings_spec, config_section, **defaults)
+ self.settings = option_parser.get_default_values()
+ return self.settings
+
+ def process_programmatic_settings(self, settings_spec,
+ settings_overrides,
+ config_section):
+ if self.settings is None:
+ defaults = (settings_overrides or {}).copy()
+ # Propagate exceptions by default when used programmatically:
+ defaults.setdefault('traceback', 1)
+ self.get_settings(settings_spec=settings_spec,
+ config_section=config_section,
+ **defaults)
+
+ def process_command_line(self, argv=None, usage=None, description=None,
+ settings_spec=None, config_section=None,
+ **defaults):
+ """
+ Pass an empty list to `argv` to avoid reading `sys.argv` (the
+ default).
+
+ Set components first (`self.set_reader` & `self.set_writer`).
+ """
+ option_parser = self.setup_option_parser(
+ usage, description, settings_spec, config_section, **defaults)
+ if argv is None:
+ argv = sys.argv[1:]
+ self.settings = option_parser.parse_args(argv)
+
+ def set_io(self, source_path=None, destination_path=None):
+ if self.source is None:
+ self.set_source(source_path=source_path)
+ if self.destination is None:
+ self.set_destination(destination_path=destination_path)
+
+ def set_source(self, source=None, source_path=None):
+ if source_path is None:
+ source_path = self.settings._source
+ else:
+ self.settings._source = source_path
+ self.source = self.source_class(
+ source=source, source_path=source_path,
+ encoding=self.settings.input_encoding)
+
+ def set_destination(self, destination=None, destination_path=None):
+ if destination_path is None:
+ destination_path = self.settings._destination
+ else:
+ self.settings._destination = destination_path
+ self.destination = self.destination_class(
+ destination=destination, destination_path=destination_path,
+ encoding=self.settings.output_encoding,
+ error_handler=self.settings.output_encoding_error_handler)
+
+ def apply_transforms(self):
+ self.document.transformer.populate_from_components(
+ (self.source, self.reader, self.reader.parser, self.writer,
+ self.destination))
+ self.document.transformer.apply_transforms()
+
+ def publish(self, argv=None, usage=None, description=None,
+ settings_spec=None, settings_overrides=None,
+ config_section=None, enable_exit_status=None):
+ """
+ Process command line options and arguments (if `self.settings` not
+ already set), run `self.reader` and then `self.writer`. Return
+ `self.writer`'s output.
+ """
+ exit = None
+ try:
+ if self.settings is None:
+ self.process_command_line(
+ argv, usage, description, settings_spec, config_section,
+ **(settings_overrides or {}))
+ self.set_io()
+ self.document = self.reader.read(self.source, self.parser,
+ self.settings)
+ self.apply_transforms()
+ output = self.writer.write(self.document, self.destination)
+ self.writer.assemble_parts()
+ except SystemExit, error:
+ exit = 1
+ exit_status = error.code
+ except Exception, error:
+ if not self.settings: # exception too early to report nicely
+ raise
+ if self.settings.traceback: # Propagate exceptions?
+ self.debugging_dumps()
+ raise
+ self.report_Exception(error)
+ exit = 1
+ exit_status = 1
+ self.debugging_dumps()
+ if (enable_exit_status and self.document
+ and (self.document.reporter.max_level
+ >= self.settings.exit_status_level)):
+ sys.exit(self.document.reporter.max_level + 10)
+ elif exit:
+ sys.exit(exit_status)
+ return output
+
+ def debugging_dumps(self):
+ if not self.document:
+ return
+ if self.settings.dump_settings:
+ print >>sys.stderr, '\n::: Runtime settings:'
+ print >>sys.stderr, pprint.pformat(self.settings.__dict__)
+ if self.settings.dump_internals:
+ print >>sys.stderr, '\n::: Document internals:'
+ print >>sys.stderr, pprint.pformat(self.document.__dict__)
+ if self.settings.dump_transforms:
+ print >>sys.stderr, '\n::: Transforms applied:'
+ print >>sys.stderr, (' (priority, transform class, '
+ 'pending node details, keyword args)')
+ print >>sys.stderr, pprint.pformat(
+ [(priority, '%s.%s' % (xclass.__module__, xclass.__name__),
+ pending and pending.details, kwargs)
+ for priority, xclass, pending, kwargs
+ in self.document.transformer.applied])
+ if self.settings.dump_pseudo_xml:
+ print >>sys.stderr, '\n::: Pseudo-XML:'
+ print >>sys.stderr, self.document.pformat().encode(
+ 'raw_unicode_escape')
+
+ def report_Exception(self, error):
+ if isinstance(error, utils.SystemMessage):
+ self.report_SystemMessage(error)
+ elif isinstance(error, UnicodeEncodeError):
+ self.report_UnicodeError(error)
+ else:
+ print >>sys.stderr, '%s: %s' % (error.__class__.__name__, error)
+ print >>sys.stderr, ("""\
+Exiting due to error. Use "--traceback" to diagnose.
+Please report errors to <[email protected]>.
+Include "--traceback" output, Docutils version (%s [%s]),
+Python version (%s), your OS type & version, and the
+command line used.""" % (__version__, __version_details__,
+ sys.version.split()[0]))
+
+ def report_SystemMessage(self, error):
+ print >>sys.stderr, ('Exiting due to level-%s (%s) system message.'
+ % (error.level,
+ utils.Reporter.levels[error.level]))
+
+ def report_UnicodeError(self, error):
+ data = error.object[error.start:error.end]
+ sys.stderr.write(
+ '%s: %s\n'
+ '\n'
+ 'The specified output encoding (%s) cannot\n'
+ 'handle all of the output.\n'
+ 'Try setting "--output-encoding-error-handler" to\n'
+ '\n'
+ '* "xmlcharrefreplace" (for HTML & XML output);\n'
+ ' the output will contain "%s" and should be usable.\n'
+ '* "backslashreplace" (for other output formats);\n'
+ ' look for "%s" in the output.\n'
+ '* "replace"; look for "?" in the output.\n'
+ '\n'
+ '"--output-encoding-error-handler" is currently set to "%s".\n'
+ '\n'
+ 'Exiting due to error. Use "--traceback" to diagnose.\n'
+ 'If the advice above doesn\'t eliminate the error,\n'
+ 'please report it to <[email protected]>.\n'
+ 'Include "--traceback" output, Docutils version (%s),\n'
+ 'Python version (%s), your OS type & version, and the\n'
+ 'command line used.\n'
+ % (error.__class__.__name__, error,
+ self.settings.output_encoding,
+ data.encode('ascii', 'xmlcharrefreplace'),
+ data.encode('ascii', 'backslashreplace'),
+ self.settings.output_encoding_error_handler,
+ __version__, sys.version.split()[0]))
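The three codec error handlers recommended in the message above can be compared directly in plain Python. This sketch is independent of Docutils; it only exercises the standard `codecs` machinery on a character that ASCII cannot represent (written for Python 3, while the surrounding module is Python 2, but the handler behavior is the same):

```python
# Compare the suggested codec error handlers on a non-ASCII character.
data = 'caf\u00e9'  # u'caf\xe9' in the Python 2 code above

xml = data.encode('ascii', 'xmlcharrefreplace')     # numeric char reference
backslash = data.encode('ascii', 'backslashreplace')  # Python escape sequence
replaced = data.encode('ascii', 'replace')            # question mark

print(xml, backslash, replaced)
```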
+
+default_usage = '%prog [options] [<source> [<destination>]]'
+default_description = ('Reads from <source> (default is stdin) and writes to '
+ '<destination> (default is stdout). See '
+ '<http://docutils.sf.net/docs/user/config.html> for '
+ 'the full reference.')
+
+def publish_cmdline(reader=None, reader_name='standalone',
+ parser=None, parser_name='restructuredtext',
+ writer=None, writer_name='pseudoxml',
+ settings=None, settings_spec=None,
+ settings_overrides=None, config_section=None,
+ enable_exit_status=1, argv=None,
+ usage=default_usage, description=default_description):
+ """
+ Set up & run a `Publisher` for command-line-based file I/O (input and
+ output file paths taken automatically from the command line). Return the
+ encoded string output also.
+
+ Parameters: see `publish_programmatically` for the remainder.
+
+ - `argv`: Command-line argument list to use instead of ``sys.argv[1:]``.
+ - `usage`: Usage string, output if there's a problem parsing the command
+ line.
+ - `description`: Program description, output for the "--help" option
+ (along with command-line option descriptions).
+ """
+ pub = Publisher(reader, parser, writer, settings=settings)
+ pub.set_components(reader_name, parser_name, writer_name)
+ output = pub.publish(
+ argv, usage, description, settings_spec, settings_overrides,
+ config_section=config_section, enable_exit_status=enable_exit_status)
+ return output
+
+def publish_file(source=None, source_path=None,
+ destination=None, destination_path=None,
+ reader=None, reader_name='standalone',
+ parser=None, parser_name='restructuredtext',
+ writer=None, writer_name='pseudoxml',
+ settings=None, settings_spec=None, settings_overrides=None,
+ config_section=None, enable_exit_status=None):
+ """
+ Set up & run a `Publisher` for programmatic use with file-like I/O.
+ Return the encoded string output also.
+
+ Parameters: see `publish_programmatically`.
+ """
+ output, pub = publish_programmatically(
+ source_class=io.FileInput, source=source, source_path=source_path,
+ destination_class=io.FileOutput,
+ destination=destination, destination_path=destination_path,
+ reader=reader, reader_name=reader_name,
+ parser=parser, parser_name=parser_name,
+ writer=writer, writer_name=writer_name,
+ settings=settings, settings_spec=settings_spec,
+ settings_overrides=settings_overrides,
+ config_section=config_section,
+ enable_exit_status=enable_exit_status)
+ return output
+
+def publish_string(source, source_path=None, destination_path=None,
+ reader=None, reader_name='standalone',
+ parser=None, parser_name='restructuredtext',
+ writer=None, writer_name='pseudoxml',
+ settings=None, settings_spec=None,
+ settings_overrides=None, config_section=None,
+ enable_exit_status=None):
+ """
+ Set up & run a `Publisher` for programmatic use with string I/O. Return
+ the encoded string or Unicode string output.
+
+ For encoded string output, be sure to set the 'output_encoding' setting to
+ the desired encoding. Set it to 'unicode' for unencoded Unicode string
+ output. Here's one way::
+
+ publish_string(..., settings_overrides={'output_encoding': 'unicode'})
+
+ Similarly for Unicode string input (`source`)::
+
+ publish_string(..., settings_overrides={'input_encoding': 'unicode'})
+
+ Parameters: see `publish_programmatically`.
+ """
+ output, pub = publish_programmatically(
+ source_class=io.StringInput, source=source, source_path=source_path,
+ destination_class=io.StringOutput,
+ destination=None, destination_path=destination_path,
+ reader=reader, reader_name=reader_name,
+ parser=parser, parser_name=parser_name,
+ writer=writer, writer_name=writer_name,
+ settings=settings, settings_spec=settings_spec,
+ settings_overrides=settings_overrides,
+ config_section=config_section,
+ enable_exit_status=enable_exit_status)
+ return output
+
+def publish_parts(source, source_path=None, source_class=io.StringInput,
+ destination_path=None,
+ reader=None, reader_name='standalone',
+ parser=None, parser_name='restructuredtext',
+ writer=None, writer_name='pseudoxml',
+ settings=None, settings_spec=None,
+ settings_overrides=None, config_section=None,
+ enable_exit_status=None):
+ """
+ Set up & run a `Publisher`, and return a dictionary of document parts.
+ Dictionary keys are the names of parts, and values are Unicode strings;
+ encoding is up to the client. For programmatic use with string I/O.
+
+ For encoded string input, be sure to set the 'input_encoding' setting to
+ the desired encoding. Set it to 'unicode' for unencoded Unicode string
+ input. Here's how::
+
+ publish_parts(..., settings_overrides={'input_encoding': 'unicode'})
+
+ Parameters: see `publish_programmatically`.
+ """
+ output, pub = publish_programmatically(
+ source=source, source_path=source_path, source_class=source_class,
+ destination_class=io.StringOutput,
+ destination=None, destination_path=destination_path,
+ reader=reader, reader_name=reader_name,
+ parser=parser, parser_name=parser_name,
+ writer=writer, writer_name=writer_name,
+ settings=settings, settings_spec=settings_spec,
+ settings_overrides=settings_overrides,
+ config_section=config_section,
+ enable_exit_status=enable_exit_status)
+ return pub.writer.parts
+
+def publish_doctree(source, source_path=None,
+ source_class=io.StringInput,
+ reader=None, reader_name='standalone',
+ parser=None, parser_name='restructuredtext',
+ settings=None, settings_spec=None,
+ settings_overrides=None, config_section=None,
+ enable_exit_status=None):
+ """
+ Set up & run a `Publisher` for programmatic use with string I/O.
+ Return the document tree.
+
+ For encoded string input, be sure to set the 'input_encoding' setting to
+ the desired encoding. Set it to 'unicode' for unencoded Unicode string
+ input. Here's one way::
+
+ publish_doctree(..., settings_overrides={'input_encoding': 'unicode'})
+
+ Parameters: see `publish_programmatically`.
+ """
+ pub = Publisher(reader=reader, parser=parser, writer=None,
+ settings=settings,
+ source_class=source_class,
+ destination_class=io.NullOutput)
+ pub.set_components(reader_name, parser_name, 'null')
+ pub.process_programmatic_settings(
+ settings_spec, settings_overrides, config_section)
+ pub.set_source(source, source_path)
+ pub.set_destination(None, None)
+ output = pub.publish(enable_exit_status=enable_exit_status)
+ return pub.document
+
+def publish_from_doctree(document, destination_path=None,
+ writer=None, writer_name='pseudoxml',
+ settings=None, settings_spec=None,
+ settings_overrides=None, config_section=None,
+ enable_exit_status=None):
+ """
+ Set up & run a `Publisher` to render from an existing document
+ tree data structure, for programmatic use with string I/O. Return
+ the encoded string output.
+
+ Note that document.settings is overridden; if you want to use the settings
+ of the original `document`, pass settings=document.settings.
+
+ Also, new document.transformer and document.reporter objects are
+ generated.
+
+ For encoded string output, be sure to set the 'output_encoding' setting to
+ the desired encoding. Set it to 'unicode' for unencoded Unicode string
+ output. Here's one way::
+
+ publish_from_doctree(
+ ..., settings_overrides={'output_encoding': 'unicode'})
+
+ Parameters: `document` is a `docutils.nodes.document` object, an existing
+ document tree.
+
+ Other parameters: see `publish_programmatically`.
+ """
+ reader = docutils.readers.doctree.Reader(parser_name='null')
+ pub = Publisher(reader, None, writer,
+ source=io.DocTreeInput(document),
+ destination_class=io.StringOutput, settings=settings)
+ if not writer and writer_name:
+ pub.set_writer(writer_name)
+ pub.process_programmatic_settings(
+ settings_spec, settings_overrides, config_section)
+ pub.set_destination(None, destination_path)
+ return pub.publish(enable_exit_status=enable_exit_status)
+
+def publish_cmdline_to_binary(reader=None, reader_name='standalone',
+ parser=None, parser_name='restructuredtext',
+ writer=None, writer_name='pseudoxml',
+ settings=None, settings_spec=None,
+ settings_overrides=None, config_section=None,
+ enable_exit_status=1, argv=None,
+ usage=default_usage, description=default_description,
+ destination=None, destination_class=io.BinaryFileOutput
+ ):
+ """
+ Set up & run a `Publisher` for command-line-based file I/O (input and
+ output file paths taken automatically from the command line). Return the
+ encoded string output also.
+
+ This is just like publish_cmdline, except that it uses
+ io.BinaryFileOutput instead of io.FileOutput.
+
+ Parameters: see `publish_programmatically` for the remainder.
+
+ - `argv`: Command-line argument list to use instead of ``sys.argv[1:]``.
+ - `usage`: Usage string, output if there's a problem parsing the command
+ line.
+ - `description`: Program description, output for the "--help" option
+ (along with command-line option descriptions).
+ """
+ pub = Publisher(reader, parser, writer, settings=settings,
+ destination_class=destination_class)
+ pub.set_components(reader_name, parser_name, writer_name)
+ output = pub.publish(
+ argv, usage, description, settings_spec, settings_overrides,
+ config_section=config_section, enable_exit_status=enable_exit_status)
+ return output
+
+def publish_programmatically(source_class, source, source_path,
+ destination_class, destination, destination_path,
+ reader, reader_name,
+ parser, parser_name,
+ writer, writer_name,
+ settings, settings_spec,
+ settings_overrides, config_section,
+ enable_exit_status):
+ """
+ Set up & run a `Publisher` for custom programmatic use. Return the
+ encoded string output and the Publisher object.
+
+ Applications should not need to call this function directly. If it does
+ seem to be necessary to call this function directly, please write to the
+ Docutils-develop mailing list
+ <http://docutils.sf.net/docs/user/mailing-lists.html#docutils-develop>.
+
+ Parameters:
+
+ * `source_class` **required**: The class for dynamically created source
+ objects. Typically `io.FileInput` or `io.StringInput`.
+
+ * `source`: Type depends on `source_class`:
+
+ - If `source_class` is `io.FileInput`: Either a file-like object
+ (must have 'read' and 'close' methods), or ``None``
+ (`source_path` is opened). If neither `source` nor
+ `source_path` are supplied, `sys.stdin` is used.
+
+ - If `source_class` is `io.StringInput` **required**: The input
+ string, either an encoded 8-bit string (set the
+ 'input_encoding' setting to the correct encoding) or a Unicode
+ string (set the 'input_encoding' setting to 'unicode').
+
+ * `source_path`: Type depends on `source_class`:
+
+ - `io.FileInput`: Path to the input file, opened if no `source`
+ supplied.
+
+ - `io.StringInput`: Optional. Path to the file or object that produced
+ `source`. Only used for diagnostic output.
+
+ * `destination_class` **required**: The class for dynamically created
+ destination objects. Typically `io.FileOutput` or `io.StringOutput`.
+
+ * `destination`: Type depends on `destination_class`:
+
+ - `io.FileOutput`: Either a file-like object (must have 'write' and
+ 'close' methods), or ``None`` (`destination_path` is opened). If
+ neither `destination` nor `destination_path` are supplied,
+ `sys.stdout` is used.
+
+ - `io.StringOutput`: Not used; pass ``None``.
+
+ * `destination_path`: Type depends on `destination_class`:
+
+ - `io.FileOutput`: Path to the output file. Opened if no `destination`
+ supplied.
+
+ - `io.StringOutput`: Path to the file or object which will receive the
+ output; optional. Used for determining relative paths (stylesheets,
+ source links, etc.).
+
+ * `reader`: A `docutils.readers.Reader` object.
+
+ * `reader_name`: Name or alias of the Reader class to be instantiated if
+ no `reader` supplied.
+
+ * `parser`: A `docutils.parsers.Parser` object.
+
+ * `parser_name`: Name or alias of the Parser class to be instantiated if
+ no `parser` supplied.
+
+ * `writer`: A `docutils.writers.Writer` object.
+
+ * `writer_name`: Name or alias of the Writer class to be instantiated if
+ no `writer` supplied.
+
+ * `settings`: A runtime settings (`docutils.frontend.Values`) object, for
+ dotted-attribute access to runtime settings. It's the end result of the
+ `SettingsSpec`, config file, and option processing. If `settings` is
+ passed, it's assumed to be complete and no further setting/config/option
+ processing is done.
+
+ * `settings_spec`: A `docutils.SettingsSpec` subclass or object. Provides
+ extra application-specific settings definitions independently of
+ components. In other words, the application becomes a component, and
+ its settings data is processed along with that of the other components.
+ Used only if no `settings` specified.
+
+ * `settings_overrides`: A dictionary containing application-specific
+ settings defaults that override the defaults of other components.
+ Used only if no `settings` specified.
+
+ * `config_section`: A string, the name of the configuration file section
+ for this application. Overrides the ``config_section`` attribute
+ defined by `settings_spec`. Used only if no `settings` specified.
+
+ * `enable_exit_status`: Boolean; enable exit status at end of processing?
+ """
+ pub = Publisher(reader, parser, writer, settings=settings,
+ source_class=source_class,
+ destination_class=destination_class)
+ pub.set_components(reader_name, parser_name, writer_name)
+ pub.process_programmatic_settings(
+ settings_spec, settings_overrides, config_section)
+ pub.set_source(source, source_path)
+ pub.set_destination(destination, destination_path)
+ output = pub.publish(enable_exit_status=enable_exit_status)
+ return output, pub
diff --git a/python/helpers/docutils/docutils.conf b/python/helpers/docutils/docutils.conf
new file mode 100644
index 0000000..cdce8d6
--- /dev/null
+++ b/python/helpers/docutils/docutils.conf
@@ -0,0 +1,5 @@
+# This configuration file is to prevent tools/buildhtml.py from
+# processing text files in and below this directory.
+
+[buildhtml application]
+prune: .
diff --git a/python/helpers/docutils/examples.py b/python/helpers/docutils/examples.py
new file mode 100644
index 0000000..68980f3
--- /dev/null
+++ b/python/helpers/docutils/examples.py
@@ -0,0 +1,96 @@
+# $Id: examples.py 4800 2006-11-12 18:02:01Z goodger $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This module contains practical examples of Docutils client code.
+
+Importing this module from client code is not recommended; its contents are
+subject to change in future Docutils releases. Instead, it is recommended
+that you copy and paste the parts you need into your own code, modifying as
+necessary.
+"""
+
+from docutils import core, io
+
+
+def html_parts(input_string, source_path=None, destination_path=None,
+ input_encoding='unicode', doctitle=1, initial_header_level=1):
+ """
+ Given an input string, returns a dictionary of HTML document parts.
+
+ Dictionary keys are the names of parts, and values are Unicode strings;
+ encoding is up to the client.
+
+ Parameters:
+
+ - `input_string`: A multi-line text string; required.
+ - `source_path`: Path to the source file or object. Optional, but useful
+ for diagnostic output (system messages).
+ - `destination_path`: Path to the file or object which will receive the
+ output; optional. Used for determining relative paths (stylesheets,
+ source links, etc.).
+ - `input_encoding`: The encoding of `input_string`. If it is an encoded
+ 8-bit string, provide the correct encoding. If it is a Unicode string,
+ use "unicode", the default.
+ - `doctitle`: Enable (1, the default) or disable (0) the promotion of a
+ lone top-level section title to document title (and of a subsequent
+ section title to document subtitle).
+ - `initial_header_level`: The initial level for header elements (e.g. 1
+ for "<h1>").
+ """
+ overrides = {'input_encoding': input_encoding,
+ 'doctitle_xform': doctitle,
+ 'initial_header_level': initial_header_level}
+ parts = core.publish_parts(
+ source=input_string, source_path=source_path,
+ destination_path=destination_path,
+ writer_name='html', settings_overrides=overrides)
+ return parts
+
+def html_body(input_string, source_path=None, destination_path=None,
+ input_encoding='unicode', output_encoding='unicode',
+ doctitle=1, initial_header_level=1):
+ """
+ Given an input string, returns an HTML fragment as a string.
+
+ The return value is the contents of the <body> element.
+
+ Parameters (see `html_parts()` for the remainder):
+
+ - `output_encoding`: The desired encoding of the output. If a Unicode
+ string is desired, use the default value of "unicode".
+ """
+ parts = html_parts(
+ input_string=input_string, source_path=source_path,
+ destination_path=destination_path,
+ input_encoding=input_encoding, doctitle=doctitle,
+ initial_header_level=initial_header_level)
+ fragment = parts['html_body']
+ if output_encoding != 'unicode':
+ fragment = fragment.encode(output_encoding)
+ return fragment
+
+def internals(input_string, source_path=None, destination_path=None,
+ input_encoding='unicode', settings_overrides=None):
+ """
+ Return the document tree and publisher, for exploring Docutils internals.
+
+ Parameters: see `html_parts()`.
+ """
+ if settings_overrides:
+ overrides = settings_overrides.copy()
+ else:
+ overrides = {}
+ overrides['input_encoding'] = input_encoding
+ output, pub = core.publish_programmatically(
+ source_class=io.StringInput, source=input_string,
+ source_path=source_path,
+ destination_class=io.NullOutput, destination=None,
+ destination_path=destination_path,
+ reader=None, reader_name='standalone',
+ parser=None, parser_name='restructuredtext',
+ writer=None, writer_name='null',
+ settings=None, settings_spec=None, settings_overrides=overrides,
+ config_section=None, enable_exit_status=None)
+ return pub.writer.document, pub
diff --git a/python/helpers/docutils/frontend.py b/python/helpers/docutils/frontend.py
new file mode 100644
index 0000000..f274150
--- /dev/null
+++ b/python/helpers/docutils/frontend.py
@@ -0,0 +1,786 @@
+# $Id: frontend.py 6205 2009-11-30 08:11:08Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Command-line and common processing for Docutils front-end tools.
+
+Exports the following classes:
+
+* `OptionParser`: Standard Docutils command-line processing.
+* `Option`: Customized version of `optparse.Option`; validation support.
+* `Values`: Runtime settings; objects are simple structs
+ (``object.attribute``). Supports cumulative list settings (attributes).
+* `ConfigParser`: Standard Docutils config file processing.
+
+Also exports the following functions:
+
+* Option callbacks: `store_multiple`, `read_config_file`.
+* Setting validators: `validate_encoding`,
+ `validate_encoding_error_handler`,
+ `validate_encoding_and_error_handler`, `validate_boolean`,
+ `validate_threshold`, `validate_colon_separated_string_list`,
+ `validate_dependency_file`.
+* `make_paths_absolute`.
+* SettingSpec manipulation: `filter_settings_spec`.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import os
+import os.path
+import sys
+import warnings
+import ConfigParser as CP
+import codecs
+import docutils
+import docutils.utils
+import docutils.nodes
+import optparse
+from optparse import SUPPRESS_HELP
+
+
+def store_multiple(option, opt, value, parser, *args, **kwargs):
+ """
+ Store multiple values in `parser.values`. (Option callback.)
+
+ Store `None` for each attribute named in `args`, and store the value for
+ each key (attribute name) in `kwargs`.
+ """
+ for attribute in args:
+ setattr(parser.values, attribute, None)
+ for key, value in kwargs.items():
+ setattr(parser.values, key, value)
+
+def read_config_file(option, opt, value, parser):
+ """
+ Read a configuration file during option processing. (Option callback.)
+ """
+ try:
+ new_settings = parser.get_config_file_settings(value)
+ except ValueError, error:
+ parser.error(error)
+ parser.values.update(new_settings, parser)
+
+def validate_encoding(setting, value, option_parser,
+ config_parser=None, config_section=None):
+ try:
+ codecs.lookup(value)
+ except LookupError:
+ raise (LookupError('setting "%s": unknown encoding: "%s"'
+ % (setting, value)),
+ None, sys.exc_info()[2])
+ return value
+
+def validate_encoding_error_handler(setting, value, option_parser,
+ config_parser=None, config_section=None):
+ try:
+ codecs.lookup_error(value)
+ except AttributeError: # TODO: remove (only needed prior to Python 2.3)
+ if value not in ('strict', 'ignore', 'replace', 'xmlcharrefreplace'):
+ raise (LookupError(
+ 'unknown encoding error handler: "%s" (choices: '
+ '"strict", "ignore", "replace", or "xmlcharrefreplace")' % value),
+ None, sys.exc_info()[2])
+ except LookupError:
+ raise (LookupError(
+ 'unknown encoding error handler: "%s" (choices: '
+ '"strict", "ignore", "replace", "backslashreplace", '
+ '"xmlcharrefreplace", and possibly others; see documentation for '
+ 'the Python ``codecs`` module)' % value),
+ None, sys.exc_info()[2])
+ return value
+
+def validate_encoding_and_error_handler(
+ setting, value, option_parser, config_parser=None, config_section=None):
+ """
+ Side-effect: if an error handler is included in the value, it is inserted
+ into the appropriate place as if it were a separate setting/option.
+ """
+ if ':' in value:
+ encoding, handler = value.split(':')
+ validate_encoding_error_handler(
+ setting + '_error_handler', handler, option_parser,
+ config_parser, config_section)
+ if config_parser:
+ config_parser.set(config_section, setting + '_error_handler',
+ handler)
+ else:
+ setattr(option_parser.values, setting + '_error_handler', handler)
+ else:
+ encoding = value
+ validate_encoding(setting, encoding, option_parser,
+ config_parser, config_section)
+ return encoding
+
+def validate_boolean(setting, value, option_parser,
+ config_parser=None, config_section=None):
+ if isinstance(value, unicode):
+ try:
+ return option_parser.booleans[value.strip().lower()]
+ except KeyError:
+ raise (LookupError('unknown boolean value: "%s"' % value),
+ None, sys.exc_info()[2])
+ return value
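A standalone sketch of the lookup performed by `validate_boolean`; the table mirrors the `booleans` attribute defined on `OptionParser` further down, and Python 3 `str` stands in for the Python 2 `unicode` check:

```python
# Boolean config values are normalized (strip + lowercase) and resolved
# through a lookup table; non-string values pass through unchanged.
booleans = {'1': 1, 'on': 1, 'yes': 1, 'true': 1,
            '0': 0, 'off': 0, 'no': 0, 'false': 0, '': 0}

def to_boolean(value):
    if isinstance(value, str):
        return booleans[value.strip().lower()]  # KeyError if unknown
    return value

print(to_boolean('  Yes '), to_boolean('off'), to_boolean(0))
```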
+
+def validate_nonnegative_int(setting, value, option_parser,
+ config_parser=None, config_section=None):
+ value = int(value)
+ if value < 0:
+ raise ValueError('negative value; must be positive or zero')
+ return value
+
+def validate_threshold(setting, value, option_parser,
+ config_parser=None, config_section=None):
+ try:
+ return int(value)
+ except ValueError:
+ try:
+ return option_parser.thresholds[value.lower()]
+ except (KeyError, AttributeError):
+ raise (LookupError('unknown threshold: %r.' % value),
+ None, sys.exc_info()[2])
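The threshold resolution above can be sketched standalone: numeric strings become ints, and symbolic names resolve through the `thresholds` table defined on `OptionParser` below (this is a simplified reimplementation, not the Docutils function itself):

```python
# Resolve a --report/--halt threshold given either as a number or a name.
thresholds = {'info': 1, 'warning': 2, 'error': 3, 'severe': 4, 'none': 5}

def resolve_threshold(value):
    try:
        return int(value)
    except ValueError:
        return thresholds[value.lower()]  # KeyError for unknown names

print(resolve_threshold('3'), resolve_threshold('Severe'))
```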
+
+def validate_colon_separated_string_list(
+ setting, value, option_parser, config_parser=None, config_section=None):
+ if isinstance(value, unicode):
+ value = value.split(':')
+ else:
+ last = value.pop()
+ value.extend(last.split(':'))
+ return value
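The two branches of `validate_colon_separated_string_list` can be shown in isolation: a fresh string is split on `:`, while for a list built up by a cumulative option only the newest element is re-split, preserving earlier entries (standalone sketch; `str` replaces the Python 2 `unicode` check):

```python
# Split colon-separated settings, handling both the string and list cases.
def split_colon_list(value):
    if isinstance(value, str):
        return value.split(':')
    last = value.pop()
    value.extend(last.split(':'))
    return value

print(split_colon_list('a:b:c'))
print(split_colon_list(['a', 'b:c']))
```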
+
+def validate_url_trailing_slash(
+ setting, value, option_parser, config_parser=None, config_section=None):
+ if not value:
+ return './'
+ elif value.endswith('/'):
+ return value
+ else:
+ return value + '/'
+
+def validate_dependency_file(setting, value, option_parser,
+ config_parser=None, config_section=None):
+ try:
+ return docutils.utils.DependencyList(value)
+ except IOError:
+ return docutils.utils.DependencyList(None)
+
+def validate_strip_class(setting, value, option_parser,
+ config_parser=None, config_section=None):
+ if config_parser: # validate all values
+ class_values = value
+ else: # just validate the latest value
+ class_values = [value[-1]]
+ for class_value in class_values:
+ normalized = docutils.nodes.make_id(class_value)
+ if class_value != normalized:
+ raise ValueError('invalid class value %r (perhaps %r?)'
+ % (class_value, normalized))
+ return value
+
+def make_paths_absolute(pathdict, keys, base_path=None):
+ """
+ Interpret filesystem path settings relative to the `base_path` given.
+
+ Paths are values in `pathdict` whose keys are in `keys`. Get `keys` from
+ `OptionParser.relative_path_settings`.
+ """
+ if base_path is None:
+ base_path = os.getcwd()
+ for key in keys:
+ if key in pathdict:
+ value = pathdict[key]
+ if isinstance(value, list):
+ value = [make_one_path_absolute(base_path, path)
+ for path in value]
+ elif value:
+ value = make_one_path_absolute(base_path, value)
+ pathdict[key] = value
+
+def make_one_path_absolute(base_path, path):
+ return os.path.abspath(os.path.join(base_path, path))
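The effect of `make_paths_absolute` can be sketched over a hypothetical settings dict; `'stylesheet_path'` stands in here for a key from `OptionParser.relative_path_settings`, and the base directory is an assumption for illustration:

```python
import os.path

# Resolve relative path settings against an explicit base directory,
# leaving unrelated settings untouched.
settings = {'stylesheet_path': ['style.css'], 'title': 'untouched'}
base_path = os.path.join(os.sep, 'tmp', 'project')  # assumed base directory

for key in ('stylesheet_path',):
    value = settings[key]
    if isinstance(value, list):
        settings[key] = [os.path.abspath(os.path.join(base_path, p))
                         for p in value]

print(settings['stylesheet_path'])
```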
+
+def filter_settings_spec(settings_spec, *exclude, **replace):
+ """Return a copy of `settings_spec` excluding/replacing some settings.
+
+ `settings_spec` is a tuple of configuration settings with a structure
+ described for docutils.SettingsSpec.settings_spec.
+
+ Optional positional arguments are names of to-be-excluded settings.
+ Keyword arguments are option specification replacements.
+ (See the html4strict writer for an example.)
+ """
+ settings = list(settings_spec)
+ # every third item is a sequence of option tuples
+ for i in range(2, len(settings), 3):
+ newopts = []
+ for opt_spec in settings[i]:
+ # opt_spec is ("<help>", [<option strings>], {<keyword args>})
+ opt_name = [opt_string[2:].replace('-', '_')
+ for opt_string in opt_spec[1]
+ if opt_string.startswith('--')
+ ][0]
+ if opt_name in exclude:
+ continue
+ if opt_name in replace.keys():
+ newopts.append(replace[opt_name])
+ else:
+ newopts.append(opt_spec)
+ settings[i] = tuple(newopts)
+ return tuple(settings)
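The exclusion/replacement walk above can be demonstrated on a toy spec using the `('<help>', [<option strings>], {<keyword args>})` structure the docstring describes; the option names here are hypothetical, and `filter_spec` is a simplified standalone reimplementation:

```python
# Toy settings_spec: title, config section, then a tuple of option specs.
spec = ('General Options', None,
        (('Enable foo.', ['--foo'], {}),
         ('Enable bar.', ['--bar'], {}),
         ('Enable baz.', ['--baz'], {})))

def filter_spec(settings_spec, *exclude, **replace):
    settings = list(settings_spec)
    for i in range(2, len(settings), 3):  # every third item: option tuples
        newopts = []
        for opt_spec in settings[i]:
            opt_name = [s[2:].replace('-', '_')
                        for s in opt_spec[1] if s.startswith('--')][0]
            if opt_name in exclude:
                continue
            newopts.append(replace.get(opt_name, opt_spec))
        settings[i] = tuple(newopts)
    return tuple(settings)

filtered = filter_spec(spec, 'bar',
                       baz=('Enable baz (reworded).', ['--baz'], {}))
print([opt[1] for opt in filtered[2]])
```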
+
+
+class Values(optparse.Values):
+
+ """
+ Updates list attributes by extension rather than by replacement.
+ Works in conjunction with the `OptionParser.lists` instance attribute.
+ """
+
+ def __init__(self, *args, **kwargs):
+ optparse.Values.__init__(self, *args, **kwargs)
+ if (not hasattr(self, 'record_dependencies')
+ or self.record_dependencies is None):
+ # Set up dependency list, in case it is needed.
+ self.record_dependencies = docutils.utils.DependencyList()
+
+ def update(self, other_dict, option_parser):
+ if isinstance(other_dict, Values):
+ other_dict = other_dict.__dict__
+ other_dict = other_dict.copy()
+ for setting in option_parser.lists.keys():
+ if (hasattr(self, setting) and setting in other_dict):
+ value = getattr(self, setting)
+ if value:
+ value += other_dict[setting]
+ del other_dict[setting]
+ self._update_loose(other_dict)
+
+ def copy(self):
+ """Return a shallow copy of `self`."""
+ return self.__class__(defaults=self.__dict__)
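The extend-rather-than-replace merge performed by `Values.update` can be sketched with plain dicts (a simplified standalone model; the real class works on `optparse.Values` attributes and the `OptionParser.lists` table):

```python
# Merge incoming settings into current ones: settings named in `lists`
# accumulate by list extension (when the current value is truthy);
# everything else is replaced outright.
def update_settings(current, incoming, lists):
    incoming = dict(incoming)
    for setting in lists:
        if setting in current and setting in incoming and current[setting]:
            current[setting] = current[setting] + incoming[setting]
            del incoming[setting]
    current.update(incoming)
    return current

merged = update_settings({'strip_classes': ['a'], 'report_level': 2},
                         {'strip_classes': ['b'], 'report_level': 3},
                         lists=('strip_classes',))
print(merged)
```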
+
+
+class Option(optparse.Option):
+
+ ATTRS = optparse.Option.ATTRS + ['validator', 'overrides']
+
+ def process(self, opt, value, values, parser):
+ """
+ Call the validator function on applicable settings and
+ evaluate the 'overrides' option.
+ Extends `optparse.Option.process`.
+ """
+ result = optparse.Option.process(self, opt, value, values, parser)
+ setting = self.dest
+ if setting:
+ if self.validator:
+ value = getattr(values, setting)
+ try:
+ new_value = self.validator(setting, value, parser)
+ except Exception, error:
+ raise (optparse.OptionValueError(
+ 'Error in option "%s":\n %s: %s'
+ % (opt, error.__class__.__name__, error)),
+ None, sys.exc_info()[2])
+ setattr(values, setting, new_value)
+ if self.overrides:
+ setattr(values, self.overrides, None)
+ return result
+
+
+class OptionParser(optparse.OptionParser, docutils.SettingsSpec):
+
+ """
+ Parser for command-line and library use. The `settings_spec`
+ specification here and in other Docutils components are merged to build
+ the set of command-line options and runtime settings for this process.
+
+ Common settings (defined below) and component-specific settings must not
+ conflict. Short options are reserved for common settings, and components
+ are restricted to using long options.
+ """
+
+ standard_config_files = [
+ '/etc/docutils.conf', # system-wide
+ './docutils.conf', # project-specific
+ '~/.docutils'] # user-specific
+ """Docutils configuration files, using ConfigParser syntax. Filenames
+ will be tilde-expanded later. Later files override earlier ones."""
+
+ threshold_choices = 'info 1 warning 2 error 3 severe 4 none 5'.split()
+ """Possible inputs for for --report and --halt threshold values."""
+
+ thresholds = {'info': 1, 'warning': 2, 'error': 3, 'severe': 4, 'none': 5}
+ """Lookup table for --report and --halt threshold values."""
+
+ booleans={'1': 1, 'on': 1, 'yes': 1, 'true': 1,
+ '0': 0, 'off': 0, 'no': 0, 'false': 0, '': 0}
+ """Lookup table for boolean configuration file settings."""
+
+ try:
+ default_error_encoding = sys.stderr.encoding or 'ascii'
+ except AttributeError:
+ default_error_encoding = 'ascii'
+
+ default_error_encoding_error_handler = 'backslashreplace'
+
+ settings_spec = (
+ 'General Docutils Options',
+ None,
+ (('Specify the document title as metadata.',
+ ['--title'], {}),
+ ('Include a "Generated by Docutils" credit and link.',
+ ['--generator', '-g'], {'action': 'store_true',
+ 'validator': validate_boolean}),
+ ('Do not include a generator credit.',
+ ['--no-generator'], {'action': 'store_false', 'dest': 'generator'}),
+ ('Include the date at the end of the document (UTC).',
+ ['--date', '-d'], {'action': 'store_const', 'const': '%Y-%m-%d',
+ 'dest': 'datestamp'}),
+ ('Include the time & date (UTC).',
+ ['--time', '-t'], {'action': 'store_const',
+ 'const': '%Y-%m-%d %H:%M UTC',
+ 'dest': 'datestamp'}),
+ ('Do not include a datestamp of any kind.',
+ ['--no-datestamp'], {'action': 'store_const', 'const': None,
+ 'dest': 'datestamp'}),
+ ('Include a "View document source" link.',
+ ['--source-link', '-s'], {'action': 'store_true',
+ 'validator': validate_boolean}),
+ ('Use <URL> for a source link; implies --source-link.',
+ ['--source-url'], {'metavar': '<URL>'}),
+ ('Do not include a "View document source" link.',
+ ['--no-source-link'],
+ {'action': 'callback', 'callback': store_multiple,
+ 'callback_args': ('source_link', 'source_url')}),
+ ('Link from section headers to TOC entries. (default)',
+ ['--toc-entry-backlinks'],
+ {'dest': 'toc_backlinks', 'action': 'store_const', 'const': 'entry',
+ 'default': 'entry'}),
+ ('Link from section headers to the top of the TOC.',
+ ['--toc-top-backlinks'],
+ {'dest': 'toc_backlinks', 'action': 'store_const', 'const': 'top'}),
+ ('Disable backlinks to the table of contents.',
+ ['--no-toc-backlinks'],
+ {'dest': 'toc_backlinks', 'action': 'store_false'}),
+ ('Link from footnotes/citations to references. (default)',
+ ['--footnote-backlinks'],
+ {'action': 'store_true', 'default': 1,
+ 'validator': validate_boolean}),
+ ('Disable backlinks from footnotes and citations.',
+ ['--no-footnote-backlinks'],
+ {'dest': 'footnote_backlinks', 'action': 'store_false'}),
+ ('Enable section numbering by Docutils. (default)',
+ ['--section-numbering'],
+ {'action': 'store_true', 'dest': 'sectnum_xform',
+ 'default': 1, 'validator': validate_boolean}),
+ ('Disable section numbering by Docutils.',
+ ['--no-section-numbering'],
+ {'action': 'store_false', 'dest': 'sectnum_xform'}),
+ ('Remove comment elements from the document tree.',
+ ['--strip-comments'],
+ {'action': 'store_true', 'validator': validate_boolean}),
+ ('Leave comment elements in the document tree. (default)',
+ ['--leave-comments'],
+ {'action': 'store_false', 'dest': 'strip_comments'}),
+ ('Remove all elements with classes="<class>" from the document tree. '
+ 'Warning: potentially dangerous; use with caution. '
+ '(Multiple-use option.)',
+ ['--strip-elements-with-class'],
+ {'action': 'append', 'dest': 'strip_elements_with_classes',
+ 'metavar': '<class>', 'validator': validate_strip_class}),
+ ('Remove all classes="<class>" attributes from elements in the '
+ 'document tree. Warning: potentially dangerous; use with caution. '
+ '(Multiple-use option.)',
+ ['--strip-class'],
+ {'action': 'append', 'dest': 'strip_classes',
+ 'metavar': '<class>', 'validator': validate_strip_class}),
+ ('Report system messages at or higher than <level>: "info" or "1", '
+ '"warning"/"2" (default), "error"/"3", "severe"/"4", "none"/"5"',
+ ['--report', '-r'], {'choices': threshold_choices, 'default': 2,
+ 'dest': 'report_level', 'metavar': '<level>',
+ 'validator': validate_threshold}),
+ ('Report all system messages. (Same as "--report=1".)',
+ ['--verbose', '-v'], {'action': 'store_const', 'const': 1,
+ 'dest': 'report_level'}),
+ ('Report no system messages. (Same as "--report=5".)',
+ ['--quiet', '-q'], {'action': 'store_const', 'const': 5,
+ 'dest': 'report_level'}),
+ ('Halt execution at system messages at or above <level>. '
+ 'Levels as in --report. Default: 4 (severe).',
+ ['--halt'], {'choices': threshold_choices, 'dest': 'halt_level',
+ 'default': 4, 'metavar': '<level>',
+ 'validator': validate_threshold}),
+ ('Halt at the slightest problem. Same as "--halt=info".',
+ ['--strict'], {'action': 'store_const', 'const': 1,
+ 'dest': 'halt_level'}),
+ ('Enable a non-zero exit status for non-halting system messages at '
+ 'or above <level>. Default: 5 (disabled).',
+ ['--exit-status'], {'choices': threshold_choices,
+ 'dest': 'exit_status_level',
+ 'default': 5, 'metavar': '<level>',
+ 'validator': validate_threshold}),
+ ('Enable debug-level system messages and diagnostics.',
+ ['--debug'], {'action': 'store_true', 'validator': validate_boolean}),
+ ('Disable debug output. (default)',
+ ['--no-debug'], {'action': 'store_false', 'dest': 'debug'}),
+ ('Send the output of system messages to <file>.',
+ ['--warnings'], {'dest': 'warning_stream', 'metavar': '<file>'}),
+ ('Enable Python tracebacks when Docutils is halted.',
+ ['--traceback'], {'action': 'store_true', 'default': None,
+ 'validator': validate_boolean}),
+ ('Disable Python tracebacks. (default)',
+ ['--no-traceback'], {'dest': 'traceback', 'action': 'store_false'}),
+ ('Specify the encoding and optionally the '
+ 'error handler of input text. Default: <locale-dependent>:strict.',
+ ['--input-encoding', '-i'],
+ {'metavar': '<name[:handler]>',
+ 'validator': validate_encoding_and_error_handler}),
+ ('Specify the error handler for undecodable characters. '
+ 'Choices: "strict" (default), "ignore", and "replace".',
+ ['--input-encoding-error-handler'],
+ {'default': 'strict', 'validator': validate_encoding_error_handler}),
+ ('Specify the text encoding and optionally the error handler for '
+ 'output. Default: UTF-8:strict.',
+ ['--output-encoding', '-o'],
+ {'metavar': '<name[:handler]>', 'default': 'utf-8',
+ 'validator': validate_encoding_and_error_handler}),
+ ('Specify error handler for unencodable output characters; '
+ '"strict" (default), "ignore", "replace", '
+ '"xmlcharrefreplace", "backslashreplace".',
+ ['--output-encoding-error-handler'],
+ {'default': 'strict', 'validator': validate_encoding_error_handler}),
+ ('Specify text encoding and error handler for error output. '
+ 'Default: %s:%s.'
+ % (default_error_encoding, default_error_encoding_error_handler),
+ ['--error-encoding', '-e'],
+ {'metavar': '<name[:handler]>', 'default': default_error_encoding,
+ 'validator': validate_encoding_and_error_handler}),
+ ('Specify the error handler for unencodable characters in '
+ 'error output. Default: %s.'
+ % default_error_encoding_error_handler,
+ ['--error-encoding-error-handler'],
+ {'default': default_error_encoding_error_handler,
+ 'validator': validate_encoding_error_handler}),
+ ('Specify the language (as 2-letter code). Default: en.',
+ ['--language', '-l'], {'dest': 'language_code', 'default': 'en',
+ 'metavar': '<name>'}),
+ ('Write output file dependencies to <file>.',
+ ['--record-dependencies'],
+ {'metavar': '<file>', 'validator': validate_dependency_file,
+ 'default': None}), # default set in Values class
+ ('Read configuration settings from <file>, if it exists.',
+ ['--config'], {'metavar': '<file>', 'type': 'string',
+ 'action': 'callback', 'callback': read_config_file}),
+ ("Show this program's version number and exit.",
+ ['--version', '-V'], {'action': 'version'}),
+ ('Show this help message and exit.',
+ ['--help', '-h'], {'action': 'help'}),
+ # Typically not useful for non-programmatic use:
+ (SUPPRESS_HELP, ['--id-prefix'], {'default': ''}),
+ (SUPPRESS_HELP, ['--auto-id-prefix'], {'default': 'id'}),
+ # Hidden options, for development use only:
+ (SUPPRESS_HELP, ['--dump-settings'], {'action': 'store_true'}),
+ (SUPPRESS_HELP, ['--dump-internals'], {'action': 'store_true'}),
+ (SUPPRESS_HELP, ['--dump-transforms'], {'action': 'store_true'}),
+ (SUPPRESS_HELP, ['--dump-pseudo-xml'], {'action': 'store_true'}),
+ (SUPPRESS_HELP, ['--expose-internal-attribute'],
+ {'action': 'append', 'dest': 'expose_internals',
+ 'validator': validate_colon_separated_string_list}),
+ (SUPPRESS_HELP, ['--strict-visitor'], {'action': 'store_true'}),
+ ))
+ """Runtime settings and command-line options common to all Docutils front
+ ends. Setting specs specific to individual Docutils components are also
+ used (see `populate_from_components()`)."""
+
+ settings_defaults = {'_disable_config': None,
+ '_source': None,
+ '_destination': None,
+ '_config_files': None}
+ """Defaults for settings that don't have command-line option equivalents."""
+
+ relative_path_settings = ('warning_stream',)
+
+ config_section = 'general'
+
+ version_template = ('%%prog (Docutils %s [%s], Python %s, on %s)'
+ % (docutils.__version__, docutils.__version_details__,
+ sys.version.split()[0], sys.platform))
+ """Default version message."""
+
+ def __init__(self, components=(), defaults=None, read_config_files=None,
+ *args, **kwargs):
+ """
+ `components` is a list of Docutils components each containing a
+ ``.settings_spec`` attribute. `defaults` is a mapping of setting
+ default overrides.
+ """
+
+ self.lists = {}
+ """Set of list-type settings."""
+
+ self.config_files = []
+ """List of paths of applied configuration files."""
+
+ optparse.OptionParser.__init__(
+ self, option_class=Option, add_help_option=None,
+ formatter=optparse.TitledHelpFormatter(width=78),
+ *args, **kwargs)
+ if not self.version:
+ self.version = self.version_template
+ # Make an instance copy (it will be modified):
+ self.relative_path_settings = list(self.relative_path_settings)
+ self.components = (self,) + tuple(components)
+ self.populate_from_components(self.components)
+ self.set_defaults_from_dict(defaults or {})
+ if read_config_files and not self.defaults['_disable_config']:
+ try:
+ config_settings = self.get_standard_config_settings()
+ except ValueError, error:
+ self.error(error)
+ self.set_defaults_from_dict(config_settings.__dict__)
+
+ def populate_from_components(self, components):
+ """
+ For each component, first populate from the `SettingsSpec.settings_spec`
+ structure, then from the `SettingsSpec.settings_defaults` dictionary.
+ After all components have been processed, check for and populate from
+ each component's `SettingsSpec.settings_default_overrides` dictionary.
+ """
+ for component in components:
+ if component is None:
+ continue
+ settings_spec = component.settings_spec
+ self.relative_path_settings.extend(
+ component.relative_path_settings)
+ for i in range(0, len(settings_spec), 3):
+ title, description, option_spec = settings_spec[i:i+3]
+ if title:
+ group = optparse.OptionGroup(self, title, description)
+ self.add_option_group(group)
+ else:
+ group = self # single options
+ for (help_text, option_strings, kwargs) in option_spec:
+ option = group.add_option(help=help_text, *option_strings,
+ **kwargs)
+ if kwargs.get('action') == 'append':
+ self.lists[option.dest] = 1
+ if component.settings_defaults:
+ self.defaults.update(component.settings_defaults)
+ for component in components:
+ if component and component.settings_default_overrides:
+ self.defaults.update(component.settings_default_overrides)
+
+ def get_standard_config_files(self):
+ """Return list of config files, from environment or standard."""
+ try:
+ config_files = os.environ['DOCUTILSCONFIG'].split(os.pathsep)
+ except KeyError:
+ config_files = self.standard_config_files
+
+ # If 'HOME' is not set, expanduser() requires the 'pwd' module, which is
+ # not available under certain environments, for example, within
+ # mod_python. The publisher ends up in here, and we need to publish
+ # from within mod_python. Therefore we need to avoid expanding when we
+ # are in those environments.
+ expand = os.path.expanduser
+ if 'HOME' not in os.environ:
+ try:
+ import pwd
+ except ImportError:
+ expand = lambda x: x
+ return [expand(f) for f in config_files if f.strip()]
+
+ def get_standard_config_settings(self):
+ settings = Values()
+ for filename in self.get_standard_config_files():
+ settings.update(self.get_config_file_settings(filename), self)
+ return settings
+
+ def get_config_file_settings(self, config_file):
+ """Returns a dictionary containing appropriate config file settings."""
+ parser = ConfigParser()
+ parser.read(config_file, self)
+ self.config_files.extend(parser._files)
+ base_path = os.path.dirname(config_file)
+ applied = {}
+ settings = Values()
+ for component in self.components:
+ if not component:
+ continue
+ for section in (tuple(component.config_section_dependencies or ())
+ + (component.config_section,)):
+ if section in applied:
+ continue
+ applied[section] = 1
+ settings.update(parser.get_section(section), self)
+ make_paths_absolute(
+ settings.__dict__, self.relative_path_settings, base_path)
+ return settings.__dict__
+
+ def check_values(self, values, args):
+ """Store positional arguments as runtime settings."""
+ values._source, values._destination = self.check_args(args)
+ make_paths_absolute(values.__dict__, self.relative_path_settings,
+ os.getcwd())
+ values._config_files = self.config_files
+ return values
+
+ def check_args(self, args):
+ source = destination = None
+ if args:
+ source = args.pop(0)
+ if source == '-': # means stdin
+ source = None
+ if args:
+ destination = args.pop(0)
+ if destination == '-': # means stdout
+ destination = None
+ if args:
+ self.error('Maximum 2 arguments allowed.')
+ if source and source == destination:
+ self.error('Do not specify the same file for both source and '
+ 'destination. It will clobber the source file.')
+ return source, destination
+
+ def set_defaults_from_dict(self, defaults):
+ self.defaults.update(defaults)
+
+ def get_default_values(self):
+ """Needed to get custom `Values` instances."""
+ defaults = Values(self.defaults)
+ defaults._config_files = self.config_files
+ return defaults
+
+ def get_option_by_dest(self, dest):
+ """
+ Get an option by its dest.
+
+ If you're supplying a dest which is shared by several options,
+ it is undefined which option of those is returned.
+
+ A KeyError is raised if there is no option with the supplied
+ dest.
+ """
+ for group in self.option_groups + [self]:
+ for option in group.option_list:
+ if option.dest == dest:
+ return option
+ raise KeyError('No option with dest == %r.' % dest)
+
+
+class ConfigParser(CP.ConfigParser):
+
+ old_settings = {
+ 'pep_stylesheet': ('pep_html writer', 'stylesheet'),
+ 'pep_stylesheet_path': ('pep_html writer', 'stylesheet_path'),
+ 'pep_template': ('pep_html writer', 'template')}
+ """{old setting: (new section, new setting)} mapping, used by
+ `handle_old_config`, to convert settings from the old [options] section."""
+
+ old_warning = """
+The "[options]" section is deprecated. Support for old-format configuration
+files may be removed in a future Docutils release. Please revise your
+configuration files. See <http://docutils.sf.net/docs/user/config.html>,
+section "Old-Format Configuration Files".
+"""
+
+ not_utf8_error = """\
+Unable to read configuration file "%s": content not encoded as UTF-8.
+Skipping "%s" configuration file.
+"""
+
+ def __init__(self, *args, **kwargs):
+ CP.ConfigParser.__init__(self, *args, **kwargs)
+
+ self._files = []
+ """List of paths of configuration files read."""
+
+ def read(self, filenames, option_parser):
+ if isinstance(filenames, (str, unicode)):
+ filenames = [filenames]
+ for filename in filenames:
+ try:
+ # Config files must be UTF-8-encoded:
+ fp = codecs.open(filename, 'r', 'utf-8')
+ except IOError:
+ continue
+ try:
+ CP.ConfigParser.readfp(self, fp, filename)
+ except UnicodeDecodeError:
+ sys.stderr.write(self.not_utf8_error % (filename, filename))
+ fp.close()
+ continue
+ fp.close()
+ self._files.append(filename)
+ if self.has_section('options'):
+ self.handle_old_config(filename)
+ self.validate_settings(filename, option_parser)
+
+ def handle_old_config(self, filename):
+ warnings.warn_explicit(self.old_warning, ConfigDeprecationWarning,
+ filename, 0)
+ options = self.get_section('options')
+ if not self.has_section('general'):
+ self.add_section('general')
+ for key, value in options.items():
+ if key in self.old_settings:
+ section, setting = self.old_settings[key]
+ if not self.has_section(section):
+ self.add_section(section)
+ else:
+ section = 'general'
+ setting = key
+ if not self.has_option(section, setting):
+ self.set(section, setting, value)
+ self.remove_section('options')
+
+ def validate_settings(self, filename, option_parser):
+ """
+ Call the validator function and implement overrides on all applicable
+ settings.
+ """
+ for section in self.sections():
+ for setting in self.options(section):
+ try:
+ option = option_parser.get_option_by_dest(setting)
+ except KeyError:
+ continue
+ if option.validator:
+ value = self.get(section, setting, raw=1)
+ try:
+ new_value = option.validator(
+ setting, value, option_parser,
+ config_parser=self, config_section=section)
+ except Exception, error:
+ # Python 2 three-argument raise: re-raise as ValueError while
+ # preserving the original traceback. (A parenthesized raise would
+ # raise a tuple and discard the traceback.)
+ raise ValueError(
+ 'Error in config file "%s", section "[%s]":\n'
+ ' %s: %s\n %s = %s'
+ % (filename, section, error.__class__.__name__,
+ error, setting, value)), None, sys.exc_info()[2]
+ self.set(section, setting, new_value)
+ if option.overrides:
+ self.set(section, option.overrides, None)
+
+ def optionxform(self, optionstr):
+ """
+ Transform '-' to '_' so the cmdline form of option names can be used.
+ """
+ return optionstr.lower().replace('-', '_')
+
+ def get_section(self, section):
+ """
+ Return a given section as a dictionary (empty if the section
+ doesn't exist).
+ """
+ section_dict = {}
+ if self.has_section(section):
+ for option in self.options(section):
+ section_dict[option] = self.get(section, option, raw=1)
+ return section_dict
+
+
+class ConfigDeprecationWarning(DeprecationWarning):
+ """Warning for deprecated configuration file features."""
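The `optionxform` override above (mapping `-` to `_`) and the `get_section` helper are the two key `ConfigParser` customizations. The same pattern can be sketched standalone with Python 3's stdlib `configparser`; the class name and sample section here are illustrative, not part of Docutils:

```python
import configparser

class DashUnderscoreConfigParser(configparser.RawConfigParser):
    """Accept command-line style option names ('tab-width') in config
    files by normalizing them to setting names ('tab_width')."""

    def optionxform(self, optionstr):
        # Same transform as ConfigParser.optionxform above.
        return optionstr.lower().replace('-', '_')

    def get_section(self, section):
        """Return a section as a plain dict (empty if it doesn't exist)."""
        section_dict = {}
        if self.has_section(section):
            for option in self.options(section):
                section_dict[option] = self.get(section, option, raw=True)
        return section_dict

parser = DashUnderscoreConfigParser()
parser.read_string("[general]\ntab-width = 8\n")
print(parser.get_section('general'))  # {'tab_width': '8'}
```

Because `optionxform` is applied at read time, both `tab-width` and `tab_width` in a config file end up under the same key, which is what lets config settings and command-line options share one namespace.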
diff --git a/python/helpers/docutils/io.py b/python/helpers/docutils/io.py
new file mode 100644
index 0000000..29b298f1
--- /dev/null
+++ b/python/helpers/docutils/io.py
@@ -0,0 +1,443 @@
+# $Id: io.py 6269 2010-03-18 22:27:53Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+I/O classes provide a uniform API for low-level input and output. Subclasses
+will exist for a variety of input/output mechanisms.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+try:
+ import locale # module may be missing (e.g. under Jython)
+except ImportError:
+ pass
+import re
+import codecs
+from docutils import TransformSpec
+from docutils._compat import b
+
+
+class Input(TransformSpec):
+
+ """
+ Abstract base class for input wrappers.
+ """
+
+ component_type = 'input'
+
+ default_source_path = None
+
+ def __init__(self, source=None, source_path=None, encoding=None,
+ error_handler='strict'):
+ self.encoding = encoding
+ """Text encoding for the input source."""
+
+ self.error_handler = error_handler
+ """Text decoding error handler."""
+
+ self.source = source
+ """The source of input data."""
+
+ self.source_path = source_path
+ """A text reference to the source."""
+
+ if not source_path:
+ self.source_path = self.default_source_path
+
+ self.successful_encoding = None
+ """The encoding that successfully decoded the source data."""
+
+ def __repr__(self):
+ return '%s: source=%r, source_path=%r' % (self.__class__, self.source,
+ self.source_path)
+
+ def read(self):
+ raise NotImplementedError
+
+ def decode(self, data):
+ """
+ Decode a string, `data`, heuristically.
+ Raise UnicodeError if unsuccessful.
+
+ The client application should call ``locale.setlocale`` at the
+ beginning of processing::
+
+ locale.setlocale(locale.LC_ALL, '')
+ """
+ if self.encoding and self.encoding.lower() == 'unicode':
+ assert isinstance(data, unicode), (
+ 'input encoding is "unicode" '
+ 'but input is not a unicode object')
+ if isinstance(data, unicode):
+ # Accept unicode even if self.encoding != 'unicode'.
+ return data
+ if self.encoding:
+ # We believe the user/application when the encoding is
+ # explicitly given.
+ encodings = [self.encoding]
+ else:
+ data_encoding = self.determine_encoding_from_data(data)
+ if data_encoding:
+ # If the data declares its encoding (explicitly or via a BOM),
+ # we believe it.
+ encodings = [data_encoding]
+ else:
+ # Apply heuristics only if no encoding is explicitly given and
+ # no BOM found. Start with UTF-8, because that only matches
+ # data that *IS* UTF-8:
+ encodings = ['utf-8']
+ try:
+ encodings.append(locale.getlocale()[1])
+ except Exception:
+ # locale module missing or locale information unavailable
+ pass
+ try:
+ encodings.append(locale.getdefaultlocale()[1])
+ except Exception:
+ pass
+ # fallback encoding:
+ encodings.append('latin-1')
+ error = None
+ error_details = ''
+ for enc in encodings:
+ if not enc:
+ continue
+ try:
+ decoded = unicode(data, enc, self.error_handler)
+ self.successful_encoding = enc
+ # Return decoded, removing BOMs.
+ return decoded.replace(u'\ufeff', u'')
+ except (UnicodeError, LookupError), tmperror:
+ error = tmperror # working around Python 3 deleting the
+ # error variable after the except clause
+ if error is not None:
+ error_details = '\n(%s: %s)' % (error.__class__.__name__, error)
+ raise UnicodeError(
+ 'Unable to decode input data. Tried the following encodings: '
+ '%s.%s'
+ % (', '.join([repr(enc) for enc in encodings if enc]),
+ error_details))
+
+ coding_slug = re.compile(b(r"coding[:=]\s*([-\w.]+)"))
+ """Encoding declaration pattern."""
+
+ byte_order_marks = ((codecs.BOM_UTF8, 'utf-8'), # actually 'utf-8-sig'
+ (codecs.BOM_UTF16_BE, 'utf-16-be'),
+ (codecs.BOM_UTF16_LE, 'utf-16-le'),)
+ """Sequence of (start_bytes, encoding) tuples for encoding detection.
+ The first bytes of input data are checked against the start_bytes strings.
+ A match indicates the given encoding."""
+
+ def determine_encoding_from_data(self, data):
+ """
+ Try to determine the encoding of `data` by looking *in* `data`.
+ Check for a byte order mark (BOM) or an encoding declaration.
+ """
+ # check for a byte order mark:
+ for start_bytes, encoding in self.byte_order_marks:
+ if data.startswith(start_bytes):
+ return encoding
+ # check for an encoding declaration pattern in first 2 lines of file:
+ for line in data.splitlines()[:2]:
+ match = self.coding_slug.search(line)
+ if match:
+ return match.group(1).decode('ascii')
+ return None
+
+
+class Output(TransformSpec):
+
+ """
+ Abstract base class for output wrappers.
+ """
+
+ component_type = 'output'
+
+ default_destination_path = None
+
+ def __init__(self, destination=None, destination_path=None,
+ encoding=None, error_handler='strict'):
+ self.encoding = encoding
+ """Text encoding for the output destination."""
+
+ self.error_handler = error_handler or 'strict'
+ """Text encoding error handler."""
+
+ self.destination = destination
+ """The destination for output data."""
+
+ self.destination_path = destination_path
+ """A text reference to the destination."""
+
+ if not destination_path:
+ self.destination_path = self.default_destination_path
+
+ def __repr__(self):
+ return ('%s: destination=%r, destination_path=%r'
+ % (self.__class__, self.destination, self.destination_path))
+
+ def write(self, data):
+ """`data` is a Unicode string, to be encoded by `self.encode`."""
+ raise NotImplementedError
+
+ def encode(self, data):
+ if self.encoding and self.encoding.lower() == 'unicode':
+ assert isinstance(data, unicode), (
+ 'the encoding given is "unicode" but the output is not '
+ 'a Unicode string')
+ return data
+ if not isinstance(data, unicode):
+ # Non-unicode (e.g. binary) output.
+ return data
+ else:
+ return data.encode(self.encoding, self.error_handler)
+
+
+class FileInput(Input):
+
+ """
+ Input for single, simple file-like objects.
+ """
+
+ def __init__(self, source=None, source_path=None,
+ encoding=None, error_handler='strict',
+ autoclose=1, handle_io_errors=1, mode='rU'):
+ """
+ :Parameters:
+ - `source`: either a file-like object (which is read directly), or
+ `None` (which implies `sys.stdin` if no `source_path` given).
+ - `source_path`: a path to a file, which is opened and then read.
+ - `encoding`: the expected text encoding of the input file.
+ - `error_handler`: the encoding error handler to use.
+ - `autoclose`: close automatically after read (boolean); always
+ false if `sys.stdin` is the source.
+ - `handle_io_errors`: if true, summarize I/O errors to stderr and
+ exit instead of raising the exception (boolean).
+ - `mode`: how the file is to be opened (see standard function
+ `open`). The default 'rU' provides universal newline support
+ for text files.
+ """
+ Input.__init__(self, source, source_path, encoding, error_handler)
+ self.autoclose = autoclose
+ self.handle_io_errors = handle_io_errors
+ if source is None:
+ if source_path:
+ # Specify encoding in Python 3
+ if sys.version_info >= (3,0):
+ kwargs = {'encoding': self.encoding,
+ 'errors': self.error_handler}
+ else:
+ kwargs = {}
+
+ try:
+ self.source = open(source_path, mode, **kwargs)
+ except IOError, error:
+ if not handle_io_errors:
+ raise
+ print >>sys.stderr, '%s: %s' % (error.__class__.__name__,
+ error)
+ print >>sys.stderr, ('Unable to open source file for '
+ "reading ('%s'). Exiting." %
+ source_path)
+ sys.exit(1)
+ else:
+ self.source = sys.stdin
+ self.autoclose = None
+ if not source_path:
+ try:
+ self.source_path = self.source.name
+ except AttributeError:
+ pass
+
+ def read(self):
+ """
+ Read and decode a single file and return the data (Unicode string).
+ """
+ try:
+ data = self.source.read()
+ finally:
+ if self.autoclose:
+ self.close()
+ return self.decode(data)
+
+ def readlines(self):
+ """
+ Return lines of a single file as list of Unicode strings.
+ """
+ try:
+ lines = self.source.readlines()
+ finally:
+ if self.autoclose:
+ self.close()
+ return [self.decode(line) for line in lines]
+
+ def close(self):
+ self.source.close()
+
+
+class FileOutput(Output):
+
+ """
+ Output for single, simple file-like objects.
+ """
+
+ def __init__(self, destination=None, destination_path=None,
+ encoding=None, error_handler='strict', autoclose=1,
+ handle_io_errors=1):
+ """
+ :Parameters:
+ - `destination`: either a file-like object (which is written
+ directly) or `None` (which implies `sys.stdout` if no
+ `destination_path` given).
+ - `destination_path`: a path to a file, which is opened and then
+ written.
+ - `autoclose`: close automatically after write (boolean); always
+ false if `sys.stdout` is the destination.
+ """
+ Output.__init__(self, destination, destination_path,
+ encoding, error_handler)
+ self.opened = 1
+ self.autoclose = autoclose
+ self.handle_io_errors = handle_io_errors
+ if destination is None:
+ if destination_path:
+ self.opened = None
+ else:
+ self.destination = sys.stdout
+ self.autoclose = None
+ if not destination_path:
+ try:
+ self.destination_path = self.destination.name
+ except AttributeError:
+ pass
+
+ def open(self):
+ # Specify encoding in Python 3.
+ # (Do not use binary mode ('wb') as this prevents the
+ # conversion of newlines to the system specific default.)
+ if sys.version_info >= (3,0):
+ kwargs = {'encoding': self.encoding,
+ 'errors': self.error_handler}
+ else:
+ kwargs = {}
+
+ try:
+ self.destination = open(self.destination_path, 'w', **kwargs)
+ except IOError, error:
+ if not self.handle_io_errors:
+ raise
+ print >>sys.stderr, '%s: %s' % (error.__class__.__name__,
+ error)
+ print >>sys.stderr, ('Unable to open destination file for writing'
+ " ('%s'). Exiting." % self.destination_path)
+ sys.exit(1)
+ self.opened = 1
+
+ def write(self, data):
+ """Encode `data`, write it to a single file, and return it.
+
+ In Python 3, a Unicode string is returned.
+ """
+ if sys.version_info >= (3,0):
+ output = data # in py3k, write expects a (Unicode) string
+ else:
+ output = self.encode(data)
+ if not self.opened:
+ self.open()
+ try:
+ self.destination.write(output)
+ finally:
+ if self.autoclose:
+ self.close()
+ return output
+
+ def close(self):
+ self.destination.close()
+ self.opened = None
+
+
+class BinaryFileOutput(FileOutput):
+ """
+ A version of docutils.io.FileOutput which writes to a binary file.
+ """
+ def open(self):
+ try:
+ self.destination = open(self.destination_path, 'wb')
+ except IOError, error:
+ if not self.handle_io_errors:
+ raise
+ print >>sys.stderr, '%s: %s' % (error.__class__.__name__,
+ error)
+ print >>sys.stderr, ('Unable to open destination file for writing '
+ "('%s'). Exiting." % self.destination_path)
+ sys.exit(1)
+ self.opened = 1
+
+
+class StringInput(Input):
+
+ """
+ Direct string input.
+ """
+
+ default_source_path = '<string>'
+
+ def read(self):
+ """Decode and return the source string."""
+ return self.decode(self.source)
+
+
+class StringOutput(Output):
+
+ """
+ Direct string output.
+ """
+
+ default_destination_path = '<string>'
+
+ def write(self, data):
+ """Encode `data`, store it in `self.destination`, and return it."""
+ self.destination = self.encode(data)
+ return self.destination
+
+
+class NullInput(Input):
+
+ """
+ Degenerate input: read nothing.
+ """
+
+ default_source_path = 'null input'
+
+ def read(self):
+ """Return a null string."""
+ return u''
+
+
+class NullOutput(Output):
+
+ """
+ Degenerate output: write nothing.
+ """
+
+ default_destination_path = 'null output'
+
+ def write(self, data):
+ """Do nothing ([don't even] send data to the bit bucket)."""
+ pass
+
+
+class DocTreeInput(Input):
+
+ """
+ Adapter for document tree input.
+
+ The document tree must be passed in the ``source`` parameter.
+ """
+
+ default_source_path = 'doctree input'
+
+ def read(self):
+ """Return the document tree."""
+ return self.source
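The detection order in `determine_encoding_from_data` above (byte order mark first, then a PEP 263 style `coding:` declaration in the first two lines) can be exercised on its own. A minimal standalone sketch, using the same BOM table and regex:

```python
import codecs
import re

# (start_bytes, encoding) pairs checked against the head of the data,
# mirroring Input.byte_order_marks above.
BOMS = ((codecs.BOM_UTF8, 'utf-8'),
        (codecs.BOM_UTF16_BE, 'utf-16-be'),
        (codecs.BOM_UTF16_LE, 'utf-16-le'))

CODING_SLUG = re.compile(br"coding[:=]\s*([-\w.]+)")

def guess_encoding(data):
    """Return an encoding declared *in* `data` (bytes), or None."""
    for start_bytes, encoding in BOMS:
        if data.startswith(start_bytes):
            return encoding
    # Emacs/PEP 263 style declaration in the first two lines only:
    for line in data.splitlines()[:2]:
        match = CODING_SLUG.search(line)
        if match:
            return match.group(1).decode('ascii')
    return None

print(guess_encoding(b"# -*- coding: latin-1 -*-\nx = 1\n"))  # latin-1
print(guess_encoding(codecs.BOM_UTF8 + b"hello"))             # utf-8
```

When neither check matches, `Input.decode` falls back to the heuristic list (`utf-8`, the locale encodings, then `latin-1`), since UTF-8 only decodes data that actually is UTF-8.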
diff --git a/python/helpers/docutils/languages/__init__.py b/python/helpers/docutils/languages/__init__.py
new file mode 100644
index 0000000..d567f9c
--- /dev/null
+++ b/python/helpers/docutils/languages/__init__.py
@@ -0,0 +1,21 @@
+# $Id: __init__.py 5618 2008-07-28 08:37:32Z strank $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# Internationalization details are documented in
+# <http://docutils.sf.net/docs/howto/i18n.html>.
+
+"""
+This package contains modules for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+_languages = {}
+
+def get_language(language_code):
+ if language_code in _languages:
+ return _languages[language_code]
+ module = __import__(language_code, globals(), locals())
+ _languages[language_code] = module
+ return module
diff --git a/python/helpers/docutils/languages/en.py b/python/helpers/docutils/languages/en.py
new file mode 100644
index 0000000..7dde144
--- /dev/null
+++ b/python/helpers/docutils/languages/en.py
@@ -0,0 +1,60 @@
+# $Id: en.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+English-language mappings for language-dependent features of Docutils.
+"""
+
+__docformat__ = 'reStructuredText'
+
+labels = {
+ # fixed: language-dependent
+ 'author': 'Author',
+ 'authors': 'Authors',
+ 'organization': 'Organization',
+ 'address': 'Address',
+ 'contact': 'Contact',
+ 'version': 'Version',
+ 'revision': 'Revision',
+ 'status': 'Status',
+ 'date': 'Date',
+ 'copyright': 'Copyright',
+ 'dedication': 'Dedication',
+ 'abstract': 'Abstract',
+ 'attention': 'Attention!',
+ 'caution': 'Caution!',
+ 'danger': '!DANGER!',
+ 'error': 'Error',
+ 'hint': 'Hint',
+ 'important': 'Important',
+ 'note': 'Note',
+ 'tip': 'Tip',
+ 'warning': 'Warning',
+ 'contents': 'Contents'}
+"""Mapping of node class name to label text."""
+
+bibliographic_fields = {
+ # language-dependent: fixed
+ 'author': 'author',
+ 'authors': 'authors',
+ 'organization': 'organization',
+ 'address': 'address',
+ 'contact': 'contact',
+ 'version': 'version',
+ 'revision': 'revision',
+ 'status': 'status',
+ 'date': 'date',
+ 'copyright': 'copyright',
+ 'dedication': 'dedication',
+ 'abstract': 'abstract'}
+"""English (lowcased) to canonical name mapping for bibliographic fields."""
+
+author_separators = [';', ',']
+"""List of separator strings for the 'Authors' bibliographic field. Tried in
+order."""
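`author_separators` is consumed when an `Authors` bibliographic field is parsed: the separators are tried in order, so `';'` wins over `','` when both occur. A sketch of that splitting (the helper name is ours; in Docutils the equivalent logic lives in the front-matter transform):

```python
author_separators = [';', ',']  # tried in order, as documented above

def split_authors(text):
    """Split an Authors field on the first separator that occurs in it."""
    for sep in author_separators:
        if sep in text:
            return [name.strip() for name in text.split(sep)]
    return [text.strip()]

print(split_authors('Goodger, David; Milde, Guenter'))
# ['Goodger, David', 'Milde, Guenter']
```

Trying `';'` first is what keeps comma-containing names ("Goodger, David") intact whenever semicolons are used between authors.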
diff --git a/python/helpers/docutils/nodes.py b/python/helpers/docutils/nodes.py
new file mode 100644
index 0000000..c43fa3d
--- /dev/null
+++ b/python/helpers/docutils/nodes.py
@@ -0,0 +1,1919 @@
+# $Id: nodes.py 6351 2010-07-03 14:19:09Z gbrandl $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Docutils document tree element class library.
+
+Classes in CamelCase are abstract base classes or auxiliary classes. The one
+exception is `Text`, for a text (PCDATA) node; uppercase is used to
+differentiate from element classes. Classes in lower_case_with_underscores
+are element classes, matching the XML element generic identifiers in the DTD_.
+
+The position of each node (the level at which it can occur) is significant and
+is represented by abstract base classes (`Root`, `Structural`, `Body`,
+`Inline`, etc.). Certain transformations will be easier because we can use
+``isinstance(node, base_class)`` to determine the position of the node in the
+hierarchy.
+
+.. _DTD: http://docutils.sourceforge.net/docs/ref/docutils.dtd
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import re
+import warnings
+import types
+import unicodedata
+
+# ==============================
+# Functional Node Base Classes
+# ==============================
+
+class Node(object):
+
+ """Abstract base class of nodes in a document tree."""
+
+ parent = None
+ """Back-reference to the Node immediately containing this Node."""
+
+ document = None
+ """The `document` node at the root of the tree containing this Node."""
+
+ source = None
+ """Path or description of the input source which generated this Node."""
+
+ line = None
+ """The line number (1-based) of the beginning of this Node in `source`."""
+
+ def __nonzero__(self):
+ """
+ Node instances are always true, even if they're empty. A node is more
+ than a simple container. Its boolean "truth" does not depend on
+ having one or more subnodes in the doctree.
+
+ Use `len()` to check node length. Use `None` to represent a boolean
+ false value.
+ """
+ return True
+
+ if sys.version_info < (3,):
+ # on 2.x, str(node) will be a byte string with Unicode
+ # characters > 255 escaped; on 3.x this is no longer necessary
+ def __str__(self):
+ return unicode(self).encode('raw_unicode_escape')
+
+ def asdom(self, dom=None):
+ """Return a DOM **fragment** representation of this Node."""
+ if dom is None:
+ import xml.dom.minidom as dom
+ domroot = dom.Document()
+ return self._dom_node(domroot)
+
+ def pformat(self, indent=' ', level=0):
+ """
+ Return an indented pseudo-XML representation, for test purposes.
+
+ Override in subclasses.
+ """
+ raise NotImplementedError
+
+ def copy(self):
+ """Return a copy of self."""
+ raise NotImplementedError
+
+ def deepcopy(self):
+ """Return a deep copy of self (also copying children)."""
+ raise NotImplementedError
+
+ def setup_child(self, child):
+ child.parent = self
+ if self.document:
+ child.document = self.document
+ if child.source is None:
+ child.source = self.document.current_source
+ if child.line is None:
+ child.line = self.document.current_line
+
+ def walk(self, visitor):
+ """
+ Traverse a tree of `Node` objects, calling the
+ `dispatch_visit()` method of `visitor` when entering each
+ node. (The `walkabout()` method is similar, except it also
+ calls the `dispatch_departure()` method before exiting each
+ node.)
+
+ This tree traversal supports limited in-place tree
+ modifications. Replacing one node with one or more nodes is
+ OK, as is removing an element. However, if the node removed
+ or replaced occurs after the current node, the old node will
+ still be traversed, and any new nodes will not.
+
+ Within ``visit`` methods (and ``depart`` methods for
+ `walkabout()`), `TreePruningException` subclasses may be raised
+ (`SkipChildren`, `SkipSiblings`, `SkipNode`, `SkipDeparture`).
+
+ Parameter `visitor`: A `NodeVisitor` object, containing a
+ ``visit`` implementation for each `Node` subclass encountered.
+
+ Return true if we should stop the traversal.
+ """
+ stop = 0
+ visitor.document.reporter.debug(
+ 'docutils.nodes.Node.walk calling dispatch_visit for %s'
+ % self.__class__.__name__)
+ try:
+ try:
+ visitor.dispatch_visit(self)
+ except (SkipChildren, SkipNode):
+ return stop
+ except SkipDeparture: # not applicable; ignore
+ pass
+ children = self.children
+ try:
+ for child in children[:]:
+ if child.walk(visitor):
+ stop = 1
+ break
+ except SkipSiblings:
+ pass
+ except StopTraversal:
+ stop = 1
+ return stop
+
+ def walkabout(self, visitor):
+ """
+ Perform a tree traversal similarly to `Node.walk()` (which
+ see), except also call the `dispatch_departure()` method
+ before exiting each node.
+
+ Parameter `visitor`: A `NodeVisitor` object, containing a
+ ``visit`` and ``depart`` implementation for each `Node`
+ subclass encountered.
+
+ Return true if we should stop the traversal.
+ """
+ call_depart = 1
+ stop = 0
+ visitor.document.reporter.debug(
+ 'docutils.nodes.Node.walkabout calling dispatch_visit for %s'
+ % self.__class__.__name__)
+ try:
+ try:
+ visitor.dispatch_visit(self)
+ except SkipNode:
+ return stop
+ except SkipDeparture:
+ call_depart = 0
+ children = self.children
+ try:
+ for child in children[:]:
+ if child.walkabout(visitor):
+ stop = 1
+ break
+ except SkipSiblings:
+ pass
+ except SkipChildren:
+ pass
+ except StopTraversal:
+ stop = 1
+ if call_depart:
+ visitor.document.reporter.debug(
+ 'docutils.nodes.Node.walkabout calling dispatch_departure '
+ 'for %s' % self.__class__.__name__)
+ visitor.dispatch_departure(self)
+ return stop
+
+ def _fast_traverse(self, cls):
+ """Specialized traverse() that only supports instance checks."""
+ result = []
+ if isinstance(self, cls):
+ result.append(self)
+ for child in self.children:
+ result.extend(child._fast_traverse(cls))
+ return result
+
+ def _all_traverse(self):
+ """Specialized traverse() that doesn't check for a condition."""
+ result = []
+ result.append(self)
+ for child in self.children:
+ result.extend(child._all_traverse())
+ return result
+
+ def traverse(self, condition=None,
+ include_self=1, descend=1, siblings=0, ascend=0):
+ """
+ Return an iterable containing
+
+ * self (if include_self is true)
+ * all descendants in tree traversal order (if descend is true)
+ * all siblings (if siblings is true) and their descendants (if
+ also descend is true)
+ * the siblings of the parent (if ascend is true) and their
+ descendants (if also descend is true), and so on
+
+ If `condition` is not None, the iterable contains only nodes
+ for which ``condition(node)`` is true. If `condition` is a
+ node class ``cls``, it is equivalent to a function consisting
+ of ``return isinstance(node, cls)``.
+
+ If ascend is true, assume siblings to be true as well.
+
+ For example, given the following tree::
+
+ <paragraph>
+ <emphasis> <--- emphasis.traverse() and
+ <strong> <--- strong.traverse() are called.
+ Foo
+ Bar
+ <reference name="Baz" refid="baz">
+ Baz
+
+ Then list(emphasis.traverse()) equals ::
+
+ [<emphasis>, <strong>, <#text: Foo>, <#text: Bar>]
+
+ and list(strong.traverse(ascend=1)) equals ::
+
+ [<strong>, <#text: Foo>, <#text: Bar>, <reference>, <#text: Baz>]
+ """
+ if ascend:
+ siblings=1
+ # Check for special argument combinations that allow using an
+ # optimized version of traverse()
+ if include_self and descend and not siblings:
+ if condition is None:
+ return self._all_traverse()
+ elif isinstance(condition, (types.ClassType, type)):
+ return self._fast_traverse(condition)
+ # Check if `condition` is a class (check for TypeType for Python
+ # implementations that use only new-style classes, like PyPy).
+ if isinstance(condition, (types.ClassType, type)):
+ node_class = condition
+ def condition(node, node_class=node_class):
+ return isinstance(node, node_class)
+ r = []
+ if include_self and (condition is None or condition(self)):
+ r.append(self)
+ if descend and len(self.children):
+ for child in self:
+ r.extend(child.traverse(
+ include_self=1, descend=1, siblings=0, ascend=0,
+ condition=condition))
+ if siblings or ascend:
+ node = self
+ while node.parent:
+ index = node.parent.index(node)
+ for sibling in node.parent[index+1:]:
+ r.extend(sibling.traverse(include_self=1, descend=descend,
+ siblings=0, ascend=0,
+ condition=condition))
+ if not ascend:
+ break
+ else:
+ node = node.parent
+ return r
+
+ def next_node(self, condition=None,
+ include_self=0, descend=1, siblings=0, ascend=0):
+ """
+ Return the first node in the iterable returned by traverse(),
+ or None if the iterable is empty.
+
+ Parameter list is the same as of traverse. Note that
+ include_self defaults to 0, though.
+ """
+ iterable = self.traverse(condition=condition,
+ include_self=include_self, descend=descend,
+ siblings=siblings, ascend=ascend)
+ try:
+ return iterable[0]
+ except IndexError:
+ return None
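The traversal contract documented above (depth-first pre-order visiting, pruning via exception, condition filtering that does not prune) can be sketched in a minimal stand-alone form. `MiniNode` and this `SkipChildren` are illustrative stand-ins written for Python 3, not the docutils classes themselves:

```python
# Minimal sketch of the Node.walk()/traverse() contract described above.
# MiniNode and SkipChildren are illustrative, not docutils code.

class SkipChildren(Exception):
    """Raised by a visitor to prune the subtree below the current node."""

class MiniNode:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def walk(self, visit):
        """Depth-first pre-order visit; SkipChildren prunes descendants."""
        try:
            visit(self)
        except SkipChildren:
            return
        for child in self.children:
            child.walk(visit)

    def traverse(self, condition=None):
        """Return self and all descendants for which condition(node) is
        true; a failed condition filters the node but still descends,
        mirroring docutils traverse()."""
        result = []

        def visit(node):
            if condition is None or condition(node):
                result.append(node)
        self.walk(visit)
        return result

tree = MiniNode('root', [MiniNode('a', [MiniNode('b')]), MiniNode('c')])
names = [n.name for n in tree.traverse()]
print(names)  # depth-first pre-order: ['root', 'a', 'b', 'c']

seen = []
def visit(node):
    seen.append(node.name)
    if node.name == 'a':
        raise SkipChildren
tree.walk(visit)
print(seen)  # subtree under 'a' pruned: ['root', 'a', 'c']
```

Note that, unlike pruning, a failed `condition` still descends into the node's children, which is why `b` survives a filter that rejects `a`.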
+
+if sys.version_info < (3,):
+ class reprunicode(unicode):
+ """
+ A class that removes the initial u from unicode's repr.
+ """
+
+ def __repr__(self):
+ return unicode.__repr__(self)[1:]
+else:
+ reprunicode = unicode
+
+
+class Text(Node, reprunicode):
+
+ """
+ Instances are terminal nodes (leaves) containing text only; no child
+ nodes or attributes. Initialize by passing a string to the constructor.
+ Access the text itself with the `astext` method.
+ """
+
+ tagname = '#text'
+
+ children = ()
+ """Text nodes have no children, and cannot have children."""
+
+ if sys.version_info > (3,):
+ def __new__(cls, data, rawsource=None):
+ """Prevent the rawsource argument from propagating to str."""
+ if isinstance(data, bytes):
+ raise TypeError('expecting str data, not bytes')
+ return reprunicode.__new__(cls, data)
+ else:
+ def __new__(cls, data, rawsource=None):
+ """Prevent the rawsource argument from propagating to str."""
+ return reprunicode.__new__(cls, data)
+
+ def __init__(self, data, rawsource=''):
+
+ self.rawsource = rawsource
+ """The raw text from which this element was constructed."""
+
+ def shortrepr(self, maxlen=18):
+ data = self
+ if len(data) > maxlen:
+ data = data[:maxlen-4] + ' ...'
+ return '<%s: %s>' % (self.tagname, repr(reprunicode(data)))
+
+ def __repr__(self):
+ return self.shortrepr(maxlen=68)
+
+ def _dom_node(self, domroot):
+ return domroot.createTextNode(unicode(self))
+
+ def astext(self):
+ return reprunicode(self)
+
+ # Note about __unicode__: The implementation of __unicode__ here,
+ # and the one raising NotImplemented in the superclass Node had
+ # to be removed when changing Text to a subclass of unicode instead
+ # of UserString, since there is no way to delegate the __unicode__
+ # call to the superclass unicode:
+ # unicode itself does not have __unicode__ method to delegate to
+ # and calling unicode(self) or unicode.__new__ directly creates
+ # an infinite loop
+
+ def copy(self):
+ return self.__class__(reprunicode(self), rawsource=self.rawsource)
+
+ def deepcopy(self):
+ return self.copy()
+
+ def pformat(self, indent=' ', level=0):
+ result = []
+ indent = indent * level
+ for line in self.splitlines():
+ result.append(indent + line + '\n')
+ return ''.join(result)
+
+ # rstrip and lstrip are used by substitution definitions where
+ # they are expected to return a Text instance, this was formerly
+ # taken care of by UserString. Note that then and now the
+ # rawsource member is lost.
+
+ def rstrip(self, chars=None):
+ return self.__class__(reprunicode.rstrip(self, chars))
+ def lstrip(self, chars=None):
+ return self.__class__(reprunicode.lstrip(self, chars))
+
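The "leaf node as a string subclass" pattern used by `Text` above, where the node *is* its text data and the extra `rawsource` argument must be kept out of the string constructor, can be sketched stand-alone. `MiniText` below is an illustrative Python 3 analogue, not docutils code:

```python
# Sketch of the str-subclass leaf-node pattern used by Text above:
# __new__ swallows the rawsource argument so it never reaches
# str.__new__, and __init__ stores it. MiniText is illustrative only.

class MiniText(str):
    def __new__(cls, data, rawsource=None):
        # Keep rawsource out of str.__new__, as Text does.
        return str.__new__(cls, data)

    def __init__(self, data, rawsource=''):
        self.rawsource = rawsource

    def astext(self):
        return str(self)

t = MiniText('some *text*', rawsource='some \\*text\\*')
print(t.astext(), len(t), repr(t.rawsource))
```

Because the node is a real `str`, ordinary string operations (`len`, slicing, comparison) work on it directly, at the cost of `rawsource` being lost by methods such as `rstrip` that return new instances.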
+class Element(Node):
+
+ """
+ `Element` is the superclass to all specific elements.
+
+ Elements contain attributes and child nodes. Elements emulate
+ dictionaries for attributes, indexing by attribute name (a string). To
+ set the attribute 'att' to 'value', do::
+
+ element['att'] = 'value'
+
+ There are two special attributes: 'ids' and 'names'. Both are
+ lists of unique identifiers, and names serve as human interfaces
+ to IDs. Names are case- and whitespace-normalized (see the
+ fully_normalize_name() function), and IDs conform to the regular
+ expression ``[a-z](-?[a-z0-9]+)*`` (see the make_id() function).
+
+ Elements also emulate lists for child nodes (element nodes and/or text
+ nodes), indexing by integer. To get the first child node, use::
+
+ element[0]
+
+ Elements may be constructed using the ``+=`` operator. To add one new
+ child node to element, do::
+
+ element += node
+
+ This is equivalent to ``element.append(node)``.
+
+ To add a list of multiple child nodes at once, use the same ``+=``
+ operator::
+
+ element += [node1, node2]
+
+ This is equivalent to ``element.extend([node1, node2])``.
+ """
+
+ list_attributes = ('ids', 'classes', 'names', 'dupnames', 'backrefs')
+ """List attributes, automatically initialized to empty lists for
+ all nodes."""
+
+ tagname = None
+ """The element generic identifier. If None, it is set as an instance
+ attribute to the name of the class."""
+
+ child_text_separator = '\n\n'
+ """Separator for child nodes, used by `astext()` method."""
+
+ def __init__(self, rawsource='', *children, **attributes):
+ self.rawsource = rawsource
+ """The raw text from which this element was constructed."""
+
+ self.children = []
+ """List of child nodes (elements and/or `Text`)."""
+
+ self.extend(children) # maintain parent info
+
+ self.attributes = {}
+ """Dictionary of attribute {name: value}."""
+
+ # Initialize list attributes.
+ for att in self.list_attributes:
+ self.attributes[att] = []
+
+ for att, value in attributes.items():
+ att = att.lower()
+ if att in self.list_attributes:
+ # mutable list; make a copy for this node
+ self.attributes[att] = value[:]
+ else:
+ self.attributes[att] = value
+
+ if self.tagname is None:
+ self.tagname = self.__class__.__name__
+
+ def _dom_node(self, domroot):
+ element = domroot.createElement(self.tagname)
+ for attribute, value in self.attlist():
+ if isinstance(value, list):
+ value = ' '.join([serial_escape('%s' % v) for v in value])
+ element.setAttribute(attribute, '%s' % value)
+ for child in self.children:
+ element.appendChild(child._dom_node(domroot))
+ return element
+
+ def __repr__(self):
+ data = ''
+ for c in self.children:
+ data += c.shortrepr()
+ if len(data) > 60:
+ data = data[:56] + ' ...'
+ break
+ if self['names']:
+ return '<%s "%s": %s>' % (self.__class__.__name__,
+ '; '.join(self['names']), data)
+ else:
+ return '<%s: %s>' % (self.__class__.__name__, data)
+
+ def shortrepr(self):
+ if self['names']:
+ return '<%s "%s"...>' % (self.__class__.__name__,
+ '; '.join(self['names']))
+ else:
+ return '<%s...>' % self.tagname
+
+ def __unicode__(self):
+ if self.children:
+ return u'%s%s%s' % (self.starttag(),
+ ''.join([unicode(c) for c in self.children]),
+ self.endtag())
+ else:
+ return self.emptytag()
+
+ if sys.version_info > (3,):
+ # 2to3 doesn't convert __unicode__ to __str__
+ __str__ = __unicode__
+
+ def starttag(self):
+ parts = [self.tagname]
+ for name, value in self.attlist():
+ if value is None: # boolean attribute
+ parts.append(name)
+ elif isinstance(value, list):
+ values = [serial_escape('%s' % v) for v in value]
+ parts.append('%s="%s"' % (name, ' '.join(values)))
+ else:
+ parts.append('%s="%s"' % (name, value))
+ return '<%s>' % ' '.join(parts)
+
+ def endtag(self):
+ return '</%s>' % self.tagname
+
+ def emptytag(self):
+ return u'<%s/>' % ' '.join([self.tagname] +
+ ['%s="%s"' % (n, v)
+ for n, v in self.attlist()])
+
+ def __len__(self):
+ return len(self.children)
+
+ def __contains__(self, key):
+ # support both membership test for children and attributes
+ # (has_key is translated to "in" by 2to3)
+ if isinstance(key, basestring):
+ return key in self.attributes
+ return key in self.children
+
+ def __getitem__(self, key):
+ if isinstance(key, basestring):
+ return self.attributes[key]
+ elif isinstance(key, int):
+ return self.children[key]
+ elif isinstance(key, types.SliceType):
+ assert key.step in (None, 1), 'cannot handle slice with stride'
+ return self.children[key.start:key.stop]
+ else:
+ raise TypeError, ('element index must be an integer, a slice, or '
+ 'an attribute name string')
+
+ def __setitem__(self, key, item):
+ if isinstance(key, basestring):
+ self.attributes[str(key)] = item
+ elif isinstance(key, int):
+ self.setup_child(item)
+ self.children[key] = item
+ elif isinstance(key, types.SliceType):
+ assert key.step in (None, 1), 'cannot handle slice with stride'
+ for node in item:
+ self.setup_child(node)
+ self.children[key.start:key.stop] = item
+ else:
+ raise TypeError, ('element index must be an integer, a slice, or '
+ 'an attribute name string')
+
+ def __delitem__(self, key):
+ if isinstance(key, basestring):
+ del self.attributes[key]
+ elif isinstance(key, int):
+ del self.children[key]
+ elif isinstance(key, types.SliceType):
+ assert key.step in (None, 1), 'cannot handle slice with stride'
+ del self.children[key.start:key.stop]
+ else:
+ raise TypeError, ('element index must be an integer, a simple '
+ 'slice, or an attribute name string')
+
+ def __add__(self, other):
+ return self.children + other
+
+ def __radd__(self, other):
+ return other + self.children
+
+ def __iadd__(self, other):
+ """Append a node or a list of nodes to `self.children`."""
+ if isinstance(other, Node):
+ self.append(other)
+ elif other is not None:
+ self.extend(other)
+ return self
+
+ def astext(self):
+ return self.child_text_separator.join(
+ [child.astext() for child in self.children])
+
+ def non_default_attributes(self):
+ atts = {}
+ for key, value in self.attributes.items():
+ if self.is_not_default(key):
+ atts[key] = value
+ return atts
+
+ def attlist(self):
+ attlist = self.non_default_attributes().items()
+ attlist.sort()
+ return attlist
+
+ def get(self, key, failobj=None):
+ return self.attributes.get(key, failobj)
+
+ def hasattr(self, attr):
+ return attr in self.attributes
+
+ def delattr(self, attr):
+ if attr in self.attributes:
+ del self.attributes[attr]
+
+ def setdefault(self, key, failobj=None):
+ return self.attributes.setdefault(key, failobj)
+
+ has_key = hasattr
+
+ # support operator in
+ __contains__ = hasattr
+
+ def append(self, item):
+ self.setup_child(item)
+ self.children.append(item)
+
+ def extend(self, item):
+ for node in item:
+ self.append(node)
+
+ def insert(self, index, item):
+ if isinstance(item, Node):
+ self.setup_child(item)
+ self.children.insert(index, item)
+ elif item is not None:
+ self[index:index] = item
+
+ def pop(self, i=-1):
+ return self.children.pop(i)
+
+ def remove(self, item):
+ self.children.remove(item)
+
+ def index(self, item):
+ return self.children.index(item)
+
+ def is_not_default(self, key):
+ if self[key] == [] and key in self.list_attributes:
+ return 0
+ else:
+ return 1
+
+ def update_basic_atts(self, dict):
+ """
+ Update basic attributes ('ids', 'names', 'classes',
+ 'dupnames', but not 'source') from node or dictionary `dict`.
+ """
+ if isinstance(dict, Node):
+ dict = dict.attributes
+ for att in ('ids', 'classes', 'names', 'dupnames'):
+ for value in dict.get(att, []):
+ if not value in self[att]:
+ self[att].append(value)
+
+ def clear(self):
+ self.children = []
+
+ def replace(self, old, new):
+ """Replace one child `Node` with another child or children."""
+ index = self.index(old)
+ if isinstance(new, Node):
+ self.setup_child(new)
+ self[index] = new
+ elif new is not None:
+ self[index:index+1] = new
+
+ def replace_self(self, new):
+ """
+ Replace `self` node with `new`, where `new` is a node or a
+ list of nodes.
+ """
+ update = new
+ if not isinstance(new, Node):
+ # `new` is a list; update first child.
+ try:
+ update = new[0]
+ except IndexError:
+ update = None
+ if isinstance(update, Element):
+ update.update_basic_atts(self)
+ else:
+ # `update` is a Text node or `new` is an empty list.
+ # Assert that we aren't losing any attributes.
+ for att in ('ids', 'names', 'classes', 'dupnames'):
+ assert not self[att], \
+ 'Losing "%s" attribute: %s' % (att, self[att])
+ self.parent.replace(self, new)
+
+ def first_child_matching_class(self, childclass, start=0, end=sys.maxint):
+ """
+ Return the index of the first child whose class exactly matches.
+
+ Parameters:
+
+ - `childclass`: A `Node` subclass to search for, or a tuple of `Node`
+ classes. If a tuple, any of the classes may match.
+ - `start`: Initial index to check.
+ - `end`: Initial index to *not* check.
+ """
+ if not isinstance(childclass, tuple):
+ childclass = (childclass,)
+ for index in range(start, min(len(self), end)):
+ for c in childclass:
+ if isinstance(self[index], c):
+ return index
+ return None
+
+ def first_child_not_matching_class(self, childclass, start=0,
+ end=sys.maxint):
+ """
+ Return the index of the first child whose class does *not* match.
+
+ Parameters:
+
+ - `childclass`: A `Node` subclass to skip, or a tuple of `Node`
+ classes. If a tuple, none of the classes may match.
+ - `start`: Initial index to check.
+ - `end`: Initial index to *not* check.
+ """
+ if not isinstance(childclass, tuple):
+ childclass = (childclass,)
+ for index in range(start, min(len(self), end)):
+ for c in childclass:
+ if isinstance(self.children[index], c):
+ break
+ else:
+ return index
+ return None
+
+ def pformat(self, indent=' ', level=0):
+ return ''.join(['%s%s\n' % (indent * level, self.starttag())] +
+ [child.pformat(indent, level+1)
+ for child in self.children])
+
+ def copy(self):
+ return self.__class__(rawsource=self.rawsource, **self.attributes)
+
+ def deepcopy(self):
+ copy = self.copy()
+ copy.extend([child.deepcopy() for child in self.children])
+ return copy
+
+ def set_class(self, name):
+ """Add a new class to the "classes" attribute."""
+ warnings.warn('docutils.nodes.Element.set_class deprecated; '
+ "append to Element['classes'] list attribute directly",
+ DeprecationWarning, stacklevel=2)
+ assert ' ' not in name
+ self['classes'].append(name.lower())
+
+ def note_referenced_by(self, name=None, id=None):
+ """Note that this Element has been referenced by its name
+ `name` or id `id`."""
+ self.referenced = 1
+ # Element.expect_referenced_by_* dictionaries map names or ids
+ # to nodes whose ``referenced`` attribute is set to true as
+ # soon as this node is referenced by the given name or id.
+ # Needed for target propagation.
+ by_name = getattr(self, 'expect_referenced_by_name', {}).get(name)
+ by_id = getattr(self, 'expect_referenced_by_id', {}).get(id)
+ if by_name:
+ assert name is not None
+ by_name.referenced = 1
+ if by_id:
+ assert id is not None
+ by_id.referenced = 1
+
+
+class TextElement(Element):
+
+ """
+ An element which directly contains text.
+
+ Its children are all `Text` or `Inline` subclass nodes. You can
+ check whether an element's context is inline simply by checking whether
+ its immediate parent is a `TextElement` instance (including subclasses).
+ This is handy for nodes like `image` that can appear both inline and as
+ standalone body elements.
+
+ If passing children to `__init__()`, make sure to set `text` to
+ ``''`` or some other suitable value.
+ """
+
+ child_text_separator = ''
+ """Separator for child nodes, used by `astext()` method."""
+
+ def __init__(self, rawsource='', text='', *children, **attributes):
+ if text != '':
+ textnode = Text(text)
+ Element.__init__(self, rawsource, textnode, *children,
+ **attributes)
+ else:
+ Element.__init__(self, rawsource, *children, **attributes)
+
+
+class FixedTextElement(TextElement):
+
+ """An element which directly contains preformatted text."""
+
+ def __init__(self, rawsource='', text='', *children, **attributes):
+ TextElement.__init__(self, rawsource, text, *children, **attributes)
+ self.attributes['xml:space'] = 'preserve'
+
+
+# ========
+# Mixins
+# ========
+
+class Resolvable:
+
+ resolved = 0
+
+
+class BackLinkable:
+
+ def add_backref(self, refid):
+ self['backrefs'].append(refid)
+
+
+# ====================
+# Element Categories
+# ====================
+
+class Root: pass
+
+class Titular: pass
+
+class PreBibliographic:
+ """Category of Node which may occur before Bibliographic Nodes."""
+
+class Bibliographic: pass
+
+class Decorative(PreBibliographic): pass
+
+class Structural: pass
+
+class Body: pass
+
+class General(Body): pass
+
+class Sequential(Body):
+ """List-like elements."""
+
+class Admonition(Body): pass
+
+class Special(Body):
+ """Special internal body elements."""
+
+class Invisible(PreBibliographic):
+ """Internal elements that don't appear in output."""
+
+class Part: pass
+
+class Inline: pass
+
+class Referential(Resolvable): pass
+
+
+class Targetable(Resolvable):
+
+ referenced = 0
+
+ indirect_reference_name = None
+ """Holds the whitespace_normalized_name (contains mixed case) of a target.
+ Required for MoinMoin/reST compatibility."""
+
+
+class Labeled:
+ """Contains a `label` as its first element."""
+
+
+# ==============
+# Root Element
+# ==============
+
+class document(Root, Structural, Element):
+
+ """
+ The document root element.
+
+ Do not instantiate this class directly; use
+ `docutils.utils.new_document()` instead.
+ """
+
+ def __init__(self, settings, reporter, *args, **kwargs):
+ Element.__init__(self, *args, **kwargs)
+
+ self.current_source = None
+ """Path to or description of the input source being processed."""
+
+ self.current_line = None
+ """Line number (1-based) of `current_source`."""
+
+ self.settings = settings
+ """Runtime settings data record."""
+
+ self.reporter = reporter
+ """System message generator."""
+
+ self.indirect_targets = []
+ """List of indirect target nodes."""
+
+ self.substitution_defs = {}
+ """Mapping of substitution names to substitution_definition nodes."""
+
+ self.substitution_names = {}
+ """Mapping of case-normalized substitution names to case-sensitive
+ names."""
+
+ self.refnames = {}
+ """Mapping of names to lists of referencing nodes."""
+
+ self.refids = {}
+ """Mapping of ids to lists of referencing nodes."""
+
+ self.nameids = {}
+ """Mapping of names to unique id's."""
+
+ self.nametypes = {}
+ """Mapping of names to hyperlink type (boolean: True => explicit,
+        False => implicit)."""
+
+ self.ids = {}
+ """Mapping of ids to nodes."""
+
+ self.footnote_refs = {}
+ """Mapping of footnote labels to lists of footnote_reference nodes."""
+
+ self.citation_refs = {}
+ """Mapping of citation labels to lists of citation_reference nodes."""
+
+ self.autofootnotes = []
+ """List of auto-numbered footnote nodes."""
+
+ self.autofootnote_refs = []
+ """List of auto-numbered footnote_reference nodes."""
+
+ self.symbol_footnotes = []
+ """List of symbol footnote nodes."""
+
+ self.symbol_footnote_refs = []
+ """List of symbol footnote_reference nodes."""
+
+ self.footnotes = []
+ """List of manually-numbered footnote nodes."""
+
+ self.citations = []
+ """List of citation nodes."""
+
+ self.autofootnote_start = 1
+ """Initial auto-numbered footnote number."""
+
+ self.symbol_footnote_start = 0
+ """Initial symbol footnote symbol index."""
+
+ self.id_start = 1
+ """Initial ID number."""
+
+ self.parse_messages = []
+ """System messages generated while parsing."""
+
+ self.transform_messages = []
+ """System messages generated while applying transforms."""
+
+ import docutils.transforms
+ self.transformer = docutils.transforms.Transformer(self)
+ """Storage for transforms to be applied to this document."""
+
+ self.decoration = None
+ """Document's `decoration` node."""
+
+ self.document = self
+
+ def __getstate__(self):
+ """
+ Return dict with unpicklable references removed.
+ """
+ state = self.__dict__.copy()
+ state['reporter'] = None
+ state['transformer'] = None
+ return state
+
+ def asdom(self, dom=None):
+ """Return a DOM representation of this document."""
+ if dom is None:
+ import xml.dom.minidom as dom
+ domroot = dom.Document()
+ domroot.appendChild(self._dom_node(domroot))
+ return domroot
+
+ def set_id(self, node, msgnode=None):
+ for id in node['ids']:
+ if id in self.ids and self.ids[id] is not node:
+ msg = self.reporter.severe('Duplicate ID: "%s".' % id)
+ if msgnode != None:
+ msgnode += msg
+ if not node['ids']:
+ for name in node['names']:
+ id = self.settings.id_prefix + make_id(name)
+ if id and id not in self.ids:
+ break
+ else:
+ id = ''
+ while not id or id in self.ids:
+ id = (self.settings.id_prefix +
+ self.settings.auto_id_prefix + str(self.id_start))
+ self.id_start += 1
+ node['ids'].append(id)
+ self.ids[id] = node
+ return id
+
+ def set_name_id_map(self, node, id, msgnode=None, explicit=None):
+ """
+ `self.nameids` maps names to IDs, while `self.nametypes` maps names to
+ booleans representing hyperlink type (True==explicit,
+ False==implicit). This method updates the mappings.
+
+ The following state transition table shows how `self.nameids` ("ids")
+ and `self.nametypes` ("types") change with new input (a call to this
+ method), and what actions are performed ("implicit"-type system
+ messages are INFO/1, and "explicit"-type system messages are ERROR/3):
+
+ ==== ===== ======== ======== ======= ==== ===== =====
+ Old State Input Action New State Notes
+ ----------- -------- ----------------- ----------- -----
+ ids types new type sys.msg. dupname ids types
+ ==== ===== ======== ======== ======= ==== ===== =====
+ - - explicit - - new True
+ - - implicit - - new False
+ None False explicit - - new True
+ old False explicit implicit old new True
+ None True explicit explicit new None True
+ old True explicit explicit new,old None True [#]_
+ None False implicit implicit new None False
+ old False implicit implicit new,old None False
+ None True implicit implicit new None True
+ old True implicit implicit new old True
+ ==== ===== ======== ======== ======= ==== ===== =====
+
+ .. [#] Do not clear the name-to-id map or invalidate the old target if
+ both old and new targets are external and refer to identical URIs.
+ The new target is invalidated regardless.
+ """
+ for name in node['names']:
+ if name in self.nameids:
+ self.set_duplicate_name_id(node, id, name, msgnode, explicit)
+ else:
+ self.nameids[name] = id
+ self.nametypes[name] = explicit
+
+ def set_duplicate_name_id(self, node, id, name, msgnode, explicit):
+ old_id = self.nameids[name]
+ old_explicit = self.nametypes[name]
+ self.nametypes[name] = old_explicit or explicit
+ if explicit:
+ if old_explicit:
+ level = 2
+ if old_id is not None:
+ old_node = self.ids[old_id]
+ if 'refuri' in node:
+ refuri = node['refuri']
+ if old_node['names'] \
+ and 'refuri' in old_node \
+ and old_node['refuri'] == refuri:
+ level = 1 # just inform if refuri's identical
+ if level > 1:
+ dupname(old_node, name)
+ self.nameids[name] = None
+ msg = self.reporter.system_message(
+ level, 'Duplicate explicit target name: "%s".' % name,
+ backrefs=[id], base_node=node)
+ if msgnode != None:
+ msgnode += msg
+ dupname(node, name)
+ else:
+ self.nameids[name] = id
+ if old_id is not None:
+ old_node = self.ids[old_id]
+ dupname(old_node, name)
+ else:
+ if old_id is not None and not old_explicit:
+ self.nameids[name] = None
+ old_node = self.ids[old_id]
+ dupname(old_node, name)
+ dupname(node, name)
+ if not explicit or (not old_explicit and old_id is not None):
+ msg = self.reporter.info(
+ 'Duplicate implicit target name: "%s".' % name,
+ backrefs=[id], base_node=node)
+ if msgnode != None:
+ msgnode += msg
+
+ def has_name(self, name):
+ return name in self.nameids
+
+ # "note" here is an imperative verb: "take note of".
+ def note_implicit_target(self, target, msgnode=None):
+ id = self.set_id(target, msgnode)
+ self.set_name_id_map(target, id, msgnode, explicit=None)
+
+ def note_explicit_target(self, target, msgnode=None):
+ id = self.set_id(target, msgnode)
+ self.set_name_id_map(target, id, msgnode, explicit=1)
+
+ def note_refname(self, node):
+ self.refnames.setdefault(node['refname'], []).append(node)
+
+ def note_refid(self, node):
+ self.refids.setdefault(node['refid'], []).append(node)
+
+ def note_indirect_target(self, target):
+ self.indirect_targets.append(target)
+ if target['names']:
+ self.note_refname(target)
+
+ def note_anonymous_target(self, target):
+ self.set_id(target)
+
+ def note_autofootnote(self, footnote):
+ self.set_id(footnote)
+ self.autofootnotes.append(footnote)
+
+ def note_autofootnote_ref(self, ref):
+ self.set_id(ref)
+ self.autofootnote_refs.append(ref)
+
+ def note_symbol_footnote(self, footnote):
+ self.set_id(footnote)
+ self.symbol_footnotes.append(footnote)
+
+ def note_symbol_footnote_ref(self, ref):
+ self.set_id(ref)
+ self.symbol_footnote_refs.append(ref)
+
+ def note_footnote(self, footnote):
+ self.set_id(footnote)
+ self.footnotes.append(footnote)
+
+ def note_footnote_ref(self, ref):
+ self.set_id(ref)
+ self.footnote_refs.setdefault(ref['refname'], []).append(ref)
+ self.note_refname(ref)
+
+ def note_citation(self, citation):
+ self.citations.append(citation)
+
+ def note_citation_ref(self, ref):
+ self.set_id(ref)
+ self.citation_refs.setdefault(ref['refname'], []).append(ref)
+ self.note_refname(ref)
+
+ def note_substitution_def(self, subdef, def_name, msgnode=None):
+ name = whitespace_normalize_name(def_name)
+ if name in self.substitution_defs:
+ msg = self.reporter.error(
+ 'Duplicate substitution definition name: "%s".' % name,
+ base_node=subdef)
+ if msgnode != None:
+ msgnode += msg
+ oldnode = self.substitution_defs[name]
+ dupname(oldnode, name)
+ # keep only the last definition:
+ self.substitution_defs[name] = subdef
+ # case-insensitive mapping:
+ self.substitution_names[fully_normalize_name(name)] = name
+
+ def note_substitution_ref(self, subref, refname):
+ subref['refname'] = whitespace_normalize_name(refname)
+
+ def note_pending(self, pending, priority=None):
+ self.transformer.add_pending(pending, priority)
+
+ def note_parse_message(self, message):
+ self.parse_messages.append(message)
+
+ def note_transform_message(self, message):
+ self.transform_messages.append(message)
+
+ def note_source(self, source, offset):
+ self.current_source = source
+ if offset is None:
+ self.current_line = offset
+ else:
+ self.current_line = offset + 1
+
+ def copy(self):
+ return self.__class__(self.settings, self.reporter,
+ **self.attributes)
+
+ def get_decoration(self):
+ if not self.decoration:
+ self.decoration = decoration()
+ index = self.first_child_not_matching_class(Titular)
+ if index is None:
+ self.append(self.decoration)
+ else:
+ self.insert(index, self.decoration)
+ return self.decoration
+
+
+# ================
+# Title Elements
+# ================
+
+class title(Titular, PreBibliographic, TextElement): pass
+class subtitle(Titular, PreBibliographic, TextElement): pass
+class rubric(Titular, TextElement): pass
+
+
+# ========================
+# Bibliographic Elements
+# ========================
+
+class docinfo(Bibliographic, Element): pass
+class author(Bibliographic, TextElement): pass
+class authors(Bibliographic, Element): pass
+class organization(Bibliographic, TextElement): pass
+class address(Bibliographic, FixedTextElement): pass
+class contact(Bibliographic, TextElement): pass
+class version(Bibliographic, TextElement): pass
+class revision(Bibliographic, TextElement): pass
+class status(Bibliographic, TextElement): pass
+class date(Bibliographic, TextElement): pass
+class copyright(Bibliographic, TextElement): pass
+
+
+# =====================
+# Decorative Elements
+# =====================
+
+class decoration(Decorative, Element):
+
+ def get_header(self):
+ if not len(self.children) or not isinstance(self.children[0], header):
+ self.insert(0, header())
+ return self.children[0]
+
+ def get_footer(self):
+ if not len(self.children) or not isinstance(self.children[-1], footer):
+ self.append(footer())
+ return self.children[-1]
+
+
+class header(Decorative, Element): pass
+class footer(Decorative, Element): pass
+
+
+# =====================
+# Structural Elements
+# =====================
+
+class section(Structural, Element): pass
+
+
+class topic(Structural, Element):
+
+ """
+ Topics are terminal, "leaf" mini-sections, like block quotes with titles,
+ or textual figures. A topic is just like a section, except that it has no
+ subsections, and it doesn't have to conform to section placement rules.
+
+ Topics are allowed wherever body elements (list, table, etc.) are allowed,
+ but only at the top level of a section or document. Topics cannot nest
+ inside topics, sidebars, or body elements; you can't have a topic inside a
+ table, list, block quote, etc.
+ """
+
+
+class sidebar(Structural, Element):
+
+ """
+ Sidebars are like miniature, parallel documents that occur inside other
+ documents, providing related or reference material. A sidebar is
+ typically offset by a border and "floats" to the side of the page; the
+ document's main text may flow around it. Sidebars can also be likened to
+ super-footnotes; their content is outside of the flow of the document's
+ main text.
+
+ Sidebars are allowed wherever body elements (list, table, etc.) are
+ allowed, but only at the top level of a section or document. Sidebars
+ cannot nest inside sidebars, topics, or body elements; you can't have a
+ sidebar inside a table, list, block quote, etc.
+ """
+
+
+class transition(Structural, Element): pass
+
+
+# ===============
+# Body Elements
+# ===============
+
+class paragraph(General, TextElement): pass
+class compound(General, Element): pass
+class container(General, Element): pass
+class bullet_list(Sequential, Element): pass
+class enumerated_list(Sequential, Element): pass
+class list_item(Part, Element): pass
+class definition_list(Sequential, Element): pass
+class definition_list_item(Part, Element): pass
+class term(Part, TextElement): pass
+class classifier(Part, TextElement): pass
+class definition(Part, Element): pass
+class field_list(Sequential, Element): pass
+class field(Part, Element): pass
+class field_name(Part, TextElement): pass
+class field_body(Part, Element): pass
+
+
+class option(Part, Element):
+
+ child_text_separator = ''
+
+
+class option_argument(Part, TextElement):
+
+ def astext(self):
+ return self.get('delimiter', ' ') + TextElement.astext(self)
+
+
+class option_group(Part, Element):
+
+ child_text_separator = ', '
+
+
+class option_list(Sequential, Element): pass
+
+
+class option_list_item(Part, Element):
+
+ child_text_separator = ' '
+
+
+class option_string(Part, TextElement): pass
+class description(Part, Element): pass
+class literal_block(General, FixedTextElement): pass
+class doctest_block(General, FixedTextElement): pass
+class line_block(General, Element): pass
+
+
+class line(Part, TextElement):
+
+ indent = None
+
+
+class block_quote(General, Element): pass
+class attribution(Part, TextElement): pass
+class attention(Admonition, Element): pass
+class caution(Admonition, Element): pass
+class danger(Admonition, Element): pass
+class error(Admonition, Element): pass
+class important(Admonition, Element): pass
+class note(Admonition, Element): pass
+class tip(Admonition, Element): pass
+class hint(Admonition, Element): pass
+class warning(Admonition, Element): pass
+class admonition(Admonition, Element): pass
+class comment(Special, Invisible, FixedTextElement): pass
+class substitution_definition(Special, Invisible, TextElement): pass
+class target(Special, Invisible, Inline, TextElement, Targetable): pass
+class footnote(General, BackLinkable, Element, Labeled, Targetable): pass
+class citation(General, BackLinkable, Element, Labeled, Targetable): pass
+class label(Part, TextElement): pass
+class figure(General, Element): pass
+class caption(Part, TextElement): pass
+class legend(Part, Element): pass
+class table(General, Element): pass
+class tgroup(Part, Element): pass
+class colspec(Part, Element): pass
+class thead(Part, Element): pass
+class tbody(Part, Element): pass
+class row(Part, Element): pass
+class entry(Part, Element): pass
+
+
+class system_message(Special, BackLinkable, PreBibliographic, Element):
+
+ """
+ System message element.
+
+ Do not instantiate this class directly; use
+ ``document.reporter.info/warning/error/severe()`` instead.
+ """
+
+ def __init__(self, message=None, *children, **attributes):
+ if message:
+ p = paragraph('', message)
+ children = (p,) + children
+ try:
+ Element.__init__(self, '', *children, **attributes)
+ except:
+ print 'system_message: children=%r' % (children,)
+ raise
+
+ def astext(self):
+ line = self.get('line', '')
+ return u'%s:%s: (%s/%s) %s' % (self['source'], line, self['type'],
+ self['level'], Element.astext(self))
+
+
+class pending(Special, Invisible, Element):
+
+ """
+ The "pending" element is used to encapsulate a pending operation: the
+ operation (transform), the point at which to apply it, and any data it
+ requires. Only the pending operation's location within the document is
+ stored in the public document tree (by the "pending" object itself); the
+ operation and its data are stored in the "pending" object's internal
+ instance attributes.
+
+ For example, say you want a table of contents in your reStructuredText
+ document. The easiest way to specify where to put it is from within the
+ document, with a directive::
+
+ .. contents::
+
+ But the "contents" directive can't do its work until the entire document
+ has been parsed and possibly transformed to some extent. So the directive
+ code leaves a placeholder behind that will trigger the second phase of its
+ processing, something like this::
+
+ <pending ...public attributes...> + internal attributes
+
+ Use `document.note_pending()` so that the
+ `docutils.transforms.Transformer` stage of processing can run all pending
+ transforms.
+ """
+
+ def __init__(self, transform, details=None,
+ rawsource='', *children, **attributes):
+ Element.__init__(self, rawsource, *children, **attributes)
+
+ self.transform = transform
+ """The `docutils.transforms.Transform` class implementing the pending
+ operation."""
+
+ self.details = details or {}
+ """Detail data (dictionary) required by the pending operation."""
+
+ def pformat(self, indent=' ', level=0):
+ internals = [
+ '.. internal attributes:',
+ ' .transform: %s.%s' % (self.transform.__module__,
+ self.transform.__name__),
+ ' .details:']
+ details = self.details.items()
+ details.sort()
+ for key, value in details:
+ if isinstance(value, Node):
+ internals.append('%7s%s:' % ('', key))
+ internals.extend(['%9s%s' % ('', line)
+ for line in value.pformat().splitlines()])
+ elif value and isinstance(value, list) \
+ and isinstance(value[0], Node):
+ internals.append('%7s%s:' % ('', key))
+ for v in value:
+ internals.extend(['%9s%s' % ('', line)
+ for line in v.pformat().splitlines()])
+ else:
+ internals.append('%7s%s: %r' % ('', key, value))
+ return (Element.pformat(self, indent, level)
+ + ''.join([(' %s%s\n' % (indent * level, line))
+ for line in internals]))
+
+ def copy(self):
+ return self.__class__(self.transform, self.details, self.rawsource,
+ **self.attributes)
+
+
+class raw(Special, Inline, PreBibliographic, FixedTextElement):
+
+ """
+ Raw data that is to be passed untouched to the Writer.
+ """
+
+ pass
+
+
+# =================
+# Inline Elements
+# =================
+
+class emphasis(Inline, TextElement): pass
+class strong(Inline, TextElement): pass
+class literal(Inline, TextElement): pass
+class reference(General, Inline, Referential, TextElement): pass
+class footnote_reference(Inline, Referential, TextElement): pass
+class citation_reference(Inline, Referential, TextElement): pass
+class substitution_reference(Inline, TextElement): pass
+class title_reference(Inline, TextElement): pass
+class abbreviation(Inline, TextElement): pass
+class acronym(Inline, TextElement): pass
+class superscript(Inline, TextElement): pass
+class subscript(Inline, TextElement): pass
+
+
+class image(General, Inline, Element):
+
+ def astext(self):
+ return self.get('alt', '')
+
+
+class inline(Inline, TextElement): pass
+class problematic(Inline, TextElement): pass
+class generated(Inline, TextElement): pass
+
+
+# ========================================
+# Auxiliary Classes, Functions, and Data
+# ========================================
+
+node_class_names = """
+ Text
+ abbreviation acronym address admonition attention attribution author
+ authors
+ block_quote bullet_list
+ caption caution citation citation_reference classifier colspec comment
+ compound contact container copyright
+ danger date decoration definition definition_list definition_list_item
+ description docinfo doctest_block document
+ emphasis entry enumerated_list error
+ field field_body field_list field_name figure footer
+ footnote footnote_reference
+ generated
+ header hint
+ image important inline
+ label legend line line_block list_item literal literal_block
+ note
+ option option_argument option_group option_list option_list_item
+ option_string organization
+ paragraph pending problematic
+ raw reference revision row rubric
+ section sidebar status strong subscript substitution_definition
+ substitution_reference subtitle superscript system_message
+ table target tbody term tgroup thead tip title title_reference topic
+ transition
+ version
+ warning""".split()
+"""A list of names of all concrete Node subclasses."""
+
+
+class NodeVisitor:
+
+ """
+ "Visitor" pattern [GoF95]_ abstract superclass implementation for
+ document tree traversals.
+
+ Each node class has corresponding methods, doing nothing by
+ default; override individual methods for specific and useful
+ behaviour. The `dispatch_visit()` method is called by
+ `Node.walk()` upon entering a node. `Node.walkabout()` also calls
+ the `dispatch_departure()` method before exiting a node.
+
+ The dispatch methods call "``visit_`` + node class name" or
+    "``depart_`` + node class name", respectively.
+
+ This is a base class for visitors whose ``visit_...`` & ``depart_...``
+ methods should be implemented for *all* node types encountered (such as
+ for `docutils.writers.Writer` subclasses). Unimplemented methods will
+ raise exceptions.
+
+ For sparse traversals, where only certain node types are of interest,
+ subclass `SparseNodeVisitor` instead. When (mostly or entirely) uniform
+ processing is desired, subclass `GenericNodeVisitor`.
+
+ .. [GoF95] Gamma, Helm, Johnson, Vlissides. *Design Patterns: Elements of
+ Reusable Object-Oriented Software*. Addison-Wesley, Reading, MA, USA,
+ 1995.
+ """
+
+ optional = ()
+ """
+ Tuple containing node class names (as strings).
+
+ No exception will be raised if writers do not implement visit
+ or departure functions for these node classes.
+
+ Used to ensure transitional compatibility with existing 3rd-party writers.
+ """
+
+ def __init__(self, document):
+ self.document = document
+
+ def dispatch_visit(self, node):
+ """
+ Call self."``visit_`` + node class name" with `node` as
+ parameter. If the ``visit_...`` method does not exist, call
+ self.unknown_visit.
+ """
+ node_name = node.__class__.__name__
+ method = getattr(self, 'visit_' + node_name, self.unknown_visit)
+ self.document.reporter.debug(
+ 'docutils.nodes.NodeVisitor.dispatch_visit calling %s for %s'
+ % (method.__name__, node_name))
+ return method(node)
+
+ def dispatch_departure(self, node):
+ """
+ Call self."``depart_`` + node class name" with `node` as
+ parameter. If the ``depart_...`` method does not exist, call
+ self.unknown_departure.
+ """
+ node_name = node.__class__.__name__
+ method = getattr(self, 'depart_' + node_name, self.unknown_departure)
+ self.document.reporter.debug(
+ 'docutils.nodes.NodeVisitor.dispatch_departure calling %s for %s'
+ % (method.__name__, node_name))
+ return method(node)
+
+ def unknown_visit(self, node):
+ """
+ Called when entering unknown `Node` types.
+
+ Raise an exception unless overridden.
+ """
+ if (self.document.settings.strict_visitor
+ or node.__class__.__name__ not in self.optional):
+ raise NotImplementedError(
+ '%s visiting unknown node type: %s'
+ % (self.__class__, node.__class__.__name__))
+
+ def unknown_departure(self, node):
+ """
+ Called before exiting unknown `Node` types.
+
+        Raise an exception unless overridden.
+ """
+ if (self.document.settings.strict_visitor
+ or node.__class__.__name__ not in self.optional):
+ raise NotImplementedError(
+ '%s departing unknown node type: %s'
+ % (self.__class__, node.__class__.__name__))
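The `getattr` fallback used by `dispatch_visit()`/`dispatch_departure()` can be sketched in isolation. `ToyVisitor` and the toy node classes below are illustrative stand-ins, not docutils API:

```python
class paragraph: pass
class comment: pass

class ToyVisitor:
    def visit_paragraph(self, node):
        return 'paragraph'

    def unknown_visit(self, node):
        return 'unknown:' + node.__class__.__name__

    def dispatch_visit(self, node):
        # Same lookup as NodeVisitor.dispatch_visit: try the
        # node-specific method, fall back to unknown_visit.
        method = getattr(self, 'visit_' + node.__class__.__name__,
                         self.unknown_visit)
        return method(node)

v = ToyVisitor()
print(v.dispatch_visit(paragraph()))   # paragraph
print(v.dispatch_visit(comment()))     # unknown:comment
```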
+
+
+class SparseNodeVisitor(NodeVisitor):
+
+ """
+ Base class for sparse traversals, where only certain node types are of
+ interest. When ``visit_...`` & ``depart_...`` methods should be
+ implemented for *all* node types (such as for `docutils.writers.Writer`
+ subclasses), subclass `NodeVisitor` instead.
+ """
+
+
+class GenericNodeVisitor(NodeVisitor):
+
+ """
+ Generic "Visitor" abstract superclass, for simple traversals.
+
+ Unless overridden, each ``visit_...`` method calls `default_visit()`, and
+ each ``depart_...`` method (when using `Node.walkabout()`) calls
+ `default_departure()`. `default_visit()` (and `default_departure()`) must
+ be overridden in subclasses.
+
+ Define fully generic visitors by overriding `default_visit()` (and
+ `default_departure()`) only. Define semi-generic visitors by overriding
+ individual ``visit_...()`` (and ``depart_...()``) methods also.
+
+ `NodeVisitor.unknown_visit()` (`NodeVisitor.unknown_departure()`) should
+ be overridden for default behavior.
+ """
+
+ def default_visit(self, node):
+ """Override for generic, uniform traversals."""
+ raise NotImplementedError
+
+ def default_departure(self, node):
+ """Override for generic, uniform traversals."""
+ raise NotImplementedError
+
+def _call_default_visit(self, node):
+ self.default_visit(node)
+
+def _call_default_departure(self, node):
+ self.default_departure(node)
+
+def _nop(self, node):
+ pass
+
+def _add_node_class_names(names):
+ """Save typing with dynamic assignments:"""
+ for _name in names:
+ setattr(GenericNodeVisitor, "visit_" + _name, _call_default_visit)
+ setattr(GenericNodeVisitor, "depart_" + _name, _call_default_departure)
+ setattr(SparseNodeVisitor, 'visit_' + _name, _nop)
+ setattr(SparseNodeVisitor, 'depart_' + _name, _nop)
+
+_add_node_class_names(node_class_names)
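The dynamic-assignment trick `_add_node_class_names` relies on (one shared function object becoming the `visit_...` method for every node class name) can be shown in miniature. All names here are illustrative, not docutils API:

```python
class ToyGenericVisitor:
    def default_visit(self, node):
        return 'default:' + node.__class__.__name__

def _call_default_visit(self, node):
    # Shared implementation installed under many method names.
    return self.default_visit(node)

# One setattr per node class name, as in _add_node_class_names.
for _name in ('paragraph', 'section', 'title'):
    setattr(ToyGenericVisitor, 'visit_' + _name, _call_default_visit)

class paragraph: pass

v = ToyGenericVisitor()
print(v.visit_paragraph(paragraph()))  # default:paragraph
print(hasattr(v, 'visit_section'))     # True
```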
+
+
+class TreeCopyVisitor(GenericNodeVisitor):
+
+ """
+ Make a complete copy of a tree or branch, including element attributes.
+ """
+
+ def __init__(self, document):
+ GenericNodeVisitor.__init__(self, document)
+ self.parent_stack = []
+ self.parent = []
+
+ def get_tree_copy(self):
+ return self.parent[0]
+
+ def default_visit(self, node):
+ """Copy the current node, and make it the new acting parent."""
+ newnode = node.copy()
+ self.parent.append(newnode)
+ self.parent_stack.append(self.parent)
+ self.parent = newnode
+
+ def default_departure(self, node):
+ """Restore the previous acting parent."""
+ self.parent = self.parent_stack.pop()
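The parent-stack bookkeeping above (on entry the copy becomes the acting parent; on exit the previous parent is restored) can be re-created on a toy tree. `ToyNode` and the recursive `walkabout()` driver are illustrative stand-ins for the docutils equivalents:

```python
class ToyNode:
    def __init__(self, name):
        self.name = name
        self.children = []

    def append(self, child):
        self.children.append(child)

    def copy(self):
        return ToyNode(self.name)   # attributes only, no children

class ToyCopier:
    def __init__(self):
        self.parent_stack = []
        self.parent = []            # a list, so the root copy lands at index 0

    def get_tree_copy(self):
        return self.parent[0]

    def visit(self, node):
        newnode = node.copy()
        self.parent.append(newnode)            # attach to acting parent
        self.parent_stack.append(self.parent)
        self.parent = newnode                  # copy becomes acting parent

    def depart(self, node):
        self.parent = self.parent_stack.pop()  # restore previous parent

def walkabout(node, visitor):
    visitor.visit(node)
    for child in node.children:
        walkabout(child, visitor)
    visitor.depart(node)

root = ToyNode('document')
sect = ToyNode('section')
sect.append(ToyNode('title'))
root.append(sect)

copier = ToyCopier()
walkabout(root, copier)
dup = copier.get_tree_copy()
print(dup.name, dup.children[0].children[0].name)  # document title
```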
+
+
+class TreePruningException(Exception):
+
+ """
+ Base class for `NodeVisitor`-related tree pruning exceptions.
+
+ Raise subclasses from within ``visit_...`` or ``depart_...`` methods
+ called from `Node.walk()` and `Node.walkabout()` tree traversals to prune
+ the tree traversed.
+ """
+
+ pass
+
+
+class SkipChildren(TreePruningException):
+
+ """
+ Do not visit any children of the current node. The current node's
+ siblings and ``depart_...`` method are not affected.
+ """
+
+ pass
+
+
+class SkipSiblings(TreePruningException):
+
+ """
+ Do not visit any more siblings (to the right) of the current node. The
+ current node's children and its ``depart_...`` method are not affected.
+ """
+
+ pass
+
+
+class SkipNode(TreePruningException):
+
+ """
+ Do not visit the current node's children, and do not call the current
+ node's ``depart_...`` method.
+ """
+
+ pass
+
+
+class SkipDeparture(TreePruningException):
+
+ """
+ Do not call the current node's ``depart_...`` method. The current node's
+ children and siblings are not affected.
+ """
+
+ pass
+
+
+class NodeFound(TreePruningException):
+
+ """
+ Raise to indicate that the target of a search has been found. This
+ exception must be caught by the client; it is not caught by the traversal
+ code.
+ """
+
+ pass
+
+
+class StopTraversal(TreePruningException):
+
+ """
+    Stop the traversal altogether.  The current node's ``depart_...`` method
+    is not affected.  The parent nodes' ``depart_...`` methods are also called
+ as usual. No other nodes are visited. This is an alternative to
+ NodeFound that does not cause exception handling to trickle up to the
+ caller.
+ """
+
+ pass
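The control flow these pruning exceptions enable can be sketched with a toy `walk()` that catches a `SkipChildren`-style exception raised from the visit callback. All classes and names below are illustrative, not docutils API:

```python
class TreePruning(Exception): pass
class SkipKids(TreePruning): pass

class ToyNode:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def walk(node, visit, seen):
    seen.append(node.name)
    try:
        visit(node)
    except SkipKids:
        return                      # prune: children are never visited
    for child in node.children:
        walk(child, visit, seen)

tree = ToyNode('root', [ToyNode('comment', [ToyNode('text')]),
                        ToyNode('paragraph')])

def visit(node):
    if node.name == 'comment':
        raise SkipKids              # don't descend into comments

seen = []
walk(tree, visit, seen)
print(seen)  # ['root', 'comment', 'paragraph'] -- 'text' is skipped
```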
+
+
+def make_id(string):
+ """
+ Convert `string` into an identifier and return it.
+
+ Docutils identifiers will conform to the regular expression
+ ``[a-z](-?[a-z0-9]+)*``. For CSS compatibility, identifiers (the "class"
+ and "id" attributes) should have no underscores, colons, or periods.
+ Hyphens may be used.
+
+ - The `HTML 4.01 spec`_ defines identifiers based on SGML tokens:
+
+ ID and NAME tokens must begin with a letter ([A-Za-z]) and may be
+ followed by any number of letters, digits ([0-9]), hyphens ("-"),
+ underscores ("_"), colons (":"), and periods (".").
+
+    - However, the `CSS1 spec`_ defines identifiers based on the "name" token,
+ a tighter interpretation ("flex" tokenizer notation; "latin1" and
+ "escape" 8-bit characters have been replaced with entities)::
+
+ unicode \\[0-9a-f]{1,4}
+ latin1 [¡-ÿ]
+ escape {unicode}|\\[ -~¡-ÿ]
+ nmchar [-a-z0-9]|{latin1}|{escape}
+ name {nmchar}+
+
+ The CSS1 "nmchar" rule does not include underscores ("_"), colons (":"),
+ or periods ("."), therefore "class" and "id" attributes should not contain
+ these characters. They should be replaced with hyphens ("-"). Combined
+ with HTML's requirements (the first character must be a letter; no
+ "unicode", "latin1", or "escape" characters), this results in the
+ ``[a-z](-?[a-z0-9]+)*`` pattern.
+
+ .. _HTML 4.01 spec: http://www.w3.org/TR/html401
+ .. _CSS1 spec: http://www.w3.org/TR/REC-CSS1
+ """
+ id = string.lower()
+ if not isinstance(id, unicode):
+ id = id.decode()
+ id = id.translate(_non_id_translate_digraphs)
+ id = id.translate(_non_id_translate)
+ # get rid of non-ascii characters.
+ # 'ascii' lowercase to prevent problems with turkish locale.
+ id = unicodedata.normalize('NFKD', id).\
+ encode('ascii', 'ignore').decode('ascii')
+ # shrink runs of whitespace and replace by hyphen
+ id = _non_id_chars.sub('-', ' '.join(id.split()))
+ id = _non_id_at_ends.sub('', id)
+ return str(id)
+
+_non_id_chars = re.compile('[^a-z0-9]+')
+_non_id_at_ends = re.compile('^[-0-9]+|-+$')
+_non_id_translate = {
+ 0x00f8: u'o', # o with stroke
+ 0x0111: u'd', # d with stroke
+ 0x0127: u'h', # h with stroke
+ 0x0131: u'i', # dotless i
+ 0x0142: u'l', # l with stroke
+ 0x0167: u't', # t with stroke
+ 0x0180: u'b', # b with stroke
+ 0x0183: u'b', # b with topbar
+ 0x0188: u'c', # c with hook
+ 0x018c: u'd', # d with topbar
+ 0x0192: u'f', # f with hook
+ 0x0199: u'k', # k with hook
+ 0x019a: u'l', # l with bar
+ 0x019e: u'n', # n with long right leg
+ 0x01a5: u'p', # p with hook
+ 0x01ab: u't', # t with palatal hook
+ 0x01ad: u't', # t with hook
+ 0x01b4: u'y', # y with hook
+ 0x01b6: u'z', # z with stroke
+ 0x01e5: u'g', # g with stroke
+ 0x0225: u'z', # z with hook
+ 0x0234: u'l', # l with curl
+ 0x0235: u'n', # n with curl
+ 0x0236: u't', # t with curl
+ 0x0237: u'j', # dotless j
+ 0x023c: u'c', # c with stroke
+ 0x023f: u's', # s with swash tail
+ 0x0240: u'z', # z with swash tail
+ 0x0247: u'e', # e with stroke
+ 0x0249: u'j', # j with stroke
+ 0x024b: u'q', # q with hook tail
+ 0x024d: u'r', # r with stroke
+ 0x024f: u'y', # y with stroke
+}
+_non_id_translate_digraphs = {
+ 0x00df: u'sz', # ligature sz
+ 0x00e6: u'ae', # ae
+ 0x0153: u'oe', # ligature oe
+ 0x0238: u'db', # db digraph
+ 0x0239: u'qp', # qp digraph
+}
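For ASCII input, the effect of `make_id()` above can be sketched without the Unicode translation tables (this simplified version omits the digraph/stroke mappings and NFKD step, so it only matches `make_id` on ASCII strings):

```python
import re

_non_id_chars = re.compile('[^a-z0-9]+')
_non_id_at_ends = re.compile('^[-0-9]+|-+$')

def simple_make_id(string):
    # ASCII-only sketch: lowercase, collapse runs of
    # non-alphanumerics into hyphens, trim leading digits/hyphens
    # and trailing hyphens.
    id = string.lower()
    id = _non_id_chars.sub('-', ' '.join(id.split()))
    return _non_id_at_ends.sub('', id)

print(simple_make_id('A  Section -- Title!'))  # a-section-title
print(simple_make_id('2nd try'))               # nd-try
```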
+
+def dupname(node, name):
+ node['dupnames'].append(name)
+ node['names'].remove(name)
+ # Assume that this method is referenced, even though it isn't; we
+ # don't want to throw unnecessary system_messages.
+ node.referenced = 1
+
+def fully_normalize_name(name):
+ """Return a case- and whitespace-normalized name."""
+ return ' '.join(name.lower().split())
+
+def whitespace_normalize_name(name):
+ """Return a whitespace-normalized name."""
+ return ' '.join(name.split())
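The difference between the two normalizers above is only case-folding; reimplemented here so the snippet is self-contained:

```python
def fully_normalize_name(name):
    # Case- and whitespace-normalized: used for case-insensitive
    # lookups such as substitution_names.
    return ' '.join(name.lower().split())

def whitespace_normalize_name(name):
    # Whitespace-normalized only: case is preserved.
    return ' '.join(name.split())

print(fully_normalize_name('A   Section\tTitle'))       # a section title
print(whitespace_normalize_name('A   Section\tTitle'))  # A Section Title
```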
+
+def serial_escape(value):
+ """Escape string values that are elements of a list, for serialization."""
+ return value.replace('\\', r'\\').replace(' ', r'\ ')
+
+#
+#
+# Local Variables:
+# indent-tabs-mode: nil
+# sentence-end-double-space: t
+# fill-column: 78
+# End:
diff --git a/python/helpers/docutils/parsers/__init__.py b/python/helpers/docutils/parsers/__init__.py
new file mode 100644
index 0000000..0096d32
--- /dev/null
+++ b/python/helpers/docutils/parsers/__init__.py
@@ -0,0 +1,47 @@
+# $Id: __init__.py 5618 2008-07-28 08:37:32Z strank $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains Docutils parser modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import Component
+
+
+class Parser(Component):
+
+ component_type = 'parser'
+ config_section = 'parsers'
+
+ def parse(self, inputstring, document):
+ """Override to parse `inputstring` into document tree `document`."""
+ raise NotImplementedError('subclass must override this method')
+
+ def setup_parse(self, inputstring, document):
+ """Initial parse setup. Call at start of `self.parse()`."""
+ self.inputstring = inputstring
+ self.document = document
+ document.reporter.attach_observer(document.note_parse_message)
+
+ def finish_parse(self):
+ """Finalize parse details. Call at end of `self.parse()`."""
+ self.document.reporter.detach_observer(
+ self.document.note_parse_message)
+
+
+_parser_aliases = {
+ 'restructuredtext': 'rst',
+ 'rest': 'rst',
+ 'restx': 'rst',
+ 'rtxt': 'rst',}
+
+def get_parser_class(parser_name):
+ """Return the Parser class from the `parser_name` module."""
+ parser_name = parser_name.lower()
+ if parser_name in _parser_aliases:
+ parser_name = _parser_aliases[parser_name]
+ module = __import__(parser_name, globals(), locals())
+ return module.Parser
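The alias-resolution half of `get_parser_class` (everything before the `__import__`) can be exercised on its own; `resolve_parser_name` is a hypothetical helper name introduced here for illustration:

```python
_parser_aliases = {
    'restructuredtext': 'rst',
    'rest': 'rst',
    'restx': 'rst',
    'rtxt': 'rst'}

def resolve_parser_name(parser_name):
    # Case-fold, then map known aliases onto the canonical
    # module name; unknown names pass through unchanged.
    parser_name = parser_name.lower()
    return _parser_aliases.get(parser_name, parser_name)

print(resolve_parser_name('reStructuredText'))  # rst
print(resolve_parser_name('null'))              # null
```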
diff --git a/python/helpers/docutils/parsers/null.py b/python/helpers/docutils/parsers/null.py
new file mode 100644
index 0000000..238c450
--- /dev/null
+++ b/python/helpers/docutils/parsers/null.py
@@ -0,0 +1,20 @@
+# $Id: null.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Martin Blais <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""A do-nothing parser."""
+
+from docutils import parsers
+
+
+class Parser(parsers.Parser):
+
+ """A do-nothing parser."""
+
+ supported = ('null',)
+
+ config_section = 'null parser'
+ config_section_dependencies = ('parsers',)
+
+ def parse(self, inputstring, document):
+ pass
diff --git a/python/helpers/docutils/parsers/rst/__init__.py b/python/helpers/docutils/parsers/rst/__init__.py
new file mode 100644
index 0000000..de8e2ab
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/__init__.py
@@ -0,0 +1,373 @@
+# $Id: __init__.py 6314 2010-04-26 10:04:17Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This is the ``docutils.parsers.rst`` package. It exports a single class, `Parser`,
+the reStructuredText parser.
+
+
+Usage
+=====
+
+1. Create a parser::
+
+ parser = docutils.parsers.rst.Parser()
+
+ Several optional arguments may be passed to modify the parser's behavior.
+ Please see `Customizing the Parser`_ below for details.
+
+2. Gather input (a multi-line string), by reading a file or the standard
+ input::
+
+ input = sys.stdin.read()
+
+3. Create a new empty `docutils.nodes.document` tree::
+
+ document = docutils.utils.new_document(source, settings)
+
+ See `docutils.utils.new_document()` for parameter details.
+
+4. Run the parser, populating the document tree::
+
+ parser.parse(input, document)
+
+
+Parser Overview
+===============
+
+The reStructuredText parser is implemented as a state machine, examining its
+input one line at a time. To understand how the parser works, please first
+become familiar with the `docutils.statemachine` module, then see the
+`states` module.
+
+
+Customizing the Parser
+----------------------
+
+Anything that isn't already customizable is that way simply because that type
+of customizability hasn't been implemented yet. Patches welcome!
+
+When instantiating an object of the `Parser` class, two parameters may be
+passed: ``rfc2822`` and ``inliner``. Pass ``rfc2822=1`` to enable an initial
+RFC-2822 style header block, parsed as a "field_list" element (with "class"
+attribute set to "rfc2822"). Currently this is the only body-level element
+which is customizable without subclassing. (Tip: subclass `Parser` and change
+its "state_classes" and "initial_state" attributes to refer to new classes.
+Contact the author if you need more details.)
+
+The ``inliner`` parameter takes an instance of `states.Inliner` or a subclass.
+It handles inline markup recognition. A common extension is the addition of
+further implicit hyperlinks, like "RFC 2822". This can be done by subclassing
+`states.Inliner`, adding a new method for the implicit markup, and adding a
+``(pattern, method)`` pair to the "implicit_dispatch" attribute of the
+subclass. See `states.Inliner.implicit_inline()` for details. Explicit
+inline markup can be customized in a `states.Inliner` subclass via the
+``patterns.initial`` and ``dispatch`` attributes (and new methods as
+appropriate).
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import docutils.parsers
+import docutils.statemachine
+from docutils.parsers.rst import states
+from docutils import frontend, nodes
+
+
+class Parser(docutils.parsers.Parser):
+
+ """The reStructuredText parser."""
+
+ supported = ('restructuredtext', 'rst', 'rest', 'restx', 'rtxt', 'rstx')
+ """Aliases this parser supports."""
+
+ settings_spec = (
+ 'reStructuredText Parser Options',
+ None,
+ (('Recognize and link to standalone PEP references (like "PEP 258").',
+ ['--pep-references'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),
+ ('Base URL for PEP references '
+ '(default "http://www.python.org/dev/peps/").',
+ ['--pep-base-url'],
+ {'metavar': '<URL>', 'default': 'http://www.python.org/dev/peps/',
+ 'validator': frontend.validate_url_trailing_slash}),
+ ('Template for PEP file part of URL. (default "pep-%04d")',
+ ['--pep-file-url-template'],
+ {'metavar': '<URL>', 'default': 'pep-%04d'}),
+ ('Recognize and link to standalone RFC references (like "RFC 822").',
+ ['--rfc-references'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),
+ ('Base URL for RFC references (default "http://www.faqs.org/rfcs/").',
+ ['--rfc-base-url'],
+ {'metavar': '<URL>', 'default': 'http://www.faqs.org/rfcs/',
+ 'validator': frontend.validate_url_trailing_slash}),
+ ('Set number of spaces for tab expansion (default 8).',
+ ['--tab-width'],
+ {'metavar': '<width>', 'type': 'int', 'default': 8,
+ 'validator': frontend.validate_nonnegative_int}),
+ ('Remove spaces before footnote references.',
+ ['--trim-footnote-reference-space'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),
+ ('Leave spaces before footnote references.',
+ ['--leave-footnote-reference-space'],
+ {'action': 'store_false', 'dest': 'trim_footnote_reference_space'}),
+ ('Disable directives that insert the contents of external file '
+ '("include" & "raw"); replaced with a "warning" system message.',
+ ['--no-file-insertion'],
+ {'action': 'store_false', 'default': 1,
+ 'dest': 'file_insertion_enabled',
+ 'validator': frontend.validate_boolean}),
+ ('Enable directives that insert the contents of external file '
+ '("include" & "raw"). Enabled by default.',
+ ['--file-insertion-enabled'],
+ {'action': 'store_true'}),
+ ('Disable the "raw" directives; replaced with a "warning" '
+ 'system message.',
+ ['--no-raw'],
+ {'action': 'store_false', 'default': 1, 'dest': 'raw_enabled',
+ 'validator': frontend.validate_boolean}),
+ ('Enable the "raw" directive. Enabled by default.',
+ ['--raw-enabled'],
+ {'action': 'store_true'}),))
+
+ config_section = 'restructuredtext parser'
+ config_section_dependencies = ('parsers',)
+
+ def __init__(self, rfc2822=None, inliner=None):
+ if rfc2822:
+ self.initial_state = 'RFC2822Body'
+ else:
+ self.initial_state = 'Body'
+ self.state_classes = states.state_classes
+ self.inliner = inliner
+
+ def parse(self, inputstring, document):
+ """Parse `inputstring` and populate `document`, a document tree."""
+ self.setup_parse(inputstring, document)
+ self.statemachine = states.RSTStateMachine(
+ state_classes=self.state_classes,
+ initial_state=self.initial_state,
+ debug=document.reporter.debug_flag)
+ inputlines = docutils.statemachine.string2lines(
+ inputstring, tab_width=document.settings.tab_width,
+ convert_whitespace=1)
+ self.statemachine.run(inputlines, document, inliner=self.inliner)
+ self.finish_parse()
+
+
+class DirectiveError(Exception):
+
+ """
+ Store a message and a system message level.
+
+ To be thrown from inside directive code.
+
+ Do not instantiate directly -- use `Directive.directive_error()`
+ instead!
+ """
+
+ def __init__(self, level, message):
+ """Set error `message` and `level`"""
+ Exception.__init__(self)
+ self.level = level
+ self.msg = message
+
+
+class Directive(object):
+
+ """
+ Base class for reStructuredText directives.
+
+ The following attributes may be set by subclasses. They are
+ interpreted by the directive parser (which runs the directive
+ class):
+
+ - `required_arguments`: The number of required arguments (default:
+ 0).
+
+ - `optional_arguments`: The number of optional arguments (default:
+ 0).
+
+ - `final_argument_whitespace`: A boolean, indicating if the final
+ argument may contain whitespace (default: False).
+
+ - `option_spec`: A dictionary, mapping known option names to
+ conversion functions such as `int` or `float` (default: {}, no
+ options). Several conversion functions are defined in the
+ directives/__init__.py module.
+
+ Option conversion functions take a single parameter, the option
+ argument (a string or ``None``), validate it and/or convert it
+ to the appropriate form. Conversion functions may raise
+ `ValueError` and `TypeError` exceptions.
+
+ - `has_content`: A boolean; True if content is allowed. Client
+ code must handle the case where content is required but not
+ supplied (an empty content list will be supplied).
+
+ Arguments are normally single whitespace-separated words. The
+ final argument may contain whitespace and/or newlines if
+ `final_argument_whitespace` is True.
+
+ If the form of the arguments is more complex, specify only one
+ argument (either required or optional) and set
+ `final_argument_whitespace` to True; the client code must do any
+ context-sensitive parsing.
+
+ When a directive implementation is being run, the directive class
+ is instantiated, and the `run()` method is executed. During
+ instantiation, the following instance variables are set:
+
+ - ``name`` is the directive type or name (string).
+
+ - ``arguments`` is the list of positional arguments (strings).
+
+ - ``options`` is a dictionary mapping option names (strings) to
+ values (type depends on option conversion functions; see
+ `option_spec` above).
+
+ - ``content`` is a list of strings, the directive content line by line.
+
+ - ``lineno`` is the absolute line number of the first line
+ of the directive.
+
+ - ``src`` is the name (or path) of the rst source of the directive.
+
+ - ``srcline`` is the line number of the first line of the directive
+ in its source. It may differ from ``lineno``, if the main source
+ includes other sources with the ``.. include::`` directive.
+
+ - ``content_offset`` is the line offset of the first line of the content from
+ the beginning of the current input. Used when initiating a nested parse.
+
+ - ``block_text`` is a string containing the entire directive.
+
+ - ``state`` is the state which called the directive function.
+
+ - ``state_machine`` is the state machine which controls the state which called
+ the directive function.
+
+ Directive functions return a list of nodes which will be inserted
+ into the document tree at the point where the directive was
+ encountered. This can be an empty list if there is nothing to
+ insert.
+
+ For ordinary directives, the list must contain body elements or
+ structural elements. Some directives are intended specifically
+ for substitution definitions, and must return a list of `Text`
+ nodes and/or inline elements (suitable for inline insertion, in
+ place of the substitution reference). Such directives must verify
+ substitution definition context, typically using code like this::
+
+ if not isinstance(state, states.SubstitutionDef):
+ error = state_machine.reporter.error(
+ 'Invalid context: the "%s" directive can only be used '
+ 'within a substitution definition.' % (name),
+ nodes.literal_block(block_text, block_text), line=lineno)
+ return [error]
+ """
+
+ # There is a "Creating reStructuredText Directives" how-to at
+ # <http://docutils.sf.net/docs/howto/rst-directives.html>. If you
+ # update this docstring, please update the how-to as well.
+
+ required_arguments = 0
+ """Number of required directive arguments."""
+
+ optional_arguments = 0
+ """Number of optional arguments after the required arguments."""
+
+ final_argument_whitespace = False
+ """May the final argument contain whitespace?"""
+
+ option_spec = None
+ """Mapping of option names to validator functions."""
+
+ has_content = False
+ """May the directive have content?"""
+
+ def __init__(self, name, arguments, options, content, lineno,
+ content_offset, block_text, state, state_machine):
+ self.name = name
+ self.arguments = arguments
+ self.options = options
+ self.content = content
+ self.lineno = lineno
+ self.content_offset = content_offset
+ self.block_text = block_text
+ self.state = state
+ self.state_machine = state_machine
+ self.src, self.srcline = state_machine.get_source_and_line(lineno)
+
+ def run(self):
+ raise NotImplementedError('Must override run() in subclass.')
+
+ # Directive errors:
+
+ def directive_error(self, level, message):
+ """
+ Return a DirectiveError suitable for being thrown as an exception.
+
+ Call "raise self.directive_error(level, message)" from within
+ a directive implementation to return one single system message
+ at level `level`, which automatically gets the directive block
+ and the line number added.
+
+ You'd often use self.error(message) instead, which will
+ generate an ERROR-level directive error.
+ """
+ return DirectiveError(level, message)
+
+ def debug(self, message):
+ return self.directive_error(0, message)
+
+ def info(self, message):
+ return self.directive_error(1, message)
+
+ def warning(self, message):
+ return self.directive_error(2, message)
+
+ def error(self, message):
+ return self.directive_error(3, message)
+
+ def severe(self, message):
+ return self.directive_error(4, message)
+
+ # Convenience methods:
+
+ def assert_has_content(self):
+ """
+ Throw an ERROR-level DirectiveError if the directive doesn't
+ have contents.
+ """
+ if not self.content:
+ raise self.error('Content block expected for the "%s" directive; '
+ 'none found.' % self.name)
+
+
+def convert_directive_function(directive_fn):
+ """
+ Define & return a directive class generated from `directive_fn`.
+
+ `directive_fn` uses the old-style, functional interface.
+ """
+
+ class FunctionalDirective(Directive):
+
+ option_spec = getattr(directive_fn, 'options', None)
+ has_content = getattr(directive_fn, 'content', False)
+ _argument_spec = getattr(directive_fn, 'arguments', (0, 0, False))
+ required_arguments, optional_arguments, final_argument_whitespace \
+ = _argument_spec
+
+ def run(self):
+ return directive_fn(
+ self.name, self.arguments, self.options, self.content,
+ self.lineno, self.content_offset, self.block_text,
+ self.state, self.state_machine)
+
+ # Return new-style directive.
+ return FunctionalDirective
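As a standalone sketch (outside the patch itself), the conversion above boils down to reading three optional attributes off the old-style function with `getattr` defaults. `note_directive` below is a hypothetical name used only for illustration:

```python
# Self-contained sketch of how convert_directive_function reads the
# old-style functional interface; not docutils code itself.
def note_directive(name, arguments, options, content, lineno,
                   content_offset, block_text, state, state_machine):
    return []

note_directive.content = True             # directive accepts content
note_directive.arguments = (1, 0, False)  # 1 required arg, 0 optional

# The same getattr defaults FunctionalDirective uses:
has_content = getattr(note_directive, 'content', False)
required, optional, final_ws = getattr(
    note_directive, 'arguments', (0, 0, False))
```

A function lacking these attributes simply falls back to the defaults: no content, no arguments.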
diff --git a/python/helpers/docutils/parsers/rst/directives/__init__.py b/python/helpers/docutils/parsers/rst/directives/__init__.py
new file mode 100644
index 0000000..8ab6999
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/__init__.py
@@ -0,0 +1,395 @@
+# $Id: __init__.py 5952 2009-05-19 08:45:27Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains directive implementation modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import re
+import codecs
+from docutils import nodes
+from docutils.parsers.rst.languages import en as _fallback_language_module
+
+
+_directive_registry = {
+ 'attention': ('admonitions', 'Attention'),
+ 'caution': ('admonitions', 'Caution'),
+ 'danger': ('admonitions', 'Danger'),
+ 'error': ('admonitions', 'Error'),
+ 'important': ('admonitions', 'Important'),
+ 'note': ('admonitions', 'Note'),
+ 'tip': ('admonitions', 'Tip'),
+ 'hint': ('admonitions', 'Hint'),
+ 'warning': ('admonitions', 'Warning'),
+ 'admonition': ('admonitions', 'Admonition'),
+ 'sidebar': ('body', 'Sidebar'),
+ 'topic': ('body', 'Topic'),
+ 'line-block': ('body', 'LineBlock'),
+ 'parsed-literal': ('body', 'ParsedLiteral'),
+ 'rubric': ('body', 'Rubric'),
+ 'epigraph': ('body', 'Epigraph'),
+ 'highlights': ('body', 'Highlights'),
+ 'pull-quote': ('body', 'PullQuote'),
+ 'compound': ('body', 'Compound'),
+ 'container': ('body', 'Container'),
+ #'questions': ('body', 'question_list'),
+ 'table': ('tables', 'RSTTable'),
+ 'csv-table': ('tables', 'CSVTable'),
+ 'list-table': ('tables', 'ListTable'),
+ 'image': ('images', 'Image'),
+ 'figure': ('images', 'Figure'),
+ 'contents': ('parts', 'Contents'),
+ 'sectnum': ('parts', 'Sectnum'),
+ 'header': ('parts', 'Header'),
+ 'footer': ('parts', 'Footer'),
+ #'footnotes': ('parts', 'footnotes'),
+ #'citations': ('parts', 'citations'),
+ 'target-notes': ('references', 'TargetNotes'),
+ 'meta': ('html', 'Meta'),
+ #'imagemap': ('html', 'imagemap'),
+ 'raw': ('misc', 'Raw'),
+ 'include': ('misc', 'Include'),
+ 'replace': ('misc', 'Replace'),
+ 'unicode': ('misc', 'Unicode'),
+ 'class': ('misc', 'Class'),
+ 'role': ('misc', 'Role'),
+ 'default-role': ('misc', 'DefaultRole'),
+ 'title': ('misc', 'Title'),
+ 'date': ('misc', 'Date'),
+ 'restructuredtext-test-directive': ('misc', 'TestDirective'),}
+"""Mapping of directive name to (module name, class name). The
+directive name is canonical & must be lowercase. Language-dependent
+names are defined in the ``languages`` subpackage."""
+
+_directives = {}
+"""Cache of imported directives."""
+
+def directive(directive_name, language_module, document):
+ """
+ Locate and return a directive function from its language-dependent name.
+ If not found in the current language, check English. Return None if the
+ named directive cannot be found.
+ """
+ normname = directive_name.lower()
+ messages = []
+ msg_text = []
+ if normname in _directives:
+ return _directives[normname], messages
+ canonicalname = None
+ try:
+ canonicalname = language_module.directives[normname]
+ except AttributeError, error:
+ msg_text.append('Problem retrieving directive entry from language '
+ 'module %r: %s.' % (language_module, error))
+ except KeyError:
+ msg_text.append('No directive entry for "%s" in module "%s".'
+ % (directive_name, language_module.__name__))
+ if not canonicalname:
+ try:
+ canonicalname = _fallback_language_module.directives[normname]
+ msg_text.append('Using English fallback for directive "%s".'
+ % directive_name)
+ except KeyError:
+ msg_text.append('Trying "%s" as canonical directive name.'
+ % directive_name)
+ # The canonical name should be an English name, but just in case:
+ canonicalname = normname
+ if msg_text:
+ message = document.reporter.info(
+ '\n'.join(msg_text), line=document.current_line)
+ messages.append(message)
+ try:
+ modulename, classname = _directive_registry[canonicalname]
+ except KeyError:
+ # Error handling done by caller.
+ return None, messages
+ try:
+ module = __import__(modulename, globals(), locals())
+ except ImportError, detail:
+ messages.append(document.reporter.error(
+ 'Error importing directive module "%s" (directive "%s"):\n%s'
+ % (modulename, directive_name, detail),
+ line=document.current_line))
+ return None, messages
+ try:
+ directive = getattr(module, classname)
+ _directives[normname] = directive
+ except AttributeError:
+ messages.append(document.reporter.error(
+ 'No directive class "%s" in module "%s" (directive "%s").'
+ % (classname, modulename, directive_name),
+ line=document.current_line))
+ return None, messages
+ return directive, messages
+
+def register_directive(name, directive):
+ """
+ Register a nonstandard application-defined directive function.
+ Language lookups are not needed for such functions.
+ """
+ _directives[name] = directive
+
+def flag(argument):
+ """
+ Check for a valid flag option (no argument) and return ``None``.
+ (Directive option conversion function.)
+
+ Raise ``ValueError`` if an argument is found.
+ """
+ if argument and argument.strip():
+ raise ValueError('no argument is allowed; "%s" supplied' % argument)
+ else:
+ return None
+
+def unchanged_required(argument):
+ """
+ Return the argument text, unchanged.
+ (Directive option conversion function.)
+
+ Raise ``ValueError`` if no argument is found.
+ """
+ if argument is None:
+ raise ValueError('argument required but none supplied')
+ else:
+ return argument # unchanged!
+
+def unchanged(argument):
+ """
+ Return the argument text, unchanged.
+ (Directive option conversion function.)
+
+ No argument implies empty string ("").
+ """
+ if argument is None:
+ return u''
+ else:
+ return argument # unchanged!
+
+def path(argument):
+ """
+ Return the path argument unwrapped (with newlines removed).
+ (Directive option conversion function.)
+
+ Raise ``ValueError`` if no argument is found.
+ """
+ if argument is None:
+ raise ValueError('argument required but none supplied')
+ else:
+ path = ''.join([s.strip() for s in argument.splitlines()])
+ return path
+
+def uri(argument):
+ """
+ Return the URI argument with whitespace removed.
+ (Directive option conversion function.)
+
+ Raise ``ValueError`` if no argument is found.
+ """
+ if argument is None:
+ raise ValueError('argument required but none supplied')
+ else:
+ uri = ''.join(argument.split())
+ return uri
+
+def nonnegative_int(argument):
+ """
+ Check for a nonnegative integer argument; raise ``ValueError`` if not.
+ (Directive option conversion function.)
+ """
+ value = int(argument)
+ if value < 0:
+ raise ValueError('negative value; must be positive or zero')
+ return value
+
+def percentage(argument):
+ """
+ Check for an integer percentage value with optional percent sign.
+ """
+ try:
+ argument = argument.rstrip(' %')
+ except AttributeError:
+ pass
+ return nonnegative_int(argument)
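The two converters above can be exercised standalone; reproduced verbatim here so their behavior is checkable outside the patch:

```python
def nonnegative_int(argument):
    value = int(argument)
    if value < 0:
        raise ValueError('negative value; must be positive or zero')
    return value

def percentage(argument):
    # Strip a trailing percent sign (and spaces) if the argument is a
    # string; non-strings fall through to int() conversion directly.
    try:
        argument = argument.rstrip(' %')
    except AttributeError:
        pass
    return nonnegative_int(argument)
```

So `percentage('50 %')` and `percentage('50')` both yield the integer `50`, while a negative value raises `ValueError`.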
+
+length_units = ['em', 'ex', 'px', 'in', 'cm', 'mm', 'pt', 'pc']
+
+def get_measure(argument, units):
+ """
+ Check for a positive argument of one of the units and return a
+ normalized string of the form "<value><unit>" (without space in
+ between).
+
+ To be called from directive option conversion functions.
+ """
+ match = re.match(r'^([0-9.]+) *(%s)$' % '|'.join(units), argument)
+ try:
+ assert match is not None
+ float(match.group(1))
+ except (AssertionError, ValueError):
+ raise ValueError(
+ 'not a positive measure of one of the following units:\n%s'
+ % ' '.join(['"%s"' % i for i in units]))
+ return match.group(1) + match.group(2)
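`get_measure` normalizes e.g. `"3 pt"` to `"3pt"`; reproduced standalone so the regex behavior is visible (note that appending `''` to the unit list makes the unit optional, since the alternation then matches the empty string):

```python
import re

length_units = ['em', 'ex', 'px', 'in', 'cm', 'mm', 'pt', 'pc']

def get_measure(argument, units):
    # Number, optional spaces, then one of the allowed units.
    match = re.match(r'^([0-9.]+) *(%s)$' % '|'.join(units), argument)
    try:
        assert match is not None
        float(match.group(1))
    except (AssertionError, ValueError):
        raise ValueError(
            'not a positive measure of one of the following units:\n%s'
            % ' '.join(['"%s"' % i for i in units]))
    return match.group(1) + match.group(2)
```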
+
+def length_or_unitless(argument):
+ return get_measure(argument, length_units + [''])
+
+def length_or_percentage_or_unitless(argument, default=''):
+ """
+ Return normalized string of a length or percentage unit.
+
+ Add <default> if there is no unit. Raise ValueError if the argument is not
+ a positive measure of one of the valid CSS units (or without unit).
+
+ >>> length_or_percentage_or_unitless('3 pt')
+ '3pt'
+ >>> length_or_percentage_or_unitless('3%', 'em')
+ '3%'
+ >>> length_or_percentage_or_unitless('3')
+ '3'
+ >>> length_or_percentage_or_unitless('3', 'px')
+ '3px'
+ """
+ try:
+ return get_measure(argument, length_units + ['%'])
+ except ValueError:
+ return get_measure(argument, ['']) + default
+
+def class_option(argument):
+ """
+ Convert the argument into a list of ID-compatible strings and return it.
+ (Directive option conversion function.)
+
+ Raise ``ValueError`` if no argument is found.
+ """
+ if argument is None:
+ raise ValueError('argument required but none supplied')
+ names = argument.split()
+ class_names = []
+ for name in names:
+ class_name = nodes.make_id(name)
+ if not class_name:
+ raise ValueError('cannot make "%s" into a class name' % name)
+ class_names.append(class_name)
+ return class_names
+
+unicode_pattern = re.compile(
+ r'(?:0x|x|\\x|U\+?|\\u)([0-9a-f]+)$|&#x([0-9a-f]+);$', re.IGNORECASE)
+
+def unicode_code(code):
+ r"""
+ Convert a Unicode character code to a Unicode character.
+ (Directive option conversion function.)
+
+ Codes may be decimal numbers, hexadecimal numbers (prefixed by ``0x``,
+ ``x``, ``\x``, ``U+``, ``u``, or ``\u``; e.g. ``U+262E``), or XML-style
+ numeric character entities (e.g. ``☮``). Other text remains as-is.
+
+ Raise ValueError for illegal Unicode code values.
+ """
+ try:
+ if code.isdigit(): # decimal number
+ return unichr(int(code))
+ else:
+ match = unicode_pattern.match(code)
+ if match: # hex number
+ value = match.group(1) or match.group(2)
+ return unichr(int(value, 16))
+ else: # other text
+ return code
+ except OverflowError, detail:
+ raise ValueError('code too large (%s)' % detail)
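The accepted spellings are easiest to see in a Python 3 rewrite (`chr` in place of `unichr`; the overflow guard is omitted for brevity) -- a sketch, not the shipped Python 2 code:

```python
import re

unicode_pattern = re.compile(
    r'(?:0x|x|\\x|U\+?|\\u)([0-9a-f]+)$|&#x([0-9a-f]+);$', re.IGNORECASE)

def unicode_code(code):
    if code.isdigit():                # decimal number, e.g. '65'
        return chr(int(code))
    match = unicode_pattern.match(code)
    if match:                         # hex number, e.g. '0x41', 'U+262E'
        value = match.group(1) or match.group(2)
        return chr(int(value, 16))
    return code                       # other text passes through as-is
```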
+
+def single_char_or_unicode(argument):
+ """
+ A single character is returned as-is. Unicode character codes are
+ converted as in `unicode_code`. (Directive option conversion function.)
+ """
+ char = unicode_code(argument)
+ if len(char) > 1:
+ raise ValueError('%r invalid; must be a single character or '
+ 'a Unicode code' % char)
+ return char
+
+def single_char_or_whitespace_or_unicode(argument):
+ """
+ As with `single_char_or_unicode`, but "tab" and "space" are also supported.
+ (Directive option conversion function.)
+ """
+ if argument == 'tab':
+ char = '\t'
+ elif argument == 'space':
+ char = ' '
+ else:
+ char = single_char_or_unicode(argument)
+ return char
+
+def positive_int(argument):
+ """
+ Converts the argument into an integer. Raises ValueError for negative,
+ zero, or non-integer values. (Directive option conversion function.)
+ """
+ value = int(argument)
+ if value < 1:
+ raise ValueError('negative or zero value; must be positive')
+ return value
+
+def positive_int_list(argument):
+ """
+ Converts a space- or comma-separated list of values into a Python list
+ of integers.
+ (Directive option conversion function.)
+
+ Raises ValueError for non-positive-integer values.
+ """
+ if ',' in argument:
+ entries = argument.split(',')
+ else:
+ entries = argument.split()
+ return [positive_int(entry) for entry in entries]
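Both comma- and space-separated forms are accepted; the two converters reproduced standalone:

```python
def positive_int(argument):
    value = int(argument)
    if value < 1:
        raise ValueError('negative or zero value; must be positive')
    return value

def positive_int_list(argument):
    # Commas take precedence; otherwise split on whitespace.
    if ',' in argument:
        entries = argument.split(',')
    else:
        entries = argument.split()
    return [positive_int(entry) for entry in entries]
```

`int()` tolerates surrounding whitespace, so `'1, 2'` parses the same as `'1 2'`.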
+
+def encoding(argument):
+ """
+ Verifies the encoding argument by lookup.
+ (Directive option conversion function.)
+
+ Raises ValueError for unknown encodings.
+ """
+ try:
+ codecs.lookup(argument)
+ except LookupError:
+ raise ValueError('unknown encoding: "%s"' % argument)
+ return argument
+
+def choice(argument, values):
+ """
+ Directive option utility function, supplied to enable options whose
+ argument must be a member of a finite set of possible values (must be
+ lower case). A custom conversion function must be written to use it. For
+ example::
+
+ from docutils.parsers.rst import directives
+
+ def yesno(argument):
+ return directives.choice(argument, ('yes', 'no'))
+
+ Raise ``ValueError`` if no argument is found or if the argument's value is
+ not valid (not an entry in the supplied list).
+ """
+ try:
+ value = argument.lower().strip()
+ except AttributeError:
+ raise ValueError('must supply an argument; choose from %s'
+ % format_values(values))
+ if value in values:
+ return value
+ else:
+ raise ValueError('"%s" unknown; choose from %s'
+ % (argument, format_values(values)))
+
+def format_values(values):
+ return '%s, or "%s"' % (', '.join(['"%s"' % s for s in values[:-1]]),
+ values[-1])
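Putting the `yesno` example from the `choice` docstring together with `format_values`, as a runnable standalone sketch:

```python
def format_values(values):
    # '"yes", or "no"' style listing for error messages.
    return '%s, or "%s"' % (', '.join(['"%s"' % s for s in values[:-1]]),
                            values[-1])

def choice(argument, values):
    try:
        value = argument.lower().strip()
    except AttributeError:
        raise ValueError('must supply an argument; choose from %s'
                         % format_values(values))
    if value in values:
        return value
    raise ValueError('"%s" unknown; choose from %s'
                     % (argument, format_values(values)))

def yesno(argument):
    return choice(argument, ('yes', 'no'))
```

Case and surrounding whitespace are normalized before the membership test, so `' Yes '` is accepted as `'yes'`.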
diff --git a/python/helpers/docutils/parsers/rst/directives/admonitions.py b/python/helpers/docutils/parsers/rst/directives/admonitions.py
new file mode 100644
index 0000000..e7ba007
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/admonitions.py
@@ -0,0 +1,97 @@
+# $Id: admonitions.py 5618 2008-07-28 08:37:32Z strank $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Admonition directives.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import states, directives
+from docutils import nodes
+
+
+class BaseAdmonition(Directive):
+
+ required_arguments = 0
+ optional_arguments = 0
+ final_argument_whitespace = True
+ option_spec = {}
+ has_content = True
+
+ node_class = None
+ """Subclasses must set this to the appropriate admonition node class."""
+
+ def run(self):
+ self.assert_has_content()
+ text = '\n'.join(self.content)
+ admonition_node = self.node_class(text)
+ if self.arguments:
+ title_text = self.arguments[0]
+ textnodes, messages = self.state.inline_text(title_text,
+ self.lineno)
+ admonition_node += nodes.title(title_text, '', *textnodes)
+ admonition_node += messages
+ if 'class' in self.options:
+ classes = self.options['class']
+ else:
+ classes = ['admonition-' + nodes.make_id(title_text)]
+ admonition_node['classes'] += classes
+ self.state.nested_parse(self.content, self.content_offset,
+ admonition_node)
+ return [admonition_node]
+
+
+class Admonition(BaseAdmonition):
+
+ required_arguments = 1
+ option_spec = {'class': directives.class_option}
+ node_class = nodes.admonition
+
+
+class Attention(BaseAdmonition):
+
+ node_class = nodes.attention
+
+
+class Caution(BaseAdmonition):
+
+ node_class = nodes.caution
+
+
+class Danger(BaseAdmonition):
+
+ node_class = nodes.danger
+
+
+class Error(BaseAdmonition):
+
+ node_class = nodes.error
+
+
+class Hint(BaseAdmonition):
+
+ node_class = nodes.hint
+
+
+class Important(BaseAdmonition):
+
+ node_class = nodes.important
+
+
+class Note(BaseAdmonition):
+
+ node_class = nodes.note
+
+
+class Tip(BaseAdmonition):
+
+ node_class = nodes.tip
+
+
+class Warning(BaseAdmonition):
+
+ node_class = nodes.warning
diff --git a/python/helpers/docutils/parsers/rst/directives/body.py b/python/helpers/docutils/parsers/rst/directives/body.py
new file mode 100644
index 0000000..13fc06c
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/body.py
@@ -0,0 +1,192 @@
+# $Id: body.py 5618 2008-07-28 08:37:32Z strank $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for additional body elements.
+
+See `docutils.parsers.rst.directives` for API details.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+from docutils import nodes
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import directives
+from docutils.parsers.rst.roles import set_classes
+
+
+class BasePseudoSection(Directive):
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+ option_spec = {'class': directives.class_option}
+ has_content = True
+
+ node_class = None
+ """Node class to be used (must be set in subclasses)."""
+
+ def run(self):
+ if not (self.state_machine.match_titles
+ or isinstance(self.state_machine.node, nodes.sidebar)):
+ raise self.error('The "%s" directive may not be used within '
+ 'topics or body elements.' % self.name)
+ self.assert_has_content()
+ title_text = self.arguments[0]
+ textnodes, messages = self.state.inline_text(title_text, self.lineno)
+ titles = [nodes.title(title_text, '', *textnodes)]
+ # Sidebar uses this code.
+ if 'subtitle' in self.options:
+ textnodes, more_messages = self.state.inline_text(
+ self.options['subtitle'], self.lineno)
+ titles.append(nodes.subtitle(self.options['subtitle'], '',
+ *textnodes))
+ messages.extend(more_messages)
+ text = '\n'.join(self.content)
+ node = self.node_class(text, *(titles + messages))
+ node['classes'] += self.options.get('class', [])
+ if text:
+ self.state.nested_parse(self.content, self.content_offset, node)
+ return [node]
+
+
+class Topic(BasePseudoSection):
+
+ node_class = nodes.topic
+
+
+class Sidebar(BasePseudoSection):
+
+ node_class = nodes.sidebar
+
+ option_spec = BasePseudoSection.option_spec.copy()
+ option_spec['subtitle'] = directives.unchanged_required
+
+ def run(self):
+ if isinstance(self.state_machine.node, nodes.sidebar):
+ raise self.error('The "%s" directive may not be used within a '
+ 'sidebar element.' % self.name)
+ return BasePseudoSection.run(self)
+
+
+class LineBlock(Directive):
+
+ option_spec = {'class': directives.class_option}
+ has_content = True
+
+ def run(self):
+ self.assert_has_content()
+ block = nodes.line_block(classes=self.options.get('class', []))
+ node_list = [block]
+ for line_text in self.content:
+ text_nodes, messages = self.state.inline_text(
+ line_text.strip(), self.lineno + self.content_offset)
+ line = nodes.line(line_text, '', *text_nodes)
+ if line_text.strip():
+ line.indent = len(line_text) - len(line_text.lstrip())
+ block += line
+ node_list.extend(messages)
+ self.content_offset += 1
+ self.state.nest_line_block_lines(block)
+ return node_list
+
+
+class ParsedLiteral(Directive):
+
+ option_spec = {'class': directives.class_option}
+ has_content = True
+
+ def run(self):
+ set_classes(self.options)
+ self.assert_has_content()
+ text = '\n'.join(self.content)
+ text_nodes, messages = self.state.inline_text(text, self.lineno)
+ node = nodes.literal_block(text, '', *text_nodes, **self.options)
+ node.line = self.content_offset + 1
+ return [node] + messages
+
+
+class Rubric(Directive):
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+ option_spec = {'class': directives.class_option}
+
+ def run(self):
+ set_classes(self.options)
+ rubric_text = self.arguments[0]
+ textnodes, messages = self.state.inline_text(rubric_text, self.lineno)
+ rubric = nodes.rubric(rubric_text, '', *textnodes, **self.options)
+ return [rubric] + messages
+
+
+class BlockQuote(Directive):
+
+ has_content = True
+ classes = []
+
+ def run(self):
+ self.assert_has_content()
+ elements = self.state.block_quote(self.content, self.content_offset)
+ for element in elements:
+ if isinstance(element, nodes.block_quote):
+ element['classes'] += self.classes
+ return elements
+
+
+class Epigraph(BlockQuote):
+
+ classes = ['epigraph']
+
+
+class Highlights(BlockQuote):
+
+ classes = ['highlights']
+
+
+class PullQuote(BlockQuote):
+
+ classes = ['pull-quote']
+
+
+class Compound(Directive):
+
+ option_spec = {'class': directives.class_option}
+ has_content = True
+
+ def run(self):
+ self.assert_has_content()
+ text = '\n'.join(self.content)
+ node = nodes.compound(text)
+ node['classes'] += self.options.get('class', [])
+ self.state.nested_parse(self.content, self.content_offset, node)
+ return [node]
+
+
+class Container(Directive):
+
+ required_arguments = 0
+ optional_arguments = 1
+ final_argument_whitespace = True
+ has_content = True
+
+ def run(self):
+ self.assert_has_content()
+ text = '\n'.join(self.content)
+ try:
+ if self.arguments:
+ classes = directives.class_option(self.arguments[0])
+ else:
+ classes = []
+ except ValueError:
+ raise self.error(
+ 'Invalid class attribute value for "%s" directive: "%s".'
+ % (self.name, self.arguments[0]))
+ node = nodes.container(text)
+ node['classes'].extend(classes)
+ self.state.nested_parse(self.content, self.content_offset, node)
+ return [node]
diff --git a/python/helpers/docutils/parsers/rst/directives/html.py b/python/helpers/docutils/parsers/rst/directives/html.py
new file mode 100644
index 0000000..97dfdad
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/html.py
@@ -0,0 +1,88 @@
+# $Id: html.py 4667 2006-07-12 21:40:56Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for typically HTML-specific constructs.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+from docutils import nodes, utils
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import states
+from docutils.transforms import components
+
+
+class MetaBody(states.SpecializedBody):
+
+ class meta(nodes.Special, nodes.PreBibliographic, nodes.Element):
+ """HTML-specific "meta" element."""
+ pass
+
+ def field_marker(self, match, context, next_state):
+ """Meta element."""
+ node, blank_finish = self.parsemeta(match)
+ self.parent += node
+ return [], next_state, []
+
+ def parsemeta(self, match):
+ name = self.parse_field_marker(match)
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ node = self.meta()
+ pending = nodes.pending(components.Filter,
+ {'component': 'writer',
+ 'format': 'html',
+ 'nodes': [node]})
+ node['content'] = ' '.join(indented)
+ if not indented:
+ line = self.state_machine.line
+ msg = self.reporter.info(
+ 'No content for meta tag "%s".' % name,
+ nodes.literal_block(line, line),
+ line=self.state_machine.abs_line_number())
+ return msg, blank_finish
+ tokens = name.split()
+ try:
+ attname, val = utils.extract_name_value(tokens[0])[0]
+ node[attname.lower()] = val
+ except utils.NameValueError:
+ node['name'] = tokens[0]
+ for token in tokens[1:]:
+ try:
+ attname, val = utils.extract_name_value(token)[0]
+ node[attname.lower()] = val
+ except utils.NameValueError, detail:
+ line = self.state_machine.line
+ msg = self.reporter.error(
+ 'Error parsing meta tag attribute "%s": %s.'
+ % (token, detail), nodes.literal_block(line, line),
+ line=self.state_machine.abs_line_number())
+ return msg, blank_finish
+ self.document.note_pending(pending)
+ return pending, blank_finish
+
+
+class Meta(Directive):
+
+ has_content = True
+
+ SMkwargs = {'state_classes': (MetaBody,)}
+
+ def run(self):
+ self.assert_has_content()
+ node = nodes.Element()
+ new_line_offset, blank_finish = self.state.nested_list_parse(
+ self.content, self.content_offset, node,
+ initial_state='MetaBody', blank_finish=1,
+ state_machine_kwargs=self.SMkwargs)
+ if (new_line_offset - self.content_offset) != len(self.content):
+ # incomplete parse of block?
+ error = self.state_machine.reporter.error(
+ 'Invalid meta directive.',
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ node += error
+ return node.children
diff --git a/python/helpers/docutils/parsers/rst/directives/images.py b/python/helpers/docutils/parsers/rst/directives/images.py
new file mode 100644
index 0000000..6356102
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/images.py
@@ -0,0 +1,154 @@
+# $Id: images.py 5952 2009-05-19 08:45:27Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for figures and simple images.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+from docutils import nodes, utils
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import directives, states
+from docutils.nodes import fully_normalize_name, whitespace_normalize_name
+from docutils.parsers.rst.roles import set_classes
+
+try:
+ import Image as PIL # PIL
+except ImportError:
+ PIL = None
+
+
+class Image(Directive):
+
+ align_h_values = ('left', 'center', 'right')
+ align_v_values = ('top', 'middle', 'bottom')
+ align_values = align_v_values + align_h_values
+
+ def align(argument):
+ # This is not callable as self.align. We cannot make it a
+ # staticmethod because we're saving an unbound method in
+ # option_spec below.
+ return directives.choice(argument, Image.align_values)
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+ option_spec = {'alt': directives.unchanged,
+ 'height': directives.length_or_unitless,
+ 'width': directives.length_or_percentage_or_unitless,
+ 'scale': directives.percentage,
+ 'align': align,
+ 'target': directives.unchanged_required,
+ 'class': directives.class_option}
+
+ def run(self):
+ if 'align' in self.options:
+ if isinstance(self.state, states.SubstitutionDef):
+ # Check for align_v_values.
+ if self.options['align'] not in self.align_v_values:
+ raise self.error(
+ 'Error in "%s" directive: "%s" is not a valid value '
+ 'for the "align" option within a substitution '
+ 'definition. Valid values for "align" are: "%s".'
+ % (self.name, self.options['align'],
+ '", "'.join(self.align_v_values)))
+ elif self.options['align'] not in self.align_h_values:
+ raise self.error(
+ 'Error in "%s" directive: "%s" is not a valid value for '
+ 'the "align" option. Valid values for "align" are: "%s".'
+ % (self.name, self.options['align'],
+ '", "'.join(self.align_h_values)))
+ messages = []
+ reference = directives.uri(self.arguments[0])
+ self.options['uri'] = reference
+ reference_node = None
+ if 'target' in self.options:
+ block = states.escape2null(
+ self.options['target']).splitlines()
+ block = [line for line in block]
+ target_type, data = self.state.parse_target(
+ block, self.block_text, self.lineno)
+ if target_type == 'refuri':
+ reference_node = nodes.reference(refuri=data)
+ elif target_type == 'refname':
+ reference_node = nodes.reference(
+ refname=fully_normalize_name(data),
+ name=whitespace_normalize_name(data))
+ reference_node.indirect_reference_name = data
+ self.state.document.note_refname(reference_node)
+ else: # malformed target
+ messages.append(data) # data is a system message
+ del self.options['target']
+ set_classes(self.options)
+ image_node = nodes.image(self.block_text, **self.options)
+ if reference_node:
+ reference_node += image_node
+ return messages + [reference_node]
+ else:
+ return messages + [image_node]
+
+
+class Figure(Image):
+
+ def align(argument):
+ return directives.choice(argument, Figure.align_h_values)
+
+ def figwidth_value(argument):
+ if argument.lower() == 'image':
+ return 'image'
+ else:
+ return directives.length_or_percentage_or_unitless(argument, 'px')
+
+ option_spec = Image.option_spec.copy()
+ option_spec['figwidth'] = figwidth_value
+ option_spec['figclass'] = directives.class_option
+ option_spec['align'] = align
+ has_content = True
+
+ def run(self):
+ figwidth = self.options.pop('figwidth', None)
+ figclasses = self.options.pop('figclass', None)
+ align = self.options.pop('align', None)
+ (image_node,) = Image.run(self)
+ if isinstance(image_node, nodes.system_message):
+ return [image_node]
+ figure_node = nodes.figure('', image_node)
+ if figwidth == 'image':
+ if PIL and self.state.document.settings.file_insertion_enabled:
+ # PIL doesn't like Unicode paths:
+ try:
+ i = PIL.open(str(image_node['uri']))
+ except (IOError, UnicodeError):
+ pass
+ else:
+ self.state.document.settings.record_dependencies.add(
+ image_node['uri'])
+ figure_node['width'] = i.size[0]
+ elif figwidth is not None:
+ figure_node['width'] = figwidth
+ if figclasses:
+ figure_node['classes'] += figclasses
+ if align:
+ figure_node['align'] = align
+ if self.content:
+ node = nodes.Element() # anonymous container for parsing
+ self.state.nested_parse(self.content, self.content_offset, node)
+ first_node = node[0]
+ if isinstance(first_node, nodes.paragraph):
+ caption = nodes.caption(first_node.rawsource, '',
+ *first_node.children)
+ figure_node += caption
+ elif not (isinstance(first_node, nodes.comment)
+ and len(first_node) == 0):
+ error = self.state_machine.reporter.error(
+ 'Figure caption must be a paragraph or empty comment.',
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ return [figure_node, error]
+ if len(node) > 1:
+ figure_node += nodes.legend('', *node[1:])
+ return [figure_node]
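
The `align` options above funnel through `directives.choice`, which returns the (normalized) argument when it is one of the allowed values and raises `ValueError` otherwise; the error message is what ends up in the "is not a valid value" system messages. A minimal standalone sketch of that validation (the `choice` function here is a simplified stand-in for `docutils.parsers.rst.directives.choice`):

```python
def choice(argument, values):
    """Return the lowercased, stripped argument if it is one of
    `values`; otherwise raise ValueError (mirrors directives.choice)."""
    value = argument.lower().strip()
    if value in values:
        return value
    raise ValueError(
        '"%s" unknown; choose from %s.'
        % (argument, ', '.join('"%s"' % v for v in values)))

align_h_values = ('left', 'center', 'right')
align_v_values = ('top', 'middle', 'bottom')

print(choice('Left', align_h_values))   # case-insensitive -> 'left'
try:
    choice('top', align_h_values)       # vertical value, horizontal context
except ValueError as err:
    print('rejected:', err)
```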
diff --git a/python/helpers/docutils/parsers/rst/directives/misc.py b/python/helpers/docutils/parsers/rst/directives/misc.py
new file mode 100644
index 0000000..63afc75
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/misc.py
@@ -0,0 +1,484 @@
+# $Id: misc.py 6185 2009-10-25 20:43:42Z milde $
+# Authors: David Goodger <[email protected]>; Dethe Elza
+# Copyright: This module has been placed in the public domain.
+
+"""Miscellaneous directives."""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os.path
+import re
+import time
+from docutils import io, nodes, statemachine, utils
+from docutils.parsers.rst import Directive, convert_directive_function
+from docutils.parsers.rst import directives, roles, states
+from docutils.transforms import misc
+
+class Include(Directive):
+
+ """
+ Include content read from a separate source file.
+
+ Content may be parsed by the parser, or included as a literal
+ block. The encoding of the included file can be specified. Only
+ a part of the given file argument may be included by specifying
+ start and end line or text to match before and/or after the text
+ to be used.
+ """
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+ option_spec = {'literal': directives.flag,
+ 'encoding': directives.encoding,
+ 'tab-width': int,
+ 'start-line': int,
+ 'end-line': int,
+ 'start-after': directives.unchanged_required,
+ 'end-before': directives.unchanged_required}
+
+ standard_include_path = os.path.join(os.path.dirname(states.__file__),
+ 'include')
+
+ def run(self):
+ """Include a reST file as part of the content of this reST file."""
+ if not self.state.document.settings.file_insertion_enabled:
+ raise self.warning('"%s" directive disabled.' % self.name)
+ source = self.state_machine.input_lines.source(
+ self.lineno - self.state_machine.input_offset - 1)
+ source_dir = os.path.dirname(os.path.abspath(source))
+ path = directives.path(self.arguments[0])
+ if path.startswith('<') and path.endswith('>'):
+ path = os.path.join(self.standard_include_path, path[1:-1])
+ path = os.path.normpath(os.path.join(source_dir, path))
+ path = utils.relative_path(None, path)
+ path = nodes.reprunicode(path)
+ encoding = self.options.get(
+ 'encoding', self.state.document.settings.input_encoding)
+ tab_width = self.options.get(
+ 'tab-width', self.state.document.settings.tab_width)
+ try:
+ self.state.document.settings.record_dependencies.add(path)
+ include_file = io.FileInput(
+ source_path=path, encoding=encoding,
+ error_handler=(self.state.document.settings.\
+ input_encoding_error_handler),
+ handle_io_errors=None)
+ except IOError, error:
+ raise self.severe('Problems with "%s" directive path:\n%s: %s.' %
+ (self.name, error.__class__.__name__, str(error)))
+ # Hack: Since Python 2.6, the string interpolation returns a
+ # unicode object if one of the supplied %s replacements is a
+ # unicode object. IOError has no `__unicode__` method and the
+ # fallback `__repr__` does not report the file name. Explicitly
+ # converting to str fixes this for now::
+ # print '%s\n%s\n%s\n' %(error, str(error), repr(error))
+ startline = self.options.get('start-line', None)
+ endline = self.options.get('end-line', None)
+ try:
+ if startline or (endline is not None):
+ lines = include_file.readlines()
+ rawtext = ''.join(lines[startline:endline])
+ else:
+ rawtext = include_file.read()
+ except UnicodeError, error:
+ raise self.severe(
+ 'Problem with "%s" directive:\n%s: %s'
+ % (self.name, error.__class__.__name__, error))
+ # start-after/end-before: no restrictions on newlines in match-text,
+ # and no restrictions on matching inside lines vs. line boundaries
+ after_text = self.options.get('start-after', None)
+ if after_text:
+ # skip content in rawtext before *and incl.* a matching text
+ after_index = rawtext.find(after_text)
+ if after_index < 0:
+ raise self.severe('Problem with "start-after" option of "%s" '
+ 'directive:\nText not found.' % self.name)
+ rawtext = rawtext[after_index + len(after_text):]
+ before_text = self.options.get('end-before', None)
+ if before_text:
+ # skip content in rawtext after *and incl.* a matching text
+ before_index = rawtext.find(before_text)
+ if before_index < 0:
+ raise self.severe('Problem with "end-before" option of "%s" '
+ 'directive:\nText not found.' % self.name)
+ rawtext = rawtext[:before_index]
+ if 'literal' in self.options:
+ # Convert tabs to spaces, if `tab_width` is positive.
+ if tab_width >= 0:
+ text = rawtext.expandtabs(tab_width)
+ else:
+ text = rawtext
+ literal_block = nodes.literal_block(rawtext, text, source=path)
+ literal_block.line = 1
+ return [literal_block]
+ else:
+ include_lines = statemachine.string2lines(
+ rawtext, tab_width, convert_whitespace=1)
+ self.state_machine.insert_input(include_lines, path)
+ return []
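
The `start-after`/`end-before` handling above is plain substring slicing: everything up to and including the first match of the `start-after` text is dropped, then everything from the first match of `end-before` onward. A self-contained sketch of the same logic:

```python
def trim_included_text(rawtext, after_text=None, before_text=None):
    """Slice rawtext the way the `include` directive does for the
    start-after and end-before options."""
    if after_text:
        idx = rawtext.find(after_text)
        if idx < 0:
            raise ValueError('start-after text not found')
        # skip content before *and including* the matched text
        rawtext = rawtext[idx + len(after_text):]
    if before_text:
        idx = rawtext.find(before_text)
        if idx < 0:
            raise ValueError('end-before text not found')
        # skip content from the matched text onward
        rawtext = rawtext[:idx]
    return rawtext

doc = "intro\n.. begin\nbody line\n.. end\noutro\n"
print(repr(trim_included_text(doc, '.. begin\n', '.. end')))  # 'body line\n'
```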
+
+
+class Raw(Directive):
+
+ """
+ Pass through content unchanged.
+
+ Content is included in the output based on the type argument.
+
+ Content may be included inline (content section of directive) or
+ imported from a file or URL.
+ """
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+ option_spec = {'file': directives.path,
+ 'url': directives.uri,
+ 'encoding': directives.encoding}
+ has_content = True
+
+ def run(self):
+ if (not self.state.document.settings.raw_enabled
+ or (not self.state.document.settings.file_insertion_enabled
+ and ('file' in self.options
+ or 'url' in self.options))):
+ raise self.warning('"%s" directive disabled.' % self.name)
+ attributes = {'format': ' '.join(self.arguments[0].lower().split())}
+ encoding = self.options.get(
+ 'encoding', self.state.document.settings.input_encoding)
+ if self.content:
+ if 'file' in self.options or 'url' in self.options:
+ raise self.error(
+ '"%s" directive may not both specify an external file '
+ 'and have content.' % self.name)
+ text = '\n'.join(self.content)
+ elif 'file' in self.options:
+ if 'url' in self.options:
+ raise self.error(
+ 'The "file" and "url" options may not be simultaneously '
+ 'specified for the "%s" directive.' % self.name)
+ source_dir = os.path.dirname(
+ os.path.abspath(self.state.document.current_source))
+ path = os.path.normpath(os.path.join(source_dir,
+ self.options['file']))
+ path = utils.relative_path(None, path)
+ try:
+ self.state.document.settings.record_dependencies.add(path)
+ raw_file = io.FileInput(
+ source_path=path, encoding=encoding,
+ error_handler=(self.state.document.settings.\
+ input_encoding_error_handler),
+ handle_io_errors=None)
+ except IOError, error:
+ raise self.severe('Problems with "%s" directive path:\n%s.'
+ % (self.name, error))
+ try:
+ text = raw_file.read()
+ except UnicodeError, error:
+ raise self.severe(
+ 'Problem with "%s" directive:\n%s: %s'
+ % (self.name, error.__class__.__name__, error))
+ attributes['source'] = path
+ elif 'url' in self.options:
+ source = self.options['url']
+ # Do not import urllib2 at the top of the module because
+ # it may fail due to broken SSL dependencies, and it takes
+ # about 0.15 seconds to load.
+ import urllib2
+ try:
+ raw_text = urllib2.urlopen(source).read()
+ except (urllib2.URLError, IOError, OSError), error:
+ raise self.severe(
+ 'Problems with "%s" directive URL "%s":\n%s.'
+ % (self.name, self.options['url'], error))
+ raw_file = io.StringInput(
+ source=raw_text, source_path=source, encoding=encoding,
+ error_handler=(self.state.document.settings.\
+ input_encoding_error_handler))
+ try:
+ text = raw_file.read()
+ except UnicodeError, error:
+ raise self.severe(
+ 'Problem with "%s" directive:\n%s: %s'
+ % (self.name, error.__class__.__name__, error))
+ attributes['source'] = source
+ else:
+ # This will always fail because there is no content.
+ self.assert_has_content()
+ raw_node = nodes.raw('', text, **attributes)
+ return [raw_node]
+
+
+class Replace(Directive):
+
+ has_content = True
+
+ def run(self):
+ if not isinstance(self.state, states.SubstitutionDef):
+ raise self.error(
+ 'Invalid context: the "%s" directive can only be used within '
+ 'a substitution definition.' % self.name)
+ self.assert_has_content()
+ text = '\n'.join(self.content)
+ element = nodes.Element(text)
+ self.state.nested_parse(self.content, self.content_offset,
+ element)
+ if ( len(element) != 1
+ or not isinstance(element[0], nodes.paragraph)):
+ messages = []
+ for node in element:
+ if isinstance(node, nodes.system_message):
+ node['backrefs'] = []
+ messages.append(node)
+ error = self.state_machine.reporter.error(
+ 'Error in "%s" directive: may contain a single paragraph '
+ 'only.' % (self.name), line=self.lineno)
+ messages.append(error)
+ return messages
+ else:
+ return element[0].children
+
+
+class Unicode(Directive):
+
+ r"""
+ Convert Unicode character codes (numbers) to characters. Codes may be
+ decimal numbers, hexadecimal numbers (prefixed by ``0x``, ``x``, ``\x``,
+ ``U+``, ``u``, or ``\u``; e.g. ``U+262E``), or XML-style numeric character
+ entities (e.g. ``&#x262E;``). Text following ".." is a comment and is
+ ignored. Spaces are ignored, and any other text remains as-is.
+ """
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+ option_spec = {'trim': directives.flag,
+ 'ltrim': directives.flag,
+ 'rtrim': directives.flag}
+
+ comment_pattern = re.compile(r'( |\n|^)\.\. ')
+
+ def run(self):
+ if not isinstance(self.state, states.SubstitutionDef):
+ raise self.error(
+ 'Invalid context: the "%s" directive can only be used within '
+ 'a substitution definition.' % self.name)
+ substitution_definition = self.state_machine.node
+ if 'trim' in self.options:
+ substitution_definition.attributes['ltrim'] = 1
+ substitution_definition.attributes['rtrim'] = 1
+ if 'ltrim' in self.options:
+ substitution_definition.attributes['ltrim'] = 1
+ if 'rtrim' in self.options:
+ substitution_definition.attributes['rtrim'] = 1
+ codes = self.comment_pattern.split(self.arguments[0])[0].split()
+ element = nodes.Element()
+ for code in codes:
+ try:
+ decoded = directives.unicode_code(code)
+ except ValueError, err:
+ raise self.error(
+ 'Invalid character code: %s\n%s: %s'
+ % (code, err.__class__.__name__, err))
+ element += nodes.Text(decoded)
+ return element.children
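
Each code accepted by the directive above is decoded by `directives.unicode_code`; a simplified stand-in showing the three accepted spellings (decimal, prefixed hexadecimal, XML-style numeric entity):

```python
import re

# Same pattern shape as directives.unicode_pattern: a hex prefix
# (0x, x, \x, U+, u, \u) or an XML-style entity &#x...;
unicode_pattern = re.compile(
    r'(?:0x|x|\\x|U\+?|\\u)([0-9a-f]+)$|&#x([0-9a-f]+);$', re.IGNORECASE)

def unicode_code(code):
    """Decode one character code; simplified stand-in for
    directives.unicode_code."""
    if code.isdigit():                       # decimal, e.g. '9774'
        return chr(int(code))
    match = unicode_pattern.match(code)
    if match:
        digits = match.group(1) or match.group(2)
        return chr(int(digits, 16))          # hex, e.g. 'U+262E'
    raise ValueError('invalid character code: %s' % code)

print(unicode_code('U+262E'))     # peace symbol
print(unicode_code('9774'))       # same character, decimal notation
print(unicode_code('&#x262E;'))   # same character, XML entity
```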
+
+
+class Class(Directive):
+
+ """
+ Set a "class" attribute on the directive content or the next element.
+ When applied to the next element, a "pending" element is inserted, and a
+ transform does the work later.
+ """
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+ has_content = True
+
+ def run(self):
+ try:
+ class_value = directives.class_option(self.arguments[0])
+ except ValueError:
+ raise self.error(
+ 'Invalid class attribute value for "%s" directive: "%s".'
+ % (self.name, self.arguments[0]))
+ node_list = []
+ if self.content:
+ container = nodes.Element()
+ self.state.nested_parse(self.content, self.content_offset,
+ container)
+ for node in container:
+ node['classes'].extend(class_value)
+ node_list.extend(container.children)
+ else:
+ pending = nodes.pending(
+ misc.ClassAttribute,
+ {'class': class_value, 'directive': self.name},
+ self.block_text)
+ self.state_machine.document.note_pending(pending)
+ node_list.append(pending)
+ return node_list
+
+
+class Role(Directive):
+
+ has_content = True
+
+ argument_pattern = re.compile(r'(%s)\s*(\(\s*(%s)\s*\)\s*)?$'
+ % ((states.Inliner.simplename,) * 2))
+
+ def run(self):
+ """Dynamically create and register a custom interpreted text role."""
+ if self.content_offset > self.lineno or not self.content:
+ raise self.error('"%s" directive requires arguments on the first '
+ 'line.' % self.name)
+ args = self.content[0]
+ match = self.argument_pattern.match(args)
+ if not match:
+ raise self.error('"%s" directive arguments not valid role names: '
+ '"%s".' % (self.name, args))
+ new_role_name = match.group(1)
+ base_role_name = match.group(3)
+ messages = []
+ if base_role_name:
+ base_role, messages = roles.role(
+ base_role_name, self.state_machine.language, self.lineno,
+ self.state.reporter)
+ if base_role is None:
+ error = self.state.reporter.error(
+ 'Unknown interpreted text role "%s".' % base_role_name,
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ return messages + [error]
+ else:
+ base_role = roles.generic_custom_role
+ assert not hasattr(base_role, 'arguments'), (
+ 'Supplemental directive arguments for "%s" directive not '
+ 'supported (specified by "%r" role).' % (self.name, base_role))
+ try:
+ converted_role = convert_directive_function(base_role)
+ (arguments, options, content, content_offset) = (
+ self.state.parse_directive_block(
+ self.content[1:], self.content_offset, converted_role,
+ option_presets={}))
+ except states.MarkupError, detail:
+ error = self.state_machine.reporter.error(
+ 'Error in "%s" directive:\n%s.' % (self.name, detail),
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ return messages + [error]
+ if 'class' not in options:
+ try:
+ options['class'] = directives.class_option(new_role_name)
+ except ValueError, detail:
+ error = self.state_machine.reporter.error(
+ 'Invalid argument for "%s" directive:\n%s.'
+ % (self.name, detail), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ return messages + [error]
+ role = roles.CustomRole(new_role_name, base_role, options, content)
+ roles.register_local_role(new_role_name, role)
+ return messages
+
+
+class DefaultRole(Directive):
+
+ """Set the default interpreted text role."""
+
+ required_arguments = 0
+ optional_arguments = 1
+ final_argument_whitespace = False
+
+ def run(self):
+ if not self.arguments:
+ if '' in roles._roles:
+ # restore the "default" default role
+ del roles._roles['']
+ return []
+ role_name = self.arguments[0]
+ role, messages = roles.role(role_name, self.state_machine.language,
+ self.lineno, self.state.reporter)
+ if role is None:
+ error = self.state.reporter.error(
+ 'Unknown interpreted text role "%s".' % role_name,
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ return messages + [error]
+ roles._roles[''] = role
+ # @@@ should this be local to the document, not the parser?
+ return messages
+
+
+class Title(Directive):
+
+ required_arguments = 1
+ optional_arguments = 0
+ final_argument_whitespace = True
+
+ def run(self):
+ self.state_machine.document['title'] = self.arguments[0]
+ return []
+
+
+class Date(Directive):
+
+ has_content = True
+
+ def run(self):
+ if not isinstance(self.state, states.SubstitutionDef):
+ raise self.error(
+ 'Invalid context: the "%s" directive can only be used within '
+ 'a substitution definition.' % self.name)
+ format = '\n'.join(self.content) or '%Y-%m-%d'
+ text = time.strftime(format)
+ return [nodes.Text(text)]
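
The body of the `date` directive is simply a `time.strftime` format string, defaulting to the ISO form `%Y-%m-%d` when the content is empty:

```python
import time

content_lines = []                           # an empty directive body
fmt = '\n'.join(content_lines) or '%Y-%m-%d'
print(time.strftime(fmt))                    # e.g. 2013-11-08

custom = time.strftime('%d %B %Y')           # a custom body
print(custom)                                # e.g. 08 November 2013
```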
+
+
+class TestDirective(Directive):
+
+ """This directive is useful only for testing purposes."""
+
+ required_arguments = 0
+ optional_arguments = 1
+ final_argument_whitespace = True
+ option_spec = {'option': directives.unchanged_required}
+ has_content = True
+
+ def run(self):
+ if self.content:
+ text = '\n'.join(self.content)
+ info = self.state_machine.reporter.info(
+ 'Directive processed. Type="%s", arguments=%r, options=%r, '
+ 'content:' % (self.name, self.arguments, self.options),
+ nodes.literal_block(text, text), line=self.lineno)
+ else:
+ info = self.state_machine.reporter.info(
+ 'Directive processed. Type="%s", arguments=%r, options=%r, '
+ 'content: None' % (self.name, self.arguments, self.options),
+ line=self.lineno)
+ return [info]
+
+# Old-style, functional definition:
+#
+# def directive_test_function(name, arguments, options, content, lineno,
+# content_offset, block_text, state, state_machine):
+# """This directive is useful only for testing purposes."""
+# if content:
+# text = '\n'.join(content)
+# info = state_machine.reporter.info(
+# 'Directive processed. Type="%s", arguments=%r, options=%r, '
+# 'content:' % (name, arguments, options),
+# nodes.literal_block(text, text), line=lineno)
+# else:
+# info = state_machine.reporter.info(
+# 'Directive processed. Type="%s", arguments=%r, options=%r, '
+# 'content: None' % (name, arguments, options), line=lineno)
+# return [info]
+#
+# directive_test_function.arguments = (0, 1, 1)
+# directive_test_function.options = {'option': directives.unchanged_required}
+# directive_test_function.content = 1
diff --git a/python/helpers/docutils/parsers/rst/directives/parts.py b/python/helpers/docutils/parsers/rst/directives/parts.py
new file mode 100644
index 0000000..425ddc7
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/parts.py
@@ -0,0 +1,127 @@
+# $Id: parts.py 6385 2010-08-13 12:17:01Z milde $
+# Authors: David Goodger <[email protected]>; Dmitry Jemerov
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for document parts.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes, languages
+from docutils.transforms import parts
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import directives
+
+
+class Contents(Directive):
+
+ """
+ Table of contents.
+
+ The table of contents is generated in two passes: initial parse and
+ transform. During the initial parse, a 'pending' element is generated
+ which acts as a placeholder, storing the TOC title and any options
+ internally. At a later stage in the processing, the 'pending' element is
+ replaced by a 'topic' element, a title and the table of contents proper.
+ """
+
+ backlinks_values = ('top', 'entry', 'none')
+
+ def backlinks(arg):
+ value = directives.choice(arg, Contents.backlinks_values)
+ if value == 'none':
+ return None
+ else:
+ return value
+
+ required_arguments = 0
+ optional_arguments = 1
+ final_argument_whitespace = True
+ option_spec = {'depth': directives.nonnegative_int,
+ 'local': directives.flag,
+ 'backlinks': backlinks,
+ 'class': directives.class_option}
+
+ def run(self):
+ if not (self.state_machine.match_titles
+ or isinstance(self.state_machine.node, nodes.sidebar)):
+ raise self.error('The "%s" directive may not be used within '
+ 'topics or body elements.' % self.name)
+ document = self.state_machine.document
+ language = languages.get_language(document.settings.language_code)
+ if self.arguments:
+ title_text = self.arguments[0]
+ text_nodes, messages = self.state.inline_text(title_text,
+ self.lineno)
+ title = nodes.title(title_text, '', *text_nodes)
+ else:
+ messages = []
+ if 'local' in self.options:
+ title = None
+ else:
+ title = nodes.title('', language.labels['contents'])
+ topic = nodes.topic(classes=['contents'])
+ topic['classes'] += self.options.get('class', [])
+ # the latex2e writer needs source and line for a warning:
+ src, srcline = self.state_machine.get_source_and_line()
+ topic.source = src
+ topic.line = srcline - 1
+ if 'local' in self.options:
+ topic['classes'].append('local')
+ if title:
+ name = title.astext()
+ topic += title
+ else:
+ name = language.labels['contents']
+ name = nodes.fully_normalize_name(name)
+ if not document.has_name(name):
+ topic['names'].append(name)
+ document.note_implicit_target(topic)
+ pending = nodes.pending(parts.Contents, rawsource=self.block_text)
+ pending.details.update(self.options)
+ document.note_pending(pending)
+ topic += pending
+ return [topic] + messages
+
+
+class Sectnum(Directive):
+
+ """Automatic section numbering."""
+
+ option_spec = {'depth': int,
+ 'start': int,
+ 'prefix': directives.unchanged_required,
+ 'suffix': directives.unchanged_required}
+
+ def run(self):
+ pending = nodes.pending(parts.SectNum)
+ pending.details.update(self.options)
+ self.state_machine.document.note_pending(pending)
+ return [pending]
+
+
+class Header(Directive):
+
+ """Contents of document header."""
+
+ has_content = True
+
+ def run(self):
+ self.assert_has_content()
+ header = self.state_machine.document.get_decoration().get_header()
+ self.state.nested_parse(self.content, self.content_offset, header)
+ return []
+
+
+class Footer(Directive):
+
+ """Contents of document footer."""
+
+ has_content = True
+
+ def run(self):
+ self.assert_has_content()
+ footer = self.state_machine.document.get_decoration().get_footer()
+ self.state.nested_parse(self.content, self.content_offset, footer)
+ return []
diff --git a/python/helpers/docutils/parsers/rst/directives/references.py b/python/helpers/docutils/parsers/rst/directives/references.py
new file mode 100644
index 0000000..b77f9c5
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/references.py
@@ -0,0 +1,28 @@
+# $Id: references.py 4667 2006-07-12 21:40:56Z wiemann $
+# Authors: David Goodger <[email protected]>; Dmitry Jemerov
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for references and targets.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes
+from docutils.transforms import references
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import directives
+
+
+class TargetNotes(Directive):
+
+ """Target footnote generation."""
+
+ option_spec = {'class': directives.class_option}
+
+ def run(self):
+ pending = nodes.pending(references.TargetNotes)
+ pending.details.update(self.options)
+ self.state_machine.document.note_pending(pending)
+ nodelist = [pending]
+ return nodelist
diff --git a/python/helpers/docutils/parsers/rst/directives/tables.py b/python/helpers/docutils/parsers/rst/directives/tables.py
new file mode 100644
index 0000000..a2242fd
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/directives/tables.py
@@ -0,0 +1,447 @@
+# $Id: tables.py 6107 2009-08-31 02:29:08Z goodger $
+# Authors: David Goodger <[email protected]>; David Priest
+# Copyright: This module has been placed in the public domain.
+
+"""
+Directives for table elements.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import os.path
+import csv
+
+from docutils import io, nodes, statemachine, utils
+from docutils.utils import SystemMessagePropagation
+from docutils.parsers.rst import Directive
+from docutils.parsers.rst import directives
+
+
+class Table(Directive):
+
+ """
+ Generic table base class.
+ """
+
+ required_arguments = 0
+ optional_arguments = 1
+ final_argument_whitespace = True
+ option_spec = {'class': directives.class_option}
+ has_content = True
+
+ def make_title(self):
+ if self.arguments:
+ title_text = self.arguments[0]
+ text_nodes, messages = self.state.inline_text(title_text,
+ self.lineno)
+ title = nodes.title(title_text, '', *text_nodes)
+ else:
+ title = None
+ messages = []
+ return title, messages
+
+ def process_header_option(self):
+ source = self.state_machine.get_source(self.lineno - 1)
+ table_head = []
+ max_header_cols = 0
+ if 'header' in self.options: # separate table header in option
+ rows, max_header_cols = self.parse_csv_data_into_rows(
+ self.options['header'].split('\n'), self.HeaderDialect(),
+ source)
+ table_head.extend(rows)
+ return table_head, max_header_cols
+
+ def check_table_dimensions(self, rows, header_rows, stub_columns):
+ if len(rows) < header_rows:
+ error = self.state_machine.reporter.error(
+ '%s header row(s) specified but only %s row(s) of data '
+ 'supplied ("%s" directive).'
+ % (header_rows, len(rows), self.name), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ if len(rows) == header_rows > 0:
+ error = self.state_machine.reporter.error(
+ 'Insufficient data supplied (%s row(s)); no data remaining '
+ 'for table body, required by "%s" directive.'
+ % (len(rows), self.name), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ for row in rows:
+ if len(row) < stub_columns:
+ error = self.state_machine.reporter.error(
+ '%s stub column(s) specified but only %s column(s) of '
+ 'data supplied ("%s" directive).' %
+ (stub_columns, len(row), self.name), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ if len(row) == stub_columns > 0:
+ error = self.state_machine.reporter.error(
+ 'Insufficient data supplied (%s column(s)); no data remaining '
+ 'for table body, required by "%s" directive.'
+ % (len(row), self.name), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+
+ def get_column_widths(self, max_cols):
+ if 'widths' in self.options:
+ col_widths = self.options['widths']
+ if len(col_widths) != max_cols:
+ error = self.state_machine.reporter.error(
+ '"%s" widths do not match the number of columns in table '
+ '(%s).' % (self.name, max_cols), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ elif max_cols:
+ col_widths = [100 // max_cols] * max_cols
+ else:
+ error = self.state_machine.reporter.error(
+ 'No table data detected in CSV file.', nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ return col_widths
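
When no `widths` option is supplied, the fallback above divides 100 into equal integer shares per column; a quick sketch of just that branch:

```python
def default_col_widths(max_cols):
    """Equal integer percentage shares, as in Table.get_column_widths
    when the 'widths' option is absent."""
    if not max_cols:
        raise ValueError('no table data detected')
    return [100 // max_cols] * max_cols

print(default_col_widths(4))   # [25, 25, 25, 25]
print(default_col_widths(3))   # [33, 33, 33] -- shares need not sum to 100
```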
+
+ def extend_short_rows_with_empty_cells(self, columns, parts):
+ for part in parts:
+ for row in part:
+ if len(row) < columns:
+ row.extend([(0, 0, 0, [])] * (columns - len(row)))
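
`extend_short_rows_with_empty_cells` pads every short row in place to the full column count; each empty cell is the 4-tuple `(morerows, morecols, offset, cell_lines)` that the table builder expects. A standalone sketch:

```python
def pad_rows(columns, parts):
    """Pad short rows in place with empty cells, as the method above does."""
    for part in parts:
        for row in part:
            if len(row) < columns:
                row.extend([(0, 0, 0, [])] * (columns - len(row)))

head = [[(0, 0, 0, ['a'])]]                       # one cell, needs padding
body = [[(0, 0, 0, ['b']), (0, 0, 0, ['c'])]]     # already full width
pad_rows(2, (head, body))
print(len(head[0]), len(body[0]))   # 2 2
```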
+
+
+class RSTTable(Table):
+
+ def run(self):
+ if not self.content:
+ warning = self.state_machine.reporter.warning(
+ 'Content block expected for the "%s" directive; none found.'
+ % self.name, nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ return [warning]
+ title, messages = self.make_title()
+ node = nodes.Element() # anonymous container for parsing
+ self.state.nested_parse(self.content, self.content_offset, node)
+ if len(node) != 1 or not isinstance(node[0], nodes.table):
+ error = self.state_machine.reporter.error(
+ 'Error parsing content block for the "%s" directive: exactly '
+ 'one table expected.' % self.name, nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ return [error]
+ table_node = node[0]
+ table_node['classes'] += self.options.get('class', [])
+ if title:
+ table_node.insert(0, title)
+ return [table_node] + messages
+
+
+class CSVTable(Table):
+
+ option_spec = {'header-rows': directives.nonnegative_int,
+ 'stub-columns': directives.nonnegative_int,
+ 'header': directives.unchanged,
+ 'widths': directives.positive_int_list,
+ 'file': directives.path,
+ 'url': directives.uri,
+ 'encoding': directives.encoding,
+ 'class': directives.class_option,
+ # field delimiter char
+ 'delim': directives.single_char_or_whitespace_or_unicode,
+ # treat whitespace after delimiter as significant
+ 'keepspace': directives.flag,
+ # text field quote/unquote char:
+ 'quote': directives.single_char_or_unicode,
+ # char used to escape delim & quote as-needed:
+ 'escape': directives.single_char_or_unicode,}
+
+ class DocutilsDialect(csv.Dialect):
+
+ """CSV dialect for `csv_table` directive."""
+
+ delimiter = ','
+ quotechar = '"'
+ doublequote = True
+ skipinitialspace = True
+ lineterminator = '\n'
+ quoting = csv.QUOTE_MINIMAL
+
+ def __init__(self, options):
+ if 'delim' in options:
+ self.delimiter = str(options['delim'])
+ if 'keepspace' in options:
+ self.skipinitialspace = False
+ if 'quote' in options:
+ self.quotechar = str(options['quote'])
+ if 'escape' in options:
+ self.doublequote = False
+ self.escapechar = str(options['escape'])
+ csv.Dialect.__init__(self)
+
+
+ class HeaderDialect(csv.Dialect):
+
+ """CSV dialect to use for the "header" option data."""
+
+ delimiter = ','
+ quotechar = '"'
+ escapechar = '\\'
+ doublequote = False
+ skipinitialspace = True
+ lineterminator = '\n'
+ quoting = csv.QUOTE_MINIMAL
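
These dialect classes just preconfigure the standard `csv` module; with the `DocutilsDialect` defaults, quoted comma-separated cells are parsed with doubled-quote escaping and any space after a delimiter dropped. A standalone sketch using the same settings:

```python
import csv
import io

class SketchDialect(csv.Dialect):
    """Same default settings as DocutilsDialect above."""
    delimiter = ','
    quotechar = '"'
    doublequote = True
    skipinitialspace = True
    lineterminator = '\n'
    quoting = csv.QUOTE_MINIMAL

rows = list(csv.reader(io.StringIO('"a, b", c\nd, "e ""f"""\n'),
                       dialect=SketchDialect()))
print(rows)   # [['a, b', 'c'], ['d', 'e "f"']]
```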
+
+ def check_requirements(self):
+ pass
+
+ def run(self):
+ try:
+ if (not self.state.document.settings.file_insertion_enabled
+ and ('file' in self.options
+ or 'url' in self.options)):
+ warning = self.state_machine.reporter.warning(
+ 'File and URL access deactivated; ignoring "%s" '
+ 'directive.' % self.name, nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ return [warning]
+ self.check_requirements()
+ title, messages = self.make_title()
+ csv_data, source = self.get_csv_data()
+ table_head, max_header_cols = self.process_header_option()
+ rows, max_cols = self.parse_csv_data_into_rows(
+ csv_data, self.DocutilsDialect(self.options), source)
+ max_cols = max(max_cols, max_header_cols)
+ header_rows = self.options.get('header-rows', 0)
+ stub_columns = self.options.get('stub-columns', 0)
+ self.check_table_dimensions(rows, header_rows, stub_columns)
+ table_head.extend(rows[:header_rows])
+ table_body = rows[header_rows:]
+ col_widths = self.get_column_widths(max_cols)
+ self.extend_short_rows_with_empty_cells(max_cols,
+ (table_head, table_body))
+ except SystemMessagePropagation, detail:
+ return [detail.args[0]]
+ except csv.Error, detail:
+ error = self.state_machine.reporter.error(
+ 'Error with CSV data in "%s" directive:\n%s'
+ % (self.name, detail), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ return [error]
+ table = (col_widths, table_head, table_body)
+ table_node = self.state.build_table(table, self.content_offset,
+ stub_columns)
+ table_node['classes'] += self.options.get('class', [])
+ if title:
+ table_node.insert(0, title)
+ return [table_node] + messages
+
+ def get_csv_data(self):
+ """
+ Get CSV data from the directive content, from an external
+ file, or from a URL reference.
+ """
+ encoding = self.options.get(
+ 'encoding', self.state.document.settings.input_encoding)
+ if self.content:
+ # CSV data is from directive content.
+ if 'file' in self.options or 'url' in self.options:
+ error = self.state_machine.reporter.error(
+ '"%s" directive may not both specify an external file and'
+ ' have content.' % self.name, nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ source = self.content.source(0)
+ csv_data = self.content
+ elif 'file' in self.options:
+ # CSV data is from an external file.
+ if 'url' in self.options:
+ error = self.state_machine.reporter.error(
+ 'The "file" and "url" options may not be simultaneously'
+ ' specified for the "%s" directive.' % self.name,
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ raise SystemMessagePropagation(error)
+ source_dir = os.path.dirname(
+ os.path.abspath(self.state.document.current_source))
+ source = os.path.normpath(os.path.join(source_dir,
+ self.options['file']))
+ source = utils.relative_path(None, source)
+ try:
+ self.state.document.settings.record_dependencies.add(source)
+ csv_file = io.FileInput(
+ source_path=source, encoding=encoding,
+ error_handler=(self.state.document.settings.\
+ input_encoding_error_handler),
+ handle_io_errors=None)
+ csv_data = csv_file.read().splitlines()
+ except IOError, error:
+ severe = self.state_machine.reporter.severe(
+ 'Problems with "%s" directive path:\n%s.'
+ % (self.name, error), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(severe)
+ elif 'url' in self.options:
+ # CSV data is from a URL.
+ # Do not import urllib2 at the top of the module because
+ # it may fail due to broken SSL dependencies, and it takes
+ # about 0.15 seconds to load.
+ import urllib2
+ source = self.options['url']
+ try:
+ csv_text = urllib2.urlopen(source).read()
+ except (urllib2.URLError, IOError, OSError, ValueError), error:
+ severe = self.state_machine.reporter.severe(
+ 'Problems with "%s" directive URL "%s":\n%s.'
+ % (self.name, self.options['url'], error),
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ raise SystemMessagePropagation(severe)
+ csv_file = io.StringInput(
+ source=csv_text, source_path=source, encoding=encoding,
+ error_handler=(self.state.document.settings.\
+ input_encoding_error_handler))
+ csv_data = csv_file.read().splitlines()
+ else:
+ error = self.state_machine.reporter.warning(
+ 'The "%s" directive requires content; none supplied.'
+ % self.name, nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ return csv_data, source
+
+ if sys.version_info < (3,):
+ # 2.x csv module doesn't do Unicode
+ def decode_from_csv(s):
+ return s.decode('utf-8')
+ def encode_for_csv(s):
+ return s.encode('utf-8')
+ else:
+ def decode_from_csv(s):
+ return s
+ def encode_for_csv(s):
+ return s
+ decode_from_csv = staticmethod(decode_from_csv)
+ encode_for_csv = staticmethod(encode_for_csv)
+
+ def parse_csv_data_into_rows(self, csv_data, dialect, source):
+ # csv.py doesn't do Unicode; encode temporarily as UTF-8
+ csv_reader = csv.reader([self.encode_for_csv(line + '\n')
+ for line in csv_data],
+ dialect=dialect)
+ rows = []
+ max_cols = 0
+ for row in csv_reader:
+ row_data = []
+ for cell in row:
+ # decode UTF-8 back to Unicode
+ cell_text = self.decode_from_csv(cell)
+ cell_data = (0, 0, 0, statemachine.StringList(
+ cell_text.splitlines(), source=source))
+ row_data.append(cell_data)
+ rows.append(row_data)
+ max_cols = max(max_cols, len(row))
+ return rows, max_cols
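The CSVTable directive implemented above can be exercised end-to-end through the public docutils API. A minimal sketch (assuming `docutils` is installed and importable; the sample source text is illustrative, not from this diff):

```python
# Render a "csv-table" directive to HTML via the standard docutils pipeline.
from docutils.core import publish_string

source = """\
.. csv-table:: Numbers
   :header: "Name", "Value"

   "a", 1
   "b", 2
"""

# publish_string parses the reST source and runs the chosen writer;
# with an HTML writer the directive above becomes a <table> element.
html = publish_string(source, writer_name='html')
```

The directive's `run()` method is what turns the CSV rows into the row/column structure that `self.state.build_table` consumes; errors in the CSV data surface as docutils system messages rather than exceptions, as the `except csv.Error` branch above shows.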
+
+
+class ListTable(Table):
+
+ """
+ Implement tables whose data is encoded as a uniform two-level bullet list.
+ For further ideas, see
+ http://docutils.sf.net/docs/dev/rst/alternatives.html#list-driven-tables
+ """
+
+ option_spec = {'header-rows': directives.nonnegative_int,
+ 'stub-columns': directives.nonnegative_int,
+ 'widths': directives.positive_int_list,
+ 'class': directives.class_option}
+
+ def run(self):
+ if not self.content:
+ error = self.state_machine.reporter.error(
+ 'The "%s" directive is empty; content required.' % self.name,
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ return [error]
+ title, messages = self.make_title()
+ node = nodes.Element() # anonymous container for parsing
+ self.state.nested_parse(self.content, self.content_offset, node)
+ try:
+ num_cols, col_widths = self.check_list_content(node)
+ table_data = [[item.children for item in row_list[0]]
+ for row_list in node[0]]
+ header_rows = self.options.get('header-rows', 0)
+ stub_columns = self.options.get('stub-columns', 0)
+ self.check_table_dimensions(table_data, header_rows, stub_columns)
+ except SystemMessagePropagation, detail:
+ return [detail.args[0]]
+ table_node = self.build_table_from_list(table_data, col_widths,
+ header_rows, stub_columns)
+ table_node['classes'] += self.options.get('class', [])
+ if title:
+ table_node.insert(0, title)
+ return [table_node] + messages
+
+ def check_list_content(self, node):
+ if len(node) != 1 or not isinstance(node[0], nodes.bullet_list):
+ error = self.state_machine.reporter.error(
+ 'Error parsing content block for the "%s" directive: '
+ 'exactly one bullet list expected.' % self.name,
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ raise SystemMessagePropagation(error)
+ list_node = node[0]
+ # Check for a uniform two-level bullet list:
+ for item_index in range(len(list_node)):
+ item = list_node[item_index]
+ if len(item) != 1 or not isinstance(item[0], nodes.bullet_list):
+ error = self.state_machine.reporter.error(
+ 'Error parsing content block for the "%s" directive: '
+ 'two-level bullet list expected, but row %s does not '
+ 'contain a second-level bullet list.'
+ % (self.name, item_index + 1), nodes.literal_block(
+ self.block_text, self.block_text), line=self.lineno)
+ raise SystemMessagePropagation(error)
+ elif item_index:
+ # ATTN pychecker users: num_cols is guaranteed to be set in the
+ # "else" clause below for item_index==0, before this branch is
+ # triggered.
+ if len(item[0]) != num_cols:
+ error = self.state_machine.reporter.error(
+ 'Error parsing content block for the "%s" directive: '
+ 'uniform two-level bullet list expected, but row %s '
+ 'does not contain the same number of items as row 1 '
+ '(%s vs %s).'
+ % (self.name, item_index + 1, len(item[0]), num_cols),
+ nodes.literal_block(self.block_text, self.block_text),
+ line=self.lineno)
+ raise SystemMessagePropagation(error)
+ else:
+ num_cols = len(item[0])
+ col_widths = self.get_column_widths(num_cols)
+ return num_cols, col_widths
+
+ def build_table_from_list(self, table_data, col_widths, header_rows, stub_columns):
+ table = nodes.table()
+ tgroup = nodes.tgroup(cols=len(col_widths))
+ table += tgroup
+ for col_width in col_widths:
+ colspec = nodes.colspec(colwidth=col_width)
+ if stub_columns:
+ colspec.attributes['stub'] = 1
+ stub_columns -= 1
+ tgroup += colspec
+ rows = []
+ for row in table_data:
+ row_node = nodes.row()
+ for cell in row:
+ entry = nodes.entry()
+ entry += cell
+ row_node += entry
+ rows.append(row_node)
+ if header_rows:
+ thead = nodes.thead()
+ thead.extend(rows[:header_rows])
+ tgroup += thead
+ tbody = nodes.tbody()
+ tbody.extend(rows[header_rows:])
+ tgroup += tbody
+ return table
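The ListTable directive above accepts a uniform two-level bullet list, where the outer list supplies rows and each inner list supplies that row's cells. A minimal sketch of driving it through the public docutils API (assuming `docutils` is installed; the sample source is illustrative):

```python
# Render a "list-table" directive to HTML: the outer bullets are rows,
# the inner bullets are cells, and :header-rows: moves rows into <thead>.
from docutils.core import publish_string

source = """\
.. list-table:: Sample
   :header-rows: 1

   * - Name
     - Value
   * - a
     - 1
"""

html = publish_string(source, writer_name='html')
```

If a row's inner list has a different number of items than row 1, `check_list_content` raises a `SystemMessagePropagation` and the directive returns an error node instead of a table.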
diff --git a/python/helpers/docutils/parsers/rst/include/README.txt b/python/helpers/docutils/parsers/rst/include/README.txt
new file mode 100644
index 0000000..cd03135
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/README.txt
@@ -0,0 +1,17 @@
+============================================
+ ``docutils/parsers/rst/include`` Directory
+============================================
+
+This directory contains standard data files intended for inclusion in
+reStructuredText documents. To access these files, use the "include"
+directive with the special syntax for standard "include" data files,
+angle brackets around the file name::
+
+ .. include:: <isonum.txt>
+
+See the documentation for the `"include" directive`__ and
+`reStructuredText Standard Substitution Definition Sets`__ for
+details.
+
+__ http://docutils.sf.net/docs/ref/rst/directives.html#include
+__ http://docutils.sf.net/docs/ref/rst/substitutions.html
diff --git a/python/helpers/docutils/parsers/rst/include/isoamsa.txt b/python/helpers/docutils/parsers/rst/include/isoamsa.txt
new file mode 100644
index 0000000..e6f4518
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isoamsa.txt
@@ -0,0 +1,162 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |angzarr| unicode:: U+0237C .. RIGHT ANGLE WITH DOWNWARDS ZIGZAG ARROW
+.. |cirmid| unicode:: U+02AEF .. VERTICAL LINE WITH CIRCLE ABOVE
+.. |cudarrl| unicode:: U+02938 .. RIGHT-SIDE ARC CLOCKWISE ARROW
+.. |cudarrr| unicode:: U+02935 .. ARROW POINTING RIGHTWARDS THEN CURVING DOWNWARDS
+.. |cularr| unicode:: U+021B6 .. ANTICLOCKWISE TOP SEMICIRCLE ARROW
+.. |cularrp| unicode:: U+0293D .. TOP ARC ANTICLOCKWISE ARROW WITH PLUS
+.. |curarr| unicode:: U+021B7 .. CLOCKWISE TOP SEMICIRCLE ARROW
+.. |curarrm| unicode:: U+0293C .. TOP ARC CLOCKWISE ARROW WITH MINUS
+.. |Darr| unicode:: U+021A1 .. DOWNWARDS TWO HEADED ARROW
+.. |dArr| unicode:: U+021D3 .. DOWNWARDS DOUBLE ARROW
+.. |darr2| unicode:: U+021CA .. DOWNWARDS PAIRED ARROWS
+.. |ddarr| unicode:: U+021CA .. DOWNWARDS PAIRED ARROWS
+.. |DDotrahd| unicode:: U+02911 .. RIGHTWARDS ARROW WITH DOTTED STEM
+.. |dfisht| unicode:: U+0297F .. DOWN FISH TAIL
+.. |dHar| unicode:: U+02965 .. DOWNWARDS HARPOON WITH BARB LEFT BESIDE DOWNWARDS HARPOON WITH BARB RIGHT
+.. |dharl| unicode:: U+021C3 .. DOWNWARDS HARPOON WITH BARB LEFTWARDS
+.. |dharr| unicode:: U+021C2 .. DOWNWARDS HARPOON WITH BARB RIGHTWARDS
+.. |dlarr| unicode:: U+02199 .. SOUTH WEST ARROW
+.. |drarr| unicode:: U+02198 .. SOUTH EAST ARROW
+.. |duarr| unicode:: U+021F5 .. DOWNWARDS ARROW LEFTWARDS OF UPWARDS ARROW
+.. |duhar| unicode:: U+0296F .. DOWNWARDS HARPOON WITH BARB LEFT BESIDE UPWARDS HARPOON WITH BARB RIGHT
+.. |dzigrarr| unicode:: U+027FF .. LONG RIGHTWARDS SQUIGGLE ARROW
+.. |erarr| unicode:: U+02971 .. EQUALS SIGN ABOVE RIGHTWARDS ARROW
+.. |hArr| unicode:: U+021D4 .. LEFT RIGHT DOUBLE ARROW
+.. |harr| unicode:: U+02194 .. LEFT RIGHT ARROW
+.. |harrcir| unicode:: U+02948 .. LEFT RIGHT ARROW THROUGH SMALL CIRCLE
+.. |harrw| unicode:: U+021AD .. LEFT RIGHT WAVE ARROW
+.. |hoarr| unicode:: U+021FF .. LEFT RIGHT OPEN-HEADED ARROW
+.. |imof| unicode:: U+022B7 .. IMAGE OF
+.. |lAarr| unicode:: U+021DA .. LEFTWARDS TRIPLE ARROW
+.. |Larr| unicode:: U+0219E .. LEFTWARDS TWO HEADED ARROW
+.. |larr2| unicode:: U+021C7 .. LEFTWARDS PAIRED ARROWS
+.. |larrbfs| unicode:: U+0291F .. LEFTWARDS ARROW FROM BAR TO BLACK DIAMOND
+.. |larrfs| unicode:: U+0291D .. LEFTWARDS ARROW TO BLACK DIAMOND
+.. |larrhk| unicode:: U+021A9 .. LEFTWARDS ARROW WITH HOOK
+.. |larrlp| unicode:: U+021AB .. LEFTWARDS ARROW WITH LOOP
+.. |larrpl| unicode:: U+02939 .. LEFT-SIDE ARC ANTICLOCKWISE ARROW
+.. |larrsim| unicode:: U+02973 .. LEFTWARDS ARROW ABOVE TILDE OPERATOR
+.. |larrtl| unicode:: U+021A2 .. LEFTWARDS ARROW WITH TAIL
+.. |lAtail| unicode:: U+0291B .. LEFTWARDS DOUBLE ARROW-TAIL
+.. |latail| unicode:: U+02919 .. LEFTWARDS ARROW-TAIL
+.. |lBarr| unicode:: U+0290E .. LEFTWARDS TRIPLE DASH ARROW
+.. |lbarr| unicode:: U+0290C .. LEFTWARDS DOUBLE DASH ARROW
+.. |ldca| unicode:: U+02936 .. ARROW POINTING DOWNWARDS THEN CURVING LEFTWARDS
+.. |ldrdhar| unicode:: U+02967 .. LEFTWARDS HARPOON WITH BARB DOWN ABOVE RIGHTWARDS HARPOON WITH BARB DOWN
+.. |ldrushar| unicode:: U+0294B .. LEFT BARB DOWN RIGHT BARB UP HARPOON
+.. |ldsh| unicode:: U+021B2 .. DOWNWARDS ARROW WITH TIP LEFTWARDS
+.. |lfisht| unicode:: U+0297C .. LEFT FISH TAIL
+.. |lHar| unicode:: U+02962 .. LEFTWARDS HARPOON WITH BARB UP ABOVE LEFTWARDS HARPOON WITH BARB DOWN
+.. |lhard| unicode:: U+021BD .. LEFTWARDS HARPOON WITH BARB DOWNWARDS
+.. |lharu| unicode:: U+021BC .. LEFTWARDS HARPOON WITH BARB UPWARDS
+.. |lharul| unicode:: U+0296A .. LEFTWARDS HARPOON WITH BARB UP ABOVE LONG DASH
+.. |llarr| unicode:: U+021C7 .. LEFTWARDS PAIRED ARROWS
+.. |llhard| unicode:: U+0296B .. LEFTWARDS HARPOON WITH BARB DOWN BELOW LONG DASH
+.. |loarr| unicode:: U+021FD .. LEFTWARDS OPEN-HEADED ARROW
+.. |lrarr| unicode:: U+021C6 .. LEFTWARDS ARROW OVER RIGHTWARDS ARROW
+.. |lrarr2| unicode:: U+021C6 .. LEFTWARDS ARROW OVER RIGHTWARDS ARROW
+.. |lrhar| unicode:: U+021CB .. LEFTWARDS HARPOON OVER RIGHTWARDS HARPOON
+.. |lrhar2| unicode:: U+021CB .. LEFTWARDS HARPOON OVER RIGHTWARDS HARPOON
+.. |lrhard| unicode:: U+0296D .. RIGHTWARDS HARPOON WITH BARB DOWN BELOW LONG DASH
+.. |lsh| unicode:: U+021B0 .. UPWARDS ARROW WITH TIP LEFTWARDS
+.. |lurdshar| unicode:: U+0294A .. LEFT BARB UP RIGHT BARB DOWN HARPOON
+.. |luruhar| unicode:: U+02966 .. LEFTWARDS HARPOON WITH BARB UP ABOVE RIGHTWARDS HARPOON WITH BARB UP
+.. |Map| unicode:: U+02905 .. RIGHTWARDS TWO-HEADED ARROW FROM BAR
+.. |map| unicode:: U+021A6 .. RIGHTWARDS ARROW FROM BAR
+.. |midcir| unicode:: U+02AF0 .. VERTICAL LINE WITH CIRCLE BELOW
+.. |mumap| unicode:: U+022B8 .. MULTIMAP
+.. |nearhk| unicode:: U+02924 .. NORTH EAST ARROW WITH HOOK
+.. |neArr| unicode:: U+021D7 .. NORTH EAST DOUBLE ARROW
+.. |nearr| unicode:: U+02197 .. NORTH EAST ARROW
+.. |nesear| unicode:: U+02928 .. NORTH EAST ARROW AND SOUTH EAST ARROW
+.. |nhArr| unicode:: U+021CE .. LEFT RIGHT DOUBLE ARROW WITH STROKE
+.. |nharr| unicode:: U+021AE .. LEFT RIGHT ARROW WITH STROKE
+.. |nlArr| unicode:: U+021CD .. LEFTWARDS DOUBLE ARROW WITH STROKE
+.. |nlarr| unicode:: U+0219A .. LEFTWARDS ARROW WITH STROKE
+.. |nrArr| unicode:: U+021CF .. RIGHTWARDS DOUBLE ARROW WITH STROKE
+.. |nrarr| unicode:: U+0219B .. RIGHTWARDS ARROW WITH STROKE
+.. |nrarrc| unicode:: U+02933 U+00338 .. WAVE ARROW POINTING DIRECTLY RIGHT with slash
+.. |nrarrw| unicode:: U+0219D U+00338 .. RIGHTWARDS WAVE ARROW with slash
+.. |nvHarr| unicode:: U+02904 .. LEFT RIGHT DOUBLE ARROW WITH VERTICAL STROKE
+.. |nvlArr| unicode:: U+02902 .. LEFTWARDS DOUBLE ARROW WITH VERTICAL STROKE
+.. |nvrArr| unicode:: U+02903 .. RIGHTWARDS DOUBLE ARROW WITH VERTICAL STROKE
+.. |nwarhk| unicode:: U+02923 .. NORTH WEST ARROW WITH HOOK
+.. |nwArr| unicode:: U+021D6 .. NORTH WEST DOUBLE ARROW
+.. |nwarr| unicode:: U+02196 .. NORTH WEST ARROW
+.. |nwnear| unicode:: U+02927 .. NORTH WEST ARROW AND NORTH EAST ARROW
+.. |olarr| unicode:: U+021BA .. ANTICLOCKWISE OPEN CIRCLE ARROW
+.. |orarr| unicode:: U+021BB .. CLOCKWISE OPEN CIRCLE ARROW
+.. |origof| unicode:: U+022B6 .. ORIGINAL OF
+.. |rAarr| unicode:: U+021DB .. RIGHTWARDS TRIPLE ARROW
+.. |Rarr| unicode:: U+021A0 .. RIGHTWARDS TWO HEADED ARROW
+.. |rarr2| unicode:: U+021C9 .. RIGHTWARDS PAIRED ARROWS
+.. |rarrap| unicode:: U+02975 .. RIGHTWARDS ARROW ABOVE ALMOST EQUAL TO
+.. |rarrbfs| unicode:: U+02920 .. RIGHTWARDS ARROW FROM BAR TO BLACK DIAMOND
+.. |rarrc| unicode:: U+02933 .. WAVE ARROW POINTING DIRECTLY RIGHT
+.. |rarrfs| unicode:: U+0291E .. RIGHTWARDS ARROW TO BLACK DIAMOND
+.. |rarrhk| unicode:: U+021AA .. RIGHTWARDS ARROW WITH HOOK
+.. |rarrlp| unicode:: U+021AC .. RIGHTWARDS ARROW WITH LOOP
+.. |rarrpl| unicode:: U+02945 .. RIGHTWARDS ARROW WITH PLUS BELOW
+.. |rarrsim| unicode:: U+02974 .. RIGHTWARDS ARROW ABOVE TILDE OPERATOR
+.. |Rarrtl| unicode:: U+02916 .. RIGHTWARDS TWO-HEADED ARROW WITH TAIL
+.. |rarrtl| unicode:: U+021A3 .. RIGHTWARDS ARROW WITH TAIL
+.. |rarrw| unicode:: U+0219D .. RIGHTWARDS WAVE ARROW
+.. |rAtail| unicode:: U+0291C .. RIGHTWARDS DOUBLE ARROW-TAIL
+.. |ratail| unicode:: U+0291A .. RIGHTWARDS ARROW-TAIL
+.. |RBarr| unicode:: U+02910 .. RIGHTWARDS TWO-HEADED TRIPLE DASH ARROW
+.. |rBarr| unicode:: U+0290F .. RIGHTWARDS TRIPLE DASH ARROW
+.. |rbarr| unicode:: U+0290D .. RIGHTWARDS DOUBLE DASH ARROW
+.. |rdca| unicode:: U+02937 .. ARROW POINTING DOWNWARDS THEN CURVING RIGHTWARDS
+.. |rdldhar| unicode:: U+02969 .. RIGHTWARDS HARPOON WITH BARB DOWN ABOVE LEFTWARDS HARPOON WITH BARB DOWN
+.. |rdsh| unicode:: U+021B3 .. DOWNWARDS ARROW WITH TIP RIGHTWARDS
+.. |rfisht| unicode:: U+0297D .. RIGHT FISH TAIL
+.. |rHar| unicode:: U+02964 .. RIGHTWARDS HARPOON WITH BARB UP ABOVE RIGHTWARDS HARPOON WITH BARB DOWN
+.. |rhard| unicode:: U+021C1 .. RIGHTWARDS HARPOON WITH BARB DOWNWARDS
+.. |rharu| unicode:: U+021C0 .. RIGHTWARDS HARPOON WITH BARB UPWARDS
+.. |rharul| unicode:: U+0296C .. RIGHTWARDS HARPOON WITH BARB UP ABOVE LONG DASH
+.. |rlarr| unicode:: U+021C4 .. RIGHTWARDS ARROW OVER LEFTWARDS ARROW
+.. |rlarr2| unicode:: U+021C4 .. RIGHTWARDS ARROW OVER LEFTWARDS ARROW
+.. |rlhar| unicode:: U+021CC .. RIGHTWARDS HARPOON OVER LEFTWARDS HARPOON
+.. |rlhar2| unicode:: U+021CC .. RIGHTWARDS HARPOON OVER LEFTWARDS HARPOON
+.. |roarr| unicode:: U+021FE .. RIGHTWARDS OPEN-HEADED ARROW
+.. |rrarr| unicode:: U+021C9 .. RIGHTWARDS PAIRED ARROWS
+.. |rsh| unicode:: U+021B1 .. UPWARDS ARROW WITH TIP RIGHTWARDS
+.. |ruluhar| unicode:: U+02968 .. RIGHTWARDS HARPOON WITH BARB UP ABOVE LEFTWARDS HARPOON WITH BARB UP
+.. |searhk| unicode:: U+02925 .. SOUTH EAST ARROW WITH HOOK
+.. |seArr| unicode:: U+021D8 .. SOUTH EAST DOUBLE ARROW
+.. |searr| unicode:: U+02198 .. SOUTH EAST ARROW
+.. |seswar| unicode:: U+02929 .. SOUTH EAST ARROW AND SOUTH WEST ARROW
+.. |simrarr| unicode:: U+02972 .. TILDE OPERATOR ABOVE RIGHTWARDS ARROW
+.. |slarr| unicode:: U+02190 .. LEFTWARDS ARROW
+.. |srarr| unicode:: U+02192 .. RIGHTWARDS ARROW
+.. |swarhk| unicode:: U+02926 .. SOUTH WEST ARROW WITH HOOK
+.. |swArr| unicode:: U+021D9 .. SOUTH WEST DOUBLE ARROW
+.. |swarr| unicode:: U+02199 .. SOUTH WEST ARROW
+.. |swnwar| unicode:: U+0292A .. SOUTH WEST ARROW AND NORTH WEST ARROW
+.. |Uarr| unicode:: U+0219F .. UPWARDS TWO HEADED ARROW
+.. |uArr| unicode:: U+021D1 .. UPWARDS DOUBLE ARROW
+.. |uarr2| unicode:: U+021C8 .. UPWARDS PAIRED ARROWS
+.. |Uarrocir| unicode:: U+02949 .. UPWARDS TWO-HEADED ARROW FROM SMALL CIRCLE
+.. |udarr| unicode:: U+021C5 .. UPWARDS ARROW LEFTWARDS OF DOWNWARDS ARROW
+.. |udhar| unicode:: U+0296E .. UPWARDS HARPOON WITH BARB LEFT BESIDE DOWNWARDS HARPOON WITH BARB RIGHT
+.. |ufisht| unicode:: U+0297E .. UP FISH TAIL
+.. |uHar| unicode:: U+02963 .. UPWARDS HARPOON WITH BARB LEFT BESIDE UPWARDS HARPOON WITH BARB RIGHT
+.. |uharl| unicode:: U+021BF .. UPWARDS HARPOON WITH BARB LEFTWARDS
+.. |uharr| unicode:: U+021BE .. UPWARDS HARPOON WITH BARB RIGHTWARDS
+.. |uuarr| unicode:: U+021C8 .. UPWARDS PAIRED ARROWS
+.. |vArr| unicode:: U+021D5 .. UP DOWN DOUBLE ARROW
+.. |varr| unicode:: U+02195 .. UP DOWN ARROW
+.. |xhArr| unicode:: U+027FA .. LONG LEFT RIGHT DOUBLE ARROW
+.. |xharr| unicode:: U+027F7 .. LONG LEFT RIGHT ARROW
+.. |xlArr| unicode:: U+027F8 .. LONG LEFTWARDS DOUBLE ARROW
+.. |xlarr| unicode:: U+027F5 .. LONG LEFTWARDS ARROW
+.. |xmap| unicode:: U+027FC .. LONG RIGHTWARDS ARROW FROM BAR
+.. |xrArr| unicode:: U+027F9 .. LONG RIGHTWARDS DOUBLE ARROW
+.. |xrarr| unicode:: U+027F6 .. LONG RIGHTWARDS ARROW
+.. |zigrarr| unicode:: U+021DD .. RIGHTWARDS SQUIGGLE ARROW
diff --git a/python/helpers/docutils/parsers/rst/include/isoamsb.txt b/python/helpers/docutils/parsers/rst/include/isoamsb.txt
new file mode 100644
index 0000000..05e68d9
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isoamsb.txt
@@ -0,0 +1,126 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |ac| unicode:: U+0223E .. INVERTED LAZY S
+.. |acE| unicode:: U+0223E U+00333 .. INVERTED LAZY S with double underline
+.. |amalg| unicode:: U+02A3F .. AMALGAMATION OR COPRODUCT
+.. |barvee| unicode:: U+022BD .. NOR
+.. |Barwed| unicode:: U+02306 .. PERSPECTIVE
+.. |barwed| unicode:: U+02305 .. PROJECTIVE
+.. |bsolb| unicode:: U+029C5 .. SQUARED FALLING DIAGONAL SLASH
+.. |Cap| unicode:: U+022D2 .. DOUBLE INTERSECTION
+.. |capand| unicode:: U+02A44 .. INTERSECTION WITH LOGICAL AND
+.. |capbrcup| unicode:: U+02A49 .. INTERSECTION ABOVE BAR ABOVE UNION
+.. |capcap| unicode:: U+02A4B .. INTERSECTION BESIDE AND JOINED WITH INTERSECTION
+.. |capcup| unicode:: U+02A47 .. INTERSECTION ABOVE UNION
+.. |capdot| unicode:: U+02A40 .. INTERSECTION WITH DOT
+.. |caps| unicode:: U+02229 U+0FE00 .. INTERSECTION with serifs
+.. |ccaps| unicode:: U+02A4D .. CLOSED INTERSECTION WITH SERIFS
+.. |ccups| unicode:: U+02A4C .. CLOSED UNION WITH SERIFS
+.. |ccupssm| unicode:: U+02A50 .. CLOSED UNION WITH SERIFS AND SMASH PRODUCT
+.. |coprod| unicode:: U+02210 .. N-ARY COPRODUCT
+.. |Cup| unicode:: U+022D3 .. DOUBLE UNION
+.. |cupbrcap| unicode:: U+02A48 .. UNION ABOVE BAR ABOVE INTERSECTION
+.. |cupcap| unicode:: U+02A46 .. UNION ABOVE INTERSECTION
+.. |cupcup| unicode:: U+02A4A .. UNION BESIDE AND JOINED WITH UNION
+.. |cupdot| unicode:: U+0228D .. MULTISET MULTIPLICATION
+.. |cupor| unicode:: U+02A45 .. UNION WITH LOGICAL OR
+.. |cups| unicode:: U+0222A U+0FE00 .. UNION with serifs
+.. |cuvee| unicode:: U+022CE .. CURLY LOGICAL OR
+.. |cuwed| unicode:: U+022CF .. CURLY LOGICAL AND
+.. |Dagger| unicode:: U+02021 .. DOUBLE DAGGER
+.. |dagger| unicode:: U+02020 .. DAGGER
+.. |diam| unicode:: U+022C4 .. DIAMOND OPERATOR
+.. |divonx| unicode:: U+022C7 .. DIVISION TIMES
+.. |eplus| unicode:: U+02A71 .. EQUALS SIGN ABOVE PLUS SIGN
+.. |hercon| unicode:: U+022B9 .. HERMITIAN CONJUGATE MATRIX
+.. |intcal| unicode:: U+022BA .. INTERCALATE
+.. |iprod| unicode:: U+02A3C .. INTERIOR PRODUCT
+.. |loplus| unicode:: U+02A2D .. PLUS SIGN IN LEFT HALF CIRCLE
+.. |lotimes| unicode:: U+02A34 .. MULTIPLICATION SIGN IN LEFT HALF CIRCLE
+.. |lthree| unicode:: U+022CB .. LEFT SEMIDIRECT PRODUCT
+.. |ltimes| unicode:: U+022C9 .. LEFT NORMAL FACTOR SEMIDIRECT PRODUCT
+.. |midast| unicode:: U+0002A .. ASTERISK
+.. |minusb| unicode:: U+0229F .. SQUARED MINUS
+.. |minusd| unicode:: U+02238 .. DOT MINUS
+.. |minusdu| unicode:: U+02A2A .. MINUS SIGN WITH DOT BELOW
+.. |ncap| unicode:: U+02A43 .. INTERSECTION WITH OVERBAR
+.. |ncup| unicode:: U+02A42 .. UNION WITH OVERBAR
+.. |oast| unicode:: U+0229B .. CIRCLED ASTERISK OPERATOR
+.. |ocir| unicode:: U+0229A .. CIRCLED RING OPERATOR
+.. |odash| unicode:: U+0229D .. CIRCLED DASH
+.. |odiv| unicode:: U+02A38 .. CIRCLED DIVISION SIGN
+.. |odot| unicode:: U+02299 .. CIRCLED DOT OPERATOR
+.. |odsold| unicode:: U+029BC .. CIRCLED ANTICLOCKWISE-ROTATED DIVISION SIGN
+.. |ofcir| unicode:: U+029BF .. CIRCLED BULLET
+.. |ogt| unicode:: U+029C1 .. CIRCLED GREATER-THAN
+.. |ohbar| unicode:: U+029B5 .. CIRCLE WITH HORIZONTAL BAR
+.. |olcir| unicode:: U+029BE .. CIRCLED WHITE BULLET
+.. |olt| unicode:: U+029C0 .. CIRCLED LESS-THAN
+.. |omid| unicode:: U+029B6 .. CIRCLED VERTICAL BAR
+.. |ominus| unicode:: U+02296 .. CIRCLED MINUS
+.. |opar| unicode:: U+029B7 .. CIRCLED PARALLEL
+.. |operp| unicode:: U+029B9 .. CIRCLED PERPENDICULAR
+.. |oplus| unicode:: U+02295 .. CIRCLED PLUS
+.. |osol| unicode:: U+02298 .. CIRCLED DIVISION SLASH
+.. |Otimes| unicode:: U+02A37 .. MULTIPLICATION SIGN IN DOUBLE CIRCLE
+.. |otimes| unicode:: U+02297 .. CIRCLED TIMES
+.. |otimesas| unicode:: U+02A36 .. CIRCLED MULTIPLICATION SIGN WITH CIRCUMFLEX ACCENT
+.. |ovbar| unicode:: U+0233D .. APL FUNCTIONAL SYMBOL CIRCLE STILE
+.. |plusacir| unicode:: U+02A23 .. PLUS SIGN WITH CIRCUMFLEX ACCENT ABOVE
+.. |plusb| unicode:: U+0229E .. SQUARED PLUS
+.. |pluscir| unicode:: U+02A22 .. PLUS SIGN WITH SMALL CIRCLE ABOVE
+.. |plusdo| unicode:: U+02214 .. DOT PLUS
+.. |plusdu| unicode:: U+02A25 .. PLUS SIGN WITH DOT BELOW
+.. |pluse| unicode:: U+02A72 .. PLUS SIGN ABOVE EQUALS SIGN
+.. |plussim| unicode:: U+02A26 .. PLUS SIGN WITH TILDE BELOW
+.. |plustwo| unicode:: U+02A27 .. PLUS SIGN WITH SUBSCRIPT TWO
+.. |prod| unicode:: U+0220F .. N-ARY PRODUCT
+.. |race| unicode:: U+029DA .. LEFT DOUBLE WIGGLY FENCE
+.. |roplus| unicode:: U+02A2E .. PLUS SIGN IN RIGHT HALF CIRCLE
+.. |rotimes| unicode:: U+02A35 .. MULTIPLICATION SIGN IN RIGHT HALF CIRCLE
+.. |rthree| unicode:: U+022CC .. RIGHT SEMIDIRECT PRODUCT
+.. |rtimes| unicode:: U+022CA .. RIGHT NORMAL FACTOR SEMIDIRECT PRODUCT
+.. |sdot| unicode:: U+022C5 .. DOT OPERATOR
+.. |sdotb| unicode:: U+022A1 .. SQUARED DOT OPERATOR
+.. |setmn| unicode:: U+02216 .. SET MINUS
+.. |simplus| unicode:: U+02A24 .. PLUS SIGN WITH TILDE ABOVE
+.. |smashp| unicode:: U+02A33 .. SMASH PRODUCT
+.. |solb| unicode:: U+029C4 .. SQUARED RISING DIAGONAL SLASH
+.. |sqcap| unicode:: U+02293 .. SQUARE CAP
+.. |sqcaps| unicode:: U+02293 U+0FE00 .. SQUARE CAP with serifs
+.. |sqcup| unicode:: U+02294 .. SQUARE CUP
+.. |sqcups| unicode:: U+02294 U+0FE00 .. SQUARE CUP with serifs
+.. |ssetmn| unicode:: U+02216 .. SET MINUS
+.. |sstarf| unicode:: U+022C6 .. STAR OPERATOR
+.. |subdot| unicode:: U+02ABD .. SUBSET WITH DOT
+.. |sum| unicode:: U+02211 .. N-ARY SUMMATION
+.. |supdot| unicode:: U+02ABE .. SUPERSET WITH DOT
+.. |timesb| unicode:: U+022A0 .. SQUARED TIMES
+.. |timesbar| unicode:: U+02A31 .. MULTIPLICATION SIGN WITH UNDERBAR
+.. |timesd| unicode:: U+02A30 .. MULTIPLICATION SIGN WITH DOT ABOVE
+.. |top| unicode:: U+022A4 .. DOWN TACK
+.. |tridot| unicode:: U+025EC .. WHITE UP-POINTING TRIANGLE WITH DOT
+.. |triminus| unicode:: U+02A3A .. MINUS SIGN IN TRIANGLE
+.. |triplus| unicode:: U+02A39 .. PLUS SIGN IN TRIANGLE
+.. |trisb| unicode:: U+029CD .. TRIANGLE WITH SERIFS AT BOTTOM
+.. |tritime| unicode:: U+02A3B .. MULTIPLICATION SIGN IN TRIANGLE
+.. |uplus| unicode:: U+0228E .. MULTISET UNION
+.. |veebar| unicode:: U+022BB .. XOR
+.. |wedbar| unicode:: U+02A5F .. LOGICAL AND WITH UNDERBAR
+.. |wreath| unicode:: U+02240 .. WREATH PRODUCT
+.. |xcap| unicode:: U+022C2 .. N-ARY INTERSECTION
+.. |xcirc| unicode:: U+025EF .. LARGE CIRCLE
+.. |xcup| unicode:: U+022C3 .. N-ARY UNION
+.. |xdtri| unicode:: U+025BD .. WHITE DOWN-POINTING TRIANGLE
+.. |xodot| unicode:: U+02A00 .. N-ARY CIRCLED DOT OPERATOR
+.. |xoplus| unicode:: U+02A01 .. N-ARY CIRCLED PLUS OPERATOR
+.. |xotime| unicode:: U+02A02 .. N-ARY CIRCLED TIMES OPERATOR
+.. |xsqcup| unicode:: U+02A06 .. N-ARY SQUARE UNION OPERATOR
+.. |xuplus| unicode:: U+02A04 .. N-ARY UNION OPERATOR WITH PLUS
+.. |xutri| unicode:: U+025B3 .. WHITE UP-POINTING TRIANGLE
+.. |xvee| unicode:: U+022C1 .. N-ARY LOGICAL OR
+.. |xwedge| unicode:: U+022C0 .. N-ARY LOGICAL AND
diff --git a/python/helpers/docutils/parsers/rst/include/isoamsc.txt b/python/helpers/docutils/parsers/rst/include/isoamsc.txt
new file mode 100644
index 0000000..343504d
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isoamsc.txt
@@ -0,0 +1,29 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |dlcorn| unicode:: U+0231E .. BOTTOM LEFT CORNER
+.. |drcorn| unicode:: U+0231F .. BOTTOM RIGHT CORNER
+.. |gtlPar| unicode:: U+02995 .. DOUBLE LEFT ARC GREATER-THAN BRACKET
+.. |langd| unicode:: U+02991 .. LEFT ANGLE BRACKET WITH DOT
+.. |lbrke| unicode:: U+0298B .. LEFT SQUARE BRACKET WITH UNDERBAR
+.. |lbrksld| unicode:: U+0298F .. LEFT SQUARE BRACKET WITH TICK IN BOTTOM CORNER
+.. |lbrkslu| unicode:: U+0298D .. LEFT SQUARE BRACKET WITH TICK IN TOP CORNER
+.. |lceil| unicode:: U+02308 .. LEFT CEILING
+.. |lfloor| unicode:: U+0230A .. LEFT FLOOR
+.. |lmoust| unicode:: U+023B0 .. UPPER LEFT OR LOWER RIGHT CURLY BRACKET SECTION
+.. |lpargt| unicode:: U+029A0 .. SPHERICAL ANGLE OPENING LEFT
+.. |lparlt| unicode:: U+02993 .. LEFT ARC LESS-THAN BRACKET
+.. |ltrPar| unicode:: U+02996 .. DOUBLE RIGHT ARC LESS-THAN BRACKET
+.. |rangd| unicode:: U+02992 .. RIGHT ANGLE BRACKET WITH DOT
+.. |rbrke| unicode:: U+0298C .. RIGHT SQUARE BRACKET WITH UNDERBAR
+.. |rbrksld| unicode:: U+0298E .. RIGHT SQUARE BRACKET WITH TICK IN BOTTOM CORNER
+.. |rbrkslu| unicode:: U+02990 .. RIGHT SQUARE BRACKET WITH TICK IN TOP CORNER
+.. |rceil| unicode:: U+02309 .. RIGHT CEILING
+.. |rfloor| unicode:: U+0230B .. RIGHT FLOOR
+.. |rmoust| unicode:: U+023B1 .. UPPER RIGHT OR LOWER LEFT CURLY BRACKET SECTION
+.. |rpargt| unicode:: U+02994 .. RIGHT ARC GREATER-THAN BRACKET
+.. |ulcorn| unicode:: U+0231C .. TOP LEFT CORNER
+.. |urcorn| unicode:: U+0231D .. TOP RIGHT CORNER
diff --git a/python/helpers/docutils/parsers/rst/include/isoamsn.txt b/python/helpers/docutils/parsers/rst/include/isoamsn.txt
new file mode 100644
index 0000000..5ff1729
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isoamsn.txt
@@ -0,0 +1,96 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |gnap| unicode:: U+02A8A .. GREATER-THAN AND NOT APPROXIMATE
+.. |gnE| unicode:: U+02269 .. GREATER-THAN BUT NOT EQUAL TO
+.. |gne| unicode:: U+02A88 .. GREATER-THAN AND SINGLE-LINE NOT EQUAL TO
+.. |gnsim| unicode:: U+022E7 .. GREATER-THAN BUT NOT EQUIVALENT TO
+.. |gvnE| unicode:: U+02269 U+0FE00 .. GREATER-THAN BUT NOT EQUAL TO - with vertical stroke
+.. |lnap| unicode:: U+02A89 .. LESS-THAN AND NOT APPROXIMATE
+.. |lnE| unicode:: U+02268 .. LESS-THAN BUT NOT EQUAL TO
+.. |lne| unicode:: U+02A87 .. LESS-THAN AND SINGLE-LINE NOT EQUAL TO
+.. |lnsim| unicode:: U+022E6 .. LESS-THAN BUT NOT EQUIVALENT TO
+.. |lvnE| unicode:: U+02268 U+0FE00 .. LESS-THAN BUT NOT EQUAL TO - with vertical stroke
+.. |nap| unicode:: U+02249 .. NOT ALMOST EQUAL TO
+.. |napE| unicode:: U+02A70 U+00338 .. APPROXIMATELY EQUAL OR EQUAL TO with slash
+.. |napid| unicode:: U+0224B U+00338 .. TRIPLE TILDE with slash
+.. |ncong| unicode:: U+02247 .. NEITHER APPROXIMATELY NOR ACTUALLY EQUAL TO
+.. |ncongdot| unicode:: U+02A6D U+00338 .. CONGRUENT WITH DOT ABOVE with slash
+.. |nequiv| unicode:: U+02262 .. NOT IDENTICAL TO
+.. |ngE| unicode:: U+02267 U+00338 .. GREATER-THAN OVER EQUAL TO with slash
+.. |nge| unicode:: U+02271 .. NEITHER GREATER-THAN NOR EQUAL TO
+.. |nges| unicode:: U+02A7E U+00338 .. GREATER-THAN OR SLANTED EQUAL TO with slash
+.. |nGg| unicode:: U+022D9 U+00338 .. VERY MUCH GREATER-THAN with slash
+.. |ngsim| unicode:: U+02275 .. NEITHER GREATER-THAN NOR EQUIVALENT TO
+.. |nGt| unicode:: U+0226B U+020D2 .. MUCH GREATER THAN with vertical line
+.. |ngt| unicode:: U+0226F .. NOT GREATER-THAN
+.. |nGtv| unicode:: U+0226B U+00338 .. MUCH GREATER THAN with slash
+.. |nlE| unicode:: U+02266 U+00338 .. LESS-THAN OVER EQUAL TO with slash
+.. |nle| unicode:: U+02270 .. NEITHER LESS-THAN NOR EQUAL TO
+.. |nles| unicode:: U+02A7D U+00338 .. LESS-THAN OR SLANTED EQUAL TO with slash
+.. |nLl| unicode:: U+022D8 U+00338 .. VERY MUCH LESS-THAN with slash
+.. |nlsim| unicode:: U+02274 .. NEITHER LESS-THAN NOR EQUIVALENT TO
+.. |nLt| unicode:: U+0226A U+020D2 .. MUCH LESS THAN with vertical line
+.. |nlt| unicode:: U+0226E .. NOT LESS-THAN
+.. |nltri| unicode:: U+022EA .. NOT NORMAL SUBGROUP OF
+.. |nltrie| unicode:: U+022EC .. NOT NORMAL SUBGROUP OF OR EQUAL TO
+.. |nLtv| unicode:: U+0226A U+00338 .. MUCH LESS THAN with slash
+.. |nmid| unicode:: U+02224 .. DOES NOT DIVIDE
+.. |npar| unicode:: U+02226 .. NOT PARALLEL TO
+.. |npr| unicode:: U+02280 .. DOES NOT PRECEDE
+.. |nprcue| unicode:: U+022E0 .. DOES NOT PRECEDE OR EQUAL
+.. |npre| unicode:: U+02AAF U+00338 .. PRECEDES ABOVE SINGLE-LINE EQUALS SIGN with slash
+.. |nrtri| unicode:: U+022EB .. DOES NOT CONTAIN AS NORMAL SUBGROUP
+.. |nrtrie| unicode:: U+022ED .. DOES NOT CONTAIN AS NORMAL SUBGROUP OR EQUAL
+.. |nsc| unicode:: U+02281 .. DOES NOT SUCCEED
+.. |nsccue| unicode:: U+022E1 .. DOES NOT SUCCEED OR EQUAL
+.. |nsce| unicode:: U+02AB0 U+00338 .. SUCCEEDS ABOVE SINGLE-LINE EQUALS SIGN with slash
+.. |nsim| unicode:: U+02241 .. NOT TILDE
+.. |nsime| unicode:: U+02244 .. NOT ASYMPTOTICALLY EQUAL TO
+.. |nsmid| unicode:: U+02224 .. DOES NOT DIVIDE
+.. |nspar| unicode:: U+02226 .. NOT PARALLEL TO
+.. |nsqsube| unicode:: U+022E2 .. NOT SQUARE IMAGE OF OR EQUAL TO
+.. |nsqsupe| unicode:: U+022E3 .. NOT SQUARE ORIGINAL OF OR EQUAL TO
+.. |nsub| unicode:: U+02284 .. NOT A SUBSET OF
+.. |nsubE| unicode:: U+02AC5 U+00338 .. SUBSET OF ABOVE EQUALS SIGN with slash
+.. |nsube| unicode:: U+02288 .. NEITHER A SUBSET OF NOR EQUAL TO
+.. |nsup| unicode:: U+02285 .. NOT A SUPERSET OF
+.. |nsupE| unicode:: U+02AC6 U+00338 .. SUPERSET OF ABOVE EQUALS SIGN with slash
+.. |nsupe| unicode:: U+02289 .. NEITHER A SUPERSET OF NOR EQUAL TO
+.. |ntgl| unicode:: U+02279 .. NEITHER GREATER-THAN NOR LESS-THAN
+.. |ntlg| unicode:: U+02278 .. NEITHER LESS-THAN NOR GREATER-THAN
+.. |nvap| unicode:: U+0224D U+020D2 .. EQUIVALENT TO with vertical line
+.. |nVDash| unicode:: U+022AF .. NEGATED DOUBLE VERTICAL BAR DOUBLE RIGHT TURNSTILE
+.. |nVdash| unicode:: U+022AE .. DOES NOT FORCE
+.. |nvDash| unicode:: U+022AD .. NOT TRUE
+.. |nvdash| unicode:: U+022AC .. DOES NOT PROVE
+.. |nvge| unicode:: U+02265 U+020D2 .. GREATER-THAN OR EQUAL TO with vertical line
+.. |nvgt| unicode:: U+0003E U+020D2 .. GREATER-THAN SIGN with vertical line
+.. |nvle| unicode:: U+02264 U+020D2 .. LESS-THAN OR EQUAL TO with vertical line
+.. |nvlt| unicode:: U+0003C U+020D2 .. LESS-THAN SIGN with vertical line
+.. |nvltrie| unicode:: U+022B4 U+020D2 .. NORMAL SUBGROUP OF OR EQUAL TO with vertical line
+.. |nvrtrie| unicode:: U+022B5 U+020D2 .. CONTAINS AS NORMAL SUBGROUP OR EQUAL TO with vertical line
+.. |nvsim| unicode:: U+0223C U+020D2 .. TILDE OPERATOR with vertical line
+.. |parsim| unicode:: U+02AF3 .. PARALLEL WITH TILDE OPERATOR
+.. |prnap| unicode:: U+02AB9 .. PRECEDES ABOVE NOT ALMOST EQUAL TO
+.. |prnE| unicode:: U+02AB5 .. PRECEDES ABOVE NOT EQUAL TO
+.. |prnsim| unicode:: U+022E8 .. PRECEDES BUT NOT EQUIVALENT TO
+.. |rnmid| unicode:: U+02AEE .. DOES NOT DIVIDE WITH REVERSED NEGATION SLASH
+.. |scnap| unicode:: U+02ABA .. SUCCEEDS ABOVE NOT ALMOST EQUAL TO
+.. |scnE| unicode:: U+02AB6 .. SUCCEEDS ABOVE NOT EQUAL TO
+.. |scnsim| unicode:: U+022E9 .. SUCCEEDS BUT NOT EQUIVALENT TO
+.. |simne| unicode:: U+02246 .. APPROXIMATELY BUT NOT ACTUALLY EQUAL TO
+.. |solbar| unicode:: U+0233F .. APL FUNCTIONAL SYMBOL SLASH BAR
+.. |subnE| unicode:: U+02ACB .. SUBSET OF ABOVE NOT EQUAL TO
+.. |subne| unicode:: U+0228A .. SUBSET OF WITH NOT EQUAL TO
+.. |supnE| unicode:: U+02ACC .. SUPERSET OF ABOVE NOT EQUAL TO
+.. |supne| unicode:: U+0228B .. SUPERSET OF WITH NOT EQUAL TO
+.. |vnsub| unicode:: U+02282 U+020D2 .. SUBSET OF with vertical line
+.. |vnsup| unicode:: U+02283 U+020D2 .. SUPERSET OF with vertical line
+.. |vsubnE| unicode:: U+02ACB U+0FE00 .. SUBSET OF ABOVE NOT EQUAL TO - variant with stroke through bottom members
+.. |vsubne| unicode:: U+0228A U+0FE00 .. SUBSET OF WITH NOT EQUAL TO - variant with stroke through bottom members
+.. |vsupnE| unicode:: U+02ACC U+0FE00 .. SUPERSET OF ABOVE NOT EQUAL TO - variant with stroke through bottom members
+.. |vsupne| unicode:: U+0228B U+0FE00 .. SUPERSET OF WITH NOT EQUAL TO - variant with stroke through bottom members
diff --git a/python/helpers/docutils/parsers/rst/include/isoamso.txt b/python/helpers/docutils/parsers/rst/include/isoamso.txt
new file mode 100644
index 0000000..65cc17e
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isoamso.txt
@@ -0,0 +1,62 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |ang| unicode:: U+02220 .. ANGLE
+.. |ange| unicode:: U+029A4 .. ANGLE WITH UNDERBAR
+.. |angmsd| unicode:: U+02221 .. MEASURED ANGLE
+.. |angmsdaa| unicode:: U+029A8 .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING UP AND RIGHT
+.. |angmsdab| unicode:: U+029A9 .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING UP AND LEFT
+.. |angmsdac| unicode:: U+029AA .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING DOWN AND RIGHT
+.. |angmsdad| unicode:: U+029AB .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING DOWN AND LEFT
+.. |angmsdae| unicode:: U+029AC .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING RIGHT AND UP
+.. |angmsdaf| unicode:: U+029AD .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING LEFT AND UP
+.. |angmsdag| unicode:: U+029AE .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING RIGHT AND DOWN
+.. |angmsdah| unicode:: U+029AF .. MEASURED ANGLE WITH OPEN ARM ENDING IN ARROW POINTING LEFT AND DOWN
+.. |angrtvb| unicode:: U+022BE .. RIGHT ANGLE WITH ARC
+.. |angrtvbd| unicode:: U+0299D .. MEASURED RIGHT ANGLE WITH DOT
+.. |bbrk| unicode:: U+023B5 .. BOTTOM SQUARE BRACKET
+.. |bbrktbrk| unicode:: U+023B6 .. BOTTOM SQUARE BRACKET OVER TOP SQUARE BRACKET
+.. |bemptyv| unicode:: U+029B0 .. REVERSED EMPTY SET
+.. |beth| unicode:: U+02136 .. BET SYMBOL
+.. |boxbox| unicode:: U+029C9 .. TWO JOINED SQUARES
+.. |bprime| unicode:: U+02035 .. REVERSED PRIME
+.. |bsemi| unicode:: U+0204F .. REVERSED SEMICOLON
+.. |cemptyv| unicode:: U+029B2 .. EMPTY SET WITH SMALL CIRCLE ABOVE
+.. |cirE| unicode:: U+029C3 .. CIRCLE WITH TWO HORIZONTAL STROKES TO THE RIGHT
+.. |cirscir| unicode:: U+029C2 .. CIRCLE WITH SMALL CIRCLE TO THE RIGHT
+.. |comp| unicode:: U+02201 .. COMPLEMENT
+.. |daleth| unicode:: U+02138 .. DALET SYMBOL
+.. |demptyv| unicode:: U+029B1 .. EMPTY SET WITH OVERBAR
+.. |ell| unicode:: U+02113 .. SCRIPT SMALL L
+.. |empty| unicode:: U+02205 .. EMPTY SET
+.. |emptyv| unicode:: U+02205 .. EMPTY SET
+.. |gimel| unicode:: U+02137 .. GIMEL SYMBOL
+.. |iiota| unicode:: U+02129 .. TURNED GREEK SMALL LETTER IOTA
+.. |image| unicode:: U+02111 .. BLACK-LETTER CAPITAL I
+.. |imath| unicode:: U+00131 .. LATIN SMALL LETTER DOTLESS I
+.. |inodot| unicode:: U+00131 .. LATIN SMALL LETTER DOTLESS I
+.. |jmath| unicode:: U+0006A .. LATIN SMALL LETTER J
+.. |jnodot| unicode:: U+0006A .. LATIN SMALL LETTER J
+.. |laemptyv| unicode:: U+029B4 .. EMPTY SET WITH LEFT ARROW ABOVE
+.. |lltri| unicode:: U+025FA .. LOWER LEFT TRIANGLE
+.. |lrtri| unicode:: U+022BF .. RIGHT TRIANGLE
+.. |mho| unicode:: U+02127 .. INVERTED OHM SIGN
+.. |nang| unicode:: U+02220 U+020D2 .. ANGLE with vertical line
+.. |nexist| unicode:: U+02204 .. THERE DOES NOT EXIST
+.. |oS| unicode:: U+024C8 .. CIRCLED LATIN CAPITAL LETTER S
+.. |planck| unicode:: U+0210F .. PLANCK CONSTANT OVER TWO PI
+.. |plankv| unicode:: U+0210F .. PLANCK CONSTANT OVER TWO PI
+.. |raemptyv| unicode:: U+029B3 .. EMPTY SET WITH RIGHT ARROW ABOVE
+.. |range| unicode:: U+029A5 .. REVERSED ANGLE WITH UNDERBAR
+.. |real| unicode:: U+0211C .. BLACK-LETTER CAPITAL R
+.. |sbsol| unicode:: U+0FE68 .. SMALL REVERSE SOLIDUS
+.. |tbrk| unicode:: U+023B4 .. TOP SQUARE BRACKET
+.. |trpezium| unicode:: U+0FFFD .. REPLACEMENT CHARACTER
+.. |ultri| unicode:: U+025F8 .. UPPER LEFT TRIANGLE
+.. |urtri| unicode:: U+025F9 .. UPPER RIGHT TRIANGLE
+.. |vprime| unicode:: U+02032 .. PRIME
+.. |vzigzag| unicode:: U+0299A .. VERTICAL ZIGZAG LINE
+.. |weierp| unicode:: U+02118 .. SCRIPT CAPITAL P
diff --git a/python/helpers/docutils/parsers/rst/include/isoamsr.txt b/python/helpers/docutils/parsers/rst/include/isoamsr.txt
new file mode 100644
index 0000000..a3d03da
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isoamsr.txt
@@ -0,0 +1,191 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |apE| unicode:: U+02A70 .. APPROXIMATELY EQUAL OR EQUAL TO
+.. |ape| unicode:: U+0224A .. ALMOST EQUAL OR EQUAL TO
+.. |apid| unicode:: U+0224B .. TRIPLE TILDE
+.. |asymp| unicode:: U+02248 .. ALMOST EQUAL TO
+.. |Barv| unicode:: U+02AE7 .. SHORT DOWN TACK WITH OVERBAR
+.. |bcong| unicode:: U+0224C .. ALL EQUAL TO
+.. |bepsi| unicode:: U+003F6 .. GREEK REVERSED LUNATE EPSILON SYMBOL
+.. |bowtie| unicode:: U+022C8 .. BOWTIE
+.. |bsim| unicode:: U+0223D .. REVERSED TILDE
+.. |bsime| unicode:: U+022CD .. REVERSED TILDE EQUALS
+.. |bsolhsub| unicode:: U+0005C U+02282 .. REVERSE SOLIDUS, SUBSET OF
+.. |bump| unicode:: U+0224E .. GEOMETRICALLY EQUIVALENT TO
+.. |bumpE| unicode:: U+02AAE .. EQUALS SIGN WITH BUMPY ABOVE
+.. |bumpe| unicode:: U+0224F .. DIFFERENCE BETWEEN
+.. |cire| unicode:: U+02257 .. RING EQUAL TO
+.. |Colon| unicode:: U+02237 .. PROPORTION
+.. |Colone| unicode:: U+02A74 .. DOUBLE COLON EQUAL
+.. |colone| unicode:: U+02254 .. COLON EQUALS
+.. |congdot| unicode:: U+02A6D .. CONGRUENT WITH DOT ABOVE
+.. |csub| unicode:: U+02ACF .. CLOSED SUBSET
+.. |csube| unicode:: U+02AD1 .. CLOSED SUBSET OR EQUAL TO
+.. |csup| unicode:: U+02AD0 .. CLOSED SUPERSET
+.. |csupe| unicode:: U+02AD2 .. CLOSED SUPERSET OR EQUAL TO
+.. |cuepr| unicode:: U+022DE .. EQUAL TO OR PRECEDES
+.. |cuesc| unicode:: U+022DF .. EQUAL TO OR SUCCEEDS
+.. |cupre| unicode:: U+0227C .. PRECEDES OR EQUAL TO
+.. |Dashv| unicode:: U+02AE4 .. VERTICAL BAR DOUBLE LEFT TURNSTILE
+.. |dashv| unicode:: U+022A3 .. LEFT TACK
+.. |easter| unicode:: U+02A6E .. EQUALS WITH ASTERISK
+.. |ecir| unicode:: U+02256 .. RING IN EQUAL TO
+.. |ecolon| unicode:: U+02255 .. EQUALS COLON
+.. |eDDot| unicode:: U+02A77 .. EQUALS SIGN WITH TWO DOTS ABOVE AND TWO DOTS BELOW
+.. |eDot| unicode:: U+02251 .. GEOMETRICALLY EQUAL TO
+.. |efDot| unicode:: U+02252 .. APPROXIMATELY EQUAL TO OR THE IMAGE OF
+.. |eg| unicode:: U+02A9A .. DOUBLE-LINE EQUAL TO OR GREATER-THAN
+.. |egs| unicode:: U+02A96 .. SLANTED EQUAL TO OR GREATER-THAN
+.. |egsdot| unicode:: U+02A98 .. SLANTED EQUAL TO OR GREATER-THAN WITH DOT INSIDE
+.. |el| unicode:: U+02A99 .. DOUBLE-LINE EQUAL TO OR LESS-THAN
+.. |els| unicode:: U+02A95 .. SLANTED EQUAL TO OR LESS-THAN
+.. |elsdot| unicode:: U+02A97 .. SLANTED EQUAL TO OR LESS-THAN WITH DOT INSIDE
+.. |equest| unicode:: U+0225F .. QUESTIONED EQUAL TO
+.. |equivDD| unicode:: U+02A78 .. EQUIVALENT WITH FOUR DOTS ABOVE
+.. |erDot| unicode:: U+02253 .. IMAGE OF OR APPROXIMATELY EQUAL TO
+.. |esdot| unicode:: U+02250 .. APPROACHES THE LIMIT
+.. |Esim| unicode:: U+02A73 .. EQUALS SIGN ABOVE TILDE OPERATOR
+.. |esim| unicode:: U+02242 .. MINUS TILDE
+.. |fork| unicode:: U+022D4 .. PITCHFORK
+.. |forkv| unicode:: U+02AD9 .. ELEMENT OF OPENING DOWNWARDS
+.. |frown| unicode:: U+02322 .. FROWN
+.. |gap| unicode:: U+02A86 .. GREATER-THAN OR APPROXIMATE
+.. |gE| unicode:: U+02267 .. GREATER-THAN OVER EQUAL TO
+.. |gEl| unicode:: U+02A8C .. GREATER-THAN ABOVE DOUBLE-LINE EQUAL ABOVE LESS-THAN
+.. |gel| unicode:: U+022DB .. GREATER-THAN EQUAL TO OR LESS-THAN
+.. |ges| unicode:: U+02A7E .. GREATER-THAN OR SLANTED EQUAL TO
+.. |gescc| unicode:: U+02AA9 .. GREATER-THAN CLOSED BY CURVE ABOVE SLANTED EQUAL
+.. |gesdot| unicode:: U+02A80 .. GREATER-THAN OR SLANTED EQUAL TO WITH DOT INSIDE
+.. |gesdoto| unicode:: U+02A82 .. GREATER-THAN OR SLANTED EQUAL TO WITH DOT ABOVE
+.. |gesdotol| unicode:: U+02A84 .. GREATER-THAN OR SLANTED EQUAL TO WITH DOT ABOVE LEFT
+.. |gesl| unicode:: U+022DB U+0FE00 .. GREATER-THAN slanted EQUAL TO OR LESS-THAN
+.. |gesles| unicode:: U+02A94 .. GREATER-THAN ABOVE SLANTED EQUAL ABOVE LESS-THAN ABOVE SLANTED EQUAL
+.. |Gg| unicode:: U+022D9 .. VERY MUCH GREATER-THAN
+.. |gl| unicode:: U+02277 .. GREATER-THAN OR LESS-THAN
+.. |gla| unicode:: U+02AA5 .. GREATER-THAN BESIDE LESS-THAN
+.. |glE| unicode:: U+02A92 .. GREATER-THAN ABOVE LESS-THAN ABOVE DOUBLE-LINE EQUAL
+.. |glj| unicode:: U+02AA4 .. GREATER-THAN OVERLAPPING LESS-THAN
+.. |gsdot| unicode:: U+022D7 .. GREATER-THAN WITH DOT
+.. |gsim| unicode:: U+02273 .. GREATER-THAN OR EQUIVALENT TO
+.. |gsime| unicode:: U+02A8E .. GREATER-THAN ABOVE SIMILAR OR EQUAL
+.. |gsiml| unicode:: U+02A90 .. GREATER-THAN ABOVE SIMILAR ABOVE LESS-THAN
+.. |Gt| unicode:: U+0226B .. MUCH GREATER-THAN
+.. |gtcc| unicode:: U+02AA7 .. GREATER-THAN CLOSED BY CURVE
+.. |gtcir| unicode:: U+02A7A .. GREATER-THAN WITH CIRCLE INSIDE
+.. |gtdot| unicode:: U+022D7 .. GREATER-THAN WITH DOT
+.. |gtquest| unicode:: U+02A7C .. GREATER-THAN WITH QUESTION MARK ABOVE
+.. |gtrarr| unicode:: U+02978 .. GREATER-THAN ABOVE RIGHTWARDS ARROW
+.. |homtht| unicode:: U+0223B .. HOMOTHETIC
+.. |lap| unicode:: U+02A85 .. LESS-THAN OR APPROXIMATE
+.. |lat| unicode:: U+02AAB .. LARGER THAN
+.. |late| unicode:: U+02AAD .. LARGER THAN OR EQUAL TO
+.. |lates| unicode:: U+02AAD U+0FE00 .. LARGER THAN OR slanted EQUAL
+.. |ldot| unicode:: U+022D6 .. LESS-THAN WITH DOT
+.. |lE| unicode:: U+02266 .. LESS-THAN OVER EQUAL TO
+.. |lEg| unicode:: U+02A8B .. LESS-THAN ABOVE DOUBLE-LINE EQUAL ABOVE GREATER-THAN
+.. |leg| unicode:: U+022DA .. LESS-THAN EQUAL TO OR GREATER-THAN
+.. |les| unicode:: U+02A7D .. LESS-THAN OR SLANTED EQUAL TO
+.. |lescc| unicode:: U+02AA8 .. LESS-THAN CLOSED BY CURVE ABOVE SLANTED EQUAL
+.. |lesdot| unicode:: U+02A7F .. LESS-THAN OR SLANTED EQUAL TO WITH DOT INSIDE
+.. |lesdoto| unicode:: U+02A81 .. LESS-THAN OR SLANTED EQUAL TO WITH DOT ABOVE
+.. |lesdotor| unicode:: U+02A83 .. LESS-THAN OR SLANTED EQUAL TO WITH DOT ABOVE RIGHT
+.. |lesg| unicode:: U+022DA U+0FE00 .. LESS-THAN slanted EQUAL TO OR GREATER-THAN
+.. |lesges| unicode:: U+02A93 .. LESS-THAN ABOVE SLANTED EQUAL ABOVE GREATER-THAN ABOVE SLANTED EQUAL
+.. |lg| unicode:: U+02276 .. LESS-THAN OR GREATER-THAN
+.. |lgE| unicode:: U+02A91 .. LESS-THAN ABOVE GREATER-THAN ABOVE DOUBLE-LINE EQUAL
+.. |Ll| unicode:: U+022D8 .. VERY MUCH LESS-THAN
+.. |lsim| unicode:: U+02272 .. LESS-THAN OR EQUIVALENT TO
+.. |lsime| unicode:: U+02A8D .. LESS-THAN ABOVE SIMILAR OR EQUAL
+.. |lsimg| unicode:: U+02A8F .. LESS-THAN ABOVE SIMILAR ABOVE GREATER-THAN
+.. |Lt| unicode:: U+0226A .. MUCH LESS-THAN
+.. |ltcc| unicode:: U+02AA6 .. LESS-THAN CLOSED BY CURVE
+.. |ltcir| unicode:: U+02A79 .. LESS-THAN WITH CIRCLE INSIDE
+.. |ltdot| unicode:: U+022D6 .. LESS-THAN WITH DOT
+.. |ltlarr| unicode:: U+02976 .. LESS-THAN ABOVE LEFTWARDS ARROW
+.. |ltquest| unicode:: U+02A7B .. LESS-THAN WITH QUESTION MARK ABOVE
+.. |ltrie| unicode:: U+022B4 .. NORMAL SUBGROUP OF OR EQUAL TO
+.. |mcomma| unicode:: U+02A29 .. MINUS SIGN WITH COMMA ABOVE
+.. |mDDot| unicode:: U+0223A .. GEOMETRIC PROPORTION
+.. |mid| unicode:: U+02223 .. DIVIDES
+.. |mlcp| unicode:: U+02ADB .. TRANSVERSAL INTERSECTION
+.. |models| unicode:: U+022A7 .. MODELS
+.. |mstpos| unicode:: U+0223E .. INVERTED LAZY S
+.. |Pr| unicode:: U+02ABB .. DOUBLE PRECEDES
+.. |pr| unicode:: U+0227A .. PRECEDES
+.. |prap| unicode:: U+02AB7 .. PRECEDES ABOVE ALMOST EQUAL TO
+.. |prcue| unicode:: U+0227C .. PRECEDES OR EQUAL TO
+.. |prE| unicode:: U+02AB3 .. PRECEDES ABOVE EQUALS SIGN
+.. |pre| unicode:: U+02AAF .. PRECEDES ABOVE SINGLE-LINE EQUALS SIGN
+.. |prsim| unicode:: U+0227E .. PRECEDES OR EQUIVALENT TO
+.. |prurel| unicode:: U+022B0 .. PRECEDES UNDER RELATION
+.. |ratio| unicode:: U+02236 .. RATIO
+.. |rtrie| unicode:: U+022B5 .. CONTAINS AS NORMAL SUBGROUP OR EQUAL TO
+.. |rtriltri| unicode:: U+029CE .. RIGHT TRIANGLE ABOVE LEFT TRIANGLE
+.. |samalg| unicode:: U+02210 .. N-ARY COPRODUCT
+.. |Sc| unicode:: U+02ABC .. DOUBLE SUCCEEDS
+.. |sc| unicode:: U+0227B .. SUCCEEDS
+.. |scap| unicode:: U+02AB8 .. SUCCEEDS ABOVE ALMOST EQUAL TO
+.. |sccue| unicode:: U+0227D .. SUCCEEDS OR EQUAL TO
+.. |scE| unicode:: U+02AB4 .. SUCCEEDS ABOVE EQUALS SIGN
+.. |sce| unicode:: U+02AB0 .. SUCCEEDS ABOVE SINGLE-LINE EQUALS SIGN
+.. |scsim| unicode:: U+0227F .. SUCCEEDS OR EQUIVALENT TO
+.. |sdote| unicode:: U+02A66 .. EQUALS SIGN WITH DOT BELOW
+.. |sfrown| unicode:: U+02322 .. FROWN
+.. |simg| unicode:: U+02A9E .. SIMILAR OR GREATER-THAN
+.. |simgE| unicode:: U+02AA0 .. SIMILAR ABOVE GREATER-THAN ABOVE EQUALS SIGN
+.. |siml| unicode:: U+02A9D .. SIMILAR OR LESS-THAN
+.. |simlE| unicode:: U+02A9F .. SIMILAR ABOVE LESS-THAN ABOVE EQUALS SIGN
+.. |smid| unicode:: U+02223 .. DIVIDES
+.. |smile| unicode:: U+02323 .. SMILE
+.. |smt| unicode:: U+02AAA .. SMALLER THAN
+.. |smte| unicode:: U+02AAC .. SMALLER THAN OR EQUAL TO
+.. |smtes| unicode:: U+02AAC U+0FE00 .. SMALLER THAN OR slanted EQUAL
+.. |spar| unicode:: U+02225 .. PARALLEL TO
+.. |sqsub| unicode:: U+0228F .. SQUARE IMAGE OF
+.. |sqsube| unicode:: U+02291 .. SQUARE IMAGE OF OR EQUAL TO
+.. |sqsup| unicode:: U+02290 .. SQUARE ORIGINAL OF
+.. |sqsupe| unicode:: U+02292 .. SQUARE ORIGINAL OF OR EQUAL TO
+.. |ssmile| unicode:: U+02323 .. SMILE
+.. |Sub| unicode:: U+022D0 .. DOUBLE SUBSET
+.. |subE| unicode:: U+02AC5 .. SUBSET OF ABOVE EQUALS SIGN
+.. |subedot| unicode:: U+02AC3 .. SUBSET OF OR EQUAL TO WITH DOT ABOVE
+.. |submult| unicode:: U+02AC1 .. SUBSET WITH MULTIPLICATION SIGN BELOW
+.. |subplus| unicode:: U+02ABF .. SUBSET WITH PLUS SIGN BELOW
+.. |subrarr| unicode:: U+02979 .. SUBSET ABOVE RIGHTWARDS ARROW
+.. |subsim| unicode:: U+02AC7 .. SUBSET OF ABOVE TILDE OPERATOR
+.. |subsub| unicode:: U+02AD5 .. SUBSET ABOVE SUBSET
+.. |subsup| unicode:: U+02AD3 .. SUBSET ABOVE SUPERSET
+.. |Sup| unicode:: U+022D1 .. DOUBLE SUPERSET
+.. |supdsub| unicode:: U+02AD8 .. SUPERSET BESIDE AND JOINED BY DASH WITH SUBSET
+.. |supE| unicode:: U+02AC6 .. SUPERSET OF ABOVE EQUALS SIGN
+.. |supedot| unicode:: U+02AC4 .. SUPERSET OF OR EQUAL TO WITH DOT ABOVE
+.. |suphsol| unicode:: U+02283 U+0002F .. SUPERSET OF, SOLIDUS
+.. |suphsub| unicode:: U+02AD7 .. SUPERSET BESIDE SUBSET
+.. |suplarr| unicode:: U+0297B .. SUPERSET ABOVE LEFTWARDS ARROW
+.. |supmult| unicode:: U+02AC2 .. SUPERSET WITH MULTIPLICATION SIGN BELOW
+.. |supplus| unicode:: U+02AC0 .. SUPERSET WITH PLUS SIGN BELOW
+.. |supsim| unicode:: U+02AC8 .. SUPERSET OF ABOVE TILDE OPERATOR
+.. |supsub| unicode:: U+02AD4 .. SUPERSET ABOVE SUBSET
+.. |supsup| unicode:: U+02AD6 .. SUPERSET ABOVE SUPERSET
+.. |thkap| unicode:: U+02248 .. ALMOST EQUAL TO
+.. |thksim| unicode:: U+0223C .. TILDE OPERATOR
+.. |topfork| unicode:: U+02ADA .. PITCHFORK WITH TEE TOP
+.. |trie| unicode:: U+0225C .. DELTA EQUAL TO
+.. |twixt| unicode:: U+0226C .. BETWEEN
+.. |Vbar| unicode:: U+02AEB .. DOUBLE UP TACK
+.. |vBar| unicode:: U+02AE8 .. SHORT UP TACK WITH UNDERBAR
+.. |vBarv| unicode:: U+02AE9 .. SHORT UP TACK ABOVE SHORT DOWN TACK
+.. |VDash| unicode:: U+022AB .. DOUBLE VERTICAL BAR DOUBLE RIGHT TURNSTILE
+.. |Vdash| unicode:: U+022A9 .. FORCES
+.. |vDash| unicode:: U+022A8 .. TRUE
+.. |vdash| unicode:: U+022A2 .. RIGHT TACK
+.. |Vdashl| unicode:: U+02AE6 .. LONG DASH FROM LEFT MEMBER OF DOUBLE VERTICAL
+.. |veebar| unicode:: U+022BB .. XOR
+.. |vltri| unicode:: U+022B2 .. NORMAL SUBGROUP OF
+.. |vprop| unicode:: U+0221D .. PROPORTIONAL TO
+.. |vrtri| unicode:: U+022B3 .. CONTAINS AS NORMAL SUBGROUP
+.. |Vvdash| unicode:: U+022AA .. TRIPLE VERTICAL BAR RIGHT TURNSTILE
diff --git a/python/helpers/docutils/parsers/rst/include/isobox.txt b/python/helpers/docutils/parsers/rst/include/isobox.txt
new file mode 100644
index 0000000..2304f87
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isobox.txt
@@ -0,0 +1,46 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |boxDL| unicode:: U+02557 .. BOX DRAWINGS DOUBLE DOWN AND LEFT
+.. |boxDl| unicode:: U+02556 .. BOX DRAWINGS DOWN DOUBLE AND LEFT SINGLE
+.. |boxdL| unicode:: U+02555 .. BOX DRAWINGS DOWN SINGLE AND LEFT DOUBLE
+.. |boxdl| unicode:: U+02510 .. BOX DRAWINGS LIGHT DOWN AND LEFT
+.. |boxDR| unicode:: U+02554 .. BOX DRAWINGS DOUBLE DOWN AND RIGHT
+.. |boxDr| unicode:: U+02553 .. BOX DRAWINGS DOWN DOUBLE AND RIGHT SINGLE
+.. |boxdR| unicode:: U+02552 .. BOX DRAWINGS DOWN SINGLE AND RIGHT DOUBLE
+.. |boxdr| unicode:: U+0250C .. BOX DRAWINGS LIGHT DOWN AND RIGHT
+.. |boxH| unicode:: U+02550 .. BOX DRAWINGS DOUBLE HORIZONTAL
+.. |boxh| unicode:: U+02500 .. BOX DRAWINGS LIGHT HORIZONTAL
+.. |boxHD| unicode:: U+02566 .. BOX DRAWINGS DOUBLE DOWN AND HORIZONTAL
+.. |boxHd| unicode:: U+02564 .. BOX DRAWINGS DOWN SINGLE AND HORIZONTAL DOUBLE
+.. |boxhD| unicode:: U+02565 .. BOX DRAWINGS DOWN DOUBLE AND HORIZONTAL SINGLE
+.. |boxhd| unicode:: U+0252C .. BOX DRAWINGS LIGHT DOWN AND HORIZONTAL
+.. |boxHU| unicode:: U+02569 .. BOX DRAWINGS DOUBLE UP AND HORIZONTAL
+.. |boxHu| unicode:: U+02567 .. BOX DRAWINGS UP SINGLE AND HORIZONTAL DOUBLE
+.. |boxhU| unicode:: U+02568 .. BOX DRAWINGS UP DOUBLE AND HORIZONTAL SINGLE
+.. |boxhu| unicode:: U+02534 .. BOX DRAWINGS LIGHT UP AND HORIZONTAL
+.. |boxUL| unicode:: U+0255D .. BOX DRAWINGS DOUBLE UP AND LEFT
+.. |boxUl| unicode:: U+0255C .. BOX DRAWINGS UP DOUBLE AND LEFT SINGLE
+.. |boxuL| unicode:: U+0255B .. BOX DRAWINGS UP SINGLE AND LEFT DOUBLE
+.. |boxul| unicode:: U+02518 .. BOX DRAWINGS LIGHT UP AND LEFT
+.. |boxUR| unicode:: U+0255A .. BOX DRAWINGS DOUBLE UP AND RIGHT
+.. |boxUr| unicode:: U+02559 .. BOX DRAWINGS UP DOUBLE AND RIGHT SINGLE
+.. |boxuR| unicode:: U+02558 .. BOX DRAWINGS UP SINGLE AND RIGHT DOUBLE
+.. |boxur| unicode:: U+02514 .. BOX DRAWINGS LIGHT UP AND RIGHT
+.. |boxV| unicode:: U+02551 .. BOX DRAWINGS DOUBLE VERTICAL
+.. |boxv| unicode:: U+02502 .. BOX DRAWINGS LIGHT VERTICAL
+.. |boxVH| unicode:: U+0256C .. BOX DRAWINGS DOUBLE VERTICAL AND HORIZONTAL
+.. |boxVh| unicode:: U+0256B .. BOX DRAWINGS VERTICAL DOUBLE AND HORIZONTAL SINGLE
+.. |boxvH| unicode:: U+0256A .. BOX DRAWINGS VERTICAL SINGLE AND HORIZONTAL DOUBLE
+.. |boxvh| unicode:: U+0253C .. BOX DRAWINGS LIGHT VERTICAL AND HORIZONTAL
+.. |boxVL| unicode:: U+02563 .. BOX DRAWINGS DOUBLE VERTICAL AND LEFT
+.. |boxVl| unicode:: U+02562 .. BOX DRAWINGS VERTICAL DOUBLE AND LEFT SINGLE
+.. |boxvL| unicode:: U+02561 .. BOX DRAWINGS VERTICAL SINGLE AND LEFT DOUBLE
+.. |boxvl| unicode:: U+02524 .. BOX DRAWINGS LIGHT VERTICAL AND LEFT
+.. |boxVR| unicode:: U+02560 .. BOX DRAWINGS DOUBLE VERTICAL AND RIGHT
+.. |boxVr| unicode:: U+0255F .. BOX DRAWINGS VERTICAL DOUBLE AND RIGHT SINGLE
+.. |boxvR| unicode:: U+0255E .. BOX DRAWINGS VERTICAL SINGLE AND RIGHT DOUBLE
+.. |boxvr| unicode:: U+0251C .. BOX DRAWINGS LIGHT VERTICAL AND RIGHT
diff --git a/python/helpers/docutils/parsers/rst/include/isocyr1.txt b/python/helpers/docutils/parsers/rst/include/isocyr1.txt
new file mode 100644
index 0000000..afee744c
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isocyr1.txt
@@ -0,0 +1,73 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Acy| unicode:: U+00410 .. CYRILLIC CAPITAL LETTER A
+.. |acy| unicode:: U+00430 .. CYRILLIC SMALL LETTER A
+.. |Bcy| unicode:: U+00411 .. CYRILLIC CAPITAL LETTER BE
+.. |bcy| unicode:: U+00431 .. CYRILLIC SMALL LETTER BE
+.. |CHcy| unicode:: U+00427 .. CYRILLIC CAPITAL LETTER CHE
+.. |chcy| unicode:: U+00447 .. CYRILLIC SMALL LETTER CHE
+.. |Dcy| unicode:: U+00414 .. CYRILLIC CAPITAL LETTER DE
+.. |dcy| unicode:: U+00434 .. CYRILLIC SMALL LETTER DE
+.. |Ecy| unicode:: U+0042D .. CYRILLIC CAPITAL LETTER E
+.. |ecy| unicode:: U+0044D .. CYRILLIC SMALL LETTER E
+.. |Fcy| unicode:: U+00424 .. CYRILLIC CAPITAL LETTER EF
+.. |fcy| unicode:: U+00444 .. CYRILLIC SMALL LETTER EF
+.. |Gcy| unicode:: U+00413 .. CYRILLIC CAPITAL LETTER GHE
+.. |gcy| unicode:: U+00433 .. CYRILLIC SMALL LETTER GHE
+.. |HARDcy| unicode:: U+0042A .. CYRILLIC CAPITAL LETTER HARD SIGN
+.. |hardcy| unicode:: U+0044A .. CYRILLIC SMALL LETTER HARD SIGN
+.. |Icy| unicode:: U+00418 .. CYRILLIC CAPITAL LETTER I
+.. |icy| unicode:: U+00438 .. CYRILLIC SMALL LETTER I
+.. |IEcy| unicode:: U+00415 .. CYRILLIC CAPITAL LETTER IE
+.. |iecy| unicode:: U+00435 .. CYRILLIC SMALL LETTER IE
+.. |IOcy| unicode:: U+00401 .. CYRILLIC CAPITAL LETTER IO
+.. |iocy| unicode:: U+00451 .. CYRILLIC SMALL LETTER IO
+.. |Jcy| unicode:: U+00419 .. CYRILLIC CAPITAL LETTER SHORT I
+.. |jcy| unicode:: U+00439 .. CYRILLIC SMALL LETTER SHORT I
+.. |Kcy| unicode:: U+0041A .. CYRILLIC CAPITAL LETTER KA
+.. |kcy| unicode:: U+0043A .. CYRILLIC SMALL LETTER KA
+.. |KHcy| unicode:: U+00425 .. CYRILLIC CAPITAL LETTER HA
+.. |khcy| unicode:: U+00445 .. CYRILLIC SMALL LETTER HA
+.. |Lcy| unicode:: U+0041B .. CYRILLIC CAPITAL LETTER EL
+.. |lcy| unicode:: U+0043B .. CYRILLIC SMALL LETTER EL
+.. |Mcy| unicode:: U+0041C .. CYRILLIC CAPITAL LETTER EM
+.. |mcy| unicode:: U+0043C .. CYRILLIC SMALL LETTER EM
+.. |Ncy| unicode:: U+0041D .. CYRILLIC CAPITAL LETTER EN
+.. |ncy| unicode:: U+0043D .. CYRILLIC SMALL LETTER EN
+.. |numero| unicode:: U+02116 .. NUMERO SIGN
+.. |Ocy| unicode:: U+0041E .. CYRILLIC CAPITAL LETTER O
+.. |ocy| unicode:: U+0043E .. CYRILLIC SMALL LETTER O
+.. |Pcy| unicode:: U+0041F .. CYRILLIC CAPITAL LETTER PE
+.. |pcy| unicode:: U+0043F .. CYRILLIC SMALL LETTER PE
+.. |Rcy| unicode:: U+00420 .. CYRILLIC CAPITAL LETTER ER
+.. |rcy| unicode:: U+00440 .. CYRILLIC SMALL LETTER ER
+.. |Scy| unicode:: U+00421 .. CYRILLIC CAPITAL LETTER ES
+.. |scy| unicode:: U+00441 .. CYRILLIC SMALL LETTER ES
+.. |SHCHcy| unicode:: U+00429 .. CYRILLIC CAPITAL LETTER SHCHA
+.. |shchcy| unicode:: U+00449 .. CYRILLIC SMALL LETTER SHCHA
+.. |SHcy| unicode:: U+00428 .. CYRILLIC CAPITAL LETTER SHA
+.. |shcy| unicode:: U+00448 .. CYRILLIC SMALL LETTER SHA
+.. |SOFTcy| unicode:: U+0042C .. CYRILLIC CAPITAL LETTER SOFT SIGN
+.. |softcy| unicode:: U+0044C .. CYRILLIC SMALL LETTER SOFT SIGN
+.. |Tcy| unicode:: U+00422 .. CYRILLIC CAPITAL LETTER TE
+.. |tcy| unicode:: U+00442 .. CYRILLIC SMALL LETTER TE
+.. |TScy| unicode:: U+00426 .. CYRILLIC CAPITAL LETTER TSE
+.. |tscy| unicode:: U+00446 .. CYRILLIC SMALL LETTER TSE
+.. |Ucy| unicode:: U+00423 .. CYRILLIC CAPITAL LETTER U
+.. |ucy| unicode:: U+00443 .. CYRILLIC SMALL LETTER U
+.. |Vcy| unicode:: U+00412 .. CYRILLIC CAPITAL LETTER VE
+.. |vcy| unicode:: U+00432 .. CYRILLIC SMALL LETTER VE
+.. |YAcy| unicode:: U+0042F .. CYRILLIC CAPITAL LETTER YA
+.. |yacy| unicode:: U+0044F .. CYRILLIC SMALL LETTER YA
+.. |Ycy| unicode:: U+0042B .. CYRILLIC CAPITAL LETTER YERU
+.. |ycy| unicode:: U+0044B .. CYRILLIC SMALL LETTER YERU
+.. |YUcy| unicode:: U+0042E .. CYRILLIC CAPITAL LETTER YU
+.. |yucy| unicode:: U+0044E .. CYRILLIC SMALL LETTER YU
+.. |Zcy| unicode:: U+00417 .. CYRILLIC CAPITAL LETTER ZE
+.. |zcy| unicode:: U+00437 .. CYRILLIC SMALL LETTER ZE
+.. |ZHcy| unicode:: U+00416 .. CYRILLIC CAPITAL LETTER ZHE
+.. |zhcy| unicode:: U+00436 .. CYRILLIC SMALL LETTER ZHE
diff --git a/python/helpers/docutils/parsers/rst/include/isocyr2.txt b/python/helpers/docutils/parsers/rst/include/isocyr2.txt
new file mode 100644
index 0000000..fe09c01
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isocyr2.txt
@@ -0,0 +1,32 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |DJcy| unicode:: U+00402 .. CYRILLIC CAPITAL LETTER DJE
+.. |djcy| unicode:: U+00452 .. CYRILLIC SMALL LETTER DJE
+.. |DScy| unicode:: U+00405 .. CYRILLIC CAPITAL LETTER DZE
+.. |dscy| unicode:: U+00455 .. CYRILLIC SMALL LETTER DZE
+.. |DZcy| unicode:: U+0040F .. CYRILLIC CAPITAL LETTER DZHE
+.. |dzcy| unicode:: U+0045F .. CYRILLIC SMALL LETTER DZHE
+.. |GJcy| unicode:: U+00403 .. CYRILLIC CAPITAL LETTER GJE
+.. |gjcy| unicode:: U+00453 .. CYRILLIC SMALL LETTER GJE
+.. |Iukcy| unicode:: U+00406 .. CYRILLIC CAPITAL LETTER BYELORUSSIAN-UKRAINIAN I
+.. |iukcy| unicode:: U+00456 .. CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I
+.. |Jsercy| unicode:: U+00408 .. CYRILLIC CAPITAL LETTER JE
+.. |jsercy| unicode:: U+00458 .. CYRILLIC SMALL LETTER JE
+.. |Jukcy| unicode:: U+00404 .. CYRILLIC CAPITAL LETTER UKRAINIAN IE
+.. |jukcy| unicode:: U+00454 .. CYRILLIC SMALL LETTER UKRAINIAN IE
+.. |KJcy| unicode:: U+0040C .. CYRILLIC CAPITAL LETTER KJE
+.. |kjcy| unicode:: U+0045C .. CYRILLIC SMALL LETTER KJE
+.. |LJcy| unicode:: U+00409 .. CYRILLIC CAPITAL LETTER LJE
+.. |ljcy| unicode:: U+00459 .. CYRILLIC SMALL LETTER LJE
+.. |NJcy| unicode:: U+0040A .. CYRILLIC CAPITAL LETTER NJE
+.. |njcy| unicode:: U+0045A .. CYRILLIC SMALL LETTER NJE
+.. |TSHcy| unicode:: U+0040B .. CYRILLIC CAPITAL LETTER TSHE
+.. |tshcy| unicode:: U+0045B .. CYRILLIC SMALL LETTER TSHE
+.. |Ubrcy| unicode:: U+0040E .. CYRILLIC CAPITAL LETTER SHORT U
+.. |ubrcy| unicode:: U+0045E .. CYRILLIC SMALL LETTER SHORT U
+.. |YIcy| unicode:: U+00407 .. CYRILLIC CAPITAL LETTER YI
+.. |yicy| unicode:: U+00457 .. CYRILLIC SMALL LETTER YI
diff --git a/python/helpers/docutils/parsers/rst/include/isodia.txt b/python/helpers/docutils/parsers/rst/include/isodia.txt
new file mode 100644
index 0000000..ede6d99
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isodia.txt
@@ -0,0 +1,20 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |acute| unicode:: U+000B4 .. ACUTE ACCENT
+.. |breve| unicode:: U+002D8 .. BREVE
+.. |caron| unicode:: U+002C7 .. CARON
+.. |cedil| unicode:: U+000B8 .. CEDILLA
+.. |circ| unicode:: U+002C6 .. MODIFIER LETTER CIRCUMFLEX ACCENT
+.. |dblac| unicode:: U+002DD .. DOUBLE ACUTE ACCENT
+.. |die| unicode:: U+000A8 .. DIAERESIS
+.. |dot| unicode:: U+002D9 .. DOT ABOVE
+.. |grave| unicode:: U+00060 .. GRAVE ACCENT
+.. |macr| unicode:: U+000AF .. MACRON
+.. |ogon| unicode:: U+002DB .. OGONEK
+.. |ring| unicode:: U+002DA .. RING ABOVE
+.. |tilde| unicode:: U+002DC .. SMALL TILDE
+.. |uml| unicode:: U+000A8 .. DIAERESIS
diff --git a/python/helpers/docutils/parsers/rst/include/isogrk1.txt b/python/helpers/docutils/parsers/rst/include/isogrk1.txt
new file mode 100644
index 0000000..434368a
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isogrk1.txt
@@ -0,0 +1,55 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Agr| unicode:: U+00391 .. GREEK CAPITAL LETTER ALPHA
+.. |agr| unicode:: U+003B1 .. GREEK SMALL LETTER ALPHA
+.. |Bgr| unicode:: U+00392 .. GREEK CAPITAL LETTER BETA
+.. |bgr| unicode:: U+003B2 .. GREEK SMALL LETTER BETA
+.. |Dgr| unicode:: U+00394 .. GREEK CAPITAL LETTER DELTA
+.. |dgr| unicode:: U+003B4 .. GREEK SMALL LETTER DELTA
+.. |EEgr| unicode:: U+00397 .. GREEK CAPITAL LETTER ETA
+.. |eegr| unicode:: U+003B7 .. GREEK SMALL LETTER ETA
+.. |Egr| unicode:: U+00395 .. GREEK CAPITAL LETTER EPSILON
+.. |egr| unicode:: U+003B5 .. GREEK SMALL LETTER EPSILON
+.. |Ggr| unicode:: U+00393 .. GREEK CAPITAL LETTER GAMMA
+.. |ggr| unicode:: U+003B3 .. GREEK SMALL LETTER GAMMA
+.. |Igr| unicode:: U+00399 .. GREEK CAPITAL LETTER IOTA
+.. |igr| unicode:: U+003B9 .. GREEK SMALL LETTER IOTA
+.. |Kgr| unicode:: U+0039A .. GREEK CAPITAL LETTER KAPPA
+.. |kgr| unicode:: U+003BA .. GREEK SMALL LETTER KAPPA
+.. |KHgr| unicode:: U+003A7 .. GREEK CAPITAL LETTER CHI
+.. |khgr| unicode:: U+003C7 .. GREEK SMALL LETTER CHI
+.. |Lgr| unicode:: U+0039B .. GREEK CAPITAL LETTER LAMDA
+.. |lgr| unicode:: U+003BB .. GREEK SMALL LETTER LAMDA
+.. |Mgr| unicode:: U+0039C .. GREEK CAPITAL LETTER MU
+.. |mgr| unicode:: U+003BC .. GREEK SMALL LETTER MU
+.. |Ngr| unicode:: U+0039D .. GREEK CAPITAL LETTER NU
+.. |ngr| unicode:: U+003BD .. GREEK SMALL LETTER NU
+.. |Ogr| unicode:: U+0039F .. GREEK CAPITAL LETTER OMICRON
+.. |ogr| unicode:: U+003BF .. GREEK SMALL LETTER OMICRON
+.. |OHgr| unicode:: U+003A9 .. GREEK CAPITAL LETTER OMEGA
+.. |ohgr| unicode:: U+003C9 .. GREEK SMALL LETTER OMEGA
+.. |Pgr| unicode:: U+003A0 .. GREEK CAPITAL LETTER PI
+.. |pgr| unicode:: U+003C0 .. GREEK SMALL LETTER PI
+.. |PHgr| unicode:: U+003A6 .. GREEK CAPITAL LETTER PHI
+.. |phgr| unicode:: U+003C6 .. GREEK SMALL LETTER PHI
+.. |PSgr| unicode:: U+003A8 .. GREEK CAPITAL LETTER PSI
+.. |psgr| unicode:: U+003C8 .. GREEK SMALL LETTER PSI
+.. |Rgr| unicode:: U+003A1 .. GREEK CAPITAL LETTER RHO
+.. |rgr| unicode:: U+003C1 .. GREEK SMALL LETTER RHO
+.. |sfgr| unicode:: U+003C2 .. GREEK SMALL LETTER FINAL SIGMA
+.. |Sgr| unicode:: U+003A3 .. GREEK CAPITAL LETTER SIGMA
+.. |sgr| unicode:: U+003C3 .. GREEK SMALL LETTER SIGMA
+.. |Tgr| unicode:: U+003A4 .. GREEK CAPITAL LETTER TAU
+.. |tgr| unicode:: U+003C4 .. GREEK SMALL LETTER TAU
+.. |THgr| unicode:: U+00398 .. GREEK CAPITAL LETTER THETA
+.. |thgr| unicode:: U+003B8 .. GREEK SMALL LETTER THETA
+.. |Ugr| unicode:: U+003A5 .. GREEK CAPITAL LETTER UPSILON
+.. |ugr| unicode:: U+003C5 .. GREEK SMALL LETTER UPSILON
+.. |Xgr| unicode:: U+0039E .. GREEK CAPITAL LETTER XI
+.. |xgr| unicode:: U+003BE .. GREEK SMALL LETTER XI
+.. |Zgr| unicode:: U+00396 .. GREEK CAPITAL LETTER ZETA
+.. |zgr| unicode:: U+003B6 .. GREEK SMALL LETTER ZETA
diff --git a/python/helpers/docutils/parsers/rst/include/isogrk2.txt b/python/helpers/docutils/parsers/rst/include/isogrk2.txt
new file mode 100644
index 0000000..fa59f96
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isogrk2.txt
@@ -0,0 +1,26 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Aacgr| unicode:: U+00386 .. GREEK CAPITAL LETTER ALPHA WITH TONOS
+.. |aacgr| unicode:: U+003AC .. GREEK SMALL LETTER ALPHA WITH TONOS
+.. |Eacgr| unicode:: U+00388 .. GREEK CAPITAL LETTER EPSILON WITH TONOS
+.. |eacgr| unicode:: U+003AD .. GREEK SMALL LETTER EPSILON WITH TONOS
+.. |EEacgr| unicode:: U+00389 .. GREEK CAPITAL LETTER ETA WITH TONOS
+.. |eeacgr| unicode:: U+003AE .. GREEK SMALL LETTER ETA WITH TONOS
+.. |Iacgr| unicode:: U+0038A .. GREEK CAPITAL LETTER IOTA WITH TONOS
+.. |iacgr| unicode:: U+003AF .. GREEK SMALL LETTER IOTA WITH TONOS
+.. |idiagr| unicode:: U+00390 .. GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS
+.. |Idigr| unicode:: U+003AA .. GREEK CAPITAL LETTER IOTA WITH DIALYTIKA
+.. |idigr| unicode:: U+003CA .. GREEK SMALL LETTER IOTA WITH DIALYTIKA
+.. |Oacgr| unicode:: U+0038C .. GREEK CAPITAL LETTER OMICRON WITH TONOS
+.. |oacgr| unicode:: U+003CC .. GREEK SMALL LETTER OMICRON WITH TONOS
+.. |OHacgr| unicode:: U+0038F .. GREEK CAPITAL LETTER OMEGA WITH TONOS
+.. |ohacgr| unicode:: U+003CE .. GREEK SMALL LETTER OMEGA WITH TONOS
+.. |Uacgr| unicode:: U+0038E .. GREEK CAPITAL LETTER UPSILON WITH TONOS
+.. |uacgr| unicode:: U+003CD .. GREEK SMALL LETTER UPSILON WITH TONOS
+.. |udiagr| unicode:: U+003B0 .. GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND TONOS
+.. |Udigr| unicode:: U+003AB .. GREEK CAPITAL LETTER UPSILON WITH DIALYTIKA
+.. |udigr| unicode:: U+003CB .. GREEK SMALL LETTER UPSILON WITH DIALYTIKA
diff --git a/python/helpers/docutils/parsers/rst/include/isogrk3.txt b/python/helpers/docutils/parsers/rst/include/isogrk3.txt
new file mode 100644
index 0000000..efacd98
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isogrk3.txt
@@ -0,0 +1,52 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |alpha| unicode:: U+003B1 .. GREEK SMALL LETTER ALPHA
+.. |beta| unicode:: U+003B2 .. GREEK SMALL LETTER BETA
+.. |chi| unicode:: U+003C7 .. GREEK SMALL LETTER CHI
+.. |Delta| unicode:: U+00394 .. GREEK CAPITAL LETTER DELTA
+.. |delta| unicode:: U+003B4 .. GREEK SMALL LETTER DELTA
+.. |epsi| unicode:: U+003F5 .. GREEK LUNATE EPSILON SYMBOL
+.. |epsis| unicode:: U+003F5 .. GREEK LUNATE EPSILON SYMBOL
+.. |epsiv| unicode:: U+003B5 .. GREEK SMALL LETTER EPSILON
+.. |eta| unicode:: U+003B7 .. GREEK SMALL LETTER ETA
+.. |Gamma| unicode:: U+00393 .. GREEK CAPITAL LETTER GAMMA
+.. |gamma| unicode:: U+003B3 .. GREEK SMALL LETTER GAMMA
+.. |Gammad| unicode:: U+003DC .. GREEK LETTER DIGAMMA
+.. |gammad| unicode:: U+003DD .. GREEK SMALL LETTER DIGAMMA
+.. |iota| unicode:: U+003B9 .. GREEK SMALL LETTER IOTA
+.. |kappa| unicode:: U+003BA .. GREEK SMALL LETTER KAPPA
+.. |kappav| unicode:: U+003F0 .. GREEK KAPPA SYMBOL
+.. |Lambda| unicode:: U+0039B .. GREEK CAPITAL LETTER LAMDA
+.. |lambda| unicode:: U+003BB .. GREEK SMALL LETTER LAMDA
+.. |mu| unicode:: U+003BC .. GREEK SMALL LETTER MU
+.. |nu| unicode:: U+003BD .. GREEK SMALL LETTER NU
+.. |Omega| unicode:: U+003A9 .. GREEK CAPITAL LETTER OMEGA
+.. |omega| unicode:: U+003C9 .. GREEK SMALL LETTER OMEGA
+.. |Phi| unicode:: U+003A6 .. GREEK CAPITAL LETTER PHI
+.. |phi| unicode:: U+003D5 .. GREEK PHI SYMBOL
+.. |phis| unicode:: U+003D5 .. GREEK PHI SYMBOL
+.. |phiv| unicode:: U+003C6 .. GREEK SMALL LETTER PHI
+.. |Pi| unicode:: U+003A0 .. GREEK CAPITAL LETTER PI
+.. |pi| unicode:: U+003C0 .. GREEK SMALL LETTER PI
+.. |piv| unicode:: U+003D6 .. GREEK PI SYMBOL
+.. |Psi| unicode:: U+003A8 .. GREEK CAPITAL LETTER PSI
+.. |psi| unicode:: U+003C8 .. GREEK SMALL LETTER PSI
+.. |rho| unicode:: U+003C1 .. GREEK SMALL LETTER RHO
+.. |rhov| unicode:: U+003F1 .. GREEK RHO SYMBOL
+.. |Sigma| unicode:: U+003A3 .. GREEK CAPITAL LETTER SIGMA
+.. |sigma| unicode:: U+003C3 .. GREEK SMALL LETTER SIGMA
+.. |sigmav| unicode:: U+003C2 .. GREEK SMALL LETTER FINAL SIGMA
+.. |tau| unicode:: U+003C4 .. GREEK SMALL LETTER TAU
+.. |Theta| unicode:: U+00398 .. GREEK CAPITAL LETTER THETA
+.. |theta| unicode:: U+003B8 .. GREEK SMALL LETTER THETA
+.. |thetas| unicode:: U+003B8 .. GREEK SMALL LETTER THETA
+.. |thetav| unicode:: U+003D1 .. GREEK THETA SYMBOL
+.. |Upsi| unicode:: U+003D2 .. GREEK UPSILON WITH HOOK SYMBOL
+.. |upsi| unicode:: U+003C5 .. GREEK SMALL LETTER UPSILON
+.. |Xi| unicode:: U+0039E .. GREEK CAPITAL LETTER XI
+.. |xi| unicode:: U+003BE .. GREEK SMALL LETTER XI
+.. |zeta| unicode:: U+003B6 .. GREEK SMALL LETTER ZETA
diff --git a/python/helpers/docutils/parsers/rst/include/isogrk4-wide.txt b/python/helpers/docutils/parsers/rst/include/isogrk4-wide.txt
new file mode 100644
index 0000000..39a6307
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isogrk4-wide.txt
@@ -0,0 +1,49 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |b.alpha| unicode:: U+1D6C2 .. MATHEMATICAL BOLD SMALL ALPHA
+.. |b.beta| unicode:: U+1D6C3 .. MATHEMATICAL BOLD SMALL BETA
+.. |b.chi| unicode:: U+1D6D8 .. MATHEMATICAL BOLD SMALL CHI
+.. |b.Delta| unicode:: U+1D6AB .. MATHEMATICAL BOLD CAPITAL DELTA
+.. |b.delta| unicode:: U+1D6C5 .. MATHEMATICAL BOLD SMALL DELTA
+.. |b.epsi| unicode:: U+1D6C6 .. MATHEMATICAL BOLD SMALL EPSILON
+.. |b.epsiv| unicode:: U+1D6DC .. MATHEMATICAL BOLD EPSILON SYMBOL
+.. |b.eta| unicode:: U+1D6C8 .. MATHEMATICAL BOLD SMALL ETA
+.. |b.Gamma| unicode:: U+1D6AA .. MATHEMATICAL BOLD CAPITAL GAMMA
+.. |b.gamma| unicode:: U+1D6C4 .. MATHEMATICAL BOLD SMALL GAMMA
+.. |b.Gammad| unicode:: U+003DC .. GREEK LETTER DIGAMMA
+.. |b.gammad| unicode:: U+003DD .. GREEK SMALL LETTER DIGAMMA
+.. |b.iota| unicode:: U+1D6CA .. MATHEMATICAL BOLD SMALL IOTA
+.. |b.kappa| unicode:: U+1D6CB .. MATHEMATICAL BOLD SMALL KAPPA
+.. |b.kappav| unicode:: U+1D6DE .. MATHEMATICAL BOLD KAPPA SYMBOL
+.. |b.Lambda| unicode:: U+1D6B2 .. MATHEMATICAL BOLD CAPITAL LAMDA
+.. |b.lambda| unicode:: U+1D6CC .. MATHEMATICAL BOLD SMALL LAMDA
+.. |b.mu| unicode:: U+1D6CD .. MATHEMATICAL BOLD SMALL MU
+.. |b.nu| unicode:: U+1D6CE .. MATHEMATICAL BOLD SMALL NU
+.. |b.Omega| unicode:: U+1D6C0 .. MATHEMATICAL BOLD CAPITAL OMEGA
+.. |b.omega| unicode:: U+1D6DA .. MATHEMATICAL BOLD SMALL OMEGA
+.. |b.Phi| unicode:: U+1D6BD .. MATHEMATICAL BOLD CAPITAL PHI
+.. |b.phi| unicode:: U+1D6D7 .. MATHEMATICAL BOLD SMALL PHI
+.. |b.phiv| unicode:: U+1D6DF .. MATHEMATICAL BOLD PHI SYMBOL
+.. |b.Pi| unicode:: U+1D6B7 .. MATHEMATICAL BOLD CAPITAL PI
+.. |b.pi| unicode:: U+1D6D1 .. MATHEMATICAL BOLD SMALL PI
+.. |b.piv| unicode:: U+1D6E1 .. MATHEMATICAL BOLD PI SYMBOL
+.. |b.Psi| unicode:: U+1D6BF .. MATHEMATICAL BOLD CAPITAL PSI
+.. |b.psi| unicode:: U+1D6D9 .. MATHEMATICAL BOLD SMALL PSI
+.. |b.rho| unicode:: U+1D6D2 .. MATHEMATICAL BOLD SMALL RHO
+.. |b.rhov| unicode:: U+1D6E0 .. MATHEMATICAL BOLD RHO SYMBOL
+.. |b.Sigma| unicode:: U+1D6BA .. MATHEMATICAL BOLD CAPITAL SIGMA
+.. |b.sigma| unicode:: U+1D6D4 .. MATHEMATICAL BOLD SMALL SIGMA
+.. |b.sigmav| unicode:: U+1D6D3 .. MATHEMATICAL BOLD SMALL FINAL SIGMA
+.. |b.tau| unicode:: U+1D6D5 .. MATHEMATICAL BOLD SMALL TAU
+.. |b.Theta| unicode:: U+1D6AF .. MATHEMATICAL BOLD CAPITAL THETA
+.. |b.thetas| unicode:: U+1D6C9 .. MATHEMATICAL BOLD SMALL THETA
+.. |b.thetav| unicode:: U+1D6DD .. MATHEMATICAL BOLD THETA SYMBOL
+.. |b.Upsi| unicode:: U+1D6BC .. MATHEMATICAL BOLD CAPITAL UPSILON
+.. |b.upsi| unicode:: U+1D6D6 .. MATHEMATICAL BOLD SMALL UPSILON
+.. |b.Xi| unicode:: U+1D6B5 .. MATHEMATICAL BOLD CAPITAL XI
+.. |b.xi| unicode:: U+1D6CF .. MATHEMATICAL BOLD SMALL XI
+.. |b.zeta| unicode:: U+1D6C7 .. MATHEMATICAL BOLD SMALL ZETA
diff --git a/python/helpers/docutils/parsers/rst/include/isogrk4.txt b/python/helpers/docutils/parsers/rst/include/isogrk4.txt
new file mode 100644
index 0000000..5b9f410
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isogrk4.txt
@@ -0,0 +1,8 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |b.Gammad| unicode:: U+003DC .. GREEK LETTER DIGAMMA
+.. |b.gammad| unicode:: U+003DD .. GREEK SMALL LETTER DIGAMMA
diff --git a/python/helpers/docutils/parsers/rst/include/isolat1.txt b/python/helpers/docutils/parsers/rst/include/isolat1.txt
new file mode 100644
index 0000000..3e9ad9d
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isolat1.txt
@@ -0,0 +1,68 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Aacute| unicode:: U+000C1 .. LATIN CAPITAL LETTER A WITH ACUTE
+.. |aacute| unicode:: U+000E1 .. LATIN SMALL LETTER A WITH ACUTE
+.. |Acirc| unicode:: U+000C2 .. LATIN CAPITAL LETTER A WITH CIRCUMFLEX
+.. |acirc| unicode:: U+000E2 .. LATIN SMALL LETTER A WITH CIRCUMFLEX
+.. |AElig| unicode:: U+000C6 .. LATIN CAPITAL LETTER AE
+.. |aelig| unicode:: U+000E6 .. LATIN SMALL LETTER AE
+.. |Agrave| unicode:: U+000C0 .. LATIN CAPITAL LETTER A WITH GRAVE
+.. |agrave| unicode:: U+000E0 .. LATIN SMALL LETTER A WITH GRAVE
+.. |Aring| unicode:: U+000C5 .. LATIN CAPITAL LETTER A WITH RING ABOVE
+.. |aring| unicode:: U+000E5 .. LATIN SMALL LETTER A WITH RING ABOVE
+.. |Atilde| unicode:: U+000C3 .. LATIN CAPITAL LETTER A WITH TILDE
+.. |atilde| unicode:: U+000E3 .. LATIN SMALL LETTER A WITH TILDE
+.. |Auml| unicode:: U+000C4 .. LATIN CAPITAL LETTER A WITH DIAERESIS
+.. |auml| unicode:: U+000E4 .. LATIN SMALL LETTER A WITH DIAERESIS
+.. |Ccedil| unicode:: U+000C7 .. LATIN CAPITAL LETTER C WITH CEDILLA
+.. |ccedil| unicode:: U+000E7 .. LATIN SMALL LETTER C WITH CEDILLA
+.. |Eacute| unicode:: U+000C9 .. LATIN CAPITAL LETTER E WITH ACUTE
+.. |eacute| unicode:: U+000E9 .. LATIN SMALL LETTER E WITH ACUTE
+.. |Ecirc| unicode:: U+000CA .. LATIN CAPITAL LETTER E WITH CIRCUMFLEX
+.. |ecirc| unicode:: U+000EA .. LATIN SMALL LETTER E WITH CIRCUMFLEX
+.. |Egrave| unicode:: U+000C8 .. LATIN CAPITAL LETTER E WITH GRAVE
+.. |egrave| unicode:: U+000E8 .. LATIN SMALL LETTER E WITH GRAVE
+.. |ETH| unicode:: U+000D0 .. LATIN CAPITAL LETTER ETH
+.. |eth| unicode:: U+000F0 .. LATIN SMALL LETTER ETH
+.. |Euml| unicode:: U+000CB .. LATIN CAPITAL LETTER E WITH DIAERESIS
+.. |euml| unicode:: U+000EB .. LATIN SMALL LETTER E WITH DIAERESIS
+.. |Iacute| unicode:: U+000CD .. LATIN CAPITAL LETTER I WITH ACUTE
+.. |iacute| unicode:: U+000ED .. LATIN SMALL LETTER I WITH ACUTE
+.. |Icirc| unicode:: U+000CE .. LATIN CAPITAL LETTER I WITH CIRCUMFLEX
+.. |icirc| unicode:: U+000EE .. LATIN SMALL LETTER I WITH CIRCUMFLEX
+.. |Igrave| unicode:: U+000CC .. LATIN CAPITAL LETTER I WITH GRAVE
+.. |igrave| unicode:: U+000EC .. LATIN SMALL LETTER I WITH GRAVE
+.. |Iuml| unicode:: U+000CF .. LATIN CAPITAL LETTER I WITH DIAERESIS
+.. |iuml| unicode:: U+000EF .. LATIN SMALL LETTER I WITH DIAERESIS
+.. |Ntilde| unicode:: U+000D1 .. LATIN CAPITAL LETTER N WITH TILDE
+.. |ntilde| unicode:: U+000F1 .. LATIN SMALL LETTER N WITH TILDE
+.. |Oacute| unicode:: U+000D3 .. LATIN CAPITAL LETTER O WITH ACUTE
+.. |oacute| unicode:: U+000F3 .. LATIN SMALL LETTER O WITH ACUTE
+.. |Ocirc| unicode:: U+000D4 .. LATIN CAPITAL LETTER O WITH CIRCUMFLEX
+.. |ocirc| unicode:: U+000F4 .. LATIN SMALL LETTER O WITH CIRCUMFLEX
+.. |Ograve| unicode:: U+000D2 .. LATIN CAPITAL LETTER O WITH GRAVE
+.. |ograve| unicode:: U+000F2 .. LATIN SMALL LETTER O WITH GRAVE
+.. |Oslash| unicode:: U+000D8 .. LATIN CAPITAL LETTER O WITH STROKE
+.. |oslash| unicode:: U+000F8 .. LATIN SMALL LETTER O WITH STROKE
+.. |Otilde| unicode:: U+000D5 .. LATIN CAPITAL LETTER O WITH TILDE
+.. |otilde| unicode:: U+000F5 .. LATIN SMALL LETTER O WITH TILDE
+.. |Ouml| unicode:: U+000D6 .. LATIN CAPITAL LETTER O WITH DIAERESIS
+.. |ouml| unicode:: U+000F6 .. LATIN SMALL LETTER O WITH DIAERESIS
+.. |szlig| unicode:: U+000DF .. LATIN SMALL LETTER SHARP S
+.. |THORN| unicode:: U+000DE .. LATIN CAPITAL LETTER THORN
+.. |thorn| unicode:: U+000FE .. LATIN SMALL LETTER THORN
+.. |Uacute| unicode:: U+000DA .. LATIN CAPITAL LETTER U WITH ACUTE
+.. |uacute| unicode:: U+000FA .. LATIN SMALL LETTER U WITH ACUTE
+.. |Ucirc| unicode:: U+000DB .. LATIN CAPITAL LETTER U WITH CIRCUMFLEX
+.. |ucirc| unicode:: U+000FB .. LATIN SMALL LETTER U WITH CIRCUMFLEX
+.. |Ugrave| unicode:: U+000D9 .. LATIN CAPITAL LETTER U WITH GRAVE
+.. |ugrave| unicode:: U+000F9 .. LATIN SMALL LETTER U WITH GRAVE
+.. |Uuml| unicode:: U+000DC .. LATIN CAPITAL LETTER U WITH DIAERESIS
+.. |uuml| unicode:: U+000FC .. LATIN SMALL LETTER U WITH DIAERESIS
+.. |Yacute| unicode:: U+000DD .. LATIN CAPITAL LETTER Y WITH ACUTE
+.. |yacute| unicode:: U+000FD .. LATIN SMALL LETTER Y WITH ACUTE
+.. |yuml| unicode:: U+000FF .. LATIN SMALL LETTER Y WITH DIAERESIS
diff --git a/python/helpers/docutils/parsers/rst/include/isolat2.txt b/python/helpers/docutils/parsers/rst/include/isolat2.txt
new file mode 100644
index 0000000..20de845
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isolat2.txt
@@ -0,0 +1,128 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Abreve| unicode:: U+00102 .. LATIN CAPITAL LETTER A WITH BREVE
+.. |abreve| unicode:: U+00103 .. LATIN SMALL LETTER A WITH BREVE
+.. |Amacr| unicode:: U+00100 .. LATIN CAPITAL LETTER A WITH MACRON
+.. |amacr| unicode:: U+00101 .. LATIN SMALL LETTER A WITH MACRON
+.. |Aogon| unicode:: U+00104 .. LATIN CAPITAL LETTER A WITH OGONEK
+.. |aogon| unicode:: U+00105 .. LATIN SMALL LETTER A WITH OGONEK
+.. |Cacute| unicode:: U+00106 .. LATIN CAPITAL LETTER C WITH ACUTE
+.. |cacute| unicode:: U+00107 .. LATIN SMALL LETTER C WITH ACUTE
+.. |Ccaron| unicode:: U+0010C .. LATIN CAPITAL LETTER C WITH CARON
+.. |ccaron| unicode:: U+0010D .. LATIN SMALL LETTER C WITH CARON
+.. |Ccirc| unicode:: U+00108 .. LATIN CAPITAL LETTER C WITH CIRCUMFLEX
+.. |ccirc| unicode:: U+00109 .. LATIN SMALL LETTER C WITH CIRCUMFLEX
+.. |Cdot| unicode:: U+0010A .. LATIN CAPITAL LETTER C WITH DOT ABOVE
+.. |cdot| unicode:: U+0010B .. LATIN SMALL LETTER C WITH DOT ABOVE
+.. |Dcaron| unicode:: U+0010E .. LATIN CAPITAL LETTER D WITH CARON
+.. |dcaron| unicode:: U+0010F .. LATIN SMALL LETTER D WITH CARON
+.. |Dstrok| unicode:: U+00110 .. LATIN CAPITAL LETTER D WITH STROKE
+.. |dstrok| unicode:: U+00111 .. LATIN SMALL LETTER D WITH STROKE
+.. |Ecaron| unicode:: U+0011A .. LATIN CAPITAL LETTER E WITH CARON
+.. |ecaron| unicode:: U+0011B .. LATIN SMALL LETTER E WITH CARON
+.. |Edot| unicode:: U+00116 .. LATIN CAPITAL LETTER E WITH DOT ABOVE
+.. |edot| unicode:: U+00117 .. LATIN SMALL LETTER E WITH DOT ABOVE
+.. |Emacr| unicode:: U+00112 .. LATIN CAPITAL LETTER E WITH MACRON
+.. |emacr| unicode:: U+00113 .. LATIN SMALL LETTER E WITH MACRON
+.. |ENG| unicode:: U+0014A .. LATIN CAPITAL LETTER ENG
+.. |eng| unicode:: U+0014B .. LATIN SMALL LETTER ENG
+.. |Eogon| unicode:: U+00118 .. LATIN CAPITAL LETTER E WITH OGONEK
+.. |eogon| unicode:: U+00119 .. LATIN SMALL LETTER E WITH OGONEK
+.. |gacute| unicode:: U+001F5 .. LATIN SMALL LETTER G WITH ACUTE
+.. |Gbreve| unicode:: U+0011E .. LATIN CAPITAL LETTER G WITH BREVE
+.. |gbreve| unicode:: U+0011F .. LATIN SMALL LETTER G WITH BREVE
+.. |Gcedil| unicode:: U+00122 .. LATIN CAPITAL LETTER G WITH CEDILLA
+.. |gcedil| unicode:: U+00123 .. LATIN SMALL LETTER G WITH CEDILLA
+.. |Gcirc| unicode:: U+0011C .. LATIN CAPITAL LETTER G WITH CIRCUMFLEX
+.. |gcirc| unicode:: U+0011D .. LATIN SMALL LETTER G WITH CIRCUMFLEX
+.. |Gdot| unicode:: U+00120 .. LATIN CAPITAL LETTER G WITH DOT ABOVE
+.. |gdot| unicode:: U+00121 .. LATIN SMALL LETTER G WITH DOT ABOVE
+.. |Hcirc| unicode:: U+00124 .. LATIN CAPITAL LETTER H WITH CIRCUMFLEX
+.. |hcirc| unicode:: U+00125 .. LATIN SMALL LETTER H WITH CIRCUMFLEX
+.. |Hstrok| unicode:: U+00126 .. LATIN CAPITAL LETTER H WITH STROKE
+.. |hstrok| unicode:: U+00127 .. LATIN SMALL LETTER H WITH STROKE
+.. |Idot| unicode:: U+00130 .. LATIN CAPITAL LETTER I WITH DOT ABOVE
+.. |IJlig| unicode:: U+00132 .. LATIN CAPITAL LIGATURE IJ
+.. |ijlig| unicode:: U+00133 .. LATIN SMALL LIGATURE IJ
+.. |Imacr| unicode:: U+0012A .. LATIN CAPITAL LETTER I WITH MACRON
+.. |imacr| unicode:: U+0012B .. LATIN SMALL LETTER I WITH MACRON
+.. |inodot| unicode:: U+00131 .. LATIN SMALL LETTER DOTLESS I
+.. |Iogon| unicode:: U+0012E .. LATIN CAPITAL LETTER I WITH OGONEK
+.. |iogon| unicode:: U+0012F .. LATIN SMALL LETTER I WITH OGONEK
+.. |Itilde| unicode:: U+00128 .. LATIN CAPITAL LETTER I WITH TILDE
+.. |itilde| unicode:: U+00129 .. LATIN SMALL LETTER I WITH TILDE
+.. |Jcirc| unicode:: U+00134 .. LATIN CAPITAL LETTER J WITH CIRCUMFLEX
+.. |jcirc| unicode:: U+00135 .. LATIN SMALL LETTER J WITH CIRCUMFLEX
+.. |Kcedil| unicode:: U+00136 .. LATIN CAPITAL LETTER K WITH CEDILLA
+.. |kcedil| unicode:: U+00137 .. LATIN SMALL LETTER K WITH CEDILLA
+.. |kgreen| unicode:: U+00138 .. LATIN SMALL LETTER KRA
+.. |Lacute| unicode:: U+00139 .. LATIN CAPITAL LETTER L WITH ACUTE
+.. |lacute| unicode:: U+0013A .. LATIN SMALL LETTER L WITH ACUTE
+.. |Lcaron| unicode:: U+0013D .. LATIN CAPITAL LETTER L WITH CARON
+.. |lcaron| unicode:: U+0013E .. LATIN SMALL LETTER L WITH CARON
+.. |Lcedil| unicode:: U+0013B .. LATIN CAPITAL LETTER L WITH CEDILLA
+.. |lcedil| unicode:: U+0013C .. LATIN SMALL LETTER L WITH CEDILLA
+.. |Lmidot| unicode:: U+0013F .. LATIN CAPITAL LETTER L WITH MIDDLE DOT
+.. |lmidot| unicode:: U+00140 .. LATIN SMALL LETTER L WITH MIDDLE DOT
+.. |Lstrok| unicode:: U+00141 .. LATIN CAPITAL LETTER L WITH STROKE
+.. |lstrok| unicode:: U+00142 .. LATIN SMALL LETTER L WITH STROKE
+.. |Nacute| unicode:: U+00143 .. LATIN CAPITAL LETTER N WITH ACUTE
+.. |nacute| unicode:: U+00144 .. LATIN SMALL LETTER N WITH ACUTE
+.. |napos| unicode:: U+00149 .. LATIN SMALL LETTER N PRECEDED BY APOSTROPHE
+.. |Ncaron| unicode:: U+00147 .. LATIN CAPITAL LETTER N WITH CARON
+.. |ncaron| unicode:: U+00148 .. LATIN SMALL LETTER N WITH CARON
+.. |Ncedil| unicode:: U+00145 .. LATIN CAPITAL LETTER N WITH CEDILLA
+.. |ncedil| unicode:: U+00146 .. LATIN SMALL LETTER N WITH CEDILLA
+.. |Odblac| unicode:: U+00150 .. LATIN CAPITAL LETTER O WITH DOUBLE ACUTE
+.. |odblac| unicode:: U+00151 .. LATIN SMALL LETTER O WITH DOUBLE ACUTE
+.. |OElig| unicode:: U+00152 .. LATIN CAPITAL LIGATURE OE
+.. |oelig| unicode:: U+00153 .. LATIN SMALL LIGATURE OE
+.. |Omacr| unicode:: U+0014C .. LATIN CAPITAL LETTER O WITH MACRON
+.. |omacr| unicode:: U+0014D .. LATIN SMALL LETTER O WITH MACRON
+.. |Racute| unicode:: U+00154 .. LATIN CAPITAL LETTER R WITH ACUTE
+.. |racute| unicode:: U+00155 .. LATIN SMALL LETTER R WITH ACUTE
+.. |Rcaron| unicode:: U+00158 .. LATIN CAPITAL LETTER R WITH CARON
+.. |rcaron| unicode:: U+00159 .. LATIN SMALL LETTER R WITH CARON
+.. |Rcedil| unicode:: U+00156 .. LATIN CAPITAL LETTER R WITH CEDILLA
+.. |rcedil| unicode:: U+00157 .. LATIN SMALL LETTER R WITH CEDILLA
+.. |Sacute| unicode:: U+0015A .. LATIN CAPITAL LETTER S WITH ACUTE
+.. |sacute| unicode:: U+0015B .. LATIN SMALL LETTER S WITH ACUTE
+.. |Scaron| unicode:: U+00160 .. LATIN CAPITAL LETTER S WITH CARON
+.. |scaron| unicode:: U+00161 .. LATIN SMALL LETTER S WITH CARON
+.. |Scedil| unicode:: U+0015E .. LATIN CAPITAL LETTER S WITH CEDILLA
+.. |scedil| unicode:: U+0015F .. LATIN SMALL LETTER S WITH CEDILLA
+.. |Scirc| unicode:: U+0015C .. LATIN CAPITAL LETTER S WITH CIRCUMFLEX
+.. |scirc| unicode:: U+0015D .. LATIN SMALL LETTER S WITH CIRCUMFLEX
+.. |Tcaron| unicode:: U+00164 .. LATIN CAPITAL LETTER T WITH CARON
+.. |tcaron| unicode:: U+00165 .. LATIN SMALL LETTER T WITH CARON
+.. |Tcedil| unicode:: U+00162 .. LATIN CAPITAL LETTER T WITH CEDILLA
+.. |tcedil| unicode:: U+00163 .. LATIN SMALL LETTER T WITH CEDILLA
+.. |Tstrok| unicode:: U+00166 .. LATIN CAPITAL LETTER T WITH STROKE
+.. |tstrok| unicode:: U+00167 .. LATIN SMALL LETTER T WITH STROKE
+.. |Ubreve| unicode:: U+0016C .. LATIN CAPITAL LETTER U WITH BREVE
+.. |ubreve| unicode:: U+0016D .. LATIN SMALL LETTER U WITH BREVE
+.. |Udblac| unicode:: U+00170 .. LATIN CAPITAL LETTER U WITH DOUBLE ACUTE
+.. |udblac| unicode:: U+00171 .. LATIN SMALL LETTER U WITH DOUBLE ACUTE
+.. |Umacr| unicode:: U+0016A .. LATIN CAPITAL LETTER U WITH MACRON
+.. |umacr| unicode:: U+0016B .. LATIN SMALL LETTER U WITH MACRON
+.. |Uogon| unicode:: U+00172 .. LATIN CAPITAL LETTER U WITH OGONEK
+.. |uogon| unicode:: U+00173 .. LATIN SMALL LETTER U WITH OGONEK
+.. |Uring| unicode:: U+0016E .. LATIN CAPITAL LETTER U WITH RING ABOVE
+.. |uring| unicode:: U+0016F .. LATIN SMALL LETTER U WITH RING ABOVE
+.. |Utilde| unicode:: U+00168 .. LATIN CAPITAL LETTER U WITH TILDE
+.. |utilde| unicode:: U+00169 .. LATIN SMALL LETTER U WITH TILDE
+.. |Wcirc| unicode:: U+00174 .. LATIN CAPITAL LETTER W WITH CIRCUMFLEX
+.. |wcirc| unicode:: U+00175 .. LATIN SMALL LETTER W WITH CIRCUMFLEX
+.. |Ycirc| unicode:: U+00176 .. LATIN CAPITAL LETTER Y WITH CIRCUMFLEX
+.. |ycirc| unicode:: U+00177 .. LATIN SMALL LETTER Y WITH CIRCUMFLEX
+.. |Yuml| unicode:: U+00178 .. LATIN CAPITAL LETTER Y WITH DIAERESIS
+.. |Zacute| unicode:: U+00179 .. LATIN CAPITAL LETTER Z WITH ACUTE
+.. |zacute| unicode:: U+0017A .. LATIN SMALL LETTER Z WITH ACUTE
+.. |Zcaron| unicode:: U+0017D .. LATIN CAPITAL LETTER Z WITH CARON
+.. |zcaron| unicode:: U+0017E .. LATIN SMALL LETTER Z WITH CARON
+.. |Zdot| unicode:: U+0017B .. LATIN CAPITAL LETTER Z WITH DOT ABOVE
+.. |zdot| unicode:: U+0017C .. LATIN SMALL LETTER Z WITH DOT ABOVE
diff --git a/python/helpers/docutils/parsers/rst/include/isomfrk-wide.txt b/python/helpers/docutils/parsers/rst/include/isomfrk-wide.txt
new file mode 100644
index 0000000..75bba25
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isomfrk-wide.txt
@@ -0,0 +1,58 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Afr| unicode:: U+1D504 .. MATHEMATICAL FRAKTUR CAPITAL A
+.. |afr| unicode:: U+1D51E .. MATHEMATICAL FRAKTUR SMALL A
+.. |Bfr| unicode:: U+1D505 .. MATHEMATICAL FRAKTUR CAPITAL B
+.. |bfr| unicode:: U+1D51F .. MATHEMATICAL FRAKTUR SMALL B
+.. |Cfr| unicode:: U+0212D .. BLACK-LETTER CAPITAL C
+.. |cfr| unicode:: U+1D520 .. MATHEMATICAL FRAKTUR SMALL C
+.. |Dfr| unicode:: U+1D507 .. MATHEMATICAL FRAKTUR CAPITAL D
+.. |dfr| unicode:: U+1D521 .. MATHEMATICAL FRAKTUR SMALL D
+.. |Efr| unicode:: U+1D508 .. MATHEMATICAL FRAKTUR CAPITAL E
+.. |efr| unicode:: U+1D522 .. MATHEMATICAL FRAKTUR SMALL E
+.. |Ffr| unicode:: U+1D509 .. MATHEMATICAL FRAKTUR CAPITAL F
+.. |ffr| unicode:: U+1D523 .. MATHEMATICAL FRAKTUR SMALL F
+.. |Gfr| unicode:: U+1D50A .. MATHEMATICAL FRAKTUR CAPITAL G
+.. |gfr| unicode:: U+1D524 .. MATHEMATICAL FRAKTUR SMALL G
+.. |Hfr| unicode:: U+0210C .. BLACK-LETTER CAPITAL H
+.. |hfr| unicode:: U+1D525 .. MATHEMATICAL FRAKTUR SMALL H
+.. |Ifr| unicode:: U+02111 .. BLACK-LETTER CAPITAL I
+.. |ifr| unicode:: U+1D526 .. MATHEMATICAL FRAKTUR SMALL I
+.. |Jfr| unicode:: U+1D50D .. MATHEMATICAL FRAKTUR CAPITAL J
+.. |jfr| unicode:: U+1D527 .. MATHEMATICAL FRAKTUR SMALL J
+.. |Kfr| unicode:: U+1D50E .. MATHEMATICAL FRAKTUR CAPITAL K
+.. |kfr| unicode:: U+1D528 .. MATHEMATICAL FRAKTUR SMALL K
+.. |Lfr| unicode:: U+1D50F .. MATHEMATICAL FRAKTUR CAPITAL L
+.. |lfr| unicode:: U+1D529 .. MATHEMATICAL FRAKTUR SMALL L
+.. |Mfr| unicode:: U+1D510 .. MATHEMATICAL FRAKTUR CAPITAL M
+.. |mfr| unicode:: U+1D52A .. MATHEMATICAL FRAKTUR SMALL M
+.. |Nfr| unicode:: U+1D511 .. MATHEMATICAL FRAKTUR CAPITAL N
+.. |nfr| unicode:: U+1D52B .. MATHEMATICAL FRAKTUR SMALL N
+.. |Ofr| unicode:: U+1D512 .. MATHEMATICAL FRAKTUR CAPITAL O
+.. |ofr| unicode:: U+1D52C .. MATHEMATICAL FRAKTUR SMALL O
+.. |Pfr| unicode:: U+1D513 .. MATHEMATICAL FRAKTUR CAPITAL P
+.. |pfr| unicode:: U+1D52D .. MATHEMATICAL FRAKTUR SMALL P
+.. |Qfr| unicode:: U+1D514 .. MATHEMATICAL FRAKTUR CAPITAL Q
+.. |qfr| unicode:: U+1D52E .. MATHEMATICAL FRAKTUR SMALL Q
+.. |Rfr| unicode:: U+0211C .. BLACK-LETTER CAPITAL R
+.. |rfr| unicode:: U+1D52F .. MATHEMATICAL FRAKTUR SMALL R
+.. |Sfr| unicode:: U+1D516 .. MATHEMATICAL FRAKTUR CAPITAL S
+.. |sfr| unicode:: U+1D530 .. MATHEMATICAL FRAKTUR SMALL S
+.. |Tfr| unicode:: U+1D517 .. MATHEMATICAL FRAKTUR CAPITAL T
+.. |tfr| unicode:: U+1D531 .. MATHEMATICAL FRAKTUR SMALL T
+.. |Ufr| unicode:: U+1D518 .. MATHEMATICAL FRAKTUR CAPITAL U
+.. |ufr| unicode:: U+1D532 .. MATHEMATICAL FRAKTUR SMALL U
+.. |Vfr| unicode:: U+1D519 .. MATHEMATICAL FRAKTUR CAPITAL V
+.. |vfr| unicode:: U+1D533 .. MATHEMATICAL FRAKTUR SMALL V
+.. |Wfr| unicode:: U+1D51A .. MATHEMATICAL FRAKTUR CAPITAL W
+.. |wfr| unicode:: U+1D534 .. MATHEMATICAL FRAKTUR SMALL W
+.. |Xfr| unicode:: U+1D51B .. MATHEMATICAL FRAKTUR CAPITAL X
+.. |xfr| unicode:: U+1D535 .. MATHEMATICAL FRAKTUR SMALL X
+.. |Yfr| unicode:: U+1D51C .. MATHEMATICAL FRAKTUR CAPITAL Y
+.. |yfr| unicode:: U+1D536 .. MATHEMATICAL FRAKTUR SMALL Y
+.. |Zfr| unicode:: U+02128 .. BLACK-LETTER CAPITAL Z
+.. |zfr| unicode:: U+1D537 .. MATHEMATICAL FRAKTUR SMALL Z
diff --git a/python/helpers/docutils/parsers/rst/include/isomfrk.txt b/python/helpers/docutils/parsers/rst/include/isomfrk.txt
new file mode 100644
index 0000000..868b687a
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isomfrk.txt
@@ -0,0 +1,11 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Cfr| unicode:: U+0212D .. BLACK-LETTER CAPITAL C
+.. |Hfr| unicode:: U+0210C .. BLACK-LETTER CAPITAL H
+.. |Ifr| unicode:: U+02111 .. BLACK-LETTER CAPITAL I
+.. |Rfr| unicode:: U+0211C .. BLACK-LETTER CAPITAL R
+.. |Zfr| unicode:: U+02128 .. BLACK-LETTER CAPITAL Z
diff --git a/python/helpers/docutils/parsers/rst/include/isomopf-wide.txt b/python/helpers/docutils/parsers/rst/include/isomopf-wide.txt
new file mode 100644
index 0000000..a91ea43
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isomopf-wide.txt
@@ -0,0 +1,32 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Aopf| unicode:: U+1D538 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL A
+.. |Bopf| unicode:: U+1D539 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL B
+.. |Copf| unicode:: U+02102 .. DOUBLE-STRUCK CAPITAL C
+.. |Dopf| unicode:: U+1D53B .. MATHEMATICAL DOUBLE-STRUCK CAPITAL D
+.. |Eopf| unicode:: U+1D53C .. MATHEMATICAL DOUBLE-STRUCK CAPITAL E
+.. |Fopf| unicode:: U+1D53D .. MATHEMATICAL DOUBLE-STRUCK CAPITAL F
+.. |Gopf| unicode:: U+1D53E .. MATHEMATICAL DOUBLE-STRUCK CAPITAL G
+.. |Hopf| unicode:: U+0210D .. DOUBLE-STRUCK CAPITAL H
+.. |Iopf| unicode:: U+1D540 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL I
+.. |Jopf| unicode:: U+1D541 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL J
+.. |Kopf| unicode:: U+1D542 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL K
+.. |Lopf| unicode:: U+1D543 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL L
+.. |Mopf| unicode:: U+1D544 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL M
+.. |Nopf| unicode:: U+02115 .. DOUBLE-STRUCK CAPITAL N
+.. |Oopf| unicode:: U+1D546 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL O
+.. |Popf| unicode:: U+02119 .. DOUBLE-STRUCK CAPITAL P
+.. |Qopf| unicode:: U+0211A .. DOUBLE-STRUCK CAPITAL Q
+.. |Ropf| unicode:: U+0211D .. DOUBLE-STRUCK CAPITAL R
+.. |Sopf| unicode:: U+1D54A .. MATHEMATICAL DOUBLE-STRUCK CAPITAL S
+.. |Topf| unicode:: U+1D54B .. MATHEMATICAL DOUBLE-STRUCK CAPITAL T
+.. |Uopf| unicode:: U+1D54C .. MATHEMATICAL DOUBLE-STRUCK CAPITAL U
+.. |Vopf| unicode:: U+1D54D .. MATHEMATICAL DOUBLE-STRUCK CAPITAL V
+.. |Wopf| unicode:: U+1D54E .. MATHEMATICAL DOUBLE-STRUCK CAPITAL W
+.. |Xopf| unicode:: U+1D54F .. MATHEMATICAL DOUBLE-STRUCK CAPITAL X
+.. |Yopf| unicode:: U+1D550 .. MATHEMATICAL DOUBLE-STRUCK CAPITAL Y
+.. |Zopf| unicode:: U+02124 .. DOUBLE-STRUCK CAPITAL Z
diff --git a/python/helpers/docutils/parsers/rst/include/isomopf.txt b/python/helpers/docutils/parsers/rst/include/isomopf.txt
new file mode 100644
index 0000000..4350db6
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isomopf.txt
@@ -0,0 +1,13 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Copf| unicode:: U+02102 .. DOUBLE-STRUCK CAPITAL C
+.. |Hopf| unicode:: U+0210D .. DOUBLE-STRUCK CAPITAL H
+.. |Nopf| unicode:: U+02115 .. DOUBLE-STRUCK CAPITAL N
+.. |Popf| unicode:: U+02119 .. DOUBLE-STRUCK CAPITAL P
+.. |Qopf| unicode:: U+0211A .. DOUBLE-STRUCK CAPITAL Q
+.. |Ropf| unicode:: U+0211D .. DOUBLE-STRUCK CAPITAL R
+.. |Zopf| unicode:: U+02124 .. DOUBLE-STRUCK CAPITAL Z
diff --git a/python/helpers/docutils/parsers/rst/include/isomscr-wide.txt b/python/helpers/docutils/parsers/rst/include/isomscr-wide.txt
new file mode 100644
index 0000000..34b278b
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isomscr-wide.txt
@@ -0,0 +1,58 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Ascr| unicode:: U+1D49C .. MATHEMATICAL SCRIPT CAPITAL A
+.. |ascr| unicode:: U+1D4B6 .. MATHEMATICAL SCRIPT SMALL A
+.. |Bscr| unicode:: U+0212C .. SCRIPT CAPITAL B
+.. |bscr| unicode:: U+1D4B7 .. MATHEMATICAL SCRIPT SMALL B
+.. |Cscr| unicode:: U+1D49E .. MATHEMATICAL SCRIPT CAPITAL C
+.. |cscr| unicode:: U+1D4B8 .. MATHEMATICAL SCRIPT SMALL C
+.. |Dscr| unicode:: U+1D49F .. MATHEMATICAL SCRIPT CAPITAL D
+.. |dscr| unicode:: U+1D4B9 .. MATHEMATICAL SCRIPT SMALL D
+.. |Escr| unicode:: U+02130 .. SCRIPT CAPITAL E
+.. |escr| unicode:: U+0212F .. SCRIPT SMALL E
+.. |Fscr| unicode:: U+02131 .. SCRIPT CAPITAL F
+.. |fscr| unicode:: U+1D4BB .. MATHEMATICAL SCRIPT SMALL F
+.. |Gscr| unicode:: U+1D4A2 .. MATHEMATICAL SCRIPT CAPITAL G
+.. |gscr| unicode:: U+0210A .. SCRIPT SMALL G
+.. |Hscr| unicode:: U+0210B .. SCRIPT CAPITAL H
+.. |hscr| unicode:: U+1D4BD .. MATHEMATICAL SCRIPT SMALL H
+.. |Iscr| unicode:: U+02110 .. SCRIPT CAPITAL I
+.. |iscr| unicode:: U+1D4BE .. MATHEMATICAL SCRIPT SMALL I
+.. |Jscr| unicode:: U+1D4A5 .. MATHEMATICAL SCRIPT CAPITAL J
+.. |jscr| unicode:: U+1D4BF .. MATHEMATICAL SCRIPT SMALL J
+.. |Kscr| unicode:: U+1D4A6 .. MATHEMATICAL SCRIPT CAPITAL K
+.. |kscr| unicode:: U+1D4C0 .. MATHEMATICAL SCRIPT SMALL K
+.. |Lscr| unicode:: U+02112 .. SCRIPT CAPITAL L
+.. |lscr| unicode:: U+1D4C1 .. MATHEMATICAL SCRIPT SMALL L
+.. |Mscr| unicode:: U+02133 .. SCRIPT CAPITAL M
+.. |mscr| unicode:: U+1D4C2 .. MATHEMATICAL SCRIPT SMALL M
+.. |Nscr| unicode:: U+1D4A9 .. MATHEMATICAL SCRIPT CAPITAL N
+.. |nscr| unicode:: U+1D4C3 .. MATHEMATICAL SCRIPT SMALL N
+.. |Oscr| unicode:: U+1D4AA .. MATHEMATICAL SCRIPT CAPITAL O
+.. |oscr| unicode:: U+02134 .. SCRIPT SMALL O
+.. |Pscr| unicode:: U+1D4AB .. MATHEMATICAL SCRIPT CAPITAL P
+.. |pscr| unicode:: U+1D4C5 .. MATHEMATICAL SCRIPT SMALL P
+.. |Qscr| unicode:: U+1D4AC .. MATHEMATICAL SCRIPT CAPITAL Q
+.. |qscr| unicode:: U+1D4C6 .. MATHEMATICAL SCRIPT SMALL Q
+.. |Rscr| unicode:: U+0211B .. SCRIPT CAPITAL R
+.. |rscr| unicode:: U+1D4C7 .. MATHEMATICAL SCRIPT SMALL R
+.. |Sscr| unicode:: U+1D4AE .. MATHEMATICAL SCRIPT CAPITAL S
+.. |sscr| unicode:: U+1D4C8 .. MATHEMATICAL SCRIPT SMALL S
+.. |Tscr| unicode:: U+1D4AF .. MATHEMATICAL SCRIPT CAPITAL T
+.. |tscr| unicode:: U+1D4C9 .. MATHEMATICAL SCRIPT SMALL T
+.. |Uscr| unicode:: U+1D4B0 .. MATHEMATICAL SCRIPT CAPITAL U
+.. |uscr| unicode:: U+1D4CA .. MATHEMATICAL SCRIPT SMALL U
+.. |Vscr| unicode:: U+1D4B1 .. MATHEMATICAL SCRIPT CAPITAL V
+.. |vscr| unicode:: U+1D4CB .. MATHEMATICAL SCRIPT SMALL V
+.. |Wscr| unicode:: U+1D4B2 .. MATHEMATICAL SCRIPT CAPITAL W
+.. |wscr| unicode:: U+1D4CC .. MATHEMATICAL SCRIPT SMALL W
+.. |Xscr| unicode:: U+1D4B3 .. MATHEMATICAL SCRIPT CAPITAL X
+.. |xscr| unicode:: U+1D4CD .. MATHEMATICAL SCRIPT SMALL X
+.. |Yscr| unicode:: U+1D4B4 .. MATHEMATICAL SCRIPT CAPITAL Y
+.. |yscr| unicode:: U+1D4CE .. MATHEMATICAL SCRIPT SMALL Y
+.. |Zscr| unicode:: U+1D4B5 .. MATHEMATICAL SCRIPT CAPITAL Z
+.. |zscr| unicode:: U+1D4CF .. MATHEMATICAL SCRIPT SMALL Z
diff --git a/python/helpers/docutils/parsers/rst/include/isomscr.txt b/python/helpers/docutils/parsers/rst/include/isomscr.txt
new file mode 100644
index 0000000..a77890e
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isomscr.txt
@@ -0,0 +1,17 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Bscr| unicode:: U+0212C .. SCRIPT CAPITAL B
+.. |Escr| unicode:: U+02130 .. SCRIPT CAPITAL E
+.. |escr| unicode:: U+0212F .. SCRIPT SMALL E
+.. |Fscr| unicode:: U+02131 .. SCRIPT CAPITAL F
+.. |gscr| unicode:: U+0210A .. SCRIPT SMALL G
+.. |Hscr| unicode:: U+0210B .. SCRIPT CAPITAL H
+.. |Iscr| unicode:: U+02110 .. SCRIPT CAPITAL I
+.. |Lscr| unicode:: U+02112 .. SCRIPT CAPITAL L
+.. |Mscr| unicode:: U+02133 .. SCRIPT CAPITAL M
+.. |oscr| unicode:: U+02134 .. SCRIPT SMALL O
+.. |Rscr| unicode:: U+0211B .. SCRIPT CAPITAL R
diff --git a/python/helpers/docutils/parsers/rst/include/isonum.txt b/python/helpers/docutils/parsers/rst/include/isonum.txt
new file mode 100644
index 0000000..35793b3
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isonum.txt
@@ -0,0 +1,82 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |amp| unicode:: U+00026 .. AMPERSAND
+.. |apos| unicode:: U+00027 .. APOSTROPHE
+.. |ast| unicode:: U+0002A .. ASTERISK
+.. |brvbar| unicode:: U+000A6 .. BROKEN BAR
+.. |bsol| unicode:: U+0005C .. REVERSE SOLIDUS
+.. |cent| unicode:: U+000A2 .. CENT SIGN
+.. |colon| unicode:: U+0003A .. COLON
+.. |comma| unicode:: U+0002C .. COMMA
+.. |commat| unicode:: U+00040 .. COMMERCIAL AT
+.. |copy| unicode:: U+000A9 .. COPYRIGHT SIGN
+.. |curren| unicode:: U+000A4 .. CURRENCY SIGN
+.. |darr| unicode:: U+02193 .. DOWNWARDS ARROW
+.. |deg| unicode:: U+000B0 .. DEGREE SIGN
+.. |divide| unicode:: U+000F7 .. DIVISION SIGN
+.. |dollar| unicode:: U+00024 .. DOLLAR SIGN
+.. |equals| unicode:: U+0003D .. EQUALS SIGN
+.. |excl| unicode:: U+00021 .. EXCLAMATION MARK
+.. |frac12| unicode:: U+000BD .. VULGAR FRACTION ONE HALF
+.. |frac14| unicode:: U+000BC .. VULGAR FRACTION ONE QUARTER
+.. |frac18| unicode:: U+0215B .. VULGAR FRACTION ONE EIGHTH
+.. |frac34| unicode:: U+000BE .. VULGAR FRACTION THREE QUARTERS
+.. |frac38| unicode:: U+0215C .. VULGAR FRACTION THREE EIGHTHS
+.. |frac58| unicode:: U+0215D .. VULGAR FRACTION FIVE EIGHTHS
+.. |frac78| unicode:: U+0215E .. VULGAR FRACTION SEVEN EIGHTHS
+.. |gt| unicode:: U+0003E .. GREATER-THAN SIGN
+.. |half| unicode:: U+000BD .. VULGAR FRACTION ONE HALF
+.. |horbar| unicode:: U+02015 .. HORIZONTAL BAR
+.. |hyphen| unicode:: U+02010 .. HYPHEN
+.. |iexcl| unicode:: U+000A1 .. INVERTED EXCLAMATION MARK
+.. |iquest| unicode:: U+000BF .. INVERTED QUESTION MARK
+.. |laquo| unicode:: U+000AB .. LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
+.. |larr| unicode:: U+02190 .. LEFTWARDS ARROW
+.. |lcub| unicode:: U+0007B .. LEFT CURLY BRACKET
+.. |ldquo| unicode:: U+0201C .. LEFT DOUBLE QUOTATION MARK
+.. |lowbar| unicode:: U+0005F .. LOW LINE
+.. |lpar| unicode:: U+00028 .. LEFT PARENTHESIS
+.. |lsqb| unicode:: U+0005B .. LEFT SQUARE BRACKET
+.. |lsquo| unicode:: U+02018 .. LEFT SINGLE QUOTATION MARK
+.. |lt| unicode:: U+0003C .. LESS-THAN SIGN
+.. |micro| unicode:: U+000B5 .. MICRO SIGN
+.. |middot| unicode:: U+000B7 .. MIDDLE DOT
+.. |nbsp| unicode:: U+000A0 .. NO-BREAK SPACE
+.. |not| unicode:: U+000AC .. NOT SIGN
+.. |num| unicode:: U+00023 .. NUMBER SIGN
+.. |ohm| unicode:: U+02126 .. OHM SIGN
+.. |ordf| unicode:: U+000AA .. FEMININE ORDINAL INDICATOR
+.. |ordm| unicode:: U+000BA .. MASCULINE ORDINAL INDICATOR
+.. |para| unicode:: U+000B6 .. PILCROW SIGN
+.. |percnt| unicode:: U+00025 .. PERCENT SIGN
+.. |period| unicode:: U+0002E .. FULL STOP
+.. |plus| unicode:: U+0002B .. PLUS SIGN
+.. |plusmn| unicode:: U+000B1 .. PLUS-MINUS SIGN
+.. |pound| unicode:: U+000A3 .. POUND SIGN
+.. |quest| unicode:: U+0003F .. QUESTION MARK
+.. |quot| unicode:: U+00022 .. QUOTATION MARK
+.. |raquo| unicode:: U+000BB .. RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
+.. |rarr| unicode:: U+02192 .. RIGHTWARDS ARROW
+.. |rcub| unicode:: U+0007D .. RIGHT CURLY BRACKET
+.. |rdquo| unicode:: U+0201D .. RIGHT DOUBLE QUOTATION MARK
+.. |reg| unicode:: U+000AE .. REGISTERED SIGN
+.. |rpar| unicode:: U+00029 .. RIGHT PARENTHESIS
+.. |rsqb| unicode:: U+0005D .. RIGHT SQUARE BRACKET
+.. |rsquo| unicode:: U+02019 .. RIGHT SINGLE QUOTATION MARK
+.. |sect| unicode:: U+000A7 .. SECTION SIGN
+.. |semi| unicode:: U+0003B .. SEMICOLON
+.. |shy| unicode:: U+000AD .. SOFT HYPHEN
+.. |sol| unicode:: U+0002F .. SOLIDUS
+.. |sung| unicode:: U+0266A .. EIGHTH NOTE
+.. |sup1| unicode:: U+000B9 .. SUPERSCRIPT ONE
+.. |sup2| unicode:: U+000B2 .. SUPERSCRIPT TWO
+.. |sup3| unicode:: U+000B3 .. SUPERSCRIPT THREE
+.. |times| unicode:: U+000D7 .. MULTIPLICATION SIGN
+.. |trade| unicode:: U+02122 .. TRADE MARK SIGN
+.. |uarr| unicode:: U+02191 .. UPWARDS ARROW
+.. |verbar| unicode:: U+0007C .. VERTICAL LINE
+.. |yen| unicode:: U+000A5 .. YEN SIGN
diff --git a/python/helpers/docutils/parsers/rst/include/isopub.txt b/python/helpers/docutils/parsers/rst/include/isopub.txt
new file mode 100644
index 0000000..bc5b6d4
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isopub.txt
@@ -0,0 +1,90 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |blank| unicode:: U+02423 .. OPEN BOX
+.. |blk12| unicode:: U+02592 .. MEDIUM SHADE
+.. |blk14| unicode:: U+02591 .. LIGHT SHADE
+.. |blk34| unicode:: U+02593 .. DARK SHADE
+.. |block| unicode:: U+02588 .. FULL BLOCK
+.. |bull| unicode:: U+02022 .. BULLET
+.. |caret| unicode:: U+02041 .. CARET INSERTION POINT
+.. |check| unicode:: U+02713 .. CHECK MARK
+.. |cir| unicode:: U+025CB .. WHITE CIRCLE
+.. |clubs| unicode:: U+02663 .. BLACK CLUB SUIT
+.. |copysr| unicode:: U+02117 .. SOUND RECORDING COPYRIGHT
+.. |cross| unicode:: U+02717 .. BALLOT X
+.. |Dagger| unicode:: U+02021 .. DOUBLE DAGGER
+.. |dagger| unicode:: U+02020 .. DAGGER
+.. |dash| unicode:: U+02010 .. HYPHEN
+.. |diams| unicode:: U+02666 .. BLACK DIAMOND SUIT
+.. |dlcrop| unicode:: U+0230D .. BOTTOM LEFT CROP
+.. |drcrop| unicode:: U+0230C .. BOTTOM RIGHT CROP
+.. |dtri| unicode:: U+025BF .. WHITE DOWN-POINTING SMALL TRIANGLE
+.. |dtrif| unicode:: U+025BE .. BLACK DOWN-POINTING SMALL TRIANGLE
+.. |emsp| unicode:: U+02003 .. EM SPACE
+.. |emsp13| unicode:: U+02004 .. THREE-PER-EM SPACE
+.. |emsp14| unicode:: U+02005 .. FOUR-PER-EM SPACE
+.. |ensp| unicode:: U+02002 .. EN SPACE
+.. |female| unicode:: U+02640 .. FEMALE SIGN
+.. |ffilig| unicode:: U+0FB03 .. LATIN SMALL LIGATURE FFI
+.. |fflig| unicode:: U+0FB00 .. LATIN SMALL LIGATURE FF
+.. |ffllig| unicode:: U+0FB04 .. LATIN SMALL LIGATURE FFL
+.. |filig| unicode:: U+0FB01 .. LATIN SMALL LIGATURE FI
+.. |flat| unicode:: U+0266D .. MUSIC FLAT SIGN
+.. |fllig| unicode:: U+0FB02 .. LATIN SMALL LIGATURE FL
+.. |frac13| unicode:: U+02153 .. VULGAR FRACTION ONE THIRD
+.. |frac15| unicode:: U+02155 .. VULGAR FRACTION ONE FIFTH
+.. |frac16| unicode:: U+02159 .. VULGAR FRACTION ONE SIXTH
+.. |frac23| unicode:: U+02154 .. VULGAR FRACTION TWO THIRDS
+.. |frac25| unicode:: U+02156 .. VULGAR FRACTION TWO FIFTHS
+.. |frac35| unicode:: U+02157 .. VULGAR FRACTION THREE FIFTHS
+.. |frac45| unicode:: U+02158 .. VULGAR FRACTION FOUR FIFTHS
+.. |frac56| unicode:: U+0215A .. VULGAR FRACTION FIVE SIXTHS
+.. |hairsp| unicode:: U+0200A .. HAIR SPACE
+.. |hearts| unicode:: U+02665 .. BLACK HEART SUIT
+.. |hellip| unicode:: U+02026 .. HORIZONTAL ELLIPSIS
+.. |hybull| unicode:: U+02043 .. HYPHEN BULLET
+.. |incare| unicode:: U+02105 .. CARE OF
+.. |ldquor| unicode:: U+0201E .. DOUBLE LOW-9 QUOTATION MARK
+.. |lhblk| unicode:: U+02584 .. LOWER HALF BLOCK
+.. |loz| unicode:: U+025CA .. LOZENGE
+.. |lozf| unicode:: U+029EB .. BLACK LOZENGE
+.. |lsquor| unicode:: U+0201A .. SINGLE LOW-9 QUOTATION MARK
+.. |ltri| unicode:: U+025C3 .. WHITE LEFT-POINTING SMALL TRIANGLE
+.. |ltrif| unicode:: U+025C2 .. BLACK LEFT-POINTING SMALL TRIANGLE
+.. |male| unicode:: U+02642 .. MALE SIGN
+.. |malt| unicode:: U+02720 .. MALTESE CROSS
+.. |marker| unicode:: U+025AE .. BLACK VERTICAL RECTANGLE
+.. |mdash| unicode:: U+02014 .. EM DASH
+.. |mldr| unicode:: U+02026 .. HORIZONTAL ELLIPSIS
+.. |natur| unicode:: U+0266E .. MUSIC NATURAL SIGN
+.. |ndash| unicode:: U+02013 .. EN DASH
+.. |nldr| unicode:: U+02025 .. TWO DOT LEADER
+.. |numsp| unicode:: U+02007 .. FIGURE SPACE
+.. |phone| unicode:: U+0260E .. BLACK TELEPHONE
+.. |puncsp| unicode:: U+02008 .. PUNCTUATION SPACE
+.. |rdquor| unicode:: U+0201D .. RIGHT DOUBLE QUOTATION MARK
+.. |rect| unicode:: U+025AD .. WHITE RECTANGLE
+.. |rsquor| unicode:: U+02019 .. RIGHT SINGLE QUOTATION MARK
+.. |rtri| unicode:: U+025B9 .. WHITE RIGHT-POINTING SMALL TRIANGLE
+.. |rtrif| unicode:: U+025B8 .. BLACK RIGHT-POINTING SMALL TRIANGLE
+.. |rx| unicode:: U+0211E .. PRESCRIPTION TAKE
+.. |sext| unicode:: U+02736 .. SIX POINTED BLACK STAR
+.. |sharp| unicode:: U+0266F .. MUSIC SHARP SIGN
+.. |spades| unicode:: U+02660 .. BLACK SPADE SUIT
+.. |squ| unicode:: U+025A1 .. WHITE SQUARE
+.. |squf| unicode:: U+025AA .. BLACK SMALL SQUARE
+.. |star| unicode:: U+02606 .. WHITE STAR
+.. |starf| unicode:: U+02605 .. BLACK STAR
+.. |target| unicode:: U+02316 .. POSITION INDICATOR
+.. |telrec| unicode:: U+02315 .. TELEPHONE RECORDER
+.. |thinsp| unicode:: U+02009 .. THIN SPACE
+.. |uhblk| unicode:: U+02580 .. UPPER HALF BLOCK
+.. |ulcrop| unicode:: U+0230F .. TOP LEFT CROP
+.. |urcrop| unicode:: U+0230E .. TOP RIGHT CROP
+.. |utri| unicode:: U+025B5 .. WHITE UP-POINTING SMALL TRIANGLE
+.. |utrif| unicode:: U+025B4 .. BLACK UP-POINTING SMALL TRIANGLE
+.. |vellip| unicode:: U+022EE .. VERTICAL ELLIPSIS
diff --git a/python/helpers/docutils/parsers/rst/include/isotech.txt b/python/helpers/docutils/parsers/rst/include/isotech.txt
new file mode 100644
index 0000000..01f7e34
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/isotech.txt
@@ -0,0 +1,168 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |acd| unicode:: U+0223F .. SINE WAVE
+.. |aleph| unicode:: U+02135 .. ALEF SYMBOL
+.. |And| unicode:: U+02A53 .. DOUBLE LOGICAL AND
+.. |and| unicode:: U+02227 .. LOGICAL AND
+.. |andand| unicode:: U+02A55 .. TWO INTERSECTING LOGICAL AND
+.. |andd| unicode:: U+02A5C .. LOGICAL AND WITH HORIZONTAL DASH
+.. |andslope| unicode:: U+02A58 .. SLOPING LARGE AND
+.. |andv| unicode:: U+02A5A .. LOGICAL AND WITH MIDDLE STEM
+.. |ang90| unicode:: U+0221F .. RIGHT ANGLE
+.. |angrt| unicode:: U+0221F .. RIGHT ANGLE
+.. |angsph| unicode:: U+02222 .. SPHERICAL ANGLE
+.. |angst| unicode:: U+0212B .. ANGSTROM SIGN
+.. |ap| unicode:: U+02248 .. ALMOST EQUAL TO
+.. |apacir| unicode:: U+02A6F .. ALMOST EQUAL TO WITH CIRCUMFLEX ACCENT
+.. |awconint| unicode:: U+02233 .. ANTICLOCKWISE CONTOUR INTEGRAL
+.. |awint| unicode:: U+02A11 .. ANTICLOCKWISE INTEGRATION
+.. |becaus| unicode:: U+02235 .. BECAUSE
+.. |bernou| unicode:: U+0212C .. SCRIPT CAPITAL B
+.. |bne| unicode:: U+0003D U+020E5 .. EQUALS SIGN with reverse slash
+.. |bnequiv| unicode:: U+02261 U+020E5 .. IDENTICAL TO with reverse slash
+.. |bNot| unicode:: U+02AED .. REVERSED DOUBLE STROKE NOT SIGN
+.. |bnot| unicode:: U+02310 .. REVERSED NOT SIGN
+.. |bottom| unicode:: U+022A5 .. UP TACK
+.. |cap| unicode:: U+02229 .. INTERSECTION
+.. |Cconint| unicode:: U+02230 .. VOLUME INTEGRAL
+.. |cirfnint| unicode:: U+02A10 .. CIRCULATION FUNCTION
+.. |compfn| unicode:: U+02218 .. RING OPERATOR
+.. |cong| unicode:: U+02245 .. APPROXIMATELY EQUAL TO
+.. |Conint| unicode:: U+0222F .. SURFACE INTEGRAL
+.. |conint| unicode:: U+0222E .. CONTOUR INTEGRAL
+.. |ctdot| unicode:: U+022EF .. MIDLINE HORIZONTAL ELLIPSIS
+.. |cup| unicode:: U+0222A .. UNION
+.. |cwconint| unicode:: U+02232 .. CLOCKWISE CONTOUR INTEGRAL
+.. |cwint| unicode:: U+02231 .. CLOCKWISE INTEGRAL
+.. |cylcty| unicode:: U+0232D .. CYLINDRICITY
+.. |disin| unicode:: U+022F2 .. ELEMENT OF WITH LONG HORIZONTAL STROKE
+.. |Dot| unicode:: U+000A8 .. DIAERESIS
+.. |DotDot| unicode:: U+020DC .. COMBINING FOUR DOTS ABOVE
+.. |dsol| unicode:: U+029F6 .. SOLIDUS WITH OVERBAR
+.. |dtdot| unicode:: U+022F1 .. DOWN RIGHT DIAGONAL ELLIPSIS
+.. |dwangle| unicode:: U+029A6 .. OBLIQUE ANGLE OPENING UP
+.. |elinters| unicode:: U+0FFFD .. REPLACEMENT CHARACTER
+.. |epar| unicode:: U+022D5 .. EQUAL AND PARALLEL TO
+.. |eparsl| unicode:: U+029E3 .. EQUALS SIGN AND SLANTED PARALLEL
+.. |equiv| unicode:: U+02261 .. IDENTICAL TO
+.. |eqvparsl| unicode:: U+029E5 .. IDENTICAL TO AND SLANTED PARALLEL
+.. |exist| unicode:: U+02203 .. THERE EXISTS
+.. |fltns| unicode:: U+025B1 .. WHITE PARALLELOGRAM
+.. |fnof| unicode:: U+00192 .. LATIN SMALL LETTER F WITH HOOK
+.. |forall| unicode:: U+02200 .. FOR ALL
+.. |fpartint| unicode:: U+02A0D .. FINITE PART INTEGRAL
+.. |ge| unicode:: U+02265 .. GREATER-THAN OR EQUAL TO
+.. |hamilt| unicode:: U+0210B .. SCRIPT CAPITAL H
+.. |iff| unicode:: U+021D4 .. LEFT RIGHT DOUBLE ARROW
+.. |iinfin| unicode:: U+029DC .. INCOMPLETE INFINITY
+.. |imped| unicode:: U+001B5 .. LATIN CAPITAL LETTER Z WITH STROKE
+.. |infin| unicode:: U+0221E .. INFINITY
+.. |infintie| unicode:: U+029DD .. TIE OVER INFINITY
+.. |Int| unicode:: U+0222C .. DOUBLE INTEGRAL
+.. |int| unicode:: U+0222B .. INTEGRAL
+.. |intlarhk| unicode:: U+02A17 .. INTEGRAL WITH LEFTWARDS ARROW WITH HOOK
+.. |isin| unicode:: U+02208 .. ELEMENT OF
+.. |isindot| unicode:: U+022F5 .. ELEMENT OF WITH DOT ABOVE
+.. |isinE| unicode:: U+022F9 .. ELEMENT OF WITH TWO HORIZONTAL STROKES
+.. |isins| unicode:: U+022F4 .. SMALL ELEMENT OF WITH VERTICAL BAR AT END OF HORIZONTAL STROKE
+.. |isinsv| unicode:: U+022F3 .. ELEMENT OF WITH VERTICAL BAR AT END OF HORIZONTAL STROKE
+.. |isinv| unicode:: U+02208 .. ELEMENT OF
+.. |lagran| unicode:: U+02112 .. SCRIPT CAPITAL L
+.. |Lang| unicode:: U+0300A .. LEFT DOUBLE ANGLE BRACKET
+.. |lang| unicode:: U+02329 .. LEFT-POINTING ANGLE BRACKET
+.. |lArr| unicode:: U+021D0 .. LEFTWARDS DOUBLE ARROW
+.. |lbbrk| unicode:: U+03014 .. LEFT TORTOISE SHELL BRACKET
+.. |le| unicode:: U+02264 .. LESS-THAN OR EQUAL TO
+.. |loang| unicode:: U+03018 .. LEFT WHITE TORTOISE SHELL BRACKET
+.. |lobrk| unicode:: U+0301A .. LEFT WHITE SQUARE BRACKET
+.. |lopar| unicode:: U+02985 .. LEFT WHITE PARENTHESIS
+.. |lowast| unicode:: U+02217 .. ASTERISK OPERATOR
+.. |minus| unicode:: U+02212 .. MINUS SIGN
+.. |mnplus| unicode:: U+02213 .. MINUS-OR-PLUS SIGN
+.. |nabla| unicode:: U+02207 .. NABLA
+.. |ne| unicode:: U+02260 .. NOT EQUAL TO
+.. |nedot| unicode:: U+02250 U+00338 .. APPROACHES THE LIMIT with slash
+.. |nhpar| unicode:: U+02AF2 .. PARALLEL WITH HORIZONTAL STROKE
+.. |ni| unicode:: U+0220B .. CONTAINS AS MEMBER
+.. |nis| unicode:: U+022FC .. SMALL CONTAINS WITH VERTICAL BAR AT END OF HORIZONTAL STROKE
+.. |nisd| unicode:: U+022FA .. CONTAINS WITH LONG HORIZONTAL STROKE
+.. |niv| unicode:: U+0220B .. CONTAINS AS MEMBER
+.. |Not| unicode:: U+02AEC .. DOUBLE STROKE NOT SIGN
+.. |notin| unicode:: U+02209 .. NOT AN ELEMENT OF
+.. |notindot| unicode:: U+022F5 U+00338 .. ELEMENT OF WITH DOT ABOVE with slash
+.. |notinE| unicode:: U+022F9 U+00338 .. ELEMENT OF WITH TWO HORIZONTAL STROKES with slash
+.. |notinva| unicode:: U+02209 .. NOT AN ELEMENT OF
+.. |notinvb| unicode:: U+022F7 .. SMALL ELEMENT OF WITH OVERBAR
+.. |notinvc| unicode:: U+022F6 .. ELEMENT OF WITH OVERBAR
+.. |notni| unicode:: U+0220C .. DOES NOT CONTAIN AS MEMBER
+.. |notniva| unicode:: U+0220C .. DOES NOT CONTAIN AS MEMBER
+.. |notnivb| unicode:: U+022FE .. SMALL CONTAINS WITH OVERBAR
+.. |notnivc| unicode:: U+022FD .. CONTAINS WITH OVERBAR
+.. |nparsl| unicode:: U+02AFD U+020E5 .. DOUBLE SOLIDUS OPERATOR with reverse slash
+.. |npart| unicode:: U+02202 U+00338 .. PARTIAL DIFFERENTIAL with slash
+.. |npolint| unicode:: U+02A14 .. LINE INTEGRATION NOT INCLUDING THE POLE
+.. |nvinfin| unicode:: U+029DE .. INFINITY NEGATED WITH VERTICAL BAR
+.. |olcross| unicode:: U+029BB .. CIRCLE WITH SUPERIMPOSED X
+.. |Or| unicode:: U+02A54 .. DOUBLE LOGICAL OR
+.. |or| unicode:: U+02228 .. LOGICAL OR
+.. |ord| unicode:: U+02A5D .. LOGICAL OR WITH HORIZONTAL DASH
+.. |order| unicode:: U+02134 .. SCRIPT SMALL O
+.. |oror| unicode:: U+02A56 .. TWO INTERSECTING LOGICAL OR
+.. |orslope| unicode:: U+02A57 .. SLOPING LARGE OR
+.. |orv| unicode:: U+02A5B .. LOGICAL OR WITH MIDDLE STEM
+.. |par| unicode:: U+02225 .. PARALLEL TO
+.. |parsl| unicode:: U+02AFD .. DOUBLE SOLIDUS OPERATOR
+.. |part| unicode:: U+02202 .. PARTIAL DIFFERENTIAL
+.. |permil| unicode:: U+02030 .. PER MILLE SIGN
+.. |perp| unicode:: U+022A5 .. UP TACK
+.. |pertenk| unicode:: U+02031 .. PER TEN THOUSAND SIGN
+.. |phmmat| unicode:: U+02133 .. SCRIPT CAPITAL M
+.. |pointint| unicode:: U+02A15 .. INTEGRAL AROUND A POINT OPERATOR
+.. |Prime| unicode:: U+02033 .. DOUBLE PRIME
+.. |prime| unicode:: U+02032 .. PRIME
+.. |profalar| unicode:: U+0232E .. ALL AROUND-PROFILE
+.. |profline| unicode:: U+02312 .. ARC
+.. |profsurf| unicode:: U+02313 .. SEGMENT
+.. |prop| unicode:: U+0221D .. PROPORTIONAL TO
+.. |qint| unicode:: U+02A0C .. QUADRUPLE INTEGRAL OPERATOR
+.. |qprime| unicode:: U+02057 .. QUADRUPLE PRIME
+.. |quatint| unicode:: U+02A16 .. QUATERNION INTEGRAL OPERATOR
+.. |radic| unicode:: U+0221A .. SQUARE ROOT
+.. |Rang| unicode:: U+0300B .. RIGHT DOUBLE ANGLE BRACKET
+.. |rang| unicode:: U+0232A .. RIGHT-POINTING ANGLE BRACKET
+.. |rArr| unicode:: U+021D2 .. RIGHTWARDS DOUBLE ARROW
+.. |rbbrk| unicode:: U+03015 .. RIGHT TORTOISE SHELL BRACKET
+.. |roang| unicode:: U+03019 .. RIGHT WHITE TORTOISE SHELL BRACKET
+.. |robrk| unicode:: U+0301B .. RIGHT WHITE SQUARE BRACKET
+.. |ropar| unicode:: U+02986 .. RIGHT WHITE PARENTHESIS
+.. |rppolint| unicode:: U+02A12 .. LINE INTEGRATION WITH RECTANGULAR PATH AROUND POLE
+.. |scpolint| unicode:: U+02A13 .. LINE INTEGRATION WITH SEMICIRCULAR PATH AROUND POLE
+.. |sim| unicode:: U+0223C .. TILDE OPERATOR
+.. |simdot| unicode:: U+02A6A .. TILDE OPERATOR WITH DOT ABOVE
+.. |sime| unicode:: U+02243 .. ASYMPTOTICALLY EQUAL TO
+.. |smeparsl| unicode:: U+029E4 .. EQUALS SIGN AND SLANTED PARALLEL WITH TILDE ABOVE
+.. |square| unicode:: U+025A1 .. WHITE SQUARE
+.. |squarf| unicode:: U+025AA .. BLACK SMALL SQUARE
+.. |strns| unicode:: U+000AF .. MACRON
+.. |sub| unicode:: U+02282 .. SUBSET OF
+.. |sube| unicode:: U+02286 .. SUBSET OF OR EQUAL TO
+.. |sup| unicode:: U+02283 .. SUPERSET OF
+.. |supe| unicode:: U+02287 .. SUPERSET OF OR EQUAL TO
+.. |tdot| unicode:: U+020DB .. COMBINING THREE DOTS ABOVE
+.. |there4| unicode:: U+02234 .. THEREFORE
+.. |tint| unicode:: U+0222D .. TRIPLE INTEGRAL
+.. |top| unicode:: U+022A4 .. DOWN TACK
+.. |topbot| unicode:: U+02336 .. APL FUNCTIONAL SYMBOL I-BEAM
+.. |topcir| unicode:: U+02AF1 .. DOWN TACK WITH CIRCLE BELOW
+.. |tprime| unicode:: U+02034 .. TRIPLE PRIME
+.. |utdot| unicode:: U+022F0 .. UP RIGHT DIAGONAL ELLIPSIS
+.. |uwangle| unicode:: U+029A7 .. OBLIQUE ANGLE OPENING DOWN
+.. |vangrt| unicode:: U+0299C .. RIGHT ANGLE VARIANT WITH SQUARE
+.. |veeeq| unicode:: U+0225A .. EQUIANGULAR TO
+.. |Verbar| unicode:: U+02016 .. DOUBLE VERTICAL LINE
+.. |wedgeq| unicode:: U+02259 .. ESTIMATES
+.. |xnis| unicode:: U+022FB .. CONTAINS WITH VERTICAL BAR AT END OF HORIZONTAL STROKE
diff --git a/python/helpers/docutils/parsers/rst/include/mmlalias.txt b/python/helpers/docutils/parsers/rst/include/mmlalias.txt
new file mode 100644
index 0000000..cabc54a
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/mmlalias.txt
@@ -0,0 +1,554 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |angle| unicode:: U+02220 .. ANGLE
+.. |ApplyFunction| unicode:: U+02061 .. FUNCTION APPLICATION
+.. |approx| unicode:: U+02248 .. ALMOST EQUAL TO
+.. |approxeq| unicode:: U+0224A .. ALMOST EQUAL OR EQUAL TO
+.. |Assign| unicode:: U+02254 .. COLON EQUALS
+.. |backcong| unicode:: U+0224C .. ALL EQUAL TO
+.. |backepsilon| unicode:: U+003F6 .. GREEK REVERSED LUNATE EPSILON SYMBOL
+.. |backprime| unicode:: U+02035 .. REVERSED PRIME
+.. |backsim| unicode:: U+0223D .. REVERSED TILDE
+.. |backsimeq| unicode:: U+022CD .. REVERSED TILDE EQUALS
+.. |Backslash| unicode:: U+02216 .. SET MINUS
+.. |barwedge| unicode:: U+02305 .. PROJECTIVE
+.. |Because| unicode:: U+02235 .. BECAUSE
+.. |because| unicode:: U+02235 .. BECAUSE
+.. |Bernoullis| unicode:: U+0212C .. SCRIPT CAPITAL B
+.. |between| unicode:: U+0226C .. BETWEEN
+.. |bigcap| unicode:: U+022C2 .. N-ARY INTERSECTION
+.. |bigcirc| unicode:: U+025EF .. LARGE CIRCLE
+.. |bigcup| unicode:: U+022C3 .. N-ARY UNION
+.. |bigodot| unicode:: U+02A00 .. N-ARY CIRCLED DOT OPERATOR
+.. |bigoplus| unicode:: U+02A01 .. N-ARY CIRCLED PLUS OPERATOR
+.. |bigotimes| unicode:: U+02A02 .. N-ARY CIRCLED TIMES OPERATOR
+.. |bigsqcup| unicode:: U+02A06 .. N-ARY SQUARE UNION OPERATOR
+.. |bigstar| unicode:: U+02605 .. BLACK STAR
+.. |bigtriangledown| unicode:: U+025BD .. WHITE DOWN-POINTING TRIANGLE
+.. |bigtriangleup| unicode:: U+025B3 .. WHITE UP-POINTING TRIANGLE
+.. |biguplus| unicode:: U+02A04 .. N-ARY UNION OPERATOR WITH PLUS
+.. |bigvee| unicode:: U+022C1 .. N-ARY LOGICAL OR
+.. |bigwedge| unicode:: U+022C0 .. N-ARY LOGICAL AND
+.. |bkarow| unicode:: U+0290D .. RIGHTWARDS DOUBLE DASH ARROW
+.. |blacklozenge| unicode:: U+029EB .. BLACK LOZENGE
+.. |blacksquare| unicode:: U+025AA .. BLACK SMALL SQUARE
+.. |blacktriangle| unicode:: U+025B4 .. BLACK UP-POINTING SMALL TRIANGLE
+.. |blacktriangledown| unicode:: U+025BE .. BLACK DOWN-POINTING SMALL TRIANGLE
+.. |blacktriangleleft| unicode:: U+025C2 .. BLACK LEFT-POINTING SMALL TRIANGLE
+.. |blacktriangleright| unicode:: U+025B8 .. BLACK RIGHT-POINTING SMALL TRIANGLE
+.. |bot| unicode:: U+022A5 .. UP TACK
+.. |boxminus| unicode:: U+0229F .. SQUARED MINUS
+.. |boxplus| unicode:: U+0229E .. SQUARED PLUS
+.. |boxtimes| unicode:: U+022A0 .. SQUARED TIMES
+.. |Breve| unicode:: U+002D8 .. BREVE
+.. |bullet| unicode:: U+02022 .. BULLET
+.. |Bumpeq| unicode:: U+0224E .. GEOMETRICALLY EQUIVALENT TO
+.. |bumpeq| unicode:: U+0224F .. DIFFERENCE BETWEEN
+.. |CapitalDifferentialD| unicode:: U+02145 .. DOUBLE-STRUCK ITALIC CAPITAL D
+.. |Cayleys| unicode:: U+0212D .. BLACK-LETTER CAPITAL C
+.. |Cedilla| unicode:: U+000B8 .. CEDILLA
+.. |CenterDot| unicode:: U+000B7 .. MIDDLE DOT
+.. |centerdot| unicode:: U+000B7 .. MIDDLE DOT
+.. |checkmark| unicode:: U+02713 .. CHECK MARK
+.. |circeq| unicode:: U+02257 .. RING EQUAL TO
+.. |circlearrowleft| unicode:: U+021BA .. ANTICLOCKWISE OPEN CIRCLE ARROW
+.. |circlearrowright| unicode:: U+021BB .. CLOCKWISE OPEN CIRCLE ARROW
+.. |circledast| unicode:: U+0229B .. CIRCLED ASTERISK OPERATOR
+.. |circledcirc| unicode:: U+0229A .. CIRCLED RING OPERATOR
+.. |circleddash| unicode:: U+0229D .. CIRCLED DASH
+.. |CircleDot| unicode:: U+02299 .. CIRCLED DOT OPERATOR
+.. |circledR| unicode:: U+000AE .. REGISTERED SIGN
+.. |circledS| unicode:: U+024C8 .. CIRCLED LATIN CAPITAL LETTER S
+.. |CircleMinus| unicode:: U+02296 .. CIRCLED MINUS
+.. |CirclePlus| unicode:: U+02295 .. CIRCLED PLUS
+.. |CircleTimes| unicode:: U+02297 .. CIRCLED TIMES
+.. |ClockwiseContourIntegral| unicode:: U+02232 .. CLOCKWISE CONTOUR INTEGRAL
+.. |CloseCurlyDoubleQuote| unicode:: U+0201D .. RIGHT DOUBLE QUOTATION MARK
+.. |CloseCurlyQuote| unicode:: U+02019 .. RIGHT SINGLE QUOTATION MARK
+.. |clubsuit| unicode:: U+02663 .. BLACK CLUB SUIT
+.. |coloneq| unicode:: U+02254 .. COLON EQUALS
+.. |complement| unicode:: U+02201 .. COMPLEMENT
+.. |complexes| unicode:: U+02102 .. DOUBLE-STRUCK CAPITAL C
+.. |Congruent| unicode:: U+02261 .. IDENTICAL TO
+.. |ContourIntegral| unicode:: U+0222E .. CONTOUR INTEGRAL
+.. |Coproduct| unicode:: U+02210 .. N-ARY COPRODUCT
+.. |CounterClockwiseContourIntegral| unicode:: U+02233 .. ANTICLOCKWISE CONTOUR INTEGRAL
+.. |CupCap| unicode:: U+0224D .. EQUIVALENT TO
+.. |curlyeqprec| unicode:: U+022DE .. EQUAL TO OR PRECEDES
+.. |curlyeqsucc| unicode:: U+022DF .. EQUAL TO OR SUCCEEDS
+.. |curlyvee| unicode:: U+022CE .. CURLY LOGICAL OR
+.. |curlywedge| unicode:: U+022CF .. CURLY LOGICAL AND
+.. |curvearrowleft| unicode:: U+021B6 .. ANTICLOCKWISE TOP SEMICIRCLE ARROW
+.. |curvearrowright| unicode:: U+021B7 .. CLOCKWISE TOP SEMICIRCLE ARROW
+.. |dbkarow| unicode:: U+0290F .. RIGHTWARDS TRIPLE DASH ARROW
+.. |ddagger| unicode:: U+02021 .. DOUBLE DAGGER
+.. |ddotseq| unicode:: U+02A77 .. EQUALS SIGN WITH TWO DOTS ABOVE AND TWO DOTS BELOW
+.. |Del| unicode:: U+02207 .. NABLA
+.. |DiacriticalAcute| unicode:: U+000B4 .. ACUTE ACCENT
+.. |DiacriticalDot| unicode:: U+002D9 .. DOT ABOVE
+.. |DiacriticalDoubleAcute| unicode:: U+002DD .. DOUBLE ACUTE ACCENT
+.. |DiacriticalGrave| unicode:: U+00060 .. GRAVE ACCENT
+.. |DiacriticalTilde| unicode:: U+002DC .. SMALL TILDE
+.. |Diamond| unicode:: U+022C4 .. DIAMOND OPERATOR
+.. |diamond| unicode:: U+022C4 .. DIAMOND OPERATOR
+.. |diamondsuit| unicode:: U+02666 .. BLACK DIAMOND SUIT
+.. |DifferentialD| unicode:: U+02146 .. DOUBLE-STRUCK ITALIC SMALL D
+.. |digamma| unicode:: U+003DD .. GREEK SMALL LETTER DIGAMMA
+.. |div| unicode:: U+000F7 .. DIVISION SIGN
+.. |divideontimes| unicode:: U+022C7 .. DIVISION TIMES
+.. |doteq| unicode:: U+02250 .. APPROACHES THE LIMIT
+.. |doteqdot| unicode:: U+02251 .. GEOMETRICALLY EQUAL TO
+.. |DotEqual| unicode:: U+02250 .. APPROACHES THE LIMIT
+.. |dotminus| unicode:: U+02238 .. DOT MINUS
+.. |dotplus| unicode:: U+02214 .. DOT PLUS
+.. |dotsquare| unicode:: U+022A1 .. SQUARED DOT OPERATOR
+.. |doublebarwedge| unicode:: U+02306 .. PERSPECTIVE
+.. |DoubleContourIntegral| unicode:: U+0222F .. SURFACE INTEGRAL
+.. |DoubleDot| unicode:: U+000A8 .. DIAERESIS
+.. |DoubleDownArrow| unicode:: U+021D3 .. DOWNWARDS DOUBLE ARROW
+.. |DoubleLeftArrow| unicode:: U+021D0 .. LEFTWARDS DOUBLE ARROW
+.. |DoubleLeftRightArrow| unicode:: U+021D4 .. LEFT RIGHT DOUBLE ARROW
+.. |DoubleLeftTee| unicode:: U+02AE4 .. VERTICAL BAR DOUBLE LEFT TURNSTILE
+.. |DoubleLongLeftArrow| unicode:: U+027F8 .. LONG LEFTWARDS DOUBLE ARROW
+.. |DoubleLongLeftRightArrow| unicode:: U+027FA .. LONG LEFT RIGHT DOUBLE ARROW
+.. |DoubleLongRightArrow| unicode:: U+027F9 .. LONG RIGHTWARDS DOUBLE ARROW
+.. |DoubleRightArrow| unicode:: U+021D2 .. RIGHTWARDS DOUBLE ARROW
+.. |DoubleRightTee| unicode:: U+022A8 .. TRUE
+.. |DoubleUpArrow| unicode:: U+021D1 .. UPWARDS DOUBLE ARROW
+.. |DoubleUpDownArrow| unicode:: U+021D5 .. UP DOWN DOUBLE ARROW
+.. |DoubleVerticalBar| unicode:: U+02225 .. PARALLEL TO
+.. |DownArrow| unicode:: U+02193 .. DOWNWARDS ARROW
+.. |Downarrow| unicode:: U+021D3 .. DOWNWARDS DOUBLE ARROW
+.. |downarrow| unicode:: U+02193 .. DOWNWARDS ARROW
+.. |DownArrowUpArrow| unicode:: U+021F5 .. DOWNWARDS ARROW LEFTWARDS OF UPWARDS ARROW
+.. |downdownarrows| unicode:: U+021CA .. DOWNWARDS PAIRED ARROWS
+.. |downharpoonleft| unicode:: U+021C3 .. DOWNWARDS HARPOON WITH BARB LEFTWARDS
+.. |downharpoonright| unicode:: U+021C2 .. DOWNWARDS HARPOON WITH BARB RIGHTWARDS
+.. |DownLeftVector| unicode:: U+021BD .. LEFTWARDS HARPOON WITH BARB DOWNWARDS
+.. |DownRightVector| unicode:: U+021C1 .. RIGHTWARDS HARPOON WITH BARB DOWNWARDS
+.. |DownTee| unicode:: U+022A4 .. DOWN TACK
+.. |DownTeeArrow| unicode:: U+021A7 .. DOWNWARDS ARROW FROM BAR
+.. |drbkarow| unicode:: U+02910 .. RIGHTWARDS TWO-HEADED TRIPLE DASH ARROW
+.. |Element| unicode:: U+02208 .. ELEMENT OF
+.. |emptyset| unicode:: U+02205 .. EMPTY SET
+.. |eqcirc| unicode:: U+02256 .. RING IN EQUAL TO
+.. |eqcolon| unicode:: U+02255 .. EQUALS COLON
+.. |eqsim| unicode:: U+02242 .. MINUS TILDE
+.. |eqslantgtr| unicode:: U+02A96 .. SLANTED EQUAL TO OR GREATER-THAN
+.. |eqslantless| unicode:: U+02A95 .. SLANTED EQUAL TO OR LESS-THAN
+.. |EqualTilde| unicode:: U+02242 .. MINUS TILDE
+.. |Equilibrium| unicode:: U+021CC .. RIGHTWARDS HARPOON OVER LEFTWARDS HARPOON
+.. |Exists| unicode:: U+02203 .. THERE EXISTS
+.. |expectation| unicode:: U+02130 .. SCRIPT CAPITAL E
+.. |ExponentialE| unicode:: U+02147 .. DOUBLE-STRUCK ITALIC SMALL E
+.. |exponentiale| unicode:: U+02147 .. DOUBLE-STRUCK ITALIC SMALL E
+.. |fallingdotseq| unicode:: U+02252 .. APPROXIMATELY EQUAL TO OR THE IMAGE OF
+.. |ForAll| unicode:: U+02200 .. FOR ALL
+.. |Fouriertrf| unicode:: U+02131 .. SCRIPT CAPITAL F
+.. |geq| unicode:: U+02265 .. GREATER-THAN OR EQUAL TO
+.. |geqq| unicode:: U+02267 .. GREATER-THAN OVER EQUAL TO
+.. |geqslant| unicode:: U+02A7E .. GREATER-THAN OR SLANTED EQUAL TO
+.. |gg| unicode:: U+0226B .. MUCH GREATER-THAN
+.. |ggg| unicode:: U+022D9 .. VERY MUCH GREATER-THAN
+.. |gnapprox| unicode:: U+02A8A .. GREATER-THAN AND NOT APPROXIMATE
+.. |gneq| unicode:: U+02A88 .. GREATER-THAN AND SINGLE-LINE NOT EQUAL TO
+.. |gneqq| unicode:: U+02269 .. GREATER-THAN BUT NOT EQUAL TO
+.. |GreaterEqual| unicode:: U+02265 .. GREATER-THAN OR EQUAL TO
+.. |GreaterEqualLess| unicode:: U+022DB .. GREATER-THAN EQUAL TO OR LESS-THAN
+.. |GreaterFullEqual| unicode:: U+02267 .. GREATER-THAN OVER EQUAL TO
+.. |GreaterLess| unicode:: U+02277 .. GREATER-THAN OR LESS-THAN
+.. |GreaterSlantEqual| unicode:: U+02A7E .. GREATER-THAN OR SLANTED EQUAL TO
+.. |GreaterTilde| unicode:: U+02273 .. GREATER-THAN OR EQUIVALENT TO
+.. |gtrapprox| unicode:: U+02A86 .. GREATER-THAN OR APPROXIMATE
+.. |gtrdot| unicode:: U+022D7 .. GREATER-THAN WITH DOT
+.. |gtreqless| unicode:: U+022DB .. GREATER-THAN EQUAL TO OR LESS-THAN
+.. |gtreqqless| unicode:: U+02A8C .. GREATER-THAN ABOVE DOUBLE-LINE EQUAL ABOVE LESS-THAN
+.. |gtrless| unicode:: U+02277 .. GREATER-THAN OR LESS-THAN
+.. |gtrsim| unicode:: U+02273 .. GREATER-THAN OR EQUIVALENT TO
+.. |gvertneqq| unicode:: U+02269 U+0FE00 .. GREATER-THAN BUT NOT EQUAL TO - with vertical stroke
+.. |Hacek| unicode:: U+002C7 .. CARON
+.. |hbar| unicode:: U+0210F .. PLANCK CONSTANT OVER TWO PI
+.. |heartsuit| unicode:: U+02665 .. BLACK HEART SUIT
+.. |HilbertSpace| unicode:: U+0210B .. SCRIPT CAPITAL H
+.. |hksearow| unicode:: U+02925 .. SOUTH EAST ARROW WITH HOOK
+.. |hkswarow| unicode:: U+02926 .. SOUTH WEST ARROW WITH HOOK
+.. |hookleftarrow| unicode:: U+021A9 .. LEFTWARDS ARROW WITH HOOK
+.. |hookrightarrow| unicode:: U+021AA .. RIGHTWARDS ARROW WITH HOOK
+.. |hslash| unicode:: U+0210F .. PLANCK CONSTANT OVER TWO PI
+.. |HumpDownHump| unicode:: U+0224E .. GEOMETRICALLY EQUIVALENT TO
+.. |HumpEqual| unicode:: U+0224F .. DIFFERENCE BETWEEN
+.. |iiiint| unicode:: U+02A0C .. QUADRUPLE INTEGRAL OPERATOR
+.. |iiint| unicode:: U+0222D .. TRIPLE INTEGRAL
+.. |Im| unicode:: U+02111 .. BLACK-LETTER CAPITAL I
+.. |ImaginaryI| unicode:: U+02148 .. DOUBLE-STRUCK ITALIC SMALL I
+.. |imagline| unicode:: U+02110 .. SCRIPT CAPITAL I
+.. |imagpart| unicode:: U+02111 .. BLACK-LETTER CAPITAL I
+.. |Implies| unicode:: U+021D2 .. RIGHTWARDS DOUBLE ARROW
+.. |in| unicode:: U+02208 .. ELEMENT OF
+.. |integers| unicode:: U+02124 .. DOUBLE-STRUCK CAPITAL Z
+.. |Integral| unicode:: U+0222B .. INTEGRAL
+.. |intercal| unicode:: U+022BA .. INTERCALATE
+.. |Intersection| unicode:: U+022C2 .. N-ARY INTERSECTION
+.. |intprod| unicode:: U+02A3C .. INTERIOR PRODUCT
+.. |InvisibleComma| unicode:: U+02063 .. INVISIBLE SEPARATOR
+.. |InvisibleTimes| unicode:: U+02062 .. INVISIBLE TIMES
+.. |langle| unicode:: U+02329 .. LEFT-POINTING ANGLE BRACKET
+.. |Laplacetrf| unicode:: U+02112 .. SCRIPT CAPITAL L
+.. |lbrace| unicode:: U+0007B .. LEFT CURLY BRACKET
+.. |lbrack| unicode:: U+0005B .. LEFT SQUARE BRACKET
+.. |LeftAngleBracket| unicode:: U+02329 .. LEFT-POINTING ANGLE BRACKET
+.. |LeftArrow| unicode:: U+02190 .. LEFTWARDS ARROW
+.. |Leftarrow| unicode:: U+021D0 .. LEFTWARDS DOUBLE ARROW
+.. |leftarrow| unicode:: U+02190 .. LEFTWARDS ARROW
+.. |LeftArrowBar| unicode:: U+021E4 .. LEFTWARDS ARROW TO BAR
+.. |LeftArrowRightArrow| unicode:: U+021C6 .. LEFTWARDS ARROW OVER RIGHTWARDS ARROW
+.. |leftarrowtail| unicode:: U+021A2 .. LEFTWARDS ARROW WITH TAIL
+.. |LeftCeiling| unicode:: U+02308 .. LEFT CEILING
+.. |LeftDoubleBracket| unicode:: U+0301A .. LEFT WHITE SQUARE BRACKET
+.. |LeftDownVector| unicode:: U+021C3 .. DOWNWARDS HARPOON WITH BARB LEFTWARDS
+.. |LeftFloor| unicode:: U+0230A .. LEFT FLOOR
+.. |leftharpoondown| unicode:: U+021BD .. LEFTWARDS HARPOON WITH BARB DOWNWARDS
+.. |leftharpoonup| unicode:: U+021BC .. LEFTWARDS HARPOON WITH BARB UPWARDS
+.. |leftleftarrows| unicode:: U+021C7 .. LEFTWARDS PAIRED ARROWS
+.. |LeftRightArrow| unicode:: U+02194 .. LEFT RIGHT ARROW
+.. |Leftrightarrow| unicode:: U+021D4 .. LEFT RIGHT DOUBLE ARROW
+.. |leftrightarrow| unicode:: U+02194 .. LEFT RIGHT ARROW
+.. |leftrightarrows| unicode:: U+021C6 .. LEFTWARDS ARROW OVER RIGHTWARDS ARROW
+.. |leftrightharpoons| unicode:: U+021CB .. LEFTWARDS HARPOON OVER RIGHTWARDS HARPOON
+.. |leftrightsquigarrow| unicode:: U+021AD .. LEFT RIGHT WAVE ARROW
+.. |LeftTee| unicode:: U+022A3 .. LEFT TACK
+.. |LeftTeeArrow| unicode:: U+021A4 .. LEFTWARDS ARROW FROM BAR
+.. |leftthreetimes| unicode:: U+022CB .. LEFT SEMIDIRECT PRODUCT
+.. |LeftTriangle| unicode:: U+022B2 .. NORMAL SUBGROUP OF
+.. |LeftTriangleEqual| unicode:: U+022B4 .. NORMAL SUBGROUP OF OR EQUAL TO
+.. |LeftUpVector| unicode:: U+021BF .. UPWARDS HARPOON WITH BARB LEFTWARDS
+.. |LeftVector| unicode:: U+021BC .. LEFTWARDS HARPOON WITH BARB UPWARDS
+.. |leq| unicode:: U+02264 .. LESS-THAN OR EQUAL TO
+.. |leqq| unicode:: U+02266 .. LESS-THAN OVER EQUAL TO
+.. |leqslant| unicode:: U+02A7D .. LESS-THAN OR SLANTED EQUAL TO
+.. |lessapprox| unicode:: U+02A85 .. LESS-THAN OR APPROXIMATE
+.. |lessdot| unicode:: U+022D6 .. LESS-THAN WITH DOT
+.. |lesseqgtr| unicode:: U+022DA .. LESS-THAN EQUAL TO OR GREATER-THAN
+.. |lesseqqgtr| unicode:: U+02A8B .. LESS-THAN ABOVE DOUBLE-LINE EQUAL ABOVE GREATER-THAN
+.. |LessEqualGreater| unicode:: U+022DA .. LESS-THAN EQUAL TO OR GREATER-THAN
+.. |LessFullEqual| unicode:: U+02266 .. LESS-THAN OVER EQUAL TO
+.. |LessGreater| unicode:: U+02276 .. LESS-THAN OR GREATER-THAN
+.. |lessgtr| unicode:: U+02276 .. LESS-THAN OR GREATER-THAN
+.. |lesssim| unicode:: U+02272 .. LESS-THAN OR EQUIVALENT TO
+.. |LessSlantEqual| unicode:: U+02A7D .. LESS-THAN OR SLANTED EQUAL TO
+.. |LessTilde| unicode:: U+02272 .. LESS-THAN OR EQUIVALENT TO
+.. |ll| unicode:: U+0226A .. MUCH LESS-THAN
+.. |llcorner| unicode:: U+0231E .. BOTTOM LEFT CORNER
+.. |Lleftarrow| unicode:: U+021DA .. LEFTWARDS TRIPLE ARROW
+.. |lmoustache| unicode:: U+023B0 .. UPPER LEFT OR LOWER RIGHT CURLY BRACKET SECTION
+.. |lnapprox| unicode:: U+02A89 .. LESS-THAN AND NOT APPROXIMATE
+.. |lneq| unicode:: U+02A87 .. LESS-THAN AND SINGLE-LINE NOT EQUAL TO
+.. |lneqq| unicode:: U+02268 .. LESS-THAN BUT NOT EQUAL TO
+.. |LongLeftArrow| unicode:: U+027F5 .. LONG LEFTWARDS ARROW
+.. |Longleftarrow| unicode:: U+027F8 .. LONG LEFTWARDS DOUBLE ARROW
+.. |longleftarrow| unicode:: U+027F5 .. LONG LEFTWARDS ARROW
+.. |LongLeftRightArrow| unicode:: U+027F7 .. LONG LEFT RIGHT ARROW
+.. |Longleftrightarrow| unicode:: U+027FA .. LONG LEFT RIGHT DOUBLE ARROW
+.. |longleftrightarrow| unicode:: U+027F7 .. LONG LEFT RIGHT ARROW
+.. |longmapsto| unicode:: U+027FC .. LONG RIGHTWARDS ARROW FROM BAR
+.. |LongRightArrow| unicode:: U+027F6 .. LONG RIGHTWARDS ARROW
+.. |Longrightarrow| unicode:: U+027F9 .. LONG RIGHTWARDS DOUBLE ARROW
+.. |longrightarrow| unicode:: U+027F6 .. LONG RIGHTWARDS ARROW
+.. |looparrowleft| unicode:: U+021AB .. LEFTWARDS ARROW WITH LOOP
+.. |looparrowright| unicode:: U+021AC .. RIGHTWARDS ARROW WITH LOOP
+.. |LowerLeftArrow| unicode:: U+02199 .. SOUTH WEST ARROW
+.. |LowerRightArrow| unicode:: U+02198 .. SOUTH EAST ARROW
+.. |lozenge| unicode:: U+025CA .. LOZENGE
+.. |lrcorner| unicode:: U+0231F .. BOTTOM RIGHT CORNER
+.. |Lsh| unicode:: U+021B0 .. UPWARDS ARROW WITH TIP LEFTWARDS
+.. |lvertneqq| unicode:: U+02268 U+0FE00 .. LESS-THAN BUT NOT EQUAL TO - with vertical stroke
+.. |maltese| unicode:: U+02720 .. MALTESE CROSS
+.. |mapsto| unicode:: U+021A6 .. RIGHTWARDS ARROW FROM BAR
+.. |measuredangle| unicode:: U+02221 .. MEASURED ANGLE
+.. |Mellintrf| unicode:: U+02133 .. SCRIPT CAPITAL M
+.. |MinusPlus| unicode:: U+02213 .. MINUS-OR-PLUS SIGN
+.. |mp| unicode:: U+02213 .. MINUS-OR-PLUS SIGN
+.. |multimap| unicode:: U+022B8 .. MULTIMAP
+.. |napprox| unicode:: U+02249 .. NOT ALMOST EQUAL TO
+.. |natural| unicode:: U+0266E .. MUSIC NATURAL SIGN
+.. |naturals| unicode:: U+02115 .. DOUBLE-STRUCK CAPITAL N
+.. |nearrow| unicode:: U+02197 .. NORTH EAST ARROW
+.. |NegativeMediumSpace| unicode:: U+0200B .. ZERO WIDTH SPACE
+.. |NegativeThickSpace| unicode:: U+0200B .. ZERO WIDTH SPACE
+.. |NegativeThinSpace| unicode:: U+0200B .. ZERO WIDTH SPACE
+.. |NegativeVeryThinSpace| unicode:: U+0200B .. ZERO WIDTH SPACE
+.. |NestedGreaterGreater| unicode:: U+0226B .. MUCH GREATER-THAN
+.. |NestedLessLess| unicode:: U+0226A .. MUCH LESS-THAN
+.. |nexists| unicode:: U+02204 .. THERE DOES NOT EXIST
+.. |ngeq| unicode:: U+02271 .. NEITHER GREATER-THAN NOR EQUAL TO
+.. |ngeqq| unicode:: U+02267 U+00338 .. GREATER-THAN OVER EQUAL TO with slash
+.. |ngeqslant| unicode:: U+02A7E U+00338 .. GREATER-THAN OR SLANTED EQUAL TO with slash
+.. |ngtr| unicode:: U+0226F .. NOT GREATER-THAN
+.. |nLeftarrow| unicode:: U+021CD .. LEFTWARDS DOUBLE ARROW WITH STROKE
+.. |nleftarrow| unicode:: U+0219A .. LEFTWARDS ARROW WITH STROKE
+.. |nLeftrightarrow| unicode:: U+021CE .. LEFT RIGHT DOUBLE ARROW WITH STROKE
+.. |nleftrightarrow| unicode:: U+021AE .. LEFT RIGHT ARROW WITH STROKE
+.. |nleq| unicode:: U+02270 .. NEITHER LESS-THAN NOR EQUAL TO
+.. |nleqq| unicode:: U+02266 U+00338 .. LESS-THAN OVER EQUAL TO with slash
+.. |nleqslant| unicode:: U+02A7D U+00338 .. LESS-THAN OR SLANTED EQUAL TO with slash
+.. |nless| unicode:: U+0226E .. NOT LESS-THAN
+.. |NonBreakingSpace| unicode:: U+000A0 .. NO-BREAK SPACE
+.. |NotCongruent| unicode:: U+02262 .. NOT IDENTICAL TO
+.. |NotDoubleVerticalBar| unicode:: U+02226 .. NOT PARALLEL TO
+.. |NotElement| unicode:: U+02209 .. NOT AN ELEMENT OF
+.. |NotEqual| unicode:: U+02260 .. NOT EQUAL TO
+.. |NotEqualTilde| unicode:: U+02242 U+00338 .. MINUS TILDE with slash
+.. |NotExists| unicode:: U+02204 .. THERE DOES NOT EXIST
+.. |NotGreater| unicode:: U+0226F .. NOT GREATER-THAN
+.. |NotGreaterEqual| unicode:: U+02271 .. NEITHER GREATER-THAN NOR EQUAL TO
+.. |NotGreaterFullEqual| unicode:: U+02266 U+00338 .. LESS-THAN OVER EQUAL TO with slash
+.. |NotGreaterGreater| unicode:: U+0226B U+00338 .. MUCH GREATER THAN with slash
+.. |NotGreaterLess| unicode:: U+02279 .. NEITHER GREATER-THAN NOR LESS-THAN
+.. |NotGreaterSlantEqual| unicode:: U+02A7E U+00338 .. GREATER-THAN OR SLANTED EQUAL TO with slash
+.. |NotGreaterTilde| unicode:: U+02275 .. NEITHER GREATER-THAN NOR EQUIVALENT TO
+.. |NotHumpDownHump| unicode:: U+0224E U+00338 .. GEOMETRICALLY EQUIVALENT TO with slash
+.. |NotLeftTriangle| unicode:: U+022EA .. NOT NORMAL SUBGROUP OF
+.. |NotLeftTriangleEqual| unicode:: U+022EC .. NOT NORMAL SUBGROUP OF OR EQUAL TO
+.. |NotLess| unicode:: U+0226E .. NOT LESS-THAN
+.. |NotLessEqual| unicode:: U+02270 .. NEITHER LESS-THAN NOR EQUAL TO
+.. |NotLessGreater| unicode:: U+02278 .. NEITHER LESS-THAN NOR GREATER-THAN
+.. |NotLessLess| unicode:: U+0226A U+00338 .. MUCH LESS THAN with slash
+.. |NotLessSlantEqual| unicode:: U+02A7D U+00338 .. LESS-THAN OR SLANTED EQUAL TO with slash
+.. |NotLessTilde| unicode:: U+02274 .. NEITHER LESS-THAN NOR EQUIVALENT TO
+.. |NotPrecedes| unicode:: U+02280 .. DOES NOT PRECEDE
+.. |NotPrecedesEqual| unicode:: U+02AAF U+00338 .. PRECEDES ABOVE SINGLE-LINE EQUALS SIGN with slash
+.. |NotPrecedesSlantEqual| unicode:: U+022E0 .. DOES NOT PRECEDE OR EQUAL
+.. |NotReverseElement| unicode:: U+0220C .. DOES NOT CONTAIN AS MEMBER
+.. |NotRightTriangle| unicode:: U+022EB .. DOES NOT CONTAIN AS NORMAL SUBGROUP
+.. |NotRightTriangleEqual| unicode:: U+022ED .. DOES NOT CONTAIN AS NORMAL SUBGROUP OR EQUAL
+.. |NotSquareSubsetEqual| unicode:: U+022E2 .. NOT SQUARE IMAGE OF OR EQUAL TO
+.. |NotSquareSupersetEqual| unicode:: U+022E3 .. NOT SQUARE ORIGINAL OF OR EQUAL TO
+.. |NotSubset| unicode:: U+02282 U+020D2 .. SUBSET OF with vertical line
+.. |NotSubsetEqual| unicode:: U+02288 .. NEITHER A SUBSET OF NOR EQUAL TO
+.. |NotSucceeds| unicode:: U+02281 .. DOES NOT SUCCEED
+.. |NotSucceedsEqual| unicode:: U+02AB0 U+00338 .. SUCCEEDS ABOVE SINGLE-LINE EQUALS SIGN with slash
+.. |NotSucceedsSlantEqual| unicode:: U+022E1 .. DOES NOT SUCCEED OR EQUAL
+.. |NotSuperset| unicode:: U+02283 U+020D2 .. SUPERSET OF with vertical line
+.. |NotSupersetEqual| unicode:: U+02289 .. NEITHER A SUPERSET OF NOR EQUAL TO
+.. |NotTilde| unicode:: U+02241 .. NOT TILDE
+.. |NotTildeEqual| unicode:: U+02244 .. NOT ASYMPTOTICALLY EQUAL TO
+.. |NotTildeFullEqual| unicode:: U+02247 .. NEITHER APPROXIMATELY NOR ACTUALLY EQUAL TO
+.. |NotTildeTilde| unicode:: U+02249 .. NOT ALMOST EQUAL TO
+.. |NotVerticalBar| unicode:: U+02224 .. DOES NOT DIVIDE
+.. |nparallel| unicode:: U+02226 .. NOT PARALLEL TO
+.. |nprec| unicode:: U+02280 .. DOES NOT PRECEDE
+.. |npreceq| unicode:: U+02AAF U+00338 .. PRECEDES ABOVE SINGLE-LINE EQUALS SIGN with slash
+.. |nRightarrow| unicode:: U+021CF .. RIGHTWARDS DOUBLE ARROW WITH STROKE
+.. |nrightarrow| unicode:: U+0219B .. RIGHTWARDS ARROW WITH STROKE
+.. |nshortmid| unicode:: U+02224 .. DOES NOT DIVIDE
+.. |nshortparallel| unicode:: U+02226 .. NOT PARALLEL TO
+.. |nsimeq| unicode:: U+02244 .. NOT ASYMPTOTICALLY EQUAL TO
+.. |nsubset| unicode:: U+02282 U+020D2 .. SUBSET OF with vertical line
+.. |nsubseteq| unicode:: U+02288 .. NEITHER A SUBSET OF NOR EQUAL TO
+.. |nsubseteqq| unicode:: U+02AC5 U+00338 .. SUBSET OF ABOVE EQUALS SIGN with slash
+.. |nsucc| unicode:: U+02281 .. DOES NOT SUCCEED
+.. |nsucceq| unicode:: U+02AB0 U+00338 .. SUCCEEDS ABOVE SINGLE-LINE EQUALS SIGN with slash
+.. |nsupset| unicode:: U+02283 U+020D2 .. SUPERSET OF with vertical line
+.. |nsupseteq| unicode:: U+02289 .. NEITHER A SUPERSET OF NOR EQUAL TO
+.. |nsupseteqq| unicode:: U+02AC6 U+00338 .. SUPERSET OF ABOVE EQUALS SIGN with slash
+.. |ntriangleleft| unicode:: U+022EA .. NOT NORMAL SUBGROUP OF
+.. |ntrianglelefteq| unicode:: U+022EC .. NOT NORMAL SUBGROUP OF OR EQUAL TO
+.. |ntriangleright| unicode:: U+022EB .. DOES NOT CONTAIN AS NORMAL SUBGROUP
+.. |ntrianglerighteq| unicode:: U+022ED .. DOES NOT CONTAIN AS NORMAL SUBGROUP OR EQUAL
+.. |nwarrow| unicode:: U+02196 .. NORTH WEST ARROW
+.. |oint| unicode:: U+0222E .. CONTOUR INTEGRAL
+.. |OpenCurlyDoubleQuote| unicode:: U+0201C .. LEFT DOUBLE QUOTATION MARK
+.. |OpenCurlyQuote| unicode:: U+02018 .. LEFT SINGLE QUOTATION MARK
+.. |orderof| unicode:: U+02134 .. SCRIPT SMALL O
+.. |parallel| unicode:: U+02225 .. PARALLEL TO
+.. |PartialD| unicode:: U+02202 .. PARTIAL DIFFERENTIAL
+.. |pitchfork| unicode:: U+022D4 .. PITCHFORK
+.. |PlusMinus| unicode:: U+000B1 .. PLUS-MINUS SIGN
+.. |pm| unicode:: U+000B1 .. PLUS-MINUS SIGN
+.. |Poincareplane| unicode:: U+0210C .. BLACK-LETTER CAPITAL H
+.. |prec| unicode:: U+0227A .. PRECEDES
+.. |precapprox| unicode:: U+02AB7 .. PRECEDES ABOVE ALMOST EQUAL TO
+.. |preccurlyeq| unicode:: U+0227C .. PRECEDES OR EQUAL TO
+.. |Precedes| unicode:: U+0227A .. PRECEDES
+.. |PrecedesEqual| unicode:: U+02AAF .. PRECEDES ABOVE SINGLE-LINE EQUALS SIGN
+.. |PrecedesSlantEqual| unicode:: U+0227C .. PRECEDES OR EQUAL TO
+.. |PrecedesTilde| unicode:: U+0227E .. PRECEDES OR EQUIVALENT TO
+.. |preceq| unicode:: U+02AAF .. PRECEDES ABOVE SINGLE-LINE EQUALS SIGN
+.. |precnapprox| unicode:: U+02AB9 .. PRECEDES ABOVE NOT ALMOST EQUAL TO
+.. |precneqq| unicode:: U+02AB5 .. PRECEDES ABOVE NOT EQUAL TO
+.. |precnsim| unicode:: U+022E8 .. PRECEDES BUT NOT EQUIVALENT TO
+.. |precsim| unicode:: U+0227E .. PRECEDES OR EQUIVALENT TO
+.. |primes| unicode:: U+02119 .. DOUBLE-STRUCK CAPITAL P
+.. |Proportion| unicode:: U+02237 .. PROPORTION
+.. |Proportional| unicode:: U+0221D .. PROPORTIONAL TO
+.. |propto| unicode:: U+0221D .. PROPORTIONAL TO
+.. |quaternions| unicode:: U+0210D .. DOUBLE-STRUCK CAPITAL H
+.. |questeq| unicode:: U+0225F .. QUESTIONED EQUAL TO
+.. |rangle| unicode:: U+0232A .. RIGHT-POINTING ANGLE BRACKET
+.. |rationals| unicode:: U+0211A .. DOUBLE-STRUCK CAPITAL Q
+.. |rbrace| unicode:: U+0007D .. RIGHT CURLY BRACKET
+.. |rbrack| unicode:: U+0005D .. RIGHT SQUARE BRACKET
+.. |Re| unicode:: U+0211C .. BLACK-LETTER CAPITAL R
+.. |realine| unicode:: U+0211B .. SCRIPT CAPITAL R
+.. |realpart| unicode:: U+0211C .. BLACK-LETTER CAPITAL R
+.. |reals| unicode:: U+0211D .. DOUBLE-STRUCK CAPITAL R
+.. |ReverseElement| unicode:: U+0220B .. CONTAINS AS MEMBER
+.. |ReverseEquilibrium| unicode:: U+021CB .. LEFTWARDS HARPOON OVER RIGHTWARDS HARPOON
+.. |ReverseUpEquilibrium| unicode:: U+0296F .. DOWNWARDS HARPOON WITH BARB LEFT BESIDE UPWARDS HARPOON WITH BARB RIGHT
+.. |RightAngleBracket| unicode:: U+0232A .. RIGHT-POINTING ANGLE BRACKET
+.. |RightArrow| unicode:: U+02192 .. RIGHTWARDS ARROW
+.. |Rightarrow| unicode:: U+021D2 .. RIGHTWARDS DOUBLE ARROW
+.. |rightarrow| unicode:: U+02192 .. RIGHTWARDS ARROW
+.. |RightArrowBar| unicode:: U+021E5 .. RIGHTWARDS ARROW TO BAR
+.. |RightArrowLeftArrow| unicode:: U+021C4 .. RIGHTWARDS ARROW OVER LEFTWARDS ARROW
+.. |rightarrowtail| unicode:: U+021A3 .. RIGHTWARDS ARROW WITH TAIL
+.. |RightCeiling| unicode:: U+02309 .. RIGHT CEILING
+.. |RightDoubleBracket| unicode:: U+0301B .. RIGHT WHITE SQUARE BRACKET
+.. |RightDownVector| unicode:: U+021C2 .. DOWNWARDS HARPOON WITH BARB RIGHTWARDS
+.. |RightFloor| unicode:: U+0230B .. RIGHT FLOOR
+.. |rightharpoondown| unicode:: U+021C1 .. RIGHTWARDS HARPOON WITH BARB DOWNWARDS
+.. |rightharpoonup| unicode:: U+021C0 .. RIGHTWARDS HARPOON WITH BARB UPWARDS
+.. |rightleftarrows| unicode:: U+021C4 .. RIGHTWARDS ARROW OVER LEFTWARDS ARROW
+.. |rightleftharpoons| unicode:: U+021CC .. RIGHTWARDS HARPOON OVER LEFTWARDS HARPOON
+.. |rightrightarrows| unicode:: U+021C9 .. RIGHTWARDS PAIRED ARROWS
+.. |rightsquigarrow| unicode:: U+0219D .. RIGHTWARDS WAVE ARROW
+.. |RightTee| unicode:: U+022A2 .. RIGHT TACK
+.. |RightTeeArrow| unicode:: U+021A6 .. RIGHTWARDS ARROW FROM BAR
+.. |rightthreetimes| unicode:: U+022CC .. RIGHT SEMIDIRECT PRODUCT
+.. |RightTriangle| unicode:: U+022B3 .. CONTAINS AS NORMAL SUBGROUP
+.. |RightTriangleEqual| unicode:: U+022B5 .. CONTAINS AS NORMAL SUBGROUP OR EQUAL TO
+.. |RightUpVector| unicode:: U+021BE .. UPWARDS HARPOON WITH BARB RIGHTWARDS
+.. |RightVector| unicode:: U+021C0 .. RIGHTWARDS HARPOON WITH BARB UPWARDS
+.. |risingdotseq| unicode:: U+02253 .. IMAGE OF OR APPROXIMATELY EQUAL TO
+.. |rmoustache| unicode:: U+023B1 .. UPPER RIGHT OR LOWER LEFT CURLY BRACKET SECTION
+.. |Rrightarrow| unicode:: U+021DB .. RIGHTWARDS TRIPLE ARROW
+.. |Rsh| unicode:: U+021B1 .. UPWARDS ARROW WITH TIP RIGHTWARDS
+.. |searrow| unicode:: U+02198 .. SOUTH EAST ARROW
+.. |setminus| unicode:: U+02216 .. SET MINUS
+.. |ShortDownArrow| unicode:: U+02193 .. DOWNWARDS ARROW
+.. |ShortLeftArrow| unicode:: U+02190 .. LEFTWARDS ARROW
+.. |shortmid| unicode:: U+02223 .. DIVIDES
+.. |shortparallel| unicode:: U+02225 .. PARALLEL TO
+.. |ShortRightArrow| unicode:: U+02192 .. RIGHTWARDS ARROW
+.. |ShortUpArrow| unicode:: U+02191 .. UPWARDS ARROW
+.. |simeq| unicode:: U+02243 .. ASYMPTOTICALLY EQUAL TO
+.. |SmallCircle| unicode:: U+02218 .. RING OPERATOR
+.. |smallsetminus| unicode:: U+02216 .. SET MINUS
+.. |spadesuit| unicode:: U+02660 .. BLACK SPADE SUIT
+.. |Sqrt| unicode:: U+0221A .. SQUARE ROOT
+.. |sqsubset| unicode:: U+0228F .. SQUARE IMAGE OF
+.. |sqsubseteq| unicode:: U+02291 .. SQUARE IMAGE OF OR EQUAL TO
+.. |sqsupset| unicode:: U+02290 .. SQUARE ORIGINAL OF
+.. |sqsupseteq| unicode:: U+02292 .. SQUARE ORIGINAL OF OR EQUAL TO
+.. |Square| unicode:: U+025A1 .. WHITE SQUARE
+.. |SquareIntersection| unicode:: U+02293 .. SQUARE CAP
+.. |SquareSubset| unicode:: U+0228F .. SQUARE IMAGE OF
+.. |SquareSubsetEqual| unicode:: U+02291 .. SQUARE IMAGE OF OR EQUAL TO
+.. |SquareSuperset| unicode:: U+02290 .. SQUARE ORIGINAL OF
+.. |SquareSupersetEqual| unicode:: U+02292 .. SQUARE ORIGINAL OF OR EQUAL TO
+.. |SquareUnion| unicode:: U+02294 .. SQUARE CUP
+.. |Star| unicode:: U+022C6 .. STAR OPERATOR
+.. |straightepsilon| unicode:: U+003F5 .. GREEK LUNATE EPSILON SYMBOL
+.. |straightphi| unicode:: U+003D5 .. GREEK PHI SYMBOL
+.. |Subset| unicode:: U+022D0 .. DOUBLE SUBSET
+.. |subset| unicode:: U+02282 .. SUBSET OF
+.. |subseteq| unicode:: U+02286 .. SUBSET OF OR EQUAL TO
+.. |subseteqq| unicode:: U+02AC5 .. SUBSET OF ABOVE EQUALS SIGN
+.. |SubsetEqual| unicode:: U+02286 .. SUBSET OF OR EQUAL TO
+.. |subsetneq| unicode:: U+0228A .. SUBSET OF WITH NOT EQUAL TO
+.. |subsetneqq| unicode:: U+02ACB .. SUBSET OF ABOVE NOT EQUAL TO
+.. |succ| unicode:: U+0227B .. SUCCEEDS
+.. |succapprox| unicode:: U+02AB8 .. SUCCEEDS ABOVE ALMOST EQUAL TO
+.. |succcurlyeq| unicode:: U+0227D .. SUCCEEDS OR EQUAL TO
+.. |Succeeds| unicode:: U+0227B .. SUCCEEDS
+.. |SucceedsEqual| unicode:: U+02AB0 .. SUCCEEDS ABOVE SINGLE-LINE EQUALS SIGN
+.. |SucceedsSlantEqual| unicode:: U+0227D .. SUCCEEDS OR EQUAL TO
+.. |SucceedsTilde| unicode:: U+0227F .. SUCCEEDS OR EQUIVALENT TO
+.. |succeq| unicode:: U+02AB0 .. SUCCEEDS ABOVE SINGLE-LINE EQUALS SIGN
+.. |succnapprox| unicode:: U+02ABA .. SUCCEEDS ABOVE NOT ALMOST EQUAL TO
+.. |succneqq| unicode:: U+02AB6 .. SUCCEEDS ABOVE NOT EQUAL TO
+.. |succnsim| unicode:: U+022E9 .. SUCCEEDS BUT NOT EQUIVALENT TO
+.. |succsim| unicode:: U+0227F .. SUCCEEDS OR EQUIVALENT TO
+.. |SuchThat| unicode:: U+0220B .. CONTAINS AS MEMBER
+.. |Sum| unicode:: U+02211 .. N-ARY SUMMATION
+.. |Superset| unicode:: U+02283 .. SUPERSET OF
+.. |SupersetEqual| unicode:: U+02287 .. SUPERSET OF OR EQUAL TO
+.. |Supset| unicode:: U+022D1 .. DOUBLE SUPERSET
+.. |supset| unicode:: U+02283 .. SUPERSET OF
+.. |supseteq| unicode:: U+02287 .. SUPERSET OF OR EQUAL TO
+.. |supseteqq| unicode:: U+02AC6 .. SUPERSET OF ABOVE EQUALS SIGN
+.. |supsetneq| unicode:: U+0228B .. SUPERSET OF WITH NOT EQUAL TO
+.. |supsetneqq| unicode:: U+02ACC .. SUPERSET OF ABOVE NOT EQUAL TO
+.. |swarrow| unicode:: U+02199 .. SOUTH WEST ARROW
+.. |Therefore| unicode:: U+02234 .. THEREFORE
+.. |therefore| unicode:: U+02234 .. THEREFORE
+.. |thickapprox| unicode:: U+02248 .. ALMOST EQUAL TO
+.. |thicksim| unicode:: U+0223C .. TILDE OPERATOR
+.. |ThinSpace| unicode:: U+02009 .. THIN SPACE
+.. |Tilde| unicode:: U+0223C .. TILDE OPERATOR
+.. |TildeEqual| unicode:: U+02243 .. ASYMPTOTICALLY EQUAL TO
+.. |TildeFullEqual| unicode:: U+02245 .. APPROXIMATELY EQUAL TO
+.. |TildeTilde| unicode:: U+02248 .. ALMOST EQUAL TO
+.. |toea| unicode:: U+02928 .. NORTH EAST ARROW AND SOUTH EAST ARROW
+.. |tosa| unicode:: U+02929 .. SOUTH EAST ARROW AND SOUTH WEST ARROW
+.. |triangle| unicode:: U+025B5 .. WHITE UP-POINTING SMALL TRIANGLE
+.. |triangledown| unicode:: U+025BF .. WHITE DOWN-POINTING SMALL TRIANGLE
+.. |triangleleft| unicode:: U+025C3 .. WHITE LEFT-POINTING SMALL TRIANGLE
+.. |trianglelefteq| unicode:: U+022B4 .. NORMAL SUBGROUP OF OR EQUAL TO
+.. |triangleq| unicode:: U+0225C .. DELTA EQUAL TO
+.. |triangleright| unicode:: U+025B9 .. WHITE RIGHT-POINTING SMALL TRIANGLE
+.. |trianglerighteq| unicode:: U+022B5 .. CONTAINS AS NORMAL SUBGROUP OR EQUAL TO
+.. |TripleDot| unicode:: U+020DB .. COMBINING THREE DOTS ABOVE
+.. |twoheadleftarrow| unicode:: U+0219E .. LEFTWARDS TWO HEADED ARROW
+.. |twoheadrightarrow| unicode:: U+021A0 .. RIGHTWARDS TWO HEADED ARROW
+.. |ulcorner| unicode:: U+0231C .. TOP LEFT CORNER
+.. |Union| unicode:: U+022C3 .. N-ARY UNION
+.. |UnionPlus| unicode:: U+0228E .. MULTISET UNION
+.. |UpArrow| unicode:: U+02191 .. UPWARDS ARROW
+.. |Uparrow| unicode:: U+021D1 .. UPWARDS DOUBLE ARROW
+.. |uparrow| unicode:: U+02191 .. UPWARDS ARROW
+.. |UpArrowDownArrow| unicode:: U+021C5 .. UPWARDS ARROW LEFTWARDS OF DOWNWARDS ARROW
+.. |UpDownArrow| unicode:: U+02195 .. UP DOWN ARROW
+.. |Updownarrow| unicode:: U+021D5 .. UP DOWN DOUBLE ARROW
+.. |updownarrow| unicode:: U+02195 .. UP DOWN ARROW
+.. |UpEquilibrium| unicode:: U+0296E .. UPWARDS HARPOON WITH BARB LEFT BESIDE DOWNWARDS HARPOON WITH BARB RIGHT
+.. |upharpoonleft| unicode:: U+021BF .. UPWARDS HARPOON WITH BARB LEFTWARDS
+.. |upharpoonright| unicode:: U+021BE .. UPWARDS HARPOON WITH BARB RIGHTWARDS
+.. |UpperLeftArrow| unicode:: U+02196 .. NORTH WEST ARROW
+.. |UpperRightArrow| unicode:: U+02197 .. NORTH EAST ARROW
+.. |upsilon| unicode:: U+003C5 .. GREEK SMALL LETTER UPSILON
+.. |UpTee| unicode:: U+022A5 .. UP TACK
+.. |UpTeeArrow| unicode:: U+021A5 .. UPWARDS ARROW FROM BAR
+.. |upuparrows| unicode:: U+021C8 .. UPWARDS PAIRED ARROWS
+.. |urcorner| unicode:: U+0231D .. TOP RIGHT CORNER
+.. |varepsilon| unicode:: U+003B5 .. GREEK SMALL LETTER EPSILON
+.. |varkappa| unicode:: U+003F0 .. GREEK KAPPA SYMBOL
+.. |varnothing| unicode:: U+02205 .. EMPTY SET
+.. |varphi| unicode:: U+003C6 .. GREEK SMALL LETTER PHI
+.. |varpi| unicode:: U+003D6 .. GREEK PI SYMBOL
+.. |varpropto| unicode:: U+0221D .. PROPORTIONAL TO
+.. |varrho| unicode:: U+003F1 .. GREEK RHO SYMBOL
+.. |varsigma| unicode:: U+003C2 .. GREEK SMALL LETTER FINAL SIGMA
+.. |varsubsetneq| unicode:: U+0228A U+0FE00 .. SUBSET OF WITH NOT EQUAL TO - variant with stroke through bottom members
+.. |varsubsetneqq| unicode:: U+02ACB U+0FE00 .. SUBSET OF ABOVE NOT EQUAL TO - variant with stroke through bottom members
+.. |varsupsetneq| unicode:: U+0228B U+0FE00 .. SUPERSET OF WITH NOT EQUAL TO - variant with stroke through bottom members
+.. |varsupsetneqq| unicode:: U+02ACC U+0FE00 .. SUPERSET OF ABOVE NOT EQUAL TO - variant with stroke through bottom members
+.. |vartheta| unicode:: U+003D1 .. GREEK THETA SYMBOL
+.. |vartriangleleft| unicode:: U+022B2 .. NORMAL SUBGROUP OF
+.. |vartriangleright| unicode:: U+022B3 .. CONTAINS AS NORMAL SUBGROUP
+.. |Vee| unicode:: U+022C1 .. N-ARY LOGICAL OR
+.. |vee| unicode:: U+02228 .. LOGICAL OR
+.. |Vert| unicode:: U+02016 .. DOUBLE VERTICAL LINE
+.. |vert| unicode:: U+0007C .. VERTICAL LINE
+.. |VerticalBar| unicode:: U+02223 .. DIVIDES
+.. |VerticalTilde| unicode:: U+02240 .. WREATH PRODUCT
+.. |VeryThinSpace| unicode:: U+0200A .. HAIR SPACE
+.. |Wedge| unicode:: U+022C0 .. N-ARY LOGICAL AND
+.. |wedge| unicode:: U+02227 .. LOGICAL AND
+.. |wp| unicode:: U+02118 .. SCRIPT CAPITAL P
+.. |wr| unicode:: U+02240 .. WREATH PRODUCT
+.. |zeetrf| unicode:: U+02128 .. BLACK-LETTER CAPITAL Z
diff --git a/python/helpers/docutils/parsers/rst/include/mmlextra-wide.txt b/python/helpers/docutils/parsers/rst/include/mmlextra-wide.txt
new file mode 100644
index 0000000..0177ccc
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/mmlextra-wide.txt
@@ -0,0 +1,113 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |af| unicode:: U+02061 .. FUNCTION APPLICATION
+.. |aopf| unicode:: U+1D552 .. MATHEMATICAL DOUBLE-STRUCK SMALL A
+.. |asympeq| unicode:: U+0224D .. EQUIVALENT TO
+.. |bopf| unicode:: U+1D553 .. MATHEMATICAL DOUBLE-STRUCK SMALL B
+.. |copf| unicode:: U+1D554 .. MATHEMATICAL DOUBLE-STRUCK SMALL C
+.. |Cross| unicode:: U+02A2F .. VECTOR OR CROSS PRODUCT
+.. |DD| unicode:: U+02145 .. DOUBLE-STRUCK ITALIC CAPITAL D
+.. |dd| unicode:: U+02146 .. DOUBLE-STRUCK ITALIC SMALL D
+.. |dopf| unicode:: U+1D555 .. MATHEMATICAL DOUBLE-STRUCK SMALL D
+.. |DownArrowBar| unicode:: U+02913 .. DOWNWARDS ARROW TO BAR
+.. |DownBreve| unicode:: U+00311 .. COMBINING INVERTED BREVE
+.. |DownLeftRightVector| unicode:: U+02950 .. LEFT BARB DOWN RIGHT BARB DOWN HARPOON
+.. |DownLeftTeeVector| unicode:: U+0295E .. LEFTWARDS HARPOON WITH BARB DOWN FROM BAR
+.. |DownLeftVectorBar| unicode:: U+02956 .. LEFTWARDS HARPOON WITH BARB DOWN TO BAR
+.. |DownRightTeeVector| unicode:: U+0295F .. RIGHTWARDS HARPOON WITH BARB DOWN FROM BAR
+.. |DownRightVectorBar| unicode:: U+02957 .. RIGHTWARDS HARPOON WITH BARB DOWN TO BAR
+.. |ee| unicode:: U+02147 .. DOUBLE-STRUCK ITALIC SMALL E
+.. |EmptySmallSquare| unicode:: U+025FB .. WHITE MEDIUM SQUARE
+.. |EmptyVerySmallSquare| unicode:: U+025AB .. WHITE SMALL SQUARE
+.. |eopf| unicode:: U+1D556 .. MATHEMATICAL DOUBLE-STRUCK SMALL E
+.. |Equal| unicode:: U+02A75 .. TWO CONSECUTIVE EQUALS SIGNS
+.. |FilledSmallSquare| unicode:: U+025FC .. BLACK MEDIUM SQUARE
+.. |FilledVerySmallSquare| unicode:: U+025AA .. BLACK SMALL SQUARE
+.. |fopf| unicode:: U+1D557 .. MATHEMATICAL DOUBLE-STRUCK SMALL F
+.. |gopf| unicode:: U+1D558 .. MATHEMATICAL DOUBLE-STRUCK SMALL G
+.. |GreaterGreater| unicode:: U+02AA2 .. DOUBLE NESTED GREATER-THAN
+.. |Hat| unicode:: U+0005E .. CIRCUMFLEX ACCENT
+.. |hopf| unicode:: U+1D559 .. MATHEMATICAL DOUBLE-STRUCK SMALL H
+.. |HorizontalLine| unicode:: U+02500 .. BOX DRAWINGS LIGHT HORIZONTAL
+.. |ic| unicode:: U+02063 .. INVISIBLE SEPARATOR
+.. |ii| unicode:: U+02148 .. DOUBLE-STRUCK ITALIC SMALL I
+.. |iopf| unicode:: U+1D55A .. MATHEMATICAL DOUBLE-STRUCK SMALL I
+.. |it| unicode:: U+02062 .. INVISIBLE TIMES
+.. |jopf| unicode:: U+1D55B .. MATHEMATICAL DOUBLE-STRUCK SMALL J
+.. |kopf| unicode:: U+1D55C .. MATHEMATICAL DOUBLE-STRUCK SMALL K
+.. |larrb| unicode:: U+021E4 .. LEFTWARDS ARROW TO BAR
+.. |LeftDownTeeVector| unicode:: U+02961 .. DOWNWARDS HARPOON WITH BARB LEFT FROM BAR
+.. |LeftDownVectorBar| unicode:: U+02959 .. DOWNWARDS HARPOON WITH BARB LEFT TO BAR
+.. |LeftRightVector| unicode:: U+0294E .. LEFT BARB UP RIGHT BARB UP HARPOON
+.. |LeftTeeVector| unicode:: U+0295A .. LEFTWARDS HARPOON WITH BARB UP FROM BAR
+.. |LeftTriangleBar| unicode:: U+029CF .. LEFT TRIANGLE BESIDE VERTICAL BAR
+.. |LeftUpDownVector| unicode:: U+02951 .. UP BARB LEFT DOWN BARB LEFT HARPOON
+.. |LeftUpTeeVector| unicode:: U+02960 .. UPWARDS HARPOON WITH BARB LEFT FROM BAR
+.. |LeftUpVectorBar| unicode:: U+02958 .. UPWARDS HARPOON WITH BARB LEFT TO BAR
+.. |LeftVectorBar| unicode:: U+02952 .. LEFTWARDS HARPOON WITH BARB UP TO BAR
+.. |LessLess| unicode:: U+02AA1 .. DOUBLE NESTED LESS-THAN
+.. |lopf| unicode:: U+1D55D .. MATHEMATICAL DOUBLE-STRUCK SMALL L
+.. |mapstodown| unicode:: U+021A7 .. DOWNWARDS ARROW FROM BAR
+.. |mapstoleft| unicode:: U+021A4 .. LEFTWARDS ARROW FROM BAR
+.. |mapstoup| unicode:: U+021A5 .. UPWARDS ARROW FROM BAR
+.. |MediumSpace| unicode:: U+0205F .. MEDIUM MATHEMATICAL SPACE
+.. |mopf| unicode:: U+1D55E .. MATHEMATICAL DOUBLE-STRUCK SMALL M
+.. |nbump| unicode:: U+0224E U+00338 .. GEOMETRICALLY EQUIVALENT TO with slash
+.. |nbumpe| unicode:: U+0224F U+00338 .. DIFFERENCE BETWEEN with slash
+.. |nesim| unicode:: U+02242 U+00338 .. MINUS TILDE with slash
+.. |NewLine| unicode:: U+0000A .. LINE FEED (LF)
+.. |NoBreak| unicode:: U+02060 .. WORD JOINER
+.. |nopf| unicode:: U+1D55F .. MATHEMATICAL DOUBLE-STRUCK SMALL N
+.. |NotCupCap| unicode:: U+0226D .. NOT EQUIVALENT TO
+.. |NotHumpEqual| unicode:: U+0224F U+00338 .. DIFFERENCE BETWEEN with slash
+.. |NotLeftTriangleBar| unicode:: U+029CF U+00338 .. LEFT TRIANGLE BESIDE VERTICAL BAR with slash
+.. |NotNestedGreaterGreater| unicode:: U+02AA2 U+00338 .. DOUBLE NESTED GREATER-THAN with slash
+.. |NotNestedLessLess| unicode:: U+02AA1 U+00338 .. DOUBLE NESTED LESS-THAN with slash
+.. |NotRightTriangleBar| unicode:: U+029D0 U+00338 .. VERTICAL BAR BESIDE RIGHT TRIANGLE with slash
+.. |NotSquareSubset| unicode:: U+0228F U+00338 .. SQUARE IMAGE OF with slash
+.. |NotSquareSuperset| unicode:: U+02290 U+00338 .. SQUARE ORIGINAL OF with slash
+.. |NotSucceedsTilde| unicode:: U+0227F U+00338 .. SUCCEEDS OR EQUIVALENT TO with slash
+.. |oopf| unicode:: U+1D560 .. MATHEMATICAL DOUBLE-STRUCK SMALL O
+.. |OverBar| unicode:: U+000AF .. MACRON
+.. |OverBrace| unicode:: U+0FE37 .. PRESENTATION FORM FOR VERTICAL LEFT CURLY BRACKET
+.. |OverBracket| unicode:: U+023B4 .. TOP SQUARE BRACKET
+.. |OverParenthesis| unicode:: U+0FE35 .. PRESENTATION FORM FOR VERTICAL LEFT PARENTHESIS
+.. |planckh| unicode:: U+0210E .. PLANCK CONSTANT
+.. |popf| unicode:: U+1D561 .. MATHEMATICAL DOUBLE-STRUCK SMALL P
+.. |Product| unicode:: U+0220F .. N-ARY PRODUCT
+.. |qopf| unicode:: U+1D562 .. MATHEMATICAL DOUBLE-STRUCK SMALL Q
+.. |rarrb| unicode:: U+021E5 .. RIGHTWARDS ARROW TO BAR
+.. |RightDownTeeVector| unicode:: U+0295D .. DOWNWARDS HARPOON WITH BARB RIGHT FROM BAR
+.. |RightDownVectorBar| unicode:: U+02955 .. DOWNWARDS HARPOON WITH BARB RIGHT TO BAR
+.. |RightTeeVector| unicode:: U+0295B .. RIGHTWARDS HARPOON WITH BARB UP FROM BAR
+.. |RightTriangleBar| unicode:: U+029D0 .. VERTICAL BAR BESIDE RIGHT TRIANGLE
+.. |RightUpDownVector| unicode:: U+0294F .. UP BARB RIGHT DOWN BARB RIGHT HARPOON
+.. |RightUpTeeVector| unicode:: U+0295C .. UPWARDS HARPOON WITH BARB RIGHT FROM BAR
+.. |RightUpVectorBar| unicode:: U+02954 .. UPWARDS HARPOON WITH BARB RIGHT TO BAR
+.. |RightVectorBar| unicode:: U+02953 .. RIGHTWARDS HARPOON WITH BARB UP TO BAR
+.. |ropf| unicode:: U+1D563 .. MATHEMATICAL DOUBLE-STRUCK SMALL R
+.. |RoundImplies| unicode:: U+02970 .. RIGHT DOUBLE ARROW WITH ROUNDED HEAD
+.. |RuleDelayed| unicode:: U+029F4 .. RULE-DELAYED
+.. |sopf| unicode:: U+1D564 .. MATHEMATICAL DOUBLE-STRUCK SMALL S
+.. |Tab| unicode:: U+00009 .. CHARACTER TABULATION
+.. |ThickSpace| unicode:: U+02009 U+0200A U+0200A .. space of width 5/18 em
+.. |topf| unicode:: U+1D565 .. MATHEMATICAL DOUBLE-STRUCK SMALL T
+.. |UnderBar| unicode:: U+00332 .. COMBINING LOW LINE
+.. |UnderBrace| unicode:: U+0FE38 .. PRESENTATION FORM FOR VERTICAL RIGHT CURLY BRACKET
+.. |UnderBracket| unicode:: U+023B5 .. BOTTOM SQUARE BRACKET
+.. |UnderParenthesis| unicode:: U+0FE36 .. PRESENTATION FORM FOR VERTICAL RIGHT PARENTHESIS
+.. |uopf| unicode:: U+1D566 .. MATHEMATICAL DOUBLE-STRUCK SMALL U
+.. |UpArrowBar| unicode:: U+02912 .. UPWARDS ARROW TO BAR
+.. |Upsilon| unicode:: U+003A5 .. GREEK CAPITAL LETTER UPSILON
+.. |VerticalLine| unicode:: U+0007C .. VERTICAL LINE
+.. |VerticalSeparator| unicode:: U+02758 .. LIGHT VERTICAL BAR
+.. |vopf| unicode:: U+1D567 .. MATHEMATICAL DOUBLE-STRUCK SMALL V
+.. |wopf| unicode:: U+1D568 .. MATHEMATICAL DOUBLE-STRUCK SMALL W
+.. |xopf| unicode:: U+1D569 .. MATHEMATICAL DOUBLE-STRUCK SMALL X
+.. |yopf| unicode:: U+1D56A .. MATHEMATICAL DOUBLE-STRUCK SMALL Y
+.. |ZeroWidthSpace| unicode:: U+0200B .. ZERO WIDTH SPACE
+.. |zopf| unicode:: U+1D56B .. MATHEMATICAL DOUBLE-STRUCK SMALL Z
diff --git a/python/helpers/docutils/parsers/rst/include/mmlextra.txt b/python/helpers/docutils/parsers/rst/include/mmlextra.txt
new file mode 100644
index 0000000..790a977
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/mmlextra.txt
@@ -0,0 +1,87 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |af| unicode:: U+02061 .. FUNCTION APPLICATION
+.. |asympeq| unicode:: U+0224D .. EQUIVALENT TO
+.. |Cross| unicode:: U+02A2F .. VECTOR OR CROSS PRODUCT
+.. |DD| unicode:: U+02145 .. DOUBLE-STRUCK ITALIC CAPITAL D
+.. |dd| unicode:: U+02146 .. DOUBLE-STRUCK ITALIC SMALL D
+.. |DownArrowBar| unicode:: U+02913 .. DOWNWARDS ARROW TO BAR
+.. |DownBreve| unicode:: U+00311 .. COMBINING INVERTED BREVE
+.. |DownLeftRightVector| unicode:: U+02950 .. LEFT BARB DOWN RIGHT BARB DOWN HARPOON
+.. |DownLeftTeeVector| unicode:: U+0295E .. LEFTWARDS HARPOON WITH BARB DOWN FROM BAR
+.. |DownLeftVectorBar| unicode:: U+02956 .. LEFTWARDS HARPOON WITH BARB DOWN TO BAR
+.. |DownRightTeeVector| unicode:: U+0295F .. RIGHTWARDS HARPOON WITH BARB DOWN FROM BAR
+.. |DownRightVectorBar| unicode:: U+02957 .. RIGHTWARDS HARPOON WITH BARB DOWN TO BAR
+.. |ee| unicode:: U+02147 .. DOUBLE-STRUCK ITALIC SMALL E
+.. |EmptySmallSquare| unicode:: U+025FB .. WHITE MEDIUM SQUARE
+.. |EmptyVerySmallSquare| unicode:: U+025AB .. WHITE SMALL SQUARE
+.. |Equal| unicode:: U+02A75 .. TWO CONSECUTIVE EQUALS SIGNS
+.. |FilledSmallSquare| unicode:: U+025FC .. BLACK MEDIUM SQUARE
+.. |FilledVerySmallSquare| unicode:: U+025AA .. BLACK SMALL SQUARE
+.. |GreaterGreater| unicode:: U+02AA2 .. DOUBLE NESTED GREATER-THAN
+.. |Hat| unicode:: U+0005E .. CIRCUMFLEX ACCENT
+.. |HorizontalLine| unicode:: U+02500 .. BOX DRAWINGS LIGHT HORIZONTAL
+.. |ic| unicode:: U+02063 .. INVISIBLE SEPARATOR
+.. |ii| unicode:: U+02148 .. DOUBLE-STRUCK ITALIC SMALL I
+.. |it| unicode:: U+02062 .. INVISIBLE TIMES
+.. |larrb| unicode:: U+021E4 .. LEFTWARDS ARROW TO BAR
+.. |LeftDownTeeVector| unicode:: U+02961 .. DOWNWARDS HARPOON WITH BARB LEFT FROM BAR
+.. |LeftDownVectorBar| unicode:: U+02959 .. DOWNWARDS HARPOON WITH BARB LEFT TO BAR
+.. |LeftRightVector| unicode:: U+0294E .. LEFT BARB UP RIGHT BARB UP HARPOON
+.. |LeftTeeVector| unicode:: U+0295A .. LEFTWARDS HARPOON WITH BARB UP FROM BAR
+.. |LeftTriangleBar| unicode:: U+029CF .. LEFT TRIANGLE BESIDE VERTICAL BAR
+.. |LeftUpDownVector| unicode:: U+02951 .. UP BARB LEFT DOWN BARB LEFT HARPOON
+.. |LeftUpTeeVector| unicode:: U+02960 .. UPWARDS HARPOON WITH BARB LEFT FROM BAR
+.. |LeftUpVectorBar| unicode:: U+02958 .. UPWARDS HARPOON WITH BARB LEFT TO BAR
+.. |LeftVectorBar| unicode:: U+02952 .. LEFTWARDS HARPOON WITH BARB UP TO BAR
+.. |LessLess| unicode:: U+02AA1 .. DOUBLE NESTED LESS-THAN
+.. |mapstodown| unicode:: U+021A7 .. DOWNWARDS ARROW FROM BAR
+.. |mapstoleft| unicode:: U+021A4 .. LEFTWARDS ARROW FROM BAR
+.. |mapstoup| unicode:: U+021A5 .. UPWARDS ARROW FROM BAR
+.. |MediumSpace| unicode:: U+0205F .. MEDIUM MATHEMATICAL SPACE
+.. |nbump| unicode:: U+0224E U+00338 .. GEOMETRICALLY EQUIVALENT TO with slash
+.. |nbumpe| unicode:: U+0224F U+00338 .. DIFFERENCE BETWEEN with slash
+.. |nesim| unicode:: U+02242 U+00338 .. MINUS TILDE with slash
+.. |NewLine| unicode:: U+0000A .. LINE FEED (LF)
+.. |NoBreak| unicode:: U+02060 .. WORD JOINER
+.. |NotCupCap| unicode:: U+0226D .. NOT EQUIVALENT TO
+.. |NotHumpEqual| unicode:: U+0224F U+00338 .. DIFFERENCE BETWEEN with slash
+.. |NotLeftTriangleBar| unicode:: U+029CF U+00338 .. LEFT TRIANGLE BESIDE VERTICAL BAR with slash
+.. |NotNestedGreaterGreater| unicode:: U+02AA2 U+00338 .. DOUBLE NESTED GREATER-THAN with slash
+.. |NotNestedLessLess| unicode:: U+02AA1 U+00338 .. DOUBLE NESTED LESS-THAN with slash
+.. |NotRightTriangleBar| unicode:: U+029D0 U+00338 .. VERTICAL BAR BESIDE RIGHT TRIANGLE with slash
+.. |NotSquareSubset| unicode:: U+0228F U+00338 .. SQUARE IMAGE OF with slash
+.. |NotSquareSuperset| unicode:: U+02290 U+00338 .. SQUARE ORIGINAL OF with slash
+.. |NotSucceedsTilde| unicode:: U+0227F U+00338 .. SUCCEEDS OR EQUIVALENT TO with slash
+.. |OverBar| unicode:: U+000AF .. MACRON
+.. |OverBrace| unicode:: U+0FE37 .. PRESENTATION FORM FOR VERTICAL LEFT CURLY BRACKET
+.. |OverBracket| unicode:: U+023B4 .. TOP SQUARE BRACKET
+.. |OverParenthesis| unicode:: U+0FE35 .. PRESENTATION FORM FOR VERTICAL LEFT PARENTHESIS
+.. |planckh| unicode:: U+0210E .. PLANCK CONSTANT
+.. |Product| unicode:: U+0220F .. N-ARY PRODUCT
+.. |rarrb| unicode:: U+021E5 .. RIGHTWARDS ARROW TO BAR
+.. |RightDownTeeVector| unicode:: U+0295D .. DOWNWARDS HARPOON WITH BARB RIGHT FROM BAR
+.. |RightDownVectorBar| unicode:: U+02955 .. DOWNWARDS HARPOON WITH BARB RIGHT TO BAR
+.. |RightTeeVector| unicode:: U+0295B .. RIGHTWARDS HARPOON WITH BARB UP FROM BAR
+.. |RightTriangleBar| unicode:: U+029D0 .. VERTICAL BAR BESIDE RIGHT TRIANGLE
+.. |RightUpDownVector| unicode:: U+0294F .. UP BARB RIGHT DOWN BARB RIGHT HARPOON
+.. |RightUpTeeVector| unicode:: U+0295C .. UPWARDS HARPOON WITH BARB RIGHT FROM BAR
+.. |RightUpVectorBar| unicode:: U+02954 .. UPWARDS HARPOON WITH BARB RIGHT TO BAR
+.. |RightVectorBar| unicode:: U+02953 .. RIGHTWARDS HARPOON WITH BARB UP TO BAR
+.. |RoundImplies| unicode:: U+02970 .. RIGHT DOUBLE ARROW WITH ROUNDED HEAD
+.. |RuleDelayed| unicode:: U+029F4 .. RULE-DELAYED
+.. |Tab| unicode:: U+00009 .. CHARACTER TABULATION
+.. |ThickSpace| unicode:: U+02009 U+0200A U+0200A .. space of width 5/18 em
+.. |UnderBar| unicode:: U+00332 .. COMBINING LOW LINE
+.. |UnderBrace| unicode:: U+0FE38 .. PRESENTATION FORM FOR VERTICAL RIGHT CURLY BRACKET
+.. |UnderBracket| unicode:: U+023B5 .. BOTTOM SQUARE BRACKET
+.. |UnderParenthesis| unicode:: U+0FE36 .. PRESENTATION FORM FOR VERTICAL RIGHT PARENTHESIS
+.. |UpArrowBar| unicode:: U+02912 .. UPWARDS ARROW TO BAR
+.. |Upsilon| unicode:: U+003A5 .. GREEK CAPITAL LETTER UPSILON
+.. |VerticalLine| unicode:: U+0007C .. VERTICAL LINE
+.. |VerticalSeparator| unicode:: U+02758 .. LIGHT VERTICAL BAR
+.. |ZeroWidthSpace| unicode:: U+0200B .. ZERO WIDTH SPACE
diff --git a/python/helpers/docutils/parsers/rst/include/s5defs.txt b/python/helpers/docutils/parsers/rst/include/s5defs.txt
new file mode 100644
index 0000000..8aceeac
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/s5defs.txt
@@ -0,0 +1,68 @@
+.. Definitions of interpreted text roles (classes) for S5/HTML data.
+.. This data file has been placed in the public domain.
+
+.. Colours
+ =======
+
+.. role:: black
+.. role:: gray
+.. role:: silver
+.. role:: white
+
+.. role:: maroon
+.. role:: red
+.. role:: magenta
+.. role:: fuchsia
+.. role:: pink
+.. role:: orange
+.. role:: yellow
+.. role:: lime
+.. role:: green
+.. role:: olive
+.. role:: teal
+.. role:: cyan
+.. role:: aqua
+.. role:: blue
+.. role:: navy
+.. role:: purple
+
+
+.. Text Sizes
+ ==========
+
+.. role:: huge
+.. role:: big
+.. role:: small
+.. role:: tiny
+
+
+.. Display in Slides (Presentation Mode) Only
+ ==========================================
+
+.. role:: slide
+ :class: slide-display
+
+
+.. Display in Outline Mode Only
+ ============================
+
+.. role:: outline
+
+
+.. Display in Print Only
+ =====================
+
+.. role:: print
+
+
+.. Display in Handout Mode Only
+ ============================
+
+.. role:: handout
+
+
+.. Incremental Display
+ ===================
+
+.. role:: incremental
+.. default-role:: incremental
diff --git a/python/helpers/docutils/parsers/rst/include/xhtml1-lat1.txt b/python/helpers/docutils/parsers/rst/include/xhtml1-lat1.txt
new file mode 100644
index 0000000..824dc61
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/xhtml1-lat1.txt
@@ -0,0 +1,102 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |Aacute| unicode:: U+000C1 .. LATIN CAPITAL LETTER A WITH ACUTE
+.. |aacute| unicode:: U+000E1 .. LATIN SMALL LETTER A WITH ACUTE
+.. |Acirc| unicode:: U+000C2 .. LATIN CAPITAL LETTER A WITH CIRCUMFLEX
+.. |acirc| unicode:: U+000E2 .. LATIN SMALL LETTER A WITH CIRCUMFLEX
+.. |acute| unicode:: U+000B4 .. ACUTE ACCENT
+.. |AElig| unicode:: U+000C6 .. LATIN CAPITAL LETTER AE
+.. |aelig| unicode:: U+000E6 .. LATIN SMALL LETTER AE
+.. |Agrave| unicode:: U+000C0 .. LATIN CAPITAL LETTER A WITH GRAVE
+.. |agrave| unicode:: U+000E0 .. LATIN SMALL LETTER A WITH GRAVE
+.. |Aring| unicode:: U+000C5 .. LATIN CAPITAL LETTER A WITH RING ABOVE
+.. |aring| unicode:: U+000E5 .. LATIN SMALL LETTER A WITH RING ABOVE
+.. |Atilde| unicode:: U+000C3 .. LATIN CAPITAL LETTER A WITH TILDE
+.. |atilde| unicode:: U+000E3 .. LATIN SMALL LETTER A WITH TILDE
+.. |Auml| unicode:: U+000C4 .. LATIN CAPITAL LETTER A WITH DIAERESIS
+.. |auml| unicode:: U+000E4 .. LATIN SMALL LETTER A WITH DIAERESIS
+.. |brvbar| unicode:: U+000A6 .. BROKEN BAR
+.. |Ccedil| unicode:: U+000C7 .. LATIN CAPITAL LETTER C WITH CEDILLA
+.. |ccedil| unicode:: U+000E7 .. LATIN SMALL LETTER C WITH CEDILLA
+.. |cedil| unicode:: U+000B8 .. CEDILLA
+.. |cent| unicode:: U+000A2 .. CENT SIGN
+.. |copy| unicode:: U+000A9 .. COPYRIGHT SIGN
+.. |curren| unicode:: U+000A4 .. CURRENCY SIGN
+.. |deg| unicode:: U+000B0 .. DEGREE SIGN
+.. |divide| unicode:: U+000F7 .. DIVISION SIGN
+.. |Eacute| unicode:: U+000C9 .. LATIN CAPITAL LETTER E WITH ACUTE
+.. |eacute| unicode:: U+000E9 .. LATIN SMALL LETTER E WITH ACUTE
+.. |Ecirc| unicode:: U+000CA .. LATIN CAPITAL LETTER E WITH CIRCUMFLEX
+.. |ecirc| unicode:: U+000EA .. LATIN SMALL LETTER E WITH CIRCUMFLEX
+.. |Egrave| unicode:: U+000C8 .. LATIN CAPITAL LETTER E WITH GRAVE
+.. |egrave| unicode:: U+000E8 .. LATIN SMALL LETTER E WITH GRAVE
+.. |ETH| unicode:: U+000D0 .. LATIN CAPITAL LETTER ETH
+.. |eth| unicode:: U+000F0 .. LATIN SMALL LETTER ETH
+.. |Euml| unicode:: U+000CB .. LATIN CAPITAL LETTER E WITH DIAERESIS
+.. |euml| unicode:: U+000EB .. LATIN SMALL LETTER E WITH DIAERESIS
+.. |frac12| unicode:: U+000BD .. VULGAR FRACTION ONE HALF
+.. |frac14| unicode:: U+000BC .. VULGAR FRACTION ONE QUARTER
+.. |frac34| unicode:: U+000BE .. VULGAR FRACTION THREE QUARTERS
+.. |Iacute| unicode:: U+000CD .. LATIN CAPITAL LETTER I WITH ACUTE
+.. |iacute| unicode:: U+000ED .. LATIN SMALL LETTER I WITH ACUTE
+.. |Icirc| unicode:: U+000CE .. LATIN CAPITAL LETTER I WITH CIRCUMFLEX
+.. |icirc| unicode:: U+000EE .. LATIN SMALL LETTER I WITH CIRCUMFLEX
+.. |iexcl| unicode:: U+000A1 .. INVERTED EXCLAMATION MARK
+.. |Igrave| unicode:: U+000CC .. LATIN CAPITAL LETTER I WITH GRAVE
+.. |igrave| unicode:: U+000EC .. LATIN SMALL LETTER I WITH GRAVE
+.. |iquest| unicode:: U+000BF .. INVERTED QUESTION MARK
+.. |Iuml| unicode:: U+000CF .. LATIN CAPITAL LETTER I WITH DIAERESIS
+.. |iuml| unicode:: U+000EF .. LATIN SMALL LETTER I WITH DIAERESIS
+.. |laquo| unicode:: U+000AB .. LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
+.. |macr| unicode:: U+000AF .. MACRON
+.. |micro| unicode:: U+000B5 .. MICRO SIGN
+.. |middot| unicode:: U+000B7 .. MIDDLE DOT
+.. |nbsp| unicode:: U+000A0 .. NO-BREAK SPACE
+.. |not| unicode:: U+000AC .. NOT SIGN
+.. |Ntilde| unicode:: U+000D1 .. LATIN CAPITAL LETTER N WITH TILDE
+.. |ntilde| unicode:: U+000F1 .. LATIN SMALL LETTER N WITH TILDE
+.. |Oacute| unicode:: U+000D3 .. LATIN CAPITAL LETTER O WITH ACUTE
+.. |oacute| unicode:: U+000F3 .. LATIN SMALL LETTER O WITH ACUTE
+.. |Ocirc| unicode:: U+000D4 .. LATIN CAPITAL LETTER O WITH CIRCUMFLEX
+.. |ocirc| unicode:: U+000F4 .. LATIN SMALL LETTER O WITH CIRCUMFLEX
+.. |Ograve| unicode:: U+000D2 .. LATIN CAPITAL LETTER O WITH GRAVE
+.. |ograve| unicode:: U+000F2 .. LATIN SMALL LETTER O WITH GRAVE
+.. |ordf| unicode:: U+000AA .. FEMININE ORDINAL INDICATOR
+.. |ordm| unicode:: U+000BA .. MASCULINE ORDINAL INDICATOR
+.. |Oslash| unicode:: U+000D8 .. LATIN CAPITAL LETTER O WITH STROKE
+.. |oslash| unicode:: U+000F8 .. LATIN SMALL LETTER O WITH STROKE
+.. |Otilde| unicode:: U+000D5 .. LATIN CAPITAL LETTER O WITH TILDE
+.. |otilde| unicode:: U+000F5 .. LATIN SMALL LETTER O WITH TILDE
+.. |Ouml| unicode:: U+000D6 .. LATIN CAPITAL LETTER O WITH DIAERESIS
+.. |ouml| unicode:: U+000F6 .. LATIN SMALL LETTER O WITH DIAERESIS
+.. |para| unicode:: U+000B6 .. PILCROW SIGN
+.. |plusmn| unicode:: U+000B1 .. PLUS-MINUS SIGN
+.. |pound| unicode:: U+000A3 .. POUND SIGN
+.. |raquo| unicode:: U+000BB .. RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
+.. |reg| unicode:: U+000AE .. REGISTERED SIGN
+.. |sect| unicode:: U+000A7 .. SECTION SIGN
+.. |shy| unicode:: U+000AD .. SOFT HYPHEN
+.. |sup1| unicode:: U+000B9 .. SUPERSCRIPT ONE
+.. |sup2| unicode:: U+000B2 .. SUPERSCRIPT TWO
+.. |sup3| unicode:: U+000B3 .. SUPERSCRIPT THREE
+.. |szlig| unicode:: U+000DF .. LATIN SMALL LETTER SHARP S
+.. |THORN| unicode:: U+000DE .. LATIN CAPITAL LETTER THORN
+.. |thorn| unicode:: U+000FE .. LATIN SMALL LETTER THORN
+.. |times| unicode:: U+000D7 .. MULTIPLICATION SIGN
+.. |Uacute| unicode:: U+000DA .. LATIN CAPITAL LETTER U WITH ACUTE
+.. |uacute| unicode:: U+000FA .. LATIN SMALL LETTER U WITH ACUTE
+.. |Ucirc| unicode:: U+000DB .. LATIN CAPITAL LETTER U WITH CIRCUMFLEX
+.. |ucirc| unicode:: U+000FB .. LATIN SMALL LETTER U WITH CIRCUMFLEX
+.. |Ugrave| unicode:: U+000D9 .. LATIN CAPITAL LETTER U WITH GRAVE
+.. |ugrave| unicode:: U+000F9 .. LATIN SMALL LETTER U WITH GRAVE
+.. |uml| unicode:: U+000A8 .. DIAERESIS
+.. |Uuml| unicode:: U+000DC .. LATIN CAPITAL LETTER U WITH DIAERESIS
+.. |uuml| unicode:: U+000FC .. LATIN SMALL LETTER U WITH DIAERESIS
+.. |Yacute| unicode:: U+000DD .. LATIN CAPITAL LETTER Y WITH ACUTE
+.. |yacute| unicode:: U+000FD .. LATIN SMALL LETTER Y WITH ACUTE
+.. |yen| unicode:: U+000A5 .. YEN SIGN
+.. |yuml| unicode:: U+000FF .. LATIN SMALL LETTER Y WITH DIAERESIS
diff --git a/python/helpers/docutils/parsers/rst/include/xhtml1-special.txt b/python/helpers/docutils/parsers/rst/include/xhtml1-special.txt
new file mode 100644
index 0000000..dc6f5753
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/xhtml1-special.txt
@@ -0,0 +1,37 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |bdquo| unicode:: U+0201E .. DOUBLE LOW-9 QUOTATION MARK
+.. |circ| unicode:: U+002C6 .. MODIFIER LETTER CIRCUMFLEX ACCENT
+.. |Dagger| unicode:: U+02021 .. DOUBLE DAGGER
+.. |dagger| unicode:: U+02020 .. DAGGER
+.. |emsp| unicode:: U+02003 .. EM SPACE
+.. |ensp| unicode:: U+02002 .. EN SPACE
+.. |euro| unicode:: U+020AC .. EURO SIGN
+.. |gt| unicode:: U+0003E .. GREATER-THAN SIGN
+.. |ldquo| unicode:: U+0201C .. LEFT DOUBLE QUOTATION MARK
+.. |lrm| unicode:: U+0200E .. LEFT-TO-RIGHT MARK
+.. |lsaquo| unicode:: U+02039 .. SINGLE LEFT-POINTING ANGLE QUOTATION MARK
+.. |lsquo| unicode:: U+02018 .. LEFT SINGLE QUOTATION MARK
+.. |lt| unicode:: U+0003C .. LESS-THAN SIGN
+.. |mdash| unicode:: U+02014 .. EM DASH
+.. |ndash| unicode:: U+02013 .. EN DASH
+.. |OElig| unicode:: U+00152 .. LATIN CAPITAL LIGATURE OE
+.. |oelig| unicode:: U+00153 .. LATIN SMALL LIGATURE OE
+.. |permil| unicode:: U+02030 .. PER MILLE SIGN
+.. |quot| unicode:: U+00022 .. QUOTATION MARK
+.. |rdquo| unicode:: U+0201D .. RIGHT DOUBLE QUOTATION MARK
+.. |rlm| unicode:: U+0200F .. RIGHT-TO-LEFT MARK
+.. |rsaquo| unicode:: U+0203A .. SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
+.. |rsquo| unicode:: U+02019 .. RIGHT SINGLE QUOTATION MARK
+.. |sbquo| unicode:: U+0201A .. SINGLE LOW-9 QUOTATION MARK
+.. |Scaron| unicode:: U+00160 .. LATIN CAPITAL LETTER S WITH CARON
+.. |scaron| unicode:: U+00161 .. LATIN SMALL LETTER S WITH CARON
+.. |thinsp| unicode:: U+02009 .. THIN SPACE
+.. |tilde| unicode:: U+002DC .. SMALL TILDE
+.. |Yuml| unicode:: U+00178 .. LATIN CAPITAL LETTER Y WITH DIAERESIS
+.. |zwj| unicode:: U+0200D .. ZERO WIDTH JOINER
+.. |zwnj| unicode:: U+0200C .. ZERO WIDTH NON-JOINER
diff --git a/python/helpers/docutils/parsers/rst/include/xhtml1-symbol.txt b/python/helpers/docutils/parsers/rst/include/xhtml1-symbol.txt
new file mode 100644
index 0000000..8fe97f8
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/include/xhtml1-symbol.txt
@@ -0,0 +1,130 @@
+.. This data file has been placed in the public domain.
+.. Derived from the Unicode character mappings available from
+ <http://www.w3.org/2003/entities/xml/>.
+ Processed by unicode2rstsubs.py, part of Docutils:
+ <http://docutils.sourceforge.net>.
+
+.. |alefsym| unicode:: U+02135 .. ALEF SYMBOL
+.. |Alpha| unicode:: U+00391 .. GREEK CAPITAL LETTER ALPHA
+.. |alpha| unicode:: U+003B1 .. GREEK SMALL LETTER ALPHA
+.. |and| unicode:: U+02227 .. LOGICAL AND
+.. |ang| unicode:: U+02220 .. ANGLE
+.. |asymp| unicode:: U+02248 .. ALMOST EQUAL TO
+.. |Beta| unicode:: U+00392 .. GREEK CAPITAL LETTER BETA
+.. |beta| unicode:: U+003B2 .. GREEK SMALL LETTER BETA
+.. |bull| unicode:: U+02022 .. BULLET
+.. |cap| unicode:: U+02229 .. INTERSECTION
+.. |Chi| unicode:: U+003A7 .. GREEK CAPITAL LETTER CHI
+.. |chi| unicode:: U+003C7 .. GREEK SMALL LETTER CHI
+.. |clubs| unicode:: U+02663 .. BLACK CLUB SUIT
+.. |cong| unicode:: U+02245 .. APPROXIMATELY EQUAL TO
+.. |crarr| unicode:: U+021B5 .. DOWNWARDS ARROW WITH CORNER LEFTWARDS
+.. |cup| unicode:: U+0222A .. UNION
+.. |dArr| unicode:: U+021D3 .. DOWNWARDS DOUBLE ARROW
+.. |darr| unicode:: U+02193 .. DOWNWARDS ARROW
+.. |Delta| unicode:: U+00394 .. GREEK CAPITAL LETTER DELTA
+.. |delta| unicode:: U+003B4 .. GREEK SMALL LETTER DELTA
+.. |diams| unicode:: U+02666 .. BLACK DIAMOND SUIT
+.. |empty| unicode:: U+02205 .. EMPTY SET
+.. |Epsilon| unicode:: U+00395 .. GREEK CAPITAL LETTER EPSILON
+.. |epsilon| unicode:: U+003B5 .. GREEK SMALL LETTER EPSILON
+.. |equiv| unicode:: U+02261 .. IDENTICAL TO
+.. |Eta| unicode:: U+00397 .. GREEK CAPITAL LETTER ETA
+.. |eta| unicode:: U+003B7 .. GREEK SMALL LETTER ETA
+.. |exist| unicode:: U+02203 .. THERE EXISTS
+.. |fnof| unicode:: U+00192 .. LATIN SMALL LETTER F WITH HOOK
+.. |forall| unicode:: U+02200 .. FOR ALL
+.. |frasl| unicode:: U+02044 .. FRACTION SLASH
+.. |Gamma| unicode:: U+00393 .. GREEK CAPITAL LETTER GAMMA
+.. |gamma| unicode:: U+003B3 .. GREEK SMALL LETTER GAMMA
+.. |ge| unicode:: U+02265 .. GREATER-THAN OR EQUAL TO
+.. |hArr| unicode:: U+021D4 .. LEFT RIGHT DOUBLE ARROW
+.. |harr| unicode:: U+02194 .. LEFT RIGHT ARROW
+.. |hearts| unicode:: U+02665 .. BLACK HEART SUIT
+.. |hellip| unicode:: U+02026 .. HORIZONTAL ELLIPSIS
+.. |image| unicode:: U+02111 .. BLACK-LETTER CAPITAL I
+.. |infin| unicode:: U+0221E .. INFINITY
+.. |int| unicode:: U+0222B .. INTEGRAL
+.. |Iota| unicode:: U+00399 .. GREEK CAPITAL LETTER IOTA
+.. |iota| unicode:: U+003B9 .. GREEK SMALL LETTER IOTA
+.. |isin| unicode:: U+02208 .. ELEMENT OF
+.. |Kappa| unicode:: U+0039A .. GREEK CAPITAL LETTER KAPPA
+.. |kappa| unicode:: U+003BA .. GREEK SMALL LETTER KAPPA
+.. |Lambda| unicode:: U+0039B .. GREEK CAPITAL LETTER LAMDA
+.. |lambda| unicode:: U+003BB .. GREEK SMALL LETTER LAMDA
+.. |lang| unicode:: U+02329 .. LEFT-POINTING ANGLE BRACKET
+.. |lArr| unicode:: U+021D0 .. LEFTWARDS DOUBLE ARROW
+.. |larr| unicode:: U+02190 .. LEFTWARDS ARROW
+.. |lceil| unicode:: U+02308 .. LEFT CEILING
+.. |le| unicode:: U+02264 .. LESS-THAN OR EQUAL TO
+.. |lfloor| unicode:: U+0230A .. LEFT FLOOR
+.. |lowast| unicode:: U+02217 .. ASTERISK OPERATOR
+.. |loz| unicode:: U+025CA .. LOZENGE
+.. |minus| unicode:: U+02212 .. MINUS SIGN
+.. |Mu| unicode:: U+0039C .. GREEK CAPITAL LETTER MU
+.. |mu| unicode:: U+003BC .. GREEK SMALL LETTER MU
+.. |nabla| unicode:: U+02207 .. NABLA
+.. |ne| unicode:: U+02260 .. NOT EQUAL TO
+.. |ni| unicode:: U+0220B .. CONTAINS AS MEMBER
+.. |notin| unicode:: U+02209 .. NOT AN ELEMENT OF
+.. |nsub| unicode:: U+02284 .. NOT A SUBSET OF
+.. |Nu| unicode:: U+0039D .. GREEK CAPITAL LETTER NU
+.. |nu| unicode:: U+003BD .. GREEK SMALL LETTER NU
+.. |oline| unicode:: U+0203E .. OVERLINE
+.. |Omega| unicode:: U+003A9 .. GREEK CAPITAL LETTER OMEGA
+.. |omega| unicode:: U+003C9 .. GREEK SMALL LETTER OMEGA
+.. |Omicron| unicode:: U+0039F .. GREEK CAPITAL LETTER OMICRON
+.. |omicron| unicode:: U+003BF .. GREEK SMALL LETTER OMICRON
+.. |oplus| unicode:: U+02295 .. CIRCLED PLUS
+.. |or| unicode:: U+02228 .. LOGICAL OR
+.. |otimes| unicode:: U+02297 .. CIRCLED TIMES
+.. |part| unicode:: U+02202 .. PARTIAL DIFFERENTIAL
+.. |perp| unicode:: U+022A5 .. UP TACK
+.. |Phi| unicode:: U+003A6 .. GREEK CAPITAL LETTER PHI
+.. |phi| unicode:: U+003D5 .. GREEK PHI SYMBOL
+.. |Pi| unicode:: U+003A0 .. GREEK CAPITAL LETTER PI
+.. |pi| unicode:: U+003C0 .. GREEK SMALL LETTER PI
+.. |piv| unicode:: U+003D6 .. GREEK PI SYMBOL
+.. |Prime| unicode:: U+02033 .. DOUBLE PRIME
+.. |prime| unicode:: U+02032 .. PRIME
+.. |prod| unicode:: U+0220F .. N-ARY PRODUCT
+.. |prop| unicode:: U+0221D .. PROPORTIONAL TO
+.. |Psi| unicode:: U+003A8 .. GREEK CAPITAL LETTER PSI
+.. |psi| unicode:: U+003C8 .. GREEK SMALL LETTER PSI
+.. |radic| unicode:: U+0221A .. SQUARE ROOT
+.. |rang| unicode:: U+0232A .. RIGHT-POINTING ANGLE BRACKET
+.. |rArr| unicode:: U+021D2 .. RIGHTWARDS DOUBLE ARROW
+.. |rarr| unicode:: U+02192 .. RIGHTWARDS ARROW
+.. |rceil| unicode:: U+02309 .. RIGHT CEILING
+.. |real| unicode:: U+0211C .. BLACK-LETTER CAPITAL R
+.. |rfloor| unicode:: U+0230B .. RIGHT FLOOR
+.. |Rho| unicode:: U+003A1 .. GREEK CAPITAL LETTER RHO
+.. |rho| unicode:: U+003C1 .. GREEK SMALL LETTER RHO
+.. |sdot| unicode:: U+022C5 .. DOT OPERATOR
+.. |Sigma| unicode:: U+003A3 .. GREEK CAPITAL LETTER SIGMA
+.. |sigma| unicode:: U+003C3 .. GREEK SMALL LETTER SIGMA
+.. |sigmaf| unicode:: U+003C2 .. GREEK SMALL LETTER FINAL SIGMA
+.. |sim| unicode:: U+0223C .. TILDE OPERATOR
+.. |spades| unicode:: U+02660 .. BLACK SPADE SUIT
+.. |sub| unicode:: U+02282 .. SUBSET OF
+.. |sube| unicode:: U+02286 .. SUBSET OF OR EQUAL TO
+.. |sum| unicode:: U+02211 .. N-ARY SUMMATION
+.. |sup| unicode:: U+02283 .. SUPERSET OF
+.. |supe| unicode:: U+02287 .. SUPERSET OF OR EQUAL TO
+.. |Tau| unicode:: U+003A4 .. GREEK CAPITAL LETTER TAU
+.. |tau| unicode:: U+003C4 .. GREEK SMALL LETTER TAU
+.. |there4| unicode:: U+02234 .. THEREFORE
+.. |Theta| unicode:: U+00398 .. GREEK CAPITAL LETTER THETA
+.. |theta| unicode:: U+003B8 .. GREEK SMALL LETTER THETA
+.. |thetasym| unicode:: U+003D1 .. GREEK THETA SYMBOL
+.. |trade| unicode:: U+02122 .. TRADE MARK SIGN
+.. |uArr| unicode:: U+021D1 .. UPWARDS DOUBLE ARROW
+.. |uarr| unicode:: U+02191 .. UPWARDS ARROW
+.. |upsih| unicode:: U+003D2 .. GREEK UPSILON WITH HOOK SYMBOL
+.. |Upsilon| unicode:: U+003A5 .. GREEK CAPITAL LETTER UPSILON
+.. |upsilon| unicode:: U+003C5 .. GREEK SMALL LETTER UPSILON
+.. |weierp| unicode:: U+02118 .. SCRIPT CAPITAL P
+.. |Xi| unicode:: U+0039E .. GREEK CAPITAL LETTER XI
+.. |xi| unicode:: U+003BE .. GREEK SMALL LETTER XI
+.. |Zeta| unicode:: U+00396 .. GREEK CAPITAL LETTER ZETA
+.. |zeta| unicode:: U+003B6 .. GREEK SMALL LETTER ZETA
diff --git a/python/helpers/docutils/parsers/rst/languages/__init__.py b/python/helpers/docutils/parsers/rst/languages/__init__.py
new file mode 100644
index 0000000..9195e05
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/__init__.py
@@ -0,0 +1,25 @@
+# $Id: __init__.py 5618 2008-07-28 08:37:32Z strank $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# Internationalization details are documented in
+# <http://docutils.sf.net/docs/howto/i18n.html>.
+
+"""
+This package contains modules for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+_languages = {}
+
+def get_language(language_code):
+ if language_code in _languages:
+ return _languages[language_code]
+ try:
+ module = __import__(language_code, globals(), locals())
+ except ImportError:
+ return None
+ _languages[language_code] = module
+ return module
diff --git a/python/helpers/docutils/parsers/rst/languages/af.py b/python/helpers/docutils/parsers/rst/languages/af.py
new file mode 100644
index 0000000..ecaa0cb
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/af.py
@@ -0,0 +1,102 @@
+# $Id: af.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Jannie Hofmeyr <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Afrikaans-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ 'aandag': 'attention',
+ 'versigtig': 'caution',
+ 'gevaar': 'danger',
+ 'fout': 'error',
+ 'wenk': 'hint',
+ 'belangrik': 'important',
+ 'nota': 'note',
+ 'tip': 'tip', # hint and tip both have the same translation: wenk
+ 'waarskuwing': 'warning',
+ 'vermaning': 'admonition',
+ 'kantstreep': 'sidebar',
+ 'onderwerp': 'topic',
+ 'lynblok': 'line-block',
+ 'parsed-literal (translation required)': 'parsed-literal',
+ 'rubriek': 'rubric',
+ 'epigraaf': 'epigraph',
+ 'hoogtepunte': 'highlights',
+ 'pull-quote (translation required)': 'pull-quote',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ #'vrae': 'questions',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ 'table (translation required)': 'table',
+ 'csv-table (translation required)': 'csv-table',
+ 'list-table (translation required)': 'list-table',
+ 'meta': 'meta',
+ #'beeldkaart': 'imagemap',
+ 'beeld': 'image',
+ 'figuur': 'figure',
+ 'insluiting': 'include',
+ 'rou': 'raw',
+ 'vervang': 'replace',
+ 'unicode': 'unicode', # should this be translated? unikode
+ 'datum': 'date',
+ 'klas': 'class',
+ 'role (translation required)': 'role',
+ 'default-role (translation required)': 'default-role',
+ 'title (translation required)': 'title',
+ 'inhoud': 'contents',
+ 'sectnum': 'sectnum',
+ 'section-numbering': 'sectnum',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #'voetnote': 'footnotes',
+ #'aanhalings': 'citations',
+ 'teikennotas': 'target-notes',
+ 'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Afrikaans name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ 'afkorting': 'abbreviation',
+ 'ab': 'abbreviation',
+ 'akroniem': 'acronym',
+ 'ac': 'acronym',
+ 'indeks': 'index',
+ 'i': 'index',
+ 'voetskrif': 'subscript',
+ 'sub': 'subscript',
+ 'boskrif': 'superscript',
+ 'sup': 'superscript',
+ 'titelverwysing': 'title-reference',
+ 'titel': 'title-reference',
+ 't': 'title-reference',
+ 'pep-verwysing': 'pep-reference',
+ 'pep': 'pep-reference',
+ 'rfc-verwysing': 'rfc-reference',
+ 'rfc': 'rfc-reference',
+ 'nadruk': 'emphasis',
+ 'sterk': 'strong',
+ 'literal (translation required)': 'literal',
+ 'benoemde verwysing': 'named-reference',
+ 'anonieme verwysing': 'anonymous-reference',
+ 'voetnootverwysing': 'footnote-reference',
+ 'aanhalingverwysing': 'citation-reference',
+ 'vervangingsverwysing': 'substitution-reference',
+ 'teiken': 'target',
+ 'uri-verwysing': 'uri-reference',
+ 'uri': 'uri-reference',
+ 'url': 'uri-reference',
+ 'rou': 'raw',}
+"""Mapping of Afrikaans role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/ca.py b/python/helpers/docutils/parsers/rst/languages/ca.py
new file mode 100644
index 0000000..5839778
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/ca.py
@@ -0,0 +1,121 @@
+# $Id: ca.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Ivan Vilata i Balaguer <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Catalan-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'atenci\u00F3': 'attention',
+ u'compte': 'caution',
+ u'perill': 'danger',
+ u'error': 'error',
+ u'suggeriment': 'hint',
+ u'important': 'important',
+ u'nota': 'note',
+ u'consell': 'tip',
+ u'av\u00EDs': 'warning',
+ u'advertiment': 'admonition',
+ u'nota-al-marge': 'sidebar',
+ u'nota-marge': 'sidebar',
+ u'tema': 'topic',
+ u'bloc-de-l\u00EDnies': 'line-block',
+ u'bloc-l\u00EDnies': 'line-block',
+ u'literal-analitzat': 'parsed-literal',
+ u'r\u00FAbrica': 'rubric',
+ u'ep\u00EDgraf': 'epigraph',
+ u'sumari': 'highlights',
+ u'cita-destacada': 'pull-quote',
+ u'compost': 'compound',
+ u'container (translation required)': 'container',
+ #'questions': 'questions',
+ u'taula': 'table',
+ u'taula-csv': 'csv-table',
+ u'taula-llista': 'list-table',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ u'meta': 'meta',
+ #'imagemap': 'imagemap',
+ u'imatge': 'image',
+ u'figura': 'figure',
+ u'inclou': 'include',
+ u'incloure': 'include',
+ u'cru': 'raw',
+ u'reempla\u00E7a': 'replace',
+ u'reempla\u00E7ar': 'replace',
+ u'unicode': 'unicode',
+ u'data': 'date',
+ u'classe': 'class',
+ u'rol': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ u'contingut': 'contents',
+ u'numsec': 'sectnum',
+ u'numeraci\u00F3-de-seccions': 'sectnum',
+ u'numeraci\u00F3-seccions': 'sectnum',
+ u'cap\u00E7alera': 'header',
+ u'peu-de-p\u00E0gina': 'footer',
+ u'peu-p\u00E0gina': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ u'notes-amb-destinacions': 'target-notes',
+ u'notes-destinacions': 'target-notes',
+ u'directiva-de-prova-de-restructuredtext': 'restructuredtext-test-directive'}
+"""Catalan name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'abreviatura': 'abbreviation',
+ u'abreviaci\u00F3': 'abbreviation',
+ u'abrev': 'abbreviation',
+ u'ab': 'abbreviation',
+ u'acr\u00F2nim': 'acronym',
+ u'ac': 'acronym',
+ u'\u00EDndex': 'index',
+ u'i': 'index',
+ u'sub\u00EDndex': 'subscript',
+ u'sub': 'subscript',
+ u'super\u00EDndex': 'superscript',
+ u'sup': 'superscript',
+ u'refer\u00E8ncia-a-t\u00EDtol': 'title-reference',
+ u'refer\u00E8ncia-t\u00EDtol': 'title-reference',
+ u't\u00EDtol': 'title-reference',
+ u't': 'title-reference',
+ u'refer\u00E8ncia-a-pep': 'pep-reference',
+ u'refer\u00E8ncia-pep': 'pep-reference',
+ u'pep': 'pep-reference',
+ u'refer\u00E8ncia-a-rfc': 'rfc-reference',
+ u'refer\u00E8ncia-rfc': 'rfc-reference',
+ u'rfc': 'rfc-reference',
+ u'\u00E8mfasi': 'emphasis',
+ u'destacat': 'strong',
+ u'literal': 'literal',
+ u'refer\u00E8ncia-amb-nom': 'named-reference',
+ u'refer\u00E8ncia-nom': 'named-reference',
+ u'refer\u00E8ncia-an\u00F2nima': 'anonymous-reference',
+ u'refer\u00E8ncia-a-nota-al-peu': 'footnote-reference',
+ u'refer\u00E8ncia-nota-al-peu': 'footnote-reference',
+ u'refer\u00E8ncia-a-cita': 'citation-reference',
+ u'refer\u00E8ncia-cita': 'citation-reference',
+ u'refer\u00E8ncia-a-substituci\u00F3': 'substitution-reference',
+ u'refer\u00E8ncia-substituci\u00F3': 'substitution-reference',
+ u'destinaci\u00F3': 'target',
+ u'refer\u00E8ncia-a-uri': 'uri-reference',
+ u'refer\u00E8ncia-uri': 'uri-reference',
+ u'uri': 'uri-reference',
+ u'url': 'uri-reference',
+ u'cru': 'raw',}
+"""Mapping of Catalan role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/cs.py b/python/helpers/docutils/parsers/rst/languages/cs.py
new file mode 100644
index 0000000..efd4393
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/cs.py
@@ -0,0 +1,104 @@
+# $Id: cs.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Marek Blaha <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Czech-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'pozor': 'attention',
+ u'caution (translation required)': 'caution', # how to distinguish caution from warning?
+ u'nebezpe\u010D\u00ED': 'danger',
+ u'chyba': 'error',
+ u'rada': 'hint',
+ u'd\u016Fle\u017Eit\u00E9': 'important',
+ u'pozn\u00E1mka': 'note',
+ u'tip (translation required)': 'tip',
+ u'varov\u00E1n\u00ED': 'warning',
+ u'admonition (translation required)': 'admonition',
+ u'sidebar (translation required)': 'sidebar',
+ u't\u00E9ma': 'topic',
+ u'line-block (translation required)': 'line-block',
+ u'parsed-literal (translation required)': 'parsed-literal',
+ u'odd\u00EDl': 'rubric',
+ u'moto': 'epigraph',
+ u'highlights (translation required)': 'highlights',
+ u'pull-quote (translation required)': 'pull-quote',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ #'questions': 'questions',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ u'table (translation required)': 'table',
+ u'csv-table (translation required)': 'csv-table',
+ u'list-table (translation required)': 'list-table',
+ u'meta (translation required)': 'meta',
+ #'imagemap': 'imagemap',
+ u'image (translation required)': 'image', # suggested Czech term: obrazek
+ u'figure (translation required)': 'figure', # and here?
+ u'include (translation required)': 'include',
+ u'raw (translation required)': 'raw',
+ u'replace (translation required)': 'replace',
+ u'unicode (translation required)': 'unicode',
+ u'datum': 'date',
+ u't\u0159\u00EDda': 'class',
+ u'role (translation required)': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ u'obsah': 'contents',
+ u'sectnum (translation required)': 'sectnum',
+ u'section-numbering (translation required)': 'sectnum',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ u'target-notes (translation required)': 'target-notes',
+ u'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Czech name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'abbreviation (translation required)': 'abbreviation',
+ u'ab (translation required)': 'abbreviation',
+ u'acronym (translation required)': 'acronym',
+ u'ac (translation required)': 'acronym',
+ u'index (translation required)': 'index',
+ u'i (translation required)': 'index',
+ u'subscript (translation required)': 'subscript',
+ u'sub (translation required)': 'subscript',
+ u'superscript (translation required)': 'superscript',
+ u'sup (translation required)': 'superscript',
+ u'title-reference (translation required)': 'title-reference',
+ u'title (translation required)': 'title-reference',
+ u't (translation required)': 'title-reference',
+ u'pep-reference (translation required)': 'pep-reference',
+ u'pep (translation required)': 'pep-reference',
+ u'rfc-reference (translation required)': 'rfc-reference',
+ u'rfc (translation required)': 'rfc-reference',
+ u'emphasis (translation required)': 'emphasis',
+ u'strong (translation required)': 'strong',
+ u'literal (translation required)': 'literal',
+ u'named-reference (translation required)': 'named-reference',
+ u'anonymous-reference (translation required)': 'anonymous-reference',
+ u'footnote-reference (translation required)': 'footnote-reference',
+ u'citation-reference (translation required)': 'citation-reference',
+ u'substitution-reference (translation required)': 'substitution-reference',
+ u'target (translation required)': 'target',
+ u'uri-reference (translation required)': 'uri-reference',
+ u'uri (translation required)': 'uri-reference',
+ u'url (translation required)': 'uri-reference',
+ u'raw (translation required)': 'raw',}
+"""Mapping of Czech role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/de.py b/python/helpers/docutils/parsers/rst/languages/de.py
new file mode 100644
index 0000000..c56abec
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/de.py
@@ -0,0 +1,96 @@
+# $Id: de.py 5174 2007-05-31 00:01:52Z wiemann $
+# Authors: Engelbert Gruber <[email protected]>;
+# Lea Wiemann <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+German-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ 'achtung': 'attention',
+ 'vorsicht': 'caution',
+ 'gefahr': 'danger',
+ 'fehler': 'error',
+ 'hinweis': 'hint',
+ 'wichtig': 'important',
+ 'notiz': 'note',
+ 'tipp': 'tip',
+ 'warnung': 'warning',
+ 'ermahnung': 'admonition',
+ 'kasten': 'sidebar',
+ 'seitenkasten': 'sidebar',
+ 'thema': 'topic',
+ 'zeilen-block': 'line-block',
+ 'parsed-literal (translation required)': 'parsed-literal',
+ 'rubrik': 'rubric',
+ 'epigraph': 'epigraph',
+ 'highlights (translation required)': 'highlights',
+ 'pull-quote (translation required)': 'pull-quote', # kasten too ?
+ 'zusammengesetzt': 'compound',
+ 'verbund': 'compound',
+ u'container (translation required)': 'container',
+ #'fragen': 'questions',
+ 'tabelle': 'table',
+ 'csv-tabelle': 'csv-table',
+ 'list-table (translation required)': 'list-table',
+ 'meta': 'meta',
+ #'imagemap': 'imagemap',
+ 'bild': 'image',
+ 'abbildung': 'figure',
+ u'unver\xe4ndert': 'raw',
+ u'roh': 'raw',
+ u'einf\xfcgen': 'include',
+ 'ersetzung': 'replace',
+ 'ersetzen': 'replace',
+ 'ersetze': 'replace',
+ 'unicode': 'unicode',
+ 'datum': 'date',
+ 'klasse': 'class',
+ 'rolle': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ 'inhalt': 'contents',
+ 'kapitel-nummerierung': 'sectnum',
+ 'abschnitts-nummerierung': 'sectnum',
+ u'linkziel-fu\xdfnoten': 'target-notes',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #u'fu\xdfnoten': 'footnotes',
+ #'zitate': 'citations',
+ }
+"""German name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ u'abk\xfcrzung': 'abbreviation',
+ 'akronym': 'acronym',
+ 'index': 'index',
+ 'tiefgestellt': 'subscript',
+ 'hochgestellt': 'superscript',
+ 'titel-referenz': 'title-reference',
+ 'pep-referenz': 'pep-reference',
+ 'rfc-referenz': 'rfc-reference',
+ 'betonung': 'emphasis',
+ 'fett': 'strong',
+ u'w\xf6rtlich': 'literal',
+ 'benannte-referenz': 'named-reference',
+ 'unbenannte-referenz': 'anonymous-reference',
+ u'fu\xdfnoten-referenz': 'footnote-reference',
+ 'zitat-referenz': 'citation-reference',
+ 'ersetzungs-referenz': 'substitution-reference',
+ 'ziel': 'target',
+ 'uri-referenz': 'uri-reference',
+ u'unver\xe4ndert': 'raw',
+ u'roh': 'raw',}
+"""Mapping of German role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/en.py b/python/helpers/docutils/parsers/rst/languages/en.py
new file mode 100644
index 0000000..2a31fdd
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/en.py
@@ -0,0 +1,104 @@
+# $Id: en.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+English-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ 'attention': 'attention',
+ 'caution': 'caution',
+ 'danger': 'danger',
+ 'error': 'error',
+ 'hint': 'hint',
+ 'important': 'important',
+ 'note': 'note',
+ 'tip': 'tip',
+ 'warning': 'warning',
+ 'admonition': 'admonition',
+ 'sidebar': 'sidebar',
+ 'topic': 'topic',
+ 'line-block': 'line-block',
+ 'parsed-literal': 'parsed-literal',
+ 'rubric': 'rubric',
+ 'epigraph': 'epigraph',
+ 'highlights': 'highlights',
+ 'pull-quote': 'pull-quote',
+ 'compound': 'compound',
+ 'container': 'container',
+ #'questions': 'questions',
+ 'table': 'table',
+ 'csv-table': 'csv-table',
+ 'list-table': 'list-table',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ 'meta': 'meta',
+ #'imagemap': 'imagemap',
+ 'image': 'image',
+ 'figure': 'figure',
+ 'include': 'include',
+ 'raw': 'raw',
+ 'replace': 'replace',
+ 'unicode': 'unicode',
+ 'date': 'date',
+ 'class': 'class',
+ 'role': 'role',
+ 'default-role': 'default-role',
+ 'title': 'title',
+ 'contents': 'contents',
+ 'sectnum': 'sectnum',
+ 'section-numbering': 'sectnum',
+ 'header': 'header',
+ 'footer': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ 'target-notes': 'target-notes',
+ 'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""English name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ 'abbreviation': 'abbreviation',
+ 'ab': 'abbreviation',
+ 'acronym': 'acronym',
+ 'ac': 'acronym',
+ 'index': 'index',
+ 'i': 'index',
+ 'subscript': 'subscript',
+ 'sub': 'subscript',
+ 'superscript': 'superscript',
+ 'sup': 'superscript',
+ 'title-reference': 'title-reference',
+ 'title': 'title-reference',
+ 't': 'title-reference',
+ 'pep-reference': 'pep-reference',
+ 'pep': 'pep-reference',
+ 'rfc-reference': 'rfc-reference',
+ 'rfc': 'rfc-reference',
+ 'emphasis': 'emphasis',
+ 'strong': 'strong',
+ 'literal': 'literal',
+ 'named-reference': 'named-reference',
+ 'anonymous-reference': 'anonymous-reference',
+ 'footnote-reference': 'footnote-reference',
+ 'citation-reference': 'citation-reference',
+ 'substitution-reference': 'substitution-reference',
+ 'target': 'target',
+ 'uri-reference': 'uri-reference',
+ 'uri': 'uri-reference',
+ 'url': 'uri-reference',
+ 'raw': 'raw',}
+"""Mapping of English role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/eo.py b/python/helpers/docutils/parsers/rst/languages/eo.py
new file mode 100644
index 0000000..2945763
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/eo.py
@@ -0,0 +1,114 @@
+# $Id: eo.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Marcelo Huerta San Martin <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Esperanto-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'atentu': 'attention',
+ u'zorgu': 'caution',
+ u'dangxero': 'danger',
+ u'dan\u011dero': 'danger',
+ u'eraro': 'error',
+ u'spuro': 'hint',
+ u'grava': 'important',
+ u'noto': 'note',
+ u'helpeto': 'tip',
+ u'averto': 'warning',
+ u'admono': 'admonition',
+ u'flankteksto': 'sidebar',
+ u'temo': 'topic',
+ u'linea-bloko': 'line-block',
+ u'analizota-literalo': 'parsed-literal',
+ u'rubriko': 'rubric',
+ u'epigrafo': 'epigraph',
+ u'elstarajxoj': 'highlights',
+ u'elstara\u0135oj': 'highlights',
+ u'ekstera-citajxo': 'pull-quote',
+ u'ekstera-cita\u0135o': 'pull-quote',
+ u'kombinajxo': 'compound',
+ u'kombina\u0135o': 'compound',
+ u'tekstingo': 'container',
+ u'enhavilo': 'container',
+ #'questions': 'questions',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ u'tabelo': 'table',
+ u'tabelo-vdk': 'csv-table', # "valoroj disigitaj per komoj" ("comma-separated values")
+ u'tabelo-csv': 'csv-table',
+ u'tabelo-lista': 'list-table',
+ u'meta': 'meta',
+ #'imagemap': 'imagemap',
+ u'bildo': 'image',
+ u'figuro': 'figure',
+ u'inkludi': 'include',
+ u'senanaliza': 'raw',
+ u'anstatauxi': 'replace',
+ u'anstata\u016di': 'replace',
+ u'unicode': 'unicode',
+ u'dato': 'date',
+ u'klaso': 'class',
+ u'rolo': 'role',
+ u'preterlasita-rolo': 'default-role',
+ u'titolo': 'title',
+ u'enhavo': 'contents',
+ u'seknum': 'sectnum',
+ u'sekcia-numerado': 'sectnum',
+ u'kapsekcio': 'header',
+ u'piedsekcio': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ u'celaj-notoj': 'target-notes',
+ u'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Esperanto name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'mallongigo': 'abbreviation',
+ u'mall': 'abbreviation',
+ u'komenclitero': 'acronym',
+ u'kl': 'acronym',
+ u'indekso': 'index',
+ u'i': 'index',
+ u'subskribo': 'subscript',
+ u'sub': 'subscript',
+ u'supraskribo': 'superscript',
+ u'sup': 'superscript',
+ u'titola-referenco': 'title-reference',
+ u'titolo': 'title-reference',
+ u't': 'title-reference',
+ u'pep-referenco': 'pep-reference',
+ u'pep': 'pep-reference',
+ u'rfc-referenco': 'rfc-reference',
+ u'rfc': 'rfc-reference',
+ u'emfazo': 'emphasis',
+ u'forta': 'strong',
+ u'litera': 'literal',
+ u'nomita-referenco': 'named-reference',
+ u'nenomita-referenco': 'anonymous-reference',
+ u'piednota-referenco': 'footnote-reference',
+ u'citajxo-referenco': 'citation-reference',
+ u'cita\u0135o-referenco': 'citation-reference',
+ u'anstatauxa-referenco': 'substitution-reference',
+ u'anstata\u016da-referenco': 'substitution-reference',
+ u'celo': 'target',
+ u'uri-referenco': 'uri-reference',
+ u'uri': 'uri-reference',
+ u'url': 'uri-reference',
+ u'senanaliza': 'raw',
+}
+"""Mapping of Esperanto role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/es.py b/python/helpers/docutils/parsers/rst/languages/es.py
new file mode 100644
index 0000000..c2246c5
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/es.py
@@ -0,0 +1,121 @@
+# -*- coding: utf-8 -*-
+# $Id: es.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Marcelo Huerta San Martín <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Spanish-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ u'atenci\u00f3n': 'attention',
+ u'atencion': 'attention',
+ u'precauci\u00f3n': 'caution',
+ u'precaucion': 'caution',
+ u'peligro': 'danger',
+ u'error': 'error',
+ u'sugerencia': 'hint',
+ u'importante': 'important',
+ u'nota': 'note',
+ u'consejo': 'tip',
+ u'advertencia': 'warning',
+ u'exhortacion': 'admonition',
+ u'exhortaci\u00f3n': 'admonition',
+ u'nota-al-margen': 'sidebar',
+ u'tema': 'topic',
+ u'bloque-de-lineas': 'line-block',
+ u'bloque-de-l\u00edneas': 'line-block',
+ u'literal-evaluado': 'parsed-literal',
+ u'firma': 'rubric',
+ u'ep\u00edgrafe': 'epigraph',
+ u'epigrafe': 'epigraph',
+ u'destacado': 'highlights',
+ u'cita-destacada': 'pull-quote',
+ u'combinacion': 'compound',
+ u'combinaci\u00f3n': 'compound',
+ u'contenedor': 'container',
+ #'questions': 'questions',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ u'tabla': 'table',
+ u'tabla-vsc': 'csv-table',
+ u'tabla-csv': 'csv-table',
+ u'tabla-lista': 'list-table',
+ u'meta': 'meta',
+ #'imagemap': 'imagemap',
+ u'imagen': 'image',
+ u'figura': 'figure',
+ u'incluir': 'include',
+ u'sin-analisis': 'raw',
+ u'sin-an\u00e1lisis': 'raw',
+ u'reemplazar': 'replace',
+ u'unicode': 'unicode',
+ u'fecha': 'date',
+ u'clase': 'class',
+ u'rol': 'role',
+ u'rol-por-omision': 'default-role',
+ u'rol-por-omisi\u00f3n': 'default-role',
+ u'titulo': 'title',
+ u't\u00edtulo': 'title',
+ u'contenido': 'contents',
+ u'numseccion': 'sectnum',
+ u'numsecci\u00f3n': 'sectnum',
+ u'numeracion-seccion': 'sectnum',
+ u'numeraci\u00f3n-secci\u00f3n': 'sectnum',
+ u'notas-destino': 'target-notes',
+ u'cabecera': 'header',
+ u'pie': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ u'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Spanish name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ u'abreviatura': 'abbreviation',
+ u'ab': 'abbreviation',
+ u'acr\u00f3nimo': 'acronym',
+ u'acronimo': 'acronym',
+ u'ac': 'acronym',
+ u'indice': 'index',
+ u'i': 'index',
+ u'subindice': 'subscript',
+ u'sub\u00edndice': 'subscript',
+ u'superindice': 'superscript',
+ u'super\u00edndice': 'superscript',
+ u'referencia-titulo': 'title-reference',
+ u'titulo': 'title-reference',
+ u't': 'title-reference',
+ u'referencia-pep': 'pep-reference',
+ u'pep': 'pep-reference',
+ u'referencia-rfc': 'rfc-reference',
+ u'rfc': 'rfc-reference',
+ u'enfasis': 'emphasis',
+ u'\u00e9nfasis': 'emphasis',
+ u'destacado': 'strong',
+ u'literal': 'literal', # "literal" is also a word in Spanish :-)
+ u'referencia-con-nombre': 'named-reference',
+ u'referencia-anonima': 'anonymous-reference',
+ u'referencia-an\u00f3nima': 'anonymous-reference',
+ u'referencia-nota-al-pie': 'footnote-reference',
+ u'referencia-cita': 'citation-reference',
+ u'referencia-sustitucion': 'substitution-reference',
+ u'referencia-sustituci\u00f3n': 'substitution-reference',
+ u'destino': 'target',
+ u'referencia-uri': 'uri-reference',
+ u'uri': 'uri-reference',
+ u'url': 'uri-reference',
+ u'sin-analisis': 'raw',
+ u'sin-an\u00e1lisis': 'raw',
+}
+"""Mapping of Spanish role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/fi.py b/python/helpers/docutils/parsers/rst/languages/fi.py
new file mode 100644
index 0000000..dbb4d2282
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/fi.py
@@ -0,0 +1,93 @@
+# $Id: fi.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Asko Soukka <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Finnish-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'huomio': u'attention',
+ u'varo': u'caution',
+ u'vaara': u'danger',
+ u'virhe': u'error',
+ u'vihje': u'hint',
+ u't\u00e4rke\u00e4\u00e4': u'important',
+ u'huomautus': u'note',
+ u'neuvo': u'tip',
+ u'varoitus': u'warning',
+ u'kehotus': u'admonition',
+ u'sivupalkki': u'sidebar',
+ u'aihe': u'topic',
+ u'rivi': u'line-block',
+ u'tasalevyinen': u'parsed-literal',
+ u'ohje': u'rubric',
+ u'epigraafi': u'epigraph',
+ u'kohokohdat': u'highlights',
+ u'lainaus': u'pull-quote',
+ u'taulukko': u'table',
+ u'csv-taulukko': u'csv-table',
+ u'list-table (translation required)': 'list-table',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ #u'kysymykset': u'questions',
+ u'meta': u'meta',
+ #u'kuvakartta': u'imagemap',
+ u'kuva': u'image',
+ u'kaavio': u'figure',
+ u'sis\u00e4llyt\u00e4': u'include',
+ u'raaka': u'raw',
+ u'korvaa': u'replace',
+ u'unicode': u'unicode',
+ u'p\u00e4iv\u00e4ys': u'date',
+ u'luokka': u'class',
+ u'rooli': u'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ u'sis\u00e4llys': u'contents',
+ u'kappale': u'sectnum',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #u'alaviitteet': u'footnotes',
+ #u'viitaukset': u'citations',
+ u'target-notes (translation required)': u'target-notes'}
+"""Finnish name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'lyhennys': u'abbreviation',
+ u'akronyymi': u'acronym',
+ u'kirjainsana': u'acronym',
+ u'hakemisto': u'index',
+ u'luettelo': u'index',
+ u'alaindeksi': u'subscript',
+ u'indeksi': u'subscript',
+ u'yl\u00e4indeksi': u'superscript',
+ u'title-reference (translation required)': u'title-reference',
+ u'title (translation required)': u'title-reference',
+ u'pep-reference (translation required)': u'pep-reference',
+ u'rfc-reference (translation required)': u'rfc-reference',
+ u'korostus': u'emphasis',
+ u'vahvistus': u'strong',
+ u'tasalevyinen': u'literal',
+ u'named-reference (translation required)': u'named-reference',
+ u'anonymous-reference (translation required)': u'anonymous-reference',
+ u'footnote-reference (translation required)': u'footnote-reference',
+ u'citation-reference (translation required)': u'citation-reference',
+ u'substitution-reference (translation required)': u'substitution-reference',
+ u'kohde': u'target',
+ u'uri-reference (translation required)': u'uri-reference',
+ u'raw (translation required)': 'raw',}
+"""Mapping of Finnish role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/fr.py b/python/helpers/docutils/parsers/rst/languages/fr.py
new file mode 100644
index 0000000..6828d77
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/fr.py
@@ -0,0 +1,99 @@
+# $Id: fr.py 4564 2006-05-21 20:44:42Z wiemann $
+# Authors: David Goodger <[email protected]>; William Dode
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+French-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ u'attention': 'attention',
+ u'pr\u00E9caution': 'caution',
+ u'danger': 'danger',
+ u'erreur': 'error',
+ u'conseil': 'hint',
+ u'important': 'important',
+ u'note': 'note',
+ u'astuce': 'tip',
+ u'avertissement': 'warning',
+ u'admonition': 'admonition',
+ u'encadr\u00E9': 'sidebar',
+ u'sujet': 'topic',
+ u'bloc-textuel': 'line-block',
+ u'bloc-interpr\u00E9t\u00E9': 'parsed-literal',
+ u'code-interpr\u00E9t\u00E9': 'parsed-literal',
+ u'intertitre': 'rubric',
+ u'exergue': 'epigraph',
+ u'\u00E9pigraphe': 'epigraph',
+ u'chapeau': 'highlights',
+ u'accroche': 'pull-quote',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ #u'questions': 'questions',
+ #u'qr': 'questions',
+ #u'faq': 'questions',
+ u'tableau': 'table',
+ u'csv-table (translation required)': 'csv-table',
+ u'list-table (translation required)': 'list-table',
+ u'm\u00E9ta': 'meta',
+ #u'imagemap (translation required)': 'imagemap',
+ u'image': 'image',
+ u'figure': 'figure',
+ u'inclure': 'include',
+ u'brut': 'raw',
+ u'remplacer': 'replace',
+ u'remplace': 'replace',
+ u'unicode': 'unicode',
+ u'date': 'date',
+ u'classe': 'class',
+ u'role (translation required)': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'titre (translation required)': 'title',
+ u'sommaire': 'contents',
+ u'table-des-mati\u00E8res': 'contents',
+ u'sectnum': 'sectnum',
+ u'section-num\u00E9rot\u00E9e': 'sectnum',
+ u'liens': 'target-notes',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #u'footnotes (translation required)': 'footnotes',
+ #u'citations (translation required)': 'citations',
+ }
+"""French name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ u'abr\u00E9viation': 'abbreviation',
+ u'acronyme': 'acronym',
+ u'sigle': 'acronym',
+ u'index': 'index',
+ u'indice': 'subscript',
+ u'ind': 'subscript',
+ u'exposant': 'superscript',
+ u'exp': 'superscript',
+ u'titre-r\u00E9f\u00E9rence': 'title-reference',
+ u'titre': 'title-reference',
+ u'pep-r\u00E9f\u00E9rence': 'pep-reference',
+ u'rfc-r\u00E9f\u00E9rence': 'rfc-reference',
+ u'emphase': 'emphasis',
+ u'fort': 'strong',
+ u'litt\u00E9ral': 'literal',
+ u'nomm\u00E9e-r\u00E9f\u00E9rence': 'named-reference',
+ u'anonyme-r\u00E9f\u00E9rence': 'anonymous-reference',
+ u'note-r\u00E9f\u00E9rence': 'footnote-reference',
+ u'citation-r\u00E9f\u00E9rence': 'citation-reference',
+ u'substitution-r\u00E9f\u00E9rence': 'substitution-reference',
+ u'lien': 'target',
+ u'uri-r\u00E9f\u00E9rence': 'uri-reference',
+ u'brut': 'raw',}
+"""Mapping of French role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/gl.py b/python/helpers/docutils/parsers/rst/languages/gl.py
new file mode 100644
index 0000000..716f170
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/gl.py
@@ -0,0 +1,107 @@
+# -*- coding: utf-8 -*-
+# Author: David Goodger
+# Contact: [email protected]
+# Revision: $Revision: 4229 $
+# Date: $Date: 2005-12-23 00:46:16 +0100 (Fri, 23 Dec 2005) $
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Galician-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'atenci\u00f3n': 'attention',
+ u'advertencia': 'caution',
+ u'perigo': 'danger',
+ u'erro': 'error',
+ u'pista': 'hint',
+ u'importante': 'important',
+ u'nota': 'note',
+ u'consello': 'tip',
+ u'aviso': 'warning',
+ u'admonici\u00f3n': 'admonition',
+ u'barra lateral': 'sidebar',
+ u't\u00f3pico': 'topic',
+ u'bloque-li\u00f1a': 'line-block',
+ u'literal-analizado': 'parsed-literal',
+ u'r\u00fabrica': 'rubric',
+ u'ep\u00edgrafe': 'epigraph',
+ u'realzados': 'highlights',
+ u'coller-citaci\u00f3n': 'pull-quote',
+ u'compor': 'compound',
+ u'recipiente': 'container',
+ #'questions': 'questions',
+ u't\u00e1boa': 'table',
+ u't\u00e1boa-csv': 'csv-table',
+ u't\u00e1boa-listaxe': 'list-table',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ u'meta': 'meta',
+ #'imagemap': 'imagemap',
+ u'imaxe': 'image',
+ u'figura': 'figure',
+ u'inclu\u00edr': 'include',
+ u'cru': 'raw',
+ u'substitu\u00edr': 'replace',
+ u'unicode': 'unicode',
+ u'data': 'date',
+ u'clase': 'class',
+ u'regra': 'role',
+ u'regra-predeterminada': 'default-role',
+ u't\u00edtulo': 'title',
+ u'contido': 'contents',
+ u'seccnum': 'sectnum',
+ u'secci\u00f3n-numerar': 'sectnum',
+ u'cabeceira': 'header',
+ u'p\u00e9 de p\u00e1xina': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ u'notas-destino': 'target-notes',
+ u'texto restruturado-proba-directiva': 'restructuredtext-test-directive'}
+"""Galician name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'abreviatura': 'abbreviation',
+ u'ab': 'abbreviation',
+ u'acr\u00f3nimo': 'acronym',
+ u'ac': 'acronym',
+ u'\u00edndice': 'index',
+ u'i': 'index',
+ u'sub\u00edndice': 'subscript',
+ u'sub': 'subscript',
+ u'super\u00edndice': 'superscript',
+ u'sup': 'superscript',
+ u'referencia t\u00edtulo': 'title-reference',
+ u't\u00edtulo': 'title-reference',
+ u't': 'title-reference',
+ u'referencia-pep': 'pep-reference',
+ u'pep': 'pep-reference',
+ u'referencia-rfc': 'rfc-reference',
+ u'rfc': 'rfc-reference',
+ u'\u00e9nfase': 'emphasis',
+ u'forte': 'strong',
+ u'literal': 'literal',
+ u'referencia-nome': 'named-reference',
+ u'referencia-an\u00f3nimo': 'anonymous-reference',
+ u'referencia-nota ao p\u00e9': 'footnote-reference',
+ u'referencia-citaci\u00f3n': 'citation-reference',
+ u'referencia-substituci\u00f3n': 'substitution-reference',
+ u'destino': 'target',
+ u'referencia-uri': 'uri-reference',
+ u'uri': 'uri-reference',
+ u'url': 'uri-reference',
+ u'cru': 'raw',}
+"""Mapping of Galician role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/he.py b/python/helpers/docutils/parsers/rst/languages/he.py
new file mode 100644
index 0000000..a46b995
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/he.py
@@ -0,0 +1,104 @@
+# Author: Meir Kriheli
+# Id: $Id: he.py 4837 2006-12-26 09:59:41Z sfcben $
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Hebrew-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'\u05ea\u05e9\u05d5\u05de\u05ea \u05dc\u05d1': 'attention',
+ u'\u05d6\u05d4\u05d9\u05e8\u05d5\u05ea': 'caution',
+ u'\u05e1\u05db\u05e0\u05d4': 'danger',
+ u'\u05e9\u05d2\u05d9\u05d0\u05d4' : 'error',
+ u'\u05e8\u05de\u05d6': 'hint',
+ u'\u05d7\u05e9\u05d5\u05d1': 'important',
+ u'\u05d4\u05e2\u05e8\u05d4': 'note',
+ u'\u05d8\u05d9\u05e4': 'tip',
+ u'\u05d0\u05d6\u05d4\u05e8\u05d4': 'warning',
+ 'admonition': 'admonition',
+ 'sidebar': 'sidebar',
+ 'topic': 'topic',
+ 'line-block': 'line-block',
+ 'parsed-literal': 'parsed-literal',
+ 'rubric': 'rubric',
+ 'epigraph': 'epigraph',
+ 'highlights': 'highlights',
+ 'pull-quote': 'pull-quote',
+ 'compound': 'compound',
+ 'container': 'container',
+ #'questions': 'questions',
+ 'table': 'table',
+ 'csv-table': 'csv-table',
+ 'list-table': 'list-table',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ 'meta': 'meta',
+ #'imagemap': 'imagemap',
+ u'\u05ea\u05de\u05d5\u05e0\u05d4': 'image',
+ 'figure': 'figure',
+ 'include': 'include',
+ 'raw': 'raw',
+ 'replace': 'replace',
+ 'unicode': 'unicode',
+ 'date': 'date',
+ u'\u05e1\u05d2\u05e0\u05d5\u05df': 'class',
+ 'role': 'role',
+ 'default-role': 'default-role',
+ 'title': 'title',
+ u'\u05ea\u05d5\u05db\u05df': 'contents',
+ 'sectnum': 'sectnum',
+ 'section-numbering': 'sectnum',
+ 'header': 'header',
+ 'footer': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ 'target-notes': 'target-notes',
+ 'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Hebrew name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ 'abbreviation': 'abbreviation',
+ 'ab': 'abbreviation',
+ 'acronym': 'acronym',
+ 'ac': 'acronym',
+ 'index': 'index',
+ 'i': 'index',
+ u'\u05ea\u05d7\u05ea\u05d9': 'subscript',
+ 'sub': 'subscript',
+ u'\u05e2\u05d9\u05dc\u05d9': 'superscript',
+ 'sup': 'superscript',
+ 'title-reference': 'title-reference',
+ 'title': 'title-reference',
+ 't': 'title-reference',
+ 'pep-reference': 'pep-reference',
+ 'pep': 'pep-reference',
+ 'rfc-reference': 'rfc-reference',
+ 'rfc': 'rfc-reference',
+ 'emphasis': 'emphasis',
+ 'strong': 'strong',
+ 'literal': 'literal',
+ 'named-reference': 'named-reference',
+ 'anonymous-reference': 'anonymous-reference',
+ 'footnote-reference': 'footnote-reference',
+ 'citation-reference': 'citation-reference',
+ 'substitution-reference': 'substitution-reference',
+ 'target': 'target',
+ 'uri-reference': 'uri-reference',
+ 'uri': 'uri-reference',
+ 'url': 'uri-reference',
+ 'raw': 'raw',}
+"""Mapping of Hebrew role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/it.py b/python/helpers/docutils/parsers/rst/languages/it.py
new file mode 100644
index 0000000..a6131af
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/it.py
@@ -0,0 +1,93 @@
+# $Id: it.py 4564 2006-05-21 20:44:42Z wiemann $
+# Authors: Nicola Larosa <[email protected]>;
+# Lele Gaifax <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# Beware: the Italian translation of the reStructuredText documentation
+# at http://docit.bice.dyndns.org/static/ReST, in particular
+# http://docit.bice.dyndns.org/static/ReST/ref/rst/directives.html, needs
+# to be synced with the content of this file.
+
+"""
+Italian-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ 'attenzione': 'attention',
+ 'cautela': 'caution',
+ 'pericolo': 'danger',
+ 'errore': 'error',
+ 'suggerimento': 'hint',
+ 'importante': 'important',
+ 'nota': 'note',
+ 'consiglio': 'tip',
+ 'avvertenza': 'warning',
+ 'ammonizione': 'admonition',
+ 'riquadro': 'sidebar',
+ 'argomento': 'topic',
+ 'blocco-di-righe': 'line-block',
+ 'blocco-interpretato': 'parsed-literal',
+ 'rubrica': 'rubric',
+ 'epigrafe': 'epigraph',
+ 'punti-salienti': 'highlights',
+ 'estratto-evidenziato': 'pull-quote',
+ 'composito': 'compound',
+ u'container (translation required)': 'container',
+ #'questions': 'questions',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ 'tabella': 'table',
+ 'tabella-csv': 'csv-table',
+ 'tabella-elenco': 'list-table',
+ 'meta': 'meta',
+ #'imagemap': 'imagemap',
+ 'immagine': 'image',
+ 'figura': 'figure',
+ 'includi': 'include',
+ 'grezzo': 'raw',
+ 'sostituisci': 'replace',
+ 'unicode': 'unicode',
+ 'data': 'date',
+ 'classe': 'class',
+ 'ruolo': 'role',
+ 'ruolo-predefinito': 'default-role',
+ 'titolo': 'title',
+ 'indice': 'contents',
+ 'contenuti': 'contents',
+ 'seznum': 'sectnum',
+ 'sezioni-autonumerate': 'sectnum',
+ 'annota-riferimenti-esterni': 'target-notes',
+ 'intestazione': 'header',
+ 'piede-pagina': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ 'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Italian name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ 'abbreviazione': 'abbreviation',
+ 'acronimo': 'acronym',
+ 'indice': 'index',
+ 'deponente': 'subscript',
+ 'esponente': 'superscript',
+ 'riferimento-titolo': 'title-reference',
+ 'riferimento-pep': 'pep-reference',
+ 'riferimento-rfc': 'rfc-reference',
+ 'enfasi': 'emphasis',
+ 'forte': 'strong',
+ 'letterale': 'literal',
+ 'riferimento-con-nome': 'named-reference',
+ 'riferimento-anonimo': 'anonymous-reference',
+ 'riferimento-nota': 'footnote-reference',
+ 'riferimento-citazione': 'citation-reference',
+ 'riferimento-sostituzione': 'substitution-reference',
+ 'destinazione': 'target',
+ 'riferimento-uri': 'uri-reference',
+ 'grezzo': 'raw',}
+"""Mapping of Italian role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/ja.py b/python/helpers/docutils/parsers/rst/languages/ja.py
new file mode 100644
index 0000000..9a1e348
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/ja.py
@@ -0,0 +1,115 @@
+# -*- coding: utf-8 -*-
+# $Id: ja.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Japanese-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+# Corrections to these translations are welcome!
+# 間違いがあれば、どうぞ正しい翻訳を教えて下さい。
+
+directives = {
+ # language-dependent: fixed
+ u'注目': 'attention',
+ u'注意': 'caution',
+ u'危険': 'danger',
+ u'エラー': 'error',
+ u'ヒント': 'hint',
+ u'重要': 'important',
+ u'備考': 'note',
+ u'通報': 'tip',
+ u'警告': 'warning',
+ u'戒告': 'admonition',
+ u'サイドバー': 'sidebar',
+ u'トピック': 'topic',
+ u'ラインブロック': 'line-block',
+ u'パーズドリテラル': 'parsed-literal',
+ u'ルブリック': 'rubric',
+ u'エピグラフ': 'epigraph',
+ u'題言': 'epigraph',
+ u'ハイライト': 'highlights',
+ u'見所': 'highlights',
+ u'プルクオート': 'pull-quote',
+ u'合成': 'compound',
+ u'コンテナー': 'container',
+ u'容器': 'container',
+ u'表': 'table',
+ u'csv表': 'csv-table',
+ u'リスト表': 'list-table',
+ #u'質問': 'questions',
+ #u'問答': 'questions',
+ #u'faq': 'questions',
+ u'メタ': 'meta',
+ #u'イメージマプ': 'imagemap',
+ u'イメージ': 'image',
+ u'画像': 'image',
+ u'フィグア': 'figure',
+ u'図版': 'figure',
+ u'インクルード': 'include',
+ u'含む': 'include',
+ u'組み込み': 'include',
+ u'生': 'raw',
+ u'原': 'raw',
+ u'換える': 'replace',
+ u'取り換える': 'replace',
+ u'掛け替える': 'replace',
+ u'ユニコード': 'unicode',
+ u'日付': 'date',
+ u'クラス': 'class',
+ u'ロール': 'role',
+ u'役': 'role',
+ u'ディフォルトロール': 'default-role',
+ u'既定役': 'default-role',
+ u'タイトル': 'title',
+ u'題': 'title', # 題名 件名
+ u'目次': 'contents',
+ u'節数': 'sectnum',
+ u'ヘッダ': 'header',
+ u'フッタ': 'footer',
+ #u'脚注': 'footnotes', # 脚註?
+ #u'サイテーション': 'citations', # 出典 引証 引用
+ u'ターゲットノート': 'target-notes', # 的注 的脚注
+ }
+"""Japanese name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'略': 'abbreviation',
+ u'頭字語': 'acronym',
+ u'インデックス': 'index',
+ u'索引': 'index',
+ u'添字': 'subscript',
+ u'下付': 'subscript',
+ u'下': 'subscript',
+ u'上付': 'superscript',
+ u'上': 'superscript',
+ u'題参照': 'title-reference',
+ u'pep参照': 'pep-reference',
+ u'rfc参照': 'rfc-reference',
+ u'強調': 'emphasis',
+ u'強い': 'strong',
+ u'リテラル': 'literal',
+ u'整形済み': 'literal',
+ u'名付参照': 'named-reference',
+ u'無名参照': 'anonymous-reference',
+ u'脚注参照': 'footnote-reference',
+ u'出典参照': 'citation-reference',
+ u'代入参照': 'substitution-reference',
+ u'的': 'target',
+ u'uri参照': 'uri-reference',
+ u'uri': 'uri-reference',
+ u'url': 'uri-reference',
+ u'生': 'raw',}
+"""Mapping of Japanese role names to canonical role names for interpreted
+text."""
diff --git a/python/helpers/docutils/parsers/rst/languages/nl.py b/python/helpers/docutils/parsers/rst/languages/nl.py
new file mode 100644
index 0000000..33ca299
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/nl.py
@@ -0,0 +1,108 @@
+# $Id: nl.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Martijn Pieters <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Dutch-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ 'attentie': 'attention',
+ 'let-op': 'caution',
+ 'gevaar': 'danger',
+ 'fout': 'error',
+ 'hint': 'hint',
+ 'belangrijk': 'important',
+ 'opmerking': 'note',
+ 'tip': 'tip',
+ 'waarschuwing': 'warning',
+ 'aanmaning': 'admonition',
+ 'katern': 'sidebar',
+ 'onderwerp': 'topic',
+ 'lijn-blok': 'line-block',
+ 'letterlijk-ontleed': 'parsed-literal',
+ 'rubriek': 'rubric',
+ 'opschrift': 'epigraph',
+ 'hoogtepunten': 'highlights',
+ 'pull-quote': 'pull-quote', # Dutch printers use the English term
+ 'samenstelling': 'compound',
+ 'verbinding': 'compound',
+ u'container (translation required)': 'container',
+ #'vragen': 'questions',
+ 'tabel': 'table',
+ 'csv-tabel': 'csv-table',
+ 'lijst-tabel': 'list-table',
+ #'veelgestelde-vragen': 'questions',
+ 'meta': 'meta',
+ #'imagemap': 'imagemap',
+ 'beeld': 'image',
+ 'figuur': 'figure',
+ 'opnemen': 'include',
+ 'onbewerkt': 'raw',
+ 'vervang': 'replace',
+ 'vervanging': 'replace',
+ 'unicode': 'unicode',
+ 'datum': 'date',
+ 'klasse': 'class',
+ 'rol': 'role',
+ u'default-role (translation required)': 'default-role',
+ 'title (translation required)': 'title',
+ 'inhoud': 'contents',
+ 'sectnum': 'sectnum',
+ 'sectie-nummering': 'sectnum',
+ 'hoofdstuk-nummering': 'sectnum',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #'voetnoten': 'footnotes',
+ #'citaten': 'citations',
+ 'verwijzing-voetnoten': 'target-notes',
+ 'restructuredtext-test-instructie': 'restructuredtext-test-directive'}
+"""Dutch name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ 'afkorting': 'abbreviation',
+ # 'ab': 'abbreviation',
+ 'acroniem': 'acronym',
+ 'ac': 'acronym',
+ 'index': 'index',
+ 'i': 'index',
+ 'inferieur': 'subscript',
+ 'inf': 'subscript',
+ 'superieur': 'superscript',
+ 'sup': 'superscript',
+ 'titel-referentie': 'title-reference',
+ 'titel': 'title-reference',
+ 't': 'title-reference',
+ 'pep-referentie': 'pep-reference',
+ 'pep': 'pep-reference',
+ 'rfc-referentie': 'rfc-reference',
+ 'rfc': 'rfc-reference',
+ 'nadruk': 'emphasis',
+ 'extra': 'strong',
+ 'extra-nadruk': 'strong',
+ 'vet': 'strong',
+ 'letterlijk': 'literal',
+ 'benoemde-referentie': 'named-reference',
+ 'anonieme-referentie': 'anonymous-reference',
+ 'voetnoot-referentie': 'footnote-reference',
+ 'citaat-referentie': 'citation-reference',
+ 'substitutie-reference': 'substitution-reference',
+ 'verwijzing': 'target',
+ 'uri-referentie': 'uri-reference',
+ 'uri': 'uri-reference',
+ 'url': 'uri-reference',
+ 'onbewerkt': 'raw',}
+"""Mapping of Dutch role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/pl.py b/python/helpers/docutils/parsers/rst/languages/pl.py
new file mode 100644
index 0000000..408a8e6
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/pl.py
@@ -0,0 +1,98 @@
+# $Id$
+# Author: Robert Wojciechowicz <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Polish-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'uwaga': 'attention',
+ u'ostro\u017cnie': 'caution',
+ u'niebezpiecze\u0144stwo': 'danger',
+ u'b\u0142\u0105d': 'error',
+ u'wskaz\u00f3wka': 'hint',
+ u'wa\u017cne': 'important',
+ u'przypis': 'note',
+ u'rada': 'tip',
+ u'ostrze\u017cenie': 'warning',
+ u'upomnienie': 'admonition',
+ u'ramka': 'sidebar',
+ u'temat': 'topic',
+ u'blok-linii': 'line-block',
+ u'sparsowany-litera\u0142': 'parsed-literal',
+ u'rubryka': 'rubric',
+ u'epigraf': 'epigraph',
+ u'highlights': 'highlights', # FIXME no Polish equivalent?
+ u'pull-quote': 'pull-quote', # FIXME no Polish equivalent?
+ u'z\u0142o\u017cony': 'compound',
+ u'kontener': 'container',
+ #'questions': 'questions',
+ u'tabela': 'table',
+ u'tabela-csv': 'csv-table',
+ u'tabela-listowa': 'list-table',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ u'meta': 'meta',
+ #'imagemap': 'imagemap',
+ u'obraz': 'image',
+ u'rycina': 'figure',
+ u'do\u0142\u0105cz': 'include',
+ u'surowe': 'raw',
+ u'zast\u0105p': 'replace',
+ u'unikod': 'unicode',
+ u'data': 'date',
+ u'klasa': 'class',
+ u'rola': 'role',
+ u'rola-domy\u015blna': 'default-role',
+ u'tytu\u0142': 'title',
+ u'tre\u015b\u0107': 'contents',
+ u'sectnum': 'sectnum',
+ u'numeracja-sekcji': 'sectnum',
+ u'nag\u0142\u00f3wek': 'header',
+ u'stopka': 'footer',
+ #'footnotes': 'footnotes',
+ #'citations': 'citations',
+ u'target-notes': 'target-notes', # FIXME no Polish equivalent?
+ u'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Polish name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'skr\u00f3t': 'abbreviation',
+ u'akronim': 'acronym',
+ u'indeks': 'index',
+ u'indeks-dolny': 'subscript',
+ u'indeks-g\u00f3rny': 'superscript',
+ u'referencja-tytu\u0142': 'title-reference',
+ u'referencja-pep': 'pep-reference',
+ u'referencja-rfc': 'rfc-reference',
+ u'podkre\u015blenie': 'emphasis',
+ u'wyt\u0142uszczenie': 'strong',
+ u'dos\u0142ownie': 'literal',
+ u'referencja-nazwana': 'named-reference',
+ u'referencja-anonimowa': 'anonymous-reference',
+ u'referencja-przypis': 'footnote-reference',
+ u'referencja-cytat': 'citation-reference',
+ u'referencja-podstawienie': 'substitution-reference',
+ u'cel': 'target',
+ u'referencja-uri': 'uri-reference',
+ u'uri': 'uri-reference',
+ u'url': 'uri-reference',
+ u'surowe': 'raw',}
+"""Mapping of Polish role names to canonical role names for interpreted text.
+"""
+
+
+
diff --git a/python/helpers/docutils/parsers/rst/languages/pt_br.py b/python/helpers/docutils/parsers/rst/languages/pt_br.py
new file mode 100644
index 0000000..9c179c1
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/pt_br.py
@@ -0,0 +1,104 @@
+# $Id: pt_br.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Brazilian Portuguese-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'aten\u00E7\u00E3o': 'attention',
+ 'cuidado': 'caution',
+ 'perigo': 'danger',
+ 'erro': 'error',
+ u'sugest\u00E3o': 'hint',
+ 'importante': 'important',
+ 'nota': 'note',
+ 'dica': 'tip',
+ 'aviso': 'warning',
+ u'exorta\u00E7\u00E3o': 'admonition',
+ 'barra-lateral': 'sidebar',
+ u't\u00F3pico': 'topic',
+ 'bloco-de-linhas': 'line-block',
+ 'literal-interpretado': 'parsed-literal',
+ 'rubrica': 'rubric',
+ u'ep\u00EDgrafo': 'epigraph',
+ 'destaques': 'highlights',
+ u'cita\u00E7\u00E3o-destacada': 'pull-quote',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ #'perguntas': 'questions',
+ #'qa': 'questions',
+ #'faq': 'questions',
+ u'table (translation required)': 'table',
+ u'csv-table (translation required)': 'csv-table',
+ u'list-table (translation required)': 'list-table',
+ 'meta': 'meta',
+ #'imagemap': 'imagemap',
+ 'imagem': 'image',
+ 'figura': 'figure',
+ u'inclus\u00E3o': 'include',
+ 'cru': 'raw',
+ u'substitui\u00E7\u00E3o': 'replace',
+ 'unicode': 'unicode',
+ 'data': 'date',
+ 'classe': 'class',
+ 'role (translation required)': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ u'\u00EDndice': 'contents',
+ 'numsec': 'sectnum',
+ u'numera\u00E7\u00E3o-de-se\u00E7\u00F5es': 'sectnum',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #u'notas-de-rorap\u00E9': 'footnotes',
+ #u'cita\u00E7\u00F5es': 'citations',
+ u'links-no-rodap\u00E9': 'target-notes',
+ 'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Brazilian Portuguese name to registered (in directives/__init__.py)
+directive name mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'abbrevia\u00E7\u00E3o': 'abbreviation',
+ 'ab': 'abbreviation',
+ u'acr\u00F4nimo': 'acronym',
+ 'ac': 'acronym',
+ u'\u00EDndice-remissivo': 'index',
+ 'i': 'index',
+ 'subscrito': 'subscript',
+ 'sub': 'subscript',
+ 'sobrescrito': 'superscript',
+ 'sob': 'superscript',
+ u'refer\u00EAncia-a-t\u00EDtulo': 'title-reference',
+ u't\u00EDtulo': 'title-reference',
+ 't': 'title-reference',
+ u'refer\u00EAncia-a-pep': 'pep-reference',
+ 'pep': 'pep-reference',
+ u'refer\u00EAncia-a-rfc': 'rfc-reference',
+ 'rfc': 'rfc-reference',
+ u'\u00EAnfase': 'emphasis',
+ 'forte': 'strong',
+ 'literal': 'literal', # translation required?
+ u'refer\u00EAncia-por-nome': 'named-reference',
+ u'refer\u00EAncia-an\u00F4nima': 'anonymous-reference',
+ u'refer\u00EAncia-a-nota-de-rodap\u00E9': 'footnote-reference',
+ u'refer\u00EAncia-a-cita\u00E7\u00E3o': 'citation-reference',
+ u'refer\u00EAncia-a-substitui\u00E7\u00E3o': 'substitution-reference',
+ 'alvo': 'target',
+ u'refer\u00EAncia-a-uri': 'uri-reference',
+ 'uri': 'uri-reference',
+ 'url': 'uri-reference',
+ 'cru': 'raw',}
+"""Mapping of Brazilian Portuguese role names to canonical role names
+for interpreted text."""
diff --git a/python/helpers/docutils/parsers/rst/languages/ru.py b/python/helpers/docutils/parsers/rst/languages/ru.py
new file mode 100644
index 0000000..20ed86d
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/ru.py
@@ -0,0 +1,103 @@
+# $Id: ru.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Roman Suzi <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Russian-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+directives = {
+ u'\u0431\u043b\u043e\u043a-\u0441\u0442\u0440\u043e\u043a': u'line-block',
+ u'meta': u'meta',
+ u'\u043e\u0431\u0440\u0430\u0431\u043e\u0442\u0430\u043d\u043d\u044b\u0439-\u043b\u0438\u0442\u0435\u0440\u0430\u043b':
+ u'parsed-literal',
+ u'\u0432\u044b\u0434\u0435\u043b\u0435\u043d\u043d\u0430\u044f-\u0446\u0438\u0442\u0430\u0442\u0430':
+ u'pull-quote',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ u'table (translation required)': 'table',
+ u'csv-table (translation required)': 'csv-table',
+ u'list-table (translation required)': 'list-table',
+ u'\u0441\u044b\u0440\u043e\u0439': u'raw',
+ u'\u0437\u0430\u043c\u0435\u043d\u0430': u'replace',
+ u'\u0442\u0435\u0441\u0442\u043e\u0432\u0430\u044f-\u0434\u0438\u0440\u0435\u043a\u0442\u0438\u0432\u0430-restructuredtext':
+ u'restructuredtext-test-directive',
+ u'\u0446\u0435\u043b\u0435\u0432\u044b\u0435-\u0441\u043d\u043e\u0441\u043a\u0438':
+ u'target-notes',
+ u'unicode': u'unicode',
+ u'\u0434\u0430\u0442\u0430': u'date',
+ u'\u0431\u043e\u043a\u043e\u0432\u0430\u044f-\u043f\u043e\u043b\u043e\u0441\u0430':
+ u'sidebar',
+ u'\u0432\u0430\u0436\u043d\u043e': u'important',
+ u'\u0432\u043a\u043b\u044e\u0447\u0430\u0442\u044c': u'include',
+ u'\u0432\u043d\u0438\u043c\u0430\u043d\u0438\u0435': u'attention',
+ u'\u0432\u044b\u0434\u0435\u043b\u0435\u043d\u0438\u0435': u'highlights',
+ u'\u0437\u0430\u043c\u0435\u0447\u0430\u043d\u0438\u0435': u'admonition',
+ u'\u0438\u0437\u043e\u0431\u0440\u0430\u0436\u0435\u043d\u0438\u0435':
+ u'image',
+ u'\u043a\u043b\u0430\u0441\u0441': u'class',
+ u'role (translation required)': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ u'\u043d\u043e\u043c\u0435\u0440-\u0440\u0430\u0437\u0434\u0435\u043b\u0430':
+ u'sectnum',
+ u'\u043d\u0443\u043c\u0435\u0440\u0430\u0446\u0438\u044f-\u0440\u0430\u0437'
+ u'\u0434\u0435\u043b\u043e\u0432': u'sectnum',
+ u'\u043e\u043f\u0430\u0441\u043d\u043e': u'danger',
+ u'\u043e\u0441\u0442\u043e\u0440\u043e\u0436\u043d\u043e': u'caution',
+ u'\u043e\u0448\u0438\u0431\u043a\u0430': u'error',
+ u'\u043f\u043e\u0434\u0441\u043a\u0430\u0437\u043a\u0430': u'tip',
+ u'\u043f\u0440\u0435\u0434\u0443\u043f\u0440\u0435\u0436\u0434\u0435\u043d'
+ u'\u0438\u0435': u'warning',
+ u'\u043f\u0440\u0438\u043c\u0435\u0447\u0430\u043d\u0438\u0435': u'note',
+ u'\u0440\u0438\u0441\u0443\u043d\u043e\u043a': u'figure',
+ u'\u0440\u0443\u0431\u0440\u0438\u043a\u0430': u'rubric',
+ u'\u0441\u043e\u0432\u0435\u0442': u'hint',
+ u'\u0441\u043e\u0434\u0435\u0440\u0436\u0430\u043d\u0438\u0435': u'contents',
+ u'\u0442\u0435\u043c\u0430': u'topic',
+ u'\u044d\u043f\u0438\u0433\u0440\u0430\u0444': u'epigraph',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',}
+"""Russian name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ u'\u0430\u043a\u0440\u043e\u043d\u0438\u043c': 'acronym',
+ u'\u0430\u043d\u043e\u043d\u0438\u043c\u043d\u0430\u044f-\u0441\u0441\u044b\u043b\u043a\u0430':
+ 'anonymous-reference',
+ u'\u0431\u0443\u043a\u0432\u0430\u043b\u044c\u043d\u043e': 'literal',
+ u'\u0432\u0435\u0440\u0445\u043d\u0438\u0439-\u0438\u043d\u0434\u0435\u043a\u0441':
+ 'superscript',
+ u'\u0432\u044b\u0434\u0435\u043b\u0435\u043d\u0438\u0435': 'emphasis',
+ u'\u0438\u043c\u0435\u043d\u043e\u0432\u0430\u043d\u043d\u0430\u044f-\u0441\u0441\u044b\u043b\u043a\u0430':
+ 'named-reference',
+ u'\u0438\u043d\u0434\u0435\u043a\u0441': 'index',
+ u'\u043d\u0438\u0436\u043d\u0438\u0439-\u0438\u043d\u0434\u0435\u043a\u0441':
+ 'subscript',
+ u'\u0441\u0438\u043b\u044c\u043d\u043e\u0435-\u0432\u044b\u0434\u0435\u043b\u0435\u043d\u0438\u0435':
+ 'strong',
+ u'\u0441\u043e\u043a\u0440\u0430\u0449\u0435\u043d\u0438\u0435':
+ 'abbreviation',
+ u'\u0441\u0441\u044b\u043b\u043a\u0430-\u0437\u0430\u043c\u0435\u043d\u0430':
+ 'substitution-reference',
+ u'\u0441\u0441\u044b\u043b\u043a\u0430-\u043d\u0430-pep': 'pep-reference',
+ u'\u0441\u0441\u044b\u043b\u043a\u0430-\u043d\u0430-rfc': 'rfc-reference',
+ u'\u0441\u0441\u044b\u043b\u043a\u0430-\u043d\u0430-uri': 'uri-reference',
+ u'\u0441\u0441\u044b\u043b\u043a\u0430-\u043d\u0430-\u0437\u0430\u0433\u043b\u0430\u0432\u0438\u0435':
+ 'title-reference',
+ u'\u0441\u0441\u044b\u043b\u043a\u0430-\u043d\u0430-\u0441\u043d\u043e\u0441\u043a\u0443':
+ 'footnote-reference',
+ u'\u0446\u0438\u0442\u0430\u0442\u043d\u0430\u044f-\u0441\u0441\u044b\u043b\u043a\u0430':
+ 'citation-reference',
+ u'\u0446\u0435\u043b\u044c': 'target',
+ u'raw (translation required)': 'raw',}
+"""Mapping of Russian role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/sk.py b/python/helpers/docutils/parsers/rst/languages/sk.py
new file mode 100644
index 0000000..79c3aa0
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/sk.py
@@ -0,0 +1,91 @@
+# $Id: sk.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Miroslav Vasko <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Slovak-language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ u'pozor': 'attention',
+ u'opatrne': 'caution',
+ u'nebezpe\xe8enstvo': 'danger',
+ u'chyba': 'error',
+ u'rada': 'hint',
+ u'd\xf4le\x9eit\xe9': 'important',
+ u'pozn\xe1mka': 'note',
+ u'tip (translation required)': 'tip',
+ u'varovanie': 'warning',
+ u'admonition (translation required)': 'admonition',
+ u'sidebar (translation required)': 'sidebar',
+ u't\xe9ma': 'topic',
+ u'blok-riadkov': 'line-block',
+ u'parsed-literal': 'parsed-literal',
+ u'rubric (translation required)': 'rubric',
+ u'epigraph (translation required)': 'epigraph',
+ u'highlights (translation required)': 'highlights',
+ u'pull-quote (translation required)': 'pull-quote',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ #u'questions': 'questions',
+ #u'qa': 'questions',
+ #u'faq': 'questions',
+ u'table (translation required)': 'table',
+ u'csv-table (translation required)': 'csv-table',
+ u'list-table (translation required)': 'list-table',
+ u'meta': 'meta',
+ #u'imagemap': 'imagemap',
+ u'obr\xe1zok': 'image',
+ u'tvar': 'figure',
+ u'vlo\x9ei\x9d': 'include',
+ u'raw (translation required)': 'raw',
+ u'nahradi\x9d': 'replace',
+ u'unicode': 'unicode',
+ u'd\u00E1tum': 'date',
+ u'class (translation required)': 'class',
+ u'role (translation required)': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ u'obsah': 'contents',
+ u'\xe8as\x9d': 'sectnum',
+ u'\xe8as\x9d-\xe8\xedslovanie': 'sectnum',
+ u'cie\xbeov\xe9-pozn\xe1mky': 'target-notes',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #u'footnotes': 'footnotes',
+ #u'citations': 'citations',
+ }
+"""Slovak name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ u'abbreviation (translation required)': 'abbreviation',
+ u'acronym (translation required)': 'acronym',
+ u'index (translation required)': 'index',
+ u'subscript (translation required)': 'subscript',
+ u'superscript (translation required)': 'superscript',
+ u'title-reference (translation required)': 'title-reference',
+ u'pep-reference (translation required)': 'pep-reference',
+ u'rfc-reference (translation required)': 'rfc-reference',
+ u'emphasis (translation required)': 'emphasis',
+ u'strong (translation required)': 'strong',
+ u'literal (translation required)': 'literal',
+ u'named-reference (translation required)': 'named-reference',
+ u'anonymous-reference (translation required)': 'anonymous-reference',
+ u'footnote-reference (translation required)': 'footnote-reference',
+ u'citation-reference (translation required)': 'citation-reference',
+ u'substitution-reference (translation required)': 'substitution-reference',
+ u'target (translation required)': 'target',
+ u'uri-reference (translation required)': 'uri-reference',
+ u'raw (translation required)': 'raw',}
+"""Mapping of Slovak role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/sv.py b/python/helpers/docutils/parsers/rst/languages/sv.py
new file mode 100644
index 0000000..22a6697
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/sv.py
@@ -0,0 +1,90 @@
+# $Id: sv.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Adam Chodorowski <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Swedish language mappings for language-dependent features of reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ u'observera': 'attention',
+ u'caution (translation required)': 'caution',
+ u'fara': 'danger',
+ u'fel': 'error',
+ u'v\u00e4gledning': 'hint',
+ u'viktigt': 'important',
+ u'notera': 'note',
+ u'tips': 'tip',
+ u'varning': 'warning',
+ u'admonition (translation required)': 'admonition',
+ u'sidebar (translation required)': 'sidebar',
+ u'\u00e4mne': 'topic',
+ u'line-block (translation required)': 'line-block',
+ u'parsed-literal (translation required)': 'parsed-literal',
+ u'mellanrubrik': 'rubric',
+ u'epigraph (translation required)': 'epigraph',
+ u'highlights (translation required)': 'highlights',
+ u'pull-quote (translation required)': 'pull-quote',
+ u'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ # u'fr\u00e5gor': 'questions',
+ # NOTE: A bit long, but recommended by http://www.nada.kth.se/dataterm/:
+ # u'fr\u00e5gor-och-svar': 'questions',
+ # u'vanliga-fr\u00e5gor': 'questions',
+ u'table (translation required)': 'table',
+ u'csv-table (translation required)': 'csv-table',
+ u'list-table (translation required)': 'list-table',
+ u'meta': 'meta',
+ # u'bildkarta': 'imagemap', # FIXME: Translation might be too literal.
+ u'bild': 'image',
+ u'figur': 'figure',
+ u'inkludera': 'include',
+ u'r\u00e5': 'raw', # FIXME: Translation might be too literal.
+ u'ers\u00e4tt': 'replace',
+ u'unicode': 'unicode',
+ u'datum': 'date',
+ u'class (translation required)': 'class',
+ u'role (translation required)': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ u'inneh\u00e5ll': 'contents',
+ u'sektionsnumrering': 'sectnum',
+ u'target-notes (translation required)': 'target-notes',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ # u'fotnoter': 'footnotes',
+ # u'citeringar': 'citations',
+ }
+"""Swedish name to registered (in directives/__init__.py) directive name
+mapping."""
+
+roles = {
+ u'abbreviation (translation required)': 'abbreviation',
+ u'acronym (translation required)': 'acronym',
+ u'index (translation required)': 'index',
+ u'subscript (translation required)': 'subscript',
+ u'superscript (translation required)': 'superscript',
+ u'title-reference (translation required)': 'title-reference',
+ u'pep-reference (translation required)': 'pep-reference',
+ u'rfc-reference (translation required)': 'rfc-reference',
+ u'emphasis (translation required)': 'emphasis',
+ u'strong (translation required)': 'strong',
+ u'literal (translation required)': 'literal',
+ u'named-reference (translation required)': 'named-reference',
+ u'anonymous-reference (translation required)': 'anonymous-reference',
+ u'footnote-reference (translation required)': 'footnote-reference',
+ u'citation-reference (translation required)': 'citation-reference',
+ u'substitution-reference (translation required)': 'substitution-reference',
+ u'target (translation required)': 'target',
+ u'uri-reference (translation required)': 'uri-reference',
+ u'r\u00e5': 'raw',}
+"""Mapping of Swedish role names to canonical role names for interpreted text.
+"""
diff --git a/python/helpers/docutils/parsers/rst/languages/zh_cn.py b/python/helpers/docutils/parsers/rst/languages/zh_cn.py
new file mode 100644
index 0000000..b8dc5f6
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/zh_cn.py
@@ -0,0 +1,100 @@
+# -*- coding: utf-8 -*-
+# $Id: zh_cn.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Panjunyong <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Simplified Chinese language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ u'注意': 'attention',
+ u'小心': 'caution',
+ u'危险': 'danger',
+ u'错误': 'error',
+ u'提示': 'hint',
+ u'重要': 'important',
+ u'注解': 'note',
+ u'技巧': 'tip',
+ u'警告': 'warning',
+ u'忠告': 'admonition',
+ u'侧框': 'sidebar',
+ u'主题': 'topic',
+ u'line-block (translation required)': 'line-block',
+ u'parsed-literal (translation required)': 'parsed-literal',
+ u'醒目': 'rubric',
+ u'铭文': 'epigraph',
+ u'要点': 'highlights',
+ u'pull-quote (translation required)': 'pull-quote',
+ u'复合': 'compound',
+ u'容器': 'container',
+ #u'questions (translation required)': 'questions',
+ u'表格': 'table',
+ u'csv表格': 'csv-table',
+ u'列表表格': 'list-table',
+ #u'qa (translation required)': 'questions',
+ #u'faq (translation required)': 'questions',
+ u'元数据': 'meta',
+ #u'imagemap (translation required)': 'imagemap',
+ u'图片': 'image',
+ u'图例': 'figure',
+ u'包含': 'include',
+ u'原文': 'raw',
+ u'代替': 'replace',
+ u'统一码': 'unicode',
+ u'日期': 'date',
+ u'类型': 'class',
+ u'角色': 'role',
+ u'默认角色': 'default-role',
+ u'标题': 'title',
+ u'目录': 'contents',
+ u'章节序号': 'sectnum',
+ u'题头': 'header',
+ u'页脚': 'footer',
+ #u'footnotes (translation required)': 'footnotes',
+ #u'citations (translation required)': 'citations',
+ u'target-notes (translation required)': 'target-notes',
+ u'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Simplified Chinese name to registered (in directives/__init__.py)
+directive name mapping."""
+
+roles = {
+ # language-dependent: fixed
+ u'缩写': 'abbreviation',
+ u'简称': 'acronym',
+ u'index (translation required)': 'index',
+ u'i (translation required)': 'index',
+ u'下标': 'subscript',
+ u'上标': 'superscript',
+ u'title-reference (translation required)': 'title-reference',
+ u'title (translation required)': 'title-reference',
+ u't (translation required)': 'title-reference',
+ u'pep-reference (translation required)': 'pep-reference',
+ u'pep (translation required)': 'pep-reference',
+ u'rfc-reference (translation required)': 'rfc-reference',
+ u'rfc (translation required)': 'rfc-reference',
+ u'强调': 'emphasis',
+ u'加粗': 'strong',
+ u'字面': 'literal',
+ u'named-reference (translation required)': 'named-reference',
+ u'anonymous-reference (translation required)': 'anonymous-reference',
+ u'footnote-reference (translation required)': 'footnote-reference',
+ u'citation-reference (translation required)': 'citation-reference',
+ u'substitution-reference (translation required)': 'substitution-reference',
+ u'target (translation required)': 'target',
+ u'uri-reference (translation required)': 'uri-reference',
+ u'uri (translation required)': 'uri-reference',
+ u'url (translation required)': 'uri-reference',
+ u'raw (translation required)': 'raw',}
+"""Mapping of Simplified Chinese role names to canonical role names
+for interpreted text."""
diff --git a/python/helpers/docutils/parsers/rst/languages/zh_tw.py b/python/helpers/docutils/parsers/rst/languages/zh_tw.py
new file mode 100644
index 0000000..ab43d6b
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/languages/zh_tw.py
@@ -0,0 +1,105 @@
+# -*- coding: utf-8 -*-
+# $Id: zh_tw.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+# New language mappings are welcome. Before doing a new translation, please
+# read <http://docutils.sf.net/docs/howto/i18n.html>. Two files must be
+# translated for each language: one in docutils/languages, the other in
+# docutils/parsers/rst/languages.
+
+"""
+Traditional Chinese language mappings for language-dependent features of
+reStructuredText.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+directives = {
+ # language-dependent: fixed
+ 'attention (translation required)': 'attention',
+ 'caution (translation required)': 'caution',
+ 'danger (translation required)': 'danger',
+ 'error (translation required)': 'error',
+ 'hint (translation required)': 'hint',
+ 'important (translation required)': 'important',
+ 'note (translation required)': 'note',
+ 'tip (translation required)': 'tip',
+ 'warning (translation required)': 'warning',
+ 'admonition (translation required)': 'admonition',
+ 'sidebar (translation required)': 'sidebar',
+ 'topic (translation required)': 'topic',
+ 'line-block (translation required)': 'line-block',
+ 'parsed-literal (translation required)': 'parsed-literal',
+ 'rubric (translation required)': 'rubric',
+ 'epigraph (translation required)': 'epigraph',
+ 'highlights (translation required)': 'highlights',
+ 'pull-quote (translation required)': 'pull-quote',
+ 'compound (translation required)': 'compound',
+ u'container (translation required)': 'container',
+ #'questions (translation required)': 'questions',
+ 'table (translation required)': 'table',
+ 'csv-table (translation required)': 'csv-table',
+ 'list-table (translation required)': 'list-table',
+ #'qa (translation required)': 'questions',
+ #'faq (translation required)': 'questions',
+ 'meta (translation required)': 'meta',
+ #'imagemap (translation required)': 'imagemap',
+ 'image (translation required)': 'image',
+ 'figure (translation required)': 'figure',
+ 'include (translation required)': 'include',
+ 'raw (translation required)': 'raw',
+ 'replace (translation required)': 'replace',
+ 'unicode (translation required)': 'unicode',
+ u'日期': 'date',
+ 'class (translation required)': 'class',
+ 'role (translation required)': 'role',
+ u'default-role (translation required)': 'default-role',
+ u'title (translation required)': 'title',
+ 'contents (translation required)': 'contents',
+ 'sectnum (translation required)': 'sectnum',
+ 'section-numbering (translation required)': 'sectnum',
+ u'header (translation required)': 'header',
+ u'footer (translation required)': 'footer',
+ #'footnotes (translation required)': 'footnotes',
+ #'citations (translation required)': 'citations',
+ 'target-notes (translation required)': 'target-notes',
+ 'restructuredtext-test-directive': 'restructuredtext-test-directive'}
+"""Traditional Chinese name to registered (in directives/__init__.py)
+directive name mapping."""
+
+roles = {
+ # language-dependent: fixed
+ 'abbreviation (translation required)': 'abbreviation',
+ 'ab (translation required)': 'abbreviation',
+ 'acronym (translation required)': 'acronym',
+ 'ac (translation required)': 'acronym',
+ 'index (translation required)': 'index',
+ 'i (translation required)': 'index',
+ 'subscript (translation required)': 'subscript',
+ 'sub (translation required)': 'subscript',
+ 'superscript (translation required)': 'superscript',
+ 'sup (translation required)': 'superscript',
+ 'title-reference (translation required)': 'title-reference',
+ 'title (translation required)': 'title-reference',
+ 't (translation required)': 'title-reference',
+ 'pep-reference (translation required)': 'pep-reference',
+ 'pep (translation required)': 'pep-reference',
+ 'rfc-reference (translation required)': 'rfc-reference',
+ 'rfc (translation required)': 'rfc-reference',
+ 'emphasis (translation required)': 'emphasis',
+ 'strong (translation required)': 'strong',
+ 'literal (translation required)': 'literal',
+ 'named-reference (translation required)': 'named-reference',
+ 'anonymous-reference (translation required)': 'anonymous-reference',
+ 'footnote-reference (translation required)': 'footnote-reference',
+ 'citation-reference (translation required)': 'citation-reference',
+ 'substitution-reference (translation required)': 'substitution-reference',
+ 'target (translation required)': 'target',
+ 'uri-reference (translation required)': 'uri-reference',
+ 'uri (translation required)': 'uri-reference',
+ 'url (translation required)': 'uri-reference',
+ 'raw (translation required)': 'raw',}
+"""Mapping of Traditional Chinese role names to canonical role names for
+interpreted text."""
diff --git a/python/helpers/docutils/parsers/rst/roles.py b/python/helpers/docutils/parsers/rst/roles.py
new file mode 100644
index 0000000..ca88172
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/roles.py
@@ -0,0 +1,350 @@
+# $Id: roles.py 6121 2009-09-10 12:05:04Z milde $
+# Author: Edward Loper <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This module defines standard interpreted text role functions, a registry for
+interpreted text roles, and an API for adding to and retrieving from the
+registry.
+
+The interface for interpreted role functions is as follows::
+
+ def role_fn(name, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ code...
+
+ # Set function attributes for customization:
+ role_fn.options = ...
+ role_fn.content = ...
+
+Parameters:
+
+- ``name`` is the local name of the interpreted text role, the role name
+ actually used in the document.
+
+- ``rawtext`` is a string containing the entire interpreted text construct.
+ Return it as a ``problematic`` node linked to a system message if there is a
+ problem.
+
+- ``text`` is the interpreted text content, with backslash escapes converted
+ to nulls (``\x00``).
+
+- ``lineno`` is the line number where the interpreted text begins.
+
+- ``inliner`` is the Inliner object that called the role function.
+ It defines the following useful attributes: ``reporter``,
+ ``problematic``, ``memo``, ``parent``, ``document``.
+
+- ``options``: A dictionary of directive options for customization, to be
+ interpreted by the role function. Used for additional attributes for the
+ generated elements and other functionality.
+
+- ``content``: A list of strings, the directive content for customization
+ ("role" directive). To be interpreted by the role function.
+
+Function attributes for customization, interpreted by the "role" directive:
+
+- ``options``: A dictionary, mapping known option names to conversion
+ functions such as `int` or `float`. ``None`` or an empty dict implies no
+ options to parse. Several directive option conversion functions are defined
+ in the `directives` module.
+
+ All role functions implicitly support the "class" option, unless disabled
+ with an explicit ``{'class': None}``.
+
+- ``content``: A boolean; true if content is allowed. Client code must handle
+ the case where content is required but not supplied (an empty content list
+ will be supplied).
+
+Note that unlike directives, the "arguments" function attribute is not
+supported for role customization. Directive arguments are handled by the
+"role" directive itself.
+
+Interpreted role functions return a tuple of two values:
+
+- A list of nodes which will be inserted into the document tree at the
+ point where the interpreted role was encountered (can be an empty
+ list).
+
+- A list of system messages, which will be inserted into the document tree
+ immediately after the end of the current inline block (can also be empty).
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes, utils
+from docutils.parsers.rst import directives
+from docutils.parsers.rst.languages import en as _fallback_language_module
+
+DEFAULT_INTERPRETED_ROLE = 'title-reference'
+"""
+The canonical name of the default interpreted role. This role is used
+when no role is specified for a piece of interpreted text.
+"""
+
+_role_registry = {}
+"""Mapping of canonical role names to role functions. Language-dependent role
+names are defined in the ``language`` subpackage."""
+
+_roles = {}
+"""Mapping of local or language-dependent interpreted text role names to role
+functions."""
+
+def role(role_name, language_module, lineno, reporter):
+ """
+ Locate and return a role function from its language-dependent name, along
+ with a list of system messages. If the role is not found in the current
+ language, check English. Return a 2-tuple: role function (``None`` if the
+ named role cannot be found) and a list of system messages.
+ """
+ normname = role_name.lower()
+ messages = []
+ msg_text = []
+
+ if normname in _roles:
+ return _roles[normname], messages
+
+ if role_name:
+ canonicalname = None
+ try:
+ canonicalname = language_module.roles[normname]
+ except AttributeError, error:
+ msg_text.append('Problem retrieving role entry from language '
+ 'module %r: %s.' % (language_module, error))
+ except KeyError:
+ msg_text.append('No role entry for "%s" in module "%s".'
+ % (role_name, language_module.__name__))
+ else:
+ canonicalname = DEFAULT_INTERPRETED_ROLE
+
+ # If we didn't find it, try English as a fallback.
+ if not canonicalname:
+ try:
+ canonicalname = _fallback_language_module.roles[normname]
+ msg_text.append('Using English fallback for role "%s".'
+ % role_name)
+ except KeyError:
+ msg_text.append('Trying "%s" as canonical role name.'
+ % role_name)
+ # The canonical name should be an English name, but just in case:
+ canonicalname = normname
+
+ # Collect any messages that we generated.
+ if msg_text:
+ message = reporter.info('\n'.join(msg_text), line=lineno)
+ messages.append(message)
+
+ # Look the role up in the registry, and return it.
+ if canonicalname in _role_registry:
+ role_fn = _role_registry[canonicalname]
+ register_local_role(normname, role_fn)
+ return role_fn, messages
+ else:
+ return None, messages # Error message will be generated by caller.
+
+def register_canonical_role(name, role_fn):
+ """
+ Register an interpreted text role by its canonical name.
+
+ :Parameters:
+ - `name`: The canonical name of the interpreted role.
+ - `role_fn`: The role function. See the module docstring.
+ """
+ set_implicit_options(role_fn)
+ _role_registry[name] = role_fn
+
+def register_local_role(name, role_fn):
+ """
+ Register an interpreted text role by its local or language-dependent name.
+
+ :Parameters:
+ - `name`: The local or language-dependent name of the interpreted role.
+ - `role_fn`: The role function. See the module docstring.
+ """
+ set_implicit_options(role_fn)
+ _roles[name] = role_fn
+
+def set_implicit_options(role_fn):
+ """
+ Add customization options to role functions, unless explicitly set or
+ disabled.
+ """
+ if not hasattr(role_fn, 'options') or role_fn.options is None:
+ role_fn.options = {'class': directives.class_option}
+ elif 'class' not in role_fn.options:
+ role_fn.options['class'] = directives.class_option
+
+def register_generic_role(canonical_name, node_class):
+ """For roles which simply wrap a given `node_class` around the text."""
+ role = GenericRole(canonical_name, node_class)
+ register_canonical_role(canonical_name, role)
+
+
+class GenericRole:
+
+ """
+ Generic interpreted text role, where the interpreted text is simply
+ wrapped with the provided node class.
+ """
+
+ def __init__(self, role_name, node_class):
+ self.name = role_name
+ self.node_class = node_class
+
+ def __call__(self, role, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ set_classes(options)
+ return [self.node_class(rawtext, utils.unescape(text), **options)], []
+
+
+class CustomRole:
+
+ """
+ Wrapper for custom interpreted text roles.
+ """
+
+ def __init__(self, role_name, base_role, options={}, content=[]):
+ self.name = role_name
+ self.base_role = base_role
+ self.options = None
+ if hasattr(base_role, 'options'):
+ self.options = base_role.options
+ self.content = None
+ if hasattr(base_role, 'content'):
+ self.content = base_role.content
+ self.supplied_options = options
+ self.supplied_content = content
+
+ def __call__(self, role, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ opts = self.supplied_options.copy()
+ opts.update(options)
+ cont = list(self.supplied_content)
+ if cont and content:
+ cont += '\n'
+ cont.extend(content)
+ return self.base_role(role, rawtext, text, lineno, inliner,
+ options=opts, content=cont)
+
+
+def generic_custom_role(role, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ """"""
+ # Once nested inline markup is implemented, this and other methods should
+ # recursively call inliner.nested_parse().
+ set_classes(options)
+ return [nodes.inline(rawtext, utils.unescape(text), **options)], []
+
+generic_custom_role.options = {'class': directives.class_option}
+
+
+######################################################################
+# Define and register the standard roles:
+######################################################################
+
+register_generic_role('abbreviation', nodes.abbreviation)
+register_generic_role('acronym', nodes.acronym)
+register_generic_role('emphasis', nodes.emphasis)
+register_generic_role('literal', nodes.literal)
+register_generic_role('strong', nodes.strong)
+register_generic_role('subscript', nodes.subscript)
+register_generic_role('superscript', nodes.superscript)
+register_generic_role('title-reference', nodes.title_reference)
+
+def pep_reference_role(role, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ try:
+ pepnum = int(text)
+ if pepnum < 0 or pepnum > 9999:
+ raise ValueError
+ except ValueError:
+ msg = inliner.reporter.error(
+ 'PEP number must be a number from 0 to 9999; "%s" is invalid.'
+ % text, line=lineno)
+ prb = inliner.problematic(rawtext, rawtext, msg)
+ return [prb], [msg]
+    # Base URL mainly used by inliner.pep_reference, so this is correct:
+ ref = (inliner.document.settings.pep_base_url
+ + inliner.document.settings.pep_file_url_template % pepnum)
+ set_classes(options)
+ return [nodes.reference(rawtext, 'PEP ' + utils.unescape(text), refuri=ref,
+ **options)], []
+
+register_canonical_role('pep-reference', pep_reference_role)
+
+def rfc_reference_role(role, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ try:
+ rfcnum = int(text)
+ if rfcnum <= 0:
+ raise ValueError
+ except ValueError:
+ msg = inliner.reporter.error(
+ 'RFC number must be a number greater than or equal to 1; '
+ '"%s" is invalid.' % text, line=lineno)
+ prb = inliner.problematic(rawtext, rawtext, msg)
+ return [prb], [msg]
+ # Base URL mainly used by inliner.rfc_reference, so this is correct:
+ ref = inliner.document.settings.rfc_base_url + inliner.rfc_url % rfcnum
+ set_classes(options)
+ node = nodes.reference(rawtext, 'RFC ' + utils.unescape(text), refuri=ref,
+ **options)
+ return [node], []
+
+register_canonical_role('rfc-reference', rfc_reference_role)
+
+def raw_role(role, rawtext, text, lineno, inliner, options={}, content=[]):
+ if not inliner.document.settings.raw_enabled:
+ msg = inliner.reporter.warning('raw (and derived) roles disabled')
+ prb = inliner.problematic(rawtext, rawtext, msg)
+ return [prb], [msg]
+ if 'format' not in options:
+ msg = inliner.reporter.error(
+ 'No format (Writer name) is associated with this role: "%s".\n'
+ 'The "raw" role cannot be used directly.\n'
+ 'Instead, use the "role" directive to create a new role with '
+ 'an associated format.' % role, line=lineno)
+ prb = inliner.problematic(rawtext, rawtext, msg)
+ return [prb], [msg]
+ set_classes(options)
+ node = nodes.raw(rawtext, utils.unescape(text, 1), **options)
+ return [node], []
+
+raw_role.options = {'format': directives.unchanged}
+
+register_canonical_role('raw', raw_role)
+
+
+######################################################################
+# Register roles that are currently unimplemented.
+######################################################################
+
+def unimplemented_role(role, rawtext, text, lineno, inliner, attributes={}):
+ msg = inliner.reporter.error(
+ 'Interpreted text role "%s" not implemented.' % role, line=lineno)
+ prb = inliner.problematic(rawtext, rawtext, msg)
+ return [prb], [msg]
+
+register_canonical_role('index', unimplemented_role)
+register_canonical_role('named-reference', unimplemented_role)
+register_canonical_role('anonymous-reference', unimplemented_role)
+register_canonical_role('uri-reference', unimplemented_role)
+register_canonical_role('footnote-reference', unimplemented_role)
+register_canonical_role('citation-reference', unimplemented_role)
+register_canonical_role('substitution-reference', unimplemented_role)
+register_canonical_role('target', unimplemented_role)
+
+# This should remain unimplemented, for testing purposes:
+register_canonical_role('restructuredtext-unimplemented-role',
+ unimplemented_role)
+
+
+def set_classes(options):
+ """
+ Auxiliary function to set options['classes'] and delete
+ options['class'].
+ """
+ if 'class' in options:
+ assert 'classes' not in options
+ options['classes'] = options['class']
+ del options['class']
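The `set_classes()` transform above amounts to renaming one key in place: the "class" option (as parsed from the source) becomes the "classes" node attribute. A stand-alone replica:

```python
# Replica of set_classes() for illustration only; the real function
# lives in docutils.parsers.rst.roles and mutates the dict in place.
def set_classes(options):
    if 'class' in options:
        assert 'classes' not in options
        options['classes'] = options['class']
        del options['class']

opts = {'class': ['special'], 'format': 'html'}
set_classes(opts)
# opts now maps 'classes' (not 'class') to ['special']
```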
diff --git a/python/helpers/docutils/parsers/rst/states.py b/python/helpers/docutils/parsers/rst/states.py
new file mode 100644
index 0000000..f3ad6793
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/states.py
@@ -0,0 +1,3054 @@
+# $Id: states.py 6314 2010-04-26 10:04:17Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This is the ``docutils.parsers.restructuredtext.states`` module, the core of
+the reStructuredText parser. It defines the following:
+
+:Classes:
+ - `RSTStateMachine`: reStructuredText parser's entry point.
+ - `NestedStateMachine`: recursive StateMachine.
+ - `RSTState`: reStructuredText State superclass.
+ - `Inliner`: For parsing inline markup.
+ - `Body`: Generic classifier of the first line of a block.
+ - `SpecializedBody`: Superclass for compound element members.
+  - `BulletList`: Second and subsequent bullet_list list_items.
+ - `DefinitionList`: Second+ definition_list_items.
+ - `EnumeratedList`: Second+ enumerated_list list_items.
+ - `FieldList`: Second+ fields.
+ - `OptionList`: Second+ option_list_items.
+ - `RFC2822List`: Second+ RFC2822-style fields.
+ - `ExtensionOptions`: Parses directive option fields.
+ - `Explicit`: Second+ explicit markup constructs.
+ - `SubstitutionDef`: For embedded directives in substitution definitions.
+ - `Text`: Classifier of second line of a text block.
+ - `SpecializedText`: Superclass for continuation lines of Text-variants.
+ - `Definition`: Second line of potential definition_list_item.
+ - `Line`: Second line of overlined section title or transition marker.
+ - `Struct`: An auxiliary collection class.
+
+:Exception classes:
+ - `MarkupError`
+ - `ParserError`
+ - `MarkupMismatch`
+
+:Functions:
+  - `escape2null()`: Return a string, backslash-escapes converted to nulls.
+ - `unescape()`: Return a string, nulls removed or restored to backslashes.
+
+:Attributes:
+ - `state_classes`: set of State classes used with `RSTStateMachine`.
+
+Parser Overview
+===============
+
+The reStructuredText parser is implemented as a recursive state machine,
+examining its input one line at a time. To understand how the parser works,
+please first become familiar with the `docutils.statemachine` module. In the
+description below, references are made to classes defined in this module;
+please see the individual classes for details.
+
+Parsing proceeds as follows:
+
+1. The state machine examines each line of input, checking each of the
+ transition patterns of the state `Body`, in order, looking for a match.
+ The implicit transitions (blank lines and indentation) are checked before
+ any others. The 'text' transition is a catch-all (matches anything).
+
+2. The method associated with the matched transition pattern is called.
+
+ A. Some transition methods are self-contained, appending elements to the
+ document tree (`Body.doctest` parses a doctest block). The parser's
+ current line index is advanced to the end of the element, and parsing
+ continues with step 1.
+
+ B. Other transition methods trigger the creation of a nested state machine,
+ whose job is to parse a compound construct ('indent' does a block quote,
+ 'bullet' does a bullet list, 'overline' does a section [first checking
+ for a valid section header], etc.).
+
+ - In the case of lists and explicit markup, a one-off state machine is
+ created and run to parse contents of the first item.
+
+ - A new state machine is created and its initial state is set to the
+ appropriate specialized state (`BulletList` in the case of the
+ 'bullet' transition; see `SpecializedBody` for more detail). This
+ state machine is run to parse the compound element (or series of
+ explicit markup elements), and returns as soon as a non-member element
+ is encountered. For example, the `BulletList` state machine ends as
+ soon as it encounters an element which is not a list item of that
+ bullet list. The optional omission of inter-element blank lines is
+ enabled by this nested state machine.
+
+ - The current line index is advanced to the end of the elements parsed,
+ and parsing continues with step 1.
+
+ C. The result of the 'text' transition depends on the next line of text.
+ The current state is changed to `Text`, under which the second line is
+ examined. If the second line is:
+
+ - Indented: The element is a definition list item, and parsing proceeds
+ similarly to step 2.B, using the `DefinitionList` state.
+
+ - A line of uniform punctuation characters: The element is a section
+ header; again, parsing proceeds as in step 2.B, and `Body` is still
+ used.
+
+ - Anything else: The element is a paragraph, which is examined for
+ inline markup and appended to the parent element. Processing
+ continues with step 1.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import re
+import roman
+from types import FunctionType, MethodType
+from docutils import nodes, statemachine, utils, urischemes
+from docutils import ApplicationError, DataError
+from docutils.statemachine import StateMachineWS, StateWS
+from docutils.nodes import fully_normalize_name as normalize_name
+from docutils.nodes import whitespace_normalize_name
+from docutils.utils import escape2null, unescape, column_width
+import docutils.parsers.rst
+from docutils.parsers.rst import directives, languages, tableparser, roles
+from docutils.parsers.rst.languages import en as _fallback_language_module
+
+
+class MarkupError(DataError): pass
+class UnknownInterpretedRoleError(DataError): pass
+class InterpretedRoleNotImplementedError(DataError): pass
+class ParserError(ApplicationError): pass
+class MarkupMismatch(Exception): pass
+
+
+class Struct:
+
+ """Stores data attributes for dotted-attribute access."""
+
+ def __init__(self, **keywordargs):
+ self.__dict__.update(keywordargs)
+
+
+class RSTStateMachine(StateMachineWS):
+
+ """
+ reStructuredText's master StateMachine.
+
+ The entry point to reStructuredText parsing is the `run()` method.
+ """
+
+ def run(self, input_lines, document, input_offset=0, match_titles=1,
+ inliner=None):
+ """
+ Parse `input_lines` and modify the `document` node in place.
+
+ Extend `StateMachineWS.run()`: set up parse-global data and
+ run the StateMachine.
+ """
+ self.language = languages.get_language(
+ document.settings.language_code)
+ self.match_titles = match_titles
+ if inliner is None:
+ inliner = Inliner()
+ inliner.init_customizations(document.settings)
+ self.memo = Struct(document=document,
+ reporter=document.reporter,
+ language=self.language,
+ title_styles=[],
+ section_level=0,
+ section_bubble_up_kludge=0,
+ inliner=inliner)
+ self.document = document
+ self.attach_observer(document.note_source)
+ self.reporter = self.memo.reporter
+ self.node = document
+ results = StateMachineWS.run(self, input_lines, input_offset,
+ input_source=document['source'])
+ assert results == [], 'RSTStateMachine.run() results should be empty!'
+ self.node = self.memo = None # remove unneeded references
+
+
+class NestedStateMachine(StateMachineWS):
+
+ """
+ StateMachine run from within other StateMachine runs, to parse nested
+ document structures.
+ """
+
+ def run(self, input_lines, input_offset, memo, node, match_titles=1):
+ """
+ Parse `input_lines` and populate a `docutils.nodes.document` instance.
+
+ Extend `StateMachineWS.run()`: set up document-wide data.
+ """
+ self.match_titles = match_titles
+ self.memo = memo
+ self.document = memo.document
+ self.attach_observer(self.document.note_source)
+ self.reporter = memo.reporter
+ self.language = memo.language
+ self.node = node
+ results = StateMachineWS.run(self, input_lines, input_offset)
+ assert results == [], ('NestedStateMachine.run() results should be '
+ 'empty!')
+ return results
+
+
+class RSTState(StateWS):
+
+ """
+ reStructuredText State superclass.
+
+ Contains methods used by all State subclasses.
+ """
+
+ nested_sm = NestedStateMachine
+ nested_sm_cache = []
+
+ def __init__(self, state_machine, debug=0):
+ self.nested_sm_kwargs = {'state_classes': state_classes,
+ 'initial_state': 'Body'}
+ StateWS.__init__(self, state_machine, debug)
+
+ def runtime_init(self):
+ StateWS.runtime_init(self)
+ memo = self.state_machine.memo
+ self.memo = memo
+ self.reporter = memo.reporter
+ self.inliner = memo.inliner
+ self.document = memo.document
+ self.parent = self.state_machine.node
+ # enable the reporter to determine source and source-line
+ if not hasattr(self.reporter, 'locator'):
+ self.reporter.locator = self.state_machine.get_source_and_line
+ # print "adding locator to reporter", self.state_machine.input_offset
+
+
+ def goto_line(self, abs_line_offset):
+ """
+ Jump to input line `abs_line_offset`, ignoring jumps past the end.
+ """
+ try:
+ self.state_machine.goto_line(abs_line_offset)
+ except EOFError:
+ pass
+
+ def no_match(self, context, transitions):
+ """
+ Override `StateWS.no_match` to generate a system message.
+
+ This code should never be run.
+ """
+ src, srcline = self.state_machine.get_source_and_line()
+ self.reporter.severe(
+ 'Internal error: no transition pattern match. State: "%s"; '
+ 'transitions: %s; context: %s; current line: %r.'
+ % (self.__class__.__name__, transitions, context,
+ self.state_machine.line),
+ source=src, line=srcline)
+ return context, None, []
+
+ def bof(self, context):
+ """Called at beginning of file."""
+ return [], []
+
+ def nested_parse(self, block, input_offset, node, match_titles=0,
+ state_machine_class=None, state_machine_kwargs=None):
+ """
+ Create a new StateMachine rooted at `node` and run it over the input
+ `block`.
+ """
+ use_default = 0
+ if state_machine_class is None:
+ state_machine_class = self.nested_sm
+ use_default += 1
+ if state_machine_kwargs is None:
+ state_machine_kwargs = self.nested_sm_kwargs
+ use_default += 1
+ block_length = len(block)
+
+ state_machine = None
+ if use_default == 2:
+ try:
+ state_machine = self.nested_sm_cache.pop()
+ except IndexError:
+ pass
+ if not state_machine:
+ state_machine = state_machine_class(debug=self.debug,
+ **state_machine_kwargs)
+ state_machine.run(block, input_offset, memo=self.memo,
+ node=node, match_titles=match_titles)
+ if use_default == 2:
+ self.nested_sm_cache.append(state_machine)
+ else:
+ state_machine.unlink()
+ new_offset = state_machine.abs_line_offset()
+ # No `block.parent` implies disconnected -- lines aren't in sync:
+ if block.parent and (len(block) - block_length) != 0:
+ # Adjustment for block if modified in nested parse:
+ self.state_machine.next_line(len(block) - block_length)
+ return new_offset
+
+ def nested_list_parse(self, block, input_offset, node, initial_state,
+ blank_finish,
+ blank_finish_state=None,
+ extra_settings={},
+ match_titles=0,
+ state_machine_class=None,
+ state_machine_kwargs=None):
+ """
+ Create a new StateMachine rooted at `node` and run it over the input
+ `block`. Also keep track of optional intermediate blank lines and the
+ required final one.
+ """
+ if state_machine_class is None:
+ state_machine_class = self.nested_sm
+ if state_machine_kwargs is None:
+ state_machine_kwargs = self.nested_sm_kwargs.copy()
+ state_machine_kwargs['initial_state'] = initial_state
+ state_machine = state_machine_class(debug=self.debug,
+ **state_machine_kwargs)
+ if blank_finish_state is None:
+ blank_finish_state = initial_state
+ state_machine.states[blank_finish_state].blank_finish = blank_finish
+ for key, value in extra_settings.items():
+ setattr(state_machine.states[initial_state], key, value)
+ state_machine.run(block, input_offset, memo=self.memo,
+ node=node, match_titles=match_titles)
+ blank_finish = state_machine.states[blank_finish_state].blank_finish
+ state_machine.unlink()
+ return state_machine.abs_line_offset(), blank_finish
+
+ def section(self, title, source, style, lineno, messages):
+ """Check for a valid subsection and create one if it checks out."""
+ if self.check_subsection(source, style, lineno):
+ self.new_subsection(title, lineno, messages)
+
+ def check_subsection(self, source, style, lineno):
+ """
+ Check for a valid subsection header. Return 1 (true) or None (false).
+
+ When a new section is reached that isn't a subsection of the current
+ section, back up the line count (use ``previous_line(-x)``), then
+ ``raise EOFError``. The current StateMachine will finish, then the
+ calling StateMachine can re-examine the title. This will work its way
+ back up the calling chain until the correct section level is reached.
+
+ @@@ Alternative: Evaluate the title, store the title info & level, and
+ back up the chain until that level is reached. Store in memo? Or
+ return in results?
+
+ :Exception: `EOFError` when a sibling or supersection is encountered.
+ """
+ memo = self.memo
+ title_styles = memo.title_styles
+ mylevel = memo.section_level
+ try: # check for existing title style
+ level = title_styles.index(style) + 1
+ except ValueError: # new title style
+ if len(title_styles) == memo.section_level: # new subsection
+ title_styles.append(style)
+ return 1
+ else: # not at lowest level
+ self.parent += self.title_inconsistent(source, lineno)
+ return None
+ if level <= mylevel: # sibling or supersection
+ memo.section_level = level # bubble up to parent section
+ if len(style) == 2:
+ memo.section_bubble_up_kludge = 1
+ # back up 2 lines for underline title, 3 for overline title
+ self.state_machine.previous_line(len(style) + 1)
+ raise EOFError # let parent section re-evaluate
+ if level == mylevel + 1: # immediate subsection
+ return 1
+ else: # invalid subsection
+ self.parent += self.title_inconsistent(source, lineno)
+ return None
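The level bookkeeping above can be sketched in isolation. This is a simplified, hypothetical stand-in (not part of docutils): `title_styles` is a plain list playing the role of `memo.title_styles`, and the function returns a level instead of raising `EOFError` for siblings and supersections.

```python
def classify_style(style, title_styles, current_level):
    """Simplified sketch of check_subsection's level logic.

    Returns the section level for `style`, or None for an
    inconsistent title style.  A new style is accepted only when it
    appears exactly one level below the current section."""
    if style in title_styles:               # existing title style
        return title_styles.index(style) + 1
    if len(title_styles) == current_level:  # new style, one level deeper
        title_styles.append(style)
        return len(title_styles)
    return None                             # new style at the wrong depth

styles = []
level1 = classify_style('=', styles, 0)  # first style seen: level 1
level2 = classify_style('-', styles, 1)  # new subsection: level 2
bubble = classify_style('=', styles, 2)  # known style: sibling/supersection
bad = classify_style('~', styles, 1)     # skips a level: inconsistent
```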
+
+ def title_inconsistent(self, sourcetext, lineno):
+ src, srcline = self.state_machine.get_source_and_line(lineno)
+ error = self.reporter.severe(
+ 'Title level inconsistent:', nodes.literal_block('', sourcetext),
+ source=src, line=srcline)
+ return error
+
+ def new_subsection(self, title, lineno, messages):
+ """Append new subsection to document tree. On return, check level."""
+ memo = self.memo
+ mylevel = memo.section_level
+ memo.section_level += 1
+ section_node = nodes.section()
+ self.parent += section_node
+ textnodes, title_messages = self.inline_text(title, lineno)
+ titlenode = nodes.title(title, '', *textnodes)
+ name = normalize_name(titlenode.astext())
+ section_node['names'].append(name)
+ section_node += titlenode
+ section_node += messages
+ section_node += title_messages
+ self.document.note_implicit_target(section_node, section_node)
+ offset = self.state_machine.line_offset + 1
+ absoffset = self.state_machine.abs_line_offset() + 1
+ newabsoffset = self.nested_parse(
+ self.state_machine.input_lines[offset:], input_offset=absoffset,
+ node=section_node, match_titles=1)
+ self.goto_line(newabsoffset)
+ if memo.section_level <= mylevel: # can't handle next section?
+ raise EOFError # bubble up to supersection
+ # reset section_level; next pass will detect it properly
+ memo.section_level = mylevel
+
+ def paragraph(self, lines, lineno):
+ """
+ Return a list (paragraph & messages) & a boolean: literal_block next?
+ """
+ data = '\n'.join(lines).rstrip()
+ if re.search(r'(?<!\\)(\\\\)*::$', data):
+ if len(data) == 2:
+ return [], 1
+ elif data[-3] in ' \n':
+ text = data[:-3].rstrip()
+ else:
+ text = data[:-1]
+ literalnext = 1
+ else:
+ text = data
+ literalnext = 0
+ textnodes, messages = self.inline_text(text, lineno)
+ p = nodes.paragraph(data, '', *textnodes)
+ p.source, p.line = self.state_machine.get_source_and_line(lineno)
+ return [p] + messages, literalnext
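The `::` test at the top of `paragraph` can be exercised on its own. A minimal sketch, restating the same regular expression so it runs standalone: an unescaped `::` at the end of the text, optionally preceded by an even number of backslashes, flags a following literal block.

```python
import re

# Same pattern as above: '::' at end of text, not preceded by a lone
# (escaping) backslash -- pairs of backslashes may come before it.
literal_marker = re.compile(r'(?<!\\)(\\\\)*::$')

def literal_block_next(data):
    return bool(literal_marker.search(data))
```

Per the branches above, `'Example ::'` keeps the text `'Example'` while `'Example::'` keeps `'Example:'`.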
+
+ def inline_text(self, text, lineno):
+ """
+ Return 2 lists: nodes (text and inline elements), and system_messages.
+ """
+ return self.inliner.parse(text, lineno, self.memo, self.parent)
+
+ def unindent_warning(self, node_name):
+ # the actual problem is one line below the current line
+ src, srcline = self.state_machine.get_source_and_line()
+ return self.reporter.warning('%s ends without a blank line; '
+ 'unexpected unindent.' % node_name,
+ source=src, line=srcline+1)
+
+
+def build_regexp(definition, compile=1):
+ """
+ Build, compile and return a regular expression based on `definition`.
+
+ :Parameter: `definition`: a 4-tuple (group name, prefix, suffix, parts),
+ where "parts" is a list of regular expressions and/or regular
+ expression definitions to be joined into an or-group.
+ """
+ name, prefix, suffix, parts = definition
+ part_strings = []
+ for part in parts:
+ if type(part) is tuple:
+ part_strings.append(build_regexp(part, None))
+ else:
+ part_strings.append(part)
+ or_group = '|'.join(part_strings)
+ regexp = '%(prefix)s(?P<%(name)s>%(or_group)s)%(suffix)s' % locals()
+ if compile:
+ return re.compile(regexp, re.UNICODE)
+ else:
+ return regexp
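To see what a definition tuple expands to, here is a standalone sketch (a trimmed restatement of `build_regexp`, so the example runs without the rest of the module) applied to a toy two-alternative definition:

```python
import re

def build_regexp(definition, compile_pattern=True):
    # Recursively join the parts into prefix(?P<name>a|b|...)suffix.
    name, prefix, suffix, parts = definition
    part_strings = [build_regexp(part, False) if isinstance(part, tuple)
                    else part
                    for part in parts]
    regexp = '%s(?P<%s>%s)%s' % (prefix, name, '|'.join(part_strings), suffix)
    return re.compile(regexp, re.UNICODE) if compile_pattern else regexp

# Toy definition: a 'start' group matching ** (strong) or * (emphasis),
# with the non-whitespace lookahead as the suffix.
toy = ('start', '', r'(?![ \n])', [r'\*\*', r'\*(?!\*)'])
pattern = build_regexp(toy)
```

Matching `'**strong'` yields `'**'` in the `start` group; `'* bullet'` fails the lookahead.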
+
+
+class Inliner:
+
+ """
+ Parse inline markup; call the `parse()` method.
+ """
+
+ def __init__(self):
+ self.implicit_dispatch = [(self.patterns.uri, self.standalone_uri),]
+ """List of (pattern, bound method) tuples, used by
+ `self.implicit_inline`."""
+
+ def init_customizations(self, settings):
+ """Setting-based customizations; run when parsing begins."""
+ if settings.pep_references:
+ self.implicit_dispatch.append((self.patterns.pep,
+ self.pep_reference))
+ if settings.rfc_references:
+ self.implicit_dispatch.append((self.patterns.rfc,
+ self.rfc_reference))
+
+ def parse(self, text, lineno, memo, parent):
+ # Needs to be refactored for nested inline markup.
+ # Add nested_parse() method?
+ """
+ Return 2 lists: nodes (text and inline elements), and system_messages.
+
+ Using `self.patterns.initial`, a pattern which matches start-strings
+ (emphasis, strong, interpreted, phrase reference, literal,
+ substitution reference, and inline target) and complete constructs
+ (simple reference, footnote reference), search for a candidate. When
+ one is found, check for validity (e.g., not a quoted '*' character).
+ If valid, search for the corresponding end string if applicable, and
+ check it for validity. If not found or invalid, generate a warning
+ and ignore the start-string. Implicit inline markup (e.g. standalone
+ URIs) is found last.
+ """
+ self.reporter = memo.reporter
+ self.document = memo.document
+ self.language = memo.language
+ self.parent = parent
+ pattern_search = self.patterns.initial.search
+ dispatch = self.dispatch
+ remaining = escape2null(text)
+ processed = []
+ unprocessed = []
+ messages = []
+ while remaining:
+ match = pattern_search(remaining)
+ if match:
+ groups = match.groupdict()
+ method = dispatch[groups['start'] or groups['backquote']
+ or groups['refend'] or groups['fnend']]
+ before, inlines, remaining, sysmessages = method(self, match,
+ lineno)
+ unprocessed.append(before)
+ messages += sysmessages
+ if inlines:
+ processed += self.implicit_inline(''.join(unprocessed),
+ lineno)
+ processed += inlines
+ unprocessed = []
+ else:
+ break
+ remaining = ''.join(unprocessed) + remaining
+ if remaining:
+ processed += self.implicit_inline(remaining, lineno)
+ return processed, messages
+
+ openers = u'\'"([{<\u2018\u201c\xab\u00a1\u00bf' # see quoted_start below
+ closers = u'\'")]}>\u2019\u201d\xbb!?'
+ unicode_delimiters = u'\u2010\u2011\u2012\u2013\u2014\u00a0'
+ start_string_prefix = (u'((?<=^)|(?<=[-/: \\n\u2019%s%s]))'
+ % (re.escape(unicode_delimiters),
+ re.escape(openers)))
+ end_string_suffix = (r'((?=$)|(?=[-/:.,; \n\x00%s%s]))'
+ % (re.escape(unicode_delimiters),
+ re.escape(closers)))
+ non_whitespace_before = r'(?<![ \n])'
+ non_whitespace_escape_before = r'(?<![ \n\x00])'
+ non_whitespace_after = r'(?![ \n])'
+ # Alphanumerics with isolated internal [-._+:] chars (i.e. not 2 together):
+ simplename = r'(?:(?!_)\w)+(?:[-._+:](?:(?!_)\w)+)*'
+ # Valid URI characters (see RFC 2396 & RFC 2732);
+ # final \x00 allows backslash escapes in URIs:
+ uric = r"""[-_.!~*'()[\];/:@&=+$,%a-zA-Z0-9\x00]"""
+ # Delimiter indicating the end of a URI (not part of the URI):
+ uri_end_delim = r"""[>]"""
+ # Last URI character; same as uric but no punctuation:
+ urilast = r"""[_~*/=+a-zA-Z0-9]"""
+ # End of a URI (either 'urilast' or 'uric followed by a
+ # uri_end_delim'):
+ uri_end = r"""(?:%(urilast)s|%(uric)s(?=%(uri_end_delim)s))""" % locals()
+ emailc = r"""[-_!~*'{|}/#?^`&=+$%a-zA-Z0-9\x00]"""
+ email_pattern = r"""
+ %(emailc)s+(?:\.%(emailc)s+)* # name
+ (?<!\x00)@ # at
+ %(emailc)s+(?:\.%(emailc)s*)* # host
+ %(uri_end)s # final URI char
+ """
+ parts = ('initial_inline', start_string_prefix, '',
+ [('start', '', non_whitespace_after, # simple start-strings
+ [r'\*\*', # strong
+ r'\*(?!\*)', # emphasis but not strong
+ r'``', # literal
+ r'_`', # inline internal target
+ r'\|(?!\|)'] # substitution reference
+ ),
+ ('whole', '', end_string_suffix, # whole constructs
+ [# reference name & end-string
+ r'(?P<refname>%s)(?P<refend>__?)' % simplename,
+ ('footnotelabel', r'\[', r'(?P<fnend>\]_)',
+ [r'[0-9]+', # manually numbered
+ r'\#(%s)?' % simplename, # auto-numbered (w/ label?)
+ r'\*', # auto-symbol
+ r'(?P<citationlabel>%s)' % simplename] # citation reference
+ )
+ ]
+ ),
+ ('backquote', # interpreted text or phrase reference
+ '(?P<role>(:%s:)?)' % simplename, # optional role
+ non_whitespace_after,
+ ['`(?!`)'] # but not literal
+ )
+ ]
+ )
+ patterns = Struct(
+ initial=build_regexp(parts),
+ emphasis=re.compile(non_whitespace_escape_before
+ + r'(\*)' + end_string_suffix),
+ strong=re.compile(non_whitespace_escape_before
+ + r'(\*\*)' + end_string_suffix),
+ interpreted_or_phrase_ref=re.compile(
+ r"""
+ %(non_whitespace_escape_before)s
+ (
+ `
+ (?P<suffix>
+ (?P<role>:%(simplename)s:)?
+ (?P<refend>__?)?
+ )
+ )
+ %(end_string_suffix)s
+ """ % locals(), re.VERBOSE | re.UNICODE),
+ embedded_uri=re.compile(
+ r"""
+ (
+ (?:[ \n]+|^) # spaces or beginning of line/string
+ < # open bracket
+ %(non_whitespace_after)s
+ ([^<>\x00]+) # anything but angle brackets & nulls
+ %(non_whitespace_before)s
+ > # close bracket w/o whitespace before
+ )
+ $ # end of string
+ """ % locals(), re.VERBOSE),
+ literal=re.compile(non_whitespace_before + '(``)'
+ + end_string_suffix),
+ target=re.compile(non_whitespace_escape_before
+ + r'(`)' + end_string_suffix),
+ substitution_ref=re.compile(non_whitespace_escape_before
+ + r'(\|_{0,2})'
+ + end_string_suffix),
+ email=re.compile(email_pattern % locals() + '$', re.VERBOSE),
+ uri=re.compile(
+ (r"""
+ %(start_string_prefix)s
+ (?P<whole>
+ (?P<absolute> # absolute URI
+ (?P<scheme> # scheme (http, ftp, mailto)
+ [a-zA-Z][a-zA-Z0-9.+-]*
+ )
+ :
+ (
+ ( # either:
+ (//?)? # hierarchical URI
+ %(uric)s* # URI characters
+ %(uri_end)s # final URI char
+ )
+ ( # optional query
+ \?%(uric)s*
+ %(uri_end)s
+ )?
+ ( # optional fragment
+ \#%(uric)s*
+ %(uri_end)s
+ )?
+ )
+ )
+ | # *OR*
+ (?P<email> # email address
+ """ + email_pattern + r"""
+ )
+ )
+ %(end_string_suffix)s
+ """) % locals(), re.VERBOSE),
+ pep=re.compile(
+ r"""
+ %(start_string_prefix)s
+ (
+ (pep-(?P<pepnum1>\d+)(.txt)?) # reference to source file
+ |
+ (PEP\s+(?P<pepnum2>\d+)) # reference by name
+ )
+ %(end_string_suffix)s""" % locals(), re.VERBOSE),
+ rfc=re.compile(
+ r"""
+ %(start_string_prefix)s
+ (RFC(-|\s+)?(?P<rfcnum>\d+))
+ %(end_string_suffix)s""" % locals(), re.VERBOSE))
+
+ def quoted_start(self, match):
+ """Return 1 if inline markup start-string is 'quoted', 0 if not."""
+ string = match.string
+ start = match.start()
+ end = match.end()
+ if start == 0: # start-string at beginning of text
+ return 0
+ prestart = string[start - 1]
+ try:
+ poststart = string[end]
+ if self.openers.index(prestart) \
+ == self.closers.index(poststart): # quoted
+ return 1
+ except IndexError: # start-string at end of text
+ return 1
+ except ValueError: # not quoted
+ pass
+ return 0
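The opener/closer pairing used by `quoted_start` can be shown with a small stand-alone check (shortened ASCII-only opener/closer strings for the demo; the class attributes above also include Unicode quote characters):

```python
openers = '\'"([{<'
closers = '\'")]}>'

def is_quoted(prestart, poststart):
    # Quoted when the surrounding characters sit at the same index in the
    # two strings, i.e. they form a matching opener/closer pair.
    try:
        return openers.index(prestart) == closers.index(poststart)
    except ValueError:  # either character is not a quote/bracket at all
        return False
```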
+
+ def inline_obj(self, match, lineno, end_pattern, nodeclass,
+ restore_backslashes=0):
+ string = match.string
+ matchstart = match.start('start')
+ matchend = match.end('start')
+ if self.quoted_start(match):
+ return (string[:matchend], [], string[matchend:], [], '')
+ endmatch = end_pattern.search(string[matchend:])
+ if endmatch and endmatch.start(1): # 1 or more chars
+ text = unescape(endmatch.string[:endmatch.start(1)],
+ restore_backslashes)
+ textend = matchend + endmatch.end(1)
+ rawsource = unescape(string[matchstart:textend], 1)
+ return (string[:matchstart], [nodeclass(rawsource, text)],
+ string[textend:], [], endmatch.group(1))
+ msg = self.reporter.warning(
+ 'Inline %s start-string without end-string.'
+ % nodeclass.__name__, line=lineno)
+ text = unescape(string[matchstart:matchend], 1)
+ rawsource = unescape(string[matchstart:matchend], 1)
+ prb = self.problematic(text, rawsource, msg)
+ return string[:matchstart], [prb], string[matchend:], [msg], ''
+
+ def problematic(self, text, rawsource, message):
+ msgid = self.document.set_id(message, self.parent)
+ problematic = nodes.problematic(rawsource, text, refid=msgid)
+ prbid = self.document.set_id(problematic)
+ message.add_backref(prbid)
+ return problematic
+
+ def emphasis(self, match, lineno):
+ before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+ match, lineno, self.patterns.emphasis, nodes.emphasis)
+ return before, inlines, remaining, sysmessages
+
+ def strong(self, match, lineno):
+ before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+ match, lineno, self.patterns.strong, nodes.strong)
+ return before, inlines, remaining, sysmessages
+
+ def interpreted_or_phrase_ref(self, match, lineno):
+ end_pattern = self.patterns.interpreted_or_phrase_ref
+ string = match.string
+ matchstart = match.start('backquote')
+ matchend = match.end('backquote')
+ rolestart = match.start('role')
+ role = match.group('role')
+ position = ''
+ if role:
+ role = role[1:-1]
+ position = 'prefix'
+ elif self.quoted_start(match):
+ return (string[:matchend], [], string[matchend:], [])
+ endmatch = end_pattern.search(string[matchend:])
+ if endmatch and endmatch.start(1): # 1 or more chars
+ textend = matchend + endmatch.end()
+ if endmatch.group('role'):
+ if role:
+ msg = self.reporter.warning(
+ 'Multiple roles in interpreted text (both '
+ 'prefix and suffix present; only one allowed).',
+ line=lineno)
+ text = unescape(string[rolestart:textend], 1)
+ prb = self.problematic(text, text, msg)
+ return string[:rolestart], [prb], string[textend:], [msg]
+ role = endmatch.group('suffix')[1:-1]
+ position = 'suffix'
+ escaped = endmatch.string[:endmatch.start(1)]
+ rawsource = unescape(string[matchstart:textend], 1)
+ if rawsource[-1:] == '_':
+ if role:
+ msg = self.reporter.warning(
+ 'Mismatch: both interpreted text role %s and '
+ 'reference suffix.' % position, line=lineno)
+ text = unescape(string[rolestart:textend], 1)
+ prb = self.problematic(text, text, msg)
+ return string[:rolestart], [prb], string[textend:], [msg]
+ return self.phrase_ref(string[:matchstart], string[textend:],
+ rawsource, escaped, unescape(escaped))
+ else:
+ rawsource = unescape(string[rolestart:textend], 1)
+ nodelist, messages = self.interpreted(rawsource, escaped, role,
+ lineno)
+ return (string[:rolestart], nodelist,
+ string[textend:], messages)
+ msg = self.reporter.warning(
+ 'Inline interpreted text or phrase reference start-string '
+ 'without end-string.', line=lineno)
+ text = unescape(string[matchstart:matchend], 1)
+ prb = self.problematic(text, text, msg)
+ return string[:matchstart], [prb], string[matchend:], [msg]
+
+ def phrase_ref(self, before, after, rawsource, escaped, text):
+ match = self.patterns.embedded_uri.search(escaped)
+ if match:
+ text = unescape(escaped[:match.start(0)])
+ uri_text = match.group(2)
+ uri = ''.join(uri_text.split())
+ uri = self.adjust_uri(uri)
+ if uri:
+ target = nodes.target(match.group(1), refuri=uri)
+ else:
+ raise ApplicationError('problem with URI: %r' % uri_text)
+ if not text:
+ text = uri
+ else:
+ target = None
+ refname = normalize_name(text)
+ reference = nodes.reference(rawsource, text,
+ name=whitespace_normalize_name(text))
+ node_list = [reference]
+ if rawsource[-2:] == '__':
+ if target:
+ reference['refuri'] = uri
+ else:
+ reference['anonymous'] = 1
+ else:
+ if target:
+ reference['refuri'] = uri
+ target['names'].append(refname)
+ self.document.note_explicit_target(target, self.parent)
+ node_list.append(target)
+ else:
+ reference['refname'] = refname
+ self.document.note_refname(reference)
+ return before, node_list, after, []
+
+ def adjust_uri(self, uri):
+ match = self.patterns.email.match(uri)
+ if match:
+ return 'mailto:' + uri
+ else:
+ return uri
+
+ def interpreted(self, rawsource, text, role, lineno):
+ role_fn, messages = roles.role(role, self.language, lineno,
+ self.reporter)
+ if role_fn:
+ nodes, messages2 = role_fn(role, rawsource, text, lineno, self)
+ return nodes, messages + messages2
+ else:
+ msg = self.reporter.error(
+ 'Unknown interpreted text role "%s".' % role,
+ line=lineno)
+ return ([self.problematic(rawsource, rawsource, msg)],
+ messages + [msg])
+
+ def literal(self, match, lineno):
+ before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+ match, lineno, self.patterns.literal, nodes.literal,
+ restore_backslashes=1)
+ return before, inlines, remaining, sysmessages
+
+ def inline_internal_target(self, match, lineno):
+ before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+ match, lineno, self.patterns.target, nodes.target)
+ if inlines and isinstance(inlines[0], nodes.target):
+ assert len(inlines) == 1
+ target = inlines[0]
+ name = normalize_name(target.astext())
+ target['names'].append(name)
+ self.document.note_explicit_target(target, self.parent)
+ return before, inlines, remaining, sysmessages
+
+ def substitution_reference(self, match, lineno):
+ before, inlines, remaining, sysmessages, endstring = self.inline_obj(
+ match, lineno, self.patterns.substitution_ref,
+ nodes.substitution_reference)
+ if len(inlines) == 1:
+ subref_node = inlines[0]
+ if isinstance(subref_node, nodes.substitution_reference):
+ subref_text = subref_node.astext()
+ self.document.note_substitution_ref(subref_node, subref_text)
+ if endstring[-1:] == '_':
+ reference_node = nodes.reference(
+ '|%s%s' % (subref_text, endstring), '')
+ if endstring[-2:] == '__':
+ reference_node['anonymous'] = 1
+ else:
+ reference_node['refname'] = normalize_name(subref_text)
+ self.document.note_refname(reference_node)
+ reference_node += subref_node
+ inlines = [reference_node]
+ return before, inlines, remaining, sysmessages
+
+ def footnote_reference(self, match, lineno):
+ """
+ Handles `nodes.footnote_reference` and `nodes.citation_reference`
+ elements.
+ """
+ label = match.group('footnotelabel')
+ refname = normalize_name(label)
+ string = match.string
+ before = string[:match.start('whole')]
+ remaining = string[match.end('whole'):]
+ if match.group('citationlabel'):
+ refnode = nodes.citation_reference('[%s]_' % label,
+ refname=refname)
+ refnode += nodes.Text(label)
+ self.document.note_citation_ref(refnode)
+ else:
+ refnode = nodes.footnote_reference('[%s]_' % label)
+ if refname[0] == '#':
+ refname = refname[1:]
+ refnode['auto'] = 1
+ self.document.note_autofootnote_ref(refnode)
+ elif refname == '*':
+ refname = ''
+ refnode['auto'] = '*'
+ self.document.note_symbol_footnote_ref(
+ refnode)
+ else:
+ refnode += nodes.Text(label)
+ if refname:
+ refnode['refname'] = refname
+ self.document.note_footnote_ref(refnode)
+ if utils.get_trim_footnote_ref_space(self.document.settings):
+ before = before.rstrip()
+ return (before, [refnode], remaining, [])
+
+ def reference(self, match, lineno, anonymous=None):
+ referencename = match.group('refname')
+ refname = normalize_name(referencename)
+ referencenode = nodes.reference(
+ referencename + match.group('refend'), referencename,
+ name=whitespace_normalize_name(referencename))
+ if anonymous:
+ referencenode['anonymous'] = 1
+ else:
+ referencenode['refname'] = refname
+ self.document.note_refname(referencenode)
+ string = match.string
+ matchstart = match.start('whole')
+ matchend = match.end('whole')
+ return (string[:matchstart], [referencenode], string[matchend:], [])
+
+ def anonymous_reference(self, match, lineno):
+ return self.reference(match, lineno, anonymous=1)
+
+ def standalone_uri(self, match, lineno):
+ if (not match.group('scheme')
+ or match.group('scheme').lower() in urischemes.schemes):
+ if match.group('email'):
+ addscheme = 'mailto:'
+ else:
+ addscheme = ''
+ text = match.group('whole')
+ unescaped = unescape(text, 0)
+ return [nodes.reference(unescape(text, 1), unescaped,
+ refuri=addscheme + unescaped)]
+ else: # not a valid scheme
+ raise MarkupMismatch
+
+ def pep_reference(self, match, lineno):
+ text = match.group(0)
+ if text.startswith('pep-'):
+ pepnum = int(match.group('pepnum1'))
+ elif text.startswith('PEP'):
+ pepnum = int(match.group('pepnum2'))
+ else:
+ raise MarkupMismatch
+ ref = (self.document.settings.pep_base_url
+ + self.document.settings.pep_file_url_template % pepnum)
+ unescaped = unescape(text, 0)
+ return [nodes.reference(unescape(text, 1), unescaped, refuri=ref)]
+
+ rfc_url = 'rfc%d.html'
+
+ def rfc_reference(self, match, lineno):
+ text = match.group(0)
+ if text.startswith('RFC'):
+ rfcnum = int(match.group('rfcnum'))
+ ref = self.document.settings.rfc_base_url + self.rfc_url % rfcnum
+ else:
+ raise MarkupMismatch
+ unescaped = unescape(text, 0)
+ return [nodes.reference(unescape(text, 1), unescaped, refuri=ref)]
+
+ def implicit_inline(self, text, lineno):
+ """
+ Check each of the patterns in `self.implicit_dispatch` for a match,
+ and dispatch to the stored method for the pattern. Recursively check
+ the text before and after the match. Return a list of `nodes.Text`
+ and inline element nodes.
+ """
+ if not text:
+ return []
+ for pattern, method in self.implicit_dispatch:
+ match = pattern.search(text)
+ if match:
+ try:
+ # Must recurse on strings before *and* after the match;
+ # there may be multiple patterns.
+ return (self.implicit_inline(text[:match.start()], lineno)
+ + method(match, lineno) +
+ self.implicit_inline(text[match.end():], lineno))
+ except MarkupMismatch:
+ pass
+ return [nodes.Text(unescape(text), rawsource=unescape(text, 1))]
+
+ dispatch = {'*': emphasis,
+ '**': strong,
+ '`': interpreted_or_phrase_ref,
+ '``': literal,
+ '_`': inline_internal_target,
+ ']_': footnote_reference,
+ '|': substitution_reference,
+ '_': reference,
+ '__': anonymous_reference}
+
+
+def _loweralpha_to_int(s, _zero=(ord('a')-1)):
+ return ord(s) - _zero
+
+def _upperalpha_to_int(s, _zero=(ord('A')-1)):
+ return ord(s) - _zero
+
+def _lowerroman_to_int(s):
+ return roman.fromRoman(s.upper())
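A quick check of the offset arithmetic in these helpers (restated so the snippet runs alone; the Roman-numeral case depends on the third-party `roman` module and is left out):

```python
def loweralpha_to_int(s, _zero=ord('a') - 1):
    # 'a' -> 1, 'b' -> 2, ..., 'z' -> 26
    return ord(s) - _zero

def upperalpha_to_int(s, _zero=ord('A') - 1):
    # Same offset trick for upper-case enumerators.
    return ord(s) - _zero
```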
+
+
+class Body(RSTState):
+
+ """
+ Generic classifier of the first line of a block.
+ """
+
+ double_width_pad_char = tableparser.TableParser.double_width_pad_char
+ """Padding character for East Asian double-width text."""
+
+ enum = Struct()
+ """Enumerated list parsing information."""
+
+ enum.formatinfo = {
+ 'parens': Struct(prefix='(', suffix=')', start=1, end=-1),
+ 'rparen': Struct(prefix='', suffix=')', start=0, end=-1),
+ 'period': Struct(prefix='', suffix='.', start=0, end=-1)}
+ enum.formats = enum.formatinfo.keys()
+ enum.sequences = ['arabic', 'loweralpha', 'upperalpha',
+ 'lowerroman', 'upperroman'] # ORDERED!
+ enum.sequencepats = {'arabic': '[0-9]+',
+ 'loweralpha': '[a-z]',
+ 'upperalpha': '[A-Z]',
+ 'lowerroman': '[ivxlcdm]+',
+ 'upperroman': '[IVXLCDM]+',}
+ enum.converters = {'arabic': int,
+ 'loweralpha': _loweralpha_to_int,
+ 'upperalpha': _upperalpha_to_int,
+ 'lowerroman': _lowerroman_to_int,
+ 'upperroman': roman.fromRoman}
+
+ enum.sequenceregexps = {}
+ for sequence in enum.sequences:
+ enum.sequenceregexps[sequence] = re.compile(
+ enum.sequencepats[sequence] + '$')
+
+ grid_table_top_pat = re.compile(r'\+-[-+]+-\+ *$')
+ """Matches the top (& bottom) of a full table)."""
+
+ simple_table_top_pat = re.compile('=+( +=+)+ *$')
+ """Matches the top of a simple table."""
+
+ simple_table_border_pat = re.compile('=+[ =]*$')
+ """Matches the bottom & header bottom of a simple table."""
+
+ pats = {}
+ """Fragments of patterns used by transitions."""
+
+ pats['nonalphanum7bit'] = '[!-/:-@[-`{-~]'
+ pats['alpha'] = '[a-zA-Z]'
+ pats['alphanum'] = '[a-zA-Z0-9]'
+ pats['alphanumplus'] = '[a-zA-Z0-9_-]'
+ pats['enum'] = ('(%(arabic)s|%(loweralpha)s|%(upperalpha)s|%(lowerroman)s'
+ '|%(upperroman)s|#)' % enum.sequencepats)
+ pats['optname'] = '%(alphanum)s%(alphanumplus)s*' % pats
+ # @@@ Loosen up the pattern? Allow Unicode?
+ pats['optarg'] = '(%(alpha)s%(alphanumplus)s*|<[^<>]+>)' % pats
+ pats['shortopt'] = r'(-|\+)%(alphanum)s( ?%(optarg)s)?' % pats
+ pats['longopt'] = r'(--|/)%(optname)s([ =]%(optarg)s)?' % pats
+ pats['option'] = r'(%(shortopt)s|%(longopt)s)' % pats
+
+ for format in enum.formats:
+ pats[format] = '(?P<%s>%s%s%s)' % (
+ format, re.escape(enum.formatinfo[format].prefix),
+ pats['enum'], re.escape(enum.formatinfo[format].suffix))
+
+ patterns = {
+ 'bullet': u'[-+*\u2022\u2023\u2043]( +|$)',
+ 'enumerator': r'(%(parens)s|%(rparen)s|%(period)s)( +|$)' % pats,
+ 'field_marker': r':(?![: ])([^:\\]|\\.)*(?<! ):( +|$)',
+ 'option_marker': r'%(option)s(, %(option)s)*( +| ?$)' % pats,
+ 'doctest': r'>>>( +|$)',
+ 'line_block': r'\|( +|$)',
+ 'grid_table_top': grid_table_top_pat,
+ 'simple_table_top': simple_table_top_pat,
+ 'explicit_markup': r'\.\.( +|$)',
+ 'anonymous': r'__( +|$)',
+ 'line': r'(%(nonalphanum7bit)s)\1* *$' % pats,
+ 'text': r''}
+ initial_transitions = (
+ 'bullet',
+ 'enumerator',
+ 'field_marker',
+ 'option_marker',
+ 'doctest',
+ 'line_block',
+ 'grid_table_top',
+ 'simple_table_top',
+ 'explicit_markup',
+ 'anonymous',
+ 'line',
+ 'text')
+
+ def indent(self, match, context, next_state):
+ """Block quote."""
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_indented()
+ elements = self.block_quote(indented, line_offset)
+ self.parent += elements
+ if not blank_finish:
+ self.parent += self.unindent_warning('Block quote')
+ return context, next_state, []
+
+ def block_quote(self, indented, line_offset):
+ elements = []
+ while indented:
+ (blockquote_lines,
+ attribution_lines,
+ attribution_offset,
+ indented,
+ new_line_offset) = self.split_attribution(indented, line_offset)
+ blockquote = nodes.block_quote()
+ self.nested_parse(blockquote_lines, line_offset, blockquote)
+ elements.append(blockquote)
+ if attribution_lines:
+ attribution, messages = self.parse_attribution(
+ attribution_lines, attribution_offset)
+ blockquote += attribution
+ elements += messages
+ line_offset = new_line_offset
+ while indented and not indented[0]:
+ indented = indented[1:]
+ line_offset += 1
+ return elements
+
+ # U+2014 is an em-dash:
+ attribution_pattern = re.compile(u'(---?(?!-)|\u2014) *(?=[^ \\n])')
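The marker accepts two or three dashes (but not four or more, which would be a section underline) or a single em-dash, then optional spaces before the attribution text. A few checks against the same pattern (restated so the snippet runs on its own), using `match` as `split_attribution` does:

```python
import re

# Two or three ASCII dashes (not more), or U+2014 em-dash, then optional
# spaces, then the first character of the attribution text.
attribution_pattern = re.compile(u'(---?(?!-)|\u2014) *(?=[^ \\n])')
```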
+
+ def split_attribution(self, indented, line_offset):
+ """
+ Check for a block quote attribution and split it off:
+
+ * First line after a blank line must begin with a dash ("--", "---",
+ em-dash; matches `self.attribution_pattern`).
+ * Every line after that must have consistent indentation.
+ * Attributions must be preceded by block quote content.
+
+ Return a tuple of: (block quote content lines, attribution lines,
+ attribution offset, remaining indented lines, new line offset).
+ """
+ blank = None
+ nonblank_seen = False
+ for i in range(len(indented)):
+ line = indented[i].rstrip()
+ if line:
+ if nonblank_seen and blank == i - 1: # last line blank
+ match = self.attribution_pattern.match(line)
+ if match:
+ attribution_end, indent = self.check_attribution(
+ indented, i)
+ if attribution_end:
+ a_lines = indented[i:attribution_end]
+ a_lines.trim_left(match.end(), end=1)
+ a_lines.trim_left(indent, start=1)
+ return (indented[:i], a_lines,
+ i, indented[attribution_end:],
+ line_offset + attribution_end)
+ nonblank_seen = True
+ else:
+ blank = i
+ else:
+ return (indented, None, None, None, None)
+
+ def check_attribution(self, indented, attribution_start):
+ """
+ Check attribution shape.
+ Return the index past the end of the attribution, and the indent.
+ """
+ indent = None
+ i = attribution_start + 1
+ for i in range(attribution_start + 1, len(indented)):
+ line = indented[i].rstrip()
+ if not line:
+ break
+ if indent is None:
+ indent = len(line) - len(line.lstrip())
+ elif len(line) - len(line.lstrip()) != indent:
+ return None, None # bad shape; not an attribution
+ else:
+ # return index of line after last attribution line:
+ i += 1
+ return i, (indent or 0)
+
+ def parse_attribution(self, indented, line_offset):
+ text = '\n'.join(indented).rstrip()
+ lineno = self.state_machine.abs_line_number() + line_offset
+ textnodes, messages = self.inline_text(text, lineno)
+ node = nodes.attribution(text, '', *textnodes)
+ node.line = lineno
+ # Using source and source line here results in
+ # ``IndexError: list index out of range``:
+ # node.source, node.line = self.state_machine.get_source_and_line(lineno)
+ return node, messages
+
+ def bullet(self, match, context, next_state):
+ """Bullet list item."""
+ bulletlist = nodes.bullet_list()
+ self.parent += bulletlist
+ bulletlist['bullet'] = match.string[0]
+ i, blank_finish = self.list_item(match.end())
+ bulletlist += i
+ offset = self.state_machine.line_offset + 1 # next line
+ new_line_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=bulletlist, initial_state='BulletList',
+ blank_finish=blank_finish)
+ self.goto_line(new_line_offset)
+ if not blank_finish:
+ self.parent += self.unindent_warning('Bullet list')
+ return [], next_state, []
+
+ def list_item(self, indent):
+ if self.state_machine.line[indent:]:
+ indented, line_offset, blank_finish = (
+ self.state_machine.get_known_indented(indent))
+ else:
+ indented, indent, line_offset, blank_finish = (
+ self.state_machine.get_first_known_indented(indent))
+ listitem = nodes.list_item('\n'.join(indented))
+ if indented:
+ self.nested_parse(indented, input_offset=line_offset,
+ node=listitem)
+ return listitem, blank_finish
+
+ def enumerator(self, match, context, next_state):
+ """Enumerated list item."""
+ format, sequence, text, ordinal = self.parse_enumerator(match)
+ if not self.is_enumerated_list_item(ordinal, sequence, format):
+ raise statemachine.TransitionCorrection('text')
+ enumlist = nodes.enumerated_list()
+ self.parent += enumlist
+ if sequence == '#':
+ enumlist['enumtype'] = 'arabic'
+ else:
+ enumlist['enumtype'] = sequence
+ enumlist['prefix'] = self.enum.formatinfo[format].prefix
+ enumlist['suffix'] = self.enum.formatinfo[format].suffix
+ if ordinal != 1:
+ enumlist['start'] = ordinal
+ src, srcline = self.state_machine.get_source_and_line()
+ msg = self.reporter.info(
+ 'Enumerated list start value not ordinal-1: "%s" (ordinal %s)'
+ % (text, ordinal), source=src, line=srcline)
+ self.parent += msg
+ listitem, blank_finish = self.list_item(match.end())
+ enumlist += listitem
+ offset = self.state_machine.line_offset + 1 # next line
+ newline_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=enumlist, initial_state='EnumeratedList',
+ blank_finish=blank_finish,
+ extra_settings={'lastordinal': ordinal,
+ 'format': format,
+ 'auto': sequence == '#'})
+ self.goto_line(newline_offset)
+ if not blank_finish:
+ self.parent += self.unindent_warning('Enumerated list')
+ return [], next_state, []
+
+ def parse_enumerator(self, match, expected_sequence=None):
+ """
+ Analyze an enumerator and return the results.
+
+ :Return:
+ - the enumerator format ('period', 'parens', or 'rparen'),
+ - the sequence used ('arabic', 'loweralpha', 'upperroman', etc.),
+ - the text of the enumerator, stripped of formatting, and
+ - the ordinal value of the enumerator ('a' -> 1, 'ii' -> 2, etc.;
+ ``None`` is returned for invalid enumerator text).
+
+ The enumerator format has already been determined by the regular
+ expression match. If `expected_sequence` is given, that sequence is
+ tried first. If not, we check for Roman numeral 1. This way,
+ single-character Roman numerals (which are also alphabetical) can be
+ matched. If no sequence has been matched, all sequences are checked in
+ order.
+ """
+ groupdict = match.groupdict()
+ sequence = ''
+ for format in self.enum.formats:
+ if groupdict[format]: # was this the format matched?
+ break # yes; keep `format`
+ else: # shouldn't happen
+ raise ParserError('enumerator format not matched')
+ text = groupdict[format][self.enum.formatinfo[format].start
+ :self.enum.formatinfo[format].end]
+ if text == '#':
+ sequence = '#'
+ elif expected_sequence:
+ try:
+ if self.enum.sequenceregexps[expected_sequence].match(text):
+ sequence = expected_sequence
+ except KeyError: # shouldn't happen
+ raise ParserError('unknown enumerator sequence: %s'
+ % expected_sequence)
+ elif text == 'i':
+ sequence = 'lowerroman'
+ elif text == 'I':
+ sequence = 'upperroman'
+ if not sequence:
+ for sequence in self.enum.sequences:
+ if self.enum.sequenceregexps[sequence].match(text):
+ break
+ else: # shouldn't happen
+ raise ParserError('enumerator sequence not matched')
+ if sequence == '#':
+ ordinal = 1
+ else:
+ try:
+ ordinal = self.enum.converters[sequence](text)
+ except roman.InvalidRomanNumeralError:
+ ordinal = None
+ return format, sequence, text, ordinal
+
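The sequence-to-ordinal conversion that `parse_enumerator` delegates to `self.enum.converters` can be sketched in isolation. The helpers below are illustrative only, not docutils API (docutils keeps these converters in its `Struct`-based `enum` tables and uses a bundled `roman` module):

```python
# Illustrative sketch of enumerator ordinal conversion; not docutils API.
ROMAN = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
         (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
         (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

def from_roman(text):
    """Convert an upper-case Roman numeral to an integer."""
    result, i = 0, 0
    for value, numeral in ROMAN:
        while text[i:i + len(numeral)] == numeral:
            result += value
            i += len(numeral)
    if i != len(text):
        raise ValueError('invalid Roman numeral: %r' % text)
    return result

def ordinal(sequence, text):
    """Map enumerator text to its ordinal value ('b' -> 2, 'iv' -> 4)."""
    if sequence == 'arabic':
        return int(text)
    if sequence in ('loweralpha', 'upperalpha'):
        return ord(text.lower()) - ord('a') + 1
    if sequence in ('lowerroman', 'upperroman'):
        return from_roman(text.upper())
    raise ValueError('unknown sequence: %s' % sequence)
```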
+ def is_enumerated_list_item(self, ordinal, sequence, format):
+ """
+ Check validity based on the ordinal value and the second line.
+
+ Return true if the ordinal is valid and the second line is blank,
+ indented, or starts with the next enumerator or an auto-enumerator.
+ """
+ if ordinal is None:
+ return None
+ try:
+ next_line = self.state_machine.next_line()
+ except EOFError: # end of input lines
+ self.state_machine.previous_line()
+ return 1
+ else:
+ self.state_machine.previous_line()
+ if not next_line[:1].strip(): # blank or indented
+ return 1
+ result = self.make_enumerator(ordinal + 1, sequence, format)
+ if result:
+ next_enumerator, auto_enumerator = result
+ try:
+ if ( next_line.startswith(next_enumerator) or
+ next_line.startswith(auto_enumerator) ):
+ return 1
+ except TypeError:
+ pass
+ return None
+
+ def make_enumerator(self, ordinal, sequence, format):
+ """
+ Construct and return the next enumerated list item marker, and an
+ auto-enumerator ("#" instead of the regular enumerator).
+
+ Return ``None`` for invalid (out of range) ordinals.
+ """ #"
+ if sequence == '#':
+ enumerator = '#'
+ elif sequence == 'arabic':
+ enumerator = str(ordinal)
+ else:
+ if sequence.endswith('alpha'):
+ if ordinal > 26:
+ return None
+ enumerator = chr(ordinal + ord('a') - 1)
+ elif sequence.endswith('roman'):
+ try:
+ enumerator = roman.toRoman(ordinal)
+ except roman.RomanError:
+ return None
+ else: # shouldn't happen
+ raise ParserError('unknown enumerator sequence: "%s"'
+ % sequence)
+ if sequence.startswith('lower'):
+ enumerator = enumerator.lower()
+ elif sequence.startswith('upper'):
+ enumerator = enumerator.upper()
+ else: # shouldn't happen
+ raise ParserError('unknown enumerator sequence: "%s"'
+ % sequence)
+ formatinfo = self.enum.formatinfo[format]
+ next_enumerator = (formatinfo.prefix + enumerator + formatinfo.suffix
+ + ' ')
+ auto_enumerator = formatinfo.prefix + '#' + formatinfo.suffix + ' '
+ return next_enumerator, auto_enumerator
+
+ def field_marker(self, match, context, next_state):
+ """Field list item."""
+ field_list = nodes.field_list()
+ self.parent += field_list
+ field, blank_finish = self.field(match)
+ field_list += field
+ offset = self.state_machine.line_offset + 1 # next line
+ newline_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=field_list, initial_state='FieldList',
+ blank_finish=blank_finish)
+ self.goto_line(newline_offset)
+ if not blank_finish:
+ self.parent += self.unindent_warning('Field list')
+ return [], next_state, []
+
+ def field(self, match):
+ name = self.parse_field_marker(match)
+ src, srcline = self.state_machine.get_source_and_line()
+ lineno = self.state_machine.abs_line_number()
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ field_node = nodes.field()
+ field_node.source = src
+ field_node.line = srcline
+ name_nodes, name_messages = self.inline_text(name, lineno)
+ field_node += nodes.field_name(name, '', *name_nodes)
+ field_body = nodes.field_body('\n'.join(indented), *name_messages)
+ field_node += field_body
+ if indented:
+ self.parse_field_body(indented, line_offset, field_body)
+ return field_node, blank_finish
+
+ def parse_field_marker(self, match):
+ """Extract & return field name from a field marker match."""
+ field = match.group()[1:] # strip off leading ':'
+ field = field[:field.rfind(':')] # strip off trailing ':' etc.
+ return field
+
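The colon-stripping done by `parse_field_marker` can be exercised on its own; this tiny sketch mirrors the slicing logic (illustrative, not docutils API):

```python
def field_name(marker):
    """Extract the field name from a marker such as ':Author: text'."""
    field = marker[1:]                # strip off the leading ':'
    return field[:field.rfind(':')]  # strip off the trailing ':' etc.
```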
+ def parse_field_body(self, indented, offset, node):
+ self.nested_parse(indented, input_offset=offset, node=node)
+
+ def option_marker(self, match, context, next_state):
+ """Option list item."""
+ optionlist = nodes.option_list()
+ try:
+ listitem, blank_finish = self.option_list_item(match)
+ except MarkupError, error:
+ # This shouldn't happen; pattern won't match.
+ src, srcline = self.state_machine.get_source_and_line()
+ msg = self.reporter.error('Invalid option list marker: %s' %
+ str(error), source=src, line=srcline)
+ self.parent += msg
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ elements = self.block_quote(indented, line_offset)
+ self.parent += elements
+ if not blank_finish:
+ self.parent += self.unindent_warning('Option list')
+ return [], next_state, []
+ self.parent += optionlist
+ optionlist += listitem
+ offset = self.state_machine.line_offset + 1 # next line
+ newline_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=optionlist, initial_state='OptionList',
+ blank_finish=blank_finish)
+ self.goto_line(newline_offset)
+ if not blank_finish:
+ self.parent += self.unindent_warning('Option list')
+ return [], next_state, []
+
+ def option_list_item(self, match):
+ offset = self.state_machine.abs_line_offset()
+ options = self.parse_option_marker(match)
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ if not indented: # not an option list item
+ self.goto_line(offset)
+ raise statemachine.TransitionCorrection('text')
+ option_group = nodes.option_group('', *options)
+ description = nodes.description('\n'.join(indented))
+ option_list_item = nodes.option_list_item('', option_group,
+ description)
+ if indented:
+ self.nested_parse(indented, input_offset=line_offset,
+ node=description)
+ return option_list_item, blank_finish
+
+ def parse_option_marker(self, match):
+ """
+ Return a list of `node.option` and `node.option_argument` objects,
+ parsed from an option marker match.
+
+ :Exception: `MarkupError` for invalid option markers.
+ """
+ optlist = []
+ optionstrings = match.group().rstrip().split(', ')
+ for optionstring in optionstrings:
+ tokens = optionstring.split()
+ delimiter = ' '
+ firstopt = tokens[0].split('=')
+ if len(firstopt) > 1:
+ # "--opt=value" form
+ tokens[:1] = firstopt
+ delimiter = '='
+ elif (len(tokens[0]) > 2
+ and ((tokens[0].startswith('-')
+ and not tokens[0].startswith('--'))
+ or tokens[0].startswith('+'))):
+ # "-ovalue" form
+ tokens[:1] = [tokens[0][:2], tokens[0][2:]]
+ delimiter = ''
+ if len(tokens) > 1 and (tokens[1].startswith('<')
+ and tokens[-1].endswith('>')):
+ # "-o <value1 value2>" form; join all values into one token
+ tokens[1:] = [' '.join(tokens[1:])]
+ if 0 < len(tokens) <= 2:
+ option = nodes.option(optionstring)
+ option += nodes.option_string(tokens[0], tokens[0])
+ if len(tokens) > 1:
+ option += nodes.option_argument(tokens[1], tokens[1],
+ delimiter=delimiter)
+ optlist.append(option)
+ else:
+ raise MarkupError(
+ 'wrong number of option tokens (=%s), should be 1 or 2: '
+ '"%s"' % (len(tokens), optionstring))
+ return optlist
+
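Viewed standalone, the per-option tokenization in `parse_option_marker` handles three shapes: `--opt=value`, `-ovalue`, and `-o value`. A minimal sketch of that branching (illustrative helper, not docutils API):

```python
def split_option(optionstring):
    """Split one option marker into (option, argument, delimiter).

    Mirrors the forms handled by parse_option_marker: '--opt=value',
    '-ovalue' (or '+ovalue'), and '-o value'; the argument may be absent.
    """
    tokens = optionstring.split()
    delimiter = ' '
    firstopt = tokens[0].split('=')
    if len(firstopt) > 1:
        # "--opt=value" form
        tokens[:1] = firstopt
        delimiter = '='
    elif (len(tokens[0]) > 2
          and ((tokens[0].startswith('-') and not tokens[0].startswith('--'))
               or tokens[0].startswith('+'))):
        # "-ovalue" form
        tokens[:1] = [tokens[0][:2], tokens[0][2:]]
        delimiter = ''
    if not 0 < len(tokens) <= 2:
        raise ValueError('expected 1 or 2 option tokens: %r' % optionstring)
    return tokens[0], (tokens[1] if len(tokens) > 1 else None), delimiter
```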
+ def doctest(self, match, context, next_state):
+ data = '\n'.join(self.state_machine.get_text_block())
+ self.parent += nodes.doctest_block(data, data)
+ return [], next_state, []
+
+ def line_block(self, match, context, next_state):
+ """First line of a line block."""
+ block = nodes.line_block()
+ self.parent += block
+ lineno = self.state_machine.abs_line_number()
+ line, messages, blank_finish = self.line_block_line(match, lineno)
+ block += line
+ self.parent += messages
+ if not blank_finish:
+ offset = self.state_machine.line_offset + 1 # next line
+ new_line_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=block, initial_state='LineBlock',
+ blank_finish=0)
+ self.goto_line(new_line_offset)
+ if not blank_finish:
+ src, srcline = self.state_machine.get_source_and_line()
+ self.parent += self.reporter.warning(
+ 'Line block ends without a blank line.',
+ source=src, line=srcline+1)
+ if len(block):
+ if block[0].indent is None:
+ block[0].indent = 0
+ self.nest_line_block_lines(block)
+ return [], next_state, []
+
+ def line_block_line(self, match, lineno):
+ """Return one line element of a line_block."""
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end(),
+ until_blank=1)
+ text = u'\n'.join(indented)
+ text_nodes, messages = self.inline_text(text, lineno)
+ line = nodes.line(text, '', *text_nodes)
+ if match.string.rstrip() != '|': # not empty
+ line.indent = len(match.group(1)) - 1
+ return line, messages, blank_finish
+
+ def nest_line_block_lines(self, block):
+ for index in range(1, len(block)):
+ if block[index].indent is None:
+ block[index].indent = block[index - 1].indent
+ self.nest_line_block_segment(block)
+
+ def nest_line_block_segment(self, block):
+ indents = [item.indent for item in block]
+ least = min(indents)
+ new_items = []
+ new_block = nodes.line_block()
+ for item in block:
+ if item.indent > least:
+ new_block.append(item)
+ else:
+ if len(new_block):
+ self.nest_line_block_segment(new_block)
+ new_items.append(new_block)
+ new_block = nodes.line_block()
+ new_items.append(item)
+ if len(new_block):
+ self.nest_line_block_segment(new_block)
+ new_items.append(new_block)
+ block[:] = new_items
+
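The recursive grouping performed by `nest_line_block_segment` — runs of lines indented deeper than the local minimum become nested blocks — can be sketched on plain `(indent, text)` pairs (illustrative, operating on lists instead of `line_block` nodes):

```python
def nest_by_indent(items):
    """Group (indent, text) pairs into nested lists by indentation.

    Runs indented deeper than the local minimum become sub-lists,
    as nest_line_block_segment does with line_block nodes.
    """
    least = min(indent for indent, _ in items)
    result, deeper = [], []
    for indent, text in items:
        if indent > least:
            deeper.append((indent, text))
        else:
            if deeper:
                result.append(nest_by_indent(deeper))
                deeper = []
            result.append(text)
    if deeper:
        result.append(nest_by_indent(deeper))
    return result
```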
+ def grid_table_top(self, match, context, next_state):
+ """Top border of a full table."""
+ return self.table_top(match, context, next_state,
+ self.isolate_grid_table,
+ tableparser.GridTableParser)
+
+ def simple_table_top(self, match, context, next_state):
+ """Top border of a simple table."""
+ return self.table_top(match, context, next_state,
+ self.isolate_simple_table,
+ tableparser.SimpleTableParser)
+
+ def table_top(self, match, context, next_state,
+ isolate_function, parser_class):
+ """Top border of a generic table."""
+ nodelist, blank_finish = self.table(isolate_function, parser_class)
+ self.parent += nodelist
+ if not blank_finish:
+ src, srcline = self.state_machine.get_source_and_line()
+ msg = self.reporter.warning(
+ 'Blank line required after table.',
+ source=src, line=srcline+1)
+ self.parent += msg
+ return [], next_state, []
+
+ def table(self, isolate_function, parser_class):
+ """Parse a table."""
+ block, messages, blank_finish = isolate_function()
+ if block:
+ try:
+ parser = parser_class()
+ tabledata = parser.parse(block)
+ tableline = (self.state_machine.abs_line_number() - len(block)
+ + 1)
+ table = self.build_table(tabledata, tableline)
+ nodelist = [table] + messages
+ except tableparser.TableMarkupError, detail:
+ nodelist = self.malformed_table(
+ block, ' '.join(detail.args)) + messages
+ else:
+ nodelist = messages
+ return nodelist, blank_finish
+
+ def isolate_grid_table(self):
+ messages = []
+ blank_finish = 1
+ try:
+ block = self.state_machine.get_text_block(flush_left=1)
+ except statemachine.UnexpectedIndentationError, instance:
+ block, src, srcline = instance.args
+ messages.append(self.reporter.error('Unexpected indentation.',
+ source=src, line=srcline))
+ blank_finish = 0
+ block.disconnect()
+ # for East Asian chars:
+ block.pad_double_width(self.double_width_pad_char)
+ width = len(block[0].strip())
+ for i in range(len(block)):
+ block[i] = block[i].strip()
+ if block[i][0] not in '+|': # check left edge
+ blank_finish = 0
+ self.state_machine.previous_line(len(block) - i)
+ del block[i:]
+ break
+ if not self.grid_table_top_pat.match(block[-1]): # find bottom
+ blank_finish = 0
+ # from second-last to third line of table:
+ for i in range(len(block) - 2, 1, -1):
+ if self.grid_table_top_pat.match(block[i]):
+ self.state_machine.previous_line(len(block) - i + 1)
+ del block[i+1:]
+ break
+ else:
+ messages.extend(self.malformed_table(block))
+ return [], messages, blank_finish
+ for i in range(len(block)): # check right edge
+ if len(block[i]) != width or block[i][-1] not in '+|':
+ messages.extend(self.malformed_table(block))
+ return [], messages, blank_finish
+ return block, messages, blank_finish
+
+ def isolate_simple_table(self):
+ start = self.state_machine.line_offset
+ lines = self.state_machine.input_lines
+ limit = len(lines) - 1
+ toplen = len(lines[start].strip())
+ pattern_match = self.simple_table_border_pat.match
+ found = 0
+ found_at = None
+ i = start + 1
+ while i <= limit:
+ line = lines[i]
+ match = pattern_match(line)
+ if match:
+ if len(line.strip()) != toplen:
+ self.state_machine.next_line(i - start)
+ messages = self.malformed_table(
+ lines[start:i+1], 'Bottom/header table border does '
+ 'not match top border.')
+ return [], messages, i == limit or not lines[i+1].strip()
+ found += 1
+ found_at = i
+ if found == 2 or i == limit or not lines[i+1].strip():
+ end = i
+ break
+ i += 1
+ else: # reached end of input_lines
+ if found:
+ extra = ' or no blank line after table bottom'
+ self.state_machine.next_line(found_at - start)
+ block = lines[start:found_at+1]
+ else:
+ extra = ''
+ self.state_machine.next_line(i - start - 1)
+ block = lines[start:]
+ messages = self.malformed_table(
+ block, 'No bottom table border found%s.' % extra)
+ return [], messages, not extra
+ self.state_machine.next_line(end - start)
+ block = lines[start:end+1]
+ # for East Asian chars:
+ block.pad_double_width(self.double_width_pad_char)
+ return block, [], end == limit or not lines[end+1].strip()
+
+ def malformed_table(self, block, detail=''):
+ block.replace(self.double_width_pad_char, '')
+ data = '\n'.join(block)
+ message = 'Malformed table.'
+ startline = self.state_machine.abs_line_number() - len(block) + 1
+ src, srcline = self.state_machine.get_source_and_line(startline)
+ if detail:
+ message += '\n' + detail
+ error = self.reporter.error(message, nodes.literal_block(data, data),
+ source=src, line=srcline)
+ return [error]
+
+ def build_table(self, tabledata, tableline, stub_columns=0):
+ colwidths, headrows, bodyrows = tabledata
+ table = nodes.table()
+ tgroup = nodes.tgroup(cols=len(colwidths))
+ table += tgroup
+ for colwidth in colwidths:
+ colspec = nodes.colspec(colwidth=colwidth)
+ if stub_columns:
+ colspec.attributes['stub'] = 1
+ stub_columns -= 1
+ tgroup += colspec
+ if headrows:
+ thead = nodes.thead()
+ tgroup += thead
+ for row in headrows:
+ thead += self.build_table_row(row, tableline)
+ tbody = nodes.tbody()
+ tgroup += tbody
+ for row in bodyrows:
+ tbody += self.build_table_row(row, tableline)
+ return table
+
+ def build_table_row(self, rowdata, tableline):
+ row = nodes.row()
+ for cell in rowdata:
+ if cell is None:
+ continue
+ morerows, morecols, offset, cellblock = cell
+ attributes = {}
+ if morerows:
+ attributes['morerows'] = morerows
+ if morecols:
+ attributes['morecols'] = morecols
+ entry = nodes.entry(**attributes)
+ row += entry
+ if ''.join(cellblock):
+ self.nested_parse(cellblock, input_offset=tableline+offset,
+ node=entry)
+ return row
+
+
+ explicit = Struct()
+ """Patterns and constants used for explicit markup recognition."""
+
+ explicit.patterns = Struct(
+ target=re.compile(r"""
+ (
+ _ # anonymous target
+ | # *OR*
+ (?!_) # no underscore at the beginning
+ (?P<quote>`?) # optional open quote
+ (?![ `]) # first char. not space or
+ # backquote
+ (?P<name> # reference name
+ .+?
+ )
+ %(non_whitespace_escape_before)s
+ (?P=quote) # close quote if open quote used
+ )
+ (?<!(?<!\x00):) # no unescaped colon at end
+ %(non_whitespace_escape_before)s
+ [ ]? # optional space
+ : # end of reference name
+ ([ ]+|$) # followed by whitespace
+ """ % vars(Inliner), re.VERBOSE),
+ reference=re.compile(r"""
+ (
+ (?P<simple>%(simplename)s)_
+ | # *OR*
+ ` # open backquote
+ (?![ ]) # not space
+ (?P<phrase>.+?) # hyperlink phrase
+ %(non_whitespace_escape_before)s
+ `_ # close backquote,
+ # reference mark
+ )
+ $ # end of string
+ """ % vars(Inliner), re.VERBOSE | re.UNICODE),
+ substitution=re.compile(r"""
+ (
+ (?![ ]) # first char. not space
+ (?P<name>.+?) # substitution text
+ %(non_whitespace_escape_before)s
+ \| # close delimiter
+ )
+ ([ ]+|$) # followed by whitespace
+ """ % vars(Inliner), re.VERBOSE),)
+
+ def footnote(self, match):
+ src, srcline = self.state_machine.get_source_and_line()
+ indented, indent, offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ label = match.group(1)
+ name = normalize_name(label)
+ footnote = nodes.footnote('\n'.join(indented))
+ footnote.source = src
+ footnote.line = srcline
+ if name[0] == '#': # auto-numbered
+ name = name[1:] # autonumber label
+ footnote['auto'] = 1
+ if name:
+ footnote['names'].append(name)
+ self.document.note_autofootnote(footnote)
+ elif name == '*': # auto-symbol
+ name = ''
+ footnote['auto'] = '*'
+ self.document.note_symbol_footnote(footnote)
+ else: # manually numbered
+ footnote += nodes.label('', label)
+ footnote['names'].append(name)
+ self.document.note_footnote(footnote)
+ if name:
+ self.document.note_explicit_target(footnote, footnote)
+ else:
+ self.document.set_id(footnote, footnote)
+ if indented:
+ self.nested_parse(indented, input_offset=offset, node=footnote)
+ return [footnote], blank_finish
+
+ def citation(self, match):
+ src, srcline = self.state_machine.get_source_and_line()
+ indented, indent, offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ label = match.group(1)
+ name = normalize_name(label)
+ citation = nodes.citation('\n'.join(indented))
+ citation.source = src
+ citation.line = srcline
+ citation += nodes.label('', label)
+ citation['names'].append(name)
+ self.document.note_citation(citation)
+ self.document.note_explicit_target(citation, citation)
+ if indented:
+ self.nested_parse(indented, input_offset=offset, node=citation)
+ return [citation], blank_finish
+
+ def hyperlink_target(self, match):
+ pattern = self.explicit.patterns.target
+ lineno = self.state_machine.abs_line_number()
+ src, srcline = self.state_machine.get_source_and_line()
+ block, indent, offset, blank_finish = \
+ self.state_machine.get_first_known_indented(
+ match.end(), until_blank=1, strip_indent=0)
+ blocktext = match.string[:match.end()] + '\n'.join(block)
+ block = [escape2null(line) for line in block]
+ escaped = block[0]
+ blockindex = 0
+ while 1:
+ targetmatch = pattern.match(escaped)
+ if targetmatch:
+ break
+ blockindex += 1
+ try:
+ escaped += block[blockindex]
+ except IndexError:
+ raise MarkupError('malformed hyperlink target.')
+ del block[:blockindex]
+ block[0] = (block[0] + ' ')[targetmatch.end()-len(escaped)-1:].strip()
+ target = self.make_target(block, blocktext, lineno,
+ targetmatch.group('name'))
+ return [target], blank_finish
+
+ def make_target(self, block, block_text, lineno, target_name):
+ target_type, data = self.parse_target(block, block_text, lineno)
+ if target_type == 'refname':
+ target = nodes.target(block_text, '', refname=normalize_name(data))
+ target.indirect_reference_name = data
+ self.add_target(target_name, '', target, lineno)
+ self.document.note_indirect_target(target)
+ return target
+ elif target_type == 'refuri':
+ target = nodes.target(block_text, '')
+ self.add_target(target_name, data, target, lineno)
+ return target
+ else:
+ return data
+
+ def parse_target(self, block, block_text, lineno):
+ """
+ Determine the type of reference of a target.
+
+ :Return: A 2-tuple, one of:
+
+ - 'refname' and the indirect reference name
+ - 'refuri' and the URI
+ - 'malformed' and a system_message node
+ """
+ if block and block[-1].strip()[-1:] == '_': # possible indirect target
+ reference = ' '.join([line.strip() for line in block])
+ refname = self.is_reference(reference)
+ if refname:
+ return 'refname', refname
+ reference = ''.join([''.join(line.split()) for line in block])
+ return 'refuri', unescape(reference)
+
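The refname-versus-refuri decision in `parse_target` reduces to a simple rule: a block whose last line ends with `_` names an indirect reference; otherwise the lines are joined, with all whitespace removed, into a URI. A simplified sketch (the real code additionally validates the reference with a regular expression and unescapes null-escaped text):

```python
def classify_target(block):
    """Decide whether a target block is an indirect reference or a URI.

    Simplified sketch of parse_target: trailing '_' on the last line
    means an indirect reference name; otherwise join into a URI.
    """
    if block and block[-1].strip().endswith('_'):
        refname = ' '.join(line.strip() for line in block)
        return 'refname', refname[:-1]
    uri = ''.join(''.join(line.split()) for line in block)
    return 'refuri', uri
```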
+ def is_reference(self, reference):
+ match = self.explicit.patterns.reference.match(
+ whitespace_normalize_name(reference))
+ if not match:
+ return None
+ return unescape(match.group('simple') or match.group('phrase'))
+
+ def add_target(self, targetname, refuri, target, lineno):
+ target.line = lineno
+ if targetname:
+ name = normalize_name(unescape(targetname))
+ target['names'].append(name)
+ if refuri:
+ uri = self.inliner.adjust_uri(refuri)
+ if uri:
+ target['refuri'] = uri
+ else:
+ raise ApplicationError('problem with URI: %r' % refuri)
+ self.document.note_explicit_target(target, self.parent)
+ else: # anonymous target
+ if refuri:
+ target['refuri'] = refuri
+ target['anonymous'] = 1
+ self.document.note_anonymous_target(target)
+
+ def substitution_def(self, match):
+ pattern = self.explicit.patterns.substitution
+ src, srcline = self.state_machine.get_source_and_line()
+ block, indent, offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end(),
+ strip_indent=0)
+ blocktext = (match.string[:match.end()] + '\n'.join(block))
+ block.disconnect()
+ escaped = escape2null(block[0].rstrip())
+ blockindex = 0
+ while 1:
+ subdefmatch = pattern.match(escaped)
+ if subdefmatch:
+ break
+ blockindex += 1
+ try:
+ escaped = escaped + ' ' + escape2null(block[blockindex].strip())
+ except IndexError:
+ raise MarkupError('malformed substitution definition.')
+ del block[:blockindex] # strip out the substitution marker
+ block[0] = (block[0].strip() + ' ')[subdefmatch.end()-len(escaped)-1:-1]
+ if not block[0]:
+ del block[0]
+ offset += 1
+ while block and not block[-1].strip():
+ block.pop()
+ subname = subdefmatch.group('name')
+ substitution_node = nodes.substitution_definition(blocktext)
+ substitution_node.source = src
+ substitution_node.line = srcline
+ if not block:
+ msg = self.reporter.warning(
+ 'Substitution definition "%s" missing contents.' % subname,
+ nodes.literal_block(blocktext, blocktext),
+ source=src, line=srcline)
+ return [msg], blank_finish
+ block[0] = block[0].strip()
+ substitution_node['names'].append(
+ nodes.whitespace_normalize_name(subname))
+ new_abs_offset, blank_finish = self.nested_list_parse(
+ block, input_offset=offset, node=substitution_node,
+ initial_state='SubstitutionDef', blank_finish=blank_finish)
+ i = 0
+ for node in substitution_node[:]:
+ if not (isinstance(node, nodes.Inline) or
+ isinstance(node, nodes.Text)):
+ self.parent += substitution_node[i]
+ del substitution_node[i]
+ else:
+ i += 1
+ for node in substitution_node.traverse(nodes.Element):
+ if self.disallowed_inside_substitution_definitions(node):
+ pformat = nodes.literal_block('', node.pformat().rstrip())
+ msg = self.reporter.error(
+ 'Substitution definition contains illegal element:',
+ pformat, nodes.literal_block(blocktext, blocktext),
+ source=src, line=srcline)
+ return [msg], blank_finish
+ if len(substitution_node) == 0:
+ msg = self.reporter.warning(
+ 'Substitution definition "%s" empty or invalid.' % subname,
+ nodes.literal_block(blocktext, blocktext),
+ source=src, line=srcline)
+ return [msg], blank_finish
+ self.document.note_substitution_def(
+ substitution_node, subname, self.parent)
+ return [substitution_node], blank_finish
+
+ def disallowed_inside_substitution_definitions(self, node):
+ if (node['ids'] or
+ isinstance(node, nodes.reference) and node.get('anonymous') or
+ isinstance(node, nodes.footnote_reference) and node.get('auto')):
+ return 1
+ else:
+ return 0
+
+ def directive(self, match, **option_presets):
+ """Returns a 2-tuple: list of nodes, and a "blank finish" boolean."""
+ type_name = match.group(1)
+ directive_class, messages = directives.directive(
+ type_name, self.memo.language, self.document)
+ self.parent += messages
+ if directive_class:
+ return self.run_directive(
+ directive_class, match, type_name, option_presets)
+ else:
+ return self.unknown_directive(type_name)
+
+ def run_directive(self, directive, match, type_name, option_presets):
+ """
+ Parse a directive then run its directive function.
+
+ Parameters:
+
+ - `directive`: The class implementing the directive. Must be
+ a subclass of `rst.Directive`.
+
+ - `match`: A regular expression match object which matched the first
+ line of the directive.
+
+ - `type_name`: The directive name, as used in the source text.
+
+ - `option_presets`: A dictionary of preset options, defaults for the
+ directive options. Currently, only an "alt" option is passed by
+ substitution definitions (value: the substitution name), which may
+ be used by an embedded image directive.
+
+ Returns a 2-tuple: list of nodes, and a "blank finish" boolean.
+ """
+ if isinstance(directive, (FunctionType, MethodType)):
+ from docutils.parsers.rst import convert_directive_function
+ directive = convert_directive_function(directive)
+ lineno = self.state_machine.abs_line_number()
+ src, srcline = self.state_machine.get_source_and_line()
+ initial_line_offset = self.state_machine.line_offset
+ indented, indent, line_offset, blank_finish \
+ = self.state_machine.get_first_known_indented(match.end(),
+ strip_top=0)
+ block_text = '\n'.join(self.state_machine.input_lines[
+ initial_line_offset : self.state_machine.line_offset + 1])
+ try:
+ arguments, options, content, content_offset = (
+ self.parse_directive_block(indented, line_offset,
+ directive, option_presets))
+ except MarkupError, detail:
+ error = self.reporter.error(
+ 'Error in "%s" directive:\n%s.' % (type_name,
+ ' '.join(detail.args)),
+ nodes.literal_block(block_text, block_text),
+ source=src, line=srcline)
+ return [error], blank_finish
+ directive_instance = directive(
+ type_name, arguments, options, content, lineno,
+ content_offset, block_text, self, self.state_machine)
+ try:
+ result = directive_instance.run()
+ except docutils.parsers.rst.DirectiveError, error:
+ msg_node = self.reporter.system_message(error.level, error.msg,
+ source=src, line=srcline)
+ msg_node += nodes.literal_block(block_text, block_text)
+ result = [msg_node]
+ assert isinstance(result, list), \
+ 'Directive "%s" must return a list of nodes.' % type_name
+ for i in range(len(result)):
+ assert isinstance(result[i], nodes.Node), \
+ ('Directive "%s" returned non-Node object (index %s): %r'
+ % (type_name, i, result[i]))
+ return (result,
+ blank_finish or self.state_machine.is_next_line_blank())
+
+ def parse_directive_block(self, indented, line_offset, directive,
+ option_presets):
+ option_spec = directive.option_spec
+ has_content = directive.has_content
+ if indented and not indented[0].strip():
+ indented.trim_start()
+ line_offset += 1
+ while indented and not indented[-1].strip():
+ indented.trim_end()
+ if indented and (directive.required_arguments
+ or directive.optional_arguments
+ or option_spec):
+ for i in range(len(indented)):
+ if not indented[i].strip():
+ break
+ else:
+ i += 1
+ arg_block = indented[:i]
+ content = indented[i+1:]
+ content_offset = line_offset + i + 1
+ else:
+ content = indented
+ content_offset = line_offset
+ arg_block = []
+ while content and not content[0].strip():
+ content.trim_start()
+ content_offset += 1
+ if option_spec:
+ options, arg_block = self.parse_directive_options(
+ option_presets, option_spec, arg_block)
+ if arg_block and not (directive.required_arguments
+ or directive.optional_arguments):
+ raise MarkupError('no arguments permitted; blank line '
+ 'required before content block')
+ else:
+ options = {}
+ if directive.required_arguments or directive.optional_arguments:
+ arguments = self.parse_directive_arguments(
+ directive, arg_block)
+ else:
+ arguments = []
+ if content and not has_content:
+ raise MarkupError('no content permitted')
+ return (arguments, options, content, content_offset)
+
+ def parse_directive_options(self, option_presets, option_spec, arg_block):
+ options = option_presets.copy()
+ for i in range(len(arg_block)):
+ if arg_block[i][:1] == ':':
+ opt_block = arg_block[i:]
+ arg_block = arg_block[:i]
+ break
+ else:
+ opt_block = []
+ if opt_block:
+ success, data = self.parse_extension_options(option_spec,
+ opt_block)
+ if success: # data is a dict of options
+ options.update(data)
+ else: # data is an error string
+ raise MarkupError(data)
+ return options, arg_block
+
+ def parse_directive_arguments(self, directive, arg_block):
+ required = directive.required_arguments
+ optional = directive.optional_arguments
+ arg_text = '\n'.join(arg_block)
+ arguments = arg_text.split()
+ if len(arguments) < required:
+ raise MarkupError('%s argument(s) required, %s supplied'
+ % (required, len(arguments)))
+ elif len(arguments) > required + optional:
+ if directive.final_argument_whitespace:
+ arguments = arg_text.split(None, required + optional - 1)
+ else:
+ raise MarkupError(
+ 'maximum %s argument(s) allowed, %s supplied'
+ % (required + optional, len(arguments)))
+ return arguments
+
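The argument-count logic of `parse_directive_arguments` can be isolated as follows; `final_whitespace` plays the role of `directive.final_argument_whitespace`, letting the last argument absorb remaining spaces (illustrative helper, not docutils API):

```python
def split_arguments(arg_text, required, optional, final_whitespace=False):
    """Split directive argument text, as parse_directive_arguments does.

    With final_whitespace the last argument may contain spaces, so the
    text is split at most required + optional - 1 times.
    """
    arguments = arg_text.split()
    if len(arguments) < required:
        raise ValueError('%s argument(s) required, %s supplied'
                         % (required, len(arguments)))
    if len(arguments) > required + optional:
        if final_whitespace:
            arguments = arg_text.split(None, required + optional - 1)
        else:
            raise ValueError('maximum %s argument(s) allowed, %s supplied'
                             % (required + optional, len(arguments)))
    return arguments
```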
+ def parse_extension_options(self, option_spec, datalines):
+ """
+ Parse `datalines` for a field list containing extension options
+ matching `option_spec`.
+
+ :Parameters:
+ - `option_spec`: a mapping of option name to conversion
+ function, which should raise an exception on bad input.
+ - `datalines`: a list of input strings.
+
+ :Return:
+ - Success value, 1 or 0.
+ - An option dictionary on success, an error string on failure.
+ """
+ node = nodes.field_list()
+ newline_offset, blank_finish = self.nested_list_parse(
+ datalines, 0, node, initial_state='ExtensionOptions',
+ blank_finish=1)
+ if newline_offset != len(datalines): # incomplete parse of block
+ return 0, 'invalid option block'
+ try:
+ options = utils.extract_extension_options(node, option_spec)
+ except KeyError, detail:
+ return 0, ('unknown option: "%s"' % detail.args[0])
+ except (ValueError, TypeError), detail:
+ return 0, ('invalid option value: %s' % ' '.join(detail.args))
+ except utils.ExtensionOptionError, detail:
+ return 0, ('invalid option data: %s' % ' '.join(detail.args))
+ if blank_finish:
+ return 1, options
+ else:
+ return 0, 'option data incompletely parsed'
+
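The `option_spec` contract documented above, a mapping from option name to a conversion function that raises on bad input, can be exercised standalone. The helper name below is illustrative, not part of docutils:

```python
def convert_options(option_spec, raw_options):
    # Apply each conversion function; unknown names and conversion
    # failures surface as exceptions, which the caller turns into
    # error strings (as parse_extension_options does).
    options = {}
    for name, raw in raw_options.items():
        if name not in option_spec:
            raise KeyError(name)
        options[name] = option_spec[name](raw)
    return options

# Example spec: plain callables serve as converters.
option_spec = {'width': int, 'align': str.strip}
```

A bad value such as `{'width': 'wide'}` raises `ValueError` from `int`, matching the "should raise an exception on bad input" requirement.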
+ def unknown_directive(self, type_name):
+ src, srcline = self.state_machine.get_source_and_line()
+ indented, indent, offset, blank_finish = \
+ self.state_machine.get_first_known_indented(0, strip_indent=0)
+ text = '\n'.join(indented)
+ error = self.reporter.error(
+ 'Unknown directive type "%s".' % type_name,
+ nodes.literal_block(text, text), source=src, line=srcline)
+ return [error], blank_finish
+
+ def comment(self, match):
+ if not match.string[match.end():].strip() \
+ and self.state_machine.is_next_line_blank(): # an empty comment?
+ return [nodes.comment()], 1 # "A tiny but practical wart."
+ indented, indent, offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ while indented and not indented[-1].strip():
+ indented.trim_end()
+ text = '\n'.join(indented)
+ return [nodes.comment(text, text)], blank_finish
+
+ explicit.constructs = [
+ (footnote,
+ re.compile(r"""
+ \.\.[ ]+ # explicit markup start
+ \[
+ ( # footnote label:
+ [0-9]+ # manually numbered footnote
+ | # *OR*
+ \# # anonymous auto-numbered footnote
+ | # *OR*
+ \#%s # auto-numbered (labeled) footnote
+ | # *OR*
+ \* # auto-symbol footnote
+ )
+ \]
+ ([ ]+|$) # whitespace or end of line
+ """ % Inliner.simplename, re.VERBOSE | re.UNICODE)),
+ (citation,
+ re.compile(r"""
+ \.\.[ ]+ # explicit markup start
+ \[(%s)\] # citation label
+ ([ ]+|$) # whitespace or end of line
+ """ % Inliner.simplename, re.VERBOSE | re.UNICODE)),
+ (hyperlink_target,
+ re.compile(r"""
+ \.\.[ ]+ # explicit markup start
+ _ # target indicator
+ (?![ ]|$) # first char. not space or EOL
+ """, re.VERBOSE)),
+ (substitution_def,
+ re.compile(r"""
+ \.\.[ ]+ # explicit markup start
+ \| # substitution indicator
+ (?![ ]|$) # first char. not space or EOL
+ """, re.VERBOSE)),
+ (directive,
+ re.compile(r"""
+ \.\.[ ]+ # explicit markup start
+ (%s) # directive name
+ [ ]? # optional space
+ :: # directive delimiter
+ ([ ]+|$) # whitespace or end of line
+ """ % Inliner.simplename, re.VERBOSE | re.UNICODE))]
+
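The footnote construct pattern above can be tried out with a simplified stand-in for `Inliner.simplename` (the real pattern is Unicode-aware; this narrower ASCII version is for illustration only):

```python
import re

# Narrow ASCII stand-in for Inliner.simplename (illustration only).
simplename = r'[A-Za-z0-9]+(?:[-._+:][A-Za-z0-9]+)*'

footnote_pat = re.compile(r"""
    \.\.[ ]+       # explicit markup start
    \[
    (              # footnote label:
        [0-9]+     # manually numbered footnote
      | \#         # anonymous auto-numbered footnote
      | \#%s       # auto-numbered (labeled) footnote
      | \*         # auto-symbol footnote
    )
    \]
    ([ ]+|$)       # whitespace or end of line
    """ % simplename, re.VERBOSE)

for line in ('.. [1] text', '.. [#] text', '.. [#note] text', '.. [*] text'):
    assert footnote_pat.match(line), line
# A plain name needs the '#' prefix; '[label]' is a citation, not a footnote.
assert footnote_pat.match('.. [label] text') is None
```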
+ def explicit_markup(self, match, context, next_state):
+ """Footnotes, hyperlink targets, directives, comments."""
+ nodelist, blank_finish = self.explicit_construct(match)
+ self.parent += nodelist
+ self.explicit_list(blank_finish)
+ return [], next_state, []
+
+ def explicit_construct(self, match):
+ """Determine which explicit construct this is, parse & return it."""
+ errors = []
+ for method, pattern in self.explicit.constructs:
+ expmatch = pattern.match(match.string)
+ if expmatch:
+ try:
+ return method(self, expmatch)
+ except MarkupError, error: # never reached?
+ message = ' '.join(error.args)
+ src, srcline = self.state_machine.get_source_and_line()
+ errors.append(self.reporter.warning(
+ message, source=src, line=srcline))
+ break
+ nodelist, blank_finish = self.comment(match)
+ return nodelist + errors, blank_finish
+
+ def explicit_list(self, blank_finish):
+ """
+ Create a nested state machine for a series of explicit markup
+ constructs (including anonymous hyperlink targets).
+ """
+ offset = self.state_machine.line_offset + 1 # next line
+ newline_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=self.parent, initial_state='Explicit',
+ blank_finish=blank_finish,
+ match_titles=self.state_machine.match_titles)
+ self.goto_line(newline_offset)
+ if not blank_finish:
+ self.parent += self.unindent_warning('Explicit markup')
+
+ def anonymous(self, match, context, next_state):
+ """Anonymous hyperlink targets."""
+ nodelist, blank_finish = self.anonymous_target(match)
+ self.parent += nodelist
+ self.explicit_list(blank_finish)
+ return [], next_state, []
+
+ def anonymous_target(self, match):
+ lineno = self.state_machine.abs_line_number()
+ block, indent, offset, blank_finish \
+ = self.state_machine.get_first_known_indented(match.end(),
+ until_blank=1)
+ blocktext = match.string[:match.end()] + '\n'.join(block)
+ block = [escape2null(line) for line in block]
+ target = self.make_target(block, blocktext, lineno, '')
+ return [target], blank_finish
+
+ def line(self, match, context, next_state):
+ """Section title overline or transition marker."""
+ if self.state_machine.match_titles:
+ return [match.string], 'Line', []
+ elif match.string.strip() == '::':
+ raise statemachine.TransitionCorrection('text')
+ elif len(match.string.strip()) < 4:
+ msg = self.reporter.info(
+ 'Unexpected possible title overline or transition.\n'
+ "Treating it as ordinary text because it's so short.",
+ line=self.state_machine.abs_line_number())
+ self.parent += msg
+ raise statemachine.TransitionCorrection('text')
+ else:
+ blocktext = self.state_machine.line
+ msg = self.reporter.severe(
+ 'Unexpected section title or transition.',
+ nodes.literal_block(blocktext, blocktext),
+ line=self.state_machine.abs_line_number())
+ self.parent += msg
+ return [], next_state, []
+
+ def text(self, match, context, next_state):
+ """Titles, definition lists, paragraphs."""
+ return [match.string], 'Text', []
+
+
+class RFC2822Body(Body):
+
+ """
+ RFC2822 headers are only valid as the first constructs in documents. As
+ soon as anything else appears, the `Body` state should take over.
+ """
+
+ patterns = Body.patterns.copy() # can't modify the original
+ patterns['rfc2822'] = r'[!-9;-~]+:( +|$)'
+ initial_transitions = [(name, 'Body')
+ for name in Body.initial_transitions]
+ initial_transitions.insert(-1, ('rfc2822', 'Body')) # just before 'text'
+
+ def rfc2822(self, match, context, next_state):
+ """RFC2822-style field list item."""
+ fieldlist = nodes.field_list(classes=['rfc2822'])
+ self.parent += fieldlist
+ field, blank_finish = self.rfc2822_field(match)
+ fieldlist += field
+ offset = self.state_machine.line_offset + 1 # next line
+ newline_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=fieldlist, initial_state='RFC2822List',
+ blank_finish=blank_finish)
+ self.goto_line(newline_offset)
+ if not blank_finish:
+ self.parent += self.unindent_warning(
+ 'RFC2822-style field list')
+ return [], next_state, []
+
+ def rfc2822_field(self, match):
+ name = match.string[:match.string.find(':')]
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end(),
+ until_blank=1)
+ fieldnode = nodes.field()
+ fieldnode += nodes.field_name(name, name)
+ fieldbody = nodes.field_body('\n'.join(indented))
+ fieldnode += fieldbody
+ if indented:
+ self.nested_parse(indented, input_offset=line_offset,
+ node=fieldbody)
+ return fieldnode, blank_finish
+
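The `rfc2822` pattern above, `[!-9;-~]+:( +|$)`, accepts a run of printable ASCII characters other than space and colon, followed by a colon and then a space or end of line. A quick standalone check:

```python
import re

rfc2822_field_pat = re.compile(r'[!-9;-~]+:( +|$)')

assert rfc2822_field_pat.match('Author: David Goodger')
assert rfc2822_field_pat.match('Date:')                   # bare field name
assert rfc2822_field_pat.match('not a field') is None
assert rfc2822_field_pat.match('Bad : spacing') is None   # space before ':'
```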
+
+class SpecializedBody(Body):
+
+ """
+ Superclass for second and subsequent compound element members. Compound
+ elements are lists and list-like constructs.
+
+ All transition methods are disabled (redefined as `invalid_input`).
+ Override individual methods in subclasses to re-enable.
+
+ For example, once an initial bullet list item, say, is recognized, the
+ `BulletList` subclass takes over, with a "bullet_list" node as its
+ container. Upon encountering the initial bullet list item, `Body.bullet`
+ calls its ``self.nested_list_parse`` (`RSTState.nested_list_parse`), which
+ starts up a nested parsing session with `BulletList` as the initial state.
+ Only the ``bullet`` transition method is enabled in `BulletList`; as long
+ as only bullet list items are encountered, they are parsed and inserted
+ into the container. The first construct which is *not* a bullet list item
+ triggers the `invalid_input` method, which ends the nested parse and
+ closes the container. `BulletList` needs to recognize input that is
+ invalid in the context of a bullet list, which means everything *other
+ than* bullet list items, so it inherits the transition list created in
+ `Body`.
+ """
+
+ def invalid_input(self, match=None, context=None, next_state=None):
+ """Not a compound element member. Abort this state machine."""
+ self.state_machine.previous_line() # back up so parent SM can reassess
+ raise EOFError
+
+ indent = invalid_input
+ bullet = invalid_input
+ enumerator = invalid_input
+ field_marker = invalid_input
+ option_marker = invalid_input
+ doctest = invalid_input
+ line_block = invalid_input
+ grid_table_top = invalid_input
+ simple_table_top = invalid_input
+ explicit_markup = invalid_input
+ anonymous = invalid_input
+ line = invalid_input
+ text = invalid_input
+
+
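The takeover-and-abort protocol described in the `SpecializedBody` docstring can be sketched without the state-machine framework. This is a hedged illustration with invented names: a nested loop consumes homogeneous items, and the first non-member hands control back to the caller, the moral equivalent of `invalid_input` raising `EOFError`.

```python
def parse_bullet_items(lines):
    # Consume consecutive bullet items; stop at the first line that is
    # not a bullet item, returning it (and the rest) to the caller so
    # the outer parse can reassess it.
    items = []
    remainder = list(lines)
    while remainder and remainder[0].startswith('- '):
        items.append(remainder.pop(0)[2:])
    return items, remainder
```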
+class BulletList(SpecializedBody):
+
+ """Second and subsequent bullet_list list_items."""
+
+ def bullet(self, match, context, next_state):
+ """Bullet list item."""
+ if match.string[0] != self.parent['bullet']:
+ # different bullet: new list
+ self.invalid_input()
+ listitem, blank_finish = self.list_item(match.end())
+ self.parent += listitem
+ self.blank_finish = blank_finish
+ return [], next_state, []
+
+
+class DefinitionList(SpecializedBody):
+
+ """Second and subsequent definition_list_items."""
+
+ def text(self, match, context, next_state):
+ """Definition lists."""
+ return [match.string], 'Definition', []
+
+
+class EnumeratedList(SpecializedBody):
+
+ """Second and subsequent enumerated_list list_items."""
+
+ def enumerator(self, match, context, next_state):
+ """Enumerated list item."""
+ format, sequence, text, ordinal = self.parse_enumerator(
+ match, self.parent['enumtype'])
+ if ( format != self.format
+ or (sequence != '#' and (sequence != self.parent['enumtype']
+ or self.auto
+ or ordinal != (self.lastordinal + 1)))
+ or not self.is_enumerated_list_item(ordinal, sequence, format)):
+ # different enumeration: new list
+ self.invalid_input()
+ if sequence == '#':
+ self.auto = 1
+ listitem, blank_finish = self.list_item(match.end())
+ self.parent += listitem
+ self.blank_finish = blank_finish
+ self.lastordinal = ordinal
+ return [], next_state, []
+
+
+class FieldList(SpecializedBody):
+
+ """Second and subsequent field_list fields."""
+
+ def field_marker(self, match, context, next_state):
+ """Field list field."""
+ field, blank_finish = self.field(match)
+ self.parent += field
+ self.blank_finish = blank_finish
+ return [], next_state, []
+
+
+class OptionList(SpecializedBody):
+
+ """Second and subsequent option_list option_list_items."""
+
+ def option_marker(self, match, context, next_state):
+ """Option list item."""
+ try:
+ option_list_item, blank_finish = self.option_list_item(match)
+ except MarkupError:
+ self.invalid_input()
+ self.parent += option_list_item
+ self.blank_finish = blank_finish
+ return [], next_state, []
+
+
+class RFC2822List(SpecializedBody, RFC2822Body):
+
+ """Second and subsequent RFC2822-style field_list fields."""
+
+ patterns = RFC2822Body.patterns
+ initial_transitions = RFC2822Body.initial_transitions
+
+ def rfc2822(self, match, context, next_state):
+ """RFC2822-style field list item."""
+ field, blank_finish = self.rfc2822_field(match)
+ self.parent += field
+ self.blank_finish = blank_finish
+ return [], 'RFC2822List', []
+
+ blank = SpecializedBody.invalid_input
+
+
+class ExtensionOptions(FieldList):
+
+ """
+ Parse field_list fields for extension options.
+
+ No nested parsing is done (including inline markup parsing).
+ """
+
+ def parse_field_body(self, indented, offset, node):
+ """Override `Body.parse_field_body` for simpler parsing."""
+ lines = []
+ for line in list(indented) + ['']:
+ if line.strip():
+ lines.append(line)
+ elif lines:
+ text = '\n'.join(lines)
+ node += nodes.paragraph(text, text)
+ lines = []
+
+
+class LineBlock(SpecializedBody):
+
+ """Second and subsequent lines of a line_block."""
+
+ blank = SpecializedBody.invalid_input
+
+ def line_block(self, match, context, next_state):
+ """New line of line block."""
+ lineno = self.state_machine.abs_line_number()
+ line, messages, blank_finish = self.line_block_line(match, lineno)
+ self.parent += line
+ self.parent.parent += messages
+ self.blank_finish = blank_finish
+ return [], next_state, []
+
+
+class Explicit(SpecializedBody):
+
+ """Second and subsequent explicit markup construct."""
+
+ def explicit_markup(self, match, context, next_state):
+ """Footnotes, hyperlink targets, directives, comments."""
+ nodelist, blank_finish = self.explicit_construct(match)
+ self.parent += nodelist
+ self.blank_finish = blank_finish
+ return [], next_state, []
+
+ def anonymous(self, match, context, next_state):
+ """Anonymous hyperlink targets."""
+ nodelist, blank_finish = self.anonymous_target(match)
+ self.parent += nodelist
+ self.blank_finish = blank_finish
+ return [], next_state, []
+
+ blank = SpecializedBody.invalid_input
+
+
+class SubstitutionDef(Body):
+
+ """
+ Parser for the contents of a substitution_definition element.
+ """
+
+ patterns = {
+ 'embedded_directive': re.compile(r'(%s)::( +|$)'
+ % Inliner.simplename, re.UNICODE),
+ 'text': r''}
+ initial_transitions = ['embedded_directive', 'text']
+
+ def embedded_directive(self, match, context, next_state):
+ nodelist, blank_finish = self.directive(match,
+ alt=self.parent['names'][0])
+ self.parent += nodelist
+ if not self.state_machine.at_eof():
+ self.blank_finish = blank_finish
+ raise EOFError
+
+ def text(self, match, context, next_state):
+ if not self.state_machine.at_eof():
+ self.blank_finish = self.state_machine.is_next_line_blank()
+ raise EOFError
+
+
+class Text(RSTState):
+
+ """
+ Classifier of second line of a text block.
+
+ Could be a paragraph, a definition list item, or a title.
+ """
+
+ patterns = {'underline': Body.patterns['line'],
+ 'text': r''}
+ initial_transitions = [('underline', 'Body'), ('text', 'Body')]
+
+ def blank(self, match, context, next_state):
+ """End of paragraph."""
+ paragraph, literalnext = self.paragraph(
+ context, self.state_machine.abs_line_number() - 1)
+ self.parent += paragraph
+ if literalnext:
+ self.parent += self.literal_block()
+ return [], 'Body', []
+
+ def eof(self, context):
+ if context:
+ self.blank(None, context, None)
+ return []
+
+ def indent(self, match, context, next_state):
+ """Definition list item."""
+ definitionlist = nodes.definition_list()
+ definitionlistitem, blank_finish = self.definition_list_item(context)
+ definitionlist += definitionlistitem
+ self.parent += definitionlist
+ offset = self.state_machine.line_offset + 1 # next line
+ newline_offset, blank_finish = self.nested_list_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=self.state_machine.abs_line_offset() + 1,
+ node=definitionlist, initial_state='DefinitionList',
+ blank_finish=blank_finish, blank_finish_state='Definition')
+ self.goto_line(newline_offset)
+ if not blank_finish:
+ self.parent += self.unindent_warning('Definition list')
+ return [], 'Body', []
+
+ def underline(self, match, context, next_state):
+ """Section title."""
+ lineno = self.state_machine.abs_line_number()
+ src, srcline = self.state_machine.get_source_and_line()
+ title = context[0].rstrip()
+ underline = match.string.rstrip()
+ source = title + '\n' + underline
+ messages = []
+ if column_width(title) > len(underline):
+ if len(underline) < 4:
+ if self.state_machine.match_titles:
+ msg = self.reporter.info(
+ 'Possible title underline, too short for the title.\n'
+ "Treating it as ordinary text because it's so short.",
+ source=src, line=srcline)
+ self.parent += msg
+ raise statemachine.TransitionCorrection('text')
+ else:
+ blocktext = context[0] + '\n' + self.state_machine.line
+ msg = self.reporter.warning(
+ 'Title underline too short.',
+ nodes.literal_block(blocktext, blocktext),
+ source=src, line=srcline)
+ messages.append(msg)
+ if not self.state_machine.match_titles:
+ blocktext = context[0] + '\n' + self.state_machine.line
+ msg = self.reporter.severe(
+ 'Unexpected section title.',
+ nodes.literal_block(blocktext, blocktext),
+ source=src, line=srcline)
+ self.parent += messages
+ self.parent += msg
+ return [], next_state, []
+ style = underline[0]
+ context[:] = []
+ self.section(title, source, style, lineno - 1, messages)
+ return [], next_state, []
+
+ def text(self, match, context, next_state):
+ """Paragraph."""
+ startline = self.state_machine.abs_line_number() - 1
+ msg = None
+ try:
+ block = self.state_machine.get_text_block(flush_left=1)
+ except statemachine.UnexpectedIndentationError, instance:
+ block, src, srcline = instance.args
+ msg = self.reporter.error('Unexpected indentation.',
+ source=src, line=srcline)
+ lines = context + list(block)
+ paragraph, literalnext = self.paragraph(lines, startline)
+ self.parent += paragraph
+ self.parent += msg
+ if literalnext:
+ try:
+ self.state_machine.next_line()
+ except EOFError:
+ pass
+ self.parent += self.literal_block()
+ return [], next_state, []
+
+ def literal_block(self):
+ """Return a list of nodes."""
+ indented, indent, offset, blank_finish = \
+ self.state_machine.get_indented()
+ while indented and not indented[-1].strip():
+ indented.trim_end()
+ if not indented:
+ return self.quoted_literal_block()
+ data = '\n'.join(indented)
+ literal_block = nodes.literal_block(data, data)
+ literal_block.line = offset + 1
+ nodelist = [literal_block]
+ if not blank_finish:
+ nodelist.append(self.unindent_warning('Literal block'))
+ return nodelist
+
+ def quoted_literal_block(self):
+ abs_line_offset = self.state_machine.abs_line_offset()
+ offset = self.state_machine.line_offset
+ parent_node = nodes.Element()
+ new_abs_offset = self.nested_parse(
+ self.state_machine.input_lines[offset:],
+ input_offset=abs_line_offset, node=parent_node, match_titles=0,
+ state_machine_kwargs={'state_classes': (QuotedLiteralBlock,),
+ 'initial_state': 'QuotedLiteralBlock'})
+ self.goto_line(new_abs_offset)
+ return parent_node.children
+
+ def definition_list_item(self, termline):
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_indented()
+ definitionlistitem = nodes.definition_list_item(
+ '\n'.join(termline + list(indented)))
+ lineno = self.state_machine.abs_line_number() - 1
+ src, srcline = self.state_machine.get_source_and_line()
+ definitionlistitem.source = src
+ definitionlistitem.line = srcline - 1
+ termlist, messages = self.term(termline, lineno)
+ definitionlistitem += termlist
+ definition = nodes.definition('', *messages)
+ definitionlistitem += definition
+ if termline[0][-2:] == '::':
+ definition += self.reporter.info(
+ 'Blank line missing before literal block (after the "::")? '
+ 'Interpreted as a definition list item.',
+ source=src, line=srcline)
+ self.nested_parse(indented, input_offset=line_offset, node=definition)
+ return definitionlistitem, blank_finish
+
+ classifier_delimiter = re.compile(' +: +')
+
+ def term(self, lines, lineno):
+ """Return a definition_list's term and optional classifiers."""
+ assert len(lines) == 1
+ text_nodes, messages = self.inline_text(lines[0], lineno)
+ term_node = nodes.term()
+ node_list = [term_node]
+ for i in range(len(text_nodes)):
+ node = text_nodes[i]
+ if isinstance(node, nodes.Text):
+ parts = self.classifier_delimiter.split(node.rawsource)
+ if len(parts) == 1:
+ node_list[-1] += node
+ else:
+ node_list[-1] += nodes.Text(parts[0].rstrip())
+ for part in parts[1:]:
+ classifier_node = nodes.classifier('', part)
+ node_list.append(classifier_node)
+ else:
+ node_list[-1] += node
+ return node_list, messages
+
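The `classifier_delimiter` pattern above only treats a colon surrounded by spaces as a separator, so embedded colons (for example in URLs) inside a term survive intact. For illustration:

```python
import re

classifier_delimiter = re.compile(' +: +')

parts = classifier_delimiter.split('term : classifier one : classifier two')
assert parts == ['term', 'classifier one', 'classifier two']
# No surrounding spaces, no split:
assert classifier_delimiter.split('http://example.org') == ['http://example.org']
```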
+
+class SpecializedText(Text):
+
+ """
+ Superclass for second and subsequent lines of Text-variants.
+
+ All transition methods are disabled. Override individual methods in
+ subclasses to re-enable.
+ """
+
+ def eof(self, context):
+ """Incomplete construct."""
+ return []
+
+ def invalid_input(self, match=None, context=None, next_state=None):
+ """Not a compound element member. Abort this state machine."""
+ raise EOFError
+
+ blank = invalid_input
+ indent = invalid_input
+ underline = invalid_input
+ text = invalid_input
+
+
+class Definition(SpecializedText):
+
+ """Second line of potential definition_list_item."""
+
+ def eof(self, context):
+ """Not a definition."""
+ self.state_machine.previous_line(2) # so parent SM can reassess
+ return []
+
+ def indent(self, match, context, next_state):
+ """Definition list item."""
+ definitionlistitem, blank_finish = self.definition_list_item(context)
+ self.parent += definitionlistitem
+ self.blank_finish = blank_finish
+ return [], 'DefinitionList', []
+
+
+class Line(SpecializedText):
+
+ """
+ Second line of over- & underlined section title or transition marker.
+ """
+
+ eofcheck = 1 # @@@ ???
+ """Set to 0 while parsing sections, so that we don't catch the EOF."""
+
+ def eof(self, context):
+ """Transition marker at end of section or document."""
+ marker = context[0].strip()
+ if self.memo.section_bubble_up_kludge:
+ self.memo.section_bubble_up_kludge = 0
+ elif len(marker) < 4:
+ self.state_correction(context)
+ if self.eofcheck: # ignore EOFError with sections
+ lineno = self.state_machine.abs_line_number() - 1
+ transition = nodes.transition(rawsource=context[0])
+ transition.line = lineno
+ self.parent += transition
+ self.eofcheck = 1
+ return []
+
+ def blank(self, match, context, next_state):
+ """Transition marker."""
+ src, srcline = self.state_machine.get_source_and_line()
+ marker = context[0].strip()
+ if len(marker) < 4:
+ self.state_correction(context)
+ transition = nodes.transition(rawsource=marker)
+ transition.source = src
+ transition.line = srcline - 1
+ self.parent += transition
+ return [], 'Body', []
+
+ def text(self, match, context, next_state):
+ """Potential over- & underlined title."""
+ lineno = self.state_machine.abs_line_number() - 1
+ src, srcline = self.state_machine.get_source_and_line()
+ overline = context[0]
+ title = match.string
+ underline = ''
+ try:
+ underline = self.state_machine.next_line()
+ except EOFError:
+ blocktext = overline + '\n' + title
+ if len(overline.rstrip()) < 4:
+ self.short_overline(context, blocktext, lineno, 2)
+ else:
+ msg = self.reporter.severe(
+ 'Incomplete section title.',
+ nodes.literal_block(blocktext, blocktext),
+ source=src, line=srcline-1)
+ self.parent += msg
+ return [], 'Body', []
+ source = '%s\n%s\n%s' % (overline, title, underline)
+ overline = overline.rstrip()
+ underline = underline.rstrip()
+ if not self.transitions['underline'][0].match(underline):
+ blocktext = overline + '\n' + title + '\n' + underline
+ if len(overline.rstrip()) < 4:
+ self.short_overline(context, blocktext, lineno, 2)
+ else:
+ msg = self.reporter.severe(
+ 'Missing matching underline for section title overline.',
+ nodes.literal_block(source, source),
+ source=src, line=srcline-1)
+ self.parent += msg
+ return [], 'Body', []
+ elif overline != underline:
+ blocktext = overline + '\n' + title + '\n' + underline
+ if len(overline.rstrip()) < 4:
+ self.short_overline(context, blocktext, lineno, 2)
+ else:
+ msg = self.reporter.severe(
+ 'Title overline & underline mismatch.',
+ nodes.literal_block(source, source),
+ source=src, line=srcline-1)
+ self.parent += msg
+ return [], 'Body', []
+ title = title.rstrip()
+ messages = []
+ if column_width(title) > len(overline):
+ blocktext = overline + '\n' + title + '\n' + underline
+ if len(overline.rstrip()) < 4:
+ self.short_overline(context, blocktext, lineno, 2)
+ else:
+ msg = self.reporter.warning(
+ 'Title overline too short.',
+ nodes.literal_block(source, source),
+ source=src, line=srcline-1)
+ messages.append(msg)
+ style = (overline[0], underline[0])
+ self.eofcheck = 0 # @@@ not sure this is correct
+ self.section(title.lstrip(), source, style, lineno + 1, messages)
+ self.eofcheck = 1
+ return [], 'Body', []
+
+ indent = text # indented title
+
+ def underline(self, match, context, next_state):
+ overline = context[0]
+ blocktext = overline + '\n' + self.state_machine.line
+ lineno = self.state_machine.abs_line_number() - 1
+ src, srcline = self.state_machine.get_source_and_line()
+ if len(overline.rstrip()) < 4:
+ self.short_overline(context, blocktext, lineno, 1)
+ msg = self.reporter.error(
+ 'Invalid section title or transition marker.',
+ nodes.literal_block(blocktext, blocktext),
+ source=src, line=srcline-1)
+ self.parent += msg
+ return [], 'Body', []
+
+ def short_overline(self, context, blocktext, lineno, lines=1):
+ src, srcline = self.state_machine.get_source_and_line(lineno)
+ msg = self.reporter.info(
+ 'Possible incomplete section title.\nTreating the overline as '
+ "ordinary text because it's so short.",
+ source=src, line=srcline)
+ self.parent += msg
+ self.state_correction(context, lines)
+
+ def state_correction(self, context, lines=1):
+ self.state_machine.previous_line(lines)
+ context[:] = []
+ raise statemachine.StateCorrection('Body', 'text')
+
+
+class QuotedLiteralBlock(RSTState):
+
+ """
+ Nested parse handler for quoted (unindented) literal blocks.
+
+ Special-purpose. Not for inclusion in `state_classes`.
+ """
+
+ patterns = {'initial_quoted': r'(%(nonalphanum7bit)s)' % Body.pats,
+ 'text': r''}
+ initial_transitions = ('initial_quoted', 'text')
+
+ def __init__(self, state_machine, debug=0):
+ RSTState.__init__(self, state_machine, debug)
+ self.messages = []
+ self.initial_lineno = None
+
+ def blank(self, match, context, next_state):
+ if context:
+ raise EOFError
+ else:
+ return context, next_state, []
+
+ def eof(self, context):
+ if context:
+ src, srcline = self.state_machine.get_source_and_line(
+ self.initial_lineno)
+ text = '\n'.join(context)
+ literal_block = nodes.literal_block(text, text)
+ literal_block.source = src
+ literal_block.line = srcline
+ self.parent += literal_block
+ else:
+ self.parent += self.reporter.warning(
+ 'Literal block expected; none found.',
+ line=self.state_machine.abs_line_number())
+ # src not available, because statemachine.input_lines is empty
+ self.state_machine.previous_line()
+ self.parent += self.messages
+ return []
+
+ def indent(self, match, context, next_state):
+ assert context, ('QuotedLiteralBlock.indent: context should not '
+ 'be empty!')
+ self.messages.append(
+ self.reporter.error('Unexpected indentation.',
+ line=self.state_machine.abs_line_number()))
+ self.state_machine.previous_line()
+ raise EOFError
+
+ def initial_quoted(self, match, context, next_state):
+ """Match arbitrary quote character on the first line only."""
+ self.remove_transition('initial_quoted')
+ quote = match.string[0]
+ pattern = re.compile(re.escape(quote))
+ # New transition matches consistent quotes only:
+ self.add_transition('quoted',
+ (pattern, self.quoted, self.__class__.__name__))
+ self.initial_lineno = self.state_machine.abs_line_number()
+ return [match.string], next_state, []
+
+ def quoted(self, match, context, next_state):
+ """Match consistent quotes on subsequent lines."""
+ context.append(match.string)
+ return context, next_state, []
+
+ def text(self, match, context, next_state):
+ if context:
+ src, srcline = self.state_machine.get_source_and_line()
+ self.messages.append(
+ self.reporter.error('Inconsistent literal block quoting.',
+ source=src, line=srcline))
+ self.state_machine.previous_line()
+ raise EOFError
+
+
+state_classes = (Body, BulletList, DefinitionList, EnumeratedList, FieldList,
+ OptionList, LineBlock, ExtensionOptions, Explicit, Text,
+ Definition, Line, SubstitutionDef, RFC2822Body, RFC2822List)
+"""Standard set of State classes used to start `RSTStateMachine`."""
diff --git a/python/helpers/docutils/parsers/rst/tableparser.py b/python/helpers/docutils/parsers/rst/tableparser.py
new file mode 100644
index 0000000..5de124a
--- /dev/null
+++ b/python/helpers/docutils/parsers/rst/tableparser.py
@@ -0,0 +1,525 @@
+# $Id: tableparser.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This module defines table parser classes, which parse plaintext-graphic tables
+and produce a well-formed data structure suitable for building a CALS table.
+
+:Classes:
+ - `GridTableParser`: Parse fully-formed tables represented with a grid.
+ - `SimpleTableParser`: Parse simple tables, delimited by top & bottom
+ borders.
+
+:Exception class: `TableMarkupError`
+
+:Function:
+ `update_dict_of_lists()`: Merge two dictionaries containing list values.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import re
+import sys
+from docutils import DataError
+
+
+class TableMarkupError(DataError): pass
+
+
+class TableParser:
+
+ """
+ Abstract superclass for the common parts of the syntax-specific parsers.
+ """
+
+ head_body_separator_pat = None
+ """Matches the row separator between head rows and body rows."""
+
+ double_width_pad_char = '\x00'
+ """Padding character for East Asian double-width text."""
+
+ def parse(self, block):
+ """
+ Analyze the text `block` and return a table data structure.
+
+ Given a plaintext-graphic table in `block` (list of lines of text; no
+ whitespace padding), parse the table, construct and return the data
+ necessary to construct a CALS table or equivalent.
+
+ Raise `TableMarkupError` if there is any problem with the markup.
+ """
+ self.setup(block)
+ self.find_head_body_sep()
+ self.parse_table()
+ structure = self.structure_from_cells()
+ return structure
+
+ def find_head_body_sep(self):
+ """Look for a head/body row separator line; store the line index."""
+ for i in range(len(self.block)):
+ line = self.block[i]
+ if self.head_body_separator_pat.match(line):
+ if self.head_body_sep:
+ raise TableMarkupError(
+ 'Multiple head/body row separators in table (at line '
+ 'offset %s and %s); only one allowed.'
+ % (self.head_body_sep, i))
+ else:
+ self.head_body_sep = i
+ self.block[i] = line.replace('=', '-')
+ if self.head_body_sep == 0 or self.head_body_sep == (len(self.block)
+ - 1):
+ raise TableMarkupError('The head/body row separator may not be '
+ 'the first or last line of the table.')
+
+
+class GridTableParser(TableParser):
+
+ """
+ Parse a grid table using `parse()`.
+
+ Here's an example of a grid table::
+
+ +------------------------+------------+----------+----------+
+ | Header row, column 1 | Header 2 | Header 3 | Header 4 |
+ +========================+============+==========+==========+
+ | body row 1, column 1 | column 2 | column 3 | column 4 |
+ +------------------------+------------+----------+----------+
+ | body row 2 | Cells may span columns. |
+ +------------------------+------------+---------------------+
+ | body row 3 | Cells may | - Table cells |
+ +------------------------+ span rows. | - contain |
+ | body row 4 | | - body elements. |
+ +------------------------+------------+---------------------+
+
+ Intersections use '+', row separators use '-' (except for one optional
+ head/body row separator, which uses '='), and column separators use '|'.
+
+ Passing the above table to the `parse()` method will result in the
+ following data structure::
+
+ ([24, 12, 10, 10],
+ [[(0, 0, 1, ['Header row, column 1']),
+ (0, 0, 1, ['Header 2']),
+ (0, 0, 1, ['Header 3']),
+ (0, 0, 1, ['Header 4'])]],
+ [[(0, 0, 3, ['body row 1, column 1']),
+ (0, 0, 3, ['column 2']),
+ (0, 0, 3, ['column 3']),
+ (0, 0, 3, ['column 4'])],
+ [(0, 0, 5, ['body row 2']),
+ (0, 2, 5, ['Cells may span columns.']),
+ None,
+ None],
+ [(0, 0, 7, ['body row 3']),
+ (1, 0, 7, ['Cells may', 'span rows.', '']),
+ (1, 1, 7, ['- Table cells', '- contain', '- body elements.']),
+ None],
+ [(0, 0, 9, ['body row 4']), None, None, None]])
+
+ The first item is a list containing column widths (colspecs). The second
+ item is a list of head rows, and the third is a list of body rows. Each
+ row contains a list of cells. Each cell is either None (for a cell unused
+ because of another cell's span), or a tuple. A cell tuple contains four
+ items: the number of extra rows used by the cell in a vertical span
+ (morerows); the number of extra columns used by the cell in a horizontal
+ span (morecols); the line offset of the first line of the cell contents;
+ and the cell contents, a list of lines of text.
+ """
+
+ head_body_separator_pat = re.compile(r'\+=[=+]+=\+ *$')
+
+ def setup(self, block):
+ self.block = block[:] # make a copy; it may be modified
+ self.block.disconnect() # don't propagate changes to parent
+ self.bottom = len(block) - 1
+ self.right = len(block[0]) - 1
+ self.head_body_sep = None
+ self.done = [-1] * len(block[0])
+ self.cells = []
+ self.rowseps = {0: [0]}
+ self.colseps = {0: [0]}
+
+ def parse_table(self):
+ """
+ Start with a queue of upper-left corners, containing the upper-left
+ corner of the table itself. Trace out one rectangular cell, remember
+ it, and add its upper-right and lower-left corners to the queue of
+ potential upper-left corners of further cells. Process the queue in
+ top-to-bottom order, keeping track of how much of each text column has
+ been seen.
+
+ We'll end up knowing all the row and column boundaries, cell positions
+ and their dimensions.
+ """
+ corners = [(0, 0)]
+ while corners:
+ top, left = corners.pop(0)
+ if top == self.bottom or left == self.right \
+ or top <= self.done[left]:
+ continue
+ result = self.scan_cell(top, left)
+ if not result:
+ continue
+ bottom, right, rowseps, colseps = result
+ update_dict_of_lists(self.rowseps, rowseps)
+ update_dict_of_lists(self.colseps, colseps)
+ self.mark_done(top, left, bottom, right)
+ cellblock = self.block.get_2D_block(top + 1, left + 1,
+ bottom, right)
+ cellblock.disconnect() # lines in cell can't sync with parent
+ cellblock.replace(self.double_width_pad_char, '')
+ self.cells.append((top, left, bottom, right, cellblock))
+ corners.extend([(top, right), (bottom, left)])
+ corners.sort()
+ if not self.check_parse_complete():
+ raise TableMarkupError('Malformed table; parse incomplete.')
+
+ def mark_done(self, top, left, bottom, right):
+ """For keeping track of how much of each text column has been seen."""
+ before = top - 1
+ after = bottom - 1
+ for col in range(left, right):
+ assert self.done[col] == before
+ self.done[col] = after
+
+ def check_parse_complete(self):
+ """Each text column should have been completely seen."""
+ last = self.bottom - 1
+ for col in range(self.right):
+ if self.done[col] != last:
+ return None
+ return 1
+
+ def scan_cell(self, top, left):
+ """Starting at the top-left corner, start tracing out a cell."""
+ assert self.block[top][left] == '+'
+ result = self.scan_right(top, left)
+ return result
+
+ def scan_right(self, top, left):
+ """
+ Look for the top-right corner of the cell, and make note of all column
+ boundaries ('+').
+ """
+ colseps = {}
+ line = self.block[top]
+ for i in range(left + 1, self.right + 1):
+ if line[i] == '+':
+ colseps[i] = [top]
+ result = self.scan_down(top, left, i)
+ if result:
+ bottom, rowseps, newcolseps = result
+ update_dict_of_lists(colseps, newcolseps)
+ return bottom, i, rowseps, colseps
+ elif line[i] != '-':
+ return None
+ return None
+
+ def scan_down(self, top, left, right):
+ """
+ Look for the bottom-right corner of the cell, making note of all row
+ boundaries.
+ """
+ rowseps = {}
+ for i in range(top + 1, self.bottom + 1):
+ if self.block[i][right] == '+':
+ rowseps[i] = [right]
+ result = self.scan_left(top, left, i, right)
+ if result:
+ newrowseps, colseps = result
+ update_dict_of_lists(rowseps, newrowseps)
+ return i, rowseps, colseps
+ elif self.block[i][right] != '|':
+ return None
+ return None
+
+ def scan_left(self, top, left, bottom, right):
+ """
+ Noting column boundaries, look for the bottom-left corner of the cell.
+ It must line up with the starting point.
+ """
+ colseps = {}
+ line = self.block[bottom]
+ for i in range(right - 1, left, -1):
+ if line[i] == '+':
+ colseps[i] = [bottom]
+ elif line[i] != '-':
+ return None
+ if line[left] != '+':
+ return None
+ result = self.scan_up(top, left, bottom, right)
+ if result is not None:
+ rowseps = result
+ return rowseps, colseps
+ return None
+
+ def scan_up(self, top, left, bottom, right):
+ """
+ Noting row boundaries, see if we can return to the starting point.
+ """
+ rowseps = {}
+ for i in range(bottom - 1, top, -1):
+ if self.block[i][left] == '+':
+ rowseps[i] = [left]
+ elif self.block[i][left] != '|':
+ return None
+ return rowseps
+
+ def structure_from_cells(self):
+ """
+ From the data collected by `scan_cell()`, convert to the final data
+ structure.
+ """
+ rowseps = self.rowseps.keys() # list of row boundaries
+ rowseps.sort()
+ rowindex = {}
+ for i in range(len(rowseps)):
+ rowindex[rowseps[i]] = i # row boundary -> row number mapping
+ colseps = self.colseps.keys() # list of column boundaries
+ colseps.sort()
+ colindex = {}
+ for i in range(len(colseps)):
+ colindex[colseps[i]] = i # column boundary -> col number map
+ colspecs = [(colseps[i] - colseps[i - 1] - 1)
+ for i in range(1, len(colseps))] # list of column widths
+ # prepare an empty table with the correct number of rows & columns
+ onerow = [None for i in range(len(colseps) - 1)]
+ rows = [onerow[:] for i in range(len(rowseps) - 1)]
+ # keep track of # of cells remaining; should reduce to zero
+ remaining = (len(rowseps) - 1) * (len(colseps) - 1)
+ for top, left, bottom, right, block in self.cells:
+ rownum = rowindex[top]
+ colnum = colindex[left]
+ assert rows[rownum][colnum] is None, (
+ 'Cell (row %s, column %s) already used.'
+ % (rownum + 1, colnum + 1))
+ morerows = rowindex[bottom] - rownum - 1
+ morecols = colindex[right] - colnum - 1
+ remaining -= (morerows + 1) * (morecols + 1)
+ # write the cell into the table
+ rows[rownum][colnum] = (morerows, morecols, top + 1, block)
+ assert remaining == 0, 'Unused cells remaining.'
+ if self.head_body_sep: # separate head rows from body rows
+ numheadrows = rowindex[self.head_body_sep]
+ headrows = rows[:numheadrows]
+ bodyrows = rows[numheadrows:]
+ else:
+ headrows = []
+ bodyrows = rows
+ return (colspecs, headrows, bodyrows)
+
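The four `scan_*` methods above can be exercised standalone. Here is a minimal sketch over a plain list of strings (no `StringList`, no separator bookkeeping), tracing one rectangular cell clockwise from its top-left `+` corner; the function names mirror the methods, but the simplified signatures are this sketch's own:

```python
def scan_right(block, top, left):
    """Find the top-right corner; try each '+' on the top edge."""
    line = block[top]
    for i in range(left + 1, len(line)):
        if line[i] == '+':
            bottom = scan_down(block, top, left, i)
            if bottom is not None:
                return (bottom, i)
        elif line[i] != '-':
            return None
    return None

def scan_down(block, top, left, right):
    """Find the bottom-right corner along column `right`."""
    for i in range(top + 1, len(block)):
        if block[i][right] == '+':
            if scan_left(block, top, left, i, right):
                return i
        elif block[i][right] != '|':
            return None
    return None

def scan_left(block, top, left, bottom, right):
    """Check the bottom edge back to the starting column."""
    line = block[bottom]
    for i in range(right - 1, left, -1):
        if line[i] not in '+-':
            return False
    if line[left] != '+':
        return False
    return scan_up(block, top, left, bottom)

def scan_up(block, top, left, bottom):
    """Check the left edge back up to the starting corner."""
    for i in range(bottom - 1, top, -1):
        if block[i][left] not in '+|':
            return False
    return True

def trace_cell(block, top, left):
    """Return (bottom, right) of the cell with top-left corner at (top, left)."""
    return scan_right(block, top, left)

grid = [
    '+-----+-----+',
    '| a   | b   |',
    '+-----+-----+',
]
assert trace_cell(grid, 0, 0) == (2, 6)
assert trace_cell(grid, 0, 6) == (2, 12)
```

Note the backtracking built into `scan_right` and `scan_down`: a `+` on the top edge may be an interior column boundary of a spanned cell, so a failed downward scan simply continues to the next candidate corner.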
+
+class SimpleTableParser(TableParser):
+
+ """
+ Parse a simple table using `parse()`.
+
+ Here's an example of a simple table::
+
+ ===== =====
+ col 1 col 2
+ ===== =====
+ 1 Second column of row 1.
+ 2 Second column of row 2.
+ Second line of paragraph.
+ 3 - Second column of row 3.
+
+ - Second item in bullet
+ list (row 3, column 2).
+ 4 is a span
+ ------------
+ 5
+ ===== =====
+
+ Top and bottom borders use '=', column span underlines use '-', column
+ separation is indicated with spaces.
+
+ Passing the above table to the `parse()` method will result in the
+ following data structure, whose interpretation is the same as for
+ `GridTableParser`::
+
+ ([5, 25],
+ [[(0, 0, 1, ['col 1']),
+ (0, 0, 1, ['col 2'])]],
+ [[(0, 0, 3, ['1']),
+ (0, 0, 3, ['Second column of row 1.'])],
+ [(0, 0, 4, ['2']),
+ (0, 0, 4, ['Second column of row 2.',
+ 'Second line of paragraph.'])],
+ [(0, 0, 6, ['3']),
+ (0, 0, 6, ['- Second column of row 3.',
+ '',
+ '- Second item in bullet',
+ ' list (row 3, column 2).'])],
+ [(0, 1, 10, ['4 is a span'])],
+ [(0, 0, 12, ['5']),
+ (0, 0, 12, [''])]])
+ """
+
+ head_body_separator_pat = re.compile('=[ =]*$')
+ span_pat = re.compile('-[ -]*$')
+
+ def setup(self, block):
+ self.block = block[:] # make a copy; it will be modified
+ self.block.disconnect() # don't propagate changes to parent
+ # Convert top & bottom borders to column span underlines:
+ self.block[0] = self.block[0].replace('=', '-')
+ self.block[-1] = self.block[-1].replace('=', '-')
+ self.head_body_sep = None
+ self.columns = []
+ self.border_end = None
+ self.table = []
+ self.done = [-1] * len(block[0])
+ self.rowseps = {0: [0]}
+ self.colseps = {0: [0]}
+
+ def parse_table(self):
+ """
+ First determine the column boundaries from the top border, then
+ process rows. Each row may consist of multiple lines; accumulate
+ lines until a row is complete. Call `self.parse_row` to finish the
+ job.
+ """
+ # Top border must fully describe all table columns.
+ self.columns = self.parse_columns(self.block[0], 0)
+ self.border_end = self.columns[-1][1]
+ firststart, firstend = self.columns[0]
+ offset = 1 # skip top border
+ start = 1
+ text_found = None
+ while offset < len(self.block):
+ line = self.block[offset]
+ if self.span_pat.match(line):
+ # Column span underline or border; row is complete.
+ self.parse_row(self.block[start:offset], start,
+ (line.rstrip(), offset))
+ start = offset + 1
+ text_found = None
+ elif line[firststart:firstend].strip():
+ # First column not blank, therefore it's a new row.
+ if text_found and offset != start:
+ self.parse_row(self.block[start:offset], start)
+ start = offset
+ text_found = 1
+ elif not text_found:
+ start = offset + 1
+ offset += 1
+
+ def parse_columns(self, line, offset):
+ """
+ Given a column span underline, return a list of (begin, end) pairs.
+ """
+ cols = []
+ end = 0
+ while 1:
+ begin = line.find('-', end)
+ end = line.find(' ', begin)
+ if begin < 0:
+ break
+ if end < 0:
+ end = len(line)
+ cols.append((begin, end))
+ if self.columns:
+ if cols[-1][1] != self.border_end:
+ raise TableMarkupError('Column span incomplete at line '
+ 'offset %s.' % offset)
+ # Allow for an unbounded rightmost column:
+ cols[-1] = (cols[-1][0], self.columns[-1][1])
+ return cols
+
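Stripped of the instance state and error handling, the column-detection loop in `parse_columns()` is a plain scan for dash runs; a standalone sketch:

```python
def parse_columns(line):
    """Return (begin, end) pairs for each '-' run in a span underline."""
    cols = []
    end = 0
    while True:
        begin = line.find('-', end)   # start of the next dash run
        end = line.find(' ', begin)   # first space after it
        if begin < 0:
            break                     # no more runs
        if end < 0:
            end = len(line)           # last column extends to end of line
        cols.append((begin, end))
    return cols

# Borders like '=====  =====' are converted to dashes in setup() first:
assert parse_columns('-----  -----') == [(0, 5), (7, 12)]
```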
+ def init_row(self, colspec, offset):
+ i = 0
+ cells = []
+ for start, end in colspec:
+ morecols = 0
+ try:
+ assert start == self.columns[i][0]
+ while end != self.columns[i][1]:
+ i += 1
+ morecols += 1
+ except (AssertionError, IndexError):
+ raise TableMarkupError('Column span alignment problem at '
+ 'line offset %s.' % (offset + 1))
+ cells.append([0, morecols, offset, []])
+ i += 1
+ return cells
+
+ def parse_row(self, lines, start, spanline=None):
+ """
+ Given the text `lines` of a row, parse it and append to `self.table`.
+
+ The row is parsed according to the current column spec (either
+ `spanline` if provided or `self.columns`). For each column, extract
+ text from each line, and check for text in column margins. Finally,
+ adjust for insignificant whitespace.
+ """
+ if not (lines or spanline):
+ # No new row, just blank lines.
+ return
+ if spanline:
+ columns = self.parse_columns(*spanline)
+ span_offset = spanline[1]
+ else:
+ columns = self.columns[:]
+ span_offset = start
+ self.check_columns(lines, start, columns)
+ row = self.init_row(columns, start)
+ for i in range(len(columns)):
+ start, end = columns[i]
+ cellblock = lines.get_2D_block(0, start, len(lines), end)
+ cellblock.disconnect() # lines in cell can't sync with parent
+ cellblock.replace(self.double_width_pad_char, '')
+ row[i][3] = cellblock
+ self.table.append(row)
+
+ def check_columns(self, lines, first_line, columns):
+ """
+ Check for text in column margins and text overflow in the last column.
+ Raise TableMarkupError if anything but whitespace is in column margins.
+ Adjust the end value for the last column if there is text overflow.
+ """
+ # "Infinite" value for a dummy last column's beginning, used to
+ # check for text overflow:
+ columns.append((sys.maxint, None))
+ lastcol = len(columns) - 2
+ for i in range(len(columns) - 1):
+ start, end = columns[i]
+ nextstart = columns[i+1][0]
+ offset = 0
+ for line in lines:
+ if i == lastcol and line[end:].strip():
+ text = line[start:].rstrip()
+ new_end = start + len(text)
+ columns[i] = (start, new_end)
+ main_start, main_end = self.columns[-1]
+ if new_end > main_end:
+ self.columns[-1] = (main_start, new_end)
+ elif line[end:nextstart].strip():
+ raise TableMarkupError('Text in column margin at line '
+ 'offset %s.' % (first_line + offset))
+ offset += 1
+ columns.pop()
+
+ def structure_from_cells(self):
+ colspecs = [end - start for start, end in self.columns]
+ first_body_row = 0
+ if self.head_body_sep:
+ for i in range(len(self.table)):
+ if self.table[i][0][2] > self.head_body_sep:
+ first_body_row = i
+ break
+ return (colspecs, self.table[:first_body_row],
+ self.table[first_body_row:])
+
+
+def update_dict_of_lists(master, newdata):
+ """
+ Extend the list values of `master` with those from `newdata`.
+
+ Both parameters must be dictionaries containing list values.
+ """
+ for key, values in newdata.items():
+ master.setdefault(key, []).extend(values)
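`update_dict_of_lists()` is how the scan functions merge newly discovered row and column boundaries into the running `rowseps`/`colseps` maps; a quick demonstration:

```python
def update_dict_of_lists(master, newdata):
    """Extend the list values of `master` with those from `newdata`."""
    for key, values in newdata.items():
        master.setdefault(key, []).extend(values)

rowseps = {0: [0]}                              # boundary row 0, seen at column 0
update_dict_of_lists(rowseps, {0: [4], 2: [4]}) # boundaries found while scanning
assert rowseps == {0: [0, 4], 2: [4]}
```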
diff --git a/python/helpers/docutils/readers/__init__.py b/python/helpers/docutils/readers/__init__.py
new file mode 100644
index 0000000..d727bd4
--- /dev/null
+++ b/python/helpers/docutils/readers/__init__.py
@@ -0,0 +1,107 @@
+# $Id: __init__.py 5618 2008-07-28 08:37:32Z strank $
+# Authors: David Goodger <[email protected]>; Ueli Schlaepfer
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains Docutils Reader modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils import utils, parsers, Component
+from docutils.transforms import universal
+
+
+class Reader(Component):
+
+ """
+ Abstract base class for docutils Readers.
+
+ Each reader module or package must export a subclass also called 'Reader'.
+
+ A Reader's responsibility has two steps: reading the input data and
+ parsing it. Call `read()` to process a document.
+ """
+
+ component_type = 'reader'
+ config_section = 'readers'
+
+ def get_transforms(self):
+ return Component.get_transforms(self) + [
+ universal.Decorations,
+ universal.ExposeInternals,
+ universal.StripComments,]
+
+ def __init__(self, parser=None, parser_name=None):
+ """
+ Initialize the Reader instance.
+
+ Several instance attributes are defined with dummy initial values.
+ Subclasses may use these attributes as they wish.
+ """
+
+ self.parser = parser
+ """A `parsers.Parser` instance shared by all doctrees. May be left
+ unspecified if the document source determines the parser."""
+
+ if parser is None and parser_name:
+ self.set_parser(parser_name)
+
+ self.source = None
+ """`docutils.io` IO object, source of input data."""
+
+ self.input = None
+ """Raw text input; either a single string or, for more complex cases,
+ a collection of strings."""
+
+ def set_parser(self, parser_name):
+ """Set `self.parser` by name."""
+ parser_class = parsers.get_parser_class(parser_name)
+ self.parser = parser_class()
+
+ def read(self, source, parser, settings):
+ self.source = source
+ if not self.parser:
+ self.parser = parser
+ self.settings = settings
+ self.input = self.source.read()
+ self.parse()
+ return self.document
+
+ def parse(self):
+ """Parse `self.input` into a document tree."""
+ self.document = document = self.new_document()
+ self.parser.parse(self.input, document)
+ document.current_source = document.current_line = None
+
+ def new_document(self):
+ """Create and return a new empty document tree (root node)."""
+ document = utils.new_document(self.source.source_path, self.settings)
+ return document
+
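The `read()`/`parse()` split above boils down to a small pipeline: store the source, pull its raw text, hand the text to a parser, and return the resulting document. A toy sketch of that flow (the class and method names here are illustrative stand-ins, not part of the Docutils API):

```python
import io

class UpperCaseParser:
    """Stand-in parser: the 'document' is just the upper-cased text."""
    def parse(self, text):
        return text.upper()

class MiniReader:
    def __init__(self, parser):
        self.parser = parser

    def read(self, source):
        self.input = source.read()                     # like Reader.read()
        self.document = self.parser.parse(self.input)  # like Reader.parse()
        return self.document

doc = MiniReader(UpperCaseParser()).read(io.StringIO('hello, docutils'))
assert doc == 'HELLO, DOCUTILS'
```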
+
+class ReReader(Reader):
+
+ """
+ A reader which rereads an existing document tree (e.g. a
+ deserializer).
+
+ Often used in conjunction with `writers.UnfilteredWriter`.
+ """
+
+ def get_transforms(self):
+ # Do not add any transforms. They have already been applied
+ # by the reader which originally created the document.
+ return Component.get_transforms(self)
+
+
+_reader_aliases = {}
+
+def get_reader_class(reader_name):
+ """Return the Reader class from the `reader_name` module."""
+ reader_name = reader_name.lower()
+ if reader_name in _reader_aliases:
+ reader_name = _reader_aliases[reader_name]
+ module = __import__(reader_name, globals(), locals())
+ return module.Reader
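`get_reader_class()` resolves a name through an alias table and then imports the module dynamically. On modern Python the same lookup is usually written with `importlib`; a hedged equivalent, tested here against a stdlib module rather than an actual Docutils reader:

```python
import importlib

_aliases = {}  # e.g. populated like docutils' _reader_aliases

def get_class_from_module(name, attr):
    """Resolve `name` through the alias table, import it, fetch `attr`."""
    name = _aliases.get(name.lower(), name.lower())
    module = importlib.import_module(name)
    return getattr(module, attr)

decoder_cls = get_class_from_module('json', 'JSONDecoder')
assert decoder_cls.__name__ == 'JSONDecoder'
```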
diff --git a/python/helpers/docutils/readers/doctree.py b/python/helpers/docutils/readers/doctree.py
new file mode 100644
index 0000000..f8d3726
--- /dev/null
+++ b/python/helpers/docutils/readers/doctree.py
@@ -0,0 +1,46 @@
+# $Id: doctree.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: Martin Blais <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""Reader for existing document trees."""
+
+from docutils import readers, utils, transforms
+
+
+class Reader(readers.ReReader):
+
+ """
+ Adapt the Reader API for an existing document tree.
+
+ The existing document tree must be passed as the ``source`` parameter to
+ the `docutils.core.Publisher` initializer, wrapped in a
+ `docutils.io.DocTreeInput` object::
+
+ pub = docutils.core.Publisher(
+ ..., source=docutils.io.DocTreeInput(document), ...)
+
+ The original document settings are overridden; if you want to use the
+ settings of the original document, pass ``settings=document.settings`` to
+ the Publisher call above.
+ """
+
+ supported = ('doctree',)
+
+ config_section = 'doctree reader'
+ config_section_dependencies = ('readers',)
+
+ def parse(self):
+ """
+ No parsing to do; refurbish the document tree instead.
+ Overrides the inherited method.
+ """
+ self.document = self.input
+ # Create fresh Transformer object, to be populated from Writer
+ # component.
+ self.document.transformer = transforms.Transformer(self.document)
+ # Replace existing settings object with new one.
+ self.document.settings = self.settings
+ # Create fresh Reporter object because it is dependent on
+ # (new) settings.
+ self.document.reporter = utils.new_reporter(
+ self.document.get('source', ''), self.document.settings)
diff --git a/python/helpers/docutils/readers/pep.py b/python/helpers/docutils/readers/pep.py
new file mode 100644
index 0000000..d6fed72
--- /dev/null
+++ b/python/helpers/docutils/readers/pep.py
@@ -0,0 +1,48 @@
+# $Id: pep.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Python Enhancement Proposal (PEP) Reader.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils.readers import standalone
+from docutils.transforms import peps, references, misc, frontmatter
+from docutils.parsers import rst
+
+
+class Reader(standalone.Reader):
+
+ supported = ('pep',)
+ """Contexts this reader supports."""
+
+ settings_spec = (
+ 'PEP Reader Option Defaults',
+ 'The --pep-references and --rfc-references options (for the '
+ 'reStructuredText parser) are on by default.',
+ ())
+
+ config_section = 'pep reader'
+ config_section_dependencies = ('readers', 'standalone reader')
+
+ def get_transforms(self):
+ transforms = standalone.Reader.get_transforms(self)
+ # We have PEP-specific frontmatter handling.
+ transforms.remove(frontmatter.DocTitle)
+ transforms.remove(frontmatter.SectionSubTitle)
+ transforms.remove(frontmatter.DocInfo)
+ transforms.extend([peps.Headers, peps.Contents, peps.TargetNotes])
+ return transforms
+
+ settings_default_overrides = {'pep_references': 1, 'rfc_references': 1}
+
+ inliner_class = rst.states.Inliner
+
+ def __init__(self, parser=None, parser_name=None):
+ """`parser` should be ``None``."""
+ if parser is None:
+ parser = rst.Parser(rfc2822=1, inliner=self.inliner_class())
+ standalone.Reader.__init__(self, parser, '')
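The `get_transforms()` override above illustrates the customization pattern: start from the parent's transform list, remove the generic front-matter transforms, and append the PEP-specific ones. A sketch with placeholder strings standing in for the transform classes:

```python
class StandaloneLike:
    def get_transforms(self):
        return ['Decorations', 'DocTitle', 'SectionSubTitle', 'DocInfo']

class PEPLike(StandaloneLike):
    def get_transforms(self):
        transforms = StandaloneLike.get_transforms(self)
        # PEP-specific frontmatter handling replaces the generic one:
        transforms.remove('DocTitle')
        transforms.remove('SectionSubTitle')
        transforms.remove('DocInfo')
        transforms.extend(['Headers', 'Contents', 'TargetNotes'])
        return transforms

assert PEPLike().get_transforms() == [
    'Decorations', 'Headers', 'Contents', 'TargetNotes']
```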
diff --git a/python/helpers/docutils/readers/python/__init__.py b/python/helpers/docutils/readers/python/__init__.py
new file mode 100644
index 0000000..f046144
--- /dev/null
+++ b/python/helpers/docutils/readers/python/__init__.py
@@ -0,0 +1,127 @@
+# $Id: __init__.py 5618 2008-07-28 08:37:32Z strank $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains the Python Source Reader modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import docutils.readers
+from docutils.readers.python import moduleparser
+from docutils import parsers
+from docutils import nodes
+from docutils.readers.python import pynodes
+from docutils import readers
+
+class Reader(docutils.readers.Reader):
+
+ config_section = 'python reader'
+ config_section_dependencies = ('readers',)
+
+ default_parser = 'restructuredtext'
+
+ def parse(self):
+ """Parse `self.input` into a document tree."""
+ self.document = document = self.new_document()
+ module_section = moduleparser.parse_module(self.input,
+ self.source.source_path)
+ module_section.walk(DocformatVisitor(self.document))
+ visitor = DocstringFormattingVisitor(
+ document=document,
+ default_parser=self.default_parser)
+ module_section.walk(visitor)
+ self.document.append(module_section)
+
+
+class DocformatVisitor(nodes.SparseNodeVisitor):
+
+ """
+ This sets docformat attributes in a module. Wherever an assignment
+ to __docformat__ is found, we look for the enclosing scope -- a class,
+ a module, or a function -- and set the docformat attribute there.
+
+ We can't do this during the DocstringFormattingVisitor walking,
+ because __docformat__ may appear below a docstring in that format
+ (typically below the module docstring).
+ """
+
+ def visit_attribute(self, node):
+ assert isinstance(node[0], pynodes.object_name)
+ name = node[0][0].data
+ if name != '__docformat__':
+ return
+ value = None
+ for child in node:
+ if isinstance(child, pynodes.expression_value):
+ value = child[0].data
+ break
+ assert value.startswith("'") or value.startswith('"'), "__docformat__ must be assigned a string literal (not %s); line: %s" % (value, node['lineno'])
+ name = name[1:-1]
+ looking_in = node.parent
+ while not isinstance(looking_in, (pynodes.module_section,
+ pynodes.function_section,
+ pynodes.class_section)):
+ looking_in = looking_in.parent
+ looking_in['docformat'] = name
+
+
+class DocstringFormattingVisitor(nodes.SparseNodeVisitor):
+
+ def __init__(self, document, default_parser):
+ self.document = document
+ self.default_parser = default_parser
+ self.parsers = {}
+
+ def visit_docstring(self, node):
+ text = node[0].data
+ docformat = self.find_docformat(node)
+ del node[0]
+ node['docformat'] = docformat
+ parser = self.get_parser(docformat)
+ parser.parse(text, self.document)
+ for child in self.document.children:
+ node.append(child)
+ self.document.current_source = self.document.current_line = None
+ del self.document[:]
+
+ def get_parser(self, parser_name):
+ """
+ Get a parser based on its name. We reuse parsers during this
+ visitation, so parser instances are cached.
+ """
+ parser_name = parsers._parser_aliases.get(parser_name, parser_name)
+ if parser_name not in self.parsers:
+ cls = parsers.get_parser_class(parser_name)
+ self.parsers[parser_name] = cls()
+ return self.parsers[parser_name]
+
+ def find_docformat(self, node):
+ """
+ Find the __docformat__ closest to this node (i.e., look in the
+ class or module)
+ """
+ while node:
+ if node.get('docformat'):
+ return node['docformat']
+ node = node.parent
+ return self.default_parser
+
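`find_docformat()` is a nearest-ancestor attribute lookup: walk `.parent` links until some node carries a `docformat`, else fall back to the default. A standalone sketch with a minimal node type (a hypothetical stand-in, not `pynodes`):

```python
class Node:
    def __init__(self, parent=None, docformat=None):
        self.parent = parent
        self.docformat = docformat

def find_docformat(node, default='restructuredtext'):
    """Return the docformat of the nearest ancestor that sets one."""
    while node is not None:
        if node.docformat:
            return node.docformat
        node = node.parent
    return default

module = Node(docformat='plaintext')
klass = Node(parent=module)
docstring = Node(parent=klass)
assert find_docformat(docstring) == 'plaintext'      # inherited from the module
assert find_docformat(Node()) == 'restructuredtext'  # nothing set anywhere
```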
+
+if __name__ == '__main__':
+ try:
+ import locale
+ locale.setlocale(locale.LC_ALL, '')
+ except:
+ pass
+
+ from docutils.core import publish_cmdline, default_description
+
+ description = ('Generates pseudo-XML from Python modules '
+ '(for testing purposes). ' + default_description)
+
+ publish_cmdline(description=description,
+ reader=Reader())
diff --git a/python/helpers/docutils/readers/python/moduleparser.py b/python/helpers/docutils/readers/python/moduleparser.py
new file mode 100644
index 0000000..1823272
--- /dev/null
+++ b/python/helpers/docutils/readers/python/moduleparser.py
@@ -0,0 +1,757 @@
+# $Id: moduleparser.py 5738 2008-11-30 08:59:04Z grubert $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Parser for Python modules.
+
+The `parse_module()` function takes a module's text and file name,
+runs it through the module parser (using compiler.py and tokenize.py)
+and produces a parse tree of the source code, using the nodes as found
+in pynodes.py. For example, given this module (x.py)::
+
+ # comment
+
+ '''Docstring'''
+
+ '''Additional docstring'''
+
+ __docformat__ = 'reStructuredText'
+
+ a = 1
+ '''Attribute docstring'''
+
+ class C(Super):
+
+ '''C's docstring'''
+
+ class_attribute = 1
+ '''class_attribute's docstring'''
+
+ def __init__(self, text=None):
+ '''__init__'s docstring'''
+
+ self.instance_attribute = (text * 7
+ + ' whaddyaknow')
+ '''instance_attribute's docstring'''
+
+
+ def f(x, # parameter x
+ y=a*5, # parameter y
+ *args): # parameter args
+ '''f's docstring'''
+ return [x + item for item in args]
+
+ f.function_attribute = 1
+ '''f.function_attribute's docstring'''
+
+The module parser will produce this module documentation tree::
+
+ <module_section filename="test data">
+ <docstring>
+ Docstring
+ <docstring lineno="5">
+ Additional docstring
+ <attribute lineno="7">
+ <object_name>
+ __docformat__
+ <expression_value lineno="7">
+ 'reStructuredText'
+ <attribute lineno="9">
+ <object_name>
+ a
+ <expression_value lineno="9">
+ 1
+ <docstring lineno="10">
+ Attribute docstring
+ <class_section lineno="12">
+ <object_name>
+ C
+ <class_base>
+ Super
+ <docstring lineno="12">
+ C's docstring
+ <attribute lineno="16">
+ <object_name>
+ class_attribute
+ <expression_value lineno="16">
+ 1
+ <docstring lineno="17">
+ class_attribute's docstring
+ <method_section lineno="19">
+ <object_name>
+ __init__
+ <docstring lineno="19">
+ __init__'s docstring
+ <parameter_list lineno="19">
+ <parameter lineno="19">
+ <object_name>
+ self
+ <parameter lineno="19">
+ <object_name>
+ text
+ <parameter_default lineno="19">
+ None
+ <attribute lineno="22">
+ <object_name>
+ self.instance_attribute
+ <expression_value lineno="22">
+ (text * 7 + ' whaddyaknow')
+ <docstring lineno="24">
+ instance_attribute's docstring
+ <function_section lineno="27">
+ <object_name>
+ f
+ <docstring lineno="27">
+ f's docstring
+ <parameter_list lineno="27">
+ <parameter lineno="27">
+ <object_name>
+ x
+ <comment>
+ # parameter x
+ <parameter lineno="27">
+ <object_name>
+ y
+ <parameter_default lineno="27">
+ a * 5
+ <comment>
+ # parameter y
+ <parameter excess_positional="1" lineno="27">
+ <object_name>
+ args
+ <comment>
+ # parameter args
+ <attribute lineno="33">
+ <object_name>
+ f.function_attribute
+ <expression_value lineno="33">
+ 1
+ <docstring lineno="34">
+ f.function_attribute's docstring
+
+(Comments are not implemented yet.)
+
+compiler.parse() provides most of what's needed for this doctree, and
+"tokenize" can be used to get the rest. We can determine the line
+number from the compiler.parse() AST, and the TokenParser.rhs(lineno)
+method provides the rest.
+
+The Docutils Python reader component will transform this module doctree into a
+Python-specific Docutils doctree, and then a "stylist transform" will
+further transform it into a generic doctree. Namespaces will have to be
+compiled for each of the scopes, but I'm not certain at what stage of
+processing.
+
+It's very important to keep all docstring processing out of this, so that it's
+completely generic and not tool-specific.
+
+::
+
+> Why perform all of those transformations? Why not go from the AST to a
+> generic doctree? Or, even from the AST to the final output?
+
+I want the docutils.readers.python.moduleparser.parse_module() function to
+produce a standard documentation-oriented tree that can be used by any tool.
+We can develop it together without having to compromise on the rest of our
+design (i.e., HappyDoc doesn't have to be made to work like Docutils, and
+vice-versa). It would be a higher-level version of what compiler.py provides.
+
+The Python reader component transforms this generic AST into a Python-specific
+doctree (it knows about modules, classes, functions, etc.), but this is
+specific to Docutils and cannot be used by HappyDoc or others. The stylist
+transform does the final layout, converting Python-specific structures
+("class" sections, etc.) into a generic doctree using primitives (tables,
+sections, lists, etc.). This generic doctree does *not* know about Python
+structures any more. The advantage is that this doctree can be handed off to
+any of the output writers to create any output format we like.
+
+The latter two transforms are separate because I want to be able to have
+multiple independent layout styles (multiple runtime-selectable "stylist
+transforms"). Each of the existing tools (HappyDoc, pydoc, epydoc, Crystal,
+etc.) has its own fixed format. I personally don't like the tables-based
+format produced by these tools, and I'd like to be able to customize the
+format easily. That's the goal of stylist transforms, which are independent
+from the Reader component itself. One stylist transform could produce
+HappyDoc-like output, another could produce output similar to module docs in
+the Python library reference manual, and so on.
+
+It's for exactly this reason::
+
+>> It's very important to keep all docstring processing out of this, so that
+>> it's completely generic and not tool-specific.
+
+... but it goes past docstring processing. It's also important to keep style
+decisions and tool-specific data transforms out of this module parser.
+
+
+Issues
+======
+
+* At what point should namespaces be computed? Should they be part of the
+ basic AST produced by the ASTVisitor walk, or generated by another tree
+ traversal?
+
+* At what point should a distinction be made between local variables &
+ instance attributes in __init__ methods?
+
+* Docstrings are getting their lineno from their parents. Should the
+ TokenParser find the real line no's?
+
+* Comments: include them? How and when? Only full-line comments, or
+ parameter comments too? (See function "f" above for an example.)
+
+* Module could use more docstrings & refactoring in places.
+
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import compiler
+import compiler.ast
+import tokenize
+import token
+from compiler.consts import OP_ASSIGN
+from compiler.visitor import ASTVisitor
+from docutils.readers.python import pynodes
+from docutils.nodes import Text
+
+
+def parse_module(module_text, filename):
+ """Return a module documentation tree from `module_text`."""
+ ast = compiler.parse(module_text)
+ token_parser = TokenParser(module_text)
+ visitor = ModuleVisitor(filename, token_parser)
+ compiler.walk(ast, visitor, walker=visitor)
+ return visitor.module
+
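`parse_module()` is built on the Python 2 `compiler` package, which was removed in Python 3. The same raw material the visitors below extract (docstrings, assignments, line numbers) is available from the stdlib `ast` module; a rough modern sketch:

```python
import ast

source = '''
"""Module docstring."""

__docformat__ = 'reStructuredText'

a = 1
'''

tree = ast.parse(source)
assert ast.get_docstring(tree) == 'Module docstring.'

# Top-level simple assignments with their line numbers, roughly what
# ModuleVisitor/AttributeVisitor record as <attribute lineno="...">:
assigns = [(node.targets[0].id, node.lineno)
           for node in tree.body
           if isinstance(node, ast.Assign)]
assert assigns == [('__docformat__', 4), ('a', 6)]
```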
+class BaseVisitor(ASTVisitor):
+
+ def __init__(self, token_parser):
+ ASTVisitor.__init__(self)
+ self.token_parser = token_parser
+ self.context = []
+ self.documentable = None
+
+ def default(self, node, *args):
+ self.documentable = None
+ #print 'in default (%s)' % node.__class__.__name__
+ #ASTVisitor.default(self, node, *args)
+
+ def default_visit(self, node, *args):
+ #print 'in default_visit (%s)' % node.__class__.__name__
+ ASTVisitor.default(self, node, *args)
+
+
+class DocstringVisitor(BaseVisitor):
+
+ def visitDiscard(self, node):
+ if self.documentable:
+ self.visit(node.expr)
+
+ def visitConst(self, node):
+ if self.documentable:
+ if type(node.value) in (str, unicode):
+ self.documentable.append(make_docstring(node.value, node.lineno))
+ else:
+ self.documentable = None
+
+ def visitStmt(self, node):
+ self.default_visit(node)
+
+
+class AssignmentVisitor(DocstringVisitor):
+
+ def visitAssign(self, node):
+ visitor = AttributeVisitor(self.token_parser)
+ compiler.walk(node, visitor, walker=visitor)
+ if visitor.attributes:
+ self.context[-1].extend(visitor.attributes)
+ if len(visitor.attributes) == 1:
+ self.documentable = visitor.attributes[0]
+ else:
+ self.documentable = None
+
+
+class ModuleVisitor(AssignmentVisitor):
+
+ def __init__(self, filename, token_parser):
+ AssignmentVisitor.__init__(self, token_parser)
+ self.filename = filename
+ self.module = None
+
+ def visitModule(self, node):
+ self.module = module = pynodes.module_section()
+ module['filename'] = self.filename
+ append_docstring(module, node.doc, node.lineno)
+ self.context.append(module)
+ self.documentable = module
+ self.visit(node.node)
+ self.context.pop()
+
+ def visitImport(self, node):
+ self.context[-1] += make_import_group(names=node.names,
+ lineno=node.lineno)
+ self.documentable = None
+
+ def visitFrom(self, node):
+ self.context[-1].append(
+ make_import_group(names=node.names, from_name=node.modname,
+ lineno=node.lineno))
+ self.documentable = None
+
+ def visitFunction(self, node):
+ visitor = FunctionVisitor(self.token_parser,
+ function_class=pynodes.function_section)
+ compiler.walk(node, visitor, walker=visitor)
+ self.context[-1].append(visitor.function)
+
+ def visitClass(self, node):
+ visitor = ClassVisitor(self.token_parser)
+ compiler.walk(node, visitor, walker=visitor)
+ self.context[-1].append(visitor.klass)
+
+
+class AttributeVisitor(BaseVisitor):
+
+ def __init__(self, token_parser):
+ BaseVisitor.__init__(self, token_parser)
+ self.attributes = pynodes.class_attribute_section()
+
+ def visitAssign(self, node):
+ # Don't visit the expression itself, just the attribute nodes:
+ for child in node.nodes:
+ self.dispatch(child)
+ expression_text = self.token_parser.rhs(node.lineno)
+ expression = pynodes.expression_value()
+ expression.append(Text(expression_text))
+ for attribute in self.attributes:
+ attribute.append(expression)
+
+ def visitAssName(self, node):
+ self.attributes.append(make_attribute(node.name,
+ lineno=node.lineno))
+
+ def visitAssTuple(self, node):
+ attributes = self.attributes
+ self.attributes = []
+ self.default_visit(node)
+ n = pynodes.attribute_tuple()
+ n.extend(self.attributes)
+ n['lineno'] = self.attributes[0]['lineno']
+ attributes.append(n)
+ self.attributes = attributes
+ #self.attributes.append(att_tuple)
+
+ def visitAssAttr(self, node):
+ self.default_visit(node, node.attrname)
+
+ def visitGetattr(self, node, suffix):
+ self.default_visit(node, node.attrname + '.' + suffix)
+
+ def visitName(self, node, suffix):
+ self.attributes.append(make_attribute(node.name + '.' + suffix,
+ lineno=node.lineno))
+
+
+class FunctionVisitor(DocstringVisitor):
+
+ in_function = 0
+
+ def __init__(self, token_parser, function_class):
+ DocstringVisitor.__init__(self, token_parser)
+ self.function_class = function_class
+
+ def visitFunction(self, node):
+ if self.in_function:
+ self.documentable = None
+ # Don't bother with nested function definitions.
+ return
+ self.in_function = 1
+ self.function = function = make_function_like_section(
+ name=node.name,
+ lineno=node.lineno,
+ doc=node.doc,
+ function_class=self.function_class)
+ self.context.append(function)
+ self.documentable = function
+ self.parse_parameter_list(node)
+ self.visit(node.code)
+ self.context.pop()
+
+ def parse_parameter_list(self, node):
+ parameters = []
+ special = []
+ argnames = list(node.argnames)
+ if node.kwargs:
+ special.append(make_parameter(argnames[-1], excess_keyword=1))
+ argnames.pop()
+ if node.varargs:
+ special.append(make_parameter(argnames[-1],
+ excess_positional=1))
+ argnames.pop()
+ defaults = list(node.defaults)
+ defaults = [None] * (len(argnames) - len(defaults)) + defaults
+ function_parameters = self.token_parser.function_parameters(
+ node.lineno)
+ #print >>sys.stderr, function_parameters
+ for argname, default in zip(argnames, defaults):
+ if type(argname) is tuple:
+ parameter = pynodes.parameter_tuple()
+ for tuplearg in argname:
+ parameter.append(make_parameter(tuplearg))
+ argname = normalize_parameter_name(argname)
+ else:
+ parameter = make_parameter(argname)
+ if default:
+ n_default = pynodes.parameter_default()
+ n_default.append(Text(function_parameters[argname]))
+ parameter.append(n_default)
+ parameters.append(parameter)
+ if parameters or special:
+ special.reverse()
+ parameters.extend(special)
+ parameter_list = pynodes.parameter_list()
+ parameter_list.extend(parameters)
+ self.function.append(parameter_list)
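`parse_parameter_list` pads `node.defaults` with `None` entries so the list lines up with `argnames`. The same padding trick, sketched with the Python 3 `ast` module (a simplified stand-in handling literal defaults only; the function name is illustrative):

```python
import ast

def parameter_defaults(source):
    # Map each positional parameter of the first function in `source`
    # to the repr of its (literal) default value, or None if it has none.
    func = ast.parse(source).body[0]
    args = func.args.args
    defaults = func.args.defaults
    # Left-pad so defaults align with the trailing parameters:
    padded = [None] * (len(args) - len(defaults)) + defaults
    return {arg.arg: (None if d is None else repr(ast.literal_eval(d)))
            for arg, d in zip(args, padded)}

parameter_defaults("def f(a, b=1, c='x'): pass")
# returns {'a': None, 'b': '1', 'c': "'x'"}
```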
+
+
+class ClassVisitor(AssignmentVisitor):
+
+ in_class = 0
+
+ def __init__(self, token_parser):
+ AssignmentVisitor.__init__(self, token_parser)
+ self.bases = []
+
+ def visitClass(self, node):
+ if self.in_class:
+ self.documentable = None
+ # Don't bother with nested class definitions.
+ return
+ self.in_class = 1
+ #import mypdb as pdb
+ #pdb.set_trace()
+ for base in node.bases:
+ self.visit(base)
+ self.klass = klass = make_class_section(node.name, self.bases,
+ doc=node.doc,
+ lineno=node.lineno)
+ self.context.append(klass)
+ self.documentable = klass
+ self.visit(node.code)
+ self.context.pop()
+
+ def visitGetattr(self, node, suffix=None):
+ if suffix:
+ name = node.attrname + '.' + suffix
+ else:
+ name = node.attrname
+ self.default_visit(node, name)
+
+ def visitName(self, node, suffix=None):
+ if suffix:
+ name = node.name + '.' + suffix
+ else:
+ name = node.name
+ self.bases.append(name)
+
+ def visitFunction(self, node):
+ if node.name == '__init__':
+ visitor = InitMethodVisitor(self.token_parser,
+ function_class=pynodes.method_section)
+ compiler.walk(node, visitor, walker=visitor)
+ else:
+ visitor = FunctionVisitor(self.token_parser,
+ function_class=pynodes.method_section)
+ compiler.walk(node, visitor, walker=visitor)
+ self.context[-1].append(visitor.function)
+
+
+class InitMethodVisitor(FunctionVisitor, AssignmentVisitor): pass
+
+
+class TokenParser:
+
+ def __init__(self, text):
+ self.text = text + '\n\n'
+ self.lines = self.text.splitlines(1)
+ self.generator = tokenize.generate_tokens(iter(self.lines).next)
+ self.next()
+
+ def __iter__(self):
+ return self
+
+ def next(self):
+ self.token = self.generator.next()
+ self.type, self.string, self.start, self.end, self.line = self.token
+ return self.token
+
+ def goto_line(self, lineno):
+ while self.start[0] < lineno:
+ self.next()
+ return self.token
+ return self.token
+
+ def rhs(self, lineno):
+ """
+ Return a whitespace-normalized expression string from the right-hand
+ side of an assignment at line `lineno`.
+ """
+ self.goto_line(lineno)
+ while self.string != '=':
+ self.next()
+ self.stack = None
+ while self.type != token.NEWLINE and self.string != ';':
+ if self.string == '=' and not self.stack:
+ self.tokens = []
+ self.stack = []
+ self._type = None
+ self._string = None
+ self._backquote = 0
+ else:
+ self.note_token()
+ self.next()
+ self.next()
+ text = ''.join(self.tokens)
+ return text.strip()
+
+ closers = {')': '(', ']': '[', '}': '{'}
+ openers = {'(': 1, '[': 1, '{': 1}
+ del_ws_prefix = {'.': 1, '=': 1, ')': 1, ']': 1, '}': 1, ':': 1, ',': 1}
+ no_ws_suffix = {'.': 1, '=': 1, '(': 1, '[': 1, '{': 1}
+
+ def note_token(self):
+ if self.type == tokenize.NL:
+ return
+ del_ws = self.string in self.del_ws_prefix
+ append_ws = self.string not in self.no_ws_suffix
+ if self.string in self.openers:
+ self.stack.append(self.string)
+ if (self._type == token.NAME
+ or self._string in self.closers):
+ del_ws = 1
+ elif self.string in self.closers:
+ assert self.stack[-1] == self.closers[self.string]
+ self.stack.pop()
+ elif self.string == '`':
+ if self._backquote:
+ del_ws = 1
+ assert self.stack[-1] == '`'
+ self.stack.pop()
+ else:
+ append_ws = 0
+ self.stack.append('`')
+ self._backquote = not self._backquote
+ if del_ws and self.tokens and self.tokens[-1] == ' ':
+ del self.tokens[-1]
+ self.tokens.append(self.string)
+ self._type = self.type
+ self._string = self.string
+ if append_ws:
+ self.tokens.append(' ')
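The whitespace rules in `note_token` (`del_ws_prefix`, `no_ws_suffix`, and the extra case that glues an opening bracket to a preceding name or closing bracket) can be exercised in isolation with Python 3's `tokenize`. This is a simplified, hypothetical stand-in for illustration only; it ignores the backquote handling and the nesting stack.

```python
import io
import tokenize

DEL_WS_BEFORE = {'.', '=', ')', ']', '}', ':', ','}
NO_WS_AFTER = {'.', '=', '(', '[', '{'}

def normalize_expression(expr):
    # Re-join the tokens of `expr` with single spaces, dropping the space
    # before/after the punctuation classes above, and gluing an opening
    # bracket to a preceding name or closer (a call or subscript).
    out = []
    prev = None
    for tok in tokenize.generate_tokens(io.StringIO(expr + '\n').readline):
        if tok.type in (tokenize.NEWLINE, tokenize.NL, tokenize.ENDMARKER):
            continue
        string = tok.string
        del_ws = string in DEL_WS_BEFORE
        if string in '([{' and prev is not None and (
                prev.type == tokenize.NAME or prev.string in ')]}'):
            del_ws = True
        if del_ws and out and out[-1] == ' ':
            out.pop()
        out.append(string)
        if string not in NO_WS_AFTER:
            out.append(' ')
        prev = tok
    return ''.join(out).strip()

normalize_expression("f( a ,b ) +  1")
# returns 'f(a, b) + 1'
```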
+
+ def function_parameters(self, lineno):
+ """
+ Return a dictionary mapping parameters to defaults
+ (whitespace-normalized strings).
+ """
+ self.goto_line(lineno)
+ while self.string != 'def':
+ self.next()
+ while self.string != '(':
+ self.next()
+ name = None
+ default = None
+ parameter_tuple = None
+ self.tokens = []
+ parameters = {}
+ self.stack = [self.string]
+ self.next()
+ while 1:
+ if len(self.stack) == 1:
+ if parameter_tuple:
+ # Just encountered ")".
+ #print >>sys.stderr, 'parameter_tuple: %r' % self.tokens
+ name = ''.join(self.tokens).strip()
+ self.tokens = []
+ parameter_tuple = None
+ if self.string in (')', ','):
+ if name:
+ if self.tokens:
+ default_text = ''.join(self.tokens).strip()
+ else:
+ default_text = None
+ parameters[name] = default_text
+ self.tokens = []
+ name = None
+ default = None
+ if self.string == ')':
+ break
+ elif self.type == token.NAME:
+ if name and default:
+ self.note_token()
+ else:
+ assert name is None, (
+ 'token=%r name=%r parameters=%r stack=%r'
+ % (self.token, name, parameters, self.stack))
+ name = self.string
+ #print >>sys.stderr, 'name=%r' % name
+ elif self.string == '=':
+ assert name is not None, 'token=%r' % (self.token,)
+ assert default is None, 'token=%r' % (self.token,)
+ assert self.tokens == [], 'token=%r' % (self.token,)
+ default = 1
+ self._type = None
+ self._string = None
+ self._backquote = 0
+ elif name:
+ self.note_token()
+ elif self.string == '(':
+ parameter_tuple = 1
+ self._type = None
+ self._string = None
+ self._backquote = 0
+ self.note_token()
+ else: # ignore these tokens:
+ assert (self.string in ('*', '**', '\n')
+ or self.type == tokenize.COMMENT), (
+ 'token=%r' % (self.token,))
+ else:
+ self.note_token()
+ self.next()
+ return parameters
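`function_parameters` rebuilds the name-to-default-text mapping from raw tokens because the old `compiler` AST discarded the default's source text. In modern Python a similar mapping is usually available from `inspect.signature`; this sketch uses `repr` of the default object as a stand-in for the source text (names are illustrative):

```python
import inspect

def signature_defaults(func):
    # Map each parameter name to the repr of its default value,
    # or None when the parameter has no default.
    empty = inspect.Parameter.empty
    return {name: (None if p.default is empty else repr(p.default))
            for name, p in inspect.signature(func).parameters.items()}

def sample(a, b=7, *args, **kwargs):
    pass

signature_defaults(sample)
# returns {'a': None, 'b': '7', 'args': None, 'kwargs': None}
```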
+
+
+def make_docstring(doc, lineno):
+ n = pynodes.docstring()
+ if lineno:
+ # Really, only module docstrings don't have a line
+ # (@@: but maybe they should)
+ n['lineno'] = lineno
+ n.append(Text(doc))
+ return n
+
+def append_docstring(node, doc, lineno):
+ if doc:
+ node.append(make_docstring(doc, lineno))
+
+def make_class_section(name, bases, lineno, doc):
+ n = pynodes.class_section()
+ n['lineno'] = lineno
+ n.append(make_object_name(name))
+ for base in bases:
+ b = pynodes.class_base()
+ b.append(make_object_name(base))
+ n.append(b)
+ append_docstring(n, doc, lineno)
+ return n
+
+def make_object_name(name):
+ n = pynodes.object_name()
+ n.append(Text(name))
+ return n
+
+def make_function_like_section(name, lineno, doc, function_class):
+ n = function_class()
+ n['lineno'] = lineno
+ n.append(make_object_name(name))
+ append_docstring(n, doc, lineno)
+ return n
+
+def make_import_group(names, lineno, from_name=None):
+ n = pynodes.import_group()
+ n['lineno'] = lineno
+ if from_name:
+ n_from = pynodes.import_from()
+ n_from.append(Text(from_name))
+ n.append(n_from)
+ for name, alias in names:
+ n_name = pynodes.import_name()
+ n_name.append(Text(name))
+ if alias:
+ n_alias = pynodes.import_alias()
+ n_alias.append(Text(alias))
+ n_name.append(n_alias)
+ n.append(n_name)
+ return n
+
+def make_class_attribute(name, lineno):
+ n = pynodes.class_attribute()
+ n['lineno'] = lineno
+ n.append(Text(name))
+ return n
+
+def make_attribute(name, lineno):
+ n = pynodes.attribute()
+ n['lineno'] = lineno
+ n.append(make_object_name(name))
+ return n
+
+def make_parameter(name, excess_keyword=0, excess_positional=0):
+ """
+ `excess_keyword` and `excess_positional` must each be either 1 or 0,
+ and at most one of them may be 1.
+ """
+ n = pynodes.parameter()
+ n.append(make_object_name(name))
+ assert not excess_keyword or not excess_positional
+ if excess_keyword:
+ n['excess_keyword'] = 1
+ if excess_positional:
+ n['excess_positional'] = 1
+ return n
+
+def trim_docstring(text):
+ """
+ Trim indentation and blank lines from docstring text & return it.
+
+ See PEP 257.
+ """
+ if not text:
+ return text
+ # Convert tabs to spaces (following the normal Python rules)
+ # and split into a list of lines:
+ lines = text.expandtabs().splitlines()
+ # Determine minimum indentation (first line doesn't count):
+ indent = sys.maxint
+ for line in lines[1:]:
+ stripped = line.lstrip()
+ if stripped:
+ indent = min(indent, len(line) - len(stripped))
+ # Remove indentation (first line is special):
+ trimmed = [lines[0].strip()]
+ if indent < sys.maxint:
+ for line in lines[1:]:
+ trimmed.append(line[indent:].rstrip())
+ # Strip off trailing and leading blank lines:
+ while trimmed and not trimmed[-1]:
+ trimmed.pop()
+ while trimmed and not trimmed[0]:
+ trimmed.pop(0)
+ # Return a single string:
+ return '\n'.join(trimmed)
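A Python 3 port of `trim_docstring` only needs `sys.maxint` replaced by `sys.maxsize`; the rest is the PEP 257 algorithm unchanged. A minimal sketch of that port, with the behavior it produces:

```python
import sys

def trim(text):
    if not text:
        return text
    lines = text.expandtabs().splitlines()
    # Minimum indentation of all lines after the first:
    indent = sys.maxsize
    for line in lines[1:]:
        stripped = line.lstrip()
        if stripped:
            indent = min(indent, len(line) - len(stripped))
    trimmed = [lines[0].strip()]
    if indent < sys.maxsize:
        trimmed.extend(line[indent:].rstrip() for line in lines[1:])
    # Strip trailing and leading blank lines:
    while trimmed and not trimmed[-1]:
        trimmed.pop()
    while trimmed and not trimmed[0]:
        trimmed.pop(0)
    return '\n'.join(trimmed)

trim('Summary line.\n\n    Indented detail.\n')
# returns 'Summary line.\n\nIndented detail.'
```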
+
+def normalize_parameter_name(name):
+ """
+ Convert a tuple like ``('a', ('b', 'c'), 'd')`` into ``'(a, (b, c), d)'``.
+ """
+ if type(name) is tuple:
+ return '(%s)' % ', '.join([normalize_parameter_name(n) for n in name])
+ else:
+ return name
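The recursion above flattens arbitrarily nested tuple parameters (a Python 2 feature) into one display string; the function itself is already Python 3 compatible:

```python
def normalize_parameter_name(name):
    # Nested tuples of names become a parenthesized, comma-separated string.
    if type(name) is tuple:
        return '(%s)' % ', '.join(normalize_parameter_name(n) for n in name)
    return name

normalize_parameter_name(('a', ('b', 'c'), 'd'))
# returns '(a, (b, c), d)'
```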
+
+if __name__ == '__main__':
+ import sys
+ args = sys.argv[1:]
+ if args[0] == '-v':
+ filename = args[1]
+ module_text = open(filename).read()
+ ast = compiler.parse(module_text)
+ visitor = compiler.visitor.ExampleASTVisitor()
+ compiler.walk(ast, visitor, walker=visitor, verbose=1)
+ else:
+ filename = args[0]
+ content = open(filename).read()
+ print parse_module(content, filename).pformat()
+
diff --git a/python/helpers/docutils/readers/python/pynodes.py b/python/helpers/docutils/readers/python/pynodes.py
new file mode 100644
index 0000000..e90314dc
--- /dev/null
+++ b/python/helpers/docutils/readers/python/pynodes.py
@@ -0,0 +1,81 @@
+#! /usr/bin/env python
+# $Id: pynodes.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+from docutils import nodes
+from docutils.nodes import Element, TextElement, Structural, Inline, Part, \
+ Text
+import types
+
+# This is the parent class of all the other pynode classes:
+class PythonStructural(Structural): pass
+
+# =====================
+# Structural Elements
+# =====================
+
+class module_section(PythonStructural, Element): pass
+class class_section(PythonStructural, Element): pass
+class class_base(PythonStructural, Element): pass
+class method_section(PythonStructural, Element): pass
+class attribute(PythonStructural, Element): pass
+class function_section(PythonStructural, Element): pass
+class class_attribute_section(PythonStructural, Element): pass
+class class_attribute(PythonStructural, Element): pass
+class expression_value(PythonStructural, Element): pass
+
+# Structural Support Elements
+# ---------------------------
+
+class parameter_list(PythonStructural, Element): pass
+class parameter_tuple(PythonStructural, Element): pass
+class parameter_default(PythonStructural, TextElement): pass
+class import_group(PythonStructural, TextElement): pass
+class import_from(PythonStructural, TextElement): pass
+class import_name(PythonStructural, TextElement): pass
+class import_alias(PythonStructural, TextElement): pass
+class docstring(PythonStructural, Element): pass
+
+# =================
+# Inline Elements
+# =================
+
+# These elements cannot become references until the second
+# pass. Initially, we'll use "reference" or "name".
+
+class object_name(PythonStructural, TextElement): pass
+class parameter_list(PythonStructural, TextElement): pass
+class parameter(PythonStructural, TextElement): pass
+class parameter_default(PythonStructural, TextElement): pass
+class class_attribute(PythonStructural, TextElement): pass
+class attribute_tuple(PythonStructural, TextElement): pass
+
+# =================
+# Unused Elements
+# =================
+
+# These were part of the model, and maybe should be in the future, but
+# aren't now.
+#class package_section(PythonStructural, Element): pass
+#class module_attribute_section(PythonStructural, Element): pass
+#class instance_attribute_section(PythonStructural, Element): pass
+#class module_attribute(PythonStructural, TextElement): pass
+#class instance_attribute(PythonStructural, TextElement): pass
+#class exception_class(PythonStructural, TextElement): pass
+#class warning_class(PythonStructural, TextElement): pass
+
+
+# Collect all the classes we've written above
+def install_node_class_names():
+ node_class_names = []
+ for name, var in globals().items():
+ if (type(var) is types.ClassType
+ and issubclass(var, PythonStructural) \
+ and name.lower() == name):
+ node_class_names.append(var.tagname or name)
+ # Register the new node names with GenericNodeVisitor and
+ # SpecificNodeVisitor:
+ nodes._add_node_class_names(node_class_names)
+install_node_class_names()
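The duck-typed collection in `install_node_class_names` (every all-lowercase `PythonStructural` subclass found in `globals()`) can be sketched in Python 3, where `types.ClassType` is gone and `isinstance(var, type)` covers all classes. Class names here are illustrative, not the real pynodes hierarchy:

```python
class Structural:
    pass

class module_section(Structural):
    pass

class NotANode(Structural):
    pass

def node_class_names(namespace):
    # Keep only Structural subclasses whose names are all-lowercase,
    # mirroring the `name.lower() == name` filter in pynodes.
    return sorted(name for name, var in namespace.items()
                  if isinstance(var, type)
                  and issubclass(var, Structural)
                  and name.lower() == name)

node_class_names(globals())
# returns ['module_section']
```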
diff --git a/python/helpers/docutils/readers/standalone.py b/python/helpers/docutils/readers/standalone.py
new file mode 100644
index 0000000..3c302ed
--- /dev/null
+++ b/python/helpers/docutils/readers/standalone.py
@@ -0,0 +1,66 @@
+# $Id: standalone.py 4802 2006-11-12 18:02:17Z goodger $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Standalone file Reader for the reStructuredText markup syntax.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+from docutils import frontend, readers
+from docutils.transforms import frontmatter, references, misc
+
+
+class Reader(readers.Reader):
+
+ supported = ('standalone',)
+ """Contexts this reader supports."""
+
+ document = None
+ """A single document tree."""
+
+ settings_spec = (
+ 'Standalone Reader',
+ None,
+ (('Disable the promotion of a lone top-level section title to '
+ 'document title (and subsequent section title to document '
+ 'subtitle promotion; enabled by default).',
+ ['--no-doc-title'],
+ {'dest': 'doctitle_xform', 'action': 'store_false', 'default': 1,
+ 'validator': frontend.validate_boolean}),
+ ('Disable the bibliographic field list transform (enabled by '
+ 'default).',
+ ['--no-doc-info'],
+ {'dest': 'docinfo_xform', 'action': 'store_false', 'default': 1,
+ 'validator': frontend.validate_boolean}),
+ ('Activate the promotion of lone subsection titles to '
+ 'section subtitles (disabled by default).',
+ ['--section-subtitles'],
+ {'dest': 'sectsubtitle_xform', 'action': 'store_true', 'default': 0,
+ 'validator': frontend.validate_boolean}),
+ ('Deactivate the promotion of lone subsection titles.',
+ ['--no-section-subtitles'],
+ {'dest': 'sectsubtitle_xform', 'action': 'store_false'}),
+ ))
+
+ config_section = 'standalone reader'
+ config_section_dependencies = ('readers',)
+
+ def get_transforms(self):
+ return readers.Reader.get_transforms(self) + [
+ references.Substitutions,
+ references.PropagateTargets,
+ frontmatter.DocTitle,
+ frontmatter.SectionSubTitle,
+ frontmatter.DocInfo,
+ references.AnonymousHyperlinks,
+ references.IndirectHyperlinks,
+ references.Footnotes,
+ references.ExternalTargets,
+ references.InternalTargets,
+ references.DanglingReferences,
+ misc.Transitions,
+ ]
diff --git a/python/helpers/docutils/statemachine.py b/python/helpers/docutils/statemachine.py
new file mode 100644
index 0000000..aa9d2c0
--- /dev/null
+++ b/python/helpers/docutils/statemachine.py
@@ -0,0 +1,1525 @@
+# $Id: statemachine.py 6388 2010-08-13 12:24:34Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+A finite state machine specialized for regular-expression-based text filters,
+this module defines the following classes:
+
+- `StateMachine`, a state machine
+- `State`, a state superclass
+- `StateMachineWS`, a whitespace-sensitive version of `StateMachine`
+- `StateWS`, a state superclass for use with `StateMachineWS`
+- `SearchStateMachine`, uses `re.search()` instead of `re.match()`
+- `SearchStateMachineWS`, uses `re.search()` instead of `re.match()`
+- `ViewList`, extends standard Python lists.
+- `StringList`, string-specific ViewList.
+
+Exception classes:
+
+- `StateMachineError`
+- `UnknownStateError`
+- `DuplicateStateError`
+- `UnknownTransitionError`
+- `DuplicateTransitionError`
+- `TransitionPatternNotFound`
+- `TransitionMethodNotFound`
+- `UnexpectedIndentationError`
+- `TransitionCorrection`: Raised to switch to another transition.
+- `StateCorrection`: Raised to switch to another state & transition.
+
+Functions:
+
+- `string2lines()`: split a multi-line string into a list of one-line strings
+
+
+How To Use This Module
+======================
+(See the individual classes, methods, and attributes for details.)
+
+1. Import it: ``import statemachine`` or ``from statemachine import ...``.
+ You will also need to ``import re``.
+
+2. Derive a subclass of `State` (or `StateWS`) for each state in your state
+ machine::
+
+ class MyState(statemachine.State):
+
+ Within the state's class definition:
+
+ a) Include a pattern for each transition, in `State.patterns`::
+
+ patterns = {'atransition': r'pattern', ...}
+
+ b) Include a list of initial transitions to be set up automatically, in
+ `State.initial_transitions`::
+
+ initial_transitions = ['atransition', ...]
+
+ c) Define a method for each transition, with the same name as the
+ transition pattern::
+
+ def atransition(self, match, context, next_state):
+ # do something
+ result = [...] # a list
+ return context, next_state, result
+ # context, next_state may be altered
+
+ Transition methods may raise an `EOFError` to cut processing short.
+
+ d) You may wish to override the `State.bof()` and/or `State.eof()` implicit
+ transition methods, which handle the beginning- and end-of-file.
+
+ e) In order to handle nested processing, you may wish to override the
+ attributes `State.nested_sm` and/or `State.nested_sm_kwargs`.
+
+ If you are using `StateWS` as a base class, in order to handle nested
+ indented blocks, you may wish to:
+
+ - override the attributes `StateWS.indent_sm`,
+ `StateWS.indent_sm_kwargs`, `StateWS.known_indent_sm`, and/or
+ `StateWS.known_indent_sm_kwargs`;
+ - override the `StateWS.blank()` method; and/or
+ - override or extend the `StateWS.indent()`, `StateWS.known_indent()`,
+ and/or `StateWS.firstknown_indent()` methods.
+
+3. Create a state machine object::
+
+ sm = StateMachine(state_classes=[MyState, ...],
+ initial_state='MyState')
+
+4. Obtain the input text, which needs to be converted into a tab-free list of
+ one-line strings. For example, to read text from a file called
+ 'inputfile'::
+
+ input_string = open('inputfile').read()
+ input_lines = statemachine.string2lines(input_string)
+
+5. Run the state machine on the input text and collect the results, a list::
+
+ results = sm.run(input_lines)
+
+6. Remove any lingering circular references::
+
+ sm.unlink()
+"""
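The six usage steps above can be exercised with a toy, regex-driven loop in the same shape: transition patterns are tried in order against each line, and the first match's handler returns the next state and some output. This is a hypothetical miniature for illustration, not the docutils `StateMachine` API.

```python
import re

def run(lines, states, initial):
    # Minimal version of the pattern above: for each input line, try the
    # current state's transitions in order; the first matching pattern's
    # handler returns (next_state, output).
    state, results = initial, []
    for line in lines:
        for pattern, handler in states[state]:
            match = pattern.match(line)
            if match:
                state, output = handler(match, state)
                results.extend(output)
                break
    return results

def heading(match, state):
    return 'body', ['H:' + match.group(1)]

def text(match, state):
    return state, ['T:' + match.group(0)]

states = {'body': [(re.compile(r'= (.*)'), heading),
                   (re.compile(r'.+'), text)]}

run(['= Title', 'hello'], states, 'body')
# returns ['H:Title', 'T:hello']
```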
+
+__docformat__ = 'restructuredtext'
+
+import sys
+import re
+import types
+import unicodedata
+
+
+class StateMachine:
+
+ """
+ A finite state machine for text filters using regular expressions.
+
+ The input is provided in the form of a list of one-line strings (no
+ newlines). States are subclasses of the `State` class. Transitions consist
+ of regular expression patterns and transition methods, and are defined in
+ each state.
+
+ The state machine is started with the `run()` method, which returns the
+ results of processing in a list.
+ """
+
+ def __init__(self, state_classes, initial_state, debug=0):
+ """
+ Initialize a `StateMachine` object; add state objects.
+
+ Parameters:
+
+ - `state_classes`: a list of `State` (sub)classes.
+ - `initial_state`: a string, the class name of the initial state.
+ - `debug`: a boolean; produce verbose output if true (nonzero).
+ """
+
+ self.input_lines = None
+ """`StringList` of input lines (without newlines).
+ Filled by `self.run()`."""
+
+ self.input_offset = 0
+ """Offset of `self.input_lines` from the beginning of the file."""
+
+ self.line = None
+ """Current input line."""
+
+ self.line_offset = -1
+ """Current input line offset from beginning of `self.input_lines`."""
+
+ self.debug = debug
+ """Debugging mode on/off."""
+
+ self.initial_state = initial_state
+ """The name of the initial state (key to `self.states`)."""
+
+ self.current_state = initial_state
+ """The name of the current state (key to `self.states`)."""
+
+ self.states = {}
+ """Mapping of {state_name: State_object}."""
+
+ self.add_states(state_classes)
+
+ self.observers = []
+ """List of bound methods or functions to call whenever the current
+ line changes. Observers are called with one argument, ``self``.
+ Cleared at the end of `run()`."""
+
+ def unlink(self):
+ """Remove circular references to objects no longer required."""
+ for state in self.states.values():
+ state.unlink()
+ self.states = None
+
+ def run(self, input_lines, input_offset=0, context=None,
+ input_source=None, initial_state=None):
+ """
+ Run the state machine on `input_lines`. Return results (a list).
+
+ Reset `self.line_offset` and `self.current_state`. Run the
+ beginning-of-file transition. Input one line at a time and check for a
+ matching transition. If a match is found, call the transition method
+ and possibly change the state. Store the context returned by the
+ transition method to be passed on to the next transition matched.
+ Accumulate the results returned by the transition methods in a list.
+ Run the end-of-file transition. Finally, return the accumulated
+ results.
+
+ Parameters:
+
+ - `input_lines`: a list of strings without newlines, or `StringList`.
+ - `input_offset`: the line offset of `input_lines` from the beginning
+ of the file.
+ - `context`: application-specific storage.
+ - `input_source`: name or path of source of `input_lines`.
+ - `initial_state`: name of initial state.
+ """
+ self.runtime_init()
+ if isinstance(input_lines, StringList):
+ self.input_lines = input_lines
+ else:
+ self.input_lines = StringList(input_lines, source=input_source)
+ self.input_offset = input_offset
+ self.line_offset = -1
+ self.current_state = initial_state or self.initial_state
+ if self.debug:
+ print >>sys.stderr, (
+ '\nStateMachine.run: input_lines (line_offset=%s):\n| %s'
+ % (self.line_offset, '\n| '.join(self.input_lines)))
+ transitions = None
+ results = []
+ state = self.get_state()
+ try:
+ if self.debug:
+ print >>sys.stderr, ('\nStateMachine.run: bof transition')
+ context, result = state.bof(context)
+ results.extend(result)
+ while 1:
+ try:
+ try:
+ self.next_line()
+ if self.debug:
+ source, offset = self.input_lines.info(
+ self.line_offset)
+ print >>sys.stderr, (
+ '\nStateMachine.run: line (source=%r, '
+ 'offset=%r):\n| %s'
+ % (source, offset, self.line))
+ context, next_state, result = self.check_line(
+ context, state, transitions)
+ except EOFError:
+ if self.debug:
+ print >>sys.stderr, (
+ '\nStateMachine.run: %s.eof transition'
+ % state.__class__.__name__)
+ result = state.eof(context)
+ results.extend(result)
+ break
+ else:
+ results.extend(result)
+ except TransitionCorrection, exception:
+ self.previous_line() # back up for another try
+ transitions = (exception.args[0],)
+ if self.debug:
+ print >>sys.stderr, (
+ '\nStateMachine.run: TransitionCorrection to '
+ 'state "%s", transition %s.'
+ % (state.__class__.__name__, transitions[0]))
+ continue
+ except StateCorrection, exception:
+ self.previous_line() # back up for another try
+ next_state = exception.args[0]
+ if len(exception.args) == 1:
+ transitions = None
+ else:
+ transitions = (exception.args[1],)
+ if self.debug:
+ print >>sys.stderr, (
+ '\nStateMachine.run: StateCorrection to state '
+ '"%s", transition %s.'
+ % (next_state, transitions[0]))
+ else:
+ transitions = None
+ state = self.get_state(next_state)
+ except:
+ if self.debug:
+ self.error()
+ raise
+ self.observers = []
+ return results
+
+ def get_state(self, next_state=None):
+ """
+ Return current state object; set it first if `next_state` given.
+
+ Parameter `next_state`: a string, the name of the next state.
+
+ Exception: `UnknownStateError` raised if `next_state` unknown.
+ """
+ if next_state:
+ if self.debug and next_state != self.current_state:
+ print >>sys.stderr, \
+ ('\nStateMachine.get_state: Changing state from '
+ '"%s" to "%s" (input line %s).'
+ % (self.current_state, next_state,
+ self.abs_line_number()))
+ self.current_state = next_state
+ try:
+ return self.states[self.current_state]
+ except KeyError:
+ raise UnknownStateError(self.current_state)
+
+ def next_line(self, n=1):
+ """Load `self.line` with the `n`'th next line and return it."""
+ try:
+ try:
+ self.line_offset += n
+ self.line = self.input_lines[self.line_offset]
+ except IndexError:
+ self.line = None
+ raise EOFError
+ return self.line
+ finally:
+ self.notify_observers()
+
+ def is_next_line_blank(self):
+ """Return 1 if the next line is blank or non-existent."""
+ try:
+ return not self.input_lines[self.line_offset + 1].strip()
+ except IndexError:
+ return 1
+
+ def at_eof(self):
+ """Return 1 if the input is at or past end-of-file."""
+ return self.line_offset >= len(self.input_lines) - 1
+
+ def at_bof(self):
+ """Return 1 if the input is at or before beginning-of-file."""
+ return self.line_offset <= 0
+
+ def previous_line(self, n=1):
+ """Load `self.line` with the `n`'th previous line and return it."""
+ self.line_offset -= n
+ if self.line_offset < 0:
+ self.line = None
+ else:
+ self.line = self.input_lines[self.line_offset]
+ self.notify_observers()
+ return self.line
+
+ def goto_line(self, line_offset):
+ """Jump to absolute line offset `line_offset`, load and return it."""
+ try:
+ try:
+ self.line_offset = line_offset - self.input_offset
+ self.line = self.input_lines[self.line_offset]
+ except IndexError:
+ self.line = None
+ raise EOFError
+ return self.line
+ finally:
+ self.notify_observers()
+
+ def get_source(self, line_offset):
+ """Return source of line at absolute line offset `line_offset`."""
+ return self.input_lines.source(line_offset - self.input_offset)
+
+ def abs_line_offset(self):
+ """Return line offset of current line, from beginning of file."""
+ return self.line_offset + self.input_offset
+
+ def abs_line_number(self):
+ """Return line number of current line (counting from 1)."""
+ return self.line_offset + self.input_offset + 1
+
+ def get_source_and_line(self, lineno=None):
+ """Return (source, line) tuple for current or given line number.
+
+ Looks up the source and line number in the `self.input_lines`
+ StringList instance to account for included source files.
+
+ If the optional argument `lineno` is given, convert it from an
+ absolute line number to the corresponding (source, line) pair.
+ """
+ if lineno is None:
+ offset = self.line_offset
+ else:
+ offset = lineno - self.input_offset - 1
+ try:
+ src, srcoffset = self.input_lines.info(offset)
+ srcline = srcoffset + 1
+ except (TypeError):
+ # line is None if index is "Just past the end"
+ src, srcline = self.get_source_and_line(offset + self.input_offset)
+ return src, srcline + 1
+ except (IndexError): # `offset` is off the list
+ src, srcline = None, None
+ # raise AssertionError('cannot find line %d in %s lines' %
+ # (offset, len(self.input_lines)))
+ # # list(self.input_lines.lines())))
+ # assert offset == srcoffset, str(self.input_lines)
+ # print "get_source_and_line(%s):" % lineno,
+ # print offset + 1, '->', src, srcline
+ # print self.input_lines
+ return (src, srcline)
+
+ def insert_input(self, input_lines, source):
+ self.input_lines.insert(self.line_offset + 1, '',
+ source='internal padding after '+source,
+ offset=len(input_lines))
+ self.input_lines.insert(self.line_offset + 1, '',
+ source='internal padding before '+source,
+ offset=-1)
+ self.input_lines.insert(self.line_offset + 2,
+ StringList(input_lines, source))
+
+ def get_text_block(self, flush_left=0):
+ """
+ Return a contiguous block of text.
+
+ If `flush_left` is true, raise `UnexpectedIndentationError` if an
+ indented line is encountered before the text block ends (with a blank
+ line).
+ """
+ try:
+ block = self.input_lines.get_text_block(self.line_offset,
+ flush_left)
+ self.next_line(len(block) - 1)
+ return block
+ except UnexpectedIndentationError, error:
+ block, source, lineno = error.args
+ self.next_line(len(block) - 1) # advance to last line of block
+ raise
+
+ def check_line(self, context, state, transitions=None):
+ """
+ Examine one line of input for a transition match & execute its method.
+
+ Parameters:
+
+ - `context`: application-dependent storage.
+ - `state`: a `State` object, the current state.
+ - `transitions`: an optional ordered list of transition names to try,
+ instead of ``state.transition_order``.
+
+ Return the values returned by the transition method:
+
+ - context: possibly modified from the parameter `context`;
+ - next state name (`State` subclass name);
+ - the result output of the transition, a list.
+
+ When there is no match, ``state.no_match()`` is called and its return
+ value is returned.
+ """
+ if transitions is None:
+ transitions = state.transition_order
+ state_correction = None
+ if self.debug:
+ print >>sys.stderr, (
+ '\nStateMachine.check_line: state="%s", transitions=%r.'
+ % (state.__class__.__name__, transitions))
+ for name in transitions:
+ pattern, method, next_state = state.transitions[name]
+ match = pattern.match(self.line)
+ if match:
+ if self.debug:
+ print >>sys.stderr, (
+ '\nStateMachine.check_line: Matched transition '
+ '"%s" in state "%s".'
+ % (name, state.__class__.__name__))
+ return method(match, context, next_state)
+ else:
+ if self.debug:
+ print >>sys.stderr, (
+ '\nStateMachine.check_line: No match in state "%s".'
+ % state.__class__.__name__)
+ return state.no_match(context, transitions)
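The dispatch loop in `check_line()` can be sketched standalone: try an ordered list of transitions, let the first pattern that matches at the start of the line win, and fall back to a no-match handler. This is a minimal illustration, not the docutils API; `check_line`, `transitions`, and the handler lambdas are hypothetical names, and real transition methods also receive a context object and a next-state name.

```python
import re

def check_line(line, transitions, no_match=lambda: ('no-match', None)):
    # Try each (name, pattern, handler) in order; the first pattern that
    # matches at the *start* of `line` wins, mirroring an ordered
    # `transition_order` lookup with `re.match()`.
    for name, pattern, handler in transitions:
        match = re.match(pattern, line)
        if match:
            return name, handler(match)
    # No transition matched: delegate, as `state.no_match()` does.
    return no_match()

# Toy transitions: a bullet item and a blank line.
transitions = [
    ('bullet', r'[-*] +(?P<text>.*)', lambda m: m.group('text')),
    ('blank', r' *$', lambda m: None),
]
```

Order matters exactly as in `transition_order`: an earlier, more general pattern can shadow a later, more specific one.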
+
+ def add_state(self, state_class):
+ """
+ Initialize & add a `state_class` (`State` subclass) object.
+
+ Exception: `DuplicateStateError` raised if `state_class` was already
+ added.
+ """
+ statename = state_class.__name__
+ if statename in self.states:
+ raise DuplicateStateError(statename)
+ self.states[statename] = state_class(self, self.debug)
+
+ def add_states(self, state_classes):
+ """
+ Add `state_classes` (a list of `State` subclasses).
+ """
+ for state_class in state_classes:
+ self.add_state(state_class)
+
+ def runtime_init(self):
+ """
+ Initialize `self.states`.
+ """
+ for state in self.states.values():
+ state.runtime_init()
+
+ def error(self):
+ """Report error details."""
+ type, value, module, line, function = _exception_data()
+ print >>sys.stderr, '%s: %s' % (type, value)
+ print >>sys.stderr, 'input line %s' % (self.abs_line_number())
+ print >>sys.stderr, ('module %s, line %s, function %s'
+ % (module, line, function))
+
+ def attach_observer(self, observer):
+ """
+ The `observer` parameter is a function or bound method which takes two
+ arguments, the source and offset of the current line.
+ """
+ self.observers.append(observer)
+
+ def detach_observer(self, observer):
+ self.observers.remove(observer)
+
+ def notify_observers(self):
+ for observer in self.observers:
+ try:
+ info = self.input_lines.info(self.line_offset)
+ except IndexError:
+ info = (None, None)
+ observer(*info)
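The observer hooks above implement a small publish/subscribe protocol: each attached observer is called with the (source, offset) of the current line. A minimal closure-based sketch of the same idea follows; `make_notifier` and its returned functions are illustrative names, not part of the module.

```python
def make_notifier():
    # Keep a list of observer callables, as `self.observers` does.
    observers = []
    def attach(observer):
        observers.append(observer)
    def detach(observer):
        observers.remove(observer)
    def notify(source, offset):
        # Broadcast the current line's (source, offset) to every observer.
        for observer in observers:
            observer(source, offset)
    return attach, detach, notify

seen = []
attach, detach, notify = make_notifier()
attach(lambda source, offset: seen.append((source, offset)))
notify('spam.txt', 3)
```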
+
+
+class State:
+
+ """
+ State superclass. Contains a list of transitions, and transition methods.
+
+ Transition methods all have the same signature. They take 3 parameters:
+
+ - An `re` match object. ``match.string`` contains the matched input line,
+ ``match.start()`` gives the start index of the match, and
+ ``match.end()`` gives the end index.
+ - A context object, whose meaning is application-defined (initial value
+ ``None``). It can be used to store any information required by the state
+ machine, and the returned context is passed on to the next transition
+ method unchanged.
+ - The name of the next state, a string, taken from the transitions list;
+ normally it is returned unchanged, but it may be altered by the
+ transition method if necessary.
+
+ Transition methods all return a 3-tuple:
+
+ - A context object, as (potentially) modified by the transition method.
+ - The next state name (a return value of ``None`` means no state change).
+ - The processing result, a list, which is accumulated by the state
+ machine.
+
+ Transition methods may raise an `EOFError` to cut processing short.
+
+ There are two implicit transitions, and corresponding transition methods
+ are defined: `bof()` handles the beginning-of-file, and `eof()` handles
+ the end-of-file. These methods have non-standard signatures and return
+ values. `bof()` returns the initial context and results, and may be used
+ to return a header string, or do any other processing needed. `eof()`
+ should handle any remaining context and wrap things up; it returns the
+ final processing result.
+
+ Typical applications need only subclass `State` (or a subclass), set the
+ `patterns` and `initial_transitions` class attributes, and provide
+ corresponding transition methods. The default object initialization will
+ take care of constructing the list of transitions.
+ """
+
+ patterns = None
+ """
+ {Name: pattern} mapping, used by `make_transition()`. Each pattern may
+ be a string or a compiled `re` pattern. Override in subclasses.
+ """
+
+ initial_transitions = None
+ """
+ A list of transitions to initialize when a `State` is instantiated.
+ Each entry is either a transition name string, or a (transition name, next
+ state name) pair. See `make_transitions()`. Override in subclasses.
+ """
+
+ nested_sm = None
+ """
+ The `StateMachine` class for handling nested processing.
+
+ If left as ``None``, `nested_sm` defaults to the class of the state's
+ controlling state machine. Override it in subclasses to avoid the default.
+ """
+
+ nested_sm_kwargs = None
+ """
+ Keyword arguments dictionary, passed to the `nested_sm` constructor.
+
+ Two keys must have entries in the dictionary:
+
+ - Key 'state_classes' must be set to a list of `State` classes.
+ - Key 'initial_state' must be set to the name of the initial state class.
+
+ If `nested_sm_kwargs` is left as ``None``, 'state_classes' defaults to the
+ class of the current state, and 'initial_state' defaults to the name of
+ the class of the current state. Override in subclasses to avoid the
+ defaults.
+ """
+
+ def __init__(self, state_machine, debug=0):
+ """
+ Initialize a `State` object; make & add initial transitions.
+
+ Parameters:
+
+ - `state_machine`: the controlling `StateMachine` object.
+ - `debug`: a boolean; produce verbose output if true (nonzero).
+ """
+
+ self.transition_order = []
+ """A list of transition names in search order."""
+
+ self.transitions = {}
+ """
+ A mapping of transition names to 3-tuples containing
+ (compiled_pattern, transition_method, next_state_name). Initialized as
+ an instance attribute dynamically (instead of as a class attribute)
+ because it may make forward references to patterns and methods in this
+ or other classes.
+ """
+
+ self.add_initial_transitions()
+
+ self.state_machine = state_machine
+ """A reference to the controlling `StateMachine` object."""
+
+ self.debug = debug
+ """Debugging mode on/off."""
+
+ if self.nested_sm is None:
+ self.nested_sm = self.state_machine.__class__
+ if self.nested_sm_kwargs is None:
+ self.nested_sm_kwargs = {'state_classes': [self.__class__],
+ 'initial_state': self.__class__.__name__}
+
+ def runtime_init(self):
+ """
+ Initialize this `State` before running the state machine; called from
+ `self.state_machine.run()`.
+ """
+ pass
+
+ def unlink(self):
+ """Remove circular references to objects no longer required."""
+ self.state_machine = None
+
+ def add_initial_transitions(self):
+ """Make and add transitions listed in `self.initial_transitions`."""
+ if self.initial_transitions:
+ names, transitions = self.make_transitions(
+ self.initial_transitions)
+ self.add_transitions(names, transitions)
+
+ def add_transitions(self, names, transitions):
+ """
+ Add a list of transitions to the start of the transition list.
+
+ Parameters:
+
+ - `names`: a list of transition names.
+ - `transitions`: a mapping of names to transition tuples.
+
+ Exceptions: `DuplicateTransitionError`, `UnknownTransitionError`.
+ """
+ for name in names:
+ if name in self.transitions:
+ raise DuplicateTransitionError(name)
+ if name not in transitions:
+ raise UnknownTransitionError(name)
+ self.transition_order[:0] = names
+ self.transitions.update(transitions)
+
+ def add_transition(self, name, transition):
+ """
+ Add a transition to the start of the transition list.
+
+ Parameter `transition`: a ready-made transition 3-tuple.
+
+ Exception: `DuplicateTransitionError`.
+ """
+ if name in self.transitions:
+ raise DuplicateTransitionError(name)
+ self.transition_order[:0] = [name]
+ self.transitions[name] = transition
+
+ def remove_transition(self, name):
+ """
+ Remove a transition by `name`.
+
+ Exception: `UnknownTransitionError`.
+ """
+ try:
+ del self.transitions[name]
+ self.transition_order.remove(name)
+ except (KeyError, ValueError):
+ raise UnknownTransitionError(name)
+
+ def make_transition(self, name, next_state=None):
+ """
+ Make & return a transition tuple based on `name`.
+
+ This is a convenience function to simplify transition creation.
+
+ Parameters:
+
+ - `name`: a string, the name of the transition pattern & method. This
+ `State` object must have a method called '`name`', and a dictionary
+ `self.patterns` containing a key '`name`'.
+ - `next_state`: a string, the name of the next `State` object for this
+ transition. A value of ``None`` (or absent) implies no state change
+ (i.e., continue with the same state).
+
+ Exceptions: `TransitionPatternNotFound`, `TransitionMethodNotFound`.
+ """
+ if next_state is None:
+ next_state = self.__class__.__name__
+ try:
+ pattern = self.patterns[name]
+ if not hasattr(pattern, 'match'):
+ pattern = re.compile(pattern)
+ except KeyError:
+ raise TransitionPatternNotFound(
+ '%s.patterns[%r]' % (self.__class__.__name__, name))
+ try:
+ method = getattr(self, name)
+ except AttributeError:
+ raise TransitionMethodNotFound(
+ '%s.%s' % (self.__class__.__name__, name))
+ return (pattern, method, next_state)
+
+ def make_transitions(self, name_list):
+ """
+ Return a list of transition names and a transition mapping.
+
+ Parameter `name_list`: a list, where each entry is either a transition
+ name string, or a 1- or 2-tuple (transition name, optional next state
+ name).
+ """
+ names = []
+ transitions = {}
+ for namestate in name_list:
+ if isinstance(namestate, str):
+ transitions[namestate] = self.make_transition(namestate)
+ names.append(namestate)
+ else:
+ transitions[namestate[0]] = self.make_transition(*namestate)
+ names.append(namestate[0])
+ return names, transitions
+
+ def no_match(self, context, transitions):
+ """
+ Called when there is no match from `StateMachine.check_line()`.
+
+ Return the same values returned by transition methods:
+
+ - context: unchanged;
+ - next state name: ``None``;
+ - empty result list.
+
+ Override in subclasses to catch this event.
+ """
+ return context, None, []
+
+ def bof(self, context):
+ """
+ Handle beginning-of-file. Return unchanged `context`, empty result.
+
+ Override in subclasses.
+
+ Parameter `context`: application-defined storage.
+ """
+ return context, []
+
+ def eof(self, context):
+ """
+ Handle end-of-file. Return empty result.
+
+ Override in subclasses.
+
+ Parameter `context`: application-defined storage.
+ """
+ return []
+
+ def nop(self, match, context, next_state):
+ """
+ A "do nothing" transition method.
+
+ Return unchanged `context` & `next_state`, empty result. Useful for
+ simple state changes (actionless transitions).
+ """
+ return context, next_state, []
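The transition-construction convention, one `patterns` entry plus one same-named method per transition, can be illustrated with a self-contained toy class. `TinyState` below is hypothetical; it only mirrors the pattern-then-method lookup order of `make_transition()`, without the real class's error types.

```python
import re

class TinyState:
    # Subclass-style setup: a pattern per transition name, and a method
    # with the same name, as `State.make_transition()` expects.
    patterns = {'blank': ' *$', 'text': r'\S.*'}

    def blank(self, match):
        return 'blank line'

    def text(self, match):
        return 'text: ' + match.string

    def make_transition(self, name, next_state=None):
        # `None` means "stay in this state".
        if next_state is None:
            next_state = self.__class__.__name__
        # Look up the pattern first (a missing key would mean the pattern
        # was not declared), compiling plain strings on the fly ...
        pattern = self.patterns[name]
        if not hasattr(pattern, 'match'):
            pattern = re.compile(pattern)
        # ... then resolve the bound transition method by name.
        method = getattr(self, name)
        return pattern, method, next_state

state = TinyState()
pattern, method, next_state = state.make_transition('text')
```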
+
+
+class StateMachineWS(StateMachine):
+
+ """
+ `StateMachine` subclass specialized for whitespace recognition.
+
+ There are three methods provided for extracting indented text blocks:
+
+ - `get_indented()`: use when the indent is unknown.
+ - `get_known_indented()`: use when the indent is known for all lines.
+ - `get_first_known_indented()`: use when only the first line's indent is
+ known.
+ """
+
+ def get_indented(self, until_blank=0, strip_indent=1):
+ """
+ Return a block of indented lines of text, and info.
+
+ Extract an indented block where the indent is unknown for all lines.
+
+ :Parameters:
+ - `until_blank`: Stop collecting at the first blank line if true
+ (1).
+ - `strip_indent`: Strip common leading indent if true (1,
+ default).
+
+ :Return:
+ - the indented block (a list of lines of text),
+ - its indent,
+ - its first line offset from BOF, and
+ - whether or not it finished with a blank line.
+ """
+ offset = self.abs_line_offset()
+ indented, indent, blank_finish = self.input_lines.get_indented(
+ self.line_offset, until_blank, strip_indent)
+ if indented:
+ self.next_line(len(indented) - 1) # advance to last indented line
+ while indented and not indented[0].strip():
+ indented.trim_start()
+ offset += 1
+ return indented, indent, offset, blank_finish
+
+ def get_known_indented(self, indent, until_blank=0, strip_indent=1):
+ """
+ Return an indented block and info.
+
+ Extract an indented block where the indent is known for all lines.
+ Starting with the current line, extract the entire text block with at
+ least `indent` indentation (which must be whitespace, except for the
+ first line).
+
+ :Parameters:
+ - `indent`: The number of indent columns/characters.
+ - `until_blank`: Stop collecting at the first blank line if true
+ (1).
+ - `strip_indent`: Strip `indent` characters of indentation if true
+ (1, default).
+
+ :Return:
+ - the indented block,
+ - its first line offset from BOF, and
+ - whether or not it finished with a blank line.
+ """
+ offset = self.abs_line_offset()
+ indented, indent, blank_finish = self.input_lines.get_indented(
+ self.line_offset, until_blank, strip_indent,
+ block_indent=indent)
+ self.next_line(len(indented) - 1) # advance to last indented line
+ while indented and not indented[0].strip():
+ indented.trim_start()
+ offset += 1
+ return indented, offset, blank_finish
+
+ def get_first_known_indented(self, indent, until_blank=0, strip_indent=1,
+ strip_top=1):
+ """
+ Return an indented block and info.
+
+ Extract an indented block where the indent is known for the first line
+ and unknown for all other lines.
+
+ :Parameters:
+ - `indent`: The first line's indent (# of columns/characters).
+ - `until_blank`: Stop collecting at the first blank line if true
+ (1).
+ - `strip_indent`: Strip `indent` characters of indentation if true
+ (1, default).
+ - `strip_top`: Strip blank lines from the beginning of the block.
+
+ :Return:
+ - the indented block,
+ - its indent,
+ - its first line offset from BOF, and
+ - whether or not it finished with a blank line.
+ """
+ offset = self.abs_line_offset()
+ indented, indent, blank_finish = self.input_lines.get_indented(
+ self.line_offset, until_blank, strip_indent,
+ first_indent=indent)
+ self.next_line(len(indented) - 1) # advance to last indented line
+ if strip_top:
+ while indented and not indented[0].strip():
+ indented.trim_start()
+ offset += 1
+ return indented, indent, offset, blank_finish
+
+
+class StateWS(State):
+
+ """
+ State superclass specialized for whitespace (blank lines & indents).
+
+ Use this class with `StateMachineWS`. The transitions 'blank' (for blank
+ lines) and 'indent' (for indented text blocks) are added automatically,
+ before any other transitions. The transition method `blank()` handles
+ blank lines and `indent()` handles nested indented blocks. Indented
+ blocks trigger a new state machine to be created by `indent()` and run.
+ The class of the state machine to be created is in `indent_sm`, and the
+ constructor keyword arguments are in the dictionary `indent_sm_kwargs`.
+
+ The methods `known_indent()` and `first_known_indent()` are provided for
+ indented blocks where the indent (all lines' and first line's only,
+ respectively) is known to the transition method, along with the attributes
+ `known_indent_sm` and `known_indent_sm_kwargs`. Neither transition method
+ is triggered automatically.
+ """
+
+ indent_sm = None
+ """
+ The `StateMachine` class handling indented text blocks.
+
+ If left as ``None``, `indent_sm` defaults to the value of
+ `State.nested_sm`. Override it in subclasses to avoid the default.
+ """
+
+ indent_sm_kwargs = None
+ """
+ Keyword arguments dictionary, passed to the `indent_sm` constructor.
+
+ If left as ``None``, `indent_sm_kwargs` defaults to the value of
+ `State.nested_sm_kwargs`. Override it in subclasses to avoid the default.
+ """
+
+ known_indent_sm = None
+ """
+ The `StateMachine` class handling known-indented text blocks.
+
+ If left as ``None``, `known_indent_sm` defaults to the value of
+ `indent_sm`. Override it in subclasses to avoid the default.
+ """
+
+ known_indent_sm_kwargs = None
+ """
+ Keyword arguments dictionary, passed to the `known_indent_sm` constructor.
+
+ If left as ``None``, `known_indent_sm_kwargs` defaults to the value of
+ `indent_sm_kwargs`. Override it in subclasses to avoid the default.
+ """
+
+ ws_patterns = {'blank': ' *$',
+ 'indent': ' +'}
+ """Patterns for default whitespace transitions. May be overridden in
+ subclasses."""
+
+ ws_initial_transitions = ('blank', 'indent')
+ """Default initial whitespace transitions, added before those listed in
+ `State.initial_transitions`. May be overridden in subclasses."""
+
+ def __init__(self, state_machine, debug=0):
+ """
+ Initialize a `StateWS` object; extends `State.__init__()`.
+
+ Check for indent state machine attributes, set defaults if not set.
+ """
+ State.__init__(self, state_machine, debug)
+ if self.indent_sm is None:
+ self.indent_sm = self.nested_sm
+ if self.indent_sm_kwargs is None:
+ self.indent_sm_kwargs = self.nested_sm_kwargs
+ if self.known_indent_sm is None:
+ self.known_indent_sm = self.indent_sm
+ if self.known_indent_sm_kwargs is None:
+ self.known_indent_sm_kwargs = self.indent_sm_kwargs
+
+ def add_initial_transitions(self):
+ """
+ Add whitespace-specific transitions before those defined in subclass.
+
+ Extends `State.add_initial_transitions()`.
+ """
+ State.add_initial_transitions(self)
+ if self.patterns is None:
+ self.patterns = {}
+ self.patterns.update(self.ws_patterns)
+ names, transitions = self.make_transitions(
+ self.ws_initial_transitions)
+ self.add_transitions(names, transitions)
+
+ def blank(self, match, context, next_state):
+ """Handle blank lines. Does nothing. Override in subclasses."""
+ return self.nop(match, context, next_state)
+
+ def indent(self, match, context, next_state):
+ """
+ Handle an indented text block. Extend or override in subclasses.
+
+ Recursively run the registered state machine for indented blocks
+ (`self.indent_sm`).
+ """
+ indented, indent, line_offset, blank_finish = \
+ self.state_machine.get_indented()
+ sm = self.indent_sm(debug=self.debug, **self.indent_sm_kwargs)
+ results = sm.run(indented, input_offset=line_offset)
+ return context, next_state, results
+
+ def known_indent(self, match, context, next_state):
+ """
+ Handle a known-indent text block. Extend or override in subclasses.
+
+ Recursively run the registered state machine for known-indent indented
+ blocks (`self.known_indent_sm`). The indent is the length of the
+ match, ``match.end()``.
+ """
+ indented, line_offset, blank_finish = \
+ self.state_machine.get_known_indented(match.end())
+ sm = self.known_indent_sm(debug=self.debug,
+ **self.known_indent_sm_kwargs)
+ results = sm.run(indented, input_offset=line_offset)
+ return context, next_state, results
+
+ def first_known_indent(self, match, context, next_state):
+ """
+ Handle an indented text block (first line's indent known).
+
+ Extend or override in subclasses.
+
+ Recursively run the registered state machine for known-indent indented
+ blocks (`self.known_indent_sm`). The indent is the length of the
+ match, ``match.end()``.
+ """
+ indented, line_offset, blank_finish = \
+ self.state_machine.get_first_known_indented(match.end())
+ sm = self.known_indent_sm(debug=self.debug,
+ **self.known_indent_sm_kwargs)
+ results = sm.run(indented, input_offset=line_offset)
+ return context, next_state, results
+
+
+class _SearchOverride:
+
+ """
+ Mix-in class to override `StateMachine` regular expression behavior.
+
+ Changes regular expression matching from the default `re.match()`
+ (succeeds only if the pattern matches at the start of `self.line`) to
+ `re.search()` (succeeds if the pattern matches anywhere in `self.line`).
+ When subclassing a `StateMachine`, list this class **first** in the
+ inheritance list of the class definition.
+ """
+
+ def match(self, pattern):
+ """
+ Return the result of a regular expression search.
+
+ Overrides `StateMachine.match()`.
+
+ Parameter `pattern`: `re` compiled regular expression.
+ """
+ return pattern.search(self.line)
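The difference the mix-in relies on is only the anchoring of the two `re` calls, as a quick sketch shows:

```python
import re

pattern = re.compile('needle')

# `pattern.match()` anchors at the start of the string; `pattern.search()`
# scans the whole string.  The mix-in swaps the former for the latter.
line = 'hay needle hay'
match_result = pattern.match(line)    # None: 'needle' is not at index 0
search_result = pattern.search(line)  # found, starting at index 4
```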
+
+
+class SearchStateMachine(_SearchOverride, StateMachine):
+ """`StateMachine` which uses `re.search()` instead of `re.match()`."""
+ pass
+
+
+class SearchStateMachineWS(_SearchOverride, StateMachineWS):
+ """`StateMachineWS` which uses `re.search()` instead of `re.match()`."""
+ pass
+
+
+class ViewList:
+
+ """
+ List with extended functionality: slices of ViewList objects are child
+ lists, linked to their parents. Changes made to a child list also affect
+ the parent list. A child list is effectively a "view" (in the SQL sense)
+ of the parent list. Changes to parent lists, however, do *not* affect
+ active child lists. If a parent list is changed, any active child lists
+ should be recreated.
+
+ The start and end of the slice can be trimmed using the `trim_start()` and
+ `trim_end()` methods, without affecting the parent list. The link between
+ child and parent lists can be broken by calling `disconnect()` on the
+ child list.
+
+ Also, ViewList objects keep track of the source & offset of each item.
+ This information is accessible via the `source()`, `offset()`, and
+ `info()` methods.
+ """
+
+ def __init__(self, initlist=None, source=None, items=None,
+ parent=None, parent_offset=None):
+ self.data = []
+ """The actual list of data, flattened from various sources."""
+
+ self.items = []
+ """A list of (source, offset) pairs, same length as `self.data`: the
+ source of each line and the offset of each line from the beginning of
+ its source."""
+
+ self.parent = parent
+ """The parent list."""
+
+ self.parent_offset = parent_offset
+ """Offset of this list from the beginning of the parent list."""
+
+ if isinstance(initlist, ViewList):
+ self.data = initlist.data[:]
+ self.items = initlist.items[:]
+ elif initlist is not None:
+ self.data = list(initlist)
+ if items:
+ self.items = items
+ else:
+ self.items = [(source, i) for i in range(len(initlist))]
+ assert len(self.data) == len(self.items), 'data mismatch'
+
+ def __str__(self):
+ return str(self.data)
+
+ def __repr__(self):
+ return '%s(%s, items=%s)' % (self.__class__.__name__,
+ self.data, self.items)
+
+ def __lt__(self, other): return self.data < self.__cast(other)
+ def __le__(self, other): return self.data <= self.__cast(other)
+ def __eq__(self, other): return self.data == self.__cast(other)
+ def __ne__(self, other): return self.data != self.__cast(other)
+ def __gt__(self, other): return self.data > self.__cast(other)
+ def __ge__(self, other): return self.data >= self.__cast(other)
+ def __cmp__(self, other): return cmp(self.data, self.__cast(other))
+
+ def __cast(self, other):
+ if isinstance(other, ViewList):
+ return other.data
+ else:
+ return other
+
+ def __contains__(self, item): return item in self.data
+ def __len__(self): return len(self.data)
+
+ # The __getitem__()/__setitem__() methods check whether the index
+ # is a slice first, since indexing a native list with a slice object
+ # just works.
+
+ def __getitem__(self, i):
+ if isinstance(i, types.SliceType):
+ assert i.step in (None, 1), 'cannot handle slice with stride'
+ return self.__class__(self.data[i.start:i.stop],
+ items=self.items[i.start:i.stop],
+ parent=self, parent_offset=i.start or 0)
+ else:
+ return self.data[i]
+
+ def __setitem__(self, i, item):
+ if isinstance(i, types.SliceType):
+ assert i.step in (None, 1), 'cannot handle slice with stride'
+ if not isinstance(item, ViewList):
+ raise TypeError('assigning non-ViewList to ViewList slice')
+ self.data[i.start:i.stop] = item.data
+ self.items[i.start:i.stop] = item.items
+ assert len(self.data) == len(self.items), 'data mismatch'
+ if self.parent:
+ self.parent[(i.start or 0) + self.parent_offset
+ : (i.stop or len(self)) + self.parent_offset] = item
+ else:
+ self.data[i] = item
+ if self.parent:
+ self.parent[i + self.parent_offset] = item
+
+ def __delitem__(self, i):
+ try:
+ del self.data[i]
+ del self.items[i]
+ if self.parent:
+ del self.parent[i + self.parent_offset]
+ except TypeError:
+ assert i.step is None, 'cannot handle slice with stride'
+ del self.data[i.start:i.stop]
+ del self.items[i.start:i.stop]
+ if self.parent:
+ del self.parent[(i.start or 0) + self.parent_offset
+ : (i.stop or len(self)) + self.parent_offset]
+
+ def __add__(self, other):
+ if isinstance(other, ViewList):
+ return self.__class__(self.data + other.data,
+ items=(self.items + other.items))
+ else:
+ raise TypeError('adding non-ViewList to a ViewList')
+
+ def __radd__(self, other):
+ if isinstance(other, ViewList):
+ return self.__class__(other.data + self.data,
+ items=(other.items + self.items))
+ else:
+ raise TypeError('adding ViewList to a non-ViewList')
+
+ def __iadd__(self, other):
+ if isinstance(other, ViewList):
+ self.data += other.data
+ else:
+ raise TypeError('argument to += must be a ViewList')
+ return self
+
+ def __mul__(self, n):
+ return self.__class__(self.data * n, items=(self.items * n))
+
+ __rmul__ = __mul__
+
+ def __imul__(self, n):
+ self.data *= n
+ self.items *= n
+ return self
+
+ def extend(self, other):
+ if not isinstance(other, ViewList):
+ raise TypeError('extending a ViewList with a non-ViewList')
+ if self.parent:
+ self.parent.insert(len(self.data) + self.parent_offset, other)
+ self.data.extend(other.data)
+ self.items.extend(other.items)
+
+ def append(self, item, source=None, offset=0):
+ if source is None:
+ self.extend(item)
+ else:
+ if self.parent:
+ self.parent.insert(len(self.data) + self.parent_offset, item,
+ source, offset)
+ self.data.append(item)
+ self.items.append((source, offset))
+
+ def insert(self, i, item, source=None, offset=0):
+ if source is None:
+ if not isinstance(item, ViewList):
+ raise TypeError('inserting non-ViewList with no source given')
+ self.data[i:i] = item.data
+ self.items[i:i] = item.items
+ if self.parent:
+ index = (len(self.data) + i) % len(self.data)
+ self.parent.insert(index + self.parent_offset, item)
+ else:
+ self.data.insert(i, item)
+ self.items.insert(i, (source, offset))
+ if self.parent:
+ index = (len(self.data) + i) % len(self.data)
+ self.parent.insert(index + self.parent_offset, item,
+ source, offset)
+
+ def pop(self, i=-1):
+ if self.parent:
+ index = (len(self.data) + i) % len(self.data)
+ self.parent.pop(index + self.parent_offset)
+ self.items.pop(i)
+ return self.data.pop(i)
+
+ def trim_start(self, n=1):
+ """
+ Remove items from the start of the list, without touching the parent.
+ """
+ if n > len(self.data):
+ raise IndexError("Size of trim too large; can't trim %s items "
+ "from a list of size %s." % (n, len(self.data)))
+ elif n < 0:
+ raise IndexError('Trim size must be >= 0.')
+ del self.data[:n]
+ del self.items[:n]
+ if self.parent:
+ self.parent_offset += n
+
+ def trim_end(self, n=1):
+ """
+ Remove items from the end of the list, without touching the parent.
+ """
+ if n > len(self.data):
+ raise IndexError("Size of trim too large; can't trim %s items "
+ "from a list of size %s." % (n, len(self.data)))
+ elif n < 0:
+ raise IndexError('Trim size must be >= 0.')
+ del self.data[-n:]
+ del self.items[-n:]
+
+ def remove(self, item):
+ index = self.index(item)
+ del self[index]
+
+ def count(self, item): return self.data.count(item)
+ def index(self, item): return self.data.index(item)
+
+ def reverse(self):
+ self.data.reverse()
+ self.items.reverse()
+ self.parent = None
+
+ def sort(self, *args):
+ tmp = zip(self.data, self.items)
+ tmp.sort(*args)
+ self.data = [entry[0] for entry in tmp]
+ self.items = [entry[1] for entry in tmp]
+ self.parent = None
+
+ def info(self, i):
+ """Return source & offset for index `i`."""
+ try:
+ return self.items[i]
+ except IndexError:
+ if i == len(self.data): # Just past the end
+ return self.items[i - 1][0], None
+ else:
+ raise
+
+ def source(self, i):
+ """Return source for index `i`."""
+ return self.info(i)[0]
+
+ def offset(self, i):
+ """Return offset for index `i`."""
+ return self.info(i)[1]
+
+ def disconnect(self):
+ """Break link between this list and parent list."""
+ self.parent = None
+
+ def xitems(self):
+ """Return iterator yielding (source, offset, value) tuples."""
+ for (value, (source, offset)) in zip(self.data, self.items):
+ yield (source, offset, value)
+
+ def pprint(self):
+ """Print the list in `grep` format (`source:offset:value` lines)"""
+ for line in self.xitems():
+ print "%s:%d:%s" % line
+
+
+class StringList(ViewList):
+
+ """A `ViewList` with string-specific methods."""
+
+ def trim_left(self, length, start=0, end=sys.maxint):
+ """
+ Trim `length` characters off the beginning of each item, in-place,
+ from index `start` to `end`. No whitespace-checking is done on the
+ trimmed text. Does not affect slice parent.
+ """
+ self.data[start:end] = [line[length:]
+ for line in self.data[start:end]]
+
+ def get_text_block(self, start, flush_left=0):
+ """
+ Return a contiguous block of text.
+
+ If `flush_left` is true, raise `UnexpectedIndentationError` if an
+ indented line is encountered before the text block ends (with a blank
+ line).
+ """
+ end = start
+ last = len(self.data)
+ while end < last:
+ line = self.data[end]
+ if not line.strip():
+ break
+ if flush_left and (line[0] == ' '):
+ source, offset = self.info(end)
+ raise UnexpectedIndentationError(self[start:end], source,
+ offset + 1)
+ end += 1
+ return self[start:end]
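A stripped-down version of this scan, using a plain list and a generic `ValueError` in place of `UnexpectedIndentationError`, shows the two stopping conditions (all names here are illustrative):

```python
def get_text_block(lines, start, flush_left=False):
    # Scan forward from `start` until a blank line ends the block.  With
    # `flush_left`, an indented line before the block ends is an error,
    # as in `StringList.get_text_block()`.
    end = start
    while end < len(lines):
        line = lines[end]
        if not line.strip():
            break
        if flush_left and line.startswith(' '):
            raise ValueError('unexpected indentation at line %d' % end)
        end += 1
    return lines[start:end]

lines = ['one', 'two', '', 'three']
```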
+
+ def get_indented(self, start=0, until_blank=0, strip_indent=1,
+ block_indent=None, first_indent=None):
+ """
+ Extract and return a StringList of indented lines of text.
+
+ Collect all lines with indentation, determine the minimum indentation,
+ remove the minimum indentation from all indented lines (unless
+ `strip_indent` is false), and return them. All lines up to but not
+ including the first unindented line will be returned.
+
+ :Parameters:
+ - `start`: The index of the first line to examine.
+ - `until_blank`: Stop collecting at the first blank line if true.
+ - `strip_indent`: Strip common leading indent if true (default).
+ - `block_indent`: The indent of the entire block, if known.
+ - `first_indent`: The indent of the first line, if known.
+
+ :Return:
+ - a StringList of indented lines with minimum indent removed;
+ - the amount of the indent;
+ - a boolean: did the indented block finish with a blank line or EOF?
+ """
+ indent = block_indent # start with None if unknown
+ end = start
+ if block_indent is not None and first_indent is None:
+ first_indent = block_indent
+ if first_indent is not None:
+ end += 1
+ last = len(self.data)
+ while end < last:
+ line = self.data[end]
+ if line and (line[0] != ' '
+ or (block_indent is not None
+ and line[:block_indent].strip())):
+ # Line not indented or insufficiently indented.
+ # Block finished properly iff the last indented line is blank:
+ blank_finish = ((end > start)
+ and not self.data[end - 1].strip())
+ break
+ stripped = line.lstrip()
+ if not stripped: # blank line
+ if until_blank:
+ blank_finish = 1
+ break
+ elif block_indent is None:
+ line_indent = len(line) - len(stripped)
+ if indent is None:
+ indent = line_indent
+ else:
+ indent = min(indent, line_indent)
+ end += 1
+ else:
+ blank_finish = 1 # block ends at end of lines
+ block = self[start:end]
+ if first_indent is not None and block:
+ block.data[0] = block.data[0][first_indent:]
+ if indent and strip_indent:
+ block.trim_left(indent, start=(first_indent is not None))
+ return block, indent or 0, blank_finish
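The unknown-indent case of this method can be sketched as a standalone function over a plain list of strings. It is an approximation: it omits the `block_indent` and `first_indent` handling, and the function name is illustrative.

```python
def get_indented(lines, until_blank=False, strip_indent=True):
    # Collect leading indented lines, track the minimum indent seen, and
    # stop at the first unindented line.
    indent = None
    end = 0
    blank_finish = True  # block running to end-of-lines counts as finished
    while end < len(lines):
        line = lines[end]
        stripped = line.lstrip()
        if line and not line.startswith(' '):
            # Unindented line ends the block; it finished "properly"
            # only if the previous line was blank.
            blank_finish = end > 0 and not lines[end - 1].strip()
            break
        if stripped:
            line_indent = len(line) - len(stripped)
            indent = line_indent if indent is None else min(indent, line_indent)
        elif until_blank:
            break
        end += 1
    block = lines[:end]
    if indent and strip_indent:
        block = [line[indent:] for line in block]
    return block, indent or 0, blank_finish

block, indent, blank_finish = get_indented(['   a', '    b', '', 'c'])
```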
+
+ def get_2D_block(self, top, left, bottom, right, strip_indent=1):
+ block = self[top:bottom]
+ indent = right
+ for i in range(len(block.data)):
+ block.data[i] = line = block.data[i][left:right].rstrip()
+ if line:
+ indent = min(indent, len(line) - len(line.lstrip()))
+ if strip_indent and 0 < indent < right:
+ block.data = [line[indent:] for line in block.data]
+ return block
+
+ def pad_double_width(self, pad_char):
+ """
+ Pad all double-width characters in self by appending `pad_char` to each.
+ For East Asian language support.
+ """
+ if hasattr(unicodedata, 'east_asian_width'):
+ east_asian_width = unicodedata.east_asian_width
+ else:
+ return # new in Python 2.4
+ for i in range(len(self.data)):
+ line = self.data[i]
+ if isinstance(line, unicode):
+ new = []
+ for char in line:
+ new.append(char)
+ if east_asian_width(char) in 'WF': # 'W'ide & 'F'ull-width
+ new.append(pad_char)
+ self.data[i] = ''.join(new)
+
+ def replace(self, old, new):
+ """Replace all occurrences of substring `old` with `new`."""
+ for i in range(len(self.data)):
+ self.data[i] = self.data[i].replace(old, new)
+
+
+class StateMachineError(Exception): pass
+class UnknownStateError(StateMachineError): pass
+class DuplicateStateError(StateMachineError): pass
+class UnknownTransitionError(StateMachineError): pass
+class DuplicateTransitionError(StateMachineError): pass
+class TransitionPatternNotFound(StateMachineError): pass
+class TransitionMethodNotFound(StateMachineError): pass
+class UnexpectedIndentationError(StateMachineError): pass
+
+
+class TransitionCorrection(Exception):
+
+ """
+ Raise from within a transition method to switch to another transition.
+
+ Raise with one argument, the new transition name.
+ """
+
+
+class StateCorrection(Exception):
+
+ """
+ Raise from within a transition method to switch to another state.
+
+ Raise with one or two arguments: new state name, and an optional new
+ transition name.
+ """
+
+
+def string2lines(astring, tab_width=8, convert_whitespace=0,
+ whitespace=re.compile('[\v\f]')):
+ """
+ Return a list of one-line strings with tabs expanded, no newlines, and
+ trailing whitespace stripped.
+
+ Each tab is expanded with between 1 and `tab_width` spaces, so that the
+ next character's index becomes a multiple of `tab_width` (8 by default).
+
+ Parameters:
+
+ - `astring`: a multi-line string.
+ - `tab_width`: the number of columns between tab stops.
+ - `convert_whitespace`: convert form feeds and vertical tabs to spaces?
+ """
+ if convert_whitespace:
+ astring = whitespace.sub(' ', astring)
+ return [s.expandtabs(tab_width).rstrip() for s in astring.splitlines()]
+
+def _exception_data():
+ """
+ Return exception information:
+
+ - the exception's class name;
+ - the exception object;
+ - the name of the file containing the offending code;
+ - the line number of the offending code;
+ - the function name of the offending code.
+ """
+ type, value, traceback = sys.exc_info()
+ while traceback.tb_next:
+ traceback = traceback.tb_next
+ code = traceback.tb_frame.f_code
+ return (type.__name__, value, code.co_filename, traceback.tb_lineno,
+ code.co_name)
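The `string2lines` helper above is self-contained, so its behavior is easy to demonstrate outside of docutils. The following is an illustrative re-statement of the same logic (not an import of the patched module): tabs are expanded to the next multiple of `tab_width`, trailing whitespace is stripped, and form feeds/vertical tabs are optionally converted to spaces so they do not act as line boundaries.

```python
import re

def string2lines(astring, tab_width=8, convert_whitespace=0,
                 whitespace=re.compile('[\v\f]')):
    # Expand tabs to the next multiple of tab_width, strip trailing
    # whitespace, and split into a list of newline-free strings.
    if convert_whitespace:
        astring = whitespace.sub(' ', astring)
    return [s.expandtabs(tab_width).rstrip() for s in astring.splitlines()]

print(string2lines('one\ttwo\nthree   '))  # ['one     two', 'three']
print(string2lines('a\x0cb', convert_whitespace=1))  # ['a b']
```

Note that without `convert_whitespace`, `str.splitlines` would treat the form feed itself as a line break, which is exactly why the substitution happens first.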
diff --git a/python/helpers/docutils/transforms/__init__.py b/python/helpers/docutils/transforms/__init__.py
new file mode 100644
index 0000000..1f136cc
--- /dev/null
+++ b/python/helpers/docutils/transforms/__init__.py
@@ -0,0 +1,172 @@
+# $Id: __init__.py 4975 2007-03-01 18:08:32Z wiemann $
+# Authors: David Goodger <[email protected]>; Ueli Schlaepfer
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains modules for standard tree transforms available
+to Docutils components. Tree transforms serve a variety of purposes:
+
+- To tie up certain syntax-specific "loose ends" that remain after the
+ initial parsing of the input plaintext. These transforms are used to
+ supplement a limited syntax.
+
+- To automate the internal linking of the document tree (hyperlink
+ references, footnote references, etc.).
+
+- To extract useful information from the document tree. These
+ transforms may be used to construct (for example) indexes and tables
+ of contents.
+
+Each transform is an optional step that a Docutils component may
+choose to perform on the parsed document.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils import languages, ApplicationError, TransformSpec
+
+
+class TransformError(ApplicationError): pass
+
+
+class Transform:
+
+ """
+ Docutils transform component abstract base class.
+ """
+
+ default_priority = None
+ """Numerical priority of this transform, 0 through 999 (override)."""
+
+ def __init__(self, document, startnode=None):
+ """
+ Initial setup for in-place document transforms.
+ """
+
+ self.document = document
+ """The document tree to transform."""
+
+ self.startnode = startnode
+ """Node from which to begin the transform. For many transforms which
+ apply to the document as a whole, `startnode` is not set (i.e. its
+ value is `None`)."""
+
+ self.language = languages.get_language(
+ document.settings.language_code)
+ """Language module local to this document."""
+
+ def apply(self, **kwargs):
+ """Override to apply the transform to the document tree."""
+ raise NotImplementedError('subclass must override this method')
+
+
+class Transformer(TransformSpec):
+
+ """
+ Stores transforms (`Transform` classes) and applies them to document
+ trees. Also keeps track of components by component type name.
+ """
+
+ def __init__(self, document):
+ self.transforms = []
+ """List of transforms to apply. Each item is a 3-tuple:
+ ``(priority string, transform class, pending node or None)``."""
+
+ self.unknown_reference_resolvers = []
+ """List of hook functions which assist in resolving references"""
+
+ self.document = document
+ """The `nodes.document` object this Transformer is attached to."""
+
+ self.applied = []
+ """Transforms already applied, in order."""
+
+ self.sorted = 0
+ """Boolean: is `self.tranforms` sorted?"""
+
+ self.components = {}
+ """Mapping of component type name to component object. Set by
+ `self.populate_from_components()`."""
+
+ self.serialno = 0
+ """Internal serial number to keep track of the add order of
+ transforms."""
+
+ def add_transform(self, transform_class, priority=None, **kwargs):
+ """
+ Store a single transform. Use `priority` to override the default.
+ `kwargs` is a dictionary whose contents are passed as keyword
+ arguments to the `apply` method of the transform. This can be used to
+ pass application-specific data to the transform instance.
+ """
+ if priority is None:
+ priority = transform_class.default_priority
+ priority_string = self.get_priority_string(priority)
+ self.transforms.append(
+ (priority_string, transform_class, None, kwargs))
+ self.sorted = 0
+
+ def add_transforms(self, transform_list):
+ """Store multiple transforms, with default priorities."""
+ for transform_class in transform_list:
+ priority_string = self.get_priority_string(
+ transform_class.default_priority)
+ self.transforms.append(
+ (priority_string, transform_class, None, {}))
+ self.sorted = 0
+
+ def add_pending(self, pending, priority=None):
+ """Store a transform with an associated `pending` node."""
+ transform_class = pending.transform
+ if priority is None:
+ priority = transform_class.default_priority
+ priority_string = self.get_priority_string(priority)
+ self.transforms.append(
+ (priority_string, transform_class, pending, {}))
+ self.sorted = 0
+
+ def get_priority_string(self, priority):
+ """
+ Return a string, `priority` combined with `self.serialno`.
+
+ This ensures FIFO order on transforms with identical priority.
+ """
+ self.serialno += 1
+ return '%03d-%03d' % (priority, self.serialno)
+
+ def populate_from_components(self, components):
+ """
+ Store each component's default transforms, with default priorities.
+ Also, store components by type name in a mapping for later lookup.
+ """
+ for component in components:
+ if component is None:
+ continue
+ self.add_transforms(component.get_transforms())
+ self.components[component.component_type] = component
+ self.sorted = 0
+ # Set up all of the reference resolvers for this transformer. Each
+ # component of this transformer is able to register its own helper
+ # functions to help resolve references.
+ unknown_reference_resolvers = []
+ for i in components:
+ unknown_reference_resolvers.extend(i.unknown_reference_resolvers)
+ decorated_list = [(f.priority, f) for f in unknown_reference_resolvers]
+ decorated_list.sort()
+ self.unknown_reference_resolvers.extend([f[1] for f in decorated_list])
+
+ def apply_transforms(self):
+ """Apply all of the stored transforms, in priority order."""
+ self.document.reporter.attach_observer(
+ self.document.note_transform_message)
+ while self.transforms:
+ if not self.sorted:
+ # Unsorted initially, and whenever a transform is added.
+ self.transforms.sort()
+ self.transforms.reverse()
+ self.sorted = 1
+ priority, transform_class, pending, kwargs = self.transforms.pop()
+ transform = transform_class(self.document, startnode=pending)
+ transform.apply(**kwargs)
+ self.applied.append((priority, transform_class, pending, kwargs))
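The `get_priority_string` scheme in `Transformer` deserves a moment of attention: by zero-padding both the priority and a monotonically increasing serial number into one string, plain lexicographic sorting yields priority order with FIFO tie-breaking. A minimal sketch of just that mechanism (stripped of the rest of the class, purely for illustration):

```python
class MiniTransformer:
    # Sketch of docutils.transforms.Transformer's priority strings:
    # '%03d-%03d' % (priority, serial) sorts first by priority, then
    # by insertion order among equal priorities.
    def __init__(self):
        self.serialno = 0

    def get_priority_string(self, priority):
        self.serialno += 1
        return '%03d-%03d' % (priority, self.serialno)

t = MiniTransformer()
keys = [t.get_priority_string(p) for p in (780, 210, 780)]
print(sorted(keys))  # ['210-002', '780-001', '780-003']
```

The two transforms registered with priority 780 stay in registration order after sorting, which is the FIFO guarantee the docstring promises.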
diff --git a/python/helpers/docutils/transforms/components.py b/python/helpers/docutils/transforms/components.py
new file mode 100644
index 0000000..d3e548c
--- /dev/null
+++ b/python/helpers/docutils/transforms/components.py
@@ -0,0 +1,52 @@
+# $Id: components.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Docutils component-related transforms.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import re
+import time
+from docutils import nodes, utils
+from docutils import ApplicationError, DataError
+from docutils.transforms import Transform, TransformError
+
+
+class Filter(Transform):
+
+ """
+ Include or exclude elements which depend on a specific Docutils component.
+
+ For use with `nodes.pending` elements. A "pending" element's dictionary
+ attribute ``details`` must contain the keys "component" and "format". The
+ value of ``details['component']`` must match the type name of the
+ component the elements depend on (e.g. "writer"). The value of
+ ``details['format']`` is the name of a specific format or context of that
+ component (e.g. "html"). If the matching Docutils component supports that
+ format or context, the "pending" element is replaced by the contents of
+ ``details['nodes']`` (a list of nodes); otherwise, the "pending" element
+ is removed.
+
+ For example, the reStructuredText "meta" directive creates a "pending"
+ element containing a "meta" element (in ``pending.details['nodes']``).
+ Only writers (``pending.details['component'] == 'writer'``) supporting the
+ "html" format (``pending.details['format'] == 'html'``) will include the
+ "meta" element; it will be deleted from the output of all other writers.
+ """
+
+ default_priority = 780
+
+ def apply(self):
+ pending = self.startnode
+ component_type = pending.details['component'] # 'reader' or 'writer'
+ format = pending.details['format']
+ component = self.document.transformer.components[component_type]
+ if component.supports(format):
+ pending.replace_self(pending.details['nodes'])
+ else:
+ pending.parent.remove(pending)
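The decision `Filter.apply` makes can be sketched without the docutils node machinery. The classes and helper below are hypothetical stand-ins invented for this illustration (they are not part of the docutils API): a component either supports the requested format, in which case the pending payload survives, or it does not, in which case the payload is dropped.

```python
class FakeHtmlWriter:
    # Hypothetical component: supports only the 'html' format.
    def supports(self, format):
        return format == 'html'

def filter_pending(component, details):
    # Mirrors Filter.apply's branch: keep the payload nodes if the
    # component supports the format, otherwise drop them (None).
    if component.supports(details['format']):
        return details['nodes']
    return None

writer = FakeHtmlWriter()
print(filter_pending(writer, {'format': 'html', 'nodes': ['meta']}))   # ['meta']
print(filter_pending(writer, {'format': 'latex', 'nodes': ['meta']}))  # None
```

This matches the "meta" directive example in the docstring: HTML writers keep the element, every other writer discards it.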
diff --git a/python/helpers/docutils/transforms/frontmatter.py b/python/helpers/docutils/transforms/frontmatter.py
new file mode 100644
index 0000000..63ae55d
--- /dev/null
+++ b/python/helpers/docutils/transforms/frontmatter.py
@@ -0,0 +1,512 @@
+# $Id: frontmatter.py 5618 2008-07-28 08:37:32Z strank $
+# Author: David Goodger, Ueli Schlaepfer <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms related to the front matter of a document or a section
+(information found before the main text):
+
+- `DocTitle`: Used to transform a lone top level section's title to
+ the document title, promote a remaining lone top-level section's
+ title to the document subtitle, and determine the document's title
+ metadata (document['title']) based on the document title and/or the
+ "title" setting.
+
+- `SectionSubTitle`: Used to transform a lone subsection into a
+ subtitle.
+
+- `DocInfo`: Used to transform a bibliographic field list into docinfo
+ elements.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import re
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+class TitlePromoter(Transform):
+
+ """
+ Abstract base class for DocTitle and SectionSubTitle transforms.
+ """
+
+ def promote_title(self, node):
+ """
+ Transform the following tree::
+
+ <node>
+ <section>
+ <title>
+ ...
+
+ into ::
+
+ <node>
+ <title>
+ ...
+
+ `node` is normally a document.
+ """
+ # `node` must not have a title yet.
+ assert not (len(node) and isinstance(node[0], nodes.title))
+ section, index = self.candidate_index(node)
+ if index is None:
+ return None
+ # Transfer the section's attributes to the node:
+ node.attributes.update(section.attributes)
+ # setup_child is called automatically for all nodes.
+ node[:] = (section[:1] # section title
+ + node[:index] # everything that was in the
+ # node before the section
+ + section[1:]) # everything that was in the section
+ assert isinstance(node[0], nodes.title)
+ return 1
+
+ def promote_subtitle(self, node):
+ """
+ Transform the following node tree::
+
+ <node>
+ <title>
+ <section>
+ <title>
+ ...
+
+ into ::
+
+ <node>
+ <title>
+ <subtitle>
+ ...
+ """
+ subsection, index = self.candidate_index(node)
+ if index is None:
+ return None
+ subtitle = nodes.subtitle()
+ # Transfer the subsection's attributes to the new subtitle:
+ # This causes trouble with list attributes! To do: Write a
+ # test case which catches direct access to the `attributes`
+ # dictionary and/or write a test case which shows problems in
+ # this particular case.
+ subtitle.attributes.update(subsection.attributes)
+ # We're losing the subtitle's attributes here! To do: Write a
+ # test case which shows this behavior.
+ # Transfer the contents of the subsection's title to the
+ # subtitle:
+ subtitle[:] = subsection[0][:]
+ node[:] = (node[:1] # title
+ + [subtitle]
+ # everything that was before the section:
+ + node[1:index]
+ # everything that was in the subsection:
+ + subsection[1:])
+ return 1
+
+ def candidate_index(self, node):
+ """
+ Find and return the promotion candidate and its index.
+
+ Return (None, None) if no valid candidate was found.
+ """
+ index = node.first_child_not_matching_class(
+ nodes.PreBibliographic)
+ if index is None or len(node) > (index + 1) or \
+ not isinstance(node[index], nodes.section):
+ return None, None
+ else:
+ return node[index], index
+
+
+class DocTitle(TitlePromoter):
+
+ """
+ In reStructuredText_, there is no way to specify a document title
+ and subtitle explicitly. Instead, we can supply the document title
+ (and possibly the subtitle as well) implicitly, and use this
+ two-step transform to "raise" or "promote" the title(s) (and their
+ corresponding section contents) to the document level.
+
+ 1. If the document contains a single top-level section as its
+ first non-comment element, the top-level section's title
+ becomes the document's title, and the top-level section's
+ contents become the document's immediate contents. The lone
+ top-level section header must be the first non-comment element
+ in the document.
+
+ For example, take this input text::
+
+ =================
+ Top-Level Title
+ =================
+
+ A paragraph.
+
+ Once parsed, it looks like this::
+
+ <document>
+ <section names="top-level title">
+ <title>
+ Top-Level Title
+ <paragraph>
+ A paragraph.
+
+ After running the DocTitle transform, we have::
+
+ <document names="top-level title">
+ <title>
+ Top-Level Title
+ <paragraph>
+ A paragraph.
+
+ 2. If step 1 successfully determines the document title, we
+ continue by checking for a subtitle.
+
+ If the lone top-level section itself contains a single
+ second-level section as its first non-comment element, that
+ section's title is promoted to the document's subtitle, and
+ that section's contents become the document's immediate
+ contents. Given this input text::
+
+ =================
+ Top-Level Title
+ =================
+
+ Second-Level Title
+ ~~~~~~~~~~~~~~~~~~
+
+ A paragraph.
+
+ After parsing and running the Section Promotion transform, the
+ result is::
+
+ <document names="top-level title">
+ <title>
+ Top-Level Title
+ <subtitle names="second-level title">
+ Second-Level Title
+ <paragraph>
+ A paragraph.
+
+ (Note that the implicit hyperlink target generated by the
+ "Second-Level Title" is preserved on the "subtitle" element
+ itself.)
+
+ Any comment elements occurring before the document title or
+ subtitle are accumulated and inserted as the first body elements
+ after the title(s).
+
+ This transform also sets the document's metadata title
+ (document['title']).
+
+ .. _reStructuredText: http://docutils.sf.net/rst.html
+ """
+
+ default_priority = 320
+
+ def set_metadata(self):
+ """
+ Set document['title'] metadata title from the following
+ sources, listed in order of priority:
+
+ * Existing document['title'] attribute.
+ * "title" setting.
+ * Document title node (as promoted by promote_title).
+ """
+ if not self.document.hasattr('title'):
+ if self.document.settings.title is not None:
+ self.document['title'] = self.document.settings.title
+ elif len(self.document) and isinstance(self.document[0], nodes.title):
+ self.document['title'] = self.document[0].astext()
+
+ def apply(self):
+ if getattr(self.document.settings, 'doctitle_xform', 1):
+ # promote_(sub)title defined in TitlePromoter base class.
+ if self.promote_title(self.document):
+ # If a title has been promoted, also try to promote a
+ # subtitle.
+ self.promote_subtitle(self.document)
+ # Set document['title'].
+ self.set_metadata()
+
+
+class SectionSubTitle(TitlePromoter):
+
+ """
+ This works like document subtitles, but for sections. For example, ::
+
+ <section>
+ <title>
+ Title
+ <section>
+ <title>
+ Subtitle
+ ...
+
+ is transformed into ::
+
+ <section>
+ <title>
+ Title
+ <subtitle>
+ Subtitle
+ ...
+
+ For details refer to the docstring of DocTitle.
+ """
+
+ default_priority = 350
+
+ def apply(self):
+ if not getattr(self.document.settings, 'sectsubtitle_xform', 1):
+ return
+ for section in self.document.traverse(nodes.section):
+ # On our way through the node tree, we are deleting
+ # sections, but we call self.promote_subtitle for those
+ # sections nonetheless. To do: Write a test case which
+ # shows the problem and discuss on Docutils-develop.
+ self.promote_subtitle(section)
+
+
+class DocInfo(Transform):
+
+ """
+ This transform is specific to the reStructuredText_ markup syntax;
+ see "Bibliographic Fields" in the `reStructuredText Markup
+ Specification`_ for a high-level description. This transform
+ should be run *after* the `DocTitle` transform.
+
+ Given a field list as the first non-comment element after the
+ document title and subtitle (if present), registered bibliographic
+ field names are transformed to the corresponding DTD elements,
+ becoming child elements of the "docinfo" element (except for a
+ dedication and/or an abstract, which become "topic" elements after
+ "docinfo").
+
+ For example, given this document fragment after parsing::
+
+ <document>
+ <title>
+ Document Title
+ <field_list>
+ <field>
+ <field_name>
+ Author
+ <field_body>
+ <paragraph>
+ A. Name
+ <field>
+ <field_name>
+ Status
+ <field_body>
+ <paragraph>
+ $RCSfile$
+ ...
+
+ After running the bibliographic field list transform, the
+ resulting document tree would look like this::
+
+ <document>
+ <title>
+ Document Title
+ <docinfo>
+ <author>
+ A. Name
+ <status>
+ frontmatter.py
+ ...
+
+ The "Status" field contained an expanded RCS keyword, which is
+ normally (but optionally) cleaned up by the transform. The sole
+ contents of the field body must be a paragraph containing an
+ expanded RCS keyword of the form "$keyword: expansion text $". Any
+ RCS keyword can be processed in any bibliographic field. The
+ dollar signs and leading RCS keyword name are removed. Extra
+ processing is done for the following RCS keywords:
+
+ - "RCSfile" expands to the name of the file in the RCS or CVS
+ repository, which is the name of the source file with a ",v"
+ suffix appended. The transform will remove the ",v" suffix.
+
+ - "Date" expands to the format "YYYY/MM/DD hh:mm:ss" (in the UTC
+ time zone). The RCS Keywords transform will extract just the
+ date itself and transform it to an ISO 8601 format date, as in
+ "2000-12-31".
+
+ (Since the source file for this text is itself stored under CVS,
+ we can't show an example of the "Date" RCS keyword because we
+ can't prevent any RCS keywords used in this explanation from
+ being expanded. Only the "RCSfile" keyword is stable; its
+ expansion text changes only if the file name changes.)
+
+ .. _reStructuredText: http://docutils.sf.net/rst.html
+ .. _reStructuredText Markup Specification:
+ http://docutils.sf.net/docs/ref/rst/restructuredtext.html
+ """
+
+ default_priority = 340
+
+ biblio_nodes = {
+ 'author': nodes.author,
+ 'authors': nodes.authors,
+ 'organization': nodes.organization,
+ 'address': nodes.address,
+ 'contact': nodes.contact,
+ 'version': nodes.version,
+ 'revision': nodes.revision,
+ 'status': nodes.status,
+ 'date': nodes.date,
+ 'copyright': nodes.copyright,
+ 'dedication': nodes.topic,
+ 'abstract': nodes.topic}
+ """Canonical field name (lowcased) to node class name mapping for
+ bibliographic fields (field_list)."""
+
+ def apply(self):
+ if not getattr(self.document.settings, 'docinfo_xform', 1):
+ return
+ document = self.document
+ index = document.first_child_not_matching_class(
+ nodes.PreBibliographic)
+ if index is None:
+ return
+ candidate = document[index]
+ if isinstance(candidate, nodes.field_list):
+ biblioindex = document.first_child_not_matching_class(
+ (nodes.Titular, nodes.Decorative))
+ nodelist = self.extract_bibliographic(candidate)
+ del document[index] # untransformed field list (candidate)
+ document[biblioindex:biblioindex] = nodelist
+
+ def extract_bibliographic(self, field_list):
+ docinfo = nodes.docinfo()
+ bibliofields = self.language.bibliographic_fields
+ labels = self.language.labels
+ topics = {'dedication': None, 'abstract': None}
+ for field in field_list:
+ try:
+ name = field[0][0].astext()
+ normedname = nodes.fully_normalize_name(name)
+ if not (len(field) == 2 and normedname in bibliofields
+ and self.check_empty_biblio_field(field, name)):
+ raise TransformError
+ canonical = bibliofields[normedname]
+ biblioclass = self.biblio_nodes[canonical]
+ if issubclass(biblioclass, nodes.TextElement):
+ if not self.check_compound_biblio_field(field, name):
+ raise TransformError
+ utils.clean_rcs_keywords(
+ field[1][0], self.rcs_keyword_substitutions)
+ docinfo.append(biblioclass('', '', *field[1][0]))
+ elif issubclass(biblioclass, nodes.authors):
+ self.extract_authors(field, name, docinfo)
+ elif issubclass(biblioclass, nodes.topic):
+ if topics[canonical]:
+ field[-1] += self.document.reporter.warning(
+ 'There can only be one "%s" field.' % name,
+ base_node=field)
+ raise TransformError
+ title = nodes.title(name, labels[canonical])
+ topics[canonical] = biblioclass(
+ '', title, classes=[canonical], *field[1].children)
+ else:
+ docinfo.append(biblioclass('', *field[1].children))
+ except TransformError:
+ if len(field[-1]) == 1 \
+ and isinstance(field[-1][0], nodes.paragraph):
+ utils.clean_rcs_keywords(
+ field[-1][0], self.rcs_keyword_substitutions)
+ docinfo.append(field)
+ nodelist = []
+ if len(docinfo) != 0:
+ nodelist.append(docinfo)
+ for name in ('dedication', 'abstract'):
+ if topics[name]:
+ nodelist.append(topics[name])
+ return nodelist
+
+ def check_empty_biblio_field(self, field, name):
+ if len(field[-1]) < 1:
+ field[-1] += self.document.reporter.warning(
+ 'Cannot extract empty bibliographic field "%s".' % name,
+ base_node=field)
+ return None
+ return 1
+
+ def check_compound_biblio_field(self, field, name):
+ if len(field[-1]) > 1:
+ field[-1] += self.document.reporter.warning(
+ 'Cannot extract compound bibliographic field "%s".' % name,
+ base_node=field)
+ return None
+ if not isinstance(field[-1][0], nodes.paragraph):
+ field[-1] += self.document.reporter.warning(
+ 'Cannot extract bibliographic field "%s" containing '
+ 'anything other than a single paragraph.' % name,
+ base_node=field)
+ return None
+ return 1
+
+ rcs_keyword_substitutions = [
+ (re.compile(r'\$' r'Date: (\d\d\d\d)[-/](\d\d)[-/](\d\d)[ T][\d:]+'
+ r'[^$]* \$', re.IGNORECASE), r'\1-\2-\3'),
+ (re.compile(r'\$' r'RCSfile: (.+),v \$', re.IGNORECASE), r'\1'),
+ (re.compile(r'\$[a-zA-Z]+: (.+) \$'), r'\1'),]
+
+ def extract_authors(self, field, name, docinfo):
+ try:
+ if len(field[1]) == 1:
+ if isinstance(field[1][0], nodes.paragraph):
+ authors = self.authors_from_one_paragraph(field)
+ elif isinstance(field[1][0], nodes.bullet_list):
+ authors = self.authors_from_bullet_list(field)
+ else:
+ raise TransformError
+ else:
+ authors = self.authors_from_paragraphs(field)
+ authornodes = [nodes.author('', '', *author)
+ for author in authors if author]
+ if len(authornodes) >= 1:
+ docinfo.append(nodes.authors('', *authornodes))
+ else:
+ raise TransformError
+ except TransformError:
+ field[-1] += self.document.reporter.warning(
+ 'Bibliographic field "%s" incompatible with extraction: '
+ 'it must contain either a single paragraph (with authors '
+ 'separated by one of "%s"), multiple paragraphs (one per '
+ 'author), or a bullet list with one paragraph (one author) '
+ 'per item.'
+ % (name, ''.join(self.language.author_separators)),
+ base_node=field)
+ raise
+
+ def authors_from_one_paragraph(self, field):
+ text = field[1][0].astext().strip()
+ if not text:
+ raise TransformError
+ for authorsep in self.language.author_separators:
+ authornames = text.split(authorsep)
+ if len(authornames) > 1:
+ break
+ authornames = [author.strip() for author in authornames]
+ authors = [[nodes.Text(author)] for author in authornames if author]
+ return authors
+
+ def authors_from_bullet_list(self, field):
+ authors = []
+ for item in field[1][0]:
+ if len(item) != 1 or not isinstance(item[0], nodes.paragraph):
+ raise TransformError
+ authors.append(item[0].children)
+ if not authors:
+ raise TransformError
+ return authors
+
+ def authors_from_paragraphs(self, field):
+ for item in field[1]:
+ if not isinstance(item, nodes.paragraph):
+ raise TransformError
+ authors = [item.children for item in field[1]]
+ return authors
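The separator-probing strategy in `DocInfo.authors_from_one_paragraph` is worth isolating: each candidate separator is tried in turn, and the first one that actually splits the text into more than one name wins. The sketch below reimplements just that loop; the default separator tuple `(';', ',')` is an assumption for this example (the real code takes the list from the active language module).

```python
def split_authors(text, separators=(';', ',')):
    # Try each separator in order; keep the first split that yields
    # more than one author, then strip whitespace and drop empties.
    authornames = [text]
    for sep in separators:
        authornames = text.split(sep)
        if len(authornames) > 1:
            break
    return [name.strip() for name in authornames if name.strip()]

print(split_authors('A. Name; B. Other'))  # ['A. Name', 'B. Other']
print(split_authors('C. Solo'))            # ['C. Solo']
```

Because `;` is tried before `,`, a paragraph like `"Name, A.; Name, B."` splits on semicolons rather than fragmenting on the commas inside each name.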
diff --git a/python/helpers/docutils/transforms/misc.py b/python/helpers/docutils/transforms/misc.py
new file mode 100644
index 0000000..cd68ee1
--- /dev/null
+++ b/python/helpers/docutils/transforms/misc.py
@@ -0,0 +1,144 @@
+# $Id: misc.py 6314 2010-04-26 10:04:17Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Miscellaneous transforms.
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes
+from docutils.transforms import Transform, TransformError
+
+
+class CallBack(Transform):
+
+ """
+ Inserts a callback into a document. The callback is called when the
+ transform is applied, which is determined by its priority.
+
+ For use with `nodes.pending` elements. Requires a ``details['callback']``
+ entry, a bound method or function which takes one parameter: the pending
+ node. Other data can be stored in the ``details`` attribute or in the
+ object hosting the callback method.
+ """
+
+ default_priority = 990
+
+ def apply(self):
+ pending = self.startnode
+ pending.details['callback'](pending)
+ pending.parent.remove(pending)
+
+
+class ClassAttribute(Transform):
+
+ """
+ Move the "class" attribute specified in the "pending" node into the
+ immediately following non-comment element.
+ """
+
+ default_priority = 210
+
+ def apply(self):
+ pending = self.startnode
+ parent = pending.parent
+ child = pending
+ while parent:
+ # Check for appropriate following siblings:
+ for index in range(parent.index(child) + 1, len(parent)):
+ element = parent[index]
+ if (isinstance(element, nodes.Invisible) or
+ isinstance(element, nodes.system_message)):
+ continue
+ element['classes'] += pending.details['class']
+ pending.parent.remove(pending)
+ return
+ else:
+ # At end of section or container; apply to sibling
+ child = parent
+ parent = parent.parent
+ error = self.document.reporter.error(
+ 'No suitable element following "%s" directive'
+ % pending.details['directive'],
+ nodes.literal_block(pending.rawsource, pending.rawsource),
+ line=pending.line)
+ pending.replace_self(error)
+
+
+class Transitions(Transform):
+
+ """
+ Move transitions at the end of sections up the tree. Complain
+ on transitions after a title, at the beginning or end of the
+ document, and after another transition.
+
+ For example, transform this::
+
+ <section>
+ ...
+ <transition>
+ <section>
+ ...
+
+ into this::
+
+ <section>
+ ...
+ <transition>
+ <section>
+ ...
+ """
+
+ default_priority = 830
+
+ def apply(self):
+ for node in self.document.traverse(nodes.transition):
+ self.visit_transition(node)
+
+ def visit_transition(self, node):
+ index = node.parent.index(node)
+ error = None
+ if (index == 0 or
+ isinstance(node.parent[0], nodes.title) and
+ (index == 1 or
+ isinstance(node.parent[1], nodes.subtitle) and
+ index == 2)):
+ assert (isinstance(node.parent, nodes.document) or
+ isinstance(node.parent, nodes.section))
+ error = self.document.reporter.error(
+ 'Document or section may not begin with a transition.',
+ source=node.source, line=node.line)
+ elif isinstance(node.parent[index - 1], nodes.transition):
+ error = self.document.reporter.error(
+ 'At least one body element must separate transitions; '
+ 'adjacent transitions are not allowed.',
+ source=node.source, line=node.line)
+ if error:
+ # Insert before node and update index.
+ node.parent.insert(index, error)
+ index += 1
+ assert index < len(node.parent)
+ if index != len(node.parent) - 1:
+ # No need to move the node.
+ return
+ # Node behind which the transition is to be moved.
+ sibling = node
+ # While sibling is the last node of its parent.
+ while index == len(sibling.parent) - 1:
+ sibling = sibling.parent
+ # If sibling is the whole document (i.e. it has no parent).
+ if sibling.parent is None:
+ # Transition at the end of document. Do not move the
+ # transition up, and place an error behind.
+ error = self.document.reporter.error(
+ 'Document may not end with a transition.',
+ line=node.line)
+ node.parent.insert(node.parent.index(node) + 1, error)
+ return
+ index = sibling.parent.index(sibling)
+ # Remove the original transition node.
+ node.parent.remove(node)
+ # Insert the transition after the sibling.
+ sibling.parent.insert(index + 1, node)
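The validity rules that `Transitions.visit_transition` enforces can be summarized apart from the tree-walking code: a transition may not open a document or section, two transitions may not be adjacent, and a transition may not end the document. The function below checks those rules over a flat list of tag names, a deliberately simplified node model used here only for illustration:

```python
def transition_errors(body):
    # Collect rule violations for a flat sequence of element tags.
    errors = []
    for i, tag in enumerate(body):
        if tag != 'transition':
            continue
        if i == 0:
            errors.append('Document or section may not begin with a transition.')
        elif body[i - 1] == 'transition':
            errors.append('At least one body element must separate transitions.')
        if i == len(body) - 1:
            errors.append('Document may not end with a transition.')
    return errors

print(transition_errors(['paragraph', 'transition', 'paragraph']))  # []
print(transition_errors(['transition', 'transition']))
```

The real transform additionally moves a section-final transition up the tree rather than rejecting it, which this sketch does not attempt to model.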
diff --git a/python/helpers/docutils/transforms/parts.py b/python/helpers/docutils/transforms/parts.py
new file mode 100644
index 0000000..ec6eda3
--- /dev/null
+++ b/python/helpers/docutils/transforms/parts.py
@@ -0,0 +1,180 @@
+# $Id: parts.py 6073 2009-08-06 12:21:10Z milde $
+# Authors: David Goodger <[email protected]>; Ueli Schlaepfer; Dmitry Jemerov
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms related to document parts.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import re
+import sys
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+class SectNum(Transform):
+
+ """
+ Automatically assigns numbers to the titles of document sections.
+
+ It is possible to limit the maximum section level for which the numbers
+ are added. For those sections that are auto-numbered, the "autonum"
+ attribute is set, informing the contents table generator that a different
+ form of the TOC should be used.
+ """
+
+ default_priority = 710
+ """Should be applied before `Contents`."""
+
+ def apply(self):
+ self.maxdepth = self.startnode.details.get('depth', None)
+ self.startvalue = self.startnode.details.get('start', 1)
+ self.prefix = self.startnode.details.get('prefix', '')
+ self.suffix = self.startnode.details.get('suffix', '')
+ self.startnode.parent.remove(self.startnode)
+ if self.document.settings.sectnum_xform:
+ if self.maxdepth is None:
+ self.maxdepth = sys.maxint
+ self.update_section_numbers(self.document)
+ else: # store details for eventual section numbering by the writer
+ self.document.settings.sectnum_depth = self.maxdepth
+ self.document.settings.sectnum_start = self.startvalue
+ self.document.settings.sectnum_prefix = self.prefix
+ self.document.settings.sectnum_suffix = self.suffix
+
+ def update_section_numbers(self, node, prefix=(), depth=0):
+ depth += 1
+ if prefix:
+ sectnum = 1
+ else:
+ sectnum = self.startvalue
+ for child in node:
+ if isinstance(child, nodes.section):
+ numbers = prefix + (str(sectnum),)
+ title = child[0]
+ # Use for spacing:
+ generated = nodes.generated(
+ '', (self.prefix + '.'.join(numbers) + self.suffix
+ + u'\u00a0' * 3),
+ classes=['sectnum'])
+ title.insert(0, generated)
+ title['auto'] = 1
+ if depth < self.maxdepth:
+ self.update_section_numbers(child, numbers, depth)
+ sectnum += 1
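The recursion above builds each dotted number by extending the parent's prefix tuple and resets the counter to 1 below the top level. A self-contained sketch of the same numbering scheme on plain dicts (`number_sections` is a hypothetical helper, not part of docutils):

```python
def number_sections(sections, prefix=(), start=1):
    """Assign dotted section numbers, mirroring SectNum's recursion.

    Each section is a dict with a 'title' and optional 'children'.
    Top-level numbering starts at `start`; nested levels restart at 1,
    just as update_section_numbers does when a prefix is present.
    """
    result = []
    num = start if not prefix else 1
    for sect in sections:
        numbers = prefix + (str(num),)
        result.append(('.'.join(numbers), sect['title']))
        result.extend(number_sections(sect.get('children', []), numbers))
        num += 1
    return result
```

With a `start` of 2, for example, the top level is numbered "2", "3", ... while subsections still begin at "2.1".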
+
+
+class Contents(Transform):
+
+ """
+ This transform generates a table of contents from the entire document tree
+ or from a single branch. It locates "section" elements and builds them
+ into a nested bullet list, which is placed within a "topic" created by the
+ contents directive. A title is either explicitly specified, taken from
+ the appropriate language module, or omitted (local table of contents).
+ The depth may be specified. Two-way references between the table of
+ contents and section titles are generated (requires Writer support).
+
+ This transform requires a startnode, which contains generation
+ options and provides the location for the generated table of contents (the
+ startnode is replaced by the table of contents "topic").
+ """
+
+ default_priority = 720
+
+ def apply(self):
+ try: # let the writer (or output software) build the contents list?
+ toc_by_writer = self.document.settings.use_latex_toc
+ except AttributeError:
+ toc_by_writer = False
+ details = self.startnode.details
+ if 'local' in details:
+ startnode = self.startnode.parent.parent
+ while not (isinstance(startnode, nodes.section)
+ or isinstance(startnode, nodes.document)):
+ # find the ToC root: a direct ancestor of startnode
+ startnode = startnode.parent
+ else:
+ startnode = self.document
+ self.toc_id = self.startnode.parent['ids'][0]
+ if 'backlinks' in details:
+ self.backlinks = details['backlinks']
+ else:
+ self.backlinks = self.document.settings.toc_backlinks
+ if toc_by_writer:
+ # move customization settings to the parent node
+ self.startnode.parent.attributes.update(details)
+ self.startnode.parent.remove(self.startnode)
+ else:
+ contents = self.build_contents(startnode)
+ if len(contents):
+ self.startnode.replace_self(contents)
+ else:
+ self.startnode.parent.parent.remove(self.startnode.parent)
+
+ def build_contents(self, node, level=0):
+ level += 1
+ sections = [sect for sect in node if isinstance(sect, nodes.section)]
+ entries = []
+ autonum = 0
+ depth = self.startnode.details.get('depth', sys.maxint)
+ for section in sections:
+ title = section[0]
+ auto = title.get('auto') # May be set by SectNum.
+ entrytext = self.copy_and_filter(title)
+ reference = nodes.reference('', '', refid=section['ids'][0],
+ *entrytext)
+ ref_id = self.document.set_id(reference)
+ entry = nodes.paragraph('', '', reference)
+ item = nodes.list_item('', entry)
+ if ( self.backlinks in ('entry', 'top')
+ and title.next_node(nodes.reference) is None):
+ if self.backlinks == 'entry':
+ title['refid'] = ref_id
+ elif self.backlinks == 'top':
+ title['refid'] = self.toc_id
+ if level < depth:
+ subsects = self.build_contents(section, level)
+ item += subsects
+ entries.append(item)
+ if entries:
+ contents = nodes.bullet_list('', *entries)
+ if auto:
+ contents['classes'].append('auto-toc')
+ return contents
+ else:
+ return []
+
+ def copy_and_filter(self, node):
+ """Return a copy of a title, with references, images, etc. removed."""
+ visitor = ContentsFilter(self.document)
+ node.walkabout(visitor)
+ return visitor.get_entry_text()
+
+
+class ContentsFilter(nodes.TreeCopyVisitor):
+
+ def get_entry_text(self):
+ return self.get_tree_copy().children
+
+ def visit_citation_reference(self, node):
+ raise nodes.SkipNode
+
+ def visit_footnote_reference(self, node):
+ raise nodes.SkipNode
+
+ def visit_image(self, node):
+ if node.hasattr('alt'):
+ self.parent.append(nodes.Text(node['alt']))
+ raise nodes.SkipNode
+
+ def ignore_node_but_process_children(self, node):
+ raise nodes.SkipDeparture
+
+ visit_interpreted = ignore_node_but_process_children
+ visit_problematic = ignore_node_but_process_children
+ visit_reference = ignore_node_but_process_children
+ visit_target = ignore_node_but_process_children
diff --git a/python/helpers/docutils/transforms/peps.py b/python/helpers/docutils/transforms/peps.py
new file mode 100644
index 0000000..b19248b
--- /dev/null
+++ b/python/helpers/docutils/transforms/peps.py
@@ -0,0 +1,304 @@
+# $Id: peps.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms for PEP processing.
+
+- `Headers`: Used to transform a PEP's initial RFC-2822 header. It remains a
+ field list, but some entries get processed.
+- `Contents`: Auto-inserts a table of contents.
+- `PEPZero`: Special processing for PEP 0.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import re
+import time
+from docutils import nodes, utils, languages
+from docutils import ApplicationError, DataError
+from docutils.transforms import Transform, TransformError
+from docutils.transforms import parts, references, misc
+
+
+class Headers(Transform):
+
+ """
+ Process fields in a PEP's initial RFC-2822 header.
+ """
+
+ default_priority = 360
+
+ pep_url = 'pep-%04d'
+ pep_cvs_url = ('http://svn.python.org/view/*checkout*'
+ '/peps/trunk/pep-%04d.txt')
+ rcs_keyword_substitutions = (
+ (re.compile(r'\$' r'RCSfile: (.+),v \$$', re.IGNORECASE), r'\1'),
+ (re.compile(r'\$[a-zA-Z]+: (.+) \$$'), r'\1'),)
+
+ def apply(self):
+ if not len(self.document):
+ # @@@ replace these DataErrors with proper system messages
+ raise DataError('Document tree is empty.')
+ header = self.document[0]
+ if not isinstance(header, nodes.field_list) or \
+ 'rfc2822' not in header['classes']:
+ raise DataError('Document does not begin with an RFC-2822 '
+ 'header; it is not a PEP.')
+ pep = None
+ for field in header:
+ if field[0].astext().lower() == 'pep': # should be the first field
+ value = field[1].astext()
+ try:
+ pep = int(value)
+ cvs_url = self.pep_cvs_url % pep
+ except ValueError:
+ pep = value
+ cvs_url = None
+ msg = self.document.reporter.warning(
+ '"PEP" header must contain an integer; "%s" is an '
+ 'invalid value.' % pep, base_node=field)
+ msgid = self.document.set_id(msg)
+ prb = nodes.problematic(value, value or '(none)',
+ refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ if len(field[1]):
+ field[1][0][:] = [prb]
+ else:
+ field[1] += nodes.paragraph('', '', prb)
+ break
+ if pep is None:
+ raise DataError('Document does not contain an RFC-2822 "PEP" '
+ 'header.')
+ if pep == 0:
+ # Special processing for PEP 0.
+ pending = nodes.pending(PEPZero)
+ self.document.insert(1, pending)
+ self.document.note_pending(pending)
+ if len(header) < 2 or header[1][0].astext().lower() != 'title':
+ raise DataError('No title!')
+ for field in header:
+ name = field[0].astext().lower()
+ body = field[1]
+ if len(body) > 1:
+ raise DataError('PEP header field body contains multiple '
+ 'elements:\n%s' % field.pformat(level=1))
+ elif len(body) == 1:
+ if not isinstance(body[0], nodes.paragraph):
+ raise DataError('PEP header field body may only contain '
+ 'a single paragraph:\n%s'
+ % field.pformat(level=1))
+ elif name == 'last-modified':
+ date = time.strftime(
+ '%d-%b-%Y',
+ time.localtime(os.stat(self.document['source'])[8]))
+ if cvs_url:
+ body += nodes.paragraph(
+ '', '', nodes.reference('', date, refuri=cvs_url))
+ else:
+ # empty
+ continue
+ para = body[0]
+ if name == 'author':
+ for node in para:
+ if isinstance(node, nodes.reference):
+ node.replace_self(mask_email(node))
+ elif name == 'discussions-to':
+ for node in para:
+ if isinstance(node, nodes.reference):
+ node.replace_self(mask_email(node, pep))
+ elif name in ('replaces', 'replaced-by', 'requires'):
+ newbody = []
+ space = nodes.Text(' ')
+ for refpep in re.split(r',?\s+', body.astext()):
+ pepno = int(refpep)
+ newbody.append(nodes.reference(
+ refpep, refpep,
+ refuri=(self.document.settings.pep_base_url
+ + self.pep_url % pepno)))
+ newbody.append(space)
+ para[:] = newbody[:-1] # drop trailing space
+ elif name == 'last-modified':
+ utils.clean_rcs_keywords(para, self.rcs_keyword_substitutions)
+ if cvs_url:
+ date = para.astext()
+ para[:] = [nodes.reference('', date, refuri=cvs_url)]
+ elif name == 'content-type':
+ pep_type = para.astext()
+ uri = self.document.settings.pep_base_url + self.pep_url % 12
+ para[:] = [nodes.reference('', pep_type, refuri=uri)]
+ elif name == 'version' and len(body):
+ utils.clean_rcs_keywords(para, self.rcs_keyword_substitutions)
+
+
+class Contents(Transform):
+
+ """
+ Insert an empty table of contents topic and a transform placeholder into
+ the document after the RFC 2822 header.
+ """
+
+ default_priority = 380
+
+ def apply(self):
+ language = languages.get_language(self.document.settings.language_code)
+ name = language.labels['contents']
+ title = nodes.title('', name)
+ topic = nodes.topic('', title, classes=['contents'])
+ name = nodes.fully_normalize_name(name)
+ if not self.document.has_name(name):
+ topic['names'].append(name)
+ self.document.note_implicit_target(topic)
+ pending = nodes.pending(parts.Contents)
+ topic += pending
+ self.document.insert(1, topic)
+ self.document.note_pending(pending)
+
+
+class TargetNotes(Transform):
+
+ """
+ Locate the "References" section, insert a placeholder for an external
+ target footnote insertion transform at the end, and schedule the
+ transform to run immediately.
+ """
+
+ default_priority = 520
+
+ def apply(self):
+ doc = self.document
+ i = len(doc) - 1
+ refsect = copyright = None
+ while i >= 0 and isinstance(doc[i], nodes.section):
+ title_words = doc[i][0].astext().lower().split()
+ if 'references' in title_words:
+ refsect = doc[i]
+ break
+ elif 'copyright' in title_words:
+ copyright = i
+ i -= 1
+ if not refsect:
+ refsect = nodes.section()
+ refsect += nodes.title('', 'References')
+ doc.set_id(refsect)
+ if copyright:
+ # Put the new "References" section before "Copyright":
+ doc.insert(copyright, refsect)
+ else:
+ # Put the new "References" section at end of doc:
+ doc.append(refsect)
+ pending = nodes.pending(references.TargetNotes)
+ refsect.append(pending)
+ self.document.note_pending(pending, 0)
+ pending = nodes.pending(misc.CallBack,
+ details={'callback': self.cleanup_callback})
+ refsect.append(pending)
+ self.document.note_pending(pending, 1)
+
+ def cleanup_callback(self, pending):
+ """
+ Remove an empty "References" section.
+
+ Called after the `references.TargetNotes` transform is complete.
+ """
+ if len(pending.parent) == 2: # <title> and <pending>
+ pending.parent.parent.remove(pending.parent)
+
+
+class PEPZero(Transform):
+
+ """
+ Special processing for PEP 0.
+ """
+
+ default_priority = 760
+
+ def apply(self):
+ visitor = PEPZeroSpecial(self.document)
+ self.document.walk(visitor)
+ self.startnode.parent.remove(self.startnode)
+
+
+class PEPZeroSpecial(nodes.SparseNodeVisitor):
+
+ """
+ Perform the special processing needed by PEP 0:
+
+ - Mask email addresses.
+
+ - Link PEP numbers in the second column of 4-column tables to the PEPs
+ themselves.
+ """
+
+ pep_url = Headers.pep_url
+
+ def unknown_visit(self, node):
+ pass
+
+ def visit_reference(self, node):
+ node.replace_self(mask_email(node))
+
+ def visit_field_list(self, node):
+ if 'rfc2822' in node['classes']:
+ raise nodes.SkipNode
+
+ def visit_tgroup(self, node):
+ self.pep_table = node['cols'] == 4
+ self.entry = 0
+
+ def visit_colspec(self, node):
+ self.entry += 1
+ if self.pep_table and self.entry == 2:
+ node['classes'].append('num')
+
+ def visit_row(self, node):
+ self.entry = 0
+
+ def visit_entry(self, node):
+ self.entry += 1
+ if self.pep_table and self.entry == 2 and len(node) == 1:
+ node['classes'].append('num')
+ p = node[0]
+ if isinstance(p, nodes.paragraph) and len(p) == 1:
+ text = p.astext()
+ try:
+ pep = int(text)
+ ref = (self.document.settings.pep_base_url
+ + self.pep_url % pep)
+ p[0] = nodes.reference(text, text, refuri=ref)
+ except ValueError:
+ pass
+
+
+non_masked_addresses = ('[email protected]',
+ '[email protected]',
+ '[email protected]')
+
+def mask_email(ref, pepno=None):
+ """
+ Mask the email address in `ref` and return a replacement node.
+
+ `ref` is returned unchanged if it contains no email address.
+
+ For email addresses such as "user@host", mask the address as "user at
+ host" (text) to thwart simple email address harvesters (except for those
+ listed in `non_masked_addresses`). If a PEP number (`pepno`) is given,
+ return a reference including a default email subject.
+ """
+ if ref.hasattr('refuri') and ref['refuri'].startswith('mailto:'):
+ if ref['refuri'][7:] in non_masked_addresses: # len('mailto:') == 7
+ replacement = ref[0]
+ else:
+ replacement_text = ref.astext().replace('@', ' at ')
+ replacement = nodes.raw('', replacement_text, format='html')
+ if pepno is None:
+ return replacement
+ else:
+ ref['refuri'] += '?subject=PEP%%20%s' % pepno
+ ref[:] = [replacement]
+ return ref
+ else:
+ return ref
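The masking rule above only touches `mailto:` references and leaves a small whitelist alone. The core substitution can be sketched as a plain function (`mask_address` and its `whitelist` parameter are illustrative stand-ins for `mask_email` and `non_masked_addresses`):

```python
def mask_address(addr, whitelist=frozenset()):
    """Mask an email address as "user at host" unless whitelisted.

    Mirrors the replacement rule used by mask_email; addresses in
    `whitelist` (the analogue of non_masked_addresses) pass through
    unchanged.
    """
    if addr in whitelist:
        return addr
    return addr.replace('@', ' at ')
```

This defeats only naive harvesters that grep for `@`, which is all the original transform aims for.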
diff --git a/python/helpers/docutils/transforms/references.py b/python/helpers/docutils/transforms/references.py
new file mode 100644
index 0000000..4d8a716
--- /dev/null
+++ b/python/helpers/docutils/transforms/references.py
@@ -0,0 +1,904 @@
+# $Id: references.py 6167 2009-10-11 14:51:42Z grubert $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms for resolving references.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import re
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+class PropagateTargets(Transform):
+
+ """
+ Propagate empty internal targets to the next element.
+
+ Given the following nodes::
+
+ <target ids="internal1" names="internal1">
+ <target anonymous="1" ids="id1">
+ <target ids="internal2" names="internal2">
+ <paragraph>
+ This is a test.
+
+ PropagateTargets propagates the ids and names of the internal
+ targets preceding the paragraph to the paragraph itself::
+
+ <target refid="internal1">
+ <target anonymous="1" refid="id1">
+ <target refid="internal2">
+ <paragraph ids="internal2 id1 internal1" names="internal2 internal1">
+ This is a test.
+ """
+
+ default_priority = 260
+
+ def apply(self):
+ for target in self.document.traverse(nodes.target):
+ # Only block-level targets without reference (like ".. target:"):
+ if (isinstance(target.parent, nodes.TextElement) or
+ (target.hasattr('refid') or target.hasattr('refuri') or
+ target.hasattr('refname'))):
+ continue
+ assert len(target) == 0, 'error: block-level target has children'
+ next_node = target.next_node(ascend=1)
+ # Do not move names and ids into Invisibles (we'd lose the
+ # attributes) or different Targetables (e.g. footnotes).
+ if (next_node is not None and
+ ((not isinstance(next_node, nodes.Invisible) and
+ not isinstance(next_node, nodes.Targetable)) or
+ isinstance(next_node, nodes.target))):
+ next_node['ids'].extend(target['ids'])
+ next_node['names'].extend(target['names'])
+ # Set defaults for next_node.expect_referenced_by_name/id.
+ if not hasattr(next_node, 'expect_referenced_by_name'):
+ next_node.expect_referenced_by_name = {}
+ if not hasattr(next_node, 'expect_referenced_by_id'):
+ next_node.expect_referenced_by_id = {}
+ for id in target['ids']:
+ # Update IDs to node mapping.
+ self.document.ids[id] = next_node
+ # If next_node is referenced by id ``id``, this
+ # target shall be marked as referenced.
+ next_node.expect_referenced_by_id[id] = target
+ for name in target['names']:
+ next_node.expect_referenced_by_name[name] = target
+ # If there are any expect_referenced_by_... attributes
+ # in target set, copy them to next_node.
+ next_node.expect_referenced_by_name.update(
+ getattr(target, 'expect_referenced_by_name', {}))
+ next_node.expect_referenced_by_id.update(
+ getattr(target, 'expect_referenced_by_id', {}))
+ # Set refid to point to the first former ID of target
+ # which is now an ID of next_node.
+ target['refid'] = target['ids'][0]
+ # Clear ids and names; they have been moved to
+ # next_node.
+ target['ids'] = []
+ target['names'] = []
+ self.document.note_refid(target)
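The effect of the transform above, stripped of tree bookkeeping, is that empty targets donate their names to the next real element and keep only a back-pointer. A flat-list sketch of that folding (dicts instead of docutils nodes; `propagate_targets` is a hypothetical helper):

```python
def propagate_targets(items):
    """Move names of empty internal targets onto the following element.

    Each item is a dict; an item with kind 'target' and no 'refuri' is
    an empty internal target. Returns a new list in which such targets
    become pure references (via 'refid') to the element that absorbed
    their names, mirroring what PropagateTargets does on the node tree.
    """
    result = []
    pending = []  # names collected from preceding empty targets
    for item in items:
        if item.get('kind') == 'target' and 'refuri' not in item:
            pending.extend(item['names'])
            result.append({'kind': 'target', 'refid': item['names'][0]})
        else:
            item = dict(item, names=pending + item.get('names', []))
            result.append(item)
            pending = []
    return result
```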
+
+
+class AnonymousHyperlinks(Transform):
+
+ """
+ Link anonymous references to targets. Given::
+
+ <paragraph>
+ <reference anonymous="1">
+ internal
+ <reference anonymous="1">
+ external
+ <target anonymous="1" ids="id1">
+ <target anonymous="1" ids="id2" refuri="http://external">
+
+ Corresponding references are linked via "refid" or resolved via "refuri"::
+
+ <paragraph>
+ <reference anonymous="1" refid="id1">
+ internal
+ <reference anonymous="1" refuri="http://external">
+ external
+ <target anonymous="1" ids="id1">
+ <target anonymous="1" ids="id2" refuri="http://external">
+ """
+
+ default_priority = 440
+
+ def apply(self):
+ anonymous_refs = []
+ anonymous_targets = []
+ for node in self.document.traverse(nodes.reference):
+ if node.get('anonymous'):
+ anonymous_refs.append(node)
+ for node in self.document.traverse(nodes.target):
+ if node.get('anonymous'):
+ anonymous_targets.append(node)
+ if len(anonymous_refs) \
+ != len(anonymous_targets):
+ msg = self.document.reporter.error(
+ 'Anonymous hyperlink mismatch: %s references but %s '
+ 'targets.\nSee "backrefs" attribute for IDs.'
+ % (len(anonymous_refs), len(anonymous_targets)))
+ msgid = self.document.set_id(msg)
+ for ref in anonymous_refs:
+ prb = nodes.problematic(
+ ref.rawsource, ref.rawsource, refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ ref.replace_self(prb)
+ return
+ for ref, target in zip(anonymous_refs, anonymous_targets):
+ target.referenced = 1
+ while 1:
+ if target.hasattr('refuri'):
+ ref['refuri'] = target['refuri']
+ ref.resolved = 1
+ break
+ else:
+ if not target['ids']:
+ # Propagated target.
+ target = self.document.ids[target['refid']]
+ continue
+ ref['refid'] = target['ids'][0]
+ self.document.note_refid(ref)
+ break
+
+
+class IndirectHyperlinks(Transform):
+
+ """
+ a) Indirect external references::
+
+ <paragraph>
+ <reference refname="indirect external">
+ indirect external
+ <target id="id1" name="direct external"
+ refuri="http://indirect">
+ <target id="id2" name="indirect external"
+ refname="direct external">
+
+ The "refuri" attribute is migrated back to all indirect targets
+ from the final direct target (i.e. a target not referring to
+ another indirect target)::
+
+ <paragraph>
+ <reference refname="indirect external">
+ indirect external
+ <target id="id1" name="direct external"
+ refuri="http://indirect">
+ <target id="id2" name="indirect external"
+ refuri="http://indirect">
+
+ Once the attribute is migrated, the preexisting "refname" attribute
+ is dropped.
+
+ b) Indirect internal references::
+
+ <target id="id1" name="final target">
+ <paragraph>
+ <reference refname="indirect internal">
+ indirect internal
+ <target id="id2" name="indirect internal 2"
+ refname="final target">
+ <target id="id3" name="indirect internal"
+ refname="indirect internal 2">
+
+ Targets which indirectly refer to an internal target become one-hop
+ indirect (their "refid" attributes are directly set to the internal
+ target's "id"). References which indirectly refer to an internal
+ target become direct internal references::
+
+ <target id="id1" name="final target">
+ <paragraph>
+ <reference refid="id1">
+ indirect internal
+ <target id="id2" name="indirect internal 2" refid="id1">
+ <target id="id3" name="indirect internal" refid="id1">
+ """
+
+ default_priority = 460
+
+ def apply(self):
+ for target in self.document.indirect_targets:
+ if not target.resolved:
+ self.resolve_indirect_target(target)
+ self.resolve_indirect_references(target)
+
+ def resolve_indirect_target(self, target):
+ refname = target.get('refname')
+ if refname is None:
+ reftarget_id = target['refid']
+ else:
+ reftarget_id = self.document.nameids.get(refname)
+ if not reftarget_id:
+ # Check the unknown_reference_resolvers
+ for resolver_function in \
+ self.document.transformer.unknown_reference_resolvers:
+ if resolver_function(target):
+ break
+ else:
+ self.nonexistent_indirect_target(target)
+ return
+ reftarget = self.document.ids[reftarget_id]
+ reftarget.note_referenced_by(id=reftarget_id)
+ if isinstance(reftarget, nodes.target) \
+ and not reftarget.resolved and reftarget.hasattr('refname'):
+ if hasattr(target, 'multiply_indirect'):
+ #and target.multiply_indirect):
+ #del target.multiply_indirect
+ self.circular_indirect_reference(target)
+ return
+ target.multiply_indirect = 1
+ self.resolve_indirect_target(reftarget) # multiply indirect
+ del target.multiply_indirect
+ if reftarget.hasattr('refuri'):
+ target['refuri'] = reftarget['refuri']
+ if 'refid' in target:
+ del target['refid']
+ elif reftarget.hasattr('refid'):
+ target['refid'] = reftarget['refid']
+ self.document.note_refid(target)
+ else:
+ if reftarget['ids']:
+ target['refid'] = reftarget_id
+ self.document.note_refid(target)
+ else:
+ self.nonexistent_indirect_target(target)
+ return
+ if refname is not None:
+ del target['refname']
+ target.resolved = 1
+
+ def nonexistent_indirect_target(self, target):
+ if target['refname'] in self.document.nameids:
+ self.indirect_target_error(target, 'which is a duplicate, and '
+ 'cannot be used as a unique reference')
+ else:
+ self.indirect_target_error(target, 'which does not exist')
+
+ def circular_indirect_reference(self, target):
+ self.indirect_target_error(target, 'forming a circular reference')
+
+ def indirect_target_error(self, target, explanation):
+ naming = ''
+ reflist = []
+ if target['names']:
+ naming = '"%s" ' % target['names'][0]
+ for name in target['names']:
+ reflist.extend(self.document.refnames.get(name, []))
+ for id in target['ids']:
+ reflist.extend(self.document.refids.get(id, []))
+ naming += '(id="%s")' % target['ids'][0]
+ msg = self.document.reporter.error(
+ 'Indirect hyperlink target %s refers to target "%s", %s.'
+ % (naming, target['refname'], explanation), base_node=target)
+ msgid = self.document.set_id(msg)
+ for ref in utils.uniq(reflist):
+ prb = nodes.problematic(
+ ref.rawsource, ref.rawsource, refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ ref.replace_self(prb)
+ target.resolved = 1
+
+ def resolve_indirect_references(self, target):
+ if target.hasattr('refid'):
+ attname = 'refid'
+ call_method = self.document.note_refid
+ elif target.hasattr('refuri'):
+ attname = 'refuri'
+ call_method = None
+ else:
+ return
+ attval = target[attname]
+ for name in target['names']:
+ reflist = self.document.refnames.get(name, [])
+ if reflist:
+ target.note_referenced_by(name=name)
+ for ref in reflist:
+ if ref.resolved:
+ continue
+ del ref['refname']
+ ref[attname] = attval
+ if call_method:
+ call_method(ref)
+ ref.resolved = 1
+ if isinstance(ref, nodes.target):
+ self.resolve_indirect_references(ref)
+ for id in target['ids']:
+ reflist = self.document.refids.get(id, [])
+ if reflist:
+ target.note_referenced_by(id=id)
+ for ref in reflist:
+ if ref.resolved:
+ continue
+ del ref['refid']
+ ref[attname] = attval
+ if call_method:
+ call_method(ref)
+ ref.resolved = 1
+ if isinstance(ref, nodes.target):
+ self.resolve_indirect_references(ref)
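Resolution of indirect targets amounts to following a chain of refnames until a direct target is found, with circularity detection along the way. A minimal sketch of that chain walk (a dict of name-to-name/URI links stands in for the document's target graph; `resolve_chain` is hypothetical):

```python
def resolve_chain(name, targets):
    """Follow refname links through `targets` until a final URI remains.

    `targets` maps a target name to either another target name or a
    URI; a name seen twice means a circular indirect reference, which
    IndirectHyperlinks reports as an error.
    """
    seen = set()
    while name in targets:
        if name in seen:
            raise ValueError('circular indirect reference via %r' % name)
        seen.add(name)
        name = targets[name]
    return name
```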
+
+
+class ExternalTargets(Transform):
+
+ """
+ Given::
+
+ <paragraph>
+ <reference refname="direct external">
+ direct external
+ <target id="id1" name="direct external" refuri="http://direct">
+
+ The "refname" attribute is replaced by the direct "refuri" attribute::
+
+ <paragraph>
+ <reference refuri="http://direct">
+ direct external
+ <target id="id1" name="direct external" refuri="http://direct">
+ """
+
+ default_priority = 640
+
+ def apply(self):
+ for target in self.document.traverse(nodes.target):
+ if target.hasattr('refuri'):
+ refuri = target['refuri']
+ for name in target['names']:
+ reflist = self.document.refnames.get(name, [])
+ if reflist:
+ target.note_referenced_by(name=name)
+ for ref in reflist:
+ if ref.resolved:
+ continue
+ del ref['refname']
+ ref['refuri'] = refuri
+ ref.resolved = 1
+
+
+class InternalTargets(Transform):
+
+ default_priority = 660
+
+ def apply(self):
+ for target in self.document.traverse(nodes.target):
+ if not target.hasattr('refuri') and not target.hasattr('refid'):
+ self.resolve_reference_ids(target)
+
+ def resolve_reference_ids(self, target):
+ """
+ Given::
+
+ <paragraph>
+ <reference refname="direct internal">
+ direct internal
+ <target id="id1" name="direct internal">
+
+ The "refname" attribute is replaced by "refid" linking to the target's
+ "id"::
+
+ <paragraph>
+ <reference refid="id1">
+ direct internal
+ <target id="id1" name="direct internal">
+ """
+ for name in target['names']:
+ refid = self.document.nameids[name]
+ reflist = self.document.refnames.get(name, [])
+ if reflist:
+ target.note_referenced_by(name=name)
+ for ref in reflist:
+ if ref.resolved:
+ continue
+ del ref['refname']
+ ref['refid'] = refid
+ ref.resolved = 1
+
+
+class Footnotes(Transform):
+
+ """
+ Assign numbers to autonumbered footnotes, and resolve links to footnotes,
+ citations, and their references.
+
+ Given the following ``document`` as input::
+
+ <document>
+ <paragraph>
+ A labeled autonumbered footnote reference:
+ <footnote_reference auto="1" id="id1" refname="footnote">
+ <paragraph>
+ An unlabeled autonumbered footnote reference:
+ <footnote_reference auto="1" id="id2">
+ <footnote auto="1" id="id3">
+ <paragraph>
+ Unlabeled autonumbered footnote.
+ <footnote auto="1" id="footnote" name="footnote">
+ <paragraph>
+ Labeled autonumbered footnote.
+
+ Auto-numbered footnotes have attribute ``auto="1"`` and no label.
+ Auto-numbered footnote_references have no reference text (they're
+ empty elements). When resolving the numbering, a ``label`` element
+ is added to the beginning of the ``footnote``, and reference text
+ to the ``footnote_reference``.
+
+ The transformed result will be::
+
+ <document>
+ <paragraph>
+ A labeled autonumbered footnote reference:
+ <footnote_reference auto="1" id="id1" refid="footnote">
+ 2
+ <paragraph>
+ An unlabeled autonumbered footnote reference:
+ <footnote_reference auto="1" id="id2" refid="id3">
+ 1
+ <footnote auto="1" id="id3" backrefs="id2">
+ <label>
+ 1
+ <paragraph>
+ Unlabeled autonumbered footnote.
+ <footnote auto="1" id="footnote" name="footnote" backrefs="id1">
+ <label>
+ 2
+ <paragraph>
+ Labeled autonumbered footnote.
+
+ Note that the footnotes are not in the same order as the references.
+
+ The labels and reference text are added to the auto-numbered ``footnote``
+ and ``footnote_reference`` elements. Footnote elements are backlinked to
+ their references via "refids" attributes. References are assigned "id"
+ and "refid" attributes.
+
+ After adding labels and reference text, the "auto" attributes can be
+ ignored.
+ """
+
+ default_priority = 620
+
+ autofootnote_labels = None
+ """Keep track of unlabeled autonumbered footnotes."""
+
+ symbols = [
+ # Entries 1-4 and 6 below are from section 12.51 of
+ # The Chicago Manual of Style, 14th edition.
+ '*', # asterisk/star
+ u'\u2020', # dagger †
+ u'\u2021', # double dagger ‡
+ u'\u00A7', # section mark §
+ u'\u00B6', # paragraph mark (pilcrow) ¶
+ # (parallels ['||'] in CMoS)
+ '#', # number sign
+ # The entries below were chosen arbitrarily.
+ u'\u2660', # spade suit ♠
+ u'\u2665', # heart suit ♥
+ u'\u2666', # diamond suit ♦
+ u'\u2663', # club suit ♣
+ ]
+
+ def apply(self):
+ self.autofootnote_labels = []
+ startnum = self.document.autofootnote_start
+ self.document.autofootnote_start = self.number_footnotes(startnum)
+ self.number_footnote_references(startnum)
+ self.symbolize_footnotes()
+ self.resolve_footnotes_and_citations()
+
+ def number_footnotes(self, startnum):
+ """
+ Assign numbers to autonumbered footnotes.
+
+ For labeled autonumbered footnotes, copy the number over to
+ corresponding footnote references.
+ """
+ for footnote in self.document.autofootnotes:
+ while 1:
+ label = str(startnum)
+ startnum += 1
+ if label not in self.document.nameids:
+ break
+ footnote.insert(0, nodes.label('', label))
+ for name in footnote['names']:
+ for ref in self.document.footnote_refs.get(name, []):
+ ref += nodes.Text(label)
+ ref.delattr('refname')
+ assert len(footnote['ids']) == len(ref['ids']) == 1
+ ref['refid'] = footnote['ids'][0]
+ footnote.add_backref(ref['ids'][0])
+ self.document.note_refid(ref)
+ ref.resolved = 1
+ if not footnote['names'] and not footnote['dupnames']:
+ footnote['names'].append(label)
+ self.document.note_explicit_target(footnote, footnote)
+ self.autofootnote_labels.append(label)
+ return startnum
+
+ def number_footnote_references(self, startnum):
+ """Assign numbers to autonumbered footnote references."""
+ i = 0
+ for ref in self.document.autofootnote_refs:
+ if ref.resolved or ref.hasattr('refid'):
+ continue
+ try:
+ label = self.autofootnote_labels[i]
+ except IndexError:
+ msg = self.document.reporter.error(
+ 'Too many autonumbered footnote references: only %s '
+ 'corresponding footnotes available.'
+ % len(self.autofootnote_labels), base_node=ref)
+ msgid = self.document.set_id(msg)
+ for ref in self.document.autofootnote_refs[i:]:
+ if ref.resolved or ref.hasattr('refname'):
+ continue
+ prb = nodes.problematic(
+ ref.rawsource, ref.rawsource, refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ ref.replace_self(prb)
+ break
+ ref += nodes.Text(label)
+ id = self.document.nameids[label]
+ footnote = self.document.ids[id]
+ ref['refid'] = id
+ self.document.note_refid(ref)
+ assert len(ref['ids']) == 1
+ footnote.add_backref(ref['ids'][0])
+ ref.resolved = 1
+ i += 1
+
+ def symbolize_footnotes(self):
+ """Add symbols indexes to "[*]"-style footnotes and references."""
+ labels = []
+ for footnote in self.document.symbol_footnotes:
+ reps, index = divmod(self.document.symbol_footnote_start,
+ len(self.symbols))
+ labeltext = self.symbols[index] * (reps + 1)
+ labels.append(labeltext)
+ footnote.insert(0, nodes.label('', labeltext))
+ self.document.symbol_footnote_start += 1
+ self.document.set_id(footnote)
+ i = 0
+ for ref in self.document.symbol_footnote_refs:
+ try:
+ ref += nodes.Text(labels[i])
+ except IndexError:
+ msg = self.document.reporter.error(
+ 'Too many symbol footnote references: only %s '
+ 'corresponding footnotes available.' % len(labels),
+ base_node=ref)
+ msgid = self.document.set_id(msg)
+ for ref in self.document.symbol_footnote_refs[i:]:
+ if ref.resolved or ref.hasattr('refid'):
+ continue
+ prb = nodes.problematic(
+ ref.rawsource, ref.rawsource, refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ ref.replace_self(prb)
+ break
+ footnote = self.document.symbol_footnotes[i]
+ assert len(footnote['ids']) == 1
+ ref['refid'] = footnote['ids'][0]
+ self.document.note_refid(ref)
+ footnote.add_backref(ref['ids'][0])
+ i += 1
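The `divmod` at the top of the method cycles through the symbol list and doubles, triples, etc. the symbol once the list is exhausted. That labeling rule, isolated (the `SYMBOLS` constant below copies the class's `symbols` list; `symbol_label` is a hypothetical helper):

```python
SYMBOLS = ['*', '\u2020', '\u2021', '\u00A7', '\u00B6', '#',
           '\u2660', '\u2665', '\u2666', '\u2663']


def symbol_label(n, symbols=SYMBOLS):
    """Return the label for the n-th (0-based) symbol footnote.

    After one pass through the list the symbols repeat doubled
    ('**', then '***', ...), exactly as in symbolize_footnotes.
    """
    reps, index = divmod(n, len(symbols))
    return symbols[index] * (reps + 1)
```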
+
+ def resolve_footnotes_and_citations(self):
+ """
+ Link manually-labeled footnotes and citations to/from their
+ references.
+ """
+ for footnote in self.document.footnotes:
+ for label in footnote['names']:
+ if label in self.document.footnote_refs:
+ reflist = self.document.footnote_refs[label]
+ self.resolve_references(footnote, reflist)
+ for citation in self.document.citations:
+ for label in citation['names']:
+ if label in self.document.citation_refs:
+ reflist = self.document.citation_refs[label]
+ self.resolve_references(citation, reflist)
+
+ def resolve_references(self, note, reflist):
+ assert len(note['ids']) == 1
+ id = note['ids'][0]
+ for ref in reflist:
+ if ref.resolved:
+ continue
+ ref.delattr('refname')
+ ref['refid'] = id
+ assert len(ref['ids']) == 1
+ note.add_backref(ref['ids'][0])
+ ref.resolved = 1
+ note.resolved = 1
+
+
+class CircularSubstitutionDefinitionError(Exception): pass
+
+
+class Substitutions(Transform):
+
+ """
+ Given the following ``document`` as input::
+
+ <document>
+ <paragraph>
+ The
+ <substitution_reference refname="biohazard">
+ biohazard
+ symbol is deservedly scary-looking.
+ <substitution_definition name="biohazard">
+ <image alt="biohazard" uri="biohazard.png">
+
+ The ``substitution_reference`` will simply be replaced by the
+ contents of the corresponding ``substitution_definition``.
+
+ The transformed result will be::
+
+ <document>
+ <paragraph>
+ The
+ <image alt="biohazard" uri="biohazard.png">
+ symbol is deservedly scary-looking.
+ <substitution_definition name="biohazard">
+ <image alt="biohazard" uri="biohazard.png">
+ """
+
+ default_priority = 220
+ """The Substitutions transform has to be applied very early, before
+ `docutils.tranforms.frontmatter.DocTitle` and others."""
+
+ def apply(self):
+ defs = self.document.substitution_defs
+ normed = self.document.substitution_names
+ subreflist = self.document.traverse(nodes.substitution_reference)
+ nested = {}
+ for ref in subreflist:
+ refname = ref['refname']
+ key = None
+ if refname in defs:
+ key = refname
+ else:
+ normed_name = refname.lower()
+ if normed_name in normed:
+ key = normed[normed_name]
+ if key is None:
+ msg = self.document.reporter.error(
+ 'Undefined substitution referenced: "%s".'
+ % refname, base_node=ref)
+ msgid = self.document.set_id(msg)
+ prb = nodes.problematic(
+ ref.rawsource, ref.rawsource, refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ ref.replace_self(prb)
+ else:
+ subdef = defs[key]
+ parent = ref.parent
+ index = parent.index(ref)
+ if ('ltrim' in subdef.attributes
+ or 'trim' in subdef.attributes):
+ if index > 0 and isinstance(parent[index - 1],
+ nodes.Text):
+ parent.replace(parent[index - 1],
+ parent[index - 1].rstrip())
+ if ('rtrim' in subdef.attributes
+ or 'trim' in subdef.attributes):
+ if (len(parent) > index + 1
+ and isinstance(parent[index + 1], nodes.Text)):
+ parent.replace(parent[index + 1],
+ parent[index + 1].lstrip())
+ subdef_copy = subdef.deepcopy()
+ try:
+ # Take care of nested substitution references:
+ for nested_ref in subdef_copy.traverse(
+ nodes.substitution_reference):
+ nested_name = normed[nested_ref['refname'].lower()]
+ if nested_name in nested.setdefault(nested_name, []):
+ raise CircularSubstitutionDefinitionError
+ else:
+ nested[nested_name].append(key)
+ subreflist.append(nested_ref)
+ except CircularSubstitutionDefinitionError:
+ parent = ref.parent
+ if isinstance(parent, nodes.substitution_definition):
+ msg = self.document.reporter.error(
+ 'Circular substitution definition detected:',
+ nodes.literal_block(parent.rawsource,
+ parent.rawsource),
+ line=parent.line, base_node=parent)
+ parent.replace_self(msg)
+ else:
+ msg = self.document.reporter.error(
+ 'Circular substitution definition referenced: "%s".'
+ % refname, base_node=ref)
+ msgid = self.document.set_id(msg)
+ prb = nodes.problematic(
+ ref.rawsource, ref.rawsource, refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ ref.replace_self(prb)
+ else:
+ ref.replace_self(subdef_copy.children)
+                    # register refname of the replacement node(s)
+ # (needed for resolution of references)
+ for node in subdef_copy.children:
+ if isinstance(node, nodes.Referential):
+ # HACK: verify refname attribute exists.
+ # Test with docs/dev/todo.txt, see. |donate|
+ if 'refname' in node:
+ self.document.note_refname(node)
+
+
+class TargetNotes(Transform):
+
+ """
+ Creates a footnote for each external target in the text, and corresponding
+ footnote references after each reference.
+ """
+
+ default_priority = 540
+ """The TargetNotes transform has to be applied after `IndirectHyperlinks`
+ but before `Footnotes`."""
+
+ def __init__(self, document, startnode):
+ Transform.__init__(self, document, startnode=startnode)
+
+ self.classes = startnode.details.get('class', [])
+
+ def apply(self):
+ notes = {}
+ nodelist = []
+ for target in self.document.traverse(nodes.target):
+ # Only external targets.
+ if not target.hasattr('refuri'):
+ continue
+ names = target['names']
+ refs = []
+ for name in names:
+ refs.extend(self.document.refnames.get(name, []))
+ if not refs:
+ continue
+ footnote = self.make_target_footnote(target['refuri'], refs,
+ notes)
+ if target['refuri'] not in notes:
+ notes[target['refuri']] = footnote
+ nodelist.append(footnote)
+ # Take care of anonymous references.
+ for ref in self.document.traverse(nodes.reference):
+ if not ref.get('anonymous'):
+ continue
+ if ref.hasattr('refuri'):
+ footnote = self.make_target_footnote(ref['refuri'], [ref],
+ notes)
+ if ref['refuri'] not in notes:
+ notes[ref['refuri']] = footnote
+ nodelist.append(footnote)
+ self.startnode.replace_self(nodelist)
+
+ def make_target_footnote(self, refuri, refs, notes):
+ if refuri in notes: # duplicate?
+ footnote = notes[refuri]
+ assert len(footnote['names']) == 1
+ footnote_name = footnote['names'][0]
+ else: # original
+ footnote = nodes.footnote()
+ footnote_id = self.document.set_id(footnote)
+ # Use uppercase letters and a colon; they can't be
+ # produced inside names by the parser.
+ footnote_name = 'TARGET_NOTE: ' + footnote_id
+ footnote['auto'] = 1
+ footnote['names'] = [footnote_name]
+ footnote_paragraph = nodes.paragraph()
+ footnote_paragraph += nodes.reference('', refuri, refuri=refuri)
+ footnote += footnote_paragraph
+ self.document.note_autofootnote(footnote)
+ self.document.note_explicit_target(footnote, footnote)
+ for ref in refs:
+ if isinstance(ref, nodes.target):
+ continue
+ refnode = nodes.footnote_reference(
+ refname=footnote_name, auto=1)
+ refnode['classes'] += self.classes
+ self.document.note_autofootnote_ref(refnode)
+ self.document.note_footnote_ref(refnode)
+ index = ref.parent.index(ref) + 1
+ reflist = [refnode]
+ if not utils.get_trim_footnote_ref_space(self.document.settings):
+ if self.classes:
+                    reflist.insert(0, nodes.inline(text=' ',
+                                                   classes=self.classes))
+ else:
+ reflist.insert(0, nodes.Text(' '))
+ ref.parent.insert(index, reflist)
+ return footnote
+
+
+class DanglingReferences(Transform):
+
+ """
+ Check for dangling references (incl. footnote & citation) and for
+ unreferenced targets.
+ """
+
+ default_priority = 850
+
+ def apply(self):
+ visitor = DanglingReferencesVisitor(
+ self.document,
+ self.document.transformer.unknown_reference_resolvers)
+ self.document.walk(visitor)
+ # *After* resolving all references, check for unreferenced
+ # targets:
+ for target in self.document.traverse(nodes.target):
+ if not target.referenced:
+ if target.get('anonymous'):
+ # If we have unreferenced anonymous targets, there
+ # is already an error message about anonymous
+ # hyperlink mismatch; no need to generate another
+ # message.
+ continue
+ if target['names']:
+ naming = target['names'][0]
+ elif target['ids']:
+ naming = target['ids'][0]
+ else:
+ # Hack: Propagated targets always have their refid
+ # attribute set.
+ naming = target['refid']
+ self.document.reporter.info(
+ 'Hyperlink target "%s" is not referenced.'
+ % naming, base_node=target)
+
+
+class DanglingReferencesVisitor(nodes.SparseNodeVisitor):
+
+ def __init__(self, document, unknown_reference_resolvers):
+ nodes.SparseNodeVisitor.__init__(self, document)
+ self.document = document
+ self.unknown_reference_resolvers = unknown_reference_resolvers
+
+ def unknown_visit(self, node):
+ pass
+
+ def visit_reference(self, node):
+ if node.resolved or not node.hasattr('refname'):
+ return
+ refname = node['refname']
+ id = self.document.nameids.get(refname)
+ if id is None:
+ for resolver_function in self.unknown_reference_resolvers:
+ if resolver_function(node):
+ break
+ else:
+ if refname in self.document.nameids:
+ msg = self.document.reporter.error(
+ 'Duplicate target name, cannot be used as a unique '
+ 'reference: "%s".' % (node['refname']), base_node=node)
+ else:
+ msg = self.document.reporter.error(
+ 'Unknown target name: "%s".' % (node['refname']),
+ base_node=node)
+ msgid = self.document.set_id(msg)
+ prb = nodes.problematic(
+ node.rawsource, node.rawsource, refid=msgid)
+ prbid = self.document.set_id(prb)
+ msg.add_backref(prbid)
+ node.replace_self(prb)
+ else:
+ del node['refname']
+ node['refid'] = id
+ self.document.ids[id].note_referenced_by(id=id)
+ node.resolved = 1
+
+ visit_footnote_reference = visit_citation_reference = visit_reference
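The `symbolize_footnotes` transform above derives each symbol label with `divmod`: once the symbol list is exhausted, the symbols repeat doubled, tripled, and so on ("**", "++", ...). A minimal standalone sketch of that labeling rule (the three-symbol list here is a placeholder for illustration, not docutils' actual symbol sequence):

```python
def symbol_label(index, symbols="*+#"):
    """Return the label for the index-th symbol footnote.

    Mirrors the divmod computation in Footnotes.symbolize_footnotes:
    after the symbol list is exhausted, symbols are repeated.
    """
    reps, i = divmod(index, len(symbols))
    return symbols[i] * (reps + 1)
```

With this placeholder list, indexes 0 through 4 yield "*", "+", "#", "**", "++".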
diff --git a/python/helpers/docutils/transforms/universal.py b/python/helpers/docutils/transforms/universal.py
new file mode 100644
index 0000000..0e8f2c7
--- /dev/null
+++ b/python/helpers/docutils/transforms/universal.py
@@ -0,0 +1,203 @@
+# $Id: universal.py 6112 2009-09-03 07:27:59Z milde $
+# Authors: David Goodger <[email protected]>; Ueli Schlaepfer
+# Copyright: This module has been placed in the public domain.
+
+"""
+Transforms needed by most or all documents:
+
+- `Decorations`: Generate a document's header & footer.
+- `Messages`: Placement of system messages stored in
+ `nodes.document.transform_messages`.
+- `TestMessages`: Like `Messages`, used on test runs.
+- `FinalReferences`: Resolve remaining references.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import re
+import sys
+import time
+from docutils import nodes, utils
+from docutils.transforms import TransformError, Transform
+
+
+class Decorations(Transform):
+
+ """
+ Populate a document's decoration element (header, footer).
+ """
+
+ default_priority = 820
+
+ def apply(self):
+ header_nodes = self.generate_header()
+ if header_nodes:
+ decoration = self.document.get_decoration()
+ header = decoration.get_header()
+ header.extend(header_nodes)
+ footer_nodes = self.generate_footer()
+ if footer_nodes:
+ decoration = self.document.get_decoration()
+ footer = decoration.get_footer()
+ footer.extend(footer_nodes)
+
+ def generate_header(self):
+ return None
+
+ def generate_footer(self):
+ # @@@ Text is hard-coded for now.
+ # Should be made dynamic (language-dependent).
+ settings = self.document.settings
+ if settings.generator or settings.datestamp or settings.source_link \
+ or settings.source_url:
+ text = []
+ if settings.source_link and settings._source \
+ or settings.source_url:
+ if settings.source_url:
+ source = settings.source_url
+ else:
+ source = utils.relative_path(settings._destination,
+ settings._source)
+ text.extend([
+ nodes.reference('', 'View document source',
+ refuri=source),
+ nodes.Text('.\n')])
+ if settings.datestamp:
+ datestamp = time.strftime(settings.datestamp, time.gmtime())
+ text.append(nodes.Text('Generated on: ' + datestamp + '.\n'))
+ if settings.generator:
+ text.extend([
+ nodes.Text('Generated by '),
+ nodes.reference('', 'Docutils', refuri=
+ 'http://docutils.sourceforge.net/'),
+ nodes.Text(' from '),
+ nodes.reference('', 'reStructuredText', refuri='http://'
+ 'docutils.sourceforge.net/rst.html'),
+ nodes.Text(' source.\n')])
+ return [nodes.paragraph('', '', *text)]
+ else:
+ return None
+
+
+class ExposeInternals(Transform):
+
+ """
+ Expose internal attributes if ``expose_internals`` setting is set.
+ """
+
+ default_priority = 840
+
+ def not_Text(self, node):
+ return not isinstance(node, nodes.Text)
+
+ def apply(self):
+ if self.document.settings.expose_internals:
+ for node in self.document.traverse(self.not_Text):
+ for att in self.document.settings.expose_internals:
+ value = getattr(node, att, None)
+ if value is not None:
+ node['internal:' + att] = value
+
+
+class Messages(Transform):
+
+ """
+ Place any system messages generated after parsing into a dedicated section
+ of the document.
+ """
+
+ default_priority = 860
+
+ def apply(self):
+ unfiltered = self.document.transform_messages
+ threshold = self.document.reporter.report_level
+ messages = []
+ for msg in unfiltered:
+ if msg['level'] >= threshold and not msg.parent:
+ messages.append(msg)
+ if messages:
+ section = nodes.section(classes=['system-messages'])
+ # @@@ get this from the language module?
+ section += nodes.title('', 'Docutils System Messages')
+ section += messages
+ self.document.transform_messages[:] = []
+ self.document += section
+
+
+class FilterMessages(Transform):
+
+ """
+ Remove system messages below verbosity threshold.
+ """
+
+ default_priority = 870
+
+ def apply(self):
+ for node in self.document.traverse(nodes.system_message):
+ if node['level'] < self.document.reporter.report_level:
+ node.parent.remove(node)
+
+
+class TestMessages(Transform):
+
+ """
+ Append all post-parse system messages to the end of the document.
+
+ Used for testing purposes.
+ """
+
+ default_priority = 880
+
+ def apply(self):
+ for msg in self.document.transform_messages:
+ if not msg.parent:
+ self.document += msg
+
+
+class StripComments(Transform):
+
+ """
+ Remove comment elements from the document tree (only if the
+ ``strip_comments`` setting is enabled).
+ """
+
+ default_priority = 740
+
+ def apply(self):
+ if self.document.settings.strip_comments:
+ for node in self.document.traverse(nodes.comment):
+ node.parent.remove(node)
+
+
+class StripClassesAndElements(Transform):
+
+ """
+ Remove from the document tree all elements with classes in
+ `self.document.settings.strip_elements_with_classes` and all "classes"
+ attribute values in `self.document.settings.strip_classes`.
+ """
+
+ default_priority = 420
+
+ def apply(self):
+ if not (self.document.settings.strip_elements_with_classes
+ or self.document.settings.strip_classes):
+ return
+ # prepare dicts for lookup (not sets, for Python 2.2 compatibility):
+ self.strip_elements = dict(
+ [(key, None)
+ for key in (self.document.settings.strip_elements_with_classes
+ or [])])
+ self.strip_classes = dict(
+ [(key, None) for key in (self.document.settings.strip_classes
+ or [])])
+ for node in self.document.traverse(self.check_classes):
+ node.parent.remove(node)
+
+ def check_classes(self, node):
+ if isinstance(node, nodes.Element):
+ for class_value in node['classes'][:]:
+ if class_value in self.strip_classes:
+ node['classes'].remove(class_value)
+ if class_value in self.strip_elements:
+ return 1
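The `Messages` and `FilterMessages` transforms above apply the same threshold test from opposite directions: parentless messages at or above `report_level` are gathered into a system-messages section, while those below it are removed. A rough sketch of the selection rule, using plain dicts in place of `nodes.system_message` (a hypothetical structure, for illustration only):

```python
def split_by_threshold(messages, report_level):
    """Partition messages the way Messages/FilterMessages would.

    Returns (kept, dropped): parentless messages at or above the
    threshold are kept for the system-messages section; messages
    below the threshold are candidates for removal.
    """
    kept = [m for m in messages
            if m["level"] >= report_level and m.get("parent") is None]
    dropped = [m for m in messages if m["level"] < report_level]
    return kept, dropped
```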
diff --git a/python/helpers/docutils/transforms/writer_aux.py b/python/helpers/docutils/transforms/writer_aux.py
new file mode 100644
index 0000000..8045703
--- /dev/null
+++ b/python/helpers/docutils/transforms/writer_aux.py
@@ -0,0 +1,88 @@
+# $Id: writer_aux.py 5174 2007-05-31 00:01:52Z wiemann $
+# Author: Lea Wiemann <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Auxiliary transforms mainly to be used by Writer components.
+
+This module is called "writer_aux" because otherwise there would be
+conflicting imports like this one::
+
+ from docutils import writers
+ from docutils.transforms import writers
+"""
+
+__docformat__ = 'reStructuredText'
+
+from docutils import nodes, utils, languages
+from docutils.transforms import Transform
+
+
+class Compound(Transform):
+
+ """
+ Flatten all compound paragraphs. For example, transform ::
+
+ <compound>
+ <paragraph>
+ <literal_block>
+ <paragraph>
+
+ into ::
+
+ <paragraph>
+ <literal_block classes="continued">
+ <paragraph classes="continued">
+ """
+
+ default_priority = 910
+
+ def apply(self):
+ for compound in self.document.traverse(nodes.compound):
+ first_child = 1
+ for child in compound:
+ if first_child:
+ if not isinstance(child, nodes.Invisible):
+ first_child = 0
+ else:
+ child['classes'].append('continued')
+ # Substitute children for compound.
+ compound.replace_self(compound[:])
+
+
+class Admonitions(Transform):
+
+ """
+    Transform specific admonitions, like this::
+
+ <note>
+ <paragraph>
+ Note contents ...
+
+ into generic admonitions, like this::
+
+ <admonition classes="note">
+ <title>
+ Note
+ <paragraph>
+ Note contents ...
+
+ The admonition title is localized.
+ """
+
+ default_priority = 920
+
+ def apply(self):
+ lcode = self.document.settings.language_code
+ language = languages.get_language(lcode)
+ for node in self.document.traverse(nodes.Admonition):
+ node_name = node.__class__.__name__
+ # Set class, so that we know what node this admonition came from.
+ node['classes'].append(node_name)
+ if not isinstance(node, nodes.admonition):
+ # Specific admonition. Transform into a generic admonition.
+ admonition = nodes.admonition(node.rawsource, *node.children,
+ **node.attributes)
+ title = nodes.title('', language.labels[node_name])
+ admonition.insert(0, title)
+ node.replace_self(admonition)
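The `Compound` transform above tags every child after the first visible one with a `continued` class before flattening. The first-child flag logic can be sketched independently of the node types (each child is represented here by a boolean saying whether it is invisible, an assumption made purely for the sketch):

```python
def continued_flags(invisible_children):
    """Return, per child, whether Compound.apply would tag it "continued".

    Leading invisible children leave the first-child flag set, so the
    first *visible* child escapes the tag; every later child gets one.
    """
    flags = []
    first_child = True
    for invisible in invisible_children:
        if first_child:
            if not invisible:
                first_child = False
            flags.append(False)
        else:
            flags.append(True)
    return flags
```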
diff --git a/python/helpers/docutils/urischemes.py b/python/helpers/docutils/urischemes.py
new file mode 100644
index 0000000..cf0d1cc
--- /dev/null
+++ b/python/helpers/docutils/urischemes.py
@@ -0,0 +1,136 @@
+# $Id: urischemes.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+`schemes` is a dictionary with lowercase URI addressing schemes as
+keys and descriptions as values. It was compiled from the index at
+http://www.iana.org/assignments/uri-schemes (revised 2005-11-28)
+and an older list at http://www.w3.org/Addressing/schemes.html.
+"""
+
+# Many values are blank and should be filled in with useful descriptions.
+
+schemes = {
+ 'about': 'provides information on Navigator',
+ 'acap': 'Application Configuration Access Protocol; RFC 2244',
+ 'addbook': "To add vCard entries to Communicator's Address Book",
+ 'afp': 'Apple Filing Protocol',
+ 'afs': 'Andrew File System global file names',
+ 'aim': 'AOL Instant Messenger',
+ 'callto': 'for NetMeeting links',
+ 'castanet': 'Castanet Tuner URLs for Netcaster',
+ 'chttp': 'cached HTTP supported by RealPlayer',
+ 'cid': 'content identifier; RFC 2392',
+ 'crid': 'TV-Anytime Content Reference Identifier; RFC 4078',
+ 'data': ('allows inclusion of small data items as "immediate" data; '
+ 'RFC 2397'),
+ 'dav': 'Distributed Authoring and Versioning Protocol; RFC 2518',
+ 'dict': 'dictionary service protocol; RFC 2229',
+ 'dns': 'Domain Name System resources',
+ 'eid': ('External ID; non-URL data; general escape mechanism to allow '
+ 'access to information for applications that are too '
+ 'specialized to justify their own schemes'),
+ 'fax': ('a connection to a terminal that can handle telefaxes '
+ '(facsimiles); RFC 2806'),
+ 'feed' : 'NetNewsWire feed',
+ 'file': 'Host-specific file names; RFC 1738',
+ 'finger': '',
+ 'freenet': '',
+ 'ftp': 'File Transfer Protocol; RFC 1738',
+ 'go': 'go; RFC 3368',
+ 'gopher': 'The Gopher Protocol',
+ 'gsm-sms': ('Global System for Mobile Communications Short Message '
+ 'Service'),
+ 'h323': ('video (audiovisual) communication on local area networks; '
+ 'RFC 3508'),
+ 'h324': ('video and audio communications over low bitrate connections '
+ 'such as POTS modem connections'),
+ 'hdl': 'CNRI handle system',
+ 'hnews': 'an HTTP-tunneling variant of the NNTP news protocol',
+ 'http': 'Hypertext Transfer Protocol; RFC 2616',
+ 'https': 'HTTP over SSL; RFC 2818',
+ 'hydra': 'SubEthaEdit URI. See http://www.codingmonkeys.de/subethaedit.',
+ 'iioploc': 'Internet Inter-ORB Protocol Location?',
+ 'ilu': 'Inter-Language Unification',
+ 'im': 'Instant Messaging; RFC 3860',
+ 'imap': 'Internet Message Access Protocol; RFC 2192',
+ 'info': 'Information Assets with Identifiers in Public Namespaces',
+ 'ior': 'CORBA interoperable object reference',
+ 'ipp': 'Internet Printing Protocol; RFC 3510',
+ 'irc': 'Internet Relay Chat',
+ 'iris.beep': 'iris.beep; RFC 3983',
+ 'iseek' : 'See www.ambrosiasw.com; a little util for OS X.',
+ 'jar': 'Java archive',
+ 'javascript': ('JavaScript code; evaluates the expression after the '
+ 'colon'),
+ 'jdbc': 'JDBC connection URI.',
+ 'ldap': 'Lightweight Directory Access Protocol',
+ 'lifn': '',
+ 'livescript': '',
+ 'lrq': '',
+ 'mailbox': 'Mail folder access',
+ 'mailserver': 'Access to data available from mail servers',
+ 'mailto': 'Electronic mail address; RFC 2368',
+ 'md5': '',
+ 'mid': 'message identifier; RFC 2392',
+ 'mocha': '',
+ 'modem': ('a connection to a terminal that can handle incoming data '
+ 'calls; RFC 2806'),
+ 'mtqp': 'Message Tracking Query Protocol; RFC 3887',
+ 'mupdate': 'Mailbox Update (MUPDATE) Protocol; RFC 3656',
+ 'news': 'USENET news; RFC 1738',
+ 'nfs': 'Network File System protocol; RFC 2224',
+ 'nntp': 'USENET news using NNTP access; RFC 1738',
+ 'opaquelocktoken': 'RFC 2518',
+ 'phone': '',
+ 'pop': 'Post Office Protocol; RFC 2384',
+ 'pop3': 'Post Office Protocol v3',
+ 'pres': 'Presence; RFC 3859',
+ 'printer': '',
+ 'prospero': 'Prospero Directory Service; RFC 4157',
+ 'rdar' : ('URLs found in Darwin source '
+ '(http://www.opensource.apple.com/darwinsource/).'),
+ 'res': '',
+ 'rtsp': 'real time streaming protocol; RFC 2326',
+ 'rvp': '',
+ 'rwhois': '',
+ 'rx': 'Remote Execution',
+ 'sdp': '',
+ 'service': 'service location; RFC 2609',
+ 'shttp': 'secure hypertext transfer protocol',
+ 'sip': 'Session Initiation Protocol; RFC 3261',
+    'sips': 'secure session initiation protocol; RFC 3261',
+ 'smb': 'SAMBA filesystems.',
+ 'snews': 'For NNTP postings via SSL',
+ 'snmp': 'Simple Network Management Protocol; RFC 4088',
+ 'soap.beep': 'RFC 3288',
+ 'soap.beeps': 'RFC 3288',
+ 'ssh': 'Reference to interactive sessions via ssh.',
+ 't120': 'real time data conferencing (audiographics)',
+ 'tag': 'RFC 4151',
+ 'tcp': '',
+ 'tel': ('a connection to a terminal that handles normal voice '
+ 'telephone calls, a voice mailbox or another voice messaging '
+ 'system or a service that can be operated using DTMF tones; '
+ 'RFC 2806.'),
+ 'telephone': 'telephone',
+ 'telnet': 'Reference to interactive sessions; RFC 4248',
+ 'tftp': 'Trivial File Transfer Protocol; RFC 3617',
+ 'tip': 'Transaction Internet Protocol; RFC 2371',
+ 'tn3270': 'Interactive 3270 emulation sessions',
+ 'tv': '',
+ 'urn': 'Uniform Resource Name; RFC 2141',
+ 'uuid': '',
+ 'vemmi': 'versatile multimedia interface; RFC 2122',
+ 'videotex': '',
+ 'view-source': 'displays HTML code that was generated with JavaScript',
+ 'wais': 'Wide Area Information Servers; RFC 4156',
+ 'whodp': '',
+ 'whois++': 'Distributed directory service.',
+ 'x-man-page': ('Opens man page in Terminal.app on OS X '
+ '(see macosxhints.com)'),
+ 'xmlrpc.beep': 'RFC 3529',
+ 'xmlrpc.beeps': 'RFC 3529',
+ 'z39.50r': 'Z39.50 Retrieval; RFC 2056',
+ 'z39.50s': 'Z39.50 Session; RFC 2056',}
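The `schemes` dictionary above is keyed by lowercase scheme names, so a lookup only needs the portion of a URI before the first colon. A small hypothetical helper (not part of docutils) illustrating the intended use:

```python
def describe_scheme(uri, schemes):
    """Return the description for uri's scheme, or None if unknown."""
    scheme = uri.split(":", 1)[0].lower()
    return schemes.get(scheme)
```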
diff --git a/python/helpers/docutils/utils.py b/python/helpers/docutils/utils.py
new file mode 100644
index 0000000..e5443bd
--- /dev/null
+++ b/python/helpers/docutils/utils.py
@@ -0,0 +1,675 @@
+# $Id: utils.py 6394 2010-08-20 11:26:58Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Miscellaneous utilities for the documentation utilities.
+"""
+
+__docformat__ = 'reStructuredText'
+
+import sys
+import os
+import os.path
+import warnings
+import unicodedata
+from docutils import ApplicationError, DataError
+from docutils import nodes
+from docutils._compat import bytes
+
+
+class SystemMessage(ApplicationError):
+
+ def __init__(self, system_message, level):
+ Exception.__init__(self, system_message.astext())
+ self.level = level
+
+
+class SystemMessagePropagation(ApplicationError): pass
+
+
+class Reporter:
+
+ """
+ Info/warning/error reporter and ``system_message`` element generator.
+
+ Five levels of system messages are defined, along with corresponding
+ methods: `debug()`, `info()`, `warning()`, `error()`, and `severe()`.
+
+ There is typically one Reporter object per process. A Reporter object is
+ instantiated with thresholds for reporting (generating warnings) and
+ halting processing (raising exceptions), a switch to turn debug output on
+ or off, and an I/O stream for warnings. These are stored as instance
+ attributes.
+
+ When a system message is generated, its level is compared to the stored
+ thresholds, and a warning or error is generated as appropriate. Debug
+ messages are produced iff the stored debug switch is on, independently of
+ other thresholds. Message output is sent to the stored warning stream if
+ not set to ''.
+
+ The Reporter class also employs a modified form of the "Observer" pattern
+ [GoF95]_ to track system messages generated. The `attach_observer` method
+ should be called before parsing, with a bound method or function which
+ accepts system messages. The observer can be removed with
+ `detach_observer`, and another added in its place.
+
+ .. [GoF95] Gamma, Helm, Johnson, Vlissides. *Design Patterns: Elements of
+ Reusable Object-Oriented Software*. Addison-Wesley, Reading, MA, USA,
+ 1995.
+ """
+
+ levels = 'DEBUG INFO WARNING ERROR SEVERE'.split()
+ """List of names for system message levels, indexed by level."""
+
+ # system message level constants:
+ (DEBUG_LEVEL,
+ INFO_LEVEL,
+ WARNING_LEVEL,
+ ERROR_LEVEL,
+ SEVERE_LEVEL) = range(5)
+
+ def __init__(self, source, report_level, halt_level, stream=None,
+ debug=0, encoding=None, error_handler='backslashreplace'):
+ """
+ :Parameters:
+ - `source`: The path to or description of the source data.
+ - `report_level`: The level at or above which warning output will
+ be sent to `stream`.
+ - `halt_level`: The level at or above which `SystemMessage`
+ exceptions will be raised, halting execution.
+ - `debug`: Show debug (level=0) system messages?
+ - `stream`: Where warning output is sent. Can be file-like (has a
+ ``.write`` method), a string (file name, opened for writing),
+ '' (empty string, for discarding all stream messages) or
+ `None` (implies `sys.stderr`; default).
+ - `encoding`: The output encoding.
+ - `error_handler`: The error handler for stderr output encoding.
+ """
+
+ self.source = source
+ """The path to or description of the source data."""
+
+ self.error_handler = error_handler
+ """The character encoding error handler."""
+
+ self.debug_flag = debug
+ """Show debug (level=0) system messages?"""
+
+ self.report_level = report_level
+ """The level at or above which warning output will be sent
+ to `self.stream`."""
+
+ self.halt_level = halt_level
+ """The level at or above which `SystemMessage` exceptions
+ will be raised, halting execution."""
+
+ if stream is None:
+ stream = sys.stderr
+ elif stream and type(stream) in (unicode, bytes):
+ # if `stream` is a file name, open it
+ if type(stream) is bytes:
+ stream = open(stream, 'w')
+ else:
+ stream = open(stream.encode(), 'w')
+
+ self.stream = stream
+ """Where warning output is sent."""
+
+ if encoding is None:
+ try:
+ encoding = stream.encoding
+ except AttributeError:
+ pass
+
+ self.encoding = encoding or 'ascii'
+ """The output character encoding."""
+
+ self.observers = []
+ """List of bound methods or functions to call with each system_message
+ created."""
+
+ self.max_level = -1
+ """The highest level system message generated so far."""
+
+ def set_conditions(self, category, report_level, halt_level,
+ stream=None, debug=0):
+ warnings.warn('docutils.utils.Reporter.set_conditions deprecated; '
+ 'set attributes via configuration settings or directly',
+ DeprecationWarning, stacklevel=2)
+ self.report_level = report_level
+ self.halt_level = halt_level
+ if stream is None:
+ stream = sys.stderr
+ self.stream = stream
+ self.debug_flag = debug
+
+ def attach_observer(self, observer):
+ """
+ The `observer` parameter is a function or bound method which takes one
+ argument, a `nodes.system_message` instance.
+ """
+ self.observers.append(observer)
+
+ def detach_observer(self, observer):
+ self.observers.remove(observer)
+
+ def notify_observers(self, message):
+ for observer in self.observers:
+ observer(message)
+
+ def system_message(self, level, message, *children, **kwargs):
+ """
+ Return a system_message object.
+
+ Raise an exception or generate a warning if appropriate.
+ """
+ attributes = kwargs.copy()
+ if 'base_node' in kwargs:
+ source, line = get_source_line(kwargs['base_node'])
+ del attributes['base_node']
+ if source is not None:
+ attributes.setdefault('source', source)
+ if line is not None:
+ attributes.setdefault('line', line)
+ # assert source is not None, "node has line- but no source-argument"
+        if 'source' not in attributes: # 'line' is absolute line number
+ try: # look up (source, line-in-source)
+ source, line = self.locator(attributes.get('line'))
+ # print "locator lookup", kwargs.get('line'), "->", source, line
+ except AttributeError:
+ source, line = None, None
+ if source is not None:
+ attributes['source'] = source
+ if line is not None:
+ attributes['line'] = line
+ # assert attributes['line'] is not None, (message, kwargs)
+ # assert attributes['source'] is not None, (message, kwargs)
+ attributes.setdefault('source', self.source)
+
+ msg = nodes.system_message(message, level=level,
+ type=self.levels[level],
+ *children, **attributes)
+ if self.stream and (level >= self.report_level
+ or self.debug_flag and level == self.DEBUG_LEVEL
+ or level >= self.halt_level):
+ msgtext = msg.astext() + '\n'
+ try:
+ self.stream.write(msgtext)
+ except UnicodeEncodeError:
+ self.stream.write(msgtext.encode(self.encoding,
+ self.error_handler))
+ if level >= self.halt_level:
+ raise SystemMessage(msg, level)
+ if level > self.DEBUG_LEVEL or self.debug_flag:
+ self.notify_observers(msg)
+ self.max_level = max(level, self.max_level)
+ return msg
+
+ def debug(self, *args, **kwargs):
+ """
+ Level-0, "DEBUG": an internal reporting issue. Typically, there is no
+ effect on the processing. Level-0 system messages are handled
+ separately from the others.
+ """
+ if self.debug_flag:
+ return self.system_message(self.DEBUG_LEVEL, *args, **kwargs)
+
+ def info(self, *args, **kwargs):
+ """
+ Level-1, "INFO": a minor issue that can be ignored. Typically there is
+ no effect on processing, and level-1 system messages are not reported.
+ """
+ return self.system_message(self.INFO_LEVEL, *args, **kwargs)
+
+ def warning(self, *args, **kwargs):
+ """
+ Level-2, "WARNING": an issue that should be addressed. If ignored,
+ there may be unpredictable problems with the output.
+ """
+ return self.system_message(self.WARNING_LEVEL, *args, **kwargs)
+
+ def error(self, *args, **kwargs):
+ """
+ Level-3, "ERROR": an error that should be addressed. If ignored, the
+ output will contain errors.
+ """
+ return self.system_message(self.ERROR_LEVEL, *args, **kwargs)
+
+ def severe(self, *args, **kwargs):
+ """
+ Level-4, "SEVERE": a severe error that must be addressed. If ignored,
+ the output will contain severe errors. Typically level-4 system
+ messages are turned into exceptions which halt processing.
+ """
+ return self.system_message(self.SEVERE_LEVEL, *args, **kwargs)
+
+
+class ExtensionOptionError(DataError): pass
+class BadOptionError(ExtensionOptionError): pass
+class BadOptionDataError(ExtensionOptionError): pass
+class DuplicateOptionError(ExtensionOptionError): pass
+
+
+def extract_extension_options(field_list, options_spec):
+ """
+ Return a dictionary mapping extension option names to converted values.
+
+ :Parameters:
+ - `field_list`: A flat field list without field arguments, where each
+ field body consists of a single paragraph only.
+ - `options_spec`: Dictionary mapping known option names to a
+ conversion function such as `int` or `float`.
+
+ :Exceptions:
+ - `KeyError` for unknown option names.
+ - `ValueError` for invalid option values (raised by the conversion
+ function).
+ - `TypeError` for invalid option value types (raised by conversion
+ function).
+ - `DuplicateOptionError` for duplicate options.
+ - `BadOptionError` for invalid fields.
+ - `BadOptionDataError` for invalid option data (missing name,
+ missing data, bad quotes, etc.).
+ """
+ option_list = extract_options(field_list)
+ option_dict = assemble_option_dict(option_list, options_spec)
+ return option_dict
+
+def extract_options(field_list):
+ """
+ Return a list of option (name, value) pairs from field names & bodies.
+
+ :Parameter:
+ `field_list`: A flat field list, where each field name is a single
+ word and each field body consists of a single paragraph only.
+
+ :Exceptions:
+ - `BadOptionError` for invalid fields.
+ - `BadOptionDataError` for invalid option data (missing name,
+ missing data, bad quotes, etc.).
+ """
+ option_list = []
+ for field in field_list:
+ if len(field[0].astext().split()) != 1:
+ raise BadOptionError(
+ 'extension option field name may not contain multiple words')
+ name = str(field[0].astext().lower())
+ body = field[1]
+ if len(body) == 0:
+ data = None
+ elif len(body) > 1 or not isinstance(body[0], nodes.paragraph) \
+ or len(body[0]) != 1 or not isinstance(body[0][0], nodes.Text):
+ raise BadOptionDataError(
+ 'extension option field body may contain\n'
+ 'a single paragraph only (option "%s")' % name)
+ else:
+ data = body[0][0].astext()
+ option_list.append((name, data))
+ return option_list
+
+def assemble_option_dict(option_list, options_spec):
+ """
+ Return a mapping of option names to values.
+
+ :Parameters:
+ - `option_list`: A list of (name, value) pairs (the output of
+ `extract_options()`).
+ - `options_spec`: Dictionary mapping known option names to a
+ conversion function such as `int` or `float`.
+
+ :Exceptions:
+ - `KeyError` for unknown option names.
+ - `DuplicateOptionError` for duplicate options.
+ - `ValueError` for invalid option values (raised by conversion
+ function).
+ - `TypeError` for invalid option value types (raised by conversion
+ function).
+ """
+ options = {}
+ for name, value in option_list:
+ convertor = options_spec[name] # raises KeyError if unknown
+ if convertor is None:
+ raise KeyError(name) # or if explicitly disabled
+ if name in options:
+ raise DuplicateOptionError('duplicate option "%s"' % name)
+ try:
+ options[name] = convertor(value)
+ except (ValueError, TypeError), detail:
+ raise detail.__class__('(option: "%s"; value: %r)\n%s'
+ % (name, value, ' '.join(detail.args)))
+ return options
+
+
+class NameValueError(DataError): pass
+
+
+def decode_path(path):
+ """
+ Decode file/path string. Return `nodes.reprunicode` object.
+
+ Convert to Unicode without the UnicodeDecodeError raised by the
+ implicit 'ascii:strict' default decoding.
+ """
+ # see also http://article.gmane.org/gmane.text.docutils.user/2905
+ try:
+ path = path.decode(sys.getfilesystemencoding(), 'strict')
+ except AttributeError: # default value None has no decode method
+ return nodes.reprunicode(path)
+ except UnicodeDecodeError:
+ try:
+ path = path.decode('utf-8', 'strict')
+ except UnicodeDecodeError:
+ path = path.decode('ascii', 'replace')
+ return nodes.reprunicode(path)
+
+
+def extract_name_value(line):
+ """
+ Return a list of (name, value) from a line of the form "name=value ...".
+
+ :Exception:
+ `NameValueError` for invalid input (missing name, missing data, bad
+ quotes, etc.).
+ """
+ attlist = []
+ while line:
+ equals = line.find('=')
+ if equals == -1:
+ raise NameValueError('missing "="')
+ attname = line[:equals].strip()
+ if equals == 0 or not attname:
+ raise NameValueError(
+ 'missing attribute name before "="')
+ line = line[equals+1:].lstrip()
+ if not line:
+ raise NameValueError(
+ 'missing value after "%s="' % attname)
+ if line[0] in '\'"':
+ endquote = line.find(line[0], 1)
+ if endquote == -1:
+ raise NameValueError(
+ 'attribute "%s" missing end quote (%s)'
+ % (attname, line[0]))
+ if len(line) > endquote + 1 and line[endquote + 1].strip():
+ raise NameValueError(
+ 'attribute "%s" end quote (%s) not followed by '
+ 'whitespace' % (attname, line[0]))
+ data = line[1:endquote]
+ line = line[endquote+1:].lstrip()
+ else:
+ space = line.find(' ')
+ if space == -1:
+ data = line
+ line = ''
+ else:
+ data = line[:space]
+ line = line[space+1:].lstrip()
+ attlist.append((attname.lower(), data))
+ return attlist
+
+def new_reporter(source_path, settings):
+ """
+ Return a new Reporter object.
+
+ :Parameters:
+ `source_path` : string
+ The path to or description of the source text of the document.
+ `settings` : optparse.Values object
+ Runtime settings.
+ """
+ reporter = Reporter(
+ source_path, settings.report_level, settings.halt_level,
+ stream=settings.warning_stream, debug=settings.debug,
+ encoding=settings.error_encoding,
+ error_handler=settings.error_encoding_error_handler)
+ return reporter
+
+def new_document(source_path, settings=None):
+ """
+ Return a new empty document object.
+
+ :Parameters:
+ `source_path` : string
+ The path to or description of the source text of the document.
+ `settings` : optparse.Values object
+ Runtime settings. If none are provided, a default core set will
+ be used. If you will use the document object with any Docutils
+ components, you must provide their default settings as well. For
+ example, if parsing, at least provide the parser settings,
+ obtainable as follows::
+
+ settings = docutils.frontend.OptionParser(
+ components=(docutils.parsers.rst.Parser,)
+ ).get_default_values()
+ """
+ from docutils import frontend
+ if settings is None:
+ settings = frontend.OptionParser().get_default_values()
+ source_path = decode_path(source_path)
+ reporter = new_reporter(source_path, settings)
+ document = nodes.document(settings, reporter, source=source_path)
+ document.note_source(source_path, -1)
+ return document
+
+def clean_rcs_keywords(paragraph, keyword_substitutions):
+ if len(paragraph) == 1 and isinstance(paragraph[0], nodes.Text):
+ textnode = paragraph[0]
+ for pattern, substitution in keyword_substitutions:
+ match = pattern.search(textnode)
+ if match:
+ paragraph[0] = nodes.Text(pattern.sub(substitution, textnode))
+ return
+
+def relative_path(source, target):
+ """
+ Build and return a path to `target`, relative to `source` (both files).
+
+ If there is no common prefix, return the absolute path to `target`.
+ """
+ source_parts = os.path.abspath(source or 'dummy_file').split(os.sep)
+ target_parts = os.path.abspath(target).split(os.sep)
+ # Check first 2 parts because '/dir'.split('/') == ['', 'dir']:
+ if source_parts[:2] != target_parts[:2]:
+ # Nothing in common between paths.
+ # Return absolute path, using '/' for URLs:
+ return '/'.join(target_parts)
+ source_parts.reverse()
+ target_parts.reverse()
+ while (source_parts and target_parts
+ and source_parts[-1] == target_parts[-1]):
+ # Remove path components in common:
+ source_parts.pop()
+ target_parts.pop()
+ target_parts.reverse()
+ parts = ['..'] * (len(source_parts) - 1) + target_parts
+ return '/'.join(parts)
+
+def get_stylesheet_reference(settings, relative_to=None):
+ """
+ Retrieve a stylesheet reference from the settings object.
+
+ Deprecated. Use get_stylesheet_reference_list() instead to
+ enable specification of multiple stylesheets as a comma-separated
+ list.
+ """
+ if settings.stylesheet_path:
+ assert not settings.stylesheet, (
+ 'stylesheet and stylesheet_path are mutually exclusive.')
+ if relative_to == None:
+ relative_to = settings._destination
+ return relative_path(relative_to, settings.stylesheet_path)
+ else:
+ return settings.stylesheet
+
+# Return 'stylesheet' or 'stylesheet_path' arguments as list.
+#
+# The original settings arguments are kept unchanged: you can test
+# with e.g. ``if settings.stylesheet_path:``
+#
+# Differences to ``get_stylesheet_reference``:
+# * return value is a list
+# * no re-writing of the path (and therefore no optional argument)
+# (if required, use ``utils.relative_path(source, target)``
+# in the calling script)
+def get_stylesheet_list(settings):
+ """
+ Retrieve list of stylesheet references from the settings object.
+ """
+ assert not (settings.stylesheet and settings.stylesheet_path), (
+ 'stylesheet and stylesheet_path are mutually exclusive.')
+ if settings.stylesheet_path:
+ sheets = settings.stylesheet_path.split(",")
+ elif settings.stylesheet:
+ sheets = settings.stylesheet.split(",")
+ else:
+ sheets = []
+ # strip whitespace (frequently occurring in config files)
+ return [sheet.strip(u' \t\n\r') for sheet in sheets]
+
+def get_trim_footnote_ref_space(settings):
+ """
+ Return whether or not to trim footnote space.
+
+ If trim_footnote_reference_space is not None, return it.
+
+ If trim_footnote_reference_space is None, return False unless the
+ footnote reference style is 'superscript'.
+ """
+ if settings.trim_footnote_reference_space is None:
+ return hasattr(settings, 'footnote_references') and \
+ settings.footnote_references == 'superscript'
+ else:
+ return settings.trim_footnote_reference_space
+
+def get_source_line(node):
+ """
+ Return the "source" and "line" attributes from the `node` given or from
+ its closest ancestor.
+ """
+ while node:
+ if node.source or node.line:
+ return node.source, node.line
+ node = node.parent
+ return None, None
+
+def escape2null(text):
+ """Return a string with escape-backslashes converted to nulls."""
+ parts = []
+ start = 0
+ while 1:
+ found = text.find('\\', start)
+ if found == -1:
+ parts.append(text[start:])
+ return ''.join(parts)
+ parts.append(text[start:found])
+ parts.append('\x00' + text[found+1:found+2])
+ start = found + 2 # skip character after escape
+
+def unescape(text, restore_backslashes=0):
+ """
+ Return a string with nulls removed or restored to backslashes.
+ Backslash-escaped spaces are also removed.
+ """
+ if restore_backslashes:
+ return text.replace('\x00', '\\')
+ else:
+ for sep in ['\x00 ', '\x00\n', '\x00']:
+ text = ''.join(text.split(sep))
+ return text
+
+east_asian_widths = {'W': 2, # Wide
+ 'F': 2, # Full-width (wide)
+ 'Na': 1, # Narrow
+ 'H': 1, # Half-width (narrow)
+ 'N': 1, # Neutral (not East Asian, treated as narrow)
+ 'A': 1} # Ambiguous (s/b wide in East Asian context,
+ # narrow otherwise, but that doesn't work)
+"""Mapping of result codes from `unicodedata.east_asian_width()` to character
+column widths."""
+
+def east_asian_column_width(text):
+ if isinstance(text, unicode):
+ total = 0
+ for c in text:
+ total += east_asian_widths[unicodedata.east_asian_width(c)]
+ return total
+ else:
+ return len(text)
+
+if hasattr(unicodedata, 'east_asian_width'):
+ column_width = east_asian_column_width
+else:
+ column_width = len
+
+def uniq(L):
+ r = []
+ for item in L:
+ if not item in r:
+ r.append(item)
+ return r
+
+
+class DependencyList:
+
+ """
+ List of dependencies, with file recording support.
+
+ Note that the output file is not automatically closed. You have
+ to explicitly call the close() method.
+ """
+
+ def __init__(self, output_file=None, dependencies=[]):
+ """
+ Initialize the dependency list, automatically setting the
+ output file to `output_file` (see `set_output()`) and adding
+ all supplied dependencies.
+ """
+ self.set_output(output_file)
+ for i in dependencies:
+ self.add(i)
+
+ def set_output(self, output_file):
+ """
+ Set the output file and clear the list of already added
+ dependencies.
+
+ `output_file` must be a string. The specified file is
+ immediately overwritten.
+
+ If output_file is '-', the output will be written to stdout.
+ If it is None, no file output is done when calling add().
+ """
+ self.list = []
+ if output_file == '-':
+ self.file = sys.stdout
+ elif output_file:
+ self.file = open(output_file, 'w')
+ else:
+ self.file = None
+
+ def add(self, *filenames):
+ """
+ If the dependency `filename` has not already been added,
+ append it to self.list and print it to self.file if self.file
+ is not None.
+ """
+ for filename in filenames:
+ if not filename in self.list:
+ self.list.append(filename)
+ if self.file is not None:
+ print >>self.file, filename
+
+ def close(self):
+ """
+ Close the output file.
+ """
+ self.file.close()
+ self.file = None
+
+ def __repr__(self):
+ if self.file:
+ output_file = self.file.name
+ else:
+ output_file = None
+ return '%s(%r, %s)' % (self.__class__.__name__, output_file, self.list)
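The `escape2null`/`unescape` pair above round-trips backslash escapes through null bytes so later processing stages can tell escaped characters from literal ones. A minimal standalone sketch of that behaviour (re-typed in Python 3 syntax for illustration; it mirrors, but is not, the functions in the hunk above):

```python
def escape2null(text):
    # Convert escape-backslashes to nulls, keeping the escaped character.
    parts = []
    start = 0
    while True:
        found = text.find('\\', start)
        if found == -1:
            parts.append(text[start:])
            return ''.join(parts)
        parts.append(text[start:found])
        parts.append('\x00' + text[found + 1:found + 2])
        start = found + 2  # skip the character after the escape

def unescape(text, restore_backslashes=False):
    # Remove nulls (also eating a backslash-escaped space or newline),
    # or restore nulls to backslashes.
    if restore_backslashes:
        return text.replace('\x00', '\\')
    for sep in ['\x00 ', '\x00\n', '\x00']:
        text = ''.join(text.split(sep))
    return text
```

For example, `escape2null('a\\*b')` yields `'a\x00*b'`, which `unescape` turns back into `'a*b'` (or `'a\\*b'` with `restore_backslashes=True`).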
diff --git a/python/helpers/docutils/writers/__init__.py b/python/helpers/docutils/writers/__init__.py
new file mode 100644
index 0000000..cc80d84
--- /dev/null
+++ b/python/helpers/docutils/writers/__init__.py
@@ -0,0 +1,133 @@
+# $Id: __init__.py 6111 2009-09-02 21:36:05Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+This package contains Docutils Writer modules.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import os.path
+import docutils
+from docutils import languages, Component
+from docutils.transforms import universal
+
+
+class Writer(Component):
+
+ """
+ Abstract base class for docutils Writers.
+
+ Each writer module or package must export a subclass also called 'Writer'.
+ Each writer must support all standard node types listed in
+ `docutils.nodes.node_class_names`.
+
+ The `write()` method is the main entry point.
+ """
+
+ component_type = 'writer'
+ config_section = 'writers'
+
+ def get_transforms(self):
+ return Component.get_transforms(self) + [
+ universal.Messages,
+ universal.FilterMessages,
+ universal.StripClassesAndElements,]
+
+ document = None
+ """The document to write (Docutils doctree); set by `write`."""
+
+ output = None
+ """Final translated form of `document` (Unicode string for text, binary
+ string for other forms); set by `translate`."""
+
+ language = None
+ """Language module for the document; set by `write`."""
+
+ destination = None
+ """`docutils.io` Output object; where to write the document.
+ Set by `write`."""
+
+ def __init__(self):
+
+ # Used by HTML and LaTeX writers for output fragments:
+ self.parts = {}
+ """Mapping of document part names to fragments of `self.output`.
+ Values are Unicode strings; encoding is up to the client. The 'whole'
+ key should contain the entire document output.
+ """
+
+ def write(self, document, destination):
+ """
+ Process a document into its final form.
+
+ Translate `document` (a Docutils document tree) into the Writer's
+ native format, and write it out to its `destination` (a
+ `docutils.io.Output` subclass object).
+
+ Normally not overridden or extended in subclasses.
+ """
+ self.document = document
+ self.language = languages.get_language(
+ document.settings.language_code)
+ self.destination = destination
+ self.translate()
+ output = self.destination.write(self.output)
+ return output
+
+ def translate(self):
+ """
+ Do final translation of `self.document` into `self.output`. Called
+ from `write`. Override in subclasses.
+
+ Usually done with a `docutils.nodes.NodeVisitor` subclass, in
+ combination with a call to `docutils.nodes.Node.walk()` or
+ `docutils.nodes.Node.walkabout()`. The ``NodeVisitor`` subclass must
+ support all standard elements (listed in
+ `docutils.nodes.node_class_names`) and possibly non-standard elements
+ used by the current Reader as well.
+ """
+ raise NotImplementedError('subclass must override this method')
+
+ def assemble_parts(self):
+ """Assemble the `self.parts` dictionary. Extend in subclasses."""
+ self.parts['whole'] = self.output
+ self.parts['encoding'] = self.document.settings.output_encoding
+ self.parts['version'] = docutils.__version__
+
+
+class UnfilteredWriter(Writer):
+
+ """
+ A writer that passes the document tree on unchanged (e.g. a
+ serializer.)
+
+ Documents written by UnfilteredWriters are typically reused at a
+ later date using a subclass of `readers.ReReader`.
+ """
+
+ def get_transforms(self):
+ # Do not add any transforms. When the document is reused
+ # later, the then-used writer will add the appropriate
+ # transforms.
+ return Component.get_transforms(self)
+
+
+_writer_aliases = {
+ 'html': 'html4css1',
+ 'latex': 'latex2e',
+ 'pprint': 'pseudoxml',
+ 'pformat': 'pseudoxml',
+ 'pdf': 'rlpdf',
+ 'xml': 'docutils_xml',
+ 's5': 's5_html'}
+
+def get_writer_class(writer_name):
+ """Return the Writer class from the `writer_name` module."""
+ writer_name = writer_name.lower()
+ if writer_name in _writer_aliases:
+ writer_name = _writer_aliases[writer_name]
+ module = __import__(writer_name, globals(), locals())
+ return module.Writer
diff --git a/python/helpers/docutils/writers/docutils_xml.py b/python/helpers/docutils/writers/docutils_xml.py
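The alias table at the bottom of `writers/__init__.py` lets `get_writer_class` accept short names like `"html"` before importing the real module. The name-resolution half can be sketched standalone (the dictionary is copied from the hunk above; `resolve_writer_name` is a hypothetical helper, not a docutils API):

```python
_writer_aliases = {
    'html': 'html4css1',
    'latex': 'latex2e',
    'pprint': 'pseudoxml',
    'pformat': 'pseudoxml',
    'pdf': 'rlpdf',
    'xml': 'docutils_xml',
    's5': 's5_html'}

def resolve_writer_name(writer_name):
    # Lower-case the requested name and map known aliases to module
    # names, mirroring the first half of get_writer_class(); the real
    # function then __import__()s the resolved module.
    writer_name = writer_name.lower()
    return _writer_aliases.get(writer_name, writer_name)
```

So `resolve_writer_name('HTML')` gives `'html4css1'`, while an unaliased name such as `'docutils_xml'` passes through unchanged.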
new file mode 100644
index 0000000..40ace7e
--- /dev/null
+++ b/python/helpers/docutils/writers/docutils_xml.py
@@ -0,0 +1,73 @@
+# $Id: docutils_xml.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Simple internal document tree Writer, writes Docutils XML.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import docutils
+from docutils import frontend, writers
+
+
+class Writer(writers.Writer):
+
+ supported = ('xml',)
+ """Formats this writer supports."""
+
+ settings_spec = (
+ '"Docutils XML" Writer Options',
+ 'Warning: the --newlines and --indents options may adversely affect '
+ 'whitespace; use them only for reading convenience.',
+ (('Generate XML with newlines before and after tags.',
+ ['--newlines'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),
+ ('Generate XML with indents and newlines.',
+ ['--indents'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),
+ ('Omit the XML declaration. Use with caution.',
+ ['--no-xml-declaration'],
+ {'dest': 'xml_declaration', 'default': 1, 'action': 'store_false',
+ 'validator': frontend.validate_boolean}),
+ ('Omit the DOCTYPE declaration.',
+ ['--no-doctype'],
+ {'dest': 'doctype_declaration', 'default': 1,
+ 'action': 'store_false', 'validator': frontend.validate_boolean}),))
+
+ settings_defaults = {'output_encoding_error_handler': 'xmlcharrefreplace'}
+
+ config_section = 'docutils_xml writer'
+ config_section_dependencies = ('writers',)
+
+ output = None
+ """Final translated form of `document`."""
+
+ xml_declaration = '<?xml version="1.0" encoding="%s"?>\n'
+ #xml_stylesheet = '<?xml-stylesheet type="text/xsl" href="%s"?>\n'
+ doctype = (
+ '<!DOCTYPE document PUBLIC'
+ ' "+//IDN docutils.sourceforge.net//DTD Docutils Generic//EN//XML"'
+ ' "http://docutils.sourceforge.net/docs/ref/docutils.dtd">\n')
+ generator = '<!-- Generated by Docutils %s -->\n'
+
+ def translate(self):
+ settings = self.document.settings
+ indent = newline = ''
+ if settings.newlines:
+ newline = '\n'
+ if settings.indents:
+ newline = '\n'
+ indent = ' '
+ output_prefix = []
+ if settings.xml_declaration:
+ output_prefix.append(
+ self.xml_declaration % settings.output_encoding)
+ if settings.doctype_declaration:
+ output_prefix.append(self.doctype)
+ output_prefix.append(self.generator % docutils.__version__)
+ docnode = self.document.asdom().childNodes[0]
+ self.output = (''.join(output_prefix)
+ + docnode.toprettyxml(indent, newline))
diff --git a/python/helpers/docutils/writers/html4css1/__init__.py b/python/helpers/docutils/writers/html4css1/__init__.py
new file mode 100644
index 0000000..c1fa360
--- /dev/null
+++ b/python/helpers/docutils/writers/html4css1/__init__.py
@@ -0,0 +1,1553 @@
+# $Id: __init__.py 6315 2010-04-28 12:28:33Z milde $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Simple HyperText Markup Language document tree Writer.
+
+The output conforms to the XHTML version 1.0 Transitional DTD
+(*almost* strict). The output contains a minimum of formatting
+information. The cascading style sheet "html4css1.css" is required
+for proper viewing with a modern graphical browser.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import os
+import os.path
+import time
+import re
+try:
+ import Image # check for the Python Imaging Library
+except ImportError:
+ Image = None
+import docutils
+from docutils import frontend, nodes, utils, writers, languages, io
+from docutils.transforms import writer_aux
+
+
+class Writer(writers.Writer):
+
+ supported = ('html', 'html4css1', 'xhtml')
+ """Formats this writer supports."""
+
+ default_stylesheet = 'html4css1.css'
+
+ default_stylesheet_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), default_stylesheet))
+
+ default_template = 'template.txt'
+
+ default_template_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), default_template))
+
+ settings_spec = (
+ 'HTML-Specific Options',
+ None,
+ (('Specify the template file (UTF-8 encoded). Default is "%s".'
+ % default_template_path,
+ ['--template'],
+ {'default': default_template_path, 'metavar': '<file>'}),
+ ('Specify comma separated list of stylesheet URLs. '
+ 'Overrides previous --stylesheet and --stylesheet-path settings.',
+ ['--stylesheet'],
+ {'metavar': '<URL>', 'overrides': 'stylesheet_path'}),
+ ('Specify comma separated list of stylesheet paths. '
+ 'With --link-stylesheet, '
+ 'the path is rewritten relative to the output HTML file. '
+ 'Default: "%s"' % default_stylesheet_path,
+ ['--stylesheet-path'],
+ {'metavar': '<file>', 'overrides': 'stylesheet',
+ 'default': default_stylesheet_path}),
+ ('Embed the stylesheet(s) in the output HTML file. The stylesheet '
+ 'files must be accessible during processing. This is the default.',
+ ['--embed-stylesheet'],
+ {'default': 1, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Link to the stylesheet(s) in the output HTML file. '
+ 'Default: embed stylesheets.',
+ ['--link-stylesheet'],
+ {'dest': 'embed_stylesheet', 'action': 'store_false'}),
+ ('Specify the initial header level. Default is 1 for "<h1>". '
+ 'Does not affect document title & subtitle (see --no-doc-title).',
+ ['--initial-header-level'],
+ {'choices': '1 2 3 4 5 6'.split(), 'default': '1',
+ 'metavar': '<level>'}),
+ ('Specify the maximum width (in characters) for one-column field '
+ 'names. Longer field names will span an entire row of the table '
+ 'used to render the field list. Default is 14 characters. '
+ 'Use 0 for "no limit".',
+ ['--field-name-limit'],
+ {'default': 14, 'metavar': '<level>',
+ 'validator': frontend.validate_nonnegative_int}),
+ ('Specify the maximum width (in characters) for options in option '
+ 'lists. Longer options will span an entire row of the table used '
+ 'to render the option list. Default is 14 characters. '
+ 'Use 0 for "no limit".',
+ ['--option-limit'],
+ {'default': 14, 'metavar': '<level>',
+ 'validator': frontend.validate_nonnegative_int}),
+ ('Format for footnote references: one of "superscript" or '
+ '"brackets". Default is "brackets".',
+ ['--footnote-references'],
+ {'choices': ['superscript', 'brackets'], 'default': 'brackets',
+ 'metavar': '<format>',
+ 'overrides': 'trim_footnote_reference_space'}),
+ ('Format for block quote attributions: one of "dash" (em-dash '
+ 'prefix), "parentheses"/"parens", or "none". Default is "dash".',
+ ['--attribution'],
+ {'choices': ['dash', 'parentheses', 'parens', 'none'],
+ 'default': 'dash', 'metavar': '<format>'}),
+ ('Remove extra vertical whitespace between items of "simple" bullet '
+ 'lists and enumerated lists. Default: enabled.',
+ ['--compact-lists'],
+ {'default': 1, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Disable compact simple bullet and enumerated lists.',
+ ['--no-compact-lists'],
+ {'dest': 'compact_lists', 'action': 'store_false'}),
+ ('Remove extra vertical whitespace between items of simple field '
+ 'lists. Default: enabled.',
+ ['--compact-field-lists'],
+ {'default': 1, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Disable compact simple field lists.',
+ ['--no-compact-field-lists'],
+ {'dest': 'compact_field_lists', 'action': 'store_false'}),
+ ('Added to standard table classes. '
+ 'Defined styles: "borderless". Default: ""',
+ ['--table-style'],
+ {'default': ''}),
+ ('Omit the XML declaration. Use with caution.',
+ ['--no-xml-declaration'],
+ {'dest': 'xml_declaration', 'default': 1, 'action': 'store_false',
+ 'validator': frontend.validate_boolean}),
+ ('Obfuscate email addresses to confuse harvesters while still '
+ 'keeping email links usable with standards-compliant browsers.',
+ ['--cloak-email-addresses'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),))
+
+ settings_defaults = {'output_encoding_error_handler': 'xmlcharrefreplace'}
+
+ relative_path_settings = ('stylesheet_path',)
+
+ config_section = 'html4css1 writer'
+ config_section_dependencies = ('writers',)
+
+ visitor_attributes = (
+ 'head_prefix', 'head', 'stylesheet', 'body_prefix',
+ 'body_pre_docinfo', 'docinfo', 'body', 'body_suffix',
+ 'title', 'subtitle', 'header', 'footer', 'meta', 'fragment',
+ 'html_prolog', 'html_head', 'html_title', 'html_subtitle',
+ 'html_body')
+
+ def get_transforms(self):
+ return writers.Writer.get_transforms(self) + [writer_aux.Admonitions]
+
+ def __init__(self):
+ writers.Writer.__init__(self)
+ self.translator_class = HTMLTranslator
+
+ def translate(self):
+ self.visitor = visitor = self.translator_class(self.document)
+ self.document.walkabout(visitor)
+ for attr in self.visitor_attributes:
+ setattr(self, attr, getattr(visitor, attr))
+ self.output = self.apply_template()
+
+ def apply_template(self):
+ template_file = open(self.document.settings.template, 'rb')
+ template = unicode(template_file.read(), 'utf-8')
+ template_file.close()
+ subs = self.interpolation_dict()
+ return template % subs
+
+ def interpolation_dict(self):
+ subs = {}
+ settings = self.document.settings
+ for attr in self.visitor_attributes:
+ subs[attr] = ''.join(getattr(self, attr)).rstrip('\n')
+ subs['encoding'] = settings.output_encoding
+ subs['version'] = docutils.__version__
+ return subs
+
+ def assemble_parts(self):
+ writers.Writer.assemble_parts(self)
+ for part in self.visitor_attributes:
+ self.parts[part] = ''.join(getattr(self, part))
+
+
+class HTMLTranslator(nodes.NodeVisitor):
+
+ """
+ This HTML writer has been optimized to produce visually compact
+ lists (less vertical whitespace). HTML's mixed content models
+ allow list items to contain "<li><p>body elements</p></li>" or
+ "<li>just text</li>" or even "<li>text<p>and body
+ elements</p>combined</li>", each with different effects. It would
+ be best to stick with strict body elements in list items, but they
+ affect vertical spacing in browsers (although they really
+ shouldn't).
+
+ Here is an outline of the optimization:
+
+ - Check for and omit <p> tags in "simple" lists: list items
+ contain either a single paragraph, a nested simple list, or a
+ paragraph followed by a nested simple list. This means that
+ this list can be compact:
+
+ - Item 1.
+ - Item 2.
+
+ But this list cannot be compact:
+
+ - Item 1.
+
+ This second paragraph forces space between list items.
+
+ - Item 2.
+
+ - In non-list contexts, omit <p> tags on a paragraph if that
+ paragraph is the only child of its parent (footnotes & citations
+ are allowed a label first).
+
+ - Regardless of the above, in definitions, table cells, field bodies,
+ option descriptions, and list items, mark the first child with
+ 'class="first"' and the last child with 'class="last"'. The stylesheet
+ sets the margins (top & bottom respectively) to 0 for these elements.
+
+ The ``no_compact_lists`` setting (``--no-compact-lists`` command-line
+ option) disables list whitespace optimization.
+ """
+
+ xml_declaration = '<?xml version="1.0" encoding="%s" ?>\n'
+ doctype = (
+ '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"'
+ ' "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n')
+ head_prefix_template = ('<html xmlns="http://www.w3.org/1999/xhtml"'
+ ' xml:lang="%s" lang="%s">\n<head>\n')
+ content_type = ('<meta http-equiv="Content-Type"'
+ ' content="text/html; charset=%s" />\n')
+ generator = ('<meta name="generator" content="Docutils %s: '
+ 'http://docutils.sourceforge.net/" />\n')
+ stylesheet_link = '<link rel="stylesheet" href="%s" type="text/css" />\n'
+ embedded_stylesheet = '<style type="text/css">\n\n%s\n</style>\n'
+ words_and_spaces = re.compile(r'\S+| +|\n')
+ sollbruchstelle = re.compile(r'.+\W\W.+|[-?].+', re.U) # wrap point inside word
+
+ def __init__(self, document):
+ nodes.NodeVisitor.__init__(self, document)
+ self.settings = settings = document.settings
+ lcode = settings.language_code
+ self.language = languages.get_language(lcode)
+ self.meta = [self.content_type % settings.output_encoding,
+ self.generator % docutils.__version__]
+ self.head_prefix = []
+ self.html_prolog = []
+ if settings.xml_declaration:
+ self.head_prefix.append(self.xml_declaration
+ % settings.output_encoding)
+ # encoding not interpolated:
+ self.html_prolog.append(self.xml_declaration)
+ self.head_prefix.extend([self.doctype,
+ self.head_prefix_template % (lcode, lcode)])
+ self.html_prolog.append(self.doctype)
+ self.head = self.meta[:]
+ # stylesheets
+ styles = utils.get_stylesheet_list(settings)
+ if settings.stylesheet_path and not(settings.embed_stylesheet):
+ styles = [utils.relative_path(settings._destination, sheet)
+ for sheet in styles]
+ if settings.embed_stylesheet:
+ settings.record_dependencies.add(*styles)
+ self.stylesheet = [self.embedded_stylesheet %
+ io.FileInput(source_path=sheet, encoding='utf-8').read()
+ for sheet in styles]
+ else: # link to stylesheets
+ self.stylesheet = [self.stylesheet_link % self.encode(stylesheet)
+ for stylesheet in styles]
+ self.body_prefix = ['</head>\n<body>\n']
+ # document title, subtitle display
+ self.body_pre_docinfo = []
+ # author, date, etc.
+ self.docinfo = []
+ self.body = []
+ self.fragment = []
+ self.body_suffix = ['</body>\n</html>\n']
+ self.section_level = 0
+ self.initial_header_level = int(settings.initial_header_level)
+ # A heterogeneous stack used in conjunction with the tree traversal.
+ # Make sure that the pops correspond to the pushes:
+ self.context = []
+ self.topic_classes = []
+ self.colspecs = []
+ self.compact_p = 1
+ self.compact_simple = None
+ self.compact_field_list = None
+ self.in_docinfo = None
+ self.in_sidebar = None
+ self.title = []
+ self.subtitle = []
+ self.header = []
+ self.footer = []
+ self.html_head = [self.content_type] # charset not interpolated
+ self.html_title = []
+ self.html_subtitle = []
+ self.html_body = []
+ self.in_document_title = 0
+ self.in_mailto = 0
+ self.author_in_authors = None
+
+ def astext(self):
+ return ''.join(self.head_prefix + self.head
+ + self.stylesheet + self.body_prefix
+ + self.body_pre_docinfo + self.docinfo
+ + self.body + self.body_suffix)
+
+ def encode(self, text):
+ """Encode special characters in `text` & return."""
+ # @@@ A codec to do these and all other HTML entities would be nice.
+ text = unicode(text)
+ return text.translate({
+ ord('&'): u'&',
+ ord('<'): u'<',
+ ord('"'): u'"',
+ ord('>'): u'>',
+ ord('@'): u'@', # may thwart some address harvesters
+ # TODO: convert non-breaking space only if needed?
+ 0xa0: u' '}) # non-breaking space
+
+ def cloak_mailto(self, uri):
+ """Try to hide a mailto: URL from harvesters."""
+ # Encode "@" using a URL octet reference (see RFC 1738).
+ # Further cloaking with HTML entities will be done in the
+ # `attval` function.
+ return uri.replace('@', '%40')
+
+ def cloak_email(self, addr):
+ """Try to hide the link text of an email link from harvesters."""
+ # Surround at-signs and periods with <span> tags. ("@" has
+ # already been encoded to "@" by the `encode` method.)
+ addr = addr.replace('@', '<span>@</span>')
+ addr = addr.replace('.', '<span>.</span>')
+ return addr
+
+ def attval(self, text,
+ whitespace=re.compile('[\n\r\t\v\f]')):
+ """Cleanse, HTML encode, and return attribute value text."""
+ encoded = self.encode(whitespace.sub(' ', text))
+ if self.in_mailto and self.settings.cloak_email_addresses:
+ # Cloak at-signs ("%40") and periods with HTML entities.
+ encoded = encoded.replace('%40', '&#37;&#52;&#48;')
+ encoded = encoded.replace('.', '&#46;')
+ return encoded
+
+ def starttag(self, node, tagname, suffix='\n', empty=0, **attributes):
+ """
+ Construct and return a start tag given a node (id & class attributes
+ are extracted), tag name, and optional attributes.
+ """
+ tagname = tagname.lower()
+ prefix = []
+ atts = {}
+ ids = []
+ for (name, value) in attributes.items():
+ atts[name.lower()] = value
+ classes = node.get('classes', [])
+ if 'class' in atts:
+ classes.append(atts['class'])
+ if classes:
+ atts['class'] = ' '.join(classes)
+ assert 'id' not in atts
+ ids.extend(node.get('ids', []))
+ if 'ids' in atts:
+ ids.extend(atts['ids'])
+ del atts['ids']
+ if ids:
+ atts['id'] = ids[0]
+ for id in ids[1:]:
+ # Add empty "span" elements for additional IDs. Note
+ # that we cannot use empty "a" elements because there
+ # may be targets inside of references, but nested "a"
+ # elements aren't allowed in XHTML (even if they do
+ # not all have a "href" attribute).
+ if empty:
+ # Empty tag. Insert target right in front of element.
+ prefix.append('<span id="%s"></span>' % id)
+ else:
+ # Non-empty tag. Place the auxiliary <span> tag
+ # *inside* the element, as the first child.
+ suffix += '<span id="%s"></span>' % id
+ attlist = atts.items()
+ attlist.sort()
+ parts = [tagname]
+ for name, value in attlist:
+ # value=None was used for boolean attributes without
+ # value, but this isn't supported by XHTML.
+ assert value is not None
+ if isinstance(value, list):
+ values = [unicode(v) for v in value]
+ parts.append('%s="%s"' % (name.lower(),
+ self.attval(' '.join(values))))
+ else:
+ parts.append('%s="%s"' % (name.lower(),
+ self.attval(unicode(value))))
+ if empty:
+ infix = ' /'
+ else:
+ infix = ''
+ return ''.join(prefix) + '<%s%s>' % (' '.join(parts), infix) + suffix
+
+ def emptytag(self, node, tagname, suffix='\n', **attributes):
+ """Construct and return an XML-compatible empty tag."""
+ return self.starttag(node, tagname, suffix, empty=1, **attributes)
+
+ def set_class_on_child(self, node, class_, index=0):
+ """
+ Set class `class_` on the visible child no. `index` of `node`.
+ Do nothing if `node` has fewer children than `index`.
+ """
+ children = [n for n in node if not isinstance(n, nodes.Invisible)]
+ try:
+ child = children[index]
+ except IndexError:
+ return
+ child['classes'].append(class_)
+
+ def set_first_last(self, node):
+ self.set_class_on_child(node, 'first', 0)
+ self.set_class_on_child(node, 'last', -1)
+
+ def visit_Text(self, node):
+ text = node.astext()
+ encoded = self.encode(text)
+ if self.in_mailto and self.settings.cloak_email_addresses:
+ encoded = self.cloak_email(encoded)
+ self.body.append(encoded)
+
+ def depart_Text(self, node):
+ pass
+
+ def visit_abbreviation(self, node):
+ # @@@ implementation incomplete ("title" attribute)
+ self.body.append(self.starttag(node, 'abbr', ''))
+
+ def depart_abbreviation(self, node):
+ self.body.append('</abbr>')
+
+ def visit_acronym(self, node):
+ # @@@ implementation incomplete ("title" attribute)
+ self.body.append(self.starttag(node, 'acronym', ''))
+
+ def depart_acronym(self, node):
+ self.body.append('</acronym>')
+
+ def visit_address(self, node):
+ self.visit_docinfo_item(node, 'address', meta=None)
+ self.body.append(self.starttag(node, 'pre', CLASS='address'))
+
+ def depart_address(self, node):
+ self.body.append('\n</pre>\n')
+ self.depart_docinfo_item()
+
+ def visit_admonition(self, node):
+ self.body.append(self.starttag(node, 'div'))
+ self.set_first_last(node)
+
+ def depart_admonition(self, node=None):
+ self.body.append('</div>\n')
+
+ attribution_formats = {'dash': ('&mdash;', ''),
+ 'parentheses': ('(', ')'),
+ 'parens': ('(', ')'),
+ 'none': ('', '')}
+
+ def visit_attribution(self, node):
+ prefix, suffix = self.attribution_formats[self.settings.attribution]
+ self.context.append(suffix)
+ self.body.append(
+ self.starttag(node, 'p', prefix, CLASS='attribution'))
+
+ def depart_attribution(self, node):
+ self.body.append(self.context.pop() + '</p>\n')
+
+ def visit_author(self, node):
+ if isinstance(node.parent, nodes.authors):
+ if self.author_in_authors:
+ self.body.append('\n<br />')
+ else:
+ self.visit_docinfo_item(node, 'author')
+
+ def depart_author(self, node):
+ if isinstance(node.parent, nodes.authors):
+ self.author_in_authors += 1
+ else:
+ self.depart_docinfo_item()
+
+ def visit_authors(self, node):
+ self.visit_docinfo_item(node, 'authors')
+ self.author_in_authors = 0 # initialize counter
+
+ def depart_authors(self, node):
+ self.depart_docinfo_item()
+ self.author_in_authors = None
+
+ def visit_block_quote(self, node):
+ self.body.append(self.starttag(node, 'blockquote'))
+
+ def depart_block_quote(self, node):
+ self.body.append('</blockquote>\n')
+
+ def check_simple_list(self, node):
+ """Check for a simple list that can be rendered compactly."""
+ visitor = SimpleListChecker(self.document)
+ try:
+ node.walk(visitor)
+ except nodes.NodeFound:
+ return None
+ else:
+ return 1
+
+ def is_compactable(self, node):
+ return ('compact' in node['classes']
+ or (self.settings.compact_lists
+ and 'open' not in node['classes']
+ and (self.compact_simple
+ or self.topic_classes == ['contents']
+ or self.check_simple_list(node))))
+
+ def visit_bullet_list(self, node):
+ atts = {}
+ old_compact_simple = self.compact_simple
+ self.context.append((self.compact_simple, self.compact_p))
+ self.compact_p = None
+ self.compact_simple = self.is_compactable(node)
+ if self.compact_simple and not old_compact_simple:
+ atts['class'] = 'simple'
+ self.body.append(self.starttag(node, 'ul', **atts))
+
+ def depart_bullet_list(self, node):
+ self.compact_simple, self.compact_p = self.context.pop()
+ self.body.append('</ul>\n')
+
+ def visit_caption(self, node):
+ self.body.append(self.starttag(node, 'p', '', CLASS='caption'))
+
+ def depart_caption(self, node):
+ self.body.append('</p>\n')
+
+ def visit_citation(self, node):
+ self.body.append(self.starttag(node, 'table',
+ CLASS='docutils citation',
+ frame="void", rules="none"))
+ self.body.append('<colgroup><col class="label" /><col /></colgroup>\n'
+ '<tbody valign="top">\n'
+ '<tr>')
+ self.footnote_backrefs(node)
+
+ def depart_citation(self, node):
+ self.body.append('</td></tr>\n'
+ '</tbody>\n</table>\n')
+
+ def visit_citation_reference(self, node):
+ href = '#' + node['refid']
+ self.body.append(self.starttag(
+ node, 'a', '[', CLASS='citation-reference', href=href))
+
+ def depart_citation_reference(self, node):
+ self.body.append(']</a>')
+
+ def visit_classifier(self, node):
+ self.body.append(' <span class="classifier-delimiter">:</span> ')
+ self.body.append(self.starttag(node, 'span', '', CLASS='classifier'))
+
+ def depart_classifier(self, node):
+ self.body.append('</span>')
+
+ def visit_colspec(self, node):
+ self.colspecs.append(node)
+ # "stubs" list is an attribute of the tgroup element:
+ node.parent.stubs.append(node.attributes.get('stub'))
+
+ def depart_colspec(self, node):
+ pass
+
+ def write_colspecs(self):
+ width = 0
+ for node in self.colspecs:
+ width += node['colwidth']
+ for node in self.colspecs:
+ colwidth = int(node['colwidth'] * 100.0 / width + 0.5)
+ self.body.append(self.emptytag(node, 'col',
+ width='%i%%' % colwidth))
+ self.colspecs = []
+
+ def visit_comment(self, node,
+ sub=re.compile('-(?=-)').sub):
+ """Escape double-dashes in comment text."""
+ self.body.append('<!-- %s -->\n' % sub('- ', node.astext()))
+ # Content already processed:
+ raise nodes.SkipNode
+
+ def visit_compound(self, node):
+ self.body.append(self.starttag(node, 'div', CLASS='compound'))
+ if len(node) > 1:
+ node[0]['classes'].append('compound-first')
+ node[-1]['classes'].append('compound-last')
+ for child in node[1:-1]:
+ child['classes'].append('compound-middle')
+
+ def depart_compound(self, node):
+ self.body.append('</div>\n')
+
+ def visit_container(self, node):
+ self.body.append(self.starttag(node, 'div', CLASS='container'))
+
+ def depart_container(self, node):
+ self.body.append('</div>\n')
+
+ def visit_contact(self, node):
+ self.visit_docinfo_item(node, 'contact', meta=None)
+
+ def depart_contact(self, node):
+ self.depart_docinfo_item()
+
+ def visit_copyright(self, node):
+ self.visit_docinfo_item(node, 'copyright')
+
+ def depart_copyright(self, node):
+ self.depart_docinfo_item()
+
+ def visit_date(self, node):
+ self.visit_docinfo_item(node, 'date')
+
+ def depart_date(self, node):
+ self.depart_docinfo_item()
+
+ def visit_decoration(self, node):
+ pass
+
+ def depart_decoration(self, node):
+ pass
+
+ def visit_definition(self, node):
+ self.body.append('</dt>\n')
+ self.body.append(self.starttag(node, 'dd', ''))
+ self.set_first_last(node)
+
+ def depart_definition(self, node):
+ self.body.append('</dd>\n')
+
+ def visit_definition_list(self, node):
+ self.body.append(self.starttag(node, 'dl', CLASS='docutils'))
+
+ def depart_definition_list(self, node):
+ self.body.append('</dl>\n')
+
+ def visit_definition_list_item(self, node):
+ pass
+
+ def depart_definition_list_item(self, node):
+ pass
+
+ def visit_description(self, node):
+ self.body.append(self.starttag(node, 'td', ''))
+ self.set_first_last(node)
+
+ def depart_description(self, node):
+ self.body.append('</td>')
+
+ def visit_docinfo(self, node):
+ self.context.append(len(self.body))
+ self.body.append(self.starttag(node, 'table',
+ CLASS='docinfo',
+ frame="void", rules="none"))
+ self.body.append('<col class="docinfo-name" />\n'
+ '<col class="docinfo-content" />\n'
+ '<tbody valign="top">\n')
+ self.in_docinfo = 1
+
+ def depart_docinfo(self, node):
+ self.body.append('</tbody>\n</table>\n')
+ self.in_docinfo = None
+ start = self.context.pop()
+ self.docinfo = self.body[start:]
+ self.body = []
+
+ def visit_docinfo_item(self, node, name, meta=1):
+ if meta:
+ meta_tag = '<meta name="%s" content="%s" />\n' \
+ % (name, self.attval(node.astext()))
+ self.add_meta(meta_tag)
+ self.body.append(self.starttag(node, 'tr', ''))
+ self.body.append('<th class="docinfo-name">%s:</th>\n<td>'
+ % self.language.labels[name])
+ if len(node):
+ if isinstance(node[0], nodes.Element):
+ node[0]['classes'].append('first')
+ if isinstance(node[-1], nodes.Element):
+ node[-1]['classes'].append('last')
+
+ def depart_docinfo_item(self):
+ self.body.append('</td></tr>\n')
+
+ def visit_doctest_block(self, node):
+ self.body.append(self.starttag(node, 'pre', CLASS='doctest-block'))
+
+ def depart_doctest_block(self, node):
+ self.body.append('\n</pre>\n')
+
+ def visit_document(self, node):
+ self.head.append('<title>%s</title>\n'
+ % self.encode(node.get('title', '')))
+
+ def depart_document(self, node):
+ self.fragment.extend(self.body)
+ self.body_prefix.append(self.starttag(node, 'div', CLASS='document'))
+ self.body_suffix.insert(0, '</div>\n')
+ # skip content-type meta tag with interpolated charset value:
+ self.html_head.extend(self.head[1:])
+ self.html_body.extend(self.body_prefix[1:] + self.body_pre_docinfo
+ + self.docinfo + self.body
+ + self.body_suffix[:-1])
+ assert not self.context, 'len(context) = %s' % len(self.context)
+
+ def visit_emphasis(self, node):
+ self.body.append(self.starttag(node, 'em', ''))
+
+ def depart_emphasis(self, node):
+ self.body.append('</em>')
+
+ def visit_entry(self, node):
+ atts = {'class': []}
+ if isinstance(node.parent.parent, nodes.thead):
+ atts['class'].append('head')
+ if node.parent.parent.parent.stubs[node.parent.column]:
+ # "stubs" list is an attribute of the tgroup element
+ atts['class'].append('stub')
+ if atts['class']:
+ tagname = 'th'
+ atts['class'] = ' '.join(atts['class'])
+ else:
+ tagname = 'td'
+ del atts['class']
+ node.parent.column += 1
+ if 'morerows' in node:
+ atts['rowspan'] = node['morerows'] + 1
+ if 'morecols' in node:
+ atts['colspan'] = node['morecols'] + 1
+ node.parent.column += node['morecols']
+ self.body.append(self.starttag(node, tagname, '', **atts))
+ self.context.append('</%s>\n' % tagname.lower())
+ if len(node) == 0: # empty cell
+ self.body.append('&nbsp;')
+ self.set_first_last(node)
+
+ def depart_entry(self, node):
+ self.body.append(self.context.pop())
+
+ def visit_enumerated_list(self, node):
+ """
+ The 'start' attribute does not conform to HTML 4.01's strict.dtd, but
+ CSS1 doesn't help. CSS2 isn't widely enough supported yet to be
+ usable.
+ """
+ atts = {}
+ if 'start' in node:
+ atts['start'] = node['start']
+ if 'enumtype' in node:
+ atts['class'] = node['enumtype']
+ # @@@ To do: prefix, suffix. How? Change prefix/suffix to a
+ # single "format" attribute? Use CSS2?
+ old_compact_simple = self.compact_simple
+ self.context.append((self.compact_simple, self.compact_p))
+ self.compact_p = None
+ self.compact_simple = self.is_compactable(node)
+ if self.compact_simple and not old_compact_simple:
+ atts['class'] = (atts.get('class', '') + ' simple').strip()
+ self.body.append(self.starttag(node, 'ol', **atts))
+
+ def depart_enumerated_list(self, node):
+ self.compact_simple, self.compact_p = self.context.pop()
+ self.body.append('</ol>\n')
+
+ def visit_field(self, node):
+ self.body.append(self.starttag(node, 'tr', '', CLASS='field'))
+
+ def depart_field(self, node):
+ self.body.append('</tr>\n')
+
+ def visit_field_body(self, node):
+ self.body.append(self.starttag(node, 'td', '', CLASS='field-body'))
+ self.set_class_on_child(node, 'first', 0)
+ field = node.parent
+ if (self.compact_field_list or
+ isinstance(field.parent, nodes.docinfo) or
+ field.parent.index(field) == len(field.parent) - 1):
+ # If we are in a compact list, the docinfo, or if this is
+ # the last field of the field list, do not add vertical
+ # space after last element.
+ self.set_class_on_child(node, 'last', -1)
+
+ def depart_field_body(self, node):
+ self.body.append('</td>\n')
+
+ def visit_field_list(self, node):
+ self.context.append((self.compact_field_list, self.compact_p))
+ self.compact_p = None
+ if 'compact' in node['classes']:
+ self.compact_field_list = 1
+ elif (self.settings.compact_field_lists
+ and 'open' not in node['classes']):
+ self.compact_field_list = 1
+ if self.compact_field_list:
+ for field in node:
+ field_body = field[-1]
+ assert isinstance(field_body, nodes.field_body)
+ children = [n for n in field_body
+ if not isinstance(n, nodes.Invisible)]
+ if not (len(children) == 0 or
+ len(children) == 1 and
+ isinstance(children[0],
+ (nodes.paragraph, nodes.line_block))):
+ self.compact_field_list = 0
+ break
+ self.body.append(self.starttag(node, 'table', frame='void',
+ rules='none',
+ CLASS='docutils field-list'))
+ self.body.append('<col class="field-name" />\n'
+ '<col class="field-body" />\n'
+ '<tbody valign="top">\n')
+
+ def depart_field_list(self, node):
+ self.body.append('</tbody>\n</table>\n')
+ self.compact_field_list, self.compact_p = self.context.pop()
+
+ def visit_field_name(self, node):
+ atts = {}
+ if self.in_docinfo:
+ atts['class'] = 'docinfo-name'
+ else:
+ atts['class'] = 'field-name'
+ if ( self.settings.field_name_limit
+ and len(node.astext()) > self.settings.field_name_limit):
+ atts['colspan'] = 2
+ self.context.append('</tr>\n<tr><td>&nbsp;</td>')
+ else:
+ self.context.append('')
+ self.body.append(self.starttag(node, 'th', '', **atts))
+
+ def depart_field_name(self, node):
+ self.body.append(':</th>')
+ self.body.append(self.context.pop())
+
+ def visit_figure(self, node):
+ atts = {'class': 'figure'}
+ if node.get('width'):
+ atts['style'] = 'width: %s' % node['width']
+ if node.get('align'):
+ atts['class'] += " align-" + node['align']
+ self.body.append(self.starttag(node, 'div', **atts))
+
+ def depart_figure(self, node):
+ self.body.append('</div>\n')
+
+ def visit_footer(self, node):
+ self.context.append(len(self.body))
+
+ def depart_footer(self, node):
+ start = self.context.pop()
+ footer = [self.starttag(node, 'div', CLASS='footer'),
+ '<hr class="footer" />\n']
+ footer.extend(self.body[start:])
+ footer.append('\n</div>\n')
+ self.footer.extend(footer)
+ self.body_suffix[:0] = footer
+ del self.body[start:]
+
+ def visit_footnote(self, node):
+ self.body.append(self.starttag(node, 'table',
+ CLASS='docutils footnote',
+ frame="void", rules="none"))
+ self.body.append('<colgroup><col class="label" /><col /></colgroup>\n'
+ '<tbody valign="top">\n'
+ '<tr>')
+ self.footnote_backrefs(node)
+
+ def footnote_backrefs(self, node):
+ backlinks = []
+ backrefs = node['backrefs']
+ if self.settings.footnote_backlinks and backrefs:
+ if len(backrefs) == 1:
+ self.context.append('')
+ self.context.append('</a>')
+ self.context.append('<a class="fn-backref" href="#%s">'
+ % backrefs[0])
+ else:
+ i = 1
+ for backref in backrefs:
+ backlinks.append('<a class="fn-backref" href="#%s">%s</a>'
+ % (backref, i))
+ i += 1
+ self.context.append('<em>(%s)</em> ' % ', '.join(backlinks))
+ self.context += ['', '']
+ else:
+ self.context.append('')
+ self.context += ['', '']
+ # If the node consists of more than just a label:
+ if len(node) > 1:
+ # If there are preceding backlinks, we do not set class
+ # 'first', because we need to retain the top-margin.
+ if not backlinks:
+ node[1]['classes'].append('first')
+ node[-1]['classes'].append('last')
+
+ def depart_footnote(self, node):
+ self.body.append('</td></tr>\n'
+ '</tbody>\n</table>\n')
+
+ def visit_footnote_reference(self, node):
+ href = '#' + node['refid']
+ format = self.settings.footnote_references
+ if format == 'brackets':
+ suffix = '['
+ self.context.append(']')
+ else:
+ assert format == 'superscript'
+ suffix = '<sup>'
+ self.context.append('</sup>')
+ self.body.append(self.starttag(node, 'a', suffix,
+ CLASS='footnote-reference', href=href))
+
+ def depart_footnote_reference(self, node):
+ self.body.append(self.context.pop() + '</a>')
+
+ def visit_generated(self, node):
+ pass
+
+ def depart_generated(self, node):
+ pass
+
+ def visit_header(self, node):
+ self.context.append(len(self.body))
+
+ def depart_header(self, node):
+ start = self.context.pop()
+ header = [self.starttag(node, 'div', CLASS='header')]
+ header.extend(self.body[start:])
+ header.append('\n<hr class="header"/>\n</div>\n')
+ self.body_prefix.extend(header)
+ self.header.extend(header)
+ del self.body[start:]
+
+ def visit_image(self, node):
+ atts = {}
+ uri = node['uri']
+ # place SVG and SWF images in an <object> element
+ types = {'.svg': 'image/svg+xml',
+ '.swf': 'application/x-shockwave-flash'}
+ ext = os.path.splitext(uri)[1].lower()
+ if ext in ('.svg', '.swf'):
+ atts['data'] = uri
+ atts['type'] = types[ext]
+ else:
+ atts['src'] = uri
+ atts['alt'] = node.get('alt', uri)
+ # image size
+ if 'width' in node:
+ atts['width'] = node['width']
+ if 'height' in node:
+ atts['height'] = node['height']
+ if 'scale' in node:
+ if Image and not ('width' in node and 'height' in node):
+ try:
+ im = Image.open(str(uri))
+ except (IOError, # Source image can't be found or opened
+ UnicodeError): # PIL doesn't like Unicode paths.
+ pass
+ else:
+ if 'width' not in atts:
+ atts['width'] = str(im.size[0])
+ if 'height' not in atts:
+ atts['height'] = str(im.size[1])
+ del im
+ for att_name in 'width', 'height':
+ if att_name in atts:
+ match = re.match(r'([0-9.]+)(\S*)$', atts[att_name])
+ assert match
+ atts[att_name] = '%s%s' % (
+ float(match.group(1)) * (float(node['scale']) / 100),
+ match.group(2))
+ style = []
+ for att_name in 'width', 'height':
+ if att_name in atts:
+ if re.match(r'^[0-9.]+$', atts[att_name]):
+ # Interpret unitless values as pixels.
+ atts[att_name] += 'px'
+ style.append('%s: %s;' % (att_name, atts[att_name]))
+ del atts[att_name]
+ if style:
+ atts['style'] = ' '.join(style)
+ if (isinstance(node.parent, nodes.TextElement) or
+ (isinstance(node.parent, nodes.reference) and
+ not isinstance(node.parent.parent, nodes.TextElement))):
+ # Inline context or surrounded by <a>...</a>.
+ suffix = ''
+ else:
+ suffix = '\n'
+ if 'align' in node:
+ atts['class'] = 'align-%s' % node['align']
+ self.context.append('')
+ if ext in ('.svg', '.swf'): # place in an object element,
+ # do NOT use an empty tag: incorrect rendering in browsers
+ self.body.append(self.starttag(node, 'object', suffix, **atts) +
+ node.get('alt', uri) + '</object>' + suffix)
+ else:
+ self.body.append(self.emptytag(node, 'img', suffix, **atts))
+
+ def depart_image(self, node):
+ self.body.append(self.context.pop())
+
+ def visit_inline(self, node):
+ self.body.append(self.starttag(node, 'span', ''))
+
+ def depart_inline(self, node):
+ self.body.append('</span>')
+
+ def visit_label(self, node):
+ # Context added in footnote_backrefs.
+ self.body.append(self.starttag(node, 'td', '%s[' % self.context.pop(),
+ CLASS='label'))
+
+ def depart_label(self, node):
+ # Context added in footnote_backrefs.
+ self.body.append(']%s</td><td>%s' % (self.context.pop(), self.context.pop()))
+
+ def visit_legend(self, node):
+ self.body.append(self.starttag(node, 'div', CLASS='legend'))
+
+ def depart_legend(self, node):
+ self.body.append('</div>\n')
+
+ def visit_line(self, node):
+ self.body.append(self.starttag(node, 'div', suffix='', CLASS='line'))
+ if not len(node):
+ self.body.append('<br />')
+
+ def depart_line(self, node):
+ self.body.append('</div>\n')
+
+ def visit_line_block(self, node):
+ self.body.append(self.starttag(node, 'div', CLASS='line-block'))
+
+ def depart_line_block(self, node):
+ self.body.append('</div>\n')
+
+ def visit_list_item(self, node):
+ self.body.append(self.starttag(node, 'li', ''))
+ if len(node):
+ node[0]['classes'].append('first')
+
+ def depart_list_item(self, node):
+ self.body.append('</li>\n')
+
+ def visit_literal(self, node):
+ """Process text to prevent tokens from wrapping."""
+ self.body.append(
+ self.starttag(node, 'tt', '', CLASS='docutils literal'))
+ text = node.astext()
+ for token in self.words_and_spaces.findall(text):
+ if token.strip():
+ # Protect text like "--an-option" and the regular expression
+ # ``[+]?(\d+(\.\d*)?|\.\d+)`` from bad line wrapping
+ if self.sollbruchstelle.search(token):
+ self.body.append('<span class="pre">%s</span>'
+ % self.encode(token))
+ else:
+ self.body.append(self.encode(token))
+ elif token in ('\n', ' '):
+ # Allow breaks at whitespace:
+ self.body.append(token)
+ else:
+ # Protect runs of multiple spaces; the last space can wrap:
+ self.body.append('&nbsp;' * (len(token) - 1) + ' ')
+ self.body.append('</tt>')
+ # Content already processed:
+ raise nodes.SkipNode
+
+ def visit_literal_block(self, node):
+ self.body.append(self.starttag(node, 'pre', CLASS='literal-block'))
+
+ def depart_literal_block(self, node):
+ self.body.append('\n</pre>\n')
+
+ def visit_meta(self, node):
+ meta = self.emptytag(node, 'meta', **node.non_default_attributes())
+ self.add_meta(meta)
+
+ def depart_meta(self, node):
+ pass
+
+ def add_meta(self, tag):
+ self.meta.append(tag)
+ self.head.append(tag)
+
+ def visit_option(self, node):
+ if self.context[-1]:
+ self.body.append(', ')
+ self.body.append(self.starttag(node, 'span', '', CLASS='option'))
+
+ def depart_option(self, node):
+ self.body.append('</span>')
+ self.context[-1] += 1
+
+ def visit_option_argument(self, node):
+ self.body.append(node.get('delimiter', ' '))
+ self.body.append(self.starttag(node, 'var', ''))
+
+ def depart_option_argument(self, node):
+ self.body.append('</var>')
+
+ def visit_option_group(self, node):
+ atts = {}
+ if ( self.settings.option_limit
+ and len(node.astext()) > self.settings.option_limit):
+ atts['colspan'] = 2
+ self.context.append('</tr>\n<tr><td>&nbsp;</td>')
+ else:
+ self.context.append('')
+ self.body.append(
+ self.starttag(node, 'td', CLASS='option-group', **atts))
+ self.body.append('<kbd>')
+ self.context.append(0) # count number of options
+
+ def depart_option_group(self, node):
+ self.context.pop()
+ self.body.append('</kbd></td>\n')
+ self.body.append(self.context.pop())
+
+ def visit_option_list(self, node):
+ self.body.append(
+ self.starttag(node, 'table', CLASS='docutils option-list',
+ frame="void", rules="none"))
+ self.body.append('<col class="option" />\n'
+ '<col class="description" />\n'
+ '<tbody valign="top">\n')
+
+ def depart_option_list(self, node):
+ self.body.append('</tbody>\n</table>\n')
+
+ def visit_option_list_item(self, node):
+ self.body.append(self.starttag(node, 'tr', ''))
+
+ def depart_option_list_item(self, node):
+ self.body.append('</tr>\n')
+
+ def visit_option_string(self, node):
+ pass
+
+ def depart_option_string(self, node):
+ pass
+
+ def visit_organization(self, node):
+ self.visit_docinfo_item(node, 'organization')
+
+ def depart_organization(self, node):
+ self.depart_docinfo_item()
+
+ def should_be_compact_paragraph(self, node):
+ """
+ Determine if the <p> tags around paragraph ``node`` can be omitted.
+ """
+ if (isinstance(node.parent, nodes.document) or
+ isinstance(node.parent, nodes.compound)):
+ # Never compact paragraphs in document or compound.
+ return 0
+ for key, value in node.attlist():
+ if (node.is_not_default(key) and
+ not (key == 'classes' and value in
+ ([], ['first'], ['last'], ['first', 'last']))):
+ # Attribute which needs to survive.
+ return 0
+ first = isinstance(node.parent[0], nodes.label) # skip label
+ for child in node.parent.children[first:]:
+ # only first paragraph can be compact
+ if isinstance(child, nodes.Invisible):
+ continue
+ if child is node:
+ break
+ return 0
+ parent_length = len([n for n in node.parent if not isinstance(
+ n, (nodes.Invisible, nodes.label))])
+ if ( self.compact_simple
+ or self.compact_field_list
+ or self.compact_p and parent_length == 1):
+ return 1
+ return 0
+
+ def visit_paragraph(self, node):
+ if self.should_be_compact_paragraph(node):
+ self.context.append('')
+ else:
+ self.body.append(self.starttag(node, 'p', ''))
+ self.context.append('</p>\n')
+
+ def depart_paragraph(self, node):
+ self.body.append(self.context.pop())
+
+ def visit_problematic(self, node):
+ if node.hasattr('refid'):
+ self.body.append('<a href="#%s">' % node['refid'])
+ self.context.append('</a>')
+ else:
+ self.context.append('')
+ self.body.append(self.starttag(node, 'span', '', CLASS='problematic'))
+
+ def depart_problematic(self, node):
+ self.body.append('</span>')
+ self.body.append(self.context.pop())
+
+ def visit_raw(self, node):
+ if 'html' in node.get('format', '').split():
+ t = isinstance(node.parent, nodes.TextElement) and 'span' or 'div'
+ if node['classes']:
+ self.body.append(self.starttag(node, t, suffix=''))
+ self.body.append(node.astext())
+ if node['classes']:
+ self.body.append('</%s>' % t)
+ # Keep non-HTML raw text out of output:
+ raise nodes.SkipNode
+
+ def visit_reference(self, node):
+ atts = {'class': 'reference'}
+ if 'refuri' in node:
+ atts['href'] = node['refuri']
+ if ( self.settings.cloak_email_addresses
+ and atts['href'].startswith('mailto:')):
+ atts['href'] = self.cloak_mailto(atts['href'])
+ self.in_mailto = 1
+ atts['class'] += ' external'
+ else:
+ assert 'refid' in node, \
+ 'References must have "refuri" or "refid" attribute.'
+ atts['href'] = '#' + node['refid']
+ atts['class'] += ' internal'
+ if not isinstance(node.parent, nodes.TextElement):
+ assert len(node) == 1 and isinstance(node[0], nodes.image)
+ atts['class'] += ' image-reference'
+ self.body.append(self.starttag(node, 'a', '', **atts))
+
+ def depart_reference(self, node):
+ self.body.append('</a>')
+ if not isinstance(node.parent, nodes.TextElement):
+ self.body.append('\n')
+ self.in_mailto = 0
+
+ def visit_revision(self, node):
+ self.visit_docinfo_item(node, 'revision', meta=None)
+
+ def depart_revision(self, node):
+ self.depart_docinfo_item()
+
+ def visit_row(self, node):
+ self.body.append(self.starttag(node, 'tr', ''))
+ node.column = 0
+
+ def depart_row(self, node):
+ self.body.append('</tr>\n')
+
+ def visit_rubric(self, node):
+ self.body.append(self.starttag(node, 'p', '', CLASS='rubric'))
+
+ def depart_rubric(self, node):
+ self.body.append('</p>\n')
+
+ def visit_section(self, node):
+ self.section_level += 1
+ self.body.append(
+ self.starttag(node, 'div', CLASS='section'))
+
+ def depart_section(self, node):
+ self.section_level -= 1
+ self.body.append('</div>\n')
+
+ def visit_sidebar(self, node):
+ self.body.append(
+ self.starttag(node, 'div', CLASS='sidebar'))
+ self.set_first_last(node)
+ self.in_sidebar = 1
+
+ def depart_sidebar(self, node):
+ self.body.append('</div>\n')
+ self.in_sidebar = None
+
+ def visit_status(self, node):
+ self.visit_docinfo_item(node, 'status', meta=None)
+
+ def depart_status(self, node):
+ self.depart_docinfo_item()
+
+ def visit_strong(self, node):
+ self.body.append(self.starttag(node, 'strong', ''))
+
+ def depart_strong(self, node):
+ self.body.append('</strong>')
+
+ def visit_subscript(self, node):
+ self.body.append(self.starttag(node, 'sub', ''))
+
+ def depart_subscript(self, node):
+ self.body.append('</sub>')
+
+ def visit_substitution_definition(self, node):
+ """Internal only."""
+ raise nodes.SkipNode
+
+ def visit_substitution_reference(self, node):
+ self.unimplemented_visit(node)
+
+ def visit_subtitle(self, node):
+ if isinstance(node.parent, nodes.sidebar):
+ self.body.append(self.starttag(node, 'p', '',
+ CLASS='sidebar-subtitle'))
+ self.context.append('</p>\n')
+ elif isinstance(node.parent, nodes.document):
+ self.body.append(self.starttag(node, 'h2', '', CLASS='subtitle'))
+ self.context.append('</h2>\n')
+ self.in_document_title = len(self.body)
+ elif isinstance(node.parent, nodes.section):
+ tag = 'h%s' % (self.section_level + self.initial_header_level - 1)
+ self.body.append(
+ self.starttag(node, tag, '', CLASS='section-subtitle') +
+ self.starttag({}, 'span', '', CLASS='section-subtitle'))
+ self.context.append('</span></%s>\n' % tag)
+
+ def depart_subtitle(self, node):
+ self.body.append(self.context.pop())
+ if self.in_document_title:
+ self.subtitle = self.body[self.in_document_title:-1]
+ self.in_document_title = 0
+ self.body_pre_docinfo.extend(self.body)
+ self.html_subtitle.extend(self.body)
+ del self.body[:]
+
+ def visit_superscript(self, node):
+ self.body.append(self.starttag(node, 'sup', ''))
+
+ def depart_superscript(self, node):
+ self.body.append('</sup>')
+
+ def visit_system_message(self, node):
+ self.body.append(self.starttag(node, 'div', CLASS='system-message'))
+ self.body.append('<p class="system-message-title">')
+ backref_text = ''
+ if len(node['backrefs']):
+ backrefs = node['backrefs']
+ if len(backrefs) == 1:
+ backref_text = ('; <em><a href="#%s">backlink</a></em>'
+ % backrefs[0])
+ else:
+ i = 1
+ backlinks = []
+ for backref in backrefs:
+ backlinks.append('<a href="#%s">%s</a>' % (backref, i))
+ i += 1
+ backref_text = ('; <em>backlinks: %s</em>'
+ % ', '.join(backlinks))
+ if node.hasattr('line'):
+ line = ', line %s' % node['line']
+ else:
+ line = ''
+ self.body.append('System Message: %s/%s '
+ '(<tt class="docutils">%s</tt>%s)%s</p>\n'
+ % (node['type'], node['level'],
+ self.encode(node['source']), line, backref_text))
+
+ def depart_system_message(self, node):
+ self.body.append('</div>\n')
+
+ def visit_table(self, node):
+ classes = ' '.join(['docutils', self.settings.table_style]).strip()
+ self.body.append(
+ self.starttag(node, 'table', CLASS=classes, border="1"))
+
+ def depart_table(self, node):
+ self.body.append('</table>\n')
+
+ def visit_target(self, node):
+ if not ('refuri' in node or 'refid' in node
+ or 'refname' in node):
+ self.body.append(self.starttag(node, 'span', '', CLASS='target'))
+ self.context.append('</span>')
+ else:
+ self.context.append('')
+
+ def depart_target(self, node):
+ self.body.append(self.context.pop())
+
+ def visit_tbody(self, node):
+ self.write_colspecs()
+ self.body.append(self.context.pop()) # '</colgroup>\n' or ''
+ self.body.append(self.starttag(node, 'tbody', valign='top'))
+
+ def depart_tbody(self, node):
+ self.body.append('</tbody>\n')
+
+ def visit_term(self, node):
+ self.body.append(self.starttag(node, 'dt', ''))
+
+ def depart_term(self, node):
+ """
+ Leave the end tag to `self.visit_definition()`, in case there's a
+ classifier.
+ """
+ pass
+
+ def visit_tgroup(self, node):
+ # Mozilla needs <colgroup>:
+ self.body.append(self.starttag(node, 'colgroup'))
+ # Appended by thead or tbody:
+ self.context.append('</colgroup>\n')
+ node.stubs = []
+
+ def depart_tgroup(self, node):
+ pass
+
+ def visit_thead(self, node):
+ self.write_colspecs()
+ self.body.append(self.context.pop()) # '</colgroup>\n'
+ # There may or may not be a <thead>; this is for <tbody> to use:
+ self.context.append('')
+ self.body.append(self.starttag(node, 'thead', valign='bottom'))
+
+ def depart_thead(self, node):
+ self.body.append('</thead>\n')
+
+ def visit_title(self, node):
+ """Only 6 section levels are supported by HTML."""
+ check_id = 0
+ close_tag = '</p>\n'
+ if isinstance(node.parent, nodes.topic):
+ self.body.append(
+ self.starttag(node, 'p', '', CLASS='topic-title first'))
+ elif isinstance(node.parent, nodes.sidebar):
+ self.body.append(
+ self.starttag(node, 'p', '', CLASS='sidebar-title'))
+ elif isinstance(node.parent, nodes.Admonition):
+ self.body.append(
+ self.starttag(node, 'p', '', CLASS='admonition-title'))
+ elif isinstance(node.parent, nodes.table):
+ self.body.append(
+ self.starttag(node, 'caption', ''))
+ close_tag = '</caption>\n'
+ elif isinstance(node.parent, nodes.document):
+ self.body.append(self.starttag(node, 'h1', '', CLASS='title'))
+ close_tag = '</h1>\n'
+ self.in_document_title = len(self.body)
+ else:
+ assert isinstance(node.parent, nodes.section)
+ h_level = self.section_level + self.initial_header_level - 1
+ atts = {}
+ if (len(node.parent) >= 2 and
+ isinstance(node.parent[1], nodes.subtitle)):
+ atts['CLASS'] = 'with-subtitle'
+ self.body.append(
+ self.starttag(node, 'h%s' % h_level, '', **atts))
+ atts = {}
+ if node.hasattr('refid'):
+ atts['class'] = 'toc-backref'
+ atts['href'] = '#' + node['refid']
+ if atts:
+ self.body.append(self.starttag({}, 'a', '', **atts))
+ close_tag = '</a></h%s>\n' % (h_level)
+ else:
+ close_tag = '</h%s>\n' % (h_level)
+ self.context.append(close_tag)
+
+ def depart_title(self, node):
+ self.body.append(self.context.pop())
+ if self.in_document_title:
+ self.title = self.body[self.in_document_title:-1]
+ self.in_document_title = 0
+ self.body_pre_docinfo.extend(self.body)
+ self.html_title.extend(self.body)
+ del self.body[:]
+
+ def visit_title_reference(self, node):
+ self.body.append(self.starttag(node, 'cite', ''))
+
+ def depart_title_reference(self, node):
+ self.body.append('</cite>')
+
+ def visit_topic(self, node):
+ self.body.append(self.starttag(node, 'div', CLASS='topic'))
+ self.topic_classes = node['classes']
+
+ def depart_topic(self, node):
+ self.body.append('</div>\n')
+ self.topic_classes = []
+
+ def visit_transition(self, node):
+ self.body.append(self.emptytag(node, 'hr', CLASS='docutils'))
+
+ def depart_transition(self, node):
+ pass
+
+ def visit_version(self, node):
+ self.visit_docinfo_item(node, 'version', meta=None)
+
+ def depart_version(self, node):
+ self.depart_docinfo_item()
+
+ def unimplemented_visit(self, node):
+ raise NotImplementedError('visiting unimplemented node type: %s'
+ % node.__class__.__name__)
+
+
+class SimpleListChecker(nodes.GenericNodeVisitor):
+
+ """
+ Raise `nodes.NodeFound` if a non-simple list item is encountered.
+
+ Here "simple" means a list item containing nothing other than a single
+ paragraph, a simple list, or a paragraph followed by a simple list.
+ """
+
+ def default_visit(self, node):
+ raise nodes.NodeFound
+
+ def visit_bullet_list(self, node):
+ pass
+
+ def visit_enumerated_list(self, node):
+ pass
+
+ def visit_list_item(self, node):
+ children = []
+ for child in node.children:
+ if not isinstance(child, nodes.Invisible):
+ children.append(child)
+ if (children and isinstance(children[0], nodes.paragraph)
+ and (isinstance(children[-1], nodes.bullet_list)
+ or isinstance(children[-1], nodes.enumerated_list))):
+ children.pop()
+ if len(children) <= 1:
+ return
+ else:
+ raise nodes.NodeFound
+
+ def visit_paragraph(self, node):
+ raise nodes.SkipNode
+
+ def invisible_visit(self, node):
+ """Invisible nodes should be ignored."""
+ raise nodes.SkipNode
+
+ visit_comment = invisible_visit
+ visit_substitution_definition = invisible_visit
+ visit_target = invisible_visit
+ visit_pending = invisible_visit
diff --git a/python/helpers/docutils/writers/html4css1/html4css1.css b/python/helpers/docutils/writers/html4css1/html4css1.css
new file mode 100644
index 0000000..4374263
--- /dev/null
+++ b/python/helpers/docutils/writers/html4css1/html4css1.css
@@ -0,0 +1,303 @@
+/*
+:Author: David Goodger ([email protected])
+:Id: $Id: html4css1.css 6387 2010-08-13 12:23:41Z milde $
+:Copyright: This stylesheet has been placed in the public domain.
+
+Default cascading style sheet for the HTML output of Docutils.
+
+See http://docutils.sf.net/docs/howto/html-stylesheets.html for how to
+customize this style sheet.
+*/
+
+/* used to remove borders from tables and images */
+.borderless, table.borderless td, table.borderless th {
+ border: 0 }
+
+table.borderless td, table.borderless th {
+ /* Override padding for "table.docutils td" with "! important".
+ The right padding separates the table cells. */
+ padding: 0 0.5em 0 0 ! important }
+
+.first {
+ /* Override more specific margin styles with "! important". */
+ margin-top: 0 ! important }
+
+.last, .with-subtitle {
+ margin-bottom: 0 ! important }
+
+.hidden {
+ display: none }
+
+a.toc-backref {
+ text-decoration: none ;
+ color: black }
+
+blockquote.epigraph {
+ margin: 2em 5em ; }
+
+dl.docutils dd {
+ margin-bottom: 0.5em }
+
+object[type="image/svg+xml"], object[type="application/x-shockwave-flash"] {
+ overflow: hidden;
+}
+
+/* Uncomment (and remove this text!) to get bold-faced definition list terms
+dl.docutils dt {
+ font-weight: bold }
+*/
+
+div.abstract {
+ margin: 2em 5em }
+
+div.abstract p.topic-title {
+ font-weight: bold ;
+ text-align: center }
+
+div.admonition, div.attention, div.caution, div.danger, div.error,
+div.hint, div.important, div.note, div.tip, div.warning {
+ margin: 2em ;
+ border: medium outset ;
+ padding: 1em }
+
+div.admonition p.admonition-title, div.hint p.admonition-title,
+div.important p.admonition-title, div.note p.admonition-title,
+div.tip p.admonition-title {
+ font-weight: bold ;
+ font-family: sans-serif }
+
+div.attention p.admonition-title, div.caution p.admonition-title,
+div.danger p.admonition-title, div.error p.admonition-title,
+div.warning p.admonition-title {
+ color: red ;
+ font-weight: bold ;
+ font-family: sans-serif }
+
+/* Uncomment (and remove this text!) to get reduced vertical space in
+ compound paragraphs.
+div.compound .compound-first, div.compound .compound-middle {
+ margin-bottom: 0.5em }
+
+div.compound .compound-last, div.compound .compound-middle {
+ margin-top: 0.5em }
+*/
+
+div.dedication {
+ margin: 2em 5em ;
+ text-align: center ;
+ font-style: italic }
+
+div.dedication p.topic-title {
+ font-weight: bold ;
+ font-style: normal }
+
+div.figure {
+ margin-left: 2em ;
+ margin-right: 2em }
+
+div.footer, div.header {
+ clear: both;
+ font-size: smaller }
+
+div.line-block {
+ display: block ;
+ margin-top: 1em ;
+ margin-bottom: 1em }
+
+div.line-block div.line-block {
+ margin-top: 0 ;
+ margin-bottom: 0 ;
+ margin-left: 1.5em }
+
+div.sidebar {
+ margin: 0 0 0.5em 1em ;
+ border: medium outset ;
+ padding: 1em ;
+ background-color: #ffffee ;
+ width: 40% ;
+ float: right ;
+ clear: right }
+
+div.sidebar p.rubric {
+ font-family: sans-serif ;
+ font-size: medium }
+
+div.system-messages {
+ margin: 5em }
+
+div.system-messages h1 {
+ color: red }
+
+div.system-message {
+ border: medium outset ;
+ padding: 1em }
+
+div.system-message p.system-message-title {
+ color: red ;
+ font-weight: bold }
+
+div.topic {
+ margin: 2em }
+
+h1.section-subtitle, h2.section-subtitle, h3.section-subtitle,
+h4.section-subtitle, h5.section-subtitle, h6.section-subtitle {
+ margin-top: 0.4em }
+
+h1.title {
+ text-align: center }
+
+h2.subtitle {
+ text-align: center }
+
+hr.docutils {
+ width: 75% }
+
+img.align-left, .figure.align-left, object.align-left {
+ clear: left ;
+ float: left ;
+ margin-right: 1em }
+
+img.align-right, .figure.align-right, object.align-right {
+ clear: right ;
+ float: right ;
+ margin-left: 1em }
+
+img.align-center, .figure.align-center, object.align-center {
+ display: block;
+ margin-left: auto;
+ margin-right: auto;
+}
+
+.align-left {
+ text-align: left }
+
+.align-center {
+ clear: both ;
+ text-align: center }
+
+.align-right {
+ text-align: right }
+
+/* reset inner alignment in figures */
+div.align-right {
+ text-align: left }
+
+/* div.align-center * { */
+/* text-align: left } */
+
+ol.simple, ul.simple {
+ margin-bottom: 1em }
+
+ol.arabic {
+ list-style: decimal }
+
+ol.loweralpha {
+ list-style: lower-alpha }
+
+ol.upperalpha {
+ list-style: upper-alpha }
+
+ol.lowerroman {
+ list-style: lower-roman }
+
+ol.upperroman {
+ list-style: upper-roman }
+
+p.attribution {
+ text-align: right ;
+ margin-left: 50% }
+
+p.caption {
+ font-style: italic }
+
+p.credits {
+ font-style: italic ;
+ font-size: smaller }
+
+p.label {
+ white-space: nowrap }
+
+p.rubric {
+ font-weight: bold ;
+ font-size: larger ;
+ color: maroon ;
+ text-align: center }
+
+p.sidebar-title {
+ font-family: sans-serif ;
+ font-weight: bold ;
+ font-size: larger }
+
+p.sidebar-subtitle {
+ font-family: sans-serif ;
+ font-weight: bold }
+
+p.topic-title {
+ font-weight: bold }
+
+pre.address {
+ margin-bottom: 0 ;
+ margin-top: 0 ;
+ font: inherit }
+
+pre.literal-block, pre.doctest-block {
+ margin-left: 2em ;
+ margin-right: 2em }
+
+span.classifier {
+ font-family: sans-serif ;
+ font-style: oblique }
+
+span.classifier-delimiter {
+ font-family: sans-serif ;
+ font-weight: bold }
+
+span.interpreted {
+ font-family: sans-serif }
+
+span.option {
+ white-space: nowrap }
+
+span.pre {
+ white-space: pre }
+
+span.problematic {
+ color: red }
+
+span.section-subtitle {
+ /* font-size relative to parent (h1..h6 element) */
+ font-size: 80% }
+
+table.citation {
+ border-left: solid 1px gray;
+ margin-left: 1px }
+
+table.docinfo {
+ margin: 2em 4em }
+
+table.docutils {
+ margin-top: 0.5em ;
+ margin-bottom: 0.5em }
+
+table.footnote {
+ border-left: solid 1px black;
+ margin-left: 1px }
+
+table.docutils td, table.docutils th,
+table.docinfo td, table.docinfo th {
+ padding-left: 0.5em ;
+ padding-right: 0.5em ;
+ vertical-align: top }
+
+table.docutils th.field-name, table.docinfo th.docinfo-name {
+ font-weight: bold ;
+ text-align: left ;
+ white-space: nowrap ;
+ padding-left: 0 }
+
+h1 tt.docutils, h2 tt.docutils, h3 tt.docutils,
+h4 tt.docutils, h5 tt.docutils, h6 tt.docutils {
+ font-size: 100% }
+
+ul.auto-toc {
+ list-style-type: none }
diff --git a/python/helpers/docutils/writers/html4css1/template.txt b/python/helpers/docutils/writers/html4css1/template.txt
new file mode 100644
index 0000000..2591bce
--- /dev/null
+++ b/python/helpers/docutils/writers/html4css1/template.txt
@@ -0,0 +1,8 @@
+%(head_prefix)s
+%(head)s
+%(stylesheet)s
+%(body_prefix)s
+%(body_pre_docinfo)s
+%(docinfo)s
+%(body)s
+%(body_suffix)s
diff --git a/python/helpers/docutils/writers/latex2e/__init__.py b/python/helpers/docutils/writers/latex2e/__init__.py
new file mode 100644
index 0000000..c8ef32e
--- /dev/null
+++ b/python/helpers/docutils/writers/latex2e/__init__.py
@@ -0,0 +1,2831 @@
+# -*- coding: utf8 -*-
+# $Id: __init__.py 6385 2010-08-13 12:17:01Z milde $
+# Author: Engelbert Gruber <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""LaTeX2e document tree Writer."""
+
+__docformat__ = 'reStructuredText'
+
+# code contributions from several people included, thanks to all.
+# some named: David Abrahams, Julien Letessier, Lele Gaifax, and others.
+#
+# convention: deactivated code is marked by a doubled comment character, i.e. ##.
+
+import sys
+import os
+import time
+import re
+import string
+from docutils import frontend, nodes, languages, writers, utils, io
+from docutils.transforms import writer_aux
+
+# compatibility module for Python 2.3
+if not hasattr(string, 'Template'):
+ import docutils._string_template_compat
+ string.Template = docutils._string_template_compat.Template
+
+class Writer(writers.Writer):
+
+ supported = ('latex','latex2e')
+ """Formats this writer supports."""
+
+ default_template = 'default.tex'
+ default_template_path = os.path.dirname(__file__)
+
+ default_preamble = '\n'.join([r'% PDF Standard Fonts',
+ r'\usepackage{mathptmx} % Times',
+ r'\usepackage[scaled=.90]{helvet}',
+ r'\usepackage{courier}'])
+ settings_spec = (
+ 'LaTeX-Specific Options',
+ None,
+ (('Specify documentclass. Default is "article".',
+ ['--documentclass'],
+ {'default': 'article', }),
+ ('Specify document options. Multiple options can be given, '
+ 'separated by commas. Default is "a4paper".',
+ ['--documentoptions'],
+ {'default': 'a4paper', }),
+ ('Footnotes with numbers/symbols by Docutils. (default)',
+ ['--docutils-footnotes'],
+ {'default': True, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Alias for --docutils-footnotes (deprecated)',
+ ['--use-latex-footnotes'],
+ {'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Use figure floats for footnote text (deprecated)',
+ ['--figure-footnotes'],
+ {'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Format for footnote references: one of "superscript" or '
+ '"brackets". Default is "superscript".',
+ ['--footnote-references'],
+ {'choices': ['superscript', 'brackets'], 'default': 'superscript',
+ 'metavar': '<format>',
+ 'overrides': 'trim_footnote_reference_space'}),
+ ('Use \\cite command for citations. ',
+ ['--use-latex-citations'],
+ {'default': 0, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Use figure floats for citations '
+ '(might get mixed with real figures). (default)',
+ ['--figure-citations'],
+ {'dest': 'use_latex_citations', 'action': 'store_false',
+ 'validator': frontend.validate_boolean}),
+ ('Format for block quote attributions: one of "dash" (em-dash '
+ 'prefix), "parentheses"/"parens", or "none". Default is "dash".',
+ ['--attribution'],
+ {'choices': ['dash', 'parentheses', 'parens', 'none'],
+ 'default': 'dash', 'metavar': '<format>'}),
+ ('Specify LaTeX packages/stylesheets. '
+ ' A style is referenced with \\usepackage if the extension is '
+ '".sty" or omitted, and with \\input otherwise. '
+ ' Overrides previous --stylesheet and --stylesheet-path settings.',
+ ['--stylesheet'],
+ {'default': '', 'metavar': '<file>',
+ 'overrides': 'stylesheet_path'}),
+ ('Like --stylesheet, but the path is rewritten '
+ 'relative to the output file. ',
+ ['--stylesheet-path'],
+ {'metavar': '<file>', 'overrides': 'stylesheet'}),
+ ('Link to the stylesheet(s) in the output file. (default)',
+ ['--link-stylesheet'],
+ {'dest': 'embed_stylesheet', 'action': 'store_false'}),
+ ('Embed the stylesheet(s) in the output file. '
+ 'Stylesheets must be accessible during processing. ',
+ ['--embed-stylesheet'],
+ {'default': 0, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Customization by LaTeX code in the preamble. '
+ 'Default: select PDF standard fonts (Times, Helvetica, Courier).',
+ ['--latex-preamble'],
+ {'default': default_preamble}),
+ ('Specify the template file. Default: "%s".' % default_template,
+ ['--template'],
+ {'default': default_template, 'metavar': '<file>'}),
+ ('Table of contents by LaTeX. (default) ',
+ ['--use-latex-toc'],
+ {'default': 1, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Table of contents by Docutils (without page numbers). ',
+ ['--use-docutils-toc'],
+ {'dest': 'use_latex_toc', 'action': 'store_false',
+ 'validator': frontend.validate_boolean}),
+ ('Add parts on top of the section hierarchy.',
+ ['--use-part-section'],
+ {'default': 0, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Attach author and date to the document info table. (default) ',
+ ['--use-docutils-docinfo'],
+ {'dest': 'use_latex_docinfo', 'action': 'store_false',
+ 'validator': frontend.validate_boolean}),
+ ('Attach author and date to the document title.',
+ ['--use-latex-docinfo'],
+ {'default': 0, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ("Typeset abstract as topic. (default)",
+ ['--topic-abstract'],
+ {'dest': 'use_latex_abstract', 'action': 'store_false',
+ 'validator': frontend.validate_boolean}),
+ ("Use LaTeX abstract environment for the document's abstract. ",
+ ['--use-latex-abstract'],
+ {'default': 0, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Color of any hyperlinks embedded in text '
+ '(default: "blue", "0" to disable).',
+ ['--hyperlink-color'], {'default': 'blue'}),
+ ('Enable compound enumerators for nested enumerated lists '
+ '(e.g. "1.2.a.ii"). Default: disabled.',
+ ['--compound-enumerators'],
+ {'default': None, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Disable compound enumerators for nested enumerated lists. '
+ 'This is the default.',
+ ['--no-compound-enumerators'],
+ {'action': 'store_false', 'dest': 'compound_enumerators'}),
+ ('Enable section ("." subsection ...) prefixes for compound '
+ 'enumerators. This has no effect without --compound-enumerators. '
+ 'Default: disabled.',
+ ['--section-prefix-for-enumerators'],
+ {'default': None, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Disable section prefixes for compound enumerators. '
+ 'This is the default.',
+ ['--no-section-prefix-for-enumerators'],
+ {'action': 'store_false', 'dest': 'section_prefix_for_enumerators'}),
+ ('Set the separator between section number and enumerator '
+ 'for compound enumerated lists. Default is "-".',
+ ['--section-enumerator-separator'],
+ {'default': '-', 'metavar': '<char>'}),
+ ('When possible, use the specified environment for literal-blocks. '
+ 'Default is quoting of whitespace and special chars.',
+ ['--literal-block-env'],
+ {'default': ''}),
+ ('When possible, use verbatim for literal-blocks. '
+ 'Compatibility alias for "--literal-block-env=verbatim".',
+ ['--use-verbatim-when-possible'],
+ {'default': 0, 'action': 'store_true',
+ 'validator': frontend.validate_boolean}),
+ ('Table style. "standard" with horizontal and vertical lines, '
+ '"booktabs" (LaTeX booktabs style) with only horizontal lines '
+ 'above and below the table and below the header, or "borderless". '
+ 'Default: "standard".',
+ ['--table-style'],
+ {'choices': ['standard', 'booktabs','nolines', 'borderless'],
+ 'default': 'standard',
+ 'metavar': '<format>'}),
+ ('LaTeX graphicx package option. '
+ 'Possible values are "dvips", "pdftex". "auto" includes LaTeX code '
+ 'to use "pdftex" if processing with pdf(la)tex and dvips otherwise. '
+ 'Default is no option.',
+ ['--graphicx-option'],
+ {'default': ''}),
+ ('LaTeX font encoding. '
+ 'Possible values are "", "T1" (default), "OT1", "LGR,T1" or '
+ 'any other combination of options to the `fontenc` package. ',
+ ['--font-encoding'],
+ {'default': 'T1'}),
+ ('By default the latex writer puts the reference title into '
+ 'hyperreferences. Specify "ref*" or "pageref*" to get the section '
+ 'number or the page number.',
+ ['--reference-label'],
+ {'default': None, }),
+ ('Specify style and database for bibtex, for example '
+ '"--use-bibtex=mystyle,mydb1,mydb2".',
+ ['--use-bibtex'],
+ {'default': None, }),
+ ),)
+
+ settings_defaults = {'sectnum_depth': 0 # updated by SectNum transform
+ }
+ relative_path_settings = ('stylesheet_path',)
+
+ config_section = 'latex2e writer'
+ config_section_dependencies = ('writers',)
+
+ head_parts = ('head_prefix', 'requirements', 'latex_preamble',
+ 'stylesheet', 'fallbacks', 'pdfsetup', 'title', 'subtitle')
+ visitor_attributes = head_parts + ('body_pre_docinfo', 'docinfo',
+ 'dedication', 'abstract', 'body')
+
+ output = None
+ """Final translated form of `document`."""
+
+ def __init__(self):
+ writers.Writer.__init__(self)
+ self.translator_class = LaTeXTranslator
+
+ # Override parent method to add latex-specific transforms
+ def get_transforms(self):
+ # call the parent class' method
+ transform_list = writers.Writer.get_transforms(self)
+ # print transform_list
+ # Convert specific admonitions to generic one
+ transform_list.append(writer_aux.Admonitions)
+ # TODO: footnote collection transform
+ # transform_list.append(footnotes.collect)
+ return transform_list
+
+
+ def translate(self):
+ visitor = self.translator_class(self.document)
+ self.document.walkabout(visitor)
+ # copy parts
+ for part in self.visitor_attributes:
+ setattr(self, part, getattr(visitor, part))
+ # get template string from file
+ try:
+ file = open(self.document.settings.template, 'rb')
+ except IOError:
+ file = open(os.path.join(os.path.dirname(__file__),
+ self.document.settings.template), 'rb')
+ template = string.Template(unicode(file.read(), 'utf-8'))
+ file.close()
+ # fill template
+ self.assemble_parts() # create dictionary of parts
+ self.output = template.substitute(self.parts)
+
+ def assemble_parts(self):
+ """Assemble the `self.parts` dictionary of output fragments."""
+ writers.Writer.assemble_parts(self)
+ for part in self.visitor_attributes:
+ lines = getattr(self, part)
+ if part in self.head_parts:
+ if lines:
+ lines.append('') # to get a trailing newline
+ self.parts[part] = '\n'.join(lines)
+ else:
+ # body contains inline elements, so join without newline
+ self.parts[part] = ''.join(lines)
+
+
+class Babel(object):
+ """Language specifics for LaTeX."""
+ # country code by a.schlock.
+ # partly manually converted from iso and babel stuff, dialects and some
+ _ISO639_TO_BABEL = {
+ 'no': 'norsk', #XXX added by hand (forget about nynorsk?)
+ 'gd': 'scottish', #XXX added by hand
+ ## 'hu': 'magyar', #XXX shadowed by the 'hu': 'hungarian' entry below
+ ## 'pt': 'portuguese', #XXX duplicate of the 'pt' entry below
+ 'sl': 'slovenian',
+ 'af': 'afrikaans',
+ 'bg': 'bulgarian',
+ 'br': 'breton',
+ 'ca': 'catalan',
+ 'cs': 'czech',
+ 'cy': 'welsh',
+ 'da': 'danish',
+ 'fr': 'french',
+ # french, francais, canadien, acadian
+ 'de': 'ngerman', #XXX rather than german
+ # ngerman, naustrian, german, germanb, austrian
+ 'el': 'greek',
+ 'en': 'english',
+ # english, USenglish, american, UKenglish, british, canadian
+ 'eo': 'esperanto',
+ 'es': 'spanish',
+ 'et': 'estonian',
+ 'eu': 'basque',
+ 'fi': 'finnish',
+ 'ga': 'irish',
+ 'gl': 'galician',
+ 'he': 'hebrew',
+ 'hr': 'croatian',
+ 'hu': 'hungarian',
+ 'is': 'icelandic',
+ 'it': 'italian',
+ 'la': 'latin',
+ 'nl': 'dutch',
+ 'pl': 'polish',
+ 'pt': 'portuguese',
+ 'ro': 'romanian',
+ 'ru': 'russian',
+ 'sk': 'slovak',
+ 'sr': 'serbian',
+ 'sv': 'swedish',
+ 'tr': 'turkish',
+ 'uk': 'ukrainian'
+ }
+
+ def __init__(self, lang):
+ self.language = lang
+ self.quote_index = 0
+ self.quotes = ('``', "''")
+ self.setup = '' # language dependent configuration code
+ # double quotes are "active" in some languages (e.g. German).
+ # TODO: use \textquotedbl in OT1 font encoding?
+ self.literal_double_quote = u'"'
+ if self.language.startswith('de'):
+ self.quotes = (r'\glqq{}', r'\grqq{}')
+ self.literal_double_quote = ur'\dq{}'
+ if self.language.startswith('it'):
+ self.literal_double_quote = ur'{\char`\"}'
+ if self.language.startswith('es'):
+ # reset tilde ~ to the original binding (nobreakspace):
+ self.setup = ('\n'
+ r'\addto\shorthandsspanish{\spanishdeactivate{."~<>}}')
+
+ def next_quote(self):
+ q = self.quotes[self.quote_index]
+ self.quote_index = (self.quote_index+1) % 2
+ return q
+
+ def quote_quotes(self,text):
+ t = None
+ for part in text.split('"'):
+ if t is None:
+ t = part
+ else:
+ t += self.next_quote() + part
+ return t
+
+ def get_language(self):
+ lang = self.language.split('_')[0] # filter dialects
+ return self._ISO639_TO_BABEL.get(lang, "")
+
+# Building blocks for the latex preamble
+# --------------------------------------
+
+class SortableDict(dict):
+ """Dictionary with additional sorting methods
+
+ Tip: use keys starting with '_' to sort before lowercase letters
+ and keys starting with '~' to sort after lowercase letters.
+ """
+ def sortedkeys(self):
+ """Return sorted list of keys"""
+ keys = self.keys()
+ keys.sort()
+ return keys
+
+ def sortedvalues(self):
+ """Return list of values sorted by keys"""
+ return [self[key] for key in self.sortedkeys()]
+
+
+# PreambleCmds
+# `````````````
+# A container for LaTeX code snippets that can be
+# inserted into the preamble if required in the document.
+#
+# .. The package 'makecmds' would enable shorter definitions using the
+# \providelength and \provideenvironment commands.
+# However, it is pretty non-standard (texlive-latex-extra).
+
+class PreambleCmds(object):
+ """Building blocks for the latex preamble."""
+
+PreambleCmds.abstract = r"""
+% abstract title
+\providecommand*{\DUtitleabstract}[1]{\centerline{\textbf{#1}}}"""
+
+PreambleCmds.admonition = r"""
+% admonition (specially marked topic)
+\providecommand{\DUadmonition}[2][class-arg]{%
+ % try \DUadmonition#1{#2}:
+ \ifcsname DUadmonition#1\endcsname%
+ \csname DUadmonition#1\endcsname{#2}%
+ \else
+ \begin{center}
+ \fbox{\parbox{0.9\textwidth}{#2}}
+ \end{center}
+ \fi
+}"""
+
+## PreambleCmds.caption = r"""% configure caption layout
+## \usepackage{caption}
+## \captionsetup{singlelinecheck=false}% no exceptions for one-liners"""
+
+PreambleCmds.color = r"""\usepackage{color}"""
+
+PreambleCmds.docinfo = r"""
+% docinfo (width of docinfo table)
+\DUprovidelength{\DUdocinfowidth}{0.9\textwidth}"""
+# PreambleCmds.docinfo._depends = 'providelength'
+
+PreambleCmds.embedded_package_wrapper = r"""\makeatletter
+%% embedded stylesheet: %s
+%s
+\makeatother"""
+
+PreambleCmds.dedication = r"""
+% dedication topic
+\providecommand{\DUtopicdedication}[1]{\begin{center}#1\end{center}}"""
+
+PreambleCmds.error = r"""
+% error admonition title
+\providecommand*{\DUtitleerror}[1]{\DUtitle{\color{red}#1}}"""
+# PreambleCmds.errortitle._depends = 'color'
+
+PreambleCmds.fieldlist = r"""
+% fieldlist environment
+\ifthenelse{\isundefined{\DUfieldlist}}{
+ \newenvironment{DUfieldlist}%
+ {\quote\description}
+ {\enddescription\endquote}
+}{}"""
+
+PreambleCmds.float_settings = r"""\usepackage{float} % float configuration
+\floatplacement{figure}{H} % place figures here definitely"""
+
+PreambleCmds.footnotes = r"""% numeric or symbol footnotes with hyperlinks
+\providecommand*{\DUfootnotemark}[3]{%
+ \raisebox{1em}{\hypertarget{#1}{}}%
+ \hyperlink{#2}{\textsuperscript{#3}}%
+}
+\providecommand{\DUfootnotetext}[4]{%
+ \begingroup%
+ \renewcommand{\thefootnote}{%
+ \protect\raisebox{1em}{\protect\hypertarget{#1}{}}%
+ \protect\hyperlink{#2}{#3}}%
+ \footnotetext{#4}%
+ \endgroup%
+}"""
+
+PreambleCmds.footnote_floats = r"""% settings for footnotes as floats:
+\setlength{\floatsep}{0.5em}
+\setlength{\textfloatsep}{\fill}
+\addtolength{\textfloatsep}{3em}
+\renewcommand{\textfraction}{0.5}
+\renewcommand{\topfraction}{0.5}
+\renewcommand{\bottomfraction}{0.5}
+\setcounter{totalnumber}{50}
+\setcounter{topnumber}{50}
+\setcounter{bottomnumber}{50}"""
+
+PreambleCmds.graphicx_auto = r"""% Check output format
+\ifx\pdftexversion\undefined
+ \usepackage{graphicx}
+\else
+ \usepackage[pdftex]{graphicx}
+\fi"""
+
+
+PreambleCmds.inline = r"""
+% inline markup (custom roles)
+% \DUrole{#1}{#2} tries \DUrole#1{#2}
+\providecommand*{\DUrole}[2]{%
+ \ifcsname DUrole#1\endcsname%
+ \csname DUrole#1\endcsname{#2}%
+ \else% backwards compatibility: try \docutilsrole#1{#2}
+ \ifcsname docutilsrole#1\endcsname%
+ \csname docutilsrole#1\endcsname{#2}%
+ \else%
+ #2%
+ \fi%
+ \fi%
+}"""
+
+PreambleCmds.legend = r"""
+% legend environment
+\ifthenelse{\isundefined{\DUlegend}}{
+ \newenvironment{DUlegend}{\small}{}
+}{}"""
+
+PreambleCmds.lineblock = r"""
+% lineblock environment
+\DUprovidelength{\DUlineblockindent}{2.5em}
+\ifthenelse{\isundefined{\DUlineblock}}{
+ \newenvironment{DUlineblock}[1]{%
+ \list{}{\setlength{\partopsep}{\parskip}
+ \addtolength{\partopsep}{\baselineskip}
+ \setlength{\topsep}{0pt}
+ \setlength{\itemsep}{0.15\baselineskip}
+ \setlength{\parsep}{0pt}
+ \setlength{\leftmargin}{#1}}
+ \raggedright
+ }
+ {\endlist}
+}{}"""
+# PreambleCmds.lineblock._depends = 'providelength'
+
+PreambleCmds.linking = r"""
+%% hyperlinks:
+\ifthenelse{\isundefined{\hypersetup}}{
+ \usepackage[unicode,colorlinks=%s,linkcolor=%s,urlcolor=%s]{hyperref}
+ \urlstyle{same} %% normal text font (alternatives: tt, rm, sf)
+}{}"""
+
+PreambleCmds.minitoc = r"""%% local table of contents
+\usepackage{minitoc}"""
+
+PreambleCmds.optionlist = r"""
+% optionlist environment
+\providecommand*{\DUoptionlistlabel}[1]{\bf #1 \hfill}
+\DUprovidelength{\DUoptionlistindent}{3cm}
+\ifthenelse{\isundefined{\DUoptionlist}}{
+ \newenvironment{DUoptionlist}{%
+ \list{}{\setlength{\labelwidth}{\DUoptionlistindent}
+ \setlength{\rightmargin}{1cm}
+ \setlength{\leftmargin}{\rightmargin}
+ \addtolength{\leftmargin}{\labelwidth}
+ \addtolength{\leftmargin}{\labelsep}
+ \renewcommand{\makelabel}{\DUoptionlistlabel}}
+ }
+ {\endlist}
+}{}"""
+# PreambleCmds.optionlist._depends = 'providelength'
+
+PreambleCmds.providelength = r"""
+% providelength (provide a length variable and set default, if it is new)
+\providecommand*{\DUprovidelength}[2]{
+ \ifthenelse{\isundefined{#1}}{\newlength{#1}\setlength{#1}{#2}}{}
+}"""
+
+PreambleCmds.rubric = r"""
+% rubric (informal heading)
+\providecommand*{\DUrubric}[2][class-arg]{%
+ \subsubsection*{\centering\textit{\textmd{#2}}}}"""
+
+PreambleCmds.sidebar = r"""
+% sidebar (text outside the main text flow)
+\providecommand{\DUsidebar}[2][class-arg]{%
+ \begin{center}
+ \colorbox[gray]{0.80}{\parbox{0.9\textwidth}{#2}}
+ \end{center}
+}"""
+
+PreambleCmds.subtitle = r"""
+% subtitle (for topic/sidebar)
+\providecommand*{\DUsubtitle}[2][class-arg]{\par\emph{#2}\smallskip}"""
+
+PreambleCmds.table = r"""\usepackage{longtable}
+\usepackage{array}
+\setlength{\extrarowheight}{2pt}
+\newlength{\DUtablewidth} % internal use in tables"""
+
+# Options [force,almostfull] prevent spurious error messages, see
+# de.comp.text.tex/2005-12/msg01855
+PreambleCmds.textcomp = """\
+\\usepackage{textcomp} % text symbol macros"""
+
+PreambleCmds.documenttitle = r"""
+%% Document title
+\title{%s}
+\author{%s}
+\date{%s}
+\maketitle
+"""
+
+PreambleCmds.titlereference = r"""
+% titlereference role
+\providecommand*{\DUroletitlereference}[1]{\textsl{#1}}"""
+
+PreambleCmds.title = r"""
+% title for topics, admonitions and sidebar
+\providecommand*{\DUtitle}[2][class-arg]{%
+ % call \DUtitle#1{#2} if it exists:
+ \ifcsname DUtitle#1\endcsname%
+ \csname DUtitle#1\endcsname{#2}%
+ \else
+ \smallskip\noindent\textbf{#2}\smallskip%
+ \fi
+}"""
+
+PreambleCmds.topic = r"""
+% topic (quote with heading)
+\providecommand{\DUtopic}[2][class-arg]{%
+ \ifcsname DUtopic#1\endcsname%
+ \csname DUtopic#1\endcsname{#2}%
+ \else
+ \begin{quote}#2\end{quote}
+ \fi
+}"""
+
+PreambleCmds.transition = r"""
+% transition (break, fancybreak, anonymous section)
+\providecommand*{\DUtransition}[1][class-arg]{%
+ \hspace*{\fill}\hrulefill\hspace*{\fill}
+ \vskip 0.5\baselineskip
+}"""
+
+
+class DocumentClass(object):
+ """Details of a LaTeX document class."""
+
+ def __init__(self, document_class, with_part=False):
+ self.document_class = document_class
+ self._with_part = with_part
+ self.sections = ['section', 'subsection', 'subsubsection',
+ 'paragraph', 'subparagraph']
+ if self.document_class in ('book', 'memoir', 'report',
+ 'scrbook', 'scrreprt'):
+ self.sections.insert(0, 'chapter')
+ if self._with_part:
+ self.sections.insert(0, 'part')
+
+ def section(self, level):
+ """Return the LaTeX section name for section `level`.
+
+ The name depends on the specific document class.
+ Level is 1,2,3..., as level 0 is the title.
+ """
+
+ if level <= len(self.sections):
+ return self.sections[level-1]
+ else:
+ return self.sections[-1]
+
+
+class Table(object):
+ """Manage a table while traversing.
+
+    This could become a mixin defining the visit/depart methods, but then
+    the Table's internal state would end up in the Translator.
+
+ Table style might be
+
+ :standard: horizontal and vertical lines
+ :booktabs: only horizontal lines (requires "booktabs" LaTeX package)
+ :borderless: no borders around table cells
+ :nolines: alias for borderless
+ """
+    def __init__(self, translator, latex_type, table_style):
+ self._translator = translator
+ self._latex_type = latex_type
+ self._table_style = table_style
+ self._open = 0
+ # miscellaneous attributes
+ self._attrs = {}
+ self._col_width = []
+ self._rowspan = []
+ self.stubs = []
+ self._in_thead = 0
+
+ def open(self):
+ self._open = 1
+ self._col_specs = []
+ self.caption = []
+ self._attrs = {}
+        self._in_thead = 0 # maybe context with search
+ def close(self):
+ self._open = 0
+ self._col_specs = None
+ self.caption = []
+ self._attrs = {}
+ self.stubs = []
+ def is_open(self):
+ return self._open
+
+ def set_table_style(self, table_style):
+        if table_style not in ('standard','booktabs','borderless','nolines'):
+ return
+ self._table_style = table_style
+
+ def get_latex_type(self):
+ return self._latex_type
+
+    def set(self, attr, value):
+        self._attrs[attr] = value
+    def get(self, attr):
+        return self._attrs.get(attr)
+
+ def get_vertical_bar(self):
+ if self._table_style == 'standard':
+ return '|'
+ return ''
+
+    # Horizontal lines are drawn below a row.
+ def get_opening(self):
+        if self._latex_type == 'longtable':
+            # \leavevmode prevents longtable from moving above a
+            # preceding \paragraph or \subparagraph heading
+            prefix = '\\leavevmode\n'
+        else:
+            prefix = ''
+        prefix += r'\setlength{\DUtablewidth}{\linewidth}'
+        return '%s\n\\begin{%s}[c]' % (prefix, self._latex_type)
+
+ def get_closing(self):
+ line = ''
+ if self._table_style == 'booktabs':
+ line = '\\bottomrule\n'
+ elif self._table_style == 'standard':
+            line = '\\hline\n'
+ return '%s\\end{%s}' % (line,self._latex_type)
+
+ def visit_colspec(self, node):
+ self._col_specs.append(node)
+ # "stubs" list is an attribute of the tgroup element:
+ self.stubs.append(node.attributes.get('stub'))
+
+ def get_colspecs(self):
+ """Return column specification for longtable.
+
+        Assumes a reST line length of 80 characters.
+        Table width is tricky: a table like
+
+        === ===
+        ABC DEF
+        === ===
+
+        usually comes out too narrow, so we add 1 to each column
+        width (fudge factor).
+ """
+ width = 80
+
+ total_width = 0.0
+ # first see if we get too wide.
+ for node in self._col_specs:
+ colwidth = float(node['colwidth']+1) / width
+ total_width += colwidth
+ self._col_width = []
+ self._rowspan = []
+        # do not use the full linewidth
+ factor = 0.93
+ if total_width > 1.0:
+ factor /= total_width
+ bar = self.get_vertical_bar()
+ latex_table_spec = ''
+ for node in self._col_specs:
+ colwidth = factor * float(node['colwidth']+1) / width
+ self._col_width.append(colwidth+0.005)
+ self._rowspan.append(0)
+ latex_table_spec += '%sp{%.3f\\DUtablewidth}' % (bar, colwidth+0.005)
+ return latex_table_spec+bar
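The width arithmetic in `get_colspecs` can be isolated into a standalone sketch (the helper name `colspecs_sketch` is hypothetical; the constants 80, 0.93 and 0.005 are taken from the method above):

```python
def colspecs_sketch(colwidths, bar='|', line_length=80):
    """Compute a longtable column spec like Table.get_colspecs()."""
    # first see whether the columns would get too wide
    total_width = sum(float(w + 1) / line_length for w in colwidths)
    factor = 0.93          # do not use the full linewidth
    if total_width > 1.0:
        factor /= total_width
    spec = ''
    for w in colwidths:
        cw = factor * float(w + 1) / line_length
        spec += '%sp{%.3f\\DUtablewidth}' % (bar, cw + 0.005)
    return spec + bar
```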
+
+ def get_column_width(self):
+        """Return the column width for the current cell (not a multicell)."""
+ return '%.2f\\DUtablewidth' % self._col_width[self._cell_in_row-1]
+
+ def get_caption(self):
+ if not self.caption:
+ return ''
+ caption = ''.join(self.caption)
+ if 1 == self._translator.thead_depth():
+ return r'\caption{%s}\\' '\n' % caption
+ return r'\caption[]{%s (... continued)}\\' '\n' % caption
+
+ def need_recurse(self):
+ if self._latex_type == 'longtable':
+ return 1 == self._translator.thead_depth()
+ return 0
+
+ def visit_thead(self):
+ self._in_thead += 1
+ if self._table_style == 'standard':
+ return ['\\hline\n']
+ elif self._table_style == 'booktabs':
+ return ['\\toprule\n']
+ return []
+ def depart_thead(self):
+ a = []
+ #if self._table_style == 'standard':
+ # a.append('\\hline\n')
+ if self._table_style == 'booktabs':
+ a.append('\\midrule\n')
+ if self._latex_type == 'longtable':
+ if 1 == self._translator.thead_depth():
+ a.append('\\endfirsthead\n')
+ else:
+ a.append('\\endhead\n')
+ a.append(r'\multicolumn{%d}{c}' % len(self._col_specs) +
+ r'{\hfill ... continued on next page} \\')
+ a.append('\n\\endfoot\n\\endlastfoot\n')
+ # for longtable one could add firsthead, foot and lastfoot
+ self._in_thead -= 1
+ return a
+ def visit_row(self):
+ self._cell_in_row = 0
+ def depart_row(self):
+ res = [' \\\\\n']
+ self._cell_in_row = None # remove cell counter
+ for i in range(len(self._rowspan)):
+ if (self._rowspan[i]>0):
+ self._rowspan[i] -= 1
+
+ if self._table_style == 'standard':
+ rowspans = [i+1 for i in range(len(self._rowspan))
+ if (self._rowspan[i]<=0)]
+ if len(rowspans)==len(self._rowspan):
+ res.append('\\hline\n')
+ else:
+ cline = ''
+                    # TODO merge adjacent clines
+                    for c_start in rowspans:
+                        cline += '\\cline{%d-%d}\n' % (c_start, c_start)
+ res.append(cline)
+ return res
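The end-of-row rule drawing in `depart_row` can be sketched independently: a full `\hline` when every rowspan has expired, `\cline`s for just the finished columns otherwise (`row_lines` is a hypothetical helper name):

```python
def row_lines(rowspan):
    """Return the horizontal-rule LaTeX emitted after a table row
    in 'standard' table style, given the per-column rowspan counters."""
    finished = [i + 1 for i, r in enumerate(rowspan) if r <= 0]
    if len(finished) == len(rowspan):
        return '\\hline\n'          # every column ends here
    return ''.join('\\cline{%d-%d}\n' % (c, c) for c in finished)
```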
+
+    def set_rowspan(self, cell, value):
+        try:
+            self._rowspan[cell] = value
+        except IndexError:
+            pass
+    def get_rowspan(self, cell):
+        try:
+            return self._rowspan[cell]
+        except IndexError:
+            return 0
+ def get_entry_number(self):
+ return self._cell_in_row
+ def visit_entry(self):
+ self._cell_in_row += 1
+ def is_stub_column(self):
+ if len(self.stubs) >= self._cell_in_row:
+ return self.stubs[self._cell_in_row-1]
+ return False
+
+
+class LaTeXTranslator(nodes.NodeVisitor):
+
+ # When options are given to the documentclass, latex will pass them
+ # to other packages, as done with babel.
+ # Dummy settings might be taken from document settings
+
+ # Config setting defaults
+ # -----------------------
+
+ # TODO: use mixins for different implementations.
+    # use a list environment for docinfo, otherwise tabularx
+ ## use_optionlist_for_docinfo = False # TODO: NOT YET IN USE
+
+ # Use compound enumerations (1.A.1.)
+ compound_enumerators = 0
+
+ # If using compound enumerations, include section information.
+ section_prefix_for_enumerators = 0
+
+    # This is the character that separates the section number prefix
+    # (e.g. "1.2") from the regular list enumerator.
+ section_enumerator_separator = '-'
+
+ # default link color
+ hyperlink_color = 'blue'
+
+ # Auxiliary variables
+ # -------------------
+
+ has_latex_toc = False # is there a toc in the doc? (needed by minitoc)
+ is_toc_list = False # is the current bullet_list a ToC?
+ section_level = 0
+
+ # Flags to encode():
+    # inside citation reference labels underscores don't need to be escaped
+ inside_citation_reference_label = False
+ verbatim = False # do not encode
+ insert_non_breaking_blanks = False # replace blanks by "~"
+ insert_newline = False # add latex newline commands
+ literal = False # literal text (block or inline)
+
+
+ def __init__(self, document):
+ nodes.NodeVisitor.__init__(self, document)
+ # Reporter
+ # ~~~~~~~~
+ self.warn = self.document.reporter.warning
+ self.error = self.document.reporter.error
+
+ # Settings
+ # ~~~~~~~~
+ self.settings = settings = document.settings
+ self.latex_encoding = self.to_latex_encoding(settings.output_encoding)
+ self.use_latex_toc = settings.use_latex_toc
+ self.use_latex_docinfo = settings.use_latex_docinfo
+ self._use_latex_citations = settings.use_latex_citations
+ self.embed_stylesheet = settings.embed_stylesheet
+ self._reference_label = settings.reference_label
+ self.hyperlink_color = settings.hyperlink_color
+ self.compound_enumerators = settings.compound_enumerators
+ self.font_encoding = settings.font_encoding
+ self.section_prefix_for_enumerators = (
+ settings.section_prefix_for_enumerators)
+ self.section_enumerator_separator = (
+ settings.section_enumerator_separator.replace('_', '\\_'))
+ # literal blocks:
+ self.literal_block_env = ''
+ self.literal_block_options = ''
+ if settings.literal_block_env != '':
+            (none,
+             self.literal_block_env,
+             self.literal_block_options,
+             none) = re.split(r'(\w+)(.*)', settings.literal_block_env)
+ elif settings.use_verbatim_when_possible:
+ self.literal_block_env = 'verbatim'
+ #
+ if self.settings.use_bibtex:
+ self.bibtex = self.settings.use_bibtex.split(',',1)
+ # TODO avoid errors on not declared citations.
+ else:
+ self.bibtex = None
+ # language:
+ # (labels, bibliographic_fields, and author_separators)
+ self.language = languages.get_language(settings.language_code)
+ self.babel = Babel(settings.language_code)
+ self.author_separator = self.language.author_separators[0]
+ self.d_options = [self.settings.documentoptions,
+ self.babel.get_language()]
+ self.d_options = ','.join([opt for opt in self.d_options if opt])
+ self.d_class = DocumentClass(settings.documentclass,
+ settings.use_part_section)
+ # graphic package options:
+ if self.settings.graphicx_option == '':
+ self.graphicx_package = r'\usepackage{graphicx}'
+ elif self.settings.graphicx_option.lower() == 'auto':
+ self.graphicx_package = PreambleCmds.graphicx_auto
+ else:
+ self.graphicx_package = (r'\usepackage[%s]{graphicx}' %
+ self.settings.graphicx_option)
+ # footnotes:
+ self.docutils_footnotes = settings.docutils_footnotes
+ if settings.use_latex_footnotes:
+ self.docutils_footnotes = True
+ self.warn('`use_latex_footnotes` is deprecated. '
+ 'The setting has been renamed to `docutils_footnotes` '
+ 'and the alias will be removed in a future version.')
+ self.figure_footnotes = settings.figure_footnotes
+ if self.figure_footnotes:
+ self.docutils_footnotes = True
+ self.warn('The "figure footnotes" workaround/setting is strongly '
+ 'deprecated and will be removed in a future version.')
+
+ # Output collection stacks
+ # ~~~~~~~~~~~~~~~~~~~~~~~~
+
+ # Document parts
+ self.head_prefix = [r'\documentclass[%s]{%s}' %
+ (self.d_options, self.settings.documentclass)]
+ self.requirements = SortableDict() # made a list in depart_document()
+ self.latex_preamble = [settings.latex_preamble]
+ self.stylesheet = []
+ self.fallbacks = SortableDict() # made a list in depart_document()
+ self.pdfsetup = [] # PDF properties (hyperref package)
+ self.title = []
+ self.subtitle = []
+ ## self.body_prefix = ['\\begin{document}\n']
+ self.body_pre_docinfo = [] # title data and \maketitle
+ self.docinfo = []
+ self.dedication = []
+ self.abstract = []
+ self.body = []
+ ## self.body_suffix = ['\\end{document}\n']
+
+ # A heterogenous stack used in conjunction with the tree traversal.
+ # Make sure that the pops correspond to the pushes:
+ self.context = []
+
+ # Title metadata:
+ self.title_labels = []
+ self.subtitle_labels = []
+ # (if use_latex_docinfo: collects lists of
+ # author/organization/contact/address lines)
+ self.author_stack = []
+        # date (the default suppresses the "auto-date" feature of \maketitle)
+ self.date = []
+
+ # PDF properties: pdftitle, pdfauthor
+ # TODO?: pdfcreator, pdfproducer, pdfsubject, pdfkeywords
+ self.pdfinfo = []
+ self.pdfauthor = []
+
+        # Stack of section counters, maintained so that section numbers
+        # can be generated without use_latex_toc.
+ # This will grow and shrink as processing occurs.
+ # Initialized for potential first-level sections.
+ self._section_number = [0]
+
+ # The current stack of enumerations so that we can expand
+ # them into a compound enumeration.
+ self._enumeration_counters = []
+ # The maximum number of enumeration counters we've used.
+ # If we go beyond this number, we need to create a new
+ # counter; otherwise, just reuse an old one.
+ self._max_enumeration_counters = 0
+
+ self._bibitems = []
+
+        # objects for the table being processed.
+ self.table_stack = []
+ self.active_table = Table(self, 'longtable', settings.table_style)
+
+ # Where to collect the output of visitor methods (default: body)
+ self.out = self.body
+ self.out_stack = [] # stack of output collectors
+
+ # Process settings
+ # ~~~~~~~~~~~~~~~~
+
+ # Static requirements
+ # TeX font encoding
+ if self.font_encoding:
+ encodings = [r'\usepackage[%s]{fontenc}' % self.font_encoding]
+ else:
+ encodings = [r'%\usepackage[OT1]{fontenc}'] # just a comment
+ # Docutils' output-encoding => TeX input encoding:
+ if self.latex_encoding != 'ascii':
+ encodings.append(r'\usepackage[%s]{inputenc}'
+ % self.latex_encoding)
+ self.requirements['_static'] = '\n'.join(
+ encodings + [
+ r'\usepackage{ifthen}',
+ # multi-language support (language is in document options)
+ '\\usepackage{babel}%s' % self.babel.setup,
+ ])
+ # page layout with typearea (if there are relevant document options)
+ if (settings.documentclass.find('scr') == -1 and
+ (self.d_options.find('DIV') != -1 or
+ self.d_options.find('BCOR') != -1)):
+ self.requirements['typearea'] = r'\usepackage{typearea}'
+
+ # Stylesheets
+ # get list of style sheets from settings
+ styles = utils.get_stylesheet_list(settings)
+ # adapt path if --stylesheet_path is used
+ if settings.stylesheet_path and not(self.embed_stylesheet):
+ styles = [utils.relative_path(settings._destination, sheet)
+ for sheet in styles]
+ for sheet in styles:
+ (base, ext) = os.path.splitext(sheet)
+ is_package = ext in ['.sty', '']
+ if self.embed_stylesheet:
+ if is_package:
+ sheet = base + '.sty' # adapt package name
+ # wrap in \makeatletter, \makeatother
+ wrapper = PreambleCmds.embedded_package_wrapper
+ else:
+ wrapper = '%% embedded stylesheet: %s\n%s'
+ settings.record_dependencies.add(sheet)
+ self.stylesheet.append(wrapper %
+ (sheet, io.FileInput(source_path=sheet, encoding='utf-8').read()))
+ else: # link to style sheet
+ if is_package:
+ self.stylesheet.append(r'\usepackage{%s}' % base)
+ else:
+ self.stylesheet.append(r'\input{%s}' % sheet)
+
+ # PDF setup
+ if self.hyperlink_color == '0':
+ self.hyperlink_color = 'black'
+ self.colorlinks = 'false'
+ else:
+ self.colorlinks = 'true'
+
+ # LaTeX Toc
+ # include all supported sections in toc and PDF bookmarks
+ # (or use documentclass-default (as currently))?
+ ## if self.use_latex_toc:
+ ## self.requirements['tocdepth'] = (r'\setcounter{tocdepth}{%d}' %
+ ## len(self.d_class.sections))
+
+ # LaTeX section numbering
+ if not self.settings.sectnum_xform: # section numbering by LaTeX:
+ # sectnum_depth:
+ # None "sectnum" directive without depth arg -> LaTeX default
+ # 0 no "sectnum" directive -> no section numbers
+ # else value of the "depth" argument: translate to LaTeX level
+ # -1 part (0 with "article" document class)
+ # 0 chapter (missing in "article" document class)
+ # 1 section
+ # 2 subsection
+ # 3 subsubsection
+ # 4 paragraph
+ # 5 subparagraph
+ if settings.sectnum_depth is not None:
+ # limit to supported levels
+ sectnum_depth = min(settings.sectnum_depth,
+ len(self.d_class.sections))
+ # adjust to document class and use_part_section settings
+ if 'chapter' in self.d_class.sections:
+ sectnum_depth -= 1
+ if self.d_class.sections[0] == 'part':
+ sectnum_depth -= 1
+ self.requirements['sectnum_depth'] = (
+ r'\setcounter{secnumdepth}{%d}' % sectnum_depth)
+ # start with specified number:
+ if (hasattr(settings, 'sectnum_start') and
+ settings.sectnum_start != 1):
+ self.requirements['sectnum_start'] = (
+ r'\setcounter{%s}{%d}' % (self.d_class.sections[0],
+ settings.sectnum_start-1))
+ # currently ignored (configure in a stylesheet):
+ ## settings.sectnum_prefix
+ ## settings.sectnum_suffix
+
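The depth translation described in the comment block inside `__init__` (Docutils "sectnum" depth versus LaTeX `secnumdepth` levels) can be sketched as a standalone function (`latex_secnumdepth` is a hypothetical name):

```python
def latex_secnumdepth(sectnum_depth, sections):
    """Translate a Docutils "sectnum" depth argument into a value for
    LaTeX's \\setcounter{secnumdepth}{...}, mirroring the logic above."""
    depth = min(sectnum_depth, len(sections))   # limit to supported levels
    if 'chapter' in sections:                   # chapter is LaTeX level 0
        depth -= 1
    if sections and sections[0] == 'part':      # part is LaTeX level -1
        depth -= 1
    return depth
```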
+
+ # Auxiliary Methods
+ # -----------------
+
+ def to_latex_encoding(self,docutils_encoding):
+ """Translate docutils encoding name into LaTeX's.
+
+        By default, remove "-" and "_" chars from docutils_encoding.
+ """
+ tr = { 'iso-8859-1': 'latin1', # west european
+ 'iso-8859-2': 'latin2', # east european
+ 'iso-8859-3': 'latin3', # esperanto, maltese
+ 'iso-8859-4': 'latin4', # north european, scandinavian, baltic
+ 'iso-8859-5': 'iso88595', # cyrillic (ISO)
+ 'iso-8859-9': 'latin5', # turkish
+ 'iso-8859-15': 'latin9', # latin9, update to latin1.
+ 'mac_cyrillic': 'maccyr', # cyrillic (on Mac)
+ 'windows-1251': 'cp1251', # cyrillic (on Windows)
+ 'koi8-r': 'koi8-r', # cyrillic (Russian)
+ 'koi8-u': 'koi8-u', # cyrillic (Ukrainian)
+ 'windows-1250': 'cp1250', #
+ 'windows-1252': 'cp1252', #
+ 'us-ascii': 'ascii', # ASCII (US)
+ # unmatched encodings
+ #'': 'applemac',
+ #'': 'ansinew', # windows 3.1 ansi
+ #'': 'ascii', # ASCII encoding for the range 32--127.
+ #'': 'cp437', # dos latin us
+ #'': 'cp850', # dos latin 1
+ #'': 'cp852', # dos latin 2
+ #'': 'decmulti',
+ #'': 'latin10',
+ #'iso-8859-6': '' # arabic
+ #'iso-8859-7': '' # greek
+ #'iso-8859-8': '' # hebrew
+ #'iso-8859-10': '' # latin6, more complete iso-8859-4
+ }
+ encoding = docutils_encoding.lower()
+ if encoding in tr:
+ return tr[encoding]
+ # convert: latin-1, latin_1, utf-8 and similar things
+ encoding = encoding.replace('_', '').replace('-', '')
+ # strip the error handler
+ return encoding.split(':')[0]
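The fallback normalisation at the end of `to_latex_encoding` (strip separators, drop an error-handler suffix) can be exercised on its own; `normalize_encoding` is a hypothetical name for just that path, without the lookup table:

```python
def normalize_encoding(docutils_encoding):
    """Fallback path of to_latex_encoding(): no table lookup."""
    encoding = docutils_encoding.lower()
    # convert: latin-1, latin_1, utf-8 and similar things
    encoding = encoding.replace('_', '').replace('-', '')
    # strip a ":errors" handler suffix, e.g. "utf-8:replace"
    return encoding.split(':')[0]
```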
+
+ def language_label(self, docutil_label):
+ return self.language.labels[docutil_label]
+
+ def ensure_math(self, text):
+ if not hasattr(self, 'ensure_math_re'):
+ chars = { # lnot,pm,twosuperior,threesuperior,mu,onesuperior,times,div
+ 'latin1' : '\xac\xb1\xb2\xb3\xb5\xb9\xd7\xf7' , # ¬±²³µ¹×÷
+ # TODO?: use texcomp instead.
+ }
+ self.ensure_math_re = re.compile('([%s])' % chars['latin1'])
+ text = self.ensure_math_re.sub(r'\\ensuremath{\1}', text)
+ return text
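A standalone version of the `ensure_math` substitution (same Latin-1 character class as above; module-level names are illustrative):

```python
import re

# lnot, pm, twosuperior, threesuperior, mu, onesuperior, times, div
MATH_ONLY = u'\xac\xb1\xb2\xb3\xb5\xb9\xd7\xf7'
_ensure_math_re = re.compile(u'([%s])' % MATH_ONLY)

def ensure_math(text):
    """Wrap math-only Latin-1 characters in \\ensuremath{...}."""
    return _ensure_math_re.sub(r'\\ensuremath{\1}', text)
```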
+
+ def encode(self, text):
+ """Return text with 'problematic' characters escaped.
+
+ Escape the ten special printing characters ``# $ % & ~ _ ^ \ { }``,
+ square brackets ``[ ]``, double quotes and (in OT1) ``< | >``.
+
+ Separate ``-`` (and more in literal text) to prevent input ligatures.
+
+ Translate non-supported Unicode characters.
+ """
+ if self.verbatim:
+ return text
+ # Separate compound characters, e.g. '--' to '-{}-'.
+ separate_chars = '-'
+ # In monospace-font, we also separate ',,', '``' and "''" and some
+ # other characters which can't occur in non-literal text.
+ if self.literal:
+ separate_chars += ',`\'"<>'
+ # LaTeX encoding maps:
+ special_chars = {
+ ord('#'): ur'\#',
+ ord('$'): ur'\$',
+ ord('%'): ur'\%',
+ ord('&'): ur'\&',
+ ord('~'): ur'\textasciitilde{}',
+ ord('_'): ur'\_',
+ ord('^'): ur'\textasciicircum{}',
+ ord('\\'): ur'\textbackslash{}',
+ ord('{'): ur'\{',
+ ord('}'): ur'\}',
+ # Square brackets are ordinary chars and cannot be escaped with '\',
+ # so we put them in a group '{[}'. (Alternative: ensure that all
+ # macros with optional arguments are terminated with {} and text
+ # inside any optional argument is put in a group ``[{text}]``).
+ # Commands with optional args inside an optional arg must be put
+ # in a group, e.g. ``\item[{\hyperref[label]{text}}]``.
+ ord('['): ur'{[}',
+ ord(']'): ur'{]}'
+ }
+ # Unicode chars that are not recognized by LaTeX's utf8 encoding
+ unsupported_unicode_chars = {
+ 0x00A0: ur'~', # NO-BREAK SPACE
+ 0x00AD: ur'\-', # SOFT HYPHEN
+ #
+ 0x2011: ur'\hbox{-}', # NON-BREAKING HYPHEN
+ 0x21d4: ur'$\Leftrightarrow$',
+ # Docutils footnote symbols:
+ 0x2660: ur'$\spadesuit$',
+ 0x2663: ur'$\clubsuit$',
+ }
+ # Unicode chars that are recognized by LaTeX's utf8 encoding
+ unicode_chars = {
+ 0x200C: ur'\textcompwordmark', # ZERO WIDTH NON-JOINER
+ 0x2013: ur'\textendash{}',
+ 0x2014: ur'\textemdash{}',
+ 0x2018: ur'\textquoteleft{}',
+ 0x2019: ur'\textquoteright{}',
+ 0x201A: ur'\quotesinglbase{}', # SINGLE LOW-9 QUOTATION MARK
+ 0x201C: ur'\textquotedblleft{}',
+ 0x201D: ur'\textquotedblright{}',
+ 0x201E: ur'\quotedblbase{}', # DOUBLE LOW-9 QUOTATION MARK
+ 0x2030: ur'\textperthousand{}', # PER MILLE SIGN
+ 0x2031: ur'\textpertenthousand{}', # PER TEN THOUSAND SIGN
+ 0x2039: ur'\guilsinglleft{}',
+ 0x203A: ur'\guilsinglright{}',
+ 0x2423: ur'\textvisiblespace{}', # OPEN BOX
+ 0x2020: ur'\dag{}',
+ 0x2021: ur'\ddag{}',
+ 0x2026: ur'\dots{}',
+ 0x2122: ur'\texttrademark{}',
+ }
+ # Unicode chars that require a feature/package to render
+ pifont_chars = {
+ 0x2665: ur'\ding{170}', # black heartsuit
+ 0x2666: ur'\ding{169}', # black diamondsuit
+ }
+ # recognized with 'utf8', if textcomp is loaded
+ textcomp_chars = {
+ # Latin-1 Supplement
+ 0x00a2: ur'\textcent{}', # ¢ CENT SIGN
+ 0x00a4: ur'\textcurrency{}', # ¤ CURRENCY SYMBOL
+ 0x00a5: ur'\textyen{}', # ¥ YEN SIGN
+ 0x00a6: ur'\textbrokenbar{}', # ¦ BROKEN BAR
+ 0x00a7: ur'\textsection{}', # § SECTION SIGN
+ 0x00a8: ur'\textasciidieresis{}', # ¨ DIAERESIS
+ 0x00a9: ur'\textcopyright{}', # © COPYRIGHT SIGN
+ 0x00aa: ur'\textordfeminine{}', # ª FEMININE ORDINAL INDICATOR
+ 0x00ac: ur'\textlnot{}', # ¬ NOT SIGN
+ 0x00ae: ur'\textregistered{}', # ® REGISTERED SIGN
+ 0x00af: ur'\textasciimacron{}', # ¯ MACRON
+ 0x00b0: ur'\textdegree{}', # ° DEGREE SIGN
+ 0x00b1: ur'\textpm{}', # ± PLUS-MINUS SIGN
+ 0x00b2: ur'\texttwosuperior{}', # ² SUPERSCRIPT TWO
+ 0x00b3: ur'\textthreesuperior{}', # ³ SUPERSCRIPT THREE
+ 0x00b4: ur'\textasciiacute{}', # ´ ACUTE ACCENT
+ 0x00b5: ur'\textmu{}', # µ MICRO SIGN
+ 0x00b6: ur'\textparagraph{}', # ¶ PILCROW SIGN # not equal to \textpilcrow
+ 0x00b9: ur'\textonesuperior{}', # ¹ SUPERSCRIPT ONE
+ 0x00ba: ur'\textordmasculine{}', # º MASCULINE ORDINAL INDICATOR
+ 0x00bc: ur'\textonequarter{}', # 1/4 FRACTION
+ 0x00bd: ur'\textonehalf{}', # 1/2 FRACTION
+ 0x00be: ur'\textthreequarters{}', # 3/4 FRACTION
+ 0x00d7: ur'\texttimes{}', # × MULTIPLICATION SIGN
+ 0x00f7: ur'\textdiv{}', # ÷ DIVISION SIGN
+ #
+ 0x0192: ur'\textflorin{}', # LATIN SMALL LETTER F WITH HOOK
+ 0x02b9: ur'\textasciiacute{}', # MODIFIER LETTER PRIME
+ 0x02ba: ur'\textacutedbl{}', # MODIFIER LETTER DOUBLE PRIME
+ 0x2016: ur'\textbardbl{}', # DOUBLE VERTICAL LINE
+ 0x2022: ur'\textbullet{}', # BULLET
+ 0x2032: ur'\textasciiacute{}', # PRIME
+ 0x2033: ur'\textacutedbl{}', # DOUBLE PRIME
+ 0x2035: ur'\textasciigrave{}', # REVERSED PRIME
+ 0x2036: ur'\textgravedbl{}', # REVERSED DOUBLE PRIME
+ 0x203b: ur'\textreferencemark{}', # REFERENCE MARK
+ 0x203d: ur'\textinterrobang{}', # INTERROBANG
+ 0x2044: ur'\textfractionsolidus{}', # FRACTION SLASH
+ 0x2045: ur'\textlquill{}', # LEFT SQUARE BRACKET WITH QUILL
+ 0x2046: ur'\textrquill{}', # RIGHT SQUARE BRACKET WITH QUILL
+ 0x2052: ur'\textdiscount{}', # COMMERCIAL MINUS SIGN
+ 0x20a1: ur'\textcolonmonetary{}', # COLON SIGN
+ 0x20a3: ur'\textfrenchfranc{}', # FRENCH FRANC SIGN
+ 0x20a4: ur'\textlira{}', # LIRA SIGN
+ 0x20a6: ur'\textnaira{}', # NAIRA SIGN
+ 0x20a9: ur'\textwon{}', # WON SIGN
+ 0x20ab: ur'\textdong{}', # DONG SIGN
+ 0x20ac: ur'\texteuro{}', # EURO SIGN
+ 0x20b1: ur'\textpeso{}', # PESO SIGN
+ 0x20b2: ur'\textguarani{}', # GUARANI SIGN
+ 0x2103: ur'\textcelsius{}', # DEGREE CELSIUS
+ 0x2116: ur'\textnumero{}', # NUMERO SIGN
+            0x2117: ur'\textcircledP{}', # SOUND RECORDING COPYRIGHT
+ 0x211e: ur'\textrecipe{}', # PRESCRIPTION TAKE
+ 0x2120: ur'\textservicemark{}', # SERVICE MARK
+ 0x2122: ur'\texttrademark{}', # TRADE MARK SIGN
+ 0x2126: ur'\textohm{}', # OHM SIGN
+ 0x2127: ur'\textmho{}', # INVERTED OHM SIGN
+ 0x212e: ur'\textestimated{}', # ESTIMATED SYMBOL
+ 0x2190: ur'\textleftarrow{}', # LEFTWARDS ARROW
+ 0x2191: ur'\textuparrow{}', # UPWARDS ARROW
+ 0x2192: ur'\textrightarrow{}', # RIGHTWARDS ARROW
+ 0x2193: ur'\textdownarrow{}', # DOWNWARDS ARROW
+ 0x2212: ur'\textminus{}', # MINUS SIGN
+ 0x2217: ur'\textasteriskcentered{}', # ASTERISK OPERATOR
+ 0x221a: ur'\textsurd{}', # SQUARE ROOT
+ 0x2422: ur'\textblank{}', # BLANK SYMBOL
+ 0x25e6: ur'\textopenbullet{}', # WHITE BULLET
+ 0x25ef: ur'\textbigcircle{}', # LARGE CIRCLE
+ 0x266a: ur'\textmusicalnote{}', # EIGHTH NOTE
+ 0x26ad: ur'\textmarried{}', # MARRIAGE SYMBOL
+ 0x26ae: ur'\textdivorced{}', # DIVORCE SYMBOL
+ 0x27e8: ur'\textlangle{}', # MATHEMATICAL LEFT ANGLE BRACKET
+ 0x27e9: ur'\textrangle{}', # MATHEMATICAL RIGHT ANGLE BRACKET
+ }
+ # TODO: greek alphabet ... ?
+ # see also LaTeX codec
+ # http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/252124
+ # and unimap.py from TeXML
+
+ # set up the translation table:
+ table = special_chars
+ # keep the underscore in citation references
+ if self.inside_citation_reference_label:
+            del table[ord('_')]
+ # Workarounds for OT1 font-encoding
+ if self.font_encoding in ['OT1', '']:
+ # * out-of-order characters in cmtt
+ if self.literal:
+ # replace underscore by underlined blank,
+ # because this has correct width.
+ table[ord('_')] = u'\\underline{~}'
+ # the backslash doesn't work, so we use a mirrored slash.
+ # \reflectbox is provided by graphicx:
+ self.requirements['graphicx'] = self.graphicx_package
+ table[ord('\\')] = ur'\reflectbox{/}'
+ # * ``< | >`` come out as different chars (except for cmtt):
+ else:
+ table[ord('|')] = ur'\textbar{}'
+ table[ord('<')] = ur'\textless{}'
+ table[ord('>')] = ur'\textgreater{}'
+ if self.insert_non_breaking_blanks:
+ table[ord(' ')] = ur'~'
+ if self.literal:
+ # double quotes are 'active' in some languages
+ table[ord('"')] = self.babel.literal_double_quote
+ else:
+ text = self.babel.quote_quotes(text)
+ # Unicode chars:
+ table.update(unsupported_unicode_chars)
+ table.update(pifont_chars)
+ if not self.latex_encoding.startswith('utf8'):
+ table.update(unicode_chars)
+ table.update(textcomp_chars)
+ # Characters that require a feature/package to render
+ for ch in text:
+ if ord(ch) in pifont_chars:
+ self.requirements['pifont'] = '\\usepackage{pifont}'
+ if ord(ch) in textcomp_chars:
+ self.requirements['textcomp'] = PreambleCmds.textcomp
+
+ text = text.translate(table)
+
+ # Break up input ligatures
+ for char in separate_chars * 2:
+ # Do it twice ("* 2") because otherwise we would replace
+ # '---' by '-{}--'.
+ text = text.replace(char + char, char + '{}' + char)
+ # Literal line breaks (in address or literal blocks):
+ if self.insert_newline:
+ # for blank lines, insert a protected space, to avoid
+ # ! LaTeX Error: There's no line here to end.
+ textlines = [line + '~'*(not line.lstrip())
+ for line in text.split('\n')]
+ text = '\\\\\n'.join(textlines)
+ if self.literal and not self.insert_non_breaking_blanks:
+ # preserve runs of spaces but allow wrapping
+ text = text.replace(' ', ' ~')
+ if not self.latex_encoding.startswith('utf8'):
+ text = self.ensure_math(text)
+ return text
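The ligature-separation trick inside `encode` deserves a worked example: a single replace pass over `'---'` would yield only `'-{}--'`, so the character list is doubled to force two passes (`separate_chars` is a hypothetical helper isolating just that loop):

```python
def separate_chars(text, chars='-'):
    """Break up input ligatures, e.g. '--' -> '-{}-'."""
    for c in chars * 2:
        # Do it twice ("* 2") because one pass turns '---'
        # into '-{}--' only.
        text = text.replace(c + c, c + '{}' + c)
    return text
```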
+
+ def attval(self, text,
+ whitespace=re.compile('[\n\r\t\v\f]')):
+ """Cleanse, encode, and return attribute value text."""
+ return self.encode(whitespace.sub(' ', text))
+
+ # TODO: is this used anywhere? (update or delete)
+ ## def astext(self):
+ ## """Assemble document parts and return as string."""
+ ## head = '\n'.join(self.head_prefix + self.stylesheet + self.head)
+ ## body = ''.join(self.body_prefix + self.body + self.body_suffix)
+ ## return head + '\n' + body
+
+ def is_inline(self, node):
+ """Check whether a node represents an inline element"""
+ return isinstance(node.parent, nodes.TextElement)
+
+ def append_hypertargets(self, node):
+ """Append hypertargets for all ids of `node`"""
+ # hypertarget places the anchor at the target's baseline,
+        # so we raise it explicitly
+ self.out.append('%\n'.join(['\\raisebox{1em}{\\hypertarget{%s}{}}' %
+ id for id in node['ids']]))
+
+ def ids_to_labels(self, node, set_anchor=True):
+        r"""Return a list of label definitions for all ids of `node`.
+
+        If `set_anchor` is True, an anchor is set with \phantomsection.
+ """
+ labels = ['\\label{%s}' % id for id in node.get('ids', [])]
+ if set_anchor and labels:
+ labels.insert(0, '\\phantomsection')
+ return labels
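Extracted as a self-contained sketch (a plain function mirroring the method above, operating on a list of ids rather than a node):

```python
def ids_to_labels(ids, set_anchor=True):
    """Label definitions for a list of ids, as in the method above."""
    labels = ['\\label{%s}' % id_ for id_ in ids]
    if set_anchor and labels:
        # \phantomsection gives hyperref an anchor at this point
        labels.insert(0, '\\phantomsection')
    return labels
```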
+
+ def push_output_collector(self, new_out):
+ self.out_stack.append(self.out)
+ self.out = new_out
+
+ def pop_output_collector(self):
+ self.out = self.out_stack.pop()
+
+ # Visitor methods
+ # ---------------
+
+ def visit_Text(self, node):
+ self.out.append(self.encode(node.astext()))
+
+ def depart_Text(self, node):
+ pass
+
+ def visit_address(self, node):
+ self.visit_docinfo_item(node, 'address')
+
+ def depart_address(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_admonition(self, node):
+ self.fallbacks['admonition'] = PreambleCmds.admonition
+ if 'error' in node['classes']:
+ self.fallbacks['error'] = PreambleCmds.error
+ # strip the generic 'admonition' from the list of classes
+ node['classes'] = [cls for cls in node['classes']
+ if cls != 'admonition']
+ self.out.append('\n\\DUadmonition[%s]{\n' % ','.join(node['classes']))
+
+ def depart_admonition(self, node=None):
+ self.out.append('}\n')
+
+ def visit_author(self, node):
+ self.visit_docinfo_item(node, 'author')
+
+ def depart_author(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_authors(self, node):
+ # not used: visit_author is called anyway for each author.
+ pass
+
+ def depart_authors(self, node):
+ pass
+
+ def visit_block_quote(self, node):
+ self.out.append( '%\n\\begin{quote}\n')
+
+ def depart_block_quote(self, node):
+ self.out.append( '\n\\end{quote}\n')
+
+ def visit_bullet_list(self, node):
+ if self.is_toc_list:
+ self.out.append( '%\n\\begin{list}{}{}\n' )
+ else:
+ self.out.append( '%\n\\begin{itemize}\n' )
+
+ def depart_bullet_list(self, node):
+ if self.is_toc_list:
+ self.out.append( '\n\\end{list}\n' )
+ else:
+ self.out.append( '\n\\end{itemize}\n' )
+
+ def visit_superscript(self, node):
+ self.out.append(r'\textsuperscript{')
+ if node['classes']:
+ self.visit_inline(node)
+
+ def depart_superscript(self, node):
+ if node['classes']:
+ self.depart_inline(node)
+ self.out.append('}')
+
+ def visit_subscript(self, node):
+ self.out.append(r'\textsubscript{') # requires `fixltx2e`
+ if node['classes']:
+ self.visit_inline(node)
+
+ def depart_subscript(self, node):
+ if node['classes']:
+ self.depart_inline(node)
+ self.out.append('}')
+
+ def visit_caption(self, node):
+ self.out.append( '\\caption{' )
+
+ def depart_caption(self, node):
+ self.out.append('}\n')
+
+ def visit_title_reference(self, node):
+ self.fallbacks['titlereference'] = PreambleCmds.titlereference
+ self.out.append(r'\DUroletitlereference{')
+ if node['classes']:
+ self.visit_inline(node)
+
+ def depart_title_reference(self, node):
+ if node['classes']:
+ self.depart_inline(node)
+ self.out.append( '}' )
+
+ def visit_citation(self, node):
+ # TODO maybe use cite bibitems
+ if self._use_latex_citations:
+ self.push_output_collector([])
+ else:
+ # TODO: do we need these?
+ ## self.requirements['~fnt_floats'] = PreambleCmds.footnote_floats
+ self.out.append(r'\begin{figure}[b]')
+ self.append_hypertargets(node)
+
+ def depart_citation(self, node):
+ if self._use_latex_citations:
+ label = self.out[0]
+ text = ''.join(self.out[1:])
+ self._bibitems.append([label, text])
+ self.pop_output_collector()
+ else:
+ self.out.append('\\end{figure}\n')
+
+ def visit_citation_reference(self, node):
+ if self._use_latex_citations:
+ if not self.inside_citation_reference_label:
+ self.out.append(r'\cite{')
+ self.inside_citation_reference_label = 1
+ else:
+ assert self.body[-1] in (' ', '\n'),\
+ 'unexpected non-whitespace while in reference label'
+ del self.body[-1]
+ else:
+ href = ''
+ if 'refid' in node:
+ href = node['refid']
+ elif 'refname' in node:
+ href = self.document.nameids[node['refname']]
+ self.out.append('[\\hyperlink{%s}{' % href)
+
+ def depart_citation_reference(self, node):
+ if self._use_latex_citations:
+ followup_citation = False
+ # check for a following citation separated by a space or newline
+ next_siblings = node.traverse(descend=0, siblings=1,
+ include_self=0)
+ if len(next_siblings) > 1:
+ next = next_siblings[0]
+ if (isinstance(next, nodes.Text) and
+ next.astext() in (' ', '\n')):
+ if next_siblings[1].__class__ == node.__class__:
+ followup_citation = True
+ if followup_citation:
+ self.out.append(',')
+ else:
+ self.out.append('}')
+ self.inside_citation_reference_label = False
+ else:
+ self.out.append('}]')
+
+ def visit_classifier(self, node):
+ self.out.append( '(\\textbf{' )
+
+ def depart_classifier(self, node):
+ self.out.append( '})\n' )
+
+ def visit_colspec(self, node):
+ self.active_table.visit_colspec(node)
+
+ def depart_colspec(self, node):
+ pass
+
+ def visit_comment(self, node):
+ # Precede every line with a comment sign, wrap in newlines
+ self.out.append('\n%% %s\n' % node.astext().replace('\n', '\n% '))
+ raise nodes.SkipNode
+
+ def depart_comment(self, node):
+ pass
+
+ def visit_compound(self, node):
+ pass
+
+ def depart_compound(self, node):
+ pass
+
+ def visit_contact(self, node):
+ self.visit_docinfo_item(node, 'contact')
+
+ def depart_contact(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_container(self, node):
+ pass
+
+ def depart_container(self, node):
+ pass
+
+ def visit_copyright(self, node):
+ self.visit_docinfo_item(node, 'copyright')
+
+ def depart_copyright(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_date(self, node):
+ self.visit_docinfo_item(node, 'date')
+
+ def depart_date(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_decoration(self, node):
+ # header and footer
+ pass
+
+ def depart_decoration(self, node):
+ pass
+
+ def visit_definition(self, node):
+ pass
+
+ def depart_definition(self, node):
+ self.out.append('\n')
+
+ def visit_definition_list(self, node):
+ self.out.append( '%\n\\begin{description}\n' )
+
+ def depart_definition_list(self, node):
+ self.out.append( '\\end{description}\n' )
+
+ def visit_definition_list_item(self, node):
+ pass
+
+ def depart_definition_list_item(self, node):
+ pass
+
+ def visit_description(self, node):
+ self.out.append(' ')
+
+ def depart_description(self, node):
+ pass
+
+ def visit_docinfo(self, node):
+ self.push_output_collector(self.docinfo)
+
+ def depart_docinfo(self, node):
+ self.pop_output_collector()
+ # Some items (e.g. author) end up in other places
+ if self.docinfo:
+ # tabularx: automatic width of columns, no page breaks allowed.
+ self.requirements['tabularx'] = r'\usepackage{tabularx}'
+ self.fallbacks['_providelength'] = PreambleCmds.providelength
+ self.fallbacks['docinfo'] = PreambleCmds.docinfo
+ #
+ self.docinfo.insert(0, '\n% Docinfo\n'
+ '\\begin{center}\n'
+ '\\begin{tabularx}{\\DUdocinfowidth}{lX}\n')
+ self.docinfo.append('\\end{tabularx}\n'
+ '\\end{center}\n')
+
+ def visit_docinfo_item(self, node, name):
+ if name == 'author':
+ self.pdfauthor.append(self.attval(node.astext()))
+ if self.use_latex_docinfo:
+ if name in ('author', 'organization', 'contact', 'address'):
+ # We attach these to the last author. If any of them precedes
+ # the first author, put them in a separate "author" group
+ # (for lack of better semantics).
+ if name == 'author' or not self.author_stack:
+ self.author_stack.append([])
+ if name == 'address': # newlines are meaningful
+ self.insert_newline = 1
+ text = self.encode(node.astext())
+ self.insert_newline = False
+ else:
+ text = self.attval(node.astext())
+ self.author_stack[-1].append(text)
+ raise nodes.SkipNode
+ elif name == 'date':
+ self.date.append(self.attval(node.astext()))
+ raise nodes.SkipNode
+ self.out.append('\\textbf{%s}: &\n\t' % self.language_label(name))
+ if name == 'address':
+ self.insert_newline = 1
+ self.out.append('{\\raggedright\n')
+ self.context.append(' } \\\\\n')
+ else:
+ self.context.append(' \\\\\n')
+
+ def depart_docinfo_item(self, node):
+ self.out.append(self.context.pop())
+ # for address we did set insert_newline
+ self.insert_newline = False
+
+ def visit_doctest_block(self, node):
+ self.visit_literal_block(node)
+
+ def depart_doctest_block(self, node):
+ self.depart_literal_block(node)
+
+ def visit_document(self, node):
+ # titled document?
+ if (self.use_latex_docinfo or len(node) and
+ isinstance(node[0], nodes.title)):
+ self.title_labels += self.ids_to_labels(node)
+
+ def depart_document(self, node):
+ # Complete header with information gained from walkabout
+ # a) conditional requirements (before style sheet)
+ self.requirements = self.requirements.sortedvalues()
+ # b) conditional fallback definitions (after style sheet)
+ self.fallbacks = self.fallbacks.sortedvalues()
+ # c) PDF properties
+ self.pdfsetup.append(PreambleCmds.linking % (self.colorlinks,
+ self.hyperlink_color,
+ self.hyperlink_color))
+ if self.pdfauthor:
+ authors = self.author_separator.join(self.pdfauthor)
+ self.pdfinfo.append(' pdfauthor={%s}' % authors)
+ if self.pdfinfo:
+ self.pdfsetup += [r'\hypersetup{'] + self.pdfinfo + ['}']
+ # Complete body
+ # a) document title (part 'body_prefix'):
+ # NOTE: Docutils puts author/date into docinfo, so normally
+ # we do not want LaTeX author/date handling (via \maketitle).
+ # To deactivate it, we add \title, \author, \date,
+ # even if the arguments are empty strings.
+ if self.title or self.author_stack or self.date:
+ authors = ['\\\\\n'.join(author_entry)
+ for author_entry in self.author_stack]
+ title = [''.join(self.title)] + self.title_labels
+ if self.subtitle:
+ title += [r'\\ % subtitle',
+ r'\large{%s}' % ''.join(self.subtitle)
+ ] + self.subtitle_labels
+ self.body_pre_docinfo.append(PreambleCmds.documenttitle % (
+ '%\n '.join(title),
+ ' \\and\n'.join(authors),
+ ', '.join(self.date)))
+ # b) bibliography
+ # TODO insertion point of bibliography should be configurable.
+ if self._use_latex_citations and len(self._bibitems)>0:
+ if not self.bibtex:
+ widest_label = ''
+ for bi in self._bibitems:
+ if len(widest_label)<len(bi[0]):
+ widest_label = bi[0]
+ self.out.append('\n\\begin{thebibliography}{%s}\n' %
+ widest_label)
+ for bi in self._bibitems:
+ # cite_key: underscores must not be escaped
+ cite_key = bi[0].replace(r'\_','_')
+ self.out.append('\\bibitem[%s]{%s}{%s}\n' %
+ (bi[0], cite_key, bi[1]))
+ self.out.append('\\end{thebibliography}\n')
+ else:
+ self.out.append('\n\\bibliographystyle{%s}\n' %
+ self.bibtex[0])
+ self.out.append('\\bibliography{%s}\n' % self.bibtex[1])
+ # c) make sure to generate a toc file if needed for local contents:
+ if 'minitoc' in self.requirements and not self.has_latex_toc:
+ self.out.append('\n\\faketableofcontents % for local ToCs\n')
+
+ def visit_emphasis(self, node):
+ self.out.append('\\emph{')
+ if node['classes']:
+ self.visit_inline(node)
+
+ def depart_emphasis(self, node):
+ if node['classes']:
+ self.depart_inline(node)
+ self.out.append('}')
+
+ def visit_entry(self, node):
+ self.active_table.visit_entry()
+ # cell separation
+ # BUG: the following fails, with more than one multirow
+ # starting in the second column (or later) see
+ # ../../../test/functional/input/data/latex.txt
+ if self.active_table.get_entry_number() == 1:
+ # If the first row is a multirow, this actually is the second row;
+ # it gets hairy if rowspans follow each other.
+ if self.active_table.get_rowspan(0):
+ count = 0
+ while self.active_table.get_rowspan(count):
+ count += 1
+ self.out.append(' & ')
+ self.active_table.visit_entry() # increment cell count
+ else:
+ self.out.append(' & ')
+ # multirow, multicolumn
+ # IN WORK BUG TODO HACK continues here
+ # multirow in LaTeX simply enlarges the cell over several rows
+ # (the following n rows if n is positive, the preceding ones if negative).
+ if 'morerows' in node and 'morecols' in node:
+ raise NotImplementedError('Cells that '
+ 'span multiple rows *and* columns are not supported, sorry.')
+ if 'morerows' in node:
+ self.requirements['multirow'] = r'\usepackage{multirow}'
+ count = node['morerows'] + 1
+ self.active_table.set_rowspan(
+ self.active_table.get_entry_number()-1,count)
+ self.out.append('\\multirow{%d}{%s}{%%' %
+ (count,self.active_table.get_column_width()))
+ self.context.append('}')
+ elif 'morecols' in node:
+ # The vertical bar before the column is omitted if it is the
+ # first column; the one after is always present.
+ if self.active_table.get_entry_number() == 1:
+ bar1 = self.active_table.get_vertical_bar()
+ else:
+ bar1 = ''
+ count = node['morecols'] + 1
+ self.out.append('\\multicolumn{%d}{%sl%s}{' %
+ (count, bar1, self.active_table.get_vertical_bar()))
+ self.context.append('}')
+ else:
+ self.context.append('')
+
+ # header / not header
+ if isinstance(node.parent.parent, nodes.thead):
+ self.out.append('\\textbf{%')
+ self.context.append('}')
+ elif self.active_table.is_stub_column():
+ self.out.append('\\textbf{')
+ self.context.append('}')
+ else:
+ self.context.append('')
+
+ def depart_entry(self, node):
+ self.out.append(self.context.pop()) # header / not header
+ self.out.append(self.context.pop()) # multirow/column
+ # if following row is spanned from above.
+ if self.active_table.get_rowspan(self.active_table.get_entry_number()):
+ self.out.append(' & ')
+ self.active_table.visit_entry() # increment cell count
+
+ def visit_row(self, node):
+ self.active_table.visit_row()
+
+ def depart_row(self, node):
+ self.out.extend(self.active_table.depart_row())
+
+ def visit_enumerated_list(self, node):
+ # We create our own enumeration list environment.
+ # This allows setting the style and starting value,
+ # and supports unlimited nesting.
+ enum_style = {'arabic':'arabic',
+ 'loweralpha':'alph',
+ 'upperalpha':'Alph',
+ 'lowerroman':'roman',
+ 'upperroman':'Roman' }
+ enum_suffix = ''
+ if 'suffix' in node:
+ enum_suffix = node['suffix']
+ enum_prefix = ''
+ if 'prefix' in node:
+ enum_prefix = node['prefix']
+ if self.compound_enumerators:
+ pref = ''
+ if self.section_prefix_for_enumerators and self.section_level:
+ for i in range(self.section_level):
+ pref += '%d.' % self._section_number[i]
+ pref = pref[:-1] + self.section_enumerator_separator
+ enum_prefix += pref
+ for ctype, cname in self._enumeration_counters:
+ enum_prefix += '\\%s{%s}.' % (ctype, cname)
+ enum_type = 'arabic'
+ if 'enumtype' in node:
+ enum_type = node['enumtype']
+ if enum_type in enum_style:
+ enum_type = enum_style[enum_type]
+
+ counter_name = 'listcnt%d' % len(self._enumeration_counters)
+ self._enumeration_counters.append((enum_type, counter_name))
+ # If we haven't used this counter name before, then create a
+ # new counter; otherwise, reset & reuse the old counter.
+ if len(self._enumeration_counters) > self._max_enumeration_counters:
+ self._max_enumeration_counters = len(self._enumeration_counters)
+ self.out.append('\\newcounter{%s}\n' % counter_name)
+ else:
+ self.out.append('\\setcounter{%s}{0}\n' % counter_name)
+
+ self.out.append('\\begin{list}{%s\\%s{%s}%s}\n' %
+ (enum_prefix,enum_type,counter_name,enum_suffix))
+ self.out.append('{\n')
+ self.out.append('\\usecounter{%s}\n' % counter_name)
+ # set start after usecounter, because it initializes to zero.
+ if 'start' in node:
+ self.out.append('\\addtocounter{%s}{%d}\n' %
+ (counter_name,node['start']-1))
+ ## set rightmargin equal to leftmargin
+ self.out.append('\\setlength{\\rightmargin}{\\leftmargin}\n')
+ self.out.append('}\n')
+
+ def depart_enumerated_list(self, node):
+ self.out.append('\\end{list}\n')
+ self._enumeration_counters.pop()
+
+ def visit_field(self, node):
+ # real output is done in siblings: _argument, _body, _name
+ pass
+
+ def depart_field(self, node):
+ self.out.append('\n')
+ ##self.out.append('%[depart_field]\n')
+
+ def visit_field_argument(self, node):
+ self.out.append('%[visit_field_argument]\n')
+
+ def depart_field_argument(self, node):
+ self.out.append('%[depart_field_argument]\n')
+
+ def visit_field_body(self, node):
+ pass
+
+ def depart_field_body(self, node):
+ if self.out is self.docinfo:
+ self.out.append(r'\\')
+
+ def visit_field_list(self, node):
+ if self.out is not self.docinfo:
+ self.fallbacks['fieldlist'] = PreambleCmds.fieldlist
+ self.out.append('%\n\\begin{DUfieldlist}\n')
+
+ def depart_field_list(self, node):
+ if self.out is not self.docinfo:
+ self.out.append('\\end{DUfieldlist}\n')
+
+ def visit_field_name(self, node):
+ if self.out is self.docinfo:
+ self.out.append('\\textbf{')
+ else:
+ # Commands with optional args inside an optional arg must be put
+ # in a group, e.g. ``\item[{\hyperref[label]{text}}]``.
+ self.out.append('\\item[{')
+
+ def depart_field_name(self, node):
+ if self.out is self.docinfo:
+ self.out.append('}: &')
+ else:
+ self.out.append(':}]')
+
+ def visit_figure(self, node):
+ self.requirements['float_settings'] = PreambleCmds.float_settings
+ # ! the 'align' attribute should set "outer alignment" !
+ # For "inner alignment" use LaTeX default alignment (similar to HTML)
+ ## if ('align' not in node.attributes or
+ ## node.attributes['align'] == 'center'):
+ ## align = '\n\\centering'
+ ## align_end = ''
+ ## else:
+ ## # TODO non vertical space for other alignments.
+ ## align = '\\begin{flush%s}' % node.attributes['align']
+ ## align_end = '\\end{flush%s}' % node.attributes['align']
+ ## self.out.append( '\\begin{figure}%s\n' % align )
+ ## self.context.append( '%s\\end{figure}\n' % align_end )
+ self.out.append('\\begin{figure}')
+ if node.get('ids'):
+ self.out += ['\n'] + self.ids_to_labels(node)
+
+ def depart_figure(self, node):
+ self.out.append('\\end{figure}\n')
+
+ def visit_footer(self, node):
+ self.push_output_collector([])
+ self.out.append(r'\newcommand{\DUfooter}{')
+
+ def depart_footer(self, node):
+ self.out.append('}')
+ self.requirements['~footer'] = ''.join(self.out)
+ self.pop_output_collector()
+
+ def visit_footnote(self, node):
+ try:
+ backref = node['backrefs'][0]
+ except IndexError:
+ backref = node['ids'][0] # no backref, use self-ref instead
+ if self.settings.figure_footnotes:
+ self.requirements['~fnt_floats'] = PreambleCmds.footnote_floats
+ self.out.append('\\begin{figure}[b]')
+ self.append_hypertargets(node)
+ if node.get('id') == node.get('name'): # explicit label
+ self.out += self.ids_to_labels(node)
+ elif self.docutils_footnotes:
+ self.fallbacks['footnotes'] = PreambleCmds.footnotes
+ num,text = node.astext().split(None,1)
+ if self.settings.footnote_references == 'brackets':
+ num = '[%s]' % num
+ self.out.append('%%\n\\DUfootnotetext{%s}{%s}{%s}{' %
+ (node['ids'][0], backref, self.encode(num)))
+ if node['ids'] == node['names']:
+ self.out += self.ids_to_labels(node)
+ # mask newline to prevent spurious whitespace:
+ self.out.append('%')
+ ## else: # TODO: "real" LaTeX \footnote{}s
+
+ def depart_footnote(self, node):
+ if self.figure_footnotes:
+ self.out.append('\\end{figure}\n')
+ else:
+ self.out.append('}\n')
+
+ def visit_footnote_reference(self, node):
+ href = ''
+ if 'refid' in node:
+ href = node['refid']
+ elif 'refname' in node:
+ href = self.document.nameids[node['refname']]
+ # if not self.docutils_footnotes:
+ # TODO: insert footnote content at (or near) this place
+ # print "footnote-ref to", node['refid']
+ # footnotes = (self.document.footnotes +
+ # self.document.autofootnotes +
+ # self.document.symbol_footnotes)
+ # for footnote in footnotes:
+ # # print footnote['ids']
+ # if node.get('refid', '') in footnote['ids']:
+ # print 'matches', footnote['ids']
+ format = self.settings.footnote_references
+ if format == 'brackets':
+ self.append_hypertargets(node)
+ self.out.append('\\hyperlink{%s}{[' % href)
+ self.context.append(']}')
+ else:
+ self.fallbacks['footnotes'] = PreambleCmds.footnotes
+ self.out.append(r'\DUfootnotemark{%s}{%s}{' %
+ (node['ids'][0], href))
+ self.context.append('}')
+
+ def depart_footnote_reference(self, node):
+ self.out.append(self.context.pop())
+
+ # footnote/citation label
+ def label_delim(self, node, bracket, superscript):
+ if isinstance(node.parent, nodes.footnote):
+ if not self.figure_footnotes:
+ raise nodes.SkipNode
+ if self.settings.footnote_references == 'brackets':
+ self.out.append(bracket)
+ else:
+ self.out.append(superscript)
+ else:
+ assert isinstance(node.parent, nodes.citation)
+ if not self._use_latex_citations:
+ self.out.append(bracket)
+
+ def visit_label(self, node):
+ """footnote or citation label: in brackets or as superscript"""
+ self.label_delim(node, '[', '\\textsuperscript{')
+
+ def depart_label(self, node):
+ self.label_delim(node, ']', '}')
+
+ # elements generated by the framework e.g. section numbers.
+ def visit_generated(self, node):
+ pass
+
+ def depart_generated(self, node):
+ pass
+
+ def visit_header(self, node):
+ self.push_output_collector([])
+ self.out.append(r'\newcommand{\DUheader}{')
+
+ def depart_header(self, node):
+ self.out.append('}')
+ self.requirements['~header'] = ''.join(self.out)
+ self.pop_output_collector()
+
+ def to_latex_length(self, length_str):
+ """Convert a string with reST length to a LaTeX length."""
+ match = re.match(r'(\d*\.?\d*)\s*(\S*)', length_str)
+ if not match:
+ return length_str
+ value, unit = match.groups()[:2]
+ # no unit or "DTP" points (called 'bp' in TeX):
+ if unit in ('', 'pt'):
+ length_str = '%sbp' % value
+ # percentage: relate to current line width
+ elif unit == '%':
+ length_str = '%.3f\\linewidth' % (float(value)/100.0)
+ return length_str
+
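A minimal standalone sketch of the length conversion above, reimplemented outside the writer class for illustration:

```python
import re

def to_latex_length(length_str):
    """reST length -> LaTeX length, mirroring the method above."""
    match = re.match(r'(\d*\.?\d*)\s*(\S*)', length_str)
    if not match:
        return length_str
    value, unit = match.groups()[:2]
    # no unit or "DTP" points (called 'bp' in TeX):
    if unit in ('', 'pt'):
        length_str = '%sbp' % value
    # percentage: relate to current line width
    elif unit == '%':
        length_str = '%.3f\\linewidth' % (float(value) / 100.0)
    return length_str
```

For instance, `'25%'` becomes `'0.250\linewidth'` and a bare `'10'` becomes `'10bp'`, while units the function does not handle (e.g. `'2cm'`) pass through unchanged.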
+ def visit_image(self, node):
+ self.requirements['graphicx'] = self.graphicx_package
+ attrs = node.attributes
+ # Add image URI to dependency list, assuming that it's
+ # referring to a local file.
+ self.settings.record_dependencies.add(attrs['uri'])
+ # alignment defaults:
+ if 'align' not in attrs:
+ # Set default align of image in a figure to 'center'
+ if isinstance(node.parent, nodes.figure):
+ attrs['align'] = 'center'
+ # query 'align-*' class argument
+ for cls in node['classes']:
+ if cls.startswith('align-'):
+ attrs['align'] = cls.split('-')[1]
+ # pre- and postfix (prefix inserted in reverse order)
+ pre = []
+ post = []
+ include_graphics_options = []
+ display_style = ('block-', 'inline-')[self.is_inline(node)]
+ align_codes = {
+ # inline images: by default latex aligns the bottom.
+ 'bottom': ('', ''),
+ 'middle': (r'\raisebox{-0.5\height}{', '}'),
+ 'top': (r'\raisebox{-\height}{', '}'),
+ # block level images:
+ 'center': (r'\noindent\makebox[\textwidth][c]{', '}'),
+ 'left': (r'\noindent{', r'\hfill}'),
+ 'right': (r'\noindent{\hfill', '}'),}
+ if 'align' in attrs:
+ try:
+ align_code = align_codes[attrs['align']]
+ pre.append(align_code[0])
+ post.append(align_code[1])
+ except KeyError:
+ pass # TODO: warn?
+ if 'height' in attrs:
+ include_graphics_options.append('height=%s' %
+ self.to_latex_length(attrs['height']))
+ if 'scale' in attrs:
+ include_graphics_options.append('scale=%f' %
+ (attrs['scale'] / 100.0))
+ if 'width' in attrs:
+ include_graphics_options.append('width=%s' %
+ self.to_latex_length(attrs['width']))
+ if not self.is_inline(node):
+ pre.append('\n')
+ post.append('\n')
+ pre.reverse()
+ self.out.extend(pre)
+ options = ''
+ if include_graphics_options:
+ options = '[%s]' % (','.join(include_graphics_options))
+ self.out.append('\\includegraphics%s{%s}' % (options, attrs['uri']))
+ self.out.extend(post)
+
+ def depart_image(self, node):
+ if node.get('ids'):
+ self.out += self.ids_to_labels(node) + ['\n']
+
+ def visit_interpreted(self, node):
+ # @@@ Incomplete, pending a proper implementation on the
+ # Parser/Reader end.
+ self.visit_literal(node)
+
+ def depart_interpreted(self, node):
+ self.depart_literal(node)
+
+ def visit_legend(self, node):
+ self.fallbacks['legend'] = PreambleCmds.legend
+ self.out.append('\\begin{DUlegend}')
+
+ def depart_legend(self, node):
+ self.out.append('\\end{DUlegend}\n')
+
+ def visit_line(self, node):
+ self.out.append('\item[] ')
+
+ def depart_line(self, node):
+ self.out.append('\n')
+
+ def visit_line_block(self, node):
+ self.fallbacks['_providelength'] = PreambleCmds.providelength
+ self.fallbacks['lineblock'] = PreambleCmds.lineblock
+ if isinstance(node.parent, nodes.line_block):
+ self.out.append('\\item[]\n'
+ '\\begin{DUlineblock}{\\DUlineblockindent}\n')
+ else:
+ self.out.append('\n\\begin{DUlineblock}{0em}\n')
+
+ def depart_line_block(self, node):
+ self.out.append('\\end{DUlineblock}\n')
+
+ def visit_list_item(self, node):
+ self.out.append('\n\\item ')
+
+ def depart_list_item(self, node):
+ pass
+
+ def visit_literal(self, node):
+ self.literal = True
+ self.out.append('\\texttt{')
+ if node['classes']:
+ self.visit_inline(node)
+
+ def depart_literal(self, node):
+ self.literal = False
+ if node['classes']:
+ self.depart_inline(node)
+ self.out.append('}')
+
+ # Literal blocks are used for '::'-prefixed literal-indented
+ # blocks of text, where the inline markup is not recognized,
+ # but are also the product of the "parsed-literal" directive,
+ # where the markup is respected.
+ #
+ # In both cases, we want to use a typewriter/monospaced typeface.
+ # For "real" literal-blocks, we can use \verbatim, while for all
+ # the others we must use \mbox or \alltt.
+ #
+ # We can distinguish between the two kinds by the number of
+ # siblings that compose this node: if it is composed by a
+ # single element, it's either
+ # * a real one,
+ # * a parsed-literal that does not contain any markup, or
+ # * a parsed-literal containing just one markup construct.
+ def is_plaintext(self, node):
+ """Check whether a node can be typeset verbatim"""
+ return (len(node) == 1) and isinstance(node[0], nodes.Text)
+
+ def visit_literal_block(self, node):
+ """Render a literal block."""
+ # environments and packages to typeset literal blocks
+ packages = {'listing': r'\usepackage{moreverb}',
+ 'lstlisting': r'\usepackage{listings}',
+ 'Verbatim': r'\usepackage{fancyvrb}',
+ # 'verbatim': '',
+ 'verbatimtab': r'\usepackage{moreverb}'}
+
+ if not self.active_table.is_open():
+ # no quote inside tables, to avoid vertical space between
+ # table border and literal block.
+ # BUG: fails if normal text precedes the literal block.
+ self.out.append('%\n\\begin{quote}')
+ self.context.append('\n\\end{quote}\n')
+ else:
+ self.out.append('\n')
+ self.context.append('\n')
+ if self.literal_block_env != '' and self.is_plaintext(node):
+ self.requirements['literal_block'] = packages.get(
+ self.literal_block_env, '')
+ self.verbatim = True
+ self.out.append('\\begin{%s}%s\n' % (self.literal_block_env,
+ self.literal_block_options))
+ else:
+ self.literal = True
+ self.insert_newline = True
+ self.insert_non_breaking_blanks = True
+ self.out.append('{\\ttfamily \\raggedright \\noindent\n')
+
+ def depart_literal_block(self, node):
+ if self.verbatim:
+ self.out.append('\n\\end{%s}\n' % self.literal_block_env)
+ self.verbatim = False
+ else:
+ self.out.append('\n}')
+ self.insert_non_breaking_blanks = False
+ self.insert_newline = False
+ self.literal = False
+ self.out.append(self.context.pop())
+
+ ## def visit_meta(self, node):
+ ## self.out.append('[visit_meta]\n')
+ # TODO: set keywords for pdf?
+ # But:
+ # The reStructuredText "meta" directive creates a "pending" node,
+ # which contains knowledge that the embedded "meta" node can only
+ # be handled by HTML-compatible writers. The "pending" node is
+ # resolved by the docutils.transforms.components.Filter transform,
+ # which checks that the calling writer supports HTML; if it doesn't,
+ # the "pending" node (and enclosed "meta" node) is removed from the
+ # document.
+ # --- docutils/docs/peps/pep-0258.html#transformer
+
+ ## def depart_meta(self, node):
+ ## self.out.append('[depart_meta]\n')
+
+ def visit_option(self, node):
+ if self.context[-1]:
+ # this is not the first option
+ self.out.append(', ')
+
+ def depart_option(self, node):
+ # flag that the first option is done.
+ self.context[-1] += 1
+
+ def visit_option_argument(self, node):
+ """Append the delimiter between an option and its argument to body."""
+ self.out.append(node.get('delimiter', ' '))
+
+ def depart_option_argument(self, node):
+ pass
+
+ def visit_option_group(self, node):
+ self.out.append('\n\\item[')
+ # flag for first option
+ self.context.append(0)
+
+ def depart_option_group(self, node):
+ self.context.pop() # the flag
+ self.out.append('] ')
+
+ def visit_option_list(self, node):
+ self.fallbacks['_providelength'] = PreambleCmds.providelength
+ self.fallbacks['optionlist'] = PreambleCmds.optionlist
+ self.out.append('%\n\\begin{DUoptionlist}\n')
+
+ def depart_option_list(self, node):
+ self.out.append('\n\\end{DUoptionlist}\n')
+
+ def visit_option_list_item(self, node):
+ pass
+
+ def depart_option_list_item(self, node):
+ pass
+
+ def visit_option_string(self, node):
+ ##self.out.append(self.starttag(node, 'span', '', CLASS='option'))
+ pass
+
+ def depart_option_string(self, node):
+ ##self.out.append('</span>')
+ pass
+
+ def visit_organization(self, node):
+ self.visit_docinfo_item(node, 'organization')
+
+ def depart_organization(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_paragraph(self, node):
+ # no newline if the paragraph is first in a list item
+ if ((isinstance(node.parent, nodes.list_item) or
+ isinstance(node.parent, nodes.description)) and
+ node is node.parent[0]):
+ return
+ index = node.parent.index(node)
+ if (isinstance(node.parent, nodes.compound) and
+ index > 0 and
+ not isinstance(node.parent[index - 1], nodes.paragraph) and
+ not isinstance(node.parent[index - 1], nodes.compound)):
+ return
+ self.out.append('\n')
+ if node.get('ids'):
+ self.out += self.ids_to_labels(node) + ['\n']
+
+ def depart_paragraph(self, node):
+ self.out.append('\n')
+
+ def visit_problematic(self, node):
+ self.requirements['color'] = PreambleCmds.color
+ self.out.append('%\n')
+ self.append_hypertargets(node)
+ self.out.append(r'\hyperlink{%s}{\textbf{\color{red}' % node['refid'])
+
+ def depart_problematic(self, node):
+ self.out.append('}}')
+
+ def visit_raw(self, node):
+ if 'latex' not in node.get('format', '').split():
+ raise nodes.SkipNode
+ if node['classes']:
+ self.visit_inline(node)
+ # append "as-is" skipping any LaTeX-encoding
+ self.verbatim = True
+
+ def depart_raw(self, node):
+ self.verbatim = False
+ if node['classes']:
+ self.depart_inline(node)
+
+ def has_unbalanced_braces(self, string):
+ """Test whether there are unmatched '{' or '}' characters."""
+ level = 0
+ for ch in string:
+ if ch == '{':
+ level += 1
+ if ch == '}':
+ level -= 1
+ if level < 0:
+ return True
+ return level != 0
+
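The brace check above can be exercised in isolation; this sketch mirrors the method body as a plain function:

```python
def has_unbalanced_braces(string):
    """True if '{' or '}' characters are unmatched, as used to
    reject URLs that would break inside a LaTeX command argument."""
    level = 0
    for ch in string:
        if ch == '{':
            level += 1
        if ch == '}':
            level -= 1
        if level < 0:       # a '}' closed more than was opened
            return True
    return level != 0
```

Note that early return on a negative level also catches strings such as `'}{'` whose total count balances but whose nesting is still invalid.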
+ def visit_reference(self, node):
+ # We need to escape #, \, and % if we use the URL in a command.
+ special_chars = {ord('#'): ur'\#',
+ ord('%'): ur'\%',
+ ord('\\'): ur'\\',
+ }
+ # external reference (URL)
+ if 'refuri' in node:
+ href = unicode(node['refuri']).translate(special_chars)
+ # problematic chars double caret and unbalanced braces:
+ if href.find('^^') != -1 or self.has_unbalanced_braces(href):
+ self.error(
+ 'External link "%s" not supported by LaTeX.\n'
+ ' (Must not contain "^^" or unbalanced braces.)' % href)
+ if node['refuri'] == node.astext():
+ self.out.append(r'\url{%s}' % href)
+ raise nodes.SkipNode
+ self.out.append(r'\href{%s}{' % href)
+ return
+ # internal reference
+ if 'refid' in node:
+ href = node['refid']
+ elif 'refname' in node:
+ href = self.document.nameids[node['refname']]
+ else:
+ raise AssertionError('Unknown reference.')
+ if not self.is_inline(node):
+ self.out.append('\n')
+ self.out.append('\\hyperref[%s]{' % href)
+ if self._reference_label:
+ self.out.append('\\%s{%s}}' %
+ (self._reference_label, href.replace('#', '')))
+ raise nodes.SkipNode
+
+ def depart_reference(self, node):
+ self.out.append('}')
+ if not self.is_inline(node):
+ self.out.append('\n')
+
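The URL escaping at the top of `visit_reference` can be sketched separately. The original targets Python 2 (`unicode`, `ur''` literals); this is a Python 3 rendering with a hypothetical helper name:

```python
def escape_url_for_latex(uri):
    """Escape '#', '%', and '\\' so the URI is safe inside \\href{...}."""
    special_chars = {ord('#'): r'\#',
                     ord('%'): r'\%',
                     ord('\\'): r'\\'}
    # str.translate with an ord->str mapping replaces each character:
    return str(uri).translate(special_chars)
```

For example, `escape_url_for_latex('50%')` returns `'50\%'`.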
+ def visit_revision(self, node):
+ self.visit_docinfo_item(node, 'revision')
+
+ def depart_revision(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_section(self, node):
+ self.section_level += 1
+ # Initialize counter for potential subsections:
+ self._section_number.append(0)
+ # Counter for this section's level (initialized by parent section):
+ self._section_number[self.section_level - 1] += 1
+
+ def depart_section(self, node):
+ # Remove counter for potential subsections:
+ self._section_number.pop()
+ self.section_level -= 1
+
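The section bookkeeping above keeps a stack of counters, one per nesting level. A minimal standalone sketch (class name `SectionCounter` is hypothetical, and it assumes the counter list starts with a single zero for top-level sections, which the writer's constructor is expected to provide):

```python
class SectionCounter:
    """Mimics the visit_section/depart_section counter bookkeeping."""
    def __init__(self):
        self.section_level = 0
        self._section_number = [0]   # counter for top-level sections

    def visit_section(self):
        self.section_level += 1
        # Initialize counter for potential subsections:
        self._section_number.append(0)
        # Counter for this section's level (initialized by parent section):
        self._section_number[self.section_level - 1] += 1

    def depart_section(self):
        # Remove counter for potential subsections:
        self._section_number.pop()
        self.section_level -= 1

    def number(self):
        # Dotted number of the current section, e.g. '1.2'
        return '.'.join(str(n)
                        for n in self._section_number[:self.section_level])
```

Visiting a section, a subsection, leaving it, and visiting a sibling subsection yields the numbers `1`, `1.1`, and `1.2` in turn, because the parent's entry in the stack survives the pop.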
+ def visit_sidebar(self, node):
+ self.requirements['color'] = PreambleCmds.color
+ self.fallbacks['sidebar'] = PreambleCmds.sidebar
+ self.out.append('\n\\DUsidebar{\n')
+
+ def depart_sidebar(self, node):
+ self.out.append('}\n')
+
+ attribution_formats = {'dash': ('---', ''),
+ 'parentheses': ('(', ')'),
+ 'parens': ('(', ')'),
+ 'none': ('', '')}
+
+ def visit_attribution(self, node):
+ prefix, suffix = self.attribution_formats[self.settings.attribution]
+ self.out.append('\n\\begin{flushright}\n')
+ self.out.append(prefix)
+ self.context.append(suffix)
+
+ def depart_attribution(self, node):
+ self.out.append(self.context.pop() + '\n')
+ self.out.append('\\end{flushright}\n')
+
+ def visit_status(self, node):
+ self.visit_docinfo_item(node, 'status')
+
+ def depart_status(self, node):
+ self.depart_docinfo_item(node)
+
+ def visit_strong(self, node):
+ self.out.append('\\textbf{')
+ if node['classes']:
+ self.visit_inline(node)
+
+ def depart_strong(self, node):
+ if node['classes']:
+ self.depart_inline(node)
+ self.out.append('}')
+
+ def visit_substitution_definition(self, node):
+ raise nodes.SkipNode
+
+ def visit_substitution_reference(self, node):
+ self.unimplemented_visit(node)
+
+ def visit_subtitle(self, node):
+ if isinstance(node.parent, nodes.document):
+ self.push_output_collector(self.subtitle)
+ self.subtitle_labels += self.ids_to_labels(node, set_anchor=False)
+ # section subtitle: "starred" (no number, not in ToC)
+ elif isinstance(node.parent, nodes.section):
+ self.out.append(r'\%s*{' %
+ self.d_class.section(self.section_level + 1))
+ else:
+ self.fallbacks['subtitle'] = PreambleCmds.subtitle
+ self.out.append('\n\\DUsubtitle[%s]{' % node.parent.tagname)
+
+ def depart_subtitle(self, node):
+ if isinstance(node.parent, nodes.document):
+ self.pop_output_collector()
+ else:
+ self.out.append('}\n')
+
+ def visit_system_message(self, node):
+ self.requirements['color'] = PreambleCmds.color
+ self.fallbacks['title'] = PreambleCmds.title
+ node['classes'] = ['system-message']
+ self.visit_admonition(node)
+ self.out.append('\\DUtitle[system-message]{system-message}\n')
+ self.append_hypertargets(node)
+ try:
+ line = ', line~%s' % node['line']
+ except KeyError:
+ line = ''
+ self.out.append('\n\n{\color{red}%s/%s} in \\texttt{%s}%s\n' %
+ (node['type'], node['level'],
+ self.encode(node['source']), line))
+ if len(node['backrefs']) == 1:
+ self.out.append('\n\\hyperlink{%s}{' % node['backrefs'][0])
+ self.context.append('}')
+ else:
+ backrefs = ['\\hyperlink{%s}{%d}' % (href, i+1)
+ for (i, href) in enumerate(node['backrefs'])]
+ self.context.append('backrefs: ' + ' '.join(backrefs))
+
+ def depart_system_message(self, node):
+ self.out.append(self.context.pop())
+ self.depart_admonition()
+
+ def visit_table(self, node):
+ self.requirements['table'] = PreambleCmds.table
+ if self.active_table.is_open():
+ self.table_stack.append(self.active_table)
+ # nesting longtable does not work (e.g. 2007-04-18)
+ self.active_table = Table(self,'tabular',self.settings.table_style)
+ self.active_table.open()
+ for cls in node['classes']:
+ self.active_table.set_table_style(cls)
+ if self.active_table._table_style == 'booktabs':
+ self.requirements['booktabs'] = r'\usepackage{booktabs}'
+ self.out.append('\n' + self.active_table.get_opening())
+
+ def depart_table(self, node):
+ self.out.append(self.active_table.get_closing() + '\n')
+ self.active_table.close()
+ if len(self.table_stack)>0:
+ self.active_table = self.table_stack.pop()
+ else:
+ self.active_table.set_table_style(self.settings.table_style)
+ # Insert hyperlabel after (long)table, as
+ # other places (beginning, caption) result in LaTeX errors.
+ if node.get('ids'):
+ self.out += self.ids_to_labels(node, set_anchor=False) + ['\n']
+
+ def visit_target(self, node):
+ # Skip indirect targets:
+ if ('refuri' in node # external hyperlink
+ or 'refid' in node # resolved internal link
+ or 'refname' in node): # unresolved internal link
+ ## self.out.append('%% %s\n' % node) # for debugging
+ return
+ self.out.append('%\n')
+ # do we need an anchor (\phantomsection)?
+ set_anchor = not(isinstance(node.parent, nodes.caption) or
+ isinstance(node.parent, nodes.title))
+ # TODO: where else can/must we omit the \phantomsection?
+ self.out += self.ids_to_labels(node, set_anchor)
+
+ def depart_target(self, node):
+ pass
+
+ def visit_tbody(self, node):
+ # BUG: write preamble if not yet done (colspecs not [])
+ # for tables without heads.
+ if not self.active_table.get('preamble written'):
+ self.visit_thead(None)
+ self.depart_thead(None)
+
+ def depart_tbody(self, node):
+ pass
+
+ def visit_term(self, node):
+ """definition list term"""
+ # Commands with optional args inside an optional arg must be put
+ # in a group, e.g. ``\item[{\hyperref[label]{text}}]``.
+ self.out.append('\\item[{')
+
+ def depart_term(self, node):
+ # \leavevmode results in a line break if the
+ # term is followed by an item list.
+ self.out.append('}] \leavevmode ')
+
+ def visit_tgroup(self, node):
+ #self.out.append(self.starttag(node, 'colgroup'))
+ #self.context.append('</colgroup>\n')
+ pass
+
+ def depart_tgroup(self, node):
+ pass
+
+ _thead_depth = 0
+ def thead_depth(self):
+ return self._thead_depth
+
+ def visit_thead(self, node):
+ self._thead_depth += 1
+ if 1 == self.thead_depth():
+ self.out.append('{%s}\n' % self.active_table.get_colspecs())
+ self.active_table.set('preamble written',1)
+ self.out.append(self.active_table.get_caption())
+ self.out.extend(self.active_table.visit_thead())
+
+ def depart_thead(self, node):
+ if node is not None:
+ self.out.extend(self.active_table.depart_thead())
+ if self.active_table.need_recurse():
+ node.walkabout(self)
+ self._thead_depth -= 1
+
+ def bookmark(self, node):
+ """Return label and pdfbookmark string for titles."""
+ result = ['']
+ if self.settings.sectnum_xform: # "starred" section cmd
+ # add to the toc and pdfbookmarks
+ section_name = self.d_class.section(max(self.section_level, 1))
+ section_title = self.encode(node.astext())
+ result.append(r'\phantomsection')
+ result.append(r'\addcontentsline{toc}{%s}{%s}' %
+ (section_name, section_title))
+ result += self.ids_to_labels(node.parent, set_anchor=False)
+ return '%\n '.join(result) + '%\n'
+
+ def visit_title(self, node):
+ """Append section and other titles."""
+ # Document title
+ if node.parent.tagname == 'document':
+ self.push_output_collector(self.title)
+ self.context.append('')
+ self.pdfinfo.append(' pdftitle={%s},' %
+ self.encode(node.astext()))
+ # Topic titles (topic, admonition, sidebar)
+ elif (isinstance(node.parent, nodes.topic) or
+ isinstance(node.parent, nodes.admonition) or
+ isinstance(node.parent, nodes.sidebar)):
+ self.fallbacks['title'] = PreambleCmds.title
+ classes = ','.join(node.parent['classes'])
+ if not classes:
+ classes = node.tagname
+ self.out.append('\\DUtitle[%s]{' % classes)
+ self.context.append('}\n')
+ # Table caption
+ elif isinstance(node.parent, nodes.table):
+ self.push_output_collector(self.active_table.caption)
+ self.context.append('')
+ # Section title
+ else:
+ self.out.append('\n\n')
+ self.out.append('%' + '_' * 75)
+ self.out.append('\n\n')
+ #
+ section_name = self.d_class.section(self.section_level)
+ # number sections?
+ if (self.settings.sectnum_xform # numbering by Docutils
+ or (self.section_level > len(self.d_class.sections))):
+ section_star = '*'
+ else: # LaTeX numbered sections
+ section_star = ''
+ self.out.append(r'\%s%s{' % (section_name, section_star))
+ # System messages heading in red:
+ if ('system-messages' in node.parent['classes']):
+ self.requirements['color'] = PreambleCmds.color
+ self.out.append('\\color{red}')
+ # label and ToC entry:
+ self.context.append(self.bookmark(node) + '}\n')
+ # MAYBE postfix paragraph and subparagraph with \leavevmode to
+ # ensure floats stay in the section and text starts on a new line.
+
+ def depart_title(self, node):
+ self.out.append(self.context.pop())
+ if (isinstance(node.parent, nodes.table) or
+ node.parent.tagname == 'document'):
+ self.pop_output_collector()
+
+ def minitoc(self, node, title, depth):
+ """Generate a local table of contents with LaTeX package minitoc"""
+ section_name = self.d_class.section(self.section_level)
+ # name-prefix for current section level
+ minitoc_names = {'part': 'part', 'chapter': 'mini'}
+ if 'chapter' not in self.d_class.sections:
+ minitoc_names['section'] = 'sect'
+ try:
+ minitoc_name = minitoc_names[section_name]
+ except KeyError: # minitoc only supports part- and toplevel
+ self.warn('Skipping local ToC at %s level.\n' % section_name +
+ ' Feature not supported with option "use-latex-toc"',
+ base_node=node)
+ return
+ # Requirements/Setup
+ self.requirements['minitoc'] = PreambleCmds.minitoc
+ self.requirements['minitoc-'+minitoc_name] = (r'\do%stoc' %
+ minitoc_name)
+ # depth: (Docutils defaults to unlimited depth)
+ maxdepth = len(self.d_class.sections)
+ self.requirements['minitoc-%s-depth' % minitoc_name] = (
+ r'\mtcsetdepth{%stoc}{%d}' % (minitoc_name, maxdepth))
+ # Process 'depth' argument (!Docutils stores a relative depth while
+ # minitoc expects an absolute depth!):
+ offset = {'sect': 1, 'mini': 0, 'part': 0}
+ if 'chapter' in self.d_class.sections:
+ offset['part'] = -1
+ if depth:
+ self.out.append('\\setcounter{%stocdepth}{%d}' %
+ (minitoc_name, depth + offset[minitoc_name]))
+ # title:
+ self.out.append('\\mtcsettitle{%stoc}{%s}\n' % (minitoc_name, title))
+ # the toc-generating command:
+ self.out.append('\\%stoc\n' % minitoc_name)
+
+ def visit_topic(self, node):
+ # Topic nodes can be generic topic, abstract, dedication, or ToC.
+ # table of contents:
+ if 'contents' in node['classes']:
+ self.out.append('\n')
+ self.out += self.ids_to_labels(node)
+ # add contents to PDF bookmarks sidebar
+ if isinstance(node.next_node(), nodes.title):
+ self.out.append('\n\\pdfbookmark[%d]{%s}{%s}\n' %
+ (self.section_level+1,
+ node.next_node().astext(),
+ node.get('ids', ['contents'])[0]
+ ))
+ if self.use_latex_toc:
+ title = ''
+ if isinstance(node.next_node(), nodes.title):
+ title = self.encode(node.pop(0).astext())
+ depth = node.get('depth', 0)
+ if 'local' in node['classes']:
+ self.minitoc(node, title, depth)
+ self.context.append('')
+ return
+ if depth:
+ self.out.append('\\setcounter{tocdepth}{%d}\n' % depth)
+ if title != 'Contents':
+ self.out.append('\\renewcommand{\\contentsname}{%s}\n' %
+ title)
+ self.out.append('\\tableofcontents\n\n')
+ self.has_latex_toc = True
+ else: # Docutils generated contents list
+ # set flag for visit_bullet_list() and visit_title()
+ self.is_toc_list = True
+ self.context.append('')
+ elif ('abstract' in node['classes'] and
+ self.settings.use_latex_abstract):
+ self.push_output_collector(self.abstract)
+ self.out.append('\\begin{abstract}')
+ self.context.append('\\end{abstract}\n')
+ if isinstance(node.next_node(), nodes.title):
+ node.pop(0) # LaTeX provides its own title
+ else:
+ self.fallbacks['topic'] = PreambleCmds.topic
+ # special topics:
+ if 'abstract' in node['classes']:
+ self.fallbacks['abstract'] = PreambleCmds.abstract
+ self.push_output_collector(self.abstract)
+ if 'dedication' in node['classes']:
+ self.fallbacks['dedication'] = PreambleCmds.dedication
+ self.push_output_collector(self.dedication)
+ self.out.append('\n\\DUtopic[%s]{\n' % ','.join(node['classes']))
+ self.context.append('}\n')
+
+ def depart_topic(self, node):
+ self.out.append(self.context.pop())
+ self.is_toc_list = False
+ if ('abstract' in node['classes'] or
+ 'dedication' in node['classes']):
+ self.pop_output_collector()
+
+ def visit_inline(self, node): # <span>, i.e. custom roles
+ # insert fallback definition
+ self.fallbacks['inline'] = PreambleCmds.inline
+ self.out += [r'\DUrole{%s}{' % cls for cls in node['classes']]
+ self.context.append('}' * (len(node['classes'])))
+
+ def depart_inline(self, node):
+ self.out.append(self.context.pop())
+
+ def visit_rubric(self, node):
+ self.fallbacks['rubric'] = PreambleCmds.rubric
+ self.out.append('\n\\DUrubric{')
+ self.context.append('}\n')
+
+ def depart_rubric(self, node):
+ self.out.append(self.context.pop())
+
+ def visit_transition(self, node):
+ self.fallbacks['transition'] = PreambleCmds.transition
+ self.out.append('\n\n')
+ self.out.append('%' + '_' * 75 + '\n')
+ self.out.append(r'\DUtransition')
+ self.out.append('\n\n')
+
+ def depart_transition(self, node):
+ pass
+
+ def visit_version(self, node):
+ self.visit_docinfo_item(node, 'version')
+
+ def depart_version(self, node):
+ self.depart_docinfo_item(node)
+
+ def unimplemented_visit(self, node):
+ raise NotImplementedError('visiting unimplemented node type: %s' %
+ node.__class__.__name__)
+
+# def unknown_visit(self, node):
+# def default_visit(self, node):
+
+# vim: set ts=4 et ai :
diff --git a/python/helpers/docutils/writers/latex2e/default.tex b/python/helpers/docutils/writers/latex2e/default.tex
new file mode 100644
index 0000000..c6e8ec3
--- /dev/null
+++ b/python/helpers/docutils/writers/latex2e/default.tex
@@ -0,0 +1,14 @@
+% generated by Docutils <http://docutils.sourceforge.net/>
+$head_prefix\usepackage{fixltx2e} % LaTeX patches, \textsubscript
+\usepackage{cmap} % fix search and cut-and-paste in PDF
+$requirements
+%%% Custom LaTeX preamble
+$latex_preamble
+%%% User specified packages and stylesheets
+$stylesheet
+%%% Fallback definitions for Docutils-specific commands
+$fallbacks$pdfsetup
+%%% Body
+\begin{document}
+$body_pre_docinfo$docinfo$dedication$abstract$body
+\end{document}
diff --git a/python/helpers/docutils/writers/latex2e/docutils-05-compat.sty b/python/helpers/docutils/writers/latex2e/docutils-05-compat.sty
new file mode 100644
index 0000000..7d0323e
--- /dev/null
+++ b/python/helpers/docutils/writers/latex2e/docutils-05-compat.sty
@@ -0,0 +1,732 @@
+% ==================================================================
+% Changes to the Docutils latex2e writer since version 0.5
+% ==================================================================
+%
+% A backwards compatibility style sheet
+% *************************************
+%
+% :Author: Guenter Milde
+% :Contact: [email protected]
+% :Revision: $Revision: 6152 $
+% :Date: $Date: 2009-02-24$
+% :Copyright: © 2009 G. Milde,
+% Released without warranties or conditions of any kind
+% under the terms of the Apache License, Version 2.0
+% http://www.apache.org/licenses/LICENSE-2.0
+% :Abstract: This file documents changes and provides a style for best
+% possible compatibility with the behaviour of the `latex2e`
+% writer of Docutils release 0.5.
+%
+% ::
+
+\NeedsTeXFormat{LaTeX2e}
+\ProvidesPackage{docutils-05-compat}
+[2009/03/26 v0.1 compatibility with rst2latex from Docutils 0.5]
+
+% .. contents::
+% :depth: 3
+%
+% Usage
+% =====
+%
+% * To get an (almost) identical look for your old documents,
+% place ``docutils-05-compat.sty`` in the TEXINPUTS path (e.g.
+% the current working directory) and pass the
+% ``--stylesheet=docutils-05-compat`` option to ``rst2latex.py``.
+%
+% * To use your custom stylesheets without change, add them to the
+% compatibility style, e.g.
+% ``--stylesheet="docutils-05-compat,mystyle.tex"``.
+%
+% .. tip:: As the changes include bug fixes that are partly reverted by this
+% style, it is recommended to adapt the stylesheets to the new version or
+% copy just the relevant parts of this style into them.
+%
+% Changes since 0.5
+% =================
+%
+% Bugfixes
+% --------
+%
+% * Newlines around comments, targets and references prevent run-together
+% paragraphs.
+%
+% + An image directive with hyperlink reference or target did not start a
+% new paragraph (e.g. the first two image examples in
+% standalone_rst_latex.tex).
+%
+% + Paragraphs were not separated if there was a (hyper) target definition
+% in between.
+%
+% + Paragraphs ran together if separated by a comment paragraph in the
+% rst source.
+%
+% * Fixed missing and spurious internal links/targets.
+% Internal links now take you to the correct place.
+%
+% * Verbose and linked system messages.
+%
+% * `Figure and image alignment`_ now conforms to the rst definition.
+%
+% * Put `header and footer directive`__ content into \DUheader and
+% \DUfooter macros, respectively (ignored by the default style/template).
+%
+% (They were put inside hard-coded markup at the top/bottom of the document
+% without an option to get them on every page.)
+%
+% __ ../ref/rst/directives.html#document-header-footer
+%
+% * Render doctest blocks as literal blocks (fixes bug [1586058] doctest block
+% nested in admonition). I.e.
+%
+% + indent doctest blocks by nesting in a quote environment. This is also
+% the rendering by the HTML writer (html4css2.css).
+% + apply the ``--literal-block-env`` setting also to doctest blocks.
+%
+% .. warning::
+% ``--literal-block-env=verbatim`` and
+% ``--literal-block-env=lstlisting`` fail with literal or doctest
+% blocks nested in an admonition.
+%
+% * Two-way hyperlinked footnotes and support for symbol footnotes and
+% ``--footnote-references=brackets`` with ``--use-latex-footnotes``.
+%
+% * The packages `fixltx2e` (providing LaTeX patches and the \textsubscript
+% command) and `cmap` (including character maps in the generated PDF for
+% better search and copy-and-paste operations) are now always loaded
+% (configurable with custom templates_).
+%
+% Backwards compatibility:
+% "Bug for bug compatibility" is not provided.
+%
+%
+% New configuration setting defaults
+% ----------------------------------
+%
+% - font-encoding: "T1" (formerly implicitly set by 'ae').
+% - use-latex-toc: true (ToC with page numbers).
+% - use-latex-footnotes: true (no mixup with figures).
+%
+% Backwards compatibility:
+% Reset to the former defaults with:
+%
+% | font-encoding: ''
+% | use-latex-toc: False
+% | use-latex-footnotes: False
+%
+% (in the config file) or the command line options:
+%
+% ``--figure-footnotes --use-docutils-toc --font-encoding=''``
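+%
+% In a Docutils configuration file, the same reset could be written as
+% follows (section and setting names as documented for Docutils 0.6;
+% treat the exact spelling as an assumption to check against your
+% Docutils version):
+%
+% | [latex2e writer]
+% | font-encoding: ''
+% | use-latex-toc: False
+% | use-latex-footnotes: False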
+%
+%
+% Cleaner LaTeX source
+% --------------------
+%
+% New features:
+% * Remove redundant "double protection" from the encoding of the "special
+% printing characters" and square brackets, e.g. ``\%`` instead of
+% ``{\%}``.
+% * Remove some spurious whitespace, e.g. ``\item [what:] -> \item[what:]``.
+% * Use conventional style for "named" macros, e.g. ``\dots{}`` instead of
+% ``{\dots}``
+%
+% Backwards compatibility:
+% Changes do not affect the output.
+%
+%
+% LaTeX style sheets
+% ------------------
+%
+% New Feature:
+% LaTeX packages can be used as ``--stylesheet`` argument without
+% restriction.
+%
+% Implementation:
+% Use ``\usepackage`` if style sheet ends with ``.sty`` or has no
+% extension and ``\input`` else.
+%
+% Rationale:
+% While ``\input`` works with as well as without an extension,
+% ``\usepackage`` expects the package name without extension. (The latex2e
+% writer will strip a ``.sty`` extension.)
+%
+%
+% Backwards compatibility:
+% Up to Docutils 0.5, if no filename extension is given in the
+% ``stylesheet`` argument, ``.tex`` is assumed (by latex).
+%
+% Since Docutils 0.6, a stylesheet without filename extension is assumed to
+% be a LaTeX package (``*.sty``) and referenced with the ``\usepackage``
+% command.
+%
+% .. important::
+% Always specify the extension if you want the style sheet to be
+% ``\input`` by LaTeX.
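+%
+% For example, the stylesheet option maps to the preamble roughly as
+% follows (a sketch of the generated lines, not verbatim writer output):
+%
+% | ``--stylesheet=mystyle`` -> ``\usepackage{mystyle}``
+% | ``--stylesheet=mystyle.sty`` -> ``\usepackage{mystyle}``
+% | ``--stylesheet=mystyle.tex`` -> ``\input{mystyle.tex}``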
+%
+%
+% Templates
+% ---------
+%
+% New Feature:
+% Advanced configuration via custom templates.
+%
+% Implementation:
+% A ``--template`` option and config setting allows specification of a
+% template file.
+%
+% See the `LaTeX writer documentation`__ for details.
+%
+% __ latex.html#templates
+%
+%
+% Custom roles
+% ------------
+%
+% New Feature: failsafe implementation
+% As with classes to HTML objects, class arguments are silently ignored if
+% there is no styling rule for this class in a custom style sheet.
+%
+% New Feature: custom roles based on standard roles
+% As class support needs to be handled by the LaTeX writer, this feature was
+% not present "automatically" (as in HTML). Modified visit/depart_*()
+% methods for the standard roles now call visit/depart_inline() if there are
+% class arguments to the node.
+%
+% Backwards compatibility:
+% The implementation is fully backwards compatible. (SVN versions 5742 to
+% 5861 contained an implementation that did not work with commands expecting
+% an argument.)
+%
+% Length units
+% ------------
+%
+% New Features:
+% 1. Add default unit if none given.
+% A poll on docutils-users favoured ``bp`` (Big Point: 1 bp = 1/72 in).
+%
+% 2. Do not change ``px`` to ``pt``.
+%
+% 3. Lengths specified in the document with unit "pt" will be written with
+% unit "bp" to the LaTeX source.
+%
+% Rationale:
+% 1. prevent LaTeX error "missing unit".
+%
+% 2. ``px`` is a valid unit in pdftex since version 1.3.0 released on
+% 2005-02-04:
+%
+% 1px defaults to 1bp (or 72dpi), but can be changed with the
+% ``\pdfpxdimen`` primitive.::
+
+ \pdfpxdimen=1in % 1 dpi
+ \divide\pdfpxdimen by 96 % 96 dpi
+
+% -- http://www.tug.org/applications/pdftex/NEWS
+%
+% Modern TeX distributions use pdftex also for dvi generation (i.e.
+% ``latex`` actually calls ``pdftex`` with some options).
+%
+% 3. In Docutils (as well as CSS) the unit symbol "pt" denotes the
+% `Postscript point` or `DTP point` while LaTeX uses "pt" for the `LaTeX
+% point`, which is unknown to Docutils and 0.3 % smaller.
+%
+% The `DTP point` is available in LaTeX as "bp" (big point):
+%
+% 1 pt = 1/72.27 in < 1 bp = 1/72 in
+%
+%
+% Backwards compatibility:
+% Images with width specification in ``px`` come out slightly (0.3 %) larger:
+%
+% 1 px = 1 bp = 1/72 in > 1 pt = 1/72.27 in
+%
+% This can be reset with ::
+
+ \pdfpxdimen=1pt
+
+% .. caution:: It is impossible to revert the change of lengths specified with
+% "pt" or without unit in a style sheet, however the 0.3 % change will be
+% imperceptible in most cases.
+%
+% .. admonition:: Error ``illegal unit px``
+%
+% The unit ``px`` is not defined in "pure" LaTeX, but introduced by the
+% `pdfTeX` converter on 2005-02-04. `pdfTeX` is used in all modern LaTeX
+% distributions (since ca. 2006) also for conversion into DVI.
+%
+% If you convert the LaTeX source with a legacy program, you might get the
+% error ``illegal unit px``.
+%
+% If updating LaTeX is not an option, just remove the ``px`` from the length
+% specification. HTML/CSS will default to ``px`` while the `latex2e` writer
+% will add the fallback unit ``bp``.
+%
+%
+% Font encoding
+% -------------
+%
+% New feature:
+% Do not mix font-encoding and font settings: do not load the obsolete
+% `ae` and `aeguill` packages unless explicitly required via the
+% ``--stylesheet`` option.
+%
+% :font-encoding = "": do not load `ae` and `aeguill`, i.e.
+%
+% * do not change font settings,
+% * do not use the fontenc package
+% (implicitly loaded via `ae`),
+% * use LaTeX default font encoding (OT1)
+%
+% :font-encoding = "OT1": load `fontenc` with ``\usepackage[OT1]{fontenc}``
+%
+% Example:
+% ``--font-encoding=LGR,T1`` becomes ``\usepackage[LGR,T1]{fontenc}``
+% (Latin, Latin-1 Supplement, and Greek)
+%
+%
+% Backwards compatibility:
+% Load the ae and aeguill packages if fontenc is not used.
+%
+% .. tip:: Using `ae` is not recommended. A similar look (but better
+% implementation) can be achieved with the packages `lmodern`, `cmsuper`,
+% or `cmlgr`, all providing Computer Modern look-alikes in vector format and
+% T1 encoding, e.g. ``--font-encoding=T1 --stylesheet=lmodern``.
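+%
+% With these options the generated preamble would contain lines roughly
+% like the following (kept commented out here so that this compatibility
+% style does not load the packages itself; a sketch, not verbatim writer
+% output)::
+
+ %% \usepackage[T1]{fontenc}
+ %% \usepackage{lmodern}
+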
+%
+% Sub- and superscript as text
+% ----------------------------
+%
+% New feature:
+% Set sub- and superscript role argument in text mode not as math.
+%
+% Pass the role content to ``\textsubscript`` or ``\textsuperscript``.
+%
+% Backwards compatibility:
+% The old implementation set the role content in Math mode, where
+%
+% * whitespace is ignored,
+% * a different command set and font setting scheme is active,
+% * Latin letters are typeset italic but numbers upright.
+%
+% Although it is possible to redefine ``\textsubscript`` and
+% ``\textsuperscript`` to typeset the content in math-mode, this can lead to
+% errors with certain input and is therefore not done in this style sheet.
+%
+% .. tip:: To get italic subscripts, define and use in your document
+% `custom roles`_ like ``.. role:: sub(subscript)`` and
+% ``.. role:: super(superscript)`` and define the "role commands"::
+
+ \newcommand{\DUrolesub}{\itshape}
+ \newcommand{\DUrolesuper}{\itshape}
+
+% Alternatively, if you want all sub- and superscripts in italic, redefine
+% the macros::
+
+ %% \let\DUsup\textsubscript
+ %% \let\DUsuper\textsuperscript
+ %% \renewcommand*{\textsubscript}{\DUsub\itshape}
+ %% \renewcommand*{\textsuperscript}{\DUsuper\itshape}
+
+% This is not fully backwards compatible, as it will also set numbers in
+% italic shape and not ignore whitespace.
+%
+% Page layout
+% -----------
+%
+% New features:
+% * Margins are configurable via the ``DIV=...`` document option.
+%
+% * The ``\raggedbottom`` setting is no longer inserted into the document. It
+% is the default for article and report classes. If requested in combination
+% with a book class, it can be given in a custom style sheet.
+%
+% Backwards compatibility:
+% Up to version 0.5, use of `typearea` and a DIV setting of 12 were
+% hard-coded into the latex2e writer ::
+
+ \usepackage{typearea}
+ \typearea{12}
+
+% and the vertical alignment of lower boundary of the text area in book
+% classes disabled via ::
+
+ \raggedbottom
+
+
+% ToC and section numbers
+% -----------------------
+%
+% Better conformance to Docutils specifications.
+%
+% New feature:
+% * The "depth" argument of the "contents" and "sectnum" directives is
+% respected.
+%
+% * section numbering independent of 'use-latex-toc':
+%
+% + sections are only numbered if there is a "sectnum" directive in the
+% document
+%
+% + section numbering by LaTeX if the "sectnum_xform" config setting is
+% False.
+%
+% Backwards compatibility:
+%
+% The previous behaviour was to always number sections if 'use-latex-toc' is
+% true, using the document class defaults. It cannot be restored
+% universally, the following code sets the default values of the "article"
+% document class::
+
+ \setcounter{secnumdepth}{3}
+ \setcounter{tocdepth}{3}
+
+% .. TODO or not to do? (Back-compatibility problems)
+% * The default "depth" of the LaTeX-created ToC and the LaTeX section
+% numbering is increased to the number of supported section levels.
+%
+% New feature:
+% If 'use-latex-toc' is set, local tables of content are typeset using the
+% 'minitoc' package (instead of being ignored).
+%
+% Backwards compatibility:
+% Disable the creation of local ToCs (ignoring all special commands) by
+% replacing ``\usepackage{minitoc}`` with ``\usepackage{mtcoff}``.
+%
+%
+% Default font in admonitions and sidebar
+% ---------------------------------------
+%
+% New feature:
+% Use default font in admonitions and sidebar.
+%
+% Backward compatibility:
+% See the fallback definitions for admonitions_, `topic title`_ and
+% `sidebar`_.
+%
+%
+% Figure placement
+% ----------------
+%
+% New feature:
+% Use ``\floatplacement`` from the `float` package instead of
+% "hard-coded" optional argument for the global setting.
+%
+% Default to ``\floatplacement{figure}{H}`` (here definitely). This
+% corresponds most closely to the source and HTML placement (principle of
+% least surprise).
+%
+% Backwards compatibility:
+% Set the global default back to the previous used value::
+
+ \usepackage{float}
+ \floatplacement{figure}{htbp} % here, top, bottom, extra-page
+
+
+% Figure and image alignment
+% --------------------------
+%
+% New features:
+%
+% a) Fix behaviour of 'align' argument to a figure (do not align figure
+% contents).
+%
+% As the 'figwidth' argument is still ignored and the "natural width" of a
+% figure in LaTeX is 100% \textwidth, setting the 'align' argument of a
+% figure currently has no effect on the LaTeX output.
+%
+% b) Set default align of image in a figure to 'center'.
+%
+% c) Also center images that are wider than textwidth.
+%
+% d) Align images with class "align-[right|center|left]" (allows setting the
+% alignment of an image in a figure).
+%
+% Backwards compatibility:
+% There is no "automatic" way to reverse these changes via a style sheet.
+%
+% a) The alignment of the image can be set with the "align-left",
+% "align-center" and "align-right" class arguments.
+%
+% As previously, the caption of a figure is aligned according to the
+% document class -- configurable with a style sheet using the "caption"
+% package.
+%
+% b) See a)
+%
+% c) Set the alignment of "oversized" images to "left" to get back the
+% old placement.
+%
+% Shorter preamble
+% ----------------
+%
+% New feature:
+% The document preamble is pruned to contain only relevant commands and
+% settings.
+%
+% Packages that are no longer required
+% ````````````````````````````````````
+%
+% The following packages were required in pre-0.5 versions and still loaded
+% with version 0.5::
+
+\usepackage{shortvrb}
+\usepackage{amsmath}
+
+
+% Packages that are conditionally loaded
+% ``````````````````````````````````````
+%
+% Additional to the `typearea` for `page layout`_, the following packages are
+% only loaded if actually required by doctree elements:
+%
+% Tables
+% ^^^^^^
+%
+% Standard package for tables across several pages::
+
+\usepackage{longtable}
+
+% Extra space between text in tables and the line above them
+% ('array' is implicitly loaded by 'tabularx', see below)::
+
+\usepackage{array}
+\setlength{\extrarowheight}{2pt}
+
+% Table cells spanning multiple rows::
+
+\usepackage{multirow}
+
+% Docinfo
+% ^^^^^^^
+%
+% One-page tables with auto-width columns::
+
+\usepackage{tabularx}
+
+% Images
+% ^^^^^^
+% Include graphic files::
+
+\usepackage{graphicx}
+
+% Problematic, Sidebar
+% ^^^^^^^^^^^^^^^^^^^^
+% Set text and/or background colour, coloured boxes with ``\colorbox``::
+
+\usepackage{color}
+
+% Floats for footnotes settings
+% `````````````````````````````
+%
+% Settings for the use of floats for footnotes are only included if
+%
+% * the option "use-latex-footnotes" is False, and
+% * there is at least one footnote in the document.
+%
+% ::
+
+% begin: floats for footnotes tweaking.
+\setlength{\floatsep}{0.5em}
+\setlength{\textfloatsep}{\fill}
+\addtolength{\textfloatsep}{3em}
+\renewcommand{\textfraction}{0.5}
+\renewcommand{\topfraction}{0.5}
+\renewcommand{\bottomfraction}{0.5}
+\setcounter{totalnumber}{50}
+\setcounter{topnumber}{50}
+\setcounter{bottomnumber}{50}
+% end floats for footnotes
+
+
+% Special lengths, commands, and environments
+% -------------------------------------------
+%
+% Removed definitions
+% ```````````````````
+%
+% admonition width
+% ^^^^^^^^^^^^^^^^
+% The ``admonitionwidth`` length is replaced by the more powerful
+% ``\DUadmonition`` command (see admonitions_).
+%
+% Backwards compatibility:
+% The default value (90 % of the textwidth) is unchanged.
+%
+% To configure the admonition width, you must redefine the ``DUadmonition``
+% command instead of changing the ``admonitionwidth`` length value.
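+%
+% For example, a custom style sheet could set a narrower box with a
+% definition like this (kept commented out here; a sketch based on the
+% default definition below, not part of the compatibility settings)::
+
+ %% \newcommand{\DUadmonition}[2][class-arg]{%
+ %%   \begin{center}
+ %%   \fbox{\parbox{0.8\textwidth}{#2}}
+ %%   \end{center}
+ %% }
+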
+%
+%
+% Renamed definitions (now conditional)
+% `````````````````````````````````````
+%
+% The names for special doctree elements are now prefixed with ``DU``.
+%
+% Up to version 0.5, all definitions were included in the preamble (before the
+% style sheet) of every document -- even if not used in the body. Since
+% version 0.6, fallback definitions are included after the style sheet and
+% only if required.
+%
+% Customization is done by an alternative definition in a style sheet with
+% ``\newcommand`` instead of the former ``\renewcommand``.
+%
+% The following code provides the old definitions and maps them (or their
+% custom variants) to the new interface.
+%
+% docinfo width
+% ^^^^^^^^^^^^^
+% ::
+
+\newlength{\docinfowidth}
+\setlength{\docinfowidth}{0.9\textwidth}
+
+\newlength{\DUdocinfowidth}
+\AtBeginDocument{\setlength{\DUdocinfowidth}{\docinfowidth}}
+
+% line block
+% ^^^^^^^^^^
+% ::
+
+\newlength{\lineblockindentation}
+\setlength{\lineblockindentation}{2.5em}
+\newenvironment{lineblock}[1]
+{\begin{list}{}
+ {\setlength{\partopsep}{\parskip}
+ \addtolength{\partopsep}{\baselineskip}
+ \topsep0pt\itemsep0.15\baselineskip\parsep0pt
+ \leftmargin#1}
+ \raggedright}
+{\end{list}}
+
+\newlength{\DUlineblockindent}
+\AtBeginDocument{\setlength{\DUlineblockindent}{\lineblockindentation}}
+\newenvironment{DUlineblock}[1]
+ {\begin{lineblock}{#1}}
+ {\end{lineblock}}
+
+% local line width
+% ^^^^^^^^^^^^^^^^
+%
+% The ``\locallinewidth`` length for internal use in tables is replaced
+% by ``\DUtablewidth``. It was never intended for customization::
+
+\newlength{\locallinewidth}
+
+% option lists
+% ^^^^^^^^^^^^
+% ::
+
+\newcommand{\optionlistlabel}[1]{\bf #1 \hfill}
+\newenvironment{optionlist}[1]
+{\begin{list}{}
+ {\setlength{\labelwidth}{#1}
+ \setlength{\rightmargin}{1cm}
+ \setlength{\leftmargin}{\rightmargin}
+ \addtolength{\leftmargin}{\labelwidth}
+ \addtolength{\leftmargin}{\labelsep}
+ \renewcommand{\makelabel}{\optionlistlabel}}
+}{\end{list}}
+
+\newcommand{\DUoptionlistlabel}{\optionlistlabel}
+\newenvironment{DUoptionlist}
+ {\begin{optionlist}{3cm}}
+ {\end{optionlist}}
+
+% rubric
+% ^^^^^^
+% Now less prominent (not bold, normal size); restore with::
+
+\newcommand{\rubric}[1]{\subsection*{~\hfill {\it #1} \hfill ~}}
+\newcommand{\DUrubric}[2][class-arg]{\rubric{#2}}
+
+% title reference role
+% ^^^^^^^^^^^^^^^^^^^^
+% ::
+
+\newcommand{\titlereference}[1]{\textsl{#1}}
+\newcommand{\DUroletitlereference}[1]{\titlereference{#1}}
+
+
+% New definitions
+% ```````````````
+%
+% New Feature:
+% Enable customization of some more Docutils elements with special commands
+%
+% :admonition: ``DUadmonition`` command (replacing ``\admonitionwidth``),
+% :field list: ``DUfieldlist`` environment,
+% :legend: ``DUlegend`` environment,
+% :sidebar: ``\DUsidebar``, ``\DUtitle``, and
+% ``DUsubtitle`` commands,
+% :topic: ``\DUtopic`` and ``\DUtitle`` commands,
+% :transition: ``\DUtransition`` command,
+% :footnotes: ``\DUfootnotemark`` and ``\DUfootnotetext`` commands with
+% hyperlink support using the Docutils-provided footnote label.
+%
+% Backwards compatibility:
+% In most cases, the default definition corresponds to the previously used
+% construct. The following definitions restore the old behaviour in case of
+% changes.
+%
+% admonitions
+% ^^^^^^^^^^^
+% Use sans-serif fonts::
+
+\newcommand{\DUadmonition}[2][class-arg]{%
+ \begin{center}
+ \fbox{\parbox{0.9\textwidth}{\sffamily #2}}
+ \end{center}
+}
+
+% dedication
+% ^^^^^^^^^^
+% Do not center::
+
+\newcommand{\DUtopicdedication}[1]{#1}
+
+% But center the title::
+
+\newcommand*{\DUtitlededication}[1]{\centerline{\textbf{#1}}}
+
+% sidebar
+% ^^^^^^^
+% Use sans-serif fonts, a frame, and a darker shade of grey::
+
+\providecommand{\DUsidebar}[2][class-arg]{%
+ \begin{center}
+ \sffamily
+ \fbox{\colorbox[gray]{0.80}{\parbox{0.9\textwidth}{#2}}}
+ \end{center}
+}
+
+% sidebar sub-title
+% ^^^^^^^^^^^^^^^^^
+% Bold instead of emphasized::
+
+\providecommand*{\DUsubtitlesidebar}[1]{\hspace*{\fill}\\
+ \textbf{#1}\smallskip}
+
+% topic
+% ^^^^^
+% No quote but normal text::
+
+\newcommand{\DUtopic}[2][class-arg]{%
+ \ifcsname DUtopic#1\endcsname%
+ \csname DUtopic#1\endcsname{#2}%
+ \else
+ #2
+ \fi
+}
+
+% topic title
+% ^^^^^^^^^^^
+% Title for "topics" (admonitions, sidebar).
+%
+% Larger font size::
+
+\providecommand*{\DUtitletopic}[1]{\textbf{\large #1}\smallskip}
+
+% transition
+% ^^^^^^^^^^
+% Do not add vertical space after the transition. ::
+
+\providecommand*{\DUtransition}[1][class-arg]{%
+ \hspace*{\fill}\hrulefill\hspace*{\fill}}
diff --git a/python/helpers/docutils/writers/latex2e/titlepage.tex b/python/helpers/docutils/writers/latex2e/titlepage.tex
new file mode 100644
index 0000000..f7f35f9
--- /dev/null
+++ b/python/helpers/docutils/writers/latex2e/titlepage.tex
@@ -0,0 +1,16 @@
+% generated by Docutils <http://docutils.sourceforge.net/>
+$head_prefix$requirements
+%%% Custom LaTeX preamble
+$latex_preamble
+%%% User specified packages and stylesheets
+$stylesheet
+%%% Fallback definitions for Docutils-specific commands
+$fallbacks$pdfsetup
+%%% Body
+\begin{document}
+\begin{titlepage}
+$body_pre_docinfo$docinfo$dedication$abstract
+\thispagestyle{empty}
+\end{titlepage}
+$body
+\end{document}
diff --git a/python/helpers/docutils/writers/manpage.py b/python/helpers/docutils/writers/manpage.py
new file mode 100644
index 0000000..e4e46b3
--- /dev/null
+++ b/python/helpers/docutils/writers/manpage.py
@@ -0,0 +1,1104 @@
+# -*- coding: utf-8 -*-
+# $Id: manpage.py 6378 2010-07-12 19:30:24Z grubert $
+# Author: Engelbert Gruber <[email protected]>
+# Copyright: This module is put into the public domain.
+
+"""
+Simple man page writer for reStructuredText.
+
+Man pages (short for "manual pages") contain system documentation on unix-like
+systems. The pages are grouped in numbered sections:
+
+ 1 executable programs and shell commands
+ 2 system calls
+ 3 library functions
+ 4 special files
+ 5 file formats
+ 6 games
+ 7 miscellaneous
+ 8 system administration
+
+Man pages are written in *troff*, a text file formatting system.
+
+See http://www.tldp.org/HOWTO/Man-Page for a start.
+
+Man pages have no subsections, only parts.
+The standard parts are
+
+ NAME ,
+ SYNOPSIS ,
+ DESCRIPTION ,
+ OPTIONS ,
+ FILES ,
+ SEE ALSO ,
+ BUGS ,
+
+and
+
+ AUTHOR .
+
+A unix-like system keeps an index of the DESCRIPTIONs, which is accessible
+via the ``whatis`` or ``apropos`` commands.
+
+"""
+
+__docformat__ = 'reStructuredText'
+
+import re
+
+import docutils
+from docutils import nodes, writers, languages
+import roman
+
+FIELD_LIST_INDENT = 7
+DEFINITION_LIST_INDENT = 7
+OPTION_LIST_INDENT = 7
+BLOCKQOUTE_INDENT = 3.5
+
+# Define two macros so man/roff can calculate the
+# indent/unindent margins by itself
+MACRO_DEF = (r""".
+.nr rst2man-indent-level 0
+.
+.de1 rstReportMargin
+\\$1 \\n[an-margin]
+level \\n[rst2man-indent-level]
+level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
+-
+\\n[rst2man-indent0]
+\\n[rst2man-indent1]
+\\n[rst2man-indent2]
+..
+.de1 INDENT
+.\" .rstReportMargin pre:
+. RS \\$1
+. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
+. nr rst2man-indent-level +1
+.\" .rstReportMargin post:
+..
+.de UNINDENT
+. RE
+.\" indent \\n[an-margin]
+.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
+.nr rst2man-indent-level -1
+.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
+.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
+..
+""")
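The two roff macros defined above keep a stack of indentation margins in numbered registers. A rough Python sketch of the same stack discipline (the class and names here are illustrative, not part of the writer):

```python
# Sketch of the margin-stack discipline implemented by the roff
# INDENT/UNINDENT macros above (illustrative, simplified).
class MarginStack:
    def __init__(self):
        self.margins = []   # saved margins, one entry per INDENT
        self.current = 0    # current left margin

    def indent(self, by):
        # INDENT saves the current margin, then moves right by `by`
        self.margins.append(self.current)
        self.current += by

    def unindent(self):
        # UNINDENT restores the margin saved by the matching INDENT
        self.current = self.margins.pop()
```

Nesting INDENT/UNINDENT pairs therefore restores the exact outer margin, which is what lets nested lists and block quotes unwind cleanly.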
+
+class Writer(writers.Writer):
+
+ supported = ('manpage',)
+ """Formats this writer supports."""
+
+ output = None
+ """Final translated form of `document`."""
+
+ def __init__(self):
+ writers.Writer.__init__(self)
+ self.translator_class = Translator
+
+ def translate(self):
+ visitor = self.translator_class(self.document)
+ self.document.walkabout(visitor)
+ self.output = visitor.astext()
+
+
+class Table:
+ def __init__(self):
+ self._rows = []
+ self._options = ['center']
+ self._tab_char = '\t'
+ self._coldefs = []
+ def new_row(self):
+ self._rows.append([])
+ def append_separator(self, separator):
+ """Append the separator for table head."""
+ self._rows.append([separator])
+ def append_cell(self, cell_lines):
+ """cell_lines is an array of lines"""
+ start = 0
+ if len(cell_lines) > 0 and cell_lines[0] == '.sp\n':
+ start = 1
+ self._rows[-1].append(cell_lines[start:])
+ if len(self._coldefs) < len(self._rows[-1]):
+ self._coldefs.append('l')
+ def _minimize_cell(self, cell_lines):
+ """Remove leading and trailing blank and ``.sp`` lines"""
+ while (cell_lines and cell_lines[0] in ('\n', '.sp\n')):
+ del cell_lines[0]
+ while (cell_lines and cell_lines[-1] in ('\n', '.sp\n')):
+ del cell_lines[-1]
+ def as_list(self):
+ text = ['.TS\n']
+ text.append(' '.join(self._options) + ';\n')
+ text.append('|%s|.\n' % ('|'.join(self._coldefs)))
+ for row in self._rows:
+ # row = array of cells. cell = array of lines.
+ text.append('_\n') # line above
+ text.append('T{\n')
+ for i in range(len(row)):
+ cell = row[i]
+ self._minimize_cell(cell)
+ text.extend(cell)
+ if not text[-1].endswith('\n'):
+ text[-1] += '\n'
+ if i < len(row)-1:
+ text.append('T}'+self._tab_char+'T{\n')
+ else:
+ text.append('T}\n')
+ text.append('_\n')
+ text.append('.TE\n')
+ return text
+
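`Table.as_list` emits tbl(1) markup. A standalone, simplified sketch producing the same `.TS`/`.TE` shape from plain string cells (no cell minimization; the function name is illustrative):

```python
def simple_tbl(rows, options=('center',)):
    """Render rows of plain-text cells as tbl(1) markup, mirroring
    the shape Table.as_list produces (simplified sketch)."""
    ncols = max(len(r) for r in rows)
    out = ['.TS\n',
           ' '.join(options) + ';\n',
           '|%s|.\n' % '|'.join(['l'] * ncols)]
    for row in rows:
        out.append('_\n')        # horizontal rule above each row
        out.append('T{\n')
        for i, cell in enumerate(row):
            out.append(cell + '\n')
            # tab-separated text blocks, one per cell
            out.append('T}\tT{\n' if i < len(row) - 1 else 'T}\n')
    out.append('_\n')
    out.append('.TE\n')
    return out
```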
+class Translator(nodes.NodeVisitor):
+ """Translate a docutils document tree into man page (troff) markup."""
+
+ words_and_spaces = re.compile(r'\S+| +|\n')
+ document_start = """Man page generated from reStructuredText."""
+
+ def __init__(self, document):
+ nodes.NodeVisitor.__init__(self, document)
+ self.settings = settings = document.settings
+ lcode = settings.language_code
+ self.language = languages.get_language(lcode)
+ self.head = []
+ self.body = []
+ self.foot = []
+ self.section_level = 0
+ self.context = []
+ self.topic_class = ''
+ self.colspecs = []
+ self.compact_p = 1
+ self.compact_simple = None
+ # the list style "*" bullet or "#" numbered
+ self._list_char = []
+ # writing the header .TH and .SH NAME is postponed until after
+ # the docinfo.
+ self._docinfo = {
+ "title" : "", "title_upper": "",
+ "subtitle" : "",
+ "manual_section" : "", "manual_group" : "",
+ "author" : [],
+ "date" : "",
+ "copyright" : "",
+ "version" : "",
+ }
+ self._docinfo_keys = [] # a list to keep the sequence as in source.
+ self._docinfo_names = {} # to get name from text not normalized.
+ self._in_docinfo = None
+ self._active_table = None
+ self._in_literal = False
+ self.header_written = 0
+ self._line_block = 0
+ self.authors = []
+ self.section_level = 0
+ self._indent = [0]
+ # central definition of simple processing rules
+ # what to output on : visit, depart
+ # Do not use paragraph requests ``.PP`` because these set indentation.
+ # Use ``.sp`` instead. Superfluous ``.sp`` requests are removed in ``astext``.
+ #
+ # Fonts are put on a stack, the top one is used.
+ # ``.ft P`` or ``\\fP`` pop from stack.
+ # ``B`` bold, ``I`` italic, ``R`` roman should be available.
+ # Hopefully ``C`` courier too.
+ self.defs = {
+ 'indent' : ('.INDENT %.1f\n', '.UNINDENT\n'),
+ 'definition_list_item' : ('.TP', ''),
+ 'field_name' : ('.TP\n.B ', '\n'),
+ 'literal' : ('\\fB', '\\fP'),
+ 'literal_block' : ('.sp\n.nf\n.ft C\n', '\n.ft P\n.fi\n'),
+
+ 'option_list_item' : ('.TP\n', ''),
+
+ 'reference' : (r'\fI\%', r'\fP'),
+ 'emphasis': ('\\fI', '\\fP'),
+ 'strong' : ('\\fB', '\\fP'),
+ 'term' : ('\n.B ', '\n'),
+ 'title_reference' : ('\\fI', '\\fP'),
+
+ 'topic-title' : ('.SS ',),
+ 'sidebar-title' : ('.SS ',),
+
+ 'problematic' : ('\n.nf\n', '\n.fi\n'),
+ }
+ # NOTE: do not include the newline before a dot-command in these
+ # definitions, but make sure it is present in the output.
+
+ def comment_begin(self, text):
+ """Return commented version of the passed text WITHOUT end of
+ line/comment."""
+ prefix = '.\\" '
+ out_text = ''.join(
+ [(prefix + in_line + '\n')
+ for in_line in text.split('\n')])
+ return out_text
+
+ def comment(self, text):
+ """Return commented version of the passed text."""
+ return self.comment_begin(text)+'.\n'
+
+ def ensure_eol(self):
+ """Ensure the last line in body is terminated by new line."""
+ if self.body[-1][-1] != '\n':
+ self.body.append('\n')
+
+ def astext(self):
+ """Return the final formatted document as a string."""
+ if not self.header_written:
+ # ensure we get a ".TH" as viewers require it.
+ self.head.append(self.header())
+ # filter body
+ for i in xrange(len(self.body)-1, 0, -1):
+ # remove superfluous vertical gaps.
+ if self.body[i] == '.sp\n':
+ if self.body[i - 1][:4] in ('.BI ','.IP '):
+ self.body[i] = '.\n'
+ elif (self.body[i - 1][:3] == '.B ' and
+ self.body[i - 2][:4] == '.TP\n'):
+ self.body[i] = '.\n'
+ elif (self.body[i - 1] == '\n' and
+ self.body[i - 2][0] != '.' and
+ (self.body[i - 3][:7] == '.TP\n.B '
+ or self.body[i - 3][:4] == '\n.B ')
+ ):
+ self.body[i] = '.\n'
+ return ''.join(self.head + self.body + self.foot)
+
+ def deunicode(self, text):
+ text = text.replace(u'\xa0', '\\ ')
+ text = text.replace(u'\u2020', '\\(dg')
+ return text
+
+ def visit_Text(self, node):
+ text = node.astext()
+ text = text.replace('\\','\\e')
+ replace_pairs = [
+ (u'-', ur'\-'),
+ (u'\'', ur'\(aq'),
+ (u'´', ur'\''),
+ (u'`', ur'\(ga'),
+ ]
+ for (in_char, out_markup) in replace_pairs:
+ text = text.replace(in_char, out_markup)
+ # unicode
+ text = self.deunicode(text)
+ if self._in_literal:
+ # prevent interpretation of "." at line start
+ if text[0] == '.':
+ text = '\\&' + text
+ text = text.replace('\n.', '\n\\&.')
+ self.body.append(text)
+
+ def depart_Text(self, node):
+ pass
+
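The replacement pairs in `visit_Text` can be exercised in isolation. The following standalone function is a sketch of the same escaping (the name and the `in_literal` flag are illustrative):

```python
def escape_troff(text, in_literal=False):
    """Standalone sketch of the text escaping done in visit_Text
    above, covering the same replacement pairs."""
    text = text.replace('\\', '\\e')          # literal backslash first
    for src, dst in [('-', '\\-'), ("'", '\\(aq'),
                     (u'\u00b4', "\\'"), ('`', '\\(ga')]:
        text = text.replace(src, dst)
    text = text.replace(u'\xa0', '\\ ')       # no-break space
    if in_literal:
        # protect a leading "." from being read as a roff request
        if text.startswith('.'):
            text = '\\&' + text
        text = text.replace('\n.', '\n\\&.')
    return text
```

Escaping the backslash first matters: otherwise the backslashes introduced by the later pairs would themselves be escaped.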
+ def list_start(self, node):
+ class enum_char:
+ enum_style = {
+ 'bullet' : '\\(bu',
+ 'emdash' : '\\(em',
+ }
+
+ def __init__(self, style):
+ self._style = style
+ if node.has_key('start'):
+ self._cnt = node['start'] - 1
+ else:
+ self._cnt = 0
+ self._indent = 2
+ if style == 'arabic':
+ # indentation depends on the number of children
+ # and the start value.
+ self._indent = len(str(len(node.children)))
+ self._indent += len(str(self._cnt)) + 1
+ elif style == 'loweralpha':
+ self._cnt += ord('a') - 1
+ self._indent = 3
+ elif style == 'upperalpha':
+ self._cnt += ord('A') - 1
+ self._indent = 3
+ elif style.endswith('roman'):
+ self._indent = 5
+
+ def next(self):
+ if self._style == 'bullet':
+ return self.enum_style[self._style]
+ elif self._style == 'emdash':
+ return self.enum_style[self._style]
+ self._cnt += 1
+ # TODO add prefix postfix
+ if self._style == 'arabic':
+ return "%d." % self._cnt
+ elif self._style in ('loweralpha', 'upperalpha'):
+ return "%c." % self._cnt
+ elif self._style.endswith('roman'):
+ res = roman.toRoman(self._cnt) + '.'
+ if self._style.startswith('upper'):
+ return res.upper()
+ return res.lower()
+ else:
+ return "%d." % self._cnt
+ def get_width(self):
+ return self._indent
+ def __repr__(self):
+ return 'enum_style-%s' % list(self._style)
+
+ if node.has_key('enumtype'):
+ self._list_char.append(enum_char(node['enumtype']))
+ else:
+ self._list_char.append(enum_char('bullet'))
+ if len(self._list_char) > 1:
+ # indent nested lists
+ self.indent(self._list_char[-2].get_width())
+ else:
+ self.indent(self._list_char[-1].get_width())
+
+ def list_end(self):
+ self.dedent()
+ self._list_char.pop()
+
+ def header(self):
+ tmpl = (".TH %(title_upper)s %(manual_section)s"
+ " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
+ ".SH NAME\n"
+ "%(title)s \- %(subtitle)s\n")
+ return tmpl % self._docinfo
+
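Rendering the template above with sample docinfo values shows the `.TH`/`.SH NAME` header shape a man page viewer expects (the values are illustrative):

```python
# Illustrative rendering of the header template above with sample
# docinfo values; the field names mirror self._docinfo.
tmpl = (".TH %(title_upper)s %(manual_section)s"
        " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
        ".SH NAME\n"
        "%(title)s \\- %(subtitle)s\n")
docinfo = {
    'title': 'frob', 'title_upper': 'FROB',
    'subtitle': 'frobnicate files',
    'manual_section': '1', 'manual_group': 'User Commands',
    'date': '2010-07-12', 'version': '1.0',
}
header = tmpl % docinfo
```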
+ def append_header(self):
+ """append header with .TH and .SH NAME"""
+ # NOTE before everything
+ # .TH title_upper section date source manual
+ if self.header_written:
+ return
+ self.body.append(self.header())
+ self.body.append(MACRO_DEF)
+ self.header_written = 1
+
+ def visit_address(self, node):
+ self.visit_docinfo_item(node, 'address')
+
+ def depart_address(self, node):
+ pass
+
+ def visit_admonition(self, node, name=None):
+ if name:
+ self.body.append('.IP %s\n' %
+ self.language.labels.get(name, name))
+
+ def depart_admonition(self, node):
+ self.body.append('.RE\n')
+
+ def visit_attention(self, node):
+ self.visit_admonition(node, 'attention')
+
+ depart_attention = depart_admonition
+
+ def visit_docinfo_item(self, node, name):
+ if name == 'author':
+ self._docinfo[name].append(node.astext())
+ else:
+ self._docinfo[name] = node.astext()
+ self._docinfo_keys.append(name)
+ raise nodes.SkipNode
+
+ def depart_docinfo_item(self, node):
+ pass
+
+ def visit_author(self, node):
+ self.visit_docinfo_item(node, 'author')
+
+ depart_author = depart_docinfo_item
+
+ def visit_authors(self, node):
+ # _author is called anyway.
+ pass
+
+ def depart_authors(self, node):
+ pass
+
+ def visit_block_quote(self, node):
+ # BUG/HACK: indent always uses the _last_ indentation,
+ # thus we need two of them.
+ self.indent(BLOCKQOUTE_INDENT)
+ self.indent(0)
+
+ def depart_block_quote(self, node):
+ self.dedent()
+ self.dedent()
+
+ def visit_bullet_list(self, node):
+ self.list_start(node)
+
+ def depart_bullet_list(self, node):
+ self.list_end()
+
+ def visit_caption(self, node):
+ pass
+
+ def depart_caption(self, node):
+ pass
+
+ def visit_caution(self, node):
+ self.visit_admonition(node, 'caution')
+
+ depart_caution = depart_admonition
+
+ def visit_citation(self, node):
+ num, text = node.astext().split(None, 1)
+ num = num.strip()
+ self.body.append('.IP [%s] 5\n' % num)
+
+ def depart_citation(self, node):
+ pass
+
+ def visit_citation_reference(self, node):
+ self.body.append('['+node.astext()+']')
+ raise nodes.SkipNode
+
+ def visit_classifier(self, node):
+ pass
+
+ def depart_classifier(self, node):
+ pass
+
+ def visit_colspec(self, node):
+ self.colspecs.append(node)
+
+ def depart_colspec(self, node):
+ pass
+
+ def write_colspecs(self):
+ self.body.append("%s.\n" % ('L '*len(self.colspecs)))
+
+ def visit_comment(self, node,
+ sub=re.compile('-(?=-)').sub):
+ self.body.append(self.comment(node.astext()))
+ raise nodes.SkipNode
+
+ def visit_contact(self, node):
+ self.visit_docinfo_item(node, 'contact')
+
+ depart_contact = depart_docinfo_item
+
+ def visit_container(self, node):
+ pass
+
+ def depart_container(self, node):
+ pass
+
+ def visit_compound(self, node):
+ pass
+
+ def depart_compound(self, node):
+ pass
+
+ def visit_copyright(self, node):
+ self.visit_docinfo_item(node, 'copyright')
+
+ def visit_danger(self, node):
+ self.visit_admonition(node, 'danger')
+
+ depart_danger = depart_admonition
+
+ def visit_date(self, node):
+ self.visit_docinfo_item(node, 'date')
+
+ def visit_decoration(self, node):
+ pass
+
+ def depart_decoration(self, node):
+ pass
+
+ def visit_definition(self, node):
+ pass
+
+ def depart_definition(self, node):
+ pass
+
+ def visit_definition_list(self, node):
+ self.indent(DEFINITION_LIST_INDENT)
+
+ def depart_definition_list(self, node):
+ self.dedent()
+
+ def visit_definition_list_item(self, node):
+ self.body.append(self.defs['definition_list_item'][0])
+
+ def depart_definition_list_item(self, node):
+ self.body.append(self.defs['definition_list_item'][1])
+
+ def visit_description(self, node):
+ pass
+
+ def depart_description(self, node):
+ pass
+
+ def visit_docinfo(self, node):
+ self._in_docinfo = 1
+
+ def depart_docinfo(self, node):
+ self._in_docinfo = None
+ # NOTE nothing should be written before this
+ self.append_header()
+
+ def visit_doctest_block(self, node):
+ self.body.append(self.defs['literal_block'][0])
+ self._in_literal = True
+
+ def depart_doctest_block(self, node):
+ self._in_literal = False
+ self.body.append(self.defs['literal_block'][1])
+
+ def visit_document(self, node):
+ # no blank line between comment and header.
+ self.body.append(self.comment(self.document_start).rstrip()+'\n')
+ # writing the header is postponed
+ self.header_written = 0
+
+ def depart_document(self, node):
+ if self._docinfo['author']:
+ self.body.append('.SH AUTHOR\n%s\n'
+ % ', '.join(self._docinfo['author']))
+ skip = ('author', 'copyright', 'date',
+ 'manual_group', 'manual_section',
+ 'subtitle',
+ 'title', 'title_upper', 'version')
+ for name in self._docinfo_keys:
+ if name == 'address':
+ self.body.append("\n%s:\n%s%s.nf\n%s\n.fi\n%s%s" % (
+ self.language.labels.get(name, name),
+ self.defs['indent'][0] % 0,
+ self.defs['indent'][0] % BLOCKQOUTE_INDENT,
+ self._docinfo[name],
+ self.defs['indent'][1],
+ self.defs['indent'][1]))
+ elif not name in skip:
+ if name in self._docinfo_names:
+ label = self._docinfo_names[name]
+ else:
+ label = self.language.labels.get(name, name)
+ self.body.append("\n%s: %s\n" % (label, self._docinfo[name]))
+ if self._docinfo['copyright']:
+ self.body.append('.SH COPYRIGHT\n%s\n'
+ % self._docinfo['copyright'])
+ self.body.append(self.comment(
+ 'Generated by docutils manpage writer.\n'))
+
+ def visit_emphasis(self, node):
+ self.body.append(self.defs['emphasis'][0])
+
+ def depart_emphasis(self, node):
+ self.body.append(self.defs['emphasis'][1])
+
+ def visit_entry(self, node):
+ # a cell in a table row
+ if 'morerows' in node:
+ self.document.reporter.warning('"table row spanning" not supported',
+ base_node=node)
+ if 'morecols' in node:
+ self.document.reporter.warning(
+ '"table cell spanning" not supported', base_node=node)
+ self.context.append(len(self.body))
+
+ def depart_entry(self, node):
+ start = self.context.pop()
+ self._active_table.append_cell(self.body[start:])
+ del self.body[start:]
+
+ def visit_enumerated_list(self, node):
+ self.list_start(node)
+
+ def depart_enumerated_list(self, node):
+ self.list_end()
+
+ def visit_error(self, node):
+ self.visit_admonition(node, 'error')
+
+ depart_error = depart_admonition
+
+ def visit_field(self, node):
+ pass
+
+ def depart_field(self, node):
+ pass
+
+ def visit_field_body(self, node):
+ if self._in_docinfo:
+ name_normalized = self._field_name.lower().replace(" ","_")
+ self._docinfo_names[name_normalized] = self._field_name
+ self.visit_docinfo_item(node, name_normalized)
+ raise nodes.SkipNode
+
+ def depart_field_body(self, node):
+ pass
+
+ def visit_field_list(self, node):
+ self.indent(FIELD_LIST_INDENT)
+
+ def depart_field_list(self, node):
+ self.dedent()
+
+ def visit_field_name(self, node):
+ if self._in_docinfo:
+ self._field_name = node.astext()
+ raise nodes.SkipNode
+ else:
+ self.body.append(self.defs['field_name'][0])
+
+ def depart_field_name(self, node):
+ self.body.append(self.defs['field_name'][1])
+
+ def visit_figure(self, node):
+ self.indent(2.5)
+ self.indent(0)
+
+ def depart_figure(self, node):
+ self.dedent()
+ self.dedent()
+
+ def visit_footer(self, node):
+ self.document.reporter.warning('"footer" not supported',
+ base_node=node)
+
+ def depart_footer(self, node):
+ pass
+
+ def visit_footnote(self, node):
+ num, text = node.astext().split(None, 1)
+ num = num.strip()
+ self.body.append('.IP [%s] 5\n' % self.deunicode(num))
+
+ def depart_footnote(self, node):
+ pass
+
+ def footnote_backrefs(self, node):
+ self.document.reporter.warning('"footnote_backrefs" not supported',
+ base_node=node)
+
+ def visit_footnote_reference(self, node):
+ self.body.append('['+self.deunicode(node.astext())+']')
+ raise nodes.SkipNode
+
+ def depart_footnote_reference(self, node):
+ pass
+
+ def visit_generated(self, node):
+ pass
+
+ def depart_generated(self, node):
+ pass
+
+ def visit_header(self, node):
+ raise NotImplementedError, node.astext()
+
+ def depart_header(self, node):
+ pass
+
+ def visit_hint(self, node):
+ self.visit_admonition(node, 'hint')
+
+ depart_hint = depart_admonition
+
+ def visit_subscript(self, node):
+ self.body.append('\\s-2\\d')
+
+ def depart_subscript(self, node):
+ self.body.append('\\u\\s0')
+
+ def visit_superscript(self, node):
+ self.body.append('\\s-2\\u')
+
+ def depart_superscript(self, node):
+ self.body.append('\\d\\s0')
+
+ def visit_attribution(self, node):
+ self.body.append('\\(em ')
+
+ def depart_attribution(self, node):
+ self.body.append('\n')
+
+ def visit_image(self, node):
+ self.document.reporter.warning('"image" not supported',
+ base_node=node)
+ text = []
+ if 'alt' in node.attributes:
+ text.append(node.attributes['alt'])
+ if 'uri' in node.attributes:
+ text.append(node.attributes['uri'])
+ self.body.append('[image: %s]\n' % ('/'.join(text)))
+ raise nodes.SkipNode
+
+ def visit_important(self, node):
+ self.visit_admonition(node, 'important')
+
+ depart_important = depart_admonition
+
+ def visit_label(self, node):
+ # footnote and citation
+ if (isinstance(node.parent, nodes.footnote)
+ or isinstance(node.parent, nodes.citation)):
+ raise nodes.SkipNode
+ self.document.reporter.warning('unsupported "label"',
+ base_node=node)
+ self.body.append('[')
+
+ def depart_label(self, node):
+ self.body.append(']\n')
+
+ def visit_legend(self, node):
+ pass
+
+ def depart_legend(self, node):
+ pass
+
+ # TODO: should we use .INDENT/.UNINDENT here?
+ def visit_line_block(self, node):
+ self._line_block += 1
+ if self._line_block == 1:
+ # TODO: separate inline blocks from previous paragraphs
+ # see http://hg.intevation.org/mercurial/crew/rev/9c142ed9c405
+ # self.body.append('.sp\n')
+ # but it does not work for me.
+ self.body.append('.nf\n')
+ else:
+ self.body.append('.in +2\n')
+
+ def depart_line_block(self, node):
+ self._line_block -= 1
+ if self._line_block == 0:
+ self.body.append('.fi\n')
+ self.body.append('.sp\n')
+ else:
+ self.body.append('.in -2\n')
+
+ def visit_line(self, node):
+ pass
+
+ def depart_line(self, node):
+ self.body.append('\n')
+
+ def visit_list_item(self, node):
+ # man(7) recommends ".IP" over ".TP" for list items
+ self.body.append('.IP %s %d\n' % (
+ self._list_char[-1].next(),
+ self._list_char[-1].get_width(),))
+
+ def depart_list_item(self, node):
+ pass
+
+ def visit_literal(self, node):
+ self.body.append(self.defs['literal'][0])
+
+ def depart_literal(self, node):
+ self.body.append(self.defs['literal'][1])
+
+ def visit_literal_block(self, node):
+ self.body.append(self.defs['literal_block'][0])
+ self._in_literal = True
+
+ def depart_literal_block(self, node):
+ self._in_literal = False
+ self.body.append(self.defs['literal_block'][1])
+
+ def visit_meta(self, node):
+ raise NotImplementedError, node.astext()
+
+ def depart_meta(self, node):
+ pass
+
+ def visit_note(self, node):
+ self.visit_admonition(node, 'note')
+
+ depart_note = depart_admonition
+
+ def indent(self, by=0.5):
+ # if we are in a section ".SH" there already is a .RS
+ step = self._indent[-1]
+ self._indent.append(by)
+ self.body.append(self.defs['indent'][0] % step)
+
+ def dedent(self):
+ self._indent.pop()
+ self.body.append(self.defs['indent'][1])
+
+ def visit_option_list(self, node):
+ self.indent(OPTION_LIST_INDENT)
+
+ def depart_option_list(self, node):
+ self.dedent()
+
+ def visit_option_list_item(self, node):
+ # one item of the list
+ self.body.append(self.defs['option_list_item'][0])
+
+ def depart_option_list_item(self, node):
+ self.body.append(self.defs['option_list_item'][1])
+
+ def visit_option_group(self, node):
+ # as one option could have several forms it is a group
+ # options without parameter bold only, .B, -v
+ # options with parameter bold italic, .BI, -f file
+ #
+ # we do not know if .B or .BI
+ self.context.append('.B') # blind guess
+ self.context.append(len(self.body)) # to be able to insert later
+ self.context.append(0) # option counter
+
+ def depart_option_group(self, node):
+ self.context.pop() # the counter
+ start_position = self.context.pop()
+ text = self.body[start_position:]
+ del self.body[start_position:]
+ self.body.append('%s%s\n' % (self.context.pop(), ''.join(text)))
+
+ def visit_option(self, node):
+ # each form of the option will be presented separately
+ if self.context[-1] > 0:
+ self.body.append(', ')
+ if self.context[-3] == '.BI':
+ self.body.append('\\')
+ self.body.append(' ')
+
+ def depart_option(self, node):
+ self.context[-1] += 1
+
+ def visit_option_string(self, node):
+ # do not know if .B or .BI
+ pass
+
+ def depart_option_string(self, node):
+ pass
+
+ def visit_option_argument(self, node):
+ self.context[-3] = '.BI' # bold/italic alternate
+ if node['delimiter'] != ' ':
+ self.body.append('\\fB%s ' % node['delimiter'])
+ elif self.body[len(self.body)-1].endswith('='):
+ # a blank only means no blank in output, just changing font
+ self.body.append(' ')
+ else:
+ # blank backslash blank, switch font then a blank
+ self.body.append(' \\ ')
+
+ def depart_option_argument(self, node):
+ pass
+
+ def visit_organization(self, node):
+ self.visit_docinfo_item(node, 'organization')
+
+ def depart_organization(self, node):
+ pass
+
+ def visit_paragraph(self, node):
+ # ``.PP`` : Start standard indented paragraph.
+ # ``.LP`` : Start block paragraph, all except the first.
+ # ``.P [type]`` : Start paragraph type.
+ # NOTE: don't use paragraph starts because they reset indentation.
+ # ``.sp`` is only vertical space
+ self.ensure_eol()
+ self.body.append('.sp\n')
+
+ def depart_paragraph(self, node):
+ self.body.append('\n')
+
+ def visit_problematic(self, node):
+ self.body.append(self.defs['problematic'][0])
+
+ def depart_problematic(self, node):
+ self.body.append(self.defs['problematic'][1])
+
+ def visit_raw(self, node):
+ if node.get('format') == 'manpage':
+ self.body.append(node.astext() + "\n")
+ # Keep non-manpage raw text out of output:
+ raise nodes.SkipNode
+
+ def visit_reference(self, node):
+ """E.g. link or email address."""
+ self.body.append(self.defs['reference'][0])
+
+ def depart_reference(self, node):
+ self.body.append(self.defs['reference'][1])
+
+ def visit_revision(self, node):
+ self.visit_docinfo_item(node, 'revision')
+
+ depart_revision = depart_docinfo_item
+
+ def visit_row(self, node):
+ self._active_table.new_row()
+
+ def depart_row(self, node):
+ pass
+
+ def visit_section(self, node):
+ self.section_level += 1
+
+ def depart_section(self, node):
+ self.section_level -= 1
+
+ def visit_status(self, node):
+ self.visit_docinfo_item(node, 'status')
+
+ depart_status = depart_docinfo_item
+
+ def visit_strong(self, node):
+ self.body.append(self.defs['strong'][0])
+
+ def depart_strong(self, node):
+ self.body.append(self.defs['strong'][1])
+
+ def visit_substitution_definition(self, node):
+ """Internal only."""
+ raise nodes.SkipNode
+
+ def visit_substitution_reference(self, node):
+ self.document.reporter.warning('"substitution_reference" not supported',
+ base_node=node)
+
+ def visit_subtitle(self, node):
+ if isinstance(node.parent, nodes.sidebar):
+ self.body.append(self.defs['strong'][0])
+ elif isinstance(node.parent, nodes.document):
+ self.visit_docinfo_item(node, 'subtitle')
+ elif isinstance(node.parent, nodes.section):
+ self.body.append(self.defs['strong'][0])
+
+ def depart_subtitle(self, node):
+ # document subtitle calls SkipNode
+ self.body.append(self.defs['strong'][1]+'\n.PP\n')
+
+ def visit_system_message(self, node):
+ # TODO add report_level
+ #if node['level'] < self.document.reporter['writer'].report_level:
+ # Level is too low to display:
+ # raise nodes.SkipNode
+ attr = {}
+ backref_text = ''
+ if node.hasattr('id'):
+ attr['name'] = node['id']
+ if node.hasattr('line'):
+ line = ', line %s' % node['line']
+ else:
+ line = ''
+ self.body.append('.IP "System Message: %s/%s (%s:%s)"\n'
+ % (node['type'], node['level'], node['source'], line))
+
+ def depart_system_message(self, node):
+ pass
+
+ def visit_table(self, node):
+ self._active_table = Table()
+
+ def depart_table(self, node):
+ self.ensure_eol()
+ self.body.extend(self._active_table.as_list())
+ self._active_table = None
+
+ def visit_target(self, node):
+ # targets are in-document hyper targets, without any use for man-pages.
+ raise nodes.SkipNode
+
+ def visit_tbody(self, node):
+ pass
+
+ def depart_tbody(self, node):
+ pass
+
+ def visit_term(self, node):
+ self.body.append(self.defs['term'][0])
+
+ def depart_term(self, node):
+ self.body.append(self.defs['term'][1])
+
+ def visit_tgroup(self, node):
+ pass
+
+ def depart_tgroup(self, node):
+ pass
+
+ def visit_thead(self, node):
+ # MAYBE double line '='
+ pass
+
+ def depart_thead(self, node):
+ # MAYBE double line '='
+ pass
+
+ def visit_tip(self, node):
+ self.visit_admonition(node, 'tip')
+
+ depart_tip = depart_admonition
+
+ def visit_title(self, node):
+ if isinstance(node.parent, nodes.topic):
+ self.body.append(self.defs['topic-title'][0])
+ elif isinstance(node.parent, nodes.sidebar):
+ self.body.append(self.defs['sidebar-title'][0])
+ elif isinstance(node.parent, nodes.admonition):
+ self.body.append('.IP "')
+ elif self.section_level == 0:
+ self._docinfo['title'] = node.astext()
+ # document title for .TH
+ self._docinfo['title_upper'] = node.astext().upper()
+ raise nodes.SkipNode
+ elif self.section_level == 1:
+ self.body.append('.SH %s\n' % self.deunicode(node.astext().upper()))
+ raise nodes.SkipNode
+ else:
+ self.body.append('.SS ')
+
+ def depart_title(self, node):
+ if isinstance(node.parent, nodes.admonition):
+ self.body.append('"')
+ self.body.append('\n')
+
+ def visit_title_reference(self, node):
+ """inline citation reference"""
+ self.body.append(self.defs['title_reference'][0])
+
+ def depart_title_reference(self, node):
+ self.body.append(self.defs['title_reference'][1])
+
+ def visit_topic(self, node):
+ pass
+
+ def depart_topic(self, node):
+ pass
+
+ def visit_sidebar(self, node):
+ pass
+
+ def depart_sidebar(self, node):
+ pass
+
+ def visit_rubric(self, node):
+ pass
+
+ def depart_rubric(self, node):
+ pass
+
+ def visit_transition(self, node):
+ # .PP Begin a new paragraph and reset prevailing indent.
+ # .sp N leaves N lines of blank space.
+ # .ce centers the next line
+ self.body.append('\n.sp\n.ce\n----\n')
+
+ def depart_transition(self, node):
+ self.body.append('\n.ce 0\n.sp\n')
+
+ def visit_version(self, node):
+ self.visit_docinfo_item(node, 'version')
+
+ def visit_warning(self, node):
+ self.visit_admonition(node, 'warning')
+
+ depart_warning = depart_admonition
+
+ def unimplemented_visit(self, node):
+ raise NotImplementedError('visiting unimplemented node type: %s'
+ % node.__class__.__name__)
+
+# vim: set fileencoding=utf-8 et ts=4 ai :
diff --git a/python/helpers/docutils/writers/newlatex2e/__init__.py b/python/helpers/docutils/writers/newlatex2e/__init__.py
new file mode 100644
index 0000000..c04208d
--- /dev/null
+++ b/python/helpers/docutils/writers/newlatex2e/__init__.py
@@ -0,0 +1,836 @@
+# $Id: __init__.py 5738 2008-11-30 08:59:04Z grubert $
+# Author: Lea Wiemann <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+LaTeX2e document tree Writer.
+"""
+
+# Thanks to Engelbert Gruber and various contributors for the original
+# LaTeX writer, some code and many ideas of which have been used for
+# this writer.
+
+__docformat__ = 'reStructuredText'
+
+
+import re
+import os.path
+
+import docutils
+from docutils import nodes, writers, utils
+from docutils.writers.newlatex2e import unicode_map
+from docutils.transforms import writer_aux
+
+
+class Writer(writers.Writer):
+
+ supported = ('newlatex', 'newlatex2e')
+ """Formats this writer supports."""
+
+ default_stylesheet = 'base.tex'
+
+ default_stylesheet_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), default_stylesheet))
+
+ settings_spec = (
+ 'LaTeX-Specific Options',
+ 'Note that this LaTeX writer is still EXPERIMENTAL and not '
+ 'feature-complete. ',
+ (('Specify a stylesheet file. The path is used verbatim to include '
+ 'the file. Overrides --stylesheet-path.',
+ ['--stylesheet'],
+ {'default': '', 'metavar': '<file>',
+ 'overrides': 'stylesheet_path'}),
+ ('Specify a stylesheet file, relative to the current working '
+ 'directory. Overrides --stylesheet. Default: "%s"'
+ % default_stylesheet_path,
+ ['--stylesheet-path'],
+ {'metavar': '<file>', 'overrides': 'stylesheet',
+ 'default': default_stylesheet_path}),
+ ('Specify a user stylesheet file. See --stylesheet.',
+ ['--user-stylesheet'],
+ {'default': '', 'metavar': '<file>',
+ 'overrides': 'user_stylesheet_path'}),
+ ('Specify a user stylesheet file. See --stylesheet-path.',
+ ['--user-stylesheet-path'],
+ {'metavar': '<file>', 'overrides': 'user_stylesheet'})
+ ),)
+
+ settings_defaults = {
+ # Many Unicode characters are provided by unicode_map.py, so
+ # we can default to latin-1.
+ 'output_encoding': 'latin-1',
+ 'output_encoding_error_handler': 'strict',
+ # Since we are using superscript footnotes, it is necessary to
+ # trim whitespace in front of footnote references.
+ 'trim_footnote_reference_space': 1,
+ # Currently unsupported:
+ 'docinfo_xform': 0,
+ # During development:
+ 'traceback': 1
+ }
+
+ relative_path_settings = ('stylesheet_path', 'user_stylesheet_path')
+
+ config_section = 'newlatex2e writer'
+ config_section_dependencies = ('writers',)
+
+ output = None
+ """Final translated form of `document`."""
+
+ def get_transforms(self):
+ return writers.Writer.get_transforms(self) + [
+ writer_aux.Compound, writer_aux.Admonitions]
+
+ def __init__(self):
+ writers.Writer.__init__(self)
+ self.translator_class = LaTeXTranslator
+
+ def translate(self):
+ visitor = self.translator_class(self.document)
+ self.document.walkabout(visitor)
+ assert not visitor.context, 'context not empty: %s' % visitor.context
+ self.output = visitor.astext()
+ self.head = visitor.header
+ self.body = visitor.body
+
+
+class LaTeXException(Exception):
+ """
+ Base class for exceptions which influence the automatic
+ generation of LaTeX code.
+ """
+
+
+class SkipAttrParentLaTeX(LaTeXException):
+ """
+ Do not generate ``\DECattr`` and ``\renewcommand{\DEVparent}{...}`` for this
+ node.
+
+ To be raised from ``before_...`` methods.
+ """
+
+
+class SkipParentLaTeX(LaTeXException):
+ """
+ Do not generate ``\renewcommand{\DEVparent}{...}`` for this node.
+
+ To be raised from ``before_...`` methods.
+ """
+
+
+class LaTeXTranslator(nodes.SparseNodeVisitor):
+
+ # Country code by a.schlock.
+ # Partly manually converted from iso and babel stuff.
+ iso639_to_babel = {
+ 'no': 'norsk', # added by hand
+ 'gd': 'scottish', # added by hand
+ 'sl': 'slovenian',
+ 'af': 'afrikaans',
+ 'bg': 'bulgarian',
+ 'br': 'breton',
+ 'ca': 'catalan',
+ 'cs': 'czech',
+ 'cy': 'welsh',
+ 'da': 'danish',
+ 'fr': 'french',
+ # french, francais, canadien, acadian
+ 'de': 'ngerman',
+ # ngerman, naustrian, german, germanb, austrian
+ 'el': 'greek',
+ 'en': 'english',
+ # english, USenglish, american, UKenglish, british, canadian
+ 'eo': 'esperanto',
+ 'es': 'spanish',
+ 'et': 'estonian',
+ 'eu': 'basque',
+ 'fi': 'finnish',
+ 'ga': 'irish',
+ 'gl': 'galician',
+ 'he': 'hebrew',
+ 'hr': 'croatian',
+ 'hu': 'hungarian',
+ 'is': 'icelandic',
+ 'it': 'italian',
+ 'la': 'latin',
+ 'nl': 'dutch',
+ 'pl': 'polish',
+ 'pt': 'portuguese',
+ 'ro': 'romanian',
+ 'ru': 'russian',
+ 'sk': 'slovak',
+ 'sr': 'serbian',
+ 'sv': 'swedish',
+ 'tr': 'turkish',
+ 'uk': 'ukrainian'
+ }
+
+ # Start with left double quote.
+ left_quote = 1
+
+ def __init__(self, document):
+ nodes.NodeVisitor.__init__(self, document)
+ self.settings = document.settings
+ self.header = []
+ self.body = []
+ self.context = []
+ self.stylesheet_path = utils.get_stylesheet_reference(
+ self.settings, os.path.join(os.getcwd(), 'dummy'))
+ if self.stylesheet_path:
+ self.settings.record_dependencies.add(self.stylesheet_path)
+ # This ugly hack will be cleaned up when refactoring the
+ # stylesheet mess.
+ self.settings.stylesheet = self.settings.user_stylesheet
+ self.settings.stylesheet_path = self.settings.user_stylesheet_path
+ self.user_stylesheet_path = utils.get_stylesheet_reference(
+ self.settings, os.path.join(os.getcwd(), 'dummy'))
+ if self.user_stylesheet_path:
+ self.settings.record_dependencies.add(self.user_stylesheet_path)
+
+ lang = self.settings.language_code or ''
+ if lang.startswith('de'):
+ self.double_quote_replacment = "{\\dq}"
+ elif lang.startswith('it'):
+ self.double_quote_replacment = r'{\char`\"}'
+ else:
+ self.double_quote_replacment = None
+
+ self.write_header()
+
+ def write_header(self):
+ a = self.header.append
+ a('%% Generated by Docutils %s <http://docutils.sourceforge.net>.'
+ % docutils.__version__)
+ a('')
+ a('% Docutils settings:')
+ lang = self.settings.language_code or ''
+ a(r'\providecommand{\DEVlanguageiso}{%s}' % lang)
+ a(r'\providecommand{\DEVlanguagebabel}{%s}' % self.iso639_to_babel.get(
+ lang, self.iso639_to_babel.get(lang.split('_')[0], '')))
+ a('')
+ if self.user_stylesheet_path:
+ a('% User stylesheet:')
+ a(r'\input{%s}' % self.user_stylesheet_path)
+ a('% Docutils stylesheet:')
+ a(r'\input{%s}' % self.stylesheet_path)
+ a('')
+ a('% Default definitions for Docutils nodes:')
+ for node_name in nodes.node_class_names:
+ a(r'\providecommand{\DN%s}[1]{#1}' % node_name.replace('_', ''))
+ a('')
+ a('% Auxiliary definitions:')
+ for attr in (r'\DEVparent \DEVattrlen \DEVtitleastext '
+ r'\DEVsinglebackref \DEVmultiplebackrefs'
+ ).split():
+ # Later set using \renewcommand.
+ a(r'\providecommand{%s}{DOCUTILSUNINITIALIZEDVARIABLE}' % attr)
+ for attr in (r'\DEVparagraphindented \DEVhassubtitle').split():
+ # Initialize as boolean variables.
+ a(r'\providecommand{%s}{false}' % attr)
+ a('\n\n')
+
+ unicode_map = unicode_map.unicode_map # comprehensive Unicode map
+ # Fix problems with unimap.py.
+ unicode_map.update({
+ # We have AE or T1 encoding, so "``" etc. work. The macros
+ # from unimap.py may *not* work.
+ u'\u201C': '{``}',
+ u'\u201D': "{''}",
+ u'\u201E': '{,,}',
+ })
+
+ character_map = {
+ '\\': r'{\textbackslash}',
+ '{': r'{\{}',
+ '}': r'{\}}',
+ '$': r'{\$}',
+ '&': r'{\&}',
+ '%': r'{\%}',
+ '#': r'{\#}',
+ '[': r'{[}',
+ ']': r'{]}',
+ '-': r'{-}',
+ '`': r'{`}',
+ "'": r"{'}",
+ ',': r'{,}',
+ '"': r'{"}',
+ '|': r'{\textbar}',
+ '<': r'{\textless}',
+ '>': r'{\textgreater}',
+ '^': r'{\textasciicircum}',
+ '~': r'{\textasciitilde}',
+ '_': r'{\DECtextunderscore}',
+ }
+ character_map.update(unicode_map)
+ #character_map.update(special_map)
+
+ # `att_map` is for encoding attributes. According to
+ # <http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/teTeX/latex/latex2e-html/ltx-164.html>,
+ # the following characters are special: # $ % & ~ _ ^ \ { }
+ # These work without special treatment in macro parameters:
+ # $, &, ~, _, ^
+ att_map = {'#': '\\#',
+ '%': '\\%',
+ # We cannot do anything about backslashes.
+ '\\': '',
+ '{': '\\{',
+ '}': '\\}',
+ # The quotation mark may be redefined by babel.
+ '"': '"{}',
+ }
+ att_map.update(unicode_map)
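The translator escapes text by mapping each character through one of these tables (see `encode` below). A minimal standalone sketch of that mapping-based escaping, using a small hypothetical subset of `character_map`:

```python
# A few entries from the character map above (subset for illustration only).
character_map = {
    '\\': r'{\textbackslash}',
    '{': r'{\{}',
    '}': r'{\}}',
    '%': r'{\%}',
}

def escape(text):
    # Replace each LaTeX-special character with its brace-protected
    # form; all other characters pass through unchanged.
    return ''.join(character_map.get(c, c) for c in text)

print(escape('50% {x}'))  # 50{\%} {\{}x{\}}
```

The per-character `dict.get(c, c)` lookup is what lets the full map above fold in thousands of Unicode replacements from `unicode_map` at no extra cost.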
+
+ def encode(self, text, attval=None):
+ """
+ Encode special characters in ``text`` and return it.
+
+ If attval is true, preserve as much as possible verbatim (used
+ in attribute value encoding). If attval is 'width' or
+ 'height', `text` is interpreted as a length value.
+ """
+ if attval in ('width', 'height'):
+ match = re.match(r'([0-9.]+)(\S*)$', text)
+ assert match, '%s="%s" must be a length' % (attval, text)
+ value, unit = match.groups()
+ if unit == '%':
+ value = str(float(value) / 100)
+ unit = r'\DECrelativeunit'
+ elif unit in ('', 'px'):
+ # If \DECpixelunit is "pt", this gives the same notion
+ # of pixels as graphicx. This is a bit of a hack.
+ value = str(float(value) * 0.75)
+ unit = '\DECpixelunit'
+ return '%s%s' % (value, unit)
+ if attval:
+ get = self.att_map.get
+ else:
+ get = self.character_map.get
+ text = ''.join([get(c, c) for c in text])
+ if (self.literal_block or self.inline_literal) and not attval:
+ # NB: We can have inline literals within literal blocks.
+ # Shrink '\r\n'.
+ text = text.replace('\r\n', '\n')
+ # Convert space. If "{ }~~~~~" is wrapped (at the
+ # brace-enclosed space "{ }"), the following non-breaking
+ # spaces ("~~~~") do *not* wind up at the beginning of the
+ # next line. Also note that no hyphenation is done if the
+ # breaking space ("{ }") comes *after* the non-breaking
+ # spaces.
+ if self.literal_block:
+ # Replace newlines with real newlines.
+ text = text.replace('\n', '\mbox{}\\\\{}')
+ replace_fn = self.encode_replace_for_literal_block_spaces
+ else:
+ replace_fn = self.encode_replace_for_inline_literal_spaces
+ text = re.sub(r'\s+', replace_fn, text)
+ # Protect hyphens; if we don't, line breaks will be
+ # possible at the hyphens and even the \textnhtt macro
+ # from the hyphenat package won't change that.
+ text = text.replace('-', r'\mbox{-}')
+ text = text.replace("'", r'{\DECtextliteralsinglequote}')
+ if self.double_quote_replacment is not None:
+ text = text.replace('"', self.double_quote_replacment)
+
+ return text
+ else:
+ if not attval:
+ # Replace space with single protected space.
+ text = re.sub(r'\s+', '{ }', text)
+ # Replace double quotes with macro calls.
+ L = []
+ for part in text.split(self.character_map['"']):
+ if L:
+ # Insert quote.
+ L.append(self.left_quote and r'{\DECtextleftdblquote}'
+ or r'{\DECtextrightdblquote}')
+ self.left_quote = not self.left_quote
+ L.append(part)
+ return ''.join(L)
+ else:
+ return text
+
+ def encode_replace_for_literal_block_spaces(self, match):
+ return '~' * len(match.group())
+
+ def encode_replace_for_inline_literal_spaces(self, match):
+ return '{ }' + '~' * (len(match.group()) - 1)
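These two replacement callbacks drive the `re.sub(r'\s+', replace_fn, text)` call in `encode` above. A self-contained sketch of their effect on a run of spaces:

```python
import re

def literal_block_spaces(match):
    # In literal blocks, every whitespace character becomes a
    # non-breaking space (~), preserving alignment exactly.
    return '~' * len(match.group())

def inline_literal_spaces(match):
    # In inline literals, the first space stays breakable ("{ }");
    # any following spaces become non-breaking.
    return '{ }' + '~' * (len(match.group()) - 1)

print(re.sub(r'\s+', literal_block_spaces, 'a   b'))   # a~~~b
print(re.sub(r'\s+', inline_literal_spaces, 'a   b'))  # a{ }~~b
```

Keeping one breakable space in the inline case allows line wrapping at that point while, as the comment in `encode` notes, the trailing `~` run never ends up at the start of the next line.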
+
+ def astext(self):
+ return '\n'.join(self.header) + (''.join(self.body))
+
+ def append(self, text, newline='%\n'):
+ """
+ Append text, stripping newlines, producing nice LaTeX code.
+ """
+ lines = [' ' * self.indentation_level + line + newline
+ for line in text.splitlines(0)]
+ self.body.append(''.join(lines))
+
+ def visit_Text(self, node):
+ self.append(self.encode(node.astext()))
+
+ def depart_Text(self, node):
+ pass
+
+ def is_indented(self, paragraph):
+ """Return true if `paragraph` should be first-line-indented."""
+ assert isinstance(paragraph, nodes.paragraph)
+ siblings = [n for n in paragraph.parent if
+ self.is_visible(n) and not isinstance(n, nodes.Titular)]
+ index = siblings.index(paragraph)
+ if ('continued' in paragraph['classes'] or
+ index > 0 and isinstance(siblings[index-1], nodes.transition)):
+ return 0
+ # Indent all but the first paragraphs.
+ return index > 0
+
+ def before_paragraph(self, node):
+ self.append(r'\renewcommand{\DEVparagraphindented}{%s}'
+ % (self.is_indented(node) and 'true' or 'false'))
+
+ def before_title(self, node):
+ self.append(r'\renewcommand{\DEVtitleastext}{%s}'
+ % self.encode(node.astext()))
+ self.append(r'\renewcommand{\DEVhassubtitle}{%s}'
+ % ((len(node.parent) > 2 and
+ isinstance(node.parent[1], nodes.subtitle))
+ and 'true' or 'false'))
+
+ def before_generated(self, node):
+ if 'sectnum' in node['classes']:
+ node[0] = node[0].strip()
+
+ literal_block = 0
+
+ def visit_literal_block(self, node):
+ self.literal_block = 1
+
+ def depart_literal_block(self, node):
+ self.literal_block = 0
+
+ visit_doctest_block = visit_literal_block
+ depart_doctest_block = depart_literal_block
+
+ inline_literal = 0
+
+ def visit_literal(self, node):
+ self.inline_literal += 1
+
+ def depart_literal(self, node):
+ self.inline_literal -= 1
+
+ def _make_encodable(self, text):
+ """
+ Return text (a unicode object) with all unencodable characters
+ replaced with '?'.
+
+ Thus, the returned unicode string is guaranteed to be encodable.
+ """
+ encoding = self.settings.output_encoding
+ return text.encode(encoding, 'replace').decode(encoding)
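`_make_encodable` round-trips text through the output encoding with the `'replace'` error handler, so any character the codec cannot represent comes back as `'?'`. A quick sketch (using `ascii` rather than the writer's `latin-1` default, purely to force some replacements):

```python
def make_encodable(text, encoding='ascii'):
    # Encode with 'replace' (unencodable characters become '?'),
    # then decode back: the result is guaranteed to be encodable.
    return text.encode(encoding, 'replace').decode(encoding)

print(make_encodable(u'na\u00efve \u2014 caf\u00e9'))  # na?ve ? caf?
```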
+
+ def visit_comment(self, node):
+ """
+ Insert the comment unchanged into the document, replacing
+ unencodable characters with '?'.
+
+ (This is done in order not to fail if comments contain unencodable
+ characters, because our default encoding is not UTF-8.)
+ """
+ self.append('\n'.join(['% ' + self._make_encodable(line) for line
+ in node.astext().splitlines(0)]), newline='\n')
+ raise nodes.SkipChildren
+
+ def before_topic(self, node):
+ if 'contents' in node['classes']:
+ for bullet_list in list(node.traverse(nodes.bullet_list)):
+ p = bullet_list.parent
+ if isinstance(p, nodes.list_item):
+ p.parent.insert(p.parent.index(p) + 1, bullet_list)
+ del p[1]
+ for paragraph in node.traverse(nodes.paragraph):
+ paragraph.attributes.update(paragraph[0].attributes)
+ paragraph[:] = paragraph[0]
+ paragraph.parent['tocrefid'] = paragraph['refid']
+ node['contents'] = 1
+ else:
+ node['contents'] = 0
+
+ bullet_list_level = 0
+
+ def visit_bullet_list(self, node):
+ self.append(r'\DECsetbullet{\labelitem%s}' %
+ ['i', 'ii', 'iii', 'iv'][min(self.bullet_list_level, 3)])
+ self.bullet_list_level += 1
+
+ def depart_bullet_list(self, node):
+ self.bullet_list_level -= 1
+
+ enum_styles = {'arabic': 'arabic', 'loweralpha': 'alph', 'upperalpha':
+ 'Alph', 'lowerroman': 'roman', 'upperroman': 'Roman'}
+
+ enum_counter = 0
+
+ def visit_enumerated_list(self, node):
+        # We create our own enumerated-list environment. This allows us
+        # to set the style and starting value, and to nest lists without limit.
+ # Maybe the actual creation (\DEC) can be moved to the
+ # stylesheet?
+ self.enum_counter += 1
+ enum_prefix = self.encode(node['prefix'])
+ enum_suffix = self.encode(node['suffix'])
+ enum_type = '\\' + self.enum_styles.get(node['enumtype'], r'arabic')
+ start = node.get('start', 1) - 1
+ counter = 'Denumcounter%d' % self.enum_counter
+ self.append(r'\DECmakeenumeratedlist{%s}{%s}{%s}{%s}{%s}{'
+ % (enum_prefix, enum_type, enum_suffix, counter, start))
+ # for Emacs: }
+
+ def depart_enumerated_list(self, node):
+ self.append('}') # for Emacs: {
+
+ def before_list_item(self, node):
+ # XXX needs cleanup.
+ if (len(node) and (isinstance(node[-1], nodes.TextElement) or
+ isinstance(node[-1], nodes.Text)) and
+ node.parent.index(node) == len(node.parent) - 1):
+ node['lastitem'] = 'true'
+
+ before_line = before_list_item
+
+ def before_raw(self, node):
+ if 'latex' in node.get('format', '').split():
+ # We're inserting the text in before_raw and thus outside
+ # of \DN... and \DECattr in order to make grouping with
+ # curly brackets work.
+ self.append(node.astext())
+ raise nodes.SkipChildren
+
+ def process_backlinks(self, node, type):
+ """
+ Add LaTeX handling code for backlinks of footnote or citation
+ node `node`. `type` is either 'footnote' or 'citation'.
+ """
+ self.append(r'\renewcommand{\DEVsinglebackref}{}')
+ self.append(r'\renewcommand{\DEVmultiplebackrefs}{}')
+ if len(node['backrefs']) > 1:
+ refs = []
+ for i in range(len(node['backrefs'])):
+ # \DECmulticitationbacklink or \DECmultifootnotebacklink.
+ refs.append(r'\DECmulti%sbacklink{%s}{%s}'
+ % (type, node['backrefs'][i], i + 1))
+ self.append(r'\renewcommand{\DEVmultiplebackrefs}{(%s){ }}'
+ % ', '.join(refs))
+ elif len(node['backrefs']) == 1:
+ self.append(r'\renewcommand{\DEVsinglebackref}{%s}'
+ % node['backrefs'][0])
+
+ def visit_footnote(self, node):
+ self.process_backlinks(node, 'footnote')
+
+ def visit_citation(self, node):
+ self.process_backlinks(node, 'citation')
+
+ def before_table(self, node):
+ # A table contains exactly one tgroup. See before_tgroup.
+ pass
+
+ def before_tgroup(self, node):
+ widths = []
+ total_width = 0
+ for i in range(int(node['cols'])):
+ assert isinstance(node[i], nodes.colspec)
+ widths.append(int(node[i]['colwidth']) + 1)
+ total_width += widths[-1]
+ del node[:len(widths)]
+ tablespec = '|'
+ for w in widths:
+ # 0.93 is probably wrong in many cases. XXX Find a
+ # solution which works *always*.
+ tablespec += r'p{%s\textwidth}|' % (0.93 * w /
+ max(total_width, 60))
+ self.append(r'\DECmaketable{%s}{' % tablespec)
+ self.context.append('}')
+ raise SkipAttrParentLaTeX
+
+ def depart_tgroup(self, node):
+ self.append(self.context.pop())
+
+ def before_row(self, node):
+ raise SkipAttrParentLaTeX
+
+ def before_thead(self, node):
+ raise SkipAttrParentLaTeX
+
+ def before_tbody(self, node):
+ raise SkipAttrParentLaTeX
+
+ def is_simply_entry(self, node):
+        return ((len(node) == 1 and isinstance(node[0], nodes.paragraph)) or
+                len(node) == 0)
+
+ def before_entry(self, node):
+ is_leftmost = 0
+ if node.hasattr('morerows'):
+ self.document.reporter.severe('Rowspans are not supported.')
+ # Todo: Add empty cells below rowspanning cell and issue
+ # warning instead of severe.
+ if node.hasattr('morecols'):
+ # The author got a headache trying to implement
+ # multicolumn support.
+ if not self.is_simply_entry(node):
+ self.document.reporter.severe(
+ 'Colspanning table cells may only contain one paragraph.')
+ # Todo: Same as above.
+            # The number of columns this entry spans.
+ colspan = int(node['morecols']) + 1
+ del node['morecols']
+ else:
+ colspan = 1
+ # Macro to call -- \DECcolspan or \DECcolspanleft.
+ macro_name = r'\DECcolspan'
+ if node.parent.index(node) == 0:
+ # Leftmost column.
+ macro_name += 'left'
+ is_leftmost = 1
+ if colspan > 1:
+ self.append('%s{%s}{' % (macro_name, colspan))
+ self.context.append('}')
+ else:
+                # Do not add a multicolumn with colspan 1 because we need
+ # at least one non-multicolumn cell per column to get the
+ # desired column widths, and we can only do colspans with
+ # cells consisting of only one paragraph.
+ if not is_leftmost:
+ self.append(r'\DECsubsequententry{')
+ self.context.append('}')
+ else:
+ self.context.append('')
+ if isinstance(node.parent.parent, nodes.thead):
+ node['tableheaderentry'] = 'true'
+
+ # Don't add \renewcommand{\DEVparent}{...} because there must
+ # not be any non-expandable commands in front of \multicolumn.
+ raise SkipParentLaTeX
+
+ def depart_entry(self, node):
+ self.append(self.context.pop())
+
+ def before_substitution_definition(self, node):
+ raise nodes.SkipNode
+
+ indentation_level = 0
+
+ def node_name(self, node):
+ return node.__class__.__name__.replace('_', '')
+
+ # Attribute propagation order.
+ attribute_order = ['align', 'classes', 'ids']
+
+ def attribute_cmp(self, a1, a2):
+ """
+ Compare attribute names `a1` and `a2`. Used in
+ propagate_attributes to determine propagation order.
+
+ See built-in function `cmp` for return value.
+ """
+ if a1 in self.attribute_order and a2 in self.attribute_order:
+ return cmp(self.attribute_order.index(a1),
+ self.attribute_order.index(a2))
+ if (a1 in self.attribute_order) != (a2 in self.attribute_order):
+ # Attributes not in self.attribute_order come last.
+ return a1 in self.attribute_order and -1 or 1
+ else:
+ return cmp(a1, a2)
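`attribute_cmp` is a Python 2 `cmp`-style comparator (Docutils targeted Python 2 at the time). An equivalent key-based sort, as it might look in Python 3, is roughly:

```python
attribute_order = ['align', 'classes', 'ids']

def attribute_key(name):
    # Names in attribute_order sort first, in that fixed order;
    # all remaining names follow, alphabetically.
    if name in attribute_order:
        return (0, attribute_order.index(name), '')
    return (1, 0, name)

attrs = ['ids', 'refid', 'align', 'backrefs']
print(sorted(attrs, key=attribute_key))
# ['align', 'ids', 'backrefs', 'refid']
```

The tuple key reproduces the comparator's three cases: both names in the order list, exactly one in it, or neither.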
+
+ def propagate_attributes(self, node):
+ # Propagate attributes using \DECattr macros.
+ node_name = self.node_name(node)
+ attlist = []
+ if isinstance(node, nodes.Element):
+ attlist = node.attlist()
+ attlist.sort(lambda pair1, pair2: self.attribute_cmp(pair1[0],
+ pair2[0]))
+ # `numatts` may be greater than len(attlist) due to list
+ # attributes.
+ numatts = 0
+ pass_contents = self.pass_contents(node)
+ for key, value in attlist:
+ if isinstance(value, list):
+ self.append(r'\renewcommand{\DEVattrlen}{%s}' % len(value))
+ for i in range(len(value)):
+ self.append(r'\DECattr{%s}{%s}{%s}{%s}{' %
+ (i+1, key, self.encode(value[i], attval=key),
+ node_name))
+ if not pass_contents:
+ self.append('}')
+ numatts += len(value)
+ else:
+ self.append(r'\DECattr{}{%s}{%s}{%s}{' %
+ (key, self.encode(unicode(value), attval=key),
+ node_name))
+ if not pass_contents:
+ self.append('}')
+ numatts += 1
+ if pass_contents:
+ self.context.append('}' * numatts) # for Emacs: {
+ else:
+ self.context.append('')
+
+ def visit_docinfo(self, node):
+ raise NotImplementedError('Docinfo not yet implemented.')
+
+ def visit_document(self, node):
+ document = node
+ # Move IDs into TextElements. This won't work for images.
+ # Need to review this.
+ for node in document.traverse(nodes.Element):
+ if 'ids' in node and not isinstance(node,
+ nodes.TextElement):
+ next_text_element = node.next_node(nodes.TextElement)
+ if next_text_element:
+ next_text_element['ids'].extend(node['ids'])
+ node['ids'] = []
+
+ def pass_contents(self, node):
+ r"""
+ Return True if the node contents should be passed in
+ \DN<nodename>{<contents>} and \DECattr{}{}{}{}{<contents>}.
+ Return False if the node contents should be passed in
+ \DECvisit<nodename> <contents> \DECdepart<nodename>, and no
+ attribute handler should be called.
+ """
+ # Passing the whole document or whole sections as parameters
+ # to \DN... or \DECattr causes LaTeX to run out of memory.
+ return not isinstance(node, (nodes.document, nodes.section))
+
+ def dispatch_visit(self, node):
+ skip_attr = skip_parent = 0
+ # TreePruningException to be propagated.
+ tree_pruning_exception = None
+ if hasattr(self, 'before_' + node.__class__.__name__):
+ try:
+ getattr(self, 'before_' + node.__class__.__name__)(node)
+ except SkipParentLaTeX:
+ skip_parent = 1
+ except SkipAttrParentLaTeX:
+ skip_attr = 1
+ skip_parent = 1
+ except nodes.SkipNode:
+ raise
+ except (nodes.SkipChildren, nodes.SkipSiblings), instance:
+ tree_pruning_exception = instance
+ except nodes.SkipDeparture:
+ raise NotImplementedError(
+ 'SkipDeparture not usable in LaTeX writer')
+
+ if not isinstance(node, nodes.Text):
+ node_name = self.node_name(node)
+ # attribute_deleters will be appended to self.context.
+ attribute_deleters = []
+ if not skip_parent and not isinstance(node, nodes.document):
+ self.append(r'\renewcommand{\DEVparent}{%s}'
+ % self.node_name(node.parent))
+ for name, value in node.attlist():
+ if not isinstance(value, list) and not ':' in name:
+ # For non-list and non-special (like
+ # 'xml:preserve') attributes, set
+ # \DEVcurrentN<nodename>A<attribute> to the
+ # attribute value, so that the value of the
+ # attribute is available in the node handler
+ # and all children.
+ macro = r'\DEVcurrentN%sA%s' % (node_name, name)
+ self.append(r'\def%s{%s}' % (
+ macro, self.encode(unicode(value), attval=name)))
+ # Make the attribute undefined afterwards.
+ attribute_deleters.append(r'\let%s=\relax' % macro)
+ self.context.append('\n'.join(attribute_deleters))
+ if self.pass_contents(node):
+ # Call \DN<nodename>{<contents>}.
+ self.append(r'\DN%s{' % node_name)
+ self.context.append('}')
+ else:
+ # Call \DECvisit<nodename> <contents>
+ # \DECdepart<nodename>. (Maybe we should use LaTeX
+ # environments for this?)
+ self.append(r'\DECvisit%s' % node_name)
+ self.context.append(r'\DECdepart%s' % node_name)
+ self.indentation_level += 1
+ if not skip_attr:
+ self.propagate_attributes(node)
+ else:
+ self.context.append('')
+
+ if (isinstance(node, nodes.TextElement) and
+ not isinstance(node.parent, nodes.TextElement)):
+ # Reset current quote to left.
+ self.left_quote = 1
+
+ # Call visit_... method.
+ try:
+ nodes.SparseNodeVisitor.dispatch_visit(self, node)
+ except LaTeXException:
+ raise NotImplementedError(
+ 'visit_... methods must not raise LaTeXExceptions')
+
+ if tree_pruning_exception:
+ # Propagate TreePruningException raised in before_... method.
+ raise tree_pruning_exception
+
+ def is_invisible(self, node):
+ # Return true if node is invisible or moved away in the LaTeX
+ # rendering.
+ return (not isinstance(node, nodes.Text) and
+ (isinstance(node, nodes.Invisible) or
+ isinstance(node, nodes.footnote) or
+ isinstance(node, nodes.citation) or
+ # Assume raw nodes to be invisible.
+ isinstance(node, nodes.raw) or
+ # Floating image or figure.
+ node.get('align') in ('left', 'right')))
+
+ def is_visible(self, node):
+ return not self.is_invisible(node)
+
+ def needs_space(self, node):
+ """Two nodes for which `needs_space` is true need auxiliary space."""
+ # Return true if node is a visible block-level element.
+ return ((isinstance(node, nodes.Body) or
+ isinstance(node, nodes.topic)) and
+ not (self.is_invisible(node) or
+ isinstance(node.parent, nodes.TextElement)))
+
+ def always_needs_space(self, node):
+ """
+ Always add space around nodes for which `always_needs_space()`
+ is true, regardless of whether the other node needs space as
+ well. (E.g. transition next to section.)
+ """
+ return isinstance(node, nodes.transition)
+
+ def dispatch_departure(self, node):
+ # Call departure method.
+ nodes.SparseNodeVisitor.dispatch_departure(self, node)
+
+ if not isinstance(node, nodes.Text):
+ # Close attribute and node handler call (\DN...{...}).
+ self.indentation_level -= 1
+ self.append(self.context.pop() + self.context.pop())
+ # Delete \DECcurrentN... attribute macros.
+ self.append(self.context.pop())
+ # Get next sibling.
+ next_node = node.next_node(
+ ascend=0, siblings=1, descend=0,
+ condition=self.is_visible)
+ # Insert space if necessary.
+ if (self.needs_space(node) and self.needs_space(next_node) or
+ self.always_needs_space(node) or
+ self.always_needs_space(next_node)):
+ if isinstance(node, nodes.paragraph) and isinstance(next_node, nodes.paragraph):
+ # Space between paragraphs.
+ self.append(r'\DECparagraphspace')
+ else:
+ # One of the elements is not a paragraph.
+ self.append(r'\DECauxiliaryspace')
diff --git a/python/helpers/docutils/writers/newlatex2e/base.tex b/python/helpers/docutils/writers/newlatex2e/base.tex
new file mode 100644
index 0000000..4955e60
--- /dev/null
+++ b/python/helpers/docutils/writers/newlatex2e/base.tex
@@ -0,0 +1,1180 @@
+% System stylesheet for the new LaTeX writer, newlatex2e.
+
+% Major parts of the rendering are done in this stylesheet and not in the
+% Python module.
+
+% For development notes, see notes.txt.
+
+% User documentation (in the stylesheet for now; that may change though):
+
+% Naming conventions:
+% All uppercase letters in macro names have a specific meaning.
+% \D...: All macros introduced by the Docutils LaTeX writer start with "D".
+% \DS<name>: Setup function (called at the bottom of this stylesheet).
+% \DN<nodename>{<contents>}: Handler for Docutils document tree node `node`; called by
+% the Python module.
+% \DEV<name>: External variable, set by the Python module.
+% \DEC<name>: External command. It is called by the Python module and must be
+% defined in this stylesheet.
+% \DN<nodename>A<attribute>{<number>}{<attribute>}{<value>}{<nodename>}{<contents>}:
+% Attribute handler for `attribute` set on nodes of type `nodename`.
+% See below for a discussion of attribute handlers.
+% \DA<attribute>{<number>}{<attribute>}{<value>}{<nodename>}{<contents>}:
+% Attribute handler for all `attribute`. Called only when no specific
+% \DN<nodename>A<attribute> handler is defined.
+% \DN<nodename>C<class>{<contents>}:
+% Handler for `class`, when set on nodes of type `nodename`.
+% \DC<class>{<contents>}:
+% Handler for `class`. Called only when no specific \DN<nodename>C<class>
+% handler is defined.
+% \D<name>: Generic variable or function.
+
+% Attribute handlers:
+% TODO
+
+% ---------------------------------------------------------------------------
+
+% Having to intersperse code with \makeatletter-\makeatother pairs is very
+% annoying, so we call \makeatletter at the top and \makeatother at the
+% bottom. Just be aware that you cannot use "@" as a text character inside
+% this stylesheet.
+\makeatletter
+
+% Print-mode (as opposed to online mode, e.g. with Adobe Reader).
+% In online mode this causes, for example, blue hyperlinks.
+\providecommand{\Dprinting}{false}
+
+% \DSearly is called right after \documentclass.
+\providecommand{\DSearly}{}
+% \DSlate is called at the end of the stylesheet (right before the document
+% tree).
+\providecommand{\DSlate}{}
+
+% Use the KOMA script article class.
+\providecommand{\Ddocumentclass}{scrartcl}
+\providecommand{\Ddocumentoptions}{a4paper}
+\providecommand{\DSdocumentclass}{
+ \documentclass[\Ddocumentoptions]{\Ddocumentclass} }
+
+% Todo: This should be movable to the bottom, but it isn't as long as
+% we use \usepackage commands at the top level of this stylesheet
+% (which we shouldn't).
+\DSdocumentclass
+
+\providecommand{\DSpackages}{
+ % Load miscellaneous packages.
+ % Note 1: Many of the packages loaded here are used throughout this stylesheet.
+ % If one of these packages does not work on your system or in your scenario,
+ % please let us know, so we can consider making the package optional.
+ % Note 2: It would appear cleaner to load packages where they are used.
+ % However, since using a wrong package loading order can lead to *very*
+ % subtle bugs, we centralize the loading of most packages here.
+ \DSfontencoding % load font encoding packages
+ \DSlanguage % load babel
+ % Using \ifthenelse conditionals.
+ \usepackage{ifthen} % before hyperref (really!)
+  % There is no support for *not* using hyperref because it's used in many
+ % places. If this is a problem (e.g. because hyperref doesn't work on your
+ % system), please let us know.
+ \usepackage[colorlinks=false,pdfborder={0 0 0}]{hyperref}
+ % Get color, e.g. for links and system messages.
+ \usepackage{color}
+ % Get \textnhtt macro (non-hyphenating type writer).
+ \usepackage{hyphenat}
+ % For sidebars.
+ \usepackage{picins}
+ % We use longtable to create tables.
+ \usepackage{longtable}
+ % Images.
+ \usepackage{graphicx}
+ % These packages might be useful (some just add magic pixie dust), so
+ % evaluate them:
+ %\usepackage{fixmath}
+ %\usepackage{amsmath}
+ % Add some missing symbols like \textonehalf.
+ \usepackage{textcomp}
+}
+
+\providecommand{\DSfontencoding}{
+ % Set up font encoding. Called by \DSpackages.
+ % AE is a T1 emulation. It provides mostly the same characters and
+ % features as T1-encoded fonts but doesn't use bitmap fonts (which are
+  % unsuitable for online reading and problematic for some printers).
+ \usepackage{ae}
+ % Provide the characters not contained in AE from EC bitmap fonts.
+ \usepackage{aecompl}
+ % Guillemets ("<<", ">>") in AE.
+ \usepackage{aeguill}
+}
+
+\providecommand{\DSsymbols}{%
+ % Fix up symbols.
+ % The Euro symbol in Computer Modern looks, um, funny. Let's get a
+ % proper Euro symbol.
+ \usepackage{eurosym}%
+ \renewcommand{\texteuro}{\euro}%
+}
+
+% Taken from
+% <http://groups.google.de/groups?selm=1i0n5tgtplti420e1omp4pctlv19jpuhbb%404ax.com>
+% and modified. Used with permission.
+\providecommand{\Dprovidelength}[2]{%
+ \begingroup%
+ \escapechar\m@ne%
+ \xdef\@gtempa{{\string#1}}%
+ \endgroup%
+ \expandafter\@ifundefined\@gtempa%
+ {\newlength{#1}\setlength{#1}{#2}}%
+ {}%
+}
+
+\providecommand{\Dprovidecounter}[2]{%
+ % Like \newcounter except that it doesn't crash if the counter
+ % already exists.
+ \@ifundefined{c@#1}{\newcounter{#1}\setcounter{#1}{#2}}{}
+}
+
+\Dprovidelength{\Dboxparindent}{\parindent}
+
+\providecommand{\Dmakebox}[1]{%
+ % Make a centered, frameless box. Useful e.g. for block quotes.
+ % Do not use minipages here, but create pseudo-lists to allow
+ % page-breaking. (Don't use KOMA-script's addmargin environment
+ % because it messes up bullet lists.)
+ \Dmakelistenvironment{}{}{%
+ \setlength{\parskip}{0pt}%
+ \setlength{\parindent}{\Dboxparindent}%
+ \item{#1}%
+ }%
+}
+
+\providecommand{\Dmakefbox}[1]{%
+ % Make a centered, framed box. Useful e.g. for admonitions.
+ \vspace{0.4\baselineskip}%
+ \begin{center}%
+ \fbox{%
+ \begin{minipage}[t]{0.9\linewidth}%
+ \setlength{\parindent}{\Dboxparindent}%
+ #1%
+ \end{minipage}%
+ }%
+ \end{center}%
+ \vspace{0.4\baselineskip}%
+}
+
+% We do not currently recognize the difference between an end-of-sentence
+% period and a mid-sentence period (".  " vs. ". " in plain text), so
+% \frenchspacing is appropriate.
+\providecommand{\DSfrenchspacing}{\frenchspacing}
+
+
+\Dprovidelength{\Dblocklevelvspace}{%
+ % Space between block-level elements other than paragraphs.
+ 0.7\baselineskip plus 0.3\baselineskip minus 0.2\baselineskip%
+}
+\providecommand{\DECauxiliaryspace}{%
+ \ifthenelse{\equal{\Dneedvspace}{true}}{\vspace{\Dblocklevelvspace}}{}%
+ \par\noindent%
+}
+\providecommand{\DECparagraphspace}{\par}
+\providecommand{\Dneedvspace}{true}
+
+\providecommand{\DSlanguage}{%
+ % Set up babel.
+ \usepackage[\DEVlanguagebabel]{babel}
+}
+
+\providecommand{\Difdefined}[3]{\@ifundefined{#1}{#3}{#2}}
+
+% Handler for 'classes' attribute (called for each class attribute).
+\providecommand{\DAclasses}[5]{%
+ % Dispatch to \DN<nodename>C<class>.
+ \Difdefined{DN#4C#3}{%
+ % Pass only contents, nothing else!
+ \csname DN#4C#3\endcsname{#5}%
+ }{%
+ % Otherwise, dispatch to \DC<class>.
+ \Difdefined{DC#3}{%
+ \csname DC#3\endcsname{#5}%
+ }{%
+ #5%
+ }%
+ }%
+}
+
+\providecommand{\DECattr}[5]{%
+ % Global attribute dispatcher, called inside the document tree.
+ % Parameters:
+ % 1. Attribute number.
+ % 2. Attribute name.
+ % 3. Attribute value.
+ % 4. Node name.
+ % 5. Node contents.
+ \Difdefined{DN#4A#2}{%
+ % Dispatch to \DN<nodename>A<attribute>.
+ \csname DN#4A#2\endcsname{#1}{#2}{#3}{#4}{#5}%
+ }{\Difdefined{DA#2}{%
+ % Otherwise dispatch to \DA<attribute>.
+ \csname DA#2\endcsname{#1}{#2}{#3}{#4}{#5}%
+ }{%
+ % Otherwise simply run the contents without calling a handler.
+ #5%
+ }}%
+}
+
+% ---------- Link handling ----------
+% Targets and references.
+
+\providecommand{\Draisedlink}[1]{%
+ % Anchors are placed on the base line by default. This is a bad thing for
+ % inline context, so we raise the anchor (normally by \baselineskip).
+ \Hy@raisedlink{#1}%
+}
+
+% References.
+% We're assuming here that the "refid" and "refuri" attributes occur
+% only in inline context (in TextElements).
+\providecommand{\DArefid}[5]{%
+ \ifthenelse{\equal{#4}{reference}}{%
+ \Dexplicitreference{\##3}{#5}%
+ }{%
+ % If this is not a target node (targets with refids are
+ % uninteresting and should be silently dropped).
+ \ifthenelse{\not\equal{#4}{target}}{%
+ % If this is a footnote reference, call special macro.
+ \ifthenelse{\equal{#4}{footnotereference}}{%
+ \Dimplicitfootnotereference{\##3}{#5}%
+ }{%
+ \ifthenelse{\equal{#4}{citationreference}}{%
+ \Dimplicitcitationreference{\##3}{#5}%
+ }{%
+ \Dimplicitreference{\##3}{#5}%
+ }%
+ }%
+ }{}%
+ }%
+}
+\providecommand{\DArefuri}[5]{%
+ \ifthenelse{\equal{#4}{target}}{%
+ % The node name is 'target', so this is a hyperlink target, like this:
+ % .. _mytarget: URI
+ % Hyperlink targets are ignored because they are invisible.
+ }{%
+ % If a non-target node has a refuri attribute, it must be an explicit URI
+ % reference (i.e. node name is 'reference').
+ \Durireference{#3}{#5}%
+ }%
+}
+% Targets.
+\providecommand{\DAids}[5]{%
+ \label{#3}%
+ \ifthenelse{\equal{#4}{footnotereference}}{%
+ {%
+ \renewcommand{\HyperRaiseLinkDefault}{%
+ % Dirty hack to make backrefs to footnote references work.
+ % For some reason, \baselineskip is 0pt in fn references.
+ 0.5\Doriginalbaselineskip%
+ }%
+ \Draisedlink{\hypertarget{#3}{}}#5%
+ }%
+ }{%
+ \Draisedlink{\hypertarget{#3}{}}#5%
+ }%
+}
+\providecommand{\Dimplicitreference}[2]{%
+ % Create implicit reference to ID. Implicit references occur
+ % e.g. in TOC-backlinks of section titles. Parameters:
+ % 1. Target.
+ % 2. Link text.
+ \href{#1}{#2}%
+}
+\providecommand{\Dimplicitfootnotereference}[2]{%
+ % Ditto, but for the special case of footnotes.
+ % We want them to be rendered like explicit references.
+ \Dexplicitreference{#1}{#2}%
+}
+\providecommand{\Dimplicitcitationreference}[2]{%
+ % Ditto for citation references.
+ \Dimplicitfootnotereference{#1}{#2}%
+}
+\providecommand{\Dcolorexplicitreference}{%
+ \ifthenelse{\equal{\Dprinting}{true}}{\color{black}}{\color{blue}}%
+}
+\providecommand{\Dexplicitreference}[2]{%
+ % Create explicit reference to ID, e.g. created with "foo_".
+ % Parameters:
+ % 1. Target.
+ % 2. Link text.
+ \href{#1}{{\Dcolorexplicitreference#2}}%
+}
+\providecommand{\Dcolorurireference}{\Dcolorexplicitreference}
+\providecommand{\Durireference}[2]{%
+ % Create reference to URI. Parameters:
+ % 1. Target.
+ % 2. Link text.
+ \href{#1}{{\Dcolorurireference#2}}%
+}
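+% Illustration (assumed translation, not taken from actual writer
+% output): an inline reference like `Docutils <http://docutils.sf.net/>`_
+% would reach this macro roughly as
+%   \Durireference{http://docutils.sf.net/}{Docutils}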
+
+\Dprovidecounter{Dpdfbookmarkid}{0}%
+\providecommand{\Dpdfbookmark}[1]{%
+  % Temporarily decrement the Dsectionlevel counter.
+ \addtocounter{Dsectionlevel}{-1}%
+ %\typeout{\arabic{Dsectionlevel}}%
+ %\typeout{#1}%
+ %\typeout{docutils\roman{Dpdfbookmarkid}}%
+ %\typeout{}%
+ \pdfbookmark[\arabic{Dsectionlevel}]{#1}{docutils\arabic{Dpdfbookmarkid}}%
+ \addtocounter{Dsectionlevel}{1}%
+ \addtocounter{Dpdfbookmarkid}{1}%
+}
+% ---------- End of Link Handling ----------
+
+\providecommand{\DNparagraph}[1]{%
+ \ifthenelse{\equal{\DEVparagraphindented}{true}}{\indent}{\noindent}%
+ #1%
+}
+\providecommand{\Dformatboxtitle}[1]{{\Large\textbf{#1}}}
+\providecommand{\Dformatboxsubtitle}[1]{{\large\textbf{#1}}}
+\providecommand{\Dtopictitle}[1]{%
+ \Difinsidetoc{\vspace{1em}\par}{}%
+ \noindent\Dformatboxtitle{#1}%
+ \ifthenelse{\equal{\DEVhassubtitle}{false}}{\vspace{1em}}{\vspace{0.5em}}%
+ \par%
+}
+\providecommand{\Dadmonitiontitle}[1]{%
+ \Dtopictitle{#1}%
+}
+\providecommand{\Dtopicsubtitle}[1]{%
+ \noindent\Dformatboxsubtitle{#1}%
+ \vspace{1em}%
+ \par%
+}
+\providecommand{\Dsidebartitle}[1]{\Dtopictitle{#1}}
+\providecommand{\Dsidebarsubtitle}[1]{\Dtopicsubtitle{#1}}
+\providecommand{\Ddocumenttitle}[1]{%
+ \begin{center}{\Huge#1}\end{center}%
+ \ifthenelse{\equal{\DEVhassubtitle}{true}}{\vspace{0.1cm}}{\vspace{1cm}}%
+}
+\providecommand{\Ddocumentsubtitle}[1]{%
+ \begin{center}{\huge#1}\end{center}%
+ \vspace{1cm}%
+}
+% Can be overwritten by user stylesheet.
+\providecommand{\Dformatsectiontitle}[1]{#1}
+\providecommand{\Dformatsectionsubtitle}[1]{\Dformatsectiontitle{#1}}
+\providecommand{\Dbookmarksectiontitle}[1]{%
+ % Return text suitable for use in \section*, \subsection*, etc.,
+ % containing a PDF bookmark. Parameter: The title (as node tree).
+ \Draisedlink{\Dpdfbookmark{\DEVtitleastext}}%
+ #1%
+}
+\providecommand{\Dsectiontitlehook}[1]{#1}
+\providecommand{\Dsectiontitle}[1]{%
+ \Dsectiontitlehook{%
+ \Ddispatchsectiontitle{\Dbookmarksectiontitle{\Dformatsectiontitle{#1}}}%
+ }%
+}
+\providecommand{\Ddispatchsectiontitle}[1]{%
+ \@ifundefined{Dsectiontitle\roman{Dsectionlevel}}{%
+ \Ddeepsectiontitle{#1}%
+ }{%
+ \csname Dsectiontitle\roman{Dsectionlevel}\endcsname{#1}%
+ }%
+}
+\providecommand{\Ddispatchsectionsubtitle}[1]{%
+ \Ddispatchsectiontitle{#1}%
+}
+\providecommand{\Dsectiontitlei}[1]{\section*{#1}}
+\providecommand{\Dsectiontitleii}[1]{\subsection*{#1}}
+\providecommand{\Ddeepsectiontitle}[1]{%
+ % Anything below \subsubsection (like \paragraph or \subparagraph)
+ % is useless because it uses the same font. The only way to
+ % (visually) distinguish such deeply nested sections is to use
+ % section numbering.
+ \subsubsection*{#1}%
+}
+\providecommand{\Dsectionsubtitlehook}[1]{#1}
+\Dprovidelength{\Dsectionsubtitleraisedistance}{0.7em}
+\providecommand{\Dsectionsubtitlescaling}{0.85}
+\providecommand{\Dsectionsubtitle}[1]{%
+ \Dsectionsubtitlehook{%
+ % Move the subtitle nearer to the title.
+ \vspace{-\Dsectionsubtitleraisedistance}%
+ % Don't create a PDF bookmark.
+ \Ddispatchsectionsubtitle{%
+ \Dformatsectionsubtitle{\scalebox{\Dsectionsubtitlescaling}{#1}}%
+ }%
+ }%
+}
+\providecommand{\DNtitle}[1]{%
+ % Dispatch to \D<parent>title.
+ \csname D\DEVparent title\endcsname{#1}%
+}
+\providecommand{\DNsubtitle}[1]{%
+ % Dispatch to \D<parent>subtitle.
+ \csname D\DEVparent subtitle\endcsname{#1}%
+}
+
+\providecommand{\DNliteralblock}[1]{%
+ \Dmakelistenvironment{}{%
+ \ifthenelse{\equal{\Dinsidetabular}{true}}{%
+ \setlength{\leftmargin}{0pt}%
+ }{}%
+ \setlength{\rightmargin}{0pt}%
+ }{%
+ \raggedright\item\noindent\nohyphens{\textnhtt{#1\Dfinalstrut}}%
+ }%
+}
+\providecommand{\DNdoctestblock}[1]{\DNliteralblock{#1}}
+\providecommand{\DNliteral}[1]{\textnhtt{#1}}
+\providecommand{\DNemphasis}[1]{\emph{#1}}
+\providecommand{\DNstrong}[1]{\textbf{#1}}
+\providecommand{\DECvisitdocument}{\begin{document}\noindent}
+\providecommand{\DECdepartdocument}{\end{document}}
+\providecommand{\DNtopic}[1]{%
+ \ifthenelse{\equal{\DEVcurrentNtopicAcontents}{1}}{%
+ \addtocounter{Dtoclevel}{1}%
+ \par\noindent%
+ #1%
+ \addtocounter{Dtoclevel}{-1}%
+ }{%
+ \par\noindent%
+ \Dmakebox{#1}%
+ }%
+}
+\providecommand{\DNadmonition}[1]{%
+ \DNtopic{#1}%
+}
+\providecommand{\Dformatrubric}[1]{\textbf{#1}}
+\Dprovidelength{\Dprerubricspace}{0.3em}
+\providecommand{\DNrubric}[1]{%
+ \vspace{\Dprerubricspace}\par\noindent\Dformatrubric{#1}\par%
+}
+
+\providecommand{\Dbullet}{}
+\providecommand{\DECsetbullet}[1]{\renewcommand{\Dbullet}{#1}}
+\providecommand{\DNbulletlist}[1]{%
+ \Difinsidetoc{%
+ \Dtocbulletlist{#1}%
+ }{%
+ \Dmakelistenvironment{\Dbullet}{}{#1}%
+ }%
+}
+% Todo: So what on earth is @pnumwidth?
+\renewcommand{\@pnumwidth}{2.2em}
+\providecommand{\DNlistitem}[1]{%
+ \Difinsidetoc{%
+ \ifthenelse{\equal{\theDtoclevel}{1}\and\equal{\Dlocaltoc}{false}}{%
+ {%
+ \par\addvspace{1em}\noindent%
+ \sectfont%
+ #1\hfill\pageref{\DEVcurrentNlistitemAtocrefid}%
+ }%
+ }{%
+ \@dottedtocline{0}{\Dtocindent}{0em}{#1}{%
+ \pageref{\DEVcurrentNlistitemAtocrefid}%
+ }%
+ }%
+ }{%
+ \item{#1}%
+ }%
+}
+\providecommand{\DNenumeratedlist}[1]{#1}
+\Dprovidecounter{Dsectionlevel}{0}
+\providecommand{\Dvisitsectionhook}{}
+\providecommand{\Ddepartsectionhook}{}
+\providecommand{\DECvisitsection}{%
+ \addtocounter{Dsectionlevel}{1}%
+ \Dvisitsectionhook%
+}
+\providecommand{\DECdepartsection}{%
+ \Ddepartsectionhook%
+ \addtocounter{Dsectionlevel}{-1}%
+}
+
+% Using \_ will cause hyphenation after _ even in \textnhtt-typewriter
+% because the hyphenat package redefines \_. So we use
+% \textunderscore here.
+\providecommand{\DECtextunderscore}{\textunderscore}
+
+\providecommand{\Dtextinlineliteralfirstspace}{{ }}
+\providecommand{\Dtextinlineliteralsecondspace}{{~}}
+
+\Dprovidelength{\Dlistspacing}{0.8\baselineskip}
+
+\providecommand{\Dsetlistrightmargin}{%
+ \ifthenelse{\lengthtest{\linewidth>12em}}{%
+ % Equal margins.
+ \setlength{\rightmargin}{\leftmargin}%
+ }{%
+    % If the line is narrower than 12em, we don't remove any further
+    % space from the right.
+ \setlength{\rightmargin}{0pt}%
+ }%
+}
+\providecommand{\Dresetlistdepth}{false}
+\Dprovidelength{\Doriginallabelsep}{\labelsep}
+\providecommand{\Dmakelistenvironment}[3]{%
+ % Make list environment with support for unlimited nesting and with
+ % reasonable default lengths. Parameters:
+ % 1. Label (same as in list environment).
+ % 2. Spacing (same as in list environment).
+ % 3. List contents (contents of list environment).
+ \ifthenelse{\equal{\Dinsidetabular}{true}}{%
+ % Unfortunately, vertical spacing doesn't work correctly when
+ % using lists inside tabular environments, so we use a minipage.
+ \begin{minipage}[t]{\linewidth}%
+ }{}%
+ {%
+ \renewcommand{\Dneedvspace}{false}%
+ % \parsep0.5\baselineskip
+ \renewcommand{\Dresetlistdepth}{false}%
+ \ifnum \@listdepth>5%
+ \protect\renewcommand{\Dresetlistdepth}{true}%
+ \@listdepth=5%
+ \fi%
+ \begin{list}{%
+ #1%
+ }{%
+ \setlength{\itemsep}{0pt}%
+ \setlength{\partopsep}{0pt}%
+ \setlength{\topsep}{0pt}%
+ % List should take 90% of total width.
+ \setlength{\leftmargin}{0.05\linewidth}%
+ \ifthenelse{\lengthtest{\leftmargin<1.8em}}{%
+ \setlength{\leftmargin}{1.8em}%
+ }{}%
+ \setlength{\labelsep}{\Doriginallabelsep}%
+ \Dsetlistrightmargin%
+ #2%
+ }{%
+ #3%
+ }%
+ \end{list}%
+ \ifthenelse{\equal{\Dresetlistdepth}{true}}{\@listdepth=5}{}%
+ }%
+ \ifthenelse{\equal{\Dinsidetabular}{true}}{\end{minipage}}{}%
+}
+\providecommand{\Dfinalstrut}{\@finalstrut\@arstrutbox}
+\providecommand{\DAlastitem}[5]{#5\Dfinalstrut}
+
+\Dprovidelength{\Ditemsep}{0pt}
+\providecommand{\DECmakeenumeratedlist}[6]{%
+ % Make enumerated list.
+ % Parameters:
+ % - prefix
+ % - type (\arabic, \roman, ...)
+ % - suffix
+ % - suggested counter name
+ % - start number - 1
+ % - list contents
+ \newcounter{#4}%
+ \Dmakelistenvironment{#1#2{#4}#3}{%
+ % Use as much space as needed for the label.
+ \setlength{\labelwidth}{10em}%
+ % Reserve enough space so that the label doesn't go beyond the
+ % left margin of preceding paragraphs. Like that:
+ %
+ % A paragraph.
+ %
+ % 1. First item.
+ \setlength{\leftmargin}{2.5em}%
+ \Dsetlistrightmargin%
+ \setlength{\itemsep}{\Ditemsep}%
+ % Use counter recommended by Python module.
+ \usecounter{#4}%
+ % Set start value.
+ \addtocounter{#4}{#5}%
+ }{%
+ % The list contents.
+ #6%
+ }%
+}
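+% Illustration (assumed call shape; the counter name is chosen by the
+% writer, not fixed here): a list rendered as "3.", "4.", ... could be
+% produced by
+%   \DECmakeenumeratedlist{}{\arabic}{.}{Dlistcounter}{2}{\item one\item two}
+% where parameter 5 (here 2) is the start number minus one.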
+
+
+% Single quote in literal mode. \textquotesingle from package
+% textcomp has wrong width when using package ae, so we use a normal
+% single curly quote here.
+\providecommand{\DECtextliteralsinglequote}{'}
+
+
+% "Tabular lists" are field lists and options lists (not definition
+% lists because there the term always appears on its own line). We'll
+% use the terminology of field lists now ("field", "field name",
+% "field body"), but the same is also analogously applicable to option
+% lists.
+%
+% We want these lists to be breakable across pages. We cannot
+% automatically get the narrowest possible size for the left column
+% (i.e. the field names or option groups) because tabularx does not
+% support multi-page tables, ltxtable needs to have the table in an
+% external file and we don't want to clutter the user's directories
+% with auxiliary files created by the filecontents environment, and
+% ltablex is not included in teTeX.
+%
+% Thus we set a fixed length for the left column and use list
+% environments. This also has the nice side effect that breaking is
+% now possible anywhere, not just between fields.
+%
+% Note that we are creating a distinct list environment for each
+% field. There is no macro for a whole tabular list!
+\Dprovidelength{\Dtabularlistfieldnamewidth}{6em}
+\Dprovidelength{\Dtabularlistfieldnamesep}{0.5em}
+\providecommand{\Dinsidetabular}{false}
+\providecommand{\Dsavefieldname}{}
+\providecommand{\Dsavefieldbody}{}
+\Dprovidelength{\Dusedfieldnamewidth}{0pt}
+\Dprovidelength{\Drealfieldnamewidth}{0pt}
+\providecommand{\Dtabularlistfieldname}[1]{\renewcommand{\Dsavefieldname}{#1}}
+\providecommand{\Dtabularlistfieldbody}[1]{\renewcommand{\Dsavefieldbody}{#1}}
+\Dprovidelength{\Dparskiptemp}{0pt}
+\providecommand{\Dtabularlistfield}[1]{%
+ {%
+ % This only saves field name and field body in \Dsavefieldname and
+ % \Dsavefieldbody, resp. It does not insert any text into the
+ % document.
+ #1%
+    % Recalculate the real field name width every time we encounter a
+ % tabular list field because it may have been changed using a
+ % "raw" node.
+ \setlength{\Drealfieldnamewidth}{\Dtabularlistfieldnamewidth}%
+ \addtolength{\Drealfieldnamewidth}{\Dtabularlistfieldnamesep}%
+ \Dmakelistenvironment{%
+ \makebox[\Drealfieldnamewidth][l]{\Dsavefieldname}%
+ }{%
+ \setlength{\labelwidth}{\Drealfieldnamewidth}%
+ \setlength{\leftmargin}{\Drealfieldnamewidth}%
+ \setlength{\rightmargin}{0pt}%
+ \setlength{\labelsep}{0pt}%
+ }{%
+ \item%
+ \settowidth{\Dusedfieldnamewidth}{\Dsavefieldname}%
+ \setlength{\Dparskiptemp}{\parskip}%
+ \ifthenelse{%
+ \lengthtest{\Dusedfieldnamewidth>\Dtabularlistfieldnamewidth}%
+ }{%
+ \mbox{}\par%
+ \setlength{\parskip}{0pt}%
+ }{}%
+ \Dsavefieldbody%
+ \setlength{\parskip}{\Dparskiptemp}%
+ %XXX Why did we need this?
+ %\@finalstrut\@arstrutbox%
+ }%
+ \par%
+ }%
+}
+
+\providecommand{\Dformatfieldname}[1]{\textbf{#1:}}
+\providecommand{\DNfieldlist}[1]{#1}
+\providecommand{\DNfield}[1]{\Dtabularlistfield{#1}}
+\providecommand{\DNfieldname}[1]{%
+ \Dtabularlistfieldname{%
+ \Dformatfieldname{#1}%
+ }%
+}
+\providecommand{\DNfieldbody}[1]{\Dtabularlistfieldbody{#1}}
+
+\providecommand{\Dformatoptiongroup}[1]{%
+ % Format option group, e.g. "-f file, --input file".
+ \texttt{#1}%
+}
+\providecommand{\Dformatoption}[1]{%
+ % Format option, e.g. "-f file".
+ % Put into mbox to avoid line-breaking at spaces.
+ \mbox{#1}%
+}
+\providecommand{\Dformatoptionstring}[1]{%
+ % Format option string, e.g. "-f".
+ #1%
+}
+\providecommand{\Dformatoptionargument}[1]{%
+ % Format option argument, e.g. "file".
+ \textsl{#1}%
+}
+\providecommand{\Dformatoptiondescription}[1]{%
+ % Format option description, e.g.
+ % "\DNparagraph{Read input data from file.}"
+ #1%
+}
+\providecommand{\DNoptionlist}[1]{#1}
+\providecommand{\Doptiongroupjoiner}{,{ }}
+\providecommand{\Disfirstoption}{%
+  % Auxiliary macro indicating whether a given option is the first child
+  % of its option group (if it is not, it has to be preceded by
+  % \Doptiongroupjoiner).
+ false%
+}
+\providecommand{\DNoptionlistitem}[1]{%
+ \Dtabularlistfield{#1}%
+}
+\providecommand{\DNoptiongroup}[1]{%
+ \renewcommand{\Disfirstoption}{true}%
+ \Dtabularlistfieldname{\Dformatoptiongroup{#1}}%
+}
+\providecommand{\DNoption}[1]{%
+ % If this is not the first option in this option group, add a
+ % joiner.
+ \ifthenelse{\equal{\Disfirstoption}{true}}{%
+ \renewcommand{\Disfirstoption}{false}%
+ }{%
+ \Doptiongroupjoiner%
+ }%
+ \Dformatoption{#1}%
+}
+\providecommand{\DNoptionstring}[1]{\Dformatoptionstring{#1}}
+\providecommand{\DNoptionargument}[1]{{ }\Dformatoptionargument{#1}}
+\providecommand{\DNdescription}[1]{%
+ \Dtabularlistfieldbody{\Dformatoptiondescription{#1}}%
+}
+
+\providecommand{\DNdefinitionlist}[1]{%
+ \begin{description}%
+ \parskip0pt%
+ #1%
+ \end{description}%
+}
+\providecommand{\DNdefinitionlistitem}[1]{%
+ % LaTeX expects the label in square brackets; we provide an empty
+ % label.
+ \item[]#1%
+}
+\providecommand{\Dformatterm}[1]{#1}
+\providecommand{\DNterm}[1]{\hspace{-5pt}\Dformatterm{#1}}
+% I'm still not sure what the best rendering for classifiers is. The
+% colon syntax is used by reStructuredText, so it's at least WYSIWYG.
+% Use slanted text because italic would cause too much emphasis.
+\providecommand{\Dformatclassifier}[1]{\textsl{#1}}
+\providecommand{\DNclassifier}[1]{~:~\Dformatclassifier{#1}}
+\providecommand{\Dformatdefinition}[1]{#1}
+\providecommand{\DNdefinition}[1]{\par\Dformatdefinition{#1}}
+
+\providecommand{\Dlineblockindentation}{2.5em}
+\providecommand{\DNlineblock}[1]{%
+ \Dmakelistenvironment{}{%
+ \ifthenelse{\equal{\DEVparent}{lineblock}}{%
+ % Parent is a line block, so indent.
+ \setlength{\leftmargin}{\Dlineblockindentation}%
+ }{%
+ % At top level; don't indent.
+ \setlength{\leftmargin}{0pt}%
+ }%
+ \setlength{\rightmargin}{0pt}%
+ \setlength{\parsep}{0pt}%
+ }{%
+ #1%
+ }%
+}
+\providecommand{\DNline}[1]{\item#1}
+
+\providecommand{\DNtransition}{%
+ \raisebox{0.25em}{\parbox{\linewidth}{\hspace*{\fill}\hrulefill\hrulefill\hspace*{\fill}}}%
+}
+
+\providecommand{\Dformatblockquote}[1]{%
+ % Format contents of block quote.
+ % This occurs in block-level context, so we cannot use \textsl.
+ {\slshape#1}%
+}
+\providecommand{\Dformatattribution}[1]{---\textup{#1}}
+\providecommand{\DNblockquote}[1]{%
+ \Dmakebox{%
+ \Dformatblockquote{#1}
+ }%
+}
+\providecommand{\DNattribution}[1]{%
+ \par%
+ \begin{flushright}\Dformatattribution{#1}\end{flushright}%
+}
+
+
+% Sidebars:
+% Vertical and horizontal margins.
+\Dprovidelength{\Dsidebarvmargin}{0.5em}
+\Dprovidelength{\Dsidebarhmargin}{1em}
+% Padding (space between contents and frame).
+\Dprovidelength{\Dsidebarpadding}{1em}
+% Frame width.
+\Dprovidelength{\Dsidebarframewidth}{2\fboxrule}
+% Position ("l" or "r").
+\providecommand{\Dsidebarposition}{r}
+% Width.
+\Dprovidelength{\Dsidebarwidth}{0.45\linewidth}
+\providecommand{\DNsidebar}[1]{
+ \parpic[\Dsidebarposition]{%
+ \begin{minipage}[t]{\Dsidebarwidth}%
+ % Doing this with nested minipages is ugly, but I haven't found
+ % another way to place vertical space before and after the fbox.
+ \vspace{\Dsidebarvmargin}%
+ {%
+ \setlength{\fboxrule}{\Dsidebarframewidth}%
+ \setlength{\fboxsep}{\Dsidebarpadding}%
+ \fbox{%
+ \begin{minipage}[t]{\linewidth}%
+ \setlength{\parindent}{\Dboxparindent}%
+ #1%
+ \end{minipage}%
+ }%
+ }%
+ \vspace{\Dsidebarvmargin}%
+ \end{minipage}%
+ }%
+}
+
+
+% Citations and footnotes.
+\providecommand{\Dformatfootnote}[1]{%
+ % Format footnote.
+ {%
+ \footnotesize#1%
+ % \par is necessary for LaTeX to adjust baselineskip to the
+ % changed font size.
+ \par%
+ }%
+}
+\providecommand{\Dformatcitation}[1]{\Dformatfootnote{#1}}
+\Dprovidelength{\Doriginalbaselineskip}{0pt}
+\providecommand{\DNfootnotereference}[1]{%
+ {%
+ % \baselineskip is 0pt in \textsuperscript, so we save it here.
+ \setlength{\Doriginalbaselineskip}{\baselineskip}%
+ \textsuperscript{#1}%
+ }%
+}
+\providecommand{\DNcitationreference}[1]{{[}#1{]}}
+\Dprovidelength{\Dfootnotesep}{3.5pt}
+\providecommand{\Dsetfootnotespacing}{%
+ % Spacing commands executed at the beginning of footnotes.
+ \setlength{\parindent}{0pt}%
+ \hspace{1em}%
+}
+\providecommand{\DNfootnote}[1]{%
+ % See ltfloat.dtx for details.
+ {%
+ \insert\footins{%
+ % BUG: This is too small if the user adds
+ % \onehalfspacing or \doublespace.
+ \vspace{\Dfootnotesep}%
+ \Dsetfootnotespacing%
+ \Dformatfootnote{#1}%
+ }%
+ }%
+}
+\providecommand{\DNcitation}[1]{\DNfootnote{#1}}
+\providecommand{\Dformatfootnotelabel}[1]{%
+ % Keep \footnotesize in footnote labels (\textsuperscript would
+ % reduce the font size even more).
+ \textsuperscript{\footnotesize#1{ }}%
+}
+\providecommand{\Dformatcitationlabel}[1]{{[}#1{]}{ }}
+\providecommand{\Dformatmultiplebackrefs}[1]{%
+ % If in printing mode, do not write out multiple backrefs.
+ \ifthenelse{\equal{\Dprinting}{true}}{}{\textsl{#1}}%
+}
+\providecommand{\Dthislabel}{}
+\providecommand{\DNlabel}[1]{%
+  % Footnote or citation label.
+ \renewcommand{\Dthislabel}{#1}%
+ \ifthenelse{\not\equal{\DEVsinglebackref}{}}{%
+ \let\Doriginallabel=\Dthislabel%
+ \def\Dthislabel{%
+ \Dsinglefootnotebacklink{\DEVsinglebackref}{\Doriginallabel}%
+ }%
+ }{}%
+ \ifthenelse{\equal{\DEVparent}{footnote}}{%
+ % Footnote label.
+ \Dformatfootnotelabel{\Dthislabel}%
+ }{%
+ \ifthenelse{\equal{\DEVparent}{citation}}{%
+ % Citation label.
+ \Dformatcitationlabel{\Dthislabel}%
+ }{}%
+ }%
+ % If there are multiple backrefs, add them now.
+ \Dformatmultiplebackrefs{\DEVmultiplebackrefs}%
+}
+\providecommand{\Dsinglefootnotebacklink}[2]{%
+ % Create normal backlink of a footnote label. Parameters:
+ % 1. ID.
+ % 2. Link text.
+ % Treat like a footnote reference.
+ \Dimplicitfootnotereference{\##1}{#2}%
+}
+\providecommand{\DECmultifootnotebacklink}[2]{%
+ % Create generated backlink, as in (1, 2). Parameters:
+ % 1. ID.
+ % 2. Link text.
+ % Treat like a footnote reference.
+ \Dimplicitfootnotereference{\##1}{#2}%
+}
+\providecommand{\Dsinglecitationbacklink}[2]{\Dsinglefootnotebacklink{#1}{#2}}
+\providecommand{\DECmulticitationbacklink}[2]{\DECmultifootnotebacklink{#1}{#2}}
+
+
+\providecommand{\DECmaketable}[2]{%
+ % Make table. Parameters:
+ % 1. Table spec (like "|p|p|").
+ % 2. Table contents.
+ {%
+ \ifthenelse{\equal{\Dinsidetabular}{true}}{%
+      % We are already inside a longtable; longtables cannot be nested,
+      % so fall back to a plain tabular.
+ \begin{tabular}{#1}%
+ \hline%
+ #2%
+ \end{tabular}%
+ }{%
+ \renewcommand{\Dinsidetabular}{true}%
+ \begin{longtable}{#1}%
+ \hline%
+ #2%
+ \end{longtable}%
+ }%
+ }%
+}
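+% Illustration (assumed invocation, not generated writer output):
+%   \DECmaketable{|p{0.45\linewidth}|p{0.45\linewidth}|}{%
+%     \DNrow{\DNentry{left cell}&\DNentry{right cell}}}
+% Outside of tables this produces a longtable (so rows may break across
+% pages); inside another table a plain tabular is substituted.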
+\providecommand{\DNthead}[1]{%
+ #1%
+ \endhead%
+}
+\providecommand{\DNrow}[1]{%
+ #1\tabularnewline%
+ \hline%
+}
+\providecommand{\Dinsidemulticolumn}{false}
+\providecommand{\Dcompensatingmulticol}[3]{%
+ \multicolumn{#1}{#2}{%
+ {%
+ \renewcommand{\Dinsidemulticolumn}{true}%
+ % Compensate for weird missing vertical space at top of paragraph.
+ \raisebox{-2.5pt}{#3}%
+ }%
+ }%
+}
+\providecommand{\DECcolspan}[2]{%
+ % Take care of the morecols attribute (but incremented by 1).
+ &%
+ \Dcompensatingmulticol{#1}{l|}{#2}%
+}
+\providecommand{\DECcolspanleft}[2]{%
+  % Like \DECcolspan, but called for the leftmost entries in a table
+  % row.
+ \Dcompensatingmulticol{#1}{|l|}{#2}%
+}
+\providecommand{\DECsubsequententry}[1]{%
+ %
+}
+\providecommand{\DNentry}[1]{%
+ % The following sequence adds minimal vertical space above the top
+ % lines of the first cell paragraph, so that vertical space is
+ % balanced at the top and bottom of table cells.
+ \ifthenelse{\equal{\Dinsidemulticolumn}{false}}{%
+ \vspace{-1em}\vspace{-\parskip}\par%
+ }{}%
+ #1%
+ % No need to add an ampersand ("&"); that's done by \DECsubsequententry.
+}
+\providecommand{\DAtableheaderentry}[5]{\Dformattableheaderentry{#5}}
+\providecommand{\Dformattableheaderentry}[1]{{\bfseries#1}}
+
+
+\providecommand{\DNsystemmessage}[1]{%
+ {%
+ \ifthenelse{\equal{\Dprinting}{false}}{\color{red}}{}%
+ \bfseries%
+ #1%
+ }%
+}
+
+
+\providecommand{\Dinsidehalign}{false}
+\newsavebox{\Dalignedimagebox}
+\Dprovidelength{\Dalignedimagewidth}{0pt}
+\providecommand{\Dhalign}[2]{%
+ % Horizontally align the contents to the left or right so that the
+ % text flows around it.
+ % Parameters:
+ % 1. l or r
+ % 2. Contents.
+ \renewcommand{\Dinsidehalign}{true}%
+ % For some obscure reason \parpic consumes some vertical space.
+ \vspace{-3pt}%
+ % Now we do something *really* ugly, but this enables us to wrap the
+ % image in a minipage while still allowing tight frames when
+ % class=border (see \DNimageCborder).
+ \sbox{\Dalignedimagebox}{#2}%
+ \settowidth{\Dalignedimagewidth}{\usebox{\Dalignedimagebox}}%
+ \parpic[#1]{%
+ \begin{minipage}[b]{\Dalignedimagewidth}%
+ % Compensate for previously added space, but not entirely.
+ \vspace*{2.0pt}%
+ \vspace*{\Dfloatimagetopmargin}%
+ \usebox{\Dalignedimagebox}%
+ \vspace*{1.5pt}%
+ \vspace*{\Dfloatimagebottommargin}%
+ \end{minipage}%
+ }%
+ \renewcommand{\Dinsidehalign}{false}%
+}
+
+
+% Maximum width of an image.
+\providecommand{\Dimagemaxwidth}{\linewidth}
+\providecommand{\Dfloatimagemaxwidth}{0.5\linewidth}
+% Auxiliary variable.
+\Dprovidelength{\Dcurrentimagewidth}{0pt}
+\providecommand{\DNimageAalign}[5]{%
+ \ifthenelse{\equal{#3}{left}}{%
+ \Dhalign{l}{#5}%
+ }{%
+ \ifthenelse{\equal{#3}{right}}{%
+ \Dhalign{r}{#5}%
+ }{%
+ \ifthenelse{\equal{#3}{center}}{%
+ % Text floating around centered figures is a bad idea. Thus
+ % we use a center environment. Note that no extra space is
+ % added by the writer, so the space added by the center
+ % environment is fine.
+ \begin{center}#5\end{center}%
+ }{%
+ #5%
+ }%
+ }%
+ }%
+}
+% Base path for images.
+\providecommand{\Dimagebase}{}
+% Auxiliary command. Current image path.
+\providecommand{\Dimagepath}{}
+\providecommand{\DNimageAuri}[5]{%
+ % Insert image. We treat the URI like a path here.
+ \renewcommand{\Dimagepath}{\Dimagebase#3}%
+ \Difdefined{DcurrentNimageAwidth}{%
+ \Dwidthimage{\DEVcurrentNimageAwidth}{\Dimagepath}%
+ }{%
+ \Dsimpleimage{\Dimagepath}%
+ }%
+}
+\Dprovidelength{\Dfloatimagevmargin}{0pt}
+\providecommand{\Dfloatimagetopmargin}{\Dfloatimagevmargin}
+\providecommand{\Dfloatimagebottommargin}{\Dfloatimagevmargin}
+\providecommand{\Dwidthimage}[2]{%
+ % Image with specified width.
+ % Parameters:
+ % 1. Image width.
+ % 2. Image path.
+ % Need to make bottom-alignment dependent on align attribute (add
+ % functional test first). Need to observe height attribute.
+ %\begin{minipage}[b]{#1}%
+ \includegraphics[width=#1,height=\textheight,keepaspectratio]{#2}%
+ %\end{minipage}%
+}
+\providecommand{\Dcurrentimagemaxwidth}{}
+\providecommand{\Dsimpleimage}[1]{%
+ % Insert image, without much parametrization.
+ \settowidth{\Dcurrentimagewidth}{\includegraphics{#1}}%
+ \ifthenelse{\equal{\Dinsidehalign}{true}}{%
+ \renewcommand{\Dcurrentimagemaxwidth}{\Dfloatimagemaxwidth}%
+ }{%
+ \renewcommand{\Dcurrentimagemaxwidth}{\Dimagemaxwidth}%
+ }%
+ \ifthenelse{\lengthtest{\Dcurrentimagewidth>\Dcurrentimagemaxwidth}}{%
+ \Dwidthimage{\Dcurrentimagemaxwidth}{#1}%
+ }{%
+ \Dwidthimage{\Dcurrentimagewidth}{#1}%
+ }%
+}
+\providecommand{\Dwidthimage}[2]{%
+ % Image with specified width.
+ % Parameters:
+ % 1. Image width.
+ % 2. Image path.
+ \Dwidthimage{#1}{#2}%
+}
+
+% Figures.
+\providecommand{\DNfigureAalign}[5]{%
+ % Hack to make it work Right Now.
+ %\def\DEVcurrentNimageAwidth{\DEVcurrentNfigureAwidth}%
+ %
+ %\def\DEVcurrentNimageAwidth{\linewidth}%
+ \DNimageAalign{#1}{#2}{#3}{#4}{%
+ \begin{minipage}[b]{0.4\linewidth}#5\end{minipage}}%
+ %\let\DEVcurrentNimageAwidth=\relax%
+ %
+ %\let\DEVcurrentNimageAwidth=\relax%
+}
+\providecommand{\DNcaption}[1]{\par\noindent{\slshape#1}}
+\providecommand{\DNlegend}[1]{\DECauxiliaryspace#1}
+
+\providecommand{\DCborder}[1]{\fbox{#1}}
+% No padding between image and border.
+\providecommand{\DNimageCborder}[1]{\frame{#1}}
+
+
+% Need to replace with language-specific stuff. Maybe look at
+% csquotes.sty and ask the author for permission to use parts of it.
+\providecommand{\DECtextleftdblquote}{``}
+\providecommand{\DECtextrightdblquote}{''}
+
+% Table of contents:
+\Dprovidelength{\Dtocininitialsectnumwidth}{2.4em}
+\Dprovidelength{\Dtocadditionalsectnumwidth}{0.7em}
+% Level inside a table of contents. While this is at -1, we are not
+% inside a TOC.
+\Dprovidecounter{Dtoclevel}{-1}%
+\providecommand{\Dlocaltoc}{false}%
+\providecommand{\DNtopicClocal}[1]{%
+ \renewcommand{\Dlocaltoc}{true}%
+ \addtolength{\Dtocsectnumwidth}{2\Dtocadditionalsectnumwidth}%
+ \addtolength{\Dtocindent}{-2\Dtocadditionalsectnumwidth}%
+ #1%
+ \addtolength{\Dtocindent}{2\Dtocadditionalsectnumwidth}%
+ \addtolength{\Dtocsectnumwidth}{-2\Dtocadditionalsectnumwidth}%
+ \renewcommand{\Dlocaltoc}{false}%
+}
+\Dprovidelength{\Dtocindent}{0pt}%
+\Dprovidelength{\Dtocsectnumwidth}{\Dtocininitialsectnumwidth}
+% Compensate for one additional TOC indentation space so that the
+% top-level is unindented.
+\addtolength{\Dtocsectnumwidth}{-\Dtocadditionalsectnumwidth}
+\addtolength{\Dtocindent}{-\Dtocsectnumwidth}
+\providecommand{\Difinsidetoc}[2]{%
+ \ifthenelse{\not\equal{\theDtoclevel}{-1}}{#1}{#2}%
+}
+\providecommand{\DNgeneratedCsectnum}[1]{%
+ \Difinsidetoc{%
+ % Section number inside TOC.
+ \makebox[\Dtocsectnumwidth][l]{#1}%
+ }{%
+ % Section number inside section title.
+ #1\quad%
+ }%
+}
+\providecommand{\Dtocbulletlist}[1]{%
+ \addtocounter{Dtoclevel}{1}%
+ \addtolength{\Dtocindent}{\Dtocsectnumwidth}%
+ \addtolength{\Dtocsectnumwidth}{\Dtocadditionalsectnumwidth}%
+ #1%
+ \addtolength{\Dtocsectnumwidth}{-\Dtocadditionalsectnumwidth}%
+ \addtolength{\Dtocindent}{-\Dtocsectnumwidth}%
+ \addtocounter{Dtoclevel}{-1}%
+}
+
+
+% For \DECpixelunit, the length value is pre-multiplied with 0.75, so by
+% specifying "pt" we get the same notion of "pixel" as graphicx.
+\providecommand{\DECpixelunit}{pt}
+% Normally lengths are relative to the current linewidth.
+\providecommand{\DECrelativeunit}{\linewidth}
+
+
+% ACTION: These commands actually *do* something.
+% Ultimately, everything should be done here, and no active content should be
+% above (not even \usepackage).
+
+\DSearly
+\DSpackages
+\DSfrenchspacing
+\DSsymbols
+\DSlate
+
+\makeatother
diff --git a/python/helpers/docutils/writers/newlatex2e/notes.txt b/python/helpers/docutils/writers/newlatex2e/notes.txt
new file mode 100644
index 0000000..a34b8fe
--- /dev/null
+++ b/python/helpers/docutils/writers/newlatex2e/notes.txt
@@ -0,0 +1,79 @@
+New LaTeX Writer
+================
+
+:Copyright: This document has been placed in the public domain.
+
+The new LaTeX writer (newlatex2e) is in active development. These are
+development notes -- edit ahead! Ultimately, they will be moved to
+the global to-do list, but while newlatex2e is incomplete, they remain
+here.
+
+* It appears that all visit_ methods can be turned into before_
+ methods (and renamed thereafter).
+
+* Also pass raw text (foo_bar) and not only renderable text (foo{\_}bar).
+ See http://article.gmane.org/gmane.text.docutils.user/2516.
+
+* Try the commands mentioned in
+ <http://groups.google.com/groups?selm=c7opho%248ts%241%40wsc10.lrz-muenchen.de>.
+
+* <http://www.tug.org/applications/pdftex/pdfTeX-FAQ.pdf>::
+
+ 3.1.6. How can I make a document portable to both latex and pdflatex
+ Contributed by: Christian Kumpf
+ Check for the existence of the variable \pdfoutput:
+ \newif\ifpdf
+ \ifx\pdfoutput\undefined
+ \pdffalse % we are not running PDFLaTeX
+ \else
+ \pdfoutput=1 % we are running PDFLaTeX
+ \pdftrue
+ \fi
+ Then use your new variable \ifpdf
+ \ifpdf
+ \usepackage[pdftex]{graphicx}
+ \pdfcompresslevel=9
+ \else
+ \usepackage{graphicx}
+ \fi
+
+* Need to get some simple docinfo field handling. Move language look-up logic to nodes.py?
+ Same for admonitions.
+
+* Footnotes should be placed on the same page as their references.
+ However, there may be multiple references per footnote, so
+ we'll probably need an option and some sophisticated handling for this.
+
+* Make sure we don't break ligatures (and possibly hyphenation) with zealous brace protection.
+ See http://article.gmane.org/gmane.text.docutils.user/2586.
+
+* Tables inside of footnotes have too large vertical margins.
+ Need a "reduced vertical margin" mode, maybe?
+
+* There's not enough vertical space between fields::
+
+ :Name:
+ Paragraph.
+ Paragraph.
+ :Name:
+ Paragraph.
+ Paragraph.
+
+* Another edge case with too much vertical margin::
+
+ +--------------------+
+ | :Name: |
+ | Paragraph. |
+ | :Name: |
+ | Paragraph. |
+ +--------------------+
+
+* We want to support underscores in citation references; they need to
+  appear unescaped.
+
+* If there's raw code between paragraphs, it gets appended to the last
+ paragraph unless we do ``\par``. That's a little bit ugly. Can we
+ fix this? (Change paragraph handling maybe?)
+
+* Test that, say, all Latin 1 characters are renderable. (And
+ possibly test more characters.)
diff --git a/python/helpers/docutils/writers/newlatex2e/tests.txt b/python/helpers/docutils/writers/newlatex2e/tests.txt
new file mode 100644
index 0000000..b456b89
--- /dev/null
+++ b/python/helpers/docutils/writers/newlatex2e/tests.txt
@@ -0,0 +1,7 @@
+These tests will have to be migrated to a test suite (either
+functional tests or unit tests).
+
+::
+
+ Newline followed by
+ *an asterisk.
diff --git a/python/helpers/docutils/writers/newlatex2e/unicode_map.py b/python/helpers/docutils/writers/newlatex2e/unicode_map.py
new file mode 100644
index 0000000..c0d63b6
--- /dev/null
+++ b/python/helpers/docutils/writers/newlatex2e/unicode_map.py
@@ -0,0 +1,2369 @@
+# $Id$
+# Author: Lea Wiemann <[email protected]>
+# Copyright: This file has been placed in the public domain.
+
+# This is a mapping of Unicode characters to LaTeX equivalents.
+# The information has been extracted from
+# <http://www.w3.org/2003/entities/xml/unicode.xml>, written by
+# David Carlisle and Sebastian Rahtz.
+#
+# The extraction has been done by the "create_unimap.py" script
+# located at <http://docutils.sf.net/tools/dev/create_unimap.py>.
+
+unicode_map = {u'\xa0': '$~$',
+u'\xa1': '{\\textexclamdown}',
+u'\xa2': '{\\textcent}',
+u'\xa3': '{\\textsterling}',
+u'\xa4': '{\\textcurrency}',
+u'\xa5': '{\\textyen}',
+u'\xa6': '{\\textbrokenbar}',
+u'\xa7': '{\\textsection}',
+u'\xa8': '{\\textasciidieresis}',
+u'\xa9': '{\\textcopyright}',
+u'\xaa': '{\\textordfeminine}',
+u'\xab': '{\\guillemotleft}',
+u'\xac': '$\\lnot$',
+u'\xad': '$\\-$',
+u'\xae': '{\\textregistered}',
+u'\xaf': '{\\textasciimacron}',
+u'\xb0': '{\\textdegree}',
+u'\xb1': '$\\pm$',
+u'\xb2': '${^2}$',
+u'\xb3': '${^3}$',
+u'\xb4': '{\\textasciiacute}',
+u'\xb5': '$\\mathrm{\\mu}$',
+u'\xb6': '{\\textparagraph}',
+u'\xb7': '$\\cdot$',
+u'\xb8': '{\\c{}}',
+u'\xb9': '${^1}$',
+u'\xba': '{\\textordmasculine}',
+u'\xbb': '{\\guillemotright}',
+u'\xbc': '{\\textonequarter}',
+u'\xbd': '{\\textonehalf}',
+u'\xbe': '{\\textthreequarters}',
+u'\xbf': '{\\textquestiondown}',
+u'\xc0': '{\\`{A}}',
+u'\xc1': "{\\'{A}}",
+u'\xc2': '{\\^{A}}',
+u'\xc3': '{\\~{A}}',
+u'\xc4': '{\\"{A}}',
+u'\xc5': '{\\AA}',
+u'\xc6': '{\\AE}',
+u'\xc7': '{\\c{C}}',
+u'\xc8': '{\\`{E}}',
+u'\xc9': "{\\'{E}}",
+u'\xca': '{\\^{E}}',
+u'\xcb': '{\\"{E}}',
+u'\xcc': '{\\`{I}}',
+u'\xcd': "{\\'{I}}",
+u'\xce': '{\\^{I}}',
+u'\xcf': '{\\"{I}}',
+u'\xd0': '{\\DH}',
+u'\xd1': '{\\~{N}}',
+u'\xd2': '{\\`{O}}',
+u'\xd3': "{\\'{O}}",
+u'\xd4': '{\\^{O}}',
+u'\xd5': '{\\~{O}}',
+u'\xd6': '{\\"{O}}',
+u'\xd7': '{\\texttimes}',
+u'\xd8': '{\\O}',
+u'\xd9': '{\\`{U}}',
+u'\xda': "{\\'{U}}",
+u'\xdb': '{\\^{U}}',
+u'\xdc': '{\\"{U}}',
+u'\xdd': "{\\'{Y}}",
+u'\xde': '{\\TH}',
+u'\xdf': '{\\ss}',
+u'\xe0': '{\\`{a}}',
+u'\xe1': "{\\'{a}}",
+u'\xe2': '{\\^{a}}',
+u'\xe3': '{\\~{a}}',
+u'\xe4': '{\\"{a}}',
+u'\xe5': '{\\aa}',
+u'\xe6': '{\\ae}',
+u'\xe7': '{\\c{c}}',
+u'\xe8': '{\\`{e}}',
+u'\xe9': "{\\'{e}}",
+u'\xea': '{\\^{e}}',
+u'\xeb': '{\\"{e}}',
+u'\xec': '{\\`{\\i}}',
+u'\xed': "{\\'{\\i}}",
+u'\xee': '{\\^{\\i}}',
+u'\xef': '{\\"{\\i}}',
+u'\xf0': '{\\dh}',
+u'\xf1': '{\\~{n}}',
+u'\xf2': '{\\`{o}}',
+u'\xf3': "{\\'{o}}",
+u'\xf4': '{\\^{o}}',
+u'\xf5': '{\\~{o}}',
+u'\xf6': '{\\"{o}}',
+u'\xf7': '$\\div$',
+u'\xf8': '{\\o}',
+u'\xf9': '{\\`{u}}',
+u'\xfa': "{\\'{u}}",
+u'\xfb': '{\\^{u}}',
+u'\xfc': '{\\"{u}}',
+u'\xfd': "{\\'{y}}",
+u'\xfe': '{\\th}',
+u'\xff': '{\\"{y}}',
+u'\u0100': '{\\={A}}',
+u'\u0101': '{\\={a}}',
+u'\u0102': '{\\u{A}}',
+u'\u0103': '{\\u{a}}',
+u'\u0104': '{\\k{A}}',
+u'\u0105': '{\\k{a}}',
+u'\u0106': "{\\'{C}}",
+u'\u0107': "{\\'{c}}",
+u'\u0108': '{\\^{C}}',
+u'\u0109': '{\\^{c}}',
+u'\u010a': '{\\.{C}}',
+u'\u010b': '{\\.{c}}',
+u'\u010c': '{\\v{C}}',
+u'\u010d': '{\\v{c}}',
+u'\u010e': '{\\v{D}}',
+u'\u010f': '{\\v{d}}',
+u'\u0110': '{\\DJ}',
+u'\u0111': '{\\dj}',
+u'\u0112': '{\\={E}}',
+u'\u0113': '{\\={e}}',
+u'\u0114': '{\\u{E}}',
+u'\u0115': '{\\u{e}}',
+u'\u0116': '{\\.{E}}',
+u'\u0117': '{\\.{e}}',
+u'\u0118': '{\\k{E}}',
+u'\u0119': '{\\k{e}}',
+u'\u011a': '{\\v{E}}',
+u'\u011b': '{\\v{e}}',
+u'\u011c': '{\\^{G}}',
+u'\u011d': '{\\^{g}}',
+u'\u011e': '{\\u{G}}',
+u'\u011f': '{\\u{g}}',
+u'\u0120': '{\\.{G}}',
+u'\u0121': '{\\.{g}}',
+u'\u0122': '{\\c{G}}',
+u'\u0123': '{\\c{g}}',
+u'\u0124': '{\\^{H}}',
+u'\u0125': '{\\^{h}}',
+u'\u0126': '{{\\fontencoding{LELA}\\selectfont\\char40}}',
+u'\u0127': '$\\Elzxh$',
+u'\u0128': '{\\~{I}}',
+u'\u0129': '{\\~{\\i}}',
+u'\u012a': '{\\={I}}',
+u'\u012b': '{\\={\\i}}',
+u'\u012c': '{\\u{I}}',
+u'\u012d': '{\\u{\\i}}',
+u'\u012e': '{\\k{I}}',
+u'\u012f': '{\\k{i}}',
+u'\u0130': '{\\.{I}}',
+u'\u0131': '{\\i}',
+u'\u0132': '{IJ}',
+u'\u0133': '{ij}',
+u'\u0134': '{\\^{J}}',
+u'\u0135': '{\\^{\\j}}',
+u'\u0136': '{\\c{K}}',
+u'\u0137': '{\\c{k}}',
+u'\u0138': '{{\\fontencoding{LELA}\\selectfont\\char91}}',
+u'\u0139': "{\\'{L}}",
+u'\u013a': "{\\'{l}}",
+u'\u013b': '{\\c{L}}',
+u'\u013c': '{\\c{l}}',
+u'\u013d': '{\\v{L}}',
+u'\u013e': '{\\v{l}}',
+u'\u013f': '{{\\fontencoding{LELA}\\selectfont\\char201}}',
+u'\u0140': '{{\\fontencoding{LELA}\\selectfont\\char202}}',
+u'\u0141': '{\\L}',
+u'\u0142': '{\\l}',
+u'\u0143': "{\\'{N}}",
+u'\u0144': "{\\'{n}}",
+u'\u0145': '{\\c{N}}',
+u'\u0146': '{\\c{n}}',
+u'\u0147': '{\\v{N}}',
+u'\u0148': '{\\v{n}}',
+u'\u0149': "{'n}",
+u'\u014a': '{\\NG}',
+u'\u014b': '{\\ng}',
+u'\u014c': '{\\={O}}',
+u'\u014d': '{\\={o}}',
+u'\u014e': '{\\u{O}}',
+u'\u014f': '{\\u{o}}',
+u'\u0150': '{\\H{O}}',
+u'\u0151': '{\\H{o}}',
+u'\u0152': '{\\OE}',
+u'\u0153': '{\\oe}',
+u'\u0154': "{\\'{R}}",
+u'\u0155': "{\\'{r}}",
+u'\u0156': '{\\c{R}}',
+u'\u0157': '{\\c{r}}',
+u'\u0158': '{\\v{R}}',
+u'\u0159': '{\\v{r}}',
+u'\u015a': "{\\'{S}}",
+u'\u015b': "{\\'{s}}",
+u'\u015c': '{\\^{S}}',
+u'\u015d': '{\\^{s}}',
+u'\u015e': '{\\c{S}}',
+u'\u015f': '{\\c{s}}',
+u'\u0160': '{\\v{S}}',
+u'\u0161': '{\\v{s}}',
+u'\u0162': '{\\c{T}}',
+u'\u0163': '{\\c{t}}',
+u'\u0164': '{\\v{T}}',
+u'\u0165': '{\\v{t}}',
+u'\u0166': '{{\\fontencoding{LELA}\\selectfont\\char47}}',
+u'\u0167': '{{\\fontencoding{LELA}\\selectfont\\char63}}',
+u'\u0168': '{\\~{U}}',
+u'\u0169': '{\\~{u}}',
+u'\u016a': '{\\={U}}',
+u'\u016b': '{\\={u}}',
+u'\u016c': '{\\u{U}}',
+u'\u016d': '{\\u{u}}',
+u'\u016e': '{\\r{U}}',
+u'\u016f': '{\\r{u}}',
+u'\u0170': '{\\H{U}}',
+u'\u0171': '{\\H{u}}',
+u'\u0172': '{\\k{U}}',
+u'\u0173': '{\\k{u}}',
+u'\u0174': '{\\^{W}}',
+u'\u0175': '{\\^{w}}',
+u'\u0176': '{\\^{Y}}',
+u'\u0177': '{\\^{y}}',
+u'\u0178': '{\\"{Y}}',
+u'\u0179': "{\\'{Z}}",
+u'\u017a': "{\\'{z}}",
+u'\u017b': '{\\.{Z}}',
+u'\u017c': '{\\.{z}}',
+u'\u017d': '{\\v{Z}}',
+u'\u017e': '{\\v{z}}',
+u'\u0192': '$f$',
+u'\u0195': '{\\texthvlig}',
+u'\u019e': '{\\textnrleg}',
+u'\u01aa': '$\\eth$',
+u'\u01ba': '{{\\fontencoding{LELA}\\selectfont\\char195}}',
+u'\u01c2': '{\\textdoublepipe}',
+u'\u01f5': "{\\'{g}}",
+u'\u0250': '$\\Elztrna$',
+u'\u0252': '$\\Elztrnsa$',
+u'\u0254': '$\\Elzopeno$',
+u'\u0256': '$\\Elzrtld$',
+u'\u0258': '{{\\fontencoding{LEIP}\\selectfont\\char61}}',
+u'\u0259': '$\\Elzschwa$',
+u'\u025b': '$\\varepsilon$',
+u'\u0261': '{g}',
+u'\u0263': '$\\Elzpgamma$',
+u'\u0264': '$\\Elzpbgam$',
+u'\u0265': '$\\Elztrnh$',
+u'\u026c': '$\\Elzbtdl$',
+u'\u026d': '$\\Elzrtll$',
+u'\u026f': '$\\Elztrnm$',
+u'\u0270': '$\\Elztrnmlr$',
+u'\u0271': '$\\Elzltlmr$',
+u'\u0272': '{\\Elzltln}',
+u'\u0273': '$\\Elzrtln$',
+u'\u0277': '$\\Elzclomeg$',
+u'\u0278': '{\\textphi}',
+u'\u0279': '$\\Elztrnr$',
+u'\u027a': '$\\Elztrnrl$',
+u'\u027b': '$\\Elzrttrnr$',
+u'\u027c': '$\\Elzrl$',
+u'\u027d': '$\\Elzrtlr$',
+u'\u027e': '$\\Elzfhr$',
+u'\u027f': '{{\\fontencoding{LEIP}\\selectfont\\char202}}',
+u'\u0282': '$\\Elzrtls$',
+u'\u0283': '$\\Elzesh$',
+u'\u0287': '$\\Elztrnt$',
+u'\u0288': '$\\Elzrtlt$',
+u'\u028a': '$\\Elzpupsil$',
+u'\u028b': '$\\Elzpscrv$',
+u'\u028c': '$\\Elzinvv$',
+u'\u028d': '$\\Elzinvw$',
+u'\u028e': '$\\Elztrny$',
+u'\u0290': '$\\Elzrtlz$',
+u'\u0292': '$\\Elzyogh$',
+u'\u0294': '$\\Elzglst$',
+u'\u0295': '$\\Elzreglst$',
+u'\u0296': '$\\Elzinglst$',
+u'\u029e': '{\\textturnk}',
+u'\u02a4': '$\\Elzdyogh$',
+u'\u02a7': '$\\Elztesh$',
+u'\u02bc': "{'}",
+u'\u02c7': '{\\textasciicaron}',
+u'\u02c8': '$\\Elzverts$',
+u'\u02cc': '$\\Elzverti$',
+u'\u02d0': '$\\Elzlmrk$',
+u'\u02d1': '$\\Elzhlmrk$',
+u'\u02d2': '$\\Elzsbrhr$',
+u'\u02d3': '$\\Elzsblhr$',
+u'\u02d4': '$\\Elzrais$',
+u'\u02d5': '$\\Elzlow$',
+u'\u02d8': '{\\textasciibreve}',
+u'\u02d9': '{\\textperiodcentered}',
+u'\u02da': '{\\r{}}',
+u'\u02db': '{\\k{}}',
+u'\u02dc': '{\\texttildelow}',
+u'\u02dd': '{\\H{}}',
+u'\u02e5': '{\\tone{55}}',
+u'\u02e6': '{\\tone{44}}',
+u'\u02e7': '{\\tone{33}}',
+u'\u02e8': '{\\tone{22}}',
+u'\u02e9': '{\\tone{11}}',
+u'\u0300': '{\\`}',
+u'\u0301': "{\\'}",
+u'\u0302': '{\\^}',
+u'\u0303': '{\\~}',
+u'\u0304': '{\\=}',
+u'\u0306': '{\\u}',
+u'\u0307': '{\\.}',
+u'\u0308': '{\\"}',
+u'\u030a': '{\\r}',
+u'\u030b': '{\\H}',
+u'\u030c': '{\\v}',
+u'\u030f': '{\\cyrchar\\C}',
+u'\u0311': '{{\\fontencoding{LECO}\\selectfont\\char177}}',
+u'\u0318': '{{\\fontencoding{LECO}\\selectfont\\char184}}',
+u'\u0319': '{{\\fontencoding{LECO}\\selectfont\\char185}}',
+u'\u0321': '$\\Elzpalh$',
+u'\u0322': '{\\Elzrh}',
+u'\u0327': '{\\c}',
+u'\u0328': '{\\k}',
+u'\u032a': '$\\Elzsbbrg$',
+u'\u032b': '{{\\fontencoding{LECO}\\selectfont\\char203}}',
+u'\u032f': '{{\\fontencoding{LECO}\\selectfont\\char207}}',
+u'\u0335': '{\\Elzxl}',
+u'\u0336': '{\\Elzbar}',
+u'\u0337': '{{\\fontencoding{LECO}\\selectfont\\char215}}',
+u'\u0338': '{{\\fontencoding{LECO}\\selectfont\\char216}}',
+u'\u033a': '{{\\fontencoding{LECO}\\selectfont\\char218}}',
+u'\u033b': '{{\\fontencoding{LECO}\\selectfont\\char219}}',
+u'\u033c': '{{\\fontencoding{LECO}\\selectfont\\char220}}',
+u'\u033d': '{{\\fontencoding{LECO}\\selectfont\\char221}}',
+u'\u0361': '{{\\fontencoding{LECO}\\selectfont\\char225}}',
+u'\u0386': "{\\'{A}}",
+u'\u0388': "{\\'{E}}",
+u'\u0389': "{\\'{H}}",
+u'\u038a': "{\\'{}{I}}",
+u'\u038c': "{\\'{}O}",
+u'\u038e': "$\\mathrm{'Y}$",
+u'\u038f': "$\\mathrm{'\\Omega}$",
+u'\u0390': '$\\acute{\\ddot{\\iota}}$',
+u'\u0391': '$\\Alpha$',
+u'\u0392': '$\\Beta$',
+u'\u0393': '$\\Gamma$',
+u'\u0394': '$\\Delta$',
+u'\u0395': '$\\Epsilon$',
+u'\u0396': '$\\Zeta$',
+u'\u0397': '$\\Eta$',
+u'\u0398': '$\\Theta$',
+u'\u0399': '$\\Iota$',
+u'\u039a': '$\\Kappa$',
+u'\u039b': '$\\Lambda$',
+u'\u039c': '$M$',
+u'\u039d': '$N$',
+u'\u039e': '$\\Xi$',
+u'\u039f': '$O$',
+u'\u03a0': '$\\Pi$',
+u'\u03a1': '$\\Rho$',
+u'\u03a3': '$\\Sigma$',
+u'\u03a4': '$\\Tau$',
+u'\u03a5': '$\\Upsilon$',
+u'\u03a6': '$\\Phi$',
+u'\u03a7': '$\\Chi$',
+u'\u03a8': '$\\Psi$',
+u'\u03a9': '$\\Omega$',
+u'\u03aa': '$\\mathrm{\\ddot{I}}$',
+u'\u03ab': '$\\mathrm{\\ddot{Y}}$',
+u'\u03ac': "{\\'{$\\alpha$}}",
+u'\u03ad': '$\\acute{\\epsilon}$',
+u'\u03ae': '$\\acute{\\eta}$',
+u'\u03af': '$\\acute{\\iota}$',
+u'\u03b0': '$\\acute{\\ddot{\\upsilon}}$',
+u'\u03b1': '$\\alpha$',
+u'\u03b2': '$\\beta$',
+u'\u03b3': '$\\gamma$',
+u'\u03b4': '$\\delta$',
+u'\u03b5': '$\\epsilon$',
+u'\u03b6': '$\\zeta$',
+u'\u03b7': '$\\eta$',
+u'\u03b8': '{\\texttheta}',
+u'\u03b9': '$\\iota$',
+u'\u03ba': '$\\kappa$',
+u'\u03bb': '$\\lambda$',
+u'\u03bc': '$\\mu$',
+u'\u03bd': '$\\nu$',
+u'\u03be': '$\\xi$',
+u'\u03bf': '$o$',
+u'\u03c0': '$\\pi$',
+u'\u03c1': '$\\rho$',
+u'\u03c2': '$\\varsigma$',
+u'\u03c3': '$\\sigma$',
+u'\u03c4': '$\\tau$',
+u'\u03c5': '$\\upsilon$',
+u'\u03c6': '$\\varphi$',
+u'\u03c7': '$\\chi$',
+u'\u03c8': '$\\psi$',
+u'\u03c9': '$\\omega$',
+u'\u03ca': '$\\ddot{\\iota}$',
+u'\u03cb': '$\\ddot{\\upsilon}$',
+u'\u03cc': "{\\'{o}}",
+u'\u03cd': '$\\acute{\\upsilon}$',
+u'\u03ce': '$\\acute{\\omega}$',
+u'\u03d0': '{\\Pisymbol{ppi022}{87}}',
+u'\u03d1': '{\\textvartheta}',
+u'\u03d2': '$\\Upsilon$',
+u'\u03d5': '$\\phi$',
+u'\u03d6': '$\\varpi$',
+u'\u03da': '$\\Stigma$',
+u'\u03dc': '$\\Digamma$',
+u'\u03dd': '$\\digamma$',
+u'\u03de': '$\\Koppa$',
+u'\u03e0': '$\\Sampi$',
+u'\u03f0': '$\\varkappa$',
+u'\u03f1': '$\\varrho$',
+u'\u03f4': '{\\textTheta}',
+u'\u03f6': '$\\backepsilon$',
+u'\u0401': '{\\cyrchar\\CYRYO}',
+u'\u0402': '{\\cyrchar\\CYRDJE}',
+u'\u0403': "{\\cyrchar{\\'\\CYRG}}",
+u'\u0404': '{\\cyrchar\\CYRIE}',
+u'\u0405': '{\\cyrchar\\CYRDZE}',
+u'\u0406': '{\\cyrchar\\CYRII}',
+u'\u0407': '{\\cyrchar\\CYRYI}',
+u'\u0408': '{\\cyrchar\\CYRJE}',
+u'\u0409': '{\\cyrchar\\CYRLJE}',
+u'\u040a': '{\\cyrchar\\CYRNJE}',
+u'\u040b': '{\\cyrchar\\CYRTSHE}',
+u'\u040c': "{\\cyrchar{\\'\\CYRK}}",
+u'\u040e': '{\\cyrchar\\CYRUSHRT}',
+u'\u040f': '{\\cyrchar\\CYRDZHE}',
+u'\u0410': '{\\cyrchar\\CYRA}',
+u'\u0411': '{\\cyrchar\\CYRB}',
+u'\u0412': '{\\cyrchar\\CYRV}',
+u'\u0413': '{\\cyrchar\\CYRG}',
+u'\u0414': '{\\cyrchar\\CYRD}',
+u'\u0415': '{\\cyrchar\\CYRE}',
+u'\u0416': '{\\cyrchar\\CYRZH}',
+u'\u0417': '{\\cyrchar\\CYRZ}',
+u'\u0418': '{\\cyrchar\\CYRI}',
+u'\u0419': '{\\cyrchar\\CYRISHRT}',
+u'\u041a': '{\\cyrchar\\CYRK}',
+u'\u041b': '{\\cyrchar\\CYRL}',
+u'\u041c': '{\\cyrchar\\CYRM}',
+u'\u041d': '{\\cyrchar\\CYRN}',
+u'\u041e': '{\\cyrchar\\CYRO}',
+u'\u041f': '{\\cyrchar\\CYRP}',
+u'\u0420': '{\\cyrchar\\CYRR}',
+u'\u0421': '{\\cyrchar\\CYRS}',
+u'\u0422': '{\\cyrchar\\CYRT}',
+u'\u0423': '{\\cyrchar\\CYRU}',
+u'\u0424': '{\\cyrchar\\CYRF}',
+u'\u0425': '{\\cyrchar\\CYRH}',
+u'\u0426': '{\\cyrchar\\CYRC}',
+u'\u0427': '{\\cyrchar\\CYRCH}',
+u'\u0428': '{\\cyrchar\\CYRSH}',
+u'\u0429': '{\\cyrchar\\CYRSHCH}',
+u'\u042a': '{\\cyrchar\\CYRHRDSN}',
+u'\u042b': '{\\cyrchar\\CYRERY}',
+u'\u042c': '{\\cyrchar\\CYRSFTSN}',
+u'\u042d': '{\\cyrchar\\CYREREV}',
+u'\u042e': '{\\cyrchar\\CYRYU}',
+u'\u042f': '{\\cyrchar\\CYRYA}',
+u'\u0430': '{\\cyrchar\\cyra}',
+u'\u0431': '{\\cyrchar\\cyrb}',
+u'\u0432': '{\\cyrchar\\cyrv}',
+u'\u0433': '{\\cyrchar\\cyrg}',
+u'\u0434': '{\\cyrchar\\cyrd}',
+u'\u0435': '{\\cyrchar\\cyre}',
+u'\u0436': '{\\cyrchar\\cyrzh}',
+u'\u0437': '{\\cyrchar\\cyrz}',
+u'\u0438': '{\\cyrchar\\cyri}',
+u'\u0439': '{\\cyrchar\\cyrishrt}',
+u'\u043a': '{\\cyrchar\\cyrk}',
+u'\u043b': '{\\cyrchar\\cyrl}',
+u'\u043c': '{\\cyrchar\\cyrm}',
+u'\u043d': '{\\cyrchar\\cyrn}',
+u'\u043e': '{\\cyrchar\\cyro}',
+u'\u043f': '{\\cyrchar\\cyrp}',
+u'\u0440': '{\\cyrchar\\cyrr}',
+u'\u0441': '{\\cyrchar\\cyrs}',
+u'\u0442': '{\\cyrchar\\cyrt}',
+u'\u0443': '{\\cyrchar\\cyru}',
+u'\u0444': '{\\cyrchar\\cyrf}',
+u'\u0445': '{\\cyrchar\\cyrh}',
+u'\u0446': '{\\cyrchar\\cyrc}',
+u'\u0447': '{\\cyrchar\\cyrch}',
+u'\u0448': '{\\cyrchar\\cyrsh}',
+u'\u0449': '{\\cyrchar\\cyrshch}',
+u'\u044a': '{\\cyrchar\\cyrhrdsn}',
+u'\u044b': '{\\cyrchar\\cyrery}',
+u'\u044c': '{\\cyrchar\\cyrsftsn}',
+u'\u044d': '{\\cyrchar\\cyrerev}',
+u'\u044e': '{\\cyrchar\\cyryu}',
+u'\u044f': '{\\cyrchar\\cyrya}',
+u'\u0451': '{\\cyrchar\\cyryo}',
+u'\u0452': '{\\cyrchar\\cyrdje}',
+u'\u0453': "{\\cyrchar{\\'\\cyrg}}",
+u'\u0454': '{\\cyrchar\\cyrie}',
+u'\u0455': '{\\cyrchar\\cyrdze}',
+u'\u0456': '{\\cyrchar\\cyrii}',
+u'\u0457': '{\\cyrchar\\cyryi}',
+u'\u0458': '{\\cyrchar\\cyrje}',
+u'\u0459': '{\\cyrchar\\cyrlje}',
+u'\u045a': '{\\cyrchar\\cyrnje}',
+u'\u045b': '{\\cyrchar\\cyrtshe}',
+u'\u045c': "{\\cyrchar{\\'\\cyrk}}",
+u'\u045e': '{\\cyrchar\\cyrushrt}',
+u'\u045f': '{\\cyrchar\\cyrdzhe}',
+u'\u0460': '{\\cyrchar\\CYROMEGA}',
+u'\u0461': '{\\cyrchar\\cyromega}',
+u'\u0462': '{\\cyrchar\\CYRYAT}',
+u'\u0464': '{\\cyrchar\\CYRIOTE}',
+u'\u0465': '{\\cyrchar\\cyriote}',
+u'\u0466': '{\\cyrchar\\CYRLYUS}',
+u'\u0467': '{\\cyrchar\\cyrlyus}',
+u'\u0468': '{\\cyrchar\\CYRIOTLYUS}',
+u'\u0469': '{\\cyrchar\\cyriotlyus}',
+u'\u046a': '{\\cyrchar\\CYRBYUS}',
+u'\u046c': '{\\cyrchar\\CYRIOTBYUS}',
+u'\u046d': '{\\cyrchar\\cyriotbyus}',
+u'\u046e': '{\\cyrchar\\CYRKSI}',
+u'\u046f': '{\\cyrchar\\cyrksi}',
+u'\u0470': '{\\cyrchar\\CYRPSI}',
+u'\u0471': '{\\cyrchar\\cyrpsi}',
+u'\u0472': '{\\cyrchar\\CYRFITA}',
+u'\u0474': '{\\cyrchar\\CYRIZH}',
+u'\u0478': '{\\cyrchar\\CYRUK}',
+u'\u0479': '{\\cyrchar\\cyruk}',
+u'\u047a': '{\\cyrchar\\CYROMEGARND}',
+u'\u047b': '{\\cyrchar\\cyromegarnd}',
+u'\u047c': '{\\cyrchar\\CYROMEGATITLO}',
+u'\u047d': '{\\cyrchar\\cyromegatitlo}',
+u'\u047e': '{\\cyrchar\\CYROT}',
+u'\u047f': '{\\cyrchar\\cyrot}',
+u'\u0480': '{\\cyrchar\\CYRKOPPA}',
+u'\u0481': '{\\cyrchar\\cyrkoppa}',
+u'\u0482': '{\\cyrchar\\cyrthousands}',
+u'\u0488': '{\\cyrchar\\cyrhundredthousands}',
+u'\u0489': '{\\cyrchar\\cyrmillions}',
+u'\u048c': '{\\cyrchar\\CYRSEMISFTSN}',
+u'\u048d': '{\\cyrchar\\cyrsemisftsn}',
+u'\u048e': '{\\cyrchar\\CYRRTICK}',
+u'\u048f': '{\\cyrchar\\cyrrtick}',
+u'\u0490': '{\\cyrchar\\CYRGUP}',
+u'\u0491': '{\\cyrchar\\cyrgup}',
+u'\u0492': '{\\cyrchar\\CYRGHCRS}',
+u'\u0493': '{\\cyrchar\\cyrghcrs}',
+u'\u0494': '{\\cyrchar\\CYRGHK}',
+u'\u0495': '{\\cyrchar\\cyrghk}',
+u'\u0496': '{\\cyrchar\\CYRZHDSC}',
+u'\u0497': '{\\cyrchar\\cyrzhdsc}',
+u'\u0498': '{\\cyrchar\\CYRZDSC}',
+u'\u0499': '{\\cyrchar\\cyrzdsc}',
+u'\u049a': '{\\cyrchar\\CYRKDSC}',
+u'\u049b': '{\\cyrchar\\cyrkdsc}',
+u'\u049c': '{\\cyrchar\\CYRKVCRS}',
+u'\u049d': '{\\cyrchar\\cyrkvcrs}',
+u'\u049e': '{\\cyrchar\\CYRKHCRS}',
+u'\u049f': '{\\cyrchar\\cyrkhcrs}',
+u'\u04a0': '{\\cyrchar\\CYRKBEAK}',
+u'\u04a1': '{\\cyrchar\\cyrkbeak}',
+u'\u04a2': '{\\cyrchar\\CYRNDSC}',
+u'\u04a3': '{\\cyrchar\\cyrndsc}',
+u'\u04a4': '{\\cyrchar\\CYRNG}',
+u'\u04a5': '{\\cyrchar\\cyrng}',
+u'\u04a6': '{\\cyrchar\\CYRPHK}',
+u'\u04a7': '{\\cyrchar\\cyrphk}',
+u'\u04a8': '{\\cyrchar\\CYRABHHA}',
+u'\u04a9': '{\\cyrchar\\cyrabhha}',
+u'\u04aa': '{\\cyrchar\\CYRSDSC}',
+u'\u04ab': '{\\cyrchar\\cyrsdsc}',
+u'\u04ac': '{\\cyrchar\\CYRTDSC}',
+u'\u04ad': '{\\cyrchar\\cyrtdsc}',
+u'\u04ae': '{\\cyrchar\\CYRY}',
+u'\u04af': '{\\cyrchar\\cyry}',
+u'\u04b0': '{\\cyrchar\\CYRYHCRS}',
+u'\u04b1': '{\\cyrchar\\cyryhcrs}',
+u'\u04b2': '{\\cyrchar\\CYRHDSC}',
+u'\u04b3': '{\\cyrchar\\cyrhdsc}',
+u'\u04b4': '{\\cyrchar\\CYRTETSE}',
+u'\u04b5': '{\\cyrchar\\cyrtetse}',
+u'\u04b6': '{\\cyrchar\\CYRCHRDSC}',
+u'\u04b7': '{\\cyrchar\\cyrchrdsc}',
+u'\u04b8': '{\\cyrchar\\CYRCHVCRS}',
+u'\u04b9': '{\\cyrchar\\cyrchvcrs}',
+u'\u04ba': '{\\cyrchar\\CYRSHHA}',
+u'\u04bb': '{\\cyrchar\\cyrshha}',
+u'\u04bc': '{\\cyrchar\\CYRABHCH}',
+u'\u04bd': '{\\cyrchar\\cyrabhch}',
+u'\u04be': '{\\cyrchar\\CYRABHCHDSC}',
+u'\u04bf': '{\\cyrchar\\cyrabhchdsc}',
+u'\u04c0': '{\\cyrchar\\CYRpalochka}',
+u'\u04c3': '{\\cyrchar\\CYRKHK}',
+u'\u04c4': '{\\cyrchar\\cyrkhk}',
+u'\u04c7': '{\\cyrchar\\CYRNHK}',
+u'\u04c8': '{\\cyrchar\\cyrnhk}',
+u'\u04cb': '{\\cyrchar\\CYRCHLDSC}',
+u'\u04cc': '{\\cyrchar\\cyrchldsc}',
+u'\u04d4': '{\\cyrchar\\CYRAE}',
+u'\u04d5': '{\\cyrchar\\cyrae}',
+u'\u04d8': '{\\cyrchar\\CYRSCHWA}',
+u'\u04d9': '{\\cyrchar\\cyrschwa}',
+u'\u04e0': '{\\cyrchar\\CYRABHDZE}',
+u'\u04e1': '{\\cyrchar\\cyrabhdze}',
+u'\u04e8': '{\\cyrchar\\CYROTLD}',
+u'\u04e9': '{\\cyrchar\\cyrotld}',
+u'\u2002': '{\\hspace{0.6em}}',
+u'\u2003': '{\\hspace{1em}}',
+u'\u2004': '{\\hspace{0.33em}}',
+u'\u2005': '{\\hspace{0.25em}}',
+u'\u2006': '{\\hspace{0.166em}}',
+u'\u2007': '{\\hphantom{0}}',
+u'\u2008': '{\\hphantom{,}}',
+u'\u2009': '{\\hspace{0.167em}}',
+u'\u200a': '$\\mkern1mu$',
+u'\u2010': '{-}',
+u'\u2013': '{\\textendash}',
+u'\u2014': '{\\textemdash}',
+u'\u2015': '{\\rule{1em}{1pt}}',
+u'\u2016': '$\\Vert$',
+u'\u2018': '{`}',
+u'\u2019': "{'}",
+u'\u201a': '{,}',
+u'\u201b': '$\\Elzreapos$',
+u'\u201c': '{\\textquotedblleft}',
+u'\u201d': '{\\textquotedblright}',
+u'\u201e': '{,,}',
+u'\u2020': '{\\textdagger}',
+u'\u2021': '{\\textdaggerdbl}',
+u'\u2022': '{\\textbullet}',
+u'\u2024': '{.}',
+u'\u2025': '{..}',
+u'\u2026': '{\\ldots}',
+u'\u2030': '{\\textperthousand}',
+u'\u2031': '{\\textpertenthousand}',
+u'\u2032': "${'}$",
+u'\u2033': "${''}$",
+u'\u2034': "${'''}$",
+u'\u2035': '$\\backprime$',
+u'\u2039': '{\\guilsinglleft}',
+u'\u203a': '{\\guilsinglright}',
+u'\u2057': "$''''$",
+u'\u205f': '{\\mkern4mu}',
+u'\u2060': '{\\nolinebreak}',
+u'\u20a7': '{\\ensuremath{\\Elzpes}}',
+u'\u20ac': '{\\mbox{\\texteuro}}',
+u'\u20db': '$\\dddot$',
+u'\u20dc': '$\\ddddot$',
+u'\u2102': '$\\mathbb{C}$',
+u'\u210a': '{\\mathscr{g}}',
+u'\u210b': '$\\mathscr{H}$',
+u'\u210c': '$\\mathfrak{H}$',
+u'\u210d': '$\\mathbb{H}$',
+u'\u210f': '$\\hslash$',
+u'\u2110': '$\\mathscr{I}$',
+u'\u2111': '$\\mathfrak{I}$',
+u'\u2112': '$\\mathscr{L}$',
+u'\u2113': '$\\mathscr{l}$',
+u'\u2115': '$\\mathbb{N}$',
+u'\u2116': '{\\cyrchar\\textnumero}',
+u'\u2118': '$\\wp$',
+u'\u2119': '$\\mathbb{P}$',
+u'\u211a': '$\\mathbb{Q}$',
+u'\u211b': '$\\mathscr{R}$',
+u'\u211c': '$\\mathfrak{R}$',
+u'\u211d': '$\\mathbb{R}$',
+u'\u211e': '$\\Elzxrat$',
+u'\u2122': '{\\texttrademark}',
+u'\u2124': '$\\mathbb{Z}$',
+u'\u2126': '$\\Omega$',
+u'\u2127': '$\\mho$',
+u'\u2128': '$\\mathfrak{Z}$',
+u'\u2129': '$\\ElsevierGlyph{2129}$',
+u'\u212b': '{\\AA}',
+u'\u212c': '$\\mathscr{B}$',
+u'\u212d': '$\\mathfrak{C}$',
+u'\u212f': '$\\mathscr{e}$',
+u'\u2130': '$\\mathscr{E}$',
+u'\u2131': '$\\mathscr{F}$',
+u'\u2133': '$\\mathscr{M}$',
+u'\u2134': '$\\mathscr{o}$',
+u'\u2135': '$\\aleph$',
+u'\u2136': '$\\beth$',
+u'\u2137': '$\\gimel$',
+u'\u2138': '$\\daleth$',
+u'\u2153': '$\\textfrac{1}{3}$',
+u'\u2154': '$\\textfrac{2}{3}$',
+u'\u2155': '$\\textfrac{1}{5}$',
+u'\u2156': '$\\textfrac{2}{5}$',
+u'\u2157': '$\\textfrac{3}{5}$',
+u'\u2158': '$\\textfrac{4}{5}$',
+u'\u2159': '$\\textfrac{1}{6}$',
+u'\u215a': '$\\textfrac{5}{6}$',
+u'\u215b': '$\\textfrac{1}{8}$',
+u'\u215c': '$\\textfrac{3}{8}$',
+u'\u215d': '$\\textfrac{5}{8}$',
+u'\u215e': '$\\textfrac{7}{8}$',
+u'\u2190': '$\\leftarrow$',
+u'\u2191': '$\\uparrow$',
+u'\u2192': '$\\rightarrow$',
+u'\u2193': '$\\downarrow$',
+u'\u2194': '$\\leftrightarrow$',
+u'\u2195': '$\\updownarrow$',
+u'\u2196': '$\\nwarrow$',
+u'\u2197': '$\\nearrow$',
+u'\u2198': '$\\searrow$',
+u'\u2199': '$\\swarrow$',
+u'\u219a': '$\\nleftarrow$',
+u'\u219b': '$\\nrightarrow$',
+u'\u219c': '$\\arrowwaveright$',
+u'\u219d': '$\\arrowwaveright$',
+u'\u219e': '$\\twoheadleftarrow$',
+u'\u21a0': '$\\twoheadrightarrow$',
+u'\u21a2': '$\\leftarrowtail$',
+u'\u21a3': '$\\rightarrowtail$',
+u'\u21a6': '$\\mapsto$',
+u'\u21a9': '$\\hookleftarrow$',
+u'\u21aa': '$\\hookrightarrow$',
+u'\u21ab': '$\\looparrowleft$',
+u'\u21ac': '$\\looparrowright$',
+u'\u21ad': '$\\leftrightsquigarrow$',
+u'\u21ae': '$\\nleftrightarrow$',
+u'\u21b0': '$\\Lsh$',
+u'\u21b1': '$\\Rsh$',
+u'\u21b3': '$\\ElsevierGlyph{21B3}$',
+u'\u21b6': '$\\curvearrowleft$',
+u'\u21b7': '$\\curvearrowright$',
+u'\u21ba': '$\\circlearrowleft$',
+u'\u21bb': '$\\circlearrowright$',
+u'\u21bc': '$\\leftharpoonup$',
+u'\u21bd': '$\\leftharpoondown$',
+u'\u21be': '$\\upharpoonright$',
+u'\u21bf': '$\\upharpoonleft$',
+u'\u21c0': '$\\rightharpoonup$',
+u'\u21c1': '$\\rightharpoondown$',
+u'\u21c2': '$\\downharpoonright$',
+u'\u21c3': '$\\downharpoonleft$',
+u'\u21c4': '$\\rightleftarrows$',
+u'\u21c5': '$\\dblarrowupdown$',
+u'\u21c6': '$\\leftrightarrows$',
+u'\u21c7': '$\\leftleftarrows$',
+u'\u21c8': '$\\upuparrows$',
+u'\u21c9': '$\\rightrightarrows$',
+u'\u21ca': '$\\downdownarrows$',
+u'\u21cb': '$\\leftrightharpoons$',
+u'\u21cc': '$\\rightleftharpoons$',
+u'\u21cd': '$\\nLeftarrow$',
+u'\u21ce': '$\\nLeftrightarrow$',
+u'\u21cf': '$\\nRightarrow$',
+u'\u21d0': '$\\Leftarrow$',
+u'\u21d1': '$\\Uparrow$',
+u'\u21d2': '$\\Rightarrow$',
+u'\u21d3': '$\\Downarrow$',
+u'\u21d4': '$\\Leftrightarrow$',
+u'\u21d5': '$\\Updownarrow$',
+u'\u21da': '$\\Lleftarrow$',
+u'\u21db': '$\\Rrightarrow$',
+u'\u21dd': '$\\rightsquigarrow$',
+u'\u21f5': '$\\DownArrowUpArrow$',
+u'\u2200': '$\\forall$',
+u'\u2201': '$\\complement$',
+u'\u2202': '$\\partial$',
+u'\u2203': '$\\exists$',
+u'\u2204': '$\\nexists$',
+u'\u2205': '$\\varnothing$',
+u'\u2207': '$\\nabla$',
+u'\u2208': '$\\in$',
+u'\u2209': '$\\not\\in$',
+u'\u220b': '$\\ni$',
+u'\u220c': '$\\not\\ni$',
+u'\u220f': '$\\prod$',
+u'\u2210': '$\\coprod$',
+u'\u2211': '$\\sum$',
+u'\u2212': '{-}',
+u'\u2213': '$\\mp$',
+u'\u2214': '$\\dotplus$',
+u'\u2216': '$\\setminus$',
+u'\u2217': '${_\\ast}$',
+u'\u2218': '$\\circ$',
+u'\u2219': '$\\bullet$',
+u'\u221a': '$\\surd$',
+u'\u221d': '$\\propto$',
+u'\u221e': '$\\infty$',
+u'\u221f': '$\\rightangle$',
+u'\u2220': '$\\angle$',
+u'\u2221': '$\\measuredangle$',
+u'\u2222': '$\\sphericalangle$',
+u'\u2223': '$\\mid$',
+u'\u2224': '$\\nmid$',
+u'\u2225': '$\\parallel$',
+u'\u2226': '$\\nparallel$',
+u'\u2227': '$\\wedge$',
+u'\u2228': '$\\vee$',
+u'\u2229': '$\\cap$',
+u'\u222a': '$\\cup$',
+u'\u222b': '$\\int$',
+u'\u222c': '$\\int\\!\\int$',
+u'\u222d': '$\\int\\!\\int\\!\\int$',
+u'\u222e': '$\\oint$',
+u'\u222f': '$\\surfintegral$',
+u'\u2230': '$\\volintegral$',
+u'\u2231': '$\\clwintegral$',
+u'\u2232': '$\\ElsevierGlyph{2232}$',
+u'\u2233': '$\\ElsevierGlyph{2233}$',
+u'\u2234': '$\\therefore$',
+u'\u2235': '$\\because$',
+u'\u2237': '$\\Colon$',
+u'\u2238': '$\\ElsevierGlyph{2238}$',
+u'\u223a': '$\\mathbin{{:}\\!\\!{-}\\!\\!{:}}$',
+u'\u223b': '$\\homothetic$',
+u'\u223c': '$\\sim$',
+u'\u223d': '$\\backsim$',
+u'\u223e': '$\\lazysinv$',
+u'\u2240': '$\\wr$',
+u'\u2241': '$\\not\\sim$',
+u'\u2242': '$\\ElsevierGlyph{2242}$',
+u'\u2243': '$\\simeq$',
+u'\u2244': '$\\not\\simeq$',
+u'\u2245': '$\\cong$',
+u'\u2246': '$\\approxnotequal$',
+u'\u2247': '$\\not\\cong$',
+u'\u2248': '$\\approx$',
+u'\u2249': '$\\not\\approx$',
+u'\u224a': '$\\approxeq$',
+u'\u224b': '$\\tildetrpl$',
+u'\u224c': '$\\allequal$',
+u'\u224d': '$\\asymp$',
+u'\u224e': '$\\Bumpeq$',
+u'\u224f': '$\\bumpeq$',
+u'\u2250': '$\\doteq$',
+u'\u2251': '$\\doteqdot$',
+u'\u2252': '$\\fallingdotseq$',
+u'\u2253': '$\\risingdotseq$',
+u'\u2254': '{:=}',
+u'\u2255': '$=:$',
+u'\u2256': '$\\eqcirc$',
+u'\u2257': '$\\circeq$',
+u'\u2259': '$\\estimates$',
+u'\u225a': '$\\ElsevierGlyph{225A}$',
+u'\u225b': '$\\starequal$',
+u'\u225c': '$\\triangleq$',
+u'\u225f': '$\\ElsevierGlyph{225F}$',
+u'\u2260': '$\\not =$',
+u'\u2261': '$\\equiv$',
+u'\u2262': '$\\not\\equiv$',
+u'\u2264': '$\\leq$',
+u'\u2265': '$\\geq$',
+u'\u2266': '$\\leqq$',
+u'\u2267': '$\\geqq$',
+u'\u2268': '$\\lneqq$',
+u'\u2269': '$\\gneqq$',
+u'\u226a': '$\\ll$',
+u'\u226b': '$\\gg$',
+u'\u226c': '$\\between$',
+u'\u226d': '$\\not\\kern-0.3em\\times$',
+u'\u226e': '$\\not<$',
+u'\u226f': '$\\not>$',
+u'\u2270': '$\\not\\leq$',
+u'\u2271': '$\\not\\geq$',
+u'\u2272': '$\\lessequivlnt$',
+u'\u2273': '$\\greaterequivlnt$',
+u'\u2274': '$\\ElsevierGlyph{2274}$',
+u'\u2275': '$\\ElsevierGlyph{2275}$',
+u'\u2276': '$\\lessgtr$',
+u'\u2277': '$\\gtrless$',
+u'\u2278': '$\\notlessgreater$',
+u'\u2279': '$\\notgreaterless$',
+u'\u227a': '$\\prec$',
+u'\u227b': '$\\succ$',
+u'\u227c': '$\\preccurlyeq$',
+u'\u227d': '$\\succcurlyeq$',
+u'\u227e': '$\\precapprox$',
+u'\u227f': '$\\succapprox$',
+u'\u2280': '$\\not\\prec$',
+u'\u2281': '$\\not\\succ$',
+u'\u2282': '$\\subset$',
+u'\u2283': '$\\supset$',
+u'\u2284': '$\\not\\subset$',
+u'\u2285': '$\\not\\supset$',
+u'\u2286': '$\\subseteq$',
+u'\u2287': '$\\supseteq$',
+u'\u2288': '$\\not\\subseteq$',
+u'\u2289': '$\\not\\supseteq$',
+u'\u228a': '$\\subsetneq$',
+u'\u228b': '$\\supsetneq$',
+u'\u228e': '$\\uplus$',
+u'\u228f': '$\\sqsubset$',
+u'\u2290': '$\\sqsupset$',
+u'\u2291': '$\\sqsubseteq$',
+u'\u2292': '$\\sqsupseteq$',
+u'\u2293': '$\\sqcap$',
+u'\u2294': '$\\sqcup$',
+u'\u2295': '$\\oplus$',
+u'\u2296': '$\\ominus$',
+u'\u2297': '$\\otimes$',
+u'\u2298': '$\\oslash$',
+u'\u2299': '$\\odot$',
+u'\u229a': '$\\circledcirc$',
+u'\u229b': '$\\circledast$',
+u'\u229d': '$\\circleddash$',
+u'\u229e': '$\\boxplus$',
+u'\u229f': '$\\boxminus$',
+u'\u22a0': '$\\boxtimes$',
+u'\u22a1': '$\\boxdot$',
+u'\u22a2': '$\\vdash$',
+u'\u22a3': '$\\dashv$',
+u'\u22a4': '$\\top$',
+u'\u22a5': '$\\perp$',
+u'\u22a7': '$\\truestate$',
+u'\u22a8': '$\\forcesextra$',
+u'\u22a9': '$\\Vdash$',
+u'\u22aa': '$\\Vvdash$',
+u'\u22ab': '$\\VDash$',
+u'\u22ac': '$\\nvdash$',
+u'\u22ad': '$\\nvDash$',
+u'\u22ae': '$\\nVdash$',
+u'\u22af': '$\\nVDash$',
+u'\u22b2': '$\\vartriangleleft$',
+u'\u22b3': '$\\vartriangleright$',
+u'\u22b4': '$\\trianglelefteq$',
+u'\u22b5': '$\\trianglerighteq$',
+u'\u22b6': '$\\original$',
+u'\u22b7': '$\\image$',
+u'\u22b8': '$\\multimap$',
+u'\u22b9': '$\\hermitconjmatrix$',
+u'\u22ba': '$\\intercal$',
+u'\u22bb': '$\\veebar$',
+u'\u22be': '$\\rightanglearc$',
+u'\u22c0': '$\\ElsevierGlyph{22C0}$',
+u'\u22c1': '$\\ElsevierGlyph{22C1}$',
+u'\u22c2': '$\\bigcap$',
+u'\u22c3': '$\\bigcup$',
+u'\u22c4': '$\\diamond$',
+u'\u22c5': '$\\cdot$',
+u'\u22c6': '$\\star$',
+u'\u22c7': '$\\divideontimes$',
+u'\u22c8': '$\\bowtie$',
+u'\u22c9': '$\\ltimes$',
+u'\u22ca': '$\\rtimes$',
+u'\u22cb': '$\\leftthreetimes$',
+u'\u22cc': '$\\rightthreetimes$',
+u'\u22cd': '$\\backsimeq$',
+u'\u22ce': '$\\curlyvee$',
+u'\u22cf': '$\\curlywedge$',
+u'\u22d0': '$\\Subset$',
+u'\u22d1': '$\\Supset$',
+u'\u22d2': '$\\Cap$',
+u'\u22d3': '$\\Cup$',
+u'\u22d4': '$\\pitchfork$',
+u'\u22d6': '$\\lessdot$',
+u'\u22d7': '$\\gtrdot$',
+u'\u22d8': '$\\verymuchless$',
+u'\u22d9': '$\\verymuchgreater$',
+u'\u22da': '$\\lesseqgtr$',
+u'\u22db': '$\\gtreqless$',
+u'\u22de': '$\\curlyeqprec$',
+u'\u22df': '$\\curlyeqsucc$',
+u'\u22e2': '$\\not\\sqsubseteq$',
+u'\u22e3': '$\\not\\sqsupseteq$',
+u'\u22e5': '$\\Elzsqspne$',
+u'\u22e6': '$\\lnsim$',
+u'\u22e7': '$\\gnsim$',
+u'\u22e8': '$\\precedesnotsimilar$',
+u'\u22e9': '$\\succnsim$',
+u'\u22ea': '$\\ntriangleleft$',
+u'\u22eb': '$\\ntriangleright$',
+u'\u22ec': '$\\ntrianglelefteq$',
+u'\u22ed': '$\\ntrianglerighteq$',
+u'\u22ee': '$\\vdots$',
+u'\u22ef': '$\\cdots$',
+u'\u22f0': '$\\upslopeellipsis$',
+u'\u22f1': '$\\downslopeellipsis$',
+u'\u2305': '{\\barwedge}',
+u'\u2306': '$\\perspcorrespond$',
+u'\u2308': '$\\lceil$',
+u'\u2309': '$\\rceil$',
+u'\u230a': '$\\lfloor$',
+u'\u230b': '$\\rfloor$',
+u'\u2315': '$\\recorder$',
+u'\u2316': '$\\mathchar"2208$',
+u'\u231c': '$\\ulcorner$',
+u'\u231d': '$\\urcorner$',
+u'\u231e': '$\\llcorner$',
+u'\u231f': '$\\lrcorner$',
+u'\u2322': '$\\frown$',
+u'\u2323': '$\\smile$',
+u'\u2329': '$\\langle$',
+u'\u232a': '$\\rangle$',
+u'\u233d': '$\\ElsevierGlyph{E838}$',
+u'\u23a3': '$\\Elzdlcorn$',
+u'\u23b0': '$\\lmoustache$',
+u'\u23b1': '$\\rmoustache$',
+u'\u2423': '{\\textvisiblespace}',
+u'\u2460': '{\\ding{172}}',
+u'\u2461': '{\\ding{173}}',
+u'\u2462': '{\\ding{174}}',
+u'\u2463': '{\\ding{175}}',
+u'\u2464': '{\\ding{176}}',
+u'\u2465': '{\\ding{177}}',
+u'\u2466': '{\\ding{178}}',
+u'\u2467': '{\\ding{179}}',
+u'\u2468': '{\\ding{180}}',
+u'\u2469': '{\\ding{181}}',
+u'\u24c8': '$\\circledS$',
+u'\u2506': '$\\Elzdshfnc$',
+u'\u2519': '$\\Elzsqfnw$',
+u'\u2571': '$\\diagup$',
+u'\u25a0': '{\\ding{110}}',
+u'\u25a1': '$\\square$',
+u'\u25aa': '$\\blacksquare$',
+u'\u25ad': '$\\fbox{~~}$',
+u'\u25af': '$\\Elzvrecto$',
+u'\u25b1': '$\\ElsevierGlyph{E381}$',
+u'\u25b2': '{\\ding{115}}',
+u'\u25b3': '$\\bigtriangleup$',
+u'\u25b4': '$\\blacktriangle$',
+u'\u25b5': '$\\vartriangle$',
+u'\u25b8': '$\\blacktriangleright$',
+u'\u25b9': '$\\triangleright$',
+u'\u25bc': '{\\ding{116}}',
+u'\u25bd': '$\\bigtriangledown$',
+u'\u25be': '$\\blacktriangledown$',
+u'\u25bf': '$\\triangledown$',
+u'\u25c2': '$\\blacktriangleleft$',
+u'\u25c3': '$\\triangleleft$',
+u'\u25c6': '{\\ding{117}}',
+u'\u25ca': '$\\lozenge$',
+u'\u25cb': '$\\bigcirc$',
+u'\u25cf': '{\\ding{108}}',
+u'\u25d0': '$\\Elzcirfl$',
+u'\u25d1': '$\\Elzcirfr$',
+u'\u25d2': '$\\Elzcirfb$',
+u'\u25d7': '{\\ding{119}}',
+u'\u25d8': '$\\Elzrvbull$',
+u'\u25e7': '$\\Elzsqfl$',
+u'\u25e8': '$\\Elzsqfr$',
+u'\u25ea': '$\\Elzsqfse$',
+u'\u25ef': '$\\bigcirc$',
+u'\u2605': '{\\ding{72}}',
+u'\u2606': '{\\ding{73}}',
+u'\u260e': '{\\ding{37}}',
+u'\u261b': '{\\ding{42}}',
+u'\u261e': '{\\ding{43}}',
+u'\u263e': '{\\rightmoon}',
+u'\u263f': '{\\mercury}',
+u'\u2640': '{\\venus}',
+u'\u2642': '{\\male}',
+u'\u2643': '{\\jupiter}',
+u'\u2644': '{\\saturn}',
+u'\u2645': '{\\uranus}',
+u'\u2646': '{\\neptune}',
+u'\u2647': '{\\pluto}',
+u'\u2648': '{\\aries}',
+u'\u2649': '{\\taurus}',
+u'\u264a': '{\\gemini}',
+u'\u264b': '{\\cancer}',
+u'\u264c': '{\\leo}',
+u'\u264d': '{\\virgo}',
+u'\u264e': '{\\libra}',
+u'\u264f': '{\\scorpio}',
+u'\u2650': '{\\sagittarius}',
+u'\u2651': '{\\capricornus}',
+u'\u2652': '{\\aquarius}',
+u'\u2653': '{\\pisces}',
+u'\u2660': '{\\ding{171}}',
+u'\u2662': '$\\diamond$',
+u'\u2663': '{\\ding{168}}',
+u'\u2665': '{\\ding{170}}',
+u'\u2666': '{\\ding{169}}',
+u'\u2669': '{\\quarternote}',
+u'\u266a': '{\\eighthnote}',
+u'\u266d': '$\\flat$',
+u'\u266e': '$\\natural$',
+u'\u266f': '$\\sharp$',
+u'\u2701': '{\\ding{33}}',
+u'\u2702': '{\\ding{34}}',
+u'\u2703': '{\\ding{35}}',
+u'\u2704': '{\\ding{36}}',
+u'\u2706': '{\\ding{38}}',
+u'\u2707': '{\\ding{39}}',
+u'\u2708': '{\\ding{40}}',
+u'\u2709': '{\\ding{41}}',
+u'\u270c': '{\\ding{44}}',
+u'\u270d': '{\\ding{45}}',
+u'\u270e': '{\\ding{46}}',
+u'\u270f': '{\\ding{47}}',
+u'\u2710': '{\\ding{48}}',
+u'\u2711': '{\\ding{49}}',
+u'\u2712': '{\\ding{50}}',
+u'\u2713': '{\\ding{51}}',
+u'\u2714': '{\\ding{52}}',
+u'\u2715': '{\\ding{53}}',
+u'\u2716': '{\\ding{54}}',
+u'\u2717': '{\\ding{55}}',
+u'\u2718': '{\\ding{56}}',
+u'\u2719': '{\\ding{57}}',
+u'\u271a': '{\\ding{58}}',
+u'\u271b': '{\\ding{59}}',
+u'\u271c': '{\\ding{60}}',
+u'\u271d': '{\\ding{61}}',
+u'\u271e': '{\\ding{62}}',
+u'\u271f': '{\\ding{63}}',
+u'\u2720': '{\\ding{64}}',
+u'\u2721': '{\\ding{65}}',
+u'\u2722': '{\\ding{66}}',
+u'\u2723': '{\\ding{67}}',
+u'\u2724': '{\\ding{68}}',
+u'\u2725': '{\\ding{69}}',
+u'\u2726': '{\\ding{70}}',
+u'\u2727': '{\\ding{71}}',
+u'\u2729': '{\\ding{73}}',
+u'\u272a': '{\\ding{74}}',
+u'\u272b': '{\\ding{75}}',
+u'\u272c': '{\\ding{76}}',
+u'\u272d': '{\\ding{77}}',
+u'\u272e': '{\\ding{78}}',
+u'\u272f': '{\\ding{79}}',
+u'\u2730': '{\\ding{80}}',
+u'\u2731': '{\\ding{81}}',
+u'\u2732': '{\\ding{82}}',
+u'\u2733': '{\\ding{83}}',
+u'\u2734': '{\\ding{84}}',
+u'\u2735': '{\\ding{85}}',
+u'\u2736': '{\\ding{86}}',
+u'\u2737': '{\\ding{87}}',
+u'\u2738': '{\\ding{88}}',
+u'\u2739': '{\\ding{89}}',
+u'\u273a': '{\\ding{90}}',
+u'\u273b': '{\\ding{91}}',
+u'\u273c': '{\\ding{92}}',
+u'\u273d': '{\\ding{93}}',
+u'\u273e': '{\\ding{94}}',
+u'\u273f': '{\\ding{95}}',
+u'\u2740': '{\\ding{96}}',
+u'\u2741': '{\\ding{97}}',
+u'\u2742': '{\\ding{98}}',
+u'\u2743': '{\\ding{99}}',
+u'\u2744': '{\\ding{100}}',
+u'\u2745': '{\\ding{101}}',
+u'\u2746': '{\\ding{102}}',
+u'\u2747': '{\\ding{103}}',
+u'\u2748': '{\\ding{104}}',
+u'\u2749': '{\\ding{105}}',
+u'\u274a': '{\\ding{106}}',
+u'\u274b': '{\\ding{107}}',
+u'\u274d': '{\\ding{109}}',
+u'\u274f': '{\\ding{111}}',
+u'\u2750': '{\\ding{112}}',
+u'\u2751': '{\\ding{113}}',
+u'\u2752': '{\\ding{114}}',
+u'\u2756': '{\\ding{118}}',
+u'\u2758': '{\\ding{120}}',
+u'\u2759': '{\\ding{121}}',
+u'\u275a': '{\\ding{122}}',
+u'\u275b': '{\\ding{123}}',
+u'\u275c': '{\\ding{124}}',
+u'\u275d': '{\\ding{125}}',
+u'\u275e': '{\\ding{126}}',
+u'\u2761': '{\\ding{161}}',
+u'\u2762': '{\\ding{162}}',
+u'\u2763': '{\\ding{163}}',
+u'\u2764': '{\\ding{164}}',
+u'\u2765': '{\\ding{165}}',
+u'\u2766': '{\\ding{166}}',
+u'\u2767': '{\\ding{167}}',
+u'\u2776': '{\\ding{182}}',
+u'\u2777': '{\\ding{183}}',
+u'\u2778': '{\\ding{184}}',
+u'\u2779': '{\\ding{185}}',
+u'\u277a': '{\\ding{186}}',
+u'\u277b': '{\\ding{187}}',
+u'\u277c': '{\\ding{188}}',
+u'\u277d': '{\\ding{189}}',
+u'\u277e': '{\\ding{190}}',
+u'\u277f': '{\\ding{191}}',
+u'\u2780': '{\\ding{192}}',
+u'\u2781': '{\\ding{193}}',
+u'\u2782': '{\\ding{194}}',
+u'\u2783': '{\\ding{195}}',
+u'\u2784': '{\\ding{196}}',
+u'\u2785': '{\\ding{197}}',
+u'\u2786': '{\\ding{198}}',
+u'\u2787': '{\\ding{199}}',
+u'\u2788': '{\\ding{200}}',
+u'\u2789': '{\\ding{201}}',
+u'\u278a': '{\\ding{202}}',
+u'\u278b': '{\\ding{203}}',
+u'\u278c': '{\\ding{204}}',
+u'\u278d': '{\\ding{205}}',
+u'\u278e': '{\\ding{206}}',
+u'\u278f': '{\\ding{207}}',
+u'\u2790': '{\\ding{208}}',
+u'\u2791': '{\\ding{209}}',
+u'\u2792': '{\\ding{210}}',
+u'\u2793': '{\\ding{211}}',
+u'\u2794': '{\\ding{212}}',
+u'\u2798': '{\\ding{216}}',
+u'\u2799': '{\\ding{217}}',
+u'\u279a': '{\\ding{218}}',
+u'\u279b': '{\\ding{219}}',
+u'\u279c': '{\\ding{220}}',
+u'\u279d': '{\\ding{221}}',
+u'\u279e': '{\\ding{222}}',
+u'\u279f': '{\\ding{223}}',
+u'\u27a0': '{\\ding{224}}',
+u'\u27a1': '{\\ding{225}}',
+u'\u27a2': '{\\ding{226}}',
+u'\u27a3': '{\\ding{227}}',
+u'\u27a4': '{\\ding{228}}',
+u'\u27a5': '{\\ding{229}}',
+u'\u27a6': '{\\ding{230}}',
+u'\u27a7': '{\\ding{231}}',
+u'\u27a8': '{\\ding{232}}',
+u'\u27a9': '{\\ding{233}}',
+u'\u27aa': '{\\ding{234}}',
+u'\u27ab': '{\\ding{235}}',
+u'\u27ac': '{\\ding{236}}',
+u'\u27ad': '{\\ding{237}}',
+u'\u27ae': '{\\ding{238}}',
+u'\u27af': '{\\ding{239}}',
+u'\u27b1': '{\\ding{241}}',
+u'\u27b2': '{\\ding{242}}',
+u'\u27b3': '{\\ding{243}}',
+u'\u27b4': '{\\ding{244}}',
+u'\u27b5': '{\\ding{245}}',
+u'\u27b6': '{\\ding{246}}',
+u'\u27b7': '{\\ding{247}}',
+u'\u27b8': '{\\ding{248}}',
+u'\u27b9': '{\\ding{249}}',
+u'\u27ba': '{\\ding{250}}',
+u'\u27bb': '{\\ding{251}}',
+u'\u27bc': '{\\ding{252}}',
+u'\u27bd': '{\\ding{253}}',
+u'\u27be': '{\\ding{254}}',
+u'\u27f5': '$\\longleftarrow$',
+u'\u27f6': '$\\longrightarrow$',
+u'\u27f7': '$\\longleftrightarrow$',
+u'\u27f8': '$\\Longleftarrow$',
+u'\u27f9': '$\\Longrightarrow$',
+u'\u27fa': '$\\Longleftrightarrow$',
+u'\u27fc': '$\\longmapsto$',
+u'\u27ff': '$\\sim\\joinrel\\leadsto$',
+u'\u2905': '$\\ElsevierGlyph{E212}$',
+u'\u2912': '$\\UpArrowBar$',
+u'\u2913': '$\\DownArrowBar$',
+u'\u2923': '$\\ElsevierGlyph{E20C}$',
+u'\u2924': '$\\ElsevierGlyph{E20D}$',
+u'\u2925': '$\\ElsevierGlyph{E20B}$',
+u'\u2926': '$\\ElsevierGlyph{E20A}$',
+u'\u2927': '$\\ElsevierGlyph{E211}$',
+u'\u2928': '$\\ElsevierGlyph{E20E}$',
+u'\u2929': '$\\ElsevierGlyph{E20F}$',
+u'\u292a': '$\\ElsevierGlyph{E210}$',
+u'\u2933': '$\\ElsevierGlyph{E21C}$',
+u'\u2936': '$\\ElsevierGlyph{E21A}$',
+u'\u2937': '$\\ElsevierGlyph{E219}$',
+u'\u2940': '$\\Elolarr$',
+u'\u2941': '$\\Elorarr$',
+u'\u2942': '$\\ElzRlarr$',
+u'\u2944': '$\\ElzrLarr$',
+u'\u2947': '$\\Elzrarrx$',
+u'\u294e': '$\\LeftRightVector$',
+u'\u294f': '$\\RightUpDownVector$',
+u'\u2950': '$\\DownLeftRightVector$',
+u'\u2951': '$\\LeftUpDownVector$',
+u'\u2952': '$\\LeftVectorBar$',
+u'\u2953': '$\\RightVectorBar$',
+u'\u2954': '$\\RightUpVectorBar$',
+u'\u2955': '$\\RightDownVectorBar$',
+u'\u2956': '$\\DownLeftVectorBar$',
+u'\u2957': '$\\DownRightVectorBar$',
+u'\u2958': '$\\LeftUpVectorBar$',
+u'\u2959': '$\\LeftDownVectorBar$',
+u'\u295a': '$\\LeftTeeVector$',
+u'\u295b': '$\\RightTeeVector$',
+u'\u295c': '$\\RightUpTeeVector$',
+u'\u295d': '$\\RightDownTeeVector$',
+u'\u295e': '$\\DownLeftTeeVector$',
+u'\u295f': '$\\DownRightTeeVector$',
+u'\u2960': '$\\LeftUpTeeVector$',
+u'\u2961': '$\\LeftDownTeeVector$',
+u'\u296e': '$\\UpEquilibrium$',
+u'\u296f': '$\\ReverseUpEquilibrium$',
+u'\u2970': '$\\RoundImplies$',
+u'\u297c': '$\\ElsevierGlyph{E214}$',
+u'\u297d': '$\\ElsevierGlyph{E215}$',
+u'\u2980': '$\\Elztfnc$',
+u'\u2985': '$\\ElsevierGlyph{3018}$',
+u'\u2986': '$\\Elroang$',
+u'\u2993': '$<\\kern-0.58em($',
+u'\u2994': '$\\ElsevierGlyph{E291}$',
+u'\u2999': '$\\Elzddfnc$',
+u'\u299c': '$\\Angle$',
+u'\u29a0': '$\\Elzlpargt$',
+u'\u29b5': '$\\ElsevierGlyph{E260}$',
+u'\u29b6': '$\\ElsevierGlyph{E61B}$',
+u'\u29ca': '$\\ElzLap$',
+u'\u29cb': '$\\Elzdefas$',
+u'\u29cf': '$\\LeftTriangleBar$',
+u'\u29d0': '$\\RightTriangleBar$',
+u'\u29dc': '$\\ElsevierGlyph{E372}$',
+u'\u29eb': '$\\blacklozenge$',
+u'\u29f4': '$\\RuleDelayed$',
+u'\u2a04': '$\\Elxuplus$',
+u'\u2a05': '$\\ElzThr$',
+u'\u2a06': '$\\Elxsqcup$',
+u'\u2a07': '$\\ElzInf$',
+u'\u2a08': '$\\ElzSup$',
+u'\u2a0d': '$\\ElzCint$',
+u'\u2a0f': '$\\clockoint$',
+u'\u2a10': '$\\ElsevierGlyph{E395}$',
+u'\u2a16': '$\\sqrint$',
+u'\u2a25': '$\\ElsevierGlyph{E25A}$',
+u'\u2a2a': '$\\ElsevierGlyph{E25B}$',
+u'\u2a2d': '$\\ElsevierGlyph{E25C}$',
+u'\u2a2e': '$\\ElsevierGlyph{E25D}$',
+u'\u2a2f': '$\\ElzTimes$',
+u'\u2a34': '$\\ElsevierGlyph{E25E}$',
+u'\u2a35': '$\\ElsevierGlyph{E25E}$',
+u'\u2a3c': '$\\ElsevierGlyph{E259}$',
+u'\u2a3f': '$\\amalg$',
+u'\u2a53': '$\\ElzAnd$',
+u'\u2a54': '$\\ElzOr$',
+u'\u2a55': '$\\ElsevierGlyph{E36E}$',
+u'\u2a56': '$\\ElOr$',
+u'\u2a5e': '$\\perspcorrespond$',
+u'\u2a5f': '$\\Elzminhat$',
+u'\u2a63': '$\\ElsevierGlyph{225A}$',
+u'\u2a6e': '$\\stackrel{*}{=}$',
+u'\u2a75': '$\\Equal$',
+u'\u2a7d': '$\\leqslant$',
+u'\u2a7e': '$\\geqslant$',
+u'\u2a85': '$\\lessapprox$',
+u'\u2a86': '$\\gtrapprox$',
+u'\u2a87': '$\\lneq$',
+u'\u2a88': '$\\gneq$',
+u'\u2a89': '$\\lnapprox$',
+u'\u2a8a': '$\\gnapprox$',
+u'\u2a8b': '$\\lesseqqgtr$',
+u'\u2a8c': '$\\gtreqqless$',
+u'\u2a95': '$\\eqslantless$',
+u'\u2a96': '$\\eqslantgtr$',
+u'\u2a9d': '$\\Pisymbol{ppi020}{117}$',
+u'\u2a9e': '$\\Pisymbol{ppi020}{105}$',
+u'\u2aa1': '$\\NestedLessLess$',
+u'\u2aa2': '$\\NestedGreaterGreater$',
+u'\u2aaf': '$\\preceq$',
+u'\u2ab0': '$\\succeq$',
+u'\u2ab5': '$\\precneqq$',
+u'\u2ab6': '$\\succneqq$',
+u'\u2ab7': '$\\precapprox$',
+u'\u2ab8': '$\\succapprox$',
+u'\u2ab9': '$\\precnapprox$',
+u'\u2aba': '$\\succnapprox$',
+u'\u2ac5': '$\\subseteqq$',
+u'\u2ac6': '$\\supseteqq$',
+u'\u2acb': '$\\subsetneqq$',
+u'\u2acc': '$\\supsetneqq$',
+u'\u2aeb': '$\\ElsevierGlyph{E30D}$',
+u'\u2af6': '$\\Elztdcol$',
+u'\u2afd': '${{/}\\!\\!{/}}$',
+u'\u300a': '$\\ElsevierGlyph{300A}$',
+u'\u300b': '$\\ElsevierGlyph{300B}$',
+u'\u3018': '$\\ElsevierGlyph{3018}$',
+u'\u3019': '$\\ElsevierGlyph{3019}$',
+u'\u301a': '$\\openbracketleft$',
+u'\u301b': '$\\openbracketright$',
+u'\ufb00': '{ff}',
+u'\ufb01': '{fi}',
+u'\ufb02': '{fl}',
+u'\ufb03': '{ffi}',
+u'\ufb04': '{ffl}',
+u'\U0001d400': '$\\mathbf{A}$',
+u'\U0001d401': '$\\mathbf{B}$',
+u'\U0001d402': '$\\mathbf{C}$',
+u'\U0001d403': '$\\mathbf{D}$',
+u'\U0001d404': '$\\mathbf{E}$',
+u'\U0001d405': '$\\mathbf{F}$',
+u'\U0001d406': '$\\mathbf{G}$',
+u'\U0001d407': '$\\mathbf{H}$',
+u'\U0001d408': '$\\mathbf{I}$',
+u'\U0001d409': '$\\mathbf{J}$',
+u'\U0001d40a': '$\\mathbf{K}$',
+u'\U0001d40b': '$\\mathbf{L}$',
+u'\U0001d40c': '$\\mathbf{M}$',
+u'\U0001d40d': '$\\mathbf{N}$',
+u'\U0001d40e': '$\\mathbf{O}$',
+u'\U0001d40f': '$\\mathbf{P}$',
+u'\U0001d410': '$\\mathbf{Q}$',
+u'\U0001d411': '$\\mathbf{R}$',
+u'\U0001d412': '$\\mathbf{S}$',
+u'\U0001d413': '$\\mathbf{T}$',
+u'\U0001d414': '$\\mathbf{U}$',
+u'\U0001d415': '$\\mathbf{V}$',
+u'\U0001d416': '$\\mathbf{W}$',
+u'\U0001d417': '$\\mathbf{X}$',
+u'\U0001d418': '$\\mathbf{Y}$',
+u'\U0001d419': '$\\mathbf{Z}$',
+u'\U0001d41a': '$\\mathbf{a}$',
+u'\U0001d41b': '$\\mathbf{b}$',
+u'\U0001d41c': '$\\mathbf{c}$',
+u'\U0001d41d': '$\\mathbf{d}$',
+u'\U0001d41e': '$\\mathbf{e}$',
+u'\U0001d41f': '$\\mathbf{f}$',
+u'\U0001d420': '$\\mathbf{g}$',
+u'\U0001d421': '$\\mathbf{h}$',
+u'\U0001d422': '$\\mathbf{i}$',
+u'\U0001d423': '$\\mathbf{j}$',
+u'\U0001d424': '$\\mathbf{k}$',
+u'\U0001d425': '$\\mathbf{l}$',
+u'\U0001d426': '$\\mathbf{m}$',
+u'\U0001d427': '$\\mathbf{n}$',
+u'\U0001d428': '$\\mathbf{o}$',
+u'\U0001d429': '$\\mathbf{p}$',
+u'\U0001d42a': '$\\mathbf{q}$',
+u'\U0001d42b': '$\\mathbf{r}$',
+u'\U0001d42c': '$\\mathbf{s}$',
+u'\U0001d42d': '$\\mathbf{t}$',
+u'\U0001d42e': '$\\mathbf{u}$',
+u'\U0001d42f': '$\\mathbf{v}$',
+u'\U0001d430': '$\\mathbf{w}$',
+u'\U0001d431': '$\\mathbf{x}$',
+u'\U0001d432': '$\\mathbf{y}$',
+u'\U0001d433': '$\\mathbf{z}$',
+u'\U0001d434': '$\\mathsl{A}$',
+u'\U0001d435': '$\\mathsl{B}$',
+u'\U0001d436': '$\\mathsl{C}$',
+u'\U0001d437': '$\\mathsl{D}$',
+u'\U0001d438': '$\\mathsl{E}$',
+u'\U0001d439': '$\\mathsl{F}$',
+u'\U0001d43a': '$\\mathsl{G}$',
+u'\U0001d43b': '$\\mathsl{H}$',
+u'\U0001d43c': '$\\mathsl{I}$',
+u'\U0001d43d': '$\\mathsl{J}$',
+u'\U0001d43e': '$\\mathsl{K}$',
+u'\U0001d43f': '$\\mathsl{L}$',
+u'\U0001d440': '$\\mathsl{M}$',
+u'\U0001d441': '$\\mathsl{N}$',
+u'\U0001d442': '$\\mathsl{O}$',
+u'\U0001d443': '$\\mathsl{P}$',
+u'\U0001d444': '$\\mathsl{Q}$',
+u'\U0001d445': '$\\mathsl{R}$',
+u'\U0001d446': '$\\mathsl{S}$',
+u'\U0001d447': '$\\mathsl{T}$',
+u'\U0001d448': '$\\mathsl{U}$',
+u'\U0001d449': '$\\mathsl{V}$',
+u'\U0001d44a': '$\\mathsl{W}$',
+u'\U0001d44b': '$\\mathsl{X}$',
+u'\U0001d44c': '$\\mathsl{Y}$',
+u'\U0001d44d': '$\\mathsl{Z}$',
+u'\U0001d44e': '$\\mathsl{a}$',
+u'\U0001d44f': '$\\mathsl{b}$',
+u'\U0001d450': '$\\mathsl{c}$',
+u'\U0001d451': '$\\mathsl{d}$',
+u'\U0001d452': '$\\mathsl{e}$',
+u'\U0001d453': '$\\mathsl{f}$',
+u'\U0001d454': '$\\mathsl{g}$',
+u'\U0001d456': '$\\mathsl{i}$',
+u'\U0001d457': '$\\mathsl{j}$',
+u'\U0001d458': '$\\mathsl{k}$',
+u'\U0001d459': '$\\mathsl{l}$',
+u'\U0001d45a': '$\\mathsl{m}$',
+u'\U0001d45b': '$\\mathsl{n}$',
+u'\U0001d45c': '$\\mathsl{o}$',
+u'\U0001d45d': '$\\mathsl{p}$',
+u'\U0001d45e': '$\\mathsl{q}$',
+u'\U0001d45f': '$\\mathsl{r}$',
+u'\U0001d460': '$\\mathsl{s}$',
+u'\U0001d461': '$\\mathsl{t}$',
+u'\U0001d462': '$\\mathsl{u}$',
+u'\U0001d463': '$\\mathsl{v}$',
+u'\U0001d464': '$\\mathsl{w}$',
+u'\U0001d465': '$\\mathsl{x}$',
+u'\U0001d466': '$\\mathsl{y}$',
+u'\U0001d467': '$\\mathsl{z}$',
+u'\U0001d468': '$\\mathbit{A}$',
+u'\U0001d469': '$\\mathbit{B}$',
+u'\U0001d46a': '$\\mathbit{C}$',
+u'\U0001d46b': '$\\mathbit{D}$',
+u'\U0001d46c': '$\\mathbit{E}$',
+u'\U0001d46d': '$\\mathbit{F}$',
+u'\U0001d46e': '$\\mathbit{G}$',
+u'\U0001d46f': '$\\mathbit{H}$',
+u'\U0001d470': '$\\mathbit{I}$',
+u'\U0001d471': '$\\mathbit{J}$',
+u'\U0001d472': '$\\mathbit{K}$',
+u'\U0001d473': '$\\mathbit{L}$',
+u'\U0001d474': '$\\mathbit{M}$',
+u'\U0001d475': '$\\mathbit{N}$',
+u'\U0001d476': '$\\mathbit{O}$',
+u'\U0001d477': '$\\mathbit{P}$',
+u'\U0001d478': '$\\mathbit{Q}$',
+u'\U0001d479': '$\\mathbit{R}$',
+u'\U0001d47a': '$\\mathbit{S}$',
+u'\U0001d47b': '$\\mathbit{T}$',
+u'\U0001d47c': '$\\mathbit{U}$',
+u'\U0001d47d': '$\\mathbit{V}$',
+u'\U0001d47e': '$\\mathbit{W}$',
+u'\U0001d47f': '$\\mathbit{X}$',
+u'\U0001d480': '$\\mathbit{Y}$',
+u'\U0001d481': '$\\mathbit{Z}$',
+u'\U0001d482': '$\\mathbit{a}$',
+u'\U0001d483': '$\\mathbit{b}$',
+u'\U0001d484': '$\\mathbit{c}$',
+u'\U0001d485': '$\\mathbit{d}$',
+u'\U0001d486': '$\\mathbit{e}$',
+u'\U0001d487': '$\\mathbit{f}$',
+u'\U0001d488': '$\\mathbit{g}$',
+u'\U0001d489': '$\\mathbit{h}$',
+u'\U0001d48a': '$\\mathbit{i}$',
+u'\U0001d48b': '$\\mathbit{j}$',
+u'\U0001d48c': '$\\mathbit{k}$',
+u'\U0001d48d': '$\\mathbit{l}$',
+u'\U0001d48e': '$\\mathbit{m}$',
+u'\U0001d48f': '$\\mathbit{n}$',
+u'\U0001d490': '$\\mathbit{o}$',
+u'\U0001d491': '$\\mathbit{p}$',
+u'\U0001d492': '$\\mathbit{q}$',
+u'\U0001d493': '$\\mathbit{r}$',
+u'\U0001d494': '$\\mathbit{s}$',
+u'\U0001d495': '$\\mathbit{t}$',
+u'\U0001d496': '$\\mathbit{u}$',
+u'\U0001d497': '$\\mathbit{v}$',
+u'\U0001d498': '$\\mathbit{w}$',
+u'\U0001d499': '$\\mathbit{x}$',
+u'\U0001d49a': '$\\mathbit{y}$',
+u'\U0001d49b': '$\\mathbit{z}$',
+u'\U0001d49c': '$\\mathscr{A}$',
+u'\U0001d49e': '$\\mathscr{C}$',
+u'\U0001d49f': '$\\mathscr{D}$',
+u'\U0001d4a2': '$\\mathscr{G}$',
+u'\U0001d4a5': '$\\mathscr{J}$',
+u'\U0001d4a6': '$\\mathscr{K}$',
+u'\U0001d4a9': '$\\mathscr{N}$',
+u'\U0001d4aa': '$\\mathscr{O}$',
+u'\U0001d4ab': '$\\mathscr{P}$',
+u'\U0001d4ac': '$\\mathscr{Q}$',
+u'\U0001d4ae': '$\\mathscr{S}$',
+u'\U0001d4af': '$\\mathscr{T}$',
+u'\U0001d4b0': '$\\mathscr{U}$',
+u'\U0001d4b1': '$\\mathscr{V}$',
+u'\U0001d4b2': '$\\mathscr{W}$',
+u'\U0001d4b3': '$\\mathscr{X}$',
+u'\U0001d4b4': '$\\mathscr{Y}$',
+u'\U0001d4b5': '$\\mathscr{Z}$',
+u'\U0001d4b6': '$\\mathscr{a}$',
+u'\U0001d4b7': '$\\mathscr{b}$',
+u'\U0001d4b8': '$\\mathscr{c}$',
+u'\U0001d4b9': '$\\mathscr{d}$',
+u'\U0001d4bb': '$\\mathscr{f}$',
+u'\U0001d4bd': '$\\mathscr{h}$',
+u'\U0001d4be': '$\\mathscr{i}$',
+u'\U0001d4bf': '$\\mathscr{j}$',
+u'\U0001d4c0': '$\\mathscr{k}$',
+u'\U0001d4c1': '$\\mathscr{l}$',
+u'\U0001d4c2': '$\\mathscr{m}$',
+u'\U0001d4c3': '$\\mathscr{n}$',
+u'\U0001d4c5': '$\\mathscr{p}$',
+u'\U0001d4c6': '$\\mathscr{q}$',
+u'\U0001d4c7': '$\\mathscr{r}$',
+u'\U0001d4c8': '$\\mathscr{s}$',
+u'\U0001d4c9': '$\\mathscr{t}$',
+u'\U0001d4ca': '$\\mathscr{u}$',
+u'\U0001d4cb': '$\\mathscr{v}$',
+u'\U0001d4cc': '$\\mathscr{w}$',
+u'\U0001d4cd': '$\\mathscr{x}$',
+u'\U0001d4ce': '$\\mathscr{y}$',
+u'\U0001d4cf': '$\\mathscr{z}$',
+u'\U0001d4d0': '$\\mathmit{A}$',
+u'\U0001d4d1': '$\\mathmit{B}$',
+u'\U0001d4d2': '$\\mathmit{C}$',
+u'\U0001d4d3': '$\\mathmit{D}$',
+u'\U0001d4d4': '$\\mathmit{E}$',
+u'\U0001d4d5': '$\\mathmit{F}$',
+u'\U0001d4d6': '$\\mathmit{G}$',
+u'\U0001d4d7': '$\\mathmit{H}$',
+u'\U0001d4d8': '$\\mathmit{I}$',
+u'\U0001d4d9': '$\\mathmit{J}$',
+u'\U0001d4da': '$\\mathmit{K}$',
+u'\U0001d4db': '$\\mathmit{L}$',
+u'\U0001d4dc': '$\\mathmit{M}$',
+u'\U0001d4dd': '$\\mathmit{N}$',
+u'\U0001d4de': '$\\mathmit{O}$',
+u'\U0001d4df': '$\\mathmit{P}$',
+u'\U0001d4e0': '$\\mathmit{Q}$',
+u'\U0001d4e1': '$\\mathmit{R}$',
+u'\U0001d4e2': '$\\mathmit{S}$',
+u'\U0001d4e3': '$\\mathmit{T}$',
+u'\U0001d4e4': '$\\mathmit{U}$',
+u'\U0001d4e5': '$\\mathmit{V}$',
+u'\U0001d4e6': '$\\mathmit{W}$',
+u'\U0001d4e7': '$\\mathmit{X}$',
+u'\U0001d4e8': '$\\mathmit{Y}$',
+u'\U0001d4e9': '$\\mathmit{Z}$',
+u'\U0001d4ea': '$\\mathmit{a}$',
+u'\U0001d4eb': '$\\mathmit{b}$',
+u'\U0001d4ec': '$\\mathmit{c}$',
+u'\U0001d4ed': '$\\mathmit{d}$',
+u'\U0001d4ee': '$\\mathmit{e}$',
+u'\U0001d4ef': '$\\mathmit{f}$',
+u'\U0001d4f0': '$\\mathmit{g}$',
+u'\U0001d4f1': '$\\mathmit{h}$',
+u'\U0001d4f2': '$\\mathmit{i}$',
+u'\U0001d4f3': '$\\mathmit{j}$',
+u'\U0001d4f4': '$\\mathmit{k}$',
+u'\U0001d4f5': '$\\mathmit{l}$',
+u'\U0001d4f6': '$\\mathmit{m}$',
+u'\U0001d4f7': '$\\mathmit{n}$',
+u'\U0001d4f8': '$\\mathmit{o}$',
+u'\U0001d4f9': '$\\mathmit{p}$',
+u'\U0001d4fa': '$\\mathmit{q}$',
+u'\U0001d4fb': '$\\mathmit{r}$',
+u'\U0001d4fc': '$\\mathmit{s}$',
+u'\U0001d4fd': '$\\mathmit{t}$',
+u'\U0001d4fe': '$\\mathmit{u}$',
+u'\U0001d4ff': '$\\mathmit{v}$',
+u'\U0001d500': '$\\mathmit{w}$',
+u'\U0001d501': '$\\mathmit{x}$',
+u'\U0001d502': '$\\mathmit{y}$',
+u'\U0001d503': '$\\mathmit{z}$',
+u'\U0001d504': '$\\mathfrak{A}$',
+u'\U0001d505': '$\\mathfrak{B}$',
+u'\U0001d507': '$\\mathfrak{D}$',
+u'\U0001d508': '$\\mathfrak{E}$',
+u'\U0001d509': '$\\mathfrak{F}$',
+u'\U0001d50a': '$\\mathfrak{G}$',
+u'\U0001d50d': '$\\mathfrak{J}$',
+u'\U0001d50e': '$\\mathfrak{K}$',
+u'\U0001d50f': '$\\mathfrak{L}$',
+u'\U0001d510': '$\\mathfrak{M}$',
+u'\U0001d511': '$\\mathfrak{N}$',
+u'\U0001d512': '$\\mathfrak{O}$',
+u'\U0001d513': '$\\mathfrak{P}$',
+u'\U0001d514': '$\\mathfrak{Q}$',
+u'\U0001d516': '$\\mathfrak{S}$',
+u'\U0001d517': '$\\mathfrak{T}$',
+u'\U0001d518': '$\\mathfrak{U}$',
+u'\U0001d519': '$\\mathfrak{V}$',
+u'\U0001d51a': '$\\mathfrak{W}$',
+u'\U0001d51b': '$\\mathfrak{X}$',
+u'\U0001d51c': '$\\mathfrak{Y}$',
+u'\U0001d51e': '$\\mathfrak{a}$',
+u'\U0001d51f': '$\\mathfrak{b}$',
+u'\U0001d520': '$\\mathfrak{c}$',
+u'\U0001d521': '$\\mathfrak{d}$',
+u'\U0001d522': '$\\mathfrak{e}$',
+u'\U0001d523': '$\\mathfrak{f}$',
+u'\U0001d524': '$\\mathfrak{g}$',
+u'\U0001d525': '$\\mathfrak{h}$',
+u'\U0001d526': '$\\mathfrak{i}$',
+u'\U0001d527': '$\\mathfrak{j}$',
+u'\U0001d528': '$\\mathfrak{k}$',
+u'\U0001d529': '$\\mathfrak{l}$',
+u'\U0001d52a': '$\\mathfrak{m}$',
+u'\U0001d52b': '$\\mathfrak{n}$',
+u'\U0001d52c': '$\\mathfrak{o}$',
+u'\U0001d52d': '$\\mathfrak{p}$',
+u'\U0001d52e': '$\\mathfrak{q}$',
+u'\U0001d52f': '$\\mathfrak{r}$',
+u'\U0001d530': '$\\mathfrak{s}$',
+u'\U0001d531': '$\\mathfrak{t}$',
+u'\U0001d532': '$\\mathfrak{u}$',
+u'\U0001d533': '$\\mathfrak{v}$',
+u'\U0001d534': '$\\mathfrak{w}$',
+u'\U0001d535': '$\\mathfrak{x}$',
+u'\U0001d536': '$\\mathfrak{y}$',
+u'\U0001d537': '$\\mathfrak{z}$',
+u'\U0001d538': '$\\mathbb{A}$',
+u'\U0001d539': '$\\mathbb{B}$',
+u'\U0001d53b': '$\\mathbb{D}$',
+u'\U0001d53c': '$\\mathbb{E}$',
+u'\U0001d53d': '$\\mathbb{F}$',
+u'\U0001d53e': '$\\mathbb{G}$',
+u'\U0001d540': '$\\mathbb{I}$',
+u'\U0001d541': '$\\mathbb{J}$',
+u'\U0001d542': '$\\mathbb{K}$',
+u'\U0001d543': '$\\mathbb{L}$',
+u'\U0001d544': '$\\mathbb{M}$',
+u'\U0001d546': '$\\mathbb{O}$',
+u'\U0001d54a': '$\\mathbb{S}$',
+u'\U0001d54b': '$\\mathbb{T}$',
+u'\U0001d54c': '$\\mathbb{U}$',
+u'\U0001d54d': '$\\mathbb{V}$',
+u'\U0001d54e': '$\\mathbb{W}$',
+u'\U0001d54f': '$\\mathbb{X}$',
+u'\U0001d550': '$\\mathbb{Y}$',
+u'\U0001d552': '$\\mathbb{a}$',
+u'\U0001d553': '$\\mathbb{b}$',
+u'\U0001d554': '$\\mathbb{c}$',
+u'\U0001d555': '$\\mathbb{d}$',
+u'\U0001d556': '$\\mathbb{e}$',
+u'\U0001d557': '$\\mathbb{f}$',
+u'\U0001d558': '$\\mathbb{g}$',
+u'\U0001d559': '$\\mathbb{h}$',
+u'\U0001d55a': '$\\mathbb{i}$',
+u'\U0001d55b': '$\\mathbb{j}$',
+u'\U0001d55c': '$\\mathbb{k}$',
+u'\U0001d55d': '$\\mathbb{l}$',
+u'\U0001d55e': '$\\mathbb{m}$',
+u'\U0001d55f': '$\\mathbb{n}$',
+u'\U0001d560': '$\\mathbb{o}$',
+u'\U0001d561': '$\\mathbb{p}$',
+u'\U0001d562': '$\\mathbb{q}$',
+u'\U0001d563': '$\\mathbb{r}$',
+u'\U0001d564': '$\\mathbb{s}$',
+u'\U0001d565': '$\\mathbb{t}$',
+u'\U0001d566': '$\\mathbb{u}$',
+u'\U0001d567': '$\\mathbb{v}$',
+u'\U0001d568': '$\\mathbb{w}$',
+u'\U0001d569': '$\\mathbb{x}$',
+u'\U0001d56a': '$\\mathbb{y}$',
+u'\U0001d56b': '$\\mathbb{z}$',
+u'\U0001d56c': '$\\mathslbb{A}$',
+u'\U0001d56d': '$\\mathslbb{B}$',
+u'\U0001d56e': '$\\mathslbb{C}$',
+u'\U0001d56f': '$\\mathslbb{D}$',
+u'\U0001d570': '$\\mathslbb{E}$',
+u'\U0001d571': '$\\mathslbb{F}$',
+u'\U0001d572': '$\\mathslbb{G}$',
+u'\U0001d573': '$\\mathslbb{H}$',
+u'\U0001d574': '$\\mathslbb{I}$',
+u'\U0001d575': '$\\mathslbb{J}$',
+u'\U0001d576': '$\\mathslbb{K}$',
+u'\U0001d577': '$\\mathslbb{L}$',
+u'\U0001d578': '$\\mathslbb{M}$',
+u'\U0001d579': '$\\mathslbb{N}$',
+u'\U0001d57a': '$\\mathslbb{O}$',
+u'\U0001d57b': '$\\mathslbb{P}$',
+u'\U0001d57c': '$\\mathslbb{Q}$',
+u'\U0001d57d': '$\\mathslbb{R}$',
+u'\U0001d57e': '$\\mathslbb{S}$',
+u'\U0001d57f': '$\\mathslbb{T}$',
+u'\U0001d580': '$\\mathslbb{U}$',
+u'\U0001d581': '$\\mathslbb{V}$',
+u'\U0001d582': '$\\mathslbb{W}$',
+u'\U0001d583': '$\\mathslbb{X}$',
+u'\U0001d584': '$\\mathslbb{Y}$',
+u'\U0001d585': '$\\mathslbb{Z}$',
+u'\U0001d586': '$\\mathslbb{a}$',
+u'\U0001d587': '$\\mathslbb{b}$',
+u'\U0001d588': '$\\mathslbb{c}$',
+u'\U0001d589': '$\\mathslbb{d}$',
+u'\U0001d58a': '$\\mathslbb{e}$',
+u'\U0001d58b': '$\\mathslbb{f}$',
+u'\U0001d58c': '$\\mathslbb{g}$',
+u'\U0001d58d': '$\\mathslbb{h}$',
+u'\U0001d58e': '$\\mathslbb{i}$',
+u'\U0001d58f': '$\\mathslbb{j}$',
+u'\U0001d590': '$\\mathslbb{k}$',
+u'\U0001d591': '$\\mathslbb{l}$',
+u'\U0001d592': '$\\mathslbb{m}$',
+u'\U0001d593': '$\\mathslbb{n}$',
+u'\U0001d594': '$\\mathslbb{o}$',
+u'\U0001d595': '$\\mathslbb{p}$',
+u'\U0001d596': '$\\mathslbb{q}$',
+u'\U0001d597': '$\\mathslbb{r}$',
+u'\U0001d598': '$\\mathslbb{s}$',
+u'\U0001d599': '$\\mathslbb{t}$',
+u'\U0001d59a': '$\\mathslbb{u}$',
+u'\U0001d59b': '$\\mathslbb{v}$',
+u'\U0001d59c': '$\\mathslbb{w}$',
+u'\U0001d59d': '$\\mathslbb{x}$',
+u'\U0001d59e': '$\\mathslbb{y}$',
+u'\U0001d59f': '$\\mathslbb{z}$',
+u'\U0001d5a0': '$\\mathsf{A}$',
+u'\U0001d5a1': '$\\mathsf{B}$',
+u'\U0001d5a2': '$\\mathsf{C}$',
+u'\U0001d5a3': '$\\mathsf{D}$',
+u'\U0001d5a4': '$\\mathsf{E}$',
+u'\U0001d5a5': '$\\mathsf{F}$',
+u'\U0001d5a6': '$\\mathsf{G}$',
+u'\U0001d5a7': '$\\mathsf{H}$',
+u'\U0001d5a8': '$\\mathsf{I}$',
+u'\U0001d5a9': '$\\mathsf{J}$',
+u'\U0001d5aa': '$\\mathsf{K}$',
+u'\U0001d5ab': '$\\mathsf{L}$',
+u'\U0001d5ac': '$\\mathsf{M}$',
+u'\U0001d5ad': '$\\mathsf{N}$',
+u'\U0001d5ae': '$\\mathsf{O}$',
+u'\U0001d5af': '$\\mathsf{P}$',
+u'\U0001d5b0': '$\\mathsf{Q}$',
+u'\U0001d5b1': '$\\mathsf{R}$',
+u'\U0001d5b2': '$\\mathsf{S}$',
+u'\U0001d5b3': '$\\mathsf{T}$',
+u'\U0001d5b4': '$\\mathsf{U}$',
+u'\U0001d5b5': '$\\mathsf{V}$',
+u'\U0001d5b6': '$\\mathsf{W}$',
+u'\U0001d5b7': '$\\mathsf{X}$',
+u'\U0001d5b8': '$\\mathsf{Y}$',
+u'\U0001d5b9': '$\\mathsf{Z}$',
+u'\U0001d5ba': '$\\mathsf{a}$',
+u'\U0001d5bb': '$\\mathsf{b}$',
+u'\U0001d5bc': '$\\mathsf{c}$',
+u'\U0001d5bd': '$\\mathsf{d}$',
+u'\U0001d5be': '$\\mathsf{e}$',
+u'\U0001d5bf': '$\\mathsf{f}$',
+u'\U0001d5c0': '$\\mathsf{g}$',
+u'\U0001d5c1': '$\\mathsf{h}$',
+u'\U0001d5c2': '$\\mathsf{i}$',
+u'\U0001d5c3': '$\\mathsf{j}$',
+u'\U0001d5c4': '$\\mathsf{k}$',
+u'\U0001d5c5': '$\\mathsf{l}$',
+u'\U0001d5c6': '$\\mathsf{m}$',
+u'\U0001d5c7': '$\\mathsf{n}$',
+u'\U0001d5c8': '$\\mathsf{o}$',
+u'\U0001d5c9': '$\\mathsf{p}$',
+u'\U0001d5ca': '$\\mathsf{q}$',
+u'\U0001d5cb': '$\\mathsf{r}$',
+u'\U0001d5cc': '$\\mathsf{s}$',
+u'\U0001d5cd': '$\\mathsf{t}$',
+u'\U0001d5ce': '$\\mathsf{u}$',
+u'\U0001d5cf': '$\\mathsf{v}$',
+u'\U0001d5d0': '$\\mathsf{w}$',
+u'\U0001d5d1': '$\\mathsf{x}$',
+u'\U0001d5d2': '$\\mathsf{y}$',
+u'\U0001d5d3': '$\\mathsf{z}$',
+u'\U0001d5d4': '$\\mathsfbf{A}$',
+u'\U0001d5d5': '$\\mathsfbf{B}$',
+u'\U0001d5d6': '$\\mathsfbf{C}$',
+u'\U0001d5d7': '$\\mathsfbf{D}$',
+u'\U0001d5d8': '$\\mathsfbf{E}$',
+u'\U0001d5d9': '$\\mathsfbf{F}$',
+u'\U0001d5da': '$\\mathsfbf{G}$',
+u'\U0001d5db': '$\\mathsfbf{H}$',
+u'\U0001d5dc': '$\\mathsfbf{I}$',
+u'\U0001d5dd': '$\\mathsfbf{J}$',
+u'\U0001d5de': '$\\mathsfbf{K}$',
+u'\U0001d5df': '$\\mathsfbf{L}$',
+u'\U0001d5e0': '$\\mathsfbf{M}$',
+u'\U0001d5e1': '$\\mathsfbf{N}$',
+u'\U0001d5e2': '$\\mathsfbf{O}$',
+u'\U0001d5e3': '$\\mathsfbf{P}$',
+u'\U0001d5e4': '$\\mathsfbf{Q}$',
+u'\U0001d5e5': '$\\mathsfbf{R}$',
+u'\U0001d5e6': '$\\mathsfbf{S}$',
+u'\U0001d5e7': '$\\mathsfbf{T}$',
+u'\U0001d5e8': '$\\mathsfbf{U}$',
+u'\U0001d5e9': '$\\mathsfbf{V}$',
+u'\U0001d5ea': '$\\mathsfbf{W}$',
+u'\U0001d5eb': '$\\mathsfbf{X}$',
+u'\U0001d5ec': '$\\mathsfbf{Y}$',
+u'\U0001d5ed': '$\\mathsfbf{Z}$',
+u'\U0001d5ee': '$\\mathsfbf{a}$',
+u'\U0001d5ef': '$\\mathsfbf{b}$',
+u'\U0001d5f0': '$\\mathsfbf{c}$',
+u'\U0001d5f1': '$\\mathsfbf{d}$',
+u'\U0001d5f2': '$\\mathsfbf{e}$',
+u'\U0001d5f3': '$\\mathsfbf{f}$',
+u'\U0001d5f4': '$\\mathsfbf{g}$',
+u'\U0001d5f5': '$\\mathsfbf{h}$',
+u'\U0001d5f6': '$\\mathsfbf{i}$',
+u'\U0001d5f7': '$\\mathsfbf{j}$',
+u'\U0001d5f8': '$\\mathsfbf{k}$',
+u'\U0001d5f9': '$\\mathsfbf{l}$',
+u'\U0001d5fa': '$\\mathsfbf{m}$',
+u'\U0001d5fb': '$\\mathsfbf{n}$',
+u'\U0001d5fc': '$\\mathsfbf{o}$',
+u'\U0001d5fd': '$\\mathsfbf{p}$',
+u'\U0001d5fe': '$\\mathsfbf{q}$',
+u'\U0001d5ff': '$\\mathsfbf{r}$',
+u'\U0001d600': '$\\mathsfbf{s}$',
+u'\U0001d601': '$\\mathsfbf{t}$',
+u'\U0001d602': '$\\mathsfbf{u}$',
+u'\U0001d603': '$\\mathsfbf{v}$',
+u'\U0001d604': '$\\mathsfbf{w}$',
+u'\U0001d605': '$\\mathsfbf{x}$',
+u'\U0001d606': '$\\mathsfbf{y}$',
+u'\U0001d607': '$\\mathsfbf{z}$',
+u'\U0001d608': '$\\mathsfsl{A}$',
+u'\U0001d609': '$\\mathsfsl{B}$',
+u'\U0001d60a': '$\\mathsfsl{C}$',
+u'\U0001d60b': '$\\mathsfsl{D}$',
+u'\U0001d60c': '$\\mathsfsl{E}$',
+u'\U0001d60d': '$\\mathsfsl{F}$',
+u'\U0001d60e': '$\\mathsfsl{G}$',
+u'\U0001d60f': '$\\mathsfsl{H}$',
+u'\U0001d610': '$\\mathsfsl{I}$',
+u'\U0001d611': '$\\mathsfsl{J}$',
+u'\U0001d612': '$\\mathsfsl{K}$',
+u'\U0001d613': '$\\mathsfsl{L}$',
+u'\U0001d614': '$\\mathsfsl{M}$',
+u'\U0001d615': '$\\mathsfsl{N}$',
+u'\U0001d616': '$\\mathsfsl{O}$',
+u'\U0001d617': '$\\mathsfsl{P}$',
+u'\U0001d618': '$\\mathsfsl{Q}$',
+u'\U0001d619': '$\\mathsfsl{R}$',
+u'\U0001d61a': '$\\mathsfsl{S}$',
+u'\U0001d61b': '$\\mathsfsl{T}$',
+u'\U0001d61c': '$\\mathsfsl{U}$',
+u'\U0001d61d': '$\\mathsfsl{V}$',
+u'\U0001d61e': '$\\mathsfsl{W}$',
+u'\U0001d61f': '$\\mathsfsl{X}$',
+u'\U0001d620': '$\\mathsfsl{Y}$',
+u'\U0001d621': '$\\mathsfsl{Z}$',
+u'\U0001d622': '$\\mathsfsl{a}$',
+u'\U0001d623': '$\\mathsfsl{b}$',
+u'\U0001d624': '$\\mathsfsl{c}$',
+u'\U0001d625': '$\\mathsfsl{d}$',
+u'\U0001d626': '$\\mathsfsl{e}$',
+u'\U0001d627': '$\\mathsfsl{f}$',
+u'\U0001d628': '$\\mathsfsl{g}$',
+u'\U0001d629': '$\\mathsfsl{h}$',
+u'\U0001d62a': '$\\mathsfsl{i}$',
+u'\U0001d62b': '$\\mathsfsl{j}$',
+u'\U0001d62c': '$\\mathsfsl{k}$',
+u'\U0001d62d': '$\\mathsfsl{l}$',
+u'\U0001d62e': '$\\mathsfsl{m}$',
+u'\U0001d62f': '$\\mathsfsl{n}$',
+u'\U0001d630': '$\\mathsfsl{o}$',
+u'\U0001d631': '$\\mathsfsl{p}$',
+u'\U0001d632': '$\\mathsfsl{q}$',
+u'\U0001d633': '$\\mathsfsl{r}$',
+u'\U0001d634': '$\\mathsfsl{s}$',
+u'\U0001d635': '$\\mathsfsl{t}$',
+u'\U0001d636': '$\\mathsfsl{u}$',
+u'\U0001d637': '$\\mathsfsl{v}$',
+u'\U0001d638': '$\\mathsfsl{w}$',
+u'\U0001d639': '$\\mathsfsl{x}$',
+u'\U0001d63a': '$\\mathsfsl{y}$',
+u'\U0001d63b': '$\\mathsfsl{z}$',
+u'\U0001d63c': '$\\mathsfbfsl{A}$',
+u'\U0001d63d': '$\\mathsfbfsl{B}$',
+u'\U0001d63e': '$\\mathsfbfsl{C}$',
+u'\U0001d63f': '$\\mathsfbfsl{D}$',
+u'\U0001d640': '$\\mathsfbfsl{E}$',
+u'\U0001d641': '$\\mathsfbfsl{F}$',
+u'\U0001d642': '$\\mathsfbfsl{G}$',
+u'\U0001d643': '$\\mathsfbfsl{H}$',
+u'\U0001d644': '$\\mathsfbfsl{I}$',
+u'\U0001d645': '$\\mathsfbfsl{J}$',
+u'\U0001d646': '$\\mathsfbfsl{K}$',
+u'\U0001d647': '$\\mathsfbfsl{L}$',
+u'\U0001d648': '$\\mathsfbfsl{M}$',
+u'\U0001d649': '$\\mathsfbfsl{N}$',
+u'\U0001d64a': '$\\mathsfbfsl{O}$',
+u'\U0001d64b': '$\\mathsfbfsl{P}$',
+u'\U0001d64c': '$\\mathsfbfsl{Q}$',
+u'\U0001d64d': '$\\mathsfbfsl{R}$',
+u'\U0001d64e': '$\\mathsfbfsl{S}$',
+u'\U0001d64f': '$\\mathsfbfsl{T}$',
+u'\U0001d650': '$\\mathsfbfsl{U}$',
+u'\U0001d651': '$\\mathsfbfsl{V}$',
+u'\U0001d652': '$\\mathsfbfsl{W}$',
+u'\U0001d653': '$\\mathsfbfsl{X}$',
+u'\U0001d654': '$\\mathsfbfsl{Y}$',
+u'\U0001d655': '$\\mathsfbfsl{Z}$',
+u'\U0001d656': '$\\mathsfbfsl{a}$',
+u'\U0001d657': '$\\mathsfbfsl{b}$',
+u'\U0001d658': '$\\mathsfbfsl{c}$',
+u'\U0001d659': '$\\mathsfbfsl{d}$',
+u'\U0001d65a': '$\\mathsfbfsl{e}$',
+u'\U0001d65b': '$\\mathsfbfsl{f}$',
+u'\U0001d65c': '$\\mathsfbfsl{g}$',
+u'\U0001d65d': '$\\mathsfbfsl{h}$',
+u'\U0001d65e': '$\\mathsfbfsl{i}$',
+u'\U0001d65f': '$\\mathsfbfsl{j}$',
+u'\U0001d660': '$\\mathsfbfsl{k}$',
+u'\U0001d661': '$\\mathsfbfsl{l}$',
+u'\U0001d662': '$\\mathsfbfsl{m}$',
+u'\U0001d663': '$\\mathsfbfsl{n}$',
+u'\U0001d664': '$\\mathsfbfsl{o}$',
+u'\U0001d665': '$\\mathsfbfsl{p}$',
+u'\U0001d666': '$\\mathsfbfsl{q}$',
+u'\U0001d667': '$\\mathsfbfsl{r}$',
+u'\U0001d668': '$\\mathsfbfsl{s}$',
+u'\U0001d669': '$\\mathsfbfsl{t}$',
+u'\U0001d66a': '$\\mathsfbfsl{u}$',
+u'\U0001d66b': '$\\mathsfbfsl{v}$',
+u'\U0001d66c': '$\\mathsfbfsl{w}$',
+u'\U0001d66d': '$\\mathsfbfsl{x}$',
+u'\U0001d66e': '$\\mathsfbfsl{y}$',
+u'\U0001d66f': '$\\mathsfbfsl{z}$',
+u'\U0001d670': '$\\mathtt{A}$',
+u'\U0001d671': '$\\mathtt{B}$',
+u'\U0001d672': '$\\mathtt{C}$',
+u'\U0001d673': '$\\mathtt{D}$',
+u'\U0001d674': '$\\mathtt{E}$',
+u'\U0001d675': '$\\mathtt{F}$',
+u'\U0001d676': '$\\mathtt{G}$',
+u'\U0001d677': '$\\mathtt{H}$',
+u'\U0001d678': '$\\mathtt{I}$',
+u'\U0001d679': '$\\mathtt{J}$',
+u'\U0001d67a': '$\\mathtt{K}$',
+u'\U0001d67b': '$\\mathtt{L}$',
+u'\U0001d67c': '$\\mathtt{M}$',
+u'\U0001d67d': '$\\mathtt{N}$',
+u'\U0001d67e': '$\\mathtt{O}$',
+u'\U0001d67f': '$\\mathtt{P}$',
+u'\U0001d680': '$\\mathtt{Q}$',
+u'\U0001d681': '$\\mathtt{R}$',
+u'\U0001d682': '$\\mathtt{S}$',
+u'\U0001d683': '$\\mathtt{T}$',
+u'\U0001d684': '$\\mathtt{U}$',
+u'\U0001d685': '$\\mathtt{V}$',
+u'\U0001d686': '$\\mathtt{W}$',
+u'\U0001d687': '$\\mathtt{X}$',
+u'\U0001d688': '$\\mathtt{Y}$',
+u'\U0001d689': '$\\mathtt{Z}$',
+u'\U0001d68a': '$\\mathtt{a}$',
+u'\U0001d68b': '$\\mathtt{b}$',
+u'\U0001d68c': '$\\mathtt{c}$',
+u'\U0001d68d': '$\\mathtt{d}$',
+u'\U0001d68e': '$\\mathtt{e}$',
+u'\U0001d68f': '$\\mathtt{f}$',
+u'\U0001d690': '$\\mathtt{g}$',
+u'\U0001d691': '$\\mathtt{h}$',
+u'\U0001d692': '$\\mathtt{i}$',
+u'\U0001d693': '$\\mathtt{j}$',
+u'\U0001d694': '$\\mathtt{k}$',
+u'\U0001d695': '$\\mathtt{l}$',
+u'\U0001d696': '$\\mathtt{m}$',
+u'\U0001d697': '$\\mathtt{n}$',
+u'\U0001d698': '$\\mathtt{o}$',
+u'\U0001d699': '$\\mathtt{p}$',
+u'\U0001d69a': '$\\mathtt{q}$',
+u'\U0001d69b': '$\\mathtt{r}$',
+u'\U0001d69c': '$\\mathtt{s}$',
+u'\U0001d69d': '$\\mathtt{t}$',
+u'\U0001d69e': '$\\mathtt{u}$',
+u'\U0001d69f': '$\\mathtt{v}$',
+u'\U0001d6a0': '$\\mathtt{w}$',
+u'\U0001d6a1': '$\\mathtt{x}$',
+u'\U0001d6a2': '$\\mathtt{y}$',
+u'\U0001d6a3': '$\\mathtt{z}$',
+u'\U0001d6a8': '$\\mathbf{\\Alpha}$',
+u'\U0001d6a9': '$\\mathbf{\\Beta}$',
+u'\U0001d6aa': '$\\mathbf{\\Gamma}$',
+u'\U0001d6ab': '$\\mathbf{\\Delta}$',
+u'\U0001d6ac': '$\\mathbf{\\Epsilon}$',
+u'\U0001d6ad': '$\\mathbf{\\Zeta}$',
+u'\U0001d6ae': '$\\mathbf{\\Eta}$',
+u'\U0001d6af': '$\\mathbf{\\Theta}$',
+u'\U0001d6b0': '$\\mathbf{\\Iota}$',
+u'\U0001d6b1': '$\\mathbf{\\Kappa}$',
+u'\U0001d6b2': '$\\mathbf{\\Lambda}$',
+u'\U0001d6b3': '$M$',
+u'\U0001d6b4': '$N$',
+u'\U0001d6b5': '$\\mathbf{\\Xi}$',
+u'\U0001d6b6': '$O$',
+u'\U0001d6b7': '$\\mathbf{\\Pi}$',
+u'\U0001d6b8': '$\\mathbf{\\Rho}$',
+u'\U0001d6b9': '{\\mathbf{\\vartheta}}',
+u'\U0001d6ba': '$\\mathbf{\\Sigma}$',
+u'\U0001d6bb': '$\\mathbf{\\Tau}$',
+u'\U0001d6bc': '$\\mathbf{\\Upsilon}$',
+u'\U0001d6bd': '$\\mathbf{\\Phi}$',
+u'\U0001d6be': '$\\mathbf{\\Chi}$',
+u'\U0001d6bf': '$\\mathbf{\\Psi}$',
+u'\U0001d6c0': '$\\mathbf{\\Omega}$',
+u'\U0001d6c1': '$\\mathbf{\\nabla}$',
+u'\U0001d6c2': '$\\mathbf{\\alpha}$',
+u'\U0001d6c3': '$\\mathbf{\\beta}$',
+u'\U0001d6c4': '$\\mathbf{\\gamma}$',
+u'\U0001d6c5': '$\\mathbf{\\delta}$',
+u'\U0001d6c6': '$\\mathbf{\\varepsilon}$',
+u'\U0001d6c7': '$\\mathbf{\\zeta}$',
+u'\U0001d6c8': '$\\mathbf{\\eta}$',
+u'\U0001d6c9': '$\\mathbf{\\theta}$',
+u'\U0001d6ca': '$\\mathbf{\\iota}$',
+u'\U0001d6cb': '$\\mathbf{\\kappa}$',
+u'\U0001d6cc': '$\\mathbf{\\lambda}$',
+u'\U0001d6cd': '$\\mathbf{\\mu}$',
+u'\U0001d6ce': '$\\mathbf{\\nu}$',
+u'\U0001d6cf': '$\\mathbf{\\xi}$',
+u'\U0001d6d0': '$\\mathbf{o}$',
+u'\U0001d6d1': '$\\mathbf{\\pi}$',
+u'\U0001d6d2': '$\\mathbf{\\rho}$',
+u'\U0001d6d3': '$\\mathbf{\\varsigma}$',
+u'\U0001d6d4': '$\\mathbf{\\sigma}$',
+u'\U0001d6d5': '$\\mathbf{\\tau}$',
+u'\U0001d6d6': '$\\mathbf{\\upsilon}$',
+u'\U0001d6d7': '$\\mathbf{\\varphi}$',
+u'\U0001d6d8': '$\\mathbf{\\chi}$',
+u'\U0001d6d9': '$\\mathbf{\\psi}$',
+u'\U0001d6da': '$\\mathbf{\\omega}$',
+u'\U0001d6db': '$\\partial$',
+u'\U0001d6dc': '$\\mathbf{\\epsilon}$',
+u'\U0001d6dd': '{\\mathbf{\\vartheta}}',
+u'\U0001d6de': '{\\mathbf{\\varkappa}}',
+u'\U0001d6df': '{\\mathbf{\\phi}}',
+u'\U0001d6e0': '{\\mathbf{\\varrho}}',
+u'\U0001d6e1': '{\\mathbf{\\varpi}}',
+u'\U0001d6e2': '$\\mathsl{\\Alpha}$',
+u'\U0001d6e3': '$\\mathsl{\\Beta}$',
+u'\U0001d6e4': '$\\mathsl{\\Gamma}$',
+u'\U0001d6e5': '$\\mathsl{\\Delta}$',
+u'\U0001d6e6': '$\\mathsl{\\Epsilon}$',
+u'\U0001d6e7': '$\\mathsl{\\Zeta}$',
+u'\U0001d6e8': '$\\mathsl{\\Eta}$',
+u'\U0001d6e9': '$\\mathsl{\\Theta}$',
+u'\U0001d6ea': '$\\mathsl{\\Iota}$',
+u'\U0001d6eb': '$\\mathsl{\\Kappa}$',
+u'\U0001d6ec': '$\\mathsl{\\Lambda}$',
+u'\U0001d6ed': '$\\mathsl{M}$',
+u'\U0001d6ee': '$\\mathsl{N}$',
+u'\U0001d6ef': '$\\mathsl{\\Xi}$',
+u'\U0001d6f0': '$\\mathsl{O}$',
+u'\U0001d6f1': '$\\mathsl{\\Pi}$',
+u'\U0001d6f2': '$\\mathsl{\\Rho}$',
+u'\U0001d6f3': '{\\mathsl{\\vartheta}}',
+u'\U0001d6f4': '$\\mathsl{\\Sigma}$',
+u'\U0001d6f5': '$\\mathsl{\\Tau}$',
+u'\U0001d6f6': '$\\mathsl{\\Upsilon}$',
+u'\U0001d6f7': '$\\mathsl{\\Phi}$',
+u'\U0001d6f8': '$\\mathsl{\\Chi}$',
+u'\U0001d6f9': '$\\mathsl{\\Psi}$',
+u'\U0001d6fa': '$\\mathsl{\\Omega}$',
+u'\U0001d6fb': '$\\mathsl{\\nabla}$',
+u'\U0001d6fc': '$\\mathsl{\\alpha}$',
+u'\U0001d6fd': '$\\mathsl{\\beta}$',
+u'\U0001d6fe': '$\\mathsl{\\gamma}$',
+u'\U0001d6ff': '$\\mathsl{\\delta}$',
+u'\U0001d700': '$\\mathsl{\\varepsilon}$',
+u'\U0001d701': '$\\mathsl{\\zeta}$',
+u'\U0001d702': '$\\mathsl{\\eta}$',
+u'\U0001d703': '$\\mathsl{\\theta}$',
+u'\U0001d704': '$\\mathsl{\\iota}$',
+u'\U0001d705': '$\\mathsl{\\kappa}$',
+u'\U0001d706': '$\\mathsl{\\lambda}$',
+u'\U0001d707': '$\\mathsl{\\mu}$',
+u'\U0001d708': '$\\mathsl{\\nu}$',
+u'\U0001d709': '$\\mathsl{\\xi}$',
+u'\U0001d70a': '$\\mathsl{o}$',
+u'\U0001d70b': '$\\mathsl{\\pi}$',
+u'\U0001d70c': '$\\mathsl{\\rho}$',
+u'\U0001d70d': '$\\mathsl{\\varsigma}$',
+u'\U0001d70e': '$\\mathsl{\\sigma}$',
+u'\U0001d70f': '$\\mathsl{\\tau}$',
+u'\U0001d710': '$\\mathsl{\\upsilon}$',
+u'\U0001d711': '$\\mathsl{\\varphi}$',
+u'\U0001d712': '$\\mathsl{\\chi}$',
+u'\U0001d713': '$\\mathsl{\\psi}$',
+u'\U0001d714': '$\\mathsl{\\omega}$',
+u'\U0001d715': '$\\partial$',
+u'\U0001d716': '$\\mathsl{\\epsilon}$',
+u'\U0001d717': '{\\mathsl{\\vartheta}}',
+u'\U0001d718': '{\\mathsl{\\varkappa}}',
+u'\U0001d719': '{\\mathsl{\\phi}}',
+u'\U0001d71a': '{\\mathsl{\\varrho}}',
+u'\U0001d71b': '{\\mathsl{\\varpi}}',
+u'\U0001d71c': '$\\mathbit{\\Alpha}$',
+u'\U0001d71d': '$\\mathbit{\\Beta}$',
+u'\U0001d71e': '$\\mathbit{\\Gamma}$',
+u'\U0001d71f': '$\\mathbit{\\Delta}$',
+u'\U0001d720': '$\\mathbit{\\Epsilon}$',
+u'\U0001d721': '$\\mathbit{\\Zeta}$',
+u'\U0001d722': '$\\mathbit{\\Eta}$',
+u'\U0001d723': '$\\mathbit{\\Theta}$',
+u'\U0001d724': '$\\mathbit{\\Iota}$',
+u'\U0001d725': '$\\mathbit{\\Kappa}$',
+u'\U0001d726': '$\\mathbit{\\Lambda}$',
+u'\U0001d727': '$\\mathbit{M}$',
+u'\U0001d728': '$\\mathbit{N}$',
+u'\U0001d729': '$\\mathbit{\\Xi}$',
+u'\U0001d72a': '$\\mathbit{O}$',
+u'\U0001d72b': '$\\mathbit{\\Pi}$',
+u'\U0001d72c': '$\\mathbit{\\Rho}$',
+u'\U0001d72d': '{\\mathbit{\\vartheta}}',
+u'\U0001d72e': '$\\mathbit{\\Sigma}$',
+u'\U0001d72f': '$\\mathbit{\\Tau}$',
+u'\U0001d730': '$\\mathbit{\\Upsilon}$',
+u'\U0001d731': '$\\mathbit{\\Phi}$',
+u'\U0001d732': '$\\mathbit{\\Chi}$',
+u'\U0001d733': '$\\mathbit{\\Psi}$',
+u'\U0001d734': '$\\mathbit{\\Omega}$',
+u'\U0001d735': '$\\mathbit{\\nabla}$',
+u'\U0001d736': '$\\mathbit{\\alpha}$',
+u'\U0001d737': '$\\mathbit{\\beta}$',
+u'\U0001d738': '$\\mathbit{\\gamma}$',
+u'\U0001d739': '$\\mathbit{\\delta}$',
+u'\U0001d73a': '$\\mathbit{\\varepsilon}$',
+u'\U0001d73b': '$\\mathbit{\\zeta}$',
+u'\U0001d73c': '$\\mathbit{\\eta}$',
+u'\U0001d73d': '$\\mathbit{\\theta}$',
+u'\U0001d73e': '$\\mathbit{\\iota}$',
+u'\U0001d73f': '$\\mathbit{\\kappa}$',
+u'\U0001d740': '$\\mathbit{\\lambda}$',
+u'\U0001d741': '$\\mathbit{\\mu}$',
+u'\U0001d742': '$\\mathbit{\\nu}$',
+u'\U0001d743': '$\\mathbit{\\xi}$',
+u'\U0001d744': '$\\mathbit{o}$',
+u'\U0001d745': '$\\mathbit{\\pi}$',
+u'\U0001d746': '$\\mathbit{\\rho}$',
+u'\U0001d747': '$\\mathbit{\\varsigma}$',
+u'\U0001d748': '$\\mathbit{\\sigma}$',
+u'\U0001d749': '$\\mathbit{\\tau}$',
+u'\U0001d74a': '$\\mathbit{\\upsilon}$',
+u'\U0001d74b': '$\\mathbit{\\varphi}$',
+u'\U0001d74c': '$\\mathbit{\\chi}$',
+u'\U0001d74d': '$\\mathbit{\\psi}$',
+u'\U0001d74e': '$\\mathbit{\\omega}$',
+u'\U0001d74f': '$\\partial$',
+u'\U0001d750': '$\\mathbit{\\epsilon}$',
+u'\U0001d751': '{\\mathbit{\\vartheta}}',
+u'\U0001d752': '{\\mathbit{\\varkappa}}',
+u'\U0001d753': '{\\mathbit{\\phi}}',
+u'\U0001d754': '{\\mathbit{\\varrho}}',
+u'\U0001d755': '{\\mathbit{\\varpi}}',
+u'\U0001d756': '$\\mathsfbf{\\Alpha}$',
+u'\U0001d757': '$\\mathsfbf{\\Beta}$',
+u'\U0001d758': '$\\mathsfbf{\\Gamma}$',
+u'\U0001d759': '$\\mathsfbf{\\Delta}$',
+u'\U0001d75a': '$\\mathsfbf{\\Epsilon}$',
+u'\U0001d75b': '$\\mathsfbf{\\Zeta}$',
+u'\U0001d75c': '$\\mathsfbf{\\Eta}$',
+u'\U0001d75d': '$\\mathsfbf{\\Theta}$',
+u'\U0001d75e': '$\\mathsfbf{\\Iota}$',
+u'\U0001d75f': '$\\mathsfbf{\\Kappa}$',
+u'\U0001d760': '$\\mathsfbf{\\Lambda}$',
+u'\U0001d761': '$\\mathsfbf{M}$',
+u'\U0001d762': '$\\mathsfbf{N}$',
+u'\U0001d763': '$\\mathsfbf{\\Xi}$',
+u'\U0001d764': '$\\mathsfbf{O}$',
+u'\U0001d765': '$\\mathsfbf{\\Pi}$',
+u'\U0001d766': '$\\mathsfbf{\\Rho}$',
+u'\U0001d767': '{\\mathsfbf{\\vartheta}}',
+u'\U0001d768': '$\\mathsfbf{\\Sigma}$',
+u'\U0001d769': '$\\mathsfbf{\\Tau}$',
+u'\U0001d76a': '$\\mathsfbf{\\Upsilon}$',
+u'\U0001d76b': '$\\mathsfbf{\\Phi}$',
+u'\U0001d76c': '$\\mathsfbf{\\Chi}$',
+u'\U0001d76d': '$\\mathsfbf{\\Psi}$',
+u'\U0001d76e': '$\\mathsfbf{\\Omega}$',
+u'\U0001d76f': '$\\mathsfbf{\\nabla}$',
+u'\U0001d770': '$\\mathsfbf{\\alpha}$',
+u'\U0001d771': '$\\mathsfbf{\\beta}$',
+u'\U0001d772': '$\\mathsfbf{\\gamma}$',
+u'\U0001d773': '$\\mathsfbf{\\delta}$',
+u'\U0001d774': '$\\mathsfbf{\\varepsilon}$',
+u'\U0001d775': '$\\mathsfbf{\\zeta}$',
+u'\U0001d776': '$\\mathsfbf{\\eta}$',
+u'\U0001d777': '$\\mathsfbf{\\theta}$',
+u'\U0001d778': '$\\mathsfbf{\\iota}$',
+u'\U0001d779': '$\\mathsfbf{\\kappa}$',
+u'\U0001d77a': '$\\mathsfbf{\\lambda}$',
+u'\U0001d77b': '$\\mathsfbf{\\mu}$',
+u'\U0001d77c': '$\\mathsfbf{\\nu}$',
+u'\U0001d77d': '$\\mathsfbf{\\xi}$',
+u'\U0001d77e': '$\\mathsfbf{o}$',
+u'\U0001d77f': '$\\mathsfbf{\\pi}$',
+u'\U0001d780': '$\\mathsfbf{\\rho}$',
+u'\U0001d781': '$\\mathsfbf{\\varsigma}$',
+u'\U0001d782': '$\\mathsfbf{\\sigma}$',
+u'\U0001d783': '$\\mathsfbf{\\tau}$',
+u'\U0001d784': '$\\mathsfbf{\\upsilon}$',
+u'\U0001d785': '$\\mathsfbf{\\varphi}$',
+u'\U0001d786': '$\\mathsfbf{\\chi}$',
+u'\U0001d787': '$\\mathsfbf{\\psi}$',
+u'\U0001d788': '$\\mathsfbf{\\omega}$',
+u'\U0001d789': '$\\partial$',
+u'\U0001d78a': '$\\mathsfbf{\\epsilon}$',
+u'\U0001d78b': '{\\mathsfbf{\\vartheta}}',
+u'\U0001d78c': '{\\mathsfbf{\\varkappa}}',
+u'\U0001d78d': '{\\mathsfbf{\\phi}}',
+u'\U0001d78e': '{\\mathsfbf{\\varrho}}',
+u'\U0001d78f': '{\\mathsfbf{\\varpi}}',
+u'\U0001d790': '$\\mathsfbfsl{\\Alpha}$',
+u'\U0001d791': '$\\mathsfbfsl{\\Beta}$',
+u'\U0001d792': '$\\mathsfbfsl{\\Gamma}$',
+u'\U0001d793': '$\\mathsfbfsl{\\Delta}$',
+u'\U0001d794': '$\\mathsfbfsl{\\Epsilon}$',
+u'\U0001d795': '$\\mathsfbfsl{\\Zeta}$',
+u'\U0001d796': '$\\mathsfbfsl{\\Eta}$',
+u'\U0001d797': '$\\mathsfbfsl{\\Theta}$',
+u'\U0001d798': '$\\mathsfbfsl{\\Iota}$',
+u'\U0001d799': '$\\mathsfbfsl{\\Kappa}$',
+u'\U0001d79a': '$\\mathsfbfsl{\\Lambda}$',
+u'\U0001d79b': '$\\mathsfbfsl{M}$',
+u'\U0001d79c': '$\\mathsfbfsl{N}$',
+u'\U0001d79d': '$\\mathsfbfsl{\\Xi}$',
+u'\U0001d79e': '$\\mathsfbfsl{O}$',
+u'\U0001d79f': '$\\mathsfbfsl{\\Pi}$',
+u'\U0001d7a0': '$\\mathsfbfsl{\\Rho}$',
+u'\U0001d7a1': '{\\mathsfbfsl{\\vartheta}}',
+u'\U0001d7a2': '$\\mathsfbfsl{\\Sigma}$',
+u'\U0001d7a3': '$\\mathsfbfsl{\\Tau}$',
+u'\U0001d7a4': '$\\mathsfbfsl{\\Upsilon}$',
+u'\U0001d7a5': '$\\mathsfbfsl{\\Phi}$',
+u'\U0001d7a6': '$\\mathsfbfsl{\\Chi}$',
+u'\U0001d7a7': '$\\mathsfbfsl{\\Psi}$',
+u'\U0001d7a8': '$\\mathsfbfsl{\\Omega}$',
+u'\U0001d7a9': '$\\mathsfbfsl{\\nabla}$',
+u'\U0001d7aa': '$\\mathsfbfsl{\\alpha}$',
+u'\U0001d7ab': '$\\mathsfbfsl{\\beta}$',
+u'\U0001d7ac': '$\\mathsfbfsl{\\gamma}$',
+u'\U0001d7ad': '$\\mathsfbfsl{\\delta}$',
+u'\U0001d7ae': '$\\mathsfbfsl{\\varepsilon}$',
+u'\U0001d7af': '$\\mathsfbfsl{\\zeta}$',
+u'\U0001d7b0': '$\\mathsfbfsl{\\eta}$',
+u'\U0001d7b1': '$\\mathsfbfsl{\\theta}$',
+u'\U0001d7b2': '$\\mathsfbfsl{\\iota}$',
+u'\U0001d7b3': '$\\mathsfbfsl{\\kappa}$',
+u'\U0001d7b4': '$\\mathsfbfsl{\\lambda}$',
+u'\U0001d7b5': '$\\mathsfbfsl{\\mu}$',
+u'\U0001d7b6': '$\\mathsfbfsl{\\nu}$',
+u'\U0001d7b7': '$\\mathsfbfsl{\\xi}$',
+u'\U0001d7b8': '$\\mathsfbfsl{o}$',
+u'\U0001d7b9': '$\\mathsfbfsl{\\pi}$',
+u'\U0001d7ba': '$\\mathsfbfsl{\\rho}$',
+u'\U0001d7bb': '$\\mathsfbfsl{\\varsigma}$',
+u'\U0001d7bc': '$\\mathsfbfsl{\\sigma}$',
+u'\U0001d7bd': '$\\mathsfbfsl{\\tau}$',
+u'\U0001d7be': '$\\mathsfbfsl{\\upsilon}$',
+u'\U0001d7bf': '$\\mathsfbfsl{\\varphi}$',
+u'\U0001d7c0': '$\\mathsfbfsl{\\chi}$',
+u'\U0001d7c1': '$\\mathsfbfsl{\\psi}$',
+u'\U0001d7c2': '$\\mathsfbfsl{\\omega}$',
+u'\U0001d7c3': '$\\partial$',
+u'\U0001d7c4': '$\\mathsfbfsl{\\epsilon}$',
+u'\U0001d7c5': '{\\mathsfbfsl{\\vartheta}}',
+u'\U0001d7c6': '{\\mathsfbfsl{\\varkappa}}',
+u'\U0001d7c7': '{\\mathsfbfsl{\\phi}}',
+u'\U0001d7c8': '{\\mathsfbfsl{\\varrho}}',
+u'\U0001d7c9': '{\\mathsfbfsl{\\varpi}}',
+u'\U0001d7ce': '$\\mathbf{0}$',
+u'\U0001d7cf': '$\\mathbf{1}$',
+u'\U0001d7d0': '$\\mathbf{2}$',
+u'\U0001d7d1': '$\\mathbf{3}$',
+u'\U0001d7d2': '$\\mathbf{4}$',
+u'\U0001d7d3': '$\\mathbf{5}$',
+u'\U0001d7d4': '$\\mathbf{6}$',
+u'\U0001d7d5': '$\\mathbf{7}$',
+u'\U0001d7d6': '$\\mathbf{8}$',
+u'\U0001d7d7': '$\\mathbf{9}$',
+u'\U0001d7d8': '$\\mathbb{0}$',
+u'\U0001d7d9': '$\\mathbb{1}$',
+u'\U0001d7da': '$\\mathbb{2}$',
+u'\U0001d7db': '$\\mathbb{3}$',
+u'\U0001d7dc': '$\\mathbb{4}$',
+u'\U0001d7dd': '$\\mathbb{5}$',
+u'\U0001d7de': '$\\mathbb{6}$',
+u'\U0001d7df': '$\\mathbb{7}$',
+u'\U0001d7e0': '$\\mathbb{8}$',
+u'\U0001d7e1': '$\\mathbb{9}$',
+u'\U0001d7e2': '$\\mathsf{0}$',
+u'\U0001d7e3': '$\\mathsf{1}$',
+u'\U0001d7e4': '$\\mathsf{2}$',
+u'\U0001d7e5': '$\\mathsf{3}$',
+u'\U0001d7e6': '$\\mathsf{4}$',
+u'\U0001d7e7': '$\\mathsf{5}$',
+u'\U0001d7e8': '$\\mathsf{6}$',
+u'\U0001d7e9': '$\\mathsf{7}$',
+u'\U0001d7ea': '$\\mathsf{8}$',
+u'\U0001d7eb': '$\\mathsf{9}$',
+u'\U0001d7ec': '$\\mathsfbf{0}$',
+u'\U0001d7ed': '$\\mathsfbf{1}$',
+u'\U0001d7ee': '$\\mathsfbf{2}$',
+u'\U0001d7ef': '$\\mathsfbf{3}$',
+u'\U0001d7f0': '$\\mathsfbf{4}$',
+u'\U0001d7f1': '$\\mathsfbf{5}$',
+u'\U0001d7f2': '$\\mathsfbf{6}$',
+u'\U0001d7f3': '$\\mathsfbf{7}$',
+u'\U0001d7f4': '$\\mathsfbf{8}$',
+u'\U0001d7f5': '$\\mathsfbf{9}$',
+u'\U0001d7f6': '$\\mathtt{0}$',
+u'\U0001d7f7': '$\\mathtt{1}$',
+u'\U0001d7f8': '$\\mathtt{2}$',
+u'\U0001d7f9': '$\\mathtt{3}$',
+u'\U0001d7fa': '$\\mathtt{4}$',
+u'\U0001d7fb': '$\\mathtt{5}$',
+u'\U0001d7fc': '$\\mathtt{6}$',
+u'\U0001d7fd': '$\\mathtt{7}$',
+u'\U0001d7fe': '$\\mathtt{8}$',
+u'\U0001d7ff': '$\\mathtt{9}$'}
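The table that ends above maps Unicode Mathematical Alphanumeric Symbols code points to LaTeX markup. A minimal sketch of how such a character-to-LaTeX table is typically consumed follows; the `to_latex` helper and the three excerpted entries are illustrative, not part of the docutils API:

```python
# Sketch: translating Unicode math characters to LaTeX via a lookup
# table like the one above. Only a few entries are excerpted here.
unicode_to_latex = {
    u'\U0001d7f6': '$\\mathtt{0}$',     # MATHEMATICAL MONOSPACE DIGIT ZERO
    u'\U0001d7ff': '$\\mathtt{9}$',     # MATHEMATICAL MONOSPACE DIGIT NINE
    u'\U0001d6d1': '$\\mathbf{\\pi}$',  # MATHEMATICAL BOLD SMALL PI
}

def to_latex(text):
    # Characters absent from the table pass through unchanged.
    return ''.join(unicode_to_latex.get(ch, ch) for ch in text)

print(to_latex(u'x = \U0001d6d1'))  # x = $\mathbf{\pi}$
```

A per-character dictionary lookup like this is why the table enumerates every styled variant of every letter and digit explicitly.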
diff --git a/python/helpers/docutils/writers/null.py b/python/helpers/docutils/writers/null.py
new file mode 100644
index 0000000..b870788
--- /dev/null
+++ b/python/helpers/docutils/writers/null.py
@@ -0,0 +1,21 @@
+# $Id: null.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+A do-nothing Writer.
+"""
+
+from docutils import writers
+
+
+class Writer(writers.UnfilteredWriter):
+
+ supported = ('null',)
+ """Formats this writer supports."""
+
+ config_section = 'null writer'
+ config_section_dependencies = ('writers',)
+
+ def translate(self):
+ pass
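`null.py` above follows the standard docutils writer shape: advertise the `supported` formats, declare a config section, and implement `translate()`. A self-contained mimic of that protocol (the `BaseWriter`/`NullWriter` names here are hypothetical, not the docutils API) shows why a do-nothing `translate()` yields no output:

```python
# Sketch of the writer protocol that a do-nothing writer implements.
# translate() normally fills self.output; a null writer leaves it None,
# which is useful for exercising parsing/transforms without rendering.
class BaseWriter(object):
    def __init__(self):
        self.output = None

    def write(self, document):
        self.document = document
        self.translate()
        return self.output

class NullWriter(BaseWriter):
    supported = ('null',)

    def translate(self):
        pass  # deliberately produce nothing

print(NullWriter().write({'body': 'ignored'}))  # None
```

In docutils itself the same effect is obtained by selecting the writer by name (e.g. `writer_name='null'`), which makes it handy for syntax-checking documents.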
diff --git a/python/helpers/docutils/writers/odf_odt/__init__.py b/python/helpers/docutils/writers/odf_odt/__init__.py
new file mode 100644
index 0000000..f7a41bf
--- /dev/null
+++ b/python/helpers/docutils/writers/odf_odt/__init__.py
@@ -0,0 +1,3131 @@
+# $Id: __init__.py 6381 2010-07-26 19:26:14Z dkuhlman $
+# Author: Dave Kuhlman <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Open Document Format (ODF) Writer.
+
+"""
+
+VERSION = '1.0a'
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import os
+import os.path
+import tempfile
+import zipfile
+from xml.dom import minidom
+import time
+import re
+import StringIO
+import inspect
+import imp
+import copy
+import docutils
+from docutils import frontend, nodes, utils, writers, languages
+from docutils.parsers import rst
+from docutils.readers import standalone
+from docutils.transforms import references
+
+
+WhichElementTree = ''
+try:
+ # 1. Try to use lxml.
+ #from lxml import etree
+ #WhichElementTree = 'lxml'
+ raise ImportError('Ignoring lxml')
+except ImportError, e:
+ try:
+ # 2. Try to use ElementTree from the Python standard library.
+ from xml.etree import ElementTree as etree
+ WhichElementTree = 'elementtree'
+ except ImportError, e:
+ try:
+ # 3. Try to use a version of ElementTree installed as a separate
+ # product.
+ from elementtree import ElementTree as etree
+ WhichElementTree = 'elementtree'
+ except ImportError, e:
+ s1 = 'Must install either a version of Python containing ' \
+ 'ElementTree (Python version >=2.5) or install ElementTree.'
+ raise ImportError(s1)
+
+#
+# Import pygments and odtwriter pygments formatters if possible.
+try:
+ import pygments
+ import pygments.lexers
+ from pygmentsformatter import OdtPygmentsProgFormatter, \
+ OdtPygmentsLaTeXFormatter
+except ImportError, exp:
+ pygments = None
+
+#
+# Is the PIL imaging library installed?
+try:
+ import Image
+except ImportError, exp:
+ Image = None
+
+## import warnings
+## warnings.warn('importing IPShellEmbed', UserWarning)
+## from IPython.Shell import IPShellEmbed
+## args = ['-pdb', '-pi1', 'In <\\#>: ', '-pi2', ' .\\D.: ',
+## '-po', 'Out<\\#>: ', '-nosep']
+## ipshell = IPShellEmbed(args,
+## banner = 'Entering IPython. Press Ctrl-D to exit.',
+## exit_msg = 'Leaving Interpreter, back to program.')
+
+
+#
+# ElementTree does not support the getparent method (lxml does).
+# This wrapper class and the following support functions add the
+# ability to get the parent of an element.
+#
+if WhichElementTree == 'elementtree':
+ class _ElementInterfaceWrapper(etree._ElementInterface):
+ def __init__(self, tag, attrib=None):
+ if attrib is None:
+ attrib = {}
+ etree._ElementInterface.__init__(self, tag, attrib)
+ self.parent = None
+ def setparent(self, parent):
+ self.parent = parent
+ def getparent(self):
+ return self.parent
+
+
+#
+# Constants and globals
+
+SPACES_PATTERN = re.compile(r'( +)')
+TABS_PATTERN = re.compile(r'(\t+)')
+FILL_PAT1 = re.compile(r'^ +')
+FILL_PAT2 = re.compile(r' {2,}')
+
+TABLESTYLEPREFIX = 'rststyle-table-'
+TABLENAMEDEFAULT = '%s0' % TABLESTYLEPREFIX
+TABLEPROPERTYNAMES = ('border', 'border-top', 'border-left',
+ 'border-right', 'border-bottom', )
+
+GENERATOR_DESC = 'Docutils.org/odf_odt'
+
+NAME_SPACE_1 = 'urn:oasis:names:tc:opendocument:xmlns:office:1.0'
+
+CONTENT_NAMESPACE_DICT = CNSD = {
+# 'office:version': '1.0',
+ 'chart': 'urn:oasis:names:tc:opendocument:xmlns:chart:1.0',
+ 'dc': 'http://purl.org/dc/elements/1.1/',
+ 'dom': 'http://www.w3.org/2001/xml-events',
+ 'dr3d': 'urn:oasis:names:tc:opendocument:xmlns:dr3d:1.0',
+ 'draw': 'urn:oasis:names:tc:opendocument:xmlns:drawing:1.0',
+ 'fo': 'urn:oasis:names:tc:opendocument:xmlns:xsl-fo-compatible:1.0',
+ 'form': 'urn:oasis:names:tc:opendocument:xmlns:form:1.0',
+ 'math': 'http://www.w3.org/1998/Math/MathML',
+ 'meta': 'urn:oasis:names:tc:opendocument:xmlns:meta:1.0',
+ 'number': 'urn:oasis:names:tc:opendocument:xmlns:datastyle:1.0',
+ 'office': NAME_SPACE_1,
+ 'ooo': 'http://openoffice.org/2004/office',
+ 'oooc': 'http://openoffice.org/2004/calc',
+ 'ooow': 'http://openoffice.org/2004/writer',
+ 'presentation': 'urn:oasis:names:tc:opendocument:xmlns:presentation:1.0',
+
+ 'script': 'urn:oasis:names:tc:opendocument:xmlns:script:1.0',
+ 'style': 'urn:oasis:names:tc:opendocument:xmlns:style:1.0',
+ 'svg': 'urn:oasis:names:tc:opendocument:xmlns:svg-compatible:1.0',
+ 'table': 'urn:oasis:names:tc:opendocument:xmlns:table:1.0',
+ 'text': 'urn:oasis:names:tc:opendocument:xmlns:text:1.0',
+ 'xforms': 'http://www.w3.org/2002/xforms',
+ 'xlink': 'http://www.w3.org/1999/xlink',
+ 'xsd': 'http://www.w3.org/2001/XMLSchema',
+ 'xsi': 'http://www.w3.org/2001/XMLSchema-instance',
+ }
+
+STYLES_NAMESPACE_DICT = SNSD = {
+# 'office:version': '1.0',
+ 'chart': 'urn:oasis:names:tc:opendocument:xmlns:chart:1.0',
+ 'dc': 'http://purl.org/dc/elements/1.1/',
+ 'dom': 'http://www.w3.org/2001/xml-events',
+ 'dr3d': 'urn:oasis:names:tc:opendocument:xmlns:dr3d:1.0',
+ 'draw': 'urn:oasis:names:tc:opendocument:xmlns:drawing:1.0',
+ 'fo': 'urn:oasis:names:tc:opendocument:xmlns:xsl-fo-compatible:1.0',
+ 'form': 'urn:oasis:names:tc:opendocument:xmlns:form:1.0',
+ 'math': 'http://www.w3.org/1998/Math/MathML',
+ 'meta': 'urn:oasis:names:tc:opendocument:xmlns:meta:1.0',
+ 'number': 'urn:oasis:names:tc:opendocument:xmlns:datastyle:1.0',
+ 'office': NAME_SPACE_1,
+ 'presentation': 'urn:oasis:names:tc:opendocument:xmlns:presentation:1.0',
+ 'ooo': 'http://openoffice.org/2004/office',
+ 'oooc': 'http://openoffice.org/2004/calc',
+ 'ooow': 'http://openoffice.org/2004/writer',
+ 'script': 'urn:oasis:names:tc:opendocument:xmlns:script:1.0',
+ 'style': 'urn:oasis:names:tc:opendocument:xmlns:style:1.0',
+ 'svg': 'urn:oasis:names:tc:opendocument:xmlns:svg-compatible:1.0',
+ 'table': 'urn:oasis:names:tc:opendocument:xmlns:table:1.0',
+ 'text': 'urn:oasis:names:tc:opendocument:xmlns:text:1.0',
+ 'xlink': 'http://www.w3.org/1999/xlink',
+ }
+
+MANIFEST_NAMESPACE_DICT = MANNSD = {
+ 'manifest': 'urn:oasis:names:tc:opendocument:xmlns:manifest:1.0',
+}
+
+META_NAMESPACE_DICT = METNSD = {
+# 'office:version': '1.0',
+ 'dc': 'http://purl.org/dc/elements/1.1/',
+ 'meta': 'urn:oasis:names:tc:opendocument:xmlns:meta:1.0',
+ 'office': NAME_SPACE_1,
+ 'ooo': 'http://openoffice.org/2004/office',
+ 'xlink': 'http://www.w3.org/1999/xlink',
+}
+
+#
+# Attribute dictionaries for use with ElementTree (not lxml), which
+# does not support use of nsmap parameter on Element() and SubElement().
+
+CONTENT_NAMESPACE_ATTRIB = {
+ 'office:version': '1.0',
+ 'xmlns:chart': 'urn:oasis:names:tc:opendocument:xmlns:chart:1.0',
+ 'xmlns:dc': 'http://purl.org/dc/elements/1.1/',
+ 'xmlns:dom': 'http://www.w3.org/2001/xml-events',
+ 'xmlns:dr3d': 'urn:oasis:names:tc:opendocument:xmlns:dr3d:1.0',
+ 'xmlns:draw': 'urn:oasis:names:tc:opendocument:xmlns:drawing:1.0',
+ 'xmlns:fo': 'urn:oasis:names:tc:opendocument:xmlns:xsl-fo-compatible:1.0',
+ 'xmlns:form': 'urn:oasis:names:tc:opendocument:xmlns:form:1.0',
+ 'xmlns:math': 'http://www.w3.org/1998/Math/MathML',
+ 'xmlns:meta': 'urn:oasis:names:tc:opendocument:xmlns:meta:1.0',
+ 'xmlns:number': 'urn:oasis:names:tc:opendocument:xmlns:datastyle:1.0',
+ 'xmlns:office': NAME_SPACE_1,
+ 'xmlns:presentation': 'urn:oasis:names:tc:opendocument:xmlns:presentation:1.0',
+ 'xmlns:ooo': 'http://openoffice.org/2004/office',
+ 'xmlns:oooc': 'http://openoffice.org/2004/calc',
+ 'xmlns:ooow': 'http://openoffice.org/2004/writer',
+ 'xmlns:script': 'urn:oasis:names:tc:opendocument:xmlns:script:1.0',
+ 'xmlns:style': 'urn:oasis:names:tc:opendocument:xmlns:style:1.0',
+ 'xmlns:svg': 'urn:oasis:names:tc:opendocument:xmlns:svg-compatible:1.0',
+ 'xmlns:table': 'urn:oasis:names:tc:opendocument:xmlns:table:1.0',
+ 'xmlns:text': 'urn:oasis:names:tc:opendocument:xmlns:text:1.0',
+ 'xmlns:xforms': 'http://www.w3.org/2002/xforms',
+ 'xmlns:xlink': 'http://www.w3.org/1999/xlink',
+ 'xmlns:xsd': 'http://www.w3.org/2001/XMLSchema',
+ 'xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance',
+ }
+
+STYLES_NAMESPACE_ATTRIB = {
+ 'office:version': '1.0',
+ 'xmlns:chart': 'urn:oasis:names:tc:opendocument:xmlns:chart:1.0',
+ 'xmlns:dc': 'http://purl.org/dc/elements/1.1/',
+ 'xmlns:dom': 'http://www.w3.org/2001/xml-events',
+ 'xmlns:dr3d': 'urn:oasis:names:tc:opendocument:xmlns:dr3d:1.0',
+ 'xmlns:draw': 'urn:oasis:names:tc:opendocument:xmlns:drawing:1.0',
+ 'xmlns:fo': 'urn:oasis:names:tc:opendocument:xmlns:xsl-fo-compatible:1.0',
+ 'xmlns:form': 'urn:oasis:names:tc:opendocument:xmlns:form:1.0',
+ 'xmlns:math': 'http://www.w3.org/1998/Math/MathML',
+ 'xmlns:meta': 'urn:oasis:names:tc:opendocument:xmlns:meta:1.0',
+ 'xmlns:number': 'urn:oasis:names:tc:opendocument:xmlns:datastyle:1.0',
+ 'xmlns:office': NAME_SPACE_1,
+ 'xmlns:presentation': 'urn:oasis:names:tc:opendocument:xmlns:presentation:1.0',
+ 'xmlns:ooo': 'http://openoffice.org/2004/office',
+ 'xmlns:oooc': 'http://openoffice.org/2004/calc',
+ 'xmlns:ooow': 'http://openoffice.org/2004/writer',
+ 'xmlns:script': 'urn:oasis:names:tc:opendocument:xmlns:script:1.0',
+ 'xmlns:style': 'urn:oasis:names:tc:opendocument:xmlns:style:1.0',
+ 'xmlns:svg': 'urn:oasis:names:tc:opendocument:xmlns:svg-compatible:1.0',
+ 'xmlns:table': 'urn:oasis:names:tc:opendocument:xmlns:table:1.0',
+ 'xmlns:text': 'urn:oasis:names:tc:opendocument:xmlns:text:1.0',
+ 'xmlns:xlink': 'http://www.w3.org/1999/xlink',
+ }
+
+MANIFEST_NAMESPACE_ATTRIB = {
+ 'xmlns:manifest': 'urn:oasis:names:tc:opendocument:xmlns:manifest:1.0',
+}
+
+META_NAMESPACE_ATTRIB = {
+ 'office:version': '1.0',
+ 'xmlns:dc': 'http://purl.org/dc/elements/1.1/',
+ 'xmlns:meta': 'urn:oasis:names:tc:opendocument:xmlns:meta:1.0',
+ 'xmlns:office': NAME_SPACE_1,
+ 'xmlns:ooo': 'http://openoffice.org/2004/office',
+ 'xmlns:xlink': 'http://www.w3.org/1999/xlink',
+}
+
+
+#
+# Functions
+#
+
+#
+# ElementTree support functions.
+# To be able to get the parent of elements, we must use these
+# instead of the functions of the same name provided by ElementTree.
+#
+def Element(tag, attrib=None, nsmap=None, nsdict=CNSD):
+ if attrib is None:
+ attrib = {}
+ tag, attrib = fix_ns(tag, attrib, nsdict)
+ if WhichElementTree == 'lxml':
+ el = etree.Element(tag, attrib, nsmap=nsmap)
+ else:
+ el = _ElementInterfaceWrapper(tag, attrib)
+ return el
+
+def SubElement(parent, tag, attrib=None, nsmap=None, nsdict=CNSD):
+ if attrib is None:
+ attrib = {}
+ tag, attrib = fix_ns(tag, attrib, nsdict)
+ if WhichElementTree == 'lxml':
+ el = etree.SubElement(parent, tag, attrib, nsmap=nsmap)
+ else:
+ el = _ElementInterfaceWrapper(tag, attrib)
+ parent.append(el)
+ el.setparent(parent)
+ return el
+
+def fix_ns(tag, attrib, nsdict):
+ nstag = add_ns(tag, nsdict)
+ nsattrib = {}
+ for key, val in attrib.iteritems():
+ nskey = add_ns(key, nsdict)
+ nsattrib[nskey] = val
+ return nstag, nsattrib
+
+def add_ns(tag, nsdict=CNSD):
+ if WhichElementTree == 'lxml':
+ nstag, name = tag.split(':')
+ ns = nsdict.get(nstag)
+ if ns is None:
+ raise RuntimeError, 'Invalid namespace prefix: %s' % nstag
+ tag = '{%s}%s' % (ns, name,)
+ return tag
+
+def ToString(et):
+ outstream = StringIO.StringIO()
+ et.write(outstream)
+ s1 = outstream.getvalue()
+ outstream.close()
+ return s1
+
+
+def escape_cdata(text):
+ text = text.replace("&", "&amp;")
+ text = text.replace("<", "&lt;")
+ text = text.replace(">", "&gt;")
+ ascii = ''
+ for char in text:
+ if ord(char) >= ord("\x7f"):
+ ascii += "&#x%X;" % ( ord(char), )
+ else:
+ ascii += char
+ return ascii
+
+
+
+WORD_SPLIT_PAT1 = re.compile(r'\b(\w*)\b\W*')
+
+def split_words(line):
+ # We need whitespace at the end of the string for our regexpr.
+ line += ' '
+ words = []
+ pos1 = 0
+ mo = WORD_SPLIT_PAT1.search(line, pos1)
+ while mo is not None:
+ word = mo.groups()[0]
+ words.append(word)
+ pos1 = mo.end()
+ mo = WORD_SPLIT_PAT1.search(line, pos1)
+ return words
+
+
+#
+# Classes
+#
+
+
+class TableStyle(object):
+ def __init__(self, border=None, backgroundcolor=None):
+ self.border = border
+ self.backgroundcolor = backgroundcolor
+ def get_border_(self):
+ return self.border_
+ def set_border_(self, border):
+ self.border_ = border
+ border = property(get_border_, set_border_)
+ def get_backgroundcolor_(self):
+ return self.backgroundcolor_
+ def set_backgroundcolor_(self, backgroundcolor):
+ self.backgroundcolor_ = backgroundcolor
+ backgroundcolor = property(get_backgroundcolor_, set_backgroundcolor_)
+
+BUILTIN_DEFAULT_TABLE_STYLE = TableStyle(
+ border = '0.0007in solid #000000')
+
+#
+# Information about the indentation level for lists nested inside
+# other contexts, e.g. dictionary lists.
+class ListLevel(object):
+ def __init__(self, level, sibling_level=True, nested_level=True):
+ self.level = level
+ self.sibling_level = sibling_level
+ self.nested_level = nested_level
+ def set_sibling(self, sibling_level): self.sibling_level = sibling_level
+ def get_sibling(self): return self.sibling_level
+ def set_nested(self, nested_level): self.nested_level = nested_level
+ def get_nested(self): return self.nested_level
+ def set_level(self, level): self.level = level
+ def get_level(self): return self.level
+
+
+class Writer(writers.Writer):
+
+ MIME_TYPE = 'application/vnd.oasis.opendocument.text'
+ EXTENSION = '.odt'
+
+ supported = ('odt', )
+ """Formats this writer supports."""
+
+ default_stylesheet = 'styles' + EXTENSION
+
+ default_stylesheet_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), default_stylesheet))
+
+ default_template = 'template.txt'
+
+ default_template_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), default_template))
+
+ settings_spec = (
+ 'ODF-Specific Options',
+ None,
+ (
+ ('Specify a stylesheet. '
+ 'Default: "%s"' % default_stylesheet_path,
+ ['--stylesheet'],
+ {
+ 'default': default_stylesheet_path,
+ 'dest': 'stylesheet'
+ }),
+ ('Specify a configuration/mapping file relative to the '
+ 'current working '
+ 'directory for additional ODF options. '
+ 'In particular, this file may contain a section named '
+ '"Formats" that maps default style names to '
+ 'names to be used in the resulting output file, allowing '
+ 'adherence to external standards. '
+ 'For more info and the format of the configuration/mapping file, '
+ 'see the odtwriter doc.',
+ ['--odf-config-file'],
+ {'metavar': '<file>'}),
+ ('Obfuscate email addresses to confuse harvesters while still '
+ 'keeping email links usable with standards-compliant browsers.',
+ ['--cloak-email-addresses'],
+ {'default': False,
+ 'action': 'store_true',
+ 'dest': 'cloak_email_addresses',
+ 'validator': frontend.validate_boolean}),
+ ('Do not obfuscate email addresses.',
+ ['--no-cloak-email-addresses'],
+ {'default': False,
+ 'action': 'store_false',
+ 'dest': 'cloak_email_addresses',
+ 'validator': frontend.validate_boolean}),
+ ('Specify the thickness of table borders in thousandths of a cm. '
+ 'Default is 35.',
+ ['--table-border-thickness'],
+ {'default': None,
+ 'validator': frontend.validate_nonnegative_int}),
+ ('Add syntax highlighting in literal code blocks.',
+ ['--add-syntax-highlighting'],
+ {'default': False,
+ 'action': 'store_true',
+ 'dest': 'add_syntax_highlighting',
+ 'validator': frontend.validate_boolean}),
+ ('Do not add syntax highlighting in literal code blocks. (default)',
+ ['--no-syntax-highlighting'],
+ {'default': False,
+ 'action': 'store_false',
+ 'dest': 'add_syntax_highlighting',
+ 'validator': frontend.validate_boolean}),
+ ('Create sections for headers. (default)',
+ ['--create-sections'],
+ {'default': True,
+ 'action': 'store_true',
+ 'dest': 'create_sections',
+ 'validator': frontend.validate_boolean}),
+ ('Do not create sections for headers.',
+ ['--no-sections'],
+ {'default': True,
+ 'action': 'store_false',
+ 'dest': 'create_sections',
+ 'validator': frontend.validate_boolean}),
+ ('Create links.',
+ ['--create-links'],
+ {'default': False,
+ 'action': 'store_true',
+ 'dest': 'create_links',
+ 'validator': frontend.validate_boolean}),
+ ('Do not create links. (default)',
+ ['--no-links'],
+ {'default': False,
+ 'action': 'store_false',
+ 'dest': 'create_links',
+ 'validator': frontend.validate_boolean}),
+ ('Generate endnotes at end of document, not footnotes '
+ 'at bottom of page.',
+ ['--endnotes-end-doc'],
+ {'default': False,
+ 'action': 'store_true',
+ 'dest': 'endnotes_end_doc',
+ 'validator': frontend.validate_boolean}),
+ ('Generate footnotes at bottom of page, not endnotes '
+ 'at end of document. (default)',
+ ['--no-endnotes-end-doc'],
+ {'default': False,
+ 'action': 'store_false',
+ 'dest': 'endnotes_end_doc',
+ 'validator': frontend.validate_boolean}),
+ ('Generate a bullet list table of contents, not '
+ 'an ODF/oowriter table of contents.',
+ ['--generate-list-toc'],
+ {'default': True,
+ 'action': 'store_false',
+ 'dest': 'generate_oowriter_toc',
+ 'validator': frontend.validate_boolean}),
+ ('Generate an ODF/oowriter table of contents, not '
+ 'a bullet list. (default)',
+ ['--generate-oowriter-toc'],
+ {'default': True,
+ 'action': 'store_true',
+ 'dest': 'generate_oowriter_toc',
+ 'validator': frontend.validate_boolean}),
+ ('Specify the contents of a custom header line. '
+ 'See odf_odt writer documentation for details '
+ 'about special field character sequences.',
+ ['--custom-odt-header'],
+ { 'default': '',
+ 'dest': 'custom_header',
+ }),
+ ('Specify the contents of a custom footer line. '
+ 'See odf_odt writer documentation for details '
+ 'about special field character sequences.',
+ ['--custom-odt-footer'],
+ { 'default': '',
+ 'dest': 'custom_footer',
+ }),
+ )
+ )
+
+ settings_defaults = {
+ 'output_encoding_error_handler': 'xmlcharrefreplace',
+ }
+
+ relative_path_settings = (
+ 'stylesheet_path',
+ )
+
+ config_section = 'opendocument odf writer'
+ config_section_dependencies = (
+ 'writers',
+ )
+
+ def __init__(self):
+ writers.Writer.__init__(self)
+ self.translator_class = ODFTranslator
+
+ def translate(self):
+ self.settings = self.document.settings
+ self.visitor = self.translator_class(self.document)
+ self.visitor.retrieve_styles(self.EXTENSION)
+ self.document.walkabout(self.visitor)
+ self.visitor.add_doc_title()
+ self.assemble_my_parts()
+ self.output = self.parts['whole']
+
+ def assemble_my_parts(self):
+ """Assemble the `self.parts` dictionary. Extend in subclasses.
+ """
+ writers.Writer.assemble_parts(self)
+ f = tempfile.NamedTemporaryFile()
+ zfile = zipfile.ZipFile(f, 'w', zipfile.ZIP_DEFLATED)
+ content = self.visitor.content_astext()
+ self.write_zip_str(zfile, 'content.xml', content)
+ self.write_zip_str(zfile, 'mimetype', self.MIME_TYPE)
+ s1 = self.create_manifest()
+ self.write_zip_str(zfile, 'META-INF/manifest.xml', s1)
+ s1 = self.create_meta()
+ self.write_zip_str(zfile, 'meta.xml', s1)
+ s1 = self.get_stylesheet()
+ self.write_zip_str(zfile, 'styles.xml', s1)
+ s1 = self.get_settings()
+ self.write_zip_str(zfile, 'settings.xml', s1)
+ self.store_embedded_files(zfile)
+ zfile.close()
+ f.seek(0)
+ whole = f.read()
+ f.close()
+ self.parts['whole'] = whole
+ self.parts['encoding'] = self.document.settings.output_encoding
+ self.parts['version'] = docutils.__version__
+
+ def write_zip_str(self, zfile, name, bytes):
+ localtime = time.localtime(time.time())
+ zinfo = zipfile.ZipInfo(name, localtime)
+ # Add some standard UNIX file access permissions (-rw-r--r--).
+ zinfo.external_attr = (0x81a4 & 0xFFFF) << 16L
+ zinfo.compress_type = zipfile.ZIP_DEFLATED
+ zfile.writestr(zinfo, bytes)
+
+ def store_embedded_files(self, zfile):
+ embedded_files = self.visitor.get_embedded_file_list()
+ for source, destination in embedded_files:
+ if source is None:
+ continue
+ try:
+ # encode/decode
+ destination1 = destination.decode('latin-1').encode('utf-8')
+ zfile.write(source, destination1, zipfile.ZIP_STORED)
+ except OSError, e:
+ self.document.reporter.warning(
+ "Can't open file %s." % (source, ))
+
+ def get_settings(self):
+ """
+ Extract settings.xml from the stylesheet archive
+ (modeled after get_stylesheet).
+ """
+ stylespath = self.settings.stylesheet
+ zfile = zipfile.ZipFile(stylespath, 'r')
+ s1 = zfile.read('settings.xml')
+ zfile.close()
+ return s1
+
+ def get_stylesheet(self):
+ """Get the stylesheet from the visitor.
+ Ask the visitor to set up the page.
+ """
+ s1 = self.visitor.setup_page()
+ return s1
+
+ def assemble_parts(self):
+ pass
+
+ def create_manifest(self):
+ if WhichElementTree == 'lxml':
+ root = Element('manifest:manifest',
+ nsmap=MANIFEST_NAMESPACE_DICT,
+ nsdict=MANIFEST_NAMESPACE_DICT,
+ )
+ else:
+ root = Element('manifest:manifest',
+ attrib=MANIFEST_NAMESPACE_ATTRIB,
+ nsdict=MANIFEST_NAMESPACE_DICT,
+ )
+ doc = etree.ElementTree(root)
+ SubElement(root, 'manifest:file-entry', attrib={
+ 'manifest:media-type': self.MIME_TYPE,
+ 'manifest:full-path': '/',
+ }, nsdict=MANNSD)
+ SubElement(root, 'manifest:file-entry', attrib={
+ 'manifest:media-type': 'text/xml',
+ 'manifest:full-path': 'content.xml',
+ }, nsdict=MANNSD)
+ SubElement(root, 'manifest:file-entry', attrib={
+ 'manifest:media-type': 'text/xml',
+ 'manifest:full-path': 'styles.xml',
+ }, nsdict=MANNSD)
+ SubElement(root, 'manifest:file-entry', attrib={
+ 'manifest:media-type': 'text/xml',
+ 'manifest:full-path': 'meta.xml',
+ }, nsdict=MANNSD)
+ s1 = ToString(doc)
+ doc = minidom.parseString(s1)
+ s1 = doc.toprettyxml(' ')
+ return s1
+
+ def create_meta(self):
+ if WhichElementTree == 'lxml':
+ root = Element('office:document-meta',
+ nsmap=META_NAMESPACE_DICT,
+ nsdict=META_NAMESPACE_DICT,
+ )
+ else:
+ root = Element('office:document-meta',
+ attrib=META_NAMESPACE_ATTRIB,
+ nsdict=META_NAMESPACE_DICT,
+ )
+ doc = etree.ElementTree(root)
+ root = SubElement(root, 'office:meta', nsdict=METNSD)
+ el1 = SubElement(root, 'meta:generator', nsdict=METNSD)
+ el1.text = 'Docutils/rst2odf.py/%s' % (VERSION, )
+ s1 = os.environ.get('USER', '')
+ el1 = SubElement(root, 'meta:initial-creator', nsdict=METNSD)
+ el1.text = s1
+ s2 = time.strftime('%Y-%m-%dT%H:%M:%S', time.localtime())
+ el1 = SubElement(root, 'meta:creation-date', nsdict=METNSD)
+ el1.text = s2
+ el1 = SubElement(root, 'dc:creator', nsdict=METNSD)
+ el1.text = s1
+ el1 = SubElement(root, 'dc:date', nsdict=METNSD)
+ el1.text = s2
+ el1 = SubElement(root, 'dc:language', nsdict=METNSD)
+ el1.text = 'en-US'
+ el1 = SubElement(root, 'meta:editing-cycles', nsdict=METNSD)
+ el1.text = '1'
+ el1 = SubElement(root, 'meta:editing-duration', nsdict=METNSD)
+ el1.text = 'PT00M01S'
+ title = self.visitor.get_title()
+ el1 = SubElement(root, 'dc:title', nsdict=METNSD)
+ if title:
+ el1.text = title
+ else:
+ el1.text = '[no title]'
+ meta_dict = self.visitor.get_meta_dict()
+ keywordstr = meta_dict.get('keywords')
+ if keywordstr is not None:
+ keywords = split_words(keywordstr)
+ for keyword in keywords:
+ el1 = SubElement(root, 'meta:keyword', nsdict=METNSD)
+ el1.text = keyword
+ description = meta_dict.get('description')
+ if description is not None:
+ el1 = SubElement(root, 'dc:description', nsdict=METNSD)
+ el1.text = description
+ s1 = ToString(doc)
+ #doc = minidom.parseString(s1)
+ #s1 = doc.toprettyxml(' ')
+ return s1
+
+# class ODFTranslator(nodes.SparseNodeVisitor):
+
+class ODFTranslator(nodes.GenericNodeVisitor):
+
+ used_styles = (
+ 'attribution', 'blockindent', 'blockquote', 'blockquote-bulletitem',
+ 'blockquote-bulletlist', 'blockquote-enumitem', 'blockquote-enumlist',
+ 'bulletitem', 'bulletlist',
+ 'caption', 'legend',
+ 'centeredtextbody', 'codeblock',
+ 'codeblock-classname', 'codeblock-comment', 'codeblock-functionname',
+ 'codeblock-keyword', 'codeblock-name', 'codeblock-number',
+ 'codeblock-operator', 'codeblock-string', 'emphasis', 'enumitem',
+ 'enumlist', 'epigraph', 'epigraph-bulletitem', 'epigraph-bulletlist',
+ 'epigraph-enumitem', 'epigraph-enumlist', 'footer',
+ 'footnote', 'citation',
+ 'header', 'highlights', 'highlights-bulletitem',
+ 'highlights-bulletlist', 'highlights-enumitem', 'highlights-enumlist',
+ 'horizontalline', 'inlineliteral', 'quotation', 'rubric',
+ 'strong', 'table-title', 'textbody', 'tocbulletlist', 'tocenumlist',
+ 'title',
+ 'subtitle',
+ 'heading1',
+ 'heading2',
+ 'heading3',
+ 'heading4',
+ 'heading5',
+ 'heading6',
+ 'heading7',
+ 'admon-attention-hdr',
+ 'admon-attention-body',
+ 'admon-caution-hdr',
+ 'admon-caution-body',
+ 'admon-danger-hdr',
+ 'admon-danger-body',
+ 'admon-error-hdr',
+ 'admon-error-body',
+ 'admon-generic-hdr',
+ 'admon-generic-body',
+ 'admon-hint-hdr',
+ 'admon-hint-body',
+ 'admon-important-hdr',
+ 'admon-important-body',
+ 'admon-note-hdr',
+ 'admon-note-body',
+ 'admon-tip-hdr',
+ 'admon-tip-body',
+ 'admon-warning-hdr',
+ 'admon-warning-body',
+ 'tableoption',
+ 'tableoption.%c', 'tableoption.%c%d', 'Table%d', 'Table%d.%c',
+ 'Table%d.%c%d',
+ 'lineblock1',
+ 'lineblock2',
+ 'lineblock3',
+ 'lineblock4',
+ 'lineblock5',
+ 'lineblock6',
+ 'image', 'figureframe',
+ )
+
+ def __init__(self, document):
+ #nodes.SparseNodeVisitor.__init__(self, document)
+ nodes.GenericNodeVisitor.__init__(self, document)
+ self.settings = document.settings
+ self.format_map = { }
+ if self.settings.odf_config_file:
+ from ConfigParser import ConfigParser
+
+ parser = ConfigParser()
+ parser.read(self.settings.odf_config_file)
+ for rststyle, format in parser.items("Formats"):
+ if rststyle not in self.used_styles:
+ self.document.reporter.warning(
+ 'Style "%s" is not a style used by odtwriter.' % (
+ rststyle, ))
+ self.format_map[rststyle] = format
+ self.section_level = 0
+ self.section_count = 0
+ # Create ElementTree content and styles documents.
+ if WhichElementTree == 'lxml':
+ root = Element(
+ 'office:document-content',
+ nsmap=CONTENT_NAMESPACE_DICT,
+ )
+ else:
+ root = Element(
+ 'office:document-content',
+ attrib=CONTENT_NAMESPACE_ATTRIB,
+ )
+ self.content_tree = etree.ElementTree(element=root)
+ self.current_element = root
+ SubElement(root, 'office:scripts')
+ SubElement(root, 'office:font-face-decls')
+ el = SubElement(root, 'office:automatic-styles')
+ self.automatic_styles = el
+ el = SubElement(root, 'office:body')
+ el = self.generate_content_element(el)
+ self.current_element = el
+ self.body_text_element = el
+ self.paragraph_style_stack = [self.rststyle('textbody'), ]
+ self.list_style_stack = []
+ self.table_count = 0
+ self.column_count = ord('A') - 1
+ self.trace_level = -1
+ self.optiontablestyles_generated = False
+ self.field_name = None
+ self.field_element = None
+ self.title = None
+ self.image_count = 0
+ self.image_style_count = 0
+ self.image_dict = {}
+ self.embedded_file_list = []
+ self.syntaxhighlighting = 1
+ self.syntaxhighlight_lexer = 'python'
+ self.header_content = []
+ self.footer_content = []
+ self.in_header = False
+ self.in_footer = False
+ self.blockstyle = ''
+ self.in_table_of_contents = False
+ self.table_of_content_index_body = None
+ self.list_level = 0
+ self.footnote_ref_dict = {}
+ self.footnote_list = []
+ self.footnote_chars_idx = 0
+ self.footnote_level = 0
+ self.pending_ids = [ ]
+ self.in_paragraph = False
+ self.found_doc_title = False
+ self.bumped_list_level_stack = []
+ self.meta_dict = {}
+ self.line_block_level = 0
+ self.line_indent_level = 0
+ self.citation_id = None
+ self.style_index = 0 # used to form unique style names
+ self.str_stylesheet = ''
+ self.str_stylesheetcontent = ''
+ self.dom_stylesheet = None
+ self.table_styles = None
+
+ def get_str_stylesheet(self):
+ return self.str_stylesheet
+
+ def retrieve_styles(self, extension):
+ """Retrieve the stylesheet from either a .xml file or from
+ a .odt (zip) file. Return the content as a string.
+ """
+ s2 = None
+ stylespath = self.settings.stylesheet
+ ext = os.path.splitext(stylespath)[1]
+ if ext == '.xml':
+ stylesfile = open(stylespath, 'r')
+ s1 = stylesfile.read()
+ stylesfile.close()
+ elif ext == extension:
+ zfile = zipfile.ZipFile(stylespath, 'r')
+ s1 = zfile.read('styles.xml')
+ s2 = zfile.read('content.xml')
+ zfile.close()
+ else:
+ raise RuntimeError, 'stylesheet path (%s) must be %s or .xml file' % (stylespath, extension)
+ self.str_stylesheet = s1
+ self.str_stylesheetcontent = s2
+ self.dom_stylesheet = etree.fromstring(self.str_stylesheet)
+ # An .xml stylesheet provides no content.xml, so s2 may be None here.
+ if s2 is not None:
+ self.dom_stylesheetcontent = etree.fromstring(s2)
+ self.table_styles = self.extract_table_styles(s2)
+ else:
+ self.dom_stylesheetcontent = None
+
+ def extract_table_styles(self, styles_str):
+ root = etree.fromstring(styles_str)
+ table_styles = {}
+ auto_styles = root.find(
+ '{%s}automatic-styles' % (CNSD['office'], ))
+ for stylenode in auto_styles:
+ name = stylenode.get('{%s}name' % (CNSD['style'], ))
+ tablename = name.split('.')[0]
+ family = stylenode.get('{%s}family' % (CNSD['style'], ))
+ if name.startswith(TABLESTYLEPREFIX):
+ tablestyle = table_styles.get(tablename)
+ if tablestyle is None:
+ tablestyle = TableStyle()
+ table_styles[tablename] = tablestyle
+ if family == 'table':
+ properties = stylenode.find(
+ '{%s}table-properties' % (CNSD['style'], ))
+ property = properties.get('{%s}%s' % (CNSD['fo'],
+ 'background-color', ))
+ if property is not None and property != 'none':
+ tablestyle.backgroundcolor = property
+ elif family == 'table-cell':
+ properties = stylenode.find(
+ '{%s}table-cell-properties' % (CNSD['style'], ))
+ if properties is not None:
+ border = self.get_property(properties)
+ if border is not None:
+ tablestyle.border = border
+ return table_styles
+
+ def get_property(self, stylenode):
+ border = None
+ for propertyname in TABLEPROPERTYNAMES:
+ border = stylenode.get('{%s}%s' % (CNSD['fo'], propertyname, ))
+ if border is not None and border != 'none':
+ return border
+ return border
+
+ def add_doc_title(self):
+ text = self.settings.title
+ if text:
+ self.title = text
+ if not self.found_doc_title:
+ el = Element('text:p', attrib = {
+ 'text:style-name': self.rststyle('title'),
+ })
+ el.text = text
+ self.body_text_element.insert(0, el)
+
+ def rststyle(self, name, parameters=( )):
+ """
+ Returns the style name to use for the given style.
+
+ If `parameters` is given, `name` must contain a matching number of
+ ``%`` format specifiers and is used as a format expression with
+ `parameters` as the values.
+ """
+ name1 = name % parameters
+ stylename = self.format_map.get(name1, 'rststyle-%s' % name1)
+ return stylename
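The lookup in `rststyle` above can be illustrated in isolation. In this hypothetical re-implementation (the free function and sample map are illustrative, not part of the writer), parameterized names like `Table%d.%c` are expanded first, then overridden by the `odf_config_file` `[Formats]` map, else given the `rststyle-` prefix:

```python
# Hypothetical standalone version of ODFTranslator.rststyle.
def rststyle(name, format_map, parameters=()):
    # Expand e.g. 'Table%d.%c%d' % (1, 'A', 2) -> 'Table1.A2'.
    name1 = name % parameters
    # Config-file overrides win; otherwise use the default prefix.
    return format_map.get(name1, 'rststyle-%s' % name1)

default = rststyle('textbody', {})
table = rststyle('Table%d.%c%d', {}, (1, 'A', 2))
overridden = rststyle('textbody', {'textbody': 'MyBodyStyle'})
```

This is why every entry in `used_styles` is a *template*: the `%c`/`%d` placeholders let one template cover a whole family of generated table and column styles.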
+
+ def generate_content_element(self, root):
+ return SubElement(root, 'office:text')
+
+ def setup_page(self):
+ self.setup_paper(self.dom_stylesheet)
+ if (len(self.header_content) > 0 or len(self.footer_content) > 0 or
+ self.settings.custom_header or self.settings.custom_footer):
+ self.add_header_footer(self.dom_stylesheet)
+ new_content = etree.tostring(self.dom_stylesheet)
+ return new_content
+
+ def setup_paper(self, root_el):
+ try:
+ fin = os.popen("paperconf -s 2> /dev/null")
+ w, h = map(float, fin.read().split())
+ fin.close()
+ except (IOError, OSError, ValueError):
+ w, h = 612, 792 # default to US Letter
+ def walk(el):
+ if el.tag == "{%s}page-layout-properties" % SNSD["style"] and \
+ not el.attrib.has_key("{%s}page-width" % SNSD["fo"]):
+ el.attrib["{%s}page-width" % SNSD["fo"]] = "%.3fpt" % w
+ el.attrib["{%s}page-height" % SNSD["fo"]] = "%.3fpt" % h
+ el.attrib["{%s}margin-left" % SNSD["fo"]] = \
+ el.attrib["{%s}margin-right" % SNSD["fo"]] = \
+ "%.3fpt" % (.1 * w)
+ el.attrib["{%s}margin-top" % SNSD["fo"]] = \
+ el.attrib["{%s}margin-bottom" % SNSD["fo"]] = \
+ "%.3fpt" % (.1 * h)
+ else:
+ for subel in el.getchildren(): walk(subel)
+ walk(root_el)
+
+ def add_header_footer(self, root_el):
+ automatic_styles = root_el.find(
+ '{%s}automatic-styles' % SNSD['office'])
+ path = '{%s}master-styles' % (NAME_SPACE_1, )
+ master_el = root_el.find(path)
+ if master_el is None:
+ return
+ path = '{%s}master-page' % (SNSD['style'], )
+ master_el = master_el.find(path)
+ if master_el is None:
+ return
+ el1 = master_el
+ if self.header_content or self.settings.custom_header:
+ if WhichElementTree == 'lxml':
+ el2 = SubElement(el1, 'style:header', nsdict=SNSD)
+ else:
+ el2 = SubElement(el1, 'style:header',
+ attrib=STYLES_NAMESPACE_ATTRIB,
+ nsdict=STYLES_NAMESPACE_DICT,
+ )
+ for el in self.header_content:
+ attrkey = add_ns('text:style-name', nsdict=SNSD)
+ el.attrib[attrkey] = self.rststyle('header')
+ el2.append(el)
+ if self.settings.custom_header:
+ elcustom = self.create_custom_headfoot(el2,
+ self.settings.custom_header, 'header', automatic_styles)
+ if self.footer_content or self.settings.custom_footer:
+ if WhichElementTree == 'lxml':
+ el2 = SubElement(el1, 'style:footer', nsdict=SNSD)
+ else:
+ el2 = SubElement(el1, 'style:footer',
+ attrib=STYLES_NAMESPACE_ATTRIB,
+ nsdict=STYLES_NAMESPACE_DICT,
+ )
+ for el in self.footer_content:
+ attrkey = add_ns('text:style-name', nsdict=SNSD)
+ el.attrib[attrkey] = self.rststyle('footer')
+ el2.append(el)
+ if self.settings.custom_footer:
+ elcustom = self.create_custom_headfoot(el2,
+ self.settings.custom_footer, 'footer', automatic_styles)
+
+ code_none, code_field, code_text = range(3)
+ field_pat = re.compile(r'%(..?)%')
+
+ def create_custom_headfoot(self, parent, text, style_name, automatic_styles):
+ current_element = None
+ field_iter = self.split_field_specifiers_iter(text)
+ for item in field_iter:
+ if item[0] == ODFTranslator.code_field:
+ if item[1] not in ('p', 'P',
+ 't1', 't2', 't3', 't4',
+ 'd1', 'd2', 'd3', 'd4', 'd5',
+ 's', 't', 'a'):
+ msg = 'bad field spec: %%%s%%' % (item[1], )
+ raise RuntimeError, msg
+ if current_element is None:
+ parent = SubElement(parent, 'text:p', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ })
+ el1 = self.make_field_element(parent,
+ item[1], style_name, automatic_styles)
+ if el1 is None:
+ msg = 'bad field spec: %%%s%%' % (item[1], )
+ raise RuntimeError, msg
+ else:
+ current_element = el1
+ else:
+ if current_element is None:
+ parent = SubElement(parent, 'text:p', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ })
+ parent.text = item[1]
+ else:
+ current_element.tail = item[1]
+
+ def make_field_element(self, parent, text, style_name, automatic_styles):
+ if text == 'p':
+ el1 = SubElement(parent, 'text:page-number', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'text:select-page': 'current',
+ })
+ elif text == 'P':
+ el1 = SubElement(parent, 'text:page-count', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ })
+ elif text == 't1':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:time', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'text:fixed': 'true',
+ 'style:data-style-name': 'rst-time-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:time-style', attrib={
+ 'style:name': 'rst-time-style-%d' % self.style_index,
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:hours', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ':'
+ el3 = SubElement(el2, 'number:minutes', attrib={
+ 'number:style': 'long',
+ })
+ elif text == 't2':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:time', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'text:fixed': 'true',
+ 'style:data-style-name': 'rst-time-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:time-style', attrib={
+ 'style:name': 'rst-time-style-%d' % self.style_index,
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:hours', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ':'
+ el3 = SubElement(el2, 'number:minutes', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ':'
+ el3 = SubElement(el2, 'number:seconds', attrib={
+ 'number:style': 'long',
+ })
+ elif text == 't3':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:time', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'text:fixed': 'true',
+ 'style:data-style-name': 'rst-time-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:time-style', attrib={
+ 'style:name': 'rst-time-style-%d' % self.style_index,
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:hours', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ':'
+ el3 = SubElement(el2, 'number:minutes', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ' '
+ el3 = SubElement(el2, 'number:am-pm')
+ elif text == 't4':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:time', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'text:fixed': 'true',
+ 'style:data-style-name': 'rst-time-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:time-style', attrib={
+ 'style:name': 'rst-time-style-%d' % self.style_index,
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:hours', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ':'
+ el3 = SubElement(el2, 'number:minutes', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ':'
+ el3 = SubElement(el2, 'number:seconds', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ' '
+ el3 = SubElement(el2, 'number:am-pm')
+ elif text == 'd1':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:date', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'style:data-style-name': 'rst-date-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:date-style', attrib={
+ 'style:name': 'rst-date-style-%d' % self.style_index,
+ 'number:automatic-order': 'true',
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:month', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = '/'
+ el3 = SubElement(el2, 'number:day', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = '/'
+ el3 = SubElement(el2, 'number:year')
+ elif text == 'd2':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:date', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'style:data-style-name': 'rst-date-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:date-style', attrib={
+ 'style:name': 'rst-date-style-%d' % self.style_index,
+ 'number:automatic-order': 'true',
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:month', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = '/'
+ el3 = SubElement(el2, 'number:day', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = '/'
+ el3 = SubElement(el2, 'number:year', attrib={
+ 'number:style': 'long',
+ })
+ elif text == 'd3':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:date', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'style:data-style-name': 'rst-date-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:date-style', attrib={
+ 'style:name': 'rst-date-style-%d' % self.style_index,
+ 'number:automatic-order': 'true',
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:month', attrib={
+ 'number:textual': 'true',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ' '
+ el3 = SubElement(el2, 'number:day', attrib={
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ', '
+ el3 = SubElement(el2, 'number:year', attrib={
+ 'number:style': 'long',
+ })
+ elif text == 'd4':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:date', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'style:data-style-name': 'rst-date-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:date-style', attrib={
+ 'style:name': 'rst-date-style-%d' % self.style_index,
+ 'number:automatic-order': 'true',
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:month', attrib={
+ 'number:textual': 'true',
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ' '
+ el3 = SubElement(el2, 'number:day', attrib={
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = ', '
+ el3 = SubElement(el2, 'number:year', attrib={
+ 'number:style': 'long',
+ })
+ elif text == 'd5':
+ self.style_index += 1
+ el1 = SubElement(parent, 'text:date', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ 'style:data-style-name': 'rst-date-style-%d' % self.style_index,
+ })
+ el2 = SubElement(automatic_styles, 'number:date-style', attrib={
+ 'style:name': 'rst-date-style-%d' % self.style_index,
+ 'xmlns:number': SNSD['number'],
+ 'xmlns:style': SNSD['style'],
+ })
+ el3 = SubElement(el2, 'number:year', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = '-'
+ el3 = SubElement(el2, 'number:month', attrib={
+ 'number:style': 'long',
+ })
+ el3 = SubElement(el2, 'number:text')
+ el3.text = '-'
+ el3 = SubElement(el2, 'number:day', attrib={
+ 'number:style': 'long',
+ })
+ elif text == 's':
+ el1 = SubElement(parent, 'text:subject', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ })
+ elif text == 't':
+ el1 = SubElement(parent, 'text:title', attrib={
+ 'text:style-name': self.rststyle(style_name),
+ })
+ elif text == 'a':
+ el1 = SubElement(parent, 'text:author-name', attrib={
+ 'text:fixed': 'false',
+ })
+ else:
+ el1 = None
+ return el1
+
+ def split_field_specifiers_iter(self, text):
+ pos1 = 0
+ pos_end = len(text)
+ while True:
+ mo = ODFTranslator.field_pat.search(text, pos1)
+ if mo:
+ pos2 = mo.start()
+ if pos2 > pos1:
+ yield (ODFTranslator.code_text, text[pos1:pos2])
+ yield (ODFTranslator.code_field, mo.group(1))
+ pos1 = mo.end()
+ else:
+ break
+ trailing = text[pos1:]
+ if trailing:
+ yield (ODFTranslator.code_text, trailing)
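The tokenizing done by `split_field_specifiers_iter` can be sketched standalone. In this illustration (the function and constant names are hypothetical), custom header/footer text such as `Page %p% of %P%` is split into alternating literal-text and field chunks by the `%..?%` pattern:

```python
import re

# Hypothetical standalone version of split_field_specifiers_iter.
CODE_FIELD, CODE_TEXT = 1, 2
field_pat = re.compile(r'%(..?)%')

def split_fields(text):
    pos1 = 0
    while True:
        mo = field_pat.search(text, pos1)
        if not mo:
            break
        # Literal text between the previous match and this one.
        if mo.start() > pos1:
            yield (CODE_TEXT, text[pos1:mo.start()])
        # The one- or two-character field code between the % signs.
        yield (CODE_FIELD, mo.group(1))
        pos1 = mo.end()
    # Trailing literal text after the last field.
    if text[pos1:]:
        yield (CODE_TEXT, text[pos1:])

tokens = list(split_fields('Page %p% of %P%'))
```

`create_custom_headfoot` then maps each field token (`p`, `P`, `t1`..`t4`, `d1`..`d5`, `s`, `t`, `a`) to an ODF field element and hangs literal chunks off the element tails.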
+
+
+ def astext(self):
+ root = self.content_tree.getroot()
+ et = etree.ElementTree(root)
+ s1 = ToString(et)
+ return s1
+
+ def content_astext(self):
+ return self.astext()
+
+ def set_title(self, title): self.title = title
+ def get_title(self): return self.title
+ def set_embedded_file_list(self, embedded_file_list):
+ self.embedded_file_list = embedded_file_list
+ def get_embedded_file_list(self): return self.embedded_file_list
+ def get_meta_dict(self): return self.meta_dict
+
+ def process_footnotes(self):
+ for node, el1 in self.footnote_list:
+ backrefs = node.attributes.get('backrefs', [])
+ first = True
+ for ref in backrefs:
+ el2 = self.footnote_ref_dict.get(ref)
+ if el2 is not None:
+ if first:
+ first = False
+ el3 = copy.deepcopy(el1)
+ el2.append(el3)
+ else:
+ children = el2.getchildren()
+ if len(children) > 0: # and 'id' in el2.attrib:
+ child = children[0]
+ ref1 = child.text
+ attribkey = add_ns('text:id', nsdict=SNSD)
+ id1 = el2.get(attribkey, 'footnote-error')
+ if id1 is None:
+ id1 = ''
+ tag = add_ns('text:note-ref', nsdict=SNSD)
+ el2.tag = tag
+ if self.settings.endnotes_end_doc:
+ note_class = 'endnote'
+ else:
+ note_class = 'footnote'
+ el2.attrib.clear()
+ attribkey = add_ns('text:note-class', nsdict=SNSD)
+ el2.attrib[attribkey] = note_class
+ attribkey = add_ns('text:ref-name', nsdict=SNSD)
+ el2.attrib[attribkey] = id1
+ attribkey = add_ns('text:reference-format', nsdict=SNSD)
+ el2.attrib[attribkey] = 'page'
+ el2.text = ref1
+
+ #
+ # Utility methods
+
+ def append_child(self, tag, attrib=None, parent=None):
+ if parent is None:
+ parent = self.current_element
+ if attrib is None:
+ el = SubElement(parent, tag)
+ else:
+ el = SubElement(parent, tag, attrib)
+ return el
+
+ def append_p(self, style, text=None):
+ result = self.append_child('text:p', attrib={
+ 'text:style-name': self.rststyle(style)})
+ self.append_pending_ids(result)
+ if text is not None:
+ result.text = text
+ return result
+
+ def append_pending_ids(self, el):
+ if self.settings.create_links:
+ for id in self.pending_ids:
+ SubElement(el, 'text:reference-mark', attrib={
+ 'text:name': id})
+ self.pending_ids = [ ]
+
+ def set_current_element(self, el):
+ self.current_element = el
+
+ def set_to_parent(self):
+ self.current_element = self.current_element.getparent()
+
+ def generate_labeled_block(self, node, label):
+ el = self.append_p('textbody')
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ el1.text = label
+ el = self.append_p('blockindent')
+ return el
+
+ def generate_labeled_line(self, node, label):
+ el = self.append_p('textbody')
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ el1.text = label
+ el1.tail = node.astext()
+ return el
+
+ def encode(self, text):
+ text = text.replace(u'\u00a0', " ")
+ return text
+
+ #
+ # Visitor functions
+ #
+ # In alphabetic order, more or less.
+ # See docutils.nodes.node_class_names.
+ #
+
+ def dispatch_visit(self, node):
+ """Override to catch basic attributes which many nodes have."""
+ self.handle_basic_atts(node)
+ nodes.GenericNodeVisitor.dispatch_visit(self, node)
+
+ def handle_basic_atts(self, node):
+ if isinstance(node, nodes.Element) and node['ids']:
+ self.pending_ids += node['ids']
+
+ def default_visit(self, node):
+ self.document.reporter.warning('missing visit_%s' % (node.tagname, ))
+
+ def default_departure(self, node):
+ self.document.reporter.warning('missing depart_%s' % (node.tagname, ))
+
+ def visit_Text(self, node):
+ # Skip nodes whose text has been processed in parent nodes.
+ if isinstance(node.parent, docutils.nodes.literal_block):
+ return
+ text = node.astext()
+ # Are we in mixed content? If so, add the text to the
+ # etree tail of the previous sibling element.
+ if len(self.current_element.getchildren()) > 0:
+ if self.current_element.getchildren()[-1].tail:
+ self.current_element.getchildren()[-1].tail += text
+ else:
+ self.current_element.getchildren()[-1].tail = text
+ else:
+ if self.current_element.text:
+ self.current_element.text += text
+ else:
+ self.current_element.text = text
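`visit_Text` relies on ElementTree's mixed-content model: text before the first child lives in the parent's `.text`, while text after a child goes in that child's `.tail`. A minimal illustration (with hypothetical tags, not the ODF ones used here):

```python
import xml.etree.ElementTree as etree

# Build <p>before <span>middle</span> after</p> the ElementTree way:
# 'before ' is the parent's .text, ' after' is the child's .tail.
p = etree.Element('p')
p.text = 'before '
span = etree.SubElement(p, 'span')
span.text = 'middle'
span.tail = ' after'
serialized = etree.tostring(p)
```

That is why the method appends to the last sibling's `.tail` when the current element already has children, rather than to the element's own `.text`.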
+
+ def depart_Text(self, node):
+ pass
+
+ #
+ # Pre-defined fields
+ #
+
+ def visit_address(self, node):
+ el = self.generate_labeled_block(node, 'Address: ')
+ self.set_current_element(el)
+
+ def depart_address(self, node):
+ self.set_to_parent()
+
+ def visit_author(self, node):
+ if isinstance(node.parent, nodes.authors):
+ el = self.append_p('blockindent')
+ else:
+ el = self.generate_labeled_block(node, 'Author: ')
+ self.set_current_element(el)
+
+ def depart_author(self, node):
+ self.set_to_parent()
+
+ def visit_authors(self, node):
+ label = 'Authors:'
+ el = self.append_p('textbody')
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ el1.text = label
+
+ def depart_authors(self, node):
+ pass
+
+ def visit_contact(self, node):
+ el = self.generate_labeled_block(node, 'Contact: ')
+ self.set_current_element(el)
+
+ def depart_contact(self, node):
+ self.set_to_parent()
+
+ def visit_copyright(self, node):
+ el = self.generate_labeled_block(node, 'Copyright: ')
+ self.set_current_element(el)
+
+ def depart_copyright(self, node):
+ self.set_to_parent()
+
+ def visit_date(self, node):
+ self.generate_labeled_line(node, 'Date: ')
+
+ def depart_date(self, node):
+ pass
+
+ def visit_organization(self, node):
+ el = self.generate_labeled_block(node, 'Organization: ')
+ self.set_current_element(el)
+
+ def depart_organization(self, node):
+ self.set_to_parent()
+
+ def visit_status(self, node):
+ el = self.generate_labeled_block(node, 'Status: ')
+ self.set_current_element(el)
+
+ def depart_status(self, node):
+ self.set_to_parent()
+
+ def visit_revision(self, node):
+ self.generate_labeled_line(node, 'Revision: ')
+
+ def depart_revision(self, node):
+ pass
+
+ def visit_version(self, node):
+ el = self.generate_labeled_line(node, 'Version: ')
+ #self.set_current_element(el)
+
+ def depart_version(self, node):
+ #self.set_to_parent()
+ pass
+
+ def visit_attribution(self, node):
+ el = self.append_p('attribution', node.astext())
+
+ def depart_attribution(self, node):
+ pass
+
+ def visit_block_quote(self, node):
+ if 'epigraph' in node.attributes['classes']:
+ self.paragraph_style_stack.append(self.rststyle('epigraph'))
+ self.blockstyle = self.rststyle('epigraph')
+ elif 'highlights' in node.attributes['classes']:
+ self.paragraph_style_stack.append(self.rststyle('highlights'))
+ self.blockstyle = self.rststyle('highlights')
+ else:
+ self.paragraph_style_stack.append(self.rststyle('blockquote'))
+ self.blockstyle = self.rststyle('blockquote')
+ self.line_indent_level += 1
+
+ def depart_block_quote(self, node):
+ self.paragraph_style_stack.pop()
+ self.blockstyle = ''
+ self.line_indent_level -= 1
+
+ def visit_bullet_list(self, node):
+ self.list_level += 1
+ if self.in_table_of_contents:
+ if self.settings.generate_oowriter_toc:
+ pass
+ else:
+ if node.has_key('classes') and \
+ 'auto-toc' in node.attributes['classes']:
+ el = SubElement(self.current_element, 'text:list', attrib={
+ 'text:style-name': self.rststyle('tocenumlist'),
+ })
+ self.list_style_stack.append(self.rststyle('enumitem'))
+ else:
+ el = SubElement(self.current_element, 'text:list', attrib={
+ 'text:style-name': self.rststyle('tocbulletlist'),
+ })
+ self.list_style_stack.append(self.rststyle('bulletitem'))
+ self.set_current_element(el)
+ else:
+ if self.blockstyle == self.rststyle('blockquote'):
+ el = SubElement(self.current_element, 'text:list', attrib={
+ 'text:style-name': self.rststyle('blockquote-bulletlist'),
+ })
+ self.list_style_stack.append(
+ self.rststyle('blockquote-bulletitem'))
+ elif self.blockstyle == self.rststyle('highlights'):
+ el = SubElement(self.current_element, 'text:list', attrib={
+ 'text:style-name': self.rststyle('highlights-bulletlist'),
+ })
+ self.list_style_stack.append(
+ self.rststyle('highlights-bulletitem'))
+ elif self.blockstyle == self.rststyle('epigraph'):
+ el = SubElement(self.current_element, 'text:list', attrib={
+ 'text:style-name': self.rststyle('epigraph-bulletlist'),
+ })
+ self.list_style_stack.append(
+ self.rststyle('epigraph-bulletitem'))
+ else:
+ el = SubElement(self.current_element, 'text:list', attrib={
+ 'text:style-name': self.rststyle('bulletlist'),
+ })
+ self.list_style_stack.append(self.rststyle('bulletitem'))
+ self.set_current_element(el)
+
+ def depart_bullet_list(self, node):
+ if self.in_table_of_contents:
+ if self.settings.generate_oowriter_toc:
+ pass
+ else:
+ self.set_to_parent()
+ self.list_style_stack.pop()
+ else:
+ self.set_to_parent()
+ self.list_style_stack.pop()
+ self.list_level -= 1
+
+ def visit_caption(self, node):
+ raise nodes.SkipChildren()
+
+ def depart_caption(self, node):
+ pass
+
+ def visit_comment(self, node):
+ el = self.append_p('textbody')
+ el1 = SubElement(el, 'office:annotation', attrib={})
+ el2 = SubElement(el1, 'text:p', attrib={})
+ el2.text = node.astext()
+
+ def depart_comment(self, node):
+ pass
+
+ def visit_compound(self, node):
+ # The compound directive currently receives no special treatment.
+ pass
+
+ def depart_compound(self, node):
+ pass
+
+ def visit_container(self, node):
+ styles = node.attributes.get('classes', ())
+ if len(styles) > 0:
+ self.paragraph_style_stack.append(self.rststyle(styles[0]))
+
+ def depart_container(self, node):
+ styles = node.attributes.get('classes', ())
+ if len(styles) > 0:
+ self.paragraph_style_stack.pop()
+
+ def visit_decoration(self, node):
+ pass
+
+ def depart_decoration(self, node):
+ pass
+
+ def visit_definition(self, node):
+ self.paragraph_style_stack.append(self.rststyle('blockindent'))
+ self.bumped_list_level_stack.append(ListLevel(1))
+
+ def depart_definition(self, node):
+ self.paragraph_style_stack.pop()
+ self.bumped_list_level_stack.pop()
+
+ def visit_definition_list(self, node):
+ pass
+
+ def depart_definition_list(self, node):
+ pass
+
+ def visit_definition_list_item(self, node):
+ pass
+
+ def depart_definition_list_item(self, node):
+ pass
+
+ def visit_term(self, node):
+ el = self.append_p('textbody')
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ #el1.text = node.astext()
+ self.set_current_element(el1)
+
+ def depart_term(self, node):
+ self.set_to_parent()
+ self.set_to_parent()
+
+ def visit_classifier(self, node):
+ els = self.current_element.getchildren()
+ if len(els) > 0:
+ el = els[-1]
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('emphasis')
+ })
+ el1.text = ' (%s)' % (node.astext(), )
+
+ def depart_classifier(self, node):
+ pass
+
+ def visit_document(self, node):
+ pass
+
+ def depart_document(self, node):
+ self.process_footnotes()
+
+ def visit_docinfo(self, node):
+ self.section_level += 1
+ self.section_count += 1
+ if self.settings.create_sections:
+ el = self.append_child('text:section', attrib={
+ 'text:name': 'Section%d' % self.section_count,
+ 'text:style-name': 'Sect%d' % self.section_level,
+ })
+ self.set_current_element(el)
+
+ def depart_docinfo(self, node):
+ self.section_level -= 1
+ if self.settings.create_sections:
+ self.set_to_parent()
+
+ def visit_emphasis(self, node):
+ el = SubElement(self.current_element, 'text:span',
+ attrib={'text:style-name': self.rststyle('emphasis')})
+ self.set_current_element(el)
+
+ def depart_emphasis(self, node):
+ self.set_to_parent()
+
+ def visit_enumerated_list(self, node):
+ el1 = self.current_element
+ if self.blockstyle == self.rststyle('blockquote'):
+ el2 = SubElement(el1, 'text:list', attrib={
+ 'text:style-name': self.rststyle('blockquote-enumlist'),
+ })
+ self.list_style_stack.append(self.rststyle('blockquote-enumitem'))
+ elif self.blockstyle == self.rststyle('highlights'):
+ el2 = SubElement(el1, 'text:list', attrib={
+ 'text:style-name': self.rststyle('highlights-enumlist'),
+ })
+ self.list_style_stack.append(self.rststyle('highlights-enumitem'))
+ elif self.blockstyle == self.rststyle('epigraph'):
+ el2 = SubElement(el1, 'text:list', attrib={
+ 'text:style-name': self.rststyle('epigraph-enumlist'),
+ })
+ self.list_style_stack.append(self.rststyle('epigraph-enumitem'))
+ else:
+ liststylename = 'enumlist-%s' % (node.get('enumtype', 'arabic'), )
+ el2 = SubElement(el1, 'text:list', attrib={
+ 'text:style-name': self.rststyle(liststylename),
+ })
+ self.list_style_stack.append(self.rststyle('enumitem'))
+ self.set_current_element(el2)
+
+ def depart_enumerated_list(self, node):
+ self.set_to_parent()
+ self.list_style_stack.pop()
+
+ def visit_list_item(self, node):
+ # If we are in a "bumped" list level, then wrap this
+ # list in outer lists in order to increase the
+ # indentation level.
+ if self.in_table_of_contents:
+ if self.settings.generate_oowriter_toc:
+ self.paragraph_style_stack.append(
+ self.rststyle('contents-%d' % (self.list_level, )))
+ else:
+ el1 = self.append_child('text:list-item')
+ self.set_current_element(el1)
+ else:
+ el1 = self.append_child('text:list-item')
+ el3 = el1
+ if len(self.bumped_list_level_stack) > 0:
+ level_obj = self.bumped_list_level_stack[-1]
+ if level_obj.get_sibling():
+ level_obj.set_nested(False)
+ for level_obj1 in self.bumped_list_level_stack:
+ for idx in range(level_obj1.get_level()):
+ el2 = self.append_child('text:list', parent=el3)
+ el3 = self.append_child(
+ 'text:list-item', parent=el2)
+ self.paragraph_style_stack.append(self.list_style_stack[-1])
+ self.set_current_element(el3)
+
+ def depart_list_item(self, node):
+ if self.in_table_of_contents:
+ if self.settings.generate_oowriter_toc:
+ self.paragraph_style_stack.pop()
+ else:
+ self.set_to_parent()
+ else:
+ if len(self.bumped_list_level_stack) > 0:
+ level_obj = self.bumped_list_level_stack[-1]
+ if level_obj.get_sibling():
+ level_obj.set_nested(True)
+ for level_obj1 in self.bumped_list_level_stack:
+ for idx in range(level_obj1.get_level()):
+ self.set_to_parent()
+ self.set_to_parent()
+ self.paragraph_style_stack.pop()
+ self.set_to_parent()
+
+ def visit_header(self, node):
+ self.in_header = True
+
+ def depart_header(self, node):
+ self.in_header = False
+
+ def visit_footer(self, node):
+ self.in_footer = True
+
+ def depart_footer(self, node):
+ self.in_footer = False
+
+ def visit_field(self, node):
+ pass
+
+ def depart_field(self, node):
+ pass
+
+ def visit_field_list(self, node):
+ pass
+
+ def depart_field_list(self, node):
+ pass
+
+ def visit_field_name(self, node):
+ el = self.append_p('textbody')
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ el1.text = node.astext()
+
+ def depart_field_name(self, node):
+ pass
+
+ def visit_field_body(self, node):
+ self.paragraph_style_stack.append(self.rststyle('blockindent'))
+
+ def depart_field_body(self, node):
+ self.paragraph_style_stack.pop()
+
+ def visit_figure(self, node):
+ pass
+
+ def depart_figure(self, node):
+ pass
+
+ def visit_footnote(self, node):
+ self.footnote_level += 1
+ self.save_footnote_current = self.current_element
+ el1 = Element('text:note-body')
+ self.current_element = el1
+ self.footnote_list.append((node, el1))
+ if isinstance(node, docutils.nodes.citation):
+ self.paragraph_style_stack.append(self.rststyle('citation'))
+ else:
+ self.paragraph_style_stack.append(self.rststyle('footnote'))
+
+ def depart_footnote(self, node):
+ self.paragraph_style_stack.pop()
+ self.current_element = self.save_footnote_current
+ self.footnote_level -= 1
+
+ footnote_chars = [
+ '*', '**', '***',
+ '++', '+++',
+ '##', '###',
+ '@@', '@@@',
+ ]
+
+ def visit_footnote_reference(self, node):
+ if self.footnote_level <= 0:
+ id = node.attributes['ids'][0]
+ refid = node.attributes.get('refid')
+ if refid is None:
+ refid = ''
+ if self.settings.endnotes_end_doc:
+ note_class = 'endnote'
+ else:
+ note_class = 'footnote'
+ el1 = self.append_child('text:note', attrib={
+ 'text:id': '%s' % (refid, ),
+ 'text:note-class': note_class,
+ })
+ note_auto = str(node.attributes.get('auto', 1))
+ if isinstance(node, docutils.nodes.citation_reference):
+ citation = '[%s]' % node.astext()
+ el2 = SubElement(el1, 'text:note-citation', attrib={
+ 'text:label': citation,
+ })
+ el2.text = citation
+ elif note_auto == '1':
+ el2 = SubElement(el1, 'text:note-citation', attrib={
+ 'text:label': node.astext(),
+ })
+ el2.text = node.astext()
+ elif note_auto == '*':
+ if self.footnote_chars_idx >= len(
+ ODFTranslator.footnote_chars):
+ self.footnote_chars_idx = 0
+ footnote_char = ODFTranslator.footnote_chars[
+ self.footnote_chars_idx]
+ self.footnote_chars_idx += 1
+ el2 = SubElement(el1, 'text:note-citation', attrib={
+ 'text:label': footnote_char,
+ })
+ el2.text = footnote_char
+ self.footnote_ref_dict[id] = el1
+ raise nodes.SkipChildren()
+
+ def depart_footnote_reference(self, node):
+ pass
+
+ def visit_citation(self, node):
+ for id in node.attributes['ids']:
+ self.citation_id = id
+ break
+ self.paragraph_style_stack.append(self.rststyle('blockindent'))
+ self.bumped_list_level_stack.append(ListLevel(1))
+
+ def depart_citation(self, node):
+ self.citation_id = None
+ self.paragraph_style_stack.pop()
+ self.bumped_list_level_stack.pop()
+
+ def visit_citation_reference(self, node):
+ if self.settings.create_links:
+ id = node.attributes['refid']
+ el = self.append_child('text:reference-ref', attrib={
+ 'text:ref-name': '%s' % (id, ),
+ 'text:reference-format': 'text',
+ })
+ el.text = '['
+ self.set_current_element(el)
+ elif self.current_element.text is None:
+ self.current_element.text = '['
+ else:
+ self.current_element.text += '['
+
+ def depart_citation_reference(self, node):
+ self.current_element.text += ']'
+ if self.settings.create_links:
+ self.set_to_parent()
+
+ def visit_label(self, node):
+ if isinstance(node.parent, docutils.nodes.footnote):
+ raise nodes.SkipChildren()
+ elif self.citation_id is not None:
+ el = self.append_p('textbody')
+ self.set_current_element(el)
+ el.text = '['
+ if self.settings.create_links:
+ el1 = self.append_child('text:reference-mark-start', attrib={
+ 'text:name': '%s' % (self.citation_id, ),
+ })
+
+ def depart_label(self, node):
+ if isinstance(node.parent, docutils.nodes.footnote):
+ pass
+ elif self.citation_id is not None:
+ self.current_element.text += ']'
+ if self.settings.create_links:
+ el = self.append_child('text:reference-mark-end', attrib={
+ 'text:name': '%s' % (self.citation_id, ),
+ })
+ self.set_to_parent()
+
+ def visit_generated(self, node):
+ pass
+
+ def depart_generated(self, node):
+ pass
+
+ def check_file_exists(self, path):
+ return os.path.exists(path)
+
+ def visit_image(self, node):
+ # Capture the image file.
+ if 'uri' in node.attributes:
+ source = node.attributes['uri']
+ if not self.check_file_exists(source):
+ self.document.reporter.warning(
+ 'Cannot find image file %s.' % (source, ))
+ return
+ else:
+ return
+ if source in self.image_dict:
+ filename, destination = self.image_dict[source]
+ else:
+ self.image_count += 1
+ filename = os.path.split(source)[1]
+ destination = 'Pictures/1%08x%s' % (self.image_count, filename, )
+ spec = (os.path.abspath(source), destination,)
+
+ self.embedded_file_list.append(spec)
+ self.image_dict[source] = (source, destination,)
+ # Is this a figure (containing an image) or just a plain image?
+ if self.in_paragraph:
+ el1 = self.current_element
+ else:
+ el1 = SubElement(self.current_element, 'text:p',
+ attrib={'text:style-name': self.rststyle('textbody')})
+ el2 = el1
+ if isinstance(node.parent, docutils.nodes.figure):
+ el3, el4, el5, caption = self.generate_figure(node, source,
+ destination, el2)
+ attrib = {}
+ el6, width = self.generate_image(node, source, destination,
+ el5, attrib)
+ if caption is not None:
+ el6.tail = caption
+ else: #if isinstance(node.parent, docutils.nodes.image):
+ el3 = self.generate_image(node, source, destination, el2)
+
+ def depart_image(self, node):
+ pass
+
+ def get_image_width_height(self, node, attr):
+ size = None
+ if attr in node.attributes:
+ size = node.attributes[attr]
+ unit = size[-2:]
+ if unit.isalpha():
+ size = size[:-2]
+ else:
+ unit = 'px'
+ try:
+ size = float(size)
+ except ValueError, e:
+ self.document.reporter.warning(
+ 'Invalid %s for image: "%s"' % (
+ attr, node.attributes[attr]))
+ size = [size, unit]
+ return size
+
+ def get_image_scale(self, node):
+ if 'scale' in node.attributes:
+ try:
+ scale = int(node.attributes['scale'])
+ if scale < 1: # or scale > 100:
+ self.document.reporter.warning(
+ 'scale out of range (%s), using 1.' % (scale, ))
+ scale = 1
+ scale = scale * 0.01
+ except ValueError, e:
+ self.document.reporter.warning(
+ 'Invalid scale for image: "%s"' % (
+ node.attributes['scale'], ))
+ else:
+ scale = 1.0
+ return scale
+
+ def get_image_scaled_width_height(self, node, source):
+ scale = self.get_image_scale(node)
+ width = self.get_image_width_height(node, 'width')
+ height = self.get_image_width_height(node, 'height')
+
+ dpi = (72, 72)
+ if Image is not None and source in self.image_dict:
+ filename, destination = self.image_dict[source]
+ imageobj = Image.open(filename, 'r')
+ dpi = imageobj.info.get('dpi', dpi)
+ # dpi information can be (xdpi, ydpi) or a single xydpi value
+ try:
+ iter(dpi)
+ except TypeError:
+ dpi = (dpi, dpi)
+ else:
+ imageobj = None
+
+ if width is None or height is None:
+ if imageobj is None:
+ raise RuntimeError(
+ 'image size not fully specified and PIL not installed')
+ if width is None: width = [imageobj.size[0], 'px']
+ if height is None: height = [imageobj.size[1], 'px']
+
+ width[0] *= scale
+ height[0] *= scale
+ if width[1] == 'px': width = [width[0] / dpi[0], 'in']
+ if height[1] == 'px': height = [height[0] / dpi[1], 'in']
+
+ width[0] = str(width[0])
+ height[0] = str(height[0])
+ return ''.join(width), ''.join(height)
+
+ def generate_figure(self, node, source, destination, current_element):
+ caption = None
+ width, height = self.get_image_scaled_width_height(node, source)
+ for node1 in node.parent.children:
+ if node1.tagname == 'caption':
+ caption = node1.astext()
+ self.image_style_count += 1
+ #
+ # Add the style for the caption.
+ if caption is not None:
+ attrib = {
+ 'style:class': 'extra',
+ 'style:family': 'paragraph',
+ 'style:name': 'Caption',
+ 'style:parent-style-name': 'Standard',
+ }
+ el1 = SubElement(self.automatic_styles, 'style:style',
+ attrib=attrib, nsdict=SNSD)
+ attrib = {
+ 'fo:margin-bottom': '0.0835in',
+ 'fo:margin-top': '0.0835in',
+ 'text:line-number': '0',
+ 'text:number-lines': 'false',
+ }
+ el2 = SubElement(el1, 'style:paragraph-properties',
+ attrib=attrib, nsdict=SNSD)
+ attrib = {
+ 'fo:font-size': '12pt',
+ 'fo:font-style': 'italic',
+ 'style:font-name': 'Times',
+ 'style:font-name-complex': 'Lucidasans1',
+ 'style:font-size-asian': '12pt',
+ 'style:font-size-complex': '12pt',
+ 'style:font-style-asian': 'italic',
+ 'style:font-style-complex': 'italic',
+ }
+ el2 = SubElement(el1, 'style:text-properties',
+ attrib=attrib, nsdict=SNSD)
+ style_name = 'rstframestyle%d' % self.image_style_count
+ # Add the styles
+ attrib = {
+ 'style:name': style_name,
+ 'style:family': 'graphic',
+ 'style:parent-style-name': self.rststyle('figureframe'),
+ }
+ el1 = SubElement(self.automatic_styles,
+ 'style:style', attrib=attrib, nsdict=SNSD)
+ halign = 'center'
+ valign = 'top'
+ if 'align' in node.attributes:
+ align = node.attributes['align'].split()
+ for val in align:
+ if val in ('left', 'center', 'right'):
+ halign = val
+ elif val in ('top', 'middle', 'bottom'):
+ valign = val
+ attrib = {}
+ wrap = False
+ classes = node.parent.attributes.get('classes')
+ if classes and 'wrap' in classes:
+ wrap = True
+ if wrap:
+ attrib['style:wrap'] = 'dynamic'
+ else:
+ attrib['style:wrap'] = 'none'
+ el2 = SubElement(el1,
+ 'style:graphic-properties', attrib=attrib, nsdict=SNSD)
+ attrib = {
+ 'draw:style-name': style_name,
+ 'draw:name': 'Frame1',
+ 'text:anchor-type': 'paragraph',
+ 'draw:z-index': '0',
+ }
+ attrib['svg:width'] = width
+ # dbg
+ #attrib['svg:height'] = height
+ el3 = SubElement(current_element, 'draw:frame', attrib=attrib)
+ attrib = {}
+ el4 = SubElement(el3, 'draw:text-box', attrib=attrib)
+ attrib = {
+ 'text:style-name': self.rststyle('caption'),
+ }
+ el5 = SubElement(el4, 'text:p', attrib=attrib)
+ return el3, el4, el5, caption
+
+ def generate_image(self, node, source, destination, current_element,
+ frame_attrs=None):
+ width, height = self.get_image_scaled_width_height(node, source)
+ self.image_style_count += 1
+ style_name = 'rstframestyle%d' % self.image_style_count
+ # Add the style.
+ attrib = {
+ 'style:name': style_name,
+ 'style:family': 'graphic',
+ 'style:parent-style-name': self.rststyle('image'),
+ }
+ el1 = SubElement(self.automatic_styles,
+ 'style:style', attrib=attrib, nsdict=SNSD)
+ halign = None
+ valign = None
+ if 'align' in node.attributes:
+ align = node.attributes['align'].split()
+ for val in align:
+ if val in ('left', 'center', 'right'):
+ halign = val
+ elif val in ('top', 'middle', 'bottom'):
+ valign = val
+ if frame_attrs is None:
+ attrib = {
+ 'style:vertical-pos': 'top',
+ 'style:vertical-rel': 'paragraph',
+ 'style:horizontal-rel': 'paragraph',
+ 'style:mirror': 'none',
+ 'fo:clip': 'rect(0cm 0cm 0cm 0cm)',
+ 'draw:luminance': '0%',
+ 'draw:contrast': '0%',
+ 'draw:red': '0%',
+ 'draw:green': '0%',
+ 'draw:blue': '0%',
+ 'draw:gamma': '100%',
+ 'draw:color-inversion': 'false',
+ 'draw:image-opacity': '100%',
+ 'draw:color-mode': 'standard',
+ }
+ else:
+ attrib = frame_attrs
+ if halign is not None:
+ attrib['style:horizontal-pos'] = halign
+ if valign is not None:
+ attrib['style:vertical-pos'] = valign
+ # A "wrap" class requests dynamic wrapping around the
+ # frame; otherwise disable wrapping.
+ wrap = False
+ classes = node.attributes.get('classes')
+ if classes and 'wrap' in classes:
+ wrap = True
+ if wrap:
+ attrib['style:wrap'] = 'dynamic'
+ else:
+ attrib['style:wrap'] = 'none'
+ # If we are inside a table, add a no-wrap style.
+ if self.is_in_table(node):
+ attrib['style:wrap'] = 'none'
+ el2 = SubElement(el1,
+ 'style:graphic-properties', attrib=attrib, nsdict=SNSD)
+ # Add the content.
+ #el = SubElement(current_element, 'text:p',
+ # attrib={'text:style-name': self.rststyle('textbody')})
+ attrib = {
+ 'draw:style-name': style_name,
+ 'draw:name': 'graphics2',
+ 'draw:z-index': '1',
+ }
+ if isinstance(node.parent, nodes.TextElement):
+ attrib['text:anchor-type'] = 'as-char' #vds
+ else:
+ attrib['text:anchor-type'] = 'paragraph'
+ attrib['svg:width'] = width
+ attrib['svg:height'] = height
+ el1 = SubElement(current_element, 'draw:frame', attrib=attrib)
+ el2 = SubElement(el1, 'draw:image', attrib={
+ 'xlink:href': '%s' % (destination, ),
+ 'xlink:type': 'simple',
+ 'xlink:show': 'embed',
+ 'xlink:actuate': 'onLoad',
+ })
+ return el1, width
+
+ def is_in_table(self, node):
+ node1 = node.parent
+ while node1:
+ if isinstance(node1, docutils.nodes.entry):
+ return True
+ node1 = node1.parent
+ return False
+
+ def visit_legend(self, node):
+ if isinstance(node.parent, docutils.nodes.figure):
+ el1 = self.current_element[-1]
+ el1 = el1[0][0]
+ self.current_element = el1
+ self.paragraph_style_stack.append(self.rststyle('legend'))
+
+ def depart_legend(self, node):
+ if isinstance(node.parent, docutils.nodes.figure):
+ self.paragraph_style_stack.pop()
+ self.set_to_parent()
+ self.set_to_parent()
+ self.set_to_parent()
+
+ def visit_line_block(self, node):
+ self.line_indent_level += 1
+ self.line_block_level += 1
+
+ def depart_line_block(self, node):
+ self.line_indent_level -= 1
+ self.line_block_level -= 1
+
+ def visit_line(self, node):
+ style = 'lineblock%d' % self.line_indent_level
+ el1 = SubElement(self.current_element, 'text:p', attrib={
+ 'text:style-name': self.rststyle(style),
+ })
+ self.current_element = el1
+
+ def depart_line(self, node):
+ self.set_to_parent()
+
+ def visit_literal(self, node):
+ el = SubElement(self.current_element, 'text:span',
+ attrib={'text:style-name': self.rststyle('inlineliteral')})
+ self.set_current_element(el)
+
+ def depart_literal(self, node):
+ self.set_to_parent()
+
+ def visit_inline(self, node):
+ styles = node.attributes.get('classes', ())
+ if len(styles) > 0:
+ inline_style = styles[0]
+ el = SubElement(self.current_element, 'text:span',
+ attrib={'text:style-name': self.rststyle(inline_style)})
+ self.set_current_element(el)
+
+ def depart_inline(self, node):
+ self.set_to_parent()
+
+ def _calculate_code_block_padding(self, line):
+ count = 0
+ matchobj = SPACES_PATTERN.match(line)
+ if matchobj:
+ pad = matchobj.group()
+ count = len(pad)
+ else:
+ matchobj = TABS_PATTERN.match(line)
+ if matchobj:
+ pad = matchobj.group()
+ count = len(pad) * 8
+ return count
+
+ def _add_syntax_highlighting(self, insource, language):
+ lexer = pygments.lexers.get_lexer_by_name(language, stripall=True)
+ if language in ('latex', 'tex'):
+ fmtr = OdtPygmentsLaTeXFormatter(lambda name, parameters=():
+ self.rststyle(name, parameters),
+ escape_function=escape_cdata)
+ else:
+ fmtr = OdtPygmentsProgFormatter(lambda name, parameters=():
+ self.rststyle(name, parameters),
+ escape_function=escape_cdata)
+ outsource = pygments.highlight(insource, lexer, fmtr)
+ return outsource
+
+ def fill_line(self, line):
+ line = FILL_PAT1.sub(self.fill_func1, line)
+ line = FILL_PAT2.sub(self.fill_func2, line)
+ return line
+
+ def fill_func1(self, matchobj):
+ spaces = matchobj.group(0)
+ repl = '<text:s text:c="%d"/>' % (len(spaces), )
+ return repl
+
+ def fill_func2(self, matchobj):
+ spaces = matchobj.group(0)
+ repl = ' <text:s text:c="%d"/>' % (len(spaces) - 1, )
+ return repl
+
+ def visit_literal_block(self, node):
+ wrapper1 = '<text:p text:style-name="%s">%%s</text:p>' % (
+ self.rststyle('codeblock'), )
+ source = node.astext()
+ if (pygments and
+ self.settings.add_syntax_highlighting
+ #and
+ #node.get('hilight', False)
+ ):
+ language = node.get('language', 'python')
+ source = self._add_syntax_highlighting(source, language)
+ else:
+ source = escape_cdata(source)
+ lines = source.split('\n')
+ lines1 = ['<wrappertag1 xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0">']
+
+ my_lines = []
+ for my_line in lines:
+ my_line = self.fill_line(my_line)
+ my_line = my_line.replace("&#10;", "\n")
+ my_lines.append(my_line)
+ my_lines_str = '<text:line-break/>'.join(my_lines)
+ my_lines_str2 = wrapper1 % (my_lines_str, )
+ lines1.append(my_lines_str2)
+ lines1.append('</wrappertag1>')
+ s1 = ''.join(lines1)
+ if WhichElementTree != "lxml":
+ s1 = s1.encode("utf-8")
+ el1 = etree.fromstring(s1)
+ children = el1.getchildren()
+ for child in children:
+ self.current_element.append(child)
+
+ def depart_literal_block(self, node):
+ pass
+
+ visit_doctest_block = visit_literal_block
+ depart_doctest_block = depart_literal_block
+
+ def visit_meta(self, node):
+ name = node.attributes.get('name')
+ content = node.attributes.get('content')
+ if name is not None and content is not None:
+ self.meta_dict[name] = content
+
+ def depart_meta(self, node):
+ pass
+
+ def visit_option_list(self, node):
+ table_name = 'tableoption'
+ #
+ # Generate automatic styles
+ if not self.optiontablestyles_generated:
+ self.optiontablestyles_generated = True
+ el = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle(table_name),
+ 'style:family': 'table'}, nsdict=SNSD)
+ el1 = SubElement(el, 'style:table-properties', attrib={
+ 'style:width': '17.59cm',
+ 'table:align': 'left',
+ 'style:shadow': 'none'}, nsdict=SNSD)
+ el = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle('%s.%%c' % table_name, ( 'A', )),
+ 'style:family': 'table-column'}, nsdict=SNSD)
+ el1 = SubElement(el, 'style:table-column-properties', attrib={
+ 'style:column-width': '4.999cm'}, nsdict=SNSD)
+ el = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle('%s.%%c' % table_name, ( 'B', )),
+ 'style:family': 'table-column'}, nsdict=SNSD)
+ el1 = SubElement(el, 'style:table-column-properties', attrib={
+ 'style:column-width': '12.587cm'}, nsdict=SNSD)
+ el = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle(
+ '%s.%%c%%d' % table_name, ( 'A', 1, )),
+ 'style:family': 'table-cell'}, nsdict=SNSD)
+ el1 = SubElement(el, 'style:table-cell-properties', attrib={
+ 'fo:background-color': 'transparent',
+ 'fo:padding': '0.097cm',
+ 'fo:border-left': '0.035cm solid #000000',
+ 'fo:border-right': 'none',
+ 'fo:border-top': '0.035cm solid #000000',
+ 'fo:border-bottom': '0.035cm solid #000000'}, nsdict=SNSD)
+ el2 = SubElement(el1, 'style:background-image', nsdict=SNSD)
+ el = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle(
+ '%s.%%c%%d' % table_name, ( 'B', 1, )),
+ 'style:family': 'table-cell'}, nsdict=SNSD)
+ el1 = SubElement(el, 'style:table-cell-properties', attrib={
+ 'fo:padding': '0.097cm',
+ 'fo:border': '0.035cm solid #000000'}, nsdict=SNSD)
+ el = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle(
+ '%s.%%c%%d' % table_name, ( 'A', 2, )),
+ 'style:family': 'table-cell'}, nsdict=SNSD)
+ el1 = SubElement(el, 'style:table-cell-properties', attrib={
+ 'fo:padding': '0.097cm',
+ 'fo:border-left': '0.035cm solid #000000',
+ 'fo:border-right': 'none',
+ 'fo:border-top': 'none',
+ 'fo:border-bottom': '0.035cm solid #000000'}, nsdict=SNSD)
+ el = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle(
+ '%s.%%c%%d' % table_name, ( 'B', 2, )),
+ 'style:family': 'table-cell'}, nsdict=SNSD)
+ el1 = SubElement(el, 'style:table-cell-properties', attrib={
+ 'fo:padding': '0.097cm',
+ 'fo:border-left': '0.035cm solid #000000',
+ 'fo:border-right': '0.035cm solid #000000',
+ 'fo:border-top': 'none',
+ 'fo:border-bottom': '0.035cm solid #000000'}, nsdict=SNSD)
+ #
+ # Generate table data
+ el = self.append_child('table:table', attrib={
+ 'table:name': self.rststyle(table_name),
+ 'table:style-name': self.rststyle(table_name),
+ })
+ el1 = SubElement(el, 'table:table-column', attrib={
+ 'table:style-name': self.rststyle(
+ '%s.%%c' % table_name, ( 'A', ))})
+ el1 = SubElement(el, 'table:table-column', attrib={
+ 'table:style-name': self.rststyle(
+ '%s.%%c' % table_name, ( 'B', ))})
+ el1 = SubElement(el, 'table:table-header-rows')
+ el2 = SubElement(el1, 'table:table-row')
+ el3 = SubElement(el2, 'table:table-cell', attrib={
+ 'table:style-name': self.rststyle(
+ '%s.%%c%%d' % table_name, ( 'A', 1, )),
+ 'office:value-type': 'string'})
+ el4 = SubElement(el3, 'text:p', attrib={
+ 'text:style-name': 'Table_20_Heading'})
+ el4.text = 'Option'
+ el3 = SubElement(el2, 'table:table-cell', attrib={
+ 'table:style-name': self.rststyle(
+ '%s.%%c%%d' % table_name, ( 'B', 1, )),
+ 'office:value-type': 'string'})
+ el4 = SubElement(el3, 'text:p', attrib={
+ 'text:style-name': 'Table_20_Heading'})
+ el4.text = 'Description'
+ self.set_current_element(el)
+
+ def depart_option_list(self, node):
+ self.set_to_parent()
+
+ def visit_option_list_item(self, node):
+ el = self.append_child('table:table-row')
+ self.set_current_element(el)
+
+ def depart_option_list_item(self, node):
+ self.set_to_parent()
+
+ def visit_option_group(self, node):
+ el = self.append_child('table:table-cell', attrib={
+ 'table:style-name': 'Table%d.A2' % self.table_count,
+ 'office:value-type': 'string',
+ })
+ self.set_current_element(el)
+
+ def depart_option_group(self, node):
+ self.set_to_parent()
+
+ def visit_option(self, node):
+ el = self.append_child('text:p', attrib={
+ 'text:style-name': 'Table_20_Contents'})
+ el.text = node.astext()
+
+ def depart_option(self, node):
+ pass
+
+ def visit_option_string(self, node):
+ pass
+
+ def depart_option_string(self, node):
+ pass
+
+ def visit_option_argument(self, node):
+ pass
+
+ def depart_option_argument(self, node):
+ pass
+
+ def visit_description(self, node):
+ el = self.append_child('table:table-cell', attrib={
+ 'table:style-name': 'Table%d.B2' % self.table_count,
+ 'office:value-type': 'string',
+ })
+ el1 = SubElement(el, 'text:p', attrib={
+ 'text:style-name': 'Table_20_Contents'})
+ el1.text = node.astext()
+ raise nodes.SkipChildren()
+
+ def depart_description(self, node):
+ pass
+
+ def visit_paragraph(self, node):
+ self.in_paragraph = True
+ if self.in_header:
+ el = self.append_p('header')
+ elif self.in_footer:
+ el = self.append_p('footer')
+ else:
+ style_name = self.paragraph_style_stack[-1]
+ el = self.append_child('text:p',
+ attrib={'text:style-name': style_name})
+ self.append_pending_ids(el)
+ self.set_current_element(el)
+
+ def depart_paragraph(self, node):
+ self.in_paragraph = False
+ self.set_to_parent()
+ if self.in_header:
+ self.header_content.append(
+ self.current_element.getchildren()[-1])
+ self.current_element.remove(
+ self.current_element.getchildren()[-1])
+ elif self.in_footer:
+ self.footer_content.append(
+ self.current_element.getchildren()[-1])
+ self.current_element.remove(
+ self.current_element.getchildren()[-1])
+
+ def visit_problematic(self, node):
+ pass
+
+ def depart_problematic(self, node):
+ pass
+
+ def visit_raw(self, node):
+ if 'format' in node.attributes:
+ formats = node.attributes['format']
+ formatlist = formats.split()
+ if 'odt' in formatlist:
+ rawstr = node.astext()
+ attrstr = ' '.join(['%s="%s"' % (k, v, )
+ for k, v in CONTENT_NAMESPACE_ATTRIB.items()])
+ contentstr = '<stuff %s>%s</stuff>' % (attrstr, rawstr, )
+ if WhichElementTree != "lxml":
+ contentstr = contentstr.encode("utf-8")
+ content = etree.fromstring(contentstr)
+ elements = content.getchildren()
+ if len(elements) > 0:
+ el1 = elements[0]
+ if self.in_header:
+ pass
+ elif self.in_footer:
+ pass
+ else:
+ self.current_element.append(el1)
+ raise nodes.SkipChildren()
+
+ def depart_raw(self, node):
+ pass
+
+ def visit_reference(self, node):
+ text = node.astext()
+ if self.settings.create_links:
+ if node.has_key('refuri'):
+ href = node['refuri']
+ if ( self.settings.cloak_email_addresses
+ and href.startswith('mailto:')):
+ href = self.cloak_mailto(href)
+ el = self.append_child('text:a', attrib={
+ 'xlink:href': '%s' % href,
+ 'xlink:type': 'simple',
+ })
+ self.set_current_element(el)
+ elif node.has_key('refid'):
+ href = node['refid']
+ el = self.append_child('text:reference-ref', attrib={
+ 'text:ref-name': '%s' % href,
+ 'text:reference-format': 'text',
+ })
+ else:
+ self.document.reporter.warning(
+ 'References must have "refuri" or "refid" attribute.')
+ if (self.in_table_of_contents and
+ len(node.children) >= 1 and
+ isinstance(node.children[0], docutils.nodes.generated)):
+ node.remove(node.children[0])
+
+ def depart_reference(self, node):
+ if self.settings.create_links:
+ if node.has_key('refuri'):
+ self.set_to_parent()
+
+ def visit_rubric(self, node):
+ style_name = self.rststyle('rubric')
+ classes = node.get('classes')
+ if classes:
+ class1 = classes[0]
+ if class1:
+ style_name = class1
+ el = SubElement(self.current_element, 'text:h', attrib = {
+ #'text:outline-level': '%d' % section_level,
+ #'text:style-name': 'Heading_20_%d' % section_level,
+ 'text:style-name': style_name,
+ })
+ text = node.astext()
+ el.text = self.encode(text)
+
+ def depart_rubric(self, node):
+ pass
+
+ def visit_section(self, node, move_ids=1):
+ self.section_level += 1
+ self.section_count += 1
+ if self.settings.create_sections:
+ el = self.append_child('text:section', attrib={
+ 'text:name': 'Section%d' % self.section_count,
+ 'text:style-name': 'Sect%d' % self.section_level,
+ })
+ self.set_current_element(el)
+
+ def depart_section(self, node):
+ self.section_level -= 1
+ if self.settings.create_sections:
+ self.set_to_parent()
+
+ def visit_strong(self, node):
+ el = SubElement(self.current_element, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ self.set_current_element(el)
+
+ def depart_strong(self, node):
+ self.set_to_parent()
+
+ def visit_substitution_definition(self, node):
+ raise nodes.SkipChildren()
+
+ def depart_substitution_definition(self, node):
+ pass
+
+ def visit_system_message(self, node):
+ pass
+
+ def depart_system_message(self, node):
+ pass
+
+ def get_table_style(self, node):
+ table_style = None
+ table_name = None
+ use_predefined_table_style = False
+ str_classes = node.get('classes')
+ if str_classes is not None:
+ for str_class in str_classes:
+ if str_class.startswith(TABLESTYLEPREFIX):
+ table_name = str_class
+ use_predefined_table_style = True
+ break
+ if table_name is not None:
+ table_style = self.table_styles.get(table_name)
+ if table_style is None:
+ # If we can't find the table style, issue warning
+ # and use the default table style.
+ self.document.reporter.warning(
+ 'Can\'t find table style "%s". Using default.' % (
+ table_name, ))
+ table_name = TABLENAMEDEFAULT
+ table_style = self.table_styles.get(table_name)
+ if table_style is None:
+ # If we can't find the default table style, issue a warning
+ # and use a built-in default style.
+ self.document.reporter.warning(
+ 'Can\'t find default table style "%s". Using built-in default.' % (
+ table_name, ))
+ table_style = BUILTIN_DEFAULT_TABLE_STYLE
+ else:
+ table_name = TABLENAMEDEFAULT
+ table_style = self.table_styles.get(table_name)
+ if table_style is None:
+ # If we can't find the default table style, issue a warning
+ # and use a built-in default style.
+ self.document.reporter.warning(
+ 'Can\'t find default table style "%s". Using built-in default.' % (
+ table_name, ))
+ table_style = BUILTIN_DEFAULT_TABLE_STYLE
+ return table_style
+
+ def visit_table(self, node):
+ self.table_count += 1
+ table_style = self.get_table_style(node)
+ table_name = '%s%%d' % TABLESTYLEPREFIX
+ el1 = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle(
+ '%s' % table_name, ( self.table_count, )),
+ 'style:family': 'table',
+ }, nsdict=SNSD)
+ if table_style.backgroundcolor is None:
+ el1_1 = SubElement(el1, 'style:table-properties', attrib={
+ #'style:width': '17.59cm',
+ 'table:align': 'margins',
+ 'fo:margin-top': '0in',
+ 'fo:margin-bottom': '0.10in',
+ }, nsdict=SNSD)
+ else:
+ el1_1 = SubElement(el1, 'style:table-properties', attrib={
+ #'style:width': '17.59cm',
+ 'table:align': 'margins',
+ 'fo:margin-top': '0in',
+ 'fo:margin-bottom': '0.10in',
+ 'fo:background-color': table_style.backgroundcolor,
+ }, nsdict=SNSD)
+ # We use a single cell style for all cells in this table.
+ # That's probably not correct, but seems to work.
+ el2 = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': self.rststyle(
+ '%s.%%c%%d' % table_name, ( self.table_count, 'A', 1, )),
+ 'style:family': 'table-cell',
+ }, nsdict=SNSD)
+ thickness = self.settings.table_border_thickness
+ if thickness is None:
+ line_style1 = table_style.border
+ else:
+ line_style1 = '0.%03dcm solid #000000' % (thickness, )
+ el2_1 = SubElement(el2, 'style:table-cell-properties', attrib={
+ 'fo:padding': '0.049cm',
+ 'fo:border-left': line_style1,
+ 'fo:border-right': line_style1,
+ 'fo:border-top': line_style1,
+ 'fo:border-bottom': line_style1,
+ }, nsdict=SNSD)
+ title = None
+ for child in node.children:
+ if child.tagname == 'title':
+ title = child.astext()
+ break
+ if title is not None:
+ el3 = self.append_p('table-title', title)
+ else:
+ pass
+ el4 = SubElement(self.current_element, 'table:table', attrib={
+ 'table:name': self.rststyle(
+ '%s' % table_name, ( self.table_count, )),
+ 'table:style-name': self.rststyle(
+ '%s' % table_name, ( self.table_count, )),
+ })
+ self.set_current_element(el4)
+ self.current_table_style = el1
+ self.table_width = 0
+
+ def depart_table(self, node):
+ attribkey = add_ns('style:width', nsdict=SNSD)
+ attribval = '%dcm' % self.table_width
+ self.current_table_style.attrib[attribkey] = attribval
+ self.set_to_parent()
+
+ def visit_tgroup(self, node):
+ self.column_count = ord('A') - 1
+
+ def depart_tgroup(self, node):
+ pass
+
+ def visit_colspec(self, node):
+ self.column_count += 1
+ colspec_name = self.rststyle(
+ '%s%%d.%%s' % TABLESTYLEPREFIX,
+ (self.table_count, chr(self.column_count), )
+ )
+ colwidth = node['colwidth']
+ el1 = SubElement(self.automatic_styles, 'style:style', attrib={
+ 'style:name': colspec_name,
+ 'style:family': 'table-column',
+ }, nsdict=SNSD)
+ el1_1 = SubElement(el1, 'style:table-column-properties', attrib={
+ 'style:column-width': '%dcm' % colwidth }, nsdict=SNSD)
+ el2 = self.append_child('table:table-column', attrib={
+ 'table:style-name': colspec_name,
+ })
+ self.table_width += colwidth
+
+ def depart_colspec(self, node):
+ pass
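The column bookkeeping in `visit_tgroup`/`visit_colspec` above relies on `column_count` starting at `ord('A') - 1` and being incremented once per column, so that `chr(column_count)` yields spreadsheet-style letters. A minimal sketch of that scheme (the helper name is hypothetical):

```python
# Sketch of the column-letter scheme used by visit_tgroup / visit_colspec:
# counting up from ord('A') - 1 and converting with chr() gives A, B, C, ...
def column_letters(n):
    count = ord('A') - 1
    letters = []
    for _ in range(n):
        count += 1
        letters.append(chr(count))
    return letters
```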
+
+ def visit_thead(self, node):
+ el = self.append_child('table:table-header-rows')
+ self.set_current_element(el)
+ self.in_thead = True
+ self.paragraph_style_stack.append('Table_20_Heading')
+
+ def depart_thead(self, node):
+ self.set_to_parent()
+ self.in_thead = False
+ self.paragraph_style_stack.pop()
+
+ def visit_row(self, node):
+ self.column_count = ord('A') - 1
+ el = self.append_child('table:table-row')
+ self.set_current_element(el)
+
+ def depart_row(self, node):
+ self.set_to_parent()
+
+ def visit_entry(self, node):
+ self.column_count += 1
+ cellspec_name = self.rststyle(
+ '%s%%d.%%c%%d' % TABLESTYLEPREFIX,
+ (self.table_count, 'A', 1, )
+ )
+ attrib={
+ 'table:style-name': cellspec_name,
+ 'office:value-type': 'string',
+ }
+ morecols = node.get('morecols', 0)
+ if morecols > 0:
+ attrib['table:number-columns-spanned'] = '%d' % (morecols + 1,)
+ self.column_count += morecols
+ morerows = node.get('morerows', 0)
+ if morerows > 0:
+ attrib['table:number-rows-spanned'] = '%d' % (morerows + 1,)
+ el1 = self.append_child('table:table-cell', attrib=attrib)
+ self.set_current_element(el1)
+
+ def depart_entry(self, node):
+ self.set_to_parent()
+
+ def visit_tbody(self, node):
+ pass
+
+ def depart_tbody(self, node):
+ pass
+
+ def visit_target(self, node):
+ #
+ # I don't know how to implement targets in ODF.
+ # How do we create a target in oowriter? A cross-reference?
+ if not (node.has_key('refuri') or node.has_key('refid')
+ or node.has_key('refname')):
+ pass
+ else:
+ pass
+
+ def depart_target(self, node):
+ pass
+
+ def visit_title(self, node, move_ids=1, title_type='title'):
+ if isinstance(node.parent, docutils.nodes.section):
+ section_level = self.section_level
+ if section_level > 7:
+ self.document.reporter.warning(
+ 'Heading/section levels greater than 7 not supported.')
+ self.document.reporter.warning(
+ ' Reducing to heading level 7 for heading: "%s"' % (
+ node.astext(), ))
+ section_level = 7
+ el1 = self.append_child('text:h', attrib = {
+ 'text:outline-level': '%d' % section_level,
+ #'text:style-name': 'Heading_20_%d' % section_level,
+ 'text:style-name': self.rststyle(
+ 'heading%d', (section_level, )),
+ })
+ self.append_pending_ids(el1)
+ self.set_current_element(el1)
+ elif isinstance(node.parent, docutils.nodes.document):
+ # text = self.settings.title
+ #else:
+ # text = node.astext()
+ el1 = SubElement(self.current_element, 'text:p', attrib = {
+ 'text:style-name': self.rststyle(title_type),
+ })
+ self.append_pending_ids(el1)
+ text = node.astext()
+ self.title = text
+ self.found_doc_title = True
+ self.set_current_element(el1)
+
+ def depart_title(self, node):
+ if (isinstance(node.parent, docutils.nodes.section) or
+ isinstance(node.parent, docutils.nodes.document)):
+ self.set_to_parent()
+
+ def visit_subtitle(self, node, move_ids=1):
+ self.visit_title(node, move_ids, title_type='subtitle')
+
+ def depart_subtitle(self, node):
+ self.depart_title(node)
+
+ def visit_title_reference(self, node):
+ el = self.append_child('text:span', attrib={
+ 'text:style-name': self.rststyle('quotation')})
+ el.text = self.encode(node.astext())
+ raise nodes.SkipChildren()
+
+ def depart_title_reference(self, node):
+ pass
+
+ def generate_table_of_content_entry_template(self, el1):
+ for idx in range(1, 11):
+ el2 = SubElement(el1,
+ 'text:table-of-content-entry-template',
+ attrib={
+ 'text:outline-level': "%d" % (idx, ),
+ 'text:style-name': self.rststyle('contents-%d' % (idx, )),
+ })
+ el3 = SubElement(el2, 'text:index-entry-chapter')
+ el3 = SubElement(el2, 'text:index-entry-text')
+ el3 = SubElement(el2, 'text:index-entry-tab-stop', attrib={
+ 'style:leader-char': ".",
+ 'style:type': "right",
+ })
+ el3 = SubElement(el2, 'text:index-entry-page-number')
+
+ def visit_topic(self, node):
+ if 'classes' in node.attributes:
+ if 'contents' in node.attributes['classes']:
+ if self.settings.generate_oowriter_toc:
+ el1 = self.append_child('text:table-of-content', attrib={
+ 'text:name': 'Table of Contents1',
+ 'text:protected': 'true',
+ 'text:style-name': 'Sect1',
+ })
+ el2 = SubElement(el1,
+ 'text:table-of-content-source',
+ attrib={
+ 'text:outline-level': '10',
+ })
+                el3 = SubElement(el2, 'text:index-title-template', attrib={
+ 'text:style-name': 'Contents_20_Heading',
+ })
+ el3.text = 'Table of Contents'
+ self.generate_table_of_content_entry_template(el2)
+ el4 = SubElement(el1, 'text:index-body')
+ el5 = SubElement(el4, 'text:index-title')
+ el6 = SubElement(el5, 'text:p', attrib={
+ 'text:style-name': self.rststyle('contents-heading'),
+ })
+ el6.text = 'Table of Contents'
+ self.save_current_element = self.current_element
+ self.table_of_content_index_body = el4
+ self.set_current_element(el4)
+ else:
+ el = self.append_p('horizontalline')
+ el = self.append_p('centeredtextbody')
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ el1.text = 'Contents'
+ self.in_table_of_contents = True
+ elif 'abstract' in node.attributes['classes']:
+ el = self.append_p('horizontalline')
+ el = self.append_p('centeredtextbody')
+ el1 = SubElement(el, 'text:span',
+ attrib={'text:style-name': self.rststyle('strong')})
+ el1.text = 'Abstract'
+
+ def depart_topic(self, node):
+ if 'classes' in node.attributes:
+ if 'contents' in node.attributes['classes']:
+ if self.settings.generate_oowriter_toc:
+ self.update_toc_page_numbers(
+ self.table_of_content_index_body)
+ self.set_current_element(self.save_current_element)
+ else:
+ el = self.append_p('horizontalline')
+ self.in_table_of_contents = False
+
+ def update_toc_page_numbers(self, el):
+ collection = []
+ self.update_toc_collect(el, 0, collection)
+ self.update_toc_add_numbers(collection)
+
+ def update_toc_collect(self, el, level, collection):
+ collection.append((level, el))
+ level += 1
+ for child_el in el.getchildren():
+ if child_el.tag != 'text:index-body':
+ self.update_toc_collect(child_el, level, collection)
+
+ def update_toc_add_numbers(self, collection):
+ for level, el1 in collection:
+ if (el1.tag == 'text:p' and
+ el1.text != 'Table of Contents'):
+ el2 = SubElement(el1, 'text:tab')
+ el2.tail = '9999'
+
+
+ def visit_transition(self, node):
+ el = self.append_p('horizontalline')
+
+ def depart_transition(self, node):
+ pass
+
+ #
+ # Admonitions
+ #
+ def visit_warning(self, node):
+ self.generate_admonition(node, 'warning')
+
+ def depart_warning(self, node):
+ self.paragraph_style_stack.pop()
+
+ def visit_attention(self, node):
+ self.generate_admonition(node, 'attention')
+
+ depart_attention = depart_warning
+
+ def visit_caution(self, node):
+ self.generate_admonition(node, 'caution')
+
+ depart_caution = depart_warning
+
+ def visit_danger(self, node):
+ self.generate_admonition(node, 'danger')
+
+ depart_danger = depart_warning
+
+ def visit_error(self, node):
+ self.generate_admonition(node, 'error')
+
+ depart_error = depart_warning
+
+ def visit_hint(self, node):
+ self.generate_admonition(node, 'hint')
+
+ depart_hint = depart_warning
+
+ def visit_important(self, node):
+ self.generate_admonition(node, 'important')
+
+ depart_important = depart_warning
+
+ def visit_note(self, node):
+ self.generate_admonition(node, 'note')
+
+ depart_note = depart_warning
+
+ def visit_tip(self, node):
+ self.generate_admonition(node, 'tip')
+
+ depart_tip = depart_warning
+
+ def visit_admonition(self, node):
+ title = None
+ for child in node.children:
+ if child.tagname == 'title':
+ title = child.astext()
+ if title is None:
+ classes1 = node.get('classes')
+ if classes1:
+ title = classes1[0]
+ self.generate_admonition(node, 'generic', title)
+
+ depart_admonition = depart_warning
+
+ def generate_admonition(self, node, label, title=None):
+ el1 = SubElement(self.current_element, 'text:p', attrib = {
+ 'text:style-name': self.rststyle('admon-%s-hdr', ( label, )),
+ })
+ if title:
+ el1.text = title
+ else:
+ el1.text = '%s!' % (label.capitalize(), )
+ s1 = self.rststyle('admon-%s-body', ( label, ))
+ self.paragraph_style_stack.append(s1)
+
+ #
+    # Roles (e.g. subscript, superscript, strong, ...)
+ #
+ def visit_subscript(self, node):
+ el = self.append_child('text:span', attrib={
+ 'text:style-name': 'rststyle-subscript',
+ })
+ self.set_current_element(el)
+
+ def depart_subscript(self, node):
+ self.set_to_parent()
+
+ def visit_superscript(self, node):
+ el = self.append_child('text:span', attrib={
+ 'text:style-name': 'rststyle-superscript',
+ })
+ self.set_current_element(el)
+
+ def depart_superscript(self, node):
+ self.set_to_parent()
+
+
+# Use our own Reader subclass to adjust the set of transforms applied.
+class Reader(standalone.Reader):
+
+ def get_transforms(self):
+ default = standalone.Reader.get_transforms(self)
+ if self.settings.create_links:
+ return default
+ return [ i
+ for i in default
+ if i is not references.DanglingReferences ]
diff --git a/python/helpers/docutils/writers/odf_odt/pygmentsformatter.py b/python/helpers/docutils/writers/odf_odt/pygmentsformatter.py
new file mode 100644
index 0000000..e8ce827
--- /dev/null
+++ b/python/helpers/docutils/writers/odf_odt/pygmentsformatter.py
@@ -0,0 +1,109 @@
+# $Id: pygmentsformatter.py 5853 2009-01-19 21:02:02Z dkuhlman $
+# Author: Dave Kuhlman <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+
+Additional support for Pygments formatter.
+
+"""
+
+
+import pygments
+import pygments.formatter
+
+
+class OdtPygmentsFormatter(pygments.formatter.Formatter):
+ def __init__(self, rststyle_function, escape_function):
+ pygments.formatter.Formatter.__init__(self)
+ self.rststyle_function = rststyle_function
+ self.escape_function = escape_function
+
+ def rststyle(self, name, parameters=( )):
+ return self.rststyle_function(name, parameters)
+
+
+class OdtPygmentsProgFormatter(OdtPygmentsFormatter):
+ def format(self, tokensource, outfile):
+ tokenclass = pygments.token.Token
+ for ttype, value in tokensource:
+ value = self.escape_function(value)
+ if ttype == tokenclass.Keyword:
+ s2 = self.rststyle('codeblock-keyword')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Literal.String:
+ s2 = self.rststyle('codeblock-string')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype in (
+ tokenclass.Literal.Number.Integer,
+ tokenclass.Literal.Number.Integer.Long,
+ tokenclass.Literal.Number.Float,
+ tokenclass.Literal.Number.Hex,
+ tokenclass.Literal.Number.Oct,
+ tokenclass.Literal.Number,
+ ):
+ s2 = self.rststyle('codeblock-number')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Operator:
+ s2 = self.rststyle('codeblock-operator')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Comment:
+ s2 = self.rststyle('codeblock-comment')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Name.Class:
+ s2 = self.rststyle('codeblock-classname')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Name.Function:
+ s2 = self.rststyle('codeblock-functionname')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Name:
+ s2 = self.rststyle('codeblock-name')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ else:
+ s1 = value
+ outfile.write(s1)
+
+
+class OdtPygmentsLaTeXFormatter(OdtPygmentsFormatter):
+ def format(self, tokensource, outfile):
+ tokenclass = pygments.token.Token
+ for ttype, value in tokensource:
+ value = self.escape_function(value)
+ if ttype == tokenclass.Keyword:
+ s2 = self.rststyle('codeblock-keyword')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype in (tokenclass.Literal.String,
+ tokenclass.Literal.String.Backtick,
+ ):
+ s2 = self.rststyle('codeblock-string')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Name.Attribute:
+ s2 = self.rststyle('codeblock-operator')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Comment:
+ if value[-1] == '\n':
+ s2 = self.rststyle('codeblock-comment')
+ s1 = '<text:span text:style-name="%s">%s</text:span>\n' % \
+ (s2, value[:-1], )
+ else:
+ s2 = self.rststyle('codeblock-comment')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ elif ttype == tokenclass.Name.Builtin:
+ s2 = self.rststyle('codeblock-name')
+ s1 = '<text:span text:style-name="%s">%s</text:span>' % \
+ (s2, value, )
+ else:
+ s1 = value
+ outfile.write(s1)
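Both formatters above reduce to the same pattern: each `(token-type, value)` pair is escaped and, when the token type has a matching style, wrapped in an ODF `<text:span>` whose style name comes from `rststyle()`. A dependency-free sketch of that core loop, with token types modelled as plain strings and all names hypothetical:

```python
# Illustrative, pygments-free sketch of the format() loop above: wrap styled
# tokens in ODF <text:span> markup, pass unmatched tokens through unchanged.
SPAN = '<text:span text:style-name="%s">%s</text:span>'

TOKEN_STYLES = {
    'Keyword': 'codeblock-keyword',
    'Literal.String': 'codeblock-string',
    'Comment': 'codeblock-comment',
    'Name.Function': 'codeblock-functionname',
}

def format_tokens(tokensource, rststyle=lambda name: 'rststyle-' + name,
                  escape=lambda s: s):
    out = []
    for ttype, value in tokensource:
        value = escape(value)
        style = TOKEN_STYLES.get(ttype)
        if style is not None:
            out.append(SPAN % (rststyle(style), value))
        else:
            out.append(value)
    return ''.join(out)
```

Driving the mapping from a dict like this, rather than an if/elif chain per token class, is one way the repeated `s1 = '<text:span ...>' % (s2, value)` blocks above could be condensed without changing output.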
diff --git a/python/helpers/docutils/writers/odf_odt/styles.odt b/python/helpers/docutils/writers/odf_odt/styles.odt
new file mode 100644
index 0000000..f217bb9
--- /dev/null
+++ b/python/helpers/docutils/writers/odf_odt/styles.odt
Binary files differ
diff --git a/python/helpers/docutils/writers/pep_html/__init__.py b/python/helpers/docutils/writers/pep_html/__init__.py
new file mode 100644
index 0000000..503fa17
--- /dev/null
+++ b/python/helpers/docutils/writers/pep_html/__init__.py
@@ -0,0 +1,105 @@
+# $Id: __init__.py 6328 2010-05-23 21:20:29Z gbrandl $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+PEP HTML Writer.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import os
+import os.path
+import codecs
+import docutils
+from docutils import frontend, nodes, utils, writers
+from docutils.writers import html4css1
+
+
+class Writer(html4css1.Writer):
+
+ default_stylesheet = 'pep.css'
+
+ default_stylesheet_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), default_stylesheet))
+
+ default_template = 'template.txt'
+
+ default_template_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), default_template))
+
+ settings_spec = html4css1.Writer.settings_spec + (
+ 'PEP/HTML-Specific Options',
+ 'For the PEP/HTML writer, the default value for the --stylesheet-path '
+ 'option is "%s", and the default value for --template is "%s". '
+ 'See HTML-Specific Options above.'
+ % (default_stylesheet_path, default_template_path),
+ (('Python\'s home URL. Default is "http://www.python.org".',
+ ['--python-home'],
+ {'default': 'http://www.python.org', 'metavar': '<URL>'}),
+ ('Home URL prefix for PEPs. Default is "." (current directory).',
+ ['--pep-home'],
+ {'default': '.', 'metavar': '<URL>'}),
+ # For testing.
+ (frontend.SUPPRESS_HELP,
+ ['--no-random'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),))
+
+ settings_default_overrides = {'stylesheet_path': default_stylesheet_path,
+ 'template': default_template_path,}
+
+ relative_path_settings = (html4css1.Writer.relative_path_settings
+ + ('template',))
+
+ config_section = 'pep_html writer'
+ config_section_dependencies = ('writers', 'html4css1 writer')
+
+ def __init__(self):
+ html4css1.Writer.__init__(self)
+ self.translator_class = HTMLTranslator
+
+ def interpolation_dict(self):
+ subs = html4css1.Writer.interpolation_dict(self)
+ settings = self.document.settings
+ pyhome = settings.python_home
+ subs['pyhome'] = pyhome
+ subs['pephome'] = settings.pep_home
+ if pyhome == '..':
+ subs['pepindex'] = '.'
+ else:
+ subs['pepindex'] = pyhome + '/dev/peps'
+ index = self.document.first_child_matching_class(nodes.field_list)
+ header = self.document[index]
+ self.pepnum = header[0][1].astext()
+ subs['pep'] = self.pepnum
+ if settings.no_random:
+ subs['banner'] = 0
+ else:
+ import random
+ subs['banner'] = random.randrange(64)
+ try:
+ subs['pepnum'] = '%04i' % int(self.pepnum)
+ except ValueError:
+ subs['pepnum'] = self.pepnum
+ self.title = header[1][1].astext()
+ subs['title'] = self.title
+ subs['body'] = ''.join(
+ self.body_pre_docinfo + self.docinfo + self.body)
+ return subs
+
+ def assemble_parts(self):
+ html4css1.Writer.assemble_parts(self)
+ self.parts['title'] = [self.title]
+ self.parts['pepnum'] = self.pepnum
+
+
+class HTMLTranslator(html4css1.HTMLTranslator):
+
+ def depart_field_list(self, node):
+ html4css1.HTMLTranslator.depart_field_list(self, node)
+ if 'rfc2822' in node['classes']:
+ self.body.append('<hr />\n')
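The PEP-number handling inside `Writer.interpolation_dict` above zero-pads numeric identifiers to four digits for the source-file link and passes non-numeric identifiers through unchanged. A small standalone sketch of that rule (the helper name is hypothetical):

```python
# Sketch of the pepnum normalisation in Writer.interpolation_dict: numeric
# PEP identifiers are zero-padded to four digits; others pass through as-is.
def normalize_pepnum(pepnum):
    try:
        return '%04i' % int(pepnum)
    except ValueError:
        return pepnum
```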
diff --git a/python/helpers/docutils/writers/pep_html/pep.css b/python/helpers/docutils/writers/pep_html/pep.css
new file mode 100644
index 0000000..5d8c040
--- /dev/null
+++ b/python/helpers/docutils/writers/pep_html/pep.css
@@ -0,0 +1,344 @@
+/*
+:Author: David Goodger
+:Contact: [email protected]
+:date: $Date: 2006-05-21 16:44:42 -0400 (Sun, 21 May 2006) $
+:version: $Revision: 4564 $
+:copyright: This stylesheet has been placed in the public domain.
+
+Default cascading style sheet for the PEP HTML output of Docutils.
+*/
+
+/* "! important" is used here to override other ``margin-top`` and
+ ``margin-bottom`` styles that are later in the stylesheet or
+ more specific. See http://www.w3.org/TR/CSS1#the-cascade */
+.first {
+ margin-top: 0 ! important }
+
+.last, .with-subtitle {
+ margin-bottom: 0 ! important }
+
+.hidden {
+ display: none }
+
+.navigation {
+ width: 100% ;
+ background: #99ccff ;
+ margin-top: 0px ;
+ margin-bottom: 0px }
+
+.navigation .navicon {
+ width: 150px ;
+ height: 35px }
+
+.navigation .textlinks {
+ padding-left: 1em ;
+ text-align: left }
+
+.navigation td, .navigation th {
+ padding-left: 0em ;
+ padding-right: 0em ;
+ vertical-align: middle }
+
+.rfc2822 {
+ margin-top: 0.5em ;
+ margin-left: 0.5em ;
+ margin-right: 0.5em ;
+ margin-bottom: 0em }
+
+.rfc2822 td {
+ text-align: left }
+
+.rfc2822 th.field-name {
+ text-align: right ;
+ font-family: sans-serif ;
+ padding-right: 0.5em ;
+ font-weight: bold ;
+ margin-bottom: 0em }
+
+a.toc-backref {
+ text-decoration: none ;
+ color: black }
+
+blockquote.epigraph {
+ margin: 2em 5em ; }
+
+body {
+ margin: 0px ;
+ margin-bottom: 1em ;
+ padding: 0px }
+
+dl.docutils dd {
+ margin-bottom: 0.5em }
+
+div.section {
+ margin-left: 1em ;
+ margin-right: 1em ;
+ margin-bottom: 1.5em }
+
+div.section div.section {
+ margin-left: 0em ;
+ margin-right: 0em ;
+ margin-top: 1.5em }
+
+div.abstract {
+ margin: 2em 5em }
+
+div.abstract p.topic-title {
+ font-weight: bold ;
+ text-align: center }
+
+div.admonition, div.attention, div.caution, div.danger, div.error,
+div.hint, div.important, div.note, div.tip, div.warning {
+ margin: 2em ;
+ border: medium outset ;
+ padding: 1em }
+
+div.admonition p.admonition-title, div.hint p.admonition-title,
+div.important p.admonition-title, div.note p.admonition-title,
+div.tip p.admonition-title {
+ font-weight: bold ;
+ font-family: sans-serif }
+
+div.attention p.admonition-title, div.caution p.admonition-title,
+div.danger p.admonition-title, div.error p.admonition-title,
+div.warning p.admonition-title {
+ color: red ;
+ font-weight: bold ;
+ font-family: sans-serif }
+
+/* Uncomment (and remove this text!) to get reduced vertical space in
+ compound paragraphs.
+div.compound .compound-first, div.compound .compound-middle {
+ margin-bottom: 0.5em }
+
+div.compound .compound-last, div.compound .compound-middle {
+ margin-top: 0.5em }
+*/
+
+div.dedication {
+ margin: 2em 5em ;
+ text-align: center ;
+ font-style: italic }
+
+div.dedication p.topic-title {
+ font-weight: bold ;
+ font-style: normal }
+
+div.figure {
+ margin-left: 2em ;
+ margin-right: 2em }
+
+div.footer, div.header {
+ clear: both;
+ font-size: smaller }
+
+div.footer {
+ margin-left: 1em ;
+ margin-right: 1em }
+
+div.line-block {
+ display: block ;
+ margin-top: 1em ;
+ margin-bottom: 1em }
+
+div.line-block div.line-block {
+ margin-top: 0 ;
+ margin-bottom: 0 ;
+ margin-left: 1.5em }
+
+div.sidebar {
+ margin-left: 1em ;
+ border: medium outset ;
+ padding: 1em ;
+ background-color: #ffffee ;
+ width: 40% ;
+ float: right ;
+ clear: right }
+
+div.sidebar p.rubric {
+ font-family: sans-serif ;
+ font-size: medium }
+
+div.system-messages {
+ margin: 5em }
+
+div.system-messages h1 {
+ color: red }
+
+div.system-message {
+ border: medium outset ;
+ padding: 1em }
+
+div.system-message p.system-message-title {
+ color: red ;
+ font-weight: bold }
+
+div.topic {
+ margin: 2em }
+
+h1.section-subtitle, h2.section-subtitle, h3.section-subtitle,
+h4.section-subtitle, h5.section-subtitle, h6.section-subtitle {
+ margin-top: 0.4em }
+
+h1 {
+ font-family: sans-serif ;
+ font-size: large }
+
+h2 {
+ font-family: sans-serif ;
+ font-size: medium }
+
+h3 {
+ font-family: sans-serif ;
+ font-size: small }
+
+h4 {
+ font-family: sans-serif ;
+ font-style: italic ;
+ font-size: small }
+
+h5 {
+ font-family: sans-serif;
+ font-size: x-small }
+
+h6 {
+ font-family: sans-serif;
+ font-style: italic ;
+ font-size: x-small }
+
+hr.docutils {
+ width: 75% }
+
+img.align-left {
+ clear: left }
+
+img.align-right {
+ clear: right }
+
+img.borderless {
+ border: 0 }
+
+ol.simple, ul.simple {
+ margin-bottom: 1em }
+
+ol.arabic {
+ list-style: decimal }
+
+ol.loweralpha {
+ list-style: lower-alpha }
+
+ol.upperalpha {
+ list-style: upper-alpha }
+
+ol.lowerroman {
+ list-style: lower-roman }
+
+ol.upperroman {
+ list-style: upper-roman }
+
+p.attribution {
+ text-align: right ;
+ margin-left: 50% }
+
+p.caption {
+ font-style: italic }
+
+p.credits {
+ font-style: italic ;
+ font-size: smaller }
+
+p.label {
+ white-space: nowrap }
+
+p.rubric {
+ font-weight: bold ;
+ font-size: larger ;
+ color: maroon ;
+ text-align: center }
+
+p.sidebar-title {
+ font-family: sans-serif ;
+ font-weight: bold ;
+ font-size: larger }
+
+p.sidebar-subtitle {
+ font-family: sans-serif ;
+ font-weight: bold }
+
+p.topic-title {
+ font-family: sans-serif ;
+ font-weight: bold }
+
+pre.address {
+ margin-bottom: 0 ;
+ margin-top: 0 ;
+ font-family: serif ;
+ font-size: 100% }
+
+pre.literal-block, pre.doctest-block {
+ margin-left: 2em ;
+ margin-right: 2em }
+
+span.classifier {
+ font-family: sans-serif ;
+ font-style: oblique }
+
+span.classifier-delimiter {
+ font-family: sans-serif ;
+ font-weight: bold }
+
+span.interpreted {
+ font-family: sans-serif }
+
+span.option {
+ white-space: nowrap }
+
+span.option-argument {
+ font-style: italic }
+
+span.pre {
+ white-space: pre }
+
+span.problematic {
+ color: red }
+
+span.section-subtitle {
+ /* font-size relative to parent (h1..h6 element) */
+ font-size: 80% }
+
+table.citation {
+ border-left: solid 1px gray;
+ margin-left: 1px }
+
+table.docinfo {
+ margin: 2em 4em }
+
+table.docutils {
+ margin-top: 0.5em ;
+ margin-bottom: 0.5em }
+
+table.footnote {
+ border-left: solid 1px black;
+ margin-left: 1px }
+
+table.docutils td, table.docutils th,
+table.docinfo td, table.docinfo th {
+ padding-left: 0.5em ;
+ padding-right: 0.5em ;
+ vertical-align: top }
+
+td.num {
+ text-align: right }
+
+th.field-name {
+ font-weight: bold ;
+ text-align: left ;
+ white-space: nowrap ;
+ padding-left: 0 }
+
+h1 tt.docutils, h2 tt.docutils, h3 tt.docutils,
+h4 tt.docutils, h5 tt.docutils, h6 tt.docutils {
+ font-size: 100% }
+
+ul.auto-toc {
+ list-style-type: none }
diff --git a/python/helpers/docutils/writers/pep_html/template.txt b/python/helpers/docutils/writers/pep_html/template.txt
new file mode 100644
index 0000000..62c07a8
--- /dev/null
+++ b/python/helpers/docutils/writers/pep_html/template.txt
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="%(encoding)s" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+<!--
+This HTML is auto-generated. DO NOT EDIT THIS FILE! If you are writing a new
+PEP, see http://www.python.org/dev/peps/pep-0001 for instructions and links
+to templates. DO NOT USE THIS HTML FILE AS YOUR TEMPLATE!
+-->
+<head>
+ <meta http-equiv="Content-Type" content="text/html; charset=%(encoding)s" />
+ <meta name="generator" content="Docutils %(version)s: http://docutils.sourceforge.net/" />
+ <title>PEP %(pep)s -- %(title)s</title>
+ %(stylesheet)s
+</head>
+<body bgcolor="white">
+<table class="navigation" cellpadding="0" cellspacing="0"
+ width="100%%" border="0">
+<tr><td class="navicon" width="150" height="35">
+<a href="%(pyhome)s/" title="Python Home Page">
+<img src="%(pyhome)s/pics/PyBanner%(banner)03d.gif" alt="[Python]"
+ border="0" width="150" height="35" /></a></td>
+<td class="textlinks" align="left">
+[<b><a href="%(pyhome)s/">Python Home</a></b>]
+[<b><a href="%(pepindex)s/">PEP Index</a></b>]
+[<b><a href="%(pephome)s/pep-%(pepnum)s.txt">PEP Source</a></b>]
+</td></tr></table>
+<div class="document">
+%(body)s
+%(body_suffix)s
diff --git a/python/helpers/docutils/writers/pseudoxml.py b/python/helpers/docutils/writers/pseudoxml.py
new file mode 100644
index 0000000..547adbf
--- /dev/null
+++ b/python/helpers/docutils/writers/pseudoxml.py
@@ -0,0 +1,31 @@
+# $Id: pseudoxml.py 4564 2006-05-21 20:44:42Z wiemann $
+# Author: David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+Simple internal document tree Writer, writes indented pseudo-XML.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+from docutils import writers
+
+
+class Writer(writers.Writer):
+
+ supported = ('pprint', 'pformat', 'pseudoxml')
+ """Formats this writer supports."""
+
+ config_section = 'pseudoxml writer'
+ config_section_dependencies = ('writers',)
+
+ output = None
+ """Final translated form of `document`."""
+
+ def translate(self):
+ self.output = self.document.pformat()
+
+ def supports(self, format):
+ """This writer supports all format-specific elements."""
+ return 1
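The writer above delegates entirely to `document.pformat()`, which renders the document tree as indented pseudo-XML. A toy sketch of that output shape, using a hypothetical node class rather than real docutils nodes:

```python
# Toy illustration of the indented pseudo-XML shape that document.pformat()
# produces; this Node class is hypothetical, not the docutils node API.
class Node:
    def __init__(self, tagname, children=(), text=None):
        self.tagname = tagname
        self.children = list(children)
        self.text = text

    def pformat(self, indent='    ', level=0):
        lines = [indent * level + '<%s>' % self.tagname]
        if self.text is not None:
            lines.append(indent * (level + 1) + self.text)
        for child in self.children:
            lines.append(child.pformat(indent, level + 1).rstrip('\n'))
        return '\n'.join(lines) + '\n'

doc = Node('document', [Node('paragraph', text='Hello.')])
```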
diff --git a/python/helpers/docutils/writers/s5_html/__init__.py b/python/helpers/docutils/writers/s5_html/__init__.py
new file mode 100644
index 0000000..12647bc
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/__init__.py
@@ -0,0 +1,340 @@
+# $Id: __init__.py 5889 2009-04-01 20:00:21Z gbrandl $
+# Authors: Chris Liechti <[email protected]>;
+# David Goodger <[email protected]>
+# Copyright: This module has been placed in the public domain.
+
+"""
+S5/HTML Slideshow Writer.
+"""
+
+__docformat__ = 'reStructuredText'
+
+
+import sys
+import os
+import re
+import docutils
+from docutils import frontend, nodes, utils
+from docutils.writers import html4css1
+from docutils.parsers.rst import directives
+from docutils._compat import b
+
+themes_dir_path = utils.relative_path(
+ os.path.join(os.getcwd(), 'dummy'),
+ os.path.join(os.path.dirname(__file__), 'themes'))
+
+def find_theme(name):
+ # Where else to look for a theme?
+ # Check working dir? Destination dir? Config dir? Plugins dir?
+ path = os.path.join(themes_dir_path, name)
+ if not os.path.isdir(path):
+ raise docutils.ApplicationError(
+ 'Theme directory not found: %r (path: %r)' % (name, path))
+ return path
+
+
+class Writer(html4css1.Writer):
+
+ settings_spec = html4css1.Writer.settings_spec + (
+ 'S5 Slideshow Specific Options',
+ 'For the S5/HTML writer, the --no-toc-backlinks option '
+ '(defined in General Docutils Options above) is the default, '
+ 'and should not be changed.',
+ (('Specify an installed S5 theme by name. Overrides --theme-url. '
+ 'The default theme name is "default". The theme files will be '
+ 'copied into a "ui/<theme>" directory, in the same directory as the '
+ 'destination file (output HTML). Note that existing theme files '
+ 'will not be overwritten (unless --overwrite-theme-files is used).',
+ ['--theme'],
+ {'default': 'default', 'metavar': '<name>',
+ 'overrides': 'theme_url'}),
+ ('Specify an S5 theme URL. The destination file (output HTML) will '
+ 'link to this theme; nothing will be copied. Overrides --theme.',
+ ['--theme-url'],
+ {'metavar': '<URL>', 'overrides': 'theme'}),
+ ('Allow existing theme files in the ``ui/<theme>`` directory to be '
+ 'overwritten. The default is not to overwrite theme files.',
+ ['--overwrite-theme-files'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),
+ ('Keep existing theme files in the ``ui/<theme>`` directory; do not '
+ 'overwrite any. This is the default.',
+ ['--keep-theme-files'],
+ {'dest': 'overwrite_theme_files', 'action': 'store_false'}),
+ ('Set the initial view mode to "slideshow" [default] or "outline".',
+ ['--view-mode'],
+ {'choices': ['slideshow', 'outline'], 'default': 'slideshow',
+ 'metavar': '<mode>'}),
+ ('Normally hide the presentation controls in slideshow mode. '
+ 'This is the default.',
+ ['--hidden-controls'],
+ {'action': 'store_true', 'default': True,
+ 'validator': frontend.validate_boolean}),
+ ('Always show the presentation controls in slideshow mode. '
+ 'The default is to hide the controls.',
+ ['--visible-controls'],
+ {'dest': 'hidden_controls', 'action': 'store_false'}),
+ ('Enable the current slide indicator ("1 / 15"). '
+ 'The default is to disable it.',
+ ['--current-slide'],
+ {'action': 'store_true', 'validator': frontend.validate_boolean}),
+ ('Disable the current slide indicator. This is the default.',
+ ['--no-current-slide'],
+ {'dest': 'current_slide', 'action': 'store_false'}),))
+
+ settings_default_overrides = {'toc_backlinks': 0}
+
+ config_section = 's5_html writer'
+ config_section_dependencies = ('writers', 'html4css1 writer')
+
+ def __init__(self):
+ html4css1.Writer.__init__(self)
+ self.translator_class = S5HTMLTranslator
+
+
+class S5HTMLTranslator(html4css1.HTMLTranslator):
+
+ s5_stylesheet_template = """\
+<!-- configuration parameters -->
+<meta name="defaultView" content="%(view_mode)s" />
+<meta name="controlVis" content="%(control_visibility)s" />
+<!-- style sheet links -->
+<script src="%(path)s/slides.js" type="text/javascript"></script>
+<link rel="stylesheet" href="%(path)s/slides.css"
+ type="text/css" media="projection" id="slideProj" />
+<link rel="stylesheet" href="%(path)s/outline.css"
+ type="text/css" media="screen" id="outlineStyle" />
+<link rel="stylesheet" href="%(path)s/print.css"
+ type="text/css" media="print" id="slidePrint" />
+<link rel="stylesheet" href="%(path)s/opera.css"
+ type="text/css" media="projection" id="operaFix" />\n"""
+ # The script element must go in front of the link elements to
+ # avoid a flash of unstyled content (FOUC), reproducible with
+ # Firefox.
+
+ disable_current_slide = """
+<style type="text/css">
+#currentSlide {display: none;}
+</style>\n"""
+
+ layout_template = """\
+<div class="layout">
+<div id="controls"></div>
+<div id="currentSlide"></div>
+<div id="header">
+%(header)s
+</div>
+<div id="footer">
+%(title)s%(footer)s
+</div>
+</div>\n"""
+# <div class="topleft"></div>
+# <div class="topright"></div>
+# <div class="bottomleft"></div>
+# <div class="bottomright"></div>
+
+ default_theme = 'default'
+ """Name of the default theme."""
+
+ base_theme_file = '__base__'
+ """Name of the file containing the name of the base theme."""
+
+ direct_theme_files = (
+ 'slides.css', 'outline.css', 'print.css', 'opera.css', 'slides.js')
+ """Names of theme files directly linked to in the output HTML."""
+
+ indirect_theme_files = (
+ 's5-core.css', 'framing.css', 'pretty.css', 'blank.gif', 'iepngfix.htc')
+ """Names of files used indirectly; imported or used by files in
+ `direct_theme_files`."""
+
+ required_theme_files = indirect_theme_files + direct_theme_files
+ """Names of mandatory theme files."""
+
+ def __init__(self, *args):
+ html4css1.HTMLTranslator.__init__(self, *args)
+ # insert S5-specific stylesheet and script stuff:
+ self.theme_file_path = None
+ self.setup_theme()
+ view_mode = self.document.settings.view_mode
+ control_visibility = ('visible', 'hidden')[self.document.settings
+ .hidden_controls]
+ self.stylesheet.append(self.s5_stylesheet_template
+ % {'path': self.theme_file_path,
+ 'view_mode': view_mode,
+ 'control_visibility': control_visibility})
+ if not self.document.settings.current_slide:
+ self.stylesheet.append(self.disable_current_slide)
+ self.add_meta('<meta name="version" content="S5 1.1" />\n')
+ self.s5_footer = []
+ self.s5_header = []
+ self.section_count = 0
+ self.theme_files_copied = None
+
+ def setup_theme(self):
+ if self.document.settings.theme:
+ self.copy_theme()
+ elif self.document.settings.theme_url:
+ self.theme_file_path = self.document.settings.theme_url
+ else:
+ raise docutils.ApplicationError(
+ 'No theme specified for S5/HTML writer.')
+
+ def copy_theme(self):
+ """
+ Locate & copy theme files.
+
+ A theme may be explicitly based on another theme via a '__base__'
+ file. The default base theme is 'default'. Files are accumulated
+ from the specified theme, any base themes, and 'default'.
+ """
+ settings = self.document.settings
+ path = find_theme(settings.theme)
+ theme_paths = [path]
+ self.theme_files_copied = {}
+ required_files_copied = {}
+ # This is a link (URL) in HTML, so we use "/", not os.sep:
+ self.theme_file_path = '%s/%s' % ('ui', settings.theme)
+ if settings._destination:
+ dest = os.path.join(
+ os.path.dirname(settings._destination), 'ui', settings.theme)
+ if not os.path.isdir(dest):
+ os.makedirs(dest)
+ else:
+ # no destination, so we can't copy the theme
+ return
+ default = 0
+ while path:
+ for f in os.listdir(path): # copy all files from each theme
+ if f == self.base_theme_file:
+ continue # ... except the "__base__" file
+ if ( self.copy_file(f, path, dest)
+ and f in self.required_theme_files):
+ required_files_copied[f] = 1
+ if default:
+ break # "default" theme has no base theme
+ # Find the "__base__" file in theme directory:
+ base_theme_file = os.path.join(path, self.base_theme_file)
+ # If it exists, read it and record the theme path:
+ if os.path.isfile(base_theme_file):
+ lines = open(base_theme_file).readlines()
+ for line in lines:
+ line = line.strip()
+ if line and not line.startswith('#'):
+ path = find_theme(line)
+ if path in theme_paths: # check for duplicates (cycles)
+ path = None # if found, use default base
+ else:
+ theme_paths.append(path)
+ break
+ else: # no theme name found
+ path = None # use default base
+ else: # no base theme file found
+ path = None # use default base
+ if not path:
+ path = find_theme(self.default_theme)
+ theme_paths.append(path)
+ default = 1
+ if len(required_files_copied) != len(self.required_theme_files):
+ # Some required files weren't found & couldn't be copied.
+ required = list(self.required_theme_files)
+ for f in required_files_copied.keys():
+ required.remove(f)
+ raise docutils.ApplicationError(
+ 'Theme files not found: %s'
+ % ', '.join(['%r' % f for f in required]))
+
+ files_to_skip_pattern = re.compile(r'~$|\.bak$|#$|\.cvsignore$')
+
+ def copy_file(self, name, source_dir, dest_dir):
+ """
+ Copy file `name` from `source_dir` to `dest_dir`.
+ Return 1 if the file exists in either `source_dir` or `dest_dir`.
+ """
+ source = os.path.join(source_dir, name)
+ dest = os.path.join(dest_dir, name)
+ if dest in self.theme_files_copied:
+ return 1
+ else:
+ self.theme_files_copied[dest] = 1
+ if os.path.isfile(source):
+ if self.files_to_skip_pattern.search(source):
+ return None
+ settings = self.document.settings
+ if os.path.exists(dest) and not settings.overwrite_theme_files:
+ settings.record_dependencies.add(dest)
+ else:
+ src_file = open(source, 'rb')
+ src_data = src_file.read()
+ src_file.close()
+ dest_file = open(dest, 'wb')
+ dest_dir = dest_dir.replace(os.sep, '/')
+ dest_file.write(src_data.replace(
+ b('ui/default'),
+ dest_dir[dest_dir.rfind('ui/'):].encode(
+ sys.getfilesystemencoding())))
+ dest_file.close()
+ settings.record_dependencies.add(source)
+ return 1
+ if os.path.isfile(dest):
+ return 1
+
+ def depart_document(self, node):
+ header = ''.join(self.s5_header)
+ footer = ''.join(self.s5_footer)
+ title = ''.join(self.html_title).replace('<h1 class="title">', '<h1>')
+ layout = self.layout_template % {'header': header,
+ 'title': title,
+ 'footer': footer}
+ self.fragment.extend(self.body)
+ self.body_prefix.extend(layout)
+ self.body_prefix.append('<div class="presentation">\n')
+ self.body_prefix.append(
+ self.starttag({'classes': ['slide'], 'ids': ['slide0']}, 'div'))
+ if not self.section_count:
+ self.body.append('</div>\n')
+ self.body_suffix.insert(0, '</div>\n')
+ # skip content-type meta tag with interpolated charset value:
+ self.html_head.extend(self.head[1:])
+ self.html_body.extend(self.body_prefix[1:] + self.body_pre_docinfo
+ + self.docinfo + self.body
+ + self.body_suffix[:-1])
+
+ def depart_footer(self, node):
+ start = self.context.pop()
+ self.s5_footer.append('<h2>')
+ self.s5_footer.extend(self.body[start:])
+ self.s5_footer.append('</h2>')
+ del self.body[start:]
+
+ def depart_header(self, node):
+ start = self.context.pop()
+ header = ['<div id="header">\n']
+ header.extend(self.body[start:])
+ header.append('\n</div>\n')
+ del self.body[start:]
+ self.s5_header.extend(header)
+
+ def visit_section(self, node):
+ if not self.section_count:
+ self.body.append('\n</div>\n')
+ self.section_count += 1
+ self.section_level += 1
+ if self.section_level > 1:
+ # dummy for matching div's
+ self.body.append(self.starttag(node, 'div', CLASS='section'))
+ else:
+ self.body.append(self.starttag(node, 'div', CLASS='slide'))
+
+ def visit_subtitle(self, node):
+ if isinstance(node.parent, nodes.section):
+ level = self.section_level + self.initial_header_level - 1
+ if level == 1:
+ level = 2
+ tag = 'h%s' % level
+ self.body.append(self.starttag(node, tag, ''))
+ self.context.append('</%s>\n' % tag)
+ else:
+ html4css1.HTMLTranslator.visit_subtitle(self, node)
+
+ def visit_title(self, node):
+ html4css1.HTMLTranslator.visit_title(self, node)
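
The base-theme chain that `copy_theme()` above walks (theme, then any `__base__` parents, then `'default'`, with cycle detection) can be sketched in isolation. This is a simplified, hypothetical model: the `themes` dict stands in for on-disk theme directories and their `__base__` files, and `resolve_theme_chain` is not part of the writer itself.

```python
# Simplified model of the base-theme resolution loop in copy_theme().
# 'themes' is a hypothetical stand-in for theme directories, mapping
# theme name -> base theme named in its __base__ file (or None).
def resolve_theme_chain(theme, themes, default='default'):
    chain = [theme]
    base = themes.get(theme)
    while base is not None:
        if base in chain:       # cycle detected: fall back to the default
            base = None
            break
        chain.append(base)
        base = themes.get(base)
    if chain[-1] != default:    # every chain ends at the default theme
        chain.append(default)
    return chain

# Mirrors the big-black theme below, whose __base__ names big-white:
themes = {'big-black': 'big-white', 'big-white': None, 'default': None}
print(resolve_theme_chain('big-black', themes))
# ['big-black', 'big-white', 'default']
```

Files are copied starting from the head of this chain, so a theme only needs to ship the files it overrides; everything else falls through to its base and ultimately to `default`.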
diff --git a/python/helpers/docutils/writers/s5_html/themes/README.txt b/python/helpers/docutils/writers/s5_html/themes/README.txt
new file mode 100644
index 0000000..2e01b51
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/README.txt
@@ -0,0 +1,6 @@
+Except where otherwise noted (default/iepngfix.htc), all files in this
+directory have been released into the Public Domain.
+
+These files are based on files from S5 1.1, released into the Public
+Domain by Eric Meyer. For further details, please see
+http://www.meyerweb.com/eric/tools/s5/credits.html.
diff --git a/python/helpers/docutils/writers/s5_html/themes/big-black/__base__ b/python/helpers/docutils/writers/s5_html/themes/big-black/__base__
new file mode 100644
index 0000000..f08be9a
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/big-black/__base__
@@ -0,0 +1,2 @@
+# base theme of this theme:
+big-white
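
The `__base__` file format shown above is minimal: the first non-blank line that is not a `#` comment names the base theme. A small parser mirroring the loop in `copy_theme()` (this helper is illustrative only, not part of the writer):

```python
# Minimal parser for an S5 __base__ theme file, mirroring copy_theme():
# the first stripped, non-blank, non-'#' line names the base theme;
# return None if no such line exists.
def parse_base_theme(text):
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('#'):
            return line
    return None

print(parse_base_theme('# base theme of this theme:\nbig-white\n'))
# big-white
```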
diff --git a/python/helpers/docutils/writers/s5_html/themes/big-black/framing.css b/python/helpers/docutils/writers/s5_html/themes/big-black/framing.css
new file mode 100644
index 0000000..5a31113
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/big-black/framing.css
@@ -0,0 +1,25 @@
+/* The following styles size, place, and layer the slide components.
+ Edit these if you want to change the overall slide layout.
+ The commented lines can be uncommented (and modified, if necessary)
+ to help you with the rearrangement process. */
+
+/* target = 1024x768 */
+
+div#header, div#footer, .slide {width: 100%; top: 0; left: 0;}
+div#header {top: 0; z-index: 1;}
+div#footer {display:none;}
+.slide {top: 0; width: 92%; padding: 0.1em 4% 4%; z-index: 2;}
+/* list-style: none;} */
+div#controls {left: 50%; bottom: 0; width: 50%; z-index: 100;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0;}
+#currentSlide {position: absolute; width: 10%; left: 45%; bottom: 1em;
+ z-index: 10;}
+html>body #currentSlide {position: fixed;}
+
+/*
+div#header {background: #FCC;}
+div#footer {background: #CCF;}
+div#controls {background: #BBD;}
+div#currentSlide {background: #FFC;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/big-black/pretty.css b/python/helpers/docutils/writers/s5_html/themes/big-black/pretty.css
new file mode 100644
index 0000000..82bcc9d
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/big-black/pretty.css
@@ -0,0 +1,109 @@
+/* This file has been placed in the public domain. */
+/* Following are the presentation styles -- edit away! */
+
+html, body {margin: 0; padding: 0;}
+body {background: black; color: white;}
+:link, :visited {text-decoration: none; color: cyan;}
+#controls :active {color: #888 !important;}
+#controls :focus {outline: 1px dotted #CCC;}
+
+blockquote {padding: 0 2em 0.5em; margin: 0 1.5em 0.5em;}
+blockquote p {margin: 0;}
+
+kbd {font-weight: bold; font-size: 1em;}
+sup {font-size: smaller; line-height: 1px;}
+
+.slide pre {padding: 0; margin-left: 0; margin-right: 0; font-size: 90%;}
+.slide ul ul li {list-style: square;}
+.slide img.leader {display: block; margin: 0 auto;}
+.slide tt {font-size: 90%;}
+
+.slide {font-size: 3em; font-family: sans-serif; font-weight: bold;}
+.slide h1 {padding-top: 0; z-index: 1; margin: 0; font-size: 120%;}
+.slide h2 {font-size: 110%;}
+.slide h3 {font-size: 105%;}
+h1 abbr {font-variant: small-caps;}
+
+div#controls {position: absolute; left: 50%; bottom: 0;
+ width: 50%; text-align: right; font: bold 0.9em sans-serif;}
+html>body div#controls {position: fixed; padding: 0 0 1em 0; top: auto;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0; padding: 0;}
+#controls #navLinks a {padding: 0; margin: 0 0.5em;
+ border: none; color: #888; cursor: pointer;}
+#controls #navList {height: 1em;}
+#controls #navList #jumplist {position: absolute; bottom: 0; right: 0;
+ background: black; color: #CCC;}
+
+#currentSlide {text-align: center; font-size: 0.5em; color: #AAA;
+ font-family: sans-serif; font-weight: bold;}
+
+#slide0 h1 {position: static; margin: 0 0 0.5em; padding-top: 0.3em; top: 0;
+ font-size: 150%; white-space: normal; background: transparent;}
+#slide0 h2 {font: 110%; font-style: italic; color: gray;}
+#slide0 h3 {margin-top: 1.5em; font-size: 1.5em;}
+#slide0 h4 {margin-top: 0; font-size: 1em;}
+
+ul.urls {list-style: none; display: inline; margin: 0;}
+.urls li {display: inline; margin: 0;}
+.external {border-bottom: 1px dotted gray;}
+html>body .external {border-bottom: none;}
+.external:after {content: " \274F"; font-size: smaller; color: #FCC;}
+
+.incremental, .incremental *, .incremental *:after {
+ color: black; visibility: visible; border: 0;}
+img.incremental {visibility: hidden;}
+.slide .current {color: lime;}
+
+.slide-display {display: inline ! important;}
+
+.huge {font-size: 150%;}
+.big {font-size: 120%;}
+.small {font-size: 75%;}
+.tiny {font-size: 50%;}
+.huge tt, .big tt, .small tt, .tiny tt {font-size: 115%;}
+.huge pre, .big pre, .small pre, .tiny pre {font-size: 115%;}
+
+.maroon {color: maroon;}
+.red {color: red;}
+.magenta {color: magenta;}
+.fuchsia {color: fuchsia;}
+.pink {color: #FAA;}
+.orange {color: orange;}
+.yellow {color: yellow;}
+.lime {color: lime;}
+.green {color: green;}
+.olive {color: olive;}
+.teal {color: teal;}
+.cyan {color: cyan;}
+.aqua {color: aqua;}
+.blue {color: blue;}
+.navy {color: navy;}
+.purple {color: purple;}
+.black {color: black;}
+.gray {color: gray;}
+.silver {color: silver;}
+.white {color: white;}
+
+.left {text-align: left ! important;}
+.center {text-align: center ! important;}
+.right {text-align: right ! important;}
+
+.animation {position: relative; margin: 1em 0; padding: 0;}
+.animation img {position: absolute;}
+
+/* Docutils-specific overrides */
+
+.slide table.docinfo {margin: 0.5em 0 0.5em 1em;}
+
+div.sidebar {background-color: black;}
+
+pre.literal-block, pre.doctest-block {background-color: black;}
+
+tt.docutils {background-color: black;}
+
+/* diagnostics */
+/*
+li:after {content: " [" attr(class) "]"; color: #F88;}
+div:before {content: "[" attr(class) "]"; color: #F88;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/big-white/framing.css b/python/helpers/docutils/writers/s5_html/themes/big-white/framing.css
new file mode 100644
index 0000000..cd34343
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/big-white/framing.css
@@ -0,0 +1,24 @@
+/* This file has been placed in the public domain. */
+/* The following styles size, place, and layer the slide components.
+ Edit these if you want to change the overall slide layout.
+ The commented lines can be uncommented (and modified, if necessary)
+ to help you with the rearrangement process. */
+
+/* target = 1024x768 */
+
+div#header, div#footer, .slide {width: 100%; top: 0; left: 0;}
+div#footer {display:none;}
+.slide {top: 0; width: 92%; padding: 0.25em 4% 4%; z-index: 2;}
+div#controls {left: 50%; bottom: 0; width: 50%; z-index: 100;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0;}
+#currentSlide {position: absolute; width: 10%; left: 45%; bottom: 1em;
+ z-index: 10;}
+html>body #currentSlide {position: fixed;}
+
+/*
+div#header {background: #FCC;}
+div#footer {background: #CCF;}
+div#controls {background: #BBD;}
+div#currentSlide {background: #FFC;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/big-white/pretty.css b/python/helpers/docutils/writers/s5_html/themes/big-white/pretty.css
new file mode 100644
index 0000000..c5e2fcf
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/big-white/pretty.css
@@ -0,0 +1,107 @@
+/* This file has been placed in the public domain. */
+/* Following are the presentation styles -- edit away! */
+
+html, body {margin: 0; padding: 0;}
+body {background: white; color: black;}
+:link, :visited {text-decoration: none; color: #00C;}
+#controls :active {color: #88A !important;}
+#controls :focus {outline: 1px dotted #227;}
+
+blockquote {padding: 0 2em 0.5em; margin: 0 1.5em 0.5em;}
+blockquote p {margin: 0;}
+
+kbd {font-weight: bold; font-size: 1em;}
+sup {font-size: smaller; line-height: 1px;}
+
+.slide pre {padding: 0; margin-left: 0; margin-right: 0; font-size: 90%;}
+.slide ul ul li {list-style: square;}
+.slide img.leader {display: block; margin: 0 auto;}
+.slide tt {font-size: 90%;}
+
+.slide {font-size: 3em; font-family: sans-serif; font-weight: bold;}
+.slide h1 {padding-top: 0; z-index: 1; margin: 0; font-size: 120%;}
+.slide h2 {font-size: 110%;}
+.slide h3 {font-size: 105%;}
+h1 abbr {font-variant: small-caps;}
+
+div#controls {position: absolute; left: 50%; bottom: 0;
+ width: 50%; text-align: right; font: bold 0.9em sans-serif;}
+html>body div#controls {position: fixed; padding: 0 0 1em 0; top: auto;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0; padding: 0;}
+#controls #navLinks a {padding: 0; margin: 0 0.5em;
+ border: none; color: #005; cursor: pointer;}
+#controls #navList {height: 1em;}
+#controls #navList #jumplist {position: absolute; bottom: 0; right: 0;
+ background: #DDD; color: #227;}
+
+#currentSlide {text-align: center; font-size: 0.5em; color: #444;
+ font-family: sans-serif; font-weight: bold;}
+
+#slide0 h1 {position: static; margin: 0 0 0.5em; padding-top: 0.3em; top: 0;
+ font-size: 150%; white-space: normal; background: transparent;}
+#slide0 h2 {font: 110%; font-style: italic; color: gray;}
+#slide0 h3 {margin-top: 1.5em; font-size: 1.5em;}
+#slide0 h4 {margin-top: 0; font-size: 1em;}
+
+ul.urls {list-style: none; display: inline; margin: 0;}
+.urls li {display: inline; margin: 0;}
+.external {border-bottom: 1px dotted gray;}
+html>body .external {border-bottom: none;}
+.external:after {content: " \274F"; font-size: smaller; color: #77B;}
+
+.incremental, .incremental *, .incremental *:after {
+ color: white; visibility: visible; border: 0;}
+img.incremental {visibility: hidden;}
+.slide .current {color: green;}
+
+.slide-display {display: inline ! important;}
+
+.huge {font-size: 150%;}
+.big {font-size: 120%;}
+.small {font-size: 75%;}
+.tiny {font-size: 50%;}
+.huge tt, .big tt, .small tt, .tiny tt {font-size: 115%;}
+.huge pre, .big pre, .small pre, .tiny pre {font-size: 115%;}
+
+.maroon {color: maroon;}
+.red {color: red;}
+.magenta {color: magenta;}
+.fuchsia {color: fuchsia;}
+.pink {color: #FAA;}
+.orange {color: orange;}
+.yellow {color: yellow;}
+.lime {color: lime;}
+.green {color: green;}
+.olive {color: olive;}
+.teal {color: teal;}
+.cyan {color: cyan;}
+.aqua {color: aqua;}
+.blue {color: blue;}
+.navy {color: navy;}
+.purple {color: purple;}
+.black {color: black;}
+.gray {color: gray;}
+.silver {color: silver;}
+.white {color: white;}
+
+.left {text-align: left ! important;}
+.center {text-align: center ! important;}
+.right {text-align: right ! important;}
+
+.animation {position: relative; margin: 1em 0; padding: 0;}
+.animation img {position: absolute;}
+
+/* Docutils-specific overrides */
+
+.slide table.docinfo {margin: 0.5em 0 0.5em 1em;}
+
+pre.literal-block, pre.doctest-block {background-color: white;}
+
+tt.docutils {background-color: white;}
+
+/* diagnostics */
+/*
+li:after {content: " [" attr(class) "]"; color: #F88;}
+div:before {content: "[" attr(class) "]"; color: #F88;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/blank.gif b/python/helpers/docutils/writers/s5_html/themes/default/blank.gif
new file mode 100644
index 0000000..75b945d
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/blank.gif
Binary files differ
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/framing.css b/python/helpers/docutils/writers/s5_html/themes/default/framing.css
new file mode 100644
index 0000000..c4727f3
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/framing.css
@@ -0,0 +1,25 @@
+/* This file has been placed in the public domain. */
+/* The following styles size, place, and layer the slide components.
+ Edit these if you want to change the overall slide layout.
+ The commented lines can be uncommented (and modified, if necessary)
+ to help you with the rearrangement process. */
+
+/* target = 1024x768 */
+
+div#header, div#footer, .slide {width: 100%; top: 0; left: 0;}
+div#header {position: fixed; top: 0; height: 3em; z-index: 1;}
+div#footer {top: auto; bottom: 0; height: 2.5em; z-index: 5;}
+.slide {top: 0; width: 92%; padding: 2.5em 4% 4%; z-index: 2;}
+div#controls {left: 50%; bottom: 0; width: 50%; z-index: 100;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0;}
+#currentSlide {position: absolute; width: 10%; left: 45%; bottom: 1em;
+ z-index: 10;}
+html>body #currentSlide {position: fixed;}
+
+/*
+div#header {background: #FCC;}
+div#footer {background: #CCF;}
+div#controls {background: #BBD;}
+div#currentSlide {background: #FFC;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/iepngfix.htc b/python/helpers/docutils/writers/s5_html/themes/default/iepngfix.htc
new file mode 100644
index 0000000..9f3d628
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/iepngfix.htc
@@ -0,0 +1,42 @@
+<public:component>
+<public:attach event="onpropertychange" onevent="doFix()" />
+
+<script>
+
+// IE5.5+ PNG Alpha Fix v1.0 by Angus Turnbull http://www.twinhelix.com
+// Free usage permitted as long as this notice remains intact.
+
+// This must be a path to a blank image. That's all the configuration you need here.
+var blankImg = 'ui/default/blank.gif';
+
+var f = 'DXImageTransform.Microsoft.AlphaImageLoader';
+
+function filt(s, m) {
+ if (filters[f]) {
+ filters[f].enabled = s ? true : false;
+ if (s) with (filters[f]) { src = s; sizingMethod = m }
+ } else if (s) style.filter = 'progid:'+f+'(src="'+s+'",sizingMethod="'+m+'")';
+}
+
+function doFix() {
+ if ((parseFloat(navigator.userAgent.match(/MSIE (\S+)/)[1]) < 5.5) ||
+ (event && !/(background|src)/.test(event.propertyName))) return;
+
+ if (tagName == 'IMG') {
+ if ((/\.png$/i).test(src)) {
+ filt(src, 'image'); // was 'scale'
+ src = blankImg;
+ } else if (src.indexOf(blankImg) < 0) filt();
+ } else if (style.backgroundImage) {
+ if (style.backgroundImage.match(/^url[("']+(.*\.png)[)"']+$/i)) {
+ var s = RegExp.$1;
+ style.backgroundImage = '';
+ filt(s, 'crop');
+ } else filt();
+ }
+}
+
+doFix();
+
+</script>
+</public:component>
\ No newline at end of file
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/opera.css b/python/helpers/docutils/writers/s5_html/themes/default/opera.css
new file mode 100644
index 0000000..c9d1148
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/opera.css
@@ -0,0 +1,8 @@
+/* This file has been placed in the public domain. */
+/* DO NOT CHANGE THESE unless you really want to break Opera Show */
+.slide {
+ visibility: visible !important;
+ position: static !important;
+ page-break-before: always;
+}
+#slide0 {page-break-before: avoid;}
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/outline.css b/python/helpers/docutils/writers/s5_html/themes/default/outline.css
new file mode 100644
index 0000000..fa767e2
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/outline.css
@@ -0,0 +1,16 @@
+/* This file has been placed in the public domain. */
+/* Don't change this unless you want the layout stuff to show up in the
+ outline view! */
+
+.layout div, #footer *, #controlForm * {display: none;}
+#footer, #controls, #controlForm, #navLinks, #toggle {
+ display: block; visibility: visible; margin: 0; padding: 0;}
+#toggle {float: right; padding: 0.5em;}
+html>body #toggle {position: fixed; top: 0; right: 0;}
+
+/* making the outline look pretty-ish */
+
+#slide0 h1, #slide0 h2, #slide0 h3, #slide0 h4 {border: none; margin: 0;}
+#toggle {border: 1px solid; border-width: 0 0 1px 1px; background: #FFF;}
+
+.outline {display: inline ! important;}
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/pretty.css b/python/helpers/docutils/writers/s5_html/themes/default/pretty.css
new file mode 100644
index 0000000..1cede72
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/pretty.css
@@ -0,0 +1,120 @@
+/* This file has been placed in the public domain. */
+/* Following are the presentation styles -- edit away! */
+
+html, body {margin: 0; padding: 0;}
+body {background: white; color: black;}
+/* Replace the background style above with the style below (and again for
+ div#header) for a graphic: */
+/* background: white url(bodybg.gif) -16px 0 no-repeat; */
+:link, :visited {text-decoration: none; color: #00C;}
+#controls :active {color: #88A !important;}
+#controls :focus {outline: 1px dotted #227;}
+h1, h2, h3, h4 {font-size: 100%; margin: 0; padding: 0; font-weight: inherit;}
+
+blockquote {padding: 0 2em 0.5em; margin: 0 1.5em 0.5em;}
+blockquote p {margin: 0;}
+
+kbd {font-weight: bold; font-size: 1em;}
+sup {font-size: smaller; line-height: 1px;}
+
+.slide pre {padding: 0; margin-left: 0; margin-right: 0; font-size: 90%;}
+.slide ul ul li {list-style: square;}
+.slide img.leader {display: block; margin: 0 auto;}
+.slide tt {font-size: 90%;}
+
+div#header, div#footer {background: #005; color: #AAB; font-family: sans-serif;}
+/* background: #005 url(bodybg.gif) -16px 0 no-repeat; */
+div#footer {font-size: 0.5em; font-weight: bold; padding: 1em 0;}
+#footer h1 {display: block; padding: 0 1em;}
+#footer h2 {display: block; padding: 0.8em 1em 0;}
+
+.slide {font-size: 1.2em;}
+.slide h1 {position: absolute; top: 0.45em; z-index: 1;
+ margin: 0; padding-left: 0.7em; white-space: nowrap;
+ font: bold 150% sans-serif; color: #DDE; background: #005;}
+.slide h2 {font: bold 120%/1em sans-serif; padding-top: 0.5em;}
+.slide h3 {font: bold 100% sans-serif; padding-top: 0.5em;}
+h1 abbr {font-variant: small-caps;}
+
+div#controls {position: absolute; left: 50%; bottom: 0;
+ width: 50%; text-align: right; font: bold 0.9em sans-serif;}
+html>body div#controls {position: fixed; padding: 0 0 1em 0; top: auto;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0; padding: 0;}
+#controls #navLinks a {padding: 0; margin: 0 0.5em;
+ background: #005; border: none; color: #779; cursor: pointer;}
+#controls #navList {height: 1em;}
+#controls #navList #jumplist {position: absolute; bottom: 0; right: 0;
+ background: #DDD; color: #227;}
+
+#currentSlide {text-align: center; font-size: 0.5em; color: #449;
+ font-family: sans-serif; font-weight: bold;}
+
+#slide0 {padding-top: 1.5em}
+#slide0 h1 {position: static; margin: 1em 0 0; padding: 0; color: #000;
+ font: bold 2em sans-serif; white-space: normal; background: transparent;}
+#slide0 h2 {font: bold italic 1em sans-serif; margin: 0.25em;}
+#slide0 h3 {margin-top: 1.5em; font-size: 1.5em;}
+#slide0 h4 {margin-top: 0; font-size: 1em;}
+
+ul.urls {list-style: none; display: inline; margin: 0;}
+.urls li {display: inline; margin: 0;}
+.external {border-bottom: 1px dotted gray;}
+html>body .external {border-bottom: none;}
+.external:after {content: " \274F"; font-size: smaller; color: #77B;}
+
+.incremental, .incremental *, .incremental *:after {visibility: visible;
+ color: white; border: 0;}
+img.incremental {visibility: hidden;}
+.slide .current {color: green;}
+
+.slide-display {display: inline ! important;}
+
+.huge {font-family: sans-serif; font-weight: bold; font-size: 150%;}
+.big {font-family: sans-serif; font-weight: bold; font-size: 120%;}
+.small {font-size: 75%;}
+.tiny {font-size: 50%;}
+.huge tt, .big tt, .small tt, .tiny tt {font-size: 115%;}
+.huge pre, .big pre, .small pre, .tiny pre {font-size: 115%;}
+
+.maroon {color: maroon;}
+.red {color: red;}
+.magenta {color: magenta;}
+.fuchsia {color: fuchsia;}
+.pink {color: #FAA;}
+.orange {color: orange;}
+.yellow {color: yellow;}
+.lime {color: lime;}
+.green {color: green;}
+.olive {color: olive;}
+.teal {color: teal;}
+.cyan {color: cyan;}
+.aqua {color: aqua;}
+.blue {color: blue;}
+.navy {color: navy;}
+.purple {color: purple;}
+.black {color: black;}
+.gray {color: gray;}
+.silver {color: silver;}
+.white {color: white;}
+
+.left {text-align: left ! important;}
+.center {text-align: center ! important;}
+.right {text-align: right ! important;}
+
+.animation {position: relative; margin: 1em 0; padding: 0;}
+.animation img {position: absolute;}
+
+/* Docutils-specific overrides */
+
+.slide table.docinfo {margin: 1em 0 0.5em 2em;}
+
+pre.literal-block, pre.doctest-block {background-color: white;}
+
+tt.docutils {background-color: white;}
+
+/* diagnostics */
+/*
+li:after {content: " [" attr(class) "]"; color: #F88;}
+div:before {content: "[" attr(class) "]"; color: #F88;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/print.css b/python/helpers/docutils/writers/s5_html/themes/default/print.css
new file mode 100644
index 0000000..9d057cc
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/print.css
@@ -0,0 +1,24 @@
+/* This file has been placed in the public domain. */
+/* The following rule is necessary to have all slides appear in print!
+ DO NOT REMOVE IT! */
+.slide, ul {page-break-inside: avoid; visibility: visible !important;}
+h1 {page-break-after: avoid;}
+
+body {font-size: 12pt; background: white;}
+* {color: black;}
+
+#slide0 h1 {font-size: 200%; border: none; margin: 0.5em 0 0.25em;}
+#slide0 h3 {margin: 0; padding: 0;}
+#slide0 h4 {margin: 0 0 0.5em; padding: 0;}
+#slide0 {margin-bottom: 3em;}
+
+#header {display: none;}
+#footer h1 {margin: 0; border-bottom: 1px solid; color: gray;
+ font-style: italic;}
+#footer h2, #controls {display: none;}
+
+.print {display: inline ! important;}
+
+/* The following rule keeps the layout stuff out of print.
+ Remove at your own risk! */
+.layout, .layout * {display: none !important;}
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/s5-core.css b/python/helpers/docutils/writers/s5_html/themes/default/s5-core.css
new file mode 100644
index 0000000..6965f5e
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/s5-core.css
@@ -0,0 +1,11 @@
+/* This file has been placed in the public domain. */
+/* Do not edit or override these styles!
+ The system will likely break if you do. */
+
+div#header, div#footer, div#controls, .slide {position: absolute;}
+html>body div#header, html>body div#footer,
+ html>body div#controls, html>body .slide {position: fixed;}
+.handout {display: none;}
+.layout {display: block;}
+.slide, .hideme, .incremental {visibility: hidden;}
+#slide0 {visibility: visible;}
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/slides.css b/python/helpers/docutils/writers/s5_html/themes/default/slides.css
new file mode 100644
index 0000000..82bdc0e
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/slides.css
@@ -0,0 +1,10 @@
+/* This file has been placed in the public domain. */
+
+/* required to make the slide show run at all */
+@import url(s5-core.css);
+
+/* sets basic placement and size of slide components */
+@import url(framing.css);
+
+/* styles that make the slides look good */
+@import url(pretty.css);
diff --git a/python/helpers/docutils/writers/s5_html/themes/default/slides.js b/python/helpers/docutils/writers/s5_html/themes/default/slides.js
new file mode 100644
index 0000000..81e04e5
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/default/slides.js
@@ -0,0 +1,558 @@
+// S5 v1.1 slides.js -- released into the Public Domain
+// Modified for Docutils (http://docutils.sf.net) by David Goodger
+//
+// Please see http://www.meyerweb.com/eric/tools/s5/credits.html for
+// information about all the wonderful and talented contributors to this code!
+
+var undef;
+var slideCSS = '';
+var snum = 0;
+var smax = 1;
+var slideIDs = new Array();
+var incpos = 0;
+var number = undef;
+var s5mode = true;
+var defaultView = 'slideshow';
+var controlVis = 'visible';
+
+var isIE = navigator.appName == 'Microsoft Internet Explorer' ? 1 : 0;
+var isOp = navigator.userAgent.indexOf('Opera') > -1 ? 1 : 0;
+var isGe = navigator.userAgent.indexOf('Gecko') > -1 && navigator.userAgent.indexOf('Safari') < 1 ? 1 : 0;
+
+function hasClass(object, className) {
+ if (!object.className) return false;
+ return (object.className.search('(^|\\s)' + className + '(\\s|$)') != -1);
+}
+
+function hasValue(object, value) {
+ if (!object) return false;
+ return (object.search('(^|\\s)' + value + '(\\s|$)') != -1);
+}
+
+function removeClass(object,className) {
+ if (!object) return;
+ object.className = object.className.replace(new RegExp('(^|\\s)'+className+'(\\s|$)'), RegExp.$1+RegExp.$2);
+}
+
+function addClass(object,className) {
+ if (!object || hasClass(object, className)) return;
+ if (object.className) {
+ object.className += ' '+className;
+ } else {
+ object.className = className;
+ }
+}
+
+function GetElementsWithClassName(elementName,className) {
+ var allElements = document.getElementsByTagName(elementName);
+ var elemColl = new Array();
+ for (var i = 0; i< allElements.length; i++) {
+ if (hasClass(allElements[i], className)) {
+ elemColl[elemColl.length] = allElements[i];
+ }
+ }
+ return elemColl;
+}
+
+function isParentOrSelf(element, id) {
+ if (element == null || element.nodeName=='BODY') return false;
+ else if (element.id == id) return true;
+ else return isParentOrSelf(element.parentNode, id);
+}
+
+function nodeValue(node) {
+ var result = "";
+ if (node.nodeType == 1) {
+ var children = node.childNodes;
+ for (var i = 0; i < children.length; ++i) {
+ result += nodeValue(children[i]);
+ }
+ }
+ else if (node.nodeType == 3) {
+ result = node.nodeValue;
+ }
+ return(result);
+}
+
+function slideLabel() {
+ var slideColl = GetElementsWithClassName('*','slide');
+ var list = document.getElementById('jumplist');
+ smax = slideColl.length;
+ for (var n = 0; n < smax; n++) {
+ var obj = slideColl[n];
+
+ var did = 'slide' + n.toString();
+ if (obj.getAttribute('id')) {
+ slideIDs[n] = obj.getAttribute('id');
+ }
+ else {
+ obj.setAttribute('id',did);
+ slideIDs[n] = did;
+ }
+ if (isOp) continue;
+
+ var otext = '';
+ var menu = obj.firstChild;
+ if (!menu) continue; // to cope with empty slides
+ while (menu && menu.nodeType == 3) {
+ menu = menu.nextSibling;
+ }
+ if (!menu) continue; // to cope with slides with only text nodes
+
+ var menunodes = menu.childNodes;
+ for (var o = 0; o < menunodes.length; o++) {
+ otext += nodeValue(menunodes[o]);
+ }
+ list.options[list.length] = new Option(n + ' : ' + otext, n);
+ }
+}
+
+function currentSlide() {
+ var cs;
+ var footer_nodes;
+ var vis = 'visible';
+ if (document.getElementById) {
+ cs = document.getElementById('currentSlide');
+ footer_nodes = document.getElementById('footer').childNodes;
+ } else {
+ cs = document.currentSlide;
+    footer_nodes = document.footer.childNodes;
+ }
+ cs.innerHTML = '<span id="csHere">' + snum + '<\/span> ' +
+ '<span id="csSep">\/<\/span> ' +
+ '<span id="csTotal">' + (smax-1) + '<\/span>';
+ if (snum == 0) {
+ vis = 'hidden';
+ }
+ cs.style.visibility = vis;
+ for (var i = 0; i < footer_nodes.length; i++) {
+ if (footer_nodes[i].nodeType == 1) {
+ footer_nodes[i].style.visibility = vis;
+ }
+ }
+}
+
+function go(step) {
+ if (document.getElementById('slideProj').disabled || step == 0) return;
+ var jl = document.getElementById('jumplist');
+ var cid = slideIDs[snum];
+ var ce = document.getElementById(cid);
+ if (incrementals[snum].length > 0) {
+ for (var i = 0; i < incrementals[snum].length; i++) {
+ removeClass(incrementals[snum][i], 'current');
+ removeClass(incrementals[snum][i], 'incremental');
+ }
+ }
+ if (step != 'j') {
+ snum += step;
+ lmax = smax - 1;
+ if (snum > lmax) snum = lmax;
+ if (snum < 0) snum = 0;
+ } else
+ snum = parseInt(jl.value);
+ var nid = slideIDs[snum];
+ var ne = document.getElementById(nid);
+ if (!ne) {
+ ne = document.getElementById(slideIDs[0]);
+ snum = 0;
+ }
+ if (step < 0) {incpos = incrementals[snum].length} else {incpos = 0;}
+ if (incrementals[snum].length > 0 && incpos == 0) {
+ for (var i = 0; i < incrementals[snum].length; i++) {
+ if (hasClass(incrementals[snum][i], 'current'))
+ incpos = i + 1;
+ else
+ addClass(incrementals[snum][i], 'incremental');
+ }
+ }
+ if (incrementals[snum].length > 0 && incpos > 0)
+ addClass(incrementals[snum][incpos - 1], 'current');
+ ce.style.visibility = 'hidden';
+ ne.style.visibility = 'visible';
+ jl.selectedIndex = snum;
+ currentSlide();
+ number = 0;
+}
+
+function goTo(target) {
+ if (target >= smax || target == snum) return;
+ go(target - snum);
+}
+
+function subgo(step) {
+ if (step > 0) {
+ removeClass(incrementals[snum][incpos - 1],'current');
+ removeClass(incrementals[snum][incpos], 'incremental');
+ addClass(incrementals[snum][incpos],'current');
+ incpos++;
+ } else {
+ incpos--;
+ removeClass(incrementals[snum][incpos],'current');
+ addClass(incrementals[snum][incpos], 'incremental');
+ addClass(incrementals[snum][incpos - 1],'current');
+ }
+}
+
+function toggle() {
+ var slideColl = GetElementsWithClassName('*','slide');
+ var slides = document.getElementById('slideProj');
+ var outline = document.getElementById('outlineStyle');
+ if (!slides.disabled) {
+ slides.disabled = true;
+ outline.disabled = false;
+ s5mode = false;
+ fontSize('1em');
+ for (var n = 0; n < smax; n++) {
+ var slide = slideColl[n];
+ slide.style.visibility = 'visible';
+ }
+ } else {
+ slides.disabled = false;
+ outline.disabled = true;
+ s5mode = true;
+ fontScale();
+ for (var n = 0; n < smax; n++) {
+ var slide = slideColl[n];
+ slide.style.visibility = 'hidden';
+ }
+ slideColl[snum].style.visibility = 'visible';
+ }
+}
+
+function showHide(action) {
+ var obj = GetElementsWithClassName('*','hideme')[0];
+ switch (action) {
+ case 's': obj.style.visibility = 'visible'; break;
+ case 'h': obj.style.visibility = 'hidden'; break;
+ case 'k':
+ if (obj.style.visibility != 'visible') {
+ obj.style.visibility = 'visible';
+ } else {
+ obj.style.visibility = 'hidden';
+ }
+ break;
+ }
+}
+
+// 'keys' code adapted from MozPoint (http://mozpoint.mozdev.org/)
+function keys(key) {
+ if (!key) {
+ key = event;
+ key.which = key.keyCode;
+ }
+ if (key.which == 84) {
+ toggle();
+ return;
+ }
+ if (s5mode) {
+ switch (key.which) {
+ case 10: // return
+ case 13: // enter
+ if (window.event && isParentOrSelf(window.event.srcElement, 'controls')) return;
+ if (key.target && isParentOrSelf(key.target, 'controls')) return;
+ if(number != undef) {
+ goTo(number);
+ break;
+ }
+ case 32: // spacebar
+ case 34: // page down
+ case 39: // rightkey
+ case 40: // downkey
+ if(number != undef) {
+ go(number);
+ } else if (!incrementals[snum] || incpos >= incrementals[snum].length) {
+ go(1);
+ } else {
+ subgo(1);
+ }
+ break;
+ case 33: // page up
+ case 37: // leftkey
+ case 38: // upkey
+ if(number != undef) {
+ go(-1 * number);
+ } else if (!incrementals[snum] || incpos <= 0) {
+ go(-1);
+ } else {
+ subgo(-1);
+ }
+ break;
+ case 36: // home
+ goTo(0);
+ break;
+ case 35: // end
+ goTo(smax-1);
+ break;
+ case 67: // c
+ showHide('k');
+ break;
+ }
+ if (key.which < 48 || key.which > 57) {
+ number = undef;
+ } else {
+ if (window.event && isParentOrSelf(window.event.srcElement, 'controls')) return;
+ if (key.target && isParentOrSelf(key.target, 'controls')) return;
+ number = (((number != undef) ? number : 0) * 10) + (key.which - 48);
+ }
+ }
+ return false;
+}
+
+function clicker(e) {
+ number = undef;
+ var target;
+ if (window.event) {
+ target = window.event.srcElement;
+ e = window.event;
+ } else target = e.target;
+ if (target.href != null || hasValue(target.rel, 'external') || isParentOrSelf(target, 'controls') || isParentOrSelf(target,'embed') || isParentOrSelf(target, 'object')) return true;
+ if (!e.which || e.which == 1) {
+ if (!incrementals[snum] || incpos >= incrementals[snum].length) {
+ go(1);
+ } else {
+ subgo(1);
+ }
+ }
+}
+
+function findSlide(hash) {
+ var target = document.getElementById(hash);
+ if (target) {
+ for (var i = 0; i < slideIDs.length; i++) {
+ if (target.id == slideIDs[i]) return i;
+ }
+ }
+ return null;
+}
+
+function slideJump() {
+ if (window.location.hash == null || window.location.hash == '') {
+ currentSlide();
+ return;
+ }
+ if (window.location.hash == null) return;
+ var dest = null;
+ dest = findSlide(window.location.hash.slice(1));
+ if (dest == null) {
+ dest = 0;
+ }
+ go(dest - snum);
+}
+
+function fixLinks() {
+ var thisUri = window.location.href;
+ thisUri = thisUri.slice(0, thisUri.length - window.location.hash.length);
+ var aelements = document.getElementsByTagName('A');
+ for (var i = 0; i < aelements.length; i++) {
+ var a = aelements[i].href;
+ var slideID = a.match('\#.+');
+ if ((slideID) && (slideID[0].slice(0,1) == '#')) {
+ var dest = findSlide(slideID[0].slice(1));
+ if (dest != null) {
+ if (aelements[i].addEventListener) {
+ aelements[i].addEventListener("click", new Function("e",
+ "if (document.getElementById('slideProj').disabled) return;" +
+ "go("+dest+" - snum); " +
+ "if (e.preventDefault) e.preventDefault();"), true);
+ } else if (aelements[i].attachEvent) {
+ aelements[i].attachEvent("onclick", new Function("",
+ "if (document.getElementById('slideProj').disabled) return;" +
+ "go("+dest+" - snum); " +
+ "event.returnValue = false;"));
+ }
+ }
+ }
+ }
+}
+
+function externalLinks() {
+ if (!document.getElementsByTagName) return;
+ var anchors = document.getElementsByTagName('a');
+ for (var i=0; i<anchors.length; i++) {
+ var anchor = anchors[i];
+ if (anchor.getAttribute('href') && hasValue(anchor.rel, 'external')) {
+ anchor.target = '_blank';
+ addClass(anchor,'external');
+ }
+ }
+}
+
+function createControls() {
+ var controlsDiv = document.getElementById("controls");
+ if (!controlsDiv) return;
+ var hider = ' onmouseover="showHide(\'s\');" onmouseout="showHide(\'h\');"';
+ var hideDiv, hideList = '';
+ if (controlVis == 'hidden') {
+ hideDiv = hider;
+ } else {
+ hideList = hider;
+ }
+ controlsDiv.innerHTML = '<form action="#" id="controlForm"' + hideDiv + '>' +
+ '<div id="navLinks">' +
+ '<a accesskey="t" id="toggle" href="javascript:toggle();">Ø<\/a>' +
+ '<a accesskey="z" id="prev" href="javascript:go(-1);">«<\/a>' +
+ '<a accesskey="x" id="next" href="javascript:go(1);">»<\/a>' +
+ '<div id="navList"' + hideList + '><select id="jumplist" onchange="go(\'j\');"><\/select><\/div>' +
+ '<\/div><\/form>';
+ if (controlVis == 'hidden') {
+ var hidden = document.getElementById('navLinks');
+ } else {
+ var hidden = document.getElementById('jumplist');
+ }
+ addClass(hidden,'hideme');
+}
+
+function fontScale() { // causes layout problems in FireFox that get fixed if browser's Reload is used; same may be true of other Gecko-based browsers
+ if (!s5mode) return false;
+ var vScale = 22; // both yield 32 (after rounding) at 1024x768
+ var hScale = 32; // perhaps should auto-calculate based on theme's declared value?
+ if (window.innerHeight) {
+ var vSize = window.innerHeight;
+ var hSize = window.innerWidth;
+ } else if (document.documentElement.clientHeight) {
+ var vSize = document.documentElement.clientHeight;
+ var hSize = document.documentElement.clientWidth;
+ } else if (document.body.clientHeight) {
+ var vSize = document.body.clientHeight;
+ var hSize = document.body.clientWidth;
+ } else {
+ var vSize = 700; // assuming 1024x768, minus chrome and such
+ var hSize = 1024; // these do not account for kiosk mode or Opera Show
+ }
+ var newSize = Math.min(Math.round(vSize/vScale),Math.round(hSize/hScale));
+ fontSize(newSize + 'px');
+ if (isGe) { // hack to counter incremental reflow bugs
+ var obj = document.getElementsByTagName('body')[0];
+ obj.style.display = 'none';
+ obj.style.display = 'block';
+ }
+}
+
+function fontSize(value) {
+ if (!(s5ss = document.getElementById('s5ss'))) {
+ if (!isIE) {
+ document.getElementsByTagName('head')[0].appendChild(s5ss = document.createElement('style'));
+ s5ss.setAttribute('media','screen, projection');
+ s5ss.setAttribute('id','s5ss');
+ } else {
+ document.createStyleSheet();
+ document.s5ss = document.styleSheets[document.styleSheets.length - 1];
+ }
+ }
+ if (!isIE) {
+ while (s5ss.lastChild) s5ss.removeChild(s5ss.lastChild);
+ s5ss.appendChild(document.createTextNode('body {font-size: ' + value + ' !important;}'));
+ } else {
+ document.s5ss.addRule('body','font-size: ' + value + ' !important;');
+ }
+}
+
+function notOperaFix() {
+ slideCSS = document.getElementById('slideProj').href;
+ var slides = document.getElementById('slideProj');
+ var outline = document.getElementById('outlineStyle');
+ slides.setAttribute('media','screen');
+ outline.disabled = true;
+ if (isGe) {
+ slides.setAttribute('href','null'); // Gecko fix
+ slides.setAttribute('href',slideCSS); // Gecko fix
+ }
+ if (isIE && document.styleSheets && document.styleSheets[0]) {
+ document.styleSheets[0].addRule('img', 'behavior: url(ui/default/iepngfix.htc)');
+ document.styleSheets[0].addRule('div', 'behavior: url(ui/default/iepngfix.htc)');
+ document.styleSheets[0].addRule('.slide', 'behavior: url(ui/default/iepngfix.htc)');
+ }
+}
+
+function getIncrementals(obj) {
+ var incrementals = new Array();
+ if (!obj)
+ return incrementals;
+ var children = obj.childNodes;
+ for (var i = 0; i < children.length; i++) {
+ var child = children[i];
+ if (hasClass(child, 'incremental')) {
+ if (child.nodeName == 'OL' || child.nodeName == 'UL') {
+ removeClass(child, 'incremental');
+ for (var j = 0; j < child.childNodes.length; j++) {
+ if (child.childNodes[j].nodeType == 1) {
+ addClass(child.childNodes[j], 'incremental');
+ }
+ }
+ } else {
+ incrementals[incrementals.length] = child;
+ removeClass(child,'incremental');
+ }
+ }
+ if (hasClass(child, 'show-first')) {
+ if (child.nodeName == 'OL' || child.nodeName == 'UL') {
+ removeClass(child, 'show-first');
+ if (child.childNodes[isGe].nodeType == 1) {
+ removeClass(child.childNodes[isGe], 'incremental');
+ }
+ } else {
+ incrementals[incrementals.length] = child;
+ }
+ }
+ incrementals = incrementals.concat(getIncrementals(child));
+ }
+ return incrementals;
+}
+
+function createIncrementals() {
+ var incrementals = new Array();
+ for (var i = 0; i < smax; i++) {
+ incrementals[i] = getIncrementals(document.getElementById(slideIDs[i]));
+ }
+ return incrementals;
+}
+
+function defaultCheck() {
+ var allMetas = document.getElementsByTagName('meta');
+ for (var i = 0; i< allMetas.length; i++) {
+ if (allMetas[i].name == 'defaultView') {
+ defaultView = allMetas[i].content;
+ }
+ if (allMetas[i].name == 'controlVis') {
+ controlVis = allMetas[i].content;
+ }
+ }
+}
+
+// Key trap fix, new function body for trap()
+function trap(e) {
+ if (!e) {
+ e = event;
+ e.which = e.keyCode;
+ }
+ try {
+ modifierKey = e.ctrlKey || e.altKey || e.metaKey;
+ }
+ catch(e) {
+ modifierKey = false;
+ }
+ return modifierKey || e.which == 0;
+}
+
+function startup() {
+ defaultCheck();
+ if (!isOp) createControls();
+ slideLabel();
+ fixLinks();
+ externalLinks();
+ fontScale();
+ if (!isOp) {
+ notOperaFix();
+ incrementals = createIncrementals();
+ slideJump();
+ if (defaultView == 'outline') {
+ toggle();
+ }
+ document.onkeyup = keys;
+ document.onkeypress = trap;
+ document.onclick = clicker;
+ }
+}
+
+window.onload = startup;
+window.onresize = function(){setTimeout('fontScale()', 50);}
diff --git a/python/helpers/docutils/writers/s5_html/themes/medium-black/__base__ b/python/helpers/docutils/writers/s5_html/themes/medium-black/__base__
new file mode 100644
index 0000000..401b621
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/medium-black/__base__
@@ -0,0 +1,2 @@
+# base theme of this theme:
+medium-white
diff --git a/python/helpers/docutils/writers/s5_html/themes/medium-black/pretty.css b/python/helpers/docutils/writers/s5_html/themes/medium-black/pretty.css
new file mode 100644
index 0000000..2ec10e2
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/medium-black/pretty.css
@@ -0,0 +1,115 @@
+/* This file has been placed in the public domain. */
+/* Following are the presentation styles -- edit away! */
+
+html, body {margin: 0; padding: 0;}
+body {background: black; color: white;}
+:link, :visited {text-decoration: none; color: cyan;}
+#controls :active {color: #888 !important;}
+#controls :focus {outline: 1px dotted #CCC;}
+h1, h2, h3, h4 {font-size: 100%; margin: 0; padding: 0; font-weight: inherit;}
+
+blockquote {padding: 0 2em 0.5em; margin: 0 1.5em 0.5em;}
+blockquote p {margin: 0;}
+
+kbd {font-weight: bold; font-size: 1em;}
+sup {font-size: smaller; line-height: 1px;}
+
+.slide pre {padding: 0; margin-left: 0; margin-right: 0; font-size: 90%;}
+.slide ul ul li {list-style: square;}
+.slide img.leader {display: block; margin: 0 auto;}
+.slide tt {font-size: 90%;}
+
+div#footer {font-family: sans-serif; color: #AAA;
+ font-size: 0.5em; font-weight: bold; padding: 1em 0;}
+#footer h1 {display: block; padding: 0 1em;}
+#footer h2 {display: block; padding: 0.8em 1em 0;}
+
+.slide {font-size: 1.75em;}
+.slide h1 {padding-top: 0; z-index: 1; margin: 0; font: bold 150% sans-serif;}
+.slide h2 {font: bold 125% sans-serif; padding-top: 0.5em;}
+.slide h3 {font: bold 110% sans-serif; padding-top: 0.5em;}
+h1 abbr {font-variant: small-caps;}
+
+div#controls {position: absolute; left: 50%; bottom: 0;
+ width: 50%; text-align: right; font: bold 0.9em sans-serif;}
+html>body div#controls {position: fixed; padding: 0 0 1em 0; top: auto;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0; padding: 0;}
+#controls #navLinks a {padding: 0; margin: 0 0.5em;
+ border: none; color: #888; cursor: pointer;}
+#controls #navList {height: 1em;}
+#controls #navList #jumplist {position: absolute; bottom: 0; right: 0;
+ background: black; color: #CCC;}
+
+#currentSlide {text-align: center; font-size: 0.5em; color: #AAA;
+ font-family: sans-serif; font-weight: bold;}
+
+#slide0 h1 {position: static; margin: 0 0 0.5em; padding-top: 1em; top: 0;
+ font: bold 150% sans-serif; white-space: normal; background: transparent;}
+#slide0 h2 {font: bold italic 125% sans-serif; color: gray;}
+#slide0 h3 {margin-top: 1.5em; font: bold 110% sans-serif;}
+#slide0 h4 {margin-top: 0; font-size: 1em;}
+
+ul.urls {list-style: none; display: inline; margin: 0;}
+.urls li {display: inline; margin: 0;}
+.external {border-bottom: 1px dotted gray;}
+html>body .external {border-bottom: none;}
+.external:after {content: " \274F"; font-size: smaller; color: #FCC;}
+
+.incremental, .incremental *, .incremental *:after {
+ color: black; visibility: visible; border: 0;}
+img.incremental {visibility: hidden;}
+.slide .current {color: lime;}
+
+.slide-display {display: inline ! important;}
+
+.huge {font-family: sans-serif; font-weight: bold; font-size: 150%;}
+.big {font-family: sans-serif; font-weight: bold; font-size: 120%;}
+.small {font-size: 75%;}
+.tiny {font-size: 50%;}
+.huge tt, .big tt, .small tt, .tiny tt {font-size: 115%;}
+.huge pre, .big pre, .small pre, .tiny pre {font-size: 115%;}
+
+.maroon {color: maroon;}
+.red {color: red;}
+.magenta {color: magenta;}
+.fuchsia {color: fuchsia;}
+.pink {color: #FAA;}
+.orange {color: orange;}
+.yellow {color: yellow;}
+.lime {color: lime;}
+.green {color: green;}
+.olive {color: olive;}
+.teal {color: teal;}
+.cyan {color: cyan;}
+.aqua {color: aqua;}
+.blue {color: blue;}
+.navy {color: navy;}
+.purple {color: purple;}
+.black {color: black;}
+.gray {color: gray;}
+.silver {color: silver;}
+.white {color: white;}
+
+.left {text-align: left ! important;}
+.center {text-align: center ! important;}
+.right {text-align: right ! important;}
+
+.animation {position: relative; margin: 1em 0; padding: 0;}
+.animation img {position: absolute;}
+
+/* Docutils-specific overrides */
+
+.slide table.docinfo {margin: 0.5em 0 0.5em 1em;}
+
+div.sidebar {background-color: black;}
+
+pre.literal-block, pre.doctest-block {background-color: black;}
+
+tt.docutils {background-color: black;}
+
+/* diagnostics */
+/*
+li:after {content: " [" attr(class) "]"; color: #F88;}
+div:before {content: "[" attr(class) "]"; color: #F88;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/medium-white/framing.css b/python/helpers/docutils/writers/s5_html/themes/medium-white/framing.css
new file mode 100644
index 0000000..6c4e3ab
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/medium-white/framing.css
@@ -0,0 +1,24 @@
+/* This file has been placed in the public domain. */
+/* The following styles size, place, and layer the slide components.
+ Edit these if you want to change the overall slide layout.
+ The commented lines can be uncommented (and modified, if necessary)
+ to help you with the rearrangement process. */
+
+/* target = 1024x768 */
+
+div#header, div#footer, .slide {width: 100%; top: 0; left: 0;}
+div#footer {top: auto; bottom: 0; height: 2.5em; z-index: 5;}
+.slide {top: 0; width: 92%; padding: 0.75em 4% 0 4%; z-index: 2;}
+div#controls {left: 50%; bottom: 0; width: 50%; z-index: 100;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0;}
+#currentSlide {position: absolute; width: 10%; left: 45%; bottom: 1em;
+ z-index: 10;}
+html>body #currentSlide {position: fixed;}
+
+/*
+div#header {background: #FCC;}
+div#footer {background: #CCF;}
+div#controls {background: #BBD;}
+div#currentSlide {background: #FFC;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/medium-white/pretty.css b/python/helpers/docutils/writers/s5_html/themes/medium-white/pretty.css
new file mode 100644
index 0000000..07e07b9
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/medium-white/pretty.css
@@ -0,0 +1,113 @@
+/* This file has been placed in the public domain. */
+/* Following are the presentation styles -- edit away! */
+
+html, body {margin: 0; padding: 0;}
+body {background: white; color: black;}
+:link, :visited {text-decoration: none; color: #00C;}
+#controls :active {color: #888 !important;}
+#controls :focus {outline: 1px dotted #222;}
+h1, h2, h3, h4 {font-size: 100%; margin: 0; padding: 0; font-weight: inherit;}
+
+blockquote {padding: 0 2em 0.5em; margin: 0 1.5em 0.5em;}
+blockquote p {margin: 0;}
+
+kbd {font-weight: bold; font-size: 1em;}
+sup {font-size: smaller; line-height: 1px;}
+
+.slide pre {padding: 0; margin-left: 0; margin-right: 0; font-size: 90%;}
+.slide ul ul li {list-style: square;}
+.slide img.leader {display: block; margin: 0 auto;}
+.slide tt {font-size: 90%;}
+
+div#footer {font-family: sans-serif; color: #444;
+ font-size: 0.5em; font-weight: bold; padding: 1em 0;}
+#footer h1 {display: block; padding: 0 1em;}
+#footer h2 {display: block; padding: 0.8em 1em 0;}
+
+.slide {font-size: 1.75em;}
+.slide h1 {padding-top: 0; z-index: 1; margin: 0; font: bold 150% sans-serif;}
+.slide h2 {font: bold 125% sans-serif; padding-top: 0.5em;}
+.slide h3 {font: bold 110% sans-serif; padding-top: 0.5em;}
+h1 abbr {font-variant: small-caps;}
+
+div#controls {position: absolute; left: 50%; bottom: 0;
+ width: 50%; text-align: right; font: bold 0.9em sans-serif;}
+html>body div#controls {position: fixed; padding: 0 0 1em 0; top: auto;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0; padding: 0;}
+#controls #navLinks a {padding: 0; margin: 0 0.5em;
+ border: none; color: #888; cursor: pointer;}
+#controls #navList {height: 1em;}
+#controls #navList #jumplist {position: absolute; bottom: 0; right: 0;
+ background: #DDD; color: #222;}
+
+#currentSlide {text-align: center; font-size: 0.5em; color: #444;
+ font-family: sans-serif; font-weight: bold;}
+
+#slide0 h1 {position: static; margin: 0 0 0.5em; padding-top: 1em; top: 0;
+ font: bold 150% sans-serif; white-space: normal; background: transparent;}
+#slide0 h2 {font: bold italic 125% sans-serif; color: gray;}
+#slide0 h3 {margin-top: 1.5em; font: bold 110% sans-serif;}
+#slide0 h4 {margin-top: 0; font-size: 1em;}
+
+ul.urls {list-style: none; display: inline; margin: 0;}
+.urls li {display: inline; margin: 0;}
+.external {border-bottom: 1px dotted gray;}
+html>body .external {border-bottom: none;}
+.external:after {content: " \274F"; font-size: smaller; color: #77B;}
+
+.incremental, .incremental *, .incremental *:after {
+ color: white; visibility: visible; border: 0;}
+img.incremental {visibility: hidden;}
+.slide .current {color: green;}
+
+.slide-display {display: inline ! important;}
+
+.huge {font-family: sans-serif; font-weight: bold; font-size: 150%;}
+.big {font-family: sans-serif; font-weight: bold; font-size: 120%;}
+.small {font-size: 75%;}
+.tiny {font-size: 50%;}
+.huge tt, .big tt, .small tt, .tiny tt {font-size: 115%;}
+.huge pre, .big pre, .small pre, .tiny pre {font-size: 115%;}
+
+.maroon {color: maroon;}
+.red {color: red;}
+.magenta {color: magenta;}
+.fuchsia {color: fuchsia;}
+.pink {color: #FAA;}
+.orange {color: orange;}
+.yellow {color: yellow;}
+.lime {color: lime;}
+.green {color: green;}
+.olive {color: olive;}
+.teal {color: teal;}
+.cyan {color: cyan;}
+.aqua {color: aqua;}
+.blue {color: blue;}
+.navy {color: navy;}
+.purple {color: purple;}
+.black {color: black;}
+.gray {color: gray;}
+.silver {color: silver;}
+.white {color: white;}
+
+.left {text-align: left ! important;}
+.center {text-align: center ! important;}
+.right {text-align: right ! important;}
+
+.animation {position: relative; margin: 1em 0; padding: 0;}
+.animation img {position: absolute;}
+
+/* Docutils-specific overrides */
+
+.slide table.docinfo {margin: 0.5em 0 0.5em 1em;}
+
+pre.literal-block, pre.doctest-block {background-color: white;}
+
+tt.docutils {background-color: white;}
+
+/* diagnostics */
+/*
+li:after {content: " [" attr(class) "]"; color: #F88;}
+div:before {content: "[" attr(class) "]"; color: #F88;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/small-black/__base__ b/python/helpers/docutils/writers/s5_html/themes/small-black/__base__
new file mode 100644
index 0000000..67f4db2
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/small-black/__base__
@@ -0,0 +1,2 @@
+# base theme of this theme:
+small-white
diff --git a/python/helpers/docutils/writers/s5_html/themes/small-black/pretty.css b/python/helpers/docutils/writers/s5_html/themes/small-black/pretty.css
new file mode 100644
index 0000000..5c19327
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/small-black/pretty.css
@@ -0,0 +1,116 @@
+/* This file has been placed in the public domain. */
+/* Following are the presentation styles -- edit away! */
+
+html, body {margin: 0; padding: 0;}
+body {background: black; color: white;}
+:link, :visited {text-decoration: none; color: cyan;}
+#controls :active {color: #888 !important;}
+#controls :focus {outline: 1px dotted #CCC;}
+h1, h2, h3, h4 {font-size: 100%; margin: 0; padding: 0; font-weight: inherit;}
+
+blockquote {padding: 0 2em 0.5em; margin: 0 1.5em 0.5em;}
+blockquote p {margin: 0;}
+
+kbd {font-weight: bold; font-size: 1em;}
+sup {font-size: smaller; line-height: 1px;}
+
+.slide pre {padding: 0; margin-left: 0; margin-right: 0; font-size: 90%;}
+.slide ul ul li {list-style: square;}
+.slide img.leader {display: block; margin: 0 auto;}
+.slide tt {font-size: 90%;}
+
+div#footer {font-family: sans-serif; color: #AAA;
+ font-size: 0.5em; font-weight: bold; padding: 1em 0;}
+#footer h1 {display: block; padding: 0 1em;}
+#footer h2 {display: block; padding: 0.8em 1em 0;}
+
+.slide {font-size: 1.2em;}
+.slide h1 {padding-top: 0; z-index: 1; margin: 0; font: bold 150% sans-serif;}
+.slide h2 {font: bold 120% sans-serif; padding-top: 0.5em;}
+.slide h3 {font: bold 100% sans-serif; padding-top: 0.5em;}
+h1 abbr {font-variant: small-caps;}
+
+div#controls {position: absolute; left: 50%; bottom: 0;
+ width: 50%; text-align: right; font: bold 0.9em sans-serif;}
+html>body div#controls {position: fixed; padding: 0 0 1em 0; top: auto;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0; padding: 0;}
+#controls #navLinks a {padding: 0; margin: 0 0.5em;
+ border: none; color: #888; cursor: pointer;}
+#controls #navList {height: 1em;}
+#controls #navList #jumplist {position: absolute; bottom: 0; right: 0;
+ background: black; color: #CCC;}
+
+#currentSlide {text-align: center; font-size: 0.5em; color: #AAA;
+ font-family: sans-serif; font-weight: bold;}
+
+#slide0 {padding-top: 0em}
+#slide0 h1 {position: static; margin: 1em 0 0; padding: 0;
+ font: bold 2em sans-serif; white-space: normal; background: transparent;}
+#slide0 h2 {font: bold italic 1em sans-serif; margin: 0.25em;}
+#slide0 h3 {margin-top: 1.5em; font-size: 1.5em;}
+#slide0 h4 {margin-top: 0; font-size: 1em;}
+
+ul.urls {list-style: none; display: inline; margin: 0;}
+.urls li {display: inline; margin: 0;}
+.external {border-bottom: 1px dotted gray;}
+html>body .external {border-bottom: none;}
+.external:after {content: " \274F"; font-size: smaller; color: #FCC;}
+
+.incremental, .incremental *, .incremental *:after {
+ color: black; visibility: visible; border: 0;}
+img.incremental {visibility: hidden;}
+.slide .current {color: lime;}
+
+.slide-display {display: inline ! important;}
+
+.huge {font-family: sans-serif; font-weight: bold; font-size: 150%;}
+.big {font-family: sans-serif; font-weight: bold; font-size: 120%;}
+.small {font-size: 75%;}
+.tiny {font-size: 50%;}
+.huge tt, .big tt, .small tt, .tiny tt {font-size: 115%;}
+.huge pre, .big pre, .small pre, .tiny pre {font-size: 115%;}
+
+.maroon {color: maroon;}
+.red {color: red;}
+.magenta {color: magenta;}
+.fuchsia {color: fuchsia;}
+.pink {color: #FAA;}
+.orange {color: orange;}
+.yellow {color: yellow;}
+.lime {color: lime;}
+.green {color: green;}
+.olive {color: olive;}
+.teal {color: teal;}
+.cyan {color: cyan;}
+.aqua {color: aqua;}
+.blue {color: blue;}
+.navy {color: navy;}
+.purple {color: purple;}
+.black {color: black;}
+.gray {color: gray;}
+.silver {color: silver;}
+.white {color: white;}
+
+.left {text-align: left ! important;}
+.center {text-align: center ! important;}
+.right {text-align: right ! important;}
+
+.animation {position: relative; margin: 1em 0; padding: 0;}
+.animation img {position: absolute;}
+
+/* Docutils-specific overrides */
+
+.slide table.docinfo {margin: 1em 0 0.5em 2em;}
+
+div.sidebar {background-color: black;}
+
+pre.literal-block, pre.doctest-block {background-color: black;}
+
+tt.docutils {background-color: black;}
+
+/* diagnostics */
+/*
+li:after {content: " [" attr(class) "]"; color: #F88;}
+div:before {content: "[" attr(class) "]"; color: #F88;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/small-white/framing.css b/python/helpers/docutils/writers/s5_html/themes/small-white/framing.css
new file mode 100644
index 0000000..70287dd
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/small-white/framing.css
@@ -0,0 +1,24 @@
+/* This file has been placed in the public domain. */
+/* The following styles size, place, and layer the slide components.
+ Edit these if you want to change the overall slide layout.
+ The commented lines can be uncommented (and modified, if necessary)
+ to help you with the rearrangement process. */
+
+/* target = 1024x768 */
+
+div#header, div#footer, .slide {width: 100%; top: 0; left: 0;}
+div#footer {top: auto; bottom: 0; height: 2.5em; z-index: 5;}
+.slide {top: 0; width: 92%; padding: 1em 4% 0 4%; z-index: 2;}
+div#controls {left: 50%; bottom: 0; width: 50%; z-index: 100;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0;}
+#currentSlide {position: absolute; width: 10%; left: 45%; bottom: 1em;
+ z-index: 10;}
+html>body #currentSlide {position: fixed;}
+
+/*
+div#header {background: #FCC;}
+div#footer {background: #CCF;}
+div#controls {background: #BBD;}
+div#currentSlide {background: #FFC;}
+*/
diff --git a/python/helpers/docutils/writers/s5_html/themes/small-white/pretty.css b/python/helpers/docutils/writers/s5_html/themes/small-white/pretty.css
new file mode 100644
index 0000000..ba988e1
--- /dev/null
+++ b/python/helpers/docutils/writers/s5_html/themes/small-white/pretty.css
@@ -0,0 +1,114 @@
+/* This file has been placed in the public domain. */
+/* Following are the presentation styles -- edit away! */
+
+html, body {margin: 0; padding: 0;}
+body {background: white; color: black;}
+:link, :visited {text-decoration: none; color: #00C;}
+#controls :active {color: #888 !important;}
+#controls :focus {outline: 1px dotted #222;}
+h1, h2, h3, h4 {font-size: 100%; margin: 0; padding: 0; font-weight: inherit;}
+
+blockquote {padding: 0 2em 0.5em; margin: 0 1.5em 0.5em;}
+blockquote p {margin: 0;}
+
+kbd {font-weight: bold; font-size: 1em;}
+sup {font-size: smaller; line-height: 1px;}
+
+.slide pre {padding: 0; margin-left: 0; margin-right: 0; font-size: 90%;}
+.slide ul ul li {list-style: square;}
+.slide img.leader {display: block; margin: 0 auto;}
+.slide tt {font-size: 90%;}
+
+div#footer {font-family: sans-serif; color: #444;
+ font-size: 0.5em; font-weight: bold; padding: 1em 0;}
+#footer h1 {display: block; padding: 0 1em;}
+#footer h2 {display: block; padding: 0.8em 1em 0;}
+
+.slide {font-size: 1.2em;}
+.slide h1 {padding-top: 0; z-index: 1; margin: 0; font: bold 150% sans-serif;}
+.slide h2 {font: bold 120% sans-serif; padding-top: 0.5em;}
+.slide h3 {font: bold 100% sans-serif; padding-top: 0.5em;}
+h1 abbr {font-variant: small-caps;}
+
+div#controls {position: absolute; left: 50%; bottom: 0;
+ width: 50%; text-align: right; font: bold 0.9em sans-serif;}
+html>body div#controls {position: fixed; padding: 0 0 1em 0; top: auto;}
+div#controls form {position: absolute; bottom: 0; right: 0; width: 100%;
+ margin: 0; padding: 0;}
+#controls #navLinks a {padding: 0; margin: 0 0.5em;
+ border: none; color: #888; cursor: pointer;}
+#controls #navList {height: 1em;}
+#controls #navList #jumplist {position: absolute; bottom: 0; right: 0;
+ background: #DDD; color: #222;}
+
+#currentSlide {text-align: center; font-size: 0.5em; color: #444;
+ font-family: sans-serif; font-weight: bold;}
+
+#slide0 {padding-top: 0em}
+#slide0 h1 {position: static; margin: 1em 0 0; padding: 0;
+ font: bold 2em sans-serif; white-space: normal; background: transparent;}
+#slide0 h2 {font: bold italic 1em sans-serif; margin: 0.25em;}
+#slide0 h3 {margin-top: 1.5em; font-size: 1.5em;}
+#slide0 h4 {margin-top: 0; font-size: 1em;}
+
+ul.urls {list-style: none; display: inline; margin: 0;}
+.urls li {display: inline; margin: 0;}
+.external {border-bottom: 1px dotted gray;}
+html>body .external {border-bottom: none;}
+.external:after {content: " \274F"; font-size: smaller; color: #77B;}
+
+.incremental, .incremental *, .incremental *:after {
+ color: white; visibility: visible; border: 0;}
+img.incremental {visibility: hidden;}
+.slide .current {color: green;}
+
+.slide-display {display: inline ! important;}
+
+.huge {font-family: sans-serif; font-weight: bold; font-size: 150%;}
+.big {font-family: sans-serif; font-weight: bold; font-size: 120%;}
+.small {font-size: 75%;}
+.tiny {font-size: 50%;}
+.huge tt, .big tt, .small tt, .tiny tt {font-size: 115%;}
+.huge pre, .big pre, .small pre, .tiny pre {font-size: 115%;}
+
+.maroon {color: maroon;}
+.red {color: red;}
+.magenta {color: magenta;}
+.fuchsia {color: fuchsia;}
+.pink {color: #FAA;}
+.orange {color: orange;}
+.yellow {color: yellow;}
+.lime {color: lime;}
+.green {color: green;}
+.olive {color: olive;}
+.teal {color: teal;}
+.cyan {color: cyan;}
+.aqua {color: aqua;}
+.blue {color: blue;}
+.navy {color: navy;}
+.purple {color: purple;}
+.black {color: black;}
+.gray {color: gray;}
+.silver {color: silver;}
+.white {color: white;}
+
+.left {text-align: left ! important;}
+.center {text-align: center ! important;}
+.right {text-align: right ! important;}
+
+.animation {position: relative; margin: 1em 0; padding: 0;}
+.animation img {position: absolute;}
+
+/* Docutils-specific overrides */
+
+.slide table.docinfo {margin: 1em 0 0.5em 2em;}
+
+pre.literal-block, pre.doctest-block {background-color: white;}
+
+tt.docutils {background-color: white;}
+
+/* diagnostics */
+/*
+li:after {content: " [" attr(class) "]"; color: #F88;}
+div:before {content: "[" attr(class) "]"; color: #F88;}
+*/
diff --git a/python/helpers/epydoc/__init__.py b/python/helpers/epydoc/__init__.py
new file mode 100644
index 0000000..212054e
--- /dev/null
+++ b/python/helpers/epydoc/__init__.py
@@ -0,0 +1,227 @@
+# epydoc
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: __init__.py 1691 2008-01-30 17:11:09Z edloper $
+
+"""
+Automatic Python reference documentation generator. Epydoc processes
+Python modules and docstrings to generate formatted API documentation,
+in the form of HTML pages. Epydoc can be used via a command-line
+interface (`epydoc.cli`) and a graphical interface (`epydoc.gui`).
+Both interfaces let the user specify a set of modules or other objects
+to document, and produce API documentation using the following steps:
+
+1. Extract basic information about the specified objects, and objects
+ that are related to them (such as the values defined by a module).
+ This can be done via introspection, parsing, or both:
+
+ * *Introspection* imports the objects, and examines them directly
+ using Python's introspection mechanisms.
+
+ * *Parsing* reads the Python source files that define the objects,
+ and extracts information from those files.
+
+2. Combine and process that information.
+
+ * **Merging**: Merge the information obtained from introspection &
+ parsing each object into a single structure.
+
+ * **Linking**: Replace any \"pointers\" that were created for
+ imported variables with the documentation that they point to.
+
+ * **Naming**: Assign unique *canonical names* to each of the
+ specified objects, and any related objects.
+
+ * **Docstrings**: Parse the docstrings of each of the specified
+ objects.
+
+ * **Inheritance**: Add variables to classes for any values that
+ they inherit from their base classes.
+
+3. Generate output. Output can be generated in a variety of formats:
+
+ * An HTML webpage.
+
+ * A LaTeX document (which can be rendered as a PDF file)
+
+ * A plaintext description.
+
+.. digraph:: Overview of epydoc's architecture
+ :caption: The boxes represent steps in epydoc's processing chain.
+ Arrows are annotated with the data classes used to
+ communicate between steps. The lines along the right
+ side mark what portions of the processing chain are
+ initiated by build_doc_index() and cli(). Click on
+ any item to see its documentation.
+
+ /*
+ Python module or value * *
+ / \ | |
+ V V | |
+ introspect_docs() parse_docs() | |
+ \ / | |
+ V V | |
+ merge_docs() | |
+ | build_doc_index() cli()
+ V | |
+ link_imports() | |
+ | | |
+ V | |
+ assign_canonical_names() | |
+ | | |
+ V | |
+ parse_docstrings() | |
+ | | |
+ V | |
+ inherit_docs() * |
+ / | \ |
+ V V V |
+ HTMLWriter LaTeXWriter PlaintextWriter *
+ */
+
+ ranksep = 0.1;
+ node [shape="box", height="0", width="0"]
+
+ { /* Task nodes */
+ node [fontcolor=\"#000060\"]
+ introspect [label="Introspect value:\\nintrospect_docs()",
+ href="<docintrospecter.introspect_docs>"]
+ parse [label="Parse source code:\\nparse_docs()",
+ href="<docparser.parse_docs>"]
+ merge [label="Merge introspected & parsed docs:\\nmerge_docs()",
+ href="<docbuilder.merge_docs>", width="2.5"]
+ link [label="Link imports:\\nlink_imports()",
+ href="<docbuilder.link_imports>", width="2.5"]
+ name [label="Assign names:\\nassign_canonical_names()",
+ href="<docbuilder.assign_canonical_names>", width="2.5"]
+ docstrings [label="Parse docstrings:\\nparse_docstring()",
+ href="<docstringparser.parse_docstring>", width="2.5"]
+ inheritance [label="Inherit docs from bases:\\ninherit_docs()",
+ href="<docbuilder.inherit_docs>", width="2.5"]
+ write_html [label="Write HTML output:\\nHTMLWriter",
+ href="<docwriter.html>"]
+ write_latex [label="Write LaTeX output:\\nLaTeXWriter",
+ href="<docwriter.latex>"]
+ write_text [label="Write text output:\\nPlaintextWriter",
+ href="<docwriter.plaintext>"]
+ }
+
+ { /* Input & Output nodes */
+ node [fontcolor=\"#602000\", shape="plaintext"]
+ input [label="Python module or value"]
+ output [label="DocIndex", href="<apidoc.DocIndex>"]
+ }
+
+ { /* Graph edges */
+ edge [fontcolor=\"#602000\"]
+ input -> introspect
+ introspect -> merge [label="APIDoc", href="<apidoc.APIDoc>"]
+ input -> parse
+ parse -> merge [label="APIDoc", href="<apidoc.APIDoc>"]
+ merge -> link [label=" DocIndex", href="<apidoc.DocIndex>"]
+ link -> name [label=" DocIndex", href="<apidoc.DocIndex>"]
+ name -> docstrings [label=" DocIndex", href="<apidoc.DocIndex>"]
+ docstrings -> inheritance [label=" DocIndex", href="<apidoc.DocIndex>"]
+ inheritance -> output
+ output -> write_html
+ output -> write_latex
+ output -> write_text
+ }
+
+ { /* Task collections */
+ node [shape="circle",label="",width=.1,height=.1]
+ edge [fontcolor="black", dir="none", fontcolor=\"#000060\"]
+ l3 -> l4 [label=" epydoc.\\l docbuilder.\\l build_doc_index()",
+ href="<docbuilder.build_doc_index>"]
+ l1 -> l2 [label=" epydoc.\\l cli()", href="<cli>"]
+ }
+ { rank=same; l1 l3 input }
+ { rank=same; l2 write_html }
+ { rank=same; l4 output }
+
+Package Organization
+====================
+The epydoc package contains the following subpackages and modules:
+
+.. packagetree::
+ :style: UML
+
+The user interfaces are provided by the `gui` and `cli` modules.
+The `apidoc` module defines the basic data types used to record
+information about Python objects. The programmatic interface to
+epydoc is provided by `docbuilder`. Docstring markup parsing is
+handled by the `markup` package, and output generation is handled by
+the `docwriter` package. See the submodule list for more
+information about the submodules and subpackages.
+
+:group User Interface: gui, cli
+:group Basic Data Types: apidoc
+:group Documentation Generation: docbuilder, docintrospecter, docparser
+:group Docstring Processing: docstringparser, markup
+:group Output Generation: docwriter
+:group Completeness Checking: checker
+:group Miscellaneous: log, util, test, compat
+
+:author: `Edward Loper <[email protected]>`__
+:requires: Python 2.3+
+:version: 3.0.1
+:see: `The epydoc webpage <http://epydoc.sourceforge.net>`__
+:see: `The epytext markup language
+ manual <http://epydoc.sourceforge.net/epytext.html>`__
+
+:todo: Create a better default top_page than trees.html.
+:todo: Fix trees.html to work when documenting non-top-level
+ modules/packages
+:todo: Implement @include
+:todo: Optimize epytext
+:todo: More doctests
+:todo: When introspecting, limit how much introspection you do (eg,
+ don't construct docs for imported modules' vars if it's
+ not necessary)
+
+:bug: UserDict.* is interpreted as imported .. why??
+
+:license: IBM Open Source License
+:copyright: |copy| 2006 Edward Loper
+
+:newfield contributor: Contributor, Contributors (Alphabetical Order)
+:contributor: `Glyph Lefkowitz <mailto:[email protected]>`__
+:contributor: `Edward Loper <mailto:[email protected]>`__
+:contributor: `Bruce Mitchener <mailto:[email protected]>`__
+:contributor: `Jeff O'Halloran <mailto:[email protected]>`__
+:contributor: `Simon Pamies <mailto:[email protected]>`__
+:contributor: `Christian Reis <mailto:[email protected]>`__
+:contributor: `Daniele Varrazzo <mailto:[email protected]>`__
+
+.. |copy| unicode:: 0xA9 .. copyright sign
+"""
+__docformat__ = 'restructuredtext en'
+
+__version__ = '3.0.1'
+"""The version of epydoc"""
+
+__author__ = 'Edward Loper <[email protected]>'
+"""The primary author of epydoc"""
+
+__url__ = 'http://epydoc.sourceforge.net'
+"""The URL for epydoc's homepage"""
+
+__license__ = 'IBM Open Source License'
+"""The license governing the use and distribution of epydoc"""
+
+# [xx] this should probably be a private variable:
+DEBUG = False
+"""True if debugging is turned on."""
+
+# Changes needed for docs:
+# - document the method for deciding what's public/private
+# - epytext: fields are defined slightly differently (@group)
+# - new fields
+# - document __extra_epydoc_fields__ and @newfield
+# - Add a faq?
+# - @type a,b,c: ...
+# - new command line option: --command-line-order
+
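The "Merging" step described in the module docstring above can be sketched in a few lines of Python 3. This is an illustrative stand-in, not epydoc's real API: `merge_facts` and the dict-based representation are hypothetical, and the `UNKNOWN` object merely mimics the sentinel defined in `epydoc.apidoc`.

```python
# Illustrative sketch of epydoc's "merge" step: facts about one object are
# gathered twice (by introspection and by parsing) and combined, with any
# known value winning over the UNKNOWN placeholder. Names are hypothetical.

UNKNOWN = object()  # stand-in for epydoc.apidoc.UNKNOWN

def merge_facts(introspected, parsed):
    """Merge two attribute dicts; a known value beats UNKNOWN."""
    merged = {}
    for key in set(introspected) | set(parsed):
        value = introspected.get(key, UNKNOWN)
        if value is UNKNOWN:
            value = parsed.get(key, UNKNOWN)
        merged[key] = value
    return merged
```

In the real pipeline this merging happens per-`APIDoc` inside `docbuilder.merge_docs`, but the preference rule is the same.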
diff --git a/python/helpers/epydoc/apidoc.py b/python/helpers/epydoc/apidoc.py
new file mode 100644
index 0000000..7eac120
--- /dev/null
+++ b/python/helpers/epydoc/apidoc.py
@@ -0,0 +1,2203 @@
+# epydoc -- API Documentation Classes
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: apidoc.py 1675 2008-01-29 17:12:56Z edloper $
+
+"""
+Classes for encoding API documentation about Python programs.
+These classes are used as a common representation for combining
+information derived from introspection and from parsing.
+
+The API documentation for a Python program is encoded using a graph of
+L{APIDoc} objects, each of which encodes information about a single
+Python variable or value. C{APIDoc} has two direct subclasses:
+L{VariableDoc}, for documenting variables; and L{ValueDoc}, for
+documenting values. The C{ValueDoc} class is subclassed further, to
+define the different pieces of information that should be recorded
+about each value type:
+
+G{classtree: APIDoc}
+
+The distinction between variables and values is intentionally made
+explicit. This allows us to distinguish information about a variable
+itself (such as whether it should be considered 'public' in its
+containing namespace) from information about the value it contains
+(such as what type the value has). This distinction is also important
+because several variables can contain the same value: each variable
+should be described by a separate C{VariableDoc}; but we only need one
+C{ValueDoc}, since they share a single value.
+
+@todo: Add a cache to canonical name lookup?
+"""
+__docformat__ = 'epytext en'
+
+######################################################################
+## Imports
+######################################################################
+
+import types, re, os.path, pickle
+from epydoc import log
+import epydoc
+import __builtin__
+from epydoc.compat import * # Backwards compatibility
+from epydoc.util import decode_with_backslashreplace, py_src_filename
+import epydoc.markup.pyval_repr
+
+######################################################################
+# Dotted Names
+######################################################################
+
+class DottedName:
+ """
+ A sequence of identifiers, separated by periods, used to name a
+ Python variable, value, or argument. The identifiers that make up
+ a dotted name can be accessed using the indexing operator:
+
+ >>> name = DottedName('epydoc', 'api_doc', 'DottedName')
+ >>> print name
+ epydoc.api_doc.DottedName
+ >>> name[1]
+ 'api_doc'
+ """
+ UNREACHABLE = "??"
+ _IDENTIFIER_RE = re.compile("""(?x)
+ (%s | # UNREACHABLE marker, or..
+ (script-)? # Prefix: script (not a module)
+ \w+ # Identifier (yes, identifiers starting with a
+ # digit are allowed. See SF bug #1649347)
+ '?) # Suffix: submodule that is shadowed by a var
+ (-\d+)? # Suffix: unreachable vals with the same name
+ $"""
+ % re.escape(UNREACHABLE))
+
+ class InvalidDottedName(ValueError):
+ """
+ An exception raised by the DottedName constructor when one of
+ its arguments is not a valid dotted name.
+ """
+
+ _ok_identifiers = set()
+ """A cache of identifier strings that have been checked against
+ _IDENTIFIER_RE and found to be acceptable."""
+
+ def __init__(self, *pieces, **options):
+ """
+ Construct a new dotted name from the given sequence of pieces,
+ each of which can be either a C{string} or a C{DottedName}.
+ Each piece is divided into a sequence of identifiers, and
+ these sequences are combined together (in order) to form the
+ identifier sequence for the new C{DottedName}. If a piece
+ contains a string, then it is divided into substrings by
+ splitting on periods, and each substring is checked to see if
+ it is a valid identifier.
+
+ As an optimization, C{pieces} may also contain a single tuple
+ of values. In that case, that tuple will be used as the
+ C{DottedName}'s identifiers; it will I{not} be checked to
+ see if it's valid.
+
+ @kwparam strict: if true, then raise an L{InvalidDottedName}
+ if the given name is invalid.
+ """
+ if len(pieces) == 1 and isinstance(pieces[0], tuple):
+ self._identifiers = pieces[0] # Optimization
+ return
+ if len(pieces) == 0:
+ raise DottedName.InvalidDottedName('Empty DottedName')
+ self._identifiers = []
+ for piece in pieces:
+ if isinstance(piece, DottedName):
+ self._identifiers += piece._identifiers
+ elif isinstance(piece, basestring):
+ for subpiece in piece.split('.'):
+ if piece not in self._ok_identifiers:
+ if not self._IDENTIFIER_RE.match(subpiece):
+ if options.get('strict'):
+ raise DottedName.InvalidDottedName(
+ 'Bad identifier %r' % (piece,))
+ else:
+ log.warning("Identifier %r looks suspicious; "
+ "using it anyway." % piece)
+ self._ok_identifiers.add(piece)
+ self._identifiers.append(subpiece)
+ else:
+ raise TypeError('Bad identifier %r: expected '
+ 'DottedName or str' % (piece,))
+ self._identifiers = tuple(self._identifiers)
+
+ def __repr__(self):
+ idents = [`ident` for ident in self._identifiers]
+ return 'DottedName(' + ', '.join(idents) + ')'
+
+ def __str__(self):
+ """
+ Return the dotted name as a string formed by joining its
+ identifiers with periods:
+
+ >>> print DottedName('epydoc', 'api_doc', 'DottedName')
+ epydoc.api_doc.DottedName
+ """
+ return '.'.join(self._identifiers)
+
+ def __add__(self, other):
+ """
+ Return a new C{DottedName} whose identifier sequence is formed
+ by adding C{other}'s identifier sequence to C{self}'s.
+ """
+ if isinstance(other, (basestring, DottedName)):
+ return DottedName(self, other)
+ else:
+ return DottedName(self, *other)
+
+ def __radd__(self, other):
+ """
+ Return a new C{DottedName} whose identifier sequence is formed
+ by adding C{self}'s identifier sequence to C{other}'s.
+ """
+ if isinstance(other, (basestring, DottedName)):
+ return DottedName(other, self)
+ else:
+ return DottedName(*(list(other)+[self]))
+
+ def __getitem__(self, i):
+ """
+ Return the C{i}th identifier in this C{DottedName}. If C{i} is
+ a non-empty slice, then return a C{DottedName} built from the
+ identifiers selected by the slice. If C{i} is an empty slice,
+ return an empty list (since empty C{DottedName}s are not valid).
+ """
+ if isinstance(i, types.SliceType):
+ pieces = self._identifiers[i.start:i.stop]
+ if pieces: return DottedName(pieces)
+ else: return []
+ else:
+ return self._identifiers[i]
+
+ def __hash__(self):
+ return hash(self._identifiers)
+
+ def __cmp__(self, other):
+ """
+ Compare this dotted name to C{other}. Two dotted names are
+ considered equal if their identifier subsequences are equal.
+ Ordering between dotted names is lexicographic, in order of
+ identifier from left to right.
+ """
+ if not isinstance(other, DottedName):
+ return -1
+ return cmp(self._identifiers, other._identifiers)
+
+ def __len__(self):
+ """
+ Return the number of identifiers in this dotted name.
+ """
+ return len(self._identifiers)
+
+ def container(self):
+ """
+ Return the DottedName formed by removing the last identifier
+ from this dotted name's identifier sequence. If this dotted
+ name only has one name in its identifier sequence, return
+ C{None} instead.
+ """
+ if len(self._identifiers) == 1:
+ return None
+ else:
+ return DottedName(*self._identifiers[:-1])
+
+ def dominates(self, name, strict=False):
+ """
+ Return true if this dotted name is equal to a prefix of
+ C{name}. If C{strict} is true, then also require that
+ C{self!=name}.
+
+ >>> DottedName('a.b').dominates(DottedName('a.b.c.d'))
+ True
+ """
+ len_self = len(self._identifiers)
+ len_name = len(name._identifiers)
+
+ if (len_self > len_name) or (strict and len_self == len_name):
+ return False
+ # The following is redundant (the first clause is implied by
+ # the second), but is done as an optimization.
+ return ((self._identifiers[0] == name._identifiers[0]) and
+ self._identifiers == name._identifiers[:len_self])
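The prefix relation that `dominates` implements can be restated on plain dotted strings. The free function below is a hedged sketch for illustration only (it is not part of epydoc), but it mirrors the method's semantics, including the `strict` flag:

```python
# "Prefix domination" as described above: name A dominates name B when
# A's identifier sequence is a prefix of B's; with strict=True, A must
# additionally be a proper prefix (A != B).

def dominates(a, b, strict=False):
    a_ids, b_ids = tuple(a.split('.')), tuple(b.split('.'))
    if len(a_ids) > len(b_ids) or (strict and len(a_ids) == len(b_ids)):
        return False
    return a_ids == b_ids[:len(a_ids)]
```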
+
+ def contextualize(self, context):
+ """
+ If C{self} and C{context} share a common ancestor, then return
+ a name for C{self}, relative to that ancestor. If they do not
+ share a common ancestor (or if C{context} is C{UNKNOWN}), then
+ simply return C{self}.
+
+ This is used to generate shorter versions of dotted names in
+ cases where users can infer the intended target from the
+ context.
+
+ @type context: L{DottedName}
+ @rtype: L{DottedName}
+ """
+ if context is UNKNOWN or not context or len(self) <= 1:
+ return self
+ if self[0] == context[0]:
+ return self[1:].contextualize(context[1:])
+ else:
+ return self
+
+ # Find the first index where self & context differ.
+ for i in range(min(len(context), len(self))):
+ if self._identifiers[i] != context._identifiers[i]:
+ first_difference = i
+ break
+ else:
+ first_difference = i+1
+
+ # Strip off anything before that index.
+ if first_difference == 0:
+ return self
+ elif first_difference == len(self):
+ return self[-1:]
+ else:
+ return self[first_difference:]
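The common-prefix stripping performed by `contextualize` can be shown compactly. The `Dotted` class below is a toy written for this illustration (it drops the validation, caching, and `UNKNOWN` handling of the real `DottedName`); it keeps only the behaviour discussed: shared leading identifiers are removed, but at least the final identifier always survives.

```python
# Minimal illustration of contextualize(): strip the identifiers shared
# with the context, but always keep at least one identifier of our own.
# Dotted is a toy stand-in, not epydoc's DottedName.

class Dotted:
    def __init__(self, name):
        self.ids = tuple(name.split('.'))

    def __str__(self):
        return '.'.join(self.ids)

    def contextualize(self, context):
        i = 0
        # Advance past shared leading identifiers, keeping >= 1 of ours.
        while (i < len(self.ids) - 1 and i < len(context.ids)
               and self.ids[i] == context.ids[i]):
            i += 1
        return Dotted('.'.join(self.ids[i:])) if i else self
```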
+
+######################################################################
+# UNKNOWN Value
+######################################################################
+
+class _Sentinel:
+ """
+ A unique value that won't compare equal to any other value. This
+ class is used to create L{UNKNOWN}.
+ """
+ def __init__(self, name):
+ self.name = name
+ def __repr__(self):
+ return '<%s>' % self.name
+ def __nonzero__(self):
+ raise ValueError('Sentinel value <%s> can not be used as a boolean' %
+ self.name)
+
+UNKNOWN = _Sentinel('UNKNOWN')
+"""A special value used to indicate that a given piece of
+information about an object is unknown. This is used as the
+default value for all instance variables."""
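The same sentinel idiom translates directly to Python 3, where `__nonzero__` became `__bool__`. This is a hedged restatement for illustration, not a drop-in replacement for the class above:

```python
# Python 3 version of the _Sentinel idiom: a unique marker object whose
# truth value is deliberately an error, forcing callers to test
# `value is UNKNOWN` explicitly instead of relying on truthiness.

class Sentinel:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return '<%s>' % self.name

    def __bool__(self):  # __nonzero__ in the Python 2 original
        raise ValueError('Sentinel value %r can not be used as a boolean'
                         % self.name)

UNKNOWN = Sentinel('UNKNOWN')
```

Because `bool(UNKNOWN)` raises, code such as `if api_doc.docstring:` fails fast when the attribute was never filled in, which is exactly the safety property the original class is after.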
+
+######################################################################
+# API Documentation Objects: Abstract Base Classes
+######################################################################
+
+class APIDoc(object):
+ """
+ API documentation information for a single element of a Python
+ program. C{APIDoc} itself is an abstract base class; subclasses
+ are used to specify what information should be recorded about each
+ type of program element. In particular, C{APIDoc} has two direct
+ subclasses, C{VariableDoc} for documenting variables and
+ C{ValueDoc} for documenting values; and the C{ValueDoc} class is
+ subclassed further for different value types.
+
+ Each C{APIDoc} subclass specifies the set of attributes that
+ should be used to record information about the corresponding
+ program element type. The default value for each attribute is
+ stored in the class; these default values can then be overridden
+ with instance variables. Most attributes use the special value
+ L{UNKNOWN} as their default value, to indicate that the correct
+ value for that attribute has not yet been determined. This makes
+ it easier to merge two C{APIDoc} objects that are documenting the
+ same element (in particular, to merge information about an element
+ that was derived from parsing with information that was derived
+ from introspection).
+
+ For all attributes with boolean values, use only the constants
+ C{True} and C{False} to designate true and false. In particular,
+ do I{not} use other values that evaluate as true or false, such as
+ C{2} or C{()}. This restriction makes it easier to handle
+ C{UNKNOWN} values. For example, to test if a boolean attribute is
+ C{True} or C{UNKNOWN}, use 'C{attrib in (True, UNKNOWN)}' or
+ 'C{attrib is not False}'.
+
+ Two C{APIDoc} objects describing the same object can be X{merged},
+ using the method L{merge_and_overwrite(other)}. After two
+ C{APIDoc}s are merged, any changes to one will be reflected in the
+ other. This is accomplished by setting the two C{APIDoc} objects
+ to use a shared instance dictionary. See the documentation for
+ L{merge_and_overwrite} for more information, and some important
+ caveats about hashing.
+ """
+ #{ Docstrings
+ docstring = UNKNOWN
+ """@ivar: The documented item's docstring.
+ @type: C{string} or C{None}"""
+
+ docstring_lineno = UNKNOWN
+ """@ivar: The line number on which the documented item's docstring
+ begins.
+ @type: C{int}"""
+ #} end of "docstrings" group
+
+ #{ Information Extracted from Docstrings
+ descr = UNKNOWN
+ """@ivar: A description of the documented item, extracted from its
+ docstring.
+ @type: L{ParsedDocstring<epydoc.markup.ParsedDocstring>}"""
+
+ summary = UNKNOWN
+ """@ivar: A summary description of the documented item, extracted from
+ its docstring.
+ @type: L{ParsedDocstring<epydoc.markup.ParsedDocstring>}"""
+
+ other_docs = UNKNOWN
+ """@ivar: A flag indicating if the entire L{docstring} body (except tags
+ if any) is entirely included in the L{summary}.
+ @type: C{bool}"""
+
+ metadata = UNKNOWN
+ """@ivar: Metadata about the documented item, extracted from fields in
+ its docstring. I{Currently} this is encoded as a list of tuples
+ C{(field, arg, descr)}. But that may change.
+ @type: C{(str, str, L{ParsedDocstring<markup.ParsedDocstring>})}"""
+
+ extra_docstring_fields = UNKNOWN
+ """@ivar: A list of new docstring fields tags that are defined by the
+ documented item's docstring. These new field tags can be used by
+ this item or by any item it contains.
+ @type: L{DocstringField <epydoc.docstringparser.DocstringField>}"""
+ #} end of "information extracted from docstrings" group
+
+ #{ Source Information
+ docs_extracted_by = UNKNOWN # 'parser' or 'introspecter' or 'both'
+ """@ivar: Information about where the information contained by this
+ C{APIDoc} came from. Can be one of C{'parser'},
+ C{'introspecter'}, or C{'both'}.
+ @type: C{str}"""
+ #} end of "source information" group
+
+ def __init__(self, **kwargs):
+ """
+ Construct a new C{APIDoc} object. Keyword arguments may be
+ used to initialize the new C{APIDoc}'s attributes.
+
+ @raise TypeError: If a keyword argument is specified that does
+ not correspond to a valid attribute for this (sub)class of
+ C{APIDoc}.
+ """
+ if epydoc.DEBUG:
+ for key in kwargs:
+ if key[0] != '_' and not hasattr(self.__class__, key):
+ raise TypeError('%s got unexpected arg %r' %
+ (self.__class__.__name__, key))
+ self.__dict__.update(kwargs)
+
+ def _debug_setattr(self, attr, val):
+ """
+ Modify an C{APIDoc}'s attribute. This is used when
+ L{epydoc.DEBUG} is true, to make sure we don't accidentally
+ set any inappropriate attributes on C{APIDoc} objects.
+
+ @raise AttributeError: If C{attr} is not a valid attribute for
+ this (sub)class of C{APIDoc}. (C{attr} is considered a
+ valid attribute iff C{self.__class__} defines an attribute
+ with that name.)
+ """
+ # Don't intercept special assignments like __class__, or
+ # assignments to private variables.
+ if attr.startswith('_'):
+ return object.__setattr__(self, attr, val)
+ if not hasattr(self, attr):
+ raise AttributeError('%s does not define attribute %r' %
+ (self.__class__.__name__, attr))
+ self.__dict__[attr] = val
+
+ if epydoc.DEBUG:
+ __setattr__ = _debug_setattr
+
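The debug-time attribute guard above can be reproduced on a toy class. `Strict` and its attributes are invented for this demonstration; the idea matches `_debug_setattr`: any write to a name the class does not declare is rejected, so typos fail loudly instead of silently creating new attributes.

```python
# Sketch of the _debug_setattr guard: assignments to attribute names the
# class does not declare raise AttributeError, catching typos like
# `doc.sumary = ...` at the point of assignment.

class Strict:
    docstring = None
    summary = None

    def __setattr__(self, attr, val):
        # Allow private names; reject anything the class doesn't declare.
        if not attr.startswith('_') and not hasattr(type(self), attr):
            raise AttributeError('%s does not define attribute %r'
                                 % (type(self).__name__, attr))
        object.__setattr__(self, attr, val)
```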
+ def __repr__(self):
+ return '<%s>' % self.__class__.__name__
+
+ def pp(self, doublespace=0, depth=5, exclude=(), include=()):
+ """
+ Return a pretty-printed string representation for the
+ information contained in this C{APIDoc}.
+ """
+ return pp_apidoc(self, doublespace, depth, exclude, include)
+ __str__ = pp
+
+ def specialize_to(self, cls):
+ """
+ Change C{self}'s class to C{cls}. C{cls} must be a subclass
+ of C{self}'s current class. For example, if a generic
+ C{ValueDoc} was created for a value, and it is determined that
+ the value is a routine, you can update its class with:
+
+ >>> valdoc.specialize_to(RoutineDoc)
+ """
+ if not issubclass(cls, self.__class__):
+ raise ValueError('Can not specialize to %r' % cls)
+ # Update the class.
+ self.__class__ = cls
+ # Update the class of any other apidoc's in the mergeset.
+ if self.__mergeset is not None:
+ for apidoc in self.__mergeset:
+ apidoc.__class__ = cls
+ # Re-initialize self, in case the subclass constructor does
+ # any special processing on its arguments.
+ self.__init__(**self.__dict__)
+
+ __has_been_hashed = False
+ """True iff L{self.__hash__()} has ever been called."""
+
+ def __hash__(self):
+ self.__has_been_hashed = True
+ return id(self.__dict__)
+
+ def __cmp__(self, other):
+ if not isinstance(other, APIDoc): return -1
+ if self.__dict__ is other.__dict__: return 0
+ name_cmp = cmp(self.canonical_name, other.canonical_name)
+ if name_cmp == 0: return -1
+ else: return name_cmp
+
+ def is_detailed(self):
+ """
+ Does this object deserve a box with extra details?
+
+ @return: True if the object needs extra details, else False.
+ @rtype: C{bool}
+ """
+ if self.other_docs is True:
+ return True
+
+ if self.metadata is not UNKNOWN:
+ return bool(self.metadata)
+
+ __mergeset = None
+ """The set of all C{APIDoc} objects that have been merged with
+ this C{APIDoc} (using L{merge_and_overwrite()}). Each C{APIDoc}
+ in this set shares a common instance dictionary (C{__dict__})."""
+
+ def merge_and_overwrite(self, other, ignore_hash_conflict=False):
+ """
+ Combine C{self} and C{other} into a X{merged object}, such
+ that any changes made to one will affect the other. Any
+ attributes that C{other} had before merging will be discarded.
+ This is accomplished by copying C{self.__dict__} over
+ C{other.__dict__} and C{self.__class__} over C{other.__class__}.
+
+ Care must be taken with this method, since it modifies the
+ hash value of C{other}. To help avoid the problems that this
+ can cause, C{merge_and_overwrite} will raise an exception if
+ C{other} has ever been hashed, unless C{ignore_hash_conflict}
+ is True. Note that adding C{other} to a dictionary, set, or
+ similar data structure will implicitly cause it to be hashed.
+ If you do set C{ignore_hash_conflict} to True, then any
+ existing data structures that rely on C{other}'s hash staying
+ constant may become corrupted.
+
+ @return: C{self}
+ @raise ValueError: If C{other} has ever been hashed.
+ """
+ # If we're already merged, then there's nothing to do.
+ if (self.__dict__ is other.__dict__ and
+ self.__class__ is other.__class__): return self
+
+ if other.__has_been_hashed and not ignore_hash_conflict:
+ raise ValueError("%r has already been hashed! Merging it "
+ "would cause its hash value to change." % other)
+
+ # If other was itself already merged with anything,
+ # then we need to merge those too.
+ a,b = (self.__mergeset, other.__mergeset)
+ mergeset = (self.__mergeset or [self]) + (other.__mergeset or [other])
+ other.__dict__.clear()
+ for apidoc in mergeset:
+ #if apidoc is self: pass
+ apidoc.__class__ = self.__class__
+ apidoc.__dict__ = self.__dict__
+ self.__mergeset = mergeset
+ # Sanity checks.
+ assert self in mergeset and other in mergeset
+ for apidoc in mergeset:
+ assert apidoc.__dict__ is self.__dict__
+ # Return self.
+ return self
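The shared-dictionary trick that `merge_and_overwrite` relies on is easy to see on a toy class. The `Doc` class below is illustrative only and omits the mergeset bookkeeping and hash-safety checks of the real method:

```python
# Demonstration of the shared __dict__ mechanism behind merge_and_overwrite:
# after merging, both objects read and write the very same attribute
# dictionary, so a change made through either is visible through both.

class Doc:
    def merge_and_overwrite(self, other):
        # other's old attributes are discarded, as in the real method.
        other.__dict__ = self.__dict__
        other.__class__ = self.__class__
        return self

a, b = Doc(), Doc()
a.summary = 'from parsing'
b.summary = 'discarded'
a.merge_and_overwrite(b)
b.summary = 'from introspection'  # visible through a as well
```

This also makes the hash caveat concrete: `b`'s identity-based hash would have come from its old `__dict__`, which the merge replaces.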
+
+ def apidoc_links(self, **filters):
+ """
+ Return a list of all C{APIDoc}s that are directly linked from
+ this C{APIDoc} (i.e., are contained or pointed to by one or
+ more of this C{APIDoc}'s attributes.)
+
+ Keyword argument C{filters} can be used to selectively exclude
+ certain categories of attribute value. For example, using
+ C{includes=False} will exclude variables that were imported
+ from other modules; and C{subclasses=False} will exclude
+ subclasses. The filter categories currently supported by
+ epydoc are:
+ - C{imports}: Imported variables.
+ - C{packages}: Containing packages for modules.
+ - C{submodules}: Contained submodules for packages.
+ - C{bases}: Bases for classes.
+ - C{subclasses}: Subclasses for classes.
+ - C{variables}: All variables.
+ - C{private}: Private variables.
+ - C{overrides}: Points from class variables to the variables
+ they override. This filter is False by default.
+ """
+ return []
+
+def reachable_valdocs(root, **filters):
+ """
+ Return a list of all C{ValueDoc}s that can be reached, directly or
+ indirectly from the given root list of C{ValueDoc}s.
+
+ @param filters: A set of filters that can be used to prevent
+ C{reachable_valdocs} from following specific link types when
+ looking for C{ValueDoc}s that can be reached from the root
+ set. See C{APIDoc.apidoc_links} for a more complete
+ description.
+ """
+ apidoc_queue = list(root)
+ val_set = set()
+ var_set = set()
+ while apidoc_queue:
+ api_doc = apidoc_queue.pop()
+ if isinstance(api_doc, ValueDoc):
+ val_set.add(api_doc)
+ else:
+ var_set.add(api_doc)
+ apidoc_queue.extend([v for v in api_doc.apidoc_links(**filters)
+ if v not in val_set and v not in var_set])
+ return val_set
+
+######################################################################
+# Variable Documentation Objects
+######################################################################
+
+class VariableDoc(APIDoc):
+ """
+ API documentation information about a single Python variable.
+
+ @note: The only time a C{VariableDoc} will have its own docstring
+ is if that variable was created using an assignment statement, and
+ that assignment statement had a docstring-comment or was followed
+ by a pseudo-docstring.
+ """
+ #{ Basic Variable Information
+ name = UNKNOWN
+ """@ivar: The name of this variable in its containing namespace.
+ @type: C{str}"""
+
+ container = UNKNOWN
+ """@ivar: API documentation for the namespace that contains this
+ variable.
+ @type: L{ValueDoc}"""
+
+ canonical_name = UNKNOWN
+ """@ivar: A dotted name that serves as a unique identifier for
+ this C{VariableDoc}. It should be formed by concatenating
+ the C{VariableDoc}'s C{container} with its C{name}.
+ @type: L{DottedName}"""
+
+ value = UNKNOWN
+ """@ivar: The API documentation for this variable's value.
+ @type: L{ValueDoc}"""
+ #}
+
+ #{ Information Extracted from Docstrings
+ type_descr = UNKNOWN
+ """@ivar: A description of the variable's expected type, extracted from
+ its docstring.
+ @type: L{ParsedDocstring<epydoc.markup.ParsedDocstring>}"""
+ #} end of "information extracted from docstrings" group
+
+ #{ Information about Imported Variables
+ imported_from = UNKNOWN
+ """@ivar: The fully qualified dotted name of the variable that this
+ variable's value was imported from. This attribute should only
+ be defined if C{is_imported} is true.
+ @type: L{DottedName}"""
+
+ is_imported = UNKNOWN
+ """@ivar: Was this variable's value imported from another module?
+ (Exception: variables that are explicitly included in __all__ have
+ C{is_imported} set to C{False}, even if they are in fact
+ imported.)
+ @type: C{bool}"""
+ #} end of "information about imported variables" group
+
+ #{ Information about Variables in Classes
+ is_instvar = UNKNOWN
+ """@ivar: If true, then this variable is an instance variable; if false,
+ then this variable is a class variable. This attribute should
+ only be defined if the containing namespace is a class.
+ @type: C{bool}"""
+
+ overrides = UNKNOWN # [XXX] rename -- don't use a verb.
+ """@ivar: The API documentation for the variable that is overridden by
+ this variable. This attribute should only be defined if the
+ containing namespace is a class.
+ @type: L{VariableDoc}"""
+ #} end of "information about variables in classes" group
+
+ #{ Flags
+ is_alias = UNKNOWN
+ """@ivar: Is this variable an alias for another variable with the same
+ value? If so, then this variable will be dispreferred when
+ assigning canonical names.
+ @type: C{bool}"""
+
+ is_public = UNKNOWN
+ """@ivar: Is this variable part of its container's public API?
+ @type: C{bool}"""
+ #} end of "flags" group
+
+ def __init__(self, **kwargs):
+ APIDoc.__init__(self, **kwargs)
+ if self.is_public is UNKNOWN and self.name is not UNKNOWN:
+ self.is_public = (not self.name.startswith('_') or
+ self.name.endswith('_'))
+
+ def __repr__(self):
+ if self.canonical_name is not UNKNOWN:
+ return '<%s %s>' % (self.__class__.__name__, self.canonical_name)
+ if self.name is not UNKNOWN:
+ return '<%s %s>' % (self.__class__.__name__, self.name)
+ else:
+ return '<%s>' % self.__class__.__name__
+
+ def _get_defining_module(self):
+ if self.container is UNKNOWN:
+ return UNKNOWN
+ return self.container.defining_module
+ defining_module = property(_get_defining_module, doc="""
+ A read-only property that can be used to get the variable's
+ defining module. This is defined as the defining module
+ of the variable's container.""")
+
+ def apidoc_links(self, **filters):
+ # nb: overrides filter is *False* by default.
+ if (filters.get('overrides', False) and
+ (self.overrides not in (None, UNKNOWN))):
+ overrides = [self.overrides]
+ else:
+ overrides = []
+ if self.value in (None, UNKNOWN):
+ return []+overrides
+ else:
+ return [self.value]+overrides
+
+ def is_detailed(self):
+ pval = super(VariableDoc, self).is_detailed()
+ if pval or self.value in (None, UNKNOWN):
+ return pval
+
+ if (self.overrides not in (None, UNKNOWN) and
+ isinstance(self.value, RoutineDoc)):
+ return True
+
+ if isinstance(self.value, GenericValueDoc):
+ # [XX] This is a little hackish -- we assume that the
+ # summary lines will have SUMMARY_REPR_LINELEN chars,
+ # that len(name) of those will be taken up by the name,
+ # and that 3 of those will be taken up by " = " between
+ # the name & val. Note that if any docwriter uses a
+ # different formula for maxlen for this, then it will
+ # not get the right value for is_detailed().
+ maxlen = self.value.SUMMARY_REPR_LINELEN-3-len(self.name)
+ return (not self.value.summary_pyval_repr(maxlen).is_complete)
+ else:
+ return self.value.is_detailed()
+
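The "hackish" budget arithmetic described in the comment above can be checked with a worked example (the variable name is hypothetical; 75 is `ValueDoc.SUMMARY_REPR_LINELEN`'s default from this file):

```python
# The summary-line budget computed in is_detailed() above: the total
# line length, minus 3 chars for " = ", minus the variable name.
SUMMARY_REPR_LINELEN = 75
name = 'MY_CONSTANT'                 # 11 characters
maxlen = SUMMARY_REPR_LINELEN - 3 - len(name)
print(maxlen)  # 61 characters remain for the value's repr
```

If the value's one-line repr cannot be completed within that budget, the variable gets a detail box.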
+######################################################################
+# Value Documentation Objects
+######################################################################
+
+class ValueDoc(APIDoc):
+ """
+ API documentation information about a single Python value.
+ """
+ canonical_name = UNKNOWN
+ """@ivar: A dotted name that serves as a unique identifier for
+ this C{ValueDoc}'s value. If the value can be reached using a
+ single sequence of identifiers (given the appropriate imports),
+ then that sequence of identifiers is used as its canonical name.
+ If the value can be reached by multiple sequences of identifiers
+ (i.e., if it has multiple aliases), then one of those sequences of
+ identifiers is used. If the value cannot be reached by any
+ sequence of identifiers (e.g., if it was used as a base class but
+ then its variable was deleted), then its canonical name will start
+ with C{'??'}. If necessary, a dash followed by a number will be
+ appended to the end of a non-reachable identifier to make its
+ canonical name unique.
+
+ When possible, canonical names are chosen when new C{ValueDoc}s
+ are created. However, this is sometimes not possible. If a
+ canonical name can not be chosen when the C{ValueDoc} is created,
+ then one will be assigned by L{assign_canonical_names()
+ <docbuilder.assign_canonical_names>}.
+
+ @type: L{DottedName}"""
+
+ #{ Value Representation
+ pyval = UNKNOWN
+ """@ivar: A pointer to the actual Python object described by this
+ C{ValueDoc}. This is used to display the value (e.g., when
+ describing a variable.) Use L{pyval_repr()} to generate a
+ plaintext string representation of this value.
+ @type: Python object"""
+
+ parse_repr = UNKNOWN
+ """@ivar: A text representation of this value, extracted from
+ parsing its source code. This representation may not accurately
+ reflect the actual value (e.g., if the value was modified after
+ the initial assignment).
+ @type: C{unicode}"""
+
+ REPR_MAXLINES = 5
+ """@cvar: The maximum number of lines of text that should be
+ generated by L{pyval_repr()}. If the string representation does
+ not fit in this number of lines, an ellipsis marker (...) will
+ be placed at the end of the formatted representation."""
+
+ REPR_LINELEN = 75
+ """@cvar: The maximum number of characters for lines of text that
+ should be generated by L{pyval_repr()}. Any lines that exceed
+ this number of characters will be line-wrapped; the S{crarr}
+ symbol will be used to indicate that the line was wrapped."""
+
+ SUMMARY_REPR_LINELEN = 75
+ """@cvar: The maximum number of characters for the single-line
+ text representation generated by L{summary_pyval_repr()}. If
+ the value's representation does not fit in this number of
+ characters, an ellipsis marker (...) will be placed at the end
+ of the formatted representation."""
+
+ REPR_MIN_SCORE = 0
+ """@cvar: The minimum score that a value representation based on
+ L{pyval} should have in order to be used instead of L{parse_repr}
+ as the canonical representation for this C{ValueDoc}'s value.
+ @see: L{epydoc.markup.pyval_repr}"""
+ #} end of "value representation" group
+
+ #{ Context
+ defining_module = UNKNOWN
+ """@ivar: The documentation for the module that defines this
+ value. This is used, e.g., to lookup the appropriate markup
+ language for docstrings. For a C{ModuleDoc},
+ C{defining_module} should be C{self}.
+ @type: L{ModuleDoc}"""
+ #} end of "context group"
+
+ #{ Information about Imported Variables
+ proxy_for = None # [xx] in progress.
+ """@ivar: If C{proxy_for} is not None, then this value was
+ imported from another file. C{proxy_for} is the dotted name of
+ the variable that this value was imported from. If that
+ variable is documented, then its C{value} may contain more
+ complete API documentation about this value. The C{proxy_for}
+ attribute is used by the source code parser to link imported
+ values to their source values (in particular, for base
+ classes). When possible, these proxy C{ValueDoc}s are replaced
+ by the imported value's C{ValueDoc} by
+ L{link_imports()<docbuilder.link_imports>}.
+ @type: L{DottedName}"""
+ #} end of "information about imported variables" group
+
+ #: @ivar:
+ #: This is currently used to extract values from __all__, etc, in
+ #: the docparser module; maybe I should specialize
+ #: process_assignment and extract it there? Although, for __all__,
+ #: it's not clear where I'd put the value, since I just use it to
+ #: set private/public/imported attribs on other vars (that might not
+ #: exist yet at the time.)
+ toktree = UNKNOWN
+
+ def __repr__(self):
+ if self.canonical_name is not UNKNOWN:
+ return '<%s %s>' % (self.__class__.__name__, self.canonical_name)
+ else:
+ return '<%s %s>' % (self.__class__.__name__,
+ self.summary_pyval_repr().to_plaintext(None))
+
+ def __setstate__(self, state):
+ self.__dict__ = state
+
+ def __getstate__(self):
+ """
+ State serializer for the pickle module. This is necessary
+ because sometimes the C{pyval} attribute contains an
+ un-pickleable value.
+ """
+ # Construct our pickled dictionary. Maintain this dictionary
+ # as a private attribute, so we can reuse it later, since
+ # merged objects need to share a single dictionary.
+ if not hasattr(self, '_ValueDoc__pickle_state'):
+ # Make sure __pyval_repr & __summary_pyval_repr are cached:
+ self.pyval_repr(), self.summary_pyval_repr()
+ # Construct the dictionary; leave out 'pyval'.
+ self.__pickle_state = self.__dict__.copy()
+ self.__pickle_state['pyval'] = UNKNOWN
+
+ if not isinstance(self, GenericValueDoc):
+ assert self.__pickle_state != {}
+ # Return the pickle state.
+ return self.__pickle_state
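The same pickle pattern as `__getstate__` above can be shown in isolation: drop an attribute that may not be picklable and replace it with a sentinel, so the rest of the state survives a round trip. The `Holder` class and its attribute names are hypothetical:

```python
import pickle

# Exclude an unpicklable attribute (here a lambda, standing in for
# ValueDoc's pyval) from the pickled state, keeping everything else.
class Holder(object):
    def __init__(self, value):
        self.pyval = value
        self.note = 'kept'
    def __getstate__(self):
        state = self.__dict__.copy()
        state['pyval'] = None        # sentinel, like UNKNOWN above
        return state
    def __setstate__(self, state):
        self.__dict__ = state

h = Holder(lambda x: x)              # lambdas cannot be pickled directly
h2 = pickle.loads(pickle.dumps(h))
print(h2.pyval, h2.note)             # None kept
```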
+
+ #{ Value Representation
+ def pyval_repr(self):
+ """
+ Return a formatted representation of the Python object
+ described by this C{ValueDoc}. This representation may
+ include data from introspection or parsing, and is authoritative
+ as 'the best way to represent a Python value.' Any lines that
+ go beyond L{REPR_LINELEN} characters will be wrapped; and if
+ the representation as a whole takes more than L{REPR_MAXLINES}
+ lines, then it will be truncated (with an ellipsis marker).
+ This function will never return L{UNKNOWN} or C{None}.
+
+ @rtype: L{ColorizedPyvalRepr}
+ """
+ # Use self.__pyval_repr to cache the result.
+ if not hasattr(self, '_ValueDoc__pyval_repr'):
+ self.__pyval_repr = epydoc.markup.pyval_repr.colorize_pyval(
+ self.pyval, self.parse_repr, self.REPR_MIN_SCORE,
+ self.REPR_LINELEN, self.REPR_MAXLINES, linebreakok=True)
+ return self.__pyval_repr
+
+ def summary_pyval_repr(self, max_len=None):
+ """
+ Return a single-line formatted representation of the Python
+ object described by this C{ValueDoc}. This representation may
+ include data from introspection or parsing, and is authoritative
+ as 'the best way to summarize a Python value.' If the
+ representation takes more than L{SUMMARY_REPR_LINELEN}
+ characters, then it will be truncated (with an ellipsis
+ marker). This function will never return L{UNKNOWN} or
+ C{None}.
+
+ @rtype: L{ColorizedPyvalRepr}
+ """
+ # If max_len is specified, then do *not* cache the result.
+ if max_len is not None:
+ return epydoc.markup.pyval_repr.colorize_pyval(
+ self.pyval, self.parse_repr, self.REPR_MIN_SCORE,
+ max_len, maxlines=1, linebreakok=False)
+
+ # Use self.__summary_pyval_repr to cache the result.
+ if not hasattr(self, '_ValueDoc__summary_pyval_repr'):
+ self.__summary_pyval_repr = epydoc.markup.pyval_repr.colorize_pyval(
+ self.pyval, self.parse_repr, self.REPR_MIN_SCORE,
+ self.SUMMARY_REPR_LINELEN, maxlines=1, linebreakok=False)
+ return self.__summary_pyval_repr
+ #} end of "value representation" group
+
+ def apidoc_links(self, **filters):
+ return []
+
+class GenericValueDoc(ValueDoc):
+ """
+ API documentation about a 'generic' value, i.e., one that does not
+ have its own docstring or any information other than its value and
+ parse representation. C{GenericValueDoc}s do not get assigned
+ canonical names.
+ """
+ canonical_name = None
+
+ def is_detailed(self):
+ return (not self.summary_pyval_repr().is_complete)
+
+class NamespaceDoc(ValueDoc):
+ """
+ API documentation information about a single Python namespace
+ value. (I.e., a module or a class).
+ """
+ #{ Information about Variables
+ variables = UNKNOWN
+ """@ivar: The contents of the namespace, encoded as a
+ dictionary mapping from identifiers to C{VariableDoc}s. This
+ dictionary contains all names defined by the namespace,
+ including imported variables, aliased variables, and variables
+ inherited from base classes (once L{inherit_docs()
+ <epydoc.docbuilder.inherit_docs>} has added them).
+ @type: C{dict} from C{string} to L{VariableDoc}"""
+ sorted_variables = UNKNOWN
+ """@ivar: A list of all variables defined by this
+ namespace, in sorted order. The elements of this list should
+ exactly match the values of L{variables}. The sort order for
+ this list is defined as follows:
+ - Any variables listed in a C{@sort} docstring field are
+ listed in the order given by that field.
+ - These are followed by any variables that were found while
+ parsing the source code, in the order in which they were
+ defined in the source file.
+ - Finally, any remaining variables are listed in
+ alphabetical order.
+ @type: C{list} of L{VariableDoc}"""
+ sort_spec = UNKNOWN
+ """@ivar: The order in which variables should be listed,
+ encoded as a list of names. Any variables whose names are not
+ included in this list should be listed alphabetically,
+ following the variables that are included.
+ @type: C{list} of C{str}"""
+ group_specs = UNKNOWN
+ """@ivar: The groups that are defined by this namespace's
+ docstrings. C{group_specs} is encoded as an ordered list of
+ tuples C{(group_name, elt_names)}, where C{group_name} is the
+ name of a group and C{elt_names} is a list of element names in
+ that group. (An element can be a variable or a submodule.) A
+ '*' in an element name will match any string of characters.
+ @type: C{list} of C{(str,list)}"""
+ variable_groups = UNKNOWN
+ """@ivar: A dictionary specifying what group each
+ variable belongs to. The keys of the dictionary are group
+ names, and the values are lists of C{VariableDoc}s. The order
+ that groups should be listed in should be taken from
+ L{group_specs}.
+ @type: C{dict} from C{str} to C{list} of L{VariableDoc}"""
+ #} end of group "information about variables"
+
+ def __init__(self, **kwargs):
+ kwargs.setdefault('variables', {})
+ APIDoc.__init__(self, **kwargs)
+ assert self.variables is not UNKNOWN
+
+ def is_detailed(self):
+ return True
+
+ def apidoc_links(self, **filters):
+ variables = filters.get('variables', True)
+ imports = filters.get('imports', True)
+ private = filters.get('private', True)
+ if variables and imports and private:
+ return self.variables.values() # list the common case first.
+ elif not variables:
+ return []
+ elif not imports and not private:
+ return [v for v in self.variables.values() if
+ v.is_imported != True and v.is_public != False]
+ elif not private:
+ return [v for v in self.variables.values() if
+ v.is_public != False]
+ elif not imports:
+ return [v for v in self.variables.values() if
+ v.is_imported != True]
+ assert 0, 'this line should be unreachable'
+
+ def init_sorted_variables(self):
+ """
+ Initialize the L{sorted_variables} attribute, based on the
+ L{variables} and L{sort_spec} attributes. This should usually
+ be called after all variables have been added to C{variables}
+ (including any inherited variables for classes).
+ """
+ unsorted = self.variables.copy()
+ self.sorted_variables = []
+
+ # Add any variables that are listed in sort_spec
+ if self.sort_spec is not UNKNOWN:
+ unused_idents = set(self.sort_spec)
+ for ident in self.sort_spec:
+ if ident in unsorted:
+ self.sorted_variables.append(unsorted.pop(ident))
+ unused_idents.discard(ident)
+ elif '*' in ident:
+ regexp = re.compile('^%s$' % ident.replace('*', '(.*)'))
+ # sort within matching group?
+ for name, var_doc in unsorted.items():
+ if regexp.match(name):
+ self.sorted_variables.append(unsorted.pop(name))
+ unused_idents.discard(ident)
+ for ident in unused_idents:
+ if ident not in ['__all__', '__docformat__', '__path__']:
+ log.warning("@sort: %s.%s not found" %
+ (self.canonical_name, ident))
+
+ # Add any remaining variables in alphabetical order.
+ var_docs = unsorted.items()
+ var_docs.sort()
+ for name, var_doc in var_docs:
+ self.sorted_variables.append(var_doc)
+
+ def init_variable_groups(self):
+ """
+ Initialize the L{variable_groups} attribute, based on the
+ L{sorted_variables} and L{group_specs} attributes.
+ """
+ if self.sorted_variables is UNKNOWN:
+ self.init_sorted_variables()
+ assert len(self.sorted_variables) == len(self.variables)
+
+ elts = [(v.name, v) for v in self.sorted_variables]
+ self._unused_groups = dict([(n,set(i)) for (n,i) in self.group_specs])
+ self.variable_groups = self._init_grouping(elts)
+
+ def group_names(self):
+ """
+ Return a list of the group names defined by this namespace, in
+ the order in which they should be listed, with no duplicates.
+ """
+ name_list = ['']
+ name_set = set()
+ for name, spec in self.group_specs:
+ if name not in name_set:
+ name_set.add(name)
+ name_list.append(name)
+ return name_list
+
+ def _init_grouping(self, elts):
+ """
+ Divide a given list of APIDoc objects into groups, as
+ specified by L{self.group_specs}.
+
+ @param elts: A list of tuples C{(name, apidoc)}.
+
+ @return: A list of tuples C{(groupname, elts)}, where
+ C{groupname} is the name of a group and C{elts} is a list of
+ C{APIDoc}s in that group. The first tuple has name C{''}, and
+ is used for ungrouped elements. The remaining tuples are
+ listed in the order that they appear in C{self.group_specs}.
+ Within each tuple, the elements are listed in the order that
+ they appear in C{elts}.
+ """
+ # Make the common case fast.
+ if len(self.group_specs) == 0:
+ return {'': [elt[1] for elt in elts]}
+
+ ungrouped = dict(elts)
+ groups = {}
+ for elt_name, elt_doc in elts:
+ for (group_name, idents) in self.group_specs:
+ group = groups.setdefault(group_name, [])
+ unused_groups = self._unused_groups[group_name]
+ for ident in idents:
+ if re.match('^%s$' % ident.replace('*', '(.*)'), elt_name):
+ unused_groups.discard(ident)
+ if elt_name in ungrouped:
+ group.append(ungrouped.pop(elt_name))
+ else:
+ log.warning("%s.%s in multiple groups" %
+ (self.canonical_name, elt_name))
+
+ # Convert ungrouped from an unordered set to an ordered list.
+ groups[''] = [elt_doc for (elt_name, elt_doc) in elts
+ if elt_name in ungrouped]
+ return groups
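The wildcard matching used by `_init_grouping()` (and by `init_sorted_variables()`) turns a `'*'` in a group element name into the regex `(.*)`, anchored at both ends. A small standalone sketch:

```python
import re

# Glob-to-regex matching as done above: '*' in an @group/@sort ident
# matches any run of characters; everything else must match exactly.
def glob_match(ident, name):
    return re.match('^%s$' % ident.replace('*', '(.*)'), name) is not None

print(glob_match('test_*', 'test_parser'))  # True
print(glob_match('test_*', 'parser_test'))  # False
```

Note that, as in the original, other regex metacharacters in the ident are not escaped, so a literal `.` in a name pattern also matches any character.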
+
+ def report_unused_groups(self):
+ """
+ Issue a warning for any @group items that were not used by
+ L{_init_grouping()}.
+ """
+ for (group, unused_idents) in self._unused_groups.items():
+ for ident in unused_idents:
+ log.warning("@group %s: %s.%s not found" %
+ (group, self.canonical_name, ident))
+
+class ModuleDoc(NamespaceDoc):
+ """
+ API documentation information about a single module.
+ """
+ #{ Information about the Module
+ filename = UNKNOWN
+ """@ivar: The name of the file that defines the module.
+ @type: C{string}"""
+ docformat = UNKNOWN
+ """@ivar: The markup language used by docstrings in this module.
+ @type: C{string}"""
+ #{ Information about Submodules
+ submodules = UNKNOWN
+ """@ivar: Modules contained by this module (if this module
+ is a package). (Note: on rare occasions, a module may have a
+ submodule that is shadowed by a variable with the same name.)
+ @type: C{list} of L{ModuleDoc}"""
+ submodule_groups = UNKNOWN
+ """@ivar: A dictionary specifying what group each
+ submodule belongs to. The keys of the dictionary are group
+ names, and the values are lists of C{ModuleDoc}s. The order
+ that groups should be listed in should be taken from
+ L{group_specs}.
+ @type: C{dict} from C{str} to C{list} of L{ModuleDoc}"""
+ #{ Information about Packages
+ package = UNKNOWN
+ """@ivar: API documentation for the module's containing package.
+ @type: L{ModuleDoc}"""
+ is_package = UNKNOWN
+ """@ivar: True if this C{ModuleDoc} describes a package.
+ @type: C{bool}"""
+ path = UNKNOWN
+ """@ivar: If this C{ModuleDoc} describes a package, then C{path}
+ contains a list of directories that constitute its path (i.e.,
+ the value of its C{__path__} variable).
+ @type: C{list} of C{str}"""
+ #{ Information about Imported Variables
+ imports = UNKNOWN
+ """@ivar: A list of the source names of variables imported into
+ this module. This is used to construct import graphs.
+ @type: C{list} of L{DottedName}"""
+ #}
+
+ def apidoc_links(self, **filters):
+ val_docs = NamespaceDoc.apidoc_links(self, **filters)
+ if (filters.get('packages', True) and
+ self.package not in (None, UNKNOWN)):
+ val_docs.append(self.package)
+ if (filters.get('submodules', True) and
+ self.submodules not in (None, UNKNOWN)):
+ val_docs += self.submodules
+ return val_docs
+
+ def init_submodule_groups(self):
+ """
+ Initialize the L{submodule_groups} attribute, based on the
+ L{submodules} and L{group_specs} attributes.
+ """
+ if self.submodules in (None, UNKNOWN):
+ return
+ self.submodules = sorted(self.submodules,
+ key=lambda m:m.canonical_name)
+ elts = [(m.canonical_name[-1], m) for m in self.submodules]
+ self.submodule_groups = self._init_grouping(elts)
+
+ def select_variables(self, group=None, value_type=None, public=None,
+ imported=None, detailed=None):
+ """
+ Return a specified subset of this module's L{sorted_variables}
+ list. If C{value_type} is given, then only return variables
+ whose values have the specified type. If C{group} is given,
+ then only return variables that belong to the specified group.
+
+ @require: The L{sorted_variables}, L{variable_groups}, and
+ L{submodule_groups} attributes must be initialized before
+ this method can be used. See L{init_sorted_variables()},
+ L{init_variable_groups()}, and L{init_submodule_groups()}.
+
+ @param value_type: A string specifying the value type for
+ which variables should be returned. Valid values are:
+ - 'class' - variables whose values are classes or types.
+ - 'function' - variables whose values are functions.
+ - 'other' - variables whose values are not classes,
+ exceptions, types, or functions.
+ @type value_type: C{string}
+
+ @param group: The name of the group for which variables should
+ be returned. A complete list of the groups defined by
+ this C{ModuleDoc} is available in the L{group_names}
+ instance variable. The first element of this list is
+ always the special group name C{''}, which is used for
+ variables that do not belong to any group.
+ @type group: C{string}
+
+ @param detailed: If True (False), return only the variables
+ deserving (not deserving) a detailed informative box.
+ If C{None}, don't care.
+ @type detailed: C{bool}
+ """
+ if (self.sorted_variables is UNKNOWN or
+ self.variable_groups is UNKNOWN):
+ raise ValueError('sorted_variables and variable_groups '
+ 'must be initialized first.')
+
+ if group is None: var_list = self.sorted_variables
+ else:
+ var_list = self.variable_groups.get(group, self.sorted_variables)
+
+ # Public/private filter (Count UNKNOWN as public)
+ if public is True:
+ var_list = [v for v in var_list if v.is_public is not False]
+ elif public is False:
+ var_list = [v for v in var_list if v.is_public is False]
+
+ # Imported filter (Count UNKNOWN as non-imported)
+ if imported is True:
+ var_list = [v for v in var_list if v.is_imported is True]
+ elif imported is False:
+ var_list = [v for v in var_list if v.is_imported is not True]
+
+ # Detailed filter
+ if detailed is True:
+ var_list = [v for v in var_list if v.is_detailed() is True]
+ elif detailed is False:
+ var_list = [v for v in var_list if v.is_detailed() is not True]
+
+ # [xx] Modules are not currently included in any of these
+ # value types.
+ if value_type is None:
+ return var_list
+ elif value_type == 'class':
+ return [var_doc for var_doc in var_list
+ if (isinstance(var_doc.value, ClassDoc))]
+ elif value_type == 'function':
+ return [var_doc for var_doc in var_list
+ if isinstance(var_doc.value, RoutineDoc)]
+ elif value_type == 'other':
+ return [var_doc for var_doc in var_list
+ if not isinstance(var_doc.value,
+ (ClassDoc, RoutineDoc, ModuleDoc))]
+ else:
+ raise ValueError('Bad value type %r' % value_type)
+
+class ClassDoc(NamespaceDoc):
+ """
+ API documentation information about a single class.
+ """
+ #{ Information about Base Classes
+ bases = UNKNOWN
+ """@ivar: API documentation for the class's base classes.
+ @type: C{list} of L{ClassDoc}"""
+ #{ Information about Subclasses
+ subclasses = UNKNOWN
+ """@ivar: API documentation for the class's known subclasses.
+ @type: C{list} of L{ClassDoc}"""
+ #}
+
+ def apidoc_links(self, **filters):
+ val_docs = NamespaceDoc.apidoc_links(self, **filters)
+ if (filters.get('bases', True) and
+ self.bases not in (None, UNKNOWN)):
+ val_docs += self.bases
+ if (filters.get('subclasses', True) and
+ self.subclasses not in (None, UNKNOWN)):
+ val_docs += self.subclasses
+ return val_docs
+
+ def is_type(self):
+ if self.canonical_name == DottedName('type'): return True
+ if self.bases is UNKNOWN: return False
+ for base in self.bases:
+ if isinstance(base, ClassDoc) and base.is_type():
+ return True
+ return False
+
+ def is_exception(self):
+ if self.canonical_name == DottedName('Exception'): return True
+ if self.bases is UNKNOWN: return False
+ for base in self.bases:
+ if isinstance(base, ClassDoc) and base.is_exception():
+ return True
+ return False
+
+ def is_newstyle_class(self):
+ if self.canonical_name == DottedName('object'): return True
+ if self.bases is UNKNOWN: return False
+ for base in self.bases:
+ if isinstance(base, ClassDoc) and base.is_newstyle_class():
+ return True
+ return False
+
+ def mro(self, warn_about_bad_bases=False):
+ if self.is_newstyle_class():
+ return self._c3_mro(warn_about_bad_bases)
+ else:
+ return self._dfs_bases([], set(), warn_about_bad_bases)
+
+ def _dfs_bases(self, mro, seen, warn_about_bad_bases):
+ if self in seen: return mro
+ mro.append(self)
+ seen.add(self)
+ if self.bases is not UNKNOWN:
+ for base in self.bases:
+ if isinstance(base, ClassDoc) and base.proxy_for is None:
+ base._dfs_bases(mro, seen, warn_about_bad_bases)
+ elif warn_about_bad_bases:
+ self._report_bad_base(base)
+ return mro
+
+ def _c3_mro(self, warn_about_bad_bases):
+ """
+ Compute the class precedence list (mro) according to C3.
+ @seealso: U{http://www.python.org/2.3/mro.html}
+ """
+ bases = [base for base in self.bases if isinstance(base, ClassDoc)]
+ if len(bases) != len(self.bases) and warn_about_bad_bases:
+ for base in self.bases:
+ if (not isinstance(base, ClassDoc) or
+ base.proxy_for is not None):
+ self._report_bad_base(base)
+ w = [warn_about_bad_bases]*len(bases)
+ return self._c3_merge([[self]] + map(ClassDoc._c3_mro, bases, w) +
+ [list(bases)])
+
+ def _report_bad_base(self, base):
+ if not isinstance(base, ClassDoc):
+ if not isinstance(base, GenericValueDoc):
+ base_name = base.canonical_name
+ elif base.parse_repr is not UNKNOWN:
+ base_name = base.parse_repr
+ else:
+ base_name = '%r' % base
+ log.warning("%s's base %s is not a class" %
+ (self.canonical_name, base_name))
+ elif base.proxy_for is not None:
+ log.warning("No information available for %s's base %s" %
+ (self.canonical_name, base.proxy_for))
+
+ def _c3_merge(self, seqs):
+ """
+ Helper function for L{_c3_mro}.
+ """
+ res = []
+ while 1:
+ nonemptyseqs=[seq for seq in seqs if seq]
+ if not nonemptyseqs: return res
+ for seq in nonemptyseqs: # find merge candidates among seq heads
+ cand = seq[0]
+ nothead=[s for s in nonemptyseqs if cand in s[1:]]
+ if nothead: cand=None #reject candidate
+ else: break
+ if not cand: raise "Inconsistent hierarchy"
+ res.append(cand)
+ for seq in nonemptyseqs: # remove cand
+ if seq[0] == cand: del seq[0]
+
+ def select_variables(self, group=None, value_type=None, inherited=None,
+ public=None, imported=None, detailed=None):
+ """
+ Return a specified subset of this class's L{sorted_variables}
+ list. If C{value_type} is given, then only return variables
+ whose values have the specified type. If C{group} is given,
+ then only return variables that belong to the specified group.
+ If C{inherited} is True, then only return inherited variables;
+ if C{inherited} is False, then only return local variables.
+
+ @require: The L{sorted_variables} and L{variable_groups}
+ attributes must be initialized before this method can be
+ used. See L{init_sorted_variables()} and
+ L{init_variable_groups()}.
+
+ @param value_type: A string specifying the value type for
+ which variables should be returned. Valid values are:
+ - 'instancemethod' - variables whose values are
+ instance methods.
+ - 'classmethod' - variables whose values are class
+ methods.
+ - 'staticmethod' - variables whose values are static
+ methods.
+ - 'properties' - variables whose values are properties.
+ - 'class' - variables whose values are nested classes
+ (including exceptions and types).
+ - 'instancevariable' - instance variables. This includes
+ any variables that are explicitly marked as instance
+ variables with docstring fields; and variables with
+ docstrings that are initialized in the constructor.
+ - 'classvariable' - class variables. This includes any
+ variables that are not included in any of the above
+ categories.
+ @type value_type: C{string}
+
+ @param group: The name of the group for which variables should
+ be returned. A complete list of the groups defined by
+ this C{ClassDoc} is available in the L{group_names}
+ instance variable. The first element of this list is
+ always the special group name C{''}, which is used for
+ variables that do not belong to any group.
+ @type group: C{string}
+
+ @param inherited: If C{None}, then return both inherited and
+ local variables; if C{True}, then return only inherited
+ variables; if C{False}, then return only local variables.
+
+ @param detailed: If C{True}, return only the variables
+ deserving a detailed informative box; if C{False}, return
+ only those not deserving one. If C{None}, don't care.
+ @type detailed: C{bool}
+ """
+ if (self.sorted_variables is UNKNOWN or
+ self.variable_groups is UNKNOWN):
+ raise ValueError('sorted_variables and variable_groups '
+ 'must be initialized first.')
+
+ if group is None: var_list = self.sorted_variables
+ else: var_list = self.variable_groups[group]
+
+ # Public/private filter (Count UNKNOWN as public)
+ if public is True:
+ var_list = [v for v in var_list if v.is_public is not False]
+ elif public is False:
+ var_list = [v for v in var_list if v.is_public is False]
+
+ # Inherited filter (Count UNKNOWN as non-inherited)
+ if inherited is None: pass
+ elif inherited:
+ var_list = [v for v in var_list if v.container != self]
+ else:
+ var_list = [v for v in var_list if v.container == self ]
+
+ # Imported filter (Count UNKNOWN as non-imported)
+ if imported is True:
+ var_list = [v for v in var_list if v.is_imported is True]
+ elif imported is False:
+ var_list = [v for v in var_list if v.is_imported is not True]
+
+ # Detailed filter
+ if detailed is True:
+ var_list = [v for v in var_list if v.is_detailed() is True]
+ elif detailed is False:
+ var_list = [v for v in var_list if v.is_detailed() is not True]
+
+ if value_type is None:
+ return var_list
+ elif value_type == 'method':
+ return [var_doc for var_doc in var_list
+ if (isinstance(var_doc.value, RoutineDoc) and
+ var_doc.is_instvar in (False, UNKNOWN))]
+ elif value_type == 'instancemethod':
+ return [var_doc for var_doc in var_list
+ if (isinstance(var_doc.value, RoutineDoc) and
+ not isinstance(var_doc.value, ClassMethodDoc) and
+ not isinstance(var_doc.value, StaticMethodDoc) and
+ var_doc.is_instvar in (False, UNKNOWN))]
+ elif value_type == 'classmethod':
+ return [var_doc for var_doc in var_list
+ if (isinstance(var_doc.value, ClassMethodDoc) and
+ var_doc.is_instvar in (False, UNKNOWN))]
+ elif value_type == 'staticmethod':
+ return [var_doc for var_doc in var_list
+ if (isinstance(var_doc.value, StaticMethodDoc) and
+ var_doc.is_instvar in (False, UNKNOWN))]
+ elif value_type == 'property':
+ return [var_doc for var_doc in var_list
+ if (isinstance(var_doc.value, PropertyDoc) and
+ var_doc.is_instvar in (False, UNKNOWN))]
+ elif value_type == 'class':
+ return [var_doc for var_doc in var_list
+ if (isinstance(var_doc.value, ClassDoc) and
+ var_doc.is_instvar in (False, UNKNOWN))]
+ elif value_type == 'instancevariable':
+ return [var_doc for var_doc in var_list
+ if var_doc.is_instvar is True]
+ elif value_type == 'classvariable':
+ return [var_doc for var_doc in var_list
+ if (var_doc.is_instvar in (False, UNKNOWN) and
+ not isinstance(var_doc.value,
+ (RoutineDoc, ClassDoc, PropertyDoc)))]
+ else:
+ raise ValueError('Bad value type %r' % value_type)
+
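The public, inherited, imported, and detailed filters in `select_variables` all follow the same tri-state pattern: `None` means "don't care", while `True`/`False` keep one side, counting the `UNKNOWN` sentinel as the permissive default. A minimal standalone sketch of that pattern, using hypothetical dict records in place of `VariableDoc`s:

```python
UNKNOWN = object()  # stand-in sentinel, mirroring epydoc's UNKNOWN

def filter_public(var_list, public):
    """Tri-state public/private filter: UNKNOWN counts as public."""
    if public is True:
        return [v for v in var_list if v['is_public'] is not False]
    elif public is False:
        return [v for v in var_list if v['is_public'] is False]
    return var_list  # public is None: don't care

vars_ = [{'name': 'a', 'is_public': True},
         {'name': '_b', 'is_public': False},
         {'name': 'c', 'is_public': UNKNOWN}]

print([v['name'] for v in filter_public(vars_, True)])   # ['a', 'c']
print([v['name'] for v in filter_public(vars_, False)])  # ['_b']
print([v['name'] for v in filter_public(vars_, None)])   # ['a', '_b', 'c']
```

Because each filter narrows `var_list` independently, the filters can be chained in any order, exactly as the method above does.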
+class RoutineDoc(ValueDoc):
+ """
+ API documentation information about a single routine.
+ """
+ #{ Signature
+ posargs = UNKNOWN
+ """@ivar: The names of the routine's positional arguments.
+ If an argument list contains \"unpacking\" arguments, then
+ their names will be specified using nested lists. E.g., if
+ a function's argument list is C{((x1,y1), (x2,y2))}, then
+ posargs will be C{[['x1','y1'], ['x2','y2']]}.
+ @type: C{list}"""
+ posarg_defaults = UNKNOWN
+ """@ivar: API documentation for the positional arguments'
+ default values. This list has the same length as C{posargs}, and
+ each element of C{posarg_defaults} describes the corresponding
+ argument in C{posargs}. For positional arguments with no default,
+ C{posarg_defaults} will contain C{None}.
+ @type: C{list} of C{ValueDoc} or C{None}"""
+ vararg = UNKNOWN
+ """@ivar: The name of the routine's vararg argument, or C{None} if
+ it has no vararg argument.
+ @type: C{string} or C{None}"""
+ kwarg = UNKNOWN
+ """@ivar: The name of the routine's keyword argument, or C{None} if
+ it has no keyword argument.
+ @type: C{string} or C{None}"""
+ lineno = UNKNOWN # used to look up profiling info from pstats.
+ """@ivar: The line number of the first line of the function's
+ signature. For Python functions, this is equal to
+ C{func.func_code.co_firstlineno}. The first line of a file
+ is considered line 1.
+ @type: C{int}"""
+ #} end of "signature" group
+
+ #{ Decorators
+ decorators = UNKNOWN
+ """@ivar: A list of names of decorators that were applied to this
+ routine, in the order that they are listed in the source code.
+ (I.e., in the reverse of the order that they were applied in.)
+ @type: C{list} of C{string}"""
+ #} end of "decorators" group
+
+ #{ Information Extracted from Docstrings
+ arg_descrs = UNKNOWN
+ """@ivar: A list of descriptions of the routine's
+ arguments. Each element of this list is a tuple C{(args,
+ descr)}, where C{args} is a list of argument names; and
+ C{descr} is a L{ParsedDocstring
+ <epydoc.markup.ParsedDocstring>} describing the argument(s)
+ specified by C{args}.
+ @type: C{list}"""
+ arg_types = UNKNOWN
+ """@ivar: Descriptions of the expected types for the
+ routine's arguments, encoded as a dictionary mapping from
+ argument names to type descriptions.
+ @type: C{dict} from C{string} to L{ParsedDocstring
+ <epydoc.markup.ParsedDocstring>}"""
+ return_descr = UNKNOWN
+ """@ivar: A description of the value returned by this routine.
+ @type: L{ParsedDocstring<epydoc.markup.ParsedDocstring>}"""
+ return_type = UNKNOWN
+ """@ivar: A description of expected type for the value
+ returned by this routine.
+ @type: L{ParsedDocstring<epydoc.markup.ParsedDocstring>}"""
+ exception_descrs = UNKNOWN
+ """@ivar: A list of descriptions of exceptions
+ that the routine might raise. Each element of this list is a
+ tuple C{(exc, descr)}, where C{exc} is a string containing the
+ exception name; and C{descr} is a L{ParsedDocstring
+ <epydoc.markup.ParsedDocstring>} describing the circumstances
+ under which the exception specified by C{exc} is raised.
+ @type: C{list}"""
+ #} end of "information extracted from docstrings" group
+ callgraph_uid = None
+ """@ivar: L{DotGraph}.uid of the call graph for the function.
+ @type: C{str}"""
+
+ def is_detailed(self):
+ if super(RoutineDoc, self).is_detailed():
+ return True
+
+ if self.arg_descrs not in (None, UNKNOWN) and self.arg_descrs:
+ return True
+
+ if self.arg_types not in (None, UNKNOWN) and self.arg_types:
+ return True
+
+ if self.return_descr not in (None, UNKNOWN):
+ return True
+
+ if self.exception_descrs not in (None, UNKNOWN) and self.exception_descrs:
+ return True
+
+ if (self.decorators not in (None, UNKNOWN)
+ and [ d for d in self.decorators
+ if d not in ('classmethod', 'staticmethod') ]):
+ return True
+
+ return False
+
+ def all_args(self):
+ """
+ @return: A list of the names of all arguments (positional,
+ vararg, and keyword), in order. If a positional argument
+ consists of a tuple of names, then that tuple will be
+ flattened.
+ """
+ if self.posargs is UNKNOWN:
+ return UNKNOWN
+
+ all_args = _flatten(self.posargs)
+ if self.vararg not in (None, UNKNOWN):
+ all_args.append(self.vararg)
+ if self.kwarg not in (None, UNKNOWN):
+ all_args.append(self.kwarg)
+ return all_args
+
+def _flatten(lst, out=None):
+ """
+ Return a flattened version of C{lst}.
+ """
+ if out is None: out = []
+ for elt in lst:
+ if isinstance(elt, (list,tuple)):
+ _flatten(elt, out)
+ else:
+ out.append(elt)
+ return out
+
+class ClassMethodDoc(RoutineDoc): pass
+class StaticMethodDoc(RoutineDoc): pass
+
+class PropertyDoc(ValueDoc):
+ """
+ API documentation information about a single property.
+ """
+ #{ Property Access Functions
+ fget = UNKNOWN
+ """@ivar: API documentation for the property's get function.
+ @type: L{RoutineDoc}"""
+ fset = UNKNOWN
+ """@ivar: API documentation for the property's set function.
+ @type: L{RoutineDoc}"""
+ fdel = UNKNOWN
+ """@ivar: API documentation for the property's delete function.
+ @type: L{RoutineDoc}"""
+ #}
+ #{ Information Extracted from Docstrings
+ type_descr = UNKNOWN
+ """@ivar: A description of the property's expected type, extracted
+ from its docstring.
+ @type: L{ParsedDocstring<epydoc.markup.ParsedDocstring>}"""
+ #} end of "information extracted from docstrings" group
+
+ def apidoc_links(self, **filters):
+ val_docs = []
+ if self.fget not in (None, UNKNOWN): val_docs.append(self.fget)
+ if self.fset not in (None, UNKNOWN): val_docs.append(self.fset)
+ if self.fdel not in (None, UNKNOWN): val_docs.append(self.fdel)
+ return val_docs
+
+ def is_detailed(self):
+ if super(PropertyDoc, self).is_detailed():
+ return True
+
+ if self.fget not in (None, UNKNOWN) and self.fget.pyval is not None:
+ return True
+ if self.fset not in (None, UNKNOWN) and self.fset.pyval is not None:
+ return True
+ if self.fdel not in (None, UNKNOWN) and self.fdel.pyval is not None:
+ return True
+
+ return False
+
+######################################################################
+## Index
+######################################################################
+
+class DocIndex:
+ """
+ [xx] out of date.
+
+ An index that .. hmm... it *can't* be used to access some things,
+ cuz they're not at the root level. Do I want to add them or what?
+ And if so, then I have a sort of a new top level. hmm.. so
+ basically the question is what to do with a name that's not in the
+ root var's name space. 2 types:
+ - entirely outside (eg os.path)
+ - inside but not known (eg a submodule that we didn't look at?)
+ - container of current thing not examined?
+
+ An index of all the C{APIDoc} objects that can be reached from a
+ root set of C{ValueDoc}s.
+
+ The members of this index can be accessed by dotted name. In
+ particular, C{DocIndex} defines two mappings, accessed via the
+ L{get_vardoc()} and L{get_valdoc()} methods, which can be used to
+ access C{VariableDoc}s or C{ValueDoc}s respectively by name. (Two
+ separate mappings are necessary because a single name can be used
+ to refer to both a variable and to the value contained by that
+ variable.)
+
+ Additionally, the index defines two sets of C{ValueDoc}s:
+ \"reachable C{ValueDoc}s\" and \"contained C{ValueDoc}s\". The
+ X{reachable C{ValueDoc}s} are defined as the set of all
+ C{ValueDoc}s that can be reached from the root set by following
+ I{any} sequence of pointers to C{ValueDoc}s or C{VariableDoc}s.
+ The X{contained C{ValueDoc}s} are defined as the set of all
+ C{ValueDoc}s that can be reached from the root set by following
+ only the C{ValueDoc} pointers defined by non-imported
+ C{VariableDoc}s. For example, if the root set contains a module
+ C{m}, then the contained C{ValueDoc}s includes the C{ValueDoc}s
+ for any functions, variables, or classes defined in that module,
+ as well as methods and variables defined in classes defined in the
+ module. The reachable C{ValueDoc}s includes all of those
+ C{ValueDoc}s, as well as C{ValueDoc}s for any values imported into
+ the module, and base classes for classes defined in the module.
+ """
+
+ def __init__(self, root):
+ """
+ Create a new documentation index, based on the given root set
+ of C{ValueDoc}s. Every C{APIDoc} in the root set must
+ already have a canonical name; a C{ValueError} is raised
+ otherwise.
+
+ @param root: A list of C{ValueDoc}s.
+ """
+ for apidoc in root:
+ if apidoc.canonical_name in (None, UNKNOWN):
+ raise ValueError("All APIDocs passed to DocIndex "
+ "must already have canonical names.")
+
+ # Initialize the root items list. We sort them by name length
+ # in ascending order. (This ensures that variables will shadow
+ # submodules when appropriate.)
+ # When two names have the same length, sort alphabetically:
+ # this is needed by the duplicate check below.
+ self.root = sorted(root,
+ key=lambda d: (len(d.canonical_name), d.canonical_name))
+ """The list of C{ValueDoc}s to document.
+ @type: C{list}"""
+
+ # Drop duplicated modules
+ # [xx] maybe what causes duplicates should be fixed instead.
+ # If fixed, adjust the sort here above: sorting by names will not
+ # be required anymore
+ i = 1
+ while i < len(self.root):
+ if self.root[i-1] is self.root[i]:
+ del self.root[i]
+ else:
+ i += 1
+
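The constructor sorts the roots by (name length, name) so that identical entries become adjacent, then drops identity duplicates in a single pass. The same pass over a plain list (the helper name is illustrative):

```python
def drop_adjacent_duplicates(items):
    """Remove items identical (by identity) to their predecessor, in place."""
    i = 1
    while i < len(items):
        if items[i - 1] is items[i]:
            del items[i]
        else:
            i += 1
    return items

a, b = ['m'], ['n']  # two distinct objects
print(drop_adjacent_duplicates([a, a, b, b, b]))  # [['m'], ['n']]
```

Note that the check uses `is`, not `==`: only the very same object is dropped, so two distinct-but-equal roots are both kept.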
+ self.mlclasses = self._get_module_classes(self.root)
+ """A mapping from class names to L{ClassDoc}. Contains
+ classes defined at module level for modules in L{root}
+ and which can be used as fallback by L{find()} if looking
+ in containing namespaces fails.
+ @type: C{dict} from C{str} to L{ClassDoc} or C{list}"""
+
+ self.callers = None
+ """A dictionary mapping from C{RoutineDoc}s in this index
+ to lists of C{RoutineDoc}s for the routine's callers.
+ This dictionary is initialized by calling
+ L{read_profiling_info()}.
+ @type: C{dict} from L{RoutineDoc} to C{list} of L{RoutineDoc}"""
+
+ self.callees = None
+ """A dictionary mapping from C{RoutineDoc}s in this index
+ to lists of C{RoutineDoc}s for the routine's callees.
+ This dictionary is initialized by calling
+ L{read_profiling_info()}.
+ @type: C{dict} from L{RoutineDoc} to C{list} of L{RoutineDoc}"""
+
+ self._funcid_to_doc = {}
+ """A mapping from C{profile} function ids to corresponding
+ C{APIDoc} objects. A function id is a tuple of the form
+ C{(filename, lineno, funcname)}. This is used to update
+ the L{callers} and L{callees} variables."""
+
+ self._container_cache = {}
+ """A cache for the L{container()} method, to increase speed."""
+
+ self._get_cache = {}
+ """A cache for the L{get_vardoc()} and L{get_valdoc()} methods,
+ to increase speed."""
+
+ #////////////////////////////////////////////////////////////
+ # Lookup methods
+ #////////////////////////////////////////////////////////////
+ # [xx]
+ # Currently these only work for things reachable from the
+ # root... :-/ I might want to change this so that imported
+ # values can be accessed even if they're not contained.
+ # Also, I might want canonical names to not start with ??
+ # if the thing is a top-level imported module..?
+
+ def get_vardoc(self, name):
+ """
+ Return the C{VariableDoc} with the given name, or C{None} if this
+ index does not contain a C{VariableDoc} with the given name.
+ """
+ var, val = self._get(name)
+ return var
+
+ def get_valdoc(self, name):
+ """
+ Return the C{ValueDoc} with the given name, or C{None} if this
+ index does not contain a C{ValueDoc} with the given name.
+ """
+ var, val = self._get(name)
+ return val
+
+ def _get(self, name):
+ """
+ A helper function that's used to implement L{get_vardoc()}
+ and L{get_valdoc()}.
+ """
+ # Convert name to a DottedName, if necessary.
+ if not isinstance(name, DottedName):
+ name = DottedName(name)
+
+ # Check if the result is cached.
+ val = self._get_cache.get(name)
+ if val is not None: return val
+
+ # Look for an element in the root set whose name is a prefix
+ # of `name`. If we can't find one, then return None.
+ for root_valdoc in self.root:
+ if root_valdoc.canonical_name.dominates(name):
+ # Starting at the root valdoc, walk down the variable/
+ # submodule chain until we find the requested item.
+ var_doc = None
+ val_doc = root_valdoc
+ for identifier in name[len(root_valdoc.canonical_name):]:
+ if val_doc is None: break
+ var_doc, val_doc = self._get_from(val_doc, identifier)
+ else:
+ # If we found it, then return.
+ if var_doc is not None or val_doc is not None:
+ self._get_cache[name] = (var_doc, val_doc)
+ return var_doc, val_doc
+
+ # We didn't find it.
+ self._get_cache[name] = (None, None)
+ return None, None
+
+ def _get_from(self, val_doc, identifier):
+ if isinstance(val_doc, NamespaceDoc):
+ child_var = val_doc.variables.get(identifier)
+ if child_var is not None:
+ child_val = child_var.value
+ if child_val is UNKNOWN: child_val = None
+ return child_var, child_val
+
+ # If that fails, then see if it's a submodule.
+ if (isinstance(val_doc, ModuleDoc) and
+ val_doc.submodules is not UNKNOWN):
+ for submodule in val_doc.submodules:
+ if submodule.canonical_name[-1] == identifier:
+ var_doc = None
+ val_doc = submodule
+ if val_doc is UNKNOWN: val_doc = None
+ return var_doc, val_doc
+
+ return None, None
+
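`_get` finds a root value whose canonical name is a prefix of the requested name, then descends one identifier at a time via `_get_from`. A toy version of the same prefix-then-descend lookup over plain nested dicts (the names and structure here are illustrative, not epydoc's API):

```python
def lookup(root, name):
    """Walk dotted `name` (e.g. 'pkg.mod.func') down nested dicts,
    starting from the root entry whose key is a prefix of the name."""
    parts = name.split('.')
    for root_name, root_val in root.items():
        prefix = root_name.split('.')
        if parts[:len(prefix)] == prefix:
            val = root_val
            # Descend one identifier at a time past the matched prefix.
            for identifier in parts[len(prefix):]:
                if not isinstance(val, dict):
                    return None
                val = val.get(identifier)
                if val is None:
                    return None
            return val
    return None

tree = {'epydoc': {'apidoc': {'DocIndex': 'class', 'pp_apidoc': 'func'}}}
print(lookup(tree, 'epydoc.apidoc.DocIndex'))  # class
print(lookup(tree, 'epydoc.missing'))          # None
```

The real method additionally caches results and falls back to submodule lists when the identifier is not a variable.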
+ def find(self, name, context):
+ """
+ Look for an C{APIDoc} named C{name}, relative to C{context}.
+ Return the C{APIDoc} if one is found; otherwise, return
+ C{None}. C{find} looks in the following places, in order:
+ - Function parameters (if one matches, return C{None})
+ - All enclosing namespaces, from closest to furthest.
+ - If C{name} starts with C{'self'}, then strip it off and
+ look for the remaining part of the name using C{find}
+ - Builtins
+ - Parameter attributes
+ - Classes at module level (if the name is not ambiguous)
+
+ @type name: C{str} or L{DottedName}
+ @type context: L{APIDoc}
+ """
+ if isinstance(name, basestring):
+ name = re.sub(r'\(.*\)$', '', name.strip())
+ if re.match('^([a-zA-Z_]\w*)(\.[a-zA-Z_]\w*)*$', name):
+ name = DottedName(name)
+ else:
+ return None
+ elif not isinstance(name, DottedName):
+ raise TypeError("'name' should be a string or DottedName")
+
+ if context is None or context.canonical_name is None:
+ container_name = []
+ else:
+ container_name = context.canonical_name
+
+ # Check for the name in all containing namespaces, starting
+ # with the closest one.
+ for i in range(len(container_name), -1, -1):
+ relative_name = container_name[:i]+name
+ # Is `name` the absolute name of a documented value?
+ # (excepting GenericValueDoc values.)
+ val_doc = self.get_valdoc(relative_name)
+ if (val_doc is not None and
+ not isinstance(val_doc, GenericValueDoc)):
+ return val_doc
+ # Is `name` the absolute name of a documented variable?
+ var_doc = self.get_vardoc(relative_name)
+ if var_doc is not None: return var_doc
+
+ # If the name begins with 'self', then try stripping that off
+ # and see if we can find the variable.
+ if name[0] == 'self':
+ doc = self.find('.'.join(name[1:]), context)
+ if doc is not None: return doc
+
+ # Is it the name of a builtin?
+ if len(name)==1 and hasattr(__builtin__, name[0]):
+ return None
+
+ # Is it a parameter's name or an attribute of a parameter?
+ if isinstance(context, RoutineDoc):
+ all_args = context.all_args()
+ if all_args is not UNKNOWN and name[0] in all_args:
+ return None
+
+ # Is this an object directly contained by any module?
+ doc = self.mlclasses.get(name[-1])
+ if isinstance(doc, APIDoc):
+ return doc
+ elif isinstance(doc, list):
+ log.warning("%s is an ambiguous name: it may be %s" % (
+ name[-1],
+ ", ".join([ "'%s'" % d.canonical_name for d in doc ])))
+
+ # Drop this item so that the warning is reported only once;
+ # later lookups of this name will simply fail.
+ del self.mlclasses[name[-1]]
+
+ def _get_module_classes(self, docs):
+ """
+ Gather all the classes defined in a list of modules.
+
+ Very often people refer to classes by class name alone,
+ even when the classes are not imported into the namespace.
+ Linking to such classes would fail if we looked for them only
+ in nested namespaces, so also allow them to be retrieved by
+ bare name.
+
+ @param docs: containers of the objects to collect
+ @type docs: C{list} of C{APIDoc}
+ @return: mapping from objects name to the object(s) with that name
+ @rtype: C{dict} from C{str} to L{ClassDoc} or C{list}
+ """
+ classes = {}
+ for doc in docs:
+ if not isinstance(doc, ModuleDoc):
+ continue
+
+ for var in doc.variables.values():
+ if not isinstance(var.value, ClassDoc):
+ continue
+
+ val = var.value
+ if val in (None, UNKNOWN) or val.defining_module is not doc:
+ continue
+ if val.canonical_name in (None, UNKNOWN):
+ continue
+
+ name = val.canonical_name[-1]
+ vals = classes.get(name)
+ if vals is None:
+ classes[name] = val
+ elif not isinstance(vals, list):
+ classes[name] = [ vals, val ]
+ else:
+ vals.append(val)
+
+ return classes
+
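`_get_module_classes` maps a bare name to a single value until a collision occurs, at which point the entry becomes a list so `find()` can later warn about the ambiguity. The accumulation pattern, sketched standalone:

```python
def index_by_name(pairs):
    """Map each name to its single value, or to a list when it repeats."""
    classes = {}
    for name, val in pairs:
        vals = classes.get(name)
        if vals is None:
            classes[name] = val            # first occurrence
        elif not isinstance(vals, list):
            classes[name] = [vals, val]    # second occurrence: promote to list
        else:
            vals.append(val)               # further occurrences
    return classes

idx = index_by_name([('Node', 'a.Node'), ('Leaf', 'a.Leaf'),
                     ('Node', 'b.Node')])
print(idx['Leaf'])  # a.Leaf
print(idx['Node'])  # ['a.Node', 'b.Node']
```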
+ #////////////////////////////////////////////////////////////
+ # etc
+ #////////////////////////////////////////////////////////////
+
+ def reachable_valdocs(self, **filters):
+ """
+ Return a list of all C{ValueDoc}s that can be reached,
+ directly or indirectly from this C{DocIndex}'s root set.
+
+ @param filters: A set of filters that can be used to prevent
+ C{reachable_valdocs} from following specific link types
+ when looking for C{ValueDoc}s that can be reached from the
+ root set. See C{APIDoc.apidoc_links} for a more complete
+ description.
+ """
+ return reachable_valdocs(self.root, **filters)
+
+ def container(self, api_doc):
+ """
+ Return the C{ValueDoc} that contains the given C{APIDoc}, or
+ C{None} if its container is not in the index.
+ """
+ # Check if the result is cached.
+ val = self._container_cache.get(api_doc)
+ if val is not None: return val
+
+ if isinstance(api_doc, GenericValueDoc):
+ self._container_cache[api_doc] = None
+ return None # [xx] unknown.
+ if isinstance(api_doc, VariableDoc):
+ self._container_cache[api_doc] = api_doc.container
+ return api_doc.container
+ if len(api_doc.canonical_name) == 1:
+ self._container_cache[api_doc] = None
+ return None
+ elif isinstance(api_doc, ModuleDoc) and api_doc.package is not UNKNOWN:
+ self._container_cache[api_doc] = api_doc.package
+ return api_doc.package
+ else:
+ parent = self.get_valdoc(api_doc.canonical_name.container())
+ self._container_cache[api_doc] = parent
+ return parent
+
+ #////////////////////////////////////////////////////////////
+ # Profiling information
+ #////////////////////////////////////////////////////////////
+
+ def read_profiling_info(self, profile_stats):
+ """
+ Initialize the L{callers} and L{callees} variables, given a
+ C{Stats} object from the C{pstats} module.
+
+ @warning: This method uses undocumented data structures inside
+ of C{profile_stats}.
+ """
+ if self.callers is None: self.callers = {}
+ if self.callees is None: self.callees = {}
+
+ # The Stat object encodes functions using `funcid`s, or
+ # tuples of (filename, lineno, funcname). Create a mapping
+ # from these `funcid`s to `RoutineDoc`s.
+ self._update_funcid_to_doc(profile_stats)
+
+ for callee, (cc, nc, tt, ct, callers) in profile_stats.stats.items():
+ callee = self._funcid_to_doc.get(callee)
+ if callee is None: continue
+ for caller in callers:
+ caller = self._funcid_to_doc.get(caller)
+ if caller is None: continue
+ self.callers.setdefault(callee, []).append(caller)
+ self.callees.setdefault(caller, []).append(callee)
+
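`read_profiling_info` turns the one-directional caller records inside each pstats entry into two symmetric mappings. The inversion, in miniature (the `stats` dict below is a hand-made stand-in for pstats' internal structure, not real profiler output):

```python
# Hypothetical pstats-like data: callee funcid -> dict of caller funcids.
stats = {
    'f': {'main': 1, 'g': 2},
    'g': {'main': 1},
}

callers, callees = {}, {}
for callee, caller_dict in stats.items():
    for caller in caller_dict:
        # Record the edge in both directions.
        callers.setdefault(callee, []).append(caller)
        callees.setdefault(caller, []).append(callee)

print(sorted(callers['f']))     # ['g', 'main']
print(sorted(callees['main']))  # ['f', 'g']
```

The real method additionally translates each funcid through `_funcid_to_doc` and silently skips functions that were not indexed.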
+ def _update_funcid_to_doc(self, profile_stats):
+ """
+ Update the dictionary mapping from C{pstats.Stats} function ids
+ to C{RoutineDoc}s. C{pstats.Stats} function ids are tuples of
+ C{(filename, lineno, funcname)}.
+ """
+ # Maps (filename, lineno, funcname) -> RoutineDoc
+ for val_doc in self.reachable_valdocs():
+ # We only care about routines.
+ if not isinstance(val_doc, RoutineDoc): continue
+ # Get the filename from the defining module.
+ module = val_doc.defining_module
+ if module is UNKNOWN or module.filename is UNKNOWN: continue
+ # Normalize the filename.
+ filename = os.path.abspath(module.filename)
+ try: filename = py_src_filename(filename)
+ except: pass
+ # Look up the stat_func_id
+ funcid = (filename, val_doc.lineno, val_doc.canonical_name[-1])
+ if funcid in profile_stats.stats:
+ self._funcid_to_doc[funcid] = val_doc
+
+######################################################################
+## Pretty Printing
+######################################################################
+
+def pp_apidoc(api_doc, doublespace=0, depth=5, exclude=(), include=(),
+ backpointers=None):
+ """
+ @return: A multiline pretty-printed string representation for the
+ given C{APIDoc}.
+ @param doublespace: If true, then extra lines will be
+ inserted to make the output more readable.
+ @param depth: The maximum depth that pp_apidoc will descend
+ into descendent VarDocs. To put no limit on
+ depth, use C{depth=-1}.
+ @param exclude: A list of names of attributes whose values should
+ not be shown.
+ @param backpointers: For internal use.
+ """
+ pyid = id(api_doc.__dict__)
+ if backpointers is None: backpointers = {}
+ if (hasattr(api_doc, 'canonical_name') and
+ api_doc.canonical_name not in (None, UNKNOWN)):
+ name = '%s for %s' % (api_doc.__class__.__name__,
+ api_doc.canonical_name)
+ elif getattr(api_doc, 'name', None) not in (UNKNOWN, None):
+ if (getattr(api_doc, 'container', None) not in (UNKNOWN, None) and
+ getattr(api_doc.container, 'canonical_name', None)
+ not in (UNKNOWN, None)):
+ name ='%s for %s' % (api_doc.__class__.__name__,
+ api_doc.container.canonical_name+
+ api_doc.name)
+ else:
+ name = '%s for %s' % (api_doc.__class__.__name__, api_doc.name)
+ else:
+ name = api_doc.__class__.__name__
+
+ if pyid in backpointers:
+ return '%s [%s] (defined above)' % (name, backpointers[pyid])
+
+ if depth == 0:
+ if hasattr(api_doc, 'name') and api_doc.name is not None:
+ return '%s...' % api_doc.name
+ else:
+ return '...'
+
+ backpointers[pyid] = len(backpointers)
+ s = '%s [%s]' % (name, backpointers[pyid])
+
+ # Only print non-empty fields:
+ if include:
+ fields = [field for field in dir(api_doc)
+ if field in include]
+ else:
+ fields = [field for field in api_doc.__dict__.keys()
+ if (getattr(api_doc, field) is not UNKNOWN
+ and field not in exclude)]
+ fields.sort()
+
+ for field in fields:
+ fieldval = getattr(api_doc, field)
+ if doublespace: s += '\n |'
+ s += '\n +- %s' % field
+
+ if (isinstance(fieldval, types.ListType) and
+ len(fieldval)>0 and
+ isinstance(fieldval[0], APIDoc)):
+ s += _pp_list(api_doc, fieldval, doublespace, depth,
+ exclude, include, backpointers,
+ (field is fields[-1]))
+ elif (isinstance(fieldval, types.DictType) and
+ len(fieldval)>0 and
+ isinstance(fieldval.values()[0], APIDoc)):
+ s += _pp_dict(api_doc, fieldval, doublespace,
+ depth, exclude, include, backpointers,
+ (field is fields[-1]))
+ elif isinstance(fieldval, APIDoc):
+ s += _pp_apidoc(api_doc, fieldval, doublespace, depth,
+ exclude, include, backpointers,
+ (field is fields[-1]))
+ else:
+ s += ' = ' + _pp_val(api_doc, fieldval, doublespace,
+ depth, exclude, include, backpointers)
+
+ return s
+
+def _pp_list(api_doc, items, doublespace, depth, exclude, include,
+ backpointers, is_last):
+ line1 = (is_last and ' ') or '|'
+ s = ''
+ for item in items:
+ line2 = ((item is items[-1]) and ' ') or '|'
+ joiner = '\n %s %s ' % (line1, line2)
+ if doublespace: s += '\n %s |' % line1
+ s += '\n %s +- ' % line1
+ valstr = _pp_val(api_doc, item, doublespace, depth, exclude, include,
+ backpointers)
+ s += joiner.join(valstr.split('\n'))
+ return s
+
+def _pp_dict(api_doc, dict, doublespace, depth, exclude, include,
+ backpointers, is_last):
+ items = dict.items()
+ items.sort()
+ line1 = (is_last and ' ') or '|'
+ s = ''
+ for item in items:
+ line2 = ((item is items[-1]) and ' ') or '|'
+ joiner = '\n %s %s ' % (line1, line2)
+ if doublespace: s += '\n %s |' % line1
+ s += '\n %s +- ' % line1
+ valstr = _pp_val(api_doc, item[1], doublespace, depth, exclude,
+ include, backpointers)
+ s += joiner.join(('%s => %s' % (item[0], valstr)).split('\n'))
+ return s
+
+def _pp_apidoc(api_doc, val, doublespace, depth, exclude, include,
+ backpointers, is_last):
+ line1 = (is_last and ' ') or '|'
+ s = ''
+ if doublespace: s += '\n %s | ' % line1
+ s += '\n %s +- ' % line1
+ joiner = '\n %s ' % line1
+ childstr = pp_apidoc(val, doublespace, depth-1, exclude,
+ include, backpointers)
+ return s + joiner.join(childstr.split('\n'))
+
+def _pp_val(api_doc, val, doublespace, depth, exclude, include, backpointers):
+ from epydoc import markup
+ if isinstance(val, APIDoc):
+ return pp_apidoc(val, doublespace, depth-1, exclude,
+ include, backpointers)
+ elif isinstance(val, markup.ParsedDocstring):
+ valrepr = repr(val.to_plaintext(None))
+ if len(valrepr) < 40: return valrepr
+ else: return valrepr[:37]+'...'
+ else:
+ valrepr = repr(val)
+ if len(valrepr) < 40: return valrepr
+ else: return valrepr[:37]+'...'
+
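`_pp_val` clamps every leaf repr to keep the pretty-printed tree readable: anything 40 characters or longer is cut to 37 characters plus an ellipsis. The truncation rule in isolation (the helper name is illustrative):

```python
def short_repr(val, limit=40):
    """repr(val), truncated with an ellipsis when it reaches `limit`."""
    valrepr = repr(val)
    if len(valrepr) < limit:
        return valrepr
    return valrepr[:limit - 3] + '...'

print(short_repr([1, 2, 3]))       # [1, 2, 3]
print(len(short_repr('x' * 100)))  # 40
```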
diff --git a/python/helpers/epydoc/checker.py b/python/helpers/epydoc/checker.py
new file mode 100644
index 0000000..3bc41d0
--- /dev/null
+++ b/python/helpers/epydoc/checker.py
@@ -0,0 +1,349 @@
+#
+# objdoc: epydoc documentation completeness checker
+# Edward Loper
+#
+# Created [01/30/01 05:18 PM]
+# $Id: checker.py 1366 2006-09-07 15:54:59Z edloper $
+#
+
+"""
+Documentation completeness checker. This module defines a single
+class, C{DocChecker}, which can be used to check that specified
+classes of objects are documented.
+"""
+__docformat__ = 'epytext en'
+
+##################################################
+## Imports
+##################################################
+
+import re, sys, os.path, string
+from xml.dom.minidom import Text as _Text
+from epydoc.apidoc import *
+
+# The following methods may be undocumented:
+_NO_DOCS = ['__hash__', '__repr__', '__str__', '__cmp__']
+
+# The following methods never need descriptions, authors, or
+# versions:
+_NO_BASIC = ['__hash__', '__repr__', '__str__', '__cmp__']
+
+# The following methods never need return value descriptions.
+_NO_RETURN = ['__init__', '__hash__', '__repr__', '__str__', '__cmp__']
+
+# The following methods don't need parameters documented:
+_NO_PARAM = ['__cmp__']
+
+class DocChecker:
+ """
+ Documentation completeness checker. C{DocChecker} can be used to
+ check that specified classes of objects are documented. To check
+ the documentation for a group of objects, you should create a
+ C{DocChecker} from a L{DocIndex<apidoc.DocIndex>} that documents
+ those objects; and then use the L{check} method to run specified
+ checks on the objects' documentation.
+
+ What checks are run, and what objects they are run on, are
+ specified by the constants defined by C{DocChecker}. These
+ constants are divided into three groups.
+
+ - Type specifiers indicate what type of objects should be
+ checked: L{MODULE}; L{CLASS}; L{FUNC}; L{VAR}; L{IVAR};
+ L{CVAR}; L{PARAM}; and L{RETURN}.
+ - Public/private specifiers indicate whether public or private
+ objects should be checked: L{PRIVATE}.
+ - Check specifiers indicate what checks should be run on the
+ objects: L{TYPE}; L{DESCR}; L{AUTHOR};
+ and L{VERSION}.
+
+ The L{check} method is used to perform a check on the
+ documentation. Its parameter is formed by or-ing together at
+ least one value from each specifier group:
+
+ >>> checker.check(DocChecker.MODULE | DocChecker.DESCR)
+
+ To specify multiple values from a single group, simply or their
+ values together:
+
+ >>> checker.check(DocChecker.MODULE | DocChecker.CLASS |
+ ... DocChecker.FUNC )
+
+ @group Types: MODULE, CLASS, FUNC, VAR, IVAR, CVAR, PARAM,
+ RETURN, ALL_T
+ @type MODULE: C{int}
+ @cvar MODULE: Type specifier that indicates that the documentation
+ of modules should be checked.
+ @type CLASS: C{int}
+ @cvar CLASS: Type specifier that indicates that the documentation
+ of classes should be checked.
+ @type FUNC: C{int}
+ @cvar FUNC: Type specifier that indicates that the documentation
+ of functions should be checked.
+ @type VAR: C{int}
+ @cvar VAR: Type specifier that indicates that the documentation
+ of module variables should be checked.
+ @type IVAR: C{int}
+ @cvar IVAR: Type specifier that indicates that the documentation
+ of instance variables should be checked.
+ @type CVAR: C{int}
+ @cvar CVAR: Type specifier that indicates that the documentation
+ of class variables should be checked.
+ @type PARAM: C{int}
+ @cvar PARAM: Type specifier that indicates that the documentation
+ of function and method parameters should be checked.
+ @type RETURN: C{int}
+ @cvar RETURN: Type specifier that indicates that the documentation
+ of return values should be checked.
+ @type ALL_T: C{int}
+ @cvar ALL_T: Type specifier that indicates that the documentation
+ of all objects should be checked.
+
+ @group Checks: TYPE, AUTHOR, VERSION, DESCR, ALL_C
+ @type TYPE: C{int}
+ @cvar TYPE: Check specifier that indicates that every variable and
+ parameter should have a C{@type} field.
+ @type AUTHOR: C{int}
+    @cvar AUTHOR: Check specifier that indicates that every object
+        should have an C{@author} field.
+    @type VERSION: C{int}
+    @cvar VERSION: Check specifier that indicates that every object
+        should have a C{@version} field.
+ @type DESCR: C{int}
+ @cvar DESCR: Check specifier that indicates that every object
+ should have a description.
+ @type ALL_C: C{int}
+ @cvar ALL_C: Check specifier that indicates that all checks
+ should be run.
+
+ @group Publicity: PRIVATE
+ @type PRIVATE: C{int}
+ @cvar PRIVATE: Specifier that indicates that private objects should
+ be checked.
+ """
+ # Types
+ MODULE = 1
+ CLASS = 2
+ FUNC = 4
+ VAR = 8
+    #IVAR = 16
+    #CVAR = 32
+    PARAM = 64
+    RETURN = 128
+    PROPERTY = 256
+    ALL_T = 1+2+4+8+16+32+64+128+256
+
+    # Checks (values kept disjoint from the type specifiers above,
+    # so that TYPE does not collide with PROPERTY)
+    TYPE = 512
+    AUTHOR = 1024
+    VERSION = 2048
+    DESCR = 4096
+    ALL_C = 512+1024+2048+4096
+
+ # Private/public
+ PRIVATE = 16384
+
+ ALL = ALL_T + ALL_C + PRIVATE
+
+ def __init__(self, docindex):
+ """
+        Create a new C{DocChecker} that can be used to run checks on
+        the documentation of the objects documented by C{docindex}.
+
+        @param docindex: A documentation index containing the
+            documentation for the objects to be checked.
+        @type docindex: L{DocIndex<apidoc.DocIndex>}
+ """
+ self._docindex = docindex
+
+ # Initialize instance variables
+ self._checks = 0
+ self._last_warn = None
+ self._out = sys.stdout
+ self._num_warnings = 0
+
+ def check(self, *check_sets):
+ """
+        Run the specified checks on the documentation of the objects
+        contained by this C{DocChecker}'s C{DocIndex}.  Any problems
+        found are reported via the active logger.
+
+ @param check_sets: The checks that should be run on the
+ documentation. This value is constructed by or-ing
+ together the specifiers that indicate which objects should
+ be checked, and which checks should be run. See the
+ L{module description<checker>} for more information.
+ If no checks are specified, then a default set of checks
+ will be run.
+ @type check_sets: C{int}
+ @return: True if no problems were found.
+ @rtype: C{boolean}
+ """
+ if not check_sets:
+ check_sets = (DocChecker.MODULE | DocChecker.CLASS |
+ DocChecker.FUNC | DocChecker.VAR |
+ DocChecker.DESCR,)
+
+ self._warnings = {}
+ log.start_progress('Checking docs')
+        for checks in check_sets:
+            self._check(checks)
+        log.end_progress()
+
+        for (warning, docs) in self._warnings.items():
+            docs = sorted(docs)
+            docnames = '\n'.join([' - %s' % self._name(d) for d in docs])
+            log.warning('%s:\n%s' % (warning, docnames))
+        # Report success iff no warnings were accumulated.
+        return not self._warnings
+
+ def _check(self, checks):
+ self._checks = checks
+
+ # Get the list of objects to check.
+ valdocs = sorted(self._docindex.reachable_valdocs(
+ imports=False, packages=False, bases=False, submodules=False,
+ subclasses=False, private = (checks & DocChecker.PRIVATE)))
+ docs = set()
+ for d in valdocs:
+ if not isinstance(d, GenericValueDoc): docs.add(d)
+ for doc in valdocs:
+ if isinstance(doc, NamespaceDoc):
+ for d in doc.variables.values():
+ if isinstance(d.value, GenericValueDoc): docs.add(d)
+
+ for i, doc in enumerate(sorted(docs)):
+ if isinstance(doc, ModuleDoc):
+ self._check_module(doc)
+ elif isinstance(doc, ClassDoc):
+ self._check_class(doc)
+ elif isinstance(doc, RoutineDoc):
+ self._check_func(doc)
+ elif isinstance(doc, PropertyDoc):
+ self._check_property(doc)
+ elif isinstance(doc, VariableDoc):
+ self._check_var(doc)
+ else:
+ log.error("Don't know how to check %r" % doc)
+
+ def _name(self, doc):
+ name = str(doc.canonical_name)
+ if isinstance(doc, RoutineDoc): name += '()'
+ return name
+
+ def _check_basic(self, doc):
+ """
+        Check the description, author, and version fields of
+ C{doc}. This is used as a helper function by L{_check_module},
+ L{_check_class}, and L{_check_func}.
+
+ @param doc: The documentation that should be checked.
+ @type doc: L{APIDoc}
+ @rtype: C{None}
+ """
+ if ((self._checks & DocChecker.DESCR) and
+ (doc.descr in (None, UNKNOWN))):
+ if doc.docstring in (None, UNKNOWN):
+ self.warning('Undocumented', doc)
+ else:
+ self.warning('No description', doc)
+ if self._checks & DocChecker.AUTHOR:
+ for tag, arg, descr in doc.metadata:
+ if 'author' == tag: break
+ else:
+ self.warning('No authors', doc)
+ if self._checks & DocChecker.VERSION:
+ for tag, arg, descr in doc.metadata:
+ if 'version' == tag: break
+ else:
+ self.warning('No version', doc)
+
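The author/version searches in `_check_basic` above rely on Python's `for`/`else`: the `else` suite runs only when the loop finishes without hitting `break`. A standalone sketch of that lookup pattern (the `has_tag` helper is hypothetical, not part of epydoc):

```python
def has_tag(metadata, wanted):
    """Return True if any (tag, arg, descr) triple carries `wanted`."""
    for tag, arg, descr in metadata:
        if tag == wanted:
            break              # found: the else clause is skipped
    else:
        return False           # loop exhausted without a break
    return True

metadata = [('author', None, 'Edward Loper'), ('version', None, '3.0')]
assert has_tag(metadata, 'author')
assert not has_tag(metadata, 'license')
```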
+ def _check_module(self, doc):
+ """
+ Run checks on the module whose APIDoc is C{doc}.
+
+ @param doc: The APIDoc of the module to check.
+ @type doc: L{APIDoc}
+ @rtype: C{None}
+ """
+ if self._checks & DocChecker.MODULE:
+ self._check_basic(doc)
+
+ def _check_class(self, doc):
+ """
+ Run checks on the class whose APIDoc is C{doc}.
+
+ @param doc: The APIDoc of the class to check.
+ @type doc: L{APIDoc}
+ @rtype: C{None}
+ """
+ if self._checks & DocChecker.CLASS:
+ self._check_basic(doc)
+
+ def _check_property(self, doc):
+ if self._checks & DocChecker.PROPERTY:
+ self._check_basic(doc)
+
+ def _check_var(self, doc):
+ """
+        Run checks on the variable whose documentation is C{doc}.
+
+ @param doc: The documentation for the variable to check.
+ @type doc: L{APIDoc}
+ @rtype: C{None}
+ """
+ if self._checks & DocChecker.VAR:
+ if (self._checks & (DocChecker.DESCR|DocChecker.TYPE) and
+ doc.descr in (None, UNKNOWN) and
+ doc.type_descr in (None, UNKNOWN) and
+ doc.docstring in (None, UNKNOWN)):
+ self.warning('Undocumented', doc)
+ else:
+ if (self._checks & DocChecker.DESCR and
+ doc.descr in (None, UNKNOWN)):
+ self.warning('No description', doc)
+ if (self._checks & DocChecker.TYPE and
+ doc.type_descr in (None, UNKNOWN)):
+ self.warning('No type information', doc)
+
+ def _check_func(self, doc):
+ """
+ Run checks on the function whose APIDoc is C{doc}.
+
+ @param doc: The APIDoc of the function to check.
+ @type doc: L{APIDoc}
+ @rtype: C{None}
+ """
+ if (self._checks & DocChecker.FUNC and
+ doc.docstring in (None, UNKNOWN) and
+ doc.canonical_name[-1] not in _NO_DOCS):
+ self.warning('Undocumented', doc)
+ return
+ if (self._checks & DocChecker.FUNC and
+ doc.canonical_name[-1] not in _NO_BASIC):
+ self._check_basic(doc)
+ if (self._checks & DocChecker.RETURN and
+ doc.canonical_name[-1] not in _NO_RETURN):
+ if (doc.return_type in (None, UNKNOWN) and
+ doc.return_descr in (None, UNKNOWN)):
+ self.warning('No return descr', doc)
+ if (self._checks & DocChecker.PARAM and
+ doc.canonical_name[-1] not in _NO_PARAM):
+ if doc.arg_descrs in (None, UNKNOWN):
+ self.warning('No argument info', doc)
+ else:
+ args_with_descr = []
+ for arg, descr in doc.arg_descrs:
+ if isinstance(arg, basestring):
+ args_with_descr.append(arg)
+ else:
+ args_with_descr += arg
+ for posarg in doc.posargs:
+ if (self._checks & DocChecker.DESCR and
+ posarg not in args_with_descr):
+ self.warning('Argument(s) not described', doc)
+ if (self._checks & DocChecker.TYPE and
+ posarg not in doc.arg_types):
+ self.warning('Argument type(s) not described', doc)
+
+ def warning(self, msg, doc):
+ self._warnings.setdefault(msg,set()).add(doc)
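The bit-flag specifiers at the top of `DocChecker` are combined with `|` and tested with `&`. A minimal sketch of that idiom, using illustrative flag values rather than the class's own constants:

```python
# Illustrative power-of-two flags mirroring the DocChecker idiom
# (values chosen for the sketch, not taken from the class above).
MODULE, CLASS, FUNC, VAR = 1, 2, 4, 8
AUTHOR, DESCR = 1024, 4096

def selected(checks, flag):
    """Return True if `flag` was or-ed into `checks`."""
    return bool(checks & flag)

checks = MODULE | CLASS | DESCR      # one value from each specifier group
assert selected(checks, MODULE)
assert selected(checks, DESCR)
assert not selected(checks, AUTHOR)  # AUTHOR was not requested
```

Because every flag is a distinct power of two, any combination can be packed into one integer and recovered unambiguously.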
diff --git a/python/helpers/epydoc/cli.py b/python/helpers/epydoc/cli.py
new file mode 100644
index 0000000..fe2359a
--- /dev/null
+++ b/python/helpers/epydoc/cli.py
@@ -0,0 +1,1470 @@
+# epydoc -- Command line interface
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: cli.py 1678 2008-01-29 17:21:29Z edloper $
+
+"""
+Command-line interface for epydoc. Abbreviated Usage::
+
+ epydoc [options] NAMES...
+
+ NAMES... The Python modules to document.
+ --html Generate HTML output (default).
+ --latex Generate LaTeX output.
+ --pdf Generate pdf output, via LaTeX.
+ -o DIR, --output DIR The output directory.
+ --inheritance STYLE The format for showing inherited objects.
+ -V, --version Print the version of epydoc.
+ -h, --help Display a usage message.
+
+Run "epydoc --help" for a complete option list.  See the epydoc(1)
+man page for more information.
+
+Config Files
+============
+Configuration files can be specified with the C{--config} option.
+These files are read using U{ConfigParser
+<http://docs.python.org/lib/module-ConfigParser.html>}. Configuration
+files may set options or add names of modules to document. Option
+names are (usually) identical to the long names of command line
+options. To specify names to document, use any of the following
+option names::
+
+ module modules value values object objects
+
+A simple example of a config file is::
+
+ [epydoc]
+    modules: sys, os, os.path, re, %(MYSANDBOXPATH)s/utilities.py
+ name: Example
+ graph: classtree
+ introspect: no
+
+All ConfigParser interpolations are done using local values and the
+environment variables.
+
+
+Verbosity Levels
+================
+The C{-v} and C{-q} options increase and decrease verbosity,
+respectively. The default verbosity level is zero. The verbosity
+levels are currently defined as follows::
+
+ Progress Markup warnings Warnings Errors
+ -3 none no no no
+ -2 none no no yes
+ -1 none no yes yes
+ 0 (default) bar no yes yes
+ 1 bar yes yes yes
+ 2 list yes yes yes
+"""
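The config-file interpolation described above can be exercised directly. This sketch uses Python 3's `configparser` for a runnable illustration (the vendored code targets Python 2's `ConfigParser`, whose `get(..., vars=...)` behaves the same way), with a stand-in variable dictionary in place of `os.environ`:

```python
import configparser
import io

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(
    "[epydoc]\n"
    "modules: sys, os, %(MYSANDBOXPATH)s/utilities.py\n"
))

# Entries passed via vars= take part in %(NAME)s interpolation,
# just as the CLI passes os.environ when reading config files.
val = cfg.get("epydoc", "modules",
              vars={"MYSANDBOXPATH": "/home/me/sandbox"})
assert val == "sys, os, /home/me/sandbox/utilities.py"
```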
+__docformat__ = 'epytext en'
+
+import sys, os, time, re, pickle, textwrap
+from glob import glob
+from optparse import OptionParser, OptionGroup, SUPPRESS_HELP
+import optparse
+import epydoc
+from epydoc import log
+from epydoc.util import wordwrap, run_subprocess, RunSubprocessError
+from epydoc.util import plaintext_to_html
+from epydoc.apidoc import UNKNOWN
+from epydoc.compat import *
+import ConfigParser
+from epydoc.docwriter.html_css import STYLESHEETS as CSS_STYLESHEETS
+
+# This module is only available if Docutils is installed.
+try:
+    from epydoc.docwriter import xlink
+except ImportError:
+    xlink = None
+
+INHERITANCE_STYLES = ('grouped', 'listed', 'included')
+GRAPH_TYPES = ('classtree', 'callgraph', 'umlclasstree')
+ACTIONS = ('html', 'text', 'latex', 'dvi', 'ps', 'pdf', 'check')
+DEFAULT_DOCFORMAT = 'epytext'
+PROFILER = 'profile' #: Which profiler to use: 'hotshot' or 'profile'
+
+######################################################################
+#{ Help Topics
+######################################################################
+
+DOCFORMATS = ('epytext', 'plaintext', 'restructuredtext', 'javadoc')
+HELP_TOPICS = {
+ 'docformat': textwrap.dedent('''\
+ __docformat__ is a module variable that specifies the markup
+ language for the docstrings in a module. Its value is a
+        string, consisting of the name of a markup language, optionally
+ followed by a language code (such as "en" for English). Epydoc
+ currently recognizes the following markup language names:
+ ''' + ', '.join(DOCFORMATS)),
+ 'inheritance': textwrap.dedent('''\
+ The following inheritance formats are currently supported:
+ - grouped: inherited objects are gathered into groups,
+ based on what class they were inherited from.
+ - listed: inherited objects are listed in a short list
+ at the end of their section.
+ - included: inherited objects are mixed in with
+ non-inherited objects.'''),
+ 'css': textwrap.dedent(
+ 'The following built-in CSS stylesheets are available:\n' +
+ '\n'.join([' %10s: %s' % (key, descr)
+ for (key, (sheet, descr))
+ in CSS_STYLESHEETS.items()])),
+ #'checks': textwrap.dedent('''\
+ #
+ # '''),
+ }
+
+
+HELP_TOPICS['topics'] = wordwrap(
+ 'Epydoc can provide additional help for the following topics: ' +
+ ', '.join(['%r' % topic for topic in HELP_TOPICS.keys()]))
+
+######################################################################
+#{ Argument & Config File Parsing
+######################################################################
+
+OPTION_DEFAULTS = dict(
+ action="html", show_frames=True, docformat=DEFAULT_DOCFORMAT,
+ show_private=True, show_imports=False, inheritance="listed",
+ verbose=0, quiet=0, load_pickle=False, parse=True, introspect=True,
+ debug=epydoc.DEBUG, profile=False, graphs=[],
+ list_classes_separately=False, graph_font=None, graph_font_size=None,
+ include_source_code=True, pstat_files=[], simple_term=False, fail_on=None,
+ exclude=[], exclude_parse=[], exclude_introspect=[],
+ external_api=[], external_api_file=[], external_api_root=[],
+ redundant_details=False, src_code_tab_width=8)
+
+def parse_arguments():
+ # Construct the option parser.
+ usage = '%prog [ACTION] [options] NAMES...'
+ version = "Epydoc, version %s" % epydoc.__version__
+ optparser = OptionParser(usage=usage, add_help_option=False)
+
+ optparser.add_option('--config',
+ action='append', dest="configfiles", metavar='FILE',
+ help=("A configuration file, specifying additional OPTIONS "
+ "and/or NAMES. This option may be repeated."))
+
+ optparser.add_option("--output", "-o",
+ dest="target", metavar="PATH",
+ help="The output directory. If PATH does not exist, then "
+ "it will be created.")
+
+ optparser.add_option("--quiet", "-q",
+ action="count", dest="quiet",
+ help="Decrease the verbosity.")
+
+ optparser.add_option("--verbose", "-v",
+ action="count", dest="verbose",
+ help="Increase the verbosity.")
+
+ optparser.add_option("--debug",
+ action="store_true", dest="debug",
+ help="Show full tracebacks for internal errors.")
+
+ optparser.add_option("--simple-term",
+ action="store_true", dest="simple_term",
+ help="Do not try to use color or cursor control when displaying "
+ "the progress bar, warnings, or errors.")
+
+
+ action_group = OptionGroup(optparser, 'Actions')
+ optparser.add_option_group(action_group)
+
+ action_group.add_option("--html",
+ action="store_const", dest="action", const="html",
+ help="Write HTML output.")
+
+ action_group.add_option("--text",
+ action="store_const", dest="action", const="text",
+ help="Write plaintext output. (not implemented yet)")
+
+ action_group.add_option("--latex",
+ action="store_const", dest="action", const="latex",
+ help="Write LaTeX output.")
+
+ action_group.add_option("--dvi",
+ action="store_const", dest="action", const="dvi",
+ help="Write DVI output.")
+
+ action_group.add_option("--ps",
+ action="store_const", dest="action", const="ps",
+ help="Write Postscript output.")
+
+ action_group.add_option("--pdf",
+ action="store_const", dest="action", const="pdf",
+ help="Write PDF output.")
+
+ action_group.add_option("--check",
+ action="store_const", dest="action", const="check",
+ help="Check completeness of docs.")
+
+ action_group.add_option("--pickle",
+ action="store_const", dest="action", const="pickle",
+ help="Write the documentation to a pickle file.")
+
+ # Provide our own --help and --version options.
+ action_group.add_option("--version",
+ action="store_const", dest="action", const="version",
+ help="Show epydoc's version number and exit.")
+
+ action_group.add_option("-h", "--help",
+ action="store_const", dest="action", const="help",
+ help="Show this message and exit. For help on specific "
+ "topics, use \"--help TOPIC\". Use \"--help topics\" for a "
+             "list of available help topics.")
+
+
+ generation_group = OptionGroup(optparser, 'Generation Options')
+ optparser.add_option_group(generation_group)
+
+ generation_group.add_option("--docformat",
+ dest="docformat", metavar="NAME",
+ help="The default markup language for docstrings. Defaults "
+ "to \"%s\"." % DEFAULT_DOCFORMAT)
+
+ generation_group.add_option("--parse-only",
+ action="store_false", dest="introspect",
+ help="Get all information from parsing (don't introspect)")
+
+ generation_group.add_option("--introspect-only",
+ action="store_false", dest="parse",
+ help="Get all information from introspecting (don't parse)")
+
+ generation_group.add_option("--exclude",
+ dest="exclude", metavar="PATTERN", action="append",
+ help="Exclude modules whose dotted name matches "
+ "the regular expression PATTERN")
+
+ generation_group.add_option("--exclude-introspect",
+ dest="exclude_introspect", metavar="PATTERN", action="append",
+ help="Exclude introspection of modules whose dotted name matches "
+ "the regular expression PATTERN")
+
+ generation_group.add_option("--exclude-parse",
+ dest="exclude_parse", metavar="PATTERN", action="append",
+ help="Exclude parsing of modules whose dotted name matches "
+ "the regular expression PATTERN")
+
+ generation_group.add_option("--inheritance",
+ dest="inheritance", metavar="STYLE",
+        help="The format for showing inherited objects. STYLE "
+ "should be one of: %s." % ', '.join(INHERITANCE_STYLES))
+
+ generation_group.add_option("--show-private",
+ action="store_true", dest="show_private",
+ help="Include private variables in the output. (default)")
+
+ generation_group.add_option("--no-private",
+ action="store_false", dest="show_private",
+ help="Do not include private variables in the output.")
+
+ generation_group.add_option("--show-imports",
+ action="store_true", dest="show_imports",
+ help="List each module's imports.")
+
+ generation_group.add_option("--no-imports",
+ action="store_false", dest="show_imports",
+ help="Do not list each module's imports. (default)")
+
+ generation_group.add_option('--show-sourcecode',
+ action='store_true', dest='include_source_code',
+ help=("Include source code with syntax highlighting in the "
+ "HTML output. (default)"))
+
+ generation_group.add_option('--no-sourcecode',
+ action='store_false', dest='include_source_code',
+ help=("Do not include source code with syntax highlighting in the "
+ "HTML output."))
+
+ generation_group.add_option('--include-log',
+ action='store_true', dest='include_log',
+ help=("Include a page with the process log (epydoc-log.html)"))
+
+ generation_group.add_option(
+ '--redundant-details',
+ action='store_true', dest='redundant_details',
+ help=("Include values in the details lists even if all info "
+ "about them is already provided by the summary table."))
+
+ output_group = OptionGroup(optparser, 'Output Options')
+ optparser.add_option_group(output_group)
+
+ output_group.add_option("--name", "-n",
+ dest="prj_name", metavar="NAME",
+ help="The documented project's name (for the navigation bar).")
+
+ output_group.add_option("--css", "-c",
+ dest="css", metavar="STYLESHEET",
+ help="The CSS stylesheet. STYLESHEET can be either a "
+ "builtin stylesheet or the name of a CSS file.")
+
+ output_group.add_option("--url", "-u",
+ dest="prj_url", metavar="URL",
+ help="The documented project's URL (for the navigation bar).")
+
+ output_group.add_option("--navlink",
+ dest="prj_link", metavar="HTML",
+ help="HTML code for a navigation link to place in the "
+ "navigation bar.")
+
+ output_group.add_option("--top",
+ dest="top_page", metavar="PAGE",
+ help="The \"top\" page for the HTML documentation. PAGE can "
+ "be a URL, the name of a module or class, or one of the "
+ "special names \"trees.html\", \"indices.html\", or \"help.html\"")
+
+ output_group.add_option("--help-file",
+ dest="help_file", metavar="FILE",
+ help="An alternate help file. FILE should contain the body "
+ "of an HTML file -- navigation bars will be added to it.")
+
+ output_group.add_option("--show-frames",
+ action="store_true", dest="show_frames",
+ help="Include frames in the HTML output. (default)")
+
+ output_group.add_option("--no-frames",
+ action="store_false", dest="show_frames",
+ help="Do not include frames in the HTML output.")
+
+ output_group.add_option('--separate-classes',
+ action='store_true', dest='list_classes_separately',
+ help=("When generating LaTeX or PDF output, list each class in "
+ "its own section, instead of listing them under their "
+ "containing module."))
+
+ output_group.add_option('--src-code-tab-width',
+ action='store', type='int', dest='src_code_tab_width',
+ help=("When generating HTML output, sets the number of spaces "
+ "each tab in source code listings is replaced with."))
+
+ # The group of external API options.
+ # Skip if the module couldn't be imported (usually missing docutils)
+ if xlink is not None:
+ link_group = OptionGroup(optparser,
+ xlink.ApiLinkReader.settings_spec[0])
+ optparser.add_option_group(link_group)
+
+ for help, names, opts in xlink.ApiLinkReader.settings_spec[2]:
+ opts = opts.copy()
+ opts['help'] = help
+ link_group.add_option(*names, **opts)
+
+ graph_group = OptionGroup(optparser, 'Graph Options')
+ optparser.add_option_group(graph_group)
+
+ graph_group.add_option('--graph',
+ action='append', dest='graphs', metavar='GRAPHTYPE',
+ help=("Include graphs of type GRAPHTYPE in the generated output. "
+ "Graphs are generated using the Graphviz dot executable. "
+ "If this executable is not on the path, then use --dotpath "
+ "to specify its location. This option may be repeated to "
+ "include multiple graph types in the output. GRAPHTYPE "
+ "should be one of: all, %s." % ', '.join(GRAPH_TYPES)))
+
+ graph_group.add_option("--dotpath",
+ dest="dotpath", metavar='PATH',
+ help="The path to the Graphviz 'dot' executable.")
+
+ graph_group.add_option('--graph-font',
+ dest='graph_font', metavar='FONT',
+ help=("Specify the font used to generate Graphviz graphs. (e.g., "
+ "helvetica or times)."))
+
+ graph_group.add_option('--graph-font-size',
+ dest='graph_font_size', metavar='SIZE',
+ help=("Specify the font size used to generate Graphviz graphs, "
+ "in points."))
+
+ graph_group.add_option('--pstat',
+ action='append', dest='pstat_files', metavar='FILE',
+ help="A pstat output file, to be used in generating call graphs.")
+
+ # this option is for developers, not users.
+ graph_group.add_option("--profile-epydoc",
+ action="store_true", dest="profile",
+ help=SUPPRESS_HELP or
+ ("Run the hotshot profiler on epydoc itself. Output "
+ "will be written to profile.out."))
+
+
+ return_group = OptionGroup(optparser, 'Return Value Options')
+ optparser.add_option_group(return_group)
+
+ return_group.add_option("--fail-on-error",
+ action="store_const", dest="fail_on", const=log.ERROR,
+ help="Return a non-zero exit status, indicating failure, if any "
+ "errors are encountered.")
+
+ return_group.add_option("--fail-on-warning",
+ action="store_const", dest="fail_on", const=log.WARNING,
+ help="Return a non-zero exit status, indicating failure, if any "
+ "errors or warnings are encountered (not including docstring "
+ "warnings).")
+
+ return_group.add_option("--fail-on-docstring-warning",
+ action="store_const", dest="fail_on", const=log.DOCSTRING_WARNING,
+ help="Return a non-zero exit status, indicating failure, if any "
+ "errors or warnings are encountered (including docstring "
+ "warnings).")
+
+ # Set the option parser's defaults.
+ optparser.set_defaults(**OPTION_DEFAULTS)
+
+ # Parse the arguments.
+ options, names = optparser.parse_args()
+
+ # Print help message, if requested. We also provide support for
+ # --help [topic]
+ if options.action == 'help':
+ names = set([n.lower() for n in names])
+ for (topic, msg) in HELP_TOPICS.items():
+ if topic.lower() in names:
+ print '\n' + msg.rstrip() + '\n'
+ sys.exit(0)
+ optparser.print_help()
+ sys.exit(0)
+
+ # Print version message, if requested.
+ if options.action == 'version':
+ print version
+ sys.exit(0)
+
+ # Process any config files.
+ if options.configfiles:
+ try:
+ parse_configfiles(options.configfiles, options, names)
+ except (KeyboardInterrupt,SystemExit): raise
+ except Exception, e:
+ if len(options.configfiles) == 1:
+ cf_name = 'config file %s' % options.configfiles[0]
+ else:
+ cf_name = 'config files %s' % ', '.join(options.configfiles)
+ optparser.error('Error reading %s:\n %s' % (cf_name, e))
+
+ # Check if the input file is a pickle file.
+ for name in names:
+ if name.endswith('.pickle'):
+ if len(names) != 1:
+ optparser.error("When a pickle file is specified, no other "
+ "input files may be specified.")
+ options.load_pickle = True
+
+ # Check to make sure all options are valid.
+ if len(names) == 0:
+ optparser.error("No names specified.")
+
+ # perform shell expansion.
+ for i, name in reversed(list(enumerate(names[:]))):
+ if '?' in name or '*' in name:
+ names[i:i+1] = glob(name)
+
+ if options.inheritance not in INHERITANCE_STYLES:
+ optparser.error("Bad inheritance style. Valid options are " +
+                        ", ".join(INHERITANCE_STYLES))
+ if not options.parse and not options.introspect:
+ optparser.error("Invalid option combination: --parse-only "
+ "and --introspect-only.")
+ if options.action == 'text' and len(names) > 1:
+ optparser.error("--text option takes only one name.")
+
+ # Check the list of requested graph types to make sure they're
+ # acceptable.
+ options.graphs = [graph_type.lower() for graph_type in options.graphs]
+ for graph_type in options.graphs:
+ if graph_type == 'callgraph' and not options.pstat_files:
+ optparser.error('"callgraph" graph type may only be used if '
+ 'one or more pstat files are specified.')
+ # If it's 'all', then add everything (but don't add callgraph if
+ # we don't have any profiling info to base them on).
+ if graph_type == 'all':
+ if options.pstat_files:
+ options.graphs = GRAPH_TYPES
+ else:
+ options.graphs = [g for g in GRAPH_TYPES if g != 'callgraph']
+ break
+ elif graph_type not in GRAPH_TYPES:
+ optparser.error("Invalid graph type %s." % graph_type)
+
+ # Calculate verbosity.
+ verbosity = getattr(options, 'verbosity', 0)
+ options.verbosity = verbosity + options.verbose - options.quiet
+
+ # The target default depends on the action.
+ if options.target is None:
+ options.target = options.action
+
+ # Return parsed args.
+ options.names = names
+ return options, names
+
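The shell-expansion step in `parse_arguments` splices glob matches into `names` in place; iterating over `reversed(list(enumerate(...)))` keeps the not-yet-visited indices valid while the list grows. A self-contained sketch, with a stubbed globber standing in for the real filesystem lookup:

```python
def expand(names, globber):
    """Replace each wildcard entry with its matches, in place."""
    # Walk right-to-left over a snapshot, so splicing a longer
    # match list never shifts the indices we have yet to visit.
    for i, name in reversed(list(enumerate(names[:]))):
        if '?' in name or '*' in name:
            names[i:i+1] = globber(name)
    return names

# Hypothetical filesystem: maps a pattern to its matches.
fake_fs = {'src/*.py': ['src/a.py', 'src/b.py']}
names = ['README', 'src/*.py', 'docs']
assert expand(names, fake_fs.get) == \
    ['README', 'src/a.py', 'src/b.py', 'docs']
```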
+def parse_configfiles(configfiles, options, names):
+ configparser = ConfigParser.ConfigParser()
+ # ConfigParser.read() silently ignores errors, so open the files
+ # manually (since we want to notify the user of any errors).
+ for configfile in configfiles:
+ fp = open(configfile, 'r') # may raise IOError.
+ configparser.readfp(fp, configfile)
+ fp.close()
+ for optname in configparser.options('epydoc'):
+ val = configparser.get('epydoc', optname, vars=os.environ).strip()
+ optname = optname.lower().strip()
+
+ if optname in ('modules', 'objects', 'values',
+ 'module', 'object', 'value'):
+ names.extend(_str_to_list(val))
+ elif optname == 'target':
+ options.target = val
+ elif optname == 'output':
+ if val.lower() not in ACTIONS:
+ raise ValueError('"%s" expected one of: %s' %
+ (optname, ', '.join(ACTIONS)))
+ options.action = val.lower()
+ elif optname == 'verbosity':
+ options.verbosity = _str_to_int(val, optname)
+ elif optname == 'debug':
+ options.debug = _str_to_bool(val, optname)
+ elif optname in ('simple-term', 'simple_term'):
+ options.simple_term = _str_to_bool(val, optname)
+
+ # Generation options
+ elif optname == 'docformat':
+ options.docformat = val
+ elif optname == 'parse':
+ options.parse = _str_to_bool(val, optname)
+ elif optname == 'introspect':
+ options.introspect = _str_to_bool(val, optname)
+ elif optname == 'exclude':
+ options.exclude.extend(_str_to_list(val))
+ elif optname in ('exclude-parse', 'exclude_parse'):
+ options.exclude_parse.extend(_str_to_list(val))
+ elif optname in ('exclude-introspect', 'exclude_introspect'):
+ options.exclude_introspect.extend(_str_to_list(val))
+ elif optname == 'inheritance':
+ if val.lower() not in INHERITANCE_STYLES:
+ raise ValueError('"%s" expected one of: %s.' %
+ (optname, ', '.join(INHERITANCE_STYLES)))
+ options.inheritance = val.lower()
+ elif optname =='private':
+ options.show_private = _str_to_bool(val, optname)
+ elif optname =='imports':
+ options.show_imports = _str_to_bool(val, optname)
+ elif optname == 'sourcecode':
+ options.include_source_code = _str_to_bool(val, optname)
+ elif optname in ('include-log', 'include_log'):
+ options.include_log = _str_to_bool(val, optname)
+ elif optname in ('redundant-details', 'redundant_details'):
+ options.redundant_details = _str_to_bool(val, optname)
+
+ # Output options
+ elif optname == 'name':
+ options.prj_name = val
+ elif optname == 'css':
+ options.css = val
+ elif optname == 'url':
+ options.prj_url = val
+ elif optname == 'link':
+ options.prj_link = val
+ elif optname == 'top':
+ options.top_page = val
+ elif optname == 'help':
+ options.help_file = val
+ elif optname =='frames':
+ options.show_frames = _str_to_bool(val, optname)
+ elif optname in ('separate-classes', 'separate_classes'):
+ options.list_classes_separately = _str_to_bool(val, optname)
+ elif optname in ('src-code-tab-width', 'src_code_tab_width'):
+ options.src_code_tab_width = _str_to_int(val, optname)
+
+ # External API
+ elif optname in ('external-api', 'external_api'):
+ options.external_api.extend(_str_to_list(val))
+ elif optname in ('external-api-file', 'external_api_file'):
+ options.external_api_file.extend(_str_to_list(val))
+ elif optname in ('external-api-root', 'external_api_root'):
+ options.external_api_root.extend(_str_to_list(val))
+
+ # Graph options
+ elif optname == 'graph':
+ graphtypes = _str_to_list(val)
+ for graphtype in graphtypes:
+ if graphtype not in GRAPH_TYPES + ('all',):
+ raise ValueError('"%s" expected one of: all, %s.' %
+ (optname, ', '.join(GRAPH_TYPES)))
+ options.graphs.extend(graphtypes)
+ elif optname == 'dotpath':
+ options.dotpath = val
+ elif optname in ('graph-font', 'graph_font'):
+ options.graph_font = val
+ elif optname in ('graph-font-size', 'graph_font_size'):
+ options.graph_font_size = _str_to_int(val, optname)
+ elif optname == 'pstat':
+ options.pstat_files.extend(_str_to_list(val))
+
+ # Return value options
+ elif optname in ('failon', 'fail-on', 'fail_on'):
+ if val.lower().strip() in ('error', 'errors'):
+ options.fail_on = log.ERROR
+ elif val.lower().strip() in ('warning', 'warnings'):
+ options.fail_on = log.WARNING
+ elif val.lower().strip() in ('docstring_warning',
+ 'docstring_warnings'):
+ options.fail_on = log.DOCSTRING_WARNING
+ else:
+ raise ValueError("%r expected one of: error, warning, "
+ "docstring_warning" % optname)
+ else:
+ raise ValueError('Unknown option %s' % optname)
+
+def _str_to_bool(val, optname):
+ if val.lower() in ('0', 'no', 'false', 'n', 'f', 'hide'):
+ return False
+ elif val.lower() in ('1', 'yes', 'true', 'y', 't', 'show'):
+ return True
+ else:
+ raise ValueError('"%s" option expected a boolean' % optname)
+
+def _str_to_int(val, optname):
+ try:
+ return int(val)
+ except ValueError:
+ raise ValueError('"%s" option expected an int' % optname)
+
+def _str_to_list(val):
+ return val.replace(',', ' ').split()
+
+######################################################################
+#{ Interface
+######################################################################
+
+def main(options, names):
+ # Set the debug flag, if '--debug' was specified.
+ if options.debug:
+ epydoc.DEBUG = True
+
+ ## [XX] Did this serve a purpose? Commenting out for now:
+ #if options.action == 'text':
+ # if options.parse and options.introspect:
+ # options.parse = False
+
+ # Set up the logger
+ if options.simple_term:
+ TerminalController.FORCE_SIMPLE_TERM = True
+ if options.action == 'text':
+ logger = None # no logger for text output.
+ elif options.verbosity > 1:
+ logger = ConsoleLogger(options.verbosity)
+ log.register_logger(logger)
+ else:
+ # Each number is a rough approximation of how long we spend on
+ # that task, used to divide up the unified progress bar.
+ stages = [40, # Building documentation
+ 7, # Merging parsed & introspected information
+ 1, # Linking imported variables
+ 3, # Indexing documentation
+ 1, # Checking for overridden methods
+ 30, # Parsing Docstrings
+ 1, # Inheriting documentation
+ 2] # Sorting & Grouping
+ if options.load_pickle:
+ stages = [30] # Loading pickled documentation
+ if options.action == 'html': stages += [100]
+ elif options.action == 'text': stages += [30]
+ elif options.action == 'latex': stages += [60]
+ elif options.action == 'dvi': stages += [60,30]
+ elif options.action == 'ps': stages += [60,40]
+ elif options.action == 'pdf': stages += [60,50]
+ elif options.action == 'check': stages += [10]
+ elif options.action == 'pickle': stages += [10]
+ else: raise ValueError, '%r not supported' % options.action
+ if options.parse and not options.introspect:
+ del stages[1] # no merging
+ if options.introspect and not options.parse:
+ del stages[1:3] # no merging or linking
+ logger = UnifiedProgressConsoleLogger(options.verbosity, stages)
+ log.register_logger(logger)
+
+ # check the output directory.
+ if options.action not in ('text', 'check', 'pickle'):
+ if os.path.exists(options.target):
+ if not os.path.isdir(options.target):
+ log.error("%s is not a directory" % options.target)
+ sys.exit(1)
+
+ if options.include_log:
+ if options.action == 'html':
+ if not os.path.exists(options.target):
+ os.mkdir(options.target)
+ log.register_logger(HTMLLogger(options.target, options))
+ else:
+ log.warning("--include-log requires --html")
+
+ # Set the default docformat
+ from epydoc import docstringparser
+ docstringparser.DEFAULT_DOCFORMAT = options.docformat
+
+ # Configure the external API linking
+ if xlink is not None:
+ try:
+ xlink.ApiLinkReader.read_configuration(options, problematic=False)
+ except Exception, exc:
+ log.error("Error while configuring external API linking: %s: %s"
+ % (exc.__class__.__name__, exc))
+
+ # Set the dot path
+ if options.dotpath:
+ from epydoc.docwriter import dotgraph
+ dotgraph.DOT_COMMAND = options.dotpath
+
+ # Set the default graph font & size
+ if options.graph_font:
+ from epydoc.docwriter import dotgraph
+ fontname = options.graph_font
+ dotgraph.DotGraph.DEFAULT_NODE_DEFAULTS['fontname'] = fontname
+ dotgraph.DotGraph.DEFAULT_EDGE_DEFAULTS['fontname'] = fontname
+ if options.graph_font_size:
+ from epydoc.docwriter import dotgraph
+ fontsize = options.graph_font_size
+ dotgraph.DotGraph.DEFAULT_NODE_DEFAULTS['fontsize'] = fontsize
+ dotgraph.DotGraph.DEFAULT_EDGE_DEFAULTS['fontsize'] = fontsize
+
+ # If the input name is a pickle file, then read the docindex that
+ # it contains. Otherwise, build the docs for the input names.
+ if options.load_pickle:
+ assert len(names) == 1
+ log.start_progress('Deserializing')
+ log.progress(0.1, 'Loading %r' % names[0])
+ t0 = time.time()
+ unpickler = pickle.Unpickler(open(names[0], 'rb'))
+ unpickler.persistent_load = pickle_persistent_load
+ docindex = unpickler.load()
+ log.debug('deserialization time: %.1f sec' % (time.time()-t0))
+ log.end_progress()
+ else:
+ # Build docs for the named values.
+ from epydoc.docbuilder import build_doc_index
+ exclude_parse = '|'.join(options.exclude_parse+options.exclude)
+ exclude_introspect = '|'.join(options.exclude_introspect+
+ options.exclude)
+ docindex = build_doc_index(names, options.introspect, options.parse,
+ add_submodules=(options.action!='text'),
+ exclude_introspect=exclude_introspect,
+ exclude_parse=exclude_parse)
+
+ if docindex is None:
+        if (logger is not None and
+            log.ERROR in logger.reported_message_levels):
+ sys.exit(1)
+ else:
+ return # docbuilder already logged an error.
+
+ # Load profile information, if it was given.
+ if options.pstat_files:
+ try: import pstats
+ except ImportError:
+ log.error("Could not import pstats -- ignoring pstat files.")
+ try:
+ profile_stats = pstats.Stats(options.pstat_files[0])
+ for filename in options.pstat_files[1:]:
+ profile_stats.add(filename)
+ except KeyboardInterrupt: raise
+ except Exception, e:
+ log.error("Error reading pstat file: %s" % e)
+ profile_stats = None
+ if profile_stats is not None:
+ docindex.read_profiling_info(profile_stats)
+
+ # Perform the specified action.
+ if options.action == 'html':
+ write_html(docindex, options)
+ elif options.action in ('latex', 'dvi', 'ps', 'pdf'):
+ write_latex(docindex, options, options.action)
+ elif options.action == 'text':
+ write_text(docindex, options)
+ elif options.action == 'check':
+ check_docs(docindex, options)
+ elif options.action == 'pickle':
+ write_pickle(docindex, options)
+ else:
+ print >>sys.stderr, '\nUnsupported action %s!' % options.action
+
+ # If we suppressed docstring warnings, then let the user know.
+ if logger is not None and logger.suppressed_docstring_warning:
+ if logger.suppressed_docstring_warning == 1:
+ prefix = '1 markup error was found'
+ else:
+ prefix = ('%d markup errors were found' %
+ logger.suppressed_docstring_warning)
+ log.warning("%s while processing docstrings. Use the verbose "
+ "switch (-v) to display markup errors." % prefix)
+
+ # Basic timing breakdown:
+ if options.verbosity >= 2 and logger is not None:
+ logger.print_times()
+
+ # If we encountered any message types that we were requested to
+ # fail on, then exit with status 2.
+    if options.fail_on is not None and logger is not None:
+        levels = logger.reported_message_levels
+        if levels and max(levels) >= options.fail_on:
+ sys.exit(2)
+
+def write_html(docindex, options):
+ from epydoc.docwriter.html import HTMLWriter
+ html_writer = HTMLWriter(docindex, **options.__dict__)
+    if options.verbosity > 0:
+ log.start_progress('Writing HTML docs to %r' % options.target)
+ else:
+ log.start_progress('Writing HTML docs')
+ html_writer.write(options.target)
+ log.end_progress()
+
+def write_pickle(docindex, options):
+ """Helper for writing output to a pickle file, which can then be
+ read in at a later time. But loading the pickle is only marginally
+ faster than building the docs from scratch, so this has pretty
+ limited application."""
+ if options.target == 'pickle':
+ options.target = 'api.pickle'
+ elif not options.target.endswith('.pickle'):
+ options.target += '.pickle'
+
+ log.start_progress('Serializing output')
+ log.progress(0.2, 'Writing %r' % options.target)
+ outfile = open(options.target, 'wb')
+ pickler = pickle.Pickler(outfile, protocol=0)
+ pickler.persistent_id = pickle_persistent_id
+ pickler.dump(docindex)
+ outfile.close()
+ log.end_progress()
+
+def pickle_persistent_id(obj):
+ """Helper for pickling, which allows us to save and restore UNKNOWN,
+ which is required to be identical to apidoc.UNKNOWN."""
+ if obj is UNKNOWN: return 'UNKNOWN'
+ else: return None
+
+def pickle_persistent_load(identifier):
+ """Helper for pickling, which allows us to save and restore UNKNOWN,
+ which is required to be identical to apidoc.UNKNOWN."""
+ if identifier == 'UNKNOWN': return UNKNOWN
+ else: raise pickle.UnpicklingError, 'Invalid persistent id'
+
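The pickle persistent-id hooks used here let a module-level sentinel survive serialization with its identity intact. A standalone sketch of the same technique (the `UNKNOWN` sentinel below is a hypothetical stand-in for `epydoc.apidoc.UNKNOWN`; subclassing is the documented way to attach the hooks in Python 3):

```python
import io
import pickle

UNKNOWN = object()  # hypothetical sentinel, standing in for apidoc.UNKNOWN

class SentinelPickler(pickle.Pickler):
    def persistent_id(self, obj):
        # A non-None return value is stored as a persistent reference
        # instead of pickling the object itself.
        return 'UNKNOWN' if obj is UNKNOWN else None

class SentinelUnpickler(pickle.Unpickler):
    def persistent_load(self, identifier):
        if identifier == 'UNKNOWN':
            return UNKNOWN
        raise pickle.UnpicklingError('Invalid persistent id')

buf = io.BytesIO()
SentinelPickler(buf, protocol=0).dump({'docs': UNKNOWN, 'n': 1})
buf.seek(0)
restored = SentinelUnpickler(buf).load()

# The restored sentinel is the *same object*, so "is" comparisons still work.
assert restored['docs'] is UNKNOWN
assert restored['n'] == 1
```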
+_RERUN_LATEX_RE = re.compile(r'(?im)^LaTeX\s+Warning:\s+Label\(s\)\s+may'
+ r'\s+have\s+changed.\s+Rerun')
+
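The regex above detects LaTeX's "rerun" warning in the compiler transcript; the warning wraps across lines, which is why every gap is `\s+`. A small sketch with a hypothetical transcript (using `search` so the warning is found anywhere in the output):

```python
import re

# Same pattern as _RERUN_LATEX_RE above, with the literal dot escaped.
RERUN_RE = re.compile(r'(?im)^LaTeX\s+Warning:\s+Label\(s\)\s+may'
                      r'\s+have\s+changed\.\s+Rerun')

out = ('This is pdfTeX, Version 3.14 (hypothetical transcript)\n'
       'LaTeX Warning: Label(s) may have\n'
       'changed. Rerun to get cross-references right.\n')
assert RERUN_RE.search(out) is not None      # warning wrapped mid-sentence
assert RERUN_RE.search('No warnings.') is None
```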
+def write_latex(docindex, options, format):
+ from epydoc.docwriter.latex import LatexWriter
+ latex_writer = LatexWriter(docindex, **options.__dict__)
+ log.start_progress('Writing LaTeX docs')
+ latex_writer.write(options.target)
+ log.end_progress()
+ # If we're just generating the latex, and not any output format,
+ # then we're done.
+ if format == 'latex': return
+
+ if format == 'dvi': steps = 4
+ elif format == 'ps': steps = 5
+ elif format == 'pdf': steps = 6
+
+ log.start_progress('Processing LaTeX docs')
+ oldpath = os.path.abspath(os.curdir)
+ running = None # keep track of what we're doing.
+ try:
+ try:
+ os.chdir(options.target)
+
+ # Clear any old files out of the way.
+ for ext in 'tex aux log out idx ilg toc ind'.split():
+ if os.path.exists('apidoc.%s' % ext):
+ os.remove('apidoc.%s' % ext)
+
+ # The first pass generates index files.
+ running = 'latex'
+ log.progress(0./steps, 'LaTeX: First pass')
+ run_subprocess('latex api.tex')
+
+ # Build the index.
+ running = 'makeindex'
+ log.progress(1./steps, 'LaTeX: Build index')
+ run_subprocess('makeindex api.idx')
+
+ # The second pass generates our output.
+ running = 'latex'
+ log.progress(2./steps, 'LaTeX: Second pass')
+ out, err = run_subprocess('latex api.tex')
+
+ # The third pass is only necessary if the second pass
+ # changed what page some things are on.
+ running = 'latex'
+            if _RERUN_LATEX_RE.search(out):
+ log.progress(3./steps, 'LaTeX: Third pass')
+ out, err = run_subprocess('latex api.tex')
+
+            # A fourth pass should (almost?) never be necessary.
+ running = 'latex'
+            if _RERUN_LATEX_RE.search(out):
+ log.progress(3./steps, 'LaTeX: Fourth pass')
+ run_subprocess('latex api.tex')
+
+ # If requested, convert to postscript.
+ if format in ('ps', 'pdf'):
+ running = 'dvips'
+ log.progress(4./steps, 'dvips')
+ run_subprocess('dvips api.dvi -o api.ps -G0 -Ppdf')
+
+ # If requested, convert to pdf.
+            if format == 'pdf':
+ running = 'ps2pdf'
+ log.progress(5./steps, 'ps2pdf')
+ run_subprocess(
+ 'ps2pdf -sPAPERSIZE#letter -dMaxSubsetPct#100 '
+ '-dSubsetFonts#true -dCompatibilityLevel#1.2 '
+ '-dEmbedAllFonts#true api.ps api.pdf')
+ except RunSubprocessError, e:
+ if running == 'latex':
+ e.out = re.sub(r'(?sm)\A.*?!( LaTeX Error:)?', r'', e.out)
+ e.out = re.sub(r'(?sm)\s*Type X to quit.*', '', e.out)
+ e.out = re.sub(r'(?sm)^! Emergency stop.*', '', e.out)
+ log.error("%s failed: %s" % (running, (e.out+e.err).lstrip()))
+ except OSError, e:
+ log.error("%s failed: %s" % (running, e))
+ finally:
+ os.chdir(oldpath)
+ log.end_progress()
+
+def write_text(docindex, options):
+ log.start_progress('Writing output')
+ from epydoc.docwriter.plaintext import PlaintextWriter
+ plaintext_writer = PlaintextWriter()
+ s = ''
+ for apidoc in docindex.root:
+ s += plaintext_writer.write(apidoc)
+ log.end_progress()
+ if isinstance(s, unicode):
+ s = s.encode('ascii', 'backslashreplace')
+ print s
+
+def check_docs(docindex, options):
+ from epydoc.checker import DocChecker
+ DocChecker(docindex).check()
+
+def cli():
+ # Parse command-line arguments.
+ options, names = parse_arguments()
+
+ try:
+ try:
+ if options.profile:
+ _profile()
+ else:
+ main(options, names)
+ finally:
+ log.close()
+ except SystemExit:
+ raise
+ except KeyboardInterrupt:
+ print '\n\n'
+ print >>sys.stderr, 'Keyboard interrupt.'
+ except:
+ if options.debug: raise
+ print '\n\n'
+ exc_info = sys.exc_info()
+ if isinstance(exc_info[0], basestring): e = exc_info[0]
+ else: e = exc_info[1]
+ print >>sys.stderr, ('\nUNEXPECTED ERROR:\n'
+ '%s\n' % (str(e) or e.__class__.__name__))
+ print >>sys.stderr, 'Use --debug to see trace information.'
+ sys.exit(3)
+
+def _profile():
+ # Hotshot profiler.
+ if PROFILER == 'hotshot':
+ try: import hotshot, hotshot.stats
+ except ImportError:
+ print >>sys.stderr, "Could not import profile module!"
+ return
+ try:
+ prof = hotshot.Profile('hotshot.out')
+ prof = prof.runctx('main(*parse_arguments())', globals(), {})
+ except SystemExit:
+ pass
+ prof.close()
+ # Convert profile.hotshot -> profile.out
+ print 'Consolidating hotshot profiling info...'
+ hotshot.stats.load('hotshot.out').dump_stats('profile.out')
+
+ # Standard 'profile' profiler.
+ elif PROFILER == 'profile':
+        # cProfile module was added in Python 2.5 -- use it if it's
+ # available, since it's faster.
+ try: from cProfile import Profile
+ except ImportError:
+ try: from profile import Profile
+ except ImportError:
+ print >>sys.stderr, "Could not import profile module!"
+ return
+
+ # There was a bug in Python 2.4's profiler. Check if it's
+ # present, and if so, fix it. (Bug was fixed in 2.4maint:
+ # <http://mail.python.org/pipermail/python-checkins/
+ # 2005-September/047099.html>)
+ if (hasattr(Profile, 'dispatch') and
+ Profile.dispatch['c_exception'] is
+ Profile.trace_dispatch_exception.im_func):
+ trace_dispatch_return = Profile.trace_dispatch_return.im_func
+ Profile.dispatch['c_exception'] = trace_dispatch_return
+ try:
+ prof = Profile()
+ prof = prof.runctx('main(*parse_arguments())', globals(), {})
+ except SystemExit:
+ pass
+ prof.dump_stats('profile.out')
+
+ else:
+ print >>sys.stderr, 'Unknown profiler %s' % PROFILER
+ return
+
+######################################################################
+#{ Logging
+######################################################################
+
+class TerminalController:
+ """
+ A class that can be used to portably generate formatted output to
+ a terminal. See
+ U{http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/475116}
+ for documentation. (This is a somewhat stripped-down version.)
+ """
+ BOL = '' #: Move the cursor to the beginning of the line
+ UP = '' #: Move the cursor up one line
+ DOWN = '' #: Move the cursor down one line
+ LEFT = '' #: Move the cursor left one char
+ RIGHT = '' #: Move the cursor right one char
+ CLEAR_EOL = '' #: Clear to the end of the line.
+ CLEAR_LINE = '' #: Clear the current line; cursor to BOL.
+ BOLD = '' #: Turn on bold mode
+ NORMAL = '' #: Turn off all modes
+    COLS = 75           #: Width of the terminal (defaults to 75)
+ BLACK = BLUE = GREEN = CYAN = RED = MAGENTA = YELLOW = WHITE = ''
+
+ _STRING_CAPABILITIES = """
+ BOL=cr UP=cuu1 DOWN=cud1 LEFT=cub1 RIGHT=cuf1
+ CLEAR_EOL=el BOLD=bold UNDERLINE=smul NORMAL=sgr0""".split()
+ _COLORS = """BLACK BLUE GREEN CYAN RED MAGENTA YELLOW WHITE""".split()
+ _ANSICOLORS = "BLACK RED GREEN YELLOW BLUE MAGENTA CYAN WHITE".split()
+
+ #: If this is set to true, then new TerminalControllers will
+ #: assume that the terminal is not capable of doing manipulation
+ #: of any kind.
+ FORCE_SIMPLE_TERM = False
+
+ def __init__(self, term_stream=sys.stdout):
+ # If the stream isn't a tty, then assume it has no capabilities.
+ if not term_stream.isatty(): return
+ if self.FORCE_SIMPLE_TERM: return
+
+ # Curses isn't available on all platforms
+ try: import curses
+ except:
+ # If it's not available, then try faking enough to get a
+ # simple progress bar.
+ self.BOL = '\r'
+ self.CLEAR_LINE = '\r' + ' '*self.COLS + '\r'
+
+ # Check the terminal type. If we fail, then assume that the
+ # terminal has no capabilities.
+ try: curses.setupterm()
+ except: return
+
+ # Look up numeric capabilities.
+ self.COLS = curses.tigetnum('cols')
+
+ # Look up string capabilities.
+ for capability in self._STRING_CAPABILITIES:
+ (attrib, cap_name) = capability.split('=')
+ setattr(self, attrib, self._tigetstr(cap_name) or '')
+ if self.BOL and self.CLEAR_EOL:
+ self.CLEAR_LINE = self.BOL+self.CLEAR_EOL
+
+ # Colors
+ set_fg = self._tigetstr('setf')
+ if set_fg:
+ for i,color in zip(range(len(self._COLORS)), self._COLORS):
+ setattr(self, color, curses.tparm(set_fg, i) or '')
+ set_fg_ansi = self._tigetstr('setaf')
+ if set_fg_ansi:
+ for i,color in zip(range(len(self._ANSICOLORS)), self._ANSICOLORS):
+ setattr(self, color, curses.tparm(set_fg_ansi, i) or '')
+
+ def _tigetstr(self, cap_name):
+ # String capabilities can include "delays" of the form "$<2>".
+ # For any modern terminal, we should be able to just ignore
+ # these, so strip them out.
+ import curses
+ cap = curses.tigetstr(cap_name) or ''
+ return re.sub(r'\$<\d+>[/*]?', '', cap)
+
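`_tigetstr` strips terminfo "delay" padding such as `$<2>` from capability strings before they are emitted. The substitution can be exercised without curses at all; a sketch (escape sequences below are illustrative examples, not looked-up capabilities):

```python
import re

def strip_delays(cap):
    # Remove terminfo delay markers like "$<2>" (optionally suffixed with
    # "/" or "*"); modern terminals don't need the padding.
    return re.sub(r'\$<\d+>[/*]?', '', cap)

assert strip_delays('\x1b[H$<2>') == '\x1b[H'
assert strip_delays('$<5>*\x1b[K') == '\x1b[K'
assert strip_delays('no delays here') == 'no delays here'
```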
+class ConsoleLogger(log.Logger):
+ def __init__(self, verbosity, progress_mode=None):
+ self._verbosity = verbosity
+ self._progress = None
+ self._message_blocks = []
+ # For ETA display:
+ self._progress_start_time = None
+ # For per-task times:
+ self._task_times = []
+ self._progress_header = None
+
+ self.reported_message_levels = set()
+ """This set contains all the message levels (WARNING, ERROR,
+ etc) that have been reported. It is used by the options
+ --fail-on-warning etc to determine the return value."""
+
+ self.suppressed_docstring_warning = 0
+ """This variable will be incremented once every time a
+        docstring warning is reported to the logger, but the verbosity
+ level is too low for it to be displayed."""
+
+ self.term = TerminalController()
+
+ # Set the progress bar mode.
+ if verbosity >= 2: self._progress_mode = 'list'
+ elif verbosity >= 0:
+ if progress_mode is not None:
+ self._progress_mode = progress_mode
+ elif self.term.COLS < 15:
+ self._progress_mode = 'simple-bar'
+ elif self.term.BOL and self.term.CLEAR_EOL and self.term.UP:
+ self._progress_mode = 'multiline-bar'
+ elif self.term.BOL and self.term.CLEAR_LINE:
+ self._progress_mode = 'bar'
+ else:
+ self._progress_mode = 'simple-bar'
+ else: self._progress_mode = 'hide'
+
+ def start_block(self, header):
+ self._message_blocks.append( (header, []) )
+
+ def end_block(self):
+ header, messages = self._message_blocks.pop()
+ if messages:
+ width = self.term.COLS - 5 - 2*len(self._message_blocks)
+ prefix = self.term.CYAN+self.term.BOLD+'| '+self.term.NORMAL
+ divider = (self.term.CYAN+self.term.BOLD+'+'+'-'*(width-1)+
+ self.term.NORMAL)
+ # Mark up the header:
+ header = wordwrap(header, right=width-2, splitchars='\\/').rstrip()
+ header = '\n'.join([prefix+self.term.CYAN+l+self.term.NORMAL
+ for l in header.split('\n')])
+ # Construct the body:
+ body = ''
+ for message in messages:
+ if message.endswith('\n'): body += message
+ else: body += message+'\n'
+ # Indent the body:
+ body = '\n'.join([prefix+' '+l for l in body.split('\n')])
+ # Put it all together:
+ message = divider + '\n' + header + '\n' + body + '\n'
+ self._report(message)
+
+ def _format(self, prefix, message, color):
+ """
+ Rewrap the message; but preserve newlines, and don't touch any
+ lines that begin with spaces.
+ """
+ lines = message.split('\n')
+ startindex = indent = len(prefix)
+ for i in range(len(lines)):
+ if lines[i].startswith(' '):
+ lines[i] = ' '*(indent-startindex) + lines[i] + '\n'
+ else:
+ width = self.term.COLS - 5 - 4*len(self._message_blocks)
+ lines[i] = wordwrap(lines[i], indent, width, startindex, '\\/')
+ startindex = 0
+ return color+prefix+self.term.NORMAL+''.join(lines)
+
+ def log(self, level, message):
+ self.reported_message_levels.add(level)
+ if self._verbosity >= -2 and level >= log.ERROR:
+ message = self._format(' Error: ', message, self.term.RED)
+ elif self._verbosity >= -1 and level >= log.WARNING:
+ message = self._format('Warning: ', message, self.term.YELLOW)
+ elif self._verbosity >= 1 and level >= log.DOCSTRING_WARNING:
+ message = self._format('Warning: ', message, self.term.YELLOW)
+ elif self._verbosity >= 3 and level >= log.INFO:
+ message = self._format(' Info: ', message, self.term.NORMAL)
+ elif epydoc.DEBUG and level == log.DEBUG:
+ message = self._format(' Debug: ', message, self.term.CYAN)
+ else:
+ if level >= log.DOCSTRING_WARNING:
+ self.suppressed_docstring_warning += 1
+ return
+
+ self._report(message)
+
+ def _report(self, message):
+ if not message.endswith('\n'): message += '\n'
+
+ if self._message_blocks:
+ self._message_blocks[-1][-1].append(message)
+ else:
+ # If we're in the middle of displaying a progress bar,
+ # then make room for the message.
+ if self._progress_mode == 'simple-bar':
+ if self._progress is not None:
+ print
+ self._progress = None
+ if self._progress_mode == 'bar':
+ sys.stdout.write(self.term.CLEAR_LINE)
+ if self._progress_mode == 'multiline-bar':
+ sys.stdout.write((self.term.CLEAR_EOL + '\n')*2 +
+ self.term.CLEAR_EOL + self.term.UP*2)
+
+            # Display the message.
+ sys.stdout.write(message)
+ sys.stdout.flush()
+
+ def progress(self, percent, message=''):
+ percent = min(1.0, percent)
+ message = '%s' % message
+
+ if self._progress_mode == 'list':
+ if message:
+ print '[%3d%%] %s' % (100*percent, message)
+ sys.stdout.flush()
+
+ elif self._progress_mode == 'bar':
+ dots = int((self.term.COLS/2-8)*percent)
+ background = '-'*(self.term.COLS/2-8)
+ if len(message) > self.term.COLS/2:
+ message = message[:self.term.COLS/2-3]+'...'
+ sys.stdout.write(self.term.CLEAR_LINE + '%3d%% '%(100*percent) +
+ self.term.GREEN + '[' + self.term.BOLD +
+ '='*dots + background[dots:] + self.term.NORMAL +
+ self.term.GREEN + '] ' + self.term.NORMAL +
+ message + self.term.BOL)
+ sys.stdout.flush()
+ self._progress = percent
+ elif self._progress_mode == 'multiline-bar':
+ dots = int((self.term.COLS-10)*percent)
+ background = '-'*(self.term.COLS-10)
+
+ if len(message) > self.term.COLS-10:
+ message = message[:self.term.COLS-10-3]+'...'
+ else:
+ message = message.center(self.term.COLS-10)
+
+ time_elapsed = time.time()-self._progress_start_time
+ if percent > 0:
+ time_remain = (time_elapsed / percent) * (1-percent)
+ else:
+ time_remain = 0
+
+ sys.stdout.write(
+ # Line 1:
+ self.term.CLEAR_EOL + ' ' +
+ '%-8s' % self._timestr(time_elapsed) +
+ self.term.BOLD + 'Progress:'.center(self.term.COLS-26) +
+ self.term.NORMAL + '%8s' % self._timestr(time_remain) + '\n' +
+ # Line 2:
+ self.term.CLEAR_EOL + ('%3d%% ' % (100*percent)) +
+ self.term.GREEN + '[' + self.term.BOLD + '='*dots +
+ background[dots:] + self.term.NORMAL + self.term.GREEN +
+ ']' + self.term.NORMAL + '\n' +
+ # Line 3:
+ self.term.CLEAR_EOL + ' ' + message + self.term.BOL +
+ self.term.UP + self.term.UP)
+
+ sys.stdout.flush()
+ self._progress = percent
+ elif self._progress_mode == 'simple-bar':
+ if self._progress is None:
+ sys.stdout.write(' [')
+ self._progress = 0.0
+ dots = int((self.term.COLS-2)*percent)
+ progress_dots = int((self.term.COLS-2)*self._progress)
+ if dots > progress_dots:
+ sys.stdout.write('.'*(dots-progress_dots))
+ sys.stdout.flush()
+ self._progress = percent
+
+ def _timestr(self, dt):
+ dt = int(dt)
+ if dt >= 3600:
+ return '%d:%02d:%02d' % (dt/3600, dt%3600/60, dt%60)
+ else:
+ return '%02d:%02d' % (dt/60, dt%60)
+
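`_timestr` above relies on Python 2's integer `/`; an equivalent Python 3 sketch uses `//` and shows the two output shapes (short `MM:SS`, long `H:MM:SS`):

```python
def timestr(dt):
    # Python 3 version of _timestr: // reproduces Python 2 int division.
    dt = int(dt)
    if dt >= 3600:
        return '%d:%02d:%02d' % (dt // 3600, dt % 3600 // 60, dt % 60)
    return '%02d:%02d' % (dt // 60, dt % 60)

assert timestr(75) == '01:15'        # under an hour: MM:SS
assert timestr(3725) == '1:02:05'    # over an hour: H:MM:SS
```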
+ def start_progress(self, header=None):
+ if self._progress is not None:
+ raise ValueError
+ self._progress = None
+ self._progress_start_time = time.time()
+ self._progress_header = header
+ if self._progress_mode != 'hide' and header:
+ print self.term.BOLD + header + self.term.NORMAL
+
+ def end_progress(self):
+ self.progress(1.)
+ if self._progress_mode == 'bar':
+ sys.stdout.write(self.term.CLEAR_LINE)
+ if self._progress_mode == 'multiline-bar':
+ sys.stdout.write((self.term.CLEAR_EOL + '\n')*2 +
+ self.term.CLEAR_EOL + self.term.UP*2)
+ if self._progress_mode == 'simple-bar':
+ print ']'
+ self._progress = None
+ self._task_times.append( (time.time()-self._progress_start_time,
+ self._progress_header) )
+
+ def print_times(self):
+ print
+ print 'Timing summary:'
+ total = sum([time for (time, task) in self._task_times])
+ max_t = max([time for (time, task) in self._task_times])
+ for (time, task) in self._task_times:
+ task = task[:31]
+ print ' %s%s %7.1fs' % (task, '.'*(35-len(task)), time),
+ if self.term.COLS > 55:
+ print '|'+'=' * int((self.term.COLS-53) * time / max_t)
+ else:
+ print
+ print
+
+class UnifiedProgressConsoleLogger(ConsoleLogger):
+ def __init__(self, verbosity, stages, progress_mode=None):
+ self.stage = 0
+ self.stages = stages
+ self.task = None
+ ConsoleLogger.__init__(self, verbosity, progress_mode)
+
+ def progress(self, percent, message=''):
+ #p = float(self.stage-1+percent)/self.stages
+ i = self.stage-1
+ p = ((sum(self.stages[:i]) + percent*self.stages[i]) /
+ float(sum(self.stages)))
+
+ if message is UNKNOWN: message = None
+ if message: message = '%s: %s' % (self.task, message)
+ ConsoleLogger.progress(self, p, message)
+
+ def start_progress(self, header=None):
+ self.task = header
+ if self.stage == 0:
+ ConsoleLogger.start_progress(self)
+ self.stage += 1
+
+ def end_progress(self):
+ if self.stage == len(self.stages):
+ ConsoleLogger.end_progress(self)
+
+ def print_times(self):
+ pass
+
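`UnifiedProgressConsoleLogger.progress` maps per-stage progress onto one global bar: completed stage weights plus the weighted fraction of the current stage, normalized by the total weight. A standalone sketch of that computation (the stage weights below copy the `--html` case from `main`, plus the HTML-writing stage):

```python
def unified_progress(stages, stage_index, percent):
    # Work done so far = all completed stage weights plus the current
    # stage's fractional weight, as a fraction of the total weight.
    done = sum(stages[:stage_index]) + percent * stages[stage_index]
    return done / float(sum(stages))

stages = [40, 7, 1, 3, 1, 30, 1, 2, 100]   # build ... sort, then write HTML
assert unified_progress(stages, 0, 0.5) == 20 / 185.0   # halfway into stage 1
assert abs(unified_progress(stages, 8, 1.0) - 1.0) < 1e-9  # last stage done
```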
+class HTMLLogger(log.Logger):
+ """
+ A logger used to generate a log of all warnings and messages to an
+ HTML file.
+ """
+
+ FILENAME = "epydoc-log.html"
+ HEADER = textwrap.dedent('''\
+ <?xml version="1.0" encoding="ascii"?>
+ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "DTD/xhtml1-transitional.dtd">
+ <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+ <head>
+ <title>Epydoc Log</title>
+ <link rel="stylesheet" href="epydoc.css" type="text/css" />
+ </head>
+
+ <body bgcolor="white" text="black" link="blue" vlink="#204080"
+ alink="#204080">
+ <h1 class="epydoc">Epydoc Log</h1>
+ <p class="log">Epydoc started at %s</p>''')
+ START_BLOCK = '<div class="log-block"><h2 class="log-hdr">%s</h2>'
+ MESSAGE = ('<div class="log-%s"><b>%s</b>: \n'
+ '%s</div>\n')
+ END_BLOCK = '</div>'
+ FOOTER = "</body>\n</html>\n"
+
+ def __init__(self, directory, options):
+ self.start_time = time.time()
+ self.out = open(os.path.join(directory, self.FILENAME), 'w')
+ self.out.write(self.HEADER % time.ctime(self.start_time))
+ self.is_empty = True
+ self.options = options
+
+ def write_options(self, options):
+ self.out.write(self.START_BLOCK % 'Epydoc Options')
+ msg = '<table border="0" cellpadding="0" cellspacing="0">\n'
+ opts = [(key, getattr(options, key)) for key in dir(options)
+ if key not in dir(optparse.Values)]
+ opts = [(val==OPTION_DEFAULTS.get(key), key, val)
+ for (key, val) in opts]
+ for is_default, key, val in sorted(opts):
+ css = is_default and 'opt-default' or 'opt-changed'
+ msg += ('<tr valign="top" class="%s"><td valign="top">%s</td>'
+ '<td valign="top"><tt> = </tt></td>'
+ '<td valign="top"><tt>%s</tt></td></tr>' %
+ (css, key, plaintext_to_html(repr(val))))
+ msg += '</table>\n'
+ self.out.write('<div class="log-info">\n%s</div>\n' % msg)
+ self.out.write(self.END_BLOCK)
+
+ def start_block(self, header):
+ self.out.write(self.START_BLOCK % header)
+
+ def end_block(self):
+ self.out.write(self.END_BLOCK)
+
+ def log(self, level, message):
+ if message.endswith("(-v) to display markup errors."): return
+ if level >= log.ERROR:
+ self.out.write(self._message('error', message))
+ elif level >= log.WARNING:
+ self.out.write(self._message('warning', message))
+ elif level >= log.DOCSTRING_WARNING:
+ self.out.write(self._message('docstring warning', message))
+
+ def _message(self, level, message):
+ self.is_empty = False
+ message = plaintext_to_html(message)
+ if '\n' in message:
+ message = '<pre class="log">%s</pre>' % message
+ hdr = ' '.join([w.capitalize() for w in level.split()])
+ return self.MESSAGE % (level.split()[-1], hdr, message)
+
+ def close(self):
+ if self.is_empty:
+ self.out.write('<div class="log-info">'
+ 'No warnings or errors!</div>')
+ self.write_options(self.options)
+ self.out.write('<p class="log">Epydoc finished at %s</p>\n'
+ '<p class="log">(Elapsed time: %s)</p>' %
+ (time.ctime(), self._elapsed_time()))
+ self.out.write(self.FOOTER)
+ self.out.close()
+
+ def _elapsed_time(self):
+ secs = int(time.time()-self.start_time)
+ if secs < 60:
+ return '%d seconds' % secs
+ if secs < 3600:
+ return '%d minutes, %d seconds' % (secs/60, secs%60)
+ else:
+            return '%d hours, %d minutes' % (secs/3600, secs%3600/60)
+
+
+######################################################################
+## main
+######################################################################
+
+if __name__ == '__main__':
+ cli()
+
diff --git a/python/helpers/epydoc/compat.py b/python/helpers/epydoc/compat.py
new file mode 100644
index 0000000..327d8b4
--- /dev/null
+++ b/python/helpers/epydoc/compat.py
@@ -0,0 +1,250 @@
+# epydoc -- Backwards compatibility
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: util.py 956 2006-03-10 01:30:51Z edloper $
+
+"""
+Backwards compatibility with previous versions of Python.
+
+This module provides backwards compatibility by defining several
+functions and classes that were not available in earlier versions of
+Python. Intended usage:
+
+ >>> from epydoc.compat import *
+
+Currently, epydoc requires Python 2.3+.
+"""
+__docformat__ = 'epytext'
+
+######################################################################
+#{ New in Python 2.4
+######################################################################
+
+# set
+try:
+ set
+except NameError:
+ try:
+ from sets import Set as set, ImmutableSet as frozenset
+ except ImportError:
+ pass # use fallback, in the next section.
+
+# sorted
+try:
+ sorted
+except NameError:
+ def sorted(iterable, cmp=None, key=None, reverse=False):
+ if key is None:
+ elts = list(iterable)
+ else:
+ elts = [(key(v), v) for v in iterable]
+
+ if reverse: elts.reverse() # stable sort.
+ if cmp is None: elts.sort()
+ else: elts.sort(cmp)
+ if reverse: elts.reverse()
+
+ if key is None:
+ return elts
+ else:
+ return [v for (k,v) in elts]
+
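The `sorted` backport above is a decorate-sort-undecorate with a double-reverse trick to keep `reverse=True` stable. A Python 3 sketch of the same idea (the Python 2-only `cmp` parameter is dropped; note the caveat, present in the backport too, that key ties fall back to comparing the values themselves):

```python
def sorted_backport(iterable, key=None, reverse=False):
    # Decorate-sort-undecorate, mirroring the 2.3 backport above.
    if key is None:
        elts = list(iterable)
    else:
        # Caveat: ties on key(v) fall back to comparing v directly,
        # which can fail for mutually uncomparable values.
        elts = [(key(v), v) for v in iterable]
    if reverse:
        elts.reverse()   # so the final reverse leaves ties in input order
    elts.sort()
    if reverse:
        elts.reverse()
    if key is None:
        return elts
    return [v for (k, v) in elts]

assert sorted_backport([3, 1, 2]) == [1, 2, 3]
assert sorted_backport(['bb', 'a', 'ccc'], key=len) == ['a', 'bb', 'ccc']
assert sorted_backport([3, 1, 2], reverse=True) == [3, 2, 1]
```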
+# reversed
+try:
+ reversed
+except NameError:
+ def reversed(iterable):
+ elts = list(iterable)
+ elts.reverse()
+ return elts
+
+######################################################################
+#{ New in Python 2.3
+######################################################################
+# Below is my initial attempt at backporting enough code that
+# epydoc 3 would run under python 2.2. However, I'm starting
+# to think that it's not worth the trouble. At the very least,
+# epydoc's current unicode handling still doesn't work under
+# 2.2 (after the backports below), since the 'xmlcharrefreplace'
+# error handler was introduced in python 2.3.
+
+# # basestring
+# try:
+# basestring
+# except NameError:
+# basestring = (str, unicode)
+
+# # sum
+# try:
+# sum
+# except NameError:
+# def _add(a,b): return a+b
+# def sum(vals): return reduce(_add, vals, 0)
+
+# # True & False
+# try:
+# True
+# except NameError:
+# True = 1
+# False = 0
+
+# # enumerate
+# try:
+# enumerate
+# except NameError:
+# def enumerate(iterable):
+# lst = list(iterable)
+# return zip(range(len(lst)), lst)
+
+# # set
+# try:
+# set
+# except NameError:
+# class set(dict):
+# def __init__(self, elts=()):
+# dict.__init__(self, [(e,1) for e in elts])
+# def __repr__(self):
+# return 'set(%r)' % list(self)
+# def add(self, key): self[key] = 1
+# def copy(self):
+# return set(dict.copy(self))
+# def difference(self, other):
+# return set([v for v in self if v not in other])
+#         def difference_update(self, other):
+# newval = self.difference(other)
+# self.clear(); self.update(newval)
+# def discard(self, elt):
+# try: del self[elt]
+# except: pass
+# def intersection(self, other):
+# return self.copy().update(other)
+# def intersection_update(self, other):
+# newval = self.intersection(other)
+# self.clear(); self.update(newval)
+# def issubset(self, other):
+# for elt in self:
+# if elt not in other: return False
+# return True
+# def issuperset(self, other):
+# for elt in other:
+# if elt not in self: return False
+# return True
+# def pop(self): self.popitem()[0]
+# def remove(self, elt): del self[elt]
+# def symmetric_difference(self, other):
+# return set([v for v in list(self)+list(other)
+# if (v in self)^(v in other)])
+#         def symmetric_difference_update(self, other):
+# newval = self.symmetric_difference(other)
+# self.clear(); self.update(newval)
+# def union(self, other):
+# return set([v for v in list(self)+list(other)
+# if (v in self) or (v in other)])
+# def union_update(self, other):
+# newval = self.union(other)
+# self.clear(); self.update(newval)
+# def update(self, other):
+# dict.update(self, set(other))
+
+# # optparse module
+# try:
+# import optparse
+# except ImportError:
+# import new, sys, getopt
+# class _OptionVals:
+# def __init__(self, vals): self.__dict__.update(vals)
+# class OptionParser:
+# def __init__(self, usage=None, version=None):
+# self.usage = usage
+# self.version = version
+# self.shortops = ['h']
+# self.longops = []
+# self.option_specs = {}
+# self.defaults = {}
+# def fail(self, message, exitval=1):
+# print >>sys.stderr, message
+#             sys.exit(exitval)
+# def add_option_group(self, group): pass
+# def set_defaults(self, **defaults):
+# self.defaults = defaults.copy()
+# def parse_args(self):
+# try:
+# (opts, names) = getopt.getopt(sys.argv[1:],
+# ''.join(self.shortops),
+# self.longops)
+# except getopt.GetoptError, e:
+# self.fail(e)
+
+# options = self.defaults.copy()
+# for (opt,val) in opts:
+# if opt == '-h':
+# self.fail('No help available')
+# if opt not in self.option_specs:
+# self.fail('Unknown option %s' % opt)
+# (action, dest, const) = self.option_specs[opt]
+# if action == 'store':
+# options[dest] = val
+# elif action == 'store_const':
+# options[dest] = const
+# elif action == 'count':
+# options[dest] = options.get(dest,0)+1
+# elif action == 'append':
+# options.setdefault(dest, []).append(val)
+# else:
+# self.fail('unsupported action: %s' % action)
+# for (action,dest,const) in self.option_specs.values():
+# if dest not in options:
+# if action == 'count': options[dest] = 0
+# elif action == 'append': options[dest] = []
+# else: options[dest] = None
+# for name in names:
+# if name.startswith('-'):
+# self.fail('names must follow options')
+# return _OptionVals(options), names
+# class OptionGroup:
+# def __init__(self, optparser, name):
+# self.optparser = optparser
+# self.name = name
+
+# def add_option(self, *args, **kwargs):
+# action = 'store'
+# dest = None
+# const = None
+# for (key,val) in kwargs.items():
+# if key == 'action': action = val
+# elif key == 'dest': dest = val
+# elif key == 'const': const = val
+# elif key in ('help', 'metavar'): pass
+# else: self.fail('unsupported: %s' % key)
+
+# if action not in ('store_const', 'store_true', 'store_false',
+# 'store', 'count', 'append'):
+# self.fail('unsupported action: %s' % action)
+
+# optparser = self.optparser
+# for arg in args:
+# if arg.startswith('--'):
+# optparser.longops.append(arg[2:])
+# elif arg.startswith('-') and len(arg)==2:
+# optparser.shortops += arg[1]
+# if action in ('store', 'append'):
+# optparser.shortops += ':'
+# else:
+# self.fail('bad option name %s' % arg)
+# if action == 'store_true':
+# (action, const) = ('store_const', True)
+# if action == 'store_false':
+# (action, const) = ('store_const', False)
+# optparser.option_specs[arg] = (action, dest, const)
+
+# # Install a fake module.
+# optparse = new.module('optparse')
+# optparse.OptionParser = OptionParser
+# optparse.OptionGroup = OptionGroup
+# sys.modules['optparse'] = optparse
+# # Clean up
+# del OptionParser, OptionGroup
+
diff --git a/python/helpers/epydoc/docbuilder.py b/python/helpers/epydoc/docbuilder.py
new file mode 100644
index 0000000..1e6918d
--- /dev/null
+++ b/python/helpers/epydoc/docbuilder.py
@@ -0,0 +1,1358 @@
+# epydoc -- Documentation Builder
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: docbuilder.py 1683 2008-01-29 22:17:39Z edloper $
+
+"""
+Construct data structures that encode the API documentation for Python
+objects. These data structures are created using a series of steps:
+
+ 1. B{Building docs}: Extract basic information about the objects,
+ and objects that are related to them. This can be done by
+     introspecting the objects' values (with L{epydoc.docintrospecter}) or
+     by parsing their source code (with L{epydoc.docparser}).
+
+ 2. B{Merging}: Combine the information obtained from introspection &
+ parsing each object into a single structure.
+
+ 3. B{Linking}: Replace any 'pointers' that were created for imported
+ variables by their target (if it's available).
+
+  4. B{Naming}: Choose a unique 'canonical name' for each
+     object.
+
+ 5. B{Docstring Parsing}: Parse the docstring of each object, and
+     extract any pertinent information.
+
+ 6. B{Inheritance}: Add information about variables that classes
+ inherit from their base classes.
+
+The documentation information for each individual object is
+represented using an L{APIDoc}; and the documentation for a collection
+of objects is represented using a L{DocIndex}.
+
+The main interface to C{epydoc.docbuilder} consists of two functions:
+
+ - L{build_doc()} -- Builds documentation for a single item, and
+ returns it as an L{APIDoc} object.
+ - L{build_doc_index()} -- Builds documentation for a collection of
+ items, and returns it as a L{DocIndex} object.
+
+The remaining functions are used by these two main functions to
+perform individual steps in the creation of the documentation.
+
+@group Documentation Construction: build_doc, build_doc_index,
+ _get_docs_from_*, _report_valdoc_progress
+@group Merging: *MERGE*, *merge*
+@group Linking: link_imports
+@group Naming: _name_scores, _unreachable_names, assign_canonical_names,
+ _var_shadows_self, _fix_self_shadowing_var, _unreachable_name_for
+@group Inheritance: inherit_docs, _inherit_info
+"""
+__docformat__ = 'epytext en'
+
+######################################################################
+## Contents
+######################################################################
+## 1. build_doc() & build_doc_index() -- the main interface.
+## 2. merge_docs() -- helper, used to merge parse & introspect info
+## 3. link_imports() -- helper, used to connect imported vars w/ values
+## 4. assign_canonical_names() -- helper, used to set canonical names
+## 5. inherit_docs() -- helper, used to inherit docs from base classes
+
+######################################################################
+## Imports
+######################################################################
+
+import sys, os, os.path, __builtin__, imp, re, inspect
+from epydoc.apidoc import *
+from epydoc.docintrospecter import introspect_docs
+from epydoc.docparser import parse_docs, ParseError
+from epydoc.docstringparser import parse_docstring
+from epydoc import log
+from epydoc.util import *
+from epydoc.compat import * # Backwards compatibility
+
+######################################################################
+## 1. build_doc()
+######################################################################
+
+class BuildOptions:
+ """
+ Holds the parameters for a documentation building process.
+ """
+ def __init__(self, introspect=True, parse=True,
+ exclude_introspect=None, exclude_parse=None,
+ add_submodules=True):
+ self.introspect = introspect
+ self.parse = parse
+ self.exclude_introspect = exclude_introspect
+ self.exclude_parse = exclude_parse
+ self.add_submodules = add_submodules
+
+ # Test for pattern syntax and compile them into pattern objects.
+ try:
+ self._introspect_regexp = (exclude_introspect
+ and re.compile(exclude_introspect) or None)
+ self._parse_regexp = (exclude_parse
+ and re.compile(exclude_parse) or None)
+ except Exception, exc:
+ log.error('Error in regular expression pattern: %s' % exc)
+ raise
+
+ def must_introspect(self, name):
+ """
+        Return C{True} if a module is to be introspected with the current
+ settings.
+
+ @param name: The name of the module to test
+ @type name: L{DottedName} or C{str}
+ """
+ return self.introspect \
+ and not self._matches_filter(name, self._introspect_regexp)
+
+ def must_parse(self, name):
+ """
+ Return C{True} if a module is to be parsed with the current settings.
+
+ @param name: The name of the module to test
+ @type name: L{DottedName} or C{str}
+ """
+ return self.parse \
+ and not self._matches_filter(name, self._parse_regexp)
+
+ def _matches_filter(self, name, regexp):
+ """
+ Test if a module name matches a pattern.
+
+ @param name: The name of the module to test
+ @type name: L{DottedName} or C{str}
+ @param regexp: The pattern object to match C{name} against.
+ If C{None}, return C{False}
+ @type regexp: C{pattern}
+ @return: C{True} if C{name} in dotted format matches C{regexp},
+ else C{False}
+ @rtype: C{bool}
+ """
+ if regexp is None: return False
+
+ if isinstance(name, DottedName):
+ name = str(name)
+
+ return bool(regexp.search(name))
+
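The exclude-filter logic above can be sketched standalone (written as modern Python for illustration; `matches_filter` here is a hypothetical free function mirroring the method, not the real epydoc API):

```python
import re

# Sketch of BuildOptions' exclude filtering: a module is skipped when
# its dotted name matches the compiled exclude pattern; a missing
# pattern never matches anything.
def matches_filter(name, regexp):
    if regexp is None:
        return False
    return bool(regexp.search(str(name)))

exclude = re.compile(r'\btests?\b')
print(matches_filter('epydoc.test.util', exclude))   # True
print(matches_filter('epydoc.docparser', exclude))   # False
```

`must_introspect()` and `must_parse()` then simply AND the corresponding enable flag with `not matches_filter(...)`.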
+
+def build_doc(item, introspect=True, parse=True, add_submodules=True,
+ exclude_introspect=None, exclude_parse=None):
+ """
+ Build API documentation for a given item, and return it as
+ an L{APIDoc} object.
+
+ @rtype: L{APIDoc}
+ @param item: The item to document, specified using any of the
+ following:
+ - A string, naming a python package directory
+ (e.g., C{'epydoc/markup'})
+ - A string, naming a python file
+ (e.g., C{'epydoc/docparser.py'})
+ - A string, naming a python object
+ (e.g., C{'epydoc.docparser.DocParser'})
+ - Any (non-string) python object
+ (e.g., C{list.append})
+ @param introspect: If true, then use introspection to examine the
+ specified items. Otherwise, just use parsing.
+ @param parse: If true, then use parsing to examine the specified
+ items. Otherwise, just use introspection.
+ """
+ docindex = build_doc_index([item], introspect, parse, add_submodules,
+ exclude_introspect=exclude_introspect,
+ exclude_parse=exclude_parse)
+ return docindex.root[0]
+
+def build_doc_index(items, introspect=True, parse=True, add_submodules=True,
+ exclude_introspect=None, exclude_parse=None):
+ """
+ Build API documentation for the given list of items, and
+ return it in the form of a L{DocIndex}.
+
+ @rtype: L{DocIndex}
+ @param items: The items to document, specified using any of the
+ following:
+ - A string, naming a python package directory
+ (e.g., C{'epydoc/markup'})
+ - A string, naming a python file
+ (e.g., C{'epydoc/docparser.py'})
+ - A string, naming a python object
+ (e.g., C{'epydoc.docparser.DocParser'})
+ - Any (non-string) python object
+ (e.g., C{list.append})
+ @param introspect: If true, then use introspection to examine the
+ specified items. Otherwise, just use parsing.
+ @param parse: If true, then use parsing to examine the specified
+ items. Otherwise, just use introspection.
+ """
+ try:
+ options = BuildOptions(parse=parse, introspect=introspect,
+ exclude_introspect=exclude_introspect, exclude_parse=exclude_parse,
+ add_submodules=add_submodules)
+ except Exception, e:
+ # log.error already reported by constructor.
+ return None
+
+ # Get the basic docs for each item.
+ doc_pairs = _get_docs_from_items(items, options)
+
+ # Merge the introspection & parse docs.
+ if options.parse and options.introspect:
+ log.start_progress('Merging parsed & introspected information')
+ docs = []
+ for i, (introspect_doc, parse_doc) in enumerate(doc_pairs):
+ if introspect_doc is not None and parse_doc is not None:
+ if introspect_doc.canonical_name not in (None, UNKNOWN):
+ name = introspect_doc.canonical_name
+ else:
+ name = parse_doc.canonical_name
+ log.progress(float(i)/len(doc_pairs), name)
+ docs.append(merge_docs(introspect_doc, parse_doc))
+ elif introspect_doc is not None:
+ docs.append(introspect_doc)
+ elif parse_doc is not None:
+ docs.append(parse_doc)
+ log.end_progress()
+ elif options.introspect:
+ docs = [doc_pair[0] for doc_pair in doc_pairs if doc_pair[0]]
+ else:
+ docs = [doc_pair[1] for doc_pair in doc_pairs if doc_pair[1]]
+
+ if len(docs) == 0:
+ log.error('Nothing left to document!')
+ return None
+
+ # Collect the docs into a single index.
+ docindex = DocIndex(docs)
+
+ # Replace any proxy valuedocs that we got from importing with
+ # their targets.
+ if options.parse:
+ log.start_progress('Linking imported variables')
+ valdocs = sorted(docindex.reachable_valdocs(
+ imports=False, submodules=False, packages=False, subclasses=False))
+ for i, val_doc in enumerate(valdocs):
+ _report_valdoc_progress(i, val_doc, valdocs)
+ link_imports(val_doc, docindex)
+ log.end_progress()
+
+ # Assign canonical names.
+ log.start_progress('Indexing documentation')
+ for i, val_doc in enumerate(docindex.root):
+ log.progress(float(i)/len(docindex.root), val_doc.canonical_name)
+ assign_canonical_names(val_doc, val_doc.canonical_name, docindex)
+ log.end_progress()
+
+ # Set overrides pointers
+ log.start_progress('Checking for overridden methods')
+ valdocs = sorted(docindex.reachable_valdocs(
+ imports=False, submodules=False, packages=False, subclasses=False))
+ for i, val_doc in enumerate(valdocs):
+ if isinstance(val_doc, ClassDoc):
+ percent = float(i)/len(valdocs)
+ log.progress(percent, val_doc.canonical_name)
+ find_overrides(val_doc)
+ log.end_progress()
+
+ # Parse the docstrings for each object.
+ log.start_progress('Parsing docstrings')
+ suppress_warnings = set(valdocs).difference(
+ docindex.reachable_valdocs(
+ imports=False, submodules=False, packages=False, subclasses=False,
+ bases=False, overrides=True))
+ for i, val_doc in enumerate(valdocs):
+ _report_valdoc_progress(i, val_doc, valdocs)
+ # the value's docstring
+ parse_docstring(val_doc, docindex, suppress_warnings)
+ # the value's variables' docstrings
+ if (isinstance(val_doc, NamespaceDoc) and
+ val_doc.variables not in (None, UNKNOWN)):
+ for var_doc in val_doc.variables.values():
+ # Now we have a chance to propagate the defining module
+ # to objects for which introspection is not possible,
+ # such as properties.
+ if (isinstance(var_doc.value, ValueDoc)
+ and var_doc.value.defining_module is UNKNOWN):
+ var_doc.value.defining_module = val_doc.defining_module
+ parse_docstring(var_doc, docindex, suppress_warnings)
+ log.end_progress()
+
+ # Take care of inheritance.
+ log.start_progress('Inheriting documentation')
+ for i, val_doc in enumerate(valdocs):
+ if isinstance(val_doc, ClassDoc):
+ percent = float(i)/len(valdocs)
+ log.progress(percent, val_doc.canonical_name)
+ inherit_docs(val_doc)
+ log.end_progress()
+
+ # Initialize the groups & sortedvars attributes.
+ log.start_progress('Sorting & Grouping')
+ for i, val_doc in enumerate(valdocs):
+ if isinstance(val_doc, NamespaceDoc):
+ percent = float(i)/len(valdocs)
+ log.progress(percent, val_doc.canonical_name)
+ val_doc.init_sorted_variables()
+ val_doc.init_variable_groups()
+ if isinstance(val_doc, ModuleDoc):
+ val_doc.init_submodule_groups()
+ val_doc.report_unused_groups()
+ log.end_progress()
+
+ return docindex
+
+def _report_valdoc_progress(i, val_doc, val_docs):
+ if (isinstance(val_doc, (ModuleDoc, ClassDoc)) and
+ val_doc.canonical_name is not UNKNOWN and
+ not val_doc.canonical_name[0].startswith('??')):
+ log.progress(float(i)/len(val_docs), val_doc.canonical_name)
+
+#/////////////////////////////////////////////////////////////////
+# Documentation Generation
+#/////////////////////////////////////////////////////////////////
+
+def _get_docs_from_items(items, options):
+
+ # Start the progress bar.
+ log.start_progress('Building documentation')
+ progress_estimator = _ProgressEstimator(items)
+
+ # Check for duplicate item names.
+ item_set = set()
+ for item in items[:]:
+ if item in item_set:
+ log.warning("Name %r given multiple times" % item)
+ items.remove(item)
+ item_set.add(item)
+
+ # Keep track of what top-level canonical names we've assigned, to
+ # make sure there are no naming conflicts. This dict maps
+ # canonical names to the item names they came from (so we can print
+ # useful error messages).
+ canonical_names = {}
+
+ # Collect (introspectdoc, parsedoc) pairs for each item.
+ doc_pairs = []
+ for item in items:
+ if isinstance(item, basestring):
+ if is_module_file(item):
+ doc_pairs.append(_get_docs_from_module_file(
+ item, options, progress_estimator))
+ elif is_package_dir(item):
+ pkgfile = os.path.abspath(os.path.join(item, '__init__'))
+ doc_pairs.append(_get_docs_from_module_file(
+ pkgfile, options, progress_estimator))
+ elif os.path.isfile(item):
+ doc_pairs.append(_get_docs_from_pyscript(
+ item, options, progress_estimator))
+ elif hasattr(__builtin__, item):
+ val = getattr(__builtin__, item)
+ doc_pairs.append(_get_docs_from_pyobject(
+ val, options, progress_estimator))
+ elif is_pyname(item):
+ doc_pairs.append(_get_docs_from_pyname(
+ item, options, progress_estimator))
+ elif os.path.isdir(item):
+ log.error("Directory %r is not a package" % item)
+ continue
+ elif os.path.isfile(item):
+ log.error("File %s is not a Python module" % item)
+ continue
+ else:
+ log.error("Could not find a file or object named %s" %
+ item)
+ continue
+ else:
+ doc_pairs.append(_get_docs_from_pyobject(
+ item, options, progress_estimator))
+
+ # Make sure there are no naming conflicts.
+ name = (getattr(doc_pairs[-1][0], 'canonical_name', None) or
+ getattr(doc_pairs[-1][1], 'canonical_name', None))
+ if name in canonical_names:
+ log.error(
+ 'Two of the specified items, %r and %r, have the same '
+ 'canonical name ("%s"). This may mean that you specified '
+ 'two different files that both use the same module name. '
+ 'Ignoring the second item (%r)' %
+                (canonical_names[name], item, name, item))
+ doc_pairs.pop()
+ else:
+ canonical_names[name] = item
+
+ # This will only have an effect if doc_pairs[-1] contains a
+ # package's docs. The 'not is_module_file(item)' prevents
+ # us from adding subdirectories if they explicitly specify
+ # a package's __init__.py file.
+ if options.add_submodules and not is_module_file(item):
+ doc_pairs += _get_docs_from_submodules(
+ item, doc_pairs[-1], options, progress_estimator)
+
+ log.end_progress()
+ return doc_pairs
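The two de-duplication checks performed above can be sketched standalone (modern Python; `dedupe` and `name_of` are illustrative names, not epydoc APIs):

```python
# Sketch of _get_docs_from_items' de-duplication: repeated item names
# are dropped, and when two distinct items resolve to the same
# canonical name, only the first is kept.
def dedupe(items, name_of):
    seen, kept = set(), []
    for item in items:
        if item in seen:
            continue                  # "Name given multiple times"
        seen.add(item)
        kept.append(item)
    canonical, docs = {}, []
    for item in kept:
        name = name_of(item)
        if name in canonical:
            continue                  # same canonical name: ignore the second item
        canonical[name] = item
        docs.append((item, name))
    return docs

print(dedupe(['a.py', 'b.py', 'a.py'], lambda f: f.split('.')[0]))
```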
+
+def _get_docs_from_pyobject(obj, options, progress_estimator):
+ progress_estimator.complete += 1
+ log.progress(progress_estimator.progress(), repr(obj))
+
+ if not options.introspect:
+ log.error("Cannot get docs for Python objects without "
+ "introspecting them.")
+
+ introspect_doc = parse_doc = None
+ introspect_error = parse_error = None
+ try:
+ introspect_doc = introspect_docs(value=obj)
+ except ImportError, e:
+ log.error(e)
+ return (None, None)
+ if options.parse:
+ if introspect_doc.canonical_name is not None:
+ prev_introspect = options.introspect
+ options.introspect = False
+ try:
+                _, parse_doc = _get_docs_from_pyname(
+ str(introspect_doc.canonical_name), options,
+ progress_estimator, suppress_warnings=True)
+ finally:
+ options.introspect = prev_introspect
+
+ # We need a name:
+ if introspect_doc.canonical_name in (None, UNKNOWN):
+ if hasattr(obj, '__name__'):
+ introspect_doc.canonical_name = DottedName(
+ DottedName.UNREACHABLE, obj.__name__)
+ else:
+ introspect_doc.canonical_name = DottedName(
+ DottedName.UNREACHABLE)
+ return (introspect_doc, parse_doc)
+
+def _get_docs_from_pyname(name, options, progress_estimator,
+ suppress_warnings=False):
+ progress_estimator.complete += 1
+ if options.must_introspect(name) or options.must_parse(name):
+ log.progress(progress_estimator.progress(), name)
+
+ introspect_doc = parse_doc = None
+ introspect_error = parse_error = None
+ if options.must_introspect(name):
+ try:
+ introspect_doc = introspect_docs(name=name)
+ except ImportError, e:
+ introspect_error = str(e)
+ if options.must_parse(name):
+ try:
+ parse_doc = parse_docs(name=name)
+ except ParseError, e:
+ parse_error = str(e)
+ except ImportError, e:
+            # If we get here, then there's probably no python source
+            # available; don't bother to generate a warning.
+ pass
+
+ # Report any errors we encountered.
+ if not suppress_warnings:
+ _report_errors(name, introspect_doc, parse_doc,
+ introspect_error, parse_error)
+
+ # Return the docs we found.
+ return (introspect_doc, parse_doc)
+
+def _get_docs_from_pyscript(filename, options, progress_estimator):
+ # [xx] I should be careful about what names I allow as filenames,
+ # and maybe do some munging to prevent problems.
+
+ introspect_doc = parse_doc = None
+ introspect_error = parse_error = None
+ if options.introspect:
+ try:
+ introspect_doc = introspect_docs(filename=filename, is_script=True)
+ if introspect_doc.canonical_name is UNKNOWN:
+ introspect_doc.canonical_name = munge_script_name(filename)
+ except ImportError, e:
+ introspect_error = str(e)
+ if options.parse:
+ try:
+ parse_doc = parse_docs(filename=filename, is_script=True)
+ except ParseError, e:
+ parse_error = str(e)
+ except ImportError, e:
+ parse_error = str(e)
+
+ # Report any errors we encountered.
+ _report_errors(filename, introspect_doc, parse_doc,
+ introspect_error, parse_error)
+
+ # Return the docs we found.
+ return (introspect_doc, parse_doc)
+
+def _get_docs_from_module_file(filename, options, progress_estimator,
+ parent_docs=(None,None)):
+ """
+ Construct and return the API documentation for the python
+ module with the given filename.
+
+ @param parent_docs: The C{ModuleDoc} of the containing package.
+ If C{parent_docs} is not provided, then this method will
+ check if the given filename is contained in a package; and
+ if so, it will construct a stub C{ModuleDoc} for the
+ containing package(s). C{parent_docs} is a tuple, where
+ the first element is the parent from introspection, and
+ the second element is the parent from parsing.
+ """
+ # Record our progress.
+ modulename = os.path.splitext(os.path.split(filename)[1])[0]
+ if modulename == '__init__':
+ modulename = os.path.split(os.path.split(filename)[0])[1]
+ if parent_docs[0]:
+ modulename = DottedName(parent_docs[0].canonical_name, modulename)
+ elif parent_docs[1]:
+ modulename = DottedName(parent_docs[1].canonical_name, modulename)
+ if options.must_introspect(modulename) or options.must_parse(modulename):
+ log.progress(progress_estimator.progress(),
+ '%s (%s)' % (modulename, filename))
+ progress_estimator.complete += 1
+
+ # Normalize the filename.
+ filename = os.path.normpath(os.path.abspath(filename))
+
+ # When possible, use the source version of the file.
+ try:
+ filename = py_src_filename(filename)
+ src_file_available = True
+ except ValueError:
+ src_file_available = False
+
+ # Get the introspected & parsed docs (as appropriate)
+ introspect_doc = parse_doc = None
+ introspect_error = parse_error = None
+ if options.must_introspect(modulename):
+ try:
+ introspect_doc = introspect_docs(
+ filename=filename, context=parent_docs[0])
+ if introspect_doc.canonical_name is UNKNOWN:
+ introspect_doc.canonical_name = modulename
+ except ImportError, e:
+ introspect_error = str(e)
+ if src_file_available and options.must_parse(modulename):
+ try:
+ parse_doc = parse_docs(
+ filename=filename, context=parent_docs[1])
+ except ParseError, e:
+ parse_error = str(e)
+ except ImportError, e:
+ parse_error = str(e)
+
+ # Report any errors we encountered.
+ _report_errors(filename, introspect_doc, parse_doc,
+ introspect_error, parse_error)
+
+ # Return the docs we found.
+ return (introspect_doc, parse_doc)
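The module-name derivation at the top of this function can be sketched standalone (modern Python; `module_name_for` is a hypothetical helper mirroring the logic, not the real API):

```python
import os.path

# Sketch of how _get_docs_from_module_file derives a module name from a
# filename: strip the extension, and replace '__init__' with the name
# of the containing package directory; a parent package name, if known,
# is prefixed (DottedName is stood in for by plain dotted strings).
def module_name_for(filename, parent=None):
    name = os.path.splitext(os.path.split(filename)[1])[0]
    if name == '__init__':
        name = os.path.split(os.path.split(filename)[0])[1]
    if parent:
        name = '%s.%s' % (parent, name)
    return name

print(module_name_for('epydoc/markup/__init__.py'))                  # markup
print(module_name_for('epydoc/markup/epytext.py', 'epydoc.markup'))  # epydoc.markup.epytext
```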
+
+def _get_docs_from_submodules(item, pkg_docs, options, progress_estimator):
+ # Extract the package's __path__.
+ if isinstance(pkg_docs[0], ModuleDoc) and pkg_docs[0].is_package:
+ pkg_path = pkg_docs[0].path
+ package_dir = os.path.split(pkg_docs[0].filename)[0]
+ elif isinstance(pkg_docs[1], ModuleDoc) and pkg_docs[1].is_package:
+ pkg_path = pkg_docs[1].path
+ package_dir = os.path.split(pkg_docs[1].filename)[0]
+ else:
+ return []
+
+ module_filenames = {}
+ subpackage_dirs = set()
+ for subdir in pkg_path:
+ if os.path.isdir(subdir):
+ for name in os.listdir(subdir):
+ filename = os.path.join(subdir, name)
+ # Is it a valid module filename?
+ if is_module_file(filename):
+ basename = os.path.splitext(filename)[0]
+ if os.path.split(basename)[1] != '__init__':
+ module_filenames[basename] = filename
+ # Is it a valid package filename?
+ if is_package_dir(filename):
+ subpackage_dirs.add(filename)
+
+ # Update our estimate of the number of modules in this package.
+ progress_estimator.revise_estimate(item, module_filenames.items(),
+ subpackage_dirs)
+
+ docs = [pkg_docs]
+ for module_filename in module_filenames.values():
+ d = _get_docs_from_module_file(
+ module_filename, options, progress_estimator, pkg_docs)
+ docs.append(d)
+ for subpackage_dir in subpackage_dirs:
+ subpackage_file = os.path.join(subpackage_dir, '__init__')
+ docs.append(_get_docs_from_module_file(
+ subpackage_file, options, progress_estimator, pkg_docs))
+ docs += _get_docs_from_submodules(
+ subpackage_dir, docs[-1], options, progress_estimator)
+ return docs
+
+def _report_errors(name, introspect_doc, parse_doc,
+ introspect_error, parse_error):
+ hdr = 'In %s:\n' % name
+ if introspect_doc == parse_doc == None:
+ log.start_block('%sNo documentation available!' % hdr)
+ if introspect_error:
+ log.error('Import failed:\n%s' % introspect_error)
+ if parse_error:
+ log.error('Source code parsing failed:\n%s' % parse_error)
+ log.end_block()
+ elif introspect_error:
+ log.start_block('%sImport failed (but source code parsing '
+ 'was successful).' % hdr)
+ log.error(introspect_error)
+ log.end_block()
+ elif parse_error:
+ log.start_block('%sSource code parsing failed (but '
+ 'introspection was successful).' % hdr)
+ log.error(parse_error)
+ log.end_block()
+
+
+#/////////////////////////////////////////////////////////////////
+# Progress Estimation (for Documentation Generation)
+#/////////////////////////////////////////////////////////////////
+
+class _ProgressEstimator:
+ """
+ Used to keep track of progress when generating the initial docs
+ for the given items. (It is not known in advance how many items a
+ package directory will contain, since it might depend on those
+ packages' __path__ values.)
+ """
+ def __init__(self, items):
+ self.est_totals = {}
+ self.complete = 0
+
+ for item in items:
+ if is_package_dir(item):
+ self.est_totals[item] = self._est_pkg_modules(item)
+ else:
+ self.est_totals[item] = 1
+
+ def progress(self):
+ total = sum(self.est_totals.values())
+ return float(self.complete) / total
+
+ def revise_estimate(self, pkg_item, modules, subpackages):
+ del self.est_totals[pkg_item]
+ for item in modules:
+ self.est_totals[item] = 1
+ for item in subpackages:
+ self.est_totals[item] = self._est_pkg_modules(item)
+
+ def _est_pkg_modules(self, package_dir):
+ num_items = 0
+
+ if is_package_dir(package_dir):
+ for name in os.listdir(package_dir):
+ filename = os.path.join(package_dir, name)
+ if is_module_file(filename):
+ num_items += 1
+ elif is_package_dir(filename):
+ num_items += self._est_pkg_modules(filename)
+
+ return num_items
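The estimator's bookkeeping can be sketched in a few lines (modern Python; a simplified stand-in, not the real class, with directory scanning replaced by caller-supplied estimates):

```python
# Minimal re-implementation of the _ProgressEstimator bookkeeping: each
# package starts with an estimated module count, which is revised to
# the real module list once the package directory has been scanned.
class ProgressEstimator:
    def __init__(self, estimates):
        self.est_totals = dict(estimates)   # item -> estimated size
        self.complete = 0

    def progress(self):
        return float(self.complete) / sum(self.est_totals.values())

    def revise(self, pkg_item, modules):
        # Replace the package's estimate with one entry per real module.
        del self.est_totals[pkg_item]
        for item in modules:
            self.est_totals[item] = 1

est = ProgressEstimator({'pkg': 4, 'mod.py': 1})
est.complete = 1
print(est.progress())                # 0.2 (1 of an estimated 5)
est.revise('pkg', ['a.py', 'b.py'])
print(est.progress())                # 1 of 3 actual items done
```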
+
+######################################################################
+## Doc Merger
+######################################################################
+
+MERGE_PRECEDENCE = {
+ 'repr': 'parse',
+
+ # The names we get from introspection match the names that users
+ # can actually use -- i.e., they take magic into account.
+ 'canonical_name': 'introspect',
+
+ # Only fall-back on the parser for is_imported if the introspecter
+ # isn't sure. Otherwise, we can end up thinking that vars
+ # containing modules are not imported, which can cause external
+ # modules to show up in the docs (sf bug #1653486)
+ 'is_imported': 'introspect',
+
+ # The parser can tell if an assignment creates an alias or not.
+ 'is_alias': 'parse',
+
+ # The parser is better able to determine what text file something
+ # came from; e.g., it can't be fooled by 'covert' imports.
+ 'docformat': 'parse',
+
+    # The parser should be able to tell definitively whether a module
+ # is a package or not.
+ 'is_package': 'parse',
+
+ # Extract the sort spec from the order in which values are defined
+ # in the source file.
+ 'sort_spec': 'parse',
+
+ 'submodules': 'introspect',
+
+ # The filename used by 'parse' is the source file.
+ 'filename': 'parse',
+
+ # 'parse' is more likely to get the encoding right, but
+    # 'introspect' will handle programmatically generated docstrings.
+ # Which is better?
+ 'docstring': 'introspect',
+ }
+"""Indicates whether information from introspection or parsing should be
+given precedence, for specific attributes. This dictionary maps from
+attribute names to either C{'introspect'} or C{'parse'}."""
+
+DEFAULT_MERGE_PRECEDENCE = 'introspect'
+"""Indicates whether information from introspection or parsing should be
+given precedence. Should be either C{'introspect'} or C{'parse'}"""
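The lookup these two values feed into amounts to a single dictionary `get` with a default (abbreviated table shown for illustration):

```python
# Sketch of how a merged attribute's winning source is chosen: a
# per-attribute entry in MERGE_PRECEDENCE, falling back to
# DEFAULT_MERGE_PRECEDENCE for all other attributes.
MERGE_PRECEDENCE = {'repr': 'parse', 'canonical_name': 'introspect'}
DEFAULT_MERGE_PRECEDENCE = 'introspect'

def precedence_for(attrib):
    return MERGE_PRECEDENCE.get(attrib, DEFAULT_MERGE_PRECEDENCE)

print(precedence_for('repr'))       # parse
print(precedence_for('docstring'))  # introspect (the default)
```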
+
+_attribute_mergefunc_registry = {}
+def register_attribute_mergefunc(attrib, mergefunc):
+ """
+ Register an attribute merge function. This function will be
+ called by L{merge_docs()} when it needs to merge the attribute
+ values of two C{APIDoc}s.
+
+ @param attrib: The name of the attribute whose values are merged
+ by C{mergefunc}.
+
+    @param mergefunc: The merge function, whose signature is:
+
+ >>> def mergefunc(introspect_val, parse_val, precedence, cyclecheck, path):
+ ... return calculate_merged_value(introspect_val, parse_val)
+
+ Where C{introspect_val} and C{parse_val} are the two values to
+ combine; C{precedence} is a string indicating which value takes
+ precedence for this attribute (C{'introspect'} or C{'parse'});
+ C{cyclecheck} is a value used by C{merge_docs()} to make sure that
+ it only visits each pair of docs once; and C{path} is a string
+ describing the path that was taken from the root to this
+ attribute (used to generate log messages).
+
+ If the merge function needs to call C{merge_docs}, then it should
+ pass C{cyclecheck} and C{path} back in. (When appropriate, a
+ suffix should be added to C{path} to describe the path taken to
+ the merged values.)
+ """
+ _attribute_mergefunc_registry[attrib] = mergefunc
+
+def merge_docs(introspect_doc, parse_doc, cyclecheck=None, path=None):
+ """
+ Merge the API documentation information that was obtained from
+ introspection with information that was obtained from parsing.
+ C{introspect_doc} and C{parse_doc} should be two C{APIDoc} instances
+ that describe the same object. C{merge_docs} combines the
+ information from these two instances, and returns the merged
+ C{APIDoc}.
+
+ If C{introspect_doc} and C{parse_doc} are compatible, then they will
+ be I{merged} -- i.e., they will be coerced to a common class, and
+ their state will be stored in a shared dictionary. Once they have
+ been merged, any change made to the attributes of one will affect
+    the other. The value for each of the merged C{APIDoc}'s
+ attributes is formed by combining the values of the source
+ C{APIDoc}s' attributes, as follows:
+
+ - If either of the source attributes' value is C{UNKNOWN}, then
+ use the other source attribute's value.
+ - Otherwise, if an attribute merge function has been registered
+ for the attribute, then use that function to calculate the
+ merged value from the two source attribute values.
+ - Otherwise, if L{MERGE_PRECEDENCE} is defined for the
+ attribute, then use the attribute value from the source that
+ it indicates.
+ - Otherwise, use the attribute value from the source indicated
+ by L{DEFAULT_MERGE_PRECEDENCE}.
+
+ If C{introspect_doc} and C{parse_doc} are I{not} compatible (e.g., if
+ their values have incompatible types), then C{merge_docs()} will
+ simply return either C{introspect_doc} or C{parse_doc}, depending on
+ the value of L{DEFAULT_MERGE_PRECEDENCE}. The two input
+ C{APIDoc}s will not be merged or modified in any way.
+
+ @param cyclecheck, path: These arguments should only be provided
+ when C{merge_docs()} is called by an attribute merge
+ function. See L{register_attribute_mergefunc()} for more
+ details.
+ """
+ assert isinstance(introspect_doc, APIDoc)
+ assert isinstance(parse_doc, APIDoc)
+
+ if cyclecheck is None:
+ cyclecheck = set()
+ if introspect_doc.canonical_name not in (None, UNKNOWN):
+ path = '%s' % introspect_doc.canonical_name
+ elif parse_doc.canonical_name not in (None, UNKNOWN):
+ path = '%s' % parse_doc.canonical_name
+ else:
+ path = '??'
+
+ # If we've already examined this pair, then there's nothing
+ # more to do. The reason that we check id's here is that we
+ # want to avoid hashing the APIDoc objects for now, so we can
+ # use APIDoc.merge_and_overwrite() later.
+ if (id(introspect_doc), id(parse_doc)) in cyclecheck:
+ return introspect_doc
+ cyclecheck.add( (id(introspect_doc), id(parse_doc)) )
+
+ # If these two are already merged, then we're done. (Two
+ # APIDoc's compare equal iff they are identical or have been
+ # merged.)
+ if introspect_doc == parse_doc:
+ return introspect_doc
+
+ # If both values are GenericValueDoc, then we don't want to merge
+ # them. E.g., we don't want to merge 2+2 with 4. So just copy
+ # the parse_doc's parse_repr to introspect_doc, & return it.
+ # (In particular, do *not* call merge_and_overwrite.)
+ if type(introspect_doc) == type(parse_doc) == GenericValueDoc:
+ if parse_doc.parse_repr is not UNKNOWN:
+ introspect_doc.parse_repr = parse_doc.parse_repr
+ introspect_doc.docs_extracted_by = 'both'
+ return introspect_doc
+
+ # Perform several sanity checks here -- if we accidentally
+ # merge values that shouldn't get merged, then bad things can
+ # happen.
+ mismatch = None
+ if (introspect_doc.__class__ != parse_doc.__class__ and
+ not (issubclass(introspect_doc.__class__, parse_doc.__class__) or
+ issubclass(parse_doc.__class__, introspect_doc.__class__))):
+ mismatch = ("value types don't match -- i=%r, p=%r." %
+ (introspect_doc.__class__, parse_doc.__class__))
+ if (isinstance(introspect_doc, ValueDoc) and
+ isinstance(parse_doc, ValueDoc)):
+ if (introspect_doc.pyval is not UNKNOWN and
+ parse_doc.pyval is not UNKNOWN and
+ introspect_doc.pyval is not parse_doc.pyval):
+ mismatch = "values don't match."
+ elif (introspect_doc.canonical_name not in (None, UNKNOWN) and
+ parse_doc.canonical_name not in (None, UNKNOWN) and
+ introspect_doc.canonical_name != parse_doc.canonical_name):
+ mismatch = "canonical names don't match."
+ if mismatch is not None:
+ log.info("Not merging the parsed & introspected values of %s, "
+ "since their %s" % (path, mismatch))
+ if DEFAULT_MERGE_PRECEDENCE == 'introspect':
+ return introspect_doc
+ else:
+ return parse_doc
+
+ # If one apidoc's class is a superclass of the other's, then
+ # specialize it to the more specific class.
+ if introspect_doc.__class__ is not parse_doc.__class__:
+ if issubclass(introspect_doc.__class__, parse_doc.__class__):
+ parse_doc.specialize_to(introspect_doc.__class__)
+ if issubclass(parse_doc.__class__, introspect_doc.__class__):
+ introspect_doc.specialize_to(parse_doc.__class__)
+ assert introspect_doc.__class__ is parse_doc.__class__
+
+ # The posargs and defaults are tied together -- if we merge
+ # the posargs one way, then we need to merge the defaults the
+ # same way. So check them first. (This is a minor hack)
+ if (isinstance(introspect_doc, RoutineDoc) and
+ isinstance(parse_doc, RoutineDoc)):
+ _merge_posargs_and_defaults(introspect_doc, parse_doc, path)
+
+ # Merge the two api_doc's attributes.
+ for attrib in set(introspect_doc.__dict__.keys() +
+ parse_doc.__dict__.keys()):
+ # Be sure not to merge any private attributes (especially
+ # __mergeset or __has_been_hashed!)
+ if attrib.startswith('_'): continue
+ merge_attribute(attrib, introspect_doc, parse_doc,
+ cyclecheck, path)
+
+ # Set the dictionaries to be shared.
+ return introspect_doc.merge_and_overwrite(parse_doc)
+
+def _merge_posargs_and_defaults(introspect_doc, parse_doc, path):
+ # If either is unknown, then let merge_attrib handle it.
+ if introspect_doc.posargs is UNKNOWN or parse_doc.posargs is UNKNOWN:
+ return
+
+ # If the introspected doc just has '...', then trust the parsed doc.
+ if introspect_doc.posargs == ['...'] and parse_doc.posargs != ['...']:
+ introspect_doc.posargs = parse_doc.posargs
+ introspect_doc.posarg_defaults = parse_doc.posarg_defaults
+
+ # If they are incompatible, then check the precedence.
+ elif introspect_doc.posargs != parse_doc.posargs:
+ log.info("Not merging the parsed & introspected arg "
+ "lists for %s, since they don't match (%s vs %s)"
+ % (path, introspect_doc.posargs, parse_doc.posargs))
+ if (MERGE_PRECEDENCE.get('posargs', DEFAULT_MERGE_PRECEDENCE) ==
+ 'introspect'):
+ parse_doc.posargs = introspect_doc.posargs
+ parse_doc.posarg_defaults = introspect_doc.posarg_defaults
+ else:
+ introspect_doc.posargs = parse_doc.posargs
+ introspect_doc.posarg_defaults = parse_doc.posarg_defaults
+
+def merge_attribute(attrib, introspect_doc, parse_doc, cyclecheck, path):
+ precedence = MERGE_PRECEDENCE.get(attrib, DEFAULT_MERGE_PRECEDENCE)
+ if precedence not in ('parse', 'introspect'):
+ raise ValueError('Bad precedence value %r' % precedence)
+
+ if (getattr(introspect_doc, attrib) is UNKNOWN and
+ getattr(parse_doc, attrib) is not UNKNOWN):
+ setattr(introspect_doc, attrib, getattr(parse_doc, attrib))
+ elif (getattr(introspect_doc, attrib) is not UNKNOWN and
+ getattr(parse_doc, attrib) is UNKNOWN):
+ setattr(parse_doc, attrib, getattr(introspect_doc, attrib))
+ elif (getattr(introspect_doc, attrib) is UNKNOWN and
+ getattr(parse_doc, attrib) is UNKNOWN):
+ pass
+ else:
+ # Both APIDoc objects have values; we need to merge them.
+ introspect_val = getattr(introspect_doc, attrib)
+ parse_val = getattr(parse_doc, attrib)
+ if attrib in _attribute_mergefunc_registry:
+ handler = _attribute_mergefunc_registry[attrib]
+ merged_val = handler(introspect_val, parse_val, precedence,
+ cyclecheck, path)
+ elif precedence == 'introspect':
+ merged_val = introspect_val
+ elif precedence == 'parse':
+ merged_val = parse_val
+
+ setattr(introspect_doc, attrib, merged_val)
+ setattr(parse_doc, attrib, merged_val)
+
+def merge_variables(varlist1, varlist2, precedence, cyclecheck, path):
+ # Merge all variables that are in both sets.
+ for varname, var1 in varlist1.items():
+ var2 = varlist2.get(varname)
+ if var2 is not None:
+ var = merge_docs(var1, var2, cyclecheck, path+'.'+varname)
+ varlist1[varname] = var
+ varlist2[varname] = var
+
+ # Copy any variables that are not in varlist1 over.
+ for varname, var in varlist2.items():
+ varlist1.setdefault(varname, var)
+
+ return varlist1
+
+def merge_value(value1, value2, precedence, cyclecheck, path):
+ assert value1 is not None and value2 is not None
+ return merge_docs(value1, value2, cyclecheck, path)
+
+def merge_overrides(v1, v2, precedence, cyclecheck, path):
+ return merge_value(v1, v2, precedence, cyclecheck, path+'.<overrides>')
+def merge_fget(v1, v2, precedence, cyclecheck, path):
+ return merge_value(v1, v2, precedence, cyclecheck, path+'.fget')
+def merge_fset(v1, v2, precedence, cyclecheck, path):
+ return merge_value(v1, v2, precedence, cyclecheck, path+'.fset')
+def merge_fdel(v1, v2, precedence, cyclecheck, path):
+ return merge_value(v1, v2, precedence, cyclecheck, path+'.fdel')
+
+def merge_proxy_for(v1, v2, precedence, cyclecheck, path):
+ # Anything we got from introspection shouldn't have a proxy_for
+ # attribute -- it should be the actual object's documentation.
+ return v1
+
+def merge_bases(baselist1, baselist2, precedence, cyclecheck, path):
+ # Be careful here -- if we get it wrong, then we could end up
+ # merging two unrelated classes, which could lead to bad
+ # things (e.g., a class that's its own subclass). So only
+ # merge two bases if we're quite sure they're the same class.
+ # (In particular, if they have the same canonical name.)
+
+ # If the lengths don't match up, then give up. This is most
+ # often caused by __metaclass__.
+ if len(baselist1) != len(baselist2):
+ log.info("Not merging the introspected & parsed base lists "
+ "for %s, since their lengths don't match (%s vs %s)" %
+ (path, len(baselist1), len(baselist2)))
+ if precedence == 'introspect': return baselist1
+ else: return baselist2
+
+ # If any names disagree, then give up.
+ for base1, base2 in zip(baselist1, baselist2):
+ if ((base1.canonical_name not in (None, UNKNOWN) and
+ base2.canonical_name not in (None, UNKNOWN)) and
+ base1.canonical_name != base2.canonical_name):
+ log.info("Not merging the parsed & introspected base "
+ "lists for %s, since the bases' names don't match "
+ "(%s vs %s)" % (path, base1.canonical_name,
+ base2.canonical_name))
+ if precedence == 'introspect': return baselist1
+ else: return baselist2
+
+ for i, (base1, base2) in enumerate(zip(baselist1, baselist2)):
+ base = merge_docs(base1, base2, cyclecheck,
+ '%s.__bases__[%d]' % (path, i))
+ baselist1[i] = baselist2[i] = base
+
+ return baselist1
+
+def merge_posarg_defaults(defaults1, defaults2, precedence, cyclecheck, path):
+ if len(defaults1) != len(defaults2):
+ if precedence == 'introspect': return defaults1
+ else: return defaults2
+ defaults = []
+ for i, (d1, d2) in enumerate(zip(defaults1, defaults2)):
+ if d1 is not None and d2 is not None:
+ d_path = '%s.<default-arg-val>[%d]' % (path, i)
+ defaults.append(merge_docs(d1, d2, cyclecheck, d_path))
+ elif precedence == 'introspect':
+ defaults.append(d1)
+ else:
+ defaults.append(d2)
+ return defaults
+
+def merge_docstring(docstring1, docstring2, precedence, cyclecheck, path):
+ if docstring1 is None or docstring1 is UNKNOWN or precedence=='parse':
+ return docstring2
+ else:
+ return docstring1
+
+def merge_docs_extracted_by(v1, v2, precedence, cyclecheck, path):
+ return 'both'
+
+def merge_submodules(v1, v2, precedence, cyclecheck, path):
+ n1 = sorted([m.canonical_name for m in v1])
+ n2 = sorted([m.canonical_name for m in v2])
+ if (n1 != n2) and (n2 != []):
+ log.info('Introspector & parser disagree about submodules '
+ 'for %s: (%s) vs (%s)' % (path,
+ ', '.join([str(n) for n in n1]),
+ ', '.join([str(n) for n in n2])))
+ return v1 + [m for m in v2 if m.canonical_name not in n1]
+
+ return v1
+
+register_attribute_mergefunc('variables', merge_variables)
+register_attribute_mergefunc('value', merge_value)
+register_attribute_mergefunc('overrides', merge_overrides)
+register_attribute_mergefunc('fget', merge_fget)
+register_attribute_mergefunc('fset', merge_fset)
+register_attribute_mergefunc('fdel', merge_fdel)
+register_attribute_mergefunc('proxy_for', merge_proxy_for)
+register_attribute_mergefunc('bases', merge_bases)
+register_attribute_mergefunc('posarg_defaults', merge_posarg_defaults)
+register_attribute_mergefunc('docstring', merge_docstring)
+register_attribute_mergefunc('docs_extracted_by', merge_docs_extracted_by)
+register_attribute_mergefunc('submodules', merge_submodules)
+
+######################################################################
+## Import Linking
+######################################################################
+
+def link_imports(val_doc, docindex):
+ # Check if the ValueDoc has an unresolved proxy_for link.
+ # If so, then resolve it.
+ while val_doc.proxy_for not in (UNKNOWN, None):
+ # Find the valuedoc that the proxy_for name points to.
+ src_doc = docindex.get_valdoc(val_doc.proxy_for)
+
+ # If we don't have any valuedoc at that address, then
+ # set that address as its canonical name.
+ # [XXX] Do I really want to do this?
+ if src_doc is None:
+ val_doc.canonical_name = val_doc.proxy_for
+ return
+
+ # If we *do* have something at that address, then
+ # merge the proxy `val_doc` with it.
+ elif src_doc != val_doc:
+ # Copy any subclass information from val_doc->src_doc.
+ if (isinstance(val_doc, ClassDoc) and
+ isinstance(src_doc, ClassDoc)):
+ for subclass in val_doc.subclasses:
+ if subclass not in src_doc.subclasses:
+ src_doc.subclasses.append(subclass)
+ # Then overwrite val_doc with the contents of src_doc.
+ src_doc.merge_and_overwrite(val_doc, ignore_hash_conflict=True)
+
+ # If the proxy_for link points back at src_doc
+ # itself, then we most likely have a variable that's
+ # shadowing a submodule that it should be equal to.
+ # So just get rid of the variable.
+ elif src_doc == val_doc:
+ parent_name = val_doc.proxy_for[:-1]
+ var_name = val_doc.proxy_for[-1]
+ parent = docindex.get_valdoc(parent_name)
+ if parent is not None and var_name in parent.variables:
+ del parent.variables[var_name]
+ src_doc.proxy_for = None
+
+######################################################################
+## Canonical Name Assignment
+######################################################################
+
+_name_scores = {}
+"""A dictionary mapping from each C{ValueDoc} to the score that has
+been assigned to its current canonical name.  If
+L{assign_canonical_names()} finds a canonical name with a better
+score, then it will replace the old name."""
+
+_unreachable_names = {DottedName(DottedName.UNREACHABLE):1}
+"""The set of names that have been used for unreachable objects.  This
+is used to ensure there are no duplicate canonical names assigned.
+C{_unreachable_names} is a dictionary mapping from dotted names to
+integer ids, where the next unused unreachable name derived from
+dotted name C{n} is
+C{DottedName('%s-%s' % (n, str(_unreachable_names[n]+1)))}"""
+
+def assign_canonical_names(val_doc, name, docindex, score=0):
+ """
+ Assign a canonical name to C{val_doc} (if it doesn't have one
+ already), and (recursively) to each variable in C{val_doc}.
+ In particular, C{val_doc} will be assigned the canonical name
+ C{name} iff either:
+ - C{val_doc}'s canonical name is C{UNKNOWN}; or
+      - C{val_doc}'s current canonical name was assigned by this
+        method, but the score of the new name (C{score}) is higher
+        than the score of the current name (C{_name_scores[val_doc]}).
+
+ Note that canonical names will even be assigned to values
+ like integers and C{None}; but these should be harmless.
+ """
+    # If we've already visited this node, and our new score
+    # doesn't beat our old score, then there's nothing more to do.
+    # Note that since the score decreases strictly monotonically as
+    # we recurse, this also prevents us from going in cycles.
+ if val_doc in _name_scores and score <= _name_scores[val_doc]:
+ return
+
+ # Update val_doc's canonical name, if appropriate.
+ if (val_doc not in _name_scores and
+ val_doc.canonical_name is not UNKNOWN):
+ # If this is the first time we've seen val_doc, and it
+ # already has a name, then don't change that name.
+ _name_scores[val_doc] = sys.maxint
+ name = val_doc.canonical_name
+ score = 0
+ else:
+ # Otherwise, update the name iff the new score is better
+ # than the old one.
+ if (val_doc not in _name_scores or
+ score > _name_scores[val_doc]):
+ val_doc.canonical_name = name
+ _name_scores[val_doc] = score
+
+ # Recurse to any contained values.
+ if isinstance(val_doc, NamespaceDoc):
+ for var_doc in val_doc.variables.values():
+ # Set the variable's canonical name.
+ varname = DottedName(name, var_doc.name)
+ var_doc.canonical_name = varname
+
+ # If the value is unknown, or is a generic value doc, then
+ # the valuedoc doesn't get assigned a name; move on.
+ if (var_doc.value is UNKNOWN
+ or isinstance(var_doc.value, GenericValueDoc)):
+ continue
+
+ # [XX] After svn commit 1644-1647, I'm not sure if this
+ # ever gets used: This check is for cases like
+ # curses.wrapper, where an imported variable shadows its
+ # value's "real" location.
+ if _var_shadows_self(var_doc, varname):
+ _fix_self_shadowing_var(var_doc, varname, docindex)
+
+ # Find the score for this new name.
+ vardoc_score = score-1
+ if var_doc.is_imported is UNKNOWN: vardoc_score -= 10
+ elif var_doc.is_imported: vardoc_score -= 100
+ if var_doc.is_alias is UNKNOWN: vardoc_score -= 10
+ elif var_doc.is_alias: vardoc_score -= 1000
+
+ assign_canonical_names(var_doc.value, varname,
+ docindex, vardoc_score)
+
+ # Recurse to any directly reachable values.
+ for val_doc_2 in val_doc.apidoc_links(variables=False):
+ val_name, val_score = _unreachable_name_for(val_doc_2, docindex)
+ assign_canonical_names(val_doc_2, val_name, docindex, val_score)
+
+def _var_shadows_self(var_doc, varname):
+ return (var_doc.value not in (None, UNKNOWN) and
+ var_doc.value.canonical_name not in (None, UNKNOWN) and
+ var_doc.value.canonical_name != varname and
+ varname.dominates(var_doc.value.canonical_name))
+
+def _fix_self_shadowing_var(var_doc, varname, docindex):
+ # If possible, find another name for the shadowed value.
+ cname = var_doc.value.canonical_name
+ for i in range(1, len(cname)-1):
+ new_name = cname[:i] + (cname[i]+"'") + cname[i+1:]
+ val_doc = docindex.get_valdoc(new_name)
+ if val_doc is not None:
+ log.warning("%s shadows its own value -- using %s instead" %
+ (varname, new_name))
+ var_doc.value = val_doc
+ return
+
+ # If we couldn't find the actual value, use an unreachable name.
+ name, score = _unreachable_name_for(var_doc.value, docindex)
+ log.warning('%s shadows itself -- using %s instead' % (varname, name))
+ var_doc.value.canonical_name = name
+
+def _unreachable_name_for(val_doc, docindex):
+ assert isinstance(val_doc, ValueDoc)
+
+ # [xx] (when) does this help?
+ if (isinstance(val_doc, ModuleDoc) and
+ len(val_doc.canonical_name)==1 and val_doc.package is None):
+ for root_val in docindex.root:
+ if root_val.canonical_name == val_doc.canonical_name:
+ if root_val != val_doc:
+ log.error("Name conflict: %r vs %r" %
+ (val_doc, root_val))
+ break
+ else:
+ return val_doc.canonical_name, -1000
+
+ # Assign it an 'unreachable' name:
+ if (val_doc.pyval is not UNKNOWN and
+ hasattr(val_doc.pyval, '__name__')):
+ try:
+ name = DottedName(DottedName.UNREACHABLE,
+ val_doc.pyval.__name__, strict=True)
+ except DottedName.InvalidDottedName:
+ name = DottedName(DottedName.UNREACHABLE)
+ else:
+ name = DottedName(DottedName.UNREACHABLE)
+
+ # Uniquify the name.
+ if name in _unreachable_names:
+ _unreachable_names[name] += 1
+ name = DottedName('%s-%s' % (name, _unreachable_names[name]-1))
+ else:
+ _unreachable_names[name] = 1
+
+ return name, -10000
+
+######################################################################
+## Documentation Inheritance
+######################################################################
+
+def find_overrides(class_doc):
+ """
+ Set the C{overrides} attribute for all variables in C{class_doc}.
+ This needs to be done early (before docstring parsing), so we can
+ know which docstrings to suppress warnings for.
+ """
+ for base_class in list(class_doc.mro(warn_about_bad_bases=True)):
+ if base_class == class_doc: continue
+ if base_class.variables is UNKNOWN: continue
+ for name, var_doc in base_class.variables.items():
+ if ( not (name.startswith('__') and not name.endswith('__')) and
+ base_class == var_doc.container and
+ name in class_doc.variables and
+ class_doc.variables[name].container==class_doc and
+ class_doc.variables[name].overrides is UNKNOWN ):
+ class_doc.variables[name].overrides = var_doc
+
+
+def inherit_docs(class_doc):
+ for base_class in list(class_doc.mro(warn_about_bad_bases=True)):
+ if base_class == class_doc: continue
+
+ # Inherit any groups. Place them *after* this class's groups,
+ # so that any groups that are important to this class come
+ # first.
+ if base_class.group_specs not in (None, UNKNOWN):
+ class_doc.group_specs += [gs for gs in base_class.group_specs
+ if gs not in class_doc.group_specs]
+
+ # Inherit any variables.
+ if base_class.variables is UNKNOWN: continue
+ for name, var_doc in base_class.variables.items():
+ # If it's a __private variable, then don't inherit it.
+ if name.startswith('__') and not name.endswith('__'):
+ continue
+
+            # Inherit only from the defining class.  Otherwise, in
+            # case of multiple inheritance, we might import from a
+            # grand-ancestor variables that are overridden by a class
+            # that follows in the mro.
+ if base_class != var_doc.container:
+ continue
+
+ # If class_doc doesn't have a variable with this name,
+ # then inherit it.
+ if name not in class_doc.variables:
+ class_doc.variables[name] = var_doc
+
+ # Otherwise, class_doc already contains a variable
+ # that shadows var_doc. But if class_doc's var is
+ # local, then record the fact that it overrides
+ # var_doc.
+ elif class_doc.variables[name].container==class_doc:
+ class_doc.variables[name].overrides = var_doc
+ _inherit_info(class_doc.variables[name])
+
+_INHERITED_ATTRIBS = [
+ 'descr', 'summary', 'metadata', 'extra_docstring_fields',
+ 'type_descr', 'arg_descrs', 'arg_types', 'return_descr',
+ 'return_type', 'exception_descrs']
+
+_method_descriptor = type(list.append)
+
+def _inherit_info(var_doc):
+ """
+ Copy any relevant documentation information from the variable that
+ C{var_doc} overrides into C{var_doc} itself.
+ """
+ src_var = var_doc.overrides
+ src_val = var_doc.overrides.value
+ val_doc = var_doc.value
+
+    # Special case: if the source value and target value are both C
+ # extension methods, and the target value's signature is not
+ # specified, then inherit the source value's signature.
+ if (isinstance(val_doc, RoutineDoc) and
+ isinstance(src_val, RoutineDoc) and
+ (inspect.isbuiltin(val_doc.pyval) or
+ isinstance(val_doc.pyval, _method_descriptor)) and
+ (inspect.isbuiltin(src_val.pyval) or
+ isinstance(src_val.pyval, _method_descriptor)) and
+ val_doc.all_args() in (['...'], UNKNOWN) and
+ src_val.all_args() not in (['...'], UNKNOWN)):
+ for attrib in ['posargs', 'posarg_defaults', 'vararg',
+ 'kwarg', 'return_type']:
+ setattr(val_doc, attrib, getattr(src_val, attrib))
+
+ # If the new variable has a docstring, then don't inherit
+ # anything, even if the docstring is blank.
+ if var_doc.docstring not in (None, UNKNOWN):
+ return
+ # [xx] Do I want a check like this:?
+# # If it's a method and the signature doesn't match well enough,
+# # then give up.
+# if (isinstance(src_val, RoutineDoc) and
+# isinstance(val_doc, RoutineDoc)):
+# if (src_val.posargs != val_doc.posargs[:len(src_val.posargs)] or
+# src_val.vararg != None and src_val.vararg != val_doc.vararg):
+# log.docstring_warning(
+# "The signature of %s does not match the signature of the "
+# "method it overrides (%s); not inheriting documentation." %
+# (var_doc.canonical_name, src_var.canonical_name))
+# return
+
+ # Inherit attributes!
+ for attrib in _INHERITED_ATTRIBS:
+ if (hasattr(var_doc, attrib) and hasattr(src_var, attrib) and
+ getattr(src_var, attrib) not in (None, UNKNOWN)):
+ setattr(var_doc, attrib, getattr(src_var, attrib))
+ elif (src_val is not None and
+ hasattr(val_doc, attrib) and hasattr(src_val, attrib) and
+ getattr(src_val, attrib) not in (None, UNKNOWN) and
+ getattr(val_doc, attrib) in (None, UNKNOWN, [])):
+ setattr(val_doc, attrib, getattr(src_val, attrib))
diff --git a/python/helpers/epydoc/docintrospecter.py b/python/helpers/epydoc/docintrospecter.py
new file mode 100644
index 0000000..cbbbb56
--- /dev/null
+++ b/python/helpers/epydoc/docintrospecter.py
@@ -0,0 +1,1056 @@
+# epydoc -- Introspection
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: docintrospecter.py 1678 2008-01-29 17:21:29Z edloper $
+
+"""
+Extract API documentation about Python objects by directly introspecting
+their values.
+
+The function L{introspect_docs()}, which provides the main interface
+of this module, examines a Python object via introspection, and uses
+the information it finds to create an L{APIDoc} object containing the
+API documentation for that object.
+
+The L{register_introspecter()} method can be used to extend the
+functionality of C{docintrospecter}, by providing methods that handle
+special value types.
+"""
+__docformat__ = 'epytext en'
+
+######################################################################
+## Imports
+######################################################################
+
+import inspect, re, sys, os.path, imp
+# API documentation encoding:
+from epydoc.apidoc import *
+# Type comparisons:
+from types import *
+# Error reporting:
+from epydoc import log
+# Helper functions:
+from epydoc.util import *
+# For extracting encoding for docstrings:
+import epydoc.docparser
+# Builtin values
+import __builtin__
+# Backwards compatibility
+from epydoc.compat import *
+
+######################################################################
+## Caches
+######################################################################
+
+_valuedoc_cache = {}
+"""A cache containing the API documentation for values that we've
+already seen. This cache is implemented as a dictionary that maps a
+value's pyid to its L{ValueDoc}.
+
+Note that if we encounter a value but decide not to introspect it
+(because it's imported from another module), then C{_valuedoc_cache}
+will contain an entry for the value, but the value will not be listed
+in L{_introspected_values}."""
+
+_introspected_values = {}
+"""A record of which values we've introspected, encoded as a dictionary
+from pyid to C{bool}."""
+
+def clear_cache():
+ """
+ Discard any cached C{APIDoc} values that have been computed for
+ introspected values.
+ """
+ _valuedoc_cache.clear()
+ _introspected_values.clear()
+
+######################################################################
+## Introspection
+######################################################################
+
+def introspect_docs(value=None, name=None, filename=None, context=None,
+ is_script=False, module_name=None):
+ """
+ Generate the API documentation for a specified object by
+ introspecting Python values, and return it as a L{ValueDoc}. The
+    object to generate documentation for may be specified using
+    the C{value} parameter, the C{filename} parameter, I{or} the
+    C{name} parameter.  (It is an error to specify more than one
+    of these three parameters, or to not specify any of them.)
+
+ @param value: The python object that should be documented.
+    @param filename: The name of the file that contains the python
+        source code for a package, module, or script.  If
+        C{filename} is specified, then C{introspect} will return a
+        C{ModuleDoc} describing its contents.
+ @param name: The fully-qualified python dotted name of any
+ value (including packages, modules, classes, and
+ functions). C{DocParser} will automatically figure out
+ which module(s) it needs to import in order to find the
+ documentation for the specified object.
+    @param context: The API documentation for the class or module
+        that contains C{value} (if available).
+ @param module_name: The name of the module where the value is defined.
+        Useful for retrieving the docstring encoding when there is no
+        way to detect the module by introspection (such as in
+        properties).
+ """
+ if value is None and name is not None and filename is None:
+ value = get_value_from_name(DottedName(name))
+ elif value is None and name is None and filename is not None:
+ if is_script:
+ value = get_value_from_scriptname(filename)
+ else:
+ value = get_value_from_filename(filename, context)
+ elif name is None and filename is None:
+ # it's ok if value is None -- that's a value, after all.
+ pass
+ else:
+ raise ValueError("Expected exactly one of the following "
+ "arguments: value, name, filename")
+
+ pyid = id(value)
+
+ # If we've already introspected this value, then simply return
+ # its ValueDoc from our cache.
+ if pyid in _introspected_values:
+ # If the file is a script, then adjust its name.
+ if is_script and filename is not None:
+ _valuedoc_cache[pyid].canonical_name = DottedName(
+ munge_script_name(str(filename)))
+ return _valuedoc_cache[pyid]
+
+ # Create an initial value doc for this value & add it to the cache.
+ val_doc = _get_valuedoc(value)
+
+ # Introspect the value.
+ _introspected_values[pyid] = True
+ introspect_func = _get_introspecter(value)
+ introspect_func(value, val_doc, module_name=module_name)
+
+ # Set canonical name, if it was given
+ if val_doc.canonical_name is UNKNOWN and name is not None:
+ val_doc.canonical_name = DottedName(name)
+
+ # If the file is a script, then adjust its name.
+ if is_script and filename is not None:
+ val_doc.canonical_name = DottedName(munge_script_name(str(filename)))
+
+ if val_doc.canonical_name is UNKNOWN and filename is not None:
+ shadowed_name = DottedName(value.__name__)
+ log.warning("Module %s is shadowed by a variable with "
+ "the same name." % shadowed_name)
+ val_doc.canonical_name = DottedName(str(shadowed_name)+"'")
+
+ return val_doc
+
+def _get_valuedoc(value):
+ """
+ If a C{ValueDoc} for the given value exists in the valuedoc
+ cache, then return it; otherwise, create a new C{ValueDoc},
+ add it to the cache, and return it. When possible, the new
+ C{ValueDoc}'s C{pyval}, C{repr}, and C{canonical_name}
+ attributes will be set appropriately.
+ """
+ pyid = id(value)
+ val_doc = _valuedoc_cache.get(pyid)
+ if val_doc is None:
+ try: canonical_name = get_canonical_name(value, strict=True)
+ except DottedName.InvalidDottedName: canonical_name = UNKNOWN
+ val_doc = ValueDoc(pyval=value, canonical_name = canonical_name,
+ docs_extracted_by='introspecter')
+ _valuedoc_cache[pyid] = val_doc
+
+ # If it's a module, then do some preliminary introspection.
+ # Otherwise, check what the containing module is (used e.g.
+ # to decide what markup language should be used for docstrings)
+ if inspect.ismodule(value):
+ introspect_module(value, val_doc, preliminary=True)
+ val_doc.defining_module = val_doc
+ else:
+ module_name = str(get_containing_module(value))
+ module = sys.modules.get(module_name)
+ if module is not None and inspect.ismodule(module):
+ val_doc.defining_module = _get_valuedoc(module)
+
+ return val_doc
+
+#////////////////////////////////////////////////////////////
+# Module Introspection
+#////////////////////////////////////////////////////////////
+
+#: A list of module variables that should not be included in a
+#: module's API documentation.
+UNDOCUMENTED_MODULE_VARS = (
+ '__builtins__', '__doc__', '__all__', '__file__', '__path__',
+ '__name__', '__extra_epydoc_fields__', '__docformat__')
+
+def introspect_module(module, module_doc, module_name=None, preliminary=False):
+ """
+ Add API documentation information about the module C{module}
+ to C{module_doc}.
+ """
+ module_doc.specialize_to(ModuleDoc)
+
+ # Record the module's docformat
+ if hasattr(module, '__docformat__'):
+ module_doc.docformat = unicode(module.__docformat__)
+
+ # Record the module's filename
+ if hasattr(module, '__file__'):
+ try: module_doc.filename = unicode(module.__file__)
+ except KeyboardInterrupt: raise
+ except: pass
+ if module_doc.filename is not UNKNOWN:
+ try: module_doc.filename = py_src_filename(module_doc.filename)
+ except ValueError: pass
+
+ # If this is just a preliminary introspection, then don't do
+ # anything else. (Typically this is true if this module was
+ # imported, but is not included in the set of modules we're
+ # documenting.)
+ module_doc.variables = {}
+ if preliminary: return
+
+ # Record the module's docstring
+ if hasattr(module, '__doc__'):
+ module_doc.docstring = get_docstring(module)
+
+ # If the module has a __path__, then it's (probably) a
+ # package; so set is_package=True and record its __path__.
+ if hasattr(module, '__path__'):
+ module_doc.is_package = True
+ try: module_doc.path = [unicode(p) for p in module.__path__]
+ except KeyboardInterrupt: raise
+ except: pass
+ else:
+ module_doc.is_package = False
+
+ # Make sure we have a name for the package.
+ dotted_name = module_doc.canonical_name
+ if dotted_name is UNKNOWN:
+ dotted_name = DottedName(module.__name__)
+ name_without_primes = DottedName(str(dotted_name).replace("'", ""))
+
+ # Record the module's parent package, if it has one.
+ if len(dotted_name) > 1:
+ package_name = str(dotted_name.container())
+ package = sys.modules.get(package_name)
+ if package is not None:
+ module_doc.package = introspect_docs(package)
+ else:
+ module_doc.package = None
+
+ # Initialize the submodules property
+ module_doc.submodules = []
+
+ # Add the module to its parent package's submodules list.
+ if module_doc.package not in (None, UNKNOWN):
+ module_doc.package.submodules.append(module_doc)
+
+ # Look up the module's __all__ attribute (public names).
+ public_names = None
+ if hasattr(module, '__all__'):
+ try:
+ public_names = set([str(name) for name in module.__all__])
+ except KeyboardInterrupt: raise
+ except: pass
+
+ # Record the module's variables.
+ module_doc.variables = {}
+ for child_name in dir(module):
+ if child_name in UNDOCUMENTED_MODULE_VARS: continue
+ child = getattr(module, child_name)
+
+ # Create a VariableDoc for the child, and introspect its
+ # value if it's defined in this module.
+ container = get_containing_module(child)
+ if ((container is not None and
+ container == name_without_primes) or
+ (public_names is not None and
+ child_name in public_names)):
+ # Local variable.
+ child_val_doc = introspect_docs(child, context=module_doc,
+ module_name=dotted_name)
+ child_var_doc = VariableDoc(name=child_name,
+ value=child_val_doc,
+ is_imported=False,
+ container=module_doc,
+ docs_extracted_by='introspecter')
+ elif container is None or module_doc.canonical_name is UNKNOWN:
+
+ # Don't introspect stuff "from __future__"
+ if is_future_feature(child): continue
+
+ # Possibly imported variable.
+ child_val_doc = introspect_docs(child, context=module_doc)
+ child_var_doc = VariableDoc(name=child_name,
+ value=child_val_doc,
+ container=module_doc,
+ docs_extracted_by='introspecter')
+ else:
+ # Imported variable.
+ child_val_doc = _get_valuedoc(child)
+ child_var_doc = VariableDoc(name=child_name,
+ value=child_val_doc,
+ is_imported=True,
+ container=module_doc,
+ docs_extracted_by='introspecter')
+
+ # If the module's __all__ attribute is set, use it to set the
+ # variables public/private status and imported status.
+ if public_names is not None:
+ if child_name in public_names:
+ child_var_doc.is_public = True
+ if not isinstance(child_var_doc, ModuleDoc):
+ child_var_doc.is_imported = False
+ else:
+ child_var_doc.is_public = False
+
+ module_doc.variables[child_name] = child_var_doc
+
+ return module_doc
+
+#////////////////////////////////////////////////////////////
+# Class Introspection
+#////////////////////////////////////////////////////////////
+
+#: A list of class variables that should not be included in a
+#: class's API documentation.
+UNDOCUMENTED_CLASS_VARS = (
+ '__doc__', '__module__', '__dict__', '__weakref__', '__slots__',
+ '__pyx_vtable__')
+
+def introspect_class(cls, class_doc, module_name=None):
+ """
+ Add API documentation information about the class C{cls}
+ to C{class_doc}.
+ """
+ class_doc.specialize_to(ClassDoc)
+
+ # Record the class's docstring.
+ class_doc.docstring = get_docstring(cls)
+
+ # Record the class's __all__ attribute (public names).
+ public_names = None
+ if hasattr(cls, '__all__'):
+ try:
+ public_names = set([str(name) for name in cls.__all__])
+ except KeyboardInterrupt: raise
+ except: pass
+
+ # Start a list of subclasses.
+ class_doc.subclasses = []
+
+ # Sometimes users will define a __metaclass__ that copies all
+ # class attributes from bases directly into the derived class's
+ # __dict__ when the class is created. (This saves the lookup time
+ # needed to search the base tree for an attribute.) But for the
+ # docs, we only want to list these copied attributes in the
+ # parent. So only add an attribute if it is not identical to an
+ # attribute of a base class. (Unfortunately, this can sometimes
+ # cause an attribute to look like it was inherited, even though it
+ # wasn't, if it happens to have the exact same value as the
+ # corresponding base's attribute.) An example of a case where
+ # this helps is PyQt -- subclasses of QWidget get about 300
+ # methods injected into them.
+ base_children = {}
+
+ # Record the class's base classes; and add the class to its
+ # base class's subclass lists.
+ if hasattr(cls, '__bases__'):
+ try: bases = list(cls.__bases__)
+ except:
+ bases = None
+ log.warning("Class '%s' defines __bases__, but it does not "
+ "contain an iterable; ignoring base list."
+ % getattr(cls, '__name__', '??'))
+ if bases is not None:
+ class_doc.bases = []
+ for base in bases:
+ basedoc = introspect_docs(base)
+ class_doc.bases.append(basedoc)
+ basedoc.subclasses.append(class_doc)
+
+ bases.reverse()
+ for base in bases:
+ if hasattr(base, '__dict__'):
+ base_children.update(base.__dict__)
+
+ # The module name is not defined if the class is being introspected
+ # as another class base.
+ if module_name is None and class_doc.defining_module not in (None, UNKNOWN):
+ module_name = class_doc.defining_module.canonical_name
+
+ # Record the class's local variables.
+ class_doc.variables = {}
+ if hasattr(cls, '__dict__'):
+ private_prefix = '_%s__' % getattr(cls, '__name__', '<none>')
+ for child_name, child in cls.__dict__.items():
+ if (child_name in base_children
+ and base_children[child_name] == child):
+ continue
+
+ if child_name.startswith(private_prefix):
+ child_name = child_name[len(private_prefix)-2:]
+ if child_name in UNDOCUMENTED_CLASS_VARS: continue
+ val_doc = introspect_docs(child, context=class_doc,
+ module_name=module_name)
+ var_doc = VariableDoc(name=child_name, value=val_doc,
+ container=class_doc,
+ docs_extracted_by='introspecter')
+ if public_names is not None:
+ var_doc.is_public = (child_name in public_names)
+ class_doc.variables[child_name] = var_doc
+
+ return class_doc
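The `private_prefix` logic above undoes CPython's private name mangling: a class's `__name` attributes are stored in `__dict__` as `_ClassName__name`, and the slice `[len(private_prefix)-2:]` strips the class-name part while keeping the double underscore. A self-contained sketch (the `unmangle` helper is illustrative):

```python
# Sketch of the private-name unmangling done above: CPython stores a
# class's __name attributes in __dict__ under the key _ClassName__name;
# stripping the _ClassName part (but keeping '__') recovers the name
# as it appears in the source.

def unmangle(attr_name, class_name):
    prefix = '_%s__' % class_name
    if attr_name.startswith(prefix):
        return attr_name[len(prefix) - 2:]   # keep the leading '__'
    return attr_name

class Counter:
    __step = 1   # stored in Counter.__dict__ as '_Counter__step'

mangled = [n for n in vars(Counter) if n.endswith('__step')][0]
print(mangled, '->', unmangle(mangled, 'Counter'))
```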
+
+#////////////////////////////////////////////////////////////
+# Routine Introspection
+#////////////////////////////////////////////////////////////
+
+def introspect_routine(routine, routine_doc, module_name=None):
+ """Add API documentation information about the function
+ C{routine} to C{routine_doc} (specializing it to C{RoutineDoc})."""
+ routine_doc.specialize_to(RoutineDoc)
+
+ # Extract the underlying function.
+ if isinstance(routine, MethodType):
+ func = routine.im_func
+ elif isinstance(routine, staticmethod):
+ func = routine.__get__(0)
+ elif isinstance(routine, classmethod):
+ func = routine.__get__(0).im_func
+ else:
+ func = routine
+
+ # Record the function's docstring.
+ routine_doc.docstring = get_docstring(func)
+
+ # Record the function's signature.
+ if isinstance(func, FunctionType):
+ (args, vararg, kwarg, defaults) = inspect.getargspec(func)
+
+ # Add the arguments.
+ routine_doc.posargs = args
+ routine_doc.vararg = vararg
+ routine_doc.kwarg = kwarg
+
+ # Set default values for positional arguments.
+ routine_doc.posarg_defaults = [None]*len(args)
+ if defaults is not None:
+ offset = len(args)-len(defaults)
+ for i in range(len(defaults)):
+ default_val = introspect_docs(defaults[i])
+ routine_doc.posarg_defaults[i+offset] = default_val
+
+ # If it's a bound method, then strip off the first argument.
+ if isinstance(routine, MethodType) and routine.im_self is not None:
+ routine_doc.posargs = routine_doc.posargs[1:]
+ routine_doc.posarg_defaults = routine_doc.posarg_defaults[1:]
+
+ # Set the routine's line number.
+ if hasattr(func, 'func_code'):
+ routine_doc.lineno = func.func_code.co_firstlineno
+
+ else:
+ # [XX] I should probably use UNKNOWN here??
+ # dvarrazzo: if '...' is to be changed, also check that
+ # `docstringparser.process_arg_field()` works correctly.
+ # See SF bug #1556024.
+ routine_doc.posargs = ['...']
+ routine_doc.posarg_defaults = [None]
+ routine_doc.kwarg = None
+ routine_doc.vararg = None
+
+ # Change type, if appropriate.
+ if isinstance(routine, staticmethod):
+ routine_doc.specialize_to(StaticMethodDoc)
+ if isinstance(routine, classmethod):
+ routine_doc.specialize_to(ClassMethodDoc)
+
+ return routine_doc
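The signature-recording step above right-aligns the `defaults` tuple against the positional argument list, since defaults always apply to the trailing arguments. The same offset arithmetic, sketched with `inspect.getfullargspec` (the Python 3 successor of the `inspect.getargspec` call used here; `describe` is an illustrative helper):

```python
# Sketch of the signature recording above: defaults are right-aligned
# against the argument list using the same offset arithmetic.
import inspect

def describe(func):
    spec = inspect.getfullargspec(func)
    defaults = [None] * len(spec.args)
    if spec.defaults:
        offset = len(spec.args) - len(spec.defaults)
        for i, val in enumerate(spec.defaults):
            defaults[i + offset] = val
    return spec.args, defaults, spec.varargs, spec.varkw

def example(a, b, c=3, d=4, *rest, **opts):
    pass

print(describe(example))
```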
+
+#////////////////////////////////////////////////////////////
+# Property Introspection
+#////////////////////////////////////////////////////////////
+
+def introspect_property(prop, prop_doc, module_name=None):
+ """Add API documentation information about the property
+ C{prop} to C{prop_doc} (specializing it to C{PropertyDoc})."""
+ prop_doc.specialize_to(PropertyDoc)
+
+ # Record the property's docstring.
+ prop_doc.docstring = get_docstring(prop, module_name=module_name)
+
+ # Record the property's access functions.
+ if hasattr(prop, 'fget'):
+ prop_doc.fget = introspect_docs(prop.fget)
+ prop_doc.fset = introspect_docs(prop.fset)
+ prop_doc.fdel = introspect_docs(prop.fdel)
+
+ return prop_doc
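The property introspection above relies on the fact that a `property` object exposes its accessors directly as `fget`, `fset`, and `fdel` (each possibly `None`), with the docstring taken from the getter. A small runnable illustration:

```python
# A property's accessors are plain attributes of the property object,
# which is what the fget/fset/fdel introspection above reads.

class Circle:
    def __init__(self, r):
        self._r = r

    @property
    def radius(self):
        """The circle's radius."""
        return self._r

    @radius.setter
    def radius(self, value):
        self._r = value

prop = Circle.radius   # accessing via the class yields the property object
print(prop.fget.__name__, prop.fset.__name__, prop.fdel, prop.__doc__)
```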
+
+#////////////////////////////////////////////////////////////
+# Generic Value Introspection
+#////////////////////////////////////////////////////////////
+
+def introspect_other(val, val_doc, module_name=None):
+ """Specialize val_doc to a C{GenericValueDoc} and return it."""
+ val_doc.specialize_to(GenericValueDoc)
+ return val_doc
+
+#////////////////////////////////////////////////////////////
+# Helper functions
+#////////////////////////////////////////////////////////////
+
+def isclass(object):
+ """
+ Return true if the given object is a class. In particular, return
+ true if object is an instance of C{types.TypeType} or of
+ C{types.ClassType}. This is used instead of C{inspect.isclass()},
+ because the latter returns true for objects that are not classes
+ (in particular, it returns true for any object that has a
+ C{__bases__} attribute, including objects that define
+ C{__getattr__} to always return a value).
+ """
+ return isinstance(object, tuple(_CLASS_TYPES))
+
+_CLASS_TYPES = set([TypeType, ClassType])
+"""A list of types that should be treated as classes."""
+
+def register_class_type(typ):
+ """Add a type to the lists of types that should be treated as
+ classes. By default, this list contains C{TypeType} and
+ C{ClassType}."""
+ _CLASS_TYPES.add(typ)
+
+__future_check_works = None
+
+def is_future_feature(object):
+ """
+ Return True if C{object} results from a C{from __future__ import feature}
+ statement.
+ """
+ # Guard from unexpected implementation changes of the __future__ module.
+ global __future_check_works
+ if __future_check_works is not None:
+ if __future_check_works:
+ import __future__
+ return isinstance(object, __future__._Feature)
+ else:
+ return False
+ else:
+ __future_check_works = True
+ try:
+ return is_future_feature(object)
+ except:
+ __future_check_works = False
+ log.warning("Troubles inspecting __future__. Python implementation"
+ " may have been changed.")
+ return False
+
+def get_docstring(value, module_name=None):
+ """
+ Return the docstring for the given value; or C{None} if it
+ does not have a docstring.
+ @rtype: C{unicode}
+ """
+ docstring = getattr(value, '__doc__', None)
+ if docstring is None:
+ return None
+ elif isinstance(docstring, unicode):
+ return docstring
+ elif isinstance(docstring, str):
+ try: return unicode(docstring, 'ascii')
+ except UnicodeDecodeError:
+ if module_name is None:
+ module_name = get_containing_module(value)
+ if module_name is not None:
+ try:
+ module = get_value_from_name(module_name)
+ filename = py_src_filename(module.__file__)
+ encoding = epydoc.docparser.get_module_encoding(filename)
+ return unicode(docstring, encoding)
+ except KeyboardInterrupt: raise
+ except Exception: pass
+ if hasattr(value, '__name__'): name = value.__name__
+ else: name = repr(value)
+ log.warning("%s's docstring is not a unicode string, but it "
+ "contains non-ascii data -- treating it as "
+ "latin-1." % name)
+ return unicode(docstring, 'latin-1')
+ return None
+ elif value is BuiltinMethodType:
+ # Don't issue a warning for this special case.
+ return None
+ else:
+ if hasattr(value, '__name__'): name = value.__name__
+ else: name = repr(value)
+ log.warning("%s's docstring is not a string -- ignoring it." %
+ name)
+ return None
+
+def get_canonical_name(value, strict=False):
+ """
+ @return: the canonical name for C{value}, or C{UNKNOWN} if no
+ canonical name can be found. Currently, C{get_canonical_name}
+ can find canonical names for: modules; functions; non-nested
+ classes; methods of non-nested classes; and some class methods
+ of non-nested classes.
+
+ @rtype: L{DottedName} or C{UNKNOWN}
+ """
+ if not hasattr(value, '__name__'): return UNKNOWN
+
+ # Get the name via introspection.
+ if isinstance(value, ModuleType):
+ try:
+ dotted_name = DottedName(value.__name__, strict=strict)
+ # If the module is shadowed by a variable in its parent
+ # package(s), then add a prime mark to the end, to
+ # differentiate it from the variable that shadows it.
+ if verify_name(value, dotted_name) is UNKNOWN:
+ log.warning("Module %s is shadowed by a variable with "
+ "the same name." % dotted_name)
+ # Note -- this return bypasses verify_name check:
+ return DottedName(value.__name__+"'")
+ except DottedName.InvalidDottedName:
+ # Name is not a valid Python identifier -- treat as script.
+ if hasattr(value, '__file__'):
+ filename = '%s' % value.__file__
+ dotted_name = DottedName(munge_script_name(filename))
+
+ elif isclass(value):
+ if value.__module__ == '__builtin__':
+ dotted_name = DottedName(value.__name__, strict=strict)
+ else:
+ dotted_name = DottedName(value.__module__, value.__name__,
+ strict=strict)
+
+ elif (inspect.ismethod(value) and value.im_self is not None and
+ value.im_class is ClassType and
+ not value.__name__.startswith('<')): # class method.
+ class_name = get_canonical_name(value.im_self)
+ if class_name is UNKNOWN: return UNKNOWN
+ dotted_name = DottedName(class_name, value.__name__, strict=strict)
+ elif (inspect.ismethod(value) and
+ not value.__name__.startswith('<')):
+ class_name = get_canonical_name(value.im_class)
+ if class_name is UNKNOWN: return UNKNOWN
+ dotted_name = DottedName(class_name, value.__name__, strict=strict)
+ elif (isinstance(value, FunctionType) and
+ not value.__name__.startswith('<')):
+ module_name = _find_function_module(value)
+ if module_name is None: return UNKNOWN
+ dotted_name = DottedName(module_name, value.__name__, strict=strict)
+ else:
+ return UNKNOWN
+
+ return verify_name(value, dotted_name)
+
+def verify_name(value, dotted_name):
+ """
+ Verify the name. E.g., if it's a nested class, then we won't be
+ able to find it with the name we constructed.
+ """
+ if dotted_name is UNKNOWN: return UNKNOWN
+ if len(dotted_name) == 1 and hasattr(__builtin__, dotted_name[0]):
+ return dotted_name
+ named_value = sys.modules.get(dotted_name[0])
+ if named_value is None: return UNKNOWN
+ for identifier in dotted_name[1:]:
+ try: named_value = getattr(named_value, identifier)
+ except: return UNKNOWN
+ if value is named_value:
+ return dotted_name
+ else:
+ return UNKNOWN
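The key point in `verify_name` is the identity check (`is`, not `==`): the dotted name is resolved through `sys.modules` and `getattr`, and accepted only if it leads back to the very same object. A self-contained sketch using plain strings instead of `DottedName`:

```python
# Sketch of the name verification above: resolve the dotted name via
# sys.modules + getattr, then require object *identity* with the value.
import sys, os.path   # importing os.path ensures 'os' is in sys.modules

def verify_name(value, dotted_name):
    parts = dotted_name.split('.')
    named = sys.modules.get(parts[0])
    if named is None:
        return None                      # stands in for UNKNOWN
    for ident in parts[1:]:
        try:
            named = getattr(named, ident)
        except AttributeError:
            return None
    return dotted_name if value is named else None

print(verify_name(os.path.join, 'os.path.join'))
print(verify_name(os.path.join, 'os.path.split'))
```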
+
+# [xx] not used:
+def value_repr(value):
+ try:
+ s = '%r' % value
+ if isinstance(s, str):
+ s = decode_with_backslashreplace(s)
+ return s
+ except:
+ return UNKNOWN
+
+def get_containing_module(value):
+ """
+ Return the name of the module containing the given value, or
+ C{None} if the module name can't be determined.
+ @rtype: L{DottedName}
+ """
+ if inspect.ismodule(value):
+ return DottedName(value.__name__)
+ elif isclass(value):
+ return DottedName(value.__module__)
+ elif (inspect.ismethod(value) and value.im_self is not None and
+ value.im_class is ClassType): # class method.
+ return DottedName(value.im_self.__module__)
+ elif inspect.ismethod(value):
+ return DottedName(value.im_class.__module__)
+ elif inspect.isroutine(value):
+ module = _find_function_module(value)
+ if module is None: return None
+ return DottedName(module)
+ else:
+ return None
+
+def _find_function_module(func):
+ """
+ @return: The module that defines the given function.
+ @rtype: C{module}
+ @param func: The function whose module should be found.
+ @type func: C{function}
+ """
+ if hasattr(func, '__module__'):
+ return func.__module__
+ try:
+ module = inspect.getmodule(func)
+ if module: return module.__name__
+ except KeyboardInterrupt: raise
+ except: pass
+
+ # This fallback shouldn't usually be needed. But it is needed in
+ # a couple special cases (including using epydoc to document
+ # itself). In particular, if a module gets loaded twice, using
+ # two different names for the same file, then this helps.
+ for module in sys.modules.values():
+ if (hasattr(module, '__dict__') and
+ hasattr(func, 'func_globals') and
+ func.func_globals is module.__dict__):
+ return module.__name__
+ return None
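The cheap path above is the function's own `__module__` attribute, with `inspect.getmodule` (and finally a scan of `sys.modules`) as fallbacks. The first two steps, sketched:

```python
# Sketch of the module-resolution fallbacks above: most functions carry
# __module__ directly; inspect.getmodule searches loaded modules for
# the rest.
import inspect

def find_function_module(func):
    if getattr(func, '__module__', None):
        return func.__module__
    module = inspect.getmodule(func)
    return module.__name__ if module else None

print(find_function_module(inspect.ismodule))   # -> inspect
print(find_function_module(len))                # -> builtins
```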
+
+#////////////////////////////////////////////////////////////
+# Introspection Dispatch Table
+#////////////////////////////////////////////////////////////
+
+_introspecter_registry = []
+def register_introspecter(applicability_test, introspecter, priority=10):
+ """
+ Register an introspecter function. Introspecter functions take
+ two arguments, a python value and a C{ValueDoc} object, and should
+ add information about the given value to the C{ValueDoc}.
+ Usually, the first line of an introspecter function will specialize
+ it to a subclass of C{ValueDoc}, using L{ValueDoc.specialize_to()}:
+
+ >>> def typical_introspecter(value, value_doc):
+ ... value_doc.specialize_to(SomeSubclassOfValueDoc)
+ ... <add info to value_doc>
+
+ @param priority: The priority of this introspecter, which determines
+ the order in which introspecters are tried -- introspecters with lower
+ numbers are tried first. The standard introspecters have priorities
+ ranging from 20 to 30. The default priority (10) will place new
+ introspecters before standard introspecters.
+ """
+ _introspecter_registry.append( (priority, applicability_test,
+ introspecter) )
+ _introspecter_registry.sort()
+
+def _get_introspecter(value):
+ for (priority, applicability_test, introspecter) in _introspecter_registry:
+ if applicability_test(value):
+ return introspecter
+ else:
+ return introspect_other
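The registry above is a priority-sorted list of `(priority, test, handler)` tuples: the first applicability test that matches wins, and a generic handler catches everything else. A minimal sketch (the string tags stand in for the real introspecter functions):

```python
# Sketch of the dispatch table above: (priority, test, handler) entries
# kept sorted by priority; the first matching test wins, with a
# catch-all fallback.  Handlers are tag strings here for illustration.
import inspect

_registry = []

def register(test, handler, priority=10):
    _registry.append((priority, test, handler))
    _registry.sort(key=lambda entry: entry[0])   # lower priority first

def dispatch(value):
    for priority, test, handler in _registry:
        if test(value):
            return handler
    return 'other'

register(inspect.ismodule, 'module', priority=20)
register(inspect.isroutine, 'routine', priority=28)
print(dispatch(inspect), dispatch(len), dispatch(42))
```

Note the sort key: sorting only on the priority field avoids comparing the (unorderable) test functions, which the Python 2 original could get away with sorting directly.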
+
+# Register the standard introspecter functions.
+def is_classmethod(v): return isinstance(v, classmethod)
+def is_staticmethod(v): return isinstance(v, staticmethod)
+def is_property(v): return isinstance(v, property)
+register_introspecter(inspect.ismodule, introspect_module, priority=20)
+register_introspecter(isclass, introspect_class, priority=24)
+register_introspecter(inspect.isroutine, introspect_routine, priority=28)
+register_introspecter(is_property, introspect_property, priority=30)
+
+# Register getset_descriptor as a property
+try:
+ import array
+ getset_type = type(array.array.typecode)
+ del array
+ def is_getset(v): return isinstance(v, getset_type)
+ register_introspecter(is_getset, introspect_property, priority=32)
+except:
+ pass
+
+# Register member_descriptor as a property
+try:
+ import datetime
+ member_type = type(datetime.timedelta.days)
+ del datetime
+ def is_member(v): return isinstance(v, member_type)
+ register_introspecter(is_member, introspect_property, priority=34)
+except:
+ pass
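The two registrations above derive the C-level descriptor types from known attributes, since `getset_descriptor` and `member_descriptor` have no public names. The same trick still works in current CPython:

```python
# getset_descriptor and member_descriptor have no importable names, so
# the code above derives them from known C-level attributes and treats
# both as property-like.
import array, datetime

getset_type = type(array.array.typecode)     # getset_descriptor
member_type = type(datetime.timedelta.days)  # member_descriptor

print(getset_type.__name__, member_type.__name__)
print(isinstance(vars(array.array)['typecode'], getset_type))
```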
+
+#////////////////////////////////////////////////////////////
+# Import support
+#////////////////////////////////////////////////////////////
+
+def get_value_from_filename(filename, context=None):
+ # Normalize the filename.
+ filename = os.path.normpath(os.path.abspath(filename))
+
+ # Divide the filename into a base directory and a name. (For
+ # packages, use the package's parent directory as the base, and
+ # the directory name as its name).
+ basedir = os.path.split(filename)[0]
+ name = os.path.splitext(os.path.split(filename)[1])[0]
+ if name == '__init__':
+ basedir, name = os.path.split(basedir)
+ name = DottedName(name)
+
+ # If the context wasn't provided, then check if the file is in a
+ # package directory. If so, then update basedir & name to contain
+ # the topmost package's directory and the fully qualified name for
+ # this file. (This update assumes the default value of __path__
+ # for the parent packages; if the parent packages override their
+ # __path__s, then this can cause us not to find the value.)
+ if context is None:
+ while is_package_dir(basedir):
+ basedir, pkg_name = os.path.split(basedir)
+ name = DottedName(pkg_name, name)
+
+ # If a parent package was specified, then find the directory of
+ # the topmost package, and the fully qualified name for this file.
+ if context is not None:
+ # Combine the name.
+ name = DottedName(context.canonical_name, name)
+ # Find the directory of the base package.
+ while context not in (None, UNKNOWN):
+ pkg_dir = os.path.split(context.filename)[0]
+ basedir = os.path.split(pkg_dir)[0]
+ context = context.package
+
+ # Import the module. (basedir is the directory of the module's
+ # topmost package, or its own directory if it's not in a package;
+ # and name is the fully qualified dotted name for the module.)
+ old_sys_path = sys.path[:]
+ try:
+ sys.path.insert(0, basedir)
+ # This will make sure that we get the module itself, even
+ # if it is shadowed by a variable. (E.g., curses.wrapper):
+ _import(str(name))
+ if str(name) in sys.modules:
+ return sys.modules[str(name)]
+ else:
+ # Use this as a fallback -- it *shouldn't* ever be needed.
+ return get_value_from_name(name)
+ finally:
+ sys.path = old_sys_path
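The filename handling above reduces to: an `__init__` file takes its package directory's name, and enclosing package directories are prepended for as long as the parent looks like a package. A heavily simplified, POSIX-path sketch of that walk (the `is_package_dir` predicate and layout are hypothetical):

```python
# Simplified sketch of the filename -> dotted-name walk above, using
# posixpath for deterministic behavior.  is_package_dir and the /src
# layout are illustrative assumptions, not epydoc's implementation.
import posixpath

def module_name_for(filename, is_package_dir):
    basedir, base = posixpath.split(filename)
    name = posixpath.splitext(base)[0]
    if name == '__init__':
        # A package's __init__ file is named after its directory.
        basedir, name = posixpath.split(basedir)
    parts = [name]
    while is_package_dir(basedir):
        basedir, pkg = posixpath.split(basedir)
        parts.insert(0, pkg)
    return '.'.join(parts)

pkgdirs = {'/src/pkg', '/src/pkg/sub'}   # hypothetical package layout
print(module_name_for('/src/pkg/sub/__init__.py', lambda d: d in pkgdirs))
print(module_name_for('/src/pkg/sub/mod.py', lambda d: d in pkgdirs))
```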
+
+def get_value_from_scriptname(filename):
+ name = munge_script_name(filename)
+ return _import(name, filename)
+
+def get_value_from_name(name, globs=None):
+ """
+ Given a name, return the corresponding value.
+
+ @param globs: A namespace to check for the value, if there is no
+ module containing the named value. Defaults to __builtin__.
+ """
+ name = DottedName(name)
+
+ # Import the topmost module/package. If we fail, then check if
+ # the requested name refers to a builtin.
+ try:
+ module = _import(name[0])
+ except ImportError, e:
+ if globs is None: globs = __builtin__.__dict__
+ if name[0] in globs:
+ try: return _lookup(globs[name[0]], name[1:])
+ except: raise e
+ else:
+ raise
+
+ # Find the requested value in the module/package or its submodules.
+ for i in range(1, len(name)):
+ try: return _lookup(module, name[i:])
+ except ImportError: pass
+ module = _import('.'.join(name[:i+1]))
+ module = _lookup(module, name[1:i+1])
+ return module
+
+def _lookup(module, name):
+ val = module
+ for i, identifier in enumerate(name):
+ try: val = getattr(val, identifier)
+ except AttributeError:
+ exc_msg = ('no variable named %s in %s' %
+ (identifier, '.'.join(name[:1+i])))
+ raise ImportError(exc_msg)
+ return val
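`_lookup` walks the identifier chain with `getattr`, converting the first `AttributeError` into an `ImportError` that names the missing piece. A runnable sketch with a plain list of identifiers:

```python
# Sketch of the attribute-chain lookup above: walk identifiers with
# getattr, turning an AttributeError into an ImportError that names
# the first missing identifier.
import os

def lookup(module, parts):
    val = module
    for i, ident in enumerate(parts):
        try:
            val = getattr(val, ident)
        except AttributeError:
            raise ImportError('no variable named %s in %s'
                              % (ident, '.'.join(parts[:i + 1])))
    return val

print(lookup(os, ['path', 'join']) is os.path.join)
```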
+
+def _import(name, filename=None):
+ """
+ Run the given callable in a 'sandboxed' environment.
+ Currently, this includes saving and restoring the contents of
+ sys and __builtins__; and suppressing stdin, stdout, and stderr.
+ """
+ # Note that we just do a shallow copy of sys. In particular,
+ # any changes made to sys.modules will be kept. But we do
+ # explicitly store sys.path.
+ old_sys = sys.__dict__.copy()
+ old_sys_path = sys.path[:]
+ old_builtins = __builtin__.__dict__.copy()
+
+ # Add the current directory to sys.path, in case they're trying to
+ # import a module by name that resides in the current directory.
+ # But add it to the end -- otherwise, the explicit directory added
+ # in get_value_from_filename might get overwritten
+ sys.path.append('')
+
+ # Suppress input and output. (These get restored when we restore
+ # sys to old_sys).
+ sys.stdin = sys.stdout = sys.stderr = _dev_null
+ sys.__stdin__ = sys.__stdout__ = sys.__stderr__ = _dev_null
+
+ # Remove any command-line arguments
+ sys.argv = ['(imported)']
+
+ try:
+ try:
+ if filename is None:
+ return __import__(name)
+ else:
+ # For importing scripts:
+ return imp.load_source(name, filename)
+ except KeyboardInterrupt: raise
+ except:
+ exc_typ, exc_val, exc_tb = sys.exc_info()
+ if exc_val is None:
+ estr = '%s' % (exc_typ,)
+ else:
+ estr = '%s: %s' % (exc_typ.__name__, exc_val)
+ if exc_tb.tb_next is not None:
+ estr += ' (line %d)' % (exc_tb.tb_next.tb_lineno,)
+ raise ImportError(estr)
+ finally:
+ # Restore the important values that we saved.
+ __builtin__.__dict__.clear()
+ __builtin__.__dict__.update(old_builtins)
+ sys.__dict__.clear()
+ sys.__dict__.update(old_sys)
+ sys.path = old_sys_path
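The essence of `_import`'s sandboxing is snapshot-mutate-restore: interpreter state is copied up front, freely changed for the duration of the import, and restored in a `finally` block. A reduced sketch covering just `sys.path` and `sys.argv` (the `sandboxed` helper is illustrative):

```python
# Reduced sketch of the save/restore sandboxing above: snapshot mutable
# interpreter state, mutate it for the duration of a callable, and
# restore it in a finally block.
import sys

def sandboxed(fn):
    old_path = sys.path[:]
    old_argv = sys.argv[:]
    try:
        sys.path.append('')          # allow current-directory imports
        sys.argv = ['(imported)']    # hide real command-line arguments
        return fn()
    finally:
        sys.path = old_path
        sys.argv = old_argv

before = sys.path[:]
print(sandboxed(lambda: sys.argv[0]))   # -> (imported)
print(sys.path == before)               # -> True
```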
+
+def introspect_docstring_lineno(api_doc):
+ """
+ Try to determine the line number on which the given item's
+ docstring begins. Return the line number, or C{None} if the line
+ number can't be determined. The line number of the first line in
+ the file is 1.
+ """
+ if api_doc.docstring_lineno is not UNKNOWN:
+ return api_doc.docstring_lineno
+ if isinstance(api_doc, ValueDoc) and api_doc.pyval is not UNKNOWN:
+ try:
+ lines, lineno = inspect.findsource(api_doc.pyval)
+ if not isinstance(api_doc, ModuleDoc): lineno += 1
+ for lineno in range(lineno, len(lines)):
+ if lines[lineno].split('#', 1)[0].strip():
+ api_doc.docstring_lineno = lineno + 1
+ return lineno + 1
+ except IOError: pass
+ except TypeError: pass
+ except IndexError:
+ log.warning('inspect.findsource(%s) raised IndexError'
+ % api_doc.canonical_name)
+ return None
+
+class _DevNull:
+ """
+ A "file-like" object that discards anything that is written and
+ always reports end-of-file when read. C{_DevNull} is used by
+ L{_import()} to discard output when importing modules; and to
+ ensure that stdin appears closed.
+ """
+ def __init__(self):
+ self.closed = 1
+ self.mode = 'r+'
+ self.softspace = 0
+ self.name='</dev/null>'
+ def close(self): pass
+ def flush(self): pass
+ def read(self, size=0): return ''
+ def readline(self, size=0): return ''
+ def readlines(self, sizehint=0): return []
+ def seek(self, offset, whence=0): pass
+ def tell(self): return 0L
+ def truncate(self, size=0): pass
+ def write(self, str): pass
+ def writelines(self, sequence): pass
+ xreadlines = readlines
+_dev_null = _DevNull()
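In modern Python the same output suppression can be had without a hand-written sink class; `contextlib.redirect_stdout` with an `io.StringIO` (or a file opened on `os.devnull`) covers the common case. A sketch of that alternative, not what epydoc itself does:

```python
# Modern alternative to the _DevNull trick above: redirect stdout to a
# buffer (or os.devnull) for the duration of a noisy operation.
import contextlib, io

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print('noisy import output')   # swallowed by the buffer

print('captured %d characters' % len(buf.getvalue()))
```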
+
+######################################################################
+## Zope InterfaceClass
+######################################################################
+
+try:
+ from zope.interface.interface import InterfaceClass as _ZopeInterfaceClass
+ register_class_type(_ZopeInterfaceClass)
+except:
+ pass
+
+######################################################################
+## Zope Extension classes
+######################################################################
+
+try:
+ # Register type(ExtensionClass.ExtensionClass)
+ from ExtensionClass import ExtensionClass as _ExtensionClass
+ _ZopeType = type(_ExtensionClass)
+ def _is_zope_type(val):
+ return isinstance(val, _ZopeType)
+ register_introspecter(_is_zope_type, introspect_class)
+
+ # Register ExtensionClass.*MethodType
+ from ExtensionClass import PythonMethodType as _ZopeMethodType
+ from ExtensionClass import ExtensionMethodType as _ZopeCMethodType
+ def _is_zope_method(val):
+ return isinstance(val, (_ZopeMethodType, _ZopeCMethodType))
+ register_introspecter(_is_zope_method, introspect_routine)
+except:
+ pass
+
+
+
+
+# [xx]
+0 # hm.. otherwise the following gets treated as a docstring! ouch!
+"""
+######################################################################
+## Zope Extension...
+######################################################################
+class ZopeIntrospecter(Introspecter):
+ VALUEDOC_CLASSES = Introspecter.VALUEDOC_CLASSES.copy()
+ VALUEDOC_CLASSES.update({
+ 'module': ZopeModuleDoc,
+ 'class': ZopeClassDoc,
+ 'interface': ZopeInterfaceDoc,
+ 'attribute': ZopeAttributeDoc,
+ })
+
+ def add_module_child(self, child, child_name, module_doc):
+ if isinstance(child, zope.interfaces.Interface):
+ module_doc.add_zope_interface(child_name)
+ else:
+ Introspecter.add_module_child(self, child, child_name, module_doc)
+
+ def add_class_child(self, child, child_name, class_doc):
+ if isinstance(child, zope.interfaces.Interface):
+ class_doc.add_zope_interface(child_name)
+ else:
+ Introspecter.add_class_child(self, child, child_name, class_doc)
+
+ def introspect_zope_interface(self, interface, interfacename):
+ pass # etc...
+"""
diff --git a/python/helpers/epydoc/docparser.py b/python/helpers/epydoc/docparser.py
new file mode 100644
index 0000000..b52e226
--- /dev/null
+++ b/python/helpers/epydoc/docparser.py
@@ -0,0 +1,2113 @@
+# epydoc -- Source code parsing
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: docparser.py 1673 2008-01-29 05:42:58Z edloper $
+
+"""
+Extract API documentation about python objects by parsing their source
+code.
+
+The function L{parse_docs()}, which provides the main interface
+of this module, reads and parses the Python source code for a
+module, and uses it to create an L{APIDoc} object containing
+the API documentation for the variables and values defined in
+ that module.
+
+Currently, C{parse_docs()} extracts documentation from the following
+source code constructions:
+
+ - module docstring
+ - import statements
+ - class definition blocks
+ - function definition blocks
+ - assignment statements
+ - simple assignment statements
+ - assignment statements with multiple C{'='}s
+ - assignment statements with unpacked left-hand sides
+ - assignment statements that wrap a function in classmethod
+ or staticmethod.
+ - assignment to special variables __path__, __all__, and
+ __docformat__.
+ - delete statements
+
+C{parse_docs()} does not yet support the following source code
+constructions:
+
+ - assignment statements that create properties
+
+ By default, C{parse_docs()} will explore the contents of top-level
+C{try} and C{if} blocks. If desired, C{parse_docs()} can also
+be configured to explore the contents of C{while} and C{for} blocks.
+(See the configuration constants, below.)
+
+@todo: Make it possible to extend the functionality of C{parse_docs()},
+ by replacing process_line with a dispatch table that can be
+ customized (similarly to C{docintrospector.register_introspector()}).
+"""
+__docformat__ = 'epytext en'
+
+######################################################################
+## Imports
+######################################################################
+
+# Python source code parsing:
+import token, tokenize
+# Finding modules:
+import imp
+# File services:
+import os, os.path, sys
+# Unicode:
+import codecs
+# API documentation encoding:
+from epydoc.apidoc import *
+# For looking up the docs of builtins:
+import __builtin__, exceptions
+import epydoc.docintrospecter
+# Misc utility functions:
+from epydoc.util import *
+# Backwards compatibility
+from epydoc.compat import *
+
+######################################################################
+## Doc Parser
+######################################################################
+
+class ParseError(Exception):
+ """
+ An exception that is used to signify that C{docparser} encountered
+ syntactically invalid Python code while processing a Python source
+ file.
+ """
+
+_moduledoc_cache = {}
+"""A cache of C{ModuleDoc}s that we've already created.
+C{_moduledoc_cache} is a dictionary mapping from filenames to
+C{ValueDoc} objects.
+@type: C{dict}"""
+
+#////////////////////////////////////////////////////////////
+# Configuration Constants
+#////////////////////////////////////////////////////////////
+
+#{ Configuration Constants: Control Flow
+PARSE_TRY_BLOCKS = True
+"""Should the contents of C{try} blocks be examined?"""
+PARSE_EXCEPT_BLOCKS = True
+"""Should the contents of C{except} blocks be examined?"""
+PARSE_FINALLY_BLOCKS = True
+"""Should the contents of C{finally} blocks be examined?"""
+PARSE_IF_BLOCKS = True
+"""Should the contents of C{if} blocks be examined?"""
+PARSE_ELSE_BLOCKS = True
+"""Should the contents of C{else} and C{elif} blocks be examined?"""
+PARSE_WHILE_BLOCKS = False
+"""Should the contents of C{while} blocks be examined?"""
+PARSE_FOR_BLOCKS = False
+"""Should the contents of C{for} blocks be examined?"""
+
+#{ Configuration Constants: Imports
+IMPORT_HANDLING = 'link'
+"""What should C{docparser} do when it encounters an import
+statement?
+ - C{'link'}: Create variabledoc objects with imported_from pointers
+ to the source object.
+ - C{'parse'}: Parse the imported file, to find the actual
+ documentation for the imported object. (This will fall back
+ to the 'link' behavior if the imported file can't be parsed,
+ e.g., if it's a builtin.)
+"""
+
+IMPORT_STAR_HANDLING = 'parse'
+"""When C{docparser} encounters a C{'from M{m} import *'}
+statement, and is unable to parse C{M{m}} (either because
+L{IMPORT_HANDLING}=C{'link'}, or because parsing failed), how
+ should it determine the list of identifiers exported by C{M{m}}?
+ - C{'ignore'}: ignore the import statement, and don't create
+ any new variables.
+ - C{'parse'}: parse it to find a list of the identifiers that it
+ exports. (This will fall back to the 'ignore' behavior if the
+ imported file can't be parsed, e.g., if it's a builtin.)
+ - C{'introspect'}: import the module and introspect it (using C{dir})
+ to find a list of the identifiers that it exports. (This will
+ fall back to the 'ignore' behavior if the imported file can't
+ be parsed, e.g., if it's a builtin.)
+"""
+
+DEFAULT_DECORATOR_BEHAVIOR = 'transparent'
+"""When C{DocParse} encounters an unknown decorator, what should
+it do to the documentation of the decorated function?
+ - C{'transparent'}: leave the function's documentation as-is.
+ - C{'opaque'}: replace the function's documentation with an
+ empty C{ValueDoc} object, reflecting the fact that we have no
+ knowledge about what value the decorator returns.
+"""
+
+BASE_HANDLING = 'parse'#'link'
+"""What should C{docparser} do when it encounters a base class that
+was imported from another module?
+ - C{'link'}: Create a valuedoc with a C{proxy_for} pointer to the
+ base class.
+ - C{'parse'}: Parse the file containing the base class, to find
+ the actual documentation for it. (This will fall back to the
+ 'link' behavior if the imported file can't be parsed, e.g., if
+ it's a builtin.)
+"""
+
+#{ Configuration Constants: Comment docstrings
+COMMENT_DOCSTRING_MARKER = '#:'
+"""The prefix used to mark comments that contain attribute
+docstrings for variables."""
+
+#{ Configuration Constants: Grouping
+START_GROUP_MARKER = '#{'
+"""The prefix used to mark a comment that starts a group. This marker
+should be followed (on the same line) by the name of the group.
+Following a start-group comment, all variables defined at the same
+indentation level will be assigned to this group name, until the
+parser reaches the end of the file, a matching end-group comment, or
+another start-group comment at the same indentation level.
+"""
+
+END_GROUP_MARKER = '#}'
+"""The prefix used to mark a comment that ends a group. See
+L{START_GROUP_MARKER}."""
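The three comment conventions defined above (`#:` for attribute docstrings, `#{ Name` to open a group, `#}` to close one) can be illustrated with a trivial line scan; this is only a sketch of the markers' surface syntax, not epydoc's token-based parser:

```python
# Minimal scan over the comment-marker conventions defined above:
# '#:' marks an attribute docstring, '#{ Name' opens a group, and
# '#}' closes it.  (Illustrative only; the real parser is token-based.)
SOURCE = '''\
#{ Configuration
#: the default timeout, in seconds
TIMEOUT = 30
#}
'''

groups, doc_comments = [], []
for line in SOURCE.splitlines():
    stripped = line.strip()
    if stripped.startswith('#{'):
        groups.append(stripped[2:].strip())
    elif stripped.startswith('#:'):
        doc_comments.append(stripped[2:].strip())

print(groups, doc_comments)
```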
+
+#/////////////////////////////////////////////////////////////////
+#{ Module parser
+#/////////////////////////////////////////////////////////////////
+
+def parse_docs(filename=None, name=None, context=None, is_script=False):
+ """
+ Generate the API documentation for a specified object by
+ parsing Python source files, and return it as a L{ValueDoc}.
+ The object to generate documentation for may be specified
+ using the C{filename} parameter I{or} the C{name} parameter.
+ (It is an error to specify both a filename and a name; or to
+ specify neither a filename nor a name).
+
+ @param filename: The name of the file that contains the python
+ source code for a package, module, or script. If
+ C{filename} is specified, then C{parse_docs()} will return a
+ C{ModuleDoc} describing its contents.
+ @param name: The fully-qualified python dotted name of any
+ value (including packages, modules, classes, and
+ functions). C{parse_docs()} will automatically figure out
+ which module(s) it needs to parse in order to find the
+ documentation for the specified object.
+ @param context: The API documentation for the package that
+ contains C{filename}. If no context is given, then
+ C{filename} is assumed to contain a top-level module or
+ package. It is an error to specify a C{context} if the
+ C{name} argument is used.
+ @rtype: L{ValueDoc}
+ """
+ # Always introspect __builtins__ & exceptions (e.g., in case
+ # they're used as base classes.)
+ epydoc.docintrospecter.introspect_docs(__builtin__)
+ epydoc.docintrospecter.introspect_docs(exceptions)
+
+ # If our input is a python object name, then delegate to
+ # _find().
+ if filename is None and name is not None:
+ if context:
+ raise ValueError("context should only be specified together "
+ "with filename, not with name.")
+ name = DottedName(name)
+ val_doc = _find(name)
+ if val_doc.canonical_name is UNKNOWN:
+ val_doc.canonical_name = name
+ return val_doc
+
+ # If our input is a filename, then create a ModuleDoc for it,
+ # and use process_file() to populate its attributes.
+ elif filename is not None and name is None:
+ # Use a python source version, if possible.
+ if not is_script:
+ try: filename = py_src_filename(filename)
+ except ValueError, e: raise ImportError('%s' % e)
+
+ # Check the cache, first.
+ if filename in _moduledoc_cache:
+ return _moduledoc_cache[filename]
+
+ log.info("Parsing %s" % filename)
+
+ # If the context wasn't provided, then check if the file is in
+ # a package directory. If so, then update basedir & name to
+ # contain the topmost package's directory and the fully
+ # qualified name for this file. (This update assumes the
+ # default value of __path__ for the parent packages; if the
+ # parent packages override their __path__s, then this can
+ # cause us not to find the value.)
+ if context is None and not is_script:
+ basedir = os.path.split(filename)[0]
+ name = os.path.splitext(os.path.split(filename)[1])[0]
+ if name == '__init__':
+ basedir, name = os.path.split(basedir)
+ context = _parse_package(basedir)
+
+ # Figure out the canonical name of the module we're parsing.
+ if not is_script:
+ module_name, is_pkg = _get_module_name(filename, context)
+ else:
+ module_name = DottedName(munge_script_name(filename))
+ is_pkg = False
+
+ # Create a new ModuleDoc for the module, & add it to the cache.
+ module_doc = ModuleDoc(canonical_name=module_name, variables={},
+ sort_spec=[], imports=[],
+ filename=filename, package=context,
+ is_package=is_pkg, submodules=[],
+ docs_extracted_by='parser')
+ module_doc.defining_module = module_doc
+ _moduledoc_cache[filename] = module_doc
+
+ # Set the module's __path__ to its default value.
+ if is_pkg:
+ module_doc.path = [os.path.split(module_doc.filename)[0]]
+
+ # Add this module to the parent package's list of submodules.
+ if context is not None:
+ context.submodules.append(module_doc)
+
+ # Tokenize & process the contents of the module's source file.
+ try:
+ process_file(module_doc)
+ except tokenize.TokenError, e:
+ msg, (srow, scol) = e.args
+ raise ParseError('Error during parsing: %s '
+ '(%s, line %d, char %d)' %
+ (msg, module_doc.filename, srow, scol))
+ except IndentationError, e:
+ raise ParseError('Error during parsing: %s (%s)' %
+ (e, module_doc.filename))
+
+ # Handle any special variables (__path__, __docformat__, etc.)
+ handle_special_module_vars(module_doc)
+
+ # Return the completed ModuleDoc
+ return module_doc
+ else:
+ raise ValueError("Expected exactly one of the following "
+ "arguments: name, filename")
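For illustration, the same parse-don't-import idea can be sketched with the stdlib `ast` module. This toy `mini_parse` is a hypothetical helper, not part of epydoc: it recovers only a module's docstring and top-level names, where `parse_docs()` builds a full `ModuleDoc` from the token stream:

```python
import ast

def mini_parse(source, modname):
    """A tiny analogue of parse_docs(): recover a module's docstring
    and top-level names by parsing the source, never importing it."""
    tree = ast.parse(source)
    names = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            names.append(node.name)
        elif isinstance(node, ast.Assign):
            # Only simple 'name = value' targets, for brevity.
            names.extend(t.id for t in node.targets
                         if isinstance(t, ast.Name))
    return {'name': modname,
            'docstring': ast.get_docstring(tree),
            'names': names}
```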
+
+def _parse_package(package_dir):
+ """
+ If the given directory is a package directory, then parse its
+ __init__.py file (and the __init__.py files of all ancestor
+ packages); and return its C{ModuleDoc}.
+ """
+ if not is_package_dir(package_dir):
+ return None
+ parent_dir = os.path.split(package_dir)[0]
+ parent_doc = _parse_package(parent_dir)
+ package_file = os.path.join(package_dir, '__init__')
+ return parse_docs(filename=package_file, context=parent_doc)
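The directory walk that `_parse_package` performs recursively can also be written iteratively. A small sketch, with `is_package_dir` re-implemented here under the pre-PEP 420 assumption that a package directory is marked by an `__init__.py` file:

```python
import os

def is_package_dir(d):
    # Old-style packages are marked by an __init__.py file.
    return os.path.isfile(os.path.join(d, '__init__.py'))

def split_package_path(filename):
    """Walk up from a module file through its ancestor packages and
    return (basedir, dotted_name), mirroring the basedir/name update
    in parse_docs() plus _parse_package()'s recursion."""
    basedir, name = os.path.split(filename)
    name = os.path.splitext(name)[0]
    if name == '__init__':
        basedir, name = os.path.split(basedir)
    parts = [name]
    while is_package_dir(basedir):
        basedir, pkg = os.path.split(basedir)
        parts.insert(0, pkg)
    return basedir, '.'.join(parts)
```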
+
+# Special vars:
+# C{__docformat__}, C{__all__}, and C{__path__}.
+def handle_special_module_vars(module_doc):
+ # If __docformat__ is defined, parse its value.
+ toktree = _module_var_toktree(module_doc, '__docformat__')
+ if toktree is not None:
+ try: module_doc.docformat = parse_string(toktree)
+ except: pass
+ del module_doc.variables['__docformat__']
+
+ # If __all__ is defined, parse its value.
+ toktree = _module_var_toktree(module_doc, '__all__')
+ if toktree is not None:
+ try:
+ public_names = set(parse_string_list(toktree))
+ for name, var_doc in module_doc.variables.items():
+ if name in public_names:
+ var_doc.is_public = True
+ if not isinstance(var_doc, ModuleDoc):
+ var_doc.is_imported = False
+ else:
+ var_doc.is_public = False
+ except ParseError:
+ # If we couldn't parse the list, give precedence to introspection.
+ for name, var_doc in module_doc.variables.items():
+ if not isinstance(var_doc, ModuleDoc):
+ var_doc.is_imported = UNKNOWN
+ del module_doc.variables['__all__']
+
+ # If __path__ is defined, then extract its value (pkgs only)
+ if module_doc.is_package:
+ toktree = _module_var_toktree(module_doc, '__path__')
+ if toktree is not None:
+ try:
+ module_doc.path = parse_string_list(toktree)
+ except ParseError:
+ pass # [xx]
+ del module_doc.variables['__path__']
+
+def _module_var_toktree(module_doc, name):
+ var_doc = module_doc.variables.get(name)
+ if (var_doc is None or var_doc.value in (None, UNKNOWN) or
+ var_doc.value.toktree is UNKNOWN):
+ return None
+ else:
+ return var_doc.value.toktree
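The `__all__` policy above (an explicit literal list wins; otherwise fall back to the underscore convention) can be demonstrated with a standalone `ast`-based sketch. This is illustrative only; the real code works on token trees via `parse_string_list()`:

```python
import ast

def public_names(source):
    """Compute a module's public names by parsing, honoring __all__
    when it is a literal list -- the same policy that
    handle_special_module_vars applies to parsed variables."""
    assigned, explicit = [], None
    for node in ast.parse(source).body:
        if not isinstance(node, ast.Assign):
            continue
        for target in node.targets:
            if not isinstance(target, ast.Name):
                continue
            if target.id == '__all__':
                try:
                    explicit = [str(n) for n in ast.literal_eval(node.value)]
                except ValueError:
                    pass  # not a literal; keep the default policy
            else:
                assigned.append(target.id)
    if explicit is not None:
        return explicit
    return [n for n in assigned if not n.startswith('_')]
```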
+
+#////////////////////////////////////////////////////////////
+#{ Module Lookup
+#////////////////////////////////////////////////////////////
+
+def _find(name, package_doc=None):
+ """
+ Return the API documentation for the object whose name is
+ C{name}. C{package_doc}, if specified, is the API
+ documentation for the package containing the named object.
+ """
+ # If we're inside a package, then find the package's path.
+ if package_doc is None:
+ path = None
+ elif package_doc.path is not UNKNOWN:
+ path = package_doc.path
+ else:
+ path = [os.path.split(package_doc.filename)[0]]
+
+ # The leftmost identifier in `name` should be a module or
+ # package on the given path; find it and parse it.
+ filename = _get_filename(name[0], path)
+ module_doc = parse_docs(filename, context=package_doc)
+
+ # If the name just has one identifier, then the module we just
+ # parsed is the object we're looking for; return it.
+ if len(name) == 1: return module_doc
+
+ # Otherwise, we're looking for something inside the module.
+ # First, check to see if it's in a variable (but ignore
+ # variables that just contain imported submodules).
+ if not _is_submodule_import_var(module_doc, name[1]):
+ try: return _find_in_namespace(name[1:], module_doc)
+ except ImportError: pass
+
+ # If not, then check to see if it's in a subpackage.
+ if module_doc.is_package:
+ return _find(name[1:], module_doc)
+
+ # If it's not in a variable or a subpackage, then we can't
+ # find it.
+ raise ImportError('Could not find value')
+
+def _is_submodule_import_var(module_doc, var_name):
+ """
+ Return true if C{var_name} is the name of a variable in
+ C{module_doc} that just contains an C{imported_from} link to a
+ submodule of the same name. (I.e., is a variable created when
+ a package imports one of its own submodules.)
+ """
+ var_doc = module_doc.variables.get(var_name)
+ full_var_name = DottedName(module_doc.canonical_name, var_name)
+ return (var_doc is not None and
+ var_doc.imported_from == full_var_name)
+
+def _find_in_namespace(name, namespace_doc):
+ if name[0] not in namespace_doc.variables:
+ raise ImportError('Could not find value')
+
+ # Look up the variable in the namespace.
+ var_doc = namespace_doc.variables[name[0]]
+ if var_doc.value is UNKNOWN:
+ raise ImportError('Could not find value')
+ val_doc = var_doc.value
+
+ # If the variable's value was imported, then follow its
+ # alias link.
+ if var_doc.imported_from not in (None, UNKNOWN):
+ return _find(var_doc.imported_from+name[1:])
+
+ # Otherwise, if the name has one identifier, then this is the
+ # value we're looking for; return it.
+ elif len(name) == 1:
+ return val_doc
+
+ # Otherwise, if this value is a namespace, look inside it.
+ elif isinstance(val_doc, NamespaceDoc):
+ return _find_in_namespace(name[1:], val_doc)
+
+ # Otherwise, we ran into a dead end.
+ else:
+ raise ImportError('Could not find value')
+
+def _get_filename(identifier, path=None):
+ if path is UNKNOWN: path = None
+ try:
+ fp, filename, (s,m,typ) = imp.find_module(identifier, path)
+ if fp is not None: fp.close()
+ except ImportError:
+ raise ImportError, 'No Python source file found.'
+
+ if typ == imp.PY_SOURCE:
+ return filename
+ elif typ == imp.PY_COMPILED:
+ # See if we can find a corresponding non-compiled version.
+ filename = re.sub(r'\.py\w$', '.py', filename)
+ if not os.path.exists(filename):
+ raise ImportError, 'No Python source file found.'
+ return filename
+ elif typ == imp.PKG_DIRECTORY:
+ filename = os.path.join(filename, '__init__.py')
+ if not os.path.exists(filename):
+ filename = os.path.join(filename, '__init__.pyw')
+ if not os.path.exists(filename):
+ raise ImportError, 'No package file found.'
+ return filename
+ elif typ == imp.C_BUILTIN:
+ raise ImportError, 'No Python source file for builtin modules.'
+ elif typ == imp.C_EXTENSION:
+ raise ImportError, 'No Python source file for c extensions.'
+ else:
+ raise ImportError, 'No Python source file found.'
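`imp` is deprecated in modern Python; the same source-file lookup can be approximated with `importlib`. A hedged sketch (the `.pyc`/`.pyo` fallback here is deliberately simplistic and does not handle `__pycache__` layouts):

```python
import importlib.util
import os

def find_source_file(modname):
    """Locate the .py source for a module, analogous to _get_filename()
    but using importlib instead of the deprecated imp module."""
    spec = importlib.util.find_spec(modname)
    if spec is None:
        raise ImportError('No module named %s' % modname)
    origin = spec.origin
    if origin in (None, 'built-in', 'frozen'):
        raise ImportError('No Python source file for %s.' % modname)
    if origin.endswith(('.pyc', '.pyo')):
        origin = origin[:-1]  # try the matching non-compiled version
    if not origin.endswith('.py') or not os.path.exists(origin):
        # Covers C extensions (.so/.pyd) and missing sources alike.
        raise ImportError('No Python source file found.')
    return origin
```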
+
+#/////////////////////////////////////////////////////////////////
+#{ File tokenization loop
+#/////////////////////////////////////////////////////////////////
+
+def process_file(module_doc):
+ """
+ Read the given C{ModuleDoc}'s file, and add variables
+ corresponding to any objects defined in that file. In
+ particular, read and tokenize C{module_doc.filename}, and
+ process each logical line using L{process_line()}.
+ """
+ # Keep track of the current line number:
+ lineno = None
+
+ # Use this list to collect the tokens on a single logical line:
+ line_toks = []
+
+ # This list contains one APIDoc for each indentation level.
+ # The first element is the APIDoc for the module, and each
+ # subsequent element is the APIDoc for the object at that
+ # indentation level. The final element of the list is the
+ # C{APIDoc} for the entity that we're currently processing.
+ parent_docs = [module_doc]
+
+ # The APIDoc for the object that was defined by the previous
+ # line, if any; or None otherwise. This is used to update
+ # parent_docs when we encounter an indent; and to decide what
+ # object (if any) is described by a docstring.
+ prev_line_doc = module_doc
+
+ # A list of comments that occur before or on the current
+ # logical line, used to build the comment docstring. Each
+ # element is a tuple (comment_text, comment_lineno).
+ comments = []
+
+ # A list of decorator lines that occur before the current
+ # logical line. This is used so we can process a function
+ # declaration line and its decorators all at once.
+ decorators = []
+
+ # A list of group names, one for each indentation level. This is
+ # used to keep track of groups that are defined by comment markers
+ # START_GROUP_MARKER and END_GROUP_MARKER.
+ groups = [None]
+
+ # When we encounter a comment start group marker, set this to the
+ # name of the group; but wait until we're ready to process the
+ # next line before we actually set groups[-1] to this value. This
+ # is necessary because at the top of a block, the tokenizer gives
+ # us comments before the INDENT token; but if we encounter a group
+ # start marker at the top of a block, then we want it to apply
+ # inside that block, not outside it.
+ start_group = None
+
+ # Check if the source file declares an encoding.
+ encoding = get_module_encoding(module_doc.filename)
+
+ # The token-eating loop:
+ try:
+ module_file = codecs.open(module_doc.filename, 'rU', encoding)
+ except LookupError:
+ log.warning("Unknown encoding %r for %s; using the default "
+ "encoding instead (iso-8859-1)" %
+ (encoding, module_doc.filename))
+ encoding = 'iso-8859-1'
+ module_file = codecs.open(module_doc.filename, 'rU', encoding)
+ tok_iter = tokenize.generate_tokens(module_file.readline)
+ for toktype, toktext, (srow,scol), (erow,ecol), line_str in tok_iter:
+ # BOM encoding marker: ignore.
+ if (toktype == token.ERRORTOKEN and
+ (toktext == u'\ufeff' or
+ toktext.encode(encoding) == '\xef\xbb\xbf')):
+ pass
+
+ # Error token: abort
+ elif toktype == token.ERRORTOKEN:
+ raise ParseError('Error during parsing: invalid syntax '
+ '(%s, line %d, char %d: %r)' %
+ (module_doc.filename, srow, scol, toktext))
+
+ # Indent token: update the parent_doc stack.
+ elif toktype == token.INDENT:
+ if prev_line_doc is None:
+ parent_docs.append(parent_docs[-1])
+ else:
+ parent_docs.append(prev_line_doc)
+ groups.append(None)
+
+ # Dedent token: update the parent_doc stack.
+ elif toktype == token.DEDENT:
+ if line_toks == []:
+ parent_docs.pop()
+ groups.pop()
+ else:
+ # This *should* only happen if the file ends on an
+ # indented line, with no final newline.
+ # (otherwise, this is the wrong thing to do.)
+ pass
+
+ # Line-internal newline token: if we're still at the start of
+ # the logical line, and we've seen one or more comment lines,
+ # then discard them: blank lines are not allowed between a
+ # comment block and the thing it describes.
+ elif toktype == tokenize.NL:
+ if comments and not line_toks:
+ log.warning('Ignoring docstring comment block followed by '
+ 'a blank line in %r on line %r' %
+ (module_doc.filename, srow-1))
+ comments = []
+
+ # Comment token: add to comments if appropriate.
+ elif toktype == tokenize.COMMENT:
+ if toktext.startswith(COMMENT_DOCSTRING_MARKER):
+ comment_line = toktext[len(COMMENT_DOCSTRING_MARKER):].rstrip()
+ if comment_line.startswith(" "):
+ comment_line = comment_line[1:]
+ comments.append( [comment_line, srow])
+ elif toktext.startswith(START_GROUP_MARKER):
+ start_group = toktext[len(START_GROUP_MARKER):].strip()
+ elif toktext.startswith(END_GROUP_MARKER):
+ for i in range(len(groups)-1, -1, -1):
+ if groups[i]:
+ groups[i] = None
+ break
+ else:
+ log.warning("Got group end marker without a corresponding "
+ "start marker in %r on line %r" %
+ (module_doc.filename, srow))
+
+ # Normal token: Add it to line_toks. (If it's a non-unicode
+ # string literal, then we need to re-encode using the file's
+ # encoding, to get back to the original 8-bit data; and then
+ # convert that string with 8-bit data to a 7-bit ascii
+ # representation.)
+ elif toktype != token.NEWLINE and toktype != token.ENDMARKER:
+ if lineno is None: lineno = srow
+ if toktype == token.STRING:
+ str_prefixes = re.match('[^\'"]*', toktext).group()
+ if 'u' not in str_prefixes:
+ s = toktext.encode(encoding)
+ toktext = decode_with_backslashreplace(s)
+ line_toks.append( (toktype, toktext) )
+
+ # Decorator line: add it to the decorators list.
+ elif line_toks and line_toks[0] == (token.OP, '@'):
+ decorators.append(shallow_parse(line_toks))
+ line_toks = []
+
+ # End of line token, but nothing to do.
+ elif line_toks == []:
+ pass
+
+ # End of line token: parse the logical line & process it.
+ else:
+ if start_group:
+ groups[-1] = start_group
+ start_group = None
+
+ if parent_docs[-1] != 'skip_block':
+ try:
+ prev_line_doc = process_line(
+ shallow_parse(line_toks), parent_docs, prev_line_doc,
+ lineno, comments, decorators, encoding)
+ except ParseError, e:
+ raise ParseError('Error during parsing: invalid '
+ 'syntax (%s, line %d) -- %s' %
+ (module_doc.filename, lineno, e))
+ except KeyboardInterrupt, e: raise
+ except Exception, e:
+ log.error('Internal error during parsing (%s, line '
+ '%s):\n%s' % (module_doc.filename, lineno, e))
+ raise
+
+ # grouping...
+ if groups[-1] and prev_line_doc not in (None, 'skip_block'):
+ if isinstance(prev_line_doc, VariableDoc):
+ # prev_line_doc's container will only be
+ # UNKNOWN if it's an instance variable that
+ # didn't have a doc-comment, but might still
+ # be followed by a docstring. Since we
+ # tokenize in order, we can't do lookahead to
+ # see if the variable will have a comment; but
+ # it should only be added to the container if
+ # it does. So we defer the grouping of that
+ # to be handled by process_docstring instead.
+ if prev_line_doc.container is not UNKNOWN:
+ add_to_group(prev_line_doc.container,
+ prev_line_doc, groups[-1])
+ elif isinstance(parent_docs[-1], NamespaceDoc):
+ add_to_group(parent_docs[-1], prev_line_doc,
+ groups[-1])
+ else:
+ prev_line_doc = None
+
+ # Reset line contents.
+ line_toks = []
+ lineno = None
+ comments = []
+ decorators = []
+
+def add_to_group(container, api_doc, group_name):
+ if container.group_specs is UNKNOWN:
+ container.group_specs = []
+
+ if isinstance(api_doc, VariableDoc):
+ var_name = api_doc.name
+ else:
+ if api_doc.canonical_name is UNKNOWN: log.debug('ouch', `api_doc`)
+ var_name = api_doc.canonical_name[-1]
+
+ for (name, group_vars) in container.group_specs:
+ if name == group_name:
+ group_vars.append(var_name)
+ return
+ else:
+ container.group_specs.append( (group_name, [var_name]) )
+
+def script_guard(line):
+ """Detect the idiomatic trick C{if __name__ == "__main__":}"""
+ return (len(line) == 5
+ and line[1][1] == '__name__' # this is the most selective
+ and line[0][1] == 'if'
+ and line[2][1] == '=='
+ and line[4][1] == ':'
+ and line[3][1][1:-1] == '__main__')
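A quick way to see `script_guard()` in action is to build the `(toktype, toktext)` pairs for one logical line with the stdlib tokenizer, the same shape that `process_file()` accumulates in `line_toks`:

```python
import io
import token
import tokenize

def first_logical_line(source):
    """Flat (toktype, toktext) pairs for the first logical line,
    mirroring the shape of line_toks in process_file()."""
    toks = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == token.NEWLINE:
            break
        if tok.type in (tokenize.NL, tokenize.COMMENT):
            continue
        toks.append((tok.type, tok.string))
    return toks

def is_script_guard(line):
    # The same checks as script_guard(), on (toktype, toktext) pairs.
    return (len(line) == 5
            and line[1][1] == '__name__'
            and line[0][1] == 'if'
            and line[2][1] == '=='
            and line[4][1] == ':'
            and line[3][1][1:-1] == '__main__')
```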
+
+#/////////////////////////////////////////////////////////////////
+#{ Shallow parser
+#/////////////////////////////////////////////////////////////////
+
+def shallow_parse(line_toks):
+ """
+ Given a flat list of tokens, return a nested tree structure
+ (called a X{token tree}), whose leaves are identical to the
+ original list, but whose structure reflects the structure
+ implied by the grouping tokens (i.e., parentheses, braces, and
+ brackets). If the parentheses, braces, and brackets are
+ mismatched or unbalanced, then raise a C{ParseError}.
+ """
+ stack = [[]]
+ parens = []
+ for tok in line_toks:
+ toktype, toktext = tok
+ if toktext in ('(','[','{'):
+ parens.append(tok)
+ stack.append([tok])
+ elif toktext in ('}',']',')'):
+ if not parens:
+ raise ParseError('Unbalanced parens')
+ left_paren = parens.pop()[1]
+ if left_paren+toktext not in ('()', '[]', '{}'):
+ raise ParseError('Mismatched parens')
+ lst = stack.pop()
+ lst.append(tok)
+ stack[-1].append(lst)
+ else:
+ stack[-1].append(tok)
+ if len(stack) != 1 or len(parens) != 0:
+ raise ParseError('Unbalanced parens')
+ return stack[0]
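The same grouping can be exercised end-to-end on real tokens. This standalone sketch reuses the algorithm above, with `ValueError` standing in for epydoc's `ParseError`:

```python
import io
import token
import tokenize

def line_tokens(source):
    # Flat (toktype, toktext) pairs for one logical line.
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in (token.NEWLINE, token.ENDMARKER):
            break
        out.append((tok.type, tok.string))
    return out

def group_brackets(line_toks):
    """Nest a flat token list by bracket structure, as shallow_parse
    does; raises ValueError on mismatched or unbalanced brackets."""
    stack = [[]]
    parens = []
    for tok in line_toks:
        toktype, toktext = tok
        if toktext in ('(', '[', '{'):
            parens.append(tok)
            stack.append([tok])
        elif toktext in (')', ']', '}'):
            if not parens:
                raise ValueError('Unbalanced parens')
            left = parens.pop()[1]
            if left + toktext not in ('()', '[]', '{}'):
                raise ValueError('Mismatched parens')
            group = stack.pop()
            group.append(tok)
            stack[-1].append(group)
        else:
            stack[-1].append(tok)
    if len(stack) != 1 or parens:
        raise ValueError('Unbalanced parens')
    return stack[0]
```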
+
+#/////////////////////////////////////////////////////////////////
+#{ Line processing
+#/////////////////////////////////////////////////////////////////
+# The methods process_*() are used to handle lines.
+
+def process_line(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ """
+ Dispatch the given logical line to the appropriate process_*()
+ handler, based on its leading tokens.
+
+ @return: The C{APIDoc} for the object defined by this line, if
+ any; the string C{'skip_block'} if the block opened by this
+ line should be ignored; or C{None} otherwise.
+ """
+ args = (line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding)
+
+ if not line: # blank line.
+ return None
+ elif (token.OP, ':') in line[:-1]:
+ return process_one_line_block(*args)
+ elif (token.OP, ';') in line:
+ return process_multi_stmt(*args)
+ elif line[0] == (token.NAME, 'def'):
+ return process_funcdef(*args)
+ elif line[0] == (token.OP, '@'):
+ return process_funcdef(*args)
+ elif line[0] == (token.NAME, 'class'):
+ return process_classdef(*args)
+ elif line[0] == (token.NAME, 'import'):
+ return process_import(*args)
+ elif line[0] == (token.NAME, 'from'):
+ return process_from_import(*args)
+ elif line[0] == (token.NAME, 'del'):
+ return process_del(*args)
+ elif len(line)==1 and line[0][0] == token.STRING:
+ return process_docstring(*args)
+ elif (token.OP, '=') in line:
+ return process_assignment(*args)
+ elif (line[0][0] == token.NAME and
+ line[0][1] in CONTROL_FLOW_KEYWORDS):
+ return process_control_flow_line(*args)
+ else:
+ return None
+ # [xx] do something with control structures like for/if?
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: control flow
+#/////////////////////////////////////////////////////////////////
+
+CONTROL_FLOW_KEYWORDS = [
+ #: A list of the control flow keywords. If a line begins with
+ #: one of these keywords, then it should be handled by
+ #: C{process_control_flow_line}.
+ 'if', 'elif', 'else', 'while', 'for', 'try', 'except', 'finally']
+
+def process_control_flow_line(line, parent_docs, prev_line_doc,
+ lineno, comments, decorators, encoding):
+ keyword = line[0][1]
+
+ # If it's a 'for' block: create the loop variable.
+ if keyword == 'for' and PARSE_FOR_BLOCKS:
+ loopvar_name = parse_dotted_name(
+ split_on(line[1:], (token.NAME, 'in'))[0])
+ parent = get_lhs_parent(loopvar_name, parent_docs)
+ if parent is not None:
+ var_doc = VariableDoc(name=loopvar_name[-1], is_alias=False,
+ is_imported=False, is_instvar=False,
+ docs_extracted_by='parser')
+ set_variable(parent, var_doc)
+
+ if ((keyword == 'if' and PARSE_IF_BLOCKS and not script_guard(line)) or
+ (keyword == 'elif' and PARSE_ELSE_BLOCKS) or
+ (keyword == 'else' and PARSE_ELSE_BLOCKS) or
+ (keyword == 'while' and PARSE_WHILE_BLOCKS) or
+ (keyword == 'for' and PARSE_FOR_BLOCKS) or
+ (keyword == 'try' and PARSE_TRY_BLOCKS) or
+ (keyword == 'except' and PARSE_EXCEPT_BLOCKS) or
+ (keyword == 'finally' and PARSE_FINALLY_BLOCKS)):
+ # Return "None" to indicate that we should process the
+ # block using the same context that we were already in.
+ return None
+ else:
+ # Return 'skip_block' to indicate that we should ignore
+ # the contents of this block.
+ return 'skip_block'
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: imports
+#/////////////////////////////////////////////////////////////////
+# [xx] I could optionally add ValueDoc's for the imported
+# variables with proxy_for set to the imported source; but
+# I don't think I gain much of anything by doing so.
+
+def process_import(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ if not isinstance(parent_docs[-1], NamespaceDoc): return
+
+ names = split_on(line[1:], (token.OP, ','))
+
+ for name in names:
+ name_pieces = split_on(name, (token.NAME, 'as'))
+ if len(name_pieces) == 1:
+ src_name = parse_dotted_name(name_pieces[0])
+ _import_var(src_name, parent_docs)
+ elif len(name_pieces) == 2:
+ if len(name_pieces[1]) != 1:
+ raise ParseError('Expected identifier after "as"')
+ src_name = parse_dotted_name(name_pieces[0])
+ var_name = parse_name(name_pieces[1][0])
+ _import_var_as(src_name, var_name, parent_docs)
+ else:
+ raise ParseError('Multiple "as" tokens in import')
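`split_on()` is defined elsewhere in this module; the sketch below shows the behavior the import handlers assume (split a token list on a separator token, dropping the separators). Treat it as an assumption about the helper, not epydoc's actual implementation:

```python
def split_on(line_toks, op_tok):
    """Split a flat token list on a separator token such as
    (token.OP, ','), dropping the separator tokens themselves."""
    pieces = [[]]
    for tok in line_toks:
        if tok == op_tok:
            pieces.append([])
        else:
            pieces[-1].append(tok)
    if pieces[-1] == []:
        pieces.pop()  # ignore a trailing separator
    return pieces
```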
+
+def process_from_import(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ if not isinstance(parent_docs[-1], NamespaceDoc): return
+
+ pieces = split_on(line[1:], (token.NAME, 'import'))
+ if len(pieces) != 2 or not pieces[0] or not pieces[1]:
+ raise ParseError("Bad from-import")
+ lhs, rhs = pieces
+
+ # The RHS might be parenthesized, as specified by PEP 328:
+ # http://www.python.org/peps/pep-0328.html
+ if (len(rhs) == 1 and isinstance(rhs[0], list) and
+ rhs[0][0] == (token.OP, '(') and rhs[0][-1] == (token.OP, ')')):
+ rhs = rhs[0][1:-1]
+
+ # >>> from __future__ import nested_scopes
+ if lhs == [(token.NAME, '__future__')]:
+ return
+
+ # >>> from sys import *
+ elif rhs == [(token.OP, '*')]:
+ src_name = parse_dotted_name(lhs)
+ _process_fromstar_import(src_name, parent_docs)
+
+ # >>> from os.path import join, split
+ else:
+ # Allow relative imports in this case, as per PEP 328
+ src_name = parse_dotted_name(lhs,
+ parent_name=parent_docs[-1].canonical_name)
+ parts = split_on(rhs, (token.OP, ','))
+ for part in parts:
+ # from m import x
+ if len(part) == 1:
+ var_name = parse_name(part[0])
+ _import_var_as(DottedName(src_name, var_name),
+ var_name, parent_docs)
+
+ # from m import x as y
+ elif len(part) == 3 and part[1] == (token.NAME, 'as'):
+ orig_name = parse_name(part[0])
+ var_name = parse_name(part[2])
+ _import_var_as(DottedName(src_name, orig_name),
+ var_name, parent_docs)
+
+ else:
+ raise ParseError("Bad from-import")
+
+def _process_fromstar_import(src, parent_docs):
+ """
+ Handle a statement of the form:
+ >>> from <src> import *
+
+ If L{IMPORT_HANDLING} is C{'parse'}, then first try to parse
+ the module C{M{<src>}}, and copy all of its exported variables
+ to C{parent_docs[-1]}.
+
+ Otherwise, try to determine the names of the variables exported by
+ C{M{<src>}}, and create a new variable for each export. If
+ L{IMPORT_STAR_HANDLING} is C{'parse'}, then the list of exports is
+ found by parsing C{M{<src>}}; if it is C{'introspect'}, then the
+ list of exports is found by importing and introspecting
+ C{M{<src>}}.
+ """
+ # This is redundant: already checked by caller.
+ if not isinstance(parent_docs[-1], NamespaceDoc): return
+
+ # If src is package-local, then convert it to a global name.
+ src = _global_name(src, parent_docs)
+
+ # Record the import
+ parent_docs[0].imports.append(src) # mark that it's .*??
+
+ # [xx] add check for if we already have the source docs in our
+ # cache??
+
+ if (IMPORT_HANDLING == 'parse' or
+ IMPORT_STAR_HANDLING == 'parse'): # [xx] is this ok?
+ try: module_doc = _find(src)
+ except ImportError: module_doc = None
+ if isinstance(module_doc, ModuleDoc):
+ for name, imp_var in module_doc.variables.items():
+ # [xx] this is not exactly correct, but close. It
+ # does the wrong thing if a __var__ is explicitly
+ # listed in __all__.
+ if (imp_var.is_public and
+ not (name.startswith('__') and name.endswith('__'))):
+ var_doc = _add_import_var(DottedName(src, name), name,
+ parent_docs[-1])
+ if IMPORT_HANDLING == 'parse':
+ var_doc.value = imp_var.value
+
+ # If we got here, then either IMPORT_HANDLING='link' or we
+ # failed to parse the `src` module.
+ if IMPORT_STAR_HANDLING == 'introspect':
+ try: module = __import__(str(src), {}, {}, [0])
+ except: return # We couldn't import it.
+ if module is None: return # We couldn't import it.
+ if hasattr(module, '__all__'):
+ names = list(module.__all__)
+ else:
+ names = [n for n in dir(module) if not n.startswith('_')]
+ for name in names:
+ _add_import_var(DottedName(src, name), name, parent_docs[-1])
+
+def _import_var(name, parent_docs):
+ """
+ Handle a statement of the form:
+ >>> import <name>
+
+ If L{IMPORT_HANDLING} is C{'parse'}, then first try to find
+ the value by parsing; and create an appropriate variable in
+ parentdoc.
+
+ Otherwise, add a variable for the imported variable. (More than
+ one variable may be created for cases like C{'import a.b'}, where
+ we need to create a variable C{'a'} in parentdoc containing a
+ proxy module; and a variable C{'b'} in the proxy module.)
+ """
+ # This is redundant: already checked by caller.
+ if not isinstance(parent_docs[-1], NamespaceDoc): return
+
+ # If name is package-local, then convert it to a global name.
+ src = _global_name(name, parent_docs)
+ src_prefix = src[:len(src)-len(name)]
+
+ # Record the import
+ parent_docs[0].imports.append(name)
+
+ # [xx] add check for if we already have the source docs in our
+ # cache??
+
+ if IMPORT_HANDLING == 'parse':
+ # Check to make sure that we can actually find the value.
+ try: val_doc = _find(src)
+ except ImportError: val_doc = None
+ if val_doc is not None:
+ # We found it; but it's not the value itself we want to
+ # import, but the module containing it; so import that
+ # module (=top_mod) and create a variable for it.
+ top_mod = src_prefix+name[0]
+ var_doc = _add_import_var(top_mod, name[0], parent_docs[-1])
+ var_doc.value = _find(DottedName(name[0]))
+ return
+
+ # If we got here, then either IMPORT_HANDLING='link', or we
+ # did not successfully find the value's docs by parsing; use
+ # a variable with an UNKNOWN value.
+
+ # Create any necessary intermediate proxy module values.
+ container = parent_docs[-1]
+ for i, identifier in enumerate(name[:-1]):
+ if (identifier not in container.variables or
+ not isinstance(container.variables[identifier], ModuleDoc)):
+ var_doc = _add_import_var(name[:i+1], identifier, container)
+ var_doc.value = ModuleDoc(variables={}, sort_spec=[],
+ proxy_for=src_prefix+name[:i+1],
+ submodules={},
+ docs_extracted_by='parser')
+ container = container.variables[identifier].value
+
+ # Add the variable to the container.
+ _add_import_var(src, name[-1], container)
+
+def _import_var_as(src, name, parent_docs):
+ """
+ Handle a statement of the form:
+ >>> import src as name
+
+ If L{IMPORT_HANDLING} is C{'parse'}, then first try to find
+ the value by parsing; and create an appropriate variable in
+ parentdoc.
+
+ Otherwise, create a variable with its C{imported_from} attribute
+ pointing to the imported object.
+ """
+ # This is redundant: already checked by caller.
+ if not isinstance(parent_docs[-1], NamespaceDoc): return
+
+ # If src is package-local, then convert it to a global name.
+ src = _global_name(src, parent_docs)
+
+ # Record the import
+ parent_docs[0].imports.append(src)
+
+ if IMPORT_HANDLING == 'parse':
+ # Parse the value and create a variable for it.
+ try: val_doc = _find(src)
+ except ImportError: val_doc = None
+ if val_doc is not None:
+ var_doc = VariableDoc(name=name, value=val_doc,
+ is_imported=True, is_alias=False,
+ imported_from=src,
+ docs_extracted_by='parser')
+ set_variable(parent_docs[-1], var_doc)
+ return
+
+ # If we got here, then either IMPORT_HANDLING='link', or we
+ # did not successfully find the value's docs by parsing; use a
+ # variable with a proxy value.
+ _add_import_var(src, name, parent_docs[-1])
+
+def _add_import_var(src, name, container):
+ """
+ Add a new imported variable named C{name} to C{container}, with
+ C{imported_from=src}.
+ """
+ var_doc = VariableDoc(name=name, is_imported=True, is_alias=False,
+ imported_from=src, docs_extracted_by='parser')
+ set_variable(container, var_doc)
+ return var_doc
+
+def _global_name(name, parent_docs):
+ """
+ If the given name is package-local (relative to the current
+ context, as determined by C{parent_docs}), then convert it
+ to a global name.
+ """
+ # Get the containing package from parent_docs.
+ if parent_docs[0].is_package:
+ package = parent_docs[0]
+ else:
+ package = parent_docs[0].package
+
+ # Check each package (from closest to furthest) to see if it
+ # contains a module named name[0]; if so, then treat `name` as
+ # relative to that package.
+ while package not in (None, UNKNOWN):
+ try:
+ fp = imp.find_module(name[0], package.path)[0]
+ if fp is not None: fp.close()
+ except ImportError:
+ # No submodule found here; try the next package up.
+ package = package.package
+ continue
+ # A submodule was found; return its name.
+ return package.canonical_name + name
+
+ # We didn't find any package containing `name`; so just return
+ # `name` as-is.
+ return name
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: assignment
+#/////////////////////////////////////////////////////////////////
+
+def process_assignment(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ # Divide the assignment statement into its pieces.
+ pieces = split_on(line, (token.OP, '='))
+
+ lhs_pieces = pieces[:-1]
+ rhs = pieces[-1]
+
+ # Decide whether the variable is an instance variable or not.
+ # If it's an instance var, then discard the value.
+ is_instvar = lhs_is_instvar(lhs_pieces, parent_docs)
+
+ # if it's not an instance var, and we're not in a namespace,
+ # then it's just a local var -- so ignore it.
+ if not (is_instvar or isinstance(parent_docs[-1], NamespaceDoc)):
+ return None
+
+ # Evaluate the right hand side.
+ if not is_instvar:
+ rhs_val, is_alias = rhs_to_valuedoc(rhs, parent_docs)
+ else:
+ rhs_val, is_alias = UNKNOWN, False
+
+ # Assign the right hand side value to each left hand side.
+ # (Do the rightmost assignment first)
+ lhs_pieces.reverse()
+ for lhs in lhs_pieces:
+ # Try treating the LHS as a simple dotted name.
+ try: lhs_name = parse_dotted_name(lhs)
+ except ParseError: lhs_name = None
+ if lhs_name is not None:
+ lhs_parent = get_lhs_parent(lhs_name, parent_docs)
+ if lhs_parent is None: continue
+
+ # Skip a special class variable.
+ if lhs_name[-1] == '__slots__':
+ continue
+
+ # Create the VariableDoc.
+ var_doc = VariableDoc(name=lhs_name[-1], value=rhs_val,
+ is_imported=False, is_alias=is_alias,
+ is_instvar=is_instvar,
+ docs_extracted_by='parser')
+ # Extract a docstring from the comments, when present,
+ # but only if there's a single LHS.
+ if len(lhs_pieces) == 1:
+ add_docstring_from_comments(var_doc, comments)
+
+ # Assign the variable to the containing namespace,
+ # *unless* the variable is an instance variable
+ # without a comment docstring. In that case, we'll
+ # only want to add it if we later discover that it's
+ # followed by a variable docstring. If it is, then
+ # process_docstring will take care of adding it to the
+ # containing class. (This is a little hackish, but
+ # unfortunately is necessary because we won't know if
+ # this assignment line is followed by a docstring
+ # until later.)
+ if (not is_instvar) or comments:
+ set_variable(lhs_parent, var_doc, True)
+
+ # If it's the only var, then return the VarDoc for use
+ # as the new `prev_line_doc`.
+ if (len(lhs_pieces) == 1 and
+ (len(lhs_name) == 1 or is_instvar)):
+ return var_doc
+
+ # Otherwise, the LHS must be a complex expression; use
+ # dotted_names_in() to decide what variables it contains,
+ # and create VariableDoc's for all of them (with UNKNOWN
+ # value).
+ else:
+ for lhs_name in dotted_names_in(lhs_pieces):
+ lhs_parent = get_lhs_parent(lhs_name, parent_docs)
+ if lhs_parent is None: continue
+ var_doc = VariableDoc(name=lhs_name[-1],
+ is_imported=False,
+ is_alias=is_alias,
+ is_instvar=is_instvar,
+ docs_extracted_by='parser')
+ set_variable(lhs_parent, var_doc, True)
+
+ # If we have multiple left-hand-sides, then all but the
+ # rightmost one are considered aliases.
+ is_alias = True
+
+
+def lhs_is_instvar(lhs_pieces, parent_docs):
+ if not isinstance(parent_docs[-1], RoutineDoc):
+ return False
+ # make sure that lhs_pieces is <self>.<name>, where <self> is
+ # the name of the first arg to the containing routinedoc, and
+ # <name> is a simple name.
+ posargs = parent_docs[-1].posargs
+ if posargs is UNKNOWN: return False
+ if not (len(lhs_pieces)==1 and len(posargs) > 0 and
+ len(lhs_pieces[0]) == 3 and
+ lhs_pieces[0][0] == (token.NAME, posargs[0]) and
+ lhs_pieces[0][1] == (token.OP, '.') and
+ lhs_pieces[0][2][0] == token.NAME):
+ return False
+ # Make sure we're in an instance method, and not a
+ # module-level function.
+ for i in range(len(parent_docs)-1, -1, -1):
+ if isinstance(parent_docs[i], ClassDoc):
+ return True
+ elif parent_docs[i] != parent_docs[-1]:
+ return False
+ return False
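+
+# Note: lhs_is_instvar matches single-piece left-hand sides of the form
+# <self>.<attr>, i.e. token trees like:
+#     [(token.NAME, 'self'), (token.OP, '.'), (token.NAME, 'x')]
+# where 'self' is the first positional argument of the enclosing method.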
+
+def rhs_to_valuedoc(rhs, parent_docs):
+ # Dotted variable:
+ try:
+ rhs_name = parse_dotted_name(rhs)
+ rhs_val = lookup_value(rhs_name, parent_docs)
+ if rhs_val is not None and rhs_val is not UNKNOWN:
+ return rhs_val, True
+ except ParseError:
+ pass
+
+ # Decorators:
+ if (len(rhs)==2 and rhs[0][0] == token.NAME and
+ isinstance(rhs[1], list)):
+ arg_val, _ = rhs_to_valuedoc(rhs[1][1:-1], parent_docs)
+ if isinstance(arg_val, RoutineDoc):
+ doc = apply_decorator(DottedName(rhs[0][1]), arg_val)
+ doc.canonical_name = UNKNOWN
+ doc.parse_repr = pp_toktree(rhs)
+ return doc, False
+
+ # Nothing else to do: make a val with the source as its repr.
+ return GenericValueDoc(parse_repr=pp_toktree(rhs), toktree=rhs,
+ defining_module=parent_docs[0],
+ docs_extracted_by='parser'), False
+
+def get_lhs_parent(lhs_name, parent_docs):
+ assert isinstance(lhs_name, DottedName)
+
+ # For instance vars inside an __init__ method:
+ if isinstance(parent_docs[-1], RoutineDoc):
+ for i in range(len(parent_docs)-1, -1, -1):
+ if isinstance(parent_docs[i], ClassDoc):
+ return parent_docs[i]
+ else:
+ raise ValueError("%r is not a namespace or method" %
+ parent_docs[-1])
+
+ # For local variables:
+ if len(lhs_name) == 1:
+ return parent_docs[-1]
+
+ # For non-local variables:
+ return lookup_value(lhs_name.container(), parent_docs)
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: single-line blocks
+#/////////////////////////////////////////////////////////////////
+
+def process_one_line_block(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ """
+ The line handler for single-line blocks, such as:
+
+ >>> def f(x): return x*2
+
+ This handler calls L{process_line} twice: once for the tokens
+ up to and including the colon, and once for the remaining
+ tokens. The comment docstring is applied to the first line
+ only.
+ @return: The C{APIDoc} generated from the first sub-line (the
+ tokens up to and including the colon).
+ """
+ i = line.index((token.OP, ':'))
+ doc1 = process_line(line[:i+1], parent_docs, prev_line_doc,
+ lineno, comments, decorators, encoding)
+ doc2 = process_line(line[i+1:], parent_docs+[doc1],
+ doc1, lineno, None, [], encoding)
+ return doc1
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: semicolon-separated statements
+#/////////////////////////////////////////////////////////////////
+
+def process_multi_stmt(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ """
+ The line handler for semicolon-separated statements, such as:
+
+ >>> x=1; y=2; z=3
+
+ This handler calls L{process_line} once for each statement.
+ The comment docstring is not passed on to any of the
+ sub-statements.
+ @return: C{None}
+ """
+ for statement in split_on(line, (token.OP, ';')):
+ if not statement: continue
+ doc = process_line(statement, parent_docs, prev_line_doc,
+ lineno, None, decorators, encoding)
+ prev_line_doc = doc
+ decorators = []
+ return None
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: delete statements
+#/////////////////////////////////////////////////////////////////
+
+def process_del(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ """
+ The line handler for delete statements, such as:
+
+ >>> del x, y.z
+
+ This handler calls L{del_variable} for each dotted variable in
+ the variable list. The variable list may be nested. Complex
+ expressions in the variable list (such as C{x[3]}) are ignored.
+ @return: C{None}
+ """
+ # If we're not in a namespace, then ignore it.
+ parent_doc = parent_docs[-1]
+ if not isinstance(parent_doc, NamespaceDoc): return
+
+ var_list = split_on(line[1:], (token.OP, ','))
+ for var_name in dotted_names_in(var_list):
+ del_variable(parent_docs[-1], var_name)
+
+ return None
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: docstrings
+#/////////////////////////////////////////////////////////////////
+
+def process_docstring(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ """
+ The line handler for bare string literals. If
+ C{prev_line_doc} is not C{None}, then the string literal is
+ added to that C{APIDoc} as a docstring. If it already has a
+ docstring (from comment docstrings), then the new docstring
+ will be appended to the old one.
+ """
+ if prev_line_doc is None: return
+ docstring = parse_string(line)
+
+ # If the docstring is a str, then convert it to unicode.
+ # According to a strict reading of PEP 263, this might not be the
+ # right thing to do; but it will almost always be what the
+ # module's author intended.
+ if isinstance(docstring, str):
+ try:
+ docstring = docstring.decode(encoding)
+ except UnicodeDecodeError:
+ # If decoding failed, then fall back on using
+ # decode_with_backslashreplace, which will map e.g.
+ # "\xe9" -> u"\\xe9".
+ docstring = decode_with_backslashreplace(docstring)
+ log.warning("While parsing %s: docstring is not a unicode "
+ "string, but it contains non-ascii data." %
+ prev_line_doc.canonical_name)
+
+ # If the modified APIDoc is an instance variable, and it has
+ # not yet been added to its class's C{variables} list,
+ # then add it now. This is done here, rather than in the
+ # process_assignment() call that created the variable, because
+ # we only want to add instance variables if they have an
+ # associated docstring. (For more info, see the comment above
+ # the set_variable() call in process_assignment().)
+ added_instvar = False
+ if (isinstance(prev_line_doc, VariableDoc) and
+ prev_line_doc.is_instvar and
+ prev_line_doc.docstring in (None, UNKNOWN)):
+ for i in range(len(parent_docs)-1, -1, -1):
+ if isinstance(parent_docs[i], ClassDoc):
+ set_variable(parent_docs[i], prev_line_doc, True)
+ added_instvar = True
+ break
+
+ if prev_line_doc.docstring not in (None, UNKNOWN):
+ log.warning("%s has both a comment-docstring and a normal "
+ "(string) docstring; ignoring the comment-"
+ "docstring." % prev_line_doc.canonical_name)
+
+ prev_line_doc.docstring = docstring
+ prev_line_doc.docstring_lineno = lineno
+
+ # If the modified APIDoc is an instance variable, and we added it
+ # to the class's variables list here, then it still needs to be
+ # grouped too; so return it for use as the new "prev_line_doc."
+ if added_instvar:
+ return prev_line_doc
+
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: function declarations
+#/////////////////////////////////////////////////////////////////
+
+def process_funcdef(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ """
+ The line handler for function declaration lines, such as:
+
+ >>> def f(a, b=22, (c,d)):
+
+ This handler creates and initializes a new C{VariableDoc}
+ containing a C{RoutineDoc}, adds the C{VariableDoc} to the
+ containing namespace, and returns the C{RoutineDoc}.
+ """
+ # Check syntax.
+ if len(line) != 4 or line[3] != (token.OP, ':'):
+ raise ParseError("Bad function definition line")
+
+ # If we're not in a namespace, then ignore it.
+ parent_doc = parent_docs[-1]
+ if not isinstance(parent_doc, NamespaceDoc): return
+
+ # Get the function's name
+ func_name = parse_name(line[1])
+ canonical_name = DottedName(parent_doc.canonical_name, func_name)
+
+ # Create the function's RoutineDoc.
+ func_doc = RoutineDoc(canonical_name=canonical_name,
+ defining_module=parent_docs[0],
+ lineno=lineno, docs_extracted_by='parser')
+
+ # Process the signature.
+ init_arglist(func_doc, line[2])
+
+ # If the preceding comment includes a docstring, then add it.
+ add_docstring_from_comments(func_doc, comments)
+
+ # Apply any decorators.
+ func_doc.decorators = [pp_toktree(deco[1:]) for deco in decorators]
+ decorators.reverse()
+ for decorator in decorators:
+ try:
+ deco_name = parse_dotted_name(decorator[1:])
+ except ParseError:
+ deco_name = None
+ if func_doc.canonical_name is not UNKNOWN:
+ deco_repr = '%s(%s)' % (pp_toktree(decorator[1:]),
+ func_doc.canonical_name)
+ elif func_doc.parse_repr not in (None, UNKNOWN):
+ # [xx] This case should be improved: when will func_doc
+ # ever have a known parse_repr?
+ deco_repr = '%s(%s)' % (pp_toktree(decorator[1:]),
+ func_doc.parse_repr)
+ else:
+ deco_repr = UNKNOWN
+ func_doc = apply_decorator(deco_name, func_doc)
+ func_doc.parse_repr = deco_repr
+ # [XX] Is there a reason the following should be done? It
+ # causes the grouping code to break. Presumably the canonical
+ # name should remain valid if we're just applying a standard
+ # decorator.
+ #func_doc.canonical_name = UNKNOWN
+
+ # Add a variable to the containing namespace.
+ var_doc = VariableDoc(name=func_name, value=func_doc,
+ is_imported=False, is_alias=False,
+ docs_extracted_by='parser')
+ set_variable(parent_doc, var_doc)
+
+ # Return the new ValueDoc.
+ return func_doc
+
+def apply_decorator(decorator_name, func_doc):
+ # [xx] what if func_doc is not a RoutineDoc?
+ if decorator_name == DottedName('staticmethod'):
+ return StaticMethodDoc(**func_doc.__dict__)
+ elif decorator_name == DottedName('classmethod'):
+ return ClassMethodDoc(**func_doc.__dict__)
+ elif DEFAULT_DECORATOR_BEHAVIOR == 'transparent':
+ return func_doc.__class__(**func_doc.__dict__) # make a copy.
+ elif DEFAULT_DECORATOR_BEHAVIOR == 'opaque':
+ return GenericValueDoc(docs_extracted_by='parser')
+ else:
+ raise ValueError, 'Bad value for DEFAULT_DECORATOR_BEHAVIOR'
+
+def init_arglist(func_doc, arglist):
+ if not isinstance(arglist, list) or arglist[0] != (token.OP, '('):
+ raise ParseError("Bad argument list")
+
+ # Initialize to defaults.
+ func_doc.posargs = []
+ func_doc.posarg_defaults = []
+ func_doc.vararg = None
+ func_doc.kwarg = None
+
+ # Divide the arglist into individual args.
+ args = split_on(arglist[1:-1], (token.OP, ','))
+
+ # Keyword argument.
+ if args and args[-1][0] == (token.OP, '**'):
+ if len(args[-1]) != 2 or args[-1][1][0] != token.NAME:
+ raise ParseError("Expected name after ** in argument list")
+ func_doc.kwarg = args[-1][1][1]
+ args.pop()
+
+ # Vararg argument.
+ if args and args[-1][0] == (token.OP, '*'):
+ if len(args[-1]) != 2 or args[-1][1][0] != token.NAME:
+ raise ParseError("Expected name after * in argument list")
+ func_doc.vararg = args[-1][1][1]
+ args.pop()
+
+ # Positional arguments.
+ for arg in args:
+ func_doc.posargs.append(parse_funcdef_arg(arg[0]))
+ if len(arg) == 1:
+ func_doc.posarg_defaults.append(None)
+ elif arg[1] != (token.OP, '=') or len(arg) == 2:
+ raise ParseError("Bad argument list")
+ else:
+ default_repr = pp_toktree(arg[2:], 'tight')
+ default_val = GenericValueDoc(parse_repr=default_repr,
+ docs_extracted_by='parser')
+ func_doc.posarg_defaults.append(default_val)
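+
+# For example, the arglist token tree for "def f(a, b=22, *c, **d):"
+# yields posargs=['a', 'b'], vararg='c', kwarg='d', and
+# posarg_defaults=[None, <GenericValueDoc for '22'>].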
+
+#/////////////////////////////////////////////////////////////////
+# Line handler: class declarations
+#/////////////////////////////////////////////////////////////////
+
+def process_classdef(line, parent_docs, prev_line_doc, lineno,
+ comments, decorators, encoding):
+ """
+ The line handler for class declaration lines, such as:
+
+ >>> class Foo(Bar, Baz):
+
+ This handler creates and initializes a new C{VariableDoc}
+ containing a C{ClassDoc}, adds the C{VariableDoc} to the
+ containing namespace, and returns the C{ClassDoc}.
+ """
+ # Check syntax
+ if len(line)<3 or len(line)>4 or line[-1] != (token.OP, ':'):
+ raise ParseError("Bad class definition line")
+
+ # If we're not in a namespace, then ignore it.
+ parent_doc = parent_docs[-1]
+ if not isinstance(parent_doc, NamespaceDoc): return
+
+ # Get the class's name
+ class_name = parse_name(line[1])
+ canonical_name = DottedName(parent_doc.canonical_name, class_name)
+
+ # Create the class's ClassDoc & VariableDoc.
+ class_doc = ClassDoc(variables={}, sort_spec=[],
+ bases=[], subclasses=[],
+ canonical_name=canonical_name,
+ defining_module=parent_docs[0],
+ docs_extracted_by='parser')
+ var_doc = VariableDoc(name=class_name, value=class_doc,
+ is_imported=False, is_alias=False,
+ docs_extracted_by='parser')
+
+ # Add the bases.
+ if len(line) == 4:
+ if (not isinstance(line[2], list) or
+ line[2][0] != (token.OP, '(')):
+ raise ParseError("Expected base list")
+ try:
+ for base_name in parse_classdef_bases(line[2]):
+ class_doc.bases.append(find_base(base_name, parent_docs))
+ except ParseError, e:
+ log.warning("Unable to extract the base list for %s: %s" %
+ (canonical_name, e))
+ class_doc.bases = UNKNOWN
+ else:
+ class_doc.bases = []
+
+ # Register ourselves as a subclass to our bases.
+ if class_doc.bases is not UNKNOWN:
+ for basedoc in class_doc.bases:
+ if isinstance(basedoc, ClassDoc):
+ # This check avoids listing a subclass twice when both
+ # introspection and parsing are performed.
+ # [XXX] This check only works because currently parsing is
+ # always performed just after introspection of the same
+ # class. A more complete fix should be independent of
+ # calling order; probably the subclasses list should be
+ # replaced by a ClassDoc set or a {name: ClassDoc} mapping.
+ if (basedoc.subclasses
+ and basedoc.subclasses[-1].canonical_name
+ != class_doc.canonical_name):
+ basedoc.subclasses.append(class_doc)
+
+ # If the preceding comment includes a docstring, then add it.
+ add_docstring_from_comments(class_doc, comments)
+
+ # Add the VariableDoc to our container.
+ set_variable(parent_doc, var_doc)
+
+ return class_doc
+
+def _proxy_base(**attribs):
+ return ClassDoc(variables={}, sort_spec=[], bases=[], subclasses=[],
+ docs_extracted_by='parser', **attribs)
+
+def find_base(name, parent_docs):
+ assert isinstance(name, DottedName)
+
+ # Find the variable containing the base.
+ base_var = lookup_variable(name, parent_docs)
+ if base_var is None:
+ # If we didn't find it, then it must have been imported.
+ # First, check if it looks like it's contained in any
+ # known imported variable:
+ if len(name) > 1:
+ src = lookup_name(name[0], parent_docs)
+ if (src is not None and
+ src.imported_from not in (None, UNKNOWN)):
+ base_src = DottedName(src.imported_from, name[1:])
+ base_var = VariableDoc(name=name[-1], is_imported=True,
+ is_alias=False, imported_from=base_src,
+ docs_extracted_by='parser')
+ # Otherwise, it must have come from an "import *" statement
+ # (or from magic, such as direct manipulation of the module's
+ # dictionary), so we don't know where it came from. So
+ # there's nothing left but to use an empty proxy.
+ if base_var is None:
+ return _proxy_base(parse_repr=str(name))
+ #raise ParseError("Could not find %s" % name)
+
+ # If the variable has a value, return that value.
+ if base_var.value is not UNKNOWN:
+ return base_var.value
+
+ # Otherwise, if BASE_HANDLING is 'parse', try parsing the docs for
+ # the base class; if that fails, or if BASE_HANDLING is 'link',
+ # just make a proxy object.
+ if base_var.imported_from not in (None, UNKNOWN):
+ if BASE_HANDLING == 'parse':
+ old_sys_path = sys.path
+ try:
+ dirname = os.path.split(parent_docs[0].filename)[0]
+ sys.path = [dirname] + sys.path
+ try:
+ return parse_docs(name=str(base_var.imported_from))
+ except ParseError:
+ log.info('Unable to parse base', base_var.imported_from)
+ except ImportError:
+ log.info('Unable to find base', base_var.imported_from)
+ finally:
+ sys.path = old_sys_path
+
+ # Either BASE_HANDLING='link' or parsing the base class failed;
+ # return a proxy value for the base class.
+ return _proxy_base(proxy_for=base_var.imported_from)
+ else:
+ return _proxy_base(parse_repr=str(name))
+
+#/////////////////////////////////////////////////////////////////
+#{ Parsing
+#/////////////////////////////////////////////////////////////////
+
+def dotted_names_in(elt_list):
+ """
+ Return a list of all simple dotted names in the given
+ expression.
+ """
+ names = []
+ while elt_list:
+ elt = elt_list.pop()
+ if len(elt) == 1 and isinstance(elt[0], list):
+ # Nested list: process the contents
+ elt_list.extend(split_on(elt[0][1:-1], (token.OP, ',')))
+ else:
+ try:
+ names.append(parse_dotted_name(elt))
+ except ParseError:
+ pass # complex expression -- ignore
+ return names
+
+def parse_name(elt, strip_parens=False):
+ """
+ If the given token tree element is a name token, then return
+ that name as a string. Otherwise, raise ParseError.
+ @param strip_parens: If true, then if elt is a single name
+ enclosed in parentheses, then return that name.
+ """
+ if strip_parens and isinstance(elt, list):
+ while (isinstance(elt, list) and len(elt) == 3 and
+ elt[0] == (token.OP, '(') and
+ elt[-1] == (token.OP, ')')):
+ elt = elt[1]
+ if isinstance(elt, list) or elt[0] != token.NAME:
+ raise ParseError("Bad name")
+ return elt[1]
+
+def parse_dotted_name(elt_list, strip_parens=True, parent_name=None):
+ """
+ @param parent_name: canonical name of referring module, to resolve
+ relative imports.
+ @type parent_name: L{DottedName}
+ @bug: does not handle 'x.(y).z'
+ """
+ if len(elt_list) == 0: raise ParseError("Bad dotted name")
+
+ # Handle ((x.y).z). (If the contents of the parens include
+ # anything other than dotted names, such as (x,y), then we'll
+ # catch it below and raise a ParseError.)
+ while (isinstance(elt_list[0], list) and
+ len(elt_list[0]) >= 3 and
+ elt_list[0][0] == (token.OP, '(') and
+ elt_list[0][-1] == (token.OP, ')')):
+ elt_list[:1] = elt_list[0][1:-1]
+
+ # Convert a relative import into an absolute name.
+ prefix_name = None
+ if parent_name is not None and elt_list[0][-1] == '.':
+ items = 1
+ while len(elt_list) > items and elt_list[items][-1] == '.':
+ items += 1
+
+ elt_list = elt_list[items:]
+ prefix_name = parent_name[:-items]
+
+ # >>> from . import foo
+ if not elt_list:
+ if prefix_name == []:
+ raise ParseError("Attempted relative import in non-package, "
+ "or beyond toplevel package")
+ return prefix_name
+
+ if len(elt_list) % 2 != 1: raise ParseError("Bad dotted name")
+ name = DottedName(parse_name(elt_list[0], True))
+ if prefix_name is not None:
+ name = prefix_name + name
+
+ for i in range(2, len(elt_list), 2):
+ dot, identifier = elt_list[i-1], elt_list[i]
+ if dot != (token.OP, '.'):
+ raise ParseError("Bad dotted name")
+ name = DottedName(name, parse_name(identifier, True))
+ return name
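+
+# For example, the token tree for "x.y" --
+#     [(token.NAME, 'x'), (token.OP, '.'), (token.NAME, 'y')]
+# -- parses to DottedName('x', 'y').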
+
+def split_on(elt_list, split_tok):
+ # [xx] add code to guarantee each elt is non-empty.
+ result = [[]]
+ for elt in elt_list:
+ if elt == split_tok:
+ if result[-1] == []: raise ParseError("Empty element from split")
+ result.append([])
+ else:
+ result[-1].append(elt)
+ if result[-1] == []: result.pop()
+ return result
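+
+# For example, splitting the token list for "a, b" on commas:
+#     split_on([(token.NAME,'a'), (token.OP,','), (token.NAME,'b')],
+#              (token.OP, ','))
+# returns [[(token.NAME,'a')], [(token.NAME,'b')]].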
+
+def parse_funcdef_arg(elt):
+ """
+ If the given tree token element contains a valid function
+ definition argument (i.e., an identifier token or nested list
+ of identifiers), then return a corresponding string identifier
+ or nested list of string identifiers. Otherwise, raise a
+ ParseError.
+ """
+ if isinstance(elt, list):
+ if elt[0] == (token.OP, '('):
+ if len(elt) == 3:
+ return parse_funcdef_arg(elt[1])
+ else:
+ return [parse_funcdef_arg(e)
+ for e in elt[1:-1]
+ if e != (token.OP, ',')]
+ else:
+ raise ParseError("Bad argument -- expected name or tuple")
+ elif elt[0] == token.NAME:
+ return elt[1]
+ else:
+ raise ParseError("Bad argument -- expected name or tuple")
+
+def parse_classdef_bases(elt):
+ """
+ If the given tree token element contains a valid base list
+ (that contains only dotted names), then return a corresponding
+ list of L{DottedName}s. Otherwise, raise a ParseError.
+
+ @bug: Does not handle either of::
+ - class A( (base.in.parens) ): pass
+ - class B( (lambda:calculated.base)() ): pass
+ """
+ if (not isinstance(elt, list) or
+ elt[0] != (token.OP, '(')):
+ raise ParseError("Bad base list")
+
+ return [parse_dotted_name(n)
+ for n in split_on(elt[1:-1], (token.OP, ','))]
+
+# Used by: base list; 'del'; ...
+def parse_dotted_name_list(elt_list):
+ """
+ If the given list of tree token elements contains a
+ comma-separated list of dotted names, then return a
+ corresponding list of L{DottedName} objects. Otherwise, raise
+ ParseError.
+ """
+ names = []
+
+ state = 0
+ for elt in elt_list:
+ # State 0 -- Expecting a name, or end of arglist
+ if state == 0:
+ # Make sure it's a name
+ if isinstance(elt, tuple) and elt[0] == token.NAME:
+ names.append(DottedName(elt[1]))
+ state = 1
+ else:
+ raise ParseError("Expected a name")
+ # State 1 -- Expecting comma, period, or end of arglist
+ elif state == 1:
+ if elt == (token.OP, '.'):
+ state = 2
+ elif elt == (token.OP, ','):
+ state = 0
+ else:
+ raise ParseError("Expected '.' or ',' or end of list")
+ # State 2 -- Continuation of dotted name.
+ elif state == 2:
+ if isinstance(elt, tuple) and elt[0] == token.NAME:
+ names[-1] = DottedName(names[-1], elt[1])
+ state = 1
+ else:
+ raise ParseError("Expected a name")
+ if state == 2:
+ raise ParseError("Expected a name")
+ return names
+
+def parse_string(elt_list):
+ if len(elt_list) == 1 and elt_list[0][0] == token.STRING:
+ # [xx] use something safer here? But it needs to deal with
+ # any string type (eg r"foo\bar" etc).
+ return eval(elt_list[0][1])
+ else:
+ raise ParseError("Expected a string")
+
+# Parse a comma-separated list of string literals, e.g. ['a', 'b', 'c'].
+def parse_string_list(elt_list):
+ if (len(elt_list) == 1 and isinstance(elt_list[0], list) and
+ elt_list[0][0][1] in ('(', '[')):
+ elt_list = elt_list[0][1:-1]
+
+ string_list = []
+ for string_elt in split_on(elt_list, (token.OP, ',')):
+ string_list.append(parse_string(string_elt))
+
+ return string_list
+
+#/////////////////////////////////////////////////////////////////
+#{ Variable Manipulation
+#/////////////////////////////////////////////////////////////////
+
+def set_variable(namespace, var_doc, preserve_docstring=False):
+ """
+ Add var_doc to namespace. If namespace already contains a
+ variable with the same name, then discard the old variable. If
+ C{preserve_docstring} is true, then keep the old variable's
+ docstring when overwriting a variable.
+ """
+ # Choose which dictionary we'll be storing the variable in.
+ if not isinstance(namespace, NamespaceDoc):
+ return
+
+ # This happens when the class definition has not been parsed, e.g. in
+ # sf bug #1693253 on ``Exception.x = y``
+ if namespace.sort_spec is UNKNOWN:
+ namespace.sort_spec = namespace.variables.keys()
+
+ # If we already have a variable with this name, then remove the
+ # old VariableDoc from the sort_spec list; and if we gave its
+ # value a canonical name, then delete it.
+ if var_doc.name in namespace.variables:
+ namespace.sort_spec.remove(var_doc.name)
+ old_var_doc = namespace.variables[var_doc.name]
+ if (old_var_doc.is_alias == False and
+ old_var_doc.value is not UNKNOWN):
+ old_var_doc.value.canonical_name = UNKNOWN
+ if (preserve_docstring and var_doc.docstring in (None, UNKNOWN) and
+ old_var_doc.docstring not in (None, UNKNOWN)):
+ var_doc.docstring = old_var_doc.docstring
+ var_doc.docstring_lineno = old_var_doc.docstring_lineno
+ # Add the variable to the namespace.
+ namespace.variables[var_doc.name] = var_doc
+ namespace.sort_spec.append(var_doc.name)
+ assert var_doc.container is UNKNOWN
+ var_doc.container = namespace
+
+def del_variable(namespace, name):
+ if not isinstance(namespace, NamespaceDoc):
+ return
+
+ if name[0] in namespace.variables:
+ if len(name) == 1:
+ var_doc = namespace.variables[name[0]]
+ namespace.sort_spec.remove(name[0])
+ del namespace.variables[name[0]]
+ if not var_doc.is_alias and var_doc.value is not UNKNOWN:
+ var_doc.value.canonical_name = UNKNOWN
+ else:
+ del_variable(namespace.variables[name[0]].value, name[1:])
+
+#/////////////////////////////////////////////////////////////////
+#{ Name Lookup
+#/////////////////////////////////////////////////////////////////
+
+def lookup_name(identifier, parent_docs):
+ """
+ Find and return the documentation for the variable named by
+ the given identifier.
+
+ @rtype: L{VariableDoc} or C{None}
+ """
+ # We need to check 3 namespaces: locals, globals, and builtins.
+ # Note that this is true even if we're in a version of python with
+ # nested scopes, because nested scope lookup does not apply to
+ # nested class definitions, and we're not worried about variables
+ # in nested functions.
+ if not isinstance(identifier, basestring):
+ raise TypeError('identifier must be a string')
+
+ # Locals
+ if isinstance(parent_docs[-1], NamespaceDoc):
+ if identifier in parent_docs[-1].variables:
+ return parent_docs[-1].variables[identifier]
+
+ # Globals (aka the containing module)
+ if isinstance(parent_docs[0], NamespaceDoc):
+ if identifier in parent_docs[0].variables:
+ return parent_docs[0].variables[identifier]
+
+ # Builtins
+ builtins = epydoc.docintrospecter.introspect_docs(__builtin__)
+ if isinstance(builtins, NamespaceDoc):
+ if identifier in builtins.variables:
+ return builtins.variables[identifier]
+
+ # We didn't find it; return None.
+ return None
+
+def lookup_variable(dotted_name, parent_docs):
+ assert isinstance(dotted_name, DottedName)
+ # If it's a simple identifier, use lookup_name.
+ if len(dotted_name) == 1:
+ return lookup_name(dotted_name[0], parent_docs)
+
+ # If it's a dotted name with multiple pieces, look up the
+ # namespace containing the var (=parent) first; and then
+ # look for the var in that namespace.
+ else:
+ parent = lookup_value(dotted_name[:-1], parent_docs)
+ if (isinstance(parent, NamespaceDoc) and
+ dotted_name[-1] in parent.variables):
+ return parent.variables[dotted_name[-1]]
+ else:
+ return None # var not found.
+
+def lookup_value(dotted_name, parent_docs):
+ """
+ Find and return the documentation for the value contained in
+ the variable with the given name in the current namespace.
+ """
+ assert isinstance(dotted_name, DottedName)
+ var_doc = lookup_name(dotted_name[0], parent_docs)
+
+ for i in range(1, len(dotted_name)):
+ if var_doc is None: return None
+
+ if isinstance(var_doc.value, NamespaceDoc):
+ var_dict = var_doc.value.variables
+ elif (var_doc.value is UNKNOWN and
+ var_doc.imported_from not in (None, UNKNOWN)):
+ src_name = var_doc.imported_from + dotted_name[i:]
+ # [xx] do I want to create a proxy here??
+ return GenericValueDoc(proxy_for=src_name,
+ parse_repr=str(dotted_name),
+ docs_extracted_by='parser')
+ else:
+ return None
+
+ var_doc = var_dict.get(dotted_name[i])
+
+ if var_doc is None: return None
+ return var_doc.value
+
+#/////////////////////////////////////////////////////////////////
+#{ Docstring Comments
+#/////////////////////////////////////////////////////////////////
+
+def add_docstring_from_comments(api_doc, comments):
+ if api_doc is None or not comments: return
+ api_doc.docstring = '\n'.join([line for (line, lineno) in comments])
+ api_doc.docstring_lineno = comments[0][1]
+
+#/////////////////////////////////////////////////////////////////
+#{ Tree tokens
+#/////////////////////////////////////////////////////////////////
+
+def _join_toktree(s1, s2, spacing='normal'):
+ # Join them. s1 = left side; s2 = right side.
+ if (s2=='' or s1=='' or
+ s1 in ('-','`') or s2 in ('}',']',')','`',':') or
+ s2[0] in ('.',',') or s1[-1] in ('(','[','{','.','\n',' ') or
+ (s2[0] == '(' and s1[-1] not in (',','='))):
+ return '%s%s' % (s1,s2)
+ elif (spacing=='tight' and
+ s1[-1] in '+-*/=,' or s2[0] in '+-*/=,'):
+ return '%s%s' % (s1, s2)
+ else:
+ return '%s %s' % (s1, s2)
+
+def _pp_toktree_add_piece(spacing, pieces, piece):
+ s1 = pieces[-1]
+ s2 = piece
+
+ if (s2=='' or s1=='' or
+ s1 in ('-','`') or s2 in ('}',']',')','`',':') or
+ s2[0] in ('.',',') or s1[-1] in ('(','[','{','.','\n',' ') or
+ (s2[0] == '(' and s1[-1] not in (',','='))):
+ pass
+ elif (spacing=='tight' and
+ s1[-1] in '+-*/=,' or s2[0] in '+-*/=,'):
+ pass
+ else:
+ pieces.append(' ')
+
+ pieces.append(piece)
+
+def pp_toktree(elts, spacing='normal', indent=0):
+ pieces = ['']
+ _pp_toktree(elts, spacing, indent, pieces)
+ return ''.join(pieces)
+
+def _pp_toktree(elts, spacing, indent, pieces):
+ add_piece = _pp_toktree_add_piece
+
+ for elt in elts:
+ # Put a blank line before class & def statements.
+ if elt == (token.NAME, 'class') or elt == (token.NAME, 'def'):
+ add_piece(spacing, pieces, '\n%s' % (' '*indent))
+
+ if isinstance(elt, tuple):
+ if elt[0] == token.NEWLINE:
+ add_piece(spacing, pieces, ' '+elt[1])
+ add_piece(spacing, pieces, '\n%s' % (' '*indent))
+ elif elt[0] == token.INDENT:
+ add_piece(spacing, pieces, ' ')
+ indent += 1
+ elif elt[0] == token.DEDENT:
+ assert pieces[-1] == ' '
+ pieces.pop()
+ indent -= 1
+ elif elt[0] == tokenize.COMMENT:
+ add_piece(spacing, pieces, elt[1].rstrip() + '\n')
+ add_piece(spacing, pieces, ' '*indent)
+ else:
+ add_piece(spacing, pieces, elt[1])
+ else:
+ _pp_toktree(elt, spacing, indent, pieces)
+
+#/////////////////////////////////////////////////////////////////
+#{ Helper Functions
+#/////////////////////////////////////////////////////////////////
+
+def get_module_encoding(filename):
+ """
+ @see: U{PEP 263<http://www.python.org/peps/pep-0263.html>}
+ """
+ module_file = open(filename, 'rU')
+ try:
+ lines = [module_file.readline() for i in range(2)]
+ if lines[0].startswith('\xef\xbb\xbf'):
+ return 'utf-8'
+ else:
+ for line in lines:
+ m = re.search(r"coding[:=]\s*([-\w.]+)", line)
+ if m: return m.group(1)
+
+ # Fall back on Python's default encoding.
+ return 'iso-8859-1' # aka 'latin-1'
+ finally:
+ module_file.close()
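The lookup above follows PEP 263's source-encoding rules: a UTF-8 byte-order mark wins outright; otherwise the first two lines are scanned for a `coding` declaration before falling back to Latin-1. A minimal standalone sketch of the same decision (the function name is illustrative, not part of epydoc):

```python
import re

def sniff_encoding(first_two_lines):
    # A UTF-8 byte-order mark takes precedence over any declaration.
    if first_two_lines and first_two_lines[0].startswith('\xef\xbb\xbf'):
        return 'utf-8'
    # Otherwise look for a PEP 263 "coding" declaration in lines 1-2.
    for line in first_two_lines[:2]:
        m = re.search(r"coding[:=]\s*([-\w.]+)", line)
        if m:
            return m.group(1)
    # Fall back on Python 2's default source encoding.
    return 'iso-8859-1'

print(sniff_encoding(["# -*- coding: utf-8 -*-\n", "\n"]))  # utf-8
```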
+
+def _get_module_name(filename, package_doc):
+ """
+ Return (dotted_name, is_package)
+ """
+ name = re.sub(r'\.py\w?$', '', os.path.split(filename)[1])
+ if name == '__init__':
+ is_package = True
+ name = os.path.split(os.path.split(filename)[0])[1]
+ else:
+ is_package = False
+
+ # [XX] if the module contains a script, then `name` may not
+ # necessarily be a valid identifier -- which will cause
+ # DottedName to raise an exception. Is that what I want?
+ if package_doc is None:
+ dotted_name = DottedName(name)
+ else:
+ dotted_name = DottedName(package_doc.canonical_name, name)
+
+ # Check if the module looks like it's shadowed by a variable.
+ # If so, then add a "'" to the end of its canonical name, to
+ # distinguish it from the variable.
+ if package_doc is not None and name in package_doc.variables:
+ vardoc = package_doc.variables[name]
+ if (vardoc.value not in (None, UNKNOWN) and
+ vardoc.imported_from != dotted_name):
+ log.warning("Module %s might be shadowed by a variable with "
+ "the same name." % dotted_name)
+ dotted_name = DottedName(str(dotted_name)+"'")
+
+ return dotted_name, is_package
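The name derivation above can be exercised in isolation: strip the `.py`/`.pyw` suffix from the basename, and for an `__init__` file use the enclosing package directory's name instead. A hedged, self-contained sketch (the helper name is invented for illustration; note the escaped dot in the suffix regex):

```python
import os, re

def module_name_parts(filename):
    # Strip a trailing .py / .pyw-style suffix from the basename.
    name = re.sub(r'\.py\w?$', '', os.path.split(filename)[1])
    if name == '__init__':
        # An __init__ module is named after its package directory.
        return os.path.split(os.path.split(filename)[0])[1], True
    return name, False

print(module_name_parts('/src/pkg/__init__.py'))  # ('pkg', True)
```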
+
+def flatten(lst, out=None):
+ """
+ @return: a flat list containing the leaves of the given nested
+ list.
+ @param lst: The nested list that should be flattened.
+ """
+ if out is None: out = []
+ for elt in lst:
+ if isinstance(elt, (list, tuple)):
+ flatten(elt, out)
+ else:
+ out.append(elt)
+ return out
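`flatten` recurses into nested lists and tuples and collects the leaves in left-to-right order; reproduced here as a runnable check:

```python
def flatten(lst, out=None):
    # Accumulate leaves into `out`, recursing through lists and tuples.
    if out is None:
        out = []
    for elt in lst:
        if isinstance(elt, (list, tuple)):
            flatten(elt, out)
        else:
            out.append(elt)
    return out

print(flatten([1, [2, (3, [4])], 5]))  # [1, 2, 3, 4, 5]
```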
+
diff --git a/python/helpers/epydoc/docstringparser.py b/python/helpers/epydoc/docstringparser.py
new file mode 100644
index 0000000..b609bc9
--- /dev/null
+++ b/python/helpers/epydoc/docstringparser.py
@@ -0,0 +1,1111 @@
+# epydoc -- Docstring processing
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: docstringparser.py 1689 2008-01-30 17:01:02Z edloper $
+
+"""
+Parse docstrings and handle any fields they define, such as C{@type}
+and C{@author}. Fields are used to describe specific information
+about an object. There are two classes of fields: X{simple fields}
+and X{special fields}.
+
+Simple fields are fields that get stored directly in an C{APIDoc}'s
+metadata dictionary, without any special processing. The set of
+simple fields is defined by the list L{STANDARD_FIELDS}, whose
+elements are L{DocstringField}s.
+
+Special fields are fields that perform some sort of processing on the
+C{APIDoc}, or add information to attributes other than the metadata
+dictionary. Special fields are handled by field handler
+functions, which are registered using L{register_field_handler}.
+"""
+__docformat__ = 'epytext en'
+
+
+######################################################################
+## Imports
+######################################################################
+
+import re, sys
+from epydoc import markup
+from epydoc.markup import epytext
+from epydoc.apidoc import *
+from epydoc.docintrospecter import introspect_docstring_lineno
+from epydoc.util import py_src_filename
+from epydoc import log
+import epydoc.docparser
+import __builtin__, exceptions
+
+######################################################################
+# Docstring Fields
+######################################################################
+
+class DocstringField:
+ """
+ A simple docstring field, which can be used to describe specific
+ information about an object, such as its author or its version.
+ Simple docstring fields are fields that take no arguments, and
+ are displayed as simple sections.
+
+ @ivar tags: The set of tags that can be used to identify this
+ field.
+ @ivar singular: The label that should be used to identify this
+ field in the output, if the field contains one value.
+ @ivar plural: The label that should be used to identify this
+ field in the output, if the field contains multiple values.
+ @ivar short: If true, then multiple values should be combined
+ into a single comma-delimited list. If false, then
+ multiple values should be listed separately in a bulleted
+ list.
+ @ivar multivalue: If true, then multiple values may be given
+ for this field; if false, then this field can only take a
+ single value, and a warning should be issued if it is
+ redefined.
+ @ivar takes_arg: If true, then this field expects an argument;
+ and a separate field section will be constructed for each
+ argument value. The label (and plural label) should include
+ a '%s' to mark where the argument's string rep should be
+ added.
+ """
+ def __init__(self, tags, label, plural=None,
+ short=0, multivalue=1, takes_arg=0,
+ varnames=None):
+ if type(tags) in (list, tuple):
+ self.tags = tuple(tags)
+ elif type(tags) is str:
+ self.tags = (tags,)
+ else: raise TypeError('Bad tags: %s' % tags)
+ self.singular = label
+ if plural is None: self.plural = label
+ else: self.plural = plural
+ self.multivalue = multivalue
+ self.short = short
+ self.takes_arg = takes_arg
+ self.varnames = varnames or []
+
+ def __cmp__(self, other):
+ if not isinstance(other, DocstringField): return -1
+ return cmp(self.tags, other.tags)
+
+ def __hash__(self):
+ return hash(self.tags)
+
+ def __repr__(self):
+ return '<Field: %s>' % self.tags[0]
+
+STANDARD_FIELDS = [
+ #: A list of the standard simple fields accepted by epydoc. This
+ #: list can be augmented at run-time by a docstring with the special
+ #: C{@deffield} field. The order in which fields are listed here
+ #: determines the order in which they will be displayed in the
+ #: output.
+
+ # If it's deprecated, put that first.
+ DocstringField(['deprecated', 'depreciated'],
+ 'Deprecated', multivalue=0, varnames=['__deprecated__']),
+
+ # Status info
+ DocstringField(['version'], 'Version', multivalue=0,
+ varnames=['__version__']),
+ DocstringField(['date'], 'Date', multivalue=0,
+ varnames=['__date__']),
+ DocstringField(['status'], 'Status', multivalue=0),
+
+ # Bibliographic Info
+ DocstringField(['author', 'authors'], 'Author', 'Authors', short=1,
+ varnames=['__author__', '__authors__']),
+ DocstringField(['contact'], 'Contact', 'Contacts', short=1,
+ varnames=['__contact__']),
+ DocstringField(['organization', 'org'],
+ 'Organization', 'Organizations'),
+ DocstringField(['copyright', '(c)'], 'Copyright', multivalue=0,
+ varnames=['__copyright__']),
+ DocstringField(['license'], 'License', multivalue=0,
+ varnames=['__license__']),
+
+ # Various warnings etc.
+ DocstringField(['bug'], 'Bug', 'Bugs'),
+ DocstringField(['warning', 'warn'], 'Warning', 'Warnings'),
+ DocstringField(['attention'], 'Attention'),
+ DocstringField(['note'], 'Note', 'Notes'),
+
+ # Formal conditions
+ DocstringField(['requires', 'require', 'requirement'], 'Requires'),
+ DocstringField(['precondition', 'precond'],
+ 'Precondition', 'Preconditions'),
+ DocstringField(['postcondition', 'postcond'],
+ 'Postcondition', 'Postconditions'),
+ DocstringField(['invariant'], 'Invariant'),
+
+ # When was it introduced (version # or date)
+ DocstringField(['since'], 'Since', multivalue=0),
+
+ # Changes made
+ DocstringField(['change', 'changed'], 'Change Log'),
+
+ # Crossreferences
+ DocstringField(['see', 'seealso'], 'See Also', short=1),
+
+ # Future Work
+ DocstringField(['todo'], 'To Do', takes_arg=True),
+
+ # Permissions (used by zope-based projects)
+ DocstringField(['permission', 'permissions'], 'Permission', 'Permissions')
+ ]
+
+######################################################################
+#{ Docstring Parsing
+######################################################################
+
+DEFAULT_DOCFORMAT = 'epytext'
+"""The name of the default markup language used to process docstrings."""
+
+# [xx] keep track of which ones we've already done, in case we're
+# asked to process one twice? e.g., for @include we might have to
+# parse the included docstring earlier than we might otherwise..??
+
+def parse_docstring(api_doc, docindex, suppress_warnings=[]):
+ """
+ Process the given C{APIDoc}'s docstring. In particular, populate
+ the C{APIDoc}'s C{descr} and C{summary} attributes, and add any
+ information provided by fields in the docstring.
+
+ @param docindex: A DocIndex, used to find the containing
+ module (to look up the docformat); and to find any
+ user docfields defined by containing objects.
+ @param suppress_warnings: A set of objects for which docstring
+ warnings should be suppressed.
+ """
+ if api_doc.metadata is not UNKNOWN:
+ if not (isinstance(api_doc, RoutineDoc)
+ and api_doc.canonical_name[-1] == '__init__'):
+ log.debug("%s's docstring processed twice" %
+ api_doc.canonical_name)
+ return
+
+ initialize_api_doc(api_doc)
+
+ # If there's no docstring, then check for special variables (e.g.,
+ # __version__), and then return -- there's nothing else to do.
+ if (api_doc.docstring in (None, UNKNOWN)):
+ if isinstance(api_doc, NamespaceDoc):
+ for field in STANDARD_FIELDS + user_docfields(api_doc, docindex):
+ add_metadata_from_var(api_doc, field)
+ return
+
+ # Remove leading indentation from the docstring.
+ api_doc.docstring = unindent_docstring(api_doc.docstring)
+
+ # Decide which docformat is used by this module.
+ docformat = get_docformat(api_doc, docindex)
+
+ # A list of markup errors from parsing.
+ parse_errors = []
+
+ # Extract a signature from the docstring, if it has one. This
+ # overrides any signature we got via introspection/parsing.
+ if isinstance(api_doc, RoutineDoc):
+ parse_function_signature(api_doc, None, docformat, parse_errors)
+
+ # Parse the docstring. Any errors encountered are stored as
+ # `ParseError` objects in the errors list.
+ parsed_docstring = markup.parse(api_doc.docstring, docformat,
+ parse_errors)
+
+ # Divide the docstring into a description and a list of
+ # fields.
+ descr, fields = parsed_docstring.split_fields(parse_errors)
+ api_doc.descr = descr
+
+ field_warnings = []
+
+ # Handle the constructor fields that have been defined in the class
+ # docstring. This code assumes that a class docstring is parsed before
+ # the same class __init__ docstring.
+ if isinstance(api_doc, ClassDoc):
+
+ # Parse ahead the __init__ docstring for this class
+ initvar = api_doc.variables.get('__init__')
+ if initvar and isinstance(initvar.value, RoutineDoc):
+ init_api_doc = initvar.value
+ parse_docstring(init_api_doc, docindex, suppress_warnings)
+
+ parse_function_signature(init_api_doc, api_doc,
+ docformat, parse_errors)
+ init_fields = split_init_fields(fields, field_warnings)
+
+ # Process fields
+ for field in init_fields:
+ try:
+ process_field(init_api_doc, docindex, field.tag(),
+ field.arg(), field.body())
+ except ValueError, e: field_warnings.append(str(e))
+
+ # Process fields
+ for field in fields:
+ try:
+ process_field(api_doc, docindex, field.tag(),
+ field.arg(), field.body())
+ except ValueError, e: field_warnings.append(str(e))
+
+ # Check to make sure that all type parameters correspond to
+ # some documented parameter.
+ check_type_fields(api_doc, field_warnings)
+
+ # Check for special variables (e.g., __version__)
+ if isinstance(api_doc, NamespaceDoc):
+ for field in STANDARD_FIELDS + user_docfields(api_doc, docindex):
+ add_metadata_from_var(api_doc, field)
+
+ # Extract a summary
+ if api_doc.summary is None and api_doc.descr is not None:
+ api_doc.summary, api_doc.other_docs = api_doc.descr.summary()
+
+ # If the summary is empty, but the return field is not, then use
+ # the return field to generate a summary description.
+ if (isinstance(api_doc, RoutineDoc) and api_doc.summary is None and
+ api_doc.return_descr is not None):
+ s, o = api_doc.return_descr.summary()
+ api_doc.summary = RETURN_PDS + s
+ api_doc.other_docs = o
+
+ # [XX] Make sure we don't have types/param descrs for unknown
+ # vars/params?
+
+ # Report any errors that occurred
+ if api_doc in suppress_warnings:
+ if parse_errors or field_warnings:
+ log.info("Suppressing docstring warnings for %s, since it "
+ "is not included in the documented set." %
+ api_doc.canonical_name)
+ else:
+ report_errors(api_doc, docindex, parse_errors, field_warnings)
+
+def add_metadata_from_var(api_doc, field):
+ for varname in field.varnames:
+ # Check if api_doc has a variable w/ the given name.
+ if varname not in api_doc.variables: continue
+
+ # This check was moved here from before the for loop because we
+ # rarely expect to reach this point. The loop below runs more than
+ # once only for fields with more than one varname, which is
+ # currently only 'author'.
+ for md in api_doc.metadata:
+ if field == md[0]:
+ return # We already have a value for this metadata.
+
+ var_doc = api_doc.variables[varname]
+ if var_doc.value is UNKNOWN: continue
+ val_doc = var_doc.value
+ value = []
+
+ # Try extracting the value from the pyval.
+ ok_types = (basestring, int, float, bool, type(None))
+ if val_doc.pyval is not UNKNOWN:
+ if isinstance(val_doc.pyval, ok_types):
+ value = [val_doc.pyval]
+ elif field.multivalue:
+ if isinstance(val_doc.pyval, (tuple, list)):
+ for elt in val_doc.pyval:
+ if not isinstance(elt, ok_types): break
+ else:
+ value = list(val_doc.pyval)
+
+ # Try extracting the value from the parse tree.
+ elif val_doc.toktree is not UNKNOWN:
+ try: value = [epydoc.docparser.parse_string(val_doc.toktree)]
+ except KeyboardInterrupt: raise
+ except: pass
+ if field.multivalue and not value:
+ try: value = epydoc.docparser.parse_string_list(val_doc.toktree)
+ except KeyboardInterrupt: raise
+ except: pass
+
+ # Add any values that we found.
+ for elt in value:
+ if isinstance(elt, str):
+ elt = decode_with_backslashreplace(elt)
+ else:
+ elt = unicode(elt)
+ elt = epytext.ParsedEpytextDocstring(
+ epytext.parse_as_para(elt), inline=True)
+
+ # Add in the metadata and remove from the variables
+ api_doc.metadata.append( (field, varname, elt) )
+
+ # Remove the variable itself (unless it's documented)
+ if var_doc.docstring in (None, UNKNOWN):
+ del api_doc.variables[varname]
+ if api_doc.sort_spec is not UNKNOWN:
+ try: api_doc.sort_spec.remove(varname)
+ except ValueError: pass
+
+def initialize_api_doc(api_doc):
+ """A helper function for L{parse_docstring()} that initializes
+ the attributes that C{parse_docstring()} will write to."""
+ if api_doc.descr is UNKNOWN:
+ api_doc.descr = None
+ if api_doc.summary is UNKNOWN:
+ api_doc.summary = None
+ if api_doc.metadata is UNKNOWN:
+ api_doc.metadata = []
+ if isinstance(api_doc, RoutineDoc):
+ if api_doc.arg_descrs is UNKNOWN:
+ api_doc.arg_descrs = []
+ if api_doc.arg_types is UNKNOWN:
+ api_doc.arg_types = {}
+ if api_doc.return_descr is UNKNOWN:
+ api_doc.return_descr = None
+ if api_doc.return_type is UNKNOWN:
+ api_doc.return_type = None
+ if api_doc.exception_descrs is UNKNOWN:
+ api_doc.exception_descrs = []
+ if isinstance(api_doc, (VariableDoc, PropertyDoc)):
+ if api_doc.type_descr is UNKNOWN:
+ api_doc.type_descr = None
+ if isinstance(api_doc, NamespaceDoc):
+ if api_doc.group_specs is UNKNOWN:
+ api_doc.group_specs = []
+ if api_doc.sort_spec is UNKNOWN:
+ api_doc.sort_spec = []
+
+def split_init_fields(fields, warnings):
+ """
+ Remove the fields related to the constructor from a class docstring
+ fields list.
+
+ @param fields: The fields to process. The list is modified in place.
+ @type fields: C{list} of L{markup.Field}
+ @param warnings: A list to which processing warnings are appended
+ @type warnings: C{list}
+ @return: The C{fields} items to be applied to the C{__init__} method
+ @rtype: C{list} of L{markup.Field}
+ """
+ init_fields = []
+
+ # Split fields into lists according to their argument, keeping order.
+ arg_fields = {}
+ args_order = []
+ i = 0
+ while i < len(fields):
+ field = fields[i]
+
+ # gather together all the fields with the same arg
+ if field.arg() is not None:
+ arg_fields.setdefault(field.arg(), []).append(fields.pop(i))
+ args_order.append(field.arg())
+ else:
+ i += 1
+
+ # Now check that for each argument there is at most a single variable
+ # and a single parameter, and at most a single type for each of them.
+ for arg in args_order:
+ ff = arg_fields.pop(arg, None)
+ if ff is None:
+ continue
+
+ var = tvar = par = tpar = None
+ for field in ff:
+ if field.tag() in VARIABLE_TAGS:
+ if var is None:
+ var = field
+ fields.append(field)
+ else:
+ warnings.append(
+ "There is more than one variable named '%s'"
+ % arg)
+ elif field.tag() in PARAMETER_TAGS:
+ if par is None:
+ par = field
+ init_fields.append(field)
+ else:
+ warnings.append(
+ "There is more than one parameter named '%s'"
+ % arg)
+
+ elif field.tag() == 'type':
+ if var is None and par is None:
+ # type before obj
+ tvar = tpar = field
+ else:
+ if var is not None and tvar is None:
+ tvar = field
+ if par is not None and tpar is None:
+ tpar = field
+
+ elif field.tag() in EXCEPTION_TAGS:
+ init_fields.append(field)
+
+ else: # Unexpected field
+ fields.append(field)
+
+ # Put selected types into the proper output lists
+ if tvar is not None:
+ if var is not None:
+ fields.append(tvar)
+ else:
+ pass # [xx] warn about type w/o object?
+
+ if tpar is not None:
+ if par is not None:
+ init_fields.append(tpar)
+ else:
+ pass # [xx] warn about type w/o object?
+
+ return init_fields
+
+def report_errors(api_doc, docindex, parse_errors, field_warnings):
+ """A helper function for L{parse_docstring()} that reports any
+ markup warnings and field warnings that we encountered while
+ processing C{api_doc}'s docstring."""
+ if not parse_errors and not field_warnings: return
+
+ # Get the name of the item containing the error, and the
+ # filename of its containing module.
+ name = api_doc.canonical_name
+ module = api_doc.defining_module
+ if module is not UNKNOWN and module.filename not in (None, UNKNOWN):
+ try: filename = py_src_filename(module.filename)
+ except: filename = module.filename
+ else:
+ filename = '??'
+
+ # [xx] Don't report markup errors for standard builtins.
+ # n.b. that we must use 'is' to compare pyvals here -- if we use
+ # 'in' or '==', then a user __cmp__ method might raise an
+ # exception, or lie.
+ if isinstance(api_doc, ValueDoc) and api_doc != module:
+ if module not in (None, UNKNOWN) and module.pyval is exceptions:
+ return
+ for builtin_val in __builtin__.__dict__.values():
+ if builtin_val is api_doc.pyval:
+ return
+
+ # Get the start line of the docstring containing the error.
+ startline = api_doc.docstring_lineno
+ if startline in (None, UNKNOWN):
+ startline = introspect_docstring_lineno(api_doc)
+ if startline in (None, UNKNOWN):
+ startline = None
+
+ # Display a block header.
+ header = 'File %s, ' % filename
+ if startline is not None:
+ header += 'line %d, ' % startline
+ header += 'in %s' % name
+ log.start_block(header)
+
+ # Display all parse errors. But first, combine any errors
+ # with duplicate description messages.
+ if startline is None:
+ # remove dups, but keep original order:
+ dups = {}
+ for error in parse_errors:
+ message = error.descr()
+ if message not in dups:
+ log.docstring_warning(message)
+ dups[message] = 1
+ else:
+ # Combine line number fields for dup messages:
+ messages = {} # maps message -> list of linenum
+ for error in parse_errors:
+ error.set_linenum_offset(startline)
+ message = error.descr()
+ messages.setdefault(message, []).append(error.linenum())
+ message_items = messages.items()
+ message_items.sort(lambda a,b:cmp(min(a[1]), min(b[1])))
+ for message, linenums in message_items:
+ linenums = [n for n in linenums if n is not None]
+ if len(linenums) == 0:
+ log.docstring_warning(message)
+ elif len(linenums) == 1:
+ log.docstring_warning("Line %s: %s" % (linenums[0], message))
+ else:
+ linenums = ', '.join(['%s' % l for l in linenums])
+ log.docstring_warning("Lines %s: %s" % (linenums, message))
+
+ # Display all field warnings.
+ for warning in field_warnings:
+ log.docstring_warning(warning)
+
+ # End the message block.
+ log.end_block()
+
+RETURN_PDS = markup.parse('Returns:', markup='epytext')
+"""A ParsedDocstring containing the text 'Returns'. This is used to
+construct summary descriptions for routines that have empty C{descr},
+but non-empty C{return_descr}."""
+RETURN_PDS._tree.children[0].attribs['inline'] = True
+
+######################################################################
+#{ Field Processing Error Messages
+######################################################################
+
+UNEXPECTED_ARG = '%r did not expect an argument'
+EXPECTED_ARG = '%r expected an argument'
+EXPECTED_SINGLE_ARG = '%r expected a single argument'
+BAD_CONTEXT = 'Invalid context for %r'
+REDEFINED = 'Redefinition of %s'
+UNKNOWN_TAG = 'Unknown field tag %r'
+BAD_PARAM = '@%s for unknown parameter %s'
+
+######################################################################
+#{ Field Processing
+######################################################################
+
+def process_field(api_doc, docindex, tag, arg, descr):
+ """
+ Process a single field, and use it to update C{api_doc}. If
+ C{tag} is the name of a special field, then call its handler
+ function. If C{tag} is the name of a simple field, then use
+ C{process_simple_field} to process it. Otherwise, check if it's a
+ user-defined field, defined in this docstring or the docstring of
+ a containing object; and if so, process it with
+ C{process_simple_field}.
+
+ @param tag: The field's tag, such as C{'author'}
+ @param arg: The field's optional argument
+ @param descr: The description following the field tag and
+ argument.
+ @raise ValueError: If a problem was encountered while processing
+ the field. The C{ValueError}'s string argument is an
+ explanation of the problem, which should be displayed as a
+ warning message.
+ """
+ # standard special fields
+ if tag in _field_dispatch_table:
+ handler = _field_dispatch_table[tag]
+ handler(api_doc, docindex, tag, arg, descr)
+ return
+
+ # standard simple fields & user-defined fields
+ for field in STANDARD_FIELDS + user_docfields(api_doc, docindex):
+ if tag in field.tags:
+ # [xx] check if it's redefined if it's not multivalue??
+ if not field.takes_arg:
+ _check(api_doc, tag, arg, expect_arg=False)
+ api_doc.metadata.append((field, arg, descr))
+ return
+
+ # If we didn't handle the field, then report a warning.
+ raise ValueError(UNKNOWN_TAG % tag)
+
+def user_docfields(api_doc, docindex):
+ """
+ Return a list of user defined fields that can be used for the
+ given object. This list is taken from the given C{api_doc}, and
+ any of its containing C{NamespaceDoc}s.
+
+ @note: We assume here that a parent's docstring will always be
+ parsed before its children's. This is indeed the case when we
+ are called via L{docbuilder.build_doc_index()}. If a child's
+ docstring is parsed before its parent's, then its parent won't
+ yet have had its C{extra_docstring_fields} attribute
+ initialized.
+ """
+ docfields = []
+ # Get any docfields from `api_doc` itself
+ if api_doc.extra_docstring_fields not in (None, UNKNOWN):
+ docfields += api_doc.extra_docstring_fields
+ # Get any docfields from `api_doc`'s ancestors
+ for i in range(len(api_doc.canonical_name)-1, 0, -1):
+ ancestor = docindex.get_valdoc(api_doc.canonical_name[:i])
+ if ancestor is not None \
+ and ancestor.extra_docstring_fields not in (None, UNKNOWN):
+ docfields += ancestor.extra_docstring_fields
+ return docfields
+
+_field_dispatch_table = {}
+def register_field_handler(handler, *field_tags):
+ """
+ Register the given field handler function for processing any
+ of the given field tags. Field handler functions should
+ have the following signature:
+
+ >>> def field_handler(api_doc, docindex, tag, arg, descr):
+ ... '''update api_doc in response to the field.'''
+
+ Where C{api_doc} is the documentation object to update;
+ C{docindex} is a L{DocIndex} that can be used to look up the
+ documentation for related objects; C{tag} is the field tag that
+ was used; C{arg} is the optional argument; and C{descr} is the
+ description following the field tag and argument.
+ """
+ for field_tag in field_tags:
+ _field_dispatch_table[field_tag] = handler
+
+######################################################################
+#{ Field Handler Functions
+######################################################################
+
+def process_summary_field(api_doc, docindex, tag, arg, descr):
+ """Store C{descr} in C{api_doc.summary}"""
+ _check(api_doc, tag, arg, expect_arg=False)
+ if api_doc.summary is not None:
+ raise ValueError(REDEFINED % tag)
+ api_doc.summary = descr
+
+def process_include_field(api_doc, docindex, tag, arg, descr):
+ """Copy the docstring contents from the object named in C{descr}"""
+ _check(api_doc, tag, arg, expect_arg=False)
+ # options:
+ # a. just append the descr to our own
+ # b. append descr and update metadata
+ # c. append descr and process all fields.
+ # in any case, mark any errors we may find as coming from an
+ # imported docstring.
+
+ # how does this interact with documentation inheritance??
+ raise ValueError('%s not implemented yet' % tag)
+
+def process_undocumented_field(api_doc, docindex, tag, arg, descr):
+ """Remove any documentation for the variables named in C{descr}"""
+ _check(api_doc, tag, arg, context=NamespaceDoc, expect_arg=False)
+ for ident in _descr_to_identifiers(descr):
+ var_name_re = re.compile('^%s$' % ident.replace('*', '(.*)'))
+ for var_name, var_doc in api_doc.variables.items():
+ if var_name_re.match(var_name):
+ # Remove the variable from `variables`.
+ api_doc.variables.pop(var_name, None)
+ if api_doc.sort_spec is not UNKNOWN:
+ try: api_doc.sort_spec.remove(var_name)
+ except ValueError: pass
+ # For modules, remove any submodules that match var_name_re.
+ if isinstance(api_doc, ModuleDoc):
+ removed = set([m for m in api_doc.submodules
+ if var_name_re.match(m.canonical_name[-1])])
+ if removed:
+ # Remove the indicated submodules from this module.
+ api_doc.submodules = [m for m in api_doc.submodules
+ if m not in removed]
+ # Remove all ancestors of the indicated submodules
+ # from the docindex root. E.g., if module x
+ # declares y to be undocumented, then x.y.z should
+ # also be undocumented.
+ for elt in docindex.root[:]:
+ for m in removed:
+ if m.canonical_name.dominates(elt.canonical_name):
+ docindex.root.remove(elt)
+
+def process_group_field(api_doc, docindex, tag, arg, descr):
+ """Define a group named C{arg} containing the variables whose
+ names are listed in C{descr}."""
+ _check(api_doc, tag, arg, context=NamespaceDoc, expect_arg=True)
+ api_doc.group_specs.append( (arg, _descr_to_identifiers(descr)) )
+ # [xx] should this also set sort order?
+
+def process_deffield_field(api_doc, docindex, tag, arg, descr):
+ """Define a new custom field."""
+ _check(api_doc, tag, arg, expect_arg=True)
+ if api_doc.extra_docstring_fields is UNKNOWN:
+ api_doc.extra_docstring_fields = []
+ try:
+ docstring_field = _descr_to_docstring_field(arg, descr)
+ docstring_field.varnames.append("__%s__" % arg)
+ api_doc.extra_docstring_fields.append(docstring_field)
+ except ValueError, e:
+ raise ValueError('Bad %s: %s' % (tag, e))
+
+def process_raise_field(api_doc, docindex, tag, arg, descr):
+ """Record the fact that C{api_doc} can raise the exception named
+ C{tag} in C{api_doc.exception_descrs}."""
+ _check(api_doc, tag, arg, context=RoutineDoc, expect_arg='single')
+ try: name = DottedName(arg, strict=True)
+ except DottedName.InvalidDottedName: name = arg
+ api_doc.exception_descrs.append( (name, descr) )
+
+def process_sort_field(api_doc, docindex, tag, arg, descr):
+ _check(api_doc, tag, arg, context=NamespaceDoc, expect_arg=False)
+ api_doc.sort_spec = _descr_to_identifiers(descr) + api_doc.sort_spec
+
+# [xx] should I notice when they give a type for an unknown var?
+def process_type_field(api_doc, docindex, tag, arg, descr):
+ # In namespace, "@type var: ..." describes the type of a var.
+ if isinstance(api_doc, NamespaceDoc):
+ _check(api_doc, tag, arg, expect_arg='single')
+ set_var_type(api_doc, arg, descr)
+
+ # For variables & properties, "@type: ..." describes the variable.
+ elif isinstance(api_doc, (VariableDoc, PropertyDoc)):
+ _check(api_doc, tag, arg, expect_arg=False)
+ if api_doc.type_descr is not None:
+ raise ValueError(REDEFINED % tag)
+ api_doc.type_descr = descr
+
+ # For routines, "@type param: ..." describes a parameter.
+ elif isinstance(api_doc, RoutineDoc):
+ _check(api_doc, tag, arg, expect_arg='single')
+ if arg in api_doc.arg_types:
+ raise ValueError(REDEFINED % ('type for '+arg))
+ api_doc.arg_types[arg] = descr
+
+ else:
+ raise ValueError(BAD_CONTEXT % tag)
+
+def process_var_field(api_doc, docindex, tag, arg, descr):
+ _check(api_doc, tag, arg, context=ModuleDoc, expect_arg=True)
+ for ident in re.split('[:;, ] *', arg):
+ set_var_descr(api_doc, ident, descr)
+
+def process_cvar_field(api_doc, docindex, tag, arg, descr):
+ # If @cvar is used *within* a variable, then use it as the
+ # variable's description, and treat the variable as a class var.
+ if (isinstance(api_doc, VariableDoc) and
+ isinstance(api_doc.container, ClassDoc)):
+ _check(api_doc, tag, arg, expect_arg=False)
+ api_doc.is_instvar = False
+ api_doc.descr = markup.ConcatenatedDocstring(api_doc.descr, descr)
+ api_doc.summary, api_doc.other_docs = descr.summary()
+
+ # Otherwise, @cvar should be used in a class.
+ else:
+ _check(api_doc, tag, arg, context=ClassDoc, expect_arg=True)
+ for ident in re.split('[:;, ] *', arg):
+ set_var_descr(api_doc, ident, descr)
+ api_doc.variables[ident].is_instvar = False
+
+def process_ivar_field(api_doc, docindex, tag, arg, descr):
+ # If @ivar is used *within* a variable, then use it as the
+ # variable's description, and treat the variable as an instvar.
+ if (isinstance(api_doc, VariableDoc) and
+ isinstance(api_doc.container, ClassDoc)):
+ _check(api_doc, tag, arg, expect_arg=False)
+ # require that there be no other descr?
+ api_doc.is_instvar = True
+ api_doc.descr = markup.ConcatenatedDocstring(api_doc.descr, descr)
+ api_doc.summary, api_doc.other_docs = descr.summary()
+
+ # Otherwise, @ivar should be used in a class.
+ else:
+ _check(api_doc, tag, arg, context=ClassDoc, expect_arg=True)
+ for ident in re.split('[:;, ] *', arg):
+ set_var_descr(api_doc, ident, descr)
+ api_doc.variables[ident].is_instvar = True
+
+# [xx] '@return: foo' used to get used as a descr if no other
+# descr was present. is that still true?
+def process_return_field(api_doc, docindex, tag, arg, descr):
+ _check(api_doc, tag, arg, context=RoutineDoc, expect_arg=False)
+ if api_doc.return_descr is not None:
+ raise ValueError(REDEFINED % 'return value description')
+ api_doc.return_descr = descr
+
+def process_rtype_field(api_doc, docindex, tag, arg, descr):
+ _check(api_doc, tag, arg,
+ context=(RoutineDoc, PropertyDoc), expect_arg=False)
+ if isinstance(api_doc, RoutineDoc):
+ if api_doc.return_type is not None:
+ raise ValueError(REDEFINED % 'return value type')
+ api_doc.return_type = descr
+
+ elif isinstance(api_doc, PropertyDoc):
+ _check(api_doc, tag, arg, expect_arg=False)
+ if api_doc.type_descr is not None:
+ raise ValueError(REDEFINED % tag)
+ api_doc.type_descr = descr
+
+def process_arg_field(api_doc, docindex, tag, arg, descr):
+ _check(api_doc, tag, arg, context=RoutineDoc, expect_arg=True)
+ idents = re.split('[:;, ] *', arg)
+ api_doc.arg_descrs.append( (idents, descr) )
+ # Check to make sure that the documented parameter(s) are
+ # actually part of the function signature.
+ all_args = api_doc.all_args()
+ if all_args not in (['...'], UNKNOWN):
+ bad_params = ['"%s"' % i for i in idents if i not in all_args]
+ if bad_params:
+ raise ValueError(BAD_PARAM % (tag, ', '.join(bad_params)))
+
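The signature-consistency check performed by `process_arg_field` can be sketched in isolation. A minimal Python 3 version (the `check_documented_params` name is hypothetical, and the `all_args` list stands in for `RoutineDoc.all_args()`):

```python
import re

def check_documented_params(arg, all_args):
    # Split a field argument such as 'x, z' on the same separators
    # used above, then report identifiers missing from the signature.
    idents = re.split('[:;, ] *', arg)
    return [ident for ident in idents if ident not in all_args]

# 'z' is documented but does not appear in the signature.
check_documented_params('x, z', ['x', 'y'])
```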
+def process_kwarg_field(api_doc, docindex, tag, arg, descr):
+ # [xx] these should -not- be checked if they exist..
+ # and listed separately or not??
+ _check(api_doc, tag, arg, context=RoutineDoc, expect_arg=True)
+ idents = re.split('[:;, ] *', arg)
+ api_doc.arg_descrs.append( (idents, descr) )
+
+register_field_handler(process_group_field, 'group')
+register_field_handler(process_deffield_field, 'deffield', 'newfield')
+register_field_handler(process_sort_field, 'sort')
+register_field_handler(process_summary_field, 'summary')
+register_field_handler(process_undocumented_field, 'undocumented')
+register_field_handler(process_include_field, 'include')
+register_field_handler(process_var_field, 'var', 'variable')
+register_field_handler(process_type_field, 'type')
+register_field_handler(process_cvar_field, 'cvar', 'cvariable')
+register_field_handler(process_ivar_field, 'ivar', 'ivariable')
+register_field_handler(process_return_field, 'return', 'returns')
+register_field_handler(process_rtype_field, 'rtype', 'returntype')
+register_field_handler(process_arg_field, 'arg', 'argument',
+ 'parameter', 'param')
+register_field_handler(process_kwarg_field, 'kwarg', 'keyword', 'kwparam')
+register_field_handler(process_raise_field, 'raise', 'raises',
+ 'except', 'exception')
+
+# Tags related to function parameters
+PARAMETER_TAGS = ('arg', 'argument', 'parameter', 'param',
+ 'kwarg', 'keyword', 'kwparam')
+
+# Tags related to variables in a class
+VARIABLE_TAGS = ('cvar', 'cvariable', 'ivar', 'ivariable')
+
+# Tags related to exceptions
+EXCEPTION_TAGS = ('raise', 'raises', 'except', 'exception')
+
+######################################################################
+#{ Helper Functions
+######################################################################
+
+def check_type_fields(api_doc, field_warnings):
+ """Check to make sure that all type fields correspond to some
+ documented parameter; if not, append a warning to field_warnings."""
+ if isinstance(api_doc, RoutineDoc):
+ for arg in api_doc.arg_types:
+ if arg not in api_doc.all_args():
+ for args, descr in api_doc.arg_descrs:
+ if arg in args:
+ break
+ else:
+ field_warnings.append(BAD_PARAM % ('type', '"%s"' % arg))
+
+def set_var_descr(api_doc, ident, descr):
+ if ident not in api_doc.variables:
+ api_doc.variables[ident] = VariableDoc(
+ container=api_doc, name=ident,
+ canonical_name=api_doc.canonical_name+ident)
+
+ var_doc = api_doc.variables[ident]
+ if var_doc.descr not in (None, UNKNOWN):
+ raise ValueError(REDEFINED % ('description for '+ident))
+ var_doc.descr = descr
+ if var_doc.summary in (None, UNKNOWN):
+ var_doc.summary, var_doc.other_docs = var_doc.descr.summary()
+
+def set_var_type(api_doc, ident, descr):
+ if ident not in api_doc.variables:
+ api_doc.variables[ident] = VariableDoc(
+ container=api_doc, name=ident,
+ canonical_name=api_doc.canonical_name+ident)
+
+ var_doc = api_doc.variables[ident]
+ if var_doc.type_descr not in (None, UNKNOWN):
+ raise ValueError(REDEFINED % ('type for '+ident))
+ var_doc.type_descr = descr
+
+def _check(api_doc, tag, arg, context=None, expect_arg=None):
+ if context is not None:
+ if not isinstance(api_doc, context):
+ raise ValueError(BAD_CONTEXT % tag)
+ if expect_arg is not None:
+ if expect_arg == True:
+ if arg is None:
+ raise ValueError(EXPECTED_ARG % tag)
+ elif expect_arg == False:
+ if arg is not None:
+ raise ValueError(UNEXPECTED_ARG % tag)
+ elif expect_arg == 'single':
+ if (arg is None or ' ' in arg):
+ raise ValueError(EXPECTED_SINGLE_ARG % tag)
+ else:
+ assert 0, 'bad value for expect_arg'
+
+def get_docformat(api_doc, docindex):
+ """
+ Return the name of the markup language that should be used to
+ parse the API documentation for the given object.
+ """
+ # Find the module that defines api_doc.
+ module = api_doc.defining_module
+ # Look up its docformat.
+ if module is not UNKNOWN and module.docformat not in (None, UNKNOWN):
+ docformat = module.docformat
+ else:
+ docformat = DEFAULT_DOCFORMAT
+ # Convert to lower case & strip region codes.
+ try: return docformat.lower().split()[0]
+ except: return DEFAULT_DOCFORMAT
+
+def unindent_docstring(docstring):
+ # [xx] copied from inspect.getdoc(); we can't use inspect.getdoc()
+ # itself, since it expects an object, not a string.
+
+ if not docstring: return ''
+ lines = docstring.expandtabs().split('\n')
+
+ # Find minimum indentation of any non-blank lines after first line.
+ margin = sys.maxint
+ for line in lines[1:]:
+ content = len(line.lstrip())
+ if content:
+ indent = len(line) - content
+ margin = min(margin, indent)
+ # Remove indentation.
+ if lines:
+ lines[0] = lines[0].lstrip()
+ if margin < sys.maxint:
+ for i in range(1, len(lines)): lines[i] = lines[i][margin:]
+ # Remove any trailing (but not leading!) blank lines.
+ while lines and not lines[-1]:
+ lines.pop()
+ #while lines and not lines[0]:
+ # lines.pop(0)
+ return '\n'.join(lines)
+
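The dedent logic above is Python 2 era (`sys.maxint`); a Python 3 sketch of the same algorithm, assuming plain string input:

```python
import sys

def unindent(docstring):
    # Mirror of unindent_docstring: strip the common margin of all
    # lines after the first, then drop trailing blank lines.
    if not docstring:
        return ''
    lines = docstring.expandtabs().split('\n')
    # Minimum indentation of any non-blank line after the first.
    margin = sys.maxsize
    for line in lines[1:]:
        stripped = line.lstrip()
        if stripped:
            margin = min(margin, len(line) - len(stripped))
    lines[0] = lines[0].lstrip()
    if margin < sys.maxsize:
        lines[1:] = [line[margin:] for line in lines[1:]]
    while lines and not lines[-1]:
        lines.pop()
    return '\n'.join(lines)
```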
+_IDENTIFIER_LIST_REGEXP = re.compile(r'^[\w.\*]+([\s,:;]\s*[\w.\*]+)*$')
+def _descr_to_identifiers(descr):
+ """
+ Given a C{ParsedDocstring} that contains a list of identifiers,
+ return a list of those identifiers. This is used by fields such
+ as C{@group} and C{@sort}, which expect lists of identifiers as
+ their values. To extract the identifiers, the docstring is first
+ converted to plaintext, and then split. The plaintext content of
+ the docstring must be a list of identifiers, separated by
+ spaces, commas, colons, or semicolons.
+
+ @rtype: C{list} of C{string}
+ @return: A list of the identifier names contained in C{descr}.
+ @type descr: L{markup.ParsedDocstring}
+ @param descr: A C{ParsedDocstring} containing a list of
+ identifiers.
+ @raise ValueError: If C{descr} does not contain a valid list of
+ identifiers.
+ """
+ idents = descr.to_plaintext(None).strip()
+ idents = re.sub(r'\s+', ' ', idents)
+ if not _IDENTIFIER_LIST_REGEXP.match(idents):
+ raise ValueError, 'Bad Identifier list: %r' % idents
+ rval = re.split('[:;, ] *', idents)
+ return rval
+
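For illustration, the validate-then-split behavior of `_descr_to_identifiers` can be exercised on plain strings (skipping the `ParsedDocstring.to_plaintext()` step):

```python
import re

_IDENT_LIST = re.compile(r'^[\w.\*]+([\s,:;]\s*[\w.\*]+)*$')

def descr_to_identifiers(text):
    # Normalize whitespace, validate against the identifier-list
    # pattern, then split on the accepted separators.
    idents = re.sub(r'\s+', ' ', text.strip())
    if not _IDENT_LIST.match(idents):
        raise ValueError('Bad identifier list: %r' % idents)
    return re.split('[:;, ] *', idents)
```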
+def _descr_to_docstring_field(arg, descr):
+ tags = [s.lower() for s in re.split('[:;, ] *', arg)]
+ descr = descr.to_plaintext(None).strip()
+ args = re.split('[:;,] *', descr)
+ if len(args) == 0 or len(args) > 3:
+ raise ValueError, 'Wrong number of arguments'
+ singular = args[0]
+ if len(args) >= 2: plural = args[1]
+ else: plural = None
+ short = 0
+ if len(args) >= 3:
+ if args[2] == 'short': short = 1
+ else: raise ValueError('Bad arg 2 (expected "short")')
+ return DocstringField(tags, singular, plural, short)
+
+######################################################################
+#{ Function Signature Extraction
+######################################################################
+
+# [XX] todo: add optional type modifiers?
+_SIGNATURE_RE = re.compile(
+ # Class name (for builtin methods)
+ r'^\s*((?P<self>\w+)\.)?' +
+ # The function name (must match exactly) [XX] not anymore!
+ r'(?P<func>\w+)' +
+ # The parameters
+ r'\((?P<params>(\s*\[?\s*\*{0,2}[\w\-\.]+(\s*=.+?)?'+
+ r'(\s*\[?\s*,\s*\]?\s*\*{0,2}[\w\-\.]+(\s*=.+?)?)*\]*)?)\s*\)' +
+ # The return value (optional)
+ r'(\s*(->)\s*(?P<return>\S.*?))?'+
+ # The end marker
+ r'\s*(\n|\s+(--|<=+>)\s+|$|\.\s+|\.\n)')
+"""A regular expression that is used to extract signatures from
+docstrings."""
+
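As a concrete illustration, the pattern picks apart a conventional first-line signature. A sketch reusing the same regex (the sample docstring is hypothetical):

```python
import re

# Same pattern as _SIGNATURE_RE above, written as adjacent literals.
_SIGNATURE_RE = re.compile(
    r'^\s*((?P<self>\w+)\.)?'
    r'(?P<func>\w+)'
    r'\((?P<params>(\s*\[?\s*\*{0,2}[\w\-\.]+(\s*=.+?)?'
    r'(\s*\[?\s*,\s*\]?\s*\*{0,2}[\w\-\.]+(\s*=.+?)?)*\]*)?)\s*\)'
    r'(\s*(->)\s*(?P<return>\S.*?))?'
    r'\s*(\n|\s+(--|<=+>)\s+|$|\.\s+|\.\n)')

# A builtin-style docstring whose first line carries the signature.
m = _SIGNATURE_RE.match('get(key, default=None) -> value\nReturn the value.')
```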
+def parse_function_signature(func_doc, doc_source, docformat, parse_errors):
+ """
+ Construct the signature for a builtin function or method from
+ its docstring. If the docstring uses the standard convention
+ of including a signature in the first line of the docstring
+ (and formats that signature according to standard
+ conventions), then it will be used to extract a signature.
+ Otherwise, the signature will be set to a single varargs
+ variable named C{"..."}.
+
+ @param func_doc: The object in which to store the parsed signature. Also
+ the container of the docstring to parse if doc_source is C{None}
+ @type func_doc: L{RoutineDoc}
+ @param doc_source: Contains the docstring to parse. If C{None}, parse
+ the L{func_doc} docstring instead
+ @type doc_source: L{APIDoc}
+ @rtype: C{None}
+ """
+ if doc_source is None:
+ doc_source = func_doc
+
+ # If there's no docstring, then don't do anything.
+ if not doc_source.docstring: return False
+
+ m = _SIGNATURE_RE.match(doc_source.docstring)
+ if m is None: return False
+
+ # Do I want to be this strict?
+ # Notice that __init__ must match the class name instead, if the signature
+ # comes from the class docstring
+# if not (m.group('func') == func_doc.canonical_name[-1] or
+# '_'+m.group('func') == func_doc.canonical_name[-1]):
+# log.warning("Not extracting function signature from %s's "
+# "docstring, since the name doesn't match." %
+# func_doc.canonical_name)
+# return False
+
+ params = m.group('params')
+ rtype = m.group('return')
+ selfparam = m.group('self')
+
+ # Extract the parameters from the signature.
+ func_doc.posargs = []
+ func_doc.vararg = None
+ func_doc.kwarg = None
+ if func_doc.posarg_defaults is UNKNOWN:
+ func_doc.posarg_defaults = []
+ if params:
+ # Figure out which parameters are optional.
+ while '[' in params or ']' in params:
+ m2 = re.match(r'(.*)\[([^\[\]]+)\](.*)', params)
+ if not m2: return False
+ (start, mid, end) = m2.groups()
+ mid = re.sub(r'((,|^)\s*[\w\-\.]+)', r'\1=...', mid)
+ params = start+mid+end
+
+ params = re.sub(r'=...=' , r'=', params)
+ for name in params.split(','):
+ if '=' in name:
+ (name, default_repr) = name.split('=',1)
+ default = GenericValueDoc(parse_repr=default_repr)
+ else:
+ default = None
+ name = name.strip()
+ if name == '...':
+ func_doc.vararg = '...'
+ elif name.startswith('**'):
+ func_doc.kwarg = name[2:]
+ elif name.startswith('*'):
+ func_doc.vararg = name[1:]
+ else:
+ func_doc.posargs.append(name)
+ if len(func_doc.posarg_defaults) < len(func_doc.posargs):
+ func_doc.posarg_defaults.append(default)
+ elif default is not None:
+ argnum = len(func_doc.posargs)-1
+ func_doc.posarg_defaults[argnum] = default
+
+ # Extract the return type/value from the signature
+ if rtype:
+ func_doc.return_type = markup.parse(rtype, docformat, parse_errors,
+ inline=True)
+
+ # Add the self parameter, if it was specified.
+ if selfparam:
+ func_doc.posargs.insert(0, selfparam)
+ func_doc.posarg_defaults.insert(0, None)
+
+ # Remove the signature from the docstring.
+ doc_source.docstring = doc_source.docstring[m.end():]
+
+ # We found a signature.
+ return True
+
diff --git a/python/helpers/epydoc/docwriter/__init__.py b/python/helpers/epydoc/docwriter/__init__.py
new file mode 100644
index 0000000..05af188
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/__init__.py
@@ -0,0 +1,12 @@
+# epydoc -- Output generation
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: __init__.py 956 2006-03-10 01:30:51Z edloper $
+
+"""
+Output generation.
+"""
+__docformat__ = 'epytext en'
diff --git a/python/helpers/epydoc/docwriter/dotgraph.py b/python/helpers/epydoc/docwriter/dotgraph.py
new file mode 100644
index 0000000..b7128d3
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/dotgraph.py
@@ -0,0 +1,1351 @@
+# epydoc -- Graph generation
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: dotgraph.py 1663 2007-11-07 15:29:47Z dvarrazzo $
+
+"""
+Render Graphviz directed graphs as images. Below are some examples.
+
+.. importgraph::
+
+.. classtree:: epydoc.apidoc.APIDoc
+
+.. packagetree:: epydoc
+
+:see: `The Graphviz Homepage
+ <http://www.research.att.com/sw/tools/graphviz/>`__
+"""
+__docformat__ = 'restructuredtext'
+
+import re
+import sys
+from epydoc import log
+from epydoc.apidoc import *
+from epydoc.util import *
+from epydoc.compat import * # Backwards compatibility
+
+# colors for graphs of APIDocs
+MODULE_BG = '#d8e8ff'
+CLASS_BG = '#d8ffe8'
+SELECTED_BG = '#ffd0d0'
+BASECLASS_BG = '#e0b0a0'
+SUBCLASS_BG = '#e0b0a0'
+ROUTINE_BG = '#e8d0b0' # maybe?
+INH_LINK_COLOR = '#800000'
+
+######################################################################
+#{ Dot Graphs
+######################################################################
+
+DOT_COMMAND = 'dot'
+"""The command that should be used to spawn dot"""
+
+class DotGraph:
+ """
+ A ``dot`` directed graph. The contents of the graph are
+ constructed from the following instance variables:
+
+ - `nodes`: A list of `DotGraphNode`\\s, encoding the nodes
+ that are present in the graph. Each node is characterized
+ by a set of attributes, including an optional label.
+ - `edges`: A list of `DotGraphEdge`\\s, encoding the edges
+ that are present in the graph. Each edge is characterized
+ by a set of attributes, including an optional label.
+ - `node_defaults`: Default attributes for nodes.
+ - `edge_defaults`: Default attributes for edges.
+ - `body`: A string that is appended as-is in the body of
+ the graph. This can be used to build more complex dot
+ graphs.
+
+ The `link()` method can be used to resolve crossreference links
+ within the graph. In particular, if the 'href' attribute of any
+ node or edge is assigned a value of the form ``<name>``, then it
+ will be replaced by the URL of the object with that name. This
+ applies to the `body` as well as the `nodes` and `edges`.
+
+ To render the graph, use the methods `write()` and `render()`.
+ Usually, you should call `link()` before you render the graph.
+ """
+ _uids = set()
+ """A set of all uids that that have been generated, used to ensure
+ that each new graph has a unique uid."""
+
+ DEFAULT_NODE_DEFAULTS={'fontsize':10, 'fontname': 'Helvetica'}
+ DEFAULT_EDGE_DEFAULTS={'fontsize':10, 'fontname': 'Helvetica'}
+
+ def __init__(self, title, body='', node_defaults=None,
+ edge_defaults=None, caption=None):
+ """
+ Create a new `DotGraph`.
+ """
+ self.title = title
+ """The title of the graph."""
+
+ self.caption = caption
+ """A caption for the graph."""
+
+ self.nodes = []
+ """A list of the nodes that are present in the graph.
+
+ :type: ``list`` of `DotGraphNode`"""
+
+ self.edges = []
+ """A list of the edges that are present in the graph.
+
+ :type: ``list`` of `DotGraphEdge`"""
+
+ self.body = body
+ """A string that should be included as-is in the body of the
+ graph.
+
+ :type: ``str``"""
+
+ self.node_defaults = node_defaults or self.DEFAULT_NODE_DEFAULTS
+ """Default attribute values for nodes."""
+
+ self.edge_defaults = edge_defaults or self.DEFAULT_EDGE_DEFAULTS
+ """Default attribute values for edges."""
+
+ self.uid = re.sub(r'\W', '_', title).lower()
+ """A unique identifier for this graph. This can be used as a
+ filename when rendering the graph. No two `DotGraph`\s will
+ have the same uid."""
+
+ # Encode the title, if necessary.
+ if isinstance(self.title, unicode):
+ self.title = self.title.encode('ascii', 'xmlcharrefreplace')
+
+ # Make sure the UID isn't too long.
+ self.uid = self.uid[:30]
+
+ # Make sure the UID is unique
+ if self.uid in self._uids:
+ n = 2
+ while ('%s_%s' % (self.uid, n)) in self._uids: n += 1
+ self.uid = '%s_%s' % (self.uid, n)
+ self._uids.add(self.uid)
+
+ def to_html(self, image_file, image_url, center=True):
+ """
+ Return the HTML code that should be used to display this graph
+ (including a client-side image map).
+
+ :param image_url: The URL of the image file for this graph;
+ this should be generated separately with the `write()` method.
+ """
+ # If dotversion >1.8.10, then we can generate the image and
+ # the cmapx with a single call to dot. Otherwise, we need to
+ # run dot twice.
+ if get_dot_version() > [1,8,10]:
+ cmapx = self._run_dot('-Tgif', '-o%s' % image_file, '-Tcmapx')
+ if cmapx is None: return '' # failed to render
+ else:
+ if not self.write(image_file):
+ return '' # failed to render
+ cmapx = self.render('cmapx') or ''
+
+ # Decode the cmapx (dot uses utf-8)
+ try:
+ cmapx = cmapx.decode('utf-8')
+ except UnicodeDecodeError:
+ log.debug('%s: unable to decode cmapx from dot; graph will '
+ 'not have clickable regions' % image_file)
+ cmapx = ''
+
+ title = plaintext_to_html(self.title or '')
+ caption = plaintext_to_html(self.caption or '')
+ if title or caption:
+ css_class = 'graph-with-title'
+ else:
+ css_class = 'graph-without-title'
+ if len(title)+len(caption) > 80:
+ title_align = 'left'
+ table_width = ' width="600"'
+ else:
+ title_align = 'center'
+ table_width = ''
+
+ if center: s = '<center>'
+ else: s = ''
+ if title or caption:
+ s += ('<table border="0" cellpadding="0" cellspacing="0" '
+ 'class="graph"%s>\n <tr><td align="center">\n' %
+ table_width)
+ s += (' %s\n <img src="%s" alt=%r usemap="#%s" '
+ 'ismap="ismap" class="%s" />\n' %
+ (cmapx.strip(), image_url, title, self.uid, css_class))
+ if title or caption:
+ s += ' </td></tr>\n <tr><td align=%r>\n' % title_align
+ if title:
+ s += '<span class="graph-title">%s</span>' % title
+ if title and caption:
+ s += ' -- '
+ if caption:
+ s += '<span class="graph-caption">%s</span>' % caption
+ s += '\n </td></tr>\n</table><br />'
+ if center: s += '</center>'
+ return s
+
+ def link(self, docstring_linker):
+ """
+ Replace any href attributes whose value is ``<name>`` with
+ the url of the object whose name is ``<name>``.
+ """
+ # Link xrefs in nodes
+ self._link_href(self.node_defaults, docstring_linker)
+ for node in self.nodes:
+ self._link_href(node._attribs, docstring_linker)
+
+ # Link xrefs in edges
+ self._link_href(self.edge_defaults, docstring_linker)
+ for edge in self.edges:
+ self._link_href(edge._attribs, docstring_linker)
+
+ # Link xrefs in body
+ def subfunc(m):
+ url = docstring_linker.url_for(m.group(1))
+ if url: return 'href="%s"%s' % (url, m.group(2))
+ else: return ''
+ self.body = re.sub("href\s*=\s*['\"]?<([\w\.]+)>['\"]?\s*(,?)",
+ subfunc, self.body)
+
+ def _link_href(self, attribs, docstring_linker):
+ """Helper for `link()`"""
+ if 'href' in attribs:
+ m = re.match(r'^<([\w\.]+)>$', attribs['href'])
+ if m:
+ url = docstring_linker.url_for(m.group(1))
+ if url: attribs['href'] = url
+ else: del attribs['href']
+
+ def write(self, filename, language='gif'):
+ """
+ Render the graph using the output format `language`, and write
+ the result to `filename`.
+
+ :return: True if rendering was successful.
+ """
+ result = self._run_dot('-T%s' % language,
+ '-o%s' % filename)
+ # Decode into unicode, if necessary.
+ if language == 'cmapx' and result is not None:
+ result = result.decode('utf-8')
+ return (result is not None)
+
+ def render(self, language='gif'):
+ """
+ Use the ``dot`` command to render this graph, using the output
+ format `language`. Return the result as a string, or ``None``
+ if the rendering failed.
+ """
+ return self._run_dot('-T%s' % language)
+
+ def _run_dot(self, *options):
+ try:
+ result, err = run_subprocess((DOT_COMMAND,)+options,
+ self.to_dotfile())
+ if err: log.warning("Graphviz dot warning(s):\n%s" % err)
+ except OSError, e:
+ log.warning("Unable to render Graphviz dot graph:\n%s" % e)
+ #log.debug(self.to_dotfile())
+ return None
+
+ return result
+
+ def to_dotfile(self):
+ """
+ Return the string contents of the dot file that should be used
+ to render this graph.
+ """
+ lines = ['digraph %s {' % self.uid,
+ 'node [%s]' % ','.join(['%s="%s"' % (k,v) for (k,v)
+ in self.node_defaults.items()]),
+ 'edge [%s]' % ','.join(['%s="%s"' % (k,v) for (k,v)
+ in self.edge_defaults.items()])]
+ if self.body:
+ lines.append(self.body)
+ lines.append('/* Nodes */')
+ for node in self.nodes:
+ lines.append(node.to_dotfile())
+ lines.append('/* Edges */')
+ for edge in self.edges:
+ lines.append(edge.to_dotfile())
+ lines.append('}')
+
+ # Default dot input encoding is UTF-8
+ return u'\n'.join(lines).encode('utf-8')
+
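The generated dot source has a simple shape; a standalone sketch of the same serialization, using tuple stand-ins for `DotGraphNode` and `DotGraphEdge`:

```python
def to_dotfile(uid, node_defaults, nodes, edges):
    # nodes: (id, attribs) pairs; edges: (start_id, end_id) pairs.
    fmt = lambda attribs: ','.join('%s="%s"' % (k, v)
                                   for (k, v) in attribs.items())
    lines = ['digraph %s {' % uid,
             'node [%s]' % fmt(node_defaults),
             '/* Nodes */']
    for node_id, attribs in nodes:
        lines.append('node%d [%s]' % (node_id, fmt(attribs)))
    lines.append('/* Edges */')
    for start, end in edges:
        lines.append('node%d -> node%d' % (start, end))
    lines.append('}')
    return '\n'.join(lines)

dot = to_dotfile('example', {'fontsize': 10},
                 [(0, {'label': 'A'}), (1, {'label': 'B'})],
                 [(0, 1)])
```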
+class DotGraphNode:
+ _next_id = 0
+ def __init__(self, label=None, html_label=None, **attribs):
+ if label is not None and html_label is not None:
+ raise ValueError('Use label or html_label, not both.')
+ if label is not None: attribs['label'] = label
+ self._html_label = html_label
+ self._attribs = attribs
+ self.id = self.__class__._next_id
+ self.__class__._next_id += 1
+ self.port = None
+
+ def __getitem__(self, attr):
+ return self._attribs[attr]
+
+ def __setitem__(self, attr, val):
+ if attr == 'html_label':
+ self._attribs.pop('label')
+ self._html_label = val
+ else:
+ if attr == 'label': self._html_label = None
+ self._attribs[attr] = val
+
+ def to_dotfile(self):
+ """
+ Return the dot commands that should be used to render this node.
+ """
+ attribs = ['%s="%s"' % (k,v) for (k,v) in self._attribs.items()
+ if v is not None]
+ if self._html_label:
+ attribs.insert(0, 'label=<%s>' % (self._html_label,))
+ if attribs: attribs = ' [%s]' % (','.join(attribs))
+ return 'node%d%s' % (self.id, attribs)
+
+class DotGraphEdge:
+ def __init__(self, start, end, label=None, **attribs):
+ """
+ :type start: `DotGraphNode`
+ :type end: `DotGraphNode`
+ """
+ assert isinstance(start, DotGraphNode)
+ assert isinstance(end, DotGraphNode)
+ if label is not None: attribs['label'] = label
+ self.start = start #: :type: `DotGraphNode`
+ self.end = end #: :type: `DotGraphNode`
+ self._attribs = attribs
+
+ def __getitem__(self, attr):
+ return self._attribs[attr]
+
+ def __setitem__(self, attr, val):
+ self._attribs[attr] = val
+
+ def to_dotfile(self):
+ """
+ Return the dot commands that should be used to render this edge.
+ """
+ # Set head & tail ports, if the nodes have preferred ports.
+ attribs = self._attribs.copy()
+ if (self.start.port is not None and 'headport' not in attribs):
+ attribs['headport'] = self.start.port
+ if (self.end.port is not None and 'tailport' not in attribs):
+ attribs['tailport'] = self.end.port
+ # Convert attribs to a string
+ attribs = ','.join(['%s="%s"' % (k,v) for (k,v) in attribs.items()
+ if v is not None])
+ if attribs: attribs = ' [%s]' % attribs
+ # Return the dotfile edge.
+ return 'node%d -> node%d%s' % (self.start.id, self.end.id, attribs)
+
+######################################################################
+#{ Specialized Nodes for UML Graphs
+######################################################################
+
+class DotGraphUmlClassNode(DotGraphNode):
+ """
+ A specialized dot graph node used to display `ClassDoc`\s using
+ UML notation. The node is rendered as a table with three cells:
+ the top cell contains the class name; the middle cell contains a
+ list of attributes; and the bottom cell contains a list of
+ operations::
+
+ +-------------+
+ | ClassName |
+ +-------------+
+ | x: int |
+ | ... |
+ +-------------+
+ | f(self, x) |
+ | ... |
+ +-------------+
+
+ `DotGraphUmlClassNode`\s may be *collapsed*, in which case they are
+ drawn as a simple box containing the class name::
+
+ +-------------+
+ | ClassName |
+ +-------------+
+
+ Attributes with types corresponding to documented classes can
+ optionally be converted into edges, using `link_attributes()`.
+
+ :todo: Add more options?
+ - show/hide operation signature
+ - show/hide operation signature types
+ - show/hide operation signature return type
+ - show/hide attribute types
+ - use qualifiers
+ """
+
+ def __init__(self, class_doc, linker, context, collapsed=False,
+ bgcolor=CLASS_BG, **options):
+ """
+ Create a new `DotGraphUmlClassNode` based on the class
+ `class_doc`.
+
+ :Parameters:
+ `linker` : `markup.DocstringLinker`
+ Used to look up URLs for classes.
+ `context` : `APIDoc`
+ The context in which this node will be drawn; dotted
+ names will be contextualized to this context.
+ `collapsed` : ``bool``
+ If true, then display this node as a simple box.
+ `bgcolor` : ``str``
+ The background color for this node.
+ `options` : ``dict``
+ A set of options used to control how the node should
+ be displayed.
+
+ :Keywords:
+ - `show_private_vars`: If false, then private variables
+ are filtered out of the attributes & operations lists.
+ (Default: *False*)
+ - `show_magic_vars`: If false, then magic variables
+ (such as ``__init__`` and ``__add__``) are filtered out of
+ the attributes & operations lists. (Default: *True*)
+ - `show_inherited_vars`: If false, then inherited variables
+ are filtered out of the attributes & operations lists.
+ (Default: *False*)
+ - `max_attributes`: The maximum number of attributes that
+ should be listed in the attribute box. If the class has
+ more than this number of attributes, some will be
+ elided. Ellipsis is marked with ``'...'``.
+ - `max_operations`: The maximum number of operations that
+ should be listed in the operation box.
+ - `add_nodes_for_linked_attributes`: If true, then
+ `link_attributes()` will create a new collapsed node for
+ the type of a linked attribute if no node yet exists for
+ that type.
+ """
+ if not isinstance(class_doc, ClassDoc):
+ raise TypeError('Expected a ClassDoc as 1st argument')
+
+ self.class_doc = class_doc
+ """The class represented by this node."""
+
+ self.linker = linker
+ """Used to look up URLs for classes."""
+
+ self.context = context
+ """The context in which the node will be drawn."""
+
+ self.bgcolor = bgcolor
+ """The background color of the node."""
+
+ self.options = options
+ """Options used to control how the node is displayed."""
+
+ self.collapsed = collapsed
+ """If true, then draw this node as a simple box."""
+
+ self.attributes = []
+ """The list of VariableDocs for attributes"""
+
+ self.operations = []
+ """The list of VariableDocs for operations"""
+
+ self.qualifiers = []
+ """List of (key_label, port) tuples."""
+
+ self.edges = []
+ """List of edges used to represent this node's attributes.
+ These should not be added to the `DotGraph`; this node will
+ generate their dotfile code directly."""
+
+ # Initialize operations & attributes lists.
+ show_private = options.get('show_private_vars', False)
+ show_magic = options.get('show_magic_vars', True)
+ show_inherited = options.get('show_inherited_vars', False)
+ for var in class_doc.sorted_variables:
+ name = var.canonical_name[-1]
+ if ((not show_private and var.is_public == False) or
+ (not show_magic and re.match('__\w+__$', name)) or
+ (not show_inherited and var.container != class_doc)):
+ pass
+ elif isinstance(var.value, RoutineDoc):
+ self.operations.append(var)
+ else:
+ self.attributes.append(var)
+
+ # Initialize our dot node settings.
+ tooltip = self._summary(class_doc)
+ if tooltip:
+ # dot chokes on a \n in the attribute...
+ tooltip = " ".join(tooltip.split())
+ else:
+ tooltip = class_doc.canonical_name
+ DotGraphNode.__init__(self, tooltip=tooltip,
+ width=0, height=0, shape='plaintext',
+ href=linker.url_for(class_doc) or NOOP_URL)
+
+ #/////////////////////////////////////////////////////////////////
+ #{ Attribute Linking
+ #/////////////////////////////////////////////////////////////////
+
+ SIMPLE_TYPE_RE = re.compile(
+ r'^([\w\.]+)$')
+ """A regular expression that matches descriptions of simple types."""
+
+ COLLECTION_TYPE_RE = re.compile(
+ r'^(list|set|sequence|tuple|collection) of ([\w\.]+)$')
+ """A regular expression that matches descriptions of collection types."""
+
+ MAPPING_TYPE_RE = re.compile(
+ r'^(dict|dictionary|map|mapping) from ([\w\.]+) to ([\w\.]+)$')
+ """A regular expression that matches descriptions of mapping types."""
+
+ MAPPING_TO_COLLECTION_TYPE_RE = re.compile(
+ r'^(dict|dictionary|map|mapping) from ([\w\.]+) to '
+ r'(list|set|sequence|tuple|collection) of ([\w\.]+)$')
+ """A regular expression that matches descriptions of mapping types
+ whose value type is a collection."""
+
+ OPTIONAL_TYPE_RE = re.compile(
+ r'^(None or|optional) ([\w\.]+)$|^([\w\.]+) or None$')
+ """A regular expression that matches descriptions of optional types."""
+
+ def link_attributes(self, nodes):
+ """
+ Convert any attributes with type descriptions corresponding to
+ documented classes to edges. The following type descriptions
+ are currently handled:
+
+ - Dotted names: Create an attribute edge to the named type,
+ labelled with the variable name.
+ - Collections: Create an attribute edge to the named type,
+ labelled with the variable name, and marked with '*' at the
+ type end of the edge.
+ - Mappings: Create an attribute edge to the named type,
+ labelled with the variable name, connected to the class by
+ a qualifier box that contains the key type description.
+ - Optional: Create an attribute edge to the named type,
+ labelled with the variable name, and marked with '0..1' at
+ the type end of the edge.
+
+ The edges created by `link_attributes()` are handled internally
+ by `DotGraphUmlClassNode`; they should *not* be added directly
+ to the `DotGraph`.
+
+ :param nodes: A dictionary mapping from `ClassDoc`\s to
+ `DotGraphUmlClassNode`\s, used to look up the nodes for
+ attribute types. If the ``add_nodes_for_linked_attributes``
+ option is used, then new nodes will be added to this
+ dictionary for any types that are not already listed.
+ These added nodes must be added to the `DotGraph`.
+ """
+ # Try to convert each attribute var into a graph edge. If
+ # _link_attribute returns true, then it succeeded, so remove
+ # that var from our attribute list; otherwise, leave that var
+ # in our attribute list.
+ self.attributes = [var for var in self.attributes
+ if not self._link_attribute(var, nodes)]
+
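For example, two of the patterns above recognize collection and mapping descriptions (the type names used here are illustrative):

```python
import re

COLLECTION_TYPE_RE = re.compile(
    r'^(list|set|sequence|tuple|collection) of ([\w\.]+)$')
MAPPING_TYPE_RE = re.compile(
    r'^(dict|dictionary|map|mapping) from ([\w\.]+) to ([\w\.]+)$')

# 'list of Foo' links the attribute to Foo with a '*' head label;
# 'dict from K to V' adds a qualifier box holding the key type K.
collection = COLLECTION_TYPE_RE.match('list of epydoc.apidoc.VariableDoc')
mapping = MAPPING_TYPE_RE.match('dict from str to ClassDoc')
```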
+ def _link_attribute(self, var, nodes):
+ """
+ Helper for `link_attributes()`: try to convert the attribute
+ variable `var` into an edge, and add that edge to
+ `self.edges`. Return ``True`` iff the variable was
+ successfully converted to an edge (in which case, it should be
+ removed from the attributes list).
+ """
+ type_descr = self._type_descr(var) or self._type_descr(var.value)
+
+ # Simple type.
+ m = self.SIMPLE_TYPE_RE.match(type_descr)
+ if m and self._add_attribute_edge(var, nodes, m.group(1)):
+ return True
+
+ # Collection type.
+ m = self.COLLECTION_TYPE_RE.match(type_descr)
+ if m and self._add_attribute_edge(var, nodes, m.group(2),
+ headlabel='*'):
+ return True
+
+ # Optional type.
+ m = self.OPTIONAL_TYPE_RE.match(type_descr)
+ if m and self._add_attribute_edge(var, nodes, m.group(2) or m.group(3),
+ headlabel='0..1'):
+ return True
+
+ # Mapping type.
+ m = self.MAPPING_TYPE_RE.match(type_descr)
+ if m:
+ port = 'qualifier_%s' % var.name
+ if self._add_attribute_edge(var, nodes, m.group(3),
+ tailport='%s:e' % port):
+ self.qualifiers.append( (m.group(2), port) )
+ return True
+
+ # Mapping to collection type.
+ m = self.MAPPING_TO_COLLECTION_TYPE_RE.match(type_descr)
+ if m:
+ port = 'qualifier_%s' % var.name
+ if self._add_attribute_edge(var, nodes, m.group(4), headlabel='*',
+ tailport='%s:e' % port):
+ self.qualifiers.append( (m.group(2), port) )
+ return True
+
+ # We were unable to link this attribute.
+ return False
+
+ def _add_attribute_edge(self, var, nodes, type_str, **attribs):
+ """
+ Helper for `link_attributes()`: try to add an edge for the
+ given attribute variable `var`. Return ``True`` if
+ successful.
+ """
+ # Use the type string to look up a corresponding ValueDoc.
+ type_doc = self.linker.docindex.find(type_str, var)
+ if not type_doc: return False
+
+ # Make sure the type is a class.
+ if not isinstance(type_doc, ClassDoc): return False
+
+ # Get the type ValueDoc's node. If it doesn't have one (and
+ # add_nodes_for_linked_attributes=True), then create it.
+ type_node = nodes.get(type_doc)
+ if not type_node:
+ if self.options.get('add_nodes_for_linked_attributes', True):
+ type_node = DotGraphUmlClassNode(type_doc, self.linker,
+ self.context, collapsed=True)
+ nodes[type_doc] = type_node
+ else:
+ return False
+
+ # Add an edge from self to the target type node.
+ # [xx] should I set constraint=false here?
+ attribs.setdefault('headport', 'body')
+ attribs.setdefault('tailport', 'body')
+ url = self.linker.url_for(var) or NOOP_URL
+ self.edges.append(DotGraphEdge(self, type_node, label=var.name,
+ arrowhead='open', href=url,
+ tooltip=var.canonical_name, labeldistance=1.5,
+ **attribs))
+ return True
+
+ #/////////////////////////////////////////////////////////////////
+ #{ Helper Methods
+ #/////////////////////////////////////////////////////////////////
+ def _summary(self, api_doc):
+ """Return a plaintext summary for `api_doc`"""
+ if not isinstance(api_doc, APIDoc): return ''
+ if api_doc.summary in (None, UNKNOWN): return ''
+ summary = api_doc.summary.to_plaintext(None).strip()
+ return plaintext_to_html(summary)
+
+ _summary = classmethod(_summary)
+
+ def _type_descr(self, api_doc):
+ """Return a plaintext type description for `api_doc`"""
+ if not hasattr(api_doc, 'type_descr'): return ''
+ if api_doc.type_descr in (None, UNKNOWN): return ''
+ type_descr = api_doc.type_descr.to_plaintext(self.linker).strip()
+ return plaintext_to_html(type_descr)
+
+ def _tooltip(self, var_doc):
+ """Return a tooltip for `var_doc`."""
+ return (self._summary(var_doc) or
+ self._summary(var_doc.value) or
+ var_doc.canonical_name)
+
+ #/////////////////////////////////////////////////////////////////
+ #{ Rendering
+ #/////////////////////////////////////////////////////////////////
+
+ def _attribute_cell(self, var_doc):
+ # Construct the label
+ label = var_doc.name
+ type_descr = (self._type_descr(var_doc) or
+ self._type_descr(var_doc.value))
+ if type_descr: label += ': %s' % type_descr
+ # Get the URL
+ url = self.linker.url_for(var_doc) or NOOP_URL
+ # Construct & return the pseudo-html code
+ return self._ATTRIBUTE_CELL % (url, self._tooltip(var_doc), label)
+
+ def _operation_cell(self, var_doc):
+ """
+ :todo: do 'word wrapping' on the signature, by starting a new
+ row in the table, if necessary. How to indent the new
+            line?  Maybe use align=right?  I don't think dot has a
+            suitable way to break lines inside a cell.
+ :todo: Optionally add return type info?
+ """
+ # Construct the label (aka function signature)
+ func_doc = var_doc.value
+ args = [self._operation_arg(n, d, func_doc) for (n, d)
+ in zip(func_doc.posargs, func_doc.posarg_defaults)]
+ args = [plaintext_to_html(arg) for arg in args]
+ if func_doc.vararg: args.append('*'+func_doc.vararg)
+ if func_doc.kwarg: args.append('**'+func_doc.kwarg)
+ label = '%s(%s)' % (var_doc.name, ', '.join(args))
+ # Get the URL
+ url = self.linker.url_for(var_doc) or NOOP_URL
+ # Construct & return the pseudo-html code
+ return self._OPERATION_CELL % (url, self._tooltip(var_doc), label)
+
+ def _operation_arg(self, name, default, func_doc):
+ """
+ :todo: Handle tuple args better
+ :todo: Optionally add type info?
+ """
+ if default is None:
+ return '%s' % name
+ else:
+ pyval_repr = default.summary_pyval_repr().to_plaintext(None)
+ return '%s=%s' % (name, pyval_repr)
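The signature label built by `_operation_cell` comes from zipping `posargs` with `posarg_defaults` and appending the `*vararg`/`**kwarg` pieces. A standalone sketch of the same assembly (the helper name is illustrative, not part of epydoc):

```python
def format_signature(name, posargs, defaults, vararg=None, kwarg=None):
    # Illustrative helper, not part of epydoc: build a
    # 'name(a, b=1, *args, **kw)' label the way _operation_cell()
    # does, pairing positional args with their (possibly None)
    # default reprs.
    args = []
    for arg, default in zip(posargs, defaults):
        if default is None:
            args.append(arg)
        else:
            args.append('%s=%s' % (arg, default))
    if vararg:
        args.append('*' + vararg)
    if kwarg:
        args.append('**' + kwarg)
    return '%s(%s)' % (name, ', '.join(args))

# format_signature('plot', ['x', 'color'], [None, "'red'"], kwarg='opts')
# -> "plot(x, color='red', **opts)"
```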
+
+ def _qualifier_cell(self, key_label, port):
+ return self._QUALIFIER_CELL % (port, self.bgcolor, key_label)
+
+ #: args: (url, tooltip, label)
+ _ATTRIBUTE_CELL = '''
+ <TR><TD ALIGN="LEFT" HREF="%s" TOOLTIP="%s">%s</TD></TR>
+ '''
+
+ #: args: (url, tooltip, label)
+ _OPERATION_CELL = '''
+ <TR><TD ALIGN="LEFT" HREF="%s" TOOLTIP="%s">%s</TD></TR>
+ '''
+
+ #: args: (port, bgcolor, label)
+ _QUALIFIER_CELL = '''
+ <TR><TD VALIGN="BOTTOM" PORT="%s" BGCOLOR="%s" BORDER="1">%s</TD></TR>
+ '''
+
+ _QUALIFIER_DIV = '''
+ <TR><TD VALIGN="BOTTOM" HEIGHT="10" WIDTH="10" FIXEDSIZE="TRUE"></TD></TR>
+ '''
+
+ #: Args: (rowspan, bgcolor, classname, attributes, operations, qualifiers)
+ _LABEL = '''
+ <TABLE BORDER="0" CELLBORDER="0" CELLSPACING="0" CELLPADDING="0">
+ <TR><TD ROWSPAN="%s">
+ <TABLE BORDER="0" CELLBORDER="1" CELLSPACING="0"
+ CELLPADDING="0" PORT="body" BGCOLOR="%s">
+ <TR><TD>%s</TD></TR>
+ <TR><TD><TABLE BORDER="0" CELLBORDER="0" CELLSPACING="0">
+ %s</TABLE></TD></TR>
+ <TR><TD><TABLE BORDER="0" CELLBORDER="0" CELLSPACING="0">
+ %s</TABLE></TD></TR>
+ </TABLE>
+ </TD></TR>
+ %s
+ </TABLE>'''
+
+ _COLLAPSED_LABEL = '''
+ <TABLE CELLBORDER="0" BGCOLOR="%s" PORT="body">
+ <TR><TD>%s</TD></TR>
+ </TABLE>'''
+
+ def _get_html_label(self):
+ # Get the class name & contextualize it.
+ classname = self.class_doc.canonical_name
+ classname = classname.contextualize(self.context.canonical_name)
+
+ # If we're collapsed, display the node as a single box.
+ if self.collapsed:
+ return self._COLLAPSED_LABEL % (self.bgcolor, classname)
+
+ # Construct the attribute list. (If it's too long, truncate)
+ attrib_cells = [self._attribute_cell(a) for a in self.attributes]
+ max_attributes = self.options.get('max_attributes', 15)
+ if len(attrib_cells) == 0:
+ attrib_cells = ['<TR><TD></TD></TR>']
+ elif len(attrib_cells) > max_attributes:
+ attrib_cells[max_attributes-2:-1] = ['<TR><TD>...</TD></TR>']
+ attributes = ''.join(attrib_cells)
+
+ # Construct the operation list. (If it's too long, truncate)
+ oper_cells = [self._operation_cell(a) for a in self.operations]
+ max_operations = self.options.get('max_operations', 15)
+ if len(oper_cells) == 0:
+ oper_cells = ['<TR><TD></TD></TR>']
+ elif len(oper_cells) > max_operations:
+ oper_cells[max_operations-2:-1] = ['<TR><TD>...</TD></TR>']
+ operations = ''.join(oper_cells)
+
+ # Construct the qualifier list & determine the rowspan.
+ if self.qualifiers:
+ rowspan = len(self.qualifiers)*2+2
+ div = self._QUALIFIER_DIV
+ qualifiers = div+div.join([self._qualifier_cell(l,p) for
+ (l,p) in self.qualifiers])+div
+ else:
+ rowspan = 1
+ qualifiers = ''
+
+ # Put it all together.
+ return self._LABEL % (rowspan, self.bgcolor, classname,
+ attributes, operations, qualifiers)
+
+ def to_dotfile(self):
+ attribs = ['%s="%s"' % (k,v) for (k,v) in self._attribs.items()]
+ attribs.append('label=<%s>' % self._get_html_label())
+ s = 'node%d%s' % (self.id, ' [%s]' % (','.join(attribs)))
+ if not self.collapsed:
+ for edge in self.edges:
+ s += '\n' + edge.to_dotfile()
+ return s
+
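The `cells[max-2:-1] = ['...']` truncation in `_get_html_label` above relies on Python slice assignment: everything between the first `max-2` rows and the final row collapses into a single `...` row. A minimal sketch of that idiom (the helper name is illustrative, not part of epydoc):

```python
def truncate_cells(cells, max_cells):
    # Illustrative helper, not part of epydoc: keep the first
    # max_cells-2 rows and the final row, and collapse everything
    # in between into a single '...' row via slice assignment.
    cells = list(cells)
    if len(cells) > max_cells:
        cells[max_cells - 2:-1] = ['<TR><TD>...</TD></TR>']
    return cells

rows = ['r%d' % i for i in range(6)]
truncated = truncate_cells(rows, 4)
# truncated == ['r0', 'r1', '<TR><TD>...</TD></TR>', 'r5']
```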
+class DotGraphUmlModuleNode(DotGraphNode):
+ """
+    A specialized dot graph node used to display `ModuleDoc`\s using
+ UML notation. Simple module nodes look like::
+
+ .----.
+ +------------+
+ | modulename |
+ +------------+
+
+    Package nodes are drawn with their modules & subpackages nested
+ inside::
+
+ .----.
+ +----------------------------------------+
+ | packagename |
+ | |
+ | .----. .----. .----. |
+ | +---------+ +---------+ +---------+ |
+ | | module1 | | module2 | | module3 | |
+ | +---------+ +---------+ +---------+ |
+ | |
+ +----------------------------------------+
+
+ """
+ def __init__(self, module_doc, linker, context, collapsed=False,
+ excluded_submodules=(), **options):
+ self.module_doc = module_doc
+ self.linker = linker
+ self.context = context
+ self.collapsed = collapsed
+ self.options = options
+ self.excluded_submodules = excluded_submodules
+ DotGraphNode.__init__(self, shape='plaintext',
+ href=linker.url_for(module_doc) or NOOP_URL,
+ tooltip=module_doc.canonical_name)
+
+ #: Expects: (color, color, url, tooltip, body)
+ _MODULE_LABEL = '''
+ <TABLE BORDER="0" CELLBORDER="0" CELLSPACING="0" ALIGN="LEFT">
+ <TR><TD ALIGN="LEFT" VALIGN="BOTTOM" HEIGHT="8" WIDTH="16"
+ FIXEDSIZE="true" BGCOLOR="%s" BORDER="1" PORT="tab"></TD></TR>
+ <TR><TD ALIGN="LEFT" VALIGN="TOP" BGCOLOR="%s" BORDER="1" WIDTH="20"
+ PORT="body" HREF="%s" TOOLTIP="%s">%s</TD></TR>
+ </TABLE>'''
+
+ #: Expects: (name, body_rows)
+ _NESTED_BODY = '''
+ <TABLE BORDER="0" CELLBORDER="0" CELLPADDING="0" CELLSPACING="0">
+ <TR><TD ALIGN="LEFT">%s</TD></TR>
+ %s
+ </TABLE>'''
+
+ #: Expects: (cells,)
+ _NESTED_BODY_ROW = '''
+ <TR><TD>
+ <TABLE BORDER="0" CELLBORDER="0"><TR>%s</TR></TABLE>
+ </TD></TR>'''
+
+ def _get_html_label(self, package):
+ """
+ :Return: (label, depth, width) where:
+
+ - ``label`` is the HTML label
+ - ``depth`` is the depth of the package tree (for coloring)
+ - ``width`` is the max width of the HTML label, roughly in
+ units of characters.
+ """
+ MAX_ROW_WIDTH = 80 # unit is roughly characters.
+ pkg_name = package.canonical_name
+ pkg_url = self.linker.url_for(package) or NOOP_URL
+
+ if (not package.is_package or len(package.submodules) == 0 or
+ self.collapsed):
+ pkg_color = self._color(package, 1)
+ label = self._MODULE_LABEL % (pkg_color, pkg_color,
+ pkg_url, pkg_name, pkg_name[-1])
+ return (label, 1, len(pkg_name[-1])+3)
+
+ # Get the label for each submodule, and divide them into rows.
+ row_list = ['']
+ row_width = 0
+ max_depth = 0
+ max_row_width = len(pkg_name[-1])+3
+ for submodule in package.submodules:
+ if submodule in self.excluded_submodules: continue
+ # Get the submodule's label.
+ label, depth, width = self._get_html_label(submodule)
+ # Check if we should start a new row.
+ if row_width > 0 and width+row_width > MAX_ROW_WIDTH:
+ row_list.append('')
+ row_width = 0
+ # Add the submodule's label to the row.
+ row_width += width
+ row_list[-1] += '<TD ALIGN="LEFT">%s</TD>' % label
+ # Update our max's.
+ max_depth = max(depth, max_depth)
+ max_row_width = max(row_width, max_row_width)
+
+ # Figure out which color to use.
+        pkg_color = self._color(package, max_depth+1)
+
+ # Assemble & return the label.
+ rows = ''.join([self._NESTED_BODY_ROW % r for r in row_list])
+ body = self._NESTED_BODY % (pkg_name, rows)
+ label = self._MODULE_LABEL % (pkg_color, pkg_color,
+ pkg_url, pkg_name, body)
+ return label, max_depth+1, max_row_width
+
+ _COLOR_DIFF = 24
+ def _color(self, package, depth):
+ if package == self.context: return SELECTED_BG
+ else:
+ # Parse the base color.
+            if re.match(r'#[0-9a-fA-F]{6}$', MODULE_BG):
+ base = int(MODULE_BG[1:], 16)
+ else:
+ base = int('d8e8ff', 16)
+ red = (base & 0xff0000) >> 16
+ green = (base & 0x00ff00) >> 8
+ blue = (base & 0x0000ff)
+ # Make it darker with each level of depth. (but not *too*
+ # dark -- package name needs to be readable)
+ red = max(64, red-(depth-1)*self._COLOR_DIFF)
+ green = max(64, green-(depth-1)*self._COLOR_DIFF)
+ blue = max(64, blue-(depth-1)*self._COLOR_DIFF)
+ # Convert it back to a color string
+ return '#%06x' % ((red<<16)+(green<<8)+blue)
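The channel arithmetic in `_color` can be exercised on its own. A sketch mirroring the same darkening computation (the function name and keyword defaults are illustrative; in the code above the step is `_COLOR_DIFF = 24` and the floor is 64):

```python
def darken(color, depth, step=24, floor=64):
    # Illustrative helper, not part of epydoc: split a '#rrggbb'
    # color into channels, subtract `step` per nesting level,
    # clamp at `floor` so labels stay readable, and re-pack.
    base = int(color[1:], 16)
    red = (base & 0xff0000) >> 16
    green = (base & 0x00ff00) >> 8
    blue = base & 0x0000ff
    red = max(floor, red - (depth - 1) * step)
    green = max(floor, green - (depth - 1) * step)
    blue = max(floor, blue - (depth - 1) * step)
    return '#%06x' % ((red << 16) + (green << 8) + blue)

# Depth 1 leaves the base color unchanged:
# darken('#d8e8ff', 1) -> '#d8e8ff'
# Each extra level darkens every channel by 24:
# darken('#d8e8ff', 2) -> '#c0d0e7'
```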
+
+ def to_dotfile(self):
+ attribs = ['%s="%s"' % (k,v) for (k,v) in self._attribs.items()]
+ label, depth, width = self._get_html_label(self.module_doc)
+ attribs.append('label=<%s>' % label)
+ return 'node%d%s' % (self.id, ' [%s]' % (','.join(attribs)))
+
+
+
+######################################################################
+#{ Graph Generation Functions
+######################################################################
+
+def package_tree_graph(packages, linker, context=None, **options):
+ """
+ Return a `DotGraph` that graphically displays the package
+ hierarchies for the given packages.
+ """
+ if options.get('style', 'uml') == 'uml': # default to uml style?
+ if get_dot_version() >= [2]:
+ return uml_package_tree_graph(packages, linker, context,
+ **options)
+ elif 'style' in options:
+ log.warning('UML style package trees require dot version 2.0+')
+
+ graph = DotGraph('Package Tree for %s' % name_list(packages, context),
+ body='ranksep=.3\n;nodesep=.1\n',
+ edge_defaults={'dir':'none'})
+
+ # Options
+ if options.get('dir', 'TB') != 'TB': # default: top-to-bottom
+ graph.body += 'rankdir=%s\n' % options.get('dir', 'TB')
+
+ # Get a list of all modules in the package.
+ queue = list(packages)
+ modules = set(packages)
+ for module in queue:
+ queue.extend(module.submodules)
+ modules.update(module.submodules)
+
+ # Add a node for each module.
+ nodes = add_valdoc_nodes(graph, modules, linker, context)
+
+ # Add an edge for each package/submodule relationship.
+ for module in modules:
+ for submodule in module.submodules:
+ graph.edges.append(DotGraphEdge(nodes[module], nodes[submodule],
+ headport='tab'))
+
+ return graph
+
+def uml_package_tree_graph(packages, linker, context=None, **options):
+ """
+ Return a `DotGraph` that graphically displays the package
+ hierarchies for the given packages as a nested set of UML
+ symbols.
+ """
+ graph = DotGraph('Package Tree for %s' % name_list(packages, context))
+ # Remove any packages whose containers are also in the list.
+ root_packages = []
+ for package1 in packages:
+ for package2 in packages:
+ if (package1 is not package2 and
+ package2.canonical_name.dominates(package1.canonical_name)):
+ break
+ else:
+ root_packages.append(package1)
+ # If the context is a variable, then get its value.
+ if isinstance(context, VariableDoc) and context.value is not UNKNOWN:
+ context = context.value
+ # Return a graph with one node for each root package.
+ for package in root_packages:
+ graph.nodes.append(DotGraphUmlModuleNode(package, linker, context))
+ return graph
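`dominates()` here checks dotted-name containment, so a package is a root only if no other listed package contains it. A rough standalone approximation using plain dotted strings in place of `DottedName` objects (sample names are illustrative):

```python
def root_packages(names):
    # Illustrative approximation, not part of epydoc: a name is a
    # root unless some other listed name is a proper dotted-name
    # prefix of it (the role DottedName.dominates() plays above).
    roots = []
    for name in names:
        contained = False
        for other in names:
            if other != name and name.startswith(other + '.'):
                contained = True
                break
        if not contained:
            roots.append(name)
    return roots

# root_packages(['epydoc', 'epydoc.docwriter', 'numpy'])
# -> ['epydoc', 'numpy']
```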
+
+######################################################################
+def class_tree_graph(bases, linker, context=None, **options):
+ """
+ Return a `DotGraph` that graphically displays the class
+ hierarchy for the given classes. Options:
+
+ - exclude
+      - dir: LR|RL|BT requests a left-to-right, right-to-left, or
+        bottom-to-top drawing (corresponds to the dot option
+        'rankdir').
+ """
+ if isinstance(bases, ClassDoc): bases = [bases]
+ graph = DotGraph('Class Hierarchy for %s' % name_list(bases, context),
+ body='ranksep=0.3\n',
+ edge_defaults={'sametail':True, 'dir':'none'})
+
+ # Options
+ if options.get('dir', 'TB') != 'TB': # default: top-down
+ graph.body += 'rankdir=%s\n' % options.get('dir', 'TB')
+ exclude = options.get('exclude', ())
+
+ # Find all superclasses & subclasses of the given classes.
+ classes = set(bases)
+ queue = list(bases)
+ for cls in queue:
+ if isinstance(cls, ClassDoc):
+ if cls.subclasses not in (None, UNKNOWN):
+ subclasses = cls.subclasses
+ if exclude:
+ subclasses = [d for d in subclasses if d not in exclude]
+ queue.extend(subclasses)
+ classes.update(subclasses)
+ queue = list(bases)
+ for cls in queue:
+ if isinstance(cls, ClassDoc):
+ if cls.bases not in (None, UNKNOWN):
+ bases = cls.bases
+ if exclude:
+ bases = [d for d in bases if d not in exclude]
+ queue.extend(bases)
+ classes.update(bases)
+
+ # Add a node for each cls.
+ classes = [d for d in classes if isinstance(d, ClassDoc)
+ if d.pyval is not object]
+ nodes = add_valdoc_nodes(graph, classes, linker, context)
+
+ # Add an edge for each package/subclass relationship.
+ edges = set()
+ for cls in classes:
+ for subcls in cls.subclasses:
+ if cls in nodes and subcls in nodes:
+ edges.add((nodes[cls], nodes[subcls]))
+ graph.edges = [DotGraphEdge(src,dst) for (src,dst) in edges]
+
+ return graph
+
+######################################################################
+def uml_class_tree_graph(class_doc, linker, context=None, **options):
+ """
+ Return a `DotGraph` that graphically displays the class hierarchy
+ for the given class, using UML notation. Options:
+
+ - max_attributes
+ - max_operations
+ - show_private_vars
+ - show_magic_vars
+ - link_attributes
+ """
+ nodes = {} # ClassDoc -> DotGraphUmlClassNode
+ exclude = options.get('exclude', ())
+
+ # Create nodes for class_doc and all its bases.
+ for cls in class_doc.mro():
+ if cls.pyval is object: continue # don't include `object`.
+ if cls in exclude: break # stop if we get to an excluded class.
+ if cls == class_doc: color = SELECTED_BG
+ else: color = BASECLASS_BG
+ nodes[cls] = DotGraphUmlClassNode(cls, linker, context,
+ show_inherited_vars=False,
+ collapsed=False, bgcolor=color)
+
+ # Create nodes for all class_doc's subclasses.
+ queue = [class_doc]
+ for cls in queue:
+ if (isinstance(cls, ClassDoc) and
+ cls.subclasses not in (None, UNKNOWN)):
+ for subcls in cls.subclasses:
+ subcls_name = subcls.canonical_name[-1]
+ if subcls not in nodes and subcls not in exclude:
+ queue.append(subcls)
+ nodes[subcls] = DotGraphUmlClassNode(
+ subcls, linker, context, collapsed=True,
+ bgcolor=SUBCLASS_BG)
+
+ # Only show variables in the class where they're defined for
+ # *class_doc*.
+ mro = class_doc.mro()
+ for name, var in class_doc.variables.items():
+ i = mro.index(var.container)
+ for base in mro[i+1:]:
+ if base.pyval is object: continue # don't include `object`.
+ overridden_var = base.variables.get(name)
+ if overridden_var and overridden_var.container == base:
+ try:
+ if isinstance(overridden_var.value, RoutineDoc):
+ nodes[base].operations.remove(overridden_var)
+ else:
+ nodes[base].attributes.remove(overridden_var)
+ except ValueError:
+ pass # var is filtered (eg private or magic)
+
+ # Keep track of which nodes are part of the inheritance graph
+ # (since link_attributes might add new nodes)
+ inheritance_nodes = set(nodes.values())
+
+ # Turn attributes into links.
+ if options.get('link_attributes', True):
+ for node in nodes.values():
+ node.link_attributes(nodes)
+ # Make sure that none of the new attribute edges break the
+ # rank ordering assigned by inheritance.
+ for edge in node.edges:
+ if edge.end in inheritance_nodes:
+ edge['constraint'] = 'False'
+
+ # Construct the graph.
+ graph = DotGraph('UML class diagram for %s' % class_doc.canonical_name,
+ body='ranksep=.2\n;nodesep=.3\n')
+ graph.nodes = nodes.values()
+
+ # Add inheritance edges.
+ for node in inheritance_nodes:
+ for base in node.class_doc.bases:
+ if base in nodes:
+ graph.edges.append(DotGraphEdge(nodes[base], node,
+ dir='back', arrowtail='empty',
+ headport='body', tailport='body',
+ color=INH_LINK_COLOR, weight=100,
+ style='bold'))
+
+ # And we're done!
+ return graph
+
+######################################################################
+def import_graph(modules, docindex, linker, context=None, **options):
+ graph = DotGraph('Import Graph', body='ranksep=.3\n;nodesep=.3\n')
+
+ # Options
+ if options.get('dir', 'RL') != 'TB': # default: right-to-left.
+ graph.body += 'rankdir=%s\n' % options.get('dir', 'RL')
+
+ # Add a node for each module.
+ nodes = add_valdoc_nodes(graph, modules, linker, context)
+
+ # Edges.
+ edges = set()
+ for dst in modules:
+ if dst.imports in (None, UNKNOWN): continue
+ for var_name in dst.imports:
+ for i in range(len(var_name), 0, -1):
+ val_doc = docindex.find(var_name[:i], context)
+ if isinstance(val_doc, ModuleDoc):
+ if val_doc in nodes and dst in nodes:
+ edges.add((nodes[val_doc], nodes[dst]))
+ break
+ graph.edges = [DotGraphEdge(src,dst) for (src,dst) in edges]
+
+ return graph
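The inner loop above resolves each imported dotted name by trying its longest prefix first and stopping at the first known module. The same search, sketched over a plain set of module names (the helper and sample names are illustrative):

```python
def resolve_module(parts, known_modules):
    # Illustrative helper, not part of epydoc: try the longest
    # dotted prefix first, then shorter ones, mirroring the
    # range(len(name), 0, -1) loop in import_graph().
    for i in range(len(parts), 0, -1):
        candidate = '.'.join(parts[:i])
        if candidate in known_modules:
            return candidate
    return None

known = {'epydoc', 'epydoc.docwriter'}
# 'epydoc.docwriter.dotgraph.DotGraph' resolves to its longest
# known prefix, 'epydoc.docwriter'.
```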
+
+######################################################################
+def call_graph(api_docs, docindex, linker, context=None, **options):
+ """
+ :param options:
+ - ``dir``: rankdir for the graph. (default=LR)
+ - ``add_callers``: also include callers for any of the
+ routines in ``api_docs``. (default=False)
+ - ``add_callees``: also include callees for any of the
+ routines in ``api_docs``. (default=False)
+ :todo: Add an ``exclude`` option?
+ """
+ if docindex.callers is None:
+ log.warning("No profiling information for call graph!")
+ return DotGraph('Call Graph') # return None instead?
+
+ if isinstance(context, VariableDoc):
+ context = context.value
+
+ # Get the set of requested functions.
+ functions = []
+ for api_doc in api_docs:
+ # If it's a variable, get its value.
+ if isinstance(api_doc, VariableDoc):
+ api_doc = api_doc.value
+ # Add the value to the functions list.
+ if isinstance(api_doc, RoutineDoc):
+ functions.append(api_doc)
+ elif isinstance(api_doc, NamespaceDoc):
+ for vardoc in api_doc.variables.values():
+ if isinstance(vardoc.value, RoutineDoc):
+ functions.append(vardoc.value)
+
+ # Filter out functions with no callers/callees?
+    # [xx] this isn't quite right, esp. if the add_callers or
+    # add_callees options are false.
+ functions = [f for f in functions if
+ (f in docindex.callers) or (f in docindex.callees)]
+
+ # Add any callers/callees of the selected functions
+ func_set = set(functions)
+ if options.get('add_callers', False) or options.get('add_callees', False):
+ for func_doc in functions:
+ if options.get('add_callers', False):
+ func_set.update(docindex.callers.get(func_doc, ()))
+ if options.get('add_callees', False):
+ func_set.update(docindex.callees.get(func_doc, ()))
+
+ graph = DotGraph('Call Graph for %s' % name_list(api_docs, context),
+ node_defaults={'shape':'box', 'width': 0, 'height': 0})
+
+ # Options
+ if options.get('dir', 'LR') != 'TB': # default: left-to-right
+ graph.body += 'rankdir=%s\n' % options.get('dir', 'LR')
+
+ nodes = add_valdoc_nodes(graph, func_set, linker, context)
+
+ # Find the edges.
+ edges = set()
+ for func_doc in functions:
+ for caller in docindex.callers.get(func_doc, ()):
+ if caller in nodes:
+ edges.add( (nodes[caller], nodes[func_doc]) )
+ for callee in docindex.callees.get(func_doc, ()):
+ if callee in nodes:
+ edges.add( (nodes[func_doc], nodes[callee]) )
+ graph.edges = [DotGraphEdge(src,dst) for (src,dst) in edges]
+
+ return graph
+
+######################################################################
+#{ Dot Version
+######################################################################
+
+_dot_version = None
+_DOT_VERSION_RE = re.compile(r'dot version ([\d\.]+)')
+def get_dot_version():
+ global _dot_version
+ if _dot_version is None:
+ try:
+ out, err = run_subprocess([DOT_COMMAND, '-V'])
+ version_info = err or out
+ m = _DOT_VERSION_RE.match(version_info)
+ if m:
+ _dot_version = [int(x) for x in m.group(1).split('.')]
+ else:
+                _dot_version = [0]
+ except OSError, e:
+            _dot_version = [0]
+ log.info('Detected dot version %s' % _dot_version)
+ return _dot_version
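The list-of-ints representation is what makes gates like `get_dot_version() >= [2]` work, since Python compares lists element-wise. A sketch of the parsing step against a sample `dot -V` banner (the sample string is illustrative, not captured from a real run):

```python
import re

DOT_VERSION_RE = re.compile(r'dot version ([\d\.]+)')

def parse_dot_version(version_info):
    # Illustrative helper, not part of epydoc: 'dot -V' reports its
    # version on stderr; extract the dotted number and split it
    # into a list of ints so it can be compared element-wise.
    m = DOT_VERSION_RE.match(version_info)
    if m:
        return [int(x) for x in m.group(1).split('.')]
    return [0]

version = parse_dot_version('dot version 2.38.0 (20140413.2041)')
# version == [2, 38, 0], and list comparison gives version >= [2]
```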
+
+######################################################################
+#{ Helper Functions
+######################################################################
+
+def add_valdoc_nodes(graph, val_docs, linker, context):
+ """
+ :todo: Use different node styles for different subclasses of APIDoc
+ """
+ nodes = {}
+ val_docs = sorted(val_docs, key=lambda d:d.canonical_name)
+ for i, val_doc in enumerate(val_docs):
+ label = val_doc.canonical_name.contextualize(context.canonical_name)
+ node = nodes[val_doc] = DotGraphNode(label)
+ graph.nodes.append(node)
+ specialize_valdoc_node(node, val_doc, context, linker.url_for(val_doc))
+ return nodes
+
+NOOP_URL = 'javascript:void(0);'
+MODULE_NODE_HTML = '''
+ <TABLE BORDER="0" CELLBORDER="0" CELLSPACING="0"
+ CELLPADDING="0" PORT="table" ALIGN="LEFT">
+ <TR><TD ALIGN="LEFT" VALIGN="BOTTOM" HEIGHT="8" WIDTH="16" FIXEDSIZE="true"
+ BGCOLOR="%s" BORDER="1" PORT="tab"></TD></TR>
+ <TR><TD ALIGN="LEFT" VALIGN="TOP" BGCOLOR="%s" BORDER="1"
+ PORT="body" HREF="%s" TOOLTIP="%s">%s</TD></TR>
+ </TABLE>'''.strip()
+
+def specialize_valdoc_node(node, val_doc, context, url):
+ """
+    Update the style attributes of `node` to reflect its type
+ and context.
+ """
+ # We can only use html-style nodes if dot_version>2.
+ dot_version = get_dot_version()
+
+ # If val_doc or context is a variable, get its value.
+ if isinstance(val_doc, VariableDoc) and val_doc.value is not UNKNOWN:
+ val_doc = val_doc.value
+ if isinstance(context, VariableDoc) and context.value is not UNKNOWN:
+ context = context.value
+
+ # Set the URL. (Do this even if it points to the page we're
+ # currently on; otherwise, the tooltip is ignored.)
+ node['href'] = url or NOOP_URL
+
+ if isinstance(val_doc, ModuleDoc) and dot_version >= [2]:
+ node['shape'] = 'plaintext'
+ if val_doc == context: color = SELECTED_BG
+ else: color = MODULE_BG
+ node['tooltip'] = node['label']
+ node['html_label'] = MODULE_NODE_HTML % (color, color, url,
+ val_doc.canonical_name,
+ node['label'])
+ node['width'] = node['height'] = 0
+ node.port = 'body'
+
+ elif isinstance(val_doc, RoutineDoc):
+ node['shape'] = 'box'
+ node['style'] = 'rounded'
+ node['width'] = 0
+ node['height'] = 0
+ node['label'] = '%s()' % node['label']
+ node['tooltip'] = node['label']
+ if val_doc == context:
+ node['fillcolor'] = SELECTED_BG
+ node['style'] = 'filled,rounded,bold'
+
+ else:
+ node['shape'] = 'box'
+ node['width'] = 0
+ node['height'] = 0
+ node['tooltip'] = node['label']
+ if val_doc == context:
+ node['fillcolor'] = SELECTED_BG
+ node['style'] = 'filled,bold'
+
+def name_list(api_docs, context=None):
+ if context is not None:
+ context = context.canonical_name
+ names = [str(d.canonical_name.contextualize(context)) for d in api_docs]
+ if len(names) == 0: return ''
+ if len(names) == 1: return '%s' % names[0]
+ elif len(names) == 2: return '%s and %s' % (names[0], names[1])
+ else:
+ return '%s, and %s' % (', '.join(names[:-1]), names[-1])
+
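`name_list` reduces to a small English-join rule: nothing, one name, "a and b", or an Oxford-comma list. The same logic over plain strings (the helper name is illustrative):

```python
def join_names(names):
    # Illustrative helper, not part of epydoc: English-style list
    # joining as used for graph titles.
    if len(names) == 0:
        return ''
    if len(names) == 1:
        return names[0]
    elif len(names) == 2:
        return '%s and %s' % (names[0], names[1])
    else:
        return '%s, and %s' % (', '.join(names[:-1]), names[-1])

# join_names(['epydoc', 'docwriter', 'dotgraph'])
# -> 'epydoc, docwriter, and dotgraph'
```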
diff --git a/python/helpers/epydoc/docwriter/html.py b/python/helpers/epydoc/docwriter/html.py
new file mode 100644
index 0000000..e7791ef
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/html.py
@@ -0,0 +1,3491 @@
+#
+# epydoc -- HTML output generator
+# Edward Loper
+#
+# Created [01/30/01 05:18 PM]
+# $Id: html.py 1674 2008-01-29 06:03:36Z edloper $
+#
+
+"""
+The HTML output generator for epydoc. The main interface provided by
+this module is the L{HTMLWriter} class.
+
+@todo: Add a cache to L{HTMLWriter.url()}?
+"""
+__docformat__ = 'epytext en'
+
+import re, os, sys, codecs, sre_constants, pprint, base64
+import urllib
+import __builtin__
+from epydoc.apidoc import *
+import epydoc.docstringparser
+import time, epydoc, epydoc.markup, epydoc.markup.epytext
+from epydoc.docwriter.html_colorize import PythonSourceColorizer
+from epydoc.docwriter import html_colorize
+from epydoc.docwriter.html_css import STYLESHEETS
+from epydoc.docwriter.html_help import HTML_HELP
+from epydoc.docwriter.dotgraph import *
+from epydoc import log
+from epydoc.util import plaintext_to_html, is_src_filename
+from epydoc.compat import * # Backwards compatibility
+
+######################################################################
+## Template Compiler
+######################################################################
+# The compile_template() method defined in this section is used to
+# define several of HTMLWriter's methods.
+
+def compile_template(docstring, template_string,
+ output_function='out', debug=epydoc.DEBUG):
+ """
+ Given a template string containing inline python source code,
+ return a python function that will fill in the template, and
+ output the result. The signature for this function is taken from
+ the first line of C{docstring}. Output is generated by making
+ repeated calls to the output function with the given name (which
+ is typically one of the function's parameters).
+
+ The templating language used by this function passes through all
+ text as-is, with three exceptions:
+
+ - If every line in the template string is indented by at least
+ M{x} spaces, then the first M{x} spaces are stripped from each
+ line.
+
+ - Any line that begins with '>>>' (with no indentation)
+ should contain python code, and will be inserted as-is into
+ the template-filling function. If the line begins a control
+ block (such as 'if' or 'for'), then the control block will
+ be closed by the first '>>>'-marked line whose indentation is
+ less than or equal to the line's own indentation (including
+ lines that only contain comments.)
+
+ - In any other line, any expression between two '$' signs will
+ be evaluated and inserted into the line (using C{str()} to
+ convert the result to a string).
+
+ Here is a simple example:
+
+ >>> TEMPLATE = '''
+ ... <book>
+ ... <title>$book.title$</title>
+ ... <pages>$book.count_pages()$</pages>
+ ... >>> for chapter in book.chapters:
+ ... <chaptername>$chapter.name$</chaptername>
+ ... >>> #endfor
+ ... </book>
+ >>> write_book = compile_template('write_book(out, book)', TEMPLATE)
+
+ @newfield acknowledgements: Acknowledgements
+ @acknowledgements: The syntax used by C{compile_template} is
+ loosely based on Cheetah.
+ """
+ # Extract signature from the docstring:
+ signature = docstring.lstrip().split('\n',1)[0].strip()
+ func_name = signature.split('(',1)[0].strip()
+
+ # Regexp to search for inline substitutions:
+ INLINE = re.compile(r'\$([^\$]+)\$')
+ # Regexp to search for python statements in the template:
+ COMMAND = re.compile(r'(^>>>.*)\n?', re.MULTILINE)
+
+ # Strip indentation from the template.
+ template_string = strip_indent(template_string)
+
+ # If we're debugging, then we'll store the generated function,
+ # so we can print it along with any tracebacks that depend on it.
+ if debug:
+ signature = re.sub(r'\)\s*$', ', __debug=__debug)', signature)
+
+    # Function declaration line
+ pysrc_lines = ['def %s:' % signature]
+ indents = [-1]
+
+ if debug:
+ pysrc_lines.append(' try:')
+ indents.append(-1)
+
+ commands = COMMAND.split(template_string.strip()+'\n')
+ for i, command in enumerate(commands):
+ if command == '': continue
+
+ # String literal segment:
+ if i%2 == 0:
+ pieces = INLINE.split(command)
+ for j, piece in enumerate(pieces):
+ if j%2 == 0:
+ # String piece
+ pysrc_lines.append(' '*len(indents)+
+ '%s(%r)' % (output_function, piece))
+ else:
+ # Variable piece
+ pysrc_lines.append(' '*len(indents)+
+ '%s(unicode(%s))' % (output_function, piece))
+
+ # Python command:
+ else:
+ srcline = command[3:].lstrip()
+ # Update indentation
+ indent = len(command)-len(srcline)
+ while indent <= indents[-1]: indents.pop()
+ # Add on the line.
+ srcline = srcline.rstrip()
+ pysrc_lines.append(' '*len(indents)+srcline)
+ if srcline.endswith(':'):
+ indents.append(indent)
+
+ if debug:
+ pysrc_lines.append(' except Exception,e:')
+ pysrc_lines.append(' pysrc, func_name = __debug ')
+ pysrc_lines.append(' lineno = sys.exc_info()[2].tb_lineno')
+ pysrc_lines.append(' print ("Exception in template %s() on "')
+ pysrc_lines.append(' "line %d:" % (func_name, lineno))')
+ pysrc_lines.append(' print pysrc[lineno-1]')
+ pysrc_lines.append(' raise')
+
+ pysrc = '\n'.join(pysrc_lines)+'\n'
+ #log.debug(pysrc)
+ if debug: localdict = {'__debug': (pysrc_lines, func_name)}
+ else: localdict = {}
+ try: exec pysrc in globals(), localdict
+ except SyntaxError:
+ log.error('Error in script:\n' + pysrc + '\n')
+ raise
+ template_func = localdict[func_name]
+ template_func.__doc__ = docstring
+ return template_func
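Both regexps above rely on a property of `re.split` with a capturing group: the captured text is kept in the result, so the pieces alternate between literal text (even indices) and captured text (odd indices), which is exactly how the `i%2`/`j%2` tests dispatch. A minimal sketch of the `$...$` substitution pass (the helper name is illustrative; it uses `eval`, much as the generated template functions effectively do):

```python
import re

INLINE = re.compile(r'\$([^\$]+)\$')

def render_line(line, namespace):
    # Illustrative helper, not part of epydoc: even-indexed pieces
    # are literal text; odd-indexed pieces are the expressions
    # captured from between the '$' signs.
    out = []
    for i, piece in enumerate(INLINE.split(line)):
        if i % 2 == 0:
            out.append(piece)
        else:
            out.append(str(eval(piece, namespace)))
    return ''.join(out)

# render_line('<title>$book["title"]$</title>',
#             {'book': {'title': 'Epydoc'}})
# -> '<title>Epydoc</title>'
```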
+
+def strip_indent(s):
+ """
+ Given a multiline string C{s}, find the minimum indentation for
+ all non-blank lines, and return a new string formed by stripping
+ that amount of indentation from all lines in C{s}.
+ """
+ # Strip indentation from the template.
+ minindent = sys.maxint
+ lines = s.split('\n')
+ for line in lines:
+ stripline = line.lstrip()
+ if stripline:
+ minindent = min(minindent, len(line)-len(stripline))
+ return '\n'.join([l[minindent:] for l in lines])
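Note that the minimum is taken over non-blank lines only, so blank lines never force the common indent to zero. A Python 3-compatible sketch of the same algorithm (avoiding `sys.maxint`, which no longer exists; the function name is illustrative):

```python
def strip_indent_demo(s):
    # Illustrative re-statement of strip_indent(): find the
    # smallest indentation among non-blank lines, then drop that
    # many leading characters from every line.
    lines = s.split('\n')
    minindent = min([len(l) - len(l.lstrip())
                     for l in lines if l.strip()] or [0])
    return '\n'.join([l[minindent:] for l in lines])

# strip_indent_demo('    a\n\n      b') -> 'a\n\n  b'
```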
+
+######################################################################
+## HTML Writer
+######################################################################
+
+class HTMLWriter:
+ #////////////////////////////////////////////////////////////
+ # Table of Contents
+ #////////////////////////////////////////////////////////////
+ #
+ # 1. Interface Methods
+ #
+ # 2. Page Generation -- write complete web page files
+ # 2.1. Module Pages
+ # 2.2. Class Pages
+ # 2.3. Trees Page
+ # 2.4. Indices Page
+ # 2.5. Help Page
+ # 2.6. Frames-based table of contents pages
+ # 2.7. Homepage (index.html)
+ # 2.8. CSS Stylesheet
+ # 2.9. Javascript file
+ # 2.10. Graphs
+ # 2.11. Images
+ #
+ # 3. Page Element Generation -- write pieces of a web page file
+ # 3.1. Page Header
+ # 3.2. Page Footer
+ # 3.3. Navigation Bar
+ # 3.4. Breadcrumbs
+ # 3.5. Summary Tables
+ #
+ # 4. Helper functions
+
+ def __init__(self, docindex, **kwargs):
+ """
+ Construct a new HTML writer, using the given documentation
+ index.
+
+ @param docindex: The documentation index.
+
+ @type prj_name: C{string}
+ @keyword prj_name: The name of the project. Defaults to
+ none.
+ @type prj_url: C{string}
+        @keyword prj_url: The target for the project homepage link on
+ the navigation bar. If C{prj_url} is not specified,
+ then no hyperlink is created.
+ @type prj_link: C{string}
+ @keyword prj_link: The label for the project link on the
+ navigation bar. This link can contain arbitrary HTML
+ code (e.g. images). By default, a label is constructed
+ from C{prj_name}.
+ @type top_page: C{string}
+ @keyword top_page: The top page for the documentation. This
+            is the default page shown in the main frame, when frames
+            are enabled.  C{top_page} can be a URL, the name of a
+ module, the name of a class, or one of the special
+ strings C{"trees.html"}, C{"indices.html"}, or
+ C{"help.html"}. By default, the top-level package or
+ module is used, if there is one; otherwise, C{"trees"}
+ is used.
+ @type css: C{string}
+ @keyword css: The CSS stylesheet file. If C{css} is a file
+ name, then the specified file's contents will be used.
+ Otherwise, if C{css} is the name of a CSS stylesheet in
+ L{epydoc.docwriter.html_css}, then that stylesheet will
+ be used. Otherwise, an error is reported. If no stylesheet
+ is specified, then the default stylesheet is used.
+ @type help_file: C{string}
+ @keyword help_file: The name of the help file. If no help file is
+ specified, then the default help file will be used.
+ @type show_private: C{boolean}
+ @keyword show_private: Whether to create documentation for
+ private objects. By default, private objects are documented.
+ @type show_frames: C{boolean}
+ @keyword show_frames: Whether to create a frames-based table of
+ contents. By default, it is produced.
+ @type show_imports: C{boolean}
+ @keyword show_imports: Whether or not to display lists of
+ imported functions and classes. By default, they are
+ not shown.
+ @type variable_maxlines: C{int}
+ @keyword variable_maxlines: The maximum number of lines that
+ should be displayed for the value of a variable in the
+ variable details section. By default, 8 lines are
+ displayed.
+ @type variable_linelength: C{int}
+ @keyword variable_linelength: The maximum line length used for
+ displaying the values of variables in the variable
+ details sections. If a line is longer than this length,
+ then it will be wrapped to the next line. The default
+ line length is 70 characters.
+ @type variable_summary_linelength: C{int}
+ @keyword variable_summary_linelength: The maximum line length
+ used for displaying the values of variables in the summary
+ section. If a line is longer than this length, then it
+ will be truncated. The default is 65 characters.
+ @type variable_tooltip_linelength: C{int}
+ @keyword variable_tooltip_linelength: The maximum line length
+ used for tooltips for the values of variables. If a
+ line is longer than this length, then it will be
+ truncated. The default is 600 characters.
+ @type property_function_linelength: C{int}
+ @keyword property_function_linelength: The maximum line length
+ used to display property functions (C{fget}, C{fset}, and
+ C{fdel}) that contain something other than a function
+ object. The default length is 40 characters.
+ @type inheritance: C{string}
+ @keyword inheritance: How inherited objects should be displayed.
+ If C{inheritance='grouped'}, then inherited objects are
+ gathered into groups; if C{inheritance='listed'}, then
+ inherited objects are listed in a short list at the
+ end of their group; if C{inheritance='included'}, then
+ inherited objects are mixed in with non-inherited
+ objects. The default is 'listed'.
+ @type include_source_code: C{boolean}
+ @keyword include_source_code: If true, then generate colorized
+ source code files for each python module.
+ @type include_log: C{boolean}
+ @keyword include_log: If true, then the footer will include an
+ href to the page 'epydoc-log.html'.
+ @type src_code_tab_width: C{int}
+ @keyword src_code_tab_width: Number of spaces to replace each tab
+ with in source code listings.
+ """
+ self.docindex = docindex
+
+ # Process keyword arguments.
+ self._show_private = kwargs.get('show_private', 1)
+ """Should private docs be included?"""
+
+ self._prj_name = kwargs.get('prj_name', None)
+ """The project's name (for the project link in the navbar)"""
+
+ self._prj_url = kwargs.get('prj_url', None)
+ """URL for the project link in the navbar"""
+
+ self._prj_link = kwargs.get('prj_link', None)
+ """HTML code for the project link in the navbar"""
+
+ self._top_page = kwargs.get('top_page', None)
+ """The 'main' page"""
+
+ self._css = kwargs.get('css')
+ """CSS stylesheet to use"""
+
+ self._helpfile = kwargs.get('help_file', None)
+ """Filename of file to extract help contents from"""
+
+ self._frames_index = kwargs.get('show_frames', 1)
+ """Should a frames index be created?"""
+
+ self._show_imports = kwargs.get('show_imports', False)
+ """Should imports be listed?"""
+
+ self._propfunc_linelen = kwargs.get('property_function_linelength', 40)
+ """[XXX] Not used!"""
+
+ self._variable_maxlines = kwargs.get('variable_maxlines', 8)
+ """Max lines for variable values"""
+
+ self._variable_linelen = kwargs.get('variable_linelength', 70)
+ """Max line length for variable values"""
+
+ self._variable_summary_linelen = \
+ kwargs.get('variable_summary_linelength', 65)
+ """Max length for variable value summaries"""
+
+ self._variable_tooltip_linelen = \
+ kwargs.get('variable_tooltip_linelength', 600)
+ """Max length for variable tooltips"""
+
+ self._inheritance = kwargs.get('inheritance', 'listed')
+ """How should inheritance be displayed? 'listed', 'included',
+ or 'grouped'"""
+
+ self._incl_sourcecode = kwargs.get('include_source_code', True)
+ """Should pages be generated for source code of modules?"""
+
+ self._mark_docstrings = kwargs.get('mark_docstrings', False)
+ """Wrap <span class='docstring'>...</span> around docstrings?"""
+
+ self._graph_types = kwargs.get('graphs', ()) or ()
+ """Graphs that we should include in our output."""
+
+ self._include_log = kwargs.get('include_log', False)
+ """Are we generating an HTML log page?"""
+
+ self._src_code_tab_width = kwargs.get('src_code_tab_width', 8)
+ """Number of spaces to replace each tab with in source code
+ listings."""
+
+ self._callgraph_cache = {}
+ """Map the callgraph L{uid<DotGraph.uid>} to their HTML
+ representation."""
+
+ self._redundant_details = kwargs.get('redundant_details', False)
+ """If true, then include objects in the details list even if all
+ info about them is already provided by the summary table."""
+
+ # For use with select_variables():
+ if self._show_private:
+ self._public_filter = None
+ else:
+ self._public_filter = True
+
+ # Make sure inheritance has a sane value.
+ if self._inheritance not in ('listed', 'included', 'grouped'):
+ raise ValueError, 'Bad value for inheritance'
+
+ # Create the project homepage link, if it was not specified.
+ if (self._prj_name or self._prj_url) and not self._prj_link:
+ self._prj_link = plaintext_to_html(self._prj_name or
+ 'Project Homepage')
+
+ # Add a hyperlink to _prj_url, if _prj_link doesn't already
+ # contain any hyperlinks.
+ if (self._prj_link and self._prj_url and
+ not re.search(r'<a[^>]*\shref', self._prj_link)):
+ self._prj_link = ('<a class="navbar" target="_top" href="'+
+ self._prj_url+'">'+self._prj_link+'</a>')
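The two steps above (build a default label, then wrap it in an anchor only if it doesn't already contain one) can be sketched as a standalone function; `plaintext_to_html` here is a stub standing in for epydoc's real escaping helper:

```python
import re

def make_prj_link(prj_name=None, prj_url=None, prj_link=None):
    # Stub for epydoc's plaintext_to_html: escape HTML metacharacters.
    def plaintext_to_html(s):
        return (s.replace('&', '&amp;')
                 .replace('<', '&lt;').replace('>', '&gt;'))
    # Create the project homepage link, if it was not specified.
    if (prj_name or prj_url) and not prj_link:
        prj_link = plaintext_to_html(prj_name or 'Project Homepage')
    # Wrap in <a> only if the label has no hyperlink of its own.
    if (prj_link and prj_url and
        not re.search(r'<a[^>]*\shref', prj_link)):
        prj_link = ('<a class="navbar" target="_top" href="' +
                    prj_url + '">' + prj_link + '</a>')
    return prj_link
```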
+
+ # Precompute lists & sets of APIDoc objects that we're
+ # interested in.
+ self.valdocs = valdocs = sorted(docindex.reachable_valdocs(
+ imports=False, packages=False, bases=False, submodules=False,
+ subclasses=False, private=self._show_private))
+ self.module_list = [d for d in valdocs if isinstance(d, ModuleDoc)]
+ """The list of L{ModuleDoc}s for the documented modules."""
+ self.module_set = set(self.module_list)
+ """The set of L{ModuleDoc}s for the documented modules."""
+ self.class_list = [d for d in valdocs if isinstance(d, ClassDoc)]
+ """The list of L{ClassDoc}s for the documented classes."""
+ self.class_set = set(self.class_list)
+ """The set of L{ClassDoc}s for the documented classes."""
+ self.routine_list = [d for d in valdocs if isinstance(d, RoutineDoc)]
+ """The list of L{RoutineDoc}s for the documented routines."""
+ self.indexed_docs = []
+ """The list of L{APIDoc}s for variables and values that should
+ be included in the index."""
+
+ # URL for 'trees' page
+ if self.module_list: self._trees_url = 'module-tree.html'
+ else: self._trees_url = 'class-tree.html'
+
+ # Construct the value for self.indexed_docs.
+ self.indexed_docs += [d for d in valdocs
+ if not isinstance(d, GenericValueDoc)]
+ for doc in valdocs:
+ if isinstance(doc, NamespaceDoc):
+ # add any vars with generic values; but don't include
+ # inherited vars.
+ self.indexed_docs += [d for d in doc.variables.values() if
+ isinstance(d.value, GenericValueDoc)
+ and d.container == doc]
+ self.indexed_docs.sort()
+
+ # Figure out the url for the top page.
+ self._top_page_url = self._find_top_page(self._top_page)
+
+ # Decide whether or not to split the identifier index.
+ self._split_ident_index = (len(self.indexed_docs) >=
+ self.SPLIT_IDENT_INDEX_SIZE)
+
+ # Figure out how many output files there will be (for progress
+ # reporting).
+ self.modules_with_sourcecode = set()
+ for doc in self.module_list:
+ if isinstance(doc, ModuleDoc) and is_src_filename(doc.filename):
+ self.modules_with_sourcecode.add(doc)
+ self._num_files = (len(self.class_list) + len(self.module_list) +
+ 10 + len(self.METADATA_INDICES))
+ if self._frames_index:
+ self._num_files += len(self.module_list) + 3
+
+ if self._incl_sourcecode:
+ self._num_files += len(self.modules_with_sourcecode)
+ if self._split_ident_index:
+ self._num_files += len(self.LETTERS)
+
+ def _find_top_page(self, pagename):
+ """
+ Find the top page for the API documentation. This page is
+ used as the default page shown in the main frame, when frames
+ are used. When frames are not used, this page is copied to
+ C{index.html}.
+
+ @param pagename: The name of the page, as specified by the
+ keyword argument C{top} to the constructor.
+ @type pagename: C{string}
+ @return: The URL of the top page.
+ @rtype: C{string}
+ """
+ # If a page name was specified, then we need to figure out
+ # what it points to.
+ if pagename:
+ # If it's a URL, then use it directly.
+ if pagename.lower().startswith(('http:', 'https:')):
+ return pagename
+
+ # If it's an object, then use that object's page.
+ try:
+ doc = self.docindex.get_valdoc(pagename)
+ return self.url(doc)
+ except:
+ pass
+
+ # Otherwise, give up.
+ log.warning('Could not find top page %r; using %s '
+ 'instead' % (pagename, self._trees_url))
+ return self._trees_url
+
+ # If no page name was specified, then try to choose one
+ # automatically.
+ else:
+ root = [val_doc for val_doc in self.docindex.root
+ if isinstance(val_doc, (ClassDoc, ModuleDoc))]
+ if len(root) == 0:
+ # No docs?? Try the trees page.
+ return self._trees_url
+ elif len(root) == 1:
+ # One item in the root; use that.
+ return self.url(root[0])
+ else:
+ # Multiple root items; if they're all in one package,
+ # then use that. Otherwise, use self._trees_url
+ root = sorted(root, key=lambda v:len(v.canonical_name))
+ top = root[0]
+ for doc in root[1:]:
+ if not top.canonical_name.dominates(doc.canonical_name):
+ return self._trees_url
+ else:
+ return self.url(top)
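The multi-root branch above leans on `DottedName.dominates`; modeling a canonical name as a tuple of components, the selection reduces to a prefix test. A sketch (not epydoc's actual `DottedName` API):

```python
def choose_top(roots, trees_url='module-tree.html'):
    # roots: canonical names as tuples of components, e.g.
    # ('pkg', 'sub', 'mod'). If the shortest name is an ancestor
    # (prefix) of every other root, it becomes the top page;
    # otherwise fall back to the trees page, as above.
    roots = sorted(roots, key=len)
    top = roots[0]
    for name in roots[1:]:
        if name[:len(top)] != top:   # top does not dominate name
            return trees_url
    return top
```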
+
+ #////////////////////////////////////////////////////////////
+ #{ 1. Interface Methods
+ #////////////////////////////////////////////////////////////
+
+ def write(self, directory=None):
+ """
+ Write the documentation to the given directory.
+
+ @type directory: C{string}
+ @param directory: The directory to which output should be
+ written. If no directory is specified, output will be
+ written to the current directory. If the directory does
+ not exist, it will be created.
+ @rtype: C{None}
+ @raise OSError: If C{directory} cannot be created.
+ @raise OSError: If any file cannot be created or written to.
+ """
+ # For progress reporting:
+ self._files_written = 0.
+
+ # Set the default values for ValueDoc formatted representations.
+ orig_valdoc_defaults = (ValueDoc.SUMMARY_REPR_LINELEN,
+ ValueDoc.REPR_LINELEN,
+ ValueDoc.REPR_MAXLINES)
+ ValueDoc.SUMMARY_REPR_LINELEN = self._variable_summary_linelen
+ ValueDoc.REPR_LINELEN = self._variable_linelen
+ ValueDoc.REPR_MAXLINES = self._variable_maxlines
+
+ # Use an image for the crarr symbol.
+ from epydoc.markup.epytext import ParsedEpytextDocstring
+ orig_crarr_html = ParsedEpytextDocstring.SYMBOL_TO_HTML['crarr']
+ ParsedEpytextDocstring.SYMBOL_TO_HTML['crarr'] = (
+ r'<span class="variable-linewrap">'
+ r'<img src="crarr.png" alt="\" /></span>')
+
+ # Keep track of failed xrefs, and report them at the end.
+ self._failed_xrefs = {}
+
+ # Create destination directories, if necessary
+ if not directory: directory = os.curdir
+ self._mkdir(directory)
+ self._directory = directory
+
+ # Write the CSS file.
+ self._files_written += 1
+ log.progress(self._files_written/self._num_files, 'epydoc.css')
+ self.write_css(directory, self._css)
+
+ # Write the Javascript file.
+ self._files_written += 1
+ log.progress(self._files_written/self._num_files, 'epydoc.js')
+ self.write_javascript(directory)
+
+ # Write images
+ self.write_images(directory)
+
+ # Build the indices.
+ indices = {'ident': self.build_identifier_index(),
+ 'term': self.build_term_index()}
+ for (name, label, label2) in self.METADATA_INDICES:
+ indices[name] = self.build_metadata_index(name)
+
+ # Write the identifier index. If requested, split it into
+ # separate pages for each letter.
+ ident_by_letter = self._group_by_letter(indices['ident'])
+ if not self._split_ident_index:
+ self._write(self.write_link_index, directory,
+ 'identifier-index.html', indices,
+ 'Identifier Index', 'identifier-index.html',
+ ident_by_letter)
+ else:
+ # Write a page for each section.
+ for letter in self.LETTERS:
+ filename = 'identifier-index-%s.html' % letter
+ self._write(self.write_link_index, directory, filename,
+ indices, 'Identifier Index', filename,
+ ident_by_letter, [letter],
+ 'identifier-index-%s.html')
+ # Use the first non-empty section as the main index page.
+ for letter in self.LETTERS:
+ if letter in ident_by_letter:
+ filename = 'identifier-index.html'
+ self._write(self.write_link_index, directory, filename,
+ indices, 'Identifier Index', filename,
+ ident_by_letter, [letter],
+ 'identifier-index-%s.html')
+ break
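`_group_by_letter` is not part of this hunk; purely as a hypothetical sketch of the bucketing that the per-letter index pages rely on (the real helper operates on index entries with URLs, not bare strings):

```python
def group_by_letter(names, letters='ABCDEFGHIJKLMNOPQRSTUVWXYZ_'):
    # Hypothetical stand-in: bucket names under the alphabetical
    # sections in LETTERS, filing anything else under '_' as a guess.
    by_letter = {}
    for name in sorted(names, key=str.lower):
        letter = name[0].upper()
        if letter not in letters:
            letter = '_'
        by_letter.setdefault(letter, []).append(name)
    return by_letter
```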
+
+ # Write the term index.
+ if indices['term']:
+ term_by_letter = self._group_by_letter(indices['term'])
+ self._write(self.write_link_index, directory, 'term-index.html',
+ indices, 'Term Definition Index',
+ 'term-index.html', term_by_letter)
+ else:
+ self._files_written += 1 # (skipped)
+
+ # Write the metadata indices.
+ for (name, label, label2) in self.METADATA_INDICES:
+ if indices[name]:
+ self._write(self.write_metadata_index, directory,
+ '%s-index.html' % name, indices, name,
+ label, label2)
+ else:
+ self._files_written += 1 # (skipped)
+
+ # Write the trees file (package & class hierarchies)
+ if self.module_list:
+ self._write(self.write_module_tree, directory, 'module-tree.html')
+ else:
+ self._files_written += 1 # (skipped)
+ if self.class_list:
+ self._write(self.write_class_tree, directory, 'class-tree.html')
+ else:
+ self._files_written += 1 # (skipped)
+
+ # Write the help file.
+ self._write(self.write_help, directory, 'help.html')
+
+ # Write the frames-based table of contents.
+ if self._frames_index:
+ self._write(self.write_frames_index, directory, 'frames.html')
+ self._write(self.write_toc, directory, 'toc.html')
+ self._write(self.write_project_toc, directory, 'toc-everything.html')
+ for doc in self.module_list:
+ filename = 'toc-%s' % urllib.unquote(self.url(doc))
+ self._write(self.write_module_toc, directory, filename, doc)
+
+ # Write the object documentation.
+ for doc in self.module_list:
+ filename = urllib.unquote(self.url(doc))
+ self._write(self.write_module, directory, filename, doc)
+ for doc in self.class_list:
+ filename = urllib.unquote(self.url(doc))
+ self._write(self.write_class, directory, filename, doc)
+
+ # Write source code files.
+ if self._incl_sourcecode:
+ # Build a map from short names to APIDocs, used when
+ # linking names in the source code.
+ name_to_docs = {}
+ for api_doc in self.indexed_docs:
+ if (api_doc.canonical_name is not None and
+ self.url(api_doc) is not None):
+ name = api_doc.canonical_name[-1]
+ name_to_docs.setdefault(name, []).append(api_doc)
+ # Sort each entry of the name_to_docs list.
+ for doc_list in name_to_docs.values():
+ doc_list.sort()
+ # Write the source code for each module.
+ for doc in self.modules_with_sourcecode:
+ filename = urllib.unquote(self.pysrc_url(doc))
+ self._write(self.write_sourcecode, directory, filename, doc,
+ name_to_docs)
+
+ # Write the auto-redirect page.
+ self._write(self.write_redirect_page, directory, 'redirect.html')
+
+ # Write the mapping object name -> URL
+ self._write(self.write_api_list, directory, 'api-objects.txt')
+
+ # Write the index.html files.
+ # (this must be done last, since it might copy another file)
+ self._files_written += 1
+ log.progress(self._files_written/self._num_files, 'index.html')
+ self.write_homepage(directory)
+
+ # Don't report references to builtins as missing
+ for k in self._failed_xrefs.keys(): # have a copy of keys
+ if hasattr(__builtin__, k):
+ del self._failed_xrefs[k]
+
+ # Report any failed crossreferences
+ if self._failed_xrefs:
+ estr = 'Failed identifier crossreference targets:\n'
+ failed_identifiers = self._failed_xrefs.keys()
+ failed_identifiers.sort()
+ for identifier in failed_identifiers:
+ names = self._failed_xrefs[identifier].keys()
+ names.sort()
+ estr += '- %s' % identifier
+ estr += '\n'
+ for name in names:
+ estr += ' (from %s)\n' % name
+ log.docstring_warning(estr)
+
+ # [xx] testing:
+ if self._num_files != int(self._files_written):
+ log.debug("Expected to write %d files, but actually "
+ "wrote %d files" %
+ (self._num_files, int(self._files_written)))
+
+ # Restore defaults that we changed.
+ (ValueDoc.SUMMARY_REPR_LINELEN, ValueDoc.REPR_LINELEN,
+ ValueDoc.REPR_MAXLINES) = orig_valdoc_defaults
+ ParsedEpytextDocstring.SYMBOL_TO_HTML['crarr'] = orig_crarr_html
+
+ def _write(self, write_func, directory, filename, *args):
+ # Display our progress.
+ self._files_written += 1
+ log.progress(self._files_written/self._num_files, filename)
+
+ path = os.path.join(directory, filename)
+ f = codecs.open(path, 'w', 'ascii', errors='xmlcharrefreplace')
+ write_func(f.write, *args)
+ f.close()
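The `codecs.open(..., 'ascii', errors='xmlcharrefreplace')` call is what keeps every generated page pure ASCII: any non-ASCII character in a docstring is written out as a numeric character reference instead of raising an encoding error. A small demonstration:

```python
import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.html')
f = codecs.open(path, 'w', 'ascii', errors='xmlcharrefreplace')
f.write(u'caf\xe9 \u2192 HTML')   # 'café → HTML'
f.close()

content = open(path).read()
print(content)   # caf&#233; &#8594; HTML
```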
+
+ def _mkdir(self, directory):
+ """
+ If the given directory does not exist, then attempt to create it.
+ @rtype: C{None}
+ """
+ if not os.path.isdir(directory):
+ if os.path.exists(directory):
+ raise OSError('%r is not a directory' % directory)
+ os.mkdir(directory)
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.1. Module Pages
+ #////////////////////////////////////////////////////////////
+
+ def write_module(self, out, doc):
+ """
+ Write an HTML page containing the API documentation for the
+ given module to C{out}.
+
+ @param doc: A L{ModuleDoc} containing the API documentation
+ for the module that should be described.
+ """
+ longname = doc.canonical_name
+ shortname = doc.canonical_name[-1]
+
+ # Write the page header (incl. navigation bar & breadcrumbs)
+ self.write_header(out, str(longname))
+ self.write_navbar(out, doc)
+ self.write_breadcrumbs(out, doc, self.url(doc))
+
+ # Write the name of the module we're describing.
+ if doc.is_package is True: typ = 'Package'
+ else: typ = 'Module'
+ if longname[0].startswith('script-'):
+ shortname = str(longname)[7:]
+ typ = 'Script'
+ out('<!-- ==================== %s ' % typ.upper() +
+ 'DESCRIPTION ==================== -->\n')
+ out('<h1 class="epydoc">%s %s</h1>' % (typ, shortname))
+ out('<p class="nomargin-top">%s</p>\n' % self.pysrc_link(doc))
+
+ # If the module has a description, then list it.
+ if doc.descr not in (None, UNKNOWN):
+ out(self.descr(doc, 2)+'\n\n')
+
+ # Write any standard metadata (todo, author, etc.)
+ if doc.metadata is not UNKNOWN and doc.metadata:
+ out('<hr />\n')
+ self.write_standard_fields(out, doc)
+
+ # If it's a package, then list the modules it contains.
+ if doc.is_package is True:
+ self.write_module_list(out, doc)
+
+ # Write summary tables describing the variables that the
+ # module defines.
+ self.write_summary_table(out, "Classes", doc, "class")
+ self.write_summary_table(out, "Functions", doc, "function")
+ self.write_summary_table(out, "Variables", doc, "other")
+
+ # Write a list of all imported objects.
+ if self._show_imports:
+ self.write_imports(out, doc)
+
+ # Write detailed descriptions of functions & variables defined
+ # in this module.
+ self.write_details_list(out, "Function Details", doc, "function")
+ self.write_details_list(out, "Variables Details", doc, "other")
+
+ # Write the page footer (including navigation bar)
+ self.write_navbar(out, doc)
+ self.write_footer(out)
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.??. Source Code Pages
+ #////////////////////////////////////////////////////////////
+
+ def write_sourcecode(self, out, doc, name_to_docs):
+ #t0 = time.time()
+
+ filename = doc.filename
+ name = str(doc.canonical_name)
+
+ # Header
+ self.write_header(out, name)
+ self.write_navbar(out, doc)
+ self.write_breadcrumbs(out, doc, self.pysrc_url(doc))
+
+ # Source code listing
+ out('<h1 class="epydoc">Source Code for %s</h1>\n' %
+ self.href(doc, label='%s %s' % (self.doc_kind(doc), name)))
+ out('<pre class="py-src">\n')
+ out(PythonSourceColorizer(filename, name, self.docindex,
+ self.url, name_to_docs,
+ self._src_code_tab_width).colorize())
+ out('</pre>\n<br />\n')
+
+ # Footer
+ self.write_navbar(out, doc)
+ self.write_footer(out)
+
+ #log.debug('[%6.2f sec] Wrote pysrc for %s' %
+ # (time.time()-t0, name))
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.2. Class Pages
+ #////////////////////////////////////////////////////////////
+
+ def write_class(self, out, doc):
+ """
+ Write an HTML page containing the API documentation for the
+ given class to C{out}.
+
+ @param doc: A L{ClassDoc} containing the API documentation
+ for the class that should be described.
+ """
+ longname = doc.canonical_name
+ shortname = doc.canonical_name[-1]
+
+ # Write the page header (incl. navigation bar & breadcrumbs)
+ self.write_header(out, str(longname))
+ self.write_navbar(out, doc)
+ self.write_breadcrumbs(out, doc, self.url(doc))
+
+ # Write the name of the class we're describing.
+ if doc.is_type(): typ = 'Type'
+ elif doc.is_exception(): typ = 'Exception'
+ else: typ = 'Class'
+ out('<!-- ==================== %s ' % typ.upper() +
+ 'DESCRIPTION ==================== -->\n')
+ out('<h1 class="epydoc">%s %s</h1>' % (typ, shortname))
+ out('<p class="nomargin-top">%s</p>\n' % self.pysrc_link(doc))
+
+ if ((doc.bases not in (UNKNOWN, None) and len(doc.bases) > 0) or
+ (doc.subclasses not in (UNKNOWN,None) and len(doc.subclasses)>0)):
+ # Display bases graphically, if requested.
+ if 'umlclasstree' in self._graph_types:
+ self.write_class_tree_graph(out, doc, uml_class_tree_graph)
+ elif 'classtree' in self._graph_types:
+ self.write_class_tree_graph(out, doc, class_tree_graph)
+
+ # Otherwise, use ascii-art.
+ else:
+ # Write the base class tree.
+ if doc.bases not in (UNKNOWN, None) and len(doc.bases) > 0:
+ out('<pre class="base-tree">\n%s</pre>\n\n' %
+ self.base_tree(doc))
+
+ # Write the known subclasses
+ if (doc.subclasses not in (UNKNOWN, None) and
+ len(doc.subclasses) > 0):
+ out('<dl><dt>Known Subclasses:</dt>\n<dd>\n ')
+ out(' <ul class="subclass-list">\n')
+ for i, subclass in enumerate(doc.subclasses):
+ href = self.href(subclass, context=doc)
+ if self._val_is_public(subclass): css = ''
+ else: css = ' class="private"'
+ if i > 0: href = ', '+href
+ out('<li%s>%s</li>' % (css, href))
+ out(' </ul>\n')
+ out('</dd></dl>\n\n')
+
+ out('<hr />\n')
+
+ # If the class has a description, then list it.
+ if doc.descr not in (None, UNKNOWN):
+ out(self.descr(doc, 2)+'\n\n')
+
+ # Write any standard metadata (todo, author, etc.)
+ if doc.metadata is not UNKNOWN and doc.metadata:
+ out('<hr />\n')
+ self.write_standard_fields(out, doc)
+
+ # Write summary tables describing the variables that the
+ # class defines.
+ self.write_summary_table(out, "Nested Classes", doc, "class")
+ self.write_summary_table(out, "Instance Methods", doc,
+ "instancemethod")
+ self.write_summary_table(out, "Class Methods", doc, "classmethod")
+ self.write_summary_table(out, "Static Methods", doc, "staticmethod")
+ self.write_summary_table(out, "Class Variables", doc,
+ "classvariable")
+ self.write_summary_table(out, "Instance Variables", doc,
+ "instancevariable")
+ self.write_summary_table(out, "Properties", doc, "property")
+
+ # Write a list of all imported objects.
+ if self._show_imports:
+ self.write_imports(out, doc)
+
+ # Write detailed descriptions of functions & variables defined
+ # in this class.
+ # [xx] why group methods into one section but split vars into two?
+ # seems like we should either group in both cases or split in both
+ # cases.
+ self.write_details_list(out, "Method Details", doc, "method")
+ self.write_details_list(out, "Class Variable Details", doc,
+ "classvariable")
+ self.write_details_list(out, "Instance Variable Details", doc,
+ "instancevariable")
+ self.write_details_list(out, "Property Details", doc, "property")
+
+ # Write the page footer (including navigation bar)
+ self.write_navbar(out, doc)
+ self.write_footer(out)
+
+ def write_class_tree_graph(self, out, doc, graphmaker):
+ """
+ Write HTML code for a class tree graph of C{doc} (a classdoc),
+ using C{graphmaker} to draw the actual graph. C{graphmaker}
+ should be L{class_tree_graph()}, or L{uml_class_tree_graph()},
+ or any other function with a compatible signature.
+
+ If the given class has any private subclasses (including
+ recursive subclasses), then two graph images will be generated
+ -- one to display when private values are shown, and the other
+ to display when private values are hidden.
+ """
+ linker = _HTMLDocstringLinker(self, doc)
+ private_subcls = self._private_subclasses(doc)
+ if private_subcls:
+ out('<center>\n'
+ ' <div class="private">%s</div>\n'
+ ' <div class="public" style="display:none">%s</div>\n'
+ '</center>\n' %
+ (self.render_graph(graphmaker(doc, linker, doc)),
+ self.render_graph(graphmaker(doc, linker, doc,
+ exclude=private_subcls))))
+ else:
+ out('<center>\n%s\n</center>\n' %
+ self.render_graph(graphmaker(doc, linker, doc)))
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.3. Trees pages
+ #////////////////////////////////////////////////////////////
+
+ def write_module_tree(self, out):
+ # Header material
+ self.write_treepage_header(out, 'Module Hierarchy', 'module-tree.html')
+ out('<h1 class="epydoc">Module Hierarchy</h1>\n')
+
+ # Write entries for all top-level modules/packages.
+ out('<ul class="nomargin-top">\n')
+ for doc in self.module_list:
+ if (doc.package in (None, UNKNOWN) or
+ doc.package not in self.module_set):
+ self.write_module_tree_item(out, doc)
+ out('</ul>\n')
+
+ # Footer material
+ self.write_navbar(out, 'trees')
+ self.write_footer(out)
+
+ def write_class_tree(self, out):
+ """
+ Write HTML code for a nested list showing the base/subclass
+ relationships between all documented classes. Each element of
+ the top-level list is a class with no (documented) bases; and
+ under each class is listed all of its subclasses. Note that
+ in the case of multiple inheritance, a class may appear
+ multiple times.
+
+ @todo: For multiple inheritance, don't repeat subclasses the
+ second time a class is mentioned; instead, link to the
+ first mention.
+ """
+ # [XX] backref for multiple inheritance?
+ # Header material
+ self.write_treepage_header(out, 'Class Hierarchy', 'class-tree.html')
+ out('<h1 class="epydoc">Class Hierarchy</h1>\n')
+
+ # Build a set containing all classes that we should list.
+ # This includes everything in class_list, plus any of those
+ # class' bases, but not undocumented subclasses.
+ class_set = self.class_set.copy()
+ for doc in self.class_list:
+ if doc.bases != UNKNOWN:
+ for base in doc.bases:
+ if base not in class_set:
+ if isinstance(base, ClassDoc):
+ class_set.update(base.mro())
+ else:
+ # [XX] need to deal with this -- how?
+ pass
+ #class_set.add(base)
+
+ out('<ul class="nomargin-top">\n')
+ for doc in sorted(class_set, key=lambda c:c.canonical_name[-1]):
+ if doc.bases != UNKNOWN and len(doc.bases)==0:
+ self.write_class_tree_item(out, doc, class_set)
+ out('</ul>\n')
+
+ # Footer material
+ self.write_navbar(out, 'trees')
+ self.write_footer(out)
+
+ def write_treepage_header(self, out, title, url):
+ # Header material.
+ self.write_header(out, title)
+ self.write_navbar(out, 'trees')
+ self.write_breadcrumbs(out, 'trees', url)
+ if self.class_list and self.module_list:
+ out('<center><b>\n')
+ out(' [ <a href="module-tree.html">Module Hierarchy</a>\n')
+ out(' | <a href="class-tree.html">Class Hierarchy</a> ]\n')
+ out('</b></center><br />\n')
+
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.4. Index pages
+ #////////////////////////////////////////////////////////////
+
+ SPLIT_IDENT_INDEX_SIZE = 3000
+ """If the identifier index has more than this number of entries,
+ then it will be split into separate pages, one for each
+ alphabetical section."""
+
+ LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ_'
+ """The alphabetical sections that are used for link index pages."""
+
+ def write_link_index(self, out, indices, title, url, index_by_section,
+ sections=LETTERS, section_url='#%s'):
+
+ # Header
+ self.write_indexpage_header(out, indices, title, url)
+
+ # Index title & links to alphabetical sections.
+ out('<table border="0" width="100%">\n'
+ '<tr valign="bottom"><td>\n')
+ out('<h1 class="epydoc">%s</h1>\n</td><td>\n[\n' % title)
+ for sec in self.LETTERS:
+ if sec in index_by_section:
+ out(' <a href="%s">%s</a>\n' % (section_url % sec, sec))
+ else:
+ out(' %s\n' % sec)
+ out(']\n')
+ out('</td></tr></table>\n')
+
+ # Alphabetical sections.
+ sections = [s for s in sections if s in index_by_section]
+ if sections:
+ out('<table border="0" width="100%">\n')
+ for section in sorted(sections):
+ out('<tr valign="top"><td valign="top" width="1%">')
+ out('<h2 class="epydoc"><a name="%s">%s</a></h2></td>\n' %
+ (section, section))
+ out('<td valign="top">\n')
+ self.write_index_section(out, index_by_section[section], True)
+ out('</td></tr>\n')
+ out('</table>\n<br />')
+
+ # Footer material.
+ out('<br />')
+ self.write_navbar(out, 'indices')
+ self.write_footer(out)
+
+
+ def write_metadata_index(self, out, indices, field, title, typ):
+ """
+ Write an HTML page containing a metadata index.
+ """
+ index = indices[field]
+
+ # Header material.
+ self.write_indexpage_header(out, indices, title,
+ '%s-index.html' % field)
+
+ # Page title.
+ out('<h1 class="epydoc"><a name="%s">%s</a></h1>\n<br />\n' %
+ (field, title))
+
+ # Index (one section per arg)
+ for arg in sorted(index):
+ # Write a section title.
+ if arg is not None:
+ if len([1 for (doc, descrs) in index[arg] if
+ not self._doc_or_ancestor_is_private(doc)]) == 0:
+ out('<div class="private">')
+ else:
+ out('<div>')
+ self.write_table_header(out, 'metadata-index', arg)
+ out('</table>')
+ # List every descr for this arg.
+ for (doc, descrs) in index[arg]:
+ if self._doc_or_ancestor_is_private(doc):
+ out('<div class="private">\n')
+ else:
+ out('<div>\n')
+ out('<table width="100%" class="metadata-index" '
+ 'bgcolor="#e0e0e0"><tr><td class="metadata-index">')
+ out('<b>%s in %s</b>' %
+ (typ, self.href(doc, label=doc.canonical_name)))
+ out(' <ul class="nomargin">\n')
+ for descr in descrs:
+ out(' <li>%s</li>\n' %
+ self.docstring_to_html(descr,doc,4))
+ out(' </ul>\n')
+ out('</table></div>\n')
+
+ # Footer material.
+ out('<br />')
+ self.write_navbar(out, 'indices')
+ self.write_footer(out)
+
+ def write_indexpage_header(self, out, indices, title, url):
+ """
+ A helper for the index page generation functions, which
+ generates a header that can be used to navigate between the
+ different indices.
+ """
+ self.write_header(out, title)
+ self.write_navbar(out, 'indices')
+ self.write_breadcrumbs(out, 'indices', url)
+
+ if (indices['term'] or
+ [1 for (name,l,l2) in self.METADATA_INDICES if indices[name]]):
+ out('<center><b>[\n')
+ out(' <a href="identifier-index.html">Identifiers</a>\n')
+ if indices['term']:
+ out('| <a href="term-index.html">Term Definitions</a>\n')
+ for (name, label, label2) in self.METADATA_INDICES:
+ if indices[name]:
+ out('| <a href="%s-index.html">%s</a>\n' %
+ (name, label2))
+ out(']</b></center><br />\n')
+
+ def write_index_section(self, out, items, add_blankline=False):
+ out('<table class="link-index" width="100%" border="1">\n')
+ num_rows = (len(items)+2)/3
+ for row in range(num_rows):
+ out('<tr>\n')
+ for col in range(3):
+ out('<td width="33%" class="link-index">')
+ i = col*num_rows+row
+ if i < len(items):
+ name, url, container = items[i]
+ out('<a href="%s">%s</a>' % (url, name))
+ if container is not None:
+ out('<br />\n')
+ if isinstance(container, ModuleDoc):
+ label = container.canonical_name
+ else:
+ label = container.canonical_name[-1]
+ out('<span class="index-where">(in %s)'
+ '</span>' % self.href(container, label))
+ else:
+ out('&nbsp;')
+ out('</td>\n')
+ out('</tr>\n')
+ if add_blankline and num_rows == 1:
+ blank_cell = '<td class="link-index">&nbsp;</td>'
+ out('<tr>'+3*blank_cell+'</tr>\n')
+ out('</table>\n')
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.5. Help Page
+ #////////////////////////////////////////////////////////////
+
+ def write_help(self, out):
+ """
+ Write an HTML help file to the given stream. If
+ C{self._helpfile} contains a help file, then use it;
+ otherwise, use the default helpfile from
+ L{epydoc.docwriter.html_help}.
+ """
+ # todo: optionally parse .rst etc help files?
+
+ # Get the contents of the help file.
+ if self._helpfile:
+ if os.path.exists(self._helpfile):
+ try: help = open(self._helpfile).read()
+ except: raise IOError("Can't open help file: %r" %
+ self._helpfile)
+ else:
+ raise IOError("Can't find help file: %r" % self._helpfile)
+ else:
+ if self._prj_name: thisprj = self._prj_name
+ else: thisprj = 'this project'
+ help = HTML_HELP % {'this_project':thisprj}
+
+ # Insert the help contents into a webpage.
+ self.write_header(out, 'Help')
+ self.write_navbar(out, 'help')
+ self.write_breadcrumbs(out, 'help', 'help.html')
+ out(help)
+ self.write_navbar(out, 'help')
+ self.write_footer(out)
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.6. Frames-based Table of Contents
+ #////////////////////////////////////////////////////////////
+
+ write_frames_index = compile_template(
+ """
+ write_frames_index(self, out)
+
+ Write the frames index file for the frames-based table of
+ contents to the given stream.
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ <?xml version="1.0" encoding="iso-8859-1"?>
+ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN"
+ "DTD/xhtml1-frameset.dtd">
+ <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+ <head>
+ <title> $self._prj_name or "API Documentation"$ </title>
+ </head>
+ <frameset cols="20%,80%">
+ <frameset rows="30%,70%">
+ <frame src="toc.html" name="moduleListFrame"
+ id="moduleListFrame" />
+ <frame src="toc-everything.html" name="moduleFrame"
+ id="moduleFrame" />
+ </frameset>
+ <frame src="$self._top_page_url$" name="mainFrame" id="mainFrame" />
+ </frameset>
+ </html>
+ ''')
+ # \------------------------------------------------------------/
+
+ write_toc = compile_template(
+ """
+ write_toc(self, out)
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ >>> self.write_header(out, "Table of Contents")
+ <h1 class="toc">Table of Contents</h1>
+ <hr />
+ <a target="moduleFrame" href="toc-everything.html">Everything</a>
+ <br />
+ >>> self.write_toc_section(out, "Modules", self.module_list)
+ <hr />
+ >>> if self._show_private:
+ $self.PRIVATE_LINK$
+ >>> #endif
+ >>> self.write_footer(out, short=True)
+ ''')
+ # \------------------------------------------------------------/
+
+ def write_toc_section(self, out, name, docs, fullname=True):
+ if not docs: return
+
+ # Assign names to each item, and sort by name.
+ if fullname:
+ docs = [(str(d.canonical_name), d) for d in docs]
+ else:
+ docs = [(str(d.canonical_name[-1]), d) for d in docs]
+ docs.sort()
+
+ out(' <h2 class="toc">%s</h2>\n' % name)
+ for label, doc in docs:
+ doc_url = self.url(doc)
+ toc_url = 'toc-%s' % doc_url
+ is_private = self._doc_or_ancestor_is_private(doc)
+ if is_private:
+ if not self._show_private: continue
+ out(' <div class="private">\n')
+
+ if isinstance(doc, ModuleDoc):
+ out(' <a target="moduleFrame" href="%s"\n'
+ ' onclick="setFrame(\'%s\',\'%s\');"'
+ ' >%s</a><br />' % (toc_url, toc_url, doc_url, label))
+ else:
+ out(' <a target="mainFrame" href="%s"\n'
+ ' >%s</a><br />' % (doc_url, label))
+ if is_private:
+ out(' </div>\n')
+
+ def write_project_toc(self, out):
+ self.write_header(out, "Everything")
+ out('<h1 class="toc">Everything</h1>\n')
+ out('<hr />\n')
+
+ # List the classes.
+ self.write_toc_section(out, "All Classes", self.class_list)
+
+ # List the functions.
+ funcs = [d for d in self.routine_list
+ if not isinstance(self.docindex.container(d),
+ (ClassDoc, types.NoneType))]
+ self.write_toc_section(out, "All Functions", funcs)
+
+ # List the variables.
+ vars = []
+ for doc in self.module_list:
+ vars += doc.select_variables(value_type='other',
+ imported=False,
+ public=self._public_filter)
+ self.write_toc_section(out, "All Variables", vars)
+
+ # Footer material.
+ out('<hr />\n')
+ if self._show_private:
+ out(self.PRIVATE_LINK+'\n')
+ self.write_footer(out, short=True)
+
+ def write_module_toc(self, out, doc):
+ """
+ Write an HTML page containing the table of contents for the
+ given module to the given stream. This page lists the
+ modules, classes, exceptions, functions, and variables defined
+ by the module.
+ """
+ name = doc.canonical_name[-1]
+ self.write_header(out, name)
+ out('<h1 class="toc">Module %s</h1>\n' % name)
+ out('<hr />\n')
+
+
+ # List the classes.
+ classes = doc.select_variables(value_type='class', imported=False,
+ public=self._public_filter)
+ self.write_toc_section(out, "Classes", classes, fullname=False)
+
+ # List the functions.
+ funcs = doc.select_variables(value_type='function', imported=False,
+ public=self._public_filter)
+ self.write_toc_section(out, "Functions", funcs, fullname=False)
+
+ # List the variables.
+ variables = doc.select_variables(value_type='other', imported=False,
+ public=self._public_filter)
+ self.write_toc_section(out, "Variables", variables, fullname=False)
+
+ # Footer material.
+ out('<hr />\n')
+ if self._show_private:
+ out(self.PRIVATE_LINK+'\n')
+ self.write_footer(out, short=True)
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.7. Project homepage (index.html)
+ #////////////////////////////////////////////////////////////
+
+ def write_homepage(self, directory):
+ """
+ Write an C{index.html} file in the given directory. The
+ contents of this file are copied or linked from an existing
+ page, so this method must be called after all pages have been
+ written. The page used is determined by L{_frames_index} and
+ L{_top_page}:
+ - If L{_frames_index} is true, then C{frames.html} is
+ copied.
+ - Otherwise, the page specified by L{_top_page} is
+ copied.
+ """
+ filename = os.path.join(directory, 'index.html')
+ if self._frames_index: top = 'frames.html'
+ else: top = self._top_page_url
+
+ # Copy the non-frames index file from top, if it's internal.
+ if top[:5] != 'http:' and '/' not in top:
+ try:
+ # Read top into `s`.
+ topfile = os.path.join(directory, top)
+ s = open(topfile, 'r').read()
+
+ # Write the output file.
+ open(filename, 'w').write(s)
+ return
+ except:
+ log.error('Warning: error copying index; '
+ 'using a redirect page')
+
+ # Use a redirect if top is external, or if we failed to copy.
+ name = self._prj_name or 'this project'
+ f = open(filename, 'w')
+ self.write_redirect_index(f.write, top, name)
+ f.close()
+
+ write_redirect_index = compile_template(
+ """
+ write_redirect_index(self, out, top, name)
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ <?xml version="1.0" encoding="iso-8859-1"?>
+ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
+ "DTD/xhtml1-strict.dtd">
+ <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+ <head>
+ <title> Redirect </title>
+ <meta http-equiv="refresh" content="1;url=$top$" />
+ <link rel="stylesheet" href="epydoc.css" type="text/css"></link>
+ </head>
+ <body>
+ <p>Redirecting to the API documentation for
+ <a href="$top$">$self._prj_name or "this project"$</a>...</p>
+ </body>
+ </html>
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.8. Stylesheet (epydoc.css)
+ #////////////////////////////////////////////////////////////
+
+ def write_css(self, directory, cssname):
+ """
+ Write the CSS stylesheet in the given directory. If
+ C{cssname} contains a stylesheet file or name (from
+ L{epydoc.docwriter.html_css}), then use that stylesheet;
+ otherwise, use the default stylesheet.
+
+ @rtype: C{None}
+ """
+ filename = os.path.join(directory, 'epydoc.css')
+
+ # Get the contents for the stylesheet file.
+ if cssname is None:
+ css = STYLESHEETS['default'][0]
+ else:
+ if os.path.exists(cssname):
+ try: css = open(cssname).read()
+ except: raise IOError("Can't open CSS file: %r" % cssname)
+ elif cssname in STYLESHEETS:
+ css = STYLESHEETS[cssname][0]
+ else:
+ raise IOError("Can't find CSS file: %r" % cssname)
+
+ # Write the stylesheet.
+ cssfile = open(filename, 'w')
+ cssfile.write(css)
+ cssfile.close()
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.9. Javascript (epydoc.js)
+ #////////////////////////////////////////////////////////////
+
+ def write_javascript(self, directory):
+ jsfile = open(os.path.join(directory, 'epydoc.js'), 'w')
+ print >> jsfile, self.TOGGLE_PRIVATE_JS
+ print >> jsfile, self.SHOW_PRIVATE_JS
+ print >> jsfile, self.GET_COOKIE_JS
+ print >> jsfile, self.SET_FRAME_JS
+ print >> jsfile, self.HIDE_PRIVATE_JS
+ print >> jsfile, self.TOGGLE_CALLGRAPH_JS
+ print >> jsfile, html_colorize.PYSRC_JAVASCRIPTS
+ print >> jsfile, self.GET_ANCHOR_JS
+ print >> jsfile, self.REDIRECT_URL_JS
+ jsfile.close()
+
+ #: A javascript that is used to show or hide the API documentation
+ #: for private objects. In order for this to work correctly, all
+ #: documentation for private objects should be enclosed in
+ #: C{<div class="private">...</div>} elements.
+ TOGGLE_PRIVATE_JS = '''
+ function toggle_private() {
+ // Search for any private/public links on this page. Store
+ // their old text in "cmd", so we will know what action to
+ // take; and change their text to the opposite action.
+ var cmd = "?";
+ var elts = document.getElementsByTagName("a");
+ for(var i=0; i<elts.length; i++) {
+ if (elts[i].className == "privatelink") {
+ cmd = elts[i].innerHTML;
+ elts[i].innerHTML = ((cmd && cmd.substr(0,4)=="show")?
+ "hide private":"show private");
+ }
+ }
+ // Update all DIVs containing private objects.
+ var elts = document.getElementsByTagName("div");
+ for(var i=0; i<elts.length; i++) {
+ if (elts[i].className == "private") {
+ elts[i].style.display = ((cmd && cmd.substr(0,4)=="hide")?"none":"block");
+ }
+ else if (elts[i].className == "public") {
+ elts[i].style.display = ((cmd && cmd.substr(0,4)=="hide")?"block":"none");
+ }
+ }
+ // Update all table rows containing private objects. Note: we
+ // use "" instead of "block" because IE & Firefox disagree on what
+ // this should be (block vs table-row), and "" just gives the
+ // default for both browsers.
+ var elts = document.getElementsByTagName("tr");
+ for(var i=0; i<elts.length; i++) {
+ if (elts[i].className == "private") {
+ elts[i].style.display = ((cmd && cmd.substr(0,4)=="hide")?"none":"");
+ }
+ }
+ // Update all list items containing private objects.
+ var elts = document.getElementsByTagName("li");
+ for(var i=0; i<elts.length; i++) {
+ if (elts[i].className == "private") {
+ elts[i].style.display = ((cmd && cmd.substr(0,4)=="hide")?
+ "none":"");
+ }
+ }
+ // Update all unordered lists containing private objects.
+ var elts = document.getElementsByTagName("ul");
+ for(var i=0; i<elts.length; i++) {
+ if (elts[i].className == "private") {
+ elts[i].style.display = ((cmd && cmd.substr(0,4)=="hide")?"none":"block");
+ }
+ }
+ // Set a cookie to remember the current option.
+ document.cookie = "EpydocPrivate="+cmd;
+ }
+ '''.strip()
+
+ #: A javascript that is used to read the value of a cookie. This
+ #: is used to remember whether private variables should be shown or
+ #: hidden.
+ GET_COOKIE_JS = '''
+ function getCookie(name) {
+ var dc = document.cookie;
+ var prefix = name + "=";
+ var begin = dc.indexOf("; " + prefix);
+ if (begin == -1) {
+ begin = dc.indexOf(prefix);
+ if (begin != 0) return null;
+ } else
+ { begin += 2; }
+ var end = document.cookie.indexOf(";", begin);
+ if (end == -1)
+ { end = dc.length; }
+ return unescape(dc.substring(begin + prefix.length, end));
+ }
+ '''.strip()
+
+ #: A javascript that is used to set the contents of two frames at
+ #: once. This is used by the project table-of-contents frame to
+ #: set both the module table-of-contents frame and the main frame
+ #: when the user clicks on a module.
+ SET_FRAME_JS = '''
+ function setFrame(url1, url2) {
+ parent.frames[1].location.href = url1;
+ parent.frames[2].location.href = url2;
+ }
+ '''.strip()
+
+ #: A javascript that is used to hide private variables, unless
+ #: either: (a) the cookie says not to; or (b) we appear to be
+ #: linking to a private variable.
+ HIDE_PRIVATE_JS = '''
+ function checkCookie() {
+ var cmd=getCookie("EpydocPrivate");
+ if (cmd && cmd.substr(0,4)!="show" && location.href.indexOf("#_") < 0)
+ toggle_private();
+ }
+ '''.strip()
+
+ TOGGLE_CALLGRAPH_JS = '''
+ function toggleCallGraph(id) {
+ var elt = document.getElementById(id);
+ if (elt.style.display == "none")
+ elt.style.display = "block";
+ else
+ elt.style.display = "none";
+ }
+ '''.strip()
+
+ SHOW_PRIVATE_JS = '''
+ function show_private() {
+ var elts = document.getElementsByTagName("a");
+ for(var i=0; i<elts.length; i++) {
+ if (elts[i].className == "privatelink") {
+ cmd = elts[i].innerHTML;
+ if (cmd && cmd.substr(0,4)=="show")
+ toggle_private();
+ }
+ }
+ }
+ '''.strip()
+
+ GET_ANCHOR_JS = '''
+ function get_anchor() {
+ var href = location.href;
+ var start = href.indexOf("#")+1;
+ if ((start != 0) && (start != href.length))
+ return href.substring(start, href.length);
+ }
+ '''.strip()
+
+ #: A javascript that is used to implement the auto-redirect page.
+ #: When the user visits <redirect.html#dotted.name>, they will
+ #: automatically get redirected to the page for the object with
+ #: the given fully-qualified dotted name. E.g., for epydoc,
+ #: <redirect.html#epydoc.apidoc.UNKNOWN> redirects the user to
+ #: <epydoc.apidoc-module.html#UNKNOWN>.
+ REDIRECT_URL_JS = '''
+ function redirect_url(dottedName) {
+ // Scan through each element of the "pages" list, and check
+ // if "name" matches with any of them.
+ for (var i=0; i<pages.length; i++) {
+
+ // Each page has the form "<pagename>-m" or "<pagename>-c";
+ // extract the <pagename> portion & compare it to dottedName.
+ var pagename = pages[i].substring(0, pages[i].length-2);
+ if (pagename == dottedName.substring(0,pagename.length)) {
+
+ // We\'ve found a page that matches `dottedName`;
+ // construct its URL, using leftover `dottedName`
+ // content to form an anchor.
+ var pagetype = pages[i].charAt(pages[i].length-1);
+ var url = pagename + ((pagetype=="m")?"-module.html":
+ "-class.html");
+ if (dottedName.length > pagename.length)
+ url += "#" + dottedName.substring(pagename.length+1,
+ dottedName.length);
+ return url;
+ }
+ }
+ }
+ '''.strip()
+
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.10. Graphs
+ #////////////////////////////////////////////////////////////
+
+ def render_graph(self, graph):
+ if graph is None: return ''
+ graph.caption = graph.title = None
+ image_url = '%s.gif' % graph.uid
+ image_file = os.path.join(self._directory, image_url)
+ return graph.to_html(image_file, image_url)
+
+ RE_CALLGRAPH_ID = re.compile(r"""["'](.+-div)['"]""")
+
+ def render_callgraph(self, callgraph, token=""):
+ """Render the HTML chunk of a callgraph.
+
+ If C{callgraph} is a string, use the L{_callgraph_cache} to return
+ a pre-rendered HTML chunk. This mostly avoids running C{dot} twice
+ for the same callgraph. Otherwise, render the graph and store its
+ HTML output in the cache.
+
+ @param callgraph: The graph to render or its L{uid<DotGraph.uid>}.
+ @type callgraph: L{DotGraph} or C{str}
+ @param token: A string that can be used to make the C{<div>} id
+ unambiguous, if the callgraph is used more than once in a page.
+ @type token: C{str}
+ @return: The HTML representation of the graph.
+ @rtype: C{str}
+ """
+ if callgraph is None: return ""
+
+ if isinstance(callgraph, basestring):
+ uid = callgraph
+ rv = self._callgraph_cache.get(callgraph, "")
+
+ else:
+ uid = callgraph.uid
+ graph_html = self.render_graph(callgraph)
+ if graph_html == '':
+ rv = ""
+ else:
+ rv = ('<div style="display:none" id="%%s-div"><center>\n'
+ '<table border="0" cellpadding="0" cellspacing="0">\n'
+ ' <tr><td>%s</td></tr>\n'
+ ' <tr><th>Call Graph</th></tr>\n'
+ '</table><br />\n</center></div>\n' % graph_html)
+
+ # Cache the complete HTML chunk, leaving the div id as a
+ # placeholder so it can be made unambiguous by the token.
+ self._callgraph_cache[uid] = rv
+
+ # Substitute the token-qualified div id into the chunk.
+ if rv: rv = rv % (uid + token)
+ return rv
+
+ def callgraph_link(self, callgraph, token=""):
+ """Render the HTML chunk of a callgraph link.
+
+ The link toggles the visibility of the callgraph rendered by
+ L{render_callgraph} with matching parameters.
+
+ @param callgraph: The graph to render or its L{uid<DotGraph.uid>}.
+ @type callgraph: L{DotGraph} or C{str}
+ @param token: A string that can be used to make the C{<div>} id
+ unambiguous, if the callgraph is used more than once in a page.
+ @type token: C{str}
+ @return: The HTML representation of the graph link.
+ @rtype: C{str}
+ """
+ # Use class=codelink, to match style w/ the source code link.
+ if callgraph is None: return ''
+
+ if isinstance(callgraph, basestring):
+ uid = callgraph
+ else:
+ uid = callgraph.uid
+
+ return ('<br /><span class="codelink"><a href="javascript:void(0);" '
+ 'onclick="toggleCallGraph(\'%s-div\');return false;">'
+ 'call graph</a></span> ' % (uid + token))
+
+ #////////////////////////////////////////////////////////////
+ #{ 2.11. Images
+ #////////////////////////////////////////////////////////////
+
+ IMAGES = {'crarr.png': # Carriage-return arrow, used for LINEWRAP.
+ 'iVBORw0KGgoAAAANSUhEUgAAABEAAAAKCAMAAABlokWQAAAALHRFWHRD'
+ 'cmVhdGlvbiBUaW1lAFR1\nZSAyMiBBdWcgMjAwNiAwMDo0MzoxMCAtMD'
+ 'UwMGAMEFgAAAAHdElNRQfWCBYFASkQ033WAAAACXBI\nWXMAAB7CAAAe'
+ 'wgFu0HU+AAAABGdBTUEAALGPC/xhBQAAAEVQTFRF////zcOw18/AgGY0'
+ 'c1cg4dvQ\ninJEYEAAYkME3NXI6eTcloFYe2Asr5+AbE4Uh29A9fPwqp'
+ 'l4ZEUI8O3onopk0Ma0lH5U1nfFdgAA\nAAF0Uk5TAEDm2GYAAABNSURB'
+ 'VHjaY2BAAbzsvDAmK5oIlxgfioiwCAe7KJKIgKAQOzsLLwTwA0VY\n+d'
+ 'iRAT8T0AxuIIMHqoaXCWIPGzsHJ6orGJiYWRjQASOcBQAocgMSPKMTIg'
+ 'AAAABJRU5ErkJggg==\n',
+ }
+
+ def write_images(self, directory):
+ for (name, data) in self.IMAGES.items():
+ f = open(os.path.join(directory, name), 'wb')
+ f.write(base64.decodestring(data))
+ f.close()
+
+ #////////////////////////////////////////////////////////////
+ #{ 3.1. Page Header
+ #////////////////////////////////////////////////////////////
+
+ write_header = compile_template(
+ """
+ write_header(self, out, title)
+
+ Generate HTML code for the standard page header, and write it
+ to C{out}. C{title} is a string containing the page title.
+ It should be appropriately escaped/encoded.
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ <?xml version="1.0" encoding="ascii"?>
+ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "DTD/xhtml1-transitional.dtd">
+ <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+ <head>
+ <title>$title$</title>
+ <link rel="stylesheet" href="epydoc.css" type="text/css" />
+ <script type="text/javascript" src="epydoc.js"></script>
+ </head>
+
+ <body bgcolor="white" text="black" link="blue" vlink="#204080"
+ alink="#204080">
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ 3.2. Page Footer
+ #////////////////////////////////////////////////////////////
+
+ write_footer = compile_template(
+ """
+ write_footer(self, out, short=False)
+
+ Generate HTML code for the standard page footer, and write it
+ to C{out}.
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ >>> if not short:
+ <table border="0" cellpadding="0" cellspacing="0" width="100%%">
+ <tr>
+ <td align="left" class="footer">
+ >>> if self._include_log:
+ <a href="epydoc-log.html">Generated by Epydoc
+ $epydoc.__version__$ on $time.asctime()$</a>
+ >>> else:
+ Generated by Epydoc $epydoc.__version__$ on $time.asctime()$
+ >>> #endif
+ </td>
+ <td align="right" class="footer">
+ <a target="mainFrame" href="http://epydoc.sourceforge.net"
+ >http://epydoc.sourceforge.net</a>
+ </td>
+ </tr>
+ </table>
+ >>> #endif
+
+ <script type="text/javascript">
+ <!--
+ // Private objects are initially displayed (because if
+ // javascript is turned off then we want them to be
+ // visible); but by default, we want to hide them. So hide
+ // them unless we have a cookie that says to show them.
+ checkCookie();
+ // -->
+ </script>
+ </body>
+ </html>
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ 3.3. Navigation Bar
+ #////////////////////////////////////////////////////////////
+
+ write_navbar = compile_template(
+ """
+ write_navbar(self, out, context)
+
+ Generate HTML code for the navigation bar, and write it to
+ C{out}. The navigation bar typically looks like::
+
+ [ Home Trees Index Help Project ]
+
+ @param context: A value indicating what page we're generating
+ a navigation bar for. If we're generating an API
+ documentation page for an object, then C{context} is a
+ L{ValueDoc} containing the documentation for that object;
+ otherwise, C{context} is a string name for the page. The
+ following string names are recognized: C{'tree'}, C{'index'},
+ and C{'help'}.
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ <!-- ==================== NAVIGATION BAR ==================== -->
+ <table class="navbar" border="0" width="100%" cellpadding="0"
+ bgcolor="#a0c0ff" cellspacing="0">
+ <tr valign="middle">
+ >>> if self._top_page_url not in (self._trees_url, "identifier-index.html", "help.html"):
+ <!-- Home link -->
+ >>> if (isinstance(context, ValueDoc) and
+ >>> self._top_page_url == self.url(context.canonical_name)):
+ <th bgcolor="#70b0f0" class="navbar-select"
+ > Home </th>
+ >>> else:
+ <th> <a
+ href="$self._top_page_url$">Home</a> </th>
+ >>> #endif
+
+ <!-- Tree link -->
+ >>> if context == "trees":
+ <th bgcolor="#70b0f0" class="navbar-select"
+ > Trees </th>
+ >>> else:
+ <th> <a
+ href="$self._trees_url$">Trees</a> </th>
+ >>> #endif
+
+ <!-- Index link -->
+ >>> if context == "indices":
+ <th bgcolor="#70b0f0" class="navbar-select"
+ > Indices </th>
+ >>> else:
+ <th> <a
+ href="identifier-index.html">Indices</a> </th>
+ >>> #endif
+
+ <!-- Help link -->
+ >>> if context == "help":
+ <th bgcolor="#70b0f0" class="navbar-select"
+ > Help </th>
+ >>> else:
+ <th> <a
+ href="help.html">Help</a> </th>
+ >>> #endif
+
+ >>> if self._prj_link:
+ <!-- Project homepage -->
+ <th class="navbar" align="right" width="100%">
+ <table border="0" cellpadding="0" cellspacing="0">
+ <tr><th class="navbar" align="center"
+ >$self._prj_link.strip()$</th>
+ </tr></table></th>
+ >>> else:
+ <th class="navbar" width="100%"></th>
+ >>> #endif
+ </tr>
+ </table>
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ 3.4. Breadcrumbs
+ #////////////////////////////////////////////////////////////
+
+ write_breadcrumbs = compile_template(
+ """
+ write_breadcrumbs(self, out, context, context_url)
+
+ Generate HTML for the breadcrumbs line, and write it to
+ C{out}. The breadcrumbs line is an invisible table with a
+ list of pointers to the current object's ancestors on the
+ left; and the show/hide private selector and the
+ frames/noframes selector on the right.
+
+ @param context: The API documentation for the object whose
+ breadcrumbs we should generate.
+ @type context: L{ValueDoc}
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ <table width="100%" cellpadding="0" cellspacing="0">
+ <tr valign="top">
+ >>> if isinstance(context, APIDoc):
+ <td width="100%">
+ <span class="breadcrumbs">
+ >>> crumbs = self.breadcrumbs(context)
+ >>> for crumb in crumbs[:-1]:
+ $crumb$ ::
+ >>> #endfor
+ $crumbs[-1]$
+ </span>
+ </td>
+ >>> else:
+ <td width="100%"> </td>
+ >>> #endif
+ <td>
+ <table cellpadding="0" cellspacing="0">
+ <!-- hide/show private -->
+ >>> if self._show_private:
+ <tr><td align="right">$self.PRIVATE_LINK$</td></tr>
+ >>> #endif
+ >>> if self._frames_index:
+ <tr><td align="right"><span class="options"
+ >[<a href="frames.html" target="_top">frames</a
+ >]&nbsp;|&nbsp;[<a href="$context_url$"
+ target="_top">no&nbsp;frames</a>]</span></td></tr>
+ >>> #endif
+ </table>
+ </td>
+ </tr>
+ </table>
+ ''')
+ # \------------------------------------------------------------/
+
+ def breadcrumbs(self, doc):
+ crumbs = [self._crumb(doc)]
+
+ # Generate the crumbs for uid's ancestors.
+ while True:
+ container = self.docindex.container(doc)
+ assert doc != container, 'object is its own container?'
+ if container is None:
+ if doc.canonical_name is UNKNOWN:
+ return ['??']+crumbs
+ elif isinstance(doc, ModuleDoc):
+ return ['Package %s' % ident
+ for ident in doc.canonical_name[:-1]]+crumbs
+ else:
+ return list(doc.canonical_name)+crumbs
+ else:
+ label = self._crumb(container)
+ crumbs.insert(0, self.href(container, label)) # [xx] code=0??
+ doc = container
+
+ def _crumb(self, doc):
+ if (len(doc.canonical_name)==1 and
+ doc.canonical_name[0].startswith('script-')):
+ return 'Script %s' % doc.canonical_name[0][7:]
+ return '%s %s' % (self.doc_kind(doc), doc.canonical_name[-1])
+
+ #////////////////////////////////////////////////////////////
+ #{ 3.5. Summary Tables
+ #////////////////////////////////////////////////////////////
+
+ def write_summary_table(self, out, heading, doc, value_type):
+ """
+ Generate HTML code for a summary table, and write it to
+ C{out}. A summary table is a table that includes a one-row
+ description for each variable (of a given type) in a module
+ or class.
+
+ @param heading: The heading for the summary table; typically,
+ this indicates what kind of value the table describes
+ (e.g., functions or classes).
+ @param doc: A L{ValueDoc} object containing the API
+ documentation for the module or class whose variables
+ we should summarize.
+ @param value_type: A string indicating what type of value
+ should be listed in this summary table. This value
+ is passed on to C{doc}'s C{select_variables()} method.
+ """
+ # grouped_inh_vars is a dictionary used to hold "inheritance
+ # pseudo-groups", which are created when inheritance is
+ # 'grouped'. It maps each base class to the list of vars
+ # inherited from that base.
+ grouped_inh_vars = {}
+
+ # Divide all public variables of the given type into groups.
+ groups = [(plaintext_to_html(group_name),
+ doc.select_variables(group=group_name, imported=False,
+ value_type=value_type,
+ public=self._public_filter))
+ for group_name in doc.group_names()]
+
+ # Discard any empty groups; and return if they're all empty.
+ groups = [(g,vars) for (g,vars) in groups if vars]
+ if not groups: return
+
+ # Write a header
+ self.write_table_header(out, "summary", heading)
+
+ # Write a section for each group.
+ for name, var_docs in groups:
+ self.write_summary_group(out, doc, name,
+ var_docs, grouped_inh_vars)
+
+ # Write a section for each inheritance pseudo-group (used if
+ # inheritance=='grouped')
+ if grouped_inh_vars:
+ for base in doc.mro():
+ if base in grouped_inh_vars:
+ hdr = 'Inherited from %s' % self.href(base, context=doc)
+ tr_class = ''
+ if len([v for v in grouped_inh_vars[base]
+ if v.is_public]) == 0:
+ tr_class = ' class="private"'
+ self.write_group_header(out, hdr, tr_class)
+ for var_doc in grouped_inh_vars[base]:
+ self.write_summary_line(out, var_doc, doc)
+
+ # Write a footer for the table.
+ out(self.TABLE_FOOTER)
+
+ def write_summary_group(self, out, doc, name, var_docs, grouped_inh_vars):
+ # Split up the var_docs list, according to the way each var
+ # should be displayed:
+ # - listed_inh_vars -- for listed inherited variables.
+ # - grouped_inh_vars -- for grouped inherited variables.
+ # - normal_vars -- for all other variables.
+ listed_inh_vars = {}
+ normal_vars = []
+ for var_doc in var_docs:
+ if var_doc.container != doc:
+ base = var_doc.container
+ if not isinstance(base, ClassDoc):
+ # This *should* never happen:
+ log.warning("%s's container is not a class!" % var_doc)
+ normal_vars.append(var_doc)
+ elif (base not in self.class_set or
+ self._inheritance == 'listed'):
+ listed_inh_vars.setdefault(base,[]).append(var_doc)
+ elif self._inheritance == 'grouped':
+ grouped_inh_vars.setdefault(base,[]).append(var_doc)
+ else:
+ normal_vars.append(var_doc)
+ else:
+ normal_vars.append(var_doc)
+
+ # Write a header for the group.
+ if name != '':
+ tr_class = ''
+ if len([v for v in var_docs if v.is_public]) == 0:
+ tr_class = ' class="private"'
+ self.write_group_header(out, name, tr_class)
+
+ # Write a line for each normal var:
+ for var_doc in normal_vars:
+ self.write_summary_line(out, var_doc, doc)
+ # Write a subsection for inherited vars:
+ if listed_inh_vars:
+ self.write_inheritance_list(out, doc, listed_inh_vars)
+
+ def write_inheritance_list(self, out, doc, listed_inh_vars):
+ out(' <tr>\n <td colspan="2" class="summary">\n')
+ for base in doc.mro():
+ if base not in listed_inh_vars: continue
+ public_vars = [v for v in listed_inh_vars[base]
+ if v.is_public]
+ private_vars = [v for v in listed_inh_vars[base]
+ if not v.is_public]
+ if public_vars:
+ out(' <p class="indent-wrapped-lines">'
+ '<b>Inherited from <code>%s</code></b>:\n' %
+ self.href(base, context=doc))
+ self.write_var_list(out, public_vars)
+ out(' </p>\n')
+ if private_vars and self._show_private:
+ out(' <div class="private">')
+ out(' <p class="indent-wrapped-lines">'
+ '<b>Inherited from <code>%s</code></b> (private):\n' %
+ self.href(base, context=doc))
+ self.write_var_list(out, private_vars)
+ out(' </p></div>\n')
+ out(' </td>\n </tr>\n')
+
+ def write_var_list(self, out, vardocs):
+ out(' ')
+ out(',\n '.join(['<code>%s</code>' % self.href(v,v.name)
+ for v in vardocs])+'\n')
+
+ def write_summary_line(self, out, var_doc, container):
+ """
+ Generate HTML code for a single line of a summary table, and
+ write it to C{out}. See L{write_summary_table} for more
+ information.
+
+ @param var_doc: The API documentation for the variable that
+ should be described by this line of the summary table.
+ @param container: The API documentation for the class or
+ module whose summary table we're writing.
+ """
+ pysrc_link = None
+ callgraph = None
+
+ # If it's a private variable, then mark its <tr>.
+ if var_doc.is_public: tr_class = ''
+ else: tr_class = ' class="private"'
+
+ # Decide whether an anchor or a link should be generated.
+ link_name = self._redundant_details or var_doc.is_detailed()
+ anchor = not link_name
+
+ # Construct the HTML code for the type (cell 1) & description
+ # (cell 2).
+ if isinstance(var_doc.value, RoutineDoc):
+ typ = self.return_type(var_doc, indent=6)
+ description = self.function_signature(var_doc, is_summary=True,
+ link_name=link_name, anchor=anchor)
+ pysrc_link = self.pysrc_link(var_doc.value)
+
+ # Prepare the call-graph, if requested.
+ if 'callgraph' in self._graph_types:
+ linker = _HTMLDocstringLinker(self, var_doc.value)
+ callgraph = call_graph([var_doc.value], self.docindex,
+ linker, var_doc, add_callers=True,
+ add_callees=True)
+ if callgraph and callgraph.nodes:
+ var_doc.value.callgraph_uid = callgraph.uid
+ else:
+ callgraph = None
+ else:
+ typ = self.type_descr(var_doc, indent=6)
+ description = self.summary_name(var_doc,
+ link_name=link_name, anchor=anchor)
+ if isinstance(var_doc.value, GenericValueDoc):
+ # The summary max length was chosen by setting
+ # L{ValueDoc.SUMMARY_REPR_LINELEN} in the constructor.
+ max_len = self._variable_summary_linelen - 3 - len(var_doc.name)
+ val_repr = var_doc.value.summary_pyval_repr(max_len)
+ tooltip = self.variable_tooltip(var_doc)
+ description += (' = <code%s>%s</code>' %
+ (tooltip, val_repr.to_html(None)))
+
+ # Add the summary to the description (if there is one).
+ summary = self.summary(var_doc, indent=6)
+ if summary: description += '<br />\n %s' % summary
+
+ # If it's inherited, then add a note to the description.
+ if var_doc.container != container and self._inheritance=="included":
+ description += ("\n <em>(Inherited from " +
+ self.href(var_doc.container) + ")</em>")
+
+ # Write the summary line.
+ self._write_summary_line(out, typ, description, tr_class, pysrc_link,
+ callgraph)
+
+ _write_summary_line = compile_template(
+ "_write_summary_line(self, out, typ, description, tr_class, "
+ "pysrc_link, callgraph)",
+ # /------------------------- Template -------------------------\
+ '''
+ <tr$tr_class$>
+ <td width="15%" align="right" valign="top" class="summary">
+ <span class="summary-type">$typ or " "$</span>
+ </td><td class="summary">
+ >>> if pysrc_link is not None or callgraph is not None:
+ <table width="100%" cellpadding="0" cellspacing="0" border="0">
+ <tr>
+ <td>$description$</td>
+ <td align="right" valign="top">
+ $pysrc_link$
+ $self.callgraph_link(callgraph, token='-summary')$
+ </td>
+ </tr>
+ </table>
+ $self.render_callgraph(callgraph, token='-summary')$
+ >>> #endif
+ >>> if pysrc_link is None and callgraph is None:
+ $description$
+ >>> #endif
+ </td>
+ </tr>
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ 3.6. Details Lists
+ #////////////////////////////////////////////////////////////
+
+ def write_details_list(self, out, heading, doc, value_type):
+ # Get a list of the VarDocs we should describe.
+ if self._redundant_details:
+ detailed = None
+ else:
+ detailed = True
+ if isinstance(doc, ClassDoc):
+ var_docs = doc.select_variables(value_type=value_type,
+ imported=False, inherited=False,
+ public=self._public_filter,
+ detailed=detailed)
+ else:
+ var_docs = doc.select_variables(value_type=value_type,
+ imported=False,
+ public=self._public_filter,
+ detailed=detailed)
+ if not var_docs: return
+
+ # Write a header
+ self.write_table_header(out, "details", heading)
+ out(self.TABLE_FOOTER)
+
+ for var_doc in var_docs:
+ self.write_details_entry(out, var_doc)
+
+ out('<br />\n')
+
+ def write_details_entry(self, out, var_doc):
+ descr = self.descr(var_doc, indent=2) or ''
+ if var_doc.is_public: div_class = ''
+ else: div_class = ' class="private"'
+
+ # Functions
+ if isinstance(var_doc.value, RoutineDoc):
+ rtype = self.return_type(var_doc, indent=10)
+ rdescr = self.return_descr(var_doc, indent=10)
+ arg_descrs = []
+ args = set()
+ # Find the description for each arg. (Leave them in the
+ # same order that they're listed in the docstring.)
+ for (arg_names, arg_descr) in var_doc.value.arg_descrs:
+ args.update(arg_names)
+ lhs = ', '.join([self.arg_name_to_html(var_doc.value, n)
+ for n in arg_names])
+ rhs = self.docstring_to_html(arg_descr, var_doc.value, 10)
+ arg_descrs.append( (lhs, rhs) )
+ # Check for arguments for which we have @type but not @param;
+ # and add them to the arg_descrs list.
+ for arg in var_doc.value.arg_types:
+ if arg not in args:
+ argname = self.arg_name_to_html(var_doc.value, arg)
+ arg_descrs.append( (argname,'') )
+
+ self.write_function_details_entry(out, var_doc, descr,
+ var_doc.value.callgraph_uid,
+ rtype, rdescr, arg_descrs,
+ div_class)
+
+ # Properties
+ elif isinstance(var_doc.value, PropertyDoc):
+ prop_doc = var_doc.value
+ accessors = [ (name,
+ self.property_accessor_to_html(val_doc, prop_doc),
+ self.summary(val_doc))
+ for (name, val_doc) in
+ [('Get', prop_doc.fget), ('Set', prop_doc.fset),
+ ('Delete', prop_doc.fdel)]
+ if val_doc not in (None, UNKNOWN)
+ and val_doc.pyval is not None ]
+
+ self.write_property_details_entry(out, var_doc, descr,
+ accessors, div_class)
+
+ # Variables
+ else:
+ self.write_variable_details_entry(out, var_doc, descr, div_class)
+
+ def labelled_list_item(self, lhs, rhs):
+ # If the RHS starts with a paragraph, then move the
+ # paragraph-start tag to the beginning of the lhs instead (so
+ # there won't be a line break after the '-').
+ m = re.match(r'^<p( [^>]+)?>', rhs)
+ if m:
+ lhs = m.group() + lhs
+ rhs = rhs[m.end():]
+
+ if rhs:
+ return '<li>%s - %s</li>' % (lhs, rhs)
+ else:
+ return '<li>%s</li>' % (lhs,)
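The paragraph-tag shuffling in `labelled_list_item` can be exercised in isolation; this standalone sketch mirrors the method body (minus `self`) to show how a leading `<p>` tag migrates from the RHS to the LHS so no line break lands after the `-` separator:

```python
import re

def labelled_list_item(lhs, rhs):
    # If the RHS starts with a paragraph tag, move that tag in front
    # of the LHS so the '-' separator stays on the same line.
    m = re.match(r'^<p( [^>]+)?>', rhs)
    if m:
        lhs = m.group() + lhs
        rhs = rhs[m.end():]
    if rhs:
        return '<li>%s - %s</li>' % (lhs, rhs)
    return '<li>%s</li>' % (lhs,)
```

Note the closing `</p>` stays inside the RHS, so the emitted `<li>` remains balanced.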
+
+ def property_accessor_to_html(self, val_doc, context=None):
+ if val_doc not in (None, UNKNOWN):
+ if isinstance(val_doc, RoutineDoc):
+ return self.function_signature(val_doc, is_summary=True,
+ link_name=True, context=context)
+ elif isinstance(val_doc, GenericValueDoc):
+ return self.pprint_value(val_doc)
+ else:
+ return self.href(val_doc, context=context)
+ else:
+ return '??'
+
+ def arg_name_to_html(self, func_doc, arg_name):
+ """
+ A helper function used to format an argument name, for use in
+ the argument description list under a routine's details entry.
+ This wraps strong & code tags around the argument name and, if
+ the argument has an associated type, appends the type
+ parenthetically after the name.
+ """
+ s = '<strong class="pname"><code>%s</code></strong>' % arg_name
+ if arg_name in func_doc.arg_types:
+ typ = func_doc.arg_types[arg_name]
+ typ_html = self.docstring_to_html(typ, func_doc, 10)
+ s += " (%s)" % typ_html
+ return s
+
+ write_function_details_entry = compile_template(
+ '''
+ write_function_details_entry(self, out, var_doc, descr, callgraph, \
+ rtype, rdescr, arg_descrs, div_class)
+ ''',
+ # /------------------------- Template -------------------------\
+ '''
+ >>> func_doc = var_doc.value
+ <a name="$var_doc.name$"></a>
+ <div$div_class$>
+ >>> self.write_table_header(out, "details")
+ <tr><td>
+ <table width="100%" cellpadding="0" cellspacing="0" border="0">
+ <tr valign="top"><td>
+ <h3 class="epydoc">$self.function_signature(var_doc)$
+ >>> if var_doc.name in self.SPECIAL_METHODS:
+ <br /><em class="fname">($self.SPECIAL_METHODS[var_doc.name]$)</em>
+ >>> #endif
+ >>> if isinstance(func_doc, ClassMethodDoc):
+ <br /><em class="fname">Class Method</em>
+ >>> #endif
+ >>> if isinstance(func_doc, StaticMethodDoc):
+ <br /><em class="fname">Static Method</em>
+ >>> #endif
+ </h3>
+ </td><td align="right" valign="top"
+ >$self.pysrc_link(func_doc)$
+ $self.callgraph_link(callgraph)$</td>
+ </tr></table>
+ $self.render_callgraph(callgraph)$
+ $descr$
+ <dl class="fields">
+ >>> # === parameters ===
+ >>> if arg_descrs:
+ <dt>Parameters:</dt>
+ <dd><ul class="nomargin-top">
+ >>> for lhs, rhs in arg_descrs:
+ $self.labelled_list_item(lhs, rhs)$
+ >>> #endfor
+ </ul></dd>
+ >>> #endif
+ >>> # === return type ===
+ >>> if rdescr and rtype:
+ <dt>Returns: $rtype$</dt>
+ <dd>$rdescr$</dd>
+ >>> elif rdescr:
+ <dt>Returns:</dt>
+ <dd>$rdescr$</dd>
+ >>> elif rtype:
+ <dt>Returns: $rtype$</dt>
+ >>> #endif
+ >>> # === decorators ===
+ >>> if func_doc.decorators not in (None, UNKNOWN):
+ >>> # (staticmethod & classmethod are already shown, above)
+ >>> decos = filter(lambda deco:
+ >>> not ((deco=="staticmethod" and
+ >>> isinstance(func_doc, StaticMethodDoc)) or
+ >>> (deco=="classmethod" and
+ >>> isinstance(func_doc, ClassMethodDoc))),
+ >>> func_doc.decorators)
+ >>> else:
+ >>> decos = None
+ >>> #endif
+ >>> if decos:
+ <dt>Decorators:</dt>
+ <dd><ul class="nomargin-top">
+ >>> for deco in decos:
+ <li><code>@$deco$</code></li>
+ >>> #endfor
+ </ul></dd>
+ >>> #endif
+ >>> # === exceptions ===
+ >>> if func_doc.exception_descrs not in (None, UNKNOWN, (), []):
+ <dt>Raises:</dt>
+ <dd><ul class="nomargin-top">
+ >>> for name, descr in func_doc.exception_descrs:
+ >>> exc_name = self.docindex.find(name, func_doc)
+ >>> if exc_name is not None:
+ >>> name = self.href(exc_name, label=str(name))
+ >>> #endif
+ $self.labelled_list_item(
+ "<code><strong class=\'fraise\'>" +
+ str(name) + "</strong></code>",
+ self.docstring_to_html(descr, func_doc, 8))$
+ >>> #endfor
+ </ul></dd>
+ >>> #endif
+ >>> # === overrides ===
+ >>> if var_doc.overrides not in (None, UNKNOWN):
+ <dt>Overrides:
+ >>> # Avoid passing GenericValueDoc to href()
+ >>> if isinstance(var_doc.overrides.value, RoutineDoc):
+ $self.href(var_doc.overrides.value, context=var_doc)$
+ >>> else:
+ >>> # In this case, a less interesting label is generated.
+ $self.href(var_doc.overrides, context=var_doc)$
+ >>> #endif
+ >>> if (func_doc.docstring in (None, UNKNOWN) and
+ >>> var_doc.overrides.value.docstring not in (None, UNKNOWN)):
+ <dd><em class="note">(inherited documentation)</em></dd>
+ >>> #endif
+ </dt>
+ >>> #endif
+ </dl>
+ >>> # === metadata ===
+ >>> self.write_standard_fields(out, func_doc)
+ </td></tr></table>
+ </div>
+ ''')
+ # \------------------------------------------------------------/
+
+ # Names for the __special__ methods.
+ SPECIAL_METHODS = {
+ '__init__': 'Constructor',
+ '__del__': 'Destructor',
+ '__add__': 'Addition operator',
+ '__sub__': 'Subtraction operator',
+ '__and__': 'And operator',
+ '__or__': 'Or operator',
+ '__xor__': 'Exclusive-Or operator',
+ '__repr__': 'Representation operator',
+ '__call__': 'Call operator',
+ '__getattr__': 'Qualification operator',
+ '__getitem__': 'Indexing operator',
+ '__setitem__': 'Index assignment operator',
+ '__delitem__': 'Index deletion operator',
+ '__delslice__': 'Slice deletion operator',
+ '__setslice__': 'Slice assignment operator',
+ '__getslice__': 'Slicing operator',
+ '__len__': 'Length operator',
+ '__cmp__': 'Comparison operator',
+ '__eq__': 'Equality operator',
+ '__in__': 'Containership operator',
+ '__gt__': 'Greater-than operator',
+ '__lt__': 'Less-than operator',
+ '__ge__': 'Greater-than-or-equals operator',
+ '__le__': 'Less-than-or-equals operator',
+ '__radd__': 'Right-side addition operator',
+ '__hash__': 'Hashing function',
+ '__contains__': 'In operator',
+ '__nonzero__': 'Boolean test operator',
+ '__str__': 'Informal representation operator',
+ }
+
+ write_property_details_entry = compile_template(
+ '''
+ write_property_details_entry(self, out, var_doc, descr, \
+ accessors, div_class)
+ ''',
+ # /------------------------- Template -------------------------\
+ '''
+ >>> prop_doc = var_doc.value
+ <a name="$var_doc.name$"></a>
+ <div$div_class$>
+ >>> self.write_table_header(out, "details")
+ <tr><td>
+ <h3 class="epydoc">$var_doc.name$</h3>
+ $descr$
+ <dl class="fields">
+ >>> for (name, val, summary) in accessors:
+ <dt>$name$ Method:</dt>
+ <dd class="value">$val$
+ >>> if summary:
+ - $summary$
+ >>> #endif
+ </dd>
+ >>> #endfor
+ >>> if prop_doc.type_descr not in (None, UNKNOWN):
+ <dt>Type:</dt>
+ <dd>$self.type_descr(var_doc, indent=6)$</dd>
+ >>> #endif
+ </dl>
+ >>> self.write_standard_fields(out, prop_doc)
+ </td></tr></table>
+ </div>
+ ''')
+ # \------------------------------------------------------------/
+
+ write_variable_details_entry = compile_template(
+ '''
+ write_variable_details_entry(self, out, var_doc, descr, div_class)
+ ''',
+ # /------------------------- Template -------------------------\
+ '''
+ <a name="$var_doc.name$"></a>
+ <div$div_class$>
+ >>> self.write_table_header(out, "details")
+ <tr><td>
+ <h3 class="epydoc">$var_doc.name$</h3>
+ $descr$
+ <dl class="fields">
+ >>> if var_doc.type_descr not in (None, UNKNOWN):
+ <dt>Type:</dt>
+ <dd>$self.type_descr(var_doc, indent=6)$</dd>
+ >>> #endif
+ </dl>
+ >>> self.write_standard_fields(out, var_doc)
+ >>> if var_doc.value is not UNKNOWN:
+ <dl class="fields">
+ <dt>Value:</dt>
+ <dd>$self.pprint_value(var_doc.value)$</dd>
+ </dl>
+ >>> #endif
+ </td></tr></table>
+ </div>
+ ''')
+ # \------------------------------------------------------------/
+
+ def variable_tooltip(self, var_doc):
+ if var_doc.value in (None, UNKNOWN):
+ return ''
+ s = var_doc.value.pyval_repr().to_plaintext(None)
+ if len(s) > self._variable_tooltip_linelen:
+ s = s[:self._variable_tooltip_linelen-3]+'...'
+ return ' title="%s"' % plaintext_to_html(s)
+
+ def pprint_value(self, val_doc):
+ if val_doc is UNKNOWN:
+ return '??'
+ elif isinstance(val_doc, GenericValueDoc):
+ return ('<table><tr><td><pre class="variable">\n' +
+ val_doc.pyval_repr().to_html(None) +
+ '\n</pre></td></tr></table>\n')
+ else:
+ return self.href(val_doc)
+
+ #////////////////////////////////////////////////////////////
+ #{ Base Tree
+ #////////////////////////////////////////////////////////////
+
+ def base_tree(self, doc, width=None, postfix='', context=None):
+ """
+ @return: The HTML code for a class's base tree. The tree is
+ drawn 'upside-down' and right-justified, to allow for
+ multiple inheritance.
+ @rtype: C{string}
+ """
+ if context is None:
+ context = doc.defining_module
+ if width is None: width = self.find_tree_width(doc, context)
+ if isinstance(doc, ClassDoc) and doc.bases != UNKNOWN:
+ bases = doc.bases
+ else:
+ bases = []
+
+ if postfix == '':
+ # [XX] use var name instead of canonical name?
+ s = (' '*(width-2) + '<strong class="uidshort">'+
+ self.contextual_label(doc, context)+'</strong>\n')
+ else: s = ''
+ for i in range(len(bases)-1, -1, -1):
+ base = bases[i]
+ label = self.contextual_label(base, context)
+ s = (' '*(width-4-len(label)) + self.href(base, label)
+ +' --+'+postfix+'\n' +
+ ' '*(width-4) +
+ ' |'+postfix+'\n' +
+ s)
+ if i != 0:
+ s = (self.base_tree(base, width-4, ' |'+postfix, context)+s)
+ else:
+ s = (self.base_tree(base, width-4, ' '+postfix, context)+s)
+ return s
+
+ def find_tree_width(self, doc, context):
+ """
+ Helper function for L{base_tree}.
+ @return: The width of a base tree, when drawn
+ right-justified. This is used by L{base_tree} to
+ determine how far to indent lines of the base tree.
+ @rtype: C{int}
+ """
+ if not isinstance(doc, ClassDoc): return 2
+ if doc.bases == UNKNOWN: return 2
+ width = 2
+ for base in doc.bases:
+ width = max(width, len(self.contextual_label(base, context))+4,
+ self.find_tree_width(base, context)+4)
+ return width
+
+ def contextual_label(self, doc, context):
+ """
+ Return the label for C{doc} to be shown in C{context}.
+ """
+ if doc.canonical_name is None:
+ if doc.parse_repr is not None:
+ return doc.parse_repr
+ else:
+ return '??'
+ else:
+ if context is UNKNOWN:
+ return str(doc.canonical_name)
+ else:
+ context_name = context.canonical_name
+ return str(doc.canonical_name.contextualize(context_name))
+
+ #////////////////////////////////////////////////////////////
+ #{ Function Signatures
+ #////////////////////////////////////////////////////////////
+
+ def function_signature(self, api_doc, is_summary=False,
+ link_name=False, anchor=False, context=None):
+ """Render a function signature in HTML.
+
+ @param api_doc: The object whose name is to be rendered. If a
+ C{VariableDoc}, its C{value} should be a C{RoutineDoc}
+ @type api_doc: L{VariableDoc} or L{RoutineDoc}
+ @param is_summary: True if the function is to be rendered in the summary.
+ @type is_summary: C{bool}
+ @param link_name: If True, the name is a link to the object anchor.
+ @type link_name: C{bool}
+ @param anchor: If True, the name is the object anchor.
+ @type anchor: C{bool}
+ @param context: If set, represent the function name from this context.
+ Only useful when C{api_doc} is a L{RoutineDoc}.
+ @type context: L{DottedName}
+
+ @return: The HTML code for the object.
+ @rtype: C{str}
+ """
+ if is_summary: css_class = 'summary-sig'
+ else: css_class = 'sig'
+
+ # [XX] clean this up!
+ if isinstance(api_doc, VariableDoc):
+ func_doc = api_doc.value
+ # This should never happen, but just in case:
+ if api_doc.value in (None, UNKNOWN):
+ return (('<span class="%s"><span class="%s-name">%s'+
+ '</span>(...)</span>') %
+ (css_class, css_class, api_doc.name))
+ # Get the function's name.
+ name = self.summary_name(api_doc, css_class=css_class+'-name',
+ link_name=link_name, anchor=anchor)
+ else:
+ func_doc = api_doc
+ name = self.href(api_doc, css_class=css_class+'-name',
+ context=context)
+
+ if func_doc.posargs == UNKNOWN:
+ args = ['...']
+ else:
+ args = [self.func_arg(n, d, css_class) for (n, d)
+ in zip(func_doc.posargs, func_doc.posarg_defaults)]
+ if func_doc.vararg not in (None, UNKNOWN):
+ if func_doc.vararg == '...':
+ args.append('<span class="%s-arg">...</span>' % css_class)
+ else:
+ args.append('<span class="%s-arg">*%s</span>' %
+ (css_class, func_doc.vararg))
+ if func_doc.kwarg not in (None, UNKNOWN):
+ args.append('<span class="%s-arg">**%s</span>' %
+ (css_class, func_doc.kwarg))
+
+ return ('<span class="%s">%s(%s)</span>' %
+ (css_class, name, ',\n '.join(args)))
+
+ def summary_name(self, api_doc, css_class='summary-name',
+ link_name=False, anchor=False):
+ """Render an object name in HTML.
+
+ @param api_doc: The object whose name is to be rendered
+ @type api_doc: L{APIDoc}
+ @param css_class: The CSS class to assign to the rendered name
+ @type css_class: C{str}
+ @param link_name: If True, the name is a link to the object anchor.
+ @type link_name: C{bool}
+ @param anchor: If True, the name is the object anchor.
+ @type anchor: C{bool}
+
+ @return: The HTML code for the object.
+ @rtype: C{str}
+ """
+ if anchor:
+ rv = '<a name="%s"></a>' % api_doc.name
+ else:
+ rv = ''
+
+ if link_name:
+ rv += self.href(api_doc, css_class=css_class)
+ else:
+ rv += '<span class="%s">%s</span>' % (css_class, api_doc.name)
+
+ return rv
+
+ # [xx] tuple args???
+ def func_arg(self, name, default, css_class):
+ name = self._arg_name(name)
+ s = '<span class="%s-arg">%s</span>' % (css_class, name)
+ if default is not None:
+ s += ('=<span class="%s-default">%s</span>' %
+ (css_class, default.summary_pyval_repr().to_html(None)))
+ return s
+
+ def _arg_name(self, arg):
+ if isinstance(arg, basestring):
+ return arg
+ elif len(arg) == 1:
+ return '(%s,)' % self._arg_name(arg[0])
+ else:
+ return '(%s)' % (', '.join([self._arg_name(a) for a in arg]))
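The `_arg_name` recursion handles Python 2 style tuple-unpacking parameters; this standalone sketch reproduces its logic (testing against `str` rather than the Python 2 `basestring` used in the source) to show how nested tuples render:

```python
def arg_name(arg):
    # Strings are plain parameter names; tuples are Python 2 style
    # tuple-unpacking parameters, rendered recursively, with the
    # one-element case keeping Python's trailing-comma syntax.
    if isinstance(arg, str):
        return arg
    elif len(arg) == 1:
        return '(%s,)' % arg_name(arg[0])
    else:
        return '(%s)' % ', '.join(arg_name(a) for a in arg)
```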
+
+
+
+
+ #////////////////////////////////////////////////////////////
+ #{ Import Lists
+ #////////////////////////////////////////////////////////////
+
+ def write_imports(self, out, doc):
+ assert isinstance(doc, NamespaceDoc)
+ imports = doc.select_variables(imported=True,
+ public=self._public_filter)
+ if not imports: return
+
+ out('<p class="indent-wrapped-lines">')
+ out('<b>Imports:</b>\n ')
+ out(',\n '.join([self._import(v, doc) for v in imports]))
+ out('\n</p><br />\n')
+
+ def _import(self, var_doc, context):
+ if var_doc.imported_from not in (None, UNKNOWN):
+ return self.href(var_doc.imported_from,
+ var_doc.name, context=context,
+ tooltip='%s' % var_doc.imported_from)
+ elif (var_doc.value not in (None, UNKNOWN) and not
+ isinstance(var_doc.value, GenericValueDoc)):
+ return self.href(var_doc.value,
+ var_doc.name, context=context,
+ tooltip='%s' % var_doc.value.canonical_name)
+ else:
+ return plaintext_to_html(var_doc.name)
+
+ #////////////////////////////////////////////////////////////
+ #{ Function Attributes
+ #////////////////////////////////////////////////////////////
+
+ #////////////////////////////////////////////////////////////
+ #{ Module Trees
+ #////////////////////////////////////////////////////////////
+
+ def write_module_list(self, out, doc):
+ if len(doc.submodules) == 0: return
+ self.write_table_header(out, "summary", "Submodules")
+
+ for group_name in doc.group_names():
+ if not doc.submodule_groups[group_name]: continue
+ if group_name:
+ self.write_group_header(out, group_name)
+ out(' <tr><td class="summary">\n'
+ ' <ul class="nomargin">\n')
+ for submodule in doc.submodule_groups[group_name]:
+ self.write_module_tree_item(out, submodule, package=doc)
+ out(' </ul></td></tr>\n')
+
+ out(self.TABLE_FOOTER+'\n<br />\n')
+
+ def write_module_tree_item(self, out, doc, package=None):
+ # If it's a private variable, then mark its <li>.
+ var = package and package.variables.get(doc.canonical_name[-1])
+ priv = ((var is not None and var.is_public is False) or
+ (var is None and doc.canonical_name[-1].startswith('_')))
+ out(' <li%s> <strong class="uidlink">%s</strong>'
+ % (priv and ' class="private"' or '', self.href(doc)))
+ if doc.summary not in (None, UNKNOWN):
+ out(': <em class="summary">'+
+ self.description(doc.summary, doc, 8)+'</em>')
+ if doc.submodules != UNKNOWN and doc.submodules:
+ if priv: out('\n <ul class="private">\n')
+ else: out('\n <ul>\n')
+ for submodule in doc.submodules:
+ self.write_module_tree_item(out, submodule, package=doc)
+ out(' </ul>\n')
+ out(' </li>\n')
+
+ #////////////////////////////////////////////////////////////
+ #{ Class trees
+ #////////////////////////////////////////////////////////////
+
+ write_class_tree_item = compile_template(
+ '''
+ write_class_tree_item(self, out, doc, class_set)
+ ''',
+ # /------------------------- Template -------------------------\
+ '''
+ >>> if doc.summary in (None, UNKNOWN):
+ <li> <strong class="uidlink">$self.href(doc)$</strong>
+ >>> else:
+ <li> <strong class="uidlink">$self.href(doc)$</strong>:
+ <em class="summary">$self.description(doc.summary, doc, 8)$</em>
+ >>> # endif
+ >>> if doc.subclasses:
+ <ul>
+ >>> for subclass in sorted(set(doc.subclasses), key=lambda c:c.canonical_name[-1]):
+ >>> if subclass in class_set:
+ >>> self.write_class_tree_item(out, subclass, class_set)
+ >>> #endif
+ >>> #endfor
+ </ul>
+ >>> #endif
+ </li>
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ Standard Fields
+ #////////////////////////////////////////////////////////////
+
+ def write_standard_fields(self, out, doc):
+ """
+ Write HTML code containing descriptions of any standard markup
+ fields that are defined by the given L{APIDoc} object (such as
+ C{@author} and C{@todo} fields).
+
+ @param doc: The L{APIDoc} object containing the API documentation
+ for the object whose standard markup fields should be
+ described.
+ """
+ fields = []
+ field_values = {}
+
+ for (field, arg, descr) in doc.metadata:
+ if field not in field_values:
+ fields.append(field)
+ if field.takes_arg:
+ subfields = field_values.setdefault(field,{})
+ subfields.setdefault(arg,[]).append(descr)
+ else:
+ field_values.setdefault(field,[]).append(descr)
+
+ if not fields: return
+
+ out('<div class="fields">')
+ for field in fields:
+ if field.takes_arg:
+ for arg, descrs in field_values[field].items():
+ self.write_standard_field(out, doc, field, descrs, arg)
+
+ else:
+ self.write_standard_field(out, doc, field, field_values[field])
+
+ out('</div>')
+
+ write_standard_field = compile_template(
+ """
+ write_standard_field(self, out, doc, field, descrs, arg='')
+
+ """,
+ # /------------------------- Template -------------------------\
+ '''
+ >>> if arg: arglabel = " (%s)" % arg
+ >>> else: arglabel = ""
+ >>> if len(descrs) == 1:
+ <p><strong>$field.singular+arglabel$:</strong>
+ $self.description(descrs[0], doc, 8)$
+ </p>
+ >>> elif field.short:
+ <dl><dt>$field.plural+arglabel$:</dt>
+ <dd>
+ >>> for descr in descrs[:-1]:
+ $self.description(descr, doc, 10)$,
+ >>> # end for
+ $self.description(descrs[-1], doc, 10)$
+ </dd>
+ </dl>
+ >>> else:
+ <strong>$field.plural+arglabel$:</strong>
+ <ul class="nomargin-top">
+ >>> for descr in descrs:
+ <li>
+ $self.description(descr, doc, 8)$
+ </li>
+ >>> # end for
+ </ul>
+ >>> # end else
+ >>> # end for
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ Index generation
+ #////////////////////////////////////////////////////////////
+
+ #: A list of metadata indices that should be generated. Each
+ #: entry in this list is a tuple C{(tag, label, short_label)},
+ #: where C{tag} is the canonical tag of a metadata field;
+ #: C{label} is a label for the index page; and C{short_label}
+ #: is a shorter label, used in the index selector.
+ METADATA_INDICES = [('bug', 'Bug List', 'Bugs'),
+ ('todo', 'To Do List', 'To Do'),
+ ('change', 'Change Log', 'Changes'),
+ ('deprecated', 'Deprecation List', 'Deprecations'),
+ ('since', 'Introductions List', 'Introductions'),
+ ]
+
+ def build_identifier_index(self):
+ items = []
+ for doc in self.indexed_docs:
+ name = plaintext_to_html(doc.canonical_name[-1])
+ if isinstance(doc, RoutineDoc): name += '()'
+ url = self.url(doc)
+ if not url: continue
+ container = self.docindex.container(doc)
+ items.append( (name, url, container) )
+ return sorted(items, key=lambda v:v[0].lower())
+
+ def _group_by_letter(self, items):
+ """Preserves sort order of the input."""
+ index = {}
+ for item in items:
+ first_letter = item[0][0].upper()
+ if not ("A" <= first_letter <= "Z"):
+ first_letter = '_'
+ index.setdefault(first_letter, []).append(item)
+ return index
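The bucketing rule used by `_group_by_letter` is easy to sanity-check on its own; this sketch of the same logic shows how names that don't start with A-Z (leading underscores, digits) all collapse into the `'_'` bucket while input order is preserved within each bucket:

```python
def group_by_letter(items):
    # Bucket (name, ...) tuples by the uppercased first letter of the
    # name; anything outside A-Z falls into the catch-all '_' bucket.
    index = {}
    for item in items:
        first = item[0][0].upper()
        if not ('A' <= first <= 'Z'):
            first = '_'
        index.setdefault(first, []).append(item)
    return index
```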
+
+ def build_term_index(self):
+ items = []
+ for doc in self.indexed_docs:
+ url = self.url(doc)
+ items += self._terms_from_docstring(url, doc, doc.descr)
+ for (field, arg, descr) in doc.metadata:
+ items += self._terms_from_docstring(url, doc, descr)
+ if hasattr(doc, 'type_descr'):
+ items += self._terms_from_docstring(url, doc,
+ doc.type_descr)
+ if hasattr(doc, 'return_descr'):
+ items += self._terms_from_docstring(url, doc,
+ doc.return_descr)
+ if hasattr(doc, 'return_type'):
+ items += self._terms_from_docstring(url, doc,
+ doc.return_type)
+ return sorted(items, key=lambda v:v[0].lower())
+
+ def _terms_from_docstring(self, base_url, container, parsed_docstring):
+ if parsed_docstring in (None, UNKNOWN): return []
+ terms = []
+ # Strip any existing anchor off:
+ base_url = re.sub('#.*', '', '%s' % (base_url,))
+ for term in parsed_docstring.index_terms():
+ anchor = self._term_index_to_anchor(term)
+ url = '%s#%s' % (base_url, anchor)
+ terms.append( (term.to_plaintext(None), url, container) )
+ return terms
+
+ def build_metadata_index(self, field_name):
+ # Build the index.
+ index = {}
+ for doc in self.indexed_docs:
+ if (not self._show_private and
+ self._doc_or_ancestor_is_private(doc)):
+ continue
+ descrs = {}
+ if doc.metadata is not UNKNOWN:
+ for (field, arg, descr) in doc.metadata:
+ if field.tags[0] == field_name:
+ descrs.setdefault(arg, []).append(descr)
+ for (arg, descr_list) in descrs.iteritems():
+ index.setdefault(arg, []).append( (doc, descr_list) )
+ return index
+
+ def _term_index_to_anchor(self, term):
+ """
+ Given the name of an inline index item, construct a URI anchor.
+ These anchors are used to create links from the index page to each
+ index item.
+ """
+ # Include "-" so we don't accidentally collide with the name
+ # of a python identifier.
+ s = re.sub(r'\s\s+', '-', term.to_plaintext(None))
+ return "index-"+re.sub("[^a-zA-Z0-9]", "_", s)
+
+ #////////////////////////////////////////////////////////////
+ #{ Redirect page
+ #////////////////////////////////////////////////////////////
+
+ def write_redirect_page(self, out):
+ """
+ Build the auto-redirect page, which translates dotted names to
+ URLs using javascript. When the user visits
+ <redirect.html#dotted.name>, they will automatically get
+ redirected to the page for the object with the given
+ fully-qualified dotted name. E.g., for epydoc,
+ <redirect.html#epydoc.apidoc.UNKNOWN> redirects the user to
+ <epydoc.apidoc-module.html#UNKNOWN>.
+ """
+ # Construct a list of all the module & class pages that we're
+ # documenting. The redirect_url javascript will scan through
+ # this list, looking for a page name that matches the
+ # requested dotted name.
+ pages = (['%s-m' % val_doc.canonical_name
+ for val_doc in self.module_list] +
+ ['%s-c' % val_doc.canonical_name
+ for val_doc in self.class_list])
+ # Sort the pages from longest to shortest. This ensures that
+ # we find e.g. "x.y.z" in the list before "x.y".
+ pages = sorted(pages, key=lambda p:-len(p))
+
+ # Write the redirect page.
+ self._write_redirect_page(out, pages)
+
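The longest-first sort in `write_redirect_page` is what makes the javascript prefix lookup safe: a request for `x.y.z` must hit the `x.y.z` page before the shorter `x.y` entry can match. A minimal sketch with hypothetical page names:

```python
# Hypothetical module/class page names, as built in write_redirect_page
# ('-m' suffix for module pages, '-c' for class pages).
pages = ['epydoc-m', 'epydoc.apidoc-m', 'epydoc.apidoc.APIDoc-c']

# Sort longest name first so a scan for "epydoc.apidoc.APIDoc" cannot
# stop early at the shorter "epydoc.apidoc" entry.
pages = sorted(pages, key=lambda p: -len(p))
```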
+ _write_redirect_page = compile_template(
+ '''
+ _write_redirect_page(self, out, pages)
+ ''',
+ # /------------------------- Template -------------------------\
+ '''
+ <html><head><title>Epydoc Redirect Page</title>
+ <meta http-equiv="cache-control" content="no-cache" />
+ <meta http-equiv="expires" content="0" />
+ <meta http-equiv="pragma" content="no-cache" />
+ <script type="text/javascript" src="epydoc.js"></script>
+ </head>
+ <body>
+ <script type="text/javascript">
+ <!--
+ var pages = $"[%s]" % ", ".join(['"%s"' % v for v in pages])$;
+ var dottedName = get_anchor();
+ if (dottedName) {
+ var target = redirect_url(dottedName);
+ if (target) window.location.replace(target);
+ }
+ // -->
+ </script>
+
+ <h3>Epydoc Auto-redirect page</h3>
+
+ <p>When javascript is enabled, this page will redirect URLs of
+ the form <tt>redirect.html#<i>dotted.name</i></tt> to the
+ documentation for the object with the given fully-qualified
+ dotted name.</p>
+ <p><a id="message"> </a></p>
+
+ <script type="text/javascript">
+ <!--
+ if (dottedName) {
+ var msg = document.getElementById("message");
+ msg.innerHTML = "No documentation found for <tt>"+
+ dottedName+"</tt>";
+ }
+ // -->
+ </script>
+
+ </body>
+ </html>
+ ''')
+ # \------------------------------------------------------------/
+
+ #////////////////////////////////////////////////////////////
+ #{ URLs list
+ #////////////////////////////////////////////////////////////
+
+ def write_api_list(self, out):
+ """
+ Write a list mapping name -> URL for all the documented objects.
+ """
+ # Write one name -> URL record per documented object. Module and
+ # class values reached through module variables are skipped, since
+ # they are already recorded under their own entries.
+ skip = (ModuleDoc, ClassDoc, type(UNKNOWN))
+ for val_doc in self.module_list:
+ self.write_url_record(out, val_doc)
+ for var in val_doc.variables.itervalues():
+ if not isinstance(var.value, skip):
+ self.write_url_record(out, var)
+
+ for val_doc in self.class_list:
+ self.write_url_record(out, val_doc)
+ for var in val_doc.variables.itervalues():
+ self.write_url_record(out, var)
+
+ def write_url_record(self, out, obj):
+ url = self.url(obj)
+ if url is not None:
+ out("%s\t%s\n" % (obj.canonical_name, url))
+
+ #////////////////////////////////////////////////////////////
+ #{ Helper functions
+ #////////////////////////////////////////////////////////////
+
+ def _val_is_public(self, valdoc):
+ """Make a best-guess as to whether the given class is public."""
+ container = self.docindex.container(valdoc)
+ if isinstance(container, NamespaceDoc):
+ for vardoc in container.variables.values():
+ if vardoc in (UNKNOWN, None): continue
+ if vardoc.value is valdoc:
+ return vardoc.is_public
+ return True
+
+ # [XX] Is it worth-while to pull the anchor tricks that I do here?
+ # Or should I just live with the fact that show/hide private moves
+ # stuff around?
+ write_table_header = compile_template(
+ '''
+ write_table_header(self, out, css_class, heading=None, \
+ private_link=True, colspan=2)
+ ''',
+ # /------------------------- Template -------------------------\
+ '''
+ >>> if heading is not None:
+ >>> anchor = "section-%s" % re.sub("\W", "", heading)
+ <!-- ==================== $heading.upper()$ ==================== -->
+ <a name="$anchor$"></a>
+ >>> #endif
+ <table class="$css_class$" border="1" cellpadding="3"
+ cellspacing="0" width="100%" bgcolor="white">
+ >>> if heading is not None:
+ <tr bgcolor="#70b0f0" class="table-header">
+ >>> if private_link and self._show_private:
+ <td colspan="$colspan$" class="table-header">
+ <table border="0" cellpadding="0" cellspacing="0" width="100%">
+ <tr valign="top">
+ <td align="left"><span class="table-header">$heading$</span></td>
+ <td align="right" valign="top"
+ ><span class="options">[<a href="#$anchor$"
+ class="privatelink" onclick="toggle_private();"
+ >hide private</a>]</span></td>
+ </tr>
+ </table>
+ </td>
+ >>> else:
+ <td align="left" colspan="2" class="table-header">
+ <span class="table-header">$heading$</span></td>
+ >>> #endif
+ </tr>
+ >>> #endif
+ ''')
+ # \------------------------------------------------------------/
+
+ TABLE_FOOTER = '</table>\n'
+
+ PRIVATE_LINK = '''
+ <span class="options">[<a href="javascript:void(0);" class="privatelink"
+ onclick="toggle_private();">hide private</a>]</span>
+ '''.strip()
+
+ write_group_header = compile_template(
+ '''
+ write_group_header(self, out, group, tr_class='')
+ ''',
+ # /------------------------- Template -------------------------\
+ '''
+ <tr bgcolor="#e8f0f8" $tr_class$>
+ <th colspan="2" class="group-header"
+ > $group$</th></tr>
+ ''')
+ # \------------------------------------------------------------/
+
+ _url_cache = {}
+ def url(self, obj):
+ """
+ Return the URL for the given object, which can be a
+ C{VariableDoc}, a C{ValueDoc}, or a C{DottedName}.
+ """
+ cached_url = self._url_cache.get(id(obj))
+ if cached_url is not None:
+ return cached_url
+ else:
+ url = self._url_cache[id(obj)] = self._url(obj)
+ return url
+
+ def _url(self, obj):
+ """
+ Internal helper for L{url}.
+ """
+ # Module: <canonical_name>-module.html
+ if isinstance(obj, ModuleDoc):
+ if obj not in self.module_set: return None
+ return urllib.quote('%s'%obj.canonical_name) + '-module.html'
+ # Class: <canonical_name>-class.html
+ elif isinstance(obj, ClassDoc):
+ if obj not in self.class_set: return None
+ return urllib.quote('%s'%obj.canonical_name) + '-class.html'
+ # Variable
+ elif isinstance(obj, VariableDoc):
+ val_doc = obj.value
+ if isinstance(val_doc, (ModuleDoc, ClassDoc)):
+ return self.url(val_doc)
+ elif obj.container in (None, UNKNOWN):
+ if val_doc in (None, UNKNOWN): return None
+ return self.url(val_doc)
+ elif obj.is_imported == True:
+ if obj.imported_from is not UNKNOWN:
+ return self.url(obj.imported_from)
+ else:
+ return None
+ else:
+ container_url = self.url(obj.container)
+ if container_url is None: return None
+ return '%s#%s' % (container_url, urllib.quote('%s'%obj.name))
+ # Value (other than module or class)
+ elif isinstance(obj, ValueDoc):
+ container = self.docindex.container(obj)
+ if container is None:
+ return None # We couldn't find it!
+ else:
+ container_url = self.url(container)
+ if container_url is None: return None
+ anchor = urllib.quote('%s'%obj.canonical_name[-1])
+ return '%s#%s' % (container_url, anchor)
+ # Dotted name: look up the corresponding APIDoc
+ elif isinstance(obj, DottedName):
+ val_doc = self.docindex.get_valdoc(obj)
+ if val_doc is None: return None
+ return self.url(val_doc)
+ # Special pages:
+ elif obj == 'indices':
+ return 'identifier-index.html'
+ elif obj == 'help':
+ return 'help.html'
+ elif obj == 'trees':
+ return self._trees_url
+ else:
+ raise ValueError, "Don't know what to do with %r" % obj
+
+ def pysrc_link(self, api_doc):
+ if not self._incl_sourcecode:
+ return ''
+ url = self.pysrc_url(api_doc)
+ if url is not None:
+ return ('<span class="codelink"><a href="%s">source '
+ 'code</a></span>' % url)
+ else:
+ return ''
+
+ def pysrc_url(self, api_doc):
+ if isinstance(api_doc, VariableDoc):
+ if api_doc.value not in (None, UNKNOWN):
+ return self.pysrc_url(api_doc.value)
+ else:
+ return None
+ elif isinstance(api_doc, ModuleDoc):
+ if api_doc in self.modules_with_sourcecode:
+ return ('%s-pysrc.html' %
+ urllib.quote('%s' % api_doc.canonical_name))
+ else:
+ return None
+ else:
+ module = api_doc.defining_module
+ if module == UNKNOWN: return None
+ module_pysrc_url = self.pysrc_url(module)
+ if module_pysrc_url is None: return None
+ module_name = module.canonical_name
+ if not module_name.dominates(api_doc.canonical_name, True):
+ log.debug('%r is in %r but name does not dominate' %
+ (api_doc, module))
+ return module_pysrc_url
+ mname_len = len(module.canonical_name)
+ anchor = '%s' % api_doc.canonical_name[mname_len:]
+ return '%s#%s' % (module_pysrc_url, urllib.quote(anchor))
+
+ # [xx] add code to automatically do <code> wrapping or the like?
+ def href(self, target, label=None, css_class=None, context=None,
+ tooltip=None):
+ """
+ Return the HTML code for an HREF link to the given target
+ (which can be a C{VariableDoc}, a C{ValueDoc}, or a
+ C{DottedName}).
+ If a C{NamespaceDoc} C{context} is specified, the target label is
+ contextualized to it.
+ """
+ assert isinstance(target, (APIDoc, DottedName))
+
+ # Pick a label, if none was given.
+ if label is None:
+ if isinstance(target, VariableDoc):
+ label = target.name
+ elif (isinstance(target, ValueDoc) and
+ target.canonical_name is not UNKNOWN):
+ label = target.canonical_name
+ elif isinstance(target, DottedName):
+ label = target
+ elif isinstance(target, GenericValueDoc):
+ raise ValueError("href() should not be called with "
+ "GenericValueDoc objects (perhaps you "
+ "meant to use the containing variable?)")
+ else:
+ raise ValueError("Unable to find a label for %r" % target)
+
+ if context is not None and isinstance(label, DottedName):
+ label = label.contextualize(context.canonical_name.container())
+
+ label = plaintext_to_html(str(label))
+
+ # Munge names for scripts & unreachable values
+ if label.startswith('script-'):
+ label = label[7:] + ' (script)'
+ if label.startswith('??'):
+ label = '<i>unreachable</i>' + label[2:]
+ label = re.sub(r'-\d+$', '', label)
+
+ # Get the url for the target.
+ url = self.url(target)
+ if url is None:
+ if tooltip: return '<span title="%s">%s</span>' % (tooltip, label)
+ else: return label
+
+ # Construct a string for the class attribute.
+ if css_class is None:
+ css = ''
+ else:
+ css = ' class="%s"' % css_class
+
+ onclick = ''
+ if ((isinstance(target, VariableDoc) and not target.is_public) or
+ (isinstance(target, ValueDoc) and
+ not isinstance(target, GenericValueDoc) and
+ not self._val_is_public(target))):
+ onclick = ' onclick="show_private();"'
+
+ if tooltip:
+ tooltip = ' title="%s"' % tooltip
+ else:
+ tooltip = ''
+
+ return '<a href="%s"%s%s%s>%s</a>' % (url, css, onclick, tooltip, label)
+
+ def _attr_to_html(self, attr, api_doc, indent):
+ if api_doc in (None, UNKNOWN):
+ return ''
+ pds = getattr(api_doc, attr, None) # pds = ParsedDocstring.
+ if pds not in (None, UNKNOWN):
+ return self.docstring_to_html(pds, api_doc, indent)
+ elif isinstance(api_doc, VariableDoc):
+ return self._attr_to_html(attr, api_doc.value, indent)
+
+ def summary(self, api_doc, indent=0):
+ return self._attr_to_html('summary', api_doc, indent)
+
+ def descr(self, api_doc, indent=0):
+ return self._attr_to_html('descr', api_doc, indent)
+
+ def type_descr(self, api_doc, indent=0):
+ return self._attr_to_html('type_descr', api_doc, indent)
+
+ def return_type(self, api_doc, indent=0):
+ return self._attr_to_html('return_type', api_doc, indent)
+
+ def return_descr(self, api_doc, indent=0):
+ return self._attr_to_html('return_descr', api_doc, indent)
+
+ def docstring_to_html(self, parsed_docstring, where=None, indent=0):
+ if parsed_docstring in (None, UNKNOWN): return ''
+ linker = _HTMLDocstringLinker(self, where)
+ s = parsed_docstring.to_html(linker, indent=indent,
+ directory=self._directory,
+ docindex=self.docindex,
+ context=where).strip()
+ if self._mark_docstrings:
+ s = '<span class="docstring">%s</span><!--end docstring-->' % s
+ return s
+
+ def description(self, parsed_docstring, where=None, indent=0):
+ assert isinstance(where, (APIDoc, type(None)))
+ if parsed_docstring in (None, UNKNOWN): return ''
+ linker = _HTMLDocstringLinker(self, where)
+ descr = parsed_docstring.to_html(linker, indent=indent,
+ directory=self._directory,
+ docindex=self.docindex,
+ context=where).strip()
+ if descr == '': return '&nbsp;'
+ return descr
+
+ # [xx] Should this be defined by the APIDoc classes themselves??
+ def doc_kind(self, doc):
+ if isinstance(doc, ModuleDoc) and doc.is_package == True:
+ return 'Package'
+ elif (isinstance(doc, ModuleDoc) and
+ doc.canonical_name[0].startswith('script')):
+ return 'Script'
+ elif isinstance(doc, ModuleDoc):
+ return 'Module'
+ elif isinstance(doc, ClassDoc):
+ return 'Class'
+ elif isinstance(doc, ClassMethodDoc):
+ return 'Class Method'
+ elif isinstance(doc, StaticMethodDoc):
+ return 'Static Method'
+ elif isinstance(doc, RoutineDoc):
+ if isinstance(self.docindex.container(doc), ClassDoc):
+ return 'Method'
+ else:
+ return 'Function'
+ else:
+ return 'Variable'
+
+ def _doc_or_ancestor_is_private(self, api_doc):
+ name = api_doc.canonical_name
+ for i in range(len(name), 0, -1):
+ # Is it (or an ancestor) a private var?
+ var_doc = self.docindex.get_vardoc(name[:i])
+ if var_doc is not None and var_doc.is_public == False:
+ return True
+ # Is it (or an ancestor) a private module?
+ val_doc = self.docindex.get_valdoc(name[:i])
+ if (val_doc is not None and isinstance(val_doc, ModuleDoc) and
+ val_doc.canonical_name[-1].startswith('_')):
+ return True
+ return False
+
+ def _private_subclasses(self, class_doc):
+ """Return the set of all subclasses of the given class that are
+ private (i.e., for which L{_val_is_public} returns false).
+ Recursive subclasses are included in this set."""
+ queue = [class_doc]
+ private = set()
+ for cls in queue:
+ if (isinstance(cls, ClassDoc) and
+ cls.subclasses not in (None, UNKNOWN)):
+ queue.extend(cls.subclasses)
+ private.update([c for c in cls.subclasses if
+ not self._val_is_public(c)])
+ return private
+
+class _HTMLDocstringLinker(epydoc.markup.DocstringLinker):
+ def __init__(self, htmlwriter, container):
+ self.htmlwriter = htmlwriter
+ self.docindex = htmlwriter.docindex
+ self.container = container
+
+ def translate_indexterm(self, indexterm):
+ key = self.htmlwriter._term_index_to_anchor(indexterm)
+ return ('<a name="%s"></a><i class="indexterm">%s</i>' %
+ (key, indexterm.to_html(self)))
+
+ def translate_identifier_xref(self, identifier, label=None):
+ # Pick a label for this xref.
+ if label is None: label = plaintext_to_html(identifier)
+
+ # Find the APIDoc for it (if it's available).
+ doc = self.docindex.find(identifier, self.container)
+
+ # If we didn't find a target, then try checking in the contexts
+ # of the ancestor classes.
+ if doc is None and isinstance(self.container, RoutineDoc):
+ container = self.docindex.get_vardoc(
+ self.container.canonical_name)
+ while (doc is None and container not in (None, UNKNOWN)
+ and container.overrides not in (None, UNKNOWN)):
+ container = container.overrides
+ doc = self.docindex.find(identifier, container)
+
+ # Translate it into HTML.
+ if doc is None:
+ self._failed_xref(identifier)
+ return '<code class="link">%s</code>' % label
+ else:
+ return self.htmlwriter.href(doc, label, 'link')
+
+ # [xx] Should this be added to the DocstringLinker interface???
+ # Currently, this is *only* used by dotgraph.
+ def url_for(self, identifier):
+ if isinstance(identifier, (basestring, DottedName)):
+ doc = self.docindex.find(identifier, self.container)
+ if doc:
+ return self.htmlwriter.url(doc)
+ else:
+ return None
+
+ elif isinstance(identifier, APIDoc):
+ return self.htmlwriter.url(identifier)
+
+ else:
+ raise TypeError('Expected string or APIDoc')
+
+ def _failed_xref(self, identifier):
+ """Add an identifier to the htmlwriter's failed crossreference
+ list."""
+ # Don't count it as a failed xref if it's a parameter of the
+ # current function.
+ if (isinstance(self.container, RoutineDoc) and
+ identifier in self.container.all_args()):
+ return
+
+ failed_xrefs = self.htmlwriter._failed_xrefs
+ context = self.container.canonical_name
+ failed_xrefs.setdefault(identifier,{})[context] = 1
diff --git a/python/helpers/epydoc/docwriter/html_colorize.py b/python/helpers/epydoc/docwriter/html_colorize.py
new file mode 100644
index 0000000..38d0758
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/html_colorize.py
@@ -0,0 +1,909 @@
+#
+# epydoc.html: HTML colorizers
+# Edward Loper
+#
+# Created [10/16/02 09:49 PM]
+# $Id: html_colorize.py 1674 2008-01-29 06:03:36Z edloper $
+#
+
+"""
+Functions to produce colorized HTML code for various objects.
+Currently, C{html_colorize} defines functions to colorize
+Python source code.
+"""
+__docformat__ = 'epytext en'
+
+import re, codecs
+from epydoc import log
+from epydoc.util import py_src_filename
+from epydoc.apidoc import *
+import tokenize, token, cgi, keyword
+try: from cStringIO import StringIO
+except: from StringIO import StringIO
+
+######################################################################
+## Python source colorizer
+######################################################################
+"""
+Goals:
+ - colorize tokens appropriately (using css)
+ - optionally add line numbers
+"""
+
+#: Javascript code for the PythonSourceColorizer
+PYSRC_JAVASCRIPTS = '''\
+function expand(id) {
+ var elt = document.getElementById(id+"-expanded");
+ if (elt) elt.style.display = "block";
+ var elt = document.getElementById(id+"-expanded-linenums");
+ if (elt) elt.style.display = "block";
+ var elt = document.getElementById(id+"-collapsed");
+ if (elt) { elt.innerHTML = ""; elt.style.display = "none"; }
+ var elt = document.getElementById(id+"-collapsed-linenums");
+ if (elt) { elt.innerHTML = ""; elt.style.display = "none"; }
+ var elt = document.getElementById(id+"-toggle");
+ if (elt) { elt.innerHTML = "-"; }
+}
+
+function collapse(id) {
+ var elt = document.getElementById(id+"-expanded");
+ if (elt) elt.style.display = "none";
+ var elt = document.getElementById(id+"-expanded-linenums");
+ if (elt) elt.style.display = "none";
+ var elt = document.getElementById(id+"-collapsed-linenums");
+ if (elt) { elt.innerHTML = "<br />"; elt.style.display="block"; }
+ var elt = document.getElementById(id+"-toggle");
+ if (elt) { elt.innerHTML = "+"; }
+ var elt = document.getElementById(id+"-collapsed");
+ if (elt) {
+ elt.style.display = "block";
+
+ var indent = elt.getAttribute("indent");
+ var pad = elt.getAttribute("pad");
+ var s = "<tt class=\'py-lineno\'>";
+ for (var i=0; i<pad.length; i++) { s += " " }
+ s += "</tt>";
+ s += " <tt class=\'py-line\'>";
+ for (var i=0; i<indent.length; i++) { s += " " }
+ s += "<a href=\'#\' onclick=\'expand(\\"" + id;
+ s += "\\");return false\'>...</a></tt><br />";
+ elt.innerHTML = s;
+ }
+}
+
+function toggle(id) {
+ elt = document.getElementById(id+"-toggle");
+ if (elt.innerHTML == "-")
+ collapse(id);
+ else
+ expand(id);
+ return false;
+}
+
+function highlight(id) {
+ var elt = document.getElementById(id+"-def");
+ if (elt) elt.className = "py-highlight-hdr";
+ var elt = document.getElementById(id+"-expanded");
+ if (elt) elt.className = "py-highlight";
+ var elt = document.getElementById(id+"-collapsed");
+ if (elt) elt.className = "py-highlight";
+}
+
+function num_lines(s) {
+ var n = 1;
+ var pos = s.indexOf("\\n");
+ while ( pos > 0) {
+ n += 1;
+ pos = s.indexOf("\\n", pos+1);
+ }
+ return n;
+}
+
+// Collapse all blocks that have more than `min_lines` lines.
+function collapse_all(min_lines) {
+ var elts = document.getElementsByTagName("div");
+ for (var i=0; i<elts.length; i++) {
+ var elt = elts[i];
+ var split = elt.id.indexOf("-");
+ if (split > 0)
+ if (elt.id.substring(split, elt.id.length) == "-expanded")
+ if (num_lines(elt.innerHTML) > min_lines)
+ collapse(elt.id.substring(0, split));
+ }
+}
+
+function expandto(href) {
+ var start = href.indexOf("#")+1;
+ if (start != 0 && start != href.length) {
+ if (href.substring(start, href.length) != "-") {
+ collapse_all(4);
+ pos = href.indexOf(".", start);
+ while (pos != -1) {
+ var id = href.substring(start, pos);
+ expand(id);
+ pos = href.indexOf(".", pos+1);
+ }
+ var id = href.substring(start, href.length);
+ expand(id);
+ highlight(id);
+ }
+ }
+}
+
+function kill_doclink(id) {
+ var parent = document.getElementById(id);
+ parent.removeChild(parent.childNodes.item(0));
+}
+function auto_kill_doclink(ev) {
+ if (!ev) var ev = window.event;
+ if (!this.contains(ev.toElement)) {
+ var parent = document.getElementById(this.parentID);
+ parent.removeChild(parent.childNodes.item(0));
+ }
+}
+
+function doclink(id, name, targets_id) {
+ var elt = document.getElementById(id);
+
+ // If we already opened the box, then destroy it.
+ // (This case should never occur, but leave it in just in case.)
+ if (elt.childNodes.length > 1) {
+ elt.removeChild(elt.childNodes.item(0));
+ }
+ else {
+ // The outer box: relative + inline positioning.
+ var box1 = document.createElement("div");
+ box1.style.position = "relative";
+ box1.style.display = "inline";
+ box1.style.top = 0;
+ box1.style.left = 0;
+
+ // A shadow for fun
+ var shadow = document.createElement("div");
+ shadow.style.position = "absolute";
+ shadow.style.left = "-1.3em";
+ shadow.style.top = "-1.3em";
+ shadow.style.background = "#404040";
+
+ // The inner box: absolute positioning.
+ var box2 = document.createElement("div");
+ box2.style.position = "relative";
+ box2.style.border = "1px solid #a0a0a0";
+ box2.style.left = "-.2em";
+ box2.style.top = "-.2em";
+ box2.style.background = "white";
+ box2.style.padding = ".3em .4em .3em .4em";
+ box2.style.fontStyle = "normal";
+ box2.onmouseout=auto_kill_doclink;
+ box2.parentID = id;
+
+ // Get the targets
+ var targets_elt = document.getElementById(targets_id);
+ var targets = targets_elt.getAttribute("targets");
+ var links = "";
+ target_list = targets.split(",");
+ for (var i=0; i<target_list.length; i++) {
+ var target = target_list[i].split("=");
+ links += "<li><a href=\'" + target[1] +
+ "\' style=\'text-decoration:none\'>" +
+ target[0] + "</a></li>";
+ }
+
+ // Put it all together.
+ elt.insertBefore(box1, elt.childNodes.item(0));
+ //box1.appendChild(box2);
+ box1.appendChild(shadow);
+ shadow.appendChild(box2);
+ box2.innerHTML =
+ "Which <b>"+name+"</b> do you want to see documentation for?" +
+ "<ul style=\'margin-bottom: 0;\'>" +
+ links +
+ "<li><a href=\'#\' style=\'text-decoration:none\' " +
+ "onclick=\'kill_doclink(\\""+id+"\\");return false;\'>"+
+ "<i>None of the above</i></a></li></ul>";
+ }
+ return false;
+}
+'''
+
+PYSRC_EXPANDTO_JAVASCRIPT = '''\
+<script type="text/javascript">
+<!--
+expandto(location.href);
+// -->
+</script>
+'''
+
+class PythonSourceColorizer:
+ """
+ A class that renders a python module's source code into HTML
+ pages. These HTML pages are intended to be provided along with
+ the API documentation for a module, in case a user wants to learn
+ more about a particular object by examining its source code.
+ Links are therefore generated from the API documentation to the
+ source code pages, and from the source code pages back into the
+ API documentation.
+
+ The HTML generated by C{PythonSourceColorizer} has several notable
+ features:
+
+ - CSS styles are used to color tokens according to their type.
+ (See L{CSS_CLASSES} for a list of the different token types
+ that are identified).
+
+ - Line numbers are included to the left of each line.
+
+ - The first line of each class and function definition includes
+ a link to the API source documentation for that object.
+
+ - The first line of each class and function definition includes
+ an anchor that can be used to link directly to that class or
+ function.
+
+ - If javascript is enabled, and the page is loaded using the
+ anchor for a class or function (i.e., if the url ends in
+ C{'#I{<name>}'}), then that class or function will automatically
+ be highlighted; and all other classes and function definition
+ blocks will be 'collapsed'. These collapsed blocks can be
+ expanded by clicking on them.
+
+ - Unicode input is supported (including automatic detection
+ of C{'coding:'} declarations).
+
+ """
+ #: A look-up table that is used to determine which CSS class
+ #: should be used to colorize a given token. The following keys
+ #: may be used:
+ #: - Any token name (e.g., C{'STRING'})
+ #: - Any operator token (e.g., C{'='} or C{'@'}).
+ #: - C{'KEYWORD'} -- Python keywords such as C{'for'} and C{'if'}
+ #: - C{'DEFNAME'} -- the name of a class or function at the top
+ #: of its definition statement.
+ #: - C{'BASECLASS'} -- names of base classes at the top of a class
+ #: definition statement.
+ #: - C{'PARAM'} -- function parameters
+ #: - C{'DOCSTRING'} -- docstrings
+ #: - C{'DECORATOR'} -- decorator names
+ #: If no CSS class can be found for a given token, then it won't
+ #: be marked with any CSS class.
+ CSS_CLASSES = {
+ 'NUMBER': 'py-number',
+ 'STRING': 'py-string',
+ 'COMMENT': 'py-comment',
+ 'NAME': 'py-name',
+ 'KEYWORD': 'py-keyword',
+ 'DEFNAME': 'py-def-name',
+ 'BASECLASS': 'py-base-class',
+ 'PARAM': 'py-param',
+ 'DOCSTRING': 'py-docstring',
+ 'DECORATOR': 'py-decorator',
+ 'OP': 'py-op',
+ '@': 'py-decorator',
+ }
+
+ #: HTML code for the beginning of a collapsable function or class
+ #: definition block. The block contains two <div>...</div>
+ #: elements -- a collapsed version and an expanded version -- and
+ #: only one of these elements is visible at any given time. By
+ #: default, all definition blocks are expanded.
+ #:
+ #: This string should be interpolated with the following values::
+ #:     (name, pad, indent, name)
+ #: Where C{name} is the anchor name for the function or class;
+ #: C{pad} is a string of whitespace used to pad the line-number
+ #: column; and C{indent} is a string of whitespace used to indent
+ #: the ellipsis marker in the collapsed version.
+ START_DEF_BLOCK = (
+ '<div id="%s-collapsed" style="display:none;" '
+ 'pad="%s" indent="%s"></div>'
+ '<div id="%s-expanded">')
+
+ #: HTML code for the end of a collapsable function or class
+ #: definition block.
+ END_DEF_BLOCK = '</div>'
+
+ #: A regular expression used to pick out the unicode encoding for
+ #: the source file.
+ UNICODE_CODING_RE = re.compile(r'.*?\n?.*?coding[:=]\s*([-\w.]+)')
+
+ #: A configuration constant, used to determine whether or not to add
+ #: collapsable <div> elements for definition blocks.
+ ADD_DEF_BLOCKS = True
+
+ #: A configuration constant, used to determine whether or not to
+ #: add line numbers.
+ ADD_LINE_NUMBERS = True
+
+ #: A configuration constant, used to determine whether or not to
+ #: add tooltips for linked names.
+ ADD_TOOLTIPS = True
+
+ #: If true, then try to guess which target is appropriate for
+ #: linked names; if false, then always open a div asking the
+ #: user which one they want.
+ GUESS_LINK_TARGETS = False
+
+ def __init__(self, module_filename, module_name,
+ docindex=None, url_func=None, name_to_docs=None,
+ tab_width=8):
+ """
+ Create a new HTML colorizer for the specified module.
+
+ @param module_filename: The name of the file containing the
+ module; its text will be loaded from this file.
+ @param module_name: The dotted name of the module; this will
+ be used to create links back into the API source
+ documentation.
+ """
+ # Get the source version, if possible.
+ try: module_filename = py_src_filename(module_filename)
+ except: pass
+
+ #: The filename of the module we're colorizing.
+ self.module_filename = module_filename
+
+ #: The dotted name of the module we're colorizing.
+ self.module_name = module_name
+
+ #: A docindex, used to create href links from identifiers to
+ #: the API documentation for their values.
+ self.docindex = docindex
+
+ #: A mapping from short names to lists of ValueDoc, used to
+ #: decide which values an identifier might map to when creating
+ #: href links from identifiers to the API docs for their values.
+ self.name_to_docs = name_to_docs
+
+ #: A function that maps APIDoc -> URL, used to create href
+ #: links from identifiers to the API documentation for their
+ #: values.
+ self.url_func = url_func
+
+ #: The index in C{text} of the last character of the last
+ #: token we've processed.
+ self.pos = 0
+
+ #: A list that maps line numbers to character offsets in
+ #: C{text}. In particular, line C{M{i}} begins at character
+ #: C{line_offsets[i]} in C{text}. Since line numbers begin at
+ #: 1, the first element of C{line_offsets} is C{None}.
+ self.line_offsets = []
+
+ #: A list of C{(toktype, toktext)} for all tokens on the
+ #: logical line that we are currently processing. Once a
+ #: complete line of tokens has been collected in C{cur_line},
+ #: it is sent to L{handle_line} for processing.
+ self.cur_line = []
+
+ #: A list of the names of the class or functions that include
+ #: the current block. C{context} has one element for each
+ #: level of indentation; C{context[i]} is the name of the class
+ #: or function defined by the C{i}th level of indentation, or
+ #: C{None} if that level of indentation doesn't correspond to a
+ #: class or function definition.
+ self.context = []
+
+ #: A list, corresponding one-to-one with L{self.context},
+ #: indicating the type of each entry. Each element of
+ #: C{context_types} is one of: C{'func'}, C{'class'}, C{None}.
+ self.context_types = []
+
+ #: A list of indentation strings for each of the current
+ #: block's indents. I.e., the current total indentation can
+ #: be found by taking C{''.join(self.indents)}.
+ self.indents = []
+
+ #: The line number of the line we're currently processing.
+ self.lineno = 0
+
+ #: The name of the class or function whose definition started
+ #: on the previous logical line, or C{None} if the previous
+ #: logical line was not a class or function definition.
+ self.def_name = None
+
+ #: The type of the class or function whose definition started
+ #: on the previous logical line, or C{None} if the previous
+ #: logical line was not a class or function definition.
+ #: Can be C{'func'}, C{'class'}, C{None}.
+ self.def_type = None
+
+ #: The number of spaces to replace each tab in source code with
+ self.tab_width = tab_width
+
+
+ def find_line_offsets(self):
+ """
+ Construct the L{line_offsets} table from C{self.text}.
+ """
+ # line 0 doesn't exist; line 1 starts at char offset 0.
+ self.line_offsets = [None, 0]
+ # Find all newlines in `text`, and add an entry to
+ # line_offsets for each one.
+ pos = self.text.find('\n')
+ while pos != -1:
+ self.line_offsets.append(pos+1)
+ pos = self.text.find('\n', pos+1)
+ # Add a final entry, marking the end of the string.
+ self.line_offsets.append(len(self.text))
+
+ def lineno_to_html(self):
+ template = '%%%ds' % self.linenum_size
+ n = template % self.lineno
+ return '<a name="L%s"></a><tt class="py-lineno">%s</tt>' \
+ % (self.lineno, n)
+
+ def colorize(self):
+ """
+ Return an HTML string that renders the source code for the
+ module that was specified in the constructor.
+ """
+ # Initialize all our state variables
+ self.pos = 0
+ self.cur_line = []
+ self.context = []
+ self.context_types = []
+ self.indents = []
+ self.lineno = 1
+ self.def_name = None
+ self.def_type = None
+ self.has_decorators = False
+
+ # Cache, used so we only need to list the target elements once
+ # for each variable.
+ self.doclink_targets_cache = {}
+
+ # Load the module's text.
+ self.text = open(self.module_filename).read()
+ self.text = self.text.expandtabs(self.tab_width).rstrip()+'\n'
+
+ # Construct the line_offsets table.
+ self.find_line_offsets()
+
+ num_lines = self.text.count('\n')+1
+ self.linenum_size = len(`num_lines+1`)
+
+ # Call the tokenizer, and send tokens to our `tokeneater()`
+ # method. If anything goes wrong, then fall-back to using
+ # the input text as-is (with no colorization).
+ try:
+ output = StringIO()
+ self.out = output.write
+ tokenize.tokenize(StringIO(self.text).readline, self.tokeneater)
+ html = output.getvalue()
+ if self.has_decorators:
+ html = self._FIX_DECORATOR_RE.sub(r'\2\1', html)
+ except tokenize.TokenError, ex:
+ html = self.text
+
+ # Check for a unicode encoding declaration.
+ m = self.UNICODE_CODING_RE.match(self.text)
+ if m: coding = m.group(1)
+ else: coding = 'iso-8859-1'
+
+ # Decode the html string into unicode, and then encode it back
+ # into ascii, replacing any non-ascii characters with xml
+ # character references.
+ try:
+ html = html.decode(coding).encode('ascii', 'xmlcharrefreplace')
+ except LookupError:
+ coding = 'iso-8859-1'
+ html = html.decode(coding).encode('ascii', 'xmlcharrefreplace')
+
+ # Call expandto.
+ html += PYSRC_EXPANDTO_JAVASCRIPT
+
+ return html
+
+ def tokeneater(self, toktype, toktext, (srow,scol), (erow,ecol), line):
+ """
+ A callback function used by C{tokenize.tokenize} to handle
+ each token in the module. C{tokeneater} collects tokens into
+ the C{self.cur_line} list until a complete logical line has
+ been formed; and then calls L{handle_line} to process that line.
+ """
+ # If we encounter any errors, then just give up.
+ if toktype == token.ERRORTOKEN:
+ raise tokenize.TokenError, toktype
+
+ # Did we skip any whitespace? If so, add a pseudotoken
+ # for it, with toktype=None. (Note -- this skipped string
+ # might also contain continuation slashes; but I won't bother
+ # to colorize them.)
+ startpos = self.line_offsets[srow] + scol
+ if startpos > self.pos:
+ skipped = self.text[self.pos:startpos]
+ self.cur_line.append( (None, skipped) )
+
+ # Update our position.
+ self.pos = startpos + len(toktext)
+
+ # Update our current line.
+ self.cur_line.append( (toktype, toktext) )
+
+ # When we reach the end of a line, process it.
+ if toktype == token.NEWLINE or toktype == token.ENDMARKER:
+ self.handle_line(self.cur_line)
+ self.cur_line = []
+
+ _next_uid = 0
+
+ # [xx] note -- this works with byte strings, not unicode strings!
+ # I may change it to use unicode eventually, but when I do it
+ # needs to be changed all at once.
+ def handle_line(self, line):
+ """
+ Render a single logical line from the module, and write the
+ generated HTML to C{self.out}.
+
+ @param line: A single logical line, encoded as a list of
+ C{(toktype,toktext)} pairs corresponding to the tokens in
+ the line.
+ """
+ # def_name is the name of the function or class defined by
+ # this line; or None if no function or class is defined.
+ def_name = None
+
+ # def_type is the type of the function or class defined by
+ # this line; or None if no function or class is defined.
+ def_type = None
+
+ # does this line start a class/func def?
+ starting_def_block = False
+
+ in_base_list = False
+ in_param_list = False
+ in_param_default = 0
+ at_module_top = (self.lineno == 1)
+
+ ended_def_blocks = 0
+
+ # The html output.
+ if self.ADD_LINE_NUMBERS:
+ s = self.lineno_to_html()
+ self.lineno += 1
+ else:
+ s = ''
+ s += ' <tt class="py-line">'
+
+ # Loop through each token, and colorize it appropriately.
+ for i, (toktype, toktext) in enumerate(line):
+ if type(s) is not str:
+ if type(s) is unicode:
+ log.error('While colorizing %s -- got unexpected '
+ 'unicode string' % self.module_name)
+ s = s.encode('ascii', 'xmlcharrefreplace')
+ else:
+ raise ValueError('Unexpected value for s -- %s' %
+ type(s).__name__)
+
+ # For each token, determine its css class and whether it
+ # should link to a url.
+ css_class = None
+ url = None
+ tooltip = None
+ onclick = uid = targets = None # these 3 are used together.
+
+ # Is this token the class name in a class definition? If
+ # so, then make it a link back into the API docs.
+ if i>=2 and line[i-2][1] == 'class':
+ in_base_list = True
+ css_class = self.CSS_CLASSES['DEFNAME']
+ def_name = toktext
+ def_type = 'class'
+ if 'func' not in self.context_types:
+ cls_name = self.context_name(def_name)
+ url = self.name2url(cls_name)
+ s = self.mark_def(s, cls_name)
+ starting_def_block = True
+
+ # Is this token the function name in a function def? If
+ # so, then make it a link back into the API docs.
+ elif i>=2 and line[i-2][1] == 'def':
+ in_param_list = True
+ css_class = self.CSS_CLASSES['DEFNAME']
+ def_name = toktext
+ def_type = 'func'
+ if 'func' not in self.context_types:
+ cls_name = self.context_name()
+ func_name = self.context_name(def_name)
+ url = self.name2url(cls_name, def_name)
+ s = self.mark_def(s, func_name)
+ starting_def_block = True
+
+ # For each indent, update the indents list (which we use
+ # to keep track of indentation strings) and the context
+ # list. If this indent is the start of a class or
+ # function def block, then self.def_name will be its name;
+ # otherwise, it will be None.
+ elif toktype == token.INDENT:
+ self.indents.append(toktext)
+ self.context.append(self.def_name)
+ self.context_types.append(self.def_type)
+
+ # When we dedent, pop the last elements off the indents
+ # list and the context list. If the last context element
+ # is a name, then we're ending a class or function def
+ # block; so write an end-div tag.
+ elif toktype == token.DEDENT:
+ self.indents.pop()
+ self.context_types.pop()
+ if self.context.pop():
+ ended_def_blocks += 1
+
+ # If this token contains whitespace, then don't bother to
+ # give it a css tag.
+ elif toktype in (None, tokenize.NL, token.NEWLINE,
+ token.ENDMARKER):
+ css_class = None
+
+ # Check if the token is a keyword.
+ elif toktype == token.NAME and keyword.iskeyword(toktext):
+ css_class = self.CSS_CLASSES['KEYWORD']
+
+ elif in_base_list and toktype == token.NAME:
+ css_class = self.CSS_CLASSES['BASECLASS']
+
+ elif (in_param_list and toktype == token.NAME and
+ not in_param_default):
+ css_class = self.CSS_CLASSES['PARAM']
+
+ # Class/function docstring.
+ elif (self.def_name and line[i-1][0] == token.INDENT and
+ self.is_docstring(line, i)):
+ css_class = self.CSS_CLASSES['DOCSTRING']
+
+ # Module docstring.
+ elif at_module_top and self.is_docstring(line, i):
+ css_class = self.CSS_CLASSES['DOCSTRING']
+
+ # Is this token a decorator?
+ elif (toktype == token.NAME and
+ ((i>0 and line[i-1][1]=='@') or
+ (i>1 and line[i-1][0]==None and line[i-2][1] == '@'))):
+ css_class = self.CSS_CLASSES['DECORATOR']
+ self.has_decorators = True
+
+ # If it's a name, try to link it.
+ elif toktype == token.NAME:
+ css_class = self.CSS_CLASSES['NAME']
+ # If we have a variable named `toktext` in the current
+ # context, then link to that. Note that if we're inside
+ # a function, then that function is our context, not
+ # the namespace that contains it. [xx] this isn't always
+ # the right thing to do.
+ if (self.GUESS_LINK_TARGETS and self.docindex is not None
+ and self.url_func is not None):
+ context = [n for n in self.context if n is not None]
+ container = self.docindex.get_vardoc(
+ DottedName(self.module_name, *context))
+ if isinstance(container, NamespaceDoc):
+ doc = container.variables.get(toktext)
+ if doc is not None:
+ url = self.url_func(doc)
+ tooltip = str(doc.canonical_name)
+ # Otherwise, check the name_to_docs index to see what
+ # else this name might refer to.
+ if (url is None and self.name_to_docs is not None
+ and self.url_func is not None):
+ docs = self.name_to_docs.get(toktext)
+ if docs:
+ tooltip='\n'.join([str(d.canonical_name)
+ for d in docs])
+ if len(docs) == 1 and self.GUESS_LINK_TARGETS:
+ url = self.url_func(docs[0])
+ else:
+ uid, onclick, targets = self.doclink(toktext, docs)
+
+ # For all other tokens, look up the CSS class to use
+ # based on the token's type.
+ else:
+ if toktype == token.OP and toktext in self.CSS_CLASSES:
+ css_class = self.CSS_CLASSES[toktext]
+ elif token.tok_name[toktype] in self.CSS_CLASSES:
+ css_class = self.CSS_CLASSES[token.tok_name[toktype]]
+ else:
+ css_class = None
+
+ # Update our status.
+ if toktext == ':':
+ in_base_list = False
+ in_param_list = False
+ if toktext == '=' and in_param_list:
+ in_param_default = 1 # bracket-depth counter, not a bool
+ if in_param_default:
+ if toktext in ('(','[','{'): in_param_default += 1
+ if toktext in (')',']','}'): in_param_default -= 1
+ if toktext == ',' and in_param_default == 1:
+ in_param_default = 0
+
+ # Write this token, with appropriate colorization.
+ if tooltip and self.ADD_TOOLTIPS:
+ tooltip_html = ' title="%s"' % tooltip
+ else: tooltip_html = ''
+ if css_class: css_class_html = ' class="%s"' % css_class
+ else: css_class_html = ''
+ if onclick:
+ if targets: targets_html = ' targets="%s"' % targets
+ else: targets_html = ''
+ s += ('<tt id="%s"%s%s><a%s%s href="#" onclick="%s">' %
+ (uid, css_class_html, targets_html, tooltip_html,
+ css_class_html, onclick))
+ elif url:
+ if isinstance(url, unicode):
+ url = url.encode('ascii', 'xmlcharrefreplace')
+ s += ('<a%s%s href="%s">' %
+ (tooltip_html, css_class_html, url))
+ elif css_class_html or tooltip_html:
+ s += '<tt%s%s>' % (tooltip_html, css_class_html)
+ if i == len(line)-1:
+ s += ' </tt>' # Closes <tt class="py-line">
+ s += cgi.escape(toktext)
+ else:
+ try:
+ s += self.add_line_numbers(cgi.escape(toktext), css_class)
+ except Exception, e:
+ print (toktext, css_class, toktext.encode('ascii'))
+ raise
+
+ if onclick: s += "</a></tt>"
+ elif url: s += '</a>'
+ elif css_class_html or tooltip_html: s += '</tt>'
+
+ if self.ADD_DEF_BLOCKS:
+ for i in range(ended_def_blocks):
+ self.out(self.END_DEF_BLOCK)
+
+ # Strip any empty <tt>s.
+ s = re.sub(r'<tt class="\w+"></tt>', '', s)
+
+ # Write the line.
+ self.out(s)
+
+ if def_name and starting_def_block:
+ self.out('</div>')
+
+ # Add div's if we're starting a def block.
+ if (self.ADD_DEF_BLOCKS and def_name and starting_def_block and
+ (line[-2][1] == ':')):
+ indentation = (''.join(self.indents)+' ').replace(' ', '+')
+ linenum_padding = '+'*self.linenum_size
+ name=self.context_name(def_name)
+ self.out(self.START_DEF_BLOCK % (name, linenum_padding,
+ indentation, name))
+
+ self.def_name = def_name
+ self.def_type = def_type
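The `in_param_default` bookkeeping in `handle_line` is a small bracket-depth trick: entering a default value sets the depth to 1, nested brackets raise and lower it, and a comma back at depth 1 ends the default so the next NAME can be styled as a parameter again. A standalone sketch of the same counter (function and token names here are illustrative, not epydoc's):

```python
def split_defaults(tokens):
    # Collect parameter names, skipping default-value expressions
    # with the same depth counter handle_line uses.
    params, depth = [], 0
    for tok in tokens:
        if depth:
            if tok in '([{':
                depth += 1
            elif tok in ')]}':
                depth -= 1
            elif tok == ',' and depth == 1:
                depth = 0   # default value ended; back to parameters
        elif tok == '=':
            depth = 1       # entering a default value
        elif tok.isidentifier():
            params.append(tok)
    return params

# 'None' is part of b's default, so it is not treated as a parameter.
names = split_defaults(['a', ',', 'b', '=', 'None', ',', 'c'])
```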
+
+ def context_name(self, extra=None):
+ pieces = [n for n in self.context if n is not None]
+ if extra is not None: pieces.append(extra)
+ return '.'.join(pieces)
+
+ def doclink(self, name, docs):
+ uid = 'link-%s' % self._next_uid
+ self._next_uid += 1
+ context = [n for n in self.context if n is not None]
+ container = DottedName(self.module_name, *context)
+ #else:
+ # container = None
+ targets = ','.join(['%s=%s' % (str(self.doc_descr(d,container)),
+ str(self.url_func(d)))
+ for d in docs])
+
+ if targets in self.doclink_targets_cache:
+ onclick = ("return doclink('%s', '%s', '%s');" %
+ (uid, name, self.doclink_targets_cache[targets]))
+ return uid, onclick, None
+ else:
+ self.doclink_targets_cache[targets] = uid
+ onclick = ("return doclink('%s', '%s', '%s');" %
+ (uid, name, uid))
+ return uid, onclick, targets
+
+ def doc_descr(self, doc, context):
+ name = str(doc.canonical_name)
+ descr = '%s %s' % (self.doc_kind(doc), name)
+ if isinstance(doc, RoutineDoc):
+ descr += '()'
+ return descr
+
+ # [XX] copied straight from html.py; this should be consolidated,
+ # probably into apidoc.
+ def doc_kind(self, doc):
+ if isinstance(doc, ModuleDoc) and doc.is_package == True:
+ return 'Package'
+ elif (isinstance(doc, ModuleDoc) and
+ doc.canonical_name[0].startswith('script')):
+ return 'Script'
+ elif isinstance(doc, ModuleDoc):
+ return 'Module'
+ elif isinstance(doc, ClassDoc):
+ return 'Class'
+ elif isinstance(doc, ClassMethodDoc):
+ return 'Class Method'
+ elif isinstance(doc, StaticMethodDoc):
+ return 'Static Method'
+ elif isinstance(doc, RoutineDoc):
+ if (self.docindex is not None and
+ isinstance(self.docindex.container(doc), ClassDoc)):
+ return 'Method'
+ else:
+ return 'Function'
+ else:
+ return 'Variable'
+
+ def mark_def(self, s, name):
+ replacement = ('<a name="%s"></a><div id="%s-def">\\1'
+ '<a class="py-toggle" href="#" id="%s-toggle" '
+ 'onclick="return toggle(\'%s\');">-</a>\\2' %
+ (name, name, name, name))
+ return re.sub(r'(.*) (<tt class="py-line">.*)\Z', replacement, s)
+
+ def is_docstring(self, line, i):
+ if line[i][0] != token.STRING: return False
+ for toktype, toktext in line[i:]:
+ if toktype not in (token.NEWLINE, tokenize.COMMENT,
+ tokenize.NL, token.STRING, None):
+ return False
+ return True
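`is_docstring` works purely on the token stream: a STRING token counts as a docstring only if everything after it on the logical line is layout, a comment, or another string. A minimal re-creation of that rule with the stdlib tokenizer (names are illustrative, not epydoc's API):

```python
import io
import token
import tokenize

# Token types that may legally follow a docstring on its line.
OK_TYPES = (token.NEWLINE, tokenize.COMMENT, tokenize.NL, token.STRING)

def string_is_docstring(toks, idx):
    # Mirror of is_docstring's loop, limited to one logical line.
    if toks[idx].type != token.STRING:
        return False
    line = []
    for t in toks[idx:]:
        line.append(t)
        if t.type == token.NEWLINE:
            break
    return all(t.type in OK_TYPES for t in line)

src = 'def f():\n    """doc"""\n'
toks = list(tokenize.generate_tokens(io.StringIO(src).readline))
idx = next(i for i, t in enumerate(toks) if t.type == token.STRING)
```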
+
+ def add_line_numbers(self, s, css_class):
+ result = ''
+ start = 0
+ end = s.find('\n')+1
+ while end:
+ result += s[start:end-1]
+ if css_class: result += '</tt>'
+ result += ' </tt>' # py-line
+ result += '\n'
+ if self.ADD_LINE_NUMBERS:
+ result += self.lineno_to_html()
+ result += ' <tt class="py-line">'
+ if css_class: result += '<tt class="%s">' % css_class
+ start = end
+ end = s.find('\n', end)+1
+ self.lineno += 1
+ result += s[start:]
+ return result
+
+ def name2url(self, class_name, func_name=None):
+ if class_name:
+ class_name = '%s.%s' % (self.module_name, class_name)
+ if func_name:
+ return '%s-class.html#%s' % (class_name, func_name)
+ else:
+ return '%s-class.html' % class_name
+ else:
+ return '%s-module.html#%s' % (self.module_name, func_name)
+
+ #: A regexp used to move the <div> that marks the beginning of a
+ #: function or method to just before the decorators.
+ _FIX_DECORATOR_RE = re.compile(
+ r'((?:^<a name="L\d+"></a><tt class="py-lineno">\s*\d+</tt>'
+ r'\s*<tt class="py-line">(?:<tt class="py-decorator">.*|\s*</tt>|'
+ r'\s*<tt class="py-comment">.*)\n)+)'
+ r'(<a name="\w+"></a><div id="\w+-def">)', re.MULTILINE)
+
+_HDR = '''\
+<?xml version="1.0" encoding="ascii"?>
+ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "DTD/xhtml1-transitional.dtd">
+ <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+ <head>
+ <title>$title$</title>
+ <link rel="stylesheet" href="epydoc.css" type="text/css" />
+ <script type="text/javascript" src="epydoc.js"></script>
+ </head>
+
+ <body bgcolor="white" text="black" link="blue" vlink="#204080"
+ alink="#204080">
+'''
+_FOOT = '</body></html>'
+if __name__=='__main__':
+ #s = PythonSourceColorizer('../apidoc.py', 'epydoc.apidoc').colorize()
+ s = PythonSourceColorizer('/tmp/fo.py', 'epydoc.apidoc').colorize()
+ #print s
+ import codecs
+ f = codecs.open('/home/edloper/public_html/color3.html', 'w', 'ascii', 'xmlcharrefreplace')
+ f.write(_HDR+'<pre id="py-src-top" class="py-src">'+s+'</pre>'+_FOOT)
+ f.close()
diff --git a/python/helpers/epydoc/docwriter/html_css.py b/python/helpers/epydoc/docwriter/html_css.py
new file mode 100644
index 0000000..6b3746b
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/html_css.py
@@ -0,0 +1,550 @@
+#
+# epydoc.css: default epydoc CSS stylesheets
+# Edward Loper
+#
+# Created [01/30/01 05:18 PM]
+# $Id: html_css.py 1634 2007-09-24 15:58:38Z dvarrazzo $
+#
+
+"""
+Predefined CSS stylesheets for the HTML outputter (L{epydoc.docwriter.html}).
+
+@type STYLESHEETS: C{dictionary} from C{string} to C{(string, string)}
+@var STYLESHEETS: A dictionary mapping from stylesheet names to CSS
+ stylesheets and descriptions. A single stylesheet may have
+ multiple names. Currently, the following stylesheets are defined:
+ - C{default}: The default stylesheet (synonym for C{white}).
+ - C{white}: Black on white, with blue highlights (similar to
+ javadoc).
+ - C{blue}: Black on steel blue.
+ - C{green}: Black on green.
+ - C{black}: White on black, with blue highlights
+ - C{grayscale}: Grayscale black on white.
+ - C{none}: An empty stylesheet.
+"""
+__docformat__ = 'epytext en'
+
+import re
+
+############################################################
+## Basic stylesheets
+############################################################
+
+# [xx] Should I do something like:
+#
+# @import url(html4css1.css);
+#
+# But then where do I get that css file from? Hm.
+# Also, in principle I'm mangling classes, but it looks like I'm
+# failing.
+#
+
+# Black on white, with blue highlights. This is similar to how
+# javadoc looks.
+TEMPLATE = """
+
+/* Epydoc CSS Stylesheet
+ *
+ * This stylesheet can be used to customize the appearance of epydoc's
+ * HTML output.
+ *
+ */
+
+/* Default Colors & Styles
+ * - Set the default foreground & background color with 'body'; and
+ * link colors with 'a:link' and 'a:visited'.
+ * - Use bold for definition list terms.
+ * - The heading styles defined here are used for headings *within*
+ * docstring descriptions. All headings used by epydoc itself use
+ * either class='epydoc' or class='toc' (CSS styles for both
+ * defined below).
+ */
+body { background: $body_bg; color: $body_fg; }
+p { margin-top: 0.5em; margin-bottom: 0.5em; }
+a:link { color: $body_link; }
+a:visited { color: $body_visited_link; }
+dt { font-weight: bold; }
+h1 { font-size: +140%; font-style: italic;
+ font-weight: bold; }
+h2 { font-size: +125%; font-style: italic;
+ font-weight: bold; }
+h3 { font-size: +110%; font-style: italic;
+ font-weight: normal; }
+code { font-size: 100%; }
+/* N.B.: class, not pseudoclass */
+a.link { font-family: monospace; }
+
+/* Page Header & Footer
+ * - The standard page header consists of a navigation bar (with
+ * pointers to standard pages such as 'home' and 'trees'); a
+ * breadcrumbs list, which can be used to navigate to containing
+ * classes or modules; options links, to show/hide private
+ * variables and to show/hide frames; and a page title (using
+ * <h1>). The page title may be followed by a link to the
+ * corresponding source code (using 'span.codelink').
+ * - The footer consists of a navigation bar, a timestamp, and a
+ * pointer to epydoc's homepage.
+ */
+h1.epydoc { margin: 0; font-size: +140%; font-weight: bold; }
+h2.epydoc { font-size: +130%; font-weight: bold; }
+h3.epydoc { font-size: +115%; font-weight: bold;
+ margin-top: 0.2em; }
+td h3.epydoc { font-size: +115%; font-weight: bold;
+ margin-bottom: 0; }
+table.navbar { background: $navbar_bg; color: $navbar_fg;
+ border: $navbar_border; }
+table.navbar table { color: $navbar_fg; }
+th.navbar-select { background: $navbar_select_bg;
+ color: $navbar_select_fg; }
+table.navbar a { text-decoration: none; }
+table.navbar a:link { color: $navbar_link; }
+table.navbar a:visited { color: $navbar_visited_link; }
+span.breadcrumbs { font-size: 85%; font-weight: bold; }
+span.options { font-size: 70%; }
+span.codelink { font-size: 85%; }
+td.footer { font-size: 85%; }
+
+/* Table Headers
+ * - Each summary table and details section begins with a 'header'
+ * row. This row contains a section title (marked by
+ * 'span.table-header') as well as a show/hide private link
+ * (marked by 'span.options', defined above).
+ * - Summary tables that contain user-defined groups mark those
+ * groups using 'group header' rows.
+ */
+td.table-header { background: $table_hdr_bg; color: $table_hdr_fg;
+ border: $table_border; }
+td.table-header table { color: $table_hdr_fg; }
+td.table-header table a:link { color: $table_hdr_link; }
+td.table-header table a:visited { color: $table_hdr_visited_link; }
+span.table-header { font-size: 120%; font-weight: bold; }
+th.group-header { background: $group_hdr_bg; color: $group_hdr_fg;
+ text-align: left; font-style: italic;
+ font-size: 115%;
+ border: $table_border; }
+
+/* Summary Tables (functions, variables, etc)
+ * - Each object is described by a single row of the table with
+ * two cells. The left cell gives the object's type, and is
+ * marked with 'code.summary-type'. The right cell gives the
+ * object's name and a summary description.
+ * - CSS styles for the table's header and group headers are
+ * defined above, under 'Table Headers'
+ */
+table.summary { border-collapse: collapse;
+ background: $table_bg; color: $table_fg;
+ border: $table_border;
+ margin-bottom: 0.5em; }
+td.summary { border: $table_border; }
+code.summary-type { font-size: 85%; }
+table.summary a:link { color: $table_link; }
+table.summary a:visited { color: $table_visited_link; }
+
+
+/* Details Tables (functions, variables, etc)
+ * - Each object is described in its own div.
+ * - A single-row summary table w/ table-header is used as
+ * a header for each details section (CSS style for table-header
+ * is defined above, under 'Table Headers').
+ */
+table.details { border-collapse: collapse;
+ background: $table_bg; color: $table_fg;
+ border: $table_border;
+ margin: .2em 0 0 0; }
+table.details table { color: $table_fg; }
+table.details a:link { color: $table_link; }
+table.details a:visited { color: $table_visited_link; }
+
+/* Fields */
+dl.fields { margin-left: 2em; margin-top: 1em;
+ margin-bottom: 1em; }
+dl.fields dd ul { margin-left: 0em; padding-left: 0em; }
+dl.fields dd ul li ul { margin-left: 2em; padding-left: 0em; }
+div.fields { margin-left: 2em; }
+div.fields p { margin-bottom: 0.5em; }
+
+/* Index tables (identifier index, term index, etc)
+ * - link-index is used for indices containing lists of links
+ * (namely, the identifier index & term index).
+ * - index-where is used in link indices for the text indicating
+ * the container/source for each link.
+ * - metadata-index is used for indices containing metadata
+ * extracted from fields (namely, the bug index & todo index).
+ */
+table.link-index { border-collapse: collapse;
+ background: $table_bg; color: $table_fg;
+ border: $table_border; }
+td.link-index { border-width: 0px; }
+table.link-index a:link { color: $table_link; }
+table.link-index a:visited { color: $table_visited_link; }
+span.index-where { font-size: 70%; }
+table.metadata-index { border-collapse: collapse;
+ background: $table_bg; color: $table_fg;
+ border: $table_border;
+ margin: .2em 0 0 0; }
+td.metadata-index { border-width: 1px; border-style: solid; }
+table.metadata-index a:link { color: $table_link; }
+table.metadata-index a:visited { color: $table_visited_link; }
+
+/* Function signatures
+ * - sig* is used for the signature in the details section.
+ * - .summary-sig* is used for the signature in the summary
+ * table, and when listing property accessor functions.
+ * */
+.sig-name { color: $sig_name; }
+.sig-arg { color: $sig_arg; }
+.sig-default { color: $sig_default; }
+.summary-sig { font-family: monospace; }
+.summary-sig-name { color: $summary_sig_name; font-weight: bold; }
+table.summary a.summary-sig-name:link
+ { color: $summary_sig_name; font-weight: bold; }
+table.summary a.summary-sig-name:visited
+ { color: $summary_sig_name; font-weight: bold; }
+.summary-sig-arg { color: $summary_sig_arg; }
+.summary-sig-default { color: $summary_sig_default; }
+
+/* Subclass list
+ */
+ul.subclass-list { display: inline; }
+ul.subclass-list li { display: inline; }
+
+/* To render variables, classes etc. like functions */
+table.summary .summary-name { color: $summary_sig_name; font-weight: bold;
+ font-family: monospace; }
+table.summary
+ a.summary-name:link { color: $summary_sig_name; font-weight: bold;
+ font-family: monospace; }
+table.summary
+ a.summary-name:visited { color: $summary_sig_name; font-weight: bold;
+ font-family: monospace; }
+
+/* Variable values
+ * - In the 'variable details' sections, each variable's value is
+ * listed in a 'pre.variable' box. The width of this box is
+ * restricted to 80 chars; if the value's repr is longer than
+ * this it will be wrapped, using a backslash marked with
+ * class 'variable-linewrap'. If the value's repr is longer
+ * than 3 lines, the rest will be elided; and an ellipsis
+ * marker ('...' marked with 'variable-ellipsis') will be used.
+ * - If the value is a string, its quote marks will be marked
+ * with 'variable-quote'.
+ * - If the variable is a regexp, it is syntax-highlighted using
+ * the re* CSS classes.
+ */
+pre.variable { padding: .5em; margin: 0;
+ background: $variable_bg; color: $variable_fg;
+ border: $variable_border; }
+.variable-linewrap { color: $variable_linewrap; font-weight: bold; }
+.variable-ellipsis { color: $variable_ellipsis; font-weight: bold; }
+.variable-quote { color: $variable_quote; font-weight: bold; }
+.variable-group { color: $variable_group; font-weight: bold; }
+.variable-op { color: $variable_op; font-weight: bold; }
+.variable-string { color: $variable_string; }
+.variable-unknown { color: $variable_unknown; font-weight: bold; }
+.re { color: $re; }
+.re-char { color: $re_char; }
+.re-op { color: $re_op; }
+.re-group { color: $re_group; }
+.re-ref { color: $re_ref; }
+
+/* Base tree
+ * - Used by class pages to display the base class hierarchy.
+ */
+pre.base-tree { font-size: 80%; margin: 0; }
+
+/* Frames-based table of contents headers
+ * - Consists of two frames: one for selecting modules; and
+ * the other listing the contents of the selected module.
+ * - h1.toc is used for each frame's heading
+ * - h2.toc is used for subheadings within each frame.
+ */
+h1.toc { text-align: center; font-size: 105%;
+ margin: 0; font-weight: bold;
+ padding: 0; }
+h2.toc { font-size: 100%; font-weight: bold;
+ margin: 0.5em 0 0 -0.3em; }
+
+/* Syntax Highlighting for Source Code
+ * - doctest examples are displayed in a 'pre.py-doctest' block.
+ * If the example is in a details table entry, then it will use
+ * the colors specified by the 'table pre.py-doctest' line.
+ * - Source code listings are displayed in a 'pre.py-src' block.
+ * Each line is marked with 'span.py-line' (used to draw a line
+ * down the left margin, separating the code from the line
+ * numbers). Line numbers are displayed with 'span.py-lineno'.
+ * The expand/collapse block toggle button is displayed with
+ * 'a.py-toggle' (Note: the CSS style for 'a.py-toggle' should not
+ * modify the font size of the text.)
+ * - If a source code page is opened with an anchor, then the
+ * corresponding code block will be highlighted. The code
+ * block's header is highlighted with 'py-highlight-hdr'; and
+ * the code block's body is highlighted with 'py-highlight'.
+ * - The remaining py-* classes are used to perform syntax
+ * highlighting (py-string for string literals, py-name for names,
+ * etc.)
+ */
+pre.py-doctest { padding: .5em; margin: 1em;
+ background: $doctest_bg; color: $doctest_fg;
+ border: $doctest_border; }
+table pre.py-doctest { background: $doctest_in_table_bg;
+ color: $doctest_in_table_fg; }
+pre.py-src { border: $pysrc_border;
+ background: $pysrc_bg; color: $pysrc_fg; }
+.py-line { border-left: $pysrc_sep_border;
+ margin-left: .2em; padding-left: .4em; }
+.py-lineno { font-style: italic; font-size: 90%;
+ padding-left: .5em; }
+a.py-toggle { text-decoration: none; }
+div.py-highlight-hdr { border-top: $pysrc_border;
+ border-bottom: $pysrc_border;
+ background: $pysrc_highlight_hdr_bg; }
+div.py-highlight { border-bottom: $pysrc_border;
+ background: $pysrc_highlight_bg; }
+.py-prompt { color: $py_prompt; font-weight: bold;}
+.py-more { color: $py_more; font-weight: bold;}
+.py-string { color: $py_string; }
+.py-comment { color: $py_comment; }
+.py-keyword { color: $py_keyword; }
+.py-output { color: $py_output; }
+.py-name { color: $py_name; }
+.py-name:link { color: $py_name !important; }
+.py-name:visited { color: $py_name !important; }
+.py-number { color: $py_number; }
+.py-defname { color: $py_def_name; font-weight: bold; }
+.py-def-name { color: $py_def_name; font-weight: bold; }
+.py-base-class { color: $py_base_class; }
+.py-param { color: $py_param; }
+.py-docstring { color: $py_docstring; }
+.py-decorator { color: $py_decorator; }
+/* Use this if you don't want links to names underlined: */
+/*a.py-name { text-decoration: none; }*/
+
+/* Graphs & Diagrams
+ * - These CSS styles are used for graphs & diagrams generated using
+ * Graphviz dot. 'img.graph-without-title' is used for bare
+ * diagrams (to remove the border created by making the image
+ * clickable).
+ */
+img.graph-without-title { border: none; }
+img.graph-with-title { border: $graph_border; }
+span.graph-title { font-weight: bold; }
+span.graph-caption { }
+
+/* General-purpose classes
+ * - 'p.indent-wrapped-lines' defines a paragraph whose first line
+ * is not indented, but whose subsequent lines are.
+ * - The 'nomargin-top' class is used to remove the top margin (e.g.
+ * from lists). The 'nomargin' class is used to remove both the
+ * top and bottom margin (but not the left or right margin --
+ * for lists, that would cause the bullets to disappear.)
+ */
+p.indent-wrapped-lines { padding: 0 0 0 7em; text-indent: -7em;
+ margin: 0; }
+.nomargin-top { margin-top: 0; }
+.nomargin { margin-top: 0; margin-bottom: 0; }
+
+/* HTML Log */
+div.log-block { padding: 0; margin: .5em 0 .5em 0;
+ background: $log_bg; color: $log_fg;
+ border: $log_border; }
+div.log-error { padding: .1em .3em .1em .3em; margin: 4px;
+ background: $log_error_bg; color: $log_error_fg;
+ border: $log_error_border; }
+div.log-warning { padding: .1em .3em .1em .3em; margin: 4px;
+ background: $log_warn_bg; color: $log_warn_fg;
+ border: $log_warn_border; }
+div.log-info { padding: .1em .3em .1em .3em; margin: 4px;
+ background: $log_info_bg; color: $log_info_fg;
+ border: $log_info_border; }
+h2.log-hdr { background: $log_hdr_bg; color: $log_hdr_fg;
+ margin: 0; padding: 0em 0.5em 0em 0.5em;
+ border-bottom: $log_border; font-size: 110%; }
+p.log { font-weight: bold; margin: .5em 0 .5em 0; }
+tr.opt-changed { color: $opt_changed_fg; font-weight: bold; }
+tr.opt-default { color: $opt_default_fg; }
+pre.log { margin: 0; padding: 0; padding-left: 1em; }
+"""
+
+############################################################
+## Derived stylesheets
+############################################################
+# Use some simple manipulations to produce a wide variety of color
+ # schemes. In particular, use the _COLOR_RE regular expression to
+# search for colors, and to transform them in various ways.
+
+_COLOR_RE = re.compile(r'#(..)(..)(..)')
+
+def _set_colors(template, *dicts):
+ colors = dicts[0].copy()
+ for d in dicts[1:]: colors.update(d)
+ return re.sub(r'\$(\w+)', lambda m:colors[m.group(1)], template)
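`_set_colors` is plain `$name` templating: the color dictionaries are merged left to right (later dicts win) and every `$word` in the stylesheet is replaced by its lookup. The same mechanism in isolation, with a made-up template and keys:

```python
import re

def fill_template(template, *dicts):
    # Later dicts override earlier ones, matching _set_colors.
    colors = {}
    for d in dicts:
        colors.update(d)
    return re.sub(r'\$(\w+)', lambda m: colors[m.group(1)], template)

css = 'body { background: $bg; color: $fg; }'
result = fill_template(css,
                       {'bg': '#ffffff', 'fg': '#000000'},
                       {'fg': '#202020'})   # overrides fg
```

A missing key raises `KeyError`, which is arguably the right behavior for a template with a typo in it.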
+
+def _rv(match):
+ """
+ Given a regexp match for a color, return the reverse-video version
+ of that color.
+
+ @param match: A regular expression match.
+ @type match: C{Match}
+ @return: The reverse-video color.
+ @rtype: C{string}
+ """
+ rgb = [int(grp, 16) for grp in match.groups()]
+ return '#' + ''.join(['%02x' % (255-c) for c in rgb])
+
+def _darken_darks(match):
+ rgb = [int(grp, 16) for grp in match.groups()]
+ return '#' + ''.join(['%02x' % (((c/255.)**2) * 255) for c in rgb])
+
+_WHITE_COLORS = dict(
+ # Defaults:
+ body_bg = '#ffffff',
+ body_fg = '#000000',
+ body_link = '#0000ff',
+ body_visited_link = '#204080',
+ # Navigation bar:
+ navbar_bg = '#a0c0ff',
+ navbar_fg = '#000000',
+ navbar_border = '2px groove #c0d0d0',
+ navbar_select_bg = '#70b0ff',
+ navbar_select_fg = '#000000',
+ navbar_link = '#0000ff',
+ navbar_visited_link = '#204080',
+ # Tables (summary tables, details tables, indices):
+ table_bg = '#e8f0f8',
+ table_fg = '#000000',
+ table_link = '#0000ff',
+ table_visited_link = '#204080',
+ table_border = '1px solid #608090',
+ table_hdr_bg = '#70b0ff',
+ table_hdr_fg = '#000000',
+ table_hdr_link = '#0000ff',
+ table_hdr_visited_link = '#204080',
+ group_hdr_bg = '#c0e0f8',
+ group_hdr_fg = '#000000',
+ # Function signatures:
+ sig_name = '#006080',
+ sig_arg = '#008060',
+ sig_default = '#602000',
+ summary_sig_name = '#006080',
+ summary_sig_arg = '#006040',
+ summary_sig_default = '#501800',
+ # Variable values:
+ variable_bg = '#dce4ec',
+ variable_fg = '#000000',
+ variable_border = '1px solid #708890',
+ variable_linewrap = '#604000',
+ variable_ellipsis = '#604000',
+ variable_quote = '#604000',
+ variable_group = '#008000',
+ variable_string = '#006030',
+ variable_op = '#604000',
+ variable_unknown = '#a00000',
+ re = '#000000',
+ re_char = '#006030',
+ re_op = '#600000',
+ re_group = '#003060',
+ re_ref = '#404040',
+ # Python source code:
+ doctest_bg = '#e8f0f8',
+ doctest_fg = '#000000',
+ doctest_border = '1px solid #708890',
+ doctest_in_table_bg = '#dce4ec',
+ doctest_in_table_fg = '#000000',
+ pysrc_border = '2px solid #000000',
+ pysrc_sep_border = '2px solid #000000',
+ pysrc_bg = '#f0f0f0',
+ pysrc_fg = '#000000',
+ pysrc_highlight_hdr_bg = '#d8e8e8',
+ pysrc_highlight_bg = '#d0e0e0',
+ py_prompt = '#005050',
+ py_more = '#005050',
+ py_string = '#006030',
+ py_comment = '#003060',
+ py_keyword = '#600000',
+ py_output = '#404040',
+ py_name = '#000050',
+ py_number = '#005000',
+ py_def_name = '#000060',
+ py_base_class = '#000060',
+ py_param = '#000060',
+ py_docstring = '#006030',
+ py_decorator = '#804020',
+ # Graphs
+ graph_border = '1px solid #000000',
+ # Log block
+ log_bg = '#e8f0f8',
+ log_fg = '#000000',
+ log_border = '1px solid #000000',
+ log_hdr_bg = '#70b0ff',
+ log_hdr_fg = '#000000',
+ log_error_bg = '#ffb0b0',
+ log_error_fg = '#000000',
+ log_error_border = '1px solid #000000',
+ log_warn_bg = '#ffffb0',
+ log_warn_fg = '#000000',
+ log_warn_border = '1px solid #000000',
+ log_info_bg = '#b0ffb0',
+ log_info_fg = '#000000',
+ log_info_border = '1px solid #000000',
+ opt_changed_fg = '#000000',
+ opt_default_fg = '#606060',
+ )
+
+_BLUE_COLORS = _WHITE_COLORS.copy()
+_BLUE_COLORS.update(dict(
+ # Body: white text on a dark blue background
+ body_bg = '#000070',
+ body_fg = '#ffffff',
+ body_link = '#ffffff',
+ body_visited_link = '#d0d0ff',
+ # Tables: cyan headers, black on white bodies
+ table_bg = '#ffffff',
+ table_fg = '#000000',
+ table_hdr_bg = '#70b0ff',
+ table_hdr_fg = '#000000',
+ table_hdr_link = '#000000',
+ table_hdr_visited_link = '#000000',
+ table_border = '1px solid #000000',
+ # Navigation bar: blue w/ cyan selection
+ navbar_bg = '#0000ff',
+ navbar_fg = '#ffffff',
+ navbar_link = '#ffffff',
+ navbar_visited_link = '#ffffff',
+ navbar_select_bg = '#70b0ff',
+ navbar_select_fg = '#000000',
+ navbar_border = '1px solid #70b0ff',
+ # Variable values & doctest blocks: cyan
+ variable_bg = '#c0e0f8',
+ variable_fg = '#000000',
+ doctest_bg = '#c0e0f8',
+ doctest_fg = '#000000',
+ doctest_in_table_bg = '#c0e0f8',
+ doctest_in_table_fg = '#000000',
+ ))
+
+_WHITE = _set_colors(TEMPLATE, _WHITE_COLORS)
+_BLUE = _set_colors(TEMPLATE, _BLUE_COLORS)
+
+# Black-on-green.
+_GREEN = _COLOR_RE.sub(_darken_darks, _COLOR_RE.sub(r'#\1\3\2', _BLUE))
+
+# White-on-black, with blue highlights.
+_BLACK = _COLOR_RE.sub(r'#\3\2\1', _COLOR_RE.sub(_rv, _WHITE))
+
+# Grayscale
+_GRAYSCALE = _COLOR_RE.sub(r'#\2\2\2', _WHITE)
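The derived schemes above are regex rewrites of every `#rrggbb` literal in an already-rendered stylesheet: `#\1\3\2` swaps the green and blue channels, `#\3\2\1` swaps red and blue, and `#\2\2\2` copies the green channel into all three (a cheap substitute for luminance-based grayscale). For example, on a made-up CSS fragment:

```python
import re

COLOR = re.compile(r'#(..)(..)(..)')

css = 'body { background: #a0c0ff; }'
swapped = COLOR.sub(r'#\1\3\2', css)  # green <-> blue
gray = COLOR.sub(r'#\2\2\2', css)     # green channel copied to all three
```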
+
+############################################################
+## Stylesheet table
+############################################################
+
+STYLESHEETS = {
+ 'white': (_WHITE, "Black on white, with blue highlights"),
+ 'blue': (_BLUE, "Black on steel blue"),
+ 'green': (_GREEN, "Black on green"),
+ 'black': (_BLACK, "White on black, with blue highlights"),
+ 'grayscale': (_GRAYSCALE, "Grayscale black on white"),
+ 'default': (_WHITE, "Default stylesheet (=white)"),
+# 'none': (_LAYOUT, "A base stylesheet (no color modifications)"),
+ }
diff --git a/python/helpers/epydoc/docwriter/html_help.py b/python/helpers/epydoc/docwriter/html_help.py
new file mode 100644
index 0000000..da24270
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/html_help.py
@@ -0,0 +1,190 @@
+#
+# epydoc.css: default help page
+# Edward Loper
+#
+# Created [01/30/01 05:18 PM]
+# $Id: html_help.py 1239 2006-07-05 11:29:50Z edloper $
+#
+
+"""
+Default help file for the HTML outputter (L{epydoc.docwriter.html}).
+
+@type HTML_HELP: C{string}
+@var HTML_HELP: The contents of the HTML body for the default
+help page.
+"""
+__docformat__ = 'epytext en'
+
+# Expects: {'this_project': name}
+HTML_HELP = '''
+<h1 class="epydoc"> API Documentation </h1>
+
+<p> This document contains the API (Application Programming Interface)
+documentation for %(this_project)s. Documentation for the Python
+objects defined by the project is divided into separate pages for each
+package, module, and class. The API documentation also includes two
+pages containing information about the project as a whole: a trees
+page, and an index page. </p>
+
+<h2> Object Documentation </h2>
+
+ <p>Each <strong>Package Documentation</strong> page contains: </p>
+ <ul>
+ <li> A description of the package. </li>
+ <li> A list of the modules and sub-packages contained by the
+ package. </li>
+ <li> A summary of the classes defined by the package. </li>
+ <li> A summary of the functions defined by the package. </li>
+ <li> A summary of the variables defined by the package. </li>
+ <li> A detailed description of each function defined by the
+ package. </li>
+ <li> A detailed description of each variable defined by the
+ package. </li>
+ </ul>
+
+ <p>Each <strong>Module Documentation</strong> page contains:</p>
+ <ul>
+ <li> A description of the module. </li>
+ <li> A summary of the classes defined by the module. </li>
+ <li> A summary of the functions defined by the module. </li>
+ <li> A summary of the variables defined by the module. </li>
+ <li> A detailed description of each function defined by the
+ module. </li>
+ <li> A detailed description of each variable defined by the
+ module. </li>
+ </ul>
+
+ <p>Each <strong>Class Documentation</strong> page contains: </p>
+ <ul>
+ <li> A class inheritance diagram. </li>
+ <li> A list of known subclasses. </li>
+ <li> A description of the class. </li>
+ <li> A summary of the methods defined by the class. </li>
+ <li> A summary of the instance variables defined by the class. </li>
+ <li> A summary of the class (static) variables defined by the
+ class. </li>
+ <li> A detailed description of each method defined by the
+ class. </li>
+ <li> A detailed description of each instance variable defined by the
+ class. </li>
+ <li> A detailed description of each class (static) variable defined
+ by the class. </li>
+ </ul>
+
+<h2> Project Documentation </h2>
+
+ <p> The <strong>Trees</strong> page contains the module and class hierarchies: </p>
+ <ul>
+ <li> The <em>module hierarchy</em> lists every package and module, with
+ modules grouped into packages. At the top level, and within each
+ package, modules and sub-packages are listed alphabetically. </li>
+ <li> The <em>class hierarchy</em> lists every class, grouped by base
+ class. If a class has more than one base class, then it will be
+ listed under each base class. At the top level, and under each base
+ class, classes are listed alphabetically. </li>
+ </ul>
+
+ <p> The <strong>Index</strong> page contains indices of terms and
+ identifiers: </p>
+ <ul>
+ <li> The <em>term index</em> lists every term indexed by any object\'s
+ documentation. For each term, the index provides links to each
+ place where the term is indexed. </li>
+ <li> The <em>identifier index</em> lists the (short) name of every package,
+ module, class, method, function, variable, and parameter. For each
+ identifier, the index provides a short description, and a link to
+ its documentation. </li>
+ </ul>
+
+<h2> The Table of Contents </h2>
+
+<p> The table of contents occupies the two frames on the left side of
+the window. The upper-left frame displays the <em>project
+contents</em>, and the lower-left frame displays the <em>module
+contents</em>: </p>
+
+<table class="help summary" border="1" cellspacing="0" cellpadding="3">
+ <tr style="height: 30%%">
+ <td align="center" style="font-size: small">
+ Project<br />Contents<hr />...</td>
+ <td align="center" style="font-size: small" rowspan="2" width="70%%">
+ API<br />Documentation<br />Frame<br /><br /><br />
+ </td>
+ </tr>
+ <tr>
+ <td align="center" style="font-size: small">
+ Module<br />Contents<hr /> <br />...<br />
+ </td>
+ </tr>
+</table><br />
+
+<p> The <strong>project contents frame</strong> contains a list of all packages
+and modules that are defined by the project. Clicking on an entry
+will display its contents in the module contents frame. Clicking on a
+special entry, labeled "Everything," will display the contents of
+the entire project. </p>
+
+<p> The <strong>module contents frame</strong> contains a list of every
+submodule, class, type, exception, function, and variable defined by a
+module or package. Clicking on an entry will display its
+documentation in the API documentation frame. Clicking on the name of
+the module, at the top of the frame, will display the documentation
+for the module itself. </p>
+
+<p> The "<strong>frames</strong>" and "<strong>no frames</strong>" buttons below the top
+navigation bar can be used to control whether the table of contents is
+displayed or not. </p>
+
+<h2> The Navigation Bar </h2>
+
+<p> A navigation bar is located at the top and bottom of every page.
+It indicates what type of page you are currently viewing, and allows
+you to go to related pages. The following table describes the labels
+on the navigation bar.  Note that some labels (such as
+[Parent]) are not displayed on all pages. </p>
+
+<table class="summary" border="1" cellspacing="0" cellpadding="3" width="100%%">
+<tr class="summary">
+ <th>Label</th>
+ <th>Highlighted when...</th>
+ <th>Links to...</th>
+</tr>
+ <tr><td valign="top"><strong>[Parent]</strong></td>
+ <td valign="top"><em>(never highlighted)</em></td>
+ <td valign="top"> the parent of the current package </td></tr>
+ <tr><td valign="top"><strong>[Package]</strong></td>
+ <td valign="top">viewing a package</td>
+ <td valign="top">the package containing the current object
+ </td></tr>
+ <tr><td valign="top"><strong>[Module]</strong></td>
+ <td valign="top">viewing a module</td>
+ <td valign="top">the module containing the current object
+ </td></tr>
+ <tr><td valign="top"><strong>[Class]</strong></td>
+ <td valign="top">viewing a class </td>
+ <td valign="top">the class containing the current object</td></tr>
+ <tr><td valign="top"><strong>[Trees]</strong></td>
+ <td valign="top">viewing the trees page</td>
+ <td valign="top"> the trees page </td></tr>
+ <tr><td valign="top"><strong>[Index]</strong></td>
+ <td valign="top">viewing the index page</td>
+ <td valign="top"> the index page </td></tr>
+ <tr><td valign="top"><strong>[Help]</strong></td>
+ <td valign="top">viewing the help page</td>
+ <td valign="top"> the help page </td></tr>
+</table>
+
+<p> The "<strong>show private</strong>" and "<strong>hide private</strong>" buttons below
+the top navigation bar can be used to control whether documentation
+for private objects is displayed. Private objects are usually defined
+as objects whose (short) names begin with a single underscore, but do
+not end with an underscore. For example, "<code>_x</code>",
+"<code>__pprint</code>", and "<code>epydoc.epytext._tokenize</code>"
+are private objects; but "<code>re.sub</code>",
+"<code>__init__</code>", and "<code>type_</code>" are not. However,
+if a module defines the "<code>__all__</code>" variable, then its
+contents are used to decide which objects are private. </p>
+
+<p> A timestamp below the bottom navigation bar indicates when each
+page was last updated. </p>
+'''
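`HTML_HELP` is an ordinary `%`-format template taking a single `this_project` key; literal percent signs in the markup (e.g. `width="70%"`) must be doubled as `%%` inside it. A small stand-in template illustrates both points:

```python
# Stand-in for the real HTML_HELP template: one named substitution,
# plus a doubled '%%' that survives interpolation as a literal '%'.
TEMPLATE = '<p>API docs for %(this_project)s</p><td width="70%%">'

body = TEMPLATE % {'this_project': 'epydoc'}
```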
diff --git a/python/helpers/epydoc/docwriter/latex.py b/python/helpers/epydoc/docwriter/latex.py
new file mode 100644
index 0000000..18a88cf
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/latex.py
@@ -0,0 +1,1187 @@
+#
+# epydoc.docwriter.latex: epydoc LaTeX output generator
+# Edward Loper
+#
+# Created [01/30/01 05:18 PM]
+# $Id: latex.py 1621 2007-09-23 18:54:23Z edloper $
+#
+
+"""
+The LaTeX output generator for epydoc. The main interface provided by
+this module is the L{LatexWriter} class.
+
+@todo: Inheritance=listed
+"""
+__docformat__ = 'epytext en'
+
+import os.path, sys, time, re, textwrap, codecs
+
+from epydoc.apidoc import *
+from epydoc.compat import *
+import epydoc
+from epydoc import log
+from epydoc import markup
+from epydoc.util import plaintext_to_latex
+import epydoc.markup
+
+class LatexWriter:
+ PREAMBLE = [
+ "\\documentclass{article}",
+ "\\usepackage{alltt, parskip, fancyhdr, boxedminipage}",
+ "\\usepackage{makeidx, multirow, longtable, tocbibind, amssymb}",
+ "\\usepackage{fullpage}",
+ "\\usepackage[usenames]{color}",
+ # Fix the heading position -- without this, the headings generated
+ # by the fancyheadings package sometimes overlap the text.
+ "\\setlength{\\headheight}{16pt}",
+ "\\setlength{\\headsep}{24pt}",
+ "\\setlength{\\topmargin}{-\\headsep}",
+ # By default, do not indent paragraphs.
+ "\\setlength{\\parindent}{0ex}",
+ "\\setlength{\\parskip}{2ex}",
+ # Double the standard size boxedminipage outlines.
+ "\\setlength{\\fboxrule}{2\\fboxrule}",
+ # Create a 'base class' length named BCL for use in base trees.
+ "\\newlength{\\BCL} % base class length, for base trees.",
+ # Display the section & subsection names in a header.
+ "\\pagestyle{fancy}",
+ "\\renewcommand{\\sectionmark}[1]{\\markboth{#1}{}}",
+ "\\renewcommand{\\subsectionmark}[1]{\\markright{#1}}",
+ # Colorization for python source code
+ "\\definecolor{py@keywordcolour}{rgb}{1,0.45882,0}",
+ "\\definecolor{py@stringcolour}{rgb}{0,0.666666,0}",
+ "\\definecolor{py@commentcolour}{rgb}{1,0,0}",
+ "\\definecolor{py@ps1colour}{rgb}{0.60784,0,0}",
+ "\\definecolor{py@ps2colour}{rgb}{0.60784,0,1}",
+ "\\definecolor{py@inputcolour}{rgb}{0,0,0}",
+ "\\definecolor{py@outputcolour}{rgb}{0,0,1}",
+ "\\definecolor{py@exceptcolour}{rgb}{1,0,0}",
+ "\\definecolor{py@defnamecolour}{rgb}{1,0.5,0.5}",
+ "\\definecolor{py@builtincolour}{rgb}{0.58039,0,0.58039}",
+ "\\definecolor{py@identifiercolour}{rgb}{0,0,0}",
+ "\\definecolor{py@linenumcolour}{rgb}{0.4,0.4,0.4}",
+ "% Prompt",
+ "\\newcommand{\\pysrcprompt}[1]{\\textcolor{py@ps1colour}"
+ "{\\small\\textbf{#1}}}",
+ "\\newcommand{\\pysrcmore}[1]{\\textcolor{py@ps2colour}"
+ "{\\small\\textbf{#1}}}",
+ "% Source code",
+ "\\newcommand{\\pysrckeyword}[1]{\\textcolor{py@keywordcolour}"
+ "{\\small\\textbf{#1}}}",
+ "\\newcommand{\\pysrcbuiltin}[1]{\\textcolor{py@builtincolour}"
+ "{\\small\\textbf{#1}}}",
+ "\\newcommand{\\pysrcstring}[1]{\\textcolor{py@stringcolour}"
+ "{\\small\\textbf{#1}}}",
+ "\\newcommand{\\pysrcdefname}[1]{\\textcolor{py@defnamecolour}"
+ "{\\small\\textbf{#1}}}",
+ "\\newcommand{\\pysrcother}[1]{\\small\\textbf{#1}}",
+ "% Comments",
+ "\\newcommand{\\pysrccomment}[1]{\\textcolor{py@commentcolour}"
+ "{\\small\\textbf{#1}}}",
+ "% Output",
+ "\\newcommand{\\pysrcoutput}[1]{\\textcolor{py@outputcolour}"
+ "{\\small\\textbf{#1}}}",
+ "% Exceptions",
+ "\\newcommand{\\pysrcexcept}[1]{\\textcolor{py@exceptcolour}"
+ "{\\small\\textbf{#1}}}",
+ # Size of the function description boxes.
+ "\\newlength{\\funcindent}",
+ "\\newlength{\\funcwidth}",
+ "\\setlength{\\funcindent}{1cm}",
+ "\\setlength{\\funcwidth}{\\textwidth}",
+ "\\addtolength{\\funcwidth}{-2\\funcindent}",
+ # Size of the var description tables.
+ "\\newlength{\\varindent}",
+ "\\newlength{\\varnamewidth}",
+ "\\newlength{\\vardescrwidth}",
+ "\\newlength{\\varwidth}",
+ "\\setlength{\\varindent}{1cm}",
+ "\\setlength{\\varnamewidth}{.3\\textwidth}",
+ "\\setlength{\\varwidth}{\\textwidth}",
+ "\\addtolength{\\varwidth}{-4\\tabcolsep}",
+ "\\addtolength{\\varwidth}{-3\\arrayrulewidth}",
+ "\\addtolength{\\varwidth}{-2\\varindent}",
+ "\\setlength{\\vardescrwidth}{\\varwidth}",
+ "\\addtolength{\\vardescrwidth}{-\\varnamewidth}",
+ # Define new environment for displaying parameter lists.
+ textwrap.dedent("""\
+ \\newenvironment{Ventry}[1]%
+ {\\begin{list}{}{%
+ \\renewcommand{\\makelabel}[1]{\\texttt{##1:}\\hfil}%
+ \\settowidth{\\labelwidth}{\\texttt{#1:}}%
+ \\setlength{\\leftmargin}{\\labelsep}%
+ \\addtolength{\\leftmargin}{\\labelwidth}}}%
+ {\\end{list}}"""),
+ ]
+
+ HRULE = '\\rule{\\textwidth}{0.5\\fboxrule}\n\n'
+
+ SECTIONS = ['\\part{%s}', '\\chapter{%s}', '\\section{%s}',
+ '\\subsection{%s}', '\\subsubsection{%s}',
+ '\\textbf{%s}']
+
+ STAR_SECTIONS = ['\\part*{%s}', '\\chapter*{%s}', '\\section*{%s}',
+ '\\subsection*{%s}', '\\subsubsection*{%s}',
+ '\\textbf{%s}']
+
+ def __init__(self, docindex, **kwargs):
+ self.docindex = docindex
+ # Process keyword arguments
+ self._show_private = kwargs.get('private', 0)
+ self._prj_name = kwargs.get('prj_name', None) or 'API Documentation'
+ self._crossref = kwargs.get('crossref', 1)
+ self._index = kwargs.get('index', 1)
+        self._list_classes_separately = kwargs.get('list_classes_separately', 0)
+ self._inheritance = kwargs.get('inheritance', 'listed')
+ self._exclude = kwargs.get('exclude', 1)
+ self._top_section = 2
+ self._index_functions = 1
+ self._hyperref = 1
+
+ #: The Python representation of the encoding.
+ #: Update L{latex_encodings} in case of mismatch between it and
+ #: the C{inputenc} LaTeX package.
+ self._encoding = kwargs.get('encoding', 'utf-8')
+
+ self.valdocs = valdocs = sorted(docindex.reachable_valdocs(
+ imports=False, packages=False, bases=False, submodules=False,
+ subclasses=False, private=self._show_private))
+ self._num_files = self.num_files()
+ # For use with select_variables():
+ if self._show_private: self._public_filter = None
+ else: self._public_filter = True
+
+ self.class_list = [d for d in valdocs if isinstance(d, ClassDoc)]
+ """The list of L{ClassDoc}s for the documented classes."""
+ self.class_set = set(self.class_list)
+ """The set of L{ClassDoc}s for the documented classes."""
+
+ def write(self, directory=None):
+ """
+ Write the API documentation for the entire project to the
+ given directory.
+
+ @type directory: C{string}
+ @param directory: The directory to which output should be
+ written. If no directory is specified, output will be
+ written to the current directory. If the directory does
+ not exist, it will be created.
+ @rtype: C{None}
+        @raise OSError: If C{directory} cannot be created.
+ @raise OSError: If any file cannot be created or written to.
+ """
+ # For progress reporting:
+ self._files_written = 0.
+
+ # Set the default values for ValueDoc formatted representations.
+ orig_valdoc_defaults = (ValueDoc.SUMMARY_REPR_LINELEN,
+ ValueDoc.REPR_LINELEN,
+ ValueDoc.REPR_MAXLINES)
+ ValueDoc.SUMMARY_REPR_LINELEN = 60
+ ValueDoc.REPR_LINELEN = 52
+ ValueDoc.REPR_MAXLINES = 5
+
+ # Create destination directories, if necessary
+ if not directory: directory = os.curdir
+ self._mkdir(directory)
+ self._directory = directory
+
+ # Write the top-level file.
+ self._write(self.write_topfile, directory, 'api.tex')
+
+ # Write the module & class files.
+ for val_doc in self.valdocs:
+ if isinstance(val_doc, ModuleDoc):
+ filename = '%s-module.tex' % val_doc.canonical_name
+ self._write(self.write_module, directory, filename, val_doc)
+ elif (isinstance(val_doc, ClassDoc) and
+ self._list_classes_separately):
+ filename = '%s-class.tex' % val_doc.canonical_name
+ self._write(self.write_class, directory, filename, val_doc)
+
+ # Restore defaults that we changed.
+ (ValueDoc.SUMMARY_REPR_LINELEN, ValueDoc.REPR_LINELEN,
+ ValueDoc.REPR_MAXLINES) = orig_valdoc_defaults
+
+ def _write(self, write_func, directory, filename, *args):
+ # Display our progress.
+ self._files_written += 1
+ log.progress(self._files_written/self._num_files, filename)
+
+ path = os.path.join(directory, filename)
+ if self._encoding == 'utf-8':
+ f = codecs.open(path, 'w', 'utf-8')
+ write_func(f.write, *args)
+ f.close()
+ else:
+ result = []
+ write_func(result.append, *args)
+ s = u''.join(result)
+ try:
+ s = s.encode(self._encoding)
+ except UnicodeError:
+ log.error("Output could not be represented with the "
+ "given encoding (%r). Unencodable characters "
+ "will be displayed as '?'. It is recommended "
+ "that you use a different output encoding (utf-8, "
+ "if it's supported by latex on your system)."
+ % self._encoding)
+ s = s.encode(self._encoding, 'replace')
+ f = open(path, 'w')
+ f.write(s)
+ f.close()
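The fallback branch above relies on the `'replace'` error handler: characters that the requested output encoding cannot represent degrade to `?` instead of raising `UnicodeError`, so the LaTeX file is still written. In isolation:

```python
# Sketch of the 'replace' fallback used by _write(): a strict encode of
# non-ASCII text raises UnicodeError, while 'replace' substitutes '?'.
s = u'na\u00efve \u2192 LaTeX'
try:
    encoded = s.encode('ascii')
except UnicodeError:
    encoded = s.encode('ascii', 'replace')
```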
+
+ def num_files(self):
+ """
+        @return: The number of files that this C{LatexWriter} will
+ generate.
+ @rtype: C{int}
+ """
+ n = 1
+ for doc in self.valdocs:
+ if isinstance(doc, ModuleDoc): n += 1
+ if isinstance(doc, ClassDoc) and self._list_classes_separately:
+ n += 1
+ return n
+
+ def _mkdir(self, directory):
+ """
+ If the given directory does not exist, then attempt to create it.
+ @rtype: C{None}
+ """
+ if not os.path.isdir(directory):
+ if os.path.exists(directory):
+ raise OSError('%r is not a directory' % directory)
+ os.mkdir(directory)
+
+ #////////////////////////////////////////////////////////////
+ #{ Main Doc File
+ #////////////////////////////////////////////////////////////
+
+ def write_topfile(self, out):
+ self.write_header(out, 'Include File')
+ self.write_preamble(out)
+ out('\n\\begin{document}\n\n')
+ self.write_start_of(out, 'Header')
+
+ # Write the title.
+ self.write_start_of(out, 'Title')
+ out('\\title{%s}\n' % plaintext_to_latex(self._prj_name, 1))
+ out('\\author{API Documentation}\n')
+ out('\\maketitle\n')
+
+ # Add a table of contents.
+ self.write_start_of(out, 'Table of Contents')
+ out('\\addtolength{\\parskip}{-2ex}\n')
+ out('\\tableofcontents\n')
+ out('\\addtolength{\\parskip}{2ex}\n')
+
+ # Include documentation files.
+ self.write_start_of(out, 'Includes')
+ for val_doc in self.valdocs:
+ if isinstance(val_doc, ModuleDoc):
+ out('\\include{%s-module}\n' % val_doc.canonical_name)
+
+ # If we're listing classes separately, put them after all the
+ # modules.
+ if self._list_classes_separately:
+ for val_doc in self.valdocs:
+ if isinstance(val_doc, ClassDoc):
+ out('\\include{%s-class}\n' % val_doc.canonical_name)
+
+ # Add the index, if requested.
+ if self._index:
+ self.write_start_of(out, 'Index')
+ out('\\printindex\n\n')
+
+ # Add the footer.
+ self.write_start_of(out, 'Footer')
+ out('\\end{document}\n\n')
+
+ def write_preamble(self, out):
+ out('\n'.join(self.PREAMBLE))
+ out('\n')
+
+ # Set the encoding.
+ out('\\usepackage[%s]{inputenc}\n' % self.get_latex_encoding())
+
+ # If we're generating hyperrefs, add the appropriate packages.
+ if self._hyperref:
+ out('\\definecolor{UrlColor}{rgb}{0,0.08,0.45}\n')
+ out('\\usepackage[dvips, pagebackref, pdftitle={%s}, '
+ 'pdfcreator={epydoc %s}, bookmarks=true, '
+ 'bookmarksopen=false, pdfpagemode=UseOutlines, '
+ 'colorlinks=true, linkcolor=black, anchorcolor=black, '
+ 'citecolor=black, filecolor=black, menucolor=black, '
+ 'pagecolor=black, urlcolor=UrlColor]{hyperref}\n' %
+ (self._prj_name or '', epydoc.__version__))
+
+ # If we're generating an index, add it to the preamble.
+ if self._index:
+ out("\\makeindex\n")
+
+ # If restructuredtext was used, then we need to extend
+ # the prefix to include LatexTranslator.head_prefix.
+ if 'restructuredtext' in epydoc.markup.MARKUP_LANGUAGES_USED:
+ from epydoc.markup import restructuredtext
+ rst_head = restructuredtext.latex_head_prefix()
+ rst_head = ''.join(rst_head).split('\n')
+ for line in rst_head[1:]:
+ m = re.match(r'\\usepackage(\[.*?\])?{(.*?)}', line)
+ if m and m.group(2) in (
+ 'babel', 'hyperref', 'color', 'alltt', 'parskip',
+ 'fancyhdr', 'boxedminipage', 'makeidx',
+                    'multirow', 'longtable', 'tocbibind', 'amssymb',
+ 'fullpage', 'inputenc'):
+ pass
+ else:
+ out(line+'\n')
+
+
+ #////////////////////////////////////////////////////////////
+ #{ Chapters
+ #////////////////////////////////////////////////////////////
+
+ def write_module(self, out, doc):
+ self.write_header(out, doc)
+ self.write_start_of(out, 'Module Description')
+
+ # Add this module to the index.
+ out(' ' + self.indexterm(doc, 'start'))
+
+ # Add a section marker.
+ out(self.section('%s %s' % (self.doc_kind(doc),
+ doc.canonical_name)))
+
+ # Label our current location.
+ out(' \\label{%s}\n' % self.label(doc))
+
+ # Add the module's description.
+ if doc.descr not in (None, UNKNOWN):
+ out(self.docstring_to_latex(doc.descr))
+
+ # Add version, author, warnings, requirements, notes, etc.
+ self.write_standard_fields(out, doc)
+
+ # If it's a package, list the sub-modules.
+ if doc.submodules != UNKNOWN and doc.submodules:
+ self.write_module_list(out, doc)
+
+ # Contents.
+ if self._list_classes_separately:
+ self.write_class_list(out, doc)
+ self.write_func_list(out, 'Functions', doc, 'function')
+ self.write_var_list(out, 'Variables', doc, 'other')
+
+ # Class list.
+ if not self._list_classes_separately:
+ classes = doc.select_variables(imported=False, value_type='class',
+ public=self._public_filter)
+ for var_doc in classes:
+ self.write_class(out, var_doc.value)
+
+ # Mark the end of the module (for the index)
+ out(' ' + self.indexterm(doc, 'end'))
+
+ def write_class(self, out, doc):
+ if self._list_classes_separately:
+ self.write_header(out, doc)
+ self.write_start_of(out, 'Class Description')
+
+ # Add this class to the index.
+ out(' ' + self.indexterm(doc, 'start'))
+
+ # Add a section marker.
+ if self._list_classes_separately:
+ seclevel = 0
+ out(self.section('%s %s' % (self.doc_kind(doc),
+ doc.canonical_name), seclevel))
+ else:
+ seclevel = 1
+ out(self.section('%s %s' % (self.doc_kind(doc),
+ doc.canonical_name[-1]), seclevel))
+
+ # Label our current location.
+ out(' \\label{%s}\n' % self.label(doc))
+
+ # Add our base list.
+ if doc.bases not in (UNKNOWN, None) and len(doc.bases) > 0:
+ out(self.base_tree(doc))
+
+ # The class's known subclasses
+ if doc.subclasses not in (UNKNOWN, None) and len(doc.subclasses) > 0:
+ sc_items = [plaintext_to_latex('%s' % sc.canonical_name)
+ for sc in doc.subclasses]
+ out(self._descrlist(sc_items, 'Known Subclasses', short=1))
+
+ # The class's description.
+ if doc.descr not in (None, UNKNOWN):
+ out(self.docstring_to_latex(doc.descr))
+
+ # Version, author, warnings, requirements, notes, etc.
+ self.write_standard_fields(out, doc)
+
+ # Contents.
+ self.write_func_list(out, 'Methods', doc, 'method',
+ seclevel+1)
+ self.write_var_list(out, 'Properties', doc,
+ 'property', seclevel+1)
+ self.write_var_list(out, 'Class Variables', doc,
+ 'classvariable', seclevel+1)
+ self.write_var_list(out, 'Instance Variables', doc,
+ 'instancevariable', seclevel+1)
+
+ # Mark the end of the class (for the index)
+ out(' ' + self.indexterm(doc, 'end'))
+
+ #////////////////////////////////////////////////////////////
+ #{ Module hierarchy trees
+ #////////////////////////////////////////////////////////////
+
+ def write_module_tree(self, out):
+ modules = [doc for doc in self.valdocs
+ if isinstance(doc, ModuleDoc)]
+ if not modules: return
+
+ # Write entries for all top-level modules/packages.
+ out('\\begin{itemize}\n')
+ out('\\setlength{\\parskip}{0ex}\n')
+ for doc in modules:
+ if (doc.package in (None, UNKNOWN) or
+ doc.package not in self.valdocs):
+ self.write_module_tree_item(out, doc)
+        out('\\end{itemize}\n')
+
+ def write_module_list(self, out, doc):
+ if len(doc.submodules) == 0: return
+ self.write_start_of(out, 'Modules')
+
+ out(self.section('Modules', 1))
+ out('\\begin{itemize}\n')
+ out('\\setlength{\\parskip}{0ex}\n')
+
+ for group_name in doc.group_names():
+ if not doc.submodule_groups[group_name]: continue
+ if group_name:
+ out(' \\item \\textbf{%s}\n' % group_name)
+ out(' \\begin{itemize}\n')
+ for submodule in doc.submodule_groups[group_name]:
+ self.write_module_tree_item(out, submodule)
+ if group_name:
+                out('  \\end{itemize}\n')
+
+ out('\\end{itemize}\n\n')
+
+ def write_module_tree_item(self, out, doc, depth=0):
+ """
+ Helper function for L{write_module_tree} and L{write_module_list}.
+
+        @rtype: C{None}
+ """
+ out(' '*depth + '\\item \\textbf{')
+ out(plaintext_to_latex(doc.canonical_name[-1]) +'}')
+ if doc.summary not in (None, UNKNOWN):
+ out(': %s\n' % self.docstring_to_latex(doc.summary))
+ if self._crossref:
+ out('\n \\textit{(Section \\ref{%s}' % self.label(doc))
+ out(', p.~\\pageref{%s})}\n\n' % self.label(doc))
+ if doc.submodules != UNKNOWN and doc.submodules:
+ out(' '*depth + ' \\begin{itemize}\n')
+ out(' '*depth + '\\setlength{\\parskip}{0ex}\n')
+ for submodule in doc.submodules:
+ self.write_module_tree_item(out, submodule, depth+4)
+ out(' '*depth + ' \\end{itemize}\n')
+
+ #////////////////////////////////////////////////////////////
+ #{ Base class trees
+ #////////////////////////////////////////////////////////////
+
+ def base_tree(self, doc, width=None, linespec=None):
+ if width is None:
+ width = self._find_tree_width(doc)+2
+ linespec = []
+ s = ('&'*(width-4)+'\\multicolumn{2}{l}{\\textbf{%s}}\n' %
+ plaintext_to_latex('%s'%self._base_name(doc)))
+ s += '\\end{tabular}\n\n'
+ top = 1
+ else:
+ s = self._base_tree_line(doc, width, linespec)
+ top = 0
+
+ if isinstance(doc, ClassDoc):
+ for i in range(len(doc.bases)-1, -1, -1):
+ base = doc.bases[i]
+ spec = (i > 0)
+ s = self.base_tree(base, width, [spec]+linespec) + s
+
+ if top:
+ s = '\\begin{tabular}{%s}\n' % (width*'c') + s
+
+ return s
+
+ def _base_name(self, doc):
+ if doc.canonical_name is None:
+ if doc.parse_repr is not None:
+ return doc.parse_repr
+ else:
+ return '??'
+ else:
+ return '%s' % doc.canonical_name
+
+ def _find_tree_width(self, doc):
+ if not isinstance(doc, ClassDoc): return 2
+ width = 2
+ for base in doc.bases:
+ width = max(width, self._find_tree_width(base)+2)
+ return width
+
+ def _base_tree_line(self, doc, width, linespec):
+ base_name = plaintext_to_latex(self._base_name(doc))
+
+ # linespec is a list of booleans.
+ s = '%% Line for %s, linespec=%s\n' % (base_name, linespec)
+
+ labelwidth = width-2*len(linespec)-2
+
+ # The base class name.
+ s += ('\\multicolumn{%s}{r}{' % labelwidth)
+ s += '\\settowidth{\\BCL}{%s}' % base_name
+ s += '\\multirow{2}{\\BCL}{%s}}\n' % base_name
+
+ # The vertical bars for other base classes (top half)
+ for vbar in linespec:
+ if vbar: s += '&&\\multicolumn{1}{|c}{}\n'
+ else: s += '&&\n'
+
+ # The horizontal line.
+ s += ' \\\\\\cline{%s-%s}\n' % (labelwidth+1, labelwidth+1)
+
+ # The vertical bar for this base class.
+ s += ' ' + '&'*labelwidth
+ s += '\\multicolumn{1}{c|}{}\n'
+
+ # The vertical bars for other base classes (bottom half)
+ for vbar in linespec:
+ if vbar: s += '&\\multicolumn{1}{|c}{}&\n'
+ else: s += '&&\n'
+ s += ' \\\\\n'
+
+ return s
+
+ #////////////////////////////////////////////////////////////
+ #{ Class List
+ #////////////////////////////////////////////////////////////
+
+ def write_class_list(self, out, doc):
+ groups = [(plaintext_to_latex(group_name),
+ doc.select_variables(group=group_name, imported=False,
+ value_type='class',
+ public=self._public_filter))
+ for group_name in doc.group_names()]
+
+ # Discard any empty groups; and return if they're all empty.
+ groups = [(g,vars) for (g,vars) in groups if vars]
+ if not groups: return
+
+ # Write a header.
+ self.write_start_of(out, 'Classes')
+ out(self.section('Classes', 1))
+ out('\\begin{itemize}')
+ out(' \\setlength{\\parskip}{0ex}\n')
+
+ for name, var_docs in groups:
+ if name:
+ out(' \\item \\textbf{%s}\n' % name)
+ out(' \\begin{itemize}\n')
+ # Add the lines for each class
+ for var_doc in var_docs:
+ self.write_class_list_line(out, var_doc)
+ if name:
+ out(' \\end{itemize}\n')
+
+ out('\\end{itemize}\n')
+
+ def write_class_list_line(self, out, var_doc):
+ if var_doc.value in (None, UNKNOWN): return # shouldn't happen
+ doc = var_doc.value
+ out(' ' + '\\item \\textbf{')
+ out(plaintext_to_latex(var_doc.name) + '}')
+ if doc.summary not in (None, UNKNOWN):
+ out(': %s\n' % self.docstring_to_latex(doc.summary))
+ if self._crossref:
+ out(('\n \\textit{(Section \\ref{%s}' % self.label(doc)))
+ out((', p.~\\pageref{%s})}\n\n' % self.label(doc)))
+
+ #////////////////////////////////////////////////////////////
+ #{ Function List
+ #////////////////////////////////////////////////////////////
+ _FUNC_GROUP_HEADER = '\n\\large{\\textbf{\\textit{%s}}}\n\n'
+
+ def write_func_list(self, out, heading, doc, value_type, seclevel=1):
+ # Divide all public variables of the given type into groups.
+ groups = [(plaintext_to_latex(group_name),
+ doc.select_variables(group=group_name, imported=False,
+ value_type=value_type,
+ public=self._public_filter))
+ for group_name in doc.group_names()]
+
+ # Discard any empty groups; and return if they're all empty.
+ groups = [(g,vars) for (g,vars) in groups if vars]
+ if not groups: return
+
+ # Write a header.
+ self.write_start_of(out, heading)
+ out(' '+self.section(heading, seclevel))
+
+ # Write a section for each group.
+ grouped_inh_vars = {}
+ for name, var_docs in groups:
+ self.write_func_group(out, doc, name, var_docs, grouped_inh_vars)
+
+ # Write a section for each inheritance pseudo-group (used if
+ # inheritance=='grouped')
+ if grouped_inh_vars:
+ for base in doc.mro():
+ if base in grouped_inh_vars:
+ hdr = ('Inherited from %s' %
+ plaintext_to_latex('%s' % base.canonical_name))
+ if self._crossref and base in self.class_set:
+ hdr += ('\\textit{(Section \\ref{%s})}' %
+ self.label(base))
+ out(self._FUNC_GROUP_HEADER % (hdr))
+ for var_doc in grouped_inh_vars[base]:
+ self.write_func_list_box(out, var_doc)
+
+ def write_func_group(self, out, doc, name, var_docs, grouped_inh_vars):
+ # Split up the var_docs list, according to the way each var
+ # should be displayed:
+ # - listed_inh_vars -- for listed inherited variables.
+ # - grouped_inh_vars -- for grouped inherited variables.
+ # - normal_vars -- for all other variables.
+ listed_inh_vars = {}
+ normal_vars = []
+ for var_doc in var_docs:
+ if var_doc.container != doc:
+ base = var_doc.container
+ if (base not in self.class_set or
+ self._inheritance == 'listed'):
+ listed_inh_vars.setdefault(base,[]).append(var_doc)
+ elif self._inheritance == 'grouped':
+ grouped_inh_vars.setdefault(base,[]).append(var_doc)
+ else:
+ normal_vars.append(var_doc)
+ else:
+ normal_vars.append(var_doc)
+
+ # Write a header for the group.
+ if name:
+ out(self._FUNC_GROUP_HEADER % name)
+ # Write an entry for each normal var:
+ for var_doc in normal_vars:
+ self.write_func_list_box(out, var_doc)
+ # Write a subsection for inherited vars:
+ if listed_inh_vars:
+ self.write_func_inheritance_list(out, doc, listed_inh_vars)
+
+ def write_func_inheritance_list(self, out, doc, listed_inh_vars):
+ for base in doc.mro():
+ if base not in listed_inh_vars: continue
+ #if str(base.canonical_name) == 'object': continue
+ var_docs = listed_inh_vars[base]
+ if self._public_filter:
+ var_docs = [v for v in var_docs if v.is_public]
+ if var_docs:
+ hdr = ('Inherited from %s' %
+ plaintext_to_latex('%s' % base.canonical_name))
+ if self._crossref and base in self.class_set:
+ hdr += ('\\textit{(Section \\ref{%s})}' %
+ self.label(base))
+ out(self._FUNC_GROUP_HEADER % hdr)
+ out('\\begin{quote}\n')
+ out('%s\n' % ', '.join(
+ ['%s()' % plaintext_to_latex(var_doc.name)
+ for var_doc in var_docs]))
+ out('\\end{quote}\n')
+
+ def write_func_list_box(self, out, var_doc):
+ func_doc = var_doc.value
+ is_inherited = (var_doc.overrides not in (None, UNKNOWN))
+
+ # nb: this gives the containing section, not a reference
+ # directly to the function.
+ if not is_inherited:
+ out(' \\label{%s}\n' % self.label(func_doc))
+ out(' %s\n' % self.indexterm(func_doc))
+
+ # Start box for this function.
+ out(' \\vspace{0.5ex}\n\n')
+ out('\\hspace{.8\\funcindent}')
+ out('\\begin{boxedminipage}{\\funcwidth}\n\n')
+
+ # Function signature.
+ out(' %s\n\n' % self.function_signature(var_doc))
+
+ if (func_doc.docstring not in (None, UNKNOWN) and
+ func_doc.docstring.strip() != ''):
+ out(' \\vspace{-1.5ex}\n\n')
+ out(' \\rule{\\textwidth}{0.5\\fboxrule}\n')
+
+ # Description
+ out("\\setlength{\\parskip}{2ex}\n")
+ if func_doc.descr not in (None, UNKNOWN):
+ out(self.docstring_to_latex(func_doc.descr, 4))
+
+ # Parameters
+ out("\\setlength{\\parskip}{1ex}\n")
+ if func_doc.arg_descrs or func_doc.arg_types:
+ # Find the longest name.
+ longest = max([0]+[len(n) for n in func_doc.arg_types])
+ for names, descrs in func_doc.arg_descrs:
+ longest = max([longest]+[len(n) for n in names])
+ # Table header.
+ out(' '*6+'\\textbf{Parameters}\n')
+ out(' \\vspace{-1ex}\n\n')
+ out(' '*6+'\\begin{quote}\n')
+ out(' \\begin{Ventry}{%s}\n\n' % (longest*'x'))
+ # Add params that have @type but not @param info:
+ arg_descrs = list(func_doc.arg_descrs)
+ args = set()
+ for arg_names, arg_descr in arg_descrs:
+ args.update(arg_names)
+            for arg in func_doc.arg_types:
+ if arg not in args:
+ arg_descrs.append( ([arg],None) )
+ # Display params
+ for (arg_names, arg_descr) in arg_descrs:
+ arg_name = plaintext_to_latex(', '.join(arg_names))
+ out('%s\\item[%s]\n\n' % (' '*10, arg_name))
+ if arg_descr:
+ out(self.docstring_to_latex(arg_descr, 10))
+ for arg_name in arg_names:
+ arg_typ = func_doc.arg_types.get(arg_name)
+ if arg_typ is not None:
+ if len(arg_names) == 1:
+ lhs = 'type'
+ else:
+ lhs = 'type of %s' % arg_name
+ rhs = self.docstring_to_latex(arg_typ).strip()
+ out('%s{\\it (%s=%s)}\n\n' % (' '*12, lhs, rhs))
+ out(' \\end{Ventry}\n\n')
+ out(' '*6+'\\end{quote}\n\n')
+
+ # Returns
+ rdescr = func_doc.return_descr
+ rtype = func_doc.return_type
+ if rdescr not in (None, UNKNOWN) or rtype not in (None, UNKNOWN):
+ out(' '*6+'\\textbf{Return Value}\n')
+ out(' \\vspace{-1ex}\n\n')
+ out(' '*6+'\\begin{quote}\n')
+ if rdescr not in (None, UNKNOWN):
+ out(self.docstring_to_latex(rdescr, 6))
+ if rtype not in (None, UNKNOWN):
+ out(' '*6+'{\\it (type=%s)}\n\n' %
+ self.docstring_to_latex(rtype, 6).strip())
+ elif rtype not in (None, UNKNOWN):
+ out(self.docstring_to_latex(rtype, 6))
+ out(' '*6+'\\end{quote}\n\n')
+
+ # Raises
+ if func_doc.exception_descrs not in (None, UNKNOWN, [], ()):
+ out(' '*6+'\\textbf{Raises}\n')
+ out(' \\vspace{-1ex}\n\n')
+ out(' '*6+'\\begin{quote}\n')
+ out(' \\begin{description}\n\n')
+ for name, descr in func_doc.exception_descrs:
+ out(' '*10+'\\item[\\texttt{%s}]\n\n' %
+ plaintext_to_latex('%s' % name))
+ out(self.docstring_to_latex(descr, 10))
+ out(' \\end{description}\n\n')
+ out(' '*6+'\\end{quote}\n\n')
+
+ ## Overrides
+ if var_doc.overrides not in (None, UNKNOWN):
+ out(' Overrides: ' +
+ plaintext_to_latex('%s'%var_doc.overrides.canonical_name))
+ if (func_doc.docstring in (None, UNKNOWN) and
+ var_doc.overrides.value.docstring not in (None, UNKNOWN)):
+                out(' \\textit{(inherited documentation)}')
+ out('\n\n')
+
+ # Add version, author, warnings, requirements, notes, etc.
+ self.write_standard_fields(out, func_doc)
+
+ out(' \\end{boxedminipage}\n\n')
+
+ def function_signature(self, var_doc):
+ func_doc = var_doc.value
+ func_name = var_doc.name
+
+ # This should never happen, but just in case:
+ if func_doc in (None, UNKNOWN):
+ return ('\\raggedright \\textbf{%s}(...)' %
+ plaintext_to_latex(func_name))
+
+ if func_doc.posargs == UNKNOWN:
+ args = ['...']
+ else:
+ args = [self.func_arg(name, default) for (name, default)
+ in zip(func_doc.posargs, func_doc.posarg_defaults)]
+ if func_doc.vararg:
+ if func_doc.vararg == '...':
+ args.append('\\textit{...}')
+ else:
+ args.append('*\\textit{%s}' %
+ plaintext_to_latex(func_doc.vararg))
+ if func_doc.kwarg:
+ args.append('**\\textit{%s}' %
+ plaintext_to_latex(func_doc.kwarg))
+ return ('\\raggedright \\textbf{%s}(%s)' %
+ (plaintext_to_latex(func_name), ', '.join(args)))
+
+ def func_arg(self, name, default):
+ s = '\\textit{%s}' % plaintext_to_latex(self._arg_name(name))
+ if default is not None:
+ s += '={\\tt %s}' % default.summary_pyval_repr().to_latex(None)
+ return s
+
+ def _arg_name(self, arg):
+ if isinstance(arg, basestring):
+ return arg
+ elif len(arg) == 1:
+ return '(%s,)' % self._arg_name(arg[0])
+ else:
+ return '(%s)' % (', '.join([self._arg_name(a) for a in arg]))
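The `_arg_name` helper above recurses because Python 2 allowed tuple-unpacking parameters such as `def f(a, (b, c))`. A minimal standalone sketch of the same logic (Python 3 syntax, with the `plaintext_to_latex` escaping omitted):

```python
def arg_name(arg):
    """Render a positional argument name; nested tuples (Python 2
    tuple-unpacking parameters) are formatted recursively."""
    if isinstance(arg, str):
        return arg
    elif len(arg) == 1:
        # A one-element tuple keeps its trailing comma.
        return '(%s,)' % arg_name(arg[0])
    else:
        return '(%s)' % ', '.join(arg_name(a) for a in arg)
```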
+
+ #////////////////////////////////////////////////////////////
+ #{ Variable List
+ #////////////////////////////////////////////////////////////
+ _VAR_GROUP_HEADER = '\\multicolumn{2}{|l|}{\\textit{%s}}\\\\\n'
+
+ # Also used for the property list.
+ def write_var_list(self, out, heading, doc, value_type, seclevel=1):
+ groups = [(plaintext_to_latex(group_name),
+ doc.select_variables(group=group_name, imported=False,
+ value_type=value_type,
+ public=self._public_filter))
+ for group_name in doc.group_names()]
+
+ # Discard any empty groups; and return if they're all empty.
+ groups = [(g,vars) for (g,vars) in groups if vars]
+ if not groups: return
+
+ # Write a header.
+ self.write_start_of(out, heading)
+ out(' '+self.section(heading, seclevel))
+
+ # [xx] without this, there's a huge gap before the table -- why??
+ out(' \\vspace{-1cm}\n')
+
+ out('\\hspace{\\varindent}')
+ out('\\begin{longtable}')
+ out('{|p{\\varnamewidth}|')
+ out('p{\\vardescrwidth}|l}\n')
+ out('\\cline{1-2}\n')
+
+ # Set up the headers & footer (this makes the table span
+ # multiple pages in a happy way).
+ out('\\cline{1-2} ')
+ out('\\centering \\textbf{Name} & ')
+ out('\\centering \\textbf{Description}& \\\\\n')
+ out('\\cline{1-2}\n')
+ out('\\endhead')
+ out('\\cline{1-2}')
+ out('\\multicolumn{3}{r}{\\small\\textit{')
+ out('continued on next page}}\\\\')
+ out('\\endfoot')
+ out('\\cline{1-2}\n')
+ out('\\endlastfoot')
+
+ # Write a section for each group.
+ grouped_inh_vars = {}
+ for name, var_docs in groups:
+ self.write_var_group(out, doc, name, var_docs, grouped_inh_vars)
+
+ # Write a section for each inheritance pseudo-group (used if
+ # inheritance=='grouped')
+ if grouped_inh_vars:
+ for base in doc.mro():
+ if base in grouped_inh_vars:
+ hdr = ('Inherited from %s' %
+ plaintext_to_latex('%s' % base.canonical_name))
+ if self._crossref and base in self.class_set:
+ hdr += (' \\textit{(Section \\ref{%s})}' %
+ self.label(base))
+ out(self._VAR_GROUP_HEADER % (hdr))
+ out('\\cline{1-2}\n')
+ for var_doc in grouped_inh_vars[base]:
+                        if isinstance(var_doc.value, PropertyDoc):
+ self.write_property_list_line(out, var_doc)
+ else:
+ self.write_var_list_line(out, var_doc)
+
+ out('\\end{longtable}\n\n')
+
+ def write_var_group(self, out, doc, name, var_docs, grouped_inh_vars):
+ # Split up the var_docs list, according to the way each var
+ # should be displayed:
+ # - listed_inh_vars -- for listed inherited variables.
+ # - grouped_inh_vars -- for grouped inherited variables.
+ # - normal_vars -- for all other variables.
+ listed_inh_vars = {}
+ normal_vars = []
+ for var_doc in var_docs:
+ if var_doc.container != doc:
+ base = var_doc.container
+ if (base not in self.class_set or
+ self._inheritance == 'listed'):
+ listed_inh_vars.setdefault(base,[]).append(var_doc)
+ elif self._inheritance == 'grouped':
+ grouped_inh_vars.setdefault(base,[]).append(var_doc)
+ else:
+ normal_vars.append(var_doc)
+ else:
+ normal_vars.append(var_doc)
+
+ # Write a header for the group.
+ if name:
+ out(self._VAR_GROUP_HEADER % name)
+ out('\\cline{1-2}\n')
+ # Write an entry for each normal var:
+ for var_doc in normal_vars:
+ if isinstance(var_doc.value, PropertyDoc):
+ self.write_property_list_line(out, var_doc)
+ else:
+ self.write_var_list_line(out, var_doc)
+ # Write a subsection for inherited vars:
+ if listed_inh_vars:
+ self.write_var_inheritance_list(out, doc, listed_inh_vars)
+
+ def write_var_inheritance_list(self, out, doc, listed_inh_vars):
+ for base in doc.mro():
+ if base not in listed_inh_vars: continue
+ #if str(base.canonical_name) == 'object': continue
+ var_docs = listed_inh_vars[base]
+ if self._public_filter:
+ var_docs = [v for v in var_docs if v.is_public]
+ if var_docs:
+ hdr = ('Inherited from %s' %
+ plaintext_to_latex('%s' % base.canonical_name))
+ if self._crossref and base in self.class_set:
+ hdr += (' \\textit{(Section \\ref{%s})}' %
+ self.label(base))
+ out(self._VAR_GROUP_HEADER % hdr)
+ out('\\multicolumn{2}{|p{\\varwidth}|}{'
+ '\\raggedright %s}\\\\\n' %
+ ', '.join(['%s' % plaintext_to_latex(var_doc.name)
+ for var_doc in var_docs]))
+ out('\\cline{1-2}\n')
+
+
+ def write_var_list_line(self, out, var_doc):
+ out('\\raggedright ')
+ out(plaintext_to_latex(var_doc.name, nbsp=True, breakany=True))
+ out(' & ')
+ has_descr = var_doc.descr not in (None, UNKNOWN)
+ has_type = var_doc.type_descr not in (None, UNKNOWN)
+ has_value = var_doc.value is not UNKNOWN
+ if has_type or has_value:
+ out('\\raggedright ')
+ if has_descr:
+ out(self.docstring_to_latex(var_doc.descr, 10).strip())
+ if has_type or has_value: out('\n\n')
+ if has_value:
+ out('\\textbf{Value:} \n{\\tt %s}' %
+ var_doc.value.summary_pyval_repr().to_latex(None))
+ if has_type:
+ ptype = self.docstring_to_latex(var_doc.type_descr, 12).strip()
+ out('%s{\\it (type=%s)}' % (' '*12, ptype))
+ out('&\\\\\n')
+ out('\\cline{1-2}\n')
+
+ def write_property_list_line(self, out, var_doc):
+ prop_doc = var_doc.value
+ out('\\raggedright ')
+ out(plaintext_to_latex(var_doc.name, nbsp=True, breakany=True))
+ out(' & ')
+ has_descr = prop_doc.descr not in (None, UNKNOWN)
+ has_type = prop_doc.type_descr not in (None, UNKNOWN)
+ if has_descr or has_type:
+ out('\\raggedright ')
+ if has_descr:
+ out(self.docstring_to_latex(prop_doc.descr, 10).strip())
+ if has_type: out('\n\n')
+ if has_type:
+ ptype = self.docstring_to_latex(prop_doc.type_descr, 12).strip()
+ out('%s{\\it (type=%s)}' % (' '*12, ptype))
+ # [xx] List the fget/fset/fdel functions?
+ out('&\\\\\n')
+ out('\\cline{1-2}\n')
+
+ #////////////////////////////////////////////////////////////
+ #{ Standard Fields
+ #////////////////////////////////////////////////////////////
+
+ # Copied from HTMLWriter:
+ def write_standard_fields(self, out, doc):
+ fields = []
+ field_values = {}
+
+ #if _sort_fields: fields = STANDARD_FIELD_NAMES [XX]
+
+ for (field, arg, descr) in doc.metadata:
+ if field not in field_values:
+ fields.append(field)
+ if field.takes_arg:
+ subfields = field_values.setdefault(field,{})
+ subfields.setdefault(arg,[]).append(descr)
+ else:
+ field_values.setdefault(field,[]).append(descr)
+
+ for field in fields:
+ if field.takes_arg:
+ for arg, descrs in field_values[field].items():
+ self.write_standard_field(out, doc, field, descrs, arg)
+
+ else:
+ self.write_standard_field(out, doc, field, field_values[field])
+
+ def write_standard_field(self, out, doc, field, descrs, arg=''):
+ singular = field.singular
+ plural = field.plural
+ if arg:
+ singular += ' (%s)' % arg
+ plural += ' (%s)' % arg
+        out(self._descrlist([self.docstring_to_latex(d) for d in descrs],
+                            singular, plural, field.short))
+
+ def _descrlist(self, items, singular, plural=None, short=0):
+ if plural is None: plural = singular
+ if len(items) == 0: return ''
+ if len(items) == 1 and singular is not None:
+ return '\\textbf{%s:} %s\n\n' % (singular, items[0])
+ if short:
+ s = '\\textbf{%s:}\n' % plural
+ items = [item.strip() for item in items]
+ return s + ',\n '.join(items) + '\n\n'
+ else:
+ s = '\\textbf{%s:}\n' % plural
+ s += '\\begin{quote}\n'
+ s += ' \\begin{itemize}\n\n \item\n'
+ s += ' \\setlength{\\parskip}{0.6ex}\n'
+ s += '\n\n \item '.join(items)
+ return s + '\n\n\\end{itemize}\n\n\\end{quote}\n\n'
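The `_descrlist` branches above (single item inline, compact comma list, full itemize) can be exercised standalone. A simplified sketch with the LaTeX strings spelled out:

```python
def descrlist(items, singular, plural=None, short=False):
    # Render a metadata field's descriptions as LaTeX.
    if plural is None:
        plural = singular
    if len(items) == 0:
        return ''
    if len(items) == 1 and singular is not None:
        # One item: inline after a bold label.
        return '\\textbf{%s:} %s\n\n' % (singular, items[0])
    if short:
        # Several items, compact: a comma-separated run-in list.
        return ('\\textbf{%s:}\n' % plural +
                ',\n    '.join(item.strip() for item in items) + '\n\n')
    # Several items, verbose: an itemize inside a quote environment.
    s = ('\\textbf{%s:}\n\\begin{quote}\n'
         '  \\begin{itemize}\n\n  \\item\n' % plural)
    s += '\n\n  \\item '.join(items)
    return s + '\n\n\\end{itemize}\n\n\\end{quote}\n\n'
```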
+
+
+ #////////////////////////////////////////////////////////////
+ #{ Docstring -> LaTeX Conversion
+ #////////////////////////////////////////////////////////////
+
+ # We only need one linker, since we don't use context:
+ class _LatexDocstringLinker(markup.DocstringLinker):
+ def translate_indexterm(self, indexterm):
+            indexstr = re.sub(r'(["!|@])', r'"\1', indexterm.to_latex(self))
+ return ('\\index{%s}\\textit{%s}' % (indexstr, indexstr))
+ def translate_identifier_xref(self, identifier, label=None):
+ if label is None: label = markup.plaintext_to_latex(identifier)
+ return '\\texttt{%s}' % label
+ _docstring_linker = _LatexDocstringLinker()
+
+ def docstring_to_latex(self, docstring, indent=0, breakany=0):
+ if docstring is None: return ''
+ return docstring.to_latex(self._docstring_linker, indent=indent,
+ hyperref=self._hyperref)
+
+ #////////////////////////////////////////////////////////////
+ #{ Helpers
+ #////////////////////////////////////////////////////////////
+
+ def write_header(self, out, where):
+ out('%\n% API Documentation')
+ if self._prj_name: out(' for %s' % self._prj_name)
+ if isinstance(where, APIDoc):
+ out('\n%% %s %s' % (self.doc_kind(where), where.canonical_name))
+ else:
+ out('\n%% %s' % where)
+ out('\n%%\n%% Generated by epydoc %s\n' % epydoc.__version__)
+ out('%% [%s]\n%%\n' % time.asctime(time.localtime(time.time())))
+
+ def write_start_of(self, out, section_name):
+ out('\n' + 75*'%' + '\n')
+ out('%%' + ((71-len(section_name))/2)*' ')
+ out(section_name)
+ out(((72-len(section_name))/2)*' ' + '%%\n')
+ out(75*'%' + '\n\n')
+
+ def section(self, title, depth=0):
+ sec = self.SECTIONS[depth+self._top_section]
+ return (('%s\n\n' % sec) % plaintext_to_latex(title))
+
+ def sectionstar(self, title, depth):
+ sec = self.STARSECTIONS[depth+self._top_section]
+ return (('%s\n\n' % sec) % plaintext_to_latex(title))
+
+ def doc_kind(self, doc):
+ if isinstance(doc, ModuleDoc) and doc.is_package == True:
+ return 'Package'
+ elif (isinstance(doc, ModuleDoc) and
+ doc.canonical_name[0].startswith('script')):
+ return 'Script'
+ elif isinstance(doc, ModuleDoc):
+ return 'Module'
+ elif isinstance(doc, ClassDoc):
+ return 'Class'
+ elif isinstance(doc, ClassMethodDoc):
+ return 'Class Method'
+ elif isinstance(doc, StaticMethodDoc):
+ return 'Static Method'
+ elif isinstance(doc, RoutineDoc):
+ if isinstance(self.docindex.container(doc), ClassDoc):
+ return 'Method'
+ else:
+ return 'Function'
+ else:
+ return 'Variable'
+
+ def indexterm(self, doc, pos='only'):
+ """Mark a term or section for inclusion in the index."""
+ if not self._index: return ''
+ if isinstance(doc, RoutineDoc) and not self._index_functions:
+ return ''
+
+ pieces = []
+ while doc is not None:
+ if doc.canonical_name == UNKNOWN:
+ return '' # Give up.
+ pieces.append('%s \\textit{(%s)}' %
+ (plaintext_to_latex('%s'%doc.canonical_name),
+ self.doc_kind(doc).lower()))
+ doc = self.docindex.container(doc)
+ if doc == UNKNOWN:
+ return '' # Give up.
+
+ pieces.reverse()
+ if pos == 'only':
+ return '\\index{%s}\n' % '!'.join(pieces)
+ elif pos == 'start':
+ return '\\index{%s|(}\n' % '!'.join(pieces)
+ elif pos == 'end':
+ return '\\index{%s|)}\n' % '!'.join(pieces)
+ else:
+ raise AssertionError('Bad index position %s' % pos)
+
+ def label(self, doc):
+ return ':'.join(doc.canonical_name)
+
+    #: Map Python encoding names to the LaTeX equivalents where they differ.
+ latex_encodings = {
+ 'utf-8': 'utf8',
+ }
+
+ def get_latex_encoding(self):
+ """
+ @return: The LaTeX representation of the selected encoding.
+ @rtype: C{str}
+ """
+ enc = self._encoding.lower()
+ return self.latex_encodings.get(enc, enc)
diff --git a/python/helpers/epydoc/docwriter/plaintext.py b/python/helpers/epydoc/docwriter/plaintext.py
new file mode 100644
index 0000000..00baa70
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/plaintext.py
@@ -0,0 +1,276 @@
+# epydoc -- Plaintext output generation
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: plaintext.py 1473 2007-02-13 19:46:05Z edloper $
+
+"""
+Plaintext output generation.
+"""
+__docformat__ = 'epytext en'
+
+from epydoc.apidoc import *
+import re
+
+class PlaintextWriter:
+ def write(self, api_doc, **options):
+ result = []
+ out = result.append
+
+ self._cols = options.get('cols', 75)
+
+ try:
+ if isinstance(api_doc, ModuleDoc):
+ self.write_module(out, api_doc)
+ elif isinstance(api_doc, ClassDoc):
+ self.write_class(out, api_doc)
+ elif isinstance(api_doc, RoutineDoc):
+ self.write_function(out, api_doc)
+ else:
+ assert 0, ('%s not handled yet' % api_doc.__class__)
+ except Exception, e:
+ print '\n\n'
+ print ''.join(result)
+ raise
+
+ return ''.join(result)
+
+ def write_module(self, out, mod_doc):
+ #for n,v in mod_doc.variables.items():
+ # print n, `v.value`, `v.value.value`
+
+        # The canonical name of the module.
+ out(self.section('Module Name'))
+ out(' %s\n\n' % mod_doc.canonical_name)
+
+ # The module's description.
+ if mod_doc.descr not in (None, '', UNKNOWN):
+ out(self.section('Description'))
+ out(mod_doc.descr.to_plaintext(None, indent=4))
+
+ #out('metadata: %s\n\n' % mod_doc.metadata) # [xx] testing
+
+ self.write_list(out, 'Classes', mod_doc, value_type='class')
+ self.write_list(out, 'Functions', mod_doc, value_type='function')
+ self.write_list(out, 'Variables', mod_doc, value_type='other')
+ # hmm.. do this as just a flat list??
+ #self.write_list(out, 'Imports', mod_doc, imported=True, verbose=False)
+
+ def baselist(self, class_doc):
+ if class_doc.bases is UNKNOWN:
+ return '(unknown bases)'
+ if len(class_doc.bases) == 0: return ''
+ s = '('
+ class_parent = class_doc.canonical_name.container()
+ for i, base in enumerate(class_doc.bases):
+ if base.canonical_name is None:
+ if base.parse_repr is not UNKNOWN:
+ s += base.parse_repr
+ else:
+ s += '??'
+ elif base.canonical_name.container() == class_parent:
+ s += str(base.canonical_name[-1])
+ else:
+ s += str(base.canonical_name)
+            if i < len(class_doc.bases)-1: s += ', '
+ return s+')'
+
+ def write_class(self, out, class_doc, name=None, prefix='', verbose=True):
+ baselist = self.baselist(class_doc)
+
+        # If we're at the top level, then list the canonical name of
+ # the class; otherwise, our parent will have already printed
+ # the name of the variable containing the class.
+ if prefix == '':
+ out(self.section('Class Name'))
+ out(' %s%s\n\n' % (class_doc.canonical_name, baselist))
+ else:
+ out(prefix + 'class %s' % self.bold(str(name)) + baselist+'\n')
+
+ if not verbose: return
+
+ # Indent the body
+ if prefix != '':
+ prefix += ' | '
+
+ # The class's description.
+ if class_doc.descr not in (None, '', UNKNOWN):
+ if prefix == '':
+ out(self.section('Description', prefix))
+ out(self._descr(class_doc.descr, ' '))
+ else:
+ out(self._descr(class_doc.descr, prefix))
+
+ # List of nested classes in this class.
+ self.write_list(out, 'Methods', class_doc,
+ value_type='instancemethod', prefix=prefix,
+ noindent=len(prefix)>4)
+ self.write_list(out, 'Class Methods', class_doc,
+ value_type='classmethod', prefix=prefix)
+ self.write_list(out, 'Static Methods', class_doc,
+ value_type='staticmethod', prefix=prefix)
+ self.write_list(out, 'Nested Classes', class_doc,
+ value_type='class', prefix=prefix)
+ self.write_list(out, 'Instance Variables', class_doc,
+ value_type='instancevariable', prefix=prefix)
+ self.write_list(out, 'Class Variables', class_doc,
+ value_type='classvariable', prefix=prefix)
+
+ self.write_list(out, 'Inherited Methods', class_doc,
+ value_type='method', prefix=prefix,
+ inherited=True, verbose=False)
+ self.write_list(out, 'Inherited Instance Variables', class_doc,
+ value_type='instancevariable', prefix=prefix,
+ inherited=True, verbose=False)
+ self.write_list(out, 'Inherited Class Variables', class_doc,
+ value_type='classvariable', prefix=prefix,
+ inherited=True, verbose=False)
+ self.write_list(out, 'Inherited Nested Classes', class_doc,
+ value_type='class', prefix=prefix,
+ inherited=True, verbose=False)
+
+ def write_variable(self, out, var_doc, name=None, prefix='', verbose=True):
+ if name is None: name = var_doc.name
+ out(prefix+self.bold(str(name)))
+ if (var_doc.value not in (UNKNOWN, None) and
+ var_doc.is_alias is True and
+ var_doc.value.canonical_name not in (None, UNKNOWN)):
+ out(' = %s' % var_doc.value.canonical_name)
+ elif var_doc.value not in (UNKNOWN, None):
+ val_repr = var_doc.value.summary_pyval_repr(
+ max_len=self._cols-len(name)-len(prefix)-3)
+ out(' = %s' % val_repr.to_plaintext(None))
+ out('\n')
+ if not verbose: return
+ prefix += ' ' # indent the body.
+ if var_doc.descr not in (None, '', UNKNOWN):
+ out(self._descr(var_doc.descr, prefix))
+
+ def write_property(self, out, prop_doc, name=None, prefix='',
+ verbose=True):
+ if name is None: name = prop_doc.canonical_name
+ out(prefix+self.bold(str(name)))
+ if not verbose: return
+ prefix += ' ' # indent the body.
+
+ if prop_doc.descr not in (None, '', UNKNOWN):
+ out(self._descr(prop_doc.descr, prefix))
+
+
+ def write_function(self, out, func_doc, name=None, prefix='',
+ verbose=True):
+ if name is None: name = func_doc.canonical_name
+ self.write_signature(out, func_doc, name, prefix)
+ if not verbose: return
+
+ prefix += ' ' # indent the body.
+
+ if func_doc.descr not in (None, '', UNKNOWN):
+ out(self._descr(func_doc.descr, prefix))
+
+ if func_doc.return_descr not in (None, '', UNKNOWN):
+ out(self.section('Returns:', prefix))
+ out(self._descr(func_doc.return_descr, prefix+' '))
+
+ if func_doc.return_type not in (None, '', UNKNOWN):
+ out(self.section('Return Type:', prefix))
+ out(self._descr(func_doc.return_type, prefix+' '))
+
+ def write_signature(self, out, func_doc, name, prefix):
+ args = [self.fmt_arg(argname, default) for (argname, default)
+ in zip(func_doc.posargs, func_doc.posarg_defaults)]
+ if func_doc.vararg: args.append('*'+func_doc.vararg)
+ if func_doc.kwarg: args.append('**'+func_doc.kwarg)
+
+ out(prefix+self.bold(str(name))+'(')
+ x = left = len(prefix) + len(name) + 1
+ for i, arg in enumerate(args):
+ if x > left and x+len(arg) > 75:
+ out('\n'+prefix + ' '*len(name) + ' ')
+ x = left
+ out(arg)
+ x += len(arg)
+ if i < len(args)-1:
+ out(', ')
+ x += 2
+ out(')\n')
+
+ # [xx] tuple args!
+ def fmt_arg(self, name, default):
+ if default is None:
+ return '%s' % name
+ else:
+ default_repr = default.summary_pyval_repr()
+ return '%s=%s' % (name, default_repr.to_plaintext(None))
+
+ def write_list(self, out, heading, doc, value_type=None, imported=False,
+ inherited=False, prefix='', noindent=False,
+ verbose=True):
+ # Get a list of the VarDocs we should describe.
+ if isinstance(doc, ClassDoc):
+ var_docs = doc.select_variables(value_type=value_type,
+ imported=imported,
+ inherited=inherited)
+ else:
+ var_docs = doc.select_variables(value_type=value_type,
+ imported=imported)
+ if not var_docs: return
+
+ out(prefix+'\n')
+ if not noindent:
+ out(self.section(heading, prefix))
+ prefix += ' '
+
+ for i, var_doc in enumerate(var_docs):
+ val_doc, name = var_doc.value, var_doc.name
+
+ if verbose:
+ out(prefix+'\n')
+
+ # hmm:
+ if not verbose:
+ if isinstance(doc, ClassDoc):
+ name = var_doc.canonical_name
+ elif val_doc not in (None, UNKNOWN):
+ name = val_doc.canonical_name
+
+ if isinstance(val_doc, RoutineDoc):
+ self.write_function(out, val_doc, name, prefix, verbose)
+ elif isinstance(val_doc, PropertyDoc):
+ self.write_property(out, val_doc, name, prefix, verbose)
+ elif isinstance(val_doc, ClassDoc):
+ self.write_class(out, val_doc, name, prefix, verbose)
+ else:
+ self.write_variable(out, var_doc, name, prefix, verbose)
+
+ def _descr(self, descr, prefix):
+ s = descr.to_plaintext(None, indent=len(prefix)).rstrip()
+ s = '\n'.join([(prefix+l[len(prefix):]) for l in s.split('\n')])
+ return s+'\n'#+prefix+'\n'
+
+
+# def drawline(self, s, x):
+# s = re.sub(r'(?m)^(.{%s}) ' % x, r'\1|', s)
+# return re.sub(r'(?m)^( {,%s})$(?=\n)' % x, x*' '+'|', s)
+
+
+ #////////////////////////////////////////////////////////////
+ # Helpers
+ #////////////////////////////////////////////////////////////
+
+ def bold(self, text):
+ """Write a string in bold by overstriking."""
+ return ''.join([ch+'\b'+ch for ch in text])
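`bold` relies on overstriking: each character is followed by a backspace and itself, the old line-printer convention that pagers such as `less` and `man` render as bold text. A sketch:

```python
def bold(text):
    # Overstrike each character with itself via backspace (\b); pagers
    # render a "c\bc" sequence as a bold "c".
    return ''.join(ch + '\b' + ch for ch in text)
```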
+
+ def title(self, text, indent):
+ return ' '*indent + self.bold(text.capitalize()) + '\n\n'
+
+ def section(self, text, indent=''):
+ if indent == '':
+ return indent + self.bold(text.upper()) + '\n'
+ else:
+ return indent + self.bold(text.capitalize()) + '\n'
+
+
diff --git a/python/helpers/epydoc/docwriter/xlink.py b/python/helpers/epydoc/docwriter/xlink.py
new file mode 100644
index 0000000..e15bb7f
--- /dev/null
+++ b/python/helpers/epydoc/docwriter/xlink.py
@@ -0,0 +1,505 @@
+"""
+A Docutils_ interpreted text role for cross-API reference support.
+
+This module allows a Docutils_ document to refer to elements defined in
+external API documentation. It is possible to refer to many external APIs
+from the same document.
+
+Each API documentation is assigned a new interpreted text role: using such
+interpreted text, a user can specify an object name inside an API
+documentation. The system will convert such text into a URL and generate a
+reference to it. For example, if the API ``db`` is defined for a database
+package, then one of its methods may be referred to as::
+
+ :db:`Connection.cursor()`
+
+To define a new API, an *index file* must be provided. This file contains
+a mapping from the object name to the URL part required to resolve such object.
+
+Index file
+----------
+
+Each line in the index file describes an object.
+
+Each line contains the fully qualified name of the object and the URL at which
+the documentation is located. The fields are separated by a ``<tab>``
+character.
+
+The URLs in the file are relative to the documentation root: the system can
+be configured to add a prefix in front of each returned URL.
+
+Allowed names
+-------------
+
+When a name is used in an API text role, it is split on any *separator*.
+The defined separators are '``.``', '``::``', '``->``'. All text starting at
+the first noise character (one that is neither a separator, an alphanumeric
+character, nor '``_``') is discarded. The same algorithm is applied when the
+index file is read.
+
+First the sequence of name parts is looked for in the provided index file.
+If no matching name is found, a partial match against the trailing part of the
+names in the index is performed. If no object is found, or if the trailing part
+of the name may refer to many objects, a warning is issued and no reference
+is created.
+
+Configuration
+-------------
+
+This module provides the class `ApiLinkReader`, a replacement for the Docutils
+standalone reader. This reader specifies the settings required to configure
+the API text roles. The same command line options are exposed by Epydoc.
+
+The script ``apirst2html.py`` is a frontend for the `ApiLinkReader` reader.
+
+API Linking Options::
+
+ --external-api=NAME
+ Define a new API document. A new interpreted text
+ role NAME will be added.
+ --external-api-file=NAME:FILENAME
+ Use records in FILENAME to resolve objects in the API
+ named NAME.
+ --external-api-root=NAME:STRING
+ Use STRING as prefix for the URL generated from the
+ API NAME.
+
+.. _Docutils: http://docutils.sourceforge.net/
+"""
+
+# $Id: xlink.py 1586 2007-03-14 01:53:42Z dvarrazzo $
+__version__ = "$Revision: 1586 $"[11:-2]
+__author__ = "Daniele Varrazzo"
+__copyright__ = "Copyright (C) 2007 by Daniele Varrazzo"
+__docformat__ = 'reStructuredText en'
+
+import re
+import sys
+from optparse import OptionValueError
+
+from epydoc import log
+
+class UrlGenerator:
+ """
+    Generate a URL from an object name.
+ """
+ class IndexAmbiguous(IndexError):
+ """
+ The name looked for is ambiguous
+ """
+
+ def get_url(self, name):
+ """Look for a name and return the matching URL documentation.
+
+ First look for a fully qualified name. If not found, try with partial
+ name.
+
+ If no url exists for the given object, return `None`.
+
+ :Parameters:
+ `name` : `str`
+ the name to look for
+
+ :return: the URL that can be used to reach the `name` documentation.
+ `None` if no such URL exists.
+ :rtype: `str`
+
+ :Exceptions:
+ - `IndexError`: no object found with `name`
+ - `DocUrlGenerator.IndexAmbiguous` : more than one object found with
+ a non-fully qualified name; notice that this is an ``IndexError``
+ subclass
+ """
+ raise NotImplementedError
+
+ def get_canonical_name(self, name):
+ """
+ Convert an object name into a canonical name.
+
+        The canonical name of an object is a tuple of strings containing its
+        name fragments, split on any allowed separator ('``.``', '``::``',
+        '``->``').
+
+        Noise such as the parentheses that indicate a function is discarded.
+
+ :Parameters:
+ `name` : `str`
+ an object name, such as ``os.path.prefix()`` or ``lib::foo::bar``
+
+        :return: the fully qualified name, such as ``('os', 'path', 'prefix')`` and
+ ``('lib', 'foo', 'bar')``
+ :rtype: `tuple` of `str`
+ """
+ rv = []
+ for m in self._SEP_RE.finditer(name):
+ groups = m.groups()
+ if groups[0] is not None:
+ rv.append(groups[0])
+ elif groups[2] is not None:
+ break
+
+ return tuple(rv)
+
+ _SEP_RE = re.compile(r"""(?x)
+ # Tokenize the input into keyword, separator, noise
+ ([a-zA-Z0-9_]+) | # A keyword is a alphanum word
+ ( \. | \:\: | \-\> ) | # These are the allowed separators
+ (.) # If it doesn't fit, it's noise.
+ # Matching a single noise char is enough, because it
+ # is used to break the tokenization as soon as some noise
+ # is found.
+ """)
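The tokenizer above drives `get_canonical_name`: keywords are collected, separators are skipped, and the first noise character stops the scan. A standalone sketch of the same scheme:

```python
import re

# Same tokenization as _SEP_RE above: keyword | separator | noise.
SEP_RE = re.compile(r"""(?x)
    ([a-zA-Z0-9_]+) |       # a keyword is an alphanumeric word
    ( \. | \:\: | \-\> ) |  # the allowed separators
    (.)                     # anything else is noise and stops the scan
    """)

def canonical_name(name):
    """Split an object name into its canonical tuple of fragments."""
    rv = []
    for m in SEP_RE.finditer(name):
        keyword, sep, noise = m.groups()
        if keyword is not None:
            rv.append(keyword)
        elif noise is not None:
            break
    return tuple(rv)
```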
+
+
+class VoidUrlGenerator(UrlGenerator):
+ """
+    Doesn't actually know any URL, but doesn't report any error.
+
+ Useful if an index file is not available, but a document linking to it
+ is to be generated, and warnings are to be avoided.
+
+    It never reports any object as missing, and never returns any URL.
+ """
+ def get_url(self, name):
+ return None
+
+
+class DocUrlGenerator(UrlGenerator):
+ """
+    Read a *documentation index* and generate URLs from it.
+ """
+ def __init__(self):
+ self._exact_matches = {}
+ """
+        A map from an object's fully qualified name to its URL.
+
+        Names are keyed both as tuples of fragments and as the raw strings
+        read from the records (see `load_records()`), mostly to help
+        `_partial_names` resolve unambiguous names.
+ """
+
+        self._partial_names = {}
+        """
+        A map from partial names to the fully qualified names they may refer to.
+
+ The keys are the possible left sub-tuples of fully qualified names,
+ the values are list of strings as provided by the index.
+
+        If the list for a given tuple contains a single item, the partial
+        match is not ambiguous. In this case the string can be looked up in
+ `_exact_matches`.
+
+        If the name fragment is ambiguous, a warning may be issued to the user.
+        The items can be used to provide an informative message helping the
+        user to qualify the name unambiguously.
+ """
+
+ self.prefix = ''
+ """
+        Prefix for the URLs returned by `get_url()`.
+ """
+
+ self._filename = None
+ """
+ Not very important: only for logging.
+ """
+
+ def get_url(self, name):
+ cname = self.get_canonical_name(name)
+ url = self._exact_matches.get(cname, None)
+ if url is None:
+
+ # go for a partial match
+ vals = self._partial_names.get(cname)
+ if vals is None:
+ raise IndexError(
+ "no object named '%s' found" % (name))
+
+ elif len(vals) == 1:
+ url = self._exact_matches[vals[0]]
+
+ else:
+ raise self.IndexAmbiguous(
+ "found %d objects that '%s' may refer to: %s"
+ % (len(vals), name, ", ".join(["'%s'" % n for n in vals])))
+
+ return self.prefix + url
+
+ #{ Content loading
+ # ---------------
+
+ def clear(self):
+ """
+ Clear the current class content.
+ """
+ self._exact_matches.clear()
+ self._partial_names.clear()
+
+ def load_index(self, f):
+ """
+ Read the content of an index file.
+
+ Populate the internal maps with the file content using `load_records()`.
+
+ :Parameters:
+ f : `str` or file
+                a file name or file-like object from which to read the index.
+ """
+ self._filename = str(f)
+
+ if isinstance(f, basestring):
+ f = open(f)
+
+ self.load_records(self._iter_tuples(f))
+
+ def _iter_tuples(self, f):
+ """Iterate on a file returning 2-tuples."""
+ for nrow, row in enumerate(f):
+ # skip blank lines
+ row = row.rstrip()
+ if not row: continue
+
+ rec = row.split('\t', 2)
+ if len(rec) == 2:
+ yield rec
+ else:
+ log.warning("invalid row in '%s' row %d: '%s'"
+ % (self._filename, nrow+1, row))
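The `_iter_tuples` loop above parses one `name<tab>url` record per non-blank line and warns about malformed rows. A standalone sketch of the same parsing that collects bad row numbers instead of calling the epydoc logger:

```python
def parse_index(lines):
    """Parse index lines of the form '<name>\t<url>'.

    Returns (pairs, bad) where `pairs` is the list of (name, url)
    tuples and `bad` is the list of 1-based malformed row numbers.
    """
    pairs, bad = [], []
    for nrow, row in enumerate(lines):
        # Skip blank lines.
        row = row.rstrip()
        if not row:
            continue
        rec = row.split('\t', 2)
        if len(rec) == 2:
            pairs.append(tuple(rec))
        else:
            bad.append(nrow + 1)
    return pairs, bad
```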
+
+ def load_records(self, records):
+ """
+ Read a sequence of pairs name -> url and populate the internal maps.
+
+ :Parameters:
+ records : iterable
+ the sequence of pairs (*name*, *url*) to add to the maps.
+ """
+ for name, url in records:
+ cname = self.get_canonical_name(name)
+ if not cname:
+ log.warning("invalid object name in '%s': '%s'"
+ % (self._filename, name))
+ continue
+
+ # discard duplicates
+ if name in self._exact_matches:
+ continue
+
+ self._exact_matches[name] = url
+ self._exact_matches[cname] = url
+
+ # Link the different ambiguous fragments to the url
+ for i in range(1, len(cname)):
+ self._partial_names.setdefault(cname[i:], []).append(name)
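The suffix map built at the end of `load_records` can be sketched standalone: every proper trailing sub-tuple of a canonical name becomes a shorthand key, and a shorthand resolves only when exactly one object claims it. The module names below are illustrative:

```python
def build_partial_map(cnames):
    # Map every proper trailing sub-tuple of each canonical name to the
    # list of full names it could abbreviate.
    partial = {}
    for cname in cnames:
        for i in range(1, len(cname)):
            partial.setdefault(cname[i:], []).append(cname)
    return partial
```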
+
+#{ API register
+# ------------
+
+api_register = {}
+"""
+Mapping from the API name to the `UrlGenerator` to be used.
+
+Use `register_api()` to add new generators to the register.
+"""
+
+def register_api(name, generator=None):
+ """Register the API `name` into the `api_register`.
+
+ A registered API will be available to the markup as the interpreted text
+ role ``name``.
+
+ If a `generator` is not provided, register a `VoidUrlGenerator` instance:
+ in this case no warning will be issued for missing names, but no URL will
+ be generated and all the dotted names will simply be rendered as literals.
+
+ :Parameters:
+ `name` : `str`
+ the name of the generator to be registered
+ `generator` : `UrlGenerator`
+ the object to register to translate names into URLs.
+ """
+ if generator is None:
+ generator = VoidUrlGenerator()
+
+ api_register[name] = generator
+
+def set_api_file(name, file):
+    """Set a URL generator populated with data from `file`.
+
+ Use `file` to populate a new `DocUrlGenerator` instance and register it
+ as `name`.
+
+ :Parameters:
+ `name` : `str`
+ the name of the generator to be registered
+ `file` : `str` or file
+            the file to parse to populate the URL generator
+ """
+ generator = DocUrlGenerator()
+ generator.load_index(file)
+ register_api(name, generator)
+
+def set_api_root(name, prefix):
+ """Set the root for the URLs returned by a registered URL generator.
+
+ :Parameters:
+ `name` : `str`
+ the name of the generator to be updated
+ `prefix` : `str`
+            the prefix for the generated URLs
+
+ :Exceptions:
+        - `KeyError`: `name` is not a registered generator
+ """
+ api_register[name].prefix = prefix
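The register pattern above (a module-level dict from API name to generator, with a no-op fallback when no generator is given) can be sketched in isolation. `VoidGen` and `DocGen` below are simplified stand-ins for `VoidUrlGenerator` and `DocUrlGenerator`, and the URL store is populated directly rather than via `load_index()`:

```python
class VoidGen(object):
    """Stand-in for VoidUrlGenerator: never resolves, never complains."""
    def get_url(self, name):
        return None

class DocGen(object):
    """Stand-in for DocUrlGenerator: prefix plus a name-to-URL map."""
    def __init__(self):
        self.prefix = ''
        self._urls = {}
    def get_url(self, name):
        return self.prefix + self._urls[name]

register = {}

def register_api(name, generator=None):
    # registering without a generator installs the silent fallback
    register[name] = generator if generator is not None else VoidGen()

gen = DocGen()
gen._urls['mod.func'] = 'mod.html#func'
register_api('myapi', gen)
register['myapi'].prefix = 'http://example.invalid/'   # like set_api_root
register_api('silent')                                 # VoidGen fallback
```

Setting the prefix after registration mirrors `set_api_root()`: the generator is mutated in place, so links created later pick up the new root.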
+
+######################################################################
+# Below this point requires docutils.
+try:
+ import docutils
+ from docutils.parsers.rst import roles
+ from docutils import nodes, utils
+ from docutils.readers.standalone import Reader
+except ImportError:
+ docutils = roles = nodes = utils = None
+ class Reader: settings_spec = ()
+
+def create_api_role(name, problematic):
+ """
+ Create and register a new role to create links for an API documentation.
+
+ Create a role called `name`, which will use the URL resolver registered as
+ ``name`` in `api_register` to create a link for an object.
+
+ :Parameters:
+ `name` : `str`
+ name of the role to create.
+        `problematic` : `bool`
+            if True, the registered role will create problematic nodes in
+            case of failed references. If False, a warning will still be
+            issued, but the output will appear as an ordinary literal.
+ """
+ def resolve_api_name(n, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+ if docutils is None:
+ raise AssertionError('requires docutils')
+
+        # render the node in monospace font
+ text = utils.unescape(text)
+ node = nodes.literal(rawtext, text, **options)
+
+        # Get the resolver from the register and create a URL from it.
+ try:
+ url = api_register[name].get_url(text)
+ except IndexError, exc:
+ msg = inliner.reporter.warning(str(exc), line=lineno)
+ if problematic:
+ prb = inliner.problematic(rawtext, text, msg)
+ return [prb], [msg]
+ else:
+ return [node], []
+
+ if url is not None:
+ node = nodes.reference(rawtext, '', node, refuri=url, **options)
+ return [node], []
+
+ roles.register_local_role(name, resolve_api_name)
+
+
+#{ Command line parsing
+# --------------------
+
+
+def split_name(value):
+ """
+ Split an option in form ``NAME:VALUE`` and check if ``NAME`` exists.
+ """
+ parts = value.split(':', 1)
+ if len(parts) != 2:
+ raise OptionValueError(
+ "option value must be specified as NAME:VALUE; got '%s' instead"
+ % value)
+
+ name, val = parts
+
+ if name not in api_register:
+ raise OptionValueError(
+ "the name '%s' has not been registered; use --external-api"
+ % name)
+
+ return (name, val)
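The `NAME:VALUE` parsing above splits on the first colon only, so the value part may itself contain colons (a URL, for instance). A minimal stand-alone sketch, with the registry check omitted and `ValueError` standing in for `OptionValueError`:

```python
def split_name(value):
    """Split 'NAME:VALUE' on the first colon only."""
    parts = value.split(':', 1)
    if len(parts) != 2:
        raise ValueError(
            "option value must be specified as NAME:VALUE; got '%s'" % value)
    return tuple(parts)
```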
+
+
+class ApiLinkReader(Reader):
+ """
+ A Docutils standalone reader allowing external documentation links.
+
+    The reader configures the URL resolvers the first time `read()` is
+    invoked.
+ """
+ #: The option parser configuration.
+ settings_spec = (
+ 'API Linking Options',
+ None,
+ ((
+ 'Define a new API document. A new interpreted text role NAME will be '
+ 'added.',
+ ['--external-api'],
+ {'metavar': 'NAME', 'action': 'append'}
+ ), (
+ 'Use records in FILENAME to resolve objects in the API named NAME.',
+ ['--external-api-file'],
+ {'metavar': 'NAME:FILENAME', 'action': 'append'}
+ ), (
+ 'Use STRING as prefix for the URL generated from the API NAME.',
+ ['--external-api-root'],
+ {'metavar': 'NAME:STRING', 'action': 'append'}
+ ),)) + Reader.settings_spec
+
+ def __init__(self, *args, **kwargs):
+ if docutils is None:
+ raise AssertionError('requires docutils')
+ Reader.__init__(self, *args, **kwargs)
+
+ def read(self, source, parser, settings):
+ self.read_configuration(settings, problematic=True)
+ return Reader.read(self, source, parser, settings)
+
+ def read_configuration(self, settings, problematic=True):
+ """
+ Read the configuration for the configured URL resolver.
+
+ Register a new role for each configured API.
+
+ :Parameters:
+ `settings`
+ the settings structure containing the options to read.
+ `problematic` : `bool`
+ if True, the registered role will create problematic nodes in
+ case of failed references. If False, a warning will be raised
+ anyway, but the output will appear as an ordinary literal.
+ """
+ # Read config only once
+ if hasattr(self, '_conf'):
+ return
+ ApiLinkReader._conf = True
+
+ try:
+ if settings.external_api is not None:
+ for name in settings.external_api:
+ register_api(name)
+ create_api_role(name, problematic=problematic)
+
+ if settings.external_api_file is not None:
+ for name, file in map(split_name, settings.external_api_file):
+ set_api_file(name, file)
+
+ if settings.external_api_root is not None:
+ for name, root in map(split_name, settings.external_api_root):
+ set_api_root(name, root)
+
+ except OptionValueError, exc:
+ print >>sys.stderr, "%s: %s" % (exc.__class__.__name__, exc)
+ sys.exit(2)
+
+ read_configuration = classmethod(read_configuration)
diff --git a/python/helpers/epydoc/gui.py b/python/helpers/epydoc/gui.py
new file mode 100644
index 0000000..dbd388a
--- /dev/null
+++ b/python/helpers/epydoc/gui.py
@@ -0,0 +1,1148 @@
+#!/usr/bin/env python
+#
+# objdoc: epydoc command-line interface
+# Edward Loper
+#
+# Created [03/15/02 10:31 PM]
+# $Id: gui.py 646 2004-03-19 19:01:37Z edloper $
+#
+
+"""
+Graphical interface to epydoc. This interface might be useful for
+systems where it's inconvenient to use the command-line interface
+(such as Windows). It supports many (but not all) of the features
+that are supported by the command-line interface. It also supports
+loading and saving of X{project files}, which store a set of related
+modules, and the options that should be used to generate the
+documentation for those modules.
+
+Usage::
+ epydocgui [OPTIONS] [FILE.prj | MODULES...]
+
+ FILE.prj An epydoc GUI project file.
+ MODULES... A list of Python modules to document.
+ -V, --version Print the version of epydoc.
+ -h, -?, --help, --usage Display this usage message
+ --debug Do not suppress error messages
+
+@todo: Use ini-style project files, rather than pickles (using the
+same format as the CLI).
+"""
+__docformat__ = 'epytext en'
+
+import sys, os.path, re, glob
+from Tkinter import *
+from tkFileDialog import askopenfilename, asksaveasfilename
+from thread import start_new_thread, exit_thread
+from pickle import dump, load
+
+# askdirectory is only defined in python 2.2+; fall back on
+# asksaveasfilename if it's not available.
+try: from tkFileDialog import askdirectory
+except: askdirectory = None
+
+# Include support for Zope, if it's available.
+try: import ZODB
+except: pass
+
+##/////////////////////////////////////////////////////////////////////////
+## CONSTANTS
+##/////////////////////////////////////////////////////////////////////////
+
+DEBUG = 0
+
+# Colors for tkinter display
+BG_COLOR='#e0e0e0'
+ACTIVEBG_COLOR='#e0e0e0'
+TEXT_COLOR='black'
+ENTRYSELECT_COLOR = ACTIVEBG_COLOR
+SELECT_COLOR = '#208070'
+MESSAGE_COLOR = '#000060'
+ERROR_COLOR = '#600000'
+GUIERROR_COLOR = '#600000'
+WARNING_COLOR = '#604000'
+HEADER_COLOR = '#000000'
+
+# Convenience dictionaries for specifying widget colors
+COLOR_CONFIG = {'background':BG_COLOR, 'highlightcolor': BG_COLOR,
+ 'foreground':TEXT_COLOR, 'highlightbackground': BG_COLOR}
+ENTRY_CONFIG = {'background':BG_COLOR, 'highlightcolor': BG_COLOR,
+ 'foreground':TEXT_COLOR, 'highlightbackground': BG_COLOR,
+ 'selectbackground': ENTRYSELECT_COLOR,
+ 'selectforeground': TEXT_COLOR}
+SB_CONFIG = {'troughcolor':BG_COLOR, 'activebackground':BG_COLOR,
+ 'background':BG_COLOR, 'highlightbackground':BG_COLOR}
+LISTBOX_CONFIG = {'highlightcolor': BG_COLOR, 'highlightbackground': BG_COLOR,
+ 'foreground':TEXT_COLOR, 'selectforeground': TEXT_COLOR,
+ 'selectbackground': ACTIVEBG_COLOR, 'background':BG_COLOR}
+BUTTON_CONFIG = {'background':BG_COLOR, 'highlightthickness':0, 'padx':4,
+ 'highlightbackground': BG_COLOR, 'foreground':TEXT_COLOR,
+ 'highlightcolor': BG_COLOR, 'activeforeground': TEXT_COLOR,
+ 'activebackground': ACTIVEBG_COLOR, 'pady':0}
+CBUTTON_CONFIG = {'background':BG_COLOR, 'highlightthickness':0, 'padx':4,
+ 'highlightbackground': BG_COLOR, 'foreground':TEXT_COLOR,
+ 'highlightcolor': BG_COLOR, 'activeforeground': TEXT_COLOR,
+ 'activebackground': ACTIVEBG_COLOR, 'pady':0,
+ 'selectcolor': SELECT_COLOR}
+SHOWMSG_CONFIG = CBUTTON_CONFIG.copy()
+SHOWMSG_CONFIG['foreground'] = MESSAGE_COLOR
+SHOWWRN_CONFIG = CBUTTON_CONFIG.copy()
+SHOWWRN_CONFIG['foreground'] = WARNING_COLOR
+SHOWERR_CONFIG = CBUTTON_CONFIG.copy()
+SHOWERR_CONFIG['foreground'] = ERROR_COLOR
+
+# Colors for the progress bar
+PROGRESS_HEIGHT = 16
+PROGRESS_WIDTH = 200
+PROGRESS_BG='#305060'
+PROGRESS_COLOR1 = '#30c070'
+PROGRESS_COLOR2 = '#60ffa0'
+PROGRESS_COLOR3 = '#106030'
+
+# On tkinter canvases, where's the zero coordinate?
+if sys.platform.lower().startswith('win'):
+ DX = 3; DY = 3
+ DH = 0; DW = 7
+else:
+ DX = 1; DY = 1
+ DH = 1; DW = 3
+
+# How much of the progress is in each subtask?
+IMPORT_PROGRESS = 0.1
+BUILD_PROGRESS = 0.2
+WRITE_PROGRESS = 1.0 - BUILD_PROGRESS - IMPORT_PROGRESS
+
+##/////////////////////////////////////////////////////////////////////////
+## IMAGE CONSTANTS
+##/////////////////////////////////////////////////////////////////////////
+
+UP_GIF = '''\
+R0lGODlhCwAMALMAANnZ2QDMmQCZZgBmZgAAAAAzM////////wAAAAAAAAAAAAAAAAAAAAAAAAAA
+AAAAACH5BAEAAAAALAAAAAALAAwAAAQjEMhJKxCW4gzCIJxXZIEwFGDlDadqsii1sq1U0nA64+ON
+5xEAOw==
+'''
+DOWN_GIF = '''\
+R0lGODlhCwAMALMAANnZ2QDMmQCZZgBmZgAAAAAzM////////wAAAAAAAAAAAAAAAAAAAAAAAAAA
+AAAAACH5BAEAAAAALAAAAAALAAwAAAQmEIQxgLVUCsppsVPngVtXEFfIfWk5nBe4xuSL0tKLy/cu
+7JffJQIAOw==
+'''
+LEFT_GIF='''\
+R0lGODlhDAALAKIAANnZ2QDMmQCZZgBmZgAAAAAzM////////yH5BAEAAAAALAAAAAAMAAsAAAM4
+CLocgaCrESiDoBshOAoAgBEyMzgAEIGCowsiOLoLgEBVOLoIqlSFo4OgC1RYM4Ogq1RYg6DLVJgA
+Ow==
+'''
+RIGHT_GIF='''\
+R0lGODlhDAALAKIAANnZ2QDMmQBmZgCZZgAzMwAAAP///////yH5BAEAAAAALAAAAAAMAAsAAAM5
+GIGgyzIYgaCrIigTgaALIigyEQiqKLoTgaAoujuDgKJLVAgqIoJEBQAIIkKEhaArRFgIukqFoMsJ
+ADs=
+'''
+
+##/////////////////////////////////////////////////////////////////////////
+## MessageIO
+##/////////////////////////////////////////////////////////////////////////
+
+from epydoc import log
+from epydoc.util import wordwrap
+class GUILogger(log.Logger):
+ _STAGES = [40, 7, 1, 3, 1, 30, 1, 2, 100]
+
+ def __init__(self, progress, cancel):
+ self._progress = progress
+ self._cancel = cancel
+ self.clear()
+
+ def clear(self):
+ self._messages = []
+ self._n = 0
+ self._stage = 0
+ self._message_blocks = []
+
+ def log(self, level, message):
+ message = wordwrap(str(message)).rstrip() + '\n'
+ if self._message_blocks:
+ self._message_blocks[-1][-1].append( (level, message) )
+ else:
+ self._messages.append( (level, message) )
+
+ def start_block(self, header):
+ self._message_blocks.append( (header, []) )
+
+ def end_block(self):
+ header, messages = self._message_blocks.pop()
+ if messages:
+ self._messages.append( ('uline', ' '*75+'\n') )
+ self.log('header', header)
+ self._messages += messages
+ self._messages.append( ('uline', ' '*75+'\n') )
+
+ def start_progress(self, header=None):
+ self.log(log.INFO, header)
+ self._stage += 1
+
+ def end_progress(self):
+ pass
+
+ def progress(self, percent, message=''):
+ if self._cancel[0]: exit_thread()
+ i = self._stage - 1
+ p = ((sum(self._STAGES[:i]) + percent*self._STAGES[i]) /
+ float(sum(self._STAGES)))
+ self._progress[0] = p
+
+ def read(self):
+ if self._n >= len(self._messages):
+ return None, None
+ else:
+ self._n += 1
+ return self._messages[self._n-1]
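The stage weighting used by `progress()` above reduces to a small pure function: overall progress is the sum of the completed stages' weights plus the fraction done in the current stage, normalized by the total weight. A sketch using the same `_STAGES` values:

```python
STAGES = [40, 7, 1, 3, 1, 30, 1, 2, 100]

def overall_progress(stage, percent, stages=STAGES):
    """Combined progress for 1-based `stage`, `percent` done within it."""
    i = stage - 1
    # weights of finished stages, plus the partially finished current one
    return (sum(stages[:i]) + percent * stages[i]) / float(sum(stages))
```

With these weights the total is 185, so finishing the first stage alone moves the bar to 40/185, roughly 22%.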
+
+##/////////////////////////////////////////////////////////////////////////
+## THREADED DOCUMENTER
+##/////////////////////////////////////////////////////////////////////////
+
+def document(options, cancel, done):
+ """
+ Create the documentation for C{modules}, using the options
+ specified by C{options}. C{document} is designed to be started in
+ its own thread by L{EpydocGUI._go}.
+
+ @param options: The options to use for generating documentation.
+ This includes keyword options that can be given to
+ L{docwriter.html.HTMLWriter}, as well as the option C{target}, which
+ controls where the output is written to.
+ @type options: C{dictionary}
+ """
+ from epydoc.docwriter.html import HTMLWriter
+ from epydoc.docbuilder import build_doc_index
+ import epydoc.docstringparser
+
+ # Set the default docformat.
+ docformat = options.get('docformat', 'epytext')
+ epydoc.docstringparser.DEFAULT_DOCFORMAT = docformat
+
+ try:
+ parse = options['introspect_or_parse'] in ('parse', 'both')
+ introspect = options['introspect_or_parse'] in ('introspect', 'both')
+ docindex = build_doc_index(options['modules'], parse, introspect)
+ html_writer = HTMLWriter(docindex, **options)
+ log.start_progress('Writing HTML docs to %r' % options['target'])
+ html_writer.write(options['target'])
+ log.end_progress()
+
+ # We're done.
+ log.warning('Finished!')
+ done[0] = 'done'
+
+ except SystemExit:
+ # Cancel.
+ log.error('Cancelled!')
+ done[0] ='cancel'
+ raise
+ except Exception, e:
+ # We failed.
+ log.error('Internal error: %s' % e)
+ done[0] ='cancel'
+ raise
+ except:
+ # We failed.
+ log.error('Internal error!')
+ done[0] ='cancel'
+ raise
+
+##/////////////////////////////////////////////////////////////////////////
+## GUI
+##/////////////////////////////////////////////////////////////////////////
+
+class EpydocGUI:
+ """
+    A graphical user interface to epydoc.
+ """
+ def __init__(self):
+ self._afterid = 0
+ self._progress = [None]
+ self._cancel = [0]
+ self._filename = None
+ self._init_dir = None
+
+        # Store a copy of sys.modules, so that we can restore it
+        # later. This is useful for making sure that we reload
+        # everything when we re-build the documentation. This will
+        # *not* reload the modules that are present when the EpydocGUI
+        # is created, but that should only contain some builtins, some
+        # epydoc modules, Tkinter, pickle, and thread.
+ self._old_modules = sys.modules.keys()
+
+ # Create the main window.
+ self._root = Tk()
+ self._root['background']=BG_COLOR
+ self._root.bind('<Control-q>', self.destroy)
+ self._root.bind('<Alt-q>', self.destroy)
+ self._root.bind('<Alt-x>', self.destroy)
+ self._root.bind('<Control-x>', self.destroy)
+ #self._root.bind('<Control-d>', self.destroy)
+ self._root.title('Epydoc')
+ self._rootframe = Frame(self._root, background=BG_COLOR,
+ border=2, relief='raised')
+ self._rootframe.pack(expand=1, fill='both', padx=2, pady=2)
+
+ # Set up the basic frames. Do not pack the options frame or
+ # the messages frame; the GUI has buttons to expand them.
+ leftframe = Frame(self._rootframe, background=BG_COLOR)
+ leftframe.pack(expand=1, fill='both', side='left')
+ optsframe = Frame(self._rootframe, background=BG_COLOR)
+ mainframe = Frame(leftframe, background=BG_COLOR)
+ mainframe.pack(expand=1, fill='both', side='top')
+ ctrlframe = Frame(mainframe, background=BG_COLOR)
+ ctrlframe.pack(side="bottom", fill='x', expand=0)
+ msgsframe = Frame(leftframe, background=BG_COLOR)
+
+ self._optsframe = optsframe
+ self._msgsframe = msgsframe
+
+ # Initialize all the frames, etc.
+ self._init_menubar()
+ self._init_progress_bar(mainframe)
+ self._init_module_list(mainframe)
+ self._init_options(optsframe, ctrlframe)
+ self._init_messages(msgsframe, ctrlframe)
+ self._init_bindings()
+
+ # Set up logging
+ self._logger = GUILogger(self._progress, self._cancel)
+ log.register_logger(self._logger)
+
+ # Open the messages pane by default.
+ self._messages_toggle()
+
+ ## For testing options:
+ #self._options_toggle()
+
+ def _init_menubar(self):
+ menubar = Menu(self._root, borderwidth=2,
+ background=BG_COLOR,
+ activebackground=BG_COLOR)
+ filemenu = Menu(menubar, tearoff=0)
+ filemenu.add_command(label='New Project', underline=0,
+ command=self._new,
+ accelerator='Ctrl-n')
+ filemenu.add_command(label='Open Project', underline=0,
+ command=self._open,
+ accelerator='Ctrl-o')
+ filemenu.add_command(label='Save Project', underline=0,
+ command=self._save,
+ accelerator='Ctrl-s')
+ filemenu.add_command(label='Save As..', underline=5,
+ command=self._saveas,
+ accelerator='Ctrl-a')
+ filemenu.add_separator()
+ filemenu.add_command(label='Exit', underline=1,
+ command=self.destroy,
+ accelerator='Ctrl-x')
+ menubar.add_cascade(label='File', underline=0, menu=filemenu)
+ gomenu = Menu(menubar, tearoff=0)
+        gomenu.add_command(label='Run Epydoc', command=self._go,
+                           underline=0, accelerator='Alt-g')
+ menubar.add_cascade(label='Run', menu=gomenu, underline=0)
+ self._root.config(menu=menubar)
+
+ def _init_module_list(self, mainframe):
+ mframe1 = Frame(mainframe, relief='groove', border=2,
+ background=BG_COLOR)
+ mframe1.pack(side="top", fill='both', expand=1, padx=4, pady=3)
+ l = Label(mframe1, text="Modules to document:",
+ justify='left', **COLOR_CONFIG)
+ l.pack(side='top', fill='none', anchor='nw', expand=0)
+ mframe2 = Frame(mframe1, background=BG_COLOR)
+ mframe2.pack(side="top", fill='both', expand=1)
+ mframe3 = Frame(mframe1, background=BG_COLOR)
+ mframe3.pack(side="bottom", fill='x', expand=0)
+ self._module_list = Listbox(mframe2, width=80, height=10,
+ selectmode='multiple',
+ **LISTBOX_CONFIG)
+ self._module_list.pack(side="left", fill='both', expand=1)
+ sb = Scrollbar(mframe2, orient='vertical',**SB_CONFIG)
+ sb['command']=self._module_list.yview
+ sb.pack(side='right', fill='y')
+ self._module_list.config(yscrollcommand=sb.set)
+ Label(mframe3, text="Add:", **COLOR_CONFIG).pack(side='left')
+ self._module_entry = Entry(mframe3, **ENTRY_CONFIG)
+ self._module_entry.pack(side='left', fill='x', expand=1)
+ self._module_entry.bind('<Return>', self._entry_module)
+ self._module_delete = Button(mframe3, text="Remove",
+ command=self._delete_module,
+ **BUTTON_CONFIG)
+ self._module_delete.pack(side='right', expand=0, padx=2)
+ self._module_browse = Button(mframe3, text="Browse",
+ command=self._browse_module,
+ **BUTTON_CONFIG)
+ self._module_browse.pack(side='right', expand=0, padx=2)
+
+ def _init_progress_bar(self, mainframe):
+ pframe1 = Frame(mainframe, background=BG_COLOR)
+ pframe1.pack(side="bottom", fill='x', expand=0)
+ self._go_button = Button(pframe1, width=4, text='Start',
+ underline=0, command=self._go,
+ **BUTTON_CONFIG)
+ self._go_button.pack(side='left', padx=4)
+ pframe2 = Frame(pframe1, relief='groove', border=2,
+ background=BG_COLOR)
+ pframe2.pack(side="top", fill='x', expand=1, padx=4, pady=3)
+ Label(pframe2, text='Progress:', **COLOR_CONFIG).pack(side='left')
+ H = self._H = PROGRESS_HEIGHT
+ W = self._W = PROGRESS_WIDTH
+ c = self._canvas = Canvas(pframe2, height=H+DH, width=W+DW,
+ background=PROGRESS_BG, border=0,
+ selectborderwidth=0, relief='sunken',
+ insertwidth=0, insertborderwidth=0,
+ highlightbackground=BG_COLOR)
+ self._canvas.pack(side='left', fill='x', expand=1, padx=4)
+ self._r2 = c.create_rectangle(0,0,0,0, outline=PROGRESS_COLOR2)
+ self._r3 = c.create_rectangle(0,0,0,0, outline=PROGRESS_COLOR3)
+ self._r1 = c.create_rectangle(0,0,0,0, fill=PROGRESS_COLOR1,
+ outline='')
+ self._canvas.bind('<Configure>', self._configure)
+
+ def _init_messages(self, msgsframe, ctrlframe):
+ self._downImage = PhotoImage(master=self._root, data=DOWN_GIF)
+ self._upImage = PhotoImage(master=self._root, data=UP_GIF)
+
+ # Set up the messages control frame
+ b1 = Button(ctrlframe, text="Messages", justify='center',
+ command=self._messages_toggle, underline=0,
+ highlightthickness=0, activebackground=BG_COLOR,
+ border=0, relief='flat', padx=2, pady=0, **COLOR_CONFIG)
+ b2 = Button(ctrlframe, image=self._downImage, relief='flat',
+ border=0, command=self._messages_toggle,
+ activebackground=BG_COLOR, **COLOR_CONFIG)
+ self._message_button = b2
+ self._messages_visible = 0
+ b2.pack(side="left")
+ b1.pack(side="left")
+
+ f = Frame(msgsframe, background=BG_COLOR)
+ f.pack(side='top', expand=1, fill='both')
+ messages = Text(f, width=80, height=10, **ENTRY_CONFIG)
+ messages['state'] = 'disabled'
+ messages.pack(fill='both', expand=1, side='left')
+ self._messages = messages
+
+ # Add a scrollbar
+ sb = Scrollbar(f, orient='vertical', **SB_CONFIG)
+ sb.pack(fill='y', side='right')
+ sb['command'] = messages.yview
+ messages['yscrollcommand'] = sb.set
+
+ # Set up some colorization tags
+ messages.tag_config('error', foreground=ERROR_COLOR)
+ messages.tag_config('warning', foreground=WARNING_COLOR)
+ messages.tag_config('guierror', foreground=GUIERROR_COLOR)
+ messages.tag_config('message', foreground=MESSAGE_COLOR)
+ messages.tag_config('header', foreground=HEADER_COLOR)
+ messages.tag_config('uline', underline=1)
+
+ # Keep track of tag state..
+ self._in_header = 0
+ self._last_tag = 'error'
+
+ # Add some buttons
+ buttons = Frame(msgsframe, background=BG_COLOR)
+ buttons.pack(side='bottom', fill='x')
+ self._show_errors = IntVar(self._root)
+ self._show_errors.set(1)
+ self._show_warnings = IntVar(self._root)
+ self._show_warnings.set(1)
+ self._show_messages = IntVar(self._root)
+ self._show_messages.set(0)
+ Checkbutton(buttons, text='Show Messages', var=self._show_messages,
+ command=self._update_msg_tags,
+ **SHOWMSG_CONFIG).pack(side='left')
+ Checkbutton(buttons, text='Show Warnings', var=self._show_warnings,
+ command=self._update_msg_tags,
+ **SHOWWRN_CONFIG).pack(side='left')
+ Checkbutton(buttons, text='Show Errors', var=self._show_errors,
+ command=self._update_msg_tags,
+ **SHOWERR_CONFIG).pack(side='left')
+ self._update_msg_tags()
+
+ def _update_msg_tags(self, *e):
+ elide_errors = not self._show_errors.get()
+ elide_warnings = not self._show_warnings.get()
+ elide_messages = not self._show_messages.get()
+ elide_headers = elide_errors and elide_warnings
+ self._messages.tag_config('error', elide=elide_errors)
+ self._messages.tag_config('guierror', elide=elide_errors)
+ self._messages.tag_config('warning', elide=elide_warnings)
+ self._messages.tag_config('message', elide=elide_messages)
+ self._messages.tag_config('header', elide=elide_headers)
+
+ def _init_options(self, optsframe, ctrlframe):
+ self._leftImage=PhotoImage(master=self._root, data=LEFT_GIF)
+ self._rightImage=PhotoImage(master=self._root, data=RIGHT_GIF)
+
+ # Set up the options control frame
+ b1 = Button(ctrlframe, text="Options", justify='center',
+ border=0, relief='flat',
+ command=self._options_toggle, padx=2,
+ underline=0, pady=0, highlightthickness=0,
+ activebackground=BG_COLOR, **COLOR_CONFIG)
+ b2 = Button(ctrlframe, image=self._rightImage, relief='flat',
+ border=0, command=self._options_toggle,
+ activebackground=BG_COLOR, **COLOR_CONFIG)
+ self._option_button = b2
+ self._options_visible = 0
+ b2.pack(side="right")
+ b1.pack(side="right")
+
+ oframe2 = Frame(optsframe, relief='groove', border=2,
+ background=BG_COLOR)
+ oframe2.pack(side="right", fill='both',
+ expand=0, padx=4, pady=3, ipadx=4)
+
+ Label(oframe2, text="Project Options", font='helvetica -16',
+ **COLOR_CONFIG).pack(anchor='w')
+ oframe3 = Frame(oframe2, background=BG_COLOR)
+ oframe3.pack(fill='x')
+ oframe4 = Frame(oframe2, background=BG_COLOR)
+ oframe4.pack(fill='x')
+ oframe7 = Frame(oframe2, background=BG_COLOR)
+ oframe7.pack(fill='x')
+ div = Frame(oframe2, background=BG_COLOR, border=1, relief='sunk')
+ div.pack(ipady=1, fill='x', padx=4, pady=2)
+
+ Label(oframe2, text="Help File", font='helvetica -16',
+ **COLOR_CONFIG).pack(anchor='w')
+ oframe5 = Frame(oframe2, background=BG_COLOR)
+ oframe5.pack(fill='x')
+ div = Frame(oframe2, background=BG_COLOR, border=1, relief='sunk')
+ div.pack(ipady=1, fill='x', padx=4, pady=2)
+
+ Label(oframe2, text="CSS Stylesheet", font='helvetica -16',
+ **COLOR_CONFIG).pack(anchor='w')
+ oframe6 = Frame(oframe2, background=BG_COLOR)
+ oframe6.pack(fill='x')
+
+ #==================== oframe3 ====================
+ # -n NAME, --name NAME
+ row = 0
+ l = Label(oframe3, text="Project Name:", **COLOR_CONFIG)
+ l.grid(row=row, column=0, sticky='e')
+ self._name_entry = Entry(oframe3, **ENTRY_CONFIG)
+ self._name_entry.grid(row=row, column=1, sticky='ew', columnspan=3)
+
+ # -u URL, --url URL
+ row += 1
+ l = Label(oframe3, text="Project URL:", **COLOR_CONFIG)
+ l.grid(row=row, column=0, sticky='e')
+ self._url_entry = Entry(oframe3, **ENTRY_CONFIG)
+ self._url_entry.grid(row=row, column=1, sticky='ew', columnspan=3)
+
+ # -o DIR, --output DIR
+ row += 1
+ l = Label(oframe3, text="Output Directory:", **COLOR_CONFIG)
+ l.grid(row=row, column=0, sticky='e')
+ self._out_entry = Entry(oframe3, **ENTRY_CONFIG)
+ self._out_entry.grid(row=row, column=1, sticky='ew', columnspan=2)
+ self._out_browse = Button(oframe3, text="Browse",
+ command=self._browse_out,
+ **BUTTON_CONFIG)
+ self._out_browse.grid(row=row, column=3, sticky='ew', padx=2)
+
+ #==================== oframe4 ====================
+ # --no-frames
+ row = 0
+ self._frames_var = IntVar(self._root)
+ self._frames_var.set(1)
+ l = Label(oframe4, text="Generate a frame-based table of contents",
+ **COLOR_CONFIG)
+ l.grid(row=row, column=1, sticky='w')
+ cb = Checkbutton(oframe4, var=self._frames_var, **CBUTTON_CONFIG)
+ cb.grid(row=row, column=0, sticky='e')
+
+ # --no-private
+ row += 1
+ self._private_var = IntVar(self._root)
+ self._private_var.set(1)
+ l = Label(oframe4, text="Generate documentation for private objects",
+ **COLOR_CONFIG)
+ l.grid(row=row, column=1, sticky='w')
+ cb = Checkbutton(oframe4, var=self._private_var, **CBUTTON_CONFIG)
+ cb.grid(row=row, column=0, sticky='e')
+
+ # --show-imports
+ row += 1
+ self._imports_var = IntVar(self._root)
+ self._imports_var.set(0)
+ l = Label(oframe4, text="List imported classes and functions",
+ **COLOR_CONFIG)
+ l.grid(row=row, column=1, sticky='w')
+ cb = Checkbutton(oframe4, var=self._imports_var, **CBUTTON_CONFIG)
+ cb.grid(row=row, column=0, sticky='e')
+
+ #==================== oframe7 ====================
+ # --docformat
+ row += 1
+ l = Label(oframe7, text="Default Docformat:", **COLOR_CONFIG)
+ l.grid(row=row, column=0, sticky='e')
+ df_var = self._docformat_var = StringVar(self._root)
+ self._docformat_var.set('epytext')
+ b = Radiobutton(oframe7, var=df_var, text='Epytext',
+ value='epytext', **CBUTTON_CONFIG)
+ b.grid(row=row, column=1, sticky='w')
+ b = Radiobutton(oframe7, var=df_var, text='ReStructuredText',
+ value='restructuredtext', **CBUTTON_CONFIG)
+ b.grid(row=row, column=2, columnspan=2, sticky='w')
+ row += 1
+ b = Radiobutton(oframe7, var=df_var, text='Plaintext',
+ value='plaintext', **CBUTTON_CONFIG)
+ b.grid(row=row, column=1, sticky='w')
+ b = Radiobutton(oframe7, var=df_var, text='Javadoc',
+ value='javadoc', **CBUTTON_CONFIG)
+ b.grid(row=row, column=2, columnspan=2, sticky='w')
+ row += 1
+
+        # Separator
+ Frame(oframe7, background=BG_COLOR).grid(row=row, column=1, pady=3)
+ row += 1
+
+ # --inheritance
+ l = Label(oframe7, text="Inheritance Style:", **COLOR_CONFIG)
+ l.grid(row=row, column=0, sticky='e')
+ inh_var = self._inheritance_var = StringVar(self._root)
+ self._inheritance_var.set('grouped')
+ b = Radiobutton(oframe7, var=inh_var, text='Grouped',
+ value='grouped', **CBUTTON_CONFIG)
+ b.grid(row=row, column=1, sticky='w')
+ b = Radiobutton(oframe7, var=inh_var, text='Listed',
+ value='listed', **CBUTTON_CONFIG)
+ b.grid(row=row, column=2, sticky='w')
+ b = Radiobutton(oframe7, var=inh_var, text='Included',
+ value='included', **CBUTTON_CONFIG)
+ b.grid(row=row, column=3, sticky='w')
+ row += 1
+
+        # Separator
+ Frame(oframe7, background=BG_COLOR).grid(row=row, column=1, pady=3)
+ row += 1
+
+ # --parse-only, --introspect-only
+ l = Label(oframe7, text="Get docs from:", **COLOR_CONFIG)
+ l.grid(row=row, column=0, sticky='e')
+ iop_var = self._introspect_or_parse_var = StringVar(self._root)
+ self._introspect_or_parse_var.set('both')
+ b = Radiobutton(oframe7, var=iop_var, text='Parsing',
+ value='parse', **CBUTTON_CONFIG)
+ b.grid(row=row, column=1, sticky='w')
+ b = Radiobutton(oframe7, var=iop_var, text='Introspecting',
+ value='introspect', **CBUTTON_CONFIG)
+ b.grid(row=row, column=2, sticky='w')
+ b = Radiobutton(oframe7, var=iop_var, text='Both',
+ value='both', **CBUTTON_CONFIG)
+ b.grid(row=row, column=3, sticky='w')
+ row += 1
+
+ #==================== oframe5 ====================
+ # --help-file FILE
+ row = 0
+ self._help_var = StringVar(self._root)
+ self._help_var.set('default')
+ b = Radiobutton(oframe5, var=self._help_var,
+ text='Default',
+ value='default', **CBUTTON_CONFIG)
+ b.grid(row=row, column=1, sticky='w')
+ row += 1
+ b = Radiobutton(oframe5, var=self._help_var,
+ text='Select File',
+ value='-other-', **CBUTTON_CONFIG)
+ b.grid(row=row, column=1, sticky='w')
+ self._help_entry = Entry(oframe5, **ENTRY_CONFIG)
+ self._help_entry.grid(row=row, column=2, sticky='ew')
+ self._help_browse = Button(oframe5, text='Browse',
+ command=self._browse_help,
+ **BUTTON_CONFIG)
+ self._help_browse.grid(row=row, column=3, sticky='ew', padx=2)
+
+ from epydoc.docwriter.html_css import STYLESHEETS
+ items = STYLESHEETS.items()
+ def _css_sort(css1, css2):
+ if css1[0] == 'default': return -1
+ elif css2[0] == 'default': return 1
+ else: return cmp(css1[0], css2[0])
+ items.sort(_css_sort)
+
+ #==================== oframe6 ====================
+ # -c CSS, --css CSS
+ # --private-css CSS
+ row = 0
+ #l = Label(oframe6, text="Public", **COLOR_CONFIG)
+ #l.grid(row=row, column=0, sticky='e')
+ #l = Label(oframe6, text="Private", **COLOR_CONFIG)
+ #l.grid(row=row, column=1, sticky='w')
+ row += 1
+ css_var = self._css_var = StringVar(self._root)
+ css_var.set('default')
+ #private_css_var = self._private_css_var = StringVar(self._root)
+ #private_css_var.set('default')
+ for (name, (sheet, descr)) in items:
+ b = Radiobutton(oframe6, var=css_var, value=name, **CBUTTON_CONFIG)
+ b.grid(row=row, column=0, sticky='e')
+ #b = Radiobutton(oframe6, var=private_css_var, value=name,
+ # text=name, **CBUTTON_CONFIG)
+ #b.grid(row=row, column=1, sticky='w')
+ l = Label(oframe6, text=descr, **COLOR_CONFIG)
+ l.grid(row=row, column=1, sticky='w')
+ row += 1
+ b = Radiobutton(oframe6, var=css_var, value='-other-',
+ **CBUTTON_CONFIG)
+ b.grid(row=row, column=0, sticky='e')
+ #b = Radiobutton(oframe6, text='Select File', var=private_css_var,
+ # value='-other-', **CBUTTON_CONFIG)
+ #b.grid(row=row, column=1, sticky='w')
+ #l = Label(oframe6, text='Select File', **COLOR_CONFIG)
+ #l.grid(row=row, column=1, sticky='w')
+ self._css_entry = Entry(oframe6, **ENTRY_CONFIG)
+ self._css_entry.grid(row=row, column=1, sticky='ew')
+ self._css_browse = Button(oframe6, text="Browse",
+ command=self._browse_css,
+ **BUTTON_CONFIG)
+ self._css_browse.grid(row=row, column=2, sticky='ew', padx=2)
+
+ def _init_bindings(self):
+ self._root.bind('<Delete>', self._delete_module)
+ self._root.bind('<Alt-o>', self._options_toggle)
+ self._root.bind('<Alt-m>', self._messages_toggle)
+ self._root.bind('<F5>', self._go)
+ self._root.bind('<Alt-s>', self._go)
+
+ self._root.bind('<Control-n>', self._new)
+ self._root.bind('<Control-o>', self._open)
+ self._root.bind('<Control-s>', self._save)
+ self._root.bind('<Control-a>', self._saveas)
+
+ def _options_toggle(self, *e):
+ if self._options_visible:
+ self._optsframe.forget()
+ self._option_button['image'] = self._rightImage
+ self._options_visible = 0
+ else:
+ self._optsframe.pack(fill='both', side='right')
+ self._option_button['image'] = self._leftImage
+ self._options_visible = 1
+
+ def _messages_toggle(self, *e):
+ if self._messages_visible:
+ self._msgsframe.forget()
+ self._message_button['image'] = self._rightImage
+ self._messages_visible = 0
+ else:
+ self._msgsframe.pack(fill='both', side='bottom', expand=1)
+ self._message_button['image'] = self._leftImage
+ self._messages_visible = 1
+
+ def _configure(self, event):
+ self._W = event.width-DW
+
+ def _delete_module(self, *e):
+ selection = self._module_list.curselection()
+ if len(selection) != 1: return
+ self._module_list.delete(selection[0])
+
+ def _entry_module(self, *e):
+ modules = [self._module_entry.get()]
+ if glob.has_magic(modules[0]):
+ modules = glob.glob(modules[0])
+ for name in modules:
+ self.add_module(name, check=1)
+ self._module_entry.delete(0, 'end')
+
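The wildcard handling in `_entry_module` above hinges on `glob.has_magic`, which decides whether the entry is a pattern or a literal name. A minimal standalone sketch of that decision (Python 3 syntax; `expand_module_args` is a hypothetical helper, not part of epydoc):

```python
import glob

def expand_module_args(patterns):
    """Expand shell-style wildcards; pass plain names through unchanged."""
    names = []
    for pattern in patterns:
        if glob.has_magic(pattern):      # True for *, ? or [...] patterns
            names.extend(glob.glob(pattern))
        else:
            names.append(pattern)
    return names

print(expand_module_args(['plain_module.py']))  # → ['plain_module.py']
```

Note that `has_magic` is an internal, undocumented helper of the `glob` module, so relying on it outside a vendored tool like this one is fragile.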
+ def _browse_module(self, *e):
+ title = 'Select a module for documentation'
+ ftypes = [('Python module', '.py'),
+ ('Python extension', '.so'),
+ ('All files', '*')]
+ filename = askopenfilename(filetypes=ftypes, title=title,
+ defaultextension='.py',
+ initialdir=self._init_dir)
+ if not filename: return
+ self._init_dir = os.path.dirname(filename)
+ self.add_module(filename, check=1)
+
+ def _browse_css(self, *e):
+ title = 'Select a CSS stylesheet'
+ ftypes = [('CSS Stylesheet', '.css'), ('All files', '*')]
+ filename = askopenfilename(filetypes=ftypes, title=title,
+ defaultextension='.css')
+ if not filename: return
+ self._css_entry.delete(0, 'end')
+ self._css_entry.insert(0, filename)
+
+ def _browse_help(self, *e):
+ title = 'Select a help file'
+ self._help_var.set('-other-')
+ ftypes = [('HTML file', '.html'), ('All files', '*')]
+ filename = askopenfilename(filetypes=ftypes, title=title,
+ defaultextension='.html')
+ if not filename: return
+ self._help_entry.delete(0, 'end')
+ self._help_entry.insert(0, filename)
+
+ def _browse_out(self, *e):
+ ftypes = [('All files', '*')]
+ title = 'Choose the output directory'
+ if askdirectory is not None:
+ filename = askdirectory(mustexist=0, title=title)
+ if not filename: return
+ else:
+ # Hack for Python 2.1 or earlier:
+ filename = asksaveasfilename(filetypes=ftypes, title=title,
+ initialfile='--this directory--')
+ if not filename: return
+ (f1, f2) = os.path.split(filename)
+ if f2 == '--this directory--': filename = f1
+ self._out_entry.delete(0, 'end')
+ self._out_entry.insert(0, filename)
+
+ def destroy(self, *e):
+ if self._root is None: return
+
+ # Unload any modules that we've imported
+ for m in sys.modules.keys():
+ if m not in self._old_modules: del sys.modules[m]
+ self._root.destroy()
+ self._root = None
+
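`destroy` (and `_go` further down) restore `sys.modules` to a snapshot taken at startup, so each documentation run re-imports its targets from disk. A self-contained sketch of that snapshot/rollback pattern (the module name is invented; note that Python 3 requires iterating over a copy of the keys while deleting, whereas the Python 2 code above can iterate `keys()` directly):

```python
import sys
import types

# Snapshot the interpreter's module table, as the GUI does at startup.
old_modules = set(sys.modules.keys())

# Simulate a module that gets imported later, while documenting.
sys.modules['freshly_imported_demo'] = types.ModuleType('freshly_imported_demo')

# Roll back: drop everything that appeared after the snapshot.
for name in list(sys.modules.keys()):
    if name not in old_modules:
        del sys.modules[name]

print('freshly_imported_demo' in sys.modules)  # → False
```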
+ def add_module(self, name, check=0):
+ from epydoc.util import is_package_dir, is_pyname, is_module_file
+ from epydoc.docintrospecter import get_value_from_name
+        from epydoc.docintrospecter import get_value_from_filename
+        from epydoc.docintrospecter import get_value_from_scriptname
+
+ if (os.path.isfile(name) or is_package_dir(name) or is_pyname(name)):
+ # Check that it's a good module, if requested.
+ if check:
+ try:
+ if is_module_file(name) or is_package_dir(name):
+ get_value_from_filename(name)
+ elif os.path.isfile(name):
+ get_value_from_scriptname(name)
+ else:
+ get_value_from_name(name)
+ except ImportError, e:
+ log.error(e)
+ self._update_messages()
+ self._root.bell()
+ return
+
+ # Add the module to the list of modules.
+ self._module_list.insert('end', name)
+ self._module_list.yview('end')
+ else:
+ log.error("Couldn't find %r" % name)
+ self._update_messages()
+ self._root.bell()
+
+ def mainloop(self, *args, **kwargs):
+ self._root.mainloop(*args, **kwargs)
+
+ def _getopts(self):
+ options = {}
+ options['modules'] = self._module_list.get(0, 'end')
+ options['prj_name'] = self._name_entry.get() or ''
+ options['prj_url'] = self._url_entry.get() or None
+ options['docformat'] = self._docformat_var.get()
+ options['inheritance'] = self._inheritance_var.get()
+ options['introspect_or_parse'] = self._introspect_or_parse_var.get()
+ options['target'] = self._out_entry.get() or 'html'
+ options['frames'] = self._frames_var.get()
+ options['private'] = self._private_var.get()
+ options['show_imports'] = self._imports_var.get()
+ if self._help_var.get() == '-other-':
+ options['help'] = self._help_entry.get() or None
+ else:
+ options['help'] = None
+ if self._css_var.get() == '-other-':
+ options['css'] = self._css_entry.get() or 'default'
+ else:
+ options['css'] = self._css_var.get() or 'default'
+ #if self._private_css_var.get() == '-other-':
+ # options['private_css'] = self._css_entry.get() or 'default'
+ #else:
+ # options['private_css'] = self._private_css_var.get() or 'default'
+ return options
+
+ def _go(self, *e):
+ if len(self._module_list.get(0,'end')) == 0:
+ self._root.bell()
+ return
+
+        if self._progress[0] is not None:
+ self._cancel[0] = 1
+ return
+
+ # Construct the argument list for document().
+ opts = self._getopts()
+ self._progress[0] = 0.0
+ self._cancel[0] = 0
+ args = (opts, self._cancel, self._progress)
+
+ # Clear the messages window.
+ self._messages['state'] = 'normal'
+ self._messages.delete('0.0', 'end')
+ self._messages['state'] = 'disabled'
+ self._logger.clear()
+
+ # Restore the module list. This will force re-loading of
+ # anything that we're documenting.
+ for m in sys.modules.keys():
+ if m not in self._old_modules:
+ del sys.modules[m]
+
+ # [xx] Reset caches??
+
+ # Start documenting
+ start_new_thread(document, args)
+
+ # Start the progress bar.
+ self._go_button['text'] = 'Stop'
+ self._afterid += 1
+ dt = 300 # How often to update, in milliseconds
+ self._update(dt, self._afterid)
+
+ def _update_messages(self):
+ while 1:
+ level, data = self._logger.read()
+ if data is None: break
+ self._messages['state'] = 'normal'
+ if level == 'header':
+ self._messages.insert('end', data, 'header')
+ elif level == 'uline':
+ self._messages.insert('end', data, 'uline header')
+ elif level >= log.ERROR:
+ data= data.rstrip()+'\n\n'
+ self._messages.insert('end', data, 'guierror')
+ elif level >= log.DOCSTRING_WARNING:
+ data= data.rstrip()+'\n\n'
+ self._messages.insert('end', data, 'warning')
+            elif level >= log.INFO:
+ data= data.rstrip()+'\n\n'
+ self._messages.insert('end', data, 'message')
+# if data == '\n':
+# if self._last_tag != 'header2':
+# self._messages.insert('end', '\n', self._last_tag)
+# elif data == '='*75:
+# if self._messages.get('end-3c', 'end') == '\n\n\n':
+# self._messages.delete('end-1c')
+# self._in_header = 1
+# self._messages.insert('end', ' '*75, 'uline header')
+# self._last_tag = 'header'
+# elif data == '-'*75:
+# self._in_header = 0
+# self._last_tag = 'header2'
+# elif self._in_header:
+# self._messages.insert('end', data, 'header')
+# self._last_tag = 'header'
+# elif re.match(r'\s*(L\d+:|-)?\s*Warning: ', data):
+# self._messages.insert('end', data, 'warning')
+# self._last_tag = 'warning'
+# else:
+# self._messages.insert('end', data, 'error')
+# self._last_tag = 'error'
+
+ self._messages['state'] = 'disabled'
+ self._messages.yview('end')
+
+ def _update(self, dt, id):
+ if self._root is None: return
+ if self._progress[0] is None: return
+ if id != self._afterid: return
+
+ # Update the messages box
+ self._update_messages()
+
+ # Update the progress bar.
+ if self._progress[0] == 'done': p = self._W + DX
+ elif self._progress[0] == 'cancel': p = -5
+ else: p = DX + self._W * self._progress[0]
+ self._canvas.coords(self._r1, DX+1, DY+1, p, self._H+1)
+ self._canvas.coords(self._r2, DX, DY, p-1, self._H)
+ self._canvas.coords(self._r3, DX+1, DY+1, p, self._H+1)
+
+ # Are we done?
+ if self._progress[0] in ('done', 'cancel'):
+ if self._progress[0] == 'cancel': self._root.bell()
+ self._go_button['text'] = 'Start'
+ self._progress[0] = None
+ return
+
+ self._root.after(dt, self._update, dt, id)
+
+ def _new(self, *e):
+ self._module_list.delete(0, 'end')
+ self._name_entry.delete(0, 'end')
+ self._url_entry.delete(0, 'end')
+ self._docformat_var.set('epytext')
+ self._inheritance_var.set('grouped')
+ self._introspect_or_parse_var.set('both')
+ self._out_entry.delete(0, 'end')
+ self._module_entry.delete(0, 'end')
+ self._css_entry.delete(0, 'end')
+ self._help_entry.delete(0, 'end')
+ self._frames_var.set(1)
+ self._private_var.set(1)
+ self._imports_var.set(0)
+ self._css_var.set('default')
+ #self._private_css_var.set('default')
+ self._help_var.set('default')
+ self._filename = None
+ self._init_dir = None
+
+ def _open(self, *e):
+ title = 'Open project'
+ ftypes = [('Project file', '.prj'),
+ ('All files', '*')]
+        filename = askopenfilename(filetypes=ftypes, title=title,
+                                   defaultextension='.prj')
+ if not filename: return
+ self.open(filename)
+
+ def open(self, prjfile):
+ from epydoc.docwriter.html_css import STYLESHEETS
+ self._filename = prjfile
+ try:
+ opts = load(open(prjfile, 'r'))
+
+ modnames = list(opts.get('modules', []))
+ modnames.sort()
+ self._module_list.delete(0, 'end')
+ for name in modnames:
+ self.add_module(name)
+ self._module_entry.delete(0, 'end')
+
+ self._name_entry.delete(0, 'end')
+ if opts.get('prj_name'):
+ self._name_entry.insert(0, opts['prj_name'])
+
+ self._url_entry.delete(0, 'end')
+ if opts.get('prj_url'):
+ self._url_entry.insert(0, opts['prj_url'])
+
+ self._docformat_var.set(opts.get('docformat', 'epytext'))
+ self._inheritance_var.set(opts.get('inheritance', 'grouped'))
+ self._introspect_or_parse_var.set(
+ opts.get('introspect_or_parse', 'both'))
+
+ self._help_entry.delete(0, 'end')
+ if opts.get('help') is None:
+ self._help_var.set('default')
+ else:
+ self._help_var.set('-other-')
+ self._help_entry.insert(0, opts.get('help'))
+
+ self._out_entry.delete(0, 'end')
+ self._out_entry.insert(0, opts.get('target', 'html'))
+
+ self._frames_var.set(opts.get('frames', 1))
+ self._private_var.set(opts.get('private', 1))
+ self._imports_var.set(opts.get('show_imports', 0))
+
+ self._css_entry.delete(0, 'end')
+ if opts.get('css', 'default') in STYLESHEETS.keys():
+ self._css_var.set(opts.get('css', 'default'))
+ else:
+ self._css_var.set('-other-')
+ self._css_entry.insert(0, opts.get('css', 'default'))
+
+ #if opts.get('private_css', 'default') in STYLESHEETS.keys():
+ # self._private_css_var.set(opts.get('private_css', 'default'))
+ #else:
+ # self._private_css_var.set('-other-')
+ # self._css_entry.insert(0, opts.get('private_css', 'default'))
+
+ except Exception, e:
+ log.error('Error opening %s: %s' % (prjfile, e))
+ self._root.bell()
+
+ def _save(self, *e):
+ if self._filename is None: return self._saveas()
+ try:
+ opts = self._getopts()
+ dump(opts, open(self._filename, 'w'))
+ except Exception, e:
+ if self._filename is None:
+ log.error('Error saving: %s' % e)
+ else:
+ log.error('Error saving %s: %s' % (self._filename, e))
+ self._root.bell()
+
+ def _saveas(self, *e):
+ title = 'Save project as'
+ ftypes = [('Project file', '.prj'), ('All files', '*')]
+ filename = asksaveasfilename(filetypes=ftypes, title=title,
+ defaultextension='.prj')
+ if not filename: return
+ self._filename = filename
+ self._save()
+
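`_save` and `open` round-trip the options dictionary through `dump`/`load`, which this file presumably imports from `pickle`. A sketch of that round-trip (filenames invented; Python 3 pickle needs binary file modes, unlike the text modes used above):

```python
import os
import pickle
import tempfile

options = {'modules': ['mymod.py'], 'target': 'html', 'private': 1}

# Write the project file the way _save() does, then read it back.
path = os.path.join(tempfile.mkdtemp(), 'demo.prj')
with open(path, 'wb') as f:
    pickle.dump(options, f)
with open(path, 'rb') as f:
    restored = pickle.load(f)

print(restored == options)  # → True
```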
+def _version():
+ """
+ Display the version information, and exit.
+ @rtype: C{None}
+ """
+ import epydoc
+ print "Epydoc version %s" % epydoc.__version__
+ sys.exit(0)
+
+# At some point I could add:
+# --show-messages, --hide-messages
+# --show-options, --hide-options
+def _usage():
+ print
+ print 'Usage: epydocgui [OPTIONS] [FILE.prj | MODULES...]'
+ print
+ print ' FILE.prj An epydoc GUI project file.'
+ print ' MODULES... A list of Python modules to document.'
+ print ' -V, --version Print the version of epydoc.'
+ print ' -h, -?, --help, --usage Display this usage message'
+ print ' --debug Do not suppress error messages'
+ print
+ sys.exit(0)
+
+def _error(s):
+ s = '%s; run "%s -h" for usage' % (s, os.path.basename(sys.argv[0]))
+ if len(s) > 80:
+ i = s.rfind(' ', 0, 80)
+ if i>0: s = s[:i]+'\n'+s[i+1:]
+ print >>sys.stderr, s
+ sys.exit(1)
+
+def gui():
+ global DEBUG
+ sys.stderr = sys.__stderr__
+ projects = []
+ modules = []
+ for arg in sys.argv[1:]:
+ if arg[0] == '-':
+ if arg != '-V': arg = arg.lower()
+ if arg in ('-h', '--help', '-?', '--usage'): _usage()
+ elif arg in ('-V', '--version'): _version()
+ elif arg in ('--debug',): DEBUG = 1
+ else:
+ _error('Unknown parameter %r' % arg)
+ elif arg[-4:] == '.prj': projects.append(arg)
+ else: modules.append(arg)
+
+ if len(projects) > 1:
+ _error('Too many projects')
+ if len(projects) == 1:
+ if len(modules) > 0:
+ _error('You must specify either a project or a list of modules')
+ if not os.path.exists(projects[0]):
+ _error('Cannot open project file %s' % projects[0])
+ gui = EpydocGUI()
+ gui.open(projects[0])
+ gui.mainloop()
+ else:
+ gui = EpydocGUI()
+ for module in modules: gui.add_module(module, check=1)
+ gui.mainloop()
+
+if __name__ == '__main__': gui()
+
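`_go` hands `document()` two one-element lists, `self._cancel` and `self._progress`, as a primitive shared-memory channel: the worker thread writes progress fractions (or the strings `'done'`/`'cancel'`) into them, and the GUI thread polls them in `_update`. A tkinter-free sketch of that contract (`worker` is a stand-in for `document()`, not epydoc's real function):

```python
import threading
import time

def worker(cancel, progress, steps=5):
    # Write progress into the shared one-element list, checking the
    # shared cancel flag between steps, as document() is expected to.
    for i in range(steps):
        if cancel[0]:
            progress[0] = 'cancel'
            return
        progress[0] = (i + 1) / steps
        time.sleep(0.01)
    progress[0] = 'done'

cancel, progress = [0], [0.0]
t = threading.Thread(target=worker, args=(cancel, progress))
t.start()
t.join()
print(progress[0])  # → done
```

In the real GUI the poll runs on `root.after(...)` instead of `join()`, so the interface stays responsive while the worker runs.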
diff --git a/python/helpers/epydoc/log.py b/python/helpers/epydoc/log.py
new file mode 100644
index 0000000..e6fae68
--- /dev/null
+++ b/python/helpers/epydoc/log.py
@@ -0,0 +1,204 @@
+# epydoc -- Logging
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: log.py 1488 2007-02-14 00:34:27Z edloper $
+
+"""
+Functions used to report messages and progress updates to the user.
+These functions are delegated to zero or more registered L{Logger}
+objects, which are responsible for actually presenting the information
+to the user. Different interfaces are free to create and register
+their own C{Logger}s, allowing them to present this information in the
+manner that is best suited to each interface.
+
+@note: I considered using the standard C{logging} package to provide
+this functionality. However, I found that it would be too difficult
+to get that package to provide the behavior I want (esp. with respect
+to progress displays; but also with respect to message blocks).
+
+@group Message Severity Levels: DEBUG, INFO, WARNING, ERROR, FATAL
+"""
+__docformat__ = 'epytext en'
+
+import sys, os
+
+DEBUG = 10
+INFO = 20
+DOCSTRING_WARNING = 25
+WARNING = 30
+ERROR = 40
+FATAL = 40
+
+######################################################################
+# Logger Base Class
+######################################################################
+class Logger:
+ """
+ An abstract base class that defines the interface for X{loggers},
+ which are used by epydoc to report information back to the user.
+ Loggers are responsible for tracking two types of information:
+
+ - Messages, such as warnings and errors.
+ - Progress on the current task.
+
+ This abstract class allows the command-line interface and the
+ graphical interface to each present this information to the user
+ in the way that's most natural for each interface. To set up a
+ logger, create a subclass of C{Logger} that overrides all methods,
+ and register it using L{register_logger}.
+ """
+ #////////////////////////////////////////////////////////////
+ # Messages
+ #////////////////////////////////////////////////////////////
+
+ def log(self, level, message):
+ """
+ Display a message.
+
+ @param message: The message string to display. C{message} may
+ contain newlines, but does not need to end in a newline.
+ @param level: An integer value indicating the severity of the
+ message.
+ """
+
+ def close(self):
+ """
+ Perform any tasks needed to close this logger.
+ """
+
+ #////////////////////////////////////////////////////////////
+ # Message blocks
+ #////////////////////////////////////////////////////////////
+
+ def start_block(self, header):
+ """
+ Start a new message block. Any calls to L{info()},
+ L{warning()}, or L{error()} that occur between a call to
+ C{start_block} and a corresponding call to C{end_block} will
+ be grouped together, and displayed with a common header.
+ C{start_block} can be called multiple times (to form nested
+ blocks), but every call to C{start_block} I{must} be balanced
+ by a call to C{end_block}.
+ """
+
+ def end_block(self):
+ """
+ End a warning block. See L{start_block} for details.
+ """
+
+ #////////////////////////////////////////////////////////////
+ # Progress bar
+ #////////////////////////////////////////////////////////////
+
+ def start_progress(self, header=None):
+ """
+ Begin displaying progress for a new task. C{header} is a
+ description of the task for which progress is being reported.
+ Each call to C{start_progress} must be followed by a call to
+ C{end_progress} (with no intervening calls to
+ C{start_progress}).
+ """
+
+ def end_progress(self):
+ """
+ Finish off the display of progress for the current task. See
+ L{start_progress} for more information.
+ """
+
+ def progress(self, percent, message=''):
+ """
+ Update the progress display.
+
+ @param percent: A float from 0.0 to 1.0, indicating how much
+ progress has been made.
+ @param message: A message indicating the most recent action
+ that contributed towards that progress.
+ """
+
+class SimpleLogger(Logger):
+ def __init__(self, threshold=WARNING):
+ self.threshold = threshold
+ def log(self, level, message):
+ if level >= self.threshold: print message
+
+######################################################################
+# Logger Registry
+######################################################################
+
+_loggers = []
+"""
+The list of registered logging functions.
+"""
+
+def register_logger(logger):
+ """
+ Register a logger. Each call to one of the logging functions
+ defined by this module will be delegated to each registered
+ logger.
+ """
+ _loggers.append(logger)
+
+def remove_logger(logger):
+ _loggers.remove(logger)
+
+######################################################################
+# Logging Functions
+######################################################################
+# The following methods all just delegate to the corresponding
+# methods in the Logger class (above) for each registered logger.
+
+def fatal(*messages):
+ """Display the given fatal message."""
+ message = ' '.join(['%s' % (m,) for m in messages])
+ for logger in _loggers: logger.log(FATAL, message)
+
+def error(*messages):
+ """Display the given error message."""
+ message = ' '.join(['%s' % (m,) for m in messages])
+ for logger in _loggers: logger.log(ERROR, message)
+
+def warning(*messages):
+ """Display the given warning message."""
+ message = ' '.join(['%s' % (m,) for m in messages])
+ for logger in _loggers: logger.log(WARNING, message)
+
+def docstring_warning(*messages):
+ """Display the given docstring warning message."""
+ message = ' '.join(['%s' % (m,) for m in messages])
+ for logger in _loggers: logger.log(DOCSTRING_WARNING, message)
+
+def info(*messages):
+ """Display the given informational message."""
+ message = ' '.join(['%s' % (m,) for m in messages])
+ for logger in _loggers: logger.log(INFO, message)
+
+def debug(*messages):
+ """Display the given debugging message."""
+ message = ' '.join(['%s' % (m,) for m in messages])
+ for logger in _loggers: logger.log(DEBUG, message)
+
+def start_block(header):
+ for logger in _loggers: logger.start_block(header)
+start_block.__doc__ = Logger.start_block.__doc__
+
+def end_block():
+ for logger in _loggers: logger.end_block()
+end_block.__doc__ = Logger.end_block.__doc__
+
+def start_progress(header=None):
+ for logger in _loggers: logger.start_progress(header)
+start_progress.__doc__ = Logger.start_progress.__doc__
+
+def end_progress():
+ for logger in _loggers: logger.end_progress()
+end_progress.__doc__ = Logger.end_progress.__doc__
+
+def progress(percent, message=''):
+ for logger in _loggers: logger.progress(percent, '%s' % message)
+progress.__doc__ = Logger.progress.__doc__
+
+def close():
+ for logger in _loggers: logger.close()
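The logging functions above all follow the same fan-out shape: join the message fragments, then delegate to every registered logger. A compact standalone model of that registry (the `ListLogger` class is invented for the demo; it plays the role of `SimpleLogger` but collects messages instead of printing them):

```python
WARNING, ERROR = 30, 40
_loggers = []

class ListLogger:
    """Collects (level, message) pairs, like a minimal epydoc Logger."""
    def __init__(self, threshold=WARNING):
        self.threshold = threshold
        self.messages = []

    def log(self, level, message):
        if level >= self.threshold:
            self.messages.append((level, message))

def error(*messages):
    # Same shape as epydoc.log.error: join fragments, fan out.
    message = ' '.join('%s' % (m,) for m in messages)
    for logger in _loggers:
        logger.log(ERROR, message)

collector = ListLogger()
_loggers.append(collector)          # register_logger()
error('could not import', 'foo')    # reaches every registered logger
print(collector.messages)           # → [(40, 'could not import foo')]
```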
diff --git a/python/helpers/epydoc/markup/__init__.py b/python/helpers/epydoc/markup/__init__.py
new file mode 100644
index 0000000..2b9565c
--- /dev/null
+++ b/python/helpers/epydoc/markup/__init__.py
@@ -0,0 +1,623 @@
+#
+# epydoc package file
+#
+# A python documentation Module
+# Edward Loper
+#
+# $Id: __init__.py 1577 2007-03-09 23:26:21Z dvarrazzo $
+#
+
+"""
+Markup language support for docstrings. Each submodule defines a
+parser for a single markup language. These parsers convert an
+object's docstring to a L{ParsedDocstring}, a standard intermediate
+representation that can be used to generate output.
+C{ParsedDocstring}s support the following operations:
+ - output generation (L{to_plaintext()<ParsedDocstring.to_plaintext>},
+ L{to_html()<ParsedDocstring.to_html>}, and
+ L{to_latex()<ParsedDocstring.to_latex>}).
+ - Summarization (L{summary()<ParsedDocstring.summary>}).
+ - Field extraction (L{split_fields()<ParsedDocstring.split_fields>}).
+  - Index term extraction (L{index_terms()<ParsedDocstring.index_terms>}).
+
+The L{parse()} function provides a single interface to the
+C{epydoc.markup} package: it takes a docstring and the name of a
+markup language; delegates to the appropriate parser; and returns the
+parsed docstring (along with any errors or warnings that were
+generated).
+
+The C{ParsedDocstring} output generation methods (C{to_M{format}()})
+use a L{DocstringLinker} to link the docstring output with the rest of
+the documentation that epydoc generates. C{DocstringLinker}s are
+currently responsible for translating two kinds of crossreference:
+ - index terms (L{translate_indexterm()
+ <DocstringLinker.translate_indexterm>}).
+ - identifier crossreferences (L{translate_identifier_xref()
+ <DocstringLinker.translate_identifier_xref>}).
+
+A parsed docstring's fields can be extracted using the
+L{ParsedDocstring.split_fields()} method. This method divides a
+docstring into its main body and a list of L{Field}s, each of which
+encodes a single field. The field's bodies are encoded as
+C{ParsedDocstring}s.
+
+Markup errors are represented using L{ParseError}s. These exception
+classes record information about the cause, location, and severity of
+each error.
+
+@sort: parse, ParsedDocstring, Field, DocstringLinker
+@group Errors and Warnings: ParseError
+@group Utility Functions: parse_type_of
+@var SCRWIDTH: The default width with which text will be wrapped
+ when formatting the output of the parser.
+@type SCRWIDTH: C{int}
+@var _parse_warnings: Used by L{_parse_warn}.
+"""
+__docformat__ = 'epytext en'
+
+import re, types, sys
+from epydoc import log
+from epydoc.util import plaintext_to_html, plaintext_to_latex
+import epydoc
+from epydoc.compat import *
+
+##################################################
+## Contents
+##################################################
+#
+# 1. parse() dispatcher
+# 2. ParsedDocstring abstract base class
+# 3. Field class
+# 4. Docstring Linker
+# 5. ParseError exceptions
+# 6. Misc helpers
+#
+
+##################################################
+## Dispatcher
+##################################################
+
+_markup_language_registry = {
+ 'restructuredtext': 'epydoc.markup.restructuredtext',
+ 'epytext': 'epydoc.markup.epytext',
+ 'plaintext': 'epydoc.markup.plaintext',
+ 'javadoc': 'epydoc.markup.javadoc',
+ }
+
+def register_markup_language(name, parse_function):
+ """
+ Register a new markup language named C{name}, which can be parsed
+ by the function C{parse_function}.
+
+ @param name: The name of the markup language. C{name} should be a
+ simple identifier, such as C{'epytext'} or C{'restructuredtext'}.
+ Markup language names are case insensitive.
+
+ @param parse_function: A function which can be used to parse the
+ markup language, and returns a L{ParsedDocstring}. It should
+ have the following signature:
+
+ >>> def parse(s, errors):
+ ... 'returns a ParsedDocstring'
+
+ Where:
+ - C{s} is the string to parse. (C{s} will be a unicode
+ string.)
+ - C{errors} is a list; any errors that are generated
+ during docstring parsing should be appended to this
+ list (as L{ParseError} objects).
+ """
+ _markup_language_registry[name.lower()] = parse_function
+
+MARKUP_LANGUAGES_USED = set()
+
+def parse(docstring, markup='plaintext', errors=None, **options):
+ """
+ Parse the given docstring, and use it to construct a
+ C{ParsedDocstring}. If any fatal C{ParseError}s are encountered
+ while parsing the docstring, then the docstring will be rendered
+ as plaintext, instead.
+
+ @type docstring: C{string}
+ @param docstring: The docstring to encode.
+ @type markup: C{string}
+ @param markup: The name of the markup language that is used by
+ the docstring. If the markup language is not supported, then
+ the docstring will be treated as plaintext. The markup name
+ is case-insensitive.
+ @param errors: A list where any errors generated during parsing
+ will be stored. If no list is specified, then fatal errors
+ will generate exceptions, and non-fatal errors will be
+ ignored.
+ @type errors: C{list} of L{ParseError}
+ @rtype: L{ParsedDocstring}
+ @return: A L{ParsedDocstring} that encodes the contents of
+ C{docstring}.
+ @raise ParseError: If C{errors} is C{None} and an error is
+ encountered while parsing.
+ """
+ # Initialize errors list.
+ raise_on_error = (errors is None)
+    if errors is None: errors = []
+
+ # Normalize the markup language name.
+ markup = markup.lower()
+
+ # Is the markup language valid?
+ if not re.match(r'\w+', markup):
+ _parse_warn('Bad markup language name %r. Treating '
+ 'docstrings as plaintext.' % markup)
+ import epydoc.markup.plaintext as plaintext
+ return plaintext.parse_docstring(docstring, errors, **options)
+
+ # Is the markup language supported?
+ if markup not in _markup_language_registry:
+ _parse_warn('Unsupported markup language %r. Treating '
+ 'docstrings as plaintext.' % markup)
+ import epydoc.markup.plaintext as plaintext
+ return plaintext.parse_docstring(docstring, errors, **options)
+
+ # Get the parse function.
+ parse_docstring = _markup_language_registry[markup]
+
+ # If it's a string, then it names a function to import.
+ if isinstance(parse_docstring, basestring):
+ try: exec('from %s import parse_docstring' % parse_docstring)
+ except ImportError, e:
+ _parse_warn('Error importing %s for markup language %s: %s' %
+ (parse_docstring, markup, e))
+ import epydoc.markup.plaintext as plaintext
+ return plaintext.parse_docstring(docstring, errors, **options)
+ _markup_language_registry[markup] = parse_docstring
+
+ # Keep track of which markup languages have been used so far.
+ MARKUP_LANGUAGES_USED.add(markup)
+
+ # Parse the docstring.
+ try: parsed_docstring = parse_docstring(docstring, errors, **options)
+ except KeyboardInterrupt: raise
+ except Exception, e:
+ if epydoc.DEBUG: raise
+ log.error('Internal error while parsing a docstring: %s; '
+ 'treating docstring as plaintext' % e)
+ import epydoc.markup.plaintext as plaintext
+ return plaintext.parse_docstring(docstring, errors, **options)
+
+ # Check for fatal errors.
+ fatal_errors = [e for e in errors if e.is_fatal()]
+ if fatal_errors and raise_on_error: raise fatal_errors[0]
+ if fatal_errors:
+ import epydoc.markup.plaintext as plaintext
+ return plaintext.parse_docstring(docstring, errors, **options)
+
+ return parsed_docstring
+
+# only issue each warning once:
+_parse_warnings = {}
+def _parse_warn(estr):
+ """
+ Print a warning message. If the given error has already been
+ printed, then do nothing.
+ """
+ global _parse_warnings
+ if estr in _parse_warnings: return
+ _parse_warnings[estr] = 1
+ log.warning(estr)
+
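`_parse_warn` deduplicates by keying a module-level dict on the exact message text, so repeated bad-markup warnings surface only once per run. The same idea in isolation (names invented; a `set` works as well as the dict used above):

```python
_seen = set()
emitted = []

def warn_once(message):
    """Record each distinct warning only the first time it occurs."""
    if message in _seen:
        return
    _seen.add(message)
    emitted.append(message)

for _ in range(3):
    warn_once('Unsupported markup language foo')
print(emitted)  # → ['Unsupported markup language foo']
```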
+##################################################
+## ParsedDocstring
+##################################################
+class ParsedDocstring:
+ """
+ A standard intermediate representation for parsed docstrings that
+ can be used to generate output. Parsed docstrings are produced by
+ markup parsers (such as L{epytext.parse} or L{javadoc.parse}).
+ C{ParsedDocstring}s support several kinds of operation:
+ - output generation (L{to_plaintext()}, L{to_html()}, and
+ L{to_latex()}).
+ - Summarization (L{summary()}).
+ - Field extraction (L{split_fields()}).
+      - Index term extraction (L{index_terms()}).
+
+ The output generation methods (C{to_M{format}()}) use a
+ L{DocstringLinker} to link the docstring output with the rest
+ of the documentation that epydoc generates.
+
+ Subclassing
+ ===========
+ The only method that a subclass is I{required} to implement is
+ L{to_plaintext()}; but it is often useful to override the other
+ methods. The default behavior of each method is described below:
+ - C{to_I{format}}: Calls C{to_plaintext}, and uses the string it
+ returns to generate verbatim output.
+ - C{summary}: Returns C{self} (i.e., the entire docstring).
+ - C{split_fields}: Returns C{(self, [])} (i.e., extracts no
+ fields).
+ - C{index_terms}: Returns C{[]} (i.e., extracts no index terms).
+
+ If and when epydoc adds more output formats, new C{to_I{format}}
+ methods will be added to this base class; but they will always
+ be given a default implementation.
+ """
+ def split_fields(self, errors=None):
+ """
+ Split this docstring into its body and its fields.
+
+ @return: A tuple C{(M{body}, M{fields})}, where C{M{body}} is
+ the main body of this docstring, and C{M{fields}} is a list
+ of its fields. If the resulting body is empty, return
+ C{None} for the body.
+ @rtype: C{(L{ParsedDocstring}, list of L{Field})}
+ @param errors: A list where any errors generated during
+ splitting will be stored. If no list is specified, then
+ errors will be ignored.
+ @type errors: C{list} of L{ParseError}
+ """
+ # Default behavior:
+ return self, []
+
+ def summary(self):
+ """
+ @return: A pair consisting of a short summary of this docstring and a
+ boolean value indicating whether there is further documentation
+ in addition to the summary. Typically, the summary consists of the
+ first sentence of the docstring.
+ @rtype: (L{ParsedDocstring}, C{bool})
+ """
+ # Default behavior:
+ return self, False
+
+ def concatenate(self, other):
+ """
+        @return: A new parsed docstring containing the concatenation
+ of this docstring and C{other}.
+ @raise ValueError: If the two parsed docstrings are
+ incompatible.
+ """
+ return ConcatenatedDocstring(self, other)
+
+ def __add__(self, other): return self.concatenate(other)
+
+ def to_html(self, docstring_linker, **options):
+ """
+ Translate this docstring to HTML.
+
+ @param docstring_linker: An HTML translator for crossreference
+ links into and out of the docstring.
+ @type docstring_linker: L{DocstringLinker}
+ @param options: Any extra options for the output. Unknown
+ options are ignored.
+ @return: An HTML fragment that encodes this docstring.
+ @rtype: C{string}
+ """
+ # Default behavior:
+ plaintext = plaintext_to_html(self.to_plaintext(docstring_linker))
+ return '<pre class="literalblock">\n%s\n</pre>\n' % plaintext
+
+ def to_latex(self, docstring_linker, **options):
+ """
+ Translate this docstring to LaTeX.
+
+ @param docstring_linker: A LaTeX translator for crossreference
+ links into and out of the docstring.
+ @type docstring_linker: L{DocstringLinker}
+ @param options: Any extra options for the output. Unknown
+ options are ignored.
+ @return: A LaTeX fragment that encodes this docstring.
+ @rtype: C{string}
+ """
+ # Default behavior:
+ plaintext = plaintext_to_latex(self.to_plaintext(docstring_linker))
+ return '\\begin{alltt}\n%s\\end{alltt}\n\n' % plaintext
+
+ def to_plaintext(self, docstring_linker, **options):
+ """
+ Translate this docstring to plaintext.
+
+ @param docstring_linker: A plaintext translator for
+ crossreference links into and out of the docstring.
+ @type docstring_linker: L{DocstringLinker}
+ @param options: Any extra options for the output. Unknown
+ options are ignored.
+ @return: A plaintext fragment that encodes this docstring.
+ @rtype: C{string}
+ """
+ raise NotImplementedError, 'ParsedDocstring.to_plaintext()'
+
+ def index_terms(self):
+ """
+ @return: The list of index terms that are defined in this
+ docstring. Each of these items will be added to the index
+ page of the documentation.
+ @rtype: C{list} of C{ParsedDocstring}
+ """
+ # Default behavior:
+ return []
+
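Per the Subclassing notes above, a subclass only has to supply `to_plaintext`; everything else falls back to defaults such as wrapping escaped plaintext in a `<pre>` block. A stripped-down stand-in illustrating that contract (not epydoc's real classes; stdlib `html.escape` substitutes for `plaintext_to_html`):

```python
import html

class MiniParsedDocstring:
    """Mimics ParsedDocstring's default behavior."""
    def to_plaintext(self, linker, **options):
        raise NotImplementedError('subclasses must implement to_plaintext')

    def to_html(self, linker, **options):
        # Default: escape the plaintext and emit it verbatim in a <pre>.
        text = html.escape(self.to_plaintext(linker))
        return '<pre class="literalblock">\n%s\n</pre>\n' % text

    def summary(self):
        return self, False           # the whole docstring, nothing more

class PlainDocstring(MiniParsedDocstring):
    def __init__(self, text):
        self._text = text

    def to_plaintext(self, linker, **options):
        return self._text

doc = PlainDocstring('x < y')
print(doc.to_html(None))  # the '<' arrives escaped inside the <pre> block
```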
+##################################################
+## Concatenated Docstring
+##################################################
+class ConcatenatedDocstring:
+ def __init__(self, *parsed_docstrings):
+ self._parsed_docstrings = [pds for pds in parsed_docstrings
+ if pds is not None]
+
+ def split_fields(self, errors=None):
+ bodies = []
+ fields = []
+ for doc in self._parsed_docstrings:
+ b,f = doc.split_fields(errors)
+ bodies.append(b)
+ fields.extend(f)
+
+ return ConcatenatedDocstring(*bodies), fields
+
+ def summary(self):
+ return self._parsed_docstrings[0].summary()
+
+ def to_html(self, docstring_linker, **options):
+ htmlstring = ''
+ for doc in self._parsed_docstrings:
+ htmlstring += doc.to_html(docstring_linker, **options)
+ return htmlstring
+
+ def to_latex(self, docstring_linker, **options):
+ latexstring = ''
+ for doc in self._parsed_docstrings:
+ latexstring += doc.to_latex(docstring_linker, **options)
+ return latexstring
+
+ def to_plaintext(self, docstring_linker, **options):
+ textstring = ''
+ for doc in self._parsed_docstrings:
+ textstring += doc.to_plaintext(docstring_linker, **options)
+ return textstring
+
+ def index_terms(self):
+ terms = []
+ for doc in self._parsed_docstrings:
+ terms += doc.index_terms()
+ return terms
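Every rendering method of C{ConcatenatedDocstring} follows the same delegate-and-join pattern shown above. A minimal self-contained sketch (C{PlainDoc} is a hypothetical stand-in for a parsed docstring, not part of this module):

```python
class PlainDoc:
    # Hypothetical minimal stand-in for a ParsedDocstring.
    def __init__(self, text):
        self._text = text
    def to_plaintext(self, docstring_linker, **options):
        return self._text

class ConcatDoc:
    # None entries are dropped, as in ConcatenatedDocstring.__init__;
    # rendering concatenates the renderings of the parts.
    def __init__(self, *docs):
        self._docs = [d for d in docs if d is not None]
    def to_plaintext(self, docstring_linker, **options):
        return ''.join(d.to_plaintext(docstring_linker, **options)
                       for d in self._docs)

combined = ConcatDoc(PlainDoc('Summary.\n'), None, PlainDoc('Details.\n'))
print(combined.to_plaintext(None))
```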
+
+##################################################
+## Fields
+##################################################
+class Field:
+ """
+ The contents of a docstring's field. Docstring fields are used
+ to describe specific aspects of an object, such as a parameter of
+ a function or the author of a module. Each field consists of a
+ tag, an optional argument, and a body:
+ - The tag specifies the type of information that the field
+ encodes.
+ - The argument specifies the object that the field describes.
+ The argument may be C{None} or a C{string}.
+ - The body contains the field's information.
+
+ Tags are automatically downcased and stripped, and arguments are
+ automatically stripped.
+ """
+ def __init__(self, tag, arg, body):
+ self._tag = tag.lower().strip()
+ if arg is None: self._arg = None
+ else: self._arg = arg.strip()
+ self._body = body
+
+ def tag(self):
+ """
+ @return: This field's tag.
+ @rtype: C{string}
+ """
+ return self._tag
+
+ def arg(self):
+ """
+ @return: This field's argument, or C{None} if this field has
+ no argument.
+ @rtype: C{string} or C{None}
+ """
+ return self._arg
+
+ def body(self):
+ """
+ @return: This field's body.
+ @rtype: L{ParsedDocstring}
+ """
+ return self._body
+
+ def __repr__(self):
+ if self._arg is None:
+ return '<Field @%s: ...>' % self._tag
+ else:
+ return '<Field @%s %s: ...>' % (self._tag, self._arg)
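The normalization described in the class docstring (tags downcased and stripped, arguments stripped) can be checked with a trimmed sketch of the constructor:

```python
class FieldSketch:
    # Same normalization as Field.__init__ above.
    def __init__(self, tag, arg, body):
        self._tag = tag.lower().strip()
        self._arg = None if arg is None else arg.strip()
        self._body = body

f = FieldSketch('  Param ', ' x ', 'the first operand')
print(f._tag, f._arg)
```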
+
+##################################################
+## Docstring Linker (resolves crossreferences)
+##################################################
+class DocstringLinker:
+ """
+ A translator for crossreference links into and out of a
+ C{ParsedDocstring}. C{DocstringLinker} is used by
+ C{ParsedDocstring} to convert these crossreference links into
+ appropriate output formats. For example,
+ C{DocstringLinker.to_html} expects a C{DocstringLinker} that
+ converts crossreference links to HTML.
+ """
+ def translate_indexterm(self, indexterm):
+ """
+ Translate an index term to the appropriate output format. The
+ output will typically include a crossreference anchor.
+
+ @type indexterm: L{ParsedDocstring}
+ @param indexterm: The index term to translate.
+ @rtype: C{string}
+ @return: The translated index term.
+ """
+ raise NotImplementedError, 'DocstringLinker.translate_indexterm()'
+
+ def translate_identifier_xref(self, identifier, label=None):
+ """
+ Translate a crossreference link to a Python identifier to the
+ appropriate output format. The output will typically include
+ a reference or pointer to the crossreference target.
+
+ @type identifier: C{string}
+ @param identifier: The name of the Python identifier that
+ should be linked to.
+ @type label: C{string} or C{None}
+ @param label: The label that should be used for the identifier,
+ if it's different from the name of the identifier.
+ @rtype: C{string}
+ @return: The translated crossreference link.
+ """
+ raise NotImplementedError, 'DocstringLinker.translate_xref()'
+
+##################################################
+## ParseError exceptions
+##################################################
+
+class ParseError(Exception):
+ """
+ The base class for errors generated while parsing docstrings.
+
+ @ivar _linenum: The line on which the error occurred within the
+ docstring. The linenum of the first line is 0.
+ @type _linenum: C{int}
+ @ivar _offset: The line number where the docstring begins. This
+ offset is added to C{_linenum} when displaying the line number
+ of the error. Default value: 1.
+ @type _offset: C{int}
+ @ivar _descr: A description of the error.
+ @type _descr: C{string}
+ @ivar _fatal: True if this is a fatal error.
+ @type _fatal: C{boolean}
+ """
+ def __init__(self, descr, linenum=None, is_fatal=1):
+ """
+ @type descr: C{string}
+ @param descr: A description of the error.
+ @type linenum: C{int}
+ @param linenum: The line on which the error occurred within
+ the docstring. The linenum of the first line is 0.
+ @type is_fatal: C{boolean}
+ @param is_fatal: True if this is a fatal error.
+ """
+ self._descr = descr
+ self._linenum = linenum
+ self._fatal = is_fatal
+ self._offset = 1
+
+ def is_fatal(self):
+ """
+ @return: true if this is a fatal error. If an error is fatal,
+ then epydoc should ignore the output of the parser, and
+ parse the docstring as plaintext.
+ @rtype: C{boolean}
+ """
+ return self._fatal
+
+ def linenum(self):
+ """
+ @return: The line number on which the error occurred (including
+ any offset). If the line number is unknown, then return
+ C{None}.
+ @rtype: C{int} or C{None}
+ """
+ if self._linenum is None: return None
+ else: return self._offset + self._linenum
+
+ def set_linenum_offset(self, offset):
+ """
+ Set the line number offset for this error. This offset is the
+ line number where the docstring begins. This offset is added
+ to C{_linenum} when displaying the line number of the error.
+
+ @param offset: The new line number offset.
+ @type offset: C{int}
+ @rtype: C{None}
+ """
+ self._offset = offset
+
+ def descr(self):
+ return self._descr
+
+ def __str__(self):
+ """
+ Return a string representation of this C{ParseError}. This
+ multi-line string contains a description of the error, and
+ specifies where it occurred.
+
+ @return: the informal representation of this C{ParseError}.
+ @rtype: C{string}
+ """
+ if self._linenum is not None:
+ return 'Line %s: %s' % (self._linenum+self._offset, self.descr())
+ else:
+ return self.descr()
+
+ def __repr__(self):
+ """
+ Return the formal representation of this C{ParseError}.
+ C{ParseError}s have formal representations of the form::
+ <ParseError on line 12>
+
+ @return: the formal representation of this C{ParseError}.
+ @rtype: C{string}
+ """
+ if self._linenum is None:
+ return '<ParseError on line %d>' % self._offset
+ else:
+ return '<ParseError on line %d>' % (self._linenum+self._offset)
+
+ def __cmp__(self, other):
+ """
+ Compare two C{ParseError}s, based on their line number.
+ - Return -1 if C{self.linenum<other.linenum}
+ - Return +1 if C{self.linenum>other.linenum}
+ - Return 0 if C{self.linenum==other.linenum}.
+ The return value is undefined if C{other} is not a
+ ParseError.
+
+ @rtype: C{int}
+ """
+ if not isinstance(other, ParseError): return -1000
+ return cmp(self._linenum+self._offset,
+ other._linenum+other._offset)
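The interaction between C{_linenum} (0-based within the docstring) and C{_offset} (where the docstring starts in the source file) is the subtle part of C{ParseError}; a trimmed sketch that keeps only the line-number bookkeeping:

```python
class ParseErrorSketch(Exception):
    # Only the line-number bookkeeping of ParseError is reproduced.
    def __init__(self, descr, linenum=None):
        self._descr = descr
        self._linenum = linenum   # 0-based line within the docstring
        self._offset = 1          # default: docstring starts at line 1
    def set_linenum_offset(self, offset):
        self._offset = offset
    def linenum(self):
        if self._linenum is None:
            return None
        return self._offset + self._linenum

err = ParseErrorSketch('unexpected indent', linenum=2)
err.set_linenum_offset(40)   # the docstring begins on line 40
print(err.linenum())
```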
+
+##################################################
+## Misc helpers
+##################################################
+# These are used by multiple markup parsers
+
+def parse_type_of(obj):
+ """
+ @return: A C{ParsedDocstring} that encodes the type of the given
+ object.
+ @rtype: L{ParsedDocstring}
+ @param obj: The object whose type should be returned as a DOM document.
+ @type obj: any
+ """
+ # This is a bit hackish; oh well. :)
+ from epydoc.markup.epytext import ParsedEpytextDocstring
+ from xml.dom.minidom import Document
+ doc = Document()
+ epytext = doc.createElement('epytext')
+ para = doc.createElement('para')
+ doc.appendChild(epytext)
+ epytext.appendChild(para)
+
+ if type(obj) is types.InstanceType:
+ link = doc.createElement('link')
+ name = doc.createElement('name')
+ target = doc.createElement('target')
+ para.appendChild(link)
+ link.appendChild(name)
+ link.appendChild(target)
+ name.appendChild(doc.createTextNode(str(obj.__class__.__name__)))
+ target.appendChild(doc.createTextNode(str(obj.__class__)))
+ else:
+ code = doc.createElement('code')
+ para.appendChild(code)
+ code.appendChild(doc.createTextNode(type(obj).__name__))
+ return ParsedEpytextDocstring(doc)
+
diff --git a/python/helpers/epydoc/markup/doctest.py b/python/helpers/epydoc/markup/doctest.py
new file mode 100644
index 0000000..987df40
--- /dev/null
+++ b/python/helpers/epydoc/markup/doctest.py
@@ -0,0 +1,311 @@
+#
+# doctest.py: Syntax Highlighting for doctest blocks
+# Edward Loper
+#
+# Created [06/28/03 02:52 AM]
+# $Id: restructuredtext.py 1210 2006-04-10 13:25:50Z edloper $
+#
+
+"""
+Syntax highlighting for doctest blocks. This module defines two
+functions, L{doctest_to_html()} and L{doctest_to_latex()}, which can
+be used to perform syntax highlighting on doctest blocks. It also
+defines the more general C{colorize_doctest()}, which could be used to
+ do syntax highlighting on doctest blocks with other output formats.
+(Both C{doctest_to_html()} and C{doctest_to_latex()} are defined using
+C{colorize_doctest()}.)
+"""
+__docformat__ = 'epytext en'
+
+import re
+from epydoc.util import plaintext_to_html, plaintext_to_latex
+
+__all__ = ['doctest_to_html', 'doctest_to_latex',
+ 'DoctestColorizer', 'XMLDoctestColorizer',
+ 'HTMLDoctestColorizer', 'LaTeXDoctestColorizer']
+
+def doctest_to_html(s):
+ """
+ Perform syntax highlighting on the given doctest string, and
+ return the resulting HTML code. This code consists of a C{<pre>}
+ block with class=py-doctest. Syntax highlighting is performed
+ using the following css classes:
+
+ - C{py-prompt} -- the Python PS1 prompt (>>>)
+ - C{py-more} -- the Python PS2 prompt (...)
+ - C{py-keyword} -- a Python keyword (for, if, etc.)
+ - C{py-builtin} -- a Python builtin name (abs, dir, etc.)
+ - C{py-string} -- a string literal
+ - C{py-comment} -- a comment
+ - C{py-except} -- an exception traceback (up to the next >>>)
+ - C{py-output} -- the output from a doctest block.
+ - C{py-defname} -- the name of a function or class defined by
+ a C{def} or C{class} statement.
+ """
+ return HTMLDoctestColorizer().colorize_doctest(s)
+
+def doctest_to_latex(s):
+ """
+ Perform syntax highlighting on the given doctest string, and
+ return the resulting LaTeX code. This code consists of an
+ C{alltt} environment. Syntax highlighting is performed using
+ the following new latex commands, which must be defined externally:
+ - C{\pysrcprompt} -- the Python PS1 prompt (>>>)
+ - C{\pysrcmore} -- the Python PS2 prompt (...)
+ - C{\pysrckeyword} -- a Python keyword (for, if, etc.)
+ - C{\pysrcbuiltin} -- a Python builtin name (abs, dir, etc.)
+ - C{\pysrcstring} -- a string literal
+ - C{\pysrccomment} -- a comment
+ - C{\pysrcexcept} -- an exception traceback (up to the next >>>)
+ - C{\pysrcoutput} -- the output from a doctest block.
+ - C{\pysrcdefname} -- the name of a function or class defined by
+ a C{def} or C{class} statement.
+ """
+ return LaTeXDoctestColorizer().colorize_doctest(s)
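Both converters delegate to a colorizer that walks the block with regular expressions. As a toy illustration of the HTML side only, here is a sketch that handles nothing but the PS1 prompt (the real C{HTMLDoctestColorizer} below also colors keywords, strings, comments, and output):

```python
import re

def highlight_prompts(s):
    # Wrap each PS1 prompt in the py-prompt span used by the HTML
    # output; everything else is left untouched in this toy version.
    return re.sub(r'(?m)^([ \t]*>>> )',
                  r'<span class="py-prompt">\1</span>', s)

print(highlight_prompts('>>> 1 + 1\n2'))
```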
+
+class DoctestColorizer:
+ """
+ An abstract base class for performing syntax highlighting on
+ doctest blocks and other bits of Python code. Subclasses should
+ provide definitions for:
+
+ - The L{markup()} method, which takes a substring and a tag, and
+ returns a colorized version of the substring.
+ - The L{PREFIX} and L{SUFFIX} variables, which will be added
+ to the beginning and end of the strings returned by
+ L{colorize_codeblock} and L{colorize_doctest}.
+ """
+
+ #: A string that is added to the beginning of the strings
+ #: returned by L{colorize_codeblock} and L{colorize_doctest}.
+ #: Typically, this string begins a preformatted area.
+ PREFIX = None
+
+ #: A string that is added to the end of the strings
+ #: returned by L{colorize_codeblock} and L{colorize_doctest}.
+ #: Typically, this string ends a preformatted area.
+ SUFFIX = None
+
+ #: A list of the names of all Python keywords. ('as' is included
+ #: even though it is technically not a keyword.)
+ _KEYWORDS = ("and del for is raise "
+ "assert elif from lambda return "
+ "break else global not try "
+ "class except if or while "
+ "continue exec import pass yield "
+ "def finally in print as").split()
+
+ #: A list of all Python builtins.
+ _BUILTINS = [_BI for _BI in dir(__builtins__)
+ if not _BI.startswith('__')]
+
+ #: A regexp group that matches keywords.
+ _KEYWORD_GRP = '|'.join([r'\b%s\b' % _KW for _KW in _KEYWORDS])
+
+ #: A regexp group that matches Python builtins.
+ _BUILTIN_GRP = (r'(?<!\.)(?:%s)' % '|'.join([r'\b%s\b' % _BI
+ for _BI in _BUILTINS]))
+
+ #: A regexp group that matches Python strings.
+ _STRING_GRP = '|'.join(
+ [r'("""("""|.*?((?!").)"""))', r'("("|.*?((?!").)"))',
+ r"('''('''|.*?[^\\']'''))", r"('('|.*?[^\\']'))"])
+
+ #: A regexp group that matches Python comments.
+ _COMMENT_GRP = '(#.*?$)'
+
+ #: A regexp group that matches Python ">>>" prompts.
+ _PROMPT1_GRP = r'^[ \t]*>>>(?:[ \t]|$)'
+
+ #: A regexp group that matches Python "..." prompts.
+ _PROMPT2_GRP = r'^[ \t]*\.\.\.(?:[ \t]|$)'
+
+ #: A regexp group that matches function and class definitions.
+ _DEFINE_GRP = r'\b(?:def|class)[ \t]+\w+'
+
+ #: A regexp that matches Python prompts
+ PROMPT_RE = re.compile('(%s|%s)' % (_PROMPT1_GRP, _PROMPT2_GRP),
+ re.MULTILINE | re.DOTALL)
+
+ #: A regexp that matches Python "..." prompts.
+ PROMPT2_RE = re.compile('(%s)' % _PROMPT2_GRP,
+ re.MULTILINE | re.DOTALL)
+
+ #: A regexp that matches doctest exception blocks.
+ EXCEPT_RE = re.compile(r'^[ \t]*Traceback \(most recent call last\):.*',
+ re.DOTALL | re.MULTILINE)
+
+ #: A regexp that matches doctest directives.
+ DOCTEST_DIRECTIVE_RE = re.compile(r'#[ \t]*doctest:.*')
+
+ #: A regexp that matches all of the regions of a doctest block
+ #: that should be colored.
+ DOCTEST_RE = re.compile(
+ r'(.*?)((?P<STRING>%s)|(?P<COMMENT>%s)|(?P<DEFINE>%s)|'
+ r'(?P<KEYWORD>%s)|(?P<BUILTIN>%s)|'
+ r'(?P<PROMPT1>%s)|(?P<PROMPT2>%s)|(?P<EOS>\Z))' % (
+ _STRING_GRP, _COMMENT_GRP, _DEFINE_GRP, _KEYWORD_GRP, _BUILTIN_GRP,
+ _PROMPT1_GRP, _PROMPT2_GRP), re.MULTILINE | re.DOTALL)
+
+ #: This regular expression is used to find doctest examples in a
+ #: string. This is copied from the standard Python doctest.py
+ #: module (after the refactoring in Python 2.4+).
+ DOCTEST_EXAMPLE_RE = re.compile(r'''
+ # Source consists of a PS1 line followed by zero or more PS2 lines.
+ (?P<source>
+ (?:^(?P<indent> [ ]*) >>> .*) # PS1 line
+ (?:\n [ ]* \.\.\. .*)* # PS2 lines
+ \n?)
+ # Want consists of any non-blank lines that do not start with PS1.
+ (?P<want> (?:(?![ ]*$) # Not a blank line
+ (?![ ]*>>>) # Not a line starting with PS1
+ .*$\n? # But any other line
+ )*)
+ ''', re.MULTILINE | re.VERBOSE)
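Applied to a small block, the source/want pattern above splits each example into the code typed at the prompt and the expected output. The sketch below reuses the same pattern verbatim:

```python
import re

# The same pattern as DOCTEST_EXAMPLE_RE (copied from the standard
# doctest module), exercised on a two-line example.
EXAMPLE_RE = re.compile(r'''
    # Source consists of a PS1 line followed by zero or more PS2 lines.
    (?P<source>
        (?:^(?P<indent> [ ]*) >>> .*)    # PS1 line
        (?:\n [ ]* \.\.\. .*)*           # PS2 lines
        \n?)
    # Want consists of any non-blank lines that do not start with PS1.
    (?P<want> (?:(?![ ]*$)       # Not a blank line
              (?![ ]*>>>)        # Not a line starting with PS1
              .*$\n?             # But any other line
              )*)
    ''', re.MULTILINE | re.VERBOSE)

m = EXAMPLE_RE.search('>>> 2 + 2\n4\n')
print(repr(m.group('source')), repr(m.group('want')))
```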
+
+ def colorize_inline(self, s):
+ """
+ Colorize a string containing Python code. Do not add the
+ L{PREFIX} and L{SUFFIX} strings to the returned value. This
+ method is intended for generating syntax-highlighted strings
+ that are appropriate for inclusion as inline expressions.
+ """
+ return self.DOCTEST_RE.sub(self.subfunc, s)
+
+ def colorize_codeblock(self, s):
+ """
+ Colorize a string containing only Python code. This method
+ differs from L{colorize_doctest} in that it will not search
+ for doctest prompts when deciding how to colorize the string.
+ """
+ body = self.DOCTEST_RE.sub(self.subfunc, s)
+ return self.PREFIX + body + self.SUFFIX
+
+ def colorize_doctest(self, s, strip_directives=False):
+ """
+ Colorize a string containing one or more doctest examples.
+ """
+ output = []
+ charno = 0
+ for m in self.DOCTEST_EXAMPLE_RE.finditer(s):
+ # Parse the doctest example:
+ pysrc, want = m.group('source', 'want')
+ # Pre-example text:
+ output.append(s[charno:m.start()])
+ # Example source code:
+ output.append(self.DOCTEST_RE.sub(self.subfunc, pysrc))
+ # Example output:
+ if want:
+ if self.EXCEPT_RE.match(want):
+ output.append('\n'.join([self.markup(line, 'except')
+ for line in want.split('\n')]))
+ else:
+ output.append('\n'.join([self.markup(line, 'output')
+ for line in want.split('\n')]))
+ # Update charno
+ charno = m.end()
+ # Add any remaining post-example text.
+ output.append(s[charno:])
+
+ return self.PREFIX + ''.join(output) + self.SUFFIX
+
+ def subfunc(self, match):
+ other, text = match.group(1, 2)
+ #print 'M %20r %20r' % (other, text) # <- for debugging
+ if other:
+ other = '\n'.join([self.markup(line, 'other')
+ for line in other.split('\n')])
+
+ if match.group('PROMPT1'):
+ return other + self.markup(text, 'prompt')
+ elif match.group('PROMPT2'):
+ return other + self.markup(text, 'more')
+ elif match.group('KEYWORD'):
+ return other + self.markup(text, 'keyword')
+ elif match.group('BUILTIN'):
+ return other + self.markup(text, 'builtin')
+ elif match.group('COMMENT'):
+ return other + self.markup(text, 'comment')
+ elif match.group('STRING') and '\n' not in text:
+ return other + self.markup(text, 'string')
+ elif match.group('STRING'):
+ # It's a multiline string; colorize the string & prompt
+ # portion of each line.
+ pieces = []
+ for line in text.split('\n'):
+ if self.PROMPT2_RE.match(line):
+ if len(line) > 4:
+ pieces.append(self.markup(line[:4], 'more') +
+ self.markup(line[4:], 'string'))
+ else:
+ pieces.append(self.markup(line[:4], 'more'))
+ elif line:
+ pieces.append(self.markup(line, 'string'))
+ else:
+ pieces.append('')
+ return other + '\n'.join(pieces)
+ elif match.group('DEFINE'):
+ m = re.match('(?P<def>\w+)(?P<space>\s+)(?P<name>\w+)', text)
+ return other + (self.markup(m.group('def'), 'keyword') +
+ self.markup(m.group('space'), 'other') +
+ self.markup(m.group('name'), 'defname'))
+ elif match.group('EOS') is not None:
+ return other
+ else:
+ assert 0, 'Unexpected match!'
+
+ def markup(self, s, tag):
+ """
+ Apply syntax highlighting to a single substring from a doctest
+ block. C{s} is the substring, and C{tag} is the tag that
+ should be applied to the substring. C{tag} will be one of the
+ following strings:
+
+ - C{prompt} -- the Python PS1 prompt (>>>)
+ - C{more} -- the Python PS2 prompt (...)
+ - C{keyword} -- a Python keyword (for, if, etc.)
+ - C{builtin} -- a Python builtin name (abs, dir, etc.)
+ - C{string} -- a string literal
+ - C{comment} -- a comment
+ - C{except} -- an exception traceback (up to the next >>>)
+ - C{output} -- the output from a doctest block.
+ - C{defname} -- the name of a function or class defined by
+ a C{def} or C{class} statement.
+ - C{other} -- anything else (does *not* include output.)
+ """
+ raise AssertionError("Abstract method")
+
+class XMLDoctestColorizer(DoctestColorizer):
+ """
+ A subclass of DoctestColorizer that generates XML-like output.
+ This class is mainly intended to be used for testing purposes.
+ """
+ PREFIX = '<colorized>\n'
+ SUFFIX = '</colorized>\n'
+ def markup(self, s, tag):
+ s = s.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
+ if tag == 'other': return s
+ else: return '<%s>%s</%s>' % (tag, s, tag)
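The markup hook reduces to escape-then-wrap; a standalone version of the same logic, with the XML escapes written out:

```python
def xml_markup(s, tag):
    # Escape XML metacharacters, then wrap everything except plain
    # 'other' text in an element named after the tag.
    s = s.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
    if tag == 'other':
        return s
    return '<%s>%s</%s>' % (tag, s, tag)

print(xml_markup('>>>', 'prompt'))
```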
+
+class HTMLDoctestColorizer(DoctestColorizer):
+ """A subclass of DoctestColorizer that generates HTML output."""
+ PREFIX = '<pre class="py-doctest">\n'
+ SUFFIX = '</pre>\n'
+ def markup(self, s, tag):
+ if tag == 'other':
+ return plaintext_to_html(s)
+ else:
+ return ('<span class="py-%s">%s</span>' %
+ (tag, plaintext_to_html(s)))
+
+class LaTeXDoctestColorizer(DoctestColorizer):
+ """A subclass of DoctestColorizer that generates LaTeX output."""
+ PREFIX = '\\begin{alltt}\n'
+ SUFFIX = '\\end{alltt}\n'
+ def markup(self, s, tag):
+ if tag == 'other':
+ return plaintext_to_latex(s)
+ else:
+ return '\\pysrc%s{%s}' % (tag, plaintext_to_latex(s))
+
+
diff --git a/python/helpers/epydoc/markup/epytext.py b/python/helpers/epydoc/markup/epytext.py
new file mode 100644
index 0000000..058c5fa
--- /dev/null
+++ b/python/helpers/epydoc/markup/epytext.py
@@ -0,0 +1,2116 @@
+#
+# epytext.py: epydoc formatted docstring parsing
+# Edward Loper
+#
+# Created [04/10/01 12:00 AM]
+# $Id: epytext.py 1652 2007-09-26 04:45:34Z edloper $
+#
+
+"""
+Parser for epytext strings. Epytext is a lightweight markup whose
+primary intended application is Python documentation strings. This
+parser converts Epytext strings to a simple DOM-like representation
+(encoded as a tree of L{Element} objects and strings). Epytext
+strings can contain the following X{structural blocks}:
+
+ - X{epytext}: The top-level element of the DOM tree.
+ - X{para}: A paragraph of text. Paragraphs contain no newlines,
+ and all spaces are soft.
+ - X{section}: A section or subsection.
+ - X{field}: A tagged field. These fields provide information
+ about specific aspects of a Python object, such as the
+ description of a function's parameter, or the author of a
+ module.
+ - X{literalblock}: A block of literal text. This text should be
+ displayed as it would be displayed in plaintext. The
+ parser removes the appropriate amount of leading whitespace
+ from each line in the literal block.
+ - X{doctestblock}: A block containing sample python code,
+ formatted according to the specifications of the C{doctest}
+ module.
+ - X{ulist}: An unordered list.
+ - X{olist}: An ordered list.
+ - X{li}: A list item. This tag is used both for unordered list
+ items and for ordered list items.
+
+Additionally, the following X{inline regions} may be used within
+C{para} blocks:
+
+ - X{code}: Source code and identifiers.
+ - X{math}: Mathematical expressions.
+ - X{index}: A term which should be included in an index, if one
+ is generated.
+ - X{italic}: Italicized text.
+ - X{bold}: Bold-faced text.
+ - X{uri}: A Uniform Resource Identifier (URI) or Uniform
+ Resource Locator (URL)
+ - X{link}: A Python identifier which should be hyperlinked to
+ the named object's documentation, when possible.
+
+ The returned DOM tree will conform to the following Document Type
+Description::
+
+ <!ENTITY % colorized '(code | math | index | italic |
+ bold | uri | link | symbol)*'>
+
+ <!ELEMENT epytext ((para | literalblock | doctestblock |
+ section | ulist | olist)*, fieldlist?)>
+
+ <!ELEMENT para (#PCDATA | %colorized;)*>
+
+ <!ELEMENT section (para | literalblock | doctestblock |
+ section | ulist | olist)+>
+
+ <!ELEMENT fieldlist (field+)>
+ <!ELEMENT field (tag, arg?, (para | literalblock | doctestblock |
+ ulist | olist)+)>
+ <!ELEMENT tag (#PCDATA)>
+ <!ELEMENT arg (#PCDATA)>
+
+ <!ELEMENT literalblock (#PCDATA | %colorized;)*>
+ <!ELEMENT doctestblock (#PCDATA)>
+
+ <!ELEMENT ulist (li+)>
+ <!ELEMENT olist (li+)>
+ <!ELEMENT li (para | literalblock | doctestblock | ulist | olist)+>
+ <!ATTLIST li bullet NMTOKEN #IMPLIED>
+ <!ATTLIST olist start NMTOKEN #IMPLIED>
+
+ <!ELEMENT uri (name, target)>
+ <!ELEMENT link (name, target)>
+ <!ELEMENT name (#PCDATA | %colorized;)*>
+ <!ELEMENT target (#PCDATA)>
+
+ <!ELEMENT code (#PCDATA | %colorized;)*>
+ <!ELEMENT math (#PCDATA | %colorized;)*>
+ <!ELEMENT italic (#PCDATA | %colorized;)*>
+ <!ELEMENT bold (#PCDATA | %colorized;)*>
+ <!ELEMENT indexed (#PCDATA | %colorized;)>
+ <!ATTLIST code style CDATA #IMPLIED>
+
+ <!ELEMENT symbol (#PCDATA)>
+
+ @var SYMBOLS: A list of the escape symbols that are supported
+ by epydoc. Currently the following symbols are supported:
+<<<SYMBOLS>>>
+"""
+# Note: the symbol list is appended to the docstring automatically,
+# below.
+
+__docformat__ = 'epytext en'
+
+# Code organization..
+# 1. parse()
+# 2. tokenize()
+# 3. colorize()
+# 4. helpers
+# 5. testing
+
+import re, string, types, sys, os.path
+from epydoc.markup import *
+from epydoc.util import wordwrap, plaintext_to_html, plaintext_to_latex
+from epydoc.markup.doctest import doctest_to_html, doctest_to_latex
+
+##################################################
+## DOM-Like Encoding
+##################################################
+
+class Element:
+ """
+ A very simple DOM-like representation for parsed epytext
+ documents. Each epytext document is encoded as a tree whose nodes
+ are L{Element} objects, and whose leaves are C{string}s. Each
+ node is marked by a I{tag} and zero or more I{attributes}. Each
+ attribute is a mapping from a string key to a string value.
+ """
+ def __init__(self, tag, *children, **attribs):
+ self.tag = tag
+ """A string tag indicating the type of this element.
+ @type: C{string}"""
+
+ self.children = list(children)
+ """A list of the children of this element.
+ @type: C{list} of (C{string} or C{Element})"""
+
+ self.attribs = attribs
+ """A dictionary mapping attribute names to attribute values
+ for this element.
+ @type: C{dict} from C{string} to C{string}"""
+
+ def __str__(self):
+ """
+ Return a string representation of this element, using XML
+ notation.
+ @bug: Doesn't escape '<' or '&' or '>'.
+ """
+ attribs = ''.join([' %s=%r' % t for t in self.attribs.items()])
+ return ('<%s%s>' % (self.tag, attribs) +
+ ''.join([str(child) for child in self.children]) +
+ '</%s>' % self.tag)
+
+ def __repr__(self):
+ attribs = ''.join([', %s=%r' % t for t in self.attribs.items()])
+ args = ''.join([', %r' % c for c in self.children])
+ return 'Element(%s%s%s)' % (self.tag, args, attribs)
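A quick check of the XML-style rendering (note the C{@bug} noted above: children are not escaped):

```python
class ElementSketch:
    # Same constructor and string rendering as Element above.
    def __init__(self, tag, *children, **attribs):
        self.tag = tag
        self.children = list(children)
        self.attribs = attribs
    def __str__(self):
        attribs = ''.join([' %s=%r' % t for t in self.attribs.items()])
        return ('<%s%s>' % (self.tag, attribs) +
                ''.join([str(child) for child in self.children]) +
                '</%s>' % self.tag)

tree = ElementSketch('para', 'Hello, ', ElementSketch('bold', 'world'), '!')
print(str(tree))
```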
+
+##################################################
+## Constants
+##################################################
+
+# The possible heading underline characters, listed in order of
+# heading depth.
+_HEADING_CHARS = "=-~"
+
+# Escape codes. These should be needed very rarely.
+_ESCAPES = {'lb':'{', 'rb': '}'}
+
+# Symbols. These can be generated via S{...} escapes.
+SYMBOLS = [
+ # Arrows
+ '<-', '->', '^', 'v',
+
+ # Greek letters
+ 'alpha', 'beta', 'gamma', 'delta', 'epsilon', 'zeta',
+ 'eta', 'theta', 'iota', 'kappa', 'lambda', 'mu',
+ 'nu', 'xi', 'omicron', 'pi', 'rho', 'sigma',
+ 'tau', 'upsilon', 'phi', 'chi', 'psi', 'omega',
+ 'Alpha', 'Beta', 'Gamma', 'Delta', 'Epsilon', 'Zeta',
+ 'Eta', 'Theta', 'Iota', 'Kappa', 'Lambda', 'Mu',
+ 'Nu', 'Xi', 'Omicron', 'Pi', 'Rho', 'Sigma',
+ 'Tau', 'Upsilon', 'Phi', 'Chi', 'Psi', 'Omega',
+
+ # HTML character entities
+ 'larr', 'rarr', 'uarr', 'darr', 'harr', 'crarr',
+ 'lArr', 'rArr', 'uArr', 'dArr', 'hArr',
+ 'copy', 'times', 'forall', 'exist', 'part',
+ 'empty', 'isin', 'notin', 'ni', 'prod', 'sum',
+ 'prop', 'infin', 'ang', 'and', 'or', 'cap', 'cup',
+ 'int', 'there4', 'sim', 'cong', 'asymp', 'ne',
+ 'equiv', 'le', 'ge', 'sub', 'sup', 'nsub',
+ 'sube', 'supe', 'oplus', 'otimes', 'perp',
+
+ # Alternate (long) names
+ 'infinity', 'integral', 'product',
+ '>=', '<=',
+ ]
+# Convert to a dictionary, for quick lookup
+_SYMBOLS = {}
+for symbol in SYMBOLS: _SYMBOLS[symbol] = 1
+
+# Add symbols to the docstring.
+symblist = ' '
+symblist += ';\n '.join([' - C{E{S}{%s}}=S{%s}' % (symbol, symbol)
+ for symbol in SYMBOLS])
+__doc__ = __doc__.replace('<<<SYMBOLS>>>', symblist)
+del symbol, symblist
+
+# Tags for colorizing text.
+_COLORIZING_TAGS = {
+ 'C': 'code',
+ 'M': 'math',
+ 'X': 'indexed',
+ 'I': 'italic',
+ 'B': 'bold',
+ 'U': 'uri',
+ 'L': 'link', # A Python identifier that should be linked to
+ 'E': 'escape', # escapes characters or creates symbols
+ 'S': 'symbol',
+ 'G': 'graph',
+ }
+
+# Which tags can use "link syntax" (e.g., U{Python<www.python.org>})?
+_LINK_COLORIZING_TAGS = ['link', 'uri']
+
+##################################################
+## Structuring (Top Level)
+##################################################
+
+def parse(str, errors = None):
+ """
+ Return a DOM tree encoding the contents of an epytext string. Any
+ errors generated during parsing will be stored in C{errors}.
+
+ @param str: The epytext string to parse.
+ @type str: C{string}
+ @param errors: A list where any errors generated during parsing
+ will be stored. If no list is specified, then fatal errors
+ will generate exceptions, and non-fatal errors will be
+ ignored.
+ @type errors: C{list} of L{ParseError}
+ @return: a DOM tree encoding the contents of an epytext string.
+ @rtype: C{Element}
+ @raise ParseError: If C{errors} is C{None} and an error is
+ encountered while parsing.
+ """
+ # Initialize errors list.
+ if errors == None:
+ errors = []
+ raise_on_error = 1
+ else:
+ raise_on_error = 0
+
+ # Preprocess the string.
+ str = re.sub('\015\012', '\012', str)
+ str = string.expandtabs(str)
+
+ # Tokenize the input string.
+ tokens = _tokenize(str, errors)
+
+ # Have we encountered a field yet?
+ encountered_field = 0
+
+ # Create a document to hold the epytext.
+ doc = Element('epytext')
+
+ # Maintain two parallel stacks: one contains DOM elements, and
+ # gives the ancestors of the current block. The other contains
+ # indentation values, and gives the indentation of the
+ # corresponding DOM elements. An indentation of "None" reflects
+ # an unknown indentation. However, the indentation must be
+ # greater than, or greater than or equal to, the indentation of
+ # the prior element (depending on what type of DOM element it
+ # corresponds to). No two consecutive indent_stack values will
+ # ever be None. Use initial dummy elements in the stack, so we
+ # don't have to worry about bounds checking.
+ stack = [None, doc]
+ indent_stack = [-1, None]
+
+ for token in tokens:
+ # Uncomment this for debugging:
+ #print ('%s: %s\n%s: %s\n' %
+ # (''.join(['%-11s' % (t and t.tag) for t in stack]),
+ # token.tag, ''.join(['%-11s' % i for i in indent_stack]),
+ # token.indent))
+
+ # Pop any completed blocks off the stack.
+ _pop_completed_blocks(token, stack, indent_stack)
+
+ # If Token has type PARA, colorize and add the new paragraph
+ if token.tag == Token.PARA:
+ _add_para(doc, token, stack, indent_stack, errors)
+
+ # If Token has type HEADING, add the new section
+ elif token.tag == Token.HEADING:
+ _add_section(doc, token, stack, indent_stack, errors)
+
+ # If Token has type LBLOCK, add the new literal block
+ elif token.tag == Token.LBLOCK:
+ stack[-1].children.append(token.to_dom(doc))
+
+ # If Token has type DTBLOCK, add the new doctest block
+ elif token.tag == Token.DTBLOCK:
+ stack[-1].children.append(token.to_dom(doc))
+
+ # If Token has type BULLET, add the new list/list item/field
+ elif token.tag == Token.BULLET:
+ _add_list(doc, token, stack, indent_stack, errors)
+ else:
+ assert 0, 'Unknown token type: '+token.tag
+
+ # Check if the DOM element we just added was a field..
+ if stack[-1].tag == 'field':
+ encountered_field = 1
+ elif encountered_field == 1:
+ if len(stack) <= 3:
+ estr = ("Fields must be the final elements in an "+
+ "epytext string.")
+ errors.append(StructuringError(estr, token.startline))
+
+ # Graphs use inline markup (G{...}) but are really block-level
+ # elements; so "raise" any graphs we generated. This is a bit of
+ # a hack, but the alternative is to define a new markup for
+ # block-level elements, which I'd rather not do. (See sourceforge
+ # bug #1673017.)
+ for child in doc.children:
+ _raise_graphs(child, doc)
+
+ # If there was an error, then signal it!
+ if len([e for e in errors if e.is_fatal()]) > 0:
+ if raise_on_error:
+ raise errors[0]
+ else:
+ return None
+
+ # Return the top-level epytext DOM element.
+ return doc
+
+def _raise_graphs(tree, parent):
+ # Recurse to children.
+ have_graph_child = False
+ for elt in tree.children:
+ if isinstance(elt, Element):
+ _raise_graphs(elt, tree)
+ if elt.tag == 'graph': have_graph_child = True
+
+ block = ('section', 'fieldlist', 'field', 'ulist', 'olist', 'li')
+ if have_graph_child and tree.tag not in block:
+ child_index = 0
+ for elt in tree.children:
+ if isinstance(elt, Element) and elt.tag == 'graph':
+ # We found a graph: splice it into the parent.
+ parent_index = parent.children.index(tree)
+ left = tree.children[:child_index]
+ right = tree.children[child_index+1:]
+ parent.children[parent_index:parent_index+1] = [
+ Element(tree.tag, *left, **tree.attribs),
+ elt,
+ Element(tree.tag, *right, **tree.attribs)]
+ child_index = 0
+ parent_index += 2
+ else:
+ child_index += 1
+
+def _pop_completed_blocks(token, stack, indent_stack):
+ """
+ Pop any completed blocks off the stack. This includes any
+ blocks that we have dedented past, as well as any list item
+ blocks that we've dedented to. The top element on the stack
+ should only be a list if we're about to start a new list
+ item (i.e., if the next token is a bullet).
+ """
+ indent = token.indent
+ if indent != None:
+ while (len(stack) > 2):
+ pop = 0
+
+ # Dedent past a block
+ if indent_stack[-1]!=None and indent<indent_stack[-1]: pop=1
+ elif indent_stack[-1]==None and indent<indent_stack[-2]: pop=1
+
+ # Dedent to a list item, if it is followed by another list
+ # item with the same indentation.
+ elif (token.tag == 'bullet' and indent==indent_stack[-2] and
+ stack[-1].tag in ('li', 'field')): pop=1
+
+ # End of a list (no more list items available)
+ elif (stack[-1].tag in ('ulist', 'olist') and
+ (token.tag != 'bullet' or token.contents[-1] == ':')):
+ pop=1
+
+ # Pop the block, if it's complete. Otherwise, we're done.
+ if pop == 0: return
+ stack.pop()
+ indent_stack.pop()
+
+def _add_para(doc, para_token, stack, indent_stack, errors):
+ """Colorize the given paragraph, and add it to the DOM tree."""
+ # Check indentation, and update the parent's indentation
+ # when appropriate.
+ if indent_stack[-1] == None:
+ indent_stack[-1] = para_token.indent
+ if para_token.indent == indent_stack[-1]:
+ # Colorize the paragraph and add it.
+ para = _colorize(doc, para_token, errors)
+ if para_token.inline:
+ para.attribs['inline'] = True
+ stack[-1].children.append(para)
+ else:
+ estr = "Improper paragraph indentation."
+ errors.append(StructuringError(estr, para_token.startline))
+
+def _add_section(doc, heading_token, stack, indent_stack, errors):
+ """Add a new section to the DOM tree, with the given heading."""
+ if indent_stack[-1] == None:
+ indent_stack[-1] = heading_token.indent
+ elif indent_stack[-1] != heading_token.indent:
+ estr = "Improper heading indentation."
+ errors.append(StructuringError(estr, heading_token.startline))
+
+ # Check for errors.
+ for tok in stack[2:]:
+ if tok.tag != "section":
+ estr = "Headings must occur at the top level."
+ errors.append(StructuringError(estr, heading_token.startline))
+ break
+ if (heading_token.level+2) > len(stack):
+ estr = "Wrong underline character for heading."
+ errors.append(StructuringError(estr, heading_token.startline))
+
+ # Pop the appropriate number of headings so we're at the
+ # correct level.
+ stack[heading_token.level+2:] = []
+ indent_stack[heading_token.level+2:] = []
+
+ # Colorize the heading
+ head = _colorize(doc, heading_token, errors, 'heading')
+
+ # Add the section's and heading's DOM elements.
+ sec = Element("section")
+ stack[-1].children.append(sec)
+ stack.append(sec)
+ sec.children.append(head)
+ indent_stack.append(None)
+
+def _add_list(doc, bullet_token, stack, indent_stack, errors):
+ """
+ Add a new list item or field to the DOM tree, with the given
+ bullet or field tag. When necessary, create the associated
+ list.
+ """
+ # Determine what type of bullet it is.
+ if bullet_token.contents[-1] == '-':
+ list_type = 'ulist'
+ elif bullet_token.contents[-1] == '.':
+ list_type = 'olist'
+ elif bullet_token.contents[-1] == ':':
+ list_type = 'fieldlist'
+ else:
+ raise AssertionError('Bad Bullet: %r' % bullet_token.contents)
+
+ # Is this a new list?
+ newlist = 0
+ if stack[-1].tag != list_type:
+ newlist = 1
+ elif list_type == 'olist' and stack[-1].tag == 'olist':
+ old_listitem = stack[-1].children[-1]
+ old_bullet = old_listitem.attribs.get("bullet").split('.')[:-1]
+ new_bullet = bullet_token.contents.split('.')[:-1]
+ if (new_bullet[:-1] != old_bullet[:-1] or
+ int(new_bullet[-1]) != int(old_bullet[-1])+1):
+ newlist = 1
+
+ # Create the new list.
+ if newlist:
+ if stack[-1].tag == 'fieldlist':
+ # The new list item is not a field list item (since this
+ # is a new list); but it's indented the same as the field
+ # list. This either means that they forgot to indent the
+ # list, or they are trying to put something after the
+ # field list. The first one seems more likely, so we'll
+ # just warn about that (to avoid confusion).
+ estr = "Lists must be indented."
+ errors.append(StructuringError(estr, bullet_token.startline))
+ if stack[-1].tag in ('ulist', 'olist', 'fieldlist'):
+ stack.pop()
+ indent_stack.pop()
+
+ if (list_type != 'fieldlist' and indent_stack[-1] is not None and
+ bullet_token.indent == indent_stack[-1]):
+ # Ignore this error if there's text on the same line as
+ # the comment-opening quote -- epydoc can't reliably
+ # determine the indentation for that line.
+ if bullet_token.startline != 1 or bullet_token.indent != 0:
+ estr = "Lists must be indented."
+ errors.append(StructuringError(estr, bullet_token.startline))
+
+ if list_type == 'fieldlist':
+ # Fieldlist should be at the top-level.
+ for tok in stack[2:]:
+ if tok.tag != "section":
+ estr = "Fields must be at the top level."
+ errors.append(
+ StructuringError(estr, bullet_token.startline))
+ break
+ stack[2:] = []
+ indent_stack[2:] = []
+
+ # Add the new list.
+ lst = Element(list_type)
+ stack[-1].children.append(lst)
+ stack.append(lst)
+ indent_stack.append(bullet_token.indent)
+ if list_type == 'olist':
+ start = bullet_token.contents.split('.')[:-1]
+ if start[-1] != '1':
+ lst.attribs["start"] = start[-1]
+
+ # Fields are treated somewhat specially: A "fieldlist"
+ # node is created to make the parsing simpler, but fields
+ # are adjoined directly into the "epytext" node, not into
+ # the "fieldlist" node.
+ if list_type == 'fieldlist':
+ li = Element("field")
+ token_words = bullet_token.contents[1:-1].split(None, 1)
+ tag_elt = Element("tag")
+ tag_elt.children.append(token_words[0])
+ li.children.append(tag_elt)
+
+ if len(token_words) > 1:
+ arg_elt = Element("arg")
+ arg_elt.children.append(token_words[1])
+ li.children.append(arg_elt)
+ else:
+ li = Element("li")
+ if list_type == 'olist':
+ li.attribs["bullet"] = bullet_token.contents
+
+ # Add the bullet.
+ stack[-1].children.append(li)
+ stack.append(li)
+ indent_stack.append(None)
+
+##################################################
+## Tokenization
+##################################################
+
+class Token:
+ """
+ C{Token}s are an intermediate data structure used while
+ constructing the structuring DOM tree for a formatted docstring.
+ There are five types of C{Token}:
+
+ - Paragraphs
+ - Literal blocks
+ - Doctest blocks
+ - Headings
+ - Bullets
+
+ The text contained in each C{Token} is stored in the
+ C{contents} variable. The string in this variable has been
+ normalized. For paragraphs, this means that it has been converted
+ into a single line of text, with newline/indentation replaced by
+ single spaces. For literal blocks and doctest blocks, this means
+ that the appropriate amount of leading whitespace has been removed
+ from each line.
+
+ Each C{Token} has an indentation level associated with it,
+ stored in the C{indent} variable. This indentation level is used
+ by the structuring procedure to assemble hierarchical blocks.
+
+ @type tag: C{string}
+ @ivar tag: This C{Token}'s type. Possible values are C{Token.PARA}
+ (paragraph), C{Token.LBLOCK} (literal block), C{Token.DTBLOCK}
+ (doctest block), C{Token.HEADING}, and C{Token.BULLET}.
+
+ @type startline: C{int}
+ @ivar startline: The line on which this C{Token} begins. This
+ line number is only used for issuing errors.
+
+ @type contents: C{string}
+ @ivar contents: The normalized text contained in this C{Token}.
+
+ @type indent: C{int} or C{None}
+ @ivar indent: The indentation level of this C{Token} (in
+ number of leading spaces). A value of C{None} indicates an
+ unknown indentation; this is used for list items and fields
+ that begin with one-line paragraphs.
+
+ @type level: C{int} or C{None}
+ @ivar level: The heading-level of this C{Token} if it is a
+ heading; C{None}, otherwise. Valid heading levels are 0, 1,
+ and 2.
+
+ @type inline: C{bool}
+ @ivar inline: If True, the element is an inline-level element, comparable
+ to an HTML C{<span>} tag. Otherwise, it is a block-level element,
+ comparable to an HTML C{<div>}.
+
+ @type PARA: C{string}
+ @cvar PARA: The C{tag} value for paragraph C{Token}s.
+ @type LBLOCK: C{string}
+ @cvar LBLOCK: The C{tag} value for literal C{Token}s.
+ @type DTBLOCK: C{string}
+ @cvar DTBLOCK: The C{tag} value for doctest C{Token}s.
+ @type HEADING: C{string}
+ @cvar HEADING: The C{tag} value for heading C{Token}s.
+ @type BULLET: C{string}
+ @cvar BULLET: The C{tag} value for bullet C{Token}s. This C{tag}
+ value is also used for field tag C{Token}s, since fields
+ function syntactically the same as list items.
+ """
+ # The possible token types.
+ PARA = "para"
+ LBLOCK = "literalblock"
+ DTBLOCK = "doctestblock"
+ HEADING = "heading"
+ BULLET = "bullet"
+
+ def __init__(self, tag, startline, contents, indent, level=None,
+ inline=False):
+ """
+ Create a new C{Token}.
+
+ @param tag: The type of the new C{Token}.
+ @type tag: C{string}
+ @param startline: The line on which the new C{Token} begins.
+ @type startline: C{int}
+ @param contents: The normalized contents of the new C{Token}.
+ @type contents: C{string}
+ @param indent: The indentation of the new C{Token} (in number
+ of leading spaces). A value of C{None} indicates an
+ unknown indentation.
+ @type indent: C{int} or C{None}
+ @param level: The heading-level of this C{Token} if it is a
+ heading; C{None}, otherwise.
+ @type level: C{int} or C{None}
+ @param inline: Is this C{Token} inline, like a C{<span>}?
+ @type inline: C{bool}
+ """
+ self.tag = tag
+ self.startline = startline
+ self.contents = contents
+ self.indent = indent
+ self.level = level
+ self.inline = inline
+
+ def __repr__(self):
+ """
+ @rtype: C{string}
+ @return: the formal representation of this C{Token}.
+ C{Token}s have formal representations of the form::
+ <Token: para at line 12>
+ """
+ return '<Token: %s at line %s>' % (self.tag, self.startline)
+
+ def to_dom(self, doc):
+ """
+ @return: a DOM representation of this C{Token}.
+ @rtype: L{Element}
+ """
+ e = Element(self.tag)
+ e.children.append(self.contents)
+ return e
+
+# Construct regular expressions for recognizing bullets. These are
+# global so they don't have to be reconstructed each time we tokenize
+# a docstring.
+_ULIST_BULLET = r'[-]( +|$)'
+_OLIST_BULLET = r'(\d+[.])+( +|$)'
+_FIELD_BULLET = r'@\w+( [^{}:\n]+)?:'
+_BULLET_RE = re.compile(_ULIST_BULLET + '|' +
+ _OLIST_BULLET + '|' +
+ _FIELD_BULLET)
+_LIST_BULLET_RE = re.compile(_ULIST_BULLET + '|' + _OLIST_BULLET)
+_FIELD_BULLET_RE = re.compile(_FIELD_BULLET)
+del _ULIST_BULLET, _OLIST_BULLET, _FIELD_BULLET
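As a hedged illustration of what these bullet patterns accept, the sketch below re-declares local copies of the three patterns defined above (the names `ULIST`, `OLIST`, `FIELD`, and `BULLET_RE` are stand-ins, not part of the module):

```python
import re

# Local copies of the three bullet patterns defined above.
ULIST = r'[-]( +|$)'
OLIST = r'(\d+[.])+( +|$)'
FIELD = r'@\w+( [^{}:\n]+)?:'
BULLET_RE = re.compile(ULIST + '|' + OLIST + '|' + FIELD)

# Unordered items, (possibly nested) ordered items, and field tags
# all match at the start of the line; ordinary prose does not.
assert BULLET_RE.match('- first item')
assert BULLET_RE.match('1.2. nested item')
assert BULLET_RE.match('@param x: the x value')
assert BULLET_RE.match('plain paragraph text') is None
```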
+
+def _tokenize_doctest(lines, start, block_indent, tokens, errors):
+ """
+ Construct a L{Token} containing the doctest block starting at
+ C{lines[start]}, and append it to C{tokens}. C{block_indent}
+ should be the indentation of the doctest block. Any errors
+ generated while tokenizing the doctest block will be appended to
+ C{errors}.
+
+ @param lines: The list of lines to be tokenized
+ @param start: The index into C{lines} of the first line of the
+ doctest block to be tokenized.
+ @param block_indent: The indentation of C{lines[start]}. This is
+ the indentation of the doctest block.
+ @param errors: A list where any errors generated during parsing
+ will be stored. If no list is specified, then errors will
+ generate exceptions.
+ @return: The line number of the first line following the doctest
+ block.
+
+ @type lines: C{list} of C{string}
+ @type start: C{int}
+ @type block_indent: C{int}
+ @type tokens: C{list} of L{Token}
+ @type errors: C{list} of L{ParseError}
+ @rtype: C{int}
+ """
+ # If they dedent past block_indent, keep track of the minimum
+ # indentation. This is used when removing leading indentation
+ # from the lines of the doctest block.
+ min_indent = block_indent
+
+ linenum = start + 1
+ while linenum < len(lines):
+ # Find the indentation of this line.
+ line = lines[linenum]
+ indent = len(line) - len(line.lstrip())
+
+ # A blank line ends the doctest block.
+ if indent == len(line): break
+
+ # A dedent past block_indent is an error.
+ if indent < block_indent:
+ min_indent = min(min_indent, indent)
+ estr = 'Improper doctest block indentation.'
+ errors.append(TokenizationError(estr, linenum))
+
+ # Go on to the next line.
+ linenum += 1
+
+ # Add the token, and return the linenum after the token ends.
+ contents = [line[min_indent:] for line in lines[start:linenum]]
+ contents = '\n'.join(contents)
+ tokens.append(Token(Token.DTBLOCK, start, contents, block_indent))
+ return linenum
+
+def _tokenize_literal(lines, start, block_indent, tokens, errors):
+ """
+ Construct a L{Token} containing the literal block starting at
+ C{lines[start]}, and append it to C{tokens}. C{block_indent}
+ should be the indentation of the literal block. Any errors
+ generated while tokenizing the literal block will be appended to
+ C{errors}.
+
+ @param lines: The list of lines to be tokenized
+ @param start: The index into C{lines} of the first line of the
+ literal block to be tokenized.
+ @param block_indent: The indentation of C{lines[start]}. This is
+ the indentation of the literal block.
+ @param errors: A list of the errors generated by parsing. Any
+ new errors generated while tokenizing this literal block
+ will be appended to this list.
+ @return: The line number of the first line following the literal
+ block.
+
+ @type lines: C{list} of C{string}
+ @type start: C{int}
+ @type block_indent: C{int}
+ @type tokens: C{list} of L{Token}
+ @type errors: C{list} of L{ParseError}
+ @rtype: C{int}
+ """
+ linenum = start + 1
+ while linenum < len(lines):
+ # Find the indentation of this line.
+ line = lines[linenum]
+ indent = len(line) - len(line.lstrip())
+
+ # A dedent to block_indent ends the literal block.
+ # (Ignore blank lines, though.)
+ if len(line) != indent and indent <= block_indent:
+ break
+
+ # Go on to the next line.
+ linenum += 1
+
+ # Add the token, and return the linenum after the token ends.
+ contents = [line[block_indent+1:] for line in lines[start:linenum]]
+ contents = '\n'.join(contents)
+ contents = re.sub(r'(\A[ \n]*\n)|(\n[ \n]*\Z)', '', contents)
+ tokens.append(Token(Token.LBLOCK, start, contents, block_indent))
+ return linenum
+
+def _tokenize_listart(lines, start, bullet_indent, tokens, errors):
+ """
+ Construct L{Token}s for the bullet and the first paragraph of the
+ list item (or field) starting at C{lines[start]}, and append them
+ to C{tokens}. C{bullet_indent} should be the indentation of the
+ list item. Any errors generated while tokenizing will be
+ appended to C{errors}.
+
+ @param lines: The list of lines to be tokenized
+ @param start: The index into C{lines} of the first line of the
+ list item to be tokenized.
+ @param bullet_indent: The indentation of C{lines[start]}. This is
+ the indentation of the list item.
+ @param errors: A list of the errors generated by parsing. Any
+ new errors generated while tokenizing this list item
+ will be appended to this list.
+ @return: The line number of the first line following the list
+ item's first paragraph.
+
+ @type lines: C{list} of C{string}
+ @type start: C{int}
+ @type bullet_indent: C{int}
+ @type tokens: C{list} of L{Token}
+ @type errors: C{list} of L{ParseError}
+ @rtype: C{int}
+ """
+ linenum = start + 1
+ para_indent = None
+ doublecolon = lines[start].rstrip()[-2:] == '::'
+
+ # Get the contents of the bullet.
+ para_start = _BULLET_RE.match(lines[start], bullet_indent).end()
+ bcontents = lines[start][bullet_indent:para_start].strip()
+
+ while linenum < len(lines):
+ # Find the indentation of this line.
+ line = lines[linenum]
+ indent = len(line) - len(line.lstrip())
+
+ # "::" markers end paragraphs.
+ if doublecolon: break
+ if line.rstrip()[-2:] == '::': doublecolon = 1
+
+ # A blank line ends the token
+ if indent == len(line): break
+
+ # Dedenting past bullet_indent ends the list item.
+ if indent < bullet_indent: break
+
+ # A line beginning with a bullet ends the token.
+ if _BULLET_RE.match(line, indent): break
+
+ # If this is the second line, set the paragraph indentation, or
+ # end the token, as appropriate.
+ if para_indent == None: para_indent = indent
+
+ # A change in indentation ends the token
+ if indent != para_indent: break
+
+ # Go on to the next line.
+ linenum += 1
+
+ # Add the bullet token.
+ tokens.append(Token(Token.BULLET, start, bcontents, bullet_indent,
+ inline=True))
+
+ # Add the paragraph token.
+ pcontents = ([lines[start][para_start:].strip()] +
+ [line.strip() for line in lines[start+1:linenum]])
+ pcontents = ' '.join(pcontents).strip()
+ if pcontents:
+ tokens.append(Token(Token.PARA, start, pcontents, para_indent,
+ inline=True))
+
+ # Return the linenum after the paragraph token ends.
+ return linenum
+
+def _tokenize_para(lines, start, para_indent, tokens, errors):
+ """
+ Construct a L{Token} containing the paragraph starting at
+ C{lines[start]}, and append it to C{tokens}. C{para_indent}
+ should be the indentation of the paragraph. Any errors
+ generated while tokenizing the paragraph will be appended to
+ C{errors}.
+
+ @param lines: The list of lines to be tokenized
+ @param start: The index into C{lines} of the first line of the
+ paragraph to be tokenized.
+ @param para_indent: The indentation of C{lines[start]}. This is
+ the indentation of the paragraph.
+ @param errors: A list of the errors generated by parsing. Any
+ new errors generated while tokenizing this paragraph
+ will be appended to this list.
+ @return: The line number of the first line following the
+ paragraph.
+
+ @type lines: C{list} of C{string}
+ @type start: C{int}
+ @type para_indent: C{int}
+ @type tokens: C{list} of L{Token}
+ @type errors: C{list} of L{ParseError}
+ @rtype: C{int}
+ """
+ linenum = start + 1
+ doublecolon = 0
+ while linenum < len(lines):
+ # Find the indentation of this line.
+ line = lines[linenum]
+ indent = len(line) - len(line.lstrip())
+
+ # "::" markers end paragraphs.
+ if doublecolon: break
+ if line.rstrip()[-2:] == '::': doublecolon = 1
+
+ # Blank lines end paragraphs
+ if indent == len(line): break
+
+ # Indentation changes end paragraphs
+ if indent != para_indent: break
+
+ # List bullets end paragraphs
+ if _BULLET_RE.match(line, indent): break
+
+ # Check for mal-formatted field items.
+ if line[indent] == '@':
+ estr = "Possible mal-formatted field item."
+ errors.append(TokenizationError(estr, linenum, is_fatal=0))
+
+ # Go on to the next line.
+ linenum += 1
+
+ contents = [line.strip() for line in lines[start:linenum]]
+
+ # Does this token look like a heading?
+ if ((len(contents) < 2) or
+ (contents[1][0] not in _HEADING_CHARS) or
+ (abs(len(contents[0])-len(contents[1])) > 5)):
+ looks_like_heading = 0
+ else:
+ looks_like_heading = 1
+ for char in contents[1]:
+ if char != contents[1][0]:
+ looks_like_heading = 0
+ break
+
+ if looks_like_heading:
+ if len(contents[0]) != len(contents[1]):
+ estr = ("Possible heading typo: the number of "+
+ "underline characters must match the "+
+ "number of heading characters.")
+ errors.append(TokenizationError(estr, start, is_fatal=0))
+ else:
+ level = _HEADING_CHARS.index(contents[1][0])
+ tokens.append(Token(Token.HEADING, start,
+ contents[0], para_indent, level))
+ return start+2
+
+ # Add the paragraph token, and return the linenum after it ends.
+ contents = ' '.join(contents)
+ tokens.append(Token(Token.PARA, start, contents, para_indent))
+ return linenum
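To illustrate the heading check above: a paragraph becomes a heading when its second line repeats a single underline character, and that character's index in `_HEADING_CHARS` gives the level. The sketch assumes `_HEADING_CHARS == '=-~'` (it is defined elsewhere in the module, so this is an assumption here):

```python
# Assumption: _HEADING_CHARS == '=-~', defined earlier in the module.
HEADING_CHARS = '=-~'

contents = ['Introduction', '============']
underline = contents[1]
# A heading underline is one repeated character from HEADING_CHARS.
looks_like_heading = (underline[0] in HEADING_CHARS and
                      all(ch == underline[0] for ch in underline))
level = HEADING_CHARS.index(underline[0]) if looks_like_heading else None
assert looks_like_heading and level == 0  # '=' marks a top-level heading
```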
+
+def _tokenize(str, errors):
+ """
+ Split a given formatted docstring into an ordered list of
+ C{Token}s, according to the epytext markup rules.
+
+ @param str: The epytext string
+ @type str: C{string}
+ @param errors: A list where any errors generated during parsing
+ will be stored. If no list is specified, then errors will
+ generate exceptions.
+ @type errors: C{list} of L{ParseError}
+ @return: a list of the C{Token}s that make up the given string.
+ @rtype: C{list} of L{Token}
+ """
+ tokens = []
+ lines = str.split('\n')
+
+ # Scan through the lines, determining what type of token we're
+ # dealing with, and tokenizing it, as appropriate.
+ linenum = 0
+ while linenum < len(lines):
+ # Get the current line and its indentation.
+ line = lines[linenum]
+ indent = len(line)-len(line.lstrip())
+
+ if indent == len(line):
+ # Ignore blank lines.
+ linenum += 1
+ continue
+ elif line[indent:indent+4] == '>>> ':
+ # blocks starting with ">>> " are doctest block tokens.
+ linenum = _tokenize_doctest(lines, linenum, indent,
+ tokens, errors)
+ elif _BULLET_RE.match(line, indent):
+ # blocks starting with a bullet are LI start tokens.
+ linenum = _tokenize_listart(lines, linenum, indent,
+ tokens, errors)
+ if tokens[-1].indent != None:
+ indent = tokens[-1].indent
+ else:
+ # Check for mal-formatted field items.
+ if line[indent] == '@':
+ estr = "Possible mal-formatted field item."
+ errors.append(TokenizationError(estr, linenum, is_fatal=0))
+
+ # anything else is either a paragraph or a heading.
+ linenum = _tokenize_para(lines, linenum, indent, tokens, errors)
+
+ # Paragraph tokens ending in '::' initiate literal blocks.
+ if (tokens[-1].tag == Token.PARA and
+ tokens[-1].contents[-2:] == '::'):
+ tokens[-1].contents = tokens[-1].contents[:-1]
+ linenum = _tokenize_literal(lines, linenum, indent, tokens, errors)
+
+ return tokens
+
+
+##################################################
+## Inline markup ("colorizing")
+##################################################
+
+# Assorted regular expressions used for colorizing.
+_BRACE_RE = re.compile('{|}')
+_TARGET_RE = re.compile(r'^(.*?)\s*<(?:URI:|URL:)?([^<>]+)>$')
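`_TARGET_RE` splits an explicit link of the form `text <target>` into its text and target halves; a quick sketch, re-declaring the pattern locally as `TARGET_RE`:

```python
import re

# Local copy of the _TARGET_RE pattern defined above.
TARGET_RE = re.compile(r'^(.*?)\s*<(?:URI:|URL:)?([^<>]+)>$')

# An explicit target: the optional URI:/URL: prefix is dropped.
m = TARGET_RE.match('Epydoc <URI:http://epydoc.sourceforge.net>')
assert m is not None
assert m.groups() == ('Epydoc', 'http://epydoc.sourceforge.net')

# Without an angle-bracketed target nothing matches, and the caller
# falls back to treating the whole string as an implicit target.
assert TARGET_RE.match('just some text') is None
```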
+
+def _colorize(doc, token, errors, tagName='para'):
+ """
+ Given a string containing the contents of a paragraph, produce a
+ DOM C{Element} encoding that paragraph. Colorized regions are
+ represented using DOM C{Element}s, and text is represented using
+ DOM C{Text}s.
+
+ @param errors: A list of errors. Any newly generated errors will
+ be appended to this list.
+ @type errors: C{list} of C{string}
+
+ @param tagName: The element tag for the DOM C{Element} that should
+ be generated.
+ @type tagName: C{string}
+
+ @return: a DOM C{Element} encoding the given paragraph.
+ @rtype: C{Element}
+ """
+ str = token.contents
+ linenum = 0
+
+ # Maintain a stack of DOM elements, containing the ancestors of
+ # the text currently being analyzed. New elements are pushed when
+ # "{" is encountered, and old elements are popped when "}" is
+ # encountered.
+ stack = [Element(tagName)]
+
+ # This is just used to make error-reporting friendlier. It's a
+ # stack parallel to "stack" containing the index of each element's
+ # open brace.
+ openbrace_stack = [0]
+
+ # Process the string, scanning for '{' and '}'s. start is the
+ # index of the first unprocessed character. Each time through the
+ # loop, we process the text from the first unprocessed character
+ # to the next open or close brace.
+ start = 0
+ while 1:
+ match = _BRACE_RE.search(str, start)
+ if match == None: break
+ end = match.start()
+
+ # Open braces start new colorizing elements. When preceded
+ # by a capital letter, they specify a colored region, as
+ # defined by the _COLORIZING_TAGS dictionary. Otherwise,
+ # use a special "literal braces" element (with tag "litbrace"),
+ # and convert them to literal braces once we find the matching
+ # close-brace.
+ if match.group() == '{':
+ if (end>0) and 'A' <= str[end-1] <= 'Z':
+ if (end-1) > start:
+ stack[-1].children.append(str[start:end-1])
+ if str[end-1] not in _COLORIZING_TAGS:
+ estr = "Unknown inline markup tag."
+ errors.append(ColorizingError(estr, token, end-1))
+ stack.append(Element('unknown'))
+ else:
+ tag = _COLORIZING_TAGS[str[end-1]]
+ stack.append(Element(tag))
+ else:
+ if end > start:
+ stack[-1].children.append(str[start:end])
+ stack.append(Element('litbrace'))
+ openbrace_stack.append(end)
+ stack[-2].children.append(stack[-1])
+
+ # Close braces end colorizing elements.
+ elif match.group() == '}':
+ # Check for (and ignore) unbalanced braces.
+ if len(stack) <= 1:
+ estr = "Unbalanced '}'."
+ errors.append(ColorizingError(estr, token, end))
+ start = end + 1
+ continue
+
+ # Add any remaining text.
+ if end > start:
+ stack[-1].children.append(str[start:end])
+
+ # Special handling for symbols:
+ if stack[-1].tag == 'symbol':
+ if (len(stack[-1].children) != 1 or
+ not isinstance(stack[-1].children[0], basestring)):
+ estr = "Invalid symbol code."
+ errors.append(ColorizingError(estr, token, end))
+ else:
+ symb = stack[-1].children[0]
+ if symb in _SYMBOLS:
+ # It's a symbol
+ stack[-2].children[-1] = Element('symbol', symb)
+ else:
+ estr = "Invalid symbol code."
+ errors.append(ColorizingError(estr, token, end))
+
+ # Special handling for escape elements:
+ if stack[-1].tag == 'escape':
+ if (len(stack[-1].children) != 1 or
+ not isinstance(stack[-1].children[0], basestring)):
+ estr = "Invalid escape code."
+ errors.append(ColorizingError(estr, token, end))
+ else:
+ escp = stack[-1].children[0]
+ if escp in _ESCAPES:
+ # It's an escape from _ESCAPES
+ stack[-2].children[-1] = _ESCAPES[escp]
+ elif len(escp) == 1:
+ # It's a single-character escape (eg E{.})
+ stack[-2].children[-1] = escp
+ else:
+ estr = "Invalid escape code."
+ errors.append(ColorizingError(estr, token, end))
+
+ # Special handling for literal braces elements:
+ if stack[-1].tag == 'litbrace':
+ stack[-2].children[-1:] = ['{'] + stack[-1].children + ['}']
+
+ # Special handling for graphs:
+ if stack[-1].tag == 'graph':
+ _colorize_graph(doc, stack[-1], token, end, errors)
+
+ # Special handling for link-type elements:
+ if stack[-1].tag in _LINK_COLORIZING_TAGS:
+ _colorize_link(doc, stack[-1], token, end, errors)
+
+ # Pop the completed element.
+ openbrace_stack.pop()
+ stack.pop()
+
+ start = end+1
+
+ # Add any final text.
+ if start < len(str):
+ stack[-1].children.append(str[start:])
+
+ if len(stack) != 1:
+ estr = "Unbalanced '{'."
+ errors.append(ColorizingError(estr, token, openbrace_stack[-1]))
+
+ return stack[0]
+
+GRAPH_TYPES = ['classtree', 'packagetree', 'importgraph', 'callgraph']
+
+def _colorize_graph(doc, graph, token, end, errors):
+ """
+ Eg::
+ G{classtree}
+ G{classtree x, y, z}
+ G{importgraph}
+ """
+ bad_graph_spec = False
+
+ children = graph.children[:]
+ graph.children = []
+
+ if len(children) != 1 or not isinstance(children[0], basestring):
+ bad_graph_spec = "Bad graph specification"
+ else:
+ pieces = children[0].split(None, 1)
+ graphtype = pieces[0].replace(':','').strip().lower()
+ if graphtype in GRAPH_TYPES:
+ if len(pieces) == 2:
+ if re.match(r'\s*:?\s*([\w\.]+\s*,?\s*)*', pieces[1]):
+ args = pieces[1].replace(',', ' ').replace(':','').split()
+ else:
+ bad_graph_spec = "Bad graph arg list"
+ else:
+ args = []
+ else:
+ bad_graph_spec = ("Bad graph type %s -- use one of %s" %
+ (pieces[0], ', '.join(GRAPH_TYPES)))
+
+ if bad_graph_spec:
+ errors.append(ColorizingError(bad_graph_spec, token, end))
+ graph.children.append('none')
+ graph.children.append('')
+ return
+
+ graph.children.append(graphtype)
+ for arg in args:
+ graph.children.append(arg)
+
+def _colorize_link(doc, link, token, end, errors):
+ variables = link.children[:]
+
+ # If the last child isn't text, we know it's bad.
+ if len(variables)==0 or not isinstance(variables[-1], basestring):
+ estr = "Bad %s target." % link.tag
+ errors.append(ColorizingError(estr, token, end))
+ return
+
+ # Did they provide an explicit target?
+ match2 = _TARGET_RE.match(variables[-1])
+ if match2:
+ (text, target) = match2.groups()
+ variables[-1] = text
+ # Can we extract an implicit target?
+ elif len(variables) == 1:
+ target = variables[0]
+ else:
+ estr = "Bad %s target." % link.tag
+ errors.append(ColorizingError(estr, token, end))
+ return
+
+ # Construct the name element.
+ name_elt = Element('name', *variables)
+
+ # Clean up the target. For URIs, assume http or mailto if they
+ # don't specify (no relative urls)
+ target = re.sub(r'\s', '', target)
+ if link.tag=='uri':
+ if not re.match(r'\w+:', target):
+ if re.match(r'\w+@(\w+)(\.\w+)*', target):
+ target = 'mailto:' + target
+ else:
+ target = 'http://'+target
+ elif link.tag=='link':
+ # Remove arg lists for functions (e.g., L{_colorize_link()})
+ target = re.sub(r'\(.*\)$', '', target)
+ if not re.match(r'^[a-zA-Z_]\w*(\.[a-zA-Z_]\w*)*$', target):
+ estr = "Bad link target."
+ errors.append(ColorizingError(estr, token, end))
+ return
+
+ # Construct the target element.
+ target_elt = Element('target', target)
+
+ # Add them to the link element.
+ link.children = [name_elt, target_elt]
+
+##################################################
+## Formatters
+##################################################
+
+def to_epytext(tree, indent=0, seclevel=0):
+ """
+ Convert a DOM document encoding epytext back to an epytext string.
+ This is the inverse operation from L{parse}. I.e., assuming there
+ are no errors, the following is true:
+ - C{parse(to_epytext(tree)) == tree}
+
+ The inverse is true, except that whitespace, line wrapping, and
+ character escaping may be done differently.
+ - C{to_epytext(parse(str)) == str} (approximately)
+
+ @param tree: A DOM document encoding of an epytext string.
+ @type tree: C{Element}
+ @param indent: The indentation for the string representation of
+ C{tree}. Each line of the returned string will begin with
+ C{indent} space characters.
+ @type indent: C{int}
+ @param seclevel: The section level that C{tree} appears at. This
+ is used to generate section headings.
+ @type seclevel: C{int}
+ @return: The epytext string corresponding to C{tree}.
+ @rtype: C{string}
+ """
+ if isinstance(tree, basestring):
+ str = re.sub(r'\{', '\0', tree)
+ str = re.sub(r'\}', '\1', str)
+ return str
+
+ if tree.tag == 'epytext': indent -= 2
+ if tree.tag == 'section': seclevel += 1
+ variables = [to_epytext(c, indent+2, seclevel) for c in tree.children]
+ childstr = ''.join(variables)
+
+ # Clean up for literal blocks (add the double "::" back)
+ childstr = re.sub(':(\s*)\2', '::\\1', childstr)
+
+ if tree.tag == 'para':
+ str = wordwrap(childstr, indent)+'\n'
+ str = re.sub(r'((^|\n)\s*\d+)\.', r'\1E{.}', str)
+ str = re.sub(r'((^|\n)\s*)-', r'\1E{-}', str)
+ str = re.sub(r'((^|\n)\s*)@', r'\1E{@}', str)
+ str = re.sub(r'::(\s*($|\n))', r'E{:}E{:}\1', str)
+ str = re.sub('\0', 'E{lb}', str)
+ str = re.sub('\1', 'E{rb}', str)
+ return str
+ elif tree.tag == 'li':
+ bullet = tree.attribs.get('bullet') or '-'
+ return indent*' '+ bullet + ' ' + childstr.lstrip()
+ elif tree.tag == 'heading':
+ str = re.sub('\0', 'E{lb}',childstr)
+ str = re.sub('\1', 'E{rb}', str)
+ uline = len(childstr)*_HEADING_CHARS[seclevel-1]
+ return (indent-2)*' ' + str + '\n' + (indent-2)*' '+uline+'\n'
+ elif tree.tag == 'doctestblock':
+ str = re.sub('\0', '{', childstr)
+ str = re.sub('\1', '}', str)
+ lines = [' '+indent*' '+line for line in str.split('\n')]
+ return '\n'.join(lines) + '\n\n'
+ elif tree.tag == 'literalblock':
+ str = re.sub('\0', '{', childstr)
+ str = re.sub('\1', '}', str)
+ lines = [(indent+1)*' '+line for line in str.split('\n')]
+ return '\2' + '\n'.join(lines) + '\n\n'
+ elif tree.tag == 'field':
+ numargs = 0
+ while tree.children[numargs+1].tag == 'arg': numargs += 1
+ tag = variables[0]
+ args = variables[1:1+numargs]
+ body = variables[1+numargs:]
+ str = (indent)*' '+'@'+variables[0]
+ if args: str += '(' + ', '.join(args) + ')'
+ return str + ':\n' + ''.join(body)
+ elif tree.tag == 'target':
+ return '<%s>' % childstr
+ elif tree.tag in ('fieldlist', 'tag', 'arg', 'epytext',
+ 'section', 'olist', 'ulist', 'name'):
+ return childstr
+ elif tree.tag == 'symbol':
+ return 'E{%s}' % childstr
+ elif tree.tag == 'graph':
+ return 'G{%s}' % ' '.join(variables)
+ else:
+ for (tag, name) in _COLORIZING_TAGS.items():
+ if name == tree.tag:
+ return '%s{%s}' % (tag, childstr)
+ raise ValueError('Unknown DOM element %r' % tree.tag)
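The `para` branch above re-escapes characters that would otherwise be re-parsed as structural markup when the tree is serialized back to epytext. A standalone sketch of those substitutions (the function name is ours, not epydoc's):

```python
import re

def escape_para(text):
    # Escape tokens that epytext would re-parse as structure: a digit
    # followed by '.' at line start (ordered list), a leading '-'
    # (bulleted list), and a leading '@' (field tag).
    text = re.sub(r'((^|\n)\s*\d+)\.', r'\1E{.}', text)
    text = re.sub(r'((^|\n)\s*)-', r'\1E{-}', text)
    text = re.sub(r'((^|\n)\s*)@', r'\1E{@}', text)
    return text

print(escape_para('1. not a list'))   # -> 1E{.} not a list
print(escape_para('@param is text'))  # -> E{@}param is text
```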
+
+SYMBOL_TO_PLAINTEXT = {
+ 'crarr': '\\',
+ }
+
+def to_plaintext(tree, indent=0, seclevel=0):
+ """
+ Convert a DOM document encoding epytext to a string representation.
+ This representation is similar to the string generated by
+ C{to_epytext}, but C{to_plaintext} removes inline markup, prints
+ escaped characters in unescaped form, etc.
+
+ @param tree: A DOM document encoding of an epytext string.
+ @type tree: C{Element}
+ @param indent: The indentation for the string representation of
+ C{tree}. Each line of the returned string will begin with
+ C{indent} space characters.
+ @type indent: C{int}
+ @param seclevel: The section level that C{tree} appears at. This
+ is used to generate section headings.
+ @type seclevel: C{int}
+ @return: The plaintext string corresponding to C{tree}.
+ @rtype: C{string}
+ """
+ if isinstance(tree, basestring): return tree
+
+ if tree.tag == 'section': seclevel += 1
+
+ # Figure out the child indent level.
+ if tree.tag == 'epytext': cindent = indent
+ elif tree.tag == 'li' and tree.attribs.get('bullet'):
+ cindent = indent + 1 + len(tree.attribs.get('bullet'))
+ else:
+ cindent = indent + 2
+ variables = [to_plaintext(c, cindent, seclevel) for c in tree.children]
+ childstr = ''.join(variables)
+
+ if tree.tag == 'para':
+ return wordwrap(childstr, indent)+'\n'
+ elif tree.tag == 'li':
+ # We should be able to use getAttribute here, but there's no
+ # convenient way to test whether an element has an attribute.
+ bullet = tree.attribs.get('bullet') or '-'
+ return indent*' ' + bullet + ' ' + childstr.lstrip()
+ elif tree.tag == 'heading':
+ uline = len(childstr)*_HEADING_CHARS[seclevel-1]
+ return ((indent-2)*' ' + childstr + '\n' +
+ (indent-2)*' ' + uline + '\n')
+ elif tree.tag == 'doctestblock':
+ lines = [(indent+2)*' '+line for line in childstr.split('\n')]
+ return '\n'.join(lines) + '\n\n'
+ elif tree.tag == 'literalblock':
+ lines = [(indent+1)*' '+line for line in childstr.split('\n')]
+ return '\n'.join(lines) + '\n\n'
+ elif tree.tag == 'fieldlist':
+ return childstr
+ elif tree.tag == 'field':
+ numargs = 0
+ while tree.children[numargs+1].tag == 'arg': numargs += 1
+ tag = variables[0]
+ args = variables[1:1+numargs]
+ body = variables[1+numargs:]
+ str = (indent)*' '+'@'+variables[0]
+ if args: str += '(' + ', '.join(args) + ')'
+ return str + ':\n' + ''.join(body)
+ elif tree.tag == 'uri':
+ if len(variables) != 2: raise ValueError('Bad URI ')
+ elif variables[0] == variables[1]: return '<%s>' % variables[1]
+ else: return '%r<%s>' % (variables[0], variables[1])
+ elif tree.tag == 'link':
+ if len(variables) != 2: raise ValueError('Bad Link')
+ return '%s' % variables[0]
+ elif tree.tag in ('olist', 'ulist'):
+ # [xx] always use condensed lists.
+ ## Use a condensed list if each list item is 1 line long.
+ #for child in variables:
+ # if child.count('\n') > 2: return childstr
+ return childstr.replace('\n\n', '\n')+'\n'
+ elif tree.tag == 'symbol':
+ return '%s' % SYMBOL_TO_PLAINTEXT.get(childstr, childstr)
+ elif tree.tag == 'graph':
+ return '<<%s graph: %s>>' % (variables[0], ', '.join(variables[1:]))
+ else:
+ # Assume that anything else can be passed through.
+ return childstr
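The list-handling branches above can be exercised in isolation. A minimal sketch (our own helper, not part of epydoc) of how each `li` child is rendered and how the `olist`/`ulist` branch then condenses the blank lines between wrapped items:

```python
def render_ulist(items, indent=0, bullet='-'):
    # Each 'li': bullet at the current indent, body flush after it,
    # followed by the blank line a wrapped paragraph would leave.
    rendered = ''.join(indent * ' ' + bullet + ' ' + it.lstrip() + '\n\n'
                       for it in items)
    # The olist/ulist branch always condenses to a compact list.
    return rendered.replace('\n\n', '\n') + '\n'

print(render_ulist(['first point', 'second point']))
```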
+
+def to_debug(tree, indent=4, seclevel=0):
+ """
+ Convert a DOM document encoding epytext back to an epytext string,
+ annotated with extra debugging information. This function is
+ similar to L{to_epytext}, but it adds explicit information about
+ where different blocks begin, along the left margin.
+
+ @param tree: A DOM document encoding of an epytext string.
+ @type tree: C{Element}
+ @param indent: The indentation for the string representation of
+ C{tree}. Each line of the returned string will begin with
+ C{indent} space characters.
+ @type indent: C{int}
+ @param seclevel: The section level that C{tree} appears at. This
+ is used to generate section headings.
+ @type seclevel: C{int}
+ @return: The epytext string corresponding to C{tree}.
+ @rtype: C{string}
+ """
+ if isinstance(tree, basestring):
+ str = re.sub(r'\{', '\0', tree)
+ str = re.sub(r'\}', '\1', str)
+ return str
+
+ if tree.tag == 'section': seclevel += 1
+ variables = [to_debug(c, indent+2, seclevel) for c in tree.children]
+ childstr = ''.join(variables)
+
+ # Clean up for literal blocks (add the double "::" back)
+ childstr = re.sub(':( *\n \|\n)\2', '::\\1', childstr)
+
+ if tree.tag == 'para':
+ str = wordwrap(childstr, indent-6, 69)+'\n'
+ str = re.sub(r'((^|\n)\s*\d+)\.', r'\1E{.}', str)
+ str = re.sub(r'((^|\n)\s*)-', r'\1E{-}', str)
+ str = re.sub(r'((^|\n)\s*)@', r'\1E{@}', str)
+ str = re.sub(r'::(\s*($|\n))', r'E{:}E{:}\1', str)
+ str = re.sub('\0', 'E{lb}', str)
+ str = re.sub('\1', 'E{rb}', str)
+ lines = str.rstrip().split('\n')
+ lines[0] = ' P>|' + lines[0]
+ lines[1:] = [' |'+l for l in lines[1:]]
+ return '\n'.join(lines)+'\n |\n'
+ elif tree.tag == 'li':
+ bullet = tree.attribs.get('bullet') or '-'
+ return ' LI>|'+ (indent-6)*' '+ bullet + ' ' + childstr[6:].lstrip()
+ elif tree.tag in ('olist', 'ulist'):
+ return 'LIST>|'+(indent-4)*' '+childstr[indent+2:]
+ elif tree.tag == 'heading':
+ str = re.sub('\0', 'E{lb}', childstr)
+ str = re.sub('\1', 'E{rb}', str)
+ uline = len(childstr)*_HEADING_CHARS[seclevel-1]
+ return ('SEC'+`seclevel`+'>|'+(indent-8)*' ' + str + '\n' +
+ ' |'+(indent-8)*' ' + uline + '\n')
+ elif tree.tag == 'doctestblock':
+ str = re.sub('\0', '{', childstr)
+ str = re.sub('\1', '}', str)
+ lines = [' |'+(indent-4)*' '+line for line in str.split('\n')]
+ lines[0] = 'DTST>'+lines[0][5:]
+ return '\n'.join(lines) + '\n |\n'
+ elif tree.tag == 'literalblock':
+ str = re.sub('\0', '{', childstr)
+ str = re.sub('\1', '}', str)
+ lines = [' |'+(indent-5)*' '+line for line in str.split('\n')]
+ lines[0] = ' LIT>'+lines[0][5:]
+ return '\2' + '\n'.join(lines) + '\n |\n'
+ elif tree.tag == 'field':
+ numargs = 0
+ while tree.children[numargs+1].tag == 'arg': numargs += 1
+ tag = variables[0]
+ args = variables[1:1+numargs]
+ body = variables[1+numargs:]
+ str = ' FLD>|'+(indent-6)*' '+'@'+variables[0]
+ if args: str += '(' + ', '.join(args) + ')'
+ return str + ':\n' + ''.join(body)
+ elif tree.tag == 'target':
+ return '<%s>' % childstr
+ elif tree.tag in ('fieldlist', 'tag', 'arg', 'epytext',
+ 'section', 'olist', 'ulist', 'name'):
+ return childstr
+ elif tree.tag == 'symbol':
+ return 'E{%s}' % childstr
+ elif tree.tag == 'graph':
+ return 'G{%s}' % ' '.join(variables)
+ else:
+ for (tag, name) in _COLORIZING_TAGS.items():
+ if name == tree.tag:
+ return '%s{%s}' % (tag, childstr)
+ raise ValueError('Unknown DOM element %r' % tree.tag)
+
+##################################################
+## Top-Level Wrapper function
+##################################################
+def pparse(str, show_warnings=1, show_errors=1, stream=sys.stderr):
+ """
+ Pretty-parse the string. This parses the string, and catches any
+ warnings or errors produced. Any warnings and errors are
+ displayed, and the resulting DOM parse structure is returned.
+
+ @param str: The string to parse.
+ @type str: C{string}
+ @param show_warnings: Whether or not to display non-fatal errors
+ generated by parsing C{str}.
+ @type show_warnings: C{boolean}
+ @param show_errors: Whether or not to display fatal errors
+ generated by parsing C{str}.
+ @type show_errors: C{boolean}
+ @param stream: The stream that warnings and errors should be
+ written to.
+ @type stream: C{stream}
+ @return: a DOM document encoding the contents of C{str}.
+ @rtype: C{Element}
+ @raise SyntaxError: If any fatal errors were encountered.
+ """
+ errors = []
+ confused = 0
+ try:
+ val = parse(str, errors)
+ warnings = [e for e in errors if not e.is_fatal()]
+ errors = [e for e in errors if e.is_fatal()]
+ except:
+ confused = 1
+
+ if not show_warnings: warnings = []
+ warnings.sort()
+ errors.sort()
+ if warnings:
+ print >>stream, '='*SCRWIDTH
+ print >>stream, "WARNINGS"
+ print >>stream, '-'*SCRWIDTH
+ for warning in warnings:
+ print >>stream, warning.as_warning()
+ print >>stream, '='*SCRWIDTH
+ if errors and show_errors:
+ if not warnings: print >>stream, '='*SCRWIDTH
+ print >>stream, "ERRORS"
+ print >>stream, '-'*SCRWIDTH
+ for error in errors:
+ print >>stream, error
+ print >>stream, '='*SCRWIDTH
+
+ if confused: raise
+ elif errors: raise SyntaxError('Encountered Errors')
+ else: return val
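The warning/error partitioning in `pparse` relies only on each collected error's `is_fatal()` flag. A self-contained sketch of that split (the `Issue` class is a hypothetical stand-in for epydoc's `ParseError`):

```python
class Issue:
    # Hypothetical minimal stand-in for epydoc's ParseError.
    def __init__(self, descr, fatal):
        self.descr = descr
        self.fatal = fatal
    def is_fatal(self):
        return self.fatal

collected = [Issue('unbalanced brace', True), Issue('odd indentation', False)]
warnings = [e for e in collected if not e.is_fatal()]
errors = [e for e in collected if e.is_fatal()]
print([e.descr for e in warnings])  # -> ['odd indentation']
print([e.descr for e in errors])    # -> ['unbalanced brace']
```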
+
+##################################################
+## Parse Errors
+##################################################
+
+class TokenizationError(ParseError):
+ """
+ An error generated while tokenizing a formatted documentation
+ string.
+ """
+
+class StructuringError(ParseError):
+ """
+ An error generated while structuring a formatted documentation
+ string.
+ """
+
+class ColorizingError(ParseError):
+ """
+ An error generated while colorizing a paragraph.
+ """
+ def __init__(self, descr, token, charnum, is_fatal=1):
+ """
+ Construct a new colorizing exception.
+
+ @param descr: A short description of the error.
+ @type descr: C{string}
+ @param token: The token where the error occurred
+ @type token: L{Token}
+ @param charnum: The character index of the position in
+ C{token} where the error occurred.
+ @type charnum: C{int}
+ """
+ ParseError.__init__(self, descr, token.startline, is_fatal)
+ self.token = token
+ self.charnum = charnum
+
+ CONTEXT_RANGE = 20
+ def descr(self):
+ RANGE = self.CONTEXT_RANGE
+ if self.charnum <= RANGE:
+ left = self.token.contents[0:self.charnum]
+ else:
+ left = '...'+self.token.contents[self.charnum-RANGE:self.charnum]
+ if (len(self.token.contents)-self.charnum) <= RANGE:
+ right = self.token.contents[self.charnum:]
+ else:
+ right = (self.token.contents[self.charnum:self.charnum+RANGE]
+ + '...')
+ return ('%s\n\n%s%s\n%s^' % (self._descr, left, right, ' '*len(left)))
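The windowing logic in `descr` generalizes to any "point at the offending character" display. A standalone version (names are ours) of the same left/right truncation with a caret drawn under the error position:

```python
def point_at(text, charnum, context=20):
    # Keep at most `context` characters on each side of the error,
    # eliding the rest, and draw a caret under the error position.
    if charnum <= context:
        left = text[:charnum]
    else:
        left = '...' + text[charnum - context:charnum]
    if len(text) - charnum <= context:
        right = text[charnum:]
    else:
        right = text[charnum:charnum + context] + '...'
    return '%s%s\n%s^' % (left, right, ' ' * len(left))

print(point_at('hello world', 6))
# hello world
#       ^
```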
+
+##################################################
+## Convenience parsers
+##################################################
+
+def parse_as_literal(str):
+ """
+ Return a DOM document matching the epytext DTD, containing a
+ single literal block. That literal block will include the
+ contents of the given string. This method is typically used as a
+ fall-back when the parser fails.
+
+ @param str: The string which should be enclosed in a literal
+ block.
+ @type str: C{string}
+
+ @return: A DOM document containing C{str} in a single literal
+ block.
+ @rtype: C{Element}
+ """
+ return Element('epytext', Element('literalblock', str))
+
+def parse_as_para(str):
+ """
+ Return a DOM document matching the epytext DTD, containing a
+ single paragraph. That paragraph will include the contents of the
+ given string. This can be used to wrap some forms of
+ automatically generated information (such as type names) in
+ paragraphs.
+
+ @param str: The string which should be enclosed in a paragraph.
+ @type str: C{string}
+
+ @return: A DOM document containing C{str} in a single paragraph.
+ @rtype: C{Element}
+ """
+ return Element('epytext', Element('para', str))
+
+#################################################################
+## SUPPORT FOR EPYDOC
+#################################################################
+
+def parse_docstring(docstring, errors, **options):
+ """
+ Parse the given docstring, which is formatted using epytext; and
+ return a C{ParsedDocstring} representation of its contents.
+ @param docstring: The docstring to parse
+ @type docstring: C{string}
+ @param errors: A list where any errors generated during parsing
+ will be stored.
+ @type errors: C{list} of L{ParseError}
+ @param options: Extra options. Unknown options are ignored.
+ Currently, no extra options are defined.
+ @rtype: L{ParsedDocstring}
+ """
+ return ParsedEpytextDocstring(parse(docstring, errors), **options)
+
+class ParsedEpytextDocstring(ParsedDocstring):
+ SYMBOL_TO_HTML = {
+ # Symbols
+ '<-': '←', '->': '→', '^': '↑', 'v': '↓',
+
+ # Greek letters
+ 'alpha': 'α', 'beta': 'β', 'gamma': 'γ',
+ 'delta': 'δ', 'epsilon': 'ε', 'zeta': 'ζ',
+ 'eta': 'η', 'theta': 'θ', 'iota': 'ι',
+ 'kappa': 'κ', 'lambda': 'λ', 'mu': 'μ',
+ 'nu': 'ν', 'xi': 'ξ', 'omicron': 'ο',
+ 'pi': 'π', 'rho': 'ρ', 'sigma': 'σ',
+ 'tau': 'τ', 'upsilon': 'υ', 'phi': 'φ',
+ 'chi': 'χ', 'psi': 'ψ', 'omega': 'ω',
+ 'Alpha': 'Α', 'Beta': 'Β', 'Gamma': 'Γ',
+ 'Delta': 'Δ', 'Epsilon': 'Ε', 'Zeta': 'Ζ',
+ 'Eta': 'Η', 'Theta': 'Θ', 'Iota': 'Ι',
+ 'Kappa': 'Κ', 'Lambda': 'Λ', 'Mu': 'Μ',
+ 'Nu': 'Ν', 'Xi': 'Ξ', 'Omicron': 'Ο',
+ 'Pi': 'Π', 'Rho': 'Ρ', 'Sigma': 'Σ',
+ 'Tau': 'Τ', 'Upsilon': 'Υ', 'Phi': 'Φ',
+ 'Chi': 'Χ', 'Psi': 'Ψ', 'Omega': 'Ω',
+
+ # HTML character entities
+ 'larr': '←', 'rarr': '→', 'uarr': '↑',
+ 'darr': '↓', 'harr': '↔', 'crarr': '↵',
+ 'lArr': '⇐', 'rArr': '⇒', 'uArr': '⇑',
+ 'dArr': '⇓', 'hArr': '⇔',
+ 'copy': '©', 'times': '×', 'forall': '∀',
+ 'exist': '∃', 'part': '∂',
+ 'empty': '∅', 'isin': '∈', 'notin': '∉',
+ 'ni': '∋', 'prod': '∏', 'sum': '∑',
+ 'prop': '∝', 'infin': '∞', 'ang': '∠',
+ 'and': '∧', 'or': '∨', 'cap': '∩', 'cup': '∪',
+ 'int': '∫', 'there4': '∴', 'sim': '∼',
+ 'cong': '≅', 'asymp': '≈', 'ne': '≠',
+ 'equiv': '≡', 'le': '≤', 'ge': '≥',
+ 'sub': '⊂', 'sup': '⊃', 'nsub': '⊄',
+ 'sube': '⊆', 'supe': '⊇', 'oplus': '⊕',
+ 'otimes': '⊗', 'perp': '⊥',
+
+ # Alternate (long) names
+ 'infinity': '∞', 'integral': '∫', 'product': '∏',
+ '<=': '≤', '>=': '≥',
+ }
+
+ SYMBOL_TO_LATEX = {
+ # Symbols
+ '<-': r'\(\leftarrow\)', '->': r'\(\rightarrow\)',
+ '^': r'\(\uparrow\)', 'v': r'\(\downarrow\)',
+
+ # Greek letters (use lower case when uppercase is not available)
+
+ 'alpha': r'\(\alpha\)', 'beta': r'\(\beta\)', 'gamma':
+ r'\(\gamma\)', 'delta': r'\(\delta\)', 'epsilon':
+ r'\(\epsilon\)', 'zeta': r'\(\zeta\)', 'eta': r'\(\eta\)',
+ 'theta': r'\(\theta\)', 'iota': r'\(\iota\)', 'kappa':
+ r'\(\kappa\)', 'lambda': r'\(\lambda\)', 'mu': r'\(\mu\)',
+ 'nu': r'\(\nu\)', 'xi': r'\(\xi\)', 'omicron': r'\(o\)', 'pi':
+ r'\(\pi\)', 'rho': r'\(\rho\)', 'sigma': r'\(\sigma\)', 'tau':
+ r'\(\tau\)', 'upsilon': r'\(\upsilon\)', 'phi': r'\(\phi\)',
+ 'chi': r'\(\chi\)', 'psi': r'\(\psi\)', 'omega':
+ r'\(\omega\)',
+
+ 'Alpha': r'\(\alpha\)', 'Beta': r'\(\beta\)', 'Gamma':
+ r'\(\Gamma\)', 'Delta': r'\(\Delta\)', 'Epsilon':
+ r'\(\epsilon\)', 'Zeta': r'\(\zeta\)', 'Eta': r'\(\eta\)',
+ 'Theta': r'\(\Theta\)', 'Iota': r'\(\iota\)', 'Kappa':
+ r'\(\kappa\)', 'Lambda': r'\(\Lambda\)', 'Mu': r'\(\mu\)',
+ 'Nu': r'\(\nu\)', 'Xi': r'\(\Xi\)', 'Omicron': r'\(o\)', 'Pi':
+ r'\(\Pi\)', 'Rho': r'\(\rho\)', 'Sigma': r'\(\Sigma\)', 'Tau':
+ r'\(\tau\)', 'Upsilon': r'\(\Upsilon\)', 'Phi': r'\(\Phi\)',
+ 'Chi': r'\(\chi\)', 'Psi': r'\(\Psi\)', 'Omega':
+ r'\(\Omega\)',
+
+ # HTML character entities
+ 'larr': r'\(\leftarrow\)', 'rarr': r'\(\rightarrow\)', 'uarr':
+ r'\(\uparrow\)', 'darr': r'\(\downarrow\)', 'harr':
+ r'\(\leftrightarrow\)', 'crarr': r'\(\hookleftarrow\)',
+ 'lArr': r'\(\Leftarrow\)', 'rArr': r'\(\Rightarrow\)', 'uArr':
+ r'\(\Uparrow\)', 'dArr': r'\(\Downarrow\)', 'hArr':
+ r'\(\Leftrightarrow\)', 'copy': r'{\textcopyright}',
+ 'times': r'\(\times\)', 'forall': r'\(\forall\)', 'exist':
+ r'\(\exists\)', 'part': r'\(\partial\)', 'empty':
+ r'\(\emptyset\)', 'isin': r'\(\in\)', 'notin': r'\(\notin\)',
+ 'ni': r'\(\ni\)', 'prod': r'\(\prod\)', 'sum': r'\(\sum\)',
+ 'prop': r'\(\propto\)', 'infin': r'\(\infty\)', 'ang':
+ r'\(\angle\)', 'and': r'\(\wedge\)', 'or': r'\(\vee\)', 'cap':
+ r'\(\cap\)', 'cup': r'\(\cup\)', 'int': r'\(\int\)', 'there4':
+ r'\(\therefore\)', 'sim': r'\(\sim\)', 'cong': r'\(\cong\)',
+ 'asymp': r'\(\approx\)', 'ne': r'\(\ne\)', 'equiv':
+ r'\(\equiv\)', 'le': r'\(\le\)', 'ge': r'\(\ge\)', 'sub':
+ r'\(\subset\)', 'sup': r'\(\supset\)', 'nsub': r'\(\supset\)',
+ 'sube': r'\(\subseteq\)', 'supe': r'\(\supseteq\)', 'oplus':
+ r'\(\oplus\)', 'otimes': r'\(\otimes\)', 'perp': r'\(\perp\)',
+
+ # Alternate (long) names
+ 'infinity': r'\(\infty\)', 'integral': r'\(\int\)', 'product':
+ r'\(\prod\)', '<=': r'\(\le\)', '>=': r'\(\ge\)',
+ }
+
+ def __init__(self, dom_tree, **options):
+ self._tree = dom_tree
+ # Caching:
+ self._html = self._latex = self._plaintext = None
+ self._terms = None
+ # inline option -- mark top-level children as inline.
+ if options.get('inline') and self._tree is not None:
+ for elt in self._tree.children:
+ elt.attribs['inline'] = True
+
+ def __str__(self):
+ return str(self._tree)
+
+ def to_html(self, docstring_linker, directory=None, docindex=None,
+ context=None, **options):
+ if self._html is not None: return self._html
+ if self._tree is None: return ''
+ indent = options.get('indent', 0)
+ self._html = self._to_html(self._tree, docstring_linker, directory,
+ docindex, context, indent)
+ return self._html
+
+ def to_latex(self, docstring_linker, **options):
+ if self._latex is not None: return self._latex
+ if self._tree is None: return ''
+ indent = options.get('indent', 0)
+ self._hyperref = options.get('hyperref', 1)
+ self._latex = self._to_latex(self._tree, docstring_linker, indent)
+ return self._latex
+
+ def to_plaintext(self, docstring_linker, **options):
+ # [XX] don't cache -- different options might be used!!
+ #if self._plaintext is not None: return self._plaintext
+ if self._tree is None: return ''
+ if 'indent' in options:
+ self._plaintext = to_plaintext(self._tree,
+ indent=options['indent'])
+ else:
+ self._plaintext = to_plaintext(self._tree)
+ return self._plaintext
+
+ def _index_term_key(self, tree):
+ str = to_plaintext(tree)
+ str = re.sub(r'\s\s+', '-', str)
+ return "index-"+re.sub("[^a-zA-Z0-9]", "_", str)
+
+ def _to_html(self, tree, linker, directory, docindex, context,
+ indent=0, seclevel=0):
+ if isinstance(tree, basestring):
+ return plaintext_to_html(tree)
+
+ if tree.tag == 'epytext': indent -= 2
+ if tree.tag == 'section': seclevel += 1
+
+ # Process the variables first.
+ variables = [self._to_html(c, linker, directory, docindex, context,
+ indent+2, seclevel)
+ for c in tree.children]
+
+ # Construct the HTML string for the variables.
+ childstr = ''.join(variables)
+
+ # Perform the appropriate action for the DOM tree type.
+ if tree.tag == 'para':
+ return wordwrap(
+ (tree.attribs.get('inline') and '%s' or '<p>%s</p>') % childstr,
+ indent)
+ elif tree.tag == 'code':
+ style = tree.attribs.get('style')
+ if style:
+ return '<code class="%s">%s</code>' % (style, childstr)
+ else:
+ return '<code>%s</code>' % childstr
+ elif tree.tag == 'uri':
+ return ('<a href="%s" target="_top">%s</a>' %
+ (variables[1], variables[0]))
+ elif tree.tag == 'link':
+ return linker.translate_identifier_xref(variables[1], variables[0])
+ elif tree.tag == 'italic':
+ return '<i>%s</i>' % childstr
+ elif tree.tag == 'math':
+ return '<i class="math">%s</i>' % childstr
+ elif tree.tag == 'indexed':
+ term = Element('epytext', *tree.children, **tree.attribs)
+ return linker.translate_indexterm(ParsedEpytextDocstring(term))
+ #term_key = self._index_term_key(tree)
+ #return linker.translate_indexterm(childstr, term_key)
+ elif tree.tag == 'bold':
+ return '<b>%s</b>' % childstr
+ elif tree.tag == 'ulist':
+ return '%s<ul>\n%s%s</ul>\n' % (indent*' ', childstr, indent*' ')
+ elif tree.tag == 'olist':
+ start = tree.attribs.get('start') or ''
+ return ('%s<ol start="%s">\n%s%s</ol>\n' %
+ (indent*' ', start, childstr, indent*' '))
+ elif tree.tag == 'li':
+ return indent*' '+'<li>\n%s%s</li>\n' % (childstr, indent*' ')
+ elif tree.tag == 'heading':
+ return ('%s<h%s class="heading">%s</h%s>\n' %
+ ((indent-2)*' ', seclevel, childstr, seclevel))
+ elif tree.tag == 'literalblock':
+ return '<pre class="literalblock">\n%s\n</pre>\n' % childstr
+ elif tree.tag == 'doctestblock':
+ return doctest_to_html(tree.children[0].strip())
+ elif tree.tag == 'fieldlist':
+ raise AssertionError("There should not be any field lists left")
+ elif tree.tag in ('epytext', 'section', 'tag', 'arg',
+ 'name', 'target', 'html'):
+ return childstr
+ elif tree.tag == 'symbol':
+ symbol = tree.children[0]
+ return self.SYMBOL_TO_HTML.get(symbol, '[%s]' % symbol)
+ elif tree.tag == 'graph':
+ # Generate the graph.
+ graph = self._build_graph(variables[0], variables[1:], linker,
+ docindex, context)
+ if not graph: return ''
+ # Write the graph.
+ image_url = '%s.gif' % graph.uid
+ image_file = os.path.join(directory, image_url)
+ return graph.to_html(image_file, image_url)
+ else:
+ raise ValueError('Unknown epytext DOM element %r' % tree.tag)
+
+ #GRAPH_TYPES = ['classtree', 'packagetree', 'importgraph']
+ def _build_graph(self, graph_type, graph_args, linker,
+ docindex, context):
+ # Generate the graph
+ if graph_type == 'classtree':
+ from epydoc.apidoc import ClassDoc
+ if graph_args:
+ bases = [docindex.find(name, context)
+ for name in graph_args]
+ elif isinstance(context, ClassDoc):
+ bases = [context]
+ else:
+ log.warning("Could not construct class tree: you must "
+ "specify one or more base classes.")
+ return None
+ from epydoc.docwriter.dotgraph import class_tree_graph
+ return class_tree_graph(bases, linker, context)
+ elif graph_type == 'packagetree':
+ from epydoc.apidoc import ModuleDoc
+ if graph_args:
+ packages = [docindex.find(name, context)
+ for name in graph_args]
+ elif isinstance(context, ModuleDoc):
+ packages = [context]
+ else:
+ log.warning("Could not construct package tree: you must "
+ "specify one or more root packages.")
+ return None
+ from epydoc.docwriter.dotgraph import package_tree_graph
+ return package_tree_graph(packages, linker, context)
+ elif graph_type == 'importgraph':
+ from epydoc.apidoc import ModuleDoc
+ modules = [d for d in docindex.root if isinstance(d, ModuleDoc)]
+ from epydoc.docwriter.dotgraph import import_graph
+ return import_graph(modules, docindex, linker, context)
+
+ elif graph_type == 'callgraph':
+ if graph_args:
+ docs = [docindex.find(name, context) for name in graph_args]
+ docs = [doc for doc in docs if doc is not None]
+ else:
+ docs = [context]
+ from epydoc.docwriter.dotgraph import call_graph
+ return call_graph(docs, docindex, linker, context)
+ else:
+ log.warning("Unknown graph type %s" % graph_type)
+
+
+ def _to_latex(self, tree, linker, indent=0, seclevel=0, breakany=0):
+ if isinstance(tree, basestring):
+ return plaintext_to_latex(tree, breakany=breakany)
+
+ if tree.tag == 'section': seclevel += 1
+
+ # Figure out the child indent level.
+ if tree.tag == 'epytext': cindent = indent
+ else: cindent = indent + 2
+ variables = [self._to_latex(c, linker, cindent, seclevel, breakany)
+ for c in tree.children]
+ childstr = ''.join(variables)
+
+ if tree.tag == 'para':
+ return wordwrap(childstr, indent)+'\n'
+ elif tree.tag == 'code':
+ return '\\texttt{%s}' % childstr
+ elif tree.tag == 'uri':
+ if len(variables) != 2: raise ValueError('Bad URI ')
+ if self._hyperref:
+ # ~ and # should not be escaped in the URI.
+ uri = tree.children[1].children[0]
+ uri = uri.replace('{\\textasciitilde}', '~')
+ uri = uri.replace('\\#', '#')
+ if variables[0] == variables[1]:
+ return '\\href{%s}{\\textit{%s}}' % (uri, variables[1])
+ else:
+ return ('%s\\footnote{\\href{%s}{%s}}' %
+ (variables[0], uri, variables[1]))
+ else:
+ if variables[0] == variables[1]:
+ return '\\textit{%s}' % variables[1]
+ else:
+ return '%s\\footnote{%s}' % (variables[0], variables[1])
+ elif tree.tag == 'link':
+ if len(variables) != 2: raise ValueError('Bad Link')
+ return linker.translate_identifier_xref(variables[1], variables[0])
+ elif tree.tag == 'italic':
+ return '\\textit{%s}' % childstr
+ elif tree.tag == 'math':
+ return '\\textit{%s}' % childstr
+ elif tree.tag == 'indexed':
+ term = Element('epytext', *tree.children, **tree.attribs)
+ return linker.translate_indexterm(ParsedEpytextDocstring(term))
+ elif tree.tag == 'bold':
+ return '\\textbf{%s}' % childstr
+ elif tree.tag == 'li':
+ return indent*' ' + '\\item ' + childstr.lstrip()
+ elif tree.tag == 'heading':
+ return ' '*(indent-2) + '(section) %s\n\n' % childstr
+ elif tree.tag == 'doctestblock':
+ return doctest_to_latex(tree.children[0].strip())
+ elif tree.tag == 'literalblock':
+ return '\\begin{alltt}\n%s\\end{alltt}\n\n' % childstr
+ elif tree.tag == 'fieldlist':
+ return indent*' '+'{omitted fieldlist}\n'
+ elif tree.tag == 'olist':
+ return (' '*indent + '\\begin{enumerate}\n\n' +
+ ' '*indent + '\\setlength{\\parskip}{0.5ex}\n' +
+ childstr +
+ ' '*indent + '\\end{enumerate}\n\n')
+ elif tree.tag == 'ulist':
+ return (' '*indent + '\\begin{itemize}\n' +
+ ' '*indent + '\\setlength{\\parskip}{0.6ex}\n' +
+ childstr +
+ ' '*indent + '\\end{itemize}\n\n')
+ elif tree.tag == 'symbol':
+ symbol = tree.children[0]
+ return self.SYMBOL_TO_LATEX.get(symbol, '[%s]' % symbol)
+ elif tree.tag == 'graph':
+ return '(GRAPH)'
+ #raise ValueError, 'graph not implemented yet for latex'
+ else:
+ # Assume that anything else can be passed through.
+ return childstr
+
+ _SUMMARY_RE = re.compile(r'(\s*[\w\W]*?\.)(\s|$)')
+
+ def summary(self):
+ if self._tree is None: return self, False
+ tree = self._tree
+ doc = Element('epytext')
+
+ # Find the first paragraph.
+ variables = tree.children
+ while (len(variables) > 0) and (variables[0].tag != 'para'):
+ if variables[0].tag in ('section', 'ulist', 'olist', 'li'):
+ variables = variables[0].children
+ else:
+ variables = variables[1:]
+
+ # Special case: if the docstring contains a single literal block,
+ # then try extracting the summary from it.
+ if (len(variables) == 0 and len(tree.children) == 1 and
+ tree.children[0].tag == 'literalblock'):
+ str = re.split(r'\n\s*(\n|$).*',
+ tree.children[0].children[0], 1)[0]
+ variables = [Element('para')]
+ variables[0].children.append(str)
+
+ # If we didn't find a paragraph, return an empty epytext.
+ if len(variables) == 0: return ParsedEpytextDocstring(doc), False
+
+ # Is there anything else, excluding tags, after the first variable?
+ long_docs = False
+ for var in variables[1:]:
+ if isinstance(var, Element) and var.tag == 'fieldlist':
+ continue
+ long_docs = True
+ break
+
+ # Extract the first sentence.
+ parachildren = variables[0].children
+ para = Element('para', inline=True)
+ doc.children.append(para)
+ for parachild in parachildren:
+ if isinstance(parachild, basestring):
+ m = self._SUMMARY_RE.match(parachild)
+ if m:
+ para.children.append(m.group(1))
+ long_docs |= parachild is not parachildren[-1]
+ if not long_docs:
+ other = parachild[m.end():]
+ if other and not other.isspace():
+ long_docs = True
+ return ParsedEpytextDocstring(doc), long_docs
+ para.children.append(parachild)
+
+ return ParsedEpytextDocstring(doc), long_docs
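The first-sentence extraction hinges entirely on `_SUMMARY_RE`. Tried on its own (pattern copied from above), it lazily matches up to the first period that is followed by whitespace or end-of-string:

```python
import re

# Same pattern as _SUMMARY_RE above.
_SUMMARY_RE = re.compile(r'(\s*[\w\W]*?\.)(\s|$)')

def first_sentence(text):
    m = _SUMMARY_RE.match(text)
    return m.group(1) if m else text

print(first_sentence('Parse the string. Return a DOM tree.'))
# -> Parse the string.
```

Note that the lazy match stops at any period followed by whitespace, so an abbreviation such as "e.g. " truncates the summary early.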
+
+ def split_fields(self, errors=None):
+ if self._tree is None: return (self, ())
+ tree = Element(self._tree.tag, *self._tree.children,
+ **self._tree.attribs)
+ fields = []
+
+ if (tree.children and
+ tree.children[-1].tag == 'fieldlist' and
+ tree.children[-1].children):
+ field_nodes = tree.children[-1].children
+ del tree.children[-1]
+
+ for field in field_nodes:
+ # Get the tag
+ tag = field.children[0].children[0].lower()
+ del field.children[0]
+
+ # Get the argument.
+ if field.children and field.children[0].tag == 'arg':
+ arg = field.children[0].children[0]
+ del field.children[0]
+ else:
+ arg = None
+
+ # Process the field.
+ field.tag = 'epytext'
+ fields.append(Field(tag, arg, ParsedEpytextDocstring(field)))
+
+ # Save the remaining docstring as the description.
+ if tree.children and tree.children[0].children:
+ return ParsedEpytextDocstring(tree), fields
+ else:
+ return None, fields
+
+
+ def index_terms(self):
+ if self._terms is None:
+ self._terms = []
+ self._index_terms(self._tree, self._terms)
+ return self._terms
+
+ def _index_terms(self, tree, terms):
+ if tree is None or isinstance(tree, basestring):
+ return
+
+ if tree.tag == 'indexed':
+ term = Element('epytext', *tree.children, **tree.attribs)
+ terms.append(ParsedEpytextDocstring(term))
+
+ # Look for index items in child nodes.
+ for child in tree.children:
+ self._index_terms(child, terms)
diff --git a/python/helpers/epydoc/markup/javadoc.py b/python/helpers/epydoc/markup/javadoc.py
new file mode 100644
index 0000000..6aa5a4a
--- /dev/null
+++ b/python/helpers/epydoc/markup/javadoc.py
@@ -0,0 +1,250 @@
+#
+# javadoc.py: javadoc docstring parsing
+# Edward Loper
+#
+# Created [07/03/03 12:37 PM]
+# $Id: javadoc.py 1574 2007-03-07 02:55:14Z dvarrazzo $
+#
+
+"""
+Epydoc parser for U{Javadoc<http://java.sun.com/j2se/javadoc/>}
+docstrings. Javadoc is an HTML-based markup language that was
+developed for documenting Java APIs with inline comments. It consists
+of raw HTML, augmented by Javadoc tags. There are two types of
+Javadoc tag:
+
+ - X{Javadoc block tags} correspond to Epydoc fields. They are
+ marked by starting a line with a string of the form \"C{@M{tag}
+ [M{arg}]}\", where C{M{tag}} indicates the type of block, and
+ C{M{arg}} is an optional argument. (For fields that take
+ arguments, Javadoc assumes that the single word immediately
+ following the tag is an argument; multi-word arguments cannot be
+ used with javadoc.)
+
+ - X{inline Javadoc tags} are used for inline markup. In particular,
+ epydoc uses them for crossreference links between documentation.
+ Inline tags may appear anywhere in the text, and have the form
+ \"C{{@M{tag} M{[args...]}}}\", where C{M{tag}} indicates the
+ type of inline markup, and C{M{args}} are optional arguments.
+
+Epydoc supports all Javadoc tags, I{except}:
+ - C{{@docRoot}}, which gives the (relative) URL of the generated
+ documentation's root.
+ - C{{@inheritDoc}}, which copies the documentation of the nearest
+ overridden object. This can be used to combine the documentation
+ of the overridden object with the documentation of the
+ overriding object.
+ - C{@serial}, C{@serialField}, and C{@serialData} which describe the
+ serialization (pickling) of an object.
+ - C{{@value}}, which copies the value of a constant.
+
+@warning: Epydoc only supports HTML output for Javadoc docstrings.
+"""
+__docformat__ = 'epytext en'
+
+# Imports
+import re
+from xml.dom.minidom import *
+from epydoc.markup import *
+
+def parse_docstring(docstring, errors, **options):
+ """
+ Parse the given docstring, which is formatted using Javadoc; and
+ return a C{ParsedDocstring} representation of its contents.
+ @param docstring: The docstring to parse
+ @type docstring: C{string}
+ @param errors: A list where any errors generated during parsing
+ will be stored.
+ @type errors: C{list} of L{ParseError}
+ @param options: Extra options. Unknown options are ignored.
+ Currently, no extra options are defined.
+ @rtype: L{ParsedDocstring}
+ """
+ return ParsedJavadocDocstring(docstring, errors)
+
+class ParsedJavadocDocstring(ParsedDocstring):
+ """
+ An encoded version of a Javadoc docstring. Since Javadoc is a
+ fairly simple markup language, we don't do any processing in
+ advance; instead, we wait to split fields or resolve
+ crossreference links until we need to.
+
+ @group Field Splitting: split_fields, _ARG_FIELDS, _FIELD_RE
+ @cvar _ARG_FIELDS: A list of the fields that take arguments.
+ Since Javadoc doesn't mark arguments in any special way, we
+ must consult this list to decide whether the first word of a
+ field is an argument or not.
+ @cvar _FIELD_RE: A regular expression used to search for Javadoc
+ block tags.
+
+ @group HTML Output: to_html, _LINK_SPLIT_RE, _LINK_RE
+ @cvar _LINK_SPLIT_RE: A regular expression used to search for
+ Javadoc inline tags.
+ @cvar _LINK_RE: A regular expression used to process Javadoc
+ inline tags.
+ """
+ def __init__(self, docstring, errors=None):
+ """
+ Create a new C{ParsedJavadocDocstring}.
+
+ @param docstring: The docstring that should be used to
+ construct this C{ParsedJavadocDocstring}.
+ @type docstring: C{string}
+ @param errors: A list where any errors generated during
+ parsing will be stored. If no list is given, then
+ all errors are ignored.
+ @type errors: C{list} of L{ParseError}
+ """
+ self._docstring = docstring
+ if errors is None: errors = []
+ self._check_links(errors)
+
+ #////////////////////////////////////////////////////////////
+ # Field Splitting
+ #////////////////////////////////////////////////////////////
+
+ _ARG_FIELDS = ('group variable var type cvariable cvar ivariable '+
+ 'ivar param '+
+ 'parameter arg argument raise raises exception '+
+ 'except deffield newfield keyword kwarg kwparam').split()
+ _FIELD_RE = re.compile(r'(^\s*\@\w+[\s$])', re.MULTILINE)
+
+ # Inherit docs from ParsedDocstring.
+ def split_fields(self, errors=None):
+
+ # Split the docstring into an alternating list of field tags
+ # and text (odd pieces are field tags).
+ pieces = self._FIELD_RE.split(self._docstring)
+
+ # The first piece is the description.
+ descr = ParsedJavadocDocstring(pieces[0])
+
+ # The remaining pieces are the block fields (alternating tags
+ # and bodies; odd pieces are tags).
+ fields = []
+ for i in range(1, len(pieces)):
+ if i%2 == 1:
+ # Get the field tag.
+ tag = pieces[i].strip()[1:]
+ else:
+ # Get the field argument (if appropriate).
+ if tag in self._ARG_FIELDS:
+ subpieces = pieces[i].strip().split(None, 1)+['','']
+ (arg, body) = subpieces[:2]
+ else:
+ (arg, body) = (None, pieces[i])
+
+ # Special processing for @see fields, since Epydoc
+ # allows unrestricted text in them, but Javadoc just
+ # uses them for xref links:
+ if tag == 'see' and body:
+ if body[0] in '"\'':
+ if body[-1] == body[0]: body = body[1:-1]
+ elif body[0] == '<': pass
+ else: body = '{@link %s}' % body
+
+ # Construct the field.
+ parsed_body = ParsedJavadocDocstring(body)
+ fields.append(Field(tag, arg, parsed_body))
+
+ if pieces[0].strip():
+ return (descr, fields)
+ else:
+ return (None, fields)
+
+ #////////////////////////////////////////////////////////////
+ # HTML Output.
+ #////////////////////////////////////////////////////////////
+
+ _LINK_SPLIT_RE = re.compile(r'({@link(?:plain)?\s[^}]+})')
+ _LINK_RE = re.compile(r'{@link(?:plain)?\s+' + r'([\w#.]+)' +
+ r'(?:\([^\)]*\))?' + r'(\s+.*)?' + r'}')
+
+ # Inherit docs from ParsedDocstring.
+ def to_html(self, docstring_linker, **options):
+ # Split the docstring into an alternating list of HTML and
+ # links (odd pieces are links).
+ pieces = self._LINK_SPLIT_RE.split(self._docstring)
+
+ # This function is used to translate {@link ...}s to HTML.
+ translate_xref = docstring_linker.translate_identifier_xref
+
+ # Build up the HTML string from the pieces. For HTML pieces
+ # (even), just add it to html. For link pieces (odd), use
+ # docstring_linker to translate the crossreference link to
+ # HTML for us.
+ html = ''
+ for i in range(len(pieces)):
+ if i%2 == 0:
+ html += pieces[i]
+ else:
+ # Decompose the link into pieces.
+ m = self._LINK_RE.match(pieces[i])
+ if m is None: continue # Error flagged by _check_links
+ (target, name) = m.groups()
+
+ # Normalize the target name.
+ if target[0] == '#': target = target[1:]
+ target = target.replace('#', '.')
+ target = re.sub(r'\(.*\)', '', target)
+
+ # Provide a name, if it wasn't specified.
+ if name is None: name = target
+ else: name = name.strip()
+
+ # Use docstring_linker to convert the name to html.
+ html += translate_xref(target, name)
+ return html
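The two-stage `{@link}` handling above can be sketched in isolation. This is a simplified standalone version (outside the diff): the `xref` callback stands in for `docstring_linker.translate_identifier_xref`, and the parenthesis-stripping step for method signatures is omitted for brevity.

```python
import re

# Same patterns as _LINK_SPLIT_RE / _LINK_RE: the first isolates inline
# {@link ...} tags, the second decomposes each into target and label.
LINK_SPLIT_RE = re.compile(r'({@link(?:plain)?\s[^}]+})')
LINK_RE = re.compile(r'{@link(?:plain)?\s+([\w#.]+)(?:\([^\)]*\))?(\s+.*)?}')

def to_html(docstring,
            xref=lambda target, name: '<a href="#%s">%s</a>' % (target, name)):
    html = ''
    # Even pieces are plain text; odd pieces are inline tags.
    for i, piece in enumerate(LINK_SPLIT_RE.split(docstring)):
        if i % 2 == 0:
            html += piece
            continue
        m = LINK_RE.match(piece)
        if m is None:
            continue  # malformed link; flagged elsewhere at parse time
        target, name = m.groups()
        # Normalize "#member" and "Class#member" forms to dotted names.
        target = target.lstrip('#').replace('#', '.')
        html += xref(target, (name or target).strip())
    return html

print(to_html('See {@link Foo#bar} for details.'))
# See <a href="#Foo.bar">Foo.bar</a> for details.
```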
+
+ def _check_links(self, errors):
+ """
+ Make sure that all C{{@link}}s are valid. We need a separate
+ method for this because we want to do this at parse time, not
+ at HTML output time. Any errors found are appended to C{errors}.
+ """
+ pieces = self._LINK_SPLIT_RE.split(self._docstring)
+ linenum = 0
+ for i in range(len(pieces)):
+ if i%2 == 1 and not self._LINK_RE.match(pieces[i]):
+ estr = 'Bad link %r' % pieces[i]
+ errors.append(ParseError(estr, linenum, is_fatal=0))
+ linenum += pieces[i].count('\n')
+
+ #////////////////////////////////////////////////////////////
+ # Plaintext Output.
+ #////////////////////////////////////////////////////////////
+
+ # Inherit docs from ParsedDocstring. Since we don't define
+ # to_latex, this is used when generating latex output.
+ def to_plaintext(self, docstring_linker, **options):
+ return self._docstring
+
+ _SUMMARY_RE = re.compile(r'(\s*[\w\W]*?\.)(\s|$)')
+
+ # Jeff's hack to get summary working
+ def summary(self):
+ # Drop tags
+ doc = "\n".join([ row for row in self._docstring.split('\n')
+ if not row.lstrip().startswith('@') ])
+
+ m = self._SUMMARY_RE.match(doc)
+ if m:
+ other = doc[m.end():]
+ return (ParsedJavadocDocstring(m.group(1)),
+ other != '' and not other.isspace())
+
+ else:
+ parts = doc.strip('\n').split('\n', 1)
+ if len(parts) == 1:
+ summary = parts[0]
+ other = False
+ else:
+ summary = parts[0] + '...'
+ other = True
+
+ return ParsedJavadocDocstring(summary), other
+
+# def concatenate(self, other):
+# if not isinstance(other, ParsedJavadocDocstring):
+# raise ValueError, 'Could not concatenate docstrings'
+# return ParsedJavadocDocstring(self._docstring+other._docstring)
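The field-splitting approach used by `split_fields` above relies on `re.split` with a capturing group, which keeps the matched tags in the result list. A minimal standalone sketch of that technique, reusing the same regular expression:

```python
import re

# The same pattern as ParsedJavadocDocstring._FIELD_RE: a block tag is an
# "@word" at the start of a line. Because the pattern is a capturing
# group, split() yields the description followed by alternating
# tag/body pieces.
FIELD_RE = re.compile(r'(^\s*\@\w+[\s$])', re.MULTILINE)

doc = """Return the sum.
@param x the first addend
@param y the second addend
@return the sum of x and y
"""
pieces = FIELD_RE.split(doc)
description = pieces[0].strip()
tags = [p.strip()[1:] for p in pieces[1::2]]    # odd pieces are tags
bodies = [p.strip() for p in pieces[2::2]]      # even pieces (>0) are bodies

print(description)  # Return the sum.
print(tags)         # ['param', 'param', 'return']
```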
diff --git a/python/helpers/epydoc/markup/plaintext.py b/python/helpers/epydoc/markup/plaintext.py
new file mode 100644
index 0000000..9825b34
--- /dev/null
+++ b/python/helpers/epydoc/markup/plaintext.py
@@ -0,0 +1,78 @@
+#
+# plaintext.py: plaintext docstring parsing
+# Edward Loper
+#
+# Created [04/10/01 12:00 AM]
+# $Id: plaintext.py 1574 2007-03-07 02:55:14Z dvarrazzo $
+#
+
+"""
+Parser for plaintext docstrings. Plaintext docstrings are rendered as
+verbatim output, preserving all whitespace.
+"""
+__docformat__ = 'epytext en'
+
+import re
+
+from epydoc.markup import *
+from epydoc.util import plaintext_to_html, plaintext_to_latex
+
+def parse_docstring(docstring, errors, **options):
+ """
+ @return: A C{ParsedDocstring} that encodes the contents of the
+ given plaintext docstring.
+ @rtype: L{ParsedPlaintextDocstring}
+ """
+ return ParsedPlaintextDocstring(docstring, **options)
+
+class ParsedPlaintextDocstring(ParsedDocstring):
+ def __init__(self, text, **options):
+ self._verbatim = options.get('verbatim', 1)
+ if text is None: raise ValueError, 'Bad text value (expected a str)'
+ self._text = text
+
+ def to_html(self, docstring_linker, **options):
+ if options.get('verbatim', self._verbatim) == 0:
+ return plaintext_to_html(self.to_plaintext(docstring_linker))
+ else:
+ return ParsedDocstring.to_html(self, docstring_linker, **options)
+
+ def to_latex(self, docstring_linker, **options):
+ if options.get('verbatim', self._verbatim) == 0:
+ return plaintext_to_latex(self.to_plaintext(docstring_linker))
+ else:
+ return ParsedDocstring.to_latex(self, docstring_linker, **options)
+
+ def to_plaintext(self, docstring_linker, **options):
+ if 'indent' in options:
+ indent = options['indent']
+ lines = self._text.split('\n')
+ return '\n'.join([' '*indent+l for l in lines])+'\n'
+ return self._text+'\n'
+
+ _SUMMARY_RE = re.compile(r'(\s*[\w\W]*?(?:\.(\s|$)|[\n][\t ]*[\n]))')
+
+ def summary(self):
+ m = self._SUMMARY_RE.match(self._text)
+ if m:
+ other = self._text[m.end():]
+ return (ParsedPlaintextDocstring(m.group(1), verbatim=0),
+ other != '' and not other.isspace())
+ else:
+ parts = self._text.strip('\n').split('\n', 1)
+ if len(parts) == 1:
+ summary = parts[0]
+ other = False
+ else:
+ summary = parts[0] + '...'
+ other = True
+
+ return ParsedPlaintextDocstring(summary, verbatim=0), other
+
+# def concatenate(self, other):
+# if not isinstance(other, ParsedPlaintextDocstring):
+# raise ValueError, 'Could not concatenate docstrings'
+# text = self._text+other._text
+# options = self._options.copy()
+# options.update(other._options)
+# return ParsedPlaintextDocstring(text, options)
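The summary extraction above ends the summary either at the first sentence-final period or at a blank line. A standalone sketch of that rule, using the same regular expression shape as `_SUMMARY_RE`:

```python
import re

# Summary ends at the first "."-plus-whitespace, or at a blank line.
SUMMARY_RE = re.compile(r'(\s*[\w\W]*?(?:\.(\s|$)|[\n][\t ]*[\n]))')

def summary(text):
    """Return (summary, has_more) for a plaintext docstring."""
    m = SUMMARY_RE.match(text)
    if m:
        rest = text[m.end():]
        return m.group(1).strip(), bool(rest and not rest.isspace())
    return text.strip(), False

print(summary("Render a value.  Long details follow."))
# ('Render a value.', True)
```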
diff --git a/python/helpers/epydoc/markup/pyval_repr.py b/python/helpers/epydoc/markup/pyval_repr.py
new file mode 100644
index 0000000..c6d41e0
--- /dev/null
+++ b/python/helpers/epydoc/markup/pyval_repr.py
@@ -0,0 +1,532 @@
+# epydoc -- Marked-up Representations for Python Values
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: apidoc.py 1448 2007-02-11 00:05:34Z dvarrazzo $
+
+"""
+Syntax highlighter for Python values. Currently provides special
+colorization support for:
+
+ - lists, tuples, sets, frozensets, dicts
+ - numbers
+ - strings
+ - compiled regexps
+
+The highlighter also takes care of line-wrapping, and automatically
+stops generating repr output as soon as it has exceeded the specified
+number of lines (which should make it faster than pprint for large
+values). It does I{not} bother to do automatic cycle detection,
+because maxlines is typically around 5, so it's really not worth it.
+
+The syntax-highlighted output is encoded using a
+L{ParsedEpytextDocstring}, which can then be used to generate output in
+a variety of formats.
+"""
+__docformat__ = 'epytext en'
+
+# Implementation note: we use exact tests for classes (list, etc)
+# rather than using isinstance, because subclasses might override
+# __repr__.
+
+import types, re
+import epydoc.apidoc
+from epydoc import log
+from epydoc.util import decode_with_backslashreplace
+from epydoc.util import plaintext_to_html, plaintext_to_latex
+from epydoc.compat import *
+import sre_parse, sre_constants
+
+from epydoc.markup.epytext import Element, ParsedEpytextDocstring
+
+def is_re_pattern(pyval):
+ return type(pyval).__name__ == 'SRE_Pattern'
+
+class _ColorizerState:
+ """
+ An object used to keep track of the current state of the pyval
+ colorizer. The L{mark()}/L{restore()} methods can be used to set
+ a backup point, and restore back to that backup point. This is
+ used by several colorization methods that first try colorizing
+ their object on a single line (setting linebreakok=False); and
+ then fall back on a multi-line output if that fails. The L{score}
+ variable is used to keep track of a 'score', reflecting how good
+ we think this repr is. E.g., unhelpful values like '<Foo instance
+ at 0x12345>' get low scores. If the score is too low, we'll use
+ the parse-derived repr instead.
+ """
+ def __init__(self):
+ self.result = []
+ self.charpos = 0
+ self.lineno = 1
+ self.linebreakok = True
+
+ #: How good this representation is.
+ self.score = 0
+
+ def mark(self):
+ return (len(self.result), self.charpos,
+ self.lineno, self.linebreakok, self.score)
+
+ def restore(self, mark):
+ n, self.charpos, self.lineno, self.linebreakok, self.score = mark
+ del self.result[n:]
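The `mark()`/`restore()` pair above implements a simple backtracking protocol over the output buffer. A minimal sketch of the idea (names here are illustrative, not the vendored API): a colorizer tries a one-line rendering first, and rolls the buffer back to the mark if that attempt fails.

```python
# Sketch of the mark()/restore() backtracking used by _ColorizerState.
class State:
    def __init__(self):
        self.result = []
        self.charpos = 0

    def mark(self):
        # Snapshot the buffer length and cursor position.
        return (len(self.result), self.charpos)

    def restore(self, mark):
        # Truncate the buffer back to the snapshot.
        n, self.charpos = mark
        del self.result[n:]

state = State()
state.result += ['[', '1, 2']
state.charpos = 5
saved = state.mark()
state.result.append(', 3, 4, 5')   # a one-line attempt that proved too long
state.charpos = 14
state.restore(saved)               # roll back to the marked point
print(state.result)                # ['[', '1, 2']
```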
+
+class _Maxlines(Exception):
+ """A control-flow exception that is raised when PyvalColorizer
+ exceeds the maximum number of allowed lines."""
+
+class _Linebreak(Exception):
+ """A control-flow exception that is raised when PyvalColorizer
+ generates a string containing a newline, but the state object's
+ linebreakok variable is False."""
+
+class ColorizedPyvalRepr(ParsedEpytextDocstring):
+ """
+ @ivar score: A score, evaluating how good this repr is.
+ @ivar is_complete: True if this colorized repr completely describes
+ the object.
+ """
+ def __init__(self, tree, score, is_complete):
+ ParsedEpytextDocstring.__init__(self, tree)
+ self.score = score
+ self.is_complete = is_complete
+
+def colorize_pyval(pyval, parse_repr=None, min_score=None,
+ linelen=75, maxlines=5, linebreakok=True, sort=True):
+ return PyvalColorizer(linelen, maxlines, linebreakok, sort).colorize(
+ pyval, parse_repr, min_score)
+
+class PyvalColorizer:
+ """
+ Syntax highlighter for Python values.
+ """
+
+ def __init__(self, linelen=75, maxlines=5, linebreakok=True, sort=True):
+ self.linelen = linelen
+ self.maxlines = maxlines
+ self.linebreakok = linebreakok
+ self.sort = sort
+
+ #////////////////////////////////////////////////////////////
+ # Colorization Tags & other constants
+ #////////////////////////////////////////////////////////////
+
+ GROUP_TAG = 'variable-group' # e.g., "[" and "]"
+ COMMA_TAG = 'variable-op' # The "," that separates elements
+ COLON_TAG = 'variable-op' # The ":" in dictionaries
+ CONST_TAG = None # None, True, False
+ NUMBER_TAG = None # ints, floats, etc
+ QUOTE_TAG = 'variable-quote' # Quotes around strings.
+ STRING_TAG = 'variable-string' # Body of string literals
+
+ RE_CHAR_TAG = None
+ RE_GROUP_TAG = 're-group'
+ RE_REF_TAG = 're-ref'
+ RE_OP_TAG = 're-op'
+ RE_FLAGS_TAG = 're-flags'
+
+ ELLIPSIS = Element('code', u'...', style='variable-ellipsis')
+ LINEWRAP = Element('symbol', u'crarr')
+ UNKNOWN_REPR = Element('code', u'??', style='variable-unknown')
+
+ GENERIC_OBJECT_RE = re.compile(r'^<.* at 0x[0-9a-f]+>$', re.IGNORECASE)
+
+ ESCAPE_UNICODE = False # should we escape non-ascii unicode chars?
+
+ #////////////////////////////////////////////////////////////
+ # Entry Point
+ #////////////////////////////////////////////////////////////
+
+ def colorize(self, pyval, parse_repr=None, min_score=None):
+ """
+ @return: A L{ColorizedPyvalRepr} describing the given pyval.
+ """
+ UNKNOWN = epydoc.apidoc.UNKNOWN
+ # Create an object to keep track of the colorization.
+ state = _ColorizerState()
+ state.linebreakok = self.linebreakok
+ # Colorize the value. If we reach maxlines, then add on an
+ # ellipsis marker and call it a day.
+ try:
+ if pyval is not UNKNOWN:
+ self._colorize(pyval, state)
+ elif parse_repr not in (None, UNKNOWN):
+ self._output(parse_repr, None, state)
+ else:
+ state.result.append(PyvalColorizer.UNKNOWN_REPR)
+ is_complete = True
+ except (_Maxlines, _Linebreak):
+ if self.linebreakok:
+ state.result.append('\n')
+ state.result.append(self.ELLIPSIS)
+ else:
+ if state.result[-1] is self.LINEWRAP:
+ state.result.pop()
+ self._trim_result(state.result, 3)
+ state.result.append(self.ELLIPSIS)
+ is_complete = False
+ # If we didn't score high enough, then try again.
+ if (pyval is not UNKNOWN and parse_repr not in (None, UNKNOWN)
+ and min_score is not None and state.score < min_score):
+ return self.colorize(UNKNOWN, parse_repr)
+ # Put it all together.
+ tree = Element('epytext', *state.result)
+ return ColorizedPyvalRepr(tree, state.score, is_complete)
+
+ def _colorize(self, pyval, state):
+ pyval_type = type(pyval)
+ state.score += 1
+
+ if pyval is None or pyval is True or pyval is False:
+ self._output(unicode(pyval), self.CONST_TAG, state)
+ elif pyval_type in (int, float, long, types.ComplexType):
+ self._output(unicode(pyval), self.NUMBER_TAG, state)
+ elif pyval_type is str:
+ self._colorize_str(pyval, state, '', 'string-escape')
+ elif pyval_type is unicode:
+ if self.ESCAPE_UNICODE:
+ self._colorize_str(pyval, state, 'u', 'unicode-escape')
+ else:
+ self._colorize_str(pyval, state, 'u', None)
+ elif pyval_type is list:
+ self._multiline(self._colorize_iter, pyval, state, '[', ']')
+ elif pyval_type is tuple:
+ self._multiline(self._colorize_iter, pyval, state, '(', ')')
+ elif pyval_type is set:
+ self._multiline(self._colorize_iter, self._sort(pyval),
+ state, 'set([', '])')
+ elif pyval_type is frozenset:
+ self._multiline(self._colorize_iter, self._sort(pyval),
+ state, 'frozenset([', '])')
+ elif pyval_type is dict:
+ self._multiline(self._colorize_dict, self._sort(pyval.items()),
+ state, '{', '}')
+ elif is_re_pattern(pyval):
+ self._colorize_re(pyval, state)
+ else:
+ try:
+ pyval_repr = repr(pyval)
+ if not isinstance(pyval_repr, (str, unicode)):
+ pyval_repr = unicode(pyval_repr)
+ pyval_repr_ok = True
+ except KeyboardInterrupt:
+ raise
+ except:
+ pyval_repr_ok = False
+ state.score -= 100
+
+ if pyval_repr_ok:
+ if self.GENERIC_OBJECT_RE.match(pyval_repr):
+ state.score -= 5
+ self._output(pyval_repr, None, state)
+ else:
+ state.result.append(self.UNKNOWN_REPR)
+
+ def _sort(self, items):
+ if not self.sort: return items
+ try: return sorted(items)
+ except KeyboardInterrupt: raise
+ except: return items
+
+ def _trim_result(self, result, num_chars):
+ while num_chars > 0:
+ if not result: return
+ if isinstance(result[-1], Element):
+ assert len(result[-1].children) == 1
+ trim = min(num_chars, len(result[-1].children[0]))
+ result[-1].children[0] = result[-1].children[0][:-trim]
+ if not result[-1].children[0]: result.pop()
+ num_chars -= trim
+ else:
+ trim = min(num_chars, len(result[-1]))
+ result[-1] = result[-1][:-trim]
+ if not result[-1]: result.pop()
+ num_chars -= trim
+
+ #////////////////////////////////////////////////////////////
+ # Object Colorization Functions
+ #////////////////////////////////////////////////////////////
+
+ def _multiline(self, func, pyval, state, *args):
+ """
+ Helper for container-type colorizers. First, try calling
+ C{func(pyval, state, *args)} with linebreakok set to false;
+ and if that fails, then try again with it set to true.
+ """
+ linebreakok = state.linebreakok
+ mark = state.mark()
+
+ try:
+ state.linebreakok = False
+ func(pyval, state, *args)
+ state.linebreakok = linebreakok
+
+ except _Linebreak:
+ if not linebreakok:
+ raise
+ state.restore(mark)
+ func(pyval, state, *args)
+
+ def _colorize_iter(self, pyval, state, prefix, suffix):
+ self._output(prefix, self.GROUP_TAG, state)
+ indent = state.charpos
+ for i, elt in enumerate(pyval):
+ if i>=1:
+ if state.linebreakok:
+ self._output(',', self.COMMA_TAG, state)
+ self._output('\n'+' '*indent, None, state)
+ else:
+ self._output(', ', self.COMMA_TAG, state)
+ self._colorize(elt, state)
+ self._output(suffix, self.GROUP_TAG, state)
+
+ def _colorize_dict(self, items, state, prefix, suffix):
+ self._output(prefix, self.GROUP_TAG, state)
+ indent = state.charpos
+ for i, (key, val) in enumerate(items):
+ if i>=1:
+ if state.linebreakok:
+ self._output(',', self.COMMA_TAG, state)
+ self._output('\n'+' '*indent, None, state)
+ else:
+ self._output(', ', self.COMMA_TAG, state)
+ self._colorize(key, state)
+ self._output(': ', self.COLON_TAG, state)
+ self._colorize(val, state)
+ self._output(suffix, self.GROUP_TAG, state)
+
+ def _colorize_str(self, pyval, state, prefix, encoding):
+ # Decide which quote to use.
+ if '\n' in pyval and state.linebreakok: quote = "'''"
+ else: quote = "'"
+ # Divide the string into lines.
+ if state.linebreakok:
+ lines = pyval.split('\n')
+ else:
+ lines = [pyval]
+ # Open quote.
+ self._output(prefix+quote, self.QUOTE_TAG, state)
+ # Body
+ for i, line in enumerate(lines):
+ if i>0: self._output('\n', None, state)
+ if encoding: line = line.encode(encoding)
+ self._output(line, self.STRING_TAG, state)
+ # Close quote.
+ self._output(quote, self.QUOTE_TAG, state)
+
+ def _colorize_re(self, pyval, state):
+ # Extract the flag & pattern from the regexp.
+ pat, flags = pyval.pattern, pyval.flags
+ # If the pattern is a string, decode it to unicode.
+ if isinstance(pat, str):
+ pat = decode_with_backslashreplace(pat)
+ # Parse the regexp pattern.
+ tree = sre_parse.parse(pat, flags)
+ groups = dict([(num,name) for (name,num) in
+ tree.pattern.groupdict.items()])
+ # Colorize it!
+ self._output("re.compile(r'", None, state)
+ self._colorize_re_flags(tree.pattern.flags, state)
+ self._colorize_re_tree(tree, state, True, groups)
+ self._output("')", None, state)
+
+ def _colorize_re_flags(self, flags, state):
+ if flags:
+ flags = [c for (c,n) in sorted(sre_parse.FLAGS.items())
+ if (n&flags)]
+ flags = '(?%s)' % ''.join(flags)
+ self._output(flags, self.RE_FLAGS_TAG, state)
+
+ def _colorize_re_tree(self, tree, state, noparen, groups):
+ assert noparen in (True, False)
+ if len(tree) > 1 and not noparen:
+ self._output('(', self.RE_GROUP_TAG, state)
+ for elt in tree:
+ op = elt[0]
+ args = elt[1]
+
+ if op == sre_constants.LITERAL:
+ c = unichr(args)
+ # Add any appropriate escaping.
+ if c in '.^$\\*+?{}[]|()\'': c = '\\'+c
+ elif c == '\t': c = '\\t'
+ elif c == '\r': c = '\\r'
+ elif c == '\n': c = '\\n'
+ elif c == '\f': c = '\\f'
+ elif c == '\v': c = '\\v'
+ elif ord(c) > 0xffff: c = r'\U%08x' % ord(c)
+ elif ord(c) > 0xff: c = r'\u%04x' % ord(c)
+ elif ord(c)<32 or ord(c)>=127: c = r'\x%02x' % ord(c)
+ self._output(c, self.RE_CHAR_TAG, state)
+
+ elif op == sre_constants.ANY:
+ self._output('.', self.RE_CHAR_TAG, state)
+
+ elif op == sre_constants.BRANCH:
+ if args[0] is not None:
+ raise ValueError('Branch expected None arg but got %s'
+ % args[0])
+ for i, item in enumerate(args[1]):
+ if i > 0:
+ self._output('|', self.RE_OP_TAG, state)
+ self._colorize_re_tree(item, state, True, groups)
+
+ elif op == sre_constants.IN:
+ if (len(args) == 1 and args[0][0] == sre_constants.CATEGORY):
+ self._colorize_re_tree(args, state, False, groups)
+ else:
+ self._output('[', self.RE_GROUP_TAG, state)
+ self._colorize_re_tree(args, state, True, groups)
+ self._output(']', self.RE_GROUP_TAG, state)
+
+ elif op == sre_constants.CATEGORY:
+ if args == sre_constants.CATEGORY_DIGIT: val = r'\d'
+ elif args == sre_constants.CATEGORY_NOT_DIGIT: val = r'\D'
+ elif args == sre_constants.CATEGORY_SPACE: val = r'\s'
+ elif args == sre_constants.CATEGORY_NOT_SPACE: val = r'\S'
+ elif args == sre_constants.CATEGORY_WORD: val = r'\w'
+ elif args == sre_constants.CATEGORY_NOT_WORD: val = r'\W'
+ else: raise ValueError('Unknown category %s' % args)
+ self._output(val, self.RE_CHAR_TAG, state)
+
+ elif op == sre_constants.AT:
+ if args == sre_constants.AT_BEGINNING_STRING: val = r'\A'
+ elif args == sre_constants.AT_BEGINNING: val = r'^'
+ elif args == sre_constants.AT_END: val = r'$'
+ elif args == sre_constants.AT_BOUNDARY: val = r'\b'
+ elif args == sre_constants.AT_NON_BOUNDARY: val = r'\B'
+ elif args == sre_constants.AT_END_STRING: val = r'\Z'
+ else: raise ValueError('Unknown position %s' % args)
+ self._output(val, self.RE_CHAR_TAG, state)
+
+ elif op in (sre_constants.MAX_REPEAT, sre_constants.MIN_REPEAT):
+ minrpt = args[0]
+ maxrpt = args[1]
+ if maxrpt == sre_constants.MAXREPEAT:
+ if minrpt == 0: val = '*'
+ elif minrpt == 1: val = '+'
+ else: val = '{%d,}' % (minrpt)
+ elif minrpt == 0:
+ if maxrpt == 1: val = '?'
+ else: val = '{,%d}' % (maxrpt)
+ elif minrpt == maxrpt:
+ val = '{%d}' % (maxrpt)
+ else:
+ val = '{%d,%d}' % (minrpt, maxrpt)
+ if op == sre_constants.MIN_REPEAT:
+ val += '?'
+
+ self._colorize_re_tree(args[2], state, False, groups)
+ self._output(val, self.RE_OP_TAG, state)
+
+ elif op == sre_constants.SUBPATTERN:
+ if args[0] is None:
+ self._output('(?:', self.RE_GROUP_TAG, state)
+ elif args[0] in groups:
+ self._output('(?P<', self.RE_GROUP_TAG, state)
+ self._output(groups[args[0]], self.RE_REF_TAG, state)
+ self._output('>', self.RE_GROUP_TAG, state)
+ elif isinstance(args[0], (int, long)):
+ # This is cheating:
+ self._output('(', self.RE_GROUP_TAG, state)
+ else:
+ self._output('(?P<', self.RE_GROUP_TAG, state)
+ self._output(args[0], self.RE_REF_TAG, state)
+ self._output('>', self.RE_GROUP_TAG, state)
+ self._colorize_re_tree(args[1], state, True, groups)
+ self._output(')', self.RE_GROUP_TAG, state)
+
+ elif op == sre_constants.GROUPREF:
+ self._output('\\%d' % args, self.RE_REF_TAG, state)
+
+ elif op == sre_constants.RANGE:
+ self._colorize_re_tree( ((sre_constants.LITERAL, args[0]),),
+ state, False, groups )
+ self._output('-', self.RE_OP_TAG, state)
+ self._colorize_re_tree( ((sre_constants.LITERAL, args[1]),),
+ state, False, groups )
+
+ elif op == sre_constants.NEGATE:
+ self._output('^', self.RE_OP_TAG, state)
+
+ elif op == sre_constants.ASSERT:
+ if args[0] > 0:
+ self._output('(?=', self.RE_GROUP_TAG, state)
+ else:
+ self._output('(?<=', self.RE_GROUP_TAG, state)
+ self._colorize_re_tree(args[1], state, True, groups)
+ self._output(')', self.RE_GROUP_TAG, state)
+
+ elif op == sre_constants.ASSERT_NOT:
+ if args[0] > 0:
+ self._output('(?!', self.RE_GROUP_TAG, state)
+ else:
+ self._output('(?<!', self.RE_GROUP_TAG, state)
+ self._colorize_re_tree(args[1], state, True, groups)
+ self._output(')', self.RE_GROUP_TAG, state)
+
+ elif op == sre_constants.NOT_LITERAL:
+ self._output('[^', self.RE_GROUP_TAG, state)
+ self._colorize_re_tree( ((sre_constants.LITERAL, args),),
+ state, False, groups )
+ self._output(']', self.RE_GROUP_TAG, state)
+ else:
+ log.error("Error colorizing regexp: unknown elt %r" % elt)
+ if len(tree) > 1 and not noparen:
+ self._output(')', self.RE_GROUP_TAG, state)
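The MAX_REPEAT/MIN_REPEAT branch above maps a `(min, max)` repeat count back to regexp quantifier syntax. That table can be sketched on its own; `MAXREPEAT` here is a stand-in for sre's "unbounded" marker:

```python
# The quantifier table applied by _colorize_re_tree for repeat nodes.
MAXREPEAT = object()  # placeholder for sre's "unbounded" sentinel

def quantifier(minrpt, maxrpt, lazy=False):
    if maxrpt is MAXREPEAT:
        if minrpt == 0:   val = '*'
        elif minrpt == 1: val = '+'
        else:             val = '{%d,}' % minrpt
    elif minrpt == 0:
        val = '?' if maxrpt == 1 else '{,%d}' % maxrpt
    elif minrpt == maxrpt:
        val = '{%d}' % maxrpt
    else:
        val = '{%d,%d}' % (minrpt, maxrpt)
    if lazy:  # MIN_REPEAT renders as the non-greedy variant
        val += '?'
    return val

print([quantifier(0, MAXREPEAT), quantifier(1, MAXREPEAT), quantifier(2, 3)])
# ['*', '+', '{2,3}']
```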
+
+ #////////////////////////////////////////////////////////////
+ # Output function
+ #////////////////////////////////////////////////////////////
+
+ def _output(self, s, tag, state):
+ """
+ Add the string C{s} to the result list, tagging its contents
+ with tag C{tag}. Any lines that go beyond C{self.linelen} will
+ be line-wrapped. If the total number of lines exceeds
+ C{self.maxlines}, then raise a L{_Maxlines} exception.
+ """
+ # Make sure the string is unicode.
+ if isinstance(s, str):
+ s = decode_with_backslashreplace(s)
+
+ # Split the string into segments. The first segment is the
+ # content to add to the current line, and the remaining
+ # segments are new lines.
+ segments = s.split('\n')
+
+ for i, segment in enumerate(segments):
+ # If this isn't the first segment, then add a newline to
+ # split it from the previous segment.
+ if i > 0:
+ if (state.lineno+1) > self.maxlines:
+ raise _Maxlines()
+ if not state.linebreakok:
+ raise _Linebreak()
+ state.result.append(u'\n')
+ state.lineno += 1
+ state.charpos = 0
+
+ # If the segment fits on the current line, then just call
+ # markup to tag it, and store the result.
+ if state.charpos + len(segment) <= self.linelen:
+ state.charpos += len(segment)
+ if tag:
+ segment = Element('code', segment, style=tag)
+ state.result.append(segment)
+
+ # If the segment doesn't fit on the current line, then
+ # line-wrap it, and insert the remainder of the line into
+ # the segments list that we're iterating over. (We'll go
+ # to the beginning of the next line at the start of the
+ # next iteration through the loop.)
+ else:
+ split = self.linelen-state.charpos
+ segments.insert(i+1, segment[split:])
+ segment = segment[:split]
+ if tag:
+ segment = Element('code', segment, style=tag)
+ state.result += [segment, self.LINEWRAP]
+
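The wrapping loop in `_output` has one subtle move: when a segment overruns the line limit, the remainder is pushed back into the list being iterated over, so it is re-processed as a fresh segment on the next line. A stripped-down standalone sketch (the ellipsis handling that the real code does via `_Maxlines` is inlined here):

```python
# Simplified version of _output's wrapping loop: overlong segments are
# split at the line limit and the remainder is re-queued for the next line.
def wrap(s, linelen=10, maxlines=5, mark='\u21b5'):
    out, charpos, lineno = [], 0, 1
    segments = s.split('\n')
    i = 0
    while i < len(segments):
        segment = segments[i]
        if i > 0:
            # Each later segment starts a new line, up to maxlines.
            if lineno + 1 > maxlines:
                out.append('...')
                break
            out.append('\n')
            lineno += 1
            charpos = 0
        if charpos + len(segment) <= linelen:
            charpos += len(segment)
            out.append(segment)
        else:
            # Split at the limit; the tail becomes the next segment.
            split = linelen - charpos
            segments.insert(i + 1, segment[split:])
            out += [segment[:split], mark]
        i += 1
    return ''.join(out)

print(wrap('abcdefghijkl'))  # 'abcdefghij' + wrap mark, then 'kl'
```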
diff --git a/python/helpers/epydoc/markup/restructuredtext.py b/python/helpers/epydoc/markup/restructuredtext.py
new file mode 100644
index 0000000..b11b154
--- /dev/null
+++ b/python/helpers/epydoc/markup/restructuredtext.py
@@ -0,0 +1,906 @@
+#
+# rst.py: ReStructuredText docstring parsing
+# Edward Loper
+#
+# Created [06/28/03 02:52 AM]
+# $Id: restructuredtext.py 1661 2007-11-07 12:59:34Z dvarrazzo $
+#
+
+"""
+Epydoc parser for ReStructuredText strings. ReStructuredText is the
+standard markup language used by the Docutils project.
+L{parse_docstring()} provides the primary interface to this module; it
+returns a L{ParsedRstDocstring}, which supports all of the methods
+defined by L{ParsedDocstring}.
+
+L{ParsedRstDocstring} is basically just a L{ParsedDocstring} wrapper
+for the C{docutils.nodes.document} class.
+
+Creating C{ParsedRstDocstring}s
+===============================
+
+C{ParsedRstDocstring}s are created by the C{parse_document} function,
+using the C{docutils.core.publish_string()} method, with the following
+helpers:
+
+ - An L{_EpydocReader} is used to capture all error messages as it
+ parses the docstring.
+ - A L{_DocumentPseudoWriter} is used to extract the document itself,
+ without actually writing any output. The document is saved for
+ further processing. The settings for the writer are copied from
+ C{docutils.writers.html4css1.Writer}, since those settings will
+ be used when we actually write the docstring to html.
+
+Using C{ParsedRstDocstring}s
+============================
+
+C{ParsedRstDocstring}s support all of the methods defined by
+C{ParsedDocstring}; but only the following four methods have
+non-default behavior:
+
+ - L{to_html()<ParsedRstDocstring.to_html>} uses an
+ L{_EpydocHTMLTranslator} to translate the C{ParsedRstDocstring}'s
+ document into an HTML segment.
+ - L{split_fields()<ParsedRstDocstring.split_fields>} uses a
+ L{_SplitFieldsTranslator} to divide the C{ParsedRstDocstring}'s
+ document into its main body and its fields. Special handling
+ is done to account for consolidated fields.
+ - L{summary()<ParsedRstDocstring.summary>} uses a
+ L{_SummaryExtractor} to extract the first sentence from
+ the C{ParsedRstDocstring}'s document.
+ - L{to_plaintext()<ParsedRstDocstring.to_plaintext>} uses
+ C{document.astext()} to convert the C{ParsedRstDocstring}'s
+ document to plaintext.
+
+@todo: Add ParsedRstDocstring.to_latex()
+@var CONSOLIDATED_FIELDS: A dictionary encoding the set of
+'consolidated fields' that can be used. Each consolidated field is
+marked by a single tag, and contains a single bulleted list, where
+each list item starts with an identifier, marked as interpreted text
+(C{`...`}). This module automatically splits these consolidated
+fields into individual fields. The keys of C{CONSOLIDATED_FIELDS} are
+the names of possible consolidated fields; and the values are the
+names of the field tags that should be used for individual entries in
+the list.
+"""
+__docformat__ = 'epytext en'
+
+# Imports
+import re, os, os.path
+from xml.dom.minidom import *
+
+from docutils.core import publish_string
+from docutils.writers import Writer
+from docutils.writers.html4css1 import HTMLTranslator, Writer as HTMLWriter
+from docutils.writers.latex2e import LaTeXTranslator, Writer as LaTeXWriter
+from docutils.readers.standalone import Reader as StandaloneReader
+from docutils.utils import new_document
+from docutils.nodes import NodeVisitor, Text, SkipChildren
+from docutils.nodes import SkipNode, TreeCopyVisitor
+from docutils.frontend import OptionParser
+from docutils.parsers.rst import directives, roles
+import docutils.nodes
+import docutils.transforms.frontmatter
+import docutils.transforms
+import docutils.utils
+
+from epydoc.compat import * # Backwards compatibility
+from epydoc.markup import *
+from epydoc.apidoc import ModuleDoc, ClassDoc
+from epydoc.docwriter.dotgraph import *
+from epydoc.docwriter.xlink import ApiLinkReader
+from epydoc.markup.doctest import doctest_to_html, doctest_to_latex, \
+ HTMLDoctestColorizer
+
+#: A dictionary whose keys are the "consolidated fields" that are
+#: recognized by epydoc; and whose values are the corresponding epydoc
+#: field names that should be used for the individual fields.
+CONSOLIDATED_FIELDS = {
+ 'parameters': 'param',
+ 'arguments': 'arg',
+ 'exceptions': 'except',
+ 'variables': 'var',
+ 'ivariables': 'ivar',
+ 'cvariables': 'cvar',
+ 'groups': 'group',
+ 'types': 'type',
+ 'keywords': 'keyword',
+ }
+
+#: A list of consolidated fields whose bodies may be specified using a
+#: definition list, rather than a bulleted list. For these fields, the
+#: 'classifier' for each term in the definition list is translated into
+#: a @type field.
+CONSOLIDATED_DEFLIST_FIELDS = ['param', 'arg', 'var', 'ivar', 'cvar', 'keyword']
+
+def parse_docstring(docstring, errors, **options):
+ """
+ Parse the given docstring, which is formatted using
+ ReStructuredText; and return a L{ParsedDocstring} representation
+ of its contents.
+ @param docstring: The docstring to parse
+ @type docstring: C{string}
+ @param errors: A list where any errors generated during parsing
+ will be stored.
+ @type errors: C{list} of L{ParseError}
+ @param options: Extra options. Unknown options are ignored.
+ Currently, no extra options are defined.
+ @rtype: L{ParsedDocstring}
+ """
+ writer = _DocumentPseudoWriter()
+ reader = _EpydocReader(errors) # Outputs errors to the list.
+ publish_string(docstring, writer=writer, reader=reader,
+ settings_overrides={'report_level':10000,
+ 'halt_level':10000,
+ 'warning_stream':None})
+ return ParsedRstDocstring(writer.document)
+
+class OptimizedReporter(docutils.utils.Reporter):
+ """A reporter that ignores all debug messages. This is used to
+ shave a couple of seconds off epydoc's run time, since docutils
+ isn't very fast about processing its own debug messages."""
+ def debug(self, *args, **kwargs): pass
+
+class ParsedRstDocstring(ParsedDocstring):
+ """
+ An encoded version of a ReStructuredText docstring. The contents
+ of the docstring are encoded in the L{_document} instance
+ variable.
+
+ @ivar _document: A ReStructuredText document, encoding the
+ docstring.
+ @type _document: C{docutils.nodes.document}
+ """
+ def __init__(self, document):
+ """
+ @type document: C{docutils.nodes.document}
+ """
+ self._document = document
+
+ # The default document reporter and transformer are not
+ # pickle-able; so replace them with stubs that are.
+ document.reporter = OptimizedReporter(
+ document.reporter.source, 'SEVERE', 'SEVERE', '')
+ document.transformer = docutils.transforms.Transformer(document)
+
+ def split_fields(self, errors=None):
+ # Inherit docs
+ if errors is None: errors = []
+ visitor = _SplitFieldsTranslator(self._document, errors)
+ self._document.walk(visitor)
+ if len(self._document.children) > 0:
+ return self, visitor.fields
+ else:
+ return None, visitor.fields
+
+ def summary(self):
+ # Inherit docs
+ visitor = _SummaryExtractor(self._document)
+ try: self._document.walk(visitor)
+ except docutils.nodes.NodeFound: pass
+ return visitor.summary, bool(visitor.other_docs)
+
+# def concatenate(self, other):
+# result = self._document.copy()
+# for child in (self._document.get_children() +
+# other._document.get_children()):
+# visitor = TreeCopyVisitor(self._document)
+# child.walkabout(visitor)
+# result.append(visitor.get_tree_copy())
+# return ParsedRstDocstring(result)
+
+ def to_html(self, docstring_linker, directory=None,
+ docindex=None, context=None, **options):
+ # Inherit docs
+ visitor = _EpydocHTMLTranslator(self._document, docstring_linker,
+ directory, docindex, context)
+ self._document.walkabout(visitor)
+ return ''.join(visitor.body)
+
+ def to_latex(self, docstring_linker, **options):
+ # Inherit docs
+ visitor = _EpydocLaTeXTranslator(self._document, docstring_linker)
+ self._document.walkabout(visitor)
+ return ''.join(visitor.body)
+
+ def to_plaintext(self, docstring_linker, **options):
+ # This should be replaced by something better:
+ return self._document.astext()
+
+ def __repr__(self): return '<ParsedRstDocstring: ...>'
+
+ def index_terms(self):
+ visitor = _TermsExtractor(self._document)
+ self._document.walkabout(visitor)
+ return visitor.terms
+
+class _EpydocReader(ApiLinkReader):
+ """
+ A reader that captures all errors that are generated by parsing,
+ and appends them to a list.
+ """
+ # Remove the DocInfo transform, to ensure that :author: fields are
+ # correctly handled. This needs to be handled differently
+ # depending on the version of docutils that's being used, because
+ # the default_transforms attribute was deprecated & replaced by
+ # get_transforms().
+ version = [int(v) for v in docutils.__version__.split('.')]
+ version += [ 0 ] * (3 - len(version))
+ if version < [0,4,0]:
+ default_transforms = list(ApiLinkReader.default_transforms)
+ try: default_transforms.remove(docutils.transforms.frontmatter.DocInfo)
+ except ValueError: pass
+ else:
+ def get_transforms(self):
+ return [t for t in ApiLinkReader.get_transforms(self)
+ if t != docutils.transforms.frontmatter.DocInfo]
+ del version
+
+ def __init__(self, errors):
+ self._errors = errors
+ ApiLinkReader.__init__(self)
+
+ def new_document(self):
+ document = new_document(self.source.source_path, self.settings)
+ # Capture all warning messages.
+ document.reporter.attach_observer(self.report)
+ # These are used so we know how to encode warning messages:
+ self._encoding = document.reporter.encoding
+ self._error_handler = document.reporter.error_handler
+ # Return the new document.
+ return document
+
+ def report(self, error):
+ try: is_fatal = int(error['level']) > 2
+ except: is_fatal = 1
+ try: linenum = int(error['line'])
+ except: linenum = None
+
+ msg = ''.join([c.astext().encode(self._encoding, self._error_handler)
+ for c in error])
+
+ self._errors.append(ParseError(msg, linenum, is_fatal))
+
+class _DocumentPseudoWriter(Writer):
+ """
+ A pseudo-writer for the docutils framework, that can be used to
+ access the document itself. The output of C{_DocumentPseudoWriter}
+ is just an empty string; but after it has been used, the most
+ recently processed document is available as the instance variable
+ C{document}.
+
+ @type document: C{docutils.nodes.document}
+ @ivar document: The most recently processed document.
+ """
+ def __init__(self):
+ self.document = None
+ Writer.__init__(self)
+
+ def translate(self):
+ self.output = ''
+
+class _SummaryExtractor(NodeVisitor):
+ """
+ A docutils node visitor that extracts the first sentence from
+ the first paragraph in a document.
+ """
+ def __init__(self, document):
+ NodeVisitor.__init__(self, document)
+ self.summary = None
+ self.other_docs = None
+
+ def visit_document(self, node):
+ self.summary = None
+
+ _SUMMARY_RE = re.compile(r'(\s*[\w\W]*?\.)(\s|$)')
+ def visit_paragraph(self, node):
+ if self.summary is not None:
+ # found a paragraph after the first one
+ self.other_docs = True
+ raise docutils.nodes.NodeFound('Found summary')
+
+ summary_pieces = []
+
+ # Extract the first sentence.
+ for child in node:
+ if isinstance(child, docutils.nodes.Text):
+ m = self._SUMMARY_RE.match(child.data)
+ if m:
+ summary_pieces.append(docutils.nodes.Text(m.group(1)))
+ other = child.data[m.end():]
+ if other and not other.isspace():
+ self.other_docs = True
+ break
+ summary_pieces.append(child)
+
+ summary_doc = self.document.copy() # shallow copy
+ summary_para = node.copy() # shallow copy
+ summary_doc[:] = [summary_para]
+ summary_para[:] = summary_pieces
+ self.summary = ParsedRstDocstring(summary_doc)
+
+ def visit_field(self, node):
+ raise SkipNode
+
+ def unknown_visit(self, node):
+ 'Ignore all unknown nodes'
+
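The first-sentence heuristic above hinges on `_SUMMARY_RE`, which matches non-greedily up to the first period followed by whitespace or end-of-string. A quick check of its behavior:

```python
import re

_SUMMARY_RE = re.compile(r'(\s*[\w\W]*?\.)(\s|$)')

m = _SUMMARY_RE.match('Parse the docstring. Remaining details follow.')
print(m.group(1))  # -> Parse the docstring.

# Abbreviation-like periods still end the summary, since the regex
# only looks for a period followed by whitespace.
m2 = _SUMMARY_RE.match('Uses the std. library heavily.')
print(m2.group(1))  # -> Uses the std.
```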
+class _TermsExtractor(NodeVisitor):
+ """
+ A docutils node visitor that extracts the terms from documentation.
+
+ Terms are created using the C{:term:} interpreted text role.
+ """
+ def __init__(self, document):
+ NodeVisitor.__init__(self, document)
+
+ self.terms = None
+ """
+ The terms currently found.
+ @type: C{list}
+ """
+
+ def visit_document(self, node):
+ self.terms = []
+ self._in_term = False
+
+ def visit_emphasis(self, node):
+ if 'term' in node.get('classes'):
+ self._in_term = True
+
+ def depart_emphasis(self, node):
+ if 'term' in node.get('classes'):
+ self._in_term = False
+
+ def visit_Text(self, node):
+ if self._in_term:
+ doc = self.document.copy()
+ doc[:] = [node.copy()]
+ self.terms.append(ParsedRstDocstring(doc))
+
+ def unknown_visit(self, node):
+ 'Ignore all unknown nodes'
+
+ def unknown_departure(self, node):
+ 'Ignore all unknown nodes'
+
+class _SplitFieldsTranslator(NodeVisitor):
+ """
+ A docutils translator that removes all fields from a document, and
+ collects them into the instance variable C{fields}
+
+ @ivar fields: The fields of the most recently walked document.
+ @type fields: C{list} of L{Field<markup.Field>}
+ """
+
+ ALLOW_UNMARKED_ARG_IN_CONSOLIDATED_FIELD = True
+ """If true, then consolidated fields are not required to mark
+ arguments with C{`backticks`}. (This is currently only
+ implemented for consolidated fields expressed as definition lists;
+ consolidated fields expressed as unordered lists still require
+ backticks for now.)"""
+
+ def __init__(self, document, errors):
+ NodeVisitor.__init__(self, document)
+ self._errors = errors
+ self.fields = []
+ self._newfields = {}
+
+ def visit_document(self, node):
+ self.fields = []
+
+ def visit_field(self, node):
+ # Remove the field from the tree.
+ node.parent.remove(node)
+
+ # Extract the field name & optional argument
+ tag = node[0].astext().split(None, 1)
+ tagname = tag[0]
+ if len(tag)>1: arg = tag[1]
+ else: arg = None
+
+ # Handle special fields:
+ fbody = node[1]
+ if arg is None:
+ for (list_tag, entry_tag) in CONSOLIDATED_FIELDS.items():
+ if tagname.lower() == list_tag:
+ try:
+ self.handle_consolidated_field(fbody, entry_tag)
+ return
+ except ValueError, e:
+ estr = 'Unable to split consolidated field '
+ estr += '"%s" - %s' % (tagname, e)
+ self._errors.append(ParseError(estr, node.line,
+ is_fatal=0))
+
+ # Use a @newfield to let it be displayed as-is.
+ if tagname.lower() not in self._newfields:
+ newfield = Field('newfield', tagname.lower(),
+ parse(tagname, 'plaintext'))
+ self.fields.append(newfield)
+ self._newfields[tagname.lower()] = 1
+
+ self._add_field(tagname, arg, fbody)
+
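The field-name parsing above splits on the first run of whitespace only, so the optional argument may itself contain spaces. A standalone sketch of the same split (`split_field_tag` is a hypothetical helper name):

```python
def split_field_tag(text):
    # Mirrors node[0].astext().split(None, 1) in visit_field:
    # the first token is the tag, the (optional) rest is the argument.
    parts = text.split(None, 1)
    return parts[0], (parts[1] if len(parts) > 1 else None)

print(split_field_tag('param x'))   # -> ('param', 'x')
print(split_field_tag('returns'))   # -> ('returns', None)
```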
+ def _add_field(self, tagname, arg, fbody):
+ field_doc = self.document.copy()
+ for child in fbody: field_doc.append(child)
+ field_pdoc = ParsedRstDocstring(field_doc)
+ self.fields.append(Field(tagname, arg, field_pdoc))
+
+ def visit_field_list(self, node):
+ # Remove the field list from the tree. The visitor will still walk
+ # over the node's children.
+ node.parent.remove(node)
+
+ def handle_consolidated_field(self, body, tagname):
+ """
+ Attempt to handle a consolidated section.
+ """
+ if len(body) != 1:
+ raise ValueError('does not contain a single list.')
+ elif body[0].tagname == 'bullet_list':
+ self.handle_consolidated_bullet_list(body[0], tagname)
+ elif (body[0].tagname == 'definition_list' and
+ tagname in CONSOLIDATED_DEFLIST_FIELDS):
+ self.handle_consolidated_definition_list(body[0], tagname)
+ elif tagname in CONSOLIDATED_DEFLIST_FIELDS:
+ raise ValueError('does not contain a bulleted list or '
+ 'definition list.')
+ else:
+ raise ValueError('does not contain a bulleted list.')
+
+ def handle_consolidated_bullet_list(self, items, tagname):
+ # Check the contents of the list. In particular, each list
+ # item should have the form:
+ # - `arg`: description...
+ n = 0
+ _BAD_ITEM = ("list item %d is not well formed. Each item must "
+ "consist of a single marked identifier (e.g., `x`), "
+ "optionally followed by a colon or dash and a "
+ "description.")
+ for item in items:
+ n += 1
+ if item.tagname != 'list_item' or len(item) == 0:
+ raise ValueError('bad bulleted list (bad child %d).' % n)
+ if item[0].tagname != 'paragraph':
+ if item[0].tagname == 'definition_list':
+ raise ValueError(('list item %d contains a definition '+
+ 'list (it\'s probably indented '+
+ 'wrong).') % n)
+ else:
+ raise ValueError(_BAD_ITEM % n)
+ if len(item[0]) == 0:
+ raise ValueError(_BAD_ITEM % n)
+ if item[0][0].tagname != 'title_reference':
+ raise ValueError(_BAD_ITEM % n)
+
+ # Everything looks good; convert to multiple fields.
+ for item in items:
+ # Extract the arg
+ arg = item[0][0].astext()
+
+ # Extract the field body, and remove the arg
+ fbody = item[:]
+ fbody[0] = fbody[0].copy()
+ fbody[0][:] = item[0][1:]
+
+ # Remove the separating ":", if present
+ if (len(fbody[0]) > 0 and
+ isinstance(fbody[0][0], docutils.nodes.Text)):
+ child = fbody[0][0]
+ if child.data[:1] in ':-':
+ child.data = child.data[1:].lstrip()
+ elif child.data[:2] in (' -', ' :'):
+ child.data = child.data[2:].lstrip()
+
+ # Wrap the field body, and add a new field
+ self._add_field(tagname, arg, fbody)
+
+ def handle_consolidated_definition_list(self, items, tagname):
+ # Check the list contents.
+ n = 0
+ _BAD_ITEM = ("item %d is not well formed. Each item's term must "
+ "consist of a single marked identifier (e.g., `x`), "
+ "optionally followed by a space, colon, space, and "
+ "a type description.")
+ for item in items:
+ n += 1
+ if (item.tagname != 'definition_list_item' or len(item) < 2 or
+ item[0].tagname != 'term' or
+ item[-1].tagname != 'definition'):
+ raise ValueError('bad definition list (bad child %d).' % n)
+ if len(item) > 3:
+ raise ValueError(_BAD_ITEM % n)
+ if not ((item[0][0].tagname == 'title_reference') or
+ (self.ALLOW_UNMARKED_ARG_IN_CONSOLIDATED_FIELD and
+ isinstance(item[0][0], docutils.nodes.Text))):
+ raise ValueError(_BAD_ITEM % n)
+ for child in item[0][1:]:
+ if child.astext() != '':
+ raise ValueError(_BAD_ITEM % n)
+
+ # Extract it.
+ for item in items:
+ # The basic field.
+ arg = item[0][0].astext()
+ fbody = item[-1]
+ self._add_field(tagname, arg, fbody)
+ # If there's a classifier, treat it as a type.
+ if len(item) == 3:
+ type_descr = item[1]
+ self._add_field('type', arg, type_descr)
+
+ def unknown_visit(self, node):
+ 'Ignore all unknown nodes'
+
+def latex_head_prefix():
+ document = new_document('<fake>')
+ translator = _EpydocLaTeXTranslator(document, None)
+ return translator.head_prefix
+
+class _EpydocLaTeXTranslator(LaTeXTranslator):
+ settings = None
+ def __init__(self, document, docstring_linker):
+ # Set the document's settings.
+ if self.settings is None:
+ settings = OptionParser([LaTeXWriter()]).get_default_values()
+ settings.output_encoding = 'utf-8'
+ self.__class__.settings = settings
+ document.settings = self.settings
+
+ LaTeXTranslator.__init__(self, document)
+ self._linker = docstring_linker
+
+ # Start at section level 3. (Unfortunately, we now have to
+ # set a private variable to make this work; perhaps the standard
+ # latex translator should grow an official way to spell this?)
+ self.section_level = 3
+ self._section_number = [0]*self.section_level
+
+ # Handle interpreted text (crossreferences)
+ def visit_title_reference(self, node):
+ target = self.encode(node.astext())
+ xref = self._linker.translate_identifier_xref(target, target)
+ self.body.append(xref)
+ raise SkipNode()
+
+ def visit_document(self, node): pass
+ def depart_document(self, node): pass
+
+ # For now, just ignore dotgraphs. [XXX]
+ def visit_dotgraph(self, node):
+ log.warning("Ignoring dotgraph in latex output (dotgraph "
+ "rendering for latex not implemented yet).")
+ raise SkipNode()
+
+ def visit_doctest_block(self, node):
+ self.body.append(doctest_to_latex(node[0].astext()))
+ raise SkipNode()
+
+class _EpydocHTMLTranslator(HTMLTranslator):
+ settings = None
+ def __init__(self, document, docstring_linker, directory,
+ docindex, context):
+ self._linker = docstring_linker
+ self._directory = directory
+ self._docindex = docindex
+ self._context = context
+
+ # Set the document's settings.
+ if self.settings is None:
+ settings = OptionParser([HTMLWriter()]).get_default_values()
+ self.__class__.settings = settings
+ document.settings = self.settings
+
+ # Call the parent constructor.
+ HTMLTranslator.__init__(self, document)
+
+ # Handle interpreted text (crossreferences)
+ def visit_title_reference(self, node):
+ target = self.encode(node.astext())
+ xref = self._linker.translate_identifier_xref(target, target)
+ self.body.append(xref)
+ raise SkipNode()
+
+ def should_be_compact_paragraph(self, node):
+ if self.document.children == [node]:
+ return True
+ else:
+ return HTMLTranslator.should_be_compact_paragraph(self, node)
+
+ def visit_document(self, node): pass
+ def depart_document(self, node): pass
+
+ def starttag(self, node, tagname, suffix='\n', **attributes):
+ """
+ This modified version of starttag makes a few changes to HTML
+ tags, to prevent them from conflicting with epydoc. In particular:
+ - existing class attributes are prefixed with C{'rst-'}
+ - existing names are prefixed with C{'rst-'}
+ - hrefs starting with C{'#'} are prefixed with C{'rst-'}
+ - hrefs not starting with C{'#'} are given target='_top'
+ - all headings (C{<hM{n}>}) are given the css class C{'heading'}
+ """
+ # Get the list of all attribute dictionaries we need to munge.
+ attr_dicts = [attributes]
+ if isinstance(node, docutils.nodes.Node):
+ attr_dicts.append(node.attributes)
+ if isinstance(node, dict):
+ attr_dicts.append(node)
+ # Munge each attribute dictionary. Unfortunately, we need to
+ # iterate through attributes one at a time because some
+ # versions of docutils don't case-normalize attributes.
+ for attr_dict in attr_dicts:
+ for (key, val) in attr_dict.items():
+ # Prefix all CSS classes with "rst-"; and prefix all
+ # names with "rst-" to avoid conflicts.
+ if key.lower() in ('class', 'id', 'name'):
+ attr_dict[key] = 'rst-%s' % val
+ elif key.lower() in ('classes', 'ids', 'names'):
+ attr_dict[key] = ['rst-%s' % cls for cls in val]
+ elif key.lower() == 'href':
+ if attr_dict[key][:1]=='#':
+ attr_dict[key] = '#rst-%s' % attr_dict[key][1:]
+ else:
+ # If it's an external link, open it in a new
+ # page.
+ attr_dict['target'] = '_top'
+
+ # For headings, use class="heading"
+ if re.match(r'^h\d+$', tagname):
+ attributes['class'] = ' '.join([attributes.get('class',''),
+ 'heading']).strip()
+
+ return HTMLTranslator.starttag(self, node, tagname, suffix,
+ **attributes)
+
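The attribute munging in `starttag` can be isolated from docutils; a simplified sketch of the prefixing rules (`munge_attrs` is an illustrative name, not part of epydoc):

```python
def munge_attrs(attr_dict):
    # Prefix CSS classes/ids/names with "rst-"; rewrite local hrefs
    # to the prefixed anchors, and send external links to _top.
    for key, val in list(attr_dict.items()):
        if key.lower() in ('class', 'id', 'name'):
            attr_dict[key] = 'rst-%s' % val
        elif key.lower() in ('classes', 'ids', 'names'):
            attr_dict[key] = ['rst-%s' % cls for cls in val]
        elif key.lower() == 'href':
            if val[:1] == '#':
                attr_dict[key] = '#rst-%s' % val[1:]
            else:
                attr_dict['target'] = '_top'
    return attr_dict

print(munge_attrs({'class': 'note', 'href': '#intro'}))
# -> {'class': 'rst-note', 'href': '#rst-intro'}
print(munge_attrs({'href': 'http://epydoc.sf.net'}))
# -> {'href': 'http://epydoc.sf.net', 'target': '_top'}
```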
+ def visit_dotgraph(self, node):
+ if self._directory is None: return # [xx] warning?
+
+ # Generate the graph.
+ graph = node.graph(self._docindex, self._context, self._linker)
+ if graph is None: return
+
+ # Write the graph.
+ image_url = '%s.gif' % graph.uid
+ image_file = os.path.join(self._directory, image_url)
+ self.body.append(graph.to_html(image_file, image_url))
+ raise SkipNode()
+
+ def visit_doctest_block(self, node):
+ pysrc = node[0].astext()
+ if node.get('codeblock'):
+ self.body.append(HTMLDoctestColorizer().colorize_codeblock(pysrc))
+ else:
+ self.body.append(doctest_to_html(pysrc))
+ raise SkipNode()
+
+ def visit_emphasis(self, node):
+ # Generate a correct index term anchor
+ if 'term' in node.get('classes') and node.children:
+ doc = self.document.copy()
+ doc[:] = [node.children[0].copy()]
+ self.body.append(
+ self._linker.translate_indexterm(ParsedRstDocstring(doc)))
+ raise SkipNode()
+
+ HTMLTranslator.visit_emphasis(self, node)
+
+def python_code_directive(name, arguments, options, content, lineno,
+ content_offset, block_text, state, state_machine):
+ """
+ A custom restructuredtext directive which can be used to display
+ syntax-highlighted Python code blocks. This directive takes no
+ arguments, and the body should contain only Python code. This
+ directive can be used instead of doctest blocks when it is
+ inconvenient to list prompts on each line, or when you would
+ prefer that the output not contain prompts (e.g., to make
+ copy/paste easier).
+ """
+ required_arguments = 0
+ optional_arguments = 0
+
+ text = '\n'.join(content)
+ node = docutils.nodes.doctest_block(text, text, codeblock=True)
+ return [ node ]
+
+python_code_directive.arguments = (0, 0, 0)
+python_code_directive.content = True
+
+directives.register_directive('python', python_code_directive)
+
+def term_role(name, rawtext, text, lineno, inliner,
+ options={}, content=[]):
+
+ text = docutils.utils.unescape(text)
+ node = docutils.nodes.emphasis(rawtext, text, **options)
+ node.attributes['classes'].append('term')
+
+ return [node], []
+
+roles.register_local_role('term', term_role)
+
+######################################################################
+#{ Graph Generation Directives
+######################################################################
+# See http://docutils.sourceforge.net/docs/howto/rst-directives.html
+
+class dotgraph(docutils.nodes.image):
+ """
+ A custom docutils node that should be rendered using Graphviz dot.
+ This node does not directly store the graph; instead, it stores a
+ pointer to a function that can be used to generate the graph.
+ This allows the graph to be built based on information that might
+ not be available yet at parse time. This graph generation
+ function has the following signature:
+
+ >>> def generate_graph(docindex, context, linker, *args):
+ ... 'generates and returns a new DotGraph'
+
+ Where C{docindex} is a docindex containing the documentation that
+ epydoc has built; C{context} is the C{APIDoc} whose docstring
+ contains this dotgraph node; C{linker} is a L{DocstringLinker}
+ that can be used to resolve crossreferences; and C{args} is any
+ extra arguments that are passed to the C{dotgraph} constructor.
+ """
+ def __init__(self, generate_graph_func, *generate_graph_args):
+ docutils.nodes.image.__init__(self)
+ self.graph_func = generate_graph_func
+ self.args = generate_graph_args
+ def graph(self, docindex, context, linker):
+ return self.graph_func(docindex, context, linker, *self.args)
+
+def _dir_option(argument):
+ """A directive option spec for the orientation of a graph."""
+ argument = argument.lower().strip()
+ if argument == 'right': return 'LR'
+ if argument == 'left': return 'RL'
+ if argument == 'down': return 'TB'
+ if argument == 'up': return 'BT'
+ raise ValueError('%r unknown; choose from left, right, up, down' %
+ argument)
+
+def digraph_directive(name, arguments, options, content, lineno,
+ content_offset, block_text, state, state_machine):
+ """
+ A custom restructuredtext directive which can be used to display
+ Graphviz dot graphs. This directive takes a single argument,
+ which is used as the graph's name. The contents of the directive
+ are used as the body of the graph. Any href attributes whose
+ value has the form <name> will be replaced by the URL of the object
+ with that name. Here's a simple example::
+
+ .. digraph:: example_digraph
+ a -> b -> c
+ c -> a [dir=\"none\"]
+ """
+ if arguments: title = arguments[0]
+ else: title = ''
+ return [ dotgraph(_construct_digraph, title, options.get('caption'),
+ '\n'.join(content)) ]
+digraph_directive.arguments = (0, 1, True)
+digraph_directive.options = {'caption': directives.unchanged}
+digraph_directive.content = True
+directives.register_directive('digraph', digraph_directive)
+
+def _construct_digraph(docindex, context, linker, title, caption,
+ body):
+ """Graph generator for L{digraph_directive}"""
+ graph = DotGraph(title, body, caption=caption)
+ graph.link(linker)
+ return graph
+
+def classtree_directive(name, arguments, options, content, lineno,
+ content_offset, block_text, state, state_machine):
+ """
+ A custom restructuredtext directive which can be used to
+ graphically display a class hierarchy. If one or more arguments
+ are given, then those classes and all their descendants will be
+ displayed. If no arguments are given, and the directive is in a
+ class's docstring, then that class and all its descendants will be
+ displayed. It is an error to use this directive with no arguments
+ in a non-class docstring.
+
+ Options:
+ - C{:dir:} -- Specifies the orientation of the graph. One of
+ C{down}, C{right} (default), C{left}, C{up}.
+ """
+ return [ dotgraph(_construct_classtree, arguments, options) ]
+classtree_directive.arguments = (0, 1, True)
+classtree_directive.options = {'dir': _dir_option}
+classtree_directive.content = False
+directives.register_directive('classtree', classtree_directive)
+
+def _construct_classtree(docindex, context, linker, arguments, options):
+ """Graph generator for L{classtree_directive}"""
+ if len(arguments) == 1:
+ bases = [docindex.find(name, context) for name in
+ arguments[0].replace(',',' ').split()]
+ bases = [d for d in bases if isinstance(d, ClassDoc)]
+ elif isinstance(context, ClassDoc):
+ bases = [context]
+ else:
+ log.warning("Could not construct class tree: you must "
+ "specify one or more base classes.")
+ return None
+
+ return class_tree_graph(bases, linker, context, **options)
+
+def packagetree_directive(name, arguments, options, content, lineno,
+ content_offset, block_text, state, state_machine):
+ """
+ A custom restructuredtext directive which can be used to
+ graphically display a package hierarchy. If one or more arguments
+ are given, then those packages and all their submodules will be
+ displayed. If no arguments are given, and the directive is in a
+ package's docstring, then that package and all its submodules will
+ be displayed. It is an error to use this directive with no
+ arguments in a non-package docstring.
+
+ Options:
+ - C{:dir:} -- Specifies the orientation of the graph. One of
+ C{down}, C{right} (default), C{left}, C{up}.
+ """
+ return [ dotgraph(_construct_packagetree, arguments, options) ]
+packagetree_directive.arguments = (0, 1, True)
+packagetree_directive.options = {
+ 'dir': _dir_option,
+ 'style': lambda a:directives.choice(a.lower(), ('uml', 'tree'))}
+packagetree_directive.content = False
+directives.register_directive('packagetree', packagetree_directive)
+
+def _construct_packagetree(docindex, context, linker, arguments, options):
+ """Graph generator for L{packagetree_directive}"""
+ if len(arguments) == 1:
+ packages = [docindex.find(name, context) for name in
+ arguments[0].replace(',',' ').split()]
+ packages = [d for d in packages if isinstance(d, ModuleDoc)]
+ elif isinstance(context, ModuleDoc):
+ packages = [context]
+ else:
+ log.warning("Could not construct package tree: you must "
+ "specify one or more root packages.")
+ return None
+
+ return package_tree_graph(packages, linker, context, **options)
+
+def importgraph_directive(name, arguments, options, content, lineno,
+ content_offset, block_text, state, state_machine):
+ return [ dotgraph(_construct_importgraph, arguments, options) ]
+importgraph_directive.arguments = (0, 1, True)
+importgraph_directive.options = {'dir': _dir_option}
+importgraph_directive.content = False
+directives.register_directive('importgraph', importgraph_directive)
+
+def _construct_importgraph(docindex, context, linker, arguments, options):
+ """Graph generator for L{importgraph_directive}"""
+ if len(arguments) == 1:
+ modules = [ docindex.find(name, context)
+ for name in arguments[0].replace(',',' ').split() ]
+ modules = [d for d in modules if isinstance(d, ModuleDoc)]
+ else:
+ modules = [d for d in docindex.root if isinstance(d, ModuleDoc)]
+
+ return import_graph(modules, docindex, linker, context, **options)
+
+def callgraph_directive(name, arguments, options, content, lineno,
+ content_offset, block_text, state, state_machine):
+ return [ dotgraph(_construct_callgraph, arguments, options) ]
+callgraph_directive.arguments = (0, 1, True)
+callgraph_directive.options = {'dir': _dir_option,
+ 'add_callers': directives.flag,
+ 'add_callees': directives.flag}
+callgraph_directive.content = False
+directives.register_directive('callgraph', callgraph_directive)
+
+def _construct_callgraph(docindex, context, linker, arguments, options):
+ """Graph generator for L{callgraph_directive}"""
+ if len(arguments) == 1:
+ docs = [docindex.find(name, context) for name in
+ arguments[0].replace(',',' ').split()]
+ docs = [doc for doc in docs if doc is not None]
+ else:
+ docs = [context]
+ return call_graph(docs, docindex, linker, context, **options)
+
diff --git a/python/helpers/epydoc/util.py b/python/helpers/epydoc/util.py
new file mode 100644
index 0000000..85f3102
--- /dev/null
+++ b/python/helpers/epydoc/util.py
@@ -0,0 +1,289 @@
+# epydoc -- Utility functions
+#
+# Copyright (C) 2005 Edward Loper
+# Author: Edward Loper <[email protected]>
+# URL: <http://epydoc.sf.net>
+#
+# $Id: util.py 1671 2008-01-29 02:55:49Z edloper $
+
+"""
+Miscellaneous utility functions that are used by multiple modules.
+
+@group Python source types: is_module_file, is_package_dir, is_pyname,
+ py_src_filename
+@group Text processing: wordwrap, decode_with_backslashreplace,
+ plaintext_to_html
+"""
+__docformat__ = 'epytext en'
+
+import os, os.path, re
+
+######################################################################
+## Python Source Types
+######################################################################
+
+PY_SRC_EXTENSIONS = ['.py', '.pyw']
+PY_BIN_EXTENSIONS = ['.pyc', '.so', '.pyd']
+
+def is_module_file(path):
+ # Make sure it's a file name.
+ if not isinstance(path, basestring):
+ return False
+ (dir, filename) = os.path.split(path)
+ (basename, extension) = os.path.splitext(filename)
+ return (os.path.isfile(path) and
+ re.match('[a-zA-Z_]\w*$', basename) and
+ extension in PY_SRC_EXTENSIONS+PY_BIN_EXTENSIONS)
+
+def is_src_filename(filename):
+ if not isinstance(filename, basestring): return False
+ if not os.path.exists(filename): return False
+ return os.path.splitext(filename)[1] in PY_SRC_EXTENSIONS
+
+def is_package_dir(dirname):
+ """
+ Return true if the given directory is a valid package directory
+ (i.e., it names a directory that contains a valid __init__ file,
+ and its name is a valid identifier).
+ """
+ # Make sure it's a directory name.
+ if not isinstance(dirname, basestring):
+ return False
+ if not os.path.isdir(dirname):
+ return False
+ dirname = os.path.abspath(dirname)
+ # Make sure it's a valid identifier. (Special case for
+ # "foo/", where os.path.split -> ("foo", "").)
+ (parent, dir) = os.path.split(dirname)
+ if dir == '': (parent, dir) = os.path.split(parent)
+
+ # The following constraint was removed because of sourceforge
+ # bug #1787028 -- in some cases (eg eggs), it's too strict.
+ #if not re.match('\w+$', dir):
+ # return False
+
+ for name in os.listdir(dirname):
+ filename = os.path.join(dirname, name)
+ if name.startswith('__init__.') and is_module_file(filename):
+ return True
+ else:
+ return False
+
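Note the `for ... else` above: the `else` clause runs only when the loop completes without hitting `return True`, i.e. when no `__init__` module was found. A minimal illustration of the same control flow:

```python
def contains_init(names):
    # Same shape as the loop in is_package_dir: succeed on the first
    # __init__.* entry, fall through to the else when none is found.
    for name in names:
        if name.startswith('__init__.'):
            return True
    else:
        return False

print(contains_init(['__init__.py', 'util.py']))  # -> True
print(contains_init(['util.py']))                 # -> False
```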
+def is_pyname(name):
+ return re.match(r"\w+(\.\w+)*$", name)
+
+def py_src_filename(filename):
+ basefile, extension = os.path.splitext(filename)
+ if extension in PY_SRC_EXTENSIONS:
+ return filename
+ else:
+ for ext in PY_SRC_EXTENSIONS:
+ if os.path.isfile('%s%s' % (basefile, ext)):
+ return '%s%s' % (basefile, ext)
+ else:
+ raise ValueError('Could not find a corresponding '
+ 'Python source file for %r.' % filename)
+
+def munge_script_name(filename):
+ name = os.path.split(filename)[1]
+ name = re.sub(r'\W', '_', name)
+ return 'script-'+name
+
+######################################################################
+## Text Processing
+######################################################################
+
+def decode_with_backslashreplace(s):
+ r"""
+ Convert the given 8-bit string into unicode, treating any
+ character c such that ord(c)<128 as an ascii character, and
+ converting any c such that ord(c)>128 into a backslashed escape
+ sequence.
+
+ >>> decode_with_backslashreplace('abc\xff\xe8')
+ u'abc\\xff\\xe8'
+ """
+ # s.encode('string-escape') is not appropriate here, since it
+ # also adds backslashes to some ascii chars (eg \ and ').
+ assert isinstance(s, str)
+ return (s
+ .decode('latin1')
+ .encode('ascii', 'backslashreplace')
+ .decode('ascii'))
+
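The function above is Python 2 code (`str` is a byte string there). Under Python 3 the same transformation can be expressed over `bytes`; a sketch:

```python
def decode_with_backslashreplace(b):
    # bytes -> str: keep ASCII bytes as-is, turn every byte >= 0x80
    # into a literal \xNN escape sequence.
    assert isinstance(b, bytes)
    return (b.decode('latin1')
             .encode('ascii', 'backslashreplace')
             .decode('ascii'))

print(decode_with_backslashreplace(b'abc\xff\xe8'))  # -> abc\xff\xe8
```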
+def wordwrap(str, indent=0, right=75, startindex=0, splitchars=''):
+ """
+ Word-wrap the given string. I.e., add newlines to the string such
+ that any lines that are longer than C{right} are broken into
+ shorter lines (at the first whitespace sequence that occurs before
+ index C{right}). If the given string contains newlines, they will
+ I{not} be removed. Any lines that begin with whitespace will not
+ be wordwrapped.
+
+ @param indent: If specified, then indent each line by this number
+ of spaces.
+ @type indent: C{int}
+ @param right: The right margin for word wrapping. Lines that are
+ longer than C{right} will be broken at the first whitespace
+ sequence before the right margin.
+ @type right: C{int}
+ @param startindex: If specified, then assume that the first line
+ is already preceded by C{startindex} characters.
+ @type startindex: C{int}
+ @param splitchars: A list of non-whitespace characters which can
+ be used to split a line. (E.g., use '/\\' to allow path names
+ to be split over multiple lines.)
+ @rtype: C{str}
+ """
+ if splitchars:
+ chunks = re.split(r'( +|\n|[^ \n%s]*[%s])' %
+ (re.escape(splitchars), re.escape(splitchars)),
+ str.expandtabs())
+ else:
+ chunks = re.split(r'( +|\n)', str.expandtabs())
+ result = [' '*(indent-startindex)]
+ charindex = max(indent, startindex)
+ for chunknum, chunk in enumerate(chunks):
+ if (charindex+len(chunk) > right and charindex > 0) or chunk == '\n':
+ result.append('\n' + ' '*indent)
+ charindex = indent
+ if chunk[:1] not in ('\n', ' '):
+ result.append(chunk)
+ charindex += len(chunk)
+ else:
+ result.append(chunk)
+ charindex += len(chunk)
+ return ''.join(result).rstrip()+'\n'
+
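For the common case (no `splitchars`, no `startindex`), the standard library's `textwrap` gives a comparable result, though it is not a drop-in replacement (it treats pre-indented lines and embedded newlines differently):

```python
import textwrap

text = 'the quick brown fox jumps over the lazy dog'
# Roughly wordwrap(text, indent=4, right=20), via the stdlib.
wrapped = textwrap.fill(text, width=20,
                        initial_indent=' ' * 4,
                        subsequent_indent=' ' * 4)
print(wrapped)
```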
+def plaintext_to_html(s):
+ """
+ @return: An HTML string that encodes the given plaintext string.
+ In particular, special characters (such as C{'<'} and C{'&'})
+ are escaped.
+ @rtype: C{string}
+ """
+ s = s.replace('&', '&amp;').replace('"', '&quot;')
+ s = s.replace('<', '&lt;').replace('>', '&gt;')
+ return s
+
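The escaping must replace `&` before introducing any entities, otherwise the ampersands of freshly inserted entities would be escaped again. An equivalent standalone sketch:

```python
def plaintext_to_html(s):
    # '&' is handled first; doing it after would turn the '&' of
    # a freshly inserted '&lt;' into '&amp;lt;'.
    s = s.replace('&', '&amp;').replace('"', '&quot;')
    s = s.replace('<', '&lt;').replace('>', '&gt;')
    return s

print(plaintext_to_html('a < b & "c"'))  # -> a &lt; b &amp; &quot;c&quot;
```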
+def plaintext_to_latex(str, nbsp=0, breakany=0):
+ """
+ @return: A LaTeX string that encodes the given plaintext string.
+ In particular, special characters (such as C{'$'} and C{'_'})
+ are escaped, and tabs are expanded.
+ @rtype: C{string}
+ @param breakany: Insert hyphenation marks, so that LaTeX can
+ break the resulting string at any point. This is useful for
+ small boxes (e.g., the type box in the variable list table).
+ @param nbsp: Replace every space with a non-breaking space
+ (C{'~'}).
+ """
+ # These get converted to hyphenation points later
+ if breakany: str = re.sub('(.)', '\\1\1', str)
+
+ # These get converted to \textbackslash later.
+ str = str.replace('\\', '\0')
+
+ # Expand tabs
+ str = str.expandtabs()
+
+ # These elements need to be backslashed.
+ str = re.sub(r'([#$&%_\${}])', r'\\\1', str)
+
+ # These elements have special names.
+ str = str.replace('|', '{\\textbar}')
+ str = str.replace('<', '{\\textless}')
+ str = str.replace('>', '{\\textgreater}')
+ str = str.replace('^', '{\\textasciicircum}')
+ str = str.replace('~', '{\\textasciitilde}')
+ str = str.replace('\0', r'{\textbackslash}')
+
+ # replace spaces with non-breaking spaces
+ if nbsp: str = str.replace(' ', '~')
+
+ # Convert \1's to hyphenation points.
+ if breakany: str = str.replace('\1', r'\-')
+
+ return str
+
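The sentinel dance in plaintext_to_latex matters for ordering: backslashes are stashed as '\0' before the generic backslash-escaping pass, then expanded to {\textbackslash} at the end. A reduced sketch of the same ordering (the character set is abbreviated):

```python
import re

def latex_escape(s):
    s = s.replace('\\', '\0')               # stash literal backslashes first
    s = re.sub(r'([#$&%_{}])', r'\\\1', s)  # now backslash-escape the specials
    return s.replace('\0', r'{\textbackslash}')

latex_escape(r'100% of \alpha_1')  # r'100\% of {\textbackslash}alpha\_1'
```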
+class RunSubprocessError(OSError):
+ def __init__(self, cmd, out, err):
+ OSError.__init__(self, '%s failed' % cmd[0])
+ self.out = out
+ self.err = err
+
+def run_subprocess(cmd, data=None):
+ """
+ Execute the command C{cmd} in a subprocess.
+
+ @param cmd: The command to execute, specified as a list
+ of string.
+ @param data: A string containing data to send to the
+ subprocess.
+ @return: A tuple C{(out, err)}.
+ @raise OSError: If there is any problem executing the
+ command, or if its exitval is not 0.
+ """
+ if isinstance(cmd, basestring):
+ cmd = cmd.split()
+
+ # Under Python 2.4+, use subprocess
+ try:
+ from subprocess import Popen, PIPE
+ pipe = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
+ out, err = pipe.communicate(data)
+ if hasattr(pipe, 'returncode'):
+ if pipe.returncode == 0:
+ return out, err
+ else:
+ raise RunSubprocessError(cmd, out, err)
+ else:
+ # Assume that there was an error iff anything was written
+ # to the child's stderr.
+ if err == '':
+ return out, err
+ else:
+ raise RunSubprocessError(cmd, out, err)
+ except ImportError:
+ pass
+
+ # Under Python 2.3 or earlier, on unix, use popen2.Popen3 so we
+ # can access the return value.
+ import popen2
+ if hasattr(popen2, 'Popen3'):
+ pipe = popen2.Popen3(' '.join(cmd), True)
+ to_child = pipe.tochild
+ from_child = pipe.fromchild
+ child_err = pipe.childerr
+ if data:
+ to_child.write(data)
+ to_child.close()
+ out = err = ''
+ while pipe.poll() is None:
+ out += from_child.read()
+ err += child_err.read()
+ out += from_child.read()
+ err += child_err.read()
+ if pipe.wait() == 0:
+ return out, err
+ else:
+ raise RunSubprocessError(cmd, out, err)
+
+ # Under Python 2.3 or earlier, on non-unix, use os.popen3
+ else:
+ to_child, from_child, child_err = os.popen3(' '.join(cmd), 'b')
+ if data:
+ try:
+ to_child.write(data)
+ # Guard for a broken pipe error
+ except IOError, e:
+ raise OSError(e)
+ to_child.close()
+ out = from_child.read()
+ err = child_err.read()
+ # Assume that there was an error iff anything was written
+ # to the child's stderr.
+ if err == '':
+ return out, err
+ else:
+ raise RunSubprocessError(cmd, out, err)
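On Python 2.4+ the whole of run_subprocess reduces to the subprocess branch; a self-contained sketch of that path, raising a plain OSError instead of the RunSubprocessError class defined above:

```python
import sys
from subprocess import Popen, PIPE

def run(cmd, data=None):
    # mirror of the Python 2.4+ branch: pipe everything, check the exit code
    pipe = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    out, err = pipe.communicate(data)
    if pipe.returncode != 0:
        raise OSError('%s failed' % cmd[0])
    return out, err

out, err = run([sys.executable, '-c', 'import sys; sys.stdout.write("ok")'])
# out == b'ok' on Python 3
```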
diff --git a/python/helpers/epydoc_formatter.py b/python/helpers/epydoc_formatter.py
new file mode 100644
index 0000000..3d30e1e
--- /dev/null
+++ b/python/helpers/epydoc_formatter.py
@@ -0,0 +1,46 @@
+import sys
+from epydoc.markup import DocstringLinker
+from epydoc.markup.epytext import parse_docstring, ParseError, _colorize
+import epydoc.markup.epytext
+
+def _add_para(doc, para_token, stack, indent_stack, errors):
+ """Colorize the given paragraph, and add it to the DOM tree."""
+ para = _colorize(doc, para_token, errors)
+ if para_token.inline:
+ para.attribs['inline'] = True
+ stack[-1].children.append(para)
+
+epydoc.markup.epytext._add_para = _add_para
+
+def is_fatal():
+ return False
+
+ParseError.is_fatal = is_fatal
+
+try:
+ src = sys.stdin.read()
+ errors = []
+
+ class EmptyLinker(DocstringLinker):
+ def translate_indexterm(self, indexterm):
+ return ""
+
+ def translate_identifier_xref(self, identifier, label=None):
+ return identifier
+
+ docstring = parse_docstring(src, errors)
+ docstring, fields = docstring.split_fields()
+ html = docstring.to_html(EmptyLinker())
+
+ if errors and not html:
+ sys.stderr.write("Error parsing docstring:\n")
+ for error in errors:
+ sys.stderr.write(str(error) + "\n")
+ sys.exit(1)
+
+ sys.stdout.write(html)
+ sys.stdout.flush()
+except:
+ exc_type, exc_value, exc_traceback = sys.exc_info()
+    sys.stderr.write("Error calculating docstring: " + str(exc_value) + "\n")
+ sys.exit(1)
diff --git a/python/helpers/extra_syspath.py b/python/helpers/extra_syspath.py
new file mode 100644
index 0000000..6b47da2
--- /dev/null
+++ b/python/helpers/extra_syspath.py
@@ -0,0 +1,14 @@
+import sys, os
+qualified_name = sys.argv[-1]
+path = qualified_name.split(".")
+
+try:
+ module = __import__(qualified_name, globals(), locals(), [path[-1]])
+ try:
+        p = module.__path__[0]  # __path__ is a list of directories; take the first entry
+        sys.stdout.write(os.sep.join(p.split(os.sep)[:-1]))
+ sys.stdout.flush()
+ except AttributeError:
+ pass
+except ImportError:
+ pass
diff --git a/python/helpers/generator3.py b/python/helpers/generator3.py
new file mode 100644
index 0000000..5fced92
--- /dev/null
+++ b/python/helpers/generator3.py
@@ -0,0 +1,478 @@
+# encoding: utf-8
+from pycharm_generator_utils.module_redeclarator import *
+from pycharm_generator_utils.util_methods import *
+from pycharm_generator_utils.constants import *
+import os
+import atexit
+import zipfile
+
+debug_mode = False
+
+
+def build_output_name(dirname, qualified_name):
+ qualifiers = qualified_name.split(".")
+ if dirname and not dirname.endswith("/") and not dirname.endswith("\\"):
+ dirname += os.path.sep # "a -> a/"
+ for pathindex in range(len(qualifiers) - 1): # create dirs for all qualifiers but last
+ subdirname = dirname + os.path.sep.join(qualifiers[0: pathindex + 1])
+ if not os.path.isdir(subdirname):
+ action("creating subdir %r", subdirname)
+ os.makedirs(subdirname)
+ init_py = os.path.join(subdirname, "__init__.py")
+ if os.path.isfile(subdirname + ".py"):
+ os.rename(subdirname + ".py", init_py)
+ elif not os.path.isfile(init_py):
+ init = fopen(init_py, "w")
+ init.close()
+ target_name = dirname + os.path.sep.join(qualifiers)
+ if os.path.isdir(target_name):
+ fname = os.path.join(target_name, "__init__.py")
+ else:
+ fname = target_name + ".py"
+
+ dirname = os.path.dirname(fname)
+
+ if not os.path.isdir(dirname):
+ os.makedirs(dirname)
+
+ return fname
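The path mapping performed by build_output_name, minus the directory creation and __init__.py bookkeeping, comes down to the following sketch (it touches no disk):

```python
import os

def skeleton_path(dirname, qualified_name):
    # "a.b.c" -> "<dirname>/a/b/c.py"; the real helper also creates the
    # intermediate package dirs and their __init__.py files
    return os.path.join(dirname, *qualified_name.split('.')) + '.py'

skeleton_path('out', 'gtk.gdk')  # 'out/gtk/gdk.py' on POSIX
```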
+
+
+def redo_module(mod_name, outfile, module_file_name, doing_builtins):
+ # gobject does 'del _gobject' in its __init__.py, so the chained attribute lookup code
+ # fails to find 'gobject._gobject'. thus we need to pull the module directly out of
+ # sys.modules
+ mod = sys.modules.get(mod_name)
+ mod_path = mod_name.split('.')
+ if not mod and sys.platform == 'cli':
+ # "import System.Collections" in IronPython 2.7 doesn't actually put System.Collections in sys.modules
+ # instead, sys.modules['System'] get set to a Microsoft.Scripting.Actions.NamespaceTracker and Collections can be
+ # accessed as its attribute
+ mod = sys.modules[mod_path[0]]
+ for component in mod_path[1:]:
+ try:
+ mod = getattr(mod, component)
+ except AttributeError:
+ mod = None
+ report("Failed to find CLR module " + mod_name)
+ break
+ if mod:
+ action("restoring")
+ r = ModuleRedeclarator(mod, outfile, module_file_name, doing_builtins=doing_builtins)
+ r.redo(mod_name, ".".join(mod_path[:-1]) in MODULES_INSPECT_DIR)
+ action("flushing")
+ r.flush()
+ else:
+ report("Failed to find imported module in sys.modules " + mod_name)
+
+# find_binaries functionality
+def cut_binary_lib_suffix(path, f):
+ """
+    @param path: directory where f lives
+    @param f: file name of a possible binary lib file (no path)
+    @return: f without its binary suffix (that is, an importable name) if path+f is indeed a binary lib, or None.
+        Note: if a .py source exists for a .pyc or .pyo file, None is returned.
+ """
+ if not f.endswith(".pyc") and not f.endswith(".typelib") and not f.endswith(".pyo") and not f.endswith(".so") and not f.endswith(".pyd"):
+ return None
+ ret = None
+ match = BIN_MODULE_FNAME_PAT.match(f)
+ if match:
+ ret = match.group(1)
+ modlen = len('module')
+ retlen = len(ret)
+ if ret.endswith('module') and retlen > modlen and f.endswith('.so'): # what for?
+ ret = ret[:(retlen - modlen)]
+ if f.endswith('.pyc') or f.endswith('.pyo'):
+ fullname = os.path.join(path, f[:-1]) # check for __pycache__ is made outside
+ if os.path.exists(fullname):
+ ret = None
+ pat_match = TYPELIB_MODULE_FNAME_PAT.match(f)
+ if pat_match:
+ ret = "gi.repository." + pat_match.group(1)
+ return ret
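For illustration, the suffix handling can be condensed into one regex. This simplified sketch strips the 'module' infix unconditionally, whereas the code above only does so for .so files and additionally checks the filesystem for a shadowing .py source:

```python
import re

# hypothetical condensed pattern, not the BIN_MODULE_FNAME_PAT used above
BIN_SUFFIX = re.compile(r'(.+?)(?:module)?\.(?:pyc|pyo|so|pyd)$')

def importable_name(filename):
    match = BIN_SUFFIX.match(filename)
    return match.group(1) if match else None

importable_name('spammodule.so')  # 'spam'
importable_name('readme.txt')     # None
```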
+
+
+def is_posix_skipped_module(path, f):
+ if os.name == 'posix':
+ name = os.path.join(path, f)
+ for mod in POSIX_SKIP_MODULES:
+ if name.endswith(mod):
+ return True
+ return False
+
+
+def is_mac_skipped_module(path, f):
+ fullname = os.path.join(path, f)
+ m = MAC_STDLIB_PATTERN.match(fullname)
+ if not m: return 0
+ relpath = m.group(2)
+ for module in MAC_SKIP_MODULES:
+ if relpath.startswith(module): return 1
+ return 0
+
+
+def is_skipped_module(path, f):
+ return is_mac_skipped_module(path, f) or is_posix_skipped_module(path, f[:f.rindex('.')]) or 'pynestkernel' in f
+
+
+def is_module(d, root):
+ return (os.path.exists(os.path.join(root, d, "__init__.py")) or
+ os.path.exists(os.path.join(root, d, "__init__.pyc")) or
+ os.path.exists(os.path.join(root, d, "__init__.pyo")))
+
+
+def list_binaries(paths):
+ """
+ Finds binaries in the given list of paths.
+ Understands nested paths, as sys.paths have it (both "a/b" and "a/b/c").
+ Tries to be case-insensitive, but case-preserving.
+ @param paths: list of paths.
+ @return: dict[module_name, full_path]
+ """
+ SEP = os.path.sep
+ res = {} # {name.upper(): (name, full_path)} # b/c windows is case-oblivious
+ if not paths:
+ return {}
+ if IS_JAVA: # jython can't have binary modules
+ return {}
+ paths = sorted_no_case(paths)
+ for path in paths:
+ if path == os.path.dirname(sys.argv[0]): continue
+ for root, dirs, files in os.walk(path):
+ if root.endswith('__pycache__'): continue
+ dirs_copy = list(dirs)
+ for d in dirs_copy:
+ if d.endswith("__pycache__") or not is_module(d, root):
+ dirs.remove(d)
+
+ cutpoint = path.rfind(SEP)
+ if cutpoint > 0:
+ preprefix = path[(cutpoint + len(SEP)):] + '.'
+ else:
+ preprefix = ''
+ prefix = root[(len(path) + len(SEP)):].replace(SEP, '.')
+ if prefix:
+ prefix += '.'
+ note("root: %s path: %s prefix: %s preprefix: %s", root, path, prefix, preprefix)
+ for f in files:
+ name = cut_binary_lib_suffix(root, f)
+ if name and not is_skipped_module(root, f):
+ note("cutout: %s", name)
+ if preprefix:
+ note("prefixes: %s %s", prefix, preprefix)
+ pre_name = (preprefix + prefix + name).upper()
+ if pre_name in res:
+ res.pop(pre_name) # there might be a dupe, if paths got both a/b and a/b/c
+ note("done with %s", name)
+ the_name = prefix + name
+ file_path = os.path.join(root, f)
+
+ res[the_name.upper()] = (the_name, file_path, os.path.getsize(file_path), int(os.stat(file_path).st_mtime))
+ return list(res.values())
+
+
+def list_sources(paths):
+ #noinspection PyBroadException
+ try:
+ for path in paths:
+ if path == os.path.dirname(sys.argv[0]): continue
+
+ path = os.path.normpath(path)
+
+ for root, dirs, files in os.walk(path):
+ if root.endswith('__pycache__'): continue
+ dirs_copy = list(dirs)
+ for d in dirs_copy:
+ if d.endswith("__pycache__") or not is_module(d, root):
+ dirs.remove(d)
+ for name in files:
+ if name.endswith('.py'):
+ file_path = os.path.join(root, name)
+ # some files show up but are actually non-existent symlinks
+ if not os.path.exists(file_path): continue
+ say("%s\t%s\t%d", os.path.normpath(file_path), path, os.path.getsize(file_path))
+ say('END')
+ sys.stdout.flush()
+ except:
+ import traceback
+
+ traceback.print_exc()
+ sys.exit(1)
+
+
+#noinspection PyBroadException
+def zip_sources(zip_path):
+ if not os.path.exists(zip_path):
+ os.makedirs(zip_path)
+
+ zip_filename = os.path.normpath(os.path.sep.join([zip_path, "skeletons.zip"]))
+
+ try:
+ zip = zipfile.ZipFile(zip_filename, 'w', zipfile.ZIP_DEFLATED)
+ except:
+ zip = zipfile.ZipFile(zip_filename, 'w')
+
+ try:
+ try:
+ while True:
+ line = sys.stdin.readline()
+ line = line.strip()
+
+ if line == '-':
+ break
+
+ if line:
+ # This line will break the split:
+ # /.../dist-packages/setuptools/script template (dev).py setuptools/script template (dev).py
+ split_items = line.split()
+ if len(split_items) > 2:
+ match_two_files = re.match(r'^(.+\.py)\s+(.+\.py)$', line)
+ if not match_two_files:
+ report("Error(zip_sources): invalid line '%s'" % line)
+ continue
+ split_items = match_two_files.group(1, 2)
+ (path, arcpath) = split_items
+ zip.write(path, arcpath)
+ else:
+ # busy waiting for input from PyCharm...
+ time.sleep(0.10)
+ say('OK: ' + zip_filename)
+ sys.stdout.flush()
+ except:
+ import traceback
+
+ traceback.print_exc()
+ say('Error creating archive.')
+
+ sys.exit(1)
+ finally:
+ zip.close()
+
+
+# command-line interface
+#noinspection PyBroadException
+def process_one(name, mod_file_name, doing_builtins, subdir):
+ """
+    Processes a single module named 'name', defined in 'mod_file_name' (autodetected if not given).
+ Returns True on success.
+ """
+ if has_regular_python_ext(name):
+ report("Ignored a regular Python file %r", name)
+ return True
+ if not quiet:
+ say(name)
+ sys.stdout.flush()
+ action("doing nothing")
+ outfile = None
+ try:
+ try:
+ fname = build_output_name(subdir, name)
+ action("opening %r", fname)
+ outfile = fopen(fname, "w")
+ old_modules = list(sys.modules.keys())
+ imported_module_names = []
+
+ class MyFinder:
+ #noinspection PyMethodMayBeStatic
+ def find_module(self, fullname, path=None):
+ if fullname != name:
+ imported_module_names.append(fullname)
+ return None
+
+ my_finder = None
+ if hasattr(sys, 'meta_path'):
+ my_finder = MyFinder()
+ sys.meta_path.append(my_finder)
+ else:
+ imported_module_names = None
+
+ action("importing")
+ __import__(name) # sys.modules will fill up with what we want
+
+ if my_finder:
+ sys.meta_path.remove(my_finder)
+ if imported_module_names is None:
+ imported_module_names = [m for m in sys.modules.keys() if m not in old_modules]
+
+ redo_module(name, outfile, mod_file_name, doing_builtins)
+ # The C library may have called Py_InitModule() multiple times to define several modules (gtk._gtk and gtk.gdk);
+ # restore all of them
+ path = name.split(".")
+ redo_imports = not ".".join(path[:-1]) in MODULES_INSPECT_DIR
+ if imported_module_names and redo_imports:
+ for m in sys.modules.keys():
+ action("looking at possible submodule %r", m)
+ # if module has __file__ defined, it has Python source code and doesn't need a skeleton
+ if m not in old_modules and m not in imported_module_names and m != name and not hasattr(
+ sys.modules[m], '__file__'):
+ if not quiet:
+ say(m)
+ sys.stdout.flush()
+ fname = build_output_name(subdir, m)
+ action("opening %r", fname)
+ subfile = fopen(fname, "w")
+ try:
+ redo_module(m, subfile, mod_file_name, doing_builtins)
+ finally:
+ action("closing %r", fname)
+ subfile.close()
+ except:
+ exctype, value = sys.exc_info()[:2]
+ msg = "Failed to process %r while %s: %s"
+ args = name, CURRENT_ACTION, str(value)
+ report(msg, *args)
+ if outfile is not None and not outfile.closed:
+ outfile.write("# encoding: %s\n" % OUT_ENCODING)
+ outfile.write("# module %s\n" % name)
+ outfile.write("# from %s\n" % mod_file_name)
+ outfile.write("# by generator %s\n" % VERSION)
+ outfile.write("\n\n")
+ outfile.write("# Skeleton generation error:\n#\n# " + (msg % args) + "\n")
+ if debug_mode:
+ if sys.platform == 'cli':
+ import traceback
+ traceback.print_exc(file=sys.stderr)
+ raise
+ return False
+ finally:
+ if outfile is not None and not outfile.closed:
+ outfile.close()
+ return True
+
+
+def get_help_text():
+ return (
+ #01234567890123456789012345678901234567890123456789012345678901234567890123456789
+ 'Generates interface skeletons for python modules.' '\n'
+ 'Usage: ' '\n'
+ ' generator [options] [module_name [file_name]]' '\n'
+ ' generator [options] -L ' '\n'
+ 'module_name is fully qualified, and file_name is where the module is defined.' '\n'
+ 'E.g. foo.bar /usr/lib/python/foo_bar.so' '\n'
+ 'For built-in modules file_name is not provided.' '\n'
+ 'Output files will be named as modules plus ".py" suffix.' '\n'
+ 'Normally every name processed will be printed and stdout flushed.' '\n'
+        'directory_list is a single string of directories separated by the OS-specific path separator.' '\n'
+ '\n'
+ 'Options are:' '\n'
+ ' -h -- prints this help message.' '\n'
+ ' -d dir -- output dir, must be writable. If not given, current dir is used.' '\n'
+ ' -b -- use names from sys.builtin_module_names' '\n'
+ ' -q -- quiet, do not print anything on stdout. Errors still go to stderr.' '\n'
+ ' -x -- die on exceptions with a stacktrace; only for debugging.' '\n'
+ ' -v -- be verbose, print lots of debug output to stderr' '\n'
+ ' -c modules -- import CLR assemblies with specified names' '\n'
+ ' -p -- run CLR profiler ' '\n'
+ ' -s path_list -- add paths to sys.path before run; path_list lists directories' '\n'
+ ' separated by path separator char, e.g. "c:\\foo;d:\\bar;c:\\with space"' '\n'
+ ' -L -- print version and then a list of binary module files found ' '\n'
+ ' on sys.path and in directories in directory_list;' '\n'
+ ' lines are "qualified.module.name /full/path/to/module_file.{pyd,dll,so}"' '\n'
+ ' -S -- lists all python sources found in sys.path and in directories in directory_list\n'
+ ' -z archive_name -- zip files to archive_name. Accepts files to be archived from stdin in format <filepath> <name in archive>'
+ )
+
+
+if __name__ == "__main__":
+ from getopt import getopt
+
+ helptext = get_help_text()
+ opts, args = getopt(sys.argv[1:], "d:hbqxvc:ps:LSz")
+ opts = dict(opts)
+
+ quiet = '-q' in opts
+ _is_verbose = '-v' in opts
+ subdir = opts.get('-d', '')
+
+ if not opts or '-h' in opts:
+ say(helptext)
+ sys.exit(0)
+
+ if '-L' not in opts and '-b' not in opts and '-S' not in opts and not args:
+ report("Neither -L nor -b nor -S nor any module name given")
+ sys.exit(1)
+
+ if "-x" in opts:
+ debug_mode = True
+
+ # patch sys.path?
+ extra_path = opts.get('-s', None)
+ if extra_path:
+ source_dirs = extra_path.split(os.path.pathsep)
+ for p in source_dirs:
+ if p and p not in sys.path:
+ sys.path.append(p) # we need this to make things in additional dirs importable
+ note("Altered sys.path: %r", sys.path)
+
+ # find binaries?
+ if "-L" in opts:
+ if len(args) > 0:
+ report("Expected no args with -L, got %d args", len(args))
+ sys.exit(1)
+ say(VERSION)
+ results = list(list_binaries(sys.path))
+ results.sort()
+ for name, path, size, last_modified in results:
+ say("%s\t%s\t%d\t%d", name, path, size, last_modified)
+ sys.exit(0)
+
+ if "-S" in opts:
+ if len(args) > 0:
+ report("Expected no args with -S, got %d args", len(args))
+ sys.exit(1)
+ say(VERSION)
+ list_sources(sys.path)
+ sys.exit(0)
+
+ if "-z" in opts:
+ if len(args) != 1:
+            report("Expected 1 arg with -z, got %d args", len(args))
+ sys.exit(1)
+ zip_sources(args[0])
+ sys.exit(0)
+
+ # build skeleton(s)
+
+ timer = Timer()
+ # determine names
+ if '-b' in opts:
+ if args:
+ report("No names should be specified with -b")
+ sys.exit(1)
+ names = list(sys.builtin_module_names)
+ if not BUILTIN_MOD_NAME in names:
+ names.append(BUILTIN_MOD_NAME)
+ if '__main__' in names:
+ names.remove('__main__') # we don't want ourselves processed
+ ok = True
+ for name in names:
+ ok = process_one(name, None, True, subdir) and ok
+ if not ok:
+ sys.exit(1)
+
+ else:
+ if len(args) > 2:
+ report("Only module_name or module_name and file_name should be specified; got %d args", len(args))
+ sys.exit(1)
+ name = args[0]
+ if len(args) == 2:
+ mod_file_name = args[1]
+ else:
+ mod_file_name = None
+
+ if sys.platform == 'cli':
+ #noinspection PyUnresolvedReferences
+ import clr
+
+ refs = opts.get('-c', '')
+ if refs:
+ for ref in refs.split(';'): clr.AddReferenceByPartialName(ref)
+
+ if '-p' in opts:
+ atexit.register(print_profile)
+
+ if not process_one(name, mod_file_name, False, subdir):
+ sys.exit(1)
+
+ say("Generation completed in %d ms", timer.elapsed())
diff --git a/python/helpers/icon-robots.txt b/python/helpers/icon-robots.txt
new file mode 100644
index 0000000..1313ad9
--- /dev/null
+++ b/python/helpers/icon-robots.txt
@@ -0,0 +1 @@
+skip: *
diff --git a/python/helpers/packaging_tool.py b/python/helpers/packaging_tool.py
new file mode 100644
index 0000000..9937a6b
--- /dev/null
+++ b/python/helpers/packaging_tool.py
@@ -0,0 +1,158 @@
+import sys
+import traceback
+import getopt
+import os
+
+ERROR_WRONG_USAGE = 1
+ERROR_NO_PIP = 2
+ERROR_NO_SETUPTOOLS = 3
+ERROR_EXCEPTION = 4
+
+def exit(retcode):
+ major, minor, micro, release, serial = sys.version_info
+ version = major * 10 + minor
+ if version < 25:
+ import os
+ os._exit(retcode)
+ else:
+ sys.exit(retcode)
+
+
+def usage():
+ sys.stderr.write('Usage: packaging_tool.py <list|install|uninstall|pyvenv>\n')
+ sys.stderr.flush()
+ exit(ERROR_WRONG_USAGE)
+
+
+def error(message, retcode):
+ sys.stderr.write('Error: %s\n' % message)
+ sys.stderr.flush()
+ exit(retcode)
+
+
+def error_no_pip():
+ tb = sys.exc_traceback
+ if tb is not None and tb.tb_next is None:
+ error("Python package management tool 'pip' not found", ERROR_NO_PIP)
+ else:
+ error(traceback.format_exc(), ERROR_EXCEPTION)
+
+
+def do_list():
+ try:
+ import pkg_resources
+ except ImportError:
+ error("Python package management tool 'setuptools' or 'distribute' not found", ERROR_NO_SETUPTOOLS)
+ for pkg in pkg_resources.working_set:
+ requires = ':'.join([str(x) for x in pkg.requires()])
+ sys.stdout.write('\t'.join([pkg.project_name, pkg.version, pkg.location, requires])+chr(10))
+ sys.stdout.flush()
+
+
+def do_install(pkgs):
+ try:
+ import pip
+ except ImportError:
+ error_no_pip()
+ return pip.main(['install'] + pkgs)
+
+
+def do_uninstall(pkgs):
+ try:
+ import pip
+ except ImportError:
+ error_no_pip()
+ return pip.main(['uninstall', '-y'] + pkgs)
+
+
+def do_pyvenv(path, system_site_packages):
+ try:
+ import venv
+ except ImportError:
+ error("Standard Python 'venv' module not found", ERROR_EXCEPTION)
+ venv.create(path, system_site_packages=system_site_packages)
+
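do_pyvenv relies on the stdlib venv module (Python 3.3+). A minimal standalone run into a scratch directory; with venv.create's default with_pip=False, no pip bootstrapping happens:

```python
import os
import tempfile
import venv

# create a bare environment in a throwaway directory (name is illustrative)
target = tempfile.mkdtemp('pycharm-venv-demo')
venv.create(target, system_site_packages=False)

# venv writes a pyvenv.cfg marker file into the environment root
os.path.exists(os.path.join(target, 'pyvenv.cfg'))  # True
```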
+
+def untarDirectory(name):
+ import tempfile
+
+ directory_name = tempfile.mkdtemp("pycharm-management")
+
+ import tarfile
+
+ filename = name + ".tar.gz"
+ tar = tarfile.open(filename)
+ for item in tar:
+ tar.extract(item, directory_name)
+
+ sys.stdout.write(directory_name+chr(10))
+ sys.stdout.flush()
+ return 0
+
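The member-by-member extraction in untarDirectory can be exercised with a round-trip; everything here stays inside temp directories (file names are illustrative):

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp('pycharm-management-demo')
src = os.path.join(workdir, 'hello.txt')
with open(src, 'w') as f:
    f.write('hi')

# pack a one-file .tar.gz ...
archive = os.path.join(workdir, 'demo.tar.gz')
tar = tarfile.open(archive, 'w:gz')
tar.add(src, arcname='hello.txt')
tar.close()

# ... then extract each member into a fresh dir, as untarDirectory does
dest = tempfile.mkdtemp('pycharm-management-demo-out')
tar = tarfile.open(archive)
for item in tar:
    tar.extract(item, dest)
tar.close()
```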
+def mkdtemp_ifneeded():
+ try:
+ ind = sys.argv.index('--build-dir')
+ if not os.path.exists(sys.argv[ind + 1]):
+ import tempfile
+
+ sys.argv[ind + 1] = tempfile.mkdtemp('pycharm-packaging')
+ return sys.argv[ind + 1]
+ except:
+ pass
+
+ return None
+
+
+def main():
+ retcode = 0
+ try:
+ if len(sys.argv) < 2:
+ usage()
+ cmd = sys.argv[1]
+ if cmd == 'list':
+ if len(sys.argv) != 2:
+ usage()
+ do_list()
+ elif cmd == 'install':
+ if len(sys.argv) < 2:
+ usage()
+
+ rmdir = mkdtemp_ifneeded()
+
+ pkgs = sys.argv[2:]
+ retcode = do_install(pkgs)
+
+ if rmdir is not None:
+ import shutil
+ shutil.rmtree(rmdir)
+
+
+ elif cmd == 'untar':
+            if len(sys.argv) < 3:
+                usage()
+ name = sys.argv[2]
+ retcode = untarDirectory(name)
+ elif cmd == 'uninstall':
+ if len(sys.argv) < 2:
+ usage()
+ pkgs = sys.argv[2:]
+ retcode = do_uninstall(pkgs)
+ elif cmd == 'pyvenv':
+ opts, args = getopt.getopt(sys.argv[2:], '', ['system-site-packages'])
+ if len(args) != 1:
+ usage()
+ path = args[0]
+ system_site_packages = False
+ for opt, arg in opts:
+ if opt == '--system-site-packages':
+ system_site_packages = True
+ do_pyvenv(path, system_site_packages)
+ else:
+ usage()
+ except Exception:
+ traceback.print_exc()
+ exit(ERROR_EXCEPTION)
+ exit(retcode)
+
+if __name__ == '__main__':
+ main()
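The pyvenv branch parses its single flag with getopt; in isolation the parse behaves like this (the argument vector is illustrative):

```python
import getopt

# a long option that takes no argument comes back paired with an empty value,
# and the first non-option ends option processing
opts, args = getopt.getopt(['--system-site-packages', '/tmp/env'],
                           '', ['system-site-packages'])
# opts == [('--system-site-packages', '')], args == ['/tmp/env']
```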
diff --git a/python/helpers/pep8.py b/python/helpers/pep8.py
new file mode 100644
index 0000000..2ce7554
--- /dev/null
+++ b/python/helpers/pep8.py
@@ -0,0 +1,1865 @@
+#!/usr/bin/env python
+# pep8.py - Check Python source code formatting, according to PEP 8
+# Copyright (C) 2006-2009 Johann C. Rocholl <[email protected]>
+# Copyright (C) 2009-2013 Florent Xicluna <[email protected]>
+#
+# Permission is hereby granted, free of charge, to any person
+# obtaining a copy of this software and associated documentation files
+# (the "Software"), to deal in the Software without restriction,
+# including without limitation the rights to use, copy, modify, merge,
+# publish, distribute, sublicense, and/or sell copies of the Software,
+# and to permit persons to whom the Software is furnished to do so,
+# subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+
+r"""
+Check Python source code formatting, according to PEP 8:
+http://www.python.org/dev/peps/pep-0008/
+
+For usage and a list of options, try this:
+$ python pep8.py -h
+
+This program and its regression test suite live here:
+http://github.com/jcrocholl/pep8
+
+Groups of errors and warnings:
+E errors
+W warnings
+100 indentation
+200 whitespace
+300 blank lines
+400 imports
+500 line length
+600 deprecation
+700 statements
+900 syntax error
+"""
+__version__ = '1.4.5a0'
+
+import os
+import sys
+import re
+import time
+import inspect
+import keyword
+import tokenize
+from optparse import OptionParser
+from fnmatch import fnmatch
+try:
+ from configparser import RawConfigParser
+ from io import TextIOWrapper
+except ImportError:
+ from ConfigParser import RawConfigParser
+
+DEFAULT_EXCLUDE = '.svn,CVS,.bzr,.hg,.git,__pycache__'
+DEFAULT_IGNORE = 'E226,E24'
+if sys.platform == 'win32':
+ DEFAULT_CONFIG = os.path.expanduser(r'~\.pep8')
+else:
+ DEFAULT_CONFIG = os.path.join(os.getenv('XDG_CONFIG_HOME') or
+ os.path.expanduser('~/.config'), 'pep8')
+PROJECT_CONFIG = ('.pep8', 'tox.ini', 'setup.cfg')
+TESTSUITE_PATH = os.path.join(os.path.dirname(__file__), 'testsuite')
+MAX_LINE_LENGTH = 79
+REPORT_FORMAT = {
+ 'default': '%(path)s:%(row)d:%(col)d: %(code)s %(text)s',
+ 'pylint': '%(path)s:%(row)d: [%(code)s] %(text)s',
+}
+
+PyCF_ONLY_AST = 1024
+SINGLETONS = frozenset(['False', 'None', 'True'])
+KEYWORDS = frozenset(keyword.kwlist + ['print']) - SINGLETONS
+UNARY_OPERATORS = frozenset(['>>', '**', '*', '+', '-'])
+ARITHMETIC_OP = frozenset(['**', '*', '/', '//', '+', '-'])
+WS_OPTIONAL_OPERATORS = ARITHMETIC_OP.union(['^', '&', '|', '<<', '>>', '%'])
+WS_NEEDED_OPERATORS = frozenset([
+ '**=', '*=', '/=', '//=', '+=', '-=', '!=', '<>', '<', '>',
+ '%=', '^=', '&=', '|=', '==', '<=', '>=', '<<=', '>>=', '='])
+WHITESPACE = frozenset(' \t')
+SKIP_TOKENS = frozenset([tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
+ tokenize.INDENT, tokenize.DEDENT])
+BENCHMARK_KEYS = ['directories', 'files', 'logical lines', 'physical lines']
+
+INDENT_REGEX = re.compile(r'([ \t]*)')
+RAISE_COMMA_REGEX = re.compile(r'raise\s+\w+\s*(,)')
+RERAISE_COMMA_REGEX = re.compile(r'raise\s+\w+\s*,\s*\w+\s*,\s*\w+')
+ERRORCODE_REGEX = re.compile(r'\b[A-Z]\d{3}\b')
+DOCSTRING_REGEX = re.compile(r'u?r?["\']')
+EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
+WHITESPACE_AFTER_COMMA_REGEX = re.compile(r'[,;:]\s*(?: |\t)')
+COMPARE_SINGLETON_REGEX = re.compile(r'([=!]=)\s*(None|False|True)')
+COMPARE_TYPE_REGEX = re.compile(r'(?:[=!]=|is(?:\s+not)?)\s*type(?:s.\w+Type'
+ r'|\s*\(\s*([^)]*[^ )])\s*\))')
+KEYWORD_REGEX = re.compile(r'(\s*)\b(?:%s)\b(\s*)' % r'|'.join(KEYWORDS))
+OPERATOR_REGEX = re.compile(r'(?:[^,\s])(\s*)(?:[-+*/|!<=>%&^]+)(\s*)')
+LAMBDA_REGEX = re.compile(r'\blambda\b')
+HUNK_REGEX = re.compile(r'^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@.*$')
+
+# Work around Python < 2.6 behaviour, which does not generate NL after
+# a comment which is on a line by itself.
+COMMENT_WITH_NL = tokenize.generate_tokens(['#\n'].pop).send(None)[1] == '#\n'
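A quick probe of one of the precompiled patterns above, EXTRANEOUS_WHITESPACE_REGEX, which pep8 uses for its extraneous-whitespace checks (the E201/E202 family):

```python
import re

# same pattern as above: an open bracket followed by a space, or a space
# followed by a closing bracket, comma, semicolon, or colon
EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')

bool(EXTRANEOUS_WHITESPACE_REGEX.search('spam( ham)'))  # True
bool(EXTRANEOUS_WHITESPACE_REGEX.search('spam(ham)'))   # False
```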
+
+
+##############################################################################
+# Plugins (check functions) for physical lines
+##############################################################################
+
+
+def tabs_or_spaces(physical_line, indent_char):
+ r"""
+ Never mix tabs and spaces.
+
+ The most popular way of indenting Python is with spaces only. The
+ second-most popular way is with tabs only. Code indented with a mixture
+ of tabs and spaces should be converted to using spaces exclusively. When
+ invoking the Python command line interpreter with the -t option, it issues
+ warnings about code that illegally mixes tabs and spaces. When using -tt
+ these warnings become errors. These options are highly recommended!
+
+ Okay: if a == 0:\n a = 1\n b = 1
+ E101: if a == 0:\n a = 1\n\tb = 1
+ """
+ indent = INDENT_REGEX.match(physical_line).group(1)
+ for offset, char in enumerate(indent):
+ if char != indent_char:
+ return offset, "E101 indentation contains mixed spaces and tabs"
+
+
+def tabs_obsolete(physical_line):
+ r"""
+ For new projects, spaces-only are strongly recommended over tabs. Most
+ editors have features that make this easy to do.
+
+ Okay: if True:\n return
+ W191: if True:\n\treturn
+ """
+ indent = INDENT_REGEX.match(physical_line).group(1)
+ if '\t' in indent:
+ return indent.index('\t'), "W191 indentation contains tabs"
+
+
+def trailing_whitespace(physical_line):
+ r"""
+ JCR: Trailing whitespace is superfluous.
+ FBM: Except when it occurs as part of a blank line (i.e. the line is
+ nothing but whitespace). According to Python docs[1] a line with only
+ whitespace is considered a blank line, and is to be ignored. However,
+ matching a blank line to its indentation level avoids mistakenly
+ terminating a multi-line statement (e.g. class declaration) when
+ pasting code into the standard Python interpreter.
+
+ [1] http://docs.python.org/reference/lexical_analysis.html#blank-lines
+
+ The warning returned varies on whether the line itself is blank, for easier
+ filtering for those who want to indent their blank lines.
+
+ Okay: spam(1)\n#
+ W291: spam(1) \n#
+ W293: class Foo(object):\n \n bang = 12
+ """
+ physical_line = physical_line.rstrip('\n') # chr(10), newline
+ physical_line = physical_line.rstrip('\r') # chr(13), carriage return
+ physical_line = physical_line.rstrip('\x0c') # chr(12), form feed, ^L
+ stripped = physical_line.rstrip(' \t\v')
+ if physical_line != stripped:
+ if stripped:
+ return len(stripped), "W291 trailing whitespace"
+ else:
+ return 0, "W293 blank line contains whitespace"
+
+
+def trailing_blank_lines(physical_line, lines, line_number):
+ r"""
+ JCR: Trailing blank lines are superfluous.
+
+ Okay: spam(1)
+ W391: spam(1)\n
+ """
+ if not physical_line.rstrip() and line_number == len(lines):
+ return 0, "W391 blank line at end of file"
+
+
+def missing_newline(physical_line):
+ """
+ JCR: The last line should have a newline.
+
+ Reports warning W292.
+ """
+ if physical_line.rstrip() == physical_line:
+ return len(physical_line), "W292 no newline at end of file"
+
+
+def maximum_line_length(physical_line, max_line_length):
+ """
+ Limit all lines to a maximum of 79 characters.
+
+ There are still many devices around that are limited to 80 character
+ lines; plus, limiting windows to 80 characters makes it possible to have
+ several windows side-by-side. The default wrapping on such devices looks
+ ugly. Therefore, please limit all lines to a maximum of 79 characters.
+ For flowing long blocks of text (docstrings or comments), limiting the
+ length to 72 characters is recommended.
+
+ Reports error E501.
+ """
+ line = physical_line.rstrip()
+ length = len(line)
+ if length > max_line_length:
+ if noqa(line):
+ return
+ if hasattr(line, 'decode'): # Python 2
+ # The line could contain multi-byte characters
+ try:
+ length = len(line.decode('utf-8'))
+ except UnicodeError:
+ pass
+ if length > max_line_length:
+ return (max_line_length, "E501 line too long "
+ "(%d > %d characters)" % (length, max_line_length))
+
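The checks above all follow the same physical-line plugin contract: a function receives the raw line (plus any options named in its signature) and returns an `(offset, message)` pair, or `None` when the line is clean. A minimal standalone sketch of that contract, not part of pep8.py itself (it omits the noqa and Python 2 multi-byte handling of the real `maximum_line_length`):

```python
def max_line_length_sketch(physical_line, max_line_length=79):
    # Simplified copy of the check above: return (offset, message) or None.
    line = physical_line.rstrip()
    if len(line) > max_line_length:
        return (max_line_length, "E501 line too long "
                "(%d > %d characters)" % (len(line), max_line_length))

print(max_line_length_sketch("x = 1"))    # clean line -> None
print(max_line_length_sketch("#" * 100))  # -> (79, 'E501 line too long (100 > 79 characters)')
```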
+
+##############################################################################
+# Plugins (check functions) for logical lines
+##############################################################################
+
+
+def blank_lines(logical_line, blank_lines, indent_level, line_number,
+ previous_logical, previous_indent_level):
+ r"""
+ Separate top-level function and class definitions with two blank lines.
+
+ Method definitions inside a class are separated by a single blank line.
+
+ Extra blank lines may be used (sparingly) to separate groups of related
+ functions. Blank lines may be omitted between a bunch of related
+ one-liners (e.g. a set of dummy implementations).
+
+ Use blank lines in functions, sparingly, to indicate logical sections.
+
+    Okay: def a():\n    pass\n\n\ndef b():\n    pass
+    Okay: def a():\n    pass\n\n\n# Foo\n# Bar\n\ndef b():\n    pass
+
+    E301: class Foo:\n    b = 0\n    def bar():\n        pass
+    E302: def a():\n    pass\n\ndef b(n):\n    pass
+    E303: def a():\n    pass\n\n\n\ndef b(n):\n    pass
+    E303: def a():\n\n\n\n    pass
+    E304: @decorator\n\ndef a():\n    pass
+ """
+ if line_number < 3 and not previous_logical:
+ return # Don't expect blank lines before the first line
+ if previous_logical.startswith('@'):
+ if blank_lines:
+ yield 0, "E304 blank lines found after function decorator"
+ elif blank_lines > 2 or (indent_level and blank_lines == 2):
+ yield 0, "E303 too many blank lines (%d)" % blank_lines
+ elif logical_line.startswith(('def ', 'class ', '@')):
+ if indent_level:
+ if not (blank_lines or previous_indent_level < indent_level or
+ DOCSTRING_REGEX.match(previous_logical)):
+ yield 0, "E301 expected 1 blank line, found 0"
+ elif blank_lines != 2:
+ yield 0, "E302 expected 2 blank lines, found %d" % blank_lines
+
+
+def extraneous_whitespace(logical_line):
+ """
+ Avoid extraneous whitespace in the following situations:
+
+ - Immediately inside parentheses, brackets or braces.
+
+ - Immediately before a comma, semicolon, or colon.
+
+ Okay: spam(ham[1], {eggs: 2})
+ E201: spam( ham[1], {eggs: 2})
+ E201: spam(ham[ 1], {eggs: 2})
+ E201: spam(ham[1], { eggs: 2})
+ E202: spam(ham[1], {eggs: 2} )
+ E202: spam(ham[1 ], {eggs: 2})
+ E202: spam(ham[1], {eggs: 2 })
+
+ E203: if x == 4: print x, y; x, y = y , x
+ E203: if x == 4: print x, y ; x, y = y, x
+ E203: if x == 4 : print x, y; x, y = y, x
+ """
+ line = logical_line
+ for match in EXTRANEOUS_WHITESPACE_REGEX.finditer(line):
+ text = match.group()
+ char = text.strip()
+ found = match.start()
+ if text == char + ' ':
+ # assert char in '([{'
+ yield found + 1, "E201 whitespace after '%s'" % char
+ elif line[found - 1] != ',':
+ code = ('E202' if char in '}])' else 'E203') # if char in ',;:'
+ yield found, "%s whitespace before '%s'" % (code, char)
+
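Logical-line checks like the one above are generators yielding `(offset, message)` pairs. A standalone sketch of `extraneous_whitespace`, with the module-level regex reproduced here (assumed to have the shape defined earlier in this file: an opening bracket followed by a space, or a space followed by a closer, comma, semicolon or colon) so it runs on its own:

```python
import re

# Assumed shape of pep8.py's EXTRANEOUS_WHITESPACE_REGEX, copied so the
# sketch is self-contained.
EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')

def extraneous_whitespace_sketch(logical_line):
    for match in EXTRANEOUS_WHITESPACE_REGEX.finditer(logical_line):
        text = match.group()
        char = text.strip()
        found = match.start()
        if text == char + ' ':
            # space right after an opening bracket
            yield found + 1, "E201 whitespace after '%s'" % char
        elif logical_line[found - 1] != ',':
            # space right before a closer or punctuation
            code = ('E202' if char in '}])' else 'E203')
            yield found, "%s whitespace before '%s'" % (code, char)

results = list(extraneous_whitespace_sketch("spam( ham[1], {eggs: 2} )"))
print(results)  # two findings: E201 at offset 5, E202 at offset 23
```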
+
+def whitespace_around_keywords(logical_line):
+ r"""
+ Avoid extraneous whitespace around keywords.
+
+ Okay: True and False
+    E271: True and  False
+    E272: True  and False
+ E273: True and\tFalse
+ E274: True\tand False
+ """
+ for match in KEYWORD_REGEX.finditer(logical_line):
+ before, after = match.groups()
+
+ if '\t' in before:
+ yield match.start(1), "E274 tab before keyword"
+ elif len(before) > 1:
+ yield match.start(1), "E272 multiple spaces before keyword"
+
+ if '\t' in after:
+ yield match.start(2), "E273 tab after keyword"
+ elif len(after) > 1:
+ yield match.start(2), "E271 multiple spaces after keyword"
+
+
+def missing_whitespace(logical_line):
+ """
+ JCR: Each comma, semicolon or colon should be followed by whitespace.
+
+ Okay: [a, b]
+ Okay: (3,)
+ Okay: a[1:4]
+ Okay: a[:4]
+ Okay: a[1:]
+ Okay: a[1:4:2]
+ E231: ['a','b']
+ E231: foo(bar,baz)
+ E231: [{'a':'b'}]
+ """
+ line = logical_line
+ for index in range(len(line) - 1):
+ char = line[index]
+ if char in ',;:' and line[index + 1] not in WHITESPACE:
+ before = line[:index]
+ if char == ':' and before.count('[') > before.count(']') and \
+ before.rfind('{') < before.rfind('['):
+ continue # Slice syntax, no space required
+ if char == ',' and line[index + 1] == ')':
+ continue # Allow tuple with only one element: (3,)
+ yield index, "E231 missing whitespace after '%s'" % char
+
+
+def indentation(logical_line, previous_logical, indent_char,
+ indent_level, previous_indent_level):
+ r"""
+ Use 4 spaces per indentation level.
+
+ For really old code that you don't want to mess up, you can continue to
+ use 8-space tabs.
+
+    Okay: a = 1
+    Okay: if a == 0:\n    a = 1
+    E111:   a = 1
+
+    Okay: for item in items:\n    pass
+    E112: for item in items:\npass
+
+    Okay: a = 1\nb = 2
+    E113: a = 1\n    b = 2
+ """
+ if indent_char == ' ' and indent_level % 4:
+ yield 0, "E111 indentation is not a multiple of four"
+ indent_expect = previous_logical.endswith(':')
+ if indent_expect and indent_level <= previous_indent_level:
+ yield 0, "E112 expected an indented block"
+ if indent_level > previous_indent_level and not indent_expect:
+ yield 0, "E113 unexpected indentation"
+
+
+def continuation_line_indentation(logical_line, tokens, indent_level, verbose):
+ r"""
+ Continuation lines should align wrapped elements either vertically using
+ Python's implicit line joining inside parentheses, brackets and braces, or
+ using a hanging indent.
+
+ When using a hanging indent the following considerations should be applied:
+
+ - there should be no arguments on the first line, and
+
+ - further indentation should be used to clearly distinguish itself as a
+ continuation line.
+
+    Okay: a = (\n)
+    E123: a = (\n    )
+
+    Okay: a = (\n    42)
+    E121: a = (\n   42)
+    E122: a = (\n42)
+    E123: a = (\n    42\n    )
+    E124: a = (24,\n     42\n)
+    E125: if (a or\n    b):\n    pass
+    E126: a = (\n        42)
+    E127: a = (24,\n       42)
+    E128: a = (24,\n    42)
+ """
+ first_row = tokens[0][2][0]
+ nrows = 1 + tokens[-1][2][0] - first_row
+ if nrows == 1 or noqa(tokens[0][4]):
+ return
+
+ # indent_next tells us whether the next block is indented; assuming
+ # that it is indented by 4 spaces, then we should not allow 4-space
+ # indents on the final continuation line; in turn, some other
+ # indents are allowed to have an extra 4 spaces.
+ indent_next = logical_line.endswith(':')
+
+ row = depth = 0
+ # remember how many brackets were opened on each line
+ parens = [0] * nrows
+ # relative indents of physical lines
+ rel_indent = [0] * nrows
+ # visual indents
+ indent_chances = {}
+ last_indent = tokens[0][2]
+ indent = [last_indent[1]]
+ if verbose >= 3:
+ print(">>> " + tokens[0][4].rstrip())
+
+ for token_type, text, start, end, line in tokens:
+
+ newline = row < start[0] - first_row
+ if newline:
+ row = start[0] - first_row
+ newline = (not last_token_multiline and
+ token_type not in (tokenize.NL, tokenize.NEWLINE))
+
+ if newline:
+ # this is the beginning of a continuation line.
+ last_indent = start
+ if verbose >= 3:
+ print("... " + line.rstrip())
+
+ # record the initial indent.
+ rel_indent[row] = expand_indent(line) - indent_level
+
+ if depth:
+ # a bracket expression in a continuation line.
+ # find the line that it was opened on
+ for open_row in range(row - 1, -1, -1):
+ if parens[open_row]:
+ break
+ else:
+ # an unbracketed continuation line (ie, backslash)
+ open_row = 0
+ hang = rel_indent[row] - rel_indent[open_row]
+ visual_indent = indent_chances.get(start[1])
+
+ if token_type == tokenize.OP and text in ']})':
+ # this line starts with a closing bracket
+ if indent[depth]:
+ if start[1] != indent[depth]:
+ yield (start, "E124 closing bracket does not match "
+ "visual indentation")
+ elif hang:
+ yield (start, "E123 closing bracket does not match "
+ "indentation of opening bracket's line")
+ elif visual_indent is True:
+ # visual indent is verified
+ if not indent[depth]:
+ indent[depth] = start[1]
+ elif visual_indent in (text, str):
+ # ignore token lined up with matching one from a previous line
+ pass
+ elif indent[depth] and start[1] < indent[depth]:
+ # visual indent is broken
+ yield (start, "E128 continuation line "
+ "under-indented for visual indent")
+ elif hang == 4 or (indent_next and rel_indent[row] == 8):
+ # hanging indent is verified
+ pass
+ else:
+ # indent is broken
+ if hang <= 0:
+ error = "E122", "missing indentation or outdented"
+ elif indent[depth]:
+ error = "E127", "over-indented for visual indent"
+ elif hang % 4:
+ error = "E121", "indentation is not a multiple of four"
+ else:
+ error = "E126", "over-indented for hanging indent"
+ yield start, "%s continuation line %s" % error
+
+ # look for visual indenting
+ if (parens[row] and token_type not in (tokenize.NL, tokenize.COMMENT)
+ and not indent[depth]):
+ indent[depth] = start[1]
+ indent_chances[start[1]] = True
+ if verbose >= 4:
+ print("bracket depth %s indent to %s" % (depth, start[1]))
+ # deal with implicit string concatenation
+ elif (token_type in (tokenize.STRING, tokenize.COMMENT) or
+ text in ('u', 'ur', 'b', 'br')):
+ indent_chances[start[1]] = str
+ # special case for the "if" statement because len("if (") == 4
+ elif not indent_chances and not row and not depth and text == 'if':
+ indent_chances[end[1] + 1] = True
+
+ # keep track of bracket depth
+ if token_type == tokenize.OP:
+ if text in '([{':
+ depth += 1
+ indent.append(0)
+ parens[row] += 1
+ if verbose >= 4:
+ print("bracket depth %s seen, col %s, visual min = %s" %
+ (depth, start[1], indent[depth]))
+ elif text in ')]}' and depth > 0:
+ # parent indents should not be more than this one
+ prev_indent = indent.pop() or last_indent[1]
+ for d in range(depth):
+ if indent[d] > prev_indent:
+ indent[d] = 0
+ for ind in list(indent_chances):
+ if ind >= prev_indent:
+ del indent_chances[ind]
+ depth -= 1
+ if depth:
+ indent_chances[indent[depth]] = True
+ for idx in range(row, -1, -1):
+ if parens[idx]:
+ parens[idx] -= 1
+ break
+ assert len(indent) == depth + 1
+ if start[1] not in indent_chances:
+ # allow to line up tokens
+ indent_chances[start[1]] = text
+
+ last_token_multiline = (start[0] != end[0])
+
+ if indent_next and rel_indent[-1] == 4:
+ yield (last_indent, "E125 continuation line does not distinguish "
+ "itself from next logical line")
+
+
+def whitespace_before_parameters(logical_line, tokens):
+ """
+ Avoid extraneous whitespace in the following situations:
+
+ - Immediately before the open parenthesis that starts the argument
+ list of a function call.
+
+ - Immediately before the open parenthesis that starts an indexing or
+ slicing.
+
+ Okay: spam(1)
+ E211: spam (1)
+
+ Okay: dict['key'] = list[index]
+ E211: dict ['key'] = list[index]
+ E211: dict['key'] = list [index]
+ """
+ prev_type = tokens[0][0]
+ prev_text = tokens[0][1]
+ prev_end = tokens[0][3]
+ for index in range(1, len(tokens)):
+ token_type, text, start, end, line = tokens[index]
+ if (token_type == tokenize.OP and
+ text in '([' and
+ start != prev_end and
+ (prev_type == tokenize.NAME or prev_text in '}])') and
+ # Syntax "class A (B):" is allowed, but avoid it
+ (index < 2 or tokens[index - 2][1] != 'class') and
+ # Allow "return (a.foo for a in range(5))"
+ not keyword.iskeyword(prev_text)):
+ yield prev_end, "E211 whitespace before '%s'" % text
+ prev_type = token_type
+ prev_text = text
+ prev_end = end
+
+
+def whitespace_around_operator(logical_line):
+ r"""
+ Avoid extraneous whitespace in the following situations:
+
+ - More than one space around an assignment (or other) operator to
+ align it with another.
+
+ Okay: a = 12 + 3
+    E221: a = 4  + 5
+    E222: a = 4 +  5
+ E223: a = 4\t+ 5
+ E224: a = 4 +\t5
+ """
+ for match in OPERATOR_REGEX.finditer(logical_line):
+ before, after = match.groups()
+
+ if '\t' in before:
+ yield match.start(1), "E223 tab before operator"
+ elif len(before) > 1:
+ yield match.start(1), "E221 multiple spaces before operator"
+
+ if '\t' in after:
+ yield match.start(2), "E224 tab after operator"
+ elif len(after) > 1:
+ yield match.start(2), "E222 multiple spaces after operator"
+
+
+def missing_whitespace_around_operator(logical_line, tokens):
+ r"""
+ - Always surround these binary operators with a single space on
+ either side: assignment (=), augmented assignment (+=, -= etc.),
+ comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not),
+ Booleans (and, or, not).
+
+ - Use spaces around arithmetic operators.
+
+ Okay: i = i + 1
+ Okay: submitted += 1
+ Okay: x = x * 2 - 1
+ Okay: hypot2 = x * x + y * y
+ Okay: c = (a + b) * (a - b)
+ Okay: foo(bar, key='word', *args, **kwargs)
+ Okay: alpha[:-i]
+
+ E225: i=i+1
+ E225: submitted +=1
+ E225: x = x /2 - 1
+ E225: z = x **y
+ E226: c = (a+b) * (a-b)
+ E226: hypot2 = x*x + y*y
+ E227: c = a|b
+ E228: msg = fmt%(errno, errmsg)
+ """
+ parens = 0
+ need_space = False
+ prev_type = tokenize.OP
+ prev_text = prev_end = None
+ for token_type, text, start, end, line in tokens:
+ if token_type in (tokenize.NL, tokenize.NEWLINE, tokenize.ERRORTOKEN):
+ # ERRORTOKEN is triggered by backticks in Python 3
+ continue
+ if text in ('(', 'lambda'):
+ parens += 1
+ elif text == ')':
+ parens -= 1
+ if need_space:
+ if start != prev_end:
+ # Found a (probably) needed space
+ if need_space is not True and not need_space[1]:
+ yield (need_space[0],
+ "E225 missing whitespace around operator")
+ need_space = False
+ elif text == '>' and prev_text in ('<', '-'):
+ # Tolerate the "<>" operator, even if running Python 3
+ # Deal with Python 3's annotated return value "->"
+ pass
+ else:
+ if need_space is True or need_space[1]:
+ # A needed trailing space was not found
+ yield prev_end, "E225 missing whitespace around operator"
+ else:
+ code, optype = 'E226', 'arithmetic'
+ if prev_text == '%':
+ code, optype = 'E228', 'modulo'
+ elif prev_text not in ARITHMETIC_OP:
+ code, optype = 'E227', 'bitwise or shift'
+ yield (need_space[0], "%s missing whitespace "
+ "around %s operator" % (code, optype))
+ need_space = False
+ elif token_type == tokenize.OP and prev_end is not None:
+ if text == '=' and parens:
+ # Allow keyword args or defaults: foo(bar=None).
+ pass
+ elif text in WS_NEEDED_OPERATORS:
+ need_space = True
+ elif text in UNARY_OPERATORS:
+ # Check if the operator is being used as a binary operator
+ # Allow unary operators: -123, -x, +1.
+ # Allow argument unpacking: foo(*args, **kwargs).
+ if prev_type == tokenize.OP:
+ binary_usage = (prev_text in '}])')
+ elif prev_type == tokenize.NAME:
+ binary_usage = (prev_text not in KEYWORDS)
+ else:
+ binary_usage = (prev_type not in SKIP_TOKENS)
+
+ if binary_usage:
+ if text in WS_OPTIONAL_OPERATORS:
+ need_space = None
+ else:
+ need_space = True
+ elif text in WS_OPTIONAL_OPERATORS:
+ need_space = None
+
+ if need_space is None:
+ # Surrounding space is optional, but ensure that
+ # trailing space matches opening space
+ need_space = (prev_end, start != prev_end)
+ elif need_space and start == prev_end:
+ # A needed opening space was not found
+ yield prev_end, "E225 missing whitespace around operator"
+ need_space = False
+ prev_type = token_type
+ prev_text = text
+ prev_end = end
+
+
+def whitespace_around_comma(logical_line):
+ r"""
+ Avoid extraneous whitespace in the following situations:
+
+    - More than one space after a comma, semicolon or colon, usually
+      used to align values in columns.
+
+    Note: these checks are disabled by default.
+
+ Okay: a = (1, 2)
+    E241: a = (1,  2)
+ E242: a = (1,\t2)
+ """
+ line = logical_line
+ for m in WHITESPACE_AFTER_COMMA_REGEX.finditer(line):
+ found = m.start() + 1
+ if '\t' in m.group():
+ yield found, "E242 tab after '%s'" % m.group()[0]
+ else:
+ yield found, "E241 multiple spaces after '%s'" % m.group()[0]
+
+
+def whitespace_around_named_parameter_equals(logical_line, tokens):
+ """
+ Don't use spaces around the '=' sign when used to indicate a
+ keyword argument or a default parameter value.
+
+ Okay: def complex(real, imag=0.0):
+ Okay: return magic(r=real, i=imag)
+ Okay: boolean(a == b)
+ Okay: boolean(a != b)
+ Okay: boolean(a <= b)
+ Okay: boolean(a >= b)
+
+ E251: def complex(real, imag = 0.0):
+ E251: return magic(r = real, i = imag)
+ """
+ parens = 0
+ no_space = False
+ prev_end = None
+ message = "E251 unexpected spaces around keyword / parameter equals"
+ for token_type, text, start, end, line in tokens:
+ if no_space:
+ no_space = False
+ if start != prev_end:
+ yield (prev_end, message)
+ elif token_type == tokenize.OP:
+ if text == '(':
+ parens += 1
+ elif text == ')':
+ parens -= 1
+ elif parens and text == '=':
+ no_space = True
+ if start != prev_end:
+ yield (prev_end, message)
+ prev_end = end
+
+
+def whitespace_before_inline_comment(logical_line, tokens):
+ """
+ Separate inline comments by at least two spaces.
+
+ An inline comment is a comment on the same line as a statement. Inline
+ comments should be separated by at least two spaces from the statement.
+ They should start with a # and a single space.
+
+    Okay: x = x + 1  # Increment x
+    Okay: x = x + 1    # Increment x
+    E261: x = x + 1 # Increment x
+    E262: x = x + 1  #Increment x
+    E262: x = x + 1  #  Increment x
+ """
+ prev_end = (0, 0)
+ for token_type, text, start, end, line in tokens:
+ if token_type == tokenize.COMMENT:
+ if not line[:start[1]].strip():
+ continue
+ if prev_end[0] == start[0] and start[1] < prev_end[1] + 2:
+ yield (prev_end,
+ "E261 at least two spaces before inline comment")
+ symbol, sp, comment = text.partition(' ')
+ if symbol not in ('#', '#:') or comment[:1].isspace():
+ yield start, "E262 inline comment should start with '# '"
+ elif token_type != tokenize.NL:
+ prev_end = end
+
+
+def imports_on_separate_lines(logical_line):
+ r"""
+ Imports should usually be on separate lines.
+
+ Okay: import os\nimport sys
+ E401: import sys, os
+
+ Okay: from subprocess import Popen, PIPE
+    Okay: from myclass import MyClass
+ Okay: from foo.bar.yourclass import YourClass
+ Okay: import myclass
+ Okay: import foo.bar.yourclass
+ """
+ line = logical_line
+ if line.startswith('import '):
+ found = line.find(',')
+ if -1 < found and ';' not in line[:found]:
+ yield found, "E401 multiple imports on one line"
+
+
+def compound_statements(logical_line):
+ r"""
+ Compound statements (multiple statements on the same line) are
+ generally discouraged.
+
+ While sometimes it's okay to put an if/for/while with a small body
+ on the same line, never do this for multi-clause statements. Also
+ avoid folding such long lines!
+
+ Okay: if foo == 'blah':\n do_blah_thing()
+ Okay: do_one()
+ Okay: do_two()
+ Okay: do_three()
+
+ E701: if foo == 'blah': do_blah_thing()
+ E701: for x in lst: total += x
+ E701: while t < 10: t = delay()
+ E701: if foo == 'blah': do_blah_thing()
+ E701: else: do_non_blah_thing()
+ E701: try: something()
+ E701: finally: cleanup()
+ E701: if foo == 'blah': one(); two(); three()
+
+ E702: do_one(); do_two(); do_three()
+ E703: do_four(); # useless semicolon
+ """
+ line = logical_line
+ last_char = len(line) - 1
+ found = line.find(':')
+ if -1 < found < last_char:
+ before = line[:found]
+ if (before.count('{') <= before.count('}') and # {'a': 1} (dict)
+ before.count('[') <= before.count(']') and # [1:2] (slice)
+ before.count('(') <= before.count(')') and # (Python 3 annotation)
+ not LAMBDA_REGEX.search(before)): # lambda x: x
+ yield found, "E701 multiple statements on one line (colon)"
+ found = line.find(';')
+ if -1 < found:
+ if found < last_char:
+ yield found, "E702 multiple statements on one line (semicolon)"
+ else:
+ yield found, "E703 statement ends with a semicolon"
+
+
+def explicit_line_join(logical_line, tokens):
+ r"""
+ Avoid explicit line join between brackets.
+
+ The preferred way of wrapping long lines is by using Python's implied line
+ continuation inside parentheses, brackets and braces. Long lines can be
+ broken over multiple lines by wrapping expressions in parentheses. These
+ should be used in preference to using a backslash for line continuation.
+
+ E502: aaa = [123, \\n 123]
+ E502: aaa = ("bbb " \\n "ccc")
+
+ Okay: aaa = [123,\n 123]
+ Okay: aaa = ("bbb "\n "ccc")
+ Okay: aaa = "bbb " \\n "ccc"
+ """
+ prev_start = prev_end = parens = 0
+ for token_type, text, start, end, line in tokens:
+ if start[0] != prev_start and parens and backslash:
+ yield backslash, "E502 the backslash is redundant between brackets"
+ if end[0] != prev_end:
+ if line.rstrip('\r\n').endswith('\\'):
+ backslash = (end[0], len(line.splitlines()[-1]) - 1)
+ else:
+ backslash = None
+ prev_start = prev_end = end[0]
+ else:
+ prev_start = start[0]
+ if token_type == tokenize.OP:
+ if text in '([{':
+ parens += 1
+ elif text in ')]}':
+ parens -= 1
+
+
+def comparison_to_singleton(logical_line):
+ """
+ Comparisons to singletons like None should always be done
+ with "is" or "is not", never the equality operators.
+
+ Okay: if arg is not None:
+ E711: if arg != None:
+ E712: if arg == True:
+
+ Also, beware of writing if x when you really mean if x is not None --
+ e.g. when testing whether a variable or argument that defaults to None was
+ set to some other value. The other value might have a type (such as a
+ container) that could be false in a boolean context!
+ """
+ match = COMPARE_SINGLETON_REGEX.search(logical_line)
+ if match:
+ same = (match.group(1) == '==')
+ singleton = match.group(2)
+ msg = "'if cond is %s:'" % (('' if same else 'not ') + singleton)
+ if singleton in ('None',):
+ code = 'E711'
+ else:
+ code = 'E712'
+ nonzero = ((singleton == 'True' and same) or
+ (singleton == 'False' and not same))
+ msg += " or 'if %scond:'" % ('' if nonzero else 'not ')
+ yield match.start(1), ("%s comparison to %s should be %s" %
+ (code, singleton, msg))
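The message built above flips between the `is`/`is not` and truthiness suggestions based on which operator and which singleton matched. A runnable sketch of that logic; the pattern is an assumption approximating the module's `COMPARE_SINGLETON_REGEX` (an equality or inequality operator followed by `None`, `True` or `False`):

```python
import re

# Assumed shape of pep8.py's COMPARE_SINGLETON_REGEX.
COMPARE_SINGLETON_REGEX = re.compile(r'([=!]=)\s*(None|False|True)')

m = COMPARE_SINGLETON_REGEX.search('if arg != None:')
same = (m.group(1) == '==')
# Same message construction as in the check above.
msg = "'if cond is %s:'" % (('' if same else 'not ') + m.group(2))
print(m.start(1), m.group(2))  # 7 None
print(msg)                     # 'if cond is not None:'
```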
+
+
+def comparison_type(logical_line):
+ """
+ Object type comparisons should always use isinstance() instead of
+ comparing types directly.
+
+ Okay: if isinstance(obj, int):
+ E721: if type(obj) is type(1):
+
+ When checking if an object is a string, keep in mind that it might be a
+ unicode string too! In Python 2.3, str and unicode have a common base
+ class, basestring, so you can do:
+
+ Okay: if isinstance(obj, basestring):
+ Okay: if type(a1) is type(b1):
+ """
+ match = COMPARE_TYPE_REGEX.search(logical_line)
+ if match:
+ inst = match.group(1)
+ if inst and isidentifier(inst) and inst not in SINGLETONS:
+ return # Allow comparison for types which are not obvious
+ yield match.start(0), "E721 do not compare types, use 'isinstance()'"
+
+
+def python_3000_has_key(logical_line):
+ r"""
+    The {}.has_key() method is removed in Python 3.
+    Use the 'in' operator instead.
+
+ Okay: if "alph" in d:\n print d["alph"]
+ W601: assert d.has_key('alph')
+ """
+ pos = logical_line.find('.has_key(')
+ if pos > -1:
+ yield pos, "W601 .has_key() is deprecated, use 'in'"
+
+
+def python_3000_raise_comma(logical_line):
+ """
+ When raising an exception, use "raise ValueError('message')"
+ instead of the older form "raise ValueError, 'message'".
+
+ The paren-using form is preferred because when the exception arguments
+ are long or include string formatting, you don't need to use line
+ continuation characters thanks to the containing parentheses. The older
+ form is removed in Python 3.
+
+ Okay: raise DummyError("Message")
+ W602: raise DummyError, "Message"
+ """
+ match = RAISE_COMMA_REGEX.match(logical_line)
+ if match and not RERAISE_COMMA_REGEX.match(logical_line):
+ yield match.start(1), "W602 deprecated form of raising exception"
+
+
+def python_3000_not_equal(logical_line):
+ """
+ != can also be written <>, but this is an obsolete usage kept for
+ backwards compatibility only. New code should always use !=.
+ The older syntax is removed in Python 3.
+
+ Okay: if a != 'no':
+ W603: if a <> 'no':
+ """
+ pos = logical_line.find('<>')
+ if pos > -1:
+ yield pos, "W603 '<>' is deprecated, use '!='"
+
+
+def python_3000_backticks(logical_line):
+ """
+ Backticks are removed in Python 3.
+ Use repr() instead.
+
+ Okay: val = repr(1 + 2)
+ W604: val = `1 + 2`
+ """
+ pos = logical_line.find('`')
+ if pos > -1:
+ yield pos, "W604 backticks are deprecated, use 'repr()'"
+
+
+##############################################################################
+# Helper functions
+##############################################################################
+
+
+if '' == ''.encode():
+ # Python 2: implicit encoding.
+ def readlines(filename):
+ f = open(filename)
+ try:
+ return f.readlines()
+ finally:
+ f.close()
+
+ isidentifier = re.compile(r'[a-zA-Z_]\w*').match
+ stdin_get_value = sys.stdin.read
+else:
+ # Python 3
+ def readlines(filename):
+ f = open(filename, 'rb')
+ try:
+ coding, lines = tokenize.detect_encoding(f.readline)
+ f = TextIOWrapper(f, coding, line_buffering=True)
+ return [l.decode(coding) for l in lines] + f.readlines()
+ except (LookupError, SyntaxError, UnicodeError):
+ f.close()
+ # Fall back if files are improperly declared
+ f = open(filename, encoding='latin-1')
+ return f.readlines()
+ finally:
+ f.close()
+
+ isidentifier = str.isidentifier
+
+ def stdin_get_value():
+ return TextIOWrapper(sys.stdin.buffer, errors='ignore').read()
+readlines.__doc__ = " Read the source code."
+noqa = re.compile(r'# no(?:qa|pep8)\b', re.I).search
+
+
+def expand_indent(line):
+ r"""
+ Return the amount of indentation.
+ Tabs are expanded to the next multiple of 8.
+
+ >>> expand_indent(' ')
+ 4
+ >>> expand_indent('\t')
+ 8
+ >>> expand_indent(' \t')
+ 8
+ >>> expand_indent(' \t')
+ 8
+ >>> expand_indent(' \t')
+ 16
+ """
+ if '\t' not in line:
+ return len(line) - len(line.lstrip())
+ result = 0
+ for char in line:
+ if char == '\t':
+ result = result // 8 * 8 + 8
+ elif char == ' ':
+ result += 1
+ else:
+ break
+ return result
+
+
+def mute_string(text):
+ """
+ Replace contents with 'xxx' to prevent syntax matching.
+
+ >>> mute_string('"abc"')
+ '"xxx"'
+ >>> mute_string("'''abc'''")
+ "'''xxx'''"
+ >>> mute_string("r'abc'")
+ "r'xxx'"
+ """
+ # String modifiers (e.g. u or r)
+ start = text.index(text[-1]) + 1
+ end = len(text) - 1
+ # Triple quotes
+ if text[-3:] in ('"""', "'''"):
+ start += 2
+ end -= 2
+ return text[:start] + 'x' * (end - start) + text[end:]
+
+
+def parse_udiff(diff, patterns=None, parent='.'):
+ """Return a dictionary of matching lines."""
+ # For each file of the diff, the entry key is the filename,
+ # and the value is a set of row numbers to consider.
+ rv = {}
+ path = nrows = None
+ for line in diff.splitlines():
+ if nrows:
+ if line[:1] != '-':
+ nrows -= 1
+ continue
+ if line[:3] == '@@ ':
+ hunk_match = HUNK_REGEX.match(line)
+ row, nrows = [int(g or '1') for g in hunk_match.groups()]
+ rv[path].update(range(row, row + nrows))
+ elif line[:3] == '+++':
+ path = line[4:].split('\t', 1)[0]
+ if path[:2] == 'b/':
+ path = path[2:]
+ rv[path] = set()
+ return dict([(os.path.join(parent, path), rows)
+ for (path, rows) in rv.items()
+ if rows and filename_match(path, patterns)])
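parse_udiff reads each `@@` hunk header of a unified diff to find which rows of the new file to check, defaulting a missing row count to 1. A standalone sketch of that parsing; the pattern is an assumption approximating the module's `HUNK_REGEX` (capturing the `+<row>[,<count>]` half of the header):

```python
import re

# Assumed shape of pep8.py's HUNK_REGEX.
HUNK_REGEX = re.compile(r'^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@')

# A full header: start at row 12, 6 rows in the hunk.
m = HUNK_REGEX.match('@@ -10,4 +12,6 @@ def foo():')
row, nrows = [int(g or '1') for g in m.groups()]
print(row, nrows)  # 12 6

# A single-line hunk omits the count, which defaults to 1.
m = HUNK_REGEX.match('@@ -3 +5 @@')
print([int(g or '1') for g in m.groups()])  # [5, 1]
```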
+
+
+def filename_match(filename, patterns, default=True):
+ """
+ Check if patterns contains a pattern that matches filename.
+ If patterns is unspecified, this always returns True.
+ """
+ if not patterns:
+ return default
+ return any(fnmatch(filename, pattern) for pattern in patterns)
+
+
+##############################################################################
+# Framework to run all checks
+##############################################################################
+
+
+_checks = {'physical_line': {}, 'logical_line': {}, 'tree': {}}
+
+
+def register_check(check, codes=None):
+ """
+ Register a new check object.
+ """
+ def _add_check(check, kind, codes, args):
+ if check in _checks[kind]:
+ _checks[kind][check][0].extend(codes or [])
+ else:
+ _checks[kind][check] = (codes or [''], args)
+ if inspect.isfunction(check):
+ args = inspect.getargspec(check)[0]
+ if args and args[0] in ('physical_line', 'logical_line'):
+ if codes is None:
+ codes = ERRORCODE_REGEX.findall(check.__doc__ or '')
+ _add_check(check, args[0], codes, args)
+ elif inspect.isclass(check):
+ if inspect.getargspec(check.__init__)[0][:2] == ['self', 'tree']:
+ _add_check(check, 'tree', codes, None)
+
+
+def init_checks_registry():
+ """
+ Register all globally visible functions where the first argument name
+ is 'physical_line' or 'logical_line'.
+ """
+ mod = inspect.getmodule(register_check)
+ for (name, function) in inspect.getmembers(mod, inspect.isfunction):
+ register_check(function)
+init_checks_registry()
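Registration keys each plugin by the name of its first parameter and scrapes the error codes from its docstring. A sketch of that introspection; `ERRORCODE_REGEX` here is an assumption about the module constant's shape, and `inspect.signature` stands in for the `getargspec` call used above (which is gone from recent Pythons):

```python
import inspect
import re

# Assumed shape of pep8.py's ERRORCODE_REGEX: codes like E501 or W291.
ERRORCODE_REGEX = re.compile(r'\b[A-Z]\d{3}\b')

def my_check(physical_line):
    """W999 demo check: never reports anything."""

# The first parameter name decides the plugin kind ...
kind = list(inspect.signature(my_check).parameters)[0]
# ... and the docstring supplies the codes the plugin can report.
codes = ERRORCODE_REGEX.findall(my_check.__doc__)
print(kind, codes)  # physical_line ['W999']
```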
+
+
+class Checker(object):
+ """
+ Load a Python source file, tokenize it, check coding style.
+ """
+
+ def __init__(self, filename=None, lines=None,
+ options=None, report=None, **kwargs):
+ if options is None:
+ options = StyleGuide(kwargs).options
+ else:
+ assert not kwargs
+ self._io_error = None
+ self._physical_checks = options.physical_checks
+ self._logical_checks = options.logical_checks
+ self._ast_checks = options.ast_checks
+ self.max_line_length = options.max_line_length
+ self.verbose = options.verbose
+ self.filename = filename
+ if filename is None:
+ self.filename = 'stdin'
+ self.lines = lines or []
+ elif filename == '-':
+ self.filename = 'stdin'
+ self.lines = stdin_get_value().splitlines(True)
+ elif lines is None:
+ try:
+ self.lines = readlines(filename)
+ except IOError:
+ exc_type, exc = sys.exc_info()[:2]
+ self._io_error = '%s: %s' % (exc_type.__name__, exc)
+ self.lines = []
+ else:
+ self.lines = lines
+ self.report = report or options.report
+ self.report_error = self.report.error
+
+ def report_invalid_syntax(self):
+ exc_type, exc = sys.exc_info()[:2]
+ offset = exc.args[1]
+ if len(offset) > 2:
+ offset = offset[1:3]
+ self.report_error(offset[0], offset[1] or 0,
+ 'E901 %s: %s' % (exc_type.__name__, exc.args[0]),
+ self.report_invalid_syntax)
+ report_invalid_syntax.__doc__ = " Check if the syntax is valid."
+
+ def readline(self):
+ """
+ Get the next line from the input buffer.
+ """
+ self.line_number += 1
+ if self.line_number > len(self.lines):
+ return ''
+ return self.lines[self.line_number - 1]
+
+ def readline_check_physical(self):
+ """
+ Check and return the next physical line. This method can be
+ used to feed tokenize.generate_tokens.
+ """
+ line = self.readline()
+ if line:
+ self.check_physical(line)
+ return line
+
+ def run_check(self, check, argument_names):
+ """
+ Run a check plugin.
+ """
+ arguments = []
+ for name in argument_names:
+ arguments.append(getattr(self, name))
+ return check(*arguments)
+
+ def check_physical(self, line):
+ """
+ Run all physical checks on a raw input line.
+ """
+ self.physical_line = line
+ if self.indent_char is None and line[:1] in WHITESPACE:
+ self.indent_char = line[0]
+ for name, check, argument_names in self._physical_checks:
+ result = self.run_check(check, argument_names)
+ if result is not None:
+ offset, text = result
+ self.report_error(self.line_number, offset, text, check)
+
+ def build_tokens_line(self):
+ """
+ Build a logical line from tokens.
+ """
+ self.mapping = []
+ logical = []
+ length = 0
+ previous = None
+ for token in self.tokens:
+ token_type, text = token[0:2]
+ if token_type in SKIP_TOKENS:
+ continue
+ if token_type == tokenize.STRING:
+ text = mute_string(text)
+ if previous:
+ end_row, end = previous[3]
+ start_row, start = token[2]
+ if end_row != start_row: # different row
+ prev_text = self.lines[end_row - 1][end - 1]
+ if prev_text == ',' or (prev_text not in '{[('
+ and text not in '}])'):
+ logical.append(' ')
+ length += 1
+ elif end != start: # different column
+ fill = self.lines[end_row - 1][end:start]
+ logical.append(fill)
+ length += len(fill)
+ self.mapping.append((length, token))
+ logical.append(text)
+ length += len(text)
+ previous = token
+ self.logical_line = ''.join(logical)
+ # With Python 2, if the line ends with '\r\r\n' the assertion fails
+ # assert self.logical_line.strip() == self.logical_line
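`build_tokens_line` above masks string literals with `mute_string` before assembling the logical line, so logical checks never trip on quote or bracket characters inside string bodies. A minimal sketch of such a helper follows — an assumption about its behavior, since the real definition sits earlier in the file and also handles prefixed literals:

```python
def mute_string(text):
    """Replace a string literal's body with 'x' characters: 'abc' -> 'xxx'.

    Quotes (including triple quotes) are preserved so offsets stay aligned.
    """
    # position just past the opening quote(s)
    start = text.index(text[-1]) + 1
    end = len(text) - 1
    if text[-3:] in ('"""', "'''"):
        start += 2
        end -= 2
    return text[:start] + 'x' * (end - start) + text[end:]

print(mute_string("'abc'"))        # 'xxx'
print(mute_string('"""abc"""'))    # """xxx"""
```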
+
+ def check_logical(self):
+ """
+ Build a line from tokens and run all logical checks on it.
+ """
+ self.build_tokens_line()
+ self.report.increment_logical_line()
+ first_line = self.lines[self.mapping[0][1][2][0] - 1]
+ indent = first_line[:self.mapping[0][1][2][1]]
+ self.previous_indent_level = self.indent_level
+ self.indent_level = expand_indent(indent)
+ if self.verbose >= 2:
+ print(self.logical_line[:80].rstrip())
+ for name, check, argument_names in self._logical_checks:
+ if self.verbose >= 4:
+ print(' ' + name)
+ for result in self.run_check(check, argument_names):
+ offset, text = result
+ if isinstance(offset, tuple):
+ orig_number, orig_offset = offset
+ else:
+ for token_offset, token in self.mapping:
+ if offset >= token_offset:
+ orig_number = token[2][0]
+ orig_offset = (token[2][1] + offset - token_offset)
+ self.report_error(orig_number, orig_offset, text, check)
+ self.previous_logical = self.logical_line
+
+ def check_ast(self):
+ try:
+ tree = compile(''.join(self.lines), '', 'exec', PyCF_ONLY_AST)
+ except SyntaxError:
+ return self.report_invalid_syntax()
+ for name, cls, _ in self._ast_checks:
+ checker = cls(tree, self.filename)
+ for lineno, offset, text, check in checker.run():
+ if not noqa(self.lines[lineno - 1]):
+ self.report_error(lineno, offset, text, check)
+
+ def generate_tokens(self):
+ if self._io_error:
+ self.report_error(1, 0, 'E902 %s' % self._io_error, readlines)
+ tokengen = tokenize.generate_tokens(self.readline_check_physical)
+ try:
+ for token in tokengen:
+ yield token
+ except (SyntaxError, tokenize.TokenError):
+ self.report_invalid_syntax()
+
+ def check_all(self, expected=None, line_offset=0):
+ """
+ Run all checks on the input file.
+ """
+ self.report.init_file(self.filename, self.lines, expected, line_offset)
+ if self._ast_checks:
+ self.check_ast()
+ self.line_number = 0
+ self.indent_char = None
+ self.indent_level = 0
+ self.previous_logical = ''
+ self.tokens = []
+ self.blank_lines = blank_lines_before_comment = 0
+ parens = 0
+ for token in self.generate_tokens():
+ self.tokens.append(token)
+ token_type, text = token[0:2]
+ if self.verbose >= 3:
+ if token[2][0] == token[3][0]:
+ pos = '[%s:%s]' % (token[2][1] or '', token[3][1])
+ else:
+ pos = 'l.%s' % token[3][0]
+ print('l.%s\t%s\t%s\t%r' %
+ (token[2][0], pos, tokenize.tok_name[token[0]], text))
+ if token_type == tokenize.OP:
+ if text in '([{':
+ parens += 1
+ elif text in '}])':
+ parens -= 1
+ elif not parens:
+ if token_type == tokenize.NEWLINE:
+ if self.blank_lines < blank_lines_before_comment:
+ self.blank_lines = blank_lines_before_comment
+ self.check_logical()
+ self.tokens = []
+ self.blank_lines = blank_lines_before_comment = 0
+ elif token_type == tokenize.NL:
+ if len(self.tokens) == 1:
+ # The physical line contains only this token.
+ self.blank_lines += 1
+ self.tokens = []
+ elif token_type == tokenize.COMMENT and len(self.tokens) == 1:
+ if blank_lines_before_comment < self.blank_lines:
+ blank_lines_before_comment = self.blank_lines
+ self.blank_lines = 0
+ if COMMENT_WITH_NL:
+ # The comment also ends a physical line
+ self.tokens = []
+ return self.report.get_file_results()
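The main loop of `check_all` distinguishes `NEWLINE` (ends a logical line) from `NL` (a cosmetic break inside brackets, or a blank line), flushing the token buffer only on the former. That distinction can be observed with the stdlib tokenizer alone; this is an illustrative sketch, not part of the checker:

```python
import io
import tokenize

def count_logical_lines(source):
    """Count statements the way the loop above does: NEWLINE ends a
    logical line, NL (inside brackets, or on blank lines) does not."""
    count = 0
    readline = io.StringIO(source).readline
    for token in tokenize.generate_tokens(readline):
        if token[0] == tokenize.NEWLINE:
            count += 1
    return count

src = "x = (1 +\n     2)\n\ny = 3\n"
print(count_logical_lines(src))  # 2 -- the continuation and blank line emit NL
```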
+
+
+class BaseReport(object):
+ """Collect the results of the checks."""
+ print_filename = False
+
+ def __init__(self, options):
+ self._benchmark_keys = options.benchmark_keys
+ self._ignore_code = options.ignore_code
+ # Results
+ self.elapsed = 0
+ self.total_errors = 0
+ self.counters = dict.fromkeys(self._benchmark_keys, 0)
+ self.messages = {}
+
+ def start(self):
+ """Start the timer."""
+ self._start_time = time.time()
+
+ def stop(self):
+ """Stop the timer."""
+ self.elapsed = time.time() - self._start_time
+
+ def init_file(self, filename, lines, expected, line_offset):
+ """Signal a new file."""
+ self.filename = filename
+ self.lines = lines
+ self.expected = expected or ()
+ self.line_offset = line_offset
+ self.file_errors = 0
+ self.counters['files'] += 1
+ self.counters['physical lines'] += len(lines)
+
+ def increment_logical_line(self):
+ """Signal a new logical line."""
+ self.counters['logical lines'] += 1
+
+ def error(self, line_number, offset, text, check):
+ """Report an error, according to options."""
+ code = text[:4]
+ if self._ignore_code(code):
+ return
+ if code in self.counters:
+ self.counters[code] += 1
+ else:
+ self.counters[code] = 1
+ self.messages[code] = text[5:]
+ # Don't care about expected errors or warnings
+ if code in self.expected:
+ return
+ if self.print_filename and not self.file_errors:
+ print(self.filename)
+ self.file_errors += 1
+ self.total_errors += 1
+ return code
+
+ def get_file_results(self):
+ """Return the count of errors and warnings for this file."""
+ return self.file_errors
+
+ def get_count(self, prefix=''):
+ """Return the total count of errors and warnings."""
+ return sum([self.counters[key]
+ for key in self.messages if key.startswith(prefix)])
+
+ def get_statistics(self, prefix=''):
+ """
+ Get statistics for message codes that start with the prefix.
+
+ prefix='' matches all errors and warnings
+ prefix='E' matches all errors
+ prefix='W' matches all warnings
+ prefix='E4' matches all errors that have to do with imports
+ """
+ return ['%-7s %s %s' % (self.counters[key], key, self.messages[key])
+ for key in sorted(self.messages) if key.startswith(prefix)]
+
+ def print_statistics(self, prefix=''):
+ """Print overall statistics (number of errors and warnings)."""
+ for line in self.get_statistics(prefix):
+ print(line)
+
+ def print_benchmark(self):
+ """Print benchmark numbers."""
+ print('%-7.2f %s' % (self.elapsed, 'seconds elapsed'))
+ if self.elapsed:
+ for key in self._benchmark_keys:
+ print('%-7d %s per second (%d total)' %
+ (self.counters[key] / self.elapsed, key,
+ self.counters[key]))
+
+
+class FileReport(BaseReport):
+ """Collect the results of the checks and print only the filenames."""
+ print_filename = True
+
+
+class StandardReport(BaseReport):
+ """Collect and print the results of the checks."""
+
+ def __init__(self, options):
+ super(StandardReport, self).__init__(options)
+ self._fmt = REPORT_FORMAT.get(options.format.lower(),
+ options.format)
+ self._repeat = options.repeat
+ self._show_source = options.show_source
+ self._show_pep8 = options.show_pep8
+
+ def init_file(self, filename, lines, expected, line_offset):
+ """Signal a new file."""
+ self._deferred_print = []
+ return super(StandardReport, self).init_file(
+ filename, lines, expected, line_offset)
+
+ def error(self, line_number, offset, text, check):
+ """Report an error, according to options."""
+ code = super(StandardReport, self).error(line_number, offset,
+ text, check)
+ if code and (self.counters[code] == 1 or self._repeat):
+ self._deferred_print.append(
+ (line_number, offset, code, text[5:], check.__doc__))
+ return code
+
+ def get_file_results(self):
+ """Print the result and return the overall count for this file."""
+ self._deferred_print.sort()
+ for line_number, offset, code, text, doc in self._deferred_print:
+ print(self._fmt % {
+ 'path': self.filename,
+ 'row': self.line_offset + line_number, 'col': offset + 1,
+ 'code': code, 'text': text,
+ })
+ if self._show_source:
+ if line_number > len(self.lines):
+ line = ''
+ else:
+ line = self.lines[line_number - 1]
+ print(line.rstrip())
+ print(' ' * offset + '^')
+ if self._show_pep8 and doc:
+ print(doc.lstrip('\n').rstrip())
+ return self.file_errors
+
+
+class DiffReport(StandardReport):
+ """Collect and print the results for the changed lines only."""
+
+ def __init__(self, options):
+ super(DiffReport, self).__init__(options)
+ self._selected = options.selected_lines
+
+ def error(self, line_number, offset, text, check):
+ if line_number not in self._selected[self.filename]:
+ return
+ return super(DiffReport, self).error(line_number, offset, text, check)
+
+
+class StyleGuide(object):
+ """Initialize a PEP-8 instance with few options."""
+
+ def __init__(self, *args, **kwargs):
+ # build options from the command line
+ self.checker_class = kwargs.pop('checker_class', Checker)
+ parse_argv = kwargs.pop('parse_argv', False)
+ config_file = kwargs.pop('config_file', None)
+ parser = kwargs.pop('parser', None)
+ options, self.paths = process_options(
+ parse_argv=parse_argv, config_file=config_file, parser=parser)
+ if args or kwargs:
+ # build options from dict
+ options_dict = dict(*args, **kwargs)
+ options.__dict__.update(options_dict)
+ if 'paths' in options_dict:
+ self.paths = options_dict['paths']
+
+ self.runner = self.input_file
+ self.options = options
+
+ if not options.reporter:
+ options.reporter = BaseReport if options.quiet else StandardReport
+
+ for index, value in enumerate(options.exclude):
+ options.exclude[index] = value.rstrip('/')
+ # Ignore all checks which are not explicitly selected
+ options.select = tuple(options.select or ())
+ options.ignore = tuple(options.ignore or options.select and ('',))
+ options.benchmark_keys = BENCHMARK_KEYS[:]
+ options.ignore_code = self.ignore_code
+ options.physical_checks = self.get_checks('physical_line')
+ options.logical_checks = self.get_checks('logical_line')
+ options.ast_checks = self.get_checks('tree')
+ self.init_report()
+
+ def init_report(self, reporter=None):
+ """Initialize the report instance."""
+ self.options.report = (reporter or self.options.reporter)(self.options)
+ return self.options.report
+
+ def check_files(self, paths=None):
+ """Run all checks on the paths."""
+ if paths is None:
+ paths = self.paths
+ report = self.options.report
+ runner = self.runner
+ report.start()
+ try:
+ for path in paths:
+ if os.path.isdir(path):
+ self.input_dir(path)
+ elif not self.excluded(path):
+ runner(path)
+ except KeyboardInterrupt:
+ print('... stopped')
+ report.stop()
+ return report
+
+ def input_file(self, filename, lines=None, expected=None, line_offset=0):
+ """Run all checks on a Python source file."""
+ if self.options.verbose:
+ print('checking %s' % filename)
+ fchecker = self.checker_class(
+ filename, lines=lines, options=self.options)
+ return fchecker.check_all(expected=expected, line_offset=line_offset)
+
+ def input_dir(self, dirname):
+ """Check all files in this directory and all subdirectories."""
+ dirname = dirname.rstrip('/')
+ if self.excluded(dirname):
+ return 0
+ counters = self.options.report.counters
+ verbose = self.options.verbose
+ filepatterns = self.options.filename
+ runner = self.runner
+ for root, dirs, files in os.walk(dirname):
+ if verbose:
+ print('directory ' + root)
+ counters['directories'] += 1
+ for subdir in sorted(dirs):
+ if self.excluded(os.path.join(root, subdir)):
+ dirs.remove(subdir)
+ for filename in sorted(files):
+ # does any filename pattern match, and is the file not excluded?
+ if ((filename_match(filename, filepatterns) and
+ not self.excluded(filename))):
+ runner(os.path.join(root, filename))
+
+ def excluded(self, filename):
+ """
+ Check if options.exclude contains a pattern that matches filename.
+ """
+ basename = os.path.basename(filename)
+ return any((filename_match(filename, self.options.exclude,
+ default=False),
+ filename_match(basename, self.options.exclude,
+ default=False)))
+
+ def ignore_code(self, code):
+ """
+ Check if the error code should be ignored.
+
+ If 'options.select' contains a prefix of the error code,
+ return False. Else, if 'options.ignore' contains a prefix of
+ the error code, return True.
+ """
+ return (code.startswith(self.options.ignore) and
+ not code.startswith(self.options.select))
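The precedence encoded above is: a matching `select` prefix always wins over a matching `ignore` prefix, and both comparisons rely on `str.startswith` accepting a tuple of prefixes. A standalone sketch of the same rule:

```python
def is_ignored(code, select=(), ignore=()):
    """Mirror the rule above: ignored unless a select prefix matches.

    str.startswith(()) is False, so an empty tuple matches nothing.
    """
    return (code.startswith(tuple(ignore)) and
            not code.startswith(tuple(select)))

print(is_ignored("E501", ignore=("E",)))                   # True
print(is_ignored("E501", select=("E5",), ignore=("E",)))   # False
```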
+
+ def get_checks(self, argument_name):
+ """
+ Find all globally visible functions where the first argument name
+ starts with argument_name and which contain selected tests.
+ """
+ checks = []
+ for check, attrs in _checks[argument_name].items():
+ (codes, args) = attrs
+ if any(not (code and self.ignore_code(code)) for code in codes):
+ checks.append((check.__name__, check, args))
+ return sorted(checks)
+
+
+def get_parser(prog='pep8', version=__version__):
+ parser = OptionParser(prog=prog, version=version,
+ usage="%prog [options] input ...")
+ parser.config_options = [
+ 'exclude', 'filename', 'select', 'ignore', 'max-line-length', 'count',
+ 'format', 'quiet', 'show-pep8', 'show-source', 'statistics', 'verbose']
+ parser.add_option('-v', '--verbose', default=0, action='count',
+ help="print status messages, or debug with -vv")
+ parser.add_option('-q', '--quiet', default=0, action='count',
+ help="report only file names, or nothing with -qq")
+ parser.add_option('-r', '--repeat', default=True, action='store_true',
+ help="(obsolete) show all occurrences of the same error")
+ parser.add_option('--first', action='store_false', dest='repeat',
+ help="show first occurrence of each error")
+ parser.add_option('--exclude', metavar='patterns', default=DEFAULT_EXCLUDE,
+ help="exclude files or directories which match these "
+ "comma separated patterns (default: %default)")
+ parser.add_option('--filename', metavar='patterns', default='*.py',
+ help="when parsing directories, only check filenames "
+ "matching these comma separated patterns "
+ "(default: %default)")
+ parser.add_option('--select', metavar='errors', default='',
+ help="select errors and warnings (e.g. E,W6)")
+ parser.add_option('--ignore', metavar='errors', default='',
+ help="skip errors and warnings (e.g. E4,W)")
+ parser.add_option('--show-source', action='store_true',
+ help="show source code for each error")
+ parser.add_option('--show-pep8', action='store_true',
+ help="show text of PEP 8 for each error "
+ "(implies --first)")
+ parser.add_option('--statistics', action='store_true',
+ help="count errors and warnings")
+ parser.add_option('--count', action='store_true',
+ help="print total number of errors and warnings "
+ "to standard error and set exit code to 1 if "
+ "total is not null")
+ parser.add_option('--max-line-length', type='int', metavar='n',
+ default=MAX_LINE_LENGTH,
+ help="set maximum allowed line length "
+ "(default: %default)")
+ parser.add_option('--format', metavar='format', default='default',
+ help="set the error format [default|pylint|<custom>]")
+ parser.add_option('--diff', action='store_true',
+ help="report only lines changed according to the "
+ "unified diff received on STDIN")
+ group = parser.add_option_group("Testing Options")
+ if os.path.exists(TESTSUITE_PATH):
+ group.add_option('--testsuite', metavar='dir',
+ help="run regression tests from dir")
+ group.add_option('--doctest', action='store_true',
+ help="run doctest on myself")
+ group.add_option('--benchmark', action='store_true',
+ help="measure processing speed")
+ return parser
+
+
+def read_config(options, args, arglist, parser):
+ """Read both user configuration and local configuration."""
+ config = RawConfigParser()
+
+ user_conf = options.config
+ if user_conf and os.path.isfile(user_conf):
+ if options.verbose:
+ print('user configuration: %s' % user_conf)
+ config.read(user_conf)
+
+ parent = tail = args and os.path.abspath(os.path.commonprefix(args))
+ while tail:
+ for name in PROJECT_CONFIG:
+ local_conf = os.path.join(parent, name)
+ if os.path.isfile(local_conf):
+ break
+ else:
+ parent, tail = os.path.split(parent)
+ continue
+ if options.verbose:
+ print('local configuration: %s' % local_conf)
+ config.read(local_conf)
+ break
+
+ pep8_section = parser.prog
+ if config.has_section(pep8_section):
+ option_list = dict([(o.dest, o.type or o.action)
+ for o in parser.option_list])
+
+ # First, read the default values
+ new_options, _ = parser.parse_args([])
+
+ # Second, parse the configuration
+ for opt in config.options(pep8_section):
+ if options.verbose > 1:
+ print(" %s = %s" % (opt, config.get(pep8_section, opt)))
+ if opt.replace('_', '-') not in parser.config_options:
+ print("Unknown option: '%s'\n not in [%s]" %
+ (opt, ' '.join(parser.config_options)))
+ sys.exit(1)
+ normalized_opt = opt.replace('-', '_')
+ opt_type = option_list[normalized_opt]
+ if opt_type in ('int', 'count'):
+ value = config.getint(pep8_section, opt)
+ elif opt_type == 'string':
+ value = config.get(pep8_section, opt)
+ else:
+ assert opt_type in ('store_true', 'store_false')
+ value = config.getboolean(pep8_section, opt)
+ setattr(new_options, normalized_opt, value)
+
+ # Third, overwrite with the command-line options
+ options, _ = parser.parse_args(arglist, values=new_options)
+ options.doctest = options.testsuite = False
+ return options
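The lookup above layers three sources: parser defaults, then a `[pep8]` section found in a user or project config file, then the command line. A self-contained sketch of the middle layer using a throwaway config file (the file name and option value here are illustrative):

```python
import os
import tempfile
try:
    from configparser import RawConfigParser   # Python 3
except ImportError:
    from ConfigParser import RawConfigParser   # Python 2

# write a throwaway project config with a [pep8] section
fd, path = tempfile.mkstemp(suffix=".cfg")
with os.fdopen(fd, "w") as f:
    f.write("[pep8]\nmax-line-length = 120\n")

config = RawConfigParser()
config.read(path)
value = config.getint("pep8", "max-line-length")
print(value)  # 120
os.unlink(path)
```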
+
+
+def process_options(arglist=None, parse_argv=False, config_file=None,
+ parser=None):
+ """Process options passed either via arglist or via command line args."""
+ if not arglist and not parse_argv:
+ # Don't read the command line if the module is used as a library.
+ arglist = []
+ if not parser:
+ parser = get_parser()
+ if not parser.has_option('--config'):
+ if config_file is True:
+ config_file = DEFAULT_CONFIG
+ group = parser.add_option_group("Configuration", description=(
+ "The project options are read from the [%s] section of the "
+ "tox.ini file or the setup.cfg file located in any parent folder "
+ "of the path(s) being processed. Allowed options are: %s." %
+ (parser.prog, ', '.join(parser.config_options))))
+ group.add_option('--config', metavar='path', default=config_file,
+ help="user config file location (default: %default)")
+ options, args = parser.parse_args(arglist)
+ options.reporter = None
+
+ if options.ensure_value('testsuite', False):
+ args.append(options.testsuite)
+ elif not options.ensure_value('doctest', False):
+ if parse_argv and not args:
+ if options.diff or any(os.path.exists(name)
+ for name in PROJECT_CONFIG):
+ args = ['.']
+ else:
+ parser.error('input not specified')
+ options = read_config(options, args, arglist, parser)
+ options.reporter = parse_argv and options.quiet == 1 and FileReport
+
+ if options.filename:
+ options.filename = options.filename.split(',')
+ options.exclude = options.exclude.split(',')
+ if options.select:
+ options.select = options.select.split(',')
+ if options.ignore:
+ options.ignore = options.ignore.split(',')
+ elif not (options.select or
+ options.testsuite or options.doctest) and DEFAULT_IGNORE:
+ # The default choice: ignore controversial checks
+ # (for doctest and testsuite, all checks are required)
+ options.ignore = DEFAULT_IGNORE.split(',')
+
+ if options.diff:
+ options.reporter = DiffReport
+ stdin = stdin_get_value()
+ options.selected_lines = parse_udiff(stdin, options.filename, args[0])
+ args = sorted(options.selected_lines)
+
+ return options, args
+
+
+def _main():
+ """Parse options and run checks on Python source."""
+ pep8style = StyleGuide(parse_argv=True, config_file=True)
+ options = pep8style.options
+ if options.doctest or options.testsuite:
+ sys.path[:0] = [TESTSUITE_PATH]
+ from test_pep8 import run_tests
+ del sys.path[0]
+ report = run_tests(pep8style, options.doctest, options.testsuite)
+ else:
+ report = pep8style.check_files()
+ if options.statistics:
+ report.print_statistics()
+ if options.benchmark:
+ report.print_benchmark()
+ if options.testsuite and not options.quiet:
+ report.print_results()
+ if report.total_errors:
+ if options.count:
+ sys.stderr.write(str(report.total_errors) + '\n')
+ sys.exit(1)
+
+if __name__ == '__main__':
+ _main()
diff --git a/python/helpers/pip-1.4.1.tar.gz b/python/helpers/pip-1.4.1.tar.gz
new file mode 100644
index 0000000..f56454f
--- /dev/null
+++ b/python/helpers/pip-1.4.1.tar.gz
Binary files differ
diff --git a/python/helpers/pycharm/.gitignore b/python/helpers/pycharm/.gitignore
new file mode 100644
index 0000000..6b468b6
--- /dev/null
+++ b/python/helpers/pycharm/.gitignore
@@ -0,0 +1 @@
+*.class
diff --git a/python/helpers/pycharm/__init__.py b/python/helpers/pycharm/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/python/helpers/pycharm/__init__.py
diff --git a/python/helpers/pycharm/attestrunner.py b/python/helpers/pycharm/attestrunner.py
new file mode 100644
index 0000000..2bbd84f
--- /dev/null
+++ b/python/helpers/pycharm/attestrunner.py
@@ -0,0 +1,266 @@
+import sys, os
+import imp
+
+helpers_dir = os.getenv("PYCHARM_HELPERS_DIR", sys.path[0])
+if sys.path[0] != helpers_dir:
+ sys.path.insert(0, helpers_dir)
+
+from tcunittest import TeamcityTestResult
+
+from pycharm_run_utils import import_system_module
+from pycharm_run_utils import adjust_sys_path
+from pycharm_run_utils import debug, getModuleName
+
+adjust_sys_path()
+
+re = import_system_module("re")
+inspect = import_system_module("inspect")
+
+try:
+ from attest.reporters import AbstractReporter
+ from attest.collectors import Tests
+ from attest import TestBase
+except ImportError:
+ raise NameError("Please install the attest package")
+
+class TeamCityReporter(AbstractReporter, TeamcityTestResult):
+ """Teamcity reporter for attests."""
+
+ def __init__(self, prefix):
+ TeamcityTestResult.__init__(self)
+ self.prefix = prefix
+
+ def begin(self, tests):
+ """initialize suite stack and count tests"""
+ self.total = len(tests)
+ self.suite_stack = []
+ self.messages.testCount(self.total)
+
+ def success(self, result):
+ """called when test finished successfully"""
+ suite = self.get_suite_name(result.test)
+ self.start_suite(suite)
+ name = self.get_test_name(result)
+ self.start_test(result, name)
+ self.messages.testFinished(name)
+
+ def failure(self, result):
+ """called when test failed"""
+ suite = self.get_suite_name(result.test)
+ self.start_suite(suite)
+ name = self.get_test_name(result)
+ self.start_test(result, name)
+ exctype, value, tb = result.exc_info
+ error_value = self.find_error_value(tb)
+ if (error_value.startswith("'") or error_value.startswith('"')) and\
+ (error_value.endswith("'") or error_value.endswith('"')):
+ first = self._unescape(self.find_first(error_value))
+ second = self._unescape(self.find_second(error_value))
+ else:
+ first = second = ""
+
+ err = self.formatErr(result.exc_info)
+ if isinstance(result.error, AssertionError):
+ self.messages.testFailed(name, message='Failure',
+ details=err,
+ expected=first, actual=second)
+ else:
+ self.messages.testError(name, message='Error',
+ details=err)
+
+ def finished(self):
+ """called when all tests finished"""
+ self.end_last_suite()
+ for suite in self.suite_stack[::-1]:
+ self.messages.testSuiteFinished(suite)
+
+ def get_test_name(self, result):
+ name = result.test_name
+ ind = name.find("%") #remove unique module prefix
+ if ind != -1:
+ name = name[:ind]+name[name.find(".", ind):]
+ return name
+
+ def end_last_suite(self):
+ if self.current_suite:
+ self.messages.testSuiteFinished(self.current_suite)
+ self.current_suite = None
+
+ def get_suite_name(self, test):
+ module = inspect.getmodule(test)
+ klass = getattr(test, "im_class", None)
+ file = module.__file__
+ if file.endswith("pyc"):
+ file = file[:-1]
+
+ suite = module.__name__
+ if self.prefix:
+ tmp = file[:-3]
+ ind = tmp.split(self.prefix)[1]
+ suite = ind.replace("/", ".")
+ if klass:
+ suite += "." + klass.__name__
+ lineno = inspect.getsourcelines(klass)
+ else:
+ lineno = ("", 1)
+
+ return (suite, file+":"+str(lineno[1]))
+
+ def start_suite(self, suite_info):
+ """finish previous suite and put current suite
+ to stack"""
+ suite, file = suite_info
+ if suite != self.current_suite:
+ if self.current_suite:
+ if suite.startswith(self.current_suite+"."):
+ self.suite_stack.append(self.current_suite)
+ else:
+ self.messages.testSuiteFinished(self.current_suite)
+ for s in self.suite_stack:
+ if not suite.startswith(s+"."):
+ self.current_suite = s
+ self.messages.testSuiteFinished(self.current_suite)
+ else:
+ break
+ self.current_suite = suite
+ self.messages.testSuiteStarted(self.current_suite, location="file://" + file)
+
+ def start_test(self, result, name):
+ """Try to find the test location."""
+ real_func = result.test.func_closure[0].cell_contents
+ lineno = inspect.getsourcelines(real_func)
+ file = inspect.getsourcefile(real_func)
+ self.messages.testStarted(name, "file://"+file+":"+str(lineno[1]))
+
+def get_subclasses(module, base_class=TestBase):
+ test_classes = []
+ for name in dir(module):
+ obj = getattr(module, name)
+ try:
+ if issubclass(obj, base_class):
+ test_classes.append(obj)
+ except TypeError: # If 'obj' is not a class
+ pass
+ return test_classes
+
+def get_module(file_name):
+ baseName = os.path.splitext(os.path.basename(file_name))[0]
+ return imp.load_source(baseName, file_name)
+
+modules = {}
+def getModuleName(prefix, cnt):
+ """ adds unique number to prevent name collisions"""
+ return prefix + "%" + str(cnt)
+
+def loadSource(fileName):
+ baseName = os.path.basename(fileName)
+ moduleName = os.path.splitext(baseName)[0]
+
+ if moduleName in modules:
+ cnt = 2
+ prefix = moduleName
+ while getModuleName(prefix, cnt) in modules:
+ cnt += 1
+ moduleName = getModuleName(prefix, cnt)
+ debug("/ Loading " + fileName + " as " + moduleName)
+ module = imp.load_source(moduleName, fileName)
+ modules[moduleName] = module
+ return module
+
+
+def register_tests_from_module(module, tests):
+ """add tests from module to main test suite"""
+ tests_to_register = []
+
+ for i in dir(module):
+ obj = getattr(module, i)
+ if isinstance(obj, Tests):
+ tests_to_register.append(i)
+
+ for i in tests_to_register:
+ baseName = module.__name__+"."+i
+ tests.register(baseName)
+ test_subclasses = get_subclasses(module)
+ if test_subclasses:
+ for subclass in test_subclasses:
+ tests.register(subclass())
+
+
+def register_tests_from_folder(tests, folder, pattern=None):
+ """add tests from folder to main test suite"""
+ listing = os.listdir(folder)
+ files = listing
+ if pattern: #get files matched given pattern
+ prog_list = [re.compile(pat.strip()) for pat in pattern.split(',')]
+ files = []
+ for fileName in listing:
+ if os.path.isdir(folder+fileName):
+ files.append(fileName)
+ continue
+ for prog in prog_list:
+ if prog.match(fileName):
+ files.append(fileName)
+ break
+
+ if not folder.endswith("/"):
+ folder += "/"
+ for fileName in files:
+ if os.path.isdir(folder+fileName):
+ register_tests_from_folder(tests, folder+fileName, pattern)
+ if not fileName.endswith("py"):
+ continue
+
+ module = loadSource(folder+fileName)
+ register_tests_from_module(module, tests)
+
+def process_args():
+ tests = Tests()
+ prefix = ""
+ if not sys.argv:
+ return
+
+ arg = sys.argv[1].strip()
+ if not arg:
+ return
+
+ argument_list = arg.split("::")
+ if len(argument_list) == 1:
+ # From module or folder
+ a_splitted = argument_list[0].split(";")
+ if len(a_splitted) != 1:
+ # means we have pattern to match against
+ if a_splitted[0].endswith("/"):
+ debug("/ from folder " + a_splitted[0] + ". Use pattern: " + a_splitted[1])
+ prefix = a_splitted[0]
+ register_tests_from_folder(tests, a_splitted[0], a_splitted[1])
+ else:
+ if argument_list[0].endswith("/"):
+ debug("/ from folder " + argument_list[0])
+ prefix = a_splitted[0]
+ register_tests_from_folder(tests, argument_list[0])
+ else:
+ debug("/ from file " + argument_list[0])
+ module = get_module(argument_list[0])
+ register_tests_from_module(module, tests)
+
+ elif len(argument_list) == 2:
+ # From testcase
+ debug("/ from test class " + argument_list[1] + " in " + argument_list[0])
+ module = get_module(argument_list[0])
+ klass = getattr(module, argument_list[1])
+ tests.register(klass())
+ else:
+ # From method in class or from function
+ module = get_module(argument_list[0])
+ if argument_list[1] == "":
+ debug("/ from function " + argument_list[2] + " in " + argument_list[0])
+ # test function, not method
+ test = getattr(module, argument_list[2])
+ else:
+ debug("/ from method " + argument_list[2] + " in class " + argument_list[1] + " in " + argument_list[0])
+ klass = getattr(module, argument_list[1])
+ test = getattr(klass(), argument_list[2])
+ tests.register([test])
+
+ tests.run(reporter=TeamCityReporter(prefix))
+
+if __name__ == "__main__":
+ process_args()
\ No newline at end of file
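`loadSource` above disambiguates same-named files by appending a `%`-separated counter; `%` is safe because it can never occur in a genuine module name, which is why `get_test_name` can strip everything from the first `%` up to the next `.`. The naming scheme in isolation:

```python
modules = {}

def unique_module_name(prefix):
    """Append "%<n>" until the name is unused ("%" never occurs in a
    real module name, so the original prefix stays recoverable)."""
    if prefix not in modules:
        return prefix
    cnt = 2
    while prefix + "%" + str(cnt) in modules:
        cnt += 1
    return prefix + "%" + str(cnt)

modules["tests"] = object()
modules["tests%2"] = object()
print(unique_module_name("tests"))  # tests%3
print(unique_module_name("other"))  # other
```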
diff --git a/python/helpers/pycharm/buildout_engulfer.py b/python/helpers/pycharm/buildout_engulfer.py
new file mode 100644
index 0000000..2a4785d
--- /dev/null
+++ b/python/helpers/pycharm/buildout_engulfer.py
@@ -0,0 +1,43 @@
+# Expects two env variables:
+# PYCHARM_ENGULF_SCRIPT = which script should be engulfed.
+# PYCHARM_PREPEND_SYSPATH = which entries should be added to the beginning of sys.path;
+# items must be separated by path separator. May be unset.
+#
+# Given script is loaded and compiled, then sys.path is prepended as requested.
+# On win32, getpass is changed to insecure but working version.
+# Then the compiled script evaluated, as if it were run by python interpreter itself.
+# Works OK with debugger.
+
+import os
+import sys
+
+target = os.getenv("PYCHARM_ENGULF_SCRIPT")
+assert target, "PYCHARM_ENGULF_SCRIPT must be set"
+
+print("Running script through buildout: " + target)
+
+filepath = os.path.abspath(target)
+f = None
+try:
+ f = open(filepath, "r")
+ source = "\n".join((s.rstrip() for s in f.readlines()))
+finally:
+ if f:
+ f.close()
+
+from fix_getpass import fixGetpass
+fixGetpass()
+
+#prependable = os.getenv("PYCHARM_PREPEND_SYSPATH")
+#if prependable:
+# sys.path[0:0] = [x for x in prependable.split(os.path.pathsep)]
+
+# include engulfed's path, everyone expects this
+our_path = os.path.dirname(filepath)
+if our_path not in sys.path:
+ sys.path.append(our_path)
+
+code = compile(source, target, "exec")
+exec(code)
+
+# here we come
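The wrapper's core trick is compiling the engulfed source with the target's own file name before executing it, so tracebacks and debugger breakpoints resolve against the real script rather than this shim. A minimal self-contained sketch (the script body and file name are invented for illustration):

```python
source = "answer = 6 * 7\n"

# compiling with the target's file name keeps tracebacks pointing at it
code = compile(source, "engulfed_script.py", "exec")

namespace = {"__name__": "__main__"}
exec(code, namespace)
print(namespace["answer"])  # 42
```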
diff --git a/python/helpers/pycharm/django_manage.py b/python/helpers/pycharm/django_manage.py
new file mode 100644
index 0000000..3985795
--- /dev/null
+++ b/python/helpers/pycharm/django_manage.py
@@ -0,0 +1,24 @@
+#!/usr/bin/env python
+import sys
+import os
+
+from pycharm_run_utils import adjust_django_sys_path
+from fix_getpass import fixGetpass
+
+try:
+ from runpy import run_module
+except ImportError:
+ from runpy_compat import run_module
+
+adjust_django_sys_path()
+base_path = sys.argv.pop()
+
+manage_file = os.getenv('PYCHARM_DJANGO_MANAGE_MODULE')
+if not manage_file:
+ manage_file = 'manage'
+
+
+if __name__ == "__main__":
+ fixGetpass()
+ run_module(manage_file, None, '__main__', True)
+
diff --git a/python/helpers/pycharm/django_manage_shell.py b/python/helpers/pycharm/django_manage_shell.py
new file mode 100644
index 0000000..ec8af3a
--- /dev/null
+++ b/python/helpers/pycharm/django_manage_shell.py
@@ -0,0 +1,47 @@
+#!/usr/bin/env python
+from fix_getpass import fixGetpass
+import os
+from django.core import management
+import sys
+
+try:
+ from runpy import run_module
+except ImportError:
+ from runpy_compat import run_module
+
+
+def run(working_dir):
+ sys.path.insert(0, working_dir)
+ manage_file = os.getenv('PYCHARM_DJANGO_MANAGE_MODULE')
+ if not manage_file:
+ manage_file = 'manage'
+
+ def execute_manager(settings_mod, argv = None):
+ management.setup_environ(settings_mod)
+
+ management.execute_manager = execute_manager
+
+ def execute_from_command_line(argv=None):
+ pass
+
+ management.execute_from_command_line = execute_from_command_line
+
+ fixGetpass()
+
+ try:
+ #import settings to prevent circular dependencies later on import django.db
+ from django.conf import settings
+ apps=settings.INSTALLED_APPS
+
+ # From django.core.management.shell
+
+ # XXX: (Temporary) workaround for ticket #1796: force early loading of all
+ # models from installed apps.
+ from django.db.models.loading import get_models
+ get_models()
+
+ except:
+ pass
+
+ run_module(manage_file, None, '__main__', True)
+
diff --git a/python/helpers/pycharm/django_test_manage.py b/python/helpers/pycharm/django_test_manage.py
new file mode 100644
index 0000000..44b7c7c
--- /dev/null
+++ b/python/helpers/pycharm/django_test_manage.py
@@ -0,0 +1,124 @@
+#!/usr/bin/env python
+
+import os, sys
+
+from django.core.management import ManagementUtility
+
+from pycharm_run_utils import import_system_module
+
+inspect = import_system_module("inspect")
+
+#import settings to prevent circular dependencies later on import django.db
+try:
+ from django.conf import settings
+ apps = settings.INSTALLED_APPS
+except:
+ pass
+
+import django_test_runner
+project_directory = sys.argv.pop()
+
+from django.core import management
+from django.core.management.commands.test import Command
+
+try:
+ # setup environment
+ # this stuff was done earlier by setup_environ() which was removed in 1.4
+ sys.path.append(os.path.join(project_directory, os.pardir))
+ project_name = os.path.basename(project_directory)
+ __import__(project_name)
+except ImportError:
+ # project has custom structure (project directory is not importable)
+ pass
+finally:
+ sys.path.pop()
+
+manage_file = os.getenv('PYCHARM_DJANGO_MANAGE_MODULE')
+if not manage_file:
+ manage_file = 'manage'
+
+try:
+ __import__(manage_file)
+except ImportError:
+ print ("There is no such manage file: " + str(manage_file) + "\n")
+
+settings_file = os.getenv('DJANGO_SETTINGS_MODULE')
+if not settings_file:
+ settings_file = 'settings'
+
+
+class PycharmTestCommand(Command):
+ def get_runner(self):
+ TEST_RUNNER = 'django_test_runner.run_tests'
+ test_path = TEST_RUNNER.split('.')
+ # Allow for Python 2.5 relative paths
+ if len(test_path) > 1:
+ test_module_name = '.'.join(test_path[:-1])
+ else:
+ test_module_name = '.'
+ test_module = __import__(test_module_name, {}, {}, test_path[-1])
+ test_runner = getattr(test_module, test_path[-1])
+ return test_runner
+
+ def handle(self, *test_labels, **options):
+ # handle south migration in tests
+ management.get_commands()
+ if hasattr(settings, "SOUTH_TESTS_MIGRATE") and not settings.SOUTH_TESTS_MIGRATE:
+ # point at the core syncdb command when creating tests
+ # tests should always be up to date with the most recent model structure
+ management._commands['syncdb'] = 'django.core'
+ elif 'south' in settings.INSTALLED_APPS:
+ try:
+ from south.management.commands import MigrateAndSyncCommand
+ management._commands['syncdb'] = MigrateAndSyncCommand()
+ from south.hacks import hacks
+ if hasattr(hacks, "patch_flush_during_test_db_creation"):
+ hacks.patch_flush_during_test_db_creation()
+ except ImportError:
+ management._commands['syncdb'] = 'django.core'
+
+ verbosity = int(options.get('verbosity', 1))
+ interactive = options.get('interactive', True)
+ failfast = options.get('failfast', False)
+ TestRunner = self.get_runner()
+
+ if not inspect.ismethod(TestRunner):
+ failures = TestRunner(test_labels, verbosity=verbosity, interactive=interactive, failfast=failfast)
+ else:
+ test_runner = TestRunner(verbosity=verbosity, interactive=interactive, failfast=failfast)
+ failures = test_runner.run_tests(test_labels)
+
+ if failures:
+ sys.exit(bool(failures))
+
+class PycharmTestManagementUtility(ManagementUtility):
+ def __init__(self, argv=None):
+ ManagementUtility.__init__(self, argv)
+
+ def execute(self):
+ PycharmTestCommand().run_from_argv(self.argv)
+
+if __name__ == "__main__":
+
+ try:
+ custom_settings = __import__(settings_file)
+ splitted_settings = settings_file.split('.')
+ if len(splitted_settings) != 1:
+ settings_name = '.'.join(splitted_settings[:-1])
+ settings_module = __import__(settings_name, globals(), locals(), [splitted_settings[-1]])
+ custom_settings = getattr(settings_module, splitted_settings[-1])
+
+ except ImportError:
+ print ("There is no such settings file: " + str(settings_file) + "\n")
+
+ try:
+ subcommand = sys.argv[1]
+ except IndexError:
+ subcommand = 'help' # Display help if no arguments were given.
+
+ if subcommand == 'test':
+ utility = PycharmTestManagementUtility(sys.argv)
+ else:
+ utility = ManagementUtility()
+
+ utility.execute()
\ No newline at end of file
diff --git a/python/helpers/pycharm/django_test_runner.py b/python/helpers/pycharm/django_test_runner.py
new file mode 100644
index 0000000..3eb993d
--- /dev/null
+++ b/python/helpers/pycharm/django_test_runner.py
@@ -0,0 +1,195 @@
+import sys
+
+from tcunittest import TeamcityTestRunner
+from tcmessages import TeamcityServiceMessages
+
+from pycharm_run_utils import adjust_django_sys_path
+from django.conf import settings
+
+if hasattr(settings, "TEST_RUNNER") and "NoseTestSuiteRunner" in settings.TEST_RUNNER:
+ from nose_utils import TeamcityNoseRunner
+
+adjust_django_sys_path()
+
+from django.test.testcases import TestCase
+from django import VERSION
+try:
+ from django.utils import unittest
+except ImportError:
+ import unittest
+
+def get_test_suite_runner():
+ if hasattr(settings, "TEST_RUNNER"):
+ from django.test.utils import get_runner
+
+ class TempSettings:
+ TEST_RUNNER = settings.TEST_RUNNER
+
+ return get_runner(TempSettings)
+
+try:
+ from django.test.simple import DjangoTestSuiteRunner
+ from inspect import isfunction
+
+ SUITE_RUNNER = get_test_suite_runner()
+ if isfunction(SUITE_RUNNER):
+ import sys
+
+ sys.stderr.write(
+ "WARNING: TEST_RUNNER variable is ignored. PyCharm test runner supports "
+ "only class-like TEST_RUNNER variables. Use Tools->run manage.py tasks.\n")
+ SUITE_RUNNER = None
+ BaseSuiteRunner = SUITE_RUNNER or DjangoTestSuiteRunner
+
+ class BaseRunner(TeamcityTestRunner, BaseSuiteRunner):
+ def __init__(self, stream=sys.stdout, **options):
+ TeamcityTestRunner.__init__(self, stream)
+ BaseSuiteRunner.__init__(self)
+
+except ImportError:
+ # for Django <= 1.1 compatibility
+ class BaseRunner(TeamcityTestRunner):
+ def __init__(self, stream=sys.stdout, **options):
+ TeamcityTestRunner.__init__(self, stream)
+
+class DjangoTeamcityTestRunner(BaseRunner):
+ def __init__(self, stream=sys.stdout, **options):
+ super(DjangoTeamcityTestRunner, self).__init__(stream)
+
+ def build_suite(self, *args, **kwargs):
+ EXCLUDED_APPS = getattr(settings, 'TEST_EXCLUDE', [])
+ suite = super(DjangoTeamcityTestRunner, self).build_suite(*args, **kwargs)
+ if not args[0] and not getattr(settings, 'RUN_ALL_TESTS', False):
+ tests = []
+ for case in suite:
+ pkg = case.__class__.__module__.split('.')[0]
+ if pkg not in EXCLUDED_APPS:
+ tests.append(case)
+ suite._tests = tests
+ return suite
+
+ def run_suite(self, suite, **kwargs):
+ if hasattr(settings, "TEST_RUNNER") and "NoseTestSuiteRunner" in settings.TEST_RUNNER:
+ from django_nose.plugin import DjangoSetUpPlugin, ResultPlugin
+ from django_nose.runner import _get_plugins_from_settings
+ from nose.plugins.manager import PluginManager
+ from nose.config import Config
+ import nose
+
+ config = Config(plugins=PluginManager())
+ config.plugins.loadPlugins()
+ result_plugin = ResultPlugin()
+ config.plugins.addPlugin(DjangoSetUpPlugin(self))
+ config.plugins.addPlugin(result_plugin)
+ for plugin in _get_plugins_from_settings():
+ config.plugins.addPlugin(plugin)
+
+ nose.core.TestProgram(argv=suite, exit=False,
+ testRunner=TeamcityNoseRunner(config=config))
+ return result_plugin.result
+ else:
+ return TeamcityTestRunner.run(self, suite, **kwargs)
+
+ def run_tests(self, test_labels, extra_tests=None, **kwargs):
+ if hasattr(settings, "TEST_RUNNER") and "NoseTestSuiteRunner" in settings.TEST_RUNNER:
+ return super(DjangoTeamcityTestRunner, self).run_tests(test_labels,
+ extra_tests)
+ return super(DjangoTeamcityTestRunner, self).run_tests(test_labels,
+ extra_tests, **kwargs)
+
+
+def partition_suite(suite, classes, bins):
+ """
+ Partitions a test suite by test type.
+
+ classes is a sequence of types
+ bins is a sequence of TestSuites, one more than classes
+
+ Tests of type classes[i] are added to bins[i],
+ tests with no match found in classes are placed in bins[-1]
+ """
+ for test in suite:
+ if isinstance(test, unittest.TestSuite):
+ partition_suite(test, classes, bins)
+ else:
+ for i in range(len(classes)):
+ if isinstance(test, classes[i]):
+ bins[i].addTest(test)
+ break
+ else:
+ bins[-1].addTest(test)
+
+
+def reorder_suite(suite, classes):
+ """
+ Reorders a test suite by test type.
+
+ classes is a sequence of types
+
+ All tests of type classes[0] are placed first, then tests of type classes[1], etc.
+ Tests with no match in classes are placed last.
+ """
+ class_count = len(classes)
+ bins = [unittest.TestSuite() for i in range(class_count + 1)]
+ partition_suite(suite, classes, bins)
+ for i in range(class_count):
+ bins[0].addTests(bins[i + 1])
+ return bins[0]
+
+
+def run_the_old_way(extra_tests, kwargs, test_labels, verbosity):
+ from django.test.simple import build_suite, build_test, get_app, get_apps, \
+ setup_test_environment, teardown_test_environment
+
+ setup_test_environment()
+ settings.DEBUG = False
+ suite = unittest.TestSuite()
+ if test_labels:
+ for label in test_labels:
+ if '.' in label:
+ suite.addTest(build_test(label))
+ else:
+ app = get_app(label)
+ suite.addTest(build_suite(app))
+ else:
+ for app in get_apps():
+ suite.addTest(build_suite(app))
+ for test in extra_tests:
+ suite.addTest(test)
+ suite = reorder_suite(suite, (TestCase,))
+ old_name = settings.DATABASE_NAME
+ from django.db import connection
+
+ connection.creation.create_test_db(verbosity, autoclobber=False)
+ result = DjangoTeamcityTestRunner().run(suite, **kwargs)
+ connection.creation.destroy_test_db(old_name, verbosity)
+ teardown_test_environment()
+ return len(result.failures) + len(result.errors)
+
+
+def run_tests(test_labels, verbosity=1, interactive=False, extra_tests=[],
+ **kwargs):
+ """
+ Run the unit tests for all the test labels in the provided list.
+ Labels must be of the form:
+ - app.TestClass.test_method
+ Run a single specific test method
+ - app.TestClass
+ Run all the test methods in a given class
+ - app
+ Search for doctests and unittests in the named application.
+
+ When looking for tests, the test runner will look in the models and
+ tests modules for the application.
+
+ A list of 'extra' tests may also be provided; these tests
+ will be added to the test suite.
+
+ Returns the number of tests that failed.
+ """
+ TeamcityServiceMessages(sys.stdout).testMatrixEntered()
+ if VERSION[1] > 1:
+ return DjangoTeamcityTestRunner().run_tests(test_labels,
+ extra_tests=extra_tests, **kwargs)
+
+ return run_the_old_way(extra_tests, kwargs, test_labels, verbosity)
diff --git a/python/helpers/pycharm/docrunner.py b/python/helpers/pycharm/docrunner.py
new file mode 100644
index 0000000..ad619be
--- /dev/null
+++ b/python/helpers/pycharm/docrunner.py
@@ -0,0 +1,340 @@
+import imp
+import sys
+import datetime
+import os
+helpers_dir = os.getenv("PYCHARM_HELPERS_DIR", sys.path[0])
+if sys.path[0] != helpers_dir:
+ sys.path.insert(0, helpers_dir)
+
+from tcunittest import TeamcityTestResult
+from tcmessages import TeamcityServiceMessages
+
+from pycharm_run_utils import import_system_module
+from pycharm_run_utils import adjust_sys_path, debug, getModuleName, PYTHON_VERSION_MAJOR
+
+adjust_sys_path()
+
+re = import_system_module("re")
+doctest = import_system_module("doctest")
+traceback = import_system_module("traceback")
+
+class TeamcityDocTestResult(TeamcityTestResult):
+ """
+ DocTests Result extends TeamcityTestResult,
+ overrides some methods, specific for doc tests,
+ such as getTestName, getTestId.
+ """
+ def getTestName(self, test):
+ name = self.current_suite.name + test.source
+ return name
+
+ def getSuiteName(self, test):
+ if test.source.rfind(".") == -1:
+ name = self.current_suite.name + test.source
+ else:
+ name = test.source
+ return name
+
+ def getTestId(self, test):
+ file = os.path.realpath(self.current_suite.filename) if self.current_suite.filename else ""
+ line_no = test.lineno
+ if self.current_suite.lineno:
+ line_no += self.current_suite.lineno
+ return "file://" + file + ":" + str(line_no)
+
+ def getSuiteLocation(self):
+ file = os.path.realpath(self.current_suite.filename) if self.current_suite.filename else ""
+ location = "file://" + file
+ if self.current_suite.lineno:
+ location += ":" + str(self.current_suite.lineno)
+ return location
+
+ def startTest(self, test):
+ setattr(test, "startTime", datetime.datetime.now())
+ id = self.getTestId(test)
+ self.messages.testStarted(self.getTestName(test), location=id)
+
+ def startSuite(self, suite):
+ self.current_suite = suite
+ self.messages.testSuiteStarted(suite.name, location=self.getSuiteLocation())
+
+ def stopSuite(self, suite):
+ self.messages.testSuiteFinished(suite.name)
+
+ def addFailure(self, test, err = ''):
+ self.messages.testFailed(self.getTestName(test),
+ message='Failure', details=err)
+
+ def addError(self, test, err = ''):
+ self.messages.testError(self.getTestName(test),
+ message='Error', details=err)
+
+class DocTestRunner(doctest.DocTestRunner):
+ """
+ Special runner for doctests that overrides the __run method
+ to report results through TeamcityDocTestResult.
+ """
+ def __init__(self, verbose=None, optionflags=0):
+ doctest.DocTestRunner.__init__(self, verbose, optionflags)
+ self.stream = sys.stdout
+ self.result = TeamcityDocTestResult(self.stream)
+ #self.result.messages.testMatrixEntered()
+ self._tests = []
+
+ def addTests(self, tests):
+ self._tests.extend(tests)
+
+ def addTest(self, test):
+ self._tests.append(test)
+
+ def countTests(self):
+ return len(self._tests)
+
+ def start(self):
+ for test in self._tests:
+ self.run(test)
+
+ def __run(self, test, compileflags, out):
+ failures = tries = 0
+
+ original_optionflags = self.optionflags
+ SUCCESS, FAILURE, BOOM = range(3) # `outcome` state
+ check = self._checker.check_output
+ self.result.startSuite(test)
+ for examplenum, example in enumerate(test.examples):
+
+ quiet = (self.optionflags & doctest.REPORT_ONLY_FIRST_FAILURE and
+ failures > 0)
+
+ self.optionflags = original_optionflags
+ if example.options:
+ for (optionflag, val) in example.options.items():
+ if val:
+ self.optionflags |= optionflag
+ else:
+ self.optionflags &= ~optionflag
+
+ if hasattr(doctest, 'SKIP'):
+ if self.optionflags & doctest.SKIP:
+ continue
+
+ tries += 1
+ if not quiet:
+ self.report_start(out, test, example)
+
+ filename = '<doctest %s[%d]>' % (test.name, examplenum)
+
+ try:
+ exec(compile(example.source, filename, "single",
+ compileflags, 1), test.globs)
+ self.debugger.set_continue() # ==== Example Finished ====
+ exception = None
+ except KeyboardInterrupt:
+ raise
+ except:
+ exception = sys.exc_info()
+ self.debugger.set_continue() # ==== Example Finished ====
+
+ got = self._fakeout.getvalue() # the actual output
+ self._fakeout.truncate(0)
+ outcome = FAILURE # guilty until proved innocent or insane
+
+ if exception is None:
+ if check(example.want, got, self.optionflags):
+ outcome = SUCCESS
+
+ else:
+ exc_msg = traceback.format_exception_only(*exception[:2])[-1]
+ if not quiet:
+ got += doctest._exception_traceback(exception)
+
+ if example.exc_msg is None:
+ outcome = BOOM
+
+ elif check(example.exc_msg, exc_msg, self.optionflags):
+ outcome = SUCCESS
+
+ elif self.optionflags & doctest.IGNORE_EXCEPTION_DETAIL:
+ m1 = re.match(r'[^:]*:', example.exc_msg)
+ m2 = re.match(r'[^:]*:', exc_msg)
+ if m1 and m2 and check(m1.group(0), m2.group(0),
+ self.optionflags):
+ outcome = SUCCESS
+
+ # Report the outcome.
+ if outcome is SUCCESS:
+ self.result.startTest(example)
+ self.result.stopTest(example)
+ elif outcome is FAILURE:
+ self.result.startTest(example)
+ err = self._failure_header(test, example) +\
+ self._checker.output_difference(example, got, self.optionflags)
+ self.result.addFailure(example, err)
+
+ elif outcome is BOOM:
+ self.result.startTest(example)
+ err=self._failure_header(test, example) +\
+ 'Exception raised:\n' + doctest._indent(doctest._exception_traceback(exception))
+ self.result.addError(example, err)
+
+ else:
+ assert False, ("unknown outcome", outcome)
+
+ self.optionflags = original_optionflags
+
+ self.result.stopSuite(test)
+
+
+modules = {}
+
+
+
+runner = DocTestRunner()
+
+def loadSource(fileName):
+ """
+ Loads source from fileName.
+ We can't reuse that function from utrunner, because we
+ store modules in a global variable.
+ """
+ baseName = os.path.basename(fileName)
+ moduleName = os.path.splitext(baseName)[0]
+
+ # for users who want to run simple doctests under django,
+ # because django relies on the module name
+ settings_file = os.getenv('DJANGO_SETTINGS_MODULE')
+ if settings_file and moduleName=="models":
+ baseName = os.path.realpath(fileName)
+ moduleName = ".".join((baseName.split(os.sep)[-2], "models"))
+
+ if moduleName in modules: # add unique number to prevent name collisions
+ cnt = 2
+ prefix = moduleName
+ while getModuleName(prefix, cnt) in modules:
+ cnt += 1
+ moduleName = getModuleName(prefix, cnt)
+ debug("/ Loading " + fileName + " as " + moduleName)
+ module = imp.load_source(moduleName, fileName)
+ modules[moduleName] = module
+ return module
+
+def testfile(filename):
+ if PYTHON_VERSION_MAJOR == 3:
+ text, filename = doctest._load_testfile(filename, None, False, "utf-8")
+ else:
+ text, filename = doctest._load_testfile(filename, None, False)
+
+ name = os.path.basename(filename)
+ globs = {'__name__': '__main__'}
+
+ parser = doctest.DocTestParser()
+ # Read the file, convert it to a test, and run it.
+ test = parser.get_doctest(text, globs, name, filename, 0)
+ if test.examples:
+ runner.addTest(test)
+
+def testFilesInFolder(folder):
+ return testFilesInFolderUsingPattern(folder)
+
+def testFilesInFolderUsingPattern(folder, pattern = ".*"):
+ '''Loads modules from the folder,
+ checking whether each module name matches the given pattern.'''
+ modules = []
+ prog = re.compile(pattern)
+
+ for root, dirs, files in os.walk(folder):
+ for name in files:
+ path = os.path.join(root, name)
+ if prog.match(name):
+ if name.endswith(".py"):
+ modules.append(loadSource(path))
+ elif not name.endswith(".pyc") and not name.endswith("$py.class") and os.path.isfile(path):
+ testfile(path)
+
+ return modules
+
+if __name__ == "__main__":
+ finder = doctest.DocTestFinder()
+
+ for arg in sys.argv[1:]:
+ arg = arg.strip()
+ if len(arg) == 0:
+ continue
+
+ a = arg.split("::")
+ if len(a) == 1:
+ # From module or folder
+ a_splitted = a[0].split(";")
+ if len(a_splitted) != 1:
+ # means we have pattern to match against
+ if a_splitted[0].endswith("/"):
+ debug("/ from folder " + a_splitted[0] + ". Use pattern: " + a_splitted[1])
+ modules = testFilesInFolderUsingPattern(a_splitted[0], a_splitted[1])
+ else:
+ if a[0].endswith("/"):
+ debug("/ from folder " + a[0])
+ modules = testFilesInFolder(a[0])
+ else:
+ # from file
+ debug("/ from module " + a[0])
+ # for doctests from non-python file
+ if a[0].rfind(".py") == -1:
+ testfile(a[0])
+ modules = []
+ else:
+ modules = [loadSource(a[0])]
+
+ # for doctests
+ for module in modules:
+ tests = finder.find(module, module.__name__)
+ for test in tests:
+ if test.examples:
+ runner.addTest(test)
+
+ elif len(a) == 2:
+ # From testcase
+ debug("/ from class " + a[1] + " in " + a[0])
+ try:
+ module = loadSource(a[0])
+ except SyntaxError:
+ raise NameError('File "%s" is not a Python file' % (a[0], ))
+ if hasattr(module, a[1]):
+ testcase = getattr(module, a[1])
+ tests = finder.find(testcase, getattr(testcase, "__name__", None))
+ runner.addTests(tests)
+ else:
+ raise NameError('Module "%s" has no class "%s"' % (a[0], a[1]))
+ else:
+ # From method in class or from function
+ try:
+ module = loadSource(a[0])
+ except SyntaxError:
+ raise NameError('File "%s" is not a Python file' % (a[0], ))
+ if a[1] == "":
+ # test function, not method
+ debug("/ from method " + a[2] + " in " + a[0])
+ if hasattr(module, a[2]):
+ testcase = getattr(module, a[2])
+ tests = finder.find(testcase, getattr(testcase, "__name__", None))
+ runner.addTests(tests)
+ else:
+ raise NameError('Module "%s" has no method "%s"' % (a[0], a[2]))
+ else:
+ debug("/ from method " + a[2] + " in class " + a[1] + " in " + a[0])
+ if hasattr(module, a[1]):
+ testCaseClass = getattr(module, a[1])
+ if hasattr(testCaseClass, a[2]):
+ testcase = getattr(testCaseClass, a[2])
+ name = getattr(testcase, "__name__", None)
+ if not name:
+ name = testCaseClass.__name__
+ tests = finder.find(testcase, name)
+ runner.addTests(tests)
+ else:
+ raise NameError('Class "%s" has no function "%s"' % (testCaseClass, a[2]))
+ else:
+ raise NameError('Module "%s" has no class "%s"' % (module, a[1]))
+
+ debug("/ Loaded " + str(runner.countTests()) + " tests")
+ TeamcityServiceMessages(sys.stdout).testCount(runner.countTests())
+ runner.start()
\ No newline at end of file
diff --git a/python/helpers/pycharm/fix_getpass.py b/python/helpers/pycharm/fix_getpass.py
new file mode 100644
index 0000000..c81d935
--- /dev/null
+++ b/python/helpers/pycharm/fix_getpass.py
@@ -0,0 +1,10 @@
+def fixGetpass():
+ import getpass
+ import warnings
+ fallback = getattr(getpass, 'fallback_getpass', None) # >= 2.6
+ if not fallback:
+ fallback = getpass.default_getpass # <= 2.5
+ getpass.getpass = fallback
+ if hasattr(getpass, 'GetPassWarning'):
+ warnings.simplefilter("ignore", category=getpass.GetPassWarning)
+
diff --git a/python/helpers/pycharm/nose_helper/_2.py b/python/helpers/pycharm/nose_helper/_2.py
new file mode 100644
index 0000000..d169878
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/_2.py
@@ -0,0 +1,2 @@
+def reraise(exc_class, exc_val, tb):
+ raise exc_class, exc_val, tb
\ No newline at end of file
diff --git a/python/helpers/pycharm/nose_helper/_3.py b/python/helpers/pycharm/nose_helper/_3.py
new file mode 100644
index 0000000..dbdce1f
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/_3.py
@@ -0,0 +1,2 @@
+def reraise(exc_class, exc_val, tb):
+ raise exc_class(exc_val).with_traceback(tb)
diff --git a/python/helpers/pycharm/nose_helper/__init__.py b/python/helpers/pycharm/nose_helper/__init__.py
new file mode 100644
index 0000000..d5d14a2
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/__init__.py
@@ -0,0 +1,2 @@
+from nose_helper.suite import ContextSuite
+from nose_helper.loader import TestLoader
diff --git a/python/helpers/pycharm/nose_helper/case.py b/python/helpers/pycharm/nose_helper/case.py
new file mode 100644
index 0000000..45bd901
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/case.py
@@ -0,0 +1,230 @@
+import sys
+import unittest
+from nose_helper.config import Config
+from nose_helper.util import resolve_name, try_run
+import imp
+
+class Test(unittest.TestCase):
+ """The universal test case wrapper.
+ """
+ __test__ = False # do not collect
+ def __init__(self, test, config=None):
+ if not hasattr(test, '__call__'):
+ raise TypeError("Test called with argument %r that "
+ "is not callable. A callable is required."
+ % test)
+ self.test = test
+ if config is None:
+ config = Config()
+ self.config = config
+ unittest.TestCase.__init__(self)
+
+ def __call__(self, *arg, **kwarg):
+ return self.run(*arg, **kwarg)
+
+ def __str__(self):
+ return str(self.test)
+
+ def _context(self):
+ try:
+ return self.test.context
+ except AttributeError:
+ pass
+ try:
+ return self.test.__class__
+ except AttributeError:
+ pass
+ try:
+ return resolve_name(self.test.__module__)
+ except AttributeError:
+ pass
+ return None
+ context = property(_context, None, None,
+ """Get the context object of this test.""")
+
+ def run(self, result):
+ try:
+ self.runTest(result)
+ except KeyboardInterrupt:
+ raise
+ except:
+ err = sys.exc_info()
+ result.addError(self, err)
+
+ def runTest(self, result):
+ test = self.test
+ test(result)
+
+
+class TestBase(unittest.TestCase):
+ """Common functionality for FunctionTestCase and MethodTestCase.
+ """
+ __test__ = False # do not collect
+
+ class Suite:
+ pass
+
+ def runTest(self):
+ self.test(*self.arg)
+
+class FunctionTestCase(TestBase):
+ """TestCase wrapper for test functions.
+ """
+ __test__ = False # do not collect
+
+ def __init__(self, test, setUp=None, tearDown=None, arg=tuple(),
+ descriptor=None):
+ self.test = test
+ self.setUpFunc = setUp
+ self.tearDownFunc = tearDown
+ self.arg = arg
+ self.descriptor = descriptor
+ TestBase.__init__(self)
+
+ self.suite = TestBase.Suite()
+ self.suite.__module__ = self.__get_module()
+ self.suite.__name__ = ""
+ has_module = True
+ try:
+ imp.find_module(self.suite.__module__)[1]
+ except ImportError:
+ has_module = False
+ if sys.version.find("IronPython") != -1 or not has_module:
+ # IronPython doesn't fully support imp
+ self.suite.abs_location = ""
+ self.suite.location = ""
+ else:
+ self.suite.abs_location = "file://" + imp.find_module(self.suite.__module__)[1]
+ self.suite.location = "file://" + imp.find_module(self.suite.__module__)[1]
+
+ def _context(self):
+ return resolve_name(self.test.__module__)
+ context = property(_context, None, None,
+ """Get context (module) of this test""")
+
+ def setUp(self):
+ """Run any setup function attached to the test function
+ """
+ if self.setUpFunc:
+ self.setUpFunc()
+ else:
+ names = ('setup', 'setUp', 'setUpFunc')
+ try_run(self.test, names)
+
+ def tearDown(self):
+ """Run any teardown function attached to the test function
+ """
+ if self.tearDownFunc:
+ self.tearDownFunc()
+ else:
+ names = ('teardown', 'tearDown', 'tearDownFunc')
+ try_run(self.test, names)
+
+ def __str__(self):
+ func, arg = self._descriptors()
+ if hasattr(func, 'compat_func_name'):
+ name = func.compat_func_name
+ else:
+ name = func.__name__
+ if arg:
+ name = "%s%s" % (name, arg)
+ return name
+ __repr__ = __str__
+
+ def __get_module(self):
+ func, arg = self._descriptors()
+ if hasattr(func, "__module__"):
+ return func.__module__
+ else:
+ #TODO[kate]: get module of function in jython < 2.2
+ return "Unknown module."
+
+ def _descriptors(self):
+ """In most cases, this is the function itself and no arguments. For
+ tests generated by generator functions, the original
+ (generator) function and args passed to the generated function
+ are returned.
+ """
+ if self.descriptor:
+ return self.descriptor, self.arg
+ else:
+ return self.test, self.arg
+
+
+class MethodTestCase(TestBase):
+ """Test case wrapper for test methods.
+ """
+ __test__ = False # do not collect
+
+ def __init__(self, method, test=None, arg=tuple(), descriptor=None):
+ """Initialize the MethodTestCase.
+ """
+ self.method = method
+ self.test = test
+ self.arg = arg
+ self.descriptor = descriptor
+ self.cls = method.im_class
+ self.inst = self.cls()
+ if self.test is None:
+ method_name = self.method.__name__
+ self.test = getattr(self.inst, method_name)
+ TestBase.__init__(self)
+
+ self.suite = TestBase.Suite()
+ self.suite.__module__, self.suite.__name__ = self.__get_module()
+
+ has_module = True
+ try:
+ imp.find_module(self.suite.__module__)[1]
+ except ImportError:
+ has_module = False
+ if sys.version.find("IronPython") != -1 or not has_module:
+ # IronPython doesn't fully support imp
+ self.suite.abs_location = ""
+ else:
+ self.suite.abs_location = "file://" + imp.find_module(self.suite.__module__)[1]
+ self.suite.location = "python_uttestid://" + self.suite.__module__ + "." + self.suite.__name__
+
+ def __get_module(self):
+ def get_class_that_defined_method(meth):
+ import inspect
+ obj = meth.im_self
+ for cls in inspect.getmro(meth.im_class):
+ if meth.__name__ in cls.__dict__: return (cls.__module__, cls.__name__)
+ return ("Unknown module", "")
+
+ func, arg = self._descriptors()
+ return get_class_that_defined_method(func)
+
+ def __str__(self):
+ func, arg = self._descriptors()
+ if hasattr(func, 'compat_func_name'):
+ name = func.compat_func_name
+ else:
+ name = func.__name__
+ if arg:
+ name = "%s%s" % (name, arg)
+ return name
+ __repr__ = __str__
+
+ def _context(self):
+ return self.cls
+ context = property(_context, None, None,
+ """Get context (class) of this test""")
+
+ def setUp(self):
+ try_run(self.inst, ('setup', 'setUp'))
+
+ def tearDown(self):
+ try_run(self.inst, ('teardown', 'tearDown'))
+
+ def _descriptors(self):
+ """In most cases, this is the method itself and no arguments. For
+ tests generated by generator methods, the original
+ (generator) method and args passed to the generated method
+ or function are returned.
+ """
+ if self.descriptor:
+ return self.descriptor, self.arg
+ else:
+ return self.method, self.arg
diff --git a/python/helpers/pycharm/nose_helper/config.py b/python/helpers/pycharm/nose_helper/config.py
new file mode 100644
index 0000000..f4852b3
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/config.py
@@ -0,0 +1,27 @@
+import os
+import re
+
+class Config(object):
+ """nose configuration.
+ """
+
+ def __init__(self, **kw):
+ self.env = kw.pop('env', {})
+ self.testMatchPat = r'(?:^|[\b_\.%s-])[Tt]est' % os.sep
+ self.testMatch = re.compile(self.testMatchPat)
+ self.srcDirs = ('lib', 'src')
+ self.workingDir = os.getcwd()
+ self.update(kw)
+
+ def __repr__(self):
+ dict = self.__dict__.copy()
+ dict['env'] = {}
+ keys = [ k for k in dict.keys()
+ if not k.startswith('_') ]
+ keys.sort()
+ return "Config(%s)" % ', '.join([ '%s=%r' % (k, dict[k])
+ for k in keys ])
+ __str__ = __repr__
+
+ def update(self, d):
+ self.__dict__.update(d)
diff --git a/python/helpers/pycharm/nose_helper/failure.py b/python/helpers/pycharm/nose_helper/failure.py
new file mode 100644
index 0000000..7e49ae1
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/failure.py
@@ -0,0 +1,22 @@
+import unittest
+
+from nose_helper.raise_compat import reraise
+
+class Failure(unittest.TestCase):
+ """Unloadable or unexecutable test.
+ """
+ __test__ = False # do not collect
+ def __init__(self, exc_class, exc_val, tb = None):
+ self.exc_class = exc_class
+ self.exc_val = exc_val
+ unittest.TestCase.__init__(self)
+ self.tb = tb
+ def __str__(self):
+ return "Failure: %s (%s)" % (
+ getattr(self.exc_class, '__name__', self.exc_class), self.exc_val)
+
+ def runTest(self):
+ if self.tb is not None:
+ reraise(self.exc_class, self.exc_val, self.tb)
+ else:
+ raise self.exc_class(self.exc_val)
\ No newline at end of file
diff --git a/python/helpers/pycharm/nose_helper/loader.py b/python/helpers/pycharm/nose_helper/loader.py
new file mode 100644
index 0000000..6f247ef
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/loader.py
@@ -0,0 +1,201 @@
+"""
+nose's test loader implements the nosetests functionality
+"""
+
+from __future__ import generators
+
+import os
+import sys
+import unittest
+from inspect import isfunction, ismethod
+from nose_helper.case import FunctionTestCase, MethodTestCase
+from nose_helper.failure import Failure
+from nose_helper.config import Config
+from nose_helper.selector import defaultSelector
+from nose_helper.util import cmp_lineno, func_lineno, isclass, isgenerator, ismethod, isunboundmethod
+from nose_helper.util import transplant_class, transplant_func
+from nose_helper.suite import ContextSuiteFactory, ContextList
+
+op_normpath = os.path.normpath
+op_abspath = os.path.abspath
+
+PYTHON_VERSION_MAJOR = sys.version_info[0]
+PYTHON_VERSION_MINOR = sys.version_info[1]
+
+from nose_helper.util import unbound_method
+import types
+
+class TestLoader(unittest.TestLoader):
+ """Test loader that extends unittest.TestLoader to support nosetests
+ """
+ config = None
+ workingDir = None
+ selector = None
+ suiteClass = None
+
+ def __init__(self):
+ """Initialize a test loader.
+ """
+ self.config = Config()
+ self.selector = defaultSelector(self.config)
+ self.workingDir = op_normpath(op_abspath(self.config.workingDir))
+ self.suiteClass = ContextSuiteFactory(config=self.config)
+ unittest.TestLoader.__init__(self)
+
+ def loadTestsFromGenerator(self, generator, module, lineno):
+ """The generator function may yield either:
+ * a callable, or
+ * a function name resolvable within the same module
+ """
+ def generate(g=generator, m=module):
+ try:
+ for test in g():
+ test_func, arg = self.parseGeneratedTest(test)
+ if not hasattr(test_func, '__call__'):
+ test_func = getattr(m, test_func)
+ test_case = FunctionTestCase(test_func, arg=arg, descriptor=g)
+ test_case.lineno = lineno
+ yield test_case
+ except KeyboardInterrupt:
+ raise
+ except:
+ exc = sys.exc_info()
+ yield Failure(exc[0], exc[1], exc[2])
+ return self.suiteClass(generate, context=generator)
+
+ def loadTestsFromModule(self, module, direct = True):
+ """Load all tests from module and return a suite containing
+ them.
+ """
+ tests = []
+ test_funcs = []
+ test_classes = []
+ if self.selector.wantModule(module) or direct:
+ for item in dir(module):
+ test = getattr(module, item, None)
+ if isclass(test):
+ if self.selector.wantClass(test):
+ test_classes.append(test)
+ elif isfunction(test) and self.selector.wantFunction(test):
+ test_funcs.append(test)
+ if PYTHON_VERSION_MAJOR != 3:
+ test_classes.sort(lambda a, b: cmp(a.__name__, b.__name__))
+ test_funcs.sort(cmp_lineno)
+ tests = map(lambda t: self.makeTest(t, parent=module),
+ test_classes + test_funcs)
+ else:
+ test_classes.sort(key = lambda a: a.__name__)
+ test_funcs.sort(key = func_lineno)
+ tests = [self.makeTest(t, parent=module) for t in
+ test_classes + test_funcs]
+ return self.suiteClass(ContextList(tests, context=module))
+
+
+ def loadTestsFromTestClass(self, cls):
+ """Load tests from a test class that is *not* a unittest.TestCase
+ subclass.
+ """
+ def wanted(attr, cls=cls, sel=self.selector):
+ item = getattr(cls, attr, None)
+ if isfunction(item):
+ item = unbound_method(cls, item)
+ if not ismethod(item):
+ return False
+ return sel.wantMethod(item)
+ cases = [self.makeTest(getattr(cls, case), cls)
+ for case in filter(wanted, dir(cls))]
+
+ return self.suiteClass(ContextList(cases, context=cls))
+
+ def makeTest(self, obj, parent=None):
+ try:
+ return self._makeTest(obj, parent)
+ except (KeyboardInterrupt, SystemExit):
+ raise
+ except:
+ exc = sys.exc_info()
+ return Failure(exc[0], exc[1], exc[2])
+
+ def _makeTest(self, obj, parent=None):
+ """Given a test object and its parent, return a test case
+ or test suite.
+ """
+ import inspect
+ try:
+ lineno = inspect.getsourcelines(obj)
+ except:
+ lineno = ("", 1)
+ if isfunction(obj) and parent and not isinstance(parent, types.ModuleType):
+ obj = unbound_method(parent, obj)
+ if isinstance(obj, unittest.TestCase):
+ return obj
+ elif isclass(obj):
+ if parent and obj.__module__ != parent.__name__:
+ obj = transplant_class(obj, parent.__name__)
+ if issubclass(obj, unittest.TestCase):
+ return self.loadTestsFromTestCase(obj)
+ else:
+ return self.loadTestsFromTestClass(obj)
+ elif ismethod(obj) or isunboundmethod(obj):
+ if parent is None:
+ parent = obj.__class__
+ if issubclass(parent, unittest.TestCase):
+ return parent(obj.__name__)
+ else:
+ if PYTHON_VERSION_MAJOR > 2:
+ setattr(obj, "im_class", parent)
+ setattr(obj, "im_self", parent)
+ test_case = MethodTestCase(obj)
+ test_case.lineno = lineno[1]
+ return test_case
+ elif isfunction(obj):
+ setattr(obj, "lineno", lineno[1])
+ if hasattr(obj, "__module__"):
+ if parent and obj.__module__ != parent.__name__:
+ obj = transplant_func(obj, parent.__name__)
+ else:
+ if parent:
+ obj = transplant_func(obj, parent.__name__)
+ else:
+ obj = transplant_func(obj)
+
+ if isgenerator(obj):
+ return self.loadTestsFromGenerator(obj, parent, lineno[1])
+ else:
+ return FunctionTestCase(obj)
+ else:
+ return Failure(TypeError,
+ "Can't make a test from %s" % obj)
+
+ def loadTestsFromTestCase(self, testCaseClass):
+ """Return a suite of all test cases contained in testCaseClass"""
+ try:
+ # PY-2412
+ # Twisted overrides the runTest function, so we don't want to harvest those tests
+ import twisted.trial.unittest
+ if issubclass(testCaseClass, twisted.trial.unittest.TestCase):
+ testCaseNames = self.getTestCaseNames(testCaseClass)
+ return self.suiteClass(map(testCaseClass, testCaseNames))
+ except ImportError:
+ pass
+
+ if issubclass(testCaseClass, unittest.TestSuite):
+ raise TypeError("Test cases should not be derived from TestSuite. Maybe you meant to derive from TestCase?")
+ testCaseNames = self.getTestCaseNames(testCaseClass)
+ if not testCaseNames and hasattr(testCaseClass, 'runTest'):
+ testCaseNames = ['runTest']
+ return self.suiteClass(map(testCaseClass, testCaseNames))
+
+ def parseGeneratedTest(self, test):
+ """Given the yield value of a test generator, return a func and args.
+ """
+ if not isinstance(test, tuple): # yield test
+ test_func, arg = (test, tuple())
+ elif len(test) == 1: # yield (test,)
+ test_func, arg = (test[0], tuple())
+ else: # yield test, foo, bar, ...
+ assert len(test) > 1 # sanity check
+ test_func, arg = (test[0], test[1:])
+ return test_func, arg
+
+defaultLoader = TestLoader()
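As a standalone sketch (not part of the patch), the three yield shapes that `parseGeneratedTest` accepts can be mirrored like this; `parse_generated_test` is an illustrative stand-in name:

```python
def parse_generated_test(test):
    """Mirror of TestLoader.parseGeneratedTest: split a generator's yield
    value into a test callable (or name) and its argument tuple."""
    if not isinstance(test, tuple):   # yield test
        return test, tuple()
    if len(test) == 1:                # yield (test,)
        return test[0], tuple()
    return test[0], test[1:]          # yield test, arg1, arg2, ...

def check(x):
    return x

assert parse_generated_test(check) == (check, ())
assert parse_generated_test((check,)) == (check, ())
assert parse_generated_test((check, 1, 2)) == (check, (1, 2))
```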
diff --git a/python/helpers/pycharm/nose_helper/raise_compat.py b/python/helpers/pycharm/nose_helper/raise_compat.py
new file mode 100644
index 0000000..3b8fb8d
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/raise_compat.py
@@ -0,0 +1,5 @@
+try:
+ from nose_helper._2 import *
+except (ImportError, SyntaxError):
+ from nose_helper._3 import *
+
diff --git a/python/helpers/pycharm/nose_helper/selector.py b/python/helpers/pycharm/nose_helper/selector.py
new file mode 100644
index 0000000..a83c1b9
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/selector.py
@@ -0,0 +1,91 @@
+"""
+Test Selection
+"""
+import unittest
+from nose_helper.config import Config
+
+class Selector(object):
+ """Examines test candidates and determines whether,
+ given the specified configuration, the test candidate should be selected
+ as a test.
+ """
+ def __init__(self, config):
+ if config is None:
+ config = Config()
+ self.configure(config)
+
+ def configure(self, config):
+ self.config = config
+ self.match = config.testMatch
+
+ def matches(self, name):
+ return self.match.search(name)
+
+ def wantClass(self, cls):
+ """Is the class a wanted test class
+ """
+ declared = getattr(cls, '__test__', None)
+ if declared is not None:
+ wanted = declared
+ else:
+ wanted = (not cls.__name__.startswith('_')
+ and (issubclass(cls, unittest.TestCase)
+ or self.matches(cls.__name__)))
+
+ return wanted
+
+ def wantFunction(self, function):
+ """Is the function a test function
+ """
+ try:
+ if hasattr(function, 'compat_func_name'):
+ funcname = function.compat_func_name
+ else:
+ funcname = function.__name__
+ except AttributeError:
+ # not a function
+ return False
+ import inspect
+ arguments = inspect.getargspec(function)
+ if len(arguments[0]) or arguments[1] or arguments[2]:
+ return False
+ declared = getattr(function, '__test__', None)
+ if declared is not None:
+ wanted = declared
+ else:
+ wanted = not funcname.startswith('_') and self.matches(funcname)
+
+ return wanted
+
+ def wantMethod(self, method):
+ """Is the method a test method
+ """
+ try:
+ method_name = method.__name__
+ except AttributeError:
+ # not a method
+ return False
+ if method_name.startswith('_'):
+ # never collect 'private' methods
+ return False
+ declared = getattr(method, '__test__', None)
+ if declared is not None:
+ wanted = declared
+ else:
+ wanted = self.matches(method_name)
+ return wanted
+
+ def wantModule(self, module):
+ """Is the module a test module?
+ We always want __main__.
+ """
+ declared = getattr(module, '__test__', None)
+ if declared is not None:
+ wanted = declared
+ else:
+ wanted = self.matches(module.__name__.split('.')[-1]) \
+ or module.__name__ == '__main__'
+ return wanted
+
+defaultSelector = Selector
+
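A standalone sketch of the selection rule `wantClass` implements: an explicit `__test__` attribute wins, otherwise a class is wanted if it is a `unittest.TestCase` subclass or its name matches the configured pattern. The regex below only approximates nose's default `testMatch` and is an assumption, not part of the patch:

```python
import re
import unittest

# Stand-in for Config.testMatch (nose's default is similar; exact pattern assumed).
test_match = re.compile(r'(?:^|[\b_\.-])[Tt]est')

def want_class(cls):
    """Approximate Selector.wantClass: __test__ overrides, else name/base check."""
    declared = getattr(cls, '__test__', None)
    if declared is not None:
        return declared
    return (not cls.__name__.startswith('_')
            and (issubclass(cls, unittest.TestCase)
                 or bool(test_match.search(cls.__name__))))

class TestThings(object):
    pass

class Helper(object):
    pass

assert want_class(TestThings)       # name matches the pattern
assert not want_class(Helper)       # neither a TestCase nor a match
```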
diff --git a/python/helpers/pycharm/nose_helper/suite.py b/python/helpers/pycharm/nose_helper/suite.py
new file mode 100644
index 0000000..052e968
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/suite.py
@@ -0,0 +1,320 @@
+"""
+Test Suites
+"""
+from __future__ import generators
+
+import sys
+import unittest
+from nose_helper.case import Test
+from nose_helper.config import Config
+from nose_helper.util import isclass, resolve_name, try_run
+PYTHON_VERSION_MAJOR = sys.version_info[0]
+class LazySuite(unittest.TestSuite):
+ """A suite that may use a generator as its list of tests
+ """
+ def __init__(self, tests=()):
+ self._set_tests(tests)
+
+ def __iter__(self):
+ return iter(self._tests)
+
+ def __hash__(self):
+ return object.__hash__(self)
+
+ def addTest(self, test):
+ self._precache.append(test)
+
+ def __nonzero__(self):
+ if self._precache:
+ return True
+ if self.test_generator is None:
+ return False
+ try:
+ if PYTHON_VERSION_MAJOR == 3:
+ test = next(self.test_generator)
+ else:
+ test = self.test_generator.next()
+ if test is not None:
+ self._precache.append(test)
+ return True
+ except StopIteration:
+ pass
+ return False
+
+ def _get_tests(self):
+ if self.test_generator is not None:
+ for i in self.test_generator:
+ yield i
+ for test in self._precache:
+ yield test
+
+ def _set_tests(self, tests):
+ self._precache = []
+ is_suite = isinstance(tests, unittest.TestSuite)
+ if hasattr(tests, '__call__') and not is_suite:
+ self.test_generator = tests()
+ self.test_generator_counter = list(tests())
+ elif is_suite:
+ self.addTests([tests])
+ self.test_generator = None
+ self.test_generator_counter = None
+ else:
+ self.addTests(tests)
+ self.test_generator = None
+ self.test_generator_counter = None
+
+ def countTestCases(self):
+ counter = 0
+ generator = self.test_generator_counter
+ if generator is not None:
+ for test in generator:
+ counter += 1
+ for test in self._precache:
+ counter += test.countTestCases()
+ return counter
+
+ _tests = property(_get_tests, _set_tests, None,
+ "Access the tests in this suite.")
+
+class ContextSuite(LazySuite):
+ """A suite with context.
+ """
+ was_setup = False
+ was_torndown = False
+ classSetup = ('setup_class', 'setup_all', 'setupClass', 'setupAll',
+ 'setUpClass', 'setUpAll')
+ classTeardown = ('teardown_class', 'teardown_all', 'teardownClass',
+ 'teardownAll', 'tearDownClass', 'tearDownAll')
+ moduleSetup = ('setup_module', 'setupModule', 'setUpModule', 'setup',
+ 'setUp')
+ moduleTeardown = ('teardown_module', 'teardownModule', 'tearDownModule',
+ 'teardown', 'tearDown')
+ packageSetup = ('setup_package', 'setupPackage', 'setUpPackage')
+ packageTeardown = ('teardown_package', 'teardownPackage',
+ 'tearDownPackage')
+
+ def __init__(self, tests=(), context=None, factory=None,
+ config=None):
+
+ self.context = context
+ self.factory = factory
+ if config is None:
+ config = Config()
+ self.config = config
+ self.has_run = False
+ self.error_context = None
+ LazySuite.__init__(self, tests)
+
+ def __hash__(self):
+ return object.__hash__(self)
+
+ def __call__(self, *arg, **kw):
+ return self.run(*arg, **kw)
+
+ def _exc_info(self):
+ return sys.exc_info()
+
+ def addTests(self, tests, context=None):
+ if context:
+ self.context = context
+ if PYTHON_VERSION_MAJOR < 3 and isinstance(tests, basestring):
+ raise TypeError("tests must be an iterable of tests, not a string")
+ else:
+ if isinstance(tests, str):
+ raise TypeError("tests must be an iterable of tests, not a string")
+ for test in tests:
+ self.addTest(test)
+
+ def run(self, result):
+ """Run tests in suite inside of suite fixtures.
+ """
+ result, orig = result, result
+ try:
+ self.setUp()
+ except KeyboardInterrupt:
+ raise
+ except:
+ self.error_context = 'setup'
+ result.addError(self, self._exc_info())
+ return
+ try:
+ for test in self._tests:
+ if result.shouldStop:
+ break
+ test(orig)
+ finally:
+ self.has_run = True
+ try:
+ self.tearDown()
+ except KeyboardInterrupt:
+ raise
+ except:
+ self.error_context = 'teardown'
+ result.addError(self, self._exc_info())
+
+ def setUp(self):
+ if not self:
+ return
+ if self.was_setup:
+ return
+ context = self.context
+ if context is None:
+ return
+
+ factory = self.factory
+ if factory:
+ ancestors = factory.context.get(self, [])[:]
+ while ancestors:
+ ancestor = ancestors.pop()
+ if ancestor in factory.was_setup:
+ continue
+ self.setupContext(ancestor)
+ if context not in factory.was_setup:
+ self.setupContext(context)
+ else:
+ self.setupContext(context)
+ self.was_setup = True
+
+ def setupContext(self, context):
+ if self.factory:
+ if context in self.factory.was_setup:
+ return
+ self.factory.was_setup[context] = self
+ if isclass(context):
+ names = self.classSetup
+ else:
+ names = self.moduleSetup
+ if hasattr(context, '__path__'):
+ names = self.packageSetup + names
+ try_run(context, names)
+
+ def tearDown(self):
+ if not self.was_setup or self.was_torndown:
+ return
+ self.was_torndown = True
+ context = self.context
+ if context is None:
+ return
+
+ factory = self.factory
+ if factory:
+ ancestors = factory.context.get(self, []) + [context]
+ for ancestor in ancestors:
+ if ancestor not in factory.was_setup:
+ continue
+ if ancestor in factory.was_torndown:
+ continue
+ setup = factory.was_setup[ancestor]
+ if setup is self:
+ self.teardownContext(ancestor)
+ else:
+ self.teardownContext(context)
+
+ def teardownContext(self, context):
+ if self.factory:
+ if context in self.factory.was_torndown:
+ return
+ self.factory.was_torndown[context] = self
+ if isclass(context):
+ names = self.classTeardown
+ else:
+ names = self.moduleTeardown
+ if hasattr(context, '__path__'):
+ names = self.packageTeardown + names
+ try_run(context, names)
+
+ def _get_wrapped_tests(self):
+ for test in self._get_tests():
+ if isinstance(test, Test) or isinstance(test, unittest.TestSuite):
+ yield test
+ else:
+ yield Test(test,
+ config=self.config)
+
+ _tests = property(_get_wrapped_tests, LazySuite._set_tests, None,
+ "Access the tests in this suite. Tests are returned "
+ "inside of a context wrapper.")
+
+class ContextSuiteFactory(object):
+ suiteClass = ContextSuite
+ def __init__(self, config=None):
+ if config is None:
+ config = Config()
+ self.config = config
+ self.suites = {}
+ self.context = {}
+ self.was_setup = {}
+ self.was_torndown = {}
+
+ def __call__(self, tests, **kw):
+ """Return 'ContextSuite' for tests.
+ """
+ context = kw.pop('context', getattr(tests, 'context', None))
+ if context is None:
+ tests = self.wrapTests(tests)
+ context = self.findContext(tests)
+ return self.makeSuite(tests, context, **kw)
+
+ def ancestry(self, context):
+ """Return the ancestry of the context
+ """
+ if context is None:
+ return
+ if hasattr(context, 'im_class'):
+ context = context.im_class
+ if hasattr(context, '__module__'):
+ ancestors = context.__module__.split('.')
+ elif hasattr(context, '__name__'):
+ ancestors = context.__name__.split('.')[:-1]
+ else:
+ raise TypeError("%s has no ancestors?" % context)
+ while ancestors:
+ yield resolve_name('.'.join(ancestors))
+ ancestors.pop()
+
+ def findContext(self, tests):
+ if hasattr(tests, '__call__') or isinstance(tests, unittest.TestSuite):
+ return None
+ context = None
+ for test in tests:
+ # Don't look at suites for contexts, only tests
+ ctx = getattr(test, 'context', None)
+ if ctx is None:
+ continue
+ if context is None:
+ context = ctx
+ return context
+
+ def makeSuite(self, tests, context, **kw):
+ suite = self.suiteClass(
+ tests, context=context, config=self.config, factory=self, **kw)
+ if context is not None:
+ self.suites.setdefault(context, []).append(suite)
+ self.context.setdefault(suite, []).append(context)
+ for ancestor in self.ancestry(context):
+ self.suites.setdefault(ancestor, []).append(suite)
+ self.context[suite].append(ancestor)
+
+ return suite
+
+ def wrapTests(self, tests):
+ if hasattr(tests, '__call__') or isinstance(tests, unittest.TestSuite):
+ return tests
+ wrapped = []
+ for test in tests:
+ if isinstance(test, Test) or isinstance(test, unittest.TestSuite):
+ wrapped.append(test)
+ elif isinstance(test, ContextList):
+ wrapped.append(self.makeSuite(test, context=test.context))
+ else:
+ wrapped.append(
+ Test(test, config=self.config)
+ )
+ return wrapped
+
+class ContextList(object):
+ """a group of tests in a context.
+ """
+ def __init__(self, tests, context=None):
+ self.tests = tests
+ self.context = context
+
+ def __iter__(self):
+ return iter(self.tests)
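The precache trick in `LazySuite.__nonzero__` can be shown in a small standalone sketch (not part of the patch; `LazyList` is an illustrative name): checking truthiness has to pull one item from the generator, so that item is stashed and replayed on iteration rather than lost:

```python
class LazyList:
    """Sketch of LazySuite's precache idea: a truthiness check peeks one
    item from the generator and stores it so iteration does not skip it."""
    def __init__(self, gen):
        self._gen = gen
        self._precache = []

    def __bool__(self):
        if self._precache:
            return True
        try:
            self._precache.append(next(self._gen))
            return True
        except StopIteration:
            return False

    def __iter__(self):
        # replay peeked items first, then drain the generator
        while self._precache:
            yield self._precache.pop(0)
        for item in self._gen:
            yield item

lazy = LazyList(iter([1, 2]))
assert bool(lazy)                      # peeks one item into the precache
assert list(lazy) == [1, 2]            # the peeked item is not lost
assert not bool(LazyList(iter([])))
```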
diff --git a/python/helpers/pycharm/nose_helper/util.py b/python/helpers/pycharm/nose_helper/util.py
new file mode 100644
index 0000000..7419ea1d
--- /dev/null
+++ b/python/helpers/pycharm/nose_helper/util.py
@@ -0,0 +1,211 @@
+"""Utility functions and classes used by nose internally.
+"""
+import inspect
+import os
+import sys
+import types
+try:
+ # for python 2
+ from types import ClassType, TypeType
+ class_types = (ClassType, TypeType)
+except:
+ class_types = (type, )
+
+try:
+ #for jython
+ from compiler.consts import CO_GENERATOR
+except:
+ CO_GENERATOR=0x20
+
+PYTHON_VERSION_MAJOR = sys.version_info[0]
+PYTHON_VERSION_MINOR = sys.version_info[1]
+
+def cmp_lineno(a, b):
+ """Compare functions by their line numbers.
+ """
+ return cmp(func_lineno(a), func_lineno(b))
+
+def func_lineno(func):
+ """Get the line number of a function.
+ """
+ try:
+ return func.compat_co_firstlineno
+ except AttributeError:
+ try:
+ if PYTHON_VERSION_MAJOR == 3:
+ return func.__code__.co_firstlineno
+ return func.func_code.co_firstlineno
+ except AttributeError:
+ return -1
+
+def isclass(obj):
+ obj_type = type(obj)
+ return obj_type in class_types or issubclass(obj_type, type)
+
+def isgenerator(func):
+ if PYTHON_VERSION_MAJOR == 3:
+ return inspect.isgeneratorfunction(func)
+ try:
+ return func.func_code.co_flags & CO_GENERATOR != 0
+ except AttributeError:
+ return False
+
+def resolve_name(name, module=None):
+ """Resolve a dotted name to a module and its parts.
+ """
+ parts = name.split('.')
+ parts_copy = parts[:]
+ if module is None:
+ while parts_copy:
+ try:
+ module = __import__('.'.join(parts_copy))
+ break
+ except ImportError:
+ del parts_copy[-1]
+ if not parts_copy:
+ raise
+ parts = parts[1:]
+ obj = module
+ for part in parts:
+ obj = getattr(obj, part)
+ return obj
+
+def try_run(obj, names):
+ """Given a list of possible method names, try to run them with the
+ provided object.
+ """
+ for name in names:
+ func = getattr(obj, name, None)
+ if func is not None:
+ if type(obj) == types.ModuleType:
+ try:
+ args, varargs, varkw, defaults = inspect.getargspec(func)
+ except TypeError:
+ if hasattr(func, '__call__'):
+ func = func.__call__
+ try:
+ args, varargs, varkw, defaults = \
+ inspect.getargspec(func)
+ args.pop(0)
+ except TypeError:
+ raise TypeError("Attribute %s of %r is not a python "
+ "function. Only functions or callables"
+ " may be used as fixtures." %
+ (name, obj))
+ if len(args):
+ return func(obj)
+ return func()
+
+def src(filename):
+ """Find the python source file for a .pyc, .pyo
+ or $py.class file on jython
+ """
+ if filename is None:
+ return filename
+ if sys.platform.startswith('java') and filename.endswith('$py.class'):
+ return '.'.join((filename[:-9], 'py'))
+ base, ext = os.path.splitext(filename)
+ if ext in ('.pyc', '.pyo', '.py'):
+ return '.'.join((base, 'py'))
+ return filename
+
+def transplant_class(cls, module):
+ """
+ Make a class appear to reside in `module`, rather than the module in which
+ it is actually defined.
+ """
+ class C(cls):
+ pass
+ C.__module__ = module
+ C.__name__ = cls.__name__
+ return C
+
+def transplant_func(func, module = None):
+ """
+ Make a function imported from module A appear as if it is located
+ in module B.
+ """
+
+ def newfunc(*arg, **kw):
+ return func(*arg, **kw)
+
+ newfunc = make_decorator(func)(newfunc)
+ if module is None:
+ # __module__ expects the module's name string;
+ # inspect.getmodule(func) would return the module object itself
+ newfunc.__module__ = func.__module__
+ else:
+ newfunc.__module__ = module
+ return newfunc
+
+def make_decorator(func):
+ """
+ Wraps a test decorator so as to properly replicate metadata
+ of the decorated function.
+ """
+ def decorate(newfunc):
+ if hasattr(func, 'compat_func_name'):
+ name = func.compat_func_name
+ else:
+ name = func.__name__
+ newfunc.__dict__ = func.__dict__
+ newfunc.__doc__ = func.__doc__
+ if not hasattr(newfunc, 'compat_co_firstlineno'):
+ if PYTHON_VERSION_MAJOR == 3:
+ newfunc.compat_co_firstlineno = func.__code__.co_firstlineno
+ else:
+ newfunc.compat_co_firstlineno = func.func_code.co_firstlineno
+ try:
+ newfunc.__name__ = name
+ except TypeError:
+ newfunc.compat_func_name = name
+ return newfunc
+ return decorate
+
+# trick for python 3
+# The following emulates the behavior we need from an 'unbound method' under
+# Python 3.x: the ability to associate a class with a function definition so
+# that code can act on the associated class.
+
+class UnboundMethod:
+ def __init__(self, cls, func):
+ self.func = func
+ self.__self__ = UnboundSelf(cls)
+
+ def address(self):
+ cls = self.__self__.cls
+ module = cls.__module__
+ m = sys.modules[module]
+ file = getattr(m, '__file__', None)
+ if file is not None:
+ file = os.path.abspath(file)
+ return (src(file), module, "%s.%s" % (cls.__name__, self.func.__name__))
+
+ def __call__(self, *args, **kwargs):
+ return self.func(*args, **kwargs)
+
+ def __getattr__(self, attr):
+ return getattr(self.func, attr)
+
+class UnboundSelf:
+ def __init__(self, cls):
+ self.cls = cls
+
+ # We have to do this hackery because Python won't let us override the
+ # __class__ attribute...
+ def __getattribute__(self, attr):
+ if attr == '__class__':
+ return self.cls
+ else:
+ return object.__getattribute__(self, attr)
+
+def unbound_method(cls, func):
+ if inspect.ismethod(func):
+ return func
+ if not inspect.isfunction(func):
+ raise TypeError('%s is not a function' % (repr(func),))
+ return UnboundMethod(cls, func)
+
+def ismethod(obj):
+ return inspect.ismethod(obj) or isinstance(obj, UnboundMethod)
+
+def isunboundmethod(obj):
+ return (inspect.ismethod(obj) and getattr(obj, 'im_self', getattr(obj, '__self__', None)) is None) or isinstance(obj, UnboundMethod)
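The `UnboundSelf.__getattribute__` hack above can be demonstrated in isolation (a standalone sketch, not part of the patch; `FakeSelf` and `Spam` are illustrative names): Python will not let `__class__` be overridden directly, but intercepting attribute lookup makes the object report any class we choose while its real type is unchanged:

```python
class FakeSelf:
    """Pretend to be an instance of `cls` by intercepting __class__ lookups,
    mirroring the UnboundSelf trick."""
    def __init__(self, cls):
        self._cls = cls

    def __getattribute__(self, attr):
        if attr == '__class__':
            return object.__getattribute__(self, '_cls')
        return object.__getattribute__(self, attr)

class Spam(object):
    pass

fake = FakeSelf(Spam)
assert fake.__class__ is Spam      # attribute lookup is spoofed
assert type(fake) is FakeSelf      # the real type is untouched
```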
diff --git a/python/helpers/pycharm/nose_utils.py b/python/helpers/pycharm/nose_utils.py
new file mode 100644
index 0000000..62c77a2
--- /dev/null
+++ b/python/helpers/pycharm/nose_utils.py
@@ -0,0 +1,240 @@
+from tcmessages import TeamcityServiceMessages
+import sys, traceback, datetime
+import unittest
+from tcunittest import strclass
+from tcunittest import TeamcityTestResult
+
+try:
+ from nose.util import isclass # backwards compat
+ from nose.config import Config
+ from nose.result import TextTestResult
+ from nose import SkipTest
+ from nose.plugins.errorclass import ErrorClassPlugin
+except (Exception, ):
+ e = sys.exc_info()[1]
+ raise NameError(
+ "Something went wrong. Do you have nosetests installed? Original error: %s" % e)
+
+class TeamcityPlugin(ErrorClassPlugin, TextTestResult, TeamcityTestResult):
+ """
+ TeamcityTest plugin for nose tests
+ """
+ name = "TeamcityPlugin"
+ enabled = True
+
+ def __init__(self, stream=sys.stderr, descriptions=None, verbosity=1,
+ config=None, errorClasses=None):
+ super(TeamcityPlugin, self).__init__()
+
+ if errorClasses is None:
+ errorClasses = {}
+
+ self.errorClasses = errorClasses
+ if config is None:
+ config = Config()
+ self.config = config
+ self.output = stream
+ self.messages = TeamcityServiceMessages(self.output,
+ prepend_linebreak=True)
+ self.messages.testMatrixEntered()
+ self.current_suite = None
+ TextTestResult.__init__(self, stream, descriptions, verbosity, config,
+ errorClasses)
+ TeamcityTestResult.__init__(self, stream)
+
+ def configure(self, options, conf):
+ if not self.can_configure:
+ return
+ self.conf = conf
+
+
+ def addError(self, test, err):
+ exctype, value, tb = err
+ err = self.formatErr(err)
+ if exctype == SkipTest:
+ self.messages.testIgnored(self.getTestName(test), message='Skip')
+ else:
+ self.messages.testError(self.getTestName(test), message='Error', details=err)
+
+ def formatErr(self, err):
+ exctype, value, tb = err
+ if isinstance(value, str):
+ value = exctype(value)
+ return ''.join(traceback.format_exception(exctype, value, tb))
+
+ def is_gen(self, test):
+ if hasattr(test, "test") and hasattr(test.test, "descriptor"):
+ if test.test.descriptor is not None:
+ return True
+ return False
+
+
+ def getTestName(self, test):
+ if hasattr(test, "error_context"):
+ return test.error_context
+ test_name_full = str(test)
+ if self.is_gen(test):
+ return test_name_full
+
+ ind_1 = test_name_full.rfind('(')
+ if ind_1 != -1:
+ return test_name_full[:ind_1]
+ ind = test_name_full.rfind('.')
+ if ind != -1:
+ return test_name_full[ind + 1:]
+ return test_name_full
+
+
+ def addFailure(self, test, err):
+ err = self.formatErr(err)
+
+ self.messages.testFailed(self.getTestName(test),
+ message='Failure', details=err)
+
+
+ def addSkip(self, test, reason):
+ self.messages.testIgnored(self.getTestName(test), message=reason)
+
+
+ def __getSuite(self, test):
+ if hasattr(test, "suite"):
+ suite = strclass(test.suite)
+ suite_location = test.suite.location
+ location = test.suite.abs_location
+ if hasattr(test, "lineno"):
+ location = location + ":" + str(test.lineno)
+ else:
+ location = location + ":" + str(test.test.lineno)
+ else:
+ suite = strclass(test.__class__)
+ suite_location = "python_uttestid://" + suite
+ try:
+ from nose_helper.util import func_lineno
+
+ if hasattr(test.test, "descriptor") and test.test.descriptor:
+ suite_location = "file://" + self.test_address(
+ test.test.descriptor)
+ location = suite_location + ":" + str(
+ func_lineno(test.test.descriptor))
+ else:
+ suite_location = "file://" + self.test_address(
+ test.test.test)
+ location = "file://" + self.test_address(
+ test.test.test) + ":" + str(func_lineno(test.test.test))
+ except:
+ test_id = test.id()
+ suite_id = test_id[:test_id.rfind(".")]
+ suite_location = "python_uttestid://" + str(suite_id)
+ location = "python_uttestid://" + str(test_id)
+ return (location, suite_location)
+
+
+ def test_address(self, test):
+ if hasattr(test, "address"):
+ return test.address()[0]
+ t = type(test)
+ file = None
+ import types, os
+
+ if (t == types.FunctionType or issubclass(t, type) or t == type
+ or isclass(test)):
+ module = getattr(test, '__module__', None)
+ if module is not None:
+ m = sys.modules[module]
+ file = getattr(m, '__file__', None)
+ if file is not None:
+ file = os.path.abspath(file)
+ if file.endswith("pyc"):
+ file = file[:-1]
+ return file
+ raise TypeError("I don't know what %s is (%s)" % (test, t))
+
+
+ def getSuiteName(self, test):
+ test_name_full = str(test)
+
+ if self.is_gen(test):
+ ind_1 = test_name_full.rfind('(')
+ if ind_1 != -1:
+ ind = test_name_full.rfind('.')
+ if ind != -1:
+ return test_name_full[:test_name_full.rfind(".")]
+
+ ind_1 = test_name_full.rfind('(')
+ if ind_1 != -1:
+ return test_name_full[ind_1 + 1: -1]
+ ind = test_name_full.rfind('.')
+ if ind != -1:
+ return test_name_full[:test_name_full.rfind(".")]
+ return test_name_full
+
+
+ def startTest(self, test):
+ location, suite_location = self.__getSuite(test)
+ suite = self.getSuiteName(test)
+ if suite != self.current_suite:
+ if self.current_suite:
+ self.messages.testSuiteFinished(self.current_suite)
+ self.current_suite = suite
+ self.messages.testSuiteStarted(self.current_suite,
+ location=suite_location)
+ setattr(test, "startTime", datetime.datetime.now())
+ self.messages.testStarted(self.getTestName(test), location=location)
+
+
+ def stopTest(self, test):
+ start = getattr(test, "startTime", datetime.datetime.now())
+ d = datetime.datetime.now() - start
+ duration = d.microseconds / 1000 + d.seconds * 1000 + d.days * 86400000
+ self.messages.testFinished(self.getTestName(test),
+ duration=int(duration))
+
+
+ def finalize(self, result):
+ if self.current_suite:
+ self.messages.testSuiteFinished(self.current_suite)
+ self.current_suite = None
+
+
+class TeamcityNoseRunner(unittest.TextTestRunner):
+ """Test runner that supports teamcity output
+ """
+
+ def __init__(self, stream=sys.stdout, descriptions=1, verbosity=1,
+ config=None):
+ if config is None:
+ config = Config()
+ self.config = config
+
+ unittest.TextTestRunner.__init__(self, stream, descriptions, verbosity)
+
+
+ def _makeResult(self):
+ return TeamcityPlugin(self.stream,
+ self.descriptions,
+ self.verbosity,
+ self.config)
+
+ def run(self, test):
+ """Overrides to provide plugin hooks and defer all output to
+ the test result class.
+ """
+ #for 2.5 compat
+ plugins = self.config.plugins
+ plugins.configure(self.config.options, self.config)
+ plugins.begin()
+ wrapper = plugins.prepareTest(test)
+ if wrapper is not None:
+ test = wrapper
+
+ # plugins can decorate or capture the output stream
+ wrapped = self.config.plugins.setOutputStream(self.stream)
+ if wrapped is not None:
+ self.stream = wrapped
+
+ result = self._makeResult()
+ test(result)
+ result.endLastSuite()
+ plugins.finalize(result)
+
+ return result
\ No newline at end of file
diff --git a/python/helpers/pycharm/noserunner.py b/python/helpers/pycharm/noserunner.py
new file mode 100644
index 0000000..9f8b84e
--- /dev/null
+++ b/python/helpers/pycharm/noserunner.py
@@ -0,0 +1,96 @@
+import sys
+import os
+
+helpers_dir = os.getenv("PYCHARM_HELPERS_DIR", sys.path[0])
+if sys.path[0] != helpers_dir:
+ sys.path.insert(0, helpers_dir)
+
+from nose_utils import TeamcityPlugin
+
+from pycharm_run_utils import debug, import_system_module
+from pycharm_run_utils import adjust_sys_path
+
+adjust_sys_path(False)
+
+shlex = import_system_module("shlex")
+
+try:
+ from nose.core import TestProgram
+ from nose.config import Config
+ from nose.plugins.manager import DefaultPluginManager
+except:
+ raise NameError("Please install nosetests")
+
+teamcity_plugin = TeamcityPlugin()
+
+class MyConfig(Config):
+ def __init__(self, **kw):
+ super(MyConfig, self).__init__(**kw)
+
+ def __setstate__(self, state):
+ super(MyConfig, self).__setstate__(state)
+ self.plugins.addPlugin(teamcity_plugin)
+
+def process_args():
+ tests = []
+
+ opts = None
+ if sys.argv[-1].startswith("-"):
+ test_names = sys.argv[1:-1]
+ opts = sys.argv[-1]
+ else:
+ test_names = sys.argv[1:]
+
+ for arg in test_names:
+ arg = arg.strip()
+ if len(arg) == 0:
+ return
+
+ a = arg.split("::")
+ if len(a) == 1:
+ # From module or folder
+ a_splitted = a[0].split(";")
+ if len(a_splitted) != 1:
+ # means we have pattern to match against
+ if a_splitted[0].endswith("/"):
+ debug("/ from folder " + a_splitted[0] + ". Use pattern: " + a_splitted[1])
+ tests.append(a_splitted[0])
+ else:
+ if a[0].endswith("/"):
+ debug("/ from folder " + a[0])
+ tests.append(a[0])
+ else:
+ debug("/ from module " + a[0])
+ tests.append(a[0])
+
+ elif len(a) == 2:
+ # From testcase
+ debug("/ from testcase " + a[1] + " in " + a[0])
+ tests.append(a[0] + ":" + a[1])
+ else:
+ # From method in class or from function
+ debug("/ from method " + a[2] + " in testcase " + a[1] + " in " + a[0])
+ if a[1] == "":
+ # test function, not method
+ tests.append(a[0] + ":" + a[2])
+ else:
+ tests.append(a[0] + ":" + a[1] + "." + a[2])
+
+ argv = ['nosetests']
+
+ argv.extend(tests)
+
+
+ if opts:
+ options = shlex.split(opts)
+ argv.extend(options)
+
+ manager = DefaultPluginManager()
+ manager.addPlugin(teamcity_plugin)
+ config = MyConfig(plugins=manager)
+ config.configure(argv)
+
+ TestProgram(argv=argv, config=config)
+
+if __name__ == "__main__":
+ process_args()
\ No newline at end of file
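The `::`-separated spec handling in `process_args` can be condensed into a standalone sketch (not part of the patch; `to_nose_spec` is a hypothetical helper name): PyCharm passes `file::Class::method` style arguments, and they are rewritten into nose's `file:Class.method` form:

```python
def to_nose_spec(arg):
    """Hypothetical helper mirroring the branches in process_args above."""
    parts = arg.split("::")
    if len(parts) == 1:          # module or folder
        return parts[0]
    if len(parts) == 2:          # file::TestCase
        return parts[0] + ":" + parts[1]
    if parts[1] == "":           # file::::function (no class)
        return parts[0] + ":" + parts[2]
    return parts[0] + ":" + parts[1] + "." + parts[2]

assert to_nose_spec("tests/test_a.py") == "tests/test_a.py"
assert to_nose_spec("tests/test_a.py::MyCase") == "tests/test_a.py:MyCase"
assert to_nose_spec("tests/test_a.py::MyCase::test_x") == "tests/test_a.py:MyCase.test_x"
assert to_nose_spec("tests/test_a.py::::test_fn") == "tests/test_a.py:test_fn"
```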
diff --git a/python/helpers/pycharm/pycharm_commands/__init__.py b/python/helpers/pycharm/pycharm_commands/__init__.py
new file mode 100644
index 0000000..5267186
--- /dev/null
+++ b/python/helpers/pycharm/pycharm_commands/__init__.py
@@ -0,0 +1 @@
+# __init__
\ No newline at end of file
diff --git a/python/helpers/pycharm/pycharm_commands/pycharm_test.py b/python/helpers/pycharm/pycharm_commands/pycharm_test.py
new file mode 100644
index 0000000..e9a08ff
--- /dev/null
+++ b/python/helpers/pycharm/pycharm_commands/pycharm_test.py
@@ -0,0 +1,19 @@
+__author__ = 'ktisha'
+try:
+ from pkg_resources import EntryPoint
+ from setuptools.command import test
+ from tcunittest import TeamcityTestRunner
+except ImportError:
+ raise NameError("Something went wrong, do you have setuptools installed?")
+
+class pycharm_test(test.test):
+ def run_tests(self):
+ import unittest
+
+ loader_ep = EntryPoint.parse("x=" + self.test_loader)
+ loader_class = loader_ep.load(require=False)
+ unittest.main(
+ None, None, [unittest.__file__] + self.test_args,
+ testRunner=TeamcityTestRunner,
+ testLoader=loader_class()
+ )
diff --git a/python/helpers/pycharm/pycharm_run_utils.py b/python/helpers/pycharm/pycharm_run_utils.py
new file mode 100644
index 0000000..5fbc35c
--- /dev/null
+++ b/python/helpers/pycharm/pycharm_run_utils.py
@@ -0,0 +1,37 @@
+__author__ = 'ktisha'
+import os, sys
+import imp
+
+PYTHON_VERSION_MAJOR = sys.version_info[0]
+PYTHON_VERSION_MINOR = sys.version_info[1]
+
+ENABLE_DEBUG_LOGGING = False
+if os.getenv("UTRUNNER_ENABLE_DEBUG_LOGGING"):
+ ENABLE_DEBUG_LOGGING = True
+
+def debug(what):
+ if ENABLE_DEBUG_LOGGING:
+ sys.stdout.writelines(str(what) + '\n')
+
+def adjust_sys_path(add_script_parent=True, script_index=1):
+ sys.path.pop(0)
+ if add_script_parent:
+ script_path = os.path.dirname(sys.argv[script_index])
+ insert_to_sys_path(script_path)
+
+def adjust_django_sys_path():
+ sys.path.pop(0)
+ script_path = sys.argv[-1]
+ insert_to_sys_path(script_path)
+
+def import_system_module(name):
+ f, filename, desc = imp.find_module(name)
+ return imp.load_module('pycharm_' + name, f, filename, desc)
+
+def getModuleName(prefix, cnt):
+ return prefix + "%" + str(cnt)
+
+def insert_to_sys_path(script_path):
+ while script_path in sys.path:
+ sys.path.remove(script_path)
+ sys.path.insert(0, script_path)
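The `insert_to_sys_path` helper above removes every duplicate of a path before putting it first on `sys.path`. A standalone sketch of the same dedupe-then-prepend logic (the function name is ours; it operates on a plain list so it does not touch the real `sys.path`):

```python
def dedupe_and_prepend(path_list, entry):
    # Mirrors insert_to_sys_path above: drop every existing occurrence
    # of the entry, then insert it at position 0.
    while entry in path_list:
        path_list.remove(entry)
    path_list.insert(0, entry)
    return path_list

paths = ["/a", "/b", "/a", "/c"]
dedupe_and_prepend(paths, "/a")
# paths is now ["/a", "/b", "/c"]
```

This guarantees the script's directory shadows any same-named module elsewhere on the path, without letting repeated runs grow `sys.path`.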
diff --git a/python/helpers/pycharm/pycharm_setup_runner.py b/python/helpers/pycharm/pycharm_setup_runner.py
new file mode 100644
index 0000000..5cea51e
--- /dev/null
+++ b/python/helpers/pycharm/pycharm_setup_runner.py
@@ -0,0 +1,26 @@
+__author__ = 'ktisha'
+
+import sys
+from pycharm_run_utils import PYTHON_VERSION_MAJOR, PYTHON_VERSION_MINOR
+#noinspection PyUnresolvedReferences
+import pycharm_commands # we need pycharm_commands module to be loaded
+
+if __name__ == "__main__":
+ parameters = []
+
+ test_suite = sys.argv.pop(-1)
+ while test_suite.startswith("-"):
+ parameters.append(test_suite)
+ test_suite = sys.argv.pop(-1)
+
+ sys.argv = [test_suite, "--command-packages", "pycharm_commands", "pycharm_test"]
+ sys.argv.extend(parameters)
+ __file__ = test_suite
+
+ if PYTHON_VERSION_MAJOR == 2 and PYTHON_VERSION_MINOR == 4:
+ #noinspection PyCompatibility
+ execfile(test_suite)
+ else:
+ #noinspection PyCompatibility
+ with open(test_suite, "r") as fh:
+ exec (fh.read(), globals(), locals())
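The `__main__` block above rewrites `sys.argv` so that distutils runs the custom `pycharm_test` command: trailing option flags are peeled off the end, the last remaining argument is taken as the setup script, and `--command-packages` points distutils at `pycharm_commands`. A standalone sketch of that rewriting (the function name is ours, not part of the runner):

```python
def rewrite_argv(argv):
    # Mirrors the __main__ block of pycharm_setup_runner.py: collect
    # trailing "-"-prefixed flags, treat the last non-flag argument as
    # the setup script, and inject the distutils command-package options.
    args = list(argv)
    parameters = []
    setup_script = args.pop(-1)
    while setup_script.startswith("-"):
        parameters.append(setup_script)
        setup_script = args.pop(-1)
    new_argv = [setup_script, "--command-packages", "pycharm_commands", "pycharm_test"]
    new_argv.extend(parameters)
    return new_argv

rewrite_argv(["runner.py", "setup.py", "-v"])
# ["setup.py", "--command-packages", "pycharm_commands", "pycharm_test", "-v"]
```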
diff --git a/python/helpers/pycharm/pytest_teamcity.py b/python/helpers/pycharm/pytest_teamcity.py
new file mode 100644
index 0000000..2eaa15c
--- /dev/null
+++ b/python/helpers/pycharm/pytest_teamcity.py
@@ -0,0 +1,150 @@
+import os
+import sys
+helpers_dir = os.getenv("PYCHARM_HELPERS_DIR", sys.path[0])
+if sys.path[0] != helpers_dir:
+ sys.path.insert(0, helpers_dir)
+
+from tcmessages import TeamcityServiceMessages
+from pycharm_run_utils import adjust_sys_path
+
+adjust_sys_path(False)
+
+messages = TeamcityServiceMessages(prepend_linebreak=True)
+messages.testMatrixEntered()
+try:
+ import pytest
+ PYVERSION = [int(x) for x in pytest.__version__.split(".")]
+except:
+ import py
+ PYVERSION = [int(x) for x in py.__version__.split(".")]
+
+def get_name(nodeid):
+ return nodeid.split("::")[-1]
+
+def fspath_to_url(fspath):
+ return "file:///" + str(fspath).replace("\\", "/")
+
+if PYVERSION > [1, 4, 0]:
+ items = {}
+ current_suite = None
+ current_file = None
+ current_file_suite = None
+
+ def pytest_runtest_logstart(nodeid, location):
+ path = "file://" + os.path.realpath(location[0])
+ if location[1]:
+ path += ":" + str(location[1] + 1)
+ global current_suite, current_file, current_file_suite
+ current_file = nodeid.split("::")[0]
+
+ file_suite = current_file.split("/")[-1]
+ if file_suite != current_file_suite:
+ if current_suite:
+ messages.testSuiteFinished(current_suite)
+ if current_file_suite:
+ messages.testSuiteFinished(current_file_suite)
+ current_file_suite = file_suite
+ if current_file_suite:
+ messages.testSuiteStarted(current_file_suite, location="file://" + os.path.realpath(location[0]))
+
+ if location[2].find(".") != -1:
+ suite = location[2].split(".")[0]
+ name = location[2].split(".")[-1]
+ else:
+ name = location[2]
+ splitted = nodeid.split("::")
+ try:
+ ind = splitted.index(name.split("[")[0])
+ except ValueError:
+ try:
+ ind = splitted.index(name)
+ except ValueError:
+ ind = 0
+ if splitted[ind-1] == current_file:
+ suite = None
+ else:
+ suite = current_suite
+ if suite != current_suite:
+ if current_suite:
+ messages.testSuiteFinished(current_suite)
+ current_suite = suite
+ if current_suite:
+ messages.testSuiteStarted(current_suite, location="file://" + os.path.realpath(location[0]))
+ messages.testStarted(name, location=path)
+ items[nodeid] = name
+
+ def pytest_runtest_logreport(report):
+ name = items[report.nodeid]
+
+ if report.skipped:
+ messages.testIgnored(name)
+ elif report.failed:
+ messages.testFailed(name, details=report.longrepr)
+ elif report.when == "call":
+ messages.testFinished(name)
+
+ def pytest_sessionfinish(session, exitstatus):
+ if current_suite:
+ messages.testSuiteFinished(current_suite)
+ if current_file_suite:
+ messages.testSuiteFinished(current_file_suite)
+
+ from _pytest.terminal import TerminalReporter
+ class PycharmTestReporter(TerminalReporter):
+ def __init__(self, config, file=None):
+ TerminalReporter.__init__(self, config, file)
+
+ def summary_errors(self):
+ reports = self.getreports('error')
+ if not reports:
+ return
+ for rep in self.stats['error']:
+ name = rep.nodeid.split("/")[-1]
+ location = None
+ if hasattr(rep, 'location'):
+ location, lineno, domain = rep.location
+
+ messages.testSuiteStarted(name, location=fspath_to_url(location))
+ messages.testStarted("<noname>", location=fspath_to_url(location))
+ TerminalReporter.summary_errors(self)
+ messages.testError("<noname>")
+ messages.testSuiteFinished(name)
+
+else:
+ def pytest_collectstart(collector):
+ if collector.name != "()":
+ messages.testSuiteStarted(collector.name, location=fspath_to_url(collector.fspath))
+
+ def pytest_runtest_makereport(item, call):
+ if call.when == "setup":
+ fspath, lineno, msg = item.reportinfo()
+ url = fspath_to_url(fspath)
+ if lineno: url += ":" + str(lineno)
+ # messages.testStarted(item.name, location=url)
+
+ def pytest_runtest_logreport(report):
+ if report.item._args:
+ name = report.item.function.__name__ + str(report.item._args)
+ else:
+ name = report.item.name
+ if report.failed:
+ messages.testFailed(name, details=report.longrepr)
+ elif report.skipped:
+ messages.testIgnored(name)
+ else:
+ messages.testFinished(name)
+
+ def pytest_collectreport(report):
+ if report.collector.name != "()":
+ messages.testSuiteFinished(report.collector.name)
+
+ def pytest_itemstart(item, node=None):
+ if item._args:
+ name = item.function.__name__ + str(item._args)
+ else:
+ name = item.name
+ if hasattr(item, "_fslineno"):
+ path = fspath_to_url(item._fslineno[0]) + ":" + str(item._fslineno[1] + 1)
+ else:
+ path = fspath_to_url(item.fspath)
+ messages.testStarted(name, location=path)
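`fspath_to_url` above builds the `file:///` location hints embedded in the service messages by normalizing backslashes. A usage sketch of the same conversion:

```python
def fspath_to_url(fspath):
    # Same conversion as in pytest_teamcity.py: forward slashes plus a
    # "file:///" prefix (drive letters get no special treatment).
    return "file:///" + str(fspath).replace("\\", "/")

print(fspath_to_url("C:\\work\\test_a.py"))  # file:///C:/work/test_a.py
```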
diff --git a/python/helpers/pycharm/pytestrunner.py b/python/helpers/pycharm/pytestrunner.py
new file mode 100644
index 0000000..0725e2f
--- /dev/null
+++ b/python/helpers/pycharm/pytestrunner.py
@@ -0,0 +1,58 @@
+import sys
+
+has_pytest = False
+# There is a difference between the 1.3.4 and 2.0.2 versions:
+# since version 1.4, the testing tool "py.test" has shipped in its own "pytest" distribution.
+try:
+ import pytest
+ has_pytest = True
+except:
+ try:
+ import py
+ except:
+ raise NameError("No py.test runner found in selected interpreter")
+
+def get_plugin_manager():
+ try:
+ from _pytest.config import get_plugin_manager
+ return get_plugin_manager()
+ except ImportError:
+ from _pytest.core import PluginManager
+ return PluginManager(load=True)
+
+if has_pytest:
+ _preinit = []
+ def main():
+ args = sys.argv[1:]
+ _pluginmanager = get_plugin_manager()
+ hook = _pluginmanager.hook
+ try:
+ config = hook.pytest_cmdline_parse(
+ pluginmanager=_pluginmanager, args=args)
+ exitstatus = hook.pytest_cmdline_main(config=config)
+ except pytest.UsageError:
+ e = sys.exc_info()[1]
+ sys.stderr.write("ERROR: %s\n" %(e.args[0],))
+ exitstatus = 3
+ return exitstatus
+
+else:
+ def main():
+ args = sys.argv[1:]
+ config = py.test.config
+ try:
+ config.parse(args)
+ config.pluginmanager.do_configure(config)
+ session = config.initsession()
+ colitems = config.getinitialnodes()
+ exitstatus = session.main(colitems)
+ config.pluginmanager.do_unconfigure(config)
+ except config.Error:
+ e = sys.exc_info()[1]
+ sys.stderr.write("ERROR: %s\n" %(e.args[0],))
+ exitstatus = 3
+ py.test.config = py.test.config.__class__()
+ return exitstatus
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/python/helpers/pycharm/runpy_compat.py b/python/helpers/pycharm/runpy_compat.py
new file mode 100644
index 0000000..c178e3a
--- /dev/null
+++ b/python/helpers/pycharm/runpy_compat.py
@@ -0,0 +1,257 @@
+import sys
+from types import ModuleType
+import os, imp
+import marshal
+
+def read_code(stream):
+    # Minimal equivalent of pkgutil.read_code, which ImpLoader.get_code
+    # below relies on: verify the .pyc magic number, skip the timestamp,
+    # and unmarshal the code object.
+    magic = stream.read(4)
+    if magic != imp.get_magic():
+        return None
+    stream.read(4)  # skip the timestamp
+    return marshal.load(stream)
+
+class ImpLoader:
+ code = source = None
+
+ def __init__(self, fullname, file, filename, etc):
+ self.file = file
+ self.filename = filename
+ self.fullname = fullname
+ self.etc = etc
+
+ def load_module(self, fullname):
+ self._reopen()
+ try:
+ mod = imp.load_module(fullname, self.file, self.filename, self.etc)
+ finally:
+ if self.file:
+ self.file.close()
+ return mod
+
+ def get_data(self, pathname):
+ return open(pathname, "rb").read()
+
+ def _reopen(self):
+ if self.file and self.file.closed:
+ mod_type = self.etc[2]
+ if mod_type==imp.PY_SOURCE:
+ self.file = open(self.filename, 'rU')
+ elif mod_type in (imp.PY_COMPILED, imp.C_EXTENSION):
+ self.file = open(self.filename, 'rb')
+
+ def _fix_name(self, fullname):
+ if fullname is None:
+ fullname = self.fullname
+ elif fullname != self.fullname:
+ raise ImportError("Loader for module %s cannot handle "
+ "module %s" % (self.fullname, fullname))
+ return fullname
+
+ def is_package(self, fullname):
+ fullname = self._fix_name(fullname)
+ return self.etc[2]==imp.PKG_DIRECTORY
+
+ def get_code(self, fullname=None):
+ fullname = self._fix_name(fullname)
+ if self.code is None:
+ mod_type = self.etc[2]
+ if mod_type==imp.PY_SOURCE:
+ source = self.get_source(fullname)
+ self.code = compile(source, self.filename, 'exec')
+ elif mod_type==imp.PY_COMPILED:
+ self._reopen()
+ try:
+ self.code = read_code(self.file)
+ finally:
+ self.file.close()
+ elif mod_type==imp.PKG_DIRECTORY:
+ self.code = self._get_delegate().get_code()
+ return self.code
+
+ def get_source(self, fullname=None):
+ fullname = self._fix_name(fullname)
+ if self.source is None:
+ mod_type = self.etc[2]
+ if mod_type==imp.PY_SOURCE:
+ self._reopen()
+ try:
+ self.source = self.file.read()
+ finally:
+ self.file.close()
+ elif mod_type==imp.PY_COMPILED:
+ if os.path.exists(self.filename[:-1]):
+ f = open(self.filename[:-1], 'rU')
+ self.source = f.read()
+ f.close()
+ elif mod_type==imp.PKG_DIRECTORY:
+ self.source = self._get_delegate().get_source()
+ return self.source
+
+
+ def _get_delegate(self):
+ return ImpImporter(self.filename).find_module('__init__')
+
+ def get_filename(self, fullname=None):
+ fullname = self._fix_name(fullname)
+ mod_type = self.etc[2]
+ if self.etc[2]==imp.PKG_DIRECTORY:
+ return self._get_delegate().get_filename()
+ elif self.etc[2] in (imp.PY_SOURCE, imp.PY_COMPILED, imp.C_EXTENSION):
+ return self.filename
+ return None
+
+
+class ImpImporter:
+ def __init__(self, path=None):
+ self.path = path
+
+ def find_module(self, fullname, path=None):
+ # Note: we ignore 'path' argument since it is only used via meta_path
+ subname = fullname.split(".")[-1]
+ if subname != fullname and self.path is None:
+ return None
+ if self.path is None:
+ path = None
+ else:
+ path = [os.path.realpath(self.path)]
+ try:
+ file, filename, etc = imp.find_module(subname, path)
+ except ImportError:
+ return None
+ return ImpLoader(fullname, file, filename, etc)
+
+ def iter_modules(self, prefix=''):
+ if self.path is None or not os.path.isdir(self.path):
+ return
+
+ yielded = {}
+ import inspect
+
+ filenames = os.listdir(self.path)
+ filenames.sort() # handle packages before same-named modules
+
+ for fn in filenames:
+ modname = inspect.getmodulename(fn)
+ if modname=='__init__' or modname in yielded:
+ continue
+
+ path = os.path.join(self.path, fn)
+ ispkg = False
+
+ if not modname and os.path.isdir(path) and '.' not in fn:
+ modname = fn
+ for fn in os.listdir(path):
+ subname = inspect.getmodulename(fn)
+ if subname=='__init__':
+ ispkg = True
+ break
+ else:
+ continue # not a package
+
+ if modname and '.' not in modname:
+ yielded[modname] = 1
+ yield prefix + modname, ispkg
+
+def get_importer(path_item):
+ try:
+ importer = sys.path_importer_cache[path_item]
+ except KeyError:
+ for path_hook in sys.path_hooks:
+ try:
+ importer = path_hook(path_item)
+ break
+ except ImportError:
+ pass
+ else:
+ importer = None
+ sys.path_importer_cache.setdefault(path_item, importer)
+
+ if importer is None:
+ try:
+ importer = ImpImporter(path_item)
+ except ImportError:
+ importer = None
+ return importer
+
+def iter_importers(fullname=""):
+ if fullname.startswith('.'):
+ raise ImportError("Relative module names not supported")
+ if '.' in fullname:
+ # Get the containing package's __path__
+ pkg = '.'.join(fullname.split('.')[:-1])
+ if pkg not in sys.modules:
+ __import__(pkg)
+ path = getattr(sys.modules[pkg], '__path__', None) or []
+ else:
+ for importer in sys.meta_path:
+ yield importer
+ path = sys.path
+ for item in path:
+ yield get_importer(item)
+ if '.' not in fullname:
+ yield ImpImporter()
+
+def find_loader(fullname):
+ for importer in iter_importers(fullname):
+ loader = importer.find_module(fullname)
+ if loader is not None:
+ return loader
+
+ return None
+
+def get_loader(module_or_name):
+ if module_or_name in sys.modules:
+ module_or_name = sys.modules[module_or_name]
+ if isinstance(module_or_name, ModuleType):
+ module = module_or_name
+ loader = getattr(module, '__loader__', None)
+ if loader is not None:
+ return loader
+ fullname = module.__name__
+ else:
+ fullname = module_or_name
+ return find_loader(fullname)
+
+
+def _get_filename(loader, mod_name):
+ for attr in ("get_filename", "_get_filename"):
+ meth = getattr(loader, attr, None)
+ if meth is not None:
+ return meth(mod_name)
+ return None
+
+def _get_module_details(mod_name):
+ loader = get_loader(mod_name)
+ if loader is None:
+ raise ImportError("No module named %s" % mod_name)
+ if loader.is_package(mod_name):
+ if mod_name == "__main__" or mod_name.endswith(".__main__"):
+ raise ImportError("Cannot use package as __main__ module")
+ try:
+ pkg_main_name = mod_name + ".__main__"
+ return _get_module_details(pkg_main_name)
+ except ImportError, e:
+ raise ImportError(("%s; %r is a package and cannot " +
+ "be directly executed") %(e, mod_name))
+ code = loader.get_code(mod_name)
+ if code is None:
+ raise ImportError("No code object available for %s" % mod_name)
+ filename = _get_filename(loader, mod_name)
+ return mod_name, loader, code, filename
+
+def _run_code(code, run_globals, init_globals=None,
+ mod_name=None, mod_fname=None,
+ mod_loader=None, pkg_name=None):
+ if init_globals is not None:
+ run_globals.update(init_globals)
+ run_globals.update(__name__ = mod_name,
+ __file__ = mod_fname,
+ __loader__ = mod_loader,
+ __package__ = pkg_name)
+ exec code in run_globals
+ return run_globals
+
+def run_module(mod_name, init_globals=None,
+ run_name=None):
+ mod_name, loader, code, fname = _get_module_details(mod_name)
+ if run_name is None:
+ run_name = mod_name
+
+ ind = mod_name.rfind(".")
+ if ind != -1:
+ pkg_name = mod_name[:ind]
+ else:
+   pkg_name = ""  # a top-level module has no containing package
+ return _run_code(code, {}, init_globals, run_name,
+ fname, loader, pkg_name)
\ No newline at end of file
diff --git a/python/helpers/pycharm/tcmessages.py b/python/helpers/pycharm/tcmessages.py
new file mode 100644
index 0000000..24f4a2e
--- /dev/null
+++ b/python/helpers/pycharm/tcmessages.py
@@ -0,0 +1,63 @@
+import sys
+
+class TeamcityServiceMessages:
+ quote = {"'": "|'", "|": "||", "\n": "|n", "\r": "|r", ']': '|]'}
+
+ def __init__(self, output=sys.stdout, prepend_linebreak=False):
+ self.output = output
+ self.prepend_linebreak = prepend_linebreak
+
+ def escapeValue(self, value):
+ if sys.version_info[0] <= 2 and isinstance(value, unicode):
+ s = value.encode("utf-8")
+ else:
+ s = str(value)
+ return "".join([self.quote.get(x, x) for x in s])
+
+ def message(self, messageName, **properties):
+ s = "##teamcity[" + messageName
+ for k, v in properties.items():
+ if v is None:
+ continue
+ s = s + " %s='%s'" % (k, self.escapeValue(v))
+ s += "]\n"
+
+ if self.prepend_linebreak: self.output.write("\n")
+ self.output.write(s)
+
+ def testSuiteStarted(self, suiteName, location=None):
+ self.message('testSuiteStarted', name=suiteName, locationHint=location)
+
+ def testSuiteFinished(self, suiteName):
+ self.message('testSuiteFinished', name=suiteName)
+
+ def testStarted(self, testName, location=None):
+ self.message('testStarted', name=testName, locationHint=location)
+
+ def testFinished(self, testName, duration=None):
+ self.message('testFinished', name=testName, duration=duration)
+
+ def testIgnored(self, testName, message=''):
+ self.message('testIgnored', name=testName, message=message)
+
+ def testFailed(self, testName, message='', details='', expected='', actual=''):
+ if expected and actual:
+ self.message('testFailed', type='comparisonFailure', name=testName, message=message,
+ details=details, expected=expected, actual=actual)
+ else:
+ self.message('testFailed', name=testName, message=message, details=details)
+
+ def testError(self, testName, message='', details=''):
+ self.message('testFailed', name=testName, message=message, details=details, error="true")
+
+ def testStdOut(self, testName, out):
+ self.message('testStdOut', name=testName, out=out)
+
+ def testStdErr(self, testName, out):
+ self.message('testStdErr', name=testName, out=out)
+
+ def testCount(self, count):
+ self.message('testCount', count=count)
+
+ def testMatrixEntered(self):
+ self.message('enteredTheMatrix')
diff --git a/python/helpers/pycharm/tcunittest.py b/python/helpers/pycharm/tcunittest.py
new file mode 100644
index 0000000..07da02d
--- /dev/null
+++ b/python/helpers/pycharm/tcunittest.py
@@ -0,0 +1,203 @@
+import traceback, sys
+from unittest import TestResult
+import datetime
+
+from tcmessages import TeamcityServiceMessages
+
+PYTHON_VERSION_MAJOR = sys.version_info[0]
+
+def strclass(cls):
+ if not cls.__name__:
+ return cls.__module__
+ return "%s.%s" % (cls.__module__, cls.__name__)
+
+def smart_str(s):
+ encoding='utf-8'
+ errors='strict'
+ if PYTHON_VERSION_MAJOR < 3:
+ is_string = isinstance(s, basestring)
+ else:
+ is_string = isinstance(s, str)
+ if not is_string:
+ try:
+ return str(s)
+ except UnicodeEncodeError:
+ if isinstance(s, Exception):
+ # An Exception subclass containing non-ASCII data that doesn't
+ # know how to print itself properly. We shouldn't raise a
+ # further exception.
+ return ' '.join([smart_str(arg) for arg in s])
+ return unicode(s).encode(encoding, errors)
+ elif PYTHON_VERSION_MAJOR < 3 and isinstance(s, unicode):
+ return s.encode(encoding, errors)
+ else:
+ return s
+
+class TeamcityTestResult(TestResult):
+ def __init__(self, stream=sys.stdout, *args, **kwargs):
+ TestResult.__init__(self)
+ for arg, value in kwargs.items():
+ setattr(self, arg, value)
+ self.output = stream
+ self.messages = TeamcityServiceMessages(self.output, prepend_linebreak=True)
+ self.messages.testMatrixEntered()
+ self.current_suite = None
+
+ def find_first(self, val):
+ quot = val[0]
+ count = 1
+ quote_ind = val[count:].find(quot)
+ while quote_ind != -1 and val[count+quote_ind-1] == "\\":
+ count = count + quote_ind + 1
+ quote_ind = val[count:].find(quot)
+
+ return val[0:quote_ind+count+1]
+
+ def find_second(self, val):
+ val_index = val.find("!=")
+ if val_index != -1:
+ count = 1
+ val = val[val_index+2:].strip()
+ quot = val[0]
+ quote_ind = val[count:].find(quot)
+ while quote_ind != -1 and val[count+quote_ind-1] == "\\":
+ count = count + quote_ind + 1
+ quote_ind = val[count:].find(quot)
+ return val[0:quote_ind+count+1]
+
+ else:
+ quot = val[-1]
+ count = 0
+ quote_ind = val[:len(val)-count-1].rfind(quot)
+ while val[quote_ind-1] == "\\":
+ quote_ind = val[:quote_ind-1].rfind(quot)
+ return val[quote_ind:]
+
+ def formatErr(self, err):
+ exctype, value, tb = err
+ return ''.join(traceback.format_exception(exctype, value, tb))
+
+ def getTestName(self, test):
+ if hasattr(test, '_testMethodName'):
+ if test._testMethodName == "runTest":
+ return str(test)
+ return test._testMethodName
+ else:
+ test_name = str(test)
+ whitespace_index = test_name.index(" ")
+ if whitespace_index != -1:
+ test_name = test_name[:whitespace_index]
+ return test_name
+
+ def getTestId(self, test):
+ return test.id()
+
+ def addSuccess(self, test):
+ TestResult.addSuccess(self, test)
+
+ def addError(self, test, err):
+ TestResult.addError(self, test, err)
+
+ err = self._exc_info_to_string(err, test)
+
+ self.messages.testError(self.getTestName(test),
+ message='Error', details=err)
+
+ def find_error_value(self, err):
+ error_value = traceback.extract_tb(err)
+ error_value = error_value[-1][-1]
+ return error_value.split('assert')[-1].strip()
+
+ def addFailure(self, test, err):
+ TestResult.addFailure(self, test, err)
+
+ error_value = smart_str(err[1])
+ if not len(error_value):
+ # means it's test function and we have to extract value from traceback
+ error_value = self.find_error_value(err[2])
+
+ self_find_first = self.find_first(error_value)
+ self_find_second = self.find_second(error_value)
+ quotes = ["'", '"']
+ if (self_find_first[0] == self_find_first[-1] and self_find_first[0] in quotes and
+ self_find_second[0] == self_find_second[-1] and self_find_second[0] in quotes):
+ # let's unescape the strings so PyCharm can show a nice multiline diff;
+ # by default the testing framework escapes all newline characters
+ first = self._unescape(self_find_first)
+ second = self._unescape(self_find_second)
+ else:
+ first = second = ""
+ err = self._exc_info_to_string(err, test)
+
+ self.messages.testFailed(self.getTestName(test),
+ message='Failure', details=err, expected=first, actual=second)
+
+ def addSkip(self, test, reason):
+ self.messages.testIgnored(self.getTestName(test), message=reason)
+
+ def __getSuite(self, test):
+ if hasattr(test, "suite"):
+ suite = strclass(test.suite)
+ suite_location = test.suite.location
+ location = test.suite.abs_location
+ if hasattr(test, "lineno"):
+ location = location + ":" + str(test.lineno)
+ else:
+ location = location + ":" + str(test.test.lineno)
+ else:
+ import inspect
+
+ try:
+ source_file = inspect.getsourcefile(test.__class__)
+ if source_file:
+ source_dir_splitted = source_file.split("/")[:-1]
+ source_dir = "/".join(source_dir_splitted) + "/"
+ else:
+ source_dir = ""
+ except TypeError:
+ source_dir = ""
+
+ suite = strclass(test.__class__)
+ suite_location = "python_uttestid://" + source_dir + suite
+ location = "python_uttestid://" + source_dir + str(test.id())
+
+ return (suite, location, suite_location)
+
+ def startTest(self, test):
+ suite, location, suite_location = self.__getSuite(test)
+ if suite != self.current_suite:
+ if self.current_suite:
+ self.messages.testSuiteFinished(self.current_suite)
+ self.current_suite = suite
+ self.messages.testSuiteStarted(self.current_suite, location=suite_location)
+ setattr(test, "startTime", datetime.datetime.now())
+ self.messages.testStarted(self.getTestName(test), location=location)
+
+ def stopTest(self, test):
+ start = getattr(test, "startTime", datetime.datetime.now())
+ d = datetime.datetime.now() - start
+ duration=d.microseconds / 1000 + d.seconds * 1000 + d.days * 86400000
+ self.messages.testFinished(self.getTestName(test), duration=int(duration))
+
+ def endLastSuite(self):
+ if self.current_suite:
+ self.messages.testSuiteFinished(self.current_suite)
+ self.current_suite = None
+
+ def _unescape(self, text):
+ # do not use text.decode('string_escape'): it causes problems with certain string encodings
+ return text.replace("\\n", "\n")
+
+class TeamcityTestRunner(object):
+ def __init__(self, stream=sys.stdout):
+ self.stream = stream
+
+ def _makeResult(self, **kwargs):
+ return TeamcityTestResult(self.stream, **kwargs)
+
+ def run(self, test, **kwargs):
+ result = self._makeResult(**kwargs)
+ result.messages.testCount(test.countTestCases())
+ test(result)
+ result.endLastSuite()
+ return result
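`stopTest` above converts the `datetime.timedelta` since `startTime` into whole milliseconds by hand (the code predates `timedelta.total_seconds`). The same arithmetic as a standalone function (name is ours):

```python
import datetime

def timedelta_ms(d):
    # Identical millisecond arithmetic to TeamcityTestResult.stopTest above.
    return int(d.microseconds / 1000 + d.seconds * 1000 + d.days * 86400000)

print(timedelta_ms(datetime.timedelta(seconds=1, microseconds=500000)))  # 1500
```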
diff --git a/python/helpers/pycharm/utrunner.py b/python/helpers/pycharm/utrunner.py
new file mode 100644
index 0000000..b2e333b
--- /dev/null
+++ b/python/helpers/pycharm/utrunner.py
@@ -0,0 +1,151 @@
+import sys
+import imp
+import os
+
+helpers_dir = os.getenv("PYCHARM_HELPERS_DIR", sys.path[0])
+if sys.path[0] != helpers_dir:
+ sys.path.insert(0, helpers_dir)
+
+from tcunittest import TeamcityTestRunner
+from nose_helper import TestLoader, ContextSuite
+from pycharm_run_utils import import_system_module
+from pycharm_run_utils import adjust_sys_path
+from pycharm_run_utils import debug, getModuleName, PYTHON_VERSION_MAJOR
+
+adjust_sys_path()
+
+os = import_system_module("os")
+re = import_system_module("re")
+
+modules = {}
+
+def loadSource(fileName):
+ baseName = os.path.basename(fileName)
+ moduleName = os.path.splitext(baseName)[0]
+
+ # For users who want to run unit tests under Django:
+ # Django relies on the module name, so qualify "models" with its package name
+ settings_file = os.getenv('DJANGO_SETTINGS_MODULE')
+ if settings_file and moduleName == "models":
+ baseName = os.path.realpath(fileName)
+ moduleName = ".".join((baseName.split(os.sep)[-2], "models"))
+
+ if moduleName in modules and len(sys.argv[1:-1]) == 1: # add unique number to prevent name collisions
+ cnt = 2
+ prefix = moduleName
+ while getModuleName(prefix, cnt) in modules:
+ cnt += 1
+ moduleName = getModuleName(prefix, cnt)
+ debug("/ Loading " + fileName + " as " + moduleName)
+ module = imp.load_source(moduleName, fileName)
+ modules[moduleName] = module
+ return module
+
+def walkModules(modulesAndPattern, dirname, names):
+ modules = modulesAndPattern[0]
+ pattern = modulesAndPattern[1]
+ prog_list = [re.compile(pat.strip()) for pat in pattern.split(',')]
+ for name in names:
+ for prog in prog_list:
+ if name.endswith(".py") and prog.match(name):
+ modules.append(loadSource(os.path.join(dirname, name)))
+
+def loadModulesFromFolderRec(folder, pattern = "test.*"):
+ modules = []
+ if PYTHON_VERSION_MAJOR == 3:
+ prog_list = [re.compile(pat.strip()) for pat in pattern.split(',')]
+ for root, dirs, files in os.walk(folder):
+ for name in files:
+ for prog in prog_list:
+ if name.endswith(".py") and prog.match(name):
+ modules.append(loadSource(os.path.join(root, name)))
+ else: # actually for jython compatibility
+ os.path.walk(folder, walkModules, (modules, pattern))
+
+ return modules
+
+testLoader = TestLoader()
+all = ContextSuite()
+pure_unittest = False
+
+def setLoader(module):
+ global testLoader, all
+ try:
+ module.__getattribute__('unittest2')
+ import unittest2
+
+ testLoader = unittest2.TestLoader()
+ all = unittest2.TestSuite()
+ except:
+ pass
+
+if __name__ == "__main__":
+ arg = sys.argv[-1]
+ if arg == "true":
+ import unittest
+
+ testLoader = unittest.TestLoader()
+ all = unittest.TestSuite()
+ pure_unittest = True
+
+ options = {}
+ for arg in sys.argv[1:-1]:
+ arg = arg.strip()
+ if len(arg) == 0:
+ continue
+
+ if arg.startswith("--"):
+ options[arg[2:]] = True
+ continue
+
+ a = arg.split("::")
+ if len(a) == 1:
+ # From module or folder
+ a_splitted = a[0].split(";")
+ if len(a_splitted) != 1:
+ # means we have pattern to match against
+ if a_splitted[0].endswith("/"):
+ debug("/ from folder " + a_splitted[0] + ". Use pattern: " + a_splitted[1])
+ modules = loadModulesFromFolderRec(a_splitted[0], a_splitted[1])
+ else:
+ if a[0].endswith("/"):
+ debug("/ from folder " + a[0])
+ modules = loadModulesFromFolderRec(a[0])
+ else:
+ debug("/ from module " + a[0])
+ modules = [loadSource(a[0])]
+
+ for module in modules:
+ all.addTests(testLoader.loadTestsFromModule(module))
+
+ elif len(a) == 2:
+ # From testcase
+ debug("/ from testcase " + a[1] + " in " + a[0])
+ module = loadSource(a[0])
+ setLoader(module)
+
+ if pure_unittest:
+ all.addTests(testLoader.loadTestsFromTestCase(getattr(module, a[1])))
+ else:
+ all.addTests(testLoader.loadTestsFromTestClass(getattr(module, a[1])),
+ getattr(module, a[1]))
+ else:
+ # From method in class or from function
+ debug("/ from method " + a[2] + " in testcase " + a[1] + " in " + a[0])
+ module = loadSource(a[0])
+ setLoader(module)
+
+ if a[1] == "":
+ # test function, not method
+ all.addTest(testLoader.makeTest(getattr(module, a[2])))
+ else:
+ testCaseClass = getattr(module, a[1])
+ try:
+ all.addTest(testCaseClass(a[2]))
+ except:
+ # class is not a testcase inheritor
+ all.addTest(
+ testLoader.makeTest(getattr(testCaseClass, a[2]), testCaseClass))
+
+ debug("/ Loaded " + str(all.countTestCases()) + " tests")
+ TeamcityTestRunner().run(all, **options)
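The `__main__` block above parses each argument as a `::`-separated test spec: one part means a module or folder, two parts a file plus TestCase, three parts a file, class and method (an empty middle part marks a top-level test function). A small standalone parser sketch of that convention (function and dict keys are ours):

```python
def parse_target(spec):
    # Mirrors the sys.argv handling in utrunner.py above:
    # one part = module/folder, two = file::TestCase,
    # three = file::TestCase::method (empty class means a plain function).
    parts = spec.split("::")
    if len(parts) == 1:
        return {"kind": "module", "path": parts[0]}
    if len(parts) == 2:
        return {"kind": "testcase", "path": parts[0], "case": parts[1]}
    return {"kind": "method", "path": parts[0],
            "case": parts[1] or None, "name": parts[2]}

parse_target("tests.py::MyTest::test_ok")
# {"kind": "method", "path": "tests.py", "case": "MyTest", "name": "test_ok"}
```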
diff --git a/python/helpers/pycharm_generator_utils/__init__.py b/python/helpers/pycharm_generator_utils/__init__.py
new file mode 100644
index 0000000..7c7597a
--- /dev/null
+++ b/python/helpers/pycharm_generator_utils/__init__.py
@@ -0,0 +1 @@
+__author__ = 'ktisha'
diff --git a/python/helpers/pycharm_generator_utils/constants.py b/python/helpers/pycharm_generator_utils/constants.py
new file mode 100644
index 0000000..d50f704
--- /dev/null
+++ b/python/helpers/pycharm_generator_utils/constants.py
@@ -0,0 +1,790 @@
+import os
+import re
+import types
+import sys
+import string
+import time
+
+
+VERSION = "1.131"
+
+OUT_ENCODING = 'utf-8'
+
+version = (
+ (sys.hexversion & (0xff << 24)) >> 24,
+ (sys.hexversion & (0xff << 16)) >> 16
+)
+
+if version[0] >= 3:
+ #noinspection PyUnresolvedReferences
+ import builtins as the_builtins
+
+ string = "".__class__
+
+ STR_TYPES = (getattr(the_builtins, "bytes"), str)
+
+ NUM_TYPES = (int, float)
+ SIMPLEST_TYPES = NUM_TYPES + STR_TYPES + (None.__class__,)
+ EASY_TYPES = NUM_TYPES + STR_TYPES + (None.__class__, dict, tuple, list)
+
+ def the_exec(source, context):
+ exec (source, context)
+
+else: # < 3.0
+ import __builtin__ as the_builtins
+
+ STR_TYPES = (getattr(the_builtins, "unicode"), str)
+
+ NUM_TYPES = (int, long, float)
+ SIMPLEST_TYPES = NUM_TYPES + STR_TYPES + (types.NoneType,)
+ EASY_TYPES = NUM_TYPES + STR_TYPES + (types.NoneType, dict, tuple, list)
+
+ def the_exec(source, context):
+ #noinspection PyRedundantParentheses
+ exec (source) in context
+
+if version[0] == 2 and version[1] < 4:
+ HAS_DECORATORS = False
+
+ def lstrip(s, prefix):
+ i = 0
+ while s[i] == prefix:
+ i += 1
+ return s[i:]
+
+else:
+ HAS_DECORATORS = True
+ lstrip = string.lstrip
+
+# return type inference helper table
+INT_LIT = '0'
+FLOAT_LIT = '0.0'
+DICT_LIT = '{}'
+LIST_LIT = '[]'
+TUPLE_LIT = '()'
+BOOL_LIT = 'False'
+RET_TYPE = {# {'type_name': 'value_string'} lookup table
+ # chaining
+ "self": "self",
+ "self.": "self",
+ # int
+ "int": INT_LIT,
+ "Int": INT_LIT,
+ "integer": INT_LIT,
+ "Integer": INT_LIT,
+ "short": INT_LIT,
+ "long": INT_LIT,
+ "number": INT_LIT,
+ "Number": INT_LIT,
+ # float
+ "float": FLOAT_LIT,
+ "Float": FLOAT_LIT,
+ "double": FLOAT_LIT,
+ "Double": FLOAT_LIT,
+ "floating": FLOAT_LIT,
+ # boolean
+ "bool": BOOL_LIT,
+ "boolean": BOOL_LIT,
+ "Bool": BOOL_LIT,
+ "Boolean": BOOL_LIT,
+ "True": BOOL_LIT,
+ "true": BOOL_LIT,
+ "False": BOOL_LIT,
+ "false": BOOL_LIT,
+ # list
+ 'list': LIST_LIT,
+ 'List': LIST_LIT,
+ '[]': LIST_LIT,
+ # tuple
+ "tuple": TUPLE_LIT,
+ "sequence": TUPLE_LIT,
+ "Sequence": TUPLE_LIT,
+ # dict
+ "dict": DICT_LIT,
+ "Dict": DICT_LIT,
+ "dictionary": DICT_LIT,
+ "Dictionary": DICT_LIT,
+ "map": DICT_LIT,
+ "Map": DICT_LIT,
+ "hashtable": DICT_LIT,
+ "Hashtable": DICT_LIT,
+ "{}": DICT_LIT,
+ # "objects"
+ "object": "object()",
+}
+if version[0] < 3:
+ UNICODE_LIT = 'u""'
+ BYTES_LIT = '""'
+ RET_TYPE.update({
+ 'string': BYTES_LIT,
+ 'String': BYTES_LIT,
+ 'str': BYTES_LIT,
+ 'Str': BYTES_LIT,
+ 'character': BYTES_LIT,
+ 'char': BYTES_LIT,
+ 'unicode': UNICODE_LIT,
+ 'Unicode': UNICODE_LIT,
+ 'bytes': BYTES_LIT,
+ 'byte': BYTES_LIT,
+ 'Bytes': BYTES_LIT,
+ 'Byte': BYTES_LIT,
+ })
+ DEFAULT_STR_LIT = BYTES_LIT
+ # also, files:
+ RET_TYPE.update({
+ 'file': "file('/dev/null')",
+ })
+
+ def ensureUnicode(data):
+ if type(data) == str:
+ return data.decode(OUT_ENCODING, 'replace')
+ return unicode(data)
+else:
+ UNICODE_LIT = '""'
+ BYTES_LIT = 'b""'
+ RET_TYPE.update({
+ 'string': UNICODE_LIT,
+ 'String': UNICODE_LIT,
+ 'str': UNICODE_LIT,
+ 'Str': UNICODE_LIT,
+ 'character': UNICODE_LIT,
+ 'char': UNICODE_LIT,
+ 'unicode': UNICODE_LIT,
+ 'Unicode': UNICODE_LIT,
+ 'bytes': BYTES_LIT,
+ 'byte': BYTES_LIT,
+ 'Bytes': BYTES_LIT,
+ 'Byte': BYTES_LIT,
+ })
+ DEFAULT_STR_LIT = UNICODE_LIT
+ # also, files: we can't provide an easy expression on py3k
+ RET_TYPE.update({
+ 'file': None,
+ })
+
+ def ensureUnicode(data):
+ if type(data) == bytes:
+ return data.decode(OUT_ENCODING, 'replace')
+ return str(data)
+
+if version[0] > 2:
+ import io # in 3.0
+
+ #noinspection PyArgumentList
+ fopen = lambda name, mode: io.open(name, mode, encoding=OUT_ENCODING)
+else:
+ fopen = open
+
+if sys.platform == 'cli':
+ #noinspection PyUnresolvedReferences
+ from System import DateTime
+
+ class Timer(object):
+ def __init__(self):
+ self.started = DateTime.Now
+
+ def elapsed(self):
+ return (DateTime.Now - self.started).TotalMilliseconds
+else:
+ class Timer(object):
+ def __init__(self):
+ self.started = time.time()
+
+ def elapsed(self):
+ return int((time.time() - self.started) * 1000)
+
+IS_JAVA = hasattr(os, "java")
+
+BUILTIN_MOD_NAME = the_builtins.__name__
+
+IDENT_PATTERN = "[A-Za-z_][0-9A-Za-z_]*" # re pattern for identifier
+NUM_IDENT_PATTERN = re.compile("([A-Za-z_]+)[0-9]?[A-Za-z_]*") # 'foo_123' -> $1 = 'foo_'
+STR_CHAR_PATTERN = "[0-9A-Za-z_.,\+\-&\*% ]"
+
+DOC_FUNC_RE = re.compile("(?:.*\.)?(\w+)\(([^\)]*)\).*") # $1 = function name, $2 = arglist
+
+SANE_REPR_RE = re.compile(IDENT_PATTERN + "(?:\(.*\))?") # identifier with an optional (...) suffix, no capture groups
+
+IDENT_RE = re.compile("(" + IDENT_PATTERN + ")") # $1 = identifier
+
+STARS_IDENT_RE = re.compile("(\*?\*?" + IDENT_PATTERN + ")") # $1 = identifier, maybe with a * or **
+
+IDENT_EQ_RE = re.compile("(" + IDENT_PATTERN + "\s*=)") # $1 = identifier with a following '='
+
+SIMPLE_VALUE_RE = re.compile(
+ "(\([+-]?[0-9](?:\s*,\s*[+-]?[0-9])*\))|" + # a numeric tuple, e.g. in pygame
+ "([+-]?[0-9]+\.?[0-9]*(?:[Ee]?[+-]?[0-9]+\.?[0-9]*)?)|" + # number
+ "('" + STR_CHAR_PATTERN + "*')|" + # single-quoted string
+ '("' + STR_CHAR_PATTERN + '*")|' + # double-quoted string
+ "(\[\])|" +
+ "(\{\})|" +
+ "(\(\))|" +
+ "(True|False|None)"
+) # $? = sane default value
+
+########################### parsing ###########################################################
+if version[0] < 3:
+ from pyparsing import *
+else:
+ #noinspection PyUnresolvedReferences
+ from pyparsing_py3 import *
+
+# grammar to parse parameter lists
+
+# // snatched from parsePythonValue.py, from pyparsing samples, copyright 2006 by Paul McGuire but under BSD license.
+# we don't suppress lots of punctuation because we want it back when we reconstruct the lists
+
+lparen, rparen, lbrack, rbrack, lbrace, rbrace, colon = map(Literal, "()[]{}:")
+
+integer = Combine(Optional(oneOf("+ -")) + Word(nums)).setName("integer")
+real = Combine(Optional(oneOf("+ -")) + Word(nums) + "." +
+ Optional(Word(nums)) +
+ Optional(oneOf("e E") + Optional(oneOf("+ -")) + Word(nums))).setName("real")
+tupleStr = Forward()
+listStr = Forward()
+dictStr = Forward()
+
+boolLiteral = oneOf("True False")
+noneLiteral = Literal("None")
+
+listItem = real | integer | quotedString | unicodeString | boolLiteral | noneLiteral | \
+ Group(listStr) | tupleStr | dictStr
+
+tupleStr << ( Suppress("(") + Optional(delimitedList(listItem)) +
+ Optional(Literal(",")) + Suppress(")") ).setResultsName("tuple")
+
+listStr << (lbrack + Optional(delimitedList(listItem) +
+ Optional(Literal(","))) + rbrack).setResultsName("list")
+
+dictEntry = Group(listItem + colon + listItem)
+dictStr << (lbrace + Optional(delimitedList(dictEntry) + Optional(Literal(","))) + rbrace).setResultsName("dict")
+# \\ end of the snatched part
+
+# our output format is s-expressions:
+# (simple name optional_value) is name or name=value
+# (nested (simple ...) (simple ...)) is (name, name,...)
+# (opt ...) is [, ...] or suchlike.
+
+T_SIMPLE = 'Simple'
+T_NESTED = 'Nested'
+T_OPTIONAL = 'Opt'
+T_RETURN = "Ret"
+
+TRIPLE_DOT = '...'
+
+COMMA = Suppress(",")
+APOS = Suppress("'")
+QUOTE = Suppress('"')
+SP = Suppress(Optional(White()))
+
+ident = Word(alphas + "_", alphanums + "_-.").setName("ident") # we accept things like "foo-or-bar"
+decorated_ident = ident + Optional(Suppress(SP + Literal(":") + SP + ident)) # accept "foo: bar", ignore "bar"
+spaced_ident = Combine(decorated_ident + ZeroOrMore(Literal(' ') + decorated_ident)) # we accept 'list or tuple' or 'C struct'
+
+# allow quoted names, because __setattr__, etc docs use it
+paramname = spaced_ident | \
+ APOS + spaced_ident + APOS | \
+ QUOTE + spaced_ident + QUOTE
+
+parenthesized_tuple = ( Literal("(") + Optional(delimitedList(listItem, combine=True)) +
+ Optional(Literal(",")) + Literal(")") ).setResultsName("(tuple)")
+
+initializer = (SP + Suppress("=") + SP + Combine(parenthesized_tuple | listItem | ident)).setName("=init") # accept foo=defaultfoo
+
+param = Group(Empty().setParseAction(replaceWith(T_SIMPLE)) + Combine(Optional(oneOf("* **")) + paramname) + Optional(initializer))
+
+ellipsis = Group(
+ Empty().setParseAction(replaceWith(T_SIMPLE)) + \
+ (Literal("..") +
+ ZeroOrMore(Literal('.'))).setParseAction(replaceWith(TRIPLE_DOT)) # we want to accept both 'foo,..' and 'foo, ...'
+)
+
+paramSlot = Forward()
+
+simpleParamSeq = ZeroOrMore(paramSlot + COMMA) + Optional(paramSlot + Optional(COMMA))
+nestedParamSeq = Group(
+ Suppress('(').setParseAction(replaceWith(T_NESTED)) + \
+ simpleParamSeq + Optional(ellipsis + Optional(COMMA) + Optional(simpleParamSeq)) + \
+ Suppress(')')
+) # we accept "(a1, ... an)"
+
+paramSlot << (param | nestedParamSeq)
+
+optionalPart = Forward()
+
+paramSeq = simpleParamSeq + Optional(optionalPart) # this is our approximate target
+
+optionalPart << (
+ Group(
+ Suppress('[').setParseAction(replaceWith(T_OPTIONAL)) + Optional(COMMA) +
+ paramSeq + Optional(ellipsis) +
+ Suppress(']')
+ )
+ | ellipsis
+)
+
+return_type = Group(
+ Empty().setParseAction(replaceWith(T_RETURN)) +
+ Suppress(SP + (Literal("->") | (Literal(":") + SP + Literal("return"))) + SP) +
+ ident
+)
+
+# this is our ideal target, with balancing paren and a multiline rest of doc.
+paramSeqAndRest = paramSeq + Suppress(')') + Optional(return_type) + Suppress(Optional(Regex(".*(?s)")))
+############################################################################################
+
+
+# Some values are known to be of no use in source and need to be suppressed.
+# Dict is keyed by module names, with "*" meaning "any module";
+# values are lists of names of members whose value must be pruned.
+SKIP_VALUE_IN_MODULE = {
+ "sys": (
+ "modules", "path_importer_cache", "argv", "builtins",
+ "last_traceback", "last_type", "last_value", "builtin_module_names",
+ ),
+ "posix": (
+ "environ",
+ ),
+ "zipimport": (
+ "_zip_directory_cache",
+ ),
+ "*": (BUILTIN_MOD_NAME,)
+}
+# {"module": ("name",..)}: omit these names from the skeleton entirely.
+OMIT_NAME_IN_MODULE = {}
+
+if version[0] >= 3:
+ v = OMIT_NAME_IN_MODULE.get(BUILTIN_MOD_NAME, []) + ["True", "False", "None", "__debug__"]
+ OMIT_NAME_IN_MODULE[BUILTIN_MOD_NAME] = v
+
+if IS_JAVA and version > (2, 4): # in 2.5.1 things are way weird!
+ OMIT_NAME_IN_MODULE['_codecs'] = ['EncodingMap']
+ OMIT_NAME_IN_MODULE['_hashlib'] = ['Hash']
+
+ADD_VALUE_IN_MODULE = {
+ "sys": ("exc_value = Exception()", "exc_traceback=None"), # only present after an exception in current thread
+}
+
+# Some values are special and are better represented by hand-crafted constructs.
+# Dict is keyed by (module name, member name) and value is the replacement.
+REPLACE_MODULE_VALUES = {
+ ("numpy.core.multiarray", "typeinfo"): "{}",
+ ("psycopg2._psycopg", "string_types"): "{}", # badly mangled __eq__ breaks fmtValue
+ ("PyQt5.QtWidgets", "qApp") : "QApplication()", # instead of None
+}
+if version[0] <= 2:
+ REPLACE_MODULE_VALUES[(BUILTIN_MOD_NAME, "None")] = "object()"
+ for std_file in ("stdin", "stdout", "stderr"):
+ REPLACE_MODULE_VALUES[("sys", std_file)] = "open('')"
+
+# Some functions and methods of some builtin classes have special signatures.
+# {("class", "method"): ("signature_string")}
+PREDEFINED_BUILTIN_SIGS = { #TODO: user-skeleton
+ ("type", "__init__"): "(cls, what, bases=None, dict=None)", # two sigs squeezed into one
+ ("object", "__init__"): "(self)",
+ ("object", "__new__"): "(cls, *more)", # only for the sake of parameter names readability
+ ("object", "__subclasshook__"): "(cls, subclass)", # trusting PY-1818 on sig
+ ("int", "__init__"): "(self, x, base=10)", # overrides a fake
+ ("list", "__init__"): "(self, seq=())",
+ ("tuple", "__init__"): "(self, seq=())", # overrides a fake
+ ("set", "__init__"): "(self, seq=())",
+ ("dict", "__init__"): "(self, seq=None, **kwargs)",
+ ("property", "__init__"): "(self, fget=None, fset=None, fdel=None, doc=None)",
+ # TODO: infer, doc comments have it
+ ("dict", "update"): "(self, E=None, **F)", # docstring nearly lies
+ (None, "zip"): "(seq1, seq2, *more_seqs)",
+ (None, "range"): "(start=None, stop=None, step=None)", # suboptimal: allows empty arglist
+ (None, "filter"): "(function_or_none, sequence)",
+ (None, "iter"): "(source, sentinel=None)",
+ (None, "getattr"): "(object, name, default=None)",
+ ('frozenset', "__init__"): "(self, seq=())",
+ ("bytearray", "__init__"): "(self, source=None, encoding=None, errors='strict')",
+}
+
+if version[0] < 3:
+ PREDEFINED_BUILTIN_SIGS[
+ ("unicode", "__init__")] = "(self, string=u'', encoding=None, errors='strict')" # overrides a fake
+ PREDEFINED_BUILTIN_SIGS[("super", "__init__")] = "(self, type1, type2=None)"
+ PREDEFINED_BUILTIN_SIGS[
+ (None, "min")] = "(*args, **kwargs)" # too permissive, but py2.x won't allow a better sig
+ PREDEFINED_BUILTIN_SIGS[(None, "max")] = "(*args, **kwargs)"
+ PREDEFINED_BUILTIN_SIGS[("str", "__init__")] = "(self, string='')" # overrides a fake
+ PREDEFINED_BUILTIN_SIGS[(None, "print")] = "(*args, **kwargs)" # can't do better in 2.x
+else:
+ PREDEFINED_BUILTIN_SIGS[("super", "__init__")] = "(self, type1=None, type2=None)"
+ PREDEFINED_BUILTIN_SIGS[(None, "min")] = "(*args, key=None)"
+ PREDEFINED_BUILTIN_SIGS[(None, "max")] = "(*args, key=None)"
+ PREDEFINED_BUILTIN_SIGS[
+ (None, "open")] = "(file, mode='r', buffering=None, encoding=None, errors=None, newline=None, closefd=True)"
+ PREDEFINED_BUILTIN_SIGS[
+ ("str", "__init__")] = "(self, value='', encoding=None, errors='strict')" # overrides a fake
+ PREDEFINED_BUILTIN_SIGS[("str", "format")] = "(*args, **kwargs)"
+ PREDEFINED_BUILTIN_SIGS[
+ ("bytes", "__init__")] = "(self, value=b'', encoding=None, errors='strict')" # overrides a fake
+ PREDEFINED_BUILTIN_SIGS[("bytes", "format")] = "(*args, **kwargs)"
+ PREDEFINED_BUILTIN_SIGS[(None, "print")] = "(*args, sep=' ', end='\\n', file=None)" # proper signature
+
+if (2, 6) <= version < (3, 0):
+ PREDEFINED_BUILTIN_SIGS[("unicode", "format")] = "(*args, **kwargs)"
+ PREDEFINED_BUILTIN_SIGS[("str", "format")] = "(*args, **kwargs)"
+
+if version == (2, 5):
+ PREDEFINED_BUILTIN_SIGS[("unicode", "splitlines")] = "(keepends=None)" # a typo in docstring there
+
+if version >= (2, 7):
+ PREDEFINED_BUILTIN_SIGS[
+ ("enumerate", "__init__")] = "(self, iterable, start=0)" # docstring omits this completely.
+
+if version < (3, 3):
+ datetime_mod = "datetime"
+else:
+ datetime_mod = "_datetime"
+
+
+# NOTE: per-module signature data may be lazily imported
+# keyed by (module_name, class_name, method_name). PREDEFINED_BUILTIN_SIGS might be a layer of it.
+# value is ("signature", "return_literal")
+PREDEFINED_MOD_CLASS_SIGS = { #TODO: user-skeleton
+ (BUILTIN_MOD_NAME, None, 'divmod'): ("(x, y)", "(0, 0)"),
+
+ ("binascii", None, "hexlify"): ("(data)", BYTES_LIT),
+ ("binascii", None, "unhexlify"): ("(hexstr)", BYTES_LIT),
+
+ ("time", None, "ctime"): ("(seconds=None)", DEFAULT_STR_LIT),
+
+ ("_struct", None, "pack"): ("(fmt, *args)", BYTES_LIT),
+ ("_struct", None, "pack_into"): ("(fmt, buffer, offset, *args)", None),
+ ("_struct", None, "unpack"): ("(fmt, string)", None),
+ ("_struct", None, "unpack_from"): ("(fmt, buffer, offset=0)", None),
+ ("_struct", None, "calcsize"): ("(fmt)", INT_LIT),
+ ("_struct", "Struct", "__init__"): ("(self, fmt)", None),
+ ("_struct", "Struct", "pack"): ("(self, *args)", BYTES_LIT),
+ ("_struct", "Struct", "pack_into"): ("(self, buffer, offset, *args)", None),
+ ("_struct", "Struct", "unpack"): ("(self, string)", None),
+ ("_struct", "Struct", "unpack_from"): ("(self, buffer, offset=0)", None),
+
+ (datetime_mod, "date", "__new__"): ("(cls, year=None, month=None, day=None)", None),
+ (datetime_mod, "date", "fromordinal"): ("(cls, ordinal)", "date(1,1,1)"),
+ (datetime_mod, "date", "fromtimestamp"): ("(cls, timestamp)", "date(1,1,1)"),
+ (datetime_mod, "date", "isocalendar"): ("(self)", "(1, 1, 1)"),
+ (datetime_mod, "date", "isoformat"): ("(self)", DEFAULT_STR_LIT),
+ (datetime_mod, "date", "isoweekday"): ("(self)", INT_LIT),
+ (datetime_mod, "date", "replace"): ("(self, year=None, month=None, day=None)", "date(1,1,1)"),
+ (datetime_mod, "date", "strftime"): ("(self, format)", DEFAULT_STR_LIT),
+ (datetime_mod, "date", "timetuple"): ("(self)", "(0, 0, 0, 0, 0, 0, 0, 0, 0)"),
+ (datetime_mod, "date", "today"): ("(self)", "date(1, 1, 1)"),
+ (datetime_mod, "date", "toordinal"): ("(self)", INT_LIT),
+ (datetime_mod, "date", "weekday"): ("(self)", INT_LIT),
+ (datetime_mod, "timedelta", "__new__"
+ ): (
+ "(cls, days=None, seconds=None, microseconds=None, milliseconds=None, minutes=None, hours=None, weeks=None)",
+ None),
+ (datetime_mod, "datetime", "__new__"
+ ): (
+ "(cls, year=None, month=None, day=None, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)",
+ None),
+ (datetime_mod, "datetime", "astimezone"): ("(self, tz)", "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "combine"): ("(cls, date, time)", "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "date"): ("(self)", "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "fromtimestamp"): ("(cls, timestamp, tz=None)", "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "isoformat"): ("(self, sep='T')", DEFAULT_STR_LIT),
+ (datetime_mod, "datetime", "now"): ("(cls, tz=None)", "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "strptime"): ("(cls, date_string, format)", DEFAULT_STR_LIT),
+ (datetime_mod, "datetime", "replace" ):
+ (
+ "(self, year=None, month=None, day=None, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)",
+ "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "time"): ("(self)", "time(0, 0)"),
+ (datetime_mod, "datetime", "timetuple"): ("(self)", "(0, 0, 0, 0, 0, 0, 0, 0, 0)"),
+ (datetime_mod, "datetime", "timetz"): ("(self)", "time(0, 0)"),
+ (datetime_mod, "datetime", "utcfromtimestamp"): ("(self, timestamp)", "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "utcnow"): ("(cls)", "datetime(1, 1, 1)"),
+ (datetime_mod, "datetime", "utctimetuple"): ("(self)", "(0, 0, 0, 0, 0, 0, 0, 0, 0)"),
+ (datetime_mod, "time", "__new__"): (
+ "(cls, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)", None),
+ (datetime_mod, "time", "isoformat"): ("(self)", DEFAULT_STR_LIT),
+ (datetime_mod, "time", "replace"): (
+ "(self, hour=None, minute=None, second=None, microsecond=None, tzinfo=None)", "time(0, 0)"),
+ (datetime_mod, "time", "strftime"): ("(self, format)", DEFAULT_STR_LIT),
+ (datetime_mod, "tzinfo", "dst"): ("(self, date_time)", INT_LIT),
+ (datetime_mod, "tzinfo", "fromutc"): ("(self, date_time)", "datetime(1, 1, 1)"),
+ (datetime_mod, "tzinfo", "tzname"): ("(self, date_time)", DEFAULT_STR_LIT),
+ (datetime_mod, "tzinfo", "utcoffset"): ("(self, date_time)", INT_LIT),
+
+ ("_io", None, "open"): ("(name, mode=None, buffering=None)", "file('/dev/null')"),
+ ("_io", "FileIO", "read"): ("(self, size=-1)", DEFAULT_STR_LIT),
+ ("_fileio", "_FileIO", "read"): ("(self, size=-1)", DEFAULT_STR_LIT),
+
+ ("thread", None, "start_new"): ("(function, args, kwargs=None)", INT_LIT),
+ ("_thread", None, "start_new"): ("(function, args, kwargs=None)", INT_LIT),
+
+ ("itertools", "groupby", "__init__"): ("(self, iterable, key=None)", None),
+ ("itertools", None, "groupby"): ("(iterable, key=None)", LIST_LIT),
+
+ ("cStringIO", "OutputType", "seek"): ("(self, position, mode=0)", None),
+ ("cStringIO", "InputType", "seek"): ("(self, position, mode=0)", None),
+
+ # NOTE: providing sigs for third-party modules is shaky ground, but these ones are well-known
+ ("numpy.core.multiarray", "ndarray", "__array__"): ("(self, dtype=None)", None),
+ ("numpy.core.multiarray", None, "arange"): ("(start=None, stop=None, step=None, dtype=None)", None),
+ # same as range()
+ ("numpy.core.multiarray", None, "set_numeric_ops"): ("(**ops)", None),
+}
+
+bin_collections_names = ['collections', '_collections']
+
+for name in bin_collections_names:
+ PREDEFINED_MOD_CLASS_SIGS[(name, "deque", "__init__")] = ("(self, iterable=(), maxlen=None)", None)
+ PREDEFINED_MOD_CLASS_SIGS[(name, "defaultdict", "__init__")] = ("(self, default_factory=None, **kwargs)", None)
+
+if version[0] < 3:
+ PREDEFINED_MOD_CLASS_SIGS[("exceptions", "BaseException", "__unicode__")] = ("(self)", UNICODE_LIT)
+ PREDEFINED_MOD_CLASS_SIGS[("itertools", "product", "__init__")] = ("(self, *iterables, **kwargs)", LIST_LIT)
+else:
+ PREDEFINED_MOD_CLASS_SIGS[("itertools", "product", "__init__")] = ("(self, *iterables, repeat=1)", LIST_LIT)
+
+if version[0] < 3:
+ PREDEFINED_MOD_CLASS_SIGS[("PyQt4.QtCore", None, "pyqtSlot")] = (
+ "(*types, **keywords)", None) # doc assumes py3k syntax
+
+# known properties of modules
+# {"module": {("class", "property"): ("letters", ("getter", "type"))}},
+# where "letters" is any subset of r, w, d (read, write, del) and "getter" is the source of a typed getter.
+# if value is None, the property should be omitted.
+# read-only properties that return an object are not listed.
+G_OBJECT = ("lambda self: object()", None)
+G_TYPE = ("lambda self: type(object)", "type")
+G_DICT = ("lambda self: {}", "dict")
+G_STR = ("lambda self: ''", "string")
+G_TUPLE = ("lambda self: tuple()", "tuple")
+G_FLOAT = ("lambda self: 0.0", "float")
+G_INT = ("lambda self: 0", "int")
+G_BOOL = ("lambda self: True", "bool")
+
+KNOWN_PROPS = {
+ BUILTIN_MOD_NAME: {
+ ("object", '__class__'): ('r', G_TYPE),
+ ('complex', 'real'): ('r', G_FLOAT),
+ ('complex', 'imag'): ('r', G_FLOAT),
+ ("file", 'softspace'): ('r', G_BOOL),
+ ("file", 'name'): ('r', G_STR),
+ ("file", 'encoding'): ('r', G_STR),
+ ("file", 'mode'): ('r', G_STR),
+ ("file", 'closed'): ('r', G_BOOL),
+ ("file", 'newlines'): ('r', G_STR),
+ ("slice", 'start'): ('r', G_INT),
+ ("slice", 'step'): ('r', G_INT),
+ ("slice", 'stop'): ('r', G_INT),
+ ("super", '__thisclass__'): ('r', G_TYPE),
+ ("super", '__self__'): ('r', G_TYPE),
+ ("super", '__self_class__'): ('r', G_TYPE),
+ ("type", '__basicsize__'): ('r', G_INT),
+ ("type", '__itemsize__'): ('r', G_INT),
+ ("type", '__base__'): ('r', G_TYPE),
+ ("type", '__flags__'): ('r', G_INT),
+ ("type", '__mro__'): ('r', G_TUPLE),
+ ("type", '__bases__'): ('r', G_TUPLE),
+ ("type", '__dictoffset__'): ('r', G_INT),
+ ("type", '__dict__'): ('r', G_DICT),
+ ("type", '__name__'): ('r', G_STR),
+ ("type", '__weakrefoffset__'): ('r', G_INT),
+ },
+ "exceptions": {
+ ("BaseException", '__dict__'): ('r', G_DICT),
+ ("BaseException", 'message'): ('rwd', G_STR),
+ ("BaseException", 'args'): ('r', G_TUPLE),
+ ("EnvironmentError", 'errno'): ('rwd', G_INT),
+ ("EnvironmentError", 'message'): ('rwd', G_STR),
+ ("EnvironmentError", 'strerror'): ('rwd', G_INT),
+ ("EnvironmentError", 'filename'): ('rwd', G_STR),
+ ("SyntaxError", 'text'): ('rwd', G_STR),
+ ("SyntaxError", 'print_file_and_line'): ('rwd', G_BOOL),
+ ("SyntaxError", 'filename'): ('rwd', G_STR),
+ ("SyntaxError", 'lineno'): ('rwd', G_INT),
+ ("SyntaxError", 'offset'): ('rwd', G_INT),
+ ("SyntaxError", 'msg'): ('rwd', G_STR),
+ ("SyntaxError", 'message'): ('rwd', G_STR),
+ ("SystemExit", 'message'): ('rwd', G_STR),
+ ("SystemExit", 'code'): ('rwd', G_OBJECT),
+ ("UnicodeDecodeError", '__basicsize__'): None,
+ ("UnicodeDecodeError", '__itemsize__'): None,
+ ("UnicodeDecodeError", '__base__'): None,
+ ("UnicodeDecodeError", '__flags__'): ('rwd', G_INT),
+ ("UnicodeDecodeError", '__mro__'): None,
+ ("UnicodeDecodeError", '__bases__'): None,
+ ("UnicodeDecodeError", '__dictoffset__'): None,
+ ("UnicodeDecodeError", '__dict__'): None,
+ ("UnicodeDecodeError", '__name__'): None,
+ ("UnicodeDecodeError", '__weakrefoffset__'): None,
+ ("UnicodeEncodeError", 'end'): ('rwd', G_INT),
+ ("UnicodeEncodeError", 'encoding'): ('rwd', G_STR),
+ ("UnicodeEncodeError", 'object'): ('rwd', G_OBJECT),
+ ("UnicodeEncodeError", 'start'): ('rwd', G_INT),
+ ("UnicodeEncodeError", 'reason'): ('rwd', G_STR),
+ ("UnicodeEncodeError", 'message'): ('rwd', G_STR),
+ ("UnicodeTranslateError", 'end'): ('rwd', G_INT),
+ ("UnicodeTranslateError", 'encoding'): ('rwd', G_STR),
+ ("UnicodeTranslateError", 'object'): ('rwd', G_OBJECT),
+ ("UnicodeTranslateError", 'start'): ('rwd', G_INT),
+ ("UnicodeTranslateError", 'reason'): ('rwd', G_STR),
+ ("UnicodeTranslateError", 'message'): ('rwd', G_STR),
+ },
+ '_ast': {
+ ("AST", '__dict__'): ('rd', G_DICT),
+ },
+ 'posix': {
+ ("statvfs_result", 'f_flag'): ('r', G_INT),
+ ("statvfs_result", 'f_bavail'): ('r', G_INT),
+ ("statvfs_result", 'f_favail'): ('r', G_INT),
+ ("statvfs_result", 'f_files'): ('r', G_INT),
+ ("statvfs_result", 'f_frsize'): ('r', G_INT),
+ ("statvfs_result", 'f_blocks'): ('r', G_INT),
+ ("statvfs_result", 'f_ffree'): ('r', G_INT),
+ ("statvfs_result", 'f_bfree'): ('r', G_INT),
+ ("statvfs_result", 'f_namemax'): ('r', G_INT),
+ ("statvfs_result", 'f_bsize'): ('r', G_INT),
+
+ ("stat_result", 'st_ctime'): ('r', G_INT),
+ ("stat_result", 'st_rdev'): ('r', G_INT),
+ ("stat_result", 'st_mtime'): ('r', G_INT),
+ ("stat_result", 'st_blocks'): ('r', G_INT),
+ ("stat_result", 'st_gid'): ('r', G_INT),
+ ("stat_result", 'st_nlink'): ('r', G_INT),
+ ("stat_result", 'st_ino'): ('r', G_INT),
+ ("stat_result", 'st_blksize'): ('r', G_INT),
+ ("stat_result", 'st_dev'): ('r', G_INT),
+ ("stat_result", 'st_size'): ('r', G_INT),
+ ("stat_result", 'st_mode'): ('r', G_INT),
+ ("stat_result", 'st_uid'): ('r', G_INT),
+ ("stat_result", 'st_atime'): ('r', G_INT),
+ },
+ "pwd": {
+ ("struct_pwent", 'pw_dir'): ('r', G_STR),
+ ("struct_pwent", 'pw_gid'): ('r', G_INT),
+ ("struct_pwent", 'pw_passwd'): ('r', G_STR),
+ ("struct_pwent", 'pw_gecos'): ('r', G_STR),
+ ("struct_pwent", 'pw_shell'): ('r', G_STR),
+ ("struct_pwent", 'pw_name'): ('r', G_STR),
+ ("struct_pwent", 'pw_uid'): ('r', G_INT),
+
+ ("struct_passwd", 'pw_dir'): ('r', G_STR),
+ ("struct_passwd", 'pw_gid'): ('r', G_INT),
+ ("struct_passwd", 'pw_passwd'): ('r', G_STR),
+ ("struct_passwd", 'pw_gecos'): ('r', G_STR),
+ ("struct_passwd", 'pw_shell'): ('r', G_STR),
+ ("struct_passwd", 'pw_name'): ('r', G_STR),
+ ("struct_passwd", 'pw_uid'): ('r', G_INT),
+ },
+ "thread": {
+ ("_local", '__dict__'): None
+ },
+ "xxsubtype": {
+ ("spamdict", 'state'): ('r', G_INT),
+ ("spamlist", 'state'): ('r', G_INT),
+ },
+ "zipimport": {
+ ("zipimporter", 'prefix'): ('r', G_STR),
+ ("zipimporter", 'archive'): ('r', G_STR),
+ ("zipimporter", '_files'): ('r', G_DICT),
+ },
+ "_struct": {
+ ("Struct", "size"): ('r', G_INT),
+ ("Struct", "format"): ('r', G_STR),
+ },
+ datetime_mod: {
+ ("datetime", "hour"): ('r', G_INT),
+ ("datetime", "minute"): ('r', G_INT),
+ ("datetime", "second"): ('r', G_INT),
+ ("datetime", "microsecond"): ('r', G_INT),
+ ("date", "day"): ('r', G_INT),
+ ("date", "month"): ('r', G_INT),
+ ("date", "year"): ('r', G_INT),
+ ("time", "hour"): ('r', G_INT),
+ ("time", "minute"): ('r', G_INT),
+ ("time", "second"): ('r', G_INT),
+ ("time", "microsecond"): ('r', G_INT),
+ ("timedelta", "days"): ('r', G_INT),
+ ("timedelta", "seconds"): ('r', G_INT),
+ ("timedelta", "microseconds"): ('r', G_INT),
+ },
+}
+
+# Sometimes module X defines item foo but foo.__module__ == 'Y' instead of 'X';
+# module Y just re-exports foo, and foo fakes being defined in Y.
+# We list all such Ys keyed by X, all fully-qualified names:
+# {"real_definer_module": ("fake_reexporter_module",..)}
+KNOWN_FAKE_REEXPORTERS = {
+ "_collections": ('collections',),
+ "_functools": ('functools',),
+ "_socket": ('socket',), # .error, etc
+ "pyexpat": ('xml.parsers.expat',),
+ "_bsddb": ('bsddb.db',),
+ "pysqlite2._sqlite": ('pysqlite2.dbapi2',), # errors
+ "numpy.core.multiarray": ('numpy', 'numpy.core'),
+ "numpy.core._dotblas": ('numpy', 'numpy.core'),
+ "numpy.core.umath": ('numpy', 'numpy.core'),
+ "gtk._gtk": ('gtk', 'gtk.gdk',),
+ "gobject._gobject": ('gobject',),
+ "gnomecanvas": ("gnome.canvas",),
+}
+
+KNOWN_FAKE_BASES = []
+# list of classes that pretend to be base classes but are mere wrappers, and their defining modules
+# [(class, module),...] -- real objects, not names
+#noinspection PyBroadException
+try:
+ #noinspection PyUnresolvedReferences
+ import sip as sip_module # Qt specifically likes it
+
+ if hasattr(sip_module, 'wrapper'):
+ KNOWN_FAKE_BASES.append((sip_module.wrapper, sip_module))
+ if hasattr(sip_module, 'simplewrapper'):
+ KNOWN_FAKE_BASES.append((sip_module.simplewrapper, sip_module))
+ del sip_module
+except:
+ pass
+
+# A list of builtin classes for which we use a fake __init__
+FAKE_BUILTIN_INITS = (tuple, type, int, str)
+if version[0] < 3:
+ FAKE_BUILTIN_INITS = FAKE_BUILTIN_INITS + (getattr(the_builtins, "unicode"),)
+else:
+ FAKE_BUILTIN_INITS = FAKE_BUILTIN_INITS + (getattr(the_builtins, "str"), getattr(the_builtins, "bytes"))
+
+# Some builtin methods are decorated, but this is hard to detect.
+# {("class_name", "method_name"): "decorator"}
+KNOWN_DECORATORS = {
+ ("dict", "fromkeys"): "staticmethod",
+ ("object", "__subclasshook__"): "classmethod",
+ ("bytearray", "fromhex"): "classmethod",
+ ("bytes", "fromhex"): "classmethod",
+ ("bytearray", "maketrans"): "staticmethod",
+ ("bytes", "maketrans"): "staticmethod",
+ ("int", "from_bytes"): "classmethod",
+}
+
+classobj_txt = ( #TODO: user-skeleton
+"class ___Classobj:" "\n"
+" '''A mock class representing the old style class base.'''" "\n"
+" __module__ = ''" "\n"
+" __class__ = None" "\n"
+"\n"
+" def __init__(self):" "\n"
+" pass" "\n"
+" __dict__ = {}" "\n"
+" __doc__ = ''" "\n"
+)
+
+MAC_STDLIB_PATTERN = re.compile("/System/Library/Frameworks/Python\\.framework/Versions/(.+)/lib/python\\1/(.+)")
+MAC_SKIP_MODULES = ["test", "ctypes/test", "distutils/tests", "email/test",
+ "importlib/test", "json/tests", "lib2to3/tests",
+ "bsddb/test",
+ "sqlite3/test", "tkinter/test", "idlelib", "antigravity"]
+
+POSIX_SKIP_MODULES = ["vtemodule", "PAMmodule", "_snackmodule", "/quodlibet/_mmkeys"]
+
+BIN_MODULE_FNAME_PAT = re.compile('([a-zA-Z_]+[0-9a-zA-Z]*)\\.(?:pyc|pyo|(?:[a-zA-Z_]+-\\d\\d[a-zA-Z]*\\.|.+-linux-gnu\\.)?(?:so|pyd))')
+# possible binary module filename: a letter followed by alphanumerics, with an optional ABI/platform tag per PEP 3149
+TYPELIB_MODULE_FNAME_PAT = re.compile("([a-zA-Z_]+[0-9a-zA-Z]*)[0-9a-zA-Z-.]*\\.typelib")
+
+MODULES_INSPECT_DIR = ['gi.repository']
\ No newline at end of file
diff --git a/python/helpers/pycharm_generator_utils/module_redeclarator.py b/python/helpers/pycharm_generator_utils/module_redeclarator.py
new file mode 100644
index 0000000..f56fb43
--- /dev/null
+++ b/python/helpers/pycharm_generator_utils/module_redeclarator.py
@@ -0,0 +1,1026 @@
+from pycharm_generator_utils.util_methods import *
+from pycharm_generator_utils.constants import *
+import keyword, re
+
+
+class emptylistdict(dict):
+ """defaultdict not available before 2.5; simplest reimplementation using [] as default"""
+
+ def __getitem__(self, item):
+ if item in self:
+ return dict.__getitem__(self, item)
+ else:
+ it = []
+ self.__setitem__(item, it)
+ return it
+
+class Buf(object):
+ """Buffers data in a list, can write to a file. Indentation is provided externally."""
+
+ def __init__(self, indenter):
+ self.data = []
+ self.indenter = indenter
+
+ def put(self, data):
+ if data:
+ self.data.append(ensureUnicode(data))
+
+ def out(self, indent, *what):
+ """Output the arguments, indenting as needed, and adding an eol"""
+ self.put(self.indenter.indent(indent))
+ for item in what:
+ self.put(item)
+ self.put("\n")
+
+ def flush_bytes(self, outfile):
+ for data in self.data:
+ outfile.write(data.encode(OUT_ENCODING, "replace"))
+
+ def flush_str(self, outfile):
+ for data in self.data:
+ outfile.write(data)
+
+ if version[0] < 3:
+ flush = flush_bytes
+ else:
+ flush = flush_str
+
+ def isEmpty(self):
+ return len(self.data) == 0
+
+
+#noinspection PyUnresolvedReferences,PyBroadException
+class ModuleRedeclarator(object):
+ def __init__(self, module, outfile, mod_filename, indent_size=4, doing_builtins=False):
+ """
+ Create new instance.
+ @param module module to restore.
+ @param outfile output file, must be open and writable.
+ @param mod_filename filename of binary module (the .dll or .so)
+ @param indent_size amount of space characters per indent
+ @param doing_builtins True when restoring the builtins module (enables special-case handling)
+ """
+ self.module = module
+ self.outfile = outfile # where we finally write
+ self.mod_filename = mod_filename
+ # we write things into buffers out-of-order
+ self.header_buf = Buf(self)
+ self.imports_buf = Buf(self)
+ self.functions_buf = Buf(self)
+ self.classes_buf = Buf(self)
+ self.footer_buf = Buf(self)
+ self.indent_size = indent_size
+ self._indent_step = " " * self.indent_size
+ #
+ self.imported_modules = {"": the_builtins} # explicit module imports: {"name": module}
+ self.hidden_imports = {} # {'real_mod_name': 'alias'}; we alias names with "__" since we don't want them exported
+ # ^ used for things that we don't re-export but need to import, e.g. certain base classes in gnome.
+ self._defined = {} # stores True for every name defined so far, to break circular refs in values
+ self.doing_builtins = doing_builtins
+ self.ret_type_cache = {}
+ self.used_imports = emptylistdict() # qual_mod_name -> [imported_names,..]: actually used imported names
+
+ def _initializeQApp(self):
+ try: # QtGui should be imported _before_ QtCore package.
+ # This is done for the QWidget references from QtCore (such as QSignalMapper). Known bug in PyQt 4.7+
+ # Causes "TypeError: C++ type 'QWidget*' is not supported as a native Qt signal type"
+ import PyQt4.QtGui
+ except ImportError:
+ pass
+
+ # manually instantiate and keep reference to singleton QCoreApplication (we don't want it to be deleted during the introspection)
+ # use QCoreApplication instead of QApplication to avoid blinking app in Dock on Mac OS
+ try:
+ from PyQt4.QtCore import QCoreApplication
+ self.app = QCoreApplication([])
+ return
+ except ImportError:
+ pass
+ try:
+ from PyQt5.QtCore import QCoreApplication
+ self.app = QCoreApplication([])
+ except ImportError:
+ pass
+
+ def indent(self, level):
+ """Return indentation whitespace for given level."""
+ return self._indent_step * level
+
+ def flush(self):
+ for buf in (self.header_buf, self.imports_buf, self.functions_buf, self.classes_buf, self.footer_buf):
+ buf.flush(self.outfile)
+
+ # Some builtin classes effectively change __init__ signature without overriding it.
+ # This callable serves as a placeholder to be replaced via PREDEFINED_BUILTIN_SIGS
+ def fake_builtin_init(self):
+ pass # just a callable, sig doesn't matter
+
+ fake_builtin_init.__doc__ = object.__init__.__doc__ # this forces class's doc to be used instead
+
+
+ def find_imported_name(self, item):
+ """
+ Finds out how the item is represented in imported modules.
+ @param item what to check
+ @return qualified name (like "sys.stdin") or None
+ """
+ # TODO: return a pair, not a glued string
+ if not isinstance(item, SIMPLEST_TYPES):
+ for mname in self.imported_modules:
+ m = self.imported_modules[mname]
+ for inner_name in m.__dict__:
+ suspect = getattr(m, inner_name)
+ if suspect is item:
+ if mname:
+ mname += "."
+ elif self.module is the_builtins: # don't short-circuit builtins
+ return None
+ return mname + inner_name
+ return None
+
+ _initializers = (
+ (dict, "{}"),
+ (tuple, "()"),
+ (list, "[]"),
+ )
+
+ def invent_initializer(self, a_type):
+ """
+ Returns an innocuous initializer expression for a_type, or "None"
+ """
+ for initializer_type, r in self._initializers:
+ if initializer_type == a_type:
+ return r
+ # NOTE: here we could handle things like defaultdict, sets, etc if we wanted
+ return "None"
+
+
+ def fmt_value(self, out, p_value, indent, prefix="", postfix="", as_name=None, seen_values=None):
+ """
+ Formats and outputs value (it occupies an entire line or several lines).
+ @param out function that does output (a Buf.out)
+ @param p_value the value.
+ @param indent indent level.
+ @param prefix text to print before the value
+ @param postfix text to print after the value
+ @param as_name hints at which name we are trying to print; helps with circular refs.
+ @param seen_values a list of keys we've seen if we're processing a dict
+ """
+ SELF_VALUE = "<value is a self-reference, replaced by this string>"
+ ERR_VALUE = "<failed to retrieve the value>"
+ if isinstance(p_value, SIMPLEST_TYPES):
+ out(indent, prefix, reliable_repr(p_value), postfix)
+ else:
+ if sys.platform == "cli":
+ imported_name = None
+ else:
+ imported_name = self.find_imported_name(p_value)
+ if imported_name:
+ out(indent, prefix, imported_name, postfix)
+ # TODO: kind of self.used_imports[imported_name].append(p_value) but split imported_name
+ # else we could potentially return something we did not otherwise import, but that's unlikely.
+ else:
+ if isinstance(p_value, (list, tuple)):
+ if not seen_values:
+ seen_values = [p_value]
+ if len(p_value) == 0:
+ out(indent, prefix, repr(p_value), postfix)
+ else:
+ if isinstance(p_value, list):
+ lpar, rpar = "[", "]"
+ else:
+ lpar, rpar = "(", ")"
+ out(indent, prefix, lpar)
+ for value in p_value:
+ if value in seen_values:
+ value = SELF_VALUE
+ elif not isinstance(value, SIMPLEST_TYPES):
+ seen_values.append(value)
+ self.fmt_value(out, value, indent + 1, postfix=",", seen_values=seen_values)
+ out(indent, rpar, postfix)
+ elif isinstance(p_value, dict):
+ if len(p_value) == 0:
+ out(indent, prefix, repr(p_value), postfix)
+ else:
+ if not seen_values:
+ seen_values = [p_value]
+ out(indent, prefix, "{")
+ keys = list(p_value.keys())
+ try:
+ keys.sort()
+ except TypeError:
+ pass # unsortable keys happen, e.g. in py3k _ctypes
+ for k in keys:
+ value = p_value[k]
+
+ try:
+ is_seen = value in seen_values
+ except:
+ is_seen = False
+ value = ERR_VALUE
+
+ if is_seen:
+ value = SELF_VALUE
+ elif not isinstance(value, SIMPLEST_TYPES):
+ seen_values.append(value)
+ if isinstance(k, SIMPLEST_TYPES):
+ self.fmt_value(out, value, indent + 1, prefix=repr(k) + ": ", postfix=",",
+ seen_values=seen_values)
+ else:
+ # both key and value need fancy formatting
+ self.fmt_value(out, k, indent + 1, postfix=": ", seen_values=seen_values)
+ self.fmt_value(out, value, indent + 2, seen_values=seen_values)
+ out(indent + 1, ",")
+ out(indent, "}", postfix)
+ else: # something else, maybe representable
+ # look up this value in the module.
+ if sys.platform == "cli":
+ out(indent, prefix, "None", postfix)
+ return
+ found_name = ""
+ for inner_name in self.module.__dict__:
+ if self.module.__dict__[inner_name] is p_value:
+ found_name = inner_name
+ break
+ if self._defined.get(found_name, False):
+ out(indent, prefix, found_name, postfix)
+ else:
+ # a forward / circular declaration happens
+ notice = ""
+ real_value = cleanup(repr(p_value))
+ if found_name:
+ if found_name == as_name:
+ notice = " # (!) real value is %r" % real_value
+ real_value = "None"
+ else:
+ notice = " # (!) forward: %s, real value is %r" % (found_name, real_value)
+ if SANE_REPR_RE.match(real_value):
+ out(indent, prefix, real_value, postfix, notice)
+ else:
+ if not found_name:
+ notice = " # (!) real value is %r" % real_value
+ out(indent, prefix, "None", postfix, notice)
+
+ def get_ret_type(self, attr):
+ """
+ Returns a return type string as given by T_RETURN in tokens, or None
+ """
+ if attr:
+ ret_type = RET_TYPE.get(attr, None)
+ if ret_type:
+ return ret_type
+ thing = getattr(self.module, attr, None)
+ if thing:
+ if not isinstance(thing, type) and is_callable(thing): # a function
+ return None # TODO: maybe divinate a return type; see pygame.mixer.Channel
+ return attr
+ # adds no noticeable slowdown, I did measure. dch.
+ for im_name, im_module in self.imported_modules.items():
+ cache_key = (im_name, attr)
+ cached = self.ret_type_cache.get(cache_key, None)
+ if cached:
+ return cached
+ ret_type = getattr(im_module, attr, None)
+ if ret_type:
+ if isinstance(ret_type, type):
+ # detect a constructor
+ constr_args = detect_constructor(ret_type)
+ if constr_args is None:
+ constr_args = "*(), **{}" # a silly catch-all constructor
+ reference = "%s(%s)" % (attr, constr_args)
+ elif is_callable(ret_type): # a function, classes are ruled out above
+ return None
+ else:
+ reference = attr
+ if im_name:
+ result = "%s.%s" % (im_name, reference)
+ else: # built-in
+ result = reference
+ self.ret_type_cache[cache_key] = result
+ return result
+ # TODO: handle things like "[a, b,..] and (foo,..)"
+ return None
+
+
+ SIG_DOC_NOTE = "restored from __doc__"
+ SIG_DOC_UNRELIABLY = "NOTE: unreliably restored from __doc__ "
+
+ def restore_by_docstring(self, signature_string, class_name, deco=None, ret_hint=None):
+ """
+ @param signature_string: parameter list extracted from the doc string.
+ @param class_name: name of the containing class, or None
+ @param deco: decorator to use
+ @param ret_hint: return type hint, if available
+ @return (reconstructed_spec, return_type, note) or (None, _, _) if failed.
+ """
+ action("restoring func %r of class %r", signature_string, class_name)
+ # parse
+ parsing_failed = False
+ ret_type = None
+ try:
+ # strict parsing
+ tokens = paramSeqAndRest.parseString(signature_string, True)
+ ret_name = None
+ if tokens:
+ ret_t = tokens[-1]
+ if ret_t[0] is T_RETURN:
+ ret_name = ret_t[1]
+ ret_type = self.get_ret_type(ret_name) or self.get_ret_type(ret_hint)
+ except ParseException:
+ # it did not parse completely; scavenge what we can
+ parsing_failed = True
+ tokens = []
+ try:
+ # most unrestrictive parsing
+ tokens = paramSeq.parseString(signature_string, False)
+ except ParseException:
+ pass
+ #
+ seq = transform_seq(tokens)
+
+ # add safe defaults for unparsed
+ if parsing_failed:
+ doc_node = self.SIG_DOC_UNRELIABLY
+ starred = None
+ double_starred = None
+ for one in seq:
+ if type(one) is str:
+ if one.startswith("**"):
+ double_starred = one
+ elif one.startswith("*"):
+ starred = one
+ if not starred:
+ seq.append("*args")
+ if not double_starred:
+ seq.append("**kwargs")
+ else:
+ doc_node = self.SIG_DOC_NOTE
+
+ # add 'self' if needed YYY
+ if class_name and (not seq or seq[0] != 'self'):
+ first_param = propose_first_param(deco)
+ if first_param:
+ seq.insert(0, first_param)
+ seq = make_names_unique(seq)
+ return (seq, ret_type, doc_node)
+
+ def parse_func_doc(self, func_doc, func_id, func_name, class_name, deco=None, sip_generated=False):
+ """
+ @param func_doc: __doc__ of the function.
+ @param func_id: name to look for as identifier of the function in docstring
+ @param func_name: name of the function.
+ @param class_name: name of the containing class, or None
+ @param deco: decorator to use
+ @param sip_generated: whether the docstring comes from SIP-generated code (may list multiple overloads)
+ @return (reconstructed_spec, return_literal, note) or (None, _, _) if failed.
+ """
+ if sip_generated:
+ overloads = []
+ for part in func_doc.split('\n'):
+ signature = func_id + '('
+ i = part.find(signature)
+ if i >= 0:
+ overloads.append(part[i + len(signature):])
+ if len(overloads) > 1:
+ docstring_results = [self.restore_by_docstring(overload, class_name, deco) for overload in overloads]
+ ret_types = []
+ for result in docstring_results:
+ rt = result[1]
+ if rt and rt not in ret_types:
+ ret_types.append(rt)
+ if ret_types:
+ ret_literal = " or ".join(ret_types)
+ else:
+ ret_literal = None
+ param_lists = [result[0] for result in docstring_results]
+ spec = build_signature(func_name, restore_parameters_for_overloads(param_lists))
+ return (spec, ret_literal, "restored from __doc__ with multiple overloads")
+
+ # find the first thing to look like a definition
+ prefix_re = re.compile(r"\s*(?:(\w+)[ \t]+)?" + func_id + r"\s*\(") # "foo(..." or "int foo(..."
+ match = prefix_re.search(func_doc) # Note: this and previous line may consume up to 35% of time
+ # parse the part that looks right
+ if match:
+ ret_hint = match.group(1)
+ params, ret_literal, doc_note = self.restore_by_docstring(func_doc[match.end():], class_name, deco, ret_hint)
+ spec = func_name + flatten(params)
+ return (spec, ret_literal, doc_note)
+ else:
+ return (None, None, None)
+
+
+ def is_predefined_builtin(self, module_name, class_name, func_name):
+ return self.doing_builtins and module_name == BUILTIN_MOD_NAME and (
+ class_name, func_name) in PREDEFINED_BUILTIN_SIGS
+
+
+ def redo_function(self, out, p_func, p_name, indent, p_class=None, p_modname=None, classname=None, seen=None):
+ """
+ Restore function argument list as best we can.
+ @param out output function of a Buf
+ @param p_func function or method object
+ @param p_name function name as known to owner
+ @param indent indentation level
+ @param p_class the class that contains this function as a method
+ @param p_modname module name
+ @param classname name to use for the containing class in the output (defaults to p_class.__name__)
+ @param seen {id(func): name} map of functions already seen in the same namespace;
+ id() because *some* functions are unhashable (e.g. _elementtree.Comment in py2.7)
+ """
+ action("redoing func %r of class %r", p_name, p_class)
+ if seen is not None:
+ other_func = seen.get(id(p_func), None)
+ if other_func and getattr(other_func, "__doc__", None) is getattr(p_func, "__doc__", None):
+ # _bisect.bisect == _bisect.bisect_right in py31, but docs differ
+ out(indent, p_name, " = ", seen[id(p_func)])
+ out(indent, "")
+ return
+ else:
+ seen[id(p_func)] = p_name
+ # real work
+ if classname is None:
+ classname = p_class and p_class.__name__ or None
+ if p_class and hasattr(p_class, '__mro__'):
+ sip_generated = [base_t for base_t in p_class.__mro__ if 'sip.simplewrapper' in str(base_t)]
+ else:
+ sip_generated = False
+ deco = None
+ deco_comment = ""
+ mod_class_method_tuple = (p_modname, classname, p_name)
+ ret_literal = None
+ is_init = False
+ # any decorators?
+ action("redoing decos of func %r of class %r", p_name, p_class)
+ if self.doing_builtins and p_modname == BUILTIN_MOD_NAME:
+ deco = KNOWN_DECORATORS.get((classname, p_name), None)
+ if deco:
+ deco_comment = " # known case"
+ elif p_class and p_name in p_class.__dict__:
+ # detect native methods declared with METH_CLASS flag
+ descriptor = p_class.__dict__[p_name]
+ if p_name != "__new__" and type(descriptor).__name__.startswith('classmethod'):
+ # 'classmethod_descriptor' in Python 2.x and 3.x, 'classmethod' in Jython
+ deco = "classmethod"
+ elif type(p_func).__name__.startswith('staticmethod'):
+ deco = "staticmethod"
+ if p_name == "__new__":
+ deco = "staticmethod"
+ deco_comment = " # known case of __new__"
+
+ action("redoing innards of func %r of class %r", p_name, p_class)
+ if deco and HAS_DECORATORS:
+ out(indent, "@", deco, deco_comment)
+ if inspect and inspect.isfunction(p_func):
+ out(indent, "def ", p_name, restore_by_inspect(p_func), ": # reliably restored by inspect", )
+ out_doc_attr(out, p_func, indent + 1, p_class)
+ elif self.is_predefined_builtin(*mod_class_method_tuple):
+ spec, sig_note = restore_predefined_builtin(classname, p_name)
+ out(indent, "def ", spec, ": # ", sig_note)
+ out_doc_attr(out, p_func, indent + 1, p_class)
+ elif sys.platform == 'cli' and is_clr_type(p_class):
+ spec, sig_note = restore_clr(p_name, p_class)
+ if not spec: return
+ if sig_note:
+ out(indent, "def ", spec, ": #", sig_note)
+ else:
+ out(indent, "def ", spec, ":")
+ if p_name not in ['__gt__', '__ge__', '__lt__', '__le__', '__ne__', '__reduce_ex__', '__str__']:
+ out_doc_attr(out, p_func, indent + 1, p_class)
+ elif mod_class_method_tuple in PREDEFINED_MOD_CLASS_SIGS:
+ sig, ret_literal = PREDEFINED_MOD_CLASS_SIGS[mod_class_method_tuple]
+ if classname:
+ ofwhat = "%s.%s.%s" % mod_class_method_tuple
+ else:
+ ofwhat = "%s.%s" % (p_modname, p_name)
+ out(indent, "def ", p_name, sig, ": # known case of ", ofwhat)
+ out_doc_attr(out, p_func, indent + 1, p_class)
+ else:
+ # __doc__ is our best source of arglist
+ sig_note = "real signature unknown"
+ spec = ""
+ is_init = (p_name == "__init__" and p_class is not None)
+ funcdoc = None
+ if is_init and hasattr(p_class, "__doc__"):
+ if hasattr(p_func, "__doc__"):
+ funcdoc = p_func.__doc__
+ if funcdoc == object.__init__.__doc__:
+ funcdoc = p_class.__doc__
+ elif hasattr(p_func, "__doc__"):
+ funcdoc = p_func.__doc__
+ sig_restored = False
+ action("parsing doc of func %r of class %r", p_name, p_class)
+ if isinstance(funcdoc, STR_TYPES):
+ (spec, ret_literal, more_notes) = self.parse_func_doc(funcdoc, p_name, p_name, classname, deco,
+ sip_generated)
+ if spec is None and p_name == '__init__' and classname:
+ (spec, ret_literal, more_notes) = self.parse_func_doc(funcdoc, classname, p_name, classname, deco,
+ sip_generated)
+ sig_restored = spec is not None
+ if more_notes:
+ if sig_note:
+ sig_note += "; "
+ sig_note += more_notes
+ if not sig_restored:
+ # use an allow-all declaration
+ decl = []
+ if p_class:
+ first_param = propose_first_param(deco)
+ if first_param:
+ decl.append(first_param)
+ decl.append("*args")
+ decl.append("**kwargs")
+ spec = p_name + "(" + ", ".join(decl) + ")"
+ out(indent, "def ", spec, ": # ", sig_note)
+ # to reduce size of stubs, don't output same docstring twice for class and its __init__ method
+ if not is_init or funcdoc != p_class.__doc__:
+ out_docstring(out, funcdoc, indent + 1)
+ # body
+ if ret_literal and not is_init:
+ out(indent + 1, "return ", ret_literal)
+ else:
+ out(indent + 1, "pass")
+ if deco and not HAS_DECORATORS:
+ out(indent, p_name, " = ", deco, "(", p_name, ")", deco_comment)
+ out(0, "") # empty line after each item
+
+
+ def redo_class(self, out, p_class, p_name, indent, p_modname=None, seen=None, inspect_dir=False):
+ """
+ Restores a class definition.
+ @param out output function of a relevant buf
+ @param p_class the class object
+ @param p_name class name as known to owner
+ @param indent indentation level
+ @param p_modname name of module
+ @param seen {class: name} map of classes already seen in the same namespace
+ @param inspect_dir if True, inspect the class via dir() instead of its __dict__
+ """
+ action("redoing class %r of module %r", p_name, p_modname)
+ if seen is not None:
+ if p_class in seen:
+ out(indent, p_name, " = ", seen[p_class])
+ out(indent, "")
+ return
+ else:
+ seen[p_class] = p_name
+ bases = get_bases(p_class)
+ base_def = ""
+ skipped_bases = []
+ if bases:
+ skip_qualifiers = [p_modname, BUILTIN_MOD_NAME, 'exceptions']
+ skip_qualifiers.extend(KNOWN_FAKE_REEXPORTERS.get(p_modname, ()))
+ bases_list = [] # what we'll render in the class decl
+ for base in bases:
+ if [1 for (cls, mdl) in KNOWN_FAKE_BASES if cls == base and mdl != self.module]:
+ # our base is a wrapper and our module is not its defining module
+ skipped_bases.append(str(base))
+ continue
+ # somehow import every base class
+ base_name = base.__name__
+ qual_module_name = qualifier_of(base, skip_qualifiers)
+ got_existing_import = False
+ if qual_module_name:
+ if qual_module_name in self.used_imports:
+ import_list = self.used_imports[qual_module_name]
+ if base in import_list:
+ bases_list.append(base_name) # unqualified: already set to import
+ got_existing_import = True
+ if not got_existing_import:
+ mangled_qualifier = "__" + qual_module_name.replace('.', '_') # foo.bar -> __foo_bar
+ bases_list.append(mangled_qualifier + "." + base_name)
+ self.hidden_imports[qual_module_name] = mangled_qualifier
+ else:
+ bases_list.append(base_name)
+ base_def = "(" + ", ".join(bases_list) + ")"
+ out(indent, "class ", p_name, base_def, ":",
+ skipped_bases and " # skipped bases: " + ", ".join(skipped_bases) or "")
+ out_doc_attr(out, p_class, indent + 1)
+ # inner parts
+ methods = {}
+ properties = {}
+ others = {}
+ we_are_the_base_class = p_modname == BUILTIN_MOD_NAME and p_name == "object"
+ field_source = {}
+ try:
+ if hasattr(p_class, "__dict__") and not inspect_dir:
+ field_source = p_class.__dict__
+ field_keys = field_source.keys() # Jython 2.5.1 _codecs fail here
+ else:
+ field_keys = dir(p_class) # this includes unwanted inherited methods, but no dict + inheritance is rare
+ except:
+ field_keys = ()
+ for item_name in field_keys:
+ if item_name in ("__doc__", "__module__"):
+ if we_are_the_base_class:
+ item = "" # must be declared in base types
+ else:
+ continue # in all other cases must be skipped
+ elif keyword.iskeyword(item_name): # for example, PyQt4 contains definitions of methods named 'exec'
+ continue
+ else:
+ try:
+ item = getattr(p_class, item_name) # let getters do the magic
+ except AttributeError:
+ item = field_source[item_name] # have it raw
+ except Exception:
+ continue
+ if is_callable(item) and not isinstance(item, type):
+ methods[item_name] = item
+ elif is_property(item):
+ properties[item_name] = item
+ else:
+ others[item_name] = item
+ #
+ if we_are_the_base_class:
+ others["__dict__"] = {} # force-feed it, for __dict__ does not contain a reference to itself :)
+ # add fake __init__s to have the right sig
+ if p_class in FAKE_BUILTIN_INITS:
+ methods["__init__"] = self.fake_builtin_init
+ note("Faking init of %s", p_name)
+ elif '__init__' not in methods:
+ init_method = getattr(p_class, '__init__', None)
+ if init_method:
+ methods['__init__'] = init_method
+
+ #
+ seen_funcs = {}
+ for item_name in sorted_no_case(methods.keys()):
+ item = methods[item_name]
+ try:
+ self.redo_function(out, item, item_name, indent + 1, p_class, p_modname, classname=p_name, seen=seen_funcs)
+ except:
+ handle_error_func(item_name, out)
+ #
+ known_props = KNOWN_PROPS.get(p_modname, {})
+ a_setter = "lambda self, v: None"
+ a_deleter = "lambda self: None"
+ for item_name in sorted_no_case(properties.keys()):
+ item = properties[item_name]
+ prop_docstring = getattr(item, '__doc__', None)
+ prop_key = (p_name, item_name)
+ if prop_key in known_props:
+ prop_descr = known_props.get(prop_key, None)
+ if prop_descr is None:
+ continue # explicitly omitted
+ acc_line, getter_and_type = prop_descr
+ if getter_and_type:
+ getter, prop_type = getter_and_type
+ else:
+ getter, prop_type = None, None
+ out(indent + 1, item_name,
+ " = property(", format_accessors(acc_line, getter, a_setter, a_deleter), ")"
+ )
+ if prop_type:
+ if prop_docstring:
+ out(indent + 1, '"""', prop_docstring)
+ out(0, "")
+ out(indent + 1, ':type: ', prop_type)
+ out(indent + 1, '"""')
+ else:
+ out(indent + 1, '""":type: ', prop_type, '"""')
+ out(0, "")
+ else:
+ out(indent + 1, item_name, " = property(lambda self: object(), lambda self, v: None, lambda self: None) # default")
+ if prop_docstring:
+ out(indent + 1, '"""', prop_docstring, '"""')
+ out(0, "")
+ if properties:
+ out(0, "") # empty line after the block
+ #
+ for item_name in sorted_no_case(others.keys()):
+ item = others[item_name]
+ self.fmt_value(out, item, indent + 1, prefix=item_name + " = ")
+ if p_name == "object":
+ out(indent + 1, "__module__ = ''")
+ if others:
+ out(0, "") # empty line after the block
+ #
+ if not methods and not properties and not others:
+ out(indent + 1, "pass")
+
+
+
+ def redo_simple_header(self, p_name):
+ """Puts boilerplate code on the top"""
+ out = self.header_buf.out # 1st class methods rule :)
+ out(0, "# encoding: %s" % OUT_ENCODING) # line 1
+ # NOTE: maybe encoding should be selectable
+ if hasattr(self.module, "__name__"):
+ self_name = self.module.__name__
+ if self_name != p_name:
+ mod_name = " calls itself " + self_name
+ else:
+ mod_name = ""
+ else:
+ mod_name = " does not know its name"
+ out(0, "# module ", p_name, mod_name) # line 2
+
+ BUILT_IN_HEADER = "(built-in)"
+ if self.mod_filename:
+ filename = self.mod_filename
+ elif p_name in sys.builtin_module_names:
+ filename = BUILT_IN_HEADER
+ else:
+ filename = getattr(self.module, "__file__", BUILT_IN_HEADER)
+
+ out(0, "# from %s" % filename) # line 3
+ out(0, "# by generator %s" % VERSION) # line 4
+ if p_name == BUILTIN_MOD_NAME and version[0] == 2 and version[1] >= 6:
+ out(0, "from __future__ import print_function")
+ out_doc_attr(out, self.module, 0)
+
+
+ def redo_imports(self):
+ module_type = type(sys)
+ for item_name in self.module.__dict__.keys():
+ try:
+ item = self.module.__dict__[item_name]
+ except:
+ continue
+ if type(item) is module_type: # not isinstance, py2.7 + PyQt4.QtCore on windows have a bug here
+ self.imported_modules[item_name] = item
+ self.add_import_header_if_needed()
+ ref_notice = getattr(item, "__file__", str(item))
+ if hasattr(item, "__name__"):
+ self.imports_buf.out(0, "import ", item.__name__, " as ", item_name, " # ", ref_notice)
+ else:
+ self.imports_buf.out(0, item_name, " = None # ??? name unknown; ", ref_notice)
+
+ def add_import_header_if_needed(self):
+ if self.imports_buf.isEmpty():
+ self.imports_buf.out(0, "")
+ self.imports_buf.out(0, "# imports")
+
+
+ def redo(self, p_name, inspect_dir):
+ """
+ Restores module declarations.
+ Intended for built-in modules and thus does not handle import statements.
+ @param p_name name of module
+ """
+ action("redoing header of module %r %r", p_name, str(self.module))
+
+ if "pyqt" in p_name.lower(): # qt specific patch
+ self._initializeQApp()
+
+ self.redo_simple_header(p_name)
+
+ # find whatever other self.imported_modules the module knows; effectively these are imports
+ action("redoing imports of module %r %r", p_name, str(self.module))
+ try:
+ self.redo_imports()
+ except:
+ pass
+
+ action("redoing innards of module %r %r", p_name, str(self.module))
+
+ module_type = type(sys)
+ # group what we have into buckets
+ vars_simple = {}
+ vars_complex = {}
+ funcs = {}
+ classes = {}
+ module_dict = self.module.__dict__
+ if inspect_dir:
+ module_dict = dir(self.module)
+ for item_name in module_dict:
+ note("looking at %s", item_name)
+ if item_name in (
+ "__dict__", "__doc__", "__module__", "__file__", "__name__", "__builtins__", "__package__"):
+ continue # handled otherwise
+ try:
+ item = getattr(self.module, item_name) # let getters do the magic
+ except AttributeError:
+ if item_name not in self.module.__dict__: continue
+ item = self.module.__dict__[item_name] # have it raw
+ # check if it has percolated from an imported module
+ except NotImplementedError:
+ if item_name not in self.module.__dict__: continue
+ item = self.module.__dict__[item_name] # have it raw
+
+ # unless we're adamantly positive that the name was imported, we assume it is defined here
+ mod_name = None # module from which p_name might have been imported
+ # IronPython has non-trivial reexports in System module, but not in others:
+ skip_modname = sys.platform == "cli" and p_name != "System"
+ surely_not_imported_mods = KNOWN_FAKE_REEXPORTERS.get(p_name, ())
+ ## can't figure weirdness in some modules, assume no reexports:
+ #skip_modname = skip_modname or p_name in self.KNOWN_FAKE_REEXPORTERS
+ if not skip_modname:
+ try:
+ mod_name = getattr(item, '__module__', None)
+ except:
+ pass
+ # we assume that module foo.bar never imports foo; foo may import foo.bar. (see pygame and pygame.rect)
+ maybe_import_mod_name = mod_name or ""
+ import_is_from_top = len(p_name) > len(maybe_import_mod_name) and p_name.startswith(maybe_import_mod_name)
+ note("mod_name = %s, prospective = %s, from top = %s", mod_name, maybe_import_mod_name, import_is_from_top)
+ want_to_import = False
+ if (mod_name
+ and mod_name != BUILTIN_MOD_NAME
+ and mod_name != p_name
+ and mod_name not in surely_not_imported_mods
+ and not import_is_from_top
+ ):
+ # import looks valid, but maybe it's a .py file? we're certain not to import from .py
+ # e.g. this rules out _collections import collections and builtins import site.
+ try:
+ imported = __import__(mod_name) # ok to repeat, Python caches for us
+ if imported:
+ qualifiers = mod_name.split(".")[1:]
+ for qual in qualifiers:
+ imported = getattr(imported, qual, None)
+ if not imported:
+ break
+ imported_path = (getattr(imported, '__file__', False) or "").lower()
+ want_to_import = not (imported_path.endswith('.py') or imported_path.endswith('.pyc'))
+ note("path of %r is %r, want? %s", mod_name, imported_path, want_to_import)
+ except ImportError:
+ want_to_import = False
+ # NOTE: if we fail to import, we define 'imported' names here lest we lose them at all
+ if want_to_import:
+ import_list = self.used_imports[mod_name]
+ if item_name not in import_list:
+ import_list.append(item_name)
+ if not want_to_import:
+ if isinstance(item, type) or type(item).__name__ == 'classobj':
+ classes[item_name] = item
+ elif is_callable(item): # some classes are callable, check them before functions
+ funcs[item_name] = item
+ elif isinstance(item, module_type):
+ continue # self.imported_modules handled above already
+ else:
+ if isinstance(item, SIMPLEST_TYPES):
+ vars_simple[item_name] = item
+ else:
+ vars_complex[item_name] = item
+
+ # sort and output every bucket
+ action("outputting innards of module %r %r", p_name, str(self.module))
+ #
+ omitted_names = OMIT_NAME_IN_MODULE.get(p_name, [])
+ if vars_simple:
+ out = self.functions_buf.out
+ prefix = "" # try to group variables by common prefix
+ PREFIX_LEN = 2 # default prefix length if we can't guess better
+ out(0, "# Variables with simple values")
+ for item_name in sorted_no_case(vars_simple.keys()):
+ if item_name in omitted_names:
+ out(0, "# definition of " + item_name + " omitted")
+ continue
+ item = vars_simple[item_name]
+ # track the prefix
+ if len(item_name) >= PREFIX_LEN:
+ prefix_pos = item_name.rfind("_") # most prefixes end in an underscore
+ if prefix_pos < 1:
+ prefix_pos = PREFIX_LEN
+ beg = item_name[0:prefix_pos]
+ if prefix != beg:
+ out(0, "") # space out from other prefix
+ prefix = beg
+ else:
+ prefix = ""
+ # output
+ replacement = REPLACE_MODULE_VALUES.get((p_name, item_name), None)
+ if replacement is not None:
+ out(0, item_name, " = ", replacement, " # real value of type ", str(type(item)), " replaced")
+ elif is_skipped_in_module(p_name, item_name):
+ t_item = type(item)
+ out(0, item_name, " = ", self.invent_initializer(t_item), " # real value of type ", str(t_item),
+ " skipped")
+ else:
+ self.fmt_value(out, item, 0, prefix=item_name + " = ")
+ self._defined[item_name] = True
+ out(0, "") # empty line after vars
+ #
+ if funcs:
+ out = self.functions_buf.out
+ out(0, "# functions")
+ out(0, "")
+ seen_funcs = {}
+ for item_name in sorted_no_case(funcs.keys()):
+ if item_name in omitted_names:
+ out(0, "# definition of ", item_name, " omitted")
+ continue
+ item = funcs[item_name]
+ try:
+ self.redo_function(out, item, item_name, 0, p_modname=p_name, seen=seen_funcs)
+ except:
+ handle_error_func(item_name, out)
+ else:
+ self.functions_buf.out(0, "# no functions")
+ #
+ if classes:
+ out = self.functions_buf.out
+ out(0, "# classes")
+ out(0, "")
+ seen_classes = {}
+ # sort classes so that inheritance order is preserved
+ cls_list = [] # items are (class_name, mro_tuple)
+ for cls_name in sorted_no_case(classes.keys()):
+ cls = classes[cls_name]
+ ins_index = len(cls_list)
+ for i in range(ins_index):
+ maybe_child_bases = cls_list[i][1]
+ if cls in maybe_child_bases:
+ ins_index = i # we could not go farther than current ins_index
+ break # ...and need not go farther than first known child
+ cls_list.insert(ins_index, (cls_name, get_mro(cls)))
+ for item_name in [cls_item[0] for cls_item in cls_list]:
+ if item_name in omitted_names:
+ out(0, "# definition of ", item_name, " omitted")
+ continue
+ item = classes[item_name]
+ self.redo_class(out, item, item_name, 0, p_modname=p_name, seen=seen_classes, inspect_dir=inspect_dir)
+ self._defined[item_name] = True
+ out(0, "") # empty line after each item
+
+ if self.doing_builtins and p_name == BUILTIN_MOD_NAME and version[0] < 3:
+ # classobj still supported
+ txt = classobj_txt
+ self.classes_buf.out(0, txt)
+
+ if self.doing_builtins and p_name == BUILTIN_MOD_NAME:
+ txt = create_generator()
+ self.classes_buf.out(0, txt)
+
+ # Fake <type 'namedtuple'>
+ if version[0] >= 3 or (version[0] == 2 and version[1] >= 6):
+ namedtuple_text = create_named_tuple()
+ self.classes_buf.out(0, namedtuple_text)
+
+ else:
+ self.classes_buf.out(0, "# no classes")
+ #
+ if vars_complex:
+ out = self.footer_buf.out
+ out(0, "# variables with complex values")
+ out(0, "")
+ for item_name in sorted_no_case(vars_complex.keys()):
+ if item_name in omitted_names:
+ out(0, "# definition of " + item_name + " omitted")
+ continue
+ item = vars_complex[item_name]
+ if str(type(item)) == "<type 'namespace#'>":
+ continue # this is an IronPython submodule, we mustn't generate a reference for it in the base module
+ replacement = REPLACE_MODULE_VALUES.get((p_name, item_name), None)
+ if replacement is not None:
+ out(0, item_name + " = " + replacement + " # real value of type " + str(type(item)) + " replaced")
+ elif is_skipped_in_module(p_name, item_name):
+ t_item = type(item)
+ out(0, item_name + " = " + self.invent_initializer(t_item) + " # real value of type " + str(
+ t_item) + " skipped")
+ else:
+ self.fmt_value(out, item, 0, prefix=item_name + " = ", as_name=item_name)
+ self._defined[item_name] = True
+ out(0, "") # empty line after each item
+ values_to_add = ADD_VALUE_IN_MODULE.get(p_name, None)
+ if values_to_add:
+ self.footer_buf.out(0, "# intermittent names")
+ for value in values_to_add:
+ self.footer_buf.out(0, value)
+ # imports: last, because previous parts could alter used_imports or hidden_imports
+ self.output_import_froms()
+ if self.imports_buf.isEmpty():
+ self.imports_buf.out(0, "# no imports")
+ self.imports_buf.out(0, "") # empty line after imports
+
+ def output_import_froms(self):
+ """Mention all imported names known within the module, wrapping as per PEP."""
+ out = self.imports_buf.out
+ if self.used_imports:
+ self.add_import_header_if_needed()
+ for mod_name in sorted_no_case(self.used_imports.keys()):
+ import_names = self.used_imports[mod_name]
+ if import_names:
+ self._defined[mod_name] = True
+ right_pos = 0 # tracks width of list to fold it at right margin
+ import_heading = "from %s import (" % mod_name
+ right_pos += len(import_heading)
+ names_pack = [import_heading]
+ indent_level = 0
+ import_names = list(import_names)
+ import_names.sort()
+ for n in import_names:
+ self._defined[n] = True
+ len_n = len(n)
+ if right_pos + len_n >= 78:
+ out(indent_level, *names_pack)
+ names_pack = [n, ", "]
+ if indent_level == 0:
+ indent_level = 1 # all but first line is indented
+ right_pos = self.indent_size + len_n + 2
+ else:
+ names_pack.append(n)
+ names_pack.append(", ")
+ right_pos += (len_n + 2)
+ # last line is...
+ if indent_level == 0: # one line
+ names_pack[0] = names_pack[0][:-1] # cut off lpar
+ names_pack[-1] = "" # cut last comma
+ else: # last line of multiline
+ names_pack[-1] = ")" # last comma -> rpar
+ out(indent_level, *names_pack)
+
+ out(0, "") # empty line after group
+
+ if self.hidden_imports:
+ self.add_import_header_if_needed()
+ for mod_name in sorted_no_case(self.hidden_imports.keys()):
+ out(0, 'import ', mod_name, ' as ', self.hidden_imports[mod_name])
+ out(0, "") # empty line after group
diff --git a/python/helpers/pycharm_generator_utils/pyparsing.py b/python/helpers/pycharm_generator_utils/pyparsing.py
new file mode 100644
index 0000000..d344af1
--- /dev/null
+++ b/python/helpers/pycharm_generator_utils/pyparsing.py
@@ -0,0 +1,3737 @@
+# module pyparsing.py
+#
+# Copyright (c) 2003-2009 Paul T. McGuire
+#
+# Permission is hereby granted, free of charge, to any person obtaining
+# a copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so, subject to
+# the following conditions:
+#
+# The above copyright notice and this permission notice shall be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+#
+from __future__ import generators
+
+__doc__ = \
+"""
+pyparsing module - Classes and methods to define and execute parsing grammars
+
+The pyparsing module is an alternative approach to creating and executing simple grammars,
+vs. the traditional lex/yacc approach, or the use of regular expressions. With pyparsing, you
+don't need to learn a new syntax for defining grammars or matching expressions - the parsing module
+provides a library of classes that you use to construct the grammar directly in Python.
+
+Here is a program to parse "Hello, World!" (or any greeting of the form "<salutation>, <addressee>!")::
+
+ from pyparsing import Word, alphas
+
+ # define grammar of a greeting
+ greet = Word( alphas ) + "," + Word( alphas ) + "!"
+
+ hello = "Hello, World!"
+ print hello, "->", greet.parseString( hello )
+
+The program outputs the following::
+
+ Hello, World! -> ['Hello', ',', 'World', '!']
+
+The Python representation of the grammar is quite readable, owing to the self-explanatory
+class names, and the use of '+', '|' and '^' operators.
+
+The parsed results returned from parseString() can be accessed as a nested list, a dictionary, or an
+object with named attributes.
+
+The pyparsing module handles some of the problems that are typically vexing when writing text parsers:
+ - extra or missing whitespace (the above program will also handle "Hello,World!", "Hello , World !", etc.)
+ - quoted strings
+ - embedded comments
+"""
+
+__version__ = "1.5.2 patch 2.2"
+__versionTime__ = "17 February 2009 19:45"
+__author__ = "Paul McGuire <[email protected]>, patched by [email protected]"
+
+import string
+from weakref import ref as wkref
+import copy
+import sys
+import warnings
+import re
+import sre_constants
+#~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) )
+
+__all__ = [
+'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty',
+'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal',
+'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or',
+'ParseBaseException', 'ParseElementEnhance', 'ParseException', 'ParseExpression', 'ParseFatalException',
+'ParseResults', 'ParseSyntaxException', 'ParserElement', 'QuotedString', 'RecursiveGrammarException',
+'Regex', 'SkipTo', 'StringEnd', 'StringStart', 'Suppress', 'Token', 'TokenConverter', 'Upcase',
+'White', 'Word', 'WordEnd', 'WordStart', 'ZeroOrMore',
+'alphanums', 'alphas', 'alphas8bit', 'anyCloseTag', 'anyOpenTag', 'cStyleComment', 'col',
+'commaSeparatedList', 'commonHTMLEntity', 'countedArray', 'cppStyleComment', 'dblQuotedString',
+'dblSlashComment', 'delimitedList', 'dictOf', 'downcaseTokens', 'empty', 'getTokensEndLoc', 'hexnums',
+'htmlComment', 'javaStyleComment', 'keepOriginalText', 'line', 'lineEnd', 'lineStart', 'lineno',
+'makeHTMLTags', 'makeXMLTags', 'matchOnlyAtCol', 'matchPreviousExpr', 'matchPreviousLiteral',
+'nestedExpr', 'nullDebugAction', 'nums', 'oneOf', 'opAssoc', 'operatorPrecedence', 'printables',
+'punc8bit', 'pythonStyleComment', 'quotedString', 'removeQuotes', 'replaceHTMLEntity',
+'replaceWith', 'restOfLine', 'sglQuotedString', 'srange', 'stringEnd',
+'stringStart', 'traceParseAction', 'unicodeString', 'upcaseTokens', 'withAttribute',
+'indentedBlock', 'originalTextFor',
+]
+
+version_info = sys.version_info
+
+"""
+Detect if we are running version 3.X and make appropriate changes
+Robert A. Clark
+"""
+if version_info[0] > 2:
+ _PY3K = True
+ _MAX_INT = sys.maxsize
+ basestring = str
+else:
+ _PY3K = False
+ _MAX_INT = sys.maxint
+
+if version_info[0] == 2 and version_info[1] < 4:
+ _BEFORE_24 = True # before Python 2.4
+ INT_OR_SLICE = (int, type(slice(1, 2)))
+
+ def slice_indices(a_slice, maxlen):
+ start = a_slice.start or 0
+ if start > maxlen:
+ start = maxlen
+ stop = a_slice.stop
+ if stop is None or maxlen < stop:
+ stop = maxlen
+ return (start, stop, a_slice.step or 1)
+else:
+ _BEFORE_24 = False
+ INT_OR_SLICE = (int, slice)
+ slice_indices = slice.indices
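The pre-2.4 fallback mirrors the built-in `slice.indices` only for the simple non-negative, step-1 slices this module actually produces (in `ParseResults.__delitem__`); unlike the built-in, it does not normalize negative bounds. A quick check, re-declaring the fallback locally:

```python
def slice_indices(a_slice, maxlen):
    # pre-2.4 fallback: clamp start/stop to the sequence length
    start = a_slice.start or 0
    if start > maxlen:
        start = maxlen
    stop = a_slice.stop
    if stop is None or maxlen < stop:
        stop = maxlen
    return (start, stop, a_slice.step or 1)

# agrees with the built-in for the slices __delitem__ produces
for s, n in [(slice(1, 10), 5), (slice(2, 4), 5), (slice(0, None), 3)]:
    assert slice_indices(s, n) == s.indices(n)
```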
+
+
+if not _PY3K:
+ def _ustr(obj):
+ """Drop-in replacement for str(obj) that tries to be Unicode friendly. It first tries
+ str(obj). If that fails with a UnicodeEncodeError, it falls back to unicode(obj) and
+ returns the resulting unicode object (alternative encodings are sketched in comments below).
+ """
+ if isinstance(obj,unicode):
+ return obj
+
+ try:
+ # If this works, then _ustr(obj) has the same behaviour as str(obj), so
+ # it won't break any existing code.
+ return str(obj)
+
+ except UnicodeEncodeError:
+ # The Python docs (http://docs.python.org/ref/customization.html#l2h-182)
+ # state that "The return value must be a string object". However, does a
+ # unicode object (being a subclass of basestring) count as a "string
+ # object"?
+ # If so, then return a unicode object:
+ return unicode(obj)
+ # Else encode it... but how? There are many choices... :)
+ # Replace unprintables with escape codes?
+ #return unicode(obj).encode(sys.getdefaultencoding(), 'backslashreplace_errors')
+ # Replace unprintables with question marks?
+ #return unicode(obj).encode(sys.getdefaultencoding(), 'replace')
+ # ...
+else:
+ _ustr = str
+ unichr = chr
+
+if not _PY3K:
+ def _str2dict(strg):
+ return dict( [(c,0) for c in strg] )
+else:
+ _str2dict = set
+
+def _xml_escape(data):
+ """Escape &, <, >, ", ', etc. in a string of data."""
+
+ # ampersand must be replaced first
+ from_symbols = '&><"\''
+ to_symbols = ['&'+s+';' for s in "amp gt lt quot apos".split()]
+ for from_,to_ in zip(from_symbols, to_symbols):
+ data = data.replace(from_, to_)
+ return data
+
+class _Constants(object):
+ pass
+
+if not _PY3K:
+ alphas = string.lowercase + string.uppercase
+else:
+ alphas = string.ascii_lowercase + string.ascii_uppercase
+nums = string.digits
+hexnums = nums + "ABCDEFabcdef"
+alphanums = alphas + nums
+_bslash = chr(92)
+printables = "".join( [ c for c in string.printable if c not in string.whitespace ] )
+
+class ParseBaseException(Exception):
+ """base exception class for all parsing runtime exceptions"""
+ # Performance tuning: we construct a *lot* of these, so keep this
+ # constructor as small and fast as possible
+ def __init__( self, pstr, loc=0, msg=None, elem=None ):
+ self.loc = loc
+ if msg is None:
+ self.msg = pstr
+ self.pstr = ""
+ else:
+ self.msg = msg
+ self.pstr = pstr
+ self.parserElement = elem
+
+ def __getattr__( self, aname ):
+ """supported attributes by name are:
+ - lineno - returns the line number of the exception text
+ - col - returns the column number of the exception text
+ - line - returns the line containing the exception text
+ """
+ if( aname == "lineno" ):
+ return lineno( self.loc, self.pstr )
+ elif( aname in ("col", "column") ):
+ return col( self.loc, self.pstr )
+ elif( aname == "line" ):
+ return line( self.loc, self.pstr )
+ else:
+ raise AttributeError(aname)
+
+ def __str__( self ):
+ return "%s (at char %d), (line:%d, col:%d)" % \
+ ( self.msg, self.loc, self.lineno, self.column )
+ def __repr__( self ):
+ return _ustr(self)
+ def markInputline( self, markerString = ">!<" ):
+ """Extracts the exception line from the input string, and marks
+ the location of the exception with a special symbol.
+ """
+ line_str = self.line
+ line_column = self.column - 1
+ if markerString:
+ line_str = "".join( [line_str[:line_column],
+ markerString, line_str[line_column:]])
+ return line_str.strip()
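`markInputline` is a plain string splice: the marker is inserted just before the 1-based error column. The same operation as a standalone sketch (`mark_input_line` is a hypothetical free function, not the class's API):

```python
def mark_input_line(line_str, column, marker=">!<"):
    # splice the marker just before the 1-based error column,
    # mirroring ParseBaseException.markInputline
    i = column - 1
    return (line_str[:i] + marker + line_str[i:]).strip()

marked = mark_input_line("x = 1 +", 7)  # 'x = 1 >!<+'
```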
+ def __dir__(self):
+ return "loc msg pstr parserElement lineno col line " \
+ "markInputLine __str__ __repr__".split()
+
+class ParseException(ParseBaseException):
+ """exception thrown when parse expressions don't match class;
+ supported attributes by name are:
+ - lineno - returns the line number of the exception text
+ - col - returns the column number of the exception text
+ - line - returns the line containing the exception text
+ """
+ pass
+
+class ParseFatalException(ParseBaseException):
+ """user-throwable exception thrown when inconsistent parse content
+ is found; stops all parsing immediately"""
+ pass
+
+class ParseSyntaxException(ParseFatalException):
+ """just like ParseFatalException, but thrown internally when an
+ ErrorStop indicates that parsing is to stop immediately because
+ an unbacktrackable syntax error has been found"""
+ def __init__(self, pe):
+ super(ParseSyntaxException, self).__init__(
+ pe.pstr, pe.loc, pe.msg, pe.parserElement)
+
+#~ class ReparseException(ParseBaseException):
+ #~ """Experimental class - parse actions can raise this exception to cause
+ #~ pyparsing to reparse the input string:
+ #~ - with a modified input string, and/or
+ #~ - with a modified start location
+ #~ Set the values of the ReparseException in the constructor, and raise the
+ #~ exception in a parse action to cause pyparsing to use the new string/location.
+ #~ Setting the values as None causes no change to be made.
+ #~ """
+ #~ def __init_( self, newstring, restartLoc ):
+ #~ self.newParseText = newstring
+ #~ self.reparseLoc = restartLoc
+
+class RecursiveGrammarException(Exception):
+ """exception thrown by validate() if the grammar could be improperly recursive"""
+ def __init__( self, parseElementList ):
+ self.parseElementTrace = parseElementList
+
+ def __str__( self ):
+ return "RecursiveGrammarException: %s" % self.parseElementTrace
+
+class _ParseResultsWithOffset(object):
+ def __init__(self,p1,p2):
+ self.tup = (p1,p2)
+ def __getitem__(self,i):
+ return self.tup[i]
+ def __repr__(self):
+ return repr(self.tup)
+ def setOffset(self,i):
+ self.tup = (self.tup[0],i)
+
+class ParseResults(object):
+ """Structured parse results, to provide multiple means of access to the parsed data:
+ - as a list (len(results))
+ - by list index (results[0], results[1], etc.)
+ - by attribute (results.<resultsName>)
+ """
+ __slots__ = ( "__toklist", "__tokdict", "__doinit", "__name", "__parent", "__accumNames", "__weakref__" )
+ def __new__(cls, toklist, name=None, asList=True, modal=True ):
+ if isinstance(toklist, cls):
+ return toklist
+ retobj = object.__new__(cls)
+ retobj.__doinit = True
+ return retobj
+
+ # Performance tuning: we construct a *lot* of these, so keep this
+ # constructor as small and fast as possible
+ def __init__( self, toklist, name=None, asList=True, modal=True ):
+ if self.__doinit:
+ self.__doinit = False
+ self.__name = None
+ self.__parent = None
+ self.__accumNames = {}
+ if isinstance(toklist, list):
+ self.__toklist = toklist[:]
+ else:
+ self.__toklist = [toklist]
+ self.__tokdict = dict()
+
+ if name:
+ if not modal:
+ self.__accumNames[name] = 0
+ if isinstance(name,int):
+ name = _ustr(name) # will always return a str, but use _ustr for consistency
+ self.__name = name
+ if toklist not in (None,'',[]):
+ if isinstance(toklist,basestring):
+ toklist = [ toklist ]
+ if asList:
+ if isinstance(toklist,ParseResults):
+ self[name] = _ParseResultsWithOffset(toklist.copy(),0)
+ else:
+ self[name] = _ParseResultsWithOffset(ParseResults(toklist[0]),0)
+ self[name].__name = name
+ else:
+ try:
+ self[name] = toklist[0]
+ except (KeyError,TypeError,IndexError):
+ self[name] = toklist
+
+ def __getitem__( self, i ):
+ if isinstance( i, INT_OR_SLICE ):
+ return self.__toklist[i]
+ else:
+ if i not in self.__accumNames:
+ return self.__tokdict[i][-1][0]
+ else:
+ return ParseResults([ v[0] for v in self.__tokdict[i] ])
+
+ def __setitem__( self, k, v ):
+ if isinstance(v,_ParseResultsWithOffset):
+ self.__tokdict[k] = self.__tokdict.get(k,list()) + [v]
+ sub = v[0]
+ elif isinstance(k,int):
+ self.__toklist[k] = v
+ sub = v
+ else:
+ self.__tokdict[k] = self.__tokdict.get(k,list()) + [_ParseResultsWithOffset(v,0)]
+ sub = v
+ if isinstance(sub,ParseResults):
+ sub.__parent = wkref(self)
+
+ def __delitem__( self, i ):
+ if isinstance(i, INT_OR_SLICE):
+ mylen = len( self.__toklist )
+ del self.__toklist[i]
+
+ # convert int to slice
+ if isinstance(i, int):
+ if i < 0:
+ i += mylen
+ i = slice(i, i+1)
+ # get removed indices
+ removed = list(range(*slice_indices(i, mylen)))
+ removed.reverse()
+ # fixup indices in token dictionary
+ for name in self.__tokdict:
+ occurrences = self.__tokdict[name]
+ for j in removed:
+ for k, (value, position) in enumerate(occurrences):
+ occurrences[k] = _ParseResultsWithOffset(value, position - (position > j))
+ else:
+ del self.__tokdict[i]
+
+ def __contains__( self, k ):
+ return k in self.__tokdict
+
+ def __len__( self ): return len( self.__toklist )
+ def __bool__(self): return len( self.__toklist ) > 0
+ __nonzero__ = __bool__
+ def __iter__( self ): return iter( self.__toklist )
+ def __reversed__( self ): return iter( reversed(self.__toklist) )
+ def keys( self ):
+ """Returns all named result keys."""
+ return self.__tokdict.keys()
+
+ def pop( self, index=-1 ):
+ """Removes and returns item at specified index (default=last).
+ Will work with either numeric indices or dict-key indices."""
+ ret = self[index]
+ del self[index]
+ return ret
+
+ def get(self, key, defaultValue=None):
+ """Returns named result matching the given key, or if there is no
+ such name, then returns the given defaultValue or None if no
+ defaultValue is specified."""
+ if key in self:
+ return self[key]
+ else:
+ return defaultValue
+
+ def insert( self, index, insStr ):
+ self.__toklist.insert(index, insStr)
+ # fixup indices in token dictionary
+ for name in self.__tokdict:
+ occurrences = self.__tokdict[name]
+ for k, (value, position) in enumerate(occurrences):
+ occurrences[k] = _ParseResultsWithOffset(value, position + (position > index))
+
+ def items( self ):
+ """Returns all named result keys and values as a list of tuples."""
+ return [(k,self[k]) for k in self.__tokdict]
+
+ def values( self ):
+ """Returns all named result values."""
+ return [ v[-1][0] for v in self.__tokdict.values() ]
+
+ def __getattr__( self, name ):
+ if name not in self.__slots__:
+ if name in self.__tokdict:
+ if name not in self.__accumNames:
+ return self.__tokdict[name][-1][0]
+ else:
+ return ParseResults([ v[0] for v in self.__tokdict[name] ])
+ else:
+ return ""
+ return None
+
+ def __add__( self, other ):
+ ret = self.copy()
+ ret += other
+ return ret
+
+ def __iadd__( self, other ):
+ if other.__tokdict:
+ offset = len(self.__toklist)
+ addoffset = ( lambda a: (a<0 and offset) or (a+offset) )
+ otheritems = other.__tokdict.items()
+ otherdictitems = [(k, _ParseResultsWithOffset(v[0],addoffset(v[1])) )
+ for (k,vlist) in otheritems for v in vlist]
+ for k,v in otherdictitems:
+ self[k] = v
+ if isinstance(v[0],ParseResults):
+ v[0].__parent = wkref(self)
+
+ self.__toklist += other.__toklist
+ self.__accumNames.update( other.__accumNames )
+ del other
+ return self
+
+ def __repr__( self ):
+ return "(%s, %s)" % ( repr( self.__toklist ), repr( self.__tokdict ) )
+
+ def __str__( self ):
+ out = "["
+ sep = ""
+ for i in self.__toklist:
+ if isinstance(i, ParseResults):
+ out += sep + _ustr(i)
+ else:
+ out += sep + repr(i)
+ sep = ", "
+ out += "]"
+ return out
+
+ def _asStringList( self, sep='' ):
+ out = []
+ for item in self.__toklist:
+ if out and sep:
+ out.append(sep)
+ if isinstance( item, ParseResults ):
+ out += item._asStringList()
+ else:
+ out.append( _ustr(item) )
+ return out
+
+ def asList( self ):
+ """Returns the parse results as a nested list of matching tokens, all converted to strings."""
+ out = []
+ for res in self.__toklist:
+ if isinstance(res,ParseResults):
+ out.append( res.asList() )
+ else:
+ out.append( res )
+ return out
+
+ def asDict( self ):
+ """Returns the named parse results as dictionary."""
+ return dict( self.items() )
+
+ def copy( self ):
+ """Returns a new copy of a ParseResults object."""
+ ret = ParseResults( self.__toklist )
+ ret.__tokdict = self.__tokdict.copy()
+ ret.__parent = self.__parent
+ ret.__accumNames.update( self.__accumNames )
+ ret.__name = self.__name
+ return ret
+
+ def asXML( self, doctag=None, namedItemsOnly=False, indent="", formatted=True ):
+ """Returns the parse results as XML. Tags are created for tokens and lists that have defined results names."""
+ nl = "\n"
+ out = []
+ namedItems = dict( [ (v[1],k) for (k,vlist) in self.__tokdict.items()
+ for v in vlist ] )
+ nextLevelIndent = indent + " "
+
+ # collapse out indents if formatting is not desired
+ if not formatted:
+ indent = ""
+ nextLevelIndent = ""
+ nl = ""
+
+ selfTag = None
+ if doctag is not None:
+ selfTag = doctag
+ else:
+ if self.__name:
+ selfTag = self.__name
+
+ if not selfTag:
+ if namedItemsOnly:
+ return ""
+ else:
+ selfTag = "ITEM"
+
+ out += [ nl, indent, "<", selfTag, ">" ]
+
+ worklist = self.__toklist
+ for i,res in enumerate(worklist):
+ if isinstance(res,ParseResults):
+ if i in namedItems:
+ out += [ res.asXML(namedItems[i],
+ namedItemsOnly and doctag is None,
+ nextLevelIndent,
+ formatted)]
+ else:
+ out += [ res.asXML(None,
+ namedItemsOnly and doctag is None,
+ nextLevelIndent,
+ formatted)]
+ else:
+ # individual token, see if there is a name for it
+ resTag = None
+ if i in namedItems:
+ resTag = namedItems[i]
+ if not resTag:
+ if namedItemsOnly:
+ continue
+ else:
+ resTag = "ITEM"
+ xmlBodyText = _xml_escape(_ustr(res))
+ out += [ nl, nextLevelIndent, "<", resTag, ">",
+ xmlBodyText,
+ "</", resTag, ">" ]
+
+ out += [ nl, indent, "</", selfTag, ">" ]
+ return "".join(out)
+
+ def __lookup(self,sub):
+ for k,vlist in self.__tokdict.items():
+ for v,loc in vlist:
+ if sub is v:
+ return k
+ return None
+
+ def getName(self):
+ """Returns the results name for this token expression."""
+ if self.__name:
+ return self.__name
+ elif self.__parent:
+ par = self.__parent()
+ if par:
+ return par.__lookup(self)
+ else:
+ return None
+ elif (len(self) == 1 and
+ len(self.__tokdict) == 1 and
+ self.__tokdict.values()[0][0][1] in (0,-1)):
+ return self.__tokdict.keys()[0]
+ else:
+ return None
+
+ def dump(self,indent='',depth=0):
+ """Diagnostic method for listing out the contents of a ParseResults.
+ Accepts an optional indent argument so that this string can be embedded
+ in a nested display of other data."""
+ out = []
+ out.append( indent+_ustr(self.asList()) )
+ items = self.items()
+ items.sort()
+ for k,v in items:
+ if out:
+ out.append('\n')
+ out.append( "%s%s- %s: " % (indent,(' '*depth), k) )
+ if isinstance(v,ParseResults):
+ if v.keys():
+ #~ out.append('\n')
+ out.append( v.dump(indent,depth+1) )
+ #~ out.append('\n')
+ else:
+ out.append(_ustr(v))
+ else:
+ out.append(_ustr(v))
+ #~ out.append('\n')
+ return "".join(out)
+
+ # add support for pickle protocol
+ def __getstate__(self):
+ return ( self.__toklist,
+ ( self.__tokdict.copy(),
+ self.__parent is not None and self.__parent() or None,
+ self.__accumNames,
+ self.__name ) )
+
+ def __setstate__(self,state):
+ self.__toklist = state[0]
+ self.__tokdict, \
+ par, \
+ inAccumNames, \
+ self.__name = state[1]
+ self.__accumNames = {}
+ self.__accumNames.update(inAccumNames)
+ if par is not None:
+ self.__parent = wkref(par)
+ else:
+ self.__parent = None
+
+ def __dir__(self):
+ return dir(super(ParseResults,self)) + self.keys()
+
+def col (loc,strg):
+ """Returns current column within a string, counting newlines as line separators.
+ The first column is number 1.
+
+ Note: the default parsing behavior is to expand tabs in the input string
+ before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information
+ on parsing strings containing <TAB>s, and suggested methods to maintain a
+ consistent view of the parsed string, the parse location, and line and column
+ positions within the parsed string.
+ """
+ return (loc<len(strg) and strg[loc] == '\n') and 1 or loc - strg.rfind("\n", 0, loc)
+
+def lineno(loc,strg):
+ """Returns current line number within a string, counting newlines as line separators.
+ The first line is number 1.
+
+ Note: the default parsing behavior is to expand tabs in the input string
+ before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information
+ on parsing strings containing <TAB>s, and suggested methods to maintain a
+ consistent view of the parsed string, the parse location, and line and column
+ positions within the parsed string.
+ """
+ return strg.count("\n",0,loc) + 1
+
+def line( loc, strg ):
+ """Returns the line of text containing loc within a string, counting newlines as line separators.
+ """
+ lastCR = strg.rfind("\n", 0, loc)
+ nextCR = strg.find("\n", loc)
+ if nextCR > 0:
+ return strg[lastCR+1:nextCR]
+ else:
+ return strg[lastCR+1:]
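These three helpers translate an absolute parse location into human-friendly coordinates: line number, 1-based column, and the text of the containing line. Restated with modern conditional syntax (behaviour unchanged):

```python
def lineno(loc, strg):
    # line numbers start at 1; count newlines before loc
    return strg.count("\n", 0, loc) + 1

def col(loc, strg):
    # a loc sitting on a newline reports column 1; otherwise distance
    # from the previous newline (rfind returns -1 on the first line)
    if loc < len(strg) and strg[loc] == "\n":
        return 1
    return loc - strg.rfind("\n", 0, loc)

def line(loc, strg):
    last_cr = strg.rfind("\n", 0, loc)
    next_cr = strg.find("\n", loc)
    return strg[last_cr + 1:next_cr] if next_cr > 0 else strg[last_cr + 1:]

s = "ab\ncde\nf"
loc = s.index("d")  # loc == 4, on line 2, column 2
assert (lineno(loc, s), col(loc, s), line(loc, s)) == (2, 2, "cde")
```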
+
+def _defaultStartDebugAction( instring, loc, expr ):
+ print ("Match " + _ustr(expr) + " at loc " + _ustr(loc) + "(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))
+
+def _defaultSuccessDebugAction( instring, startloc, endloc, expr, toks ):
+ print ("Matched " + _ustr(expr) + " -> " + str(toks.asList()))
+
+def _defaultExceptionDebugAction( instring, loc, expr, exc ):
+ print ("Exception raised:" + _ustr(exc))
+
+def nullDebugAction(*args):
+ """'Do-nothing' debug action, to suppress debugging output during parsing."""
+ pass
+
+class ParserElement(object):
+ """Abstract base level parser element class."""
+ DEFAULT_WHITE_CHARS = " \n\t\r"
+
+ def setDefaultWhitespaceChars( chars ):
+ """Overrides the default whitespace chars
+ """
+ ParserElement.DEFAULT_WHITE_CHARS = chars
+ setDefaultWhitespaceChars = staticmethod(setDefaultWhitespaceChars)
+
+ def __init__( self, savelist=False ):
+ self.parseAction = list()
+ self.failAction = None
+ #~ self.name = "<unknown>" # don't define self.name, let subclasses try/except upcall
+ self.strRepr = None
+ self.resultsName = None
+ self.saveAsList = savelist
+ self.skipWhitespace = True
+ self.whiteChars = ParserElement.DEFAULT_WHITE_CHARS
+ self.copyDefaultWhiteChars = True
+ self.mayReturnEmpty = False # used when checking for left-recursion
+ self.keepTabs = False
+ self.ignoreExprs = list()
+ self.debug = False
+ self.streamlined = False
+ self.mayIndexError = True # used to optimize exception handling for subclasses that don't advance parse index
+ self.errmsg = ""
+ self.modalResults = True # used to mark results names as modal (report only last) or cumulative (list all)
+ self.debugActions = ( None, None, None ) #custom debug actions
+ self.re = None
+ self.callPreparse = True # used to avoid redundant calls to preParse
+ self.callDuringTry = False
+
+ def copy( self ):
+ """Make a copy of this ParserElement. Useful for defining different parse actions
+ for the same parsing pattern, using copies of the original parse element."""
+ cpy = copy.copy( self )
+ cpy.parseAction = self.parseAction[:]
+ cpy.ignoreExprs = self.ignoreExprs[:]
+ if self.copyDefaultWhiteChars:
+ cpy.whiteChars = ParserElement.DEFAULT_WHITE_CHARS
+ return cpy
+
+ if _BEFORE_24:
+ def __copy__(self):
+ # needed by copy.copy in e.g. Jython 2.2
+ cpy = self.__class__.__new__(self.__class__, self.saveAsList)
+ cpy.__dict__.update(self.__dict__)
+ return cpy
+
+
+ def setName( self, name ):
+ """Define name for this expression, for use in debugging."""
+ self.name = name
+ self.errmsg = "Expected " + self.name
+ if hasattr(self,"exception"):
+ self.exception.msg = self.errmsg
+ return self
+
+ def setResultsName( self, name, listAllMatches=False ):
+ """Define name for referencing matching tokens as a nested attribute
+ of the returned parse results.
+ NOTE: this returns a *copy* of the original ParserElement object;
+ this is so that the client can define a basic element, such as an
+ integer, and reference it in multiple places with different names.
+ """
+ newself = self.copy()
+ newself.resultsName = name
+ newself.modalResults = not listAllMatches
+ return newself
+
+ def setBreak(self,breakFlag = True):
+ """Method to invoke the Python pdb debugger when this element is
+ about to be parsed. Set breakFlag to True to enable, False to
+ disable.
+ """
+ if breakFlag:
+ _parseMethod = self._parse
+ def breaker(instring, loc, doActions=True, callPreParse=True):
+ import pdb
+ pdb.set_trace()
+ return _parseMethod( instring, loc, doActions, callPreParse )
+ breaker._originalParseMethod = _parseMethod
+ self._parse = breaker
+ else:
+ if hasattr(self._parse,"_originalParseMethod"):
+ self._parse = self._parse._originalParseMethod
+ return self
+
+ def _normalizeParseActionArgs( f ):
+ """Internal method used to decorate parse actions that take fewer than 3 arguments,
+ so that all parse actions can be called as f(s,l,t)."""
+ STAR_ARGS = 4
+
+ try:
+ restore = None
+ if isinstance(f,type):
+ restore = f
+ f = f.__init__
+ if not _PY3K:
+ codeObj = f.func_code
+ else:
+ codeObj = f.__code__  # functions expose their code object as __code__ in Python 3
+ if codeObj.co_flags & STAR_ARGS:
+ return f
+ numargs = codeObj.co_argcount
+ if not _PY3K:
+ if hasattr(f,"im_self"):
+ numargs -= 1
+ else:
+ if hasattr(f,"__self__"):
+ numargs -= 1
+ if restore:
+ f = restore
+ except AttributeError:
+ try:
+ if not _PY3K:
+ call_im_func_code = f.__call__.im_func.func_code
+ else:
+ call_im_func_code = f.__code__
+
+ # not a function, must be a callable object, get info from the
+ # im_func binding of its bound __call__ method
+ if call_im_func_code.co_flags & STAR_ARGS:
+ return f
+ numargs = call_im_func_code.co_argcount
+ if not _PY3K:
+ if hasattr(f.__call__,"im_self"):
+ numargs -= 1
+ else:
+ if hasattr(f.__call__,"__self__"):
+ numargs -= 1  # bound __call__ includes self in co_argcount
+ except AttributeError:
+ if not _PY3K:
+ call_func_code = f.__call__.func_code
+ else:
+ call_func_code = f.__call__.__code__
+ # not a bound method, get info directly from __call__ method
+ if call_func_code.co_flags & STAR_ARGS:
+ return f
+ numargs = call_func_code.co_argcount
+ if not _PY3K:
+ if hasattr(f.__call__,"im_self"):
+ numargs -= 1
+ else:
+ if hasattr(f.__call__,"__self__"):
+ numargs -= 1
+
+
+ #~ print ("adding function %s with %d args" % (f.func_name,numargs))
+ if numargs == 3:
+ return f
+ else:
+ if numargs > 3:
+ def tmp(s,l,t):
+ return f(f.__call__.__self__, s,l,t)
+ elif numargs == 2:
+ def tmp(s,l,t):
+ return f(l,t)
+ elif numargs == 1:
+ def tmp(s,l,t):
+ return f(t)
+ else: #~ numargs == 0:
+ def tmp(s,l,t):
+ return f()
+ try:
+ tmp.__name__ = f.__name__
+ except (AttributeError,TypeError):
+ # no need for special handling if attribute doesn't exist
+ pass
+ try:
+ tmp.__doc__ = f.__doc__
+ except (AttributeError,TypeError):
+ # no need for special handling if attribute doesn't exist
+ pass
+ try:
+ tmp.__dict__.update(f.__dict__)
+ except (AttributeError,TypeError):
+ # no need for special handling if attribute doesn't exist
+ pass
+ return tmp
+ _normalizeParseActionArgs = staticmethod(_normalizeParseActionArgs)
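The essence of the arity-sniffing above, minus the bound-method and callable-object special cases, fits in a few lines. A simplified sketch for plain functions (names chosen for illustration):

```python
def normalize(fn):
    # inspect the callable's arity and wrap it so every parse action
    # can be invoked uniformly as fn(s, loc, toks)
    numargs = fn.__code__.co_argcount
    if numargs >= 3:
        return fn
    if numargs == 2:
        return lambda s, loc, toks: fn(loc, toks)
    if numargs == 1:
        return lambda s, loc, toks: fn(toks)
    return lambda s, loc, toks: fn()

upper = normalize(lambda toks: [t.upper() for t in toks])
assert upper("src", 0, ["a", "b"]) == ["A", "B"]
```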
+
+ def setParseAction( self, *fns, **kwargs ):
+ """Define action to perform when successfully matching parse element definition.
+ Parse action fn is a callable method with 0-3 arguments, called as fn(s,loc,toks),
+ fn(loc,toks), fn(toks), or just fn(), where:
+ - s = the original string being parsed (see note below)
+ - loc = the location of the matching substring
+ - toks = a list of the matched tokens, packaged as a ParseResults object
+ If the functions in fns modify the tokens, they can return them as the return
+ value from fn, and the modified list of tokens will replace the original.
+ Otherwise, fn does not need to return any value.
+
+ Note: the default parsing behavior is to expand tabs in the input string
+ before starting the parsing process. See L{I{parseString}<parseString>} for more information
+ on parsing strings containing <TAB>s, and suggested methods to maintain a
+ consistent view of the parsed string, the parse location, and line and column
+ positions within the parsed string.
+ """
+ self.parseAction = list(map(self._normalizeParseActionArgs, list(fns)))
+ self.callDuringTry = ("callDuringTry" in kwargs and kwargs["callDuringTry"])
+ return self
+
+ def addParseAction( self, *fns, **kwargs ):
+ """Add parse action to expression's list of parse actions. See L{I{setParseAction}<setParseAction>}."""
+ self.parseAction += list(map(self._normalizeParseActionArgs, list(fns)))
+ self.callDuringTry = self.callDuringTry or ("callDuringTry" in kwargs and kwargs["callDuringTry"])
+ return self
+
+ def setFailAction( self, fn ):
+ """Define action to perform if parsing fails at this expression.
+ Fail action fn is a callable function that takes the arguments
+ fn(s,loc,expr,err) where:
+ - s = string being parsed
+ - loc = location where expression match was attempted and failed
+ - expr = the parse expression that failed
+ - err = the exception thrown
+ The function returns no value. It may throw ParseFatalException
+ if it is desired to stop parsing immediately."""
+ self.failAction = fn
+ return self
+
+ def _skipIgnorables( self, instring, loc ):
+ exprsFound = True
+ while exprsFound:
+ exprsFound = False
+ for e in self.ignoreExprs:
+ try:
+ while 1:
+ loc,dummy = e._parse( instring, loc )
+ exprsFound = True
+ except ParseException:
+ pass
+ return loc
+
+ def preParse( self, instring, loc ):
+ if self.ignoreExprs:
+ loc = self._skipIgnorables( instring, loc )
+
+ if self.skipWhitespace:
+ wt = self.whiteChars
+ instrlen = len(instring)
+ while loc < instrlen and instring[loc] in wt:
+ loc += 1
+
+ return loc
+
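The whitespace scan in `preParse` is a simple pointer advance over the configured whitespace characters. Isolated as a free function:

```python
def skip_whitespace(instring, loc, white_chars=" \n\t\r"):
    # advance loc past any leading whitespace, as preParse does
    n = len(instring)
    while loc < n and instring[loc] in white_chars:
        loc += 1
    return loc

assert skip_whitespace("  \tword", 0) == 3
```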
+ def parseImpl( self, instring, loc, doActions=True ):
+ return loc, []
+
+ def postParse( self, instring, loc, tokenlist ):
+ return tokenlist
+
+ #~ @profile
+ def _parseNoCache( self, instring, loc, doActions=True, callPreParse=True ):
+ debugging = ( self.debug ) #and doActions )
+
+ if debugging or self.failAction:
+ #~ print ("Match",self,"at loc",loc,"(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))
+ if (self.debugActions[0] ):
+ self.debugActions[0]( instring, loc, self )
+ if callPreParse and self.callPreparse:
+ preloc = self.preParse( instring, loc )
+ else:
+ preloc = loc
+ tokensStart = loc
+ try:
+ try:
+ loc,tokens = self.parseImpl( instring, preloc, doActions )
+ except IndexError:
+ raise ParseException( instring, len(instring), self.errmsg, self )
+ except ParseBaseException, err:
+ #~ print ("Exception raised:", err)
+ if self.debugActions[2]:
+ self.debugActions[2]( instring, tokensStart, self, err )
+ if self.failAction:
+ self.failAction( instring, tokensStart, self, err )
+ raise
+ else:
+ if callPreParse and self.callPreparse:
+ preloc = self.preParse( instring, loc )
+ else:
+ preloc = loc
+ tokensStart = loc
+ if self.mayIndexError or loc >= len(instring):
+ try:
+ loc,tokens = self.parseImpl( instring, preloc, doActions )
+ except IndexError:
+ raise ParseException( instring, len(instring), self.errmsg, self )
+ else:
+ loc,tokens = self.parseImpl( instring, preloc, doActions )
+
+ tokens = self.postParse( instring, loc, tokens )
+
+ retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults )
+ if self.parseAction and (doActions or self.callDuringTry):
+ if debugging:
+ try:
+ for fn in self.parseAction:
+ tokens = fn( instring, tokensStart, retTokens )
+ if tokens is not None:
+ retTokens = ParseResults( tokens,
+ self.resultsName,
+ asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),
+ modal=self.modalResults )
+ except ParseBaseException, err:
+ #~ print "Exception raised in user parse action:", err
+ if (self.debugActions[2] ):
+ self.debugActions[2]( instring, tokensStart, self, err )
+ raise
+ else:
+ for fn in self.parseAction:
+ tokens = fn( instring, tokensStart, retTokens )
+ if tokens is not None:
+ retTokens = ParseResults( tokens,
+ self.resultsName,
+ asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),
+ modal=self.modalResults )
+
+ if debugging:
+ #~ print ("Matched",self,"->",retTokens.asList())
+ if (self.debugActions[1] ):
+ self.debugActions[1]( instring, tokensStart, loc, self, retTokens )
+
+ return loc, retTokens
+
+ def tryParse( self, instring, loc ):
+ try:
+ return self._parse( instring, loc, doActions=False )[0]
+ except ParseFatalException:
+ raise ParseException( instring, loc, self.errmsg, self)
+
+ # this method gets repeatedly called during backtracking with the same arguments -
+ # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression
+ def _parseCache( self, instring, loc, doActions=True, callPreParse=True ):
+ lookup = (self,instring,loc,callPreParse,doActions)
+ if lookup in ParserElement._exprArgCache:
+ value = ParserElement._exprArgCache[ lookup ]
+ if isinstance(value,Exception):
+ raise value
+ return value
+ else:
+ try:
+ value = self._parseNoCache( instring, loc, doActions, callPreParse )
+ ParserElement._exprArgCache[ lookup ] = (value[0],value[1].copy())
+ return value
+ except ParseBaseException, pe:
+ ParserElement._exprArgCache[ lookup ] = pe
+ raise
+
+ _parse = _parseNoCache
+
+ # argument cache for optimizing repeated calls when backtracking through recursive expressions
+ _exprArgCache = {}
+ def resetCache():
+ ParserElement._exprArgCache.clear()
+ resetCache = staticmethod(resetCache)
+
+ _packratEnabled = False
+ def enablePackrat():
+ """Enables "packrat" parsing, which adds memoizing to the parsing logic.
+ Repeated parse attempts at the same string location (which happens
+ often in many complex grammars) can immediately return a cached value,
+ instead of re-executing parsing/validating code. Memoizing covers
+ both valid results and parsing exceptions.
+
+ This speedup may break existing programs that use parse actions that
+ have side-effects. For this reason, packrat parsing is disabled when
+ you first import pyparsing. To activate the packrat feature, your
+ program must call the class method ParserElement.enablePackrat(). If
+ your program uses psyco to "compile as you go", you must call
+ enablePackrat before calling psyco.full(). If you do not do this,
+ Python will crash. For best results, call enablePackrat() immediately
+ after importing pyparsing.
+ """
+ if not ParserElement._packratEnabled:
+ ParserElement._packratEnabled = True
+ ParserElement._parse = ParserElement._parseCache
+ enablePackrat = staticmethod(enablePackrat)
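The memoization that enablePackrat switches on can be sketched in plain, modern Python: cache each parse attempt keyed by (parser, input, location), and record failures as well as successes so backtracking replays them from the cache. This mirrors the _parseCache logic above; all names here are illustrative, not pyparsing's API.

```python
# Minimal packrat-style memoization sketch (illustrative names only).
class ParseError(Exception):
    pass

_cache = {}

def parse_cached(parser, text, loc):
    key = (id(parser), text, loc)
    if key in _cache:
        value = _cache[key]
        if isinstance(value, Exception):
            raise value          # cached failures are replayed too
        return value
    try:
        value = parser(text, loc)
        _cache[key] = value
        return value
    except ParseError as pe:
        _cache[key] = pe
        raise

calls = []
def digits(text, loc):
    calls.append(loc)            # record real (non-cached) invocations
    end = loc
    while end < len(text) and text[end].isdigit():
        end += 1
    if end == loc:
        raise ParseError("expected digits at %d" % loc)
    return end, text[loc:end]

print(parse_cached(digits, "123abc", 0))  # (3, '123')
print(parse_cached(digits, "123abc", 0))  # served from the cache
print(len(calls))                         # 1 -- second call was memoized
```

As in _parseCache, the caveat stated in the docstring applies: parse actions with side effects only run on the first (uncached) attempt.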
+
+ def parseString( self, instring, parseAll=False ):
+ """Execute the parse expression with the given string.
+ This is the main interface to the client code, once the complete
+ expression has been built.
+
+ If you want the grammar to require that the entire input string be
+ successfully parsed, then set parseAll to True (equivalent to ending
+ the grammar with StringEnd()).
+
+ Note: parseString implicitly calls expandtabs() on the input string,
+ in order to report proper column numbers in parse actions.
+ If the input string contains tabs and
+ the grammar uses parse actions that use the loc argument to index into the
+ string being parsed, you can ensure you have a consistent view of the input
+ string by:
+ - calling parseWithTabs on your grammar before calling parseString
+ (see L{I{parseWithTabs}<parseWithTabs>})
+ - defining your parse action using the full (s,loc,toks) signature, and
+ referencing the input string using the parse action's s argument
+ - explicitly expanding the tabs in your input string before calling
+ parseString
+ """
+ ParserElement.resetCache()
+ if not self.streamlined:
+ self.streamline()
+ #~ self.saveAsList = True
+ for e in self.ignoreExprs:
+ e.streamline()
+ if not self.keepTabs:
+ instring = instring.expandtabs()
+ try:
+ loc, tokens = self._parse( instring, 0 )
+ if parseAll:
+ loc = self.preParse( instring, loc )
+ StringEnd()._parse( instring, loc )
+ except ParseBaseException, exc:
+ # catch and re-raise exception from here, clears out pyparsing internal stack trace
+ raise exc
+ else:
+ return tokens
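The expandtabs caveat in the docstring is easy to see with plain strings: after tab expansion (default tab size 8) the same token sits at a different offset, which is why a loc computed against the expanded string cannot safely index the raw input.

```python
# A tab shifts every later offset once expandtabs() is applied, so the
# same substring is found at different positions in the two views.
raw = "name:\tvalue"
expanded = raw.expandtabs()
print(raw.index("value"), expanded.index("value"))  # 6 8
```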
+
+ def scanString( self, instring, maxMatches=_MAX_INT ):
+ """Scan the input string for expression matches. Each match will return the
+ matching tokens, start location, and end location. May be called with optional
+ maxMatches argument, to clip scanning after 'n' matches are found.
+
+ Note that the start and end locations are reported relative to the string
+ being parsed. See L{I{parseString}<parseString>} for more information on parsing
+ strings with embedded tabs."""
+ if not self.streamlined:
+ self.streamline()
+ for e in self.ignoreExprs:
+ e.streamline()
+
+ if not self.keepTabs:
+ instring = _ustr(instring).expandtabs()
+ instrlen = len(instring)
+ loc = 0
+ preparseFn = self.preParse
+ parseFn = self._parse
+ ParserElement.resetCache()
+ matches = 0
+ try:
+ while loc <= instrlen and matches < maxMatches:
+ try:
+ preloc = preparseFn( instring, loc )
+ nextLoc,tokens = parseFn( instring, preloc, callPreParse=False )
+ except ParseException:
+ loc = preloc+1
+ else:
+ matches += 1
+ yield tokens, preloc, nextLoc
+ loc = nextLoc
+ except ParseBaseException, pe:
+ raise pe
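The scan loop above (yield tokens together with their start and end locations, advancing one character past each failure) behaves much like iterating regex matches; a stdlib sketch of the same contract:

```python
import re

# scanString-style iteration: each hit yields the matched text plus its
# start and end locations within the scanned string.
text = "a12b345c"
matches = [(m.group(), m.start(), m.end()) for m in re.finditer(r"\d+", text)]
print(matches)  # [('12', 1, 3), ('345', 4, 7)]
```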
+
+ def transformString( self, instring ):
+ """Extension to scanString, to modify matching text with modified tokens that may
+ be returned from a parse action. To use transformString, define a grammar and
+ attach a parse action to it that modifies the returned token list.
+ Invoking transformString() on a target string will then scan for matches,
+ and replace the matched text patterns according to the logic in the parse
+ action. transformString() returns the resulting transformed string."""
+ out = []
+ lastE = 0
+ # force preservation of <TAB>s, to minimize unwanted transformation of string, and to
+ # keep string locs straight between transformString and scanString
+ self.keepTabs = True
+ try:
+ for t,s,e in self.scanString( instring ):
+ out.append( instring[lastE:s] )
+ if t:
+ if isinstance(t,ParseResults):
+ out += t.asList()
+ elif isinstance(t,list):
+ out += t
+ else:
+ out.append(t)
+ lastE = e
+ out.append(instring[lastE:])
+ return "".join(map(_ustr,out))
+ except ParseBaseException, pe:
+ raise pe
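The out/lastE splicing that transformString performs can be sketched with the stdlib: copy unmatched text through untouched, and replace each match with whatever the "parse action" returns (here, doubling numbers stands in for a token-modifying parse action).

```python
import re

# transformString-style splicing: unmatched spans pass through, matched
# spans are replaced by the action's result.
text = "cost is 10 and 20 dollars"
out, last = [], 0
for m in re.finditer(r"\d+", text):
    out.append(text[last:m.start()])
    out.append(str(int(m.group()) * 2))  # the "parse action": double it
    last = m.end()
out.append(text[last:])
print("".join(out))  # cost is 20 and 40 dollars
```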
+
+ def searchString( self, instring, maxMatches=_MAX_INT ):
+ """Another extension to scanString, simplifying the access to the tokens found
+ to match the given parse expression. May be called with optional
+ maxMatches argument, to clip searching after 'n' matches are found.
+ """
+ try:
+ return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ])
+ except ParseBaseException, pe:
+ raise pe
+
+ def __add__(self, other ):
+ """Implementation of + operator - returns And"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return And( [ self, other ] )
+
+ def __radd__(self, other ):
+ """Implementation of + operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other + self
+
+ def __sub__(self, other):
+ """Implementation of - operator, returns And with error stop"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return And( [ self, And._ErrorStop(), other ] )
+
+ def __rsub__(self, other ):
+ """Implementation of - operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other - self
+
+ def __mul__(self,other):
+ if isinstance(other,int):
+ minElements, optElements = other,0
+ elif isinstance(other,tuple):
+ other = (other + (None, None))[:2]
+ if other[0] is None:
+ other = (0, other[1])
+ if isinstance(other[0],int) and other[1] is None:
+ if other[0] == 0:
+ return ZeroOrMore(self)
+ if other[0] == 1:
+ return OneOrMore(self)
+ else:
+ return self*other[0] + ZeroOrMore(self)
+ elif isinstance(other[0],int) and isinstance(other[1],int):
+ minElements, optElements = other
+ optElements -= minElements
+ else:
+ raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects" % (type(other[0]),type(other[1])))
+ else:
+ raise TypeError("cannot multiply 'ParserElement' and '%s' objects" % type(other))
+
+ if minElements < 0:
+ raise ValueError("cannot multiply ParserElement by negative value")
+ if optElements < 0:
+ raise ValueError("second tuple value must be greater or equal to first tuple value")
+ if minElements == optElements == 0:
+ raise ValueError("cannot multiply ParserElement by 0 or (0,0)")
+
+ if (optElements):
+ def makeOptionalList(n):
+ if n>1:
+ return Optional(self + makeOptionalList(n-1))
+ else:
+ return Optional(self)
+ if minElements:
+ if minElements == 1:
+ ret = self + makeOptionalList(optElements)
+ else:
+ ret = And([self]*minElements) + makeOptionalList(optElements)
+ else:
+ ret = makeOptionalList(optElements)
+ else:
+ if minElements == 1:
+ ret = self
+ else:
+ ret = And([self]*minElements)
+ return ret
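The expansion that __mul__ performs can be made visible with strings standing in for parser elements: expr*(2,4) becomes two mandatory copies followed by a nested chain of Optionals, exactly as makeOptionalList builds it. This sketch only renders the shape of the result; it builds no real parsers.

```python
# Render the structure __mul__ produces for expr*(min, min+opt).
def make_optional_list(name, n):
    if n > 1:
        return "Optional(%s + %s)" % (name, make_optional_list(name, n - 1))
    return "Optional(%s)" % name

def times(name, min_elems, opt_elems):
    parts = [name] * min_elems
    if opt_elems:
        parts.append(make_optional_list(name, opt_elems))
    return " + ".join(parts)

print(times("expr", 2, 2))  # expr + expr + Optional(expr + Optional(expr))
```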
+
+ def __rmul__(self, other):
+ return self.__mul__(other)
+
+ def __or__(self, other ):
+ """Implementation of | operator - returns MatchFirst"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return MatchFirst( [ self, other ] )
+
+ def __ror__(self, other ):
+ """Implementation of | operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other | self
+
+ def __xor__(self, other ):
+ """Implementation of ^ operator - returns Or"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return Or( [ self, other ] )
+
+ def __rxor__(self, other ):
+ """Implementation of ^ operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other ^ self
+
+ def __and__(self, other ):
+ """Implementation of & operator - returns Each"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return Each( [ self, other ] )
+
+ def __rand__(self, other ):
+ """Implementation of & operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other & self
+
+ def __invert__( self ):
+ """Implementation of ~ operator - returns NotAny"""
+ return NotAny( self )
+
+ def __call__(self, name):
+ """Shortcut for setResultsName, with listAllMatches=default::
+ userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno")
+ could be written as::
+ userdata = Word(alphas)("name") + Word(nums+"-")("socsecno")
+ """
+ return self.setResultsName(name)
+
+ def suppress( self ):
+ """Suppresses the output of this ParserElement; useful to keep punctuation from
+ cluttering up returned output.
+ """
+ return Suppress( self )
+
+ def leaveWhitespace( self ):
+ """Disables the skipping of whitespace before matching the characters in the
+ ParserElement's defined pattern. This is normally only used internally by
+ the pyparsing module, but may be needed in some whitespace-sensitive grammars.
+ """
+ self.skipWhitespace = False
+ return self
+
+ def setWhitespaceChars( self, chars ):
+ """Overrides the default whitespace chars
+ """
+ self.skipWhitespace = True
+ self.whiteChars = chars
+ self.copyDefaultWhiteChars = False
+ return self
+
+ def parseWithTabs( self ):
+ """Overrides default behavior to expand <TAB>s to spaces before parsing the input string.
+ Must be called before parseString when the input grammar contains elements that
+ match <TAB> characters."""
+ self.keepTabs = True
+ return self
+
+ def ignore( self, other ):
+ """Define expression to be ignored (e.g., comments) while doing pattern
+ matching; may be called repeatedly, to define multiple comment or other
+ ignorable patterns.
+ """
+ if isinstance( other, Suppress ):
+ if other not in self.ignoreExprs:
+ self.ignoreExprs.append( other )
+ else:
+ self.ignoreExprs.append( Suppress( other ) )
+ return self
+
+ def setDebugActions( self, startAction, successAction, exceptionAction ):
+ """Enable display of debugging messages while doing pattern matching."""
+ self.debugActions = (startAction or _defaultStartDebugAction,
+ successAction or _defaultSuccessDebugAction,
+ exceptionAction or _defaultExceptionDebugAction)
+ self.debug = True
+ return self
+
+ def setDebug( self, flag=True ):
+ """Enable display of debugging messages while doing pattern matching.
+ Set flag to True to enable, False to disable."""
+ if flag:
+ self.setDebugActions( _defaultStartDebugAction, _defaultSuccessDebugAction, _defaultExceptionDebugAction )
+ else:
+ self.debug = False
+ return self
+
+ def __str__( self ):
+ return self.name
+
+ def __repr__( self ):
+ return _ustr(self)
+
+ def streamline( self ):
+ self.streamlined = True
+ self.strRepr = None
+ return self
+
+ def checkRecursion( self, parseElementList ):
+ pass
+
+ def validate( self, validateTrace=[] ):
+ """Check defined expressions for valid structure, check for infinite recursive definitions."""
+ self.checkRecursion( [] )
+
+ def parseFile( self, file_or_filename, parseAll=False ):
+ """Execute the parse expression on the given file or filename.
+ If a filename is specified (instead of a file object),
+ the entire file is opened, read, and closed before parsing.
+ """
+ try:
+ file_contents = file_or_filename.read()
+ except AttributeError:
+ f = open(file_or_filename, "rb")
+ file_contents = f.read()
+ f.close()
+ try:
+ return self.parseString(file_contents, parseAll)
+ except ParseBaseException, exc:
+ # catch and re-raise exception from here, clears out pyparsing internal stack trace
+ raise exc
+
+ def getException(self):
+ return ParseException("",0,self.errmsg,self)
+
+ def __getattr__(self,aname):
+ if aname == "myException":
+ self.myException = ret = self.getException()
+ return ret
+ else:
+ raise AttributeError("no such attribute " + aname)
+
+ def __eq__(self,other):
+ if isinstance(other, ParserElement):
+ return self is other or self.__dict__ == other.__dict__
+ elif isinstance(other, basestring):
+ try:
+ self.parseString(_ustr(other), parseAll=True)
+ return True
+ except ParseBaseException:
+ return False
+ else:
+ return super(ParserElement,self)==other
+
+ def __ne__(self,other):
+ return not (self == other)
+
+ def __hash__(self):
+ return hash(id(self))
+
+ def __req__(self,other):
+ return self == other
+
+ def __rne__(self,other):
+ return not (self == other)
+
+
+class Token(ParserElement):
+ """Abstract ParserElement subclass, for defining atomic matching patterns."""
+ def __init__( self ):
+ super(Token,self).__init__( savelist=False )
+ #self.myException = ParseException("",0,"",self)
+
+ def setName(self, name):
+ s = super(Token,self).setName(name)
+ self.errmsg = "Expected " + self.name
+ #s.myException.msg = self.errmsg
+ return s
+
+
+class Empty(Token):
+ """An empty token, will always match."""
+ def __init__( self ):
+ super(Empty,self).__init__()
+ self.name = "Empty"
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+
+
+class NoMatch(Token):
+ """A token that will never match."""
+ def __init__( self ):
+ super(NoMatch,self).__init__()
+ self.name = "NoMatch"
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+ self.errmsg = "Unmatchable token"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+
+class Literal(Token):
+ """Token to exactly match a specified string."""
+ def __init__( self, matchString ):
+ super(Literal,self).__init__()
+ self.match = matchString
+ self.matchLen = len(matchString)
+ try:
+ self.firstMatchChar = matchString[0]
+ except IndexError:
+ warnings.warn("null string passed to Literal; use Empty() instead",
+ SyntaxWarning, stacklevel=2)
+ self.__class__ = Empty
+ self.name = '"%s"' % _ustr(self.match)
+ self.errmsg = "Expected " + self.name
+ self.mayReturnEmpty = False
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+
+ # Performance tuning: this routine gets called a *lot*
+ # if this is a single character match string and the first character matches,
+ # short-circuit as quickly as possible, and avoid calling startswith
+ #~ @profile
+ def parseImpl( self, instring, loc, doActions=True ):
+ if (instring[loc] == self.firstMatchChar and
+ (self.matchLen==1 or instring.startswith(self.match,loc)) ):
+ return loc+self.matchLen, self.match
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+_L = Literal
+
+class Keyword(Token):
+ """Token to exactly match a specified string as a keyword, that is, it must be
+ immediately followed by a non-keyword character. Compare with Literal::
+ Literal("if") will match the leading 'if' in 'ifAndOnlyIf'.
+ Keyword("if") will not; it will only match the leading 'if' in 'if x=1', or 'if(y==2)'
+ Accepts two optional constructor arguments in addition to the keyword string:
+ identChars is a string of characters that would be valid identifier characters,
+ defaulting to all alphanumerics + "_" and "$"; caseless allows case-insensitive
+ matching, default is False.
+ """
+ DEFAULT_KEYWORD_CHARS = alphanums+"_$"
+
+ def __init__( self, matchString, identChars=DEFAULT_KEYWORD_CHARS, caseless=False ):
+ super(Keyword,self).__init__()
+ self.match = matchString
+ self.matchLen = len(matchString)
+ try:
+ self.firstMatchChar = matchString[0]
+ except IndexError:
+ warnings.warn("null string passed to Keyword; use Empty() instead",
+ SyntaxWarning, stacklevel=2)
+ self.name = '"%s"' % self.match
+ self.errmsg = "Expected " + self.name
+ self.mayReturnEmpty = False
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.caseless = caseless
+ if caseless:
+ self.caselessmatch = matchString.upper()
+ identChars = identChars.upper()
+ self.identChars = _str2dict(identChars)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.caseless:
+ if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and
+ (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) and
+ (loc == 0 or instring[loc-1].upper() not in self.identChars) ):
+ return loc+self.matchLen, self.match
+ else:
+ if (instring[loc] == self.firstMatchChar and
+ (self.matchLen==1 or instring.startswith(self.match,loc)) and
+ (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen] not in self.identChars) and
+ (loc == 0 or instring[loc-1] not in self.identChars) ):
+ return loc+self.matchLen, self.match
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ def copy(self):
+ c = super(Keyword,self).copy()
+ c.identChars = Keyword.DEFAULT_KEYWORD_CHARS
+ return c
+
+ def setDefaultKeywordChars( chars ):
+ """Overrides the default Keyword chars
+ """
+ Keyword.DEFAULT_KEYWORD_CHARS = chars
+ setDefaultKeywordChars = staticmethod(setDefaultKeywordChars)
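The boundary test Keyword adds on top of Literal — no identifier character immediately before or after the match — corresponds to lookaround assertions in a regex. A stdlib sketch using the default keyword characters (alphanumerics plus "_" and "$"):

```python
import re

# Keyword-style match: "if" must not be adjacent to an identifier
# character, unlike a plain substring/Literal match.
kw = re.compile(r"(?<![A-Za-z0-9_$])if(?![A-Za-z0-9_$])")
print(bool(kw.match("if(y==2)")))     # True
print(bool(kw.match("ifAndOnlyIf")))  # False
```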
+
+class CaselessLiteral(Literal):
+ """Token to match a specified string, ignoring case of letters.
+ Note: the matched results will always be in the case of the given
+ match string, NOT the case of the input text.
+ """
+ def __init__( self, matchString ):
+ super(CaselessLiteral,self).__init__( matchString.upper() )
+ # Preserve the defining literal.
+ self.returnString = matchString
+ self.name = "'%s'" % self.returnString
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if instring[ loc:loc+self.matchLen ].upper() == self.match:
+ return loc+self.matchLen, self.returnString
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class CaselessKeyword(Keyword):
+ def __init__( self, matchString, identChars=Keyword.DEFAULT_KEYWORD_CHARS ):
+ super(CaselessKeyword,self).__init__( matchString, identChars, caseless=True )
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and
+ (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) ):
+ return loc+self.matchLen, self.match
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class Word(Token):
+ """Token for matching words composed of allowed character sets.
+ Defined with string containing all allowed initial characters,
+ an optional string containing allowed body characters (if omitted,
+ defaults to the initial character set), and an optional minimum,
+ maximum, and/or exact length. The default value for min is 1 (a
+ minimum value < 1 is not valid); the default values for max and exact
+ are 0, meaning no maximum or exact length restriction.
+ """
+ def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False ):
+ super(Word,self).__init__()
+ self.initCharsOrig = initChars
+ self.initChars = _str2dict(initChars)
+ if bodyChars :
+ self.bodyCharsOrig = bodyChars
+ self.bodyChars = _str2dict(bodyChars)
+ else:
+ self.bodyCharsOrig = initChars
+ self.bodyChars = _str2dict(initChars)
+
+ self.maxSpecified = max > 0
+
+ if min < 1:
+ raise ValueError("cannot specify a minimum length < 1; use Optional(Word()) if zero-length word is permitted")
+
+ self.minLen = min
+
+ if max > 0:
+ self.maxLen = max
+ else:
+ self.maxLen = _MAX_INT
+
+ if exact > 0:
+ self.maxLen = exact
+ self.minLen = exact
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.asKeyword = asKeyword
+
+ if ' ' not in self.initCharsOrig+self.bodyCharsOrig and (min==1 and max==0 and exact==0):
+ if self.bodyCharsOrig == self.initCharsOrig:
+ self.reString = "[%s]+" % _escapeRegexRangeChars(self.initCharsOrig)
+ elif len(self.bodyCharsOrig) == 1:
+ self.reString = "%s[%s]*" % \
+ (re.escape(self.initCharsOrig),
+ _escapeRegexRangeChars(self.bodyCharsOrig),)
+ else:
+ self.reString = "[%s][%s]*" % \
+ (_escapeRegexRangeChars(self.initCharsOrig),
+ _escapeRegexRangeChars(self.bodyCharsOrig),)
+ if self.asKeyword:
+ self.reString = r"\b"+self.reString+r"\b"
+ try:
+ self.re = re.compile( self.reString )
+ except Exception:
+ self.re = None
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.re:
+ result = self.re.match(instring,loc)
+ if not result:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ loc = result.end()
+ return loc,result.group()
+
+ if not(instring[ loc ] in self.initChars):
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ start = loc
+ loc += 1
+ instrlen = len(instring)
+ bodychars = self.bodyChars
+ maxloc = start + self.maxLen
+ maxloc = min( maxloc, instrlen )
+ while loc < maxloc and instring[loc] in bodychars:
+ loc += 1
+
+ throwException = False
+ if loc - start < self.minLen:
+ throwException = True
+ if self.maxSpecified and loc < instrlen and instring[loc] in bodychars:
+ throwException = True
+ if self.asKeyword:
+ if (start>0 and instring[start-1] in bodychars) or (loc<instrlen and instring[loc] in bodychars):
+ throwException = True
+
+ if throwException:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ return loc, instring[start:loc]
+
+ def __str__( self ):
+ try:
+ return super(Word,self).__str__()
+ except Exception:
+ pass
+
+
+ if self.strRepr is None:
+
+ def charsAsStr(s):
+ if len(s)>4:
+ return s[:4]+"..."
+ else:
+ return s
+
+ if ( self.initCharsOrig != self.bodyCharsOrig ):
+ self.strRepr = "W:(%s,%s)" % ( charsAsStr(self.initCharsOrig), charsAsStr(self.bodyCharsOrig) )
+ else:
+ self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig)
+
+ return self.strRepr
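When no length limits are given, Word compiles itself down to the regex "[init][body]*" as built above; the equivalent hand-written pattern for a typical identifier-shaped Word is:

```python
import re

# Word(initChars, bodyChars) without length limits is one initial
# character followed by any number of body characters.
init, body = "A-Za-z_", "A-Za-z0-9_"
ident = re.compile("[%s][%s]*" % (init, body))
print(ident.match("foo123 bar").group())  # foo123
```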
+
+
+class Regex(Token):
+ """Token for matching strings that match a given regular expression.
+ Defined with string specifying the regular expression in a form recognized by the inbuilt Python re module.
+ """
+ def __init__( self, pattern, flags=0):
+ """The parameters pattern and flags are passed to the re.compile() function as-is. See the Python re module for an explanation of the acceptable patterns and flags."""
+ super(Regex,self).__init__()
+
+ if len(pattern) == 0:
+ warnings.warn("null string passed to Regex; use Empty() instead",
+ SyntaxWarning, stacklevel=2)
+
+ self.pattern = pattern
+ self.flags = flags
+
+ try:
+ self.re = re.compile(self.pattern, self.flags)
+ self.reString = self.pattern
+ except sre_constants.error:
+ warnings.warn("invalid pattern (%s) passed to Regex" % pattern,
+ SyntaxWarning, stacklevel=2)
+ raise
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ result = self.re.match(instring,loc)
+ if not result:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ loc = result.end()
+ d = result.groupdict()
+ ret = ParseResults(result.group())
+ if d:
+ for k in d:
+ ret[k] = d[k]
+ return loc,ret
+
+ def __str__( self ):
+ try:
+ return super(Regex,self).__str__()
+ except Exception:
+ pass
+
+ if self.strRepr is None:
+ self.strRepr = "Re:(%s)" % repr(self.pattern)
+
+ return self.strRepr
+
+
+class QuotedString(Token):
+ """Token for matching strings that are delimited by quoting characters.
+ """
+ def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None):
+ """
+ Defined with the following parameters:
+ - quoteChar - string of one or more characters defining the quote delimiting string
+ - escChar - character to escape quotes, typically backslash (default=None)
+ - escQuote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=None)
+ - multiline - boolean indicating whether quotes can span multiple lines (default=False)
+ - unquoteResults - boolean indicating whether the matched text should be unquoted (default=True)
+ - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default=None => same as quoteChar)
+ """
+ super(QuotedString,self).__init__()
+
+ # remove white space from quote chars - won't work anyway
+ quoteChar = quoteChar.strip()
+ if len(quoteChar) == 0:
+ warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
+ raise SyntaxError()
+
+ if endQuoteChar is None:
+ endQuoteChar = quoteChar
+ else:
+ endQuoteChar = endQuoteChar.strip()
+ if len(endQuoteChar) == 0:
+ warnings.warn("endQuoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
+ raise SyntaxError()
+
+ self.quoteChar = quoteChar
+ self.quoteCharLen = len(quoteChar)
+ self.firstQuoteChar = quoteChar[0]
+ self.endQuoteChar = endQuoteChar
+ self.endQuoteCharLen = len(endQuoteChar)
+ self.escChar = escChar
+ self.escQuote = escQuote
+ self.unquoteResults = unquoteResults
+
+ if multiline:
+ self.flags = re.MULTILINE | re.DOTALL
+ self.pattern = r'%s(?:[^%s%s]' % \
+ ( re.escape(self.quoteChar),
+ _escapeRegexRangeChars(self.endQuoteChar[0]),
+ (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
+ else:
+ self.flags = 0
+ self.pattern = r'%s(?:[^%s\n\r%s]' % \
+ ( re.escape(self.quoteChar),
+ _escapeRegexRangeChars(self.endQuoteChar[0]),
+ (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
+ if len(self.endQuoteChar) > 1:
+ self.pattern += (
+ '|(?:' + ')|(?:'.join(["%s[^%s]" % (re.escape(self.endQuoteChar[:i]),
+ _escapeRegexRangeChars(self.endQuoteChar[i]))
+ for i in range(len(self.endQuoteChar)-1,0,-1)]) + ')'
+ )
+ if escQuote:
+ self.pattern += (r'|(?:%s)' % re.escape(escQuote))
+ if escChar:
+ self.pattern += (r'|(?:%s.)' % re.escape(escChar))
+ self.escCharReplacePattern = re.escape(self.escChar)+"(.)"
+ self.pattern += (r')*%s' % re.escape(self.endQuoteChar))
+
+ try:
+ self.re = re.compile(self.pattern, self.flags)
+ self.reString = self.pattern
+ except sre_constants.error:
+ warnings.warn("invalid pattern (%s) passed to Regex" % self.pattern,
+ SyntaxWarning, stacklevel=2)
+ raise
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ result = instring[loc] == self.firstQuoteChar and self.re.match(instring,loc) or None
+ if not result:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ loc = result.end()
+ ret = result.group()
+
+ if self.unquoteResults:
+
+ # strip off quotes
+ ret = ret[self.quoteCharLen:-self.endQuoteCharLen]
+
+ if isinstance(ret,basestring):
+ # replace escaped characters
+ if self.escChar:
+ ret = re.sub(self.escCharReplacePattern,r"\g<1>",ret)
+
+ # replace escaped quotes
+ if self.escQuote:
+ ret = ret.replace(self.escQuote, self.endQuoteChar)
+
+ return loc, ret
+
+ def __str__( self ):
+ try:
+ return super(QuotedString,self).__str__()
+ except Exception:
+ pass
+
+ if self.strRepr is None:
+ self.strRepr = "quoted string, starting with %s ending with %s" % (self.quoteChar, self.endQuoteChar)
+
+ return self.strRepr
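For the common single-line case, the pattern QuotedString assembles for quoteChar='"' with escChar='\\' reduces to: an opening quote, then any number of non-quote/non-escape characters or escape-plus-anything pairs, then the closing quote. A hand-written sketch, including the unquoting step performed by escCharReplacePattern:

```python
import re

# Single-line quoted string with backslash escapes, mirroring the
# pattern QuotedString builds for quoteChar='"', escChar='\\'.
pat = re.compile(r'"(?:[^"\n\r\\]|(?:\\.))*"')
m = pat.match(r'"say \"hi\"" rest')
print(m.group())                               # "say \"hi\""

# Unquote: strip the delimiters, then collapse escape pairs.
inner = re.sub(r'\\(.)', r'\1', m.group()[1:-1])
print(inner)                                   # say "hi"
```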
+
+
+class CharsNotIn(Token):
+ """Token for matching words composed of characters *not* in a given set.
+ Defined with string containing all disallowed characters, and an optional
+ minimum, maximum, and/or exact length. The default value for min is 1 (a
+ minimum value < 1 is not valid); the default values for max and exact
+ are 0, meaning no maximum or exact length restriction.
+ """
+ def __init__( self, notChars, min=1, max=0, exact=0 ):
+ super(CharsNotIn,self).__init__()
+ self.skipWhitespace = False
+ self.notChars = notChars
+
+ if min < 1:
+ raise ValueError("cannot specify a minimum length < 1; use Optional(CharsNotIn()) if zero-length char group is permitted")
+
+ self.minLen = min
+
+ if max > 0:
+ self.maxLen = max
+ else:
+ self.maxLen = _MAX_INT
+
+ if exact > 0:
+ self.maxLen = exact
+ self.minLen = exact
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ self.mayReturnEmpty = ( self.minLen == 0 )
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if instring[loc] in self.notChars:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ start = loc
+ loc += 1
+ notchars = self.notChars
+ maxlen = min( start+self.maxLen, len(instring) )
+        while loc < maxlen and instring[loc] not in notchars:
+ loc += 1
+
+ if loc - start < self.minLen:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ return loc, instring[start:loc]
+
+ def __str__( self ):
+ try:
+ return super(CharsNotIn, self).__str__()
+        except Exception:
+ pass
+
+ if self.strRepr is None:
+ if len(self.notChars) > 4:
+ self.strRepr = "!W:(%s...)" % self.notChars[:4]
+ else:
+ self.strRepr = "!W:(%s)" % self.notChars
+
+ return self.strRepr
+
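The scanning loop in CharsNotIn.parseImpl above can be read as the following standalone sketch (a minimal illustration with made-up names, not part of pyparsing's API): scan forward from `loc`, collecting characters outside the excluded set, and enforce the minimum/maximum length bounds.

```python
def match_chars_not_in(instring, loc, not_chars, min_len=1, max_len=None):
    # Scan forward while the current character is not in the excluded set,
    # stopping after max_len characters or at end of string.
    if max_len is None:
        max_len = len(instring)
    start = loc
    end = min(start + max_len, len(instring))
    while loc < end and instring[loc] not in not_chars:
        loc += 1
    if loc - start < min_len:
        raise ValueError("expected at least %d chars not in %r at position %d"
                         % (min_len, not_chars, start))
    return loc, instring[start:loc]
```

For example, `match_chars_not_in("abc,def", 0, ",")` stops at the comma and returns `(3, 'abc')`, mirroring how parseImpl returns the new location plus the matched slice.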
+class White(Token):
+ """Special matching class for matching whitespace. Normally, whitespace is ignored
+ by pyparsing grammars. This class is included when some whitespace structures
+ are significant. Define with a string containing the whitespace characters to be
+ matched; default is " \\t\\r\\n". Also takes optional min, max, and exact arguments,
+ as defined for the Word class."""
+ whiteStrs = {
+ " " : "<SPC>",
+ "\t": "<TAB>",
+ "\n": "<LF>",
+ "\r": "<CR>",
+ "\f": "<FF>",
+ }
+ def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0):
+ super(White,self).__init__()
+ self.matchWhite = ws
+ self.setWhitespaceChars( "".join([c for c in self.whiteChars if c not in self.matchWhite]) )
+ #~ self.leaveWhitespace()
+ self.name = ("".join([White.whiteStrs[c] for c in self.matchWhite]))
+ self.mayReturnEmpty = True
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+
+ self.minLen = min
+
+ if max > 0:
+ self.maxLen = max
+ else:
+ self.maxLen = _MAX_INT
+
+ if exact > 0:
+ self.maxLen = exact
+ self.minLen = exact
+
+ def parseImpl( self, instring, loc, doActions=True ):
+        if instring[loc] not in self.matchWhite:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ start = loc
+ loc += 1
+ maxloc = start + self.maxLen
+ maxloc = min( maxloc, len(instring) )
+ while loc < maxloc and instring[loc] in self.matchWhite:
+ loc += 1
+
+ if loc - start < self.minLen:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ return loc, instring[start:loc]
+
+
+class _PositionToken(Token):
+ def __init__( self ):
+ super(_PositionToken,self).__init__()
+ self.name=self.__class__.__name__
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+
+class GoToColumn(_PositionToken):
+ """Token to advance to a specific column of input text; useful for tabular report scraping."""
+ def __init__( self, colno ):
+ super(GoToColumn,self).__init__()
+ self.col = colno
+
+ def preParse( self, instring, loc ):
+ if col(loc,instring) != self.col:
+ instrlen = len(instring)
+ if self.ignoreExprs:
+ loc = self._skipIgnorables( instring, loc )
+ while loc < instrlen and instring[loc].isspace() and col( loc, instring ) != self.col :
+ loc += 1
+ return loc
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ thiscol = col( loc, instring )
+ if thiscol > self.col:
+ raise ParseException( instring, loc, "Text not in expected column", self )
+ newloc = loc + self.col - thiscol
+ ret = instring[ loc: newloc ]
+ return newloc, ret
+
+class LineStart(_PositionToken):
+ """Matches if current position is at the beginning of a line within the parse string"""
+ def __init__( self ):
+ super(LineStart,self).__init__()
+ self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") )
+ self.errmsg = "Expected start of line"
+ #self.myException.msg = self.errmsg
+
+ def preParse( self, instring, loc ):
+ preloc = super(LineStart,self).preParse(instring,loc)
+ if instring[preloc] == "\n":
+ loc += 1
+ return loc
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if not( loc==0 or
+ (loc == self.preParse( instring, 0 )) or
+ (instring[loc-1] == "\n") ): #col(loc, instring) != 1:
+ #~ raise ParseException( instring, loc, "Expected start of line" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+class LineEnd(_PositionToken):
+ """Matches if current position is at the end of a line within the parse string"""
+ def __init__( self ):
+ super(LineEnd,self).__init__()
+ self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") )
+ self.errmsg = "Expected end of line"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if loc<len(instring):
+ if instring[loc] == "\n":
+ return loc+1, "\n"
+ else:
+ #~ raise ParseException( instring, loc, "Expected end of line" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ elif loc == len(instring):
+ return loc+1, []
+ else:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class StringStart(_PositionToken):
+ """Matches if current position is at the beginning of the parse string"""
+ def __init__( self ):
+ super(StringStart,self).__init__()
+ self.errmsg = "Expected start of text"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if loc != 0:
+ # see if entire string up to here is just whitespace and ignoreables
+ if loc != self.preParse( instring, 0 ):
+ #~ raise ParseException( instring, loc, "Expected start of text" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+class StringEnd(_PositionToken):
+ """Matches if current position is at the end of the parse string"""
+ def __init__( self ):
+ super(StringEnd,self).__init__()
+ self.errmsg = "Expected end of text"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if loc < len(instring):
+ #~ raise ParseException( instring, loc, "Expected end of text" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ elif loc == len(instring):
+ return loc+1, []
+ elif loc > len(instring):
+ return loc, []
+ else:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class WordStart(_PositionToken):
+ """Matches if the current position is at the beginning of a Word, and
+ is not preceded by any character in a given set of wordChars
+    (default=printables). To emulate the \\b behavior of regular expressions,
+ use WordStart(alphanums). WordStart will also match at the beginning of
+ the string being parsed, or at the beginning of a line.
+ """
+ def __init__(self, wordChars = printables):
+ super(WordStart,self).__init__()
+ self.wordChars = _str2dict(wordChars)
+ self.errmsg = "Not at the start of a word"
+
+ def parseImpl(self, instring, loc, doActions=True ):
+ if loc != 0:
+ if (instring[loc-1] in self.wordChars or
+ instring[loc] not in self.wordChars):
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+class WordEnd(_PositionToken):
+ """Matches if the current position is at the end of a Word, and
+ is not followed by any character in a given set of wordChars
+    (default=printables). To emulate the \\b behavior of regular expressions,
+ use WordEnd(alphanums). WordEnd will also match at the end of
+ the string being parsed, or at the end of a line.
+ """
+ def __init__(self, wordChars = printables):
+ super(WordEnd,self).__init__()
+ self.wordChars = _str2dict(wordChars)
+ self.skipWhitespace = False
+ self.errmsg = "Not at the end of a word"
+
+ def parseImpl(self, instring, loc, doActions=True ):
+ instrlen = len(instring)
+ if instrlen>0 and loc<instrlen:
+ if (instring[loc] in self.wordChars or
+ instring[loc-1] not in self.wordChars):
+ #~ raise ParseException( instring, loc, "Expected end of word" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+
+class ParseExpression(ParserElement):
+ """Abstract subclass of ParserElement, for combining and post-processing parsed tokens."""
+ def __init__( self, exprs, savelist = False ):
+ super(ParseExpression,self).__init__(savelist)
+ if isinstance( exprs, list ):
+ self.exprs = exprs
+ elif isinstance( exprs, basestring ):
+ self.exprs = [ Literal( exprs ) ]
+ else:
+ try:
+ self.exprs = list( exprs )
+ except TypeError:
+ self.exprs = [ exprs ]
+ self.callPreparse = False
+
+ def __getitem__( self, i ):
+ return self.exprs[i]
+
+ def append( self, other ):
+ self.exprs.append( other )
+ self.strRepr = None
+ return self
+
+ def leaveWhitespace( self ):
+ """Extends leaveWhitespace defined in base class, and also invokes leaveWhitespace on
+ all contained expressions."""
+ self.skipWhitespace = False
+ self.exprs = [ e.copy() for e in self.exprs ]
+ for e in self.exprs:
+ e.leaveWhitespace()
+ return self
+
+ def ignore( self, other ):
+ if isinstance( other, Suppress ):
+ if other not in self.ignoreExprs:
+ super( ParseExpression, self).ignore( other )
+ for e in self.exprs:
+ e.ignore( self.ignoreExprs[-1] )
+ else:
+ super( ParseExpression, self).ignore( other )
+ for e in self.exprs:
+ e.ignore( self.ignoreExprs[-1] )
+ return self
+
+ def __str__( self ):
+ try:
+ return super(ParseExpression,self).__str__()
+        except Exception:
+ pass
+
+ if self.strRepr is None:
+ self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.exprs) )
+ return self.strRepr
+
+ def streamline( self ):
+ super(ParseExpression,self).streamline()
+
+ for e in self.exprs:
+ e.streamline()
+
+ # collapse nested And's of the form And( And( And( a,b), c), d) to And( a,b,c,d )
+ # but only if there are no parse actions or resultsNames on the nested And's
+ # (likewise for Or's and MatchFirst's)
+ if ( len(self.exprs) == 2 ):
+ other = self.exprs[0]
+ if ( isinstance( other, self.__class__ ) and
+ not(other.parseAction) and
+ other.resultsName is None and
+ not other.debug ):
+ self.exprs = other.exprs[:] + [ self.exprs[1] ]
+ self.strRepr = None
+ self.mayReturnEmpty |= other.mayReturnEmpty
+ self.mayIndexError |= other.mayIndexError
+
+ other = self.exprs[-1]
+ if ( isinstance( other, self.__class__ ) and
+ not(other.parseAction) and
+ other.resultsName is None and
+ not other.debug ):
+ self.exprs = self.exprs[:-1] + other.exprs[:]
+ self.strRepr = None
+ self.mayReturnEmpty |= other.mayReturnEmpty
+ self.mayIndexError |= other.mayIndexError
+
+ return self
+
+ def setResultsName( self, name, listAllMatches=False ):
+ ret = super(ParseExpression,self).setResultsName(name,listAllMatches)
+ return ret
+
+ def validate( self, validateTrace=[] ):
+ tmp = validateTrace[:]+[self]
+ for e in self.exprs:
+ e.validate(tmp)
+ self.checkRecursion( [] )
+
+class And(ParseExpression):
+ """Requires all given ParseExpressions to be found in the given order.
+ Expressions may be separated by whitespace.
+ May be constructed using the '+' operator.
+ """
+
+ class _ErrorStop(Empty):
+ def __init__(self, *args, **kwargs):
+            super(And._ErrorStop,self).__init__(*args, **kwargs)
+ self.leaveWhitespace()
+
+ def __init__( self, exprs, savelist = True ):
+ super(And,self).__init__(exprs, savelist)
+ self.mayReturnEmpty = True
+ for e in self.exprs:
+ if not e.mayReturnEmpty:
+ self.mayReturnEmpty = False
+ break
+ self.setWhitespaceChars( exprs[0].whiteChars )
+ self.skipWhitespace = exprs[0].skipWhitespace
+ self.callPreparse = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ # pass False as last arg to _parse for first element, since we already
+ # pre-parsed the string as part of our And pre-parsing
+ loc, resultlist = self.exprs[0]._parse( instring, loc, doActions, callPreParse=False )
+ errorStop = False
+ for e in self.exprs[1:]:
+ if isinstance(e, And._ErrorStop):
+ errorStop = True
+ continue
+ if errorStop:
+ try:
+ loc, exprtokens = e._parse( instring, loc, doActions )
+ except ParseSyntaxException:
+ raise
+ except ParseBaseException, pe:
+ raise ParseSyntaxException(pe)
+ except IndexError, ie:
+ raise ParseSyntaxException( ParseException(instring, len(instring), self.errmsg, self) )
+ else:
+ loc, exprtokens = e._parse( instring, loc, doActions )
+ if exprtokens or exprtokens.keys():
+ resultlist += exprtokens
+ return loc, resultlist
+
+ def __iadd__(self, other ):
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ return self.append( other ) #And( [ self, other ] )
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+ if not e.mayReturnEmpty:
+ break
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
+
+class Or(ParseExpression):
+ """Requires that at least one ParseExpression is found.
+ If two expressions match, the expression that matches the longest string will be used.
+ May be constructed using the '^' operator.
+ """
+ def __init__( self, exprs, savelist = False ):
+ super(Or,self).__init__(exprs, savelist)
+ self.mayReturnEmpty = False
+ for e in self.exprs:
+ if e.mayReturnEmpty:
+ self.mayReturnEmpty = True
+ break
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ maxExcLoc = -1
+ maxMatchLoc = -1
+ maxException = None
+ for e in self.exprs:
+ try:
+ loc2 = e.tryParse( instring, loc )
+ except ParseException, err:
+ if err.loc > maxExcLoc:
+ maxException = err
+ maxExcLoc = err.loc
+ except IndexError:
+ if len(instring) > maxExcLoc:
+ maxException = ParseException(instring,len(instring),e.errmsg,self)
+ maxExcLoc = len(instring)
+ else:
+ if loc2 > maxMatchLoc:
+ maxMatchLoc = loc2
+ maxMatchExp = e
+
+ if maxMatchLoc < 0:
+ if maxException is not None:
+ raise maxException
+ else:
+ raise ParseException(instring, loc, "no defined alternatives to match", self)
+
+ return maxMatchExp._parse( instring, loc, doActions )
+
+ def __ixor__(self, other ):
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ return self.append( other ) #Or( [ self, other ] )
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " ^ ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+
+
+class MatchFirst(ParseExpression):
+ """Requires that at least one ParseExpression is found.
+ If two expressions match, the first one listed is the one that will match.
+ May be constructed using the '|' operator.
+ """
+ def __init__( self, exprs, savelist = False ):
+ super(MatchFirst,self).__init__(exprs, savelist)
+ if exprs:
+ self.mayReturnEmpty = False
+ for e in self.exprs:
+ if e.mayReturnEmpty:
+ self.mayReturnEmpty = True
+ break
+ else:
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ maxExcLoc = -1
+ maxException = None
+ for e in self.exprs:
+ try:
+ ret = e._parse( instring, loc, doActions )
+ return ret
+ except ParseException, err:
+ if err.loc > maxExcLoc:
+ maxException = err
+ maxExcLoc = err.loc
+ except IndexError:
+ if len(instring) > maxExcLoc:
+ maxException = ParseException(instring,len(instring),e.errmsg,self)
+ maxExcLoc = len(instring)
+
+ # only got here if no expression matched, raise exception for match that made it the furthest
+ else:
+ if maxException is not None:
+ raise maxException
+ else:
+ raise ParseException(instring, loc, "no defined alternatives to match", self)
+
+ def __ior__(self, other ):
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ return self.append( other ) #MatchFirst( [ self, other ] )
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " | ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+
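The difference between Or (longest match) and MatchFirst (first match) above can be illustrated with plain regular expressions standing in for ParserElements; `match_first` and `match_longest` are hypothetical helpers, not pyparsing API:

```python
import re

def match_first(patterns, s, loc=0):
    # MatchFirst semantics: return the first alternative that matches at loc.
    for p in patterns:
        m = re.compile(p).match(s, loc)
        if m:
            return p, m.group()
    raise ValueError("no defined alternatives to match")

def match_longest(patterns, s, loc=0):
    # Or semantics: try every alternative, keep the one with the longest match.
    best = None
    for p in patterns:
        m = re.compile(p).match(s, loc)
        if m and (best is None or m.end() > best[1].end()):
            best = (p, m)
    if best is None:
        raise ValueError("no defined alternatives to match")
    return best[0], best[1].group()
```

On `"abc123"` with alternatives `["[a-z]+", "[a-z0-9]+"]`, `match_first` stops at `'abc'` because the first pattern already matches, while `match_longest` prefers `'abc123'`.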
+
+class Each(ParseExpression):
+ """Requires all given ParseExpressions to be found, but in any order.
+ Expressions may be separated by whitespace.
+ May be constructed using the '&' operator.
+ """
+ def __init__( self, exprs, savelist = True ):
+ super(Each,self).__init__(exprs, savelist)
+ self.mayReturnEmpty = True
+ for e in self.exprs:
+ if not e.mayReturnEmpty:
+ self.mayReturnEmpty = False
+ break
+ self.skipWhitespace = True
+ self.initExprGroups = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.initExprGroups:
+ self.optionals = [ e.expr for e in self.exprs if isinstance(e,Optional) ]
+ self.multioptionals = [ e.expr for e in self.exprs if isinstance(e,ZeroOrMore) ]
+ self.multirequired = [ e.expr for e in self.exprs if isinstance(e,OneOrMore) ]
+ self.required = [ e for e in self.exprs if not isinstance(e,(Optional,ZeroOrMore,OneOrMore)) ]
+ self.required += self.multirequired
+ self.initExprGroups = False
+ tmpLoc = loc
+ tmpReqd = self.required[:]
+ tmpOpt = self.optionals[:]
+ matchOrder = []
+
+ keepMatching = True
+ while keepMatching:
+ tmpExprs = tmpReqd + tmpOpt + self.multioptionals + self.multirequired
+ failed = []
+ for e in tmpExprs:
+ try:
+ tmpLoc = e.tryParse( instring, tmpLoc )
+ except ParseException:
+ failed.append(e)
+ else:
+ matchOrder.append(e)
+ if e in tmpReqd:
+ tmpReqd.remove(e)
+ elif e in tmpOpt:
+ tmpOpt.remove(e)
+ if len(failed) == len(tmpExprs):
+ keepMatching = False
+
+ if tmpReqd:
+ missing = ", ".join( [ _ustr(e) for e in tmpReqd ] )
+ raise ParseException(instring,loc,"Missing one or more required elements (%s)" % missing )
+
+ # add any unmatched Optionals, in case they have default values defined
+ matchOrder += [e for e in self.exprs if isinstance(e,Optional) and e.expr in tmpOpt]
+
+ resultlist = []
+ for e in matchOrder:
+ loc,results = e._parse(instring,loc,doActions)
+ resultlist.append(results)
+
+ finalResults = ParseResults([])
+ for r in resultlist:
+ dups = {}
+ for k in r.keys():
+ if k in finalResults.keys():
+ tmp = ParseResults(finalResults[k])
+ tmp += ParseResults(r[k])
+ dups[k] = tmp
+ finalResults += ParseResults(r)
+ for k,v in dups.items():
+ finalResults[k] = v
+ return loc, finalResults
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " & ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+
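Each.parseImpl above keeps sweeping the remaining expressions until a full pass makes no progress; the strategy can be sketched with named regexes standing in for ParserElements (illustrative names, not pyparsing API):

```python
import re

def match_each(matchers, s):
    # matchers: dict of name -> regex; all must match, in any input order.
    loc = 0
    remaining = dict(matchers)
    order = []
    progress = True
    while remaining and progress:
        progress = False
        for name, pat in list(remaining.items()):
            m = re.compile(pat).match(s, loc)
            if m:
                order.append((name, m.group()))
                loc = m.end()
                del remaining[name]
                progress = True
    if remaining:
        raise ValueError("missing one or more required elements (%s)"
                         % ", ".join(sorted(remaining)))
    return loc, order
```

Parsing `"hello 42"` against `{"num": r"\d+\s*", "word": r"[a-z]+\s*"}` succeeds even though the declared order differs from the input order.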
+
+class ParseElementEnhance(ParserElement):
+ """Abstract subclass of ParserElement, for combining and post-processing parsed tokens."""
+ def __init__( self, expr, savelist=False ):
+ super(ParseElementEnhance,self).__init__(savelist)
+ if isinstance( expr, basestring ):
+ expr = Literal(expr)
+ self.expr = expr
+ self.strRepr = None
+ if expr is not None:
+ self.mayIndexError = expr.mayIndexError
+ self.mayReturnEmpty = expr.mayReturnEmpty
+ self.setWhitespaceChars( expr.whiteChars )
+ self.skipWhitespace = expr.skipWhitespace
+ self.saveAsList = expr.saveAsList
+ self.callPreparse = expr.callPreparse
+ self.ignoreExprs.extend(expr.ignoreExprs)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.expr is not None:
+ return self.expr._parse( instring, loc, doActions, callPreParse=False )
+ else:
+ raise ParseException("",loc,self.errmsg,self)
+
+    def leaveWhitespace( self ):
+        self.skipWhitespace = False
+        if self.expr is not None:
+            self.expr = self.expr.copy()
+            self.expr.leaveWhitespace()
+        return self
+
+ def ignore( self, other ):
+ if isinstance( other, Suppress ):
+ if other not in self.ignoreExprs:
+ super( ParseElementEnhance, self).ignore( other )
+ if self.expr is not None:
+ self.expr.ignore( self.ignoreExprs[-1] )
+ else:
+ super( ParseElementEnhance, self).ignore( other )
+ if self.expr is not None:
+ self.expr.ignore( self.ignoreExprs[-1] )
+ return self
+
+ def streamline( self ):
+ super(ParseElementEnhance,self).streamline()
+ if self.expr is not None:
+ self.expr.streamline()
+ return self
+
+ def checkRecursion( self, parseElementList ):
+ if self in parseElementList:
+ raise RecursiveGrammarException( parseElementList+[self] )
+ subRecCheckList = parseElementList[:] + [ self ]
+ if self.expr is not None:
+ self.expr.checkRecursion( subRecCheckList )
+
+ def validate( self, validateTrace=[] ):
+ tmp = validateTrace[:]+[self]
+ if self.expr is not None:
+ self.expr.validate(tmp)
+ self.checkRecursion( [] )
+
+ def __str__( self ):
+ try:
+ return super(ParseElementEnhance,self).__str__()
+        except Exception:
+ pass
+
+ if self.strRepr is None and self.expr is not None:
+ self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.expr) )
+ return self.strRepr
+
+
+class FollowedBy(ParseElementEnhance):
+ """Lookahead matching of the given parse expression. FollowedBy
+ does *not* advance the parsing position within the input string, it only
+ verifies that the specified parse expression matches at the current
+ position. FollowedBy always returns a null token list."""
+ def __init__( self, expr ):
+ super(FollowedBy,self).__init__(expr)
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ self.expr.tryParse( instring, loc )
+ return loc, []
+
+
+class NotAny(ParseElementEnhance):
+ """Lookahead to disallow matching with the given parse expression. NotAny
+ does *not* advance the parsing position within the input string, it only
+ verifies that the specified parse expression does *not* match at the current
+ position. Also, NotAny does *not* skip over leading whitespace. NotAny
+ always returns a null token list. May be constructed using the '~' operator."""
+ def __init__( self, expr ):
+ super(NotAny,self).__init__(expr)
+ #~ self.leaveWhitespace()
+ self.skipWhitespace = False # do NOT use self.leaveWhitespace(), don't want to propagate to exprs
+ self.mayReturnEmpty = True
+ self.errmsg = "Found unwanted token, "+_ustr(self.expr)
+ #self.myException = ParseException("",0,self.errmsg,self)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ try:
+ self.expr.tryParse( instring, loc )
+ except (ParseException,IndexError):
+ pass
+ else:
+ #~ raise ParseException(instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "~{" + _ustr(self.expr) + "}"
+
+ return self.strRepr
+
+
+class ZeroOrMore(ParseElementEnhance):
+ """Optional repetition of zero or more of the given expression."""
+ def __init__( self, expr ):
+ super(ZeroOrMore,self).__init__(expr)
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ tokens = []
+ try:
+ loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )
+ hasIgnoreExprs = ( len(self.ignoreExprs) > 0 )
+ while 1:
+ if hasIgnoreExprs:
+ preloc = self._skipIgnorables( instring, loc )
+ else:
+ preloc = loc
+ loc, tmptokens = self.expr._parse( instring, preloc, doActions )
+ if tmptokens or tmptokens.keys():
+ tokens += tmptokens
+ except (ParseException,IndexError):
+ pass
+
+ return loc, tokens
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "[" + _ustr(self.expr) + "]..."
+
+ return self.strRepr
+
+ def setResultsName( self, name, listAllMatches=False ):
+ ret = super(ZeroOrMore,self).setResultsName(name,listAllMatches)
+ ret.saveAsList = True
+ return ret
+
+
+class OneOrMore(ParseElementEnhance):
+ """Repetition of one or more of the given expression."""
+ def parseImpl( self, instring, loc, doActions=True ):
+ # must be at least one
+ loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )
+ try:
+ hasIgnoreExprs = ( len(self.ignoreExprs) > 0 )
+ while 1:
+ if hasIgnoreExprs:
+ preloc = self._skipIgnorables( instring, loc )
+ else:
+ preloc = loc
+ loc, tmptokens = self.expr._parse( instring, preloc, doActions )
+ if tmptokens or tmptokens.keys():
+ tokens += tmptokens
+ except (ParseException,IndexError):
+ pass
+
+ return loc, tokens
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + _ustr(self.expr) + "}..."
+
+ return self.strRepr
+
+ def setResultsName( self, name, listAllMatches=False ):
+ ret = super(OneOrMore,self).setResultsName(name,listAllMatches)
+ ret.saveAsList = True
+ return ret
+
+class _NullToken(object):
+ def __bool__(self):
+ return False
+ __nonzero__ = __bool__
+ def __str__(self):
+ return ""
+
+_optionalNotMatched = _NullToken()
+class Optional(ParseElementEnhance):
+ """Optional matching of the given expression.
+    A default return string can also be specified, to be returned if the
+    optional expression is not found.
+ """
+ def __init__( self, exprs, default=_optionalNotMatched ):
+ super(Optional,self).__init__( exprs, savelist=False )
+ self.defaultValue = default
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ try:
+ loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )
+ except (ParseException,IndexError):
+ if self.defaultValue is not _optionalNotMatched:
+ if self.expr.resultsName:
+ tokens = ParseResults([ self.defaultValue ])
+ tokens[self.expr.resultsName] = self.defaultValue
+ else:
+ tokens = [ self.defaultValue ]
+ else:
+ tokens = []
+ return loc, tokens
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "[" + _ustr(self.expr) + "]"
+
+ return self.strRepr
+
+
+class SkipTo(ParseElementEnhance):
+ """Token for skipping over all undefined text until the matched expression is found.
+ If include is set to true, the matched expression is also parsed (the skipped text
+ and matched expression are returned as a 2-element list). The ignore
+ argument is used to define grammars (typically quoted strings and comments) that
+ might contain false matches.
+ """
+ def __init__( self, other, include=False, ignore=None, failOn=None ):
+ super( SkipTo, self ).__init__( other )
+ self.ignoreExpr = ignore
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+ self.includeMatch = include
+ self.asList = False
+ if failOn is not None and isinstance(failOn, basestring):
+ self.failOn = Literal(failOn)
+ else:
+ self.failOn = failOn
+ self.errmsg = "No match found for "+_ustr(self.expr)
+ #self.myException = ParseException("",0,self.errmsg,self)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ startLoc = loc
+ instrlen = len(instring)
+ expr = self.expr
+ failParse = False
+ while loc <= instrlen:
+ try:
+ if self.failOn:
+ try:
+ self.failOn.tryParse(instring, loc)
+ except ParseBaseException:
+ pass
+ else:
+ failParse = True
+ raise ParseException(instring, loc, "Found expression " + str(self.failOn))
+ failParse = False
+ if self.ignoreExpr is not None:
+ while 1:
+ try:
+ loc = self.ignoreExpr.tryParse(instring,loc)
+ except ParseBaseException:
+ break
+ expr._parse( instring, loc, doActions=False, callPreParse=False )
+ skipText = instring[startLoc:loc]
+ if self.includeMatch:
+ loc,mat = expr._parse(instring,loc,doActions,callPreParse=False)
+ if mat:
+ skipRes = ParseResults( skipText )
+ skipRes += mat
+ return loc, [ skipRes ]
+ else:
+ return loc, [ skipText ]
+ else:
+ return loc, [ skipText ]
+ except (ParseException,IndexError):
+ if failParse:
+ raise
+ else:
+ loc += 1
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
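The scan in SkipTo.parseImpl above advances one character at a time until the target expression matches, optionally aborting if a failOn expression is seen first. A minimal regex-based sketch (hypothetical helper, not pyparsing API):

```python
import re

def skip_to(target, s, start=0, fail_on=None):
    # Advance until `target` matches; fail early if `fail_on` matches first.
    t = re.compile(target)
    f = re.compile(fail_on) if fail_on is not None else None
    loc = start
    while loc <= len(s):
        if f is not None and f.match(s, loc):
            raise ValueError("found fail_on expression at position %d" % loc)
        if t.match(s, loc):
            return loc, s[start:loc]  # location of the match, skipped text
        loc += 1
    raise ValueError("no match found for target expression")
```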
+class Forward(ParseElementEnhance):
+ """Forward declaration of an expression to be defined later -
+ used for recursive grammars, such as algebraic infix notation.
+ When the expression is known, it is assigned to the Forward variable using the '<<' operator.
+
+ Note: take care when assigning to Forward not to overlook precedence of operators.
+ Specifically, '|' has a lower precedence than '<<', so that::
+ fwdExpr << a | b | c
+ will actually be evaluated as::
+ (fwdExpr << a) | b | c
+ thereby leaving b and c out as parseable alternatives. It is recommended that you
+ explicitly group the values inserted into the Forward::
+ fwdExpr << (a | b | c)
+ """
+ def __init__( self, other=None ):
+ super(Forward,self).__init__( other, savelist=False )
+
+ def __lshift__( self, other ):
+ if isinstance( other, basestring ):
+ other = Literal(other)
+ self.expr = other
+ self.mayReturnEmpty = other.mayReturnEmpty
+ self.strRepr = None
+ self.mayIndexError = self.expr.mayIndexError
+ self.mayReturnEmpty = self.expr.mayReturnEmpty
+ self.setWhitespaceChars( self.expr.whiteChars )
+ self.skipWhitespace = self.expr.skipWhitespace
+ self.saveAsList = self.expr.saveAsList
+ self.ignoreExprs.extend(self.expr.ignoreExprs)
+ return None
+
+ def leaveWhitespace( self ):
+ self.skipWhitespace = False
+ return self
+
+ def streamline( self ):
+ if not self.streamlined:
+ self.streamlined = True
+ if self.expr is not None:
+ self.expr.streamline()
+ return self
+
+ def validate( self, validateTrace=[] ):
+ if self not in validateTrace:
+ tmp = validateTrace[:]+[self]
+ if self.expr is not None:
+ self.expr.validate(tmp)
+ self.checkRecursion([])
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ self._revertClass = self.__class__
+ self.__class__ = _ForwardNoRecurse
+ try:
+ if self.expr is not None:
+ retString = _ustr(self.expr)
+ else:
+ retString = "None"
+ finally:
+ self.__class__ = self._revertClass
+ return self.__class__.__name__ + ": " + retString
+
+ def copy(self):
+ if self.expr is not None:
+ return super(Forward,self).copy()
+ else:
+ ret = Forward()
+ ret << self
+ return ret
+
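Forward exists so a rule can refer to itself before it is fully defined; the same recursion can be sketched directly as a hand-written recursive-descent parser for balanced parentheses (illustrative code, not pyparsing API):

```python
def parse_nested(s, loc=0):
    # Grammar: expr := '(' expr* ')'; returns (new location, list of children).
    if loc >= len(s) or s[loc] != '(':
        raise ValueError("expected '(' at position %d" % loc)
    loc += 1
    children = []
    while loc < len(s) and s[loc] == '(':
        loc, child = parse_nested(s, loc)
        children.append(child)
    if loc >= len(s) or s[loc] != ')':
        raise ValueError("expected ')' at position %d" % loc)
    return loc + 1, children
```

With pyparsing itself, the equivalent self-reference would be supplied by a Forward, e.g. `nested = Forward(); nested << (Literal('(') + ZeroOrMore(nested) + Literal(')'))`.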
+class _ForwardNoRecurse(Forward):
+ def __str__( self ):
+ return "..."
+
+class TokenConverter(ParseElementEnhance):
+    """Abstract subclass of ParseElementEnhance, for converting parsed results."""
+ def __init__( self, expr, savelist=False ):
+        super(TokenConverter,self).__init__( expr )
+ self.saveAsList = False
+
+class Upcase(TokenConverter):
+ """Converter to upper case all matching tokens."""
+ def __init__(self, *args):
+ super(Upcase,self).__init__(*args)
+ warnings.warn("Upcase class is deprecated, use upcaseTokens parse action instead",
+ DeprecationWarning,stacklevel=2)
+
+ def postParse( self, instring, loc, tokenlist ):
+ return list(map( string.upper, tokenlist ))
+
+
+class Combine(TokenConverter):
+ """Converter to concatenate all matching tokens to a single string.
+ By default, the matching patterns must also be contiguous in the input string;
+ this can be disabled by specifying 'adjacent=False' in the constructor.
+ """
+ def __init__( self, expr, joinString="", adjacent=True ):
+ super(Combine,self).__init__( expr )
+ # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself
+ if adjacent:
+ self.leaveWhitespace()
+ self.adjacent = adjacent
+ self.skipWhitespace = True
+ self.joinString = joinString
+
+ def ignore( self, other ):
+ if self.adjacent:
+ ParserElement.ignore(self, other)
+ else:
+ super( Combine, self).ignore( other )
+ return self
+
+ def postParse( self, instring, loc, tokenlist ):
+ retToks = tokenlist.copy()
+ del retToks[:]
+ retToks += ParseResults([ "".join(tokenlist._asStringList(self.joinString)) ], modal=self.modalResults)
+
+ if self.resultsName and len(retToks.keys())>0:
+ return [ retToks ]
+ else:
+ return retToks
+
+class Group(TokenConverter):
+ """Converter to return the matched tokens as a list - useful for returning tokens of ZeroOrMore and OneOrMore expressions."""
+ def __init__( self, expr ):
+ super(Group,self).__init__( expr )
+ self.saveAsList = True
+
+ def postParse( self, instring, loc, tokenlist ):
+ return [ tokenlist ]
+
+class Dict(TokenConverter):
+ """Converter to return a repetitive expression as a list, but also as a dictionary.
+ Each element can also be referenced using the first token in the expression as its key.
+ Useful for tabular report scraping when the first column can be used as an item key.
+ """
+ def __init__( self, exprs ):
+ super(Dict,self).__init__( exprs )
+ self.saveAsList = True
+
+ def postParse( self, instring, loc, tokenlist ):
+ for i,tok in enumerate(tokenlist):
+ if len(tok) == 0:
+ continue
+ ikey = tok[0]
+ if isinstance(ikey,int):
+ ikey = _ustr(tok[0]).strip()
+ if len(tok)==1:
+ tokenlist[ikey] = _ParseResultsWithOffset("",i)
+ elif len(tok)==2 and not isinstance(tok[1],ParseResults):
+ tokenlist[ikey] = _ParseResultsWithOffset(tok[1],i)
+ else:
+ dictvalue = tok.copy() #ParseResults(i)
+ del dictvalue[0]
+ if len(dictvalue)!= 1 or (isinstance(dictvalue,ParseResults) and dictvalue.keys()):
+ tokenlist[ikey] = _ParseResultsWithOffset(dictvalue,i)
+ else:
+ tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0],i)
+
+ if self.resultsName:
+ return [ tokenlist ]
+ else:
+ return tokenlist
+
+
+class Suppress(TokenConverter):
+ """Converter for ignoring the results of a parsed expression."""
+ def postParse( self, instring, loc, tokenlist ):
+ return []
+
+ def suppress( self ):
+ return self
+
+
+class OnlyOnce(object):
+ """Wrapper for parse actions, to ensure they are only called once."""
+ def __init__(self, methodCall):
+ self.callable = ParserElement._normalizeParseActionArgs(methodCall)
+ self.called = False
+ def __call__(self,s,l,t):
+ if not self.called:
+ results = self.callable(s,l,t)
+ self.called = True
+ return results
+ raise ParseException(s,l,"")
+ def reset(self):
+ self.called = False
+
+def traceParseAction(f):
+ """Decorator for debugging parse actions."""
+ f = ParserElement._normalizeParseActionArgs(f)
+ def z(*paArgs):
+ thisFunc = f.func_name
+ s,l,t = paArgs[-3:]
+ if len(paArgs)>3:
+ thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc
+ sys.stderr.write( ">>entering %s(line: '%s', %d, %s)\n" % (thisFunc,line(l,s),l,t) )
+ try:
+ ret = f(*paArgs)
+ except Exception, exc:
+ sys.stderr.write( "<<leaving %s (exception: %s)\n" % (thisFunc,exc) )
+ raise
+ sys.stderr.write( "<<leaving %s (ret: %s)\n" % (thisFunc,ret) )
+ return ret
+ try:
+ z.__name__ = f.__name__
+ except AttributeError:
+ pass
+ return z
+
+#
+# global helpers
+#
+def delimitedList( expr, delim=",", combine=False ):
+ """Helper to define a delimited list of expressions - the delimiter defaults to ','.
+ By default, the list elements and delimiters can have intervening whitespace and
+ comments, but this can be overridden by passing 'combine=True'.
+ If combine is set to True, the matching tokens are returned as a single token
+ string, with the delimiters included; otherwise, the matching tokens are returned
+ as a list of tokens, with the delimiters suppressed.
+ """
+ dlName = _ustr(expr)+" ["+_ustr(delim)+" "+_ustr(expr)+"]..."
+ if combine:
+ return Combine( expr + ZeroOrMore( delim + expr ) ).setName(dlName)
+ else:
+ return ( expr + ZeroOrMore( Suppress( delim ) + expr ) ).setName(dlName)
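For intuition, a rough regex analogue of `delimitedList(Word(alphas))` can be sketched in plain Python (illustrative only, not part of this module; it ignores comments and other pyparsing features):

```python
import re

# Items separated by ',' with optional surrounding whitespace.
item = r"[A-Za-z]+"
pattern = re.compile(r"%s(?:\s*,\s*%s)*" % (item, item))

m = pattern.match("a, b ,c")
print(m.group())                        # whole match, like combine=True
print(re.split(r"\s*,\s*", m.group()))  # item list, like combine=False
```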
+
+def countedArray( expr ):
+ """Helper to define a counted list of expressions.
+ This helper defines a pattern of the form::
+ integer expr expr expr...
+ where the leading integer tells how many expr expressions follow.
+ The matched expression returns the array of expr tokens as a list - the leading count token is suppressed.
+ """
+ arrayExpr = Forward()
+ def countFieldParseAction(s,l,t):
+ n = int(t[0])
+ arrayExpr << (n and Group(And([expr]*n)) or Group(empty))
+ return []
+ return ( Word(nums).setName("arrayLen").setParseAction(countFieldParseAction, callDuringTry=True) + arrayExpr )
+
+def _flatten(L):
+ if type(L) is not list: return [L]
+ if L == []: return L
+ return _flatten(L[0]) + _flatten(L[1:])
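The behavior of the `_flatten` helper above can be seen in isolation with a standalone copy (renamed here purely for illustration):

```python
# Standalone copy of the _flatten helper: recursively flattens
# arbitrarily nested lists into a single flat list.
def flatten(L):
    if type(L) is not list: return [L]
    if L == []: return L
    return flatten(L[0]) + flatten(L[1:])

print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```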
+
+def matchPreviousLiteral(expr):
+ """Helper to define an expression that is indirectly defined from
+ the tokens matched in a previous expression, that is, it looks
+ for a 'repeat' of a previous expression. For example::
+ first = Word(nums)
+ second = matchPreviousLiteral(first)
+ matchExpr = first + ":" + second
+ will match "1:1", but not "1:2". Because this matches a
+ previous literal, it will also match the leading "1:1" in "1:10".
+ If this is not desired, use matchPreviousExpr.
+ Do *not* use with packrat parsing enabled.
+ """
+ rep = Forward()
+ def copyTokenToRepeater(s,l,t):
+ if t:
+ if len(t) == 1:
+ rep << t[0]
+ else:
+ # flatten t tokens
+ tflat = _flatten(t.asList())
+ rep << And( [ Literal(tt) for tt in tflat ] )
+ else:
+ rep << Empty()
+ expr.addParseAction(copyTokenToRepeater, callDuringTry=True)
+ return rep
+
+def matchPreviousExpr(expr):
+ """Helper to define an expression that is indirectly defined from
+ the tokens matched in a previous expression, that is, it looks
+ for a 'repeat' of a previous expression. For example::
+ first = Word(nums)
+ second = matchPreviousExpr(first)
+ matchExpr = first + ":" + second
+ will match "1:1", but not "1:2". Because this matches by
+ expressions, it will *not* match the leading "1:1" in "1:10";
+ the expressions are evaluated first, and then compared, so
+ "1" is compared with "10".
+ Do *not* use with packrat parsing enabled.
+ """
+ rep = Forward()
+ e2 = expr.copy()
+ rep << e2
+ def copyTokenToRepeater(s,l,t):
+ matchTokens = _flatten(t.asList())
+ def mustMatchTheseTokens(s,l,t):
+ theseTokens = _flatten(t.asList())
+ if theseTokens != matchTokens:
+ raise ParseException("",0,"")
+ rep.setParseAction( mustMatchTheseTokens, callDuringTry=True )
+ expr.addParseAction(copyTokenToRepeater, callDuringTry=True)
+ return rep
+
+def _escapeRegexRangeChars(s):
+ #~ escape these chars: ^-]
+ for c in r"\^-]":
+ s = s.replace(c,_bslash+c)
+ s = s.replace("\n",r"\n")
+ s = s.replace("\t",r"\t")
+ return _ustr(s)
+
+def oneOf( strs, caseless=False, useRegex=True ):
+ """Helper to quickly define a set of alternative Literals, making sure to do
+ longest-first testing when there is a conflict, regardless of the input order,
+ while returning a MatchFirst for best performance.
+
+ Parameters:
+ - strs - a string of space-delimited literals, or a list of string literals
+ - caseless - (default=False) - treat all literals as caseless
+ - useRegex - (default=True) - as an optimization, will generate a Regex
+ object; otherwise, will generate a MatchFirst object (if caseless=True, or
+ if creating a Regex raises an exception)
+ """
+ if caseless:
+ isequal = ( lambda a,b: a.upper() == b.upper() )
+ masks = ( lambda a,b: b.upper().startswith(a.upper()) )
+ parseElementClass = CaselessLiteral
+ else:
+ isequal = ( lambda a,b: a == b )
+ masks = ( lambda a,b: b.startswith(a) )
+ parseElementClass = Literal
+
+ if isinstance(strs,(list,tuple)):
+ symbols = list(strs[:])
+ elif isinstance(strs,basestring):
+ symbols = strs.split()
+ else:
+ warnings.warn("Invalid argument to oneOf, expected string or list",
+ SyntaxWarning, stacklevel=2)
+ symbols = []  # fall back to an empty symbol list instead of raising NameError below
+
+ i = 0
+ while i < len(symbols)-1:
+ cur = symbols[i]
+ for j,other in enumerate(symbols[i+1:]):
+ if ( isequal(other, cur) ):
+ del symbols[i+j+1]
+ break
+ elif ( masks(cur, other) ):
+ del symbols[i+j+1]
+ symbols.insert(i,other)
+ cur = other
+ break
+ else:
+ i += 1
+
+ if not caseless and useRegex:
+ #~ print (strs,"->", "|".join( [ _escapeRegexChars(sym) for sym in symbols] ))
+ try:
+ if len(symbols)==len("".join(symbols)):
+ return Regex( "[%s]" % "".join( [ _escapeRegexRangeChars(sym) for sym in symbols] ) )
+ else:
+ return Regex( "|".join( [ re.escape(sym) for sym in symbols] ) )
+ except Exception:
+ warnings.warn("Exception creating Regex for oneOf, building MatchFirst",
+ SyntaxWarning, stacklevel=2)
+
+
+ # last resort, just use MatchFirst
+ return MatchFirst( [ parseElementClass(sym) for sym in symbols ] )
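Why the symbol-reordering loop above matters can be shown with plain `re` (an illustrative sketch, not part of this module): in an alternation, `re` — like MatchFirst — commits to the first branch that matches, so a shorter prefix listed first shadows the longer literal.

```python
import re

words = ["in", "into", "int"]

# Naive order: the short prefix "in" is tried first and wins.
naive = re.compile("|".join(re.escape(w) for w in words))

# Longest-first order, mimicking what oneOf arranges.
longest_first = re.compile(
    "|".join(re.escape(w) for w in sorted(words, key=len, reverse=True)))

print(naive.match("into").group())          # 'in'   - shadowed
print(longest_first.match("into").group())  # 'into' - full literal
```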
+
+def dictOf( key, value ):
+ """Helper to easily and clearly define a dictionary by specifying the respective patterns
+ for the key and value. Takes care of defining the Dict, ZeroOrMore, and Group tokens
+ in the proper order. The key pattern can include delimiting markers or punctuation,
+ as long as they are suppressed, thereby leaving the significant key text. The value
+ pattern can include named results, so that the Dict results can include named token
+ fields.
+ """
+ return Dict( ZeroOrMore( Group ( key + value ) ) )
+
+def originalTextFor(expr, asString=True):
+ """Helper to return the original, untokenized text for a given expression. Useful to
+ restore the parsed fields of an HTML start tag into the raw tag text itself, or to
+ revert separate tokens with intervening whitespace back to the original matching
+ input text. Simpler to use than the parse action keepOriginalText, and does not
+ require the inspect module to chase up the call stack. By default, returns a
+ string containing the original parsed text.
+
+ If the optional asString argument is passed as False, then the return value is a
+ ParseResults containing any results names that were originally matched, and a
+ single token containing the original matched text from the input string. So if
+ the expression passed to originalTextFor contains expressions with defined
+ results names, you must set asString to False if you want to preserve those
+ results name values."""
+ locMarker = Empty().setParseAction(lambda s,loc,t: loc)
+ matchExpr = locMarker("_original_start") + expr + locMarker("_original_end")
+ if asString:
+ extractText = lambda s,l,t: s[t._original_start:t._original_end]
+ else:
+ def extractText(s,l,t):
+ del t[:]
+ t.insert(0, s[t._original_start:t._original_end])
+ del t["_original_start"]
+ del t["_original_end"]
+ matchExpr.setParseAction(extractText)
+ return matchExpr
+
+# convenience constants for positional expressions
+empty = Empty().setName("empty")
+lineStart = LineStart().setName("lineStart")
+lineEnd = LineEnd().setName("lineEnd")
+stringStart = StringStart().setName("stringStart")
+stringEnd = StringEnd().setName("stringEnd")
+
+_escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1])
+_printables_less_backslash = "".join([ c for c in printables if c not in r"\]" ])
+_escapedHexChar = Combine( Suppress(_bslash + "0x") + Word(hexnums) ).setParseAction(lambda s,l,t:unichr(int(t[0],16)))
+_escapedOctChar = Combine( Suppress(_bslash) + Word("0","01234567") ).setParseAction(lambda s,l,t:unichr(int(t[0],8)))
+_singleChar = _escapedPunc | _escapedHexChar | _escapedOctChar | Word(_printables_less_backslash,exact=1)
+_charRange = Group(_singleChar + Suppress("-") + _singleChar)
+_reBracketExpr = Literal("[") + Optional("^").setResultsName("negate") + Group( OneOrMore( _charRange | _singleChar ) ).setResultsName("body") + "]"
+
+_expanded = lambda p: (isinstance(p,ParseResults) and ''.join([ unichr(c) for c in range(ord(p[0]),ord(p[1])+1) ]) or p)
+
+def srange(s):
+ r"""Helper to easily define string ranges for use in Word construction. Borrows
+ syntax from regexp '[]' string range definitions::
+ srange("[0-9]") -> "0123456789"
+ srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz"
+ srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_"
+ The input string must be enclosed in []'s, and the returned string is the expanded
+ character set joined into a single string.
+ The values enclosed in the []'s may be::
+ a single character
+ an escaped character with a leading backslash (such as \- or \])
+ an escaped hex character with a leading '\0x' (\0x21, which is a '!' character)
+ an escaped octal character with a leading '\0' (\041, which is a '!' character)
+ a range of any of the above, separated by a dash ('a-z', etc.)
+ any combination of the above ('aeiouy', 'a-zA-Z0-9_$', etc.)
+ """
+ try:
+ return "".join([_expanded(part) for part in _reBracketExpr.parseString(s).body])
+ except Exception:
+ return ""
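The core idea of `srange` — expanding 'a-z'-style ranges inside a [] body into the full character set — can be sketched in a few lines of plain Python (a simplified illustration: the escaped, `\0x..`, and octal forms handled above are omitted, and `expand_ranges` is a hypothetical name):

```python
import re

# Expand 'a-z'-style ranges in a [] body into the full character set.
def expand_ranges(body):
    out = []
    for lo, hi, single in re.findall(r"(.)-(.)|(.)", body):
        if single:
            out.append(single)                                # plain character
        else:
            out.extend(chr(c) for c in range(ord(lo), ord(hi) + 1))  # range
    return "".join(out)

print(expand_ranges("0-9"))    # '0123456789'
print(expand_ranges("a-c$_"))  # 'abc$_'
```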
+
+def matchOnlyAtCol(n):
+ """Helper method for defining parse actions that require matching at a specific
+ column in the input text.
+ """
+ def verifyCol(strg,locn,toks):
+ if col(locn,strg) != n:
+ raise ParseException(strg,locn,"matched token not at column %d" % n)
+ return verifyCol
+
+def replaceWith(replStr):
+ """Helper method for common parse actions that simply return a literal value. Especially
+ useful when used with transformString().
+ """
+ def _replFunc(*args):
+ return [replStr]
+ return _replFunc
+
+def removeQuotes(s,l,t):
+ """Helper parse action for removing quotation marks from parsed quoted strings.
+ To use, add this parse action to quoted string using::
+ quotedString.setParseAction( removeQuotes )
+ """
+ return t[0][1:-1]
+
+def upcaseTokens(s,l,t):
+ """Helper parse action to convert tokens to upper case."""
+ return [ tt.upper() for tt in map(_ustr,t) ]
+
+def downcaseTokens(s,l,t):
+ """Helper parse action to convert tokens to lower case."""
+ return [ tt.lower() for tt in map(_ustr,t) ]
+
+def keepOriginalText(s,startLoc,t):
+ """Helper parse action to preserve original parsed text,
+ overriding any nested parse actions."""
+ try:
+ endloc = getTokensEndLoc()
+ except ParseException:
+ raise ParseFatalException("incorrect usage of keepOriginalText - may only be called as a parse action")
+ del t[:]
+ t += ParseResults(s[startLoc:endloc])
+ return t
+
+def getTokensEndLoc():
+ """Method to be called from within a parse action to determine the end
+ location of the parsed tokens."""
+ import inspect
+ fstack = inspect.stack()
+ try:
+ # search up the stack (through intervening argument normalizers) for correct calling routine
+ for f in fstack[2:]:
+ if f[3] == "_parseNoCache":
+ endloc = f[0].f_locals["loc"]
+ return endloc
+ else:
+ raise ParseFatalException("incorrect usage of getTokensEndLoc - may only be called from within a parse action")
+ finally:
+ del fstack
+
+def _makeTags(tagStr, xml):
+ """Internal helper to construct opening and closing tag expressions, given a tag name"""
+ if isinstance(tagStr,basestring):
+ resname = tagStr
+ tagStr = Keyword(tagStr, caseless=not xml)
+ else:
+ resname = tagStr.name
+
+ tagAttrName = Word(alphas,alphanums+"_-:")
+ if (xml):
+ tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes )
+ openTag = Suppress("<") + tagStr + \
+ Dict(ZeroOrMore(Group( tagAttrName + Suppress("=") + tagAttrValue ))) + \
+ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")
+ else:
+ printablesLessRAbrack = "".join( [ c for c in printables if c not in ">" ] )
+ tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printablesLessRAbrack)
+ openTag = Suppress("<") + tagStr + \
+ Dict(ZeroOrMore(Group( tagAttrName.setParseAction(downcaseTokens) + \
+ Optional( Suppress("=") + tagAttrValue ) ))) + \
+ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")
+ closeTag = Combine(_L("</") + tagStr + ">")
+
+ openTag = openTag.setResultsName("start"+"".join(resname.replace(":"," ").title().split())).setName("<%s>" % tagStr)
+ closeTag = closeTag.setResultsName("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % tagStr)
+
+ return openTag, closeTag
+
+def makeHTMLTags(tagStr):
+ """Helper to construct opening and closing tag expressions for HTML, given a tag name"""
+ return _makeTags( tagStr, False )
+
+def makeXMLTags(tagStr):
+ """Helper to construct opening and closing tag expressions for XML, given a tag name"""
+ return _makeTags( tagStr, True )
+
+def withAttribute(*args,**attrDict):
+ """Helper to create a validating parse action to be used with start tags created
+ with makeXMLTags or makeHTMLTags. Use withAttribute to qualify a starting tag
+ with a required attribute value, to avoid false matches on common tags such as
+ <TD> or <DIV>.
+
+ Call withAttribute with a series of attribute names and values. Specify the list
+ of filter attribute names and values as:
+ - keyword arguments, as in (class="Customer",align="right"), or
+ - a list of name-value tuples, as in ( ("ns1:class", "Customer"), ("ns2:align","right") )
+ For attribute names with a namespace prefix, you must use the second form. Attribute
+ names are matched case-insensitively.
+
+ To verify that the attribute exists, but without specifying a value, pass
+ withAttribute.ANY_VALUE as the value.
+ """
+ if args:
+ attrs = args[:]
+ else:
+ attrs = attrDict.items()
+ attrs = [(k,v) for k,v in attrs]
+ def pa(s,l,tokens):
+ for attrName,attrValue in attrs:
+ if attrName not in tokens:
+ raise ParseException(s,l,"no matching attribute " + attrName)
+ if attrValue != withAttribute.ANY_VALUE and tokens[attrName] != attrValue:
+ raise ParseException(s,l,"attribute '%s' has value '%s', must be '%s'" %
+ (attrName, tokens[attrName], attrValue))
+ return pa
+withAttribute.ANY_VALUE = object()
+
+opAssoc = _Constants()
+opAssoc.LEFT = object()
+opAssoc.RIGHT = object()
+
+def operatorPrecedence( baseExpr, opList ):
+ """Helper method for constructing grammars of expressions made up of
+ operators working in a precedence hierarchy. Operators may be unary or
+ binary, left- or right-associative. Parse actions can also be attached
+ to operator expressions.
+
+ Parameters:
+ - baseExpr - expression representing the most basic element of the nested expression
+ - opList - list of tuples, one for each operator precedence level in the
+ expression grammar; each tuple is of the form
+ (opExpr, numTerms, rightLeftAssoc, parseAction), where:
+ - opExpr is the pyparsing expression for the operator;
+ may also be a string, which will be converted to a Literal;
+ if numTerms is 3, opExpr is a tuple of two expressions, for the
+ two operators separating the 3 terms
+ - numTerms is the number of terms for this operator (must
+ be 1, 2, or 3)
+ - rightLeftAssoc is the indicator whether the operator is
+ right or left associative, using the pyparsing-defined
+ constants opAssoc.RIGHT and opAssoc.LEFT.
+ - parseAction is the parse action to be associated with
+ expressions matching this operator expression (the
+ parse action tuple member may be omitted)
+ """
+ ret = Forward()
+ lastExpr = baseExpr | ( Suppress('(') + ret + Suppress(')') )
+ for i,operDef in enumerate(opList):
+ opExpr,arity,rightLeftAssoc,pa = (operDef + (None,))[:4]
+ if arity == 3:
+ if opExpr is None or len(opExpr) != 2:
+ raise ValueError("if numTerms=3, opExpr must be a tuple or list of two expressions")
+ opExpr1, opExpr2 = opExpr
+ thisExpr = Forward()#.setName("expr%d" % i)
+ if rightLeftAssoc == opAssoc.LEFT:
+ if arity == 1:
+ matchExpr = FollowedBy(lastExpr + opExpr) + Group( lastExpr + OneOrMore( opExpr ) )
+ elif arity == 2:
+ if opExpr is not None:
+ matchExpr = FollowedBy(lastExpr + opExpr + lastExpr) + Group( lastExpr + OneOrMore( opExpr + lastExpr ) )
+ else:
+ matchExpr = FollowedBy(lastExpr+lastExpr) + Group( lastExpr + OneOrMore(lastExpr) )
+ elif arity == 3:
+ matchExpr = FollowedBy(lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr) + \
+ Group( lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr )
+ else:
+ raise ValueError("operator must be unary (1), binary (2), or ternary (3)")
+ elif rightLeftAssoc == opAssoc.RIGHT:
+ if arity == 1:
+ # try to avoid LR with this extra test
+ if not isinstance(opExpr, Optional):
+ opExpr = Optional(opExpr)
+ matchExpr = FollowedBy(opExpr.expr + thisExpr) + Group( opExpr + thisExpr )
+ elif arity == 2:
+ if opExpr is not None:
+ matchExpr = FollowedBy(lastExpr + opExpr + thisExpr) + Group( lastExpr + OneOrMore( opExpr + thisExpr ) )
+ else:
+ matchExpr = FollowedBy(lastExpr + thisExpr) + Group( lastExpr + OneOrMore( thisExpr ) )
+ elif arity == 3:
+ matchExpr = FollowedBy(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) + \
+ Group( lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr )
+ else:
+ raise ValueError("operator must be unary (1), binary (2), or ternary (3)")
+ else:
+ raise ValueError("operator must indicate right or left associativity")
+ if pa:
+ matchExpr.setParseAction( pa )
+ thisExpr << ( matchExpr | lastExpr )
+ lastExpr = thisExpr
+ ret << lastExpr
+ return ret
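The precedence hierarchy that `operatorPrecedence` builds — each level taking its operands from the next-higher-precedence level — can be sketched without pyparsing as a small hand-written recursive-descent evaluator (illustrative only; `evaluate` and its inner parsers are hypothetical names, and only left-associative binary `+` and `*` over integers are handled):

```python
def evaluate(tokens):
    # parse_sum is the lowest-precedence level; its operands come from
    # parse_prod, so '*' binds tighter than '+' -- just as an earlier
    # entry in opList binds tighter than a later one.
    def parse_sum(pos):
        val, pos = parse_prod(pos)
        while pos < len(tokens) and tokens[pos] == "+":
            rhs, pos = parse_prod(pos + 1)
            val += rhs
        return val, pos

    def parse_prod(pos):
        val, pos = parse_atom(pos)
        while pos < len(tokens) and tokens[pos] == "*":
            rhs, pos = parse_atom(pos + 1)
            val *= rhs
        return val, pos

    def parse_atom(pos):
        return int(tokens[pos]), pos + 1

    return parse_sum(0)[0]

print(evaluate("2 + 3 * 4".split()))  # 14
```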
+
+dblQuotedString = Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\x[0-9a-fA-F]+)|(?:\\.))*"').setName("string enclosed in double quotes")
+sglQuotedString = Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\x[0-9a-fA-F]+)|(?:\\.))*'").setName("string enclosed in single quotes")
+quotedString = Regex(r'''(?:"(?:[^"\n\r\\]|(?:"")|(?:\\x[0-9a-fA-F]+)|(?:\\.))*")|(?:'(?:[^'\n\r\\]|(?:'')|(?:\\x[0-9a-fA-F]+)|(?:\\.))*')''').setName("quotedString using single or double quotes")
+unicodeString = Combine(_L('u') + quotedString.copy())
+
+def nestedExpr(opener="(", closer=")", content=None, ignoreExpr=quotedString):
+ """Helper method for defining nested lists enclosed in opening and closing
+ delimiters ("(" and ")" are the default).
+
+ Parameters:
+ - opener - opening character for a nested list (default="("); can also be a pyparsing expression
+ - closer - closing character for a nested list (default=")"); can also be a pyparsing expression
+ - content - expression for items within the nested lists (default=None)
+ - ignoreExpr - expression for ignoring opening and closing delimiters (default=quotedString)
+
+ If an expression is not provided for the content argument, the nested
+ expression will capture all whitespace-delimited content between delimiters
+ as a list of separate values.
+
+ Use the ignoreExpr argument to define expressions that may contain
+ opening or closing characters that should not be treated as opening
+ or closing characters for nesting, such as quotedString or a comment
+ expression. Specify multiple expressions using an Or or MatchFirst.
+ The default is quotedString, but if no expressions are to be ignored,
+ then pass None for this argument.
+ """
+ if opener == closer:
+ raise ValueError("opening and closing strings cannot be the same")
+ if content is None:
+ if isinstance(opener,basestring) and isinstance(closer,basestring):
+ if len(opener) == 1 and len(closer)==1:
+ if ignoreExpr is not None:
+ content = (Combine(OneOrMore(~ignoreExpr +
+ CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS,exact=1))
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ content = (empty+CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ if ignoreExpr is not None:
+ content = (Combine(OneOrMore(~ignoreExpr +
+ ~Literal(opener) + ~Literal(closer) +
+ CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1))
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ content = (Combine(OneOrMore(~Literal(opener) + ~Literal(closer) +
+ CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1))
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ raise ValueError("opening and closing arguments must be strings if no content expression is given")
+ ret = Forward()
+ if ignoreExpr is not None:
+ ret << Group( Suppress(opener) + ZeroOrMore( ignoreExpr | ret | content ) + Suppress(closer) )
+ else:
+ ret << Group( Suppress(opener) + ZeroOrMore( ret | content ) + Suppress(closer) )
+ return ret
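The shape of the results that `nestedExpr` produces for the default "(" and ")" delimiters — whitespace-separated items grouped into nested lists — can be mimicked with a plain stack-based sketch (illustrative only; `parse_nested` is a hypothetical name, and quoting/ignore expressions are not handled):

```python
# Group whitespace-separated tokens into nested lists by tracking
# parenthesis depth with a stack.
def parse_nested(s):
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    stack = [[]]
    for tok in tokens:
        if tok == "(":
            stack.append([])            # open a new nesting level
        elif tok == ")":
            group = stack.pop()         # close it and attach to the parent
            stack[-1].append(group)
        else:
            stack[-1].append(tok)
    return stack[0]

print(parse_nested("(a (b c) d)"))  # [['a', ['b', 'c'], 'd']]
```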
+
+def indentedBlock(blockStatementExpr, indentStack, indent=True):
+ """Helper method for defining space-delimited indentation blocks, such as
+ those used to define block statements in Python source code.
+
+ Parameters:
+ - blockStatementExpr - expression defining syntax of statement that
+ is repeated within the indented block
+ - indentStack - list created by caller to manage indentation stack
+ (multiple statementWithIndentedBlock expressions within a single grammar
+ should share a common indentStack)
+ - indent - boolean indicating whether block must be indented beyond
+ the current level; set to False for block of left-most statements
+ (default=True)
+
+ A valid block must contain at least one blockStatement.
+ """
+ def checkPeerIndent(s,l,t):
+ if l >= len(s): return
+ curCol = col(l,s)
+ if curCol != indentStack[-1]:
+ if curCol > indentStack[-1]:
+ raise ParseFatalException(s,l,"illegal nesting")
+ raise ParseException(s,l,"not a peer entry")
+
+ def checkSubIndent(s,l,t):
+ curCol = col(l,s)
+ if curCol > indentStack[-1]:
+ indentStack.append( curCol )
+ else:
+ raise ParseException(s,l,"not a subentry")
+
+ def checkUnindent(s,l,t):
+ if l >= len(s): return
+ curCol = col(l,s)
+ if not(indentStack and curCol < indentStack[-1] and curCol <= indentStack[-2]):
+ raise ParseException(s,l,"not an unindent")
+ indentStack.pop()
+
+ NL = OneOrMore(LineEnd().setWhitespaceChars("\t ").suppress())
+ INDENT = Empty() + Empty().setParseAction(checkSubIndent)
+ PEER = Empty().setParseAction(checkPeerIndent)
+ UNDENT = Empty().setParseAction(checkUnindent)
+ if indent:
+ smExpr = Group( Optional(NL) +
+ FollowedBy(blockStatementExpr) +
+ INDENT + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) + UNDENT)
+ else:
+ smExpr = Group( Optional(NL) +
+ (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) )
+ blockStatementExpr.ignore(_bslash + LineEnd())
+ return smExpr
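The indentStack discipline used by the three checkers above — a deeper column pushes a level (checkSubIndent), an equal column is a peer (checkPeerIndent), and a shallower column pops back (checkUnindent) — can be seen in a standalone sketch (illustrative only; `block_depths` is a hypothetical name and error cases such as inconsistent dedents are not flagged):

```python
# Report the nesting depth of each line from its indentation column,
# maintaining a stack of active indent levels.
def block_depths(lines):
    stack, depths = [1], []
    for line in lines:
        col = len(line) - len(line.lstrip()) + 1
        if col > stack[-1]:
            stack.append(col)            # sub-indent: new nested block
        else:
            while len(stack) > 1 and col < stack[-1]:
                stack.pop()              # unindent: close nested blocks
        depths.append(len(stack))
    return depths

print(block_depths(["a", "  b", "  c", "    d", "e"]))  # [1, 2, 2, 3, 1]
```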
+
+alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]")
+punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]")
+
+anyOpenTag,anyCloseTag = makeHTMLTags(Word(alphas,alphanums+"_:"))
+commonHTMLEntity = Combine(_L("&") + oneOf("gt lt amp nbsp quot").setResultsName("entity") +";").streamline()
+_htmlEntityMap = dict(zip("gt lt amp nbsp quot".split(),'><& "'))
+replaceHTMLEntity = lambda t : t.entity in _htmlEntityMap and _htmlEntityMap[t.entity] or None
+
+# it's easy to get these comment structures wrong - they're very common, so may as well make them available
+cStyleComment = Regex(r"/\*(?:[^*]*\*+)+?/").setName("C style comment")
+
+htmlComment = Regex(r"<!--[\s\S]*?-->")
+restOfLine = Regex(r".*").leaveWhitespace()
+dblSlashComment = Regex(r"\/\/(\\\n|.)*").setName("// comment")
+cppStyleComment = Regex(r"/(?:\*(?:[^*]*\*+)+?/|/[^\n]*(?:\n[^\n]*)*?(?:(?<!\\)|\Z))").setName("C++ style comment")
+
+javaStyleComment = cppStyleComment
+pythonStyleComment = Regex(r"#.*").setName("Python style comment")
+_noncomma = "".join( [ c for c in printables if c != "," ] )
+_commasepitem = Combine(OneOrMore(Word(_noncomma) +
+ Optional( Word(" \t") +
+ ~Literal(",") + ~LineEnd() ) ) ).streamline().setName("commaItem")
+commaSeparatedList = delimitedList( Optional( quotedString | _commasepitem, default="") ).setName("commaSeparatedList")
+
+
+if __name__ == "__main__":
+
+ def test( teststring ):
+ try:
+ tokens = simpleSQL.parseString( teststring )
+ tokenlist = tokens.asList()
+ print (teststring + "->" + str(tokenlist))
+ print ("tokens = " + str(tokens))
+ print ("tokens.columns = " + str(tokens.columns))
+ print ("tokens.tables = " + str(tokens.tables))
+ print (tokens.asXML("SQL",True))
+ except ParseBaseException,err:
+ print (teststring + "->")
+ print (err.line)
+ print (" "*(err.column-1) + "^")
+ print (err)
+ print()
+
+ selectToken = CaselessLiteral( "select" )
+ fromToken = CaselessLiteral( "from" )
+
+ ident = Word( alphas, alphanums + "_$" )
+ columnName = delimitedList( ident, ".", combine=True ).setParseAction( upcaseTokens )
+ columnNameList = Group( delimitedList( columnName ) )#.setName("columns")
+ tableName = delimitedList( ident, ".", combine=True ).setParseAction( upcaseTokens )
+ tableNameList = Group( delimitedList( tableName ) )#.setName("tables")
+ simpleSQL = ( selectToken + \
+ ( '*' | columnNameList ).setResultsName( "columns" ) + \
+ fromToken + \
+ tableNameList.setResultsName( "tables" ) )
+
+ test( "SELECT * from XYZZY, ABC" )
+ test( "select * from SYS.XYZZY" )
+ test( "Select A from Sys.dual" )
+ test( "Select AA,BB,CC from Sys.dual" )
+ test( "Select A, B, C from Sys.dual" )
+ test( "Select A, B, C from Sys.dual" )
+ test( "Xelect A, B, C from Sys.dual" )
+ test( "Select A, B, C frox Sys.dual" )
+ test( "Select" )
+ test( "Select ^^^ frox Sys.dual" )
+ test( "Select A, B, C from Sys.dual, Table2 " )
diff --git a/python/helpers/pycharm_generator_utils/pyparsing_py3.py b/python/helpers/pycharm_generator_utils/pyparsing_py3.py
new file mode 100644
index 0000000..4e0bc8a
--- /dev/null
+++ b/python/helpers/pycharm_generator_utils/pyparsing_py3.py
@@ -0,0 +1,3716 @@
+# module pyparsing.py
+#
+# Copyright (c) 2003-2009 Paul T. McGuire
+#
+# Permission is hereby granted, free of charge, to any person obtaining
+# a copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so, subject to
+# the following conditions:
+#
+# The above copyright notice and this permission notice shall be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+#
+#from __future__ import generators
+
+__doc__ = \
+"""
+pyparsing module - Classes and methods to define and execute parsing grammars
+
+The pyparsing module is an alternative approach to creating and executing simple grammars,
+vs. the traditional lex/yacc approach, or the use of regular expressions. With pyparsing, you
+don't need to learn a new syntax for defining grammars or matching expressions - the parsing module
+provides a library of classes that you use to construct the grammar directly in Python.
+
+Here is a program to parse "Hello, World!" (or any greeting of the form "<salutation>, <addressee>!")::
+
+ from pyparsing_py3 import Word, alphas
+
+ # define grammar of a greeting
+ greet = Word( alphas ) + "," + Word( alphas ) + "!"
+
+ hello = "Hello, World!"
+ print(hello, "->", greet.parseString( hello ))
+
+The program outputs the following::
+
+ Hello, World! -> ['Hello', ',', 'World', '!']
+
+The Python representation of the grammar is quite readable, owing to the self-explanatory
+class names, and the use of '+', '|' and '^' operators.
+
+The parsed results returned from parseString() can be accessed as a nested list, a dictionary, or an
+object with named attributes.
+
+The pyparsing module handles some of the problems that are typically vexing when writing text parsers:
+ - extra or missing whitespace (the above program will also handle "Hello,World!", "Hello , World !", etc.)
+ - quoted strings
+ - embedded comments
+"""
+
+__version__ = "1.5.2.Py3"
+__versionTime__ = "9 April 2009 12:21"
+__author__ = "Paul McGuire <[email protected]>"
+
+import string
+from weakref import ref as wkref
+import copy
+import sys
+import warnings
+import re
+import sre_constants
+#~ sys.stderr.write( "testing pyparsing module, version %s, %s\n" % (__version__,__versionTime__ ) )
+
+__all__ = [
+'And', 'CaselessKeyword', 'CaselessLiteral', 'CharsNotIn', 'Combine', 'Dict', 'Each', 'Empty',
+'FollowedBy', 'Forward', 'GoToColumn', 'Group', 'Keyword', 'LineEnd', 'LineStart', 'Literal',
+'MatchFirst', 'NoMatch', 'NotAny', 'OneOrMore', 'OnlyOnce', 'Optional', 'Or',
+'ParseBaseException', 'ParseElementEnhance', 'ParseException', 'ParseExpression', 'ParseFatalException',
+'ParseResults', 'ParseSyntaxException', 'ParserElement', 'QuotedString', 'RecursiveGrammarException',
+'Regex', 'SkipTo', 'StringEnd', 'StringStart', 'Suppress', 'Token', 'TokenConverter', 'Upcase',
+'White', 'Word', 'WordEnd', 'WordStart', 'ZeroOrMore',
+'alphanums', 'alphas', 'alphas8bit', 'anyCloseTag', 'anyOpenTag', 'cStyleComment', 'col',
+'commaSeparatedList', 'commonHTMLEntity', 'countedArray', 'cppStyleComment', 'dblQuotedString',
+'dblSlashComment', 'delimitedList', 'dictOf', 'downcaseTokens', 'empty', 'getTokensEndLoc', 'hexnums',
+'htmlComment', 'javaStyleComment', 'keepOriginalText', 'line', 'lineEnd', 'lineStart', 'lineno',
+'makeHTMLTags', 'makeXMLTags', 'matchOnlyAtCol', 'matchPreviousExpr', 'matchPreviousLiteral',
+'nestedExpr', 'nullDebugAction', 'nums', 'oneOf', 'opAssoc', 'operatorPrecedence', 'printables',
+'punc8bit', 'pythonStyleComment', 'quotedString', 'removeQuotes', 'replaceHTMLEntity',
+'replaceWith', 'restOfLine', 'sglQuotedString', 'srange', 'stringEnd',
+'stringStart', 'traceParseAction', 'unicodeString', 'upcaseTokens', 'withAttribute',
+'indentedBlock', 'originalTextFor',
+]
+
+"""
+Detect if we are running version 3.X and make appropriate changes
+Robert A. Clark
+"""
+_PY3K = sys.version_info[0] > 2
+if _PY3K:
+ _MAX_INT = sys.maxsize
+ basestring = str
+ unichr = chr
+ _ustr = str
+ _str2dict = set
+ alphas = string.ascii_lowercase + string.ascii_uppercase
+else:
+ _MAX_INT = sys.maxint
+
+ def _ustr(obj):
+ """Drop-in replacement for str(obj) that tries to be Unicode friendly. It first tries
+ str(obj). If that fails with a UnicodeEncodeError, then it tries unicode(obj). It
+ then < returns the unicode object | encodes it with the default encoding | ... >.
+ """
+ if isinstance(obj,unicode):
+ return obj
+
+ try:
+ # If this works, then _ustr(obj) has the same behaviour as str(obj), so
+ # it won't break any existing code.
+ return str(obj)
+
+ except UnicodeEncodeError:
+ # The Python docs (http://docs.python.org/ref/customization.html#l2h-182)
+ # state that "The return value must be a string object". However, does a
+ # unicode object (being a subclass of basestring) count as a "string
+ # object"?
+ # If so, then return a unicode object:
+ return unicode(obj)
+ # Else encode it... but how? There are many choices... :)
+ # Replace unprintables with escape codes?
+ #return unicode(obj).encode(sys.getdefaultencoding(), 'backslashreplace_errors')
+ # Replace unprintables with question marks?
+ #return unicode(obj).encode(sys.getdefaultencoding(), 'replace')
+ # ...
+
+ def _str2dict(strg):
+ return dict( [(c,0) for c in strg] )
+
+ alphas = string.lowercase + string.uppercase
+
+
+def _xml_escape(data):
+ """Escape &, <, >, ", ', etc. in a string of data."""
+
+ # ampersand must be replaced first
+ from_symbols = '&><"\''
+ to_symbols = ['&'+s+';' for s in "amp gt lt quot apos".split()]
+ for from_,to_ in zip(from_symbols, to_symbols):
+ data = data.replace(from_, to_)
+ return data
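A standalone sketch of the escaping performed by `_xml_escape` above. The ordering of the replacements is the whole point: ampersand must be handled first, or the `&` characters introduced by the later entity substitutions would themselves get escaped.

```python
# Minimal sketch of _xml_escape's logic: pair each source character with
# its entity, replacing "&" before any entity (which contains "&") is added.
def xml_escape(data):
    for from_, to_ in zip('&><"\'', ["&amp;", "&gt;", "&lt;", "&quot;", "&apos;"]):
        data = data.replace(from_, to_)
    return data

print(xml_escape('a < b & "c"'))  # -> a &lt; b &amp; &quot;c&quot;
```

Note that an input already containing an entity such as `&gt;` is escaped again (to `&amp;gt;`); callers are expected to pass raw, unescaped text.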
+
+class _Constants(object):
+ pass
+
+nums = string.digits
+hexnums = nums + "ABCDEFabcdef"
+alphanums = alphas + nums
+_bslash = chr(92)
+printables = "".join( [ c for c in string.printable if c not in string.whitespace ] )
+
+class ParseBaseException(Exception):
+ """base exception class for all parsing runtime exceptions"""
+ # Performance tuning: we construct a *lot* of these, so keep this
+ # constructor as small and fast as possible
+ def __init__( self, pstr, loc=0, msg=None, elem=None ):
+ self.loc = loc
+ if msg is None:
+ self.msg = pstr
+ self.pstr = ""
+ else:
+ self.msg = msg
+ self.pstr = pstr
+ self.parserElement = elem
+
+ def __getattr__( self, aname ):
+ """supported attributes by name are:
+ - lineno - returns the line number of the exception text
+ - col - returns the column number of the exception text
+ - line - returns the line containing the exception text
+ """
+ if( aname == "lineno" ):
+ return lineno( self.loc, self.pstr )
+ elif( aname in ("col", "column") ):
+ return col( self.loc, self.pstr )
+ elif( aname == "line" ):
+ return line( self.loc, self.pstr )
+ else:
+ raise AttributeError(aname)
+
+ def __str__( self ):
+ return "%s (at char %d), (line:%d, col:%d)" % \
+ ( self.msg, self.loc, self.lineno, self.column )
+ def __repr__( self ):
+ return _ustr(self)
+ def markInputline( self, markerString = ">!<" ):
+ """Extracts the exception line from the input string, and marks
+ the location of the exception with a special symbol.
+ """
+ line_str = self.line
+ line_column = self.column - 1
+ if markerString:
+ line_str = "".join( [line_str[:line_column],
+ markerString, line_str[line_column:]])
+ return line_str.strip()
+ def __dir__(self):
+ return "loc msg pstr parserElement lineno col line " \
+ "markInputline __str__ __repr__".split()
+
+class ParseException(ParseBaseException):
+ """exception thrown when parse expressions don't match the input string;
+ supported attributes by name are:
+ - lineno - returns the line number of the exception text
+ - col - returns the column number of the exception text
+ - line - returns the line containing the exception text
+ """
+ pass
+
+class ParseFatalException(ParseBaseException):
+ """user-throwable exception thrown when inconsistent parse content
+ is found; stops all parsing immediately"""
+ pass
+
+class ParseSyntaxException(ParseFatalException):
+ """just like ParseFatalException, but thrown internally when an
+ ErrorStop indicates that parsing is to stop immediately because
+ an unbacktrackable syntax error has been found"""
+ def __init__(self, pe):
+ super(ParseSyntaxException, self).__init__(
+ pe.pstr, pe.loc, pe.msg, pe.parserElement)
+
+#~ class ReparseException(ParseBaseException):
+ #~ """Experimental class - parse actions can raise this exception to cause
+ #~ pyparsing to reparse the input string:
+ #~ - with a modified input string, and/or
+ #~ - with a modified start location
+ #~ Set the values of the ReparseException in the constructor, and raise the
+ #~ exception in a parse action to cause pyparsing to use the new string/location.
+ #~ Setting the values as None causes no change to be made.
+ #~ """
+ #~ def __init_( self, newstring, restartLoc ):
+ #~ self.newParseText = newstring
+ #~ self.reparseLoc = restartLoc
+
+class RecursiveGrammarException(Exception):
+ """exception thrown by validate() if the grammar could be improperly recursive"""
+ def __init__( self, parseElementList ):
+ self.parseElementTrace = parseElementList
+
+ def __str__( self ):
+ return "RecursiveGrammarException: %s" % self.parseElementTrace
+
+class _ParseResultsWithOffset(object):
+ def __init__(self,p1,p2):
+ self.tup = (p1,p2)
+ def __getitem__(self,i):
+ return self.tup[i]
+ def __repr__(self):
+ return repr(self.tup)
+ def setOffset(self,i):
+ self.tup = (self.tup[0],i)
+
+class ParseResults(object):
+ """Structured parse results, to provide multiple means of access to the parsed data:
+ - as a list (len(results))
+ - by list index (results[0], results[1], etc.)
+ - by attribute (results.<resultsName>)
+ """
+ __slots__ = ( "__toklist", "__tokdict", "__doinit", "__name", "__parent", "__accumNames", "__weakref__" )
+ def __new__(cls, toklist, name=None, asList=True, modal=True ):
+ if isinstance(toklist, cls):
+ return toklist
+ retobj = object.__new__(cls)
+ retobj.__doinit = True
+ return retobj
+
+ # Performance tuning: we construct a *lot* of these, so keep this
+ # constructor as small and fast as possible
+ def __init__( self, toklist, name=None, asList=True, modal=True ):
+ if self.__doinit:
+ self.__doinit = False
+ self.__name = None
+ self.__parent = None
+ self.__accumNames = {}
+ if isinstance(toklist, list):
+ self.__toklist = toklist[:]
+ else:
+ self.__toklist = [toklist]
+ self.__tokdict = dict()
+
+ if name:
+ if not modal:
+ self.__accumNames[name] = 0
+ if isinstance(name,int):
+ name = _ustr(name) # will always return a str, but use _ustr for consistency
+ self.__name = name
+ if not toklist in (None,'',[]):
+ if isinstance(toklist,basestring):
+ toklist = [ toklist ]
+ if asList:
+ if isinstance(toklist,ParseResults):
+ self[name] = _ParseResultsWithOffset(toklist.copy(),0)
+ else:
+ self[name] = _ParseResultsWithOffset(ParseResults(toklist[0]),0)
+ self[name].__name = name
+ else:
+ try:
+ self[name] = toklist[0]
+ except (KeyError,TypeError,IndexError):
+ self[name] = toklist
+
+ def __getitem__( self, i ):
+ if isinstance( i, (int,slice) ):
+ return self.__toklist[i]
+ else:
+ if i not in self.__accumNames:
+ return self.__tokdict[i][-1][0]
+ else:
+ return ParseResults([ v[0] for v in self.__tokdict[i] ])
+
+ def __setitem__( self, k, v ):
+ if isinstance(v,_ParseResultsWithOffset):
+ self.__tokdict[k] = self.__tokdict.get(k,list()) + [v]
+ sub = v[0]
+ elif isinstance(k,int):
+ self.__toklist[k] = v
+ sub = v
+ else:
+ self.__tokdict[k] = self.__tokdict.get(k,list()) + [_ParseResultsWithOffset(v,0)]
+ sub = v
+ if isinstance(sub,ParseResults):
+ sub.__parent = wkref(self)
+
+ def __delitem__( self, i ):
+ if isinstance(i,(int,slice)):
+ mylen = len( self.__toklist )
+ del self.__toklist[i]
+
+ # convert int to slice
+ if isinstance(i, int):
+ if i < 0:
+ i += mylen
+ i = slice(i, i+1)
+ # get removed indices
+ removed = list(range(*i.indices(mylen)))
+ removed.reverse()
+ # fixup indices in token dictionary
+ for name in self.__tokdict:
+ occurrences = self.__tokdict[name]
+ for j in removed:
+ for k, (value, position) in enumerate(occurrences):
+ occurrences[k] = _ParseResultsWithOffset(value, position - (position > j))
+ else:
+ del self.__tokdict[i]
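The offset fixup in `__delitem__` above leans on Python's bool-as-int arithmetic: `position - (position > j)` decrements exactly those stored offsets that lie past the removed index `j`. A hypothetical standalone version, to make the trick visible:

```python
# When the token at removed_index is deleted, every recorded offset past
# that index must shift down by one; (pos > removed_index) is a bool,
# which Python evaluates as 0 or 1 in arithmetic.
def fixup(offsets, removed_index):
    return [pos - (pos > removed_index) for pos in offsets]

print(fixup([0, 1, 3, 4], 1))  # -> [0, 1, 2, 3]
```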
+
+ def __contains__( self, k ):
+ return k in self.__tokdict
+
+ def __len__( self ): return len( self.__toklist )
+ def __bool__(self): return len( self.__toklist ) > 0
+ __nonzero__ = __bool__
+ def __iter__( self ): return iter( self.__toklist )
+ def __reversed__( self ): return iter( reversed(self.__toklist) )
+ def keys( self ):
+ """Returns all named result keys."""
+ return self.__tokdict.keys()
+
+ def pop( self, index=-1 ):
+ """Removes and returns item at specified index (default=last).
+ Will work with either numeric indices or dict-key indices."""
+ ret = self[index]
+ del self[index]
+ return ret
+
+ def get(self, key, defaultValue=None):
+ """Returns named result matching the given key, or if there is no
+ such name, then returns the given defaultValue or None if no
+ defaultValue is specified."""
+ if key in self:
+ return self[key]
+ else:
+ return defaultValue
+
+ def insert( self, index, insStr ):
+ self.__toklist.insert(index, insStr)
+ # fixup indices in token dictionary
+ for name in self.__tokdict:
+ occurrences = self.__tokdict[name]
+ for k, (value, position) in enumerate(occurrences):
+ occurrences[k] = _ParseResultsWithOffset(value, position + (position > index))
+
+ def items( self ):
+ """Returns all named result keys and values as a list of tuples."""
+ return [(k,self[k]) for k in self.__tokdict]
+
+ def values( self ):
+ """Returns all named result values."""
+ return [ v[-1][0] for v in self.__tokdict.values() ]
+
+ def __getattr__( self, name ):
+ if name not in self.__slots__:
+ if name in self.__tokdict:
+ if name not in self.__accumNames:
+ return self.__tokdict[name][-1][0]
+ else:
+ return ParseResults([ v[0] for v in self.__tokdict[name] ])
+ else:
+ return ""
+ return None
+
+ def __add__( self, other ):
+ ret = self.copy()
+ ret += other
+ return ret
+
+ def __iadd__( self, other ):
+ if other.__tokdict:
+ offset = len(self.__toklist)
+ addoffset = ( lambda a: (a<0 and offset) or (a+offset) )
+ otheritems = other.__tokdict.items()
+ otherdictitems = [(k, _ParseResultsWithOffset(v[0],addoffset(v[1])) )
+ for (k,vlist) in otheritems for v in vlist]
+ for k,v in otherdictitems:
+ self[k] = v
+ if isinstance(v[0],ParseResults):
+ v[0].__parent = wkref(self)
+
+ self.__toklist += other.__toklist
+ self.__accumNames.update( other.__accumNames )
+ del other
+ return self
+
+ def __repr__( self ):
+ return "(%s, %s)" % ( repr( self.__toklist ), repr( self.__tokdict ) )
+
+ def __str__( self ):
+ out = "["
+ sep = ""
+ for i in self.__toklist:
+ if isinstance(i, ParseResults):
+ out += sep + _ustr(i)
+ else:
+ out += sep + repr(i)
+ sep = ", "
+ out += "]"
+ return out
+
+ def _asStringList( self, sep='' ):
+ out = []
+ for item in self.__toklist:
+ if out and sep:
+ out.append(sep)
+ if isinstance( item, ParseResults ):
+ out += item._asStringList()
+ else:
+ out.append( _ustr(item) )
+ return out
+
+ def asList( self ):
+ """Returns the parse results as a nested list of matching tokens, all converted to strings."""
+ out = []
+ for res in self.__toklist:
+ if isinstance(res,ParseResults):
+ out.append( res.asList() )
+ else:
+ out.append( res )
+ return out
+
+ def asDict( self ):
+ """Returns the named parse results as dictionary."""
+ return dict( self.items() )
+
+ def copy( self ):
+ """Returns a new copy of a ParseResults object."""
+ ret = ParseResults( self.__toklist )
+ ret.__tokdict = self.__tokdict.copy()
+ ret.__parent = self.__parent
+ ret.__accumNames.update( self.__accumNames )
+ ret.__name = self.__name
+ return ret
+
+ def asXML( self, doctag=None, namedItemsOnly=False, indent="", formatted=True ):
+ """Returns the parse results as XML. Tags are created for tokens and lists that have defined results names."""
+ nl = "\n"
+ out = []
+ namedItems = dict( [ (v[1],k) for (k,vlist) in self.__tokdict.items()
+ for v in vlist ] )
+ nextLevelIndent = indent + " "
+
+ # collapse out indents if formatting is not desired
+ if not formatted:
+ indent = ""
+ nextLevelIndent = ""
+ nl = ""
+
+ selfTag = None
+ if doctag is not None:
+ selfTag = doctag
+ else:
+ if self.__name:
+ selfTag = self.__name
+
+ if not selfTag:
+ if namedItemsOnly:
+ return ""
+ else:
+ selfTag = "ITEM"
+
+ out += [ nl, indent, "<", selfTag, ">" ]
+
+ worklist = self.__toklist
+ for i,res in enumerate(worklist):
+ if isinstance(res,ParseResults):
+ if i in namedItems:
+ out += [ res.asXML(namedItems[i],
+ namedItemsOnly and doctag is None,
+ nextLevelIndent,
+ formatted)]
+ else:
+ out += [ res.asXML(None,
+ namedItemsOnly and doctag is None,
+ nextLevelIndent,
+ formatted)]
+ else:
+ # individual token, see if there is a name for it
+ resTag = None
+ if i in namedItems:
+ resTag = namedItems[i]
+ if not resTag:
+ if namedItemsOnly:
+ continue
+ else:
+ resTag = "ITEM"
+ xmlBodyText = _xml_escape(_ustr(res))
+ out += [ nl, nextLevelIndent, "<", resTag, ">",
+ xmlBodyText,
+ "</", resTag, ">" ]
+
+ out += [ nl, indent, "</", selfTag, ">" ]
+ return "".join(out)
+
+ def __lookup(self,sub):
+ for k,vlist in self.__tokdict.items():
+ for v,loc in vlist:
+ if sub is v:
+ return k
+ return None
+
+ def getName(self):
+ """Returns the results name for this token expression."""
+ if self.__name:
+ return self.__name
+ elif self.__parent:
+ par = self.__parent()
+ if par:
+ return par.__lookup(self)
+ else:
+ return None
+ elif (len(self) == 1 and
+ len(self.__tokdict) == 1 and
+ list(self.__tokdict.values())[0][0][1] in (0,-1)):
+ return list(self.__tokdict.keys())[0]
+ else:
+ return None
+
+ def dump(self,indent='',depth=0):
+ """Diagnostic method for listing out the contents of a ParseResults.
+ Accepts an optional indent argument so that this string can be embedded
+ in a nested display of other data."""
+ out = []
+ out.append( indent+_ustr(self.asList()) )
+ keys = self.items()
+ keys.sort()
+ for k,v in keys:
+ if out:
+ out.append('\n')
+ out.append( "%s%s- %s: " % (indent,(' '*depth), k) )
+ if isinstance(v,ParseResults):
+ if v.keys():
+ out.append( v.dump(indent,depth+1) )
+ else:
+ out.append(_ustr(v))
+ else:
+ out.append(_ustr(v))
+ return "".join(out)
+
+ # add support for pickle protocol
+ def __getstate__(self):
+ return ( self.__toklist,
+ ( self.__tokdict.copy(),
+ self.__parent is not None and self.__parent() or None,
+ self.__accumNames,
+ self.__name ) )
+
+ def __setstate__(self,state):
+ self.__toklist = state[0]
+ self.__tokdict, \
+ par, \
+ inAccumNames, \
+ self.__name = state[1]
+ self.__accumNames = {}
+ self.__accumNames.update(inAccumNames)
+ if par is not None:
+ self.__parent = wkref(par)
+ else:
+ self.__parent = None
+
+ def __dir__(self):
+ return dir(super(ParseResults,self)) + list(self.keys())
+
+def col (loc,strg):
+ """Returns current column within a string, counting newlines as line separators.
+ The first column is number 1.
+
+ Note: the default parsing behavior is to expand tabs in the input string
+ before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information
+ on parsing strings containing <TAB>s, and suggested methods to maintain a
+ consistent view of the parsed string, the parse location, and line and column
+ positions within the parsed string.
+ """
+ return (loc<len(strg) and strg[loc] == '\n') and 1 or loc - strg.rfind("\n", 0, loc)
+
+def lineno(loc,strg):
+ """Returns current line number within a string, counting newlines as line separators.
+ The first line is number 1.
+
+ Note: the default parsing behavior is to expand tabs in the input string
+ before starting the parsing process. See L{I{ParserElement.parseString}<ParserElement.parseString>} for more information
+ on parsing strings containing <TAB>s, and suggested methods to maintain a
+ consistent view of the parsed string, the parse location, and line and column
+ positions within the parsed string.
+ """
+ return strg.count("\n",0,loc) + 1
+
+def line( loc, strg ):
+ """Returns the line of text containing loc within a string, counting newlines as line separators.
+ """
+ lastCR = strg.rfind("\n", 0, loc)
+ nextCR = strg.find("\n", loc)
+ if nextCR > 0:
+ return strg[lastCR+1:nextCR]
+ else:
+ return strg[lastCR+1:]
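The three location helpers above can be exercised in isolation; here they are restated as standalone functions (same logic, no module dependencies) and applied to a two-line string:

```python
# Standalone copies of pyparsing's location helpers. Lines and columns
# are 1-based; loc is a 0-based character offset into the string.
def lineno(loc, strg):
    return strg.count("\n", 0, loc) + 1

def col(loc, strg):
    return 1 if (loc < len(strg) and strg[loc] == '\n') else loc - strg.rfind("\n", 0, loc)

def line(loc, strg):
    lastCR = strg.rfind("\n", 0, loc)
    nextCR = strg.find("\n", loc)
    return strg[lastCR+1:nextCR] if nextCR > 0 else strg[lastCR+1:]

s = "abc\ndef"
print(lineno(5, s), col(5, s), line(5, s))  # -> 2 2 def
```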
+
+def _defaultStartDebugAction( instring, loc, expr ):
+ print ("Match " + _ustr(expr) + " at loc " + _ustr(loc) + "(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))
+
+def _defaultSuccessDebugAction( instring, startloc, endloc, expr, toks ):
+ print ("Matched " + _ustr(expr) + " -> " + str(toks.asList()))
+
+def _defaultExceptionDebugAction( instring, loc, expr, exc ):
+ print ("Exception raised:" + _ustr(exc))
+
+def nullDebugAction(*args):
+ """'Do-nothing' debug action, to suppress debugging output during parsing."""
+ pass
+
+class ParserElement(object):
+ """Abstract base level parser element class."""
+ DEFAULT_WHITE_CHARS = " \n\t\r"
+
+ def setDefaultWhitespaceChars( chars ):
+ """Overrides the default whitespace chars
+ """
+ ParserElement.DEFAULT_WHITE_CHARS = chars
+ setDefaultWhitespaceChars = staticmethod(setDefaultWhitespaceChars)
+
+ def __init__( self, savelist=False ):
+ self.parseAction = list()
+ self.failAction = None
+ #~ self.name = "<unknown>" # don't define self.name, let subclasses try/except upcall
+ self.strRepr = None
+ self.resultsName = None
+ self.saveAsList = savelist
+ self.skipWhitespace = True
+ self.whiteChars = ParserElement.DEFAULT_WHITE_CHARS
+ self.copyDefaultWhiteChars = True
+ self.mayReturnEmpty = False # used when checking for left-recursion
+ self.keepTabs = False
+ self.ignoreExprs = list()
+ self.debug = False
+ self.streamlined = False
+ self.mayIndexError = True # used to optimize exception handling for subclasses that don't advance parse index
+ self.errmsg = ""
+ self.modalResults = True # used to mark results names as modal (report only last) or cumulative (list all)
+ self.debugActions = ( None, None, None ) #custom debug actions
+ self.re = None
+ self.callPreparse = True # used to avoid redundant calls to preParse
+ self.callDuringTry = False
+
+ def copy( self ):
+ """Make a copy of this ParserElement. Useful for defining different parse actions
+ for the same parsing pattern, using copies of the original parse element."""
+ cpy = copy.copy( self )
+ cpy.parseAction = self.parseAction[:]
+ cpy.ignoreExprs = self.ignoreExprs[:]
+ if self.copyDefaultWhiteChars:
+ cpy.whiteChars = ParserElement.DEFAULT_WHITE_CHARS
+ return cpy
+
+ def setName( self, name ):
+ """Define name for this expression, for use in debugging."""
+ self.name = name
+ self.errmsg = "Expected " + self.name
+ if hasattr(self,"exception"):
+ self.exception.msg = self.errmsg
+ return self
+
+ def setResultsName( self, name, listAllMatches=False ):
+ """Define name for referencing matching tokens as a nested attribute
+ of the returned parse results.
+ NOTE: this returns a *copy* of the original ParserElement object;
+ this is so that the client can define a basic element, such as an
+ integer, and reference it in multiple places with different names.
+ """
+ newself = self.copy()
+ newself.resultsName = name
+ newself.modalResults = not listAllMatches
+ return newself
+
+ def setBreak(self,breakFlag = True):
+ """Method to invoke the Python pdb debugger when this element is
+ about to be parsed. Set breakFlag to True to enable, False to
+ disable.
+ """
+ if breakFlag:
+ _parseMethod = self._parse
+ def breaker(instring, loc, doActions=True, callPreParse=True):
+ import pdb
+ pdb.set_trace()
+ return _parseMethod( instring, loc, doActions, callPreParse )
+ breaker._originalParseMethod = _parseMethod
+ self._parse = breaker
+ else:
+ if hasattr(self._parse,"_originalParseMethod"):
+ self._parse = self._parse._originalParseMethod
+ return self
+
+ def _normalizeParseActionArgs( f ):
+ """Internal method used to decorate parse actions that take fewer than 3 arguments,
+ so that all parse actions can be called as f(s,l,t)."""
+ STAR_ARGS = 4
+
+ try:
+ restore = None
+ if isinstance(f,type):
+ restore = f
+ f = f.__init__
+ if not _PY3K:
+ codeObj = f.func_code
+ else:
+ codeObj = f.__code__
+ if codeObj.co_flags & STAR_ARGS:
+ return f
+ numargs = codeObj.co_argcount
+ if not _PY3K:
+ if hasattr(f,"im_self"):
+ numargs -= 1
+ else:
+ if hasattr(f,"__self__"):
+ numargs -= 1
+ if restore:
+ f = restore
+ except AttributeError:
+ try:
+ if not _PY3K:
+ call_im_func_code = f.__call__.im_func.func_code
+ else:
+ call_im_func_code = f.__code__
+
+ # not a function, must be a callable object, get info from the
+ # im_func binding of its bound __call__ method
+ if call_im_func_code.co_flags & STAR_ARGS:
+ return f
+ numargs = call_im_func_code.co_argcount
+ if not _PY3K:
+ if hasattr(f.__call__,"im_self"):
+ numargs -= 1
+ else:
+ if hasattr(f.__call__,"__self__"):
+ numargs -= 1
+ except AttributeError:
+ if not _PY3K:
+ call_func_code = f.__call__.func_code
+ else:
+ call_func_code = f.__call__.__code__
+ # not a bound method, get info directly from __call__ method
+ if call_func_code.co_flags & STAR_ARGS:
+ return f
+ numargs = call_func_code.co_argcount
+ if not _PY3K:
+ if hasattr(f.__call__,"im_self"):
+ numargs -= 1
+ else:
+ if hasattr(f.__call__,"__self__"):
+ numargs -= 1
+
+
+ #~ print ("adding function %s with %d args" % (f.func_name,numargs))
+ if numargs == 3:
+ return f
+ else:
+ if numargs > 3:
+ def tmp(s,l,t):
+ return f(f.__call__.__self__, s,l,t)
+ if numargs == 2:
+ def tmp(s,l,t):
+ return f(l,t)
+ elif numargs == 1:
+ def tmp(s,l,t):
+ return f(t)
+ else: #~ numargs == 0:
+ def tmp(s,l,t):
+ return f()
+ try:
+ tmp.__name__ = f.__name__
+ except (AttributeError,TypeError):
+ # no need for special handling if attribute doesn't exist
+ pass
+ try:
+ tmp.__doc__ = f.__doc__
+ except (AttributeError,TypeError):
+ # no need for special handling if attribute doesn't exist
+ pass
+ try:
+ tmp.__dict__.update(f.__dict__)
+ except (AttributeError,TypeError):
+ # no need for special handling if attribute doesn't exist
+ pass
+ return tmp
+ _normalizeParseActionArgs = staticmethod(_normalizeParseActionArgs)
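The core idea of `_normalizeParseActionArgs` can be shown with a much smaller sketch, assuming plain functions only (the real method also handles classes, bound methods, and callable objects, and the Py2/Py3 attribute differences):

```python
# Inspect the declared argument count and wrap shorter signatures so that
# every parse action can be invoked uniformly as f(s, l, t).
def normalize(f):
    numargs = f.__code__.co_argcount
    if numargs == 3:
        return f
    if numargs == 2:
        return lambda s, l, t: f(l, t)
    if numargs == 1:
        return lambda s, l, t: f(t)
    return lambda s, l, t: f()

upper = normalize(lambda t: [x.upper() for x in t])
print(upper("src", 0, ["a", "b"]))  # -> ['A', 'B']
```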
+
+ def setParseAction( self, *fns, **kwargs ):
+ """Define action to perform when successfully matching parse element definition.
+ Parse action fn is a callable method with 0-3 arguments, called as fn(s,loc,toks),
+ fn(loc,toks), fn(toks), or just fn(), where:
+ - s = the original string being parsed (see note below)
+ - loc = the location of the matching substring
+ - toks = a list of the matched tokens, packaged as a ParseResults object
+ If the functions in fns modify the tokens, they can return them as the return
+ value from fn, and the modified list of tokens will replace the original.
+ Otherwise, fn does not need to return any value.
+
+ Note: the default parsing behavior is to expand tabs in the input string
+ before starting the parsing process. See L{I{parseString}<parseString>} for more information
+ on parsing strings containing <TAB>s, and suggested methods to maintain a
+ consistent view of the parsed string, the parse location, and line and column
+ positions within the parsed string.
+ """
+ self.parseAction = list(map(self._normalizeParseActionArgs, list(fns)))
+ self.callDuringTry = ("callDuringTry" in kwargs and kwargs["callDuringTry"])
+ return self
+
+ def addParseAction( self, *fns, **kwargs ):
+ """Add parse action to expression's list of parse actions. See L{I{setParseAction}<setParseAction>}."""
+ self.parseAction += list(map(self._normalizeParseActionArgs, list(fns)))
+ self.callDuringTry = self.callDuringTry or ("callDuringTry" in kwargs and kwargs["callDuringTry"])
+ return self
+
+ def setFailAction( self, fn ):
+ """Define action to perform if parsing fails at this expression.
+ Fail action fn is a callable function that takes the arguments
+ fn(s,loc,expr,err) where:
+ - s = string being parsed
+ - loc = location where expression match was attempted and failed
+ - expr = the parse expression that failed
+ - err = the exception thrown
+ The function returns no value. It may throw ParseFatalException
+ if it is desired to stop parsing immediately."""
+ self.failAction = fn
+ return self
+
+ def _skipIgnorables( self, instring, loc ):
+ exprsFound = True
+ while exprsFound:
+ exprsFound = False
+ for e in self.ignoreExprs:
+ try:
+ while 1:
+ loc,dummy = e._parse( instring, loc )
+ exprsFound = True
+ except ParseException:
+ pass
+ return loc
+
+ def preParse( self, instring, loc ):
+ if self.ignoreExprs:
+ loc = self._skipIgnorables( instring, loc )
+
+ if self.skipWhitespace:
+ wt = self.whiteChars
+ instrlen = len(instring)
+ while loc < instrlen and instring[loc] in wt:
+ loc += 1
+
+ return loc
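The whitespace-skipping loop in `preParse` is simple enough to restate on its own; the default skip set is `ParserElement.DEFAULT_WHITE_CHARS`:

```python
# Advance loc past any leading whitespace characters, as preParse does.
def skip_whitespace(instring, loc, white_chars=" \n\t\r"):
    while loc < len(instring) and instring[loc] in white_chars:
        loc += 1
    return loc

print(skip_whitespace("  \thello", 0))  # -> 3
```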
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ return loc, []
+
+ def postParse( self, instring, loc, tokenlist ):
+ return tokenlist
+
+ #~ @profile
+ def _parseNoCache( self, instring, loc, doActions=True, callPreParse=True ):
+ debugging = ( self.debug ) #and doActions )
+
+ if debugging or self.failAction:
+ #~ print ("Match",self,"at loc",loc,"(%d,%d)" % ( lineno(loc,instring), col(loc,instring) ))
+ if (self.debugActions[0] ):
+ self.debugActions[0]( instring, loc, self )
+ if callPreParse and self.callPreparse:
+ preloc = self.preParse( instring, loc )
+ else:
+ preloc = loc
+ tokensStart = loc
+ try:
+ try:
+ loc,tokens = self.parseImpl( instring, preloc, doActions )
+ except IndexError:
+ raise ParseException( instring, len(instring), self.errmsg, self )
+ except ParseBaseException:
+ #~ print ("Exception raised:", err)
+ err = None
+ if self.debugActions[2]:
+ err = sys.exc_info()[1]
+ self.debugActions[2]( instring, tokensStart, self, err )
+ if self.failAction:
+ if err is None:
+ err = sys.exc_info()[1]
+ self.failAction( instring, tokensStart, self, err )
+ raise
+ else:
+ if callPreParse and self.callPreparse:
+ preloc = self.preParse( instring, loc )
+ else:
+ preloc = loc
+ tokensStart = loc
+ if self.mayIndexError or loc >= len(instring):
+ try:
+ loc,tokens = self.parseImpl( instring, preloc, doActions )
+ except IndexError:
+ raise ParseException( instring, len(instring), self.errmsg, self )
+ else:
+ loc,tokens = self.parseImpl( instring, preloc, doActions )
+
+ tokens = self.postParse( instring, loc, tokens )
+
+ retTokens = ParseResults( tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults )
+ if self.parseAction and (doActions or self.callDuringTry):
+ if debugging:
+ try:
+ for fn in self.parseAction:
+ tokens = fn( instring, tokensStart, retTokens )
+ if tokens is not None:
+ retTokens = ParseResults( tokens,
+ self.resultsName,
+ asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),
+ modal=self.modalResults )
+ except ParseBaseException:
+ #~ print "Exception raised in user parse action:", err
+ if (self.debugActions[2] ):
+ err = sys.exc_info()[1]
+ self.debugActions[2]( instring, tokensStart, self, err )
+ raise
+ else:
+ for fn in self.parseAction:
+ tokens = fn( instring, tokensStart, retTokens )
+ if tokens is not None:
+ retTokens = ParseResults( tokens,
+ self.resultsName,
+ asList=self.saveAsList and isinstance(tokens,(ParseResults,list)),
+ modal=self.modalResults )
+
+ if debugging:
+ #~ print ("Matched",self,"->",retTokens.asList())
+ if (self.debugActions[1] ):
+ self.debugActions[1]( instring, tokensStart, loc, self, retTokens )
+
+ return loc, retTokens
+
+ def tryParse( self, instring, loc ):
+ try:
+ return self._parse( instring, loc, doActions=False )[0]
+ except ParseFatalException:
+ raise ParseException( instring, loc, self.errmsg, self)
+
+ # this method gets repeatedly called during backtracking with the same arguments -
+ # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression
+ def _parseCache( self, instring, loc, doActions=True, callPreParse=True ):
+ lookup = (self,instring,loc,callPreParse,doActions)
+ if lookup in ParserElement._exprArgCache:
+ value = ParserElement._exprArgCache[ lookup ]
+ if isinstance(value,Exception):
+ raise value
+ return value
+ else:
+ try:
+ value = self._parseNoCache( instring, loc, doActions, callPreParse )
+ ParserElement._exprArgCache[ lookup ] = (value[0],value[1].copy())
+ return value
+ except ParseBaseException:
+ pe = sys.exc_info()[1]
+ ParserElement._exprArgCache[ lookup ] = pe
+ raise
+
+ _parse = _parseNoCache
+
+ # argument cache for optimizing repeated calls when backtracking through recursive expressions
+ _exprArgCache = {}
+ def resetCache():
+ ParserElement._exprArgCache.clear()
+ resetCache = staticmethod(resetCache)
+
+ _packratEnabled = False
+ def enablePackrat():
+ """Enables "packrat" parsing, which adds memoizing to the parsing logic.
+ Repeated parse attempts at the same string location (which happens
+ often in many complex grammars) can immediately return a cached value,
+ instead of re-executing parsing/validating code. Memoizing is done for
+ both valid results and parsing exceptions.
+
+ This speedup may break existing programs that use parse actions that
+ have side-effects. For this reason, packrat parsing is disabled when
+ you first import pyparsing_py3 as pyparsing. To activate the packrat feature, your
+ program must call the class method ParserElement.enablePackrat(). If
+ your program uses psyco to "compile as you go", you must call
+ enablePackrat before calling psyco.full(). If you do not do this,
+ Python will crash. For best results, call enablePackrat() immediately
+ after importing pyparsing.
+ """
+ if not ParserElement._packratEnabled:
+ ParserElement._packratEnabled = True
+ ParserElement._parse = ParserElement._parseCache
+ enablePackrat = staticmethod(enablePackrat)
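A toy illustration of the packrat cache implemented by `_parseCache` above: both successful results and raised exceptions are memoized under a (parser, location) key, so a repeated match attempt at the same position is answered without re-parsing. The names here are hypothetical, not part of the pyparsing API:

```python
# Memoize literal matching keyed on (literal, location), caching failures
# (as exception objects) alongside successes, like _parseCache does.
cache = {}
calls = 0

def parse_literal(lit, s, loc):
    global calls
    key = (lit, loc)
    if key in cache:
        value = cache[key]
        if isinstance(value, Exception):
            raise value
        return value
    calls += 1  # count actual (non-cached) parse attempts
    if s.startswith(lit, loc):
        cache[key] = loc + len(lit)
        return cache[key]
    exc = ValueError("no match at %d" % loc)
    cache[key] = exc
    raise exc

parse_literal("ab", "abab", 0)
parse_literal("ab", "abab", 0)   # served from the cache
print(calls)  # -> 1
```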
+
+ def parseString( self, instring, parseAll=False ):
+ """Execute the parse expression with the given string.
+ This is the main interface to the client code, once the complete
+ expression has been built.
+
+ If you want the grammar to require that the entire input string be
+ successfully parsed, then set parseAll to True (equivalent to ending
+ the grammar with StringEnd()).
+
+ Note: parseString implicitly calls expandtabs() on the input string,
+ in order to report proper column numbers in parse actions.
+ If the input string contains tabs and
+ the grammar uses parse actions that use the loc argument to index into the
+ string being parsed, you can ensure you have a consistent view of the input
+ string by:
+ - calling parseWithTabs on your grammar before calling parseString
+ (see L{I{parseWithTabs}<parseWithTabs>})
+ - defining your parse action using the full (s,loc,toks) signature, and
+ referencing the input string using the parse action's s argument
+ - explicitly expanding the tabs in your input string before calling
+ parseString
+ """
+ ParserElement.resetCache()
+ if not self.streamlined:
+ self.streamline()
+ #~ self.saveAsList = True
+ for e in self.ignoreExprs:
+ e.streamline()
+ if not self.keepTabs:
+ instring = instring.expandtabs()
+ try:
+ loc, tokens = self._parse( instring, 0 )
+ if parseAll:
+ loc = self.preParse( instring, loc )
+ StringEnd()._parse( instring, loc )
+ except ParseBaseException:
+ exc = sys.exc_info()[1]
+ # catch and re-raise exception from here, clears out pyparsing internal stack trace
+ raise exc
+ else:
+ return tokens
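The tab-handling note above can be seen with plain `str.expandtabs` (a standalone illustration, independent of pyparsing): after expansion, character offsets and on-screen columns agree, which is what loc-based parse actions rely on.

```python
# Why parseString expands tabs: loc-based column arithmetic in parse
# actions assumes tab-free input.
s = "a\tb"
expanded = s.expandtabs()          # default tab stop of 8
assert expanded == "a" + " " * 7 + "b"
assert expanded.index("b") == 8    # offset now matches column 8
```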
+
+ def scanString( self, instring, maxMatches=_MAX_INT ):
+ """Scan the input string for expression matches. Each match will return the
+ matching tokens, start location, and end location. May be called with optional
+ maxMatches argument, to clip scanning after 'n' matches are found.
+
+ Note that the start and end locations are reported relative to the string
+ being parsed. See L{I{parseString}<parseString>} for more information on parsing
+ strings with embedded tabs."""
+ if not self.streamlined:
+ self.streamline()
+ for e in self.ignoreExprs:
+ e.streamline()
+
+ if not self.keepTabs:
+ instring = _ustr(instring).expandtabs()
+ instrlen = len(instring)
+ loc = 0
+ preparseFn = self.preParse
+ parseFn = self._parse
+ ParserElement.resetCache()
+ matches = 0
+ try:
+ while loc <= instrlen and matches < maxMatches:
+ try:
+ preloc = preparseFn( instring, loc )
+ nextLoc,tokens = parseFn( instring, preloc, callPreParse=False )
+ except ParseException:
+ loc = preloc+1
+ else:
+ if nextLoc > loc:
+ matches += 1
+ yield tokens, preloc, nextLoc
+ loc = nextLoc
+ else:
+ loc = preloc+1
+ except ParseBaseException:
+ pe = sys.exc_info()[1]
+ raise pe
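The scanning loop above can be sketched with a plain regular expression standing in for `preparseFn`/`parseFn` (a toy model, not pyparsing's code): try a match at each location, yield the tokens with their start/end, and resume at the match end, or at start+1 on failure or a zero-width match.

```python
import re

def scan(pattern, s):
    """Toy version of scanString's loop: yield (text, start, end),
    resuming at the match end, mirroring the preloc/nextLoc bookkeeping."""
    loc = 0
    rx = re.compile(pattern)
    while loc <= len(s):
        m = rx.match(s, loc)
        if m and m.end() > loc:
            yield m.group(), m.start(), m.end()
            loc = m.end()
        else:
            loc += 1     # no progress here; step forward one character

assert list(scan(r"\d+", "a12b345")) == [("12", 1, 3), ("345", 4, 7)]
```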
+
+ def transformString( self, instring ):
+ """Extension to scanString, to modify matching text with modified tokens that may
+ be returned from a parse action. To use transformString, define a grammar and
+ attach a parse action to it that modifies the returned token list.
+ Invoking transformString() on a target string will then scan for matches,
+ and replace the matched text patterns according to the logic in the parse
+ action. transformString() returns the resulting transformed string."""
+ out = []
+ lastE = 0
+ # force preservation of <TAB>s, to minimize unwanted transformation of string, and to
+ # keep string locs straight between transformString and scanString
+ self.keepTabs = True
+ try:
+ for t,s,e in self.scanString( instring ):
+ out.append( instring[lastE:s] )
+ if t:
+ if isinstance(t,ParseResults):
+ out += t.asList()
+ elif isinstance(t,list):
+ out += t
+ else:
+ out.append(t)
+ lastE = e
+ out.append(instring[lastE:])
+ return "".join(map(_ustr,out))
+ except ParseBaseException:
+ pe = sys.exc_info()[1]
+ raise pe
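The stitching performed by transformString can be shown in isolation (a hypothetical helper mirroring the out/lastE bookkeeping above): copy unmatched text through unchanged and substitute each matched span.

```python
def transform(s, matches, repl):
    """Toy version of transformString's stitching: matches is a list of
    (start, end) spans; repl maps matched text to replacement text."""
    out, last = [], 0
    for start, end in matches:
        out.append(s[last:start])      # unmatched text passes through
        out.append(repl(s[start:end])) # matched text is replaced
        last = end
    out.append(s[last:])               # trailing tail
    return "".join(out)

assert transform("a12b345", [(1, 3), (4, 7)], lambda t: "#") == "a#b#"
```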
+
+ def searchString( self, instring, maxMatches=_MAX_INT ):
+ """Another extension to scanString, simplifying the access to the tokens found
+ to match the given parse expression. May be called with optional
+ maxMatches argument, to clip searching after 'n' matches are found.
+ """
+ try:
+ return ParseResults([ t for t,s,e in self.scanString( instring, maxMatches ) ])
+ except ParseBaseException:
+ pe = sys.exc_info()[1]
+ raise pe
+
+ def __add__(self, other ):
+ """Implementation of + operator - returns And"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return And( [ self, other ] )
+
+ def __radd__(self, other ):
+ """Implementation of + operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other + self
+
+ def __sub__(self, other):
+ """Implementation of - operator, returns And with error stop"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return And( [ self, And._ErrorStop(), other ] )
+
+ def __rsub__(self, other ):
+ """Implementation of - operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other - self
+
+ def __mul__(self,other):
+ if isinstance(other,int):
+ minElements, optElements = other,0
+ elif isinstance(other,tuple):
+ other = (other + (None, None))[:2]
+ if other[0] is None:
+ other = (0, other[1])
+ if isinstance(other[0],int) and other[1] is None:
+ if other[0] == 0:
+ return ZeroOrMore(self)
+ if other[0] == 1:
+ return OneOrMore(self)
+ else:
+ return self*other[0] + ZeroOrMore(self)
+ elif isinstance(other[0],int) and isinstance(other[1],int):
+ minElements, optElements = other
+ optElements -= minElements
+ else:
+ raise TypeError("cannot multiply 'ParserElement' and ('%s','%s') objects" % (type(other[0]),type(other[1])))
+ else:
+ raise TypeError("cannot multiply 'ParserElement' and '%s' objects" % type(other))
+
+ if minElements < 0:
+ raise ValueError("cannot multiply ParserElement by negative value")
+ if optElements < 0:
+ raise ValueError("second tuple value must be greater or equal to first tuple value")
+ if minElements == optElements == 0:
+ raise ValueError("cannot multiply ParserElement by 0 or (0,0)")
+
+ if (optElements):
+ def makeOptionalList(n):
+ if n>1:
+ return Optional(self + makeOptionalList(n-1))
+ else:
+ return Optional(self)
+ if minElements:
+ if minElements == 1:
+ ret = self + makeOptionalList(optElements)
+ else:
+ ret = And([self]*minElements) + makeOptionalList(optElements)
+ else:
+ ret = makeOptionalList(optElements)
+ else:
+ if minElements == 1:
+ ret = self
+ else:
+ ret = And([self]*minElements)
+ return ret
+
+ def __rmul__(self, other):
+ return self.__mul__(other)
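The argument normalization in __mul__ can be sketched as a standalone helper (hypothetical, mirroring the branches above): an int means that many required copies, a tuple means (min, max), and None at either end produces the open-ended repetition cases.

```python
def normalize_mul(other):
    """Toy version of __mul__'s argument handling: return
    (minElements, optElements), or a tag for the open-ended cases."""
    if isinstance(other, int):
        return other, 0
    if isinstance(other, tuple):
        other = (other + (None, None))[:2]   # pad (n,) to (n, None)
        if other[0] is None:
            other = (0, other[1])
        if isinstance(other[0], int) and other[1] is None:
            if other[0] == 0:
                return "ZeroOrMore"
            if other[0] == 1:
                return "OneOrMore"
            return "%d required + ZeroOrMore" % other[0]
        mn, mx = other
        if mx - mn < 0:
            raise ValueError("second value must be >= first value")
        return mn, mx - mn
    raise TypeError("cannot multiply by %r" % (other,))

assert normalize_mul(3) == (3, 0)
assert normalize_mul((2, 5)) == (2, 3)     # 2 required, up to 3 optional
assert normalize_mul((None, 4)) == (0, 4)  # same as expr*(0,4)
```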
+
+ def __or__(self, other ):
+ """Implementation of | operator - returns MatchFirst"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return MatchFirst( [ self, other ] )
+
+ def __ror__(self, other ):
+ """Implementation of | operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other | self
+
+ def __xor__(self, other ):
+ """Implementation of ^ operator - returns Or"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return Or( [ self, other ] )
+
+ def __rxor__(self, other ):
+ """Implementation of ^ operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other ^ self
+
+ def __and__(self, other ):
+ """Implementation of & operator - returns Each"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return Each( [ self, other ] )
+
+ def __rand__(self, other ):
+ """Implementation of & operator when left operand is not a ParserElement"""
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ if not isinstance( other, ParserElement ):
+ warnings.warn("Cannot combine element of type %s with ParserElement" % type(other),
+ SyntaxWarning, stacklevel=2)
+ return None
+ return other & self
+
+ def __invert__( self ):
+ """Implementation of ~ operator - returns NotAny"""
+ return NotAny( self )
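The operator overloads above all follow the same shape: promote a bare string to a literal, then wrap both operands in a composite node (And, MatchFirst, Or, Each). A minimal sketch with hypothetical classes, not pyparsing's own:

```python
# Toy sketch of combinator-building via operator overloading: '+' builds
# an And-like node, '|' a MatchFirst-like node; strings become literals.
class Expr:
    def _wrap(self, other):
        return Lit(other) if isinstance(other, str) else other
    def __add__(self, other):
        return Seq(self, self._wrap(other))
    def __radd__(self, other):        # left operand is a plain string
        return Seq(self._wrap(other), self)
    def __or__(self, other):
        return Alt(self, self._wrap(other))
    def __ror__(self, other):
        return Alt(self._wrap(other), self)

class Lit(Expr):
    def __init__(self, s):
        self.s = s
    def __repr__(self):
        return repr(self.s)

class Seq(Expr):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __repr__(self):
        return "And(%r, %r)" % (self.a, self.b)

class Alt(Expr):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __repr__(self):
        return "MatchFirst(%r, %r)" % (self.a, self.b)

tree = "if" + (Lit("x") | "y")   # __radd__ promotes the leading string
assert repr(tree) == "And('if', MatchFirst('x', 'y'))"
```

The reflected variants (`__radd__`, `__ror__`, ...) are what make `"if" + expr` work when the left operand is not a ParserElement.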
+
+ def __call__(self, name):
+ """Shortcut for setResultsName, with listAllMatches=default::
+ userdata = Word(alphas).setResultsName("name") + Word(nums+"-").setResultsName("socsecno")
+ could be written as::
+ userdata = Word(alphas)("name") + Word(nums+"-")("socsecno")
+ """
+ return self.setResultsName(name)
+
+ def suppress( self ):
+ """Suppresses the output of this ParserElement; useful to keep punctuation from
+ cluttering up returned output.
+ """
+ return Suppress( self )
+
+ def leaveWhitespace( self ):
+ """Disables the skipping of whitespace before matching the characters in the
+ ParserElement's defined pattern. This is normally only used internally by
+ the pyparsing module, but may be needed in some whitespace-sensitive grammars.
+ """
+ self.skipWhitespace = False
+ return self
+
+ def setWhitespaceChars( self, chars ):
+ """Overrides the default whitespace chars
+ """
+ self.skipWhitespace = True
+ self.whiteChars = chars
+ self.copyDefaultWhiteChars = False
+ return self
+
+ def parseWithTabs( self ):
+ """Overrides default behavior to expand <TAB>s to spaces before parsing the input string.
+ Must be called before parseString when the input grammar contains elements that
+ match <TAB> characters."""
+ self.keepTabs = True
+ return self
+
+ def ignore( self, other ):
+ """Define expression to be ignored (e.g., comments) while doing pattern
+ matching; may be called repeatedly, to define multiple comment or other
+ ignorable patterns.
+ """
+ if isinstance( other, Suppress ):
+ if other not in self.ignoreExprs:
+ self.ignoreExprs.append( other )
+ else:
+ self.ignoreExprs.append( Suppress( other ) )
+ return self
+
+ def setDebugActions( self, startAction, successAction, exceptionAction ):
+ """Enable display of debugging messages while doing pattern matching."""
+ self.debugActions = (startAction or _defaultStartDebugAction,
+ successAction or _defaultSuccessDebugAction,
+ exceptionAction or _defaultExceptionDebugAction)
+ self.debug = True
+ return self
+
+ def setDebug( self, flag=True ):
+ """Enable display of debugging messages while doing pattern matching.
+ Set flag to True to enable, False to disable."""
+ if flag:
+ self.setDebugActions( _defaultStartDebugAction, _defaultSuccessDebugAction, _defaultExceptionDebugAction )
+ else:
+ self.debug = False
+ return self
+
+ def __str__( self ):
+ return self.name
+
+ def __repr__( self ):
+ return _ustr(self)
+
+ def streamline( self ):
+ self.streamlined = True
+ self.strRepr = None
+ return self
+
+ def checkRecursion( self, parseElementList ):
+ pass
+
+ def validate( self, validateTrace=[] ):
+ """Check defined expressions for valid structure, check for infinite recursive definitions."""
+ self.checkRecursion( [] )
+
+ def parseFile( self, file_or_filename, parseAll=False ):
+ """Execute the parse expression on the given file or filename.
+ If a filename is specified (instead of a file object),
+ the entire file is opened, read, and closed before parsing.
+ """
+ try:
+ file_contents = file_or_filename.read()
+ except AttributeError:
+ f = open(file_or_filename, "rb")
+ file_contents = f.read()
+ f.close()
+ try:
+ return self.parseString(file_contents, parseAll)
+ except ParseBaseException:
+ # catch and re-raise exception from here, clears out pyparsing internal stack trace
+ exc = sys.exc_info()[1]
+ raise exc
+
+ def getException(self):
+ return ParseException("",0,self.errmsg,self)
+
+ def __getattr__(self,aname):
+ if aname == "myException":
+ self.myException = ret = self.getException()
+ return ret
+ else:
+ raise AttributeError("no such attribute " + aname)
+
+ def __eq__(self,other):
+ if isinstance(other, ParserElement):
+ return self is other or self.__dict__ == other.__dict__
+ elif isinstance(other, basestring):
+ try:
+ self.parseString(_ustr(other), parseAll=True)
+ return True
+ except ParseBaseException:
+ return False
+ else:
+ return super(ParserElement,self)==other
+
+ def __ne__(self,other):
+ return not (self == other)
+
+ def __hash__(self):
+ return hash(id(self))
+
+ def __req__(self,other):
+ return self == other
+
+ def __rne__(self,other):
+ return not (self == other)
+
+
+class Token(ParserElement):
+ """Abstract ParserElement subclass, for defining atomic matching patterns."""
+ def __init__( self ):
+ super(Token,self).__init__( savelist=False )
+ #self.myException = ParseException("",0,"",self)
+
+ def setName(self, name):
+ s = super(Token,self).setName(name)
+ self.errmsg = "Expected " + self.name
+ #s.myException.msg = self.errmsg
+ return s
+
+
+class Empty(Token):
+ """An empty token, will always match."""
+ def __init__( self ):
+ super(Empty,self).__init__()
+ self.name = "Empty"
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+
+
+class NoMatch(Token):
+ """A token that will never match."""
+ def __init__( self ):
+ super(NoMatch,self).__init__()
+ self.name = "NoMatch"
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+ self.errmsg = "Unmatchable token"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+
+class Literal(Token):
+ """Token to exactly match a specified string."""
+ def __init__( self, matchString ):
+ super(Literal,self).__init__()
+ self.match = matchString
+ self.matchLen = len(matchString)
+ try:
+ self.firstMatchChar = matchString[0]
+ except IndexError:
+ warnings.warn("null string passed to Literal; use Empty() instead",
+ SyntaxWarning, stacklevel=2)
+ self.__class__ = Empty
+ self.name = '"%s"' % _ustr(self.match)
+ self.errmsg = "Expected " + self.name
+ self.mayReturnEmpty = False
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+
+ # Performance tuning: this routine gets called a *lot*
+ # if this is a single character match string and the first character matches,
+ # short-circuit as quickly as possible, and avoid calling startswith
+ #~ @profile
+ def parseImpl( self, instring, loc, doActions=True ):
+ if (instring[loc] == self.firstMatchChar and
+ (self.matchLen==1 or instring.startswith(self.match,loc)) ):
+ return loc+self.matchLen, self.match
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+_L = Literal
+
+class Keyword(Token):
+ """Token to exactly match a specified string as a keyword; that is, the match must
+ not be immediately preceded or followed by an identifier character. Compare with Literal::
+ Literal("if") will match the leading 'if' in 'ifAndOnlyIf'.
+ Keyword("if") will not; it will only match the leading 'if' in 'if x=1', or 'if(y==2)'
+ Accepts two optional constructor arguments in addition to the keyword string:
+ identChars is a string of characters that would be valid identifier characters,
+ defaulting to all alphanumerics + "_" and "$"; caseless allows case-insensitive
+ matching, default is False.
+ """
+ DEFAULT_KEYWORD_CHARS = alphanums+"_$"
+
+ def __init__( self, matchString, identChars=DEFAULT_KEYWORD_CHARS, caseless=False ):
+ super(Keyword,self).__init__()
+ self.match = matchString
+ self.matchLen = len(matchString)
+ try:
+ self.firstMatchChar = matchString[0]
+ except IndexError:
+ warnings.warn("null string passed to Keyword; use Empty() instead",
+ SyntaxWarning, stacklevel=2)
+ self.name = '"%s"' % self.match
+ self.errmsg = "Expected " + self.name
+ self.mayReturnEmpty = False
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.caseless = caseless
+ if caseless:
+ self.caselessmatch = matchString.upper()
+ identChars = identChars.upper()
+ self.identChars = _str2dict(identChars)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.caseless:
+ if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and
+ (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) and
+ (loc == 0 or instring[loc-1].upper() not in self.identChars) ):
+ return loc+self.matchLen, self.match
+ else:
+ if (instring[loc] == self.firstMatchChar and
+ (self.matchLen==1 or instring.startswith(self.match,loc)) and
+ (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen] not in self.identChars) and
+ (loc == 0 or instring[loc-1] not in self.identChars) ):
+ return loc+self.matchLen, self.match
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ def copy(self):
+ c = super(Keyword,self).copy()
+ c.identChars = Keyword.DEFAULT_KEYWORD_CHARS
+ return c
+
+ def setDefaultKeywordChars( chars ):
+ """Overrides the default Keyword chars
+ """
+ Keyword.DEFAULT_KEYWORD_CHARS = chars
+ setDefaultKeywordChars = staticmethod(setDefaultKeywordChars)
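The boundary test that distinguishes Keyword from Literal can be shown standalone (a toy helper mirroring parseImpl's checks above): the literal text must match, and the characters on both sides must not be identifier characters.

```python
# Toy sketch of Keyword's boundary check (hypothetical helper).
IDENT = set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_$"
)

def keyword_at(kw, s, loc):
    if not s.startswith(kw, loc):
        return False
    if loc > 0 and s[loc - 1] in IDENT:      # preceding char check
        return False
    end = loc + len(kw)
    return end >= len(s) or s[end] not in IDENT  # following char check

assert keyword_at("if", "if x=1", 0)
assert not keyword_at("if", "ifAndOnlyIf", 0)   # Literal would match here
assert keyword_at("if", "if(y==2)", 0)
```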
+
+class CaselessLiteral(Literal):
+ """Token to match a specified string, ignoring case of letters.
+ Note: the matched results will always be in the case of the given
+ match string, NOT the case of the input text.
+ """
+ def __init__( self, matchString ):
+ super(CaselessLiteral,self).__init__( matchString.upper() )
+ # Preserve the defining literal.
+ self.returnString = matchString
+ self.name = "'%s'" % self.returnString
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if instring[ loc:loc+self.matchLen ].upper() == self.match:
+ return loc+self.matchLen, self.returnString
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class CaselessKeyword(Keyword):
+ def __init__( self, matchString, identChars=Keyword.DEFAULT_KEYWORD_CHARS ):
+ super(CaselessKeyword,self).__init__( matchString, identChars, caseless=True )
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if ( (instring[ loc:loc+self.matchLen ].upper() == self.caselessmatch) and
+ (loc >= len(instring)-self.matchLen or instring[loc+self.matchLen].upper() not in self.identChars) ):
+ return loc+self.matchLen, self.match
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class Word(Token):
+ """Token for matching words composed of allowed character sets.
+ Defined with string containing all allowed initial characters,
+ an optional string containing allowed body characters (if omitted,
+ defaults to the initial character set), and an optional minimum,
+ maximum, and/or exact length. The default value for min is 1 (a
+ minimum value < 1 is not valid); the default values for max and exact
+ are 0, meaning no maximum or exact length restriction.
+ """
+ def __init__( self, initChars, bodyChars=None, min=1, max=0, exact=0, asKeyword=False ):
+ super(Word,self).__init__()
+ self.initCharsOrig = initChars
+ self.initChars = _str2dict(initChars)
+ if bodyChars :
+ self.bodyCharsOrig = bodyChars
+ self.bodyChars = _str2dict(bodyChars)
+ else:
+ self.bodyCharsOrig = initChars
+ self.bodyChars = _str2dict(initChars)
+
+ self.maxSpecified = max > 0
+
+ if min < 1:
+ raise ValueError("cannot specify a minimum length < 1; use Optional(Word()) if zero-length word is permitted")
+
+ self.minLen = min
+
+ if max > 0:
+ self.maxLen = max
+ else:
+ self.maxLen = _MAX_INT
+
+ if exact > 0:
+ self.maxLen = exact
+ self.minLen = exact
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.asKeyword = asKeyword
+
+ if ' ' not in self.initCharsOrig+self.bodyCharsOrig and (min==1 and max==0 and exact==0):
+ if self.bodyCharsOrig == self.initCharsOrig:
+ self.reString = "[%s]+" % _escapeRegexRangeChars(self.initCharsOrig)
+ elif len(self.bodyCharsOrig) == 1:
+ self.reString = "%s[%s]*" % \
+ (re.escape(self.initCharsOrig),
+ _escapeRegexRangeChars(self.bodyCharsOrig),)
+ else:
+ self.reString = "[%s][%s]*" % \
+ (_escapeRegexRangeChars(self.initCharsOrig),
+ _escapeRegexRangeChars(self.bodyCharsOrig),)
+ if self.asKeyword:
+ self.reString = r"\b"+self.reString+r"\b"
+ try:
+ self.re = re.compile( self.reString )
+ except:
+ self.re = None
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.re:
+ result = self.re.match(instring,loc)
+ if not result:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ loc = result.end()
+ return loc,result.group()
+
+ if not(instring[ loc ] in self.initChars):
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ start = loc
+ loc += 1
+ instrlen = len(instring)
+ bodychars = self.bodyChars
+ maxloc = start + self.maxLen
+ maxloc = min( maxloc, instrlen )
+ while loc < maxloc and instring[loc] in bodychars:
+ loc += 1
+
+ throwException = False
+ if loc - start < self.minLen:
+ throwException = True
+ if self.maxSpecified and loc < instrlen and instring[loc] in bodychars:
+ throwException = True
+ if self.asKeyword:
+ if (start>0 and instring[start-1] in bodychars) or (loc<instrlen and instring[loc] in bodychars):
+ throwException = True
+
+ if throwException:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ return loc, instring[start:loc]
+
+ def __str__( self ):
+ try:
+ return super(Word,self).__str__()
+ except:
+ pass
+
+
+ if self.strRepr is None:
+
+ def charsAsStr(s):
+ if len(s)>4:
+ return s[:4]+"..."
+ else:
+ return s
+
+ if ( self.initCharsOrig != self.bodyCharsOrig ):
+ self.strRepr = "W:(%s,%s)" % ( charsAsStr(self.initCharsOrig), charsAsStr(self.bodyCharsOrig) )
+ else:
+ self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig)
+
+ return self.strRepr
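Word's matching rule — first character from the initial set, then a run of body characters, bounded by min/max length — can be sketched without the regex fast path (a toy model of the slow path in parseImpl above):

```python
def match_word(s, loc, init, body, min_len=1, max_len=None):
    """Toy version of Word's character-loop matching (hypothetical)."""
    if loc >= len(s) or s[loc] not in init:
        return None
    start, loc = loc, loc + 1
    limit = len(s) if max_len is None else min(len(s), start + max_len)
    while loc < limit and s[loc] in body:
        loc += 1
    if loc - start < min_len:
        return None
    return s[start:loc]

# identifier-like words: letter first, then letters or digits
assert match_word("x1y 2", 0, "xyz", "xyz12") == "x1y"
assert match_word("2ab", 0, "xyz", "xyz12") is None  # bad initial char
```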
+
+
+class Regex(Token):
+ """Token for matching strings that match a given regular expression.
+ Defined with a string specifying the regular expression in a form recognized by the built-in Python re module.
+ """
+ def __init__( self, pattern, flags=0):
+ """The parameters pattern and flags are passed to the re.compile() function as-is. See the Python re module for an explanation of the acceptable patterns and flags."""
+ super(Regex,self).__init__()
+
+ if len(pattern) == 0:
+ warnings.warn("null string passed to Regex; use Empty() instead",
+ SyntaxWarning, stacklevel=2)
+
+ self.pattern = pattern
+ self.flags = flags
+
+ try:
+ self.re = re.compile(self.pattern, self.flags)
+ self.reString = self.pattern
+ except sre_constants.error:
+ warnings.warn("invalid pattern (%s) passed to Regex" % pattern,
+ SyntaxWarning, stacklevel=2)
+ raise
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ result = self.re.match(instring,loc)
+ if not result:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ loc = result.end()
+ d = result.groupdict()
+ ret = ParseResults(result.group())
+ if d:
+ for k in d:
+ ret[k] = d[k]
+ return loc,ret
+
+ def __str__( self ):
+ try:
+ return super(Regex,self).__str__()
+ except:
+ pass
+
+ if self.strRepr is None:
+ self.strRepr = "Re:(%s)" % repr(self.pattern)
+
+ return self.strRepr
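Since parseImpl copies named groups from `groupdict()` into the results, a pattern's `(?P<name>...)` groups become keyed entries. A standalone demonstration of that re behavior (plain `re`, no pyparsing):

```python
import re

# Named groups in a pattern surface via groupdict(), which is exactly
# what Regex.parseImpl copies into its ParseResults.
m = re.match(r"(?P<key>\w+)=(?P<value>\w+)", "color=red")
assert m.group() == "color=red"
assert m.groupdict() == {"key": "color", "value": "red"}
```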
+
+
+class QuotedString(Token):
+ """Token for matching strings that are delimited by quoting characters.
+ """
+ def __init__( self, quoteChar, escChar=None, escQuote=None, multiline=False, unquoteResults=True, endQuoteChar=None):
+ """
+ Defined with the following parameters:
+ - quoteChar - string of one or more characters defining the quote delimiting string
+ - escChar - character to escape quotes, typically backslash (default=None)
+ - escQuote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=None)
+ - multiline - boolean indicating whether quotes can span multiple lines (default=False)
+ - unquoteResults - boolean indicating whether the matched text should be unquoted (default=True)
+ - endQuoteChar - string of one or more characters defining the end of the quote delimited string (default=None => same as quoteChar)
+ """
+ super(QuotedString,self).__init__()
+
+ # remove white space from quote chars - won't work anyway
+ quoteChar = quoteChar.strip()
+ if len(quoteChar) == 0:
+ warnings.warn("quoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
+ raise SyntaxError()
+
+ if endQuoteChar is None:
+ endQuoteChar = quoteChar
+ else:
+ endQuoteChar = endQuoteChar.strip()
+ if len(endQuoteChar) == 0:
+ warnings.warn("endQuoteChar cannot be the empty string",SyntaxWarning,stacklevel=2)
+ raise SyntaxError()
+
+ self.quoteChar = quoteChar
+ self.quoteCharLen = len(quoteChar)
+ self.firstQuoteChar = quoteChar[0]
+ self.endQuoteChar = endQuoteChar
+ self.endQuoteCharLen = len(endQuoteChar)
+ self.escChar = escChar
+ self.escQuote = escQuote
+ self.unquoteResults = unquoteResults
+
+ if multiline:
+ self.flags = re.MULTILINE | re.DOTALL
+ self.pattern = r'%s(?:[^%s%s]' % \
+ ( re.escape(self.quoteChar),
+ _escapeRegexRangeChars(self.endQuoteChar[0]),
+ (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
+ else:
+ self.flags = 0
+ self.pattern = r'%s(?:[^%s\n\r%s]' % \
+ ( re.escape(self.quoteChar),
+ _escapeRegexRangeChars(self.endQuoteChar[0]),
+ (escChar is not None and _escapeRegexRangeChars(escChar) or '') )
+ if len(self.endQuoteChar) > 1:
+ self.pattern += (
+ '|(?:' + ')|(?:'.join(["%s[^%s]" % (re.escape(self.endQuoteChar[:i]),
+ _escapeRegexRangeChars(self.endQuoteChar[i]))
+ for i in range(len(self.endQuoteChar)-1,0,-1)]) + ')'
+ )
+ if escQuote:
+ self.pattern += (r'|(?:%s)' % re.escape(escQuote))
+ if escChar:
+ self.pattern += (r'|(?:%s.)' % re.escape(escChar))
+ self.escCharReplacePattern = re.escape(self.escChar)+"(.)"
+ self.pattern += (r')*%s' % re.escape(self.endQuoteChar))
+
+ try:
+ self.re = re.compile(self.pattern, self.flags)
+ self.reString = self.pattern
+ except sre_constants.error:
+ warnings.warn("invalid pattern (%s) passed to Regex" % self.pattern,
+ SyntaxWarning, stacklevel=2)
+ raise
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ result = instring[loc] == self.firstQuoteChar and self.re.match(instring,loc) or None
+ if not result:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ loc = result.end()
+ ret = result.group()
+
+ if self.unquoteResults:
+
+ # strip off quotes
+ ret = ret[self.quoteCharLen:-self.endQuoteCharLen]
+
+ if isinstance(ret,basestring):
+ # replace escaped characters
+ if self.escChar:
+ ret = re.sub(self.escCharReplacePattern,r"\g<1>",ret)
+
+ # replace escaped quotes
+ if self.escQuote:
+ ret = ret.replace(self.escQuote, self.endQuoteChar)
+
+ return loc, ret
+
+ def __str__( self ):
+ try:
+ return super(QuotedString,self).__str__()
+ except:
+ pass
+
+ if self.strRepr is None:
+ self.strRepr = "quoted string, starting with %s ending with %s" % (self.quoteChar, self.endQuoteChar)
+
+ return self.strRepr
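The unquoting step in parseImpl above can be shown in isolation (a hypothetical helper, not the class's regex machinery): strip the delimiters, undo escChar escapes, then undo escQuote doubling.

```python
import re

def unquote(matched, quote='"', esc_char="\\", esc_quote='""'):
    """Toy version of QuotedString's unquoteResults handling."""
    ret = matched[len(quote):-len(quote)]          # strip delimiters
    if esc_char:
        # "\x" -> "x", like escCharReplacePattern above
        ret = re.sub(re.escape(esc_char) + "(.)", r"\g<1>", ret)
    if esc_quote:
        ret = ret.replace(esc_quote, quote)        # SQL-style "" -> "
    return ret

assert unquote('"a\\"b"') == 'a"b'
assert unquote('"say ""hi"""') == 'say "hi"'
```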
+
+
+class CharsNotIn(Token):
+ """Token for matching words composed of characters *not* in a given set.
+ Defined with string containing all disallowed characters, and an optional
+ minimum, maximum, and/or exact length. The default value for min is 1 (a
+ minimum value < 1 is not valid); the default values for max and exact
+ are 0, meaning no maximum or exact length restriction.
+ """
+ def __init__( self, notChars, min=1, max=0, exact=0 ):
+ super(CharsNotIn,self).__init__()
+ self.skipWhitespace = False
+ self.notChars = notChars
+
+ if min < 1:
+ raise ValueError("cannot specify a minimum length < 1; use Optional(CharsNotIn()) if zero-length char group is permitted")
+
+ self.minLen = min
+
+ if max > 0:
+ self.maxLen = max
+ else:
+ self.maxLen = _MAX_INT
+
+ if exact > 0:
+ self.maxLen = exact
+ self.minLen = exact
+
+ self.name = _ustr(self)
+ self.errmsg = "Expected " + self.name
+ self.mayReturnEmpty = ( self.minLen == 0 )
+ #self.myException.msg = self.errmsg
+ self.mayIndexError = False
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if instring[loc] in self.notChars:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ start = loc
+ loc += 1
+ notchars = self.notChars
+ maxlen = min( start+self.maxLen, len(instring) )
+ while loc < maxlen and \
+ (instring[loc] not in notchars):
+ loc += 1
+
+ if loc - start < self.minLen:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ return loc, instring[start:loc]
+
+ def __str__( self ):
+ try:
+ return super(CharsNotIn, self).__str__()
+ except:
+ pass
+
+ if self.strRepr is None:
+ if len(self.notChars) > 4:
+ self.strRepr = "!W:(%s...)" % self.notChars[:4]
+ else:
+ self.strRepr = "!W:(%s)" % self.notChars
+
+ return self.strRepr
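CharsNotIn is the complement of Word: it consumes the longest run of characters *not* in the given set. A toy version of the loop in parseImpl above (hypothetical helper):

```python
def chars_not_in(s, loc, not_chars, min_len=1):
    """Toy version of CharsNotIn's matching loop."""
    start = loc
    while loc < len(s) and s[loc] not in not_chars:
        loc += 1
    return s[start:loc] if loc - start >= min_len else None

# a csv-ish field: everything up to the next comma
assert chars_not_in("abc,def", 0, ",") == "abc"
assert chars_not_in(",x", 0, ",") is None   # zero-length run fails min=1
```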
+
+class White(Token):
+ """Special matching class for matching whitespace. Normally, whitespace is ignored
+ by pyparsing grammars. This class is included when some whitespace structures
+ are significant. Define with a string containing the whitespace characters to be
+ matched; default is " \\t\\r\\n". Also takes optional min, max, and exact arguments,
+ as defined for the Word class."""
+ whiteStrs = {
+ " " : "<SPC>",
+ "\t": "<TAB>",
+ "\n": "<LF>",
+ "\r": "<CR>",
+ "\f": "<FF>",
+ }
+ def __init__(self, ws=" \t\r\n", min=1, max=0, exact=0):
+ super(White,self).__init__()
+ self.matchWhite = ws
+ self.setWhitespaceChars( "".join([c for c in self.whiteChars if c not in self.matchWhite]) )
+ #~ self.leaveWhitespace()
+ self.name = ("".join([White.whiteStrs[c] for c in self.matchWhite]))
+ self.mayReturnEmpty = True
+ self.errmsg = "Expected " + self.name
+ #self.myException.msg = self.errmsg
+
+ self.minLen = min
+
+ if max > 0:
+ self.maxLen = max
+ else:
+ self.maxLen = _MAX_INT
+
+ if exact > 0:
+ self.maxLen = exact
+ self.minLen = exact
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if not(instring[ loc ] in self.matchWhite):
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ start = loc
+ loc += 1
+ maxloc = start + self.maxLen
+ maxloc = min( maxloc, len(instring) )
+ while loc < maxloc and instring[loc] in self.matchWhite:
+ loc += 1
+
+ if loc - start < self.minLen:
+ #~ raise ParseException( instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+ return loc, instring[start:loc]
+
+
+class _PositionToken(Token):
+ def __init__( self ):
+ super(_PositionToken,self).__init__()
+ self.name=self.__class__.__name__
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+
+class GoToColumn(_PositionToken):
+ """Token to advance to a specific column of input text; useful for tabular report scraping."""
+ def __init__( self, colno ):
+ super(GoToColumn,self).__init__()
+ self.col = colno
+
+ def preParse( self, instring, loc ):
+ if col(loc,instring) != self.col:
+ instrlen = len(instring)
+ if self.ignoreExprs:
+ loc = self._skipIgnorables( instring, loc )
+ while loc < instrlen and instring[loc].isspace() and col( loc, instring ) != self.col :
+ loc += 1
+ return loc
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ thiscol = col( loc, instring )
+ if thiscol > self.col:
+ raise ParseException( instring, loc, "Text not in expected column", self )
+ newloc = loc + self.col - thiscol
+ ret = instring[ loc: newloc ]
+ return newloc, ret
+
+class LineStart(_PositionToken):
+ """Matches if current position is at the beginning of a line within the parse string"""
+ def __init__( self ):
+ super(LineStart,self).__init__()
+ self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") )
+ self.errmsg = "Expected start of line"
+ #self.myException.msg = self.errmsg
+
+ def preParse( self, instring, loc ):
+ preloc = super(LineStart,self).preParse(instring,loc)
+ if instring[preloc] == "\n":
+ loc += 1
+ return loc
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if not( loc==0 or
+ (loc == self.preParse( instring, 0 )) or
+ (instring[loc-1] == "\n") ): #col(loc, instring) != 1:
+ #~ raise ParseException( instring, loc, "Expected start of line" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+class LineEnd(_PositionToken):
+ """Matches if current position is at the end of a line within the parse string"""
+ def __init__( self ):
+ super(LineEnd,self).__init__()
+ self.setWhitespaceChars( ParserElement.DEFAULT_WHITE_CHARS.replace("\n","") )
+ self.errmsg = "Expected end of line"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if loc<len(instring):
+ if instring[loc] == "\n":
+ return loc+1, "\n"
+ else:
+ #~ raise ParseException( instring, loc, "Expected end of line" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ elif loc == len(instring):
+ return loc+1, []
+ else:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class StringStart(_PositionToken):
+ """Matches if current position is at the beginning of the parse string"""
+ def __init__( self ):
+ super(StringStart,self).__init__()
+ self.errmsg = "Expected start of text"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if loc != 0:
+ # see if entire string up to here is just whitespace and ignoreables
+ if loc != self.preParse( instring, 0 ):
+ #~ raise ParseException( instring, loc, "Expected start of text" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+class StringEnd(_PositionToken):
+ """Matches if current position is at the end of the parse string"""
+ def __init__( self ):
+ super(StringEnd,self).__init__()
+ self.errmsg = "Expected end of text"
+ #self.myException.msg = self.errmsg
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if loc < len(instring):
+ #~ raise ParseException( instring, loc, "Expected end of text" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ elif loc == len(instring):
+ return loc+1, []
+ elif loc > len(instring):
+ return loc, []
+ else:
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
+class WordStart(_PositionToken):
+ """Matches if the current position is at the beginning of a Word, and
+ is not preceded by any character in a given set of wordChars
+ (default=printables). To emulate the \\b behavior of regular expressions,
+ use WordStart(alphanums). WordStart will also match at the beginning of
+ the string being parsed, or at the beginning of a line.
+ """
+ def __init__(self, wordChars = printables):
+ super(WordStart,self).__init__()
+ self.wordChars = _str2dict(wordChars)
+ self.errmsg = "Not at the start of a word"
+
+ def parseImpl(self, instring, loc, doActions=True ):
+ if loc != 0:
+ if (instring[loc-1] in self.wordChars or
+ instring[loc] not in self.wordChars):
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+class WordEnd(_PositionToken):
+ """Matches if the current position is at the end of a Word, and
+ is not followed by any character in a given set of wordChars
+ (default=printables). To emulate the \\b behavior of regular expressions,
+ use WordEnd(alphanums). WordEnd will also match at the end of
+ the string being parsed, or at the end of a line.
+ """
+ def __init__(self, wordChars = printables):
+ super(WordEnd,self).__init__()
+ self.wordChars = _str2dict(wordChars)
+ self.skipWhitespace = False
+ self.errmsg = "Not at the end of a word"
+
+ def parseImpl(self, instring, loc, doActions=True ):
+ instrlen = len(instring)
+ if instrlen>0 and loc<instrlen:
+ if (instring[loc] in self.wordChars or
+ instring[loc-1] not in self.wordChars):
+ #~ raise ParseException( instring, loc, "Expected end of word" )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
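A usage sketch (editor's illustration, assuming `pyparsing` is importable): `WordEnd` rejects a match that stops mid-word, which gives keyword-style matching without consuming the boundary character.

```python
from pyparsing import Literal, WordEnd, ParseException, alphanums

# "int" followed by a word boundary: matches "int x" but not "integer".
kw = Literal("int") + WordEnd(alphanums)
ok = kw.parseString("int x").asList()
try:
    kw.parseString("integer")
    rejected = False
except ParseException:
    rejected = True
```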
+
+class ParseExpression(ParserElement):
+ """Abstract subclass of ParserElement, for combining and post-processing parsed tokens."""
+ def __init__( self, exprs, savelist = False ):
+ super(ParseExpression,self).__init__(savelist)
+ if isinstance( exprs, list ):
+ self.exprs = exprs
+ elif isinstance( exprs, basestring ):
+ self.exprs = [ Literal( exprs ) ]
+ else:
+ try:
+ self.exprs = list( exprs )
+ except TypeError:
+ self.exprs = [ exprs ]
+ self.callPreparse = False
+
+ def __getitem__( self, i ):
+ return self.exprs[i]
+
+ def append( self, other ):
+ self.exprs.append( other )
+ self.strRepr = None
+ return self
+
+ def leaveWhitespace( self ):
+ """Extends leaveWhitespace defined in base class, and also invokes leaveWhitespace on
+ all contained expressions."""
+ self.skipWhitespace = False
+ self.exprs = [ e.copy() for e in self.exprs ]
+ for e in self.exprs:
+ e.leaveWhitespace()
+ return self
+
+ def ignore( self, other ):
+ if isinstance( other, Suppress ):
+ if other not in self.ignoreExprs:
+ super( ParseExpression, self).ignore( other )
+ for e in self.exprs:
+ e.ignore( self.ignoreExprs[-1] )
+ else:
+ super( ParseExpression, self).ignore( other )
+ for e in self.exprs:
+ e.ignore( self.ignoreExprs[-1] )
+ return self
+
+ def __str__( self ):
+ try:
+ return super(ParseExpression,self).__str__()
+ except Exception:
+ pass
+
+ if self.strRepr is None:
+ self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.exprs) )
+ return self.strRepr
+
+ def streamline( self ):
+ super(ParseExpression,self).streamline()
+
+ for e in self.exprs:
+ e.streamline()
+
+ # collapse nested And's of the form And( And( And( a,b), c), d) to And( a,b,c,d )
+ # but only if there are no parse actions or resultsNames on the nested And's
+ # (likewise for Or's and MatchFirst's)
+ if ( len(self.exprs) == 2 ):
+ other = self.exprs[0]
+ if ( isinstance( other, self.__class__ ) and
+ not(other.parseAction) and
+ other.resultsName is None and
+ not other.debug ):
+ self.exprs = other.exprs[:] + [ self.exprs[1] ]
+ self.strRepr = None
+ self.mayReturnEmpty |= other.mayReturnEmpty
+ self.mayIndexError |= other.mayIndexError
+
+ other = self.exprs[-1]
+ if ( isinstance( other, self.__class__ ) and
+ not(other.parseAction) and
+ other.resultsName is None and
+ not other.debug ):
+ self.exprs = self.exprs[:-1] + other.exprs[:]
+ self.strRepr = None
+ self.mayReturnEmpty |= other.mayReturnEmpty
+ self.mayIndexError |= other.mayIndexError
+
+ return self
+
+ def setResultsName( self, name, listAllMatches=False ):
+ ret = super(ParseExpression,self).setResultsName(name,listAllMatches)
+ return ret
+
+ def validate( self, validateTrace=[] ):
+ tmp = validateTrace[:]+[self]
+ for e in self.exprs:
+ e.validate(tmp)
+ self.checkRecursion( [] )
+
+class And(ParseExpression):
+ """Requires all given ParseExpressions to be found in the given order.
+ Expressions may be separated by whitespace.
+ May be constructed using the '+' operator.
+ """
+
+ class _ErrorStop(Empty):
+ def __init__(self, *args, **kwargs):
+ super(And._ErrorStop,self).__init__(*args, **kwargs)
+ self.leaveWhitespace()
+
+ def __init__( self, exprs, savelist = True ):
+ super(And,self).__init__(exprs, savelist)
+ self.mayReturnEmpty = True
+ for e in self.exprs:
+ if not e.mayReturnEmpty:
+ self.mayReturnEmpty = False
+ break
+ self.setWhitespaceChars( exprs[0].whiteChars )
+ self.skipWhitespace = exprs[0].skipWhitespace
+ self.callPreparse = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ # pass False as last arg to _parse for first element, since we already
+ # pre-parsed the string as part of our And pre-parsing
+ loc, resultlist = self.exprs[0]._parse( instring, loc, doActions, callPreParse=False )
+ errorStop = False
+ for e in self.exprs[1:]:
+ if isinstance(e, And._ErrorStop):
+ errorStop = True
+ continue
+ if errorStop:
+ try:
+ loc, exprtokens = e._parse( instring, loc, doActions )
+ except ParseSyntaxException:
+ raise
+ except ParseBaseException:
+ pe = sys.exc_info()[1]
+ raise ParseSyntaxException(pe)
+ except IndexError:
+ raise ParseSyntaxException( ParseException(instring, len(instring), self.errmsg, self) )
+ else:
+ loc, exprtokens = e._parse( instring, loc, doActions )
+ if exprtokens or exprtokens.keys():
+ resultlist += exprtokens
+ return loc, resultlist
+
+ def __iadd__(self, other ):
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ return self.append( other ) #And( [ self, other ] )
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+ if not e.mayReturnEmpty:
+ break
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
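A usage sketch (editor's illustration, assuming `pyparsing` is importable): the `+` operator builds an `And`, requiring both expressions to match in order, with optional whitespace between them.

```python
from pyparsing import Word, alphas, nums

# '+' between two ParserElements constructs And([...]).
greeting = Word(alphas) + Word(nums)
tokens = greeting.parseString("abc 123").asList()  # -> ['abc', '123']
```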
+
+class Or(ParseExpression):
+ """Requires that at least one ParseExpression is found.
+ If two expressions match, the expression that matches the longest string will be used.
+ May be constructed using the '^' operator.
+ """
+ def __init__( self, exprs, savelist = False ):
+ super(Or,self).__init__(exprs, savelist)
+ self.mayReturnEmpty = False
+ for e in self.exprs:
+ if e.mayReturnEmpty:
+ self.mayReturnEmpty = True
+ break
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ maxExcLoc = -1
+ maxMatchLoc = -1
+ maxException = None
+ for e in self.exprs:
+ try:
+ loc2 = e.tryParse( instring, loc )
+ except ParseException:
+ err = sys.exc_info()[1]
+ if err.loc > maxExcLoc:
+ maxException = err
+ maxExcLoc = err.loc
+ except IndexError:
+ if len(instring) > maxExcLoc:
+ maxException = ParseException(instring,len(instring),e.errmsg,self)
+ maxExcLoc = len(instring)
+ else:
+ if loc2 > maxMatchLoc:
+ maxMatchLoc = loc2
+ maxMatchExp = e
+
+ if maxMatchLoc < 0:
+ if maxException is not None:
+ raise maxException
+ else:
+ raise ParseException(instring, loc, "no defined alternatives to match", self)
+
+ return maxMatchExp._parse( instring, loc, doActions )
+
+ def __ixor__(self, other ):
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ return self.append( other ) #Or( [ self, other ] )
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " ^ ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+
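A usage sketch (editor's illustration, assuming `pyparsing` is importable): `^` builds an `Or`, which tries every alternative and keeps the longest match, regardless of listing order.

```python
from pyparsing import Literal

# Both alternatives match at position 0; Or picks the longer one.
longest = Literal("ab") ^ Literal("abc")
tokens = longest.parseString("abcd").asList()  # -> ['abc']
```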
+
+class MatchFirst(ParseExpression):
+ """Requires that at least one ParseExpression is found.
+ If two expressions match, the first one listed is the one that will match.
+ May be constructed using the '|' operator.
+ """
+ def __init__( self, exprs, savelist = False ):
+ super(MatchFirst,self).__init__(exprs, savelist)
+ if exprs:
+ self.mayReturnEmpty = False
+ for e in self.exprs:
+ if e.mayReturnEmpty:
+ self.mayReturnEmpty = True
+ break
+ else:
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ maxExcLoc = -1
+ maxException = None
+ for e in self.exprs:
+ try:
+ ret = e._parse( instring, loc, doActions )
+ return ret
+ except ParseException:
+ err = sys.exc_info()[1]
+ if err.loc > maxExcLoc:
+ maxException = err
+ maxExcLoc = err.loc
+ except IndexError:
+ if len(instring) > maxExcLoc:
+ maxException = ParseException(instring,len(instring),e.errmsg,self)
+ maxExcLoc = len(instring)
+
+ # only got here if no expression matched, raise exception for match that made it the furthest
+ else:
+ if maxException is not None:
+ raise maxException
+ else:
+ raise ParseException(instring, loc, "no defined alternatives to match", self)
+
+ def __ior__(self, other ):
+ if isinstance( other, basestring ):
+ other = Literal( other )
+ return self.append( other ) #MatchFirst( [ self, other ] )
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " | ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+
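A usage sketch (editor's illustration, assuming `pyparsing` is importable): `|` builds a `MatchFirst`, which takes the first alternative that matches, even if a later one would match more input. Contrast with `Or` (`^`) above.

```python
from pyparsing import Literal

# Alternatives tried left to right; "ab" wins even though "abc" also matches.
first = Literal("ab") | Literal("abc")
tokens = first.parseString("abcd").asList()  # -> ['ab']
```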
+
+class Each(ParseExpression):
+ """Requires all given ParseExpressions to be found, but in any order.
+ Expressions may be separated by whitespace.
+ May be constructed using the '&' operator.
+ """
+ def __init__( self, exprs, savelist = True ):
+ super(Each,self).__init__(exprs, savelist)
+ self.mayReturnEmpty = True
+ for e in self.exprs:
+ if not e.mayReturnEmpty:
+ self.mayReturnEmpty = False
+ break
+ self.skipWhitespace = True
+ self.initExprGroups = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.initExprGroups:
+ self.optionals = [ e.expr for e in self.exprs if isinstance(e,Optional) ]
+ self.multioptionals = [ e.expr for e in self.exprs if isinstance(e,ZeroOrMore) ]
+ self.multirequired = [ e.expr for e in self.exprs if isinstance(e,OneOrMore) ]
+ self.required = [ e for e in self.exprs if not isinstance(e,(Optional,ZeroOrMore,OneOrMore)) ]
+ self.required += self.multirequired
+ self.initExprGroups = False
+ tmpLoc = loc
+ tmpReqd = self.required[:]
+ tmpOpt = self.optionals[:]
+ matchOrder = []
+
+ keepMatching = True
+ while keepMatching:
+ tmpExprs = tmpReqd + tmpOpt + self.multioptionals + self.multirequired
+ failed = []
+ for e in tmpExprs:
+ try:
+ tmpLoc = e.tryParse( instring, tmpLoc )
+ except ParseException:
+ failed.append(e)
+ else:
+ matchOrder.append(e)
+ if e in tmpReqd:
+ tmpReqd.remove(e)
+ elif e in tmpOpt:
+ tmpOpt.remove(e)
+ if len(failed) == len(tmpExprs):
+ keepMatching = False
+
+ if tmpReqd:
+ missing = ", ".join( [ _ustr(e) for e in tmpReqd ] )
+ raise ParseException(instring,loc,"Missing one or more required elements (%s)" % missing )
+
+ # add any unmatched Optionals, in case they have default values defined
+ matchOrder += list(e for e in self.exprs if isinstance(e,Optional) and e.expr in tmpOpt)
+
+ resultlist = []
+ for e in matchOrder:
+ loc,results = e._parse(instring,loc,doActions)
+ resultlist.append(results)
+
+ finalResults = ParseResults([])
+ for r in resultlist:
+ dups = {}
+ for k in r.keys():
+ if k in finalResults.keys():
+ tmp = ParseResults(finalResults[k])
+ tmp += ParseResults(r[k])
+ dups[k] = tmp
+ finalResults += ParseResults(r)
+ for k,v in dups.items():
+ finalResults[k] = v
+ return loc, finalResults
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + " & ".join( [ _ustr(e) for e in self.exprs ] ) + "}"
+
+ return self.strRepr
+
+ def checkRecursion( self, parseElementList ):
+ subRecCheckList = parseElementList[:] + [ self ]
+ for e in self.exprs:
+ e.checkRecursion( subRecCheckList )
+
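A usage sketch (editor's illustration, assuming `pyparsing` is importable): `&` builds an `Each`, requiring all expressions to appear but in any order.

```python
from pyparsing import Word, alphas, nums

# Both a word and a number must be present, in either order.
pair = Word(alphas) & Word(nums)
tokens = pair.parseString("42 hello").asList()
```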
+
+class ParseElementEnhance(ParserElement):
+ """Abstract subclass of ParserElement, for combining and post-processing parsed tokens."""
+ def __init__( self, expr, savelist=False ):
+ super(ParseElementEnhance,self).__init__(savelist)
+ if isinstance( expr, basestring ):
+ expr = Literal(expr)
+ self.expr = expr
+ self.strRepr = None
+ if expr is not None:
+ self.mayIndexError = expr.mayIndexError
+ self.mayReturnEmpty = expr.mayReturnEmpty
+ self.setWhitespaceChars( expr.whiteChars )
+ self.skipWhitespace = expr.skipWhitespace
+ self.saveAsList = expr.saveAsList
+ self.callPreparse = expr.callPreparse
+ self.ignoreExprs.extend(expr.ignoreExprs)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ if self.expr is not None:
+ return self.expr._parse( instring, loc, doActions, callPreParse=False )
+ else:
+ raise ParseException("",loc,self.errmsg,self)
+
+ def leaveWhitespace( self ):
+ self.skipWhitespace = False
+ if self.expr is not None:
+ self.expr = self.expr.copy()
+ self.expr.leaveWhitespace()
+ return self
+
+ def ignore( self, other ):
+ if isinstance( other, Suppress ):
+ if other not in self.ignoreExprs:
+ super( ParseElementEnhance, self).ignore( other )
+ if self.expr is not None:
+ self.expr.ignore( self.ignoreExprs[-1] )
+ else:
+ super( ParseElementEnhance, self).ignore( other )
+ if self.expr is not None:
+ self.expr.ignore( self.ignoreExprs[-1] )
+ return self
+
+ def streamline( self ):
+ super(ParseElementEnhance,self).streamline()
+ if self.expr is not None:
+ self.expr.streamline()
+ return self
+
+ def checkRecursion( self, parseElementList ):
+ if self in parseElementList:
+ raise RecursiveGrammarException( parseElementList+[self] )
+ subRecCheckList = parseElementList[:] + [ self ]
+ if self.expr is not None:
+ self.expr.checkRecursion( subRecCheckList )
+
+ def validate( self, validateTrace=[] ):
+ tmp = validateTrace[:]+[self]
+ if self.expr is not None:
+ self.expr.validate(tmp)
+ self.checkRecursion( [] )
+
+ def __str__( self ):
+ try:
+ return super(ParseElementEnhance,self).__str__()
+ except Exception:
+ pass
+
+ if self.strRepr is None and self.expr is not None:
+ self.strRepr = "%s:(%s)" % ( self.__class__.__name__, _ustr(self.expr) )
+ return self.strRepr
+
+
+class FollowedBy(ParseElementEnhance):
+ """Lookahead matching of the given parse expression. FollowedBy
+ does *not* advance the parsing position within the input string, it only
+ verifies that the specified parse expression matches at the current
+ position. FollowedBy always returns a null token list."""
+ def __init__( self, expr ):
+ super(FollowedBy,self).__init__(expr)
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ self.expr.tryParse( instring, loc )
+ return loc, []
+
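A usage sketch (editor's illustration, assuming `pyparsing` is importable): `FollowedBy` verifies that the given expression comes next without consuming it, so only the preceding match ends up in the results.

```python
from pyparsing import Word, FollowedBy, alphas

# Match a word only when a ':' follows; the ':' is not consumed or returned.
key = Word(alphas) + FollowedBy(":")
tokens = key.parseString("name: value").asList()  # -> ['name']
```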
+
+class NotAny(ParseElementEnhance):
+ """Lookahead to disallow matching with the given parse expression. NotAny
+ does *not* advance the parsing position within the input string, it only
+ verifies that the specified parse expression does *not* match at the current
+ position. Also, NotAny does *not* skip over leading whitespace. NotAny
+ always returns a null token list. May be constructed using the '~' operator."""
+ def __init__( self, expr ):
+ super(NotAny,self).__init__(expr)
+ #~ self.leaveWhitespace()
+ self.skipWhitespace = False # do NOT use self.leaveWhitespace(), don't want to propagate to exprs
+ self.mayReturnEmpty = True
+ self.errmsg = "Found unwanted token, "+_ustr(self.expr)
+ #self.myException = ParseException("",0,self.errmsg,self)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ try:
+ self.expr.tryParse( instring, loc )
+ except (ParseException,IndexError):
+ pass
+ else:
+ #~ raise ParseException(instring, loc, self.errmsg )
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+ return loc, []
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "~{" + _ustr(self.expr) + "}"
+
+ return self.strRepr
+
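A usage sketch (editor's illustration, assuming `pyparsing` is importable): `~` builds a `NotAny`, a negative lookahead that lets a rule refuse particular input without consuming anything.

```python
from pyparsing import Literal, Word, ParseException, alphas

# An identifier rule that refuses the reserved word "end".
ident = ~Literal("end") + Word(alphas)
ok = ident.parseString("final").asList()
try:
    ident.parseString("end")
    rejected = False
except ParseException:
    rejected = True
```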
+
+class ZeroOrMore(ParseElementEnhance):
+ """Optional repetition of zero or more of the given expression."""
+ def __init__( self, expr ):
+ super(ZeroOrMore,self).__init__(expr)
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ tokens = []
+ try:
+ loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )
+ hasIgnoreExprs = ( len(self.ignoreExprs) > 0 )
+ while 1:
+ if hasIgnoreExprs:
+ preloc = self._skipIgnorables( instring, loc )
+ else:
+ preloc = loc
+ loc, tmptokens = self.expr._parse( instring, preloc, doActions )
+ if tmptokens or tmptokens.keys():
+ tokens += tmptokens
+ except (ParseException,IndexError):
+ pass
+
+ return loc, tokens
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "[" + _ustr(self.expr) + "]..."
+
+ return self.strRepr
+
+ def setResultsName( self, name, listAllMatches=False ):
+ ret = super(ZeroOrMore,self).setResultsName(name,listAllMatches)
+ ret.saveAsList = True
+ return ret
+
+
+class OneOrMore(ParseElementEnhance):
+ """Repetition of one or more of the given expression."""
+ def parseImpl( self, instring, loc, doActions=True ):
+ # must be at least one
+ loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )
+ try:
+ hasIgnoreExprs = ( len(self.ignoreExprs) > 0 )
+ while 1:
+ if hasIgnoreExprs:
+ preloc = self._skipIgnorables( instring, loc )
+ else:
+ preloc = loc
+ loc, tmptokens = self.expr._parse( instring, preloc, doActions )
+ if tmptokens or tmptokens.keys():
+ tokens += tmptokens
+ except (ParseException,IndexError):
+ pass
+
+ return loc, tokens
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "{" + _ustr(self.expr) + "}..."
+
+ return self.strRepr
+
+ def setResultsName( self, name, listAllMatches=False ):
+ ret = super(OneOrMore,self).setResultsName(name,listAllMatches)
+ ret.saveAsList = True
+ return ret
+
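A usage sketch (editor's illustration, assuming `pyparsing` is importable): `OneOrMore` requires at least one occurrence and then greedily collects the rest; `ZeroOrMore` is identical except it also succeeds on zero matches.

```python
from pyparsing import OneOrMore, Word, alphas

# Collect a run of whitespace-separated words into one flat result list.
words = OneOrMore(Word(alphas))
tokens = words.parseString("a bb ccc").asList()  # -> ['a', 'bb', 'ccc']
```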
+class _NullToken(object):
+ def __bool__(self):
+ return False
+ __nonzero__ = __bool__
+ def __str__(self):
+ return ""
+
+_optionalNotMatched = _NullToken()
+class Optional(ParseElementEnhance):
+ """Optional matching of the given expression.
+ A default return string can also be specified, if the optional expression
+ is not found.
+ """
+ def __init__( self, exprs, default=_optionalNotMatched ):
+ super(Optional,self).__init__( exprs, savelist=False )
+ self.defaultValue = default
+ self.mayReturnEmpty = True
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ try:
+ loc, tokens = self.expr._parse( instring, loc, doActions, callPreParse=False )
+ except (ParseException,IndexError):
+ if self.defaultValue is not _optionalNotMatched:
+ if self.expr.resultsName:
+ tokens = ParseResults([ self.defaultValue ])
+ tokens[self.expr.resultsName] = self.defaultValue
+ else:
+ tokens = [ self.defaultValue ]
+ else:
+ tokens = []
+ return loc, tokens
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ if self.strRepr is None:
+ self.strRepr = "[" + _ustr(self.expr) + "]"
+
+ return self.strRepr
+
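A usage sketch (editor's illustration, assuming `pyparsing` is importable): `Optional` makes an element skippable, and the `default` argument supplies a stand-in token when the element is absent.

```python
from pyparsing import Word, Optional, alphas, nums

# The numeric suffix may be missing; "0" is substituted when it is.
item = Word(alphas) + Optional(Word(nums), default="0")
with_num = item.parseString("abc 12").asList()     # -> ['abc', '12']
without_num = item.parseString("abc").asList()     # -> ['abc', '0']
```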
+
+class SkipTo(ParseElementEnhance):
+ """Token for skipping over all undefined text until the matched expression is found.
+ If include is set to true, the matched expression is also parsed (the skipped text
+ and matched expression are returned as a 2-element list). The ignore
+ argument is used to define grammars (typically quoted strings and comments) that
+ might contain false matches.
+ """
+ def __init__( self, other, include=False, ignore=None, failOn=None ):
+ super( SkipTo, self ).__init__( other )
+ self.ignoreExpr = ignore
+ self.mayReturnEmpty = True
+ self.mayIndexError = False
+ self.includeMatch = include
+ self.asList = False
+ if failOn is not None and isinstance(failOn, basestring):
+ self.failOn = Literal(failOn)
+ else:
+ self.failOn = failOn
+ self.errmsg = "No match found for "+_ustr(self.expr)
+ #self.myException = ParseException("",0,self.errmsg,self)
+
+ def parseImpl( self, instring, loc, doActions=True ):
+ startLoc = loc
+ instrlen = len(instring)
+ expr = self.expr
+ failParse = False
+ while loc <= instrlen:
+ try:
+ if self.failOn:
+ try:
+ self.failOn.tryParse(instring, loc)
+ except ParseBaseException:
+ pass
+ else:
+ failParse = True
+ raise ParseException(instring, loc, "Found expression " + str(self.failOn))
+ failParse = False
+ if self.ignoreExpr is not None:
+ while 1:
+ try:
+ loc = self.ignoreExpr.tryParse(instring,loc)
+ # print("found ignoreExpr, advance to", loc)
+ except ParseBaseException:
+ break
+ expr._parse( instring, loc, doActions=False, callPreParse=False )
+ skipText = instring[startLoc:loc]
+ if self.includeMatch:
+ loc,mat = expr._parse(instring,loc,doActions,callPreParse=False)
+ if mat:
+ skipRes = ParseResults( skipText )
+ skipRes += mat
+ return loc, [ skipRes ]
+ else:
+ return loc, [ skipText ]
+ else:
+ return loc, [ skipText ]
+ except (ParseException,IndexError):
+ if failParse:
+ raise
+ else:
+ loc += 1
+ exc = self.myException
+ exc.loc = loc
+ exc.pstr = instring
+ raise exc
+
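A usage sketch (editor's illustration, assuming `pyparsing` is importable): `SkipTo` consumes arbitrary text up to, but not including, the target expression.

```python
from pyparsing import SkipTo

# Everything before the ';' is returned as a single skipped-text token;
# the ';' literal then matches the delimiter itself.
upto = SkipTo(";") + ";"
tokens = upto.parseString("anything at all; rest").asList()
```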
+class Forward(ParseElementEnhance):
+ """Forward declaration of an expression to be defined later -
+ used for recursive grammars, such as algebraic infix notation.
+ When the expression is known, it is assigned to the Forward variable using the '<<' operator.
+
+ Note: take care when assigning to Forward not to overlook precedence of operators.
+ Specifically, '|' has a lower precedence than '<<', so that::
+ fwdExpr << a | b | c
+ will actually be evaluated as::
+ (fwdExpr << a) | b | c
+ thereby leaving b and c out as parseable alternatives. It is recommended that you
+ explicitly group the values inserted into the Forward::
+ fwdExpr << (a | b | c)
+ """
+ def __init__( self, other=None ):
+ super(Forward,self).__init__( other, savelist=False )
+
+ def __lshift__( self, other ):
+ if isinstance( other, basestring ):
+ other = Literal(other)
+ self.expr = other
+ self.mayReturnEmpty = other.mayReturnEmpty
+ self.strRepr = None
+ self.mayIndexError = self.expr.mayIndexError
+ self.mayReturnEmpty = self.expr.mayReturnEmpty
+ self.setWhitespaceChars( self.expr.whiteChars )
+ self.skipWhitespace = self.expr.skipWhitespace
+ self.saveAsList = self.expr.saveAsList
+ self.ignoreExprs.extend(self.expr.ignoreExprs)
+ return None
+
+ def leaveWhitespace( self ):
+ self.skipWhitespace = False
+ return self
+
+ def streamline( self ):
+ if not self.streamlined:
+ self.streamlined = True
+ if self.expr is not None:
+ self.expr.streamline()
+ return self
+
+ def validate( self, validateTrace=[] ):
+ if self not in validateTrace:
+ tmp = validateTrace[:]+[self]
+ if self.expr is not None:
+ self.expr.validate(tmp)
+ self.checkRecursion([])
+
+ def __str__( self ):
+ if hasattr(self,"name"):
+ return self.name
+
+ self._revertClass = self.__class__
+ self.__class__ = _ForwardNoRecurse
+ try:
+ if self.expr is not None:
+ retString = _ustr(self.expr)
+ else:
+ retString = "None"
+ finally:
+ self.__class__ = self._revertClass
+ return self.__class__.__name__ + ": " + retString
+
+ def copy(self):
+ if self.expr is not None:
+ return super(Forward,self).copy()
+ else:
+ ret = Forward()
+ ret << self
+ return ret
+
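A usage sketch (editor's illustration, assuming `pyparsing` is importable): `Forward` lets a grammar refer to itself. Note the parentheses around the alternatives, as the docstring warns: `<<` binds tighter than `|`.

```python
from pyparsing import Forward, Word, Group, Suppress, nums

# A recursive grammar for arbitrarily nested parentheses around a number.
expr = Forward()
expr << (Word(nums) | Group(Suppress("(") + expr + Suppress(")")))
tokens = expr.parseString("((42))").asList()  # -> [[['42']]]
```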
+class _ForwardNoRecurse(Forward):
+ def __str__( self ):
+ return "..."
+
+class TokenConverter(ParseElementEnhance):
+ """Abstract subclass of ParseExpression, for converting parsed results."""
+ def __init__( self, expr, savelist=False ):
+ super(TokenConverter,self).__init__( expr )#, savelist )
+ self.saveAsList = False
+
+class Upcase(TokenConverter):
+ """Converter to upper case all matching tokens."""
+ def __init__(self, *args):
+ super(Upcase,self).__init__(*args)
+ warnings.warn("Upcase class is deprecated, use upcaseTokens parse action instead",
+ DeprecationWarning,stacklevel=2)
+
+ def postParse( self, instring, loc, tokenlist ):
+ return list(map( string.upper, tokenlist ))
+
+
+class Combine(TokenConverter):
+ """Converter to concatenate all matching tokens to a single string.
+ By default, the matching patterns must also be contiguous in the input string;
+ this can be disabled by specifying 'adjacent=False' in the constructor.
+ """
+ def __init__( self, expr, joinString="", adjacent=True ):
+ super(Combine,self).__init__( expr )
+ # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself
+ if adjacent:
+ self.leaveWhitespace()
+ self.adjacent = adjacent
+ self.skipWhitespace = True
+ self.joinString = joinString
+
+ def ignore( self, other ):
+ if self.adjacent:
+ ParserElement.ignore(self, other)
+ else:
+ super( Combine, self).ignore( other )
+ return self
+
+ def postParse( self, instring, loc, tokenlist ):
+ retToks = tokenlist.copy()
+ del retToks[:]
+ retToks += ParseResults([ "".join(tokenlist._asStringList(self.joinString)) ], modal=self.modalResults)
+
+ if self.resultsName and len(retToks.keys())>0:
+ return [ retToks ]
+ else:
+ return retToks
+
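A usage sketch (editor's illustration, assuming `pyparsing` is importable): `Combine` glues adjacent matches back into a single string token.

```python
from pyparsing import Combine, Word, nums

# A real number comes out as one token '3.14' rather than ['3', '.', '14'];
# with the default adjacent=True the pieces must be contiguous in the input.
real = Combine(Word(nums) + "." + Word(nums))
tokens = real.parseString("3.14").asList()  # -> ['3.14']
```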
+class Group(TokenConverter):
+ """Converter to return the matched tokens as a list - useful for returning tokens of ZeroOrMore and OneOrMore expressions."""
+ def __init__( self, expr ):
+ super(Group,self).__init__( expr )
+ self.saveAsList = True
+
+ def postParse( self, instring, loc, tokenlist ):
+ return [ tokenlist ]
+
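A usage sketch (editor's illustration, assuming `pyparsing` is importable): without `Group` the repeated matches would be flattened into the same list as the leading name; `Group` keeps them together as a sublist.

```python
from pyparsing import Word, Group, OneOrMore, alphas, nums

# The numbers are returned as a nested list alongside the name token.
record = Word(alphas) + Group(OneOrMore(Word(nums)))
tokens = record.parseString("a 1 2 3").asList()  # -> ['a', ['1', '2', '3']]
```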
+class Dict(TokenConverter):
+ """Converter to return a repetitive expression as a list, but also as a dictionary.
+ Each element can also be referenced using the first token in the expression as its key.
+ Useful for tabular report scraping when the first column can be used as an item key.
+ """
+ def __init__( self, exprs ):
+ super(Dict,self).__init__( exprs )
+ self.saveAsList = True
+
+ def postParse( self, instring, loc, tokenlist ):
+ for i,tok in enumerate(tokenlist):
+ if len(tok) == 0:
+ continue
+ ikey = tok[0]
+ if isinstance(ikey,int):
+ ikey = _ustr(tok[0]).strip()
+ if len(tok)==1:
+ tokenlist[ikey] = _ParseResultsWithOffset("",i)
+ elif len(tok)==2 and not isinstance(tok[1],ParseResults):
+ tokenlist[ikey] = _ParseResultsWithOffset(tok[1],i)
+ else:
+ dictvalue = tok.copy() #ParseResults(i)
+ del dictvalue[0]
+ if len(dictvalue)!= 1 or (isinstance(dictvalue,ParseResults) and dictvalue.keys()):
+ tokenlist[ikey] = _ParseResultsWithOffset(dictvalue,i)
+ else:
+ tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0],i)
+
+ if self.resultsName:
+ return [ tokenlist ]
+ else:
+ return tokenlist
+
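A usage sketch (editor's illustration, assuming `pyparsing` is importable): each `Group` supplies a (key, value) pair, and `Dict` makes the pairs addressable by key on the returned `ParseResults`.

```python
from pyparsing import Dict, Group, OneOrMore, Suppress, Word, alphas

# Scrape "key: value" pairs and look them up by key afterwards.
entry = Group(Word(alphas) + Suppress(":") + Word(alphas))
table = Dict(OneOrMore(entry))
result = table.parseString("shape: SQUARE color: BLACK")
```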
+
+class Suppress(TokenConverter):
+ """Converter for ignoring the results of a parsed expression."""
+ def postParse( self, instring, loc, tokenlist ):
+ return []
+
+ def suppress( self ):
+ return self
+
+
+class OnlyOnce(object):
+ """Wrapper for parse actions, to ensure they are only called once."""
+ def __init__(self, methodCall):
+ self.callable = ParserElement._normalizeParseActionArgs(methodCall)
+ self.called = False
+ def __call__(self,s,l,t):
+ if not self.called:
+ results = self.callable(s,l,t)
+ self.called = True
+ return results
+ raise ParseException(s,l,"")
+ def reset(self):
+ self.called = False
+
+def traceParseAction(f):
+ """Decorator for debugging parse actions."""
+ f = ParserElement._normalizeParseActionArgs(f)
+ def z(*paArgs):
+ thisFunc = f.func_name
+ s,l,t = paArgs[-3:]
+ if len(paArgs)>3:
+ thisFunc = paArgs[0].__class__.__name__ + '.' + thisFunc
+ sys.stderr.write( ">>entering %s(line: '%s', %d, %s)\n" % (thisFunc,line(l,s),l,t) )
+ try:
+ ret = f(*paArgs)
+ except Exception:
+ exc = sys.exc_info()[1]
+ sys.stderr.write( "<<leaving %s (exception: %s)\n" % (thisFunc,exc) )
+ raise
+ sys.stderr.write( "<<leaving %s (ret: %s)\n" % (thisFunc,ret) )
+ return ret
+ try:
+ z.__name__ = f.__name__
+ except AttributeError:
+ pass
+ return z
+
+#
+# global helpers
+#
+def delimitedList( expr, delim=",", combine=False ):
+ """Helper to define a delimited list of expressions - the delimiter defaults to ','.
+ By default, the list elements and delimiters can have intervening whitespace and
+ comments; this can be overridden by passing 'combine=True' in the constructor.
+ If combine is set to True, the matching tokens are returned as a single token
+ string, with the delimiters included; otherwise, the matching tokens are returned
+ as a list of tokens, with the delimiters suppressed.
+ """
+ dlName = _ustr(expr)+" ["+_ustr(delim)+" "+_ustr(expr)+"]..."
+ if combine:
+ return Combine( expr + ZeroOrMore( delim + expr ) ).setName(dlName)
+ else:
+ return ( expr + ZeroOrMore( Suppress( delim ) + expr ) ).setName(dlName)
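As a rough illustration of the suppressed-delimiter behavior described above, a stdlib-only sketch (the name `delimited_list` and its splitting strategy are illustrative, not pyparsing API):

```python
import re

def delimited_list(text, delim=","):
    # Split on the delimiter, allowing intervening whitespace,
    # and drop the delimiters (mirroring Suppress(delim)).
    return [tok.strip() for tok in re.split(re.escape(delim), text)]

print(delimited_list("aaa, bbb ,ccc"))  # ['aaa', 'bbb', 'ccc']
```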
+
+def countedArray( expr ):
+ """Helper to define a counted list of expressions.
+ This helper defines a pattern of the form::
+ integer expr expr expr...
+ where the leading integer tells how many expr expressions follow.
+       The matched tokens are returned as a list of expr tokens; the leading count token is suppressed.
+ """
+ arrayExpr = Forward()
+ def countFieldParseAction(s,l,t):
+ n = int(t[0])
+ arrayExpr << (n and Group(And([expr]*n)) or Group(empty))
+ return []
+ return ( Word(nums).setName("arrayLen").setParseAction(countFieldParseAction, callDuringTry=True) + arrayExpr )
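The counted-list pattern can be sketched without pyparsing; `counted_array` here is a hypothetical helper that mimics the suppression of the leading count token:

```python
def counted_array(tokens):
    # First token is the count; the next n tokens form the array.
    # The count itself is dropped, as in countedArray().
    n = int(tokens[0])
    return tokens[1:1 + n]

print(counted_array(["3", "a", "b", "c"]))  # ['a', 'b', 'c']
```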
+
+def _flatten(L):
+ if type(L) is not list: return [L]
+ if L == []: return L
+ return _flatten(L[0]) + _flatten(L[1:])
+
+def matchPreviousLiteral(expr):
+ """Helper to define an expression that is indirectly defined from
+ the tokens matched in a previous expression, that is, it looks
+ for a 'repeat' of a previous expression. For example::
+ first = Word(nums)
+ second = matchPreviousLiteral(first)
+ matchExpr = first + ":" + second
+       will match "1:1", but not "1:2". Because this matches a
+       previous literal, it will also match the leading "1:1" in "1:10".
+ If this is not desired, use matchPreviousExpr.
+ Do *not* use with packrat parsing enabled.
+ """
+ rep = Forward()
+ def copyTokenToRepeater(s,l,t):
+ if t:
+ if len(t) == 1:
+ rep << t[0]
+ else:
+ # flatten t tokens
+ tflat = _flatten(t.asList())
+ rep << And( [ Literal(tt) for tt in tflat ] )
+ else:
+ rep << Empty()
+ expr.addParseAction(copyTokenToRepeater, callDuringTry=True)
+ return rep
+
+def matchPreviousExpr(expr):
+ """Helper to define an expression that is indirectly defined from
+ the tokens matched in a previous expression, that is, it looks
+ for a 'repeat' of a previous expression. For example::
+ first = Word(nums)
+ second = matchPreviousExpr(first)
+ matchExpr = first + ":" + second
+       will match "1:1", but not "1:2". Because this matches by
+       expressions, it will *not* match the leading "1:1" in "1:10";
+ the expressions are evaluated first, and then compared, so
+ "1" is compared with "10".
+ Do *not* use with packrat parsing enabled.
+ """
+ rep = Forward()
+ e2 = expr.copy()
+ rep << e2
+ def copyTokenToRepeater(s,l,t):
+ matchTokens = _flatten(t.asList())
+ def mustMatchTheseTokens(s,l,t):
+ theseTokens = _flatten(t.asList())
+ if theseTokens != matchTokens:
+ raise ParseException("",0,"")
+ rep.setParseAction( mustMatchTheseTokens, callDuringTry=True )
+ expr.addParseAction(copyTokenToRepeater, callDuringTry=True)
+ return rep
+
+def _escapeRegexRangeChars(s):
+ #~ escape these chars: ^-]
+ for c in r"\^-]":
+ s = s.replace(c,_bslash+c)
+ s = s.replace("\n",r"\n")
+ s = s.replace("\t",r"\t")
+ return _ustr(s)
+
+def oneOf( strs, caseless=False, useRegex=True ):
+ """Helper to quickly define a set of alternative Literals, and makes sure to do
+ longest-first testing when there is a conflict, regardless of the input order,
+ but returns a MatchFirst for best performance.
+
+ Parameters:
+ - strs - a string of space-delimited literals, or a list of string literals
+ - caseless - (default=False) - treat all literals as caseless
+ - useRegex - (default=True) - as an optimization, will generate a Regex
+ object; otherwise, will generate a MatchFirst object (if caseless=True, or
+ if creating a Regex raises an exception)
+ """
+ if caseless:
+ isequal = ( lambda a,b: a.upper() == b.upper() )
+ masks = ( lambda a,b: b.upper().startswith(a.upper()) )
+ parseElementClass = CaselessLiteral
+ else:
+ isequal = ( lambda a,b: a == b )
+ masks = ( lambda a,b: b.startswith(a) )
+ parseElementClass = Literal
+
+ if isinstance(strs,(list,tuple)):
+ symbols = list(strs[:])
+ elif isinstance(strs,basestring):
+ symbols = strs.split()
+    else:
+        warnings.warn("Invalid argument to oneOf, expected string or list",
+                SyntaxWarning, stacklevel=2)
+        symbols = []
+
+ i = 0
+ while i < len(symbols)-1:
+ cur = symbols[i]
+ for j,other in enumerate(symbols[i+1:]):
+ if ( isequal(other, cur) ):
+ del symbols[i+j+1]
+ break
+ elif ( masks(cur, other) ):
+ del symbols[i+j+1]
+ symbols.insert(i,other)
+ cur = other
+ break
+ else:
+ i += 1
+
+ if not caseless and useRegex:
+ #~ print (strs,"->", "|".join( [ _escapeRegexChars(sym) for sym in symbols] ))
+ try:
+ if len(symbols)==len("".join(symbols)):
+ return Regex( "[%s]" % "".join( [ _escapeRegexRangeChars(sym) for sym in symbols] ) )
+ else:
+ return Regex( "|".join( [ re.escape(sym) for sym in symbols] ) )
+ except:
+ warnings.warn("Exception creating Regex for oneOf, building MatchFirst",
+ SyntaxWarning, stacklevel=2)
+
+
+ # last resort, just use MatchFirst
+ return MatchFirst( [ parseElementClass(sym) for sym in symbols ] )
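The longest-first reordering loop above can be extracted into a stdlib-only sketch (non-caseless path only; `order_symbols` is an illustrative name):

```python
def order_symbols(symbols):
    # Reproduce oneOf's reordering: drop duplicates, and when one symbol
    # is a prefix ("mask") of a later one, move the longer symbol ahead
    # so it is tried first.
    symbols = list(symbols)
    i = 0
    while i < len(symbols) - 1:
        cur = symbols[i]
        for j, other in enumerate(symbols[i + 1:]):
            if other == cur:
                del symbols[i + j + 1]
                break
            elif other.startswith(cur):
                del symbols[i + j + 1]
                symbols.insert(i, other)
                cur = other
                break
        else:
            i += 1
    return symbols

print(order_symbols(["<", "<=", ">", ">=", "="]))
# ['<=', '<', '>=', '>', '=']
```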
+
+def dictOf( key, value ):
+ """Helper to easily and clearly define a dictionary by specifying the respective patterns
+ for the key and value. Takes care of defining the Dict, ZeroOrMore, and Group tokens
+ in the proper order. The key pattern can include delimiting markers or punctuation,
+ as long as they are suppressed, thereby leaving the significant key text. The value
+ pattern can include named results, so that the Dict results can include named token
+ fields.
+ """
+ return Dict( ZeroOrMore( Group ( key + value ) ) )
+
+def originalTextFor(expr, asString=True):
+ """Helper to return the original, untokenized text for a given expression. Useful to
+ restore the parsed fields of an HTML start tag into the raw tag text itself, or to
+ revert separate tokens with intervening whitespace back to the original matching
+ input text. Simpler to use than the parse action keepOriginalText, and does not
+ require the inspect module to chase up the call stack. By default, returns a
+ string containing the original parsed text.
+
+ If the optional asString argument is passed as False, then the return value is a
+ ParseResults containing any results names that were originally matched, and a
+ single token containing the original matched text from the input string. So if
+ the expression passed to originalTextFor contains expressions with defined
+ results names, you must set asString to False if you want to preserve those
+ results name values."""
+ locMarker = Empty().setParseAction(lambda s,loc,t: loc)
+ matchExpr = locMarker("_original_start") + expr + locMarker("_original_end")
+ if asString:
+ extractText = lambda s,l,t: s[t._original_start:t._original_end]
+ else:
+ def extractText(s,l,t):
+ del t[:]
+ t.insert(0, s[t._original_start:t._original_end])
+ del t["_original_start"]
+ del t["_original_end"]
+ matchExpr.setParseAction(extractText)
+ return matchExpr
+
+# convenience constants for positional expressions
+empty = Empty().setName("empty")
+lineStart = LineStart().setName("lineStart")
+lineEnd = LineEnd().setName("lineEnd")
+stringStart = StringStart().setName("stringStart")
+stringEnd = StringEnd().setName("stringEnd")
+
+_escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1])
+_printables_less_backslash = "".join([ c for c in printables if c not in r"\]" ])
+_escapedHexChar = Combine( Suppress(_bslash + "0x") + Word(hexnums) ).setParseAction(lambda s,l,t:unichr(int(t[0],16)))
+_escapedOctChar = Combine( Suppress(_bslash) + Word("0","01234567") ).setParseAction(lambda s,l,t:unichr(int(t[0],8)))
+_singleChar = _escapedPunc | _escapedHexChar | _escapedOctChar | Word(_printables_less_backslash,exact=1)
+_charRange = Group(_singleChar + Suppress("-") + _singleChar)
+_reBracketExpr = Literal("[") + Optional("^").setResultsName("negate") + Group( OneOrMore( _charRange | _singleChar ) ).setResultsName("body") + "]"
+
+_expanded = lambda p: (isinstance(p,ParseResults) and ''.join([ unichr(c) for c in range(ord(p[0]),ord(p[1])+1) ]) or p)
+
+def srange(s):
+ r"""Helper to easily define string ranges for use in Word construction. Borrows
+ syntax from regexp '[]' string range definitions::
+ srange("[0-9]") -> "0123456789"
+ srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz"
+ srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_"
+ The input string must be enclosed in []'s, and the returned string is the expanded
+ character set joined into a single string.
+ The values enclosed in the []'s may be::
+ a single character
+ an escaped character with a leading backslash (such as \- or \])
+ an escaped hex character with a leading '\0x' (\0x21, which is a '!' character)
+ an escaped octal character with a leading '\0' (\041, which is a '!' character)
+ a range of any of the above, separated by a dash ('a-z', etc.)
+ any combination of the above ('aeiouy', 'a-zA-Z0-9_$', etc.)
+ """
+ try:
+ return "".join([_expanded(part) for part in _reBracketExpr.parseString(s).body])
+ except:
+ return ""
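A minimal stdlib approximation of the range expansion, covering only plain characters and x-y ranges (`srange_sketch` is hypothetical and skips the escape forms listed above):

```python
import re

def srange_sketch(s):
    # Expand simple "[a-z0-9]"-style bodies into the full character set.
    body = s[1:-1]
    out = []
    for part in re.findall(r"(.-.|.)", body):
        if len(part) == 3:
            lo, hi = part[0], part[2]
            out.append("".join(chr(c) for c in range(ord(lo), ord(hi) + 1)))
        else:
            out.append(part)
    return "".join(out)

print(srange_sketch("[0-9]"))  # 0123456789
```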
+
+def matchOnlyAtCol(n):
+ """Helper method for defining parse actions that require matching at a specific
+ column in the input text.
+ """
+ def verifyCol(strg,locn,toks):
+ if col(locn,strg) != n:
+ raise ParseException(strg,locn,"matched token not at column %d" % n)
+ return verifyCol
+
+def replaceWith(replStr):
+ """Helper method for common parse actions that simply return a literal value. Especially
+ useful when used with transformString().
+ """
+ def _replFunc(*args):
+ return [replStr]
+ return _replFunc
+
+def removeQuotes(s,l,t):
+ """Helper parse action for removing quotation marks from parsed quoted strings.
+ To use, add this parse action to quoted string using::
+ quotedString.setParseAction( removeQuotes )
+ """
+ return t[0][1:-1]
+
+def upcaseTokens(s,l,t):
+ """Helper parse action to convert tokens to upper case."""
+ return [ tt.upper() for tt in map(_ustr,t) ]
+
+def downcaseTokens(s,l,t):
+ """Helper parse action to convert tokens to lower case."""
+ return [ tt.lower() for tt in map(_ustr,t) ]
+
+def keepOriginalText(s,startLoc,t):
+ """Helper parse action to preserve original parsed text,
+ overriding any nested parse actions."""
+ try:
+ endloc = getTokensEndLoc()
+ except ParseException:
+ raise ParseFatalException("incorrect usage of keepOriginalText - may only be called as a parse action")
+ del t[:]
+ t += ParseResults(s[startLoc:endloc])
+ return t
+
+def getTokensEndLoc():
+ """Method to be called from within a parse action to determine the end
+ location of the parsed tokens."""
+ import inspect
+ fstack = inspect.stack()
+ try:
+ # search up the stack (through intervening argument normalizers) for correct calling routine
+ for f in fstack[2:]:
+ if f[3] == "_parseNoCache":
+ endloc = f[0].f_locals["loc"]
+ return endloc
+ else:
+ raise ParseFatalException("incorrect usage of getTokensEndLoc - may only be called from within a parse action")
+ finally:
+ del fstack
+
+def _makeTags(tagStr, xml):
+ """Internal helper to construct opening and closing tag expressions, given a tag name"""
+ if isinstance(tagStr,basestring):
+ resname = tagStr
+ tagStr = Keyword(tagStr, caseless=not xml)
+ else:
+ resname = tagStr.name
+
+ tagAttrName = Word(alphas,alphanums+"_-:")
+ if (xml):
+ tagAttrValue = dblQuotedString.copy().setParseAction( removeQuotes )
+ openTag = Suppress("<") + tagStr + \
+ Dict(ZeroOrMore(Group( tagAttrName + Suppress("=") + tagAttrValue ))) + \
+ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")
+ else:
+ printablesLessRAbrack = "".join( [ c for c in printables if c not in ">" ] )
+ tagAttrValue = quotedString.copy().setParseAction( removeQuotes ) | Word(printablesLessRAbrack)
+ openTag = Suppress("<") + tagStr + \
+ Dict(ZeroOrMore(Group( tagAttrName.setParseAction(downcaseTokens) + \
+ Optional( Suppress("=") + tagAttrValue ) ))) + \
+ Optional("/",default=[False]).setResultsName("empty").setParseAction(lambda s,l,t:t[0]=='/') + Suppress(">")
+ closeTag = Combine(_L("</") + tagStr + ">")
+
+ openTag = openTag.setResultsName("start"+"".join(resname.replace(":"," ").title().split())).setName("<%s>" % tagStr)
+ closeTag = closeTag.setResultsName("end"+"".join(resname.replace(":"," ").title().split())).setName("</%s>" % tagStr)
+
+ return openTag, closeTag
+
+def makeHTMLTags(tagStr):
+ """Helper to construct opening and closing tag expressions for HTML, given a tag name"""
+ return _makeTags( tagStr, False )
+
+def makeXMLTags(tagStr):
+ """Helper to construct opening and closing tag expressions for XML, given a tag name"""
+ return _makeTags( tagStr, True )
+
+def withAttribute(*args,**attrDict):
+ """Helper to create a validating parse action to be used with start tags created
+ with makeXMLTags or makeHTMLTags. Use withAttribute to qualify a starting tag
+ with a required attribute value, to avoid false matches on common tags such as
+ <TD> or <DIV>.
+
+ Call withAttribute with a series of attribute names and values. Specify the list
+ of filter attributes names and values as:
+ - keyword arguments, as in (class="Customer",align="right"), or
+ - a list of name-value tuples, as in ( ("ns1:class", "Customer"), ("ns2:align","right") )
+ For attribute names with a namespace prefix, you must use the second form. Attribute
+ names are matched insensitive to upper/lower case.
+
+ To verify that the attribute exists, but without specifying a value, pass
+ withAttribute.ANY_VALUE as the value.
+ """
+ if args:
+ attrs = args[:]
+ else:
+ attrs = attrDict.items()
+ attrs = [(k,v) for k,v in attrs]
+ def pa(s,l,tokens):
+ for attrName,attrValue in attrs:
+ if attrName not in tokens:
+ raise ParseException(s,l,"no matching attribute " + attrName)
+ if attrValue != withAttribute.ANY_VALUE and tokens[attrName] != attrValue:
+ raise ParseException(s,l,"attribute '%s' has value '%s', must be '%s'" %
+ (attrName, tokens[attrName], attrValue))
+ return pa
+withAttribute.ANY_VALUE = object()
+
+opAssoc = _Constants()
+opAssoc.LEFT = object()
+opAssoc.RIGHT = object()
+
+def operatorPrecedence( baseExpr, opList ):
+ """Helper method for constructing grammars of expressions made up of
+ operators working in a precedence hierarchy. Operators may be unary or
+ binary, left- or right-associative. Parse actions can also be attached
+ to operator expressions.
+
+ Parameters:
+ - baseExpr - expression representing the most basic element for the nested
+ - opList - list of tuples, one for each operator precedence level in the
+ expression grammar; each tuple is of the form
+ (opExpr, numTerms, rightLeftAssoc, parseAction), where:
+ - opExpr is the pyparsing expression for the operator;
+ may also be a string, which will be converted to a Literal;
+ if numTerms is 3, opExpr is a tuple of two expressions, for the
+ two operators separating the 3 terms
+ - numTerms is the number of terms for this operator (must
+ be 1, 2, or 3)
+ - rightLeftAssoc is the indicator whether the operator is
+ right or left associative, using the pyparsing-defined
+ constants opAssoc.RIGHT and opAssoc.LEFT.
+ - parseAction is the parse action to be associated with
+ expressions matching this operator expression (the
+ parse action tuple member may be omitted)
+ """
+ ret = Forward()
+ lastExpr = baseExpr | ( Suppress('(') + ret + Suppress(')') )
+ for i,operDef in enumerate(opList):
+ opExpr,arity,rightLeftAssoc,pa = (operDef + (None,))[:4]
+ if arity == 3:
+ if opExpr is None or len(opExpr) != 2:
+ raise ValueError("if numterms=3, opExpr must be a tuple or list of two expressions")
+ opExpr1, opExpr2 = opExpr
+ thisExpr = Forward()#.setName("expr%d" % i)
+ if rightLeftAssoc == opAssoc.LEFT:
+ if arity == 1:
+ matchExpr = FollowedBy(lastExpr + opExpr) + Group( lastExpr + OneOrMore( opExpr ) )
+ elif arity == 2:
+ if opExpr is not None:
+ matchExpr = FollowedBy(lastExpr + opExpr + lastExpr) + Group( lastExpr + OneOrMore( opExpr + lastExpr ) )
+ else:
+ matchExpr = FollowedBy(lastExpr+lastExpr) + Group( lastExpr + OneOrMore(lastExpr) )
+ elif arity == 3:
+ matchExpr = FollowedBy(lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr) + \
+ Group( lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr )
+ else:
+ raise ValueError("operator must be unary (1), binary (2), or ternary (3)")
+ elif rightLeftAssoc == opAssoc.RIGHT:
+ if arity == 1:
+ # try to avoid LR with this extra test
+ if not isinstance(opExpr, Optional):
+ opExpr = Optional(opExpr)
+ matchExpr = FollowedBy(opExpr.expr + thisExpr) + Group( opExpr + thisExpr )
+ elif arity == 2:
+ if opExpr is not None:
+ matchExpr = FollowedBy(lastExpr + opExpr + thisExpr) + Group( lastExpr + OneOrMore( opExpr + thisExpr ) )
+ else:
+ matchExpr = FollowedBy(lastExpr + thisExpr) + Group( lastExpr + OneOrMore( thisExpr ) )
+ elif arity == 3:
+ matchExpr = FollowedBy(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) + \
+ Group( lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr )
+ else:
+ raise ValueError("operator must be unary (1), binary (2), or ternary (3)")
+ else:
+ raise ValueError("operator must indicate right or left associativity")
+ if pa:
+ matchExpr.setParseAction( pa )
+ thisExpr << ( matchExpr | lastExpr )
+ lastExpr = thisExpr
+ ret << lastExpr
+ return ret
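The kind of grammar this helper builds can be sketched as a tiny hand-written precedence parser; `parse_expr` below is an illustrative stand-in handling just '+' and '*', grouping results the way the Group() wrappers do:

```python
import re

def parse_expr(text):
    # A two-level precedence grammar: '*' binds tighter than '+'.
    tokens = re.findall(r"\d+|[+*()]", text)
    pos = [0]  # mutable cursor shared by the nested parsers

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def eat():
        tok = tokens[pos[0]]
        pos[0] += 1
        return tok

    def atom():
        if peek() == "(":
            eat()
            node = add()
            eat()  # consume ")"
            return node
        return int(eat())

    def mul():
        node = atom()
        while peek() == "*":
            eat()
            node = [node, "*", atom()]
        return node

    def add():
        node = mul()
        while peek() == "+":
            eat()
            node = [node, "+", mul()]
        return node

    return add()

print(parse_expr("1+2*3"))  # [1, '+', [2, '*', 3]]
```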
+
+dblQuotedString = Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\x[0-9a-fA-F]+)|(?:\\.))*"').setName("string enclosed in double quotes")
+sglQuotedString = Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\x[0-9a-fA-F]+)|(?:\\.))*'").setName("string enclosed in single quotes")
+quotedString = Regex(r'''(?:"(?:[^"\n\r\\]|(?:"")|(?:\\x[0-9a-fA-F]+)|(?:\\.))*")|(?:'(?:[^'\n\r\\]|(?:'')|(?:\\x[0-9a-fA-F]+)|(?:\\.))*')''').setName("quotedString using single or double quotes")
+unicodeString = Combine(_L('u') + quotedString.copy())
+
+def nestedExpr(opener="(", closer=")", content=None, ignoreExpr=quotedString):
+ """Helper method for defining nested lists enclosed in opening and closing
+ delimiters ("(" and ")" are the default).
+
+ Parameters:
+ - opener - opening character for a nested list (default="("); can also be a pyparsing expression
+ - closer - closing character for a nested list (default=")"); can also be a pyparsing expression
+ - content - expression for items within the nested lists (default=None)
+ - ignoreExpr - expression for ignoring opening and closing delimiters (default=quotedString)
+
+ If an expression is not provided for the content argument, the nested
+ expression will capture all whitespace-delimited content between delimiters
+ as a list of separate values.
+
+ Use the ignoreExpr argument to define expressions that may contain
+ opening or closing characters that should not be treated as opening
+ or closing characters for nesting, such as quotedString or a comment
+ expression. Specify multiple expressions using an Or or MatchFirst.
+ The default is quotedString, but if no expressions are to be ignored,
+ then pass None for this argument.
+ """
+ if opener == closer:
+ raise ValueError("opening and closing strings cannot be the same")
+ if content is None:
+ if isinstance(opener,basestring) and isinstance(closer,basestring):
+ if len(opener) == 1 and len(closer)==1:
+ if ignoreExpr is not None:
+ content = (Combine(OneOrMore(~ignoreExpr +
+ CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS,exact=1))
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ content = (empty+CharsNotIn(opener+closer+ParserElement.DEFAULT_WHITE_CHARS
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ if ignoreExpr is not None:
+ content = (Combine(OneOrMore(~ignoreExpr +
+ ~Literal(opener) + ~Literal(closer) +
+ CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1))
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ content = (Combine(OneOrMore(~Literal(opener) + ~Literal(closer) +
+ CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS,exact=1))
+ ).setParseAction(lambda t:t[0].strip()))
+ else:
+ raise ValueError("opening and closing arguments must be strings if no content expression is given")
+ ret = Forward()
+ if ignoreExpr is not None:
+ ret << Group( Suppress(opener) + ZeroOrMore( ignoreExpr | ret | content ) + Suppress(closer) )
+ else:
+ ret << Group( Suppress(opener) + ZeroOrMore( ret | content ) + Suppress(closer) )
+ return ret
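The nesting behavior can be approximated with a stdlib-only sketch (`parse_nested` is illustrative; like the default content expression, it whitespace-splits the content between delimiters):

```python
def parse_nested(text, opener="(", closer=")"):
    # Build nested Python lists from delimited text using a stack.
    stack = [[]]
    token = ""
    for ch in text:
        if ch in (opener, closer) or ch.isspace():
            if token:
                stack[-1].append(token)
                token = ""
            if ch == opener:
                stack.append([])
            elif ch == closer:
                done = stack.pop()
                stack[-1].append(done)
        else:
            token += ch
    return stack[0]

print(parse_nested("(a (b c) d)"))  # [['a', ['b', 'c'], 'd']]
```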
+
+def indentedBlock(blockStatementExpr, indentStack, indent=True):
+ """Helper method for defining space-delimited indentation blocks, such as
+ those used to define block statements in Python source code.
+
+ Parameters:
+ - blockStatementExpr - expression defining syntax of statement that
+ is repeated within the indented block
+ - indentStack - list created by caller to manage indentation stack
+ (multiple statementWithIndentedBlock expressions within a single grammar
+ should share a common indentStack)
+     - indent - boolean indicating whether block must be indented beyond
+        the current level; set to False for block of left-most statements
+ (default=True)
+
+ A valid block must contain at least one blockStatement.
+ """
+ def checkPeerIndent(s,l,t):
+ if l >= len(s): return
+ curCol = col(l,s)
+ if curCol != indentStack[-1]:
+ if curCol > indentStack[-1]:
+ raise ParseFatalException(s,l,"illegal nesting")
+ raise ParseException(s,l,"not a peer entry")
+
+ def checkSubIndent(s,l,t):
+ curCol = col(l,s)
+ if curCol > indentStack[-1]:
+ indentStack.append( curCol )
+ else:
+ raise ParseException(s,l,"not a subentry")
+
+ def checkUnindent(s,l,t):
+ if l >= len(s): return
+ curCol = col(l,s)
+ if not(indentStack and curCol < indentStack[-1] and curCol <= indentStack[-2]):
+ raise ParseException(s,l,"not an unindent")
+ indentStack.pop()
+
+ NL = OneOrMore(LineEnd().setWhitespaceChars("\t ").suppress())
+ INDENT = Empty() + Empty().setParseAction(checkSubIndent)
+ PEER = Empty().setParseAction(checkPeerIndent)
+ UNDENT = Empty().setParseAction(checkUnindent)
+ if indent:
+ smExpr = Group( Optional(NL) +
+ FollowedBy(blockStatementExpr) +
+ INDENT + (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) + UNDENT)
+ else:
+ smExpr = Group( Optional(NL) +
+ (OneOrMore( PEER + Group(blockStatementExpr) + Optional(NL) )) )
+ blockStatementExpr.ignore(_bslash + LineEnd())
+ return smExpr
+
+alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]")
+punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]")
+
+anyOpenTag,anyCloseTag = makeHTMLTags(Word(alphas,alphanums+"_:"))
+commonHTMLEntity = Combine(_L("&") + oneOf("gt lt amp nbsp quot").setResultsName("entity") +";").streamline()
+_htmlEntityMap = dict(zip("gt lt amp nbsp quot".split(),'><& "'))
+replaceHTMLEntity = lambda t : t.entity in _htmlEntityMap and _htmlEntityMap[t.entity] or None
+
+# it's easy to get these comment structures wrong - they're very common, so may as well make them available
+cStyleComment = Regex(r"/\*(?:[^*]*\*+)+?/").setName("C style comment")
+
+htmlComment = Regex(r"<!--[\s\S]*?-->")
+restOfLine = Regex(r".*").leaveWhitespace()
+dblSlashComment = Regex(r"\/\/(\\\n|.)*").setName("// comment")
+cppStyleComment = Regex(r"/(?:\*(?:[^*]*\*+)+?/|/[^\n]*(?:\n[^\n]*)*?(?:(?<!\\)|\Z))").setName("C++ style comment")
+
+javaStyleComment = cppStyleComment
+pythonStyleComment = Regex(r"#.*").setName("Python style comment")
+_noncomma = "".join( [ c for c in printables if c != "," ] )
+_commasepitem = Combine(OneOrMore(Word(_noncomma) +
+ Optional( Word(" \t") +
+ ~Literal(",") + ~LineEnd() ) ) ).streamline().setName("commaItem")
+commaSeparatedList = delimitedList( Optional( quotedString | _commasepitem, default="") ).setName("commaSeparatedList")
+
+
+if __name__ == "__main__":
+
+ def test( teststring ):
+ try:
+ tokens = simpleSQL.parseString( teststring )
+ tokenlist = tokens.asList()
+ print (teststring + "->" + str(tokenlist))
+ print ("tokens = " + str(tokens))
+ print ("tokens.columns = " + str(tokens.columns))
+ print ("tokens.tables = " + str(tokens.tables))
+ print (tokens.asXML("SQL",True))
+ except ParseBaseException:
+ err = sys.exc_info()[1]
+ print (teststring + "->")
+ print (err.line)
+ print (" "*(err.column-1) + "^")
+ print (err)
+ print()
+
+ selectToken = CaselessLiteral( "select" )
+ fromToken = CaselessLiteral( "from" )
+
+ ident = Word( alphas, alphanums + "_$" )
+ columnName = delimitedList( ident, ".", combine=True ).setParseAction( upcaseTokens )
+ columnNameList = Group( delimitedList( columnName ) )#.setName("columns")
+ tableName = delimitedList( ident, ".", combine=True ).setParseAction( upcaseTokens )
+ tableNameList = Group( delimitedList( tableName ) )#.setName("tables")
+ simpleSQL = ( selectToken + \
+ ( '*' | columnNameList ).setResultsName( "columns" ) + \
+ fromToken + \
+ tableNameList.setResultsName( "tables" ) )
+
+ test( "SELECT * from XYZZY, ABC" )
+ test( "select * from SYS.XYZZY" )
+ test( "Select A from Sys.dual" )
+ test( "Select AA,BB,CC from Sys.dual" )
+ test( "Select A, B, C from Sys.dual" )
+ test( "Select A, B, C from Sys.dual" )
+ test( "Xelect A, B, C from Sys.dual" )
+ test( "Select A, B, C frox Sys.dual" )
+ test( "Select" )
+ test( "Select ^^^ frox Sys.dual" )
+ test( "Select A, B, C from Sys.dual, Table2 " )
diff --git a/python/helpers/pycharm_generator_utils/util_methods.py b/python/helpers/pycharm_generator_utils/util_methods.py
new file mode 100644
index 0000000..8d6d356
--- /dev/null
+++ b/python/helpers/pycharm_generator_utils/util_methods.py
@@ -0,0 +1,544 @@
+from pycharm_generator_utils.constants import *
+
+try:
+ import inspect
+except ImportError:
+ inspect = None
+
+def create_named_tuple(): #TODO: user-skeleton
+ return """
+class __namedtuple(tuple):
+ '''A mock base class for named tuples.'''
+
+ __slots__ = ()
+ _fields = ()
+
+ def __new__(cls, *args, **kwargs):
+ 'Create a new instance of the named tuple.'
+ return tuple.__new__(cls, *args)
+
+ @classmethod
+ def _make(cls, iterable, new=tuple.__new__, len=len):
+ 'Make a new named tuple object from a sequence or iterable.'
+ return new(cls, iterable)
+
+ def __repr__(self):
+ return ''
+
+ def _asdict(self):
+        'Return a new dict which maps field names to their values.'
+ return {}
+
+ def _replace(self, **kwargs):
+ 'Return a new named tuple object replacing specified fields with new values.'
+ return self
+
+ def __getnewargs__(self):
+ return tuple(self)
+"""
+
+def create_generator():
+ # Fake <type 'generator'>
+ if version[0] < 3:
+ next_name = "next"
+ else:
+ next_name = "__next__"
+ txt = """
+class __generator(object):
+ '''A mock class representing the generator function type.'''
+ def __init__(self):
+ self.gi_code = None
+ self.gi_frame = None
+ self.gi_running = 0
+
+ def __iter__(self):
+ '''Defined to support iteration over container.'''
+ pass
+
+ def %s(self):
+ '''Return the next item from the container.'''
+ pass
+""" % (next_name,)
+ if version[0] >= 3 or (version[0] == 2 and version[1] >= 5):
+ txt += """
+ def close(self):
+ '''Raises new GeneratorExit exception inside the generator to terminate the iteration.'''
+ pass
+
+ def send(self, value):
+ '''Resumes the generator and "sends" a value that becomes the result of the current yield-expression.'''
+ pass
+
+ def throw(self, type, value=None, traceback=None):
+ '''Used to raise an exception inside the generator.'''
+ pass
+"""
+ return txt
+
+def _searchbases(cls, accum):
+    # logic copied from inspect.py
+ if cls not in accum:
+ accum.append(cls)
+ for x in cls.__bases__:
+ _searchbases(x, accum)
+
+
+def get_mro(a_class):
+    # logic copied from inspect.py
+ """Returns a tuple of MRO classes."""
+ if hasattr(a_class, "__mro__"):
+ return a_class.__mro__
+ elif hasattr(a_class, "__bases__"):
+ bases = []
+ _searchbases(a_class, bases)
+ return tuple(bases)
+ else:
+ return tuple()
+
+
+def get_bases(a_class): # TODO: test for classes that don't fit this scheme
+ """Returns a sequence of class's bases."""
+ if hasattr(a_class, "__bases__"):
+ return a_class.__bases__
+ else:
+ return ()
+
+
+def is_callable(x):
+ return hasattr(x, '__call__')
+
+
+def sorted_no_case(p_array):
+    """Sort an array case-insensitively; returns a sorted copy."""
+ p_array = list(p_array)
+ p_array = sorted(p_array, key=lambda x: x.upper())
+ return p_array
+
+
+def cleanup(value):
+ result = []
+ prev = i = 0
+ length = len(value)
+ last_ascii = chr(127)
+ while i < length:
+ char = value[i]
+ replacement = None
+ if char == '\n':
+ replacement = '\\n'
+ elif char == '\r':
+ replacement = '\\r'
+ elif char < ' ' or char > last_ascii:
+            replacement = '?' # NOTE: such chars are rare; long swaths could be processed differently
+        if replacement:
+            result.append(value[prev:i])
+            result.append(replacement)
+            prev = i + 1
+        i += 1
+    result.append(value[prev:])
+    return "".join(result)
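The same escaping logic, written as a standalone sketch with the prev/tail bookkeeping spelled out (`escape_control_chars` is an illustrative name):

```python
def escape_control_chars(value):
    # Replace newlines/carriage returns with escape sequences and any
    # other non-printable or non-ASCII character with '?'.
    result = []
    prev = 0
    for i, char in enumerate(value):
        if char == '\n':
            replacement = '\\n'
        elif char == '\r':
            replacement = '\\r'
        elif char < ' ' or char > chr(127):
            replacement = '?'
        else:
            continue
        result.append(value[prev:i])
        result.append(replacement)
        prev = i + 1
    result.append(value[prev:])  # keep the unescaped tail
    return "".join(result)

print(escape_control_chars("a\nb\x01c"))  # a\nb?c
```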
+
+
+_prop_types = [type(property())]
+#noinspection PyBroadException
+try:
+ _prop_types.append(types.GetSetDescriptorType)
+except:
+ pass
+
+#noinspection PyBroadException
+try:
+ _prop_types.append(types.MemberDescriptorType)
+except:
+ pass
+
+_prop_types = tuple(_prop_types)
+
+
+def is_property(x):
+ return isinstance(x, _prop_types)
+
+
+def sanitize_ident(x, is_clr=False):
+ """Takes an identifier and returns it sanitized"""
+    if x in ("class", "object", "def", "list", "tuple", "int", "float", "str", "unicode", "None"):
+ return "p_" + x
+ else:
+ if is_clr:
+ # it tends to have names like "int x", turn it to just x
+ xs = x.split(" ")
+ if len(xs) == 2:
+ return sanitize_ident(xs[1])
+ return x.replace("-", "_").replace(" ", "_").replace(".", "_") # for things like "list-or-tuple" or "list or tuple"
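A condensed, standalone sketch of this sanitization (without the CLR branch):

```python
def sanitize_ident_sketch(x):
    # Keyword-ish names get a "p_" prefix; separator characters that
    # cannot appear in identifiers become underscores.
    if x in ("class", "object", "def", "list", "tuple",
             "int", "float", "str", "unicode", "None"):
        return "p_" + x
    return x.replace("-", "_").replace(" ", "_").replace(".", "_")

print(sanitize_ident_sketch("list"))           # p_list
print(sanitize_ident_sketch("list-or-tuple"))  # list_or_tuple
```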
+
+
+def reliable_repr(value):
+ # some subclasses of built-in types (see PyGtk) may provide invalid __repr__ implementations,
+ # so we need to sanitize the output
+ if type(bool) == type and isinstance(value, bool):
+ return repr(bool(value))
+ for num_type in NUM_TYPES:
+ if isinstance(value, num_type):
+ return repr(num_type(value))
+ return repr(value)
+
+
+def sanitize_value(p_value):
+ """Returns p_value or its part if it represents a sane simple value, else returns 'None'"""
+ if isinstance(p_value, STR_TYPES):
+ match = SIMPLE_VALUE_RE.match(p_value)
+ if match:
+ return match.groups()[match.lastindex - 1]
+ else:
+ return 'None'
+ elif isinstance(p_value, NUM_TYPES):
+ return reliable_repr(p_value)
+ elif p_value is None:
+ return 'None'
+ else:
+ if hasattr(p_value, "__name__") and hasattr(p_value, "__module__") and p_value.__module__ == BUILTIN_MOD_NAME:
+ return p_value.__name__ # float -> "float"
+ else:
+ return repr(repr(p_value)) # function -> "<function ...>", etc
+
+
+def extract_alpha_prefix(p_string, default_prefix="some"):
+ """Returns 'foo' for things like 'foo1' or 'foo2'; if prefix cannot be found, the default is returned"""
+ match = NUM_IDENT_PATTERN.match(p_string)
+ prefix = match and match.groups()[match.lastindex - 1] or None
+ return prefix or default_prefix
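NUM_IDENT_PATTERN comes from constants; assuming it captures a leading alphabetic stem followed by digits, the behavior is roughly:

```python
import re

# Approximation of NUM_IDENT_PATTERN from constants (assumed, not
# copied from the module): an alphabetic stem followed by digits.
NUM_IDENT_PATTERN = re.compile(r"([A-Za-z_]+)\d+")

def extract_alpha_prefix(p_string, default_prefix="some"):
    match = NUM_IDENT_PATTERN.match(p_string)
    prefix = match and match.group(1) or None
    return prefix or default_prefix

print(extract_alpha_prefix("foo1"))  # foo
print(extract_alpha_prefix("123"))   # some
```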
+
+
+def report(msg, *data):
+ """Say something at error level (stderr)"""
+ sys.stderr.write(msg % data)
+ sys.stderr.write("\n")
+
+
+def say(msg, *data):
+ """Say something at info level (stdout)"""
+ sys.stdout.write(msg % data)
+ sys.stdout.write("\n")
+
+
+def transform_seq(results, toplevel=True):
+ """Transforms a tree of ParseResults into a param spec string."""
+ is_clr = sys.platform == "cli"
+ ret = [] # add here token to join
+ for token in results:
+ token_type = token[0]
+ if token_type is T_SIMPLE:
+ token_name = token[1]
+ if len(token) == 3: # name with value
+ if toplevel:
+ ret.append(sanitize_ident(token_name, is_clr) + "=" + sanitize_value(token[2]))
+ else:
+ # smth like "a, (b1=1, b2=2)", make it "a, p_b"
+ return ["p_" + results[0][1]] # NOTE: for each item of tuple, return the same name of its 1st item.
+ elif token_name == TRIPLE_DOT:
+ if toplevel and not has_item_starting_with(ret, "*"):
+ ret.append("*more")
+ else:
+ # we're in a "foo, (bar1, bar2, ...)"; make it "foo, bar_tuple"
+ return extract_alpha_prefix(results[0][1]) + "_tuple"
+ else: # just name
+ ret.append(sanitize_ident(token_name, is_clr))
+ elif token_type is T_NESTED:
+ inner = transform_seq(token[1:], False)
+ if len(inner) != 1:
+ ret.append(inner)
+ else:
+ ret.append(inner[0]) # [foo] -> foo
+ elif token_type is T_OPTIONAL:
+ ret.extend(transform_optional_seq(token))
+ elif token_type is T_RETURN:
+ pass # this is handled elsewhere
+ else:
+ raise Exception("This cannot be a token type: " + repr(token_type))
+ return ret
+
+
+def transform_optional_seq(results):
+ """
+ Produces a string that describes the optional part of parameters.
+ @param results must start from T_OPTIONAL.
+ """
+ assert results[0] is T_OPTIONAL, "transform_optional_seq expects a T_OPTIONAL node, sees " + \
+ repr(results[0])
+ is_clr = sys.platform == "cli"
+ ret = []
+ for token in results[1:]:
+ token_type = token[0]
+ if token_type is T_SIMPLE:
+ token_name = token[1]
+ if len(token) == 3: # name with value; little sense, but can happen in a deeply nested optional
+ ret.append(sanitize_ident(token_name, is_clr) + "=" + sanitize_value(token[2]))
+ elif token_name == '...':
+ # we're in a "foo, [bar, ...]"; make it "foo, *bar"
+ return ["*" + extract_alpha_prefix(
+ results[1][1])] # we must return a seq; [1] is first simple, [1][1] is its name
+ else: # just name
+ ret.append(sanitize_ident(token_name, is_clr) + "=None")
+ elif token_type is T_OPTIONAL:
+ ret.extend(transform_optional_seq(token))
+ # maybe handle T_NESTED if such cases ever occur in real life
+ # it can't be nested in a sane case, really
+ return ret
+
+
+def flatten(seq):
+ """Transforms tree lists like ['a', ['b', 'c'], 'd'] to strings like '(a, (b, c), d)', enclosing each tree level in parens."""
+ ret = []
+ for one in seq:
+ if type(one) is list:
+ ret.append(flatten(one))
+ else:
+ ret.append(one)
+ return "(" + ", ".join(ret) + ")"
+
+
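A self-contained copy of the same recursion shows the intended rendering, with each nesting level enclosed in its own parentheses:

```python
def flatten_sketch(seq):
    """Render a nested list of names as a parenthesized, comma-joined string."""
    parts = []
    for one in seq:
        if isinstance(one, list):
            parts.append(flatten_sketch(one))  # each sub-list becomes its own "(...)"
        else:
            parts.append(one)
    return "(" + ", ".join(parts) + ")"
```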
+def make_names_unique(seq, name_map=None):
+ """
+ Returns a copy of tree list seq where all clashing names are modified by numeric suffixes:
+ ['a', 'b', 'a', 'b'] becomes ['a', 'b', 'a_1', 'b_1'].
+ Each repeating name has its own counter in the name_map.
+ """
+ ret = []
+ if not name_map:
+ name_map = {}
+ for one in seq:
+ if type(one) is list:
+ ret.append(make_names_unique(one, name_map))
+ else:
+ one_key = lstrip(one, "*") # starred parameters are unique sans stars
+ if one_key in name_map:
+ old_one = one_key
+ one = one + "_" + str(name_map[old_one])
+ name_map[old_one] += 1
+ else:
+ name_map[one_key] = 1
+ ret.append(one)
+ return ret
+
+
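The renaming scheme can be exercised with an equivalent stand-alone version (using the built-in `str.lstrip` in place of the module-level `lstrip` compatibility helper used above):

```python
def make_names_unique_sketch(seq, name_map=None):
    """Suffix repeated names with _1, _2, ...; stars on *args are ignored for the key."""
    if name_map is None:
        name_map = {}
    ret = []
    for one in seq:
        if isinstance(one, list):
            ret.append(make_names_unique_sketch(one, name_map))  # share counters across levels
        else:
            key = one.lstrip("*")
            if key in name_map:
                one = one + "_" + str(name_map[key])
                name_map[key] += 1
            else:
                name_map[key] = 1
            ret.append(one)
    return ret
```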
+def has_item_starting_with(p_seq, p_start):
+ for item in p_seq:
+ if isinstance(item, STR_TYPES) and item.startswith(p_start):
+ return True
+ return False
+
+
+def out_docstring(out_func, docstring, indent):
+ if not isinstance(docstring, str): return
+ lines = docstring.strip().split("\n")
+ if lines:
+ if len(lines) == 1:
+ out_func(indent, '""" ' + lines[0] + ' """')
+ else:
+ out_func(indent, '"""')
+ for line in lines:
+ try:
+ out_func(indent, line)
+ except UnicodeEncodeError:
+ continue
+ out_func(indent, '"""')
+
+def out_doc_attr(out_func, p_object, indent, p_class=None):
+ the_doc = getattr(p_object, "__doc__", None)
+ if the_doc:
+ if p_class and the_doc == object.__init__.__doc__ and p_object is not object.__init__ and p_class.__doc__:
+ the_doc = str(p_class.__doc__) # replace stock init's doc with class's; make it a certain string.
+ the_doc += "\n# (copied from class doc)"
+ out_docstring(out_func, the_doc, indent)
+ else:
+ out_func(indent, "# no doc")
+
+def is_skipped_in_module(p_module, p_value):
+ """
+ Returns True if p_value's value must be skipped for module p_module.
+ """
+ skip_list = SKIP_VALUE_IN_MODULE.get(p_module, [])
+ if p_value in skip_list:
+ return True
+ skip_list = SKIP_VALUE_IN_MODULE.get("*", [])
+ if p_value in skip_list:
+ return True
+ return False
+
+def restore_predefined_builtin(class_name, func_name):
+ spec = func_name + PREDEFINED_BUILTIN_SIGS[(class_name, func_name)]
+ note = "known special case of " + (class_name and class_name + "." or "") + func_name
+ return (spec, note)
+
+def restore_by_inspect(p_func):
+ """
+ Returns paramlist restored by inspect.
+ """
+ args, varg, kwarg, defaults = inspect.getargspec(p_func)
+ spec = []
+ if defaults:
+ dcnt = len(defaults) - 1
+ else:
+ dcnt = -1
+ args = args or []
+ args.reverse() # backwards, for easier defaults handling
+ for arg in args:
+ if dcnt >= 0:
+ arg += "=" + sanitize_value(defaults[dcnt])
+ dcnt -= 1
+ spec.insert(0, arg)
+ if varg:
+ spec.append("*" + varg)
+ if kwarg:
+ spec.append("**" + kwarg)
+ return flatten(spec)
+
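`inspect.getargspec` is the Python 2 API (removed in modern Python 3); the same reconstruction can be sketched on top of `inspect.getfullargspec`. Names here are illustrative, and keyword-only arguments are left out for brevity:

```python
import inspect

def restore_by_inspect_sketch(func):
    """Rebuild a '(a, b=1, *args, **kwargs)'-style spec from inspect data."""
    spec = inspect.getfullargspec(func)
    defaults = spec.defaults or ()
    first_default = len(spec.args) - len(defaults)  # args past this index carry defaults
    parts = []
    for i, arg in enumerate(spec.args):
        if i >= first_default:
            parts.append(arg + "=" + repr(defaults[i - first_default]))
        else:
            parts.append(arg)
    if spec.varargs:
        parts.append("*" + spec.varargs)
    if spec.varkw:
        parts.append("**" + spec.varkw)
    return "(" + ", ".join(parts) + ")"
```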
+def restore_parameters_for_overloads(parameter_lists):
+ param_index = 0
+ star_args = False
+ optional = False
+ params = []
+ while True:
+ parameter_lists_copy = [pl for pl in parameter_lists]
+ for pl in parameter_lists_copy:
+ if param_index >= len(pl):
+ parameter_lists.remove(pl)
+ optional = True
+ if not parameter_lists:
+ break
+ name = parameter_lists[0][param_index]
+ for pl in parameter_lists[1:]:
+ if pl[param_index] != name:
+ star_args = True
+ break
+ if star_args: break
+ if optional and not '=' in name:
+ params.append(name + '=None')
+ else:
+ params.append(name)
+ param_index += 1
+ if star_args:
+ params.append("*__args")
+ return params
+
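The merging rules above: positions named identically in every overload are kept, positions past the end of a shorter overload become `=None` optionals, and the first position where overloads disagree collapses the remainder into `*__args`. A condensed sketch of the same logic:

```python
def merge_overload_params(parameter_lists):
    """Merge parameter-name lists from several overloads into one spec."""
    lists = [list(pl) for pl in parameter_lists]
    params = []
    index = 0
    optional = False
    star_args = False
    while True:
        for pl in list(lists):
            if index >= len(pl):
                lists.remove(pl)  # this overload has no more parameters
                optional = True
        if not lists:
            break
        name = lists[0][index]
        if any(pl[index] != name for pl in lists[1:]):
            star_args = True      # overloads disagree on this position: give up on names
            break
        params.append(name + "=None" if optional and "=" not in name else name)
        index += 1
    if star_args:
        params.append("*__args")
    return params
```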
+def build_signature(p_name, params):
+ return p_name + '(' + ', '.join(params) + ')'
+
+
+def propose_first_param(deco):
+    """@return: name of missing first parameter, considering a decorator"""
+ if deco is None:
+ return "self"
+ if deco == "classmethod":
+ return "cls"
+ # if deco == "staticmethod":
+ return None
+
+def qualifier_of(cls, qualifiers_to_skip):
+ m = getattr(cls, "__module__", None)
+ if m in qualifiers_to_skip:
+ return ""
+ return m
+
+def handle_error_func(item_name, out):
+ exctype, value = sys.exc_info()[:2]
+ msg = "Error generating skeleton for function %s: %s"
+ args = item_name, value
+ report(msg, *args)
+ out(0, "# " + msg % args)
+ out(0, "")
+
+def format_accessors(accessor_line, getter, setter, deleter):
+ """Nicely format accessors, like 'getter, fdel=deleter'"""
+ ret = []
+ consecutive = True
+ for key, arg, par in (('r', 'fget', getter), ('w', 'fset', setter), ('d', 'fdel', deleter)):
+ if key in accessor_line:
+ if consecutive:
+ ret.append(par)
+ else:
+ ret.append(arg + "=" + par)
+ else:
+ consecutive = False
+ return ", ".join(ret)
+
+
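For example, an accessor line of 'rd' (getter and deleter, no setter) switches to the keyword form once the sequence breaks; a compact equivalent:

```python
def format_accessors_sketch(accessor_line, getter, setter, deleter):
    """Emit positional accessors while flags are consecutive, else keyword form."""
    ret = []
    consecutive = True
    for key, kw, value in (('r', 'fget', getter), ('w', 'fset', setter), ('d', 'fdel', deleter)):
        if key in accessor_line:
            ret.append(value if consecutive else kw + "=" + value)
        else:
            consecutive = False  # a gap forces keyword form for later accessors
    return ", ".join(ret)
```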
+def has_regular_python_ext(file_name):
+ """Does name end with .py?"""
+ return file_name.endswith(".py")
+ # Note that the standard library on MacOS X 10.6 is shipped only as .pyc files, so we need to
+ # have them processed by the generator in order to have any code insight for the standard library.
+
+
+def detect_constructor(p_class):
+ # try to inspect the thing
+ constr = getattr(p_class, "__init__")
+ if constr and inspect and inspect.isfunction(constr):
+ args, _, _, _ = inspect.getargspec(constr)
+ return ", ".join(args)
+ else:
+ return None
+
+############## notes, actions #################################################################
+_is_verbose = False # controlled by -v
+
+CURRENT_ACTION = "nothing yet"
+
+def action(msg, *data):
+ global CURRENT_ACTION
+ CURRENT_ACTION = msg % data
+ note(msg, *data)
+
+def note(msg, *data):
+ """Say something at debug info level (stderr)"""
+ global _is_verbose
+ if _is_verbose:
+ sys.stderr.write(msg % data)
+ sys.stderr.write("\n")
+
+
+############## platform-specific methods #######################################################
+import sys
+if sys.platform == 'cli':
+ #noinspection PyUnresolvedReferences
+ import clr
+
+# http://blogs.msdn.com/curth/archive/2009/03/29/an-ironpython-profiler.aspx
+def print_profile():
+ data = []
+ data.extend(clr.GetProfilerData())
+ data.sort(lambda x, y: -cmp(x.ExclusiveTime, y.ExclusiveTime))
+
+ for pd in data:
+ say('%s\t%d\t%d\t%d', pd.Name, pd.InclusiveTime, pd.ExclusiveTime, pd.Calls)
+
+def is_clr_type(clr_type):
+ if not clr_type: return False
+ try:
+ clr.GetClrType(clr_type)
+ return True
+ except TypeError:
+ return False
+
+def restore_clr(p_name, p_class):
+ """
+ Restore the function signature by the CLR type signature
+ """
+ clr_type = clr.GetClrType(p_class)
+ if p_name == '__new__':
+ methods = [c for c in clr_type.GetConstructors()]
+ if not methods:
+ return p_name + '(*args)', 'cannot find CLR constructor'
+ else:
+ methods = [m for m in clr_type.GetMethods() if m.Name == p_name]
+ if not methods:
+ bases = p_class.__bases__
+ if len(bases) == 1 and p_name in dir(bases[0]):
+ # skip inherited methods
+ return None, None
+ return p_name + '(*args)', 'cannot find CLR method'
+
+ parameter_lists = []
+ for m in methods:
+ parameter_lists.append([p.Name for p in m.GetParameters()])
+ params = restore_parameters_for_overloads(parameter_lists)
+ if not methods[0].IsStatic:
+ params = ['self'] + params
+ return build_signature(p_name, params), None
diff --git a/python/helpers/pydev/__init__.py b/python/helpers/pydev/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/python/helpers/pydev/__init__.py
diff --git a/python/helpers/pydev/_completer.py b/python/helpers/pydev/_completer.py
new file mode 100644
index 0000000..4d34fec
--- /dev/null
+++ b/python/helpers/pydev/_completer.py
@@ -0,0 +1,155 @@
+
+try:
+ import __builtin__
+except ImportError:
+ import builtins as __builtin__
+
+try:
+ False
+ True
+except NameError: # version < 2.3 -- didn't have the True/False builtins
+ setattr(__builtin__, 'True', 1)
+ setattr(__builtin__, 'False', 0)
+
+try:
+    import java.lang #@UnusedImport
+    IS_JYTHON = True
+    import jyimportsTipper #as importsTipper #changed to be backward compatible with 1.5
+    importsTipper = jyimportsTipper
+except ImportError:
+    IS_JYTHON = False
+    import importsTipper
+
+dir2 = importsTipper.GenerateImportsTipForModule
+
+
+#=======================================================================================================================
+# _StartsWithFilter
+#=======================================================================================================================
+class _StartsWithFilter:
+ '''
+ Used because we can't create a lambda that'll use an outer scope in jython 2.1
+ '''
+
+
+ def __init__(self, start_with):
+ self.start_with = start_with.lower()
+
+ def __call__(self, name):
+ return name.lower().startswith(self.start_with)
+
+#=======================================================================================================================
+# Completer
+#
+# This class was taken from IPython.completer (dir2 was replaced with the one already in pydev)
+#=======================================================================================================================
+class Completer:
+
+ def __init__(self, namespace=None, global_namespace=None):
+ """Create a new completer for the command line.
+
+ Completer([namespace,global_namespace]) -> completer instance.
+
+ If unspecified, the default namespace where completions are performed
+ is __main__ (technically, __main__.__dict__). Namespaces should be
+ given as dictionaries.
+
+ An optional second namespace can be given. This allows the completer
+ to handle cases where both the local and global scopes need to be
+ distinguished.
+
+ Completer instances should be used as the completion mechanism of
+ readline via the set_completer() call:
+
+ readline.set_completer(Completer(my_namespace).complete)
+ """
+
+ # Don't bind to namespace quite yet, but flag whether the user wants a
+ # specific namespace or to use __main__.__dict__. This will allow us
+ # to bind to __main__.__dict__ at completion time, not now.
+ if namespace is None:
+ self.use_main_ns = 1
+ else:
+ self.use_main_ns = 0
+ self.namespace = namespace
+
+ # The global namespace, if given, can be bound directly
+ if global_namespace is None:
+ self.global_namespace = {}
+ else:
+ self.global_namespace = global_namespace
+
+ def complete(self, text):
+ """Return the next possible completion for 'text'.
+
+ This is called successively with state == 0, 1, 2, ... until it
+ returns None. The completion should begin with 'text'.
+
+ """
+ if self.use_main_ns:
+ #In pydev this option should never be used
+ raise RuntimeError('Namespace must be provided!')
+ self.namespace = __main__.__dict__ #@UndefinedVariable
+
+ if "." in text:
+ return self.attr_matches(text)
+ else:
+ return self.global_matches(text)
+
+ def global_matches(self, text):
+ """Compute matches when text is a simple name.
+
+ Return a list of all keywords, built-in functions and names currently
+ defined in self.namespace or self.global_namespace that match.
+
+ """
+
+
+ def get_item(obj, attr):
+ return obj[attr]
+
+ a = {}
+
+ for dict_with_comps in [__builtin__.__dict__, self.namespace, self.global_namespace]: #@UndefinedVariable
+ a.update(dict_with_comps)
+
+ filter = _StartsWithFilter(text)
+
+ return dir2(a, a.keys(), get_item, filter)
+
+ def attr_matches(self, text):
+ """Compute matches when text contains a dot.
+
+ Assuming the text is of the form NAME.NAME....[NAME], and is
+ evaluatable in self.namespace or self.global_namespace, it will be
+ evaluated and its attributes (as revealed by dir()) are used as
+    possible completions. (For class instances, class members are
+ also considered.)
+
+ WARNING: this can still invoke arbitrary C code, if an object
+ with a __getattr__ hook is evaluated.
+
+ """
+ import re
+
+ # Another option, seems to work great. Catches things like ''.<tab>
+ m = re.match(r"(\S+(\.\w+)*)\.(\w*)$", text) #@UndefinedVariable
+
+ if not m:
+ return []
+
+ expr, attr = m.group(1, 3)
+ try:
+ obj = eval(expr, self.namespace)
+ except:
+ try:
+ obj = eval(expr, self.global_namespace)
+ except:
+ return []
+
+ filter = _StartsWithFilter(attr)
+
+ words = dir2(obj, filter=filter)
+
+ return words
+
+
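The regex in `attr_matches` splits the completion text into an evaluatable expression and the attribute prefix being typed; isolated here for clarity:

```python
import re

# Same shape of pattern as in attr_matches above: "expr.attr_prefix" at end of text.
_DOTTED = re.compile(r"(\S+(\.\w+)*)\.(\w*)$")

def split_completion_text(text):
    """Split 'a.b.pre' into ('a.b', 'pre'); return None when there is no dot."""
    m = _DOTTED.match(text)
    if not m:
        return None
    return m.group(1), m.group(3)
```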
diff --git a/python/helpers/pydev/_pydev_BaseHTTPServer.py b/python/helpers/pydev/_pydev_BaseHTTPServer.py
new file mode 100644
index 0000000..1dcef2e
--- /dev/null
+++ b/python/helpers/pydev/_pydev_BaseHTTPServer.py
@@ -0,0 +1,604 @@
+"""HTTP server base class.
+
+Note: the class in this module doesn't implement any HTTP request; see
+SimpleHTTPServer for simple implementations of GET, HEAD and POST
+(including CGI scripts). It does, however, optionally implement HTTP/1.1
+persistent connections, as of version 0.3.
+
+Contents:
+
+- BaseHTTPRequestHandler: HTTP request handler base class
+- test: test function
+
+XXX To do:
+
+- log requests even later (to capture byte count)
+- log user-agent header and other interesting goodies
+- send error log to separate file
+"""
+
+
+# See also:
+#
+# HTTP Working Group T. Berners-Lee
+# INTERNET-DRAFT R. T. Fielding
+# <draft-ietf-http-v10-spec-00.txt> H. Frystyk Nielsen
+# Expires September 8, 1995 March 8, 1995
+#
+# URL: http://www.ics.uci.edu/pub/ietf/http/draft-ietf-http-v10-spec-00.txt
+#
+# and
+#
+# Network Working Group R. Fielding
+# Request for Comments: 2616 et al
+# Obsoletes: 2068 June 1999
+# Category: Standards Track
+#
+# URL: http://www.faqs.org/rfcs/rfc2616.html
+
+# Log files
+# ---------
+#
+# Here's a quote from the NCSA httpd docs about log file format.
+#
+# | The logfile format is as follows. Each line consists of:
+# |
+# | host rfc931 authuser [DD/Mon/YYYY:hh:mm:ss] "request" ddd bbbb
+# |
+# | host: Either the DNS name or the IP number of the remote client
+# | rfc931: Any information returned by identd for this person,
+# | - otherwise.
+# | authuser: If user sent a userid for authentication, the user name,
+# | - otherwise.
+# | DD: Day
+# | Mon: Month (calendar name)
+# | YYYY: Year
+# | hh: hour (24-hour format, the machine's timezone)
+# | mm: minutes
+# | ss: seconds
+# | request: The first line of the HTTP request as sent by the client.
+# | ddd: the status code returned by the server, - if not available.
+# | bbbb: the total number of bytes sent,
+# | *not including the HTTP/1.0 header*, - if not available
+# |
+# | You can determine the name of the file accessed through request.
+#
+# (Actually, the latter is only true if you know the server configuration
+# at the time the request was made!)
+
+__version__ = "0.3"
+
+__all__ = ["HTTPServer", "BaseHTTPRequestHandler"]
+
+import sys
+import _pydev_time as time
+import _pydev_socket as socket # For gethostbyaddr()
+from warnings import filterwarnings, catch_warnings
+with catch_warnings():
+ if sys.py3kwarning:
+ filterwarnings("ignore", ".*mimetools has been removed",
+ DeprecationWarning)
+ import mimetools
+
+import _pydev_SocketServer as SocketServer
+
+# Default error message template
+DEFAULT_ERROR_MESSAGE = """\
+<head>
+<title>Error response</title>
+</head>
+<body>
+<h1>Error response</h1>
+<p>Error code %(code)d.
+<p>Message: %(message)s.
+<p>Error code explanation: %(code)s = %(explain)s.
+</body>
+"""
+
+DEFAULT_ERROR_CONTENT_TYPE = "text/html"
+
+def _quote_html(html):
+    return html.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
+
+class HTTPServer(SocketServer.TCPServer):
+
+ allow_reuse_address = 1 # Seems to make sense in testing environment
+
+ def server_bind(self):
+ """Override server_bind to store the server name."""
+ SocketServer.TCPServer.server_bind(self)
+ host, port = self.socket.getsockname()[:2]
+ self.server_name = socket.getfqdn(host)
+ self.server_port = port
+
+
+class BaseHTTPRequestHandler(SocketServer.StreamRequestHandler):
+
+ """HTTP request handler base class.
+
+ The following explanation of HTTP serves to guide you through the
+ code as well as to expose any misunderstandings I may have about
+ HTTP (so you don't need to read the code to figure out I'm wrong
+ :-).
+
+ HTTP (HyperText Transfer Protocol) is an extensible protocol on
+ top of a reliable stream transport (e.g. TCP/IP). The protocol
+ recognizes three parts to a request:
+
+ 1. One line identifying the request type and path
+ 2. An optional set of RFC-822-style headers
+ 3. An optional data part
+
+ The headers and data are separated by a blank line.
+
+ The first line of the request has the form
+
+ <command> <path> <version>
+
+ where <command> is a (case-sensitive) keyword such as GET or POST,
+ <path> is a string containing path information for the request,
+ and <version> should be the string "HTTP/1.0" or "HTTP/1.1".
+ <path> is encoded using the URL encoding scheme (using %xx to signify
+ the ASCII character with hex code xx).
+
+ The specification specifies that lines are separated by CRLF but
+ for compatibility with the widest range of clients recommends
+ servers also handle LF. Similarly, whitespace in the request line
+ is treated sensibly (allowing multiple spaces between components
+ and allowing trailing whitespace).
+
+ Similarly, for output, lines ought to be separated by CRLF pairs
+ but most clients grok LF characters just fine.
+
+ If the first line of the request has the form
+
+ <command> <path>
+
+ (i.e. <version> is left out) then this is assumed to be an HTTP
+ 0.9 request; this form has no optional headers and data part and
+ the reply consists of just the data.
+
+ The reply form of the HTTP 1.x protocol again has three parts:
+
+ 1. One line giving the response code
+ 2. An optional set of RFC-822-style headers
+ 3. The data
+
+ Again, the headers and data are separated by a blank line.
+
+ The response code line has the form
+
+ <version> <responsecode> <responsestring>
+
+ where <version> is the protocol version ("HTTP/1.0" or "HTTP/1.1"),
+ <responsecode> is a 3-digit response code indicating success or
+ failure of the request, and <responsestring> is an optional
+ human-readable string explaining what the response code means.
+
+ This server parses the request and the headers, and then calls a
+ function specific to the request type (<command>). Specifically,
+ a request SPAM will be handled by a method do_SPAM(). If no
+ such method exists the server sends an error response to the
+ client. If it exists, it is called with no arguments:
+
+ do_SPAM()
+
+ Note that the request name is case sensitive (i.e. SPAM and spam
+ are different requests).
+
+ The various request details are stored in instance variables:
+
+ - client_address is the client IP address in the form (host,
+ port);
+
+ - command, path and version are the broken-down request line;
+
+ - headers is an instance of mimetools.Message (or a derived
+ class) containing the header information;
+
+ - rfile is a file object open for reading positioned at the
+ start of the optional input data part;
+
+ - wfile is a file object open for writing.
+
+ IT IS IMPORTANT TO ADHERE TO THE PROTOCOL FOR WRITING!
+
+ The first thing to be written must be the response line. Then
+ follow 0 or more header lines, then a blank line, and then the
+ actual data (if any). The meaning of the header lines depends on
+ the command executed by the server; in most cases, when data is
+ returned, there should be at least one header line of the form
+
+ Content-type: <type>/<subtype>
+
+ where <type> and <subtype> should be registered MIME types,
+ e.g. "text/html" or "text/plain".
+
+ """
+
+ # The Python system version, truncated to its first component.
+ sys_version = "Python/" + sys.version.split()[0]
+
+ # The server software version. You may want to override this.
+ # The format is multiple whitespace-separated strings,
+ # where each string is of the form name[/version].
+ server_version = "BaseHTTP/" + __version__
+
+ # The default request version. This only affects responses up until
+ # the point where the request line is parsed, so it mainly decides what
+ # the client gets back when sending a malformed request line.
+ # Most web servers default to HTTP 0.9, i.e. don't send a status line.
+ default_request_version = "HTTP/0.9"
+
+ def parse_request(self):
+ """Parse a request (internal).
+
+ The request should be stored in self.raw_requestline; the results
+ are in self.command, self.path, self.request_version and
+ self.headers.
+
+ Return True for success, False for failure; on failure, an
+ error is sent back.
+
+ """
+ self.command = None # set in case of error on the first line
+ self.request_version = version = self.default_request_version
+ self.close_connection = 1
+ requestline = self.raw_requestline
+ requestline = requestline.rstrip('\r\n')
+ self.requestline = requestline
+ words = requestline.split()
+ if len(words) == 3:
+ command, path, version = words
+ if version[:5] != 'HTTP/':
+ self.send_error(400, "Bad request version (%r)" % version)
+ return False
+ try:
+ base_version_number = version.split('/', 1)[1]
+ version_number = base_version_number.split(".")
+ # RFC 2145 section 3.1 says there can be only one "." and
+ # - major and minor numbers MUST be treated as
+ # separate integers;
+ # - HTTP/2.4 is a lower version than HTTP/2.13, which in
+ # turn is lower than HTTP/12.3;
+ # - Leading zeros MUST be ignored by recipients.
+ if len(version_number) != 2:
+ raise ValueError
+ version_number = int(version_number[0]), int(version_number[1])
+ except (ValueError, IndexError):
+ self.send_error(400, "Bad request version (%r)" % version)
+ return False
+ if version_number >= (1, 1) and self.protocol_version >= "HTTP/1.1":
+ self.close_connection = 0
+ if version_number >= (2, 0):
+ self.send_error(505,
+ "Invalid HTTP Version (%s)" % base_version_number)
+ return False
+ elif len(words) == 2:
+ command, path = words
+ self.close_connection = 1
+ if command != 'GET':
+ self.send_error(400,
+ "Bad HTTP/0.9 request type (%r)" % command)
+ return False
+ elif not words:
+ return False
+ else:
+ self.send_error(400, "Bad request syntax (%r)" % requestline)
+ return False
+ self.command, self.path, self.request_version = command, path, version
+
+ # Examine the headers and look for a Connection directive
+ self.headers = self.MessageClass(self.rfile, 0)
+
+ conntype = self.headers.get('Connection', "")
+ if conntype.lower() == 'close':
+ self.close_connection = 1
+ elif (conntype.lower() == 'keep-alive' and
+ self.protocol_version >= "HTTP/1.1"):
+ self.close_connection = 0
+ return True
+
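The version validation in parse_request follows RFC 2145 section 3.1 (exactly one dot, major and minor treated as separate integers, so version tuples compare correctly); it can be isolated as:

```python
def parse_http_version(version):
    """Parse 'HTTP/1.1' into (1, 1); None for anything parse_request would reject."""
    if version[:5] != 'HTTP/':
        return None
    try:
        numbers = version.split('/', 1)[1].split('.')
        # RFC 2145 3.1: exactly one dot, integer major and minor versions
        if len(numbers) != 2:
            return None
        return int(numbers[0]), int(numbers[1])
    except (ValueError, IndexError):
        return None
```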
+ def handle_one_request(self):
+ """Handle a single HTTP request.
+
+ You normally don't need to override this method; see the class
+ __doc__ string for information on how to handle specific HTTP
+ commands such as GET and POST.
+
+ """
+ try:
+ self.raw_requestline = self.rfile.readline(65537)
+ if len(self.raw_requestline) > 65536:
+ self.requestline = ''
+ self.request_version = ''
+ self.command = ''
+ self.send_error(414)
+ return
+ if not self.raw_requestline:
+ self.close_connection = 1
+ return
+ if not self.parse_request():
+ # An error code has been sent, just exit
+ return
+ mname = 'do_' + self.command
+ if not hasattr(self, mname):
+ self.send_error(501, "Unsupported method (%r)" % self.command)
+ return
+ method = getattr(self, mname)
+ method()
+ self.wfile.flush() #actually send the response if not already done.
+ except socket.timeout:
+ #a read or a write timed out. Discard this connection
+ self.log_error("Request timed out: %r", sys.exc_info()[1])
+ self.close_connection = 1
+ return
+
+ def handle(self):
+ """Handle multiple requests if necessary."""
+ self.close_connection = 1
+
+ self.handle_one_request()
+ while not self.close_connection:
+ self.handle_one_request()
+
+ def send_error(self, code, message=None):
+ """Send and log an error reply.
+
+ Arguments are the error code, and a detailed message.
+ The detailed message defaults to the short entry matching the
+ response code.
+
+ This sends an error response (so it must be called before any
+ output has been generated), logs the error, and finally sends
+ a piece of HTML explaining the error to the user.
+
+ """
+
+ try:
+ short, long = self.responses[code]
+ except KeyError:
+ short, long = '???', '???'
+ if message is None:
+ message = short
+ explain = long
+ self.log_error("code %d, message %s", code, message)
+ # using _quote_html to prevent Cross Site Scripting attacks (see bug #1100201)
+ content = (self.error_message_format %
+ {'code': code, 'message': _quote_html(message), 'explain': explain})
+ self.send_response(code, message)
+ self.send_header("Content-Type", self.error_content_type)
+ self.send_header('Connection', 'close')
+ self.end_headers()
+ if self.command != 'HEAD' and code >= 200 and code not in (204, 304):
+ self.wfile.write(content)
+
+ error_message_format = DEFAULT_ERROR_MESSAGE
+ error_content_type = DEFAULT_ERROR_CONTENT_TYPE
+
+ def send_response(self, code, message=None):
+ """Send the response header and log the response code.
+
+ Also send two standard headers with the server software
+ version and the current date.
+
+ """
+ self.log_request(code)
+ if message is None:
+ if code in self.responses:
+ message = self.responses[code][0]
+ else:
+ message = ''
+ if self.request_version != 'HTTP/0.9':
+ self.wfile.write("%s %d %s\r\n" %
+ (self.protocol_version, code, message))
+ # print (self.protocol_version, code, message)
+ self.send_header('Server', self.version_string())
+ self.send_header('Date', self.date_time_string())
+
+ def send_header(self, keyword, value):
+ """Send a MIME header."""
+ if self.request_version != 'HTTP/0.9':
+ self.wfile.write("%s: %s\r\n" % (keyword, value))
+
+ if keyword.lower() == 'connection':
+ if value.lower() == 'close':
+ self.close_connection = 1
+ elif value.lower() == 'keep-alive':
+ self.close_connection = 0
+
+ def end_headers(self):
+ """Send the blank line ending the MIME headers."""
+ if self.request_version != 'HTTP/0.9':
+ self.wfile.write("\r\n")
+
+ def log_request(self, code='-', size='-'):
+ """Log an accepted request.
+
+ This is called by send_response().
+
+ """
+
+ self.log_message('"%s" %s %s',
+ self.requestline, str(code), str(size))
+
+ def log_error(self, format, *args):
+ """Log an error.
+
+ This is called when a request cannot be fulfilled. By
+ default it passes the message on to log_message().
+
+ Arguments are the same as for log_message().
+
+ XXX This should go to the separate error log.
+
+ """
+
+ self.log_message(format, *args)
+
+ def log_message(self, format, *args):
+ """Log an arbitrary message.
+
+ This is used by all other logging functions. Override
+ it if you have specific logging wishes.
+
+ The first argument, FORMAT, is a format string for the
+ message to be logged. If the format string contains
+ any % escapes requiring parameters, they should be
+ specified as subsequent arguments (it's just like
+ printf!).
+
+ The client host and current date/time are prefixed to
+ every message.
+
+ """
+
+ sys.stderr.write("%s - - [%s] %s\n" %
+ (self.address_string(),
+ self.log_date_time_string(),
+ format%args))
+
+ def version_string(self):
+ """Return the server software version string."""
+ return self.server_version + ' ' + self.sys_version
+
+ def date_time_string(self, timestamp=None):
+ """Return the current date and time formatted for a message header."""
+ if timestamp is None:
+ timestamp = time.time()
+ year, month, day, hh, mm, ss, wd, y, z = time.gmtime(timestamp)
+ s = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
+ self.weekdayname[wd],
+ day, self.monthname[month], year,
+ hh, mm, ss)
+ return s
+
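date_time_string produces the fixed-timezone RFC 1123 date required for the Date header; a stand-alone equivalent using the same weekday/month tables:

```python
import time

_WEEKDAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
_MONTHS = [None, 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
           'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

def http_date(timestamp):
    """Format a POSIX timestamp as an RFC 1123 date, always in GMT."""
    year, month, day, hh, mm, ss, wd, _, _ = time.gmtime(timestamp)
    return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
        _WEEKDAYS[wd], day, _MONTHS[month], year, hh, mm, ss)
```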
+ def log_date_time_string(self):
+ """Return the current time formatted for logging."""
+ now = time.time()
+ year, month, day, hh, mm, ss, x, y, z = time.localtime(now)
+ s = "%02d/%3s/%04d %02d:%02d:%02d" % (
+ day, self.monthname[month], year, hh, mm, ss)
+ return s
+
+ weekdayname = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
+
+ monthname = [None,
+ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
+ 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
+
+ def address_string(self):
+ """Return the client address formatted for logging.
+
+ This version looks up the full hostname using gethostbyaddr(),
+ and tries to find a name that contains at least one dot.
+
+ """
+
+ host, port = self.client_address[:2]
+ return socket.getfqdn(host)
+
+ # Essentially static class variables
+
+ # The version of the HTTP protocol we support.
+ # Set this to HTTP/1.1 to enable automatic keepalive
+ protocol_version = "HTTP/1.0"
+
+ # The Message-like class used to parse headers
+ MessageClass = mimetools.Message
+
+ # Table mapping response codes to messages; entries have the
+ # form {code: (shortmessage, longmessage)}.
+ # See RFC 2616.
+ responses = {
+ 100: ('Continue', 'Request received, please continue'),
+ 101: ('Switching Protocols',
+ 'Switching to new protocol; obey Upgrade header'),
+
+ 200: ('OK', 'Request fulfilled, document follows'),
+ 201: ('Created', 'Document created, URL follows'),
+ 202: ('Accepted',
+ 'Request accepted, processing continues off-line'),
+ 203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
+ 204: ('No Content', 'Request fulfilled, nothing follows'),
+ 205: ('Reset Content', 'Clear input form for further input.'),
+ 206: ('Partial Content', 'Partial content follows.'),
+
+ 300: ('Multiple Choices',
+ 'Object has several resources -- see URI list'),
+ 301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
+ 302: ('Found', 'Object moved temporarily -- see URI list'),
+ 303: ('See Other', 'Object moved -- see Method and URL list'),
+ 304: ('Not Modified',
+ 'Document has not changed since given time'),
+ 305: ('Use Proxy',
+ 'You must use proxy specified in Location to access this '
+ 'resource.'),
+ 307: ('Temporary Redirect',
+ 'Object moved temporarily -- see URI list'),
+
+ 400: ('Bad Request',
+ 'Bad request syntax or unsupported method'),
+ 401: ('Unauthorized',
+ 'No permission -- see authorization schemes'),
+ 402: ('Payment Required',
+ 'No payment -- see charging schemes'),
+ 403: ('Forbidden',
+ 'Request forbidden -- authorization will not help'),
+ 404: ('Not Found', 'Nothing matches the given URI'),
+ 405: ('Method Not Allowed',
+ 'Specified method is invalid for this resource.'),
+ 406: ('Not Acceptable', 'URI not available in preferred format.'),
+ 407: ('Proxy Authentication Required', 'You must authenticate with '
+ 'this proxy before proceeding.'),
+ 408: ('Request Timeout', 'Request timed out; try again later.'),
+ 409: ('Conflict', 'Request conflict.'),
+ 410: ('Gone',
+ 'URI no longer exists and has been permanently removed.'),
+ 411: ('Length Required', 'Client must specify Content-Length.'),
+ 412: ('Precondition Failed', 'Precondition in headers is false.'),
+ 413: ('Request Entity Too Large', 'Entity is too large.'),
+ 414: ('Request-URI Too Long', 'URI is too long.'),
+ 415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
+ 416: ('Requested Range Not Satisfiable',
+ 'Cannot satisfy request range.'),
+ 417: ('Expectation Failed',
+ 'Expect condition could not be satisfied.'),
+
+ 500: ('Internal Server Error', 'Server got itself in trouble'),
+ 501: ('Not Implemented',
+ 'Server does not support this operation'),
+ 502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
+ 503: ('Service Unavailable',
+ 'The server cannot process the request due to a high load'),
+ 504: ('Gateway Timeout',
+ 'The gateway server did not receive a timely response'),
+ 505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
+ }
+
+
+def test(HandlerClass = BaseHTTPRequestHandler,
+ ServerClass = HTTPServer, protocol="HTTP/1.0"):
+ """Test the HTTP request handler class.
+
+ This runs an HTTP server on port 8000 (or the first command line
+ argument).
+
+ """
+
+ if sys.argv[1:]:
+ port = int(sys.argv[1])
+ else:
+ port = 8000
+ server_address = ('', port)
+
+ HandlerClass.protocol_version = protocol
+ httpd = ServerClass(server_address, HandlerClass)
+
+ sa = httpd.socket.getsockname()
+ print ("Serving HTTP on", sa[0], "port", sa[1], "...")
+ httpd.serve_forever()
+
+
+if __name__ == '__main__':
+ test()
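The `responses` table above maps each status code to a `(shortname, explanation)` pair that the handler's error machinery uses when building responses. A minimal sketch of that lookup, assuming Python 3's `http.server`, whose `BaseHTTPRequestHandler` carries the same table as this vendored module:

```python
# Sketch, assuming Python 3's http.server; its BaseHTTPRequestHandler
# exposes the same responses table as the vendored module above.
from http.server import BaseHTTPRequestHandler

def status_line(code):
    # Fall back to '???' for unknown codes, as the handler itself does.
    shortname, _explanation = BaseHTTPRequestHandler.responses.get(
        code, ('???', '???'))
    return '%d %s' % (code, shortname)

print(status_line(404))  # 404 Not Found
print(status_line(599))  # 599 ???
```

`status_line` is a hypothetical helper for illustration; the handler performs the equivalent lookup internally in `send_error()`.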
diff --git a/python/helpers/pydev/_pydev_Queue.py b/python/helpers/pydev/_pydev_Queue.py
new file mode 100644
index 0000000..cc32ea6
--- /dev/null
+++ b/python/helpers/pydev/_pydev_Queue.py
@@ -0,0 +1,244 @@
+"""A multi-producer, multi-consumer queue."""
+
+from _pydev_time import time as _time
+try:
+ import _pydev_threading as _threading
+except ImportError:
+ import dummy_threading as _threading
+from collections import deque
+import heapq
+
+__all__ = ['Empty', 'Full', 'Queue', 'PriorityQueue', 'LifoQueue']
+
+class Empty(Exception):
+ "Exception raised by Queue.get(block=0)/get_nowait()."
+ pass
+
+class Full(Exception):
+ "Exception raised by Queue.put(block=0)/put_nowait()."
+ pass
+
+class Queue:
+ """Create a queue object with a given maximum size.
+
+ If maxsize is <= 0, the queue size is infinite.
+ """
+ def __init__(self, maxsize=0):
+ self.maxsize = maxsize
+ self._init(maxsize)
+ # mutex must be held whenever the queue is mutating. All methods
+ # that acquire mutex must release it before returning. mutex
+ # is shared between the three conditions, so acquiring and
+ # releasing the conditions also acquires and releases mutex.
+ self.mutex = _threading.Lock()
+ # Notify not_empty whenever an item is added to the queue; a
+ # thread waiting to get is notified then.
+ self.not_empty = _threading.Condition(self.mutex)
+ # Notify not_full whenever an item is removed from the queue;
+ # a thread waiting to put is notified then.
+ self.not_full = _threading.Condition(self.mutex)
+ # Notify all_tasks_done whenever the number of unfinished tasks
+ # drops to zero; thread waiting to join() is notified to resume
+ self.all_tasks_done = _threading.Condition(self.mutex)
+ self.unfinished_tasks = 0
+
+ def task_done(self):
+ """Indicate that a formerly enqueued task is complete.
+
+ Used by Queue consumer threads. For each get() used to fetch a task,
+ a subsequent call to task_done() tells the queue that the processing
+ on the task is complete.
+
+ If a join() is currently blocking, it will resume when all items
+ have been processed (meaning that a task_done() call was received
+ for every item that had been put() into the queue).
+
+ Raises a ValueError if called more times than there were items
+ placed in the queue.
+ """
+ self.all_tasks_done.acquire()
+ try:
+ unfinished = self.unfinished_tasks - 1
+ if unfinished <= 0:
+ if unfinished < 0:
+ raise ValueError('task_done() called too many times')
+ self.all_tasks_done.notify_all()
+ self.unfinished_tasks = unfinished
+ finally:
+ self.all_tasks_done.release()
+
+ def join(self):
+ """Blocks until all items in the Queue have been gotten and processed.
+
+ The count of unfinished tasks goes up whenever an item is added to the
+ queue. The count goes down whenever a consumer thread calls task_done()
+ to indicate the item was retrieved and all work on it is complete.
+
+ When the count of unfinished tasks drops to zero, join() unblocks.
+ """
+ self.all_tasks_done.acquire()
+ try:
+ while self.unfinished_tasks:
+ self.all_tasks_done.wait()
+ finally:
+ self.all_tasks_done.release()
+
+ def qsize(self):
+ """Return the approximate size of the queue (not reliable!)."""
+ self.mutex.acquire()
+ n = self._qsize()
+ self.mutex.release()
+ return n
+
+ def empty(self):
+ """Return True if the queue is empty, False otherwise (not reliable!)."""
+ self.mutex.acquire()
+ n = not self._qsize()
+ self.mutex.release()
+ return n
+
+ def full(self):
+ """Return True if the queue is full, False otherwise (not reliable!)."""
+ self.mutex.acquire()
+ n = 0 < self.maxsize == self._qsize()
+ self.mutex.release()
+ return n
+
+ def put(self, item, block=True, timeout=None):
+ """Put an item into the queue.
+
+ If optional args 'block' is true and 'timeout' is None (the default),
+ block if necessary until a free slot is available. If 'timeout' is
+ a positive number, it blocks at most 'timeout' seconds and raises
+ the Full exception if no free slot was available within that time.
+ Otherwise ('block' is false), put an item on the queue if a free slot
+ is immediately available, else raise the Full exception ('timeout'
+ is ignored in that case).
+ """
+ self.not_full.acquire()
+ try:
+ if self.maxsize > 0:
+ if not block:
+ if self._qsize() == self.maxsize:
+ raise Full
+ elif timeout is None:
+ while self._qsize() == self.maxsize:
+ self.not_full.wait()
+ elif timeout < 0:
+ raise ValueError("'timeout' must be a positive number")
+ else:
+ endtime = _time() + timeout
+ while self._qsize() == self.maxsize:
+ remaining = endtime - _time()
+ if remaining <= 0.0:
+ raise Full
+ self.not_full.wait(remaining)
+ self._put(item)
+ self.unfinished_tasks += 1
+ self.not_empty.notify()
+ finally:
+ self.not_full.release()
+
+ def put_nowait(self, item):
+ """Put an item into the queue without blocking.
+
+ Only enqueue the item if a free slot is immediately available.
+ Otherwise raise the Full exception.
+ """
+ return self.put(item, False)
+
+ def get(self, block=True, timeout=None):
+ """Remove and return an item from the queue.
+
+ If optional args 'block' is true and 'timeout' is None (the default),
+ block if necessary until an item is available. If 'timeout' is
+ a positive number, it blocks at most 'timeout' seconds and raises
+ the Empty exception if no item was available within that time.
+ Otherwise ('block' is false), return an item if one is immediately
+ available, else raise the Empty exception ('timeout' is ignored
+ in that case).
+ """
+ self.not_empty.acquire()
+ try:
+ if not block:
+ if not self._qsize():
+ raise Empty
+ elif timeout is None:
+ while not self._qsize():
+ self.not_empty.wait()
+ elif timeout < 0:
+ raise ValueError("'timeout' must be a positive number")
+ else:
+ endtime = _time() + timeout
+ while not self._qsize():
+ remaining = endtime - _time()
+ if remaining <= 0.0:
+ raise Empty
+ self.not_empty.wait(remaining)
+ item = self._get()
+ self.not_full.notify()
+ return item
+ finally:
+ self.not_empty.release()
+
+ def get_nowait(self):
+ """Remove and return an item from the queue without blocking.
+
+ Only get an item if one is immediately available. Otherwise
+ raise the Empty exception.
+ """
+ return self.get(False)
+
+ # Override these methods to implement other queue organizations
+ # (e.g. stack or priority queue).
+ # These will only be called with appropriate locks held
+
+ # Initialize the queue representation
+ def _init(self, maxsize):
+ self.queue = deque()
+
+ def _qsize(self, len=len):
+ return len(self.queue)
+
+ # Put a new item in the queue
+ def _put(self, item):
+ self.queue.append(item)
+
+ # Get an item from the queue
+ def _get(self):
+ return self.queue.popleft()
+
+
+class PriorityQueue(Queue):
+ '''Variant of Queue that retrieves open entries in priority order (lowest first).
+
+ Entries are typically tuples of the form: (priority number, data).
+ '''
+
+ def _init(self, maxsize):
+ self.queue = []
+
+ def _qsize(self, len=len):
+ return len(self.queue)
+
+ def _put(self, item, heappush=heapq.heappush):
+ heappush(self.queue, item)
+
+ def _get(self, heappop=heapq.heappop):
+ return heappop(self.queue)
+
+
+class LifoQueue(Queue):
+ '''Variant of Queue that retrieves most recently added entries first.'''
+
+ def _init(self, maxsize):
+ self.queue = []
+
+ def _qsize(self, len=len):
+ return len(self.queue)
+
+ def _put(self, item):
+ self.queue.append(item)
+
+ def _get(self):
+ return self.queue.pop()
diff --git a/python/helpers/pydev/_pydev_SimpleXMLRPCServer.py b/python/helpers/pydev/_pydev_SimpleXMLRPCServer.py
new file mode 100644
index 0000000..c7da5b7
--- /dev/null
+++ b/python/helpers/pydev/_pydev_SimpleXMLRPCServer.py
@@ -0,0 +1,610 @@
+# A copy of the version shipped with Python 2.5, to be used if it's not available in Jython 2.1
+
+"""Simple XML-RPC Server.
+
+This module can be used to create simple XML-RPC servers
+by creating a server and either installing functions, a
+class instance, or by extending the SimpleXMLRPCServer
+class.
+
+It can also be used to handle XML-RPC requests in a CGI
+environment using CGIXMLRPCRequestHandler.
+
+A list of possible usage patterns follows:
+
+1. Install functions:
+
+server = SimpleXMLRPCServer(("localhost", 8000))
+server.register_function(pow)
+server.register_function(lambda x,y: x+y, 'add')
+server.serve_forever()
+
+2. Install an instance:
+
+class MyFuncs:
+ def __init__(self):
+ # make all of the string functions available through
+ # string.func_name
+ import string
+ self.string = string
+ def _listMethods(self):
+ # implement this method so that system.listMethods
+ # knows to advertise the strings methods
+ return list_public_methods(self) + \
+ ['string.' + method for method in list_public_methods(self.string)]
+ def pow(self, x, y): return pow(x, y)
+ def add(self, x, y) : return x + y
+
+server = SimpleXMLRPCServer(("localhost", 8000))
+server.register_introspection_functions()
+server.register_instance(MyFuncs())
+server.serve_forever()
+
+3. Install an instance with custom dispatch method:
+
+class Math:
+ def _listMethods(self):
+ # this method must be present for system.listMethods
+ # to work
+ return ['add', 'pow']
+ def _methodHelp(self, method):
+ # this method must be present for system.methodHelp
+ # to work
+ if method == 'add':
+ return "add(2,3) => 5"
+ elif method == 'pow':
+ return "pow(x, y[, z]) => number"
+ else:
+ # By convention, return empty
+ # string if no help is available
+ return ""
+ def _dispatch(self, method, params):
+ if method == 'pow':
+ return pow(*params)
+ elif method == 'add':
+ return params[0] + params[1]
+ else:
+            raise Exception('bad method')
+
+server = SimpleXMLRPCServer(("localhost", 8000))
+server.register_introspection_functions()
+server.register_instance(Math())
+server.serve_forever()
+
+4. Subclass SimpleXMLRPCServer:
+
+class MathServer(SimpleXMLRPCServer):
+ def _dispatch(self, method, params):
+ try:
+ # We are forcing the 'export_' prefix on methods that are
+ # callable through XML-RPC to prevent potential security
+ # problems
+ func = getattr(self, 'export_' + method)
+ except AttributeError:
+ raise Exception('method "%s" is not supported' % method)
+ else:
+ return func(*params)
+
+ def export_add(self, x, y):
+ return x + y
+
+server = MathServer(("localhost", 8000))
+server.serve_forever()
+
+5. CGI script:
+
+server = CGIXMLRPCRequestHandler()
+server.register_function(pow)
+server.handle_request()
+"""
+
+# Written by Brian Quinlan ([email protected]).
+# Based on code written by Fredrik Lundh.
+
+try:
+ True
+ False
+except:
+ import __builtin__
+ setattr(__builtin__, 'True', 1) #Python 3.0 does not accept __builtin__.True = 1 in its syntax
+ setattr(__builtin__, 'False', 0)
+
+
+import _pydev_xmlrpclib as xmlrpclib
+from _pydev_xmlrpclib import Fault
+import _pydev_SocketServer as SocketServer
+import _pydev_BaseHTTPServer as BaseHTTPServer
+import sys
+import os
+try:
+ import fcntl
+except ImportError:
+ fcntl = None
+
+def resolve_dotted_attribute(obj, attr, allow_dotted_names=True):
+ """resolve_dotted_attribute(a, 'b.c.d') => a.b.c.d
+
+ Resolves a dotted attribute name to an object. Raises
+ an AttributeError if any attribute in the chain starts with a '_'.
+
+ If the optional allow_dotted_names argument is false, dots are not
+    supported and this function operates similarly to getattr(obj, attr).
+ """
+
+ if allow_dotted_names:
+ attrs = attr.split('.')
+ else:
+ attrs = [attr]
+
+ for i in attrs:
+ if i.startswith('_'):
+ raise AttributeError(
+ 'attempt to access private attribute "%s"' % i
+ )
+ else:
+ obj = getattr(obj, i)
+ return obj
+
+def list_public_methods(obj):
+ """Returns a list of attribute strings, found in the specified
+ object, which represent callable attributes"""
+
+ return [member for member in dir(obj)
+ if not member.startswith('_') and
+ callable(getattr(obj, member))]
+
+def remove_duplicates(lst):
+ """remove_duplicates([2,2,2,1,3,3]) => [3,1,2]
+
+ Returns a copy of a list without duplicates. Every list
+ item must be hashable and the order of the items in the
+ resulting list is not defined.
+ """
+ u = {}
+ for x in lst:
+ u[x] = 1
+
+ return u.keys()
+
+class SimpleXMLRPCDispatcher:
+ """Mix-in class that dispatches XML-RPC requests.
+
+ This class is used to register XML-RPC method handlers
+ and then to dispatch them. There should never be any
+ reason to instantiate this class directly.
+ """
+
+ def __init__(self, allow_none, encoding):
+ self.funcs = {}
+ self.instance = None
+ self.allow_none = allow_none
+ self.encoding = encoding
+
+ def register_instance(self, instance, allow_dotted_names=False):
+ """Registers an instance to respond to XML-RPC requests.
+
+ Only one instance can be installed at a time.
+
+ If the registered instance has a _dispatch method then that
+ method will be called with the name of the XML-RPC method and
+ its parameters as a tuple
+ e.g. instance._dispatch('add',(2,3))
+
+ If the registered instance does not have a _dispatch method
+ then the instance will be searched to find a matching method
+ and, if found, will be called. Methods beginning with an '_'
+ are considered private and will not be called by
+ SimpleXMLRPCServer.
+
+        If a registered function matches an XML-RPC request, then it
+ will be called instead of the registered instance.
+
+ If the optional allow_dotted_names argument is true and the
+ instance does not have a _dispatch method, method names
+ containing dots are supported and resolved, as long as none of
+ the name segments start with an '_'.
+
+ *** SECURITY WARNING: ***
+
+        Enabling the allow_dotted_names option allows intruders
+ to access your module's global variables and may allow
+ intruders to execute arbitrary code on your machine. Only
+ use this option on a secure, closed network.
+
+ """
+
+ self.instance = instance
+ self.allow_dotted_names = allow_dotted_names
+
+ def register_function(self, function, name=None):
+ """Registers a function to respond to XML-RPC requests.
+
+ The optional name argument can be used to set a Unicode name
+ for the function.
+ """
+
+ if name is None:
+ name = function.__name__
+ self.funcs[name] = function
+
+ def register_introspection_functions(self):
+ """Registers the XML-RPC introspection methods in the system
+ namespace.
+
+ see http://xmlrpc.usefulinc.com/doc/reserved.html
+ """
+
+ self.funcs.update({'system.listMethods' : self.system_listMethods,
+ 'system.methodSignature' : self.system_methodSignature,
+ 'system.methodHelp' : self.system_methodHelp})
+
+ def register_multicall_functions(self):
+ """Registers the XML-RPC multicall method in the system
+ namespace.
+
+ see http://www.xmlrpc.com/discuss/msgReader$1208"""
+
+ self.funcs.update({'system.multicall' : self.system_multicall})
+
+ def _marshaled_dispatch(self, data, dispatch_method=None):
+ """Dispatches an XML-RPC method from marshalled (XML) data.
+
+ XML-RPC methods are dispatched from the marshalled (XML) data
+ using the _dispatch method and the result is returned as
+ marshalled data. For backwards compatibility, a dispatch
+ function can be provided as an argument (see comment in
+ SimpleXMLRPCRequestHandler.do_POST) but overriding the
+        existing method through subclassing is the preferred means
+ of changing method dispatch behavior.
+ """
+ try:
+ params, method = xmlrpclib.loads(data)
+
+ # generate response
+ if dispatch_method is not None:
+ response = dispatch_method(method, params)
+ else:
+ response = self._dispatch(method, params)
+ # wrap response in a singleton tuple
+ response = (response,)
+ response = xmlrpclib.dumps(response, methodresponse=1,
+ allow_none=self.allow_none, encoding=self.encoding)
+ except Fault, fault:
+ response = xmlrpclib.dumps(fault, allow_none=self.allow_none,
+ encoding=self.encoding)
+ except:
+ # report exception back to server
+ response = xmlrpclib.dumps(
+ xmlrpclib.Fault(1, "%s:%s" % (sys.exc_type, sys.exc_value)), #@UndefinedVariable exc_value only available when we actually have an exception
+ encoding=self.encoding, allow_none=self.allow_none,
+ )
+
+ return response
+
+ def system_listMethods(self):
+ """system.listMethods() => ['add', 'subtract', 'multiple']
+
+ Returns a list of the methods supported by the server."""
+
+ methods = self.funcs.keys()
+ if self.instance is not None:
+            # Instance can implement _listMethods to return a list of
+ # methods
+ if hasattr(self.instance, '_listMethods'):
+ methods = remove_duplicates(
+ methods + self.instance._listMethods()
+ )
+ # if the instance has a _dispatch method then we
+ # don't have enough information to provide a list
+ # of methods
+ elif not hasattr(self.instance, '_dispatch'):
+ methods = remove_duplicates(
+ methods + list_public_methods(self.instance)
+ )
+ methods.sort()
+ return methods
+
+ def system_methodSignature(self, method_name):
+ """system.methodSignature('add') => [double, int, int]
+
+ Returns a list describing the signature of the method. In the
+ above example, the add method takes two integers as arguments
+ and returns a double result.
+
+ This server does NOT support system.methodSignature."""
+
+ # See http://xmlrpc.usefulinc.com/doc/sysmethodsig.html
+
+ return 'signatures not supported'
+
+ def system_methodHelp(self, method_name):
+ """system.methodHelp('add') => "Adds two integers together"
+
+ Returns a string containing documentation for the specified method."""
+
+ method = None
+ if self.funcs.has_key(method_name):
+ method = self.funcs[method_name]
+ elif self.instance is not None:
+ # Instance can implement _methodHelp to return help for a method
+ if hasattr(self.instance, '_methodHelp'):
+ return self.instance._methodHelp(method_name)
+ # if the instance has a _dispatch method then we
+ # don't have enough information to provide help
+ elif not hasattr(self.instance, '_dispatch'):
+ try:
+ method = resolve_dotted_attribute(
+ self.instance,
+ method_name,
+ self.allow_dotted_names
+ )
+ except AttributeError:
+ pass
+
+ # Note that we aren't checking that the method actually
+ # be a callable object of some kind
+ if method is None:
+ return ""
+ else:
+ try:
+ import pydoc
+ except ImportError:
+ return "" #not there for jython
+ else:
+ return pydoc.getdoc(method)
+
+ def system_multicall(self, call_list):
+ """system.multicall([{'methodName': 'add', 'params': [2, 2]}, ...]) => \
+[[4], ...]
+
+ Allows the caller to package multiple XML-RPC calls into a single
+ request.
+
+ See http://www.xmlrpc.com/discuss/msgReader$1208
+ """
+
+ results = []
+ for call in call_list:
+ method_name = call['methodName']
+ params = call['params']
+
+ try:
+ # XXX A marshalling error in any response will fail the entire
+ # multicall. If someone cares they should fix this.
+ results.append([self._dispatch(method_name, params)])
+ except Fault, fault:
+ results.append(
+ {'faultCode' : fault.faultCode,
+ 'faultString' : fault.faultString}
+ )
+ except:
+ results.append(
+ {'faultCode' : 1,
+ 'faultString' : "%s:%s" % (sys.exc_type, sys.exc_value)} #@UndefinedVariable exc_value only available when we actually have an exception
+ )
+ return results
+
+ def _dispatch(self, method, params):
+ """Dispatches the XML-RPC method.
+
+ XML-RPC calls are forwarded to a registered function that
+ matches the called XML-RPC method name. If no such function
+ exists then the call is forwarded to the registered instance,
+ if available.
+
+ If the registered instance has a _dispatch method then that
+ method will be called with the name of the XML-RPC method and
+ its parameters as a tuple
+ e.g. instance._dispatch('add',(2,3))
+
+ If the registered instance does not have a _dispatch method
+ then the instance will be searched to find a matching method
+ and, if found, will be called.
+
+ Methods beginning with an '_' are considered private and will
+ not be called.
+ """
+
+ func = None
+ try:
+ # check to see if a matching function has been registered
+ func = self.funcs[method]
+ except KeyError:
+ if self.instance is not None:
+ # check for a _dispatch method
+ if hasattr(self.instance, '_dispatch'):
+ return self.instance._dispatch(method, params)
+ else:
+ # call instance method directly
+ try:
+ func = resolve_dotted_attribute(
+ self.instance,
+ method,
+ self.allow_dotted_names
+ )
+ except AttributeError:
+ pass
+
+ if func is not None:
+ return func(*params)
+ else:
+ raise Exception('method "%s" is not supported' % method)
+
+class SimpleXMLRPCRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
+ """Simple XML-RPC request handler class.
+
+ Handles all HTTP POST requests and attempts to decode them as
+ XML-RPC requests.
+ """
+
+ # Class attribute listing the accessible path components;
+ # paths not on this list will result in a 404 error.
+ rpc_paths = ('/', '/RPC2')
+
+ def is_rpc_path_valid(self):
+ if self.rpc_paths:
+ return self.path in self.rpc_paths
+ else:
+ # If .rpc_paths is empty, just assume all paths are legal
+ return True
+
+ def do_POST(self):
+ """Handles the HTTP POST request.
+
+ Attempts to interpret all HTTP POST requests as XML-RPC calls,
+ which are forwarded to the server's _dispatch method for handling.
+ """
+
+ # Check that the path is legal
+ if not self.is_rpc_path_valid():
+ self.report_404()
+ return
+
+ try:
+ # Get arguments by reading body of request.
+ # We read this in chunks to avoid straining
+ # socket.read(); around the 10 or 15Mb mark, some platforms
+ # begin to have problems (bug #792570).
+ max_chunk_size = 10 * 1024 * 1024
+ size_remaining = int(self.headers["content-length"])
+ L = []
+ while size_remaining:
+ chunk_size = min(size_remaining, max_chunk_size)
+ L.append(self.rfile.read(chunk_size))
+ size_remaining -= len(L[-1])
+ data = ''.join(L)
+
+ # In previous versions of SimpleXMLRPCServer, _dispatch
+ # could be overridden in this class, instead of in
+ # SimpleXMLRPCDispatcher. To maintain backwards compatibility,
+ # check to see if a subclass implements _dispatch and dispatch
+ # using that method if present.
+ response = self.server._marshaled_dispatch(
+ data, getattr(self, '_dispatch', None)
+ )
+ except: # This should only happen if the module is buggy
+ # internal error, report as HTTP server error
+ self.send_response(500)
+ self.end_headers()
+ else:
+ # got a valid XML RPC response
+ self.send_response(200)
+ self.send_header("Content-type", "text/xml")
+ self.send_header("Content-length", str(len(response)))
+ self.end_headers()
+ self.wfile.write(response)
+
+ # shut down the connection
+ self.wfile.flush()
+ self.connection.shutdown(1)
+
+ def report_404 (self):
+ # Report a 404 error
+ self.send_response(404)
+ response = 'No such page'
+ self.send_header("Content-type", "text/plain")
+ self.send_header("Content-length", str(len(response)))
+ self.end_headers()
+ self.wfile.write(response)
+ # shut down the connection
+ self.wfile.flush()
+ self.connection.shutdown(1)
+
+ def log_request(self, code='-', size='-'):
+ """Selectively log an accepted request."""
+
+ if self.server.logRequests:
+ BaseHTTPServer.BaseHTTPRequestHandler.log_request(self, code, size)
+
+class SimpleXMLRPCServer(SocketServer.TCPServer,
+ SimpleXMLRPCDispatcher):
+ """Simple XML-RPC server.
+
+ Simple XML-RPC server that allows functions and a single instance
+ to be installed to handle requests. The default implementation
+ attempts to dispatch XML-RPC calls to the functions or instance
+    installed in the server. Override the _dispatch method inherited
+ from SimpleXMLRPCDispatcher to change this behavior.
+ """
+
+ allow_reuse_address = True
+
+ def __init__(self, addr, requestHandler=SimpleXMLRPCRequestHandler,
+ logRequests=True, allow_none=False, encoding=None):
+ self.logRequests = logRequests
+
+ SimpleXMLRPCDispatcher.__init__(self, allow_none, encoding)
+ SocketServer.TCPServer.__init__(self, addr, requestHandler)
+
+ # [Bug #1222790] If possible, set close-on-exec flag; if a
+ # method spawns a subprocess, the subprocess shouldn't have
+ # the listening socket open.
+ if fcntl is not None and hasattr(fcntl, 'FD_CLOEXEC'):
+ flags = fcntl.fcntl(self.fileno(), fcntl.F_GETFD)
+ flags |= fcntl.FD_CLOEXEC
+ fcntl.fcntl(self.fileno(), fcntl.F_SETFD, flags)
+
+class CGIXMLRPCRequestHandler(SimpleXMLRPCDispatcher):
+ """Simple handler for XML-RPC data passed through CGI."""
+
+ def __init__(self, allow_none=False, encoding=None):
+ SimpleXMLRPCDispatcher.__init__(self, allow_none, encoding)
+
+ def handle_xmlrpc(self, request_text):
+ """Handle a single XML-RPC request"""
+
+ response = self._marshaled_dispatch(request_text)
+
+ sys.stdout.write('Content-Type: text/xml\n')
+ sys.stdout.write('Content-Length: %d\n' % len(response))
+ sys.stdout.write('\n')
+
+ sys.stdout.write(response)
+
+ def handle_get(self):
+ """Handle a single HTTP GET request.
+
+ Default implementation indicates an error because
+ XML-RPC uses the POST method.
+ """
+
+ code = 400
+ message, explain = \
+ BaseHTTPServer.BaseHTTPRequestHandler.responses[code]
+
+ response = BaseHTTPServer.DEFAULT_ERROR_MESSAGE % { #@UndefinedVariable
+ 'code' : code,
+ 'message' : message,
+ 'explain' : explain
+ }
+ sys.stdout.write('Status: %d %s\n' % (code, message))
+ sys.stdout.write('Content-Type: text/html\n')
+ sys.stdout.write('Content-Length: %d\n' % len(response))
+ sys.stdout.write('\n')
+
+ sys.stdout.write(response)
+
+ def handle_request(self, request_text=None):
+ """Handle a single XML-RPC request passed through a CGI post method.
+
+ If no XML data is given then it is read from stdin. The resulting
+ XML-RPC response is printed to stdout along with the correct HTTP
+ headers.
+ """
+
+ if request_text is None and \
+ os.environ.get('REQUEST_METHOD', None) == 'GET':
+ self.handle_get()
+ else:
+ # POST data is normally available through stdin
+ if request_text is None:
+ request_text = sys.stdin.read()
+
+ self.handle_xmlrpc(request_text)
+
+if __name__ == '__main__':
+ sys.stdout.write('Running XML-RPC server on port 8000\n')
+ server = SimpleXMLRPCServer(("localhost", 8000))
+ server.register_function(pow)
+ server.register_function(lambda x, y: x + y, 'add')
+ server.serve_forever()
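The dispatch order described in `_dispatch`'s docstring (registered functions first, then the registered instance) can be exercised directly. A sketch, assuming Python 3's `xmlrpc.server`, which provides the same `SimpleXMLRPCDispatcher`; note that `_dispatch` is called here only to illustrate precedence and is normally invoked by the server, not user code:

```python
# Sketch, assuming Python 3's xmlrpc.server, which provides the same
# SimpleXMLRPCDispatcher as the vendored module above.
from xmlrpc.server import SimpleXMLRPCDispatcher

class MathService:
    # Public (non-underscore) methods are reachable; _helpers are not.
    def mul(self, x, y):
        return x * y

disp = SimpleXMLRPCDispatcher(allow_none=True, encoding='utf-8')
disp.register_function(pow)
disp.register_function(lambda x, y: x + y, 'add')
disp.register_instance(MathService())

assert disp._dispatch('add', (2, 3)) == 5      # registered function wins
assert disp._dispatch('pow', (2, 10)) == 1024
assert disp._dispatch('mul', (6, 7)) == 42     # falls back to the instance
```

`MathService` is a hypothetical example class; `mul` resolves through the instance only because no registered function shadows it.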
diff --git a/python/helpers/pydev/_pydev_SocketServer.py b/python/helpers/pydev/_pydev_SocketServer.py
new file mode 100644
index 0000000..c611126
--- /dev/null
+++ b/python/helpers/pydev/_pydev_SocketServer.py
@@ -0,0 +1,715 @@
+"""Generic socket server classes.
+
+This module tries to capture the various aspects of defining a server:
+
+For socket-based servers:
+
+- address family:
+ - AF_INET{,6}: IP (Internet Protocol) sockets (default)
+ - AF_UNIX: Unix domain sockets
+  - others, e.g. AF_DECNET, are conceivable (see <socket.h>)
+- socket type:
+ - SOCK_STREAM (reliable stream, e.g. TCP)
+ - SOCK_DGRAM (datagrams, e.g. UDP)
+
+For request-based servers (including socket-based):
+
+- client address verification before further looking at the request
+ (This is actually a hook for any processing that needs to look
+ at the request before anything else, e.g. logging)
+- how to handle multiple requests:
+ - synchronous (one request is handled at a time)
+ - forking (each request is handled by a new process)
+ - threading (each request is handled by a new thread)
+
+The classes in this module favor the server type that is simplest to
+write: a synchronous TCP/IP server. This is bad class design, but it
+saves some typing. (There's also the issue that a deep class hierarchy
+slows down method lookups.)
+
+There are five classes in an inheritance diagram, four of which represent
+synchronous servers of four types:
+
+ +------------+
+ | BaseServer |
+ +------------+
+ |
+ v
+ +-----------+ +------------------+
+ | TCPServer |------->| UnixStreamServer |
+ +-----------+ +------------------+
+ |
+ v
+ +-----------+ +--------------------+
+ | UDPServer |------->| UnixDatagramServer |
+ +-----------+ +--------------------+
+
+Note that UnixDatagramServer derives from UDPServer, not from
+UnixStreamServer -- the only difference between an IP and a Unix
+stream server is the address family, which is simply repeated in both
+unix server classes.
+
+Forking and threading versions of each type of server can be created
+using the ForkingMixIn and ThreadingMixIn mix-in classes. For
+instance, a threading UDP server class is created as follows:
+
+ class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
+
+The Mix-in class must come first, since it overrides a method defined
+in UDPServer! Setting the various member variables also changes
+the behavior of the underlying server mechanism.
+
+To implement a service, you must derive a class from
+BaseRequestHandler and redefine its handle() method. You can then run
+various versions of the service by combining one of the server classes
+with your request handler class.
+
+The request handler class must be different for datagram or stream
+services. This can be hidden by using the request handler
+subclasses StreamRequestHandler or DatagramRequestHandler.
+
+Of course, you still have to use your head!
+
+For instance, it makes no sense to use a forking server if the service
+contains state in memory that can be modified by requests (since the
+modifications in the child process would never reach the initial state
+kept in the parent process and passed to each child). In this case,
+you can use a threading server, but you will probably have to use
+locks to keep two requests that arrive nearly simultaneously from
+applying conflicting changes to the server state.
+
+On the other hand, if you are building e.g. an HTTP server, where all
+data is stored externally (e.g. in the file system), a synchronous
+class will essentially render the service "deaf" while one request is
+being handled -- which may be for a very long time if a client is slow
+to read all the data it has requested. Here a threading or forking
+server is appropriate.
+
+In some cases, it may be appropriate to process part of a request
+synchronously, but to finish processing in a forked child depending on
+the request data. This can be implemented by using a synchronous
+server and doing an explicit fork in the request handler class
+handle() method.
+
+Another approach to handling multiple simultaneous requests in an
+environment that supports neither threads nor fork (or where these are
+too expensive or inappropriate for the service) is to maintain an
+explicit table of partially finished requests and to use select() to
+decide which request to work on next (or whether to handle a new
+incoming request). This is particularly important for stream services
+where each client can potentially be connected for a long time (if
+threads or subprocesses cannot be used).
+
+Future work:
+- Standard classes for Sun RPC (which uses either UDP or TCP)
+- Standard mix-in classes to implement various authentication
+ and encryption schemes
+- Standard framework for select-based multiplexing
+
+XXX Open problems:
+- What to do with out-of-band data?
+
+BaseServer:
+- split generic "request" functionality out into BaseServer class.
+ Copyright (C) 2000 Luke Kenneth Casson Leighton <[email protected]>
+
+ example: read entries from a SQL database (requires overriding
+ get_request() to return a table entry from the database).
+ The entry is processed by a RequestHandlerClass.
+
+"""
+
+# Author of the BaseServer patch: Luke Kenneth Casson Leighton
+
+# XXX Warning!
+# There is a test suite for this module, but it cannot be run by the
+# standard regression test.
+# To run it manually, run Lib/test/test_socketserver.py.
+
+__version__ = "0.4"
+
+
+import _pydev_socket as socket
+import _pydev_select as select
+import sys
+import os
+try:
+ import _pydev_threading as threading
+except ImportError:
+ import dummy_threading as threading
+
+__all__ = ["TCPServer","UDPServer","ForkingUDPServer","ForkingTCPServer",
+ "ThreadingUDPServer","ThreadingTCPServer","BaseRequestHandler",
+ "StreamRequestHandler","DatagramRequestHandler",
+ "ThreadingMixIn", "ForkingMixIn"]
+if hasattr(socket, "AF_UNIX"):
+ __all__.extend(["UnixStreamServer","UnixDatagramServer",
+ "ThreadingUnixStreamServer",
+ "ThreadingUnixDatagramServer"])
+
+class BaseServer:
+
+ """Base class for server classes.
+
+ Methods for the caller:
+
+ - __init__(server_address, RequestHandlerClass)
+ - serve_forever(poll_interval=0.5)
+ - shutdown()
+ - handle_request() # if you do not use serve_forever()
+ - fileno() -> int # for select()
+
+ Methods that may be overridden:
+
+ - server_bind()
+ - server_activate()
+ - get_request() -> request, client_address
+ - handle_timeout()
+ - verify_request(request, client_address)
+ - server_close()
+ - process_request(request, client_address)
+ - shutdown_request(request)
+ - close_request(request)
+ - handle_error()
+
+ Methods for derived classes:
+
+ - finish_request(request, client_address)
+
+ Class variables that may be overridden by derived classes or
+ instances:
+
+ - timeout
+ - address_family
+ - socket_type
+ - allow_reuse_address
+
+ Instance variables:
+
+ - RequestHandlerClass
+ - socket
+
+ """
+
+ timeout = None
+
+ def __init__(self, server_address, RequestHandlerClass):
+ """Constructor. May be extended, do not override."""
+ self.server_address = server_address
+ self.RequestHandlerClass = RequestHandlerClass
+ self.__is_shut_down = threading.Event()
+ self.__shutdown_request = False
+
+ def server_activate(self):
+ """Called by constructor to activate the server.
+
+ May be overridden.
+
+ """
+ pass
+
+ def serve_forever(self, poll_interval=0.5):
+ """Handle one request at a time until shutdown.
+
+ Polls for shutdown every poll_interval seconds. Ignores
+ self.timeout. If you need to do periodic tasks, do them in
+ another thread.
+ """
+ self.__is_shut_down.clear()
+ try:
+ while not self.__shutdown_request:
+ # XXX: Consider using another file descriptor or
+ # connecting to the socket to wake this up instead of
+ # polling. Polling reduces our responsiveness to a
+ # shutdown request and wastes cpu at all other times.
+ r, w, e = select.select([self], [], [], poll_interval)
+ if self in r:
+ self._handle_request_noblock()
+ finally:
+ self.__shutdown_request = False
+ self.__is_shut_down.set()
+
+ def shutdown(self):
+ """Stops the serve_forever loop.
+
+ Blocks until the loop has finished. This must be called while
+ serve_forever() is running in another thread, or it will
+ deadlock.
+ """
+ self.__shutdown_request = True
+ self.__is_shut_down.wait()
+
+ # The distinction between handling, getting, processing and
+ # finishing a request is fairly arbitrary. Remember:
+ #
+ # - handle_request() is the top-level call. It calls
+ # select, get_request(), verify_request() and process_request()
+ # - get_request() is different for stream or datagram sockets
+ # - process_request() is the place that may fork a new process
+ # or create a new thread to finish the request
+ # - finish_request() instantiates the request handler class;
+ # this constructor will handle the request all by itself
+
+ def handle_request(self):
+ """Handle one request, possibly blocking.
+
+ Respects self.timeout.
+ """
+ # Support people who used socket.settimeout() to escape
+ # handle_request before self.timeout was available.
+ timeout = self.socket.gettimeout()
+ if timeout is None:
+ timeout = self.timeout
+ elif self.timeout is not None:
+ timeout = min(timeout, self.timeout)
+ fd_sets = select.select([self], [], [], timeout)
+ if not fd_sets[0]:
+ self.handle_timeout()
+ return
+ self._handle_request_noblock()
+
+ def _handle_request_noblock(self):
+ """Handle one request, without blocking.
+
+ I assume that select.select has returned that the socket is
+ readable before this function was called, so there should be
+ no risk of blocking in get_request().
+ """
+ try:
+ request, client_address = self.get_request()
+ except socket.error:
+ return
+ if self.verify_request(request, client_address):
+ try:
+ self.process_request(request, client_address)
+ except:
+ self.handle_error(request, client_address)
+ self.shutdown_request(request)
+
+ def handle_timeout(self):
+ """Called if no new request arrives within self.timeout.
+
+ Overridden by ForkingMixIn.
+ """
+ pass
+
+ def verify_request(self, request, client_address):
+ """Verify the request. May be overridden.
+
+ Return True if we should proceed with this request.
+
+ """
+ return True
+
+ def process_request(self, request, client_address):
+ """Call finish_request.
+
+ Overridden by ForkingMixIn and ThreadingMixIn.
+
+ """
+ self.finish_request(request, client_address)
+ self.shutdown_request(request)
+
+ def server_close(self):
+        """Called to clean up the server.
+
+ May be overridden.
+
+ """
+ pass
+
+ def finish_request(self, request, client_address):
+ """Finish one request by instantiating RequestHandlerClass."""
+ self.RequestHandlerClass(request, client_address, self)
+
+ def shutdown_request(self, request):
+ """Called to shutdown and close an individual request."""
+ self.close_request(request)
+
+ def close_request(self, request):
+ """Called to clean up an individual request."""
+ pass
+
+ def handle_error(self, request, client_address):
+ """Handle an error gracefully. May be overridden.
+
+ The default is to print a traceback and continue.
+
+ """
+ print '-'*40
+ print 'Exception happened during processing of request from',
+ print client_address
+ import traceback
+ traceback.print_exc() # XXX But this goes to stderr!
+ print '-'*40
+
+
+class TCPServer(BaseServer):
+
+ """Base class for various socket-based server classes.
+
+ Defaults to synchronous IP stream (i.e., TCP).
+
+ Methods for the caller:
+
+ - __init__(server_address, RequestHandlerClass, bind_and_activate=True)
+ - serve_forever(poll_interval=0.5)
+ - shutdown()
+ - handle_request() # if you don't use serve_forever()
+ - fileno() -> int # for select()
+
+ Methods that may be overridden:
+
+ - server_bind()
+ - server_activate()
+ - get_request() -> request, client_address
+ - handle_timeout()
+ - verify_request(request, client_address)
+ - process_request(request, client_address)
+ - shutdown_request(request)
+ - close_request(request)
+ - handle_error()
+
+ Methods for derived classes:
+
+ - finish_request(request, client_address)
+
+ Class variables that may be overridden by derived classes or
+ instances:
+
+ - timeout
+ - address_family
+ - socket_type
+ - request_queue_size (only for stream sockets)
+ - allow_reuse_address
+
+ Instance variables:
+
+ - server_address
+ - RequestHandlerClass
+ - socket
+
+ """
+
+ address_family = socket.AF_INET
+
+ socket_type = socket.SOCK_STREAM
+
+ request_queue_size = 5
+
+ allow_reuse_address = False
+
+ def __init__(self, server_address, RequestHandlerClass, bind_and_activate=True):
+ """Constructor. May be extended, do not override."""
+ BaseServer.__init__(self, server_address, RequestHandlerClass)
+ self.socket = socket.socket(self.address_family,
+ self.socket_type)
+ if bind_and_activate:
+ self.server_bind()
+ self.server_activate()
+
+ def server_bind(self):
+ """Called by constructor to bind the socket.
+
+ May be overridden.
+
+ """
+ if self.allow_reuse_address:
+ self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ self.socket.bind(self.server_address)
+ self.server_address = self.socket.getsockname()
+
+ def server_activate(self):
+ """Called by constructor to activate the server.
+
+ May be overridden.
+
+ """
+ self.socket.listen(self.request_queue_size)
+
+ def server_close(self):
+        """Called to clean up the server.
+
+ May be overridden.
+
+ """
+ self.socket.close()
+
+ def fileno(self):
+ """Return socket file number.
+
+ Interface required by select().
+
+ """
+ return self.socket.fileno()
+
+ def get_request(self):
+ """Get the request and client address from the socket.
+
+ May be overridden.
+
+ """
+ return self.socket.accept()
+
+ def shutdown_request(self, request):
+ """Called to shutdown and close an individual request."""
+ try:
+ #explicitly shutdown. socket.close() merely releases
+ #the socket and waits for GC to perform the actual close.
+ request.shutdown(socket.SHUT_WR)
+ except socket.error:
+ pass #some platforms may raise ENOTCONN here
+ self.close_request(request)
+
+ def close_request(self, request):
+ """Called to clean up an individual request."""
+ request.close()
+
+
+class UDPServer(TCPServer):
+
+ """UDP server class."""
+
+ allow_reuse_address = False
+
+ socket_type = socket.SOCK_DGRAM
+
+ max_packet_size = 8192
+
+ def get_request(self):
+ data, client_addr = self.socket.recvfrom(self.max_packet_size)
+ return (data, self.socket), client_addr
+
+ def server_activate(self):
+ # No need to call listen() for UDP.
+ pass
+
+ def shutdown_request(self, request):
+ # No need to shutdown anything.
+ self.close_request(request)
+
+ def close_request(self, request):
+ # No need to close anything.
+ pass
+
+class ForkingMixIn:
+
+ """Mix-in class to handle each request in a new process."""
+
+ timeout = 300
+ active_children = None
+ max_children = 40
+
+ def collect_children(self):
+ """Internal routine to wait for children that have exited."""
+ if self.active_children is None: return
+ while len(self.active_children) >= self.max_children:
+ # XXX: This will wait for any child process, not just ones
+ # spawned by this library. This could confuse other
+ # libraries that expect to be able to wait for their own
+ # children.
+ try:
+ pid, status = os.waitpid(0, 0)
+ except os.error:
+ pid = None
+ if pid not in self.active_children: continue
+ self.active_children.remove(pid)
+
+ # XXX: This loop runs more system calls than it ought
+ # to. There should be a way to put the active_children into a
+ # process group and then use os.waitpid(-pgid) to wait for any
+ # of that set, but I couldn't find a way to allocate pgids
+ # that couldn't collide.
+ for child in self.active_children:
+ try:
+ pid, status = os.waitpid(child, os.WNOHANG)
+ except os.error:
+ pid = None
+ if not pid: continue
+ try:
+ self.active_children.remove(pid)
+ except ValueError, e:
+ raise ValueError('%s. x=%d and list=%r' % (e.message, pid,
+ self.active_children))
+
+ def handle_timeout(self):
+ """Wait for zombies after self.timeout seconds of inactivity.
+
+ May be extended, do not override.
+ """
+ self.collect_children()
+
+ def process_request(self, request, client_address):
+ """Fork a new subprocess to process the request."""
+ self.collect_children()
+ pid = os.fork()
+ if pid:
+ # Parent process
+ if self.active_children is None:
+ self.active_children = []
+ self.active_children.append(pid)
+ self.close_request(request) #close handle in parent process
+ return
+ else:
+ # Child process.
+ # This must never return, hence os._exit()!
+ try:
+ self.finish_request(request, client_address)
+ self.shutdown_request(request)
+ os._exit(0)
+ except:
+ try:
+ self.handle_error(request, client_address)
+ self.shutdown_request(request)
+ finally:
+ os._exit(1)
+
+
+class ThreadingMixIn:
+ """Mix-in class to handle each request in a new thread."""
+
+ # Decides how threads will act upon termination of the
+ # main process
+ daemon_threads = False
+
+ def process_request_thread(self, request, client_address):
+ """Same as in BaseServer but as a thread.
+
+ In addition, exception handling is done here.
+
+ """
+ try:
+ self.finish_request(request, client_address)
+ self.shutdown_request(request)
+ except:
+ self.handle_error(request, client_address)
+ self.shutdown_request(request)
+
+ def process_request(self, request, client_address):
+ """Start a new thread to process the request."""
+ t = threading.Thread(target = self.process_request_thread,
+ args = (request, client_address))
+ t.daemon = self.daemon_threads
+ t.start()
+
+
+class ForkingUDPServer(ForkingMixIn, UDPServer): pass
+class ForkingTCPServer(ForkingMixIn, TCPServer): pass
+
+class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
+class ThreadingTCPServer(ThreadingMixIn, TCPServer): pass
+
+if hasattr(socket, 'AF_UNIX'):
+
+ class UnixStreamServer(TCPServer):
+ address_family = socket.AF_UNIX
+
+ class UnixDatagramServer(UDPServer):
+ address_family = socket.AF_UNIX
+
+ class ThreadingUnixStreamServer(ThreadingMixIn, UnixStreamServer): pass
+
+ class ThreadingUnixDatagramServer(ThreadingMixIn, UnixDatagramServer): pass
+
+class BaseRequestHandler:
+
+ """Base class for request handler classes.
+
+ This class is instantiated for each request to be handled. The
+ constructor sets the instance variables request, client_address
+ and server, and then calls the handle() method. To implement a
+ specific service, all you need to do is to derive a class which
+ defines a handle() method.
+
+ The handle() method can find the request as self.request, the
+ client address as self.client_address, and the server (in case it
+ needs access to per-server information) as self.server. Since a
+ separate instance is created for each request, the handle() method
+    can define arbitrary other instance variables.
+
+ """
+
+ def __init__(self, request, client_address, server):
+ self.request = request
+ self.client_address = client_address
+ self.server = server
+ self.setup()
+ try:
+ self.handle()
+ finally:
+ self.finish()
+
+ def setup(self):
+ pass
+
+ def handle(self):
+ pass
+
+ def finish(self):
+ pass
+
+
+# The following two classes make it possible to use the same service
+# class for stream or datagram servers.
+# Each class sets up these instance variables:
+# - rfile: a file object from which the request is read
+# - wfile: a file object to which the reply is written
+# When the handle() method returns, wfile is flushed properly
+
+
+class StreamRequestHandler(BaseRequestHandler):
+
+ """Define self.rfile and self.wfile for stream sockets."""
+
+ # Default buffer sizes for rfile, wfile.
+ # We default rfile to buffered because otherwise it could be
+ # really slow for large data (a getc() call per byte); we make
+ # wfile unbuffered because (a) often after a write() we want to
+ # read and we need to flush the line; (b) big writes to unbuffered
+ # files are typically optimized by stdio even when big reads
+ # aren't.
+ rbufsize = -1
+ wbufsize = 0
+
+ # A timeout to apply to the request socket, if not None.
+ timeout = None
+
+ # Disable nagle algorithm for this socket, if True.
+ # Use only when wbufsize != 0, to avoid small packets.
+ disable_nagle_algorithm = False
+
+ def setup(self):
+ self.connection = self.request
+ if self.timeout is not None:
+ self.connection.settimeout(self.timeout)
+ if self.disable_nagle_algorithm:
+ self.connection.setsockopt(socket.IPPROTO_TCP,
+ socket.TCP_NODELAY, True)
+ self.rfile = self.connection.makefile('rb', self.rbufsize)
+ self.wfile = self.connection.makefile('wb', self.wbufsize)
+
+ def finish(self):
+ if not self.wfile.closed:
+ self.wfile.flush()
+ self.wfile.close()
+ self.rfile.close()
+
+
+class DatagramRequestHandler(BaseRequestHandler):
+
+ # XXX Regrettably, I cannot get this working on Linux;
+ # s.recvfrom() doesn't return a meaningful client address.
+
+ """Define self.rfile and self.wfile for datagram sockets."""
+
+ def setup(self):
+ try:
+ from cStringIO import StringIO
+ except ImportError:
+ from StringIO import StringIO
+ self.packet, self.socket = self.request
+ self.rfile = StringIO(self.packet)
+ self.wfile = StringIO()
+
+ def finish(self):
+ self.socket.sendto(self.wfile.getvalue(), self.client_address)
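The server and handler classes defined in this file mirror the standard library's socketserver API, so their composition can be exercised against the stdlib directly. A minimal sketch (an assumption for illustration, not part of this patch: it uses Python 3's `socketserver` rather than this pydev copy, and `EchoHandler` is a made-up handler name) of a ThreadingTCPServer with a StreamRequestHandler, as discussed in the module docstring above:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # rfile/wfile are created by StreamRequestHandler.setup()
        data = self.rfile.readline()
        self.wfile.write(data)  # wbufsize=0, so this is written out immediately

# Port 0 asks the OS for an ephemeral port; server_address reports the real one.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
host, port = server.server_address

# serve_forever() must run in another thread so shutdown() does not deadlock.
t = threading.Thread(target=server.serve_forever)
t.start()

with socket.create_connection((host, port)) as conn:
    conn.sendall(b"hello\n")
    reply = conn.makefile("rb").readline()

server.shutdown()      # stops the serve_forever() loop
server.server_close()
t.join()
print(reply)  # b'hello\n'
```

Each connection is handled in its own thread by ThreadingMixIn.process_request(), which is why shutdown() from the main thread is safe here.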
diff --git a/python/helpers/pydev/_pydev_execfile.py b/python/helpers/pydev/_pydev_execfile.py
new file mode 100644
index 0000000..1d8e141
--- /dev/null
+++ b/python/helpers/pydev/_pydev_execfile.py
@@ -0,0 +1,37 @@
+#We must redefine it in Py3k if it's not already there
+def execfile(file, glob=None, loc=None):
+ if glob is None:
+ glob = globals()
+ if loc is None:
+ loc = glob
+ stream = open(file, 'rb')
+ try:
+ encoding = None
+ #Get encoding!
+ for _i in range(2):
+ line = stream.readline() #Should not raise an exception even if there are no more contents
+ #Must be a comment line
+ if line.strip().startswith(b'#'):
+ #Don't import re if there's no chance that there's an encoding in the line
+ if b'coding' in line:
+ import re
+ p = re.search(br"coding[:=]\s*([-\w.]+)", line)
+ if p:
+ try:
+ encoding = p.group(1).decode('ascii')
+ break
+ except:
+ encoding = None
+ finally:
+ stream.close()
+
+ if encoding:
+ stream = open(file, encoding=encoding)
+ else:
+ stream = open(file)
+ try:
+ contents = stream.read()
+ finally:
+ stream.close()
+
+ exec(compile(contents+"\n", file, 'exec'), glob, loc) #execute the script
\ No newline at end of file
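The execfile() shim above scans the first two lines for a PEP 263 coding cookie before re-opening the file with the detected encoding. A self-contained sketch of just that detection step (`detect_coding` is a hypothetical helper name, not part of the patch; it reuses the shim's regex):

```python
import os
import re
import tempfile

def detect_coding(path):
    """Return the PEP 263 coding cookie from a source file, or None."""
    with open(path, 'rb') as stream:
        for _ in range(2):                # the cookie must be on line 1 or 2
            line = stream.readline()
            # Only comment lines can carry a cookie; skip the regex otherwise.
            if line.strip().startswith(b'#') and b'coding' in line:
                m = re.search(br"coding[:=]\s*([-\w.]+)", line)
                if m:
                    return m.group(1).decode('ascii')
    return None

# Write a throwaway script with an explicit coding line and inspect it.
fd, path = tempfile.mkstemp(suffix='.py')
with os.fdopen(fd, 'wb') as f:
    f.write(b"# -*- coding: latin-1 -*-\nx = 1\n")
enc = detect_coding(path)
os.remove(path)
print(enc)  # latin-1
```

The shim reads in binary mode for exactly this reason: the cookie has to be found before the file's text encoding is known.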
diff --git a/python/helpers/pydev/_pydev_inspect.py b/python/helpers/pydev/_pydev_inspect.py
new file mode 100644
index 0000000..5fd33d8
--- /dev/null
+++ b/python/helpers/pydev/_pydev_inspect.py
@@ -0,0 +1,788 @@
+"""Get useful information from live Python objects.
+
+This module encapsulates the interface provided by the internal special
+attributes (func_*, co_*, im_*, tb_*, etc.) in a friendlier fashion.
+It also provides some help for examining source code and class layout.
+
+Here are some of the useful functions provided by this module:
+
+ ismodule(), isclass(), ismethod(), isfunction(), istraceback(),
+ isframe(), iscode(), isbuiltin(), isroutine() - check object types
+ getmembers() - get members of an object that satisfy a given condition
+
+ getfile(), getsourcefile(), getsource() - find an object's source code
+ getdoc(), getcomments() - get documentation on an object
+ getmodule() - determine the module that an object came from
+ getclasstree() - arrange classes so as to represent their hierarchy
+
+ getargspec(), getargvalues() - get info about function arguments
+ formatargspec(), formatargvalues() - format an argument spec
+ getouterframes(), getinnerframes() - get info about frames
+ currentframe() - get the current stack frame
+ stack(), trace() - get info about frames on the stack or in a traceback
+"""
+
+# This module is in the public domain. No warranties.
+
+__author__ = 'Ka-Ping Yee <[email protected]>'
+__date__ = '1 Jan 2001'
+
+import sys, os, types, string, re, imp, tokenize
+
+# ----------------------------------------------------------- type-checking
+def ismodule(object):
+ """Return true if the object is a module.
+
+ Module objects provide these attributes:
+ __doc__ documentation string
+ __file__ filename (missing for built-in modules)"""
+ return isinstance(object, types.ModuleType)
+
+def isclass(object):
+ """Return true if the object is a class.
+
+ Class objects provide these attributes:
+ __doc__ documentation string
+ __module__ name of module in which this class was defined"""
+ return isinstance(object, types.ClassType) or hasattr(object, '__bases__')
+
+def ismethod(object):
+ """Return true if the object is an instance method.
+
+ Instance method objects provide these attributes:
+ __doc__ documentation string
+ __name__ name with which this method was defined
+ im_class class object in which this method belongs
+ im_func function object containing implementation of method
+ im_self instance to which this method is bound, or None"""
+ return isinstance(object, types.MethodType)
+
+def ismethoddescriptor(object):
+ """Return true if the object is a method descriptor.
+
+ But not if ismethod() or isclass() or isfunction() are true.
+
+ This is new in Python 2.2, and, for example, is true of int.__add__.
+ An object passing this test has a __get__ attribute but not a __set__
+ attribute, but beyond that the set of attributes varies. __name__ is
+ usually sensible, and __doc__ often is.
+
+ Methods implemented via descriptors that also pass one of the other
+ tests return false from the ismethoddescriptor() test, simply because
+ the other tests promise more -- you can, e.g., count on having the
+ im_func attribute (etc) when an object passes ismethod()."""
+ return (hasattr(object, "__get__")
+ and not hasattr(object, "__set__") # else it's a data descriptor
+ and not ismethod(object) # mutual exclusion
+ and not isfunction(object)
+ and not isclass(object))
+
+def isfunction(object):
+ """Return true if the object is a user-defined function.
+
+ Function objects provide these attributes:
+ __doc__ documentation string
+ __name__ name with which this function was defined
+ func_code code object containing compiled function bytecode
+ func_defaults tuple of any default values for arguments
+ func_doc (same as __doc__)
+ func_globals global namespace in which this function was defined
+ func_name (same as __name__)"""
+ return isinstance(object, types.FunctionType)
+
+def istraceback(object):
+ """Return true if the object is a traceback.
+
+ Traceback objects provide these attributes:
+ tb_frame frame object at this level
+ tb_lasti index of last attempted instruction in bytecode
+ tb_lineno current line number in Python source code
+ tb_next next inner traceback object (called by this level)"""
+ return isinstance(object, types.TracebackType)
+
+def isframe(object):
+ """Return true if the object is a frame object.
+
+ Frame objects provide these attributes:
+ f_back next outer frame object (this frame's caller)
+ f_builtins built-in namespace seen by this frame
+ f_code code object being executed in this frame
+ f_exc_traceback traceback if raised in this frame, or None
+ f_exc_type exception type if raised in this frame, or None
+ f_exc_value exception value if raised in this frame, or None
+ f_globals global namespace seen by this frame
+ f_lasti index of last attempted instruction in bytecode
+ f_lineno current line number in Python source code
+ f_locals local namespace seen by this frame
+ f_restricted 0 or 1 if frame is in restricted execution mode
+ f_trace tracing function for this frame, or None"""
+ return isinstance(object, types.FrameType)
+
+def iscode(object):
+ """Return true if the object is a code object.
+
+ Code objects provide these attributes:
+ co_argcount number of arguments (not including * or ** args)
+ co_code string of raw compiled bytecode
+ co_consts tuple of constants used in the bytecode
+ co_filename name of file in which this code object was created
+ co_firstlineno number of first line in Python source code
+ co_flags bitmap: 1=optimized | 2=newlocals | 4=*arg | 8=**arg
+ co_lnotab encoded mapping of line numbers to bytecode indices
+ co_name name with which this code object was defined
+        co_names          tuple of names other than arguments and function locals
+ co_nlocals number of local variables
+ co_stacksize virtual machine stack space required
+ co_varnames tuple of names of arguments and local variables"""
+ return isinstance(object, types.CodeType)
+
+def isbuiltin(object):
+ """Return true if the object is a built-in function or method.
+
+ Built-in functions and methods provide these attributes:
+ __doc__ documentation string
+ __name__ original name of this function or method
+ __self__ instance to which a method is bound, or None"""
+ return isinstance(object, types.BuiltinFunctionType)
+
+def isroutine(object):
+ """Return true if the object is any kind of function or method."""
+ return (isbuiltin(object)
+ or isfunction(object)
+ or ismethod(object)
+ or ismethoddescriptor(object))
+
+def getmembers(object, predicate=None):
+ """Return all members of an object as (name, value) pairs sorted by name.
+ Optionally, only return members that satisfy a given predicate."""
+ results = []
+ for key in dir(object):
+ value = getattr(object, key)
+ if not predicate or predicate(value):
+ results.append((key, value))
+ results.sort()
+ return results
+
+def classify_class_attrs(cls):
+ """Return list of attribute-descriptor tuples.
+
+ For each name in dir(cls), the return list contains a 4-tuple
+ with these elements:
+
+ 0. The name (a string).
+
+ 1. The kind of attribute this is, one of these strings:
+ 'class method' created via classmethod()
+ 'static method' created via staticmethod()
+ 'property' created via property()
+ 'method' any other flavor of method
+ 'data' not a method
+
+ 2. The class which defined this attribute (a class).
+
+ 3. The object as obtained directly from the defining class's
+ __dict__, not via getattr. This is especially important for
+ data attributes: C.data is just a data object, but
+ C.__dict__['data'] may be a data descriptor with additional
+ info, like a __doc__ string.
+ """
+
+ mro = getmro(cls)
+ names = dir(cls)
+ result = []
+ for name in names:
+ # Get the object associated with the name.
+ # Getting an obj from the __dict__ sometimes reveals more than
+ # using getattr. Static and class methods are dramatic examples.
+ if name in cls.__dict__:
+ obj = cls.__dict__[name]
+ else:
+ obj = getattr(cls, name)
+
+ # Figure out where it was defined.
+ homecls = getattr(obj, "__objclass__", None)
+ if homecls is None:
+ # search the dicts.
+ for base in mro:
+ if name in base.__dict__:
+ homecls = base
+ break
+
+ # Get the object again, in order to get it from the defining
+ # __dict__ instead of via getattr (if possible).
+ if homecls is not None and name in homecls.__dict__:
+ obj = homecls.__dict__[name]
+
+ # Also get the object via getattr.
+ obj_via_getattr = getattr(cls, name)
+
+ # Classify the object.
+ if isinstance(obj, staticmethod):
+ kind = "static method"
+ elif isinstance(obj, classmethod):
+ kind = "class method"
+ elif isinstance(obj, property):
+ kind = "property"
+ elif (ismethod(obj_via_getattr) or
+ ismethoddescriptor(obj_via_getattr)):
+ kind = "method"
+ else:
+ kind = "data"
+
+ result.append((name, kind, homecls, obj))
+
+ return result
+
+# ----------------------------------------------------------- class helpers
+def _searchbases(cls, accum):
+ # Simulate the "classic class" search order.
+ if cls in accum:
+ return
+ accum.append(cls)
+ for base in cls.__bases__:
+ _searchbases(base, accum)
+
+def getmro(cls):
+ "Return tuple of base classes (including cls) in method resolution order."
+ if hasattr(cls, "__mro__"):
+ return cls.__mro__
+ else:
+ result = []
+ _searchbases(cls, result)
+ return tuple(result)
+
+# -------------------------------------------------- source code extraction
+def indentsize(line):
+ """Return the indent size, in spaces, at the start of a line of text."""
+ expline = string.expandtabs(line)
+ return len(expline) - len(string.lstrip(expline))
+
+def getdoc(object):
+ """Get the documentation string for an object.
+
+ All tabs are expanded to spaces. To clean up docstrings that are
+    indented to line up with blocks of code, any whitespace that can be
+ uniformly removed from the second line onwards is removed."""
+ try:
+ doc = object.__doc__
+ except AttributeError:
+ return None
+ if not isinstance(doc, (str, unicode)):
+ return None
+ try:
+ lines = string.split(string.expandtabs(doc), '\n')
+ except UnicodeError:
+ return None
+ else:
+ margin = None
+ for line in lines[1:]:
+ content = len(string.lstrip(line))
+ if not content: continue
+ indent = len(line) - content
+ if margin is None: margin = indent
+ else: margin = min(margin, indent)
+ if margin is not None:
+ for i in range(1, len(lines)): lines[i] = lines[i][margin:]
+ return string.join(lines, '\n')
+
+def getfile(object):
+ """Work out which source or compiled file an object was defined in."""
+ if ismodule(object):
+ if hasattr(object, '__file__'):
+ return object.__file__
+ raise TypeError, 'arg is a built-in module'
+ if isclass(object):
+ object = sys.modules.get(object.__module__)
+ if hasattr(object, '__file__'):
+ return object.__file__
+ raise TypeError, 'arg is a built-in class'
+ if ismethod(object):
+ object = object.im_func
+ if isfunction(object):
+ object = object.func_code
+ if istraceback(object):
+ object = object.tb_frame
+ if isframe(object):
+ object = object.f_code
+ if iscode(object):
+ return object.co_filename
+ raise TypeError, 'arg is not a module, class, method, ' \
+ 'function, traceback, frame, or code object'
+
+def getmoduleinfo(path):
+ """Get the module name, suffix, mode, and module type for a given file."""
+ filename = os.path.basename(path)
+ suffixes = map(lambda (suffix, mode, mtype):
+ (-len(suffix), suffix, mode, mtype), imp.get_suffixes())
+ suffixes.sort() # try longest suffixes first, in case they overlap
+ for neglen, suffix, mode, mtype in suffixes:
+ if filename[neglen:] == suffix:
+ return filename[:neglen], suffix, mode, mtype
+
+def getmodulename(path):
+ """Return the module name for a given file, or None."""
+ info = getmoduleinfo(path)
+ if info: return info[0]
+
+def getsourcefile(object):
+ """Return the Python source file an object was defined in, if it exists."""
+ filename = getfile(object)
+ if string.lower(filename[-4:]) in ['.pyc', '.pyo']:
+ filename = filename[:-4] + '.py'
+ for suffix, mode, kind in imp.get_suffixes():
+ if 'b' in mode and string.lower(filename[-len(suffix):]) == suffix:
+ # Looks like a binary file. We want to only return a text file.
+ return None
+ if os.path.exists(filename):
+ return filename
+
+def getabsfile(object):
+ """Return an absolute path to the source or compiled file for an object.
+
+ The idea is for each object to have a unique origin, so this routine
+ normalizes the result as much as possible."""
+ return os.path.normcase(
+ os.path.abspath(getsourcefile(object) or getfile(object)))
+
+modulesbyfile = {}
+
+def getmodule(object):
+ """Return the module an object was defined in, or None if not found."""
+ if ismodule(object):
+ return object
+ if isclass(object):
+ return sys.modules.get(object.__module__)
+ try:
+ file = getabsfile(object)
+ except TypeError:
+ return None
+ if modulesbyfile.has_key(file):
+ return sys.modules[modulesbyfile[file]]
+ for module in sys.modules.values():
+ if hasattr(module, '__file__'):
+ modulesbyfile[getabsfile(module)] = module.__name__
+ if modulesbyfile.has_key(file):
+ return sys.modules[modulesbyfile[file]]
+ main = sys.modules['__main__']
+ if hasattr(main, object.__name__):
+ mainobject = getattr(main, object.__name__)
+ if mainobject is object:
+ return main
+ builtin = sys.modules['__builtin__']
+ if hasattr(builtin, object.__name__):
+ builtinobject = getattr(builtin, object.__name__)
+ if builtinobject is object:
+ return builtin
+
+def findsource(object):
+ """Return the entire source file and starting line number for an object.
+
+ The argument may be a module, class, method, function, traceback, frame,
+ or code object. The source code is returned as a list of all the lines
+ in the file and the line number indexes a line in that list. An IOError
+ is raised if the source code cannot be retrieved."""
+ try:
+ file = open(getsourcefile(object))
+ except (TypeError, IOError):
+ raise IOError, 'could not get source code'
+ lines = file.readlines()
+ file.close()
+
+ if ismodule(object):
+ return lines, 0
+
+ if isclass(object):
+ name = object.__name__
+ pat = re.compile(r'^\s*class\s*' + name + r'\b')
+ for i in range(len(lines)):
+ if pat.match(lines[i]): return lines, i
+ else: raise IOError, 'could not find class definition'
+
+ if ismethod(object):
+ object = object.im_func
+ if isfunction(object):
+ object = object.func_code
+ if istraceback(object):
+ object = object.tb_frame
+ if isframe(object):
+ object = object.f_code
+ if iscode(object):
+ if not hasattr(object, 'co_firstlineno'):
+ raise IOError, 'could not find function definition'
+ lnum = object.co_firstlineno - 1
+ pat = re.compile(r'^(\s*def\s)|(.*\slambda(:|\s))')
+ while lnum > 0:
+ if pat.match(lines[lnum]): break
+ lnum = lnum - 1
+ return lines, lnum
+ raise IOError, 'could not find code object'
+
+def getcomments(object):
+ """Get lines of comments immediately preceding an object's source code."""
+ try: lines, lnum = findsource(object)
+ except IOError: return None
+
+ if ismodule(object):
+ # Look for a comment block at the top of the file.
+ start = 0
+ if lines and lines[0][:2] == '#!': start = 1
+ while start < len(lines) and string.strip(lines[start]) in ['', '#']:
+ start = start + 1
+ if start < len(lines) and lines[start][:1] == '#':
+ comments = []
+ end = start
+ while end < len(lines) and lines[end][:1] == '#':
+ comments.append(string.expandtabs(lines[end]))
+ end = end + 1
+ return string.join(comments, '')
+
+ # Look for a preceding block of comments at the same indentation.
+ elif lnum > 0:
+ indent = indentsize(lines[lnum])
+ end = lnum - 1
+ if end >= 0 and string.lstrip(lines[end])[:1] == '#' and \
+ indentsize(lines[end]) == indent:
+ comments = [string.lstrip(string.expandtabs(lines[end]))]
+ if end > 0:
+ end = end - 1
+ comment = string.lstrip(string.expandtabs(lines[end]))
+ while comment[:1] == '#' and indentsize(lines[end]) == indent:
+ comments[:0] = [comment]
+ end = end - 1
+ if end < 0: break
+ comment = string.lstrip(string.expandtabs(lines[end]))
+ while comments and string.strip(comments[0]) == '#':
+ comments[:1] = []
+ while comments and string.strip(comments[-1]) == '#':
+ comments[-1:] = []
+ return string.join(comments, '')
+
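The preceding-comment walk above can be sketched in a few lines of Python 3. This simplified version only checks that each line starts with `#`, skipping the indentation bookkeeping that `getcomments` performs; `comments_before` is an illustrative name, not part of this module.

```python
def comments_before(lines, lnum):
    # Walk upward from lines[lnum], collecting the contiguous run of
    # '#' lines immediately above it, and return them in file order.
    collected = []
    end = lnum - 1
    while end >= 0 and lines[end].lstrip().startswith('#'):
        collected.insert(0, lines[end].lstrip())
        end -= 1
    return ''.join(collected)

src = [
    "# Adds one.\n",
    "# Kept deliberately simple.\n",
    "def bump(x):\n",
    "    return x + 1\n",
]
```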
+class ListReader:
+ """Provide a readline() method to return lines from a list of strings."""
+ def __init__(self, lines):
+ self.lines = lines
+ self.index = 0
+
+ def readline(self):
+ i = self.index
+ if i < len(self.lines):
+ self.index = i + 1
+ return self.lines[i]
+ else: return ''
+
+class EndOfBlock(Exception): pass
+
+class BlockFinder:
+ """Provide a tokeneater() method to detect the end of a code block."""
+ def __init__(self):
+ self.indent = 0
+ self.started = 0
+ self.last = 0
+
+ def tokeneater(self, type, token, (srow, scol), (erow, ecol), line):
+ if not self.started:
+ if type == tokenize.NAME: self.started = 1
+ elif type == tokenize.NEWLINE:
+ self.last = srow
+ elif type == tokenize.INDENT:
+ self.indent = self.indent + 1
+ elif type == tokenize.DEDENT:
+ self.indent = self.indent - 1
+ if self.indent == 0: raise EndOfBlock, self.last
+ elif type == tokenize.NAME and scol == 0:
+ raise EndOfBlock, self.last
+
+def getblock(lines):
+ """Extract the block of code at the top of the given list of lines."""
+ try:
+ tokenize.tokenize(ListReader(lines).readline, BlockFinder().tokeneater)
+ except EndOfBlock, eob:
+ return lines[:eob.args[0]]
+ # Fooling the indent/dedent logic implies a one-line definition
+ return lines[:1]
+
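The `BlockFinder`/`getblock` idea — feed the lines to the tokenizer and stop when INDENT/DEDENT bookkeeping says the top-level block is closed — still works on Python 3's `tokenize.generate_tokens`. A minimal sketch (`top_block` is an illustrative name, and the lambda/one-liner handling of the original is omitted):

```python
import io
import tokenize

def top_block(lines):
    # Return the first complete def/class block from a list of source
    # lines: count INDENT/DEDENT tokens and stop when indentation
    # returns to zero, remembering the row of the last logical NEWLINE.
    depth = 0
    last = 1
    readline = io.StringIO(''.join(lines)).readline
    for tok in tokenize.generate_tokens(readline):
        if tok.type == tokenize.NEWLINE:
            last = tok.start[0]
        elif tok.type == tokenize.INDENT:
            depth += 1
        elif tok.type == tokenize.DEDENT:
            depth -= 1
            if depth == 0:
                return lines[:last]
    return lines[:1]

src = ["def f(x):\n", "    return x + 1\n", "\n", "print('after')\n"]
block = top_block(src)
```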
+def getsourcelines(object):
+ """Return a list of source lines and starting line number for an object.
+
+ The argument may be a module, class, method, function, traceback, frame,
+ or code object. The source code is returned as a list of the lines
+ corresponding to the object and the line number indicates where in the
+ original source file the first line of code was found. An IOError is
+ raised if the source code cannot be retrieved."""
+ lines, lnum = findsource(object)
+
+ if ismodule(object): return lines, 0
+ else: return getblock(lines[lnum:]), lnum + 1
+
+def getsource(object):
+ """Return the text of the source code for an object.
+
+ The argument may be a module, class, method, function, traceback, frame,
+ or code object. The source code is returned as a single string. An
+ IOError is raised if the source code cannot be retrieved."""
+ lines, lnum = getsourcelines(object)
+ return string.join(lines, '')
+
+# --------------------------------------------------- class tree extraction
+def walktree(classes, children, parent):
+ """Recursive helper function for getclasstree()."""
+ results = []
+ classes.sort(lambda a, b: cmp(a.__name__, b.__name__))
+ for c in classes:
+ results.append((c, c.__bases__))
+ if children.has_key(c):
+ results.append(walktree(children[c], children, c))
+ return results
+
+def getclasstree(classes, unique=0):
+ """Arrange the given list of classes into a hierarchy of nested lists.
+
+ Where a nested list appears, it contains classes derived from the class
+ whose entry immediately precedes the list. Each entry is a 2-tuple
+ containing a class and a tuple of its base classes. If the 'unique'
+ argument is true, exactly one entry appears in the returned structure
+ for each class in the given list. Otherwise, classes using multiple
+ inheritance and their descendants will appear multiple times."""
+ children = {}
+ roots = []
+ for c in classes:
+ if c.__bases__:
+ for parent in c.__bases__:
+ if not children.has_key(parent):
+ children[parent] = []
+ children[parent].append(c)
+ if unique and parent in classes: break
+ elif c not in roots:
+ roots.append(c)
+ for parent in children.keys():
+ if parent not in classes:
+ roots.append(parent)
+ return walktree(roots, children, None)
+
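The same nested-list layout is still produced by the standard library's `inspect.getclasstree`, which this helper mirrors; a base class outside the given list (here `object`) is added as a root:

```python
import inspect

class A(object): pass
class B(A): pass
class C(A): pass

# Each entry is (class, bases); a nested list immediately after an
# entry holds that class's subclasses.
tree = inspect.getclasstree([A, B, C])
```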
+# ------------------------------------------------ argument list extraction
+# These constants are from Python's compile.h.
+CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS = 1, 2, 4, 8
+
+def getargs(co):
+ """Get information about the arguments accepted by a code object.
+
+ Three things are returned: (args, varargs, varkw), where 'args' is
+ a list of argument names (possibly containing nested lists), and
+ 'varargs' and 'varkw' are the names of the * and ** arguments or None."""
+ if not iscode(co): raise TypeError, 'arg is not a code object'
+
+ nargs = co.co_argcount
+ names = co.co_varnames
+ args = list(names[:nargs])
+ step = 0
+
+ # The following acrobatics are for anonymous (tuple) arguments.
+ if not sys.platform.startswith('java'):  # Jython doesn't have co_code
+ code = co.co_code
+ import dis
+ for i in range(nargs):
+ if args[i][:1] in ['', '.']:
+ stack, remain, count = [], [], []
+ while step < len(code):
+ op = ord(code[step])
+ step = step + 1
+ if op >= dis.HAVE_ARGUMENT:
+ opname = dis.opname[op]
+ value = ord(code[step]) + ord(code[step + 1]) * 256
+ step = step + 2
+ if opname in ['UNPACK_TUPLE', 'UNPACK_SEQUENCE']:
+ remain.append(value)
+ count.append(value)
+ elif opname == 'STORE_FAST':
+ stack.append(names[value])
+ remain[-1] = remain[-1] - 1
+ while remain[-1] == 0:
+ remain.pop()
+ size = count.pop()
+ stack[-size:] = [stack[-size:]]
+ if not remain: break
+ remain[-1] = remain[-1] - 1
+ if not remain: break
+ args[i] = stack[0]
+
+ varargs = None
+ if co.co_flags & CO_VARARGS:
+ varargs = co.co_varnames[nargs]
+ nargs = nargs + 1
+ varkw = None
+ if co.co_flags & CO_VARKEYWORDS:
+ varkw = co.co_varnames[nargs]
+ return args, varargs, varkw
+
+def getargspec(func):
+ """Get the names and default values of a function's arguments.
+
+ A tuple of four things is returned: (args, varargs, varkw, defaults).
+ 'args' is a list of the argument names (it may contain nested lists).
+ 'varargs' and 'varkw' are the names of the * and ** arguments or None.
+ 'defaults' is an n-tuple of the default values of the last n arguments."""
+ if ismethod(func):
+ func = func.im_func
+ if not isfunction(func): raise TypeError, 'arg is not a Python function'
+ args, varargs, varkw = getargs(func.func_code)
+ return args, varargs, varkw, func.func_defaults
+
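Python 3 dropped `im_func`/`func_code` and eventually `getargspec` itself; the modern standard-library equivalent of the `(args, varargs, varkw, defaults)` tuple returned here is `inspect.getfullargspec`:

```python
import inspect

def greet(name, greeting='hello', *rest, **extra):
    # Trivial example function so the spec has all four pieces.
    return '%s, %s' % (greeting, name)

spec = inspect.getfullargspec(greet)
```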
+def getargvalues(frame):
+ """Get information about arguments passed into a particular frame.
+
+ A tuple of four things is returned: (args, varargs, varkw, locals).
+ 'args' is a list of the argument names (it may contain nested lists).
+ 'varargs' and 'varkw' are the names of the * and ** arguments or None.
+ 'locals' is the locals dictionary of the given frame."""
+ args, varargs, varkw = getargs(frame.f_code)
+ return args, varargs, varkw, frame.f_locals
+
+def joinseq(seq):
+ if len(seq) == 1:
+ return '(' + seq[0] + ',)'
+ else:
+ return '(' + string.join(seq, ', ') + ')'
+
+def strseq(object, convert, join=joinseq):
+ """Recursively walk a sequence, stringifying each element."""
+ if type(object) in [types.ListType, types.TupleType]:
+ return join(map(lambda o, c=convert, j=join: strseq(o, c, j), object))
+ else:
+ return convert(object)
+
+def formatargspec(args, varargs=None, varkw=None, defaults=None,
+ formatarg=str,
+ formatvarargs=lambda name: '*' + name,
+ formatvarkw=lambda name: '**' + name,
+ formatvalue=lambda value: '=' + repr(value),
+ join=joinseq):
+ """Format an argument spec from the 4 values returned by getargspec.
+
+ The first four arguments are (args, varargs, varkw, defaults). The
+ other four arguments are the corresponding optional formatting functions
+ that are called to turn names and values into strings. The ninth
+ argument is an optional function to format the sequence of arguments."""
+ specs = []
+ if defaults:
+ firstdefault = len(args) - len(defaults)
+ for i in range(len(args)):
+ spec = strseq(args[i], formatarg, join)
+ if defaults and i >= firstdefault:
+ spec = spec + formatvalue(defaults[i - firstdefault])
+ specs.append(spec)
+ if varargs:
+ specs.append(formatvarargs(varargs))
+ if varkw:
+ specs.append(formatvarkw(varkw))
+ return '(' + string.join(specs, ', ') + ')'
+
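The formatting rule — defaults attach to the last n positional names, then `*` and `**` entries are appended — can be shown with a flat Python 3 sketch (`format_spec` is an illustrative name; it ignores the nested-tuple arguments the original supports via `strseq`):

```python
def format_spec(args, varargs=None, varkw=None, defaults=None):
    # Rebuild a '(a, b=1, *c, **d)' signature string from the four
    # getargspec-style values.
    specs = []
    firstdefault = len(args) - len(defaults or ())
    for i, name in enumerate(args):
        spec = name
        if defaults and i >= firstdefault:
            spec += '=' + repr(defaults[i - firstdefault])
        specs.append(spec)
    if varargs:
        specs.append('*' + varargs)
    if varkw:
        specs.append('**' + varkw)
    return '(' + ', '.join(specs) + ')'

sig = format_spec(['a', 'b'], 'rest', 'extra', (1,))
```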
+def formatargvalues(args, varargs, varkw, locals,
+ formatarg=str,
+ formatvarargs=lambda name: '*' + name,
+ formatvarkw=lambda name: '**' + name,
+ formatvalue=lambda value: '=' + repr(value),
+ join=joinseq):
+ """Format an argument spec from the 4 values returned by getargvalues.
+
+ The first four arguments are (args, varargs, varkw, locals). The
+ next four arguments are the corresponding optional formatting functions
+ that are called to turn names and values into strings. The ninth
+ argument is an optional function to format the sequence of arguments."""
+ def convert(name, locals=locals,
+ formatarg=formatarg, formatvalue=formatvalue):
+ return formatarg(name) + formatvalue(locals[name])
+ specs = []
+ for i in range(len(args)):
+ specs.append(strseq(args[i], convert, join))
+ if varargs:
+ specs.append(formatvarargs(varargs) + formatvalue(locals[varargs]))
+ if varkw:
+ specs.append(formatvarkw(varkw) + formatvalue(locals[varkw]))
+ return '(' + string.join(specs, ', ') + ')'
+
+# -------------------------------------------------- stack frame extraction
+def getframeinfo(frame, context=1):
+ """Get information about a frame or traceback object.
+
+ A tuple of five things is returned: the filename, the line number of
+ the current line, the function name, a list of lines of context from
+ the source code, and the index of the current line within that list.
+ The optional second argument specifies the number of lines of context
+ to return, which are centered around the current line."""
+ raise NotImplementedError
+# if istraceback(frame):
+# frame = frame.tb_frame
+# if not isframe(frame):
+# raise TypeError, 'arg is not a frame or traceback object'
+#
+# filename = getsourcefile(frame)
+# lineno = getlineno(frame)
+# if context > 0:
+# start = lineno - 1 - context//2
+# try:
+# lines, lnum = findsource(frame)
+# except IOError:
+# lines = index = None
+# else:
+# start = max(start, 1)
+# start = min(start, len(lines) - context)
+# lines = lines[start:start+context]
+# index = lineno - 1 - start
+# else:
+# lines = index = None
+#
+# return (filename, lineno, frame.f_code.co_name, lines, index)
+
+def getlineno(frame):
+ """Get the line number from a frame object, allowing for optimization."""
+ # Written by Marc-André Lemburg; revised by Jim Hugunin and Fredrik Lundh.
+ lineno = frame.f_lineno
+ code = frame.f_code
+ if hasattr(code, 'co_lnotab'):
+ table = code.co_lnotab
+ lineno = code.co_firstlineno
+ addr = 0
+ for i in range(0, len(table), 2):
+ addr = addr + ord(table[i])
+ if addr > frame.f_lasti: break
+ lineno = lineno + ord(table[i + 1])
+ return lineno
+
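The `co_lnotab` walk above pairs byte-offset increments with line-number increments and stops at the first address past `f_lasti`. A self-contained sketch of the same loop over a synthetic table (CPython 3.10+ replaced `co_lnotab` with `co_lines()`, so a hand-built table is used here):

```python
def line_for_offset(firstlineno, lnotab, lasti):
    # Walk (byte_increment, line_increment) pairs, as getlineno() does,
    # stopping once the accumulated address passes lasti.
    lineno = firstlineno
    addr = 0
    for i in range(0, len(lnotab), 2):
        addr += lnotab[i]
        if addr > lasti:
            break
        lineno += lnotab[i + 1]
    return lineno

# Synthetic table: 6 bytes of code on the first line, 8 on the next.
table = bytes([6, 1, 8, 1])
```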
+def getouterframes(frame, context=1):
+ """Get a list of records for a frame and all higher (calling) frames.
+
+ Each record contains a frame object, filename, line number, function
+ name, a list of lines of context, and index within the context."""
+ framelist = []
+ while frame:
+ framelist.append((frame,) + getframeinfo(frame, context))
+ frame = frame.f_back
+ return framelist
+
+def getinnerframes(tb, context=1):
+ """Get a list of records for a traceback's frame and all lower frames.
+
+ Each record contains a frame object, filename, line number, function
+ name, a list of lines of context, and index within the context."""
+ framelist = []
+ while tb:
+ framelist.append((tb.tb_frame,) + getframeinfo(tb, context))
+ tb = tb.tb_next
+ return framelist
+
+def currentframe():
+ """Return the frame object for the caller's stack frame."""
+ try:
+ raise 'catch me'
+ except:
+ return sys.exc_traceback.tb_frame.f_back #@UndefinedVariable
+
+if hasattr(sys, '_getframe'): currentframe = sys._getframe
+
+def stack(context=1):
+ """Return a list of records for the stack above the caller's frame."""
+ return getouterframes(currentframe().f_back, context)
+
+def trace(context=1):
+ """Return a list of records for the stack below the current exception."""
+ return getinnerframes(sys.exc_traceback, context) #@UndefinedVariable
diff --git a/python/helpers/pydev/_pydev_jython_execfile.py b/python/helpers/pydev/_pydev_jython_execfile.py
new file mode 100644
index 0000000..16c28ab
--- /dev/null
+++ b/python/helpers/pydev/_pydev_jython_execfile.py
@@ -0,0 +1,6 @@
+def jython_execfile(argv):
+ import org.python.util.PythonInterpreter as PythonInterpreter
+ interpreter = PythonInterpreter()
+ state = interpreter.getSystemState()
+ state.argv = argv
+ interpreter.execfile(argv[0])
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_log.py b/python/helpers/pydev/_pydev_log.py
new file mode 100644
index 0000000..6cc627f
--- /dev/null
+++ b/python/helpers/pydev/_pydev_log.py
@@ -0,0 +1,28 @@
+import traceback
+import sys
+try:
+ import StringIO
+except:
+ import io as StringIO  # Python 3.0
+
+
+class Log:
+
+ def __init__(self):
+ self._contents = []
+
+ def AddContent(self, *content):
+ self._contents.append(' '.join(content))
+
+ def AddException(self):
+ s = StringIO.StringIO()
+ exc_info = sys.exc_info()
+ traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], limit=None, file=s)
+ self._contents.append(s.getvalue())
+
+
+ def GetContents(self):
+ return '\n'.join(self._contents)
+
+ def Clear(self):
+ del self._contents[:]
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_select.py b/python/helpers/pydev/_pydev_select.py
new file mode 100644
index 0000000..b8dad03
--- /dev/null
+++ b/python/helpers/pydev/_pydev_select.py
@@ -0,0 +1 @@
+from select import *
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_socket.py b/python/helpers/pydev/_pydev_socket.py
new file mode 100644
index 0000000..9e96e800
--- /dev/null
+++ b/python/helpers/pydev/_pydev_socket.py
@@ -0,0 +1 @@
+from socket import *
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_thread.py b/python/helpers/pydev/_pydev_thread.py
new file mode 100644
index 0000000..3971c79d
--- /dev/null
+++ b/python/helpers/pydev/_pydev_thread.py
@@ -0,0 +1 @@
+from thread import *
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_threading.py b/python/helpers/pydev/_pydev_threading.py
new file mode 100644
index 0000000..52d48c9
--- /dev/null
+++ b/python/helpers/pydev/_pydev_threading.py
@@ -0,0 +1,982 @@
+"""Thread module emulating a subset of Java's threading model."""
+
+import sys as _sys
+
+try:
+ import _pydev_thread as thread
+except ImportError:
+ import thread
+
+import warnings
+
+from _pydev_time import time as _time, sleep as _sleep
+from traceback import format_exc as _format_exc
+
+# Note regarding PEP 8 compliant aliases
+# This threading model was originally inspired by Java, and inherited
+# the convention of camelCase function and method names from that
+# language. While those names are not in any imminent danger of being
+# deprecated, starting with Python 2.6, the module now provides a
+# PEP 8 compliant alias for any such method name.
+# Using the new PEP 8 compliant names also facilitates substitution
+# with the multiprocessing module, which doesn't provide the old
+# Java inspired names.
+
+
+# Rename some stuff so "from threading import *" is safe
+__all__ = ['activeCount', 'active_count', 'Condition', 'currentThread',
+ 'current_thread', 'enumerate', 'Event',
+ 'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread',
+ 'Timer', 'setprofile', 'settrace', 'local', 'stack_size']
+
+_start_new_thread = thread.start_new_thread
+_allocate_lock = thread.allocate_lock
+_get_ident = thread.get_ident
+ThreadError = thread.error
+del thread
+
+
+# sys.exc_clear is used to work around the fact that except blocks
+# don't fully clear the exception until 3.0.
+warnings.filterwarnings('ignore', category=DeprecationWarning,
+ module='threading', message='sys.exc_clear')
+
+# Debug support (adapted from ihooks.py).
+# All the major classes here derive from _Verbose. We force that to
+# be a new-style class so that all the major classes here are new-style.
+# This helps debugging (type(instance) is more revealing for instances
+# of new-style classes).
+
+_VERBOSE = False
+
+if __debug__:
+
+ class _Verbose(object):
+
+ def __init__(self, verbose=None):
+ if verbose is None:
+ verbose = _VERBOSE
+ self.__verbose = verbose
+
+ def _note(self, format, *args):
+ if self.__verbose:
+ format = format % args
+ # Issue #4188: calling current_thread() can incur an infinite
+ # recursion if it has to create a DummyThread on the fly.
+ ident = _get_ident()
+ try:
+ name = _active[ident].name
+ except KeyError:
+ name = "<OS thread %d>" % ident
+ format = "%s: %s\n" % (name, format)
+ _sys.stderr.write(format)
+
+else:
+ # Disable this when using "python -O"
+ class _Verbose(object):
+ def __init__(self, verbose=None):
+ pass
+ def _note(self, *args):
+ pass
+
+# Support for profile and trace hooks
+
+_profile_hook = None
+_trace_hook = None
+
+def setprofile(func):
+ global _profile_hook
+ _profile_hook = func
+
+def settrace(func):
+ global _trace_hook
+ _trace_hook = func
+
+# Synchronization classes
+
+Lock = _allocate_lock
+
+def RLock(*args, **kwargs):
+ return _RLock(*args, **kwargs)
+
+class _RLock(_Verbose):
+
+ def __init__(self, verbose=None):
+ _Verbose.__init__(self, verbose)
+ self.__block = _allocate_lock()
+ self.__owner = None
+ self.__count = 0
+
+ def __repr__(self):
+ owner = self.__owner
+ try:
+ owner = _active[owner].name
+ except KeyError:
+ pass
+ return "<%s owner=%r count=%d>" % (
+ self.__class__.__name__, owner, self.__count)
+
+ def acquire(self, blocking=1):
+ me = _get_ident()
+ if self.__owner == me:
+ self.__count = self.__count + 1
+ if __debug__:
+ self._note("%s.acquire(%s): recursive success", self, blocking)
+ return 1
+ rc = self.__block.acquire(blocking)
+ if rc:
+ self.__owner = me
+ self.__count = 1
+ if __debug__:
+ self._note("%s.acquire(%s): initial success", self, blocking)
+ else:
+ if __debug__:
+ self._note("%s.acquire(%s): failure", self, blocking)
+ return rc
+
+ __enter__ = acquire
+
+ def release(self):
+ if self.__owner != _get_ident():
+ raise RuntimeError("cannot release un-acquired lock")
+ self.__count = count = self.__count - 1
+ if not count:
+ self.__owner = None
+ self.__block.release()
+ if __debug__:
+ self._note("%s.release(): final release", self)
+ else:
+ if __debug__:
+ self._note("%s.release(): non-final release", self)
+
+ def __exit__(self, t, v, tb):
+ self.release()
+
+ # Internal methods used by condition variables
+
+ def _acquire_restore(self, count_owner):
+ count, owner = count_owner
+ self.__block.acquire()
+ self.__count = count
+ self.__owner = owner
+ if __debug__:
+ self._note("%s._acquire_restore()", self)
+
+ def _release_save(self):
+ if __debug__:
+ self._note("%s._release_save()", self)
+ count = self.__count
+ self.__count = 0
+ owner = self.__owner
+ self.__owner = None
+ self.__block.release()
+ return (count, owner)
+
+ def _is_owned(self):
+ return self.__owner == _get_ident()
+
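The owner/count bookkeeping above is what makes the lock reentrant: the owning thread may acquire again (bumping the count), while other threads block until the count returns to zero. Python 3's `threading.RLock` behaves the same way:

```python
import threading

lock = threading.RLock()
lock.acquire()
lock.acquire()          # recursive acquire by the owner succeeds
lock.release()          # count drops to 1; lock is still held

results = []

def try_acquire():
    # A different thread's non-blocking acquire must fail while the
    # owner's count is nonzero.
    results.append(lock.acquire(blocking=False))

t = threading.Thread(target=try_acquire)
t.start()
t.join()
lock.release()          # count reaches 0; lock is now free
```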
+
+def Condition(*args, **kwargs):
+ return _Condition(*args, **kwargs)
+
+class _Condition(_Verbose):
+
+ def __init__(self, lock=None, verbose=None):
+ _Verbose.__init__(self, verbose)
+ if lock is None:
+ lock = RLock()
+ self.__lock = lock
+ # Export the lock's acquire() and release() methods
+ self.acquire = lock.acquire
+ self.release = lock.release
+ # If the lock defines _release_save() and/or _acquire_restore(),
+ # these override the default implementations (which just call
+ # release() and acquire() on the lock). Ditto for _is_owned().
+ try:
+ self._release_save = lock._release_save
+ except AttributeError:
+ pass
+ try:
+ self._acquire_restore = lock._acquire_restore
+ except AttributeError:
+ pass
+ try:
+ self._is_owned = lock._is_owned
+ except AttributeError:
+ pass
+ self.__waiters = []
+
+ def __enter__(self):
+ return self.__lock.__enter__()
+
+ def __exit__(self, *args):
+ return self.__lock.__exit__(*args)
+
+ def __repr__(self):
+ return "<Condition(%s, %d)>" % (self.__lock, len(self.__waiters))
+
+ def _release_save(self):
+ self.__lock.release() # No state to save
+
+ def _acquire_restore(self, x):
+ self.__lock.acquire() # Ignore saved state
+
+ def _is_owned(self):
+ # Return True if lock is owned by current_thread.
+ # This method is called only if __lock doesn't have _is_owned().
+ if self.__lock.acquire(0):
+ self.__lock.release()
+ return False
+ else:
+ return True
+
+ def wait(self, timeout=None):
+ if not self._is_owned():
+ raise RuntimeError("cannot wait on un-acquired lock")
+ waiter = _allocate_lock()
+ waiter.acquire()
+ self.__waiters.append(waiter)
+ saved_state = self._release_save()
+ try: # restore state no matter what (e.g., KeyboardInterrupt)
+ if timeout is None:
+ waiter.acquire()
+ if __debug__:
+ self._note("%s.wait(): got it", self)
+ else:
+ # Balancing act: We can't afford a pure busy loop, so we
+ # have to sleep; but if we sleep the whole timeout time,
+ # we'll be unresponsive. The scheme here sleeps very
+ # little at first, longer as time goes on, but never longer
+ # than 20 times per second (or the timeout time remaining).
+ endtime = _time() + timeout
+ delay = 0.0005 # 500 us -> initial delay of 1 ms
+ while True:
+ gotit = waiter.acquire(0)
+ if gotit:
+ break
+ remaining = endtime - _time()
+ if remaining <= 0:
+ break
+ delay = min(delay * 2, remaining, .05)
+ _sleep(delay)
+ if not gotit:
+ if __debug__:
+ self._note("%s.wait(%s): timed out", self, timeout)
+ try:
+ self.__waiters.remove(waiter)
+ except ValueError:
+ pass
+ else:
+ if __debug__:
+ self._note("%s.wait(%s): got it", self, timeout)
+ finally:
+ self._acquire_restore(saved_state)
+
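The timed branch of `wait()` sleeps with an exponentially growing delay, capped at 50 ms and at the time remaining. The delay sequence it would use if the lock were never released can be computed directly (`backoff_delays` is an illustrative helper, not part of this module):

```python
def backoff_delays(timeout):
    # Delays the wait() loop above would sleep through a full timeout:
    # start at 0.5 ms, double each round, cap at 50 ms and at the
    # remaining time.
    delays = []
    remaining = timeout
    delay = 0.0005
    while remaining > 0:
        delay = min(delay * 2, remaining, 0.05)
        delays.append(delay)
        remaining -= delay
    return delays

ds = backoff_delays(0.2)
```

The caps keep the thread responsive (it rechecks at least 20 times per second) without busy-waiting.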
+ def notify(self, n=1):
+ if not self._is_owned():
+ raise RuntimeError("cannot notify on un-acquired lock")
+ __waiters = self.__waiters
+ waiters = __waiters[:n]
+ if not waiters:
+ if __debug__:
+ self._note("%s.notify(): no waiters", self)
+ return
+ self._note("%s.notify(): notifying %d waiter%s", self, n,
+ n!=1 and "s" or "")
+ for waiter in waiters:
+ waiter.release()
+ try:
+ __waiters.remove(waiter)
+ except ValueError:
+ pass
+
+ def notifyAll(self):
+ self.notify(len(self.__waiters))
+
+ notify_all = notifyAll
+
+
+def Semaphore(*args, **kwargs):
+ return _Semaphore(*args, **kwargs)
+
+class _Semaphore(_Verbose):
+
+ # After Tim Peters' semaphore class, but not quite the same (no maximum)
+
+ def __init__(self, value=1, verbose=None):
+ if value < 0:
+ raise ValueError("semaphore initial value must be >= 0")
+ _Verbose.__init__(self, verbose)
+ self.__cond = Condition(Lock())
+ self.__value = value
+
+ def acquire(self, blocking=1):
+ rc = False
+ self.__cond.acquire()
+ while self.__value == 0:
+ if not blocking:
+ break
+ if __debug__:
+ self._note("%s.acquire(%s): blocked waiting, value=%s",
+ self, blocking, self.__value)
+ self.__cond.wait()
+ else:
+ self.__value = self.__value - 1
+ if __debug__:
+ self._note("%s.acquire: success, value=%s",
+ self, self.__value)
+ rc = True
+ self.__cond.release()
+ return rc
+
+ __enter__ = acquire
+
+ def release(self):
+ self.__cond.acquire()
+ self.__value = self.__value + 1
+ if __debug__:
+ self._note("%s.release: success, value=%s",
+ self, self.__value)
+ self.__cond.notify()
+ self.__cond.release()
+
+ def __exit__(self, t, v, tb):
+ self.release()
+
+
+def BoundedSemaphore(*args, **kwargs):
+ return _BoundedSemaphore(*args, **kwargs)
+
+class _BoundedSemaphore(_Semaphore):
+ """Semaphore that checks that # releases is <= # acquires"""
+ def __init__(self, value=1, verbose=None):
+ _Semaphore.__init__(self, value, verbose)
+ self._initial_value = value
+
+ def release(self):
+ if self._Semaphore__value >= self._initial_value:
+ raise ValueError, "Semaphore released too many times"
+ return _Semaphore.release(self)
+
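Python 3's `threading.BoundedSemaphore` implements the same guard: releasing when the counter is already at its initial value raises `ValueError`, catching mismatched acquire/release pairs early.

```python
import threading

sem = threading.BoundedSemaphore(1)
sem.acquire()
sem.release()          # counter back at its initial value of 1
try:
    sem.release()      # one release too many
    over_released = True
except ValueError:
    over_released = False
```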
+
+def Event(*args, **kwargs):
+ return _Event(*args, **kwargs)
+
+class _Event(_Verbose):
+
+ # After Tim Peters' event class (without is_posted())
+
+ def __init__(self, verbose=None):
+ _Verbose.__init__(self, verbose)
+ self.__cond = Condition(Lock())
+ self.__flag = False
+
+ def _reset_internal_locks(self):
+ # private! called by Thread._reset_internal_locks by _after_fork()
+ self.__cond.__init__()
+
+ def isSet(self):
+ return self.__flag
+
+ is_set = isSet
+
+ def set(self):
+ self.__cond.acquire()
+ try:
+ self.__flag = True
+ self.__cond.notify_all()
+ finally:
+ self.__cond.release()
+
+ def clear(self):
+ self.__cond.acquire()
+ try:
+ self.__flag = False
+ finally:
+ self.__cond.release()
+
+ def wait(self, timeout=None):
+ self.__cond.acquire()
+ try:
+ if not self.__flag:
+ self.__cond.wait(timeout)
+ return self.__flag
+ finally:
+ self.__cond.release()
+
+# Helper to generate new thread names
+_counter = 0
+def _newname(template="Thread-%d"):
+ global _counter
+ _counter = _counter + 1
+ return template % _counter
+
+# Active thread administration
+_active_limbo_lock = _allocate_lock()
+_active = {} # maps thread id to Thread object
+_limbo = {}
+
+
+# Main class for threads
+
+class Thread(_Verbose):
+
+ __initialized = False
+ # Need to store a reference to sys.exc_info for printing
+ # out exceptions when a thread tries to use a global var. during interp.
+ # shutdown and thus raises an exception about trying to perform some
+ # operation on/with a NoneType
+ __exc_info = _sys.exc_info
+ # Keep sys.exc_clear too to clear the exception just before
+ # allowing .join() to return.
+ __exc_clear = _sys.exc_clear
+
+ def __init__(self, group=None, target=None, name=None,
+ args=(), kwargs=None, verbose=None):
+ assert group is None, "group argument must be None for now"
+ _Verbose.__init__(self, verbose)
+ if kwargs is None:
+ kwargs = {}
+ self.__target = target
+ self.__name = str(name or _newname())
+ self.__args = args
+ self.__kwargs = kwargs
+ self.__daemonic = self._set_daemon()
+ self.__ident = None
+ self.__started = Event()
+ self.__stopped = False
+ self.__block = Condition(Lock())
+ self.__initialized = True
+ # sys.stderr is not stored in the class like
+ # sys.exc_info since it can be changed between instances
+ self.__stderr = _sys.stderr
+
+ def _reset_internal_locks(self):
+ # private! Called by _after_fork() to reset our internal locks as
+ # they may be in an invalid state leading to a deadlock or crash.
+ if hasattr(self, '_Thread__block'): # DummyThread deletes self.__block
+ self.__block.__init__()
+ self.__started._reset_internal_locks()
+
+ @property
+ def _block(self):
+ # used by a unittest
+ return self.__block
+
+ def _set_daemon(self):
+ # Overridden in _MainThread and _DummyThread
+ return current_thread().daemon
+
+ def __repr__(self):
+ assert self.__initialized, "Thread.__init__() was not called"
+ status = "initial"
+ if self.__started.is_set():
+ status = "started"
+ if self.__stopped:
+ status = "stopped"
+ if self.__daemonic:
+ status += " daemon"
+ if self.__ident is not None:
+ status += " %s" % self.__ident
+ return "<%s(%s, %s)>" % (self.__class__.__name__, self.__name, status)
+
+ def start(self):
+ if not self.__initialized:
+ raise RuntimeError("thread.__init__() not called")
+ if self.__started.is_set():
+ raise RuntimeError("threads can only be started once")
+ if __debug__:
+ self._note("%s.start(): starting thread", self)
+ with _active_limbo_lock:
+ _limbo[self] = self
+ try:
+ _start_new_thread(self.__bootstrap, ())
+ except Exception:
+ with _active_limbo_lock:
+ del _limbo[self]
+ raise
+ self.__started.wait()
+
+ def run(self):
+ try:
+ if self.__target:
+ self.__target(*self.__args, **self.__kwargs)
+ finally:
+ # Avoid a refcycle if the thread is running a function with
+ # an argument that has a member that points to the thread.
+ del self.__target, self.__args, self.__kwargs
+
+ def __bootstrap(self):
+ # Wrapper around the real bootstrap code that ignores
+ # exceptions during interpreter cleanup. Those typically
+ # happen when a daemon thread wakes up at an unfortunate
+ # moment, finds the world around it destroyed, and raises some
+ # random exception *** while trying to report the exception in
+ # __bootstrap_inner() below ***. Those random exceptions
+ # don't help anybody, and they confuse users, so we suppress
+ # them. We suppress them only when it appears that the world
+ # indeed has already been destroyed, so that exceptions in
+ # __bootstrap_inner() during normal business hours are properly
+ # reported. Also, we only suppress them for daemonic threads;
+ # if a non-daemonic encounters this, something else is wrong.
+ try:
+ self.__bootstrap_inner()
+ except:
+ if self.__daemonic and _sys is None:
+ return
+ raise
+
+ def _set_ident(self):
+ self.__ident = _get_ident()
+
+ def __bootstrap_inner(self):
+ try:
+ self._set_ident()
+ self.__started.set()
+ with _active_limbo_lock:
+ _active[self.__ident] = self
+ del _limbo[self]
+ if __debug__:
+ self._note("%s.__bootstrap(): thread started", self)
+
+ if _trace_hook:
+ self._note("%s.__bootstrap(): registering trace hook", self)
+ _sys.settrace(_trace_hook)
+ if _profile_hook:
+ self._note("%s.__bootstrap(): registering profile hook", self)
+ _sys.setprofile(_profile_hook)
+
+ try:
+ self.run()
+ except SystemExit:
+ if __debug__:
+ self._note("%s.__bootstrap(): raised SystemExit", self)
+ except:
+ if __debug__:
+ self._note("%s.__bootstrap(): unhandled exception", self)
+ # If sys.stderr is no more (most likely from interpreter
+ # shutdown) use self.__stderr. Otherwise still use sys (as in
+ # _sys) in case sys.stderr was redefined since the creation of
+ # self.
+ if _sys:
+ _sys.stderr.write("Exception in thread %s:\n%s\n" %
+ (self.name, _format_exc()))
+ else:
+ # Do the best job possible w/o a huge amt. of code to
+ # approximate a traceback (code ideas from
+ # Lib/traceback.py)
+ exc_type, exc_value, exc_tb = self.__exc_info()
+ try:
+ print>>self.__stderr, (
+ "Exception in thread " + self.name +
+ " (most likely raised during interpreter shutdown):")
+ print>>self.__stderr, (
+ "Traceback (most recent call last):")
+ while exc_tb:
+ print>>self.__stderr, (
+ ' File "%s", line %s, in %s' %
+ (exc_tb.tb_frame.f_code.co_filename,
+ exc_tb.tb_lineno,
+ exc_tb.tb_frame.f_code.co_name))
+ exc_tb = exc_tb.tb_next
+ print>>self.__stderr, ("%s: %s" % (exc_type, exc_value))
+ # Make sure that exc_tb gets deleted since it is a memory
+ # hog; deleting everything else is just for thoroughness
+ finally:
+ del exc_type, exc_value, exc_tb
+ else:
+ if __debug__:
+ self._note("%s.__bootstrap(): normal return", self)
+ finally:
+ # Prevent a race in
+ # test_threading.test_no_refcycle_through_target when
+ # the exception keeps the target alive past when we
+ # assert that it's dead.
+ self.__exc_clear()
+ finally:
+ with _active_limbo_lock:
+ self.__stop()
+ try:
+ # We don't call self.__delete() because it also
+ # grabs _active_limbo_lock.
+ del _active[_get_ident()]
+ except:
+ pass
+
+ def __stop(self):
+ self.__block.acquire()
+ self.__stopped = True
+ self.__block.notify_all()
+ self.__block.release()
+
+ def __delete(self):
+ "Remove current thread from the dict of currently running threads."
+
+ # Notes about running with dummy_thread:
+ #
+ # Must take care to not raise an exception if dummy_thread is being
+ # used (and thus this module is being used as an instance of
+ # dummy_threading). dummy_thread.get_ident() always returns -1 since
+ # there is only one thread if dummy_thread is being used. Thus
+ # len(_active) is always <= 1 here, and any Thread instance created
+ # overwrites the (if any) thread currently registered in _active.
+ #
+ # An instance of _MainThread is always created by 'threading'. This
+ # gets overwritten the instant an instance of Thread is created; both
+ # threads return -1 from dummy_thread.get_ident() and thus have the
+ # same key in the dict. So when the _MainThread instance created by
+ # 'threading' tries to clean itself up when atexit calls this method
+ # it gets a KeyError if another Thread instance was created.
+ #
+ # This all means that KeyError from trying to delete something from
+ # _active if dummy_threading is being used is a red herring. But
+ # since it isn't if dummy_threading is *not* being used then don't
+ # hide the exception.
+
+ try:
+ with _active_limbo_lock:
+ del _active[_get_ident()]
+ # There must not be any python code between the previous line
+ # and after the lock is released. Otherwise a tracing function
+ # could try to acquire the lock again in the same thread, (in
+ # current_thread()), and would block.
+ except KeyError:
+ if 'dummy_threading' not in _sys.modules:
+ raise
+
+ def join(self, timeout=None):
+ if not self.__initialized:
+ raise RuntimeError("Thread.__init__() not called")
+ if not self.__started.is_set():
+ raise RuntimeError("cannot join thread before it is started")
+ if self is current_thread():
+ raise RuntimeError("cannot join current thread")
+
+ if __debug__:
+ if not self.__stopped:
+ self._note("%s.join(): waiting until thread stops", self)
+ self.__block.acquire()
+ try:
+ if timeout is None:
+ while not self.__stopped:
+ self.__block.wait()
+ if __debug__:
+ self._note("%s.join(): thread stopped", self)
+ else:
+ deadline = _time() + timeout
+ while not self.__stopped:
+ delay = deadline - _time()
+ if delay <= 0:
+ if __debug__:
+ self._note("%s.join(): timed out", self)
+ break
+ self.__block.wait(delay)
+ else:
+ if __debug__:
+ self._note("%s.join(): thread stopped", self)
+ finally:
+ self.__block.release()
+
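A subtlety of `join()` above: it returns `None` whether the thread stopped or the timeout elapsed, so the caller must check `is_alive()` afterwards to tell the two apart. Since this file is a vendored copy of the standard `threading` module, the behavior can be illustrated with the modern stdlib equivalent:

```python
import threading
import time

t = threading.Thread(target=time.sleep, args=(0.5,))
t.start()

# join() gives no return value on timeout ...
t.join(timeout=0.05)
# ... so is_alive() is the only way to learn whether it worked
timed_out = t.is_alive()      # True: 0.05 s was not enough for a 0.5 s sleep

t.join()                      # block until the thread really finishes
finished = not t.is_alive()   # True
```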
+ @property
+ def name(self):
+ assert self.__initialized, "Thread.__init__() not called"
+ return self.__name
+
+ @name.setter
+ def name(self, name):
+ assert self.__initialized, "Thread.__init__() not called"
+ self.__name = str(name)
+
+ @property
+ def ident(self):
+ assert self.__initialized, "Thread.__init__() not called"
+ return self.__ident
+
+ def isAlive(self):
+ assert self.__initialized, "Thread.__init__() not called"
+ return self.__started.is_set() and not self.__stopped
+
+ is_alive = isAlive
+
+ @property
+ def daemon(self):
+ assert self.__initialized, "Thread.__init__() not called"
+ return self.__daemonic
+
+ @daemon.setter
+ def daemon(self, daemonic):
+ if not self.__initialized:
+ raise RuntimeError("Thread.__init__() not called")
+ if self.__started.is_set():
+ raise RuntimeError("cannot set daemon status of active thread");
+ self.__daemonic = daemonic
+
+ def isDaemon(self):
+ return self.daemon
+
+ def setDaemon(self, daemonic):
+ self.daemon = daemonic
+
+ def getName(self):
+ return self.name
+
+ def setName(self, name):
+ self.name = name
+
+# The timer class was contributed by Itamar Shtull-Trauring
+
+def Timer(*args, **kwargs):
+ return _Timer(*args, **kwargs)
+
+class _Timer(Thread):
+ """Call a function after a specified number of seconds:
+
+ t = Timer(30.0, f, args=[], kwargs={})
+ t.start()
+ t.cancel() # stop the timer's action if it's still waiting
+ """
+
+ def __init__(self, interval, function, args=[], kwargs={}):
+ Thread.__init__(self)
+ self.interval = interval
+ self.function = function
+ self.args = args
+ self.kwargs = kwargs
+ self.finished = Event()
+
+ def cancel(self):
+ """Stop the timer if it hasn't finished yet"""
+ self.finished.set()
+
+ def run(self):
+ self.finished.wait(self.interval)
+ if not self.finished.is_set():
+ self.function(*self.args, **self.kwargs)
+ self.finished.set()
+
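The `cancel()`/`run()` pair above works because `cancel()` sets the `finished` event, which both wakes the `wait()` and makes the `is_set()` guard skip the callback. A minimal sketch using the stdlib `threading.Timer` (which this vendored class mirrors):

```python
import threading

calls = []
# interval is deliberately long; we cancel before it can expire
t = threading.Timer(10.0, calls.append, args=["fired"])
t.start()
t.cancel()   # sets the internal Event, so run()'s wait() returns early
t.join()     # the timer thread exits without invoking the callback

assert calls == []   # the function was never called
```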
+# Special thread class to represent the main thread
+# This is garbage collected through an exit handler
+
+class _MainThread(Thread):
+
+ def __init__(self):
+ Thread.__init__(self, name="MainThread")
+ self._Thread__started.set()
+ self._set_ident()
+ with _active_limbo_lock:
+ _active[_get_ident()] = self
+
+ def _set_daemon(self):
+ return False
+
+ def _exitfunc(self):
+ self._Thread__stop()
+ t = _pickSomeNonDaemonThread()
+ if t:
+ if __debug__:
+ self._note("%s: waiting for other threads", self)
+ while t:
+ t.join()
+ t = _pickSomeNonDaemonThread()
+ if __debug__:
+ self._note("%s: exiting", self)
+ self._Thread__delete()
+
+def _pickSomeNonDaemonThread():
+ for t in enumerate():
+ if not t.daemon and t.is_alive():
+ return t
+ return None
+
+
+# Dummy thread class to represent threads not started here.
+# These aren't garbage collected when they die, nor can they be waited for.
+# If they invoke anything in threading.py that calls current_thread(), they
+# leave an entry in the _active dict forever after.
+# Their purpose is to return *something* from current_thread().
+# They are marked as daemon threads so we won't wait for them
+# when we exit (conforming to previous semantics).
+
+class _DummyThread(Thread):
+
+ def __init__(self):
+ Thread.__init__(self, name=_newname("Dummy-%d"))
+
+ # Thread.__block consumes an OS-level locking primitive, which
+ # can never be used by a _DummyThread. Since a _DummyThread
+ # instance is immortal, that's bad, so release this resource.
+ del self._Thread__block
+
+ self._Thread__started.set()
+ self._set_ident()
+ with _active_limbo_lock:
+ _active[_get_ident()] = self
+
+ def _set_daemon(self):
+ return True
+
+ def join(self, timeout=None):
+ assert False, "cannot join a dummy thread"
+
+
+# Global API functions
+
+def currentThread():
+ try:
+ return _active[_get_ident()]
+ except KeyError:
+ ##print "current_thread(): no current thread for", _get_ident()
+ return _DummyThread()
+
+current_thread = currentThread
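The `_DummyThread` fallback above can be observed by starting a thread through the low-level `_thread` API, which bypasses this module's bookkeeping: `current_thread()` then has no registered `Thread` for the ident and synthesizes a `Dummy-N` instance. A sketch against the modern stdlib (same mechanism):

```python
import _thread
import threading

seen = {}
done = threading.Event()

def record():
    # this thread was not started via threading.Thread, so
    # current_thread() falls back to creating a _DummyThread
    seen["name"] = threading.current_thread().name
    done.set()

_thread.start_new_thread(record, ())
done.wait(5)
assert seen["name"].startswith("Dummy")
```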
+
+def activeCount():
+ with _active_limbo_lock:
+ return len(_active) + len(_limbo)
+
+active_count = activeCount
+
+def _enumerate():
+ # Same as enumerate(), but without the lock. Internal use only.
+ return _active.values() + _limbo.values()
+
+def enumerate():
+ with _active_limbo_lock:
+ return _active.values() + _limbo.values()
+
+# Create the main thread object,
+# and make it available for the interpreter
+# (Py_Main) as threading._shutdown.
+
+_shutdown = _MainThread()._exitfunc
+
+# get thread-local implementation, either from the thread
+# module, or from the python fallback
+
+try:
+ from _pydev_thread import _local as local
+except ImportError:
+ from _threading_local import local
+
+
+def _after_fork():
+ # This function is called by Python/ceval.c:PyEval_ReInitThreads which
+ # is called from PyOS_AfterFork. Here we cleanup threading module state
+ # that should not exist after a fork.
+
+ # Reset _active_limbo_lock, in case we forked while the lock was held
+ # by another (non-forked) thread. http://bugs.python.org/issue874900
+ global _active_limbo_lock
+ _active_limbo_lock = _allocate_lock()
+
+ # fork() only copied the current thread; clear references to others.
+ new_active = {}
+ current = current_thread()
+ with _active_limbo_lock:
+ for thread in _active.itervalues():
+ # Any lock/condition variable may be currently locked or in an
+ # invalid state, so we reinitialize them.
+ if hasattr(thread, '_reset_internal_locks'):
+ thread._reset_internal_locks()
+ if thread is current:
+ # There is only one active thread. We reset the ident to
+ # its new value since it can have changed.
+ ident = _get_ident()
+ thread._Thread__ident = ident
+ new_active[ident] = thread
+ else:
+ # All the others are already stopped.
+ thread._Thread__stop()
+
+ _limbo.clear()
+ _active.clear()
+ _active.update(new_active)
+ assert len(_active) == 1
+
+
+# Self-test code
+
+def _test():
+
+ class BoundedQueue(_Verbose):
+
+ def __init__(self, limit):
+ _Verbose.__init__(self)
+ self.mon = RLock()
+ self.rc = Condition(self.mon)
+ self.wc = Condition(self.mon)
+ self.limit = limit
+ self.queue = deque()
+
+ def put(self, item):
+ self.mon.acquire()
+ while len(self.queue) >= self.limit:
+ self._note("put(%s): queue full", item)
+ self.wc.wait()
+ self.queue.append(item)
+ self._note("put(%s): appended, length now %d",
+ item, len(self.queue))
+ self.rc.notify()
+ self.mon.release()
+
+ def get(self):
+ self.mon.acquire()
+ while not self.queue:
+ self._note("get(): queue empty")
+ self.rc.wait()
+ item = self.queue.popleft()
+ self._note("get(): got %s, %d left", item, len(self.queue))
+ self.wc.notify()
+ self.mon.release()
+ return item
+
+ class ProducerThread(Thread):
+
+ def __init__(self, queue, quota):
+ Thread.__init__(self, name="Producer")
+ self.queue = queue
+ self.quota = quota
+
+ def run(self):
+ from random import random
+ counter = 0
+ while counter < self.quota:
+ counter = counter + 1
+ self.queue.put("%s.%d" % (self.name, counter))
+ _sleep(random() * 0.00001)
+
+
+ class ConsumerThread(Thread):
+
+ def __init__(self, queue, count):
+ Thread.__init__(self, name="Consumer")
+ self.queue = queue
+ self.count = count
+
+ def run(self):
+ while self.count > 0:
+ item = self.queue.get()
+ print item
+ self.count = self.count - 1
+
+ NP = 3
+ QL = 4
+ NI = 5
+
+ Q = BoundedQueue(QL)
+ P = []
+ for i in range(NP):
+ t = ProducerThread(Q, NI)
+ t.name = ("Producer-%d" % (i+1))
+ P.append(t)
+ C = ConsumerThread(Q, NI*NP)
+ for t in P:
+ t.start()
+ _sleep(0.000001)
+ C.start()
+ for t in P:
+ t.join()
+ C.join()
+
+if __name__ == '__main__':
+ _test()
diff --git a/python/helpers/pydev/_pydev_time.py b/python/helpers/pydev/_pydev_time.py
new file mode 100644
index 0000000..72705db
--- /dev/null
+++ b/python/helpers/pydev/_pydev_time.py
@@ -0,0 +1 @@
+from time import *
diff --git a/python/helpers/pydev/_pydev_xmlrpclib.py b/python/helpers/pydev/_pydev_xmlrpclib.py
new file mode 100644
index 0000000..5f6e2b7
--- /dev/null
+++ b/python/helpers/pydev/_pydev_xmlrpclib.py
@@ -0,0 +1,1493 @@
+# Just a copy of the version in Python 2.5, to be used if it's not available in Jython 2.1
+import sys
+
+#
+# XML-RPC CLIENT LIBRARY
+#
+# an XML-RPC client interface for Python.
+#
+# the marshalling and response parser code can also be used to
+# implement XML-RPC servers.
+#
+# Notes:
+# this version is designed to work with Python 2.1 or newer.
+#
+# History:
+# 1999-01-14 fl Created
+# 1999-01-15 fl Changed dateTime to use localtime
+# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
+# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
+# 1999-01-21 fl Fixed dateTime constructor, etc.
+# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
+# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
+# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
+# 2000-11-28 fl Changed boolean to check the truth value of its argument
+# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
+# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
+# 2001-03-28 fl Make sure response tuple is a singleton
+# 2001-03-29 fl Don't require empty params element (from Nicholas Riley)
+# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
+# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from Paul Prescod)
+# 2001-09-03 fl Allow Transport subclass to override getparser
+# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
+# 2001-10-01 fl Remove containers from memo cache when done with them
+# 2001-10-01 fl Use faster escape method (80% dumps speedup)
+# 2001-10-02 fl More dumps microtuning
+# 2001-10-04 fl Make sure import expat gets a parser (from Guido van Rossum)
+# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
+# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
+# 2001-11-12 fl Use repr() to marshal doubles (from Paul Felix)
+# 2002-03-17 fl Avoid buffered read when possible (from James Rucker)
+# 2002-04-07 fl Added pythondoc comments
+# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
+# 2002-05-15 fl Added error constants (from Andrew Kuchling)
+# 2002-06-27 fl Merged with Python CVS version
+# 2002-10-22 fl Added basic authentication (based on code from Phillip Eby)
+# 2003-01-22 sm Add support for the bool type
+# 2003-02-27 gvr Remove apply calls
+# 2003-04-24 sm Use cStringIO if available
+# 2003-04-25 ak Add support for nil
+# 2003-06-15 gn Add support for time.struct_time
+# 2003-07-12 gp Correct marshalling of Faults
+# 2003-10-31 mvl Add multicall support
+# 2004-08-20 mvl Bump minimum supported Python version to 2.1
+#
+# Copyright (c) 1999-2002 by Secret Labs AB.
+# Copyright (c) 1999-2002 by Fredrik Lundh.
+#
+# [email protected]
+# http://www.pythonware.com
+#
+# --------------------------------------------------------------------
+# The XML-RPC client interface is
+#
+# Copyright (c) 1999-2002 by Secret Labs AB
+# Copyright (c) 1999-2002 by Fredrik Lundh
+#
+# By obtaining, using, and/or copying this software and/or its
+# associated documentation, you agree that you have read, understood,
+# and will comply with the following terms and conditions:
+#
+# Permission to use, copy, modify, and distribute this software and
+# its associated documentation for any purpose and without fee is
+# hereby granted, provided that the above copyright notice appears in
+# all copies, and that both that copyright notice and this permission
+# notice appear in supporting documentation, and that the name of
+# Secret Labs AB or the author not be used in advertising or publicity
+# pertaining to distribution of the software without specific, written
+# prior permission.
+#
+# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
+# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
+# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
+# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
+# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+# OF THIS SOFTWARE.
+# --------------------------------------------------------------------
+
+#
+# things to look into some day:
+
+# TODO: sort out True/False/boolean issues for Python 2.3
+
+"""
+An XML-RPC client interface for Python.
+
+The marshalling and response parser code can also be used to
+implement XML-RPC servers.
+
+Exported exceptions:
+
+ Error Base class for client errors
+ ProtocolError Indicates an HTTP protocol error
+ ResponseError Indicates a broken response package
+ Fault Indicates an XML-RPC fault package
+
+Exported classes:
+
+ ServerProxy Represents a logical connection to an XML-RPC server
+
+ MultiCall Executor of boxcared xmlrpc requests
+ Boolean boolean wrapper to generate a "boolean" XML-RPC value
+ DateTime dateTime wrapper for an ISO 8601 string or time tuple or
+ localtime integer value to generate a "dateTime.iso8601"
+ XML-RPC value
+ Binary binary data wrapper
+
+ SlowParser Slow but safe standard parser (based on xmllib)
+ Marshaller Generate an XML-RPC params chunk from a Python data structure
+ Unmarshaller Unmarshal an XML-RPC response from incoming XML event message
+ Transport Handles an HTTP transaction to an XML-RPC server
+ SafeTransport Handles an HTTPS transaction to an XML-RPC server
+
+Exported constants:
+
+ True
+ False
+
+Exported functions:
+
+ boolean Convert any Python value to an XML-RPC boolean
+ getparser Create instance of the fastest available parser & attach
+ to an unmarshalling object
+ dumps Convert an argument tuple or a Fault instance to an XML-RPC
+ request (or response, if the methodresponse option is used).
+ loads Convert an XML-RPC packet to unmarshalled data plus a method
+ name (None if not present).
+"""
+
+import re, string, time, operator
+
+from types import *
+
+# --------------------------------------------------------------------
+# Internal stuff
+
+try:
+ unicode
+except NameError:
+ unicode = None # unicode support not available
+
+try:
+ import datetime
+except ImportError:
+ datetime = None
+
+try:
+ _bool_is_builtin = False.__class__.__name__ == "bool"
+except (NameError, AttributeError):
+ _bool_is_builtin = 0
+
+def _decode(data, encoding, is8bit=re.compile("[\x80-\xff]").search):
+ # decode non-ascii string (if possible)
+ if unicode and encoding and is8bit(data):
+ data = unicode(data, encoding)
+ return data
+
+def escape(s, replace=string.replace):
+ s = replace(s, "&", "&")
+ s = replace(s, "<", "<")
+ return replace(s, ">", ">",)
+
+if unicode:
+ def _stringify(string):
+ # convert to 7-bit ascii if possible
+ try:
+ return string.encode("ascii")
+ except UnicodeError:
+ return string
+else:
+ def _stringify(string):
+ return string
+
+__version__ = "1.0.1"
+
+# xmlrpc integer limits
+try:
+ long
+except NameError:
+ long = int
+MAXINT = long(2) ** 31 - 1
+MININT = long(-2) ** 31
+
+# --------------------------------------------------------------------
+# Error constants (from Dan Libby's specification at
+# http://xmlrpc-epi.sourceforge.net/specs/rfc.fault_codes.php)
+
+# Ranges of errors
+PARSE_ERROR = -32700
+SERVER_ERROR = -32600
+APPLICATION_ERROR = -32500
+SYSTEM_ERROR = -32400
+TRANSPORT_ERROR = -32300
+
+# Specific errors
+NOT_WELLFORMED_ERROR = -32700
+UNSUPPORTED_ENCODING = -32701
+INVALID_ENCODING_CHAR = -32702
+INVALID_XMLRPC = -32600
+METHOD_NOT_FOUND = -32601
+INVALID_METHOD_PARAMS = -32602
+INTERNAL_ERROR = -32603
+
+# --------------------------------------------------------------------
+# Exceptions
+
+##
+# Base class for all kinds of client-side errors.
+
+class Error(Exception):
+ """Base class for client errors."""
+ def __str__(self):
+ return repr(self)
+
+##
+# Indicates an HTTP-level protocol error. This is raised by the HTTP
+# transport layer, if the server returns an error code other than 200
+# (OK).
+#
+# @param url The target URL.
+# @param errcode The HTTP error code.
+# @param errmsg The HTTP error message.
+# @param headers The HTTP header dictionary.
+
+class ProtocolError(Error):
+ """Indicates an HTTP protocol error."""
+ def __init__(self, url, errcode, errmsg, headers):
+ Error.__init__(self)
+ self.url = url
+ self.errcode = errcode
+ self.errmsg = errmsg
+ self.headers = headers
+ def __repr__(self):
+ return (
+ "<ProtocolError for %s: %s %s>" %
+ (self.url, self.errcode, self.errmsg)
+ )
+
+##
+# Indicates a broken XML-RPC response package. This exception is
+# raised by the unmarshalling layer, if the XML-RPC response is
+# malformed.
+
+class ResponseError(Error):
+ """Indicates a broken response package."""
+ pass
+
+##
+# Indicates an XML-RPC fault response package. This exception is
+# raised by the unmarshalling layer, if the XML-RPC response contains
+# a fault string.  This exception can also be used as a class, to
+# generate a fault XML-RPC message.
+#
+# @param faultCode The XML-RPC fault code.
+# @param faultString The XML-RPC fault string.
+
+class Fault(Error):
+ """Indicates an XML-RPC fault package."""
+ def __init__(self, faultCode, faultString, **extra):
+ Error.__init__(self)
+ self.faultCode = faultCode
+ self.faultString = faultString
+ def __repr__(self):
+ return (
+ "<Fault %s: %s>" %
+ (self.faultCode, repr(self.faultString))
+ )
+
+# --------------------------------------------------------------------
+# Special values
+
+##
+# Wrapper for XML-RPC boolean values. Use the xmlrpclib.True and
+# xmlrpclib.False constants, or the xmlrpclib.boolean() function, to
+# generate boolean XML-RPC values.
+#
+# @param value A boolean value. Any true value is interpreted as True,
+# all other values are interpreted as False.
+
+if _bool_is_builtin:
+ boolean = Boolean = bool #@UndefinedVariable
+ # to avoid breaking code which references xmlrpclib.{True,False}
+ True, False = True, False
+else:
+ class Boolean:
+ """Boolean-value wrapper.
+
+ Use True or False to generate a "boolean" XML-RPC value.
+ """
+
+ def __init__(self, value=0):
+ self.value = operator.truth(value)
+
+ def encode(self, out):
+ out.write("<value><boolean>%d</boolean></value>\n" % self.value)
+
+ def __cmp__(self, other):
+ if isinstance(other, Boolean):
+ other = other.value
+ return cmp(self.value, other)
+
+ def __repr__(self):
+ if self.value:
+ return "<Boolean True at %x>" % id(self)
+ else:
+ return "<Boolean False at %x>" % id(self)
+
+ def __int__(self):
+ return self.value
+
+ def __nonzero__(self):
+ return self.value
+
+ True, False = Boolean(1), Boolean(0)
+
+ ##
+ # Map true or false value to XML-RPC boolean values.
+ #
+ # @def boolean(value)
+ # @param value A boolean value. Any true value is mapped to True,
+ # all other values are mapped to False.
+ # @return xmlrpclib.True or xmlrpclib.False.
+ # @see Boolean
+ # @see True
+ # @see False
+
+ def boolean(value, _truefalse=(False, True)):
+ """Convert any Python value to XML-RPC 'boolean'."""
+ return _truefalse[operator.truth(value)]
+
+##
+# Wrapper for XML-RPC DateTime values. This converts a time value to
+# the format used by XML-RPC.
+# <p>
+# The value can be given as a string in the format
+# "yyyymmddThh:mm:ss", as a 9-item time tuple (as returned by
+# time.localtime()), or an integer value (as returned by time.time()).
+# The wrapper uses time.localtime() to convert an integer to a time
+# tuple.
+#
+# @param value The time, given as an ISO 8601 string, a time
+#     tuple, or an integer time value.
+
+class DateTime:
+ """DateTime wrapper for an ISO 8601 string or time tuple or
+ localtime integer value to generate 'dateTime.iso8601' XML-RPC
+ value.
+ """
+
+ def __init__(self, value=0):
+ if not isinstance(value, StringType):
+ if datetime and isinstance(value, datetime.datetime):
+ self.value = value.strftime("%Y%m%dT%H:%M:%S")
+ return
+ if datetime and isinstance(value, datetime.date):
+ self.value = value.strftime("%Y%m%dT%H:%M:%S")
+ return
+ if datetime and isinstance(value, datetime.time):
+ today = datetime.datetime.now().strftime("%Y%m%d")
+ self.value = value.strftime(today + "T%H:%M:%S")
+ return
+ if not isinstance(value, (TupleType, time.struct_time)): #@UndefinedVariable
+ if value == 0:
+ value = time.time()
+ value = time.localtime(value)
+ value = time.strftime("%Y%m%dT%H:%M:%S", value)
+ self.value = value
+
+ def __cmp__(self, other):
+ if isinstance(other, DateTime):
+ other = other.value
+ return cmp(self.value, other)
+
+ ##
+ # Get date/time value.
+ #
+ # @return Date/time value, as an ISO 8601 string.
+
+ def __str__(self):
+ return self.value
+
+ def __repr__(self):
+ return "<DateTime %s at %x>" % (repr(self.value), id(self))
+
+ def decode(self, data):
+ data = str(data)
+ self.value = string.strip(data)
+
+ def encode(self, out):
+ out.write("<value><dateTime.iso8601>")
+ out.write(self.value)
+ out.write("</dateTime.iso8601></value>\n")
+
+def _datetime(data):
+ # decode xml element contents into a DateTime structure.
+ value = DateTime()
+ value.decode(data)
+ return value
+
+def _datetime_type(data):
+ t = time.strptime(data, "%Y%m%dT%H:%M:%S") #@UndefinedVariable
+ return datetime.datetime(*tuple(t)[:6])
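The `DateTime` wrapper above normalizes all of its accepted inputs to the compact ISO 8601 form the XML-RPC spec uses (no dashes, `T` separator). A sketch with the `xmlrpc.client` equivalent:

```python
import datetime
from xmlrpc.client import DateTime

# a datetime.datetime is rendered via strftime("%Y%m%dT%H:%M:%S")
dt = DateTime(datetime.datetime(2024, 1, 2, 3, 4, 5))
assert str(dt) == "20240102T03:04:05"

# decode() accepts the same wire format back
dt2 = DateTime()
dt2.decode("20240102T03:04:05")
assert dt == dt2
```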
+
+##
+# Wrapper for binary data. This can be used to transport any kind
+# of binary data over XML-RPC, using BASE64 encoding.
+#
+# @param data An 8-bit string containing arbitrary data.
+
+import base64
+try:
+ import cStringIO as StringIO
+except ImportError:
+ import StringIO
+
+class Binary:
+ """Wrapper for binary data."""
+
+ def __init__(self, data=None):
+ self.data = data
+
+ ##
+ # Get buffer contents.
+ #
+ # @return Buffer contents, as an 8-bit string.
+
+ def __str__(self):
+ return self.data or ""
+
+ def __cmp__(self, other):
+ if isinstance(other, Binary):
+ other = other.data
+ return cmp(self.data, other)
+
+ def decode(self, data):
+ self.data = base64.decodestring(data)
+
+ def encode(self, out):
+ out.write("<value><base64>\n")
+ base64.encode(StringIO.StringIO(self.data), out)
+ out.write("</base64></value>\n")
+
+def _binary(data):
+ # decode xml element contents into a Binary structure
+ value = Binary()
+ value.decode(data)
+ return value
+
+WRAPPERS = (DateTime, Binary)
+if not _bool_is_builtin:
+ WRAPPERS = WRAPPERS + (Boolean,)
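`Binary` base64-encodes its payload inside `<value><base64>` on the way out and decodes it on the way back, so arbitrary 8-bit data survives the XML transport. A sketch with the `xmlrpc.client` equivalent:

```python
from xmlrpc.client import Binary, dumps, loads

payload = Binary(b"\x00\xffbinary bytes")

# round-trip through the marshaller and unmarshaller
params, _ = loads(dumps((payload,)))

# the decoded wrapper exposes the original bytes via .data
assert params[0].data == b"\x00\xffbinary bytes"
```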
+
+# --------------------------------------------------------------------
+# XML parsers
+
+try:
+ # optional xmlrpclib accelerator
+ import _xmlrpclib #@UnresolvedImport
+ FastParser = _xmlrpclib.Parser
+ FastUnmarshaller = _xmlrpclib.Unmarshaller
+except (AttributeError, ImportError):
+ FastParser = FastUnmarshaller = None
+
+try:
+ import _xmlrpclib #@UnresolvedImport
+ FastMarshaller = _xmlrpclib.Marshaller
+except (AttributeError, ImportError):
+ FastMarshaller = None
+
+#
+# the SGMLOP parser is about 15x faster than Python's builtin
+# XML parser. SGMLOP sources can be downloaded from:
+#
+# http://www.pythonware.com/products/xml/sgmlop.htm
+#
+
+try:
+ import sgmlop
+ if not hasattr(sgmlop, "XMLParser"):
+ raise ImportError()
+except ImportError:
+ SgmlopParser = None # sgmlop accelerator not available
+else:
+ class SgmlopParser:
+ def __init__(self, target):
+
+ # setup callbacks
+ self.finish_starttag = target.start
+ self.finish_endtag = target.end
+ self.handle_data = target.data
+ self.handle_xml = target.xml
+
+ # activate parser
+ self.parser = sgmlop.XMLParser()
+ self.parser.register(self)
+ self.feed = self.parser.feed
+ self.entity = {
+ "amp": "&", "gt": ">", "lt": "<",
+ "apos": "'", "quot": '"'
+ }
+
+ def close(self):
+ try:
+ self.parser.close()
+ finally:
+ self.parser = self.feed = None # nuke circular reference
+
+ def handle_proc(self, tag, attr):
+ m = re.search("encoding\s*=\s*['\"]([^\"']+)[\"']", attr) #@UndefinedVariable
+ if m:
+ self.handle_xml(m.group(1), 1)
+
+ def handle_entityref(self, entity):
+ # <string> entity
+ try:
+ self.handle_data(self.entity[entity])
+ except KeyError:
+ self.handle_data("&%s;" % entity)
+
+try:
+ from xml.parsers import expat
+ if not hasattr(expat, "ParserCreate"):
+ raise ImportError()
+except ImportError:
+ ExpatParser = None # expat not available
+else:
+ class ExpatParser:
+ # fast expat parser for Python 2.0 and later. this is about
+ # 50% slower than sgmlop, on roundtrip testing
+ def __init__(self, target):
+ self._parser = parser = expat.ParserCreate(None, None)
+ self._target = target
+ parser.StartElementHandler = target.start
+ parser.EndElementHandler = target.end
+ parser.CharacterDataHandler = target.data
+ encoding = None
+ if not parser.returns_unicode:
+ encoding = "utf-8"
+ target.xml(encoding, None)
+
+ def feed(self, data):
+ self._parser.Parse(data, 0)
+
+ def close(self):
+ self._parser.Parse("", 1) # end of data
+ del self._target, self._parser # get rid of circular references
+
+class SlowParser:
+ """Default XML parser (based on xmllib.XMLParser)."""
+ # this is about 10 times slower than sgmlop, on roundtrip
+ # testing.
+ def __init__(self, target):
+ import xmllib # lazy subclassing (!)
+ if xmllib.XMLParser not in SlowParser.__bases__:
+ SlowParser.__bases__ = (xmllib.XMLParser,)
+ self.handle_xml = target.xml
+ self.unknown_starttag = target.start
+ self.handle_data = target.data
+ self.handle_cdata = target.data
+ self.unknown_endtag = target.end
+ try:
+ xmllib.XMLParser.__init__(self, accept_utf8=1)
+ except TypeError:
+ xmllib.XMLParser.__init__(self) # pre-2.0
+
+# --------------------------------------------------------------------
+# XML-RPC marshalling and unmarshalling code
+
+##
+# XML-RPC marshaller.
+#
+# @param encoding Default encoding for 8-bit strings. The default
+# value is None (interpreted as UTF-8).
+# @see dumps
+
+class Marshaller:
+ """Generate an XML-RPC params chunk from a Python data structure.
+
+ Create a Marshaller instance for each set of parameters, and use
+ the "dumps" method to convert your data (represented as a tuple)
+ to an XML-RPC params chunk. To write a fault response, pass a
+ Fault instance instead. You may prefer to use the "dumps" module
+ function for this purpose.
+ """
+
+ # by the way, if you don't understand what's going on in here,
+ # that's perfectly ok.
+
+ def __init__(self, encoding=None, allow_none=0):
+ self.memo = {}
+ self.data = None
+ self.encoding = encoding
+ self.allow_none = allow_none
+
+ dispatch = {}
+
+ def dumps(self, values):
+ out = []
+ write = out.append
+ dump = self.__dump
+ if isinstance(values, Fault):
+ # fault instance
+ write("<fault>\n")
+ dump({'faultCode': values.faultCode,
+ 'faultString': values.faultString},
+ write)
+ write("</fault>\n")
+ else:
+ # parameter block
+ # FIXME: the xml-rpc specification allows us to leave out
+ # the entire <params> block if there are no parameters.
+ # however, changing this may break older code (including
+ # old versions of xmlrpclib.py), so this is better left as
+ # is for now. See @XMLRPC3 for more information. /F
+ write("<params>\n")
+ for v in values:
+ write("<param>\n")
+ dump(v, write)
+ write("</param>\n")
+ write("</params>\n")
+ result = string.join(out, "")
+ return result
+
+ def __dump(self, value, write):
+ try:
+ f = self.dispatch[type(value)]
+ except KeyError:
+ raise TypeError("cannot marshal %s objects" % type(value))
+ else:
+ f(self, value, write)
+
+ def dump_nil (self, value, write):
+ if not self.allow_none:
+ raise TypeError("cannot marshal None unless allow_none is enabled")
+ write("<value><nil/></value>")
+ dispatch[NoneType] = dump_nil
+
+ def dump_int(self, value, write):
+ # in case ints are > 32 bits
+ if value > MAXINT or value < MININT:
+ raise OverflowError("int exceeds XML-RPC limits")
+ write("<value><int>")
+ write(str(value))
+ write("</int></value>\n")
+ dispatch[IntType] = dump_int
+
+ if _bool_is_builtin:
+ def dump_bool(self, value, write):
+ write("<value><boolean>")
+ write(value and "1" or "0")
+ write("</boolean></value>\n")
+ dispatch[bool] = dump_bool #@UndefinedVariable
+
+ def dump_long(self, value, write):
+ if value > MAXINT or value < MININT:
+ raise OverflowError("long int exceeds XML-RPC limits")
+ write("<value><int>")
+ write(str(int(value)))
+ write("</int></value>\n")
+ dispatch[LongType] = dump_long
+
+ def dump_double(self, value, write):
+ write("<value><double>")
+ write(repr(value))
+ write("</double></value>\n")
+ dispatch[FloatType] = dump_double
+
+ def dump_string(self, value, write, escape=escape):
+ write("<value><string>")
+ write(escape(value))
+ write("</string></value>\n")
+ dispatch[StringType] = dump_string
+
+ if unicode:
+ def dump_unicode(self, value, write, escape=escape):
+ value = value.encode(self.encoding)
+ write("<value><string>")
+ write(escape(value))
+ write("</string></value>\n")
+ dispatch[UnicodeType] = dump_unicode
+
+ def dump_array(self, value, write):
+ i = id(value)
+ if self.memo.has_key(i):
+ raise TypeError("cannot marshal recursive sequences")
+ self.memo[i] = None
+ dump = self.__dump
+ write("<value><array><data>\n")
+ for v in value:
+ dump(v, write)
+ write("</data></array></value>\n")
+ del self.memo[i]
+ dispatch[TupleType] = dump_array
+ dispatch[ListType] = dump_array
+
+ def dump_struct(self, value, write, escape=escape):
+ i = id(value)
+ if self.memo.has_key(i):
+ raise TypeError("cannot marshal recursive dictionaries")
+ self.memo[i] = None
+ dump = self.__dump
+ write("<value><struct>\n")
+ for k, v in value.items():
+ write("<member>\n")
+ if type(k) is not StringType:
+ if unicode and type(k) is UnicodeType:
+ k = k.encode(self.encoding)
+ else:
+ raise TypeError("dictionary key must be string")
+ write("<name>%s</name>\n" % escape(k))
+ dump(v, write)
+ write("</member>\n")
+ write("</struct></value>\n")
+ del self.memo[i]
+ dispatch[DictType] = dump_struct
+
+ if datetime:
+ def dump_datetime(self, value, write):
+ write("<value><dateTime.iso8601>")
+ write(value.strftime("%Y%m%dT%H:%M:%S"))
+ write("</dateTime.iso8601></value>\n")
+ dispatch[datetime.datetime] = dump_datetime
+
+ def dump_date(self, value, write):
+ write("<value><dateTime.iso8601>")
+ write(value.strftime("%Y%m%dT00:00:00"))
+ write("</dateTime.iso8601></value>\n")
+ dispatch[datetime.date] = dump_date
+
+ def dump_time(self, value, write):
+ write("<value><dateTime.iso8601>")
+ write(datetime.datetime.now().date().strftime("%Y%m%dT"))
+ write(value.strftime("%H:%M:%S"))
+ write("</dateTime.iso8601></value>\n")
+ dispatch[datetime.time] = dump_time
+
+ def dump_instance(self, value, write):
+ # check for special wrappers
+ if value.__class__ in WRAPPERS:
+ self.write = write
+ value.encode(self)
+ del self.write
+ else:
+ # store instance attributes as a struct (really?)
+ self.dump_struct(value.__dict__, write)
+ dispatch[InstanceType] = dump_instance
+
+##
+# XML-RPC unmarshaller.
+#
+# @see loads
+
+class Unmarshaller:
+ """Unmarshal an XML-RPC response, based on incoming XML event
+ messages (start, data, end). Call close() to get the resulting
+ data structure.
+
+ Note that this reader is fairly tolerant, and gladly accepts bogus
+ XML-RPC data without complaining (but not bogus XML).
+ """
+
+ # and again, if you don't understand what's going on in here,
+ # that's perfectly ok.
+
+ def __init__(self, use_datetime=0):
+ self._type = None
+ self._stack = []
+ self._marks = []
+ self._data = []
+ self._methodname = None
+ self._encoding = "utf-8"
+ self.append = self._stack.append
+ self._use_datetime = use_datetime
+ if use_datetime and not datetime:
+ raise ValueError("the datetime module is not available")
+
+ def close(self):
+ # return response tuple and target method
+ if self._type is None or self._marks:
+ raise ResponseError()
+ if self._type == "fault":
+ raise Fault(**self._stack[0])
+ return tuple(self._stack)
+
+ def getmethodname(self):
+ return self._methodname
+
+ #
+ # event handlers
+
+ def xml(self, encoding, standalone):
+ self._encoding = encoding
+ # FIXME: assert standalone == 1 ???
+
+ def start(self, tag, attrs):
+ # prepare to handle this element
+ if tag == "array" or tag == "struct":
+ self._marks.append(len(self._stack))
+ self._data = []
+ self._value = (tag == "value")
+
+ def data(self, text):
+ self._data.append(text)
+
+ def end(self, tag, join=string.join):
+ # call the appropriate end tag handler
+ try:
+ f = self.dispatch[tag]
+ except KeyError:
+ pass # unknown tag ?
+ else:
+ return f(self, join(self._data, ""))
+
+ #
+ # accelerator support
+
+ def end_dispatch(self, tag, data):
+ # dispatch data
+ try:
+ f = self.dispatch[tag]
+ except KeyError:
+ pass # unknown tag ?
+ else:
+ return f(self, data)
+
+ #
+ # element decoders
+
+ dispatch = {}
+
+ def end_nil (self, data):
+ self.append(None)
+ self._value = 0
+ dispatch["nil"] = end_nil
+
+ def end_boolean(self, data):
+ if data == "0":
+ self.append(False)
+ elif data == "1":
+ self.append(True)
+ else:
+ raise TypeError("bad boolean value")
+ self._value = 0
+ dispatch["boolean"] = end_boolean
+
+ def end_int(self, data):
+ self.append(int(data))
+ self._value = 0
+ dispatch["i4"] = end_int
+ dispatch["int"] = end_int
+
+ def end_double(self, data):
+ self.append(float(data))
+ self._value = 0
+ dispatch["double"] = end_double
+
+ def end_string(self, data):
+ if self._encoding:
+ data = _decode(data, self._encoding)
+ self.append(_stringify(data))
+ self._value = 0
+ dispatch["string"] = end_string
+ dispatch["name"] = end_string # struct keys are always strings
+
+ def end_array(self, data):
+ mark = self._marks.pop()
+ # map arrays to Python lists
+ self._stack[mark:] = [self._stack[mark:]]
+ self._value = 0
+ dispatch["array"] = end_array
+
+ def end_struct(self, data):
+ mark = self._marks.pop()
+ # map structs to Python dictionaries
+ dict = {}
+ items = self._stack[mark:]
+ for i in range(0, len(items), 2):
+ dict[_stringify(items[i])] = items[i + 1]
+ self._stack[mark:] = [dict]
+ self._value = 0
+ dispatch["struct"] = end_struct
+
+ def end_base64(self, data):
+ value = Binary()
+ value.decode(data)
+ self.append(value)
+ self._value = 0
+ dispatch["base64"] = end_base64
+
+ def end_dateTime(self, data):
+ value = DateTime()
+ value.decode(data)
+ if self._use_datetime:
+ value = _datetime_type(data)
+ self.append(value)
+ dispatch["dateTime.iso8601"] = end_dateTime
+
+ def end_value(self, data):
+ # if we stumble upon a value element with no internal
+ # elements, treat it as a string element
+ if self._value:
+ self.end_string(data)
+ dispatch["value"] = end_value
+
+ def end_params(self, data):
+ self._type = "params"
+ dispatch["params"] = end_params
+
+ def end_fault(self, data):
+ self._type = "fault"
+ dispatch["fault"] = end_fault
+
+ def end_methodName(self, data):
+ if self._encoding:
+ data = _decode(data, self._encoding)
+ self._methodname = data
+ self._type = "methodName" # no params
+ dispatch["methodName"] = end_methodName
+
+## Multicall support
+#
+
+class _MultiCallMethod:
+ # some lesser magic to store calls made to a MultiCall object
+ # for batch execution
+ def __init__(self, call_list, name):
+ self.__call_list = call_list
+ self.__name = name
+ def __getattr__(self, name):
+ return _MultiCallMethod(self.__call_list, "%s.%s" % (self.__name, name))
+ def __call__(self, *args):
+ self.__call_list.append((self.__name, args))
+
+class MultiCallIterator:
+ """Iterates over the results of a multicall. Exceptions are
+ thrown in response to xmlrpc faults."""
+
+ def __init__(self, results):
+ self.results = results
+
+ def __getitem__(self, i):
+ item = self.results[i]
+ if type(item) == type({}):
+ raise Fault(item['faultCode'], item['faultString'])
+ elif type(item) == type([]):
+ return item[0]
+ else:
+ raise ValueError("unexpected type in multicall result")
+
+class MultiCall:
+    """server -> an object used to boxcar method calls
+
+ server should be a ServerProxy object.
+
+ Methods can be added to the MultiCall using normal
+ method call syntax e.g.:
+
+ multicall = MultiCall(server_proxy)
+ multicall.add(2,3)
+ multicall.get_address("Guido")
+
+ To execute the multicall, call the MultiCall object e.g.:
+
+ add_result, address = multicall()
+ """
+
+ def __init__(self, server):
+ self.__server = server
+ self.__call_list = []
+
+ def __repr__(self):
+ return "<MultiCall at %x>" % id(self)
+
+ __str__ = __repr__
+
+ def __getattr__(self, name):
+ return _MultiCallMethod(self.__call_list, name)
+
+ def __call__(self):
+ marshalled_list = []
+ for name, args in self.__call_list:
+ marshalled_list.append({'methodName' : name, 'params' : args})
+
+ return MultiCallIterator(self.__server.system.multicall(marshalled_list))
+
+# --------------------------------------------------------------------
+# convenience functions
+
+##
+# Create a parser object, and connect it to an unmarshalling instance.
+# This function picks the fastest available XML parser.
+#
+# @return A (parser, unmarshaller) tuple.
+
+def getparser(use_datetime=0):
+ """getparser() -> parser, unmarshaller
+
+ Create an instance of the fastest available parser, and attach it
+ to an unmarshalling object. Return both objects.
+ """
+ if use_datetime and not datetime:
+ raise ValueError("the datetime module is not available")
+ if FastParser and FastUnmarshaller:
+ if use_datetime:
+ mkdatetime = _datetime_type
+ else:
+ mkdatetime = _datetime
+ target = FastUnmarshaller(True, False, _binary, mkdatetime, Fault)
+ parser = FastParser(target)
+ else:
+ target = Unmarshaller(use_datetime=use_datetime)
+ if FastParser:
+ parser = FastParser(target)
+ elif SgmlopParser:
+ parser = SgmlopParser(target)
+ elif ExpatParser:
+ parser = ExpatParser(target)
+ else:
+ parser = SlowParser(target)
+ return parser, target
+
+##
+# Convert a Python tuple or a Fault instance to an XML-RPC packet.
+#
+# @def dumps(params, **options)
+# @param params A tuple or Fault instance.
+# @keyparam methodname If given, create a methodCall request for
+# this method name.
+# @keyparam methodresponse If given, create a methodResponse packet.
+# If used with a tuple, the tuple must be a singleton (that is,
+# it must contain exactly one element).
+# @keyparam encoding The packet encoding.
+# @return A string containing marshalled data.
+
+def dumps(params, methodname=None, methodresponse=None, encoding=None,
+ allow_none=0):
+ """data [,options] -> marshalled data
+
+ Convert an argument tuple or a Fault instance to an XML-RPC
+ request (or response, if the methodresponse option is used).
+
+ In addition to the data object, the following options can be given
+ as keyword arguments:
+
+ methodname: the method name for a methodCall packet
+
+ methodresponse: true to create a methodResponse packet.
+ If this option is used with a tuple, the tuple must be
+ a singleton (i.e. it can contain only one element).
+
+ encoding: the packet encoding (default is UTF-8)
+
+ All 8-bit strings in the data structure are assumed to use the
+ packet encoding. Unicode strings are automatically converted,
+ where necessary.
+ """
+
+ assert isinstance(params, TupleType) or isinstance(params, Fault), \
+ "argument must be tuple or Fault instance"
+
+ if isinstance(params, Fault):
+ methodresponse = 1
+ elif methodresponse and isinstance(params, TupleType):
+ assert len(params) == 1, "response tuple must be a singleton"
+
+ if not encoding:
+ encoding = "utf-8"
+
+ if FastMarshaller:
+ m = FastMarshaller(encoding)
+ else:
+ m = Marshaller(encoding, allow_none)
+
+ data = m.dumps(params)
+
+ if encoding != "utf-8":
+ xmlheader = "<?xml version='1.0' encoding='%s'?>\n" % str(encoding)
+ else:
+ xmlheader = "<?xml version='1.0'?>\n" # utf-8 is default
+
+ # standard XML-RPC wrappings
+ if methodname:
+ # a method call
+ if not isinstance(methodname, StringType):
+ methodname = methodname.encode(encoding)
+ data = (
+ xmlheader,
+ "<methodCall>\n"
+ "<methodName>", methodname, "</methodName>\n",
+ data,
+ "</methodCall>\n"
+ )
+ elif methodresponse:
+ # a method response, or a fault structure
+ data = (
+ xmlheader,
+ "<methodResponse>\n",
+ data,
+ "</methodResponse>\n"
+ )
+ else:
+ return data # return as is
+ return string.join(data, "")
+
+##
+# Convert an XML-RPC packet to a Python object. If the XML-RPC packet
+# represents a fault condition, this function raises a Fault exception.
+#
+# @param data An XML-RPC packet, given as an 8-bit string.
+# @return A tuple containing the unpacked data, and the method name
+# (None if not present).
+# @see Fault
+
+def loads(data, use_datetime=0):
+ """data -> unmarshalled data, method name
+
+ Convert an XML-RPC packet to unmarshalled data plus a method
+ name (None if not present).
+
+ If the XML-RPC packet represents a fault condition, this function
+ raises a Fault exception.
+ """
+ p, u = getparser(use_datetime=use_datetime)
+ p.feed(data)
+ p.close()
+ return u.close(), u.getmethodname()
+
+
+# --------------------------------------------------------------------
+# request dispatcher
+
+class _Method:
+ # some magic to bind an XML-RPC method to an RPC server.
+ # supports "nested" methods (e.g. examples.getStateName)
+ def __init__(self, send, name):
+ self.__send = send
+ self.__name = name
+ def __getattr__(self, name):
+ return _Method(self.__send, "%s.%s" % (self.__name, name))
+ def __call__(self, *args):
+ return self.__send(self.__name, args)
+
+##
+# Standard transport class for XML-RPC over HTTP.
+# <p>
+# You can create custom transports by subclassing this class, and
+# overriding selected methods.
+
+class Transport:
+ """Handles an HTTP transaction to an XML-RPC server."""
+
+ # client identifier (may be overridden)
+ user_agent = "xmlrpclib.py/%s (by www.pythonware.com)" % __version__
+
+ def __init__(self, use_datetime=0):
+ self._use_datetime = use_datetime
+
+ ##
+ # Send a complete request, and parse the response.
+ #
+ # @param host Target host.
+    # @param handler Target RPC handler.
+ # @param request_body XML-RPC request body.
+ # @param verbose Debugging flag.
+ # @return Parsed response.
+
+ def request(self, host, handler, request_body, verbose=0):
+ # issue XML-RPC request
+
+ h = self.make_connection(host)
+ if verbose:
+ h.set_debuglevel(1)
+
+ self.send_request(h, handler, request_body)
+ self.send_host(h, host)
+ self.send_user_agent(h)
+ self.send_content(h, request_body)
+
+ errcode, errmsg, headers = h.getreply()
+
+ if errcode != 200:
+ raise ProtocolError(
+ host + handler,
+ errcode, errmsg,
+ headers
+ )
+
+ self.verbose = verbose
+
+ try:
+ sock = h._conn.sock
+ except AttributeError:
+ sock = None
+
+ return self._parse_response(h.getfile(), sock)
+
+ ##
+ # Create parser.
+ #
+    # @return A 2-tuple containing a parser and an unmarshaller.
+
+ def getparser(self):
+ # get parser and unmarshaller
+ return getparser(use_datetime=self._use_datetime)
+
+ ##
+ # Get authorization info from host parameter
+ # Host may be a string, or a (host, x509-dict) tuple; if a string,
+ # it is checked for a "user:pw@host" format, and a "Basic
+ # Authentication" header is added if appropriate.
+ #
+ # @param host Host descriptor (URL or (URL, x509 info) tuple).
+ # @return A 3-tuple containing (actual host, extra headers,
+ # x509 info). The header and x509 fields may be None.
+
+ def get_host_info(self, host):
+
+ x509 = {}
+ if isinstance(host, TupleType):
+ host, x509 = host
+
+ import urllib
+ auth, host = urllib.splituser(host)
+
+ if auth:
+ import base64
+ auth = base64.encodestring(urllib.unquote(auth))
+ auth = string.join(string.split(auth), "") # get rid of whitespace
+ extra_headers = [
+ ("Authorization", "Basic " + auth)
+ ]
+ else:
+ extra_headers = None
+
+ return host, extra_headers, x509
+
+ ##
+ # Connect to server.
+ #
+ # @param host Target host.
+ # @return A connection handle.
+
+ def make_connection(self, host):
+ # create a HTTP connection object from a host descriptor
+ import httplib
+ host, extra_headers, x509 = self.get_host_info(host)
+ return httplib.HTTP(host)
+
+ ##
+ # Send request header.
+ #
+ # @param connection Connection handle.
+ # @param handler Target RPC handler.
+ # @param request_body XML-RPC body.
+
+ def send_request(self, connection, handler, request_body):
+ connection.putrequest("POST", handler)
+
+ ##
+ # Send host name.
+ #
+ # @param connection Connection handle.
+ # @param host Host name.
+
+ def send_host(self, connection, host):
+ host, extra_headers, x509 = self.get_host_info(host)
+ connection.putheader("Host", host)
+ if extra_headers:
+ if isinstance(extra_headers, DictType):
+ extra_headers = extra_headers.items()
+ for key, value in extra_headers:
+ connection.putheader(key, value)
+
+ ##
+ # Send user-agent identifier.
+ #
+ # @param connection Connection handle.
+
+ def send_user_agent(self, connection):
+ connection.putheader("User-Agent", self.user_agent)
+
+ ##
+ # Send request body.
+ #
+ # @param connection Connection handle.
+ # @param request_body XML-RPC request body.
+
+ def send_content(self, connection, request_body):
+ connection.putheader("Content-Type", "text/xml")
+ connection.putheader("Content-Length", str(len(request_body)))
+ connection.endheaders()
+ if request_body:
+ connection.send(request_body)
+
+ ##
+ # Parse response.
+ #
+ # @param file Stream.
+ # @return Response tuple and target method.
+
+ def parse_response(self, file):
+ # compatibility interface
+ return self._parse_response(file, None)
+
+ ##
+ # Parse response (alternate interface). This is similar to the
+ # parse_response method, but also provides direct access to the
+ # underlying socket object (where available).
+ #
+ # @param file Stream.
+ # @param sock Socket handle (or None, if the socket object
+ # could not be accessed).
+ # @return Response tuple and target method.
+
+ def _parse_response(self, file, sock):
+ # read response from input file/socket, and parse it
+
+ p, u = self.getparser()
+
+ while 1:
+ if sock:
+ response = sock.recv(1024)
+ else:
+ response = file.read(1024)
+ if not response:
+ break
+ if self.verbose:
+ sys.stdout.write("body: %s\n" % repr(response))
+ p.feed(response)
+
+ file.close()
+ p.close()
+
+ return u.close()
+
+##
+# Standard transport class for XML-RPC over HTTPS.
+
+class SafeTransport(Transport):
+ """Handles an HTTPS transaction to an XML-RPC server."""
+
+ # FIXME: mostly untested
+
+ def make_connection(self, host):
+ # create a HTTPS connection object from a host descriptor
+ # host may be a string, or a (host, x509-dict) tuple
+ import httplib
+ host, extra_headers, x509 = self.get_host_info(host)
+ try:
+ HTTPS = httplib.HTTPS
+ except AttributeError:
+ raise NotImplementedError(
+ "your version of httplib doesn't support HTTPS"
+ )
+ else:
+ return HTTPS(host, None, **(x509 or {}))
+
+##
+# Standard server proxy. This class establishes a virtual connection
+# to an XML-RPC server.
+# <p>
+# This class is available as ServerProxy and Server. New code should
+# use ServerProxy, to avoid confusion.
+#
+# @def ServerProxy(uri, **options)
+# @param uri The connection point on the server.
+# @keyparam transport A transport factory, compatible with the
+# standard transport class.
+# @keyparam encoding The default encoding used for 8-bit strings
+# (default is UTF-8).
+# @keyparam verbose Use a true value to enable debugging output.
+# (printed to standard output).
+# @see Transport
+
+class ServerProxy:
+ """uri [,options] -> a logical connection to an XML-RPC server
+
+ uri is the connection point on the server, given as
+ scheme://host/target.
+
+ The standard implementation always supports the "http" scheme. If
+ SSL socket support is available (Python 2.0), it also supports
+ "https".
+
+ If the target part and the slash preceding it are both omitted,
+ "/RPC2" is assumed.
+
+ The following options can be given as keyword arguments:
+
+ transport: a transport factory
+ encoding: the request encoding (default is UTF-8)
+
+ All 8-bit strings passed to the server proxy are assumed to use
+ the given encoding.
+ """
+
+ def __init__(self, uri, transport=None, encoding=None, verbose=0,
+ allow_none=0, use_datetime=0):
+ # establish a "logical" server connection
+
+ # get the url
+ import urllib
+ type, uri = urllib.splittype(uri)
+ if type not in ("http", "https"):
+ raise IOError("unsupported XML-RPC protocol")
+ self.__host, self.__handler = urllib.splithost(uri)
+ if not self.__handler:
+ self.__handler = "/RPC2"
+
+ if transport is None:
+ if type == "https":
+ transport = SafeTransport(use_datetime=use_datetime)
+ else:
+ transport = Transport(use_datetime=use_datetime)
+ self.__transport = transport
+
+ self.__encoding = encoding
+ self.__verbose = verbose
+ self.__allow_none = allow_none
+
+ def __request(self, methodname, params):
+ # call a method on the remote server
+
+ request = dumps(params, methodname, encoding=self.__encoding,
+ allow_none=self.__allow_none)
+
+ response = self.__transport.request(
+ self.__host,
+ self.__handler,
+ request,
+ verbose=self.__verbose
+ )
+
+ if len(response) == 1:
+ response = response[0]
+
+ return response
+
+ def __repr__(self):
+ return (
+ "<ServerProxy for %s%s>" %
+ (self.__host, self.__handler)
+ )
+
+ __str__ = __repr__
+
+ def __getattr__(self, name):
+ # magic method dispatcher
+ return _Method(self.__request, name)
+
+    # note: to call a remote object with a non-standard name, use
+    # result = getattr(server, "strange-python-name")(args)
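`ServerProxy.__init__` above checks the scheme, splits host from handler, and defaults the handler to `/RPC2`. The same URI handling can be mirrored with the modern `urllib.parse` API — a hedged sketch, not the `splituser`/`splittype` calls the Python 2 code actually uses:

```python
from urllib.parse import urlsplit

def split_xmlrpc_uri(uri):
    # Mirrors ServerProxy.__init__: reject non-HTTP(S) schemes,
    # split host from handler path, default the handler to /RPC2.
    parts = urlsplit(uri)
    if parts.scheme not in ("http", "https"):
        raise OSError("unsupported XML-RPC protocol")
    handler = parts.path or "/RPC2"
    return parts.netloc, handler

assert split_xmlrpc_uri("http://time.xmlrpc.com/RPC2") == ("time.xmlrpc.com", "/RPC2")
assert split_xmlrpc_uri("http://localhost:8000") == ("localhost:8000", "/RPC2")
```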
+
+# compatibility
+
+Server = ServerProxy
+
+# --------------------------------------------------------------------
+# test code
+
+if __name__ == "__main__":
+
+ # simple test program (from the XML-RPC specification)
+
+ # server = ServerProxy("http://localhost:8000") # local server
+ server = ServerProxy("http://time.xmlrpc.com/RPC2")
+
+ sys.stdout.write('%s\n' % server)
+
+ try:
+ sys.stdout.write('%s\n' % (server.currentTime.getCurrentTime(),))
+ except Error:
+ import traceback;traceback.print_exc()
+
+ multi = MultiCall(server)
+ multi.currentTime.getCurrentTime()
+ multi.currentTime.getCurrentTime()
+ try:
+ for response in multi():
+ sys.stdout.write('%s\n' % (response,))
+ except Error:
+ import traceback;traceback.print_exc()
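Beyond the live-server test program above, the `dumps`/`loads` pair can be exercised round-trip without a network. A minimal sketch using `xmlrpc.client`, the Python 3 descendant of the Python 2 `xmlrpclib` vendored in this diff (the wire format is the same):

```python
# Round-trip sketch: marshal a call with dumps(), unmarshal with loads().
from xmlrpc.client import dumps, loads, Fault

# A params tuple plus methodname produces a <methodCall> packet.
packet = dumps((41, "spam"), methodname="example.echo")
params, method = loads(packet)
assert params == (41, "spam")
assert method == "example.echo"

# A Fault instance marshals to a <fault> packet, and loads() re-raises it.
fault_packet = dumps(Fault(1, "boom"))
try:
    loads(fault_packet)
except Fault as f:
    assert f.faultCode == 1
```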
diff --git a/python/helpers/pydev/_pydevd_re.py b/python/helpers/pydev/_pydevd_re.py
new file mode 100644
index 0000000..cd00672
--- /dev/null
+++ b/python/helpers/pydev/_pydevd_re.py
@@ -0,0 +1,11 @@
+
+__all__ = [ "match", "search", "sub", "subn", "split", "findall",
+ "compile", "purge", "template", "escape", "I", "L", "M", "S", "X",
+ "U", "IGNORECASE", "LOCALE", "MULTILINE", "DOTALL", "VERBOSE",
+ "UNICODE", "error" ]
+
+import sre, sys
+module = sys.modules['re']
+for name in __all__:
+ setattr(module, name, getattr(sre, name))
+
diff --git a/python/helpers/pydev/_tipper_common.py b/python/helpers/pydev/_tipper_common.py
new file mode 100644
index 0000000..f8c46d2
--- /dev/null
+++ b/python/helpers/pydev/_tipper_common.py
@@ -0,0 +1,66 @@
+try:
+ import inspect
+except:
+ try:
+ import _pydev_inspect as inspect # for older versions
+ except:
+ import traceback;traceback.print_exc() #Ok, no inspect available (search will not work)
+
+try:
+ import re
+except:
+ try:
+ import _pydev_re as re # for older versions @UnresolvedImport
+ except:
+        import traceback;traceback.print_exc() #Ok, no re available (search will not work)
+
+
+
+def DoFind(f, mod):
+ import linecache
+ if inspect.ismodule(mod):
+ return f, 0, 0
+
+ lines = linecache.getlines(f)
+
+ if inspect.isclass(mod):
+ name = mod.__name__
+ pat = re.compile(r'^\s*class\s*' + name + r'\b')
+ for i in range(len(lines)):
+ if pat.match(lines[i]):
+ return f, i, 0
+
+ return f, 0, 0
+
+ if inspect.ismethod(mod):
+ mod = mod.im_func
+
+ if inspect.isfunction(mod):
+ try:
+ mod = mod.func_code
+ except AttributeError:
+ mod = mod.__code__ #python 3k
+
+ if inspect.istraceback(mod):
+ mod = mod.tb_frame
+
+ if inspect.isframe(mod):
+ mod = mod.f_code
+
+ if inspect.iscode(mod):
+ if not hasattr(mod, 'co_filename'):
+ return None, 0, 0
+
+ if not hasattr(mod, 'co_firstlineno'):
+ return mod.co_filename, 0, 0
+
+ lnum = mod.co_firstlineno
+ pat = re.compile(r'^(\s*def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)')
+ while lnum > 0:
+ if pat.match(lines[lnum]):
+ break
+ lnum -= 1
+
+ return f, lnum, 0
+
+ raise RuntimeError('Do not know about: ' + f + ' ' + str(mod))
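`DoFind` above locates a class definition by scanning source lines with a `^\s*class\s*Name\b` pattern. The scan can be checked standalone (`find_class_line` is an illustrative helper, not part of the module):

```python
import re

def find_class_line(lines, name):
    # 0-based index of the line defining class `name`, or 0 if not
    # found (mirroring DoFind's fall-through `return f, 0, 0`).
    pat = re.compile(r'^\s*class\s*' + name + r'\b')
    for i, line in enumerate(lines):
        if pat.match(line):
            return i
    return 0

src = ["import os\n", "\n", "class Widget(object):\n", "    pass\n"]
assert find_class_line(src, "Widget") == 2
assert find_class_line(src, "Missing") == 0
```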
diff --git a/python/helpers/pydev/django_debug.py b/python/helpers/pydev/django_debug.py
new file mode 100644
index 0000000..37ee042
--- /dev/null
+++ b/python/helpers/pydev/django_debug.py
@@ -0,0 +1,132 @@
+import inspect
+from django_frame import DjangoTemplateFrame, get_template_file_name, get_template_line
+from pydevd_comm import CMD_SET_BREAK
+from pydevd_constants import DJANGO_SUSPEND, GetThreadId
+from pydevd_file_utils import NormFileToServer
+from runfiles import DictContains
+from pydevd_breakpoints import LineBreakpoint
+import pydevd_vars
+import traceback
+
+class DjangoLineBreakpoint(LineBreakpoint):
+ def __init__(self, type, file, line, flag, condition, func_name, expression):
+ self.file = file
+ self.line = line
+ LineBreakpoint.__init__(self, type, flag, condition, func_name, expression)
+
+ def __eq__(self, other):
+ if not isinstance(other, DjangoLineBreakpoint):
+ return False
+ return self.file == other.file and self.line == other.line
+
+ def is_triggered(self, frame):
+ file = get_template_file_name(frame)
+ line = get_template_line(frame)
+ return self.file == file and self.line == line
+
+ def __str__(self):
+ return "DjangoLineBreakpoint: %s-%d" %(self.file, self.line)
+
+
+def inherits(cls, *names):
+ if cls.__name__ in names:
+ return True
+ inherits_node = False
+ for base in inspect.getmro(cls):
+ if base.__name__ in names:
+ inherits_node = True
+ break
+ return inherits_node
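The `inherits` helper above answers "does this class, or any base in its MRO, carry one of these names?" — a name-based check, used because the debugger cannot import Django's classes directly. The same check, standalone, with hypothetical stand-ins for Django's node classes:

```python
import inspect

def inherits(cls, *names):
    # True if cls or any class in its MRO has one of the given names.
    # inspect.getmro(cls) includes cls itself, so the explicit
    # cls.__name__ fast path in the original is an optimization only.
    return any(base.__name__ in names for base in inspect.getmro(cls))

class Node: pass             # hypothetical stand-in for django Node
class TextNode(Node): pass   # hypothetical subclass

assert inherits(TextNode, 'Node')        # found via the MRO
assert inherits(Node, 'Node')            # the class itself
assert not inherits(TextNode, 'NodeList')
```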
+
+
+def is_django_render_call(frame):
+ try:
+ name = frame.f_code.co_name
+ if name != 'render':
+ return False
+
+ if not DictContains(frame.f_locals, 'self'):
+ return False
+
+ cls = frame.f_locals['self'].__class__
+
+ inherits_node = inherits(cls, 'Node')
+
+ if not inherits_node:
+ return False
+
+ clsname = cls.__name__
+ return clsname != 'TextNode' and clsname != 'NodeList'
+ except:
+ traceback.print_exc()
+ return False
+
+
+def is_django_context_get_call(frame):
+ try:
+ if not DictContains(frame.f_locals, 'self'):
+ return False
+
+ cls = frame.f_locals['self'].__class__
+
+ return inherits(cls, 'BaseContext')
+ except:
+ traceback.print_exc()
+ return False
+
+
+def is_django_resolve_call(frame):
+ try:
+ name = frame.f_code.co_name
+ if name != '_resolve_lookup':
+ return False
+
+ if not DictContains(frame.f_locals, 'self'):
+ return False
+
+ cls = frame.f_locals['self'].__class__
+
+ clsname = cls.__name__
+ return clsname == 'Variable'
+ except:
+ traceback.print_exc()
+ return False
+
+
+def is_django_suspended(thread):
+ return thread.additionalInfo.suspend_type == DJANGO_SUSPEND
+
+
+def suspend_django(py_db_frame, mainDebugger, thread, frame, cmd=CMD_SET_BREAK):
+ frame = DjangoTemplateFrame(frame)
+
+ if frame.f_lineno is None:
+ return None
+
+ #try:
+ # if thread.additionalInfo.filename == frame.f_code.co_filename and thread.additionalInfo.line == frame.f_lineno:
+ # return None # don't stay twice on the same line
+ #except AttributeError:
+ # pass
+
+ pydevd_vars.addAdditionalFrameById(GetThreadId(thread), {id(frame): frame})
+
+ py_db_frame.setSuspend(thread, cmd)
+ thread.additionalInfo.suspend_type = DJANGO_SUSPEND
+
+ thread.additionalInfo.filename = frame.f_code.co_filename
+ thread.additionalInfo.line = frame.f_lineno
+
+ return frame
+
+
+def find_django_render_frame(frame):
+ while frame is not None and not is_django_render_call(frame):
+ frame = frame.f_back
+
+ return frame
+
+
+
+
+
diff --git a/python/helpers/pydev/django_frame.py b/python/helpers/pydev/django_frame.py
new file mode 100644
index 0000000..272888e
--- /dev/null
+++ b/python/helpers/pydev/django_frame.py
@@ -0,0 +1,115 @@
+from pydevd_file_utils import GetFileNameAndBaseFromFile
+import pydev_log
+import traceback
+
+def read_file(filename):
+ f = open(filename, "r")
+ s = f.read()
+ f.close()
+ return s
+
+
+def offset_to_line_number(text, offset):
+ curLine = 1
+ curOffset = 0
+ while curOffset < offset:
+ if curOffset == len(text):
+ return -1
+ c = text[curOffset]
+ if c == '\n':
+ curLine += 1
+ elif c == '\r':
+ curLine += 1
+            if curOffset + 1 < len(text) and text[curOffset + 1] == '\n':
+ curOffset += 1
+
+ curOffset += 1
+
+ return curLine
+
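`offset_to_line_number` above walks the text character by character, counting `'\n'`, `'\r'`, and `'\r\n'` line endings until the offset is reached. For `'\n'`-terminated text the same result can be had with a single count — a simplified sketch that ignores lone-`'\r'` endings for brevity:

```python
def offset_to_line(text, offset):
    # 1-based line number of the character at `offset`;
    # -1 if the offset runs past the end of the text
    # (matching the original's early `return -1`).
    if offset > len(text):
        return -1
    # every newline strictly before the offset starts a new line
    return text.count('\n', 0, offset) + 1

assert offset_to_line("ab\ncd\nef", 0) == 1
assert offset_to_line("ab\ncd\nef", 3) == 2
assert offset_to_line("ab\ncd\nef", 6) == 3
assert offset_to_line("ab", 99) == -1
```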
+
+def get_source(frame):
+ try:
+ node = frame.f_locals['self']
+ if hasattr(node, 'source'):
+ return node.source
+ else:
+            pydev_log.error_once("WARNING: Template path is not available. Please set TEMPLATE_DEBUG=True in your settings.py to make Django template breakpoints work")
+ return None
+
+ except:
+ pydev_log.debug(traceback.format_exc())
+ return None
+
+
+def get_template_file_name(frame):
+ try:
+ source = get_source(frame)
+ if source is None:
+ pydev_log.debug("Source is None\n")
+ return None
+ fname = source[0].name
+ pydev_log.debug("Source name is %s\n" % fname)
+ filename, base = GetFileNameAndBaseFromFile(fname)
+ return filename
+ except:
+ pydev_log.debug(traceback.format_exc())
+ return None
+
+
+def get_template_line(frame):
+ source = get_source(frame)
+ file_name = get_template_file_name(frame)
+ try:
+ return offset_to_line_number(read_file(file_name), source[1][0])
+ except:
+ return None
+
+
+class DjangoTemplateFrame:
+ def __init__(self, frame):
+ file_name = get_template_file_name(frame)
+ self.back_context = frame.f_locals['context']
+ self.f_code = FCode('Django Template', file_name)
+ self.f_lineno = get_template_line(frame)
+ self.f_back = frame
+ self.f_globals = {}
+ self.f_locals = self.collect_context(self.back_context)
+ self.f_trace = None
+
+ def collect_context(self, context):
+ res = {}
+ try:
+ for d in context.dicts:
+ for k, v in d.items():
+ res[k] = v
+ except AttributeError:
+ pass
+ return res
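`collect_context` above flattens Django's stack of context dicts into one namespace for the fake frame's `f_locals`, with later dicts shadowing earlier ones. The same flattening, standalone (`FakeContext` is a hypothetical stand-in for `django.template.Context`):

```python
class FakeContext:
    # hypothetical stand-in for Django's Context: a stack of dicts,
    # later dicts shadowing earlier ones
    def __init__(self, dicts):
        self.dicts = dicts

def collect_context(context):
    res = {}
    try:
        for d in context.dicts:
            res.update(d)      # later dicts win, as in the original loop
    except AttributeError:
        pass                   # not a stacked context; expose nothing
    return res

ctx = FakeContext([{"a": 1, "b": 2}, {"b": 3}])
assert collect_context(ctx) == {"a": 1, "b": 3}
assert collect_context(object()) == {}
```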
+
+ def changeVariable(self, name, value):
+ for d in self.back_context.dicts:
+ for k, v in d.items():
+ if k == name:
+ d[k] = value
+
+
+class FCode:
+ def __init__(self, name, filename):
+ self.co_name = name
+ self.co_filename = filename
+
+
+def is_django_exception_break_context(frame):
+ try:
+ name = frame.f_code.co_name
+ except:
+ name = None
+ return name in ['_resolve_lookup', 'find_template']
+
+
+def just_raised(trace):
+ if trace is None:
+ return False
+ return trace.tb_next is None
+
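`just_raised` above reports whether a traceback ends at the raising frame — i.e. whether the exception has not yet propagated anywhere (`tb_next is None`). A standalone check of that behaviour:

```python
import sys

def just_raised(trace):
    # A traceback whose tb_next is None stops at the frame that
    # raised: the exception has not propagated outward yet.
    if trace is None:
        return False
    return trace.tb_next is None

def boom():
    raise ValueError("boom")

try:
    boom()
except ValueError:
    tb = sys.exc_info()[2]
    # the traceback's top entry is this frame; the raising frame
    # (inside boom) is one tb_next deeper
    assert not just_raised(tb)
    assert just_raised(tb.tb_next)

assert not just_raised(None)
```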
diff --git a/python/helpers/pydev/fix_getpass.py b/python/helpers/pydev/fix_getpass.py
new file mode 100644
index 0000000..c81d935
--- /dev/null
+++ b/python/helpers/pydev/fix_getpass.py
@@ -0,0 +1,10 @@
+def fixGetpass():
+ import getpass
+ import warnings
+ fallback = getattr(getpass, 'fallback_getpass', None) # >= 2.6
+ if not fallback:
+ fallback = getpass.default_getpass # <= 2.5
+ getpass.getpass = fallback
+ if hasattr(getpass, 'GetPassWarning'):
+ warnings.simplefilter("ignore", category=getpass.GetPassWarning)
+
diff --git a/python/helpers/pydev/importsTipper.py b/python/helpers/pydev/importsTipper.py
new file mode 100644
index 0000000..97fc76a
--- /dev/null
+++ b/python/helpers/pydev/importsTipper.py
@@ -0,0 +1,345 @@
+import os.path
+import inspect
+import sys
+
+from _tipper_common import DoFind
+
+
+#completion types.
+TYPE_IMPORT = '0'
+TYPE_CLASS = '1'
+TYPE_FUNCTION = '2'
+TYPE_ATTR = '3'
+TYPE_BUILTIN = '4'
+TYPE_PARAM = '5'
+
+def _imp(name, log=None):
+ try:
+ return __import__(name)
+ except:
+ if '.' in name:
+ sub = name[0:name.rfind('.')]
+
+ if log is not None:
+ log.AddContent('Unable to import', name, 'trying with', sub)
+ log.AddException()
+
+ return _imp(sub, log)
+ else:
+ s = 'Unable to import module: %s - sys.path: %s' % (str(name), sys.path)
+ if log is not None:
+ log.AddContent(s)
+ log.AddException()
+
+ raise ImportError(s)
+
+
+IS_IPY = False
+if sys.platform == 'cli':
+ IS_IPY = True
+ _old_imp = _imp
+ def _imp(name, log=None):
+ #We must add a reference in clr for .Net
+ import clr #@UnresolvedImport
+ initial_name = name
+ while '.' in name:
+ try:
+ clr.AddReference(name)
+ break #If it worked, that's OK.
+ except:
+ name = name[0:name.rfind('.')]
+ else:
+ try:
+ clr.AddReference(name)
+ except:
+ pass #That's OK (not dot net module).
+
+ return _old_imp(initial_name, log)
+
+
+
+def GetFile(mod):
+ f = None
+ try:
+ f = inspect.getsourcefile(mod) or inspect.getfile(mod)
+ except:
+ if hasattr(mod, '__file__'):
+ f = mod.__file__
+            if f.lower()[-4:] in ['.pyc', '.pyo']:
+ filename = f[:-4] + '.py'
+ if os.path.exists(filename):
+ f = filename
+
+ return f
+
+def Find(name, log=None):
+ f = None
+
+ mod = _imp(name, log)
+ parent = mod
+ foundAs = ''
+
+ if inspect.ismodule(mod):
+ f = GetFile(mod)
+
+ components = name.split('.')
+
+ old_comp = None
+ for comp in components[1:]:
+ try:
+ #this happens in the following case:
+ #we have mx.DateTime.mxDateTime.mxDateTime.pyd
+ #but after importing it, mx.DateTime.mxDateTime shadows access to mxDateTime.pyd
+ mod = getattr(mod, comp)
+ except AttributeError:
+ if old_comp != comp:
+ raise
+
+ if inspect.ismodule(mod):
+ f = GetFile(mod)
+ else:
+ if len(foundAs) > 0:
+ foundAs = foundAs + '.'
+ foundAs = foundAs + comp
+
+ old_comp = comp
+
+ return f, mod, parent, foundAs
+
+def Search(data):
+ '''@return file, line, col
+ '''
+
+ data = data.replace('\n', '')
+ if data.endswith('.'):
+ data = data.rstrip('.')
+ f, mod, parent, foundAs = Find(data)
+ try:
+ return DoFind(f, mod), foundAs
+ except:
+ return DoFind(f, parent), foundAs
+
+
+def GenerateTip(data, log=None):
+ data = data.replace('\n', '')
+ if data.endswith('.'):
+ data = data.rstrip('.')
+
+ f, mod, parent, foundAs = Find(data, log)
+ #print_ >> open('temp.txt', 'w'), f
+ tips = GenerateImportsTipForModule(mod)
+ return f, tips
+
+
+def CheckChar(c):
+ if c == '-' or c == '.':
+ return '_'
+ return c
+
+def GenerateImportsTipForModule(obj_to_complete, dirComps=None, getattr=getattr, filter=lambda name:True):
+ '''
+ @param obj_to_complete: the object from where we should get the completions
+ @param dirComps: if passed, we should not 'dir' the object and should just iterate those passed as a parameter
+ @param getattr: the way to get a given object from the obj_to_complete (used for the completer)
+ @param filter: a callable that receives the name and decides if it should be appended or not to the results
+ @return: list of tuples, so that each tuple represents a completion with:
+ name, doc, args, type (from the TYPE_* constants)
+ '''
+ ret = []
+
+ if dirComps is None:
+ dirComps = dir(obj_to_complete)
+ if hasattr(obj_to_complete, '__dict__'):
+ dirComps.append('__dict__')
+
+ getCompleteInfo = True
+
+ if len(dirComps) > 1000:
+ #ok, we don't want to let our users wait forever...
+ #no complete info for you...
+
+ getCompleteInfo = False
+
+ dontGetDocsOn = (float, int, str, tuple, list)
+ for d in dirComps:
+
+ if d is None:
+ continue
+
+ if not filter(d):
+ continue
+
+ args = ''
+
+ try:
+ obj = getattr(obj_to_complete, d)
+        except: #just ignore and get it without additional info
+ ret.append((d, '', args, TYPE_BUILTIN))
+ else:
+
+ if getCompleteInfo:
+ try:
+ retType = TYPE_BUILTIN
+
+ #check if we have to get docs
+ getDoc = True
+ for class_ in dontGetDocsOn:
+
+ if isinstance(obj, class_):
+ getDoc = False
+ break
+
+ doc = ''
+ if getDoc:
+ #no need to get this info... too many constants are defined and
+ #makes things much slower (passing all that through sockets takes quite some time)
+ try:
+ doc = inspect.getdoc(obj)
+ if doc is None:
+ doc = ''
+ except: #may happen on jython when checking java classes (so, just ignore it)
+ doc = ''
+
+
+ if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
+ try:
+ args, vargs, kwargs, defaults = inspect.getargspec(obj)
+
+ r = ''
+ for a in (args):
+ if len(r) > 0:
+ r = r + ', '
+ r = r + str(a)
+ args = '(%s)' % (r)
+ except TypeError:
+ #ok, let's see if we can get the arguments from the doc
+ args = '()'
+ try:
+ found = False
+ if len(doc) > 0:
+ if IS_IPY:
+ #Handle case where we have the situation below
+ #sort(self, object cmp, object key)
+ #sort(self, object cmp, object key, bool reverse)
+ #sort(self)
+ #sort(self, object cmp)
+
+ #Or: sort(self: list, cmp: object, key: object)
+ #sort(self: list, cmp: object, key: object, reverse: bool)
+ #sort(self: list)
+ #sort(self: list, cmp: object)
+ if hasattr(obj, '__name__'):
+ name = obj.__name__+'('
+
+
+ #Fix issue where it was appearing sort(aa)sort(bb)sort(cc) in the same line.
+ lines = doc.splitlines()
+ if len(lines) == 1:
+ c = doc.count(name)
+ if c > 1:
+ doc = ('\n'+name).join(doc.split(name))
+
+
+ major = ''
+ for line in doc.splitlines():
+ if line.startswith(name) and line.endswith(')'):
+ if len(line) > len(major):
+ major = line
+ if major:
+ args = major[major.index('('):]
+ found = True
+
+
+ if not found:
+ i = doc.find('->')
+ if i < 0:
+ i = doc.find('--')
+ if i < 0:
+ i = doc.find('\n')
+ if i < 0:
+ i = doc.find('\r')
+
+
+ if i > 0:
+ s = doc[0:i]
+ s = s.strip()
+
+ #let's see if we have a docstring in the first line
+ if s[-1] == ')':
+ start = s.find('(')
+ if start >= 0:
+ end = s.find('[')
+ if end <= 0:
+ end = s.find(')')
+ if end <= 0:
+ end = len(s)
+
+ args = s[start:end]
+ if not args[-1] == ')':
+ args = args + ')'
+
+
+ #now, get rid of unwanted chars
+ l = len(args) - 1
+ r = []
+ for i in range(len(args)):
+ if i == 0 or i == l:
+ r.append(args[i])
+ else:
+ r.append(CheckChar(args[i]))
+
+ args = ''.join(r)
+
+ if IS_IPY:
+ if args.startswith('(self:'):
+ i = args.find(',')
+ if i >= 0:
+ args = '(self'+args[i:]
+ else:
+ args = '(self)'
+ i = args.find(')')
+ if i > 0:
+ args = args[:i+1]
+
+ except:
+ pass
+
+ retType = TYPE_FUNCTION
+
+ elif inspect.isclass(obj):
+ retType = TYPE_CLASS
+
+ elif inspect.ismodule(obj):
+ retType = TYPE_IMPORT
+
+ else:
+ retType = TYPE_ATTR
+
+
+ #add token and doc to return - assure only strings.
+ ret.append((d, doc, args, retType))
+
+                except: #just ignore and get it without additional info
+ ret.append((d, '', args, TYPE_BUILTIN))
+
+ else: #getCompleteInfo == False
+ if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
+ retType = TYPE_FUNCTION
+
+ elif inspect.isclass(obj):
+ retType = TYPE_CLASS
+
+ elif inspect.ismodule(obj):
+ retType = TYPE_IMPORT
+
+ else:
+ retType = TYPE_ATTR
+ #ok, no complete info, let's try to do this as fast and clean as possible
+ #so, no docs for this kind of information, only the signatures
+ ret.append((d, '', str(args), retType))
+
+ return ret
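The tuples returned above can be consumed by keying on the type constant. A trimmed-down sketch of the same classification (assuming the `TYPE_*` string values used by these tipper modules; `mini_tips` is our name):

```python
import inspect
import json

# Same values as the tipper modules' completion-type constants.
TYPE_IMPORT, TYPE_CLASS, TYPE_FUNCTION, TYPE_ATTR = '0', '1', '2', '3'

def mini_tips(mod):
    # Trimmed-down GenerateImportsTipForModule: map name -> completion type.
    tips = {}
    for name in dir(mod):
        obj = getattr(mod, name)
        if inspect.isroutine(obj):
            tips[name] = TYPE_FUNCTION
        elif inspect.isclass(obj):
            tips[name] = TYPE_CLASS
        elif inspect.ismodule(obj):
            tips[name] = TYPE_IMPORT
        else:
            tips[name] = TYPE_ATTR
    return tips

tips = mini_tips(json)
```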
+
+
+
+
diff --git a/python/helpers/pydev/interpreterInfo.py b/python/helpers/pydev/interpreterInfo.py
new file mode 100644
index 0000000..f6056d0
--- /dev/null
+++ b/python/helpers/pydev/interpreterInfo.py
@@ -0,0 +1,143 @@
+'''
+This module was created to get information available in the interpreter, such as libraries,
+paths, etc.
+
+what is what:
+sys.builtin_module_names: contains the builtin modules embedded in python (right now, we specify all manually).
+sys.prefix: A string giving the site-specific directory prefix where the platform independent Python files are installed
+
+format is something as
+EXECUTABLE:python.exe|libs@compiled_dlls$builtin_mods
+
+all internal entries are separated by |
+'''
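As a rough illustration of the one-line format described above (the `parse_interpreter_info` helper is ours, not part of pydev; the script itself emits a multi-line variant of this format):

```python
def parse_interpreter_info(payload):
    # 'EXECUTABLE:python.exe|libs@compiled_dlls$builtin_mods' per the docstring:
    # '|' separates entries, '@' starts compiled dlls, '$' starts builtin modules.
    head, builtins = payload.split('$')
    head, compiled = head.split('@')
    parts = head.split('|')
    return {
        'executable': parts[0].split(':', 1)[1],
        'libs': [p for p in parts[1:] if p],
        'compiled': [c for c in compiled.split('|') if c],
        'builtins': [b for b in builtins.split('|') if b],
    }

info = parse_interpreter_info('EXECUTABLE:python.exe|/usr/lib/python@$sys|os')
```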
+import sys
+import os
+
+
+try:
+ #Just check if False and True are defined (depends on version, not whether it's jython/python)
+ False
+ True
+except:
+ exec ('True, False = 1,0') #An exec is used so that python 3k does not give a syntax error
+
+import pydevd_constants
+
+if pydevd_constants.USE_LIB_COPY:
+ import _pydev_time as time
+else:
+ import time
+
+if sys.platform == "cygwin":
+
+ try:
+ import ctypes #use from the system if available
+ except ImportError:
+ sys.path.append(os.path.join(sys.path[0], 'ThirdParty/wrapped_for_pydev'))
+ import ctypes
+
+    def nativePath(path):
+        '''Get the native form of the path, like c:\\Foo for /cygdrive/c/Foo'''
+        MAX_PATH = 512 #On cygwin NT, it's 260 lately, but we just need a big enough buffer
+
+ retval = ctypes.create_string_buffer(MAX_PATH)
+ path = fullyNormalizePath(path)
+ ctypes.cdll.cygwin1.cygwin_conv_to_win32_path(path, retval) #@UndefinedVariable
+ return retval.value
+
+else:
+ def nativePath(path):
+ return fullyNormalizePath(path)
+
+def fullyNormalizePath(path):
+ '''fixes the path so that the format of the path really reflects the directories in the system
+ '''
+ return os.path.normpath(path)
+
+
+if __name__ == '__main__':
+ try:
+ #just give some time to get the reading threads attached (just in case)
+ time.sleep(0.1)
+ except:
+ pass
+
+ try:
+ executable = nativePath(sys.executable)
+ except:
+ executable = sys.executable
+
+ if sys.platform == "cygwin" and not executable.endswith('.exe'):
+ executable += '.exe'
+
+
+ try:
+ s = 'Version%s.%s' % (sys.version_info[0], sys.version_info[1])
+ except AttributeError:
+ #older versions of python don't have version_info
+ import string
+ s = string.split(sys.version, ' ')[0]
+ s = string.split(s, '.')
+ major = s[0]
+ minor = s[1]
+ s = 'Version%s.%s' % (major, minor)
+
+ sys.stdout.write('%s\n' % (s,))
+
+ sys.stdout.write('EXECUTABLE:%s|\n' % executable)
+
+ #this is the new implementation to get the system folders
+ #(still need to check if it works in linux)
+ #(previously, we were getting the executable dir, but that is not always correct...)
+ prefix = nativePath(sys.prefix)
+ #print_ 'prefix is', prefix
+
+
+ result = []
+
+ path_used = sys.path
+ try:
+ path_used = path_used[:] #Use a copy.
+ except:
+ pass #just ignore it...
+
+ for p in path_used:
+ p = nativePath(p)
+
+ try:
+ import string #to be compatible with older versions
+ if string.find(p, prefix) == 0: #was startswith
+ result.append((p, True))
+ else:
+ result.append((p, False))
+ except (ImportError, AttributeError):
+ #python 3k also does not have it
+ #jython may not have it (depending on how are things configured)
+ if p.startswith(prefix): #was startswith
+ result.append((p, True))
+ else:
+ result.append((p, False))
+
+ for p, b in result:
+ if b:
+ sys.stdout.write('|%s%s\n' % (p, 'INS_PATH'))
+ else:
+ sys.stdout.write('|%s%s\n' % (p, 'OUT_PATH'))
+
+ sys.stdout.write('@\n') #no compiled libs
+ sys.stdout.write('$\n') #the forced libs
+
+ for builtinMod in sys.builtin_module_names:
+ sys.stdout.write('|%s\n' % builtinMod)
+
+
+ try:
+ sys.stdout.flush()
+ sys.stderr.flush()
+ #and give some time to let it read things (just in case)
+ time.sleep(0.1)
+ except:
+ pass
+
+ raise RuntimeError('Ok, this is so that it shows the output (ugly hack for some platforms, so that it releases the output).')
diff --git a/python/helpers/pydev/jyimportsTipper.py b/python/helpers/pydev/jyimportsTipper.py
new file mode 100644
index 0000000..cd8da26
--- /dev/null
+++ b/python/helpers/pydev/jyimportsTipper.py
@@ -0,0 +1,479 @@
+import StringIO
+import traceback
+from java.lang import StringBuffer #@UnresolvedImport
+from java.lang import String #@UnresolvedImport
+import java.lang #@UnresolvedImport
+import sys
+from _tipper_common import DoFind
+
+
+try:
+ False
+ True
+except NameError: # version < 2.3 -- didn't have the True/False builtins
+ import __builtin__
+ setattr(__builtin__, 'True', 1)
+ setattr(__builtin__, 'False', 0)
+
+
+from org.python.core import PyReflectedFunction #@UnresolvedImport
+
+from org.python import core #@UnresolvedImport
+from org.python.core import PyClass #@UnresolvedImport
+
+
+#completion types.
+TYPE_IMPORT = '0'
+TYPE_CLASS = '1'
+TYPE_FUNCTION = '2'
+TYPE_ATTR = '3'
+TYPE_BUILTIN = '4'
+TYPE_PARAM = '5'
+
+def _imp(name):
+ try:
+ return __import__(name)
+ except:
+ if '.' in name:
+ sub = name[0:name.rfind('.')]
+ return _imp(sub)
+ else:
+ s = 'Unable to import module: %s - sys.path: %s' % (str(name), sys.path)
+ raise RuntimeError(s)
+
+def Find(name):
+ f = None
+ if name.startswith('__builtin__'):
+ if name == '__builtin__.str':
+ name = 'org.python.core.PyString'
+ elif name == '__builtin__.dict':
+ name = 'org.python.core.PyDictionary'
+
+ mod = _imp(name)
+ parent = mod
+ foundAs = ''
+
+ if hasattr(mod, '__file__'):
+ f = mod.__file__
+
+
+ components = name.split('.')
+ old_comp = None
+ for comp in components[1:]:
+ try:
+ #this happens in the following case:
+ #we have mx.DateTime.mxDateTime.mxDateTime.pyd
+            #but after importing it, mx.DateTime.mxDateTime shadows access to mxDateTime.pyd
+ mod = getattr(mod, comp)
+ except AttributeError:
+ if old_comp != comp:
+ raise
+
+ if hasattr(mod, '__file__'):
+ f = mod.__file__
+ else:
+ if len(foundAs) > 0:
+ foundAs = foundAs + '.'
+ foundAs = foundAs + comp
+
+ old_comp = comp
+
+ return f, mod, parent, foundAs
+
+def formatParamClassName(paramClassName):
+ if paramClassName.startswith('['):
+ if paramClassName == '[C':
+ paramClassName = 'char[]'
+
+ elif paramClassName == '[B':
+ paramClassName = 'byte[]'
+
+ elif paramClassName == '[I':
+ paramClassName = 'int[]'
+
+ elif paramClassName.startswith('[L') and paramClassName.endswith(';'):
+ paramClassName = paramClassName[2:-1]
+ paramClassName += '[]'
+ return paramClassName
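`formatParamClassName` translates JVM array-type descriptors into Java source syntax. A standalone re-statement of the same mapping (function name restyled for the sketch):

```python
def format_param_class_name(name):
    # Same mapping as formatParamClassName: JVM array descriptors -> Java syntax.
    if name.startswith('['):
        if name == '[C':
            return 'char[]'
        if name == '[B':
            return 'byte[]'
        if name == '[I':
            return 'int[]'
        if name.startswith('[L') and name.endswith(';'):
            # '[Ljava.lang.String;' -> 'java.lang.String[]'
            return name[2:-1] + '[]'
    return name
```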
+
+
+def GenerateTip(data, log=None):
+ data = data.replace('\n', '')
+ if data.endswith('.'):
+ data = data.rstrip('.')
+
+ f, mod, parent, foundAs = Find(data)
+ tips = GenerateImportsTipForModule(mod)
+ return f, tips
+
+
+#=======================================================================================================================
+# Info
+#=======================================================================================================================
+class Info:
+
+ def __init__(self, name, **kwargs):
+ self.name = name
+ self.doc = kwargs.get('doc', None)
+ self.args = kwargs.get('args', ()) #tuple of strings
+ self.varargs = kwargs.get('varargs', None) #string
+ self.kwargs = kwargs.get('kwargs', None) #string
+ self.ret = kwargs.get('ret', None) #string
+
+ def basicAsStr(self):
+ '''@returns this class information as a string (just basic format)
+ '''
+
+ s = 'function:%s args=%s, varargs=%s, kwargs=%s, docs:%s' % \
+ (str(self.name), str(self.args), str(self.varargs), str(self.kwargs), str(self.doc))
+ return s
+
+
+ def getAsDoc(self):
+ s = str(self.name)
+ if self.doc:
+ s += '\n@doc %s\n' % str(self.doc)
+
+ if self.args:
+ s += '\n@params '
+ for arg in self.args:
+ s += str(formatParamClassName(arg))
+ s += ' '
+
+ if self.varargs:
+ s += '\n@varargs '
+ s += str(self.varargs)
+
+ if self.kwargs:
+ s += '\n@kwargs '
+ s += str(self.kwargs)
+
+ if self.ret:
+ s += '\n@return '
+ s += str(formatParamClassName(str(self.ret)))
+
+ return str(s)
+
+def isclass(cls):
+ return isinstance(cls, core.PyClass)
+
+def ismethod(func):
+ '''this function should return the information gathered on a function
+
+ @param func: this is the function we want to get info on
+ @return a tuple where:
+ 0 = indicates whether the parameter passed is a method or not
+ 1 = a list of classes 'Info', with the info gathered from the function
+ this is a list because when we have methods from java with the same name and different signatures,
+ we actually have many methods, each with its own set of arguments
+ '''
+
+ try:
+ if isinstance(func, core.PyFunction):
+ #ok, this is from python, created by jython
+ #print_ ' PyFunction'
+
+ def getargs(func_code):
+ """Get information about the arguments accepted by a code object.
+
+ Three things are returned: (args, varargs, varkw), where 'args' is
+ a list of argument names (possibly containing nested lists), and
+ 'varargs' and 'varkw' are the names of the * and ** arguments or None."""
+
+ nargs = func_code.co_argcount
+ names = func_code.co_varnames
+ args = list(names[:nargs])
+ step = 0
+
+ varargs = None
+ if func_code.co_flags & func_code.CO_VARARGS:
+ varargs = func_code.co_varnames[nargs]
+ nargs = nargs + 1
+ varkw = None
+ if func_code.co_flags & func_code.CO_VARKEYWORDS:
+ varkw = func_code.co_varnames[nargs]
+ return args, varargs, varkw
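The nested `getargs` above reads the argument layout straight off the code object. The same flag-based extraction works on CPython too; a sketch with a plain function (numeric `CO_*` values written out, since Jython exposes them as code-object attributes):

```python
def getargs(func_code):
    # Same flag-based extraction as the nested getargs above.
    CO_VARARGS, CO_VARKEYWORDS = 0x04, 0x08
    nargs = func_code.co_argcount
    args = list(func_code.co_varnames[:nargs])
    varargs = None
    if func_code.co_flags & CO_VARARGS:
        varargs = func_code.co_varnames[nargs]
        nargs = nargs + 1
    varkw = None
    if func_code.co_flags & CO_VARKEYWORDS:
        varkw = func_code.co_varnames[nargs]
    return args, varargs, varkw

def sample(a, b, *rest, **opts):
    return a, b, rest, opts
```

Note that keyword-only arguments (Python 3) would sit between the positionals and `*rest` in `co_varnames`, so this sketch only covers the classic layout the Jython code handles.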
+
+ args = getargs(func.func_code)
+ return 1, [Info(func.func_name, args=args[0], varargs=args[1], kwargs=args[2], doc=func.func_doc)]
+
+ if isinstance(func, core.PyMethod):
+ #this is something from java itself, and jython just wrapped it...
+
+ #things to play in func:
+ #['__call__', '__class__', '__cmp__', '__delattr__', '__dir__', '__doc__', '__findattr__', '__name__', '_doget', 'im_class',
+ #'im_func', 'im_self', 'toString']
+ #print_ ' PyMethod'
+ #that's the PyReflectedFunction... keep going to get it
+ func = func.im_func
+
+ if isinstance(func, PyReflectedFunction):
+ #this is something from java itself, and jython just wrapped it...
+
+ #print_ ' PyReflectedFunction'
+
+ infos = []
+ for i in range(len(func.argslist)):
+ #things to play in func.argslist[i]:
+
+ #'PyArgsCall', 'PyArgsKeywordsCall', 'REPLACE', 'StandardCall', 'args', 'compare', 'compareTo', 'data', 'declaringClass'
+ #'flags', 'isStatic', 'matches', 'precedence']
+
+ #print_ ' ', func.argslist[i].data.__class__
+ #func.argslist[i].data.__class__ == java.lang.reflect.Method
+
+ if func.argslist[i]:
+ met = func.argslist[i].data
+ name = met.getName()
+ try:
+ ret = met.getReturnType()
+ except AttributeError:
+ ret = ''
+ parameterTypes = met.getParameterTypes()
+
+ args = []
+ for j in range(len(parameterTypes)):
+ paramTypesClass = parameterTypes[j]
+ try:
+ try:
+ paramClassName = paramTypesClass.getName()
+ except:
+ paramClassName = paramTypesClass.getName(paramTypesClass)
+ except AttributeError:
+ try:
+ paramClassName = repr(paramTypesClass) #should be something like <type 'object'>
+ paramClassName = paramClassName.split('\'')[1]
+ except:
+ paramClassName = repr(paramTypesClass) #just in case something else happens... it will at least be visible
+                        #if the parameter equals [C, it means it is a char array, so, let's change it
+
+ a = formatParamClassName(paramClassName)
+ #a = a.replace('[]','Array')
+ #a = a.replace('Object', 'obj')
+ #a = a.replace('String', 's')
+ #a = a.replace('Integer', 'i')
+ #a = a.replace('Char', 'c')
+ #a = a.replace('Double', 'd')
+ args.append(a) #so we don't leave invalid code
+
+
+ info = Info(name, args=args, ret=ret)
+ #print_ info.basicAsStr()
+ infos.append(info)
+
+ return 1, infos
+ except Exception, e:
+ s = StringIO.StringIO()
+ traceback.print_exc(file=s)
+ return 1, [Info(str('ERROR'), doc=s.getvalue())]
+
+ return 0, None
+
+def ismodule(mod):
+ #java modules... do we have other way to know that?
+ if not hasattr(mod, 'getClass') and not hasattr(mod, '__class__') \
+ and hasattr(mod, '__name__'):
+ return 1
+
+ return isinstance(mod, core.PyModule)
+
+
+def dirObj(obj):
+ ret = []
+ found = java.util.HashMap()
+ original = obj
+ if hasattr(obj, '__class__'):
+ if obj.__class__ == java.lang.Class:
+
+ #get info about superclasses
+ classes = []
+ classes.append(obj)
+ try:
+ c = obj.getSuperclass()
+ except TypeError:
+ #may happen on jython when getting the java.lang.Class class
+ c = obj.getSuperclass(obj)
+
+            while c is not None:
+ classes.append(c)
+ c = c.getSuperclass()
+
+ #get info about interfaces
+ interfs = []
+ for obj in classes:
+ try:
+ interfs.extend(obj.getInterfaces())
+ except TypeError:
+ interfs.extend(obj.getInterfaces(obj))
+ classes.extend(interfs)
+
+ #now is the time when we actually get info on the declared methods and fields
+ for obj in classes:
+ try:
+ declaredMethods = obj.getDeclaredMethods()
+ except TypeError:
+ declaredMethods = obj.getDeclaredMethods(obj)
+
+ try:
+ declaredFields = obj.getDeclaredFields()
+ except TypeError:
+ declaredFields = obj.getDeclaredFields(obj)
+
+ for i in range(len(declaredMethods)):
+ name = declaredMethods[i].getName()
+ ret.append(name)
+ found.put(name, 1)
+
+ for i in range(len(declaredFields)):
+ name = declaredFields[i].getName()
+ ret.append(name)
+ found.put(name, 1)
+
+
+ elif isclass(obj.__class__):
+ d = dir(obj.__class__)
+ for name in d:
+ ret.append(name)
+ found.put(name, 1)
+
+
+ #this simple dir does not always get all the info, that's why we have the part before
+ #(e.g.: if we do a dir on String, some methods that are from other interfaces such as
+ #charAt don't appear)
+ d = dir(original)
+ for name in d:
+ if found.get(name) != 1:
+ ret.append(name)
+
+ return ret
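`dirObj` walks superclasses and interfaces via Java reflection because a plain `dir` on Jython misses inherited members. A CPython-side analog of that walk, using the MRO instead of reflection (illustrative only; `dir_with_bases` is our name):

```python
def dir_with_bases(obj):
    # CPython analog of dirObj's superclass walk: collect attribute names
    # across the MRO first, then add whatever a plain dir() still finds.
    names, seen = [], set()
    for cls in type(obj).__mro__:
        for name in cls.__dict__:
            if name not in seen:
                seen.add(name)
                names.append(name)
    for name in dir(obj):
        if name not in seen:
            seen.add(name)
            names.append(name)
    return names
```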
+
+
+def formatArg(arg):
+ '''formats an argument to be shown
+ '''
+
+ s = str(arg)
+ dot = s.rfind('.')
+ if dot >= 0:
+ s = s[dot + 1:]
+
+ s = s.replace(';', '')
+ s = s.replace('[]', 'Array')
+ if len(s) > 0:
+ c = s[0].lower()
+ s = c + s[1:]
+
+ return s
+
+
+
+def Search(data):
+ '''@return file, line, col
+ '''
+
+ data = data.replace('\n', '')
+ if data.endswith('.'):
+ data = data.rstrip('.')
+ f, mod, parent, foundAs = Find(data)
+ try:
+ return DoFind(f, mod), foundAs
+ except:
+ return DoFind(f, parent), foundAs
+
+
+def GenerateImportsTipForModule(obj_to_complete, dirComps=None, getattr=getattr, filter=lambda name:True):
+ '''
+ @param obj_to_complete: the object from where we should get the completions
+ @param dirComps: if passed, we should not 'dir' the object and should just iterate those passed as a parameter
+ @param getattr: the way to get a given object from the obj_to_complete (used for the completer)
+ @param filter: a callable that receives the name and decides if it should be appended or not to the results
+ @return: list of tuples, so that each tuple represents a completion with:
+ name, doc, args, type (from the TYPE_* constants)
+ '''
+ ret = []
+
+ if dirComps is None:
+ dirComps = dirObj(obj_to_complete)
+
+ for d in dirComps:
+
+ if d is None:
+ continue
+
+ if not filter(d):
+ continue
+
+ args = ''
+ doc = ''
+ retType = TYPE_BUILTIN
+
+ try:
+ obj = getattr(obj_to_complete, d)
+ except (AttributeError, java.lang.NoClassDefFoundError):
+ #jython has a bug in its custom classloader that prevents some things from working correctly, so, let's see if
+ #we can fix that... (maybe fixing it in jython itself would be a better idea, as this is clearly a bug)
+ #for that we need a custom classloader... we have references from it in the below places:
+ #
+ #http://mindprod.com/jgloss/classloader.html
+ #http://www.javaworld.com/javaworld/jw-03-2000/jw-03-classload-p2.html
+ #http://freshmeat.net/articles/view/1643/
+ #
+ #note: this only happens when we add things to the sys.path at runtime, if they are added to the classpath
+ #before the run, everything goes fine.
+ #
+            #The code below illustrates what I mean...
+ #
+ #import sys
+ #sys.path.insert(1, r"C:\bin\eclipse310\plugins\org.junit_3.8.1\junit.jar" )
+ #
+ #import junit.framework
+ #print_ dir(junit.framework) #shows the TestCase class here
+ #
+ #import junit.framework.TestCase
+ #
+ #raises the error:
+ #Traceback (innermost last):
+ # File "<console>", line 1, in ?
+ #ImportError: No module named TestCase
+ #
+ #whereas if we had added the jar to the classpath before, everything would be fine by now...
+
+ ret.append((d, '', '', retType))
+ #that's ok, private things cannot be gotten...
+ continue
+ else:
+
+ isMet = ismethod(obj)
+ if isMet[0]:
+ info = isMet[1][0]
+ try:
+ args, vargs, kwargs = info.args, info.varargs, info.kwargs
+ doc = info.getAsDoc()
+ r = ''
+ for a in (args):
+ if len(r) > 0:
+ r += ', '
+ r += formatArg(a)
+ args = '(%s)' % (r)
+ except TypeError:
+ traceback.print_exc()
+ args = '()'
+
+ retType = TYPE_FUNCTION
+
+ elif isclass(obj):
+ retType = TYPE_CLASS
+
+ elif ismodule(obj):
+ retType = TYPE_IMPORT
+
+ #add token and doc to return - assure only strings.
+ ret.append((d, doc, args, retType))
+
+
+ return ret
+
+
+if __name__ == "__main__":
+ sys.path.append(r'D:\dev_programs\eclipse_3\310\eclipse\plugins\org.junit_3.8.1\junit.jar')
+ sys.stdout.write('%s\n' % Find('junit.framework.TestCase'))
diff --git a/python/helpers/pydev/pycompletion.py b/python/helpers/pydev/pycompletion.py
new file mode 100644
index 0000000..93dd2e8
--- /dev/null
+++ b/python/helpers/pydev/pycompletion.py
@@ -0,0 +1,39 @@
+#!/usr/bin/python
+'''
+@author Radim Kubacki
+'''
+import importsTipper
+import traceback
+import StringIO
+import sys
+import urllib
+import pycompletionserver
+
+
+#=======================================================================================================================
+# GetImports
+#=======================================================================================================================
+def GetImports(module_name):
+ try:
+ processor = pycompletionserver.Processor()
+ data = urllib.unquote_plus(module_name)
+ def_file, completions = importsTipper.GenerateTip(data)
+ return processor.formatCompletionMessage(def_file, completions)
+ except:
+ s = StringIO.StringIO()
+ exc_info = sys.exc_info()
+
+ traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], limit=None, file=s)
+ err = s.getvalue()
+ pycompletionserver.dbg('Received error: ' + str(err), pycompletionserver.ERROR)
+ raise
+
+
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+if __name__ == '__main__':
+ mod_name = sys.argv[1]
+
+ print(GetImports(mod_name))
+
diff --git a/python/helpers/pydev/pycompletionserver.py b/python/helpers/pydev/pycompletionserver.py
new file mode 100644
index 0000000..fc4332f
--- /dev/null
+++ b/python/helpers/pydev/pycompletionserver.py
@@ -0,0 +1,400 @@
+#@PydevCodeAnalysisIgnore
+'''
+@author Fabio Zadrozny
+'''
+IS_PYTHON3K = 0
+try:
+ import __builtin__
+except ImportError:
+ import builtins as __builtin__ # Python 3.0
+ IS_PYTHON3K = 1
+
+try:
+ True
+ False
+except NameError:
+ #If it's not defined, let's define it now.
+ setattr(__builtin__, 'True', 1) #Python 3.0 does not accept __builtin__.True = 1 in its syntax
+ setattr(__builtin__, 'False', 0)
+
+import pydevd_constants
+
+try:
+ from java.lang import Thread
+ IS_JYTHON = True
+ SERVER_NAME = 'jycompletionserver'
+ import jyimportsTipper #as importsTipper #changed to be backward compatible with 1.5
+ importsTipper = jyimportsTipper
+
+except ImportError:
+ #it is python
+ IS_JYTHON = False
+ SERVER_NAME = 'pycompletionserver'
+ if pydevd_constants.USE_LIB_COPY:
+ from _pydev_threading import Thread
+ else:
+ from threading import Thread
+ import importsTipper
+
+
+if pydevd_constants.USE_LIB_COPY:
+ import _pydev_socket as socket
+else:
+ import socket
+
+import sys
+if sys.platform == "darwin":
+ #See: https://sourceforge.net/projects/pydev/forums/forum/293649/topic/3454227
+ try:
+ import _CF #Don't fail if it doesn't work.
+ except:
+ pass
+
+
+#initial sys.path
+_sys_path = []
+for p in sys.path:
+ #changed to be compatible with 1.5
+ _sys_path.append(p)
+
+#initial sys.modules
+_sys_modules = {}
+for name, mod in sys.modules.items():
+ _sys_modules[name] = mod
+
+
+import traceback
+
+if pydevd_constants.USE_LIB_COPY:
+ import _pydev_time as time
+else:
+ import time
+
+try:
+ import StringIO
+except:
+ import io as StringIO #Python 3.0
+
+try:
+ from urllib import quote_plus, unquote_plus
+except ImportError:
+ from urllib.parse import quote_plus, unquote_plus #Python 3.0
+
+INFO1 = 1
+INFO2 = 2
+WARN = 4
+ERROR = 8
+
+DEBUG = INFO1 | ERROR
+
+def dbg(s, prior):
+ if prior & DEBUG != 0:
+ sys.stdout.write('%s\n' % (s,))
+# f = open('c:/temp/test.txt', 'a')
+# print_ >> f, s
+# f.close()
+
+import pydev_localhost
+HOST = pydev_localhost.get_localhost() # Symbolic name meaning the local host
+
+MSG_KILL_SERVER = '@@KILL_SERVER_END@@'
+MSG_COMPLETIONS = '@@COMPLETIONS'
+MSG_END = 'END@@'
+MSG_INVALID_REQUEST = '@@INVALID_REQUEST'
+MSG_JYTHON_INVALID_REQUEST = '@@JYTHON_INVALID_REQUEST'
+MSG_CHANGE_DIR = '@@CHANGE_DIR:'
+MSG_OK = '@@MSG_OK_END@@'
+MSG_BIKE = '@@BIKE'
+MSG_PROCESSING = '@@PROCESSING_END@@'
+MSG_PROCESSING_PROGRESS = '@@PROCESSING:%sEND@@'
+MSG_IMPORTS = '@@IMPORTS:'
+MSG_PYTHONPATH = '@@PYTHONPATH_END@@'
+MSG_CHANGE_PYTHONPATH = '@@CHANGE_PYTHONPATH:'
+MSG_SEARCH = '@@SEARCH'
+
+BUFFER_SIZE = 1024
+
+
+
+currDirModule = None
+
+def CompleteFromDir(dir):
+    '''
+    This is necessary so that we get the imports from the same dir where the file
+    we are completing is located.
+    '''
+    global currDirModule
+    if currDirModule is not None:
+        try:
+            sys.path.remove(currDirModule)
+        except ValueError:
+            pass
+
+    currDirModule = dir
+    sys.path.insert(0, dir)
+
+
+def ChangePythonPath(pythonpath):
+ '''Changes the pythonpath (clears all the previous pythonpath)
+
+ @param pythonpath: string with paths separated by |
+ '''
+
+ split = pythonpath.split('|')
+ sys.path = []
+ for path in split:
+ path = path.strip()
+ if len(path) > 0:
+ sys.path.append(path)
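The `'|'`-separated parsing above can be condensed to a one-liner; a standalone sketch (`split_pythonpath` is our name) that keeps the same trim-and-drop-blanks behaviour without touching `sys.path`:

```python
def split_pythonpath(pythonpath):
    # Mirrors ChangePythonPath's parsing: '|'-separated entries, blanks dropped.
    return [p.strip() for p in pythonpath.split('|') if p.strip()]
```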
+
+class KeepAliveThread(Thread):
+ def __init__(self, socket):
+ Thread.__init__(self)
+ self.socket = socket
+ self.processMsgFunc = None
+ self.lastMsg = None
+
+ def run(self):
+ time.sleep(0.1)
+
+ def send(s, msg):
+ if IS_PYTHON3K:
+ s.send(bytearray(msg, 'utf-8'))
+ else:
+ s.send(msg)
+
+        while self.lastMsg is None:
+
+            if self.processMsgFunc is not None:
+ s = MSG_PROCESSING_PROGRESS % quote_plus(self.processMsgFunc())
+ sent = send(self.socket, s)
+ else:
+ sent = send(self.socket, MSG_PROCESSING)
+ if sent == 0:
+ sys.exit(0) #connection broken
+ time.sleep(0.1)
+
+ sent = send(self.socket, self.lastMsg)
+ if sent == 0:
+ sys.exit(0) #connection broken
+
+class Processor:
+
+ def __init__(self):
+ # nothing to do
+ return
+
+ def removeInvalidChars(self, msg):
+ try:
+ msg = str(msg)
+ except UnicodeDecodeError:
+ pass
+
+ if msg:
+ try:
+ return quote_plus(msg)
+ except:
+ sys.stdout.write('error making quote plus in %s\n' % (msg,))
+ raise
+ return ' '
+
+ def formatCompletionMessage(self, defFile, completionsList):
+ '''
+ Format the completions suggestions in the following format:
+ @@COMPLETIONS(modFile(token,description),(token,description),(token,description))END@@
+ '''
+ compMsg = []
+ compMsg.append('%s' % defFile)
+ for tup in completionsList:
+ compMsg.append(',')
+
+ compMsg.append('(')
+ compMsg.append(str(self.removeInvalidChars(tup[0]))) #token
+ compMsg.append(',')
+ compMsg.append(self.removeInvalidChars(tup[1])) #description
+
+ if(len(tup) > 2):
+ compMsg.append(',')
+ compMsg.append(self.removeInvalidChars(tup[2])) #args - only if function.
+
+ if(len(tup) > 3):
+ compMsg.append(',')
+ compMsg.append(self.removeInvalidChars(tup[3])) #TYPE
+
+ compMsg.append(')')
+
+ return '%s(%s)%s' % (MSG_COMPLETIONS, ''.join(compMsg), MSG_END)
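A trimmed-down sketch of the framing `formatCompletionMessage` produces (Python 3 `urllib.parse` import shown here; the real class uses the 2/3 fallback imported earlier in this file):

```python
from urllib.parse import quote_plus

def format_completion_message(def_file, completions):
    # Simplified Processor.formatCompletionMessage: every field is
    # quote_plus-escaped so ',' or '(' inside docs can't break the framing.
    parts = ['%s' % def_file]
    for tup in completions:
        parts.append(',(' + ','.join(quote_plus(str(f)) for f in tup) + ')')
    return '@@COMPLETIONS(%s)END@@' % ''.join(parts)

msg = format_completion_message('mod.py', [('foo', 'a doc', '(a, b)', '2')])
```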
+
+
+class T(Thread):
+
+ def __init__(self, thisP, serverP):
+ Thread.__init__(self)
+ self.thisPort = thisP
+ self.serverPort = serverP
+ self.socket = None #socket to send messages.
+ self.processor = Processor()
+
+
+ def connectToServer(self):
+ self.socket = s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ try:
+ s.connect((HOST, self.serverPort))
+ except:
+ sys.stderr.write('Error on connectToServer with parameters: host: %s port: %s\n' % (HOST, self.serverPort))
+ raise
+
+ def getCompletionsMessage(self, defFile, completionsList):
+ '''
+ get message with completions.
+ '''
+ return self.processor.formatCompletionMessage(defFile, completionsList)
+
+ def getTokenAndData(self, data):
+ '''
+ When we receive this, we have 'token):data'
+ '''
+ token = ''
+ for c in data:
+ if c != ')':
+ token = token + c
+ else:
+                break
+
+        return token, data[len(token) + 2:] #skip past the '):' separator (lstrip would strip any matching chars from the data too)
+
+
+ def run(self):
+ # Echo server program
+ try:
+ import _pydev_log
+ log = _pydev_log.Log()
+
+ dbg(SERVER_NAME + ' creating socket' , INFO1)
+ try:
+ s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ s.bind((HOST, self.thisPort))
+ except:
+                sys.stderr.write('Error binding with parameters: host: %s port: %s\n' % (HOST, self.thisPort))
+ raise
+ s.listen(1) #socket to receive messages.
+
+
+ #we stay here until we are connected.
+ #we only accept 1 client.
+ #the exit message for the server is @@KILL_SERVER_END@@
+ dbg(SERVER_NAME + ' waiting for connection' , INFO1)
+ conn, addr = s.accept()
+ time.sleep(0.5) #wait a little before connecting to JAVA server
+
+            dbg(SERVER_NAME + ' waiting for java client', INFO1)
+ #after being connected, create a socket as a client.
+ self.connectToServer()
+
+ dbg(SERVER_NAME + ' Connected by ' + str(addr), INFO1)
+
+
+ while 1:
+ data = ''
+ returnMsg = ''
+ keepAliveThread = KeepAliveThread(self.socket)
+
+ while data.find(MSG_END) == -1:
+ received = conn.recv(BUFFER_SIZE)
+ if len(received) == 0:
+ sys.exit(0) #ok, connection ended
+ if IS_PYTHON3K:
+ data = data + received.decode('utf-8')
+ else:
+ data = data + received
+
+ try:
+ try:
+ if data.find(MSG_KILL_SERVER) != -1:
+ dbg(SERVER_NAME + ' kill message received', INFO1)
+ #break if we received kill message.
+ self.ended = True
+ sys.exit(0)
+
+ dbg(SERVER_NAME + ' starting keep alive thread', INFO2)
+ keepAliveThread.start()
+
+ if data.find(MSG_PYTHONPATH) != -1:
+ comps = []
+ for p in _sys_path:
+ comps.append((p, ' '))
+ returnMsg = self.getCompletionsMessage(None, comps)
+
+ else:
+ data = data[:data.rfind(MSG_END)]
+
+ if data.startswith(MSG_IMPORTS):
+ data = data.replace(MSG_IMPORTS, '')
+ data = unquote_plus(data)
+ defFile, comps = importsTipper.GenerateTip(data, log)
+ returnMsg = self.getCompletionsMessage(defFile, comps)
+
+ elif data.startswith(MSG_CHANGE_PYTHONPATH):
+ data = data.replace(MSG_CHANGE_PYTHONPATH, '')
+ data = unquote_plus(data)
+ ChangePythonPath(data)
+ returnMsg = MSG_OK
+
+ elif data.startswith(MSG_SEARCH):
+ data = data.replace(MSG_SEARCH, '')
+ data = unquote_plus(data)
+ (f, line, col), foundAs = importsTipper.Search(data)
+ returnMsg = self.getCompletionsMessage(f, [(line, col, foundAs)])
+
+ elif data.startswith(MSG_CHANGE_DIR):
+ data = data.replace(MSG_CHANGE_DIR, '')
+ data = unquote_plus(data)
+ CompleteFromDir(data)
+ returnMsg = MSG_OK
+
+ elif data.startswith(MSG_BIKE):
+ returnMsg = MSG_INVALID_REQUEST #No longer supported.
+
+ else:
+ returnMsg = MSG_INVALID_REQUEST
+ except SystemExit:
+ returnMsg = self.getCompletionsMessage(None, [('Exit:', 'SystemExit', '')])
+ keepAliveThread.lastMsg = returnMsg
+ raise
+ except:
+ dbg(SERVER_NAME + ' exception occurred', ERROR)
+ s = StringIO.StringIO()
+ traceback.print_exc(file=s)
+
+ err = s.getvalue()
+ dbg(SERVER_NAME + ' received error: ' + str(err), ERROR)
+ returnMsg = self.getCompletionsMessage(None, [('ERROR:', '%s\nLog:%s' % (err, log.GetContents()), '')])
+
+
+ finally:
+ log.Clear()
+ keepAliveThread.lastMsg = returnMsg
+
+ conn.close()
+ self.ended = True
+ sys.exit(0) #connection broken
+
+
+ except SystemExit:
+ raise
+ #No need to log SystemExit error
+ except:
+ s = StringIO.StringIO()
+ exc_info = sys.exc_info()
+
+ traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], limit=None, file=s)
+ err = s.getvalue()
+ dbg(SERVER_NAME + ' received error: ' + str(err), ERROR)
+ raise
+
+if __name__ == '__main__':
+
+ thisPort = int(sys.argv[1]) #this is from where we want to receive messages.
+ serverPort = int(sys.argv[2])#this is where we want to write messages.
+
+ t = T(thisPort, serverPort)
+ dbg(SERVER_NAME + ' will start', INFO1)
+ t.start()
+ time.sleep(5)
+ t.join()
diff --git a/python/helpers/pydev/pydev_console_utils.py b/python/helpers/pydev/pydev_console_utils.py
new file mode 100644
index 0000000..ee62910
--- /dev/null
+++ b/python/helpers/pydev/pydev_console_utils.py
@@ -0,0 +1,379 @@
+from pydev_imports import xmlrpclib
+import sys
+
+import traceback
+
+from pydevd_constants import USE_LIB_COPY
+
+try:
+ if USE_LIB_COPY:
+ import _pydev_Queue as _queue
+ else:
+ import Queue as _queue
+except:
+ import queue as _queue
+
+try:
+ from pydevd_exec import Exec
+except:
+ from pydevd_exec2 import Exec
+
+try:
+ if USE_LIB_COPY:
+ import _pydev_thread as thread
+ else:
+ import thread
+except:
+ import _thread as thread
+
+import pydevd_xml
+import pydevd_vars
+
+from pydevd_utils import *
+
+#=======================================================================================================================
+# Null
+#=======================================================================================================================
+class Null:
+ """
+ Gotten from: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/68205
+ """
+
+ def __init__(self, *args, **kwargs):
+ return None
+
+ def __call__(self, *args, **kwargs):
+ return self
+
+ def __getattr__(self, mname):
+ return self
+
+ def __setattr__(self, name, value):
+ return self
+
+ def __delattr__(self, name):
+ return self
+
+ def __repr__(self):
+ return "<Null>"
+
+ def __str__(self):
+ return "Null"
+
+ def __len__(self):
+ return 0
+
+ def __getitem__(self):
+ return self
+
+ def __setitem__(self, *args, **kwargs):
+ pass
+
+ def write(self, *args, **kwargs):
+ pass
+
+ def __nonzero__(self):
+ return 0
+
+
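The Null class above is the classic null-object pattern from the cited ActiveState recipe: every attribute access, call, or write is silently absorbed, which is why it can stand in for a stream or an output trap. A minimal standalone sketch of the same idea (the `NullSink` name is illustrative, not part of pydev):

```python
class NullSink:
    """Null-object: absorbs any interaction instead of raising."""

    def __call__(self, *args, **kwargs):
        return self            # calling the sink yields the sink again

    def __getattr__(self, name):
        return self            # any missing attribute access yields the sink

    def write(self, *args, **kwargs):
        pass                   # writes are silently discarded

    def __repr__(self):
        return "<NullSink>"


sink = NullSink()
sink.output_trap.anything("ignored")  # arbitrary chains of accesses and calls never fail
```

This is why assigning such an object to `self.shell.output_trap` (as done later in `pydev_ipython_console_010.py`) effectively disables trapping.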
+#=======================================================================================================================
+# BaseStdIn
+#=======================================================================================================================
+class BaseStdIn:
+ def __init__(self, *args, **kwargs):
+ try:
+ self.encoding = sys.stdin.encoding
+ except:
+ #Not sure if it's available in all Python versions...
+ pass
+
+ def readline(self, *args, **kwargs):
+ #sys.stderr.write('Cannot readline out of the console evaluation\n') -- don't show anything
+        #This could happen if the user had done input('enter number') <-- upon entering this, that message would appear,
+ #which is not something we want.
+ return '\n'
+
+ def isatty(self):
+ return False #not really a file
+
+ def write(self, *args, **kwargs):
+ pass #not available StdIn (but it can be expected to be in the stream interface)
+
+ def flush(self, *args, **kwargs):
+ pass #not available StdIn (but it can be expected to be in the stream interface)
+
+ def read(self, *args, **kwargs):
+ #in the interactive interpreter, a read and a readline are the same.
+ return self.readline()
+
+#=======================================================================================================================
+# StdIn
+#=======================================================================================================================
+class StdIn(BaseStdIn):
+ '''
+ Object to be added to stdin (to emulate it as non-blocking while the next line arrives)
+ '''
+
+ def __init__(self, interpreter, host, client_port):
+ BaseStdIn.__init__(self)
+ self.interpreter = interpreter
+ self.client_port = client_port
+ self.host = host
+
+ def readline(self, *args, **kwargs):
+ #Ok, callback into the client to get the new input
+ try:
+ server = xmlrpclib.Server('http://%s:%s' % (self.host, self.client_port))
+ requested_input = server.RequestInput()
+ if not requested_input:
+ return '\n' #Yes, a readline must return something (otherwise we can get an EOFError on the input() call).
+ return requested_input
+ except:
+ return '\n'
+
+
+#=======================================================================================================================
+# BaseInterpreterInterface
+#=======================================================================================================================
+class BaseInterpreterInterface:
+ def __init__(self, mainThread):
+ self.mainThread = mainThread
+ self.interruptable = False
+ self.exec_queue = _queue.Queue(0)
+ self.buffer = []
+
+ def needMore(self, buffer, line):
+ if not buffer:
+ buffer = []
+ buffer.append(line)
+ source = "\n".join(buffer)
+ if hasattr(self.interpreter, 'is_complete'):
+ return not self.interpreter.is_complete(source)
+
+ try:
+ code = self.interpreter.compile(source, "<input>", "single")
+ except (OverflowError, SyntaxError, ValueError):
+ # Case 1
+ return False
+
+ if code is None:
+ # Case 2
+ return True
+
+ # Case 3
+ return False
+
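The three cases in needMore follow the contract of codeop.compile_command, which the interpreter's compile wraps: invalid input raises (case 1), a syntactically incomplete statement returns None (case 2), and a complete statement returns a code object (case 3). A quick sketch of that contract using the standard library directly:

```python
import codeop

compiler = codeop.CommandCompiler()

# Case 3: a complete statement compiles to a code object.
complete = compiler("x = 1", "<input>", "single")

# Case 2: a syntactically incomplete statement yields None (more input needed).
incomplete = compiler("if True:", "<input>", "single")

# Case 1: invalid input raises, so the buffer is handed off for execution as-is
# (letting the real run report the error to the user).
try:
    compiler("x ==== 1", "<input>", "single")
    invalid_raised = False
except (OverflowError, SyntaxError, ValueError):
    invalid_raised = True
```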
+
+ def addExec(self, line):
+ #f_opened = open('c:/temp/a.txt', 'a')
+ #f_opened.write(line+'\n')
+ original_in = sys.stdin
+ try:
+ help = None
+ if 'pydoc' in sys.modules:
+ pydoc = sys.modules['pydoc'] #Don't import it if it still is not there.
+
+ if hasattr(pydoc, 'help'):
+                    #You never know how the API will be changed, so let's code defensively here
+ help = pydoc.help
+ if not hasattr(help, 'input'):
+ help = None
+ except:
+ #Just ignore any error here
+ pass
+
+ more = False
+ try:
+ sys.stdin = StdIn(self, self.host, self.client_port)
+ try:
+ if help is not None:
+ #This will enable the help() function to work.
+ try:
+ try:
+ help.input = sys.stdin
+ except AttributeError:
+ help._input = sys.stdin
+ except:
+ help = None
+ if not self._input_error_printed:
+ self._input_error_printed = True
+ sys.stderr.write('\nError when trying to update pydoc.help.input\n')
+ sys.stderr.write('(help() may not work -- please report this as a bug in the pydev bugtracker).\n\n')
+ import traceback;
+
+ traceback.print_exc()
+
+ try:
+ self.startExec()
+ more = self.doAddExec(line)
+ self.finishExec()
+ finally:
+ if help is not None:
+ try:
+ try:
+ help.input = original_in
+ except AttributeError:
+ help._input = original_in
+ except:
+ pass
+
+ finally:
+ sys.stdin = original_in
+ except SystemExit:
+ raise
+ except:
+ import traceback;
+
+ traceback.print_exc()
+
+ #it's always false at this point
+ need_input = False
+ return more, need_input
+
+
+ def doAddExec(self, line):
+ '''
+ Subclasses should override.
+
+ @return: more (True if more input is needed to complete the statement and False if the statement is complete).
+ '''
+ raise NotImplementedError()
+
+
+ def getNamespace(self):
+ '''
+ Subclasses should override.
+
+ @return: dict with namespace.
+ '''
+ raise NotImplementedError()
+
+
+ def getDescription(self, text):
+ try:
+ obj = None
+ if '.' not in text:
+ try:
+ obj = self.getNamespace()[text]
+ except KeyError:
+ return ''
+
+ else:
+ try:
+ splitted = text.split('.')
+ obj = self.getNamespace()[splitted[0]]
+ for t in splitted[1:]:
+ obj = getattr(obj, t)
+ except:
+ return ''
+
+ if obj is not None:
+ try:
+ if sys.platform.startswith("java"):
+ #Jython
+ doc = obj.__doc__
+ if doc is not None:
+ return doc
+
+ import jyimportsTipper
+
+ is_method, infos = jyimportsTipper.ismethod(obj)
+ ret = ''
+ if is_method:
+ for info in infos:
+ ret += info.getAsDoc()
+ return ret
+
+ else:
+ #Python and Iron Python
+ import inspect #@UnresolvedImport
+
+ doc = inspect.getdoc(obj)
+ if doc is not None:
+ return doc
+ except:
+ pass
+
+ try:
+ #if no attempt succeeded, try to return repr()...
+ return repr(obj)
+ except:
+ try:
+ #otherwise the class
+ return str(obj.__class__)
+ except:
+ #if all fails, go to an empty string
+ return ''
+ except:
+ traceback.print_exc()
+ return ''
+
+
+ def execLine(self, line):
+ try:
+ #buffer = self.interpreter.buffer[:]
+ self.exec_queue.put(line)
+ return self.needMore(self.buffer, line)
+ except:
+ traceback.print_exc()
+ return False
+
+
+ def interrupt(self):
+ try:
+ if self.interruptable:
+ if hasattr(thread, 'interrupt_main'): #Jython doesn't have it
+ thread.interrupt_main()
+ else:
+ self.mainThread._thread.interrupt() #Jython
+ return True
+ except:
+ traceback.print_exc()
+ return False
+
+ def close(self):
+ sys.exit(0)
+
+ def startExec(self):
+ self.interruptable = True
+
+ def get_server(self):
+ if self.host is not None:
+ return xmlrpclib.Server('http://%s:%s' % (self.host, self.client_port))
+ else:
+ return None
+
+ def finishExec(self):
+ self.interruptable = False
+
+ server = self.get_server()
+
+ if server is not None:
+ return server.NotifyFinished()
+ else:
+ return True
+
+ def getFrame(self):
+ xml = "<xml>"
+ xml += pydevd_xml.frameVarsToXML(self.getNamespace())
+ xml += "</xml>"
+
+ return xml
+
+ def getVariable(self, attributes):
+ xml = "<xml>"
+ valDict = pydevd_vars.resolveVar(self.getNamespace(), attributes)
+ if valDict is None:
+ valDict = {}
+
+ keys = valDict.keys()
+
+ for k in keys:
+ xml += pydevd_vars.varToXML(valDict[k], to_string(k))
+
+ xml += "</xml>"
+
+ return xml
+
+ def changeVariable(self, attr, value):
+ Exec('%s=%s' % (attr, value), self.getNamespace(), self.getNamespace())
\ No newline at end of file
diff --git a/python/helpers/pydev/pydev_imports.py b/python/helpers/pydev/pydev_imports.py
new file mode 100644
index 0000000..9dce8c4
--- /dev/null
+++ b/python/helpers/pydev/pydev_imports.py
@@ -0,0 +1,36 @@
+from pydevd_constants import USE_LIB_COPY
+try:
+ try:
+ if USE_LIB_COPY:
+ import _pydev_xmlrpclib as xmlrpclib
+ else:
+ import xmlrpclib
+ except ImportError:
+ import xmlrpc.client as xmlrpclib
+except ImportError:
+ import _pydev_xmlrpclib as xmlrpclib
+try:
+ try:
+ if USE_LIB_COPY:
+ from _pydev_SimpleXMLRPCServer import SimpleXMLRPCServer
+ else:
+ from SimpleXMLRPCServer import SimpleXMLRPCServer
+ except ImportError:
+ from xmlrpc.server import SimpleXMLRPCServer
+except ImportError:
+ from _pydev_SimpleXMLRPCServer import SimpleXMLRPCServer
+try:
+ from StringIO import StringIO
+except ImportError:
+ from io import StringIO
+try:
+ execfile=execfile #Not in Py3k
+except NameError:
+ from _pydev_execfile import execfile
+try:
+ if USE_LIB_COPY:
+ import _pydev_Queue as _queue
+ else:
+ import Queue as _queue
+except:
+ import queue as _queue
diff --git a/python/helpers/pydev/pydev_ipython_console.py b/python/helpers/pydev/pydev_ipython_console.py
new file mode 100644
index 0000000..d3d4ae8
--- /dev/null
+++ b/python/helpers/pydev/pydev_ipython_console.py
@@ -0,0 +1,132 @@
+import sys
+from pydev_console_utils import BaseInterpreterInterface
+import re
+
+import os
+
+os.environ['TERM'] = 'emacs' #to use proper page_more() for paging
+
+
+#Uncomment to force PyDev standard shell.
+#raise ImportError()
+
+try:
+ #IPython 0.11 broke compatibility...
+ from pydev_ipython_console_011 import PyDevFrontEnd
+except:
+ from pydev_ipython_console_010 import PyDevFrontEnd
+
+#=======================================================================================================================
+# InterpreterInterface
+#=======================================================================================================================
+class InterpreterInterface(BaseInterpreterInterface):
+ '''
+ The methods in this class should be registered in the xml-rpc server.
+ '''
+
+ def __init__(self, host, client_port, mainThread):
+ BaseInterpreterInterface.__init__(self, mainThread)
+ self.client_port = client_port
+ self.host = host
+ self.interpreter = PyDevFrontEnd()
+ self._input_error_printed = False
+ self.notification_succeeded = False
+ self.notification_tries = 0
+ self.notification_max_tries = 3
+
+ self.notify_about_magic()
+
+ def get_greeting_msg(self):
+ return self.interpreter.get_greeting_msg()
+
+ def doAddExec(self, line):
+ self.notify_about_magic()
+ if (line.rstrip().endswith('??')):
+ print('IPython-->')
+ try:
+ res = bool(self.interpreter.addExec(line))
+ finally:
+ if (line.rstrip().endswith('??')):
+ print('<--IPython')
+
+ return res
+
+
+ def getNamespace(self):
+ return self.interpreter.getNamespace()
+
+
+ def getCompletions(self, text, act_tok):
+ try:
+ ipython_completion = text.startswith('%')
+ if not ipython_completion:
+ s = re.search(r'\bcd\b', text)
+ if s is not None and s.start() == 0:
+ ipython_completion = True
+
+ if text is None:
+ text = ""
+
+ TYPE_LOCAL = '9'
+ _line, completions = self.interpreter.complete(text)
+
+ ret = []
+ append = ret.append
+ for completion in completions:
+ if completion.startswith('%'):
+ append((completion[1:], '', '%', TYPE_LOCAL))
+ else:
+ append((completion, '', '', TYPE_LOCAL))
+
+ if ipython_completion:
+ return ret
+
+ #Otherwise, use the default PyDev completer (to get nice icons)
+ from _completer import Completer
+
+ completer = Completer(self.getNamespace(), None)
+ completions = completer.complete(act_tok)
+ cset = set()
+ for c in completions:
+ cset.add(c[0])
+ for c in ret:
+ if c[0] not in cset:
+ completions.append(c)
+
+ return completions
+
+ except:
+ import traceback
+
+ traceback.print_exc()
+ return []
+
+ def close(self):
+ sys.exit(0)
+
+ def ipython_editor(self, file, line):
+ server = self.get_server()
+
+ if server is not None:
+ return server.IPythonEditor(os.path.realpath(file), line)
+
+ def notify_about_magic(self):
+ if not self.notification_succeeded:
+ self.notification_tries+=1
+ if self.notification_tries>self.notification_max_tries:
+ return
+ completions = self.getCompletions("%", "%")
+ magic_commands = [x[0] for x in completions]
+
+ server = self.get_server()
+
+ if server is not None:
+ try:
+ server.NotifyAboutMagic(magic_commands, self.interpreter.is_automagic())
+ self.notification_succeeded = True
+ except :
+ self.notification_succeeded = False
+
+
+
+
diff --git a/python/helpers/pydev/pydev_ipython_console_010.py b/python/helpers/pydev/pydev_ipython_console_010.py
new file mode 100644
index 0000000..e093fef
--- /dev/null
+++ b/python/helpers/pydev/pydev_ipython_console_010.py
@@ -0,0 +1,129 @@
+from IPython.frontend.prefilterfrontend import PrefilterFrontEnd
+from pydev_console_utils import Null
+import sys
+import codeop  #used by is_complete below
+original_stdout = sys.stdout
+original_stderr = sys.stderr
+
+
+#=======================================================================================================================
+# PyDevFrontEnd
+#=======================================================================================================================
+class PyDevFrontEnd(PrefilterFrontEnd):
+
+
+ def __init__(self, *args, **kwargs):
+ PrefilterFrontEnd.__init__(self, *args, **kwargs)
+ #Disable the output trap: we want all that happens to go to the output directly
+ self.shell.output_trap = Null()
+ self._curr_exec_lines = []
+ self._continuation_prompt = ''
+
+
+ def capture_output(self):
+ pass
+
+
+ def release_output(self):
+ pass
+
+
+ def continuation_prompt(self):
+ return self._continuation_prompt
+
+
+ def write(self, txt, refresh=True):
+ original_stdout.write(txt)
+
+
+ def new_prompt(self, prompt):
+ self.input_buffer = ''
+ #The java side takes care of this part.
+ #self.write(prompt)
+
+
+ def show_traceback(self):
+ import traceback;traceback.print_exc()
+
+
+ def write_out(self, txt, *args, **kwargs):
+ original_stdout.write(txt)
+
+
+ def write_err(self, txt, *args, **kwargs):
+ original_stderr.write(txt)
+
+
+ def getNamespace(self):
+ return self.shell.user_ns
+
+
+ def is_complete(self, string):
+ #Based on IPython 0.10.1
+
+ if string in ('', '\n'):
+ # Prefiltering, eg through ipython0, may return an empty
+ # string although some operations have been accomplished. We
+ # thus want to consider an empty string as a complete
+ # statement.
+ return True
+ else:
+ try:
+ # Add line returns here, to make sure that the statement is
+ # complete (except if '\' was used).
+                # This should probably be done in a different place (like
+                # maybe the 'prefilter_input' method?). For now, this works.
+ clean_string = string.rstrip('\n')
+ if not clean_string.endswith('\\'): clean_string += '\n\n'
+ is_complete = codeop.compile_command(clean_string,
+ "<string>", "exec")
+ except Exception:
+ # XXX: Hack: return True so that the
+ # code gets executed and the error captured.
+ is_complete = True
+ return is_complete
+
+
+ def addExec(self, line):
+ if self._curr_exec_lines:
+ if not line:
+ self._curr_exec_lines.append(line)
+
+ #Would be the line below, but we've set the continuation_prompt to ''.
+ #buf = self.continuation_prompt() + ('\n' + self.continuation_prompt()).join(self._curr_exec_lines)
+ buf = '\n'.join(self._curr_exec_lines)
+
+ self.input_buffer = buf + '\n'
+ if self._on_enter():
+ del self._curr_exec_lines[:]
+ return False #execute complete (no more)
+
+ return True #needs more
+ else:
+ self._curr_exec_lines.append(line)
+ return True #needs more
+
+ else:
+
+ self.input_buffer = line
+ if not self._on_enter():
+ #Did not execute
+ self._curr_exec_lines.append(line)
+ return True #needs more
+
+ return False #execute complete (no more)
+
+ def update(self, globals, locals):
+ locals['_oh'] = self.shell.user_ns['_oh']
+ locals['_ip'] = self.shell.user_ns['_ip']
+ self.shell.user_global_ns = globals
+ self.shell.user_ns = locals
+
+ def is_automagic(self):
+ if self.ipython0.rc.automagic:
+ return True
+ else:
+ return False
+
+ def get_greeting_msg(self):
+ return 'PyDev console: using IPython 0.10\n'
+
diff --git a/python/helpers/pydev/pydev_ipython_console_011.py b/python/helpers/pydev/pydev_ipython_console_011.py
new file mode 100644
index 0000000..a6c0d82
--- /dev/null
+++ b/python/helpers/pydev/pydev_ipython_console_011.py
@@ -0,0 +1,152 @@
+try:
+ from IPython.terminal.interactiveshell import TerminalInteractiveShell
+except ImportError:
+ from IPython.frontend.terminal.interactiveshell import TerminalInteractiveShell
+from IPython.utils import io
+import sys
+import codeop, re
+original_stdout = sys.stdout
+original_stderr = sys.stderr
+from IPython.core import release
+
+
+#=======================================================================================================================
+# _showtraceback
+#=======================================================================================================================
+def _showtraceback(*args, **kwargs):
+ import traceback;traceback.print_exc()
+
+
+
+#=======================================================================================================================
+# PyDevFrontEnd
+#=======================================================================================================================
+class PyDevFrontEnd:
+
+ version = release.__version__
+
+
+ def __init__(self, *args, **kwargs):
+ #Initialization based on: from IPython.testing.globalipapp import start_ipython
+
+ self._curr_exec_line = 0
+ # Store certain global objects that IPython modifies
+ _displayhook = sys.displayhook
+ _excepthook = sys.excepthook
+
+ # Create and initialize our IPython instance.
+ shell = TerminalInteractiveShell.instance()
+
+ shell.showtraceback = _showtraceback
+ # IPython is ready, now clean up some global state...
+
+ # Deactivate the various python system hooks added by ipython for
+ # interactive convenience so we don't confuse the doctest system
+ sys.displayhook = _displayhook
+ sys.excepthook = _excepthook
+
+ # So that ipython magics and aliases can be doctested (they work by making
+ # a call into a global _ip object). Also make the top-level get_ipython
+ # now return this without recursively calling here again.
+ get_ipython = shell.get_ipython
+ try:
+ import __builtin__
+ except:
+ import builtins as __builtin__
+ __builtin__._ip = shell
+ __builtin__.get_ipython = get_ipython
+
+ # We want to print to stdout/err as usual.
+ io.stdout = original_stdout
+ io.stderr = original_stderr
+
+
+ self._curr_exec_lines = []
+ self.ipython = shell
+
+
+ def update(self, globals, locals):
+ ns = self.ipython.user_ns
+
+ for ind in ['_oh', '_ih', '_dh', '_sh', 'In', 'Out', 'get_ipython', 'exit', 'quit']:
+ locals[ind] = ns[ind]
+
+ self.ipython.user_global_ns.clear()
+ self.ipython.user_global_ns.update(globals)
+ self.ipython.user_ns = locals
+
+ if hasattr(self.ipython, 'history_manager') and hasattr(self.ipython.history_manager, 'save_thread'):
+ self.ipython.history_manager.save_thread.pydev_do_not_trace = True #don't trace ipython history saving thread
+
+ def complete(self, string):
+ if string:
+ return self.ipython.complete(string)
+ else:
+ return self.ipython.complete(string, string, 0)
+
+
+
+ def is_complete(self, string):
+ #Based on IPython 0.10.1
+
+ if string in ('', '\n'):
+ # Prefiltering, eg through ipython0, may return an empty
+ # string although some operations have been accomplished. We
+ # thus want to consider an empty string as a complete
+ # statement.
+ return True
+ else:
+ try:
+ # Add line returns here, to make sure that the statement is
+ # complete (except if '\' was used).
+                # This should probably be done in a different place (like
+                # maybe the 'prefilter_input' method?). For now, this works.
+ clean_string = string.rstrip('\n')
+ if not clean_string.endswith('\\'): clean_string += '\n\n'
+ is_complete = codeop.compile_command(clean_string,
+ "<string>", "exec")
+ except Exception:
+ # XXX: Hack: return True so that the
+ # code gets executed and the error captured.
+ is_complete = True
+ return is_complete
+
+
+ def getNamespace(self):
+ return self.ipython.user_ns
+
+
+ def addExec(self, line):
+ if self._curr_exec_lines:
+ self._curr_exec_lines.append(line)
+
+ buf = '\n'.join(self._curr_exec_lines)
+
+ if self.is_complete(buf):
+ self._curr_exec_line += 1
+ self.ipython.run_cell(buf)
+ del self._curr_exec_lines[:]
+ return False #execute complete (no more)
+
+ return True #needs more
+ else:
+
+ if not self.is_complete(line):
+ #Did not execute
+ self._curr_exec_lines.append(line)
+ return True #needs more
+ else:
+ self._curr_exec_line += 1
+ self.ipython.run_cell(line, store_history=True)
+ #hist = self.ipython.history_manager.output_hist_reprs
+ #rep = hist.get(self._curr_exec_line, None)
+ #if rep is not None:
+ # print(rep)
+ return False #execute complete (no more)
+
+ def is_automagic(self):
+ return self.ipython.automagic
+
+ def get_greeting_msg(self):
+ return 'PyDev console: using IPython %s\n' % self.version
+
diff --git a/python/helpers/pydev/pydev_localhost.py b/python/helpers/pydev/pydev_localhost.py
new file mode 100644
index 0000000..4e7a4d9
--- /dev/null
+++ b/python/helpers/pydev/pydev_localhost.py
@@ -0,0 +1,40 @@
+import pydevd_constants
+if pydevd_constants.USE_LIB_COPY:
+ import _pydev_socket as socket
+else:
+ import socket
+
+_cache = None
+def get_localhost():
+ '''
+ Should return 127.0.0.1 in ipv4 and ::1 in ipv6
+
+    'localhost' is not used because on Windows Vista/Windows 7 there can be issues where name resolution doesn't work
+ properly and takes a lot of time (had this issue on the pyunit server).
+
+ Using the IP directly solves the problem.
+ '''
+ #TODO: Needs better investigation!
+
+ global _cache
+ if _cache is None:
+ try:
+ for addr_info in socket.getaddrinfo("localhost", 80, 0, 0, socket.SOL_TCP):
+ config = addr_info[4]
+ if config[0] == '127.0.0.1':
+ _cache = '127.0.0.1'
+ return _cache
+ except:
+ #Ok, some versions of Python don't have getaddrinfo or SOL_TCP... Just consider it 127.0.0.1 in this case.
+ _cache = '127.0.0.1'
+ else:
+ _cache = 'localhost'
+
+ return _cache
+
+
+def get_socket_name():
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ sock.bind(('', 0))
+ return sock.getsockname()
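get_socket_name above relies on the OS handing out a free ephemeral port when a socket binds to port 0. A small standalone sketch of the same technique (`find_free_port` is an illustrative name, not part of pydev; it also closes the socket, which the helper above leaves to the caller):

```python
import socket

def find_free_port(host=''):
    # Binding to port 0 asks the OS to pick any currently unused ephemeral port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        sock.bind((host, 0))
        return sock.getsockname()  # (host, port) pair, like get_socket_name()
    finally:
        sock.close()

name = find_free_port()
```

Note that the port is only guaranteed free at the moment of the call; another process could grab it before it is reused, which is an accepted race in this kind of helper.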
diff --git a/python/helpers/pydev/pydev_log.py b/python/helpers/pydev/pydev_log.py
new file mode 100644
index 0000000..229784b
--- /dev/null
+++ b/python/helpers/pydev/pydev_log.py
@@ -0,0 +1,31 @@
+import sys
+from pydevd_constants import DebugInfoHolder
+from pydevd_constants import DictContains
+
+WARN_ONCE_MAP = {}
+
+def stderr_write(message):
+ sys.stderr.write(message)
+ sys.stderr.write("\n")
+
+
+def debug(message):
+ if DebugInfoHolder.DEBUG_TRACE_LEVEL>2:
+ stderr_write(message)
+
+
+def warn(message):
+ if DebugInfoHolder.DEBUG_TRACE_LEVEL>1:
+ stderr_write(message)
+
+def info(message):
+ stderr_write(message)
+
+def error(message):
+ stderr_write(message)
+
+def error_once(message):
+ if not DictContains(WARN_ONCE_MAP, message):
+ WARN_ONCE_MAP[message] = True
+ error(message)
+
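error_once above uses a module-level map to report each distinct message only the first time it is seen. The same deduplication idea as a standalone sketch (names here are illustrative):

```python
_reported = {}

def report_once(message):
    # Emit each distinct message only the first time it is seen;
    # return whether the caller should actually log it.
    if message not in _reported:
        _reported[message] = True
        return True
    return False

first = report_once('disk full')   # True: first occurrence, log it
second = report_once('disk full')  # False: suppressed repeat
```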
diff --git a/python/helpers/pydev/pydev_monkey.py b/python/helpers/pydev/pydev_monkey.py
new file mode 100644
index 0000000..07c7e2b
--- /dev/null
+++ b/python/helpers/pydev/pydev_monkey.py
@@ -0,0 +1,308 @@
+import os
+import shlex
+import sys
+import pydev_log
+import traceback
+
+helpers = os.path.dirname(__file__)
+
+def is_python(path):
+ if path.endswith("'") or path.endswith('"'):
+ path = path[1:len(path)-1]
+ filename = os.path.basename(path).lower()
+ for name in ['python', 'jython', 'pypy']:
+ if filename.find(name) != -1:
+ return True
+
+ return False
+
+def patch_args(args):
+ try:
+ pydev_log.debug("Patching args: %s"% str(args))
+
+ import sys
+ new_args = []
+ i = 0
+ if len(args) == 0:
+ return args
+
+ if is_python(args[0]):
+ try:
+ indC = args.index('-c')
+ except ValueError:
+ indC = -1
+
+ if indC != -1:
+ import pydevd
+ host, port = pydevd.dispatch()
+
+ if port is not None:
+ args[indC + 1] = "import sys; sys.path.append('%s'); import pydevd; pydevd.settrace(host='%s', port=%s, suspend=False); %s"%(helpers, host, port, args[indC + 1])
+ return args
+ else:
+ new_args.append(args[0])
+ else:
+ pydev_log.debug("Process is not python, returning.")
+ return args
+
+ i = 1
+ while i < len(args):
+ if args[i].startswith('-'):
+ new_args.append(args[i])
+ else:
+ break
+ i+=1
+
+ if args[i].endswith('pydevd.py'): #no need to add pydevd twice
+ return args
+
+ for x in sys.original_argv:
+ if sys.platform == "win32" and not x.endswith('"'):
+ arg = '"%s"'%x
+ else:
+ arg = x
+ new_args.append(arg)
+ if x == '--file':
+ break
+
+ while i < len(args):
+ new_args.append(args[i])
+ i+=1
+
+ return new_args
+ except:
+ traceback.print_exc()
+ return args
+
+
+def args_to_str(args):
+ quoted_args = []
+ for x in args:
+ if x.startswith('"') and x.endswith('"'):
+ quoted_args.append(x)
+ else:
+ quoted_args.append('"%s"' % x)
+
+ return ' '.join(quoted_args)
+
+def remove_quotes(str):
+ if str.startswith('"') and str.endswith('"'):
+ return str[1:-1]
+ else:
+ return str
+
+def str_to_args(str):
+ return [remove_quotes(x) for x in shlex.split(str)]
+
+def patch_arg_str_win(arg_str):
+ new_arg_str = arg_str.replace('\\', '/')
+ args = str_to_args(new_arg_str)
+ if not is_python(args[0]):
+ return arg_str
+ art = args_to_str(patch_args(args))
+ return art
+
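args_to_str and str_to_args round-trip a command line through double quotes and shlex; patch_arg_str_win first flips backslashes to forward slashes because shlex would otherwise treat them as escapes. A sketch of the quoting round trip (re-implemented here for illustration):

```python
import shlex

def quote_args(args):
    # Mirror args_to_str: wrap each argument in double quotes unless already quoted.
    return ' '.join(a if a.startswith('"') and a.endswith('"') else '"%s"' % a
                    for a in args)

def split_cmdline(cmdline):
    # shlex.split already strips the double quotes it understands,
    # so no extra unquoting is needed on this path.
    return shlex.split(cmdline)

args = split_cmdline('python "my script.py" --flag')
cmdline = quote_args(args)
```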
+def monkey_patch_module(module, funcname, create_func):
+ if hasattr(module, funcname):
+ original_name = 'original_' + funcname
+ if not hasattr(module, original_name):
+ setattr(module, original_name, getattr(module, funcname))
+ setattr(module, funcname, create_func(original_name))
+
+
+def monkey_patch_os(funcname, create_func):
+ monkey_patch_module(os, funcname, create_func)
+
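monkey_patch_module saves the original callable under an `original_<name>` attribute, guarding against double-wrapping, before installing the wrapper built by create_func. The same save-and-wrap pattern applied to a toy module (the `toy` module and `greet` names are illustrative stand-ins for `os` or `_subprocess`):

```python
import types

def save_and_wrap(module, funcname, make_wrapper):
    # Stash the original once under 'original_<name>' so repeated
    # patching doesn't wrap the wrapper, then install the replacement.
    original_name = 'original_' + funcname
    if not hasattr(module, original_name):
        setattr(module, original_name, getattr(module, funcname))
    setattr(module, funcname, make_wrapper(original_name))

toy = types.ModuleType('toy')
toy.greet = lambda name: 'hello ' + name

def make_loud(original_name):
    def loud(name):
        # Delegate to the saved original, then post-process its result.
        return getattr(toy, original_name)(name).upper()
    return loud

save_and_wrap(toy, 'greet', make_loud)
```

The `hasattr` guard is the key detail: calling save_and_wrap twice still delegates to the true original, just as patch_new_process_functions can be invoked safely more than once.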
+
+def warn_multiproc():
+ import pydev_log
+
+ pydev_log.error_once(
+        "A new process is launching; breakpoints won't work in it.\nTo debug that process, please enable the 'Attach to subprocess automatically while debugging' option in the debugger settings.\n")
+
+
+def create_warn_multiproc(original_name):
+
+ def new_warn_multiproc(*args):
+ import os
+
+ warn_multiproc()
+
+ return getattr(os, original_name)(*args)
+ return new_warn_multiproc
+
+def create_execl(original_name):
+ def new_execl(path, *args):
+ '''
+os.execl(path, arg0, arg1, ...)
+os.execle(path, arg0, arg1, ..., env)
+os.execlp(file, arg0, arg1, ...)
+os.execlpe(file, arg0, arg1, ..., env)
+ '''
+ import os
+ args = patch_args(args)
+ return getattr(os, original_name)(path, *args)
+ return new_execl
+
+def create_execv(original_name):
+ def new_execv(path, args):
+ '''
+os.execv(path, args)
+os.execvp(file, args)
+ '''
+ import os
+ return getattr(os, original_name)(path, patch_args(args))
+ return new_execv
+
+def create_execve(original_name):
+ """
+os.execve(path, args, env)
+os.execvpe(file, args, env)
+ """
+ def new_execve(path, args, env):
+ import os
+ return getattr(os, original_name)(path, patch_args(args), env)
+ return new_execve
+
+
+def create_spawnl(original_name):
+ def new_spawnl(mode, path, *args):
+ '''
+os.spawnl(mode, path, arg0, arg1, ...)
+os.spawnlp(mode, file, arg0, arg1, ...)
+ '''
+ import os
+ args = patch_args(args)
+ return getattr(os, original_name)(mode, path, *args)
+ return new_spawnl
+
+def create_spawnv(original_name):
+ def new_spawnv(mode, path, args):
+ '''
+os.spawnv(mode, path, args)
+os.spawnvp(mode, file, args)
+ '''
+ import os
+ return getattr(os, original_name)(mode, path, patch_args(args))
+ return new_spawnv
+
+def create_spawnve(original_name):
+ """
+os.spawnve(mode, path, args, env)
+os.spawnvpe(mode, file, args, env)
+ """
+ def new_spawnve(mode, path, args, env):
+ import os
+ return getattr(os, original_name)(mode, path, patch_args(args), env)
+ return new_spawnve
+
+def create_CreateProcess(original_name):
+ """
+CreateProcess(*args, **kwargs)
+ """
+ def new_CreateProcess(appName, commandLine, *args):
+ try:
+ import _subprocess
+ except ImportError:
+ import _winapi as _subprocess
+ return getattr(_subprocess, original_name)(appName, patch_arg_str_win(commandLine), *args)
+ return new_CreateProcess
+
+def create_CreateProcessWarnMultiproc(original_name):
+ """
+CreateProcess(*args, **kwargs)
+ """
+ def new_CreateProcess(*args):
+ try:
+ import _subprocess
+ except ImportError:
+ import _winapi as _subprocess
+ warn_multiproc()
+ return getattr(_subprocess, original_name)(*args)
+ return new_CreateProcess
+
+def create_fork(original_name):
+ def new_fork():
+ import os
+ child_process = getattr(os, original_name)() # fork
+ if not child_process:
+ import pydevd
+
+ pydevd.settrace_forked()
+ return child_process
+ return new_fork
+
+def patch_new_process_functions():
+#os.execl(path, arg0, arg1, ...)
+#os.execle(path, arg0, arg1, ..., env)
+#os.execlp(file, arg0, arg1, ...)
+#os.execlpe(file, arg0, arg1, ..., env)
+#os.execv(path, args)
+#os.execve(path, args, env)
+#os.execvp(file, args)
+#os.execvpe(file, args, env)
+ monkey_patch_os('execl', create_execl)
+ monkey_patch_os('execle', create_execl)
+ monkey_patch_os('execlp', create_execl)
+ monkey_patch_os('execlpe', create_execl)
+ monkey_patch_os('execv', create_execv)
+ monkey_patch_os('execve', create_execve)
+ monkey_patch_os('execvp', create_execv)
+ monkey_patch_os('execvpe', create_execve)
+
+#os.spawnl(mode, path, ...)
+#os.spawnle(mode, path, ..., env)
+#os.spawnlp(mode, file, ...)
+#os.spawnlpe(mode, file, ..., env)
+#os.spawnv(mode, path, args)
+#os.spawnve(mode, path, args, env)
+#os.spawnvp(mode, file, args)
+#os.spawnvpe(mode, file, args, env)
+
+ monkey_patch_os('spawnl', create_spawnl)
+ monkey_patch_os('spawnle', create_spawnl)
+ monkey_patch_os('spawnlp', create_spawnl)
+ monkey_patch_os('spawnlpe', create_spawnl)
+ monkey_patch_os('spawnv', create_spawnv)
+ monkey_patch_os('spawnve', create_spawnve)
+ monkey_patch_os('spawnvp', create_spawnv)
+ monkey_patch_os('spawnvpe', create_spawnve)
+
+ if sys.platform != 'win32':
+ monkey_patch_os('fork', create_fork)
+ else:
+ #Windows
+ try:
+ import _subprocess
+ except ImportError:
+ import _winapi as _subprocess
+ monkey_patch_module(_subprocess, 'CreateProcess', create_CreateProcess)
+
+
+def patch_new_process_functions_with_warning():
+ monkey_patch_os('execl', create_warn_multiproc)
+ monkey_patch_os('execle', create_warn_multiproc)
+ monkey_patch_os('execlp', create_warn_multiproc)
+ monkey_patch_os('execlpe', create_warn_multiproc)
+ monkey_patch_os('execv', create_warn_multiproc)
+ monkey_patch_os('execve', create_warn_multiproc)
+ monkey_patch_os('execvp', create_warn_multiproc)
+ monkey_patch_os('execvpe', create_warn_multiproc)
+ monkey_patch_os('spawnl', create_warn_multiproc)
+ monkey_patch_os('spawnle', create_warn_multiproc)
+ monkey_patch_os('spawnlp', create_warn_multiproc)
+ monkey_patch_os('spawnlpe', create_warn_multiproc)
+ monkey_patch_os('spawnv', create_warn_multiproc)
+ monkey_patch_os('spawnve', create_warn_multiproc)
+ monkey_patch_os('spawnvp', create_warn_multiproc)
+ monkey_patch_os('spawnvpe', create_warn_multiproc)
+
+ if sys.platform != 'win32':
+ monkey_patch_os('fork', create_warn_multiproc)
+ else:
+ #Windows
+ try:
+ import _subprocess
+ except ImportError:
+ import _winapi as _subprocess
+ monkey_patch_module(_subprocess, 'CreateProcess', create_CreateProcessWarnMultiproc)
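The wrappers above are installed over the real `os` functions so that child processes started via `exec*`/`spawn*`/`fork` also load the debugger. A minimal, self-contained sketch of the same getattr/setattr pattern (`patch_args`, `monkey_patch_os` and the harmless `demo_exec` target below are illustrative stand-ins, not pydev's actual helpers — calling a real `exec*` would replace the current process):

```python
import os

def patch_args(args):
    # Illustrative stand-in for pydev's patch_args: here it just tags the argv.
    return list(args) + ['--patched']

def create_execv(original_name):
    def new_execv(path, args):
        # Delegate to the saved original, with the rewritten argument list.
        return getattr(os, original_name)(path, patch_args(args))
    return new_execv

def monkey_patch_os(name, create_func):
    # Keep the original under a new name, then install the wrapper over it.
    original_name = 'original_' + name
    if not hasattr(os, original_name):
        setattr(os, original_name, getattr(os, name))
    setattr(os, name, create_func(original_name))

# Demo with a harmless stand-in instead of a real exec* call:
os.demo_exec = lambda path, args: (path, args)  # hypothetical target
monkey_patch_os('demo_exec', create_execv)
print(os.demo_exec('/bin/true', ['true']))  # ('/bin/true', ['true', '--patched'])
```

pydev applies this pattern to every `exec*`/`spawn*` variant and, on POSIX, to `os.fork`, where the wrapper calls `pydevd.settrace_forked()` in the child.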
diff --git a/python/helpers/pydev/pydevconsole.py b/python/helpers/pydev/pydevconsole.py
new file mode 100644
index 0000000..026fb7f
--- /dev/null
+++ b/python/helpers/pydev/pydevconsole.py
@@ -0,0 +1,443 @@
+try:
+ from code import InteractiveConsole
+except ImportError:
+ from pydevconsole_code_for_ironpython import InteractiveConsole
+
+from code import compile_command
+from code import InteractiveInterpreter
+
+import os
+import sys
+
+from pydevd_constants import USE_LIB_COPY
+from pydevd_utils import *
+
+if USE_LIB_COPY:
+ import _pydev_threading as threading
+else:
+ import threading
+
+import traceback
+import fix_getpass
+fix_getpass.fixGetpass()
+
+import pydevd_vars
+
+try:
+ from pydevd_exec import Exec
+except:
+ from pydevd_exec2 import Exec
+
+try:
+ if USE_LIB_COPY:
+ import _pydev_Queue as _queue
+ else:
+ import Queue as _queue
+except:
+ import queue as _queue
+
+try:
+ import __builtin__
+except:
+ import builtins as __builtin__
+
+try:
+ False
+ True
+except NameError: # version < 2.3 -- didn't have the True/False builtins
+ import __builtin__
+
+ setattr(__builtin__, 'True', 1) #Python 3.0 does not accept __builtin__.True = 1 in its syntax
+ setattr(__builtin__, 'False', 0)
+
+from pydev_console_utils import BaseInterpreterInterface
+
+IS_PYTHON_3K = False
+
+try:
+ if sys.version_info[0] == 3:
+ IS_PYTHON_3K = True
+except:
+ #That's OK, not all versions of python have sys.version_info
+ pass
+
+try:
+ try:
+ if USE_LIB_COPY:
+ import _pydev_xmlrpclib as xmlrpclib
+ else:
+ import xmlrpclib
+ except ImportError:
+ import xmlrpc.client as xmlrpclib
+except ImportError:
+ import _pydev_xmlrpclib as xmlrpclib
+
+try:
+ class ExecState:
+ FIRST_CALL = True
+ PYDEV_CONSOLE_RUN_IN_UI = False #Defines if we should run commands in the UI thread.
+
+ from org.python.pydev.core.uiutils import RunInUiThread #@UnresolvedImport
+ from java.lang import Runnable #@UnresolvedImport
+
+ class Command(Runnable):
+ def __init__(self, interpreter, line):
+ self.interpreter = interpreter
+ self.line = line
+
+ def run(self):
+ if ExecState.FIRST_CALL:
+ ExecState.FIRST_CALL = False
+ sys.stdout.write('\nYou are now in a console within Eclipse.\nUse it with care as it can halt the VM.\n')
+ sys.stdout.write(
+ 'Typing a line with "PYDEV_CONSOLE_TOGGLE_RUN_IN_UI"\nwill start executing all the commands in the UI thread.\n\n')
+
+ if self.line == 'PYDEV_CONSOLE_TOGGLE_RUN_IN_UI':
+ ExecState.PYDEV_CONSOLE_RUN_IN_UI = not ExecState.PYDEV_CONSOLE_RUN_IN_UI
+ if ExecState.PYDEV_CONSOLE_RUN_IN_UI:
+ sys.stdout.write(
+ 'Running commands in UI mode. WARNING: using sys.stdin (i.e.: calling raw_input()) WILL HALT ECLIPSE.\n')
+ else:
+ sys.stdout.write('No longer running commands in UI mode.\n')
+ self.more = False
+ else:
+ self.more = self.interpreter.push(self.line)
+
+
+ def Sync(runnable):
+ if ExecState.PYDEV_CONSOLE_RUN_IN_UI:
+ return RunInUiThread.sync(runnable)
+ else:
+ return runnable.run()
+
+except:
+ #If things are not there, define a way in which there's no 'real' sync, only the default execution.
+ class Command:
+ def __init__(self, interpreter, line):
+ self.interpreter = interpreter
+ self.line = line
+
+ def run(self):
+ self.more = self.interpreter.push(self.line)
+
+ def Sync(runnable):
+ runnable.run()
+
+try:
+ try:
+ execfile #Not in Py3k
+ except NameError:
+ from pydev_imports import execfile
+
+ __builtin__.execfile = execfile
+
+except:
+ pass
+
+
+#=======================================================================================================================
+# InterpreterInterface
+#=======================================================================================================================
+class InterpreterInterface(BaseInterpreterInterface):
+ '''
+ The methods in this class should be registered in the xml-rpc server.
+ '''
+
+ def __init__(self, host, client_port, mainThread):
+ BaseInterpreterInterface.__init__(self, mainThread)
+ self.client_port = client_port
+ self.host = host
+ self.namespace = {}
+ self.interpreter = InteractiveConsole(self.namespace)
+ self._input_error_printed = False
+
+
+ def doAddExec(self, line):
+ command = Command(self.interpreter, line)
+ Sync(command)
+ return command.more
+
+
+ def getNamespace(self):
+ return self.namespace
+
+
+ def getCompletions(self, text, act_tok):
+ try:
+ from _completer import Completer
+
+ completer = Completer(self.namespace, None)
+ return completer.complete(act_tok)
+ except:
+ import traceback
+
+ traceback.print_exc()
+ return []
+
+ def close(self):
+ sys.exit(0)
+
+ def get_greeting_msg(self):
+ return 'PyDev console: starting.\n'
+
+
+def process_exec_queue(interpreter):
+ while 1:
+ try:
+ try:
+ line = interpreter.exec_queue.get(block=True, timeout=0.05)
+ except _queue.Empty:
+ continue
+
+ if not interpreter.addExec(line): #TODO: think about locks here
+ interpreter.buffer = []
+ except KeyboardInterrupt:
+ interpreter.buffer = []
+ continue
+ except SystemExit:
+ raise
+ except:
+ type, value, tb = sys.exc_info()
+ traceback.print_exception(type, value, tb, file=sys.__stderr__)
+ exit()
+
+
+try:
+ try:
+ exitfunc = sys.exitfunc
+ except AttributeError:
+ exitfunc = None
+ from pydev_ipython_console import InterpreterInterface
+
+ IPYTHON = True
+ if exitfunc is not None:
+ sys.exitfunc = exitfunc
+
+ else:
+ try:
+ delattr(sys, 'exitfunc')
+ except:
+ pass
+except:
+ IPYTHON = False
+ #sys.stderr.write('PyDev console: started.\n')
+ pass #IPython not available, proceed as usual.
+
+#=======================================================================================================================
+# _DoExit
+#=======================================================================================================================
+def _DoExit(*args):
+ '''
+ We have to override the exit because calling sys.exit will only actually exit the main thread,
+ and as we're in an XML-RPC server, that won't work.
+ '''
+
+ try:
+ import java.lang.System
+
+ java.lang.System.exit(1)
+ except ImportError:
+ if len(args) == 1:
+ os._exit(args[0])
+ else:
+ os._exit(0)
+
+
+def handshake():
+ return "PyCharm"
+
+
+def ipython_editor(interpreter):
+ def editor(file, line):
+ if file is None:
+ file = ""
+ if line is None:
+ line = "-1"
+ interpreter.ipython_editor(file, line)
+
+ return editor
+
+#=======================================================================================================================
+# StartServer
+#=======================================================================================================================
+def start_server(host, port, interpreter):
+ if port == 0:
+ host = ''
+
+ from pydev_imports import SimpleXMLRPCServer
+
+ try:
+ server = SimpleXMLRPCServer((host, port), logRequests=False, allow_none=True)
+
+ except:
+ sys.stderr.write('Error starting server with host: %s, port: %s\n' % (host, port))
+ raise
+
+ server.register_function(interpreter.execLine)
+ server.register_function(interpreter.getCompletions)
+ server.register_function(interpreter.getFrame)
+ server.register_function(interpreter.getVariable)
+ server.register_function(interpreter.changeVariable)
+ server.register_function(interpreter.getDescription)
+ server.register_function(interpreter.close)
+ server.register_function(interpreter.interrupt)
+ server.register_function(handshake)
+
+ if IPYTHON:
+ try:
+ interpreter.interpreter.ipython.hooks.editor = ipython_editor(interpreter)
+ except:
+ pass
+
+ if port == 0:
+ (h, port) = server.socket.getsockname()
+
+ print(port)
+ print(client_port)
+
+
+ sys.stderr.write(interpreter.get_greeting_msg())
+
+ server.serve_forever()
+
+ return server
+
+
+def StartServer(host, port, client_port):
+ #replace exit (see comments on method)
+ #note that this does not work in jython!!! (sys method can't be replaced).
+ sys.exit = _DoExit
+
+ interpreter = InterpreterInterface(host, client_port, threading.currentThread())
+
+ server_thread = threading.Thread(target=start_server,
+ name='ServerThread',
+ args=(host, port, interpreter))
+ server_thread.setDaemon(True)
+ server_thread.start()
+
+ process_exec_queue(interpreter)
+
+
+def get_interpreter():
+ try:
+ interpreterInterface = getattr(__builtin__, 'interpreter')
+ except AttributeError:
+ interpreterInterface = InterpreterInterface(None, None, threading.currentThread())
+ setattr(__builtin__, 'interpreter', interpreterInterface)
+
+ return interpreterInterface
+
+
+def get_completions(text, token, globals, locals):
+ interpreterInterface = get_interpreter()
+
+ interpreterInterface.interpreter.update(globals, locals)
+
+ return interpreterInterface.getCompletions(text, token)
+
+def get_frame():
+ interpreterInterface = get_interpreter()
+
+ return interpreterInterface.getFrame()
+
+def exec_expression(expression, globals, locals):
+ interpreterInterface = get_interpreter()
+
+ interpreterInterface.interpreter.update(globals, locals)
+
+ res = interpreterInterface.needMore(None, expression)
+
+ if res:
+ return True
+
+ interpreterInterface.addExec(expression)
+
+ return False
+
+
+def read_line(s):
+ ret = ''
+
+ while True:
+ c = s.recv(1)
+
+ if c == '\n' or c == '':
+ break
+ else:
+ ret += c
+
+ return ret
+
+# Debugger integration
+
+class ConsoleWriter(InteractiveInterpreter):
+ skip = 0
+
+ def __init__(self, locals=None):
+ InteractiveInterpreter.__init__(self, locals)
+
+ def write(self, data):
+ #if (data.find("global_vars") == -1 and data.find("pydevd") == -1):
+ if self.skip > 0:
+ self.skip -= 1
+ else:
+ if data == "Traceback (most recent call last):\n":
+ self.skip = 1
+ sys.stderr.write(data)
+
+def consoleExec(thread_id, frame_id, expression):
+ """Returns True if the expression is incomplete and more input is required, False otherwise.
+ """
+ frame = pydevd_vars.findFrame(thread_id, frame_id)
+
+ expression = str(expression.replace('@LINE@', '\n'))
+
+ #Not using frame.f_globals because of https://sourceforge.net/tracker2/?func=detail&aid=2541355&group_id=85796&atid=577329
+ #(Names not resolved in generator expression in method)
+ #See message: http://mail.python.org/pipermail/python-list/2009-January/526522.html
+ updated_globals = {}
+ updated_globals.update(frame.f_globals)
+ updated_globals.update(frame.f_locals) #locals later because it has precedence over the actual globals
+
+ if IPYTHON:
+ return exec_expression(expression, updated_globals, frame.f_locals)
+
+ interpreter = ConsoleWriter()
+
+ try:
+ code = compile_command(expression)
+ except (OverflowError, SyntaxError, ValueError):
+ # Case 1
+ interpreter.showsyntaxerror()
+ return False
+
+ if code is None:
+ # Case 2
+ return True
+
+ #Case 3
+
+ try:
+ Exec(code, updated_globals, frame.f_locals)
+
+ except SystemExit:
+ raise
+ except:
+ interpreter.showtraceback()
+
+ return False
+
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+
+
+if __name__ == '__main__':
+ port, client_port = sys.argv[1:3]
+ import pydev_localhost
+
+ if int(port) == 0 and int(client_port) == 0:
+ (h, p) = pydev_localhost.get_socket_name()
+
+ client_port = p
+
+ StartServer(pydev_localhost.get_localhost(), int(port), int(client_port))
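`start_server` above publishes the interpreter over XML-RPC and, when port 0 was requested, prints the OS-assigned port back so the IDE can connect. A minimal sketch of that handshake round-trip, assuming Python 3 module paths (pydev resolves the Python 2/3 names through `pydev_imports` instead):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def handshake():
    return "PyCharm"

# Bind to port 0 so the OS picks a free port, as start_server does.
server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False, allow_none=True)
server.register_function(handshake)
_, port = server.socket.getsockname()

t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

# The client calls handshake() to verify it is talking to the console server.
proxy = ServerProxy('http://127.0.0.1:%s' % port)
result = proxy.handshake()
print(result)

server.shutdown()
```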
diff --git a/python/helpers/pydev/pydevconsole_code_for_ironpython.py b/python/helpers/pydev/pydevconsole_code_for_ironpython.py
new file mode 100644
index 0000000..71346cc
--- /dev/null
+++ b/python/helpers/pydev/pydevconsole_code_for_ironpython.py
@@ -0,0 +1,513 @@
+"""Utilities needed to emulate Python's interactive interpreter.
+
+"""
+
+# Inspired by similar code by Jeff Epler and Fredrik Lundh.
+
+
+import sys
+import traceback
+
+
+
+
+
+
+
+
+#START --------------------------- from codeop import CommandCompiler, compile_command
+r"""Utilities to compile possibly incomplete Python source code.
+
+This module provides two interfaces, broadly similar to the builtin
+function compile(), which take program text, a filename and a 'mode'
+and:
+
+- Return code object if the command is complete and valid
+- Return None if the command is incomplete
+- Raise SyntaxError, ValueError or OverflowError if the command is a
+ syntax error (OverflowError and ValueError can be produced by
+ malformed literals).
+
+Approach:
+
+First, check if the source consists entirely of blank lines and
+comments; if so, replace it with 'pass', because the built-in
+parser doesn't always do the right thing for these.
+
+Compile three times: as is, with \n, and with \n\n appended. If it
+compiles as is, it's complete. If it compiles with one \n appended,
+we expect more. If it doesn't compile either way, we compare the
+error we get when compiling with \n or \n\n appended. If the errors
+are the same, the code is broken. But if the errors are different, we
+expect more. Not intuitive; not even guaranteed to hold in future
+releases; but this matches the compiler's behavior from Python 1.4
+through 2.2, at least.
+
+Caveat:
+
+It is possible (but not likely) that the parser stops parsing with a
+successful outcome before reaching the end of the source; in this
+case, trailing symbols may be ignored instead of causing an error.
+For example, a backslash followed by two newlines may be followed by
+arbitrary garbage. This will be fixed once the API for the parser is
+better.
+
+The two interfaces are:
+
+compile_command(source, filename, symbol):
+
+ Compiles a single command in the manner described above.
+
+CommandCompiler():
+
+ Instances of this class have __call__ methods identical in
+ signature to compile_command; the difference is that if the
+ instance compiles program text containing a __future__ statement,
+ the instance 'remembers' and compiles all subsequent program texts
+ with the statement in force.
+
+The module also provides another class:
+
+Compile():
+
+ Instances of this class act like the built-in function compile,
+ but with 'memory' in the sense described above.
+"""
+
+import __future__
+
+_features = [getattr(__future__, fname)
+ for fname in __future__.all_feature_names]
+
+__all__ = ["compile_command", "Compile", "CommandCompiler"]
+
+PyCF_DONT_IMPLY_DEDENT = 0x200 # Matches pythonrun.h
+
+def _maybe_compile(compiler, source, filename, symbol):
+ # Check for source consisting of only blank lines and comments
+ for line in source.split("\n"):
+ line = line.strip()
+ if line and line[0] != '#':
+ break # Leave it alone
+ else:
+ if symbol != "eval":
+ source = "pass" # Replace it with a 'pass' statement
+
+ err = err1 = err2 = None
+ code = code1 = code2 = None
+
+ try:
+ code = compiler(source, filename, symbol)
+ except SyntaxError, err:
+ pass
+
+ try:
+ code1 = compiler(source + "\n", filename, symbol)
+ except SyntaxError, err1:
+ pass
+
+ try:
+ code2 = compiler(source + "\n\n", filename, symbol)
+ except SyntaxError, err2:
+ pass
+
+ if code:
+ return code
+ if not code1 and repr(err1) == repr(err2):
+ raise SyntaxError, err1
+
+def _compile(source, filename, symbol):
+ return compile(source, filename, symbol, PyCF_DONT_IMPLY_DEDENT)
+
+def compile_command(source, filename="<input>", symbol="single"):
+ r"""Compile a command and determine whether it is incomplete.
+
+ Arguments:
+
+ source -- the source string; may contain \n characters
+ filename -- optional filename from which source was read; default
+ "<input>"
+ symbol -- optional grammar start symbol; "single" (default) or "eval"
+
+ Return value / exceptions raised:
+
+ - Return a code object if the command is complete and valid
+ - Return None if the command is incomplete
+ - Raise SyntaxError, ValueError or OverflowError if the command is a
+ syntax error (OverflowError and ValueError can be produced by
+ malformed literals).
+ """
+ return _maybe_compile(_compile, source, filename, symbol)
+
+class Compile:
+ """Instances of this class behave much like the built-in compile
+ function, but if one is used to compile text containing a future
+ statement, it "remembers" and compiles all subsequent program texts
+ with the statement in force."""
+ def __init__(self):
+ self.flags = PyCF_DONT_IMPLY_DEDENT
+
+ def __call__(self, source, filename, symbol):
+ codeob = compile(source, filename, symbol, self.flags, 1)
+ for feature in _features:
+ if codeob.co_flags & feature.compiler_flag:
+ self.flags |= feature.compiler_flag
+ return codeob
+
+class CommandCompiler:
+ """Instances of this class have __call__ methods identical in
+ signature to compile_command; the difference is that if the
+ instance compiles program text containing a __future__ statement,
+ the instance 'remembers' and compiles all subsequent program texts
+ with the statement in force."""
+
+ def __init__(self,):
+ self.compiler = Compile()
+
+ def __call__(self, source, filename="<input>", symbol="single"):
+ r"""Compile a command and determine whether it is incomplete.
+
+ Arguments:
+
+ source -- the source string; may contain \n characters
+ filename -- optional filename from which source was read;
+ default "<input>"
+ symbol -- optional grammar start symbol; "single" (default) or
+ "eval"
+
+ Return value / exceptions raised:
+
+ - Return a code object if the command is complete and valid
+ - Return None if the command is incomplete
+ - Raise SyntaxError, ValueError or OverflowError if the command is a
+ syntax error (OverflowError and ValueError can be produced by
+ malformed literals).
+ """
+ return _maybe_compile(self.compiler, source, filename, symbol)
+
+#END --------------------------- from codeop import CommandCompiler, compile_command
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+__all__ = ["InteractiveInterpreter", "InteractiveConsole", "interact",
+ "compile_command"]
+
+def softspace(file, newvalue):
+ oldvalue = 0
+ try:
+ oldvalue = file.softspace
+ except AttributeError:
+ pass
+ try:
+ file.softspace = newvalue
+ except (AttributeError, TypeError):
+ # "attribute-less object" or "read-only attributes"
+ pass
+ return oldvalue
+
+class InteractiveInterpreter:
+ """Base class for InteractiveConsole.
+
+ This class deals with parsing and interpreter state (the user's
+ namespace); it doesn't deal with input buffering or prompting or
+ input file naming (the filename is always passed in explicitly).
+
+ """
+
+ def __init__(self, locals=None):
+ """Constructor.
+
+ The optional 'locals' argument specifies the dictionary in
+ which code will be executed; it defaults to a newly created
+ dictionary with key "__name__" set to "__console__" and key
+ "__doc__" set to None.
+
+ """
+ if locals is None:
+ locals = {"__name__": "__console__", "__doc__": None}
+ self.locals = locals
+ self.compile = CommandCompiler()
+
+ def runsource(self, source, filename="<input>", symbol="single"):
+ """Compile and run some source in the interpreter.
+
+ Arguments are as for compile_command().
+
+ One of several things can happen:
+
+ 1) The input is incorrect; compile_command() raised an
+ exception (SyntaxError or OverflowError). A syntax traceback
+ will be printed by calling the showsyntaxerror() method.
+
+ 2) The input is incomplete, and more input is required;
+ compile_command() returned None. Nothing happens.
+
+ 3) The input is complete; compile_command() returned a code
+ object. The code is executed by calling self.runcode() (which
+ also handles run-time exceptions, except for SystemExit).
+
+ The return value is True in case 2, False in the other cases (unless
+ an exception is raised). The return value can be used to
+ decide whether to use sys.ps1 or sys.ps2 to prompt the next
+ line.
+
+ """
+ try:
+ code = self.compile(source, filename, symbol)
+ except (OverflowError, SyntaxError, ValueError):
+ # Case 1
+ self.showsyntaxerror(filename)
+ return False
+
+ if code is None:
+ # Case 2
+ return True
+
+ # Case 3
+ self.runcode(code)
+ return False
+
+ def runcode(self, code):
+ """Execute a code object.
+
+ When an exception occurs, self.showtraceback() is called to
+ display a traceback. All exceptions are caught except
+ SystemExit, which is reraised.
+
+ A note about KeyboardInterrupt: this exception may occur
+ elsewhere in this code, and may not always be caught. The
+ caller should be prepared to deal with it.
+
+ """
+ try:
+ exec code in self.locals
+ except SystemExit:
+ raise
+ except:
+ self.showtraceback()
+ else:
+ if softspace(sys.stdout, 0):
+ sys.stdout.write('\n')
+
+ def showsyntaxerror(self, filename=None):
+ """Display the syntax error that just occurred.
+
+ This doesn't display a stack trace because there isn't one.
+
+ If a filename is given, it is stuffed in the exception instead
+ of what was there before (because Python's parser always uses
+ "<string>" when reading from a string).
+
+ The output is written by self.write(), below.
+
+ """
+ type, value, sys.last_traceback = sys.exc_info()
+ sys.last_type = type
+ sys.last_value = value
+ if filename and type is SyntaxError:
+ # Work hard to stuff the correct filename in the exception
+ try:
+ msg, (dummy_filename, lineno, offset, line) = value
+ except:
+ # Not the format we expect; leave it alone
+ pass
+ else:
+ # Stuff in the right filename
+ value = SyntaxError(msg, (filename, lineno, offset, line))
+ sys.last_value = value
+ list = traceback.format_exception_only(type, value)
+ map(self.write, list)
+
+ def showtraceback(self):
+ """Display the exception that just occurred.
+
+ We remove the first stack item because it is our own code.
+
+ The output is written by self.write(), below.
+
+ """
+ try:
+ type, value, tb = sys.exc_info()
+ sys.last_type = type
+ sys.last_value = value
+ sys.last_traceback = tb
+ tblist = traceback.extract_tb(tb)
+ del tblist[:1]
+ list = traceback.format_list(tblist)
+ if list:
+ list.insert(0, "Traceback (most recent call last):\n")
+ list[len(list):] = traceback.format_exception_only(type, value)
+ finally:
+ tblist = tb = None
+ map(self.write, list)
+
+ def write(self, data):
+ """Write a string.
+
+ The base implementation writes to sys.stderr; a subclass may
+ replace this with a different implementation.
+
+ """
+ sys.stderr.write(data)
+
+
+class InteractiveConsole(InteractiveInterpreter):
+ """Closely emulate the behavior of the interactive Python interpreter.
+
+ This class builds on InteractiveInterpreter and adds prompting
+ using the familiar sys.ps1 and sys.ps2, and input buffering.
+
+ """
+
+ def __init__(self, locals=None, filename="<console>"):
+ """Constructor.
+
+ The optional locals argument will be passed to the
+ InteractiveInterpreter base class.
+
+ The optional filename argument should specify the (file)name
+ of the input stream; it will show up in tracebacks.
+
+ """
+ InteractiveInterpreter.__init__(self, locals)
+ self.filename = filename
+ self.resetbuffer()
+
+ def resetbuffer(self):
+ """Reset the input buffer."""
+ self.buffer = []
+
+ def interact(self, banner=None):
+ """Closely emulate the interactive Python console.
+
+ The optional banner argument specifies the banner to print
+ before the first interaction; by default it prints a banner
+ similar to the one printed by the real Python interpreter,
+ followed by the current class name in parentheses (so as not
+ to confuse this with the real interpreter -- since it's so
+ close!).
+
+ """
+ try:
+ sys.ps1 #@UndefinedVariable
+ except AttributeError:
+ sys.ps1 = ">>> "
+ try:
+ sys.ps2 #@UndefinedVariable
+ except AttributeError:
+ sys.ps2 = "... "
+ cprt = 'Type "help", "copyright", "credits" or "license" for more information.'
+ if banner is None:
+ self.write("Python %s on %s\n%s\n(%s)\n" %
+ (sys.version, sys.platform, cprt,
+ self.__class__.__name__))
+ else:
+ self.write("%s\n" % str(banner))
+ more = 0
+ while 1:
+ try:
+ if more:
+ prompt = sys.ps2 #@UndefinedVariable
+ else:
+ prompt = sys.ps1 #@UndefinedVariable
+ try:
+ line = self.raw_input(prompt)
+ # Can be None if sys.stdin was redefined
+ encoding = getattr(sys.stdin, "encoding", None)
+ if encoding and not isinstance(line, unicode):
+ line = line.decode(encoding)
+ except EOFError:
+ self.write("\n")
+ break
+ else:
+ more = self.push(line)
+ except KeyboardInterrupt:
+ self.write("\nKeyboardInterrupt\n")
+ self.resetbuffer()
+ more = 0
+
+ def push(self, line):
+ """Push a line to the interpreter.
+
+ The line should not have a trailing newline; it may have
+ internal newlines. The line is appended to a buffer and the
+ interpreter's runsource() method is called with the
+ concatenated contents of the buffer as source. If this
+ indicates that the command was executed or invalid, the buffer
+ is reset; otherwise, the command is incomplete, and the buffer
+ is left as it was after the line was appended. The return
+ value is 1 if more input is required, 0 if the line was dealt
+ with in some way (this is the same as runsource()).
+
+ """
+ self.buffer.append(line)
+ source = "\n".join(self.buffer)
+ more = self.runsource(source, self.filename)
+ if not more:
+ self.resetbuffer()
+ return more
+
+ def raw_input(self, prompt=""):
+ """Write a prompt and read a line.
+
+ The returned line does not include the trailing newline.
+ When the user enters the EOF key sequence, EOFError is raised.
+
+ The base implementation uses the built-in function
+ raw_input(); a subclass may replace this with a different
+ implementation.
+
+ """
+ return raw_input(prompt)
+
+
+def interact(banner=None, readfunc=None, local=None):
+ """Closely emulate the interactive Python interpreter.
+
+ This is a backwards compatible interface to the InteractiveConsole
+ class. When readfunc is not specified, it attempts to import the
+ readline module to enable GNU readline if it is available.
+
+ Arguments (all optional, all default to None):
+
+ banner -- passed to InteractiveConsole.interact()
+ readfunc -- if not None, replaces InteractiveConsole.raw_input()
+ local -- passed to InteractiveInterpreter.__init__()
+
+ """
+ console = InteractiveConsole(local)
+ if readfunc is not None:
+ console.raw_input = readfunc
+ else:
+ try:
+ import readline
+ except ImportError:
+ pass
+ console.interact(banner)
+
+
+if __name__ == '__main__':
+ import pdb
+ pdb.run("interact()\n")
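The inlined `codeop` copy above preserves the stdlib's three-outcome contract (code object / `None` / exception). The behavior it mirrors can be seen with the standard `codeop.compile_command`:

```python
from codeop import compile_command

# Complete, valid input: a code object is returned.
code = compile_command("a = 1 + 1")
assert code is not None

# Incomplete input: None signals that more lines are expected.
assert compile_command("if True:") is None

# Invalid input: a SyntaxError is raised immediately.
try:
    compile_command("a = )")
except SyntaxError:
    print("syntax error reported")
```

These are exactly the cases the console's `push()`/`runsource()` loop dispatches on when deciding whether to prompt with `sys.ps1` or `sys.ps2`.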
diff --git a/python/helpers/pydev/pydevd.py b/python/helpers/pydev/pydevd.py
new file mode 100644
index 0000000..031d6a1
--- /dev/null
+++ b/python/helpers/pydev/pydevd.py
@@ -0,0 +1,1532 @@
+#IMPORTANT: pydevd_constants must be the 1st thing defined because it'll keep a reference to the original sys._getframe
+from django_debug import DjangoLineBreakpoint
+from pydevd_signature import SignatureFactory
+from pydevd_frame import add_exception_to_frame
+from pydevd_constants import * #@UnusedWildImport
+import pydev_imports
+from pydevd_breakpoints import * #@UnusedWildImport
+import fix_getpass
+
+from pydevd_comm import CMD_CHANGE_VARIABLE, \
+ CMD_EVALUATE_EXPRESSION, \
+ CMD_EXEC_EXPRESSION, \
+ CMD_GET_COMPLETIONS, \
+ CMD_GET_FRAME, \
+ CMD_GET_VARIABLE, \
+ CMD_LIST_THREADS, \
+ CMD_REMOVE_BREAK, \
+ CMD_RUN, \
+ CMD_SET_BREAK, \
+ CMD_SET_NEXT_STATEMENT,\
+ CMD_STEP_INTO, \
+ CMD_STEP_OVER, \
+ CMD_STEP_RETURN, \
+ CMD_THREAD_CREATE, \
+ CMD_THREAD_KILL, \
+ CMD_THREAD_RUN, \
+ CMD_THREAD_SUSPEND, \
+ CMD_RUN_TO_LINE, \
+ CMD_RELOAD_CODE, \
+ CMD_VERSION, \
+ CMD_CONSOLE_EXEC, \
+ CMD_ADD_EXCEPTION_BREAK, \
+ CMD_REMOVE_EXCEPTION_BREAK, \
+ CMD_LOAD_SOURCE, \
+ CMD_ADD_DJANGO_EXCEPTION_BREAK, \
+ CMD_REMOVE_DJANGO_EXCEPTION_BREAK, \
+ CMD_SMART_STEP_INTO,\
+ InternalChangeVariable, \
+ InternalGetCompletions, \
+ InternalEvaluateExpression, \
+ InternalConsoleExec, \
+ InternalGetFrame, \
+ InternalGetVariable, \
+ InternalTerminateThread, \
+ InternalRunThread, \
+ InternalStepThread, \
+ NetCommand, \
+ NetCommandFactory, \
+ PyDBDaemonThread, \
+ _queue, \
+ ReaderThread, \
+ SetGlobalDebugger, \
+ WriterThread, \
+ PydevdFindThreadById, \
+ PydevdLog, \
+ StartClient, \
+ StartServer, \
+ InternalSetNextStatementThread
+
+from pydevd_file_utils import NormFileToServer, GetFilenameAndBase
+import pydevd_file_utils
+import pydevd_vars
+import traceback
+import pydevd_vm_type
+import pydevd_tracing
+import pydevd_io
+import pydev_monkey
+from pydevd_additional_thread_info import PyDBAdditionalThreadInfo
+
+if USE_LIB_COPY:
+ import _pydev_time as time
+ import _pydev_threading as threading
+else:
+ import time
+ import threading
+
+import os
+
+
+threadingEnumerate = threading.enumerate
+threadingCurrentThread = threading.currentThread
+
+
+DONT_TRACE = {
+ #commonly used things from the stdlib that we don't want to trace
+ 'threading.py':1,
+ 'Queue.py':1,
+ 'queue.py':1,
+ 'socket.py':1,
+
+ #things from pydev that we don't want to trace
+ 'pydevd_additional_thread_info.py':1,
+ 'pydevd_comm.py':1,
+ 'pydevd_constants.py':1,
+ 'pydevd_exec.py':1,
+ 'pydevd_exec2.py':1,
+ 'pydevd_file_utils.py':1,
+ 'pydevd_frame.py':1,
+ 'pydevd_io.py':1 ,
+ 'pydevd_resolver.py':1 ,
+ 'pydevd_tracing.py':1 ,
+ 'pydevd_signature.py':1,
+ 'pydevd_utils.py':1,
+ 'pydevd_vars.py':1,
+ 'pydevd_vm_type.py':1,
+ 'pydevd.py':1 ,
+ 'pydevd_psyco_stub.py':1,
+ '_pydev_execfile.py':1,
+ '_pydev_jython_execfile.py':1
+ }
+
+if IS_PY3K:
+ #if we try to trace io.py it seems it can get halted (see http://bugs.python.org/issue4716)
+ DONT_TRACE['io.py'] = 1
+
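`DONT_TRACE` above is keyed by file basename: the debugger's trace dispatch returns early for frames originating in these files so stdlib and pydev internals stay invisible. A minimal sketch of that filtering idea with `sys.settrace` (`SKIP_FILES` and the demo function are illustrative, not pydevd's real dispatch):

```python
import os
import sys

SKIP_FILES = {'threading.py', 'queue.py'}  # analogous to DONT_TRACE

traced = []

def trace(frame, event, arg):
    # Skip frames whose file basename is in the skip table.
    if os.path.basename(frame.f_code.co_filename) in SKIP_FILES:
        return None  # returning None disables tracing inside this frame
    if event == 'call':
        traced.append(frame.f_code.co_name)
    return trace

def user_func():
    return 42

sys.settrace(trace)
user_func()
sys.settrace(None)
print(traced)
```

Keeping the table a dict (or set) makes the per-call lookup O(1), which matters because the trace function runs on every function call in the debugged program.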
+
+connected = False
+bufferStdOutToServer = False
+bufferStdErrToServer = False
+remote = False
+
+PyDBUseLocks = True
+
+
+#=======================================================================================================================
+# PyDBCommandThread
+#=======================================================================================================================
+class PyDBCommandThread(PyDBDaemonThread):
+
+ def __init__(self, pyDb):
+ PyDBDaemonThread.__init__(self)
+ self.pyDb = pyDb
+ self.setName('pydevd.CommandThread')
+
+ def OnRun(self):
+ for i in range(1, 10):
+ time.sleep(0.5) #this one will only start later on (because otherwise we may not have any non-daemon threads)
+ if self.killReceived:
+ return
+
+ if self.dontTraceMe:
+ self.pyDb.SetTrace(None) # no debugging on this thread
+
+ try:
+ while not self.killReceived:
+ try:
+ self.pyDb.processInternalCommands()
+ except:
+ PydevdLog(0, 'Finishing debug communication...(2)')
+ time.sleep(0.5)
+ except:
+ pydev_log.debug(sys.exc_info()[0])
+
+ #only got this error in interpreter shutdown
+ #PydevdLog(0, 'Finishing debug communication...(3)')
+
+
+def killAllPydevThreads():
+ threads = threadingEnumerate()
+ for t in threads:
+ if hasattr(t, 'doKillPydevThread'):
+ t.doKillPydevThread()
+
+
+#=======================================================================================================================
+# PyDBCheckAliveThread
+#=======================================================================================================================
+class PyDBCheckAliveThread(PyDBDaemonThread):
+
+ def __init__(self, pyDb):
+ PyDBDaemonThread.__init__(self)
+ self.pyDb = pyDb
+ self.setDaemon(False)
+ self.setName('pydevd.CheckAliveThread')
+
+ def OnRun(self):
+ if self.dontTraceMe:
+ self.pyDb.SetTrace(None) # no debugging on this thread
+ while not self.killReceived:
+ if not self.pyDb.haveAliveThreads():
+ try:
+ pydev_log.debug("No alive threads, finishing debug session")
+ self.pyDb.FinishDebuggingSession()
+ killAllPydevThreads()
+ except:
+ traceback.print_exc()
+
+ self.stop()
+ self.killReceived = True
+ return
+
+ time.sleep(0.3)
+
+ def doKillPydevThread(self):
+ pass
+
+if USE_LIB_COPY:
+ import _pydev_thread as thread
+else:
+ try:
+ import thread
+ except ImportError:
+ import _thread as thread #Py3K changed it.
+
+_original_start_new_thread = thread.start_new_thread
+
+if getattr(thread, '_original_start_new_thread', None) is None:
+ thread._original_start_new_thread = thread.start_new_thread
+
+#=======================================================================================================================
+# NewThreadStartup
+#=======================================================================================================================
+class NewThreadStartup:
+
+ def __init__(self, original_func, args, kwargs):
+ self.original_func = original_func
+ self.args = args
+ self.kwargs = kwargs
+
+ def __call__(self):
+ global_debugger = GetGlobalDebugger()
+ if global_debugger is not None:
+ global_debugger.SetTrace(global_debugger.trace_dispatch)
+ return self.original_func(*self.args, **self.kwargs)
+
+thread.NewThreadStartup = NewThreadStartup
+
+#=======================================================================================================================
+# pydev_start_new_thread
+#=======================================================================================================================
+def _pydev_start_new_thread(function, args, kwargs={}):
+ '''
+ We need to replace the original thread.start_new_thread with this function so that threads started through
+ it and not through the threading module are properly traced.
+ '''
+ if USE_LIB_COPY:
+ import _pydev_thread as thread
+ else:
+ try:
+ import thread
+ except ImportError:
+ import _thread as thread #Py3K changed it.
+
+ return thread._original_start_new_thread(thread.NewThreadStartup(function, args, kwargs), ())
+
+class PydevStartNewThread(object):
+ def __get__(self, obj, type=None):
+ return self
+
+ def __call__(self, function, args, kwargs={}):
+ return _pydev_start_new_thread(function, args, kwargs)
+
+pydev_start_new_thread = PydevStartNewThread()
+
+#=======================================================================================================================
+# PyDB
+#=======================================================================================================================
+class PyDB:
+ """ Main debugging class
+ Lots of stuff going on here:
+
+ PyDB starts two threads on startup that connect to the remote debugger (RDB).
+ The threads continuously read & write commands to/from the RDB.
+ PyDB communicates with these threads through command queues.
+ Every RDB command is processed by calling processNetCommand.
+ Every PyDB net command is sent to the net by posting a NetCommand to the WriterThread queue.
+
+ Some commands need to be executed on the right thread (suspend/resume & friends);
+ these are placed on the internal command queue.
+ """
+
+ RUNNING_THREAD_IDS = {} #this is a dict of thread ids pointing to thread ids. Whenever a command
+ #is passed to the java end that acknowledges that a thread was created,
+ #the thread id should be passed here -- and if at some time we do not find
+ #that thread alive anymore, we must remove it from this list and make
+ #the java side know that the thread was killed.
+
+ def __init__(self):
+ SetGlobalDebugger(self)
+ pydevd_tracing.ReplaceSysSetTraceFunc()
+ self.reader = None
+ self.writer = None
+ self.quitting = None
+ self.cmdFactory = NetCommandFactory()
+ self._cmd_queue = {} # dict of queues; key is thread id, value is the internal command queue for that thread
+ self.breakpoints = {}
+ self.django_breakpoints = {}
+ self.exception_set = {}
+ self.always_exception_set = set()
+ self.django_exception_break = {}
+ self.readyToRun = False
+ self._main_lock = threading.Lock()
+ self._lock_running_thread_ids = threading.Lock()
+ self.lock = threading.Lock() #used by acquire()/release()
+ self._finishDebuggingSession = False
+ self._terminationEventSent = False
+ self.force_post_mortem_stop = 0
+ self.signature_factory = None
+ self.SetTrace = pydevd_tracing.SetTrace
+
+ #this is a dict of thread ids pointing to thread ids. Whenever a command is passed to the java end that
+ #acknowledges that a thread was created, the thread id should be passed here -- and if at some time we do not
+ #find that thread alive anymore, we must remove it from this list and make the java side know that the thread
+ #was killed.
+ self._running_thread_ids = {}
+
+ def haveAliveThreads(self):
+ for t in threadingEnumerate():
+ if not isinstance(t, PyDBDaemonThread) and t.isAlive() and not t.isDaemon():
+ return True
+
+ return False
+
+ def FinishDebuggingSession(self):
+ self._finishDebuggingSession = True
+
+ def acquire(self):
+ if PyDBUseLocks:
+ self.lock.acquire()
+ return True
+
+ def release(self):
+ if PyDBUseLocks:
+ self.lock.release()
+ return True
+
+ def initializeNetwork(self, sock):
+ try:
+ sock.settimeout(None) # infinite, no timeouts from now on - jython does not have it
+ except:
+ pass
+ self.writer = WriterThread(sock)
+ self.reader = ReaderThread(sock)
+ self.writer.start()
+ self.reader.start()
+
+ time.sleep(0.1) # give threads time to start
+
+ def connect(self, host, port):
+ if host:
+ s = StartClient(host, port)
+ else:
+ s = StartServer(port)
+
+ self.initializeNetwork(s)
+
+
+ def getInternalQueue(self, thread_id):
+ """ returns the internal command queue for the given thread,
+ creating and registering it if it does not exist yet """
+ try:
+ return self._cmd_queue[thread_id]
+ except KeyError:
+ return self._cmd_queue.setdefault(thread_id, _queue.Queue()) #@UndefinedVariable
+
+
+ def postInternalCommand(self, int_cmd, thread_id):
+ """ if thread_id is *, post to all """
+ if thread_id == "*":
+ for k in self._cmd_queue.keys():
+ self._cmd_queue[k].put(int_cmd)
+
+ else:
+ queue = self.getInternalQueue(thread_id)
+ queue.put(int_cmd)
+
+ def checkOutputRedirect(self):
+ global bufferStdOutToServer
+ global bufferStdErrToServer
+
+ if bufferStdOutToServer:
+ initStdoutRedirect()
+ self.checkOutput(sys.stdoutBuf, 1) #@UndefinedVariable
+
+ if bufferStdErrToServer:
+ initStderrRedirect()
+ self.checkOutput(sys.stderrBuf, 2) #@UndefinedVariable
+
+ def checkOutput(self, out, outCtx):
+ '''Checks the output to see if we have to send some buffered output to the debug server
+
+ @param out: sys.stdout or sys.stderr
+ @param outCtx: the context: 1=stdout, 2=stderr (so the client knows how to color the output)
+ '''
+
+ try:
+ v = out.getvalue()
+
+ if v:
+ self.cmdFactory.makeIoMessage(v, outCtx, self)
+ except:
+ traceback.print_exc()
+
+
+ def processInternalCommands(self):
+ '''This function processes internal commands
+ '''
+ curr_thread_id = GetThreadId(threadingCurrentThread())
+ program_threads_alive = {}
+ all_threads = threadingEnumerate()
+ program_threads_dead = []
+
+
+ self._main_lock.acquire()
+ try:
+
+ self.checkOutputRedirect()
+
+ self._lock_running_thread_ids.acquire()
+ try:
+ for t in all_threads:
+ thread_id = GetThreadId(t)
+
+ if not isinstance(t, PyDBDaemonThread) and t.isAlive():
+ program_threads_alive[thread_id] = t
+
+ if not DictContains(self._running_thread_ids, thread_id):
+ if not hasattr(t, 'additionalInfo'):
+ #see http://sourceforge.net/tracker/index.php?func=detail&aid=1955428&group_id=85796&atid=577329
+ #Let's create the additional info right away!
+ t.additionalInfo = PyDBAdditionalThreadInfo()
+ self._running_thread_ids[thread_id] = t
+ self.writer.addCommand(self.cmdFactory.makeThreadCreatedMessage(t))
+
+
+ queue = self.getInternalQueue(thread_id)
+ cmdsToReadd = [] #some commands must be processed by the thread itself... if that's the case,
+ #we will re-add the commands to the queue after executing.
+ try:
+ while True:
+ int_cmd = queue.get(False)
+ if int_cmd.canBeExecutedBy(curr_thread_id):
+ PydevdLog(2, "processing internal command ", str(int_cmd))
+ int_cmd.doIt(self)
+ else:
+ PydevdLog(2, "NOT processing internal command ", str(int_cmd))
+ cmdsToReadd.append(int_cmd)
+
+ except _queue.Empty: #@UndefinedVariable
+ for int_cmd in cmdsToReadd:
+ queue.put(int_cmd)
+ # this is how we exit
+
+
+ thread_ids = list(self._running_thread_ids.keys())
+ for tId in thread_ids:
+ if not DictContains(program_threads_alive, tId):
+ program_threads_dead.append(tId)
+ finally:
+ self._lock_running_thread_ids.release()
+
+ for tId in program_threads_dead:
+ try:
+ self.processThreadNotAlive(tId)
+ except:
+ sys.stderr.write('Error iterating through %s (%s) - %s\n' % (
+ program_threads_alive, program_threads_alive.__class__, dir(program_threads_alive)))
+ raise
+
+
+ if len(program_threads_alive) == 0:
+ self.FinishDebuggingSession()
+ for t in all_threads:
+ if hasattr(t, 'doKillPydevThread'):
+ t.doKillPydevThread()
+
+ finally:
+ self._main_lock.release()
+
+
+ def setTracingForUntracedContexts(self):
+ #Enable the tracing for existing threads (because there may be frames being executed that
+ #are currently untraced).
+ threads = threadingEnumerate()
+ for t in threads:
+ if not t.getName().startswith('pydevd.'):
+ #TODO: optimize so that we only actually add that tracing if it's in
+ #the new breakpoint context.
+ additionalInfo = None
+ try:
+ additionalInfo = t.additionalInfo
+ except AttributeError:
+ pass #that's ok, no info currently set
+
+ if additionalInfo is not None:
+ for frame in additionalInfo.IterFrames():
+ self.SetTraceForFrameAndParents(frame)
+ del frame
+
+
+ def processNetCommand(self, cmd_id, seq, text):
+ '''Processes a command received from the Java side
+
+ @param cmd_id: the id of the command
+ @param seq: the sequence of the command
+ @param text: the text received in the command
+
+ @note: this method is run as a big if/elif switch. After some tests, it's not clear whether changing it to
+ a dict of cmd_id --> function would perform better: a simple test with xrange(10000000) showed that the
+ gain from fast dispatch is lost to the extra function call. With about 10 choices the if/elif chain wins,
+ but with more choices the dict dispatch looks better, so if this grows past 20-25 choices it may be worth
+ refactoring (or simply reordering the ifs so that the most frequently used commands come first, which
+ would probably give better performance).
+ '''
+
+ self._main_lock.acquire()
+ try:
+ try:
+ cmd = None
+ if cmd_id == CMD_RUN:
+ self.readyToRun = True
+
+ elif cmd_id == CMD_VERSION:
+ # response is version number
+ local_version, pycharm_os = text.split('\t', 1)
+
+ pydevd_file_utils.set_pycharm_os(pycharm_os)
+
+ cmd = self.cmdFactory.makeVersionMessage(seq)
+
+ elif cmd_id == CMD_LIST_THREADS:
+ # response is a list of threads
+ cmd = self.cmdFactory.makeListThreadsMessage(seq)
+
+ elif cmd_id == CMD_THREAD_KILL:
+ int_cmd = InternalTerminateThread(text)
+ self.postInternalCommand(int_cmd, text)
+
+ elif cmd_id == CMD_THREAD_SUSPEND:
+ #Yes, thread suspend is still done at this point, not through an internal command!
+ t = PydevdFindThreadById(text)
+ if t:
+ additionalInfo = None
+ try:
+ additionalInfo = t.additionalInfo
+ except AttributeError:
+ pass #that's ok, no info currently set
+
+ if additionalInfo is not None:
+ for frame in additionalInfo.IterFrames():
+ self.SetTraceForFrameAndParents(frame)
+ del frame
+
+ self.setSuspend(t, CMD_THREAD_SUSPEND)
+
+ elif cmd_id == CMD_THREAD_RUN:
+ t = PydevdFindThreadById(text)
+ if t:
+ thread_id = GetThreadId(t)
+ int_cmd = InternalRunThread(thread_id)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ elif cmd_id == CMD_STEP_INTO or cmd_id == CMD_STEP_OVER or cmd_id == CMD_STEP_RETURN:
+ #we received some command to make a single step
+ t = PydevdFindThreadById(text)
+ if t:
+ thread_id = GetThreadId(t)
+ int_cmd = InternalStepThread(thread_id, cmd_id)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ elif cmd_id == CMD_RUN_TO_LINE or cmd_id == CMD_SET_NEXT_STATEMENT or cmd_id == CMD_SMART_STEP_INTO:
+ #we received some command to make a single step
+ thread_id, line, func_name = text.split('\t', 2)
+ t = PydevdFindThreadById(thread_id)
+ if t:
+ int_cmd = InternalSetNextStatementThread(thread_id, cmd_id, line, func_name)
+ self.postInternalCommand(int_cmd, thread_id)
+
+
+ elif cmd_id == CMD_RELOAD_CODE:
+ #we received some command to make a reload of a module
+ module_name = text.strip()
+ from pydevd_reload import xreload
+ if not DictContains(sys.modules, module_name):
+ if '.' in module_name:
+ new_module_name = module_name.split('.')[-1]
+ if DictContains(sys.modules, new_module_name):
+ module_name = new_module_name
+
+ if not DictContains(sys.modules, module_name):
+ sys.stderr.write('pydev debugger: Unable to find module to reload: "'+module_name+'".\n')
+ sys.stderr.write('pydev debugger: This usually means you are trying to reload the __main__ module (which cannot be reloaded).\n')
+ sys.stderr.flush()
+
+ else:
+ sys.stderr.write('pydev debugger: Reloading: '+module_name+'\n')
+ sys.stderr.flush()
+ xreload(sys.modules[module_name])
+
+
+ elif cmd_id == CMD_CHANGE_VARIABLE:
+ #the text is: thread\tstackframe\tFRAME|GLOBAL\tattribute_to_change\tvalue_to_change
+ try:
+ thread_id, frame_id, scope, attr_and_value = text.split('\t', 3)
+
+ tab_index = attr_and_value.rindex('\t')
+ attr = attr_and_value[0:tab_index].replace('\t', '.')
+ value = attr_and_value[tab_index + 1:]
+ int_cmd = InternalChangeVariable(seq, thread_id, frame_id, scope, attr, value)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ except:
+ traceback.print_exc()
+
+ elif cmd_id == CMD_GET_VARIABLE:
+ #we received some command to get a variable
+ #the text is: thread_id\tframe_id\tFRAME|GLOBAL\tattributes*
+ try:
+ thread_id, frame_id, scopeattrs = text.split('\t', 2)
+
+ if scopeattrs.find('\t') != -1: # there are attributes beyond scope
+ scope, attrs = scopeattrs.split('\t', 1)
+ else:
+ scope, attrs = (scopeattrs, None)
+
+ int_cmd = InternalGetVariable(seq, thread_id, frame_id, scope, attrs)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ except:
+ traceback.print_exc()
+
+ elif cmd_id == CMD_GET_COMPLETIONS:
+ #we received some command to get a variable
+ #the text is: thread_id\tframe_id\tactivation token
+ try:
+ thread_id, frame_id, scope, act_tok = text.split('\t', 3)
+
+ int_cmd = InternalGetCompletions(seq, thread_id, frame_id, act_tok)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ except:
+ traceback.print_exc()
+
+ elif cmd_id == CMD_GET_FRAME:
+ thread_id, frame_id, scope = text.split('\t', 2)
+
+ int_cmd = InternalGetFrame(seq, thread_id, frame_id)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ elif cmd_id == CMD_SET_BREAK:
+ #func name: 'None': match anything. Empty: match global, specified: only method context.
+
+ #command to add some breakpoint.
+ # text is file\tline. Add to breakpoints dictionary
+ type, file, line, condition, expression = text.split('\t', 4)
+
+ if condition.startswith('**FUNC**'):
+ func_name, condition = condition.split('\t', 1)
+
+ #We must restore new lines and tabs as done in
+ #AbstractDebugTarget.breakpointAdded
+ condition = condition.replace("@_@NEW_LINE_CHAR@_@", '\n').\
+ replace("@_@TAB_CHAR@_@", '\t').strip()
+
+ func_name = func_name[8:]
+ else:
+ func_name = 'None' #Match anything if not specified.
+
+
+ file = NormFileToServer(file)
+
+ if not pydevd_file_utils.exists(file):
+ sys.stderr.write('pydev debugger: warning: trying to add breakpoint'\
+ ' to file that does not exist: %s (will have no effect)\n' % (file,))
+ sys.stderr.flush()
+
+ line = int(line)
+
+ if condition is None or len(condition) <= 0 or condition == "None":
+ condition = None
+
+ if expression is None or len(expression) <= 0 or expression == "None":
+ expression = None
+
+ if type == 'python-line':
+ breakpoint = LineBreakpoint(type, True, condition, func_name, expression)
+ breakpoint.add(self.breakpoints, file, line, func_name)
+ elif type == 'django-line':
+ breakpoint = DjangoLineBreakpoint(type, file, line, True, condition, func_name, expression)
+ breakpoint.add(self.django_breakpoints, file, line, func_name)
+ else:
+ raise NameError(type)
+
+ self.setTracingForUntracedContexts()
+
+ elif cmd_id == CMD_REMOVE_BREAK:
+ #command to remove some breakpoint
+ #text is file\tline. Remove from breakpoints dictionary
+ type, file, line = text.split('\t', 2)
+ file = NormFileToServer(file)
+ try:
+ line = int(line)
+ except ValueError:
+ pass
+
+ else:
+ found = False
+ try:
+ if type == 'django-line':
+ del self.django_breakpoints[file][line]
+ elif type == 'python-line':
+ del self.breakpoints[file][line] #remove the breakpoint in that line
+ else:
+ try:
+ del self.django_breakpoints[file][line]
+ found = True
+ except:
+ pass
+ try:
+ del self.breakpoints[file][line] #remove the breakpoint in that line
+ found = True
+ except:
+ pass
+
+ if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+ sys.stderr.write('Removed breakpoint:%s - %s\n' % (file, line))
+ sys.stderr.flush()
+ except KeyError:
+ found = False
+
+ if not found:
+ #ok, it's not there...
+ if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+ #Sometimes, when adding a breakpoint, it adds a remove command before (don't really know why)
+ sys.stderr.write("breakpoint not found: %s - %s\n" % (file, line))
+ sys.stderr.flush()
+
+ elif cmd_id == CMD_EVALUATE_EXPRESSION or cmd_id == CMD_EXEC_EXPRESSION:
+ #command to evaluate the given expression
+ #text is: thread\tstackframe\tLOCAL\texpression
+ thread_id, frame_id, scope, expression, trim = text.split('\t', 4)
+ int_cmd = InternalEvaluateExpression(seq, thread_id, frame_id, expression,
+ cmd_id == CMD_EXEC_EXPRESSION, int(trim) == 1)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ elif cmd_id == CMD_CONSOLE_EXEC:
+ #command to exec expression in console, in case expression is only partially valid 'False' is returned
+ #text is: thread\tstackframe\tLOCAL\texpression
+
+ thread_id, frame_id, scope, expression = text.split('\t', 3)
+
+ int_cmd = InternalConsoleExec(seq, thread_id, frame_id, expression)
+ self.postInternalCommand(int_cmd, thread_id)
+
+ elif cmd_id == CMD_ADD_EXCEPTION_BREAK:
+ exception, notify_always, notify_on_terminate = text.split('\t', 2)
+
+ eb = ExceptionBreakpoint(exception, notify_always, notify_on_terminate)
+
+ self.exception_set[exception] = eb
+
+ if eb.notify_on_terminate:
+ update_exception_hook(self)
+ if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+ pydev_log.error("Exceptions to hook on terminate: %s\n" % (self.exception_set,))
+
+ if eb.notify_always:
+ self.always_exception_set.add(exception)
+ if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+ pydev_log.error("Exceptions to hook always: %s\n" % (self.always_exception_set,))
+ self.setTracingForUntracedContexts()
+
+ elif cmd_id == CMD_REMOVE_EXCEPTION_BREAK:
+ exception = text
+ try:
+ del self.exception_set[exception]
+ self.always_exception_set.remove(exception)
+ except:
+ pass
+ update_exception_hook(self)
+
+ elif cmd_id == CMD_LOAD_SOURCE:
+ path = text
+ try:
+ f = open(path, 'r')
+ try:
+ source = f.read()
+ finally:
+ f.close()
+ self.cmdFactory.makeLoadSourceMessage(seq, source, self)
+ except:
+ cmd = self.cmdFactory.makeErrorMessage(seq, pydevd_tracing.GetExceptionTracebackStr())
+
+ elif cmd_id == CMD_ADD_DJANGO_EXCEPTION_BREAK:
+ exception = text
+
+ self.django_exception_break[exception] = True
+ self.setTracingForUntracedContexts()
+
+ elif cmd_id == CMD_REMOVE_DJANGO_EXCEPTION_BREAK:
+ exception = text
+
+ try:
+ del self.django_exception_break[exception]
+ except:
+ pass
+
+ else:
+ #I have no idea what this is all about
+ cmd = self.cmdFactory.makeErrorMessage(seq, "unexpected command " + str(cmd_id))
+
+ if cmd is not None:
+ self.writer.addCommand(cmd)
+ del cmd
+
+ except Exception:
+ traceback.print_exc()
+ cmd = self.cmdFactory.makeErrorMessage(seq,
+ "Unexpected exception in processNetCommand.\nInitial params: %s" % ((cmd_id, seq, text),))
+
+ self.writer.addCommand(cmd)
+ finally:
+ self._main_lock.release()
+
+ def processThreadNotAlive(self, threadId):
+ """ if thread is not alive, cancel trace_dispatch processing """
+ self._lock_running_thread_ids.acquire()
+ try:
+ thread = self._running_thread_ids.pop(threadId, None)
+ if thread is None:
+ return
+
+ wasNotified = thread.additionalInfo.pydev_notify_kill
+ if not wasNotified:
+ thread.additionalInfo.pydev_notify_kill = True
+
+ finally:
+ self._lock_running_thread_ids.release()
+
+ cmd = self.cmdFactory.makeThreadKilledMessage(threadId)
+ self.writer.addCommand(cmd)
+
+
+ def setSuspend(self, thread, stop_reason):
+ thread.additionalInfo.suspend_type = PYTHON_SUSPEND
+ thread.additionalInfo.pydev_state = STATE_SUSPEND
+ thread.stop_reason = stop_reason
+
+
+ def doWaitSuspend(self, thread, frame, event, arg): #@UnusedVariable
+ """ busy waits until the thread state changes to RUN.
+ it expects the thread's state as attributes of the thread.
+ Upon resuming, processes any outstanding stepping commands.
+ """
+ self.processInternalCommands()
+
+ message = getattr(thread.additionalInfo, "message", None)
+
+ cmd = self.cmdFactory.makeThreadSuspendMessage(GetThreadId(thread), frame, thread.stop_reason, message)
+ self.writer.addCommand(cmd)
+
+ info = thread.additionalInfo
+
+ while info.pydev_state == STATE_SUSPEND and not self._finishDebuggingSession:
+ self.processInternalCommands()
+ time.sleep(0.01)
+
+ #process any stepping instructions
+ if info.pydev_step_cmd == CMD_STEP_INTO:
+ info.pydev_step_stop = None
+ info.pydev_smart_step_stop = None
+
+ elif info.pydev_step_cmd == CMD_STEP_OVER:
+ info.pydev_step_stop = frame
+ info.pydev_smart_step_stop = None
+ self.SetTraceForFrameAndParents(frame)
+
+ elif info.pydev_step_cmd == CMD_SMART_STEP_INTO:
+ self.SetTraceForFrameAndParents(frame)
+ info.pydev_step_stop = None
+ info.pydev_smart_step_stop = frame
+
+ elif info.pydev_step_cmd == CMD_RUN_TO_LINE or info.pydev_step_cmd == CMD_SET_NEXT_STATEMENT:
+ self.SetTraceForFrameAndParents(frame)
+
+ if event == 'line' or event == 'exception':
+ #If we're already in the correct context, we have to stop it now, because we can act only on
+ #line events -- if a return was the next statement it wouldn't work (so, we have this code
+ #repeated at pydevd_frame).
+ stop = False
+ curr_func_name = frame.f_code.co_name
+
+ #global context is set with an empty name
+ if curr_func_name in ('?', '<module>'):
+ curr_func_name = ''
+
+ if curr_func_name == info.pydev_func_name:
+ line = info.pydev_next_line
+ if frame.f_lineno == line:
+ stop = True
+ else:
+ if frame.f_trace is None:
+ frame.f_trace = self.trace_dispatch
+ frame.f_lineno = line
+ frame.f_trace = None
+ stop = True
+ if stop:
+ info.pydev_state = STATE_SUSPEND
+ self.doWaitSuspend(thread, frame, event, arg)
+ return
+
+
+ elif info.pydev_step_cmd == CMD_STEP_RETURN:
+ back_frame = frame.f_back
+ if back_frame is not None:
+ #steps back to the same frame (in a return call it will stop in the 'back frame' for the user)
+ info.pydev_step_stop = frame
+ self.SetTraceForFrameAndParents(frame)
+ else:
+ #No back frame?!? -- this happens in jython when we have some frame created from an awt event
+ #(the previous frame would be the awt event, but this doesn't make part of 'jython', only 'java')
+ #so, if we're doing a step return in this situation, it's the same as just making it run
+ info.pydev_step_stop = None
+ info.pydev_step_cmd = None
+ info.pydev_state = STATE_RUN
+
+ del frame
+ cmd = self.cmdFactory.makeThreadRunMessage(GetThreadId(thread), info.pydev_step_cmd)
+ self.writer.addCommand(cmd)
+
+
+ def handle_post_mortem_stop(self, additionalInfo, t):
+ pydev_log.debug("We are stopping in post-mortem\n")
+ self.force_post_mortem_stop -= 1
+ frame, frames_byid = additionalInfo.pydev_force_stop_at_exception
+ thread_id = GetThreadId(t)
+ pydevd_vars.addAdditionalFrameById(thread_id, frames_byid)
+ try:
+ try:
+ add_exception_to_frame(frame, additionalInfo.exception)
+ self.setSuspend(t, CMD_ADD_EXCEPTION_BREAK)
+ self.doWaitSuspend(t, frame, 'exception', None)
+ except:
+ pydev_log.error("We've got an error while stopping in post-mortem: %s\n" % sys.exc_info()[0])
+ finally:
+ additionalInfo.pydev_force_stop_at_exception = None
+ pydevd_vars.removeAdditionalFrameById(thread_id)
+
+ def trace_dispatch(self, frame, event, arg):
+ ''' This is the callback used when we enter some context in the debugger.
+
+ We also decorate the thread we are in with info about the debugging.
+ The attributes added are:
+ pydev_state
+ pydev_step_stop
+ pydev_step_cmd
+ pydev_notify_kill
+ '''
+ try:
+ if self._finishDebuggingSession and not self._terminationEventSent:
+ #that was not working very well because jython gave some socket errors
+ t = threadingCurrentThread()
+ try:
+ threads = threadingEnumerate()
+ for t in threads:
+ if hasattr(t, 'doKillPydevThread'):
+ t.doKillPydevThread()
+ except:
+ traceback.print_exc()
+ self._terminationEventSent = True
+ return None
+
+ filename, base = GetFilenameAndBase(frame)
+
+ is_file_to_ignore = DictContains(DONT_TRACE, base) #we don't want to debug threading or anything related to pydevd
+
+ if is_file_to_ignore:
+ return None
+
+ #print('trace_dispatch', base, frame.f_lineno, event, frame.f_code.co_name)
+ try:
+ #this shouldn't give an exception, but it could happen... (python bug)
+ #see http://mail.python.org/pipermail/python-bugs-list/2007-June/038796.html
+ #and related bug: http://bugs.python.org/issue1733757
+ t = threadingCurrentThread()
+ except:
+ frame.f_trace = self.trace_dispatch
+ return self.trace_dispatch
+
+ try:
+ additionalInfo = t.additionalInfo
+ if additionalInfo is None:
+ raise AttributeError()
+ except:
+ t.additionalInfo = PyDBAdditionalThreadInfo()
+ additionalInfo = t.additionalInfo
+
+ if additionalInfo is None:
+ return None
+
+ if additionalInfo.is_tracing:
+ f = frame
+ while f is not None:
+ fname, bs = GetFilenameAndBase(f)
+ if bs == 'pydevd_frame.py':
+ if 'trace_dispatch' == f.f_code.co_name:
+ return None #we don't want to trace code invoked from pydevd_frame.trace_dispatch
+ f = f.f_back
+
+ # if thread is not alive, cancel trace_dispatch processing
+ if not t.isAlive():
+ self.processThreadNotAlive(GetThreadId(t))
+ return None # suspend tracing
+
+ if is_file_to_ignore:
+ return None
+
+ #each new frame...
+ return additionalInfo.CreateDbFrame((self, filename, additionalInfo, t, frame)).trace_dispatch(frame, event, arg)
+
+ except SystemExit:
+ return None
+
+ except TypeError:
+ return None
+
+ except Exception:
+ #Log it
+ if traceback is not None:
+ #This can actually happen during the interpreter shutdown in Python 2.7
+ traceback.print_exc()
+ return None
+
+ if USE_PSYCO_OPTIMIZATION:
+ try:
+ import psyco
+ trace_dispatch = psyco.proxy(trace_dispatch)
+ processNetCommand = psyco.proxy(processNetCommand)
+ processInternalCommands = psyco.proxy(processInternalCommands)
+ doWaitSuspend = psyco.proxy(doWaitSuspend)
+ getInternalQueue = psyco.proxy(getInternalQueue)
+ except ImportError:
+ if hasattr(sys, 'exc_clear'): #jython does not have it
+ sys.exc_clear() #don't keep the traceback (let's keep it clear for when we go to the point of executing client code)
+
+ if not IS_PY3K and not IS_PY27 and not IS_64_BITS and not sys.platform.startswith("java") and not sys.platform.startswith("cli"):
+ sys.stderr.write("pydev debugger: warning: psyco not available for speedups (the debugger will still work correctly, but a bit slower)\n")
+ sys.stderr.flush()
+
+
+
+ def SetTraceForFrameAndParents(self, frame, also_add_to_passed_frame=True, overwrite_prev=False):
+ dispatch_func = self.trace_dispatch
+
+ if also_add_to_passed_frame:
+ self.update_trace(frame, dispatch_func, overwrite_prev)
+
+ frame = frame.f_back
+ while frame:
+ self.update_trace(frame, dispatch_func, overwrite_prev)
+
+ frame = frame.f_back
+ del frame
+
+ def update_trace(self, frame, dispatch_func, overwrite_prev):
+ if frame.f_trace is None:
+ frame.f_trace = dispatch_func
+ else:
+ if overwrite_prev:
+ frame.f_trace = dispatch_func
+ else:
+ try:
+ #If it's the trace_exception, go back to the frame trace dispatch!
+ if frame.f_trace.im_func.__name__ == 'trace_exception':
+ frame.f_trace = frame.f_trace.im_self.trace_dispatch
+ except AttributeError:
+ pass
+ frame = frame.f_back
+ del frame
+
+
+
+ def run(self, file, globals=None, locals=None):
+
+ if globals is None:
+ #patch provided by: Scott Schlesier - when script is run, it does not
+ #use globals from pydevd:
+ #This will prevent the pydevd script from contaminating the namespace for the script to be debugged
+
+ #pretend pydevd is not the main module, and
+ #convince the file to be debugged that it was loaded as main
+ sys.modules['pydevd'] = sys.modules['__main__']
+ sys.modules['pydevd'].__name__ = 'pydevd'
+
+ from imp import new_module
+ m = new_module('__main__')
+ sys.modules['__main__'] = m
+ m.__file__ = file
+ globals = m.__dict__
+ try:
+ globals['__builtins__'] = __builtins__
+ except NameError:
+ pass #Not there on Jython...
+
+ if locals is None:
+ locals = globals
+
+ #Predefined (writable) attributes: __name__ is the module's name;
+ #__doc__ is the module's documentation string, or None if unavailable;
+ #__file__ is the pathname of the file from which the module was loaded,
+ #if it was loaded from a file. The __file__ attribute is not present for
+ #C modules that are statically linked into the interpreter; for extension modules
+ #loaded dynamically from a shared library, it is the pathname of the shared library file.
+
+
+ #I think this is an ugly hack, but it works (seems to) for the bug that says that sys.path should be the same in
+ #debug and run.
+ if m.__file__.startswith(sys.path[0]):
+ #print >> sys.stderr, 'Deleting: ', sys.path[0]
+ del sys.path[0]
+
+ #now, the local directory has to be added to the pythonpath
+ #sys.path.insert(0, os.getcwd())
+ #Changed: it's not the local directory, but the directory of the file launched
+ #The file being run must be in the pythonpath (even if it was not before)
+ sys.path.insert(0, os.path.split(file)[0])
+
+ # for completeness, we'll register the pydevd.reader & pydevd.writer threads
+ net = NetCommand(str(CMD_THREAD_CREATE), 0, '<xml><thread name="pydevd.reader" id="-1"/></xml>')
+ self.writer.addCommand(net)
+ net = NetCommand(str(CMD_THREAD_CREATE), 0, '<xml><thread name="pydevd.writer" id="-1"/></xml>')
+ self.writer.addCommand(net)
+
+ pydevd_tracing.SetTrace(self.trace_dispatch)
+ try:
+ #not available in jython!
+ threading.settrace(self.trace_dispatch) # for all future threads
+ except:
+ pass
+
+ try:
+ thread.start_new_thread = pydev_start_new_thread
+ thread.start_new = pydev_start_new_thread
+ except:
+ pass
+
+ while not self.readyToRun:
+ time.sleep(0.1) # busy wait until we receive run command
+
+ PyDBCommandThread(debugger).start()
+ PyDBCheckAliveThread(debugger).start()
+
+ if pydevd_vm_type.GetVmType() == pydevd_vm_type.PydevdVmType.JYTHON and sys.version_info[1] == 5 and sys.version_info[2] >= 3:
+ from _pydev_jython_execfile import jython_execfile
+ jython_execfile(sys.argv)
+ else:
+ pydev_imports.execfile(file, globals, locals) #execute the script
+
+ def exiting(self):
+ sys.stdout.flush()
+ sys.stderr.flush()
+ self.checkOutputRedirect()
+ cmd = self.cmdFactory.makeExitMessage()
+ self.writer.addCommand(cmd)
+
+def set_debug(setup):
+ setup['DEBUG_RECORD_SOCKET_READS'] = True
+ setup['DEBUG_TRACE_BREAKPOINTS'] = 1
+ setup['DEBUG_TRACE_LEVEL'] = 3
+
+
+def processCommandLine(argv):
+ """ parses the arguments.
+ removes our arguments from the command line """
+ setup = {}
+ setup['client'] = ''
+ setup['server'] = False
+ setup['port'] = 0
+ setup['file'] = ''
+ setup['multiproc'] = False
+ setup['save-signatures'] = False
+ i = 0
+ del argv[0]
+ while (i < len(argv)):
+ if (argv[i] == '--port'):
+ del argv[i]
+ setup['port'] = int(argv[i])
+ del argv[i]
+ elif (argv[i] == '--vm_type'):
+ del argv[i]
+ setup['vm_type'] = argv[i]
+ del argv[i]
+ elif (argv[i] == '--client'):
+ del argv[i]
+ setup['client'] = argv[i]
+ del argv[i]
+ elif (argv[i] == '--server'):
+ del argv[i]
+ setup['server'] = True
+ elif (argv[i] == '--file'):
+ del argv[i]
+ setup['file'] = argv[i]
+ i = len(argv) # pop out, file is our last argument
+ elif (argv[i] == '--DEBUG_RECORD_SOCKET_READS'):
+ del argv[i]
+ setup['DEBUG_RECORD_SOCKET_READS'] = True
+ elif (argv[i] == '--DEBUG'):
+ del argv[i]
+ set_debug(setup)
+ elif (argv[i] == '--multiproc'):
+ del argv[i]
+ setup['multiproc'] = True
+ elif (argv[i] == '--save-signatures'):
+ del argv[i]
+ setup['save-signatures'] = True
+ else:
+ raise ValueError("unexpected option " + argv[i])
+ return setup
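The parser above uses a consuming pattern: each recognized flag and its value are deleted from `argv` in place, so by the time `--file` is reached the debugged script sees only its own arguments. A distilled, self-contained sketch of that pattern (a subset of the options; names mirror the ones above but this is illustrative, not the real `processCommandLine`):

```python
def parse_consuming(argv):
    """Distilled consuming parse: recognized flags and their values are
    deleted from argv so the debugged script sees a clean command line."""
    setup = {'port': 0, 'client': '', 'file': ''}
    del argv[0]  # drop the program name, as processCommandLine does
    i = 0
    while i < len(argv):
        if argv[i] == '--port':
            del argv[i]                 # remove the flag...
            setup['port'] = int(argv[i])
            del argv[i]                 # ...and its value
        elif argv[i] == '--client':
            del argv[i]
            setup['client'] = argv[i]
            del argv[i]
        elif argv[i] == '--file':
            del argv[i]
            setup['file'] = argv[i]
            i = len(argv)               # --file is the last debugger argument
        else:
            raise ValueError("unexpected option " + argv[i])
    return setup

argv = ['pydevd.py', '--port', '5678', '--client', '127.0.0.1', '--file', 'app.py']
setup = parse_consuming(argv)
```

Note that `'app.py'` is deliberately left in `argv`: everything from `--file`'s value onward belongs to the target script.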
+
+def usage(doExit=0):
+ sys.stdout.write('Usage:\n')
+    sys.stdout.write('pydevd.py --port N [(--client hostname) | --server] --file executable [file_options]\n')
+ if doExit:
+ sys.exit(0)
+
+def SetTraceForParents(frame, dispatch_func):
+ frame = frame.f_back
+ while frame:
+ if frame.f_trace is None:
+ frame.f_trace = dispatch_func
+
+ frame = frame.f_back
+ del frame
+
+def exit_hook():
+    debugger = GetGlobalDebugger()
+    if debugger is not None:
+        debugger.exiting()
+
+def initStdoutRedirect():
+ if not getattr(sys, 'stdoutBuf', None):
+ sys.stdoutBuf = pydevd_io.IOBuf()
+ sys.stdout = pydevd_io.IORedirector(sys.stdout, sys.stdoutBuf) #@UndefinedVariable
+
+def initStderrRedirect():
+ if not getattr(sys, 'stderrBuf', None):
+ sys.stderrBuf = pydevd_io.IOBuf()
+ sys.stderr = pydevd_io.IORedirector(sys.stderr, sys.stderrBuf) #@UndefinedVariable
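`initStdoutRedirect` and `initStderrRedirect` replace the standard streams with a tee: writes still reach the original stream but are also accumulated in a buffer the debug server can drain. The real classes live in `pydevd_io`; the sketch below only illustrates the shape (class names here are illustrative, not the actual `pydevd_io` API):

```python
class Buf(object):
    """Accumulates everything written, in the spirit of pydevd_io.IOBuf."""
    def __init__(self):
        self.chunks = []
    def write(self, s):
        self.chunks.append(s)
    def getvalue(self):
        return ''.join(self.chunks)

class Redirector(object):
    """Tees writes to the original stream and to the buffer,
    in the spirit of pydevd_io.IORedirector."""
    def __init__(self, original, buf):
        self.original = original
        self.buf = buf
    def write(self, s):
        self.original.write(s)  # output still appears where it used to
        self.buf.write(s)       # ...and is captured for the debug server

original = Buf()  # stands in for the real sys.stdout
buf = Buf()
tee = Redirector(original, buf)
tee.write('hello ')
tee.write('world')
```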
+
+def settrace(host='localhost', stdoutToServer=False, stderrToServer=False, port=5678, suspend=True, trace_only_current_thread=False, overwrite_prev_trace=False):
+ '''Sets the tracing function with the pydev debug function and initializes needed facilities.
+
+    @param host: the user may specify another host, if the debug server is not on the same machine
+    @param stdoutToServer: when this is true, the stdout is passed to the debug server
+    @param stderrToServer: when this is true, the stderr is passed to the debug server
+        so that they are printed in its console and not in this process console.
+    @param port: specifies which port to use for communicating with the server (note that the server must be started
+        on the same port). @note: currently it's hard-coded at 5678 in the client
+ @param suspend: whether a breakpoint should be emulated as soon as this function is called.
+ @param trace_only_current_thread: determines if only the current thread will be traced or all future threads will also have the tracing enabled.
+ '''
+ _set_trace_lock.acquire()
+ try:
+ _locked_settrace(host, stdoutToServer, stderrToServer, port, suspend, trace_only_current_thread, overwrite_prev_trace)
+ finally:
+ _set_trace_lock.release()
+
+_set_trace_lock = threading.Lock()
+
+def _locked_settrace(host, stdoutToServer, stderrToServer, port, suspend, trace_only_current_thread, overwrite_prev_trace):
+ if host is None:
+ import pydev_localhost
+ host = pydev_localhost.get_localhost()
+
+ global connected
+ global bufferStdOutToServer
+ global bufferStdErrToServer
+ global remote
+
+ remote = True
+
+ if not connected :
+ connected = True
+ bufferStdOutToServer = stdoutToServer
+ bufferStdErrToServer = stderrToServer
+
+ pydevd_vm_type.SetupType()
+
+ debugger = PyDB()
+ debugger.connect(host, port)
+
+ net = NetCommand(str(CMD_THREAD_CREATE), 0, '<xml><thread name="pydevd.reader" id="-1"/></xml>')
+ debugger.writer.addCommand(net)
+ net = NetCommand(str(CMD_THREAD_CREATE), 0, '<xml><thread name="pydevd.writer" id="-1"/></xml>')
+ debugger.writer.addCommand(net)
+
+ if bufferStdOutToServer:
+ initStdoutRedirect()
+
+ if bufferStdErrToServer:
+ initStderrRedirect()
+
+ debugger.SetTraceForFrameAndParents(GetFrame(), False, overwrite_prev=overwrite_prev_trace)
+
+ t = threadingCurrentThread()
+ try:
+ additionalInfo = t.additionalInfo
+ except AttributeError:
+ additionalInfo = PyDBAdditionalThreadInfo()
+ t.additionalInfo = additionalInfo
+
+ while not debugger.readyToRun:
+ time.sleep(0.1) # busy wait until we receive run command
+
+ if suspend:
+ debugger.setSuspend(t, CMD_SET_BREAK)
+
+        #note that we do this through pydevd_tracing.SetTrace so that the user
+        #is not warned about the tracing!
+ pydevd_tracing.SetTrace(debugger.trace_dispatch)
+
+ if not trace_only_current_thread:
+ #Trace future threads?
+ try:
+ #not available in jython!
+ threading.settrace(debugger.trace_dispatch) # for all future threads
+ except:
+ pass
+
+ try:
+ thread.start_new_thread = pydev_start_new_thread
+ thread.start_new = pydev_start_new_thread
+ except:
+ pass
+
+ sys.exitfunc = exit_hook
+
+ PyDBCommandThread(debugger).start()
+ PyDBCheckAliveThread(debugger).start()
+
+ else:
+        #ok, we're already in debug mode, everything is set up, so let's just set the break
+ debugger = GetGlobalDebugger()
+
+ debugger.SetTraceForFrameAndParents(GetFrame(), False)
+
+ t = threadingCurrentThread()
+ try:
+ additionalInfo = t.additionalInfo
+ except AttributeError:
+ additionalInfo = PyDBAdditionalThreadInfo()
+ t.additionalInfo = additionalInfo
+
+ pydevd_tracing.SetTrace(debugger.trace_dispatch)
+
+ if not trace_only_current_thread:
+ #Trace future threads?
+ try:
+ #not available in jython!
+ threading.settrace(debugger.trace_dispatch) # for all future threads
+ except:
+ pass
+
+ try:
+ thread.start_new_thread = pydev_start_new_thread
+ thread.start_new = pydev_start_new_thread
+ except:
+ pass
+
+ if suspend:
+ debugger.setSuspend(t, CMD_SET_BREAK)
+
+def stoptrace():
+ global connected
+ if connected:
+ pydevd_tracing.RestoreSysSetTraceFunc()
+ sys.settrace(None)
+ try:
+ #not available in jython!
+ threading.settrace(None) # for all future threads
+ except:
+ pass
+
+ try:
+ thread.start_new_thread = _original_start_new_thread
+ thread.start_new = _original_start_new_thread
+ except:
+ pass
+
+ debugger = GetGlobalDebugger()
+
+ if debugger:
+ debugger.trace_dispatch = None
+
+ debugger.SetTraceForFrameAndParents(GetFrame(), False)
+
+ debugger.exiting()
+
+ killAllPydevThreads()
+
+ connected = False
+
+class Dispatcher(object):
+ def __init__(self):
+ self.port = None
+
+ def connect(self, host, port):
+ self.host = host
+ self.port = port
+ self.client = StartClient(self.host, self.port)
+ self.reader = DispatchReader(self)
+        self.reader.dontTraceMe = False #we run the reader in the same thread so we don't want to lose tracing
+ self.reader.run()
+
+ def close(self):
+ try:
+ self.reader.doKillPydevThread()
+ except :
+ pass
+
+class DispatchReader(ReaderThread):
+ def __init__(self, dispatcher):
+ self.dispatcher = dispatcher
+ ReaderThread.__init__(self, self.dispatcher.client)
+
+ def handleExcept(self):
+ ReaderThread.handleExcept(self)
+
+ def processCommand(self, cmd_id, seq, text):
+ if cmd_id == 99:
+ self.dispatcher.port = int(text)
+ self.killReceived = True
+
+
+def dispatch():
+ argv = sys.original_argv[:]
+ setup = processCommandLine(argv)
+ host = setup['client']
+ port = setup['port']
+ dispatcher = Dispatcher()
+ try:
+ dispatcher.connect(host, port)
+ port = dispatcher.port
+ finally:
+ dispatcher.close()
+ return host, port
+
+
+def settrace_forked():
+ host, port = dispatch()
+
+ import pydevd_tracing
+ pydevd_tracing.RestoreSysSetTraceFunc()
+
+ if port is not None:
+ global connected
+ connected = False
+ settrace(host, port=port, suspend=False, overwrite_prev_trace=True)
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+if __name__ == '__main__':
+ # parse the command line. --file is our last argument that is required
+ try:
+ sys.original_argv = sys.argv[:]
+ setup = processCommandLine(sys.argv)
+ except ValueError:
+ traceback.print_exc()
+ usage(1)
+
+
+    #since by the time we get here all our imports are already resolved, the psyco module can be
+ #changed and we'll still get the speedups in the debugger, as those functions
+ #are already compiled at this time.
+ try:
+ import psyco
+ except ImportError:
+ if hasattr(sys, 'exc_clear'): #jython does not have it
+ sys.exc_clear() #don't keep the traceback -- clients don't want to see it
+ pass #that's ok, no need to mock psyco if it's not available anyways
+ else:
+ #if it's available, let's change it for a stub (pydev already made use of it)
+ import pydevd_psyco_stub
+ sys.modules['psyco'] = pydevd_psyco_stub
+
+ fix_getpass.fixGetpass()
+
+
+ pydev_log.debug("Executing file %s" % setup['file'])
+ pydev_log.debug("arguments: %s"% str(sys.argv))
+
+
+ pydevd_vm_type.SetupType(setup.get('vm_type', None))
+
+ if os.getenv('PYCHARM_DEBUG'):
+ set_debug(setup)
+
+ DebugInfoHolder.DEBUG_RECORD_SOCKET_READS = setup.get('DEBUG_RECORD_SOCKET_READS', False)
+ DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS = setup.get('DEBUG_TRACE_BREAKPOINTS', -1)
+ DebugInfoHolder.DEBUG_TRACE_LEVEL = setup.get('DEBUG_TRACE_LEVEL', -1)
+
+ port = setup['port']
+ host = setup['client']
+
+ if setup['multiproc']:
+ pydev_log.debug("Started in multiproc mode\n")
+
+ dispatcher = Dispatcher()
+ try:
+ dispatcher.connect(host, port)
+ if dispatcher.port is not None:
+ port = dispatcher.port
+ pydev_log.debug("Received port %d\n" %port)
+ pydev_log.info("pydev debugger: process %d is connecting\n"% os.getpid())
+
+ try:
+ pydev_monkey.patch_new_process_functions()
+ except:
+ pydev_log.error("Error patching process functions\n")
+ traceback.print_exc()
+ else:
+ pydev_log.error("pydev debugger: couldn't get port for new debug process\n")
+ finally:
+ dispatcher.close()
+ else:
+ pydev_log.info("pydev debugger: starting\n")
+
+ try:
+ pydev_monkey.patch_new_process_functions_with_warning()
+ except:
+ pydev_log.error("Error patching process functions\n")
+ traceback.print_exc()
+
+
+ debugger = PyDB()
+
+ if setup['save-signatures']:
+ if pydevd_vm_type.GetVmType() == pydevd_vm_type.PydevdVmType.JYTHON:
+ sys.stderr.write("Collecting run-time type information is not supported for Jython\n")
+ else:
+ debugger.signature_factory = SignatureFactory()
+
+ debugger.connect(host, port)
+
+ connected = True #Mark that we're connected when started from inside ide.
+
+ debugger.run(setup['file'], None, None)
diff --git a/python/helpers/pydev/pydevd_additional_thread_info.py b/python/helpers/pydev/pydevd_additional_thread_info.py
new file mode 100644
index 0000000..d2d3fff
--- /dev/null
+++ b/python/helpers/pydev/pydevd_additional_thread_info.py
@@ -0,0 +1,148 @@
+import sys
+from pydevd_constants import * #@UnusedWildImport
+if USE_LIB_COPY:
+ import _pydev_threading as threading
+else:
+ import threading
+from pydevd_frame import PyDBFrame
+import weakref
+
+#=======================================================================================================================
+# AbstractPyDBAdditionalThreadInfo
+#=======================================================================================================================
+class AbstractPyDBAdditionalThreadInfo:
+ def __init__(self):
+ self.pydev_state = STATE_RUN
+ self.pydev_step_stop = None
+ self.pydev_step_cmd = None
+ self.pydev_notify_kill = False
+ self.pydev_force_stop_at_exception = None
+ self.pydev_smart_step_stop = None
+ self.pydev_django_resolve_frame = None
+ self.is_tracing = False
+
+
+ def IterFrames(self):
+ raise NotImplementedError()
+
+ def CreateDbFrame(self, args):
+ #args = mainDebugger, filename, base, additionalInfo, t, frame
+ raise NotImplementedError()
+
+ def __str__(self):
+ return 'State:%s Stop:%s Cmd: %s Kill:%s' % (self.pydev_state, self.pydev_step_stop, self.pydev_step_cmd, self.pydev_notify_kill)
+
+
+#=======================================================================================================================
+# PyDBAdditionalThreadInfoWithCurrentFramesSupport
+#=======================================================================================================================
+class PyDBAdditionalThreadInfoWithCurrentFramesSupport(AbstractPyDBAdditionalThreadInfo):
+
+ def IterFrames(self):
+ #sys._current_frames(): dictionary with thread id -> topmost frame
+        return sys._current_frames().values() #returns a copy... not sure whether it would change if we got an iterator instead
+
+ #just create the db frame directly
+ CreateDbFrame = PyDBFrame
+
+#=======================================================================================================================
+# PyDBAdditionalThreadInfoWithoutCurrentFramesSupport
+#=======================================================================================================================
+class PyDBAdditionalThreadInfoWithoutCurrentFramesSupport(AbstractPyDBAdditionalThreadInfo):
+
+ def __init__(self):
+ AbstractPyDBAdditionalThreadInfo.__init__(self)
+ #That's where the last frame entered is kept. That's needed so that we're able to
+ #trace contexts that were previously untraced and are currently active. So, the bad thing
+ #is that the frame may be kept alive longer than it would if we go up on the frame stack,
+ #and is only disposed when some other frame is removed.
+ #A better way would be if we could get the topmost frame for each thread, but that's
+ #not possible (until python 2.5 -- which is the PyDBAdditionalThreadInfoWithCurrentFramesSupport version)
+ #Or if the user compiled threadframe (from http://www.majid.info/mylos/stories/2004/06/10/threadframe.html)
+
+ #NOT RLock!! (could deadlock if it was)
+ self.lock = threading.Lock()
+ self._acquire_lock = self.lock.acquire
+ self._release_lock = self.lock.release
+
+ #collection with the refs
+ d = {}
+ self.pydev_existing_frames = d
+ try:
+ self._iter_frames = d.iterkeys
+ except AttributeError:
+ self._iter_frames = d.keys
+
+
+ def _OnDbFrameCollected(self, ref):
+ '''
+ Callback to be called when a given reference is garbage-collected.
+ '''
+ self._acquire_lock()
+ try:
+ del self.pydev_existing_frames[ref]
+ finally:
+ self._release_lock()
+
+
+ def _AddDbFrame(self, db_frame):
+ self._acquire_lock()
+ try:
+ #create the db frame with a callback to remove it from the dict when it's garbage-collected
+ #(could be a set, but that's not available on all versions we want to target).
+ r = weakref.ref(db_frame, self._OnDbFrameCollected)
+ self.pydev_existing_frames[r] = r
+ finally:
+ self._release_lock()
+
+
+ def CreateDbFrame(self, args):
+        #the frame must be cached as a weak-ref (we return the actual db frame -- which will be kept
+        #alive only while its trace_dispatch method is still referenced).
+        #that's a large workaround because:
+        #1. we can't have weak-references to python frame objects
+        #2. only from 2.5 onwards do we have _current_frames support from the interpreter
+ db_frame = PyDBFrame(args)
+ db_frame.frame = args[-1]
+ self._AddDbFrame(db_frame)
+ return db_frame
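The registry above relies on a `weakref` feature worth seeing in isolation: a weak reference can carry a callback that fires when its referent is garbage-collected, and that callback removes the entry from the dict (the weakref object itself doubles as the key, exactly like `pydev_existing_frames`). A minimal sketch of the mechanism, with the locking stripped out (`DbFrame` here is a stand-in, not the real `PyDBFrame`):

```python
import gc
import weakref

class DbFrame(object):
    """Stand-in for PyDBFrame: something we track without keeping alive."""
    pass

existing = {}

def _on_collected(ref):
    #called by the interpreter once the referent has been garbage-collected
    del existing[ref]

def add(db_frame):
    #the weakref doubles as the dict key, like pydev_existing_frames above
    r = weakref.ref(db_frame, _on_collected)
    existing[r] = r

f = DbFrame()
add(f)
assert len(existing) == 1
del f          # drop the only strong reference
gc.collect()   # make collection deterministic across implementations
```

On CPython the `del` alone collects the object via reference counting; the explicit `gc.collect()` makes the behavior deterministic elsewhere.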
+
+
+ def IterFrames(self):
+ #We cannot use yield (because of the lock)
+ self._acquire_lock()
+ try:
+ ret = []
+
+ for weak_db_frame in self._iter_frames():
+ try:
+ ret.append(weak_db_frame().frame)
+ except AttributeError:
+ pass #ok, garbage-collected already
+ return ret
+ finally:
+ self._release_lock()
+
+ def __str__(self):
+ return 'State:%s Stop:%s Cmd: %s Kill:%s Frames:%s' % (self.pydev_state, self.pydev_step_stop, self.pydev_step_cmd, self.pydev_notify_kill, len(self.IterFrames()))
+
+#=======================================================================================================================
+# NOW, WE HAVE TO DEFINE WHICH THREAD INFO TO USE
+# (whether we have to keep references to the frames or not)
+# from version 2.5 onwards, we can use sys._current_frames to get a dict with the threads
+# and frames, but to support other versions, we can't rely on that.
+#=======================================================================================================================
+if hasattr(sys, '_current_frames'):
+ PyDBAdditionalThreadInfo = PyDBAdditionalThreadInfoWithCurrentFramesSupport
+else:
+ try:
+ import threadframe
+ sys._current_frames = threadframe.dict
+ assert sys._current_frames is threadframe.dict #Just check if it was correctly set
+ PyDBAdditionalThreadInfo = PyDBAdditionalThreadInfoWithCurrentFramesSupport
+ except:
+ #If all fails, let's use the support without frames
+ PyDBAdditionalThreadInfo = PyDBAdditionalThreadInfoWithoutCurrentFramesSupport
+
+    sys.stderr.write("pydev debugger: warning: sys._current_frames is not supported in Python 2.4; it is recommended to install the threadframe module\n")
+ sys.stderr.write("pydev debugger: warning: See http://majid.info/blog/threadframe-multithreaded-stack-frame-extraction-for-python/\n")
diff --git a/python/helpers/pydev/pydevd_breakpoints.py b/python/helpers/pydev/pydevd_breakpoints.py
new file mode 100644
index 0000000..0a6644b
--- /dev/null
+++ b/python/helpers/pydev/pydevd_breakpoints.py
@@ -0,0 +1,195 @@
+from pydevd_constants import *
+import pydevd_tracing
+import sys
+import pydev_log
+
+_original_excepthook = None
+_handle_exceptions = None
+
+
+NOTIFY_ALWAYS="NOTIFY_ALWAYS"
+NOTIFY_ON_TERMINATE="NOTIFY_ON_TERMINATE"
+
+if USE_LIB_COPY:
+ import _pydev_threading as threading
+else:
+ import threading
+
+threadingCurrentThread = threading.currentThread
+
+from pydevd_comm import GetGlobalDebugger
+
+class ExceptionBreakpoint:
+ def __init__(self, qname, notify_always, notify_on_terminate):
+ exctype = get_class(qname)
+ self.qname = qname
+ if exctype is not None:
+ self.name = exctype.__name__
+ else:
+ self.name = None
+
+ self.notify_on_terminate = int(notify_on_terminate) == 1
+ self.notify_always = int(notify_always) > 0
+ self.notify_on_first_raise_only = int(notify_always) == 2
+
+ self.type = exctype
+ self.notify = {NOTIFY_ALWAYS: self.notify_always, NOTIFY_ON_TERMINATE: self.notify_on_terminate}
+
+
+ def __str__(self):
+ return self.qname
+
+class LineBreakpoint:
+ def __init__(self, type, flag, condition, func_name, expression):
+ self.type = type
+ self.condition = condition
+ self.func_name = func_name
+ self.expression = expression
+
+ def get_break_dict(self, breakpoints, file):
+ if DictContains(breakpoints, file):
+ breakDict = breakpoints[file]
+ else:
+ breakDict = {}
+ breakpoints[file] = breakDict
+ return breakDict
+
+ def trace(self, file, line, func_name):
+ if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+ pydev_log.debug('Added breakpoint:%s - line:%s - func_name:%s\n' % (file, line, func_name))
+ sys.stderr.flush()
+
+ def add(self, breakpoints, file, line, func_name):
+ self.trace(file, line, func_name)
+
+ breakDict = self.get_break_dict(breakpoints, file)
+
+ breakDict[line] = self
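`add` above maintains a two-level registry, file -> line -> breakpoint, creating the per-file dict on first use (`DictContains` and the explicit branch presumably keep it friendly to the old interpreters this file targets). On any modern Python the same shape is `dict.setdefault`; a sketch with illustrative names:

```python
breakpoints = {}

def add_breakpoint(breakpoints, file, line, bp):
    #equivalent of get_break_dict followed by breakDict[line] = self:
    #create the per-file dict on first use, then index by line
    breakpoints.setdefault(file, {})[line] = bp

add_breakpoint(breakpoints, 'app.py', 10, 'bp1')
add_breakpoint(breakpoints, 'app.py', 20, 'bp2')
add_breakpoint(breakpoints, 'lib.py', 5, 'bp3')
```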
+
+def get_exception_full_qname(exctype):
+ if not exctype:
+ return None
+ return str(exctype.__module__) + '.' + exctype.__name__
+
+def get_exception_name(exctype):
+ if not exctype:
+ return None
+ return exctype.__name__
+
+
+def get_exception_breakpoint(exctype, exceptions, notify_class):
+ name = get_exception_full_qname(exctype)
+ exc = None
+ if exceptions is not None:
+ for k, e in exceptions.items():
+ if e.notify[notify_class]:
+ if name == k:
+ return e
+ if (e.type is not None and issubclass(exctype, e.type)):
+ if exc is None or issubclass(e.type, exc.type):
+ exc = e
+ return exc
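The selection rule in `get_exception_breakpoint` has two tiers: an exact qualified-name match wins immediately, otherwise the most derived registered base class of the raised type is chosen. That rule, isolated from the breakpoint objects and the notify filtering (function and class names below are illustrative):

```python
def most_specific_match(exctype, registered):
    """registered: dict mapping qualified name -> exception class."""
    name = exctype.__module__ + '.' + exctype.__name__
    best = None
    for qname, etype in registered.items():
        if qname == name:
            return etype  # exact qualified-name match wins immediately
        if issubclass(exctype, etype):
            #keep the most derived of the matching base classes
            if best is None or issubclass(etype, best):
                best = etype
    return best

class AppError(Exception): pass
class DbError(AppError): pass
class ConnError(DbError): pass

registered = {
    'builtins.Exception': Exception,
    'example.DbError': DbError,  # hypothetical qualified names
}
```

With these registrations, raising `ConnError` matches both entries by subclassing, and `DbError` is picked because it is the more derived of the two.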
+
+#=======================================================================================================================
+# excepthook
+#=======================================================================================================================
+def excepthook(exctype, value, tb):
+ global _handle_exceptions
+ if _handle_exceptions is not None:
+ exception_breakpoint = get_exception_breakpoint(exctype, _handle_exceptions, NOTIFY_ON_TERMINATE)
+ else:
+ exception_breakpoint = None
+
+ if exception_breakpoint is None:
+ return _original_excepthook(exctype, value, tb)
+
+ #Always call the original excepthook before going on to call the debugger post mortem to show it.
+ _original_excepthook(exctype, value, tb)
+
+ if tb is None: #sometimes it can be None, e.g. with GTK
+ return
+
+ frames = []
+
+ traceback = tb
+ while tb:
+ frames.append(tb.tb_frame)
+ tb = tb.tb_next
+
+ thread = threadingCurrentThread()
+ frames_byid = dict([(id(frame),frame) for frame in frames])
+ frame = frames[-1]
+ thread.additionalInfo.exception = (exctype, value, tb)
+ thread.additionalInfo.pydev_force_stop_at_exception = (frame, frames_byid)
+ thread.additionalInfo.message = exception_breakpoint.qname
+ #sys.exc_info = lambda : (exctype, value, traceback)
+ debugger = GetGlobalDebugger()
+ debugger.force_post_mortem_stop += 1
+
+ pydevd_tracing.SetTrace(None) #no tracing from here
+ debugger.handle_post_mortem_stop(thread.additionalInfo, thread)
+
+#=======================================================================================================================
+# set_pm_excepthook
+#=======================================================================================================================
+def set_pm_excepthook(handle_exceptions_arg=None):
+    '''
+    Should be called to register the excepthook to be used.
+
+    It's only useful for uncaught exceptions, i.e.: exceptions that go up to the excepthook.
+
+    Can receive a parameter to stop only on some exceptions.
+
+    E.g.:
+        set_pm_excepthook((IndexError, ValueError))
+
+        or
+
+        set_pm_excepthook(IndexError)
+
+    If called without a parameter, it will break on any exception.
+
+    @param handle_exceptions_arg: exception or tuple(exceptions)
+        The exceptions that should be handled.
+    '''
+ global _handle_exceptions
+ global _original_excepthook
+ if sys.excepthook != excepthook:
+ #Only keep the original if it's not our own excepthook (if called many times).
+ _original_excepthook = sys.excepthook
+
+ _handle_exceptions = handle_exceptions_arg
+ sys.excepthook = excepthook
+
+def restore_pm_excepthook():
+ global _original_excepthook
+ if _original_excepthook:
+ sys.excepthook = _original_excepthook
+ _original_excepthook = None
+
+
+def update_exception_hook(dbg):
+ if dbg.exception_set:
+ set_pm_excepthook(dict(dbg.exception_set))
+ else:
+ restore_pm_excepthook()
+
+def get_class( kls ):
+ if IS_PY24 and "BaseException" == kls:
+ kls = "Exception"
+ parts = kls.split('.')
+ module = ".".join(parts[:-1])
+ if module == "":
+ if IS_PY3K:
+ module = "builtins"
+ else:
+ module = "__builtin__"
+    try:
+        m = __import__( module )
+        #__import__ returns the top-level package, so walk down to the module
+        #that actually holds the class before fetching it
+        for comp in parts[1:-1]:
+            if m is None:
+                return None
+            m = getattr(m, comp, None)
+        if m is None:
+            return None
+        return getattr(m, parts[-1], None)
+    except ImportError:
+        return None
\ No newline at end of file
diff --git a/python/helpers/pydev/pydevd_comm.py b/python/helpers/pydev/pydevd_comm.py
new file mode 100644
index 0000000..592a969
--- /dev/null
+++ b/python/helpers/pydev/pydevd_comm.py
@@ -0,0 +1,1025 @@
+''' pydevd - a debugging daemon
+This is the daemon you launch for python remote debugging.
+
+Protocol:
+each command has a format:
+ id\tsequence-num\ttext
+ id: protocol command number
+ sequence-num: each request has a sequence number. Sequence numbers
+ originating at the debugger are odd, sequence numbers originating
+ at the daemon are even. Every response uses the same sequence number
+ as the request.
+ payload: it is protocol dependent. When response is a complex structure, it
+ is returned as XML. Each attribute value is urlencoded, and then the whole
+ payload is urlencoded again to prevent stray characters corrupting protocol/xml encodings
+
+ Commands:
+
+ NUMBER NAME FROM* ARGUMENTS RESPONSE NOTE
+100 series: program execution
+ 101 RUN JAVA - -
+ 102 LIST_THREADS JAVA RETURN with XML listing of all threads
+ 103 THREAD_CREATE PYDB - XML with thread information
+ 104 THREAD_KILL JAVA id (or * to exit) kills the thread
+                              PYDB    id                      notifies JAVA that thread was killed
+ 105 THREAD_SUSPEND JAVA XML of the stack, suspends the thread
+ reason for suspension
+ PYDB id notifies JAVA that thread was suspended
+
+ 106 CMD_THREAD_RUN JAVA id resume the thread
+ PYDB id \t reason notifies JAVA that thread was resumed
+
+ 107 STEP_INTO JAVA thread_id
+ 108 STEP_OVER JAVA thread_id
+ 109 STEP_RETURN JAVA thread_id
+
+ 110 GET_VARIABLE JAVA thread_id \t frame_id \t GET_VARIABLE with XML of var content
+ FRAME|GLOBAL \t attributes*
+
+ 111 SET_BREAK JAVA file/line of the breakpoint
+ 112 REMOVE_BREAK JAVA file/line of the return
+ 113 CMD_EVALUATE_EXPRESSION JAVA expression result of evaluating the expression
+ 114 CMD_GET_FRAME JAVA request for frame contents
+ 115 CMD_EXEC_EXPRESSION JAVA
+ 116 CMD_WRITE_TO_CONSOLE PYDB
+ 117 CMD_CHANGE_VARIABLE
+ 118 CMD_RUN_TO_LINE
+ 119 CMD_RELOAD_CODE
+ 120 CMD_GET_COMPLETIONS JAVA
+
+500 series diagnostics/ok
+ 501 VERSION either Version string (1.0) Currently just used at startup
+ 502 RETURN either Depends on caller -
+
+900 series: errors
+ 901 ERROR either - This is reserved for unexpected errors.
+
+ * JAVA - remote debugger, the java end
+ * PYDB - pydevd, the python end
+'''
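The wire format documented above -- `id\tsequence-num\ttext`, one command per line, with debugger-originated sequence numbers odd and daemon-originated ones even -- can be exercised with a tiny encoder/decoder. The helper names below are illustrative; the real framing lives in `NetCommand` and the `ReaderThread` loop further down:

```python
def encode_command(cmd_id, seq, text):
    #one command per line: id, sequence number and payload, tab-separated
    return '%s\t%s\t%s\n' % (cmd_id, seq, text)

def decode_command(line):
    #split on the first two tabs only: the payload itself may contain tabs
    cmd_id, seq, text = line.rstrip('\n').split('\t', 2)
    return int(cmd_id), int(seq), text

CMD_THREAD_CREATE = 103
wire = encode_command(CMD_THREAD_CREATE, 2,
                      '<xml><thread name="pydevd.reader" id="-1"/></xml>')
cmd_id, seq, text = decode_command(wire)
```

The even sequence number marks this as a daemon-originated command, per the convention above.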
+from pydevd_constants import * #@UnusedWildImport
+
+import sys
+
+if USE_LIB_COPY:
+ import _pydev_time as time
+ import _pydev_threading as threading
+ try:
+ import _pydev_thread as thread
+ except ImportError:
+ import _thread as thread #Py3K changed it.
+ import _pydev_Queue as _queue
+ from _pydev_socket import socket
+ from _pydev_socket import AF_INET, SOCK_STREAM
+ from _pydev_socket import SHUT_RD, SHUT_WR
+else:
+ import time
+ import threading
+ try:
+ import thread
+ except ImportError:
+ import _thread as thread #Py3K changed it.
+
+ try:
+ import Queue as _queue
+ except ImportError:
+ import queue as _queue
+ from socket import socket
+ from socket import AF_INET, SOCK_STREAM
+ from socket import SHUT_RD, SHUT_WR
+
+try:
+ from urllib import quote
+except:
+ from urllib.parse import quote #@Reimport @UnresolvedImport
+
+import pydevd_vars
+import pydev_log
+import pydevd_tracing
+import pydevd_vm_type
+import pydevd_file_utils
+import traceback
+from pydevd_utils import *
+from pydevd_utils import quote_smart as quote
+
+
+from pydevd_tracing import GetExceptionTracebackStr
+import pydevconsole
+
+try:
+ _Thread_stop = threading.Thread._Thread__stop
+except AttributeError:
+ _Thread_stop = threading.Thread._stop # _stop in Python 3
+
+
+
+CMD_RUN = 101
+CMD_LIST_THREADS = 102
+CMD_THREAD_CREATE = 103
+CMD_THREAD_KILL = 104
+CMD_THREAD_SUSPEND = 105
+CMD_THREAD_RUN = 106
+CMD_STEP_INTO = 107
+CMD_STEP_OVER = 108
+CMD_STEP_RETURN = 109
+CMD_GET_VARIABLE = 110
+CMD_SET_BREAK = 111
+CMD_REMOVE_BREAK = 112
+CMD_EVALUATE_EXPRESSION = 113
+CMD_GET_FRAME = 114
+CMD_EXEC_EXPRESSION = 115
+CMD_WRITE_TO_CONSOLE = 116
+CMD_CHANGE_VARIABLE = 117
+CMD_RUN_TO_LINE = 118
+CMD_RELOAD_CODE = 119
+CMD_GET_COMPLETIONS = 120
+CMD_CONSOLE_EXEC = 121
+CMD_ADD_EXCEPTION_BREAK = 122
+CMD_REMOVE_EXCEPTION_BREAK = 123
+CMD_LOAD_SOURCE = 124
+CMD_ADD_DJANGO_EXCEPTION_BREAK = 125
+CMD_REMOVE_DJANGO_EXCEPTION_BREAK = 126
+CMD_SET_NEXT_STATEMENT = 127
+CMD_SMART_STEP_INTO = 128
+CMD_EXIT = 129
+CMD_SIGNATURE_CALL_TRACE = 130
+CMD_VERSION = 501
+CMD_RETURN = 502
+CMD_ERROR = 901
+
+ID_TO_MEANING = {
+ '101':'CMD_RUN',
+ '102':'CMD_LIST_THREADS',
+ '103':'CMD_THREAD_CREATE',
+ '104':'CMD_THREAD_KILL',
+ '105':'CMD_THREAD_SUSPEND',
+ '106':'CMD_THREAD_RUN',
+ '107':'CMD_STEP_INTO',
+ '108':'CMD_STEP_OVER',
+ '109':'CMD_STEP_RETURN',
+ '110':'CMD_GET_VARIABLE',
+ '111':'CMD_SET_BREAK',
+ '112':'CMD_REMOVE_BREAK',
+ '113':'CMD_EVALUATE_EXPRESSION',
+ '114':'CMD_GET_FRAME',
+ '115':'CMD_EXEC_EXPRESSION',
+ '116':'CMD_WRITE_TO_CONSOLE',
+ '117':'CMD_CHANGE_VARIABLE',
+ '118':'CMD_RUN_TO_LINE',
+ '119':'CMD_RELOAD_CODE',
+ '120':'CMD_GET_COMPLETIONS',
+ '121':'CMD_CONSOLE_EXEC',
+ '122':'CMD_ADD_EXCEPTION_BREAK',
+ '123':'CMD_REMOVE_EXCEPTION_BREAK',
+ '124':'CMD_LOAD_SOURCE',
+ '125':'CMD_ADD_DJANGO_EXCEPTION_BREAK',
+ '126':'CMD_REMOVE_DJANGO_EXCEPTION_BREAK',
+ '127':'CMD_SET_NEXT_STATEMENT',
+ '128':'CMD_SMART_STEP_INTO',
+ '129': 'CMD_EXIT',
+ '130': 'CMD_SIGNATURE_CALL_TRACE',
+ '501':'CMD_VERSION',
+ '502':'CMD_RETURN',
+ '901':'CMD_ERROR',
+ }
+
+MAX_IO_MSG_SIZE = 1000 #if the io is too big, we'll not send all (could make the debugger too non-responsive)
+#this number can be changed if there's need to do so
+
+VERSION_STRING = "@@BUILD_NUMBER@@"
+
+
+#--------------------------------------------------------------------------------------------------- UTILITIES
+
+#=======================================================================================================================
+# PydevdLog
+#=======================================================================================================================
+def PydevdLog(level, *args):
+ """ levels are:
+ 0 most serious warnings/errors
+ 1 warnings/significant events
+ 2 informational trace
+ """
+ if level <= DebugInfoHolder.DEBUG_TRACE_LEVEL:
+        #yes, we can have errors printing if the console of the program has been closed (and we're still trying to print something)
+ try:
+ sys.stderr.write('%s\n' % (args,))
+ except:
+ pass
+
+#=======================================================================================================================
+# GlobalDebuggerHolder
+#=======================================================================================================================
+class GlobalDebuggerHolder:
+ '''
+ Holder for the global debugger.
+ '''
+ globalDbg = None
+
+#=======================================================================================================================
+# GetGlobalDebugger
+#=======================================================================================================================
+def GetGlobalDebugger():
+ return GlobalDebuggerHolder.globalDbg
+
+#=======================================================================================================================
+# SetGlobalDebugger
+#=======================================================================================================================
+def SetGlobalDebugger(dbg):
+ GlobalDebuggerHolder.globalDbg = dbg
+
+
+#------------------------------------------------------------------- ACTUAL COMM
+
+#=======================================================================================================================
+# PyDBDaemonThread
+#=======================================================================================================================
+class PyDBDaemonThread(threading.Thread):
+
+ def __init__(self):
+ threading.Thread.__init__(self)
+ self.setDaemon(True)
+ self.killReceived = False
+ self.dontTraceMe = True
+
+ def run(self):
+ if sys.platform.startswith("java"):
+ import org.python.core as PyCore #@UnresolvedImport
+ ss = PyCore.PySystemState()
+ # Note: Py.setSystemState() affects only the current thread.
+ PyCore.Py.setSystemState(ss)
+
+ self.OnRun()
+
+ def OnRun(self):
+ raise NotImplementedError('Should be reimplemented by: %s' % self.__class__)
+
+ def doKillPydevThread(self):
+ #that was not working very well because jython gave some socket errors
+ self.killReceived = True
+
+ def stop(self):
+ _Thread_stop(self)
+
+ def stopTrace(self):
+ if self.dontTraceMe:
+ pydevd_tracing.SetTrace(None) # no debugging on this thread
+
+
+#=======================================================================================================================
+# ReaderThread
+#=======================================================================================================================
+class ReaderThread(PyDBDaemonThread):
+ """ reader thread reads and dispatches commands in an infinite loop """
+
+ def __init__(self, sock):
+ PyDBDaemonThread.__init__(self)
+ self.sock = sock
+ self.setName("pydevd.Reader")
+
+
+ def doKillPydevThread(self):
+ #We must shut the socket down so that the reader doesn't stay blocked on recv.
+ self.killReceived = True
+ try:
+ self.sock.shutdown(SHUT_RD) #shut down the socket for reading
+ except:
+ #just ignore that
+ pass
+
+ def OnRun(self):
+ self.stopTrace()
+ buffer = ""
+ try:
+
+ while not self.killReceived:
+ try:
+ r = self.sock.recv(1024)
+ except:
+ if not self.killReceived:
+ self.handleExcept()
+ return #Finished communication.
+ if IS_PY3K:
+ r = r.decode('utf-8')
+
+ buffer += r
+ if DebugInfoHolder.DEBUG_RECORD_SOCKET_READS:
+ pydev_log.debug('received >>%s<<\n' % (buffer,))
+
+ if len(buffer) == 0:
+ self.handleExcept()
+ break
+ while buffer.find('\n') != -1:
+ command, buffer = buffer.split('\n', 1)
+ pydev_log.debug('Received command: >>%s<<\n' % (command,))
+ args = command.split('\t', 2)
+ try:
+ self.processCommand(int(args[0]), int(args[1]), args[2])
+ except:
+ traceback.print_exc()
+ sys.stderr.write("Can't process net command: %s\n" % command)
+ sys.stderr.flush()
+
+ except:
+ traceback.print_exc()
+ self.handleExcept()
+
+
+ def handleExcept(self):
+ GlobalDebuggerHolder.globalDbg.FinishDebuggingSession()
+
+ def processCommand(self, cmd_id, seq, text):
+ GlobalDebuggerHolder.globalDbg.processNetCommand(cmd_id, seq, text)
+
+
+#----------------------------------------------------------------------------------- SOCKET UTILITIES - WRITER
+#=======================================================================================================================
+# WriterThread
+#=======================================================================================================================
+class WriterThread(PyDBDaemonThread):
+ """ writer thread writes out the commands in an infinite loop """
+ def __init__(self, sock):
+ PyDBDaemonThread.__init__(self)
+ self.setDaemon(False) #the writer isn't a daemon so that it can deliver all messages after the main thread terminates
+ self.sock = sock
+ self.setName("pydevd.Writer")
+ self.cmdQueue = _queue.Queue()
+ if pydevd_vm_type.GetVmType() == 'python':
+ self.timeout = 0
+ else:
+ self.timeout = 0.1
+
+ def addCommand(self, cmd):
+ """ cmd is NetCommand """
+ if not self.killReceived: #don't accept new data after the kill was received
+ self.cmdQueue.put(cmd)
+
+ def OnRun(self):
+ """ just loop and write responses """
+
+ self.stopTrace()
+ try:
+ while True:
+ try:
+ try:
+ cmd = self.cmdQueue.get(1, 0.1)
+ except _queue.Empty:
+ if self.killReceived:
+ try:
+ self.sock.shutdown(SHUT_WR)
+ self.sock.close()
+ except:
+ pass
+ self.stop() #mark thread as stopped to unblock joined threads for sure (they can hang otherwise)
+
+ return #break if queue is empty and killReceived
+ else:
+ continue
+ except:
+ #PydevdLog(0, 'Finishing debug communication...(1)')
+ #errors can happen here because we were shutting down
+ #while the thread had not yet been released
+ return
+ out = cmd.getOutgoing()
+
+ if DebugInfoHolder.DEBUG_TRACE_LEVEL >= 1:
+ out_message = 'sending cmd: '
+ out_message += ID_TO_MEANING.get(out[:3], 'UNKNOWN')
+ out_message += ' '
+ out_message += out
+ try:
+ sys.stderr.write('%s\n' % (out_message,))
+ except:
+ pass
+
+ if IS_PY3K:
+ out = bytearray(out, 'utf-8')
+ self.sock.send(out) #TODO: this does not guarantee that all messages are sent (and jython does not have a sendall)
+ if cmd.id == CMD_EXIT:
+ break
+ if time is None:
+ break #interpreter shutdown
+ time.sleep(self.timeout)
+ except Exception:
+ GlobalDebuggerHolder.globalDbg.FinishDebuggingSession()
+ if DebugInfoHolder.DEBUG_TRACE_LEVEL >= 0:
+ traceback.print_exc()
+
+
+
+
+#--------------------------------------------------- CREATING THE SOCKET THREADS
+
+#=======================================================================================================================
+# StartServer
+#=======================================================================================================================
+def StartServer(port):
+ """ binds to a port, waits for the debugger to connect """
+ s = socket(AF_INET, SOCK_STREAM)
+ s.bind(('', port))
+ s.listen(1)
+ newSock, _addr = s.accept()
+ return newSock
+
+#=======================================================================================================================
+# StartClient
+#=======================================================================================================================
+def StartClient(host, port):
+ """ connects to a host/port """
+ PydevdLog(1, "Connecting to ", host, ":", str(port))
+
+ s = socket(AF_INET, SOCK_STREAM)
+
+ MAX_TRIES = 3
+ i = 0
+ while i < MAX_TRIES:
+ try:
+ s.connect((host, port))
+ except:
+ i += 1
+ time.sleep(0.2)
+ continue
+ PydevdLog(1, "Connected.")
+ return s
+
+ sys.stderr.write("Could not connect to %s:%s\n" % (host, port))
+ sys.stderr.flush()
+ traceback.print_exc()
+ sys.exit(1) #TODO: is it safe?
+
+
+
+#------------------------------------------------------------------------------------ MANY COMMUNICATION STUFF
+
+#=======================================================================================================================
+# NetCommand
+#=======================================================================================================================
+class NetCommand:
+ """ Commands received/sent over the network.
+
+ Command can represent command received from the debugger,
+ or one to be sent by daemon.
+ """
+ next_seq = 0 # sequence numbers
+
+ def __init__(self, id, seq, text):
+ """ smart handling of parameters:
+ if seq is 0, a new sequence number is generated
+ if text has carriage returns, they're quoted away"""
+ self.id = id
+ if seq == 0: seq = self.getNextSeq()
+ self.seq = seq
+ self.text = text
+ self.outgoing = self.makeMessage(id, seq, text)
+
+ def getNextSeq(self):
+ """ returns next sequence number """
+ NetCommand.next_seq += 2
+ return NetCommand.next_seq
+
+ def getOutgoing(self):
+ """ returns the outgoing message"""
+ return self.outgoing
+
+ def makeMessage(self, cmd, seq, payload):
+ encoded = quote(to_string(payload), '/<>_=" \t')
+ return str(cmd) + '\t' + str(seq) + '\t' + encoded + "\n"
+
+#=======================================================================================================================
+# NetCommandFactory
+#=======================================================================================================================
+class NetCommandFactory:
+
+ def __init__(self):
+ self.next_seq = 0
+
+ def threadToXML(self, thread):
+ """ thread information as XML """
+ name = pydevd_vars.makeValidXmlValue(thread.getName())
+ cmdText = '<thread name="%s" id="%s" />' % (quote(name), GetThreadId(thread))
+ return cmdText
+
+ def makeErrorMessage(self, seq, text):
+ cmd = NetCommand(CMD_ERROR, seq, text)
+ if DebugInfoHolder.DEBUG_TRACE_LEVEL > 2:
+ sys.stderr.write("Error: %s" % (text,))
+ return cmd
+
+ def makeThreadCreatedMessage(self, thread):
+ cmdText = "<xml>" + self.threadToXML(thread) + "</xml>"
+ return NetCommand(CMD_THREAD_CREATE, 0, cmdText)
+
+ def makeListThreadsMessage(self, seq):
+ """ returns thread listing as XML """
+ try:
+ t = threading.enumerate()
+ cmdText = "<xml>"
+ for i in t:
+ if i.isAlive():
+ cmdText += self.threadToXML(i)
+ cmdText += "</xml>"
+ return NetCommand(CMD_RETURN, seq, cmdText)
+ except:
+ return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+ def makeVariableChangedMessage(self, seq, payload):
+ # notify debugger that value was changed successfully
+ return NetCommand(CMD_RETURN, seq, payload)
+
+ def makeIoMessage(self, v, ctx, dbg=None):
+ '''
+ @param v: the message to pass to the debug server
+ @param ctx: 1 for stdout, 2 for stderr
+ @param dbg: if not None, the command is also added to the debugger's writer
+ '''
+
+ try:
+ if len(v) > MAX_IO_MSG_SIZE:
+ v = v[0:MAX_IO_MSG_SIZE]
+ v += '...'
+
+ v = pydevd_vars.makeValidXmlValue(quote(v, '/>_= \t'))
+ net = NetCommand(str(CMD_WRITE_TO_CONSOLE), 0, '<xml><io s="%s" ctx="%s"/></xml>' % (v, ctx))
+ except:
+ net = self.makeErrorMessage(0, GetExceptionTracebackStr())
+
+ if dbg:
+ dbg.writer.addCommand(net)
+
+ return net
+
+ def makeVersionMessage(self, seq):
+ try:
+ return NetCommand(CMD_VERSION, seq, VERSION_STRING)
+ except:
+ return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+ def makeThreadKilledMessage(self, id):
+ try:
+ return NetCommand(CMD_THREAD_KILL, 0, str(id))
+ except:
+ return self.makeErrorMessage(0, GetExceptionTracebackStr())
+
+ def makeThreadSuspendMessage(self, thread_id, frame, stop_reason, message):
+
+ """ <xml>
+ <thread id="id" stop_reason="reason">
+ <frame id="id" name="functionName" file="file" line="line">
+ <var .../>
+ </frame>
+ </thread>
+ """
+ try:
+ cmdTextList = ["<xml>"]
+
+ if message:
+ message = pydevd_vars.makeValidXmlValue(str(message))
+
+ cmdTextList.append('<thread id="%s" stop_reason="%s" message="%s">' % (thread_id, stop_reason, message))
+
+ curFrame = frame
+ try:
+ while curFrame:
+ #print cmdText
+ myId = str(id(curFrame))
+ #print "id is ", myId
+
+ if curFrame.f_code is None:
+ break #Iron Python sometimes does not have it!
+
+ myName = curFrame.f_code.co_name #method name (if in method) or ? if global
+ if myName is None:
+ break #Iron Python sometimes does not have it!
+
+ #print "name is ", myName
+
+ filename, base = pydevd_file_utils.GetFilenameAndBase(curFrame)
+
+ myFile = pydevd_file_utils.NormFileToClient(filename)
+
+ #print "file is ", myFile
+ #myFile = inspect.getsourcefile(curFrame) or inspect.getfile(frame)
+
+ myLine = str(curFrame.f_lineno)
+ #print "line is ", myLine
+
+ #the variables are all gotten 'on-demand'
+ #variables = pydevd_vars.frameVarsToXML(curFrame.f_locals)
+
+ variables = ''
+ cmdTextList.append('<frame id="%s" name="%s" ' % (myId , pydevd_vars.makeValidXmlValue(myName)))
+ cmdTextList.append('file="%s" line="%s">' % (quote(myFile, '/>_= \t'), myLine))
+ cmdTextList.append(variables)
+ cmdTextList.append("</frame>")
+ curFrame = curFrame.f_back
+ except:
+ traceback.print_exc()
+
+ cmdTextList.append("</thread></xml>")
+ cmdText = ''.join(cmdTextList)
+ return NetCommand(CMD_THREAD_SUSPEND, 0, cmdText)
+ except:
+ return self.makeErrorMessage(0, GetExceptionTracebackStr())
+
+ def makeThreadRunMessage(self, id, reason):
+ try:
+ return NetCommand(CMD_THREAD_RUN, 0, str(id) + "\t" + str(reason))
+ except:
+ return self.makeErrorMessage(0, GetExceptionTracebackStr())
+
+ def makeGetVariableMessage(self, seq, payload):
+ try:
+ return NetCommand(CMD_GET_VARIABLE, seq, payload)
+ except Exception:
+ return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+ def makeGetFrameMessage(self, seq, payload):
+ try:
+ return NetCommand(CMD_GET_FRAME, seq, payload)
+ except Exception:
+ return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+
+ def makeEvaluateExpressionMessage(self, seq, payload):
+ try:
+ return NetCommand(CMD_EVALUATE_EXPRESSION, seq, payload)
+ except Exception:
+ return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+ def makeGetCompletionsMessage(self, seq, payload):
+ try:
+ return NetCommand(CMD_GET_COMPLETIONS, seq, payload)
+ except Exception:
+ return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+ def makeLoadSourceMessage(self, seq, source, dbg=None):
+ try:
+ net = NetCommand(CMD_LOAD_SOURCE, seq, '%s' % source)
+
+ except:
+ net = self.makeErrorMessage(0, GetExceptionTracebackStr())
+
+ if dbg:
+ dbg.writer.addCommand(net)
+ return net
+
+ def makeExitMessage(self):
+ try:
+ net = NetCommand(CMD_EXIT, 0, '')
+
+ except:
+ net = self.makeErrorMessage(0, GetExceptionTracebackStr())
+
+ return net
+
+INTERNAL_TERMINATE_THREAD = 1
+INTERNAL_SUSPEND_THREAD = 2
+
+
+#=======================================================================================================================
+# InternalThreadCommand
+#=======================================================================================================================
+class InternalThreadCommand:
+ """ internal commands are generated/executed by the debugger.
+
+ The reason for their existence is that some commands have to be executed
+ on specific threads. These are the InternalThreadCommands that get
+ posted to PyDB.cmdQueue.
+ """
+
+ def canBeExecutedBy(self, thread_id):
+ '''By default, it must be in the same thread to be executed
+ '''
+ return self.thread_id == thread_id
+
+ def doIt(self, dbg):
+ raise NotImplementedError("you have to override doIt")
+
+#=======================================================================================================================
+# InternalTerminateThread
+#=======================================================================================================================
+class InternalTerminateThread(InternalThreadCommand):
+ def __init__(self, thread_id):
+ self.thread_id = thread_id
+
+ def doIt(self, dbg):
+ PydevdLog(1, "killing ", str(self.thread_id))
+ cmd = dbg.cmdFactory.makeThreadKilledMessage(self.thread_id)
+ dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalRunThread
+#=======================================================================================================================
+class InternalRunThread(InternalThreadCommand):
+ def __init__(self, thread_id):
+ self.thread_id = thread_id
+
+ def doIt(self, dbg):
+ t = PydevdFindThreadById(self.thread_id)
+ if t:
+ t.additionalInfo.pydev_step_cmd = None
+ t.additionalInfo.pydev_step_stop = None
+ t.additionalInfo.pydev_state = STATE_RUN
+
+
+#=======================================================================================================================
+# InternalStepThread
+#=======================================================================================================================
+class InternalStepThread(InternalThreadCommand):
+ def __init__(self, thread_id, cmd_id):
+ self.thread_id = thread_id
+ self.cmd_id = cmd_id
+
+ def doIt(self, dbg):
+ t = PydevdFindThreadById(self.thread_id)
+ if t:
+ t.additionalInfo.pydev_step_cmd = self.cmd_id
+ t.additionalInfo.pydev_state = STATE_RUN
+
+#=======================================================================================================================
+# InternalSetNextStatementThread
+#=======================================================================================================================
+class InternalSetNextStatementThread(InternalThreadCommand):
+ def __init__(self, thread_id, cmd_id, line, func_name):
+ self.thread_id = thread_id
+ self.cmd_id = cmd_id
+ self.line = line
+ self.func_name = func_name
+
+ def doIt(self, dbg):
+ t = PydevdFindThreadById(self.thread_id)
+ if t:
+ t.additionalInfo.pydev_step_cmd = self.cmd_id
+ t.additionalInfo.pydev_next_line = int(self.line)
+ t.additionalInfo.pydev_func_name = self.func_name
+ t.additionalInfo.pydev_state = STATE_RUN
+
+
+#=======================================================================================================================
+# InternalGetVariable
+#=======================================================================================================================
+class InternalGetVariable(InternalThreadCommand):
+ """ gets the value of a variable """
+ def __init__(self, seq, thread_id, frame_id, scope, attrs):
+ self.sequence = seq
+ self.thread_id = thread_id
+ self.frame_id = frame_id
+ self.scope = scope
+ self.attributes = attrs
+
+ def doIt(self, dbg):
+ """ Converts request into python variable """
+ try:
+ xml = "<xml>"
+ valDict = pydevd_vars.resolveCompoundVariable(self.thread_id, self.frame_id, self.scope, self.attributes)
+ if valDict is None:
+ valDict = {}
+
+ keys = valDict.keys()
+ if hasattr(keys, 'sort'):
+ keys.sort(compare_object_attrs) #Python 3.0 does not have it
+ else:
+ if IS_PY3K:
+ keys = sorted(keys, key=cmp_to_key(compare_object_attrs)) #Jython 2.1 does not have it (and all must be compared as strings).
+ else:
+ keys = sorted(keys, cmp=compare_object_attrs) #Jython 2.1 does not have it (and all must be compared as strings).
+
+ for k in keys:
+ xml += pydevd_vars.varToXML(valDict[k], to_string(k))
+
+ xml += "</xml>"
+ cmd = dbg.cmdFactory.makeGetVariableMessage(self.sequence, xml)
+ dbg.writer.addCommand(cmd)
+ except Exception:
+ cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error resolving variables " + GetExceptionTracebackStr())
+ dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalChangeVariable
+#=======================================================================================================================
+class InternalChangeVariable(InternalThreadCommand):
+ """ changes the value of a variable """
+ def __init__(self, seq, thread_id, frame_id, scope, attr, expression):
+ self.sequence = seq
+ self.thread_id = thread_id
+ self.frame_id = frame_id
+ self.scope = scope
+ self.attr = attr
+ self.expression = expression
+
+ def doIt(self, dbg):
+ """ Converts request into python variable """
+ try:
+ result = pydevd_vars.changeAttrExpression(self.thread_id, self.frame_id, self.attr, self.expression)
+ xml = "<xml>"
+ xml += pydevd_vars.varToXML(result, "")
+ xml += "</xml>"
+ cmd = dbg.cmdFactory.makeVariableChangedMessage(self.sequence, xml)
+ dbg.writer.addCommand(cmd)
+ except Exception:
+ cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error changing variable attr:%s expression:%s traceback:%s" % (self.attr, self.expression, GetExceptionTracebackStr()))
+ dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalGetFrame
+#=======================================================================================================================
+class InternalGetFrame(InternalThreadCommand):
+ """ gets the variables of a frame """
+ def __init__(self, seq, thread_id, frame_id):
+ self.sequence = seq
+ self.thread_id = thread_id
+ self.frame_id = frame_id
+
+ def doIt(self, dbg):
+ """ Converts request into python variable """
+ try:
+ frame = pydevd_vars.findFrame(self.thread_id, self.frame_id)
+ if frame is not None:
+ xml = "<xml>"
+ xml += pydevd_vars.frameVarsToXML(frame.f_locals)
+ del frame
+ xml += "</xml>"
+ cmd = dbg.cmdFactory.makeGetFrameMessage(self.sequence, xml)
+ dbg.writer.addCommand(cmd)
+ else:
+ #pydevd_vars.dumpFrames(self.thread_id)
+ #don't print this error: frame not found: means that the client is not synchronized (but that's ok)
+ cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Frame not found: %s from thread: %s" % (self.frame_id, self.thread_id))
+ dbg.writer.addCommand(cmd)
+ except:
+ cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error resolving frame: %s from thread: %s" % (self.frame_id, self.thread_id))
+ dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalEvaluateExpression
+#=======================================================================================================================
+class InternalEvaluateExpression(InternalThreadCommand):
+ """ evaluates an expression in a given frame """
+
+ def __init__(self, seq, thread_id, frame_id, expression, doExec, doTrim):
+ self.sequence = seq
+ self.thread_id = thread_id
+ self.frame_id = frame_id
+ self.expression = expression
+ self.doExec = doExec
+ self.doTrim = doTrim
+
+ def doIt(self, dbg):
+ """ Evaluates the expression and sends the result back """
+ try:
+ result = pydevd_vars.evaluateExpression(self.thread_id, self.frame_id, self.expression, self.doExec)
+ xml = "<xml>"
+ xml += pydevd_vars.varToXML(result, "", self.doTrim)
+ xml += "</xml>"
+ cmd = dbg.cmdFactory.makeEvaluateExpressionMessage(self.sequence, xml)
+ dbg.writer.addCommand(cmd)
+ except:
+ exc = GetExceptionTracebackStr()
+ sys.stderr.write('%s\n' % (exc,))
+ cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error evaluating expression " + exc)
+ dbg.writer.addCommand(cmd)
+
+#=======================================================================================================================
+# InternalConsoleExec
+#=======================================================================================================================
+class InternalConsoleExec(InternalThreadCommand):
+ """ executes an expression in the console of a given frame """
+
+ def __init__(self, seq, thread_id, frame_id, expression):
+ self.sequence = seq
+ self.thread_id = thread_id
+ self.frame_id = frame_id
+ self.expression = expression
+
+ def doIt(self, dbg):
+ """ Executes the console expression and sends the result back """
+ pydev_start_new_thread = None
+ try:
+ try:
+ pydev_start_new_thread = thread.start_new_thread
+
+ thread.start_new_thread = thread._original_start_new_thread #don't trace new threads created by console command
+ thread.start_new = thread._original_start_new_thread
+
+ result = pydevconsole.consoleExec(self.thread_id, self.frame_id, self.expression)
+ xml = "<xml>"
+ xml += pydevd_vars.varToXML(result, "")
+ xml += "</xml>"
+ cmd = dbg.cmdFactory.makeEvaluateExpressionMessage(self.sequence, xml)
+ dbg.writer.addCommand(cmd)
+ except:
+ exc = GetExceptionTracebackStr()
+ sys.stderr.write('%s\n' % (exc,))
+ cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error evaluating console expression " + exc)
+ dbg.writer.addCommand(cmd)
+ finally:
+ thread.start_new_thread = pydev_start_new_thread
+ thread.start_new = pydev_start_new_thread
+ sys.stderr.flush()
+ sys.stdout.flush()
+
+#=======================================================================================================================
+# InternalGetCompletions
+#=======================================================================================================================
+class InternalGetCompletions(InternalThreadCommand):
+ """ Gets the completions in a given scope """
+
+ def __init__(self, seq, thread_id, frame_id, act_tok):
+ self.sequence = seq
+ self.thread_id = thread_id
+ self.frame_id = frame_id
+ self.act_tok = act_tok
+
+
+ def doIt(self, dbg):
+ """ Converts request into completions """
+ try:
+ remove_path = None
+ try:
+ import _completer
+ except:
+ try:
+ path = os.environ['PYDEV_COMPLETER_PYTHONPATH']
+ except:
+ path = os.path.dirname(__file__)
+ sys.path.append(path)
+ remove_path = path
+ try:
+ import _completer
+ except:
+ pass
+
+ try:
+
+ frame = pydevd_vars.findFrame(self.thread_id, self.frame_id)
+ if frame is not None:
+
+ #Not using frame.f_globals because of https://sourceforge.net/tracker2/?func=detail&aid=2541355&group_id=85796&atid=577329
+ #(Names not resolved in generator expression in method)
+ #See message: http://mail.python.org/pipermail/python-list/2009-January/526522.html
+ updated_globals = {}
+ updated_globals.update(frame.f_globals)
+ updated_globals.update(frame.f_locals) #locals later because it has precedence over the actual globals
+ locals = frame.f_locals
+ else:
+ updated_globals = {}
+ locals = {}
+
+
+ if pydevconsole.IPYTHON:
+ completions = pydevconsole.get_completions(self.act_tok, self.act_tok, updated_globals, locals)
+ else:
+ try:
+ completer = _completer.Completer(updated_globals, None)
+ #list(tuple(name, descr, parameters, type))
+ completions = completer.complete(self.act_tok)
+ except:
+ completions = []
+
+
+ def makeValid(s):
+ return pydevd_vars.makeValidXmlValue(pydevd_vars.quote(s, '/>_= \t'))
+
+ msg = "<xml>"
+
+ for comp in completions:
+ msg += '<comp p0="%s" p1="%s" p2="%s" p3="%s"/>' % (makeValid(comp[0]), makeValid(comp[1]), makeValid(comp[2]), makeValid(comp[3]),)
+ msg += "</xml>"
+
+ cmd = dbg.cmdFactory.makeGetCompletionsMessage(self.sequence, msg)
+ dbg.writer.addCommand(cmd)
+
+ finally:
+ if remove_path is not None:
+ sys.path.remove(remove_path)
+
+ except:
+ exc = GetExceptionTracebackStr()
+ sys.stderr.write('%s\n' % (exc,))
+ cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error getting completion " + exc)
+ dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# PydevdFindThreadById
+#=======================================================================================================================
+def PydevdFindThreadById(thread_id):
+ try:
+ # there was a deadlock here when I did not remove the tracing function when thread was dead
+ threads = threading.enumerate()
+ for i in threads:
+ if thread_id == GetThreadId(i):
+ return i
+
+ sys.stderr.write("Could not find thread %s\n" % thread_id)
+ sys.stderr.write("Available: %s\n" % [GetThreadId(t) for t in threads])
+ sys.stderr.flush()
+ except:
+ traceback.print_exc()
+
+ return None
+
diff --git a/python/helpers/pydev/pydevd_constants.py b/python/helpers/pydev/pydevd_constants.py
new file mode 100644
index 0000000..34759f7
--- /dev/null
+++ b/python/helpers/pydev/pydevd_constants.py
@@ -0,0 +1,224 @@
+'''
+This module holds the constants used for specifying the states of the debugger.
+'''
+
+STATE_RUN = 1
+STATE_SUSPEND = 2
+
+PYTHON_SUSPEND = 1
+DJANGO_SUSPEND = 2
+
+try:
+ __setFalse = False
+except:
+ import __builtin__
+
+ setattr(__builtin__, 'True', 1)
+ setattr(__builtin__, 'False', 0)
+
+class DebugInfoHolder:
+ #we have to put it here because it can be set through the command line (so, the
+ #already imported references would not have it).
+ DEBUG_RECORD_SOCKET_READS = False
+ DEBUG_TRACE_LEVEL = -1
+ DEBUG_TRACE_BREAKPOINTS = -1
+
+#Optimize with psyco? This gave a 50% speedup in the debugger in tests
+USE_PSYCO_OPTIMIZATION = True
+
+#Hold a reference to the original _getframe (because psyco will change that as soon as it's imported)
+import sys #Note: the sys import must be here anyways (others depend on it)
+try:
+ GetFrame = sys._getframe
+except AttributeError:
+ def GetFrame():
+ raise AssertionError('sys._getframe not available (possible causes: enable -X:Frames on IronPython?)')
+
+#Used to determine the maximum size of each variable passed to eclipse -- having a big value here may make
+#the communication slower -- as the variables are being gathered lazily in the latest version of eclipse,
+#this value was raised from 200 to 1000.
+MAXIMUM_VARIABLE_REPRESENTATION_SIZE = 1000
+
+import os
+
+#=======================================================================================================================
+# Python 3?
+#=======================================================================================================================
+IS_PY3K = False
+IS_PY27 = False
+IS_PY24 = False
+try:
+ if sys.version_info[0] >= 3:
+ IS_PY3K = True
+ elif sys.version_info[0] == 2 and sys.version_info[1] == 7:
+ IS_PY27 = True
+ elif sys.version_info[0] == 2 and sys.version_info[1] == 4:
+ IS_PY24 = True
+except AttributeError:
+ pass #Not all versions have sys.version_info
+
+try:
+ IS_64_BITS = sys.maxsize > 2 ** 32
+except AttributeError:
+ try:
+ import struct
+ IS_64_BITS = struct.calcsize("P") * 8 > 32
+ except:
+ IS_64_BITS = False
+
+SUPPORT_GEVENT = os.getenv('GEVENT_SUPPORT', 'False') == 'True'
+
+USE_LIB_COPY = SUPPORT_GEVENT and not IS_PY3K and sys.version_info[1] >= 6
+
+if USE_LIB_COPY:
+ import _pydev_threading as threading
+else:
+ import threading
+
+_nextThreadIdLock = threading.Lock()
+
+#=======================================================================================================================
+# Jython?
+#=======================================================================================================================
+try:
+ import org.python.core.PyDictionary #@UnresolvedImport @UnusedImport -- just to check if it could be valid
+
+ def DictContains(d, key):
+ return d.has_key(key)
+except:
+ try:
+ #Py3k does not have has_key anymore, and older versions don't have __contains__
+ DictContains = dict.__contains__
+ except:
+ try:
+ DictContains = dict.has_key
+ except NameError:
+ def DictContains(d, key):
+ return d.has_key(key)
+
+
+try:
+ xrange
+except:
+ #Python 3k does not have it
+ xrange = range
+
+try:
+ object
+except NameError:
+ class object:
+ pass
+
+try:
+ enumerate
+except:
+ def enumerate(lst):
+ ret = []
+ i=0
+ for element in lst:
+ ret.append((i, element))
+ i+=1
+ return ret
+
+#=======================================================================================================================
+# StringIO
+#=======================================================================================================================
+try:
+ from StringIO import StringIO
+except:
+ from io import StringIO
+
+
+#=======================================================================================================================
+# NextId
+#=======================================================================================================================
+class NextId:
+
+ def __init__(self):
+ self._id = 0
+
+ def __call__(self):
+ #No need to synchronize here
+ self._id += 1
+ return self._id
+
+_nextThreadId = NextId()
+
+#=======================================================================================================================
+# GetThreadId
+#=======================================================================================================================
+def GetThreadId(thread):
+ try:
+ return thread.__pydevd_id__
+ except AttributeError:
+ _nextThreadIdLock.acquire()
+ try:
+ #We do a new check with the lock in place just to be sure that nothing changed
+ if not hasattr(thread, '__pydevd_id__'):
+ try:
+ pid = os.getpid()
+ except AttributeError:
+ try:
+ #Jython does not have it!
+ import java.lang.management.ManagementFactory #@UnresolvedImport -- just for jython
+
+ pid = java.lang.management.ManagementFactory.getRuntimeMXBean().getName()
+ pid = pid.replace('@', '_')
+ except:
+ #ok, no pid available (will be unable to debug multiple processes)
+ pid = '000001'
+
+ thread.__pydevd_id__ = 'pid%s_seq%s' % (pid, _nextThreadId())
+ finally:
+ _nextThreadIdLock.release()
+
+ return thread.__pydevd_id__
+
+#===============================================================================
+# Null
+#===============================================================================
+class Null:
+ """
+ Adapted from: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/68205
+ """
+
+ def __init__(self, *args, **kwargs):
+ return None
+
+ def __call__(self, *args, **kwargs):
+ return self
+
+ def __getattr__(self, mname):
+ return self
+
+ def __setattr__(self, name, value):
+ return self
+
+ def __delattr__(self, name):
+ return self
+
+ def __repr__(self):
+ return "<Null>"
+
+ def __str__(self):
+ return "Null"
+
+ def __len__(self):
+ return 0
+
+ def __getitem__(self, key):
+ return self
+
+ def __setitem__(self, *args, **kwargs):
+ pass
+
+ def write(self, *args, **kwargs):
+ pass
+
+ def __nonzero__(self): #Python 2 truth value
+ return 0
+
+ def __bool__(self): #Python 3 truth value (must return a bool, not 0)
+ return False
+
+if __name__ == '__main__':
+ if Null():
+ sys.stdout.write('here\n')
+
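The Null class is the classic null-object pattern: every operation returns the same inert instance, so a missing collaborator (say, an optional logger) can stand in without sprinkling `if x is not None` checks through the code. A condensed sketch of the idea (the `log` slot is a hypothetical use, not taken from pydevd):

```python
class NullSketch(object):
    """Condensed null-object in the spirit of the Null class above."""
    def __call__(self, *args, **kwargs):
        return self
    def __getattr__(self, name):
        return self
    def __bool__(self):          # Python 3 truth value
        return False
    __nonzero__ = __bool__       # Python 2 spelling

log = NullSketch()               # hypothetical optional-logger slot
log.debug("starting").flush()    # any chain of calls degrades silently
print(bool(log))                 # -> False, so `if log:` guards still work
```

Because `__getattr__` and `__call__` both return the instance itself, arbitrarily long attribute/call chains never raise.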
diff --git a/python/helpers/pydev/pydevd_exec.py b/python/helpers/pydev/pydevd_exec.py
new file mode 100644
index 0000000..6cffeaf
--- /dev/null
+++ b/python/helpers/pydev/pydevd_exec.py
@@ -0,0 +1,2 @@
+def Exec(exp, global_vars, local_vars):
+ exec exp in global_vars, local_vars
\ No newline at end of file
diff --git a/python/helpers/pydev/pydevd_exec2.py b/python/helpers/pydev/pydevd_exec2.py
new file mode 100644
index 0000000..9b234b7
--- /dev/null
+++ b/python/helpers/pydev/pydevd_exec2.py
@@ -0,0 +1,2 @@
+def Exec(exp, global_vars, local_vars):
+ exec(exp, global_vars, local_vars)
\ No newline at end of file
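The two one-line modules exist because `exec` is a statement in Python 2 and a function in Python 3: the statement spelling in pydevd_exec.py is a SyntaxError the moment Python 3 parses that file, so it must live in a module that Python 3 never imports (the selector logic is an assumption; it is not shown in this diff). A sketch of the function-style variant in use:

```python
# Function-style Exec, matching pydevd_exec2.py. This spelling parses on
# both major versions; the `exec exp in g, l` statement form in
# pydevd_exec.py does not parse under Python 3 at all.
def Exec(exp, global_vars, local_vars):
    exec(exp, global_vars, local_vars)

ns = {}
Exec("x = 1 + 2", ns, ns)
print(ns["x"])  # -> 3
```

Passing the same dict as both globals and locals makes the executed code behave like module-level code, which is the common case for debugger evaluation.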
diff --git a/python/helpers/pydev/pydevd_file_utils.py b/python/helpers/pydev/pydevd_file_utils.py
new file mode 100644
index 0000000..b4f8d50
--- /dev/null
+++ b/python/helpers/pydev/pydevd_file_utils.py
@@ -0,0 +1,304 @@
+'''
+ This module provides utilities to get the absolute filenames so that we can be sure that:
+ - The case of a file will match the actual file in the filesystem (otherwise breakpoints won't be hit).
+ - Providing means for the user to make path conversions when doing a remote debugging session in
+ one machine and debugging in another.
+
+ To do that, the PATHS_FROM_ECLIPSE_TO_PYTHON constant must be filled with the appropriate paths.
+
+ @note:
+ in this context, the server is where your python process is running
+ and the client is where eclipse is running.
+
+ E.g.:
+ If the server (your python process) has the structure
+ /user/projects/my_project/src/package/module1.py
+
+ and the client has:
+ c:\my_project\src\package\module1.py
+
+ the PATHS_FROM_ECLIPSE_TO_PYTHON would have to be:
+ PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'c:\my_project\src', r'/user/projects/my_project/src')]
+
+ @note: DEBUG_CLIENT_SERVER_TRANSLATION can be set to True to debug the result of those translations
+
+ @note: the case of the paths is important! Note that this can be tricky to get right when one machine
+ uses a case-independent filesystem and the other uses a case-dependent filesystem (if the system being
+ debugged is case-independent, 'normcase()' should be used on the paths defined in PATHS_FROM_ECLIPSE_TO_PYTHON).
+
+ @note: all the paths with breakpoints must be translated (otherwise they won't be found in the server)
+
+ @note: to enable remote debugging in the target machine (pydev extensions in the eclipse installation)
+ import pydevd;pydevd.settrace(host, stdoutToServer, stderrToServer, port, suspend)
+
+ see parameter docs on pydevd.py
+
+ @note: for doing a remote debugging session, all the pydevd_ files must be on the server accessible
+ through the PYTHONPATH (and the PATHS_FROM_ECLIPSE_TO_PYTHON only needs to be set on the target
+ machine for the paths that'll actually have breakpoints).
+'''
+
+
+
+
+from pydevd_constants import * #@UnusedWildImport
+import os.path
+import sys
+import traceback
+
+
+
+basename = os.path.basename
+exists = os.path.exists
+join = os.path.join
+
+try:
+ rPath = os.path.realpath #@UndefinedVariable
+except:
+ # jython does not support os.path.realpath
+ # realpath is a no-op on systems without islink support
+ rPath = os.path.abspath
+
+#defined as a list of tuples where the 1st element of the tuple is the path in the client machine
+#and the 2nd element is the path in the server machine.
+#see module docstring for more details.
+PATHS_FROM_ECLIPSE_TO_PYTHON = []
+
+
+#example:
+#PATHS_FROM_ECLIPSE_TO_PYTHON = [
+#(normcase(r'd:\temp\temp_workspace_2\test_python\src\yyy\yyy'),
+# normcase(r'd:\temp\temp_workspace_2\test_python\src\hhh\xxx'))]
+
+DEBUG_CLIENT_SERVER_TRANSLATION = False
+
+#caches filled as requested during the debug session
+NORM_FILENAME_CONTAINER = {}
+NORM_FILENAME_AND_BASE_CONTAINER = {}
+NORM_FILENAME_TO_SERVER_CONTAINER = {}
+NORM_FILENAME_TO_CLIENT_CONTAINER = {}
+
+
+pycharm_os = None
+
+def normcase(file):
+ global pycharm_os
+ if pycharm_os == 'UNIX':
+ return file
+ else:
+ return os.path.normcase(file)
+
+
+def _NormFile(filename):
+ try:
+ return NORM_FILENAME_CONTAINER[filename]
+ except KeyError:
+ r = normcase(rPath(filename))
+ #cache it for fast access later
+ ind = r.find('.zip')
+ if ind == -1:
+ ind = r.find('.egg')
+ if ind != -1:
+ ind+=4
+ zip_path = r[:ind]
+ if ind < len(r) and r[ind] == "!":
+ ind+=1
+ inner_path = r[ind:]
+ if inner_path.startswith('/'):
+ inner_path = inner_path[1:]
+ r = zip_path + "/" + inner_path
+
+ NORM_FILENAME_CONTAINER[filename] = r
+ return r
+
+ZIP_SEARCH_CACHE = {}
+def exists(file):
+ if os.path.exists(file):
+ return file
+
+ ind = file.find('.zip')
+ if ind == -1:
+ ind = file.find('.egg')
+
+ if ind != -1:
+ ind+=4
+ zip_path = file[:ind]
+ if ind < len(file) and file[ind] == "!":
+ ind+=1
+ inner_path = file[ind:]
+ try:
+ zip = ZIP_SEARCH_CACHE[zip_path]
+ except KeyError:
+ try:
+ import zipfile
+ zip = zipfile.ZipFile(zip_path, 'r')
+ ZIP_SEARCH_CACHE[zip_path] = zip
+ except:
+ return None
+
+ try:
+ if inner_path.startswith('/'):
+ inner_path = inner_path[1:]
+
+ info = zip.getinfo(inner_path)
+
+ return zip_path + "/" + inner_path
+ except KeyError:
+ return None
+ return None
+
+
+#Now, let's do a quick test to see if we're working with a version of python that has no problems
+#related to the names generated...
+try:
+ try:
+ code = rPath.func_code
+ except AttributeError:
+ code = rPath.__code__
+ if not exists(_NormFile(code.co_filename)):
+ sys.stderr.write('-------------------------------------------------------------------------------\n')
+ sys.stderr.write('pydev debugger: CRITICAL WARNING: This version of python seems to be incorrectly compiled (internal generated filenames are not absolute)\n')
+ sys.stderr.write('pydev debugger: The debugger may still function, but it will work slower and may miss breakpoints.\n')
+ sys.stderr.write('pydev debugger: Related bug: http://bugs.python.org/issue1666807\n')
+ sys.stderr.write('-------------------------------------------------------------------------------\n')
+ sys.stderr.flush()
+
+ NORM_SEARCH_CACHE = {}
+
+ initial_norm_file = _NormFile
+ def _NormFile(filename): #Let's redefine _NormFile to work with paths that may be incorrect
+ try:
+ return NORM_SEARCH_CACHE[filename]
+ except KeyError:
+ ret = initial_norm_file(filename)
+ if not exists(ret):
+ #Check whether it can be found as a path relative to one of the entries in sys.path
+ for path in sys.path:
+ ret = initial_norm_file(join(path, filename))
+ if exists(ret):
+ break
+ else:
+ sys.stderr.write('pydev debugger: Unable to find real location for: %s\n' % (filename,))
+ ret = filename
+
+ NORM_SEARCH_CACHE[filename] = ret
+ return ret
+except:
+ #Don't fail if there's something not correct here -- but at least print it to the user so that we can correct that
+ traceback.print_exc()
+
+
+if PATHS_FROM_ECLIPSE_TO_PYTHON:
+ #Work on the client and server slashes.
+ eclipse_sep = None
+ python_sep = None
+ for eclipse_prefix, server_prefix in PATHS_FROM_ECLIPSE_TO_PYTHON:
+ if eclipse_sep is not None and python_sep is not None:
+ break
+
+ if eclipse_sep is None:
+ for c in eclipse_prefix:
+ if c in ('/', '\\'):
+ eclipse_sep = c
+ break
+
+ if python_sep is None:
+ for c in server_prefix:
+ if c in ('/', '\\'):
+ python_sep = c
+ break
+
+ #If they're the same or one of them cannot be determined, just make it all None.
+ if eclipse_sep == python_sep or eclipse_sep is None or python_sep is None:
+ eclipse_sep = python_sep = None
+
+
+ #Only set up the translation functions if absolutely needed!
+ def NormFileToServer(filename):
+ #Eclipse will send the passed filename to be translated to the python process
+ #So, this would be 'NormFileFromEclipseToPython'
+ try:
+ return NORM_FILENAME_TO_SERVER_CONTAINER[filename]
+ except KeyError:
+ #used to translate a path from the client to the debug server
+ translated = normcase(filename)
+ for eclipse_prefix, server_prefix in PATHS_FROM_ECLIPSE_TO_PYTHON:
+ if translated.startswith(eclipse_prefix):
+ if DEBUG_CLIENT_SERVER_TRANSLATION:
+ sys.stderr.write('pydev debugger: replacing to server: %s\n' % (translated,))
+ translated = translated.replace(eclipse_prefix, server_prefix)
+ if DEBUG_CLIENT_SERVER_TRANSLATION:
+ sys.stderr.write('pydev debugger: sent to server: %s\n' % (translated,))
+ break
+ else:
+ if DEBUG_CLIENT_SERVER_TRANSLATION:
+ sys.stderr.write('pydev debugger: to server: unable to find matching prefix for: %s in %s\n' % \
+ (translated, [x[0] for x in PATHS_FROM_ECLIPSE_TO_PYTHON]))
+
+ #Note that when going to the server, we do the replace first and only later do the norm file.
+ if eclipse_sep is not None:
+ translated = translated.replace(eclipse_sep, python_sep)
+ translated = _NormFile(translated)
+
+ NORM_FILENAME_TO_SERVER_CONTAINER[filename] = translated
+ return translated
+
+
+ def NormFileToClient(filename):
+ #The result of this method will be passed to eclipse
+ #So, this would be 'NormFileFromPythonToEclipse'
+ try:
+ return NORM_FILENAME_TO_CLIENT_CONTAINER[filename]
+ except KeyError:
+ #used to translate a path from the debug server to the client
+ translated = _NormFile(filename)
+ for eclipse_prefix, python_prefix in PATHS_FROM_ECLIPSE_TO_PYTHON:
+ if translated.startswith(python_prefix):
+ if DEBUG_CLIENT_SERVER_TRANSLATION:
+ sys.stderr.write('pydev debugger: replacing to client: %s\n' % (translated,))
+ translated = translated.replace(python_prefix, eclipse_prefix)
+ if DEBUG_CLIENT_SERVER_TRANSLATION:
+ sys.stderr.write('pydev debugger: sent to client: %s\n' % (translated,))
+ break
+ else:
+ if DEBUG_CLIENT_SERVER_TRANSLATION:
+ sys.stderr.write('pydev debugger: to client: unable to find matching prefix for: %s in %s\n' % \
+ (translated, [x[1] for x in PATHS_FROM_ECLIPSE_TO_PYTHON]))
+
+ if eclipse_sep is not None:
+ translated = translated.replace(python_sep, eclipse_sep)
+
+ #The resulting path is not in the python process, so, we cannot do a _NormFile here,
+ #only at the beginning of this method.
+ NORM_FILENAME_TO_CLIENT_CONTAINER[filename] = translated
+ return translated
+
+else:
+ #no translation step needed (just inline the calls)
+ NormFileToClient = _NormFile
+ NormFileToServer = _NormFile
+
+
+def GetFileNameAndBaseFromFile(f):
+ try:
+ return NORM_FILENAME_AND_BASE_CONTAINER[f]
+ except KeyError:
+ filename = _NormFile(f)
+ base = basename(filename)
+ NORM_FILENAME_AND_BASE_CONTAINER[f] = filename, base
+ return filename, base
+
+
+def GetFilenameAndBase(frame):
+ #This one is just internal (so, does not need any kind of client-server translation)
+ f = frame.f_code.co_filename
+ if f is not None and f.startswith('build/bdist.'):
+ # files from eggs in Python 2.7 have paths like build/bdist.linux-x86_64/egg/<path-inside-egg>
+ f = frame.f_globals['__file__']
+ if f.endswith('.pyc'):
+ f = f[:-1]
+ return GetFileNameAndBaseFromFile(f)
+
+def set_pycharm_os(os):
+ global pycharm_os
+ pycharm_os = os
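The client-to-server direction implemented by NormFileToServer can be boiled down to two ordered steps: prefix replacement first, separator replacement second, exactly the order the comment in the code above calls out. A simplified sketch (the mapping is hypothetical, and the `normcase`/`_NormFile` steps of the real code are omitted):

```python
# Hypothetical Windows-client -> POSIX-server mapping, as in the module docstring.
PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'c:\my_project\src', '/user/projects/my_project/src')]

def to_server(filename, client_sep='\\', server_sep='/'):
    # Step 1: swap the client prefix for the server prefix.
    for client_prefix, server_prefix in PATHS_FROM_ECLIPSE_TO_PYTHON:
        if filename.startswith(client_prefix):
            filename = server_prefix + filename[len(client_prefix):]
            break
    # Step 2: only then fix the separators (the real code also runs
    # _NormFile after this point).
    return filename.replace(client_sep, server_sep)

print(to_server(r'c:\my_project\src\package\module1.py'))
# -> /user/projects/my_project/src/package/module1.py
```

Doing the separator swap before the prefix swap would break the `startswith` match, which is why the order matters.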
diff --git a/python/helpers/pydev/pydevd_frame.py b/python/helpers/pydev/pydevd_frame.py
new file mode 100644
index 0000000..a91f160
--- /dev/null
+++ b/python/helpers/pydev/pydevd_frame.py
@@ -0,0 +1,383 @@
+from django_debug import is_django_render_call, get_template_file_name, get_template_line, is_django_suspended, suspend_django, is_django_resolve_call, is_django_context_get_call
+from django_debug import find_django_render_frame
+from django_frame import just_raised
+from django_frame import is_django_exception_break_context
+from django_frame import DjangoTemplateFrame
+from pydevd_comm import * #@UnusedWildImport
+from pydevd_breakpoints import * #@UnusedWildImport
+import traceback #@Reimport
+import os.path
+import sys
+import pydev_log
+from pydevd_signature import sendSignatureCallTrace
+
+basename = os.path.basename
+
+#=======================================================================================================================
+# PyDBFrame
+#=======================================================================================================================
+class PyDBFrame:
+ '''Handles the tracing for a given frame: trace_dispatch is used
+ initially when we enter a new context ('call') and is then reused
+ for the rest of that context.
+ '''
+
+ def __init__(self, args):
+ #args = mainDebugger, filename, info, thread, frame
+ #yep, much faster than putting them in self and then getting them from self later on
+ self._args = args[:-1]
+
+ def setSuspend(self, *args, **kwargs):
+ self._args[0].setSuspend(*args, **kwargs)
+
+ def doWaitSuspend(self, *args, **kwargs):
+ self._args[0].doWaitSuspend(*args, **kwargs)
+
+ def trace_exception(self, frame, event, arg):
+ if event == 'exception':
+ (flag, frame) = self.shouldStopOnException(frame, event, arg)
+
+ if flag:
+ self.handle_exception(frame, event, arg)
+ return self.trace_dispatch
+
+ return self.trace_exception
+
+ def shouldStopOnException(self, frame, event, arg):
+ mainDebugger, filename, info, thread = self._args
+ flag = False
+
+ if info.pydev_state != STATE_SUSPEND: #and breakpoint is not None:
+ (exception, value, trace) = arg
+
+ if trace is not None: #on jython trace is None on the first event
+ exception_breakpoint = get_exception_breakpoint(exception, dict(mainDebugger.exception_set), NOTIFY_ALWAYS)
+ if exception_breakpoint is not None:
+ if not exception_breakpoint.notify_on_first_raise_only or just_raised(trace):
+ curr_func_name = frame.f_code.co_name
+ add_exception_to_frame(frame, (exception, value, trace))
+ self.setSuspend(thread, CMD_ADD_EXCEPTION_BREAK)
+ thread.additionalInfo.message = exception_breakpoint.qname
+ flag = True
+ else:
+ flag = False
+ else:
+ try:
+ if mainDebugger.django_exception_break and get_exception_name(exception) in ['VariableDoesNotExist', 'TemplateDoesNotExist', 'TemplateSyntaxError'] and just_raised(trace) and is_django_exception_break_context(frame):
+ render_frame = find_django_render_frame(frame)
+ if render_frame:
+ suspend_frame = suspend_django(self, mainDebugger, thread, render_frame, CMD_ADD_DJANGO_EXCEPTION_BREAK)
+
+ if suspend_frame:
+ add_exception_to_frame(suspend_frame, (exception, value, trace))
+ flag = True
+ thread.additionalInfo.message = 'VariableDoesNotExist'
+ suspend_frame.f_back = frame
+ frame = suspend_frame
+ except:
+ flag = False
+
+ return (flag, frame)
+
+ def handle_exception(self, frame, event, arg):
+ mainDebugger = self._args[0]
+ thread = self._args[3]
+ self.doWaitSuspend(thread, frame, event, arg)
+ mainDebugger.SetTraceForFrameAndParents(frame)
+
+ def trace_dispatch(self, frame, event, arg):
+ mainDebugger, filename, info, thread = self._args
+ try:
+ info.is_tracing = True
+
+ if mainDebugger._finishDebuggingSession:
+ return None
+
+ if getattr(thread, 'pydev_do_not_trace', None):
+ return None
+
+ if event == 'call':
+ sendSignatureCallTrace(mainDebugger, frame, filename)
+
+ if event not in ('line', 'call', 'return'):
+ if event == 'exception':
+ (flag, frame) = self.shouldStopOnException(frame, event, arg)
+ if flag:
+ self.handle_exception(frame, event, arg)
+ return self.trace_dispatch
+ else:
+ #I believe this can only happen in jython, at some boundaries between jython and java code, which we don't want to trace.
+ return None
+
+ if event != 'exception':
+ breakpoints_for_file = mainDebugger.breakpoints.get(filename)
+
+ can_skip = False
+
+ if info.pydev_state == STATE_RUN:
+ #we can skip if:
+ #- we have no stop marked
+ #- we should make a step return/step over and we're not in the current frame
+ can_skip = (info.pydev_step_cmd is None and info.pydev_step_stop is None)\
+ or (info.pydev_step_cmd in (CMD_STEP_RETURN, CMD_STEP_OVER) and info.pydev_step_stop is not frame)
+
+ if mainDebugger.django_breakpoints:
+ can_skip = False
+
+ # Let's check whether we are in a function that has a breakpoint. If there is no breakpoint,
+ # we will return None so that this frame is not traced further.
+ #Also, after we hit a breakpoint and go to some other debugging state, we have to force the set trace anyway,
+ #so that's why the additional checks are there.
+ if not breakpoints_for_file:
+ if can_skip:
+ if mainDebugger.always_exception_set or mainDebugger.django_exception_break:
+ return self.trace_exception
+ else:
+ return None
+
+ else:
+ #checks the breakpoint to see if there is a context match in some function
+ curr_func_name = frame.f_code.co_name
+
+ #global context is set with an empty name
+ if curr_func_name in ('?', '<module>'):
+ curr_func_name = ''
+
+ for breakpoint in breakpoints_for_file.values(): #jython does not support itervalues()
+ #will match either global or some function
+ if breakpoint.func_name in ('None', curr_func_name):
+ break
+
+ else: # if we had some break, it won't get here (so, that's a context that we want to skip)
+ if can_skip:
+ #print 'skipping', frame.f_lineno, info.pydev_state, info.pydev_step_stop, info.pydev_step_cmd
+ return None
+ else:
+ breakpoints_for_file = None
+
+ #We may have hit a breakpoint or we are already in step mode. Either way, let's check what we should do in this frame
+ #print 'NOT skipped', frame.f_lineno, frame.f_code.co_name, event
+
+ try:
+ line = frame.f_lineno
+
+
+ flag = False
+ if event == 'call' and info.pydev_state != STATE_SUSPEND and mainDebugger.django_breakpoints \
+ and is_django_render_call(frame):
+ (flag, frame) = self.shouldStopOnDjangoBreak(frame, event, arg)
+
+ #return is not taken into account for breakpoint hit because we'd have a double-hit in this case
+ #(one for the line and the other for the return).
+
+ if not flag and event != 'return' and info.pydev_state != STATE_SUSPEND and breakpoints_for_file is not None\
+ and DictContains(breakpoints_for_file, line):
+ #ok, we hit a breakpoint; now we have to check whether it is a conditional breakpoint
+ #let's do the conditional handling here
+ breakpoint = breakpoints_for_file[line]
+
+ stop = True
+ if info.pydev_step_cmd == CMD_STEP_OVER and info.pydev_step_stop is frame and event in ('line', 'return'):
+ stop = False #we don't stop on breakpoint if we have to stop by step-over (it will be processed later)
+ else:
+ if breakpoint.condition is not None:
+ try:
+ val = eval(breakpoint.condition, frame.f_globals, frame.f_locals)
+ if not val:
+ return self.trace_dispatch
+
+ except:
+ pydev_log.info('Error while evaluating condition \'%s\': %s\n' % (breakpoint.condition, sys.exc_info()[1]))
+
+ return self.trace_dispatch
+
+ if breakpoint.expression is not None:
+ try:
+ try:
+ val = eval(breakpoint.expression, frame.f_globals, frame.f_locals)
+ except:
+ val = sys.exc_info()[1]
+ finally:
+ if val is not None:
+ thread.additionalInfo.message = val
+
+ if stop:
+ self.setSuspend(thread, CMD_SET_BREAK)
+
+ # if thread has a suspend flag, we suspend with a busy wait
+ if info.pydev_state == STATE_SUSPEND:
+ self.doWaitSuspend(thread, frame, event, arg)
+ return self.trace_dispatch
+
+ except:
+ raise
+
+ #step handling. We stop when we hit the right frame
+ try:
+ django_stop = False
+ if info.pydev_step_cmd == CMD_STEP_INTO:
+ stop = event in ('line', 'return')
+ if is_django_suspended(thread):
+ #django_stop = event == 'call' and is_django_render_call(frame)
+ stop = stop and is_django_resolve_call(frame.f_back) and not is_django_context_get_call(frame)
+ if stop:
+ info.pydev_django_resolve_frame = 1 #we remember that we've gone into python code from a django rendering frame
+
+ elif info.pydev_step_cmd == CMD_STEP_OVER:
+ if is_django_suspended(thread):
+ django_stop = event == 'call' and is_django_render_call(frame)
+
+ stop = False
+ else:
+ if event == 'return' and info.pydev_django_resolve_frame is not None and is_django_resolve_call(frame.f_back):
+ #we return to Django suspend mode and should not stop before django rendering frame
+ info.pydev_step_stop = info.pydev_django_resolve_frame
+ info.pydev_django_resolve_frame = None
+ thread.additionalInfo.suspend_type = DJANGO_SUSPEND
+
+
+ stop = info.pydev_step_stop is frame and event in ('line', 'return')
+
+ elif info.pydev_step_cmd == CMD_SMART_STEP_INTO:
+ stop = False
+ if info.pydev_smart_step_stop is frame:
+ info.pydev_func_name = None
+ info.pydev_smart_step_stop = None
+
+ if event == 'line' or event == 'exception':
+ curr_func_name = frame.f_code.co_name
+
+ #global context is set with an empty name
+ if curr_func_name in ('?', '<module>') or curr_func_name is None:
+ curr_func_name = ''
+
+ if curr_func_name == info.pydev_func_name:
+ stop = True
+
+ elif info.pydev_step_cmd == CMD_STEP_RETURN:
+ stop = event == 'return' and info.pydev_step_stop is frame
+
+ elif info.pydev_step_cmd == CMD_RUN_TO_LINE or info.pydev_step_cmd == CMD_SET_NEXT_STATEMENT:
+ stop = False
+
+ if event == 'line' or event == 'exception':
+ #Yes, we can only act on line events (weird, huh?)
+ #Note: This code is duplicated at pydevd.py
+ #Acting on exception events after debugger breaks with exception
+ curr_func_name = frame.f_code.co_name
+
+ #global context is set with an empty name
+ if curr_func_name in ('?', '<module>'):
+ curr_func_name = ''
+
+ if curr_func_name == info.pydev_func_name:
+ line = info.pydev_next_line
+ if frame.f_lineno == line:
+ stop = True
+ else:
+ if frame.f_trace is None:
+ frame.f_trace = self.trace_dispatch
+ frame.f_lineno = line
+ frame.f_trace = None
+ stop = True
+
+ else:
+ stop = False
+
+ if django_stop:
+ frame = suspend_django(self, mainDebugger, thread, frame)
+ if frame:
+ self.doWaitSuspend(thread, frame, event, arg)
+ elif stop:
+ #event is always == line or return at this point
+ if event == 'line':
+ self.setSuspend(thread, info.pydev_step_cmd)
+ self.doWaitSuspend(thread, frame, event, arg)
+ else: #return event
+ back = frame.f_back
+ if back is not None:
+ #When we get to the pydevd run function, the debugging has actually finished for the main thread
+ #(note that it can still go on for other threads, but for this one, we just make it finish)
+ #So, just setting it to None should be OK
+ if basename(back.f_code.co_filename) == 'pydevd.py' and back.f_code.co_name == 'run':
+ back = None
+
+ if back is not None:
+ #if we're in a return, we want it to appear to the user in the previous frame!
+ self.setSuspend(thread, info.pydev_step_cmd)
+ self.doWaitSuspend(thread, back, event, arg)
+ else:
+ #in jython we may not have a back frame
+ info.pydev_step_stop = None
+ info.pydev_step_cmd = None
+ info.pydev_state = STATE_RUN
+
+
+ except:
+ traceback.print_exc()
+ info.pydev_step_cmd = None
+
+ #if we are quitting, let's stop the tracing
+ retVal = None
+ if not mainDebugger.quitting:
+ retVal = self.trace_dispatch
+
+ return retVal
+ finally:
+ info.is_tracing = False
+
+ #end trace_dispatch
+
+ if USE_PSYCO_OPTIMIZATION:
+ try:
+ import psyco
+
+ trace_dispatch = psyco.proxy(trace_dispatch)
+ except ImportError:
+ if hasattr(sys, 'exc_clear'): #jython does not have it
+ sys.exc_clear() #don't keep the traceback
+ pass #ok, psyco not available
+
+ def shouldStopOnDjangoBreak(self, frame, event, arg):
+ mainDebugger, filename, info, thread = self._args
+ flag = False
+ filename = get_template_file_name(frame)
+ pydev_log.debug("Django is rendering a template: %s\n" % filename)
+ django_breakpoints_for_file = mainDebugger.django_breakpoints.get(filename)
+ if django_breakpoints_for_file:
+ pydev_log.debug("Breakpoints for that file: %s\n" % django_breakpoints_for_file)
+ template_line = get_template_line(frame)
+ pydev_log.debug("Tracing template line: %d\n" % template_line)
+
+ if DictContains(django_breakpoints_for_file, template_line):
+ django_breakpoint = django_breakpoints_for_file[template_line]
+
+ if django_breakpoint.is_triggered(frame):
+ pydev_log.debug("Breakpoint is triggered.\n")
+ flag = True
+ new_frame = DjangoTemplateFrame(frame)
+
+ if django_breakpoint.condition is not None:
+ try:
+ val = eval(django_breakpoint.condition, new_frame.f_globals, new_frame.f_locals)
+ if not val:
+ flag = False
+ pydev_log.debug("Condition '%s' is evaluated to %s. Not suspending.\n" %(django_breakpoint.condition, val))
+ except:
+ pydev_log.info('Error while evaluating condition \'%s\': %s\n' % (django_breakpoint.condition, sys.exc_info()[1]))
+
+ if django_breakpoint.expression is not None:
+ try:
+ try:
+ val = eval(django_breakpoint.expression, new_frame.f_globals, new_frame.f_locals)
+ except:
+ val = sys.exc_info()[1]
+ finally:
+ if val is not None:
+ thread.additionalInfo.message = val
+ if flag:
+ frame = suspend_django(self, mainDebugger, thread, frame)
+ return (flag, frame)
+
+def add_exception_to_frame(frame, exception_info):
+ frame.f_locals['__exception__'] = exception_info
\ No newline at end of file
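The fast path at the heart of trace_dispatch above is worth isolating: a frame stops being traced entirely when its file has no breakpoints and no pending step command targets it. A condensed sketch of that decision (the command constants are illustrative only, and the django/exception-breakpoint overrides from the real code are ignored here):

```python
# Values mirror pydevd's step commands but are illustrative only.
CMD_STEP_OVER = 108
CMD_STEP_RETURN = 109

def can_skip(step_cmd, step_stop, frame, has_breakpoints_in_file):
    """Return True when this frame needs no further tracing."""
    if has_breakpoints_in_file:
        return False
    no_step_pending = step_cmd is None and step_stop is None
    # Step over/return only care about one specific frame; any other
    # frame can be skipped.
    stepping_elsewhere = (step_cmd in (CMD_STEP_RETURN, CMD_STEP_OVER)
                          and step_stop is not frame)
    return no_step_pending or stepping_elsewhere

frame, other = object(), object()
print(can_skip(None, None, frame, False))            # -> True: plain running
print(can_skip(CMD_STEP_OVER, frame, frame, False))  # -> False: step targets us
print(can_skip(CMD_STEP_OVER, other, frame, False))  # -> True: step targets another frame
```

Returning None from a trace function for a skippable frame is what makes this cheap: CPython then stops calling the tracer for that frame at all.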
diff --git a/python/helpers/pydev/pydevd_import_class.py b/python/helpers/pydev/pydevd_import_class.py
new file mode 100644
index 0000000..c84b0ce
--- /dev/null
+++ b/python/helpers/pydev/pydevd_import_class.py
@@ -0,0 +1,68 @@
+#Note: code taken from importsTipper.
+
+import sys
+
+def _imp(name, log=None):
+ try:
+ return __import__(name)
+ except:
+ if '.' in name:
+ sub = name[0:name.rfind('.')]
+
+ if log is not None:
+ log.AddContent('Unable to import', name, 'trying with', sub)
+ log.AddException()
+
+ return _imp(sub, log)
+ else:
+ s = 'Unable to import module: %s - sys.path: %s' % (str(name), sys.path)
+ if log is not None:
+ log.AddContent(s)
+ log.AddException()
+
+ raise ImportError(s)
+
+
+IS_IPY = False
+if sys.platform == 'cli':
+ IS_IPY = True
+ _old_imp = _imp
+ def _imp(name, log=None):
+ #We must add a reference in clr for .Net
+ import clr #@UnresolvedImport
+ initial_name = name
+ while '.' in name:
+ try:
+ clr.AddReference(name)
+ break #If it worked, that's OK.
+ except:
+ name = name[0:name.rfind('.')]
+ else:
+ try:
+ clr.AddReference(name)
+ except:
+ pass #That's OK (not dot net module).
+
+ return _old_imp(initial_name, log)
+
+
+def ImportName(name, log=None):
+ mod = _imp(name, log)
+
+ components = name.split('.')
+
+ old_comp = None
+ for comp in components[1:]:
+ try:
+ #this happens in the following case:
+ #we have mx.DateTime.mxDateTime.mxDateTime.pyd
+ #but after importing it, mx.DateTime.mxDateTime shadows access to mxDateTime.pyd
+ mod = getattr(mod, comp)
+ except AttributeError:
+ if old_comp != comp:
+ raise
+
+ old_comp = comp
+
+ return mod
+
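ImportName works around the fact that `__import__('a.b.c')` returns the top-level package `a`, not the leaf module, so the remaining dotted components have to be resolved with getattr. A minimal sketch of that walk, without the error-tolerant fallbacks of the code above:

```python
def import_name(name):
    # __import__ gives back the top-level package for a dotted name...
    mod = __import__(name)
    # ...so walk the remaining components attribute by attribute.
    for comp in name.split('.')[1:]:
        mod = getattr(mod, comp)
    return mod

path_mod = import_name('os.path')
print(path_mod.join('a', 'b'))
```

The `old_comp` check in the real ImportName exists because a submodule can shadow an extension module of the same name, which this sketch does not attempt to handle.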
diff --git a/python/helpers/pydev/pydevd_io.py b/python/helpers/pydev/pydevd_io.py
new file mode 100644
index 0000000..ca8c493
--- /dev/null
+++ b/python/helpers/pydev/pydevd_io.py
@@ -0,0 +1,91 @@
+class IORedirector:
+ '''This class redirects the write function to multiple streams.
+ '''
+
+ def __init__(self, *args):
+ self._redirectTo = args
+
+ def write(self, s):
+ for r in self._redirectTo:
+ try:
+ r.write(s)
+ except:
+ pass
+
+ def isatty(self):
+ return False #not really a file
+
+ def flush(self):
+ for r in self._redirectTo:
+ r.flush()
+
+ def __getattr__(self, name):
+ for r in self._redirectTo:
+ if hasattr(r, name):
+ return r.__getattribute__(name)
+ raise AttributeError(name)
+
+class IOBuf:
+ '''This class works as a replacement for stdout and stderr.
+ It is a buffer: when its contents are requested, it erases what it
+ has so far, so that the next call will not return the same contents again.
+ '''
+ def __init__(self):
+ self.buflist = []
+
+ def getvalue(self):
+ b = self.buflist
+ self.buflist = [] #clear it
+ return ''.join(b)
+
+ def write(self, s):
+ self.buflist.append(s)
+
+ def isatty(self):
+ return False #not really a file
+
+ def flush(self):
+ pass
+
+
+class _RedirectionsHolder:
+ _stack_stdout = []
+ _stack_stderr = []
+
+
+def StartRedirect(keep_original_redirection=False, std='stdout'):
+ '''
+ @param std: 'stdout', 'stderr', or 'both'
+ '''
+ import sys
+ buf = IOBuf()
+
+ if std == 'both':
+ config_stds = ['stdout', 'stderr']
+ else:
+ config_stds = [std]
+
+ for std in config_stds:
+ original = getattr(sys, std)
+ stack = getattr(_RedirectionsHolder, '_stack_%s' % std)
+ stack.append(original)
+
+ if keep_original_redirection:
+ setattr(sys, std, IORedirector(buf, getattr(sys, std)))
+ else:
+ setattr(sys, std, buf)
+ return buf
+
+
+def EndRedirect(std='stdout'):
+ import sys
+ if std == 'both':
+ config_stds = ['stdout', 'stderr']
+ else:
+ config_stds = [std]
+ for std in config_stds:
+ stack = getattr(_RedirectionsHolder, '_stack_%s' % std)
+ setattr(sys, std, stack.pop())
+
+
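StartRedirect/EndRedirect together implement a small stack-based stdout/stderr capture: the current stream is pushed, a buffer is installed, and the pop restores whatever was there before, so captures can nest. A condensed, single-stream sketch of the same mechanics (names simplified from the code above):

```python
import sys

class Buf(object):
    """Minimal stand-in for IOBuf: collects writes, clears on read."""
    def __init__(self):
        self.buflist = []
    def write(self, s):
        self.buflist.append(s)
    def flush(self):
        pass
    def getvalue(self):
        b, self.buflist = self.buflist, []
        return ''.join(b)

_stack = []

def start_redirect():
    buf = Buf()
    _stack.append(sys.stdout)   # remember the current stream
    sys.stdout = buf
    return buf

def end_redirect():
    sys.stdout = _stack.pop()   # restore the most recent one

buf = start_redirect()
print('captured')
end_redirect()
print(repr(buf.getvalue()))  # -> 'captured\n'
```

The stack is what makes this safe to call re-entrantly; a single saved reference would lose the original stream after two nested starts.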
diff --git a/python/helpers/pydev/pydevd_psyco_stub.py b/python/helpers/pydev/pydevd_psyco_stub.py
new file mode 100644
index 0000000..f196d88
--- /dev/null
+++ b/python/helpers/pydev/pydevd_psyco_stub.py
@@ -0,0 +1,36 @@
+'''
+ Psyco stub: should implement all the external API from psyco.
+'''
+
+def proxy(func, *args, **kwargs):
+ return func
+
+def bind(func, *args, **kwargs):
+ return func
+
+def unbind(func, *args, **kwargs):
+ return func
+
+def unproxy(func, *args, **kwargs):
+ return func
+
+def full(*args, **kwargs):
+ pass
+
+def log(*args, **kwargs):
+ pass
+
+def runonly(*args, **kwargs):
+ pass
+
+def background(*args, **kwargs):
+ pass
+
+def cannotcompile(*args, **kwargs):
+ pass
+
+def profile(*args, **kwargs):
+ pass
+
+def stop(*args, **kwargs):
+ pass
diff --git a/python/helpers/pydev/pydevd_reload.py b/python/helpers/pydev/pydevd_reload.py
new file mode 100644
index 0000000..03ca2fd
--- /dev/null
+++ b/python/helpers/pydev/pydevd_reload.py
@@ -0,0 +1,208 @@
+"""
+Copied from Python's xreload (kept locally so that it can be modified).
+
+Alternative to reload().
+
+This works by executing the module in a scratch namespace, and then
+patching classes, methods and functions in place. This avoids the
+need to patch instances. New objects are copied into the target
+namespace.
+
+Some of the many limitations include:
+
+- Global mutable objects other than classes are simply replaced, not patched
+
+- Code using metaclasses is not handled correctly
+
+- Code creating global singletons is not handled correctly
+
+- Functions and methods using decorators (other than classmethod and
+ staticmethod) are not handled correctly
+
+- Renamings are not handled correctly
+
+- Dependent modules are not reloaded
+
+- When a dependent module contains 'from foo import bar', and
+ reloading foo deletes foo.bar, the dependent module continues to use
+ the old foo.bar object rather than failing
+
+- Frozen modules and modules loaded from zip files aren't handled
+ correctly
+
+- Classes involving __slots__ are not handled correctly
+"""
+
+import imp
+import sys
+import types
+
+
+def xreload(mod):
+ """Reload a module in place, updating classes, methods and functions.
+
+ Args:
+ mod: a module object
+
+ Returns:
+ The (updated) input object itself.
+ """
+ # Get the module name, e.g. 'foo.bar.whatever'
+ modname = mod.__name__
+ # Get the module namespace (dict) early; this is part of the type check
+ modns = mod.__dict__
+ # Parse it into package name and module name, e.g. 'foo.bar' and 'whatever'
+ i = modname.rfind(".")
+ if i >= 0:
+ pkgname, modname = modname[:i], modname[i+1:]
+ else:
+ pkgname = None
+ # Compute the search path
+ if pkgname:
+ # We're not reloading the package, only the module in it
+ pkg = sys.modules[pkgname]
+ path = pkg.__path__ # Search inside the package
+ else:
+ # Search the top-level module path
+ pkg = None
+ path = None # Make find_module() use the default search path
+ # Find the module; may raise ImportError
+ (stream, filename, (suffix, mode, kind)) = imp.find_module(modname, path)
+ # Turn it into a code object
+ try:
+ # Is it Python source code or byte code read from a file?
+ if kind not in (imp.PY_COMPILED, imp.PY_SOURCE):
+ # Fall back to built-in reload()
+ return reload(mod)
+ if kind == imp.PY_SOURCE:
+ source = stream.read()
+ code = compile(source, filename, "exec")
+ else:
+ import marshal
+ code = marshal.load(stream)
+ finally:
+ if stream:
+ stream.close()
+ # Execute the code. We copy the module dict to a temporary; then
+ # clear the module dict; then execute the new code in the module
+ # dict; then swap things back and around. This trick (due to
+ # Glyph Lefkowitz) ensures that the (readonly) __globals__
+ # attribute of methods and functions is set to the correct dict
+ # object.
+ tmpns = modns.copy()
+ modns.clear()
+ modns["__name__"] = tmpns["__name__"]
+ exec(code, modns)
+ # Now we get to the hard part
+ oldnames = set(tmpns)
+ newnames = set(modns)
+ # Update attributes in place
+ for name in oldnames & newnames:
+ modns[name] = _update(tmpns[name], modns[name])
+ # Done!
+ return mod
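
The module-dict swap above (copy the dict aside, clear it, then exec the new code into the same dict object) is what keeps the read-only `__globals__` of existing functions pointing at the live module namespace. A minimal sketch of the trick, using a hypothetical module name:

```python
import types

# Build a throwaway module and a function inside it (hypothetical names).
mod = types.ModuleType("demo")
exec("def f():\n    return VALUE", mod.__dict__)
func = mod.f

# Reload in place: clear the dict, then exec new source into the SAME
# dict object, as xreload() does above.
mod.__dict__.clear()
mod.__dict__["__name__"] = "demo"
exec("VALUE = 42\ndef f():\n    return VALUE", mod.__dict__)

# The pre-reload function object now sees the new globals, because
# its __globals__ is still the very same dict.
assert func.__globals__ is mod.__dict__
assert func() == 42
```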
+
+
+def _update(oldobj, newobj):
+ """Update oldobj, if possible in place, with newobj.
+
+ If oldobj is immutable, this simply returns newobj.
+
+ Args:
+ oldobj: the object to be updated
+ newobj: the object used as the source for the update
+
+ Returns:
+ either oldobj, updated in place, or newobj.
+ """
+ if oldobj is newobj:
+ # Probably something imported
+ return newobj
+ if type(oldobj) is not type(newobj):
+ # Cop-out: if the type changed, give up
+ return newobj
+ if hasattr(newobj, "__reload_update__"):
+ # Provide a hook for updating
+ return newobj.__reload_update__(oldobj)
+
+ if hasattr(types, 'ClassType'):
+ classtype = types.ClassType
+ else:
+ classtype = type
+
+ if isinstance(newobj, classtype):
+ return _update_class(oldobj, newobj)
+ if isinstance(newobj, types.FunctionType):
+ return _update_function(oldobj, newobj)
+ if isinstance(newobj, types.MethodType):
+ return _update_method(oldobj, newobj)
+ if isinstance(newobj, classmethod):
+ return _update_classmethod(oldobj, newobj)
+ if isinstance(newobj, staticmethod):
+ return _update_staticmethod(oldobj, newobj)
+ # Not something we recognize, just give up
+ return newobj
+
+
+# All of the following functions have the same signature as _update()
+
+
+def _update_function(oldfunc, newfunc):
+ """Update a function object."""
+ oldfunc.__doc__ = newfunc.__doc__
+ oldfunc.__dict__.update(newfunc.__dict__)
+
+ try:
+ oldfunc.__code__ = newfunc.__code__
+ except AttributeError:
+ oldfunc.func_code = newfunc.func_code
+ try:
+ oldfunc.__defaults__ = newfunc.__defaults__
+ except AttributeError:
+ oldfunc.func_defaults = newfunc.func_defaults
+
+ return oldfunc
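
Assigning `__code__` is what makes the update visible through every existing reference to the old function object. A small sketch (Python 3 attribute names; the `func_code`/`func_defaults` fallbacks above cover Python 2):

```python
def old(x):
    # original behaviour
    return x + 1

def new(x):
    # replacement behaviour
    return x * 10

alias = old  # an existing reference held elsewhere

# Swap the code object in place, as _update_function() does above.
old.__code__ = new.__code__
old.__defaults__ = new.__defaults__

# Every holder of the old function object sees the new behaviour.
assert alias(3) == 30
```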
+
+
+def _update_method(oldmeth, newmeth):
+ """Update a method object."""
+ # XXX What if im_func is not a function?
+ _update(oldmeth.im_func, newmeth.im_func)
+ return oldmeth
+
+
+def _update_class(oldclass, newclass):
+ """Update a class object."""
+ olddict = oldclass.__dict__
+ newdict = newclass.__dict__
+ oldnames = set(olddict)
+ newnames = set(newdict)
+ for name in newnames - oldnames:
+ setattr(oldclass, name, newdict[name])
+ for name in oldnames - newnames:
+ delattr(oldclass, name)
+ for name in oldnames & newnames - set(['__dict__', '__doc__']):
+ setattr(oldclass, name, _update(olddict[name], newdict[name]))
+ return oldclass
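
Because setattr/delattr mutate the class object itself, instances created before the update pick up the new members immediately. A reduced sketch of the idea (hypothetical class names, skipping the special slots as above):

```python
class Old:
    def greet(self):
        return "old"

class New:
    def greet(self):
        return "new"
    extra = 1

inst = Old()  # created before the "reload"

# Copy the new class body onto the old class object, roughly what
# _update_class() does above.
for name, value in New.__dict__.items():
    if name not in ('__dict__', '__weakref__'):
        setattr(Old, name, value)

# The pre-existing instance sees the updated class in place.
assert inst.greet() == "new"
assert inst.extra == 1
```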
+
+
+def _update_classmethod(oldcm, newcm):
+ """Update a classmethod update."""
+ # While we can't modify the classmethod object itself (it has no
+ # mutable attributes), we *can* extract the underlying function
+ # (by calling __get__(), which returns a method object) and update
+ # it in-place. We don't have the class available to pass to
+ # __get__() but any object except None will do.
+ _update(oldcm.__get__(0), newcm.__get__(0))
+ return newcm
+
+
+def _update_staticmethod(oldsm, newsm):
+ """Update a staticmethod update."""
+ # While we can't modify the staticmethod object itself (it has no
+ # mutable attributes), we *can* extract the underlying function
+ # (by calling __get__(), which returns it) and update it in-place.
+ # We don't have the class available to pass to __get__() but any
+ # object except None will do.
+ _update(oldsm.__get__(0), newsm.__get__(0))
+ return newsm
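
The `__get__(0)` trick relied on above works because the descriptor protocol hands back the wrapped function (directly for a staticmethod, via a bound method for a classmethod), and any non-None instance is accepted:

```python
def f():
    return "hello"

sm = staticmethod(f)
cm = classmethod(f)

# staticmethod.__get__ returns the wrapped function itself ...
assert sm.__get__(0) is f

# ... while classmethod.__get__ returns a method object whose
# underlying function is f.
assert cm.__get__(0).__func__ is f
```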
diff --git a/python/helpers/pydev/pydevd_resolver.py b/python/helpers/pydev/pydevd_resolver.py
new file mode 100644
index 0000000..114b849
--- /dev/null
+++ b/python/helpers/pydev/pydevd_resolver.py
@@ -0,0 +1,365 @@
+try:
+ import StringIO
+except:
+ import io as StringIO
+import traceback
+
+try:
+ __setFalse = False
+except:
+ import __builtin__
+ setattr(__builtin__, 'True', 1)
+ setattr(__builtin__, 'False', 0)
+
+import pydevd_constants
+
+
+MAX_ITEMS_TO_HANDLE = 500
+TOO_LARGE_MSG = 'Too large to show contents. Max items to show: ' + str(MAX_ITEMS_TO_HANDLE)
+TOO_LARGE_ATTR = 'Unable to handle:'
+
+#=======================================================================================================================
+# UnableToResolveVariableException
+#=======================================================================================================================
+class UnableToResolveVariableException(Exception):
+ pass
+
+
+#=======================================================================================================================
+# InspectStub
+#=======================================================================================================================
+class InspectStub:
+ def isbuiltin(self, _args):
+ return False
+ def isroutine(self, object):
+ return False
+
+try:
+ import inspect
+except:
+ inspect = InspectStub()
+
+try:
+ import java.lang #@UnresolvedImport
+except:
+ pass
+
+#the types module does not include MethodWrapperType
+try:
+ MethodWrapperType = type([].__str__)
+except:
+ MethodWrapperType = None
+
+
+#=======================================================================================================================
+# AbstractResolver
+#=======================================================================================================================
+class AbstractResolver:
+ '''
+ This class exists only for documentation purposes to explain how to create a resolver.
+
+ Some examples on how to resolve things:
+ - list: getDictionary could return a dict with index->item and use the index to resolve it later
+ - set: getDictionary could return a dict with id(object)->object and reiterate in that array to resolve it later
+ - arbitrary instance: getDictionary could return dict with attr_name->attr and use getattr to resolve it later
+ '''
+
+ def resolve(self, var, attribute):
+ '''
+ Resolve a child item of var, given the string key under which it was
+ returned in the dictionary previously produced by getDictionary.
+
+ @param var: this is the actual variable to be resolved.
+ @param attribute: this is the string representation of a key previously returned in getDictionary.
+ '''
+ raise NotImplementedError
+
+ def getDictionary(self, var):
+ '''
+ @param var: the variable whose children should be fetched.
+
+ @return: a dictionary where each pair key, value should be shown to the user as children items
+ in the variables view for the given var.
+ '''
+ raise NotImplementedError
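
Following the contract above, a minimal list resolver (one of the examples mentioned in the class docstring) could look like this; `ListResolver` is a hypothetical name, not part of this file:

```python
class ListResolver:
    """Expose a list's children as index -> item, per the AbstractResolver contract."""

    def getDictionary(self, var):
        # each pair becomes a child row in the variables view
        d = {}
        for i in range(len(var)):
            d[str(i)] = var[i]
        d['__len__'] = len(var)
        return d

    def resolve(self, var, attribute):
        # the attribute is the stringified index returned above
        if attribute == '__len__':
            return None
        return var[int(attribute)]

resolver = ListResolver()
children = resolver.getDictionary(['a', 'b'])
assert children['1'] == 'b'
assert resolver.resolve(['a', 'b'], '0') == 'a'
```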
+
+
+#=======================================================================================================================
+# DefaultResolver
+#=======================================================================================================================
+class DefaultResolver:
+ '''
+ DefaultResolver is the class that actually decides how to show a variable.
+ '''
+
+ def resolve(self, var, attribute):
+ return getattr(var, attribute)
+
+ def getDictionary(self, var):
+ if MethodWrapperType:
+ return self._getPyDictionary(var)
+ else:
+ return self._getJyDictionary(var)
+
+ def _getJyDictionary(self, obj):
+ ret = {}
+ found = java.util.HashMap()
+
+ original = obj
+ if hasattr(obj, '__class__') and obj.__class__ == java.lang.Class:
+
+ #get info about superclasses
+ classes = []
+ classes.append(obj)
+ c = obj.getSuperclass()
+ while c is not None:
+ classes.append(c)
+ c = c.getSuperclass()
+
+ #get info about interfaces
+ interfs = []
+ for obj in classes:
+ interfs.extend(obj.getInterfaces())
+ classes.extend(interfs)
+
+ #now is the time when we actually get info on the declared methods and fields
+ for obj in classes:
+
+ declaredMethods = obj.getDeclaredMethods()
+ declaredFields = obj.getDeclaredFields()
+ for i in range(len(declaredMethods)):
+ name = declaredMethods[i].getName()
+ ret[name] = declaredMethods[i].toString()
+ found.put(name, 1)
+
+ for i in range(len(declaredFields)):
+ name = declaredFields[i].getName()
+ found.put(name, 1)
+ #if declaredFields[i].isAccessible():
+ declaredFields[i].setAccessible(True)
+ #ret[name] = declaredFields[i].get( declaredFields[i] )
+ try:
+ ret[name] = declaredFields[i].get(original)
+ except:
+ ret[name] = declaredFields[i].toString()
+
+ #a plain dir() does not always return all the info; that's why we have the
+ #part above (e.g.: a dir() on String misses methods that come from other
+ #interfaces, such as charAt)
+ try:
+ d = dir(original)
+ for name in d:
+ if found.get(name) != 1:
+ ret[name] = getattr(original, name)
+ except:
+ #sometimes we're unable to do a dir
+ pass
+
+ return ret
+
+ def _getPyDictionary(self, var):
+ filterPrivate = False
+ filterSpecial = True
+ filterFunction = True
+ filterBuiltIn = True
+
+ names = dir(var)
+ if not names and hasattr(var, '__members__'):
+ names = var.__members__
+ d = {}
+
+ #Be aware that the order in which the filters are applied attempts to
+ #optimize the operation by removing as many items as possible in the
+ #first filters, leaving fewer items for later filters
+
+ if filterBuiltIn or filterFunction:
+ for n in names:
+ if filterSpecial:
+ if n.startswith('__') and n.endswith('__'):
+ continue
+
+ if filterPrivate:
+ if n.startswith('_') or n.endswith('__'):
+ continue
+
+ try:
+ attr = getattr(var, n)
+
+ #filter builtins?
+ if filterBuiltIn:
+ if inspect.isbuiltin(attr):
+ continue
+
+ #filter functions?
+ if filterFunction:
+ if inspect.isroutine(attr) or isinstance(attr, MethodWrapperType):
+ continue
+ except:
+ #if some error occurs getting the attribute, show the error text to the user.
+ strIO = StringIO.StringIO()
+ traceback.print_exc(file=strIO)
+ attr = strIO.getvalue()
+
+ d[ n ] = attr
+
+ return d
+
+
+#=======================================================================================================================
+# DictResolver
+#=======================================================================================================================
+class DictResolver:
+
+ def resolve(self, dict, key):
+ if key == '__len__':
+ return None
+
+ if '(' not in key:
+ #we have to handle this case because the dict resolver is also used to resolve the global
+ #and local scopes directly (which already contain the items as-is)
+ return dict[key]
+
+ #ok, we have to iterate over the items to find the one that matches the id, because that's the only way
+ #to actually find the reference from the string we have before.
+ expected_id = int(key.split('(')[-1][:-1])
+ for key, val in dict.items():
+ if id(key) == expected_id:
+ return val
+
+ raise UnableToResolveVariableException()
+
+ def keyStr(self, key):
+ if isinstance(key, str):
+ return "'%s'"%key
+ else:
+ if not pydevd_constants.IS_PY3K:
+ if isinstance(key, unicode):
+ return "u'%s'"%key
+ return key
+
+ def getDictionary(self, dict):
+ ret = {}
+
+ for key, val in dict.items():
+ #we need to add the id because otherwise we cannot find the real object to get its contents later on.
+ key = '%s (%s)' % (self.keyStr(key), id(key))
+ ret[key] = val
+
+ ret['__len__'] = len(dict)
+ return ret
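
The `key (id)` encoding produced by getDictionary is what resolve() parses back apart later; a self-contained sketch of the round trip:

```python
k = 'name'
data = {k: 1}

# display key as built by DictResolver.getDictionary above
display = "'%s' (%s)" % (k, id(k))

# resolve() recovers the id from the trailing "(...)" and scans the
# dict for the key object with that identity
expected_id = int(display.split('(')[-1][:-1])
value = None
for key, val in data.items():
    if id(key) == expected_id:
        value = val

assert value == 1
```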
+
+
+
+#=======================================================================================================================
+# TupleResolver
+#=======================================================================================================================
+class TupleResolver: #to enumerate tuples and lists
+
+ def resolve(self, var, attribute):
+ '''
+ @param var: that's the original attribute
+ @param attribute: that's the key passed in the dict (as a string)
+ '''
+ if attribute == '__len__' or attribute == TOO_LARGE_ATTR:
+ return None
+ return var[int(attribute)]
+
+ def getDictionary(self, var):
+ #return dict( [ (i, x) for i, x in enumerate(var) ] )
+ # modified because jython does not support enumerate
+ l = len(var)
+ d = {}
+
+ if l < MAX_ITEMS_TO_HANDLE:
+ format = '%0' + str(len(str(l))) + 'd'
+
+
+ for i, item in zip(range(l), var):
+ d[ format % i ] = item
+ else:
+ d[TOO_LARGE_ATTR] = TOO_LARGE_MSG
+ d['__len__'] = len(var)
+ return d
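
The width computation above zero-pads the index keys so that they sort lexically in the same order as numerically; for example:

```python
l = 120  # collection length

# '%0' + width + 'd', as in TupleResolver.getDictionary above
fmt = '%0' + str(len(str(l))) + 'd'

assert fmt == '%03d'       # three digits cover indices up to 999
assert fmt % 7 == '007'
# zero-padding keeps string order consistent with numeric order
assert sorted([fmt % 10, fmt % 2]) == ['002', '010']
```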
+
+
+
+#=======================================================================================================================
+# SetResolver
+#=======================================================================================================================
+class SetResolver:
+ '''
+ Resolves a set as dict id(object)->object
+ '''
+
+ def resolve(self, var, attribute):
+ if attribute == '__len__':
+ return None
+
+ attribute = int(attribute)
+ for v in var:
+ if id(v) == attribute:
+ return v
+
+ raise UnableToResolveVariableException('Unable to resolve %s in %s' % (attribute, var))
+
+ def getDictionary(self, var):
+ d = {}
+ for item in var:
+ d[ id(item) ] = item
+ d['__len__'] = len(var)
+ return d
+
+
+#=======================================================================================================================
+# InstanceResolver
+#=======================================================================================================================
+class InstanceResolver:
+
+ def resolve(self, var, attribute):
+ field = var.__class__.getDeclaredField(attribute)
+ field.setAccessible(True)
+ return field.get(var)
+
+ def getDictionary(self, obj):
+ ret = {}
+
+ declaredFields = obj.__class__.getDeclaredFields()
+ for i in range(len(declaredFields)):
+ name = declaredFields[i].getName()
+ try:
+ declaredFields[i].setAccessible(True)
+ ret[name] = declaredFields[i].get(obj)
+ except:
+ traceback.print_exc()
+
+ return ret
+
+
+#=======================================================================================================================
+# JyArrayResolver
+#=======================================================================================================================
+class JyArrayResolver:
+ '''
+ This resolves a regular Object[] array from Java
+ '''
+
+ def resolve(self, var, attribute):
+ if attribute == '__len__':
+ return None
+ return var[int(attribute)]
+
+ def getDictionary(self, obj):
+ ret = {}
+
+ for i in range(len(obj)):
+ ret[ i ] = obj[i]
+
+ ret['__len__'] = len(obj)
+ return ret
+
+defaultResolver = DefaultResolver()
+dictResolver = DictResolver()
+tupleResolver = TupleResolver()
+instanceResolver = InstanceResolver()
+jyArrayResolver = JyArrayResolver()
+setResolver = SetResolver()
diff --git a/python/helpers/pydev/pydevd_signature.py b/python/helpers/pydev/pydevd_signature.py
new file mode 100644
index 0000000..2f9a182
--- /dev/null
+++ b/python/helpers/pydev/pydevd_signature.py
@@ -0,0 +1,131 @@
+import inspect
+import trace
+import os
+
+trace._warn = lambda *args: None # workaround for http://bugs.python.org/issue17143 (PY-8706)
+import gc
+from pydevd_comm import CMD_SIGNATURE_CALL_TRACE, NetCommand
+import pydevd_vars
+
+class Signature(object):
+ def __init__(self, file, name):
+ self.file = file
+ self.name = name
+ self.args = []
+ self.args_str = []
+
+ def add_arg(self, name, type):
+ self.args.append((name, type))
+ self.args_str.append("%s:%s"%(name, type))
+
+ def __str__(self):
+ return "%s %s(%s)"%(self.file, self.name, ", ".join(self.args_str))
+
+
+class SignatureFactory(object):
+ def __init__(self):
+ self._caller_cache = {}
+ self.project_roots = os.getenv('PYCHARM_PROJECT_ROOTS', '').split(os.pathsep)
+
+ def is_in_scope(self, filename):
+ filename = os.path.normcase(filename)
+ for root in self.project_roots:
+ root = os.path.normcase(root)
+ if filename.startswith(root):
+ return True
+ return False
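
The scope test is a normcase-folded prefix match against each project root, which this standalone sketch mirrors (the paths are hypothetical):

```python
import os

project_roots = ['/home/user/project', '/tmp/other']
filename = os.path.normcase('/home/user/project/src/main.py')

# same check as is_in_scope(): case-folded prefix match per root
in_scope = any(filename.startswith(os.path.normcase(root))
               for root in project_roots)

assert in_scope
```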
+
+
+
+ def create_signature(self, frame):
+ try:
+ code = frame.f_code
+ locals = frame.f_locals
+ filename, modulename, funcname = self.file_module_function_of(frame)
+ res = Signature(filename, funcname)
+ for i in range(0, code.co_argcount):
+ name = code.co_varnames[i]
+ tp = type(locals[name])
+ class_name = tp.__name__
+ if class_name == 'instance':
+ tp = locals[name].__class__
+ class_name = tp.__name__
+
+ if tp.__module__ and tp.__module__ != '__main__':
+ class_name = "%s.%s"%(tp.__module__, class_name)
+
+ res.add_arg(name, class_name)
+ return res
+ except:
+ import traceback
+ traceback.print_exc()
+
+
+ def file_module_function_of(self, frame): #this code is taken from the trace module and fixed to work with new-style classes
+ code = frame.f_code
+ filename = code.co_filename
+ if filename:
+ modulename = trace.modname(filename)
+ else:
+ modulename = None
+
+ funcname = code.co_name
+ clsname = None
+ if code in self._caller_cache:
+ if self._caller_cache[code] is not None:
+ clsname = self._caller_cache[code]
+ else:
+ self._caller_cache[code] = None
+ ## use of gc.get_referrers() was suggested by Michael Hudson
+ # all functions which refer to this code object
+ funcs = [f for f in gc.get_referrers(code)
+ if inspect.isfunction(f)]
+ # require len(func) == 1 to avoid ambiguity caused by calls to
+ # new.function(): "In the face of ambiguity, refuse the
+ # temptation to guess."
+ if len(funcs) == 1:
+ dicts = [d for d in gc.get_referrers(funcs[0])
+ if isinstance(d, dict)]
+ if len(dicts) == 1:
+ classes = [c for c in gc.get_referrers(dicts[0])
+ if hasattr(c, "__bases__") or inspect.isclass(c)]
+ elif len(dicts) > 1: #new-style classes
+ classes = [c for c in gc.get_referrers(dicts[1])
+ if hasattr(c, "__bases__") or inspect.isclass(c)]
+ else:
+ classes = []
+
+ if len(classes) == 1:
+ # ditto for new.classobj()
+ clsname = classes[0].__name__
+ # cache the result - assumption is that new.* is
+ # not called later to disturb this relationship
+ # _caller_cache could be flushed if functions in
+ # the new module get called.
+ self._caller_cache[code] = clsname
+
+
+ if clsname is not None:
+ funcname = "%s.%s" % (clsname, funcname)
+
+ return filename, modulename, funcname
+
+def create_signature_message(signature):
+ cmdTextList = ["<xml>"]
+
+ cmdTextList.append('<call_signature file="%s" name="%s">' % (pydevd_vars.makeValidXmlValue(signature.file), pydevd_vars.makeValidXmlValue(signature.name)))
+
+ for arg in signature.args:
+ cmdTextList.append('<arg name="%s" type="%s"></arg>' % (pydevd_vars.makeValidXmlValue(arg[0]), pydevd_vars.makeValidXmlValue(arg[1])))
+
+ cmdTextList.append("</call_signature></xml>")
+ cmdText = ''.join(cmdTextList)
+ return NetCommand(CMD_SIGNATURE_CALL_TRACE, 0, cmdText)
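
The payload assembled above is a flat XML fragment; this sketch reproduces the same shape, using `xml.sax.saxutils.escape` to stand in for `pydevd_vars.makeValidXmlValue` (an assumption: the real helper may escape differently):

```python
from xml.sax.saxutils import escape

file_, name = 'demo.py', 'f'          # hypothetical signature data
args = [('x', 'int'), ('y', 'str')]

parts = ['<xml>']
parts.append('<call_signature file="%s" name="%s">' % (escape(file_), escape(name)))
for arg_name, arg_type in args:
    parts.append('<arg name="%s" type="%s"></arg>' % (escape(arg_name), escape(arg_type)))
parts.append('</call_signature></xml>')
payload = ''.join(parts)

assert payload.startswith('<xml><call_signature')
assert '<arg name="y" type="str"></arg>' in payload
```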
+
+def sendSignatureCallTrace(dbg, frame, filename):
+ if dbg.signature_factory:
+ if dbg.signature_factory.is_in_scope(filename):
+ dbg.writer.addCommand(create_signature_message(dbg.signature_factory.create_signature(frame)))
+
+
+
diff --git a/python/helpers/pydev/pydevd_tracing.py b/python/helpers/pydev/pydevd_tracing.py
new file mode 100644
index 0000000..7c197ef
--- /dev/null
+++ b/python/helpers/pydev/pydevd_tracing.py
@@ -0,0 +1,90 @@
+from pydevd_constants import * #@UnusedWildImport
+
+try:
+ import cStringIO as StringIO #may not always be available @UnusedImport
+except:
+ try:
+ import StringIO #@Reimport
+ except:
+ import io as StringIO
+
+if USE_LIB_COPY:
+ import _pydev_threading as threading
+else:
+ import threading
+
+import sys #@Reimport
+import traceback
+
+class TracingFunctionHolder:
+ '''This class exists just to keep some variables (so that we don't keep them in the global namespace).
+ '''
+ _original_tracing = None
+ _warn = True
+ _lock = threading.Lock()
+ _traceback_limit = 1
+ _warnings_shown = {}
+
+
+def GetExceptionTracebackStr():
+ exc_info = sys.exc_info()
+ s = StringIO.StringIO()
+ traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], file=s)
+ return s.getvalue()
+
+def _GetStackStr(frame):
+
+ msg = '\nIf this is needed, please check: ' + \
+ '\nhttp://pydev.blogspot.com/2007/06/why-cant-pydev-debugger-work-with.html' + \
+ '\nto see how to restore the debug tracing back correctly.\n'
+
+ if TracingFunctionHolder._traceback_limit:
+ s = StringIO.StringIO()
+ s.write('Call Location:\n')
+ traceback.print_stack(f=frame, limit=TracingFunctionHolder._traceback_limit, file=s)
+ msg = msg + s.getvalue()
+
+ return msg
+
+def _InternalSetTrace(tracing_func):
+ if TracingFunctionHolder._warn:
+ frame = GetFrame()
+ if frame is not None and frame.f_back is not None:
+ if not frame.f_back.f_code.co_filename.lower().endswith('threading.py'):
+
+ message = \
+ '\nPYDEV DEBUGGER WARNING:' + \
+ '\nsys.settrace() should not be used when the debugger is being used.' + \
+ '\nThis may cause the debugger to stop working correctly.' + \
+ '%s' % _GetStackStr(frame.f_back)
+
+ if message not in TracingFunctionHolder._warnings_shown:
+ #only warn about each message once...
+ TracingFunctionHolder._warnings_shown[message] = 1
+ sys.stderr.write('%s\n' % (message,))
+ sys.stderr.flush()
+
+ if TracingFunctionHolder._original_tracing:
+ TracingFunctionHolder._original_tracing(tracing_func)
+
+def SetTrace(tracing_func):
+ TracingFunctionHolder._lock.acquire()
+ try:
+ TracingFunctionHolder._warn = False
+ _InternalSetTrace(tracing_func)
+ TracingFunctionHolder._warn = True
+ finally:
+ TracingFunctionHolder._lock.release()
+
+
+def ReplaceSysSetTraceFunc():
+ if TracingFunctionHolder._original_tracing is None:
+ TracingFunctionHolder._original_tracing = sys.settrace
+ sys.settrace = _InternalSetTrace
+
+def RestoreSysSetTraceFunc():
+ if TracingFunctionHolder._original_tracing is not None:
+ sys.settrace = TracingFunctionHolder._original_tracing
+ TracingFunctionHolder._original_tracing = None
+
+
diff --git a/python/helpers/pydev/pydevd_utils.py b/python/helpers/pydev/pydevd_utils.py
new file mode 100644
index 0000000..134b190
--- /dev/null
+++ b/python/helpers/pydev/pydevd_utils.py
@@ -0,0 +1,99 @@
+import traceback
+
+try:
+ from urllib import quote
+except:
+ from urllib.parse import quote
+
+import pydevd_constants
+import pydev_log
+
+def to_number(x):
+ if is_string(x):
+ try:
+ n = float(x)
+ return n
+ except ValueError:
+ pass
+
+ l = x.find('(')
+ if l != -1:
+ y = x[0:l-1]
+ #print y
+ try:
+ n = float(y)
+ return n
+ except ValueError:
+ pass
+ return None
+
+def compare_object_attrs(x, y):
+ try:
+ if x == y:
+ return 0
+ x_num = to_number(x)
+ y_num = to_number(y)
+ if x_num is not None and y_num is not None:
+ if x_num - y_num<0:
+ return -1
+ else:
+ return 1
+ if '__len__' == x:
+ return -1
+ if '__len__' == y:
+ return 1
+
+ return x.__cmp__(y)
+ except:
+ if pydevd_constants.IS_PY3K:
+ return (to_string(x) > to_string(y)) - (to_string(x) < to_string(y))
+ else:
+ return cmp(to_string(x), to_string(y))
+
+def cmp_to_key(mycmp):
+ 'Convert a cmp= function into a key= function'
+ class K(object):
+ def __init__(self, obj, *args):
+ self.obj = obj
+ def __lt__(self, other):
+ return mycmp(self.obj, other.obj) < 0
+ def __gt__(self, other):
+ return mycmp(self.obj, other.obj) > 0
+ def __eq__(self, other):
+ return mycmp(self.obj, other.obj) == 0
+ def __le__(self, other):
+ return mycmp(self.obj, other.obj) <= 0
+ def __ge__(self, other):
+ return mycmp(self.obj, other.obj) >= 0
+ def __ne__(self, other):
+ return mycmp(self.obj, other.obj) != 0
+ return K
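
The helper above mirrors `functools.cmp_to_key`; a usage sketch with the stdlib version (equivalent behaviour):

```python
from functools import cmp_to_key  # stdlib twin of the helper above

def compare(a, b):
    # old-style cmp: negative, zero, or positive
    return (a > b) - (a < b)

result = sorted([3, 1, 2], key=cmp_to_key(compare))
assert result == [1, 2, 3]
```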
+
+def is_string(x):
+ if pydevd_constants.IS_PY3K:
+ return isinstance(x, str)
+ else:
+ return isinstance(x, basestring)
+
+def to_string(x):
+ if is_string(x):
+ return x
+ else:
+ return str(x)
+
+def print_exc():
+ if traceback:
+ traceback.print_exc()
+
+def quote_smart(s, safe='/'):
+ if pydevd_constants.IS_PY3K:
+ return quote(s, safe)
+ else:
+ if isinstance(s, unicode):
+ s = s.encode('utf-8')
+
+ return quote(s, safe)
+
+
+
+
diff --git a/python/helpers/pydev/pydevd_vars.py b/python/helpers/pydev/pydevd_vars.py
new file mode 100644
index 0000000..b8f95fc
--- /dev/null
+++ b/python/helpers/pydev/pydevd_vars.py
@@ -0,0 +1,302 @@
+""" pydevd_vars deals with variables:
+ resolution/conversion to XML.
+"""
+import pickle
+from django_frame import DjangoTemplateFrame
+from pydevd_constants import * #@UnusedWildImport
+from types import * #@UnusedWildImport
+
+from pydevd_xml import *
+
+try:
+ from StringIO import StringIO
+except ImportError:
+ from io import StringIO
+import sys #@Reimport
+
+if USE_LIB_COPY:
+ import _pydev_threading as threading
+else:
+ import threading
+import pydevd_resolver
+import traceback
+
+try:
+ from pydevd_exec import Exec
+except:
+ from pydevd_exec2 import Exec
+
+#-------------------------------------------------------------------------- defining true and false for earlier versions
+
+try:
+ __setFalse = False
+except:
+ import __builtin__
+
+ setattr(__builtin__, 'True', 1)
+ setattr(__builtin__, 'False', 0)
+
+#------------------------------------------------------------------------------------------------------ class for errors
+
+class VariableError(RuntimeError): pass
+
+class FrameNotFoundError(RuntimeError): pass
+
+
+if USE_PSYCO_OPTIMIZATION:
+ try:
+ import psyco
+
+ varToXML = psyco.proxy(varToXML)
+ except ImportError:
+ if hasattr(sys, 'exc_clear'): #jython does not have it
+ sys.exc_clear() #don't keep the traceback -- clients don't want to see it
+
+def iterFrames(initialFrame):
+ """NO-YIELD VERSION: Iterates through all the frames starting at the specified frame (which will be the first returned item)"""
+ #cannot use yield
+ frames = []
+
+ while initialFrame is not None:
+ frames.append(initialFrame)
+ initialFrame = initialFrame.f_back
+
+ return frames
+
+def dumpFrames(thread_id):
+ sys.stdout.write('dumping frames\n')
+ if thread_id != GetThreadId(threading.currentThread()):
+ raise VariableError("findFrame: must execute on same thread")
+
+ curFrame = GetFrame()
+ for frame in iterFrames(curFrame):
+ sys.stdout.write('%s\n' % pickle.dumps(frame))
+
+
+#===============================================================================
+# AdditionalFramesContainer
+#===============================================================================
+class AdditionalFramesContainer:
+ lock = threading.Lock()
+ additional_frames = {} #dict of dicts
+
+
+def addAdditionalFrameById(thread_id, frames_by_id):
+ AdditionalFramesContainer.additional_frames[thread_id] = frames_by_id
+
+
+def removeAdditionalFrameById(thread_id):
+ del AdditionalFramesContainer.additional_frames[thread_id]
+
+
+
+def findFrame(thread_id, frame_id):
+ """ returns a frame on the thread that has a given frame_id """
+ if thread_id != GetThreadId(threading.currentThread()):
+ raise VariableError("findFrame: must execute on same thread")
+
+ lookingFor = int(frame_id)
+
+ if AdditionalFramesContainer.additional_frames:
+ if DictContains(AdditionalFramesContainer.additional_frames, thread_id):
+ frame = AdditionalFramesContainer.additional_frames[thread_id].get(lookingFor)
+
+ if frame is not None:
+ return frame
+
+ curFrame = GetFrame()
+ if frame_id == "*":
+ return curFrame # any frame is specified with "*"
+
+ frameFound = None
+
+ for frame in iterFrames(curFrame):
+ if lookingFor == id(frame):
+ frameFound = frame
+ del frame
+ break
+
+ del frame
+
+ #Important: python can hold a reference to the frame from the current context
+ #if an exception is raised, so, if we don't explicitly add those deletes
+ #we might have those variables living much more than we'd want to.
+
+ #I.e.: sys.exc_info holding reference to frame that raises exception (so, other places
+ #need to call sys.exc_clear())
+ del curFrame
+
+ if frameFound is None:
+ msgFrames = ''
+ i = 0
+
+ for frame in iterFrames(GetFrame()):
+ i += 1
+ msgFrames += str(id(frame))
+ if i % 5 == 0:
+ msgFrames += '\n'
+ else:
+ msgFrames += ' - '
+
+ errMsg = '''findFrame: frame not found.
+Looking for thread_id:%s, frame_id:%s
+Current thread_id:%s, available frames:
+%s
+''' % (thread_id, lookingFor, GetThreadId(threading.currentThread()), msgFrames)
+
+ sys.stderr.write(errMsg)
+ return None
+
+ return frameFound
+
+def resolveCompoundVariable(thread_id, frame_id, scope, attrs):
+ """ returns the value of the compound variable as a dictionary"""
+ frame = findFrame(thread_id, frame_id)
+ if frame is None:
+ return {}
+
+ attrList = attrs.split('\t')
+
+ if scope == "GLOBAL":
+ var = frame.f_globals
+ del attrList[0] # globals are special, and they get a single dummy unused attribute
+ else:
+ var = frame.f_locals
+ type, _typeName, resolver = getType(var)
+ try:
+ resolver.resolve(var, attrList[0])
+ except:
+ var = frame.f_globals
+
+ for k in attrList:
+ type, _typeName, resolver = getType(var)
+ var = resolver.resolve(var, k)
+
+ try:
+ type, _typeName, resolver = getType(var)
+ return resolver.getDictionary(var)
+ except:
+ traceback.print_exc()
+
+
+def resolveVar(var, attrs):
+ attrList = attrs.split('\t')
+
+ for k in attrList:
+ type, _typeName, resolver = getType(var)
+
+ var = resolver.resolve(var, k)
+
+ try:
+ type, _typeName, resolver = getType(var)
+ return resolver.getDictionary(var)
+ except:
+ traceback.print_exc()
+
+
+def evaluateExpression(thread_id, frame_id, expression, doExec):
+ """returns the result of the evaluated expression
+ @param doExec: determines if we should do an exec or an eval
+ """
+ frame = findFrame(thread_id, frame_id)
+ if frame is None:
+ return
+
+ expression = str(expression.replace('@LINE@', '\n'))
+
+
+ #Not using frame.f_globals because of https://sourceforge.net/tracker2/?func=detail&aid=2541355&group_id=85796&atid=577329
+ #(Names not resolved in generator expression in method)
+ #See message: http://mail.python.org/pipermail/python-list/2009-January/526522.html
+ updated_globals = {}
+ updated_globals.update(frame.f_globals)
+ updated_globals.update(frame.f_locals) #locals later because it has precedence over the actual globals
+
+ try:
+ if doExec:
+ try:
+ #try to make it an eval (if it is an eval we can print it, otherwise we'll exec it and
+ #it will have whatever the user actually did)
+ compiled = compile(expression, '<string>', 'eval')
+ except:
+ Exec(expression, updated_globals, frame.f_locals)
+ else:
+ result = eval(compiled, updated_globals, frame.f_locals)
+ if result is not None: #Only print if it's not None (as python does)
+ sys.stdout.write('%s\n' % (result,))
+ return
+
+ else:
+ result = None
+ try:
+ result = eval(expression, updated_globals, frame.f_locals)
+ except Exception:
+ s = StringIO()
+ traceback.print_exc(file=s)
+
+ result = s.getvalue()
+
+ try:
+ try:
+ etype, value, tb = sys.exc_info()
+ result = value
+ finally:
+ etype = value = tb = None
+ except:
+ pass
+
+ result = ExceptionOnEvaluate(result)
+
+ return result
+ finally:
+ #Should not be kept alive if an exception happens and this frame is kept in the stack.
+ del updated_globals
+ del frame
+
+def changeAttrExpression(thread_id, frame_id, attr, expression):
+ """Changes some attribute in a given frame.
+ @note: it will not (currently) work if we're not in the topmost frame (that's a python
+ deficiency -- and it appears that there is no way of making it currently work --
+ will probably need some change to the python internals)
+ """
+ frame = findFrame(thread_id, frame_id)
+ if frame is None:
+ return
+
+ if isinstance(frame, DjangoTemplateFrame):
+ result = eval(expression, frame.f_globals, frame.f_locals)
+ frame.changeVariable(attr, result)
+
+ try:
+ expression = expression.replace('@LINE@', '\n')
+ #tests (needs proposed patch in python accepted)
+ # if hasattr(frame, 'savelocals'):
+ # if attr in frame.f_locals:
+ # frame.f_locals[attr] = eval(expression, frame.f_globals, frame.f_locals)
+ # frame.savelocals()
+ # return
+ #
+ # elif attr in frame.f_globals:
+ # frame.f_globals[attr] = eval(expression, frame.f_globals, frame.f_locals)
+ # return
+
+
+ if attr[:7] == "Globals":
+ attr = attr[8:]
+ if attr in frame.f_globals:
+ frame.f_globals[attr] = eval(expression, frame.f_globals, frame.f_locals)
+ return frame.f_globals[attr]
+ else:
+ #default way (only works for changing it in the topmost frame)
+ result = eval(expression, frame.f_globals, frame.f_locals)
+ Exec('%s=%s' % (attr, expression), frame.f_globals, frame.f_locals)
+ return result
+
+
+ except Exception:
+ traceback.print_exc()
+
+
+
+
+
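The `evaluateExpression` path above wraps any evaluation failure in `ExceptionOnEvaluate` so the caller can tell a real result apart from an error. A minimal, self-contained sketch of that pattern (names are illustrative, not the pydev API):

```python
import sys
import traceback
from io import StringIO


class ExceptionOnEvaluate:
    """Marker wrapper: the evaluation raised instead of returning."""
    def __init__(self, result):
        self.result = result


def safe_eval(expression, frame_globals, frame_locals):
    # Mirror the structure above: try eval, fall back to the captured
    # traceback text, and prefer the exception value itself when available.
    try:
        return eval(expression, frame_globals, frame_locals)
    except Exception:
        s = StringIO()
        traceback.print_exc(file=s)
        result = s.getvalue()
        etype, value, tb = sys.exc_info()
        if value is not None:
            result = value
        del tb  # avoid keeping the frame chain alive
        return ExceptionOnEvaluate(result)
```

A debugger front end can then branch on `isinstance(result, ExceptionOnEvaluate)` instead of parsing error strings.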
diff --git a/python/helpers/pydev/pydevd_vm_type.py b/python/helpers/pydev/pydevd_vm_type.py
new file mode 100644
index 0000000..76aa890
--- /dev/null
+++ b/python/helpers/pydev/pydevd_vm_type.py
@@ -0,0 +1,41 @@
+import sys
+
+#=======================================================================================================================
+# PydevdVmType
+#=======================================================================================================================
+class PydevdVmType:
+
+ PYTHON = 'python'
+ JYTHON = 'jython'
+ vm_type = None
+
+
+#=======================================================================================================================
+# SetVmType
+#=======================================================================================================================
+def SetVmType(vm_type):
+ PydevdVmType.vm_type = vm_type
+
+
+#=======================================================================================================================
+# GetVmType
+#=======================================================================================================================
+def GetVmType():
+ if PydevdVmType.vm_type is None:
+ SetupType()
+ return PydevdVmType.vm_type
+
+
+#=======================================================================================================================
+# SetupType
+#=======================================================================================================================
+def SetupType(str=None):
+ if str is not None:
+ PydevdVmType.vm_type = str
+ return
+
+ if sys.platform.startswith("java"):
+ PydevdVmType.vm_type = PydevdVmType.JYTHON
+ else:
+ PydevdVmType.vm_type = PydevdVmType.PYTHON
+
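`SetupType` above keys the decision off `sys.platform`: Jython reports a platform string starting with "java", everything else is treated as plain Python. The same check can be exercised in isolation (a sketch, with the platform passed in for testability):

```python
import sys


def detect_vm_type(platform=None):
    """Classify the runtime the way SetupType does: a platform starting
    with "java" means Jython; anything else means CPython."""
    if platform is None:
        platform = sys.platform
    return 'jython' if platform.startswith('java') else 'python'
```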
diff --git a/python/helpers/pydev/pydevd_xml.py b/python/helpers/pydev/pydevd_xml.py
new file mode 100644
index 0000000..73258e6
--- /dev/null
+++ b/python/helpers/pydev/pydevd_xml.py
@@ -0,0 +1,209 @@
+import pydev_log
+import traceback
+import pydevd_resolver
+from pydevd_constants import * #@UnusedWildImport
+from types import * #@UnusedWildImport
+
+try:
+ from urllib import quote
+except:
+ from urllib.parse import quote #@UnresolvedImport
+
+try:
+ from xml.sax.saxutils import escape
+
+ def makeValidXmlValue(s):
+ return escape(s, {'"': '&quot;'})
+except:
+ #Simple replacement if it's not there.
+ def makeValidXmlValue(s):
+ return s.replace('<', '&lt;').replace('>', '&gt;').replace('"', '&quot;')
+
+class ExceptionOnEvaluate:
+ def __init__(self, result):
+ self.result = result
+
+#------------------------------------------------------------------------------------------------------ resolvers in map
+
+if not sys.platform.startswith("java"):
+ typeMap = [
+ #None means that it should not be treated as a compound variable
+
+ #isinstance does not accept a tuple on some versions of python, so, we must declare it expanded
+ (type(None), None,),
+ (int, None),
+ (float, None),
+ (complex, None),
+ (str, None),
+ (tuple, pydevd_resolver.tupleResolver),
+ (list, pydevd_resolver.tupleResolver),
+ (dict, pydevd_resolver.dictResolver),
+ ]
+
+ try:
+ typeMap.append((long, None))
+ except:
+ pass #not available on all python versions
+
+ try:
+ typeMap.append((unicode, None))
+ except:
+ pass #not available on all python versions
+
+ try:
+ typeMap.append((set, pydevd_resolver.setResolver))
+ except:
+ pass #not available on all python versions
+
+ try:
+ typeMap.append((frozenset, pydevd_resolver.setResolver))
+ except:
+ pass #not available on all python versions
+
+else: #platform is java
+ from org.python import core #@UnresolvedImport
+
+ typeMap = [
+ (core.PyNone, None),
+ (core.PyInteger, None),
+ (core.PyLong, None),
+ (core.PyFloat, None),
+ (core.PyComplex, None),
+ (core.PyString, None),
+ (core.PyTuple, pydevd_resolver.tupleResolver),
+ (core.PyList, pydevd_resolver.tupleResolver),
+ (core.PyDictionary, pydevd_resolver.dictResolver),
+ (core.PyStringMap, pydevd_resolver.dictResolver),
+ ]
+
+ if hasattr(core, 'PyJavaInstance'):
+ #Jython 2.5b3 removed it.
+ typeMap.append((core.PyJavaInstance, pydevd_resolver.instanceResolver))
+
+
+def getType(o):
+ """ returns a triple (typeObject, typeString, resolver
+ resolver != None means that variable is a container,
+ and should be displayed as a hierarchy.
+ Use the resolver to get its attributes.
+
+ All container objects should have a resolver.
+ """
+
+ try:
+ type_object = type(o)
+ type_name = type_object.__name__
+ except:
+ #This happens for org.python.core.InitModule
+ return 'Unable to get Type', 'Unable to get Type', None
+
+ try:
+ if type_name == 'org.python.core.PyJavaInstance':
+ return type_object, type_name, pydevd_resolver.instanceResolver
+
+ if type_name == 'org.python.core.PyArray':
+ return type_object, type_name, pydevd_resolver.jyArrayResolver
+
+ for t in typeMap:
+ if isinstance(o, t[0]):
+ return type_object, type_name, t[1]
+ except:
+ traceback.print_exc()
+
+ #no match return default
+ return type_object, type_name, pydevd_resolver.defaultResolver
+
+def frameVarsToXML(frame_f_locals):
+ """ dumps frame variables to XML
+ <var name="var_name" scope="local" type="type" value="value"/>
+ """
+ xml = ""
+
+ keys = frame_f_locals.keys()
+ if hasattr(keys, 'sort'):
+ keys.sort() #Python 3.0 does not have it
+ else:
+ keys = sorted(keys) #Jython 2.1 does not have it
+
+ for k in keys:
+ try:
+ v = frame_f_locals[k]
+ xml += varToXML(v, str(k))
+ except Exception:
+ traceback.print_exc()
+ pydev_log.error("Unexpected error, recovered safely.\n")
+
+ return xml
+
+
+def varToXML(val, name, doTrim=True):
+ """ single variable or dictionary to xml representation """
+
+ is_exception_on_eval = isinstance(val, ExceptionOnEvaluate)
+
+ if is_exception_on_eval:
+ v = val.result
+ else:
+ v = val
+
+ type, typeName, resolver = getType(v)
+
+ try:
+ if hasattr(v, '__class__'):
+ try:
+ cName = str(v.__class__)
+ if cName.find('.') != -1:
+ cName = cName.split('.')[-1]
+
+ elif cName.find("'") != -1: #does not have '.' (could be something like <type 'int'>)
+ cName = cName[cName.index("'") + 1:]
+
+ if cName.endswith("'>"):
+ cName = cName[:-2]
+ except:
+ cName = str(v.__class__)
+ value = '%s: %s' % (cName, v)
+ else:
+ value = str(v)
+ except:
+ try:
+ value = repr(v)
+ except:
+ value = 'Unable to get repr for %s' % v.__class__
+
+ try:
+ name = quote(name, '/>_= ') #TODO: Fix PY-5834 without using quote
+ except:
+ pass
+ xml = '<var name="%s" type="%s"' % (makeValidXmlValue(name), makeValidXmlValue(typeName))
+
+ if value:
+ #cannot be too big... communication may not handle it.
+ if len(value) > MAXIMUM_VARIABLE_REPRESENTATION_SIZE and doTrim:
+ value = value[0:MAXIMUM_VARIABLE_REPRESENTATION_SIZE]
+ value += '...'
+
+ #fix to work with unicode values
+ try:
+ if not IS_PY3K:
+ if isinstance(value, unicode):
+ value = value.encode('utf-8')
+ else:
+ if isinstance(value, bytes):
+ value = value.encode('utf-8')
+ except TypeError: #in java, unicode is a function
+ pass
+
+ xmlValue = ' value="%s"' % (makeValidXmlValue(quote(value, '/>_= ')))
+ else:
+ xmlValue = ''
+
+ if is_exception_on_eval:
+ xmlCont = ' isErrorOnEval="True"'
+ else:
+ if resolver is not None:
+ xmlCont = ' isContainer="True"'
+ else:
+ xmlCont = ''
+
+ return ''.join((xml, xmlValue, xmlCont, ' />\n'))
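`varToXML` above trims oversized value representations and XML-escapes the name, type, and value before shipping them over the debugger wire. A condensed sketch of that trim-then-escape logic (the size limit and tag layout are simplified for illustration):

```python
from xml.sax.saxutils import quoteattr

MAXIMUM_VARIABLE_REPRESENTATION_SIZE = 10  # tiny, for illustration only


def var_to_xml(name, type_name, value, do_trim=True):
    # Trim first, then escape, mirroring the order used by varToXML.
    if do_trim and len(value) > MAXIMUM_VARIABLE_REPRESENTATION_SIZE:
        value = value[:MAXIMUM_VARIABLE_REPRESENTATION_SIZE] + '...'
    return '<var name=%s type=%s value=%s />' % (
        quoteattr(name), quoteattr(type_name), quoteattr(value))
```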
diff --git a/python/helpers/pydev/runfiles.py b/python/helpers/pydev/runfiles.py
new file mode 100644
index 0000000..4a25469
--- /dev/null
+++ b/python/helpers/pydev/runfiles.py
@@ -0,0 +1,530 @@
+import fnmatch
+import os.path
+import re
+import sys
+import unittest
+
+
+
+
+try:
+ __setFalse = False
+except:
+ import __builtin__
+ setattr(__builtin__, 'True', 1)
+ setattr(__builtin__, 'False', 0)
+
+
+
+
+#=======================================================================================================================
+# Jython?
+#=======================================================================================================================
+try:
+ import org.python.core.PyDictionary #@UnresolvedImport @UnusedImport -- just to check if it could be valid
+ def DictContains(d, key):
+ return d.has_key(key)
+except:
+ try:
+ #Py3k does not have has_key anymore, and older versions don't have __contains__
+ DictContains = dict.__contains__
+ except:
+ DictContains = dict.has_key
+
+try:
+ xrange
+except:
+ #Python 3k does not have it
+ xrange = range
+
+try:
+ enumerate
+except:
+ def enumerate(lst):
+ ret = []
+ i=0
+ for element in lst:
+ ret.append((i, element))
+ i+=1
+ return ret
+
+
+
+#=======================================================================================================================
+# getopt code copied since gnu_getopt is not available on jython 2.1
+#=======================================================================================================================
+class GetoptError(Exception):
+ opt = ''
+ msg = ''
+ def __init__(self, msg, opt=''):
+ self.msg = msg
+ self.opt = opt
+ Exception.__init__(self, msg, opt)
+
+ def __str__(self):
+ return self.msg
+
+
+def gnu_getopt(args, shortopts, longopts=[]):
+ """getopt(args, options[, long_options]) -> opts, args
+
+ This function works like getopt(), except that GNU style scanning
+ mode is used by default. This means that option and non-option
+ arguments may be intermixed. The getopt() function stops
+ processing options as soon as a non-option argument is
+ encountered.
+
+ If the first character of the option string is `+', or if the
+ environment variable POSIXLY_CORRECT is set, then option
+ processing stops as soon as a non-option argument is encountered.
+ """
+
+ opts = []
+ prog_args = []
+ if isinstance(longopts, ''.__class__):
+ longopts = [longopts]
+ else:
+ longopts = list(longopts)
+
+ # Allow options after non-option arguments?
+ if shortopts.startswith('+'):
+ shortopts = shortopts[1:]
+ all_options_first = True
+ elif os.environ.get("POSIXLY_CORRECT"):
+ all_options_first = True
+ else:
+ all_options_first = False
+
+ while args:
+ if args[0] == '--':
+ prog_args += args[1:]
+ break
+
+ if args[0][:2] == '--':
+ opts, args = do_longs(opts, args[0][2:], longopts, args[1:])
+ elif args[0][:1] == '-':
+ opts, args = do_shorts(opts, args[0][1:], shortopts, args[1:])
+ else:
+ if all_options_first:
+ prog_args += args
+ break
+ else:
+ prog_args.append(args[0])
+ args = args[1:]
+
+ return opts, prog_args
+
+def do_longs(opts, opt, longopts, args):
+ try:
+ i = opt.index('=')
+ except ValueError:
+ optarg = None
+ else:
+ opt, optarg = opt[:i], opt[i + 1:]
+
+ has_arg, opt = long_has_args(opt, longopts)
+ if has_arg:
+ if optarg is None:
+ if not args:
+ raise GetoptError('option --%s requires argument' % opt, opt)
+ optarg, args = args[0], args[1:]
+ elif optarg:
+ raise GetoptError('option --%s must not have an argument' % opt, opt)
+ opts.append(('--' + opt, optarg or ''))
+ return opts, args
+
+# Return:
+# has_arg?
+# full option name
+def long_has_args(opt, longopts):
+ possibilities = [o for o in longopts if o.startswith(opt)]
+ if not possibilities:
+ raise GetoptError('option --%s not recognized' % opt, opt)
+ # Is there an exact match?
+ if opt in possibilities:
+ return False, opt
+ elif opt + '=' in possibilities:
+ return True, opt
+ # No exact match, so better be unique.
+ if len(possibilities) > 1:
+ # XXX since possibilities contains all valid continuations, might be
+ # nice to work them into the error msg
+ raise GetoptError('option --%s not a unique prefix' % opt, opt)
+ assert len(possibilities) == 1
+ unique_match = possibilities[0]
+ has_arg = unique_match.endswith('=')
+ if has_arg:
+ unique_match = unique_match[:-1]
+ return has_arg, unique_match
+
+def do_shorts(opts, optstring, shortopts, args):
+ while optstring != '':
+ opt, optstring = optstring[0], optstring[1:]
+ if short_has_arg(opt, shortopts):
+ if optstring == '':
+ if not args:
+ raise GetoptError('option -%s requires argument' % opt,
+ opt)
+ optstring, args = args[0], args[1:]
+ optarg, optstring = optstring, ''
+ else:
+ optarg = ''
+ opts.append(('-' + opt, optarg))
+ return opts, args
+
+def short_has_arg(opt, shortopts):
+ for i in range(len(shortopts)):
+ if opt == shortopts[i] != ':':
+ return shortopts.startswith(':', i + 1)
+ raise GetoptError('option -%s not recognized' % opt, opt)
+
+
+#=======================================================================================================================
+# End getopt code
+#=======================================================================================================================
+
+
+
+
+
+
+
+
+
+
+#=======================================================================================================================
+# parse_cmdline
+#=======================================================================================================================
+def parse_cmdline():
+ """ parses command line and returns test directories, verbosity, test filter and test suites
+ usage:
+ runfiles.py -v|--verbosity <level> -f|--filter <regex> -t|--tests <Test.test1,Test2> dirs|files
+ """
+ verbosity = 2
+ test_filter = None
+ tests = None
+
+ optlist, dirs = gnu_getopt(sys.argv[1:], "v:f:t:", ["verbosity=", "filter=", "tests="])
+ for opt, value in optlist:
+ if opt in ("-v", "--verbosity"):
+ verbosity = value
+
+ elif opt in ("-f", "--filter"):
+ test_filter = value.split(',')
+
+ elif opt in ("-t", "--tests"):
+ tests = value.split(',')
+
+ if type([]) != type(dirs):
+ dirs = [dirs]
+
+ ret_dirs = []
+ for d in dirs:
+ if '|' in d:
+ #paths may come from the ide separated by |
+ ret_dirs.extend(d.split('|'))
+ else:
+ ret_dirs.append(d)
+
+ return ret_dirs, int(verbosity), test_filter, tests
+
+
+#=======================================================================================================================
+# PydevTestRunner
+#=======================================================================================================================
+class PydevTestRunner:
+ """ finds and runs a file or directory of files as a unit test """
+
+ __py_extensions = ["*.py", "*.pyw"]
+ __exclude_files = ["__init__.*"]
+
+ def __init__(self, test_dir, test_filter=None, verbosity=2, tests=None):
+ self.test_dir = test_dir
+ self.__adjust_path()
+ self.test_filter = self.__setup_test_filter(test_filter)
+ self.verbosity = verbosity
+ self.tests = tests
+
+
+ def __adjust_path(self):
+ """ add the current file or directory to the python path """
+ path_to_append = None
+ for n in xrange(len(self.test_dir)):
+ dir_name = self.__unixify(self.test_dir[n])
+ if os.path.isdir(dir_name):
+ if not dir_name.endswith("/"):
+ self.test_dir[n] = dir_name + "/"
+ path_to_append = os.path.normpath(dir_name)
+ elif os.path.isfile(dir_name):
+ path_to_append = os.path.dirname(dir_name)
+ else:
+ msg = ("unknown type. \n%s\nshould be file or a directory.\n" % (dir_name))
+ raise RuntimeError(msg)
+ if path_to_append is not None:
+ #Add it as the last one (so, first things are resolved against the default dirs and
+ #if none resolves, then we try a relative import).
+ sys.path.append(path_to_append)
+ return
+
+ def __setup_test_filter(self, test_filter):
+ """ turn a filter string into a list of filter regexes """
+ if test_filter is None or len(test_filter) == 0:
+ return None
+ return [re.compile("test%s" % f) for f in test_filter]
+
+ def __is_valid_py_file(self, fname):
+ """ tests that a particular file contains the proper file extension
+ and is not in the list of files to exclude """
+ is_valid_fname = 0
+ for invalid_fname in self.__class__.__exclude_files:
+ is_valid_fname += int(not fnmatch.fnmatch(fname, invalid_fname))
+ if_valid_ext = 0
+ for ext in self.__class__.__py_extensions:
+ if_valid_ext += int(fnmatch.fnmatch(fname, ext))
+ return is_valid_fname > 0 and if_valid_ext > 0
+
+ def __unixify(self, s):
+ """ stupid windows. converts the backslash to forwardslash for consistency """
+ return os.path.normpath(s).replace(os.sep, "/")
+
+ def __importify(self, s, dir=False):
+ """ turns directory separators into dots and removes the ".py*" extension
+ so the string can be used as import statement """
+ if not dir:
+ dirname, fname = os.path.split(s)
+
+ if fname.count('.') > 1:
+ #if there's a file named xxx.xx.py, it is not a valid module, so, let's not load it...
+ return
+
+ imp_stmt_pieces = [dirname.replace("\\", "/").replace("/", "."), os.path.splitext(fname)[0]]
+
+ if len(imp_stmt_pieces[0]) == 0:
+ imp_stmt_pieces = imp_stmt_pieces[1:]
+
+ return ".".join(imp_stmt_pieces)
+
+ else: #handle dir
+ return s.replace("\\", "/").replace("/", ".")
+
+ def __add_files(self, pyfiles, root, files):
+ """ if files match, appends them to pyfiles. used by os.path.walk fcn """
+ for fname in files:
+ if self.__is_valid_py_file(fname):
+ name_without_base_dir = self.__unixify(os.path.join(root, fname))
+ pyfiles.append(name_without_base_dir)
+ return
+
+
+ def find_import_files(self):
+ """ return a list of files to import """
+ pyfiles = []
+
+ for base_dir in self.test_dir:
+ if os.path.isdir(base_dir):
+ if hasattr(os, 'walk'):
+ for root, dirs, files in os.walk(base_dir):
+ self.__add_files(pyfiles, root, files)
+ else:
+ # jython2.1 is too old for os.walk!
+ os.path.walk(base_dir, self.__add_files, pyfiles)
+
+ elif os.path.isfile(base_dir):
+ pyfiles.append(base_dir)
+
+ return pyfiles
+
+ def __get_module_from_str(self, modname, print_exception):
+ """ Import the module in the given import path.
+ * Returns the "final" module, so importing "coilib40.subject.visu"
+ returns the "visu" module, not the "coilib40" as returned by __import__ """
+ try:
+ mod = __import__(modname)
+ for part in modname.split('.')[1:]:
+ mod = getattr(mod, part)
+ return mod
+ except:
+ if print_exception:
+ import traceback;traceback.print_exc()
+ sys.stderr.write('ERROR: Module: %s could not be imported.\n' % (modname,))
+ return None
+
+ def find_modules_from_files(self, pyfiles):
+ """ returns a lisst of modules given a list of files """
+ #let's make sure that the paths we want are in the pythonpath...
+ imports = [self.__importify(s) for s in pyfiles]
+
+ system_paths = []
+ for s in sys.path:
+ system_paths.append(self.__importify(s, True))
+
+
+ ret = []
+ for imp in imports:
+ if imp is None:
+ continue #can happen if a file is not a valid module
+ choices = []
+ for s in system_paths:
+ if imp.startswith(s):
+ add = imp[len(s) + 1:]
+ if add:
+ choices.append(add)
+ #sys.stdout.write(' ' + add + ' ')
+
+ if not choices:
+ sys.stdout.write('PYTHONPATH not found for file: %s\n' % imp)
+ else:
+ for i, import_str in enumerate(choices):
+ mod = self.__get_module_from_str(import_str, print_exception=i == len(choices) - 1)
+ if mod is not None:
+ ret.append(mod)
+ break
+
+
+ return ret
+
+ def find_tests_from_modules(self, modules):
+ """ returns the unittests given a list of modules """
+ loader = unittest.TestLoader()
+
+ ret = []
+ if self.tests:
+ accepted_classes = {}
+ accepted_methods = {}
+
+ for t in self.tests:
+ splitted = t.split('.')
+ if len(splitted) == 1:
+ accepted_classes[t] = t
+
+ elif len(splitted) == 2:
+ accepted_methods[t] = t
+
+ #===========================================================================================================
+ # GetTestCaseNames
+ #===========================================================================================================
+ class GetTestCaseNames:
+ """Yes, we need a class for that (cannot use outer context on jython 2.1)"""
+
+ def __init__(self, accepted_classes, accepted_methods):
+ self.accepted_classes = accepted_classes
+ self.accepted_methods = accepted_methods
+
+ def __call__(self, testCaseClass):
+ """Return a sorted sequence of method names found within testCaseClass"""
+ testFnNames = []
+ className = testCaseClass.__name__
+
+ if DictContains(self.accepted_classes, className):
+ for attrname in dir(testCaseClass):
+ #If a class is chosen, we select all the 'test' methods
+ if attrname.startswith('test') and hasattr(getattr(testCaseClass, attrname), '__call__'):
+ testFnNames.append(attrname)
+
+ else:
+ for attrname in dir(testCaseClass):
+ #If we have the class+method name, we must do a full check and have an exact match.
+ if DictContains(self.accepted_methods, className + '.' + attrname):
+ if hasattr(getattr(testCaseClass, attrname), '__call__'):
+ testFnNames.append(attrname)
+
+ #sorted() is not available in jython 2.1
+ testFnNames.sort()
+ return testFnNames
+
+
+ loader.getTestCaseNames = GetTestCaseNames(accepted_classes, accepted_methods)
+
+
+ ret.extend([loader.loadTestsFromModule(m) for m in modules])
+
+ return ret
+
+
+ def filter_tests(self, test_objs):
+ """ based on a filter name, only return those tests that have
+ the test case names that match """
+ test_suite = []
+ for test_obj in test_objs:
+
+ if isinstance(test_obj, unittest.TestSuite):
+ if test_obj._tests:
+ test_obj._tests = self.filter_tests(test_obj._tests)
+ if test_obj._tests:
+ test_suite.append(test_obj)
+
+ elif isinstance(test_obj, unittest.TestCase):
+ test_cases = []
+ for tc in test_objs:
+ try:
+ testMethodName = tc._TestCase__testMethodName
+ except AttributeError:
+ #changed in python 2.5
+ testMethodName = tc._testMethodName
+
+ if self.__match(self.test_filter, testMethodName) and self.__match_tests(self.tests, tc, testMethodName):
+ test_cases.append(tc)
+ return test_cases
+ return test_suite
+
+
+ def __match_tests(self, tests, test_case, test_method_name):
+ if not tests:
+ return 1
+
+ for t in tests:
+ class_and_method = t.split('.')
+ if len(class_and_method) == 1:
+ #only class name
+ if class_and_method[0] == test_case.__class__.__name__:
+ return 1
+
+ elif len(class_and_method) == 2:
+ if class_and_method[0] == test_case.__class__.__name__ and class_and_method[1] == test_method_name:
+ return 1
+
+ return 0
+
+
+
+
+ def __match(self, filter_list, name):
+ """ returns whether a test name matches the test filter """
+ if filter_list is None:
+ return 1
+ for f in filter_list:
+ if re.match(f, name):
+ return 1
+ return 0
+
+
+ def run_tests(self):
+ """ runs all tests """
+ sys.stdout.write("Finding files...\n")
+ files = self.find_import_files()
+ sys.stdout.write('%s %s\n' % (self.test_dir, '... done'))
+ sys.stdout.write("Importing test modules ... ")
+ modules = self.find_modules_from_files(files)
+ sys.stdout.write("done.\n")
+ all_tests = self.find_tests_from_modules(modules)
+ if self.test_filter or self.tests:
+
+ if self.test_filter:
+ sys.stdout.write('Test Filter: %s' % ([p.pattern for p in self.test_filter],))
+
+ if self.tests:
+ sys.stdout.write('Tests to run: %s' % (self.tests,))
+
+ all_tests = self.filter_tests(all_tests)
+
+ sys.stdout.write('\n')
+ runner = unittest.TextTestRunner(stream=sys.stdout, descriptions=1, verbosity=self.verbosity)
+ runner.run(unittest.TestSuite(all_tests))
+ return
+
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+if __name__ == '__main__':
+ dirs, verbosity, test_filter, tests = parse_cmdline()
+ PydevTestRunner(dirs, test_filter, verbosity, tests).run_tests()
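`__setup_test_filter` and `__match` above reduce the `-f` option to a list of `re` patterns prefixed with "test", and a test name passes when any pattern matches. The rule can be exercised standalone (module-level functions here, where the runner uses private methods):

```python
import re


def setup_test_filter(test_filter):
    """Turn -f values into regexes, as PydevTestRunner does above."""
    if not test_filter:
        return None
    return [re.compile('test%s' % f) for f in test_filter]


def match(filter_list, name):
    """A name passes when there is no filter or any regex matches it."""
    if filter_list is None:
        return True
    return any(re.match(f, name) for f in filter_list)
```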
diff --git a/python/helpers/pydev/test_debug.py b/python/helpers/pydev/test_debug.py
new file mode 100644
index 0000000..89051c8
--- /dev/null
+++ b/python/helpers/pydev/test_debug.py
@@ -0,0 +1,16 @@
+__author__ = 'Dmitry.Trofimov'
+
+import unittest
+
+class PyDevTestCase(unittest.TestCase):
+ def testZipFileExits(self):
+ from pydevd_file_utils import exists
+ self.assertTrue(exists('../../../testData/debug/zipped_lib.zip/zipped_module.py'))
+ self.assertFalse(exists('../../../testData/debug/zipped_lib.zip/zipped_module2.py'))
+ self.assertFalse(exists('../../../testData/debug/zipped_lib2.zip/zipped_module.py'))
+
+
+ def testEggFileExits(self):
+ from pydevd_file_utils import exists
+ self.assertTrue(exists('../../../testData/debug/pycharm-debug.egg/pydev/pydevd.py'))
+ self.assertFalse(exists('../../../testData/debug/pycharm-debug.egg/pydev/pydevd2.py'))
diff --git a/python/helpers/required_gen_version b/python/helpers/required_gen_version
new file mode 100644
index 0000000..81c8c90
--- /dev/null
+++ b/python/helpers/required_gen_version
@@ -0,0 +1,47 @@
+# This file lists minimum generator versions required for known packages / files.
+# hash marks start line comments.
+# name is either a package name (as used in import) or a predefined name in parentheses.
+# version is two decimal numbers divided by a dot.
+# settings equally apply to all platforms (jython, cpython, ipy).
+
+(default) 1.127 # anything not explicitly marked
+
+(built-in) 1.130 # skeletons of all built-in modules are built together
+# Note: modules like itertools, etc are "(built-in)" and are ignored if given separately
+
+_fileio 1.127
+_io 1.127
+sys 1.127
+thread 1.127
+_thread 1.127
+_struct 1.127
+datetime 1.127
+_collections 1.127
+
+PyQt4.Qsci 1.127
+PyQt4.QtAssistant 1.127
+PyQt4.QtCore 1.127
+PyQt4.QtDesigner 1.127
+PyQt4.QtGui 1.127
+PyQt4.QtHelp 1.127
+PyQt4.QtNetwork 1.127
+PyQt4.QtScriptTools 1.127
+PyQt4.QtScript 1.127
+PyQt4.QtSvg 1.127
+PyQt4.QtTest 1.127
+PyQt4.Qt 1.127
+PyQt4.QtWebKit 1.127
+PyQt4.QtXmlPatterns 1.127
+PyQt4.QtXml 1.127
+
+pygame.fastevent 1.127
+pygame.image 1.127
+
+sip 1.127
+
+pysqlite2._sqlite 1.127
+_bsddb 1.127
+
+h5py.h5 1.127
+h5py.h5i 1.127
+h5py.h5g 1.127
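The format documented at the top of required_gen_version (hash comments, a package name or parenthesized pseudo-name, then a major.minor version) is simple enough to parse in a few lines. A sketch of such a reader, not the actual consumer used by the IDE:

```python
def parse_required_gen_versions(text):
    """Map names (or '(default)'/'(built-in)') to (major, minor) tuples."""
    versions = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # hash starts a line comment
        if not line:
            continue
        name, version = line.split()
        major, minor = version.split('.')
        versions[name] = (int(major), int(minor))
    return versions
```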
diff --git a/python/helpers/rest_formatter.py b/python/helpers/rest_formatter.py
new file mode 100644
index 0000000..e3575f8
--- /dev/null
+++ b/python/helpers/rest_formatter.py
@@ -0,0 +1,169 @@
+import sys
+from docutils.core import publish_string
+from docutils import nodes
+from docutils.nodes import Text
+from docutils.writers.html4css1 import HTMLTranslator
+from epydoc.markup import DocstringLinker
+from epydoc.markup.restructuredtext import ParsedRstDocstring, _EpydocHTMLTranslator, _DocumentPseudoWriter, _EpydocReader
+
+
+class RestHTMLTranslator(_EpydocHTMLTranslator):
+ def visit_field_name(self, node):
+ atts = {}
+ if self.in_docinfo:
+ atts['class'] = 'docinfo-name'
+ else:
+ atts['class'] = 'field-name'
+
+ self.context.append('')
+ atts['align'] = "right"
+ self.body.append(self.starttag(node, 'th', '', **atts))
+
+ def visit_field_body(self, node):
+ self.body.append(self.starttag(node, 'td', '', CLASS='field-body'))
+ parent_text = node.parent[0][0].astext()
+ if hasattr(node.parent, "type"):
+ self.body.append("(")
+ self.body.append(self.starttag(node, 'a', '',
+ **{"href": 'psi_element://#typename#' + node.parent.type}))
+ self.body.append(node.parent.type)
+ self.body.append("</a>")
+ self.body.append(") ")
+ elif parent_text.startswith("type "):
+ index = parent_text.index("type ")
+ type_string = parent_text[index + 5:]
+ self.body.append(self.starttag(node, 'a', '',
+ **{"href": 'psi_element://#typename#' + type_string}))
+ elif parent_text.startswith("rtype"):
+ type_string = node.children[0][0].astext()
+ self.body.append(self.starttag(node, 'a', '',
+ **{"href": 'psi_element://#typename#' + type_string}))
+
+ self.set_class_on_child(node, 'first', 0)
+ field = node.parent
+ if (self.compact_field_list or
+ isinstance(field.parent, nodes.docinfo) or
+ field.parent.index(field) == len(field.parent) - 1):
+ # If we are in a compact list, the docinfo, or if this is
+ # the last field of the field list, do not add vertical
+ # space after last element.
+ self.set_class_on_child(node, 'last', -1)
+
+ def depart_field_body(self, node):
+ if node.parent[0][0].astext().startswith("type "):
+ self.body.append("</a>")
+ HTMLTranslator.depart_field_body(self, node)
+
+
+ def visit_field_list(self, node):
+ fields = {}
+ for n in node.children:
+ if len(n.children) == 0: continue
+ child = n.children[0]
+ rawsource = child.rawsource
+ if rawsource.startswith("param "):
+ index = rawsource.index("param ")
+ if len(child.children) == 0: continue
+ child.children[0] = Text(rawsource[index + 6:])
+ fields[rawsource[index + 6:]] = n
+ if rawsource == "return":
+ fields["return"] = n
+
+ for n in node.children:
+ if len(n.children) == 0: continue
+ child = n.children[0]
+ rawsource = child.rawsource
+ if rawsource.startswith("type "):
+ index = rawsource.index("type ")
+ name = rawsource[index + 5:]
+ if fields.has_key(name):
+ fields[name].type = n.children[1][0][0]
+ node.children.remove(n)
+ if rawsource == "rtype":
+ if fields.has_key("return"):
+ fields["return"].type = n.children[1][0][0]
+ node.children.remove(n)
+
+ HTMLTranslator.visit_field_list(self, node)
+
+
+ def unknown_visit(self, node):
+ """ Ignore unknown nodes """
+
+ def unknown_departure(self, node):
+ """ Ignore unknown nodes """
+
+ def visit_block_quote(self, node):
+ self.body.append(self.emptytag(node, "br"))
+
+ def depart_block_quote(self, node):
+ pass
+
+ def visit_literal(self, node):
+ """Process text to prevent tokens from wrapping."""
+ self.body.append(
+ self.starttag(node, 'tt', '', CLASS='docutils literal'))
+ text = node.astext()
+ for token in self.words_and_spaces.findall(text):
+ if token.strip():
+ self.body.append('<code>%s</code>'
+ % self.encode(token))
+ elif token in ('\n', ' '):
+ # Allow breaks at whitespace:
+ self.body.append(token)
+ else:
+ # Protect runs of multiple spaces; the last space can wrap:
+ self.body.append('&nbsp;' * (len(token) - 1) + ' ')
+ self.body.append('</tt>')
+ raise nodes.SkipNode
+
+
+class MyParsedRstDocstring(ParsedRstDocstring):
+ def __init__(self, document):
+ ParsedRstDocstring.__init__(self, document)
+
+ def to_html(self, docstring_linker, directory=None,
+ docindex=None, context=None, **options):
+ visitor = RestHTMLTranslator(self._document, docstring_linker,
+ directory, docindex, context)
+ self._document.walkabout(visitor)
+ return ''.join(visitor.body)
+
+
+def parse_docstring(docstring, errors, **options):
+ writer = _DocumentPseudoWriter()
+ reader = _EpydocReader(errors) # Outputs errors to the list.
+ publish_string(docstring, writer=writer, reader=reader,
+ settings_overrides={'report_level': 10000,
+ 'halt_level': 10000,
+ 'warning_stream': None})
+ return MyParsedRstDocstring(writer.document)
+
+
+try:
+ src = sys.stdin.read()
+
+ errors = []
+
+ class EmptyLinker(DocstringLinker):
+ def translate_indexterm(self, indexterm):
+ return ""
+
+ def translate_identifier_xref(self, identifier, label=None):
+ return identifier
+
+ docstring = parse_docstring(src, errors)
+ html = docstring.to_html(EmptyLinker())
+
+ if errors and not html:
+ sys.stderr.write("Error parsing docstring:\n")
+ for error in errors:
+ sys.stderr.write(str(error) + "\n")
+ sys.exit(1)
+
+ sys.stdout.write(html)
+ sys.stdout.flush()
+except:
+ exc_type, exc_value, exc_traceback = sys.exc_info()
+ sys.stderr.write("Error calculating docstring: " + str(exc_value))
+ sys.exit(1)
diff --git a/python/helpers/rest_runners/rst2smth.py b/python/helpers/rest_runners/rst2smth.py
new file mode 100644
index 0000000..146da8c
--- /dev/null
+++ b/python/helpers/rest_runners/rst2smth.py
@@ -0,0 +1,24 @@
+__author__ = 'catherine'
+
+if __name__ == "__main__":
+ try:
+ from docutils.core import publish_cmdline
+ except:
+ raise NameError("Cannot find docutils in selected interpreter.")
+
+ import sys
+ command = sys.argv[1]
+ args = sys.argv[2:]
+
+ COMMANDS = {"rst2html": "html", "rst2latex" : "latex",
+ "rst2pseudoxml" : "pseudoxml", "rst2s5" : "s5", "rst2xml" : "xml"}
+
+ if command == "rst2odt":
+ from docutils.writers.odf_odt import Writer, Reader
+ writer = Writer()
+ reader = Reader()
+ publish_cmdline(reader=reader, writer=writer, argv=args)
+ elif command == "rstpep2html":
+ publish_cmdline(reader_name='pep', writer_name='pep_html', argv=args)
+ else:
+ publish_cmdline(writer_name=COMMANDS[command], argv=args)
\ No newline at end of file
diff --git a/python/helpers/rest_runners/sphinx_runner.py b/python/helpers/rest_runners/sphinx_runner.py
new file mode 100644
index 0000000..0712898
--- /dev/null
+++ b/python/helpers/rest_runners/sphinx_runner.py
@@ -0,0 +1,10 @@
+__author__ = 'catherine'
+
+if __name__ == "__main__":
+ try:
+ from sphinx import cmdline
+ except:
+ raise NameError("Cannot find sphinx in selected interpreter.")
+
+ import sys
+ cmdline.main(sys.argv)
\ No newline at end of file
diff --git a/python/helpers/roman.py b/python/helpers/roman.py
new file mode 100644
index 0000000..0335f29
--- /dev/null
+++ b/python/helpers/roman.py
@@ -0,0 +1,81 @@
+"""Convert to and from Roman numerals"""
+
+__author__ = "Mark Pilgrim ([email protected])"
+__version__ = "1.4"
+__date__ = "8 August 2001"
+__copyright__ = """Copyright (c) 2001 Mark Pilgrim
+
+This program is part of "Dive Into Python", a free Python tutorial for
+experienced programmers. Visit http://diveintopython.org/ for the
+latest version.
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the Python 2.1.1 license, available at
+http://www.python.org/2.1.1/license.html
+"""
+
+import re
+
+#Define exceptions
+class RomanError(Exception): pass
+class OutOfRangeError(RomanError): pass
+class NotIntegerError(RomanError): pass
+class InvalidRomanNumeralError(RomanError): pass
+
+#Define digit mapping
+romanNumeralMap = (('M', 1000),
+ ('CM', 900),
+ ('D', 500),
+ ('CD', 400),
+ ('C', 100),
+ ('XC', 90),
+ ('L', 50),
+ ('XL', 40),
+ ('X', 10),
+ ('IX', 9),
+ ('V', 5),
+ ('IV', 4),
+ ('I', 1))
+
+def toRoman(n):
+ """convert integer to Roman numeral"""
+ if not (0 < n < 5000):
+ raise OutOfRangeError, "number out of range (must be 1..4999)"
+ if int(n) != n:
+ raise NotIntegerError, "decimals can not be converted"
+
+ result = ""
+ for numeral, integer in romanNumeralMap:
+ while n >= integer:
+ result += numeral
+ n -= integer
+ return result
+
+#Define pattern to detect valid Roman numerals
+romanNumeralPattern = re.compile("""
+ ^ # beginning of string
+ M{0,4} # thousands - 0 to 4 M's
+ (CM|CD|D?C{0,3}) # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
+ # or 500-800 (D, followed by 0 to 3 C's)
+ (XC|XL|L?X{0,3}) # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
+ # or 50-80 (L, followed by 0 to 3 X's)
+ (IX|IV|V?I{0,3}) # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
+ # or 5-8 (V, followed by 0 to 3 I's)
+ $ # end of string
+ """ ,re.VERBOSE)
+
+def fromRoman(s):
+ """convert Roman numeral to integer"""
+ if not s:
+ raise InvalidRomanNumeralError, 'Input can not be blank'
+ if not romanNumeralPattern.search(s):
+ raise InvalidRomanNumeralError, 'Invalid Roman numeral: %s' % s
+
+ result = 0
+ index = 0
+ for numeral, integer in romanNumeralMap:
+ while s[index:index+len(numeral)] == numeral:
+ result += integer
+ index += len(numeral)
+ return result
+
diff --git a/python/helpers/run_coverage.py b/python/helpers/run_coverage.py
new file mode 100644
index 0000000..5785945
--- /dev/null
+++ b/python/helpers/run_coverage.py
@@ -0,0 +1,36 @@
+"""Coverage.py's main entrypoint."""
+
+import os
+import sys
+import imp
+
+helpers_root = os.getenv('PYCHARM_HELPERS_ROOT')
+if helpers_root:
+ sys_path_backup = sys.path
+ sys.path = [p for p in sys.path if p != helpers_root]
+ from coverage.cmdline import main
+ sys.path = sys_path_backup
+else:
+ from coverage.cmdline import main
+
+coverage_file = os.getenv('PYCHARM_COVERAGE_FILE')
+run_cov = os.getenv('PYCHARM_RUN_COVERAGE')
+if os.getenv('JETBRAINS_REMOTE_RUN'):
+ line = 'LOG: PyCharm: File mapping:%s\t%s\n'
+ import tempfile
+ (h, new_cov_file) = tempfile.mkstemp(prefix='pycharm-coverage')
+ print(line%(coverage_file, new_cov_file))
+ print(line%(coverage_file + '.syspath.txt', new_cov_file + '.syspath.txt'))
+ print(line%(coverage_file + '.xml', new_cov_file + '.xml'))
+ coverage_file = new_cov_file
+
+if coverage_file:
+ os.environ['COVERAGE_FILE'] = coverage_file
+if run_cov:
+ a_file = open(coverage_file + '.syspath.txt', mode='w')
+ a_file.write(os.getcwd()+"\n")
+ for path in sys.path: a_file.write(path + "\n")
+ a_file.close()
+main()
+if run_cov:
+ main(["xml", "-o", coverage_file + ".xml", "--ignore-errors"])
\ No newline at end of file
diff --git a/python/helpers/setuptools-1.1.5.tar.gz b/python/helpers/setuptools-1.1.5.tar.gz
new file mode 100644
index 0000000..4c202f4
--- /dev/null
+++ b/python/helpers/setuptools-1.1.5.tar.gz
Binary files differ
diff --git a/python/helpers/syspath.py b/python/helpers/syspath.py
new file mode 100644
index 0000000..ba5d8e3
--- /dev/null
+++ b/python/helpers/syspath.py
@@ -0,0 +1,4 @@
+import sys
+import os.path
+for x in sys.path:
+ if x != os.path.dirname(sys.argv[0]) and x != '.': sys.stdout.write(x + chr(10))
\ No newline at end of file
diff --git a/python/helpers/test_generator.py b/python/helpers/test_generator.py
new file mode 100644
index 0000000..e0933ad
--- /dev/null
+++ b/python/helpers/test_generator.py
@@ -0,0 +1,413 @@
+# encoding: utf-8
+"""
+Tests basic things that generator3 consists of.
+NOTE: does not work in Jython 2.2 or IronPython 1.x, because pyparsing does not.
+"""
+
+import unittest
+from generator3 import *
+
+M = ModuleRedeclarator
+
+import sys
+
+IS_CLI = sys.platform == 'cli'
+VERSION = sys.version_info[:2] # only (major, minor)
+
+class TestRestoreFuncByDocComment(unittest.TestCase):
+ """
+ Tries to restore function signatures by doc strings.
+ """
+
+ def setUp(self):
+ self.m = ModuleRedeclarator(None, None, '/dev/null')
+
+ def testTrivial(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(a, b, c) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a, b, c)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testTrivialNested(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(a, (b, c), d) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a, (b, c), d)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testWithDefault(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(a, b, c=1) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a, b, c=1)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testNestedWithDefault(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(a, (b1, b2), c=1) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a, (b1, b2), c=1)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testAbstractDefault(self):
+ # like new(S, ...)
+ result, ret_sig, note = self.m.parse_func_doc('blah f(a, b=obscuredefault) ololo', "f", "f", None)
+ self.assertEquals(result, "f(a, b=None)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testWithReserved(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(class, object, def) ololo", "f", "f", None)
+ self.assertEquals(result, "f(p_class, p_object, p_def)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testWithReservedOpt(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(foo, bar[, def]) ololo", "f", "f", None)
+ self.assertEquals(result, "f(foo, bar, p_def=None)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testPseudoNested(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(a, (b1, b2, ...)) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a, b_tuple)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testImportLike(self):
+ # __import__
+ result, ret_sig, note = self.m.parse_func_doc("blah f(name, globals={}, locals={}, fromlist=[], level=-1) ololo",
+ "f", "f", None)
+ self.assertEquals(result, "f(name, globals={}, locals={}, fromlist=[], level=-1)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testOptionalBracket(self):
+ # reduce
+ result, ret_sig, note = self.m.parse_func_doc("blah f(function, sequence[, initial]) ololo", "f", "f", None)
+ self.assertEquals(result, "f(function, sequence, initial=None)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testWithMore(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(foo [, bar1, bar2, ...]) ololo", "f", "f", None)
+ self.assertEquals(result, "f(foo, *bar)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testNestedOptionals(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(foo [, bar1 [, bar2]]) ololo", "f", "f", None)
+ self.assertEquals(result, "f(foo, bar1=None, bar2=None)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testInnerTuple(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah load_module(name, file, filename, (suffix, mode, type)) ololo"
+ , "load_module", "load_module", None)
+ self.assertEquals(result, "load_module(name, file, filename, (suffix, mode, type))")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testIncorrectInnerTuple(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(a, (b=1, c=2)) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a, p_b)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testNestedOnly(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f((foo, bar, baz)) ololo", "f", "f", None)
+ self.assertEquals(result, "f((foo, bar, baz))")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testTwoPseudoNested(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f((a1, a2, ...), (b1, b2,..)) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a_tuple, b_tuple)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testTwoPseudoNestedWithLead(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(x, (a1, a2, ...), (b1, b2,..)) ololo", "f", "f", None)
+ self.assertEquals(result, "f(x, a_tuple, b_tuple)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testPseudoNestedRange(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f((a1, ..., an), b) ololo", "f", "f", None)
+ self.assertEquals(result, "f(a_tuple, b)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testIncorrectList(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(x, y, 3, $) ololo", "f", "f", None)
+ self.assertEquals(result, "f(x, y, *args, **kwargs)")
+ self.assertEquals(note, M.SIG_DOC_UNRELIABLY)
+
+ def testIncorrectStarredList(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(x, *y, 3, $) ololo", "f", "f", None)
+ self.assertEquals(result, "f(x, *y, **kwargs)")
+ self.assertEquals(note, M.SIG_DOC_UNRELIABLY)
+
+ def testClashingNames(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(x, y, (x, y), z) ololo", "f", "f", None)
+ self.assertEquals(result, "f(x, y, (x_1, y_1), z)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testQuotedParam(self):
+ # like __delattr__
+ result, ret_sig, note = self.m.parse_func_doc("blah getattr('name') ololo", "getattr", "getattr", None)
+ self.assertEquals(result, "getattr(name)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testQuotedParam2(self):
+ # like __delattr__, too
+ result, ret_sig, note = self.m.parse_func_doc('blah getattr("name") ololo', "getattr", "getattr", None)
+ self.assertEquals(result, "getattr(name)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testOptionalTripleDot(self):
+ # like new(S, ...)
+ result, ret_sig, note = self.m.parse_func_doc('blah f(foo, ...) ololo', "f", "f", None)
+ self.assertEquals(result, "f(foo, *more)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testUnderscoredName(self):
+ # like new(S, ...)
+ result, ret_sig, note = self.m.parse_func_doc('blah f(foo_one, _bar_two) ololo', "f", "f", None)
+ self.assertEquals(result, "f(foo_one, _bar_two)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testDashedName(self):
+ # like new(S, ...)
+ result, ret_sig, note = self.m.parse_func_doc('blah f(something-else, for-a-change) ololo', "f", "f", None)
+ self.assertEquals(result, "f(something_else, for_a_change)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testSpacedDefault(self):
+ # like new(S, ...)
+ result, ret_sig, note = self.m.parse_func_doc('blah f(a, b = 1) ololo', "f", "f", None)
+ self.assertEquals(result, "f(a, b=1)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testSpacedName(self):
+ # like new(S, ...)
+ result, ret_sig, note = self.m.parse_func_doc('blah femme(skirt or pants) ololo', "femme", "femme", None)
+ self.assertEquals(result, "femme(skirt_or_pants)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+
+class TestRestoreMethodByDocComment(unittest.TestCase):
+ """
+ Restoring with a class name set
+ """
+
+ def setUp(self):
+ self.m = ModuleRedeclarator(None, None, '/dev/null')
+
+ def testPlainMethod(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(self, foo, bar) ololo", "f", "f", "SomeClass")
+ self.assertEquals(result, "f(self, foo, bar)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testInsertSelf(self):
+ result, ret_sig, note = self.m.parse_func_doc("blah f(foo, bar) ololo", "f", "f", "SomeClass")
+ self.assertEquals(result, "f(self, foo, bar)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+
+class TestAnnotatedParameters(unittest.TestCase):
+ """
+ f(foo: int) and friends; in doc comments, happen in 2.x world, too.
+ """
+
+ def setUp(self):
+ self.m = ModuleRedeclarator(None, None, '/dev/null')
+
+ def testMixed(self):
+ result, ret_sig, note = self.m.parse_func_doc('blah f(i: int, foo) ololo', "f", "f", None)
+ self.assertEquals(result, "f(i, foo)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testNested(self):
+ result, ret_sig, note = self.m.parse_func_doc('blah f(i: int, (foo: bar, boo: Decimal)) ololo', "f", "f", None)
+ self.assertEquals(result, "f(i, (foo, boo))")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+ def testSpaced(self):
+ result, ret_sig, note = self.m.parse_func_doc('blah f(i: int, j :int, k : int) ololo', "f", "f", None)
+ self.assertEquals(result, "f(i, j, k)")
+ self.assertEquals(note, M.SIG_DOC_NOTE)
+
+
+if not IS_CLI and VERSION < (3, 0):
+ class TestInspect(unittest.TestCase):
+ """
+ See that inspect actually works if needed
+ """
+
+ def setUp(self):
+ self.m = ModuleRedeclarator(None, None, '/dev/null')
+
+ def testSimple(self):
+ def target(a, b, c=1, *d, **e):
+ return a, b, c, d, e
+
+ result = restore_by_inspect(target)
+ self.assertEquals(result, "(a, b, c=1, *d, **e)")
+
+ def testNested(self):
+ # NOTE: Py3k can't handle nested tuple args, thus we compile it conditionally
+ code = (
+ "def target(a, (b, c), d, e=1):\n"
+ " return a, b, c, d, e"
+ )
+ namespace = {}
+ eval(compile(code, "__main__", "single"), namespace)
+ target = namespace['target']
+
+ result = restore_by_inspect(target)
+ self.assertEquals(result, "(a, (b, c), d, e=1)")
+
+class _DiffPrintingTestCase(unittest.TestCase):
+ def assertEquals(self, etalon, specimen, msg=None):
+ if type(etalon) == str and type(specimen) == str and etalon != specimen:
+ print("\n")
+ # print side by side
+ ei = iter(etalon.split("\n"))
+ si = iter(specimen.split("\n"))
+ if VERSION < (3, 0):
+ si_next = si.next
+ else:
+ si_next = si.__next__
+ for el in ei:
+ try: sl = si_next()
+ except StopIteration: break # I wish the exception would just work as break
+ if el != sl:
+ print("!%s" % el)
+ print("?%s" % sl)
+ else:
+ print(">%s" % sl)
+ # one of the iters might not end yet
+ for el in ei:
+ print("!%s" % el)
+ for sl in si:
+ print("?%s" % sl)
+ raise self.failureException(msg)
+ else:
+ self.failUnlessEqual(etalon, specimen, msg)
+
+
+class TestSpecialCases(unittest.TestCase):
+ """
+ Tests cases where predefined overrides kick in
+ """
+
+ def setUp(self):
+
+ if VERSION >= (3, 0):
+ import builtins as the_builtins
+
+ self.builtins_name = the_builtins.__name__
+ else:
+ import __builtin__ as the_builtins
+
+ self.builtins_name = the_builtins.__name__
+ self.m = ModuleRedeclarator(the_builtins, None, '/dev/null', doing_builtins=True)
+
+ def _testBuiltinFuncName(self, func_name, expected):
+ class_name = None
+ self.assertTrue(self.m.is_predefined_builtin(self.builtins_name, class_name, func_name))
+ result, note = restore_predefined_builtin(class_name, func_name)
+ self.assertEquals(result, func_name + expected)
+ self.assertEquals(note, "known special case of " + func_name)
+
+ def testZip(self):
+ self._testBuiltinFuncName("zip", "(seq1, seq2, *more_seqs)")
+
+ def testRange(self):
+ self._testBuiltinFuncName("range", "(start=None, stop=None, step=None)")
+
+ def testFilter(self):
+ self._testBuiltinFuncName("filter", "(function_or_none, sequence)")
+
+ # we could want to test a class without __dict__, but it takes a C extension to really create one
+
+class TestDataOutput(_DiffPrintingTestCase):
+ """
+ Tests for sanity of output of data members
+ """
+
+ def setUp(self):
+ self.m = ModuleRedeclarator(self, None, 4) # Pass anything with __dict__ as module
+
+ def checkFmtValue(self, data, expected):
+ buf = Buf(self.m)
+ self.m.fmt_value(buf.out, data, 0)
+ result = "".join(buf.data).strip()
+ self.assertEquals(expected, result)
+
+ def testRecursiveDict(self):
+ data = {'a': 1}
+ data['b'] = data
+ expected = "\n".join((
+ "{",
+ " 'a': 1,",
+ " 'b': '<value is a self-reference, replaced by this string>',",
+ "}"
+ ))
+ self.checkFmtValue(data, expected)
+
+ def testRecursiveList(self):
+ data = [1]
+ data.append(data)
+ data.append(2)
+ data.append([10, data, 20])
+ expected = "\n".join((
+ "[",
+ " 1,",
+ " '<value is a self-reference, replaced by this string>',",
+ " 2,",
+ " [",
+ " 10,",
+ " '<value is a self-reference, replaced by this string>',",
+ " 20,",
+ " ],",
+ "]"
+ ))
+ self.checkFmtValue(data, expected)
+
+if not IS_CLI:
+ class TestReturnTypes(unittest.TestCase):
+ """
+ Tests restoring return value literals from doc strings
+ """
+
+ def setUp(self):
+ self.m = ModuleRedeclarator(None, None, 4)
+
+ def checkRestoreFunction(self, doc, expected):
+ spec, ret_literal, note = self.m.parse_func_doc(doc, "foo", "foo", None)
+ self.assertEqual(expected, ret_literal, "%r != %r; spec=%r, note=%r" % (expected, ret_literal, spec, note))
+
+ def testSimpleArrowInt(self):
+ doc = "This is foo(bar) -> int"
+ self.checkRestoreFunction(doc, "0")
+
+ def testSimpleArrowList(self):
+ doc = "This is foo(bar) -> list"
+ self.checkRestoreFunction(doc, "[]")
+
+ def testArrowListOf(self):
+ doc = "This is foo(bar) -> list of int"
+ self.checkRestoreFunction(doc, "[]")
+
+ # def testArrowTupleOf(self):
+ # doc = "This is foo(bar) -> (a, b,..)"
+ # self.checkRestoreFunction(doc, "()")
+
+ def testSimplePrefixInt(self):
+ doc = "This is int foo(bar)"
+ self.checkRestoreFunction(doc, "0")
+
+ def testSimplePrefixObject(self):
+ doc = "Makes an instance: object foo(bar)"
+ self.checkRestoreFunction(doc, "object()")
+
+ if VERSION < (3, 0):
+ # TODO: we only support it in 2.x; must update when we do it in 3.x, too
+ def testSimpleArrowFile(self):
+ doc = "Opens a file: foo(bar) -> file"
+ self.checkRestoreFunction(doc, "file('/dev/null')")
+
+ def testUnrelatedPrefix(self):
+ doc = """
+ Consumes a list of int
+ foo(bar)
+ """
+ self.checkRestoreFunction(doc, None)
+
+
+###
+if __name__ == '__main__':
+ unittest.main()
diff --git a/python/helpers/tools/class_method_versions.xml b/python/helpers/tools/class_method_versions.xml
new file mode 100644
index 0000000..9a403b9
--- /dev/null
+++ b/python/helpers/tools/class_method_versions.xml
@@ -0,0 +1,155 @@
+<?xml version="1.0" ?><root><class_name name="TestCase">
+ <python version="2.4"><func>assertCountEqual</func>
+ <func>assertNotRegex</func>
+ <func>assertRaisesRegex</func>
+ <func>maxDiff</func>
+ <func>addTypeEqualityFunc</func>
+ <func>assertGreater</func>
+ <func>assertDictContainsSubset</func>
+ <func>assertLess</func>
+ <func>assertMultiLineEqual</func>
+ <func>assertIsNotNone</func>
+ <func>assertGreaterEqual</func>
+ <func>assertNotIn</func>
+ <func>assertRaisesRegexp</func>
+ <func>addCleanup</func>
+ <func>assertRegexpMatches</func>
+ <func>assertLessEqual</func>
+ <func>tearDownClass</func>
+ <func>doCleanups</func>
+ <func>assertRegex</func>
+ <func>assertSameElements</func>
+ <func>assertDictEqual</func>
+ <func>assertItemsEqual</func>
+ <func>assertIn</func>
+ <func>assertNotRegexpMatches</func>
+ <func>assertNotIsInstance</func>
+ <func>assertTupleEqual</func>
+ <func>assertIsNone</func>
+ <func>assertIs</func>
+ <func>assertIsInstance</func>
+ <func>assertWarnsRegex</func>
+ <func>setUpClass</func>
+ <func>assertListEqual</func>
+ <func>assertIsNot</func>
+ <func>assertSequenceEqual</func>
+ <func>longMessage</func>
+ <func>skipTest</func>
+ <func>assertWarns</func>
+ <func>assertSetEqual</func></python>
+ <python version="2.5"><func>assertCountEqual</func>
+ <func>assertNotRegex</func>
+ <func>assertRaisesRegex</func>
+ <func>maxDiff</func>
+ <func>addTypeEqualityFunc</func>
+ <func>assertGreater</func>
+ <func>assertDictContainsSubset</func>
+ <func>assertLess</func>
+ <func>assertMultiLineEqual</func>
+ <func>assertIsNotNone</func>
+ <func>assertGreaterEqual</func>
+ <func>assertNotIn</func>
+ <func>assertRaisesRegexp</func>
+ <func>addCleanup</func>
+ <func>assertRegexpMatches</func>
+ <func>assertLessEqual</func>
+ <func>tearDownClass</func>
+ <func>doCleanups</func>
+ <func>assertRegex</func>
+ <func>assertSameElements</func>
+ <func>assertDictEqual</func>
+ <func>assertItemsEqual</func>
+ <func>assertIn</func>
+ <func>assertNotRegexpMatches</func>
+ <func>assertNotIsInstance</func>
+ <func>assertTupleEqual</func>
+ <func>assertIsNone</func>
+ <func>assertIs</func>
+ <func>assertIsInstance</func>
+ <func>assertWarnsRegex</func>
+ <func>setUpClass</func>
+ <func>assertListEqual</func>
+ <func>assertIsNot</func>
+ <func>assertSequenceEqual</func>
+ <func>longMessage</func>
+ <func>skipTest</func>
+ <func>assertWarns</func>
+ <func>assertSetEqual</func></python>
+ <python version="2.6"><func>assertCountEqual</func>
+ <func>assertNotRegex</func>
+ <func>assertRaisesRegex</func>
+ <func>maxDiff</func>
+ <func>addTypeEqualityFunc</func>
+ <func>assertGreater</func>
+ <func>assertDictContainsSubset</func>
+ <func>assertLess</func>
+ <func>assertMultiLineEqual</func>
+ <func>assertIsNotNone</func>
+ <func>assertGreaterEqual</func>
+ <func>assertNotIn</func>
+ <func>assertRaisesRegexp</func>
+ <func>addCleanup</func>
+ <func>assertRegexpMatches</func>
+ <func>assertLessEqual</func>
+ <func>tearDownClass</func>
+ <func>doCleanups</func>
+ <func>assertRegex</func>
+ <func>assertSameElements</func>
+ <func>assertDictEqual</func>
+ <func>assertItemsEqual</func>
+ <func>assertIn</func>
+ <func>assertNotRegexpMatches</func>
+ <func>assertNotIsInstance</func>
+ <func>assertTupleEqual</func>
+ <func>assertIsNone</func>
+ <func>assertIs</func>
+ <func>assertIsInstance</func>
+ <func>assertWarnsRegex</func>
+ <func>setUpClass</func>
+ <func>assertListEqual</func>
+ <func>assertIsNot</func>
+ <func>assertSequenceEqual</func>
+ <func>longMessage</func>
+ <func>skipTest</func>
+ <func>assertWarns</func>
+ <func>assertSetEqual</func></python>
+ <python version="2.7"><func>assertCountEqual</func>
+ <func>assertNotRegex</func>
+ <func>assertWarnsRegex</func>
+ <func>assertRaisesRegex</func>
+ <func>assertRegex</func>
+ <func>assertSameElements</func>
+ <func>assertWarns</func></python>
+ <python version="3.0"><func>assertCountEqual</func>
+ <func>assertNotRegex</func>
+ <func>assertWarnsRegex</func>
+ <func>setUpClass</func>
+ <func>assertRaisesRegex</func>
+ <func>assertNotRegexpMatches</func>
+ <func>assertNotIsInstance</func>
+ <func>assertRegex</func>
+ <func>tearDownClass</func>
+ <func>assertItemsEqual</func>
+ <func>maxDiff</func>
+ <func>assertWarns</func>
+ <func>assertIsInstance</func></python>
+ <python version="3.1"><func>assertCountEqual</func>
+ <func>assertNotRegex</func>
+ <func>assertWarnsRegex</func>
+ <func>setUpClass</func>
+ <func>assertRaisesRegex</func>
+ <func>assertNotRegexpMatches</func>
+ <func>assertNotIsInstance</func>
+ <func>assertRegex</func>
+ <func>tearDownClass</func>
+ <func>assertItemsEqual</func>
+ <func>maxDiff</func>
+ <func>assertWarns</func>
+ <func>assertIsInstance</func></python>
+ <python version="3.2"><func>assertItemsEqual</func>
+ <func>assertNotRegexpMatches</func></python>
+ <python version="3.3"><func>assertSameElements</func>
+ <func>assertItemsEqual</func>
+ <func>assertNotRegexpMatches</func></python>
+ </class_name>
+</root>
diff --git a/python/helpers/tools/stdlib_packages.txt b/python/helpers/tools/stdlib_packages.txt
new file mode 100644
index 0000000..619651eb
--- /dev/null
+++ b/python/helpers/tools/stdlib_packages.txt
@@ -0,0 +1,288 @@
+abc
+aifc
+antigravity
+anydbm
+argparse
+array
+ast
+asynchat
+asyncore
+atexit
+audiodev
+audioop
+base64
+BaseHTTPServer
+Bastion
+bdb
+binascii
+binhex
+bisect
+bsddb
+build_class
+__builtin__
+builtins
+bz2
+calendar
+cgi
+CGIHTTPServer
+cgitb
+chunk
+cmath
+cmd
+code
+codecs
+codeop
+collections
+colorsys
+commands
+compileall
+compiler
+concurrent
+configparser
+ConfigParser
+contextlib
+Cookie
+cookielib
+copy
+copyreg
+copy_reg
+cPickle
+cProfile
+crypt
+cStringIO
+csv
+ctypes
+curses
+datetime
+dbhash
+dbm
+decimal
+difflib
+dircache
+dis
+distutils
+doctest
+DocXMLRPCServer
+dumbdbm
+dummy_thread
+dummy_threading
+email
+encodings
+errno
+exceptions
+fcntl
+filecmp
+fileinput
+fnmatch
+formatter
+fpectl
+fpformat
+fractions
+ftplib
+functools
+__future__
+future_builtins
+gc
+genericpath
+getopt
+getpass
+gettext
+glob
+gopherlib
+grp
+gzip
+hashlib
+heapq
+hmac
+hotshot
+html
+htmlentitydefs
+htmllib
+HTMLParser
+http
+httplib
+idlelib
+ihooks
+imaplib
+imghdr
+imp
+importlib
+imputil
+inspect
+io
+itertools
+json
+keyword
+lib2to3
+linecache
+linuxaudiodev
+locale
+logging
+macpath
+macurl2path
+mailbox
+mailcap
+markupbase
+marshal
+math
+md5
+mhlib
+mimetools
+mimetypes
+MimeWriter
+mimify
+mmap
+modulefinder
+multifile
+multiprocessing
+mutex
+netrc
+new
+nis
+nntplib
+ntpath
+nturl2path
+numbers
+opcode
+operator
+optparse
+os
+os2emxpath
+ossaudiodev
+parser
+pdb
+pickle
+pickletools
+pipes
+pkgutil
+platform
+plistlib
+popen2
+poplib
+posix
+posixfile
+posixpath
+pprint
+profile
+pstats
+psycopg2
+pty
+pwd
+pyclbr
+py_compile
+pydoc
+pydoc_data
+pydoc_topics
+pyexpat
+queue
+Queue
+quopri
+random
+re
+readline
+reconvert
+regex
+regex_syntax
+regsub
+repr
+reprlib
+resource
+rexec
+rfc822
+rlcompleter
+robotparser
+runpy
+sched
+select
+sets
+sgmllib
+sha
+shelve
+shlex
+shutil
+signal
+SimpleHTTPServer
+SimpleXMLRPCServer
+site
+smtpd
+smtplib
+sndhdr
+socket
+socketserver
+SocketServer
+spwd
+sqlite3
+sre
+sre_compile
+sre_constants
+sre_parse
+ssl
+stat
+statcache
+statvfs
+string
+StringIO
+stringold
+stringprep
+strop
+struct
+subprocess
+sunau
+sunaudio
+symbol
+symtable
+sys
+sysconfig
+syslog
+tabnanny
+tarfile
+telnetlib
+tempfile
+termios
+test
+textwrap
+this
+thread
+_thread
+threading
+time
+timeit
+timing
+tkinter
+toaiff
+token
+tokenize
+trace
+traceback
+tty
+turtle
+turtledemo
+types
+tzparse
+unicodedata
+unittest
+urllib
+urllib2
+urlparse
+user
+UserDict
+UserList
+UserString
+uu
+uuid
+warnings
+wave
+weakref
+webbrowser
+whichdb
+whrandom
+wsgiref
+xdrlib
+xml
+xmllib
+xmlrpc
+xmlrpclib
+xxsubtype
+zipfile
+zipimport
+zlib
diff --git a/python/helpers/tools/versions.xml b/python/helpers/tools/versions.xml
new file mode 100644
index 0000000..9e92744
--- /dev/null
+++ b/python/helpers/tools/versions.xml
@@ -0,0 +1 @@
+<?xml version="1.0" ?><root><python version="2.4"><module>sqlite3.test.factory</module><module>importlib.test.source.test_file_loader</module><module>test.test_future_builtins</module><module>_dummy_thread</module><module>pydoc_data.topics</module><module>test.test_epoll</module><module>dbm.gnu</module><module>tkinter.test.test_tkinter.test_font</module><module>lib2to3.fixer_util</module><module>importlib.test.import_.test_caching</module><module>ctypes.test.test_cast</module><module>bsddb.test.test_distributed_transactions</module><module>encodings.utf_8_sig</module><module>antigravity</module><module>json.encoder</module><module>test.test_future5</module><module>test.test_future4</module><module>test.json_tests.test_recursion</module><module>runpy</module><module>multiprocessing.pool</module><module>tkinter._fix</module><module>test.ssl_servers</module><module>test.test_float</module><module>lib2to3.fixes.fix_methodattrs</module><module>ctypes.test.test_funcptr</module><module>turtledemo.minimal_hanoi</module><module>turtledemo.fractalcurves</module><module>turtledemo.lindenmayer</module><module>test.test_abstract_numbers</module><module>ctypes.test.test_pickling</module><module>lib2to3.fixer_base</module><module>lib2to3.fixes.fix_types</module><module>ctypes.test.test_pep3118</module><module>test.json_tests.test_separators</module><module>importlib.test.import_.test___package__</module><module>xml.etree</module><module>importlib.test.benchmark</module><module>json.tests.test_unicode</module><module>turtledemo.colormixer</module><module>lib2to3.fixes.fix_xreadlines</module><module>tkinter.ttk</module><module>email.mime.message</module><module>distutils.tests.test_core</module><module>test.test_hashlib</module><module>encodings.cp720</module><module>test.json_tests</module><module>unittest.test.test_result</module><module>test.test_sched</module><module>turtledemo.yinyang</module><module>ctypes.test.test_slicing</module><module>test.test_xmlrpc_net</module><module
>test.test_sys_setprofile</module><module>tkinter.test.test_ttk</module><module>test.test_ttk_textonly</module><module>fractions</module><module>importlib.test.source.test_path_hook</module><module>bsddb.test.test_fileid</module><module>tkinter.font</module><module>unittest.__main__</module><module>tkinter.commondialog</module><module>lib2to3.fixes.fix_apply</module><module>test.test_keywordonlyarg</module><module>test.test_setcomps</module><module>test.test_collections</module><module>json.tests.test_speedups</module><module>json.tests.test_recursion</module><module>numbers</module><module>test.test_cmd</module><module>ctypes.test.test_checkretval</module><module>sqlite3.test</module><module>lib2to3.fixes.fix_long</module><module>concurrent.futures.process</module><module>distutils.tests.test_build</module><module>test.test_urllib_response</module><module>idlelib.tabbedpages</module><module>urllib.request</module><module>test.test_print</module><module>argparse</module><module>ctypes.test.test_functions</module><module>test.test_multiprocessing</module><module>ctypes.test.test_keeprefs</module><module>test.test_dbm_gnu</module><module>functools</module><module>ctypes.test.test_libc</module><module>idlelib.macosxSupport</module><module>importlib.util</module><module>test.json_tests.test_pass1</module><module>uuid</module><module>turtledemo.bytedesign</module><module>lib2to3.pgen2.pgen</module><module>distutils.tests.test_install_data</module><module>test.script_helper</module><module>test.test_readline</module><module>unittest.test.test_break</module><module>multiprocessing.synchronize</module><module>multiprocessing.heap</module><module>test.bad_coding2</module><module>test.test_dynamic</module><module>lib2to3.fixes.fix_imports2</module><module>lib2to3.fixes.fix_unicode</module><module>test.test_pep3120</module><module>distutils.tests.test_log</module><module>ctypes.test.test_cfuncs</module><module>ctypes.test.test_init</module><module>test.test_asyncore</module><m
odule>importlib.test.builtin.test_finder</module><module>distutils.tests.test_build_ext</module><module>distutils.command.upload</module><module>test.test_http_cookiejar</module><module>sqlite3.test.types</module><module>ast</module><module>test.test_html</module><module>test.bad_coding</module><module>sqlite3.test.py25tests</module><module>ctypes.test.test_integers</module><module>ctypes.test.test_stringptr</module><module>lib2to3.refactor</module><module>unittest.test.test_setups</module><module>distutils.tests.test_install_lib</module><module>test.test_pep3131</module><module>test.__main__</module><module>test.badsyntax_pep3120</module><module>_collections</module><module>lib2to3.fixes.fix_metaclass</module><module>tkinter.test.test_ttk.test_functions</module><module>email.mime.image</module><module>turtledemo.planet_and_moon</module><module>unittest.test._test_warnings</module><module>test.test_coding</module><module>distutils.tests.test_sdist</module><module>importlib.test.frozen.test_loader</module><module>test.test_lib2to3</module><module>ctypes.macholib.dyld</module><module>lib2to3.fixes.fix_idioms</module><module>unittest.test.test_skipping</module><module>hashlib</module><module>test.test_contextlib</module><module>tkinter.tix</module><module>test.infinite_reload</module><module>_posixsubprocess</module><module>_curses</module><module>turtledemo.clock</module><module>_fileio</module><module>reprlib</module><module>lib2to3.fixes.fix_print</module><module>test.test_range</module><module>ctypes.test.test_simplesubclasses</module><module>xmlrpc.client</module><module>ctypes.test.test_byteswap</module><module>lib2to3.fixes.fix_callable</module><module>lib2to3</module><module>wsgiref.handlers</module><module>ctypes.test.test_pointers</module><module>lib2to3.fixes.fix_standarderror</module><module>abc</module><module>importlib.test.extension.test_case_sensitivity</module><module>_sqlite3</module><module>test.test_copyreg</module><module>distutils.tests.test_ccomp
iler</module><module>unittest.test.test_case</module><module>ctypes.macholib</module><module>distutils.tests.test_text_file</module><module>test.json_tests.test_indent</module><module>tkinter.dialog</module><module>test.test_memoryview</module><module>json.tests.test_scanstring</module><module>json.tests</module><module>importlib.test.import_.test_fromlist</module><module>ctypes._endian</module><module>lib2to3.btm_matcher</module><module>distutils.tests.test_util</module><module>_markupbase</module><module>importlib.test.import_.test_meta_path</module><module>lib2to3.tests.pytree_idempotency</module><module>lib2to3.fixes.fix_buffer</module><module>xml.etree.ElementTree</module><module>turtledemo.forest</module><module>tkinter.test.test_ttk.test_widgets</module><module>test.warning_tests</module><module>lib2to3.pgen2.conv</module><module>test.test_sqlite</module><module>importlib.test.frozen.test_finder</module><module>lib2to3.fixes.fix_renames</module><module>_lsprof</module><module>test.test_file2k</module><module>test.test_xml_etree</module><module>bsddb.test.test_early_close</module><module>html.parser</module><module>ctypes.test.test_bytes</module><module>pyexpat.model</module><module>test.test_univnewlines2k</module><module>test.test_linecache</module><module>lib2to3.pgen2.parse</module><module>lib2to3.fixes.fix_input</module><module>urllib.response</module><module>distutils.tests.test_unixccompiler</module><module>distutils.tests.test_clean</module><module>multiprocessing.util</module><module>lib2to3.pgen2</module><module>test.test_zipimport_support</module><module>build_class</module><module>ctypes.test.test_array_in_pointer</module><module>ctypes</module><module>http.cookies</module><module>lib2to3.fixes.fix_exitfunc</module><module>test.json_tests.test_fail</module><module>ctypes.test.test_prototypes</module><module>test.badsyntax_3131</module><module>lib2to3.tests.test_all_fixers</module><module>test.profilee</module><module>_stringio</module><module>ctype
s.test.test_callbacks</module><module>ctypes.test.test_python_api</module><module>ctypes.test.test_loading</module><module>tkinter.scrolledtext</module><module>email.mime.application</module><module>test.tracedmodules.testmod</module><module>importlib.test.extension.test_path_hook</module><module>encodings.mac_croatian</module><module>importlib.test.import_.test_relative_imports</module><module>distutils.tests.test_bdist_msi</module><module>ctypes.test.test_win32</module><module>test.test_complex_args</module><module>encodings.utf_32</module><module>turtledemo.nim</module><module>lib2to3.fixes.fix_ws_comma</module><module>lib2to3.pgen2.grammar</module><module>test.test_cmd_line_script</module><module>encodings.mac_arabic</module><module>test.test_wait4</module><module>test.test_wait3</module><module>distutils.command.bdist_msi</module><module>distutils.tests.test_file_util</module><module>tkinter.test.test_ttk.test_style</module><module>test.test_http_cookies</module><module>json.decoder</module><module>ctypes.test.test_errcheck</module><module>unittest.test.test_program</module><module>lib2to3.tests.test_refactor</module><module>io</module><module>multiprocessing.queues</module><module>dbm.dumb</module><module>sqlite3</module><module>unittest.test.test_runner</module><module>ctypes.test.test_anon</module><module>test.test_property</module><module>test.test_macos</module><module>distutils.tests.test_filelist</module><module>ctypes.test.test_macholib</module><module>distutils.tests.test_bdist</module><module>ctypes.macholib.dylib</module><module>test.test_dbm_dumb</module><module>ctypes.test</module><module>distutils.tests.test_archive_util</module><module>lib2to3.fixes.fix_getcwdu</module><module>lib2to3.fixes.fix_raw_input</module><module>_ctypes_test</module><module>email.mime</module><module>test.test_mutex</module><module>test.test_argparse</module><module>json.tool</module><module>ctypes.test.test_find</module><module>unittest.main</module><module>multiprocessi
ng.forking</module><module>json.tests.test_decode</module><module>multiprocessing.dummy</module><module>test.test_with</module><module>_bytesio</module><module>ctypes.test.test_returnfuncptrs</module><module>ctypes.test.test_memfunctions</module><module>test.test_sysconfig</module><module>distutils.tests.test_upload</module><module>test.test_py3kwarn</module><module>encodings.mac_romanian</module><module>test.test_dictcomps</module><module>bsddb.test.test_cursor_pget_bug</module><module>multiprocessing.reduction</module><module>test.win_console_handler</module><module>test.datetimetester</module><module>ctypes.test.test_buffers</module><module>lib2to3.fixes.fix_execfile</module><module>test.test_sys_settrace</module><module>ctypes.test.test_structures</module><module>wsgiref</module><module>turtledemo.peace</module><module>multiprocessing.managers</module><module>test.test_old_mailbox</module><module>encodings.mac_farsi</module><module>unittest.loader</module><module>idlelib.AutoComplete</module><module>email.mime.multipart</module><module>bsddb.test.test_dbenv</module><module>lib2to3.tests.test_parser</module><module>importlib.test.extension.util</module><module>lib2to3.fixes.fix_dict</module><module>bsddb.test.test_sequence</module><module>importlib.machinery</module><module>email.test.test_email_renamed</module><module>ctypes.test.test_incomplete</module><module>xml.etree.ElementInclude</module><module>lib2to3.fixes.fix_numliterals</module><module>lib2to3.fixes.fix_map</module><module>_pyio</module><module>_struct</module><module>json.tests.test_separators</module><module>importlib.test.test_util</module><module>test.pydoc_mod</module><module>json.tests.test_float</module><module>unittest.test.test_discovery</module><module>concurrent.futures.thread</module><module>test.make_ssl_certs</module><module>lib2to3.fixes.fix_nonzero</module><module>distutils.tests.setuptools_build_ext</module><module>lib2to3.btm_utils</module><module>genericpath</module><module>spwd</mo
dule><module>test.test_defaultdict</module><module>json.tests.test_check_circular</module><module>distutils.tests.test_config_cmd</module><module>concurrent</module><module>importlib.test.test_abc</module><module>email.mime.audio</module><module>test.test_bigaddrspace</module><module>distutils.tests.test_check</module><module>sqlite3.test.userfunctions</module><module>unittest.test.test_assertions</module><module>ctypes.test.test_bitfields</module><module>distutils.tests.test_bdist_wininst</module><module>turtledemo.paint</module><module>test.tracedmodules</module><module>lib2to3.fixes.fix_raise</module><module>idlelib.MultiCall</module><module>test.time_hashlib</module><module>lib2to3.fixes.fix_exec</module><module>urllib.parse</module><module>_elementtree</module><module>test.test_gdb</module><module>test.json_tests.test_pass2</module><module>test.json_tests.test_pass3</module><module>distutils.command.install_egg_info</module><module>wsgiref.util</module><module>json.tests.test_dump</module><module>ctypes.test.test_unaligned_structures</module><module>lib2to3.pygram</module><module>http.server</module><module>tkinter.constants</module><module>test.test_smtpd</module><module>distutils.msvc9compiler</module><module>test.json_tests.test_decode</module><module>multiprocessing</module><module>test.test_kqueue</module><module>unittest.test.support</module><module>lib2to3.pytree</module><module>_thread</module><module>importlib.test</module><module>test.test_cProfile</module><module>distutils.tests.test_spawn</module><module>unittest.suite</module><module>_ctypes</module><module>test.test_reprlib</module><module>json.scanner</module><module>importlib.test.import_.test_packages</module><module>importlib.test.regrtest</module><module>lib2to3.fixes.fix_imports</module><module>test.test_concurrent_futures</module><module>distutils.tests.test_cygwinccompiler</module><module>distutils.tests.setuptools_extension</module><module>tkinter.test</module><module>test.json_tests.test
_scanstring</module><module>importlib.test.abc</module><module>lib2to3.main</module><module>distutils.tests.test_msvc9compiler</module><module>test.buffer_tests</module><module>test.encoded_modules</module><module>tkinter.messagebox</module><module>test.outstanding_bugs</module><module>idlelib.HyperParser</module><module>importlib.test.source.test_source_encoding</module><module>test.fork_wait</module><module>_ast</module><module>_datetime</module><module>unittest.test.test_functiontestcase</module><module>importlib.test.source.test_finder</module><module>test.test_telnetlib</module><module>wsgiref.headers</module><module>distutils.tests.test_extension</module><module>importlib.test.import_.test_api</module><module>xml.etree.ElementPath</module><module>tkinter.test.test_ttk.test_extensions</module><module>test.test_int_literal</module><module>importlib._bootstrap</module><module>importlib.test.import_.test_path</module><module>importlib.test.util</module><module>lib2to3.fixes.fix_tuple_params</module><module>test.encoded_modules.module_iso_8859_1</module><module>importlib.test.source</module><module>test.test_numeric_tower</module><module>ctypes.test.test_unicode</module><module>turtledemo.two_canvases</module><module>test.test_zipfile64</module><module>lib2to3.fixes.fix_operator</module><module>lib2to3.fixes.fix_isinstance</module><module>test.test_pydoc</module><module>_weakrefset</module><module>lib2to3.fixes.fix_reduce</module><module>test.test_ftplib</module><module>turtledemo.chaos</module><module>test.test_index</module><module>tkinter.filedialog</module><module>test.test_flufl</module><module>email.mime.text</module><module>sysconfig</module><module>tkinter.__main__</module><module>builtins</module><module>test.test_runpy</module><module>test.test_cprofile</module><module>importlib.test.extension</module><module>pydoc_topics</module><module>lib2to3.tests.test_fixers</module><module>unittest.util</module><module>lib2to3.fixes</module><module>importlib.test.so
urce.util</module><module>test.test_sunau</module><module>test.test_raise</module><module>lib2to3.fixes.fix_intern</module><module>lib2to3.pgen2.driver</module><module>distutils.versionpredicate</module><module>test.test_exception_variations</module><module>turtle</module><module>test.test_wsgiref</module><module>test.json_tests.test_speedups</module><module>test.test_uuid</module><module>unittest.test</module><module>ctypes.util</module><module>lib2to3.fixes.fix_has_key</module><module>distutils.tests.test_versionpredicate</module><module>importlib.test.import_</module><module>test.test_compileall</module><module>lib2to3.tests.test_util</module><module>test.test_email_renamed</module><module>importlib.test.extension.test_finder</module><module>lib2to3.patcomp</module><module>turtledemo</module><module>test.json_tests.test_default</module><module>_string</module><module>sqlite3.test.dump</module><module>test.test_fileio</module><module>lib2to3.tests.test_main</module><module>ctypes.test.test_numbers</module><module>importlib.test.__main__</module><module>test.test_ctypes</module><module>fpectl</module><module>encodings.mac_centeuro</module><module>test.inspect_fodder2</module><module>test.test_SimpleHTTPServer</module><module>test.mock_socket</module><module>ctypes.test.test_random_things</module><module>test.json_tests.test_encode_basestring_ascii</module><module>distutils.tests.test_bdist_rpm</module><module>distutils.config</module><module>test.test_dbm_ndbm</module><module>distutils.tests.test_config</module><module>lib2to3.fixes.fix_repr</module><module>test.test_osx_env</module><module>test.test_modulefinder</module><module>test.test_aifc</module><module>wsgiref.simple_server</module><module>lib2to3.fixes.fix_import</module><module>_hashlib</module><module>test.inspect_fodder</module><module>test.test_platform</module><module>copyreg</module><module>ctypes.test.test_internals</module><module>test.test_docxmlrpc</module><module>concurrent.futures</module><modul
e>xml.etree.cElementTree</module><module>unittest.result</module><module>concurrent.futures._base</module><module>lib2to3.fixes.fix_xrange</module><module>tkinter.test.test_tkinter</module><module>_curses_panel</module><module>future_builtins</module><module>lib2to3.fixes.fix_itertools_imports</module><module>lib2to3.tests</module><module>_io</module><module>sqlite3.test.hooks</module><module>dbm</module><module>_multiprocessing</module><module>lib2to3.fixes.fix_paren</module><module>test.dis_module</module><module>_compat_pickle</module><module>importlib</module><module>json.tests.test_encode_basestring_ascii</module><module>_json</module><module>pyexpat.errors</module><module>ctypes.test.runtests</module><module>cProfile</module><module>tkinter.test.test_tkinter.test_loadtk</module><module>test.test_memoryio</module><module>lib2to3.fixes.fix_itertools</module><module>test.test_rlcompleter</module><module>test.test_sndhdr</module><module>ssl</module><module>test.test_smtplib</module><module>test.test_int</module><module>ctypes.test.test_delattr</module><module>distutils.tests.test_bdist_dumb</module><module>unittest.runner</module><module>test.test_dictviews</module><module>ctypes.wintypes</module><module>test.test_strlit</module><module>test.test_ttk_guionly</module><module>encodings.cp858</module><module>lib2to3.fixes.fix_urllib</module><module>pydoc_data</module><module>test.test_xml_etree_c</module><module>lib2to3.fixes.fix_funcattrs</module><module>plistlib</module><module>multiprocessing.dummy.connection</module><module>unittest.case</module><module>json.tests.test_indent</module><module>distutils.tests.test_dir_util</module><module>test.json_tests.test_float</module><module>tkinter.test.support</module><module>unittest.test.dummy</module><module>lib2to3.tests.test_pytree</module><module>ctypes.test.test_values</module><module>json.tests.test_fail</module><module>_pickle</module><module>lib2to3.fixes.fix_zip</module><module>test.test_structmembers</module><mo
dule>bsddb.test.test_replication</module><module>tkinter.dnd</module><module>sqlite3.dump</module><module>test.test_pkgutil</module><module>importlib.test.builtin</module><module>json</module><module>lib2to3.pgen2.tokenize</module><module>turtledemo.round_dance</module><module>test.encoded_modules.module_koi8_r</module><module>test.test_nntplib</module><module>ctypes.test.test_sizes</module><module>tkinter</module><module>ctypes.test.test_arrays</module><module>sqlite3.test.transactions</module><module>ctypes.test.test_strings</module><module>ctypes.test.test_objects</module><module>lib2to3.fixes.fix_basestring</module><module>distutils.tests.test_install_headers</module><module>test.test_importlib</module><module>ctypes.test.test_refcounts</module><module>sqlite3.test.dbapi</module><module>test.test_smtpnet</module><module>tkinter.colorchooser</module><module>encodings.utf_32_be</module><module>test.test_tk</module><module>turtledemo.penrose</module><module>test.json_tests.test_dump</module><module>lib2to3.fixes.fix_throw</module><module>test.test_ssl</module><module>test.test_xdrlib</module><module>html</module><module>email.test.test_email_codecs_renamed</module><module>dbm.ndbm</module><module>turtledemo.wikipedia</module><module>http</module><module>unittest.test.test_loader</module><module>ctypes.test.test_struct_fields</module><module>bsddb.test.test_db</module><module>xmlrpc.server</module><module>ctypes.test.test_parameters</module><module>urllib.error</module><module>distutils.tests.test_build_clib</module><module>json.tests.test_pass2</module><module>json.tests.test_pass3</module><module>json.tests.test_pass1</module><module>test.test_weakset</module><module>test.test_strtod</module><module>test.test_io</module><module>html.entities</module><module>test.gdb_sample</module><module>http.client</module><module>test.test_msilib</module><module>ctypes.test.test_repr</module><module>importlib.test.frozen</module><module>_dbm</module><module>multiprocessing.shar
edctypes</module><module>encodings.utf_32_le</module><module>test.curses_tests</module><module>tkinter.simpledialog</module><module>importlib.test.builtin.util</module><module>test.test_codecencodings_iso2022</module><module>xmlrpc</module><module>test.test_unpack_ex</module><module>importlib.abc</module><module>distutils.tests.test_version</module><module>test.test_bytes</module><module>tkinter.test.test_tkinter.test_text</module><module>turtledemo.__main__</module><module>test.support</module><module>test.test_urllib2_localnet</module><module>test.test_syslog</module><module>multiprocessing.process</module><module>distutils.tests.test_register</module><module>importlib.test.source.test_abc_loader</module><module>importlib.test.test_api</module><module>test.test_code</module><module>test.test_super</module><module>lib2to3.fixes.fix_next</module><module>tkinter.test.runtktests</module><module>test.test_genericpath</module><module>distutils.tests.test_sysconfig</module><module>test.test_listcomps</module><module>test.test_functools</module><module>test.test_ast</module><module>http.cookiejar</module><module>lib2to3.fixes.fix_set_literal</module><module>distutils.tests.test_dep_util</module><module>idlelib.RstripExtension</module><module>test.test_ascii_formatd</module><module>lib2to3.fixes.fix_future</module><module>json.tests.test_default</module><module>sqlite3.dbapi2</module><module>test.relimport</module><module>unittest.test.test_suite</module><module>_functools</module><module>test.lock_tests</module><module>lib2to3.pgen2.token</module><module>test.test_typechecks</module><module>ctypes.macholib.framework</module><module>idlelib.AutoCompleteWindow</module><module>lib2to3.fixes.fix_except</module><module>importlib.test.builtin.test_loader</module><module>sqlite3.test.regression</module><module>test.json_tests.test_unicode</module><module>lib2to3.tests.support</module><module>distutils.command.check</module><module>urllib.robotparser</module><module>multiprocessi
ng.connection</module><module>test.test_fractions</module><module>test.test_abc</module><module>_warnings</module><module>importlib.test.extension.test_loader</module><module>lib2to3.fixes.fix_ne</module><module>test.test_buffer</module><module>lib2to3.fixes.fix_filter</module><module>importlib.test.source.test_case_sensitivity</module><module>wsgiref.validate</module><module>turtledemo.tree</module><module>_types</module><module>test.test_pdb</module><module>test.test_timeit</module><module>lib2to3.pgen2.literals</module><module>email.mime.base</module><module>ctypes.test.test_as_parameter</module><module>test.test_bigmem</module><module>ctypes.test.test_frombuffer</module><module>test.test_undocumented_details</module><module>test.test_pstats</module><module>contextlib</module><module>email.mime.nonmultipart</module><module>test.test_pipes</module><module>lib2to3.fixes.fix_sys_exc</module><module>test.test_poplib</module><module>test.test_metaclass</module><module>ctypes.test.test_varsize_struct</module><module>_abcoll</module><module>lib2to3.__main__</module><module>importlib.test.import_.util</module><module>test.test_startfile</module><module>unittest.signals</module><module>ctypes.test.test_errno</module><module>test.test_pep352</module><module>distutils.tests.test_cmd</module><module>bsddb.test.test_compare</module><module>test.test_json</module><module>test.test_httpservers</module><func>bytearray</func><func>all</func><func>memoryview</func><func>next</func><func>bin</func><func>bytes</func><func>ascii</func><func>format</func><func>any</func><func>print</func></python><python 
version="2.5"><module>importlib.test.source.test_file_loader</module><module>test.test_future_builtins</module><module>_dummy_thread</module><module>pydoc_data.topics</module><module>test.test_epoll</module><module>dbm.gnu</module><module>tkinter.test.test_tkinter.test_font</module><module>lib2to3.fixer_util</module><module>importlib.test.import_.test_caching</module><module>bsddb.test.test_distributed_transactions</module><module>antigravity</module><module>json.encoder</module><module>test.test_future5</module><module>test.test_future4</module><module>test.json_tests.test_recursion</module><module>multiprocessing.pool</module><module>tkinter._fix</module><module>test.ssl_servers</module><module>lib2to3.fixes.fix_methodattrs</module><module>turtledemo.minimal_hanoi</module><module>turtledemo.fractalcurves</module><module>turtledemo.lindenmayer</module><module>test.test_abstract_numbers</module><module>ctypes.test.test_pickling</module><module>lib2to3.fixer_base</module><module>lib2to3.fixes.fix_types</module><module>ctypes.test.test_pep3118</module><module>test.json_tests.test_separators</module><module>importlib.test.import_.test___package__</module><module>importlib.test.benchmark</module><module>json.tests.test_unicode</module><module>turtledemo.colormixer</module><module>lib2to3.fixes.fix_xreadlines</module><module>tkinter.ttk</module><module>distutils.tests.test_core</module><module>encodings.cp720</module><module>test.json_tests</module><module>unittest.test.test_result</module><module>test.test_sched</module><module>turtledemo.yinyang</module><module>test.test_xmlrpc_net</module><module>test.test_sys_setprofile</module><module>tkinter.test.test_ttk</module><module>test.test_ttk_textonly</module><module>fractions</module><module>importlib.test.source.test_path_hook</module><module>reconvert</module><module>bsddb.test.test_fileid</module><module>tkinter.font</module><module>unittest.__main__</module><module>tkinter.commondialog</module><module>lib2to3.fixes.fi
x_apply</module><module>test.test_keywordonlyarg</module><module>test.test_setcomps</module><module>test.test_collections</module><module>json.tests.test_speedups</module><module>json.tests.test_recursion</module><module>numbers</module><module>test.test_cmd</module><module>email.MIMEBase</module><module>lib2to3.fixes.fix_long</module><module>concurrent.futures.process</module><module>distutils.tests.test_build</module><module>test.test_urllib_response</module><module>idlelib.tabbedpages</module><module>urllib.request</module><module>test.test_print</module><module>argparse</module><module>test.test_multiprocessing</module><module>test.test_dbm_gnu</module><module>importlib.util</module><module>test.json_tests.test_pass1</module><module>turtledemo.bytedesign</module><module>lib2to3.pgen2.pgen</module><module>distutils.tests.test_install_data</module><module>test.script_helper</module><module>test.test_readline</module><module>unittest.test.test_break</module><module>multiprocessing.synchronize</module><module>multiprocessing.heap</module><module>test.test_dynamic</module><module>lib2to3.fixes.fix_imports2</module><module>lib2to3.fixes.fix_unicode</module><module>test.test_pep3120</module><module>distutils.tests.test_log</module><module>test.test_asyncore</module><module>importlib.test.builtin.test_finder</module><module>distutils.tests.test_build_ext</module><module>test.test_http_cookiejar</module><module>ast</module><module>test.test_html</module><module>sqlite3.test.py25tests</module><module>lib2to3.refactor</module><module>unittest.test.test_setups</module><module>distutils.tests.test_install_lib</module><module>test.test_pep3131</module><module>test.__main__</module><module>test.badsyntax_pep3120</module><module>_collections</module><module>lib2to3.fixes.fix_metaclass</module><module>tkinter.test.test_ttk.test_functions</module><module>turtledemo.planet_and_moon</module><module>unittest.test._test_warnings</module><module>distutils.tests.test_sdist</module><mod
ule>importlib.test.frozen.test_loader</module><module>test.test_lib2to3</module><module>lib2to3.fixes.fix_idioms</module><module>unittest.test.test_skipping</module><module>tkinter.tix</module><module>_posixsubprocess</module><module>_curses</module><module>turtledemo.clock</module><module>_fileio</module><module>reprlib</module><module>lib2to3.fixes.fix_print</module><module>test.test_range</module><module>xmlrpc.client</module><module>lib2to3.fixes.fix_callable</module><module>lib2to3</module><module>lib2to3.fixes.fix_standarderror</module><module>abc</module><module>importlib.test.extension.test_case_sensitivity</module><module>test.test_copyreg</module><module>distutils.tests.test_ccompiler</module><module>unittest.test.test_case</module><module>distutils.tests.test_text_file</module><module>test.json_tests.test_indent</module><module>tkinter.dialog</module><module>regex_syntax</module><module>test.test_memoryview</module><module>json.tests.test_scanstring</module><module>json.tests</module><module>importlib.test.import_.test_fromlist</module><module>lib2to3.btm_matcher</module><module>distutils.tests.test_util</module><module>_markupbase</module><module>test.test_regex</module><module>importlib.test.import_.test_meta_path</module><module>lib2to3.tests.pytree_idempotency</module><module>lib2to3.fixes.fix_buffer</module><module>turtledemo.forest</module><module>tkinter.test.test_ttk.test_widgets</module><module>test.warning_tests</module><module>lib2to3.pgen2.conv</module><module>importlib.test.frozen.test_finder</module><module>lib2to3.fixes.fix_renames</module><module>test.test_file2k</module><module>bsddb.test.test_early_close</module><module>html.parser</module><module>ctypes.test.test_bytes</module><module>test.test_univnewlines2k</module><module>test.test_linecache</module><module>lib2to3.pgen2.parse</module><module>lib2to3.fixes.fix_input</module><module>urllib.response</module><module>distutils.tests.test_unixccompiler</module><module>distutils.tests.test
_clean</module><module>multiprocessing.util</module><module>lib2to3.pgen2</module><module>test.test_zipimport_support</module><module>build_class</module><module>http.cookies</module><module>lib2to3.fixes.fix_exitfunc</module><module>test.json_tests.test_fail</module><module>test.badsyntax_3131</module><module>lib2to3.tests.test_all_fixers</module><module>test.profilee</module><module>_stringio</module><module>tkinter.scrolledtext</module><module>test.tracedmodules.testmod</module><module>importlib.test.extension.test_path_hook</module><module>importlib.test.import_.test_relative_imports</module><module>distutils.tests.test_bdist_msi</module><module>encodings.utf_32</module><module>turtledemo.nim</module><module>lib2to3.fixes.fix_ws_comma</module><module>lib2to3.pgen2.grammar</module><module>test.test_cmd_line_script</module><module>distutils.tests.test_file_util</module><module>tkinter.test.test_ttk.test_style</module><module>test.test_http_cookies</module><module>json.decoder</module><module>unittest.test.test_program</module><module>lib2to3.tests.test_refactor</module><module>io</module><module>multiprocessing.queues</module><module>dbm.dumb</module><module>unittest.test.test_runner</module><module>test.test_property</module><module>test.test_macos</module><module>distutils.tests.test_filelist</module><module>distutils.tests.test_bdist</module><module>test.test_dbm_dumb</module><module>distutils.tests.test_archive_util</module><module>lib2to3.fixes.fix_getcwdu</module><module>lib2to3.fixes.fix_raw_input</module><module>test.test_mutex</module><module>test.test_argparse</module><module>json.tool</module><module>unittest.main</module><module>multiprocessing.forking</module><module>json.tests.test_decode</module><module>multiprocessing.dummy</module><module>_bytesio</module><module>test.test_sysconfig</module><module>distutils.tests.test_upload</module><module>test.test_py3kwarn</module><module>test.test_dictcomps</module><module>multiprocessing.reduction</module><m
odule>test.win_console_handler</module><module>test.datetimetester</module><module>lib2to3.fixes.fix_execfile</module><module>whrandom</module><module>test.test_sys_settrace</module><module>turtledemo.peace</module><module>multiprocessing.managers</module><module>unittest.loader</module><module>bsddb.test.test_dbenv</module><module>lib2to3.tests.test_parser</module><module>importlib.test.extension.util</module><module>lib2to3.fixes.fix_dict</module><module>importlib.machinery</module><module>lib2to3.fixes.fix_numliterals</module><module>lib2to3.fixes.fix_map</module><module>_pyio</module><module>json.tests.test_separators</module><module>importlib.test.test_util</module><module>test.pydoc_mod</module><module>json.tests.test_float</module><module>tzparse</module><module>unittest.test.test_discovery</module><module>concurrent.futures.thread</module><module>test.make_ssl_certs</module><module>lib2to3.fixes.fix_nonzero</module><module>distutils.tests.setuptools_build_ext</module><module>lib2to3.btm_utils</module><module>genericpath</module><module>json.tests.test_check_circular</module><module>distutils.tests.test_config_cmd</module><module>concurrent</module><module>importlib.test.test_abc</module><module>distutils.tests.test_check</module><module>unittest.test.test_assertions</module><module>distutils.tests.test_bdist_wininst</module><module>test.test_timing</module><module>turtledemo.paint</module><module>test.tracedmodules</module><module>lib2to3.fixes.fix_raise</module><module>lib2to3.fixes.fix_exec</module><module>urllib.parse</module><module>test.test_gdb</module><module>test.json_tests.test_pass2</module><module>test.json_tests.test_pass3</module><module>json.tests.test_dump</module><module>lib2to3.pygram</module><module>http.server</module><module>tkinter.constants</module><module>test.test_smtpd</module><module>distutils.msvc9compiler</module><module>test.json_tests.test_decode</module><module>multiprocessing</module><module>test.test_kqueue</module><module>un
ittest.test.support</module><module>lib2to3.pytree</module><module>_thread</module><module>importlib.test</module><module>distutils.tests.test_spawn</module><module>unittest.suite</module><module>test.test_reprlib</module><module>json.scanner</module><module>importlib.test.import_.test_packages</module><module>importlib.test.regrtest</module><module>lib2to3.fixes.fix_imports</module><module>test.test_concurrent_futures</module><module>distutils.tests.test_cygwinccompiler</module><module>distutils.tests.setuptools_extension</module><module>tkinter.test</module><module>test.json_tests.test_scanstring</module><module>importlib.test.abc</module><module>lib2to3.main</module><module>regex</module><module>email.MIMEMessage</module><module>distutils.tests.test_msvc9compiler</module><module>test.buffer_tests</module><module>test.encoded_modules</module><module>tkinter.messagebox</module><module>importlib.test.source.test_source_encoding</module><module>_datetime</module><module>unittest.test.test_functiontestcase</module><module>importlib.test.source.test_finder</module><module>test.test_telnetlib</module><module>distutils.tests.test_extension</module><module>importlib.test.import_.test_api</module><module>tkinter.test.test_ttk.test_extensions</module><module>test.test_int_literal</module><module>importlib._bootstrap</module><module>importlib.test.import_.test_path</module><module>importlib.test.util</module><module>lib2to3.fixes.fix_tuple_params</module><module>test.encoded_modules.module_iso_8859_1</module><module>importlib.test.source</module><module>test.test_numeric_tower</module><module>turtledemo.two_canvases</module><module>lib2to3.fixes.fix_operator</module><module>lib2to3.fixes.fix_isinstance</module><module>test.test_pydoc</module><module>_weakrefset</module><module>lib2to3.fixes.fix_reduce</module><module>test.test_ftplib</module><module>turtledemo.chaos</module><module>tkinter.filedialog</module><module>test.test_flufl</module><module>sysconfig</module><module>e
mail.MIMEImage</module><module>tkinter.__main__</module><module>email.MIMEText</module><module>builtins</module><module>test.test_cprofile</module><module>importlib.test.extension</module><module>pydoc_topics</module><module>lib2to3.tests.test_fixers</module><module>unittest.util</module><module>lib2to3.fixes</module><module>importlib.test.source.util</module><module>test.test_sunau</module><module>test.test_raise</module><module>lib2to3.fixes.fix_intern</module><module>lib2to3.pgen2.driver</module><module>turtle</module><module>test.json_tests.test_speedups</module><module>unittest.test</module><module>lib2to3.fixes.fix_has_key</module><module>importlib.test.import_</module><module>test.test_compileall</module><module>lib2to3.tests.test_util</module><module>importlib.test.extension.test_finder</module><module>lib2to3.patcomp</module><module>turtledemo</module><module>test.json_tests.test_default</module><module>_string</module><module>idlelib.buildapp</module><module>sqlite3.test.dump</module><module>test.test_fileio</module><module>lib2to3.tests.test_main</module><module>importlib.test.__main__</module><module>fpectl</module><module>test.test_SimpleHTTPServer</module><module>test.mock_socket</module><module>test.json_tests.test_encode_basestring_ascii</module><module>distutils.tests.test_bdist_rpm</module><module>distutils.config</module><module>test.test_dbm_ndbm</module><module>distutils.tests.test_config</module><module>lib2to3.fixes.fix_repr</module><module>test.test_osx_env</module><module>test.test_modulefinder</module><module>test.test_aifc</module><module>lib2to3.fixes.fix_import</module><module>copyreg</module><module>test.test_docxmlrpc</module><module>concurrent.futures</module><module>unittest.result</module><module>concurrent.futures._base</module><module>statcache</module><module>lib2to3.fixes.fix_xrange</module><module>tkinter.test.test_tkinter</module><module>_curses_panel</module><module>future_builtins</module><module>lib2to3.fixes.fix_itertools_
imports</module><module>lib2to3.tests</module><module>_io</module><module>dbm</module><module>_multiprocessing</module><module>lib2to3.fixes.fix_paren</module><module>test.dis_module</module><module>_compat_pickle</module><module>importlib</module><module>json.tests.test_encode_basestring_ascii</module><module>_json</module><module>tkinter.test.test_tkinter.test_loadtk</module><module>test.test_memoryio</module><module>lib2to3.fixes.fix_itertools</module><module>test.test_rlcompleter</module><module>test.test_sndhdr</module><module>ssl</module><module>timing</module><module>test.test_smtplib</module><module>test.test_int</module><module>distutils.tests.test_bdist_dumb</module><module>unittest.runner</module><module>test.test_dictviews</module><module>test.test_strlit</module><module>test.test_ttk_guionly</module><module>encodings.cp858</module><module>lib2to3.fixes.fix_urllib</module><module>pydoc_data</module><module>lib2to3.fixes.fix_funcattrs</module><module>plistlib</module><module>multiprocessing.dummy.connection</module><module>unittest.case</module><module>json.tests.test_indent</module><module>distutils.tests.test_dir_util</module><module>test.json_tests.test_float</module><module>tkinter.test.support</module><module>unittest.test.dummy</module><module>lib2to3.tests.test_pytree</module><module>json.tests.test_fail</module><module>_pickle</module><module>email.MIMEMultipart</module><module>lib2to3.fixes.fix_zip</module><module>bsddb.test.test_replication</module><module>tkinter.dnd</module><module>sqlite3.dump</module><module>test.test_pkgutil</module><module>importlib.test.builtin</module><module>json</module><module>lib2to3.pgen2.tokenize</module><module>turtledemo.round_dance</module><module>test.encoded_modules.module_koi8_r</module><module>test.test_nntplib</module><module>tkinter</module><module>email.MIMENonMultipart</module><module>lib2to3.fixes.fix_basestring</module><module>distutils.tests.test_install_headers</module><module>test.test_importlib</mo
dule><module>test.test_smtpnet</module><module>tkinter.colorchooser</module><module>encodings.utf_32_be</module><module>test.test_tk</module><module>turtledemo.penrose</module><module>test.json_tests.test_dump</module><module>lib2to3.fixes.fix_throw</module><module>test.test_ssl</module><module>html</module><module>dbm.ndbm</module><module>turtledemo.wikipedia</module><module>http</module><module>unittest.test.test_loader</module><module>bsddb.test.test_db</module><module>xmlrpc.server</module><module>urllib.error</module><module>distutils.tests.test_build_clib</module><module>json.tests.test_pass2</module><module>json.tests.test_pass3</module><module>json.tests.test_pass1</module><module>test.test_weakset</module><module>test.test_strtod</module><module>test.test_io</module><module>html.entities</module><module>test.gdb_sample</module><module>http.client</module><module>test.test_msilib</module><module>importlib.test.frozen</module><module>_dbm</module><module>multiprocessing.sharedctypes</module><module>encodings.utf_32_le</module><module>test.curses_tests</module><module>tkinter.simpledialog</module><module>importlib.test.builtin.util</module><module>test.test_codecencodings_iso2022</module><module>xmlrpc</module><module>test.test_unpack_ex</module><module>importlib.abc</module><module>distutils.tests.test_version</module><module>test.test_bytes</module><module>tkinter.test.test_tkinter.test_text</module><module>turtledemo.__main__</module><module>test.support</module><module>test.test_syslog</module><module>multiprocessing.process</module><module>distutils.tests.test_register</module><module>importlib.test.source.test_abc_loader</module><module>importlib.test.test_api</module><module>test.test_super</module><module>lib2to3.fixes.fix_next</module><module>tkinter.test.runtktests</module><module>test.test_genericpath</module><module>distutils.tests.test_sysconfig</module><module>email.MIMEAudio</module><module>test.test_listcomps</module><module>http.cookiejar</mod
ule><module>lib2to3.fixes.fix_set_literal</module><module>distutils.tests.test_dep_util</module><module>idlelib.RstripExtension</module><module>test.test_ascii_formatd</module><module>lib2to3.fixes.fix_future</module><module>json.tests.test_default</module><module>test.relimport</module><module>unittest.test.test_suite</module><module>test.lock_tests</module><module>lib2to3.pgen2.token</module><module>test.test_typechecks</module><module>lib2to3.fixes.fix_except</module><module>importlib.test.builtin.test_loader</module><module>test.json_tests.test_unicode</module><module>lib2to3.tests.support</module><module>distutils.command.check</module><module>urllib.robotparser</module><module>multiprocessing.connection</module><module>regsub</module><module>test.test_fractions</module><module>test.test_abc</module><module>_warnings</module><module>importlib.test.extension.test_loader</module><module>lib2to3.fixes.fix_ne</module><module>test.test_buffer</module><module>lib2to3.fixes.fix_filter</module><module>importlib.test.source.test_case_sensitivity</module><module>turtledemo.tree</module><module>test.test_pdb</module><module>test.test_timeit</module><module>lib2to3.pgen2.literals</module><module>ctypes.test.test_frombuffer</module><module>test.test_undocumented_details</module><module>test.test_pstats</module><module>test.test_pipes</module><module>lib2to3.fixes.fix_sys_exc</module><module>test.test_poplib</module><module>test.test_metaclass</module><module>_abcoll</module><module>lib2to3.__main__</module><module>importlib.test.import_.util</module><module>unittest.signals</module><module>ctypes.test.test_errno</module><module>distutils.tests.test_cmd</module><module>test.test_json</module><module>test.test_httpservers</module><func>bytearray</func><func>memoryview</func><func>next</func><func>bin</func><func>bytes</func><func>ascii</func><func>format</func><func>print</func></python><python 
version="2.6"><module>importlib.test.source.test_file_loader</module><module>_dummy_thread</module><module>bsddb.test.test_env_close</module><module>pydoc_data.topics</module><module>dbm.gnu</module><module>tkinter.test.test_tkinter.test_font</module><module>importlib.test.import_.test_caching</module><module>antigravity</module><module>test.json_tests.test_recursion</module><module>tkinter._fix</module><module>test.ssl_servers</module><module>turtledemo.minimal_hanoi</module><module>turtledemo.fractalcurves</module><module>turtledemo.lindenmayer</module><module>test.json_tests.test_separators</module><module>importlib.test.import_.test___package__</module><module>importlib.test.benchmark</module><module>turtledemo.colormixer</module><module>tkinter.ttk</module><module>encodings.cp720</module><module>test.json_tests</module><module>unittest.test.test_result</module><module>test.test_sched</module><module>turtledemo.yinyang</module><module>test.test_xmlrpc_net</module><module>test.test_sys_setprofile</module><module>tkinter.test.test_ttk</module><module>test.test_ttk_textonly</module><module>importlib.test.source.test_path_hook</module><module>reconvert</module><module>bsddb.test.test_fileid</module><module>tkinter.font</module><module>unittest.__main__</module><module>tkinter.commondialog</module><module>test.test_keywordonlyarg</module><module>test.test_setcomps</module><module>email.MIMEBase</module><module>concurrent.futures.process</module><module>test.test_operations</module><module>distutils.tests.test_build</module><module>test.test_urllib_response</module><module>urllib.request</module><module>argparse</module><module>test.test_dbm_gnu</module><module>importlib.util</module><module>test.json_tests.test_pass1</module><module>turtledemo.bytedesign</module><module>psycopg2</module><module>distutils.tests.test_install_data</module><module>test.script_helper</module><module>unittest.test.test_break</module><module>test.test_dynamic</module><module>test.test_pep31
20</module><module>distutils.tests.test_log</module><module>importlib.test.builtin.test_finder</module><module>test.test_http_cookiejar</module><module>test.test_html</module><module>unittest.test.test_setups</module><module>test.test_pep3131</module><module>test.__main__</module><module>test.badsyntax_pep3120</module><module>tkinter.test.test_ttk.test_functions</module><module>turtledemo.planet_and_moon</module><module>unittest.test._test_warnings</module><module>importlib.test.frozen.test_loader</module><module>idlelib.tabpage</module><module>unittest.test.test_skipping</module><module>tkinter.tix</module><module>_posixsubprocess</module><module>_curses</module><module>turtledemo.clock</module><module>reprlib</module><module>test.test_range</module><module>xmlrpc.client</module><module>importlib.test.extension.test_case_sensitivity</module><module>test.test_copyreg</module><module>distutils.tests.test_ccompiler</module><module>unittest.test.test_case</module><module>distutils.tests.test_text_file</module><module>test.json_tests.test_indent</module><module>tkinter.dialog</module><module>regex_syntax</module><module>test.test_memoryview</module><module>importlib.test.import_.test_fromlist</module><module>lib2to3.btm_matcher</module><module>_markupbase</module><module>test.test_regex</module><module>importlib.test.import_.test_meta_path</module><module>turtledemo.forest</module><module>tkinter.test.test_ttk.test_widgets</module><module>importlib.test.frozen.test_finder</module><module>test.test_file2k</module><module>html.parser</module><module>ctypes.test.test_bytes</module><module>test.test_univnewlines2k</module><module>urllib.response</module><module>distutils.tests.test_clean</module><module>build_class</module><module>http.cookies</module><module>test.json_tests.test_fail</module><module>test.badsyntax_3131</module><module>_stringio</module><module>test.test_macfs</module><module>tkinter.scrolledtext</module><module>test.tracedmodules.testmod</module><module>im
portlib.test.extension.test_path_hook</module><module>importlib.test.import_.test_relative_imports</module><module>distutils.tests.test_bdist_msi</module><module>turtledemo.nim</module><module>distutils.tests.test_file_util</module><module>tkinter.test.test_ttk.test_style</module><module>test.test_http_cookies</module><module>unittest.test.test_program</module><module>dbm.dumb</module><module>unittest.test.test_runner</module><module>distutils.tests.test_bdist</module><module>test.test_dbm_dumb</module><module>distutils.tests.test_archive_util</module><module>test.test_argparse</module><module>unittest.main</module><module>test.test_sysconfig</module><module>test.test_dictcomps</module><module>test.win_console_handler</module><module>test.datetimetester</module><module>whrandom</module><module>test.test_sys_settrace</module><module>turtledemo.peace</module><module>unittest.loader</module><module>bsddb.test.test_dbenv</module><module>importlib.test.extension.util</module><module>test.test_rgbimg</module><module>psycopg2._psycopg</module><module>importlib.machinery</module><module>_pyio</module><module>importlib.test.test_util</module><module>tzparse</module><module>unittest.test.test_discovery</module><module>concurrent.futures.thread</module><module>test.make_ssl_certs</module><module>lib2to3.btm_utils</module><module>json.tests.test_check_circular</module><module>distutils.tests.test_config_cmd</module><module>concurrent</module><module>importlib.test.test_abc</module><module>distutils.tests.test_check</module><module>unittest.test.test_assertions</module><module>test.test_timing</module><module>turtledemo.paint</module><module>test.tracedmodules</module><module>urllib.parse</module><module>test.test_gdb</module><module>test.json_tests.test_pass2</module><module>test.json_tests.test_pass3</module><module>http.server</module><module>tkinter.constants</module><module>test.test_smtpd</module><module>test.json_tests.test_decode</module><module>unittest.test.support</mo
dule><module>_thread</module><module>importlib.test</module><module>test.test_cProfile</module><module>distutils.tests.test_spawn</module><module>unittest.suite</module><module>test.test_reprlib</module><module>importlib.test.import_.test_packages</module><module>importlib.test.regrtest</module><module>test.test_concurrent_futures</module><module>distutils.tests.test_cygwinccompiler</module><module>tkinter.test</module><module>test.json_tests.test_scanstring</module><module>importlib.test.abc</module><module>regex</module><module>email.MIMEMessage</module><module>test.encoded_modules</module><module>tkinter.messagebox</module><module>importlib.test.source.test_source_encoding</module><module>_datetime</module><module>unittest.test.test_functiontestcase</module><module>importlib.test.source.test_finder</module><module>distutils.tests.test_extension</module><module>importlib.test.import_.test_api</module><module>tkinter.test.test_ttk.test_extensions</module><module>importlib._bootstrap</module><module>importlib.test.import_.test_path</module><module>importlib.test.util</module><module>test.encoded_modules.module_iso_8859_1</module><module>importlib.test.source</module><module>test.test_numeric_tower</module><module>turtledemo.two_canvases</module><module>_weakrefset</module><module>turtledemo.chaos</module><module>tkinter.filedialog</module><module>test.test_flufl</module><module>sysconfig</module><module>email.MIMEImage</module><module>tkinter.__main__</module><module>email.MIMEText</module><module>builtins</module><module>importlib.test.extension</module><module>gopherlib</module><module>unittest.util</module><module>importlib.test.source.util</module><module>test.test_sunau</module><module>test.test_raise</module><module>turtle</module><module>bsddb.test.test_1413192</module><module>test.json_tests.test_speedups</module><module>unittest.test</module><module>importlib.test.import_</module><module>test.test_compileall</module><module>importlib.test.extension.test_fin
der</module><module>turtledemo</module><module>test.json_tests.test_default</module><module>_string</module><module>idlelib.buildapp</module><module>importlib.test.__main__</module><module>fpectl</module><module>test.mock_socket</module><module>test.json_tests.test_encode_basestring_ascii</module><module>distutils.tests.test_bdist_rpm</module><module>test.test_dbm_ndbm</module><module>test.test_osx_env</module><module>copyreg</module><module>concurrent.futures</module><module>unittest.result</module><module>concurrent.futures._base</module><module>statcache</module><module>tkinter.test.test_tkinter</module><module>_curses_panel</module><module>_io</module><module>dbm</module><module>test.dis_module</module><module>_compat_pickle</module><module>importlib</module><module>tkinter.test.test_tkinter.test_loadtk</module><module>test.test_rlcompleter</module><module>test.test_sndhdr</module><module>timing</module><module>distutils.tests.test_bdist_dumb</module><module>unittest.runner</module><module>test.test_dictviews</module><module>test.test_strlit</module><module>test.test_ttk_guionly</module><module>encodings.cp858</module><module>pydoc_data</module><module>unittest.case</module><module>distutils.tests.test_dir_util</module><module>test.json_tests.test_float</module><module>tkinter.test.support</module><module>unittest.test.dummy</module><module>_pickle</module><module>email.MIMEMultipart</module><module>tkinter.dnd</module><module>importlib.test.builtin</module><module>turtledemo.round_dance</module><module>test.encoded_modules.module_koi8_r</module><module>test.test_nntplib</module><module>tkinter</module><module>email.MIMENonMultipart</module><module>distutils.tests.test_install_headers</module><module>test.test_importlib</module><module>tkinter.colorchooser</module><module>test.test_tk</module><module>turtledemo.penrose</module><module>test.json_tests.test_dump</module><module>html</module><module>dbm.ndbm</module><module>turtledemo.wikipedia</module><module>http
</module><module>unittest.test.test_loader</module><module>test.test_hexoct</module><module>bsddb.test.test_db</module><module>xmlrpc.server</module><module>urllib.error</module><module>distutils.tests.test_build_clib</module><module>test.test_weakset</module><module>test.test_strtod</module><module>html.entities</module><module>test.gdb_sample</module><module>http.client</module><module>test.test_msilib</module><module>importlib.test.frozen</module><module>_dbm</module><module>tkinter.simpledialog</module><module>importlib.test.builtin.util</module><module>test.test_codecencodings_iso2022</module><module>xmlrpc</module><module>test.test_unpack_ex</module><module>importlib.abc</module><module>distutils.tests.test_version</module><module>tkinter.test.test_tkinter.test_text</module><module>turtledemo.__main__</module><module>test.support</module><module>test.test_syslog</module><module>importlib.test.source.test_abc_loader</module><module>importlib.test.test_api</module><module>test.test_super</module><module>tkinter.test.runtktests</module><module>email.MIMEAudio</module><module>test.test_listcomps</module><module>http.cookiejar</module><module>distutils.tests.test_dep_util</module><module>idlelib.RstripExtension</module><module>test.test_ascii_formatd</module><module>unittest.test.test_suite</module><module>importlib.test.builtin.test_loader</module><module>test.json_tests.test_unicode</module><module>distutils.command.check</module><module>urllib.robotparser</module><module>regsub</module><module>importlib.test.extension.test_loader</module><module>importlib.test.source.test_case_sensitivity</module><module>turtledemo.tree</module><module>test.test_socket_ssl</module><module>_types</module><module>test.test_pdb</module><module>test.test_timeit</module><module>test.test_metaclass</module><module>lib2to3.__main__</module><module>importlib.test.import_.util</module><module>unittest.signals</module><module>distutils.tests.test_cmd</module><func>memoryview</func><func>a
scii</func></python><python version="2.7"><module>importlib.test.source.test_file_loader</module><module>_dummy_thread</module><module>bsddb.test.test_env_close</module><module>dbm.gnu</module><module>tkinter.test.test_tkinter.test_font</module><module>importlib.test.import_.test_caching</module><module>test.json_tests.test_recursion</module><module>tkinter._fix</module><module>test.ssl_servers</module><module>turtledemo.minimal_hanoi</module><module>turtledemo.fractalcurves</module><module>turtledemo.lindenmayer</module><module>test.json_tests.test_separators</module><module>importlib.test.import_.test___package__</module><module>importlib.test.benchmark</module><module>turtledemo.colormixer</module><module>tkinter.ttk</module><module>test.json_tests</module><module>test.test_sched</module><module>turtledemo.yinyang</module><module>test.test_xmlrpc_net</module><module>tkinter.test.test_ttk</module><module>importlib.test.source.test_path_hook</module><module>reconvert</module><module>tkinter.font</module><module>tkinter.commondialog</module><module>test.test_keywordonlyarg</module><module>test.test_profilehooks</module><module>email.MIMEBase</module><module>concurrent.futures.process</module><module>test.test_operations</module><module>test.test_urllib_response</module><module>urllib.request</module><module>test.test_dbm_gnu</module><module>importlib.util</module><module>test.json_tests.test_pass1</module><module>turtledemo.bytedesign</module><module>psycopg2</module><module>test.test_dynamic</module><module>test.test_pep3120</module><module>distutils.tests.test_log</module><module>importlib.test.builtin.test_finder</module><module>test.test_http_cookiejar</module><module>test.test_html</module><module>test.test_pep3131</module><module>test.__main__</module><module>test.badsyntax_pep3120</module><module>tkinter.test.test_ttk.test_functions</module><module>turtledemo.planet_and_moon</module><module>unittest.test._test_warnings</module><module>importlib.test.frozen.te
st_loader</module><module>idlelib.tabpage</module><module>tkinter.tix</module><module>_posixsubprocess</module><module>_curses</module><module>turtledemo.clock</module><module>_fileio</module><module>reprlib</module><module>test.test_range</module><module>xmlrpc.client</module><module>importlib.test.extension.test_case_sensitivity</module><module>test.test_copyreg</module><module>test.json_tests.test_indent</module><module>tkinter.dialog</module><module>regex_syntax</module><module>importlib.test.import_.test_fromlist</module><module>_markupbase</module><module>test.test_regex</module><module>importlib.test.import_.test_meta_path</module><module>turtledemo.forest</module><module>tkinter.test.test_ttk.test_widgets</module><module>test.cjkencodings_test</module><module>importlib.test.frozen.test_finder</module><module>html.parser</module><module>ctypes.test.test_bytes</module><module>urllib.response</module><module>build_class</module><module>http.cookies</module><module>test.json_tests.test_fail</module><module>test.badsyntax_3131</module><module>_stringio</module><module>test.test_macfs</module><module>tkinter.scrolledtext</module><module>importlib.test.extension.test_path_hook</module><module>importlib.test.import_.test_relative_imports</module><module>turtledemo.nim</module><module>tkinter.test.test_ttk.test_style</module><module>test.test_http_cookies</module><module>dbm.dumb</module><module>test.test_dbm_dumb</module><module>_bytesio</module><module>test.datetimetester</module><module>whrandom</module><module>turtledemo.peace</module><module>importlib.test.extension.util</module><module>test.test_rgbimg</module><module>psycopg2._psycopg</module><module>importlib.machinery</module><module>importlib.test.test_util</module><module>distutils.mwerkscompiler</module><module>tzparse</module><module>concurrent.futures.thread</module><module>test.make_ssl_certs</module><module>concurrent</module><module>importlib.test.test_abc</module><module>test.test_timing</module><mo
dule>turtledemo.paint</module><module>urllib.parse</module><module>test.json_tests.test_pass2</module><module>test.json_tests.test_pass3</module><module>http.server</module><module>tkinter.constants</module><module>test.test_smtpd</module><module>test.json_tests.test_decode</module><module>_thread</module><module>importlib.test</module><module>test.test_cProfile</module><module>test.test_reprlib</module><module>importlib.test.import_.test_packages</module><module>importlib.test.regrtest</module><module>test.test_concurrent_futures</module><module>distutils.tests.test_cygwinccompiler</module><module>tkinter.test</module><module>test.json_tests.test_scanstring</module><module>importlib.test.abc</module><module>regex</module><module>email.MIMEMessage</module><module>test.encoded_modules</module><module>tkinter.messagebox</module><module>importlib.test.source.test_source_encoding</module><module>_datetime</module><module>importlib.test.source.test_finder</module><module>distutils.tests.test_extension</module><module>importlib.test.import_.test_api</module><module>tkinter.test.test_ttk.test_extensions</module><module>importlib._bootstrap</module><module>importlib.test.import_.test_path</module><module>importlib.test.util</module><module>test.encoded_modules.module_iso_8859_1</module><module>importlib.test.source</module><module>test.test_numeric_tower</module><module>turtledemo.two_canvases</module><module>turtledemo.chaos</module><module>tkinter.filedialog</module><module>test.test_flufl</module><module>email.MIMEImage</module><module>tkinter.__main__</module><module>email.MIMEText</module><module>builtins</module><module>importlib.test.extension</module><module>pydoc_topics</module><module>gopherlib</module><module>importlib.test.source.util</module><module>test.test_sunau</module><module>test.test_raise</module><module>turtle</module><module>bsddb.test.test_1413192</module><module>test.json_tests.test_speedups</module><module>importlib.test.import_</module><module>imp
ortlib.test.extension.test_finder</module><module>turtledemo</module><module>test.json_tests.test_default</module><module>_string</module><module>idlelib.buildapp</module><module>importlib.test.__main__</module><module>fpectl</module><module>test.mock_socket</module><module>test.json_tests.test_encode_basestring_ascii</module><module>test.test_dbm_ndbm</module><module>test.test_osx_env</module><module>copyreg</module><module>concurrent.futures</module><module>concurrent.futures._base</module><module>statcache</module><module>tkinter.test.test_tkinter</module><module>_curses_panel</module><module>dbm</module><module>test.dis_module</module><module>_compat_pickle</module><module>tkinter.test.test_tkinter.test_loadtk</module><module>test.test_sndhdr</module><module>timing</module><module>test.test_strlit</module><module>test.json_tests.test_float</module><module>tkinter.test.support</module><module>_pickle</module><module>email.MIMEMultipart</module><module>tkinter.dnd</module><module>importlib.test.builtin</module><module>turtledemo.round_dance</module><module>test.encoded_modules.module_koi8_r</module><module>test.test_nntplib</module><module>tkinter</module><module>email.MIMENonMultipart</module><module>tkinter.colorchooser</module><module>turtledemo.penrose</module><module>test.json_tests.test_dump</module><module>html</module><module>dbm.ndbm</module><module>turtledemo.wikipedia</module><module>http</module><module>test.test_hexoct</module><module>xmlrpc.server</module><module>urllib.error</module><module>html.entities</module><module>http.client</module><module>importlib.test.frozen</module><module>_dbm</module><module>tkinter.simpledialog</module><module>importlib.test.builtin.util</module><module>test.test_codecencodings_iso2022</module><module>xmlrpc</module><module>test.test_unpack_ex</module><module>importlib.abc</module><module>tkinter.test.test_tkinter.test_text</module><module>turtledemo.__main__</module><module>test.support</module><module>test.test_sysl
og</module><module>importlib.test.source.test_abc_loader</module><module>importlib.test.test_api</module><module>test.test_super</module><module>tkinter.test.runtktests</module><module>email.MIMEAudio</module><module>test.test_listcomps</module><module>http.cookiejar</module><module>importlib.test.builtin.test_loader</module><module>test.json_tests.test_unicode</module><module>urllib.robotparser</module><module>regsub</module><module>importlib.test.extension.test_loader</module><module>importlib.test.source.test_case_sensitivity</module><module>turtledemo.tree</module><module>test.test_socket_ssl</module><module>_types</module><module>test.test_timeit</module><module>test.test_metaclass</module><module>importlib.test.import_.util</module><func>ascii</func></python><python version="3.0"><module>importlib.test.source.test_file_loader</module><module>test.test_future_builtins</module><module>bsddb.test</module><module>bsddb.test.test_basics</module><module>bsddb.test.test_env_close</module><module>test.test_al</module><module>pydoc_data.topics</module><module>mimify</module><module>tkinter.test.test_tkinter.test_font</module><module>importlib.test.import_.test_caching</module><module>bsddb.test.test_distributed_transactions</module><module>test.test_fpformat</module><module>test.test_getargs</module><module>test.test_multifile</module><module>test.json_tests.test_recursion</module><module>test.ssl_servers</module><module>turtledemo.minimal_hanoi</module><module>toaiff</module><module>turtledemo.fractalcurves</module><module>turtledemo.lindenmayer</module><module>UserList</module><module>test.json_tests.test_separators</module><module>new</module><module>test.test_cd</module><module>importlib.test.import_.test___package__</module><module>importlib.test.benchmark</module><module>turtledemo.colormixer</module><module>tkinter.ttk</module><module>StringIO</module><module>encodings.cp720</module><module>test.json_tests</module><module>unittest.test.test_result</module><modul
e>test.test_sched</module><module>Bastion</module><module>turtledemo.yinyang</module><module>copy_reg</module><module>test.test_sys_setprofile</module><module>tkinter.test.test_ttk</module><module>test.test_ttk_textonly</module><module>cPickle</module><module>importlib.test.source.test_path_hook</module><module>reconvert</module><module>test.test_email_codecs</module><module>compiler.syntax</module><module>bsddb.test.test_fileid</module><module>anydbm</module><module>unittest.__main__</module><module>exceptions</module><module>compiler.pyassem</module><module>email.MIMEBase</module><module>strop</module><module>concurrent.futures.process</module><module>test.test_operations</module><module>bsddb.dbshelve</module><module>distutils.tests.test_build</module><module>test.test_urllib_response</module><module>htmlentitydefs</module><module>argparse</module><module>test.test_mimetools</module><module>test.test_sunaudiodev</module><module>test.test_cookielib</module><module>encodings.hex_codec</module><module>bsddb.test.test_dbtables</module><module>importlib.util</module><module>test.json_tests.test_pass1</module><module>turtledemo.bytedesign</module><module>psycopg2</module><module>distutils.tests.test_install_data</module><module>test.script_helper</module><module>test.test_readline</module><module>unittest.test.test_break</module><module>test.test_dynamic</module><module>distutils.tests.test_log</module><module>test.test_mhlib</module><module>importlib.test.builtin.test_finder</module><module>test.test_copy_reg</module><module>test.test_html</module><module>sqlite3.test.py25tests</module><module>md5</module><module>thread</module><module>test.test_cookie</module><module>unittest.test.test_setups</module><module>distutils.tests.test_install_lib</module><module>test.__main__</module><module>xmllib</module><module>tkinter.test.test_ttk.test_functions</module><module>turtledemo.planet_and_moon</module><module>unittest.test._test_warnings</module><module>importlib.test.froze
n.test_loader</module><module>idlelib.tabpage</module><module>unittest.test.test_skipping</module><module>test.infinite_reload</module><module>_posixsubprocess</module><module>_curses</module><module>turtledemo.clock</module><module>encodings.string_escape</module><module>stringold</module><module>test.test_MimeWriter</module><module>sets</module><module>importlib.test.extension.test_case_sensitivity</module><module>_sqlite3</module><module>distutils.tests.test_ccompiler</module><module>unittest.test.test_case</module><module>distutils.tests.test_text_file</module><module>test.json_tests.test_indent</module><module>compiler.future</module><module>encodings.rot_13</module><module>regex_syntax</module><module>dbhash</module><module>linuxaudiodev</module><module>importlib.test.import_.test_fromlist</module><module>fpformat</module><module>lib2to3.btm_matcher</module><module>distutils.tests.test_util</module><module>test.test_linuxaudiodev</module><module>test.test_regex</module><module>importlib.test.import_.test_meta_path</module><module>turtledemo.forest</module><module>tkinter.test.test_ttk.test_widgets</module><module>importlib.test.frozen.test_finder</module><module>test.test_file2k</module><module>compiler</module><module>bsddb.test.test_early_close</module><module>test.test_univnewlines2k</module><module>test.test_linecache</module><module>distutils.tests.test_unixccompiler</module><module>distutils.tests.test_clean</module><module>mutex</module><module>lib2to3.fixes.fix_exitfunc</module><module>test.json_tests.test_fail</module><module>test.test_macfs</module><module>encodings.bz2_codec</module><module>dummy_thread</module><module>test.tracedmodules.testmod</module><module>importlib.test.extension.test_path_hook</module><module>importlib.test.import_.test_relative_imports</module><module>distutils.tests.test_bdist_msi</module><module>bsddb.test.test_get_none</module><module>test.test_complex_args</module><module>turtledemo.nim</module><module>test.test_gl</modu
le><module>distutils.tests.test_file_util</module><module>test.test_aepack</module><module>tkinter.test.test_ttk.test_style</module><module>audiodev</module><module>compiler.ast</module><module>unittest.test.test_program</module><module>test.test_cl</module><module>test.test_popen2</module><module>test.test_strop</module><module>bsddb.dbutils</module><module>test.test_sets</module><module>rfc822</module><module>unittest.test.test_runner</module><module>user</module><module>test.test_macos</module><module>distutils.tests.test_filelist</module><module>distutils.tests.test_bdist</module><module>urlparse</module><module>htmllib</module><module>mhlib</module><module>test.test_applesingle</module><module>bsddb.test.test_dbobj</module><module>distutils.tests.test_archive_util</module><module>test.test_new</module><module>test.test_mutex</module><module>test.test_argparse</module><module>unittest.main</module><module>test.test_sysconfig</module><module>test.test_py3kwarn</module><module>bsddb.test.test_cursor_pget_bug</module><module>test.win_console_handler</module><module>test.datetimetester</module><module>HTMLParser</module><module>test.test_dumbdbm</module><module>whrandom</module><module>test.test_sys_settrace</module><module>test.test_imgfile</module><module>turtledemo.peace</module><module>test.test_old_mailbox</module><module>test.test_long_future</module><module>bsddb.test.test_queue</module><module>UserDict</module><module>unittest.loader</module><module>compiler.visitor</module><module>test.test_md5</module><module>bsddb.test.test_dbenv</module><module>importlib.test.extension.util</module><module>test.test_rgbimg</module><module>bsddb.test.test_sequence</module><module>psycopg2._psycopg</module><module>importlib.machinery</module><module>email.test.test_email_renamed</module><module>test.test_sha</module><module>_pyio</module><module>importlib.test.test_util</module><module>mimetools</module><module>tzparse</module><module>unittest.test.test_discovery</module><
module>concurrent.futures.thread</module><module>test.make_ssl_certs</module><module>distutils.tests.setuptools_build_ext</module><module>lib2to3.btm_utils</module><module>json.tests.test_check_circular</module><module>distutils.tests.test_config_cmd</module><module>urllib2</module><module>concurrent</module><module>importlib.test.test_abc</module><module>distutils.tests.test_check</module><module>test.test_StringIO</module><module>bsddb.dbobj</module><module>unittest.test.test_assertions</module><module>distutils.tests.test_bdist_wininst</module><module>test.test_timing</module><module>turtledemo.paint</module><module>hotshot.stats</module><module>test.test_bsddb185</module><module>hotshot.stones</module><module>test.tracedmodules</module><module>test.test_gdb</module><module>test.testall</module><module>test.json_tests.test_pass2</module><module>test.json_tests.test_pass3</module><module>test.test_scriptpackages</module><module>test.test_smtpd</module><module>test.json_tests.test_decode</module><module>unittest.test.support</module><module>test.test_macostools</module><module>test.test_coercion</module><module>importlib.test</module><module>test.test_cProfile</module><module>distutils.tests.test_spawn</module><module>unittest.suite</module><module>importlib.test.import_.test_packages</module><module>importlib.test.regrtest</module><module>_hotshot</module><module>compiler.misc</module><module>SimpleXMLRPCServer</module><module>test.test_concurrent_futures</module><module>distutils.tests.test_cygwinccompiler</module><module>distutils.tests.setuptools_extension</module><module>tkinter.test</module><module>test.json_tests.test_scanstring</module><module>importlib.test.abc</module><module>regex</module><module>email.MIMEMessage</module><module>bsddb.test.test_pickle</module><module>rexec</module><module>test.encoded_modules</module><module>posixfile</module><module>importlib.test.source.test_source_encoding</module><module>_datetime</module><module>unittest.test.test_
functiontestcase</module><module>importlib.test.source.test_finder</module><module>test.test_hotshot</module><module>markupbase</module><module>test.test_htmllib</module><module>CGIHTTPServer</module><module>distutils.tests.test_extension</module><module>bsddb.test.test_associate</module><module>imputil</module><module>importlib.test.import_.test_api</module><module>tkinter.test.test_ttk.test_extensions</module><module>dumbdbm</module><module>importlib._bootstrap</module><module>importlib.test.import_.test_path</module><module>importlib.test.util</module><module>test.encoded_modules.module_iso_8859_1</module><module>importlib.test.source</module><module>test.test_numeric_tower</module><module>turtledemo.two_canvases</module><module>compiler.symbols</module><module>lib2to3.fixes.fix_operator</module><module>turtledemo.chaos</module><module>test.test_flufl</module><module>sysconfig</module><module>email.MIMEImage</module><module>test.test_support</module><module>tkinter.__main__</module><module>email.MIMEText</module><module>sgmllib</module><module>importlib.test.extension</module><module>gopherlib</module><module>MimeWriter</module><module>unittest.util</module><module>test.test_bastion</module><module>encodings.base64_codec</module><module>importlib.test.source.util</module><module>test.test_sunau</module><module>test.test_imageop</module><module>statvfs</module><module>cStringIO</module><module>compiler.consts</module><module>bsddb.test.test_1413192</module><module>BaseHTTPServer</module><module>test.json_tests.test_speedups</module><module>test.test_rfc822</module><module>unittest.test</module><module>UserString</module><module>bsddb.dbtables</module><module>test.test_transformer</module><module>importlib.test.import_</module><module>test.test_compileall</module><module>test.test_email_renamed</module><module>importlib.test.extension.test_finder</module><module>turtledemo</module><module>test.json_tests.test_default</module><module>_string</module><module>idlelib.
buildapp</module><module>encodings.uu_codec</module><module>encodings.zlib_codec</module><module>ihooks</module><module>lib2to3.tests.test_main</module><module>importlib.test.__main__</module><module>fpectl</module><module>test.test_bsddb3</module><module>_LWPCookieJar</module><module>sha</module><module>test.mock_socket</module><module>test.json_tests.test_encode_basestring_ascii</module><module>distutils.tests.test_bdist_rpm</module><module>compiler.transformer</module><module>test.test_aifc</module><module>test.test_bsddb</module><module>concurrent.futures</module><module>unittest.result</module><module>encodings.quopri_codec</module><module>concurrent.futures._base</module><module>test.test_dl</module><module>statcache</module><module>tkinter.test.test_tkinter</module><module>_curses_panel</module><module>future_builtins</module><module>_io</module><module>_compat_pickle</module><module>importlib</module><module>test.test_dircache</module><module>bsddb.test.test_misc</module><module>tkinter.test.test_tkinter.test_loadtk</module><module>test.test_rlcompleter</module><module>test.test_sndhdr</module><module>repr</module><module>test.test_xrange</module><module>hotshot</module><module>timing</module><module>cookielib</module><module>distutils.tests.test_bdist_dumb</module><module>unittest.runner</module><module>whichdb</module><module>dircache</module><module>test.test_repr</module><module>test.test_ttk_guionly</module><module>popen2</module><module>encodings.cp858</module><module>pydoc_data</module><module>test.test_anydbm</module><module>test.test_compiler</module><module>_MozillaCookieJar</module><module>unittest.case</module><module>distutils.tests.test_dir_util</module><module>test.json_tests.test_float</module><module>tkinter.test.support</module><module>unittest.test.dummy</module><module>bsddb.test.test_lock</module><module>commands</module><module>email.MIMEMultipart</module><module>bsddb.test.test_replication</module><module>robotparser</module><module>te
st.test_gdbm</module><module>bsddb.db</module><module>importlib.test.builtin</module><module>turtledemo.round_dance</module><module>test.encoded_modules.module_koi8_r</module><module>test.test_nntplib</module><module>test.test_cpickle</module><module>email.MIMENonMultipart</module><module>hotshot.log</module><module>multifile</module><module>distutils.tests.test_install_headers</module><module>test.test_importlib</module><module>test.test_smtpnet</module><module>test.test_tk</module><module>turtledemo.penrose</module><module>compiler.pycodegen</module><module>test.json_tests.test_dump</module><module>bsddb.test.test_dbshelve</module><module>turtledemo.wikipedia</module><module>bsddb.test.test_compat</module><module>unittest.test.test_loader</module><module>test.test_hexoct</module><module>bsddb.test.test_db</module><module>distutils.tests.test_build_clib</module><module>test.test_str</module><module>test.test_strtod</module><module>test.gdb_sample</module><module>bsddb.dbrecio</module><module>test.test_msilib</module><module>importlib.test.frozen</module><module>_dbm</module><module>importlib.test.builtin.util</module><module>test.test_codecencodings_iso2022</module><module>httplib</module><module>importlib.abc</module><module>distutils.tests.test_version</module><module>tkinter.test.test_tkinter.test_text</module><module>turtledemo.__main__</module><module>test.test_whichdb</module><module>importlib.test.source.test_abc_loader</module><module>importlib.test.test_api</module><module>SimpleHTTPServer</module><module>tkinter.test.runtktests</module><module>test.test_xmllib</module><module>email.MIMEAudio</module><module>distutils.tests.test_dep_util</module><module>idlelib.RstripExtension</module><module>test.test_ascii_formatd</module><module>bsddb.test.test_join</module><module>unittest.test.test_suite</module><module>sre</module><module>test.lock_tests</module><module>__builtin__</module><module>importlib.test.builtin.test_loader</module><module>test.json_tests.tes
t_unicode</module><module>distutils.command.check</module><module>regsub</module><module>Cookie</module><module>importlib.test.extension.test_loader</module><module>test.test_buffer</module><module>importlib.test.source.test_case_sensitivity</module><module>turtledemo.tree</module><module>test.test_socket_ssl</module><module>_types</module><module>test.test_pdb</module><module>test.test_timeit</module><module>DocXMLRPCServer</module><module>bsddb.test.test_thread</module><module>test.test_xpickle</module><module>test.test_undocumented_details</module><module>bsddb.test.test_all</module><module>bsddb</module><module>lib2to3.__main__</module><module>importlib.test.import_.util</module><module>test.test_sgmllib</module><module>test.test_commands</module><module>unittest.signals</module><module>sunaudio</module><module>test.test_softspace</module><module>bsddb.test.test_recno</module><module>xmlrpclib</module><module>distutils.tests.test_cmd</module><module>bsddb.test.test_compare</module><func>unicode</func><func>apply</func><func>basestring</func><func>long</func><func>raw_input</func><func>reload</func><func>xrange</func><func>reduce</func><func>coerce</func><func>intern</func><func>file</func><func>unichr</func><func>execfile</func><func>buffer</func><func>callable</func></python><python 
version="3.1"><module>test.test_future_builtins</module><module>bsddb.test</module><module>bsddb.test.test_basics</module><module>bsddb.test.test_env_close</module><module>test.test_al</module><module>mimify</module><module>tkinter.test.test_tkinter.test_font</module><module>bsddb.test.test_distributed_transactions</module><module>test.test_fpformat</module><module>test.test_getargs</module><module>test.test_multifile</module><module>test.json_tests.test_recursion</module><module>test.ssl_servers</module><module>turtledemo.minimal_hanoi</module><module>toaiff</module><module>turtledemo.fractalcurves</module><module>turtledemo.lindenmayer</module><module>UserList</module><module>test.json_tests.test_separators</module><module>new</module><module>test.test_cd</module><module>turtledemo.colormixer</module><module>StringIO</module><module>encodings.cp720</module><module>test.json_tests</module><module>unittest.test.test_result</module><module>test.test_sched</module><module>Bastion</module><module>turtledemo.yinyang</module><module>copy_reg</module><module>cPickle</module><module>reconvert</module><module>test.test_email_codecs</module><module>compiler.syntax</module><module>bsddb.test.test_fileid</module><module>anydbm</module><module>unittest.__main__</module><module>exceptions</module><module>test.test_profilehooks</module><module>compiler.pyassem</module><module>email.MIMEBase</module><module>strop</module><module>concurrent.futures.process</module><module>test.test_operations</module><module>bsddb.dbshelve</module><module>distutils.tests.test_build</module><module>htmlentitydefs</module><module>argparse</module><module>test.test_mimetools</module><module>test.test_sunaudiodev</module><module>test.test_cookielib</module><module>encodings.hex_codec</module><module>bsddb.test.test_dbtables</module><module>test.json_tests.test_pass1</module><module>turtledemo.bytedesign</module><module>psycopg2</module><module>test.test_readline</module><module>unittest.test.test_break
</module><module>test.test_dynamic</module><module>test.test_mhlib</module><module>test.test_copy_reg</module><module>test.test_html</module><module>sqlite3.test.py25tests</module><module>md5</module><module>thread</module><module>test.test_cookie</module><module>unittest.test.test_setups</module><module>test.__main__</module><module>xmllib</module><module>turtledemo.planet_and_moon</module><module>unittest.test._test_warnings</module><module>idlelib.tabpage</module><module>unittest.test.test_skipping</module><module>test.infinite_reload</module><module>_posixsubprocess</module><module>turtledemo.clock</module><module>_fileio</module><module>encodings.string_escape</module><module>stringold</module><module>test.test_MimeWriter</module><module>sets</module><module>distutils.tests.test_ccompiler</module><module>unittest.test.test_case</module><module>test.json_tests.test_indent</module><module>compiler.future</module><module>encodings.rot_13</module><module>regex_syntax</module><module>dbhash</module><module>linuxaudiodev</module><module>fpformat</module><module>test.test_linuxaudiodev</module><module>test.test_regex</module><module>turtledemo.forest</module><module>test.cjkencodings_test</module><module>test.test_file2k</module><module>compiler</module><module>bsddb.test.test_early_close</module><module>test.test_univnewlines2k</module><module>mutex</module><module>build_class</module><module>test.json_tests.test_fail</module><module>_stringio</module><module>test.test_macfs</module><module>encodings.bz2_codec</module><module>dummy_thread</module><module>distutils.tests.test_bdist_msi</module><module>bsddb.test.test_get_none</module><module>test.test_complex_args</module><module>turtledemo.nim</module><module>test.test_gl</module><module>test.test_aepack</module><module>audiodev</module><module>compiler.ast</module><module>unittest.test.test_program</module><module>test.test_cl</module><module>test.test_popen2</module><module>test.test_strop</module><module>bsddb.dbu
tils</module><module>test.test_sets</module><module>rfc822</module><module>unittest.test.test_runner</module><module>user</module><module>test.test_macos</module><module>urlparse</module><module>htmllib</module><module>mhlib</module><module>test.test_applesingle</module><module>bsddb.test.test_dbobj</module><module>test.test_new</module><module>test.test_mutex</module><module>test.test_argparse</module><module>unittest.main</module><module>_bytesio</module><module>test.test_sysconfig</module><module>test.test_py3kwarn</module><module>bsddb.test.test_cursor_pget_bug</module><module>test.win_console_handler</module><module>test.datetimetester</module><module>HTMLParser</module><module>test.test_dumbdbm</module><module>whrandom</module><module>test.test_imgfile</module><module>turtledemo.peace</module><module>test.test_old_mailbox</module><module>test.test_long_future</module><module>bsddb.test.test_queue</module><module>UserDict</module><module>unittest.loader</module><module>compiler.visitor</module><module>test.test_md5</module><module>bsddb.test.test_dbenv</module><module>test.test_rgbimg</module><module>bsddb.test.test_sequence</module><module>psycopg2._psycopg</module><module>email.test.test_email_renamed</module><module>test.test_sha</module><module>distutils.mwerkscompiler</module><module>mimetools</module><module>tzparse</module><module>unittest.test.test_discovery</module><module>concurrent.futures.thread</module><module>test.make_ssl_certs</module><module>distutils.tests.setuptools_build_ext</module><module>json.tests.test_check_circular</module><module>urllib2</module><module>concurrent</module><module>test.test_StringIO</module><module>bsddb.dbobj</module><module>unittest.test.test_assertions</module><module>test.test_timing</module><module>turtledemo.paint</module><module>hotshot.stats</module><module>test.test_bsddb185</module><module>hotshot.stones</module><module>test.test_gdb</module><module>test.testall</module><module>test.json_tests.test_pass2</mod
ule><module>test.json_tests.test_pass3</module><module>test.test_scriptpackages</module><module>test.test_smtpd</module><module>test.json_tests.test_decode</module><module>unittest.test.support</module><module>test.test_macostools</module><module>test.test_coercion</module><module>test.test_cProfile</module><module>unittest.suite</module><module>importlib.test.regrtest</module><module>_hotshot</module><module>compiler.misc</module><module>SimpleXMLRPCServer</module><module>test.test_concurrent_futures</module><module>distutils.tests.setuptools_extension</module><module>test.json_tests.test_scanstring</module><module>regex</module><module>email.MIMEMessage</module><module>bsddb.test.test_pickle</module><module>rexec</module><module>test.encoded_modules</module><module>posixfile</module><module>_datetime</module><module>unittest.test.test_functiontestcase</module><module>test.test_hotshot</module><module>markupbase</module><module>test.test_htmllib</module><module>CGIHTTPServer</module><module>bsddb.test.test_associate</module><module>imputil</module><module>importlib.test.import_.test_api</module><module>dumbdbm</module><module>test.encoded_modules.module_iso_8859_1</module><module>test.test_numeric_tower</module><module>turtledemo.two_canvases</module><module>compiler.symbols</module><module>turtledemo.chaos</module><module>sysconfig</module><module>email.MIMEImage</module><module>test.test_support</module><module>tkinter.__main__</module><module>email.MIMEText</module><module>sgmllib</module><module>pydoc_topics</module><module>gopherlib</module><module>MimeWriter</module><module>unittest.util</module><module>test.test_bastion</module><module>encodings.base64_codec</module><module>test.test_imageop</module><module>statvfs</module><module>cStringIO</module><module>compiler.consts</module><module>bsddb.test.test_1413192</module><module>BaseHTTPServer</module><module>test.json_tests.test_speedups</module><module>test.test_rfc822</module><module>unittest.test</module><
module>UserString</module><module>bsddb.dbtables</module><module>test.test_transformer</module><module>test.test_email_renamed</module><module>turtledemo</module><module>test.json_tests.test_default</module><module>_string</module><module>idlelib.buildapp</module><module>encodings.uu_codec</module><module>encodings.zlib_codec</module><module>ihooks</module><module>importlib.test.__main__</module><module>test.test_bsddb3</module><module>_LWPCookieJar</module><module>sha</module><module>test.mock_socket</module><module>test.json_tests.test_encode_basestring_ascii</module><module>compiler.transformer</module><module>test.test_bsddb</module><module>concurrent.futures</module><module>unittest.result</module><module>encodings.quopri_codec</module><module>concurrent.futures._base</module><module>test.test_dl</module><module>statcache</module><module>future_builtins</module><module>test.test_dircache</module><module>bsddb.test.test_misc</module><module>repr</module><module>test.test_xrange</module><module>hotshot</module><module>timing</module><module>cookielib</module><module>unittest.runner</module><module>whichdb</module><module>dircache</module><module>test.test_repr</module><module>popen2</module><module>encodings.cp858</module><module>test.test_anydbm</module><module>test.test_compiler</module><module>_MozillaCookieJar</module><module>unittest.case</module><module>test.json_tests.test_float</module><module>unittest.test.dummy</module><module>bsddb.test.test_lock</module><module>commands</module><module>email.MIMEMultipart</module><module>bsddb.test.test_replication</module><module>robotparser</module><module>test.test_gdbm</module><module>bsddb.db</module><module>turtledemo.round_dance</module><module>test.encoded_modules.module_koi8_r</module><module>test.test_nntplib</module><module>test.test_cpickle</module><module>email.MIMENonMultipart</module><module>hotshot.log</module><module>multifile</module><module>turtledemo.penrose</module><module>compiler.pycodegen</modu
le><module>test.json_tests.test_dump</module><module>email.test.test_email_codecs_renamed</module><module>bsddb.test.test_dbshelve</module><module>turtledemo.wikipedia</module><module>bsddb.test.test_compat</module><module>unittest.test.test_loader</module><module>test.test_hexoct</module><module>bsddb.test.test_db</module><module>test.test_str</module><module>test.gdb_sample</module><module>bsddb.dbrecio</module><module>test.test_codecencodings_iso2022</module><module>httplib</module><module>turtledemo.__main__</module><module>test.test_whichdb</module><module>SimpleHTTPServer</module><module>test.test_xmllib</module><module>email.MIMEAudio</module><module>distutils.tests.test_dep_util</module><module>bsddb.test.test_join</module><module>unittest.test.test_suite</module><module>sre</module><module>__builtin__</module><module>test.json_tests.test_unicode</module><module>regsub</module><module>Cookie</module><module>test.test_buffer</module><module>turtledemo.tree</module><module>test.test_socket_ssl</module><module>_types</module><module>test.test_timeit</module><module>DocXMLRPCServer</module><module>bsddb.test.test_thread</module><module>test.test_xpickle</module><module>test.test_undocumented_details</module><module>bsddb.test.test_all</module><module>bsddb</module><module>lib2to3.__main__</module><module>test.test_sgmllib</module><module>test.test_commands</module><module>unittest.signals</module><module>sunaudio</module><module>test.test_softspace</module><module>bsddb.test.test_recno</module><module>xmlrpclib</module><module>bsddb.test.test_compare</module><func>unicode</func><func>apply</func><func>basestring</func><func>long</func><func>raw_input</func><func>reload</func><func>cmp</func><func>xrange</func><func>reduce</func><func>coerce</func><func>intern</func><func>file</func><func>unichr</func><func>execfile</func><func>buffer</func><func>callable</func></python><python 
version="3.2"><module>test.test_future_builtins</module><module>bsddb.test</module><module>bsddb.test.test_basics</module><module>bsddb.test.test_env_close</module><module>test.test_al</module><module>mimify</module><module>bsddb.test.test_distributed_transactions</module><module>test.test_fpformat</module><module>test.test_getargs</module><module>test.test_multifile</module><module>toaiff</module><module>UserList</module><module>new</module><module>test.test_cd</module><module>json.tests.test_unicode</module><module>StringIO</module><module>Bastion</module><module>copy_reg</module><module>cPickle</module><module>reconvert</module><module>test.test_email_codecs</module><module>compiler.syntax</module><module>bsddb.test.test_fileid</module><module>anydbm</module><module>exceptions</module><module>json.tests.test_speedups</module><module>test.test_profilehooks</module><module>json.tests.test_recursion</module><module>compiler.pyassem</module><module>email.MIMEBase</module><module>strop</module><module>test.test_operations</module><module>bsddb.dbshelve</module><module>htmlentitydefs</module><module>test.test_mimetools</module><module>test.test_sunaudiodev</module><module>test.test_cookielib</module><module>bsddb.test.test_dbtables</module><module>psycopg2</module><module>test.test_mhlib</module><module>test.test_copy_reg</module><module>sqlite3.test.py25tests</module><module>md5</module><module>thread</module><module>test.test_cookie</module><module>xmllib</module><module>idlelib.tabpage</module><module>test.infinite_reload</module><module>_curses</module><module>_fileio</module><module>encodings.string_escape</module><module>stringold</module><module>test.test_MimeWriter</module><module>sets</module><module>_sqlite3</module><module>distutils.tests.test_ccompiler</module><module>compiler.future</module><module>regex_syntax</module><module>dbhash</module><module>json.tests.test_scanstring</module><module>json.tests</module><module>linuxaudiodev</module><module>fpformat
</module><module>test.test_linuxaudiodev</module><module>test.test_regex</module><module>test.cjkencodings_test</module><module>test.test_file2k</module><module>compiler</module><module>bsddb.test.test_early_close</module><module>test.test_univnewlines2k</module><module>mutex</module><module>build_class</module><module>_stringio</module><module>test.test_macfs</module><module>dummy_thread</module><module>bsddb.test.test_get_none</module><module>test.test_complex_args</module><module>test.test_gl</module><module>test.test_aepack</module><module>audiodev</module><module>compiler.ast</module><module>test.test_cl</module><module>test.test_popen2</module><module>test.test_strop</module><module>bsddb.dbutils</module><module>test.test_sets</module><module>rfc822</module><module>user</module><module>test.test_macos</module><module>urlparse</module><module>htmllib</module><module>mhlib</module><module>test.test_applesingle</module><module>bsddb.test.test_dbobj</module><module>test.test_new</module><module>test.test_mutex</module><module>json.tests.test_decode</module><module>_bytesio</module><module>test.test_py3kwarn</module><module>bsddb.test.test_cursor_pget_bug</module><module>HTMLParser</module><module>test.test_dumbdbm</module><module>whrandom</module><module>test.test_imgfile</module><module>test.test_old_mailbox</module><module>test.test_long_future</module><module>bsddb.test.test_queue</module><module>UserDict</module><module>compiler.visitor</module><module>test.test_md5</module><module>bsddb.test.test_dbenv</module><module>test.test_rgbimg</module><module>bsddb.test.test_sequence</module><module>psycopg2._psycopg</module><module>email.test.test_email_renamed</module><module>test.test_sha</module><module>json.tests.test_separators</module><module>distutils.mwerkscompiler</module><module>mimetools</module><module>json.tests.test_float</module><module>tzparse</module><module>test.badsyntax_nocaret</module><module>distutils.tests.setuptools_build_ext</module><module>j
son.tests.test_check_circular</module><module>urllib2</module><module>test.test_StringIO</module><module>bsddb.dbobj</module><module>test.test_timing</module><module>hotshot.stats</module><module>test.test_bsddb185</module><module>hotshot.stones</module><module>test.testall</module><module>json.tests.test_dump</module><module>test.test_scriptpackages</module><module>test.test_macostools</module><module>test.test_coercion</module><module>test.test_cProfile</module><module>_hotshot</module><module>compiler.misc</module><module>SimpleXMLRPCServer</module><module>distutils.tests.setuptools_extension</module><module>regex</module><module>email.MIMEMessage</module><module>bsddb.test.test_pickle</module><module>rexec</module><module>posixfile</module><module>test.test_hotshot</module><module>markupbase</module><module>test.test_htmllib</module><module>CGIHTTPServer</module><module>bsddb.test.test_associate</module><module>imputil</module><module>dumbdbm</module><module>compiler.symbols</module><module>email.MIMEImage</module><module>test.test_support</module><module>email.MIMEText</module><module>sgmllib</module><module>pydoc_topics</module><module>gopherlib</module><module>MimeWriter</module><module>test.test_bastion</module><module>test.test_imageop</module><module>statvfs</module><module>cStringIO</module><module>compiler.consts</module><module>bsddb.test.test_1413192</module><module>BaseHTTPServer</module><module>test.test_rfc822</module><module>UserString</module><module>bsddb.dbtables</module><module>test.test_transformer</module><module>test.test_email_renamed</module><module>idlelib.buildapp</module><module>ihooks</module><module>fpectl</module><module>test.test_bsddb3</module><module>_LWPCookieJar</module><module>sha</module><module>test.test_SimpleHTTPServer</module><module>compiler.transformer</module><module>test.test_bsddb</module><module>test.test_dl</module><module>statcache</module><module>_curses_panel</module><module>future_builtins</module><module>json.t
ests.test_encode_basestring_ascii</module><module>test.test_dircache</module><module>bsddb.test.test_misc</module><module>repr</module><module>test.test_xrange</module><module>hotshot</module><module>timing</module><module>cookielib</module><module>whichdb</module><module>dircache</module><module>test.test_repr</module><module>popen2</module><module>test.test_anydbm</module><module>test.test_compiler</module><module>_MozillaCookieJar</module><module>json.tests.test_indent</module><module>bsddb.test.test_lock</module><module>commands</module><module>json.tests.test_fail</module><module>email.MIMEMultipart</module><module>bsddb.test.test_replication</module><module>robotparser</module><module>test.test_gdbm</module><module>bsddb.db</module><module>test.test_cpickle</module><module>email.MIMENonMultipart</module><module>hotshot.log</module><module>multifile</module><module>compiler.pycodegen</module><module>email.test.test_email_codecs_renamed</module><module>bsddb.test.test_dbshelve</module><module>bsddb.test.test_compat</module><module>test.test_hexoct</module><module>bsddb.test.test_db</module><module>json.tests.test_pass2</module><module>json.tests.test_pass3</module><module>json.tests.test_pass1</module><module>test.test_str</module><module>bsddb.dbrecio</module><module>_dbm</module><module>httplib</module><module>test.test_whichdb</module><module>SimpleHTTPServer</module><module>test.test_xmllib</module><module>email.MIMEAudio</module><module>test.test_ascii_formatd</module><module>bsddb.test.test_join</module><module>json.tests.test_default</module><module>sre</module><module>__builtin__</module><module>regsub</module><module>Cookie</module><module>test.test_buffer</module><module>test.test_socket_ssl</module><module>_types</module><module>DocXMLRPCServer</module><module>bsddb.test.test_thread</module><module>test.test_xpickle</module><module>test.test_undocumented_details</module><module>bsddb.test.test_all</module><module>bsddb</module><module>lib2to3.__main__
</module><module>test.test_sgmllib</module><module>test.test_commands</module><module>sunaudio</module><module>test.test_softspace</module><module>bsddb.test.test_recno</module><module>xmlrpclib</module><module>bsddb.test.test_compare</module><func>unicode</func><func>apply</func><func>basestring</func><func>long</func><func>raw_input</func><func>reload</func><func>cmp</func><func>xrange</func><func>reduce</func><func>coerce</func><func>intern</func><func>file</func><func>unichr</func><func>execfile</func><func>buffer</func></python><python version="3.3"><module>distutils.tests.test_ccompiler</module><module>bsddb.test.test_fileid</module><module>bsddb.test</module><module>hotshot.stats</module><module>bsddb.test.test_basics</module><module>bsddb.test.test_env_close</module><module>test.test_al</module><module>test.test_bsddb185</module><module>hotshot.stones</module><module>json.tests.test_unicode</module><module>mimify</module><module>bsddb.test.test_distributed_transactions</module><module>test.test_fpformat</module><module>test.test_getargs</module><module>test.test_cookielib</module><module>test.test_multifile</module><module>json.tests.test_dump</module><module>test.test_str</module><module>test.test_scriptpackages</module><module>toaiff</module><module>test.test_macostools</module><module>test.test_sha</module><module>fpformat</module><module>test.test_regex</module><module>new</module><module>test.test_coercion</module><module>StringIO</module><module>test.testall</module><module>_hotshot</module><module>compiler.misc</module><module>bsddb.test.test_associate</module><module>Bastion</module><module>test.test_old_mailbox</module><module>email.MIMEMessage</module><module>bsddb.test.test_pickle</module><module>bsddb.test.test_misc</module><module>cPickle</module><module>posixfile</module><module>json.tests.test_check_circular</module><module>reconvert</module><module>test.test_email_codecs</module><module>compiler.syntax</module><module>test.test_timing</module
><module>anydbm</module><module>test.test_hotshot</module><module>markupbase</module><module>test.test_htmllib</module><module>exceptions</module><module>json.tests.test_speedups</module><module>CGIHTTPServer</module><module>imputil</module><module>compiler.pyassem</module><module>strop</module><module>test.test_operations</module><module>bsddb.dbshelve</module><module>MimeWriter</module><module>test.test_imageop</module><module>htmlentitydefs</module><module>compiler.symbols</module><module>test.test_mimetools</module><module>test.test_sunaudiodev</module><module>whichdb</module><module>email.MIMEText</module><module>dumbdbm</module><module>psycopg2</module><module>cookielib</module><module>pydoc_topics</module><module>gopherlib</module><module>test.test_bastion</module><module>test.test_cProfile</module><module>httplib</module><module>statvfs</module><module>test.test_mhlib</module><module>cStringIO</module><module>bsddb.test.test_1413192</module><module>BaseHTTPServer</module><module>dircache</module><module>test.test_rfc822</module><module>UserString</module><module>bsddb.dbtables</module><module>test.test_email_renamed</module><module>sqlite3.test.py25tests</module><module>idlelib.buildapp</module><module>md5</module><module>thread</module><module>ihooks</module><module>fpectl</module><module>test.test_bsddb3</module><module>test.test_copy_reg</module><module>_LWPCookieJar</module><module>test.test_SimpleHTTPServer</module><module>email.MIMEMultipart</module><module>idlelib.tabpage</module><module>test.infinite_reload</module><module>_curses</module><module>_fileio</module><module>test.test_cd</module><module>test.test_aepack</module><module>test.test_cl</module><module>test.test_bsddb</module><module>test.test_MimeWriter</module><module>test.test_dl</module><module>statcache</module><module>sets</module><module>_sqlite3</module><module>mhlib</module><module>_curses_panel</module><module>future_builtins</module><module>compiler.future</module><module>regex_synt
ax</module><module>stringold</module><module>dbhash</module><module>bsddb.test.test_dbobj</module><module>json.tests</module><module>linuxaudiodev</module><module>UserList</module><module>json.tests.test_encode_basestring_ascii</module><module>test.test_dircache</module><module>pyexpat.errors</module><module>rexec</module><module>test.cjkencodings_test</module><module>repr</module><module>test.test_xrange</module><module>hotshot</module><module>timing</module><module>test.test_file2k</module><module>compiler</module><module>compiler.consts</module><module>email.MIMEImage</module><module>bsddb.test.test_early_close</module><module>sha</module><module>test.test_repr</module><module>pyexpat.model</module><module>test.test_univnewlines2k</module><module>popen2</module><module>compiler.transformer</module><module>test.test_anydbm</module><module>test.test_compiler</module><module>bsddb.test.test_dbtables</module><module>mutex</module><module>_MozillaCookieJar</module><module>json.tests.test_indent</module><module>build_class</module><module>sgmllib</module><module>bsddb.test.test_lock</module><module>commands</module><module>test.test_cpickle</module><module>_stringio</module><module>bsddb.test.test_thread</module><module>test.test_macfs</module><module>distutils.tests.setuptools_extension</module><module>bsddb.test.test_replication</module><module>dummy_thread</module><module>robotparser</module><module>test.test_gdbm</module><module>bsddb.db</module><module>bsddb.test.test_get_none</module><module>test.test_complex_args</module><module>SimpleXMLRPCServer</module><module>test.test_future_builtins</module><module>email.MIMENonMultipart</module><module>test.test_gl</module><module>hotshot.log</module><module>multifile</module><module>encodings.string_escape</module><module>audiodev</module><module>copy_reg</module><module>test.test_profilehooks</module><module>json.tests.test_recursion</module><module>email.MIMEBase</module><module>test.test_popen2</module><module>compile
r.pycodegen</module><module>test.test_strop</module><module>bsddb.dbutils</module><module>email.test.test_email_codecs_renamed</module><module>bsddb.test.test_dbshelve</module><module>test.test_sets</module><module>rfc822</module><module>bsddb.test.test_compat</module><module>test.test_hexoct</module><module>bsddb.test.test_db</module><module>user</module><module>xmllib</module><module>test.badsyntax_nocaret</module><module>test.test_macos</module><module>urlparse</module><module>htmllib</module><module>test.test_cookie</module><module>json.tests.test_pass2</module><module>json.tests.test_pass3</module><module>test.test_applesingle</module><module>json.tests.test_pass1</module><module>json.tests.test_scanstring</module><module>test.test_linuxaudiodev</module><module>bsddb.dbrecio</module><module>test.test_new</module><module>test.test_mutex</module><module>json.tests.test_decode</module><module>compiler.ast</module><module>_bytesio</module><module>test.test_py3kwarn</module><module>whrandom</module><module>test.test_whichdb</module><module>bsddb.test.test_cursor_pget_bug</module><module>SimpleHTTPServer</module><module>psycopg2._psycopg</module><module>email.test.test_email_renamed</module><module>HTMLParser</module><module>distutils.tests.setuptools_build_ext</module><module>test.test_xmllib</module><module>email.MIMEAudio</module><module>test.test_imgfile</module><module>regex</module><module>test.test_long_future</module><module>test.test_ascii_formatd</module><module>bsddb.test.test_queue</module><module>UserDict</module><module>bsddb.test.test_join</module><module>json.tests.test_default</module><module>compiler.visitor</module><module>sre</module><module>test.test_md5</module><module>bsddb.test.test_dbenv</module><module>__builtin__</module><module>test.test_transformer</module><module>test.test_rgbimg</module><module>regsub</module><module>bsddb.test.test_sequence</module><module>Cookie</module><module>test.test_socket_ssl</module><module>_types</module><modu
le>json.tests.test_separators</module><module>DocXMLRPCServer</module><module>json.tests.test_fail</module><module>distutils.mwerkscompiler</module><module>mimetools</module><module>test.test_xpickle</module><module>json.tests.test_float</module><module>bsddb.dbobj</module><module>tzparse</module><module>bsddb.test.test_all</module><module>test.test_dumbdbm</module><module>_dbm</module><module>_abcoll</module><module>bsddb</module><module>test.test_sgmllib</module><module>test.test_commands</module><module>urllib2</module><module>sunaudio</module><module>test.test_softspace</module><module>bsddb.test.test_recno</module><module>test.test_StringIO</module><module>xmlrpclib</module><module>test.test_undocumented_details</module><module>bsddb.test.test_compare</module><func>file</func><func>buffer</func><func>xrange</func><func>reduce</func><func>coerce</func><func>intern</func><func>unicode</func><func>apply</func><func>unichr</func><func>basestring</func><func>raw_input</func><func>execfile</func><func>long</func><func>reload</func><func>cmp</func></python></root>
diff --git a/python/helpers/virtualenv.py b/python/helpers/virtualenv.py
new file mode 100644
index 0000000..ba357fd
--- /dev/null
+++ b/python/helpers/virtualenv.py
@@ -0,0 +1,2312 @@
+#!/usr/bin/env python
+"""Create a "virtual" Python installation
+"""
+
+__version__ = "1.10.1"
+virtualenv_version = __version__ # legacy
+
+import base64
+import sys
+import os
+import codecs
+import optparse
+import re
+import shutil
+import logging
+import tempfile
+import zlib
+import errno
+import glob
+import distutils.sysconfig
+from distutils.util import strtobool
+import struct
+import subprocess
+import tarfile
+
+if sys.version_info < (2, 6):
+ print('ERROR: %s' % sys.exc_info()[1])
+ print('ERROR: this script requires Python 2.6 or greater.')
+ sys.exit(101)
+
+try:
+ set
+except NameError:
+ from sets import Set as set
+try:
+ basestring
+except NameError:
+ basestring = str
+
+try:
+ import ConfigParser
+except ImportError:
+ import configparser as ConfigParser
+
+join = os.path.join
+py_version = 'python%s.%s' % (sys.version_info[0], sys.version_info[1])
+
+is_jython = sys.platform.startswith('java')
+is_pypy = hasattr(sys, 'pypy_version_info')
+is_win = (sys.platform == 'win32')
+is_cygwin = (sys.platform == 'cygwin')
+is_darwin = (sys.platform == 'darwin')
+abiflags = getattr(sys, 'abiflags', '')
+
+user_dir = os.path.expanduser('~')
+if is_win:
+ default_storage_dir = os.path.join(user_dir, 'virtualenv')
+else:
+ default_storage_dir = os.path.join(user_dir, '.virtualenv')
+default_config_file = os.path.join(default_storage_dir, 'virtualenv.ini')
+
+if is_pypy:
+ expected_exe = 'pypy'
+elif is_jython:
+ expected_exe = 'jython'
+else:
+ expected_exe = 'python'
+
+# Return a mapping of version -> Python executable
+# Only provided for Windows, where the information in the registry is used
+if not is_win:
+ def get_installed_pythons():
+ return {}
+else:
+ try:
+ import winreg
+ except ImportError:
+ import _winreg as winreg
+
+ def get_installed_pythons():
+ python_core = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
+ "Software\\Python\\PythonCore")
+ i = 0
+ versions = []
+ while True:
+ try:
+ versions.append(winreg.EnumKey(python_core, i))
+ i = i + 1
+ except WindowsError:
+ break
+ exes = dict()
+ for ver in versions:
+ path = winreg.QueryValue(python_core, "%s\\InstallPath" % ver)
+ exes[ver] = join(path, "python.exe")
+
+ winreg.CloseKey(python_core)
+
+ # Add the major versions
+ # Sort the keys, then repeatedly update the major version entry
+ # Last executable (i.e., highest version) wins with this approach
+ for ver in sorted(exes):
+ exes[ver[0]] = exes[ver]
+
+ return exes
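A standalone sketch of the "highest version wins" step in `get_installed_pythons()` above: because the versions are visited in sorted order, each write to `exes[ver[0]]` overwrites the previous one, so the bare major-version key ends up pointing at the newest minor release seen. The paths here are made up for illustration.

```python
# Hypothetical registry contents; visiting versions in sorted order means
# the last assignment to exes[ver[0]] (the highest minor version) wins.
exes = {'2.6': 'C:\\Python26\\python.exe',
        '2.7': 'C:\\Python27\\python.exe',
        '3.3': 'C:\\Python33\\python.exe'}
for ver in sorted(exes):      # the sorted list is built once, up front,
    exes[ver[0]] = exes[ver]  # so mutating the dict here is safe
```

Note that `sorted()` compares the version strings lexicographically, which is fine for single-digit minors like those current when this script was written.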
+
+REQUIRED_MODULES = ['os', 'posix', 'posixpath', 'nt', 'ntpath', 'genericpath',
+ 'fnmatch', 'locale', 'encodings', 'codecs',
+ 'stat', 'UserDict', 'readline', 'copy_reg', 'types',
+ 're', 'sre', 'sre_parse', 'sre_constants', 'sre_compile',
+ 'zlib']
+
+REQUIRED_FILES = ['lib-dynload', 'config']
+
+majver, minver = sys.version_info[:2]
+if majver == 2:
+ if minver >= 6:
+ REQUIRED_MODULES.extend(['warnings', 'linecache', '_abcoll', 'abc'])
+ if minver >= 7:
+ REQUIRED_MODULES.extend(['_weakrefset'])
+ if minver <= 3:
+ REQUIRED_MODULES.extend(['sets', '__future__'])
+elif majver == 3:
+ # Some extra modules are needed for Python 3, but different ones
+ # for different versions.
+ REQUIRED_MODULES.extend(['_abcoll', 'warnings', 'linecache', 'abc', 'io',
+ '_weakrefset', 'copyreg', 'tempfile', 'random',
+ '__future__', 'collections', 'keyword', 'tarfile',
+ 'shutil', 'struct', 'copy', 'tokenize', 'token',
+ 'functools', 'heapq', 'bisect', 'weakref',
+ 'reprlib'])
+ if minver >= 2:
+ REQUIRED_FILES[-1] = 'config-%s' % majver
+ if minver == 3:
+ import sysconfig
+ platdir = sysconfig.get_config_var('PLATDIR')
+ REQUIRED_FILES.append(platdir)
+ # The whole list of 3.3 modules is reproduced below - the current
+ # uncommented ones are required for 3.3 as of now, but more may be
+ # added as 3.3 development continues.
+ REQUIRED_MODULES.extend([
+ #"aifc",
+ #"antigravity",
+ #"argparse",
+ #"ast",
+ #"asynchat",
+ #"asyncore",
+ "base64",
+ #"bdb",
+ #"binhex",
+ #"bisect",
+ #"calendar",
+ #"cgi",
+ #"cgitb",
+ #"chunk",
+ #"cmd",
+ #"codeop",
+ #"code",
+ #"colorsys",
+ #"_compat_pickle",
+ #"compileall",
+ #"concurrent",
+ #"configparser",
+ #"contextlib",
+ #"cProfile",
+ #"crypt",
+ #"csv",
+ #"ctypes",
+ #"curses",
+ #"datetime",
+ #"dbm",
+ #"decimal",
+ #"difflib",
+ #"dis",
+ #"doctest",
+ #"dummy_threading",
+ "_dummy_thread",
+ #"email",
+ #"filecmp",
+ #"fileinput",
+ #"formatter",
+ #"fractions",
+ #"ftplib",
+ #"functools",
+ #"getopt",
+ #"getpass",
+ #"gettext",
+ #"glob",
+ #"gzip",
+ "hashlib",
+ #"heapq",
+ "hmac",
+ #"html",
+ #"http",
+ #"idlelib",
+ #"imaplib",
+ #"imghdr",
+ "imp",
+ "importlib",
+ #"inspect",
+ #"json",
+ #"lib2to3",
+ #"logging",
+ #"macpath",
+ #"macurl2path",
+ #"mailbox",
+ #"mailcap",
+ #"_markupbase",
+ #"mimetypes",
+ #"modulefinder",
+ #"multiprocessing",
+ #"netrc",
+ #"nntplib",
+ #"nturl2path",
+ #"numbers",
+ #"opcode",
+ #"optparse",
+ #"os2emxpath",
+ #"pdb",
+ #"pickle",
+ #"pickletools",
+ #"pipes",
+ #"pkgutil",
+ #"platform",
+ #"plat-linux2",
+ #"plistlib",
+ #"poplib",
+ #"pprint",
+ #"profile",
+ #"pstats",
+ #"pty",
+ #"pyclbr",
+ #"py_compile",
+ #"pydoc_data",
+ #"pydoc",
+ #"_pyio",
+ #"queue",
+ #"quopri",
+ #"reprlib",
+ "rlcompleter",
+ #"runpy",
+ #"sched",
+ #"shelve",
+ #"shlex",
+ #"smtpd",
+ #"smtplib",
+ #"sndhdr",
+ #"socket",
+ #"socketserver",
+ #"sqlite3",
+ #"ssl",
+ #"stringprep",
+ #"string",
+ #"_strptime",
+ #"subprocess",
+ #"sunau",
+ #"symbol",
+ #"symtable",
+ #"sysconfig",
+ #"tabnanny",
+ #"telnetlib",
+ #"test",
+ #"textwrap",
+ #"this",
+ #"_threading_local",
+ #"threading",
+ #"timeit",
+ #"tkinter",
+ #"tokenize",
+ #"token",
+ #"traceback",
+ #"trace",
+ #"tty",
+ #"turtledemo",
+ #"turtle",
+ #"unittest",
+ #"urllib",
+ #"uuid",
+ #"uu",
+ #"wave",
+ #"weakref",
+ #"webbrowser",
+ #"wsgiref",
+ #"xdrlib",
+ #"xml",
+ #"xmlrpc",
+ #"zipfile",
+ ])
+
+if is_pypy:
+ # these are needed to correctly display the exceptions that may happen
+ # during the bootstrap
+ REQUIRED_MODULES.extend(['traceback', 'linecache'])
+
+class Logger(object):
+
+ """
+ Logging object for use in command-line script. Allows ranges of
+ levels, to avoid some redundancy of displayed information.
+ """
+
+ DEBUG = logging.DEBUG
+ INFO = logging.INFO
+ NOTIFY = (logging.INFO+logging.WARN)/2
+ WARN = WARNING = logging.WARN
+ ERROR = logging.ERROR
+ FATAL = logging.FATAL
+
+ LEVELS = [DEBUG, INFO, NOTIFY, WARN, ERROR, FATAL]
+
+ def __init__(self, consumers):
+ self.consumers = consumers
+ self.indent = 0
+ self.in_progress = None
+ self.in_progress_hanging = False
+
+ def debug(self, msg, *args, **kw):
+ self.log(self.DEBUG, msg, *args, **kw)
+ def info(self, msg, *args, **kw):
+ self.log(self.INFO, msg, *args, **kw)
+ def notify(self, msg, *args, **kw):
+ self.log(self.NOTIFY, msg, *args, **kw)
+ def warn(self, msg, *args, **kw):
+ self.log(self.WARN, msg, *args, **kw)
+ def error(self, msg, *args, **kw):
+ self.log(self.ERROR, msg, *args, **kw)
+ def fatal(self, msg, *args, **kw):
+ self.log(self.FATAL, msg, *args, **kw)
+ def log(self, level, msg, *args, **kw):
+ if args:
+ if kw:
+ raise TypeError(
+ "You may give positional or keyword arguments, not both")
+ args = args or kw
+ rendered = None
+ for consumer_level, consumer in self.consumers:
+ if self.level_matches(level, consumer_level):
+ if (self.in_progress_hanging
+ and consumer in (sys.stdout, sys.stderr)):
+ self.in_progress_hanging = False
+ sys.stdout.write('\n')
+ sys.stdout.flush()
+ if rendered is None:
+ if args:
+ rendered = msg % args
+ else:
+ rendered = msg
+ rendered = ' '*self.indent + rendered
+ if hasattr(consumer, 'write'):
+ consumer.write(rendered+'\n')
+ else:
+ consumer(rendered)
+
+ def start_progress(self, msg):
+ assert not self.in_progress, (
+ "Tried to start_progress(%r) while in_progress %r"
+ % (msg, self.in_progress))
+ if self.level_matches(self.NOTIFY, self._stdout_level()):
+ sys.stdout.write(msg)
+ sys.stdout.flush()
+ self.in_progress_hanging = True
+ else:
+ self.in_progress_hanging = False
+ self.in_progress = msg
+
+ def end_progress(self, msg='done.'):
+ assert self.in_progress, (
+ "Tried to end_progress without start_progress")
+ if self.stdout_level_matches(self.NOTIFY):
+ if not self.in_progress_hanging:
+ # Some message has been printed out since start_progress
+ sys.stdout.write('...' + self.in_progress + msg + '\n')
+ sys.stdout.flush()
+ else:
+ sys.stdout.write(msg + '\n')
+ sys.stdout.flush()
+ self.in_progress = None
+ self.in_progress_hanging = False
+
+ def show_progress(self):
+ """If we are in a progress scope, and no log messages have been
+ shown, write out another '.'"""
+ if self.in_progress_hanging:
+ sys.stdout.write('.')
+ sys.stdout.flush()
+
+ def stdout_level_matches(self, level):
+ """Returns true if a message at this level will go to stdout"""
+ return self.level_matches(level, self._stdout_level())
+
+ def _stdout_level(self):
+ """Returns the level that stdout runs at"""
+ for level, consumer in self.consumers:
+ if consumer is sys.stdout:
+ return level
+ return self.FATAL
+
+ def level_matches(self, level, consumer_level):
+ """
+ >>> l = Logger([])
+ >>> l.level_matches(3, 4)
+ False
+ >>> l.level_matches(3, 2)
+ True
+ >>> l.level_matches(slice(None, 3), 3)
+ False
+ >>> l.level_matches(slice(None, 3), 2)
+ True
+ >>> l.level_matches(slice(1, 3), 1)
+ True
+ >>> l.level_matches(slice(2, 3), 1)
+ False
+ """
+ if isinstance(level, slice):
+ start, stop = level.start, level.stop
+ if start is not None and start > consumer_level:
+ return False
+ if stop is not None and stop <= consumer_level:
+ return False
+ return True
+ else:
+ return level >= consumer_level
+
+ #@classmethod
+ def level_for_integer(cls, level):
+ levels = cls.LEVELS
+ if level < 0:
+ return levels[0]
+ if level >= len(levels):
+ return levels[-1]
+ return levels[level]
+
+ level_for_integer = classmethod(level_for_integer)
+
+# create a silent logger just to prevent this from being undefined
+# will be overridden with the requested verbosity when main() is called.
+logger = Logger([(Logger.LEVELS[-1], sys.stdout)])
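A minimal standalone sketch (not the class above) of the consumer-list dispatch this `Logger` uses: each consumer is a (threshold, sink) pair, and a message is delivered to every sink whose threshold it meets. Sinks may be file-like objects or plain callables, which is what lets `filter_stdout` handlers plug in later.

```python
import logging

class MiniLogger(object):
    def __init__(self, consumers):
        self.consumers = consumers  # list of (min_level, sink) pairs

    def log(self, level, msg):
        for threshold, sink in self.consumers:
            if level >= threshold:
                if hasattr(sink, 'write'):
                    sink.write(msg + '\n')  # file-like sink
                else:
                    sink(msg)               # callable sink

captured = []
log = MiniLogger([(logging.INFO, captured.append)])
log.log(logging.DEBUG, 'hidden')  # below INFO, dropped
log.log(logging.WARN, 'shown')    # at or above INFO, delivered
```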
+
+def mkdir(path):
+ if not os.path.exists(path):
+ logger.info('Creating %s', path)
+ os.makedirs(path)
+ else:
+ logger.info('Directory %s already exists', path)
+
+def copyfileordir(src, dest, symlink=True):
+ if os.path.isdir(src):
+ shutil.copytree(src, dest, symlink)
+ else:
+ shutil.copy2(src, dest)
+
+def copyfile(src, dest, symlink=True):
+ if not os.path.exists(src):
+ # Some bad symlink in the src
+ logger.warn('Cannot find file %s (bad symlink)', src)
+ return
+ if os.path.exists(dest):
+ logger.debug('File %s already exists', dest)
+ return
+ if not os.path.exists(os.path.dirname(dest)):
+ logger.info('Creating parent directories for %s', os.path.dirname(dest))
+ os.makedirs(os.path.dirname(dest))
+ if not os.path.islink(src):
+ srcpath = os.path.abspath(src)
+ else:
+ srcpath = os.readlink(src)
+ if symlink and hasattr(os, 'symlink') and not is_win:
+ logger.info('Symlinking %s', dest)
+ try:
+ os.symlink(srcpath, dest)
+ except (OSError, NotImplementedError):
+ logger.info('Symlinking failed, copying to %s', dest)
+ copyfileordir(src, dest, symlink)
+ else:
+ logger.info('Copying to %s', dest)
+ copyfileordir(src, dest, symlink)
+
+def writefile(dest, content, overwrite=True):
+ if not os.path.exists(dest):
+ logger.info('Writing %s', dest)
+ f = open(dest, 'wb')
+ f.write(content.encode('utf-8'))
+ f.close()
+ return
+ else:
+ f = open(dest, 'rb')
+ c = f.read()
+ f.close()
+ if c != content.encode("utf-8"):
+ if not overwrite:
+ logger.notify('File %s exists with different content; not overwriting', dest)
+ return
+ logger.notify('Overwriting %s with new content', dest)
+ f = open(dest, 'wb')
+ f.write(content.encode('utf-8'))
+ f.close()
+ else:
+ logger.info('Content %s already in place', dest)
+
+def rmtree(dir):
+ if os.path.exists(dir):
+ logger.notify('Deleting tree %s', dir)
+ shutil.rmtree(dir)
+ else:
+ logger.info('Do not need to delete %s; already gone', dir)
+
+def make_exe(fn):
+ if hasattr(os, 'chmod'):
+ oldmode = os.stat(fn).st_mode & 0xFFF # 0o7777
+ newmode = (oldmode | 0x16D) & 0xFFF # 0o555, 0o7777
+ os.chmod(fn, newmode)
+ logger.info('Changed mode of %s to %s', fn, oct(newmode))
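The hex constants in `make_exe()` are octal permission masks spelled in hex, likely so the file parses under both very old Python 2 (no `0o` prefix) and Python 3 (no bare `0644` literals); the inline comments give the octal values. A quick check of the mode arithmetic:

```python
# Verify the hex masks against their octal equivalents from the comments.
assert 0xFFF == 0o7777   # full permission mask
assert 0x16D == 0o555    # r-xr-xr-x

oldmode = 0o644                       # rw-r--r--
newmode = (oldmode | 0o555) & 0o7777  # add read/execute for everyone
# newmode is now 0o755: execute bits added, owner write bit kept
```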
+
+def _find_file(filename, dirs):
+ for dir in reversed(dirs):
+ files = glob.glob(os.path.join(dir, filename))
+ if files and os.path.isfile(files[0]):
+ return True, files[0]
+ return False, filename
+
+def file_search_dirs():
+ here = os.path.dirname(os.path.abspath(__file__))
+ dirs = ['.', here,
+ join(here, 'virtualenv_support')]
+ if os.path.splitext(os.path.dirname(__file__))[0] != 'virtualenv':
+ # Probably some boot script; just in case virtualenv is installed...
+ try:
+ import virtualenv
+ except ImportError:
+ pass
+ else:
+ dirs.append(os.path.join(os.path.dirname(virtualenv.__file__), 'virtualenv_support'))
+ return [d for d in dirs if os.path.isdir(d)]
+
+
+class UpdatingDefaultsHelpFormatter(optparse.IndentedHelpFormatter):
+ """
+ Custom help formatter for use in ConfigOptionParser that updates
+ the defaults before expanding them, allowing them to show up correctly
+ in the help listing
+ """
+ def expand_default(self, option):
+ if self.parser is not None:
+ self.parser.update_defaults(self.parser.defaults)
+ return optparse.IndentedHelpFormatter.expand_default(self, option)
+
+
+class ConfigOptionParser(optparse.OptionParser):
+ """
+ Custom option parser which updates its defaults by checking the
+ configuration files and environmental variables
+ """
+ def __init__(self, *args, **kwargs):
+ self.config = ConfigParser.RawConfigParser()
+ self.files = self.get_config_files()
+ self.config.read(self.files)
+ optparse.OptionParser.__init__(self, *args, **kwargs)
+
+ def get_config_files(self):
+ config_file = os.environ.get('VIRTUALENV_CONFIG_FILE', False)
+ if config_file and os.path.exists(config_file):
+ return [config_file]
+ return [default_config_file]
+
+ def update_defaults(self, defaults):
+ """
+ Updates the given defaults with values from the config files and
+ the environ. Does a little special handling for certain types of
+ options (lists).
+ """
+ # Then go and look for the other sources of configuration:
+ config = {}
+ # 1. config files
+ config.update(dict(self.get_config_section('virtualenv')))
+ # 2. environmental variables
+ config.update(dict(self.get_environ_vars()))
+ # Then set the options with those values
+ for key, val in config.items():
+ key = key.replace('_', '-')
+ if not key.startswith('--'):
+ key = '--%s' % key # only prefer long opts
+ option = self.get_option(key)
+ if option is not None:
+ # ignore empty values
+ if not val:
+ continue
+ # handle multiline configs
+ if option.action == 'append':
+ val = val.split()
+ else:
+ option.nargs = 1
+ if option.action == 'store_false':
+ val = not strtobool(val)
+ elif option.action in ('store_true', 'count'):
+ val = strtobool(val)
+ try:
+ val = option.convert_value(key, val)
+ except optparse.OptionValueError:
+ e = sys.exc_info()[1]
+                    print("An error occurred during configuration: %s" % e)
+ sys.exit(3)
+ defaults[option.dest] = val
+ return defaults
+
+ def get_config_section(self, name):
+ """
+ Get a section of a configuration
+ """
+ if self.config.has_section(name):
+ return self.config.items(name)
+ return []
+
+ def get_environ_vars(self, prefix='VIRTUALENV_'):
+ """
+        Yields (name, value) pairs for all environment variables with the given prefix
+ """
+ for key, val in os.environ.items():
+ if key.startswith(prefix):
+ yield (key.replace(prefix, '').lower(), val)
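A standalone re-creation (with hypothetical input values) of how `get_environ_vars()` and `update_defaults()` together map `VIRTUALENV_*` environment variables onto long option names:

```python
def environ_vars(environ, prefix='VIRTUALENV_'):
    # mirror of get_environ_vars(): strip the prefix, lowercase the rest
    for key, val in environ.items():
        if key.startswith(prefix):
            yield key.replace(prefix, '').lower(), val

env = {'VIRTUALENV_SEARCH_DIRS': '/opt/dists', 'PATH': '/usr/bin'}
pairs = dict(environ_vars(env))   # {'search_dirs': '/opt/dists'}

# update_defaults() then rewrites the key into the long option name:
opt = '--%s' % 'search_dirs'.replace('_', '-')   # '--search-dirs'
```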
+
+ def get_default_values(self):
+ """
+        Overriding to make updating the defaults after instantiation of
+ the option parser possible, update_defaults() does the dirty work.
+ """
+ if not self.process_default_values:
+ # Old, pre-Optik 1.5 behaviour.
+ return optparse.Values(self.defaults)
+
+ defaults = self.update_defaults(self.defaults.copy()) # ours
+ for option in self._get_all_options():
+ default = defaults.get(option.dest)
+ if isinstance(default, basestring):
+ opt_str = option.get_opt_string()
+ defaults[option.dest] = option.check_value(opt_str, default)
+ return optparse.Values(defaults)
+
+
+def main():
+ parser = ConfigOptionParser(
+ version=virtualenv_version,
+ usage="%prog [OPTIONS] DEST_DIR",
+ formatter=UpdatingDefaultsHelpFormatter())
+
+ parser.add_option(
+ '-v', '--verbose',
+ action='count',
+ dest='verbose',
+ default=0,
+ help="Increase verbosity")
+
+ parser.add_option(
+ '-q', '--quiet',
+ action='count',
+ dest='quiet',
+ default=0,
+ help='Decrease verbosity')
+
+ parser.add_option(
+ '-p', '--python',
+ dest='python',
+ metavar='PYTHON_EXE',
+ help='The Python interpreter to use, e.g., --python=python2.5 will use the python2.5 '
+ 'interpreter to create the new environment. The default is the interpreter that '
+ 'virtualenv was installed with (%s)' % sys.executable)
+
+ parser.add_option(
+ '--clear',
+ dest='clear',
+ action='store_true',
+ help="Clear out the non-root install and start from scratch")
+
+ parser.set_defaults(system_site_packages=False)
+ parser.add_option(
+ '--no-site-packages',
+ dest='system_site_packages',
+ action='store_false',
+ help="Don't give access to the global site-packages dir to the "
+ "virtual environment (default)")
+
+ parser.add_option(
+ '--system-site-packages',
+ dest='system_site_packages',
+ action='store_true',
+ help="Give access to the global site-packages dir to the "
+ "virtual environment")
+
+ parser.add_option(
+ '--always-copy',
+ dest='symlink',
+ action='store_false',
+ default=True,
+ help="Always copy files rather than symlinking")
+
+ parser.add_option(
+ '--unzip-setuptools',
+ dest='unzip_setuptools',
+ action='store_true',
+ help="Unzip Setuptools when installing it")
+
+ parser.add_option(
+ '--relocatable',
+ dest='relocatable',
+ action='store_true',
+ help='Make an EXISTING virtualenv environment relocatable. '
+ 'This fixes up scripts and makes all .pth files relative')
+
+ parser.add_option(
+ '--no-setuptools',
+ dest='no_setuptools',
+ action='store_true',
+ help='Do not install setuptools (or pip) '
+ 'in the new virtualenv.')
+
+ parser.add_option(
+ '--no-pip',
+ dest='no_pip',
+ action='store_true',
+ help='Do not install pip in the new virtualenv.')
+
+ default_search_dirs = file_search_dirs()
+ parser.add_option(
+ '--extra-search-dir',
+ dest="search_dirs",
+ action="append",
+ default=default_search_dirs,
+ help="Directory to look for setuptools/pip distributions in. "
+ "You can add any number of additional --extra-search-dir paths.")
+
+ parser.add_option(
+ '--never-download',
+ dest="never_download",
+ action="store_true",
+ default=True,
+ help="Never download anything from the network. This is now always "
+ "the case. The option is only retained for backward compatibility, "
+ "and does nothing. Virtualenv will fail if local distributions "
+ "of setuptools/pip are not present.")
+
+ parser.add_option(
+ '--prompt',
+ dest='prompt',
+ help='Provides an alternative prompt prefix for this environment')
+
+ parser.add_option(
+ '--setuptools',
+ dest='setuptools',
+ action='store_true',
+ help="Backward compatibility. Does nothing.")
+
+ parser.add_option(
+ '--distribute',
+ dest='distribute',
+ action='store_true',
+ help="Backward compatibility. Does nothing.")
+
+ if 'extend_parser' in globals():
+ extend_parser(parser)
+
+ options, args = parser.parse_args()
+
+ global logger
+
+ if 'adjust_options' in globals():
+ adjust_options(options, args)
+
+ verbosity = options.verbose - options.quiet
+ logger = Logger([(Logger.level_for_integer(2 - verbosity), sys.stdout)])
+
+ if options.python and not os.environ.get('VIRTUALENV_INTERPRETER_RUNNING'):
+ env = os.environ.copy()
+ interpreter = resolve_interpreter(options.python)
+ if interpreter == sys.executable:
+ logger.warn('Already using interpreter %s' % interpreter)
+ else:
+ logger.notify('Running virtualenv with interpreter %s' % interpreter)
+ env['VIRTUALENV_INTERPRETER_RUNNING'] = 'true'
+ file = __file__
+ if file.endswith('.pyc'):
+ file = file[:-1]
+ popen = subprocess.Popen([interpreter, file] + sys.argv[1:], env=env)
+ raise SystemExit(popen.wait())
+
+ if not args:
+ print('You must provide a DEST_DIR')
+ parser.print_help()
+ sys.exit(2)
+ if len(args) > 1:
+ print('There must be only one argument: DEST_DIR (you gave %s)' % (
+ ' '.join(args)))
+ parser.print_help()
+ sys.exit(2)
+
+ home_dir = args[0]
+
+ if os.environ.get('WORKING_ENV'):
+ logger.fatal('ERROR: you cannot run virtualenv while in a workingenv')
+ logger.fatal('Please deactivate your workingenv, then re-run this script')
+ sys.exit(3)
+
+ if 'PYTHONHOME' in os.environ:
+ logger.warn('PYTHONHOME is set. You *must* activate the virtualenv before using it')
+ del os.environ['PYTHONHOME']
+
+ if options.relocatable:
+ make_environment_relocatable(home_dir)
+ return
+
+ if not options.never_download:
+ logger.warn('The --never-download option is for backward compatibility only.')
+ logger.warn('Setting it to false is no longer supported, and will be ignored.')
+
+ create_environment(home_dir,
+ site_packages=options.system_site_packages,
+ clear=options.clear,
+ unzip_setuptools=options.unzip_setuptools,
+ prompt=options.prompt,
+ search_dirs=options.search_dirs,
+ never_download=True,
+ no_setuptools=options.no_setuptools,
+ no_pip=options.no_pip,
+ symlink=options.symlink)
+ if 'after_install' in globals():
+ after_install(options, home_dir)
+
+def call_subprocess(cmd, show_stdout=True,
+ filter_stdout=None, cwd=None,
+ raise_on_returncode=True, extra_env=None,
+ remove_from_env=None):
+ cmd_parts = []
+ for part in cmd:
+ if len(part) > 45:
+ part = part[:20]+"..."+part[-20:]
+ if ' ' in part or '\n' in part or '"' in part or "'" in part:
+ part = '"%s"' % part.replace('"', '\\"')
+ if hasattr(part, 'decode'):
+ try:
+ part = part.decode(sys.getdefaultencoding())
+ except UnicodeDecodeError:
+ part = part.decode(sys.getfilesystemencoding())
+ cmd_parts.append(part)
+ cmd_desc = ' '.join(cmd_parts)
+ if show_stdout:
+ stdout = None
+ else:
+ stdout = subprocess.PIPE
+ logger.debug("Running command %s" % cmd_desc)
+ if extra_env or remove_from_env:
+ env = os.environ.copy()
+ if extra_env:
+ env.update(extra_env)
+ if remove_from_env:
+ for varname in remove_from_env:
+ env.pop(varname, None)
+ else:
+ env = None
+ try:
+ proc = subprocess.Popen(
+ cmd, stderr=subprocess.STDOUT, stdin=None, stdout=stdout,
+ cwd=cwd, env=env)
+ except Exception:
+ e = sys.exc_info()[1]
+ logger.fatal(
+ "Error %s while executing command %s" % (e, cmd_desc))
+ raise
+ all_output = []
+ if stdout is not None:
+ stdout = proc.stdout
+ encoding = sys.getdefaultencoding()
+ fs_encoding = sys.getfilesystemencoding()
+ while 1:
+ line = stdout.readline()
+ try:
+ line = line.decode(encoding)
+ except UnicodeDecodeError:
+ line = line.decode(fs_encoding)
+ if not line:
+ break
+ line = line.rstrip()
+ all_output.append(line)
+ if filter_stdout:
+ level = filter_stdout(line)
+ if isinstance(level, tuple):
+ level, line = level
+ logger.log(level, line)
+ if not logger.stdout_level_matches(level):
+ logger.show_progress()
+ else:
+ logger.info(line)
+ else:
+ proc.communicate()
+ proc.wait()
+ if proc.returncode:
+ if raise_on_returncode:
+ if all_output:
+ logger.notify('Complete output from command %s:' % cmd_desc)
+ logger.notify('\n'.join(all_output) + '\n----------------------------------------')
+ raise OSError(
+ "Command %s failed with error code %s"
+ % (cmd_desc, proc.returncode))
+ else:
+ logger.warn(
+ "Command %s had error code %s"
+ % (cmd_desc, proc.returncode))
+
+def filter_install_output(line):
+ if line.strip().startswith('running'):
+ return Logger.INFO
+ return Logger.DEBUG
+
+def install_sdist(project_name, sdist, py_executable, search_dirs=None):
+
+ if search_dirs is None:
+ search_dirs = file_search_dirs()
+ found, sdist_path = _find_file(sdist, search_dirs)
+ if not found:
+ logger.fatal("Cannot find sdist %s" % (sdist,))
+ return
+
+ tmpdir = tempfile.mkdtemp()
+ try:
+ tar = tarfile.open(sdist_path)
+ tar.extractall(tmpdir)
+ tar.close()
+ srcdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
+ cmd = [py_executable, 'setup.py', 'install',
+ '--single-version-externally-managed',
+ '--record', 'record']
+ logger.start_progress('Installing %s...' % project_name)
+ logger.indent += 2
+ try:
+ call_subprocess(cmd, show_stdout=False, cwd=srcdir,
+ filter_stdout=filter_install_output)
+ finally:
+ logger.indent -= 2
+ logger.end_progress()
+ finally:
+ shutil.rmtree(tmpdir)
+
+def create_environment(home_dir, site_packages=False, clear=False,
+ unzip_setuptools=False,
+ prompt=None, search_dirs=None, never_download=False,
+ no_setuptools=False, no_pip=False, symlink=True):
+ """
+ Creates a new environment in ``home_dir``.
+
+ If ``site_packages`` is true, then the global ``site-packages/``
+ directory will be on the path.
+
+ If ``clear`` is true (default False) then the environment will
+ first be cleared.
+ """
+ home_dir, lib_dir, inc_dir, bin_dir = path_locations(home_dir)
+
+ py_executable = os.path.abspath(install_python(
+ home_dir, lib_dir, inc_dir, bin_dir,
+ site_packages=site_packages, clear=clear, symlink=symlink))
+
+ install_distutils(home_dir)
+
+ if not no_setuptools:
+ install_sdist('Setuptools', 'setuptools-*.tar.gz', py_executable, search_dirs)
+ if not no_pip:
+ install_sdist('Pip', 'pip-*.tar.gz', py_executable, search_dirs)
+
+ install_activate(home_dir, bin_dir, prompt)
+
+def is_executable_file(fpath):
+ return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
+
+def path_locations(home_dir):
+ """Return the path locations for the environment (where libraries are,
+ where scripts go, etc)"""
+ # XXX: We'd use distutils.sysconfig.get_python_inc/lib but its
+ # prefix arg is broken: http://bugs.python.org/issue3386
+ if is_win:
+ # Windows has lots of problems with executables with spaces in
+ # the name; this function will remove them (using the ~1
+ # format):
+ mkdir(home_dir)
+ if ' ' in home_dir:
+ import ctypes
+ GetShortPathName = ctypes.windll.kernel32.GetShortPathNameW
+ size = max(len(home_dir)+1, 256)
+ buf = ctypes.create_unicode_buffer(size)
+ try:
+ u = unicode
+ except NameError:
+ u = str
+ ret = GetShortPathName(u(home_dir), buf, size)
+ if not ret:
+ print('Error: the path "%s" has a space in it' % home_dir)
+ print('We could not determine the short pathname for it.')
+ print('Exiting.')
+ sys.exit(3)
+ home_dir = str(buf.value)
+ lib_dir = join(home_dir, 'Lib')
+ inc_dir = join(home_dir, 'Include')
+ bin_dir = join(home_dir, 'Scripts')
+ if is_jython:
+ lib_dir = join(home_dir, 'Lib')
+ inc_dir = join(home_dir, 'Include')
+ bin_dir = join(home_dir, 'bin')
+ elif is_pypy:
+ lib_dir = home_dir
+ inc_dir = join(home_dir, 'include')
+ bin_dir = join(home_dir, 'bin')
+ elif not is_win:
+ lib_dir = join(home_dir, 'lib', py_version)
+ multiarch_exec = '/usr/bin/multiarch-platform'
+ if is_executable_file(multiarch_exec):
+ # In Mageia (2) and Mandriva distros the include dir must be like:
+ # virtualenv/include/multiarch-x86_64-linux/python2.7
+ # instead of being virtualenv/include/python2.7
+ p = subprocess.Popen(multiarch_exec, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ stdout, stderr = p.communicate()
+ # stdout.strip() removes the trailing newline from the platform name
+ inc_dir = join(home_dir, 'include', stdout.strip(), py_version + abiflags)
+ else:
+ inc_dir = join(home_dir, 'include', py_version + abiflags)
+ bin_dir = join(home_dir, 'bin')
+ return home_dir, lib_dir, inc_dir, bin_dir
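On POSIX platforms the layout computed above reduces to a fixed scheme. A minimal standalone sketch of the non-Windows branch (assuming CPython on Linux without the multiarch special case; `posix_locations` is a hypothetical name for illustration):

```python
import os

def posix_locations(home_dir, py_version='python2.7', abiflags=''):
    # Mirrors the non-Windows branch of path_locations: libraries under
    # lib/pythonX.Y, headers under include/pythonX.Y, scripts under bin/.
    lib_dir = os.path.join(home_dir, 'lib', py_version)
    inc_dir = os.path.join(home_dir, 'include', py_version + abiflags)
    bin_dir = os.path.join(home_dir, 'bin')
    return home_dir, lib_dir, inc_dir, bin_dir

print(posix_locations('/tmp/venv'))
```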
+
+
+def change_prefix(filename, dst_prefix):
+ prefixes = [sys.prefix]
+
+ if is_darwin:
+ prefixes.extend((
+ os.path.join("/Library/Python", sys.version[:3], "site-packages"),
+ os.path.join(sys.prefix, "Extras", "lib", "python"),
+ os.path.join("~", "Library", "Python", sys.version[:3], "site-packages"),
+ # Python 2.6 no-frameworks
+ os.path.join("~", ".local", "lib","python", sys.version[:3], "site-packages"),
+ # System Python 2.7 on OSX Mountain Lion
+ os.path.join("~", "Library", "Python", sys.version[:3], "lib", "python", "site-packages")))
+
+ if hasattr(sys, 'real_prefix'):
+ prefixes.append(sys.real_prefix)
+ if hasattr(sys, 'base_prefix'):
+ prefixes.append(sys.base_prefix)
+ prefixes = list(map(os.path.expanduser, prefixes))
+ prefixes = list(map(os.path.abspath, prefixes))
+ # Check longer prefixes first so we don't split in the middle of a filename
+ prefixes = sorted(prefixes, key=len, reverse=True)
+ filename = os.path.abspath(filename)
+ for src_prefix in prefixes:
+ if filename.startswith(src_prefix):
+ _, relpath = filename.split(src_prefix, 1)
+ if src_prefix != os.sep: # sys.prefix == "/"
+ assert relpath[0] == os.sep
+ relpath = relpath[1:]
+ return join(dst_prefix, relpath)
+ assert False, "Filename %s does not start with any of these prefixes: %s" % \
+ (filename, prefixes)
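The longest-prefix-first sort above matters: a short prefix such as `/usr` must not win over `/usr/local` when both match. A standalone sketch of the same idea (the paths and the `rewrite_prefix` name are hypothetical):

```python
import os

def rewrite_prefix(filename, prefixes, dst_prefix):
    # Check longer prefixes first so /usr/local is tried before /usr.
    for src in sorted(prefixes, key=len, reverse=True):
        if filename.startswith(src):
            rel = filename[len(src):].lstrip(os.sep)
            return os.path.join(dst_prefix, rel)
    raise AssertionError('%s matches no prefix' % filename)

print(rewrite_prefix('/usr/local/lib/python2.7/os.py',
                     ['/usr', '/usr/local'], '/tmp/venv'))
```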
+
+def copy_required_modules(dst_prefix, symlink):
+ import imp
+ # If we are running under -p, we need to remove the current
+ # directory from sys.path temporarily here, so that we
+ # definitely get the modules from the site directory of
+ # the interpreter we are running under, not the one
+ # virtualenv.py is installed under (which might lead to py2/py3
+ # incompatibility issues)
+ _prev_sys_path = sys.path
+ if os.environ.get('VIRTUALENV_INTERPRETER_RUNNING'):
+ sys.path = sys.path[1:]
+ try:
+ for modname in REQUIRED_MODULES:
+ if modname in sys.builtin_module_names:
+ logger.info("Ignoring built-in bootstrap module: %s" % modname)
+ continue
+ try:
+ f, filename, _ = imp.find_module(modname)
+ except ImportError:
+ logger.info("Cannot import bootstrap module: %s" % modname)
+ else:
+ if f is not None:
+ f.close()
+ # special-case custom readline.so on OS X, but not for pypy:
+ if modname == 'readline' and sys.platform == 'darwin' and not (
+ is_pypy or filename.endswith(join('lib-dynload', 'readline.so'))):
+ dst_filename = join(dst_prefix, 'lib', 'python%s' % sys.version[:3], 'readline.so')
+ elif modname == 'readline' and sys.platform == 'win32':
+ # special-case for Windows, where readline is not a
+ # standard module, though it may have been installed in
+ # site-packages by a third-party package; skip it so the
+ # copyfile() below is not reached with a stale dst_filename
+ continue
+ else:
+ dst_filename = change_prefix(filename, dst_prefix)
+ copyfile(filename, dst_filename, symlink)
+ if filename.endswith('.pyc'):
+ pyfile = filename[:-1]
+ if os.path.exists(pyfile):
+ copyfile(pyfile, dst_filename[:-1], symlink)
+ finally:
+ sys.path = _prev_sys_path
+
+
+def subst_path(prefix_path, prefix, home_dir):
+ prefix_path = os.path.normpath(prefix_path)
+ prefix = os.path.normpath(prefix)
+ home_dir = os.path.normpath(home_dir)
+ if not prefix_path.startswith(prefix):
+ logger.warn('Path not in prefix %r %r', prefix_path, prefix)
+ return
+ return prefix_path.replace(prefix, home_dir, 1)
+
+
+def install_python(home_dir, lib_dir, inc_dir, bin_dir, site_packages, clear, symlink=True):
+ """Install just the base environment, no distutils patches etc"""
+ if sys.executable.startswith(bin_dir):
+ print('Please use the *system* python to run this script')
+ return
+
+ if clear:
+ rmtree(lib_dir)
+ ## FIXME: why not delete it?
+ ## Maybe it should delete everything with #!/path/to/venv/python in it
+ logger.notify('Not deleting %s', bin_dir)
+
+ if hasattr(sys, 'real_prefix'):
+ logger.notify('Using real prefix %r' % sys.real_prefix)
+ prefix = sys.real_prefix
+ elif hasattr(sys, 'base_prefix'):
+ logger.notify('Using base prefix %r' % sys.base_prefix)
+ prefix = sys.base_prefix
+ else:
+ prefix = sys.prefix
+ mkdir(lib_dir)
+ fix_lib64(lib_dir, symlink)
+ stdlib_dirs = [os.path.dirname(os.__file__)]
+ if is_win:
+ stdlib_dirs.append(join(os.path.dirname(stdlib_dirs[0]), 'DLLs'))
+ elif is_darwin:
+ stdlib_dirs.append(join(stdlib_dirs[0], 'site-packages'))
+ if hasattr(os, 'symlink'):
+ logger.info('Symlinking Python bootstrap modules')
+ else:
+ logger.info('Copying Python bootstrap modules')
+ logger.indent += 2
+ try:
+ # copy required files...
+ for stdlib_dir in stdlib_dirs:
+ if not os.path.isdir(stdlib_dir):
+ continue
+ for fn in os.listdir(stdlib_dir):
+ bn = os.path.splitext(fn)[0]
+ if fn != 'site-packages' and bn in REQUIRED_FILES:
+ copyfile(join(stdlib_dir, fn), join(lib_dir, fn), symlink)
+ # ...and modules
+ copy_required_modules(home_dir, symlink)
+ finally:
+ logger.indent -= 2
+ mkdir(join(lib_dir, 'site-packages'))
+ import site
+ site_filename = site.__file__
+ if site_filename.endswith('.pyc'):
+ site_filename = site_filename[:-1]
+ elif site_filename.endswith('$py.class'):
+ site_filename = site_filename.replace('$py.class', '.py')
+ site_filename_dst = change_prefix(site_filename, home_dir)
+ site_dir = os.path.dirname(site_filename_dst)
+ writefile(site_filename_dst, SITE_PY)
+ writefile(join(site_dir, 'orig-prefix.txt'), prefix)
+ site_packages_filename = join(site_dir, 'no-global-site-packages.txt')
+ if not site_packages:
+ writefile(site_packages_filename, '')
+
+ if is_pypy or is_win:
+ stdinc_dir = join(prefix, 'include')
+ else:
+ stdinc_dir = join(prefix, 'include', py_version + abiflags)
+ if os.path.exists(stdinc_dir):
+ copyfile(stdinc_dir, inc_dir, symlink)
+ else:
+ logger.debug('No include dir %s' % stdinc_dir)
+
+ platinc_dir = distutils.sysconfig.get_python_inc(plat_specific=1)
+ if platinc_dir != stdinc_dir:
+ platinc_dest = distutils.sysconfig.get_python_inc(
+ plat_specific=1, prefix=home_dir)
+ if platinc_dir == platinc_dest:
+ # Do platinc_dest manually due to a CPython bug;
+ # not http://bugs.python.org/issue3386 but a close cousin
+ platinc_dest = subst_path(platinc_dir, prefix, home_dir)
+ if platinc_dest:
+ # PyPy's stdinc_dir and prefix are relative to the original binary
+ # (traversing virtualenvs), whereas the platinc_dir is relative to
+ # the inner virtualenv and ignores the prefix argument.
+ # This seems more evolved than designed.
+ copyfile(platinc_dir, platinc_dest, symlink)
+
+ # pypy never uses exec_prefix, just ignore it
+ if sys.exec_prefix != prefix and not is_pypy:
+ if is_win:
+ exec_dir = join(sys.exec_prefix, 'lib')
+ elif is_jython:
+ exec_dir = join(sys.exec_prefix, 'Lib')
+ else:
+ exec_dir = join(sys.exec_prefix, 'lib', py_version)
+ for fn in os.listdir(exec_dir):
+ copyfile(join(exec_dir, fn), join(lib_dir, fn), symlink)
+
+ if is_jython:
+ # Jython has either jython-dev.jar and javalib/ dir, or just
+ # jython.jar
+ for name in 'jython-dev.jar', 'javalib', 'jython.jar':
+ src = join(prefix, name)
+ if os.path.exists(src):
+ copyfile(src, join(home_dir, name), symlink)
+ # XXX: registry should always exist after Jython 2.5rc1
+ src = join(prefix, 'registry')
+ if os.path.exists(src):
+ copyfile(src, join(home_dir, 'registry'), symlink=False)
+ copyfile(join(prefix, 'cachedir'), join(home_dir, 'cachedir'),
+ symlink=False)
+
+ mkdir(bin_dir)
+ py_executable = join(bin_dir, os.path.basename(sys.executable))
+ if 'Python.framework' in prefix:
+ # OS X framework builds cause validation to break
+ # https://github.com/pypa/virtualenv/issues/322
+ if os.environ.get('__PYVENV_LAUNCHER__'):
+ os.unsetenv('__PYVENV_LAUNCHER__')
+ if re.search(r'/Python(?:-32|-64)*$', py_executable):
+ # The name of the python executable is not quite what
+ # we want, rename it.
+ py_executable = os.path.join(
+ os.path.dirname(py_executable), 'python')
+
+ logger.notify('New %s executable in %s', expected_exe, py_executable)
+ pcbuild_dir = os.path.dirname(sys.executable)
+ pyd_pth = os.path.join(lib_dir, 'site-packages', 'virtualenv_builddir_pyd.pth')
+ if is_win and os.path.exists(os.path.join(pcbuild_dir, 'build.bat')):
+ logger.notify('Detected python running from build directory %s', pcbuild_dir)
+ logger.notify('Writing .pth file linking to build directory for *.pyd files')
+ writefile(pyd_pth, pcbuild_dir)
+ else:
+ pcbuild_dir = None
+ if os.path.exists(pyd_pth):
+ logger.info('Deleting %s (not Windows env or not build directory python)' % pyd_pth)
+ os.unlink(pyd_pth)
+
+ if sys.executable != py_executable:
+ ## FIXME: could I just hard link?
+ executable = sys.executable
+ shutil.copyfile(executable, py_executable)
+ make_exe(py_executable)
+ if is_win or is_cygwin:
+ pythonw = os.path.join(os.path.dirname(sys.executable), 'pythonw.exe')
+ if os.path.exists(pythonw):
+ logger.info('Also created pythonw.exe')
+ shutil.copyfile(pythonw, os.path.join(os.path.dirname(py_executable), 'pythonw.exe'))
+ python_d = os.path.join(os.path.dirname(sys.executable), 'python_d.exe')
+ python_d_dest = os.path.join(os.path.dirname(py_executable), 'python_d.exe')
+ if os.path.exists(python_d):
+ logger.info('Also created python_d.exe')
+ shutil.copyfile(python_d, python_d_dest)
+ elif os.path.exists(python_d_dest):
+ logger.info('Removed python_d.exe as it is no longer at the source')
+ os.unlink(python_d_dest)
+ # we need to copy the DLL to enforce that windows will load the correct one.
+ # may not exist if we are cygwin.
+ py_executable_dll = 'python%s%s.dll' % (
+ sys.version_info[0], sys.version_info[1])
+ py_executable_dll_d = 'python%s%s_d.dll' % (
+ sys.version_info[0], sys.version_info[1])
+ pythondll = os.path.join(os.path.dirname(sys.executable), py_executable_dll)
+ pythondll_d = os.path.join(os.path.dirname(sys.executable), py_executable_dll_d)
+ pythondll_d_dest = os.path.join(os.path.dirname(py_executable), py_executable_dll_d)
+ if os.path.exists(pythondll):
+ logger.info('Also created %s' % py_executable_dll)
+ shutil.copyfile(pythondll, os.path.join(os.path.dirname(py_executable), py_executable_dll))
+ if os.path.exists(pythondll_d):
+ logger.info('Also created %s' % py_executable_dll_d)
+ shutil.copyfile(pythondll_d, pythondll_d_dest)
+ elif os.path.exists(pythondll_d_dest):
+ logger.info('Removed %s as the source does not exist' % pythondll_d_dest)
+ os.unlink(pythondll_d_dest)
+ if is_pypy:
+ # make a symlink python --> pypy-c
+ python_executable = os.path.join(os.path.dirname(py_executable), 'python')
+ if sys.platform in ('win32', 'cygwin'):
+ python_executable += '.exe'
+ logger.info('Also created executable %s' % python_executable)
+ copyfile(py_executable, python_executable, symlink)
+
+ if is_win:
+ for name in 'libexpat.dll', 'libpypy.dll', 'libpypy-c.dll', 'libeay32.dll', 'ssleay32.dll', 'sqlite.dll':
+ src = join(prefix, name)
+ if os.path.exists(src):
+ copyfile(src, join(bin_dir, name), symlink)
+
+ if os.path.splitext(os.path.basename(py_executable))[0] != expected_exe:
+ secondary_exe = os.path.join(os.path.dirname(py_executable),
+ expected_exe)
+ py_executable_ext = os.path.splitext(py_executable)[1]
+ if py_executable_ext.lower() == '.exe':
+ # python2.4 gives an extension of '.4' :P
+ secondary_exe += py_executable_ext
+ if os.path.exists(secondary_exe):
+ logger.warn('Not overwriting existing %s script %s (you must use %s)'
+ % (expected_exe, secondary_exe, py_executable))
+ else:
+ logger.notify('Also creating executable in %s' % secondary_exe)
+ shutil.copyfile(sys.executable, secondary_exe)
+ make_exe(secondary_exe)
+
+ if '.framework' in prefix:
+ if 'Python.framework' in prefix:
+ logger.debug('MacOSX Python framework detected')
+ # Make sure we use the embedded interpreter inside
+ # the framework, even if sys.executable points to
+ # the stub executable in ${sys.prefix}/bin
+ # See http://groups.google.com/group/python-virtualenv/
+ # browse_thread/thread/17cab2f85da75951
+ original_python = os.path.join(
+ prefix, 'Resources/Python.app/Contents/MacOS/Python')
+ if 'EPD' in prefix:
+ logger.debug('EPD framework detected')
+ original_python = os.path.join(prefix, 'bin/python')
+ shutil.copy(original_python, py_executable)
+
+ # Copy the framework's dylib into the virtual
+ # environment
+ virtual_lib = os.path.join(home_dir, '.Python')
+
+ if os.path.exists(virtual_lib):
+ os.unlink(virtual_lib)
+ copyfile(
+ os.path.join(prefix, 'Python'),
+ virtual_lib,
+ symlink)
+
+ # And then change the install_name of the copied python executable
+ try:
+ mach_o_change(py_executable,
+ os.path.join(prefix, 'Python'),
+ '@executable_path/../.Python')
+ except:
+ e = sys.exc_info()[1]
+ logger.warn("Could not call mach_o_change: %s. "
+ "Trying to call install_name_tool instead." % e)
+ try:
+ call_subprocess(
+ ["install_name_tool", "-change",
+ os.path.join(prefix, 'Python'),
+ '@executable_path/../.Python',
+ py_executable])
+ except:
+ logger.fatal("Could not call install_name_tool -- you must "
+ "have Apple's development tools installed")
+ raise
+
+ if not is_win:
+ # Ensure that 'python', 'pythonX' and 'pythonX.Y' all exist
+ py_exe_version_major = 'python%s' % sys.version_info[0]
+ py_exe_version_major_minor = 'python%s.%s' % (
+ sys.version_info[0], sys.version_info[1])
+ py_exe_no_version = 'python'
+ required_symlinks = [ py_exe_no_version, py_exe_version_major,
+ py_exe_version_major_minor ]
+
+ py_executable_base = os.path.basename(py_executable)
+
+ if py_executable_base in required_symlinks:
+ # Don't try to symlink to yourself.
+ required_symlinks.remove(py_executable_base)
+
+ for pth in required_symlinks:
+ full_pth = join(bin_dir, pth)
+ if os.path.exists(full_pth):
+ os.unlink(full_pth)
+ if symlink:
+ os.symlink(py_executable_base, full_pth)
+ else:
+ # copy from the full path; py_executable_base is only a basename
+ shutil.copyfile(py_executable, full_pth)
+
+ if is_win and ' ' in py_executable:
+ # There's a bug with subprocess on Windows when using a first
+ # argument that has a space in it. Instead we have to quote
+ # the value:
+ py_executable = '"%s"' % py_executable
+ # NOTE: keep this check as one line, cmd.exe doesn't cope with line breaks
+ cmd = [py_executable, '-c', 'import sys;out=sys.stdout;'
+ 'getattr(out, "buffer", out).write(sys.prefix.encode("utf-8"))']
+ logger.info('Testing executable with %s %s "%s"' % tuple(cmd))
+ try:
+ proc = subprocess.Popen(cmd,
+ stdout=subprocess.PIPE)
+ proc_stdout, proc_stderr = proc.communicate()
+ except OSError:
+ e = sys.exc_info()[1]
+ if e.errno == errno.EACCES:
+ logger.fatal('ERROR: The executable %s could not be run: %s' % (py_executable, e))
+ sys.exit(100)
+ else:
+ raise e
+
+ proc_stdout = proc_stdout.strip().decode("utf-8")
+ proc_stdout = os.path.normcase(os.path.abspath(proc_stdout))
+ norm_home_dir = os.path.normcase(os.path.abspath(home_dir))
+ if hasattr(norm_home_dir, 'decode'):
+ norm_home_dir = norm_home_dir.decode(sys.getfilesystemencoding())
+ if proc_stdout != norm_home_dir:
+ logger.fatal(
+ 'ERROR: The executable %s is not functioning' % py_executable)
+ logger.fatal(
+ 'ERROR: It thinks sys.prefix is %r (should be %r)'
+ % (proc_stdout, norm_home_dir))
+ logger.fatal(
+ 'ERROR: virtualenv is not compatible with this system or executable')
+ if is_win:
+ logger.fatal(
+ 'Note: some Windows users have reported this error when they '
+ 'installed Python for "Only this user" or have multiple '
+ 'versions of Python installed. Copying the appropriate '
+ 'PythonXX.dll to the virtualenv Scripts/ directory may fix '
+ 'this problem.')
+ sys.exit(100)
+ else:
+ logger.info('Got sys.prefix result: %r' % proc_stdout)
+
+ pydistutils = os.path.expanduser('~/.pydistutils.cfg')
+ if os.path.exists(pydistutils):
+ logger.notify('Please make sure you remove any previous custom paths from '
+ 'your %s file.' % pydistutils)
+ ## FIXME: really this should be calculated earlier
+
+ fix_local_scheme(home_dir, symlink)
+
+ if site_packages:
+ if os.path.exists(site_packages_filename):
+ logger.info('Deleting %s' % site_packages_filename)
+ os.unlink(site_packages_filename)
+
+ return py_executable
+
+
+def install_activate(home_dir, bin_dir, prompt=None):
+ home_dir = os.path.abspath(home_dir)
+ if is_win or (is_jython and os._name == 'nt'):
+ files = {
+ 'activate.bat': ACTIVATE_BAT,
+ 'deactivate.bat': DEACTIVATE_BAT,
+ 'activate.ps1': ACTIVATE_PS,
+ }
+
+ # MSYS needs paths of the form /c/path/to/file
+ drive, tail = os.path.splitdrive(home_dir.replace(os.sep, '/'))
+ home_dir_msys = (drive and "/%s%s" or "%s%s") % (drive[:1], tail)
+
+ # Run-time conditional enables (basic) Cygwin compatibility
+ home_dir_sh = ("""$(if [ "$OSTYPE" "==" "cygwin" ]; then cygpath -u '%s'; else echo '%s'; fi;)""" %
+ (home_dir, home_dir_msys))
+ files['activate'] = ACTIVATE_SH.replace('__VIRTUAL_ENV__', home_dir_sh)
+
+ else:
+ files = {'activate': ACTIVATE_SH}
+
+ # supplying activate.fish in addition to, not instead of, the
+ # bash script support.
+ files['activate.fish'] = ACTIVATE_FISH
+
+ # same for csh/tcsh support...
+ files['activate.csh'] = ACTIVATE_CSH
+
+ files['activate_this.py'] = ACTIVATE_THIS
+ if hasattr(home_dir, 'decode'):
+ home_dir = home_dir.decode(sys.getfilesystemencoding())
+ vname = os.path.basename(home_dir)
+ for name, content in files.items():
+ content = content.replace('__VIRTUAL_PROMPT__', prompt or '')
+ content = content.replace('__VIRTUAL_WINPROMPT__', prompt or '(%s)' % vname)
+ content = content.replace('__VIRTUAL_ENV__', home_dir)
+ content = content.replace('__VIRTUAL_NAME__', vname)
+ content = content.replace('__BIN_NAME__', os.path.basename(bin_dir))
+ writefile(os.path.join(bin_dir, name), content)
+
+def install_distutils(home_dir):
+ distutils_path = change_prefix(distutils.__path__[0], home_dir)
+ mkdir(distutils_path)
+ ## FIXME: maybe this prefix setting should only be put in place if
+ ## there's a local distutils.cfg with a prefix setting?
+ home_dir = os.path.abspath(home_dir)
+ ## FIXME: this is breaking things, removing for now:
+ #distutils_cfg = DISTUTILS_CFG + "\n[install]\nprefix=%s\n" % home_dir
+ writefile(os.path.join(distutils_path, '__init__.py'), DISTUTILS_INIT)
+ writefile(os.path.join(distutils_path, 'distutils.cfg'), DISTUTILS_CFG, overwrite=False)
+
+def fix_local_scheme(home_dir, symlink=True):
+ """
+ Platforms that use the "posix_local" install scheme (like Ubuntu with
+ Python 2.7) need to be given an additional "local" location, sigh.
+ """
+ try:
+ import sysconfig
+ except ImportError:
+ pass
+ else:
+ if sysconfig._get_default_scheme() == 'posix_local':
+ local_path = os.path.join(home_dir, 'local')
+ if not os.path.exists(local_path):
+ os.mkdir(local_path)
+ for subdir_name in os.listdir(home_dir):
+ if subdir_name == 'local':
+ continue
+ cp_or_ln = (os.symlink if symlink else copyfile)
+ cp_or_ln(os.path.abspath(os.path.join(home_dir, subdir_name)), \
+ os.path.join(local_path, subdir_name))
+
+def fix_lib64(lib_dir, symlink=True):
+ """
+ Some platforms (particularly Gentoo on x64) put things in lib64/pythonX.Y
+ instead of lib/pythonX.Y. If this is such a platform we'll just create a
+ symlink so lib64 points to lib
+ """
+ if [p for p in distutils.sysconfig.get_config_vars().values()
+ if isinstance(p, basestring) and 'lib64' in p]:
+ # PyPy's library path scheme is not affected by this.
+ # Return early or we will die on the following assert.
+ if is_pypy:
+ logger.debug('PyPy detected, skipping lib64 symlinking')
+ return
+
+ logger.debug('This system uses lib64; symlinking lib64 to lib')
+
+ assert os.path.basename(lib_dir) == 'python%s' % sys.version[:3], (
+ "Unexpected python lib dir: %r" % lib_dir)
+ lib_parent = os.path.dirname(lib_dir)
+ top_level = os.path.dirname(lib_parent)
+ lib_dir = os.path.join(top_level, 'lib')
+ lib64_link = os.path.join(top_level, 'lib64')
+ assert os.path.basename(lib_parent) == 'lib', (
+ "Unexpected parent dir: %r" % lib_parent)
+ if os.path.lexists(lib64_link):
+ return
+ cp_or_ln = (os.symlink if symlink else copyfile)
+ cp_or_ln('lib', lib64_link)
+
+def resolve_interpreter(exe):
+ """
+ If the executable given isn't an absolute path, search $PATH for the interpreter
+ """
+ # If the "executable" is a version number, get the installed executable for
+ # that version
+ python_versions = get_installed_pythons()
+ if exe in python_versions:
+ exe = python_versions[exe]
+
+ if os.path.abspath(exe) != exe:
+ paths = os.environ.get('PATH', '').split(os.pathsep)
+ for path in paths:
+ if os.path.exists(os.path.join(path, exe)):
+ exe = os.path.join(path, exe)
+ break
+ if not os.path.exists(exe):
+ logger.fatal('The executable %s (from --python=%s) does not exist' % (exe, exe))
+ raise SystemExit(3)
+ if not is_executable(exe):
+ logger.fatal('The executable %s (from --python=%s) is not executable' % (exe, exe))
+ raise SystemExit(3)
+ return exe
+
+def is_executable(exe):
+ """Checks a file is executable"""
+ return os.access(exe, os.X_OK)
+
+############################################################
+## Relocating the environment:
+
+def make_environment_relocatable(home_dir):
+ """
+ Makes the already-existing environment use relative paths, and takes out
+ the #!-based environment selection in scripts.
+ """
+ home_dir, lib_dir, inc_dir, bin_dir = path_locations(home_dir)
+ activate_this = os.path.join(bin_dir, 'activate_this.py')
+ if not os.path.exists(activate_this):
+ logger.fatal(
+ 'The environment doesn\'t have a file %s -- please re-run virtualenv '
+ 'on this environment to update it' % activate_this)
+ fixup_scripts(home_dir, bin_dir)
+ fixup_pth_and_egg_link(home_dir)
+ ## FIXME: need to fix up distutils.cfg
+
+OK_ABS_SCRIPTS = ['python', 'python%s' % sys.version[:3],
+ 'activate', 'activate.bat', 'activate_this.py']
+
+def fixup_scripts(home_dir, bin_dir):
+ if is_win:
+ new_shebang_args = (
+ '%s /c' % os.path.normcase(os.environ.get('COMSPEC', 'cmd.exe')),
+ '', '.exe')
+ else:
+ new_shebang_args = ('/usr/bin/env', sys.version[:3], '')
+
+ # This is what we expect at the top of scripts:
+ shebang = '#!%s' % os.path.normcase(os.path.join(
+ os.path.abspath(bin_dir), 'python%s' % new_shebang_args[2]))
+ # This is what we'll put:
+ new_shebang = '#!%s python%s%s' % new_shebang_args
+
+ for filename in os.listdir(bin_dir):
+ filename = os.path.join(bin_dir, filename)
+ if not os.path.isfile(filename):
+ # ignore subdirs, e.g. .svn ones.
+ continue
+ f = open(filename, 'rb')
+ try:
+ try:
+ lines = f.read().decode('utf-8').splitlines()
+ except UnicodeDecodeError:
+ # This is probably a binary program instead
+ # of a script, so just ignore it.
+ continue
+ finally:
+ f.close()
+ if not lines:
+ logger.warn('Script %s is an empty file' % filename)
+ continue
+
+ old_shebang = lines[0].strip()
+ old_shebang = old_shebang[0:2] + os.path.normcase(old_shebang[2:])
+
+ if not old_shebang.startswith(shebang):
+ if os.path.basename(filename) in OK_ABS_SCRIPTS:
+ logger.debug('Cannot make script %s relative' % filename)
+ elif lines[0].strip() == new_shebang:
+ logger.info('Script %s has already been made relative' % filename)
+ else:
+ logger.warn('Script %s cannot be made relative (it\'s not a normal script that starts with %s)'
+ % (filename, shebang))
+ continue
+ logger.notify('Making script %s relative' % filename)
+ script = relative_script([new_shebang] + lines[1:])
+ f = open(filename, 'wb')
+ f.write('\n'.join(script).encode('utf-8'))
+ f.close()
+
+def relative_script(lines):
+ "Return a script that'll work in a relocatable environment."
+ activate = "import os; activate_this=os.path.join(os.path.dirname(os.path.realpath(__file__)), 'activate_this.py'); exec(compile(open(activate_this).read(), activate_this, 'exec'), dict(__file__=activate_this)); del os, activate_this"
+ # Find the last future statement in the script. If we insert the activation
+ # line before a future statement, Python will raise a SyntaxError.
+ activate_at = None
+ for idx, line in reversed(list(enumerate(lines))):
+ if line.split()[:3] == ['from', '__future__', 'import']:
+ activate_at = idx + 1
+ break
+ if activate_at is None:
+ # Activate after the shebang.
+ activate_at = 1
+ return lines[:activate_at] + ['', activate, ''] + lines[activate_at:]
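The insertion rule above can be sketched standalone (the real function inserts the full `activate_this` bootstrap line; here a placeholder comment marker stands in for it, and `insert_after_futures` is a hypothetical name):

```python
def insert_after_futures(lines, marker='# <activate>'):
    # Scan from the end for the last `from __future__ import ...` line and
    # insert after it; with no future imports, insert right after the shebang.
    at = 1
    for idx, line in reversed(list(enumerate(lines))):
        if line.split()[:3] == ['from', '__future__', 'import']:
            at = idx + 1
            break
    return lines[:at] + ['', marker, ''] + lines[at:]

script = ['#!python', 'from __future__ import division', 'print(1/2)']
print(insert_after_futures(script))
```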
+
+def fixup_pth_and_egg_link(home_dir, sys_path=None):
+ """Makes .pth and .egg-link files use relative paths"""
+ home_dir = os.path.normcase(os.path.abspath(home_dir))
+ if sys_path is None:
+ sys_path = sys.path
+ for path in sys_path:
+ if not path:
+ path = '.'
+ if not os.path.isdir(path):
+ continue
+ path = os.path.normcase(os.path.abspath(path))
+ if not path.startswith(home_dir):
+ logger.debug('Skipping system (non-environment) directory %s' % path)
+ continue
+ for filename in os.listdir(path):
+ filename = os.path.join(path, filename)
+ if filename.endswith('.pth'):
+ if not os.access(filename, os.W_OK):
+ logger.warn('Cannot write .pth file %s, skipping' % filename)
+ else:
+ fixup_pth_file(filename)
+ if filename.endswith('.egg-link'):
+ if not os.access(filename, os.W_OK):
+ logger.warn('Cannot write .egg-link file %s, skipping' % filename)
+ else:
+ fixup_egg_link(filename)
+
+def fixup_pth_file(filename):
+ lines = []
+ prev_lines = []
+ f = open(filename)
+ prev_lines = f.readlines()
+ f.close()
+ for line in prev_lines:
+ line = line.strip()
+ if (not line or line.startswith('#') or line.startswith('import ')
+ or os.path.abspath(line) != line):
+ lines.append(line)
+ else:
+ new_value = make_relative_path(filename, line)
+ if line != new_value:
+ logger.debug('Rewriting path %s as %s (in %s)' % (line, new_value, filename))
+ lines.append(new_value)
+ if lines == prev_lines:
+ logger.info('No changes to .pth file %s' % filename)
+ return
+ logger.notify('Making paths in .pth file %s relative' % filename)
+ f = open(filename, 'w')
+ f.write('\n'.join(lines) + '\n')
+ f.close()
+
+def fixup_egg_link(filename):
+ f = open(filename)
+ link = f.readline().strip()
+ f.close()
+ if os.path.abspath(link) != link:
+ logger.debug('Link in %s already relative' % filename)
+ return
+ new_link = make_relative_path(filename, link)
+ logger.notify('Rewriting link %s in %s as %s' % (link, filename, new_link))
+ f = open(filename, 'w')
+ f.write(new_link)
+ f.close()
+
+def make_relative_path(source, dest, dest_is_directory=True):
+ """
+ Make a filename relative, where the filename is dest, and it is
+ being referred to from the filename source.
+
+ >>> make_relative_path('/usr/share/something/a-file.pth',
+ ... '/usr/share/another-place/src/Directory')
+ '../another-place/src/Directory'
+ >>> make_relative_path('/usr/share/something/a-file.pth',
+ ... '/home/user/src/Directory')
+ '../../../home/user/src/Directory'
+ >>> make_relative_path('/usr/share/a-file.pth', '/usr/share/')
+ './'
+ """
+ source = os.path.dirname(source)
+ if not dest_is_directory:
+ dest_filename = os.path.basename(dest)
+ dest = os.path.dirname(dest)
+ dest = os.path.normpath(os.path.abspath(dest))
+ source = os.path.normpath(os.path.abspath(source))
+ dest_parts = dest.strip(os.path.sep).split(os.path.sep)
+ source_parts = source.strip(os.path.sep).split(os.path.sep)
+ while dest_parts and source_parts and dest_parts[0] == source_parts[0]:
+ dest_parts.pop(0)
+ source_parts.pop(0)
+ full_parts = ['..']*len(source_parts) + dest_parts
+ if not dest_is_directory:
+ full_parts.append(dest_filename)
+ if not full_parts:
+ # Special case for the current directory (otherwise it'd be '')
+ return './'
+ return os.path.sep.join(full_parts)
+
+
+
+############################################################
+## Bootstrap script creation:
+
+def create_bootstrap_script(extra_text, python_version=''):
+ """
+ Creates a bootstrap script, which is like this script but with
+ extend_parser, adjust_options, and after_install hooks.
+
+ This returns a string that (written to disk of course) can be used
+ as a bootstrap script with your own customizations. The script
+ will be the standard virtualenv.py script, with your extra text
+ added (your extra text should be Python code).
+
+ If you include these functions, they will be called:
+
+ ``extend_parser(optparse_parser)``:
+ You can add or remove options from the parser here.
+
+ ``adjust_options(options, args)``:
+ You can change options here, or change the args (if you accept
+ different kinds of arguments, be sure you modify ``args`` so it is
+ only ``[DEST_DIR]``).
+
+ ``after_install(options, home_dir)``:
+
+ After everything is installed, this function is called. This
+ is probably the function you are most likely to use. An
+ example would be::
+
+ def after_install(options, home_dir):
+ subprocess.call([join(home_dir, 'bin', 'easy_install'),
+ 'MyPackage'])
+ subprocess.call([join(home_dir, 'bin', 'my-package-script'),
+ 'setup', home_dir])
+
+ This example immediately installs a package, and runs a setup
+ script from that package.
+
+ If you provide something like ``python_version='2.5'`` then the
+ script will start with ``#!/usr/bin/env python2.5`` instead of
+ ``#!/usr/bin/env python``. You can use this when the script must
+ be run with a particular Python version.
+ """
+ filename = __file__
+ if filename.endswith('.pyc'):
+ filename = filename[:-1]
+ f = codecs.open(filename, 'r', encoding='utf-8')
+ content = f.read()
+ f.close()
+ py_exe = 'python%s' % python_version
+ content = (('#!/usr/bin/env %s\n' % py_exe)
+ + '## WARNING: This file is generated\n'
+ + content)
+ # '##EXT' 'END##' is written as two adjacent literals so this line does
+ # not itself match the ##EXTEND## marker being replaced.
+ return content.replace('##EXT' 'END##', extra_text)
+
+##EXTEND##
+
+def convert(s):
+ b = base64.b64decode(s.encode('ascii'))
+ return zlib.decompress(b).decode('utf-8')
+
+##file site.py
+SITE_PY = convert("""
+eJzFPf1z2zaWv/OvwMqToZTIdOJ0e3tOnRsncVrvuYm3SWdz63q0lARZrCmSJUjL2pu7v/3eBwAC
+JCXbm+6cphNLJPDw8PC+8PAeOhgMTopCZnOxyud1KoWScTlbiiKulkos8lJUy6Sc7xdxWW3g6ewm
+vpZKVLlQGxVhqygInn7lJ3gqPi8TZVCAb3Fd5au4SmZxmm5EsiryspJzMa/LJLsWSZZUSZwm/4AW
+eRaJp1+PQXCWCZh5mshS3MpSAVwl8oW42FTLPBPDusA5v4j+GL8cjYWalUlRQYNS4wwUWcZVkEk5
+BzShZa2AlEkl91UhZ8kimdmG67xO56JI45kUf/87T42ahmGg8pVcL2UpRQbIAEwJsArEA74mpZjl
+cxkJ8UbOYhyAnzfEChjaGNdMIRmzXKR5dg1zyuRMKhWXGzGc1hUBIpTFPAecEsCgStI0WOfljRrB
+ktJ6rOGRiJk9/Mkwe8A8cfwu5wCOH7Pg5yy5GzNs4B4EVy2ZbUq5SO5EjGDhp7yTs4l+NkwWYp4s
+FkCDrBphk4ARUCJNpgcFLcd3eoVeHxBWlitjGEMiytyYX1KPKDirRJwqYNu6QBopwvydnCZxBtTI
+bmE4gAgkDfrGmSeqsuPQ7EQOAEpcxwqkZKXEcBUnGTDrj/GM0P5rks3ztRoRBWC1lPi1VpU7/2EP
+AaC1Q4BxgItlVrPO0uRGppsRIPAZsC+lqtMKBWKelHJW5WUiFQEA1DZC3gHSYxGXUpOQOdPI7Zjo
+TzRJMlxYFDAUeHyJJFkk13VJEiYWCXAucMX7jz+Jd6dvzk4+aB4zwFhmr1eAM0ChhXZwggHEQa3K
+gzQHgY6Cc/wj4vkchewaxwe8mgYH9650MIS5F1G7j7PgQHa9uHoYmGMFyoTGCqjff0OXsVoCff7n
+nvUOgpNtVKGJ87f1MgeZzOKVFMuY+Qs5I/hOw3kdFdXyFXCDQjgVkErh4iCCCcIDkrg0G+aZFAWw
+WJpkchQAhabU1l9FYIUPebZPa93iBIBQBhm8dJ6NaMRMwkS7sF6hvjCNNzQz3SSw67zKS1IcwP/Z
+jHRRGmc3hKMihuJvU3mdZBkihLwQhHshDaxuEuDEeSTOqRXpBdNIhKy9uCWKRA28hEwHPCnv4lWR
+yjGLL+rW3WqEBpOVMGudMsdBy4rUK61aM9Ve3juMvrS4jtCslqUE4PXUE7pFno/FFHQ2YVPEKxav
+ap0T5wQ98kSdkCeoJfTF70DRE6XqlbQvkVdAsxBDBYs8TfM1kOwoCITYw0bGKPvMCW/hHfwLcPHf
+VFazZRA4I1nAGhQivw0UAgGTIDPN1RoJj9s0K7eVTJKxpsjLuSxpqIcR+4ARf2BjnGvwIa+0UePp
+4irnq6RClTTVJjNhi5eFFevHVzxvmAZYbkU0M00bOq1wemmxjKfSuCRTuUBJ0Iv0yi47jBn0jEm2
+uBIrtjLwDsgiE7Yg/YoFlc6ikuQEAAwWvjhLijqlRgoZTMQw0Kog+KsYTXqunSVgbzbLASokNt8z
+sD+A2z9AjNbLBOgzAwigYVBLwfJNk6pEB6HRR4Fv9E1/Hh849WyhbRMPuYiTVFv5OAvO6OFpWZL4
+zmSBvcaaGApmmFXo2l1nQEcU88FgEATGHdoo8zVXQVVujoAVhBlnMpnWCRq+yQRNvf6hAh5FOAN7
+3Ww7Cw80hOn0AajkdFmU+Qpf27l9AmUCY2GPYE9ckJaR7CB7nPgKyeeq9MI0RdvtsLNAPRRc/HT6
+/uzL6SdxLC4blTZu67MrGPM0i4GtySIAU7WGbXQZtETFl6DuE+/BvBNTgD2j3iS+Mq5q4F1A/XNZ
+02uYxsx7GZx+OHlzfjr5+dPpT5NPZ59PAUGwMzLYoymjeazBYVQRCAdw5VxF2r4GnR704M3JJ/sg
+mCRq8u03wG7wZHgtK2DicggzHotwFd8pYNBwTE1HiGOnAVjwcDQSr8Xh06cvDwlasSk2AAzMrtMU
+H060RZ8k2SIPR9T4V3bpj1lJaf/t8uibK3F8LMJf49s4DMCHapoyS/xI4vR5U0joWsGfYa5GQTCX
+CxC9G4kCOnxKfvGIO8CSQMtc2+lf8yQz75kr3SFIfwypB+AwmczSWClsPJmEQATq0POBDhE71yh1
+Q+hYbNyuI40KfkoJC5thlzH+04NiPKV+iAaj6HYxjUBcV7NYSW5F04d+kwnqrMlkqAcEYSaJAYeL
+1VAoTBPUWWUCfi1xHuqwqcpT/InwUQuQAOLWCrUkLpLeOkW3cVpLNXQmBUQcDltkREWbKOJHcFGG
+YImbpRuN2tQ0PAPNgHxpDlq0bFEOP3vg74C6Mps43Ojx3otphpj+mXcahAO4nCGqe6VaUFg7iovT
+C/Hy+eE+ujOw55xb6njN0UInWS3twwWslpEHRph7GXlx6bJAPYtPj3bDXEV2ZbqssNBLXMpVfivn
+gC0ysLPK4id6AztzmMcshlUEvU7+AKtQ4zfGuA/l2YO0oO8A1FsRFLP+Zun3OBggMwWKiDfWRGq9
+62dTWJT5bYLOxnSjX4KtBGWJFtM4NoGzcB6ToUkEDQFecIaUWssQ1GFZs8NKeCNItBfzRrFGBO4c
+NfUVfb3J8nU24Z3wMSrd4ciyLgqWZl5s0CzBnngPVgiQzGFj1xCNoYDLL1C29gF5mD5MFyhLewsA
+BIZe0XbNgWW2ejRF3jXisAhj9EqQ8JYS/YVbMwRttQwxHEj0NrIPjJZASDA5q+CsatBMhrJmmsHA
+Dkl8rjuPeAvqA2hRMQKzOdTQuJGh3+URKGdx7iolpx9a5C9fvjDbqCXFVxCxKU4aXYgFGcuo2IBh
+TUAnGI+MozXEBmtwbgFMrTRriv1PIi/YG4P1vNCyDX4A7O6qqjg6OFiv15GOLuTl9YFaHPzxT99+
++6fnrBPnc+IfmI4jLTrUFh3QO/Roo++MBXptVq7Fj0nmcyPBGkryysgVRfy+r5N5Lo72R1Z/Ihc3
+Zhr/Na4MKJCJGZSpDLQdNBg9UftPopdqIJ6QdbZthyP2S7RJtVbMt7rQo8rBEwC/ZZbXaKobTlDi
+GVg32KHP5bS+Du3gno00P2CqKKdDywP7L64QA58zDF8ZUzxBLUFsgRbfIf1PzDYxeUdaQyB50UR1
+ds+bfi1miDt/uLxbX9MRGjPDRCF3oET4TR4sgLZxV3Lwo11btHuOa2s+niEwlj4wzKsdyyEKDuGC
+azF2pc7havR4QZrWrJpBwbiqERQ0OIlTprYGRzYyRJDo3ZjNPi+sbgF0akUOTXzArAK0cMfpWLs2
+KzieEPLAsXhBTyS4yEedd895aes0pYBOi0c9qjBgb6HRTufAl0MDYCwG5c8Dbmm2KR9bi8Jr0AMs
+5xgQMtiiw0z4xvUBB3uDHnbqWP1tvZnGfSBwkYYci3oQdEL5mEcoFUhTMfR7bmNxS9zuYDstDjGV
+WSYSabVFuNrKo1eodhqmRZKh7nUWKZqlOXjFVisSIzXvfWeB9kH4uM+YaQnUZGjI4TQ6Jm/PE8BQ
+t8Pw2XWNgQY3DoMYrRJF1g3JtIR/wK2g+AYFo4CWBM2CeaiU+RP7HWTOzld/2cIeltDIEG7TbW5I
+x2JoOOb9nkAy6mgMSEEGJOwKI7mOrA5S4DBngTzhhtdyq3QTjEiBnDkWhNQM4E4vvQ0OPonwBIQk
+FCHfVUoW4pkYwPK1RfVhuvt35VIThBg6DchV0NGLYzey4UQ1jltRDp+h/fgGnZUUOXDwFFweN9Dv
+srlhWht0AWfdV9wWKdDIFIcZjFxUrwxh3GDyH46dFg2xzCCGobyBvCMdM9IosMutQcOCGzDemrfH
+0o/diAX2HYa5OpSrO9j/hWWiZrkKKWbSjl24H80VXdpYbM+T6QD+eAswGF15kGSq4xcYZfknBgk9
+6GEfdG+yGBaZx+U6yUJSYJp+x/7SdPCwpPSM3MEn2k4dwEQx4nnwvgQBoaPPAxAn1ASwK5eh0m5/
+F+zOKQ4sXO4+8Nzmy6OXV13ijrdFeOynf6lO76oyVrhaKS8aCwWuVteAo9KFycXZRh9e6sNt3CaU
+uYJdpPj46YtAQnBcdx1vHjf1huERm3vn5H0M6qDX7iVXa3bELoAIakVklIPw8Rz5cGQfO7kdE3sE
+kEcxzI5FMZA0n/wzcHYtFIyxP99kGEdrqwz8wOtvv5n0REZdJL/9ZnDPKC1i9In9sOUJ2pE5qWDX
+bEsZp+RqOH0oqJg1rGPbFCPW57T90zx21eNzarRs7Lu/BX4MFAypS/ARno8bsnWnih/fndoKT9up
+HcA6u1Xz2aNFgL19Pv0VdshKB9Vu4ySlcwWY/P4+Klezued4Rb/28CDtVDAOCfr2X+ryOXBDyNGE
+UXc62hk7MQHnnl2w+RSx6qKyp3MImiMwLy/APf7sQtUWzDDucz5eOOxRTd6M+5yJr1Gr+PldNJAF
+5tFg0Ef2rez4/zHL5/+aST5wKubk+ne0ho8E9HvNhI0HQ9PGw4fVv+yu3TXAHmCetridO9zC7tB8
+Vrkwzh2rJCWeou56KtaUrkCxVTwpAihz9vt64OAy6kPvt3VZ8tE1qcBClvt4HDsWmKllPL9eE7Mn
+Dj7ICjGxzWYUq3byevI+NRLq6LOdSdjsG/rlbJmbmJXMbpMS+oLCHYY/fPzxNOw3IRjHhU4PtyIP
+9xsQ7iOYNtTECR/Thyn0mC7/vFS1ty4+QU1GgIkIa7L12gc/EGziCP1rcE9EyDuw5WN23KHPlnJ2
+M5GUOoBsil2doPhbfI2Y2IwCP/9LxQtKYoOZzNIaacWON2YfLupsRucjlQT/SqcKY+oQJQRw+G+R
+xtdiSJ3nGHrS3EjRqdu41N5nUeaYnCrqZH5wncyF/K2OU9zWy8UCcMHDK/0q4uEpAiXecU4DJy0q
+OavLpNoACWKV67M/Sn9wGk43PNGhhyQf8zABMSHiSHzCaeN7JtzckMsEB/wTD5wk7ruxg5OsENFz
+eJ/lExx1Qjm+Y0aqey5Pj4P2CDkAGABQmP9gpCN3/htJr9wDRlpzl6ioJT1SupGGnJwxhDIcYaSD
+f9NPnxFd3tqC5fV2LK93Y3ndxvK6F8trH8vr3Vi6IoELa4NWRhL6AlftY43efBs35sTDnMazJbfD
+3E/M8QSIojAbbCNTnALtRbb4fI+AkNp2DpzpYZM/k3BSaZlzCFyDRO7HQyy9mTfJ605nysbRnXkq
+xp3dlkPk9z2IIkoVm1J3lrd5XMWRJxfXaT4FsbXojhsAY9FOJ+JYaXY7mXJ0t2WpBhf/9fmHjx+w
+OYIamPQG6oaLiIYFpzJ8GpfXqitNzeavAHakln4iDnXTAPceGFnjUfb4n3eU4YGMI9aUoZCLAjwA
+yuqyzdzcpzBsPddJUvo5MzkfNh2LQVYNmkltIdLJxcW7k88nAwr5Df534AqMoa0vHS4+poVt0PXf
+3OaW4tgHhFrHthrj587Jo3XDEffbWAO248O3Hhw+xGD3hgn8Wf5LKQVLAoSKdPD3MYR68B7oq7YJ
+HfoYRuwk/7kna+ys2HeO7DkuiiP6fccO7QH8w07cY0yAANqFGpqdQbOZail9a153UNQB+kBf76u3
+YO2tV3sn41PUTqLHAXQoa5ttd/+8cxo2ekpWb06/P/twfvbm4uTzD44LiK7cx08Hh+L0xy+C8kPQ
+gLFPFGNqRIWZSGBY3EInMc/hvxojP/O64iAx9Hp3fq5PalZY6oK5z2hzInjOaUwWGgfNOAptH+r8
+I8Qo1Rskp6aI0nWo5gj3SyuuZ1G5zo+mUqUpOqu13nrpWjFTU0bn2hFIHzR2ScEgOMUMXlEWe2V2
+hSWfAOo6qx6ktI22iSEpBQU76QLO+Zc5XfECpdQZnjSdtaK/DF1cw6tIFWkCO7lXoZUl3Q3TYxrG
+0Q/tATfj1acBne4wsm7Is96KBVqtVyHPTfcfNYz2Ww0YNgz2DuadSUoPoQxsTG4TITbik5xQ3sFX
+u/R6DRQsGB70VbiIhukSmH0Mm2uxTGADATy5BOuL+wSA0FoJ/0DgyIkOyByzM8K3q/n+X0JNEL/1
+L7/0NK/KdP9vooBdkOBUorCHmG7jd7DxiWQkTj++H4WMHKXmir/UWB4ADgkFQB1pp/wlPkGfDJVM
+Fzq/xNcH+EL7CfS61b2URam797vGIUrAEzUkr+GJMvQLMd3Lwh7jVEYt0Fj5YDHDCkI3DcF89sSn
+pUxTne9+9u78FHxHLMZACeJzt1MYjuMleISuk++4wrEFCg/Y4XWJbFyiC0tJFvPIa9YbtEaRo95e
+XoZdJwoMd3t1osBlnCgX7SFOm2GZcoIIWRnWwiwrs3arDVLYbUMUR5lhlphclJTA6vME8DI9jXlL
+BHslLPUwEXg+RU6yymQspskM9CioXFCoYxASJC7WMxLn5RnHwPNSmTIoeFhsyuR6WeHpBnSOqAQD
+m/948uX87AOVJRy+bLzuHuYc005gzEkkx5giiNEO+OKm/SFXTSZ9PKtfIQzUPvCn/YqzU455gE4/
+Dizin/YrrkM7dnaCPANQUHXRFg/cADjd+uSmkQXG1e6D8eOmADaY+WAoFollLzrRw51flxNty5Yp
+obiPefmIA5xFYVPSdGc3Ja390XNcFHjONR/2N4K3fbJlPlPoetN5sy35zf10pBBLYgGjbmt/DJMd
+1mmqp+Mw2zZuoW2ttrG/ZE6s1Gk3y1CUgYhDt/PIZbJ+JaybMwd6adQdYOI7ja6RxF5VPvglG2gP
+w8PEEruzTzEdqYyFjABGMqSu/anBh0KLAAqEsn+HjuSOR08PvTk61uD+OWrdBbbxB1CEOheXajzy
+EjgRvvzGjiO/IrRQjx6J0PFUMpnlNk8MP+slepUv/Dn2ygAFMVHsyji7lkOGNTYwn/nE3hKCJW3r
+kfoyueozLOIMnNO7LRzelYv+gxODWosROu1u5KatjnzyYIPeUpCdBPPBl/EadH9RV0NeyS3n0L21
+dNuh3g8Rsw+hqT59H4YYjvkt3LI+DeBeamhY6OH9tuUUltfGOLLWPraqmkL7QnuwsxK2ZpWiYxmn
+ONH4otYLaAzucWPyB/apThSyv3vqxJyYkAXKg7sgvbkNdINWOGHA5UpcOZpQOnxTTaPfzeWtTMFo
+gJEdYrXDr7baYRTZcEpvHthXY3exudj040ZvGsyOTDkGemaqgPWLMlkdIDq9EZ9dmDXI4FL/orck
+cXZDXvLbv56NxdsPP8G/b+RHMKVY/DgWfwM0xNu8hP0lV+/StQpYyVHxxjGvFVZIEjQ6quAbKNBt
+u/DojMciusTEry2xmlJgVm254mtPAEWeIFW0N36CKZyA36ayq+WNGk+xb1EG+iXSYHuxCxaIHOiW
+0bJapWgvnChJs5qXg/Ozt6cfPp1G1R1yuPk5cKIofkIWTkefEZd4HjYW9smsxidXjuP8g0yLHr9Z
+bzpN4QxuOkUI+5LCbjT5So3Ybi7iEiMHotjM81mELYHluVavWoMjPXL2l/caes/KIqzhSJ+iNd48
+PgZqiF/aimgADamPnhP1JITiKRaN8eNo0G+Kx4JC2/Dn6c167kbGdfUPTbCNaTProd/d6sIl01nD
+s5xEeB3bZTAFoWkSq9V05hYKfsyEvhEFtBydc8hFXKeVkBlILm3y6WoK0PRubR9LCLMKmzMqeKMw
+TbqON8pJQoqVGOCoA6quxwMZihjCHvzH+IbtARYdipproQE6IUr7p9zpqurZkiWYt0REvZ7Eg3WS
+vXTzeTSFeVDeIc8aRxbmiW4jY3QtKz1/fjAcXb5oMh0oKj3zKntnBVg9l032QHUWT58+HYj/uN/7
+YVSiNM9vwC0D2L1eyzm93mK59eTsanU9e/MmAn6cLeUlPLii6Ll9XmcUmtzRlRZE2r8GRohrE1pm
+NO1bdpmDdiUfNHMLPrDSluPnLKF7jzC0JFHZ6uujMOxkpIlYEhRDGKtZkoQcpoD12OQ1FuVhmFHz
+i7wDjk8QzBjf4gkZb7WX6GFSAq3lHovOsRgQ4AHllvFoVNVMZWmA5+Rio9GcnGVJ1dSTPHcPT/Vd
+AJW9zkjzlYjXKBlmHi1iOPWdHqs2Hna+k0W9HUs+u3QDjq1Z8uv7cAfWBknLFwuDKTw0izTLZTkz
+5hRXLJkllQPGtEM43JlucSLrEwU9KA1AvZNVmFuJtm//YNfFxfQjnSPvm5F0+lBlb8bi4FCctRIM
+o6gZn8JQlpCWb82XEYzygcLa2hPwxhJ/0EFVLCbwLvBw6xrrTF/MwfkbzW0dAIcug7IK0rKjpyOc
+G8gsfGbaLddp4Ie26ITbbVJWdZxO9P0PE3TYJvZgXeNp6+F2VnpabwWc/Bw84H2dug+Og8myQXpi
+6q0pzTgWCx2iiNwSM78aq8jRyztkXwl8CqTMfGIKo00Q6dKyq6041TmbjopHUM9MFdMWz9yUz3Qq
+T1zMx5TnZOoetnjRfgop3WEhXovhy7E4bG2BZsUGr3QCZJ/MQ98Vo24wFScqYObYviBDvD4Wwxdj
+8ccd0KMtAxwduiO0N7QtCFuBvLx6NBnTZEpkC/ty2e/vq5MZQdMzjqOrNvm7ZPqOqPTvLSpxqaDO
+WH7Rzlhujb11A9v5+EiGK1Aci0TO958oJKFGutHN2xmc8MNK+j2brKWLyJvSGqqgm8JmZN3oQUcj
+GrfZDmKq07X64kJe1DVsOO3lAyZfppWzaK+bw3xGjV6LqABg0nemht/wkhd4r0nh+mdbz1p1NYAF
+2xNK0CWffHLWNGwE9V5H8FEa4B5GESGeqjaKwpWsR4hISBfiEBM9a51mOxz/uzMP1xpsOxPtYPnt
+N7vwdAWzt7qjZ0F3l1x4ImvrLJrlNp/+CJ3HKH1dv0pgHCiN6ICzau6sJDfzCNOY+TKa3KYzr/BW
+SDqiRpOYStdt4q00X/+FfgzFDiirDNYCPKl6gSfKt3TJ5Ymi7De8q+abwxdjUyLMgPQEXkYvn+m7
+IKmbuQXB97HHeu8GL3W/w+jfHGBJ5fe2rzq7GZrWcetSKH+wkMJoo2hi6dAYpvsLQpo1iwVentgQ
+k31rexPIe/B2puDnmFtQc3DYYEMa9aHraoxGerepti0CfL/J2CY5D+raKFJEepewbVOeuxTno0VB
+9+q3IBhCQM5fxvwGXcG6OLIhNmNT8Ah06KZ14qe66S1AY3uCxra6CXdNn/vvmrtuEdiZm6yGztz9
+QlOXBrrvdivaRwMOb2hCPKhWotH4/cbEtQNjnUzTH6rXHyS/2wlnusWs3AfGpO5g4J/YU2NvzP4q
+nrnfMTNsn29mduuKe52N1rQ7NqPN8Q/xFDgLBp/bqwYotWmuOZD3S3TV3oSTZSfy+lpNYrzmcUKb
+bErs6uyezLbtPd3SJ2O1MbstvL0IQBhu0im4bpY9MAboSr5umvOinGtqBA1N2cNOOrJK5mwS9NYO
+wEUcMaX+JiLP+cSDVGKgW9VlUcJueKAvJeaEnb4c5waoCeCtYnVjUDc9xvqOWlKslCVmapE5TtvK
+9gEisBHvmIbJxL4DXnne3LeQjC0zyKxeyTKumruG/NSABDZdzQhUfY6L64TnGqlscYmLWGJ5w0EK
+A2T2+zPYWHqb6h0XLIystns4O1EPHfJ9zN0NjjEyXJzc2XsG3fut5nTHtesd2mYN19m7lWAZzKV5
+pCN1rIzf6ou8+LJZjuSjf+nwD8i7W4Lpp6NbdcberUXDeeYqhO7NTXh1ABnnvgsZOxzQvXqxtQG2
+4/v6wjJKx8Pc0thSUfvkvQqnGW3URJAwc/SeCJJfHfDICJIH/4ERJH19JhgajY/WA731AveEmlg9
+uHdRNowAfSZAJDzJbt1kaEzl0M2+L3KV3A3szdKsK52SPmMekCO7l5QRCL5zUrmpyt6dcLsiSL50
+0ePvzz++OTknWkwuTt7+58n3lJ2FxyUtW/XgEFuW7zO19708cDfcpjNq+gZvsO25KpaLmTSEzvtO
+MkIPhP7Ctb4FbSsy9/W2Dp0CoG4nQHz3tFtQt6nsXsgdv0wXm7h5NK2E7UA/5exa88tJUTCPzEkd
+i0NzEmfeN4cnWkY7seVtC+fkuXbVifZX9XWgW+LeI5ttTSuAZybIX/bIxJTO2MA8Oyjt/98HpYhj
+2aG5SgekcCadKx3pNkcGVfn/Y5ESlF2Mezt2FMf2km5qx8dDyt4+j2e/MxkZgnh1f4Pu/Fxhn8t0
+CxWCgBWevrCQETH6Tx+o2vSDJ0pc7lOF8T4qmyv7C9dMO7d/TTDJoLIXfynOVOJjVmi8qFM3ccD2
+6XQgp49Oo/KFU9ICmu8A6NyIpwL2Rn+JFeJ0I0LYOGqXDLNkiY761j4HebSbDvaGVs/F/rb6U7f+
+UogX2xvOWyWeusch91D39FC1qfJzLDCma24rLBWvCTIfZwq66ctzPvAMXW/74evt5Ysje7iA/I6v
+HUVCaWUDx7BfOmmZO2+XdLoTs5RjytvDvZoTEtYtrhyo7BNs29t0alO27H9MngNDGnjv+0Nmpod3
+B/+gjallvSOYkhg+USOallPNo3G3T0bd6TZqqwuEK5MeAKSjAgEWgunoRidTdMPp3sPnejc4rele
+XveEKXSkgrLGfI7gHsb3a/Brd6eK4gd1ZxRNf27Q5kC95CDc7Dtwq5EXCtluEtpTb/hgiwvAxdn9
+/V88oH83n9F2P9zlV9tWL3sLAtmXxRRYzAxqkcg8jsDIgN4ckrbGugkj6HgfTUNHl6GauSFfoONH
+abV46zZtMMiZnWgPwBqF4P8ACHXrHw==
+""")
+
+##file activate.sh
+ACTIVATE_SH = convert("""
+eJytVVFvokAQfudXTLEPtTlLeo9tvMSmJpq02hSvl7u2wRUG2QR2DSxSe7n/frOACEVNLlceRHa+
+nfl25pvZDswCnoDPQ4QoTRQsENIEPci4CsBMZBq7CAsuLOYqvmYKTTj3YxnBgiXBudGBjUzBZUJI
+BXEqgCvweIyuCjeG4eF2F5x14bcB9KQiQQWrjSddI1/oQIx6SYYeoFjzWIoIhYI1izlbhJjkKO7D
+M/QEmKfO9O7WeRo/zr4P7pyHwWxkwitcgwpQ5Ej96OX+PmiFwLeVjFUOrNYKaq1Nud3nR2n8nI2m
+k9H0friPTGVsUdptaxGrTEfpNVFEskxpXtUkkCkl1UNF9cgLBkx48J4EXyALuBtAwNYIjF5kcmUU
+abMKmMq1ULoiRbgsDEkTSsKSGFCJ6Z8vY/2xYiSacmtyAfCDdCNTVZoVF8vSTQOoEwSnOrngBkws
+MYGMBMg8/bMBLSYKS7pYEXP0PqT+ZmBT0Xuy+Pplj5yn4aM9nk72JD8/Wi+Gr98sD9eWSMOwkapD
+BbUv91XSvmyVkICt2tmXR4tWmrcUCsjWOpw87YidEC8i0gdTSOFhouJUNxR+4NYBG0MftoCTD9F7
+2rTtxG3oPwY1b2HncYwhrlmj6Wq924xtGDWqfdNxap+OYxplEurnMVo9RWks+rH8qKEtx7kZT5zJ
+4H7oOFclrN6uFe+d+nW2aIUsSgs/42EIPuOhXq+jEo3S6tX6w2ilNkDnIpHCWdEQhFgwj9pkk7FN
+l/y5eQvRSIQ5+TrL05lewxWpt/Lbhes5cJF3mLET1MGhcKCF+40tNWnUulxrpojwDo2sObdje3Bz
+N3QeHqf3D7OjEXMVV8LN3ZlvuzoWHqiUcNKHtwNd0IbvPGKYYM31nPKCgkUILw3KL+Y8l7aO1ArS
+Ad37nIU0fCj5NE5gQCuC5sOSu+UdI2NeXg/lFkQIlFpdWVaWZRfvqGiirC9o6liJ9FXGYrSY9mI1
+D/Ncozgn13vJvsznr7DnkJWXsyMH7e42ljdJ+aqNDF1bFnKWFLdj31xtaJYK6EXFgqmV/ymD/ROG
++n8O9H8f5vsGOWXsL1+1k3g=
+""")
+
+##file activate.fish
+ACTIVATE_FISH = convert("""
+eJyVVWFv2jAQ/c6vuBoqQVWC9nVSNVGVCaS2VC2rNLWVZZILWAs2s52wVvvxsyEJDrjbmgpK7PP5
+3bt3d22YLbmGlGcIq1wbmCPkGhPYcLMEEsGciwGLDS+YwSjlekngLFVyBe73GXSXxqw/DwbuTS8x
+yyKpFr1WG15lDjETQhpQuQBuIOEKY5O9tlppLqxHKSDByjVAPwEy+mXtCq5MzjIUBTCRgEKTKwFG
+gpBqxTLYXgN2myspVigMaYF92tZSowGZJf4mFExxNs9Qb614CgZtmH0BpEOn11f0cXI/+za8pnfD
+2ZjA1sg9zlV/8QvcMhxbNu0QwgYokn/d+n02nt6Opzcjcnx1vXcIoN74O4ymWQXmHURfJw9jenc/
+vbmb0enj6P5+cuVhqlKm3S0u2XRtRbA2QQAhV7VhBF0rsgUX9Ur1rBUXJgVSy8O751k8mzY5OrKH
+RW3eaQhYGTr8hrXO59ALhxQ83mCsDLAid3T72CCSdJhaFE+fXgicXAARUiR2WeVO37gH3oYHzFKo
+9k7CaPZ1UeNwH1tWuXA4uFKYYcEa8vaKqXl7q1UpygMPhFLvlVKyNzsSM3S2km7UBOl4xweUXk5u
+6e3wZmQ9leY1XE/Ili670tr9g/5POBBpGIJXCCF79L1siarl/dbESa8mD8PL61GpzqpzuMS7tqeB
+1YkALrRBloBMbR9yLcVx7frQAgUqR7NZIuzkEu110gbNit1enNs82Rx5utq7Z3prU78HFRgulqNC
+OTwbqJa9vkJFclQgZSjbKeBgSsUtCtt9D8OwAbIVJuewQdfvQRaoFE9wd1TmCuRG7OgJ1bVXGHc7
+z5WDL/WW36v2oi37CyVBak61+yPBA9C1qqGxzKQqZ0oPuocU9hpud0PIp8sDHkXR1HKkNlzjuUWA
+a0enFUyzOWZA4yXGP+ZMI3Tdt2OuqU/SO4q64526cPE0A7ZyW2PMbWZiZ5HamIZ2RcCKLXhcDl2b
+vXL+eccQoRzem80mekPDEiyiWK4GWqZmwxQOmPM0eIfgp1P9cqrBsewR2p/DPMtt+pfcYM+Ls2uh
+hALufTAdmGl8B1H3VPd2af8fQAc4PgqjlIBL9cGQqNpXaAwe3LrtVn8AkZTUxg==
+""")
+
+##file activate.csh
+ACTIVATE_CSH = convert("""
+eJx9VG1P2zAQ/u5fcYQKNgTNPtN1WxlIQ4KCUEGaxuQ6yYVYSuzKdhqVX7+zk3bpy5YPUXL3PPfc
+ne98DLNCWshliVDV1kGCUFvMoJGugMjq2qQIiVSxSJ1cCofD1BYRnOVGV0CfZ0N2DD91DalQSjsw
+tQLpIJMGU1euvPe7QeJlkKzgWixlhnAt4aoUVsLnLBiy5NtbJWQ5THX1ZciYKKWwkOFaE04dUm6D
+r/zh7pq/3D7Nnid3/HEy+wFHY/gEJydg0aFaQrBFgz1c5DG1IhTs+UZgsBC2GMFBlaeH+8dZXwcW
+VPvCjXdlAvCfQsE7al0+07XjZvrSCUevR5dnkVeKlFYZmUztG4BdzL2u9KyLVabTU0bdfg7a0hgs
+cSmUg6UwUiQl2iHrcbcVGNvPCiLOe7+cRwG13z9qRGgx2z6DHjfm/Op2yqeT+xvOLzs0PTKHDz2V
+tkckFHoQfQRXoGJAj9el0FyJCmEMhzgMS4sB7KPOE2ExoLcSieYwDvR+cP8cg11gKkVJc2wRcm1g
+QhYFlXiTaTfO2ki0fQoiFM4tLuO4aZrhOzqR4dIPcWx17hphMBY+Srwh7RTyN83XOWkcSPh1Pg/k
+TXX/jbJTbMtUmcxZ+/bbqOsy82suFQg/BhdSOTRhMNBHlUarCpU7JzBhmkKmRejKOQzayQe6MWoa
+n1wqWmuh6LZAaHxcdeqIlVLhIBJdO9/kbl0It2oEXQj+eGjJOuvOIR/YGRqvFhttUB2XTvLXYN2H
+37CBdbW2W7j2r2+VsCn0doVWcFG1/4y1VwBjfwAyoZhD
+""")
+
+##file activate.bat
+ACTIVATE_BAT = convert("""
+eJx9UdEKgjAUfW6wfxjiIH+hEDKUFHSKLCMI7kNOEkIf9P9pTJ3OLJ/03HPPPed4Es9XS9qqwqgT
+PbGKKOdXL4aAFS7A4gvAwgijuiKlqOpGlATS2NeMLE+TjJM9RkQ+SmqAXLrBo1LLIeLdiWlD6jZt
+r7VNubWkndkXaxg5GO3UaOOKS6drO3luDDiO5my3iA0YAKGzPRV1ack8cOdhysI0CYzIPzjSiH5X
+0QcvC8Lfaj0emsVKYF2rhL5L3fCkVjV76kShi59NHwDniAHzkgDgqBcwOgTMx+gDQQqXCw==
+""")
+
+##file deactivate.bat
+DEACTIVATE_BAT = convert("""
+eJxzSE3OyFfIT0vj4ipOLVEI8wwKCXX0iXf1C7Pl4spMU0hJTcvMS01RiPf3cYmHyQYE+fsGhCho
+cCkAAUibEkTEVhWLMlUlLk6QGixStlyaeCyJDPHw9/Pw93VFsQguim4ZXAJoIUw5DhX47XUM8UCx
+EchHtwsohN1bILUgw61c/Vy4AJYPYm4=
+""")
+
+##file activate.ps1
+ACTIVATE_PS = convert("""
+eJylWdmS40Z2fVeE/oHT6rCloNUEAXDThB6wAyQAEjsB29GBjdgXYiWgmC/zgz/Jv+AEWNVd3S2N
+xuOKYEUxM+/Jmzfvcm7W//zXf/+wUMOoXtyi1F9kbd0sHH/hFc2iLtrK9b3FrSqyxaVQwr8uhqJd
+uHaeg9mqzRdR8/13Pyy8qPLdJh0+LMhi0QCoXxYfFh9WtttEnd34H8p6/f1300KauwrULws39e18
+0ZaLNm9rgN/ZVf3h++/e124Vlc0vKsspHy+Yyi5+XbzPhijvCtduoiL/kA1ukWV27n0o7Sb8LIFj
+CvWR5GQgUJdp1Pw8TS9+rPy6SDv/+e3d+0+4qw8f3v20+PliV37efEYBAB9FTKC+RHn/Cfxn3rdv
+00Fube5O+iyCtHDs9BfPfz3q4sfFv9d91Ljhfy7ei0VO+nVTtdOkv/jpt0l2AX6iG1jXgKnnDuD4
+ke2k/i8fzzz5UedkVcP4pwF+Wvz2FJl+3vt598urXf5Y6LNA5WcFOP7r0sW7b9a+W/xcu0Xpv5zk
+Kfq3P9Dz9di/fCxS72MXVU1rpx9L4Bxl85Wmn5a+zP76Zuh3pL9ROWr87PN+//GHIl+oOtvn9XSU
+qH+p0gQBFnx1uV+JLH5O5zv+PXW+WepXVVHZT0+oQezkIATcIm+ivPV/z5J/+cYj3ir4w0Lx09vC
+e5n/y5/Y5LPPfdrqb88ga/PabxZRVfmp39l588m/6u+/e+OpP+dF7n1WZpJ9//Z4v372fDDz9eHB
+7Juvs/BLMHzrxL9+9twXpJfhd1/DrpQ5Euu/vlss3wp9HXC/54C/Ld69m6zwdx3tC0d8daSv0V8B
+n4b9YYF53sJelJV/ix6LZspw/sJtqyl5LJ5r/23htA1Imfm/gt9R7dqVB1LjhydAX4Gb+zksQF59
+9+P7H//U+376afFuvh2/T6P85Xr/5c8C6OXyFY4BGuN+EE0+GeR201b+wkkLN5mmBY5TfMw8ngqL
+CztXxCSXKMCYrRIElWkEJlEPYsSOeKBVZCAQTKBhApMwRFQzmCThE0YQu2CdEhgjbgmk9GluHpfR
+/hhwJCZhGI5jt5FsAkOrObVyE6g2y1snyhMGFlDY1x+BoHpCMulTj5JYWNAYJmnKpvLxXgmQ8az1
+4fUGxxcitMbbhDFcsiAItg04E+OSBIHTUYD1HI4FHH4kMREPknuYRMyhh3AARWMkfhCketqD1CWJ
+mTCo/nhUScoQcInB1hpFhIKoIXLo5jLpwFCgsnLCx1QlEMlz/iFEGqzH3vWYcpRcThgWnEKm0QcS
+rA8ek2a2IYYeowUanOZOlrbWSJUC4c7y2EMI3uJPMnMF/SSXdk6E495VLhzkWHps0rOhKwqk+xBI
+DhJirhdUCTamMfXz2Hy303hM4DFJ8QL21BcPBULR+gcdYxoeiDqOFSqpi5B5PUISfGg46gFZBPo4
+jdh8lueaWuVSMTURfbAUnLINr/QYuuYoMQV6l1aWxuZVTjlaLC14UzqZ+ziTGDzJzhiYoPLrt3uI
+tXkVR47kAo09lo5BD76CH51cTt1snVpMOttLhY93yxChCQPI4OBecS7++h4p4Bdn4H97bJongtPk
+s9gQnXku1vzsjjmX4/o4YUDkXkjHwDg5FXozU0fW4y5kyeYW0uJWlh536BKr0kMGjtzTkng6Ep62
+uTWnQtiIqKnEsx7e1hLtzlXs7Upw9TwEnp0t9yzCGgUJIZConx9OHJArLkRYW0dW42G9OeR5Nzwk
+yk1mX7du5RGHT7dka7N3AznmSif7y6tuKe2N1Al/1TUPRqH6E2GLVc27h9IptMLkCKQYRqPQJgzV
+2m6WLsSipS3v3b1/WmXEYY1meLEVIU/arOGVkyie7ZsH05ZKpjFW4cpY0YkjySpSExNG2TS8nnJx
+nrQmWh2WY3cP1eISP9wbaVK35ZXc60yC3VN/j9n7UFoK6zvjSTE2+Pvz6Mx322rnftfP8Y0XKIdv
+Qd7AfK0nexBTMqRiErvCMa3Hegpfjdh58glW2oNMsKeAX8x6YJLZs9K8/ozjJkWL+JmECMvhQ54x
+9rsTHwcoGrDi6Y4I+H7yY4/rJVPAbYymUH7C2D3uiUS3KQ1nrCAUkE1dJMneDQIJMQQx5SONxoEO
+OEn1/Ig1eBBUeEDRuOT2WGGGE4bNypBLFh2PeIg3bEbg44PHiqNDbGIQm50LW6MJU62JHCGBrmc9
+2F7WBJrrj1ssnTAK4sxwRgh5LLblhwNAclv3Gd+jC/etCfyfR8TMhcWQz8TBIbG8IIyAQ81w2n/C
+mHWAwRzxd3WoBY7BZnsqGOWrOCKwGkMMNfO0Kci/joZgEocLjNnzgcmdehPHJY0FudXgsr+v44TB
+I3jnMGnsK5veAhgi9iXGifkHMOC09Rh9cAw9sQ0asl6wKMk8mpzFYaaDSgG4F0wisQDDBRpjCINg
+FIxhlhQ31xdSkkk6odXZFpTYOQpOOgw9ugM2cDQ+2MYa7JsEirGBrOuxsQy5nPMRdYjsTJ/j1iNw
+FeSt1jY2+dd5yx1/pzZMOQXUIDcXeAzR7QlDRM8AMkUldXOmGmvYXPABjxqkYKO7VAY6JRU7kpXr
++Epu2BU3qFFXClFi27784LrDZsJwbNlDw0JzhZ6M0SMXE4iBHehCpHVkrQhpTFn2dsvsZYkiPEEB
+GSEAwdiur9LS1U6P2U9JhGp4hnFpJo4FfkdJHcwV6Q5dV1Q9uNeeu7rV8PAjwdFg9RLtroifOr0k
+uOiRTo/obNPhQIf42Fr4mtThWoSjitEdAmFW66UCe8WFjPk1YVNpL9srFbond7jrLg8tqAasIMpy
+zkH0SY/6zVAwJrEc14zt14YRXdY+fcJ4qOd2XKB0/Kghw1ovd11t2o+zjt+txndo1ZDZ2T+uMVHT
+VSXhedBAHoJIID9xm6wPQI3cXY+HR7vxtrJuCKh6kbXaW5KkVeJsdsjqsYsOwYSh0w5sMbu7LF8J
+5T7U6LJdiTx+ca7RKlulGgS5Z1JSU2Llt32cHFipkaurtBrvNX5UtvNZjkufZ/r1/XyLl6yOpytL
+Km8Fn+y4wkhlqZP5db0rooqy7xdL4wxzFVTX+6HaxuQJK5E5B1neSSovZ9ALB8091dDbbjVxhWNY
+Ve5hn1VnI9OF0wpvaRm7SZuC1IRczwC7GnkhPt3muHV1YxUJfo+uh1sYnJy+vI0ZwuPV2uqWJYUH
+bmBsi1zmFSxHrqwA+WIzLrHkwW4r+bad7xbOzJCnKIa3S3YvrzEBK1Dc0emzJW+SqysQfdEDorQG
+9ZJlbQzEHQV8naPaF440YXzJk/7vHGK2xwuP+Gc5xITxyiP+WQ4x18oXHjFzCBy9kir1EFTAm0Zq
+LYwS8MpiGhtfxiBRDXpxDWxk9g9Q2fzPPAhS6VFDAc/aiNGatUkPtZIStZFQ1qD0IlJa/5ZPAi5J
+ySp1ETDomZMnvgiysZSBfMikrSDte/K5lqV6iwC5q7YN9I1dBZXUytDJNqU74MJsUyNNLAPopWK3
+tzmLkCiDyl7WQnj9sm7Kd5kzgpoccdNeMw/6zPVB3pUwMgi4C7hj4AMFAf4G27oXH8NNT9zll/sK
+S6wVlQwazjxWKWy20ZzXb9ne8ngGalPBWSUSj9xkc1drsXkZ8oOyvYT3e0rnYsGwx85xZB9wKeKg
+cJKZnamYwiaMymZvzk6wtDUkxmdUg0mPad0YHtvzpjEfp2iMxvORhnx0kCVLf5Qa43WJsVoyfEyI
+pzmf8ruM6xBr7dnBgzyxpqXuUPYaKahOaz1LrxNkS/Q3Ae5AC+xl6NbxAqXXlzghZBZHmOrM6Y6Y
+ctAkltwlF7SKEsShjVh7QHuxMU0a08/eiu3x3M+07OijMcKFFltByXrpk8w+JNnZpnp3CfgjV1Ax
+gUYCnWwYow42I5wHCcTzLXK0hMZN2DrPM/zCSqe9jRSlJnr70BPE4+zrwbk/xVIDHy2FAQyHoomT
+Tt5jiM68nBQut35Y0qLclLiQrutxt/c0OlSqXAC8VrxW97lGoRWzhOnifE2zbF05W4xuyhg7JTUL
+aqJ7SWDywhjlal0b+NLTpERBgnPW0+Nw99X2Ws72gOL27iER9jgzj7Uu09JaZ3n+hmCjjvZpjNst
+vOWWTbuLrg+/1ltX8WpPauEDEvcunIgTxuMEHweWKCx2KQ9DU/UKdO/3za4Szm2iHYL+ss9AAttm
+gZHq2pkUXFbV+FiJCKrpBms18zH75vax5jSo7FNunrVWY3Chvd8KKnHdaTt/6ealwaA1x17yTlft
+8VBle3nAE+7R0MScC3MJofNCCkA9PGKBgGMYEwfB2QO5j8zUqa8F/EkWKCzGQJ5EZ05HTly1B01E
+z813G5BY++RZ2sxbQS8ZveGPJNabp5kXAeoign6Tlt5+L8i5ZquY9+S+KEUHkmYMRFBxRrHnbl2X
+rVemKnG+oB1yd9+zT+4c43jQ0wWmQRR6mTCkY1q3VG05Y120ZzKOMBe6Vy7I5Vz4ygPB3yY4G0FP
+8RxiMx985YJPXsgRU58EuHj75gygTzejP+W/zKGe78UQN3yOJ1aMQV9hFH+GAfLRsza84WlPLAI/
+9G/5JdcHftEfH+Y3/fHUG7/o8bv98dzzy3e8S+XCvgqB+VUf7sH0yDHpONdbRE8tAg9NWOzcTJ7q
+TuAxe/AJ07c1Rs9okJvl1/0G60qvbdDzz5zO0FuPFQIHNp9y9Bd1CufYVx7dB26mAxwa8GMNrN/U
+oGbNZ3EQ7inLzHy5tRg9AXJrN8cB59cCUBeCiVO7zKM0jU0MamhnRThkg/NMmBOGb6StNeD9tDfA
+7czsAWopDdnGoXUHtA+s/k0vNPkBcxEI13jVd/axp85va3LpwGggXXWw12Gwr/JGAH0b8CPboiZd
+QO1l0mk/UHukud4C+w5uRoNzpCmoW6GbgbMyaQNkga2pQINB18lOXOCJzSWPFOhZcwzdgrsQnne7
+nvjBi+7cP2BbtBeDOW5uOLGf3z94FasKIguOqJl+8ss/6Kumns4cuWbqq5592TN/RNIbn5Qo6qbi
+O4F0P9txxPAwagqPlftztO8cWBzdN/jz3b7GD6JHYP/Zp4ToAMaA74M+EGSft3hEGMuf8EwjnTk/
+nz/P7SLipB/ogQ6xNX0fDqNncMCfHqGLCMM0ZzFa+6lPJYQ5p81vW4HkCvidYf6kb+P/oB965g8K
+C6uR0rdjX1DNKc5pOSTquI8uQ6KXxYaKBn+30/09tK4kMpJPgUIQkbENEPbuezNPPje2Um83SgyX
+GTCJb6MnGVIpgncdQg1qz2bvPfxYD9fewCXDomx9S+HQJuX6W3VAL+v5WZMudRQZk9ZdOk6GIUtC
+PqEb/uwSIrtR7/edzqgEdtpEwq7p2J5OQV+RLrmtTvFwFpf03M/VrRyTZ73qVod7v7Jh2Dwe5J25
+JqFOU2qEu1sP+CRotklediycKfLjeIZzjJQsvKmiGSNQhxuJpKa+hoWUizaE1PuIRGzJqropwgVB
+oo1hr870MZLgnXF5ZIpr6mF0L8aSy2gVnTAuoB4WEd4d5NPVC9TMotYXERKlTcwQ2KiB/C48AEfH
+Qbyq4CN8xTFnTvf/ebOc3isnjD95s0QF0nx9s+y+zMmz782xL0SgEmRpA3x1w1Ff9/74xcxKEPdS
+IEFTz6GgU0+BK/UZ5Gwbl4gZwycxEw+Kqa5QmMkh4OzgzEVPnDAiAOGBFaBW4wkDmj1G4RyElKgj
+NlLCq8zsp085MNh/+R4t1Q8yxoSv8PUpTt7izZwf2BTHZZ3pIZpUIpuLkL1nNL6sYcHqcKm237wp
+T2+RCjgXweXd2Zp7ZM8W6dG5bZsqo0nrJBTx8EC0+CQQdzEGnabTnkzofu1pYkWl4E7XSniECdxy
+vLYavPMcL9LW5SToJFNnos+uqweOHriUZ1ntIYZUonc7ltEQ6oTRtwOHNwez2sVREskHN+bqG3ua
+eaEbJ8XpyO8CeD9QJc8nbLP2C2R3A437ISUNyt5Yd0TbDNcl11/DSsOzdbi/VhCC0KE6v1vqVNkq
+45ZnG6fiV2NwzInxCNth3BwL0+8814jE6+1W1EeWtpWbSZJOJNYXmWRXa7vLnAljE692eHjZ4y5u
+y1u63De0IzKca7As48Z3XshVF+3XiLNz0JIMh/JOpbiNLlMi672uO0wYzOCZjRxcxj3D+gVenGIE
+MvFUGGXuRps2RzMcgWIRolHXpGUP6sMsQt1hspUBnVKUn/WQj2u6j3SXd9Xz0QtEzoM7qTu5y7gR
+q9gNNsrlEMLdikBt9bFvBnfbUIh6voTw7eDsyTmPKUvF0bHqWLbHe3VRHyRZnNeSGKsB73q66Vsk
+taxWYmwz1tYVFG/vOQhlM0gUkyvIab3nv2caJ1udU1F3pDMty7stubTE4OJqm0i0ECfrJIkLtraC
+HwRWKzlqpfhEIqYH09eT9WrOhQyt8YEoyBlnXtAT37WHIQ03TIuEHbnRxZDdLun0iok9PUC79prU
+m5beZzfQUelEXnhzb/pIROKx3F7qCttYIFGh5dXNzFzID7u8vKykA8Uejf7XXz//S4nKvW//ofS/
+QastYw==
+""")
+
+##file distutils-init.py
+DISTUTILS_INIT = convert("""
+eJytV1uL4zYUfvevOE0ottuMW9q3gVDa3aUMXXbLMlDKMBiNrSTqOJKRlMxkf33PkXyRbGe7Dw2E
+UXTu37lpxLFV2oIyifAncxmOL0xLIfcG+gv80x9VW6maw7o/CANSWWBwFtqeWMPlGY6qPjV8A0bB
+C4eKSTgZ5LRgFeyErMEeOBhbN+Ipgeizhjtnhkn7DdyjuNLPoCS0l/ayQTG0djwZC08cLXozeMss
+aG5EzQ0IScpnWtHSTXuxByV/QCmxE7y+eS0uxWeoheaVVfqSJHiU7Mhhi6gULbOHorshkrEnKxpT
+0n3A8Y8SMpuwZx6aoix3ouFlmW8gHRSkeSJ2g7hU+kiHLDaQw3bmRDaTGfTnty7gPm0FHbIBg9U9
+oh1kZzAFLaue2R6htPCtAda2nGlDSUJ4PZBgCJBGVcwKTAMz/vJiLD+Oin5Z5QlvDPdulC6EsiyE
+NFzb7McNTKJzbJqzphx92VKRFY1idenzmq3K0emRcbWBD0ryqc4NZGmKOOOX9Pz5x+/l27tP797c
+f/z0d+4NruGNai8uAM0bfsYaw8itFk8ny41jsfpyO+BWlpqfhcG4yxLdi/0tQqoT4a8Vby382mt8
+p7XSo7aWGdPBc+b6utaBmCQ7rQKQoWtAuthQCiold2KfJIPTT8xwg9blPumc+YDZC/wYGdAyHpJk
+vUbHbHWAp5No6pK/WhhLEWrFjUwtPEv1Agf8YmnsuXUQYkeZoHm8ogP16gt2uHoxcEMdf2C6pmbw
+hUMsWGhanboh4IzzmsIpWs134jVPqD/c74bZHdY69UKKSn/+KfVhxLgUlToemayLMYQOqfEC61bh
+cbhwaqoGUzIyZRFHPmau5juaWqwRn3mpWmoEA5nhzS5gog/5jbcFQqOZvmBasZtwYlG93k5GEiyw
+buHhMWLjDarEGpMGB2LFs5nIJkhp/nUmZneFaRth++lieJtHepIvKgx6PJqIlD9X2j6pG1i9x3pZ
+5bHuCPFiirGHeO7McvoXkz786GaKVzC9DSpnOxJdc4xm6NSVq7lNEnKdVlnpu9BNYoKX2Iq3wvgh
+gGEUM66kK6j4NiyoneuPLSwaCWDxczgaolEWpiMyDVDb7dNuLAbriL8ig8mmeju31oNvQdpnvEPC
+1vAXbWacGRVrGt/uXN/gU0CDDwgooKRrHfTBb1/s9lYZ8ZqOBU0yLvpuP6+K9hLFsvIjeNhBi0KL
+MlOuWRn3FRwx5oHXjl0YImUx0+gLzjGchrgzca026ETmYJzPD+IpuKzNi8AFn048Thd63OdD86M6
+84zE8yQm0VqXdbbgvub2pKVnS76icBGdeTHHXTKspUmr4NYo/furFLKiMdQzFjHJNcdAnMhltBJK
+0/IKX3DVFqvPJ2dLE7bDBkH0l/PJ29074+F0CsGYOxsb7U3myTUncYfXqnLLfa6sJybX4g+hmcjO
+kMRBfA1JellfRRKJcyRpxdS4rIl6FdmQCWjo/o9Qz7yKffoP4JHjOvABcRn4CZIT2RH4jnxmfpVG
+qgLaAvQBNfuO6X0/Ux02nb4FKx3vgP+XnkX0QW9pLy/NsXgdN24dD3LxO2Nwil7Zlc1dqtP3d7/h
+kzp1/+7hGBuY4pk0XD/0Ao/oTe/XGrfyM773aB7iUhgkpy+dwAMalxMP0DrBcsVw/6p25+/hobP9
+GBknrWExDhLJ1bwt1NcCNblaFbMKCyvmX0PeRaQ=
+""")
+
+##file distutils.cfg
+DISTUTILS_CFG = convert("""
+eJxNj00KwkAMhfc9xYNuxe4Ft57AjYiUtDO1wXSmNJnK3N5pdSEEAu8nH6lxHVlRhtDHMPATA4uH
+xJ4EFmGbvfJiicSHFRzUSISMY6hq3GLCRLnIvSTnEefN0FIjw5tF0Hkk9Q5dRunBsVoyFi24aaLg
+9FDOlL0FPGluf4QjcInLlxd6f6rqkgPu/5nHLg0cXCscXoozRrP51DRT3j9QNl99AP53T2Q=
+""")
+
+##file activate_this.py
+ACTIVATE_THIS = convert("""
+eJyNU01v2zAMvetXEB4K21jmDOstQA4dMGCHbeihlyEIDMWmG62yJEiKE//7kXKdpN2KzYBt8euR
+fKSyLPs8wiEo8wh4wqZTGou4V6Hm0wJa1cSiTkJdr8+GsoTRHuCotBayiWqQEYGtMCgfD1KjGYBe
+5a3p0cRKiAe2NtLADikftnDco0ko/SFEVgEZ8aRC5GLux7i3BpSJ6J1H+i7A2CjiHq9z7JRZuuQq
+siwTIvpxJYCeuWaBpwZdhB+yxy/eWz+ZvVSU8C4E9FFZkyxFsvCT/ZzL8gcz9aXVE14Yyp2M+2W0
+y7n5mp0qN+avKXvbsyyzUqjeWR8hjGE+2iCE1W1tQ82hsCZN9UzlJr+/e/iab8WfqsmPI6pWeUPd
+FrMsd4H/55poeO9n54COhUs+sZNEzNtg/wanpjpuqHJaxs76HtZryI/K3H7KJ/KDIhqcbJ7kI4ar
+XL+sMgXnX0D+Te2Iy5xdP8yueSlQB/x/ED2BTAtyE3K4SYUN6AMNfbO63f4lBW3bUJPbTL+mjSxS
+PyRfJkZRgj+VbFv+EzHFi5pKwUEepa4JslMnwkowSRCXI+m5XvEOvtuBrxHdhLalG0JofYBok6qj
+YdN2dEngUlbC4PG60M1WEN0piu7Nq7on0mgyyUw3iV1etLo6r/81biWdQ9MWHFaePWZYaq+nmp+t
+s3az+sj7eA0jfgPfeoN1
+""")
+
+MH_MAGIC = 0xfeedface
+MH_CIGAM = 0xcefaedfe
+MH_MAGIC_64 = 0xfeedfacf
+MH_CIGAM_64 = 0xcffaedfe
+FAT_MAGIC = 0xcafebabe
+BIG_ENDIAN = '>'
+LITTLE_ENDIAN = '<'
+LC_LOAD_DYLIB = 0xc
+# sys.maxint is gone in Python 3; fall back to sys.maxsize there
+maxint = majver == 3 and getattr(sys, 'maxsize') or getattr(sys, 'maxint')
+
+
+class fileview(object):
+ """
+ A proxy for file-like objects that exposes a given view of a file.
+ Modified from macholib.
+ """
+
+ def __init__(self, fileobj, start=0, size=maxint):
+ if isinstance(fileobj, fileview):
+ self._fileobj = fileobj._fileobj
+ else:
+ self._fileobj = fileobj
+ self._start = start
+ self._end = start + size
+ self._pos = 0
+
+ def __repr__(self):
+ return '<fileview [%d, %d] %r>' % (
+ self._start, self._end, self._fileobj)
+
+ def tell(self):
+ return self._pos
+
+ def _checkwindow(self, seekto, op):
+ if not (self._start <= seekto <= self._end):
+ raise IOError("%s to offset %d is outside window [%d, %d]" % (
+ op, seekto, self._start, self._end))
+
+ def seek(self, offset, whence=0):
+ seekto = offset
+ if whence == os.SEEK_SET:
+ seekto += self._start
+ elif whence == os.SEEK_CUR:
+ seekto += self._start + self._pos
+ elif whence == os.SEEK_END:
+ seekto += self._end
+ else:
+ raise IOError("Invalid whence argument to seek: %r" % (whence,))
+ self._checkwindow(seekto, 'seek')
+ self._fileobj.seek(seekto)
+ self._pos = seekto - self._start
+
+ def write(self, bytes):
+ here = self._start + self._pos
+ self._checkwindow(here, 'write')
+ self._checkwindow(here + len(bytes), 'write')
+ self._fileobj.seek(here, os.SEEK_SET)
+ self._fileobj.write(bytes)
+ self._pos += len(bytes)
+
+ def read(self, size=maxint):
+ assert size >= 0
+ here = self._start + self._pos
+ self._checkwindow(here, 'read')
+ size = min(size, self._end - here)
+ self._fileobj.seek(here, os.SEEK_SET)
+ bytes = self._fileobj.read(size)
+ self._pos += len(bytes)
+ return bytes
+
+
+def read_data(file, endian, num=1):
+ """
+ Read a given number of 32-bit unsigned integers from the given file
+ with the given endianness.
+ """
+ res = struct.unpack(endian + 'L' * num, file.read(num * 4))
+ if len(res) == 1:
+ return res[0]
+ return res
+
+
+def mach_o_change(path, what, value):
+ """
+ Replace a given name (what) in any LC_LOAD_DYLIB command found in
+ the given binary with a new name (value), provided the new name is
+ no longer than the old one.
+ """
+
+ def do_macho(file, bits, endian):
+ # Read Mach-O header (the magic number is assumed read by the caller)
+ cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags = read_data(file, endian, 6)
+ # The 64-bit header has one extra field (reserved); skip it.
+ if bits == 64:
+ read_data(file, endian)
+ # The header is followed by ncmds commands
+ for n in range(ncmds):
+ where = file.tell()
+ # Read command header
+ cmd, cmdsize = read_data(file, endian, 2)
+ if cmd == LC_LOAD_DYLIB:
+ # The first data field in LC_LOAD_DYLIB commands is the
+ # offset of the name, starting from the beginning of the
+ # command.
+ name_offset = read_data(file, endian)
+ file.seek(where + name_offset, os.SEEK_SET)
+ # Read the NUL-terminated string
+ load = file.read(cmdsize - name_offset).decode()
+ load = load[:load.index('\0')]
+ # If the string is what is being replaced, overwrite it.
+ if load == what:
+ file.seek(where + name_offset, os.SEEK_SET)
+ file.write(value.encode() + '\0'.encode())
+ # Seek to the next command
+ file.seek(where + cmdsize, os.SEEK_SET)
+
+ def do_file(file, offset=0, size=maxint):
+ file = fileview(file, offset, size)
+ # Read magic number
+ magic = read_data(file, BIG_ENDIAN)
+ if magic == FAT_MAGIC:
+ # Fat binaries contain nfat_arch Mach-O binaries
+ nfat_arch = read_data(file, BIG_ENDIAN)
+ for n in range(nfat_arch):
+ # Read arch header
+ cputype, cpusubtype, offset, size, align = read_data(file, BIG_ENDIAN, 5)
+ do_file(file, offset, size)
+ elif magic == MH_MAGIC:
+ do_macho(file, 32, BIG_ENDIAN)
+ elif magic == MH_CIGAM:
+ do_macho(file, 32, LITTLE_ENDIAN)
+ elif magic == MH_MAGIC_64:
+ do_macho(file, 64, BIG_ENDIAN)
+ elif magic == MH_CIGAM_64:
+ do_macho(file, 64, LITTLE_ENDIAN)
+
+ assert len(what) >= len(value)
+ do_file(open(path, 'r+b'))
+
+
+if __name__ == '__main__':
+ main()
+
+## TODO:
+## Copy python.exe.manifest
+## Monkeypatch distutils.sysconfig