Compare commits

133 Commits
1.3.0 ... 0.8.2

Author SHA1 Message Date
Christopher Dunn
5b3e8a8984 partially revert 'Added features that allow the reader to accept common non-standard JSON.'
revert '642befc836ac5093b528e7d8b4fd66b66735a98c',
but keep the *added* methods for `decodedNumber()` and `decodedDouble()`.
2015-02-15 03:03:47 -06:00
Christopher Dunn
4a6b5a33ed partially revert 'fix bug for static init'
re: 28836b8acc

A global instance of a Value (viz. 'null') was a mistake,
but dropping it breaks binary-compatibility. So we will keep it
everywhere except the one platform where it was crashing, ARM.
2015-02-15 03:03:47 -06:00
Christopher Dunn
37ea604585 revert 'Made it possible to drop null placeholders from array output.'
revert ae3c7a7aab
2015-02-15 03:03:47 -06:00
Christopher Dunn
fd288c3750 Revert "added option to FastWriter which omits the trailing new line character"
This reverts commit 5bf16105b5.
2015-02-15 03:03:29 -06:00
Christopher Dunn
27c5762141 revert 'Added structured error reporting to Reader.'
revert 68db655347
issue #147
2015-02-15 03:03:28 -06:00
Christopher Dunn
bc67af2210 revert 'Add public semantic error reporting'
for binary-compatibility with 0.6.0
issue #147
was #57
2015-02-15 03:03:28 -06:00
Christopher Dunn
83f10358f8 Revert "Switch to copy-and-swap idiom for operator=."
This reverts commit 45cd9490cd.

Ignored ValueInternal* changes, since those did not produce symbols for the
Debian build. (They must not have used the INTERNAL stuff.)

  https://github.com/open-source-parsers/jsoncpp/issues/78

Conflicts:
	include/json/value.h
	src/lib_json/json_internalarray.inl
	src/lib_json/json_internalmap.inl
	src/lib_json/json_value.cpp
2015-02-15 03:03:28 -06:00
Christopher Dunn
115f9b91cf NOT C++11 2015-02-15 03:03:28 -06:00
Christopher Dunn
c56d73b9a2 0.8.z 2015-02-15 03:03:28 -06:00
Christopher Dunn
f164288646 help rebasing 2015-02-15 03:01:26 -06:00
Christopher Dunn
3bfd215938 1.4.2 <- 1.4.1
* Bug-fix for ValueIterator::operator-() (issue #169)
2015-02-15 02:49:34 -06:00
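A minimal sketch of the iterator arithmetic this fix addresses (illustrative values, not the actual regression test):

```cpp
#include <json/json.h>
#include <cassert>

int main() {
  Json::Value arr(Json::arrayValue);
  arr.append("one");
  arr.append("two");
  // ValueIteratorBase::operator-() computes the distance between two
  // iterators; issue #169 was a wrong result from this computation.
  assert(arr.end() - arr.begin() == 2);
  return 0;
}
```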
Christopher Dunn
400b744195 Merge pull request #172 from cdunn2001/master
Fix bug in ValueIteratorBase::operator- 

Fixes #169.
2015-02-15 02:44:17 -06:00
Christopher Dunn
bd55164089 reverse sense for CPPTL too 2015-02-15 02:38:31 -06:00
Kevin Grant
4c5832a0be Fix bug in ValueIteratorBase::operator- 2015-02-15 02:38:31 -06:00
Christopher Dunn
8ba9875962 IteratorTest 2015-02-15 02:38:31 -06:00
Christopher Dunn
9c91b995dd rules for signing and doc-generation 2015-02-14 10:20:45 -06:00
Christopher Dunn
e7233bf056 1.4.1 <- 1.4.0 2015-02-13 10:00:38 -06:00
Christopher Dunn
9c90456890 Merge pull request #167 from cdunn2001/fail-if-extra
Add `failIfExtra` feature to `CharReaderBuilder`.
2015-02-13 09:55:11 -06:00
Christopher Dunn
f4be815c86 failIfExtra
1. failing regression tests, from #164 and #107
2. implemented; tests pass
3. allow trailing comments
2015-02-13 09:39:08 -06:00
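A minimal sketch of the new setting in use, assuming the `CharReaderBuilder` API from these commits (the JSON text is illustrative):

```cpp
#include <json/json.h>
#include <iostream>
#include <sstream>

int main() {
  Json::CharReaderBuilder builder;
  builder.settings_["failIfExtra"] = true;  // reject garbage after the root value

  Json::Value root;
  std::string errs;
  std::istringstream doc("{\"a\": 1} trailing-garbage");
  if (!Json::parseFromStream(builder, doc, &root, &errs)) {
    std::cerr << errs << std::endl;  // reports the extra token
    return 1;
  }
  return 0;
}
```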
Christopher Dunn
aa13a8ba40 comments/minor typos 2015-02-13 09:38:49 -06:00
Christopher Dunn
da0fcfbaa2 link web docs 2015-02-12 11:45:21 -06:00
Christopher Dunn
3ebba5cea8 stop calling validate() in newReader/Writer()
By not calling validate(), we can add
non-invasive features which will be simply ignored when user-code
is compiled against an old version. That way, we can often
avoid a minor version-bump.

The user can call validate() himself if he prefers that behavior.
2015-02-11 11:15:32 -06:00
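In other words, validation becomes opt-in. A sketch (the misspelled key is hypothetical, just to trigger the check):

```cpp
#include <json/json.h>
#include <iostream>

int main() {
  Json::CharReaderBuilder builder;
  builder.settings_["failIfExtr"] = true;  // typo: not a known setting
  Json::Value invalid;
  if (!builder.validate(&invalid)) {
    // Since newReader() no longer calls validate(), unknown keys are
    // silently ignored unless the user runs this check himself.
    std::cerr << "unrecognized settings: " << invalid << std::endl;
  }
  return 0;
}
```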
Christopher Dunn
acbf4eb2ef Merge pull request #166 from cdunn2001/stackLimit
Fixes #88 and #56.
2015-02-11 10:35:16 -06:00
Christopher Dunn
56df206847 limit stackDepth for old (deprecated) Json::Reader too
This is an improper solution. If multiple Readers exist,
then the effective stackLimit is reduced because of side-effects.
But our options are limited. We need to address the security
hole without breaking binary-compatibility.

However, this is not likely to cause any practical problems because:

* Anyone using `operator>>(istream, Json::Value)` will be using the
new code already
* Multiple Readers are uncommon.
* The stackLimit is quite high.
* Deeply nested JSON probably would have hit the system limits anyway.
2015-02-11 10:20:53 -06:00
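With the new reader, by contrast, the limit is per-builder state rather than a shared side-effect. A sketch (the depth value is arbitrary):

```cpp
#include <json/json.h>
#include <istream>
#include <string>

Json::Value parseWithLimit(std::istream& in) {
  Json::CharReaderBuilder builder;
  builder.settings_["stackLimit"] = 1000;  // parsing fails beyond this nesting depth
  Json::Value root;
  std::string errs;
  Json::parseFromStream(builder, in, &root, &errs);
  return root;
}
```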
Christopher Dunn
4dca80da49 limit stackDepth 2015-02-11 10:20:47 -06:00
Christopher Dunn
249ad9f47f stackLimit 2015-02-11 10:01:58 -06:00
Christopher Dunn
99b8e856f6 stackLimit_ 2015-02-11 10:01:58 -06:00
Christopher Dunn
89b72e1653 test stackLimit 2015-02-11 10:01:58 -06:00
Christopher Dunn
2474989f24 Old -> Our 2015-02-11 09:48:24 -06:00
Christopher Dunn
315b8c9f2c 1st StreamWriterTest 2015-02-10 23:29:14 -06:00
Christopher Dunn
29501c4d9f clarify comments
And throw instead of return null for invalid settings.
2015-02-10 23:03:27 -06:00
Christopher Dunn
7796f20eab Merge pull request #165 from cdunn2001/master
Remove some experimental classes that are not needed for 1.4.0. This also helps 0.8.0 binary compatibility with 0.6.0-rc2.
2015-02-10 22:45:32 -06:00
Christopher Dunn
20d09676c2 drop experimental OldCompressingStreamWriterBuilder 2015-02-10 21:29:35 -06:00
Christopher Dunn
5a744708fc enableYAMLCompatibility and dropNullPlaceholders for StreamWriterBuilder 2015-02-10 21:28:13 -06:00
Christopher Dunn
07f0e9308d nullRef, since we had to add that kludge to 0.8.0 2015-02-10 21:28:13 -06:00
Christopher Dunn
052050df07 copy Features to OldFeatures 2015-02-10 17:01:08 -06:00
Christopher Dunn
435d2a2f8d passes 2015-02-10 17:01:08 -06:00
Christopher Dunn
6123bd1505 copy Reader impl to OldReader 2015-02-10 17:01:08 -06:00
Christopher Dunn
7477bcfa3a renames for OldReader 2015-02-10 17:01:08 -06:00
Christopher Dunn
5e3e68af2e OldReader copied from Reader 2015-02-10 17:01:08 -06:00
Christopher Dunn
04a607d95b Merge pull request #163 from cdunn2001/master
Reimplement the new Builders.

Issue #131.
2015-02-09 18:55:55 -06:00
Christopher Dunn
db75cdf21e mv CommentStyle to .cpp 2015-02-09 18:54:58 -06:00
Christopher Dunn
c41609b9f9 set output stream in write(), not in builder 2015-02-09 18:44:53 -06:00
Christopher Dunn
b56381a636 <stdexcept> 2015-02-09 18:29:11 -06:00
Christopher Dunn
f757c18ca0 add all features 2015-02-09 18:24:56 -06:00
Christopher Dunn
3cf9175bde remark defaults via doxygen snippet 2015-02-09 18:16:24 -06:00
Christopher Dunn
a9e1ab302d Builder::settings_
We use Json::Value to configure the builders so we can maintain
binary-compatibility easily.
2015-02-09 17:30:11 -06:00
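A sketch of the idea: because `settings_` is an ordinary `Json::Value`, a future release can add a key without touching the class layout (key names per the commits in this range):

```cpp
#include <json/json.h>
#include <iostream>

int main() {
  Json::StreamWriterBuilder wbuilder;
  // Plain key/value settings; no new data members, so the ABI stays stable.
  wbuilder.settings_["indentation"] = "\t";
  wbuilder.settings_["commentStyle"] = "All";

  Json::Value root;
  root["key"] = "value";
  Json::StreamWriter* writer = wbuilder.newStreamWriter();
  writer->write(root, &std::cout);  // output stream passed to write(), not the builder
  delete writer;  // raw pointer: this branch targets pre-C++11
  return 0;
}
```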
Christopher Dunn
694dbcb328 update docs, writeString() 2015-02-09 15:25:57 -06:00
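The `writeString()` helper mentioned here reduces the common case to one call; a sketch:

```cpp
#include <json/json.h>
#include <iostream>

int main() {
  Json::Value root;
  root["name"] = "jsoncpp";
  Json::StreamWriterBuilder wbuilder;
  wbuilder.settings_["indentation"] = "  ";
  // writeString() builds a StreamWriter internally and returns the text.
  std::string out = Json::writeString(wbuilder, root);
  std::cout << out << std::endl;
  return 0;
}
```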
Christopher Dunn
732abb80ef Merge pull request #162 from cdunn2001/master
Deprecate the new Builders.
2015-02-09 11:55:54 -06:00
Christopher Dunn
f3b3358a0e deprecate current Builders 2015-02-09 11:51:06 -06:00
Christopher Dunn
1357cddf1e deprecate Builders
see issue #131
2015-02-09 11:46:27 -06:00
Christopher Dunn
8df98f6112 deprecate old Reader; separate Advanced Usage section 2015-02-09 11:15:39 -06:00
Christopher Dunn
16bdfd8af3 --in=doc/web_doxyfile.in 2015-02-09 11:15:11 -06:00
Christopher Dunn
ce799b3aa3 copy doxyfile.in 2015-02-09 10:36:55 -06:00
Christopher Dunn
3a65581b20 drop an old impl 2015-02-09 09:54:26 -06:00
Christopher Dunn
6451412c99 simplify basic docs 2015-02-09 09:44:26 -06:00
Christopher Dunn
66a8ba255f clarify Builders 2015-02-09 01:29:43 -06:00
Christopher Dunn
249fd18114 put version into docs 2015-02-09 00:50:27 -06:00
Christopher Dunn
a587d04f77 Merge pull request #161 from cdunn2001/master
CharReader/Builder

I guess we should bump the patch-level version. We will set the version properly soon...
2015-02-08 13:25:08 -06:00
Christopher Dunn
2c1197c2c8 CharReader/Builder
* CharReaderBuilder is similar to StreamWriterBuilder.
* use rdbuf(), since getline(string) is not required to handle EOF as delimiter
2015-02-08 13:22:09 -06:00
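The rdbuf() remark refers to slurping the whole stream instead of relying on getline(). A sketch of the pattern with the new `CharReader` (raw pointer, matching the pre-C++11 constraint of this branch):

```cpp
#include <json/json.h>
#include <iostream>
#include <sstream>

int main() {
  // getline(string) is not required to treat EOF as a delimiter,
  // but rdbuf() reliably copies the entire stream.
  std::ostringstream ss;
  ss << std::cin.rdbuf();
  std::string doc = ss.str();

  Json::CharReaderBuilder builder;
  Json::CharReader* reader = builder.newCharReader();
  Json::Value root;
  std::string errs;
  bool ok = reader->parse(doc.data(), doc.data() + doc.size(), &root, &errs);
  delete reader;
  if (!ok)
    std::cerr << errs << std::endl;
  return ok ? 0 : 1;
}
```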
Christopher Dunn
2a94618589 Merge pull request #160 from cdunn2001/master
rm unique_ptr<>/shared_ptr<>, for pre-C++11
2015-02-08 13:10:18 -06:00
Christopher Dunn
dee4602b8f rm unique_ptr<>/shared_ptr<>, for pre-C++11 2015-02-08 11:54:49 -06:00
Christopher Dunn
ea2d167a38 Merge pull request #158 from cdunn2001/travis-with-cmake-package
JSONCPP_WITH_CMAKE_PACKAGE in Travis

I guess we don't really need to build shared and static separately either. Saves a little time, maybe?
2015-02-07 12:24:58 -06:00
Christopher Dunn
41edda5ebe JSONCPP_WITH_CMAKE_PACKAGE in Travis 2015-02-07 12:18:20 -06:00
Christopher Dunn
2941cb3fe2 Merge pull request #156 from cdunn2001/with-cmake-package
fix JSONCPP_WITH_CMAKE_PACKAGE #155
2015-02-07 11:44:24 -06:00
Christopher Dunn
636121485c fix JSONCPP_WITH_CMAKE_PACKAGE #155
mv JSONCPP_WITH_CMAKE_PACKAGE ahead of INSTALL def.
2015-02-07 11:39:16 -06:00
Christopher Dunn
fe855fb4dd drop nullptr
See issue #153.
2015-02-02 15:33:47 -06:00
Christopher Dunn
198cc350c5 drop scoped enum, for pre-C++11 compatibility 2015-01-29 13:49:21 -06:00
Peter Spiess-Knafl
5e8595c0e2 added cmake option to build static and shared libraries at once
See #147 and #149.
2015-01-27 18:22:43 -06:00
Christopher Dunn
38042b3892 docs 2015-01-26 11:38:38 -06:00
Christopher Dunn
3b5f2b85ca Merge pull request #145 from cdunn2001/simplify-builder
Simplify builder
2015-01-26 11:33:16 -06:00
Christopher Dunn
7eca3b4e88 gcc-4.6 (Travis CI) does not support 2015-01-26 11:17:42 -06:00
Christopher Dunn
999f5912f0 docs 2015-01-26 11:12:53 -06:00
Christopher Dunn
472d29f57b fix doc 2015-01-26 11:04:03 -06:00
Christopher Dunn
6065a1c142 make StreamWriterBuilder concrete 2015-01-26 11:01:15 -06:00
Christopher Dunn
28a20917b0 Move old FastWriter stuff out of new Builder 2015-01-26 10:47:42 -06:00
Christopher Dunn
177b7b8f22 OldCompressingStreamWriterBuilder 2015-01-26 10:44:20 -06:00
Christopher Dunn
9da9f84903 improve docs
including `writeString()`
2015-01-26 10:43:53 -06:00
Christopher Dunn
54b8e6939a Merge pull request #132 from cdunn2001/builder
StreamWriter::Builder

Deprecate old Writers, but include them in tests.

This should still be binary-compatible with 1.3.0.
2015-01-25 18:52:09 -06:00
Christopher Dunn
c7b39c2e25 deprecate old Writers
also, use withers instead of setters, and update docs
2015-01-25 18:45:59 -06:00
Christopher Dunn
d78caa3851 implement strange setting from FastWriter 2015-01-25 18:15:54 -06:00
Christopher Dunn
c6e0688e5a implement CommentStyle::None/indentation_=="" 2015-01-25 17:32:36 -06:00
Christopher Dunn
1e21e63853 default \t indentation, All comments 2015-01-25 16:01:59 -06:00
Christopher Dunn
dea6f8d9a6 incorporate 'proper newlines for comments' into new StreamWriter 2015-01-25 15:55:18 -06:00
Christopher Dunn
648843d148 clarify CommentStyle 2015-01-25 15:54:40 -06:00
Christopher Dunn
fe3979cd8a drop StreamWriterBuilderFactory, for now 2015-01-25 15:54:40 -06:00
Christopher Dunn
94665eab72 copy fixes from StyledStreamWriter 2015-01-25 15:54:40 -06:00
Christopher Dunn
9e4bcf354f test BuiltStyledStreamWriter too 2015-01-25 15:54:40 -06:00
Christopher Dunn
9243d602fe const stuff 2015-01-25 15:54:40 -06:00
Christopher Dunn
beb6f35c63 non-const write 2015-01-25 15:54:40 -06:00
Christopher Dunn
ceef7f5219 copied impl of StyledStreamWriter 2015-01-25 15:54:40 -06:00
Christopher Dunn
77ce057f14 fix comment 2015-01-25 15:54:40 -06:00
Christopher Dunn
d49ab5aee1 use new BuiltStyledStreamWriter in operator<<() 2015-01-25 15:54:40 -06:00
Christopher Dunn
4d649402b0 setIndentation() 2015-01-25 15:54:40 -06:00
Christopher Dunn
489707ff60 StreamWriter::Builder 2015-01-25 15:54:39 -06:00
Christopher Dunn
5fbfe3cdb9 StreamWriter 2015-01-25 15:54:39 -06:00
Christopher Dunn
948f29032e update docs 2015-01-25 15:54:07 -06:00
Christopher Dunn
964affd333 add back space before trailing comment 2015-01-25 15:49:02 -06:00
Christopher Dunn
c038e08efc Merge pull request #144 from cdunn2001/proper-comment-lfs
proper newlines for comments

This alters `StyledStreamWriter`, but not `StyledWriter`.
2015-01-25 15:10:38 -06:00
Christopher Dunn
74c2d82e19 proper newlines for comments
The logic is still messy, but it seems to work.
2015-01-25 15:05:09 -06:00
Christopher Dunn
30726082f3 Merge pull request #143 from cdunn2001/rm-trailing-newlines
rm trailing newlines for *all* comments
2015-01-25 14:35:24 -06:00
Christopher Dunn
1e3149ab75 rm trailing newlines for *all* comments
This will make it easier to fix newlines consistently.
2015-01-25 14:32:13 -06:00
Christopher Dunn
7312b1022d Merge pull request #141 from cdunn2001/set-comment
Fix a border case which causes Value::CommentInfo::setComment() to crash
2015-01-25 11:37:02 -06:00
datadiode
2f046b584d Fix a border case which causes Value::CommentInfo::setComment() to crash
re: pull #140
2015-01-25 11:19:51 -06:00
Christopher Dunn
dd91914b1b TravisCI gcc-4.6 does not yet support -Wpedantic 2015-01-25 10:34:49 -06:00
Christopher Dunn
2a46e295ec Merge pull request #139 from cdunn2001/some-python-changes
Some python changes.

* Better messaging.
* Make `doxybuild.py` work with python3.4
2015-01-24 16:24:12 -06:00
Christopher Dunn
f4bc0bf4ec README.md 2015-01-24 16:21:12 -06:00
Christopher Dunn
f357688893 make doxybuild.py work with python3.4 2015-01-24 16:21:12 -06:00
Florian Meier
bb0c80b3e5 Doxybuild: Error message if doxygen not found
This patch introduces a better error message.

See discussion at pull #129.
2015-01-24 16:21:12 -06:00
Christopher Dunn
ff5abe76a5 update doxybuild.py 2015-01-24 16:21:12 -06:00
Christopher Dunn
9cc0bb80b2 update TarFile usage 2015-01-24 16:21:12 -06:00
Christopher Dunn
494950a63d rm extra whitespace in python, per PEP8 2015-01-24 16:21:12 -06:00
Christopher Dunn
7d82b14726 fix issue #90
We are static-casting to U, so we really have no reason to use
references.

However, if this comes up again, try applying -ffloat-store to
the target executable, per
    https://github.com/open-source-parsers/jsoncpp/issues/90
2015-01-24 14:34:54 -06:00
Christopher Dunn
2bc6137ada fix gcc warnings 2015-01-24 13:42:37 -06:00
Christopher Dunn
201904bfbb Merge pull request #138 from cdunn2001/fix-103
Fix #103.
2015-01-23 14:51:31 -06:00
Christopher Dunn
216ecd3085 fix test_comment_00 for #103 2015-01-23 14:28:44 -06:00
Christopher Dunn
8d15e51228 add test_comment_00
one-element array with comment, for issue #103
2015-01-23 14:28:21 -06:00
Christopher Dunn
9fbd12b27c Merge pull request #137 from cdunn2001/avoid-extra-newline
Avoid extra newline
2015-01-23 14:24:52 -06:00
Christopher Dunn
f8ca6cbb25 1.4.0 <- 1.3.0
Minor version bump, but we will wait for a few more commits this time
before tagging the release.
2015-01-23 14:23:31 -06:00
Christopher Dunn
d383056fbb avoid extra newlines in StyledStreamWriter
Add indented_ as a bitfield. (Verified that sizeof(StyledStreamWriter)
remains 96 for binary compatibility. But the new symbol requires a minor
version-bump.)
2015-01-23 14:23:31 -06:00
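The size check described above can be repeated with a one-liner (96 bytes is the figure from this commit and is platform-dependent, so treat it as illustrative):

```cpp
#include <json/writer.h>
#include <iostream>

int main() {
  // Compare against the size recorded for the previous release to
  // confirm the new bitfield did not change the object layout.
  std::cout << sizeof(Json::StyledStreamWriter) << std::endl;  // 96 on the commit's platform
  return 0;
}
```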
Christopher Dunn
ddb4ff7dec Merge pull request #136 from cdunn2001/test-both-styled-writers
Test both styled writers

Not only does this now test StyledStreamWriter the same way as StyledWriter, but it also makes the former work more like the latter, indenting separate lines of a comment before a value. Might break some user tests (as `operator<<()` uses `StyledStreamWriter`) but basically a harmless improvement.

All tests pass.
2015-01-23 13:55:45 -06:00
Christopher Dunn
3efc587fba make StyledStreamWriter work more like StyledWriter
tests pass
2015-01-23 13:36:10 -06:00
Christopher Dunn
70704b9a70 test both StyledWriter and StyledStreamWriter 2015-01-23 13:36:10 -06:00
Christopher Dunn
ac6bbbc739 show cmd in runjsontests.py 2015-01-23 13:36:10 -06:00
Christopher Dunn
26c52861b9 pass --json-writer StyledWriter 2015-01-23 13:36:10 -06:00
Christopher Dunn
3682f60927 --json-writer arg 2015-01-23 13:36:10 -06:00
Christopher Dunn
58c31ac550 mv try-block 2015-01-23 12:35:12 -06:00
Christopher Dunn
08cfd02d8c fix minor bugs in test-runner 2015-01-23 12:35:12 -06:00
Christopher Dunn
79211e1aeb Options class for test 2015-01-23 12:35:12 -06:00
Christopher Dunn
632c9b5032 cleaner 2015-01-23 12:35:12 -06:00
Christopher Dunn
05810a7607 cleaner 2015-01-23 12:35:12 -06:00
Christopher Dunn
942e2c999a unindent test-code 2015-01-23 12:35:12 -06:00
Christopher Dunn
2160c9a042 switch from StyledWriter to StyledStream writer in tests 2015-01-23 09:02:44 -06:00
46 changed files with 5168 additions and 1054 deletions

View File: .travis.yml

@@ -7,12 +7,11 @@ language: cpp
 compiler:
   - gcc
   - clang
-script: cmake -DJSONCPP_LIB_BUILD_SHARED=$SHARED_LIBRARY -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DCMAKE_VERBOSE_MAKEFILE=$VERBOSE_MAKE . && make
+script: cmake -DJSONCPP_WITH_CMAKE_PACKAGE=$CMAKE_PKG -DJSONCPP_LIB_BUILD_SHARED=$SHARED_LIB -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DCMAKE_VERBOSE_MAKEFILE=$VERBOSE_MAKE . && make
 env:
   matrix:
-    - SHARED_LIBRARY=ON BUILD_TYPE=release VERBOSE_MAKE=false
-    - SHARED_LIBRARY=OFF BUILD_TYPE=release VERBOSE_MAKE=false
-    - SHARED_LIBRARY=OFF BUILD_TYPE=debug VERBOSE VERBOSE_MAKE=true
+    - SHARED_LIB=ON STATIC_LIB=ON CMAKE_PKG=ON BUILD_TYPE=release VERBOSE_MAKE=false
+    - SHARED_LIB=OFF STATIC_LIB=ON CMAKE_PKG=OFF BUILD_TYPE=debug VERBOSE_MAKE=true VERBOSE
 notifications:
   email:
     - aaronjjacobs@gmail.com

View File: CMakeLists.txt

@@ -85,10 +85,10 @@ endif( MSVC )
 if (CMAKE_CXX_COMPILER_ID MATCHES "Clang")
     # using regular Clang or AppleClang
-    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -std=c++11")
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall")
 elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
     # using GCC
-    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -std=c++0x")
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -Wextra -pedantic")
 endif()

 IF(JSONCPP_WITH_WARNING_AS_ERROR)

View File: NEWS.txt

@@ -80,7 +80,7 @@ New in SVN
     (e.g. MSVC 2008 command prompt in start menu) before running scons.
   - Added support for amalgamated source and header generation (a la sqlite).
-    Refer to README.txt section "Generating amalgamated source and header"
+    Refer to README.md section "Generating amalgamated source and header"
     for detail.
 * Value

View File: README.md

@@ -7,17 +7,20 @@ pairs.
 [json-org]: http://json.org/

-JsonCpp is a C++ library that allows manipulating JSON values, including
+[JsonCpp][] is a C++ library that allows manipulating JSON values, including
 serialization and deserialization to and from strings. It can also preserve
 existing comment in unserialization/serialization steps, making it a convenient
 format to store user input files.

+[JsonCpp]: http://open-source-parsers.github.io/jsoncpp-docs/doxygen/index.html
+
 ## A note on backward-compatibility
-Very soon, we are switching to C++11 only. For older compilers, try the `pre-C++11` branch.
+* `1.y.z` is built with C++11.
+* `0.8.z` can be used with older compilers.
+* Major versions maintain binary-compatibility.

 Using JsonCpp in your project
 -----------------------------
 The recommended approach to integrating JsonCpp in your project is to build
 the amalgamated source (a single `.cpp` file) with your own build system. This
 ensures consistency of compilation flags and ABI compatibility. See the section
@@ -28,13 +31,11 @@ should be included as follow:
     #include <json/json.h>

-If JsonCpp was build as a dynamic library on Windows, then your project needs to
+If JsonCpp was built as a dynamic library on Windows, then your project needs to
 define the macro `JSON_DLL`.

-Building and testing with new CMake
------------------------------------
+Building and testing with CMake
+-------------------------------
 [CMake][] is a C++ Makefiles/Solution generator. It is usually available on most
 Linux system as package. On Ubuntu:
@@ -66,7 +67,7 @@ Alternatively, from the command-line on Unix in the source directory:
     mkdir -p build/debug
     cd build/debug
-    cmake -DCMAKE_BUILD_TYPE=debug -DJSONCPP_LIB_BUILD_SHARED=OFF -G "Unix Makefiles" ../..
+    cmake -DCMAKE_BUILD_TYPE=debug -DJSONCPP_LIB_BUILD_STATIC=ON -DJSONCPP_LIB_BUILD_SHARED=OFF -G "Unix Makefiles" ../..
     make

 Running `cmake -h` will display the list of available generators (passed using
@@ -75,10 +76,8 @@ the `-G` option).
 By default CMake hides compilation commands. This can be modified by specifying
 `-DCMAKE_VERBOSE_MAKEFILE=true` when generating makefiles.

 Building and testing with SCons
 -------------------------------
 **Note:** The SCons-based build system is deprecated. Please use CMake; see the
 section above.
@@ -107,14 +106,7 @@ If you are building with Microsoft Visual Studio 2008, you need to set up the
 environment by running `vcvars32.bat` (e.g. MSVC 2008 command prompt) before
 running SCons.

-Running the tests manually
---------------------------
-
-Note that test can be run using SCons using the `check` target:
-
-    scons platform=$PLATFORM check
-
+# Running the tests manually

 You need to run tests manually only if you are troubleshooting an issue.
 In the instructions below, replace `path/to/jsontest` with the path of the
@@ -137,20 +129,21 @@ In the instructions below, replace `path/to/jsontest` with the path of the
     # You can run the tests using valgrind:
     python rununittests.py --valgrind path/to/test_lib_json

+## Running the tests using scons
+Note that tests can be run using SCons using the `check` target:
+
+    scons platform=$PLATFORM check
+
 Building the documentation
 --------------------------
 Run the Python script `doxybuild.py` from the top directory:

     python doxybuild.py --doxygen=$(which doxygen) --open --with-dot

 See `doxybuild.py --help` for options.

 Generating amalgamated source and header
 ----------------------------------------
 JsonCpp is provided with a script to generate a single header and a single
 source file to ease inclusion into an existing project. The amalgamated source
 can be generated at any time by running the following command from the
@@ -172,10 +165,8 @@ The amalgamated sources are generated by concatenating JsonCpp source in the
 correct order and defining the macro `JSON_IS_AMALGAMATION` to prevent inclusion
 of other headers.

 Adding a reader/writer test
 ---------------------------
 To add a test, you need to create two files in test/data:

 * a `TESTNAME.json` file, that contains the input document in JSON format.
@@ -195,10 +186,8 @@ The `TESTNAME.expected` file format is as follows:
 See the examples `test_complex_01.json` and `test_complex_01.expected` to better
 understand element paths.

 Understanding reader/writer test output
 ---------------------------------------
 When a test is run, output files are generated beside the input test files.
 Below is a short description of the content of each file:
@@ -215,10 +204,7 @@ Below is a short description of the content of each file:
 * `test_complex_01.process-output`: `jsontest` output, typically useful for
   understanding parsing errors.

 License
 -------
 See the `LICENSE` file for details. In summary, JsonCpp is licensed under the
 MIT license, or public domain if desired and recognized in your jurisdiction.

View File: SConstruct

@@ -237,7 +237,7 @@ RunUnitTests = ActionFactory(runUnitTests_action, runUnitTests_string )
 env.Alias( 'check' )

 srcdist_cmd = env['SRCDIST_ADD']( source = """
-    AUTHORS README.txt SConstruct
+    AUTHORS README.md SConstruct
     """.split() )
 env.Alias( 'src-dist', srcdist_cmd )

View File: amalgamate.py

@@ -10,46 +10,46 @@ import os.path
 import sys

 class AmalgamationFile:
-    def __init__( self, top_dir ):
+    def __init__(self, top_dir):
         self.top_dir = top_dir
         self.blocks = []

-    def add_text( self, text ):
-        if not text.endswith( "\n" ):
+    def add_text(self, text):
+        if not text.endswith("\n"):
             text += "\n"
-        self.blocks.append( text )
+        self.blocks.append(text)

-    def add_file( self, relative_input_path, wrap_in_comment=False ):
-        def add_marker( prefix ):
-            self.add_text( "" )
-            self.add_text( "// " + "/"*70 )
-            self.add_text( "// %s of content of file: %s" % (prefix, relative_input_path.replace("\\","/")) )
-            self.add_text( "// " + "/"*70 )
-            self.add_text( "" )
-        add_marker( "Beginning" )
-        f = open( os.path.join( self.top_dir, relative_input_path ), "rt" )
+    def add_file(self, relative_input_path, wrap_in_comment=False):
+        def add_marker(prefix):
+            self.add_text("")
+            self.add_text("// " + "/"*70)
+            self.add_text("// %s of content of file: %s" % (prefix, relative_input_path.replace("\\","/")))
+            self.add_text("// " + "/"*70)
+            self.add_text("")
+        add_marker("Beginning")
+        f = open(os.path.join(self.top_dir, relative_input_path), "rt")
         content = f.read()
         if wrap_in_comment:
             content = "/*\n" + content + "\n*/"
-        self.add_text( content )
+        self.add_text(content)
         f.close()
-        add_marker( "End" )
-        self.add_text( "\n\n\n\n" )
+        add_marker("End")
+        self.add_text("\n\n\n\n")

-    def get_value( self ):
-        return "".join( self.blocks ).replace("\r\n","\n")
+    def get_value(self):
+        return "".join(self.blocks).replace("\r\n","\n")

-    def write_to( self, output_path ):
-        output_dir = os.path.dirname( output_path )
-        if output_dir and not os.path.isdir( output_dir ):
-            os.makedirs( output_dir )
-        f = open( output_path, "wb" )
-        f.write( str.encode(self.get_value(), 'UTF-8') )
+    def write_to(self, output_path):
+        output_dir = os.path.dirname(output_path)
+        if output_dir and not os.path.isdir(output_dir):
+            os.makedirs(output_dir)
+        f = open(output_path, "wb")
+        f.write(str.encode(self.get_value(), 'UTF-8'))
         f.close()

-def amalgamate_source( source_top_dir=None,
+def amalgamate_source(source_top_dir=None,
                        target_source_path=None,
-                       header_include_path=None ):
+                       header_include_path=None):
     """Produces amalgated source.
     Parameters:
         source_top_dir: top-directory
@@ -57,69 +57,69 @@ def amalgamate_source( source_top_dir=None,
         header_include_path: generated header path relative to target_source_path.
     """
     print("Amalgating header...")
-    header = AmalgamationFile( source_top_dir )
-    header.add_text( "/// Json-cpp amalgated header (http://jsoncpp.sourceforge.net/)." )
-    header.add_text( "/// It is intented to be used with #include <%s>" % header_include_path )
-    header.add_file( "LICENSE", wrap_in_comment=True )
-    header.add_text( "#ifndef JSON_AMALGATED_H_INCLUDED" )
-    header.add_text( "# define JSON_AMALGATED_H_INCLUDED" )
-    header.add_text( "/// If defined, indicates that the source file is amalgated" )
-    header.add_text( "/// to prevent private header inclusion." )
-    header.add_text( "#define JSON_IS_AMALGAMATION" )
-    header.add_file( "include/json/version.h" )
-    header.add_file( "include/json/config.h" )
-    header.add_file( "include/json/forwards.h" )
-    header.add_file( "include/json/features.h" )
-    header.add_file( "include/json/value.h" )
-    header.add_file( "include/json/reader.h" )
-    header.add_file( "include/json/writer.h" )
-    header.add_file( "include/json/assertions.h" )
-    header.add_text( "#endif //ifndef JSON_AMALGATED_H_INCLUDED" )
+    header = AmalgamationFile(source_top_dir)
+    header.add_text("/// Json-cpp amalgated header (http://jsoncpp.sourceforge.net/).")
+    header.add_text("/// It is intented to be used with #include <%s>" % header_include_path)
+    header.add_file("LICENSE", wrap_in_comment=True)
+    header.add_text("#ifndef JSON_AMALGATED_H_INCLUDED")
+    header.add_text("# define JSON_AMALGATED_H_INCLUDED")
+    header.add_text("/// If defined, indicates that the source file is amalgated")
+    header.add_text("/// to prevent private header inclusion.")
+    header.add_text("#define JSON_IS_AMALGAMATION")
+    header.add_file("include/json/version.h")
+    header.add_file("include/json/config.h")
+    header.add_file("include/json/forwards.h")
+    header.add_file("include/json/features.h")
+    header.add_file("include/json/value.h")
+    header.add_file("include/json/reader.h")
+    header.add_file("include/json/writer.h")
+    header.add_file("include/json/assertions.h")
+    header.add_text("#endif //ifndef JSON_AMALGATED_H_INCLUDED")

-    target_header_path = os.path.join( os.path.dirname(target_source_path), header_include_path )
+    target_header_path = os.path.join(os.path.dirname(target_source_path), header_include_path)
     print("Writing amalgated header to %r" % target_header_path)
-    header.write_to( target_header_path )
+    header.write_to(target_header_path)

-    base, ext = os.path.splitext( header_include_path )
+    base, ext = os.path.splitext(header_include_path)
     forward_header_include_path = base + "-forwards" + ext
     print("Amalgating forward header...")
-    header = AmalgamationFile( source_top_dir )
-    header.add_text( "/// Json-cpp amalgated forward header (http://jsoncpp.sourceforge.net/)." )
-    header.add_text( "/// It is intented to be used with #include <%s>" % forward_header_include_path )
-    header.add_text( "/// This header provides forward declaration for all JsonCpp types." )
-    header.add_file( "LICENSE", wrap_in_comment=True )
-    header.add_text( "#ifndef JSON_FORWARD_AMALGATED_H_INCLUDED" )
-    header.add_text( "# define JSON_FORWARD_AMALGATED_H_INCLUDED" )
-    header.add_text( "/// If defined, indicates that the source file is amalgated" )
-    header.add_text( "/// to prevent private header inclusion." )
-    header.add_text( "#define JSON_IS_AMALGAMATION" )
-    header.add_file( "include/json/config.h" )
-    header.add_file( "include/json/forwards.h" )
-    header.add_text( "#endif //ifndef JSON_FORWARD_AMALGATED_H_INCLUDED" )
+    header = AmalgamationFile(source_top_dir)
+    header.add_text("/// Json-cpp amalgated forward header (http://jsoncpp.sourceforge.net/).")
+    header.add_text("/// It is intented to be used with #include <%s>" % forward_header_include_path)
+    header.add_text("/// This header provides forward declaration for all JsonCpp types.")
+    header.add_file("LICENSE", wrap_in_comment=True)
+    header.add_text("#ifndef JSON_FORWARD_AMALGATED_H_INCLUDED")
+    header.add_text("# define JSON_FORWARD_AMALGATED_H_INCLUDED")
+    header.add_text("/// If defined, indicates that the source file is amalgated")
+    header.add_text("/// to prevent private header inclusion.")
+    header.add_text("#define JSON_IS_AMALGAMATION")
+    header.add_file("include/json/config.h")
+    header.add_file("include/json/forwards.h")
+    header.add_text("#endif //ifndef JSON_FORWARD_AMALGATED_H_INCLUDED")

-    target_forward_header_path = os.path.join( os.path.dirname(target_source_path),
-                                               forward_header_include_path )
+    target_forward_header_path = os.path.join(os.path.dirname(target_source_path),
+                                              forward_header_include_path)
     print("Writing amalgated forward header to %r" % target_forward_header_path)
-    header.write_to( target_forward_header_path )
+    header.write_to(target_forward_header_path)

     print("Amalgating source...")
-    source = AmalgamationFile( source_top_dir )
-    source.add_text( "/// Json-cpp amalgated source (http://jsoncpp.sourceforge.net/)." )
-    source.add_text( "/// It is intented to be used with #include <%s>" % header_include_path )
-    source.add_file( "LICENSE", wrap_in_comment=True )
-    source.add_text( "" )
-    source.add_text( "#include <%s>" % header_include_path )
-    source.add_text( "" )
+    source = AmalgamationFile(source_top_dir)
+    source.add_text("/// Json-cpp amalgated source (http://jsoncpp.sourceforge.net/).")
+    source.add_text("/// It is intented to be used with #include <%s>" % header_include_path)
+    source.add_file("LICENSE", wrap_in_comment=True)
+    source.add_text("")
+    source.add_text("#include <%s>" % header_include_path)
+    source.add_text("")
     lib_json = "src/lib_json"
-    source.add_file( os.path.join(lib_json, "json_tool.h") )
-    source.add_file( os.path.join(lib_json, "json_reader.cpp") )
-    source.add_file( os.path.join(lib_json, "json_batchallocator.h") )
-    source.add_file( os.path.join(lib_json, "json_valueiterator.inl") )
-    source.add_file( os.path.join(lib_json, "json_value.cpp") )
-    source.add_file( os.path.join(lib_json, "json_writer.cpp") )
+    source.add_file(os.path.join(lib_json, "json_tool.h"))
+    source.add_file(os.path.join(lib_json, "json_reader.cpp"))
+    source.add_file(os.path.join(lib_json, "json_batchallocator.h"))
+    source.add_file(os.path.join(lib_json, "json_valueiterator.inl"))
+    source.add_file(os.path.join(lib_json, "json_value.cpp"))
+    source.add_file(os.path.join(lib_json, "json_writer.cpp"))
     print("Writing amalgated source to %r" % target_source_path)
-    source.write_to( target_source_path )
+    source.write_to(target_source_path)

 def main():
     usage = """%prog [options]
@@ -137,12 +137,12 @@ Generate a single amalgated source and header file from the sources.
     parser.enable_interspersed_args()
     options, args = parser.parse_args()

-    msg = amalgamate_source( source_top_dir=options.top_dir,
-                             target_source_path=options.target_source_path,
-                             header_include_path=options.header_include_path )
+    msg = amalgamate_source(source_top_dir=options.top_dir,
+                            target_source_path=options.target_source_path,
+                            header_include_path=options.header_include_path)
     if msg:
-        sys.stderr.write( msg + "\n" )
-        sys.exit( 1 )
+        sys.stderr.write(msg + "\n")
+        sys.exit(1)
     else:
         print("Source succesfully amalagated")

View File

@@ -1,5 +1,19 @@
-all: build test-amalgamate
+# This is only for jsoncpp developers/contributors.
+# We use this to sign releases, generate documentation, etc.
+VER?=$(shell cat version)
+default:
+	@echo "VER=${VER}"
+sign: jsoncpp-${VER}.tar.gz
+	gpg --armor --detach-sign $<
+	gpg --verify $<.asc
+	# Then upload .asc to the release.
+jsoncpp-%.tar.gz:
+	curl https://github.com/open-source-parsers/jsoncpp/archive/$*.tar.gz -o $@
+dox:
+	python doxybuild.py --doxygen=$$(which doxygen) --in doc/web_doxyfile.in
+	rsync -va --delete dist/doxygen/jsoncpp-api-html-${VER}/ ../jsoncpp-docs/doxygen/
+	# Then 'git add -A' and 'git push' in jsoncpp-docs.

 build:
 	mkdir -p build/debug
 	cd build/debug; cmake -DCMAKE_BUILD_TYPE=debug -DJSONCPP_LIB_BUILD_SHARED=ON -G "Unix Makefiles" ../..
@@ -7,8 +21,11 @@ build:
 # Currently, this depends on include/json/version.h generated
 # by cmake.
-test-amalgamate: build
+test-amalgamate:
 	python2.7 amalgamate.py
 	python3.4 amalgamate.py

+clean:
+	\rm -rf *.gz *.asc dist/
+
 .PHONY: build

View File: devtools/antglob.py

@@ -54,9 +54,9 @@ LINKS = DIR_LINK | FILE_LINK
 ALL_NO_LINK = DIR | FILE
 ALL = DIR | FILE | LINKS

-_ANT_RE = re.compile( r'(/\*\*/)|(\*\*/)|(/\*\*)|(\*)|(/)|([^\*/]*)' )
+_ANT_RE = re.compile(r'(/\*\*/)|(\*\*/)|(/\*\*)|(\*)|(/)|([^\*/]*)')

-def ant_pattern_to_re( ant_pattern ):
+def ant_pattern_to_re(ant_pattern):
     """Generates a regular expression from the ant pattern.
     Matching convention:
     **/a: match 'a', 'dir/a', 'dir1/dir2/a'
@@ -65,30 +65,30 @@ def ant_pattern_to_re( ant_pattern ):
     """
     rex = ['^']
     next_pos = 0
-    sep_rex = r'(?:/|%s)' % re.escape( os.path.sep )
+    sep_rex = r'(?:/|%s)' % re.escape(os.path.sep)
     ## print 'Converting', ant_pattern
-    for match in _ANT_RE.finditer( ant_pattern ):
+    for match in _ANT_RE.finditer(ant_pattern):
         ## print 'Matched', match.group()
         ## print match.start(0), next_pos
         if match.start(0) != next_pos:
-            raise ValueError( "Invalid ant pattern" )
+            raise ValueError("Invalid ant pattern")
         if match.group(1): # /**/
-            rex.append( sep_rex + '(?:.*%s)?' % sep_rex )
+            rex.append(sep_rex + '(?:.*%s)?' % sep_rex)
         elif match.group(2): # **/
-            rex.append( '(?:.*%s)?' % sep_rex )
+            rex.append('(?:.*%s)?' % sep_rex)
         elif match.group(3): # /**
-            rex.append( sep_rex + '.*' )
+            rex.append(sep_rex + '.*')
         elif match.group(4): # *
-            rex.append( '[^/%s]*' % re.escape(os.path.sep) )
+            rex.append('[^/%s]*' % re.escape(os.path.sep))
         elif match.group(5): # /
-            rex.append( sep_rex )
+            rex.append(sep_rex)
         else: # somepath
-            rex.append( re.escape(match.group(6)) )
+            rex.append(re.escape(match.group(6)))
         next_pos = match.end()
     rex.append('$')
-    return re.compile( ''.join( rex ) )
+    return re.compile(''.join(rex))

-def _as_list( l ):
+def _as_list(l):
     if isinstance(l, basestring):
         return l.split()
     return l
@@ -105,37 +105,37 @@ def glob(dir_path,
     dir_path = dir_path.replace('/',os.path.sep)
     entry_type_filter = entry_type

-    def is_pruned_dir( dir_name ):
+    def is_pruned_dir(dir_name):
         for pattern in prune_dirs:
-            if fnmatch.fnmatch( dir_name, pattern ):
+            if fnmatch.fnmatch(dir_name, pattern):
                 return True
         return False

-    def apply_filter( full_path, filter_rexs ):
+    def apply_filter(full_path, filter_rexs):
         """Return True if at least one of the filter regular expression match full_path."""
         for rex in filter_rexs:
-            if rex.match( full_path ):
+            if rex.match(full_path):
                 return True
         return False

-    def glob_impl( root_dir_path ):
+    def glob_impl(root_dir_path):
         child_dirs = [root_dir_path]
         while child_dirs:
             dir_path = child_dirs.pop()
-            for entry in listdir( dir_path ):
-                full_path = os.path.join( dir_path, entry )
+            for entry in listdir(dir_path):
+                full_path = os.path.join(dir_path, entry)
                 ## print 'Testing:', full_path,
-                is_dir = os.path.isdir( full_path )
-                if is_dir and not is_pruned_dir( entry ): # explore child directory ?
+                is_dir = os.path.isdir(full_path)
+                if is_dir and not is_pruned_dir(entry): # explore child directory ?
                     ## print '===> marked for recursion',
-                    child_dirs.append( full_path )
-                included = apply_filter( full_path, include_filter )
-                rejected = apply_filter( full_path, exclude_filter )
+                    child_dirs.append(full_path)
+                included = apply_filter(full_path, include_filter)
+                rejected = apply_filter(full_path, exclude_filter)
                 if not included or rejected: # do not include entry ?
                     ## print '=> not included or rejected'
                     continue
-                link = os.path.islink( full_path )
-                is_file = os.path.isfile( full_path )
+                link = os.path.islink(full_path)
+                is_file = os.path.isfile(full_path)
                 if not is_file and not is_dir:
                     ## print '=> unknown entry type'
                     continue
@@ -146,57 +146,57 @@ def glob(dir_path,
                 ## print '=> type: %d' % entry_type,
                 if (entry_type & entry_type_filter) != 0:
                     ## print '  => KEEP'
-                    yield os.path.join( dir_path, entry )
+                    yield os.path.join(dir_path, entry)
                 ## else:
                 ##     print '  => TYPE REJECTED'
-    return list( glob_impl( dir_path ) )
+    return list(glob_impl(dir_path))

 if __name__ == "__main__":
     import unittest

     class AntPatternToRETest(unittest.TestCase):
-##        def test_conversion( self ):
-##            self.assertEqual( '^somepath$', ant_pattern_to_re( 'somepath' ).pattern )
+##        def test_conversion(self):
+##            self.assertEqual('^somepath$', ant_pattern_to_re('somepath').pattern)

-        def test_matching( self ):
-            test_cases = [ ( 'path',
+        def test_matching(self):
+            test_cases = [ ('path',
                              ['path'],
-                             ['somepath', 'pathsuffix', '/path', '/path'] ),
-                           ( '*.py',
+                             ['somepath', 'pathsuffix', '/path', '/path']),
+                           ('*.py',
                              ['source.py', 'source.ext.py', '.py'],
-                             ['path/source.py', '/.py', 'dir.py/z', 'z.pyc', 'z.c'] ),
-                           ( '**/path',
+                             ['path/source.py', '/.py', 'dir.py/z', 'z.pyc', 'z.c']),
+                           ('**/path',
                              ['path', '/path', '/a/path', 'c:/a/path', '/a/b/path', '//a/path', '/a/path/b/path'],
-                             ['path/', 'a/path/b', 'dir.py/z', 'somepath', 'pathsuffix', 'a/somepath'] ),
-                           ( 'path/**',
+                             ['path/', 'a/path/b', 'dir.py/z', 'somepath', 'pathsuffix', 'a/somepath']),
+                           ('path/**',
                              ['path/a', 'path/path/a', 'path//'],
-                             ['path', 'somepath/a', 'a/path', 'a/path/a', 'pathsuffix/a'] ),
-                           ( '/**/path',
+                             ['path', 'somepath/a', 'a/path', 'a/path/a', 'pathsuffix/a']),
+                           ('/**/path',
                              ['/path', '/a/path', '/a/b/path/path', '/path/path'],
-                             ['path', 'path/', 'a/path', '/pathsuffix', '/somepath'] ),
-                           ( 'a/b',
+                             ['path', 'path/', 'a/path', '/pathsuffix', '/somepath']),
+                           ('a/b',
                              ['a/b'],
-                             ['somea/b', 'a/bsuffix', 'a/b/c'] ),
-                           ( '**/*.py',
+                             ['somea/b', 'a/bsuffix', 'a/b/c']),
+                           ('**/*.py',
                              ['script.py', 'src/script.py', 'a/b/script.py', '/a/b/script.py'],
-                             ['script.pyc', 'script.pyo', 'a.py/b'] ),
-                           ( 'src/**/*.py',
+                             ['script.pyc', 'script.pyo', 'a.py/b']),
+                           ('src/**/*.py',
                              ['src/a.py', 'src/dir/a.py'],
-                             ['a/src/a.py', '/src/a.py'] ),
+                             ['a/src/a.py', '/src/a.py']),
                            ]
             for ant_pattern, accepted_matches, rejected_matches in list(test_cases):
-                def local_path( paths ):
+                def local_path(paths):
                     return [ p.replace('/',os.path.sep) for p in paths ]
-                test_cases.append( (ant_pattern, local_path(accepted_matches), local_path( rejected_matches )) )
+                test_cases.append((ant_pattern, local_path(accepted_matches), local_path(rejected_matches)))
             for ant_pattern, accepted_matches, rejected_matches in test_cases:
-                rex = ant_pattern_to_re( ant_pattern )
+                rex = ant_pattern_to_re(ant_pattern)
                 print('ant_pattern:', ant_pattern, ' => ', rex.pattern)
                 for accepted_match in accepted_matches:
                     print('Accepted?:', accepted_match)
-                    self.assertTrue( rex.match( accepted_match ) is not None )
+                    self.assertTrue(rex.match(accepted_match) is not None)
                 for rejected_match in rejected_matches:
                     print('Rejected?:', rejected_match)
-                    self.assertTrue( rex.match( rejected_match ) is None )
+                    self.assertTrue(rex.match(rejected_match) is None)

     unittest.main()

View File

@@ -18,62 +18,62 @@ class BuildDesc:
self.build_type = build_type self.build_type = build_type
self.generator = generator self.generator = generator
def merged_with( self, build_desc ): def merged_with(self, build_desc):
"""Returns a new BuildDesc by merging field content. """Returns a new BuildDesc by merging field content.
Prefer build_desc fields to self fields for single valued field. Prefer build_desc fields to self fields for single valued field.
""" """
return BuildDesc( self.prepend_envs + build_desc.prepend_envs, return BuildDesc(self.prepend_envs + build_desc.prepend_envs,
self.variables + build_desc.variables, self.variables + build_desc.variables,
build_desc.build_type or self.build_type, build_desc.build_type or self.build_type,
build_desc.generator or self.generator ) build_desc.generator or self.generator)
def env( self ): def env(self):
environ = os.environ.copy() environ = os.environ.copy()
for values_by_name in self.prepend_envs: for values_by_name in self.prepend_envs:
for var, value in list(values_by_name.items()): for var, value in list(values_by_name.items()):
var = var.upper() var = var.upper()
if type(value) is unicode: if type(value) is unicode:
value = value.encode( sys.getdefaultencoding() ) value = value.encode(sys.getdefaultencoding())
if var in environ: if var in environ:
environ[var] = value + os.pathsep + environ[var] environ[var] = value + os.pathsep + environ[var]
else: else:
environ[var] = value environ[var] = value
return environ return environ
def cmake_args( self ): def cmake_args(self):
args = ["-D%s" % var for var in self.variables] args = ["-D%s" % var for var in self.variables]
# skip build type for Visual Studio solution as it cause warning # skip build type for Visual Studio solution as it cause warning
if self.build_type and 'Visual' not in self.generator: if self.build_type and 'Visual' not in self.generator:
args.append( "-DCMAKE_BUILD_TYPE=%s" % self.build_type ) args.append("-DCMAKE_BUILD_TYPE=%s" % self.build_type)
if self.generator: if self.generator:
args.extend( ['-G', self.generator] ) args.extend(['-G', self.generator])
return args return args
def __repr__( self ): def __repr__(self):
return "BuildDesc( %s, build_type=%s )" % (" ".join( self.cmake_args()), self.build_type) return "BuildDesc(%s, build_type=%s)" % (" ".join(self.cmake_args()), self.build_type)
class BuildData: class BuildData:
def __init__( self, desc, work_dir, source_dir ): def __init__(self, desc, work_dir, source_dir):
self.desc = desc self.desc = desc
self.work_dir = work_dir self.work_dir = work_dir
self.source_dir = source_dir self.source_dir = source_dir
self.cmake_log_path = os.path.join( work_dir, 'batchbuild_cmake.log' ) self.cmake_log_path = os.path.join(work_dir, 'batchbuild_cmake.log')
self.build_log_path = os.path.join( work_dir, 'batchbuild_build.log' ) self.build_log_path = os.path.join(work_dir, 'batchbuild_build.log')
self.cmake_succeeded = False self.cmake_succeeded = False
self.build_succeeded = False self.build_succeeded = False
def execute_build(self): def execute_build(self):
print('Build %s' % self.desc) print('Build %s' % self.desc)
self._make_new_work_dir( ) self._make_new_work_dir()
self.cmake_succeeded = self._generate_makefiles( ) self.cmake_succeeded = self._generate_makefiles()
if self.cmake_succeeded: if self.cmake_succeeded:
self.build_succeeded = self._build_using_makefiles( ) self.build_succeeded = self._build_using_makefiles()
return self.build_succeeded return self.build_succeeded
def _generate_makefiles(self): def _generate_makefiles(self):
print(' Generating makefiles: ', end=' ') print(' Generating makefiles: ', end=' ')
cmd = ['cmake'] + self.desc.cmake_args( ) + [os.path.abspath( self.source_dir )] cmd = ['cmake'] + self.desc.cmake_args() + [os.path.abspath(self.source_dir)]
succeeded = self._execute_build_subprocess( cmd, self.desc.env(), self.cmake_log_path ) succeeded = self._execute_build_subprocess(cmd, self.desc.env(), self.cmake_log_path)
print('done' if succeeded else 'FAILED') print('done' if succeeded else 'FAILED')
return succeeded return succeeded
@@ -82,58 +82,58 @@ class BuildData:
cmd = ['cmake', '--build', self.work_dir] cmd = ['cmake', '--build', self.work_dir]
if self.desc.build_type: if self.desc.build_type:
cmd += ['--config', self.desc.build_type] cmd += ['--config', self.desc.build_type]
succeeded = self._execute_build_subprocess( cmd, self.desc.env(), self.build_log_path ) succeeded = self._execute_build_subprocess(cmd, self.desc.env(), self.build_log_path)
print('done' if succeeded else 'FAILED') print('done' if succeeded else 'FAILED')
return succeeded return succeeded
def _execute_build_subprocess(self, cmd, env, log_path): def _execute_build_subprocess(self, cmd, env, log_path):
process = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=self.work_dir, process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=self.work_dir,
env=env ) env=env)
stdout, _ = process.communicate( ) stdout, _ = process.communicate()
succeeded = (process.returncode == 0) succeeded = (process.returncode == 0)
with open( log_path, 'wb' ) as flog: with open(log_path, 'wb') as flog:
log = ' '.join( cmd ) + '\n' + stdout + '\nExit code: %r\n' % process.returncode log = ' '.join(cmd) + '\n' + stdout + '\nExit code: %r\n' % process.returncode
flog.write( fix_eol( log ) ) flog.write(fix_eol(log))
return succeeded return succeeded
def _make_new_work_dir(self): def _make_new_work_dir(self):
if os.path.isdir( self.work_dir ): if os.path.isdir(self.work_dir):
print(' Removing work directory', self.work_dir) print(' Removing work directory', self.work_dir)
shutil.rmtree( self.work_dir, ignore_errors=True ) shutil.rmtree(self.work_dir, ignore_errors=True)
if not os.path.isdir( self.work_dir ): if not os.path.isdir(self.work_dir):
os.makedirs( self.work_dir ) os.makedirs(self.work_dir)
def fix_eol( stdout ): def fix_eol(stdout):
"""Fixes wrong EOL produced by cmake --build on Windows (\r\r\n instead of \r\n). """Fixes wrong EOL produced by cmake --build on Windows (\r\r\n instead of \r\n).
""" """
return re.sub( '\r*\n', os.linesep, stdout ) return re.sub('\r*\n', os.linesep, stdout)
def load_build_variants_from_config( config_path ): def load_build_variants_from_config(config_path):
with open( config_path, 'rb' ) as fconfig: with open(config_path, 'rb') as fconfig:
data = json.load( fconfig ) data = json.load(fconfig)
variants = data[ 'cmake_variants' ] variants = data[ 'cmake_variants' ]
build_descs_by_axis = collections.defaultdict( list ) build_descs_by_axis = collections.defaultdict(list)
for axis in variants: for axis in variants:
axis_name = axis["name"] axis_name = axis["name"]
build_descs = [] build_descs = []
if "generators" in axis: if "generators" in axis:
for generator_data in axis["generators"]: for generator_data in axis["generators"]:
for generator in generator_data["generator"]: for generator in generator_data["generator"]:
build_desc = BuildDesc( generator=generator, build_desc = BuildDesc(generator=generator,
prepend_envs=generator_data.get("env_prepend") ) prepend_envs=generator_data.get("env_prepend"))
build_descs.append( build_desc ) build_descs.append(build_desc)
elif "variables" in axis: elif "variables" in axis:
for variables in axis["variables"]: for variables in axis["variables"]:
build_desc = BuildDesc( variables=variables ) build_desc = BuildDesc(variables=variables)
build_descs.append( build_desc ) build_descs.append(build_desc)
elif "build_types" in axis: elif "build_types" in axis:
for build_type in axis["build_types"]: for build_type in axis["build_types"]:
build_desc = BuildDesc( build_type=build_type ) build_desc = BuildDesc(build_type=build_type)
build_descs.append( build_desc ) build_descs.append(build_desc)
build_descs_by_axis[axis_name].extend( build_descs ) build_descs_by_axis[axis_name].extend(build_descs)
return build_descs_by_axis return build_descs_by_axis
def generate_build_variants( build_descs_by_axis ): def generate_build_variants(build_descs_by_axis):
"""Returns a list of BuildDesc generated for the partial BuildDesc for each axis.""" """Returns a list of BuildDesc generated for the partial BuildDesc for each axis."""
axis_names = list(build_descs_by_axis.keys()) axis_names = list(build_descs_by_axis.keys())
build_descs = [] build_descs = []
@@ -141,8 +141,8 @@ def generate_build_variants( build_descs_by_axis ):
if len(build_descs): if len(build_descs):
# for each existing build_desc and each axis build desc, create a new build_desc # for each existing build_desc and each axis build desc, create a new build_desc
new_build_descs = [] new_build_descs = []
for prototype_build_desc, axis_build_desc in itertools.product( build_descs, axis_build_descs): for prototype_build_desc, axis_build_desc in itertools.product(build_descs, axis_build_descs):
new_build_descs.append( prototype_build_desc.merged_with( axis_build_desc ) ) new_build_descs.append(prototype_build_desc.merged_with(axis_build_desc))
build_descs = new_build_descs build_descs = new_build_descs
else: else:
build_descs = axis_build_descs build_descs = axis_build_descs
@@ -174,60 +174,57 @@ $tr_builds
</table> </table>
</body></html>''') </body></html>''')
def generate_html_report(html_report_path, builds):
    report_dir = os.path.dirname(html_report_path)
    # Vertical axis: generator
    # Horizontal: variables, then build_type
    builds_by_generator = collections.defaultdict(list)
    variables = set()
    build_types_by_variable = collections.defaultdict(set)
    build_by_pos_key = {} # { (generator, var_key, build_type): build }
    for build in builds:
        builds_by_generator[build.desc.generator].append(build)
        var_key = tuple(sorted(build.desc.variables))
        variables.add(var_key)
        build_types_by_variable[var_key].add(build.desc.build_type)
        pos_key = (build.desc.generator, var_key, build.desc.build_type)
        build_by_pos_key[pos_key] = build
    variables = sorted(variables)
    th_vars = []
    th_build_types = []
    for variable in variables:
        build_types = sorted(build_types_by_variable[variable])
        nb_build_type = len(build_types_by_variable[variable])
        th_vars.append('<th colspan="%d">%s</th>' % (nb_build_type, cgi.escape(' '.join(variable))))
        for build_type in build_types:
            th_build_types.append('<th>%s</th>' % cgi.escape(build_type))
    tr_builds = []
    for generator in sorted(builds_by_generator):
        tds = [ '<td>%s</td>\n' % cgi.escape(generator) ]
        for variable in variables:
            build_types = sorted(build_types_by_variable[variable])
            for build_type in build_types:
                pos_key = (generator, variable, build_type)
                build = build_by_pos_key.get(pos_key)
                if build:
                    cmake_status = 'ok' if build.cmake_succeeded else 'FAILED'
                    build_status = 'ok' if build.build_succeeded else 'FAILED'
                    cmake_log_url = os.path.relpath(build.cmake_log_path, report_dir)
                    build_log_url = os.path.relpath(build.build_log_path, report_dir)
                    td = '<td class="%s"><a href="%s" class="%s">CMake: %s</a>' % ( build_status.lower(), cmake_log_url, cmake_status.lower(), cmake_status)
                    if build.cmake_succeeded:
                        td += '<br><a href="%s" class="%s">Build: %s</a>' % ( build_log_url, build_status.lower(), build_status)
                    td += '</td>'
                else:
                    td = '<td></td>'
                tds.append(td)
        tr_builds.append('<tr>%s</tr>' % '\n'.join(tds))
    html = HTML_TEMPLATE.substitute( title='Batch build report',
        th_vars=' '.join(th_vars),
        th_build_types=' '.join(th_build_types),
        tr_builds='\n'.join(tr_builds))
    with open(html_report_path, 'wt') as fhtml:
        fhtml.write(html)
    print('HTML report generated in:', html_report_path)
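The report is essentially a pivot table: rows are generators, columns are (variables, build_type) pairs, and build_by_pos_key resolves each cell. A toy sketch of that lookup, using a hypothetical Build tuple in place of the script's BuildData:

import collections

Build = collections.namedtuple('Build', 'generator variables build_type ok')
builds = [
    Build('Ninja', ('FOO=1',), 'debug', True),
    Build('Ninja', ('FOO=1',), 'release', False),
]

build_by_pos_key = {}
for b in builds:
    var_key = tuple(sorted(b.variables))
    build_by_pos_key[(b.generator, var_key, b.build_type)] = b

# A missing cell simply renders as an empty <td>.
print(build_by_pos_key.get(('Ninja', ('FOO=1',), 'release')).ok)  # False
print(build_by_pos_key.get(('Ninja', ('FOO=1',), 'profile')))     # None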
def main():
@@ -246,33 +243,33 @@ python devtools\batchbuild.py e:\buildbots\jsoncpp\build . devtools\agent_vmw7.j
    parser.enable_interspersed_args()
    options, args = parser.parse_args()
    if len(args) < 3:
        parser.error("Missing one of WORK_DIR SOURCE_DIR CONFIG_JSON_PATH.")
    work_dir = args[0]
    source_dir = args[1].rstrip('/\\')
    config_paths = args[2:]
    for config_path in config_paths:
        if not os.path.isfile(config_path):
            parser.error("Can not read: %r" % config_path)
    # generate build variants
    build_descs = []
    for config_path in config_paths:
        build_descs_by_axis = load_build_variants_from_config(config_path)
        build_descs.extend(generate_build_variants(build_descs_by_axis))
    print('Build variants (%d):' % len(build_descs))
    # assign build directory for each variant
    if not os.path.isdir(work_dir):
        os.makedirs(work_dir)
    builds = []
    with open(os.path.join(work_dir, 'matrix-dir-map.txt'), 'wt') as fmatrixmap:
        for index, build_desc in enumerate(build_descs):
            build_desc_work_dir = os.path.join(work_dir, '%03d' % (index+1))
            builds.append(BuildData(build_desc, build_desc_work_dir, source_dir))
            fmatrixmap.write('%s: %s\n' % (build_desc_work_dir, build_desc))
    for build in builds:
        build.execute_build()
    html_report_path = os.path.join(work_dir, 'batchbuild-report.html')
    generate_html_report(html_report_path, builds)
    print('Done')


@@ -1,10 +1,10 @@
from __future__ import print_function
import os.path

def fix_source_eol(path, is_dry_run = True, verbose = True, eol = '\n'):
    """Makes sure that all sources have the specified eol sequence (default: unix)."""
    if not os.path.isfile(path):
        raise ValueError('Path "%s" is not a file' % path)
    try:
        f = open(path, 'rb')
    except IOError as msg:
@@ -29,27 +29,27 @@ def fix_source_eol( path, is_dry_run = True, verbose = True, eol = '\n' ):
##
##
##
##def _do_fix(is_dry_run = True):
##    from waftools import antglob
##    python_sources = antglob.glob('.',
##        includes = '**/*.py **/wscript **/wscript_build',
##        excludes = antglob.default_excludes + './waf.py',
##        prune_dirs = antglob.prune_dirs + 'waf-* ./build')
##    for path in python_sources:
##        _fix_python_source(path, is_dry_run)
##
##    cpp_sources = antglob.glob('.',
##        includes = '**/*.cpp **/*.h **/*.inl',
##        prune_dirs = antglob.prune_dirs + 'waf-* ./build')
##    for path in cpp_sources:
##        _fix_source_eol(path, is_dry_run)
##
##
##def dry_fix(context):
##    _do_fix(is_dry_run = True)
##
##def fix(context):
##    _do_fix(is_dry_run = False)
##
##def shutdown():
##    pass


@@ -13,7 +13,7 @@ BRIEF_LICENSE = LICENSE_BEGIN + """2007-2010 Baptiste Lepilleur
""".replace('\r\n','\n') """.replace('\r\n','\n')
def update_license( path, dry_run, show_diff ): def update_license(path, dry_run, show_diff):
"""Update the license statement in the specified file. """Update the license statement in the specified file.
Parameters: Parameters:
path: path of the C++ source file to update. path: path of the C++ source file to update.
@@ -22,28 +22,28 @@ def update_license( path, dry_run, show_diff ):
      show_diff: if True, print the path of the file that would be modified,
                 as well as the change made to the file.
    """
    with open(path, 'rt') as fin:
        original_text = fin.read().replace('\r\n','\n')
        newline = fin.newlines and fin.newlines[0] or '\n'
    if not original_text.startswith(LICENSE_BEGIN):
        # No existing license found => prepend it
        new_text = BRIEF_LICENSE + original_text
    else:
        license_end_index = original_text.index('\n\n') # search first blank line
        new_text = BRIEF_LICENSE + original_text[license_end_index+2:]
    if original_text != new_text:
        if not dry_run:
            with open(path, 'wb') as fout:
                fout.write(new_text.replace('\n', newline))
        print('Updated', path)
        if show_diff:
            import difflib
            print('\n'.join(difflib.unified_diff(original_text.split('\n'),
                                                 new_text.split('\n'))))
        return True
    return False

def update_license_in_source_directories(source_dirs, dry_run, show_diff):
    """Updates license text in C++ source files found in directory source_dirs.
    Parameters:
      source_dirs: list of directories to scan for C++ sources. Directories are
@@ -56,11 +56,11 @@ def update_license_in_source_directories( source_dirs, dry_run, show_diff ):
    from devtools import antglob
    prune_dirs = antglob.prune_dirs + 'scons-local* ./build* ./libs ./dist'
    for source_dir in source_dirs:
        cpp_sources = antglob.glob(source_dir,
            includes = '''**/*.h **/*.cpp **/*.inl''',
            prune_dirs = prune_dirs)
        for source in cpp_sources:
            update_license(source, dry_run, show_diff)
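A sketch of a dry run over the usual jsoncpp source trees; the directory names follow the usage line below and are shown for illustration:

from devtools import licenseupdater

# Dry run with diffs: report which files would gain the BRIEF_LICENSE header.
licenseupdater.update_license_in_source_directories(
    ['include', 'src'], dry_run=True, show_diff=True)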
def main():
    usage = """%prog DIR [DIR2...]
@@ -83,7 +83,7 @@ python devtools\licenseupdater.py include src
help="""On update, show change made to the file.""") help="""On update, show change made to the file.""")
parser.enable_interspersed_args() parser.enable_interspersed_args()
options, args = parser.parse_args() options, args = parser.parse_args()
update_license_in_source_directories( args, options.dry_run, options.show_diff ) update_license_in_source_directories(args, options.dry_run, options.show_diff)
print('Done') print('Done')
if __name__ == '__main__': if __name__ == '__main__':


@@ -1,5 +1,5 @@
from contextlib import closing
import os
import tarfile

TARGZ_DEFAULT_COMPRESSION_LEVEL = 9
@@ -13,41 +13,35 @@ def make_tarball(tarball_path, sources, base_dir, prefix_dir=''):
       prefix_dir: all files stored in the tarball will be placed under the
       sub-directory prefix_dir. Set to '' to make them children of the root.
    """
    base_dir = os.path.normpath(os.path.abspath(base_dir))
    def archive_name(path):
        """Makes path relative to base_dir."""
        path = os.path.normpath(os.path.abspath(path))
        common_path = os.path.commonprefix((base_dir, path))
        archive_name = path[len(common_path):]
        if os.path.isabs(archive_name):
            archive_name = archive_name[1:]
        return os.path.join(prefix_dir, archive_name)
    def visit(tar, dirname, names):
        for name in names:
            path = os.path.join(dirname, name)
            if os.path.isfile(path):
                path_in_tar = archive_name(path)
                tar.add(path, path_in_tar)
    compression = TARGZ_DEFAULT_COMPRESSION_LEVEL
    with closing(tarfile.TarFile.open(tarball_path, 'w:gz',
                                      compresslevel=compression)) as tar:
        for source in sources:
            source_path = source
            if os.path.isdir(source):
                for dirpath, dirnames, filenames in os.walk(source_path):
                    visit(tar, dirpath, filenames)
            else:
                path_in_tar = archive_name(source_path)
                tar.add(source_path, path_in_tar) # filename, arcname
def decompress(tarball_path, base_dir):
    """Decompress the gzipped tarball into directory base_dir.
    """
    with closing(tarfile.TarFile.open(tarball_path)) as tar:
        tar.extractall(base_dir)
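A small round trip using these two helpers; the paths and prefix are illustrative:

from devtools import tarball

# Pack README.md and the src/ tree under a 'jsoncpp-src/' prefix...
tarball.make_tarball('jsoncpp-src.tar.gz', ['README.md', 'src'], '.', 'jsoncpp-src')
# ...then unpack the result somewhere else.
tarball.decompress('jsoncpp-src.tar.gz', 'unpacked')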


@@ -819,7 +819,7 @@ EXCLUDE_SYMBOLS =
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH           = ..

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and


@@ -16,7 +16,7 @@ JsonCpp - JSON data format manipulation library
  </a>
  </td>
  <td width="40%" align="right" valign="center">
    <a href="http://open-source-parsers.github.io/jsoncpp-docs/doxygen/">JsonCpp home page</a>
  </td>
</tr>
</table>


@@ -4,11 +4,21 @@
<a HREF="http://www.json.org/">JSON (JavaScript Object Notation)</a> <a HREF="http://www.json.org/">JSON (JavaScript Object Notation)</a>
is a lightweight data-interchange format. is a lightweight data-interchange format.
It can represent integer, real number, string, an ordered sequence of value, and
a collection of name/value pairs.
Here is an example of JSON data: Here is an example of JSON data:
\verbatim \verbatim
{
"encoding" : "UTF-8",
"plug-ins" : [
"python",
"c++",
"ruby"
],
"indent" : { "length" : 3, "use_space": true }
}
\endverbatim
<b>JsonCpp</b> supports comments as <i>meta-data</i>:
\code
// Configuration options // Configuration options
{ {
// Default encoding for text // Default encoding for text
@@ -17,22 +27,22 @@ Here is an example of JSON data:
    // Plug-ins loaded at start-up
    "plug-ins" : [
        "python",
        "c++", // trailing comment
        "ruby"
        ],
    // Tab indent size
    // (multi-line comment)
    "indent" : { /*embedded comment*/ "length" : 3, "use_space": true }
}
\endcode

\section _features Features

- read and write JSON document
- attach C++ style comments to element during parsing
- rewrite JSON document preserving original comments

Notes: Comments used to be supported in JSON but were removed for
portability (C-like comments are not supported in Python). Since
comments are useful in configuration/input files, this feature was
preserved.
@@ -40,47 +50,77 @@ preserved.
\section _example Code example

\code
Json::Value root;   // 'root' will contain the root value after parsing.
std::cin >> root;

// You can also read into a particular sub-value.
std::cin >> root["subtree"];

// Get the value of the member of root named 'encoding',
// and return 'UTF-8' if there is no such member.
std::string encoding = root.get("encoding", "UTF-8" ).asString();

// Get the value of the member of root named 'plug-ins'; return a 'null' value if
// there is no such member.
const Json::Value plugins = root["plug-ins"];

// Iterate over the sequence elements.
for ( int index = 0; index < plugins.size(); ++index )
   loadPlugIn( plugins[index].asString() );

// Try other datatypes. Some are auto-convertible to others.
foo::setIndentLength( root["indent"].get("length", 3).asInt() );
foo::setIndentUseSpace( root["indent"].get("use_space", true).asBool() );

// Since Json::Value has an implicit constructor for all value types, it is not
// necessary to explicitly construct the Json::Value object.
root["encoding"] = foo::getCurrentEncoding();
root["indent"]["length"] = foo::getCurrentIndentLength();
root["indent"]["use_space"] = foo::getCurrentIndentUseSpace();

// If you like the defaults, you can insert directly into a stream.
std::cout << root;
// Of course, you can write to `std::ostringstream` if you prefer.

// If desired, remember to add a linefeed and flush.
std::cout << std::endl;
\endcode

\section _advanced Advanced usage

Configure *builders* to create *readers* and *writers*. For
configuration, we use our own `Json::Value` (rather than
standard setters/getters) so that we can add
features without losing binary-compatibility.

\code
// For convenience, use `writeString()` with a specialized builder.
Json::StreamWriterBuilder wbuilder;
wbuilder.settings_["indentation"] = "\t";  // simple Json::Value
std::string document = Json::writeString(wbuilder, root);

// Here, using a specialized Builder, we discard comments and
// record errors as we parse.
Json::CharReaderBuilder rbuilder;
rbuilder.settings_["collectComments"] = false;  // simple Json::Value
std::string errs;
bool ok = Json::parseFromStream(rbuilder, std::cin, &root, &errs);
\endcode

Yes, compile-time configuration-checking would be helpful,
but `Json::Value` lets you
write and read the builder configuration, which is better! In other words,
you can configure your JSON parser using JSON.

CharReaders and StreamWriters are not thread-safe, but they are re-usable.

\code
Json::CharReaderBuilder rbuilder;
cfg >> rbuilder.settings_;
std::unique_ptr<Json::CharReader> const reader(rbuilder.newCharReader());
reader->parse(start, stop, &value1, &errs);
// ...
reader->parse(start, stop, &value2, &errs);
// etc.
\endcode
\section _pbuild Build instructions
@@ -116,4 +156,9 @@ Basically JsonCpp is licensed under MIT license, or public domain if desired
and recognized in your jurisdiction.

\author Baptiste Lepilleur <blep@users.sourceforge.net> (originator)
\author Christopher Dunn <cdunn2001@gmail.com> (primary maintainer)
\version \include version
We make strong guarantees about binary-compatibility, consistent with
<a href="http://apr.apache.org/versioning.html">the Apache versioning scheme</a>.
\sa version.h
*/

doc/web_doxyfile.in (new file, 2302 lines): diff suppressed because it is too large.

@@ -1,20 +1,35 @@
"""Script to generate doxygen documentation. """Script to generate doxygen documentation.
""" """
from __future__ import print_function from __future__ import print_function
from __future__ import unicode_literals
from devtools import tarball from devtools import tarball
from contextlib import contextmanager
import subprocess
import traceback
import re import re
import os import os
import os.path
import sys import sys
import shutil import shutil
@contextmanager
def cd(newdir):
"""
http://stackoverflow.com/questions/431684/how-do-i-cd-in-python
"""
prevdir = os.getcwd()
os.chdir(newdir)
try:
yield
finally:
os.chdir(prevdir)
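For instance, using the cd() helper defined above (the 'doc' path is illustrative); the previous directory is restored even if the body raises:

with cd('doc'):
    print(os.getcwd())  # now inside doc/
print(os.getcwd())      # back where we started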
def find_program(*filenames):
    """find a program in folders path_lst, and sets env[var]
    @param filenames: a list of possible names of the program to search for
    @return: the full path of the filename if found, or '' if filename could not be found
    """
    paths = os.environ.get('PATH', '').split(os.pathsep)
    suffixes = ('win32' in sys.platform) and '.exe .com .bat .cmd' or ''
    for filename in filenames:
        for name in [filename+ext for ext in suffixes.split()]:
            for directory in paths:
@@ -28,53 +43,56 @@ def do_subst_in_file(targetfile, sourcefile, dict):
    For example, if dict is {'%VERSION%': '1.2345', '%BASE%': 'MyProg'},
    then all instances of %VERSION% in the file will be replaced with 1.2345 etc.
    """
    with open(sourcefile, 'r') as f:
        contents = f.read()
    for (k,v) in list(dict.items()):
        v = v.replace('\\','\\\\')
        contents = re.sub(k, v, contents)
    with open(targetfile, 'w') as f:
        f.write(contents)
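The same substitution, shown inline on a string rather than file-to-file, using the example dict from the docstring:

import re

contents = 'Project %BASE%, version %VERSION%\n'
for k, v in {'%VERSION%': '1.2345', '%BASE%': 'MyProg'}.items():
    contents = re.sub(k, v.replace('\\', '\\\\'), contents)
print(contents)  # Project MyProg, version 1.2345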
def getstatusoutput(cmd):
    """cmd is a list.
    """
    try:
        process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        output, _ = process.communicate()
        status = process.returncode
    except:
        status = -1
        output = traceback.format_exc()
    return status, output
def run_cmd(cmd, silent=False):
    """Raise exception on failure.
    """
    info = 'Running: %r in %r' % (' '.join(cmd), os.getcwd())
    print(info)
    sys.stdout.flush()
    if silent:
        status, output = getstatusoutput(cmd)
    else:
        status, output = os.system(' '.join(cmd)), ''
    if status:
        msg = 'Error while %s ...\n\terror=%d, output="""%s"""' % (info, status, output)
        raise Exception(msg)

def assert_is_exe(path):
    if not path:
        raise Exception('path is empty.')
    if not os.path.isfile(path):
        raise Exception('%r is not a file.' % path)
    if not os.access(path, os.X_OK):
        raise Exception('%r is not executable by this user.' % path)
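Typical use of run_cmd, silenced so output is only surfaced on failure; the command is illustrative:

# Raises Exception (with the captured output) if doxygen exits non-zero.
run_cmd(['doxygen', '--version'], silent=True)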
def run_doxygen(doxygen_path, config_file, working_dir, is_silent):
    assert_is_exe(doxygen_path)
    config_file = os.path.abspath(config_file)
    with cd(working_dir):
        cmd = [doxygen_path, config_file]
        run_cmd(cmd, is_silent)
def build_doc(options, make_release=False):
    if make_release:
        options.make_tarball = True
        options.with_dot = True
@@ -83,56 +101,56 @@ def build_doc( options, make_release=False ):
        options.open = False
        options.silent = True

    version = open('version', 'rt').read().strip()
    output_dir = 'dist/doxygen' # relative to doc/doxyfile location.
    if not os.path.isdir(output_dir):
        os.makedirs(output_dir)
    top_dir = os.path.abspath('.')
    html_output_dirname = 'jsoncpp-api-html-' + version
    tarball_path = os.path.join('dist', html_output_dirname + '.tar.gz')
    warning_log_path = os.path.join(output_dir, '../jsoncpp-doxygen-warning.log')
    html_output_path = os.path.join(output_dir, html_output_dirname)
    def yesno(bool):
        return bool and 'YES' or 'NO'
    subst_keys = {
        '%JSONCPP_VERSION%': version,
        '%DOC_TOPDIR%': '',
        '%TOPDIR%': top_dir,
        '%HTML_OUTPUT%': os.path.join('..', output_dir, html_output_dirname),
        '%HAVE_DOT%': yesno(options.with_dot),
        '%DOT_PATH%': os.path.split(options.dot_path)[0],
        '%HTML_HELP%': yesno(options.with_html_help),
        '%UML_LOOK%': yesno(options.with_uml_look),
        '%WARNING_LOG_PATH%': os.path.join('..', warning_log_path)
        }

    if os.path.isdir(output_dir):
        print('Deleting directory:', output_dir)
        shutil.rmtree(output_dir)
    if not os.path.isdir(output_dir):
        os.makedirs(output_dir)

    do_subst_in_file('doc/doxyfile', options.doxyfile_input_path, subst_keys)
    run_doxygen(options.doxygen_path, 'doc/doxyfile', 'doc', is_silent=options.silent)
    if not options.silent:
        print(open(warning_log_path, 'r').read())
    index_path = os.path.abspath(os.path.join('doc', subst_keys['%HTML_OUTPUT%'], 'index.html'))
    print('Generated documentation can be found in:')
    print(index_path)
    if options.make_tarball:
        print('Generating doc tarball to', tarball_path)
        tarball_sources = [
            output_dir,
            'README.md',
            'LICENSE',
            'NEWS.txt',
            'version'
            ]
        tarball_basedir = os.path.join(output_dir, html_output_dirname)
        tarball.make_tarball(tarball_path, tarball_sources, tarball_basedir, html_output_dirname)
    return tarball_path, html_output_dirname

def main():
@@ -151,6 +169,8 @@ def main():
help="""Path to GraphViz dot tool. Must be full qualified path. [Default: %default]""") help="""Path to GraphViz dot tool. Must be full qualified path. [Default: %default]""")
parser.add_option('--doxygen', dest="doxygen_path", action='store', default=find_program('doxygen'), parser.add_option('--doxygen', dest="doxygen_path", action='store', default=find_program('doxygen'),
help="""Path to Doxygen tool. [Default: %default]""") help="""Path to Doxygen tool. [Default: %default]""")
parser.add_option('--in', dest="doxyfile_input_path", action='store', default='doc/doxyfile.in',
help="""Path to doxygen inputs. [Default: %default]""")
parser.add_option('--with-html-help', dest="with_html_help", action='store_true', default=False, parser.add_option('--with-html-help', dest="with_html_help", action='store_true', default=False,
help="""Enable generation of Microsoft HTML HELP""") help="""Enable generation of Microsoft HTML HELP""")
parser.add_option('--no-uml-look', dest="with_uml_look", action='store_false', default=True, parser.add_option('--no-uml-look', dest="with_uml_look", action='store_false', default=True,
@@ -163,7 +183,7 @@ def main():
help="""Hides doxygen output""") help="""Hides doxygen output""")
parser.enable_interspersed_args() parser.enable_interspersed_args()
options, args = parser.parse_args() options, args = parser.parse_args()
build_doc( options ) build_doc(options)
if __name__ == '__main__': if __name__ == '__main__':
main() main()


@@ -44,12 +44,6 @@ public:
  /// \c true if root must be either an array or an object value. Default: \c
  /// false.
  bool strictRoot_;
};

} // namespace Json


@@ -14,6 +14,7 @@
#include <iosfwd>
#include <stack>
#include <string>
#include <istream>

// Disable warning C4251: <data member>: <type> needs to have dll-interface to
// be used by...
@@ -27,24 +28,13 @@ namespace Json {
/** \brief Unserialize a <a HREF="http://www.json.org">JSON</a> document into a
 * Value.
 *
 * \deprecated Use CharReader and CharReaderBuilder.
 */
class JSON_API Reader {
public:
  typedef char Char;
  typedef const Char* Location;

  /** \brief Constructs a Reader allowing all features
   * for parsing.
   */
@@ -78,7 +68,7 @@ public:
   document to read.
 * \param endDoc Pointer on the end of the UTF-8 encoded string of the
   document to read.
 *               Must be >= beginDoc.
 * \param root [out] Contains the root value of the document if it was
 *             successfully parsed.
 * \param collectComments \c true to collect comment and allow writing them
@@ -121,38 +111,6 @@ public:
   */
  std::string getFormattedErrorMessages() const;

private:
  enum TokenType {
    tokenEndOfStream = 0,
@@ -238,8 +196,124 @@ private:
  std::string commentsBefore_;
  Features features_;
  bool collectComments_;
};  // Reader

/** Interface for reading JSON from a char array.
 */
class JSON_API CharReader {
public:
  virtual ~CharReader() {}
  /** \brief Read a Value from a <a HREF="http://www.json.org">JSON</a>
   document.
   * The document must be a UTF-8 encoded string containing the document to read.
   *
   * \param beginDoc Pointer on the beginning of the UTF-8 encoded string of the
   document to read.
   * \param endDoc Pointer on the end of the UTF-8 encoded string of the
   document to read.
   *               Must be >= beginDoc.
   * \param root [out] Contains the root value of the document if it was
   *             successfully parsed.
   * \param errs [out] Formatted error messages (if not NULL):
   *             a user friendly string that lists errors in the parsed
   *             document.
   * \return \c true if the document was successfully parsed, \c false if an
   error occurred.
   */
  virtual bool parse(
      char const* beginDoc, char const* endDoc,
      Value* root, std::string* errs) = 0;

  class Factory {
  public:
    /** \brief Allocate a CharReader via operator new().
     * \throw std::exception if something goes wrong (e.g. invalid settings)
     */
    virtual CharReader* newCharReader() const = 0;
  };  // Factory
};  // CharReader

/** \brief Build a CharReader implementation.
\deprecated This is experimental and will be altered before the next release.

Usage:
\code
  using namespace Json;
  CharReaderBuilder builder;
  builder.settings_["collectComments"] = false;
  Value value;
  std::string errs;
  bool ok = parseFromStream(builder, std::cin, &value, &errs);
\endcode
*/
class JSON_API CharReaderBuilder : public CharReader::Factory {
public:
  // Note: We use a Json::Value so that we can add data-members to this class
  // without a major version bump.
  /** Configuration of this builder.
    These are case-sensitive.
    Available settings (case-sensitive):
    - `"collectComments": false or true`
      - true to collect comment and allow writing them
        back during serialization, false to discard comments.
        This parameter is ignored if allowComments is false.
    - `"allowComments": false or true`
      - true if comments are allowed.
    - `"strictRoot": false or true`
      - true if root must be either an array or an object value
    - `"allowDroppedNullPlaceholders": false or true`
      - true if dropped null placeholders are allowed. (See StreamWriterBuilder.)
    - `"allowNumericKeys": false or true`
      - true if numeric object keys are allowed.
    - `"stackLimit": integer`
      - Exceeding stackLimit (recursive depth of `readValue()`) will
        cause an exception.
      - This is a security issue (seg-faults caused by deeply nested JSON),
        so the default is low.
    - `"failIfExtra": false or true`
      - If true, `parse()` returns false when extra non-whitespace trails
        the JSON value in the input string.

    You can examine `settings_` yourself
    to see the defaults. You can also write and read them just like any
    JSON Value.
    \sa setDefaults()
    */
  Json::Value settings_;

  CharReaderBuilder();
  virtual ~CharReaderBuilder();

  virtual CharReader* newCharReader() const;

  /** \return true if 'settings' are legal and consistent;
   * otherwise, indicate bad settings via 'invalid'.
   */
  bool validate(Json::Value* invalid) const;
  /** Called by ctor, but you can use this to reset settings_.
   * \pre 'settings' != NULL (but Json::null is fine)
   * \remark Defaults:
   * \snippet src/lib_json/json_reader.cpp CharReaderBuilderDefaults
   */
  static void setDefaults(Json::Value* settings);
  /** Same as old Features::strictMode().
   * \pre 'settings' != NULL (but Json::null is fine)
   * \remark Defaults:
   * \snippet src/lib_json/json_reader.cpp CharReaderBuilderStrictMode
   */
  static void strictMode(Json::Value* settings);
};

/** Consume entire stream and use its begin/end.
 * Someday we might have a real StreamReader, but for now this
 * is convenient.
 */
bool parseFromStream(
    CharReader::Factory const&,
    std::istream&,
    Value* root, std::string* errs);

/** \brief Read from 'sin' into 'root'.

 Always keep comments from the input JSON.


@@ -133,7 +133,11 @@ public:
  typedef Json::LargestUInt LargestUInt;
  typedef Json::ArrayIndex ArrayIndex;

  static const Value& nullRef;
#if !defined(__ARMEL__)
  /// \deprecated This exists for binary compatibility only. Use nullRef.
  static const Value null;
#endif
  /// Minimum signed integer value that can be stored in a Json::Value.
  static const LargestInt minLargestInt;
  /// Maximum signed integer value that can be stored in a Json::Value.
@@ -171,7 +175,7 @@ private:
    CZString(const char* cstr, DuplicationPolicy allocate);
    CZString(const CZString& other);
    ~CZString();
    CZString& operator=(const CZString& other);
    bool operator<(const CZString& other) const;
    bool operator==(const CZString& other) const;
    ArrayIndex index() const;
@@ -240,7 +244,7 @@ Json::Value obj_value(Json::objectValue); // {}
  ~Value();

  // Deep copy, then swap(other).
  Value& operator=(const Value& other);
  /// Swap everything.
  void swap(Value& other);
  /// Swap values but leave comments and source offsets in place.
@@ -432,9 +436,11 @@ Json::Value obj_value(Json::objectValue); // {}
  //  EnumValues enumValues() const;
  //# endif

  /// \deprecated Always pass len.
  void setComment(const char* comment, CommentPlacement placement);
  /// Comments must be //... or /* ... */
  void setComment(const char* comment, size_t len, CommentPlacement placement);
  /// Comments must be //... or /* ... */
  void setComment(const std::string& comment, CommentPlacement placement);
  bool hasComment(CommentPlacement placement) const;
  /// Include delimiters and embedded newlines.
@@ -448,13 +454,6 @@ Json::Value obj_value(Json::objectValue); // {}
  iterator begin();
  iterator end();

private:
  void initBasic(ValueType type, bool allocated = false);
@@ -477,7 +476,7 @@ private:
    CommentInfo();
    ~CommentInfo();

    void setComment(const char* text, size_t len);

    char* comment_;
  };
@@ -505,17 +504,12 @@ private:
#endif
  } value_;
  ValueType type_ : 8;
  unsigned int allocated_ : 1; // Notes: if declared as bool, bitfield is useless.
#ifdef JSON_VALUE_USE_INTERNAL_MAP
  unsigned int itemIsUsed_ : 1;         // used by the ValueInternalMap container.
  unsigned int memberNameIsStatic_ : 1; // used by the ValueInternalMap container.
#endif
  CommentInfo* comments_;
};
/** \brief Experimental and untested: represents an element of the "path" to
@@ -943,7 +937,7 @@ public:
  bool operator!=(const SelfType& other) const { return !isEqual(other); }

  difference_type operator-(const SelfType& other) const {
    return other.computeDistance(*this);
  }

  /// Return either the index or the member name of the referenced value as a


@@ -4,10 +4,10 @@
#ifndef JSON_VERSION_H_INCLUDED
# define JSON_VERSION_H_INCLUDED

# define JSONCPP_VERSION_STRING "0.8.2"
# define JSONCPP_VERSION_MAJOR 0
# define JSONCPP_VERSION_MINOR 8
# define JSONCPP_VERSION_PATCH 2
# define JSONCPP_VERSION_QUALIFIER
# define JSONCPP_VERSION_HEXA ((JSONCPP_VERSION_MAJOR << 24) | (JSONCPP_VERSION_MINOR << 16) | (JSONCPP_VERSION_PATCH << 8))
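As a quick check of the packing above, version 0.8.2 encodes as 0x00080200:

# Mirrors JSONCPP_VERSION_HEXA: one byte each for major, minor, patch,
# with the low byte reserved (e.g. for a qualifier).
major, minor, patch = 0, 8, 2
hexa = (major << 24) | (minor << 16) | (patch << 8)
assert hexa == 0x00080200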


@@ -11,6 +11,7 @@
#endif // if !defined(JSON_IS_AMALGAMATION)
#include <vector>
#include <string>
#include <ostream>

// Disable warning C4251: <data member>: <type> needs to have dll-interface to
// be used by...
@@ -23,7 +24,111 @@ namespace Json {
class Value;

/**

Usage:
\code
  using namespace Json;
  void writeToStdout(StreamWriter::Factory const& factory, Value const& value) {
    std::unique_ptr<StreamWriter> const writer(
      factory.newStreamWriter());
    writer->write(value, &std::cout);
    std::cout << std::endl;  // add lf and flush
  }
\endcode
*/
class JSON_API StreamWriter {
protected:
  std::ostream* sout_;  // not owned; will not delete
public:
  StreamWriter();
  virtual ~StreamWriter();
  /** Write Value into document as configured in sub-class.
      Do not take ownership of sout, but maintain a reference during function.
      \pre sout != NULL
      \return zero on success
      \throw std::exception possibly, depending on configuration
   */
  virtual int write(Value const& root, std::ostream* sout) = 0;

  /** \brief A simple abstract factory.
   */
  class JSON_API Factory {
  public:
    virtual ~Factory();
    /** \brief Allocate a StreamWriter via operator new().
     * \throw std::exception if something goes wrong (e.g. invalid settings)
     */
    virtual StreamWriter* newStreamWriter() const = 0;
  };  // Factory
};  // StreamWriter

/** \brief Write into stringstream, then return string, for convenience.
 * A StreamWriter will be created from the factory, used, and then deleted.
 */
std::string writeString(StreamWriter::Factory const& factory, Value const& root);

/** \brief Build a StreamWriter implementation.

Usage:
\code
  using namespace Json;
  Value value = ...;
  StreamWriterBuilder builder;
  builder.settings_["commentStyle"] = "None";
  builder.settings_["indentation"] = " ";  // or whatever you like
  std::unique_ptr<Json::StreamWriter> writer(
      builder.newStreamWriter());
  writer->write(value, &std::cout);
  std::cout << std::endl;  // add lf and flush
\endcode
*/
class JSON_API StreamWriterBuilder : public StreamWriter::Factory {
public:
  // Note: We use a Json::Value so that we can add data-members to this class
  // without a major version bump.
  /** Configuration of this builder.
    Available settings (case-sensitive):
    - "commentStyle": "None" or "All"
    - "indentation": "<anything>"
    - "enableYAMLCompatibility": false or true
      - slightly change the whitespace around colons
    - "dropNullPlaceholders": false or true
      - Drop the "null" string from the writer's output for nullValues.
        Strictly speaking, this is not valid JSON. But when the output is being
        fed to a browser's Javascript, it makes for smaller output and the
        browser can handle the output just fine.

    You can examine `settings_` yourself
    to see the defaults. You can also write and read them just like any
    JSON Value.
    \sa setDefaults()
    */
  Json::Value settings_;

  StreamWriterBuilder();
  virtual ~StreamWriterBuilder();

  /**
   * \throw std::exception if something goes wrong (e.g. invalid settings)
   */
  virtual StreamWriter* newStreamWriter() const;

  /** \return true if 'settings' are legal and consistent;
   * otherwise, indicate bad settings via 'invalid'.
   */
  bool validate(Json::Value* invalid) const;
  /** Called by ctor, but you can use this to reset settings_.
   * \pre 'settings' != NULL (but Json::null is fine)
   * \remark Defaults:
   * \snippet src/lib_json/json_writer.cpp StreamWriterBuilderDefaults
   */
  static void setDefaults(Json::Value* settings);
};

/** \brief Abstract class for writers.
 * \deprecated Use StreamWriter.
 */
class JSON_API Writer {
public:
@@ -39,6 +144,7 @@ public:
 *consumption,
 * but may be useful to support features such as RPC where bandwidth is limited.
 * \sa Reader, Value
 * \deprecated Use StreamWriterBuilder.
 */
class JSON_API FastWriter : public Writer {
public:
@@ -47,15 +153,6 @@ public:
  void enableYAMLCompatibility();

public: // overridden from Writer
  virtual std::string write(const Value& root);
@@ -64,8 +161,6 @@ private:
  std::string document_;
  bool yamlCompatiblityEnabled_;
};
/** \brief Writes a Value in <a HREF="http://www.json.org">JSON</a> format in a
@@ -90,6 +185,7 @@ private:
 *#CommentPlacement.
 *
 * \sa Reader, Value, Value::setComment()
 * \deprecated Use StreamWriterBuilder.
 */
class JSON_API StyledWriter : public Writer {
public:
@@ -151,6 +247,7 @@ private:
 *
 * \param indentation Each level will be indented by this amount extra.
 * \sa Reader, Value, Value::setComment()
 * \deprecated Use StreamWriterBuilder.
 */
class JSON_API StyledStreamWriter {
public:
@@ -187,7 +284,8 @@ private:
  std::string indentString_;
  int rightMargin_;
  std::string indentation_;
  bool addChildValues_ : 1;
  bool indented_ : 1;
};

#if defined(JSON_HAS_INT64)


@@ -34,57 +34,57 @@ SVN_TAG_ROOT = SVN_ROOT + 'tags/jsoncpp'
SCONS_LOCAL_URL = 'http://sourceforge.net/projects/scons/files/scons-local/1.2.0/scons-local-1.2.0.tar.gz/download'
SOURCEFORGE_PROJECT = 'jsoncpp'

def set_version(version):
    with open('version','wb') as f:
        f.write(version.strip())

def rmdir_if_exist(dir_path):
    if os.path.isdir(dir_path):
        shutil.rmtree(dir_path)

class SVNError(Exception):
    pass

def svn_command(command, *args):
    cmd = ['svn', '--non-interactive', command] + list(args)
    print('Running:', ' '.join(cmd))
    process = subprocess.Popen(cmd,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
    stdout = process.communicate()[0]
    if process.returncode:
        error = SVNError('SVN command failed:\n' + stdout)
        error.returncode = process.returncode
        raise error
    return stdout
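Error handling around the wrapper looks like this; the repository URL is illustrative:

try:
    listing = svn_command('list', 'https://svn.example.org/tags/jsoncpp')
    print(listing)
except SVNError as e:
    print('svn failed with status %d' % e.returncode)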
def check_no_pending_commit():
    """Checks that there is no pending commit in the sandbox."""
    stdout = svn_command('status', '--xml')
    etree = ElementTree.fromstring(stdout)
    msg = []
    for entry in etree.getiterator('entry'):
        path = entry.get('path')
        status = entry.find('wc-status').get('item')
        if status != 'unversioned' and path != 'version':
            msg.append('File "%s" has pending change (status="%s")' % (path, status))
    if msg:
        msg.insert(0, 'Pending changes to commit found in sandbox. Commit them first!')
    return '\n'.join(msg)

def svn_join_url(base_url, suffix):
    if not base_url.endswith('/'):
        base_url += '/'
    if suffix.startswith('/'):
        suffix = suffix[1:]
    return base_url + suffix

def svn_check_if_tag_exist(tag_url):
    """Checks if a tag exists.
    Returns: True if the tag exists, False otherwise.
    """
    try:
        list_stdout = svn_command('list', tag_url)
    except SVNError as e:
        if e.returncode != 1 or not str(e).find('tag_url'):
            raise e
@@ -92,82 +92,82 @@ def svn_check_if_tag_exist( tag_url ):
        return False
    return True

def svn_commit(message):
    """Commit the sandbox, providing the specified comment.
    """
    svn_command('ci', '-m', message)

def svn_tag_sandbox(tag_url, message):
    """Makes a tag based on the sandbox revisions.
    """
    svn_command('copy', '-m', message, '.', tag_url)

def svn_remove_tag(tag_url, message):
    """Removes an existing tag.
    """
    svn_command('delete', '-m', message, tag_url)

def svn_export(tag_url, export_dir):
    """Exports the tag_url revision to export_dir.
       The target directory, including its parent, is created if it does not exist.
       If the directory export_dir exists, it is deleted before the export proceeds.
    """
    rmdir_if_exist(export_dir)
    svn_command('export', tag_url, export_dir)

def fix_sources_eol(dist_dir):
    """Set file EOL for tarball distribution.
    """
    print('Preparing exported source file EOL for distribution...')
    prune_dirs = antglob.prune_dirs + 'scons-local* ./build* ./libs ./dist'
    win_sources = antglob.glob(dist_dir,
        includes = '**/*.sln **/*.vcproj',
        prune_dirs = prune_dirs)
    unix_sources = antglob.glob(dist_dir,
        includes = '''**/*.h **/*.cpp **/*.inl **/*.txt **/*.dox **/*.py **/*.html **/*.in
        sconscript *.json *.expected AUTHORS LICENSE''',
        excludes = antglob.default_excludes + 'scons.py sconsign.py scons-*',
        prune_dirs = prune_dirs)
    for path in win_sources:
        fixeol.fix_source_eol(path, is_dry_run = False, verbose = True, eol = '\r\n')
    for path in unix_sources:
        fixeol.fix_source_eol(path, is_dry_run = False, verbose = True, eol = '\n')

def download(url, target_path):
    """Download the file referenced by url to target_path.
    """
    f = urllib2.urlopen(url)
    try:
        data = f.read()
    finally:
        f.close()
    fout = open(target_path, 'wb')
    try:
        fout.write(data)
    finally:
        fout.close()

def check_compile(distcheck_top_dir, platform):
    cmd = [sys.executable, 'scons.py', 'platform=%s' % platform, 'check']
    print('Running:', ' '.join(cmd))
    log_path = os.path.join(distcheck_top_dir, 'build-%s.log' % platform)
    flog = open(log_path, 'wb')
    try:
        process = subprocess.Popen(cmd,
                                   stdout=flog,
                                   stderr=subprocess.STDOUT,
                                   cwd=distcheck_top_dir)
        stdout = process.communicate()[0]
        status = (process.returncode == 0)
    finally:
        flog.close()
    return (status, log_path)

def write_tempfile(content, **kwargs):
fd, path = tempfile.mkstemp( **kwargs ) fd, path = tempfile.mkstemp(**kwargs)
f = os.fdopen( fd, 'wt' ) f = os.fdopen(fd, 'wt')
try: try:
f.write( content ) f.write(content)
finally: finally:
f.close() f.close()
return path return path
@@ -175,34 +175,34 @@ def write_tempfile( content, **kwargs ):
class SFTPError(Exception): class SFTPError(Exception):
pass pass
def run_sftp_batch( userhost, sftp, batch, retry=0 ): def run_sftp_batch(userhost, sftp, batch, retry=0):
path = write_tempfile( batch, suffix='.sftp', text=True ) path = write_tempfile(batch, suffix='.sftp', text=True)
# psftp -agent -C blep,jsoncpp@web.sourceforge.net -batch -b batch.sftp -bc # psftp -agent -C blep,jsoncpp@web.sourceforge.net -batch -b batch.sftp -bc
cmd = [sftp, '-agent', '-C', '-batch', '-b', path, '-bc', userhost] cmd = [sftp, '-agent', '-C', '-batch', '-b', path, '-bc', userhost]
error = None error = None
for retry_index in range(0, max(1,retry)): for retry_index in range(0, max(1,retry)):
heading = retry_index == 0 and 'Running:' or 'Retrying:' heading = retry_index == 0 and 'Running:' or 'Retrying:'
print(heading, ' '.join( cmd )) print(heading, ' '.join(cmd))
process = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT ) process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout = process.communicate()[0] stdout = process.communicate()[0]
if process.returncode != 0: if process.returncode != 0:
error = SFTPError( 'SFTP batch failed:\n' + stdout ) error = SFTPError('SFTP batch failed:\n' + stdout)
else: else:
break break
if error: if error:
raise error raise error
return stdout return stdout
def sourceforge_web_synchro( sourceforge_project, doc_dir, def sourceforge_web_synchro(sourceforge_project, doc_dir,
user=None, sftp='sftp' ): user=None, sftp='sftp'):
"""Notes: does not synchronize sub-directory of doc-dir. """Notes: does not synchronize sub-directory of doc-dir.
""" """
userhost = '%s,%s@web.sourceforge.net' % (user, sourceforge_project) userhost = '%s,%s@web.sourceforge.net' % (user, sourceforge_project)
stdout = run_sftp_batch( userhost, sftp, """ stdout = run_sftp_batch(userhost, sftp, """
cd htdocs cd htdocs
dir dir
exit exit
""" ) """)
existing_paths = set() existing_paths = set()
collect = 0 collect = 0
for line in stdout.split('\n'): for line in stdout.split('\n'):
@@ -216,15 +216,15 @@ exit
elif collect == 2: elif collect == 2:
path = line.strip().split()[-1:] path = line.strip().split()[-1:]
if path and path[0] not in ('.', '..'): if path and path[0] not in ('.', '..'):
existing_paths.add( path[0] ) existing_paths.add(path[0])
upload_paths = set( [os.path.basename(p) for p in antglob.glob( doc_dir )] ) upload_paths = set([os.path.basename(p) for p in antglob.glob(doc_dir)])
paths_to_remove = existing_paths - upload_paths paths_to_remove = existing_paths - upload_paths
if paths_to_remove: if paths_to_remove:
print('Removing the following file from web:') print('Removing the following file from web:')
print('\n'.join( paths_to_remove )) print('\n'.join(paths_to_remove))
stdout = run_sftp_batch( userhost, sftp, """cd htdocs stdout = run_sftp_batch(userhost, sftp, """cd htdocs
rm %s rm %s
exit""" % ' '.join(paths_to_remove) ) exit""" % ' '.join(paths_to_remove))
print('Uploading %d files:' % len(upload_paths)) print('Uploading %d files:' % len(upload_paths))
batch_size = 10 batch_size = 10
upload_paths = list(upload_paths) upload_paths = list(upload_paths)
@@ -235,17 +235,17 @@ exit""" % ' '.join(paths_to_remove) )
remaining_files = len(upload_paths) - index remaining_files = len(upload_paths) - index
remaining_sec = file_per_sec * remaining_files remaining_sec = file_per_sec * remaining_files
print('%d/%d, ETA=%.1fs' % (index+1, len(upload_paths), remaining_sec)) print('%d/%d, ETA=%.1fs' % (index+1, len(upload_paths), remaining_sec))
run_sftp_batch( userhost, sftp, """cd htdocs run_sftp_batch(userhost, sftp, """cd htdocs
lcd %s lcd %s
mput %s mput %s
exit""" % (doc_dir, ' '.join(paths) ), retry=3 ) exit""" % (doc_dir, ' '.join(paths)), retry=3)
def sourceforge_release_tarball( sourceforge_project, paths, user=None, sftp='sftp' ): def sourceforge_release_tarball(sourceforge_project, paths, user=None, sftp='sftp'):
userhost = '%s,%s@frs.sourceforge.net' % (user, sourceforge_project) userhost = '%s,%s@frs.sourceforge.net' % (user, sourceforge_project)
run_sftp_batch( userhost, sftp, """ run_sftp_batch(userhost, sftp, """
mput %s mput %s
exit exit
""" % (' '.join(paths),) ) """ % (' '.join(paths),))
def main(): def main():
@@ -286,12 +286,12 @@ Warning: --force should only be used when developping/testing the release script
options, args = parser.parse_args() options, args = parser.parse_args()
if len(args) != 2: if len(args) != 2:
parser.error( 'release_version missing on command-line.' ) parser.error('release_version missing on command-line.')
release_version = args[0] release_version = args[0]
next_version = args[1] next_version = args[1]
if not options.platforms and not options.no_test: if not options.platforms and not options.no_test:
parser.error( 'You must specify either --platform or --no-test option.' ) parser.error('You must specify either --platform or --no-test option.')
if options.ignore_pending_commit: if options.ignore_pending_commit:
msg = '' msg = ''
@@ -299,86 +299,86 @@ Warning: --force should only be used when developping/testing the release script
msg = check_no_pending_commit() msg = check_no_pending_commit()
if not msg: if not msg:
print('Setting version to', release_version) print('Setting version to', release_version)
set_version( release_version ) set_version(release_version)
svn_commit( 'Release ' + release_version ) svn_commit('Release ' + release_version)
tag_url = svn_join_url( SVN_TAG_ROOT, release_version ) tag_url = svn_join_url(SVN_TAG_ROOT, release_version)
if svn_check_if_tag_exist( tag_url ): if svn_check_if_tag_exist(tag_url):
if options.retag_release: if options.retag_release:
svn_remove_tag( tag_url, 'Overwriting previous tag' ) svn_remove_tag(tag_url, 'Overwriting previous tag')
else: else:
print('Aborting, tag %s already exist. Use --retag to overwrite it!' % tag_url) print('Aborting, tag %s already exist. Use --retag to overwrite it!' % tag_url)
sys.exit( 1 ) sys.exit(1)
svn_tag_sandbox( tag_url, 'Release ' + release_version ) svn_tag_sandbox(tag_url, 'Release ' + release_version)
print('Generated doxygen document...') print('Generated doxygen document...')
## doc_dirname = r'jsoncpp-api-html-0.5.0' ## doc_dirname = r'jsoncpp-api-html-0.5.0'
## doc_tarball_path = r'e:\prg\vc\Lib\jsoncpp-trunk\dist\jsoncpp-api-html-0.5.0.tar.gz' ## doc_tarball_path = r'e:\prg\vc\Lib\jsoncpp-trunk\dist\jsoncpp-api-html-0.5.0.tar.gz'
doc_tarball_path, doc_dirname = doxybuild.build_doc( options, make_release=True ) doc_tarball_path, doc_dirname = doxybuild.build_doc(options, make_release=True)
doc_distcheck_dir = 'dist/doccheck' doc_distcheck_dir = 'dist/doccheck'
tarball.decompress( doc_tarball_path, doc_distcheck_dir ) tarball.decompress(doc_tarball_path, doc_distcheck_dir)
doc_distcheck_top_dir = os.path.join( doc_distcheck_dir, doc_dirname ) doc_distcheck_top_dir = os.path.join(doc_distcheck_dir, doc_dirname)
export_dir = 'dist/export' export_dir = 'dist/export'
svn_export( tag_url, export_dir ) svn_export(tag_url, export_dir)
fix_sources_eol( export_dir ) fix_sources_eol(export_dir)
source_dir = 'jsoncpp-src-' + release_version source_dir = 'jsoncpp-src-' + release_version
source_tarball_path = 'dist/%s.tar.gz' % source_dir source_tarball_path = 'dist/%s.tar.gz' % source_dir
print('Generating source tarball to', source_tarball_path) print('Generating source tarball to', source_tarball_path)
tarball.make_tarball( source_tarball_path, [export_dir], export_dir, prefix_dir=source_dir ) tarball.make_tarball(source_tarball_path, [export_dir], export_dir, prefix_dir=source_dir)
amalgamation_tarball_path = 'dist/%s-amalgamation.tar.gz' % source_dir amalgamation_tarball_path = 'dist/%s-amalgamation.tar.gz' % source_dir
print('Generating amalgamation source tarball to', amalgamation_tarball_path) print('Generating amalgamation source tarball to', amalgamation_tarball_path)
amalgamation_dir = 'dist/amalgamation' amalgamation_dir = 'dist/amalgamation'
amalgamate.amalgamate_source( export_dir, '%s/jsoncpp.cpp' % amalgamation_dir, 'json/json.h' ) amalgamate.amalgamate_source(export_dir, '%s/jsoncpp.cpp' % amalgamation_dir, 'json/json.h')
amalgamation_source_dir = 'jsoncpp-src-amalgamation' + release_version amalgamation_source_dir = 'jsoncpp-src-amalgamation' + release_version
tarball.make_tarball( amalgamation_tarball_path, [amalgamation_dir], tarball.make_tarball(amalgamation_tarball_path, [amalgamation_dir],
amalgamation_dir, prefix_dir=amalgamation_source_dir ) amalgamation_dir, prefix_dir=amalgamation_source_dir)
# Decompress source tarball, download and install scons-local # Decompress source tarball, download and install scons-local
distcheck_dir = 'dist/distcheck' distcheck_dir = 'dist/distcheck'
distcheck_top_dir = distcheck_dir + '/' + source_dir distcheck_top_dir = distcheck_dir + '/' + source_dir
print('Decompressing source tarball to', distcheck_dir) print('Decompressing source tarball to', distcheck_dir)
rmdir_if_exist( distcheck_dir ) rmdir_if_exist(distcheck_dir)
tarball.decompress( source_tarball_path, distcheck_dir ) tarball.decompress(source_tarball_path, distcheck_dir)
scons_local_path = 'dist/scons-local.tar.gz' scons_local_path = 'dist/scons-local.tar.gz'
print('Downloading scons-local to', scons_local_path) print('Downloading scons-local to', scons_local_path)
download( SCONS_LOCAL_URL, scons_local_path ) download(SCONS_LOCAL_URL, scons_local_path)
print('Decompressing scons-local to', distcheck_top_dir) print('Decompressing scons-local to', distcheck_top_dir)
tarball.decompress( scons_local_path, distcheck_top_dir ) tarball.decompress(scons_local_path, distcheck_top_dir)
# Run compilation # Run compilation
print('Compiling decompressed tarball') print('Compiling decompressed tarball')
all_build_status = True all_build_status = True
for platform in options.platforms.split(','): for platform in options.platforms.split(','):
print('Testing platform:', platform) print('Testing platform:', platform)
build_status, log_path = check_compile( distcheck_top_dir, platform ) build_status, log_path = check_compile(distcheck_top_dir, platform)
print('see build log:', log_path) print('see build log:', log_path)
print(build_status and '=> ok' or '=> FAILED') print(build_status and '=> ok' or '=> FAILED')
all_build_status = all_build_status and build_status all_build_status = all_build_status and build_status
if not build_status: if not build_status:
print('Testing failed on at least one platform, aborting...') print('Testing failed on at least one platform, aborting...')
svn_remove_tag( tag_url, 'Removing tag due to failed testing' ) svn_remove_tag(tag_url, 'Removing tag due to failed testing')
sys.exit(1) sys.exit(1)
if options.user: if options.user:
if not options.no_web: if not options.no_web:
print('Uploading documentation using user', options.user) print('Uploading documentation using user', options.user)
sourceforge_web_synchro( SOURCEFORGE_PROJECT, doc_distcheck_top_dir, user=options.user, sftp=options.sftp ) sourceforge_web_synchro(SOURCEFORGE_PROJECT, doc_distcheck_top_dir, user=options.user, sftp=options.sftp)
print('Completed documentation upload') print('Completed documentation upload')
print('Uploading source and documentation tarballs for release using user', options.user) print('Uploading source and documentation tarballs for release using user', options.user)
sourceforge_release_tarball( SOURCEFORGE_PROJECT, sourceforge_release_tarball(SOURCEFORGE_PROJECT,
[source_tarball_path, doc_tarball_path], [source_tarball_path, doc_tarball_path],
user=options.user, sftp=options.sftp ) user=options.user, sftp=options.sftp)
print('Source and doc release tarballs uploaded') print('Source and doc release tarballs uploaded')
else: else:
print('No upload user specified. Web site and download tarbal were not uploaded.') print('No upload user specified. Web site and download tarbal were not uploaded.')
print('Tarball can be found at:', doc_tarball_path) print('Tarball can be found at:', doc_tarball_path)
# Set next version number and commit # Set next version number and commit
set_version( next_version ) set_version(next_version)
svn_commit( 'Released ' + release_version ) svn_commit('Released ' + release_version)
else: else:
sys.stderr.write( msg + '\n' ) sys.stderr.write(msg + '\n')
if __name__ == '__main__': if __name__ == '__main__':
main() main()


@@ -1,9 +1,9 @@
 import fnmatch
 import os
-def generate( env ):
-    def Glob( env, includes = None, excludes = None, dir = '.' ):
-        """Adds Glob( includes = Split( '*' ), excludes = None, dir = '.')
+def generate(env):
+    def Glob(env, includes = None, excludes = None, dir = '.'):
+        """Adds Glob(includes = Split('*'), excludes = None, dir = '.')
         helper function to environment.
         Glob both the file-system files.
@@ -12,36 +12,36 @@ def generate( env ):
         excludes: list of file name pattern exluced from the return list.
         Example:
-            sources = env.Glob( ("*.cpp", '*.h'), "~*.cpp", "#src" )
+            sources = env.Glob(("*.cpp", '*.h'), "~*.cpp", "#src")
         """
         def filterFilename(path):
-            abs_path = os.path.join( dir, path )
+            abs_path = os.path.join(dir, path)
             if not os.path.isfile(abs_path):
                 return 0
             fn = os.path.basename(path)
             match = 0
             for include in includes:
-                if fnmatch.fnmatchcase( fn, include ):
+                if fnmatch.fnmatchcase(fn, include):
                     match = 1
                     break
             if match == 1 and not excludes is None:
                 for exclude in excludes:
-                    if fnmatch.fnmatchcase( fn, exclude ):
+                    if fnmatch.fnmatchcase(fn, exclude):
                         match = 0
                         break
             return match
         if includes is None:
             includes = ('*',)
-        elif type(includes) in ( type(''), type(u'') ):
+        elif type(includes) in (type(''), type(u'')):
             includes = (includes,)
-        if type(excludes) in ( type(''), type(u'') ):
+        if type(excludes) in (type(''), type(u'')):
             excludes = (excludes,)
         dir = env.Dir(dir).abspath
-        paths = os.listdir( dir )
-        def makeAbsFileNode( path ):
-            return env.File( os.path.join( dir, path ) )
-        nodes = filter( filterFilename, paths )
-        return map( makeAbsFileNode, nodes )
+        paths = os.listdir(dir)
+        def makeAbsFileNode(path):
+            return env.File(os.path.join(dir, path))
+        nodes = filter(filterFilename, paths)
+        return map(makeAbsFileNode, nodes)
     from SCons.Script import Environment
     Environment.Glob = Glob


@@ -47,7 +47,7 @@ import targz
 ##        elif token == "=":
 ##            data[key] = list()
 ##        else:
-##            append_data( data, key, new_data, token )
+##            append_data(data, key, new_data, token)
 ##            new_data = True
 ##
 ##        last_token = token
@@ -55,7 +55,7 @@ import targz
 ##
 ##        if last_token == '\\' and token != '\n':
 ##            new_data = False
-##            append_data( data, key, new_data, '\\' )
+##            append_data(data, key, new_data, '\\')
 ##
 ##    # compress lists of len 1 into single strings
 ##    for (k, v) in data.items():
@@ -116,7 +116,7 @@ import targz
 ##        else:
 ##            for pattern in file_patterns:
 ##                sources.extend(glob.glob("/".join([node, pattern])))
-##    sources = map( lambda path: env.File(path), sources )
+##    sources = map(lambda path: env.File(path), sources)
 ##    return sources
 ##
 ##
@@ -143,7 +143,7 @@ def srcDistEmitter(source, target, env):
 ##    # add our output locations
 ##    for (k, v) in output_formats.items():
 ##        if data.get("GENERATE_" + k, v[0]) == "YES":
-##            targets.append(env.Dir( os.path.join(out_dir, data.get(k + "_OUTPUT", v[1]))) )
+##            targets.append(env.Dir(os.path.join(out_dir, data.get(k + "_OUTPUT", v[1])))))
 ##
 ##    # don't clobber targets
 ##    for node in targets:
@@ -161,14 +161,13 @@ def generate(env):
     Add builders and construction variables for the
     SrcDist tool.
     """
-##    doxyfile_scanner = env.Scanner(
-##        DoxySourceScan,
+##    doxyfile_scanner = env.Scanner(##        DoxySourceScan,
 ##        "DoxySourceScan",
 ##        scan_check = DoxySourceScanCheck,
-##    )
+##)
     if targz.exists(env):
-        srcdist_builder = targz.makeBuilder( srcDistEmitter )
+        srcdist_builder = targz.makeBuilder(srcDistEmitter)
         env['BUILDERS']['SrcDist'] = srcdist_builder


@@ -70,7 +70,7 @@ def generate(env):
         return target, source
 ## env.Append(TOOLS = 'substinfile')   # this should be automaticaly done by Scons ?!?
-    subst_action = SCons.Action.Action( subst_in_file, subst_in_file_string )
+    subst_action = SCons.Action.Action(subst_in_file, subst_in_file_string)
     env['BUILDERS']['SubstInFile'] = Builder(action=subst_action, emitter=subst_emitter)
 def exists(env):


@@ -27,9 +27,9 @@ TARGZ_DEFAULT_COMPRESSION_LEVEL = 9
 if internal_targz:
     def targz(target, source, env):
-        def archive_name( path ):
-            path = os.path.normpath( os.path.abspath( path ) )
-            common_path = os.path.commonprefix( (base_dir, path) )
+        def archive_name(path):
+            path = os.path.normpath(os.path.abspath(path))
+            common_path = os.path.commonprefix((base_dir, path))
             archive_name = path[len(common_path):]
             return archive_name
@@ -37,23 +37,23 @@ if internal_targz:
             for name in names:
                 path = os.path.join(dirname, name)
                 if os.path.isfile(path):
-                    tar.add(path, archive_name(path) )
+                    tar.add(path, archive_name(path))
         compression = env.get('TARGZ_COMPRESSION_LEVEL',TARGZ_DEFAULT_COMPRESSION_LEVEL)
-        base_dir = os.path.normpath( env.get('TARGZ_BASEDIR', env.Dir('.')).abspath )
+        base_dir = os.path.normpath(env.get('TARGZ_BASEDIR', env.Dir('.')).abspath)
         target_path = str(target[0])
-        fileobj = gzip.GzipFile( target_path, 'wb', compression )
+        fileobj = gzip.GzipFile(target_path, 'wb', compression)
         tar = tarfile.TarFile(os.path.splitext(target_path)[0], 'w', fileobj)
         for source in source:
             source_path = str(source)
             if source.isdir():
                 os.path.walk(source_path, visit, tar)
             else:
-                tar.add(source_path, archive_name(source_path) ) # filename, arcname
+                tar.add(source_path, archive_name(source_path)) # filename, arcname
         tar.close()
     targzAction = SCons.Action.Action(targz, varlist=['TARGZ_COMPRESSION_LEVEL','TARGZ_BASEDIR'])
-    def makeBuilder( emitter = None ):
+    def makeBuilder(emitter = None):
         return SCons.Builder.Builder(action = SCons.Action.Action('$TARGZ_COM', '$TARGZ_COMSTR'),
                                      source_factory = SCons.Node.FS.Entry,
                                      source_scanner = SCons.Defaults.DirScanner,


@@ -7,7 +7,13 @@ ENDIF(JSONCPP_LIB_BUILD_SHARED)
 ADD_EXECUTABLE(jsontestrunner_exe
                main.cpp
                )
-TARGET_LINK_LIBRARIES(jsontestrunner_exe jsoncpp_lib)
+IF(JSONCPP_LIB_BUILD_SHARED)
+    TARGET_LINK_LIBRARIES(jsontestrunner_exe jsoncpp_lib)
+ELSE(JSONCPP_LIB_BUILD_SHARED)
+    TARGET_LINK_LIBRARIES(jsontestrunner_exe jsoncpp_lib_static)
+ENDIF(JSONCPP_LIB_BUILD_SHARED)
 SET_TARGET_PROPERTIES(jsontestrunner_exe PROPERTIES OUTPUT_NAME jsontestrunner_exe)
 IF(PYTHONINTERP_FOUND)


@@ -8,12 +8,22 @@
 #include <json/json.h>
 #include <algorithm> // sort
+#include <sstream>
 #include <stdio.h>
 #if defined(_MSC_VER) && _MSC_VER >= 1310
 #pragma warning(disable : 4996) // disable fopen deprecation warning
 #endif
+struct Options
+{
+  std::string path;
+  Json::Features features;
+  bool parseOnly;
+  typedef std::string (*writeFuncType)(Json::Value const&);
+  writeFuncType write;
+};
 static std::string normalizeFloatingPointStr(double value) {
   char buffer[32];
 #if defined(_MSC_VER) && defined(__STDC_SECURE_LIB__)
@@ -129,43 +139,67 @@ printValueTree(FILE* fout, Json::Value& value, const std::string& path = ".") {
 static int parseAndSaveValueTree(const std::string& input,
                                  const std::string& actual,
                                  const std::string& kind,
-                                 Json::Value& root,
                                  const Json::Features& features,
-                                 bool parseOnly) {
+                                 bool parseOnly,
+                                 Json::Value* root)
+{
   Json::Reader reader(features);
-  bool parsingSuccessful = reader.parse(input, root);
+  bool parsingSuccessful = reader.parse(input, *root);
   if (!parsingSuccessful) {
     printf("Failed to parse %s file: \n%s\n",
            kind.c_str(),
           reader.getFormattedErrorMessages().c_str());
     return 1;
   }
   if (!parseOnly) {
     FILE* factual = fopen(actual.c_str(), "wt");
     if (!factual) {
       printf("Failed to create %s actual file.\n", kind.c_str());
       return 2;
     }
-    printValueTree(factual, root);
+    printValueTree(factual, *root);
     fclose(factual);
   }
   return 0;
 }
-static int rewriteValueTree(const std::string& rewritePath,
-                            const Json::Value& root,
-                            std::string& rewrite) {
-  // Json::FastWriter writer;
-  // writer.enableYAMLCompatibility();
+// static std::string useFastWriter(Json::Value const& root) {
+//   Json::FastWriter writer;
+//   writer.enableYAMLCompatibility();
+//   return writer.write(root);
+// }
+static std::string useStyledWriter(
+    Json::Value const& root)
+{
   Json::StyledWriter writer;
-  rewrite = writer.write(root);
+  return writer.write(root);
+}
+static std::string useStyledStreamWriter(
+    Json::Value const& root)
+{
+  Json::StyledStreamWriter writer;
+  std::ostringstream sout;
+  writer.write(sout, root);
+  return sout.str();
+}
+static std::string useBuiltStyledStreamWriter(
+    Json::Value const& root)
+{
+  Json::StreamWriterBuilder builder;
+  return Json::writeString(builder, root);
+}
+static int rewriteValueTree(
+    const std::string& rewritePath,
+    const Json::Value& root,
+    Options::writeFuncType write,
+    std::string* rewrite)
+{
+  *rewrite = write(root);
   FILE* fout = fopen(rewritePath.c_str(), "wt");
   if (!fout) {
     printf("Failed to create rewrite file: %s\n", rewritePath.c_str());
     return 2;
   }
-  fprintf(fout, "%s\n", rewrite.c_str());
+  fprintf(fout, "%s\n", rewrite->c_str());
   fclose(fout);
   return 0;
 }
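
The use*Writer() helpers above collapse every writer back-end to one std::string(Json::Value const&) signature, so the test runner can swap writers through a single function pointer. Below is a minimal stand-alone sketch of the same dispatch; only StyledWriter, StreamWriterBuilder, and Json::writeString come from the diff itself, the surrounding names are illustrative:

    #include <json/json.h>
    #include <cstdio>
    #include <string>

    typedef std::string (*writeFuncType)(Json::Value const&);

    static std::string viaStyledWriter(Json::Value const& v) {
      Json::StyledWriter w;        // classic writer API
      return w.write(v);
    }

    static std::string viaBuilder(Json::Value const& v) {
      Json::StreamWriterBuilder b; // builder API, as in useBuiltStyledStreamWriter()
      return Json::writeString(b, v);
    }

    int main() {
      Json::Value v;
      v["answer"] = 42;
      writeFuncType write = &viaBuilder; // swap the back-end here
      std::printf("%s\n", write(v).c_str());
      return 0;
    }
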
@@ -194,84 +228,98 @@ static int printUsage(const char* argv[]) {
   return 3;
 }
-int parseCommandLine(int argc,
-                     const char* argv[],
-                     Json::Features& features,
-                     std::string& path,
-                     bool& parseOnly) {
-  parseOnly = false;
+static int parseCommandLine(
+    int argc, const char* argv[], Options* opts)
+{
+  opts->parseOnly = false;
+  opts->write = &useStyledWriter;
   if (argc < 2) {
     return printUsage(argv);
   }
   int index = 1;
-  if (std::string(argv[1]) == "--json-checker") {
-    features = Json::Features::strictMode();
-    parseOnly = true;
+  if (std::string(argv[index]) == "--json-checker") {
+    opts->features = Json::Features::strictMode();
+    opts->parseOnly = true;
     ++index;
   }
-  if (std::string(argv[1]) == "--json-config") {
+  if (std::string(argv[index]) == "--json-config") {
     printConfig();
     return 3;
   }
+  if (std::string(argv[index]) == "--json-writer") {
+    ++index;
+    std::string const writerName(argv[index++]);
+    if (writerName == "StyledWriter") {
+      opts->write = &useStyledWriter;
+    } else if (writerName == "StyledStreamWriter") {
+      opts->write = &useStyledStreamWriter;
+    } else if (writerName == "BuiltStyledStreamWriter") {
+      opts->write = &useBuiltStyledStreamWriter;
+    } else {
+      printf("Unknown '--json-writer %s'\n", writerName.c_str());
+      return 4;
+    }
+  }
   if (index == argc || index + 1 < argc) {
     return printUsage(argv);
   }
-  path = argv[index];
+  opts->path = argv[index];
   return 0;
 }
-int main(int argc, const char* argv[]) {
-  std::string path;
-  Json::Features features;
-  bool parseOnly;
-  int exitCode = parseCommandLine(argc, argv, features, path, parseOnly);
-  if (exitCode != 0) {
-    return exitCode;
+static int runTest(Options const& opts)
+{
+  int exitCode = 0;
+  std::string input = readInputTestFile(opts.path.c_str());
+  if (input.empty()) {
+    printf("Failed to read input or empty input: %s\n", opts.path.c_str());
+    return 3;
   }
+  std::string basePath = removeSuffix(opts.path, ".json");
+  if (!opts.parseOnly && basePath.empty()) {
+    printf("Bad input path. Path does not end with '.expected':\n%s\n",
+           opts.path.c_str());
+    return 3;
+  }
+  std::string const actualPath = basePath + ".actual";
+  std::string const rewritePath = basePath + ".rewrite";
+  std::string const rewriteActualPath = basePath + ".actual-rewrite";
+  Json::Value root;
+  exitCode = parseAndSaveValueTree(
+      input, actualPath, "input",
+      opts.features, opts.parseOnly, &root);
+  if (exitCode || opts.parseOnly) {
+    return exitCode;
+  }
+  std::string rewrite;
+  exitCode = rewriteValueTree(rewritePath, root, opts.write, &rewrite);
+  if (exitCode) {
+    return exitCode;
+  }
+  Json::Value rewriteRoot;
+  exitCode = parseAndSaveValueTree(
+      rewrite, rewriteActualPath, "rewrite",
+      opts.features, opts.parseOnly, &rewriteRoot);
+  if (exitCode) {
+    return exitCode;
+  }
+  return 0;
+}
+int main(int argc, const char* argv[]) {
+  Options opts;
+  int exitCode = parseCommandLine(argc, argv, &opts);
+  if (exitCode != 0) {
+    printf("Failed to parse command-line.");
+    return exitCode;
+  }
   try {
-    std::string input = readInputTestFile(path.c_str());
-    if (input.empty()) {
-      printf("Failed to read input or empty input: %s\n", path.c_str());
-      return 3;
-    }
-    std::string basePath = removeSuffix(argv[1], ".json");
-    if (!parseOnly && basePath.empty()) {
-      printf("Bad input path. Path does not end with '.expected':\n%s\n",
-             path.c_str());
-      return 3;
-    }
-    std::string actualPath = basePath + ".actual";
-    std::string rewritePath = basePath + ".rewrite";
-    std::string rewriteActualPath = basePath + ".actual-rewrite";
-    Json::Value root;
-    exitCode = parseAndSaveValueTree(
-        input, actualPath, "input", root, features, parseOnly);
-    if (exitCode == 0 && !parseOnly) {
-      std::string rewrite;
-      exitCode = rewriteValueTree(rewritePath, root, rewrite);
-      if (exitCode == 0) {
-        Json::Value rewriteRoot;
-        exitCode = parseAndSaveValueTree(rewrite,
-                                         rewriteActualPath,
-                                         "rewrite",
-                                         rewriteRoot,
-                                         features,
-                                         parseOnly);
-      }
-    }
+    return runTest(opts);
   }
   catch (const std::exception& e) {
     printf("Unhandled exception:\n%s\n", e.what());
-    exitCode = 1;
+    return 1;
   }
-  return exitCode;
 }
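
runTest() now owns the parse / rewrite / re-parse cycle that the old main() inlined inside its try block. A condensed sketch of that round-trip using only calls visible in this diff (file output elided; roundTrip is an illustrative name, not part of the test runner):

    #include <json/json.h>
    #include <string>

    // Parse a document, rewrite it with a writer, then parse the rewritten
    // text again -- the invariant runTest() checks via the .actual-rewrite file.
    static bool roundTrip(std::string const& input, Json::Features const& features) {
      Json::Reader reader(features);
      Json::Value root;
      if (!reader.parse(input, root))
        return false;            // parse of the original failed
      Json::StyledWriter writer;
      std::string rewrite = writer.write(root);
      Json::Value rewriteRoot;
      return reader.parse(rewrite, rewriteRoot); // rewrite must stay parseable
    }
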


@@ -1,15 +1,10 @@
 OPTION(JSONCPP_LIB_BUILD_SHARED "Build jsoncpp_lib as a shared library." OFF)
+OPTION(JSONCPP_LIB_BUILD_STATIC "Build jsoncpp_lib static library." ON)
 IF(BUILD_SHARED_LIBS)
     SET(JSONCPP_LIB_BUILD_SHARED ON)
 ENDIF(BUILD_SHARED_LIBS)
-IF(JSONCPP_LIB_BUILD_SHARED)
-    SET(JSONCPP_LIB_TYPE SHARED)
-    ADD_DEFINITIONS( -DJSON_DLL_BUILD )
-ELSE(JSONCPP_LIB_BUILD_SHARED)
-    SET(JSONCPP_LIB_TYPE STATIC)
-ENDIF(JSONCPP_LIB_BUILD_SHARED)
 if( CMAKE_COMPILER_IS_GNUCXX )
     #Get compiler version.
     execute_process( COMMAND ${CMAKE_CXX_COMPILER} -dumpversion
@@ -36,25 +31,14 @@ SET( PUBLIC_HEADERS
 SOURCE_GROUP( "Public API" FILES ${PUBLIC_HEADERS} )
-ADD_LIBRARY( jsoncpp_lib ${JSONCPP_LIB_TYPE}
-             ${PUBLIC_HEADERS}
-             json_tool.h
-             json_reader.cpp
-             json_batchallocator.h
-             json_valueiterator.inl
-             json_value.cpp
-             json_writer.cpp
-             version.h.in
-             )
-SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES OUTPUT_NAME jsoncpp )
-SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES VERSION ${JSONCPP_VERSION} SOVERSION ${JSONCPP_VERSION_MAJOR} )
-IF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
-    TARGET_INCLUDE_DIRECTORIES( jsoncpp_lib PUBLIC
-        $<INSTALL_INTERFACE:${INCLUDE_INSTALL_DIR}>
-        $<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/${JSONCPP_INCLUDE_DIR}>
-        )
-ENDIF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
+SET(jsoncpp_sources
+    json_tool.h
+    json_reader.cpp
+    json_batchallocator.h
+    json_valueiterator.inl
+    json_value.cpp
+    json_writer.cpp
+    version.h.in)
 # Install instructions for this target
 IF(JSONCPP_WITH_CMAKE_PACKAGE)
@@ -63,8 +47,40 @@ ELSE(JSONCPP_WITH_CMAKE_PACKAGE)
     SET(INSTALL_EXPORT)
 ENDIF(JSONCPP_WITH_CMAKE_PACKAGE)
-INSTALL( TARGETS jsoncpp_lib ${INSTALL_EXPORT}
+IF(JSONCPP_LIB_BUILD_SHARED)
+    ADD_DEFINITIONS( -DJSON_DLL_BUILD )
+    ADD_LIBRARY(jsoncpp_lib SHARED ${PUBLIC_HEADERS} ${jsoncpp_sources})
+    SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES VERSION ${JSONCPP_VERSION} SOVERSION ${JSONCPP_VERSION_MAJOR})
+    SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES OUTPUT_NAME jsoncpp )
+    INSTALL( TARGETS jsoncpp_lib ${INSTALL_EXPORT}
         RUNTIME DESTINATION ${RUNTIME_INSTALL_DIR}
         LIBRARY DESTINATION ${LIBRARY_INSTALL_DIR}
-        ARCHIVE DESTINATION ${ARCHIVE_INSTALL_DIR}
-)
+        ARCHIVE DESTINATION ${ARCHIVE_INSTALL_DIR})
+    IF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
+        TARGET_INCLUDE_DIRECTORIES( jsoncpp_lib PUBLIC
+            $<INSTALL_INTERFACE:${INCLUDE_INSTALL_DIR}>
+            $<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/${JSONCPP_INCLUDE_DIR}>)
+    ENDIF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
+ENDIF()
+IF(JSONCPP_LIB_BUILD_STATIC)
+    ADD_LIBRARY(jsoncpp_lib_static STATIC ${PUBLIC_HEADERS} ${jsoncpp_sources})
+    SET_TARGET_PROPERTIES( jsoncpp_lib_static PROPERTIES VERSION ${JSONCPP_VERSION} SOVERSION ${JSONCPP_VERSION_MAJOR})
+    SET_TARGET_PROPERTIES( jsoncpp_lib_static PROPERTIES OUTPUT_NAME jsoncpp )
+    INSTALL( TARGETS jsoncpp_lib_static ${INSTALL_EXPORT}
+        RUNTIME DESTINATION ${RUNTIME_INSTALL_DIR}
+        LIBRARY DESTINATION ${LIBRARY_INSTALL_DIR}
+        ARCHIVE DESTINATION ${ARCHIVE_INSTALL_DIR})
+    IF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
+        TARGET_INCLUDE_DIRECTORIES( jsoncpp_lib_static PUBLIC
+            $<INSTALL_INTERFACE:${INCLUDE_INSTALL_DIR}>
+            $<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/${JSONCPP_INCLUDE_DIR}>
+            )
+    ENDIF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
+ENDIF()
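
Only the shared target is compiled with -DJSON_DLL_BUILD. This compare view does not show include/json/config.h, but a macro of that name is conventionally wired to export/import decoration roughly as below; treat this as a sketch of the idea, not the verbatim header:

    // Hypothetical sketch: how a JSON_DLL_BUILD-style macro typically selects
    // DLL decoration. The real config.h is not part of this diff.
    #if defined(JSON_DLL_BUILD)
    #define JSON_API __declspec(dllexport) // building the DLL itself
    #elif defined(JSON_DLL)
    #define JSON_API __declspec(dllimport) // consuming the DLL
    #else
    #define JSON_API                       // static or non-Windows build
    #endif

    namespace Json {
    class JSON_API Value; // exported classes carry the decoration
    }
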

File diff suppressed because it is too large.


@@ -31,11 +31,13 @@ namespace Json {
 #if defined(__ARMEL__)
 #define ALIGNAS(byte_alignment) __attribute__((aligned(byte_alignment)))
 #else
+// This exists for binary compatibility only. Use nullRef.
+static const Value null;
 #define ALIGNAS(byte_alignment)
 #endif
 static const unsigned char ALIGNAS(8) kNull[sizeof(Value)] = { 0 };
 const unsigned char& kNullRef = kNull[0];
-const Value& Value::null = reinterpret_cast<const Value&>(kNullRef);
+const Value& Value::nullRef = reinterpret_cast<const Value&>(kNullRef);
 const Int Value::minInt = Int(~(UInt(-1) / 2));
 const Int Value::maxInt = Int(UInt(-1) / 2);
@@ -141,15 +143,17 @@ Value::CommentInfo::~CommentInfo() {
     releaseStringValue(comment_);
 }
-void Value::CommentInfo::setComment(const char* text) {
-  if (comment_)
+void Value::CommentInfo::setComment(const char* text, size_t len) {
+  if (comment_) {
     releaseStringValue(comment_);
+    comment_ = 0;
+  }
   JSON_ASSERT(text != 0);
   JSON_ASSERT_MESSAGE(
       text[0] == '\0' || text[0] == '/',
       "in Json::Value::setComment(): Comments must start with /");
   // It seems that /**/ style comments are acceptable as well.
-  comment_ = duplicateStringValue(text);
+  comment_ = duplicateStringValue(text, len);
 }
 // //////////////////////////////////////////////////////////////////
@@ -189,8 +193,9 @@ void Value::CZString::swap(CZString& other) {
   std::swap(index_, other.index_);
 }
-Value::CZString& Value::CZString::operator=(CZString other) {
-  swap(other);
+Value::CZString &Value::CZString::operator=(const CZString &other) {
+  CZString temp(other);
+  swap(temp);
   return *this;
 }
@@ -328,7 +333,7 @@ Value::Value(const Value& other)
       itemIsUsed_(0)
 #endif
       ,
-      comments_(0), start_(other.start_), limit_(other.limit_) {
+      comments_(0) {
   switch (type_) {
   case nullValue:
   case intValue:
@@ -340,7 +345,7 @@ Value::Value(const Value& other)
   case stringValue:
     if (other.value_.string_) {
       value_.string_ = duplicateStringValue(other.value_.string_);
-      allocated_ |= true;
+      allocated_ = true;
     } else {
       value_.string_ = 0;
       allocated_ = false;
@@ -367,7 +372,8 @@ Value::Value(const Value& other)
     for (int comment = 0; comment < numberOfCommentPlacement; ++comment) {
       const CommentInfo& otherComment = other.comments_[comment];
       if (otherComment.comment_)
-        comments_[comment].setComment(otherComment.comment_);
+        comments_[comment].setComment(
+            otherComment.comment_, strlen(otherComment.comment_));
     }
   }
 }
@@ -405,8 +411,9 @@ Value::~Value() {
     delete[] comments_;
 }
-Value& Value::operator=(Value other) {
-  swap(other);
+Value &Value::operator=(const Value &other) {
+  Value temp(other);
+  swap(temp);
   return *this;
 }
@@ -423,8 +430,6 @@ void Value::swapPayload(Value& other) {
 void Value::swap(Value& other) {
   swapPayload(other);
   std::swap(comments_, other.comments_);
-  std::swap(start_, other.start_);
-  std::swap(limit_, other.limit_);
 }
 ValueType Value::type() const { return type_; }
@@ -799,8 +804,6 @@ void Value::clear() {
   JSON_ASSERT_MESSAGE(type_ == nullValue || type_ == arrayValue ||
                           type_ == objectValue,
                       "in Json::Value::clear(): requires complex value");
-  start_ = 0;
-  limit_ = 0;
   switch (type_) {
 #ifndef JSON_VALUE_USE_INTERNAL_MAP
   case arrayValue:
@@ -854,7 +857,7 @@ Value& Value::operator[](ArrayIndex index) {
   if (it != value_.map_->end() && (*it).first == key)
     return (*it).second;
-  ObjectValues::value_type defaultValue(key, null);
+  ObjectValues::value_type defaultValue(key, nullRef);
   it = value_.map_->insert(it, defaultValue);
   return (*it).second;
 #else
@@ -874,16 +877,16 @@ const Value& Value::operator[](ArrayIndex index) const {
       type_ == nullValue || type_ == arrayValue,
       "in Json::Value::operator[](ArrayIndex)const: requires arrayValue");
   if (type_ == nullValue)
-    return null;
+    return nullRef;
 #ifndef JSON_VALUE_USE_INTERNAL_MAP
   CZString key(index);
   ObjectValues::const_iterator it = value_.map_->find(key);
   if (it == value_.map_->end())
-    return null;
+    return nullRef;
   return (*it).second;
 #else
   Value* value = value_.array_->find(index);
-  return value ? *value : null;
+  return value ? *value : nullRef;
 #endif
 }
@@ -905,8 +908,6 @@ void Value::initBasic(ValueType type, bool allocated) {
   itemIsUsed_ = 0;
 #endif
   comments_ = 0;
-  start_ = 0;
-  limit_ = 0;
 }
 Value& Value::resolveReference(const char* key, bool isStatic) {
@@ -922,7 +923,7 @@ Value& Value::resolveReference(const char* key, bool isStatic) {
   if (it != value_.map_->end() && (*it).first == actualKey)
     return (*it).second;
-  ObjectValues::value_type defaultValue(actualKey, null);
+  ObjectValues::value_type defaultValue(actualKey, nullRef);
   it = value_.map_->insert(it, defaultValue);
   Value& value = (*it).second;
   return value;
@@ -933,7 +934,7 @@ Value& Value::resolveReference(const char* key, bool isStatic) {
 Value Value::get(ArrayIndex index, const Value& defaultValue) const {
   const Value* value = &((*this)[index]);
-  return value == &null ? defaultValue : *value;
+  return value == &nullRef ? defaultValue : *value;
 }
 bool Value::isValidIndex(ArrayIndex index) const { return index < size(); }
@@ -943,16 +944,16 @@ const Value& Value::operator[](const char* key) const {
       type_ == nullValue || type_ == objectValue,
       "in Json::Value::operator[](char const*)const: requires objectValue");
   if (type_ == nullValue)
-    return null;
+    return nullRef;
 #ifndef JSON_VALUE_USE_INTERNAL_MAP
   CZString actualKey(key, CZString::noDuplication);
   ObjectValues::const_iterator it = value_.map_->find(actualKey);
   if (it == value_.map_->end())
-    return null;
+    return nullRef;
   return (*it).second;
 #else
   const Value* value = value_.map_->find(key);
-  return value ? *value : null;
+  return value ? *value : nullRef;
 #endif
 }
@@ -982,7 +983,7 @@ Value& Value::append(const Value& value) { return (*this)[size()] = value; }
 Value Value::get(const char* key, const Value& defaultValue) const {
   const Value* value = &((*this)[key]);
-  return value == &null ? defaultValue : *value;
+  return value == &nullRef ? defaultValue : *value;
 }
 Value Value::get(const std::string& key, const Value& defaultValue) const {
@@ -1018,7 +1019,7 @@ Value Value::removeMember(const char* key) {
   JSON_ASSERT_MESSAGE(type_ == nullValue || type_ == objectValue,
                       "in Json::Value::removeMember(): requires objectValue");
   if (type_ == nullValue)
-    return null;
+    return nullRef;
   Value removed; // null
   removeMember(key, &removed);
@@ -1066,7 +1067,7 @@ Value Value::get(const CppTL::ConstString& key,
 bool Value::isMember(const char* key) const {
   const Value* value = &((*this)[key]);
-  return value != &null;
+  return value != &nullRef;
 }
 bool Value::isMember(const std::string& key) const {
@@ -1225,14 +1226,22 @@ bool Value::isArray() const { return type_ == arrayValue; }
 bool Value::isObject() const { return type_ == objectValue; }
-void Value::setComment(const char* comment, CommentPlacement placement) {
+void Value::setComment(const char* comment, size_t len, CommentPlacement placement) {
   if (!comments_)
     comments_ = new CommentInfo[numberOfCommentPlacement];
-  comments_[placement].setComment(comment);
+  if ((len > 0) && (comment[len-1] == '\n')) {
+    // Always discard trailing newline, to aid indentation.
+    len -= 1;
+  }
+  comments_[placement].setComment(comment, len);
+}
+void Value::setComment(const char* comment, CommentPlacement placement) {
+  setComment(comment, strlen(comment), placement);
 }
 void Value::setComment(const std::string& comment, CommentPlacement placement) {
-  setComment(comment.c_str(), placement);
+  setComment(comment.c_str(), comment.length(), placement);
 }
 bool Value::hasComment(CommentPlacement placement) const {
@@ -1245,14 +1254,6 @@ std::string Value::getComment(CommentPlacement placement) const {
   return "";
 }
-void Value::setOffsetStart(size_t start) { start_ = start; }
-void Value::setOffsetLimit(size_t limit) { limit_ = limit; }
-size_t Value::getOffsetStart() const { return start_; }
-size_t Value::getOffsetLimit() const { return limit_; }
 std::string Value::toStyledString() const {
   StyledWriter writer;
   return writer.write(*this);
@@ -1472,7 +1473,7 @@ const Value& Path::resolve(const Value& root) const {
         // Error: unable to resolve path (object value expected at position...)
       }
       node = &((*node)[arg.key_]);
-      if (node == &Value::null) {
+      if (node == &Value::nullRef) {
         // Error: unable to resolve path (object has no member named '' at
         // position...)
       }
@@ -1493,7 +1494,7 @@ Value Path::resolve(const Value& root, const Value& defaultValue) const {
       if (!node->isObject())
         return defaultValue;
       node = &((*node)[arg.key_]);
-      if (node == &Value::null)
+      if (node == &Value::nullRef)
         return defaultValue;
     }
   }
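
With nullRef aliasing the kNull bytes, a lookup miss is detectable by address while the 0.6.0 binary layout is preserved. A short sketch of the behavior that depends on this sentinel, using only get() and isMember() as shown above:

    #include <json/json.h>
    #include <cassert>

    int main() {
      Json::Value obj(Json::objectValue);
      // A missing key resolves to the shared null sentinel, so get() can
      // detect "absent" by address and return the default instead...
      Json::Value v = obj.get("missing", Json::Value(7));
      assert(v.asInt() == 7);
      // ...and the const lookup did not insert anything into the object.
      assert(!obj.isMember("missing"));
      return 0;
    }
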

View File

@@ -77,7 +77,7 @@ ValueIteratorBase::difference_type
 ValueIteratorBase::computeDistance(const SelfType& other) const {
 #ifndef JSON_VALUE_USE_INTERNAL_MAP
 #ifdef JSON_USE_CPPTL_SMALLMAP
-  return current_ - other.current_;
+  return other.current_ - current_;
 #else
   // Iterator for null value are initialized using the default
   // constructor, which initialize current_ to the default
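
The small-map branch had the operands reversed, so iterator differences came out negated in builds with JSON_USE_CPPTL_SMALLMAP defined. A sketch of the invariant the fix restores; it assumes the iterator's operator- forwards to computeDistance(), which is what this change targets:

    #include <json/json.h>
    #include <cassert>

    int main() {
      Json::Value arr(Json::arrayValue);
      arr.append(1);
      arr.append(2);
      arr.append(3);
      // end - begin must be +3; with the operands swapped in
      // computeDistance() it would come out as -3.
      assert(arr.end() - arr.begin() == 3);
      return 0;
    }
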


@@ -7,13 +7,16 @@
 #include <json/writer.h>
 #include "json_tool.h"
 #endif // if !defined(JSON_IS_AMALGAMATION)
+#include <iomanip>
+#include <memory>
+#include <sstream>
 #include <utility>
+#include <set>
+#include <stdexcept>
 #include <assert.h>
+#include <math.h>
 #include <stdio.h>
 #include <string.h>
-#include <sstream>
-#include <iomanip>
-#include <math.h>
 #if defined(_MSC_VER) && _MSC_VER < 1500 // VC++ 8.0 and below
 #include <float.h>
@@ -33,6 +36,12 @@
 namespace Json {
+#if __cplusplus >= 201103L
+typedef std::unique_ptr<StreamWriter> StreamWriterPtr;
+#else
+typedef std::auto_ptr<StreamWriter> StreamWriterPtr;
+#endif
 static bool containsControlCharacter(const char* str) {
   while (*str) {
     if (isControlCharacter(*(str++)))
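
The StreamWriterPtr alias gives one owning-pointer spelling under both pre-C++11 (std::auto_ptr) and C++11 (std::unique_ptr) builds. A usage sketch under C++11; the newStreamWriter() factory is not shown in this compare view, so take that call as an assumption:

    #include <json/json.h>
    #include <iostream>
    #include <memory>

    int main() {
      Json::Value root;
      root["ok"] = true;
      Json::StreamWriterBuilder builder;
      // Same role as StreamWriterPtr: sole ownership of the factory result.
      // Pre-C++11 builds would spell this std::auto_ptr<Json::StreamWriter>.
      std::unique_ptr<Json::StreamWriter> writer(builder.newStreamWriter());
      writer->write(root, &std::cout);
      return 0;
    }
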
@@ -183,28 +192,21 @@ Writer::~Writer() {}
 // //////////////////////////////////////////////////////////////////
 FastWriter::FastWriter()
-    : yamlCompatiblityEnabled_(false), dropNullPlaceholders_(false),
-      omitEndingLineFeed_(false) {}
+    : yamlCompatiblityEnabled_(false) {}
 void FastWriter::enableYAMLCompatibility() { yamlCompatiblityEnabled_ = true; }
-void FastWriter::dropNullPlaceholders() { dropNullPlaceholders_ = true; }
-void FastWriter::omitEndingLineFeed() { omitEndingLineFeed_ = true; }
 std::string FastWriter::write(const Value& root) {
   document_ = "";
   writeValue(root);
-  if (!omitEndingLineFeed_)
-    document_ += "\n";
+  document_ += "\n";
   return document_;
 }
 void FastWriter::writeValue(const Value& value) {
   switch (value.type()) {
   case nullValue:
-    if (!dropNullPlaceholders_)
-      document_ += "null";
+    document_ += "null";
     break;
   case intValue:
     document_ += valueToString(value.asLargestInt());
@@ -376,6 +378,9 @@ bool StyledWriter::isMultineArray(const Value& value) {
   addChildValues_ = true;
   int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
   for (int index = 0; index < size; ++index) {
+    if (hasCommentForValue(value[index])) {
+      isMultiLine = true;
+    }
     writeValue(value[index]);
     lineLength += int(childValues_[index].length());
   }
@@ -463,7 +468,10 @@ void StyledStreamWriter::write(std::ostream& out, const Value& root) {
   document_ = &out;
   addChildValues_ = false;
   indentString_ = "";
+  indented_ = true;
   writeCommentBeforeValue(root);
+  if (!indented_) writeIndent();
+  indented_ = true;
   writeValue(root);
   writeCommentAfterValueOnSameLine(root);
   *document_ << "\n";
@@ -539,8 +547,10 @@ void StyledStreamWriter::writeArrayValue(const Value& value) {
       if (hasChildValue)
         writeWithIndent(childValues_[index]);
       else {
-        writeIndent();
+        if (!indented_) writeIndent();
+        indented_ = true;
         writeValue(childValue);
+        indented_ = false;
       }
       if (++index == size) {
         writeCommentAfterValueOnSameLine(childValue);
@@ -581,6 +591,9 @@ bool StyledStreamWriter::isMultineArray(const Value& value) {
   addChildValues_ = true;
   int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
   for (int index = 0; index < size; ++index) {
+    if (hasCommentForValue(value[index])) {
+      isMultiLine = true;
+    }
     writeValue(value[index]);
     lineLength += int(childValues_[index].length());
   }
@@ -598,24 +611,17 @@ void StyledStreamWriter::pushValue(const std::string& value) {
 }
 void StyledStreamWriter::writeIndent() {
-  /*
-    Some comments in this method would have been nice. ;-)
-    if ( !document_.empty() )
-    {
-      char last = document_[document_.length()-1];
-      if ( last == ' ' )      // already indented
-        return;
-      if ( last != '\n' )     // Comments may add new-line
-        *document_ << '\n';
-    }
-  */
+  // blep intended this to look at the so-far-written string
+  // to determine whether we are already indented, but
+  // with a stream we cannot do that. So we rely on some saved state.
+  // The caller checks indented_.
   *document_ << '\n' << indentString_;
 }
 void StyledStreamWriter::writeWithIndent(const std::string& value) {
-  writeIndent();
+  if (!indented_) writeIndent();
   *document_ << value;
+  indented_ = false;
 }
 void StyledStreamWriter::indent() { indentString_ += indentation_; }
@@ -628,19 +634,30 @@ void StyledStreamWriter::unindent() {
 void StyledStreamWriter::writeCommentBeforeValue(const Value& root) {
   if (!root.hasComment(commentBefore))
     return;
-  *document_ << root.getComment(commentBefore);
-  *document_ << "\n";
+  if (!indented_) writeIndent();
+  const std::string& comment = root.getComment(commentBefore);
+  std::string::const_iterator iter = comment.begin();
+  while (iter != comment.end()) {
+    *document_ << *iter;
+    if (*iter == '\n' &&
+       (iter != comment.end() && *(iter + 1) == '/'))
+      // writeIndent();  // would include newline
+      *document_ << indentString_;
+    ++iter;
+  }
+  indented_ = false;
 }
 void StyledStreamWriter::writeCommentAfterValueOnSameLine(const Value& root) {
   if (root.hasComment(commentAfterOnSameLine))
-    *document_ << " " + root.getComment(commentAfterOnSameLine);
+    *document_ << ' ' << root.getComment(commentAfterOnSameLine);
   if (root.hasComment(commentAfter)) {
-    *document_ << "\n";
+    writeIndent();
     *document_ << root.getComment(commentAfter);
-    *document_ << "\n";
   }
+  indented_ = false;
 }
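
The indented_ flag gives comment and value emission one shared notion of "already at the start of an indented line", so writeIndent() fires at most once per line. A small usage sketch; per Value::setComment() earlier in this compare, the comment text must start with '/':

    #include <json/json.h>
    #include <iostream>

    int main() {
      Json::Value root;
      root["size"] = 3;
      root["size"].setComment("// number of entries", Json::commentBefore);
      Json::StyledStreamWriter writer;
      writer.write(std::cout, root); // the comment lands on its own line,
                                     // indented once, above "size" : 3
      return 0;
    }
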
bool StyledStreamWriter::hasCommentForValue(const Value& value) { bool StyledStreamWriter::hasCommentForValue(const Value& value) {
@@ -649,9 +666,376 @@ bool StyledStreamWriter::hasCommentForValue(const Value& value) {
value.hasComment(commentAfter); value.hasComment(commentAfter);
} }
std::ostream& operator<<(std::ostream& sout, const Value& root) { //////////////////////////
Json::StyledStreamWriter writer; // BuiltStyledStreamWriter
writer.write(sout, root);
/// Scoped enums are not available until C++11.
struct CommentStyle {
/// Decide whether to write comments.
enum Enum {
None, ///< Drop all comments.
Most, ///< Recover odd behavior of previous versions (not implemented yet).
All ///< Keep all comments.
};
};
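// Note (per newStreamWriter() below): only "All" and "None" are mapped onto
// this enum; CommentStyle::Most is declared but not reachable yet.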
struct BuiltStyledStreamWriter : public StreamWriter
{
BuiltStyledStreamWriter(
std::string const& indentation,
CommentStyle::Enum cs,
std::string const& colonSymbol,
std::string const& nullSymbol,
std::string const& endingLineFeedSymbol);
virtual int write(Value const& root, std::ostream* sout);
private:
void writeValue(Value const& value);
void writeArrayValue(Value const& value);
bool isMultineArray(Value const& value);
void pushValue(std::string const& value);
void writeIndent();
void writeWithIndent(std::string const& value);
void indent();
void unindent();
void writeCommentBeforeValue(Value const& root);
void writeCommentAfterValueOnSameLine(Value const& root);
static bool hasCommentForValue(const Value& value);
typedef std::vector<std::string> ChildValues;
ChildValues childValues_;
std::string indentString_;
int rightMargin_;
std::string indentation_;
CommentStyle::Enum cs_;
std::string colonSymbol_;
std::string nullSymbol_;
std::string endingLineFeedSymbol_;
bool addChildValues_ : 1;
bool indented_ : 1;
};
BuiltStyledStreamWriter::BuiltStyledStreamWriter(
std::string const& indentation,
CommentStyle::Enum cs,
std::string const& colonSymbol,
std::string const& nullSymbol,
std::string const& endingLineFeedSymbol)
: rightMargin_(74)
, indentation_(indentation)
, cs_(cs)
, colonSymbol_(colonSymbol)
, nullSymbol_(nullSymbol)
, endingLineFeedSymbol_(endingLineFeedSymbol)
, addChildValues_(false)
, indented_(false)
{
}
int BuiltStyledStreamWriter::write(Value const& root, std::ostream* sout)
{
sout_ = sout;
addChildValues_ = false;
indented_ = true;
indentString_ = "";
writeCommentBeforeValue(root);
if (!indented_) writeIndent();
indented_ = true;
writeValue(root);
writeCommentAfterValueOnSameLine(root);
*sout_ << endingLineFeedSymbol_;
sout_ = NULL;
return 0;
}
void BuiltStyledStreamWriter::writeValue(Value const& value) {
switch (value.type()) {
case nullValue:
pushValue(nullSymbol_);
break;
case intValue:
pushValue(valueToString(value.asLargestInt()));
break;
case uintValue:
pushValue(valueToString(value.asLargestUInt()));
break;
case realValue:
pushValue(valueToString(value.asDouble()));
break;
case stringValue:
pushValue(valueToQuotedString(value.asCString()));
break;
case booleanValue:
pushValue(valueToString(value.asBool()));
break;
case arrayValue:
writeArrayValue(value);
break;
case objectValue: {
Value::Members members(value.getMemberNames());
if (members.empty())
pushValue("{}");
else {
writeWithIndent("{");
indent();
Value::Members::iterator it = members.begin();
for (;;) {
std::string const& name = *it;
Value const& childValue = value[name];
writeCommentBeforeValue(childValue);
writeWithIndent(valueToQuotedString(name.c_str()));
*sout_ << colonSymbol_;
writeValue(childValue);
if (++it == members.end()) {
writeCommentAfterValueOnSameLine(childValue);
break;
}
*sout_ << ",";
writeCommentAfterValueOnSameLine(childValue);
}
unindent();
writeWithIndent("}");
}
} break;
}
}
void BuiltStyledStreamWriter::writeArrayValue(Value const& value) {
unsigned size = value.size();
if (size == 0)
pushValue("[]");
else {
bool isMultiLine = (cs_ == CommentStyle::All) || isMultineArray(value);
if (isMultiLine) {
writeWithIndent("[");
indent();
bool hasChildValue = !childValues_.empty();
unsigned index = 0;
for (;;) {
Value const& childValue = value[index];
writeCommentBeforeValue(childValue);
if (hasChildValue)
writeWithIndent(childValues_[index]);
else {
if (!indented_) writeIndent();
indented_ = true;
writeValue(childValue);
indented_ = false;
}
if (++index == size) {
writeCommentAfterValueOnSameLine(childValue);
break;
}
*sout_ << ",";
writeCommentAfterValueOnSameLine(childValue);
}
unindent();
writeWithIndent("]");
} else // output on a single line
{
assert(childValues_.size() == size);
*sout_ << "[";
if (!indentation_.empty()) *sout_ << " ";
for (unsigned index = 0; index < size; ++index) {
if (index > 0)
*sout_ << ", ";
*sout_ << childValues_[index];
}
if (!indentation_.empty()) *sout_ << " ";
*sout_ << "]";
}
}
}
bool BuiltStyledStreamWriter::isMultineArray(Value const& value) {
int size = value.size();
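// Quick check: every element costs at least "x, " (about 3 chars), so
// size * 3 already past the right margin forces multi-line without
// rendering any children.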
bool isMultiLine = size * 3 >= rightMargin_;
childValues_.clear();
for (int index = 0; index < size && !isMultiLine; ++index) {
Value const& childValue = value[index];
isMultiLine =
isMultiLine || ((childValue.isArray() || childValue.isObject()) &&
childValue.size() > 0);
}
if (!isMultiLine) // check if line length > max line length
{
childValues_.reserve(size);
addChildValues_ = true;
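// While addChildValues_ is set, pushValue() buffers each rendered child
// into childValues_ instead of writing to the stream; writeArrayValue()
// reuses those strings afterward.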
int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
for (int index = 0; index < size; ++index) {
if (hasCommentForValue(value[index])) {
isMultiLine = true;
}
writeValue(value[index]);
lineLength += int(childValues_[index].length());
}
addChildValues_ = false;
isMultiLine = isMultiLine || lineLength >= rightMargin_;
}
return isMultiLine;
}
void BuiltStyledStreamWriter::pushValue(std::string const& value) {
if (addChildValues_)
childValues_.push_back(value);
else
*sout_ << value;
}
void BuiltStyledStreamWriter::writeIndent() {
// blep intended this to look at the so-far-written string
// to determine whether we are already indented, but
// with a stream we cannot do that. So we rely on some saved state.
// The caller checks indented_.
if (!indentation_.empty()) {
// In this case, drop newlines too.
*sout_ << '\n' << indentString_;
}
}
void BuiltStyledStreamWriter::writeWithIndent(std::string const& value) {
if (!indented_) writeIndent();
*sout_ << value;
indented_ = false;
}
void BuiltStyledStreamWriter::indent() { indentString_ += indentation_; }
void BuiltStyledStreamWriter::unindent() {
assert(indentString_.size() >= indentation_.size());
indentString_.resize(indentString_.size() - indentation_.size());
}
void BuiltStyledStreamWriter::writeCommentBeforeValue(Value const& root) {
if (cs_ == CommentStyle::None) return;
if (!root.hasComment(commentBefore))
return;
if (!indented_) writeIndent();
const std::string& comment = root.getComment(commentBefore);
std::string::const_iterator iter = comment.begin();
while (iter != comment.end()) {
*sout_ << *iter;
if (*iter == '\n' &&
((iter + 1) != comment.end() && *(iter + 1) == '/'))
// writeIndent(); // would write extra newline
*sout_ << indentString_;
++iter;
}
indented_ = false;
}
void BuiltStyledStreamWriter::writeCommentAfterValueOnSameLine(Value const& root) {
if (cs_ == CommentStyle::None) return;
if (root.hasComment(commentAfterOnSameLine))
*sout_ << " " + root.getComment(commentAfterOnSameLine);
if (root.hasComment(commentAfter)) {
writeIndent();
*sout_ << root.getComment(commentAfter);
}
}
// static
bool BuiltStyledStreamWriter::hasCommentForValue(const Value& value) {
return value.hasComment(commentBefore) ||
value.hasComment(commentAfterOnSameLine) ||
value.hasComment(commentAfter);
}
///////////////
// StreamWriter
StreamWriter::StreamWriter()
: sout_(NULL)
{
}
StreamWriter::~StreamWriter()
{
}
StreamWriter::Factory::~Factory()
{}
StreamWriterBuilder::StreamWriterBuilder()
{
setDefaults(&settings_);
}
StreamWriterBuilder::~StreamWriterBuilder()
{}
StreamWriter* StreamWriterBuilder::newStreamWriter() const
{
std::string indentation = settings_["indentation"].asString();
std::string cs_str = settings_["commentStyle"].asString();
bool eyc = settings_["enableYAMLCompatibility"].asBool();
bool dnp = settings_["dropNullPlaceholders"].asBool();
CommentStyle::Enum cs = CommentStyle::All;
if (cs_str == "All") {
cs = CommentStyle::All;
} else if (cs_str == "None") {
cs = CommentStyle::None;
} else {
throw std::runtime_error("commentStyle must be 'All' or 'None'");
}
std::string colonSymbol = " : ";
if (eyc) {
colonSymbol = ": ";
} else if (indentation.empty()) {
colonSymbol = ":";
}
std::string nullSymbol = "null";
if (dnp) {
nullSymbol = "";
}
std::string endingLineFeedSymbol = "";
return new BuiltStyledStreamWriter(
indentation, cs,
colonSymbol, nullSymbol, endingLineFeedSymbol);
}
static void getValidWriterKeys(std::set<std::string>* valid_keys)
{
valid_keys->clear();
valid_keys->insert("indentation");
valid_keys->insert("commentStyle");
valid_keys->insert("enableYAMLCompatibility");
valid_keys->insert("dropNullPlaceholders");
}
bool StreamWriterBuilder::validate(Json::Value* invalid) const
{
Json::Value my_invalid;
if (!invalid) invalid = &my_invalid; // so we do not need to test for NULL
Json::Value& inv = *invalid;
bool valid = true;
std::set<std::string> valid_keys;
getValidWriterKeys(&valid_keys);
Value::Members keys = settings_.getMemberNames();
size_t n = keys.size();
for (size_t i = 0; i < n; ++i) {
std::string const& key = keys[i];
if (valid_keys.find(key) == valid_keys.end()) {
inv[key] = settings_[key];
valid = false;
}
}
return valid;
}
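// A minimal usage sketch of validate() (illustrative only; this helper is
// not part of the library): any key outside getValidWriterKeys() is copied
// into 'invalid' and the call reports failure.
static bool exampleValidateSketch()
{
  Json::StreamWriterBuilder b;
  b.settings_["indentatoin"] = "  "; // misspelled key, on purpose
  Json::Value bad;
  bool ok = b.validate(&bad); // ok is false; bad contains {"indentatoin": "  "}
  return ok;
}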
// static
void StreamWriterBuilder::setDefaults(Json::Value* settings)
{
//! [StreamWriterBuilderDefaults]
(*settings)["commentStyle"] = "All";
(*settings)["indentation"] = "\t";
(*settings)["enableYAMLCompatibility"] = false;
(*settings)["dropNullPlaceholders"] = false;
//! [StreamWriterBuilderDefaults]
}
std::string writeString(StreamWriter::Factory const& builder, Value const& root) {
std::ostringstream sout;
StreamWriterPtr const writer(builder.newStreamWriter());
writer->write(root, &sout);
return sout.str();
}
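// A minimal usage sketch of the builder API above (illustrative only;
// 'exampleWriteSketch' is not part of the library): configure settings_
// with keys from getValidWriterKeys(), then serialize via writeString().
static std::string exampleWriteSketch()
{
  Json::Value root;
  root["name"] = "jsoncpp";
  Json::StreamWriterBuilder builder;
  builder.settings_["indentation"] = "  ";    // default is "\t"
  builder.settings_["commentStyle"] = "None"; // default is "All"
  return Json::writeString(builder, root);    // => "{\n  \"name\" : \"jsoncpp\"\n}"
}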
std::ostream& operator<<(std::ostream& sout, Value const& root) {
StreamWriterBuilder builder;
StreamWriterPtr const writer(builder.newStreamWriter());
writer->write(root, &sout);
return sout;
}


@@ -9,7 +9,15 @@ ADD_EXECUTABLE( jsoncpp_test
main.cpp
)
TARGET_LINK_LIBRARIES(jsoncpp_test jsoncpp_lib)
IF(JSONCPP_LIB_BUILD_SHARED)
TARGET_LINK_LIBRARIES(jsoncpp_test jsoncpp_lib)
ELSE(JSONCPP_LIB_BUILD_SHARED)
TARGET_LINK_LIBRARIES(jsoncpp_test jsoncpp_lib_static)
ENDIF(JSONCPP_LIB_BUILD_SHARED)
# another way to solve issue #90
#set_target_properties(jsoncpp_test PROPERTIES COMPILE_FLAGS -ffloat-store)
# Run unit tests in post-build
# (default cmake workflow hides away the test result into a file, resulting in poor dev workflow?!?)


@@ -323,7 +323,7 @@ void Runner::listTests() const {
}
int Runner::runCommandLine(int argc, const char* argv[]) const {
// typedef std::deque<std::string> TestNames;
Runner subrunner;
for (int index = 1; index < argc; ++index) {
std::string opt = argv[index];


@@ -178,8 +178,8 @@ private:
template <typename T, typename U>
TestResult& checkEqual(TestResult& result,
T expected,
U actual,
const char* file,
unsigned int line,
const char* expr) {
@@ -214,7 +214,7 @@ TestResult& checkStringEqual(TestResult& result,
#define JSONTEST_ASSERT_PRED(expr) \
{ \
JsonTest::PredicateContext _minitest_Context = { \
result_->predicateId_, __FILE__, __LINE__, #expr, NULL, NULL \
}; \
result_->predicateStackTail_->next_ = &_minitest_Context; \
result_->predicateId_ += 1; \


@@ -7,6 +7,7 @@
#include <json/config.h>
#include <json/json.h>
#include <stdexcept>
#include <cstring>
// Make numeric limits more convenient to talk about.
// Assumes int type in 32 bits.
@@ -1496,34 +1497,15 @@ JSONTEST_FIXTURE(ValueTest, typeChecksThrowExceptions) {
#endif
}
JSONTEST_FIXTURE(ValueTest, offsetAccessors) {
Json::Value x;
JSONTEST_ASSERT(x.getOffsetStart() == 0);
JSONTEST_ASSERT(x.getOffsetLimit() == 0);
x.setOffsetStart(10);
x.setOffsetLimit(20);
JSONTEST_ASSERT(x.getOffsetStart() == 10);
JSONTEST_ASSERT(x.getOffsetLimit() == 20);
Json::Value y(x);
JSONTEST_ASSERT(y.getOffsetStart() == 10);
JSONTEST_ASSERT(y.getOffsetLimit() == 20);
Json::Value z;
z.swap(y);
JSONTEST_ASSERT(z.getOffsetStart() == 10);
JSONTEST_ASSERT(z.getOffsetLimit() == 20);
JSONTEST_ASSERT(y.getOffsetStart() == 0);
JSONTEST_ASSERT(y.getOffsetLimit() == 0);
}
struct StreamWriterTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(StreamWriterTest, dropNullPlaceholders) {
Json::StreamWriterBuilder b;
Json::Value nullValue;
b.settings_["dropNullPlaceholders"] = false;
JSONTEST_ASSERT(Json::writeString(b, nullValue) == "null");
b.settings_["dropNullPlaceholders"] = true;
JSONTEST_ASSERT(Json::writeString(b, nullValue) == "");
}
struct ReaderTest : JsonTest::TestCase {};
@@ -1534,7 +1516,6 @@ JSONTEST_FIXTURE(ReaderTest, parseWithNoErrors) {
bool ok = reader.parse("{ \"property\" : \"value\" }", root);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(reader.getFormattedErrorMessages().size() == 0);
JSONTEST_ASSERT(reader.getStructuredErrors().size() == 0);
}
JSONTEST_FIXTURE(ReaderTest, parseWithNoErrorsTestingOffsets) {
@@ -1546,25 +1527,6 @@ JSONTEST_FIXTURE(ReaderTest, parseWithNoErrorsTestingOffsets) {
root);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(reader.getFormattedErrorMessages().size() == 0);
JSONTEST_ASSERT(reader.getStructuredErrors().size() == 0);
JSONTEST_ASSERT(root["property"].getOffsetStart() == 15);
JSONTEST_ASSERT(root["property"].getOffsetLimit() == 34);
JSONTEST_ASSERT(root["property"][0].getOffsetStart() == 16);
JSONTEST_ASSERT(root["property"][0].getOffsetLimit() == 23);
JSONTEST_ASSERT(root["property"][1].getOffsetStart() == 25);
JSONTEST_ASSERT(root["property"][1].getOffsetLimit() == 33);
JSONTEST_ASSERT(root["obj"].getOffsetStart() == 44);
JSONTEST_ASSERT(root["obj"].getOffsetLimit() == 76);
JSONTEST_ASSERT(root["obj"]["nested"].getOffsetStart() == 57);
JSONTEST_ASSERT(root["obj"]["nested"].getOffsetLimit() == 60);
JSONTEST_ASSERT(root["obj"]["bool"].getOffsetStart() == 71);
JSONTEST_ASSERT(root["obj"]["bool"].getOffsetLimit() == 75);
JSONTEST_ASSERT(root["null"].getOffsetStart() == 87);
JSONTEST_ASSERT(root["null"].getOffsetLimit() == 91);
JSONTEST_ASSERT(root["false"].getOffsetStart() == 103);
JSONTEST_ASSERT(root["false"].getOffsetLimit() == 108);
JSONTEST_ASSERT(root.getOffsetStart() == 0);
JSONTEST_ASSERT(root.getOffsetLimit() == 110);
}
JSONTEST_FIXTURE(ReaderTest, parseWithOneError) {
@@ -1575,13 +1537,6 @@ JSONTEST_FIXTURE(ReaderTest, parseWithOneError) {
JSONTEST_ASSERT(reader.getFormattedErrorMessages() ==
"* Line 1, Column 15\n Syntax error: value, object or array "
"expected.\n");
std::vector<Json::Reader::StructuredError> errors =
reader.getStructuredErrors();
JSONTEST_ASSERT(errors.size() == 1);
JSONTEST_ASSERT(errors.at(0).offset_start == 14);
JSONTEST_ASSERT(errors.at(0).offset_limit == 15);
JSONTEST_ASSERT(errors.at(0).message ==
"Syntax error: value, object or array expected.");
}
JSONTEST_FIXTURE(ReaderTest, parseChineseWithOneError) {
@@ -1592,13 +1547,6 @@ JSONTEST_FIXTURE(ReaderTest, parseChineseWithOneError) {
JSONTEST_ASSERT(reader.getFormattedErrorMessages() ==
"* Line 1, Column 19\n Syntax error: value, object or array "
"expected.\n");
std::vector<Json::Reader::StructuredError> errors =
reader.getStructuredErrors();
JSONTEST_ASSERT(errors.size() == 1);
JSONTEST_ASSERT(errors.at(0).offset_start == 18);
JSONTEST_ASSERT(errors.at(0).offset_limit == 19);
JSONTEST_ASSERT(errors.at(0).message ==
"Syntax error: value, object or array expected.");
}
JSONTEST_FIXTURE(ReaderTest, parseWithDetailError) {
@@ -1609,12 +1557,255 @@ JSONTEST_FIXTURE(ReaderTest, parseWithDetailError) {
JSONTEST_ASSERT(reader.getFormattedErrorMessages() ==
"* Line 1, Column 16\n Bad escape sequence in string\nSee "
"Line 1, Column 20 for detail.\n");
std::vector<Json::Reader::StructuredError> errors =
reader.getStructuredErrors();
JSONTEST_ASSERT(errors.size() == 1);
JSONTEST_ASSERT(errors.at(0).offset_start == 15);
JSONTEST_ASSERT(errors.at(0).offset_limit == 23);
JSONTEST_ASSERT(errors.at(0).message == "Bad escape sequence in string");
}
struct CharReaderTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderTest, parseWithNoErrors) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] = "{ \"property\" : \"value\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs.size() == 0);
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithNoErrorsTestingOffsets) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"property\" : [\"value\", \"value2\"], \"obj\" : "
"{ \"nested\" : 123, \"bool\" : true}, \"null\" : "
"null, \"false\" : false }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs.size() == 0);
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithOneError) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"property\" :: \"value\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT(errs ==
"* Line 1, Column 15\n Syntax error: value, object or array "
"expected.\n");
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseChineseWithOneError) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"pr佐藤erty\" :: \"value\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT(errs ==
"* Line 1, Column 19\n Syntax error: value, object or array "
"expected.\n");
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithDetailError) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"property\" : \"v\\alue\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT(errs ==
"* Line 1, Column 16\n Bad escape sequence in string\nSee "
"Line 1, Column 20 for detail.\n");
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithStackLimit) {
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
"{ \"property\" : \"value\" }";
{
b.settings_["stackLimit"] = 2;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL("value", root["property"]);
delete reader;
}
{
b.settings_["stackLimit"] = 1;
Json::CharReader* reader(b.newCharReader());
std::string errs;
JSONTEST_ASSERT_THROWS(reader->parse(
doc, doc + std::strlen(doc),
&root, &errs));
delete reader;
}
}
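// A minimal application-side sketch of the CharReaderBuilder API exercised
// above (illustrative only; 'parseDocumentSketch' is not part of the
// library): parse a [begin, end) range and collect errors in a string.
static bool parseDocumentSketch(char const* doc, Json::Value* root, std::string* errs)
{
  Json::CharReaderBuilder b;
  b.settings_["stackLimit"] = 1000; // parsing deeper than this throws
  Json::CharReader* reader(b.newCharReader());
  bool ok = reader->parse(doc, doc + std::strlen(doc), root, errs);
  delete reader;
  return ok;
}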
struct CharReaderFailIfExtraTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, issue164) {
// This is interpreted as a string value followed by a colon.
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
" \"property\" : \"value\" }";
{
b.settings_["failIfExtra"] = false;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL("property", root);
delete reader;
}
{
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT_STRING_EQUAL(errs,
"* Line 1, Column 13\n"
" Extra non-whitespace after JSON value.\n");
JSONTEST_ASSERT_EQUAL("property", root);
delete reader;
}
{
b.settings_["failIfExtra"] = false;
b.strictMode(&b.settings_);
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT_STRING_EQUAL(errs,
"* Line 1, Column 13\n"
" Extra non-whitespace after JSON value.\n");
JSONTEST_ASSERT_EQUAL("property", root);
delete reader;
}
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, issue107) {
// This is interpreted as an int value followed by a colon.
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
"1:2:3";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT_STRING_EQUAL(
"* Line 1, Column 2\n"
" Extra non-whitespace after JSON value.\n",
errs);
JSONTEST_ASSERT_EQUAL(1, root.asInt());
delete reader;
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, commentAfterObject) {
Json::CharReaderBuilder b;
Json::Value root;
{
char const doc[] =
"{ \"property\" : \"value\" } //trailing\n//comment\n";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL("value", root["property"]);
delete reader;
}
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, commentAfterArray) {
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
"[ \"property\" , \"value\" ] //trailing\n//comment\n";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL("value", root[1u]);
delete reader;
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, commentAfterBool) {
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
" true /*trailing\ncomment*/";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(true, root.asBool());
delete reader;
}
struct IteratorTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(IteratorTest, distance) {
Json::Value json;
json["k1"] = "a";
json["k2"] = "b";
int dist;
std::string str;
for (Json::ValueIterator it = json.begin(); it != json.end(); ++it) {
dist = it - json.begin();
str = it->asString().c_str();
}
JSONTEST_ASSERT_EQUAL(1, dist);
JSONTEST_ASSERT_STRING_EQUAL("b", str);
}
int main(int argc, const char* argv[]) {
@@ -1637,9 +1828,10 @@ int main(int argc, const char* argv[]) {
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, compareArray);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, compareObject);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, compareType);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, offsetAccessors);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, typeChecksThrowExceptions);
JSONTEST_REGISTER_FIXTURE(runner, StreamWriterTest, dropNullPlaceholders);
JSONTEST_REGISTER_FIXTURE(runner, ReaderTest, parseWithNoErrors);
JSONTEST_REGISTER_FIXTURE(
runner, ReaderTest, parseWithNoErrorsTestingOffsets);
@@ -1647,7 +1839,21 @@ int main(int argc, const char* argv[]) {
JSONTEST_REGISTER_FIXTURE(runner, ReaderTest, parseChineseWithOneError);
JSONTEST_REGISTER_FIXTURE(runner, ReaderTest, parseWithDetailError);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithNoErrors);
JSONTEST_REGISTER_FIXTURE(
runner, CharReaderTest, parseWithNoErrorsTestingOffsets);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithOneError);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseChineseWithOneError);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithDetailError);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithStackLimit);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, issue164);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, issue107);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, commentAfterObject);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, commentAfterArray);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, commentAfterBool);
JSONTEST_REGISTER_FIXTURE(runner, IteratorTest, distance);
return runner.runCommandLine(argc, argv);
}


@@ -4,7 +4,7 @@ import os
paths = []
for pattern in [ '*.actual', '*.actual-rewrite', '*.rewrite', '*.process-output' ]:
    paths += glob.glob('data/' + pattern)
for path in paths:
    os.unlink(path)


@@ -0,0 +1,4 @@
// Comment for array
.=[]
// Comment within array
.[0]="one-element"


@@ -0,0 +1,5 @@
// Comment for array
[
// Comment within array
"one-element"
]


@@ -1,10 +1,10 @@
from __future__ import print_function
import glob
import os.path
for path in glob.glob('*.json'):
    text = file(path,'rt').read()
    target = os.path.splitext(path)[0] + '.expected'
    if os.path.exists(target):
        print('skipping:', target)
    else:
        print('creating:', target)


@@ -15,50 +15,50 @@ actual_path = base_path + '.actual'
rewrite_path = base_path + '.rewrite'
rewrite_actual_path = base_path + '.actual-rewrite'

def valueTreeToString(fout, value, path = '.'):
    ty = type(value)
    if ty is types.DictType:
        fout.write('%s={}\n' % path)
        suffix = path[-1] != '.' and '.' or ''
        names = value.keys()
        names.sort()
        for name in names:
            valueTreeToString(fout, value[name], path + suffix + name)
    elif ty is types.ListType:
        fout.write('%s=[]\n' % path)
        for index, childValue in zip(xrange(0,len(value)), value):
            valueTreeToString(fout, childValue, path + '[%d]' % index)
    elif ty is types.StringType:
        fout.write('%s="%s"\n' % (path,value))
    elif ty is types.IntType:
        fout.write('%s=%d\n' % (path,value))
    elif ty is types.FloatType:
        fout.write('%s=%.16g\n' % (path,value))
    elif value is True:
        fout.write('%s=true\n' % path)
    elif value is False:
        fout.write('%s=false\n' % path)
    elif value is None:
        fout.write('%s=null\n' % path)
    else:
        assert False, "Unexpected value type"

def parseAndSaveValueTree(input, actual_path):
    root = json.loads(input)
    fout = file(actual_path, 'wt')
    valueTreeToString(fout, root)
    fout.close()
    return root

def rewriteValueTree(value, rewrite_path):
    rewrite = json.dumps(value)
    #rewrite = rewrite[1:-1] # Somehow the string is quoted ! jsonpy bug ?
    file(rewrite_path, 'wt').write(rewrite + '\n')
    return rewrite

input = file(input_path, 'rt').read()
root = parseAndSaveValueTree(input, actual_path)
rewrite = rewriteValueTree(json.write(root), rewrite_path)
rewrite_root = parseAndSaveValueTree(rewrite, rewrite_actual_path)
sys.exit(0)


@@ -14,6 +14,7 @@ def getStatusOutput(cmd):
    Return int, unicode (for both Python 2 and 3).
    Note: os.popen().close() would return None for 0.
    """
    print(cmd, file=sys.stderr)
    pipe = os.popen(cmd)
    process_output = pipe.read()
    try:
@@ -25,11 +26,11 @@ def getStatusOutput(cmd):
        pass # python3
    status = pipe.close()
    return status, process_output
def compareOutputs(expected, actual, message):
    expected = expected.strip().replace('\r','').split('\n')
    actual = actual.strip().replace('\r','').split('\n')
    diff_line = 0
    max_line_to_compare = min(len(expected), len(actual))
    for index in range(0,max_line_to_compare):
        if expected[index].strip() != actual[index].strip():
            diff_line = index + 1
@@ -38,7 +39,7 @@ def compareOutputs( expected, actual, message ):
        diff_line = max_line_to_compare+1
    if diff_line == 0:
        return None
    def safeGetLine(lines, index):
        index += -1
        if index >= len(lines):
            return ''
@@ -48,64 +49,65 @@ def compareOutputs( expected, actual, message ):
                Actual:   '%s'
""" % (message, diff_line,
       safeGetLine(expected,diff_line),
       safeGetLine(actual,diff_line))

def safeReadFile(path):
    try:
        return open(path, 'rt', encoding = 'utf-8').read()
    except IOError as e:
        return '<File "%s" is missing: %s>' % (path,e)
def runAllTests(jsontest_executable_path, input_dir = None,
                use_valgrind=False, with_json_checker=False,
                writerClass='StyledWriter'):
    if not input_dir:
        input_dir = os.path.join(os.getcwd(), 'data')
    tests = glob(os.path.join(input_dir, '*.json'))
    if with_json_checker:
        test_jsonchecker = glob(os.path.join(input_dir, '../jsonchecker', '*.json'))
    else:
        test_jsonchecker = []
    failed_tests = []
    valgrind_path = use_valgrind and VALGRIND_CMD or ''
    for input_path in tests + test_jsonchecker:
        expect_failure = os.path.basename(input_path).startswith('fail')
        is_json_checker_test = (input_path in test_jsonchecker) or expect_failure
        print('TESTING:', input_path, end=' ')
        options = is_json_checker_test and '--json-checker' or ''
        options += ' --json-writer %s'%writerClass
        cmd = '%s%s %s "%s"' % ( valgrind_path, jsontest_executable_path, options,
                                 input_path)
        status, process_output = getStatusOutput(cmd)
        if is_json_checker_test:
            if expect_failure:
                if not status:
                    print('FAILED')
                    failed_tests.append((input_path, 'Parsing should have failed:\n%s' %
                                         safeReadFile(input_path)))
                else:
                    print('OK')
            else:
                if status:
                    print('FAILED')
                    failed_tests.append((input_path, 'Parsing failed:\n' + process_output))
                else:
                    print('OK')
        else:
            base_path = os.path.splitext(input_path)[0]
            actual_output = safeReadFile(base_path + '.actual')
            actual_rewrite_output = safeReadFile(base_path + '.actual-rewrite')
            open(base_path + '.process-output', 'wt', encoding = 'utf-8').write(process_output)
            if status:
                print('parsing failed')
                failed_tests.append((input_path, 'Parsing failed:\n' + process_output))
            else:
                expected_output_path = os.path.splitext(input_path)[0] + '.expected'
                expected_output = open(expected_output_path, 'rt', encoding = 'utf-8').read()
                detail = (compareOutputs(expected_output, actual_output, 'input')
                            or compareOutputs(expected_output, actual_rewrite_output, 'rewrite'))
                if detail:
                    print('FAILED')
                    failed_tests.append((input_path, detail))
                else:
                    print('OK')
@@ -117,7 +119,7 @@ def runAllTests( jsontest_executable_path, input_dir = None,
        print(failed_test[1])
        print()
        print('Test results: %d passed, %d failed.' % (len(tests)-len(failed_tests),
                                                       len(failed_tests)))
        return 1
    else:
        print('All %d tests passed.' % len(tests))
@@ -125,7 +127,7 @@ def runAllTests( jsontest_executable_path, input_dir = None,
def main():
    from optparse import OptionParser
    parser = OptionParser(usage="%prog [options] <path to jsontestrunner.exe> [test case directory]")
    parser.add_option("--valgrind",
              action="store_true", dest="valgrind", default=False,
              help="run all the tests using valgrind to detect memory leaks")
@@ -136,17 +138,32 @@ def main():
    options, args = parser.parse_args()
    if len(args) < 1 or len(args) > 2:
        parser.error('Must provide at least the path to the jsontestrunner executable.')
        sys.exit(1)
    jsontest_executable_path = os.path.normpath(os.path.abspath(args[0]))
    if len(args) > 1:
        input_path = os.path.normpath(os.path.abspath(args[1]))
    else:
        input_path = None
    status = runAllTests(jsontest_executable_path, input_path,
                         use_valgrind=options.valgrind,
                         with_json_checker=options.with_json_checker,
                         writerClass='StyledWriter')
    if status:
        sys.exit(status)
    status = runAllTests(jsontest_executable_path, input_path,
                         use_valgrind=options.valgrind,
                         with_json_checker=options.with_json_checker,
                         writerClass='StyledStreamWriter')
    if status:
        sys.exit(status)
    status = runAllTests(jsontest_executable_path, input_path,
                         use_valgrind=options.valgrind,
                         with_json_checker=options.with_json_checker,
                         writerClass='BuiltStyledStreamWriter')
    if status:
        sys.exit(status)

if __name__ == '__main__':
    main()


@@ -11,18 +11,18 @@ import optparse
VALGRIND_CMD = 'valgrind --tool=memcheck --leak-check=yes --undef-value-errors=yes'

class TestProxy(object):
    def __init__(self, test_exe_path, use_valgrind=False):
        self.test_exe_path = os.path.normpath(os.path.abspath(test_exe_path))
        self.use_valgrind = use_valgrind

    def run(self, options):
        if self.use_valgrind:
            cmd = VALGRIND_CMD.split()
        else:
            cmd = []
        cmd.extend([self.test_exe_path, '--test-auto'] + options)
        try:
            process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        except:
            print(cmd)
            raise
@@ -31,9 +31,9 @@ class TestProxy(object):
            return False, stdout
        return True, stdout

def runAllTests(exe_path, use_valgrind=False):
    test_proxy = TestProxy(exe_path, use_valgrind=use_valgrind)
    status, test_names = test_proxy.run(['--list-tests'])
    if not status:
        print("Failed to obtain unit tests list:\n" + test_names, file=sys.stderr)
        return 1
@@ -41,11 +41,11 @@ def runAllTests( exe_path, use_valgrind=False ):
    failures = []
    for name in test_names:
        print('TESTING %s:' % name, end=' ')
        succeed, result = test_proxy.run(['--test', name])
        if succeed:
            print('OK')
        else:
            failures.append((name, result))
            print('FAILED')
    failed_count = len(failures)
    pass_count = len(test_names) - failed_count
@@ -53,8 +53,7 @@ def runAllTests( exe_path, use_valgrind=False ):
        print()
        for name, result in failures:
            print(result)
        print('%d/%d tests passed (%d failure(s))' % ( pass_count, len(test_names), failed_count))
        return 1
    else:
        print('All %d tests passed' % len(test_names))
@@ -62,7 +61,7 @@ def runAllTests( exe_path, use_valgrind=False ):
def main():
    from optparse import OptionParser
    parser = OptionParser(usage="%prog [options] <path to test_lib_json.exe>")
    parser.add_option("--valgrind",
              action="store_true", dest="valgrind", default=False,
              help="run all the tests using valgrind to detect memory leaks")
@@ -70,11 +69,11 @@ def main():
    options, args = parser.parse_args()
    if len(args) != 1:
        parser.error('Must provide the path to the test_lib_json executable.')
        sys.exit(1)
    exit_code = runAllTests(args[0], use_valgrind=options.valgrind)
    sys.exit(exit_code)

if __name__ == '__main__':
    main()


@@ -1 +1 @@
0.8.2