Compare commits


279 Commits
1.1.0 ... 1.6.1

Author SHA1 Message Date
Christopher Dunn
9cb88d2ca6 1.6.1 <- 1.6.0 2015-03-31 15:07:14 -05:00
Christopher Dunn
363e51c0a9 Merge pull request #232 from cdunn2001/fix-snprintf
Fix snprintf

Well, it passes Travis. But when we have time, we should clean up how snprintf is used in both reader and writer.
2015-03-31 15:06:11 -05:00
Christopher Dunn
240ddb6a1b use std::snprintf for C++11 2015-03-31 15:04:24 -05:00
Baruch Siach
9dd77dc0ef Revert "Use std namespace for snprintf."
This reverts commit 1c58876185.

std::snprintf() is only available in C++11, which is not provided by
all compilers. Since the C library snprintf() can easily be used as a
replacement on Linux systems, this patch changes jsoncpp to use the C
library snprintf() instead of C++11 std::snprintf(), fixing the build error
below:

    src/lib_json/json_writer.cpp:33:18: error: 'snprintf' is not a member of 'std'

See #231, #224, and #218.
2015-03-31 15:04:24 -05:00
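
A minimal sketch of the portability pattern described above (the macro name here is illustrative, not the library's actual code): std::snprintf is guaranteed only by C++11's <cstdio>, while the plain C-library snprintf is available on Linux regardless.

    #include <cstdio>
    #include <string>

    #if __cplusplus >= 201103L
    #define my_snprintf std::snprintf  // C++11: <cstdio> guarantees std::snprintf
    #else
    #define my_snprintf snprintf       // pre-C++11: fall back to the C library symbol
    #endif

    std::string doubleToString(double value) {
      char buffer[36];
      my_snprintf(buffer, sizeof(buffer), "%.17g", value);
      return buffer;
    }
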
Christopher Dunn
244b1496e1 Merge pull request #225 from selaselah/master
fix find_program() bug: no result in not-win sys
2015-03-31 11:32:06 -05:00
selaselah
c083835261 fix find_program() bug: no result in not-win sys 2015-03-19 19:18:58 +08:00
Christopher Dunn
cbe7e7c9cb Merge pull request #221 from btolfa/forgotten-virtual-dtor
Added forgotten virtual dtor for `Json::CharReader::Factory`.

(Without this, the destructor of the derived `CharReaderBuilder` would not be called, which is a small memory leak.)
2015-03-15 13:49:24 -05:00
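
For context, a minimal sketch of the bug class (simplified, not the library's exact declarations):

    class CharReader;

    class Factory {
    public:
      virtual ~Factory() {}  // the forgotten virtual dtor: without it, deleting a
                             // derived CharReaderBuilder through a Factory* would
                             // skip the derived destructor and leak
      virtual CharReader* newCharReader() const = 0;
    };
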
Tengiz Sharafiev
be183def8f Update reader.h 2015-03-14 21:30:00 +03:00
Christopher Dunn
951bd3d05d Merge pull request #219 from cdunn2001/c-std-headers
Close #218. Fix #214.
2015-03-11 21:36:51 -05:00
Connor Manning
1c58876185 Use std namespace for snprintf. 2015-03-11 21:33:08 -05:00
Connor Manning
2f2034629e Constrain MSVC _isfinite to before 2013, remove duplicate includes. 2015-03-11 21:33:08 -05:00
Dani-Hub
7020451b44 Fix isfinite for MSVC. 2015-03-11 21:32:59 -05:00
Connor Manning
80497f102e Use C++ standard headers. 2015-03-10 18:48:45 -05:00
Dani-Hub
f9feb66be2 Change exception data member
from "reference to string" to "string" (Resolves the most serious part of issue #216)
2015-03-09 18:42:16 -05:00
Christopher Dunn
ed495edcc1 prefer ValueIterator::name() to ::memberName()
in case of embedded nulls
2015-03-08 14:35:00 -05:00
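
A short sketch of the distinction (hedged; signatures may vary by version): name() returns a std::string that preserves embedded '\0' bytes, while memberName() returns a C string that stops at the first NUL.

    #include <json/json.h>
    #include <iostream>

    void printKeySizes(const Json::Value& root) {
      for (Json::Value::const_iterator it = root.begin(); it != root.end(); ++it) {
        std::string key = it.name();  // safe even if the key contains '\0'
        // const char* raw = it.memberName();  // would truncate at the first NUL
        std::cout << key.size() << "-byte key\n";
      }
    }
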
Christopher Dunn
3c0a383877 Merge pull request #212 from cdunn2001/macro-deprec
close #210
2015-03-08 13:10:37 -05:00
Dani-Hub
5003983029 Make preprocessor query robust against older gcc versions 2015-03-08 13:07:27 -05:00
Dani-Hub
871b311e7e Provide JSONCPP_DEPRECATED definitions for clang and gcc 2015-03-08 13:07:27 -05:00
Christopher Dunn
cdbc35f6ac 1.6.0 2015-03-08 12:57:13 -05:00
Christopher Dunn
4e30c4fcdb comments 2015-03-08 12:56:32 -05:00
Christopher Dunn
0d33cb3639 Merge pull request #211 from cdunn2001/except
* Add Json::Exception and derivatives.
* Clarify when exceptions are thrown, to avoid crashes caused by malicious input.
* Use our own type (derived from std::exception) so they are trappable.
2015-03-08 12:50:34 -05:00
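
A sketch of the intended usage, assuming only what the PR text states (Json::Exception derives from std::exception, with RuntimeError and LogicError beneath it):

    #include <json/json.h>
    #include <iostream>
    #include <istream>

    void load(std::istream& in) {
      Json::Value root;
      try {
        in >> root;  // now throws on parse failure instead of asserting
      } catch (const Json::Exception& e) {  // traps RuntimeError and LogicError too
        std::cerr << "jsoncpp: " << e.what() << "\n";
      }
    }
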
Christopher Dunn
2250b3c29d use Json::RuntimeError 2015-03-08 12:44:55 -05:00
Christopher Dunn
9376368d86 use Json::LogicError in macros 2015-03-08 12:42:53 -05:00
Christopher Dunn
5383794cc9 Runtime/LogicError and throwers 2015-03-08 12:31:57 -05:00
Christopher Dunn
75279ccec2 base Json::Exception 2015-03-08 12:20:06 -05:00
Christopher Dunn
717b08695e clarify errors
* use macros for logic errors, not input errors
* throw on parsing failure in `operator>>()`, not assert
* throw on malloc, not assert
2015-03-08 12:06:22 -05:00
Christopher Dunn
ee4ea0ec3f delete debug code from test 2015-03-07 15:47:39 -06:00
Christopher Dunn
ce19001238 require length
Ugh! I meant to do this long ago. It would have caught my blunder.
2015-03-07 15:12:52 -06:00
Christopher Dunn
078f991c57 1.5.4 <- 1.5.3
important bug-fix (thx to datadiode@)
2015-03-07 14:52:01 -06:00
Christopher Dunn
72b5293695 Merge pull request #207 from cdunn2001/fix_CZString_copy_constructor
Fix czstring copy constructor
2015-03-07 14:49:54 -06:00
Christopher Dunn
a63d82d78a drop unused CString ctor case
`Value::CZString::CZString(char const* str, unsigned length, DuplicationPolicy allocate)` with `allocate == duplicate` does not happen.
2015-03-07 14:43:37 -06:00
datadiode
ee83f8891c Trivial fixes in CZString constructors. 2015-03-07 14:43:07 -06:00
Christopher Dunn
5c448687e1 fix ValueTest/zeroes* 2015-03-07 14:41:15 -06:00
Christopher Dunn
401e98269e old-style enum namespacing 2015-03-06 16:11:55 -06:00
Christopher Dunn
b2a7438d08 Merge pull request #205 from open-source-parsers/reject-dup-keys
[Shekhar (shakers007) wrote](https://sourceforge.net/p/jsoncpp/bugs/22/):

> As per RFC4627 (section 2.2), names within an object should be unique. When using JSONCPP's strict mode, parsing such an object should fail.
2015-03-06 12:58:55 -06:00
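
A sketch of the strict check, using the `rejectDupKeys` builder key introduced in the commits below (and the Json::Value-backed settings mechanism that appears later in this log):

    #include <json/json.h>
    #include <sstream>
    #include <string>

    bool parseRejectingDupKeys(const std::string& doc, Json::Value* root) {
      Json::CharReaderBuilder builder;
      builder["rejectDupKeys"] = true;  // per RFC 4627 2.2, object names should be unique
      std::istringstream in(doc);
      std::string errs;
      return Json::parseFromStream(builder, in, root, &errs);  // false for {"k":1,"k":2}
    }
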
Christopher Dunn
62ad140d18 rejectDupKeys 2015-03-06 12:39:05 -06:00
Christopher Dunn
527332d5d5 add rejectDupKeys feature - not yet impld 2015-03-06 12:38:58 -06:00
Christopher Dunn
cada3b951f test for repeated key in strictMode
https://sourceforge.net/p/jsoncpp/bugs/22/
2015-03-06 12:38:00 -06:00
Christopher Dunn
ff61752444 change str_ for cross-compilation
https://sourceforge.net/p/jsoncpp/bugs/59/
2015-03-06 10:31:46 -06:00
Christopher Dunn
7f439f4276 clarify operator= 2015-03-06 09:22:57 -06:00
Christopher Dunn
3976f17ffd test assignment over-writes comments, but swapPayload() does not 2015-03-06 09:16:19 -06:00
Christopher Dunn
80ca11bb41 test commentBefore
for issue #203
2015-03-06 05:55:19 -06:00
Christopher Dunn
2fc08b4ebd clarify which versions work with old compilers 2015-03-05 21:45:42 -06:00
Christopher Dunn
239c733ab5 1.5.3 <- 1.5.2 2015-03-05 18:27:52 -06:00
Christopher Dunn
295e73ff3c generate both version.h and version from CMakeLists.txt
This forces consistency, since they will be re-generated whenever
a git operation alters CMakeLists.txt. They are still in the repo
because users might not actually run cmake.
2015-03-05 18:27:39 -06:00
Christopher Dunn
2a840c105c had trouble finding Python on Windows
With this change, `make jsoncpp_check` will still fail if Python
is missing, so our CI tests are unaffected.
2015-03-05 17:42:12 -06:00
Christopher Dunn
7ec98dc9fe Merge pull request #202 from open-source-parsers/get-with-zero
`Value::get(key, default)` with zero
2015-03-05 16:56:56 -06:00
Christopher Dunn
0fd2875a44 fix get() for embedded zeroes in key
This method had been overlooked.
2015-03-05 16:47:29 -06:00
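
A hedged sketch of the length-aware lookup (a begin/end overload; the exact signature may differ by version). A plain C-string key would stop at the first '\0':

    #include <json/json.h>

    Json::Value lookup(const Json::Value& root) {
      static const char key[] = "a\0b";  // three significant bytes, one embedded NUL
      return root.get(key, key + 3, Json::Value("missing"));  // pass the full range
    }
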
Christopher Dunn
d31151d150 test get(key, default) 2015-03-05 16:44:50 -06:00
Christopher Dunn
f50145fbda Merge pull request #201 from open-source-parsers/vs
* Copy .dll for running unit-tests in VisualStudio.
* Stop using `do{}while(0)` idiom b/c VisualStudio warns.
* Fix a warning in a test.
2015-03-05 16:37:42 -06:00
Christopher Dunn
b3e6f3d70f drop do{}while(0) idiom
Rationale:
  * http://stackoverflow.com/questions/154136/do-while-and-if-else-statements-in-c-c-macros/154138#154138

But Visual Studio issues a warning: `warning C4127: conditional expression is constant`
  * http://stackoverflow.com/questions/1946445/c-c-how-to-use-the-do-while0-construct-without-compiler-warnings-like-c412
2015-03-05 15:26:29 -06:00
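
For reference, the idiom being dropped, as a generic sketch (not jsoncpp's actual macro):

    #include <iostream>

    void log(int v) { std::cout << v << "\n"; }

    // do { ... } while (0) turns the macro body into a single statement,
    // but MSVC flags the constant condition with warning C4127.
    #define LOG_PAIR(a, b) \
      do {                 \
        log(a);            \
        log(b);            \
      } while (0)

    int main() {
      if (true)
        LOG_PAIR(1, 2);  // safe inside an unbraced if/else -- the point of the idiom
      else
        log(3);
      return 0;
    }
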
Christopher Dunn
13e063c336 copy .dll for unit-test
Fix 2nd problem in issue #200.
* http://stackoverflow.com/questions/10671916/how-to-copy-dll-files-into-the-same-folder-as-the-executable-using-cmake

Q: What about the Python tests?
A: They are not normally run in Visual Studio. If desired, one can set PATH.
2015-03-05 15:23:20 -06:00
Christopher Dunn
f57da48f48 maybe address warning
cmake -DCMAKE_BUILD_TYPE=debug -DJSONCPP_LIB_BUILD_STATIC=OFF
-DJSONCPP_LIB_BUILD_SHARED=ON -G "Visual Studio 10" ../..

`potentially uninitialized local variable 'dist' (line 2212 of
test_lib_json/main.cpp)`
2015-03-05 14:41:42 -06:00
Christopher Dunn
eaa3fd8eca maybe fix DLL symbols for VS 2015-03-05 10:11:43 -06:00
Christopher Dunn
ff63d034e5 1.5.2 <- 1.5.1
* Fixed compile error for VS2013.
* Added `operator[]` to Builders. Recalling previous minor versions for
  bug-fixes.
2015-03-05 09:18:44 -06:00
Christopher Dunn
2aa4557e79 Merge pull request #199 from open-source-parsers/set-builders
Setting builders
2015-03-05 09:18:06 -06:00
Christopher Dunn
37dde9d29d fix example in docs 2015-03-05 09:18:06 -06:00
Christopher Dunn
c312dd5ef7 Builder::operator[] plus tests 2015-03-05 09:18:01 -06:00
Christopher Dunn
42d7e59fe0 fix compiler-error and warnings for VS2013
fix issue #200
2015-03-05 09:15:10 -06:00
Christopher Dunn
a8104a8035 1.5.1 <- 1.5.0
Fixed Builders ::validate(). Added tests.
2015-03-04 21:18:05 -06:00
Christopher Dunn
7b22768c33 test Builders::validate() 2015-03-04 21:17:03 -06:00
Christopher Dunn
19c49a459d fix Builders::validate()
(cherry picked from commit 626cfcdbb8)
2015-03-04 21:14:53 -06:00
Christopher Dunn
99822b27cd clarify python version 2015-03-03 16:17:11 -06:00
Christopher Dunn
8a70297869 fix inline doxygen comments 2015-03-03 16:17:08 -06:00
Christopher Dunn
24f544996f no struct init
The C struct initializer is not standard C++.
GCC and Clang handle this (at least in some versions) but some
compilers might not.
2015-03-03 10:15:09 -06:00
Christopher Dunn
0c91927da2 assertions should be logic_error 2015-03-03 09:45:33 -06:00
Christopher Dunn
493f6dcebe keep StaticString (!allocated_) for copy ctor 2015-03-03 09:36:22 -06:00
Christopher Dunn
eaa06355e1 test Json::Value::null 2015-03-03 08:45:52 -06:00
Christopher Dunn
effd732aa1 null -> nullRef 2015-03-03 01:25:33 -06:00
Christopher Dunn
70cd04d49a 1.5.0 <- 1.4.4 2015-03-03 00:45:57 -06:00
Christopher Dunn
9e49e3d84a Merge pull request #197 from cdunn2001/allow-zero
allow zeroes in strings
2015-03-03 00:44:19 -06:00
Christopher Dunn
2d653bd15d fix security hole for string-key-lengths > 2^30 2015-03-03 00:14:54 -06:00
Christopher Dunn
585b267595 tests for zeroes
* ValueTest/zeroes
* ValueTest/zeroesInKeys
2015-03-03 00:14:54 -06:00
Christopher Dunn
c28610fb5d fix StaticString test
* support zeroes in string_
* support zeroes in writer; provide getString(char**, unsigned*)
* valueToQuotedStringN(), isCC0(), etc
* allow zeroes for cpptl ConstString
* allocated => non-static
2015-03-03 00:14:54 -06:00
Christopher Dunn
a53283568f cp duplicateStringValue() 2015-03-03 00:14:53 -06:00
Christopher Dunn
ef21fbc785 doc new behavior 2015-03-03 00:14:53 -06:00
Christopher Dunn
25342bac13 support UTF-8 for const methods 2015-03-03 00:14:53 -06:00
Christopher Dunn
2b9abc3ebf Merge pull request #195 from cdunn2001/len-in-czstring
Store length in CZString
2015-03-03 00:07:03 -06:00
Christopher Dunn
e6b46e4503 stop computing strlen() in CZString 2015-03-02 23:50:59 -06:00
Christopher Dunn
8a77037320 actually store length in CZString 2015-03-02 23:50:59 -06:00
Christopher Dunn
57ad051f67 allow length in CZString 2015-03-02 23:50:59 -06:00
Christopher Dunn
b383fdc61e use memcmp in CZString
This is a loss of efficiency, but it prepares for an increase when we
have stored lengths.
2015-03-02 23:50:59 -06:00
Mattes D
5d79275a5b Added MSVC+CMake generated files to .gitignore.
pull #193
2015-03-02 23:50:14 -06:00
Christopher Dunn
c1e834a110 Merge pull request #194 from cdunn2001/master
test StaticString
2015-03-02 17:14:36 -06:00
Christopher Dunn
0570f9eefb test StaticString 2015-03-02 17:06:09 -06:00
Christopher Dunn
b947b0b3df Merge pull request #192 from marklakata/size_t
changed length argument of duplicateStringValue from unsigned int to size_t
2015-03-02 17:03:24 -06:00
Mark Lakata
8051cf6ba7 changed length argument of duplicateStringValue from unsigned int to size_t, to avoid warnings in Visual Studio. 2015-03-02 11:57:17 -08:00
Christopher Dunn
c8bf184ea9 Merge pull request #189 from open-source-parsers/skip-py-tests
skip python jsontestrunner by default
2015-02-26 09:55:21 -06:00
Christopher Dunn
9998094eee skip python jsontestrunner by default
To run these tests, in cmake build-dir:
    make jsoncpp_check

TravisCI is now set to run these always.

For now, the test_json_lib unit-tests will still run.

issue #187
2015-02-26 09:41:45 -06:00
Christopher Dunn
4ac4cac2be Merge pull request #185 from open-source-parsers/drop-internal-map
`JSON_VALUE_USE_INTERNAL_MAP` and `JSON_USE_SIMPLE_INTERNAL_ALLOCATOR` did not actually compile with current compilers. In fact, as far as I can tell, they haven't worked since just after the 0.6.0-rc2 release. And according to blep, there are bugs, e.g. [this one](https://sourceforge.net/p/jsoncpp/bugs/27). It's probably better to hide the implementation of Value (maybe in the next major release) and to put experiments like this into completely separate classes.
2015-02-25 10:12:20 -06:00
Christopher Dunn
4788764844 drop JSON_VALUE_USE_INTERNAL_MAP, JSON_USE_SIMPLE_INTERNAL_ALLOCATOR
And remove some old headers.

These were not actually compiling anymore, and there were outstanding,
known bugs, e.g. https://sourceforge.net/p/jsoncpp/bugs/27
2015-02-25 10:04:13 -06:00
Christopher Dunn
6c898441e3 test-amalgamate
Not tested via Travis (since this is outside cmake), but at least
we can more easily validate changes to `amalgamate.py`.
2015-02-25 10:04:12 -06:00
Christopher Dunn
7d1f656859 Merge pull request #183 from open-source-parsers/allow-single-quote
Allow single quote
2015-02-24 18:24:42 -06:00
Christopher Dunn
0c66e698fb allowSingleQuotes
issue #182
2015-02-24 15:49:45 -06:00
Christopher Dunn
b9229b7400 failing test for allowSingleQuotes 2015-02-23 16:40:06 -06:00
Christopher Dunn
f9db82af17 1.4.4 <- 1.4.3
* fixed Readers when `allowDropNullPlaceholders`
2015-02-19 15:37:57 -06:00
Christopher Dunn
ae71879549 Merge pull request #179 from cdunn2001/drop-null
Fix failures for `allowDroppedNullPlaceholders`.
2015-02-19 12:38:16 -06:00
Christopher Dunn
7b3683ccd1 apply fix to old Reader 2015-02-19 11:37:17 -06:00
Christopher Dunn
58499031a4 fix all cases from issue -- all pass! 2015-02-19 11:35:28 -06:00
Christopher Dunn
2c8c1ac0ec all 8 failing cases from issue 2015-02-19 11:35:20 -06:00
Christopher Dunn
c58e93b014 fix failing object case 2015-02-19 11:34:35 -06:00
Christopher Dunn
eed193e151 object cases; 1st passes, 2nd fails 2015-02-19 11:34:35 -06:00
Christopher Dunn
4382a7b585 cases 0,1,5,9,12 from issue -- passing 2015-02-19 11:32:42 -06:00
Christopher Dunn
30d923f155 1.4.3 <- 1.4.2 2015-02-18 09:20:28 -06:00
Christopher Dunn
2f4e40bc95 fix typo: issue #177 2015-02-18 09:17:06 -06:00
Christopher Dunn
505e086ebc Merge pull request #173 from cdunn2001/fix-include-amal
help use of amalgamated header

Reviewed by "Lightness Races in Orbit" at StackOverflow.
2015-02-15 13:11:38 -06:00
Christopher Dunn
c6582415d8 fix typos 2015-02-15 13:06:48 -06:00
Christopher Dunn
0ee7e2426f help use of amalgamated header
* Include from same-directory first, so `-I DIR` is rarely needed.
* Double-check that proper header has been included.

http://stackoverflow.com/questions/28510109/linking-problems-with-jsoncpp
2015-02-15 13:06:48 -06:00
Christopher Dunn
1522e4dfb1 Merge pull request #174 from kobigurk/kobigurk-patch-asserts
only throw exceptions when JSON_USE_EXCEPTION is defined
2015-02-15 12:05:28 -06:00
Kobi Gurkan
09b8670536 only throw exceptions when JSON_USE_EXCEPTION is defined
JSON_ASSERT now throws a runtime_error
2015-02-15 18:00:31 +02:00
Christopher Dunn
f164288646 help rebasing 2015-02-15 03:01:26 -06:00
Christopher Dunn
3bfd215938 1.4.2 <- 1.4.1
* Bug-fix for ValueIterator::operator-() (issue #169)
2015-02-15 02:49:34 -06:00
Christopher Dunn
400b744195 Merge pull request #172 from cdunn2001/master
Fix bug in ValueIteratorBase::operator- 

Fixes #169.
2015-02-15 02:44:17 -06:00
Christopher Dunn
bd55164089 reverse sense for CPPTL too 2015-02-15 02:38:31 -06:00
Kevin Grant
4c5832a0be Fix bug in ValueIteratorBase::operator- 2015-02-15 02:38:31 -06:00
Christopher Dunn
8ba9875962 IteratorTest 2015-02-15 02:38:31 -06:00
Christopher Dunn
9c91b995dd rules for signing and doc-generation 2015-02-14 10:20:45 -06:00
Christopher Dunn
e7233bf056 1.4.1 <- 1.4.0 2015-02-13 10:00:38 -06:00
Christopher Dunn
9c90456890 Merge pull request #167 from cdunn2001/fail-if-extra
Add `failIfExtra` feature to `CharReaderBuilder`.
2015-02-13 09:55:11 -06:00
Christopher Dunn
f4be815c86 failIfExtra
1. failing regression tests, from #164 and #107
2. implemented; tests pass
3. allow trailing comments
2015-02-13 09:39:08 -06:00
Christopher Dunn
aa13a8ba40 comments/minor typos 2015-02-13 09:38:49 -06:00
Christopher Dunn
da0fcfbaa2 link web docs 2015-02-12 11:45:21 -06:00
Christopher Dunn
3ebba5cea8 stop calling validate() in newReader/Writer()
By not calling validate(), we can add
non-invasive features which will be simply ignored when user-code
is compiled against an old version. That way, we can often
avoid a minor version-bump.

The user can call validate() himself if he prefers that behavior.
2015-02-11 11:15:32 -06:00
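
A sketch of the opt-in check (the bogus key is deliberate; validate() reports settings that newCharReader() would now silently ignore):

    #include <json/json.h>
    #include <iostream>

    void checkSettings() {
      Json::CharReaderBuilder builder;
      builder["typographical-mistake"] = true;  // unknown key, ignored by newCharReader()
      Json::Value bad;
      if (!builder.validate(&bad))              // opt-in strictness
        std::cerr << "unrecognized settings: " << bad << "\n";
    }
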
Christopher Dunn
acbf4eb2ef Merge pull request #166 from cdunn2001/stackLimit
Fixes #88 and #56.
2015-02-11 10:35:16 -06:00
Christopher Dunn
56df206847 limit stackDepth for old (deprecated) Json::Reader too
This is an improper solution. If multiple Readers exist,
then the effective stackLimit is reduced because of side-effects.
But our options are limited. We need to address the security
hole without breaking binary-compatibility.

However, this is not likely to cause any practical problems because:

* Anyone using `operator>>(istream, Json::Value)` will be using the
new code already
* Multiple Readers are uncommon.
* The stackLimit is quite high.
* Deeply nested JSON probably would have hit the system limits anyway.
2015-02-11 10:20:53 -06:00
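
A sketch of the guard (`stackLimit` is the builder key from these commits; whether overly deep input yields an error return or a throw may vary by version):

    #include <json/json.h>
    #include <sstream>
    #include <string>

    bool parseWithDepthCap(const std::string& doc, Json::Value* root) {
      Json::CharReaderBuilder builder;
      builder["stackLimit"] = 10;  // reject documents nested deeper than 10 levels
      std::istringstream in(doc);
      std::string errs;
      return Json::parseFromStream(builder, in, root, &errs);
    }

    // e.g. parseWithDepthCap("[[[[[[[[[[[[0]]]]]]]]]]]]", &root) exceeds the cap.
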
Christopher Dunn
4dca80da49 limit stackDepth 2015-02-11 10:20:47 -06:00
Christopher Dunn
249ad9f47f stackLimit 2015-02-11 10:01:58 -06:00
Christopher Dunn
99b8e856f6 stackLimit_ 2015-02-11 10:01:58 -06:00
Christopher Dunn
89b72e1653 test stackLimit 2015-02-11 10:01:58 -06:00
Christopher Dunn
2474989f24 Old -> Our 2015-02-11 09:48:24 -06:00
Christopher Dunn
315b8c9f2c 1st StreamWriterTest 2015-02-10 23:29:14 -06:00
Christopher Dunn
29501c4d9f clarify comments
And throw instead of return null for invalid settings.
2015-02-10 23:03:27 -06:00
Christopher Dunn
7796f20eab Merge pull request #165 from cdunn2001/master
Remove some experimental classes that are not needed for 1.4.0. This also helps 0.8.0 binary compatibility with 0.6.0-rc2.
2015-02-10 22:45:32 -06:00
Christopher Dunn
20d09676c2 drop experimental OldCompressingStreamWriterBuilder 2015-02-10 21:29:35 -06:00
Christopher Dunn
5a744708fc enableYAMLCompatibility and dropNullPlaceholders for StreamWriterBuilder 2015-02-10 21:28:13 -06:00
Christopher Dunn
07f0e9308d nullRef, since we had to add that kludge to 0.8.0 2015-02-10 21:28:13 -06:00
Christopher Dunn
052050df07 copy Features to OldFeatures 2015-02-10 17:01:08 -06:00
Christopher Dunn
435d2a2f8d passes 2015-02-10 17:01:08 -06:00
Christopher Dunn
6123bd1505 copy Reader impl to OldReader 2015-02-10 17:01:08 -06:00
Christopher Dunn
7477bcfa3a renames for OldReader 2015-02-10 17:01:08 -06:00
Christopher Dunn
5e3e68af2e OldReader copied from Reader 2015-02-10 17:01:08 -06:00
Christopher Dunn
04a607d95b Merge pull request #163 from cdunn2001/master
Reimplement the new Builders.

Issue #131.
2015-02-09 18:55:55 -06:00
Christopher Dunn
db75cdf21e mv CommentStyle to .cpp 2015-02-09 18:54:58 -06:00
Christopher Dunn
c41609b9f9 set output stream in write(), not in builder 2015-02-09 18:44:53 -06:00
Christopher Dunn
b56381a636 <stdexcept> 2015-02-09 18:29:11 -06:00
Christopher Dunn
f757c18ca0 add all features 2015-02-09 18:24:56 -06:00
Christopher Dunn
3cf9175bde remark defaults via doxygen snippet 2015-02-09 18:16:24 -06:00
Christopher Dunn
a9e1ab302d Builder::settings_
We use Json::Value to configure the builders so we can maintain
binary-compatibility easily.
2015-02-09 17:30:11 -06:00
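
A sketch of that mechanism: because the settings live in a Json::Value, new keys can be introduced without changing the builder's binary layout.

    #include <json/json.h>
    #include <string>

    std::string toTabbedString(const Json::Value& root) {
      Json::StreamWriterBuilder wbuilder;
      wbuilder["indentation"] = "\t";    // operator[] stores into settings_
      wbuilder["commentStyle"] = "None"; // keys unknown to an older library are ignored
      return Json::writeString(wbuilder, root);
    }
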
Christopher Dunn
694dbcb328 update docs, writeString() 2015-02-09 15:25:57 -06:00
Christopher Dunn
732abb80ef Merge pull request #162 from cdunn2001/master
Deprecate the new Builders.
2015-02-09 11:55:54 -06:00
Christopher Dunn
f3b3358a0e deprecate current Builders 2015-02-09 11:51:06 -06:00
Christopher Dunn
1357cddf1e deprecate Builders
see issue #131
2015-02-09 11:46:27 -06:00
Christopher Dunn
8df98f6112 deprecate old Reader; separate Advanced Usage section 2015-02-09 11:15:39 -06:00
Christopher Dunn
16bdfd8af3 --in=doc/web_doxyfile.in 2015-02-09 11:15:11 -06:00
Christopher Dunn
ce799b3aa3 copy doxyfile.in 2015-02-09 10:36:55 -06:00
Christopher Dunn
3a65581b20 drop an old impl 2015-02-09 09:54:26 -06:00
Christopher Dunn
6451412c99 simplify basic docs 2015-02-09 09:44:26 -06:00
Christopher Dunn
66a8ba255f clarify Builders 2015-02-09 01:29:43 -06:00
Christopher Dunn
249fd18114 put version into docs 2015-02-09 00:50:27 -06:00
Christopher Dunn
a587d04f77 Merge pull request #161 from cdunn2001/master
CharReader/Builder

I guess we should bump the patch-level version. We will set the version properly soon...
2015-02-08 13:25:08 -06:00
Christopher Dunn
2c1197c2c8 CharReader/Builder
* CharReaderBuilder is similar to StreamWriterBuilder.
* use rdbuf(), since getline(string) is not required to handle EOF as delimiter
2015-02-08 13:22:09 -06:00
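
A sketch of the rdbuf() point: slurping via rdbuf() reads to EOF unconditionally, whereas getline(string) is not required to treat EOF as a delimiter. (Raw new/delete matches the pre-C++11 constraint noted just below.)

    #include <json/json.h>
    #include <istream>
    #include <sstream>
    #include <string>

    bool parseStream(std::istream& in, Json::Value* root) {
      std::ostringstream ss;
      ss << in.rdbuf();  // read everything up to EOF
      std::string doc = ss.str();
      Json::CharReaderBuilder builder;
      Json::CharReader* reader = builder.newCharReader();
      std::string errs;
      bool ok = reader->parse(doc.data(), doc.data() + doc.size(), root, &errs);
      delete reader;
      return ok;
    }
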
Christopher Dunn
2a94618589 Merge pull request #160 from cdunn2001/master
rm unique_ptr<>/shared_ptr<>, for pre-C++11
2015-02-08 13:10:18 -06:00
Christopher Dunn
dee4602b8f rm unique_ptr<>/shared_ptr<>, for pre-C++11 2015-02-08 11:54:49 -06:00
Christopher Dunn
ea2d167a38 Merge pull request #158 from cdunn2001/travis-with-cmake-package
JSONCPP_WITH_CMAKE_PACKAGE in Travis

I guess we don't really need to build shared and static separately either. Saves a little time, maybe?
2015-02-07 12:24:58 -06:00
Christopher Dunn
41edda5ebe JSONCPP_WITH_CMAKE_PACKAGE in Travis 2015-02-07 12:18:20 -06:00
Christopher Dunn
2941cb3fe2 Merge pull request #156 from cdunn2001/with-cmake-package
fix JSONCPP_WITH_CMAKE_PACKAGE #155
2015-02-07 11:44:24 -06:00
Christopher Dunn
636121485c fix JSONCPP_WITH_CMAKE_PACKAGE #155
mv JSONCPP_WITH_CMAKE_PACKAGE ahead of INSTALL def.
2015-02-07 11:39:16 -06:00
Christopher Dunn
fe855fb4dd drop nullptr
See issue #153.
2015-02-02 15:33:47 -06:00
Christopher Dunn
198cc350c5 drop scoped enum, for pre-C++11 compatibility 2015-01-29 13:49:21 -06:00
Peter Spiess-Knafl
5e8595c0e2 added cmake option to build static and shared libraries at once
See #147 and #149.
2015-01-27 18:22:43 -06:00
Christopher Dunn
38042b3892 docs 2015-01-26 11:38:38 -06:00
Christopher Dunn
3b5f2b85ca Merge pull request #145 from cdunn2001/simplify-builder
Simplify builder
2015-01-26 11:33:16 -06:00
Christopher Dunn
7eca3b4e88 gcc-4.6 (Travis CI) does not support 2015-01-26 11:17:42 -06:00
Christopher Dunn
999f5912f0 docs 2015-01-26 11:12:53 -06:00
Christopher Dunn
472d29f57b fix doc 2015-01-26 11:04:03 -06:00
Christopher Dunn
6065a1c142 make StreamWriterBuilder concrete 2015-01-26 11:01:15 -06:00
Christopher Dunn
28a20917b0 Move old FastWriter stuff out of new Builder 2015-01-26 10:47:42 -06:00
Christopher Dunn
177b7b8f22 OldCompressingStreamWriterBuilder 2015-01-26 10:44:20 -06:00
Christopher Dunn
9da9f84903 improve docs
including `writeString()`
2015-01-26 10:43:53 -06:00
Christopher Dunn
54b8e6939a Merge pull request #132 from cdunn2001/builder
StreamWriter::Builder

Deprecate old Writers, but include them in tests.

This should still be binary-compatible with 1.3.0.
2015-01-25 18:52:09 -06:00
Christopher Dunn
c7b39c2e25 deprecate old Writers
also, use withers instead of setters, and update docs
2015-01-25 18:45:59 -06:00
Christopher Dunn
d78caa3851 implement strange setting from FastWriter 2015-01-25 18:15:54 -06:00
Christopher Dunn
c6e0688e5a implement CommentStyle::None/indentation_=="" 2015-01-25 17:32:36 -06:00
Christopher Dunn
1e21e63853 default \t indentation, All comments 2015-01-25 16:01:59 -06:00
Christopher Dunn
dea6f8d9a6 incorporate 'proper newlines for comments' into new StreamWriter 2015-01-25 15:55:18 -06:00
Christopher Dunn
648843d148 clarify CommentStyle 2015-01-25 15:54:40 -06:00
Christopher Dunn
fe3979cd8a drop StreamWriterBuilderFactory, for now 2015-01-25 15:54:40 -06:00
Christopher Dunn
94665eab72 copy fixes from StyledStreamWriter 2015-01-25 15:54:40 -06:00
Christopher Dunn
9e4bcf354f test BuiltStyledStreamWriter too 2015-01-25 15:54:40 -06:00
Christopher Dunn
9243d602fe const stuff 2015-01-25 15:54:40 -06:00
Christopher Dunn
beb6f35c63 non-const write 2015-01-25 15:54:40 -06:00
Christopher Dunn
ceef7f5219 copied impl of StyledStreamWriter 2015-01-25 15:54:40 -06:00
Christopher Dunn
77ce057f14 fix comment 2015-01-25 15:54:40 -06:00
Christopher Dunn
d49ab5aee1 use new BuiltStyledStreamWriter in operator<<() 2015-01-25 15:54:40 -06:00
Christopher Dunn
4d649402b0 setIndentation() 2015-01-25 15:54:40 -06:00
Christopher Dunn
489707ff60 StreamWriter::Builder 2015-01-25 15:54:39 -06:00
Christopher Dunn
5fbfe3cdb9 StreamWriter 2015-01-25 15:54:39 -06:00
Christopher Dunn
948f29032e update docs 2015-01-25 15:54:07 -06:00
Christopher Dunn
964affd333 add back space before trailing comment 2015-01-25 15:49:02 -06:00
Christopher Dunn
c038e08efc Merge pull request #144 from cdunn2001/proper-comment-lfs
proper newlines for comments

This alters `StyledStreamWriter`, but not `StyledWriter`.
2015-01-25 15:10:38 -06:00
Christopher Dunn
74c2d82e19 proper newlines for comments
The logic is still messy, but it seems to work.
2015-01-25 15:05:09 -06:00
Christopher Dunn
30726082f3 Merge pull request #143 from cdunn2001/rm-trailing-newlines
rm trailing newlines for *all* comments
2015-01-25 14:35:24 -06:00
Christopher Dunn
1e3149ab75 rm trailing newlines for *all* comments
This will make it easier to fix newlines consistently.
2015-01-25 14:32:13 -06:00
Christopher Dunn
7312b1022d Merge pull request #141 from cdunn2001/set-comment
Fix a border case which causes Value::CommentInfo::setComment() to crash
2015-01-25 11:37:02 -06:00
datadiode
2f046b584d Fix a border case which causes Value::CommentInfo::setComment() to crash
re: pull #140
2015-01-25 11:19:51 -06:00
Christopher Dunn
dd91914b1b TravisCI gcc-4.6 does not yet support -Wpedantic 2015-01-25 10:34:49 -06:00
Christopher Dunn
2a46e295ec Merge pull request #139 from cdunn2001/some-python-changes
Some python changes.

* Better messaging.
* Make `doxybuild.py` work with python3.4
2015-01-24 16:24:12 -06:00
Christopher Dunn
f4bc0bf4ec README.md 2015-01-24 16:21:12 -06:00
Christopher Dunn
f357688893 make doxybuild.py work with python3.4 2015-01-24 16:21:12 -06:00
Florian Meier
bb0c80b3e5 Doxybuild: Error message if doxygen not found
This patch introduces a better error message.

See discussion at pull #129.
2015-01-24 16:21:12 -06:00
Christopher Dunn
ff5abe76a5 update doxbuild.py 2015-01-24 16:21:12 -06:00
Christopher Dunn
9cc0bb80b2 update TarFile usage 2015-01-24 16:21:12 -06:00
Christopher Dunn
494950a63d rm extra whitespace in python, per PEP8 2015-01-24 16:21:12 -06:00
Christopher Dunn
7d82b14726 fix issue #90
We are static-casting to U, so we really have no reason to use
references.

However, if this comes up again, try applying -ffloat-store to
the target executable, per
    https://github.com/open-source-parsers/jsoncpp/issues/90
2015-01-24 14:34:54 -06:00
Christopher Dunn
2bc6137ada fix gcc warnings 2015-01-24 13:42:37 -06:00
Christopher Dunn
201904bfbb Merge pull request #138 from cdunn2001/fix-103
Fix #103.
2015-01-23 14:51:31 -06:00
Christopher Dunn
216ecd3085 fix test_comment_00 for #103 2015-01-23 14:28:44 -06:00
Christopher Dunn
8d15e51228 add test_comment_00
one-element array with comment, for issue #103
2015-01-23 14:28:21 -06:00
Christopher Dunn
9fbd12b27c Merge pull request #137 from cdunn2001/avoid-extra-newline
Avoid extra newline
2015-01-23 14:24:52 -06:00
Christopher Dunn
f8ca6cbb25 1.4.0 <- 1.3.0
Minor version bump, but we will wait for a few more commits this time
before tagging the release.
2015-01-23 14:23:31 -06:00
Christopher Dunn
d383056fbb avoid extra newlines in StyledStreamWriter
Add indented_ as a bitfield. (Verified that sizeof(StyledStreamWriter)
remains 96 for binary compatibility. But the new symbol requires a minor
version-bump.)
2015-01-23 14:23:31 -06:00
Christopher Dunn
ddb4ff7dec Merge pull request #136 from cdunn2001/test-both-styled-writers
Test both styled writers

Not only does this now test StyledStreamWriter the same way as StyledWriter, but it also makes the former work more like the latter, indenting separate lines of a comment before a value. Might break some user tests (as `operator<<()` uses `StyledStreamWriter`) but basically a harmless improvement.

All tests pass.
2015-01-23 13:55:45 -06:00
Christopher Dunn
3efc587fba make StyledStreamWriter work more like StyledWriter
tests pass
2015-01-23 13:36:10 -06:00
Christopher Dunn
70704b9a70 test both StyledWriter and StyledStreamWriter 2015-01-23 13:36:10 -06:00
Christopher Dunn
ac6bbbc739 show cmd in runjsontests.py 2015-01-23 13:36:10 -06:00
Christopher Dunn
26c52861b9 pass --json-writer StyledWriter 2015-01-23 13:36:10 -06:00
Christopher Dunn
3682f60927 --json-writer arg 2015-01-23 13:36:10 -06:00
Christopher Dunn
58c31ac550 mv try-block 2015-01-23 12:35:12 -06:00
Christopher Dunn
08cfd02d8c fix minor bugs in test-runner 2015-01-23 12:35:12 -06:00
Christopher Dunn
79211e1aeb Options class for test 2015-01-23 12:35:12 -06:00
Christopher Dunn
632c9b5032 cleaner 2015-01-23 12:35:12 -06:00
Christopher Dunn
05810a7607 cleaner 2015-01-23 12:35:12 -06:00
Christopher Dunn
942e2c999a unindent test-code 2015-01-23 12:35:12 -06:00
Christopher Dunn
2160c9a042 switch from StyledWriter to StyledStream writer in tests 2015-01-23 09:02:44 -06:00
Christopher Dunn
ee8b58f82f Merge pull request #135 from cdunn2001/removeMember
Deprecate old `removeMember()`. Add new.

[Deprecated methods will be removed at the next major version bump](http://apr.apache.org/versioning.html#binary).
2015-01-22 19:26:46 -06:00
Christopher Dunn
9132aa94b1 1.3.0
http://apr.apache.org/versioning.html#binary
2015-01-22 19:25:44 -06:00
Christopher Dunn
76746b09fc deprecate old removeMember() 2015-01-22 19:25:44 -06:00
Christopher Dunn
70b795bd45 Merge pull request #133 from cdunn2001/travis-11
upgrade -std=c++ version
2015-01-22 19:21:24 -06:00
Christopher Dunn
26842530f2 upgrade -std=c++ version
Travis CI does not yet support gcc-4.8, needed for c++11, so we
will try c++0x for now.
2015-01-22 19:12:23 -06:00
Christopher Dunn
e3f24286c1 Merge pull request #130 from connormanning/master
Build without warnings with -pedantic enabled.
2015-01-22 11:48:58 -06:00
Connor Manning
00b8ce81db Build without warnings with -pedantic enabled. 2015-01-22 10:48:45 -06:00
Christopher Dunn
40810fe326 Merge pull request #127 from cdunn2001/removeIndex
`Value::removeIndex()`

See issue #28.
2015-01-21 16:08:25 -06:00
Christopher Dunn
59167d8627 more changes per cr 2015-01-21 16:05:08 -06:00
Christopher Dunn
05c1b8344d drop this-> (team preference) 2015-01-21 15:43:48 -06:00
Christopher Dunn
e893625e88 test removeIndex/Member() 2015-01-20 17:04:03 -06:00
Christopher Dunn
e87e41cdb0 from Itzik S; see issue #28
with minor corrections
2015-01-20 17:03:58 -06:00
Christopher Dunn
9de2c2d84d partial 2015-01-20 17:02:48 -06:00
Christopher Dunn
7956ccd61e allow stream ops for JSON_FAIL_MESSAGE
http://www.iar.com/Global/Resources/Developers_Toolbox/C_Cplusplus_Programming/Tips%20and%20tricks%20using%20the%20preprocessor%20%28part%20two%29.pdf
2015-01-20 16:25:26 -06:00
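
The trick from that reference, as a generic sketch (the real macro's details differ across versions): routing the argument through an ostringstream lets call sites chain << operators into the failure message.

    #include <sstream>
    #include <stdexcept>

    #define FAIL_MESSAGE(message)            \
      {                                      \
        std::ostringstream oss;              \
        oss << message;                      \
        throw std::runtime_error(oss.str()); \
      }

    void checkIndex(int index, int size) {
      if (index >= size)
        FAIL_MESSAGE("index " << index << " out of range " << size);  // << chain expands into oss
    }
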
datadiode
9454e687a3 Specialize std::swap() for Json::Value in a C++ standard compliant way
originally from pull #119
2015-01-20 15:25:41 -06:00
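
The compliant form, sketched (a full specialization of std::swap for a user type is permitted, unlike adding new overloads to namespace std):

    #include <algorithm>  // std::swap lives here pre-C++11
    #include <utility>    // and here since C++11
    #include <json/value.h>

    namespace std {
    template <>
    inline void swap(Json::Value& a, Json::Value& b) {
      a.swap(b);  // delegate to the member swap
    }
    }
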
Christopher Dunn
46a925ba4a fix compiler warning for a test 2015-01-20 15:19:22 -06:00
Christopher Dunn
c407f1407f test-data for #103
passes
2015-01-20 15:16:46 -06:00
Christopher Dunn
ec251df6b7 Merge pull request #125 from cdunn2001/issue-116
1.2.1

Fix issue #116, DOS line-endings. Never output \r.
2015-01-20 15:14:50 -06:00
Christopher Dunn
51c0afab22 1.2.1 <- 1.2.0
This can affect existing round-trip tests, but we never made any
guarantees about whitespace constancy.
2015-01-20 15:12:49 -06:00
Mark Zeren
e39fb0083c Normalize comment EOLs while reading instead of while writing
Tests are currently failing when git cloning on Windows with autocrlf = true. In
that setup multiline comments contain \r\n EOLs. The test code assumes that
comments contain \n EOLs and opens the .actual files (etc.) with "wt" which
converts \n to \r\n. Thus we end up with \r\r\n EOLs in the output, which
triggers a test failure.

Instead we should canonicalize comments while reading so that they contain only
\n EOLs. This approach simplifies other parts of the reader and writer logic,
and requires no changes to the test. It is a breaking change, but probably the
Right Thing going forward.

This change also fixes dereferencing past the end of the comment string in
StyledWriter::writeCommentBeforeValue.

Tests should be added with appropriate .gitattributes for the input files to
ensure that we run tests for DOS, Mac, and Unix EOL files on all platforms. For
now this change is enough to unblock Windows builds.

issue #116
2015-01-20 13:45:44 -06:00
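
A sketch of that canonicalization (a hypothetical helper; the reader's internal version differs):

    #include <string>

    // Collapse \r\n pairs and bare \r into \n so comments carry only Unix EOLs.
    std::string normalizeEOL(const std::string& text) {
      std::string out;
      out.reserve(text.size());
      for (std::string::size_type i = 0; i < text.size(); ++i) {
        if (text[i] == '\r') {
          if (i + 1 < text.size() && text[i + 1] == '\n')
            ++i;        // swallow the \n of a \r\n pair
          out += '\n';  // either way, emit a single \n
        } else {
          out += text[i];
        }
      }
      return out;
    }
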
Christopher Dunn
ec727e2f6b -Wall for Clang/GCC 2015-01-20 13:45:28 -06:00
Christopher Dunn
4ce4bb8404 Merge pull request #124 from cdunn2001/assign-with-comments
1.2.0

 `operator=()` (which already performed a deep-copy) now includes comments. This change is probably harmless in all practical cases. But just in case, we bump the minor version.

Address #47.
2015-01-20 12:49:51 -06:00
Christopher Dunn
2cd0f4ec21 1.2.0 <- 1.1.1
`operator=()` (which already performed a deep-copy) now includes
comments. The change is probably harmless in all practical cases.
2015-01-20 12:44:51 -06:00
Christopher Dunn
836f0fb863 fix comments before several types
tests pass
2015-01-20 12:23:44 -06:00
Christopher Dunn
37644abd77 test comment before several types
* array
* double
* string
* true
* false
* null
2015-01-20 12:23:18 -06:00
Christopher Dunn
66eb72f121 use SwapPayload() to retain comments
All tests pass, but we might be missing coverage.

issue #47
2015-01-20 12:07:01 -06:00
Christopher Dunn
94b0297dc5 Revert "consider these as binary, so git will not alter line-endings"
This reverts commit 8f3aa220db.

We will find a better fix for #116. In the meantime, we want to see
diffs for changes to test-data.
2015-01-20 12:06:12 -06:00
Christopher Dunn
55db3c3cb2 Merge pull request #118 from datadiode/47_fix_value_swap
swap comments too

* Changed `operator=` to exclude start/limit, which should never have been added.
* Changed `swap` to include comments. Hmm. That affects efficiency (but *not* for `operator=`) and probably nothing else in practice.

- issue #47
2015-01-18 11:31:47 -06:00
datadiode
c07ef37904 https://github.com/open-source-parsers/jsoncpp/issues/47 2015-01-18 10:05:25 +01:00
Christopher Dunn
62ab94ddd3 Merge pull request #117 from datadiode/integration
Simplify Reader::decodeNumber() / Remove unused functions
2015-01-17 08:10:59 -06:00
datadiode
09d352ac13 Remove unused functions 2015-01-17 13:26:23 +01:00
datadiode
50753bb808 Simplify Reader::decodeNumber() 2015-01-17 13:21:42 +01:00
Christopher Dunn
8f3aa220db consider these as binary, so git will not alter line-endings
issue #116
2015-01-16 16:29:07 -06:00
Christopher Dunn
73e127892e Merge branch 'fix-fail31' 2015-01-16 15:10:56 -06:00
Christopher Dunn
4997dfb8af 1.1.1 <- 1.1.0
slight change to fail on a bad float
2015-01-16 15:09:54 -06:00
datadiode
c1441ef5e0 stricter float parsing
fixes `test/jsonchecker/fail31.json`
(issue #113)
2015-01-16 15:05:12 -06:00
Christopher Dunn
e0bfb45000 Merge branch 'py3/2' 2015-01-16 14:53:22 -06:00
Christopher Dunn
4bc311503c just in case 2015-01-16 14:53:04 -06:00
Christopher Dunn
cd140b5141 py2 and py3 2015-01-16 14:52:56 -06:00
datadiode
01aee4a0dc Fix Python test scripts for Python 3 and Windows 2015-01-16 09:57:42 -06:00
Christopher Dunn
59a01652ab Merge pull request #114 from Gachapen/fix_cmake_output_dir
CMake: Remove set(CMAKE_*_OUTPUT_DIRECTORY)
2015-01-15 20:17:34 -06:00
Magnus Bjerke Vik
8371a4337c CMake: Remove set(CMAKE_*_OUTPUT_DIRECTORY)
With set(CMAKE_*_OUTPUT_DIRECTORY) when using jsoncpp as a sub-project,
the parent project's executables and libraries will also be output to
jsoncpp's directory. By removing this, it is up to the parent project
to decide where to put its own and jsoncpp's executables and libraries.
2015-01-15 20:16:54 -06:00
Christopher Dunn
dc2e1c98b9 Merge pull request #111 from open-source-parsers/quotes-spaces-fixed
Quotes spaces fixed
2015-01-09 22:36:40 -06:00
Christopher Dunn
d98b5f4230 quote spaces in commands for Windows
See comments at:
    1a4dc3a888
2015-01-09 22:32:10 -06:00
Christopher Dunn
4ca9d25ccc Revert "Merge pull request #108 from open-source-parsers/quote-spaces"
This reverts commit dfc5f879c1, reversing
changes made to 0f6884f771.
2015-01-09 22:28:20 -06:00
58 changed files with 6634 additions and 2751 deletions

.gitignore

@@ -10,4 +10,27 @@
/libs/
/doc/doxyfile
/dist/
/include/json/version.h
#/version
#/include/json/version.h
# MSVC project files:
*.sln
*.vcxproj
*.filters
*.user
*.sdf
*.opensdf
*.suo
# MSVC build files:
*.lib
*.obj
*.tlog/
*.pdb
# CMake-generated files:
CMakeFiles/
CTestTestFile.cmake
cmake_install.cmake
pkg-config/jsoncpp.pc
jsoncpp_lib_static.dir/


@@ -7,12 +7,11 @@ language: cpp
compiler:
- gcc
- clang
script: cmake -DJSONCPP_LIB_BUILD_SHARED=$SHARED_LIBRARY -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DCMAKE_VERBOSE_MAKEFILE=$VERBOSE_MAKE . && make
script: cmake -DJSONCPP_WITH_CMAKE_PACKAGE=$CMAKE_PKG -DJSONCPP_LIB_BUILD_SHARED=$SHARED_LIB -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DCMAKE_VERBOSE_MAKEFILE=$VERBOSE_MAKE . && make && make jsoncpp_check
env:
matrix:
- SHARED_LIBRARY=ON BUILD_TYPE=release VERBOSE_MAKE=false
- SHARED_LIBRARY=OFF BUILD_TYPE=release VERBOSE_MAKE=false
- SHARED_LIBRARY=OFF BUILD_TYPE=debug VERBOSE VERBOSE_MAKE=true
- SHARED_LIB=ON STATIC_LIB=ON CMAKE_PKG=ON BUILD_TYPE=release VERBOSE_MAKE=false
- SHARED_LIB=OFF STATIC_LIB=ON CMAKE_PKG=OFF BUILD_TYPE=debug VERBOSE_MAKE=true VERBOSE
notifications:
email:
- aaronjjacobs@gmail.com


@@ -1,8 +1,10 @@
# vim: et ts=4 sts=4 sw=4 tw=0
CMAKE_MINIMUM_REQUIRED(VERSION 2.8.5)
PROJECT(jsoncpp)
ENABLE_TESTING()
OPTION(JSONCPP_WITH_TESTS "Compile and run JsonCpp test executables" ON)
OPTION(JSONCPP_WITH_TESTS "Compile and (for jsoncpp_check) run JsonCpp test executables" ON)
OPTION(JSONCPP_WITH_POST_BUILD_UNITTEST "Automatically run unit-tests as a post build step" ON)
OPTION(JSONCPP_WITH_WARNING_AS_ERROR "Force compilation to fail if a warning occurs" OFF)
OPTION(JSONCPP_WITH_PKGCONFIG_SUPPORT "Generate and install .pc files" ON)
@@ -31,16 +33,6 @@ SET(PACKAGE_INSTALL_DIR lib${LIB_SUFFIX}/cmake
CACHE PATH "Install dir for cmake package config files")
MARK_AS_ADVANCED( RUNTIME_INSTALL_DIR ARCHIVE_INSTALL_DIR INCLUDE_INSTALL_DIR PACKAGE_INSTALL_DIR )
# This ensures shared DLL are in the same dir as executable on Windows.
# Put all executables / libraries are in a project global directory.
SET(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib
CACHE PATH "Single directory for all static libraries.")
SET(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib
CACHE PATH "Single directory for all dynamic libraries on Unix.")
SET(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/bin
CACHE PATH "Single directory for all executable and dynamic libraries on Windows.")
MARK_AS_ADVANCED( CMAKE_RUNTIME_OUTPUT_DIRECTORY CMAKE_LIBRARY_OUTPUT_DIRECTORY CMAKE_ARCHIVE_OUTPUT_DIRECTORY )
# Set variable named ${VAR_NAME} to value ${VALUE}
FUNCTION(set_using_dynamic_name VAR_NAME VALUE)
SET( "${VAR_NAME}" "${VALUE}" PARENT_SCOPE)
@@ -64,17 +56,24 @@ MACRO(jsoncpp_parse_version VERSION_TEXT OUPUT_PREFIX)
ENDMACRO(jsoncpp_parse_version)
# Read out version from "version" file
FILE(STRINGS "version" JSONCPP_VERSION)
#FILE(STRINGS "version" JSONCPP_VERSION)
#SET( JSONCPP_VERSION_MAJOR X )
#SET( JSONCPP_VERSION_MINOR Y )
#SET( JSONCPP_VERSION_PATCH Z )
SET( JSONCPP_VERSION 1.6.1 )
jsoncpp_parse_version( ${JSONCPP_VERSION} JSONCPP_VERSION )
IF(NOT JSONCPP_VERSION_FOUND)
MESSAGE(FATAL_ERROR "Failed to parse version string properly. Expect X.Y.Z")
ENDIF(NOT JSONCPP_VERSION_FOUND)
#IF(NOT JSONCPP_VERSION_FOUND)
# MESSAGE(FATAL_ERROR "Failed to parse version string properly. Expect X.Y.Z")
#ENDIF(NOT JSONCPP_VERSION_FOUND)
MESSAGE(STATUS "JsonCpp Version: ${JSONCPP_VERSION_MAJOR}.${JSONCPP_VERSION_MINOR}.${JSONCPP_VERSION_PATCH}")
# File version.h is only regenerated on CMake configure step
CONFIGURE_FILE( "${PROJECT_SOURCE_DIR}/src/lib_json/version.h.in"
"${PROJECT_SOURCE_DIR}/include/json/version.h" )
"${PROJECT_SOURCE_DIR}/include/json/version.h"
NEWLINE_STYLE UNIX )
CONFIGURE_FILE( "${PROJECT_SOURCE_DIR}/version.in"
"${PROJECT_SOURCE_DIR}/version"
NEWLINE_STYLE UNIX )
macro(UseCompilationWarningAsError)
if ( MSVC )
@@ -93,6 +92,14 @@ if ( MSVC )
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /W4 ")
endif( MSVC )
if (CMAKE_CXX_COMPILER_ID MATCHES "Clang")
# using regular Clang or AppleClang
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -Wall")
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
# using GCC
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x -Wall -Wextra -pedantic")
endif()
IF(JSONCPP_WITH_WARNING_AS_ERROR)
UseCompilationWarningAsError()
ENDIF(JSONCPP_WITH_WARNING_AS_ERROR)


@@ -80,7 +80,7 @@ New in SVN
(e.g. MSVC 2008 command prompt in start menu) before running scons.
- Added support for amalgamated source and header generation (a la sqlite).
Refer to README.txt section "Generating amalgamated source and header"
Refer to README.md section "Generating amalgamated source and header"
for detail.
* Value


@@ -7,17 +7,20 @@ pairs.
[json-org]: http://json.org/
JsonCpp is a C++ library that allows manipulating JSON values, including
[JsonCpp][] is a C++ library that allows manipulating JSON values, including
serialization and deserialization to and from strings. It can also preserve
existing comment in unserialization/serialization steps, making it a convenient
format to store user input files.
[JsonCpp]: http://open-source-parsers.github.io/jsoncpp-docs/doxygen/index.html
## A note on backward-compatibility
Very soon, we are switching to C++11 only. For older compilers, try the `pre-C++11` branch.
* `1.y.z` is built with C++11.
* `0.y.z` can be used with older compilers.
* Major versions maintain binary-compatibility.
Using JsonCpp in your project
-----------------------------
The recommended approach to integrating JsonCpp in your project is to build
the amalgamated source (a single `.cpp` file) with your own build system. This
ensures consistency of compilation flags and ABI compatibility. See the section
@@ -28,13 +31,11 @@ should be included as follow:
#include <json/json.h>
If JsonCpp was build as a dynamic library on Windows, then your project needs to
If JsonCpp was built as a dynamic library on Windows, then your project needs to
define the macro `JSON_DLL`.
Building and testing with new CMake
-----------------------------------
Building and testing with CMake
-------------------------------
[CMake][] is a C++ Makefiles/Solution generator. It is usually available on most
Linux system as package. On Ubuntu:
@@ -66,7 +67,7 @@ Alternatively, from the command-line on Unix in the source directory:
mkdir -p build/debug
cd build/debug
cmake -DCMAKE_BUILD_TYPE=debug -DJSONCPP_LIB_BUILD_SHARED=OFF -G "Unix Makefiles" ../..
cmake -DCMAKE_BUILD_TYPE=debug -DJSONCPP_LIB_BUILD_STATIC=ON -DJSONCPP_LIB_BUILD_SHARED=OFF -G "Unix Makefiles" ../..
make
Running `cmake --help` will display the list of available generators (passed using
@@ -75,10 +76,8 @@ the `-G` option).
By default CMake hides compilation commands. This can be modified by specifying
`-DCMAKE_VERBOSE_MAKEFILE=true` when generating makefiles.
Building and testing with SCons
-------------------------------
**Note:** The SCons-based build system is deprecated. Please use CMake; see the
section above.
@@ -107,14 +106,7 @@ If you are building with Microsoft Visual Studio 2008, you need to set up the
environment by running `vcvars32.bat` (e.g. MSVC 2008 command prompt) before
running SCons.
Running the tests manually
--------------------------
Note that test can be run using SCons using the `check` target:
scons platform=$PLATFORM check
# Running the tests manually
You need to run tests manually only if you are troubleshooting an issue.
In the instructions below, replace `path/to/jsontest` with the path of the
@@ -137,20 +129,21 @@ In the instructions below, replace `path/to/jsontest` with the path of the
# You can run the tests using valgrind:
python rununittests.py --valgrind path/to/test_lib_json
## Running the tests using scons
Note that tests can be run using SCons using the `check` target:
scons platform=$PLATFORM check
Building the documentation
--------------------------
Run the Python script `doxybuild.py` from the top directory:
python doxybuild.py --doxygen=$(which doxygen) --open --with-dot
See `doxybuild.py --help` for options.
Generating amalgamated source and header
----------------------------------------
JsonCpp is provided with a script to generate a single header and a single
source file to ease inclusion into an existing project. The amalgamated source
can be generated at any time by running the following command from the
@@ -172,10 +165,8 @@ The amalgamated sources are generated by concatenating JsonCpp source in the
correct order and defining the macro `JSON_IS_AMALGAMATION` to prevent inclusion
of other headers.
Adding a reader/writer test
---------------------------
To add a test, you need to create two files in test/data:
* a `TESTNAME.json` file, that contains the input document in JSON format.
@@ -195,10 +186,8 @@ The `TESTNAME.expected` file format is as follows:
See the examples `test_complex_01.json` and `test_complex_01.expected` to better
understand element paths.
Understanding reader/writer test output
---------------------------------------
When a test is run, output files are generated beside the input test files.
Below is a short description of the content of each file:
@@ -215,10 +204,7 @@ Below is a short description of the content of each file:
* `test_complex_01.process-output`: `jsontest` output, typically useful for
understanding parsing errors.
License
-------
See the `LICENSE` file for details. In summary, JsonCpp is licensed under the
MIT license, or public domain if desired and recognized in your jurisdiction.


@@ -237,7 +237,7 @@ RunUnitTests = ActionFactory(runUnitTests_action, runUnitTests_string )
env.Alias( 'check' )
srcdist_cmd = env['SRCDIST_ADD']( source = """
AUTHORS README.txt SConstruct
AUTHORS README.md SConstruct
""".split() )
env.Alias( 'src-dist', srcdist_cmd )


@@ -1,6 +1,6 @@
"""Amalgate json-cpp library sources into a single source and header file.
Requires Python 2.6
Works with python2.6+ and python3.4+.
Example of invocation (must be invoked from json-cpp top directory):
python amalgate.py
@@ -10,46 +10,46 @@ import os.path
import sys
class AmalgamationFile:
def __init__( self, top_dir ):
def __init__(self, top_dir):
self.top_dir = top_dir
self.blocks = []
def add_text( self, text ):
if not text.endswith( "\n" ):
def add_text(self, text):
if not text.endswith("\n"):
text += "\n"
self.blocks.append( text )
self.blocks.append(text)
def add_file( self, relative_input_path, wrap_in_comment=False ):
def add_marker( prefix ):
self.add_text( "" )
self.add_text( "// " + "/"*70 )
self.add_text( "// %s of content of file: %s" % (prefix, relative_input_path.replace("\\","/")) )
self.add_text( "// " + "/"*70 )
self.add_text( "" )
add_marker( "Beginning" )
f = open( os.path.join( self.top_dir, relative_input_path ), "rt" )
def add_file(self, relative_input_path, wrap_in_comment=False):
def add_marker(prefix):
self.add_text("")
self.add_text("// " + "/"*70)
self.add_text("// %s of content of file: %s" % (prefix, relative_input_path.replace("\\","/")))
self.add_text("// " + "/"*70)
self.add_text("")
add_marker("Beginning")
f = open(os.path.join(self.top_dir, relative_input_path), "rt")
content = f.read()
if wrap_in_comment:
content = "/*\n" + content + "\n*/"
self.add_text( content )
self.add_text(content)
f.close()
add_marker( "End" )
self.add_text( "\n\n\n\n" )
add_marker("End")
self.add_text("\n\n\n\n")
def get_value( self ):
return "".join( self.blocks ).replace("\r\n","\n")
def get_value(self):
return "".join(self.blocks).replace("\r\n","\n")
def write_to( self, output_path ):
output_dir = os.path.dirname( output_path )
if output_dir and not os.path.isdir( output_dir ):
os.makedirs( output_dir )
f = open( output_path, "wb" )
f.write( str.encode(self.get_value(), 'UTF-8') )
def write_to(self, output_path):
output_dir = os.path.dirname(output_path)
if output_dir and not os.path.isdir(output_dir):
os.makedirs(output_dir)
f = open(output_path, "wb")
f.write(str.encode(self.get_value(), 'UTF-8'))
f.close()
def amalgamate_source( source_top_dir=None,
def amalgamate_source(source_top_dir=None,
target_source_path=None,
header_include_path=None ):
header_include_path=None):
"""Produces amalgated source.
Parameters:
source_top_dir: top-directory
@@ -57,69 +57,73 @@ def amalgamate_source( source_top_dir=None,
header_include_path: generated header path relative to target_source_path.
"""
print("Amalgating header...")
header = AmalgamationFile( source_top_dir )
header.add_text( "/// Json-cpp amalgated header (http://jsoncpp.sourceforge.net/)." )
header.add_text( "/// It is intented to be used with #include <%s>" % header_include_path )
header.add_file( "LICENSE", wrap_in_comment=True )
header.add_text( "#ifndef JSON_AMALGATED_H_INCLUDED" )
header.add_text( "# define JSON_AMALGATED_H_INCLUDED" )
header.add_text( "/// If defined, indicates that the source file is amalgated" )
header.add_text( "/// to prevent private header inclusion." )
header.add_text( "#define JSON_IS_AMALGAMATION" )
header.add_file( "include/json/version.h" )
header.add_file( "include/json/config.h" )
header.add_file( "include/json/forwards.h" )
header.add_file( "include/json/features.h" )
header.add_file( "include/json/value.h" )
header.add_file( "include/json/reader.h" )
header.add_file( "include/json/writer.h" )
header.add_file( "include/json/assertions.h" )
header.add_text( "#endif //ifndef JSON_AMALGATED_H_INCLUDED" )
header = AmalgamationFile(source_top_dir)
header.add_text("/// Json-cpp amalgated header (http://jsoncpp.sourceforge.net/).")
header.add_text('/// It is intended to be used with #include "%s"' % header_include_path)
header.add_file("LICENSE", wrap_in_comment=True)
header.add_text("#ifndef JSON_AMALGATED_H_INCLUDED")
header.add_text("# define JSON_AMALGATED_H_INCLUDED")
header.add_text("/// If defined, indicates that the source file is amalgated")
header.add_text("/// to prevent private header inclusion.")
header.add_text("#define JSON_IS_AMALGAMATION")
header.add_file("include/json/version.h")
header.add_file("include/json/config.h")
header.add_file("include/json/forwards.h")
header.add_file("include/json/features.h")
header.add_file("include/json/value.h")
header.add_file("include/json/reader.h")
header.add_file("include/json/writer.h")
header.add_file("include/json/assertions.h")
header.add_text("#endif //ifndef JSON_AMALGATED_H_INCLUDED")
target_header_path = os.path.join( os.path.dirname(target_source_path), header_include_path )
target_header_path = os.path.join(os.path.dirname(target_source_path), header_include_path)
print("Writing amalgated header to %r" % target_header_path)
header.write_to( target_header_path )
header.write_to(target_header_path)
base, ext = os.path.splitext( header_include_path )
base, ext = os.path.splitext(header_include_path)
forward_header_include_path = base + "-forwards" + ext
print("Amalgating forward header...")
header = AmalgamationFile( source_top_dir )
header.add_text( "/// Json-cpp amalgated forward header (http://jsoncpp.sourceforge.net/)." )
header.add_text( "/// It is intented to be used with #include <%s>" % forward_header_include_path )
header.add_text( "/// This header provides forward declaration for all JsonCpp types." )
header.add_file( "LICENSE", wrap_in_comment=True )
header.add_text( "#ifndef JSON_FORWARD_AMALGATED_H_INCLUDED" )
header.add_text( "# define JSON_FORWARD_AMALGATED_H_INCLUDED" )
header.add_text( "/// If defined, indicates that the source file is amalgated" )
header.add_text( "/// to prevent private header inclusion." )
header.add_text( "#define JSON_IS_AMALGAMATION" )
header.add_file( "include/json/config.h" )
header.add_file( "include/json/forwards.h" )
header.add_text( "#endif //ifndef JSON_FORWARD_AMALGATED_H_INCLUDED" )
header = AmalgamationFile(source_top_dir)
header.add_text("/// Json-cpp amalgated forward header (http://jsoncpp.sourceforge.net/).")
header.add_text('/// It is intended to be used with #include "%s"' % forward_header_include_path)
header.add_text("/// This header provides forward declaration for all JsonCpp types.")
header.add_file("LICENSE", wrap_in_comment=True)
header.add_text("#ifndef JSON_FORWARD_AMALGATED_H_INCLUDED")
header.add_text("# define JSON_FORWARD_AMALGATED_H_INCLUDED")
header.add_text("/// If defined, indicates that the source file is amalgated")
header.add_text("/// to prevent private header inclusion.")
header.add_text("#define JSON_IS_AMALGAMATION")
header.add_file("include/json/config.h")
header.add_file("include/json/forwards.h")
header.add_text("#endif //ifndef JSON_FORWARD_AMALGATED_H_INCLUDED")
target_forward_header_path = os.path.join( os.path.dirname(target_source_path),
forward_header_include_path )
target_forward_header_path = os.path.join(os.path.dirname(target_source_path),
forward_header_include_path)
print("Writing amalgated forward header to %r" % target_forward_header_path)
header.write_to( target_forward_header_path )
header.write_to(target_forward_header_path)
print("Amalgating source...")
source = AmalgamationFile( source_top_dir )
source.add_text( "/// Json-cpp amalgated source (http://jsoncpp.sourceforge.net/)." )
source.add_text( "/// It is intented to be used with #include <%s>" % header_include_path )
source.add_file( "LICENSE", wrap_in_comment=True )
source.add_text( "" )
source.add_text( "#include <%s>" % header_include_path )
source.add_text( "" )
source = AmalgamationFile(source_top_dir)
source.add_text("/// Json-cpp amalgated source (http://jsoncpp.sourceforge.net/).")
source.add_text('/// It is intended to be used with #include "%s"' % header_include_path)
source.add_file("LICENSE", wrap_in_comment=True)
source.add_text("")
source.add_text('#include "%s"' % header_include_path)
source.add_text("""
#ifndef JSON_IS_AMALGAMATION
#error "Compile with -I PATH_TO_JSON_DIRECTORY"
#endif
""")
source.add_text("")
lib_json = "src/lib_json"
source.add_file( os.path.join(lib_json, "json_tool.h") )
source.add_file( os.path.join(lib_json, "json_reader.cpp") )
source.add_file( os.path.join(lib_json, "json_batchallocator.h") )
source.add_file( os.path.join(lib_json, "json_valueiterator.inl") )
source.add_file( os.path.join(lib_json, "json_value.cpp") )
source.add_file( os.path.join(lib_json, "json_writer.cpp") )
source.add_file(os.path.join(lib_json, "json_tool.h"))
source.add_file(os.path.join(lib_json, "json_reader.cpp"))
source.add_file(os.path.join(lib_json, "json_valueiterator.inl"))
source.add_file(os.path.join(lib_json, "json_value.cpp"))
source.add_file(os.path.join(lib_json, "json_writer.cpp"))
print("Writing amalgated source to %r" % target_source_path)
source.write_to( target_source_path )
source.write_to(target_source_path)
def main():
usage = """%prog [options]
@@ -137,12 +141,12 @@ Generate a single amalgated source and header file from the sources.
parser.enable_interspersed_args()
options, args = parser.parse_args()
msg = amalgamate_source( source_top_dir=options.top_dir,
msg = amalgamate_source(source_top_dir=options.top_dir,
target_source_path=options.target_source_path,
header_include_path=options.header_include_path )
header_include_path=options.header_include_path)
if msg:
sys.stderr.write( msg + "\n" )
sys.exit( 1 )
sys.stderr.write(msg + "\n")
sys.exit(1)
else:
print("Source succesfully amalagated")


@@ -1,5 +1,19 @@
all: build test-amalgamate
# This is only for jsoncpp developers/contributors.
# We use this to sign releases, generate documentation, etc.
VER?=$(shell cat version)
default:
@echo "VER=${VER}"
sign: jsoncpp-${VER}.tar.gz
gpg --armor --detach-sign $<
gpg --verify $<.asc
# Then upload .asc to the release.
jsoncpp-%.tar.gz:
curl https://github.com/open-source-parsers/jsoncpp/archive/$*.tar.gz -o $@
dox:
python doxybuild.py --doxygen=$$(which doxygen) --in doc/web_doxyfile.in
rsync -va --delete dist/doxygen/jsoncpp-api-html-${VER}/ ../jsoncpp-docs/doxygen/
# Then 'git add -A' and 'git push' in jsoncpp-docs.
build:
mkdir -p build/debug
cd build/debug; cmake -DCMAKE_BUILD_TYPE=debug -DJSONCPP_LIB_BUILD_SHARED=ON -G "Unix Makefiles" ../..
@@ -7,8 +21,12 @@ build:
# Currently, this depends on include/json/version.h generated
# by cmake.
test-amalgamate: build
test-amalgamate:
python2.7 amalgamate.py
python3.4 amalgamate.py
cd dist; gcc -I. -c jsoncpp.cpp
clean:
\rm -rf *.gz *.asc dist/
.PHONY: build


@@ -54,9 +54,9 @@ LINKS = DIR_LINK | FILE_LINK
ALL_NO_LINK = DIR | FILE
ALL = DIR | FILE | LINKS
_ANT_RE = re.compile( r'(/\*\*/)|(\*\*/)|(/\*\*)|(\*)|(/)|([^\*/]*)' )
_ANT_RE = re.compile(r'(/\*\*/)|(\*\*/)|(/\*\*)|(\*)|(/)|([^\*/]*)')
def ant_pattern_to_re( ant_pattern ):
def ant_pattern_to_re(ant_pattern):
"""Generates a regular expression from the ant pattern.
Matching convention:
**/a: match 'a', 'dir/a', 'dir1/dir2/a'
@@ -65,30 +65,30 @@ def ant_pattern_to_re( ant_pattern ):
"""
rex = ['^']
next_pos = 0
sep_rex = r'(?:/|%s)' % re.escape( os.path.sep )
sep_rex = r'(?:/|%s)' % re.escape(os.path.sep)
## print 'Converting', ant_pattern
for match in _ANT_RE.finditer( ant_pattern ):
for match in _ANT_RE.finditer(ant_pattern):
## print 'Matched', match.group()
## print match.start(0), next_pos
if match.start(0) != next_pos:
raise ValueError( "Invalid ant pattern" )
raise ValueError("Invalid ant pattern")
if match.group(1): # /**/
rex.append( sep_rex + '(?:.*%s)?' % sep_rex )
rex.append(sep_rex + '(?:.*%s)?' % sep_rex)
elif match.group(2): # **/
rex.append( '(?:.*%s)?' % sep_rex )
rex.append('(?:.*%s)?' % sep_rex)
elif match.group(3): # /**
rex.append( sep_rex + '.*' )
rex.append(sep_rex + '.*')
elif match.group(4): # *
rex.append( '[^/%s]*' % re.escape(os.path.sep) )
rex.append('[^/%s]*' % re.escape(os.path.sep))
elif match.group(5): # /
rex.append( sep_rex )
rex.append(sep_rex)
else: # somepath
rex.append( re.escape(match.group(6)) )
rex.append(re.escape(match.group(6)))
next_pos = match.end()
rex.append('$')
return re.compile( ''.join( rex ) )
return re.compile(''.join(rex))
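For instance, a minimal sketch of the conversion in action (assuming this module is importable; the matching convention is the one in the docstring above):

# '**/' may match zero or more leading directories; '*' never crosses a '/'.
rex = ant_pattern_to_re('**/*.py')
assert rex.match('src/script.py') is not None
assert rex.match('script.py') is not None
assert rex.match('script.pyc') is None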
def _as_list( l ):
def _as_list(l):
if isinstance(l, basestring):
return l.split()
return l
@@ -105,37 +105,37 @@ def glob(dir_path,
dir_path = dir_path.replace('/',os.path.sep)
entry_type_filter = entry_type
def is_pruned_dir( dir_name ):
def is_pruned_dir(dir_name):
for pattern in prune_dirs:
if fnmatch.fnmatch( dir_name, pattern ):
if fnmatch.fnmatch(dir_name, pattern):
return True
return False
def apply_filter( full_path, filter_rexs ):
def apply_filter(full_path, filter_rexs):
"""Return True if at least one of the filter regular expression match full_path."""
for rex in filter_rexs:
if rex.match( full_path ):
if rex.match(full_path):
return True
return False
def glob_impl( root_dir_path ):
def glob_impl(root_dir_path):
child_dirs = [root_dir_path]
while child_dirs:
dir_path = child_dirs.pop()
for entry in listdir( dir_path ):
full_path = os.path.join( dir_path, entry )
for entry in listdir(dir_path):
full_path = os.path.join(dir_path, entry)
## print 'Testing:', full_path,
is_dir = os.path.isdir( full_path )
if is_dir and not is_pruned_dir( entry ): # explore child directory ?
is_dir = os.path.isdir(full_path)
if is_dir and not is_pruned_dir(entry): # explore child directory ?
## print '===> marked for recursion',
child_dirs.append( full_path )
included = apply_filter( full_path, include_filter )
rejected = apply_filter( full_path, exclude_filter )
child_dirs.append(full_path)
included = apply_filter(full_path, include_filter)
rejected = apply_filter(full_path, exclude_filter)
if not included or rejected: # do not include entry ?
## print '=> not included or rejected'
continue
link = os.path.islink( full_path )
is_file = os.path.isfile( full_path )
link = os.path.islink(full_path)
is_file = os.path.isfile(full_path)
if not is_file and not is_dir:
## print '=> unknown entry type'
continue
@@ -146,57 +146,57 @@ def glob(dir_path,
## print '=> type: %d' % entry_type,
if (entry_type & entry_type_filter) != 0:
## print ' => KEEP'
yield os.path.join( dir_path, entry )
yield os.path.join(dir_path, entry)
## else:
## print ' => TYPE REJECTED'
return list( glob_impl( dir_path ) )
return list(glob_impl(dir_path))
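For example, a usage sketch patterned on the commented-out calls in fixeol below (the keyword arguments match those shown in the licenseupdater diff; the pattern strings are illustrative):

# All C++ sources under the current tree, skipping generated directories.
cpp_sources = glob('.', includes='**/*.cpp **/*.h **/*.inl',
                   prune_dirs='build dist')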
if __name__ == "__main__":
import unittest
class AntPatternToRETest(unittest.TestCase):
## def test_conversion( self ):
## self.assertEqual( '^somepath$', ant_pattern_to_re( 'somepath' ).pattern )
## def test_conversion(self):
## self.assertEqual('^somepath$', ant_pattern_to_re('somepath').pattern)
def test_matching( self ):
test_cases = [ ( 'path',
def test_matching(self):
test_cases = [ ('path',
['path'],
['somepath', 'pathsuffix', '/path', '/path'] ),
( '*.py',
['somepath', 'pathsuffix', '/path', '/path']),
('*.py',
['source.py', 'source.ext.py', '.py'],
['path/source.py', '/.py', 'dir.py/z', 'z.pyc', 'z.c'] ),
( '**/path',
['path/source.py', '/.py', 'dir.py/z', 'z.pyc', 'z.c']),
('**/path',
['path', '/path', '/a/path', 'c:/a/path', '/a/b/path', '//a/path', '/a/path/b/path'],
['path/', 'a/path/b', 'dir.py/z', 'somepath', 'pathsuffix', 'a/somepath'] ),
( 'path/**',
['path/', 'a/path/b', 'dir.py/z', 'somepath', 'pathsuffix', 'a/somepath']),
('path/**',
['path/a', 'path/path/a', 'path//'],
['path', 'somepath/a', 'a/path', 'a/path/a', 'pathsuffix/a'] ),
( '/**/path',
['path', 'somepath/a', 'a/path', 'a/path/a', 'pathsuffix/a']),
('/**/path',
['/path', '/a/path', '/a/b/path/path', '/path/path'],
['path', 'path/', 'a/path', '/pathsuffix', '/somepath'] ),
( 'a/b',
['path', 'path/', 'a/path', '/pathsuffix', '/somepath']),
('a/b',
['a/b'],
['somea/b', 'a/bsuffix', 'a/b/c'] ),
( '**/*.py',
['somea/b', 'a/bsuffix', 'a/b/c']),
('**/*.py',
['script.py', 'src/script.py', 'a/b/script.py', '/a/b/script.py'],
['script.pyc', 'script.pyo', 'a.py/b'] ),
( 'src/**/*.py',
['script.pyc', 'script.pyo', 'a.py/b']),
('src/**/*.py',
['src/a.py', 'src/dir/a.py'],
['a/src/a.py', '/src/a.py'] ),
['a/src/a.py', '/src/a.py']),
]
for ant_pattern, accepted_matches, rejected_matches in list(test_cases):
def local_path( paths ):
def local_path(paths):
return [ p.replace('/',os.path.sep) for p in paths ]
test_cases.append( (ant_pattern, local_path(accepted_matches), local_path( rejected_matches )) )
test_cases.append((ant_pattern, local_path(accepted_matches), local_path(rejected_matches)))
for ant_pattern, accepted_matches, rejected_matches in test_cases:
rex = ant_pattern_to_re( ant_pattern )
rex = ant_pattern_to_re(ant_pattern)
print('ant_pattern:', ant_pattern, ' => ', rex.pattern)
for accepted_match in accepted_matches:
print('Accepted?:', accepted_match)
self.assertTrue( rex.match( accepted_match ) is not None )
self.assertTrue(rex.match(accepted_match) is not None)
for rejected_match in rejected_matches:
print('Rejected?:', rejected_match)
self.assertTrue( rex.match( rejected_match ) is None )
self.assertTrue(rex.match(rejected_match) is None)
unittest.main()


@@ -18,62 +18,62 @@ class BuildDesc:
self.build_type = build_type
self.generator = generator
def merged_with( self, build_desc ):
def merged_with(self, build_desc):
"""Returns a new BuildDesc by merging field content.
Prefer build_desc fields to self fields for single-valued fields.
"""
return BuildDesc( self.prepend_envs + build_desc.prepend_envs,
return BuildDesc(self.prepend_envs + build_desc.prepend_envs,
self.variables + build_desc.variables,
build_desc.build_type or self.build_type,
build_desc.generator or self.generator )
build_desc.generator or self.generator)
def env( self ):
def env(self):
environ = os.environ.copy()
for values_by_name in self.prepend_envs:
for var, value in list(values_by_name.items()):
var = var.upper()
if type(value) is unicode:
value = value.encode( sys.getdefaultencoding() )
value = value.encode(sys.getdefaultencoding())
if var in environ:
environ[var] = value + os.pathsep + environ[var]
else:
environ[var] = value
return environ
def cmake_args( self ):
def cmake_args(self):
args = ["-D%s" % var for var in self.variables]
# skip build type for Visual Studio solution as it causes a warning
if self.build_type and 'Visual' not in self.generator:
args.append( "-DCMAKE_BUILD_TYPE=%s" % self.build_type )
args.append("-DCMAKE_BUILD_TYPE=%s" % self.build_type)
if self.generator:
args.extend( ['-G', self.generator] )
args.extend(['-G', self.generator])
return args
def __repr__( self ):
return "BuildDesc( %s, build_type=%s )" % (" ".join( self.cmake_args()), self.build_type)
def __repr__(self):
return "BuildDesc(%s, build_type=%s)" % (" ".join(self.cmake_args()), self.build_type)
class BuildData:
def __init__( self, desc, work_dir, source_dir ):
def __init__(self, desc, work_dir, source_dir):
self.desc = desc
self.work_dir = work_dir
self.source_dir = source_dir
self.cmake_log_path = os.path.join( work_dir, 'batchbuild_cmake.log' )
self.build_log_path = os.path.join( work_dir, 'batchbuild_build.log' )
self.cmake_log_path = os.path.join(work_dir, 'batchbuild_cmake.log')
self.build_log_path = os.path.join(work_dir, 'batchbuild_build.log')
self.cmake_succeeded = False
self.build_succeeded = False
def execute_build(self):
print('Build %s' % self.desc)
self._make_new_work_dir( )
self.cmake_succeeded = self._generate_makefiles( )
self._make_new_work_dir()
self.cmake_succeeded = self._generate_makefiles()
if self.cmake_succeeded:
self.build_succeeded = self._build_using_makefiles( )
self.build_succeeded = self._build_using_makefiles()
return self.build_succeeded
def _generate_makefiles(self):
print(' Generating makefiles: ', end=' ')
cmd = ['cmake'] + self.desc.cmake_args( ) + [os.path.abspath( self.source_dir )]
succeeded = self._execute_build_subprocess( cmd, self.desc.env(), self.cmake_log_path )
cmd = ['cmake'] + self.desc.cmake_args() + [os.path.abspath(self.source_dir)]
succeeded = self._execute_build_subprocess(cmd, self.desc.env(), self.cmake_log_path)
print('done' if succeeded else 'FAILED')
return succeeded
@@ -82,58 +82,58 @@ class BuildData:
cmd = ['cmake', '--build', self.work_dir]
if self.desc.build_type:
cmd += ['--config', self.desc.build_type]
succeeded = self._execute_build_subprocess( cmd, self.desc.env(), self.build_log_path )
succeeded = self._execute_build_subprocess(cmd, self.desc.env(), self.build_log_path)
print('done' if succeeded else 'FAILED')
return succeeded
def _execute_build_subprocess(self, cmd, env, log_path):
process = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=self.work_dir,
env=env )
stdout, _ = process.communicate( )
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=self.work_dir,
env=env)
stdout, _ = process.communicate()
succeeded = (process.returncode == 0)
with open( log_path, 'wb' ) as flog:
log = ' '.join( cmd ) + '\n' + stdout + '\nExit code: %r\n' % process.returncode
flog.write( fix_eol( log ) )
with open(log_path, 'wb') as flog:
log = ' '.join(cmd) + '\n' + stdout + '\nExit code: %r\n' % process.returncode
flog.write(fix_eol(log))
return succeeded
def _make_new_work_dir(self):
if os.path.isdir( self.work_dir ):
if os.path.isdir(self.work_dir):
print(' Removing work directory', self.work_dir)
shutil.rmtree( self.work_dir, ignore_errors=True )
if not os.path.isdir( self.work_dir ):
os.makedirs( self.work_dir )
shutil.rmtree(self.work_dir, ignore_errors=True)
if not os.path.isdir(self.work_dir):
os.makedirs(self.work_dir)
def fix_eol( stdout ):
def fix_eol(stdout):
"""Fixes wrong EOL produced by cmake --build on Windows (\r\r\n instead of \r\n).
"""
return re.sub( '\r*\n', os.linesep, stdout )
return re.sub('\r*\n', os.linesep, stdout)
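The substitution collapses any run of carriage returns before a newline; a quick sketch of the same normalization (os.linesep is '\n' on POSIX):

import re, os
raw = 'line one\r\r\nline two\r\n'
print(re.sub('\r*\n', os.linesep, raw))   # -> 'line one\nline two\n' on POSIX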
def load_build_variants_from_config( config_path ):
with open( config_path, 'rb' ) as fconfig:
data = json.load( fconfig )
def load_build_variants_from_config(config_path):
with open(config_path, 'rb') as fconfig:
data = json.load(fconfig)
variants = data[ 'cmake_variants' ]
build_descs_by_axis = collections.defaultdict( list )
build_descs_by_axis = collections.defaultdict(list)
for axis in variants:
axis_name = axis["name"]
build_descs = []
if "generators" in axis:
for generator_data in axis["generators"]:
for generator in generator_data["generator"]:
build_desc = BuildDesc( generator=generator,
prepend_envs=generator_data.get("env_prepend") )
build_descs.append( build_desc )
build_desc = BuildDesc(generator=generator,
prepend_envs=generator_data.get("env_prepend"))
build_descs.append(build_desc)
elif "variables" in axis:
for variables in axis["variables"]:
build_desc = BuildDesc( variables=variables )
build_descs.append( build_desc )
build_desc = BuildDesc(variables=variables)
build_descs.append(build_desc)
elif "build_types" in axis:
for build_type in axis["build_types"]:
build_desc = BuildDesc( build_type=build_type )
build_descs.append( build_desc )
build_descs_by_axis[axis_name].extend( build_descs )
build_desc = BuildDesc(build_type=build_type)
build_descs.append(build_desc)
build_descs_by_axis[axis_name].extend(build_descs)
return build_descs_by_axis
def generate_build_variants( build_descs_by_axis ):
def generate_build_variants(build_descs_by_axis):
"""Returns a list of BuildDesc generated for the partial BuildDesc for each axis."""
axis_names = list(build_descs_by_axis.keys())
build_descs = []
@@ -141,8 +141,8 @@ def generate_build_variants( build_descs_by_axis ):
if len(build_descs):
# for each existing build_desc and each axis build desc, create a new build_desc
new_build_descs = []
for prototype_build_desc, axis_build_desc in itertools.product( build_descs, axis_build_descs):
new_build_descs.append( prototype_build_desc.merged_with( axis_build_desc ) )
for prototype_build_desc, axis_build_desc in itertools.product(build_descs, axis_build_descs):
new_build_descs.append(prototype_build_desc.merged_with(axis_build_desc))
build_descs = new_build_descs
else:
build_descs = axis_build_descs
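The axes are independent, so the generator expands their cartesian product; a toy sketch of that arithmetic, with plain tuples standing in for the partial BuildDesc objects (axis values are illustrative):

import itertools

generators = ['Unix Makefiles', 'Ninja']
build_types = ['debug', 'release']
variants = list(itertools.product(generators, build_types))
print(len(variants))   # 4: every generator is paired with every build type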
@@ -174,60 +174,57 @@ $tr_builds
</table>
</body></html>''')
def generate_html_report( html_report_path, builds ):
report_dir = os.path.dirname( html_report_path )
def generate_html_report(html_report_path, builds):
report_dir = os.path.dirname(html_report_path)
# Vertical axis: generator
# Horizontal: variables, then build_type
builds_by_generator = collections.defaultdict( list )
builds_by_generator = collections.defaultdict(list)
variables = set()
build_types_by_variable = collections.defaultdict( set )
build_types_by_variable = collections.defaultdict(set)
build_by_pos_key = {} # { (generator, var_key, build_type): build }
for build in builds:
builds_by_generator[build.desc.generator].append( build )
builds_by_generator[build.desc.generator].append(build)
var_key = tuple(sorted(build.desc.variables))
variables.add( var_key )
build_types_by_variable[var_key].add( build.desc.build_type )
variables.add(var_key)
build_types_by_variable[var_key].add(build.desc.build_type)
pos_key = (build.desc.generator, var_key, build.desc.build_type)
build_by_pos_key[pos_key] = build
variables = sorted( variables )
variables = sorted(variables)
th_vars = []
th_build_types = []
for variable in variables:
build_types = sorted( build_types_by_variable[variable] )
build_types = sorted(build_types_by_variable[variable])
nb_build_type = len(build_types_by_variable[variable])
th_vars.append( '<th colspan="%d">%s</th>' % (nb_build_type, cgi.escape( ' '.join( variable ) ) ) )
th_vars.append('<th colspan="%d">%s</th>' % (nb_build_type, cgi.escape(' '.join(variable))))
for build_type in build_types:
th_build_types.append( '<th>%s</th>' % cgi.escape(build_type) )
th_build_types.append('<th>%s</th>' % cgi.escape(build_type))
tr_builds = []
for generator in sorted( builds_by_generator ):
tds = [ '<td>%s</td>\n' % cgi.escape( generator ) ]
for generator in sorted(builds_by_generator):
tds = [ '<td>%s</td>\n' % cgi.escape(generator) ]
for variable in variables:
build_types = sorted( build_types_by_variable[variable] )
build_types = sorted(build_types_by_variable[variable])
for build_type in build_types:
pos_key = (generator, variable, build_type)
build = build_by_pos_key.get(pos_key)
if build:
cmake_status = 'ok' if build.cmake_succeeded else 'FAILED'
build_status = 'ok' if build.build_succeeded else 'FAILED'
cmake_log_url = os.path.relpath( build.cmake_log_path, report_dir )
build_log_url = os.path.relpath( build.build_log_path, report_dir )
td = '<td class="%s"><a href="%s" class="%s">CMake: %s</a>' % (
build_status.lower(), cmake_log_url, cmake_status.lower(), cmake_status)
cmake_log_url = os.path.relpath(build.cmake_log_path, report_dir)
build_log_url = os.path.relpath(build.build_log_path, report_dir)
td = '<td class="%s"><a href="%s" class="%s">CMake: %s</a>' % ( build_status.lower(), cmake_log_url, cmake_status.lower(), cmake_status)
if build.cmake_succeeded:
td += '<br><a href="%s" class="%s">Build: %s</a>' % (
build_log_url, build_status.lower(), build_status)
td += '<br><a href="%s" class="%s">Build: %s</a>' % ( build_log_url, build_status.lower(), build_status)
td += '</td>'
else:
td = '<td></td>'
tds.append( td )
tr_builds.append( '<tr>%s</tr>' % '\n'.join( tds ) )
html = HTML_TEMPLATE.substitute(
title='Batch build report',
tds.append(td)
tr_builds.append('<tr>%s</tr>' % '\n'.join(tds))
html = HTML_TEMPLATE.substitute( title='Batch build report',
th_vars=' '.join(th_vars),
th_build_types=' '.join( th_build_types),
tr_builds='\n'.join( tr_builds ) )
with open( html_report_path, 'wt' ) as fhtml:
fhtml.write( html )
th_build_types=' '.join(th_build_types),
tr_builds='\n'.join(tr_builds))
with open(html_report_path, 'wt') as fhtml:
fhtml.write(html)
print('HTML report generated in:', html_report_path)
def main():
@@ -246,33 +243,33 @@ python devtools\batchbuild.py e:\buildbots\jsoncpp\build . devtools\agent_vmw7.j
parser.enable_interspersed_args()
options, args = parser.parse_args()
if len(args) < 3:
parser.error( "Missing one of WORK_DIR SOURCE_DIR CONFIG_JSON_PATH." )
parser.error("Missing one of WORK_DIR SOURCE_DIR CONFIG_JSON_PATH.")
work_dir = args[0]
source_dir = args[1].rstrip('/\\')
config_paths = args[2:]
for config_path in config_paths:
if not os.path.isfile( config_path ):
parser.error( "Can not read: %r" % config_path )
if not os.path.isfile(config_path):
parser.error("Can not read: %r" % config_path)
# generate build variants
build_descs = []
for config_path in config_paths:
build_descs_by_axis = load_build_variants_from_config( config_path )
build_descs.extend( generate_build_variants( build_descs_by_axis ) )
build_descs_by_axis = load_build_variants_from_config(config_path)
build_descs.extend(generate_build_variants(build_descs_by_axis))
print('Build variants (%d):' % len(build_descs))
# assign build directory for each variant
if not os.path.isdir( work_dir ):
os.makedirs( work_dir )
if not os.path.isdir(work_dir):
os.makedirs(work_dir)
builds = []
with open( os.path.join( work_dir, 'matrix-dir-map.txt' ), 'wt' ) as fmatrixmap:
for index, build_desc in enumerate( build_descs ):
build_desc_work_dir = os.path.join( work_dir, '%03d' % (index+1) )
builds.append( BuildData( build_desc, build_desc_work_dir, source_dir ) )
fmatrixmap.write( '%s: %s\n' % (build_desc_work_dir, build_desc) )
with open(os.path.join(work_dir, 'matrix-dir-map.txt'), 'wt') as fmatrixmap:
for index, build_desc in enumerate(build_descs):
build_desc_work_dir = os.path.join(work_dir, '%03d' % (index+1))
builds.append(BuildData(build_desc, build_desc_work_dir, source_dir))
fmatrixmap.write('%s: %s\n' % (build_desc_work_dir, build_desc))
for build in builds:
build.execute_build()
html_report_path = os.path.join( work_dir, 'batchbuild-report.html' )
generate_html_report( html_report_path, builds )
html_report_path = os.path.join(work_dir, 'batchbuild-report.html')
generate_html_report(html_report_path, builds)
print('Done')


@@ -1,10 +1,10 @@
from __future__ import print_function
import os.path
def fix_source_eol( path, is_dry_run = True, verbose = True, eol = '\n' ):
def fix_source_eol(path, is_dry_run = True, verbose = True, eol = '\n'):
"""Makes sure that all sources have the specified eol sequence (default: unix)."""
if not os.path.isfile( path ):
raise ValueError( 'Path "%s" is not a file' % path )
if not os.path.isfile(path):
raise ValueError('Path "%s" is not a file' % path)
try:
f = open(path, 'rb')
except IOError as msg:
@@ -29,27 +29,27 @@ def fix_source_eol( path, is_dry_run = True, verbose = True, eol = '\n' ):
##
##
##
##def _do_fix( is_dry_run = True ):
##def _do_fix(is_dry_run = True):
## from waftools import antglob
## python_sources = antglob.glob( '.',
## python_sources = antglob.glob('.',
## includes = '**/*.py **/wscript **/wscript_build',
## excludes = antglob.default_excludes + './waf.py',
## prune_dirs = antglob.prune_dirs + 'waf-* ./build' )
## prune_dirs = antglob.prune_dirs + 'waf-* ./build')
## for path in python_sources:
## _fix_python_source( path, is_dry_run )
## _fix_python_source(path, is_dry_run)
##
## cpp_sources = antglob.glob( '.',
## cpp_sources = antglob.glob('.',
## includes = '**/*.cpp **/*.h **/*.inl',
## prune_dirs = antglob.prune_dirs + 'waf-* ./build' )
## prune_dirs = antglob.prune_dirs + 'waf-* ./build')
## for path in cpp_sources:
## _fix_source_eol( path, is_dry_run )
## _fix_source_eol(path, is_dry_run)
##
##
##def dry_fix(context):
## _do_fix( is_dry_run = True )
## _do_fix(is_dry_run = True)
##
##def fix(context):
## _do_fix( is_dry_run = False )
## _do_fix(is_dry_run = False)
##
##def shutdown():
## pass


@@ -13,7 +13,7 @@ BRIEF_LICENSE = LICENSE_BEGIN + """2007-2010 Baptiste Lepilleur
""".replace('\r\n','\n')
def update_license( path, dry_run, show_diff ):
def update_license(path, dry_run, show_diff):
"""Update the license statement in the specified file.
Parameters:
path: path of the C++ source file to update.
@@ -22,28 +22,28 @@ def update_license( path, dry_run, show_diff ):
show_diff: if True, print the path of the file that would be modified,
as well as the change made to the file.
"""
with open( path, 'rt' ) as fin:
with open(path, 'rt') as fin:
original_text = fin.read().replace('\r\n','\n')
newline = fin.newlines and fin.newlines[0] or '\n'
if not original_text.startswith( LICENSE_BEGIN ):
if not original_text.startswith(LICENSE_BEGIN):
# No existing license found => prepend it
new_text = BRIEF_LICENSE + original_text
else:
license_end_index = original_text.index( '\n\n' ) # search first blank line
license_end_index = original_text.index('\n\n') # search first blank line
new_text = BRIEF_LICENSE + original_text[license_end_index+2:]
if original_text != new_text:
if not dry_run:
with open( path, 'wb' ) as fout:
fout.write( new_text.replace('\n', newline ) )
with open(path, 'wb') as fout:
fout.write(new_text.replace('\n', newline))
print('Updated', path)
if show_diff:
import difflib
print('\n'.join( difflib.unified_diff( original_text.split('\n'),
new_text.split('\n') ) ))
print('\n'.join(difflib.unified_diff(original_text.split('\n'),
new_text.split('\n'))))
return True
return False
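A usage sketch, assuming the module above is importable (a dry run over one hypothetical file; nothing is written back):

# Reports whether the license banner would change; prints a unified diff.
changed = update_license('src/lib_json/json_value.cpp', dry_run=True, show_diff=True)
print('needs update' if changed else 'up to date')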
def update_license_in_source_directories( source_dirs, dry_run, show_diff ):
def update_license_in_source_directories(source_dirs, dry_run, show_diff):
"""Updates license text in C++ source files found in directory source_dirs.
Parameters:
source_dirs: list of directory to scan for C++ sources. Directories are
@@ -56,11 +56,11 @@ def update_license_in_source_directories( source_dirs, dry_run, show_diff ):
from devtools import antglob
prune_dirs = antglob.prune_dirs + 'scons-local* ./build* ./libs ./dist'
for source_dir in source_dirs:
cpp_sources = antglob.glob( source_dir,
cpp_sources = antglob.glob(source_dir,
includes = '''**/*.h **/*.cpp **/*.inl''',
prune_dirs = prune_dirs )
prune_dirs = prune_dirs)
for source in cpp_sources:
update_license( source, dry_run, show_diff )
update_license(source, dry_run, show_diff)
def main():
usage = """%prog DIR [DIR2...]
@@ -83,7 +83,7 @@ python devtools\licenseupdater.py include src
help="""On update, show change made to the file.""")
parser.enable_interspersed_args()
options, args = parser.parse_args()
update_license_in_source_directories( args, options.dry_run, options.show_diff )
update_license_in_source_directories(args, options.dry_run, options.show_diff)
print('Done')
if __name__ == '__main__':


@@ -1,5 +1,5 @@
import os.path
import gzip
from contextlib import closing
import os
import tarfile
TARGZ_DEFAULT_COMPRESSION_LEVEL = 9
@@ -13,41 +13,35 @@ def make_tarball(tarball_path, sources, base_dir, prefix_dir=''):
prefix_dir: all files stored in the tarball are placed under prefix_dir. Set to ''
to make them children of the root.
"""
base_dir = os.path.normpath( os.path.abspath( base_dir ) )
def archive_name( path ):
base_dir = os.path.normpath(os.path.abspath(base_dir))
def archive_name(path):
"""Makes path relative to base_dir."""
path = os.path.normpath( os.path.abspath( path ) )
common_path = os.path.commonprefix( (base_dir, path) )
path = os.path.normpath(os.path.abspath(path))
common_path = os.path.commonprefix((base_dir, path))
archive_name = path[len(common_path):]
if os.path.isabs( archive_name ):
if os.path.isabs(archive_name):
archive_name = archive_name[1:]
return os.path.join( prefix_dir, archive_name )
return os.path.join(prefix_dir, archive_name)
def visit(tar, dirname, names):
for name in names:
path = os.path.join(dirname, name)
if os.path.isfile(path):
path_in_tar = archive_name(path)
tar.add(path, path_in_tar )
tar.add(path, path_in_tar)
compression = TARGZ_DEFAULT_COMPRESSION_LEVEL
tar = tarfile.TarFile.gzopen( tarball_path, 'w', compresslevel=compression )
try:
with closing(tarfile.TarFile.open(tarball_path, 'w:gz',
compresslevel=compression)) as tar:
for source in sources:
source_path = source
if os.path.isdir( source ):
os.path.walk(source_path, visit, tar)
if os.path.isdir(source):
for dirpath, dirnames, filenames in os.walk(source_path):
visit(tar, dirpath, filenames)
else:
path_in_tar = archive_name(source_path)
tar.add(source_path, path_in_tar ) # filename, arcname
finally:
tar.close()
tar.add(source_path, path_in_tar) # filename, arcname
def decompress( tarball_path, base_dir ):
def decompress(tarball_path, base_dir):
"""Decompress the gzipped tarball into directory base_dir.
"""
# !!! This class method is not documented in the online doc
# nor is bz2open!
tar = tarfile.TarFile.gzopen(tarball_path, mode='r')
try:
tar.extractall( base_dir )
finally:
tar.close()
with closing(tarfile.TarFile.open(tarball_path)) as tar:
tar.extractall(base_dir)
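A round-trip sketch using the two helpers above (the paths are illustrative, not fixed by the module):

# Hypothetical round trip: pack a tree, then unpack it elsewhere.
make_tarball('dist/docs.tar.gz', ['dist/doxygen'], base_dir='dist',
             prefix_dir='jsoncpp-api-html')
decompress('dist/docs.tar.gz', 'unpacked')   # extracts under unpacked/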


@@ -819,7 +819,7 @@ EXCLUDE_SYMBOLS =
# that contain example code fragments that are included (see the \include
# command).
EXAMPLE_PATH =
EXAMPLE_PATH = ..
# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
@@ -1946,8 +1946,7 @@ INCLUDE_FILE_PATTERNS = *.h
PREDEFINED = "_MSC_VER=1400" \
_CPPRTTI \
_WIN32 \
JSONCPP_DOC_EXCLUDE_IMPLEMENTATION \
JSON_VALUE_USE_INTERNAL_MAP
JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The


@@ -16,7 +16,7 @@ JsonCpp - JSON data format manipulation library
</a>
</td>
<td width="40%" align="right" valign="center">
<a href="https://github.com/open-source-parsers/jsoncpp">JsonCpp home page</a>
<a href="http://open-source-parsers.github.io/jsoncpp-docs/doxygen/">JsonCpp home page</a>
</td>
</tr>
</table>


@@ -4,11 +4,21 @@
<a HREF="http://www.json.org/">JSON (JavaScript Object Notation)</a>
is a lightweight data-interchange format.
It can represent integers, real numbers, strings, an ordered sequence of values, and
a collection of name/value pairs.
Here is an example of JSON data:
\verbatim
{
"encoding" : "UTF-8",
"plug-ins" : [
"python",
"c++",
"ruby"
],
"indent" : { "length" : 3, "use_space": true }
}
\endverbatim
<b>JsonCpp</b> supports comments as <i>meta-data</i>:
\code
// Configuration options
{
// Default encoding for text
@@ -17,22 +27,22 @@ Here is an example of JSON data:
// Plug-ins loaded at start-up
"plug-ins" : [
"python",
"c++",
"c++", // trailing comment
"ruby"
],
// Tab indent size
"indent" : { "length" : 3, "use_space": true }
// (multi-line comment)
"indent" : { /*embedded comment*/ "length" : 3, "use_space": true }
}
\endverbatim
<code>jsoncpp</code> supports comments as <i>meta-data</i>.
\endcode
\section _features Features
- read and write JSON document
- attach C++ style comments to element during parsing
- rewrite JSON document preserving original comments
Notes: Comments used to be supported in JSON but where removed for
Notes: Comments used to be supported in JSON but were removed for
portability (C like comments are not supported in Python). Since
comments are useful in configuration/input file, this feature was
preserved.
@@ -40,47 +50,77 @@ preserved.
\section _example Code example
\code
Json::Value root; // will contains the root value after parsing.
Json::Reader reader;
bool parsingSuccessful = reader.parse( config_doc, root );
if ( !parsingSuccessful )
{
// report to the user the failure and their locations in the document.
std::cout << "Failed to parse configuration\n"
<< reader.getFormattedErrorMessages();
return;
}
Json::Value root; // 'root' will contain the root value after parsing.
std::cin >> root;
// Get the value of the member of root named 'encoding', return 'UTF-8' if there is no
// such member.
std::string encoding = root.get("encoding", "UTF-8" ).asString();
// Get the value of the member of root named 'encoding', return a 'null' value if
// there is no such member.
const Json::Value plugins = root["plug-ins"];
for ( int index = 0; index < plugins.size(); ++index ) // Iterates over the sequence elements.
loadPlugIn( plugins[index].asString() );
setIndentLength( root["indent"].get("length", 3).asInt() );
setIndentUseSpace( root["indent"].get("use_space", true).asBool() );
// ...
// At application shutdown to make the new configuration document:
// Since Json::Value has implicit constructor for all value types, it is not
// necessary to explicitly construct the Json::Value object:
root["encoding"] = getCurrentEncoding();
root["indent"]["length"] = getCurrentIndentLength();
root["indent"]["use_space"] = getCurrentIndentUseSpace();
Json::StyledWriter writer;
// Make a new JSON document for the configuration. Preserve original comments.
std::string outputConfig = writer.write( root );
// You can also use streams. This will put the contents of any JSON
// stream at a particular sub-value, if you'd like.
// You can also read into a particular sub-value.
std::cin >> root["subtree"];
// And you can write to a stream, using the StyledWriter automatically.
// Get the value of the member of root named 'encoding',
// and return 'UTF-8' if there is no such member.
std::string encoding = root.get("encoding", "UTF-8" ).asString();
// Get the value of the member of root named 'plug-ins'; return a 'null' value if
// there is no such member.
const Json::Value plugins = root["plug-ins"];
// Iterate over the sequence elements.
for ( int index = 0; index < plugins.size(); ++index )
loadPlugIn( plugins[index].asString() );
// Try other datatypes. Some are auto-convertible to others.
foo::setIndentLength( root["indent"].get("length", 3).asInt() );
foo::setIndentUseSpace( root["indent"].get("use_space", true).asBool() );
// Since Json::Value has an implicit constructor for all value types, it is not
// necessary to explicitly construct the Json::Value object.
root["encoding"] = foo::getCurrentEncoding();
root["indent"]["length"] = foo::getCurrentIndentLength();
root["indent"]["use_space"] = foo::getCurrentIndentUseSpace();
// If you like the defaults, you can insert directly into a stream.
std::cout << root;
// Of course, you can write to `std::ostringstream` if you prefer.
// If desired, remember to add a linefeed and flush.
std::cout << std::endl;
\endcode
\section _advanced Advanced usage
Configure *builders* to create *readers* and *writers*. For
configuration, we use our own `Json::Value` (rather than
standard setters/getters) so that we can add
features without losing binary-compatibility.
\code
// For convenience, use `writeString()` with a specialized builder.
Json::StreamWriterBuilder wbuilder;
wbuilder["indentation"] = "\t";
std::string document = Json::writeString(wbuilder, root);
// Here, using a specialized Builder, we discard comments and
// record errors as we parse.
Json::CharReaderBuilder rbuilder;
rbuilder["collectComments"] = false;
std::string errs;
bool ok = Json::parseFromStream(rbuilder, std::cin, &root, &errs);
\endcode
Yes, compile-time configuration-checking would be helpful,
but `Json::Value` lets you
write and read the builder configuration, which is better! In other words,
you can configure your JSON parser using JSON.
CharReaders and StreamWriters are not thread-safe, but they are re-usable.
\code
Json::CharReaderBuilder rbuilder;
cfg >> rbuilder.settings_;
std::unique_ptr<Json::CharReader> const reader(rbuilder.newCharReader());
reader->parse(start, stop, &value1, &errs);
// ...
reader->parse(start, stop, &value2, &errs);
// etc.
\endcode
\section _pbuild Build instructions
@@ -116,4 +156,9 @@ Basically JsonCpp is licensed under MIT license, or public domain if desired
and recognized in your jurisdiction.
\author Baptiste Lepilleur <blep@users.sourceforge.net> (originator)
\author Christopher Dunn <cdunn2001@gmail.com> (primary maintainer)
\version \include version
We make strong guarantees about binary-compatibility, consistent with
<a href="http://apr.apache.org/versioning.html">the Apache versioning scheme</a>.
\sa version.h
*/

doc/web_doxyfile.in: new file, 2301 lines (diff suppressed because it is too large).

@@ -1,22 +1,37 @@
"""Script to generate doxygen documentation.
"""
from __future__ import print_function
from __future__ import unicode_literals
from devtools import tarball
from contextlib import contextmanager
import subprocess
import traceback
import re
import os
import os.path
import sys
import shutil
@contextmanager
def cd(newdir):
"""
http://stackoverflow.com/questions/431684/how-do-i-cd-in-python
"""
prevdir = os.getcwd()
os.chdir(newdir)
try:
yield
finally:
os.chdir(prevdir)
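run_doxygen() below relies on this context manager to restore the working directory even on failure; for example:

import os

with cd('..'):              # any existing directory works; '..' is just illustrative
    print(os.getcwd())      # parent directory
print(os.getcwd())          # original directory, restored by the finally block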
def find_program(*filenames):
"""find a program in folders path_lst, and sets env[var]
@param filenames: a list of possible names of the program to search for
@return: the full path of the filename if found, or '' if filename could not be found
"""
paths = os.environ.get('PATH', '').split(os.pathsep)
suffixes = ('win32' in sys.platform ) and '.exe .com .bat .cmd' or ''
suffixes = ('win32' in sys.platform) and '.exe .com .bat .cmd' or ''
for filename in filenames:
for name in [filename+ext for ext in suffixes.split()]:
for name in [filename+ext for ext in suffixes.split(' ')]:
for directory in paths:
full_path = os.path.join(directory, name)
if os.path.isfile(full_path):
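The switch from suffixes.split() to suffixes.split(' ') is the substance of the non-Windows fix in this hunk: on non-Windows platforms suffixes is the empty string, and an empty split() yields no candidate names at all, whereas split(' ') yields one empty suffix so the bare filename is still tried:

print(''.split())      # []   -> zero candidate names, so nothing was ever found
print(''.split(' '))   # [''] -> 'doxygen' + '' is tried as a candidate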
@@ -28,53 +43,56 @@ def do_subst_in_file(targetfile, sourcefile, dict):
For example, if dict is {'%VERSION%': '1.2345', '%BASE%': 'MyProg'},
then all instances of %VERSION% in the file will be replaced with 1.2345 etc.
"""
try:
f = open(sourcefile, 'rb')
with open(sourcefile, 'r') as f:
contents = f.read()
f.close()
except:
print("Can't read source file %s"%sourcefile)
raise
for (k,v) in list(dict.items()):
v = v.replace('\\','\\\\')
contents = re.sub(k, v, contents)
try:
f = open(targetfile, 'wb')
with open(targetfile, 'w') as f:
f.write(contents)
f.close()
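A sketch of the substitution contract from the docstring above (the file names are the ones build_doc() uses below; the version string is illustrative):

# Rewrites doc/doxyfile from its template, expanding each %KEY% in place.
do_subst_in_file('doc/doxyfile', 'doc/doxyfile.in',
                 {'%VERSION%': '1.6.1', '%BASE%': 'jsoncpp'})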
def getstatusoutput(cmd):
"""cmd is a list.
"""
try:
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = process.communicate()
status = process.returncode
except:
print("Can't write target file %s"%targetfile)
raise
status = -1
output = traceback.format_exc()
return status, output
def run_cmd(cmd, silent=False):
"""Raise exception on failure.
"""
info = 'Running: %r in %r' %(' '.join(cmd), os.getcwd())
print(info)
sys.stdout.flush()
if silent:
status, output = getstatusoutput(cmd)
else:
status, output = os.system(' '.join(cmd)), ''
if status:
msg = 'Error while %s ...\n\terror=%d, output="""%s"""' %(info, status, output)
raise Exception(msg)
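run_cmd() is then the one choke point for shelling out; a minimal sketch (assuming doxygen is on the PATH):

# Raises, with the captured output, if the command exits non-zero.
run_cmd(['doxygen', '--version'], silent=True)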
def assert_is_exe(path):
if not path:
raise Exception('path is empty.')
if not os.path.isfile(path):
raise Exception('%r is not a file.' %path)
if not os.access(path, os.X_OK):
raise Exception('%r is not executable by this user.' %path)
def run_doxygen(doxygen_path, config_file, working_dir, is_silent):
config_file = os.path.abspath( config_file )
doxygen_path = doxygen_path
old_cwd = os.getcwd()
try:
os.chdir( working_dir )
assert_is_exe(doxygen_path)
config_file = os.path.abspath(config_file)
with cd(working_dir):
cmd = [doxygen_path, config_file]
print('Running:', ' '.join( cmd ))
try:
import subprocess
except:
if os.system( ' '.join( cmd ) ) != 0:
print('Documentation generation failed')
return False
else:
if is_silent:
process = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT )
else:
process = subprocess.Popen( cmd )
stdout, _ = process.communicate()
if process.returncode:
print('Documentation generation failed:')
print(stdout)
return False
return True
finally:
os.chdir( old_cwd )
run_cmd(cmd, is_silent)
def build_doc( options, make_release=False ):
def build_doc(options, make_release=False):
if make_release:
options.make_tarball = True
options.with_dot = True
@@ -83,56 +101,56 @@ def build_doc( options, make_release=False ):
options.open = False
options.silent = True
version = open('version','rt').read().strip()
version = open('version', 'rt').read().strip()
output_dir = 'dist/doxygen' # relative to doc/doxyfile location.
if not os.path.isdir( output_dir ):
os.makedirs( output_dir )
top_dir = os.path.abspath( '.' )
if not os.path.isdir(output_dir):
os.makedirs(output_dir)
top_dir = os.path.abspath('.')
html_output_dirname = 'jsoncpp-api-html-' + version
tarball_path = os.path.join( 'dist', html_output_dirname + '.tar.gz' )
warning_log_path = os.path.join( output_dir, '../jsoncpp-doxygen-warning.log' )
html_output_path = os.path.join( output_dir, html_output_dirname )
def yesno( bool ):
tarball_path = os.path.join('dist', html_output_dirname + '.tar.gz')
warning_log_path = os.path.join(output_dir, '../jsoncpp-doxygen-warning.log')
html_output_path = os.path.join(output_dir, html_output_dirname)
def yesno(bool):
return bool and 'YES' or 'NO'
subst_keys = {
'%JSONCPP_VERSION%': version,
'%DOC_TOPDIR%': '',
'%TOPDIR%': top_dir,
'%HTML_OUTPUT%': os.path.join( '..', output_dir, html_output_dirname ),
'%HTML_OUTPUT%': os.path.join('..', output_dir, html_output_dirname),
'%HAVE_DOT%': yesno(options.with_dot),
'%DOT_PATH%': os.path.split(options.dot_path)[0],
'%HTML_HELP%': yesno(options.with_html_help),
'%UML_LOOK%': yesno(options.with_uml_look),
'%WARNING_LOG_PATH%': os.path.join( '..', warning_log_path )
'%WARNING_LOG_PATH%': os.path.join('..', warning_log_path)
}
if os.path.isdir( output_dir ):
if os.path.isdir(output_dir):
print('Deleting directory:', output_dir)
shutil.rmtree( output_dir )
if not os.path.isdir( output_dir ):
os.makedirs( output_dir )
shutil.rmtree(output_dir)
if not os.path.isdir(output_dir):
os.makedirs(output_dir)
do_subst_in_file( 'doc/doxyfile', 'doc/doxyfile.in', subst_keys )
ok = run_doxygen( options.doxygen_path, 'doc/doxyfile', 'doc', is_silent=options.silent )
do_subst_in_file('doc/doxyfile', options.doxyfile_input_path, subst_keys)
run_doxygen(options.doxygen_path, 'doc/doxyfile', 'doc', is_silent=options.silent)
if not options.silent:
print(open(warning_log_path, 'rb').read())
print(open(warning_log_path, 'r').read())
index_path = os.path.abspath(os.path.join('doc', subst_keys['%HTML_OUTPUT%'], 'index.html'))
print('Generated documentation can be found in:')
print(index_path)
if options.open:
import webbrowser
webbrowser.open( 'file://' + index_path )
webbrowser.open('file://' + index_path)
if options.make_tarball:
print('Generating doc tarball to', tarball_path)
tarball_sources = [
output_dir,
'README.txt',
'README.md',
'LICENSE',
'NEWS.txt',
'version'
]
tarball_basedir = os.path.join( output_dir, html_output_dirname )
tarball.make_tarball( tarball_path, tarball_sources, tarball_basedir, html_output_dirname )
tarball_basedir = os.path.join(output_dir, html_output_dirname)
tarball.make_tarball(tarball_path, tarball_sources, tarball_basedir, html_output_dirname)
return tarball_path, html_output_dirname
def main():
@@ -151,6 +169,8 @@ def main():
help="""Path to GraphViz dot tool. Must be full qualified path. [Default: %default]""")
parser.add_option('--doxygen', dest="doxygen_path", action='store', default=find_program('doxygen'),
help="""Path to Doxygen tool. [Default: %default]""")
parser.add_option('--in', dest="doxyfile_input_path", action='store', default='doc/doxyfile.in',
help="""Path to doxygen inputs. [Default: %default]""")
parser.add_option('--with-html-help', dest="with_html_help", action='store_true', default=False,
help="""Enable generation of Microsoft HTML HELP""")
parser.add_option('--no-uml-look', dest="with_uml_look", action='store_false', default=True,
@@ -163,7 +183,7 @@ def main():
help="""Hides doxygen output""")
parser.enable_interspersed_args()
options, args = parser.parse_args()
build_doc( options )
build_doc(options)
if __name__ == '__main__':
main()


@@ -7,35 +7,48 @@
#define CPPTL_JSON_ASSERTIONS_H_INCLUDED
#include <stdlib.h>
#include <sstream>
#if !defined(JSON_IS_AMALGAMATION)
#include "config.h"
#endif // if !defined(JSON_IS_AMALGAMATION)
/** It should not be possible for a maliciously designed file to
* cause an abort() or seg-fault, so these macros are used only
* for pre-condition violations and internal logic errors.
*/
#if JSON_USE_EXCEPTION
#include <stdexcept>
#define JSON_ASSERT(condition) \
assert(condition); // @todo <= change this into an exception throw
#define JSON_FAIL_MESSAGE(message) throw std::runtime_error(message);
// @todo <= add detail about condition in exception
# define JSON_ASSERT(condition) \
{if (!(condition)) {Json::throwLogicError( "assert json failed" );}}
# define JSON_FAIL_MESSAGE(message) \
{ \
std::ostringstream oss; oss << message; \
Json::throwLogicError(oss.str()); \
abort(); \
}
#else // JSON_USE_EXCEPTION
#define JSON_ASSERT(condition) assert(condition);
# define JSON_ASSERT(condition) assert(condition)
// The call to assert() will show the failure message in debug builds. In
// release bugs we write to invalid memory in order to crash hard, so that a
// debugger or crash reporter gets the chance to take over. We still call exit()
// afterward in order to tell the compiler that this macro doesn't return.
#define JSON_FAIL_MESSAGE(message) \
// release builds we abort, for a core-dump or debugger.
# define JSON_FAIL_MESSAGE(message) \
{ \
assert(false&& message); \
strcpy(reinterpret_cast<char*>(666), message); \
exit(123); \
std::ostringstream oss; oss << message; \
assert(false && oss.str().c_str()); \
abort(); \
}
#endif
#define JSON_ASSERT_MESSAGE(condition, message) \
if (!(condition)) { \
JSON_FAIL_MESSAGE(message) \
JSON_FAIL_MESSAGE(message); \
}
#endif // CPPTL_JSON_ASSERTIONS_H_INCLUDED


@@ -15,17 +15,6 @@
/// std::map
/// as Value container.
//# define JSON_USE_CPPTL_SMALLMAP 1
/// If defined, indicates that Json specific container should be used
/// (hash table & simple deque container with customizable allocator).
/// THIS FEATURE IS STILL EXPERIMENTAL! There is know bugs: See #3177332
//# define JSON_VALUE_USE_INTERNAL_MAP 1
/// Force usage of standard new/malloc based allocator instead of memory pool
/// based allocator.
/// The memory pools allocator used optimization (initializing Value and
/// ValueInternalLink
/// as if it was a POD) that may cause some validation tool to report errors.
/// Only has effects if JSON_VALUE_USE_INTERNAL_MAP is defined.
//# define JSON_USE_SIMPLE_INTERNAL_ALLOCATOR 1
// If non-zero, the library uses exceptions to report bad input instead of C
// assertion macros. The default is to use exceptions.
@@ -81,6 +70,14 @@
#if defined(_MSC_VER) && _MSC_VER >= 1500 // MSVC 2008
/// Indicates that the following function is deprecated.
#define JSONCPP_DEPRECATED(message) __declspec(deprecated(message))
#elif defined(__clang__) && defined(__has_feature)
#if __has_feature(attribute_deprecated_with_message)
#define JSONCPP_DEPRECATED(message) __attribute__ ((deprecated(message)))
#endif
#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
#define JSONCPP_DEPRECATED(message) __attribute__ ((deprecated(message)))
#elif defined(__GNUC__) && (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 1))
#define JSONCPP_DEPRECATED(message) __attribute__((__deprecated__))
#endif
#if !defined(JSONCPP_DEPRECATED)


@@ -31,12 +31,6 @@ class Value;
class ValueIteratorBase;
class ValueIterator;
class ValueConstIterator;
#ifdef JSON_VALUE_USE_INTERNAL_MAP
class ValueMapAllocator;
class ValueInternalLink;
class ValueInternalArray;
class ValueInternalMap;
#endif // #ifdef JSON_VALUE_USE_INTERNAL_MAP
} // namespace Json


@@ -14,6 +14,7 @@
#include <iosfwd>
#include <stack>
#include <string>
#include <istream>
// Disable warning C4251: <data member>: <type> needs to have dll-interface to
// be used by...
@@ -27,6 +28,7 @@ namespace Json {
/** \brief Unserialize a <a HREF="http://www.json.org">JSON</a> document into a
*Value.
*
* \deprecated Use CharReader and CharReaderBuilder.
*/
class JSON_API Reader {
public:
@@ -78,7 +80,7 @@ public:
document to read.
* \param endDoc Pointer on the end of the UTF-8 encoded string of the
document to read.
\ Must be >= beginDoc.
* Must be >= beginDoc.
* \param root [out] Contains the root value of the document if it was
* successfully parsed.
* \param collectComments \c true to collect comment and allow writing them
@@ -108,7 +110,7 @@ public:
* during parsing.
* \deprecated Use getFormattedErrorMessages() instead (typo fix).
*/
JSONCPP_DEPRECATED("Use getFormattedErrorMessages instead")
JSONCPP_DEPRECATED("Use getFormattedErrorMessages() instead.")
std::string getFormatedErrorMessages() const;
/** \brief Returns a user-friendly string that lists errors in the parsed
@@ -187,7 +189,6 @@ private:
typedef std::deque<ErrorInfo> Errors;
bool expectToken(TokenType type, Token& token, const char* message);
bool readToken(Token& token);
void skipSpaces();
bool match(Location pattern, int patternLength);
@@ -239,8 +240,132 @@ private:
std::string commentsBefore_;
Features features_;
bool collectComments_;
}; // Reader
/** Interface for reading JSON from a char array.
*/
class JSON_API CharReader {
public:
virtual ~CharReader() {}
/** \brief Read a Value from a <a HREF="http://www.json.org">JSON</a>
document.
* The document must be a UTF-8 encoded string containing the document to read.
*
* \param beginDoc Pointer on the beginning of the UTF-8 encoded string of the
document to read.
* \param endDoc Pointer on the end of the UTF-8 encoded string of the
document to read.
* Must be >= beginDoc.
* \param root [out] Contains the root value of the document if it was
* successfully parsed.
* \param errs [out] Formatted error messages (if not NULL)
* a user friendly string that lists errors in the parsed
* document.
* \return \c true if the document was successfully parsed, \c false if an
error occurred.
*/
virtual bool parse(
char const* beginDoc, char const* endDoc,
Value* root, std::string* errs) = 0;
class Factory {
public:
virtual ~Factory() {}
/** \brief Allocate a CharReader via operator new().
* \throw std::exception if something goes wrong (e.g. invalid settings)
*/
virtual CharReader* newCharReader() const = 0;
}; // Factory
}; // CharReader
/** \brief Build a CharReader implementation.
Usage:
\code
using namespace Json;
CharReaderBuilder builder;
builder["collectComments"] = false;
Value value;
std::string errs;
bool ok = parseFromStream(builder, std::cin, &value, &errs);
\endcode
*/
class JSON_API CharReaderBuilder : public CharReader::Factory {
public:
// Note: We use a Json::Value so that we can add data-members to this class
// without a major version bump.
/** Configuration of this builder.
These are case-sensitive.
Available settings (case-sensitive):
- `"collectComments": false or true`
- true to collect comments and allow writing them
back during serialization, false to discard comments.
This parameter is ignored if allowComments is false.
- `"allowComments": false or true`
- true if comments are allowed.
- `"strictRoot": false or true`
- true if root must be either an array or an object value
- `"allowDroppedNullPlaceholders": false or true`
- true if dropped null placeholders are allowed. (See StreamWriterBuilder.)
- `"allowNumericKeys": false or true`
- true if numeric object keys are allowed.
- `"allowSingleQuotes": false or true`
- true if '' are allowed for strings (both keys and values)
- `"stackLimit": integer`
- Exceeding stackLimit (recursive depth of `readValue()`) will
cause an exception.
- This is a security issue (seg-faults caused by deeply nested JSON),
so the default is low.
- `"failIfExtra": false or true`
- If true, `parse()` returns false when extra non-whitespace trails
the JSON value in the input string.
- `"rejectDupKeys": false or true`
- If true, `parse()` returns false when a key is duplicated within an object.
You can examine `settings_` yourself
to see the defaults. You can also write and read them just like any
JSON Value.
\sa setDefaults()
*/
Json::Value settings_;
CharReaderBuilder();
virtual ~CharReaderBuilder();
virtual CharReader* newCharReader() const;
/** \return true if 'settings' are legal and consistent;
* otherwise, indicate bad settings via 'invalid'.
*/
bool validate(Json::Value* invalid) const;
/** A simple way to update a specific setting.
*/
Value& operator[](std::string key);
/** Called by ctor, but you can use this to reset settings_.
* \pre 'settings' != NULL (but Json::null is fine)
* \remark Defaults:
* \snippet src/lib_json/json_reader.cpp CharReaderBuilderStrictMode
*/
static void setDefaults(Json::Value* settings);
/** Same as old Features::strictMode().
* \pre 'settings' != NULL (but Json::null is fine)
* \remark Defaults:
* \snippet src/lib_json/json_reader.cpp CharReaderBuilderDefaults
*/
static void strictMode(Json::Value* settings);
};
/** Consume entire stream and use its begin/end.
* Someday we might have a real StreamReader, but for now this
* is convenient.
*/
bool JSON_API parseFromStream(
CharReader::Factory const&,
std::istream&,
Value* root, std::string* errs);
/** \brief Read from 'sin' into 'root'.
Always keep comments from the input JSON.


@@ -11,6 +11,7 @@
#endif // if !defined(JSON_IS_AMALGAMATION)
#include <string>
#include <vector>
#include <exception>
#ifndef JSON_USE_CPPTL_SMALLMAP
#include <map>
@@ -32,6 +33,31 @@
*/
namespace Json {
/** Base class for all exceptions we throw.
*
* We use nothing but these internally. Of course, STL can throw others.
*/
class JSON_API Exception;
/** Exceptions which the user cannot easily avoid.
*
* E.g. out-of-memory (when we use malloc), stack-overflow, malicious input
*
* \remark derived from Json::Exception
*/
class JSON_API RuntimeError;
/** Exceptions thrown by JSON_ASSERT/JSON_FAIL macros.
*
* These are precondition-violations (user bugs) and internal errors (our bugs).
*
* \remark derived from Json::Exception
*/
class JSON_API LogicError;
/// used internally
void throwRuntimeError(std::string const& msg);
/// used internally
void throwLogicError(std::string const& msg);
/** \brief Type of the value held by a Value object.
*/
enum ValueType {
@@ -74,14 +100,14 @@ enum CommentPlacement {
*/
class JSON_API StaticString {
public:
explicit StaticString(const char* czstring) : str_(czstring) {}
explicit StaticString(const char* czstring) : c_str_(czstring) {}
operator const char*() const { return str_; }
operator const char*() const { return c_str_; }
const char* c_str() const { return str_; }
const char* c_str() const { return c_str_; }
private:
const char* str_;
const char* c_str_;
};
/** \brief Represents a <a HREF="http://www.json.org">JSON</a> value.
@@ -99,26 +125,27 @@ private:
* The type of the held value is represented by a #ValueType and
* can be obtained using type().
*
* values of an #objectValue or #arrayValue can be accessed using operator[]()
*methods.
* Non const methods will automatically create the a #nullValue element
* Values of an #objectValue or #arrayValue can be accessed using operator[]()
* methods.
* Non-const methods will automatically create a #nullValue element
* if it does not exist.
* The sequence of an #arrayValue will be automatically resize and initialized
* The sequence of an #arrayValue will be automatically resized and initialized
* with #nullValue. resize() can be used to enlarge or truncate an #arrayValue.
*
* The get() methods can be used to obtanis default value in the case the
*required element
* does not exist.
* The get() methods can be used to obtain default value in the case the
* required element does not exist.
*
* It is possible to iterate over the list of a #objectValue values using
* the getMemberNames() method.
*
* \note #Value string-lengths fit in size_t, but keys must be < 2^30.
* (The reason is an implementation detail.) A #CharReader will raise an
* exception if a bound is exceeded to avoid security holes in your app,
* but the Value API does *not* check bounds. That is the responsibility
* of the caller.
*/
class JSON_API Value {
friend class ValueIteratorBase;
#ifdef JSON_VALUE_USE_INTERNAL_MAP
friend class ValueInternalLink;
friend class ValueInternalMap;
#endif
public:
typedef std::vector<std::string> Members;
typedef ValueIterator iterator;
@@ -133,7 +160,8 @@ public:
typedef Json::LargestUInt LargestUInt;
typedef Json::ArrayIndex ArrayIndex;
static const Value& null;
static const Value& null; ///< We regret this reference to a global instance; prefer the simpler Value().
static const Value& nullRef; ///< just a kludge for binary-compatibility; same as null
/// Minimum signed integer value that can be stored in a Json::Value.
static const LargestInt minLargestInt;
/// Maximum signed integer value that can be stored in a Json::Value.
@@ -159,7 +187,6 @@ public:
private:
#ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
#ifndef JSON_VALUE_USE_INTERNAL_MAP
class CZString {
public:
enum DuplicationPolicy {
@@ -168,20 +195,31 @@ private:
duplicateOnCopy
};
CZString(ArrayIndex index);
CZString(const char* cstr, DuplicationPolicy allocate);
CZString(const CZString& other);
CZString(char const* str, unsigned length, DuplicationPolicy allocate);
CZString(CZString const& other);
~CZString();
CZString& operator=(CZString other);
bool operator<(const CZString& other) const;
bool operator==(const CZString& other) const;
bool operator<(CZString const& other) const;
bool operator==(CZString const& other) const;
ArrayIndex index() const;
const char* c_str() const;
//const char* c_str() const; ///< \deprecated
char const* data() const;
unsigned length() const;
bool isStaticString() const;
private:
void swap(CZString& other);
const char* cstr_;
ArrayIndex index_;
struct StringStorage {
DuplicationPolicy policy_: 2;
unsigned length_: 30; // 1GB max
};
char const* cstr_; // actually, a prefixed string, unless policy is noDup
union {
ArrayIndex index_;
StringStorage storage_;
};
};
public:
@@ -190,7 +228,6 @@ public:
#else
typedef CppTL::SmallMap<CZString, Value> ObjectValues;
#endif // ifndef JSON_USE_CPPTL_SMALLMAP
#endif // ifndef JSON_VALUE_USE_INTERNAL_MAP
#endif // ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
public:
@@ -217,47 +254,59 @@ Json::Value obj_value(Json::objectValue); // {}
Value(UInt64 value);
#endif // if defined(JSON_HAS_INT64)
Value(double value);
Value(const char* value);
Value(const char* beginValue, const char* endValue);
Value(const char* value); ///< Copy until first 0. (NULL causes a seg-fault.)
Value(const char* beginValue, const char* endValue); ///< Copy all, incl zeroes.
/** \brief Constructs a value from a static string.
* Like other value string constructor but do not duplicate the string for
* internal storage. The given string must remain alive after the call to this
* constructor.
* \note This works only for null-terminated strings. (We cannot change the
* size of this class, so we have nowhere to store the length,
* which might be computed later for various operations.)
*
* Example of usage:
* \code
* Json::Value aValue( StaticString("some text") );
* static StaticString foo("some text");
* Json::Value aValue(foo);
* \endcode
*/
Value(const StaticString& value);
Value(const std::string& value);
Value(const std::string& value); ///< Copy data() til size(). Embedded zeroes too.
#ifdef JSON_USE_CPPTL
Value(const CppTL::ConstString& value);
#endif
Value(bool value);
/// Deep copy.
Value(const Value& other);
~Value();
/// Deep copy, then swap(other).
/// \note Over-write existing comments. To preserve comments, use #swapPayload().
Value& operator=(Value other);
/// Swap values.
/// \note Currently, comments are intentionally not swapped, for
/// both logic and efficiency.
/// Swap everything.
void swap(Value& other);
/// Swap values but leave comments and source offsets in place.
void swapPayload(Value& other);
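A short sketch of the difference (illustrative only): swap() exchanges everything, while swapPayload() leaves comments and source offsets in place.

Json::Value a, b;
a.swap(b);         // values, comments, and source offsets all exchange
a.swapPayload(b);  // only the values exchange; comments and offsets stay put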
ValueType type() const;
/// Compare payload only, not comments etc.
bool operator<(const Value& other) const;
bool operator<=(const Value& other) const;
bool operator>=(const Value& other) const;
bool operator>(const Value& other) const;
bool operator==(const Value& other) const;
bool operator!=(const Value& other) const;
int compare(const Value& other) const;
const char* asCString() const;
std::string asString() const;
const char* asCString() const; ///< Embedded zeroes could cause you trouble!
std::string asString() const; ///< Embedded zeroes are possible.
/** Get raw char* of string-value.
* \return false if !string. (Seg-fault if str or end are NULL.)
*/
bool getString(
char const** str, char const** end) const;
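A hedged sketch of getString() with an embedded zero; the byte values are illustrative:

char const raw[] = {'a', 'b', '\0', 'c'};
Json::Value v(raw, raw + sizeof(raw));  // 4 bytes, embedded zero preserved
char const* begin;
char const* end;
if (v.getString(&begin, &end)) {
  std::string s(begin, end);            // length 4; asCString() would stop at the zero
}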
#ifdef JSON_USE_CPPTL
CppTL::ConstString asConstString() const;
#endif
@@ -348,19 +397,23 @@ Json::Value obj_value(Json::objectValue); // {}
Value& append(const Value& value);
/// Access an object value by name, create a null member if it does not exist.
/// \note Because of our implementation, keys are limited to 2^30 - 1 chars.
/// Exceeding that will cause an exception.
Value& operator[](const char* key);
/// Access an object value by name, returns null if there is no member with
/// that name.
const Value& operator[](const char* key) const;
/// Access an object value by name, create a null member if it does not exist.
/// \param key may contain embedded nulls.
Value& operator[](const std::string& key);
/// Access an object value by name, returns null if there is no member with
/// that name.
/// \param key may contain embedded nulls.
const Value& operator[](const std::string& key) const;
/** \brief Access an object value by name, create a null member if it does not
exist.
* If the object as no entry for that name, then the member name used to store
* If the object has no entry for that name, then the member name used to store
* the new entry is not duplicated.
* Example of use:
* \code
@@ -378,27 +431,69 @@ Json::Value obj_value(Json::objectValue); // {}
const Value& operator[](const CppTL::ConstString& key) const;
#endif
/// Return the member named key if it exists, defaultValue otherwise.
/// \note deep copy
Value get(const char* key, const Value& defaultValue) const;
/// Return the member named key if it exists, defaultValue otherwise.
/// \note deep copy
/// \param key may contain embedded nulls.
Value get(const char* key, const char* end, const Value& defaultValue) const;
/// Return the member named key if it exists, defaultValue otherwise.
/// \note deep copy
/// \param key may contain embedded nulls.
Value get(const std::string& key, const Value& defaultValue) const;
#ifdef JSON_USE_CPPTL
/// Return the member named key if it exists, defaultValue otherwise.
/// \note deep copy
Value get(const CppTL::ConstString& key, const Value& defaultValue) const;
#endif
/// Most general and efficient version of isMember() const, get() const,
/// and operator[]() const.
/// \note As stated elsewhere, behavior is undefined if (end-key) >= 2^30
Value const* find(char const* key, char const* end) const;
/// Most general and efficient version of object-mutators.
/// \note As stated elsewhere, behavior is undefined if (end-key) >= 2^30
/// \return non-zero, but JSON_ASSERT if this is neither object nor nullValue.
Value const* demand(char const* key, char const* end);
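A sketch of find() with an explicit key range, assuming a Json::Value named root:

std::string const key("alpha");  // hypothetical key; may contain embedded zeroes
if (Json::Value const* found = root.find(key.data(), key.data() + key.length())) {
  // the member exists; *found is its value
}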
/// \brief Remove and return the named member.
///
/// Does nothing if the member did not exist.
/// \return the removed Value, or null.
/// \pre type() is objectValue or nullValue
/// \post type() is unchanged
/// \deprecated
Value removeMember(const char* key);
/// Same as removeMember(const char*)
/// \param key may contain embedded nulls.
/// \deprecated
Value removeMember(const std::string& key);
/// Same as removeMember(const char* key, const char* end, Value* removed),
/// but 'key' is null-terminated.
bool removeMember(const char* key, Value* removed);
/** \brief Remove the named map member.
Update 'removed' iff removed.
\param key may contain embedded nulls.
\return true iff removed (no exceptions)
*/
bool removeMember(std::string const& key, Value* removed);
/// Same as removeMember(std::string const& key, Value* removed)
bool removeMember(const char* key, const char* end, Value* removed);
/** \brief Remove the indexed array element.
This is an expensive O(n) operation.
Update 'removed' iff removed.
\return true iff removed (no exceptions)
*/
bool removeIndex(ArrayIndex i, Value* removed);
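A sketch of the newer removal API, assuming Json::Value objects obj and arr; it reports success via the return value instead of asserting:

Json::Value removed;
if (obj.removeMember("name", &removed)) {
  // 'removed' now holds the old value; a missing key is not an error
}
if (arr.removeIndex(0, &removed)) {
  // remaining elements shift down; O(n), as noted above
}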
/// Return true if the object has a member named key.
/// \note 'key' must be null-terminated.
bool isMember(const char* key) const;
/// Return true if the object has a member named key.
/// \param key may contain embedded nulls.
bool isMember(const std::string& key) const;
/// Same as isMember(std::string const& key)const
bool isMember(const char* key, const char* end) const;
#ifdef JSON_USE_CPPTL
/// Return true if the object has a member named key.
bool isMember(const CppTL::ConstString& key) const;
@@ -416,9 +511,11 @@ Json::Value obj_value(Json::objectValue); // {}
// EnumValues enumValues() const;
//# endif
/// Comments must be //... or /* ... */
/// \deprecated Always pass len.
void setComment(const char* comment, CommentPlacement placement);
/// Comments must be //... or /* ... */
void setComment(const char* comment, size_t len, CommentPlacement placement);
/// Comments must be //... or /* ... */
void setComment(const std::string& comment, CommentPlacement placement);
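A sketch of attaching a comment, assuming the Json::commentBefore placement declared earlier in this header:

Json::Value v(42);
std::string const note("// the answer");
v.setComment(note, Json::commentBefore);                         // std::string overload
v.setComment(note.c_str(), note.length(), Json::commentBefore);  // explicit-length overload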
bool hasComment(CommentPlacement placement) const;
/// Include delimiters and embedded newlines.
@@ -442,26 +539,14 @@ Json::Value obj_value(Json::objectValue); // {}
private:
void initBasic(ValueType type, bool allocated = false);
Value& resolveReference(const char* key, bool isStatic);
Value& resolveReference(const char* key);
Value& resolveReference(const char* key, const char* end);
#ifdef JSON_VALUE_USE_INTERNAL_MAP
inline bool isItemAvailable() const { return itemIsUsed_ == 0; }
inline void setItemUsed(bool isUsed = true) { itemIsUsed_ = isUsed ? 1 : 0; }
inline bool isMemberNameStatic() const { return memberNameIsStatic_ == 0; }
inline void setMemberNameIsStatic(bool isStatic) {
memberNameIsStatic_ = isStatic ? 1 : 0;
}
#endif // # ifdef JSON_VALUE_USE_INTERNAL_MAP
private:
struct CommentInfo {
CommentInfo();
~CommentInfo();
void setComment(const char* text);
void setComment(const char* text, size_t len);
char* comment_;
};
@@ -480,20 +565,12 @@ private:
LargestUInt uint_;
double real_;
bool bool_;
char* string_;
#ifdef JSON_VALUE_USE_INTERNAL_MAP
ValueInternalArray* array_;
ValueInternalMap* map_;
#else
char* string_; // actually ptr to unsigned, followed by str, unless !allocated_
ObjectValues* map_;
#endif
} value_;
ValueType type_ : 8;
int allocated_ : 1; // Notes: if declared as bool, bitfield is useless.
#ifdef JSON_VALUE_USE_INTERNAL_MAP
unsigned int itemIsUsed_ : 1; // used by the ValueInternalMap container.
int memberNameIsStatic_ : 1; // used by the ValueInternalMap container.
#endif
unsigned int allocated_ : 1; // Notes: if declared as bool, bitfield is useless.
// If not allocated_, string_ must be null-terminated.
CommentInfo* comments_;
// [start, limit) byte offsets in the source JSON text from which this Value
@@ -565,345 +642,6 @@ private:
Args args_;
};
#ifdef JSON_VALUE_USE_INTERNAL_MAP
/** \brief Allocator to customize Value internal map.
* Below is an example of a simple implementation (the default implementation
* actually uses a memory pool for speed).
* \code
class DefaultValueMapAllocator : public ValueMapAllocator
{
public: // overridden from ValueMapAllocator
virtual ValueInternalMap *newMap()
{
return new ValueInternalMap();
}
virtual ValueInternalMap *newMapCopy( const ValueInternalMap &other )
{
return new ValueInternalMap( other );
}
virtual void destructMap( ValueInternalMap *map )
{
delete map;
}
virtual ValueInternalLink *allocateMapBuckets( unsigned int size )
{
return new ValueInternalLink[size];
}
virtual void releaseMapBuckets( ValueInternalLink *links )
{
delete [] links;
}
virtual ValueInternalLink *allocateMapLink()
{
return new ValueInternalLink();
}
virtual void releaseMapLink( ValueInternalLink *link )
{
delete link;
}
};
* \endcode
*/
class JSON_API ValueMapAllocator {
public:
virtual ~ValueMapAllocator();
virtual ValueInternalMap* newMap() = 0;
virtual ValueInternalMap* newMapCopy(const ValueInternalMap& other) = 0;
virtual void destructMap(ValueInternalMap* map) = 0;
virtual ValueInternalLink* allocateMapBuckets(unsigned int size) = 0;
virtual void releaseMapBuckets(ValueInternalLink* links) = 0;
virtual ValueInternalLink* allocateMapLink() = 0;
virtual void releaseMapLink(ValueInternalLink* link) = 0;
};
/** \brief ValueInternalMap hash-map bucket chain link (for internal use only).
* \internal previous_ & next_ allows for bidirectional traversal.
*/
class JSON_API ValueInternalLink {
public:
enum {
itemPerLink = 6
}; // sizeof(ValueInternalLink) = 128 on 32-bit architectures.
enum InternalFlags {
flagAvailable = 0,
flagUsed = 1
};
ValueInternalLink();
~ValueInternalLink();
Value items_[itemPerLink];
char* keys_[itemPerLink];
ValueInternalLink* previous_;
ValueInternalLink* next_;
};
/** \brief A linked-page-based hash-table implementation used internally by
* Value.
* \internal ValueInternalMap is a traditional bucket-based hash-table, with a
* linked list in each bucket to handle collisions. There is an additional
* twist: each node of the collision linked list is a page containing a fixed
* number of values. This provides a better compromise between memory usage
* and speed.
*
* Each bucket is made up of a chained list of ValueInternalLink. The last
* link of a given bucket can be found in the 'previous_' field of the
* following bucket. The last link of the last bucket is stored in tailLink_,
* as it has no following bucket. Only the last link of a bucket may contain
* 'available' items. The last link always contains at least one element,
* unless it is the bucket's very first link.
*/
class JSON_API ValueInternalMap {
friend class ValueIteratorBase;
friend class Value;
public:
typedef unsigned int HashKey;
typedef unsigned int BucketIndex;
#ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
struct IteratorState {
IteratorState() : map_(0), link_(0), itemIndex_(0), bucketIndex_(0) {}
ValueInternalMap* map_;
ValueInternalLink* link_;
BucketIndex itemIndex_;
BucketIndex bucketIndex_;
};
#endif // ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
ValueInternalMap();
ValueInternalMap(const ValueInternalMap& other);
ValueInternalMap& operator=(ValueInternalMap other);
~ValueInternalMap();
void swap(ValueInternalMap& other);
BucketIndex size() const;
void clear();
bool reserveDelta(BucketIndex growth);
bool reserve(BucketIndex newItemCount);
const Value* find(const char* key) const;
Value* find(const char* key);
Value& resolveReference(const char* key, bool isStatic);
void remove(const char* key);
void doActualRemove(ValueInternalLink* link,
BucketIndex index,
BucketIndex bucketIndex);
ValueInternalLink*& getLastLinkInBucket(BucketIndex bucketIndex);
Value& setNewItem(const char* key,
bool isStatic,
ValueInternalLink* link,
BucketIndex index);
Value& unsafeAdd(const char* key, bool isStatic, HashKey hashedKey);
HashKey hash(const char* key) const;
int compare(const ValueInternalMap& other) const;
private:
void makeBeginIterator(IteratorState& it) const;
void makeEndIterator(IteratorState& it) const;
static bool equals(const IteratorState& x, const IteratorState& other);
static void increment(IteratorState& iterator);
static void incrementBucket(IteratorState& iterator);
static void decrement(IteratorState& iterator);
static const char* key(const IteratorState& iterator);
static const char* key(const IteratorState& iterator, bool& isStatic);
static Value& value(const IteratorState& iterator);
static int distance(const IteratorState& x, const IteratorState& y);
private:
ValueInternalLink* buckets_;
ValueInternalLink* tailLink_;
BucketIndex bucketsSize_;
BucketIndex itemCount_;
};
/** \brief A simplified deque implementation used internally by Value.
* \internal
* It is based on a list of fixed-size "pages"; each page contains a fixed
* number of items.
* Instead of a linked list, an array of pointers is used for fast item
* look-up.
* Look-up of an element works as follows:
* - compute the page index: pageIndex = itemIndex / itemsPerPage
* - look up the item in that page: pages_[pageIndex][itemIndex % itemsPerPage]
*
* Insertion is amortized constant time (only the array containing the page
* pointers needs to be reallocated when items are appended).
*/
class JSON_API ValueInternalArray {
friend class Value;
friend class ValueIteratorBase;
public:
enum {
itemsPerPage = 8
}; // should be a power of 2 for fast divide and modulo.
typedef Value::ArrayIndex ArrayIndex;
typedef unsigned int PageIndex;
#ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
struct IteratorState // Must be a POD
{
IteratorState() : array_(0), currentPageIndex_(0), currentItemIndex_(0) {}
ValueInternalArray* array_;
Value** currentPageIndex_;
unsigned int currentItemIndex_;
};
#endif // ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
ValueInternalArray();
ValueInternalArray(const ValueInternalArray& other);
ValueInternalArray& operator=(ValueInternalArray other);
~ValueInternalArray();
void swap(ValueInternalArray& other);
void clear();
void resize(ArrayIndex newSize);
Value& resolveReference(ArrayIndex index);
Value* find(ArrayIndex index) const;
ArrayIndex size() const;
int compare(const ValueInternalArray& other) const;
private:
static bool equals(const IteratorState& x, const IteratorState& other);
static void increment(IteratorState& iterator);
static void decrement(IteratorState& iterator);
static Value& dereference(const IteratorState& iterator);
static Value& unsafeDereference(const IteratorState& iterator);
static int distance(const IteratorState& x, const IteratorState& y);
static ArrayIndex indexOf(const IteratorState& iterator);
void makeBeginIterator(IteratorState& it) const;
void makeEndIterator(IteratorState& it) const;
void makeIterator(IteratorState& it, ArrayIndex index) const;
void makeIndexValid(ArrayIndex index);
Value** pages_;
ArrayIndex size_;
PageIndex pageCount_;
};
/** \brief Experimental: do not use. Allocator to customize Value internal
* array.
* Below is an example of a simple implementation (the actual implementation
* uses a memory pool).
\code
class DefaultValueArrayAllocator : public ValueArrayAllocator
{
public: // overridden from ValueArrayAllocator
virtual ~DefaultValueArrayAllocator()
{
}
virtual ValueInternalArray *newArray()
{
return new ValueInternalArray();
}
virtual ValueInternalArray *newArrayCopy( const ValueInternalArray &other )
{
return new ValueInternalArray( other );
}
virtual void destruct( ValueInternalArray *array )
{
delete array;
}
virtual void reallocateArrayPageIndex( Value **&indexes,
ValueInternalArray::PageIndex
&indexCount,
ValueInternalArray::PageIndex
minNewIndexCount )
{
ValueInternalArray::PageIndex newIndexCount = (indexCount*3)/2 + 1;
if ( minNewIndexCount > newIndexCount )
newIndexCount = minNewIndexCount;
void *newIndexes = realloc( indexes, sizeof(Value*) * newIndexCount );
if ( !newIndexes )
throw std::bad_alloc();
indexCount = newIndexCount;
indexes = static_cast<Value **>( newIndexes );
}
virtual void releaseArrayPageIndex( Value **indexes,
ValueInternalArray::PageIndex indexCount )
{
if ( indexes )
free( indexes );
}
virtual Value *allocateArrayPage()
{
return static_cast<Value *>( malloc( sizeof(Value) *
ValueInternalArray::itemsPerPage ) );
}
virtual void releaseArrayPage( Value *value )
{
if ( value )
free( value );
}
};
\endcode
*/
class JSON_API ValueArrayAllocator {
public:
virtual ~ValueArrayAllocator();
virtual ValueInternalArray* newArray() = 0;
virtual ValueInternalArray* newArrayCopy(const ValueInternalArray& other) = 0;
virtual void destructArray(ValueInternalArray* array) = 0;
/** \brief Reallocate array page index.
* Reallocates the array of pointers to the pages.
* \param indexes [input] pointer to the current index. May be \c NULL.
*                [output] pointer to the new index, with room for at least
*                \a minNewIndexCount pages.
* \param indexCount [input] current number of pages in the index.
*                   [output] number of pages the reallocated index can handle.
*                   \b MUST be >= \a minNewIndexCount.
* \param minNewIndexCount Minimum number of pages the new index must be able
*                         to handle.
*/
virtual void
reallocateArrayPageIndex(Value**& indexes,
ValueInternalArray::PageIndex& indexCount,
ValueInternalArray::PageIndex minNewIndexCount) = 0;
virtual void
releaseArrayPageIndex(Value** indexes,
ValueInternalArray::PageIndex indexCount) = 0;
virtual Value* allocateArrayPage() = 0;
virtual void releaseArrayPage(Value* value) = 0;
};
#endif // #ifdef JSON_VALUE_USE_INTERNAL_MAP
/** \brief base class for Value iterators.
*
*/
@@ -915,31 +653,37 @@ public:
typedef ValueIteratorBase SelfType;
ValueIteratorBase();
#ifndef JSON_VALUE_USE_INTERNAL_MAP
explicit ValueIteratorBase(const Value::ObjectValues::iterator& current);
#else
ValueIteratorBase(const ValueInternalArray::IteratorState& state);
ValueIteratorBase(const ValueInternalMap::IteratorState& state);
#endif
bool operator==(const SelfType& other) const { return isEqual(other); }
bool operator!=(const SelfType& other) const { return !isEqual(other); }
difference_type operator-(const SelfType& other) const {
return computeDistance(other);
return other.computeDistance(*this);
}
/// Return either the index or the member name of the referenced value as a
/// Value.
Value key() const;
/// Return the index of the referenced Value. -1 if it is not an arrayValue.
/// Return the index of the referenced Value, or -1 if it is not an arrayValue.
UInt index() const;
/// Return the member name of the referenced Value, or "" if it is not an
/// objectValue.
/// \note Avoid `c_str()` on result, as embedded zeroes are possible.
std::string name() const;
/// Return the member name of the referenced Value. "" if it is not an
/// objectValue.
const char* memberName() const;
/// \deprecated This cannot be used for UTF-8 strings, since there can be embedded nulls.
JSONCPP_DEPRECATED("Use `key = name();` instead.")
char const* memberName() const;
/// Return the member name of the referenced Value, or NULL if it is not an
/// objectValue.
/// \note Better version than memberName(). Allows embedded nulls.
char const* memberName(char const** end) const;
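A sketch of iterating an object while staying safe with embedded zeroes, assuming a Json::Value named root:

for (Json::Value::const_iterator it = root.begin(); it != root.end(); ++it) {
  std::string const key = it.name();  // preferred; embedded zeroes survive
  Json::Value const& val = *it;       // the member's value
}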
protected:
Value& deref() const;
@@ -955,17 +699,9 @@ protected:
void copy(const SelfType& other);
private:
#ifndef JSON_VALUE_USE_INTERNAL_MAP
Value::ObjectValues::iterator current_;
// Indicates that iterator is for a null value.
bool isNull_;
#else
union {
ValueInternalArray::IteratorState array_;
ValueInternalMap::IteratorState map_;
} iterator_;
bool isArray_;
#endif
};
/** \brief const iterator for object and array value.
@@ -976,8 +712,8 @@ class JSON_API ValueConstIterator : public ValueIteratorBase {
public:
typedef const Value value_type;
typedef unsigned int size_t;
typedef int difference_type;
//typedef unsigned int size_t;
//typedef int difference_type;
typedef const Value& reference;
typedef const Value* pointer;
typedef ValueConstIterator SelfType;
@@ -987,12 +723,7 @@ public:
private:
/*! \internal Use by Value to create an iterator.
*/
#ifndef JSON_VALUE_USE_INTERNAL_MAP
explicit ValueConstIterator(const Value::ObjectValues::iterator& current);
#else
ValueConstIterator(const ValueInternalArray::IteratorState& state);
ValueConstIterator(const ValueInternalMap::IteratorState& state);
#endif
public:
SelfType& operator=(const ValueIteratorBase& other);
@@ -1043,12 +774,7 @@ public:
private:
/*! \internal Use by Value to create an iterator.
*/
#ifndef JSON_VALUE_USE_INTERNAL_MAP
explicit ValueIterator(const Value::ObjectValues::iterator& current);
#else
ValueIterator(const ValueInternalArray::IteratorState& state);
ValueIterator(const ValueInternalMap::IteratorState& state);
#endif
public:
SelfType& operator=(const SelfType& other);
@@ -1081,6 +807,14 @@ public:
} // namespace Json
namespace std {
/// Specialize std::swap() for Json::Value.
template<>
inline void swap(Json::Value& a, Json::Value& b) { a.swap(b); }
}
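A one-line sketch of why the specialization matters; generic code calling std::swap now avoids a deep copy:

Json::Value a(1), b("two");
std::swap(a, b);  // resolves to Json::Value::swap(); constant time, no deep copy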
#if defined(JSONCPP_DISABLE_DLL_INTERFACE_WARNING)
#pragma warning(pop)
#endif // if defined(JSONCPP_DISABLE_DLL_INTERFACE_WARNING)

View File

@@ -4,10 +4,10 @@
#ifndef JSON_VERSION_H_INCLUDED
# define JSON_VERSION_H_INCLUDED
# define JSONCPP_VERSION_STRING "1.1.0"
# define JSONCPP_VERSION_STRING "1.6.1"
# define JSONCPP_VERSION_MAJOR 1
# define JSONCPP_VERSION_MINOR 1
# define JSONCPP_VERSION_PATCH 0
# define JSONCPP_VERSION_MINOR 6
# define JSONCPP_VERSION_PATCH 1
# define JSONCPP_VERSION_QUALIFIER
# define JSONCPP_VERSION_HEXA ((JSONCPP_VERSION_MAJOR << 24) | (JSONCPP_VERSION_MINOR << 16) | (JSONCPP_VERSION_PATCH << 8))
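A plausible compile-time guard built from the packed value (for 1.6.1 the macro expands to 0x01060100); the threshold here is illustrative:

#if JSONCPP_VERSION_HEXA >= ((1 << 24) | (6 << 16) | (0 << 8))
// code that relies on 1.6+ APIs
#endif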

View File

@@ -11,6 +11,7 @@
#endif // if !defined(JSON_IS_AMALGAMATION)
#include <vector>
#include <string>
#include <ostream>
// Disable warning C4251: <data member>: <type> needs to have dll-interface to
// be used by...
@@ -23,7 +24,115 @@ namespace Json {
class Value;
/**
Usage:
\code
using namespace Json;
void writeToStdout(StreamWriter::Factory const& factory, Value const& value) {
std::unique_ptr<StreamWriter> const writer(
factory.newStreamWriter());
writer->write(value, &std::cout);
std::cout << std::endl; // add lf and flush
}
\endcode
*/
class JSON_API StreamWriter {
protected:
std::ostream* sout_; // not owned; will not delete
public:
StreamWriter();
virtual ~StreamWriter();
/** Write Value into document as configured in sub-class.
Does not take ownership of sout, but maintains a reference for the duration of the call.
\pre sout != NULL
\return zero on success (For now, we always return zero, so check the stream instead.)
\throw std::exception possibly, depending on configuration
*/
virtual int write(Value const& root, std::ostream* sout) = 0;
/** \brief A simple abstract factory.
*/
class JSON_API Factory {
public:
virtual ~Factory();
/** \brief Allocate a StreamWriter via operator new().
* \throw std::exception if something goes wrong (e.g. invalid settings)
*/
virtual StreamWriter* newStreamWriter() const = 0;
}; // Factory
}; // StreamWriter
/** \brief Write into stringstream, then return string, for convenience.
* A StreamWriter will be created from the factory, used, and then deleted.
*/
std::string JSON_API writeString(StreamWriter::Factory const& factory, Value const& root);
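A sketch of the convenience wrapper, assuming a Json::Value named root:

Json::StreamWriterBuilder builder;
builder["indentation"] = "\t";
std::string const document = Json::writeString(builder, root);  // create, write, delete in one call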
/** \brief Build a StreamWriter implementation.
Usage:
\code
using namespace Json;
Value value = ...;
StreamWriterBuilder builder;
builder["commentStyle"] = "None";
builder["indentation"] = " "; // or whatever you like
std::unique_ptr<Json::StreamWriter> writer(
builder.newStreamWriter());
writer->write(value, &std::cout);
std::cout << std::endl; // add lf and flush
\endcode
*/
class JSON_API StreamWriterBuilder : public StreamWriter::Factory {
public:
// Note: We use a Json::Value so that we can add data-members to this class
// without a major version bump.
/** Configuration of this builder.
Available settings (case-sensitive):
- "commentStyle": "None" or "All"
- "indentation": "<anything>"
- "enableYAMLCompatibility": false or true
- slightly change the whitespace around colons
- "dropNullPlaceholders": false or true
- Drop the "null" string from the writer's output for nullValues.
Strictly speaking, this is not valid JSON. But when the output is being
fed to a browser's JavaScript, it makes for smaller output and the
browser can handle the output just fine.
You can examine the settings_ member yourself
to see the defaults. You can also write and read them just like any
JSON Value.
\sa setDefaults()
*/
Json::Value settings_;
StreamWriterBuilder();
virtual ~StreamWriterBuilder();
/**
* \throw std::exception if something goes wrong (e.g. invalid settings)
*/
virtual StreamWriter* newStreamWriter() const;
/** \return true if 'settings' are legal and consistent;
* otherwise, indicate bad settings via 'invalid'.
*/
bool validate(Json::Value* invalid) const;
/** A simple way to update a specific setting.
*/
Value& operator[](std::string key);
/** Called by ctor, but you can use this to reset settings_.
* \pre 'settings' != NULL (but Json::null is fine)
* \remark Defaults:
* \snippet src/lib_json/json_writer.cpp StreamWriterBuilderDefaults
*/
static void setDefaults(Json::Value* settings);
};
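A sketch of catching a bad setting up front; the illegal value is deliberate and illustrative:

Json::StreamWriterBuilder b;
b["commentStyle"] = "Some";  // not a legal value ("None" or "All")
Json::Value invalid;
if (!b.validate(&invalid)) {
  // 'invalid' reports the offending settings; newStreamWriter() would throw
}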
/** \brief Abstract class for writers.
* \deprecated Use StreamWriter. (And really, this is an implementation detail.)
*/
class JSON_API Writer {
public:
@@ -39,8 +148,10 @@ public:
*consumption,
* but may be useful to support features such as RPC where bandwidth is limited.
* \sa Reader, Value
* \deprecated Use StreamWriterBuilder.
*/
class JSON_API FastWriter : public Writer {
public:
FastWriter();
virtual ~FastWriter() {}
@@ -90,6 +201,7 @@ private:
*#CommentPlacement.
*
* \sa Reader, Value, Value::setComment()
* \deprecated Use StreamWriterBuilder.
*/
class JSON_API StyledWriter : public Writer {
public:
@@ -151,6 +263,7 @@ private:
*
* \param indentation Each level will be indented by this amount extra.
* \sa Reader, Value, Value::setComment()
* \deprecated Use StreamWriterBuilder.
*/
class JSON_API StyledStreamWriter {
public:
@@ -187,7 +300,8 @@ private:
std::string indentString_;
int rightMargin_;
std::string indentation_;
bool addChildValues_;
bool addChildValues_ : 1;
bool indented_ : 1;
};
#if defined(JSON_HAS_INT64)

View File

@@ -178,15 +178,6 @@
<File
RelativePath="..\..\include\json\json.h">
</File>
<File
RelativePath="..\..\src\lib_json\json_batchallocator.h">
</File>
<File
RelativePath="..\..\src\lib_json\json_internalarray.inl">
</File>
<File
RelativePath="..\..\src\lib_json\json_internalmap.inl">
</File>
<File
RelativePath="..\..\src\lib_json\json_reader.cpp">
</File>

View File

@@ -34,57 +34,57 @@ SVN_TAG_ROOT = SVN_ROOT + 'tags/jsoncpp'
SCONS_LOCAL_URL = 'http://sourceforge.net/projects/scons/files/scons-local/1.2.0/scons-local-1.2.0.tar.gz/download'
SOURCEFORGE_PROJECT = 'jsoncpp'
def set_version( version ):
def set_version(version):
with open('version','wb') as f:
f.write( version.strip() )
f.write(version.strip())
def rmdir_if_exist( dir_path ):
if os.path.isdir( dir_path ):
shutil.rmtree( dir_path )
def rmdir_if_exist(dir_path):
if os.path.isdir(dir_path):
shutil.rmtree(dir_path)
class SVNError(Exception):
pass
def svn_command( command, *args ):
def svn_command(command, *args):
cmd = ['svn', '--non-interactive', command] + list(args)
print('Running:', ' '.join( cmd ))
process = subprocess.Popen( cmd,
print('Running:', ' '.join(cmd))
process = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT )
stderr=subprocess.STDOUT)
stdout = process.communicate()[0]
if process.returncode:
error = SVNError( 'SVN command failed:\n' + stdout )
error = SVNError('SVN command failed:\n' + stdout)
error.returncode = process.returncode
raise error
return stdout
def check_no_pending_commit():
"""Checks that there is no pending commit in the sandbox."""
stdout = svn_command( 'status', '--xml' )
etree = ElementTree.fromstring( stdout )
stdout = svn_command('status', '--xml')
etree = ElementTree.fromstring(stdout)
msg = []
for entry in etree.getiterator( 'entry' ):
for entry in etree.getiterator('entry'):
path = entry.get('path')
status = entry.find('wc-status').get('item')
if status != 'unversioned' and path != 'version':
msg.append( 'File "%s" has pending change (status="%s")' % (path, status) )
msg.append('File "%s" has pending change (status="%s")' % (path, status))
if msg:
msg.insert(0, 'Pending change to commit found in sandbox. Commit them first!' )
return '\n'.join( msg )
msg.insert(0, 'Pending changes to commit found in sandbox. Commit them first!')
return '\n'.join(msg)
def svn_join_url( base_url, suffix ):
def svn_join_url(base_url, suffix):
if not base_url.endswith('/'):
base_url += '/'
if suffix.startswith('/'):
suffix = suffix[1:]
return base_url + suffix
def svn_check_if_tag_exist( tag_url ):
def svn_check_if_tag_exist(tag_url):
"""Checks if a tag exist.
Returns: True if the tag exist, False otherwise.
"""
try:
list_stdout = svn_command( 'list', tag_url )
list_stdout = svn_command('list', tag_url)
except SVNError as e:
if e.returncode != 1 or not str(e).find('tag_url'):
raise e
@@ -92,82 +92,82 @@ def svn_check_if_tag_exist( tag_url ):
return False
return True
def svn_commit( message ):
def svn_commit(message):
"""Commit the sandbox, providing the specified comment.
"""
svn_command( 'ci', '-m', message )
svn_command('ci', '-m', message)
def svn_tag_sandbox( tag_url, message ):
def svn_tag_sandbox(tag_url, message):
"""Makes a tag based on the sandbox revisions.
"""
svn_command( 'copy', '-m', message, '.', tag_url )
svn_command('copy', '-m', message, '.', tag_url)
def svn_remove_tag( tag_url, message ):
def svn_remove_tag(tag_url, message):
"""Removes an existing tag.
"""
svn_command( 'delete', '-m', message, tag_url )
svn_command('delete', '-m', message, tag_url)
def svn_export( tag_url, export_dir ):
def svn_export(tag_url, export_dir):
"""Exports the tag_url revision to export_dir.
The target directory, including its parents, is created if it does not exist.
If the directory export_dir exists, it is deleted before the export proceeds.
"""
rmdir_if_exist( export_dir )
svn_command( 'export', tag_url, export_dir )
rmdir_if_exist(export_dir)
svn_command('export', tag_url, export_dir)
def fix_sources_eol( dist_dir ):
def fix_sources_eol(dist_dir):
"""Set file EOL for tarball distribution.
"""
print('Preparing exported source file EOL for distribution...')
prune_dirs = antglob.prune_dirs + 'scons-local* ./build* ./libs ./dist'
win_sources = antglob.glob( dist_dir,
win_sources = antglob.glob(dist_dir,
includes = '**/*.sln **/*.vcproj',
prune_dirs = prune_dirs )
unix_sources = antglob.glob( dist_dir,
prune_dirs = prune_dirs)
unix_sources = antglob.glob(dist_dir,
includes = '''**/*.h **/*.cpp **/*.inl **/*.txt **/*.dox **/*.py **/*.html **/*.in
sconscript *.json *.expected AUTHORS LICENSE''',
excludes = antglob.default_excludes + 'scons.py sconsign.py scons-*',
prune_dirs = prune_dirs )
prune_dirs = prune_dirs)
for path in win_sources:
fixeol.fix_source_eol( path, is_dry_run = False, verbose = True, eol = '\r\n' )
fixeol.fix_source_eol(path, is_dry_run = False, verbose = True, eol = '\r\n')
for path in unix_sources:
fixeol.fix_source_eol( path, is_dry_run = False, verbose = True, eol = '\n' )
fixeol.fix_source_eol(path, is_dry_run = False, verbose = True, eol = '\n')
def download( url, target_path ):
def download(url, target_path):
"""Download file represented by url to target_path.
"""
f = urllib2.urlopen( url )
f = urllib2.urlopen(url)
try:
data = f.read()
finally:
f.close()
fout = open( target_path, 'wb' )
fout = open(target_path, 'wb')
try:
fout.write( data )
fout.write(data)
finally:
fout.close()
def check_compile( distcheck_top_dir, platform ):
def check_compile(distcheck_top_dir, platform):
cmd = [sys.executable, 'scons.py', 'platform=%s' % platform, 'check']
print('Running:', ' '.join( cmd ))
log_path = os.path.join( distcheck_top_dir, 'build-%s.log' % platform )
flog = open( log_path, 'wb' )
print('Running:', ' '.join(cmd))
log_path = os.path.join(distcheck_top_dir, 'build-%s.log' % platform)
flog = open(log_path, 'wb')
try:
process = subprocess.Popen( cmd,
process = subprocess.Popen(cmd,
stdout=flog,
stderr=subprocess.STDOUT,
cwd=distcheck_top_dir )
cwd=distcheck_top_dir)
stdout = process.communicate()[0]
status = (process.returncode == 0)
finally:
flog.close()
return (status, log_path)
def write_tempfile( content, **kwargs ):
fd, path = tempfile.mkstemp( **kwargs )
f = os.fdopen( fd, 'wt' )
def write_tempfile(content, **kwargs):
fd, path = tempfile.mkstemp(**kwargs)
f = os.fdopen(fd, 'wt')
try:
f.write( content )
f.write(content)
finally:
f.close()
return path
@@ -175,34 +175,34 @@ def write_tempfile( content, **kwargs ):
class SFTPError(Exception):
pass
def run_sftp_batch( userhost, sftp, batch, retry=0 ):
path = write_tempfile( batch, suffix='.sftp', text=True )
def run_sftp_batch(userhost, sftp, batch, retry=0):
path = write_tempfile(batch, suffix='.sftp', text=True)
# psftp -agent -C blep,jsoncpp@web.sourceforge.net -batch -b batch.sftp -bc
cmd = [sftp, '-agent', '-C', '-batch', '-b', path, '-bc', userhost]
error = None
for retry_index in range(0, max(1,retry)):
heading = retry_index == 0 and 'Running:' or 'Retrying:'
print(heading, ' '.join( cmd ))
process = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT )
print(heading, ' '.join(cmd))
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout = process.communicate()[0]
if process.returncode != 0:
error = SFTPError( 'SFTP batch failed:\n' + stdout )
error = SFTPError('SFTP batch failed:\n' + stdout)
else:
break
if error:
raise error
return stdout
def sourceforge_web_synchro( sourceforge_project, doc_dir,
user=None, sftp='sftp' ):
def sourceforge_web_synchro(sourceforge_project, doc_dir,
user=None, sftp='sftp'):
"""Notes: does not synchronize sub-directory of doc-dir.
"""
userhost = '%s,%s@web.sourceforge.net' % (user, sourceforge_project)
stdout = run_sftp_batch( userhost, sftp, """
stdout = run_sftp_batch(userhost, sftp, """
cd htdocs
dir
exit
""" )
""")
existing_paths = set()
collect = 0
for line in stdout.split('\n'):
@@ -216,15 +216,15 @@ exit
elif collect == 2:
path = line.strip().split()[-1:]
if path and path[0] not in ('.', '..'):
existing_paths.add( path[0] )
upload_paths = set( [os.path.basename(p) for p in antglob.glob( doc_dir )] )
existing_paths.add(path[0])
upload_paths = set([os.path.basename(p) for p in antglob.glob(doc_dir)])
paths_to_remove = existing_paths - upload_paths
if paths_to_remove:
print('Removing the following file from web:')
print('\n'.join( paths_to_remove ))
stdout = run_sftp_batch( userhost, sftp, """cd htdocs
print('\n'.join(paths_to_remove))
stdout = run_sftp_batch(userhost, sftp, """cd htdocs
rm %s
exit""" % ' '.join(paths_to_remove) )
exit""" % ' '.join(paths_to_remove))
print('Uploading %d files:' % len(upload_paths))
batch_size = 10
upload_paths = list(upload_paths)
@@ -235,17 +235,17 @@ exit""" % ' '.join(paths_to_remove) )
remaining_files = len(upload_paths) - index
remaining_sec = file_per_sec * remaining_files
print('%d/%d, ETA=%.1fs' % (index+1, len(upload_paths), remaining_sec))
run_sftp_batch( userhost, sftp, """cd htdocs
run_sftp_batch(userhost, sftp, """cd htdocs
lcd %s
mput %s
exit""" % (doc_dir, ' '.join(paths) ), retry=3 )
exit""" % (doc_dir, ' '.join(paths)), retry=3)
def sourceforge_release_tarball( sourceforge_project, paths, user=None, sftp='sftp' ):
def sourceforge_release_tarball(sourceforge_project, paths, user=None, sftp='sftp'):
userhost = '%s,%s@frs.sourceforge.net' % (user, sourceforge_project)
run_sftp_batch( userhost, sftp, """
run_sftp_batch(userhost, sftp, """
mput %s
exit
""" % (' '.join(paths),) )
""" % (' '.join(paths),))
def main():
@@ -286,12 +286,12 @@ Warning: --force should only be used when developing/testing the release script
options, args = parser.parse_args()
if len(args) != 2:
parser.error( 'release_version missing on command-line.' )
parser.error('release_version missing on command-line.')
release_version = args[0]
next_version = args[1]
if not options.platforms and not options.no_test:
parser.error( 'You must specify either --platform or --no-test option.' )
parser.error('You must specify either --platform or --no-test option.')
if options.ignore_pending_commit:
msg = ''
@@ -299,86 +299,86 @@ Warning: --force should only be used when developing/testing the release script
msg = check_no_pending_commit()
if not msg:
print('Setting version to', release_version)
set_version( release_version )
svn_commit( 'Release ' + release_version )
tag_url = svn_join_url( SVN_TAG_ROOT, release_version )
if svn_check_if_tag_exist( tag_url ):
set_version(release_version)
svn_commit('Release ' + release_version)
tag_url = svn_join_url(SVN_TAG_ROOT, release_version)
if svn_check_if_tag_exist(tag_url):
if options.retag_release:
svn_remove_tag( tag_url, 'Overwriting previous tag' )
svn_remove_tag(tag_url, 'Overwriting previous tag')
else:
print('Aborting, tag %s already exists. Use --retag to overwrite it!' % tag_url)
sys.exit( 1 )
svn_tag_sandbox( tag_url, 'Release ' + release_version )
sys.exit(1)
svn_tag_sandbox(tag_url, 'Release ' + release_version)
print('Generated doxygen document...')
## doc_dirname = r'jsoncpp-api-html-0.5.0'
## doc_tarball_path = r'e:\prg\vc\Lib\jsoncpp-trunk\dist\jsoncpp-api-html-0.5.0.tar.gz'
doc_tarball_path, doc_dirname = doxybuild.build_doc( options, make_release=True )
doc_tarball_path, doc_dirname = doxybuild.build_doc(options, make_release=True)
doc_distcheck_dir = 'dist/doccheck'
tarball.decompress( doc_tarball_path, doc_distcheck_dir )
doc_distcheck_top_dir = os.path.join( doc_distcheck_dir, doc_dirname )
tarball.decompress(doc_tarball_path, doc_distcheck_dir)
doc_distcheck_top_dir = os.path.join(doc_distcheck_dir, doc_dirname)
export_dir = 'dist/export'
svn_export( tag_url, export_dir )
fix_sources_eol( export_dir )
svn_export(tag_url, export_dir)
fix_sources_eol(export_dir)
source_dir = 'jsoncpp-src-' + release_version
source_tarball_path = 'dist/%s.tar.gz' % source_dir
print('Generating source tarball to', source_tarball_path)
tarball.make_tarball( source_tarball_path, [export_dir], export_dir, prefix_dir=source_dir )
tarball.make_tarball(source_tarball_path, [export_dir], export_dir, prefix_dir=source_dir)
amalgamation_tarball_path = 'dist/%s-amalgamation.tar.gz' % source_dir
print('Generating amalgamation source tarball to', amalgamation_tarball_path)
amalgamation_dir = 'dist/amalgamation'
amalgamate.amalgamate_source( export_dir, '%s/jsoncpp.cpp' % amalgamation_dir, 'json/json.h' )
amalgamate.amalgamate_source(export_dir, '%s/jsoncpp.cpp' % amalgamation_dir, 'json/json.h')
amalgamation_source_dir = 'jsoncpp-src-amalgamation' + release_version
tarball.make_tarball( amalgamation_tarball_path, [amalgamation_dir],
amalgamation_dir, prefix_dir=amalgamation_source_dir )
tarball.make_tarball(amalgamation_tarball_path, [amalgamation_dir],
amalgamation_dir, prefix_dir=amalgamation_source_dir)
# Decompress source tarball, download and install scons-local
distcheck_dir = 'dist/distcheck'
distcheck_top_dir = distcheck_dir + '/' + source_dir
print('Decompressing source tarball to', distcheck_dir)
rmdir_if_exist( distcheck_dir )
tarball.decompress( source_tarball_path, distcheck_dir )
rmdir_if_exist(distcheck_dir)
tarball.decompress(source_tarball_path, distcheck_dir)
scons_local_path = 'dist/scons-local.tar.gz'
print('Downloading scons-local to', scons_local_path)
download( SCONS_LOCAL_URL, scons_local_path )
download(SCONS_LOCAL_URL, scons_local_path)
print('Decompressing scons-local to', distcheck_top_dir)
tarball.decompress( scons_local_path, distcheck_top_dir )
tarball.decompress(scons_local_path, distcheck_top_dir)
# Run compilation
print('Compiling decompressed tarball')
all_build_status = True
for platform in options.platforms.split(','):
print('Testing platform:', platform)
build_status, log_path = check_compile( distcheck_top_dir, platform )
build_status, log_path = check_compile(distcheck_top_dir, platform)
print('see build log:', log_path)
print(build_status and '=> ok' or '=> FAILED')
all_build_status = all_build_status and build_status
if not build_status:
print('Testing failed on at least one platform, aborting...')
svn_remove_tag( tag_url, 'Removing tag due to failed testing' )
svn_remove_tag(tag_url, 'Removing tag due to failed testing')
sys.exit(1)
if options.user:
if not options.no_web:
print('Uploading documentation using user', options.user)
sourceforge_web_synchro( SOURCEFORGE_PROJECT, doc_distcheck_top_dir, user=options.user, sftp=options.sftp )
sourceforge_web_synchro(SOURCEFORGE_PROJECT, doc_distcheck_top_dir, user=options.user, sftp=options.sftp)
print('Completed documentation upload')
print('Uploading source and documentation tarballs for release using user', options.user)
sourceforge_release_tarball( SOURCEFORGE_PROJECT,
sourceforge_release_tarball(SOURCEFORGE_PROJECT,
[source_tarball_path, doc_tarball_path],
user=options.user, sftp=options.sftp )
user=options.user, sftp=options.sftp)
print('Source and doc release tarballs uploaded')
else:
print('No upload user specified. Web site and download tarball were not uploaded.')
print('Tarball can be found at:', doc_tarball_path)
# Set next version number and commit
set_version( next_version )
svn_commit( 'Released ' + release_version )
set_version(next_version)
svn_commit('Released ' + release_version)
else:
sys.stderr.write( msg + '\n' )
sys.stderr.write(msg + '\n')
if __name__ == '__main__':
main()

View File

@@ -1,9 +1,9 @@
import fnmatch
import os
def generate( env ):
def Glob( env, includes = None, excludes = None, dir = '.' ):
"""Adds Glob( includes = Split( '*' ), excludes = None, dir = '.')
def generate(env):
def Glob(env, includes = None, excludes = None, dir = '.'):
"""Adds Glob(includes = Split('*'), excludes = None, dir = '.')
helper function to environment.
Globs files on the file system.
@@ -12,36 +12,36 @@ def generate( env ):
excludes: list of file name patterns excluded from the return list.
Example:
sources = env.Glob( ("*.cpp", '*.h'), "~*.cpp", "#src" )
sources = env.Glob(("*.cpp", '*.h'), "~*.cpp", "#src")
"""
def filterFilename(path):
abs_path = os.path.join( dir, path )
abs_path = os.path.join(dir, path)
if not os.path.isfile(abs_path):
return 0
fn = os.path.basename(path)
match = 0
for include in includes:
if fnmatch.fnmatchcase( fn, include ):
if fnmatch.fnmatchcase(fn, include):
match = 1
break
if match == 1 and not excludes is None:
for exclude in excludes:
if fnmatch.fnmatchcase( fn, exclude ):
if fnmatch.fnmatchcase(fn, exclude):
match = 0
break
return match
if includes is None:
includes = ('*',)
elif type(includes) in ( type(''), type(u'') ):
elif type(includes) in (type(''), type(u'')):
includes = (includes,)
if type(excludes) in ( type(''), type(u'') ):
if type(excludes) in (type(''), type(u'')):
excludes = (excludes,)
dir = env.Dir(dir).abspath
paths = os.listdir( dir )
def makeAbsFileNode( path ):
return env.File( os.path.join( dir, path ) )
nodes = filter( filterFilename, paths )
return map( makeAbsFileNode, nodes )
paths = os.listdir(dir)
def makeAbsFileNode(path):
return env.File(os.path.join(dir, path))
nodes = filter(filterFilename, paths)
return map(makeAbsFileNode, nodes)
from SCons.Script import Environment
Environment.Glob = Glob

View File

@@ -47,7 +47,7 @@ import targz
## elif token == "=":
## data[key] = list()
## else:
## append_data( data, key, new_data, token )
## append_data(data, key, new_data, token)
## new_data = True
##
## last_token = token
@@ -55,7 +55,7 @@ import targz
##
## if last_token == '\\' and token != '\n':
## new_data = False
## append_data( data, key, new_data, '\\' )
## append_data(data, key, new_data, '\\')
##
## # compress lists of len 1 into single strings
## for (k, v) in data.items():
@@ -116,7 +116,7 @@ import targz
## else:
## for pattern in file_patterns:
## sources.extend(glob.glob("/".join([node, pattern])))
## sources = map( lambda path: env.File(path), sources )
## sources = map(lambda path: env.File(path), sources)
## return sources
##
##
@@ -143,7 +143,7 @@ def srcDistEmitter(source, target, env):
## # add our output locations
## for (k, v) in output_formats.items():
## if data.get("GENERATE_" + k, v[0]) == "YES":
## targets.append(env.Dir( os.path.join(out_dir, data.get(k + "_OUTPUT", v[1]))) )
## targets.append(env.Dir(os.path.join(out_dir, data.get(k + "_OUTPUT", v[1]))))
##
## # don't clobber targets
## for node in targets:
@@ -161,14 +161,13 @@ def generate(env):
Add builders and construction variables for the
SrcDist tool.
"""
## doxyfile_scanner = env.Scanner(
## DoxySourceScan,
## doxyfile_scanner = env.Scanner(## DoxySourceScan,
## "DoxySourceScan",
## scan_check = DoxySourceScanCheck,
## )
##)
if targz.exists(env):
srcdist_builder = targz.makeBuilder( srcDistEmitter )
srcdist_builder = targz.makeBuilder(srcDistEmitter)
env['BUILDERS']['SrcDist'] = srcdist_builder

View File

@@ -70,7 +70,7 @@ def generate(env):
return target, source
## env.Append(TOOLS = 'substinfile') # this should be automaticaly done by Scons ?!?
subst_action = SCons.Action.Action( subst_in_file, subst_in_file_string )
subst_action = SCons.Action.Action(subst_in_file, subst_in_file_string)
env['BUILDERS']['SubstInFile'] = Builder(action=subst_action, emitter=subst_emitter)
def exists(env):

View File

@@ -27,9 +27,9 @@ TARGZ_DEFAULT_COMPRESSION_LEVEL = 9
if internal_targz:
def targz(target, source, env):
def archive_name( path ):
path = os.path.normpath( os.path.abspath( path ) )
common_path = os.path.commonprefix( (base_dir, path) )
def archive_name(path):
path = os.path.normpath(os.path.abspath(path))
common_path = os.path.commonprefix((base_dir, path))
archive_name = path[len(common_path):]
return archive_name
@@ -37,23 +37,23 @@ if internal_targz:
for name in names:
path = os.path.join(dirname, name)
if os.path.isfile(path):
tar.add(path, archive_name(path) )
tar.add(path, archive_name(path))
compression = env.get('TARGZ_COMPRESSION_LEVEL',TARGZ_DEFAULT_COMPRESSION_LEVEL)
base_dir = os.path.normpath( env.get('TARGZ_BASEDIR', env.Dir('.')).abspath )
base_dir = os.path.normpath(env.get('TARGZ_BASEDIR', env.Dir('.')).abspath)
target_path = str(target[0])
fileobj = gzip.GzipFile( target_path, 'wb', compression )
fileobj = gzip.GzipFile(target_path, 'wb', compression)
tar = tarfile.TarFile(os.path.splitext(target_path)[0], 'w', fileobj)
for source in source:
source_path = str(source)
if source.isdir():
os.path.walk(source_path, visit, tar)
else:
tar.add(source_path, archive_name(source_path) ) # filename, arcname
tar.add(source_path, archive_name(source_path)) # filename, arcname
tar.close()
targzAction = SCons.Action.Action(targz, varlist=['TARGZ_COMPRESSION_LEVEL','TARGZ_BASEDIR'])
def makeBuilder( emitter = None ):
def makeBuilder(emitter = None):
return SCons.Builder.Builder(action = SCons.Action.Action('$TARGZ_COM', '$TARGZ_COMSTR'),
source_factory = SCons.Node.FS.Entry,
source_scanner = SCons.Defaults.DirScanner,

View File

@@ -1,4 +1,4 @@
FIND_PACKAGE(PythonInterp 2.6 REQUIRED)
FIND_PACKAGE(PythonInterp 2.6)
IF(JSONCPP_LIB_BUILD_SHARED)
ADD_DEFINITIONS( -DJSON_DLL )
@@ -7,14 +7,20 @@ ENDIF(JSONCPP_LIB_BUILD_SHARED)
ADD_EXECUTABLE(jsontestrunner_exe
main.cpp
)
TARGET_LINK_LIBRARIES(jsontestrunner_exe jsoncpp_lib)
IF(JSONCPP_LIB_BUILD_SHARED)
TARGET_LINK_LIBRARIES(jsontestrunner_exe jsoncpp_lib)
ELSE(JSONCPP_LIB_BUILD_SHARED)
TARGET_LINK_LIBRARIES(jsontestrunner_exe jsoncpp_lib_static)
ENDIF(JSONCPP_LIB_BUILD_SHARED)
SET_TARGET_PROPERTIES(jsontestrunner_exe PROPERTIES OUTPUT_NAME jsontestrunner_exe)
IF(PYTHONINTERP_FOUND)
# Run end to end parser/writer tests
SET(TEST_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../../test)
SET(RUNJSONTESTS_PATH ${TEST_DIR}/runjsontests.py)
ADD_CUSTOM_TARGET(jsoncpp_readerwriter_tests ALL
ADD_CUSTOM_TARGET(jsoncpp_readerwriter_tests
"${PYTHON_EXECUTABLE}" -B "${RUNJSONTESTS_PATH}" $<TARGET_FILE:jsontestrunner_exe> "${TEST_DIR}/data"
DEPENDS jsontestrunner_exe jsoncpp_test
)

View File

@@ -8,12 +8,22 @@
#include <json/json.h>
#include <algorithm> // sort
#include <sstream>
#include <stdio.h>
#if defined(_MSC_VER) && _MSC_VER >= 1310
#pragma warning(disable : 4996) // disable fopen deprecation warning
#endif
struct Options
{
std::string path;
Json::Features features;
bool parseOnly;
typedef std::string (*writeFuncType)(Json::Value const&);
writeFuncType write;
};
static std::string normalizeFloatingPointStr(double value) {
char buffer[32];
#if defined(_MSC_VER) && defined(__STDC_SECURE_LIB__)
@@ -129,43 +139,67 @@ printValueTree(FILE* fout, Json::Value& value, const std::string& path = ".") {
static int parseAndSaveValueTree(const std::string& input,
const std::string& actual,
const std::string& kind,
Json::Value& root,
const Json::Features& features,
bool parseOnly) {
bool parseOnly,
Json::Value* root)
{
Json::Reader reader(features);
bool parsingSuccessful = reader.parse(input, root);
bool parsingSuccessful = reader.parse(input, *root);
if (!parsingSuccessful) {
printf("Failed to parse %s file: \n%s\n",
kind.c_str(),
reader.getFormattedErrorMessages().c_str());
return 1;
}
if (!parseOnly) {
FILE* factual = fopen(actual.c_str(), "wt");
if (!factual) {
printf("Failed to create %s actual file.\n", kind.c_str());
return 2;
}
printValueTree(factual, root);
printValueTree(factual, *root);
fclose(factual);
}
return 0;
}
static int rewriteValueTree(const std::string& rewritePath,
const Json::Value& root,
std::string& rewrite) {
// Json::FastWriter writer;
// writer.enableYAMLCompatibility();
// static std::string useFastWriter(Json::Value const& root) {
// Json::FastWriter writer;
// writer.enableYAMLCompatibility();
// return writer.write(root);
// }
static std::string useStyledWriter(
Json::Value const& root)
{
Json::StyledWriter writer;
rewrite = writer.write(root);
return writer.write(root);
}
static std::string useStyledStreamWriter(
Json::Value const& root)
{
Json::StyledStreamWriter writer;
std::ostringstream sout;
writer.write(sout, root);
return sout.str();
}
static std::string useBuiltStyledStreamWriter(
Json::Value const& root)
{
Json::StreamWriterBuilder builder;
return Json::writeString(builder, root);
}
static int rewriteValueTree(
const std::string& rewritePath,
const Json::Value& root,
Options::writeFuncType write,
std::string* rewrite)
{
*rewrite = write(root);
FILE* fout = fopen(rewritePath.c_str(), "wt");
if (!fout) {
printf("Failed to create rewrite file: %s\n", rewritePath.c_str());
return 2;
}
fprintf(fout, "%s\n", rewrite.c_str());
fprintf(fout, "%s\n", rewrite->c_str());
fclose(fout);
return 0;
}
@@ -194,84 +228,98 @@ static int printUsage(const char* argv[]) {
return 3;
}
int parseCommandLine(int argc,
const char* argv[],
Json::Features& features,
std::string& path,
bool& parseOnly) {
parseOnly = false;
static int parseCommandLine(
int argc, const char* argv[], Options* opts)
{
opts->parseOnly = false;
opts->write = &useStyledWriter;
if (argc < 2) {
return printUsage(argv);
}
int index = 1;
if (std::string(argv[1]) == "--json-checker") {
features = Json::Features::strictMode();
parseOnly = true;
if (std::string(argv[index]) == "--json-checker") {
opts->features = Json::Features::strictMode();
opts->parseOnly = true;
++index;
}
if (std::string(argv[1]) == "--json-config") {
if (std::string(argv[index]) == "--json-config") {
printConfig();
return 3;
}
if (std::string(argv[index]) == "--json-writer") {
++index;
std::string const writerName(argv[index++]);
if (writerName == "StyledWriter") {
opts->write = &useStyledWriter;
} else if (writerName == "StyledStreamWriter") {
opts->write = &useStyledStreamWriter;
} else if (writerName == "BuiltStyledStreamWriter") {
opts->write = &useBuiltStyledStreamWriter;
} else {
printf("Unknown '--json-writer %s'\n", writerName.c_str());
return 4;
}
}
if (index == argc || index + 1 < argc) {
return printUsage(argv);
}
path = argv[index];
opts->path = argv[index];
return 0;
}
static int runTest(Options const& opts)
{
int exitCode = 0;
int main(int argc, const char* argv[]) {
std::string path;
Json::Features features;
bool parseOnly;
int exitCode = parseCommandLine(argc, argv, features, path, parseOnly);
if (exitCode != 0) {
return exitCode;
std::string input = readInputTestFile(opts.path.c_str());
if (input.empty()) {
printf("Failed to read input or empty input: %s\n", opts.path.c_str());
return 3;
}
std::string basePath = removeSuffix(opts.path, ".json");
if (!opts.parseOnly && basePath.empty()) {
printf("Bad input path. Path does not end with '.expected':\n%s\n",
opts.path.c_str());
return 3;
}
std::string const actualPath = basePath + ".actual";
std::string const rewritePath = basePath + ".rewrite";
std::string const rewriteActualPath = basePath + ".actual-rewrite";
Json::Value root;
exitCode = parseAndSaveValueTree(
input, actualPath, "input",
opts.features, opts.parseOnly, &root);
if (exitCode || opts.parseOnly) {
return exitCode;
}
std::string rewrite;
exitCode = rewriteValueTree(rewritePath, root, opts.write, &rewrite);
if (exitCode) {
return exitCode;
}
Json::Value rewriteRoot;
exitCode = parseAndSaveValueTree(
rewrite, rewriteActualPath, "rewrite",
opts.features, opts.parseOnly, &rewriteRoot);
if (exitCode) {
return exitCode;
}
return 0;
}
int main(int argc, const char* argv[]) {
Options opts;
int exitCode = parseCommandLine(argc, argv, &opts);
if (exitCode != 0) {
printf("Failed to parse command-line.");
return exitCode;
}
try {
std::string input = readInputTestFile(path.c_str());
if (input.empty()) {
printf("Failed to read input or empty input: %s\n", path.c_str());
return 3;
}
std::string basePath = removeSuffix(argv[1], ".json");
if (!parseOnly && basePath.empty()) {
printf("Bad input path. Path does not end with '.expected':\n%s\n",
path.c_str());
return 3;
}
std::string actualPath = basePath + ".actual";
std::string rewritePath = basePath + ".rewrite";
std::string rewriteActualPath = basePath + ".actual-rewrite";
Json::Value root;
exitCode = parseAndSaveValueTree(
input, actualPath, "input", root, features, parseOnly);
if (exitCode == 0 && !parseOnly) {
std::string rewrite;
exitCode = rewriteValueTree(rewritePath, root, rewrite);
if (exitCode == 0) {
Json::Value rewriteRoot;
exitCode = parseAndSaveValueTree(rewrite,
rewriteActualPath,
"rewrite",
rewriteRoot,
features,
parseOnly);
}
}
return runTest(opts);
}
catch (const std::exception& e) {
printf("Unhandled exception:\n%s\n", e.what());
exitCode = 1;
return 1;
}
return exitCode;
}

View File

@@ -1,15 +1,10 @@
OPTION(JSONCPP_LIB_BUILD_SHARED "Build jsoncpp_lib as a shared library." OFF)
OPTION(JSONCPP_LIB_BUILD_STATIC "Build jsoncpp_lib static library." ON)
IF(BUILD_SHARED_LIBS)
SET(JSONCPP_LIB_BUILD_SHARED ON)
ENDIF(BUILD_SHARED_LIBS)
IF(JSONCPP_LIB_BUILD_SHARED)
SET(JSONCPP_LIB_TYPE SHARED)
ADD_DEFINITIONS( -DJSON_DLL_BUILD )
ELSE(JSONCPP_LIB_BUILD_SHARED)
SET(JSONCPP_LIB_TYPE STATIC)
ENDIF(JSONCPP_LIB_BUILD_SHARED)
if( CMAKE_COMPILER_IS_GNUCXX )
#Get compiler version.
execute_process( COMMAND ${CMAKE_CXX_COMPILER} -dumpversion
@@ -36,25 +31,13 @@ SET( PUBLIC_HEADERS
SOURCE_GROUP( "Public API" FILES ${PUBLIC_HEADERS} )
ADD_LIBRARY( jsoncpp_lib ${JSONCPP_LIB_TYPE}
${PUBLIC_HEADERS}
json_tool.h
json_reader.cpp
json_batchallocator.h
json_valueiterator.inl
json_value.cpp
json_writer.cpp
version.h.in
)
SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES OUTPUT_NAME jsoncpp )
SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES VERSION ${JSONCPP_VERSION} SOVERSION ${JSONCPP_VERSION_MAJOR} )
IF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
TARGET_INCLUDE_DIRECTORIES( jsoncpp_lib PUBLIC
$<INSTALL_INTERFACE:${INCLUDE_INSTALL_DIR}>
$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/${JSONCPP_INCLUDE_DIR}>
)
ENDIF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
SET(jsoncpp_sources
json_tool.h
json_reader.cpp
json_valueiterator.inl
json_value.cpp
json_writer.cpp
version.h.in)
# Install instructions for this target
IF(JSONCPP_WITH_CMAKE_PACKAGE)
@@ -63,8 +46,40 @@ ELSE(JSONCPP_WITH_CMAKE_PACKAGE)
SET(INSTALL_EXPORT)
ENDIF(JSONCPP_WITH_CMAKE_PACKAGE)
INSTALL( TARGETS jsoncpp_lib ${INSTALL_EXPORT}
IF(JSONCPP_LIB_BUILD_SHARED)
ADD_DEFINITIONS( -DJSON_DLL_BUILD )
ADD_LIBRARY(jsoncpp_lib SHARED ${PUBLIC_HEADERS} ${jsoncpp_sources})
SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES VERSION ${JSONCPP_VERSION} SOVERSION ${JSONCPP_VERSION_MAJOR})
SET_TARGET_PROPERTIES( jsoncpp_lib PROPERTIES OUTPUT_NAME jsoncpp )
INSTALL( TARGETS jsoncpp_lib ${INSTALL_EXPORT}
RUNTIME DESTINATION ${RUNTIME_INSTALL_DIR}
LIBRARY DESTINATION ${LIBRARY_INSTALL_DIR}
ARCHIVE DESTINATION ${ARCHIVE_INSTALL_DIR}
)
ARCHIVE DESTINATION ${ARCHIVE_INSTALL_DIR})
IF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
TARGET_INCLUDE_DIRECTORIES( jsoncpp_lib PUBLIC
$<INSTALL_INTERFACE:${INCLUDE_INSTALL_DIR}>
$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/${JSONCPP_INCLUDE_DIR}>)
ENDIF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
ENDIF()
IF(JSONCPP_LIB_BUILD_STATIC)
ADD_LIBRARY(jsoncpp_lib_static STATIC ${PUBLIC_HEADERS} ${jsoncpp_sources})
SET_TARGET_PROPERTIES( jsoncpp_lib_static PROPERTIES VERSION ${JSONCPP_VERSION} SOVERSION ${JSONCPP_VERSION_MAJOR})
SET_TARGET_PROPERTIES( jsoncpp_lib_static PROPERTIES OUTPUT_NAME jsoncpp )
INSTALL( TARGETS jsoncpp_lib_static ${INSTALL_EXPORT}
RUNTIME DESTINATION ${RUNTIME_INSTALL_DIR}
LIBRARY DESTINATION ${LIBRARY_INSTALL_DIR}
ARCHIVE DESTINATION ${ARCHIVE_INSTALL_DIR})
IF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
TARGET_INCLUDE_DIRECTORIES( jsoncpp_lib_static PUBLIC
$<INSTALL_INTERFACE:${INCLUDE_INSTALL_DIR}>
$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/${JSONCPP_INCLUDE_DIR}>
)
ENDIF(NOT CMAKE_VERSION VERSION_LESS 2.8.11)
ENDIF()

View File

@@ -1,121 +0,0 @@
// Copyright 2007-2010 Baptiste Lepilleur
// Distributed under MIT license, or public domain if desired and
// recognized in your jurisdiction.
// See file LICENSE for detail or copy at http://jsoncpp.sourceforge.net/LICENSE
#ifndef JSONCPP_BATCHALLOCATOR_H_INCLUDED
#define JSONCPP_BATCHALLOCATOR_H_INCLUDED
#include <stdlib.h>
#include <assert.h>
#ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
namespace Json {
/* Fast memory allocator.
*
* This memory allocator allocates memory for a batch of objects (specified by
* the page size, i.e. the number of objects in each page).
*
* It does not allow the destruction of a single object. All the allocated
* objects can be destroyed at once. The memory can be either released or reused
* for future allocation.
*
* The in-place new operator must be used to construct the object using the
* pointer returned by allocate.
*/
template <typename AllocatedType, const unsigned int objectPerAllocation>
class BatchAllocator {
public:
BatchAllocator(unsigned int objectsPerPage = 255)
: freeHead_(0), objectsPerPage_(objectsPerPage) {
// printf( "Size: %d => %s\n", sizeof(AllocatedType),
// typeid(AllocatedType).name() );
assert(sizeof(AllocatedType) * objectPerAllocation >=
sizeof(AllocatedType*)); // We must be able to store a singly-linked free
// list in the object's free space.
assert(objectsPerPage >= 16);
batches_ = allocateBatch(0); // allocate a dummy page
currentBatch_ = batches_;
}
~BatchAllocator() {
for (BatchInfo* batch = batches_; batch;) {
BatchInfo* nextBatch = batch->next_;
free(batch);
batch = nextBatch;
}
}
/// Allocate space for an array of objectPerAllocation objects.
/// @warning It is the responsibility of the caller to call the objects'
/// constructors.
AllocatedType* allocate() {
if (freeHead_) // returns node from free list.
{
AllocatedType* object = freeHead_;
freeHead_ = *(AllocatedType**)object;
return object;
}
if (currentBatch_->used_ == currentBatch_->end_) {
currentBatch_ = currentBatch_->next_;
while (currentBatch_ && currentBatch_->used_ == currentBatch_->end_)
currentBatch_ = currentBatch_->next_;
if (!currentBatch_) // no free batch found, allocate a new one
{
currentBatch_ = allocateBatch(objectsPerPage_);
currentBatch_->next_ = batches_; // insert at the head of the list
batches_ = currentBatch_;
}
}
AllocatedType* allocated = currentBatch_->used_;
currentBatch_->used_ += objectPerAllocation;
return allocated;
}
/// Release the object.
/// @warning It is the responsibility of the caller to actually destruct the
/// object.
void release(AllocatedType* object) {
assert(object != 0);
*(AllocatedType**)object = freeHead_;
freeHead_ = object;
}
private:
struct BatchInfo {
BatchInfo* next_;
AllocatedType* used_;
AllocatedType* end_;
AllocatedType buffer_[objectPerAllocation];
};
// Disabled copy constructor and assignment operator.
BatchAllocator(const BatchAllocator&);
void operator=(const BatchAllocator&);
static BatchInfo* allocateBatch(unsigned int objectsPerPage) {
const unsigned int mallocSize =
sizeof(BatchInfo) - sizeof(AllocatedType) * objectPerAllocation +
sizeof(AllocatedType) * objectPerAllocation * objectsPerPage;
BatchInfo* batch = static_cast<BatchInfo*>(malloc(mallocSize));
batch->next_ = 0;
batch->used_ = batch->buffer_;
batch->end_ = batch->buffer_ + objectsPerPage;
return batch;
}
BatchInfo* batches_;
BatchInfo* currentBatch_;
/// Head of a singly linked list threaded through the free space of released objects
AllocatedType* freeHead_;
unsigned int objectsPerPage_;
};
} // namespace Json
#endif // ifndef JSONCPP_DOC_EXCLUDE_IMPLEMENTATION
#endif // JSONCPP_BATCHALLOCATOR_H_INCLUDED
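The doc comment above puts construction and destruction on the caller: allocate() hands back raw storage, and release() only threads the slot back onto the free list. A minimal sketch of that contract against the header as shown (the Node payload type is hypothetical; the payload is sized so the free-list-pointer assert holds on 64-bit):

#include <new>

struct Node { double payload; };

void batchAllocatorExample() {
  // One Node per allocation; pages default to 255 objects.
  Json::BatchAllocator<Node, 1> allocator;
  Node* n = new (allocator.allocate()) Node(); // construct in-place
  n->payload = 42.0;
  n->~Node();           // caller must destruct explicitly...
  allocator.release(n); // ...before returning the slot to the free list
}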


@@ -1,360 +0,0 @@
// Copyright 2007-2010 Baptiste Lepilleur
// Distributed under MIT license, or public domain if desired and
// recognized in your jurisdiction.
// See file LICENSE for detail or copy at http://jsoncpp.sourceforge.net/LICENSE
// included by json_value.cpp
namespace Json {
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
// class ValueInternalArray
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
ValueArrayAllocator::~ValueArrayAllocator() {}
// //////////////////////////////////////////////////////////////////
// class DefaultValueArrayAllocator
// //////////////////////////////////////////////////////////////////
#ifdef JSON_USE_SIMPLE_INTERNAL_ALLOCATOR
class DefaultValueArrayAllocator : public ValueArrayAllocator {
public: // overridden from ValueArrayAllocator
virtual ~DefaultValueArrayAllocator() {}
virtual ValueInternalArray* newArray() { return new ValueInternalArray(); }
virtual ValueInternalArray* newArrayCopy(const ValueInternalArray& other) {
return new ValueInternalArray(other);
}
virtual void destructArray(ValueInternalArray* array) { delete array; }
virtual void
reallocateArrayPageIndex(Value**& indexes,
ValueInternalArray::PageIndex& indexCount,
ValueInternalArray::PageIndex minNewIndexCount) {
ValueInternalArray::PageIndex newIndexCount = (indexCount * 3) / 2 + 1;
if (minNewIndexCount > newIndexCount)
newIndexCount = minNewIndexCount;
void* newIndexes = realloc(indexes, sizeof(Value*) * newIndexCount);
JSON_ASSERT_MESSAGE(newIndexes, "Couldn't realloc.");
indexCount = newIndexCount;
indexes = static_cast<Value**>(newIndexes);
}
virtual void releaseArrayPageIndex(Value** indexes,
ValueInternalArray::PageIndex indexCount) {
if (indexes)
free(indexes);
}
virtual Value* allocateArrayPage() {
return static_cast<Value*>(
malloc(sizeof(Value) * ValueInternalArray::itemsPerPage));
}
virtual void releaseArrayPage(Value* value) {
if (value)
free(value);
}
};
#else // #ifdef JSON_USE_SIMPLE_INTERNAL_ALLOCATOR
/// @todo make this thread-safe (lock when accessing batch allocator)
class DefaultValueArrayAllocator : public ValueArrayAllocator {
public: // overridden from ValueArrayAllocator
virtual ~DefaultValueArrayAllocator() {}
virtual ValueInternalArray* newArray() {
ValueInternalArray* array = arraysAllocator_.allocate();
new (array) ValueInternalArray(); // placement new
return array;
}
virtual ValueInternalArray* newArrayCopy(const ValueInternalArray& other) {
ValueInternalArray* array = arraysAllocator_.allocate();
new (array) ValueInternalArray(other); // placement new
return array;
}
virtual void destructArray(ValueInternalArray* array) {
if (array) {
array->~ValueInternalArray();
arraysAllocator_.release(array);
}
}
virtual void
reallocateArrayPageIndex(Value**& indexes,
ValueInternalArray::PageIndex& indexCount,
ValueInternalArray::PageIndex minNewIndexCount) {
ValueInternalArray::PageIndex newIndexCount = (indexCount * 3) / 2 + 1;
if (minNewIndexCount > newIndexCount)
newIndexCount = minNewIndexCount;
void* newIndexes = realloc(indexes, sizeof(Value*) * newIndexCount);
JSON_ASSERT_MESSAGE(newIndexes, "Couldn't realloc.");
indexCount = newIndexCount;
indexes = static_cast<Value**>(newIndexes);
}
virtual void releaseArrayPageIndex(Value** indexes,
ValueInternalArray::PageIndex indexCount) {
if (indexes)
free(indexes);
}
virtual Value* allocateArrayPage() {
return static_cast<Value*>(pagesAllocator_.allocate());
}
virtual void releaseArrayPage(Value* value) {
if (value)
pagesAllocator_.release(value);
}
private:
BatchAllocator<ValueInternalArray, 1> arraysAllocator_;
BatchAllocator<Value, ValueInternalArray::itemsPerPage> pagesAllocator_;
};
#endif // #ifdef JSON_USE_SIMPLE_INTERNAL_ALLOCATOR
static ValueArrayAllocator*& arrayAllocator() {
static DefaultValueArrayAllocator defaultAllocator;
static ValueArrayAllocator* arrayAllocator = &defaultAllocator;
return arrayAllocator;
}
static struct DummyArrayAllocatorInitializer {
DummyArrayAllocatorInitializer() {
arrayAllocator(); // ensure arrayAllocator() statics are initialized before
// main().
}
} dummyArrayAllocatorInitializer;
// //////////////////////////////////////////////////////////////////
// class ValueInternalArray
// //////////////////////////////////////////////////////////////////
bool ValueInternalArray::equals(const IteratorState& x,
const IteratorState& other) {
return x.array_ == other.array_ &&
x.currentItemIndex_ == other.currentItemIndex_ &&
x.currentPageIndex_ == other.currentPageIndex_;
}
void ValueInternalArray::increment(IteratorState& it) {
JSON_ASSERT_MESSAGE(
it.array_ && (it.currentPageIndex_ - it.array_->pages_) * itemsPerPage +
it.currentItemIndex_ !=
it.array_->size_,
"ValueInternalArray::increment(): moving iterator beyond end");
++(it.currentItemIndex_);
if (it.currentItemIndex_ == itemsPerPage) {
it.currentItemIndex_ = 0;
++(it.currentPageIndex_);
}
}
void ValueInternalArray::decrement(IteratorState& it) {
JSON_ASSERT_MESSAGE(
it.array_ && it.currentPageIndex_ == it.array_->pages_ &&
it.currentItemIndex_ == 0,
"ValueInternalArray::decrement(): moving iterator beyond end");
if (it.currentItemIndex_ == 0) {
it.currentItemIndex_ = itemsPerPage - 1;
--(it.currentPageIndex_);
} else {
--(it.currentItemIndex_);
}
}
Value& ValueInternalArray::unsafeDereference(const IteratorState& it) {
return (*(it.currentPageIndex_))[it.currentItemIndex_];
}
Value& ValueInternalArray::dereference(const IteratorState& it) {
JSON_ASSERT_MESSAGE(
it.array_ && (it.currentPageIndex_ - it.array_->pages_) * itemsPerPage +
it.currentItemIndex_ <
it.array_->size_,
"ValueInternalArray::dereference(): dereferencing invalid iterator");
return unsafeDereference(it);
}
void ValueInternalArray::makeBeginIterator(IteratorState& it) const {
it.array_ = const_cast<ValueInternalArray*>(this);
it.currentItemIndex_ = 0;
it.currentPageIndex_ = pages_;
}
void ValueInternalArray::makeIterator(IteratorState& it,
ArrayIndex index) const {
it.array_ = const_cast<ValueInternalArray*>(this);
it.currentItemIndex_ = index % itemsPerPage;
it.currentPageIndex_ = pages_ + index / itemsPerPage;
}
void ValueInternalArray::makeEndIterator(IteratorState& it) const {
makeIterator(it, size_);
}
ValueInternalArray::ValueInternalArray() : pages_(0), size_(0), pageCount_(0) {}
ValueInternalArray::ValueInternalArray(const ValueInternalArray& other)
: pages_(0), size_(other.size_), pageCount_(0) {
PageIndex minNewPages = other.size_ / itemsPerPage;
arrayAllocator()->reallocateArrayPageIndex(pages_, pageCount_, minNewPages);
JSON_ASSERT_MESSAGE(pageCount_ >= minNewPages,
"ValueInternalArray::reserve(): bad reallocation");
IteratorState itOther;
other.makeBeginIterator(itOther);
Value* value;
for (ArrayIndex index = 0; index < size_; ++index, increment(itOther)) {
if (index % itemsPerPage == 0) {
PageIndex pageIndex = index / itemsPerPage;
value = arrayAllocator()->allocateArrayPage();
pages_[pageIndex] = value;
}
new (value) Value(dereference(itOther));
}
}
ValueInternalArray& ValueInternalArray::operator=(ValueInternalArray other) {
swap(other);
return *this;
}
ValueInternalArray::~ValueInternalArray() {
// destroy all constructed items
IteratorState it;
IteratorState itEnd;
makeBeginIterator(it);
makeEndIterator(itEnd);
for (; !equals(it, itEnd); increment(it)) {
Value* value = &dereference(it);
value->~Value();
}
// release all pages
PageIndex lastPageIndex = size_ / itemsPerPage;
for (PageIndex pageIndex = 0; pageIndex < lastPageIndex; ++pageIndex)
arrayAllocator()->releaseArrayPage(pages_[pageIndex]);
// release pages index
arrayAllocator()->releaseArrayPageIndex(pages_, pageCount_);
}
void ValueInternalArray::swap(ValueInternalArray& other) {
Value** tempPages = pages_;
pages_ = other.pages_;
other.pages_ = tempPages;
ArrayIndex tempSize = size_;
size_ = other.size_;
other.size_ = tempSize;
PageIndex tempPageCount = pageCount_;
pageCount_ = other.pageCount_;
other.pageCount_ = tempPageCount;
}
void ValueInternalArray::clear() {
ValueInternalArray dummy;
swap(dummy);
}
void ValueInternalArray::resize(ArrayIndex newSize) {
if (newSize == 0)
clear();
else if (newSize < size_) {
IteratorState it;
IteratorState itEnd;
makeIterator(it, newSize);
makeIterator(itEnd, size_);
for (; !equals(it, itEnd); increment(it)) {
Value* value = &dereference(it);
value->~Value();
}
PageIndex pageIndex = (newSize + itemsPerPage - 1) / itemsPerPage;
PageIndex lastPageIndex = size_ / itemsPerPage;
for (; pageIndex < lastPageIndex; ++pageIndex)
arrayAllocator()->releaseArrayPage(pages_[pageIndex]);
size_ = newSize;
} else if (newSize > size_)
resolveReference(newSize);
}
void ValueInternalArray::makeIndexValid(ArrayIndex index) {
// Need to enlarge the page index?
if (index >= pageCount_ * itemsPerPage) {
PageIndex minNewPages = (index + 1) / itemsPerPage;
arrayAllocator()->reallocateArrayPageIndex(pages_, pageCount_, minNewPages);
JSON_ASSERT_MESSAGE(pageCount_ >= minNewPages,
"ValueInternalArray::reserve(): bad reallocation");
}
// Need to allocate new pages?
ArrayIndex nextPageIndex = (size_ % itemsPerPage) != 0
? size_ - (size_ % itemsPerPage) + itemsPerPage
: size_;
if (nextPageIndex <= index) {
PageIndex pageIndex = nextPageIndex / itemsPerPage;
PageIndex pageToAllocate = (index - nextPageIndex) / itemsPerPage + 1;
for (; pageToAllocate-- > 0; ++pageIndex)
pages_[pageIndex] = arrayAllocator()->allocateArrayPage();
}
// Initialize all new entries
IteratorState it;
IteratorState itEnd;
makeIterator(it, size_);
size_ = index + 1;
makeIterator(itEnd, size_);
for (; !equals(it, itEnd); increment(it)) {
Value* value = &dereference(it);
new (value) Value(); // Construct a default value using placement new
}
}
Value& ValueInternalArray::resolveReference(ArrayIndex index) {
if (index >= size_)
makeIndexValid(index);
return pages_[index / itemsPerPage][index % itemsPerPage];
}
Value* ValueInternalArray::find(ArrayIndex index) const {
if (index >= size_)
return 0;
return &(pages_[index / itemsPerPage][index % itemsPerPage]);
}
ValueInternalArray::ArrayIndex ValueInternalArray::size() const {
return size_;
}
int ValueInternalArray::distance(const IteratorState& x,
const IteratorState& y) {
return indexOf(y) - indexOf(x);
}
ValueInternalArray::ArrayIndex
ValueInternalArray::indexOf(const IteratorState& iterator) {
if (!iterator.array_)
return ArrayIndex(-1);
return ArrayIndex((iterator.currentPageIndex_ - iterator.array_->pages_) *
itemsPerPage +
iterator.currentItemIndex_);
}
int ValueInternalArray::compare(const ValueInternalArray& other) const {
int sizeDiff(size_ - other.size_);
if (sizeDiff != 0)
return sizeDiff;
for (ArrayIndex index = 0; index < size_; ++index) {
int diff = pages_[index / itemsPerPage][index % itemsPerPage].compare(
other.pages_[index / itemsPerPage][index % itemsPerPage]);
if (diff != 0)
return diff;
}
return 0;
}
} // namespace Json
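Throughout the class above, a flat ArrayIndex maps to a (page, offset) pair via index / itemsPerPage and index % itemsPerPage. A tiny worked sketch of that arithmetic (the itemsPerPage value is assumed here purely for illustration):

#include <cassert>

void pagingExample() {
  const unsigned itemsPerPage = 8; // assumed; the real constant is ValueInternalArray::itemsPerPage
  const unsigned index = 21;
  assert(index / itemsPerPage == 2); // page:   pages_[2]
  assert(index % itemsPerPage == 5); // offset: pages_[2][5]
}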


@@ -1,473 +0,0 @@
// Copyright 2007-2010 Baptiste Lepilleur
// Distributed under MIT license, or public domain if desired and
// recognized in your jurisdiction.
// See file LICENSE for detail or copy at http://jsoncpp.sourceforge.net/LICENSE
// included by json_value.cpp
namespace Json {
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
// class ValueInternalMap
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
// //////////////////////////////////////////////////////////////////
/** \internal MUST be safely initialized using memset( this, 0,
* sizeof(ValueInternalLink) );
* This optimization is used by the fast allocator.
*/
ValueInternalLink::ValueInternalLink() : previous_(0), next_(0) {}
ValueInternalLink::~ValueInternalLink() {
for (int index = 0; index < itemPerLink; ++index) {
if (!items_[index].isItemAvailable()) {
if (!items_[index].isMemberNameStatic())
free(keys_[index]);
} else
break;
}
}
ValueMapAllocator::~ValueMapAllocator() {}
#ifdef JSON_USE_SIMPLE_INTERNAL_ALLOCATOR
class DefaultValueMapAllocator : public ValueMapAllocator {
public: // overridden from ValueMapAllocator
virtual ValueInternalMap* newMap() { return new ValueInternalMap(); }
virtual ValueInternalMap* newMapCopy(const ValueInternalMap& other) {
return new ValueInternalMap(other);
}
virtual void destructMap(ValueInternalMap* map) { delete map; }
virtual ValueInternalLink* allocateMapBuckets(unsigned int size) {
return new ValueInternalLink[size];
}
virtual void releaseMapBuckets(ValueInternalLink* links) { delete[] links; }
virtual ValueInternalLink* allocateMapLink() {
return new ValueInternalLink();
}
virtual void releaseMapLink(ValueInternalLink* link) { delete link; }
};
#else
/// @todo make this thread-safe (lock when accessing batch allocator)
class DefaultValueMapAllocator : public ValueMapAllocator {
public: // overridden from ValueMapAllocator
virtual ValueInternalMap* newMap() {
ValueInternalMap* map = mapsAllocator_.allocate();
new (map) ValueInternalMap(); // placement new
return map;
}
virtual ValueInternalMap* newMapCopy(const ValueInternalMap& other) {
ValueInternalMap* map = mapsAllocator_.allocate();
new (map) ValueInternalMap(other); // placement new
return map;
}
virtual void destructMap(ValueInternalMap* map) {
if (map) {
map->~ValueInternalMap();
mapsAllocator_.release(map);
}
}
virtual ValueInternalLink* allocateMapBuckets(unsigned int size) {
return new ValueInternalLink[size];
}
virtual void releaseMapBuckets(ValueInternalLink* links) { delete[] links; }
virtual ValueInternalLink* allocateMapLink() {
ValueInternalLink* link = linksAllocator_.allocate();
memset(link, 0, sizeof(ValueInternalLink));
return link;
}
virtual void releaseMapLink(ValueInternalLink* link) {
link->~ValueInternalLink();
linksAllocator_.release(link);
}
private:
BatchAllocator<ValueInternalMap, 1> mapsAllocator_;
BatchAllocator<ValueInternalLink, 1> linksAllocator_;
};
#endif
static ValueMapAllocator*& mapAllocator() {
static DefaultValueMapAllocator defaultAllocator;
static ValueMapAllocator* mapAllocator = &defaultAllocator;
return mapAllocator;
}
static struct DummyMapAllocatorInitializer {
DummyMapAllocatorInitializer() {
mapAllocator(); // ensure mapAllocator() statics are initialized before
// main().
}
} dummyMapAllocatorInitializer;
// h(K) = value * K >> w ; with w = 32 and K coprime to 2^32.
/*
A hash map built on linked lists: the buckets array holds the list heads, and
each list element stores 6 key/value pairs (memory = (16+4) * 6 + 4 = 124).
Values carry extra state: valid, available, deleted.
*/
ValueInternalMap::ValueInternalMap()
: buckets_(0), tailLink_(0), bucketsSize_(0), itemCount_(0) {}
ValueInternalMap::ValueInternalMap(const ValueInternalMap& other)
: buckets_(0), tailLink_(0), bucketsSize_(0), itemCount_(0) {
reserve(other.itemCount_);
IteratorState it;
IteratorState itEnd;
other.makeBeginIterator(it);
other.makeEndIterator(itEnd);
for (; !equals(it, itEnd); increment(it)) {
bool isStatic;
const char* memberName = key(it, isStatic);
const Value& aValue = value(it);
resolveReference(memberName, isStatic) = aValue;
}
}
ValueInternalMap& ValueInternalMap::operator=(ValueInternalMap other) {
swap(other);
return *this;
}
ValueInternalMap::~ValueInternalMap() {
if (buckets_) {
for (BucketIndex bucketIndex = 0; bucketIndex < bucketsSize_;
++bucketIndex) {
ValueInternalLink* link = buckets_[bucketIndex].next_;
while (link) {
ValueInternalLink* linkToRelease = link;
link = link->next_;
mapAllocator()->releaseMapLink(linkToRelease);
}
}
mapAllocator()->releaseMapBuckets(buckets_);
}
}
void ValueInternalMap::swap(ValueInternalMap& other) {
ValueInternalLink* tempBuckets = buckets_;
buckets_ = other.buckets_;
other.buckets_ = tempBuckets;
ValueInternalLink* tempTailLink = tailLink_;
tailLink_ = other.tailLink_;
other.tailLink_ = tempTailLink;
BucketIndex tempBucketsSize = bucketsSize_;
bucketsSize_ = other.bucketsSize_;
other.bucketsSize_ = tempBucketsSize;
BucketIndex tempItemCount = itemCount_;
itemCount_ = other.itemCount_;
other.itemCount_ = tempItemCount;
}
void ValueInternalMap::clear() {
ValueInternalMap dummy;
swap(dummy);
}
ValueInternalMap::BucketIndex ValueInternalMap::size() const {
return itemCount_;
}
bool ValueInternalMap::reserveDelta(BucketIndex growth) {
return reserve(itemCount_ + growth);
}
bool ValueInternalMap::reserve(BucketIndex newItemCount) {
if (!buckets_ && newItemCount > 0) {
buckets_ = mapAllocator()->allocateMapBuckets(1);
bucketsSize_ = 1;
tailLink_ = &buckets_[0];
}
// BucketIndex idealBucketCount = (newItemCount +
// ValueInternalLink::itemPerLink) / ValueInternalLink::itemPerLink;
return true;
}
const Value* ValueInternalMap::find(const char* key) const {
if (!bucketsSize_)
return 0;
HashKey hashedKey = hash(key);
BucketIndex bucketIndex = hashedKey % bucketsSize_;
for (const ValueInternalLink* current = &buckets_[bucketIndex]; current != 0;
current = current->next_) {
for (BucketIndex index = 0; index < ValueInternalLink::itemPerLink;
++index) {
if (current->items_[index].isItemAvailable())
return 0;
if (strcmp(key, current->keys_[index]) == 0)
return &current->items_[index];
}
}
return 0;
}
Value* ValueInternalMap::find(const char* key) {
const ValueInternalMap* constThis = this;
return const_cast<Value*>(constThis->find(key));
}
Value& ValueInternalMap::resolveReference(const char* key, bool isStatic) {
HashKey hashedKey = hash(key);
if (bucketsSize_) {
BucketIndex bucketIndex = hashedKey % bucketsSize_;
ValueInternalLink** previous = 0;
BucketIndex index;
for (ValueInternalLink* current = &buckets_[bucketIndex]; current != 0;
previous = &current->next_, current = current->next_) {
for (index = 0; index < ValueInternalLink::itemPerLink; ++index) {
if (current->items_[index].isItemAvailable())
return setNewItem(key, isStatic, current, index);
if (strcmp(key, current->keys_[index]) == 0)
return current->items_[index];
}
}
}
reserveDelta(1);
return unsafeAdd(key, isStatic, hashedKey);
}
void ValueInternalMap::remove(const char* key) {
HashKey hashedKey = hash(key);
if (!bucketsSize_)
return;
BucketIndex bucketIndex = hashedKey % bucketsSize_;
for (ValueInternalLink* link = &buckets_[bucketIndex]; link != 0;
link = link->next_) {
BucketIndex index;
for (index = 0; index < ValueInternalLink::itemPerLink; ++index) {
if (link->items_[index].isItemAvailable())
return;
if (strcmp(key, link->keys_[index]) == 0) {
doActualRemove(link, index, bucketIndex);
return;
}
}
}
}
void ValueInternalMap::doActualRemove(ValueInternalLink* link,
BucketIndex index,
BucketIndex bucketIndex) {
// Find the last item of the bucket and swap it with the 'removed' one.
// Set the removed item's flags to 'available'.
// If the last page contains only 'available' items, deallocate it (it's
// empty).
ValueInternalLink*& lastLink = getLastLinkInBucket(index);
BucketIndex lastItemIndex = 1; // a link can never be empty, so start at 1
for (; lastItemIndex < ValueInternalLink::itemPerLink;
++lastItemIndex) // may be optimized with a binary search
{
if (lastLink->items_[lastItemIndex].isItemAvailable())
break;
}
BucketIndex lastUsedIndex = lastItemIndex - 1;
Value* valueToDelete = &link->items_[index];
Value* valueToPreserve = &lastLink->items_[lastUsedIndex];
if (valueToDelete != valueToPreserve)
valueToDelete->swap(*valueToPreserve);
if (lastUsedIndex == 0) // page is now empty
{ // remove it from bucket linked list and delete it.
ValueInternalLink* linkPreviousToLast = lastLink->previous_;
if (linkPreviousToLast != 0) // cannot delete the bucket's inline link.
{
mapAllocator()->releaseMapLink(lastLink);
linkPreviousToLast->next_ = 0;
lastLink = linkPreviousToLast;
}
} else {
Value dummy;
valueToPreserve->swap(dummy); // restore deleted to default Value.
valueToPreserve->setItemUsed(false);
}
--itemCount_;
}
ValueInternalLink*&
ValueInternalMap::getLastLinkInBucket(BucketIndex bucketIndex) {
if (bucketIndex == bucketsSize_ - 1)
return tailLink_;
ValueInternalLink*& previous = buckets_[bucketIndex + 1].previous_;
if (!previous)
previous = &buckets_[bucketIndex];
return previous;
}
Value& ValueInternalMap::setNewItem(const char* key,
bool isStatic,
ValueInternalLink* link,
BucketIndex index) {
char* duplicatedKey = makeMemberName(key);
++itemCount_;
link->keys_[index] = duplicatedKey;
link->items_[index].setItemUsed();
link->items_[index].setMemberNameIsStatic(isStatic);
return link->items_[index]; // items already default constructed.
}
Value&
ValueInternalMap::unsafeAdd(const char* key, bool isStatic, HashKey hashedKey) {
JSON_ASSERT_MESSAGE(bucketsSize_ > 0,
"ValueInternalMap::unsafeAdd(): internal logic error.");
BucketIndex bucketIndex = hashedKey % bucketsSize_;
ValueInternalLink*& previousLink = getLastLinkInBucket(bucketIndex);
ValueInternalLink* link = previousLink;
BucketIndex index;
for (index = 0; index < ValueInternalLink::itemPerLink; ++index) {
if (link->items_[index].isItemAvailable())
break;
}
if (index == ValueInternalLink::itemPerLink) // need to add a new page
{
ValueInternalLink* newLink = mapAllocator()->allocateMapLink();
index = 0;
link->next_ = newLink;
previousLink = newLink;
link = newLink;
}
return setNewItem(key, isStatic, link, index);
}
ValueInternalMap::HashKey ValueInternalMap::hash(const char* key) const {
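// Simple multiplicative hash: each byte adds (*key * 37), so e.g.
// hash("abc") == (97 + 98 + 99) * 37 == 10878.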
HashKey hash = 0;
while (*key)
hash += *key++ * 37;
return hash;
}
int ValueInternalMap::compare(const ValueInternalMap& other) const {
int sizeDiff(itemCount_ - other.itemCount_);
if (sizeDiff != 0)
return sizeDiff;
// A strict ordering guarantee is required: compare all keys FIRST, then
// compare values.
IteratorState it;
IteratorState itEnd;
makeBeginIterator(it);
makeEndIterator(itEnd);
for (; !equals(it, itEnd); increment(it)) {
if (!other.find(key(it)))
return 1;
}
// All keys are equal; now compare values.
makeBeginIterator(it);
for (; !equals(it, itEnd); increment(it)) {
const Value* otherValue = other.find(key(it));
int valueDiff = value(it).compare(*otherValue);
if (valueDiff != 0)
return valueDiff;
}
return 0;
}
void ValueInternalMap::makeBeginIterator(IteratorState& it) const {
it.map_ = const_cast<ValueInternalMap*>(this);
it.bucketIndex_ = 0;
it.itemIndex_ = 0;
it.link_ = buckets_;
}
void ValueInternalMap::makeEndIterator(IteratorState& it) const {
it.map_ = const_cast<ValueInternalMap*>(this);
it.bucketIndex_ = bucketsSize_;
it.itemIndex_ = 0;
it.link_ = 0;
}
bool ValueInternalMap::equals(const IteratorState& x,
const IteratorState& other) {
return x.map_ == other.map_ && x.bucketIndex_ == other.bucketIndex_ &&
x.link_ == other.link_ && x.itemIndex_ == other.itemIndex_;
}
void ValueInternalMap::incrementBucket(IteratorState& iterator) {
++iterator.bucketIndex_;
JSON_ASSERT_MESSAGE(
iterator.bucketIndex_ <= iterator.map_->bucketsSize_,
"ValueInternalMap::increment(): attempting to iterate beyond end.");
if (iterator.bucketIndex_ == iterator.map_->bucketsSize_)
iterator.link_ = 0;
else
iterator.link_ = &(iterator.map_->buckets_[iterator.bucketIndex_]);
iterator.itemIndex_ = 0;
}
void ValueInternalMap::increment(IteratorState& iterator) {
JSON_ASSERT_MESSAGE(iterator.map_,
"Attempting to iterator using invalid iterator.");
++iterator.itemIndex_;
if (iterator.itemIndex_ == ValueInternalLink::itemPerLink) {
JSON_ASSERT_MESSAGE(
iterator.link_ != 0,
"ValueInternalMap::increment(): attempting to iterate beyond end.");
iterator.link_ = iterator.link_->next_;
if (iterator.link_ == 0)
incrementBucket(iterator);
} else if (iterator.link_->items_[iterator.itemIndex_].isItemAvailable()) {
incrementBucket(iterator);
}
}
void ValueInternalMap::decrement(IteratorState& iterator) {
if (iterator.itemIndex_ == 0) {
JSON_ASSERT_MESSAGE(iterator.map_,
"Attempting to iterate using invalid iterator.");
if (iterator.link_ == &iterator.map_->buckets_[iterator.bucketIndex_]) {
JSON_ASSERT_MESSAGE(iterator.bucketIndex_ > 0,
"Attempting to iterate beyond beginning.");
--(iterator.bucketIndex_);
}
iterator.link_ = iterator.link_->previous_;
iterator.itemIndex_ = ValueInternalLink::itemPerLink - 1;
}
}
const char* ValueInternalMap::key(const IteratorState& iterator) {
JSON_ASSERT_MESSAGE(iterator.link_,
"Attempting to iterate using invalid iterator.");
return iterator.link_->keys_[iterator.itemIndex_];
}
const char* ValueInternalMap::key(const IteratorState& iterator,
bool& isStatic) {
JSON_ASSERT_MESSAGE(iterator.link_,
"Attempting to iterate using invalid iterator.");
isStatic = iterator.link_->items_[iterator.itemIndex_].isMemberNameStatic();
return iterator.link_->keys_[iterator.itemIndex_];
}
Value& ValueInternalMap::value(const IteratorState& iterator) {
JSON_ASSERT_MESSAGE(iterator.link_,
"Attempting to iterate using invalid iterator.");
return iterator.link_->items_[iterator.itemIndex_];
}
int ValueInternalMap::distance(const IteratorState& x, const IteratorState& y) {
int offset = 0;
IteratorState it = x;
while (!equals(it, y))
increment(it);
return offset;
}
} // namespace Json

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -16,68 +16,29 @@ namespace Json {
// //////////////////////////////////////////////////////////////////
ValueIteratorBase::ValueIteratorBase()
#ifndef JSON_VALUE_USE_INTERNAL_MAP
: current_(), isNull_(true) {
}
#else
: isArray_(true), isNull_(true) {
iterator_.array_ = ValueInternalArray::IteratorState();
}
#endif
#ifndef JSON_VALUE_USE_INTERNAL_MAP
ValueIteratorBase::ValueIteratorBase(
const Value::ObjectValues::iterator& current)
: current_(current), isNull_(false) {}
#else
ValueIteratorBase::ValueIteratorBase(
const ValueInternalArray::IteratorState& state)
: isArray_(true) {
iterator_.array_ = state;
}
ValueIteratorBase::ValueIteratorBase(
const ValueInternalMap::IteratorState& state)
: isArray_(false) {
iterator_.map_ = state;
}
#endif
Value& ValueIteratorBase::deref() const {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
return current_->second;
#else
if (isArray_)
return ValueInternalArray::dereference(iterator_.array_);
return ValueInternalMap::value(iterator_.map_);
#endif
}
void ValueIteratorBase::increment() {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
++current_;
#else
if (isArray_)
ValueInternalArray::increment(iterator_.array_);
ValueInternalMap::increment(iterator_.map_);
#endif
}
void ValueIteratorBase::decrement() {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
--current_;
#else
if (isArray_)
ValueInternalArray::decrement(iterator_.array_);
ValueInternalMap::decrement(iterator_.map_);
#endif
}
ValueIteratorBase::difference_type
ValueIteratorBase::computeDistance(const SelfType& other) const {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
#ifdef JSON_USE_CPPTL_SMALLMAP
return current_ - other.current_;
return other.current_ - current_;
#else
// Iterators for null values are initialized using the default
// constructor, which initializes current_ to the default
@@ -100,80 +61,58 @@ ValueIteratorBase::computeDistance(const SelfType& other) const {
}
return myDistance;
#endif
#else
if (isArray_)
return ValueInternalArray::distance(iterator_.array_,
other.iterator_.array_);
return ValueInternalMap::distance(iterator_.map_, other.iterator_.map_);
#endif
}
bool ValueIteratorBase::isEqual(const SelfType& other) const {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
if (isNull_) {
return other.isNull_;
}
return current_ == other.current_;
#else
if (isArray_)
return ValueInternalArray::equals(iterator_.array_, other.iterator_.array_);
return ValueInternalMap::equals(iterator_.map_, other.iterator_.map_);
#endif
}
void ValueIteratorBase::copy(const SelfType& other) {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
current_ = other.current_;
isNull_ = other.isNull_;
#else
if (isArray_)
iterator_.array_ = other.iterator_.array_;
iterator_.map_ = other.iterator_.map_;
#endif
}
Value ValueIteratorBase::key() const {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
const Value::CZString czstring = (*current_).first;
if (czstring.c_str()) {
if (czstring.data()) {
if (czstring.isStaticString())
return Value(StaticString(czstring.c_str()));
return Value(czstring.c_str());
return Value(StaticString(czstring.data()));
return Value(czstring.data(), czstring.data() + czstring.length());
}
return Value(czstring.index());
#else
if (isArray_)
return Value(ValueInternalArray::indexOf(iterator_.array_));
bool isStatic;
const char* memberName = ValueInternalMap::key(iterator_.map_, isStatic);
if (isStatic)
return Value(StaticString(memberName));
return Value(memberName);
#endif
}
UInt ValueIteratorBase::index() const {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
const Value::CZString czstring = (*current_).first;
if (!czstring.c_str())
if (!czstring.data())
return czstring.index();
return Value::UInt(-1);
#else
if (isArray_)
return Value::UInt(ValueInternalArray::indexOf(iterator_.array_));
return Value::UInt(-1);
#endif
}
const char* ValueIteratorBase::memberName() const {
#ifndef JSON_VALUE_USE_INTERNAL_MAP
const char* name = (*current_).first.c_str();
std::string ValueIteratorBase::name() const {
char const* key;
char const* end;
key = memberName(&end);
if (!key) return std::string();
return std::string(key, end);
}
char const* ValueIteratorBase::memberName() const {
const char* name = (*current_).first.data();
return name ? name : "";
#else
if (!isArray_)
return ValueInternalMap::key(iterator_.map_);
return "";
#endif
}
char const* ValueIteratorBase::memberName(char const** end) const {
const char* name = (*current_).first.data();
if (!name) {
*end = NULL;
return NULL;
}
*end = name + (*current_).first.length();
return name;
}
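Because memberName() returns a NUL-terminated pointer while name() and the (begin, end) overload carry an explicit length, only the latter two preserve keys with embedded zero bytes. A short sketch of the distinction (the iteration code is illustrative only):

#include <json/json.h>
#include <string>

void keyExample(const Json::Value& obj) {
  for (Json::Value::const_iterator it = obj.begin(); it != obj.end(); ++it) {
    std::string key = it.name();        // length-aware; keeps embedded '\0'
    const char* cstr = it.memberName(); // stops at the first '\0'
    (void)key;
    (void)cstr;
  }
}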
// //////////////////////////////////////////////////////////////////
@@ -186,19 +125,9 @@ const char* ValueIteratorBase::memberName() const {
ValueConstIterator::ValueConstIterator() {}
#ifndef JSON_VALUE_USE_INTERNAL_MAP
ValueConstIterator::ValueConstIterator(
const Value::ObjectValues::iterator& current)
: ValueIteratorBase(current) {}
#else
ValueConstIterator::ValueConstIterator(
const ValueInternalArray::IteratorState& state)
: ValueIteratorBase(state) {}
ValueConstIterator::ValueConstIterator(
const ValueInternalMap::IteratorState& state)
: ValueIteratorBase(state) {}
#endif
ValueConstIterator& ValueConstIterator::
operator=(const ValueIteratorBase& other) {
@@ -216,16 +145,8 @@ operator=(const ValueIteratorBase& other) {
ValueIterator::ValueIterator() {}
#ifndef JSON_VALUE_USE_INTERNAL_MAP
ValueIterator::ValueIterator(const Value::ObjectValues::iterator& current)
: ValueIteratorBase(current) {}
#else
ValueIterator::ValueIterator(const ValueInternalArray::IteratorState& state)
: ValueIteratorBase(state) {}
ValueIterator::ValueIterator(const ValueInternalMap::IteratorState& state)
: ValueIteratorBase(state) {}
#endif
ValueIterator::ValueIterator(const ValueConstIterator& other)
: ValueIteratorBase(other) {}


@@ -7,18 +7,30 @@
#include <json/writer.h>
#include "json_tool.h"
#endif // if !defined(JSON_IS_AMALGAMATION)
#include <utility>
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sstream>
#include <iomanip>
#include <math.h>
#include <memory>
#include <sstream>
#include <utility>
#include <set>
#include <cassert>
#include <cstring>
#include <cstdio>
#if defined(_MSC_VER) && _MSC_VER < 1500 // VC++ 8.0 and below
#if defined(_MSC_VER) && _MSC_VER >= 1200 && _MSC_VER < 1800 // Between VC++ 6.0 and VC++ 11.0
#include <float.h>
#define isfinite _finite
#elif defined(__sun) && defined(__SVR4) //Solaris
#include <ieeefp.h>
#define isfinite finite
#else
#include <cmath>
#define isfinite std::isfinite
#endif
#if defined(_MSC_VER) && _MSC_VER < 1500 // VC++ 8.0 and below
#define snprintf _snprintf
#elif __cplusplus >= 201103L
#define snprintf std::snprintf
#endif
#if defined(_MSC_VER) && _MSC_VER >= 1400 // VC++ 8.0
@@ -26,13 +38,14 @@
#pragma warning(disable : 4996)
#endif
#if defined(__sun) && defined(__SVR4) //Solaris
#include <ieeefp.h>
#define isfinite finite
#endif
namespace Json {
#if __cplusplus >= 201103L
typedef std::unique_ptr<StreamWriter> StreamWriterPtr;
#else
typedef std::auto_ptr<StreamWriter> StreamWriterPtr;
#endif
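// Note: std::auto_ptr serves only as the pre-C++11 fallback here; it was
// deprecated in C++11 and removed in C++17.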
static bool containsControlCharacter(const char* str) {
while (*str) {
if (isControlCharacter(*(str++)))
@@ -41,6 +54,16 @@ static bool containsControlCharacter(const char* str) {
return false;
}
static bool containsControlCharacter0(const char* str, unsigned len) {
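// Unlike containsControlCharacter() above, this variant is length-bounded,
// so an embedded NUL byte is reported as a control character instead of
// silently terminating the scan.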
char const* end = str + len;
while (end != str) {
if (isControlCharacter(*str) || 0==*str)
return true;
++str;
}
return false;
}
std::string valueToString(LargestInt value) {
UIntToStringBuffer buffer;
char* current = buffer + sizeof(buffer);
@@ -175,6 +198,84 @@ std::string valueToQuotedString(const char* value) {
return result;
}
// https://github.com/upcaste/upcaste/blob/master/src/upcore/src/cstring/strnpbrk.cpp
static char const* strnpbrk(char const* s, char const* accept, size_t n) {
assert((s || !n) && accept);
char const* const end = s + n;
for (char const* cur = s; cur < end; ++cur) {
int const c = *cur;
for (char const* a = accept; *a; ++a) {
if (*a == c) {
return cur;
}
}
}
return NULL;
}
static std::string valueToQuotedStringN(const char* value, unsigned length) {
if (value == NULL)
return "";
// Not sure how to handle unicode...
if (strnpbrk(value, "\"\\\b\f\n\r\t", length) == NULL &&
!containsControlCharacter0(value, length))
return std::string("\"") + value + "\"";
// We have to walk value and escape any special characters.
// Appending to std::string is not efficient, but this should be rare.
// (Note: forward slashes are *not* rare, but I am not escaping them.)
std::string::size_type maxsize =
length * 2 + 3; // allescaped+quotes+NULL
std::string result;
result.reserve(maxsize); // to avoid lots of mallocs
result += "\"";
char const* end = value + length;
for (const char* c = value; c != end; ++c) {
switch (*c) {
case '\"':
result += "\\\"";
break;
case '\\':
result += "\\\\";
break;
case '\b':
result += "\\b";
break;
case '\f':
result += "\\f";
break;
case '\n':
result += "\\n";
break;
case '\r':
result += "\\r";
break;
case '\t':
result += "\\t";
break;
// case '/':
// Even though \/ is considered a legal escape in JSON, a bare
// slash is also legal, so I see no reason to escape it.
// (I hope I am not misunderstanding something.)
// blep notes: actually escaping \/ may be useful in javascript to avoid </
// sequence.
// Should add a flag to allow this compatibility mode and prevent this
// sequence from occurring.
default:
if ((isControlCharacter(*c)) || (*c == 0)) {
std::ostringstream oss;
oss << "\\u" << std::hex << std::uppercase << std::setfill('0')
<< std::setw(4) << static_cast<int>(*c);
result += oss.str();
} else {
result += *c;
}
break;
}
}
result += "\"";
return result;
}
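// Worked example of the escaping above (valueToQuotedStringN is file-static,
// so this is an input/output note rather than callable code):
//   input bytes   : a  "  b  \n          (length 4)
//   quoted output : "a\"b\n"             (8 characters, including the quotes)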
// Class Writer
// //////////////////////////////////////////////////////////////////
Writer::~Writer() {}
@@ -239,7 +340,7 @@ void FastWriter::writeValue(const Value& value) {
const std::string& name = *it;
if (it != members.begin())
document_ += ',';
document_ += valueToQuotedString(name.c_str());
document_ += valueToQuotedStringN(name.data(), name.length());
document_ += yamlCompatiblityEnabled_ ? ": " : ":";
writeValue(value[name]);
}
@@ -280,8 +381,15 @@ void StyledWriter::writeValue(const Value& value) {
pushValue(valueToString(value.asDouble()));
break;
case stringValue:
pushValue(valueToQuotedString(value.asCString()));
{
// Is NULL possible for value.string_?
char const* str;
char const* end;
bool ok = value.getString(&str, &end);
if (ok) pushValue(valueToQuotedStringN(str, static_cast<unsigned>(end-str)));
else pushValue("");
break;
}
case booleanValue:
pushValue(valueToString(value.asBool()));
break;
@@ -376,6 +484,9 @@ bool StyledWriter::isMultineArray(const Value& value) {
addChildValues_ = true;
int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
for (int index = 0; index < size; ++index) {
if (hasCommentForValue(value[index])) {
isMultiLine = true;
}
writeValue(value[index]);
lineLength += int(childValues_[index].length());
}
@@ -421,26 +532,27 @@ void StyledWriter::writeCommentBeforeValue(const Value& root) {
document_ += "\n";
writeIndent();
std::string normalizedComment = normalizeEOL(root.getComment(commentBefore));
std::string::const_iterator iter = normalizedComment.begin();
while (iter != normalizedComment.end()) {
const std::string& comment = root.getComment(commentBefore);
std::string::const_iterator iter = comment.begin();
while (iter != comment.end()) {
document_ += *iter;
if (*iter == '\n' && *(iter + 1) == '/')
if (*iter == '\n' &&
(iter != comment.end() && *(iter + 1) == '/'))
writeIndent();
++iter;
}
// Comments are stripped of newlines, so add one here
// Comments are stripped of trailing newlines, so add one here
document_ += "\n";
}
void StyledWriter::writeCommentAfterValueOnSameLine(const Value& root) {
if (root.hasComment(commentAfterOnSameLine))
document_ += " " + normalizeEOL(root.getComment(commentAfterOnSameLine));
document_ += " " + root.getComment(commentAfterOnSameLine);
if (root.hasComment(commentAfter)) {
document_ += "\n";
document_ += normalizeEOL(root.getComment(commentAfter));
document_ += root.getComment(commentAfter);
document_ += "\n";
}
}
@@ -451,25 +563,6 @@ bool StyledWriter::hasCommentForValue(const Value& value) {
value.hasComment(commentAfter);
}
std::string StyledWriter::normalizeEOL(const std::string& text) {
std::string normalized;
normalized.reserve(text.length());
const char* begin = text.c_str();
const char* end = begin + text.length();
const char* current = begin;
while (current != end) {
char c = *current++;
if (c == '\r') // mac or dos EOL
{
if (*current == '\n') // convert dos EOL
++current;
normalized += '\n';
} else // handle unix EOL & other char
normalized += c;
}
return normalized;
}
// Class StyledStreamWriter
// //////////////////////////////////////////////////////////////////
@@ -481,7 +574,10 @@ void StyledStreamWriter::write(std::ostream& out, const Value& root) {
document_ = &out;
addChildValues_ = false;
indentString_ = "";
indented_ = true;
writeCommentBeforeValue(root);
if (!indented_) writeIndent();
indented_ = true;
writeValue(root);
writeCommentAfterValueOnSameLine(root);
*document_ << "\n";
@@ -557,8 +653,10 @@ void StyledStreamWriter::writeArrayValue(const Value& value) {
if (hasChildValue)
writeWithIndent(childValues_[index]);
else {
writeIndent();
if (!indented_) writeIndent();
indented_ = true;
writeValue(childValue);
indented_ = false;
}
if (++index == size) {
writeCommentAfterValueOnSameLine(childValue);
@@ -599,6 +697,9 @@ bool StyledStreamWriter::isMultineArray(const Value& value) {
addChildValues_ = true;
int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
for (int index = 0; index < size; ++index) {
if (hasCommentForValue(value[index])) {
isMultiLine = true;
}
writeValue(value[index]);
lineLength += int(childValues_[index].length());
}
@@ -616,24 +717,17 @@ void StyledStreamWriter::pushValue(const std::string& value) {
}
void StyledStreamWriter::writeIndent() {
/*
Some comments in this method would have been nice. ;-)
if ( !document_.empty() )
{
char last = document_[document_.length()-1];
if ( last == ' ' ) // already indented
return;
if ( last != '\n' ) // Comments may add new-line
*document_ << '\n';
}
*/
// blep intended this to look at the so-far-written string
// to determine whether we are already indented, but
// with a stream we cannot do that. So we rely on some saved state.
// The caller checks indented_.
*document_ << '\n' << indentString_;
}
void StyledStreamWriter::writeWithIndent(const std::string& value) {
writeIndent();
if (!indented_) writeIndent();
*document_ << value;
indented_ = false;
}
void StyledStreamWriter::indent() { indentString_ += indentation_; }
@@ -646,19 +740,30 @@ void StyledStreamWriter::unindent() {
void StyledStreamWriter::writeCommentBeforeValue(const Value& root) {
if (!root.hasComment(commentBefore))
return;
*document_ << normalizeEOL(root.getComment(commentBefore));
*document_ << "\n";
if (!indented_) writeIndent();
const std::string& comment = root.getComment(commentBefore);
std::string::const_iterator iter = comment.begin();
while (iter != comment.end()) {
*document_ << *iter;
if (*iter == '\n' &&
(iter != comment.end() && *(iter + 1) == '/'))
// writeIndent(); // would include newline
*document_ << indentString_;
++iter;
}
indented_ = false;
}
void StyledStreamWriter::writeCommentAfterValueOnSameLine(const Value& root) {
if (root.hasComment(commentAfterOnSameLine))
*document_ << " " + normalizeEOL(root.getComment(commentAfterOnSameLine));
*document_ << ' ' << root.getComment(commentAfterOnSameLine);
if (root.hasComment(commentAfter)) {
*document_ << "\n";
*document_ << normalizeEOL(root.getComment(commentAfter));
*document_ << "\n";
writeIndent();
*document_ << root.getComment(commentAfter);
}
indented_ = false;
}
bool StyledStreamWriter::hasCommentForValue(const Value& value) {
@@ -667,28 +772,386 @@ bool StyledStreamWriter::hasCommentForValue(const Value& value) {
value.hasComment(commentAfter);
}
std::string StyledStreamWriter::normalizeEOL(const std::string& text) {
std::string normalized;
normalized.reserve(text.length());
const char* begin = text.c_str();
const char* end = begin + text.length();
const char* current = begin;
while (current != end) {
char c = *current++;
if (c == '\r') // mac or dos EOL
{
if (*current == '\n') // convert dos EOL
++current;
normalized += '\n';
} else // handle unix EOL & other char
normalized += c;
//////////////////////////
// BuiltStyledStreamWriter
/// Scoped enums are not available until C++11.
struct CommentStyle {
/// Decide whether to write comments.
enum Enum {
None, ///< Drop all comments.
Most, ///< Recover odd behavior of previous versions (not implemented yet).
All ///< Keep all comments.
};
};
struct BuiltStyledStreamWriter : public StreamWriter
{
BuiltStyledStreamWriter(
std::string const& indentation,
CommentStyle::Enum cs,
std::string const& colonSymbol,
std::string const& nullSymbol,
std::string const& endingLineFeedSymbol);
virtual int write(Value const& root, std::ostream* sout);
private:
void writeValue(Value const& value);
void writeArrayValue(Value const& value);
bool isMultineArray(Value const& value);
void pushValue(std::string const& value);
void writeIndent();
void writeWithIndent(std::string const& value);
void indent();
void unindent();
void writeCommentBeforeValue(Value const& root);
void writeCommentAfterValueOnSameLine(Value const& root);
static bool hasCommentForValue(const Value& value);
typedef std::vector<std::string> ChildValues;
ChildValues childValues_;
std::string indentString_;
int rightMargin_;
std::string indentation_;
CommentStyle::Enum cs_;
std::string colonSymbol_;
std::string nullSymbol_;
std::string endingLineFeedSymbol_;
bool addChildValues_ : 1;
bool indented_ : 1;
};
BuiltStyledStreamWriter::BuiltStyledStreamWriter(
std::string const& indentation,
CommentStyle::Enum cs,
std::string const& colonSymbol,
std::string const& nullSymbol,
std::string const& endingLineFeedSymbol)
: rightMargin_(74)
, indentation_(indentation)
, cs_(cs)
, colonSymbol_(colonSymbol)
, nullSymbol_(nullSymbol)
, endingLineFeedSymbol_(endingLineFeedSymbol)
, addChildValues_(false)
, indented_(false)
{
}
int BuiltStyledStreamWriter::write(Value const& root, std::ostream* sout)
{
sout_ = sout;
addChildValues_ = false;
indented_ = true;
indentString_ = "";
writeCommentBeforeValue(root);
if (!indented_) writeIndent();
indented_ = true;
writeValue(root);
writeCommentAfterValueOnSameLine(root);
*sout_ << endingLineFeedSymbol_;
sout_ = NULL;
return 0;
}
void BuiltStyledStreamWriter::writeValue(Value const& value) {
switch (value.type()) {
case nullValue:
pushValue(nullSymbol_);
break;
case intValue:
pushValue(valueToString(value.asLargestInt()));
break;
case uintValue:
pushValue(valueToString(value.asLargestUInt()));
break;
case realValue:
pushValue(valueToString(value.asDouble()));
break;
case stringValue:
{
// Is NULL possible for value.string_?
char const* str;
char const* end;
bool ok = value.getString(&str, &end);
if (ok) pushValue(valueToQuotedStringN(str, static_cast<unsigned>(end-str)));
else pushValue("");
break;
}
case booleanValue:
pushValue(valueToString(value.asBool()));
break;
case arrayValue:
writeArrayValue(value);
break;
case objectValue: {
Value::Members members(value.getMemberNames());
if (members.empty())
pushValue("{}");
else {
writeWithIndent("{");
indent();
Value::Members::iterator it = members.begin();
for (;;) {
std::string const& name = *it;
Value const& childValue = value[name];
writeCommentBeforeValue(childValue);
writeWithIndent(valueToQuotedStringN(name.data(), name.length()));
*sout_ << colonSymbol_;
writeValue(childValue);
if (++it == members.end()) {
writeCommentAfterValueOnSameLine(childValue);
break;
}
*sout_ << ",";
writeCommentAfterValueOnSameLine(childValue);
}
unindent();
writeWithIndent("}");
}
} break;
}
return normalized;
}
std::ostream& operator<<(std::ostream& sout, const Value& root) {
Json::StyledStreamWriter writer;
writer.write(sout, root);
void BuiltStyledStreamWriter::writeArrayValue(Value const& value) {
unsigned size = value.size();
if (size == 0)
pushValue("[]");
else {
bool isMultiLine = (cs_ == CommentStyle::All) || isMultineArray(value);
if (isMultiLine) {
writeWithIndent("[");
indent();
bool hasChildValue = !childValues_.empty();
unsigned index = 0;
for (;;) {
Value const& childValue = value[index];
writeCommentBeforeValue(childValue);
if (hasChildValue)
writeWithIndent(childValues_[index]);
else {
if (!indented_) writeIndent();
indented_ = true;
writeValue(childValue);
indented_ = false;
}
if (++index == size) {
writeCommentAfterValueOnSameLine(childValue);
break;
}
*sout_ << ",";
writeCommentAfterValueOnSameLine(childValue);
}
unindent();
writeWithIndent("]");
} else // output on a single line
{
assert(childValues_.size() == size);
*sout_ << "[";
if (!indentation_.empty()) *sout_ << " ";
for (unsigned index = 0; index < size; ++index) {
if (index > 0)
*sout_ << ", ";
*sout_ << childValues_[index];
}
if (!indentation_.empty()) *sout_ << " ";
*sout_ << "]";
}
}
}
bool BuiltStyledStreamWriter::isMultineArray(Value const& value) {
int size = value.size();
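// Heuristic: each element costs at least ~3 output characters ("x, "), so
// once size * 3 reaches rightMargin_ the array cannot fit on one line.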
bool isMultiLine = size * 3 >= rightMargin_;
childValues_.clear();
for (int index = 0; index < size && !isMultiLine; ++index) {
Value const& childValue = value[index];
isMultiLine =
isMultiLine || ((childValue.isArray() || childValue.isObject()) &&
childValue.size() > 0);
}
if (!isMultiLine) // check if line length > max line length
{
childValues_.reserve(size);
addChildValues_ = true;
int lineLength = 4 + (size - 1) * 2; // '[ ' + ', '*n + ' ]'
for (int index = 0; index < size; ++index) {
if (hasCommentForValue(value[index])) {
isMultiLine = true;
}
writeValue(value[index]);
lineLength += int(childValues_[index].length());
}
addChildValues_ = false;
isMultiLine = isMultiLine || lineLength >= rightMargin_;
}
return isMultiLine;
}
void BuiltStyledStreamWriter::pushValue(std::string const& value) {
if (addChildValues_)
childValues_.push_back(value);
else
*sout_ << value;
}
void BuiltStyledStreamWriter::writeIndent() {
// blep intended this to look at the so-far-written string
// to determine whether we are already indented, but
// with a stream we cannot do that. So we rely on some saved state.
// The caller checks indented_.
if (!indentation_.empty()) {
// In this case, drop newlines too.
*sout_ << '\n' << indentString_;
}
}
void BuiltStyledStreamWriter::writeWithIndent(std::string const& value) {
if (!indented_) writeIndent();
*sout_ << value;
indented_ = false;
}
void BuiltStyledStreamWriter::indent() { indentString_ += indentation_; }
void BuiltStyledStreamWriter::unindent() {
assert(indentString_.size() >= indentation_.size());
indentString_.resize(indentString_.size() - indentation_.size());
}
void BuiltStyledStreamWriter::writeCommentBeforeValue(Value const& root) {
if (cs_ == CommentStyle::None) return;
if (!root.hasComment(commentBefore))
return;
if (!indented_) writeIndent();
const std::string& comment = root.getComment(commentBefore);
std::string::const_iterator iter = comment.begin();
while (iter != comment.end()) {
*sout_ << *iter;
if (*iter == '\n' &&
(iter != comment.end() && *(iter + 1) == '/'))
// writeIndent(); // would write extra newline
*sout_ << indentString_;
++iter;
}
indented_ = false;
}
void BuiltStyledStreamWriter::writeCommentAfterValueOnSameLine(Value const& root) {
if (cs_ == CommentStyle::None) return;
if (root.hasComment(commentAfterOnSameLine))
*sout_ << " " + root.getComment(commentAfterOnSameLine);
if (root.hasComment(commentAfter)) {
writeIndent();
*sout_ << root.getComment(commentAfter);
}
}
// static
bool BuiltStyledStreamWriter::hasCommentForValue(const Value& value) {
return value.hasComment(commentBefore) ||
value.hasComment(commentAfterOnSameLine) ||
value.hasComment(commentAfter);
}
///////////////
// StreamWriter
StreamWriter::StreamWriter()
: sout_(NULL)
{
}
StreamWriter::~StreamWriter()
{
}
StreamWriter::Factory::~Factory()
{}
StreamWriterBuilder::StreamWriterBuilder()
{
setDefaults(&settings_);
}
StreamWriterBuilder::~StreamWriterBuilder()
{}
StreamWriter* StreamWriterBuilder::newStreamWriter() const
{
std::string indentation = settings_["indentation"].asString();
std::string cs_str = settings_["commentStyle"].asString();
bool eyc = settings_["enableYAMLCompatibility"].asBool();
bool dnp = settings_["dropNullPlaceholders"].asBool();
CommentStyle::Enum cs = CommentStyle::All;
if (cs_str == "All") {
cs = CommentStyle::All;
} else if (cs_str == "None") {
cs = CommentStyle::None;
} else {
throwRuntimeError("commentStyle must be 'All' or 'None'");
}
std::string colonSymbol = " : ";
if (eyc) {
colonSymbol = ": ";
} else if (indentation.empty()) {
colonSymbol = ":";
}
std::string nullSymbol = "null";
if (dnp) {
nullSymbol = "";
}
std::string endingLineFeedSymbol = "";
return new BuiltStyledStreamWriter(
indentation, cs,
colonSymbol, nullSymbol, endingLineFeedSymbol);
}
static void getValidWriterKeys(std::set<std::string>* valid_keys)
{
valid_keys->clear();
valid_keys->insert("indentation");
valid_keys->insert("commentStyle");
valid_keys->insert("enableYAMLCompatibility");
valid_keys->insert("dropNullPlaceholders");
}
bool StreamWriterBuilder::validate(Json::Value* invalid) const
{
Json::Value my_invalid;
if (!invalid) invalid = &my_invalid; // so we do not need to test for NULL
Json::Value& inv = *invalid;
std::set<std::string> valid_keys;
getValidWriterKeys(&valid_keys);
Value::Members keys = settings_.getMemberNames();
size_t n = keys.size();
for (size_t i = 0; i < n; ++i) {
std::string const& key = keys[i];
if (valid_keys.find(key) == valid_keys.end()) {
inv[key] = settings_[key];
}
}
return 0u == inv.size();
}
Value& StreamWriterBuilder::operator[](std::string key)
{
return settings_[key];
}
// static
void StreamWriterBuilder::setDefaults(Json::Value* settings)
{
//! [StreamWriterBuilderDefaults]
(*settings)["commentStyle"] = "All";
(*settings)["indentation"] = "\t";
(*settings)["enableYAMLCompatibility"] = false;
(*settings)["dropNullPlaceholders"] = false;
//! [StreamWriterBuilderDefaults]
}
std::string writeString(StreamWriter::Factory const& builder, Value const& root) {
std::ostringstream sout;
StreamWriterPtr const writer(builder.newStreamWriter());
writer->write(root, &sout);
return sout.str();
}
std::ostream& operator<<(std::ostream& sout, Value const& root) {
StreamWriterBuilder builder;
StreamWriterPtr const writer(builder.newStreamWriter());
writer->write(root, &sout);
return sout;
}
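Putting the new builder API together: settings are plain Json::Value entries keyed by the names listed in setDefaults(), and writeString() drives the whole pipeline. A minimal sketch using only the keys shown above:

#include <json/json.h>
#include <iostream>
#include <string>

int builderExample() {
  Json::Value root;
  root["name"] = "demo";
  Json::StreamWriterBuilder builder;
  builder["indentation"] = "  ";    // two spaces instead of the default "\t"
  builder["commentStyle"] = "None"; // drop comments from the output
  std::string doc = Json::writeString(builder, root);
  std::cout << doc << std::endl;
  return 0;
}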


@@ -1,3 +1,4 @@
# vim: et ts=4 sts=4 sw=4 tw=0
IF(JSONCPP_LIB_BUILD_SHARED)
ADD_DEFINITIONS( -DJSON_DLL )
@@ -9,14 +10,32 @@ ADD_EXECUTABLE( jsoncpp_test
main.cpp
)
TARGET_LINK_LIBRARIES(jsoncpp_test jsoncpp_lib)
IF(JSONCPP_LIB_BUILD_SHARED)
TARGET_LINK_LIBRARIES(jsoncpp_test jsoncpp_lib)
ELSE(JSONCPP_LIB_BUILD_SHARED)
TARGET_LINK_LIBRARIES(jsoncpp_test jsoncpp_lib_static)
ENDIF(JSONCPP_LIB_BUILD_SHARED)
# another way to solve issue #90
#set_target_properties(jsoncpp_test PROPERTIES COMPILE_FLAGS -ffloat-store)
# Run unit tests in post-build
# (the default CMake workflow hides the test result away in a file, which makes for a poor dev workflow)
IF(JSONCPP_WITH_POST_BUILD_UNITTEST)
ADD_CUSTOM_COMMAND( TARGET jsoncpp_test
POST_BUILD
COMMAND $<TARGET_FILE:jsoncpp_test>)
IF(JSONCPP_LIB_BUILD_SHARED)
# First, copy the shared lib, for Microsoft.
# Then, run the test executable.
ADD_CUSTOM_COMMAND( TARGET jsoncpp_test
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different $<TARGET_FILE:jsoncpp_lib> $<TARGET_FILE_DIR:jsoncpp_test>
COMMAND $<TARGET_FILE:jsoncpp_test>)
ELSE(JSONCPP_LIB_BUILD_SHARED)
# Just run the test executable.
ADD_CUSTOM_COMMAND( TARGET jsoncpp_test
POST_BUILD
COMMAND $<TARGET_FILE:jsoncpp_test>)
ENDIF(JSONCPP_LIB_BUILD_SHARED)
ENDIF(JSONCPP_WITH_POST_BUILD_UNITTEST)
SET_TARGET_PROPERTIES(jsoncpp_test PROPERTIES OUTPUT_NAME jsoncpp_test)


@@ -323,7 +323,7 @@ void Runner::listTests() const {
}
int Runner::runCommandLine(int argc, const char* argv[]) const {
typedef std::deque<std::string> TestNames;
// typedef std::deque<std::string> TestNames;
Runner subrunner;
for (int index = 1; index < argc; ++index) {
std::string opt = argv[index];

View File

@@ -178,8 +178,8 @@ private:
template <typename T, typename U>
TestResult& checkEqual(TestResult& result,
const T& expected,
const U& actual,
T expected,
U actual,
const char* file,
unsigned int line,
const char* expr) {
@@ -214,7 +214,7 @@ TestResult& checkStringEqual(TestResult& result,
#define JSONTEST_ASSERT_PRED(expr) \
{ \
JsonTest::PredicateContext _minitest_Context = { \
result_->predicateId_, __FILE__, __LINE__, #expr \
result_->predicateId_, __FILE__, __LINE__, #expr, NULL, NULL \
}; \
result_->predicateStackTail_->next_ = &_minitest_Context; \
result_->predicateId_ += 1; \

View File

@@ -6,7 +6,7 @@
#include "jsontest.h"
#include <json/config.h>
#include <json/json.h>
#include <stdexcept>
#include <cstring>
// Make numeric limits more convenient to talk about.
// Assumes int type is 32 bits.
@@ -17,8 +17,8 @@
#define kint64min Json::Value::minInt64
#define kuint64max Json::Value::maxUInt64
static const double kdint64max = double(kint64max);
static const float kfint64max = float(kint64max);
//static const double kdint64max = double(kint64max);
//static const float kfint64max = float(kint64max);
static const float kfint32max = float(kint32max);
static const float kfuint32max = float(kuint32max);
@@ -198,6 +198,18 @@ JSONTEST_FIXTURE(ValueTest, objects) {
object1_["some other id"] = "foo";
JSONTEST_ASSERT_EQUAL(Json::Value("foo"), object1_["some other id"]);
JSONTEST_ASSERT_EQUAL(Json::Value("foo"), object1_["some other id"]);
// Remove.
Json::Value got;
bool did;
did = object1_.removeMember("some other id", &got);
JSONTEST_ASSERT_EQUAL(Json::Value("foo"), got);
JSONTEST_ASSERT_EQUAL(true, did);
got = Json::Value("bar");
did = object1_.removeMember("some other id", &got);
JSONTEST_ASSERT_EQUAL(Json::Value("bar"), got);
JSONTEST_ASSERT_EQUAL(false, did);
}
JSONTEST_FIXTURE(ValueTest, arrays) {
@@ -240,6 +252,10 @@ JSONTEST_FIXTURE(ValueTest, arrays) {
array1_[2] = Json::Value(17);
JSONTEST_ASSERT_EQUAL(Json::Value(), array1_[1]);
JSONTEST_ASSERT_EQUAL(Json::Value(17), array1_[2]);
Json::Value got;
JSONTEST_ASSERT_EQUAL(true, array1_.removeIndex(2, &got));
JSONTEST_ASSERT_EQUAL(Json::Value(17), got);
JSONTEST_ASSERT_EQUAL(false, array1_.removeIndex(2, &got)); // gone now
}
JSONTEST_FIXTURE(ValueTest, null) {
@@ -265,6 +281,8 @@ JSONTEST_FIXTURE(ValueTest, null) {
JSONTEST_ASSERT_EQUAL(0.0, null_.asDouble());
JSONTEST_ASSERT_EQUAL(0.0, null_.asFloat());
JSONTEST_ASSERT_STRING_EQUAL("", null_.asString());
JSONTEST_ASSERT_EQUAL(Json::Value::null, null_);
}
JSONTEST_FIXTURE(ValueTest, strings) {
@@ -1499,6 +1517,126 @@ JSONTEST_FIXTURE(ValueTest, offsetAccessors) {
JSONTEST_ASSERT(y.getOffsetLimit() == 0);
}
JSONTEST_FIXTURE(ValueTest, StaticString) {
char mutant[] = "hello";
Json::StaticString ss(mutant);
std::string regular(mutant);
mutant[1] = 'a';
JSONTEST_ASSERT_STRING_EQUAL("hallo", ss.c_str());
JSONTEST_ASSERT_STRING_EQUAL("hello", regular.c_str());
{
Json::Value root;
root["top"] = ss;
JSONTEST_ASSERT_STRING_EQUAL("hallo", root["top"].asString());
mutant[1] = 'u';
JSONTEST_ASSERT_STRING_EQUAL("hullo", root["top"].asString());
}
{
Json::Value root;
root["top"] = regular;
JSONTEST_ASSERT_STRING_EQUAL("hello", root["top"].asString());
mutant[1] = 'u';
JSONTEST_ASSERT_STRING_EQUAL("hello", root["top"].asString());
}
}
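// Note on the fixture above: Json::StaticString wraps the caller's buffer
// without copying, so the Value stored via `ss` observes later writes to
// `mutant`, while the Value stored via the std::string `regular` copied
// the bytes at assignment and is unaffected.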
JSONTEST_FIXTURE(ValueTest, CommentBefore) {
Json::Value val; // fill val
val.setComment("// this comment should appear before", Json::commentBefore);
Json::StreamWriterBuilder wbuilder;
wbuilder.settings_["commentStyle"] = "All";
{
char const expected[] = "// this comment should appear before\nnull";
std::string result = Json::writeString(wbuilder, val);
JSONTEST_ASSERT_STRING_EQUAL(expected, result);
std::string res2 = val.toStyledString();
std::string exp2 = "\n";
exp2 += expected;
exp2 += "\n";
JSONTEST_ASSERT_STRING_EQUAL(exp2, res2);
}
Json::Value other = "hello";
val.swapPayload(other);
{
char const expected[] = "// this comment should appear before\n\"hello\"";
std::string result = Json::writeString(wbuilder, val);
JSONTEST_ASSERT_STRING_EQUAL(expected, result);
std::string res2 = val.toStyledString();
std::string exp2 = "\n";
exp2 += expected;
exp2 += "\n";
JSONTEST_ASSERT_STRING_EQUAL(exp2, res2);
JSONTEST_ASSERT_STRING_EQUAL("null\n", other.toStyledString());
}
val = "hello";
// val.setComment("// this comment should appear before", Json::CommentPlacement::commentBefore);
// Assignment overwrites comments.
{
char const expected[] = "\"hello\"";
std::string result = Json::writeString(wbuilder, val);
JSONTEST_ASSERT_STRING_EQUAL(expected, result);
std::string res2 = val.toStyledString();
std::string exp2 = "";
exp2 += expected;
exp2 += "\n";
JSONTEST_ASSERT_STRING_EQUAL(exp2, res2);
}
}
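// Note on the fixture above: toStyledString() appends a trailing newline,
// and additionally prepends one when the value carries a comment;
// writeString() (with an empty endingLineFeedSymbol) emits neither.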
JSONTEST_FIXTURE(ValueTest, zeroes) {
char const cstr[] = "h\0i";
std::string binary(cstr, sizeof(cstr)); // include trailing 0
JSONTEST_ASSERT_EQUAL(4U, binary.length());
Json::StreamWriterBuilder b;
{
Json::Value root;
root = binary;
JSONTEST_ASSERT_STRING_EQUAL(binary, root.asString());
}
{
char const top[] = "top";
Json::Value root;
root[top] = binary;
JSONTEST_ASSERT_STRING_EQUAL(binary, root[top].asString());
Json::Value removed;
bool did;
did = root.removeMember(top, top + sizeof(top) - 1U,
&removed);
JSONTEST_ASSERT(did);
JSONTEST_ASSERT_STRING_EQUAL(binary, removed.asString());
did = root.removeMember(top, top + sizeof(top) - 1U,
&removed);
JSONTEST_ASSERT(!did);
JSONTEST_ASSERT_STRING_EQUAL(binary, removed.asString()); // still
}
}
JSONTEST_FIXTURE(ValueTest, zeroesInKeys) {
char const cstr[] = "h\0i";
std::string binary(cstr, sizeof(cstr)); // include trailing 0
JSONTEST_ASSERT_EQUAL(4U, binary.length());
{
Json::Value root;
root[binary] = "there";
JSONTEST_ASSERT_STRING_EQUAL("there", root[binary].asString());
JSONTEST_ASSERT(!root.isMember("h"));
JSONTEST_ASSERT(root.isMember(binary));
JSONTEST_ASSERT_STRING_EQUAL("there", root.get(binary, Json::Value::nullRef).asString());
Json::Value removed;
bool did;
did = root.removeMember(binary.data(), binary.data() + binary.length(),
&removed);
JSONTEST_ASSERT(did);
JSONTEST_ASSERT_STRING_EQUAL("there", removed.asString());
did = root.removeMember(binary.data(), binary.data() + binary.length(),
&removed);
JSONTEST_ASSERT(!did);
JSONTEST_ASSERT_STRING_EQUAL("there", removed.asString()); // still
JSONTEST_ASSERT(!root.isMember(binary));
JSONTEST_ASSERT_STRING_EQUAL("", root.get(binary, Json::Value::nullRef).asString());
}
}
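// The two fixtures above rely on the (begin, end) overloads because keys
// and values with embedded '\0' cannot round-trip through const char*
// APIs, which stop at the first zero byte; std::string overloads such as
// root[binary] and root.isMember(binary) carry an explicit length.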
struct WriterTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(WriterTest, dropNullPlaceholders) {
@@ -1510,6 +1648,39 @@ JSONTEST_FIXTURE(WriterTest, dropNullPlaceholders) {
JSONTEST_ASSERT(writer.write(nullValue) == "\n");
}
struct StreamWriterTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(StreamWriterTest, dropNullPlaceholders) {
Json::StreamWriterBuilder b;
Json::Value nullValue;
b.settings_["dropNullPlaceholders"] = false;
JSONTEST_ASSERT(Json::writeString(b, nullValue) == "null");
b.settings_["dropNullPlaceholders"] = true;
JSONTEST_ASSERT(Json::writeString(b, nullValue) == "");
}
JSONTEST_FIXTURE(StreamWriterTest, writeZeroes) {
std::string binary("hi", 3); // include trailing 0
JSONTEST_ASSERT_EQUAL(3, binary.length());
std::string expected("\"hi\\u0000\""); // zero encoded as \u0000
Json::StreamWriterBuilder b;
{
Json::Value root;
root = binary;
JSONTEST_ASSERT_STRING_EQUAL(binary, root.asString());
std::string out = Json::writeString(b, root);
JSONTEST_ASSERT_EQUAL(expected.size(), out.size());
JSONTEST_ASSERT_STRING_EQUAL(expected, out);
}
{
Json::Value root;
root["top"] = binary;
JSONTEST_ASSERT_STRING_EQUAL(binary, root["top"].asString());
std::string out = Json::writeString(b, root["top"]);
JSONTEST_ASSERT_STRING_EQUAL(expected, out);
}
}
struct ReaderTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(ReaderTest, parseWithNoErrors) {
@@ -1601,6 +1772,552 @@ JSONTEST_FIXTURE(ReaderTest, parseWithDetailError) {
JSONTEST_ASSERT(errors.at(0).message == "Bad escape sequence in string");
}
struct CharReaderTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderTest, parseWithNoErrors) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] = "{ \"property\" : \"value\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs.size() == 0);
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithNoErrorsTestingOffsets) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"property\" : [\"value\", \"value2\"], \"obj\" : "
"{ \"nested\" : 123, \"bool\" : true}, \"null\" : "
"null, \"false\" : false }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs.size() == 0);
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithOneError) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"property\" :: \"value\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT(errs ==
"* Line 1, Column 15\n Syntax error: value, object or array "
"expected.\n");
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseChineseWithOneError) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"pr佐藤erty\" :: \"value\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT(errs ==
"* Line 1, Column 19\n Syntax error: value, object or array "
"expected.\n");
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithDetailError) {
Json::CharReaderBuilder b;
Json::CharReader* reader(b.newCharReader());
std::string errs;
Json::Value root;
char const doc[] =
"{ \"property\" : \"v\\alue\" }";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT(errs ==
"* Line 1, Column 16\n Bad escape sequence in string\nSee "
"Line 1, Column 20 for detail.\n");
delete reader;
}
JSONTEST_FIXTURE(CharReaderTest, parseWithStackLimit) {
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
"{ \"property\" : \"value\" }";
{
b.settings_["stackLimit"] = 2;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL("value", root["property"]);
delete reader;
}
{
b.settings_["stackLimit"] = 1;
Json::CharReader* reader(b.newCharReader());
std::string errs;
JSONTEST_ASSERT_THROWS(reader->parse(
doc, doc + std::strlen(doc),
&root, &errs));
delete reader;
}
}
struct CharReaderStrictModeTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderStrictModeTest, dupKeys) {
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
"{ \"property\" : \"value\", \"key\" : \"val1\", \"key\" : \"val2\" }";
{
b.strictMode(&b.settings_);
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT_STRING_EQUAL(
"* Line 1, Column 41\n"
" Duplicate key: 'key'\n",
errs);
JSONTEST_ASSERT_EQUAL("val1", root["key"]); // so far
delete reader;
}
}
struct CharReaderFailIfExtraTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, issue164) {
// This is interpreted as a string value followed by a colon.
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
" \"property\" : \"value\" }";
{
b.settings_["failIfExtra"] = false;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL("property", root);
delete reader;
}
{
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT_STRING_EQUAL(errs,
"* Line 1, Column 13\n"
" Extra non-whitespace after JSON value.\n");
JSONTEST_ASSERT_EQUAL("property", root);
delete reader;
}
{
b.settings_["failIfExtra"] = false;
b.strictMode(&b.settings_);
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT_STRING_EQUAL(errs,
"* Line 1, Column 13\n"
" Extra non-whitespace after JSON value.\n");
JSONTEST_ASSERT_EQUAL("property", root);
delete reader;
}
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, issue107) {
// This is interpreted as an int value followed by a colon.
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
"1:2:3";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(!ok);
JSONTEST_ASSERT_STRING_EQUAL(
"* Line 1, Column 2\n"
" Extra non-whitespace after JSON value.\n",
errs);
JSONTEST_ASSERT_EQUAL(1, root.asInt());
delete reader;
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, commentAfterObject) {
Json::CharReaderBuilder b;
Json::Value root;
{
char const doc[] =
"{ \"property\" : \"value\" } //trailing\n//comment\n";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL("value", root["property"]);
delete reader;
}
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, commentAfterArray) {
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
"[ \"property\" , \"value\" ] //trailing\n//comment\n";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL("value", root[1u]);
delete reader;
}
JSONTEST_FIXTURE(CharReaderFailIfExtraTest, commentAfterBool) {
Json::CharReaderBuilder b;
Json::Value root;
char const doc[] =
" true /*trailing\ncomment*/";
b.settings_["failIfExtra"] = true;
Json::CharReader* reader(b.newCharReader());
std::string errs;
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(true, root.asBool());
delete reader;
}
struct CharReaderAllowDropNullTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderAllowDropNullTest, issue178) {
Json::CharReaderBuilder b;
b.settings_["allowDroppedNullPlaceholders"] = true;
Json::Value root;
std::string errs;
Json::CharReader* reader(b.newCharReader());
{
char const doc[] = "{\"a\":,\"b\":true}";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(2u, root.size());
JSONTEST_ASSERT_EQUAL(Json::nullValue, root.get("a", true));
}
{
char const doc[] = "{\"a\":}";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(1u, root.size());
JSONTEST_ASSERT_EQUAL(Json::nullValue, root.get("a", true));
}
{
char const doc[] = "[]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL(0u, root.size());
JSONTEST_ASSERT_EQUAL(Json::arrayValue, root);
}
{
char const doc[] = "[null]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL(1u, root.size());
}
{
char const doc[] = "[,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(2u, root.size());
}
{
char const doc[] = "[,,,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(4u, root.size());
}
{
char const doc[] = "[null,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(2u, root.size());
}
{
char const doc[] = "[,null]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL(2u, root.size());
}
{
char const doc[] = "[,,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(3u, root.size());
}
{
char const doc[] = "[null,,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(3u, root.size());
}
{
char const doc[] = "[,null,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(3u, root.size());
}
{
char const doc[] = "[,,null]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL(3u, root.size());
}
{
char const doc[] = "[[],,,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(4u, root.size());
JSONTEST_ASSERT_EQUAL(Json::arrayValue, root[0u]);
}
{
char const doc[] = "[,[],,]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(4u, root.size());
JSONTEST_ASSERT_EQUAL(Json::arrayValue, root[1u]);
}
{
char const doc[] = "[,,,[]]";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT(errs == "");
JSONTEST_ASSERT_EQUAL(4u, root.size());
JSONTEST_ASSERT_EQUAL(Json::arrayValue, root[3u]);
}
delete reader;
}
struct CharReaderAllowSingleQuotesTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderAllowSingleQuotesTest, issue182) {
Json::CharReaderBuilder b;
b.settings_["allowSingleQuotes"] = true;
Json::Value root;
std::string errs;
Json::CharReader* reader(b.newCharReader());
{
char const doc[] = "{'a':true,\"b\":true}";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(2u, root.size());
JSONTEST_ASSERT_EQUAL(true, root.get("a", false));
JSONTEST_ASSERT_EQUAL(true, root.get("b", false));
}
{
char const doc[] = "{'a': 'x', \"b\":'y'}";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(2u, root.size());
JSONTEST_ASSERT_STRING_EQUAL("x", root["a"].asString());
JSONTEST_ASSERT_STRING_EQUAL("y", root["b"].asString());
}
}
struct CharReaderAllowZeroesTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(CharReaderAllowZeroesTest, issue176) {
Json::CharReaderBuilder b;
b.settings_["allowSingleQuotes"] = true;
Json::Value root;
std::string errs;
Json::CharReader* reader(b.newCharReader());
{
char const doc[] = "{'a':true,\"b\":true}";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(2u, root.size());
JSONTEST_ASSERT_EQUAL(true, root.get("a", false));
JSONTEST_ASSERT_EQUAL(true, root.get("b", false));
}
{
char const doc[] = "{'a': 'x', \"b\":'y'}";
bool ok = reader->parse(
doc, doc + std::strlen(doc),
&root, &errs);
JSONTEST_ASSERT(ok);
JSONTEST_ASSERT_STRING_EQUAL("", errs);
JSONTEST_ASSERT_EQUAL(2u, root.size());
JSONTEST_ASSERT_STRING_EQUAL("x", root["a"].asString());
JSONTEST_ASSERT_STRING_EQUAL("y", root["b"].asString());
}
}
struct BuilderTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(BuilderTest, settings) {
{
Json::Value errs;
Json::CharReaderBuilder rb;
JSONTEST_ASSERT_EQUAL(false, rb.settings_.isMember("foo"));
JSONTEST_ASSERT_EQUAL(true, rb.validate(&errs));
rb["foo"] = "bar";
JSONTEST_ASSERT_EQUAL(true, rb.settings_.isMember("foo"));
JSONTEST_ASSERT_EQUAL(false, rb.validate(&errs));
}
{
Json::Value errs;
Json::StreamWriterBuilder wb;
JSONTEST_ASSERT_EQUAL(false, wb.settings_.isMember("foo"));
JSONTEST_ASSERT_EQUAL(true, wb.validate(&errs));
wb["foo"] = "bar";
JSONTEST_ASSERT_EQUAL(true, wb.settings_.isMember("foo"));
JSONTEST_ASSERT_EQUAL(false, wb.validate(&errs));
}
}
struct IteratorTest : JsonTest::TestCase {};
JSONTEST_FIXTURE(IteratorTest, distance) {
Json::Value json;
json["k1"] = "a";
json["k2"] = "b";
int dist = 0;
std::string str;
for (Json::ValueIterator it = json.begin(); it != json.end(); ++it) {
dist = it - json.begin();
str = it->asString().c_str();
}
JSONTEST_ASSERT_EQUAL(1, dist);
JSONTEST_ASSERT_STRING_EQUAL("b", str);
}
JSONTEST_FIXTURE(IteratorTest, names) {
Json::Value json;
json["k1"] = "a";
json["k2"] = "b";
Json::ValueIterator it = json.begin();
JSONTEST_ASSERT(it != json.end());
JSONTEST_ASSERT_EQUAL(Json::Value("k1"), it.key());
JSONTEST_ASSERT_STRING_EQUAL("k1", it.name());
JSONTEST_ASSERT_EQUAL(-1, it.index());
++it;
JSONTEST_ASSERT(it != json.end());
JSONTEST_ASSERT_EQUAL(Json::Value("k2"), it.key());
JSONTEST_ASSERT_STRING_EQUAL("k2", it.name());
JSONTEST_ASSERT_EQUAL(-1, it.index());
++it;
JSONTEST_ASSERT(it == json.end());
}
JSONTEST_FIXTURE(IteratorTest, indexes) {
Json::Value json;
json[0] = "a";
json[1] = "b";
Json::ValueIterator it = json.begin();
JSONTEST_ASSERT(it != json.end());
JSONTEST_ASSERT_EQUAL(Json::Value(Json::ArrayIndex(0)), it.key());
JSONTEST_ASSERT_STRING_EQUAL("", it.name());
JSONTEST_ASSERT_EQUAL(0, it.index());
++it;
JSONTEST_ASSERT(it != json.end());
JSONTEST_ASSERT_EQUAL(Json::Value(Json::ArrayIndex(1)), it.key());
JSONTEST_ASSERT_STRING_EQUAL("", it.name());
JSONTEST_ASSERT_EQUAL(1, it.index());
++it;
JSONTEST_ASSERT(it == json.end());
}
int main(int argc, const char* argv[]) {
JsonTest::Runner runner;
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, checkNormalizeFloatingPointStr);
@@ -1623,6 +2340,15 @@ int main(int argc, const char* argv[]) {
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, compareType);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, offsetAccessors);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, typeChecksThrowExceptions);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, StaticString);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, CommentBefore);
//JSONTEST_REGISTER_FIXTURE(runner, ValueTest, nulls);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, zeroes);
JSONTEST_REGISTER_FIXTURE(runner, ValueTest, zeroesInKeys);
JSONTEST_REGISTER_FIXTURE(runner, WriterTest, dropNullPlaceholders);
JSONTEST_REGISTER_FIXTURE(runner, StreamWriterTest, dropNullPlaceholders);
JSONTEST_REGISTER_FIXTURE(runner, StreamWriterTest, writeZeroes);
JSONTEST_REGISTER_FIXTURE(runner, ReaderTest, parseWithNoErrors);
JSONTEST_REGISTER_FIXTURE(
@@ -1631,7 +2357,33 @@ int main(int argc, const char* argv[]) {
JSONTEST_REGISTER_FIXTURE(runner, ReaderTest, parseChineseWithOneError);
JSONTEST_REGISTER_FIXTURE(runner, ReaderTest, parseWithDetailError);
JSONTEST_REGISTER_FIXTURE(runner, WriterTest, dropNullPlaceholders);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithNoErrors);
JSONTEST_REGISTER_FIXTURE(
runner, CharReaderTest, parseWithNoErrorsTestingOffsets);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithOneError);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseChineseWithOneError);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithDetailError);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderTest, parseWithStackLimit);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderStrictModeTest, dupKeys);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, issue164);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, issue107);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, commentAfterObject);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, commentAfterArray);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderFailIfExtraTest, commentAfterBool);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderAllowDropNullTest, issue178);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderAllowSingleQuotesTest, issue182);
JSONTEST_REGISTER_FIXTURE(runner, CharReaderAllowZeroesTest, issue176);
JSONTEST_REGISTER_FIXTURE(runner, BuilderTest, settings);
JSONTEST_REGISTER_FIXTURE(runner, IteratorTest, distance);
JSONTEST_REGISTER_FIXTURE(runner, IteratorTest, names);
JSONTEST_REGISTER_FIXTURE(runner, IteratorTest, indexes);
return runner.runCommandLine(argc, argv);
}
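Every CharReader fixture above follows the same parse pattern; a minimal standalone sketch (assuming C++11 for std::unique_ptr, which the tests sidestep with explicit delete):

#include <json/json.h>
#include <cstring>
#include <iostream>
#include <memory>

int main() {
  Json::CharReaderBuilder b;
  b.settings_["failIfExtra"] = true;  // settings keys as exercised above
  // b.strictMode(&b.settings_);      // or enable the whole strict bundle

  std::unique_ptr<Json::CharReader> reader(b.newCharReader());
  char const doc[] = "{ \"property\" : \"value\" }";
  Json::Value root;
  std::string errs;
  bool ok = reader->parse(doc, doc + std::strlen(doc), &root, &errs);
  if (!ok) {
    std::cerr << errs;
    return 1;
  }
  std::cout << root["property"].asString() << std::endl;  // prints "value"
  return 0;
}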

View File

@@ -4,7 +4,7 @@ import os
paths = []
for pattern in [ '*.actual', '*.actual-rewrite', '*.rewrite', '*.process-output' ]:
paths += glob.glob( 'data/' + pattern )
paths += glob.glob('data/' + pattern)
for path in paths:
os.unlink( path )
os.unlink(path)

View File

@@ -0,0 +1,4 @@
// Comment for array
.=[]
// Comment within array
.[0]="one-element"

View File

@@ -0,0 +1,5 @@
// Comment for array
[
// Comment within array
"one-element"
]

View File

@@ -1,5 +1,7 @@
.={}
// Comment for array
.test=[]
// Comment within array
.test[0]={}
.test[0].a="aaa"
.test[1]={}

View File

@@ -1,6 +1,8 @@
{
"test":
// Comment for array
[
// Comment within array
{ "a" : "aaa" }, // Comment for a
{ "b" : "bbb" }, // Comment for b
{ "c" : "ccc" } // Comment for c

View File

@@ -11,4 +11,13 @@
// Multiline comment cpp-style
// Second line
.cpp-test.c=3
.cpp-test.d=4
// Comment before double
.cpp-test.d=4.1
// Comment before string
.cpp-test.e="e-string"
// Comment before true
.cpp-test.f=true
// Comment before false
.cpp-test.g=false
// Comment before null
.cpp-test.h=null

View File

@@ -12,6 +12,15 @@
// Multiline comment cpp-style
// Second line
"c" : 3,
"d" : 4
// Comment before double
"d" : 4.1,
// Comment before string
"e" : "e-string",
// Comment before true
"f" : true,
// Comment before false
"g" : false,
// Comment before null
"h" : null
}
}
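The expected/json data pairs above exercise comment round-tripping; a minimal sketch of the same flow in code (Json::Reader collects comments during parse, and a writer built with commentStyle "All" re-emits them):

#include <json/json.h>
#include <iostream>

int main() {
  std::string doc =
      "{\n"
      "  // Comment before string\n"
      "  \"e\" : \"e-string\"\n"
      "}\n";
  Json::Reader reader;
  Json::Value root;
  if (!reader.parse(doc, root, /*collectComments=*/true)) {
    std::cerr << reader.getFormattedErrorMessages();
    return 1;
  }
  Json::StreamWriterBuilder wbuilder;
  wbuilder["commentStyle"] = "All";
  std::cout << Json::writeString(wbuilder, root) << std::endl;
  return 0;
}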

View File

@@ -1,10 +1,10 @@
from __future__ import print_function
import glob
import os.path
for path in glob.glob( '*.json' ):
for path in glob.glob('*.json'):
text = file(path,'rt').read()
target = os.path.splitext(path)[0] + '.expected'
if os.path.exists( target ):
if os.path.exists(target):
print('skipping:', target)
else:
print('creating:', target)

View File

@@ -15,50 +15,50 @@ actual_path = base_path + '.actual'
rewrite_path = base_path + '.rewrite'
rewrite_actual_path = base_path + '.actual-rewrite'
def valueTreeToString( fout, value, path = '.' ):
def valueTreeToString(fout, value, path = '.'):
ty = type(value)
if ty is types.DictType:
fout.write( '%s={}\n' % path )
fout.write('%s={}\n' % path)
suffix = path[-1] != '.' and '.' or ''
names = value.keys()
names.sort()
for name in names:
valueTreeToString( fout, value[name], path + suffix + name )
valueTreeToString(fout, value[name], path + suffix + name)
elif ty is types.ListType:
fout.write( '%s=[]\n' % path )
for index, childValue in zip( xrange(0,len(value)), value ):
valueTreeToString( fout, childValue, path + '[%d]' % index )
fout.write('%s=[]\n' % path)
for index, childValue in zip(xrange(0,len(value)), value):
valueTreeToString(fout, childValue, path + '[%d]' % index)
elif ty is types.StringType:
fout.write( '%s="%s"\n' % (path,value) )
fout.write('%s="%s"\n' % (path,value))
elif ty is types.IntType:
fout.write( '%s=%d\n' % (path,value) )
fout.write('%s=%d\n' % (path,value))
elif ty is types.FloatType:
fout.write( '%s=%.16g\n' % (path,value) )
fout.write('%s=%.16g\n' % (path,value))
elif value is True:
fout.write( '%s=true\n' % path )
fout.write('%s=true\n' % path)
elif value is False:
fout.write( '%s=false\n' % path )
fout.write('%s=false\n' % path)
elif value is None:
fout.write( '%s=null\n' % path )
fout.write('%s=null\n' % path)
else:
assert False and "Unexpected value type"
def parseAndSaveValueTree( input, actual_path ):
root = json.loads( input )
fout = file( actual_path, 'wt' )
valueTreeToString( fout, root )
def parseAndSaveValueTree(input, actual_path):
root = json.loads(input)
fout = file(actual_path, 'wt')
valueTreeToString(fout, root)
fout.close()
return root
def rewriteValueTree( value, rewrite_path ):
rewrite = json.dumps( value )
def rewriteValueTree(value, rewrite_path):
rewrite = json.dumps(value)
#rewrite = rewrite[1:-1] # Somehow the string is quoted! jsonpy bug?
file( rewrite_path, 'wt').write( rewrite + '\n' )
file(rewrite_path, 'wt').write(rewrite + '\n')
return rewrite
input = file( input_path, 'rt' ).read()
root = parseAndSaveValueTree( input, actual_path )
rewrite = rewriteValueTree( json.write( root ), rewrite_path )
rewrite_root = parseAndSaveValueTree( rewrite, rewrite_actual_path )
input = file(input_path, 'rt').read()
root = parseAndSaveValueTree(input, actual_path)
rewrite = rewriteValueTree(json.write(root), rewrite_path)
rewrite_root = parseAndSaveValueTree(rewrite, rewrite_actual_path)
sys.exit( 0 )
sys.exit(0)

View File

@@ -1,17 +1,36 @@
from __future__ import print_function
from __future__ import unicode_literals
from io import open
from glob import glob
import sys
import os
import pipes
import os.path
import optparse
VALGRIND_CMD = 'valgrind --tool=memcheck --leak-check=yes --undef-value-errors=yes '
def compareOutputs( expected, actual, message ):
def getStatusOutput(cmd):
"""
Return int, unicode (for both Python 2 and 3).
Note: os.popen().close() would return None for 0.
"""
print(cmd, file=sys.stderr)
pipe = os.popen(cmd)
process_output = pipe.read()
try:
# We have been using os.popen(). When we read() the result
# we get 'str' (bytes) in py2, and 'str' (unicode) in py3.
# Ugh! There must be a better way to handle this.
process_output = process_output.decode('utf-8')
except AttributeError:
pass # python3
status = pipe.close()
return status, process_output
def compareOutputs(expected, actual, message):
expected = expected.strip().replace('\r','').split('\n')
actual = actual.strip().replace('\r','').split('\n')
diff_line = 0
max_line_to_compare = min( len(expected), len(actual) )
max_line_to_compare = min(len(expected), len(actual))
for index in range(0,max_line_to_compare):
if expected[index].strip() != actual[index].strip():
diff_line = index + 1
@@ -20,7 +39,7 @@ def compareOutputs( expected, actual, message ):
diff_line = max_line_to_compare+1
if diff_line == 0:
return None
def safeGetLine( lines, index ):
def safeGetLine(lines, index):
index += -1
if index >= len(lines):
return ''
@@ -30,65 +49,65 @@ def compareOutputs( expected, actual, message ):
Actual: '%s'
""" % (message, diff_line,
safeGetLine(expected,diff_line),
safeGetLine(actual,diff_line) )
safeGetLine(actual,diff_line))
def safeReadFile( path ):
def safeReadFile(path):
try:
return file( path, 'rt' ).read()
return open(path, 'rt', encoding = 'utf-8').read()
except IOError as e:
return '<File "%s" is missing: %s>' % (path,e)
def runAllTests( jsontest_executable_path, input_dir = None,
use_valgrind=False, with_json_checker=False ):
def runAllTests(jsontest_executable_path, input_dir = None,
use_valgrind=False, with_json_checker=False,
writerClass='StyledWriter'):
if not input_dir:
input_dir = os.path.join( os.getcwd(), 'data' )
tests = glob( os.path.join( input_dir, '*.json' ) )
input_dir = os.path.join(os.getcwd(), 'data')
tests = glob(os.path.join(input_dir, '*.json'))
if with_json_checker:
test_jsonchecker = glob( os.path.join( input_dir, '../jsonchecker', '*.json' ) )
test_jsonchecker = glob(os.path.join(input_dir, '../jsonchecker', '*.json'))
else:
test_jsonchecker = []
failed_tests = []
valgrind_path = use_valgrind and VALGRIND_CMD or ''
for input_path in tests + test_jsonchecker:
expect_failure = os.path.basename( input_path ).startswith( 'fail' )
expect_failure = os.path.basename(input_path).startswith('fail')
is_json_checker_test = (input_path in test_jsonchecker) or expect_failure
print('TESTING:', input_path, end=' ')
options = is_json_checker_test and '--json-checker' or ''
pipe = os.popen( "%s%s %s %s" % (
valgrind_path, jsontest_executable_path, options,
pipes.quote(input_path)))
process_output = pipe.read()
status = pipe.close()
options += ' --json-writer %s'%writerClass
cmd = '%s%s %s "%s"' % ( valgrind_path, jsontest_executable_path, options,
input_path)
status, process_output = getStatusOutput(cmd)
if is_json_checker_test:
if expect_failure:
if status is None:
if not status:
print('FAILED')
failed_tests.append( (input_path, 'Parsing should have failed:\n%s' %
safeReadFile(input_path)) )
failed_tests.append((input_path, 'Parsing should have failed:\n%s' %
safeReadFile(input_path)))
else:
print('OK')
else:
if status is not None:
if status:
print('FAILED')
failed_tests.append( (input_path, 'Parsing failed:\n' + process_output) )
failed_tests.append((input_path, 'Parsing failed:\n' + process_output))
else:
print('OK')
else:
base_path = os.path.splitext(input_path)[0]
actual_output = safeReadFile( base_path + '.actual' )
actual_rewrite_output = safeReadFile( base_path + '.actual-rewrite' )
file(base_path + '.process-output','wt').write( process_output )
actual_output = safeReadFile(base_path + '.actual')
actual_rewrite_output = safeReadFile(base_path + '.actual-rewrite')
open(base_path + '.process-output', 'wt', encoding = 'utf-8').write(process_output)
if status:
print('parsing failed')
failed_tests.append( (input_path, 'Parsing failed:\n' + process_output) )
failed_tests.append((input_path, 'Parsing failed:\n' + process_output))
else:
expected_output_path = os.path.splitext(input_path)[0] + '.expected'
expected_output = file( expected_output_path, 'rt' ).read()
detail = ( compareOutputs( expected_output, actual_output, 'input' )
or compareOutputs( expected_output, actual_rewrite_output, 'rewrite' ) )
expected_output = open(expected_output_path, 'rt', encoding = 'utf-8').read()
detail = (compareOutputs(expected_output, actual_output, 'input')
or compareOutputs(expected_output, actual_rewrite_output, 'rewrite'))
if detail:
print('FAILED')
failed_tests.append( (input_path, detail) )
failed_tests.append((input_path, detail))
else:
print('OK')
@@ -100,7 +119,7 @@ def runAllTests( jsontest_executable_path, input_dir = None,
print(failed_test[1])
print()
print('Test results: %d passed, %d failed.' % (len(tests)-len(failed_tests),
len(failed_tests) ))
len(failed_tests)))
return 1
else:
print('All %d tests passed.' % len(tests))
@@ -108,7 +127,7 @@ def runAllTests( jsontest_executable_path, input_dir = None,
def main():
from optparse import OptionParser
parser = OptionParser( usage="%prog [options] <path to jsontestrunner.exe> [test case directory]" )
parser = OptionParser(usage="%prog [options] <path to jsontestrunner.exe> [test case directory]")
parser.add_option("--valgrind",
action="store_true", dest="valgrind", default=False,
help="run all the tests using valgrind to detect memory leaks")
@@ -119,17 +138,32 @@ def main():
options, args = parser.parse_args()
if len(args) < 1 or len(args) > 2:
parser.error( 'Must provides at least path to jsontestrunner executable.' )
sys.exit( 1 )
parser.error('Must provide at least the path to the jsontestrunner executable.')
sys.exit(1)
jsontest_executable_path = os.path.normpath( os.path.abspath( args[0] ) )
jsontest_executable_path = os.path.normpath(os.path.abspath(args[0]))
if len(args) > 1:
input_path = os.path.normpath( os.path.abspath( args[1] ) )
input_path = os.path.normpath(os.path.abspath(args[1]))
else:
input_path = None
status = runAllTests( jsontest_executable_path, input_path,
use_valgrind=options.valgrind, with_json_checker=options.with_json_checker )
sys.exit( status )
status = runAllTests(jsontest_executable_path, input_path,
use_valgrind=options.valgrind,
with_json_checker=options.with_json_checker,
writerClass='StyledWriter')
if status:
sys.exit(status)
status = runAllTests(jsontest_executable_path, input_path,
use_valgrind=options.valgrind,
with_json_checker=options.with_json_checker,
writerClass='StyledStreamWriter')
if status:
sys.exit(status)
status = runAllTests(jsontest_executable_path, input_path,
use_valgrind=options.valgrind,
with_json_checker=options.with_json_checker,
writerClass='BuiltStyledStreamWriter')
if status:
sys.exit(status)
if __name__ == '__main__':
main()

View File

@@ -1,4 +1,6 @@
from __future__ import print_function
from __future__ import unicode_literals
from io import open
from glob import glob
import sys
import os
@@ -9,37 +11,41 @@ import optparse
VALGRIND_CMD = 'valgrind --tool=memcheck --leak-check=yes --undef-value-errors=yes'
class TestProxy(object):
def __init__( self, test_exe_path, use_valgrind=False ):
self.test_exe_path = os.path.normpath( os.path.abspath( test_exe_path ) )
def __init__(self, test_exe_path, use_valgrind=False):
self.test_exe_path = os.path.normpath(os.path.abspath(test_exe_path))
self.use_valgrind = use_valgrind
def run( self, options ):
def run(self, options):
if self.use_valgrind:
cmd = VALGRIND_CMD.split()
else:
cmd = []
cmd.extend( [self.test_exe_path, '--test-auto'] + options )
process = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT )
cmd.extend([self.test_exe_path, '--test-auto'] + options)
try:
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except:
print(cmd)
raise
stdout = process.communicate()[0]
if process.returncode:
return False, stdout
return True, stdout
def runAllTests( exe_path, use_valgrind=False ):
test_proxy = TestProxy( exe_path, use_valgrind=use_valgrind )
status, test_names = test_proxy.run( ['--list-tests'] )
def runAllTests(exe_path, use_valgrind=False):
test_proxy = TestProxy(exe_path, use_valgrind=use_valgrind)
status, test_names = test_proxy.run(['--list-tests'])
if not status:
print("Failed to obtain unit tests list:\n" + test_names, file=sys.stderr)
return 1
test_names = [name.strip() for name in test_names.strip().split('\n')]
test_names = [name.strip() for name in test_names.decode('utf-8').strip().split('\n')]
failures = []
for name in test_names:
print('TESTING %s:' % name, end=' ')
succeed, result = test_proxy.run( ['--test', name] )
succeed, result = test_proxy.run(['--test', name])
if succeed:
print('OK')
else:
failures.append( (name, result) )
failures.append((name, result))
print('FAILED')
failed_count = len(failures)
pass_count = len(test_names) - failed_count
@@ -47,8 +53,7 @@ def runAllTests( exe_path, use_valgrind=False ):
print()
for name, result in failures:
print(result)
print('%d/%d tests passed (%d failure(s))' % (
pass_count, len(test_names), failed_count))
print('%d/%d tests passed (%d failure(s))' % ( pass_count, len(test_names), failed_count))
return 1
else:
print('All %d tests passed' % len(test_names))
@@ -56,7 +61,7 @@ def runAllTests( exe_path, use_valgrind=False ):
def main():
from optparse import OptionParser
parser = OptionParser( usage="%prog [options] <path to test_lib_json.exe>" )
parser = OptionParser(usage="%prog [options] <path to test_lib_json.exe>")
parser.add_option("--valgrind",
action="store_true", dest="valgrind", default=False,
help="run all the tests using valgrind to detect memory leaks")
@@ -64,11 +69,11 @@ def main():
options, args = parser.parse_args()
if len(args) != 1:
parser.error( 'Must provides at least path to test_lib_json executable.' )
sys.exit( 1 )
parser.error('Must provide at least the path to the test_lib_json executable.')
sys.exit(1)
exit_code = runAllTests( args[0], use_valgrind=options.valgrind )
sys.exit( exit_code )
exit_code = runAllTests(args[0], use_valgrind=options.valgrind)
sys.exit(exit_code)
if __name__ == '__main__':
main()

View File

@@ -1 +1 @@
1.1.0
1.6.1

version.in (new file)
View File

@@ -0,0 +1 @@
@JSONCPP_VERSION@