The index type of an array is a `range`, meaning that converting an
out-of-bounds value to it raises a `Defect`.
This PR fixes the defect but does nothing for arrays that are not
"full"; probably, this should become an error.
* bump version
* Add compile-time switch to alter encoder enum representation
* Fix test
* Add enum section in README.md
Add more tests and encoder features for enums
* Add enums section to README.md
In Nim 2, `distinct` values no longer match `value is object` and need
to be checked separately with `value is distinct`. The underlying type
can be unwrapped with `distinctBase` to inspect whether the value is a
`distinct object` or a `distinct` of some other type.
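A minimal illustration of the Nim 2 pattern described above (the type
and proc names are made up for the example):

    import std/typetraits

    type
      Plain = object
        x: int
      Wrapped = distinct Plain

    proc describe[T](value: T) =
      when T is distinct:
        # unwrap the distinct type to see what it is based on
        when distinctBase(T) is object:
          echo "distinct object"
        else:
          echo "distinct non-object"
      elif T is object:
        echo "plain object"

    describe Plain(x: 1)           # plain object
    describe Wrapped(Plain(x: 1))  # distinct object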
* Resilience against fields with null values
* writeField helper also handles optional fields correctly (see the
sketch after this list)
* Use uint64 for test_parser's parseValue
* Add parseObjectWithoutSkip and parseObjectSkipNullFields
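A sketch of the optional-field handling referred to in this list
(illustrative only; exact semantics depend on the flavor configuration):

    import std/options
    import json_serialization, json_serialization/std/options

    type Foo = object
      a: int
      b: Option[int]

    # a JSON null in an optional field decodes to none(int) rather
    # than failing
    let decoded = Json.decode("""{"a": 1, "b": null}""", Foo)
    assert decoded.b.isNone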
Other changes:
* Migrate many procs from accepting JsonReader to accepting JsonLexer
in order to reduce the number of generic instantiations and the
resulting code bloat
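A hypothetical illustration of why this reduces code bloat: JsonReader
is generic over its flavor, so every helper taking the reader is
instantiated per flavor, while JsonLexer is a concrete type (the types
below are simplified stand-ins, not the library's own):

    type
      Lexer = object          # concrete type
        pos: int
      Reader[Flavor] = object # generic type
        lexer: Lexer

    # instantiated once per Flavor used anywhere in the program
    proc skipValue[Flavor](r: var Reader[Flavor]) =
      inc r.lexer.pos

    # compiled exactly once, no matter how many flavors exist
    proc skipValue(lexer: var Lexer) =
      inc lexer.pos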
Currently, enum values are always encoded as the numeric value
`ord(val)`, unless explicitly overridden using `serializesAsTextInJson`
or a custom `writeValue` implementation. Reduce verbosity by doing this
automatically.
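For context, a sketch of the default behaviour described above
(assuming the stock Json flavor; outputs shown in comments):

    import json_serialization

    type Fruit = enum
      apple, banana

    # default: numeric encoding via ord(val)
    echo Json.encode(banana)     # 1

    # explicit opt-in to text encoding for one enum type:
    # serializesAsTextInJson Fruit
    # echo Json.encode(banana)   # "banana"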
* Proper error handling when parsed number exceeds uint64
details:
Returns an "errNonPortableInt" error
* need legacy flag for unit tests
* lazy numeric token parser
why:
Numeric data may have a custom format. In particular, numeric data may
be UInt256, which is not part of the JSON standard and might lead to an
overflow.
details:
Numeric values are assigned a preliminary token type tkNumeric without
being fully parsed. This can be used to insert a custom parser.
Otherwise the value is parsed implicitly when querying/fetching the
token type.
+ tok: replaced by getter tok() that resolves the lazy token (if
necessary)
+ tokKind: returns the current token type without auto-resolving
This lazy scheme could be extended to other custom types as long as
the first token letter determines the custom type.
* activate lazy parsing in reader
howto:
+ no code change if a custom reader refers to an existing reader
    type FancyInt = distinct int

    proc readValue(reader: var JsonReader, value: var FancyInt) =
      value = reader.readValue(int).FancyInt
+ bespoke reader for custom parsing

    type FancyUint = distinct uint

    proc readValue(reader: var JsonReader, value: var FancyUint) =
      if reader.lexer.lazyTok == tkNumeric:
        # accumulate digits as the lexer streams them (using a plain
        # uint, as arithmetic is not defined on the distinct type)
        var accu: uint
        reader.lexer.customIntValueIt:
          accu = accu * 10 + it.uint
        value = accu.FancyUint
      elif reader.lexer.tok == tkString:
        value = reader.lexer.strVal.parseUint.FancyUint
      ...
      reader.lexer.next

+ full code explanation at json_serialization/reader.readValue()
* Add lazy parsing for customised string objects
why:
This allows parsing large or specialised strings without storing them
in the lexer state descriptor.
details:
Similar logic applies as for the customised number parser. For most
practical cases, a DSL template is available, serving as a wrapper
around the character/byte item processor code.
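A sketch of what such a customised string reader might look like,
mirroring the numeric example above. The template name
customStringValueIt is assumed by analogy with customIntValueIt and may
differ in the actual code:

    type BigLabel = distinct string

    proc readValue(reader: var JsonReader, value: var BigLabel) =
      # consume the string item by item instead of first materialising
      # it in the lexer state descriptor
      var accu: string
      reader.lexer.customStringValueIt:   # DSL wrapper name assumed
        accu.add it
      value = accu.BigLabel
      reader.lexer.next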
* fix typo in unit test