Currently, enum values are always encoded as the numeric value `ord(val)`,
unless explicitly overridden using `serializesAsTextInJson` or a custom
`writeValue` implementation. Reduce verbosity by doing that automatically,
i.e. by serialising enums as text by default.
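example:
  A minimal sketch of the intended behaviour, assuming `Json.encode` as
  provided by json_serialization (printed values are assumptions):

    import json_serialization

    type Fruit = enum
      Apple, Banana

    echo Json.encode(Banana)   # expected: "Banana" (previously: 1, i.e. ord)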
* Proper error handling when a parsed number exceeds the uint64 range
details:
Returns an `errNonPortableInt` error
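example:
  A hedged sketch of the expected failure mode (exception type and
  message wording are assumptions):

    import json_serialization

    try:
      # 2^64 is one past uint64.high and cannot be represented portably
      discard Json.decode("18446744073709551616", uint64)
    except CatchableError as e:
      echo e.msg   # expected to report the errNonPortableInt condition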
* need a legacy flag for unit tests
* lazy numeric token parser
why:
Numeric data may have a custom format. In particular, numeric data may be
`UInt256`, which is not covered by the JSON standard and might lead to an
overflow.
details:
Numeric values are assigned a preliminary token type `tkNumeric` without
being fully parsed. This can be used to insert a custom parser.
Otherwise the value is parsed implicitly when querying/fetching the
token type.
+ tok: replaced by a getter `tok()` which resolves a pending lazy token
  if necessary (see the sketch below)
+ tokKind: the current token type, without auto-resolving
This lazy scheme could be extended to other custom types as long as
the first token letter determines the custom type.
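example:
  A minimal illustration of the two accessors, with `reader` being a
  JsonReader as in the snippets below:

    if reader.lexer.tokKind == tkNumeric:
      # the numeric token is still unparsed here, so a custom parser
      # may take over before the default one runs
      discard
    let resolved = reader.lexer.tok   # forces resolution of a lazy token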
* activate lazy parsing in reader
howto:
+ no code change if a custom reader refers to an existing reader
  type FancyInt = distinct int

  proc readValue(reader: var JsonReader, value: var FancyInt) =
    value = reader.readValue(int).FancyInt
+ bespoke reader for custom parsing
  type FancyUint = distinct uint

  proc readValue(reader: var JsonReader, value: var FancyUint) =
    if reader.lexer.lazyTok == tkNumeric:
      var accu: FancyUint
      reader.lexer.customIntValueIt:
        # `it` is assumed to hold the next decimal digit value
        accu = (accu.uint * 10 + it.uint).FancyUint
      value = accu
    elif reader.lexer.tok == tkString:
      value = reader.lexer.strVal.parseUint.FancyUint
    ...
    reader.lexer.next
+ full code explanation at json_serialization/reader.readValue()
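+ possible usage of the bespoke reader above (a sketch; `Json.decode` as
  provided by json_serialization):

    let a = Json.decode("12345", FancyUint)     # takes the lazy numeric path
    let b = Json.decode("\"678\"", FancyUint)   # takes the string path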
* Add lazy parsing for customised string objects
why:
This allows parsing large or specialised strings without storing them
in the lexer state descriptor.
details:
Similar logic applies as for the customised number parser. For most
practical cases, a DSL template is available that serves as a wrapper
around the character/byte item processor code (see the sketch below).
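example:
  A hypothetical sketch only: the template and token names below
  (`customTextValueIt`, `tkQuoted`) are assumed stand-ins for the actual
  DSL wrapper and the lazy string token:

    type BigString = distinct string

    proc readValue(reader: var JsonReader, value: var BigString) =
      if reader.lexer.lazyTok == tkQuoted:    # lazy, not yet parsed string
        reader.lexer.customTextValueIt:
          string(value).add it                # process one character at a time
      reader.lexer.next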
* fix typo in unit test