Meme transcription:
Panel 1: Bilbo Baggins ponders, “After all… why should I care about the difference between int and String?”
Panel 2: Bilbo Baggins is revealed to be an API developer. He continues, “JSON is always String, anyways…”
What that means is that you cannot rely on numbers in JSON. Just use strings.
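For the record, this is the failure being alluded to. A minimal sketch (Node or any browser console; the field name id is invented):

```js
// Every JSON number becomes an IEEE-754 double in JavaScript, so
// integers above Number.MAX_SAFE_INTEGER (2^53 - 1) are rounded
// silently; no error is raised anywhere.
const parsed = JSON.parse('{"id": 9007199254740993}'); // 2^53 + 1
console.log(parsed.id === 9007199254740992);  // true: off by one
console.log(Number.isSafeInteger(parsed.id)); // false
```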
Unless you’re dealing with some insanely flexible schema, you should be able to know what kind of number (int, double, and so on) a field should contain when deserializing a number field from JSON. Using a string provides no benefit here unless there’s some bug in your deserialization process.
What’s the point of your schema if the receiving end is JavaScript, for example? You can convert a string to a BigNumber, but if you send a number, you’ll get wrong data before you ever have the chance to convert it.
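A quick illustration of that asymmetry (a sketch; the field name gdp is made up):

```js
// String field: the digits survive parsing and convert exactly.
const fromString = JSON.parse('{"gdp": "12345678901234567890"}');
console.log(BigInt(fromString.gdp)); // 12345678901234567890n

// Number field: JSON.parse has already rounded to the nearest
// double, so converting afterwards preserves the wrong value.
const fromNumber = JSON.parse('{"gdp": 12345678901234567890}');
console.log(BigInt(fromNumber.gdp)); // 12345678901234567168n
```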
I’m not following your point, so I think I might be misunderstanding it. If the kinds of numbers you want to express literally cannot be expressed as JSON numbers, then yes, you should absolutely use a string (or maybe even an object with multiple fields).
The point is that everything is expressible as JSON numbers; the issue arises when those numbers are read by JS.
Can you give a specific example? Might help me understand your point.
I am not sure what the example would be; my point was that the spec and the RFC are very abstract and never mention any limitations on number content. The implementations in each language will of course be more limited than that, and when those limitations differ, users get dissimilar experiences, like this: Why does JSON.parse corrupt large numbers and how to solve this
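To make that spec-versus-implementation gap concrete (a sketch; both literals below are valid JSON per the grammar, yet V8 maps every number to a double):

```js
// Arbitrarily large numbers are grammatically valid JSON...
console.log(JSON.parse('123456789012345678901234567890')); // ≈ 1.2345678901234568e+29
// ...and so are numbers too large for a double at all:
console.log(JSON.parse('1e400')); // Infinity

// Infinity isn't representable in JSON, so the round trip degrades further:
console.log(JSON.stringify(JSON.parse('1e400'))); // "null"
```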
This is what I was getting at here: https://programming.dev/comment/10849419. The problem is with the parser in those circumstances, not with the serialization format or the language.
I disagree a bit: the schema often doesn’t specify limits and operates in the JSON standard’s terms. It will say that you should get/send a number, but it won’t usually say at what point it will break.
This is the opposite of what the C language does: it is so specific that it is not even Turing complete (in a theoretical sense; in practice, it is).
What makes you think so?
```js
const bigJSON = '{"gross_gdp": 12345678901234567890}';

JSON.parse(bigJSON, (key, value, context) => {
  if (key === "gross_gdp") {
    // Ignore the value because it has already lost precision
    return BigInt(context.source);
  }
  return value;
});
// > {gross_gdp: 12345678901234567890n}
```
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse
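(Worth noting: the context argument to the reviver and its source property come from the newer “JSON.parse source text access” proposal, so the snippet above only runs on recent engines; on older runtimes a third-party parser such as json-bigint is the usual workaround.)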
Because no one is using JSON.parse directly. Do you guys even code?
It’s neither JSON’s nor JavaScript’s fault that you don’t want to make a simple function call to properly deserialize the data.
It’s not up to me. Or you.