I think so, but until I see it, I can't tell.
|
If you're allowed to upgrade to .NET 5, Microsoft effectively implemented Newtonsoft's API natively (System.Text.Json) with pretty much identical syntax. Works really well, and you're not using third-party add-ins.
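For example, the two read almost the same (a minimal sketch -- the Person type and the json string are made-up placeholders):

    // Newtonsoft.Json (third-party package):
    var p1 = Newtonsoft.Json.JsonConvert.DeserializeObject<Person>(json);
    var s1 = Newtonsoft.Json.JsonConvert.SerializeObject(p1);

    // System.Text.Json (built into .NET 5, no extra package):
    var p2 = System.Text.Json.JsonSerializer.Deserialize<Person>(json);
    var s2 = System.Text.Json.JsonSerializer.Serialize(p2);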
-= Reelix =-
|
Yup, looking forward to it. Not holding my breath.
It doesn't help that my boss read a blog post which said that Microsoft is abandoning .NET ( ). Middle managers will believe anything if it's in a blog.
I countered with a link to Microsoft's roadmap for the future of .NET, but the damage was already done.
|
I already told you a bit about mine. It's probably a bit permissive.
I think your assumption of "search/skip operations" is not one which most others will even consider.
I assume that most would not implement either of those, but would instead want the entire document -- because why else would you be parsing the thing anyway?
As to well-formedness checking -- "You Ain't Gonna Need It" (the same as with XML).
In my case, I had to "stomp-the-pedal" because I was given a short deadline to have a working solution for reading JSON files (75GB worth) and loading the data into SQL Server.
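The general shape of such a streaming read -- sketched here with Newtonsoft's JsonTextReader purely for illustration, not the parser I actually wrote, and with a made-up "rows" property name -- is to pull one token at a time so the 75GB never has to sit in memory:

    using (var file = new System.IO.StreamReader(path))
    using (var reader = new Newtonsoft.Json.JsonTextReader(file))
    {
        while (reader.Read())  // one token at a time; constant memory
        {
            if (reader.TokenType == Newtonsoft.Json.JsonToken.PropertyName
                && (string)reader.Value == "rows")  // made-up property name
            {
                // accumulate a batch of rows here, then hand it to SqlBulkCopy
            }
        }
    }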
|
PIEBALDconsult wrote: I assume that most would not implement either of those, but would instead want the entire document -- because why else would you be parsing the thing anyway?
In my JSON on Fire[^] article I present several cases where you only need a little data from a much larger dataset.
Consider querying any MongoDB repository online. You don't need to parse everything you get back, because the data returned is very coarse grained/chunky. You don't get fine-grained query results with it. You get kilobytes of data at least, and on an IoT device you may just not have the room.
The show information for Burn Notice from tmdb.com is almost 200kB. I know that because I'm using it as a test data set.
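The kind of search/skip access in question looks roughly like this sketch using System.Text.Json's Utf8JsonReader for illustration (the "name" property and file name are assumptions about the payload):

    ReadOnlySpan<byte> utf8 = System.IO.File.ReadAllBytes("burn_notice.json");
    var reader = new System.Text.Json.Utf8JsonReader(utf8);
    while (reader.Read())
    {
        // matches the first "name" property at any depth -- fine for a sketch
        if (reader.TokenType == System.Text.Json.JsonTokenType.PropertyName
            && reader.ValueTextEquals("name"))
        {
            reader.Read();                    // advance to the value
            string name = reader.GetString(); // the one field we wanted
            break;                            // skip the rest of the ~200kB
        }
    }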
Real programmers use butterflies
|
My needs are simple -- some other team sends us some number of JSON files and I need to load the data into SQL Server.
In most cases, each JSON file contains one "table" of data so loading it into a table is simple.
At most I may want to filter out large binary values which are of no use to us.
And we trust the sender to have provided well-formed JSON -- if it isn't, we find out real fast and throw it back to them to fix.
Well-formedness is one of those things you shouldn't be concerned about once you get your application to PROD.
At this time, I'm consuming two sets of files from third-party products which those products also have to be able to read -- they're the configuration files for those products.
The only untrustworthy set of data I consume is one which is generated by a utility I wrote, so if it's broken it's my fault and I can fix it.
|
This lib I wrote was originally in C# and I ported it. I originally designed it (the C# version) to do bulk loads of data -- basically exactly what you're doing, but perhaps a lot more of it.
Real programmers use butterflies
|
Whichever lets me stream the file into a database, without clogging any system resources.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
I would want the "text" of the JSON to be well-formed (proper braces, quotes, commas, colons, brackets, etc.), but as to the contents, whether or not they map to the backing entity doesn't much matter -- though obviously things would break if a collection is expected and it's not a collection, or vice versa. Same with automatic data type conversion.
So, yeah, basically I would want the "defensive driver" approach.
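In Newtonsoft terms, that might look like the following sketch -- syntactically broken text still throws, while per-member mapping failures can be tolerated (MyEntity and json are placeholders):

    var settings = new Newtonsoft.Json.JsonSerializerSettings
    {
        Error = (sender, args) =>
        {
            // Tolerate mapping/conversion errors, but let syntax errors propagate
            if (!(args.ErrorContext.Error is Newtonsoft.Json.JsonReaderException))
                args.ErrorContext.Handled = true;
        }
    };
    var entity = Newtonsoft.Json.JsonConvert.DeserializeObject<MyEntity>(json, settings);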
|
So what I'm hearing is: if it wasn't well-formed, you'd like to error out as soon as you catch it, even if that meant a slower parse.
Real programmers use butterflies
|
It depends on context, of course.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
I barely ever use JSON and have never written (nor am I likely to) a parser, but what I do know is that I can't answer a question like this without knowing the context.
- Is it more important to be fast 100% of the time and permit errors 1% of the time, or to be 100% reliable at the cost of a few percentage points in speed? (i.e. how critical is the data, and how critical is speed? This is a pretty common trade-off)
- Is the data coming from another system I / we have written, or a trusted partner, or from Joe Public? Is the data machine generated or hand-crafted?
|
First of all, this is a hypothetical. Second, hosting the .NET CLR in C++ just to use a .NET package from C++ to parse a little JSON seems heavy-handed and horribly inefficient.
Plus C# won't run on Arduinos.
Real programmers use butterflies
|
I should add that I originally wrote it in C# and then ported it to C++.
Why did I write it in C#? Because I didn't know about Newtonsoft's JSON library on the day I wrote it, and when I found out about it, it turned out Newtonsoft's pull parser sucks and is slow.
I'm glad I did.
People are religious about never reinventing the wheel, but it's not always such a bad thing - it depends on the wheel.
Real programmers use butterflies
|
We use Newtonsoft with all of our Web APIs, etc. Never had any noticeable issues with performance.
I guess if you are parsing big JSON files then perhaps that is an issue, but we don't do that. So....
|
If you ever find yourself bulk loading JSON dumps into a database, you can do better. Hell, you could use my tiny JSON C# lib which is around here at CP somewhere.
Real programmers use butterflies
|
Tell me when you make a parser for XML.
I'm loading 80 GB into a database every week, and XML (or rather the built-in tools) seriously isn't made for that.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
Will do!
Real programmers use butterflies
|
I load 51GB of XML with what SSIS has built-in. It takes about twelve minutes.
I load 5GB of JSON with my own parser. It takes about eight minutes.
I load 80GB of JSON with my own parser -- this dataset has tripled in size over the last month. It's now taking about five hours.
These datasets are in no way comparable, I'm just comparing the size-on-disk of the files.
I will, of course, accept that my JSON loader is a likely bottleneck, but I have nothing else to compare it against. It seemed "good enough" two years ago when I had a year-end deadline to meet.
I may also be able to configure my JSON Loader to use BulkCopy, as I do for the 5GB dataset, but I seem to recall that the data wasn't suited to it.
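For reference, the BulkCopy streaming setup would be roughly this sketch (the destination table and the IDataReader over the parsed rows are placeholders):

    using (var bulk = new System.Data.SqlClient.SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.JsonStaging"; // placeholder table
        bulk.EnableStreaming = true;  // don't buffer the whole source in memory
        bulk.BatchSize = 10000;
        bulk.WriteToServer(rowReader); // any IDataReader over the parsed rows
    }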
At any rate, I'm in need of an alternative, but it can't be third-party.
Next year will be different.
|
PIEBALDconsult wrote: I load 51GB of XML with what SSIS has built-in. It takes about twelve minutes.
How much memory do you have?
Early tests of mine ran out of memory.
Or have I done something wrong?
Mine takes an hour for 85GB of XML, but that uses BulkCopy. Early versions without BulkCopy indicated that it would indeed take 5-6 hours.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
I don't know what SSIS does internally, but I doubt it loads the entire XML document into memory all at once.
I don't know how much RAM or how many processors the servers have.
I ran the XML load on my laptop, 16GB of RAM and usage increased by only four percent.
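That would be consistent with a streaming XmlReader pass, where memory stays flat regardless of file size -- a sketch, with a made-up file path and element name:

    using (var reader = System.Xml.XmlReader.Create(@"D:\data\huge.xml"))
    {
        while (reader.Read())  // one node at a time; constant memory
        {
            if (reader.NodeType == System.Xml.XmlNodeType.Element
                && reader.Name == "Record")  // made-up element name
            {
                // process one record, then let it go
            }
        }
    }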
|
OK, then I had some other problem. I might take another look at SSIS, then.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger