|
I don't have any current processes that do bulk json processing. Are you perhaps thinking of someone else?
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
I suppose so. I thought it was you doing bulk JSON uploads but I guess not. Sorry for the churn.
Real programmers use butterflies
|
|
|
|
|
No prob
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
I believe that might have been @marc-clifton
<edit>sorry Marc, remembered wrong</edit>
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
Might have been. Whoever it was said they couldn't run 3rd-party code like Newtonsoft on their server.
Real programmers use butterflies
|
|
|
|
|
|
That's who it was. Now I remember! I don't know why I was thinking JSOP other than they both seem similarly gruff to me.
Real programmers use butterflies
|
|
|
|
|
here[^]
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
Thank you. What a thread sleuth. I couldn't remember which one it was under.
Real programmers use butterflies
|
|
|
|
|
I was involved, otherwise I wouldn't have remembered.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
I was the OP and I didn't remember.
To be honest, my memory is garbage. My RAM is bad.
I'm amazed I can code with how rickety it is.
Real programmers use butterflies
|
|
|
|
|
For having such bad RAM you're amazingly productive.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
I can't deploy DLLs to the servers, so basically I can use only C# which I write myself. That and the ADO.NET providers for Oracle and Teradata. Other than that, it has to be part of .NET 4.6 -- though I hope we get at least 4.7 soon (as mentioned in another post).
I'm fine with my parser at this time, but I look forward to trying what Microsoft has once it's available to me -- it may prove faster, it may not, but at this time I have nothing against which to benchmark mine.
A sort of simplified diagram of the layers of my parser:
______________________________________________________________
| |
| Loop: |
| Get the next token (JSONitem). |
| |
| If the token is a value: |
| Unquote it and add it to the item on top of the stack. |
| |
| If the token is the start of an object: |
| Instantiate a new object. |
| Add it to the current item on top of the stack. |
| Push it onto the stack. |
| |
| If the token is the start of an array: |
| Instantiate a new array. |
| Add it to the current item on top of the stack. |
| Push it onto the stack. |
| |
| If the token is the end of an object: |
| Pop the current item off the stack. |
| If a filter has been specified for the object: |
| Apply the filter (remove content). |
| |
| If the token is the end of an array: |
| Pop the current item off the stack. |
| |
| Break the loop when the stack is empty |
| or if the end-of-file is reached. |
| |
| Return the tree of tokens which represent the value. |
| (Or NULL for end-of-file.) |
| |
| Note: |
| This does not check to ensure that an end-of- matches the |
| start-of- which is popped off the stack. |
| |
| Possibly, the filter could wait to be applied just before |
| the tree is returned. |
| |
|____________________________________________________________|
| |
| Get the next token (string). |
| |
| Peek the following (significant) character. |
| |
| Is the following character a COLON? |
| No : The token we just got is unnamed. |
| Yes: |
| The token we just read is the name of a value. |
| Discard the COLON. |
| Get the next token. |
| |
| Return the (named or unnamed) token as a JSONitem. |
| (Or NULL for end-of-file.) |
| |
|____________________________________________________________|
| |
| Read the next character from the file and classify it as |
| appropriate for the type of parse being performed: |
| normal, delimiter, etc. |
| |
| Is the character part of the current token? |
| No : Return the current token. |
| Yes: Add it to the current token (StringBuilder). |
| |
| Note: This handles QUOTEs and ESCAPEs, throws away |
| insignificant whitespace, and normalizes newlines. |
| |
| This part of the parser is not JSON-specific; I also use |
| it for CSV. |
| |
|============================================================|
| |
| .NET, TextReader for input file |
| |
|============================================================|
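The top layer of the diagram -- the stack-driven loop that assembles the token tree -- can be sketched like this. This is an illustrative Python sketch (the author's code is C#); the token shapes and names are made up, and, matching the diagram's note, there is no check that an end-of- marker matches the start-of- being popped.

```python
# Sketch of the diagram's top layer: tokens arrive from a lower layer
# (here just a list) and an explicit stack tracks the open containers.
# Token shape: (kind, name, value); containers are (name, children) pairs.

def parse_value(tokens):
    """Build a tree from a flat token stream using an explicit stack."""
    stack = []                               # containers currently open
    root = None
    for kind, name, value in tokens:
        if kind == "value":
            item = (name, value)
            if stack:
                stack[-1][1].append(item)    # add to the item on top of the stack
            else:
                return item                  # a bare top-level value
        elif kind in ("start-object", "start-array"):
            container = (name, [])           # (name, children)
            if stack:
                stack[-1][1].append(container)
            else:
                root = container
            stack.append(container)          # push it onto the stack
        elif kind in ("end-object", "end-array"):
            stack.pop()                      # pop; no start/end matching check,
                                             # just like the diagram's note says
            if not stack:
                return root                  # stack empty: tree complete
    return None                              # end-of-file

# Token stream for {"a": 1, "b": [2]}
tokens = [
    ("start-object", "", None),
    ("value", "a", 1),
    ("start-array", "b", None),
    ("value", "", 2),
    ("end-array", "", None),
    ("end-object", "", None),
]
tree = parse_value(tokens)
```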
|
|
|
|
|
ah, you use a stack. my pull parsers never have. it's a little faster not to; the only hangup is that without a stack it's possible to accept '[ "foo":1 ]', because the : following the field name is the only cue.
It's the one area where the latest parser of mine is not quite compliant. It *will* error on that, just not as soon as it should.
Real programmers use butterflies
|
|
|
|
|
I think my parser allows that; it trusts that the file is well-formed and doesn't check.
I see no reason to raise an error for that unlikely situation.
Besides, with my parser, every JSONitem has a name (at least an empty one) and a value (and a type), so it doesn't matter whether one is (erroneously) provided or defaulted by my parser.
Now that I think about it more, I don't actually need the Stack.
I could just as easily do something like curr = curr.Parent to step back (up) a level of the tree.
And then the "stack" would be empty when curr is null -- or similar.
Eliminating the Stack probably won't provide a big improvement to the code though.
I'm quite certain any "slowness" is occurring at higher levels, and not in the parser itself.
And, of course, the database access is likely to be the tightest bottleneck.
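The curr = curr.Parent idea above -- walking back up the tree instead of popping a stack -- could look like this. Again a hedged Python sketch, not the author's C#; the Node class and token shapes are invented for illustration.

```python
# Sketch of replacing the explicit Stack with a Parent reference:
# stepping back up a level is just curr = curr.parent, and the
# "stack" is empty exactly when curr becomes None.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []

def parse_value(tokens):
    root = None
    curr = None                        # the container we're inside, if any
    for kind, name, value in tokens:
        if kind == "value":
            curr.children.append((name, value))
        elif kind.startswith("start-"):
            node = Node(name, parent=curr)
            if curr is not None:
                curr.children.append(node)
            else:
                root = node
            curr = node                # descend into the new container
        elif kind.startswith("end-"):
            curr = curr.parent         # step back (up) a level of the tree
            if curr is None:           # "stack" is empty: tree complete
                return root
    return None                        # end-of-file

# Same token stream as before: {"a": 1, "b": [2]}
root = parse_value([
    ("start-object", "", None),
    ("value", "a", 1),
    ("start-array", "b", None),
    ("value", "", 2),
    ("end-array", "", None),
    ("end-object", "", None),
])
```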
|
|
|
|
|
DB access times can be improved if you're careful. It pays to check your update times in the DB, because you can often improve them by using things like intermediary in-memory tables without constraints on them, and then updating the "real" table from that one transactionally.
Of course, profiling is best. I like to time individual things and then check each operation's percentage of time relative to the others, so I know overall where improvements can benefit me. For example: the DB uses 75% of the time, parsing uses 25%, that kind of thing.
To add to that: the only catch with dropping the stack is that you have to scan to the end of a string before you can tell whether you're reading a field or a value node, because the ':' is the only thing you can use to discern that.
Your parsing might be improved wholesale in .NET by ditching JSON parsing altogether and using carefully constructed regular expressions instead.
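The percentage-of-total profiling described above boils down to timing each phase and dividing by the sum. A minimal generic sketch (phase names and the sleep stand-ins are made up):

```python
import time

def profile_phases(phases):
    """Time each (name, fn) pair and report each phase's share of the total."""
    timings = {}
    for name, fn in phases:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    total = sum(timings.values())
    return {name: t / total for name, t in timings.items()}

shares = profile_phases([
    ("parse", lambda: time.sleep(0.01)),   # stand-ins for the real work
    ("load",  lambda: time.sleep(0.03)),
])
# here "load" dominates, so optimization effort should go to the DB side
```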
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: update times
No updates. Truncate/load only. BulkCopy preferably.
honey the codewitch wrote: tables without constraints
Exactly. I'm loading staging tables for the use of others.
honey the codewitch wrote: you have to scan to the end of a string before you can tell whether you're reading a field or a value node, because the ':' is the only thing you can use to discern that
Well, you have to read to the end of the string/token anyway, and then you can "peek" the next token to see whether or not it's a COLON; no big deal.
Knowing "I'm in an object, therefore this must be a name", or "I'm in an array, therefore this must be a value" is unnecessary complexity.
honey the codewitch wrote: using carefully constructed regular expressions instead
Frack no. And that would require loading an entire file into memory, wouldn't it?
|
|
|
|
|
Oh that's right, I forgot that .NET's is in-memory only. I've been using my own DFA regex engine for so long now (it streams) that I didn't even think about that.
Also, sorry, I shouldn't have said update, because I meant load.
The other thing I can think of that might speed it up is to orchestrate the loader to be on the same server as the DB, depending on the network. But it sounds like you probably don't have that ability; based on what you said before, your environment is restricted. Oh well.
Real programmers use butterflies
|
|
|
|
|
|
DFA engines don't typically (if ever) backtrack. Microsoft's is an NFA engine.
DFA engines are faster, but take longer to compile and support fewer kinds of matching. Basically, DFAs support the standard regex constructs ()[^-]*?. but nothing fancy like lazy matching** or atomic zero-width assertions.
** apparently someone on CP has produced a research DFA regex engine that can do lazy matches by engaging in some sorcery in the way it builds the states for the machines, but typically they cannot.
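For illustration, the "no backtracking" point means a DFA consumes each input character with exactly one table lookup. A toy table-driven machine for the regex [0-9]+ (a sketch of the technique in Python, not anyone's actual engine):

```python
def dfa_match(text):
    """Match the whole input against [0-9]+ with a 2-state DFA."""
    # transition[state][is_digit] -> next state; -1 is the dead state.
    # State 0 = start, state 1 = seen at least one digit (accepting).
    transition = {0: {True: 1, False: -1},
                  1: {True: 1, False: -1}}
    state = 0
    for ch in text:
        state = transition[state][ch.isdigit()]
        if state == -1:
            return False          # dead state: fail fast, never backtrack
    return state == 1             # state 1 is the accepting state
```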
Real programmers use butterflies
|
|
|
|
|
No Makeup On Zoom[^]
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I have often wondered why they can blur the background but they won't let us blur our faces. The background might at least be interesting.
To err is human to really elephant it up you need a computer
|
|
|
|
|
sounds like my driver
"Please don't come to my funeral.." Sheldon Cooper
|
|
|
|
|
Jeffrey... I hope you are picking your nose!
OMG, Turn it off!
Turn it Off!
Zoom Meetings can get out of control pretty quick. The worst thing is the bad audio "ping pong", everyone but ONE person is fine. They get fixed and someone else starts having a problem. LOL. Great Times!
I hope 2021 is AS INTERESTING as 2020!
|
|
|
|
|
The task I have is pretty basic: generate a PDF label with some barcodes. Getting there is a bit of a mission though, mostly due to the availability (or lack thereof) of tools.
I create an HTML template and have wkhtmltopdf convert it to PDF. Easy enough, but getting precise layout and positioning in HTML isn't always that easy.
Generating Code 39 and Code 128 barcodes is relatively easy with JsBarcode -- except when they don't display once converted to PDF. Then you find out you have to set both the script and the HTML to UTF-8 encoding, and then it works.
Generating a 2D PDF417-type barcode is relatively easy with a JavaScript library, except it fails to display once converted to PDF by wkhtmltopdf. So I found a .NET Core library that can generate the barcode as a PNG, convert the bytes to a base64 image, and use that in the HTML by replacing placeholder text.
Another hurdle was wkhtmltopdf suddenly becoming very slow after being pretty fast in the past. I finally tracked the issue down to spoolsvc and my default printer being a network printer that's no longer connected. Once removed, the conversion works at a decent speed again.
In short, what should have been an easy task had lots of complications and workarounds, some quite weird and difficult to track down, but in the end I learned some interesting things.
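The base64 placeholder trick described above amounts to inlining the PNG as a data URI so the HTML needs no external image file. A Python sketch (the placeholder name and HTML shape are assumptions; the author generates the PNG with a .NET Core library):

```python
import base64

def embed_png(html_template, placeholder, png_bytes):
    """Replace a placeholder with an inline data-URI <img> tag."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    img = f'<img src="data:image/png;base64,{b64}" alt="barcode"/>'
    return html_template.replace(placeholder, img)

html = embed_png('<body>{{BARCODE}}</body>', '{{BARCODE}}', b'\x89PNG...')
# the inlined image then survives the wkhtmltopdf conversion without
# depending on any external file or script executing at render time
```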
modified 23-Dec-20 7:29am.
|
|
|
|