|
trønderen wrote: I have seen compile times drop by a factor far larger than 60!
I think you read more into what I said than what is there.
Given a product currently running in a production environment, the performance of that product (and only that) is impacted by the following:
1. Requirements (highest impact)
2. Architecture
3. Design/implementation
4. Technology (lowest impact)
The above statement has nothing to do with the history of computing. It is specifically about the real world day to day needs of developers that might attempt to increase performance.
trønderen wrote: The ability to trap to the debugger on special conditions can be very helpful,
That of course has nothing to do with speed and everything to do with features of the debugger.
By that logic, it was a lot easier to debug 50 years ago, because threads did not exist and because the hardware was much slower.
|
|
|
|
|
Your "problem" is that 99.6% of all the optimizations you think of for an AI are already implemented in today's optimizers, and have been for decades.
Years ago, a paper was presented at a 'History of Programming Languages' conference. The presenter described the development of the first optimizing Fortran compiler, around 1960: the development team frequently sat down to study the generated code, asking each other: how the elephant did the compiler find out that it could do that? They were the ones teaching it, yet they had a feeling the compiler was living a life of its own.
This was 60+ years ago. Optimizers have learned a lot since then. There has been some interference - the liberty C takes with pointers is like throwing a wrench into the machinery - but learning to handle the noise has made the optimizers even stronger.
Traditionally, optimizers handled a single compilation unit only. Unix promoted the idea of extremely small compilation units, limiting cross-module optimizations. (In other environments, the average module size is usually larger than in Unix environments.) For space optimization, we soon got object code formats where the compiled module is not one monolithic block but can be loaded piecewise, depending on which external symbols are referenced. Furthermore, the compiler leaves metadata in the object format, allowing the linker to make some adaptations, e.g. depending on whether the module makes direct or indirect recursive calls. These are not very significant optimizations for speed, but they can be noticeable for space.
Modern IDEs do not consider 20-line C functions (12 of the lines a copyleft statement) in isolation. Have you ever used an advanced static code analysis tool? Maybe it will tell you something like
'In module M1, method Ma of class Cx, line 256 calls the method M2.I24.Mb() with an X argument of 120. That value, added to the Y value (which never exceeds 40) and passed to the method M3.J8.Mc() as argument Z, means the test at line 442 of that function, if (Z>200) {...}, can never be true, so the if clause is dead code. No other calls to M3.J8.Mc() pass arguments that could make the condition true.'
Or something like that. When your IDE generates a complete assembly, not just a linkable module, it can use such information to remove the dead code, along with the if test, for this assembly. Even when generating a linkable module, when compiling non-public elements, it can do similar tuning (along with e.g. detecting possible NULL references, out-of-bounds indexing, and lots of other possible pitfalls).
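The kind of finding such a tool reports can be pictured in a small C sketch. The names (ma, mb, mc) are taken loosely from the hypothetical message above; clamp_y and the exact shapes of the functions are my own stand-ins, assumed purely for illustration:

```c
/* Hypothetical sketch of the situation the analyzer describes:
   every call site passes X = 120, and Y never exceeds 40, so
   Z = X + Y <= 160 and the (Z > 200) branch is provably dead. */
static int clamp_y(int y) { return y > 40 ? 40 : y; }

static int mc(int z) {
    if (z > 200) {        /* dead: whole-program analysis proves z <= 160 */
        return -1;        /* so the compiler can delete this branch */
    }
    return z;
}

static int mb(int x, int y) {
    return mc(x + clamp_y(y));   /* x is 120 at every call site */
}

int ma(int y) { return mb(120, y); }
```

A compiler seeing only mc in isolation must keep the branch; only cross-module value analysis of every caller lets it go.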
Today, the more advanced code analyzers are independent tools that do not generate code, but you see more and more of that kind of functionality slowly creeping into IDEs. IDEs also typically use database-like structures to publish metadata about modules for which the source code may be unavailable. The compiler consults this info for all referenced modules, adapting its code generation to what it finds about the called methods.
One optimization available with JIT systems: for a fully compiled, executable program you either cannot make use of architectural extensions such as advanced instruction sets, or the program must carry code both for using the extensions and for emulating them if they are not available. In e.g. dotNet the jitter may have the emulation code available as IL, but when generating code for a machine that provides the extension, the emulation is omitted, along with the availability test - the jitter checked for itself before generating the final code.
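The cost a JIT avoids can be sketched in C. Here has_extension() is a hypothetical stand-in for a real probe such as CPUID, and both sum variants are deliberately trivial; the point is only that an ahead-of-time binary must ship both bodies plus the dispatch, while a JIT would emit exactly one body and no test:

```c
/* Sketch: what an ahead-of-time compiled binary must carry.
   has_extension() stands in for a real CPU-feature probe. */
static int has_extension(void) { return 0; /* pretend: not available */ }

static int sum_fast(const int *a, int n) {     /* would use SIMD */
    int s = 0; for (int i = 0; i < n; i++) s += a[i]; return s;
}

static int sum_fallback(const int *a, int n) { /* plain scalar loop */
    int s = 0; for (int i = 0; i < n; i++) s += a[i]; return s;
}

/* AOT: both paths live in the binary, and the check runs at runtime.
   A JIT would have generated only one of the bodies, check-free. */
static int sum(const int *a, int n) {
    return has_extension() ? sum_fast(a, n) : sum_fallback(a, n);
}
```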
Code that is fully optimized with all the well-established methods of today (including complete flow analysis) is so good that I cannot see how AI could make any major improvement.
|
|
|
|
|
I agree.
"99.6% of all the optimizations you think of for an AI are already implemented in today's optimizers, and have been for decades."
IBM's PL/I compiler (1970s), one of the first really capable optimizers in the marketplace, could make many passes over your code to optimize it. Same with their Fortran compiler.
Once, I asked ChatGPT to create binary tree traversal code in C that did not use recursion. I found lots of faults in what it produced.
I am sure it can create usable code, but no guarantees.
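For reference, the standard way to do this is an explicit stack that replaces the call stack of the recursive version. This is my own sketch of an in-order traversal, not what ChatGPT produced; the fixed max_depth bound is an assumption a production version would replace with a growable stack:

```c
#include <stdlib.h>

typedef struct node { int value; struct node *left, *right; } node;

/* In-order traversal without recursion, using an explicit stack of
   at most max_depth pending nodes. Writes visited values into out
   and returns the number of nodes visited. */
int inorder(node *root, int *out, int max_depth) {
    node **stack = malloc(sizeof(node *) * (size_t)max_depth);
    int top = 0, count = 0;
    node *cur = root;
    while (cur != NULL || top > 0) {
        while (cur != NULL) {        /* descend left, saving the path */
            stack[top++] = cur;
            cur = cur->left;
        }
        cur = stack[--top];          /* leftmost unvisited node */
        out[count++] = cur->value;
        cur = cur->right;            /* then traverse its right subtree */
    }
    free(stack);
    return count;
}
```

For a binary search tree this visits the values in sorted order.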
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
#Worldle #633 1/6 (100%)
🟩🟩🟩🟩🟩🎉
https://worldle.teuteuf.fr
easy
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
I have a gradient structure in SVG that has all the data necessary for rendering, but in order to create it I actually need more information than the structure contains to begin with - like bounding information for the gradient. If I add the data to the structure, it will increase the memory requirements for my SVG across the board. I need to figure out a way to transfer the information I'm only using for creating the gradient structure to the point in my code where the shapes get built, without modifying my core svg_gradient structure. It's such a stupid problem, and it wouldn't have worked this way if I had been the one to design it initially, although admittedly the way it works presently probably has a lower memory footprint than what I would have designed.
The gradient thing is taking longer than all of the rest of it.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
"Any problem can be solved with enough layers of indirection."
|
|
|
|
|
I figured it out. What I did in this case is create structures that shadow my final structures, holding the same data as the final structures plus additional data. These structures are used for building the document, and are thrown away once the associated shape structures that use them have been inserted into the final shape list for the SVG rasterizing engine.
So now instead of creating an svg_gradient you're creating an svg_gradient_info. However, to do this, I "poisoned" my structures all the way up to svg_shape_info - basically what I mean by that, is I had to spin off these shadow/"info" structures all the way up the hierarchy. It's somewhat unfortunate, but no less efficient than anything else I would have had to do, and the usability of it is what I need.
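The shadow/"info" pattern might look something like this minimal sketch. The type names svg_gradient and svg_gradient_info come from the post, but the fields (stops, bounds) and the finish_gradient helper are hypothetical, invented here to illustrate the shape of the idea:

```c
/* Final, memory-lean structure stored in the shape list. */
typedef struct svg_gradient {
    float stops[4];           /* hypothetical rendering data */
} svg_gradient;

/* Build-time shadow: the final structure plus data needed only
   while constructing it (here, a hypothetical bounding box). */
typedef struct svg_gradient_info {
    svg_gradient gradient;    /* same data as the final structure */
    float bounds[4];          /* build-only: discarded after insertion */
} svg_gradient_info;

/* Resolve anything that depends on the build-only data, then keep
   only the lean part; the info structure can then be thrown away. */
svg_gradient finish_gradient(const svg_gradient_info *info) {
    /* ... would resolve coordinates against info->bounds here ... */
    return info->gradient;
}
```

The extra memory lives only as long as the builder does, so the per-document footprint of svg_gradient itself is unchanged.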
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Gradients can be a slippery slope (pun or no pun intended) without some sort of mathematical definition/formulation. Easier said than done, though.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
The good news is I don't have to come up with my own. SVG dictates all of that mess.
The bad news is I can't just come up with my own. SVG dictates all of that mess.
Fortunately, I have a reference implementation (ugly, unmaintained C code, but it works)
I'm just rearchitecting it and separating concerns of parsing and building the document, such that I can build the documents programmatically - initially the code was just a parser and a rasterizer, so I refactored about 5000 lines, and started producing builder interfaces.
I'm debating replacing the parser code so it uses my builder code instead of building the end structures itself, as I'm kind of duplicating efforts there. On the other hand, the parser needs a lot more information, like tag IDs and such, that isn't used in the final structures, so I need to make up my mind on that score.
But that's another can of worms. Right now I just need to get the gradients to work, but I bring up the parser, because the parser is where all of that gradient logic currently resides, and oh boy is it ugly.
So this is fun. And I've got a cold, so my ability to focus has been compromised, but I need to keep occupied right now or I'll just be miserable.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Comprende! Thanks - got it.
Take care of yourself.
There is only one of you.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
I know you have it working, and I think your solution is not too far off from this…
This reminds me of working with Windows GDI Brush objects.
They were standard fill patterns, but you had to “realize” the brush against your target GDI context before you could use it to fill a shape on that context.
|
|
|
|
|
Wordle 850 4/6
🟩🟨⬛⬛⬛
🟩⬛🟨⬛⬛
🟩🟩⬛⬛🟩
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 850 2/6
🟩🟩⬜⬜🟨
🟩🟩🟩🟩🟩
One of those lucky breaks.
|
|
|
|
|
Wordle 850 2/6
⬜🟨🟨🟨⬜
🟩🟩🟩🟩🟩
Lucky day today.
|
|
|
|
|
Wordle 850 3/6
🟩🟨🟨⬜⬜
🟩⬜🟩⬜⬜
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 850 2/6
🟩🟨🟨⬜⬜
🟩🟩🟩🟩🟩
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
Wordle 850 3/6
⬛⬛🟨⬛⬛
🟩🟨⬛⬛⬛
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 850 2/6*
🟩⬜⬜⬜⬜
🟩🟩🟩🟩🟩
Yay! Another good (but incredibly lucky) guess!
It's actually boring when that happens - I don't have to think much to work it out from the "clues" when a wild guess gets it right ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
⬜⬜🟨⬜⬜
⬜⬜⬜🟨⬜
⬜🟨🟩🟩🟩
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Wordle 850 4/6
⬜🟨⬜⬜⬜
⬜⬜⬜⬜⬜
🟩⬜🟨🟩⬜
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 850 4/6*
⬜🟨⬜🟨🟨
⬜🟨🟨⬜🟩
🟩🟨⬜⬜🟩
🟩🟩🟩🟩🟩
Happiness will never come to those who fail to appreciate what they already have. -Anon
And those who were seen dancing were thought to be insane by those who could not hear the music. -Friedrich Nietzsche
|
|
|
|
|
Wordle 850 3/6
🟩🟨⬛⬛⬛
🟩⬛⬛🟨🟨
🟩🟩🟩🟩🟩
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Wordle 850 3/6
⬜⬜⬜⬜⬜
🟨🟨⬜🟩⬜
🟩🟩🟩🟩🟩
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
I just tried to pay my health service premium online. The health service's payment service requests an email address and password. It also has an option to choose one of your previously used logins. I was horrified to see it cough up website URLs, login IDs, AND passwords going back I don't know how many years. Unbelievable!
Update: Upon further inspection, as Richard points out below, this list of logins is coming from the browser, not the website. I mostly use Firefox. The list pops up even on my own website, BirdBuffs, and it is definitely not from my own code. Edge is doing something similar, probably Chrome as well. No doubt it's safe for now, at least until the bad guys hack it.
|
|
|
|
|
How is this a Windows security problem?
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
|
|
|
|