|
Having seen some of the code that ChatGPT created and that gets posted in QA, I wouldn't trust an app optimised by it for anything important.
And "important" here means "coming within 100 feet of me or my computer"
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
What it spits out is really appalling sometimes; other times it's not too bad, but I think you are better off googling.
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
And the problem is that the people using it don't know the difference - they just hammer it into the app and walk away ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Ever since the advent of the Web it's become common (and largely acceptable) practice to cut and paste, which is fine if you understand the code. I used to carry several very large, heavy books around for years until I knew and understood them inside out, then buy more and repeat the process.
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Meh, it's fine if what it is doing is immediately verifiable.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Maybe we can use AI to decipher some of your more technical messages on here
|
|
|
|
|
I see people here ranting about ChatGPT. I am sure your post did not refer to such a generalized tool!
I would assume that this will happen quite soon. If you had access to search academic papers, I think you would see some (at least theoretical) advances in the field.
"If we don't change direction, we'll end up where we're going"
|
|
|
|
|
Given that what passes for AI is basically advanced pattern matching, this may be possible. It is my understanding that optimizers already do this; they just don't call it AI.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
They do it, but only go so far.
C++ won't, for example, "de-inline" repeated code for you.
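A minimal sketch of what I mean, with made-up names: the compiler will happily inline and duplicate the clamping below at every call site, but in general it won't factor repeated hand-written copies back out into one routine for you; you have to do that yourself.

#include <algorithm>

// Hand-duplicated clamping logic: a compiler will optimize each copy,
// but it generally won't merge the two copies into one shared function.
int scale_red(int v)   { if (v < 0) v = 0; if (v > 255) v = 255; return v * 2; }
int scale_green(int v) { if (v < 0) v = 0; if (v > 255) v = 255; return v * 3; }

// "De-inlining" by hand: factor the repeated code out yourself.
static int clamp_byte(int v) { return std::clamp(v, 0, 255); }
int scale_red2(int v)   { return clamp_byte(v) * 2; }
int scale_green2(int v) { return clamp_byte(v) * 3; }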
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: you could also use it to optimize program code output during compilation.
Optimizing code seldom provides any measurable impact. At least not in my domain space. But it might do more in yours.
In my domain spaces, optimization is impacted by the following:
1. Requirements (most)
2. Architecture
3. Design (explicit or implicit)
4. Technology/code (least)
Keeping in mind of course that one must be able to differentiate between those parts.
One can, and I have, achieve orders-of-magnitude improvements by requesting changes to 1. But for 4 (per this suggestion), actual work can seldom achieve anything more than about a 1% difference, especially in terms of the user experience.
A profiling tool might be able to find a bottleneck but then one must analyze where the problem lies in the parts above.
|
|
|
|
|
Yeah, I'd have to see data before I wrote it off.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
You are certainly right that the first three points are essential. I take for granted that '3. Design' includes the choice of algorithms and data structures (at multiple levels). (Side remark: I think that the importance of good data structures is highly undervalued in many of today's designs!) Doing these points right might lead to big-O improvements.
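To give a throwaway illustration of that side remark (the names here are invented, nothing from this thread): swapping a linear scan for a hash set is a pure data-structure decision that changes the complexity class, which no later micro-tuning can match.

#include <string>
#include <unordered_set>
#include <vector>

// O(n) per lookup: scan the whole vector every time.
bool contains_slow(const std::vector<std::string>& ids, const std::string& id) {
    for (const auto& s : ids)
        if (s == id) return true;
    return false;
}

// O(1) average per lookup after an O(n) build: a design-level change,
// not something an optimizer or faster hardware will do for you.
bool contains_fast(const std::unordered_set<std::string>& ids, const std::string& id) {
    return ids.count(id) != 0;
}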
Yet, I think that you go too far in stating that 'Optimizing code seldom provides any measurable impact'. I suspect that what you think of as 'unoptimized' code is what your compiler produces by default, and it must be decades since I saw a production compiler that did no optimizing at all! Any common compiler today employs a big crowd of optimizing techniques. Using that result as your base for comparison, you may well find that you 'seldom can achieve anything more than about 1% difference', because the potential has already been taken out.
When I update my computer to new technology and run exactly the same binary applications (no changes in steps 1-3, so the only change is in technology, not even in the coding), I experience a lot more than a 1% speedup. 'Especially in terms of the user experience', as you phrase it.
(My current main PC is of 2016 vintage, so I honestly expect a speedup the day I upgrade, otherwise I would say that technological development has failed miserably ... except for sound and video. I do not expect those to play any faster.)
|
|
|
|
|
trønderen wrote: Yet, I think that you go to far stating that 'Optimizing code seldom provides any measurable impact',
I was referring to what is normally known as 'micro optimization'.
That happens when a developer comes across a bit of code which they 'know' can be written to execute more efficiently, so they rewrite it, without ever attempting an end-to-end test on the application with real business messages to see what impact that specific bit of code has on the enterprise.
trønderen wrote: I experience a lot more than 1% speedup. 'Especially in terms of the user experience', as you phrase it.
I don't.
Coding
1. Most of my time is spent designing and writing code. Hardware can't speed that up.
2. Compiling. I have certainly never seen a compile go from 1 hour to 1 minute regardless of hardware. If it goes from 60 minutes to 50 minutes (more than 1%), I would not even notice.
3. Debugging. Hardware can't help with that.
For some other examples: at my current company, the primary system runs on the biggest cloud boxes available (multiples of them). It can't go bigger. The performance problem is due to a legacy system upgraded piecemeal (not even coherently), with no limits on how the users are allowed to use the system. Up until about six months ago even the public REST API throttle did not work, but even with that, the company motto is just to increase the limit, at no charge, if they have a problem.
Hardware will not fix any of that.
Another example is from a different company. Performance testing with business load demonstrated that the application was as optimized as possible. That was even proven over time with production. The proven (measured) bottleneck was the third party services that the application had to interact with. No way to even fix that problem.
|
|
|
|
|
jschell wrote: 'micro optimization'.
Happens when a developer comes across a bit of code which they 'know' can be written to execute more efficiently
I guess that in 99% of these cases the optimization is 'real' and does save time, except that it had already happened before your developers got their hands on it. They moved invariants out of loops. They removed dead code. They factored out common expressions to be calculated once.
But the compiler had already done all of this. The efforts of your developers made no change to that. You are comparing optimized code (by the compiler) to optimized code (by your developers). Take that as a proof: compilers do as good a job at optimizing as a human. (And often better, but you do not notice it.)
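To make that concrete, here is a hedged before/after sketch (the function names are invented): any mainstream optimizing compiler performs loop-invariant code motion and common-subexpression elimination on the first version anyway, so the hand rewrite typically changes nothing in the generated code.

#include <vector>

// As written: the invariant (scale * 0.5 + 1.0) appears inside the loop
// in the source, but the optimizer hoists it out of the loop for you.
void apply(std::vector<double>& v, double scale) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = v[i] * (scale * 0.5 + 1.0);
}

// Hand-"optimized": the same thing the compiler already did.
void apply2(std::vector<double>& v, double scale) {
    const double k = scale * 0.5 + 1.0;
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= k;
}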
jschell wrote: 1. Most of my time is spent designing and writing code. Hardware can't speed that up.
For the intellectual part, that is mostly true. In my early student days, handing in card stacks to be compiled and run and receiving printouts 24-48 hours later, the hardware definitely affected our mental process of software development. Even when we later got a VAX 750, and our student problems took half an hour to compile, it did affect our way of working.
Today, small-scale system development is usually different; it depends on the setup in your shop. A few years ago, my employer invested in a new, powerful testbed. Earlier, running all the regression tests took a week: we started them Monday morning and had all the regressions reported the next Monday morning. Or rather, we used to start on Friday when we left for the weekend, giving it nine days. With the new test bench, we started the complete regression test suite on Friday when we left, and it was completed Monday morning, two days later. That most certainly did affect our development work!
jschell wrote: 2. Compiling. I have certainly never seen a compile go from 1 hour to 1 minute regardless of hardware. If it goes from 60 minutes to 50 minutes (more than 1%), I would not even notice.
Well, that 60-times speedup you are calling for usually demands more than one hardware upgrade. But through my career, I have seen compile times drop by a factor far larger than 60! Maybe not in a single step, but not that many steps either. If you haven't seen a speedup in compile times, either you haven't been in software for very long, or you are suppressing the memory of earlier experiences.
jschell wrote: 3. Debugging. Hardware can't help with that.
In execution speed: certainly not. In lots of other aspects: of course it can! The ability to trap to the debugger on special conditions can be very helpful, and is far more developed in today's processors than in those of a decade or two ago.
For the problems you mention: if the real problem lies outside your scope, you are most certainly right that trying to fix it within your scope is futile. But that shouldn't paralyze you into leaving the problems that are within your scope unsolved.
|
|
|
|
|
trønderen wrote: I have seen compile times drop by a factor far larger than 60!
I think you read more into what I said than what is there.
Given a current product which is currently running in a production environment, the performance for that, and only that, is impacted by the following:
1. Requirements (highest impact)
2. Architecture
3. Design/implementation
4. Technology (lowest)
The above statement has nothing to do with the history of computing. It is specifically about the real-world, day-to-day needs of developers who might attempt to increase performance.
trønderen wrote: The ability to trap to the debugger on special conditions can be very helpful,
That of course has nothing to do with speed and everything to do with features of the debugger.
By that reasoning, it was a lot easier to debug 50 years ago, because threads did not exist and because the hardware was much slower.
|
|
|
|
|
Your "problem" is that 99,6% of all the optimizations you think of for an AI is already implemented in today's optimizers, and have been for decades.
Years ago, a paper was presented at a 'History of Programming Languages' conference. The presenter told of the development of the first optimizing Fortran compiler, around 1960: the development team frequently sat down to study the generated code, asking each other: how the elephant did the compiler find out that it could do that? They were the ones teaching it, yet they had the feeling that the compiler was living a life of its own.
This was 60+ years ago. Optimizers have learned a lot since then. There has been some interference - the liberty C takes with pointers is like throwing a wrench into the machinery - but learning to handle the noise has made the optimizers even stronger.
Traditionally, optimizers handled a single compilation unit only. Unix promoted the idea of extremely small compilation units, limiting cross-module optimizations. (In other environments, the average module size is usually larger than in Unix environments.) For space optimization, we soon got object code formats where the compiled module is not one monolithic block, but can be loaded piecewise, depending on which external symbols are referenced. Furthermore, the compiler leaves metadata in the object format, allowing the linker to make some adaptations, e.g. depending on whether the module makes direct or indirect recursive calls. These are not very significant optimizations for speed, but they can be noticeable for space.
Modern IDEs do not consider 20-line C functions (12 of the lines a copyleft statement) in isolation. Have you ever used an advanced static code analysis tool? Maybe it will tell you something like
'In module M1, method Ma in class Cx, on line 256 the method M2.I24.Mb() is called with an X argument of 120. When this value is added to the Y value, which will never exceed 40, and the sum is passed to the method M3.J8.Mc() as argument Z, the test at line 442 of that function, if (Z>200) {...}, will never be true, so the if clause is dead code. There are no other calls to M3.J8.Mc() with arguments that will cause the condition to be true.'
Or something like that. When your IDE generates a complete assembly, not just a linkable module, it can use such information to remove the dead code, along with the if test, for this assembly. Even when generating a linkable module, when compiling non-public elements, it can do similar tuning (along with e.g. detecting possible NULL references, out-of-bounds indexing, and lots of other possible pitfalls).
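As a small single-file sketch of that kind of reasoning (all names invented, nothing to do with the M1/Cx notation above): when the optimizer or analyzer can see every caller, the branch is provably dead and can be removed, along with its test.

// The only caller passes x = 120, and y never exceeds 40, so z <= 160.
static int mc(int z) {
    if (z > 200) {          // provably never true for any actual call:
        return z * 10;      // dead code; a tool that sees all callers
    }                       // can remove the branch entirely.
    return z;
}

static int mb(int x, int y) {
    return mc(x + y);
}

int ma() {
    int total = 0;
    for (int y = 0; y <= 40; ++y)
        total += mb(120, y);   // x is the constant 120 everywhere
    return total;
}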
Today, the more advanced code analyzers are independent tools that do not generate code, but you see more and more of that kind of functionality creeping slowly into IDEs. IDEs also typically make use of database-like structures for publishing metadata about modules for which the source code may be unavailable. The compiler consults this info for all referenced modules, adapting its code generation to what it finds about the called methods.
One optimization is available with JIT systems: for a fully compiled, executable program, you either cannot make use of architectural extensions such as advanced instruction sets, or the program must carry code both for using the extensions and for emulating them if they are not available. In e.g. dotNet, the jitter may have the emulation code available as IL, but when generating code for a machine that provides the extension, the emulation is omitted, along with the availability test: the jitter checked that itself before generating the final code.
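For the fully compiled case, a rough sketch of the 'ship both paths plus a check' pattern that a jitter can avoid; it assumes a GCC/Clang x86-64 build, since __builtin_cpu_supports and the target attribute are compiler extensions.

#include <cstdint>

// Portable fallback: emulate the popcount instruction in plain C++.
static int popcount_soft(std::uint64_t x) {
    int n = 0;
    while (x) { x &= x - 1; ++n; }
    return n;
}

#if defined(__GNUC__) && defined(__x86_64__)
// Compiled for the POPCNT extension; only safe to call after the check below.
__attribute__((target("popcnt")))
static int popcount_hw(std::uint64_t x) {
    return __builtin_popcountll(x);
}
#endif

int popcount(std::uint64_t x) {
#if defined(__GNUC__) && defined(__x86_64__)
    // A fully compiled binary has to ship both paths plus this runtime check;
    // a jitter would test the CPU once and emit only the matching code.
    if (__builtin_cpu_supports("popcnt"))
        return popcount_hw(x);
#endif
    return popcount_soft(x);
}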
Code that is fully optimized with all the well-established methods of today (including complete flow analysis) is so good that I cannot see how AI could make any major improvement.
|
|
|
|
|
I agree.
"99,6% of all the optimizations you think of for an AI is already implemented in today's optimizers, and have been for decades."
IBM's PL/I compiler (1970s), one of the first really acceptable optimizers in the marketplace, was capable of many passes over your code to optimize it. The same goes for their Fortran compiler.
Once, I asked ChatGPT to create C code for a binary tree traversal that did not use recursion. I found lots of faults in it.
I am sure it can create usable code, but there are no guarantees.
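For reference, this is roughly the kind of thing I was asking for, sketched here in C++ rather than C: a minimal non-recursive in-order traversal using an explicit stack (the node type is just an illustration).

#include <stack>
#include <vector>

struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
};

// In-order traversal without recursion: an explicit stack replaces the
// call stack that the recursive version would use.
std::vector<int> inorder(const Node* root) {
    std::vector<int> out;
    std::stack<const Node*> st;
    const Node* cur = root;
    while (cur != nullptr || !st.empty()) {
        while (cur != nullptr) {        // walk down the left spine
            st.push(cur);
            cur = cur->left;
        }
        cur = st.top(); st.pop();       // visit the deepest unvisited node
        out.push_back(cur->value);
        cur = cur->right;               // then traverse its right subtree
    }
    return out;
}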
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
|
|
|
|
|
I have a gradient structure in SVG that has all the data necessary for rendering, but in order to create it I actually need more information than the structure contains to begin with, like bounding information for the gradient. If I add that data to the structure, it will increase the memory requirements for my SVG across the board. I need to figure out a way to transfer the information I only use while creating the gradient structure to the point in my code where the shapes get built, without modifying my core svg_gradient structure. It's such a stupid problem, and it wouldn't have worked this way if I had been the one who designed it initially, although admittedly the way it works presently probably has a lower memory footprint than what I would have designed.
The gradient thing is taking longer than all of the rest of it.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
"Any problem can be solved with enough layers of indirection."
|
|
|
|
|
I figured it out. What I did in this case is, in effect, create structures that shadow my final structures, keeping the same data as the final structures plus additional data. These structures are used for building the document and are thrown away once the associated shape structures that use them are inserted into the final shape list for the SVG rasterizing engine.
So now, instead of creating an svg_gradient, you create an svg_gradient_info. However, to do this I "poisoned" my structures all the way up to svg_shape_info; basically what I mean by that is that I had to spin off these shadow/"info" structures all the way up the hierarchy. It's somewhat unfortunate, but no less efficient than anything else I would have had to do, and the usability of it is what I need.
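Roughly the shape of it, stripped way down, with the field contents invented for illustration (the real structs carry much more than this):

// Final structure kept by the rasterizer: no bounds, small footprint.
struct svg_gradient {
    float stops[8];     // placeholder for the real gradient data
    int stop_count;
};

// Build-time shadow structure: the same data plus what is only needed
// while constructing the document. Thrown away after the shape is
// added to the final shape list.
struct svg_gradient_info {
    svg_gradient gradient;
    float bounds[4];    // extra build-only data, e.g. the bounding box
};

// Consume the info struct, produce the lean final struct.
svg_gradient make_gradient(const svg_gradient_info& info) {
    // ... use info.bounds to resolve the stops, then drop the bounds
    return info.gradient;
}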
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Gradients can be a slippery slope (pun or no pun intended) without some sort of mathematical definition/formulation. Easier said than done, though.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
The good news is I don't have to come up with my own. SVG dictates all of that mess.
The bad news is I can't just come up with my own. SVG dictates all of that mess.
Fortunately, I have a reference implementation (ugly, unmaintained C code, but it works)
I'm just rearchitecting it, separating the concerns of parsing and building the document, such that I can build documents programmatically. Initially the code was just a parser and a rasterizer, so I refactored about 5000 lines and started producing builder interfaces.
I'm debating whether to change the parser code to use my builder code instead of building the end structures itself, as I'm kind of duplicating effort there. On the other hand, the parser needs a lot more information, like tag ids and such, that isn't used in the final structures, so I need to make up my mind on that score.
But that's another can of worms. Right now I just need to get the gradients to work; I bring up the parser because the parser is where all of that gradient logic currently resides, and oh boy, is it ugly.
So this is fun. And I've got a cold, so my ability to focus has been compromised, but I need to keep occupied right now or I'll just be miserable.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Comprende! Thanks for that, got it.
Take care of yourself.
There is only one of you.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
I know you have it working, and I think your solution is not too far off from this…
This reminds me of working with Windows GDI Brush objects.
They were standard fill patterns, but you had to “realize” the brush against your target GDI context before you could use it to fill a shape on that context.
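In modern Win32 terms (a rough sketch from memory, not the old 16-bit realize step itself), that selection against the target device context looks roughly like this:

#include <windows.h>

// Fill a rectangle on a window's DC with a hatch brush. The brush is a
// standalone object until it is selected into (used against) the DC.
void fill_demo(HWND hwnd) {
    HDC dc = GetDC(hwnd);
    HBRUSH brush = CreateHatchBrush(HS_DIAGCROSS, RGB(0, 128, 255));
    HGDIOBJ old = SelectObject(dc, brush);   // "realize" against this DC
    Rectangle(dc, 10, 10, 200, 120);         // filled with the selected brush
    SelectObject(dc, old);                   // restore before deleting
    DeleteObject(brush);
    ReleaseDC(hwnd, dc);
}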
|
|
|
|
|