|
Look, I disagree with the absolutist position too, and I agree that using gotos is generally bad. I just acknowledge there might be exceptions.
But having maintained code with gotos that I did not write, and having worked on object-oriented code, in my experience the code with gotos had both more bugs and worse bugs. I've had the same experience at more than one job, so I see it as a recurring pattern.
I've also seen object-oriented code that was faster than the older C code. Efficiency depends on many factors besides the language used.
If only encapsulation is used in C++, there is no calling penalty and the generated code is essentially C code.
Also, the overhead of calling a virtual function is often less than that of a switch statement, so if the value being tested is checked inside a loop, moving the switch outside the loop and selecting an object with a virtual function based on the result can be much faster.
In C, you can do the same thing with function pointers, but it's more complicated, and people don't typically write C code that way. In C++, it's very simple to do, and there are other benefits besides.
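A minimal sketch of the hoisting described above; the names (Transform, Double, Negate, Mode) are illustrative, not from any particular codebase. The switch runs once, and the hot loop contains only the virtual call:

```cpp
#include <memory>
#include <vector>

// Strategy objects selected once, outside the loop.
struct Transform {
    virtual ~Transform() = default;
    virtual int apply(int x) const = 0;
};
struct Double : Transform { int apply(int x) const override { return x * 2; } };
struct Negate : Transform { int apply(int x) const override { return -x; } };

enum class Mode { Double, Negate };

void process(std::vector<int>& data, Mode mode) {
    // The switch runs exactly once...
    std::unique_ptr<Transform> t;
    switch (mode) {
        case Mode::Double: t = std::make_unique<Double>(); break;
        case Mode::Negate: t = std::make_unique<Negate>(); break;
    }
    // ...and the loop body is just the virtual call.
    for (int& x : data) x = t->apply(x);
}
```

The equivalent C version would hoist a function-pointer assignment out of the loop; the C++ version just reads more naturally.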
Here's another issue about performance unrelated to the language used:
I once unrolled a loop in a floating-point routine and the code got significantly faster. I did the same thing to a routine that performed the same calculations in integer arithmetic, for a different processor. When I unrolled that loop, the routine got much slower! The integer routine required shifting each product down after every multiplication, and those extra shift instructions made the unrolled code grow past 4 KB, the size of the instruction cache. The code was cache-thrashing.
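For readers who haven't seen it, manual unrolling looks roughly like this hypothetical summation routine: several copies of the loop body per iteration, cutting loop-control overhead, at the cost of a larger body. That size cost is exactly how the integer version above ended up slower once it outgrew the instruction cache.

```cpp
#include <cstddef>

// 4-way manual unroll with independent accumulators (a sketch).
float sum_unrolled(const float* a, std::size_t n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {     // one branch per four elements
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];   // leftover elements
    return (s0 + s1) + (s2 + s3);
}
```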
I've seen the pattern of people imagining their code was faster than some other implementation, when a profiler later showed that the other implementation was actually faster. Modern systems do multiple levels of caching for both instructions and data, do branch prediction, and even do instruction reordering based on both the instruction types and register pressure. Predicting how fast code will run is more complicated today than it ever was. Without using a profiler, it's usually just guesswork.
Finally, languages are tools. Use the right tool for the job. If you think object-oriented languages are slow and buggy, then you definitely don't understand them.
Also, performance is not always the most important attribute of code. If the code takes 10 microseconds instead of 5 microseconds, and the requirement is that it run in 1 millisecond, then I would rather have the simpler, easier-to-maintain implementation, and that is almost always the code written in C++. C is more portable than C++, so C has that going for it, although if one restricts the C++ features used, C++ is probably as portable as C.
But, when you suggest that object-oriented languages are inherently more buggy than C, well, that flies in the face of both research and experience. Well-written C++ hides the data, and that definitely tends to make code less buggy, not more.
|
|
|
|
|
Bill_Hallahan wrote: Without using a profiler, it's usually just guesswork.
That point needs clarification.
I don't deliver code. I deliver systems.
The customers that use my systems don't care if a for loop is faster or not. They do care how fast the business functions of the system are though.
There is no way that I can guess which piece of code is going to prove to be a bottleneck in a system. And I haven't worked with anyone that can do that either. So profiling is always required.
Not to mention, of course, that profiling a system is unlikely to lead to a substantial increase in the application's speed. The only time it does is when it identifies points that were poorly designed or architected in the first place.
Although, to be fair, I haven't worked on anything I would consider a small system for years (perhaps two decades), so that probably colors my experiences.
|
|
|
|
|
jschell:
"Not to mention of course that profiling systems is unlikely to substantially increase the speed of the application. The only time it does lead to substantial increases is when it identifies points which were poorly designed or architected in the first place."
I have found that profiling the code and subsequently optimizing the code can, in some cases, cause significant speedups even in a well-designed and well-implemented system.
However, I don't doubt that optimizing the code is undesirable in whatever problem domain you worked in. Some things can't be made significantly faster, and even when code can be sped up a lot, it's not always necessary.
Digital filters, which are used in audio codecs, video codecs, imaging, and sometimes graphics, can be sped up dramatically, typically by a factor of 1.5 to 3 (or occasionally more), by writing specialized assembly routines that use parallel Single Instruction Multiple Data (SIMD) instructions. Modern Intel and AMD processors have wide registers that allow multiple numbers to be multiplied and added at the same time. Even Intel's parallel compiler cannot handle these parallel instructions optimally in all cases. There's still a need for hand-written assembly code sometimes.
The C or C++ code that implements these algorithms is not badly designed; the compiler often just can't generate the best code using these SIMD instructions today. Compilers can handle much more than in the past, but for some things they still aren't as good as a person.
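To make the SIMD idea concrete, here is a sketch of a 4-at-a-time dot product. It uses SSE compiler intrinsics rather than the raw assembly discussed above, and it assumes an x86 target; `dot_sse` is an illustrative name:

```cpp
#include <immintrin.h>
#include <cstddef>

// Dot product processing four floats per iteration via SSE.
float dot_sse(const float* a, const float* b, std::size_t n) {
    __m128 acc = _mm_setzero_ps();
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);           // load 4 floats
        __m128 vb = _mm_loadu_ps(b + i);
        acc = _mm_add_ps(acc, _mm_mul_ps(va, vb)); // 4 multiplies + 4 adds
    }
    float tmp[4];
    _mm_storeu_ps(tmp, acc);                       // horizontal sum
    float sum = tmp[0] + tmp[1] + tmp[2] + tmp[3];
    for (; i < n; ++i) sum += a[i] * b[i];         // scalar tail
    return sum;
}
```

A filter kernel is the same idea applied to a sliding window; the win comes from doing four (or more) multiply-adds per instruction.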
Intel now sells various routines that do mathematical operations, such as a dot-product, and even a modern video codec, and these routines use the SIMD instructions internally, so less SIMD code has to be written today. The routines don't cover all cases though.
In a previous job, I wrote code to rotate a color image using bicubic filtering. I then sped up rotating the image on a page by a factor of 1.7 by writing specialized assembly code that used SSE1 instructions for part of the calculations. That speedup was very significant and resulted in a competitive product.
Not all the optimizations that I have done involved writing assembly language. For one audio application, I unrolled a loop and that caused a significant speedup. That does make the code harder to read, but I added comments. In this case, the speedup was critical to the application.
For another hypothetical example, similar to something I actually did, imagine the profiler shows most of the time is spent sorting the data, and the code is using a radix sort algorithm. Quicksort is generally better than radix sort, but when sorting 7 or fewer items, quicksort, heapsort, radix sort, and other advanced sort algorithms are all slower than a very simple sort routine. A specialized sort routine written for just 7 items will be very, very fast.
If the problem domain usually had fewer than 8 items to sort, then a good optimization might be to choose the appropriate sort routine based on the number of items to be sorted. I have seen significant improvements from that type of optimization, although this specific case is hypothetical; the actual case where I did something similar is difficult to explain.
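The dispatch-on-size idea might be sketched like this; the threshold of 8 echoes the hypothetical above and is illustrative, not measured:

```cpp
#include <algorithm>
#include <cstddef>

// Tiny insertion sort: no recursion, no pivot selection, cheap for small n.
void small_insertion_sort(int* a, std::size_t n) {
    for (std::size_t i = 1; i < n; ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}

// Pick the routine based on the number of items.
void hybrid_sort(int* a, std::size_t n) {
    if (n < 8)
        small_insertion_sort(a, n);
    else
        std::sort(a, a + n);   // introsort for the general case
}
```

Standard-library sorts do something similar internally (falling back to insertion sort for small partitions), which is itself evidence that the optimization is real.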
My point is, while most of the time the code I worked on didn't need to be optimized, occasionally I've seen optimization result in significant improvements in performance.
To quote Knuth again: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
Of course, Knuth was guessing those numbers. 86.3% of statistics are made up on the spot.
modified 14-Nov-13 0:25am.
|
|
|
|
|
Bill_Hallahan wrote: My point is, while most of the time the code I worked on didn't need to be
optimized, occasionally I've seen optimization result in significant
improvements in performance.
I have no doubt that optimization can make applications significantly faster. However, that is different from saying that you managed this only after profiling.
Bill_Hallahan wrote: Digital filters, which are used in audio codecs, video codecs, imaging, and
sometimes graphics, can be sped up dramatically, such as running from 1.5 to 3
times faster, or rarely even faster, by writing specialized assembly routines
that use parallel (SIMD)...
Sounds quite reasonable. Are you saying that you came across this need, all of it, by profiling the existing application?
|
|
|
|
|
jschell wrote:
"I have not doubt that optimization can make applications significantly faster. However that is different than saying that you managed this only after profiling."
I usually profiled the system, but not always. Sometimes I just timed how long it took to do something. In those cases, I knew what the bottleneck would be without profiling because it was obvious, even to a casual observer without the experience and history I have in this area.
For just one such case, on an embedded processor, I was getting data from an Analog-to-Digital Converter (ADC) using DMA, doing millions of calculations per second on that data, writing the results to an in-memory buffer, and finally sending the processed data to a Digital-to-Analog Converter (DAC). I knew the majority of the time would be spent in the algorithm doing millions of calculations per second, because the input and output buffering would be many orders of magnitude faster.
Sometimes in simple systems, it is obvious what needs to be optimized even without profiling.
I do agree with you: in large systems, profiling is needed. Very simple systems can be understood almost completely, particularly when someone repeatedly implements similar simple systems.
|
|
|
|
|
Stefan_Lang wrote: And I could even choose if I wanted to generate the statemachine using switch or
inheritance! In other words, there are valid alternatives and even tools that
help you generate the code.
None of which however addresses whether those generated solutions are optimal performance implementations.
In my experience, performance optimization based on actual bottleneck analysis can often lead to code that is less than ideal in some other respect, such as design or maintainability.
|
|
|
|
|
Please tell us about your experience.
In some projects I worked on, there was UI code, audio code, and video code. The video code was the bottleneck. I can't imagine it making any sense to change the design of the system to fix the video code. Nor was the overall design of the video codec redesigned. Only minor implementation changes were made.
And, ironically, using a goto for performance optimization is often an example of code that is less than ideal. (Note, I didn't write "always" - I can't know that because I don't know all possible situations - and sometimes that last tiny bit of performance gain does matter - but such situations are certainly extremely rare).
As an aside, to make a generalization (and most generalizations are false, including this one!): the overriding concern for code should be maintainability. Typically, 85% of the cost (or time) of software is maintenance, so making code easy to maintain should usually override all other concerns.
Also, for most software, performance isn't an issue.
So, if a goto is ever justified, it would have to be in a fringe case.
|
|
|
|
|
Bill_Hallahan wrote: Please tell us about your experience.
My ultimate performance impact was a report that would have taken 4-12 hours to run and 3-6 weeks to implement, most of which would have been optimization to get into the lower end of that range. The original performance estimate was based on the time to run an existing report, so the confidence factor was high.
This was all due to one requirement on the report. The business person who requested it took one look at that requirement and promptly stated they didn't need it. After that, it took 1.5 days to implement the report, and it ran in a couple of seconds.
In another case (in the early days of OO adoption), a senior engineer insisted on making everything an object, including, in one case, a value that the specification defined as an integer in the range 0-255. So the engineer required a class for the integer. That single decision led to a dialog box that made the user wait each time it was displayed. I ended up creating a cache for the required class just to speed that up.
Bill_Hallahan wrote: And, ironically, using a goto for performance optimization is often an example of code that is less than ideal. (Note, I didn't write "always" - I can't know that because I don't know all possible situations - and sometimes that last tiny bit of performance gain does matter - but such situations are certainly extremely rare).
I agree with all of that.
But that doesn't address what I was saying. I could point out that I have seen abominable designs that used classes; in fact, I know I created several of those myself long ago.
So misuse is misuse. It is what it is.
But after that, when one has a somewhat ideal implementation, one might find that to eke out just a bit more speed, one needs to modify what would otherwise be a fairly good OO design/architecture, making it less than ideal, because some odd change produces just enough performance that one can deliver it.
Bill_Hallahan wrote: Also, for most software, performance isn't an issue.
Depends on what you mean by "software".
Most deliverables will have either implicit or even explicit performance requirements with respect to business processes.
But most of the code in such deliverables will have nothing to do with making the system faster. Nothing is more irritating than finding out that a junior, or worse, a senior, developer has spent the last week optimizing some piece of code that has nothing to do with the major business requirements on a system that is already late.
|
|
|
|
|
The case I mentioned was an embedded system where the state machines modelled hardware actuators and sensors. And yes, we had critical performance conditions too. The performance of the state machine implementation was never an issue, far from it, even though we did have serious problems staying within the given hard timing limits! And when I say hard timing limits, I mean hard: it wasn't a case of users having to wait a bit longer because of clogged video streaming, it was a case of users potentially being seriously injured or not.
Besides, any decent compiler will translate a switch statement into gotos anyway, so there is no need at all to obfuscate your code and decrease readability and maintainability. Even worse: explicitly programming gotos may prevent the compiler from optimizing your code, so a set of gotos replacing a switch may in fact incur a performance penalty if you're not very careful!
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
We essentially agree that gotos are undesirable the vast majority of the time.
I'm just not positive that gotos are bad all of the time. One can develop tunnel realities, even with decades of experience.
Take the saying at the end of your post: "GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)"
While I expect that is often the case, I would think it untrue in some situations. If a well-managed team found a reason to use a goto, and the team had guidelines, documented in the code around the goto, not to do this elsewhere, then I don't think gotos would necessarily proliferate.
But again, I've always used break or continue and avoided gotos too, and I agree with your philosophy, just not the absolutist part of it.
I don't know absolutely every situation for every program in every context, and there actually is a documented benefit to gotos in some circumstances. Even though there can be dire consequences from using a single goto, the tradeoff might nonetheless be that, without it, something isn't fast enough to do the job. In that case, the developer might decide that practicality takes precedence over good software engineering. I can't prove that situation doesn't exist, so I can't be absolute about the rule.
I don't get why someone posted that gotos were needed for a state machine. A switch statement works, and there are other solutions too. The book "Design Patterns" provides the State pattern, in which (and I expect you already know this, Stefan) different "state" classes derive from a common base class, and switching states is done by switching the type of the object. Virtual methods are called on the current state object.
And, except in rare instances, I suspect a goto isn't much faster than well-written code, and it is probably even slower in some cases. Today, instruction-cache fetch limitations, data-cache fetch limitations, or instruction ordering (the last less of an issue with modern Intel compilers than it used to be) often affect performance more than the number of instructions in the code path. (I know you know this too.) So, in general, the only way today to find out whether code is faster is to profile it. (Note, I wrote "in general"; of course there are exceptions.) I would need pretty strong proof in a particular situation to even consider that using a goto is justified.
Perhaps no such situation exists. I can't prove that though.
|
|
|
|
|
Bill_Hallahan wrote: Take the saying at the end of your post
The keyword is "tend". I liked that statement (also found in this thread, btw.), so I put it in my sig. Please note that I did not state that gotos are always bad, only that there are always alternatives. I agree that such an alternative may only be "equivalent", not better, depending on how you measure "goodness" of the code.
As I said in another post: a decent C++ compiler will translate most control statements into gotos anyway, and it will optimize them in ways you may not even have thought of when devising your variant of goto coding. Therefore there is likely no performance benefit, and you actually risk performance penalties by being explicit about exactly how you like to use and place your gotos.
I think we agree on pretty much everything else anyway. thanks for your thoughtful responses.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
|
I agree.
I love GOTO
Use it all the time
int i = 0;
:beginLoop
if (foo[i] == searchTerm)
GOTO Found;
i += 1;
if (i >= foo.Length())
GOTO NotFound;
GOTO beginLoop;
:NotFound
MessageBox.Show("Not Found");
GOTO ExitBad;
:Found
MessageBox.Show("Found one at " + i.ToString());
GOTO ExitGood;
:ExitBad
return false;
:ExitGood
return true;
What could be more clear and maintainable than that?
MVVM # - I did it My Way
___________________________________________
Man, you're a god. - walterhevedeich 26/05/2011
.\\axxx
(That's an 'M')
|
|
|
|
|
|
On Error goto theCoffeeMachine;
I use that one one-to-many times a day.
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
It's right there in C++ as well....
If your neighbours don't listen to The Ramones, turn it up real loud so they can.
“We didn't have a positive song until we wrote 'Now I Wanna Sniff Some Glue!'” ― Dee Dee Ramone
"The Democrats want my guns and the Republicans want my porno mags and I ain't giving up either" - Joey Ramone
|
|
|
|
|
... for backward compatibility only.
|
|
|
|
|
rgrep goto /path/to/linux/kernel/sources; git blame each of the hundreds of occurrences, and then try to convince those who committed those lines that they are worse coders than you are. Chances are, you're going to fail miserably.
|
|
|
|
|
After far more than 20 years with C/C++ (and other languages too), I put it this way: I strongly recommend not using goto, except when it is really necessary. There should be no dogma, only good reasoning.
There are good reasons for using a goto (most gotos I've seen had none, but a few did). In total, I've personally used it maybe 10 times over all those years, but (as far as I can see) not breaking readability; rather, guaranteeing readability at those particular points. Of course it would have been possible to avoid the gotos there too, but only by breaking the "natural" logic of that code (or at least what seemed "natural" to me).
To make a long story short, I think: "There is no silver bullet."
|
|
|
|
|
1. If it is critical that your code is correct, don't use goto: it can jump out of or into one or more nesting levels, even backwards, and thus makes it considerably harder to verify the correctness of the code.
2. If there is more than one programmer on the team, don't use goto: using it makes it considerably harder for another programmer to understand the flow of the code and what it is supposed to do.
3. If you intend to build on and maintain the code for more than a couple of months, don't use goto: viewing a piece of code that you yourself wrote a couple of months ago is often not so different from viewing another programmer's code - see item 2 above.
Please note that modern programming languages have plenty of alternatives that can be used in many cases where goto could be used. In C/C++, here are some examples:
- to repeat a block of code, use a for, while, or do loop construct rather than jumping backwards
- to skip over a piece of code, use an if block rather than jumping forward
- to skip over the rest of a loop body, use continue
- to exit out of a loop, use break
In C++ you should also use the standard exception handling mechanism rather than using goto as an error exit mechanism. (There is no equivalent in C, so you might argue that in C the use of goto for that purpose is acceptable - but see below!)
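As a sketch of the point above: the classic C "goto cleanup" error-exit idiom can be replaced in C++ by RAII, so every exit path (early return or exception) releases the resource without any cleanup label. `FileCloser` and `first_byte` are made-up names for illustration:

```cpp
#include <cstdio>
#include <memory>

// RAII wrapper: the FILE* is closed on every path out of the function.
struct FileCloser {
    void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};
using FilePtr = std::unique_ptr<std::FILE, FileCloser>;

// Read the first byte of a file; false on any failure.
bool first_byte(const char* path, unsigned char& out) {
    FilePtr f(std::fopen(path, "rb"));
    if (!f) return false;           // early return: no goto, nothing leaks
    int c = std::fgetc(f.get());
    if (c == EOF) return false;     // file still closed automatically
    out = static_cast<unsigned char>(c);
    return true;                    // ...and here too
}
```

In C, each of those early returns would typically be a `goto cleanup;`, which is exactly the "acceptable in C" case mentioned above.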
The main reason, however, that you shouldn't use goto is that there is no benefit. Over the past 30 years I've used it, learned about it, read about it, and had plenty of discussions about it. In all that time I've never heard or read one compelling argument in favor of using it. Yes, you can use it to reduce or avoid nesting, or otherwise reduce the amount of code, but that by itself is not a valid argument in my book.
|
|
|
|
|
Nearly all the static code analysis tools I know operate at the control-flow-graph (CFG) level (i.e., nothing but conditional and unconditional gotos). So how, exactly, does having an explicit goto hinder code verification?
And there are many benefits to using goto. A pity you failed to notice them in the past 30 years. Chances are, you've never implemented a state machine or a threaded-code interpreter, and you've never generated code from a high-level DSL.
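For context, the threaded-code case looks roughly like this toy interpreter. It uses the GCC/Clang labels-as-values extension (not standard C++), with a made-up three-opcode machine; each handler jumps directly to the next handler rather than returning to a central switch:

```cpp
#include <cstddef>

enum Op : unsigned char { OP_PUSH1, OP_ADD, OP_HALT };

// Direct-threaded dispatch via computed goto (GCC/Clang extension).
int run(const unsigned char* code) {
    void* dispatch[] = { &&push1, &&add, &&halt };  // indexed by Op
    int stack[16];
    int sp = 0;                    // stack pointer
    std::size_t pc = 0;            // program counter
    goto *dispatch[code[pc++]];    // jump straight to the first handler
push1:
    stack[sp++] = 1;
    goto *dispatch[code[pc++]];    // each handler dispatches the next op
add:
    sp--;
    stack[sp - 1] += stack[sp];
    goto *dispatch[code[pc++]];
halt:
    return stack[sp - 1];
}
```

The claimed benefit is one indirect jump per opcode instead of a loop-plus-switch round trip; whether that wins in practice is, as elsewhere in this thread, something a profiler has to settle.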
|
|
|
|
|
Tools can only get you so far, and they're not suitable for proving code correctness. So I'm not sure why you brought that up.
As for benefits, I've read and taken part in countless discussions, and not a single example brought up managed to convince me. In every single case there was a suitable alternative using standard control statements. Most of the time, the person bringing the example up either didn't see the proper way or considered the effort of writing 2-5 additional lines of code too much to bear.
Based on that experience I'm convinced that there is always a better alternative. People claiming otherwise are just not sufficiently experienced to see it, or understand the need.
That said, all this assumes you're looking at code where proper coding guidelines and style even make sense: if you're just programming away at a piece of throwaway code, then yes, use whatever suits you best and solves the problem.
In actual production code that is going to live through years of maintenance and adding features, the presumed benefits of goto never outweigh the long term maintenance problems.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
Funny. It was YOU who brought up the correctness and verification issue. I would never imagine mixing it into a goto discussion. Proving code correctness always involves operating on the CFG, no matter whether the proof is manual or automated. Your precious structured coding constructs are eliminated long before you can do any useful reasoning about the code.
For a "single example", are you blind?!? Never seen Linux kernel? Then how can you refer to your nearly non-existent "experience"?
Or, take this code and make it better: [^]
I am 100% sure you cannot keep the same performance without goto. Show me that pathetic "better alternative".
modified 12-Nov-13 6:52am.
|
|
|
|
|
Stefan_Lang wrote: Based on that experience I'm convinced that there is always a better
alternative.
Could be. But I don't write code for fun; I get paid for it, and it is often critical code for which I can't spend weeks finding an optimal solution. The first one that is good enough goes out the door.
Nor can I refactor millions of lines of code every two weeks every time I figure out a "better" way to do it.
And neither can the guy that is going to maintain my code after I am gone.
Stefan_Lang wrote: People claiming otherwise are just not sufficiently experienced to see it, or
understand the need.
Of course there are always people willing to rationalize that their way is "best" despite the fact that they can't demonstrate that with objective data and often can't even construct a coherent argument as to why it is "best".
And technology rationalizations are often based on nothing but technology while ignoring the realities of delivering software in a business environment.
Stefan_Lang wrote: In actual production code that is going to live through years of maintenance and
adding features, the presumed benefits of goto never outweigh the long term
maintenance problems.
That would of course be an excellent argument if in fact none of the following was true.
- Maintenance was the sole and only driving business requirement.
- The business had a firm enough grasp on process control to be able to quantify maintenance costs.
- The process control was structured enough that it could enforce quality on the entire rest of the enterprise and to such an extent that the trivial cost of infrequent code misuse rose above the most miniscule noise level of maintenance cost. Versus for example, no requirements, poor requirements, unused requirements, invalid requirements, zero architecture, chaotic process management, etc, etc, etc.
|
|
|
|
|
jschell wrote: Nor can I refactor millions of lines of code every two weeks every time I figure out a "better" way to do it.
Agreed. The code I'm working on has quite a few gotos, but I don't have the time to dig through it, and I lack the test cases to do a safe refactoring, so they'll remain right there unless I find the code is broken.
jschell wrote: And technology rationalizations are often based on nothing but technology while ignoring the realities of delivering software in a business environment.
Absolutely. The points I've made refer to creating new code, not modifying existing code to either insert or remove gotos. My point is that you shouldn't use goto in new code, or insert it into existing code where there is no goto yet. I claim that if you see no good, or at least equivalent, alternative using other language constructs, then maybe you haven't looked hard enough.
I'm willing to concede that there may be cases where there is a real benefit to using goto over any alternative, but I can't think of an example in C++, as long as you're using a decent compiler with a good optimizer that will translate the alternative control statements into gotos anyway.
jschell wrote: That would of course be an excellent argument if in fact none of the following was true.
- Maintenance was the sole and only driving business requirement.
- The business had a firm enough grasp on process control to be able to quantify maintenance costs.
- The process control was structured enough ...
Maintenance doesn't need to be the sole requirement, and of course it never is. But ignoring it would be a fallacy, unless your application is supposed to be throwaway code that won't be maintained (and I already said that for that kind of code all bets are off; there's no point in discussing coding guidelines at all).
A business unable to quantify and control maintenance cost will soon be out of business, specifically in software development. There's a reason why there are SCRUM, XP, Agile, (R)UP, etc.
Ideally, process control should indeed enforce quality across the entire enterprise; that's what these processes are designed to achieve! We as software developers should strive to contribute towards that goal and leave the decision of whether our efforts result in small or big gains to the project leaders. In my experience, while maintenance cost per year or month is considerably lower than development cost, it is never minuscule, and it adds up over time to the point where it is relevant.
Also, you shouldn't neglect the time you need to fix a bug: if you need double the time because of sloppy coding, that may lower your reputation, resulting in fewer sales. Ask your sales people how much they like that! Of course, at this point you also need to weigh the effect of releasing your product on time with clean code: if you take too long, you may lose market share to a competitor. But at this point we're leaving the scope of this discussion.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|