|
0, if you build an optimized version of the C++ code (see my full answer below, but basically the C++ compiler works out that the body of the loop is a no-op and throws the whole loop away!)
|
|
|
|
|
I was only trying to do the easiest optimization to beat 2.5 sec.
;P
Russell
|
|
|
|
|
Do you know if Java has the concept of Debug vs. release builds?
|
|
|
|
|
I don't think the Java-to-bytecode compiler has different options here. I'm more familiar with C#, where the C#-to-IL compiler csc does have a /debug option. This has the effect of inserting a certain number of nop IL instructions to ensure that line number information lines up and that you can put a breakpoint on some lines that otherwise wouldn't be possible (for example on the closing brace of a block). Further, it emits a DebuggableAttribute which the JIT compiler detects - this disables JIT optimizations to make the code easier to debug. I don't know if Java JIT compilers have similar options.
Compiling a similar C# program with C# 2.0 gives 10.8s on my home computer with /debug:full and 1.078 seconds with /debug:pdbonly. With /debug- it's 1.062s. I should say here that my home computer ran the unoptimized C++ version in 14.4s - it's a Core 2 Duo T7200-based laptop (2.0GHz) whereas my work computer is a P4 3.0GHz with HyperThreading enabled. It looks, from sticking a call to System.Diagnostics.Debugger.Break at the top of Main and looking at the disassembly in VS, like it's managed to eliminate the contents of the loop but not the loop itself; however, it's keeping and manipulating the loop counter variable in a register.
|
|
|
|
|
In fact, the optimizer should have taken care of this. It should have put the int into a register.
What went wrong?
Though I speak with the tongues of men and of angels, and have not money, I am become as a sounding brass, or a tinkling cymbal. George Orwell, "Keep the Aspidistra Flying", Opening words
|
|
|
|
|
Yes, but optimization is something that is not always clear or certain for the user... look at your words:
jhwurmbach wrote: The optimizer should have taken care of this
The only way to know what happens after compilation is to look at the assembler code... or, better, don't try to compare the same code written in different languages without considering optimizations, code shortcuts, assembler translation, ...
Optimization is too wide a subject to think you can understand everything from a single code sample.
Russell
|
|
|
|
|
Russell wrote: optimization is something that is not always clear or certain for the user
Sure.
Mike Dimmick did his study while I was writing my reply.
I got 19.3 and 3.4*10^-7 seconds in debug and release, respectively.
I took this as "quite a lot" and "practically nothing" and asked where the 2.5 seconds came from. I doubt that there is a computer 10 times as fast as my dual Xeon 2.4 GHz...
Though I speak with the tongues of men and of angels, and have not money, I am become as a sounding brass, or a tinkling cymbal. George Orwell, "Keep the Aspidistra Flying", Opening words
|
|
|
|
|
The 2.5 seconds comes from Java; he got 11 seconds in C++... I think he was using DEBUG mode.
...but... you know... they are only useless numbers.
Russell
|
|
|
|
|
jhwurmbach wrote: What went wrong?
answer:
jhwurmbach wrote: should have
/\
||
right there. Don't assume the compiler fixes everything. Compile time is the absolute worst time to try to figure out what a programmer "wants" to achieve and then attempt to get there. Some compilers do a better job than others, so it depends on which compiler you are using. Is that a Fuji apple or a Granny Smith?
I am not saying every programmer has to learn what the compiler does with their code. But if you are actually concerned with, or absolutely need, performance, then you MUST know what the compiler will do with your code. Hand tuning an algorithm might take half an hour and save you 75% of your execution time. Fixing a large error could save you considerably more. A compiler can only do so much with junk code before it also outputs junk.
Compilers aren't mind readers and none that I know of have heuristic analysis to try to figure out what your intent was. Until they make a mind reading compiler, we're all in the same boat. A compiler actually listens to what we tell it, even when we want it to obey what we mean, not what we type.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
We were concerned with the speed of execution, and when the compiler's optimisation did not kick in (as it should have), I asked "Why?"
On the whole, I agree.
This is part of why the original posting was so senseless:
What good is a fast nonsense loop, when all Java applications have a sluggish GUI anyway?
When the application has a bottleneck you determine it and fix it. Probably with a design change or a different algorithm.
But you certainly are not paid for doing the compiler's work. It has to, and will, optimize code in ways I cannot possibly do by hand.
Though I speak with the tongues of men and of angels, and have not money, I am become as a sounding brass, or a tinkling cymbal. George Orwell, "Keep the Aspidistra Flying", Opening words
|
|
|
|
|
jhwurmbach wrote: We were concerned with the speed of execution, and when the compiler's optimisation did not kick in (as it should have), I asked "Why?"
How do you know it did not kick in? It may have assumed you needed that code. Some people use hard-coded loops without external references as timing loops. It all depends on which compiler you use, what options you set, and what assumptions you allow your compiler to make. Knowing what this means for your code is very important. If you, as some people have, use dead code as a sleep, then you may not want it optimized away. Does your compiler have an option to remove unused code? See, it is more than a "magic" compile-and-it-works.
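If you really do want a do-nothing loop kept around as a crude delay, the usual C++ trick - just a minimal sketch, not something from the original post - is to make the work impossible to prove dead:
// Marking the counter volatile forces the compiler to emit every store,
// so this deliberate busy-wait survives optimization.
volatile long spin = 0;
for (long i = 0; i < 100000000; ++i)
    spin = i;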
jhwurmbach wrote: a different algorithm
Exactly my point, as a matter of fact. If you know your compiler optimizes certain algorithms better than others, because it is able to understand the logic, you begin adapting your algorithms to match the compiler without even knowing it. The problem with that is that when you compile the same code on a new compiler, the performance suddenly changes. The reason is that, either consciously or inadvertently, you have tuned the code to match the optimization of that one compiler, not the new one. Knowing the strengths and weaknesses of your own compiler is not a bad thing.
As I said, I understand that if you are not concerned with performance, you don't need to do this. But if you really are concerned with performance, writing code that cooperates with the optimization analysis of the compiler is certainly a good thing.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
It very much depends on the actual problem, and on the optimization settings you select, for C++. I would expect that a decent optimizer could detect that the operations within the loop cancel each other out, and it might even conclude that the loop itself doesn't do anything, leaving you with no code whatsoever. Indeed, when compiled with the optimization options /Oxs, that is exactly what Visual Studio 2005 does. So in fact does Visual C++ 6.0.
I was trying to make it do more work, so I asked it to print the final values of j and k. It simply passed the literals 5 and 5.0 into printf. I used the /FAs switch to generate an assembly listing with source code. This can be daunting if you're not familiar with x86 assembly, but it's pretty obvious that the compiler didn't generate any instructions for the loop. (It even did crazy tricks like loading the address of GetTickCount, which I used for time measurement, into a register!)
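For reference, the loop being discussed looks roughly like this in C++ (a reconstruction - the original source isn't quoted in this thread - with GetTickCount as the timing call mentioned above):
#include <cstdio>
#include <windows.h>

int main()
{
    DWORD start = GetTickCount();

    int j = 5;
    double k = 5.0;
    for (int i = 0; i < 1000000000; i++)
    {
        j++; j *= 2; j /= 2; j--;   // nets out to 5 every iteration
        k++; k *= 2; k /= 2; k--;   // nets out to 5.0 every iteration
    }

    // Printing j and k at the end is what keeps them alive at all;
    // with /Oxs the compiler still just passes the literals 5 and 5.0.
    printf("j=%d k=%f took %lu ms\n", j, k, (unsigned long)(GetTickCount() - start));
    return 0;
}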
On my computer a debug build, with optimizations disabled, took 23 seconds to execute. The 'optimizations disabled' version is incredibly naive, as close to a literal translation as possible, and actually performs the operations as coded, although it performs some straightforward conversions (e.g. multiplying and dividing integers by 2 by bit-shifting, and using floating-point addition to multiply k by 2). It also stores results back into main memory rather than just leaving them in a register, for example.
Almost all Java implementations now use a JIT (Just-In-Time) compiler. This is able to perform some of the optimizations that a C++ compiler can do and so can eliminate some of the unnecessary operations (like avoiding write-back to memory for local variables), but not all.
If you want to do a real test, you'll have to actually make the operations somehow dependent on the value of the loop variable, so the compilers don't just throw the code away, and allow the C++ compiler to optimize the code.
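Something along these lines (a sketch, assuming the same iteration count) gives both compilers real work that cannot be discarded:
#include <cstdio>
#include <windows.h>

int main(int argc, char** argv)
{
    // Deriving the trip count from argc keeps it unknown at compile time.
    const int n = 1000000000 + argc;

    DWORD start = GetTickCount();

    long j = 0;
    double k = 0.0;
    for (int i = 0; i < n; i++)
    {
        j += i & 1;     // the body now depends on the loop variable...
        k += j * 0.5;   // ...and both results are printed, so the loop must run
    }

    printf("j=%ld k=%g took %lu ms\n", j, k, (unsigned long)(GetTickCount() - start));
    return 0;
}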
|
|
|
|
|
And where is the C++ code? I see only Java in your post. Also, which compiler are you using and what optimizations?
|
|
|
|
|
You can write bad code in C++ and you can write bad code in Java. When you try to compare them you can deliberately show the advantages of either one over the other, but only for that example. It can give you a false sense of what each language is capable of. It is a "little knowledge is a dangerous thing" type of problem.
Seriously, if you are going to try to compare apples and oranges, understand what the strengths and weaknesses are. A good compiler can save you from some of your errors, but only so far.
How about 0.75 seconds for C++? If I turn on full analysis, as mentioned above, I get 0 seconds. The Intel compiler can figure out that the loop's variables are never used and removes it as dead code. A print of the values of k and j at the end will keep the code. Oh... I forgot, that is on my slowest computer....
That is the point of understanding how the language and compiler/framework work together. If you only know the "syntax" of any language, you only know half of what you are doing.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
That's because Java has an internal piece of code that gets inserted secretly and resembles this...
for (int i = 0; i < 1000000000; i++) {
    // ***************************
    // Java specific internal code
    if ((bInVeryBigLoop==true) && (bCodeServesNoObviousPurpose==true)) {
        // User must be running a benchmark against c++
        break;
    }
    // End java specific internal code
    // *******************************
    int j = 5;
    j++;
    j *= 2;
    j /= 2;
    j--;
    double k = 5.0;
    k++;
    k *= 2;
    k /= 2;
    k--;
}
|
|
|
|
|
This comes scarily close to reality...
Though I speak with the tongues of men and of angels, and have not money, I am become as a sounding brass, or a tinkling cymbal. George Orwell, "Keep the Aspidistra Flying", Opening words
|
|
|
|
|
This is driving me mad.
I just started to get error C2664 for things I have been doing for years!
I have an Edit Box with a control variable m_cRange and a string used for formatting data for these Edit boxes for display.
e.g.
CString g_szDisplayStr;
CEdit m_cRange;
double Range;
Both these statements produce the C2664 error:
g_szDisplayStr.Format("%0.4f", Range);
m_cRange.SetWindowText(g_szDisplayStr);
Have I screwed up my project settings? It's an MFC C++ Dialog application.
Andy.
|
|
|
|
|
C2664 mentions two types. What are they? Is this a Unicode application?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
The compiler option Use Unicode Response Files is set to Yes, but I still get the error when it is set to No.
This line of code
g_szDisplayStr.Format("%8.4f", WGS84Latitude);
Gives the error below:-
Error 2 error C2664: 'void ATL::CStringT<BaseType,StringTraits>::Format(const wchar_t *,...)' : cannot convert parameter 1 from 'const char [6]' to 'const wchar_t *' c:\model\model\modeldlg.cpp 546
|
|
|
|
|
Andy202 wrote: This line of code
g_szDisplayStr.Format("%8.4f", WGS84Latitude);
Gives the error below:-
Because it should be:
g_szDisplayStr.Format(_T("%8.4f"), WGS84Latitude);
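In a Unicode build (the default for VS2005 projects) CString is a wide string, so Format() expects a wide format string. A short sketch of the usual options:
// _T() picks the right literal type for the active character set.
g_szDisplayStr.Format(_T("%8.4f"), WGS84Latitude);   // compiles as ANSI or Unicode
g_szDisplayStr.Format(L"%8.4f", WGS84Latitude);      // Unicode-only alternative
m_cRange.SetWindowText(g_szDisplayStr);              // CString converts to LPCTSTR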
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
OK David, thank you.
But next stupid question: my previous program worked OK.
Have new system files for the environment been downloaded?
|
|
|
|
|
With VS6, ANSI is the default. With VS200x, Unicode is the default.
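Roughly speaking, tchar.h does something like this (simplified), which is why _T("...") compiles under either setting:
#ifdef _UNICODE
    typedef wchar_t TCHAR;
    #define _T(x) L ## x    // _T("%8.4f") becomes L"%8.4f"
#else
    typedef char TCHAR;
    #define _T(x) x         // _T("%8.4f") stays "%8.4f"
#endif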
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Thanks again David.
Just one final question.
Whenever I create a control variable, it takes about 4 minutes. I think all the PCs on the network are being accessed. I cannot find a setting to limit this activity.
Many thanks,
Andy.
|
|
|
|
|
Andy202 wrote: Whenever I create a control variable, it takes about 4 minutes.
Within the IDE?
Andy202 wrote: Whenever I create a control variable...I think all the PCs on the network are being accessed.
I'm not aware of any correlation between the two. Does unplugging the network cable have an effect?
"A good athlete is the result of a good and worthy opponent." - David Crow
"To have a respect for ourselves guides our morals; to have deference for others governs our manners." - Laurence Sterne
|
|
|
|
|
Yes, it works OK with the cable unplugged,
and it is within the IDE.
|
|
|
|
|