|
Why don't you, just once, speak for yourself? Never mind what your Guru has to say about anything. How many years have you done all those things? On how many different systems? Everything you say may apply to you. You may not be productive with something. You may be too lazy to learn and understand the fundamentals before relying on all kinds of mechanisms and gadgets. You may be helpless when those things fail because you don't have a clue what might be going on.
Don't be so arrogant as to assume that everybody is like that. Many of us have forgotten more about those things than you have ever known, and did those things when you were still a possibility in the gene pool. And you don't have to take our word, or any Guru's word, for it. We can prove it.
At least artificial intelligence already is superior to natural stupidity
|
|
|
|
|
Inline asm allows a developer to access hardware that C/C++ alone cannot reach. An example of this is the CPUID machine instruction. Before C#, if you wanted information about your hardware, you would write something like this:
unsigned long value;
_highestFunction = 0;

// The tail of the vendor string arrives in ECX; the union lets us view it
// both as a long (for the switch) and as individual bytes (for the swap).
union VendorUnion
{
    unsigned long vendorLong;
    unsigned char vendorArray[4];
} vendorTail;
unsigned char temp;

__asm
{
    mov eax, 0x00        ; CPUID function 0: highest function + vendor string
    cpuid
    mov value, eax
    mov vendorTail.vendorLong, ecx
}
_highestFunction = value;

// Reverse the byte order so the long matches the multi-character
// constants in the switch below.
temp = vendorTail.vendorArray[0];
vendorTail.vendorArray[0] = vendorTail.vendorArray[3];
vendorTail.vendorArray[3] = temp;
temp = vendorTail.vendorArray[2];
vendorTail.vendorArray[2] = vendorTail.vendorArray[1];
vendorTail.vendorArray[1] = temp;

// Each constant is the last four characters of a known vendor string,
// e.g. 'ntel' from "GenuineIntel" or 'cAMD' from "AuthenticAMD".
switch (vendorTail.vendorLong)
{
    case 'ter!': _vendorID = eAMDK5; break;
    case 'cAMD': _vendorID = eAMD; break;
    case 'auls': _vendorID = eCentaur; break;
    case 'tead': _vendorID = eCyrix; break;
    case 'ntel': _vendorID = eIntel; break;
    case 'Mx86': _vendorID = eTransmeta; break;
    case 'aCPU': _vendorID = eTransmeta; break;
    case ' NSC': _vendorID = eNationalSemiconductor; break;
    case 'iven': _vendorID = eNexGen; break;
    case 'Rise': _vendorID = eRise; break;
    case ' SIS': _vendorID = eSiS; break;
    case ' UMC': _vendorID = eUMC; break;
    case ' VIA': _vendorID = eVIA; break;
}

unsigned long _eax, _ebx, _ecx, _edx;
__asm
{
    mov eax, 0x01        ; CPUID function 1: feature flags
    cpuid
    mov _eax, eax
    mov _ebx, ebx
    mov _ecx, ecx
    mov _edx, edx
}

if (_highestFunction >= 3)
{
    __asm
    {
        mov eax, 0x03    ; CPUID function 3: processor serial number
        cpuid
    }
}
Writing that same code in pure ASM would be a pain, but here you can use the higher-level language where appropriate and drop into assembly to access the hardware. That the ASM is platform dependent is irrelevant here, because accessing the platform is the whole point. The only time C# or Java can outperform C++ is in a hot loop, where the VM's JIT can rearrange the code based on runtime behavior; C++ is statically compiled, so its speed is constant. Other than that, I am aware of no instances of C# or Java being faster than well-written C++.
|
|
|
|
|
So, you're saying you can still "do" PL1? (Actually, it was PL/I; which I "did").
What you "did" and what you can "do" (expertly) today are two different things.
While I may have been an "expert" 1401/7010 Autocoder at one time, mentioning it today is meaningless.
|
|
|
|
|
I would not be able to code PL/I at the level I was doing it in the '80s, although I still remember segments of PL/I, JCL and ISPF. I wrote channel drivers. I wrote an implementation of Kermit in PL/I. Yeah, I think I had it wired. Today, I'd have to get reacquainted with it. I was definitely an expert and was made the IBM mainframe advisor for General Dynamics Western Division.
As far as meaningless, I disagree. One of the strengths I bring to a job is the vast diversity of languages I've used. There is nothing new under the sun and I've reverted to ideas that I was exposed to, to implement "novel" things today. I am not familiar with Autocoder but I'm certain there were novel constructs that would be applicable to solving problems today.
I started C++ with CFront on MPW. I've been with it to the present. However, it isn't a one tool fits all. I use other tools when they make sense.
The aversion to the "I hate C++" post is due to the history of seeing this posted and then accepted as fact. I work for Qualcomm. I use C and C++ a lot. When I recruit at local technical colleges and the hardest thing the students have developed with is Java, I find it hard to take their education seriously. I want to see students who know assembly: not so that they can code in it, but to demonstrate they have some knowledge of what is going on under the covers. We want to know that the person can resolve issues not only at the software level, but at the hardware level as well.
But to answer your question, I do a lot of things expertly. I've worked on many projects that have personally affected you or those around you.
|
|
|
|
|
Vasily Tserekh wrote: please dont tell me inline asm
Why not? The ability to take control of the generated code locally, without cumbersome calling conventions, Win32 interop wrappers or any marshalling of data types, allows very precise and effective optimizations. I know you think that processors are fast enough now, but that's a very strange thing to hear from somebody who wants to write game engines. Anyway, there are also far more limited devices, like microcontrollers, where you cannot afford to be wasteful or enjoy the comforts you are so accustomed to.
What next? Oh, yes, C++ includes C. That allows you to simply forget about OOP. I know, the Guru you have learned from is now having a heart attack and you are probably about to faint, but that can actually be a good thing. It makes the generated code smaller, and the additional performance overhead for objects is eliminated as well. Again, this is a blessing when you are working on slower devices. In C++ you can do that if you must; C# will never allow you to go there. But that's OK, because those devices often could not host the .NET Framework anyway.
Has the Guru recovered yet? OK, let's have some multiple inheritance in C++ then. I can inherit from several base classes? Oh yes, that can be problematic, but it's really helpful when you know what you are doing. And spare me the lecture about interfaces. An interface is nothing more than a purely abstract base class, and I have to implement it separately in every class that inherits from it. That very quickly leads to one of my oldest enemies, called redundancy. Once again, C# will not allow you to do something because some guy at Microsoft considered all of us too dumb to use it correctly.
Now let's finish the poor Guru off, shall we? It's not quite as bad as with those Java guys, whose religion obviously forbids even thinking of managing memory, but what makes you guys think that entrusting the management of one of the most essential resources in the entire system to some dumb mechanism like a garbage collector is a good idea? That thing takes its share of both memory and CPU just to find out what might now be freed up. C++ gives you both the privilege and the responsibility to take your objects' lifecycles into your own hands. Memory is allocated precisely when you need it and released precisely when you want it to be. So, what are you going to do when you have produced a memory leak? That's not quite as impossible as both the Java and .NET Gurus would like to have you believe. How do you find it? How does the great framework assist you? And what means do you have to take things into your own hands if that's the only solution? You will probably have to get stuck big time before you ask yourself whether a little comfort and some blissful ignorance are worth that trouble.
And last, good old C++ will allow me to allocate and access data in memory far more efficiently than a managed environment will. With real primitive data types (which are not classes) I can construct data structures that take not a byte more or less than intended. Then I will use pointer arithmetic to access the data, and even optimize the pointer calculations by adjusting the structures' size so that I can use shift operations instead of far slower multiplications. In short, I will get more data into the same memory and access it faster than you ever will by stuffing it into collections. Have you ever heard of a game engine where none of that was an issue?
Let's stop here for now. I have torn down enough of your Guru's holy icons and don't want to have the poor man on my conscience
At least artificial intelligence already is superior to natural stupidity
modified 27-Apr-12 13:39pm.
|
|
|
|
|
http://fbe.am/5JO
If you are so clever, why don't you tell me what's wrong here!
CDP1802 wrote: With real primitive data types (which are not classes) I can construct data structures that take not a byte more or less than intended
Have you ever studied the .NET interop namespaces and marshalling? Read a little bit and then post a clever answer.
|
|
|
|
|
I can barely restrain myself
What is it? Another one of your failures? And what does it have to do with our topic? Right now I have the impression that you are a very entertaining spambot.
At least artificial intelligence already is superior to natural stupidity
|
|
|
|
|
(from the code in your link)
void TfrmMain::GetDir()
{
    TFileListBox* ListBox = new TFileListBox(this);
    frmMain->InsertControl(ListBox);
    AnsiString way = ListBox->Directory;
    way = way + " audio";
    way[way.Length() - 5] = 92;    // 92 is the ASCII code for '\'
    dir = way;
    way = way.SubString(0, way.Length() - 6);
    ListBox->Directory = way;
    frmMain->RemoveControl(ListBox);
}   // note: ListBox is never deleted
If it's your code, I think you may want to learn a little more before complaining about C++. One thing you certainly should take into account is that C++ is not a managed language and constructing objects and never deleting them will lead to memory leaks.
|
|
|
|
|
Situations like those are when I usually pull down the OS symbols and install them. Break the app in the debugger while the errant error dialog is being displayed and look at the stack trace. One can't necessarily debug the OS code with the symbols, but they allow the debugger to generate an accurate stack trace. NT's function names are remarkably informative. Between the function names and Google, I can often get a pretty good clue about what the OS is trying to do on my behalf. Occasionally, when I'm really lucky, I can even look at the parameter values being passed (when the debugger deigns to report them) and start to figure out what's wrong.
Then, there are always printfs, or their GUI equivalent, MessageBox() calls. Make them unique and try to bracket the line of your code that's causing the error. And the best part: no debugger is required. You can even debug fully optimized production code if you have to (how else does one find an optimizer bug?).
Or you can do as suggested and comment out blocks of code in a sort of binary search for the troublesome line, although I find that a lot harder to do in practice than the above techniques.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
|
|
|
|
|
The point the other two posters were making is that there are two exe files created: one in the debug folder and one in the release folder. Both are the application; the one in the debug folder should work just like launching from the IDE. (I'm not sure breakpoints are recognized outside the IDE. I seem to remember they aren't without linking to the mapping file. I don't remember the file type, but if you map to it and the program blows up, the IDE is brought up for you and your local environment at the time of the crash is available for viewing. I read how to do this in 2005, very useful at the time, but I got away from programming for a while, so I don't remember the details.)
So, do you KNOW you are double-clicking the DEBUG folder's version of the exe and it doesn't work? Do you know how to bring up the IDE from a double-click?
Earlier, you said the drive letter you started from is causing the problem. From that, I assumed you started the app from a cmd file. Since you are double-clicking, are you navigating to the problem drive in the app? If not, how do you know it is a drive permission problem?
|
|
|
|
|
Your environment may be different when you run under the debugger versus just launching the exe file. Check PATH when executing both. Also, as I used to do, instrument the program with print statements. You can do a binary search with them: one at the beginning, one in the middle and one at the end of the program. See which ones print and move them around accordingly.
THE OLE HACK
|
|
|
|
|
Did you try compiling your code with another IDE?
|
|
|
|
|
Vasily Tserekh wrote:
yes I know but at least in managed languages you get a nice error message, not a blank error message, that's why I hate C++
You "hate" C++ because you don't understand it.
To be clear, in .NET languages like C# and VB.NET (and similarly in Java), when there is an unhandled fault, the end user is presented with an exception message that includes a stack trace. Generally you would never want an end user to see a stack trace.
You can get exactly the same thing in C++ if you choose to, but it requires additional work that is done for you in a managed environment. Generally, stack traces are available in a debugger.
/* Charles Oppermann */
http://weblogs.asp.net/chuckop
|
|
|
|
|
I don't like C++ either, but it's because I rarely use it and I like the pampering you get with managed code. I do like the IDE and I definitely don't completely understand it.
I have to agree with you, that I don't like it when something goes wrong and I don't understand what is happening to cause the problem.
It's worse when I don't understand what it is that I need to understand.
I have to admit to berating the perpetrator after I've traced the source of the problem. For some reason, I don't take it personally when I'm berating myself. (Why is that? How much more personal can you get?)
|
|
|
|
|
Chris is right - and it isn't even just software. A couple of times I have had problems with complex hardware prototypes not working - until you put an oscilloscope probe in the right place to monitor what the software is doing to it, and the problem goes away... That, my friend, is when the nightmares start.
Ideological Purity is no substitute for being able to stick your thumb down a pipe to stop the water
|
|
|
|
|
OriginalGriff wrote: That, my friend is when the nightmares start.
[grimace]
Ahh, many a time in frustration have I yelled in agony and suggested we ship a scope with each piece of hardware when that god-awful delightful scenario occurs.
[/grimace]
|
|
|
|
|
Ahh, a Heisenbug.
Another good one is a race condition between threads, where sometimes one thread and sometimes the other 'wins' the race.
At least artificial intelligence already is superior to natural stupidity
|
|
|
|
|
That "uncertainty" sure shows up in a lot of places. One time I didn't have an IDE and I was getting duplicates. I combed over the code and couldn't find the logic problem. I inserted write statements, and the problem disappeared. I'd had this happen in the past, and back then the problem reappeared when I took out the write statements. This time, the problem stayed gone and has stayed gone since.
It's frustrating that I'll never know what caused the problem or exactly what I did that fixed it. When it reappeared that earlier time, I think it took a week to find the single line of code that was the problem, and about the 20th time re-reading that line.
|
|
|
|
|
I remember fixing unstable hardware by putting a probe in the right place.
|
|
|
|
|
All you guys let that one go?!
|
|
|
|
|
The boy is already in trouble with C++ and you are threatening him with an oscilloscope.
How about forgetting to connect your circuit board to ground, so that your embedded device is not running at all?
|
|
|
|
|
Too simple. How can you overlook that you have no power at all?
At least artificial intelligence already is superior to natural stupidity
|
|
|
|
|
Do you think I spent all day on it? Not at all, I found it right away. But can I blame the microcontroller for this?
|
|
|
|
|
Yes, of course you can. It will not help very much and you will not find out very much that way, but if it makes you feel better...
At least artificial intelligence already is superior to natural stupidity
|
|
|
|
|
That's happened to me a few times (occasionally with C++ Builder too!). In each case it has been a variation on:
Different memory (especially heap space) allocations between the debug and live environment (even on the same machine); the debugger initialising memory to something sensible, thus hiding an uninitialised variable somewhere in my code; the debugger locating the program in a different area of physical memory, thus avoiding a faulty patch of RAM that, by coincidence, wasn't normally used by the OS either; and so on.
In other words, this type of problem is almost always memory related in some way, or possibly due to uninitialised use of a physical device or internal queue in some way that the debugger covers up when it sets up the debug environment.
One technique that can sometimes help catch this is to use a remote debugging session run from a second machine - often the remote debug stub will not do as much pre-execution setup as an integrated debugger, thus leaving the problem undisturbed, or at least different.
I remember one particular problem porting a program from C++ Builder to Linux, where the C++ Builder program loader initialised class memory to 0 before starting the program, whereas GNU C++ didn't. It took me a while to spot that, as the program only failed when run without gdb (which placed it in an area of memory that was, by chance, all 0s!).
Oh, happy days...
8)
|
|
|
|
|