|
ForNow wrote: can it be my own class which I register at run time via RegisterClass
I am not sure, as I have never tried that. The documentation at CONTROL control - Win32 apps | Microsoft Docs[^] states (although it is slightly ambiguous) that it must be one of the predefined classes. If your code is failing when you try it with a locally defined class then you need to use the debugger to find out why.
|
|
|
|
|
I think the documentation means you have to register the class before calling CDialog::Create. I remember I forgot to call AfxInitRichEdit before doing a CDialog::Create and it failed. I think CDialog::Create checks the dialog template resource and sees whether the classes are registered.
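For instance, a minimal sketch of that order of operations (the class and dialog names are hypothetical, not from the actual project):
// Register the custom class first, so the CONTROL entry in the dialog
// template can resolve "PieControl" to a registered window class.
WNDCLASS wc = { 0 };
wc.lpfnWndProc   = ::DefWindowProc;           // a real control would install its own proc
wc.hInstance     = AfxGetInstanceHandle();
wc.hCursor       = ::LoadCursor(NULL, IDC_ARROW);
wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
wc.lpszClassName = _T("PieControl");          // must match the class name in the .rc CONTROL line
VERIFY(AfxRegisterClass(&wc));

CMyDialog dlg;   // hypothetical dialog whose template contains CONTROL ... "PieControl"
dlg.DoModal();   // now the dialog creation can find the registered class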
Thank you
|
|
|
|
|
I assumed you were doing that anyway, since Windows cannot create an object of a class that it does not know about.
|
|
|
|
|
I've never tried it so I'm not sure exactly how it works, but I notice rather strange capitalization in your class name: "PieCOntrol". I don't know whether Windows is case sensitive or not when looking up a class name.
Mircea
|
|
|
|
|
Thanks, that's the way I had it in my WNDCLASS. I am displaying a pie chart on the side of a dialog box but need a CWnd class to hang it off of; that's the class I'm using for it. I used the cpiedemo sample from this site for learning.
|
|
|
|
|
|
|
|
Hi, I now need to sign a PDF and verify the signature of a signed PDF. Could someone tell me whether there is any C++ library that can do this?
|
|
|
|
|
|
I'm getting a stack overflow exception. It's not from an ASSERT; the storage in my stack frame is corrupted. I have a thread waiting on an event to be notified when a socket read is pending.
I trace the socket number upon entry and it's OK. I initialize a WSABUF array with 15 entries and it's fine. Somewhere around the WaitForSingleObject the stack frame gets corrupted.
I set a data breakpoint on the socket and it fired somewhere in UserCallWinProcCheckWow.
Best to post my code; somewhere in the WaitForSingleObject something happens to my stack frame.
Here is the code for the notification:
int ret1 = WSAEventSelect(mysocket, socksevent, FD_READ | FD_CLOSE);
Here is the CreateThread:
struct threadparm thethreads;
thethreads.thesocket = mysocket;
thethreads.sendwidow = pFrame->m_hWnd;
thethreads.messge = WM_STORAGE;
thethreads.sockevent = socksevent;
struct threadparm* parmptr = &thethreads;
HANDLE threadhandle = ::CreateThread(NULL, 0, SocketThread, (LPVOID)parmptr, 0, &threadid);
Here is the thread with the WaitForSingleObject where somewhere threadptr->thesocket gets corrupted:
DWORD WINAPI SocketThread(LPVOID lphadleparater)
{
    WSANETWORKEVENTS socknetwork;
    struct threadparm* threadptr;
    threadptr = (threadparm*)lphadleparater;
    WSABUF DataBuf[15];
    DWORD recived;
    int i, j;
    DWORD flags = 0;
    WSAOVERLAPPED myoverlap;
    char* sendcopy = new char[3825];
    char* holdptr = sendcopy;
    j = 0;
    for (i = 0; i < 15; i++)
    {
        DataBuf[i].buf = new char[255];
        DataBuf[i].len = 255;
    }
    struct _WSABUF* tcpip = &DataBuf[0];
    DWORD dwWaitStatus = WaitForSingleObject(threadptr->sockevent, INFINITE);
    switch (dwWaitStatus)
    {
    case WAIT_OBJECT_0 + 0:
    {
        int return_code = WSAEnumNetworkEvents(threadptr->thesocket, threadptr->sockevent, &socknetwork);
|
|
|
|
|
It would be easier to look at your code if it was formatted. However, it isn't obvious to me what's causing your stack overflow. Do you have more code that you're not showing?
A stack overflow is caused by too much function call nesting (perhaps because of recursion), too many local variables (usually large arrays), or creating a thread with a smaller stack than it needs (that's the second parameter to CreateThread, and since you're using 0, you're getting Windows' default size, which should be OK).
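For example (a sketch reusing the names from your post; the 4 MB figure is arbitrary), you can rule the stack size out by passing it explicitly:
// Request a larger stack explicitly instead of the default.
HANDLE threadhandle = ::CreateThread(NULL,
    4 * 1024 * 1024,   // dwStackSize: 4 MB instead of the default
    SocketThread, parmptr, 0, &threadid);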
|
|
|
|
|
Is there a way to determine how much storage my local variables are currently taking, maybe as an output from the build?
Thanks
|
|
|
|
|
If there's a compiler option to generate a listing file, that might contain the information. I've never done this in Windows C++, so you'd have to look into it.
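That said, I believe MSVC has an assembly-listing switch, /FAs (assembly interleaved with source); the function prologue in the generated .asm file shows how much space is reserved for locals. Purely illustrative output (exact names and numbers will differ):
cl /c /FAs SocketThread.cpp
; excerpt from the generated .asm listing:
_SocketThread PROC
    push ebp
    mov  ebp, esp
    sub  esp, 1234        ; <-- bytes reserved for this function's locals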
Another way would be to do this at the top of your function:
int first;
// ... declare all the function's other locals between 'first' and 'last' ...
int last;
int size = (int)((char*)&last - (char*)&first);   // distance in bytes, not in ints
if (size < 0) size = -size;
size -= sizeof(int);   // don't count 'last' itself
std::cout << "This function's locals use " << size << " bytes." << std::endl;
|
|
|
|
|
|
This might not work - the compiler doesn't have to maintain the order of variables on the stack, so last could end up before or after first and the other variables.
|
|
|
|
|
Seems to me thethreads was allocated on the stack and has disappeared by the time the thread is executing.
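A minimal sketch of the usual fix (reusing the names from the post; error handling omitted): allocate the parameter block on the heap and let the thread own and free it:
// Heap-allocate the parameter block so it outlives the creating function.
threadparm* parmptr = new threadparm;
parmptr->thesocket = mysocket;
parmptr->sendwidow = pFrame->m_hWnd;
parmptr->messge    = WM_STORAGE;
parmptr->sockevent = socksevent;
HANDLE threadhandle = ::CreateThread(NULL, 0, SocketThread, parmptr, 0, &threadid);

// ... and inside SocketThread, take ownership:
//     threadparm* threadptr = (threadparm*)lphadleparater;
//     ... use threadptr ...
//     delete threadptr;   // the thread frees the block when done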
As Greg was saying, formatting would be nice.
Mircea
|
|
|
|
|
You are right about the formatting. thethreads is in the stack frame of CWinApp, but your point still stands; I'll move thethreads to the class.
Thank you
|
|
|
|
|
What's a function in ASM? My guess is that it's an isolated sentence sequence that gets an ID. When the sequence is called from another sequence, the location in the original sequence where the call takes place is saved in the sequence being called (as some kind of statement that is placed at the end), the execution of the initial sequence is paused, and the traversal/iteration through the called sequence is started. When the execution reaches the last statement, that last statement contains the saved location of the call site and is used to resume the execution in the first/initial sequence.
|
|
|
|
|
Your guess is mostly right.
For a real-life assembly example, see, for instance: 8051 CALL INSTRUCTIONS[^].
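One detail worth adding: on most modern CPUs the return address is not stored inside the called sequence itself but pushed onto a stack (some older machines, e.g. the PDP-8, really did store it at the callee). A sketch of the mechanics, written as comments over trivial C++ (simplified, x86-flavoured):
int add_one(int x) { return x + 1; }   // the callee

int caller()
{
    // CALL add_one:
    //   1. push the address of the instruction after the CALL (the return address)
    //   2. jump to add_one's entry point
    int r = add_one(41);
    // RET inside add_one:
    //   1. pop the saved return address off the stack
    //   2. jump back here, resuming the caller
    return r;
}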
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
Thanks CPallini, a confirmation/denial is what I was looking for.
|
|
|
|
|
You are welcome.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
As CPallini writes: Essentially correct.
But your description is so abstract that it applies as well to functions in any algorithmic language, from Fortran through Algol and Pascal and C and C#. It is certainly not ASM specific.
Actually, I'd say quite the contrary ... if your title line hadn't said 'function translated to ASM'. If you hand-code ASM, you have a lot more freedom. E.g. that 'sentence sequence' would not have to be that isolated: a function could have multiple entry points. (For an extreme case: read Jumping into the middle of an instruction ...[^].)
Also, I think that parameter transfer and return of result value(s) is such an essential part of the function concept that it should be included in even the most basic definition/description. But again, parameters are certainly not specific to ASM functions; they apply equally to ASM and high level languages.
Rant part:
I really wish that you were right about 'an isolated sentence sequence that gets an ID'. That is the case neither in ASM nor in C-style languages. The ID does not identify the sentence sequence, but the point in the code at the start of the sequence. This is one of the major fundamental flaws in the design of these languages.
In a few other algorithmic languages, such as CHILL, a label identifies a sentence sequence, be that a function, a loop, a conditional statement or whatever. Usually, a sentence sequence is termed a 'block'. You can e.g. break out of any block by stating its ID, even if it is not the innermost one. You can have compiler support for block completion by repeating the block ID at the end, improving readability a lot and catching nesting errors.
If there were a dotNET CHILL compiler out there, I'd gladly kick out C# (even if C# certainly is my favorite alternative in the C class of languages)!
|
|
|
|
|
In ASM there is no distinction between functions and procedures. The name procedure is usually used. You can only CALL a procedure. A function in the high-level sense (a procedure that returns something) is just a variant.
Regarding passing parameters to procedures in ASM. This can be done:
a) By putting values into CPU registers. This works if the number of parameters is small and the parameters are rather simple data types. The procedure has direct access to the parameters by means of registers. Compilers do this for simple functions/procedures/methods. Of course you need to save registers to the stack and restore them after return. This is named the call sequence/frame of the procedure.
b) By pushing parameters onto the stack. This is the de facto standard. You can push parameters from left to right (the so-called "Pascal" convention) or from right to left (the so-called "C" convention). The "C" convention also works with procedures that have a variable number of parameters. This is why the C function printf has the format as the first (and mandatory) parameter - it will be on the top of the stack when entering printf, and printf will know where to find it (the format is supposed to correctly describe the number and type of each of the other parameters, like %s, %d etc.)
When returning from the procedure the stack must be discarded of the parameters that were put on the stack. This can be done by the caller (the "C" approach) or by the procedure (the "Pascal" approach).
E.g. "ADD SP, 24" or "RET 24". The C/C++ compilers of course use the "C" approach.
Observe that the caller "knows" exactly how many parameters were pushed onto the stack, so discarding the stack by the caller is more natural. The Windows SDK uses the "Pascal" convention.
When dealing with large objects that must be passed, it's easier to pass them by reference, i.e. to pass an address (pointer) to a memory area where the object is stored. A pointer is a simple type.
If you really need to pass a large object by value (i.e. make a copy), you can copy the internal representation of the object onto the stack, and define the stack frame so that the procedure has access to it. However, this is more time-consuming.
c) Combinations of the above two methods.
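As an illustration of the two cleanup styles (MSVC-specific keywords, meaningful on 32-bit x86 only; 64-bit Windows uses a single convention), the convention can be forced explicitly:
// 32-bit x86, MSVC syntax: the same function under the two conventions.
int __cdecl   SumC(int a, int b) { return a + b; }   // "C" style: caller cleans up (ADD ESP, 8 after the CALL)
int __stdcall SumP(int a, int b) { return a + b; }   // callee cleans up, as in the "Pascal" approach (RET 8)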
A procedure can return a value (i.e. becoming a function) by:
1) A register (if the return value is a scalar type). For Intel CPU, the convention is to return in the accumulator (AL, AX, DX:AX, EAX, etc., depending on the processor type). Observe that scalar types include all numerical values (int, float, double) and pointers.
2) If the result is a large object, things get complicated, because when converting "return t" into machine code, a copy needs to be done somewhere in memory. However compilers can do whatever they want, assuming they don't break the language semantics. A copy could be made onto the stack.
That's why it is best to avoid methods that return objects in C++/C# etc. Pass a reference/pointer to where you want the result to be placed instead.
See for example: https://en.wikipedia.org/wiki/Copy_elision#Return_value_optimization
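A sketch of the two styles (the type and names are made up for illustration):
struct Big { char data[4096]; };                  // a deliberately large object

void FillBig(Big* out) { /* write into *out */ }  // caller supplies the destination: no large copy

Big MakeBig() { Big t{}; return t; }              // by-value return: a copy may be needed,
                                                  // though compilers usually elide it (RVO)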
If you work directly in ASM and are not just interested in interfacing high-level language with ASM-level modules, you can use any combination of the above methods. For example, pass the first parameter by means of a register and the rest onto the stack (there are compilers that do that).
However, I recommend sticking to conventional methods. You never know when you will need to call an ASM procedure from C++, or a C++ method/function from ASM.
In any case, any compiler documents (or should document) exactly how it transfers parameters to procedures and how results are returned by functions. If you need to work at this level, read this carefully, and then make a small interface project. What I described above is merely a top-level sketch.
Unfortunately, ASM is not so much taught in universities nowadays (more just as an addendum to digital electronics) and this is really a pity. Many questions regarding pointers, references, memory allocation, constructors, destructors etc. would become clearer and even obvious to developers if they had a little ASM experience.
|
|
|
|
|
In ASM there is no distinction between functions and procedures. I haven't been programming in languages that make a syntactically explicit distinction between functions and procedures since I last used Pascal (and that is quite a few years ago). For a short period, I found it difficult to merge the two into one concept, but soon I started asking myself 'Why?'. A function with a void (/null) result is as good a procedure as any!
Regarding passing parameters to procedures in ASM. Again, this is not specific to ASM. Some platforms, such as ARM, define a binary call and parameter interface independent of programming language. If you follow that standard, you can call functions in any other language, and any other language can call your ASM functions. If you do not, then you are misbehaving.
You didn't mention one parameter passing method that was the only viable one on machines with extremely small stacks (like the 8051): The accumulator holds the address of a 'struct'-like block of values, allocated anywhere, possibly statically. The call conventions say that the accumulator is volatile; you never expect it to retain its value when other code is executed, so you do not save/restore it for a function call.
Btw: In the Win32 API, this convention is used for a share of the function calls: (the address of) a single composite struct is passed by the caller. The first word in the struct indicates its size, so when a new, extended version of the function is published, taking more parameters, the name of the function is unchanged, and the extra parameters are added at the end of the struct. The function can see whether the caller wants the old or the new extended functionality from the size of the struct. And it reduces the risk of overflow.
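One concrete Win32 instance of the pattern (among many: WNDCLASSEX, OPENFILENAME and OSVERSIONINFO all start with a size member):
// The struct announces which version of itself the caller compiled against.
WNDCLASSEX wcx = { 0 };
wcx.cbSize = sizeof(WNDCLASSEX);   // first member carries the struct size
// ... fill in the remaining members ...
ATOM atom = ::RegisterClassEx(&wcx);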
The alternative, used by another share of the Win32 functions, is to extend the function name with an 'Ex' (and an extended parameter specification). Later comes 'FuncExEx', and 'FuncExExEx' and ... there are cases of function names with five 'Ex' suffixes in a row. I think that is extremely messy. I much prefer the 'parameter struct' alternative (and use that philosophy in my own code).
The C/C++ compilers of course use the "C" approach. By default, that is. I have never used a C/C++ compiler that could not be directed to use Pascal conventions (that is a requirement for calling Win32 functions!). Note that 64-bit Windows has different calling conventions.
discarding the stack by the caller is more natural. So every caller must have code to do the cleanup for every call ... Well, for the simple cases where nothing more is required than an SP update, it is fair enough. In more complex cases (e.g. a non-linear stack), the question is more debatable.
One issue regarding stacks: In recent years, use of threads has become far more common. Often a software system may be implemented by several hundred or even thousands of threads, which are usually preemptable. Each requires its own stack space, which must be large enough to handle the very deepest call sequence that this thread might make. So you could end up tying up quite large amounts of RAM for thread stacks. In theory, every thread might be preempted at its deepest call level, all at the same time. That never happens in practice, so you really occupy a lot more RAM than really needed.
There are machines supporting non-linear stacks. No stack space is initially allocated to the threads; when a call is executed, a stack frame is allocated from the heap. Upon return, the frame is released to the heap. Then no more RAM is occupied than what is in actual, active use at any time. Especially if you implement (possibly parts of) the system as non-preemptible, the compiler can make optimizations to collapse multiple heap allocations/frees into one, to reduce overhead. However, this requires the allocation / release to be handled by the called routine; the caller does not have enough information to handle it.
The "C" convention works also with procedures that have a variable number of parameters. Note that passing a 'parameter struct' (headed by its size) would also handle this.
That's why is best to avoid methods that return objects in C++/C# etc. Eeeh ... In C#, objects are always addressed through a reference. They are always allocated on the heap. You do not see the reference as such, the way you do in C/C++, but at the binary level, returning a MyObject* in C++ or a MyObject in C# is practically identical.
In any case, any compiler documents (or should document) exactly how it transfers parameters to procedures and how results are returned by functions. I beg to disagree. This is not to be defined by each compiler (/language), but by the machine architecture. All compilers should follow the same conventions, so that you can mix languages freely. One good thing about dotNet is that high level language compilers do not generate binary code; they generate an architecture independent Common Intermediate Language (CIL), which is not transformed to 'real' machine code until the assembly is loaded onto one specific machine, at which time native code for that architecture is generated, regardless of programming language.
Unfortunately, ASM is not so much taught in universities nowadays (more just as an addendum to digital electronics) and this is really a pity. I agree only halfway (or less). Sure, students should learn what the compiler does, with registers and stacks and such, but not for coding ASM themselves.
Much more than ASM mnemonics, programmers need to understand concepts like paging and other aspects of memory management. You do not teach memory mapped files through assembly code! Actually, you do not see the MMS at all from ASM code (unless you teach OS kernel programming, which is not for the average application programmer). Interrupts are similarly 'invisible' - and equally important, both with regard to execution time costs and for synchronization / protection issues. Note that as early as the mid-1970s, Per Brinch Hansen developed a complete set of synchronization concepts, from simple semaphores through critical regions and monitors, in a high level language, Concurrent Pascal.
Students make a mess of ASM, abusing it in the worst way possible. Generally, they believe that they can make really, really super-fast ASM code, which is simply not true with any modern CPU, using prefetch and pipelining and speculative execution and hyperthreading and whathaveyou of hardware tricks affecting real execution speed.
An extreme/funny example: I was teaching CPU architecture 25-30 years ago, with a few ASM coding exercises on the x86 (which is a terrible architecture for teaching good principles!). I tried to stress that ASM is hard to read; we must code for the best possible readability. To zero AX, you move zero into it: MOV AX, 0. A few students insisted that the right way of doing it is XOR AX, AX - it is faster. No, it is not! I had to dig up timing tables for various x86 CPUs, showing that on the original 8086 you sure would save one whole clock cycle using XOR, but since the 286 the alternatives were equally fast. (We were using 386s.) They kept insisting on using XOR, because they 'wanted the code to be optimal for the slowest CPUs'. For the next hand-in, they delivered a code file headed by a comment: 'This is the style our lecturer forces us to code:' - and a readable, clean solution - followed by a large comment block headed by 'This is how REAL programmers would do it:', and the messiest, most unreadable ASM code I ever saw!
ASM serves no function in code optimization. Long ago, I read the proceedings from the first Conference on the History of Programming Languages (or something like that), where the developers of the first optimizing compiler, Fortran II, told that they had spent days to understand how the h* that compiler had found out that the code would run faster if it did so-and-so. Note: These were the people who had developed the optimizing techniques! Modern compilers go much further; there is no way that you could do any similar optimizing 'by hand' in ASM. Actually, the same goes for heap management: There are still lots of programmers that believe they can do a better job than a modern GC system. They can not. (Possible exception: Extremely small heaps e.g. in tiny embedded systems - but in most such cases the right alternative is to abandon dynamic allocation at all!)
ASM serves a single purpose today: To get access to facilities that cannot be addressed directly through high level languages, such as special registers or peripherals with strange interfaces to the CPU. Commonly, providing such access to an HLL requires less than a dozen instructions. Usually, there are no loops, no jumps - that is handled at the HLL level.
Sometimes, you come across architectures where interrupt handlers are activated in special ways so they cannot be defined as plain functions, but usually, C compilers for those architectures offer modifiers for those 'calling conventions'. The last time I needed ASM was when I had to write a couple dozen instructions to handle a full CPU reset, to set up stack areas etc. before high level code could take over, but that is like OS programming - not something that every application programmer needs to relate to.
I'd prefer to teach 'memory allocation, constructors, destructors etc.' using a high level language (if you consider C 'high level') to manage the data structures etc. I always thought that Donald Knuth made a serious mistake when choosing to illustrate large families of algorithms using (a hypothetical) ASM language rather than a high level language. Conceptually, his The Art of Computer Programming is great, but for all practical purposes the code examples have about zero value today, and even 30 years ago. The textual descriptions are not a sufficiently good reason to use this series as a reference work for basic algorithms; you read it for historical purposes only.
|
|
|
|
|