|
Thanks CPallini, a confirmation/denial is what I was looking for.
|
|
|
|
|
You are welcome.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
As CPallini writes: Essentially correct.
But your description is so abstract that it applies as well to functions in any algorithmic language, from Fortran through Algol and Pascal and C and C#. It is certainly not ASM specific.
Actually, I'd say: quite to the contrary ... if your title line hadn't said 'function translated to ASM'. If you hand-code ASM, you have a lot more freedom. E.g. that 'sentence sequence' would not have to be so isolated: a function could have multiple entry points. (For an extreme case: read Jumping into the middle of an instruction ...[^]).
Also, I think that parameter transfer and return of result value(s) is such an essential part of the function concept that it should be included in even the most basic definition/description of the function concept. But again, parameters are certainly not specific to ASM functions; it applies equally to ASM and high level languages.
Rant part:
I really wish that you were right about 'an isolated sentence sequence that gets an ID'. That is the case neither in ASM nor in C-style languages. The ID does not identify the sentence sequence, but the point in the code at the start of the sequence. This is one of the major fundamental flaws in the design of these languages.
In a few other algorithmic languages, such as CHILL, a label identifies a sentence sequence, be it a function, a loop, a conditional statement or whatever. Usually, a sentence sequence is termed a 'block'. You can e.g. break out of any block by stating its ID, even if it is not the innermost one. You can have compiler support for block completion by repeating the block ID at the end, improving readability a lot and catching nesting errors.
If there were a dotNET CHILL compiler out there, I'd gladly kick out C# (even if C# certainly is my favorite alternative in the C class of languages)!
|
|
|
|
|
In ASM there is no distinction between functions and procedures. The name procedure is usually used. You can only CALL a procedure. A function in the high-level sense (a procedure that returns something) is just a variant.
Regarding passing parameters to procedures in ASM. This can be done:
a) By putting values into CPU registers. This works if the number of parameters is small and the parameters are rather simple data types. The procedure has direct access to the parameters by means of the registers. Compilers do this for simple functions/procedures/methods. Of course you may need to save registers to the stack and restore them after return; this is part of the call sequence/stack frame of the procedure.
b) By pushing parameters onto the stack. This is the de facto standard. You can push parameters from left to right (the so-called "Pascal" convention) or from right to left (the so-called "C" convention). The "C" convention works also with procedures that have a variable number of parameters. This is why the C function printf has the format as the first (and mandatory) parameter - it will be on the top of the stack when entering printf, and printf will know where to find it (the format is supposed to correctly describe the number and type of the other parameters via %s, %d etc.).
When returning from the procedure, the parameters that were pushed must be discarded from the stack. This can be done by the caller (the "C" approach) or by the procedure itself (the "Pascal" approach).
E.g. "ADD SP, 24" in the caller, or "RET 24" in the procedure. The C/C++ compilers of course use the "C" approach.
Observe that the caller "knows" exactly how many parameters were pushed onto the stack, so discarding the stack by the caller is more natural. The Windows SDK uses the "Pascal" convention. (A small sketch contrasting the two conventions follows this list.)
When dealing with large objects that must be passed, it's easier to pass them by reference, i.e. to pass an address (a pointer) to the memory area where the object is stored. A pointer is a simple type.
If you really need to pass a large object by value (i.e. make a copy), you can copy the internal representation of the object onto the stack and define the stack frame so that the procedure has access to it. However, this is more time-consuming.
c) Combinations of the above two methods.
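To make the cleanup difference concrete, a minimal C++ sketch (using the MSVC x86 spellings __cdecl and __stdcall; GCC/Clang would need the equivalent attributes, and 64-bit compilers simply ignore these keywords, since they use a single register-based convention):
#include <cstdarg>

// "C" convention: the caller cleans the stack after the call, which is what
// makes a variable number of arguments possible - the callee does not need to
// know how many were pushed.
int __cdecl sum_cdecl(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    int s = 0;
    for (int i = 0; i < count; ++i)
        s += va_arg(ap, int);
    va_end(ap);
    return s;
}

// "Pascal"/stdcall convention: the callee discards its own arguments (RET n),
// so the number of parameters must be fixed.
int __stdcall sum_stdcall(int a, int b, int c)
{
    return a + b + c;
}

int main()
{
    int x = sum_cdecl(3, 1, 2, 3);  // the caller typically follows the call with e.g. ADD ESP, 16
    int y = sum_stdcall(1, 2, 3);   // the callee typically ends with RET 12
    return (x == y) ? 0 : 1;
}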
A procedure can return a value (i.e. become a function) by:
1) A register (if the return value is a scalar type). For Intel CPU, the convention is to return in the accumulator (AL, AX, DX:AX, EAX, etc., depending on the processor type). Observe that scalar types include all numerical values (int, float, double) and pointers.
2) If the result is a large object, things get complicated, because when converting "return t" into machine code, a copy needs to be done somewhere in memory. However compilers can do whatever they want, assuming they don't break the language semantics. A copy could be made onto the stack.
That's why it is best to avoid methods that return objects in C++/C# etc. Instead, pass a reference/pointer to where you want the result to be placed.
See for example: https://en.wikipedia.org/wiki/Copy_elision#Return_value_optimization
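As an illustration of the two styles discussed above (return by value, where the compiler may elide the copy, versus a caller-supplied out-parameter), a small C++ sketch with made-up names:
#include <string>
#include <vector>

struct Big {
    std::vector<int> data;
    std::string      name;
};

// Return by value: the compiler may construct the result directly in the
// caller's storage (copy elision / RVO), but in the general case a copy or
// move can still happen.
Big make_big_by_value()
{
    Big b;
    b.data.assign(1000, 42);
    b.name = "by value";
    return b;
}

// Out-parameter style: the caller passes a reference (at the ASM level just a
// pointer) to the memory where the result should be built.
void make_big_in_place(Big& out)
{
    out.data.assign(1000, 42);
    out.name = "in place";
}

int main()
{
    Big a = make_big_by_value();   // result may be constructed directly in 'a'
    Big b;
    make_big_in_place(b);          // the caller owns the storage, the callee fills it
    return (a.data.size() == b.data.size()) ? 0 : 1;
}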
If you work directly in ASM and are not just interested in interfacing high-level language with ASM-level modules, you can use any combination of the above methods. For example, pass the first parameter by means of a register and the rest onto the stack (there are compilers that do that).
However, I recommend sticking to conventional methods. You never know when you will need to call an ASM procedure from C++ or a C++ method/function from ASM.
In any case, any compiler documents (or should document) exactly how it transfers parameters to procedures and how results are returned by functions. If you need to work at this level, read this carefully, and then make a small interface project. What I described above is merely a top-level sketch.
Unfortunately, ASM is not so much taught in universities nowadays (more just as an addendum to digital electronics) and this is really a pity. Many questions regarding pointers, references, memory allocation, constructors, destructors etc. would become clearer and even obvious to developers if they had a little ASM experience.
|
|
|
|
|
In ASM there is no distinction between functions and procedures. I haven't programmed in a language that makes a syntactically explicit distinction between functions and procedures since I last used Pascal (and that is quite a few years ago). For a short period, I found it difficult to merge the two into one concept, but soon I started asking myself 'Why?'. A function with a void (/null) result is as good a procedure as any!
Regarding passing parameters to procedures in ASM. Again, this is not specific to ASM. Some platforms, such as ARM, define a binary call and parameter interface independent of programming language. If you follow that standard, you can call functions in any other language, and any other language can call your ASM functions. If you do not, then you are misbehaving.
You didn't mention one parameter passing method that was the only viable one on machines with extremely small stacks (like the 8051): The accumulator holds the address of a 'struct'-like block of values, allocated anywhere, possibly statically. The call conventions say that the accumulator is volatile; you never expect it to retain its value when other code is executed, so you do not save/restore it for a function call.
Btw: In the Win32 API, this convention is used for a share of the function calls: (the address of) a single composite struct is passed by the caller. The first word in the struct indicates its size, so when a new, extended version of the function is published, taking more parameters, the name of the function is unchanged, and the extra parameters are added at the end of the struct. The function can see whether the caller wants the old or the new extended functionality from the size of the struct. And it reduces the risk of overflow.
The alternative, used by another share of the Win32 functions, is to extend the function name with an 'Ex' (and an extended parameter specification). Later comes 'FuncExEx', and 'FuncExExEx' and ... there are cases of function names with five 'Ex' suffixes in a row. I think that is extremely messy. I much prefer the 'parameter struct' alternative (and use that philosophy in my own code).
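A small C++ sketch of the size-prefixed 'parameter struct' idea (all names here are made up; in the real Win32 API the same role is played by members like cbSize):
#include <cstdint>
#include <cstring>
#include <iostream>

// Version 1 of the parameter block.
struct FrobParams {
    std::uint32_t size;   // the caller sets this to sizeof() of the struct it was compiled against
    std::int32_t  speed;
};

// A later, extended version adds fields at the end; old callers keep working unchanged.
struct FrobParamsEx {
    std::uint32_t size;
    std::int32_t  speed;
    std::int32_t  retries;   // new field, only present when 'size' says so
};

void Frob(const void* params)
{
    // Read the size first, then copy only as much as the caller actually provided.
    FrobParamsEx p{};
    std::uint32_t size = 0;
    std::memcpy(&size, params, sizeof size);
    std::memcpy(&p, params, size < sizeof p ? size : sizeof p);
    if (size < sizeof(FrobParamsEx))
        p.retries = 1;                        // default for callers built against version 1
    std::cout << "speed=" << p.speed << " retries=" << p.retries << '\n';
}

int main()
{
    FrobParams   oldStyle{sizeof(FrobParams), 10};
    FrobParamsEx newStyle{sizeof(FrobParamsEx), 10, 3};
    Frob(&oldStyle);   // the callee sees the old size and falls back to defaults
    Frob(&newStyle);   // the callee sees the extended size and uses the new field
    return 0;
}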
The C/C++ compilers of course use the "C" approach. By default, that is. I have never used a C/C++ compiler that could not be directed to use Pascal conventions (that is a requirement for calling Win32 functions!). Note that 64-bit Windows has different calling conventions.
discarding the stack by the caller is more natural - so every caller must have code to do the cleanup for every call ... Well, for the simple cases where nothing more is required than an SP update, it is fair enough. In more complex cases (e.g. a non-linear stack), the question is more debatable.
One issue regarding stacks: In recent years, use of threads has become far more common. Often a software system may be implemented by several hundred or even thousands of threads, which are usually preemptable. Each requires its own stack space, which must be large enough to handle the very deepest call sequence that this thread might make. So you could end up tying up quite large amounts of RAM for thread stacks. In theory, every thread might be preempted at its deepest call level, all at the same time. That never happens in practice, so you occupy a lot more RAM than is really needed.
There are machines supporting non-linear stacks. No stack space is initially allocated to the threads; when a call is executed, a stack frame is allocated from the heap. Upon return, the frame is released to the heap. Then no more RAM is occupied than what is in actual, active use at any time. Especially if you implement (possibly parts of) the system as non-preemptible, the compiler can make optimizations to collapse multiple heap allocations/frees into one, to reduce overhead. However, this requires the allocation / release to be handled by the called routine; the caller does not have enough information to handle it.
The "C" convention works also with procedures that have a variable number of parameters. Note that passing a 'parameter struct' (headed by its size) would also handle this.
That's why it is best to avoid methods that return objects in C++/C# etc. Eeeh ... In C#, objects are always addressed through a reference. They are always allocated on the heap. You do not see the reference as such, the way you do in C/C++, but at the binary level, returning a MyObject* in C++ or a MyObject in C# is practically identical.
In any case, any compiler documents (or should document) exactly how it transfers parameters to procedures and how results are returned by functions. I beg to disagree. This is not to be defined by each compiler (/language), but by the machine architecture. All compilers should follow the same conventions, so that you can mix languages freely. One good thing about dotNet is that high level language compilers do not generate binary code; they generate an architecture independent Common Intermediate Language (CIL), which is not transformed to 'real' machine code until the assembly is loaded onto one specific machine, at which time native code for that architecture is generated, regardless of programming language.
Unfortunately, ASM is not so much taught in universities nowadays (more just as an addendum to digital electronics) and this is really a pity. I agree only halfway (or less). Sure, students should learn what the compiler does, with registers and stacks and such, but not for coding ASM themselves.
Much more than ASM mnemonics, programmers need to understand concepts like paging and other aspects of memory management. You do not teach memory mapped files through assembly code! Actually, you do not see the memory management system at all from ASM code (unless you teach OS kernel programming, which is not for the average application programmer). Interrupts are similarly 'invisible' - and equally important, both with regard to execution time costs and for synchronization / protection issues. Note that as early as the mid-1970s, Per Brinch Hansen developed a complete set of synchronization concepts, from simple semaphores through critical regions and monitors, in a high level language, Concurrent Pascal.
Students make a mess of ASM, abusing it in the worst way possible. Generally, they believe that they can make really, really super-fast ASM code, which is simply not true with any modern CPU, using prefetch and pipelining and speculative execution and hyperthreading and whathaveyou of hardware tricks affecting real execution speed.
An extreme/funny example: I was teaching CPU architecture 25-30 years ago, with a few ASM coding exercises on the x86 (which is a terrible architecture for teaching good principles!). I tried to stress that ASM is hard to read; we must code for the best possible readability. To zero AX, you move zero into it: MOV AX, 0. A few students insisted that the right way of doing it is XOR AX, AX - it is faster. No, it is not! I had to dig up timing tables for various x86 CPUs, showing that for the original 8086 you would indeed save one whole clock cycle using XOR, but from the 286 on, the alternatives were equally fast. (We were using 386.) They kept insisting on using XOR, because they 'wanted the code to be optimal for the slowest CPUs'. For the next hand-in, they delivered a code file headed by a comment: 'This is the style our lecturer forces us to code:' - and a readable, clean solution - followed by a large comment block headed by 'This is how REAL programmers would do it:', and the messiest, most unreadable ASM code I have ever seen!
ASM serves no function in code optimization. Long ago, I read the proceedings from the first Conference on the History of Programming Languages (or something like that), where the developers of the first optimizing Fortran compiler told how they had spent days trying to understand how the h* that compiler had found out that the code would run faster if it did so-and-so. Note: These were the people who had developed the optimizing techniques! Modern compilers go much further; there is no way you could do any similar optimizing 'by hand' in ASM. Actually, the same goes for heap management: there are still lots of programmers who believe they can do a better job than a modern GC system. They cannot. (Possible exception: extremely small heaps, e.g. in tiny embedded systems - but in most such cases the right alternative is to abandon dynamic allocation altogether!)
ASM serves a single purpose today: To get access to facilities that cannot be addressed directly through high level languages, such as special registers or peripherals with strange interfaces to the CPU. Commonly, providing such access to an HLL requires less than a dozen instructions. Usually, there are no loops, no jumps - that is handled at the HLL level.
Sometimes you come across architectures where interrupt handlers are activated in special ways, so they cannot be defined as plain functions, but usually C compilers for those architectures offer modifiers for those 'calling conventions'. The last time I needed ASM was when I had to write a couple of dozen instructions to handle a full CPU reset, to set up stack areas etc. before high level code could take over, but that is like OS programming - not something that every application programmer needs to relate to.
I'd prefer to teach 'memory allocation, constructors, destructors etc.' using a high level language (if you consider C 'high level') to manage the data structures etc. I always thought that Donald Knuth made a serious mistake when choosing to illustrate large families of algorithms using (a hypothetical) ASM language rather than a high level language. Conceptually, his The Art of Computer Programming is great, but for all practical purposes the code examples have about zero value today, and had little even 30 years ago. The textual descriptions are not a sufficiently good reason to use this series as a reference work for basic algorithms; you read it for historical purposes only.
|
|
|
|
|
I am trying to keep discussions at a general level, so that statements that were valid 20 years ago are valid today and will be valid 20 years from now, at least with the classic CPU architecture.
1) Regarding stack management:
From the CPU's perspective, when entering a procedure, the stack is just a contiguous memory area defined by a segment descriptor and a stack pointer. It is irrelevant how this memory area was allocated: statically, when the process was started, or dynamically before the call. Dynamically means that somebody must deallocate that area as well.
2) Run-time/development environment matters when choosing how to pass/return parameters:
Allocating memory on the heap is fine, assuming you have a heap in the first place. This assumes calls to the OS to get/release memory, but what if you don't have an OS at all? What if you write code for a dedicated hardware controller and the only memory is statically defined?
There are special environments like space/military/medical in which you are not even allowed to use dynamic memory allocation, for obvious reasons.
3) I still say that a good insight into hardware and into assembly language is essential for becoming a good software engineer. If not, who should have these insights?
I don't write assembler nowadays either, but the fact that I once did helps me write better C/C++/C# code.
4) XOR AX, AX vs. MOV AX, 0
It is not only about speed, but also about instruction encoding.
"XOR AX, AX" occupies just one byte of memory, while "MOV AX, 0" occupies one byte for the op code and 2 bytes for the "immediate" 16-bit operand. If you consider 32 bits, then "MOV EAX, 0" occupies 5 bytes: one for the instruction code and 4 bytes for the 32-bit operand. The compiler treats all immediate operands in the same way.
Following the same logic, on a 64-bit CPU, "MOV RAX, 0" will occupy 10 bytes, since the operand is on 8 bytes, while "XOR RAX, RAX" will be on 2 bytes only (64-bit prefix and op code).
There is also another aspect.
If you want to do compare/conditional operations, you must be sure that the arithmetic flags are correctly set with respect to the value you want to test. A conditional jump "JNZ address" will not work as expected after "MOV AX, 0" (if AX is what you want to test), because "MOV" does not set any of the arithmetic flags, but it will work fine after "XOR AX, AX", because "XOR" does.
So those students who insisted on using XOR instead of MOV were fully right.
|
|
|
|
|
You seem to know a lot about assembly language. Are you aware of any C or C++ compilers that can generate location-independent code? That's been something I've been interested in for a long time.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Linux uses position independent code (PIC) to produce shared libraries. PIC objects can be produced with the -fPIC flag to either GCC or CLANG. More information here: fPIC option in GCC
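For anyone who wants to try it, a minimal sketch (the file and library names are made up; the flags are the standard GCC/Clang ones):
// hello.cpp - a trivial library source file.
// Compile it as position-independent code and link it into a shared library:
//
//   g++ -fPIC -c hello.cpp
//   g++ -shared -o libhello.so hello.o
//
// extern "C" keeps the exported symbol unmangled, so it is easy to find with
// dlopen()/dlsym() or from other languages.
extern "C" int hello_add(int a, int b)
{
    return a + b;
}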
Keep Calm and Carry On
|
|
|
|
|
Neat!
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Rather interesting; worth remembering.
|
|
|
|
|
I just spent an hour screwing around with a simple dialog project. I worked on it yesterday, but today there is no dialog editor listed in the ToolBox. After an hour, and looking at other projects, I said, "elephant this." Closed the project and reopened - now it's there.
wtf. Anyone else seen this and maybe knows how to fix it (other than recycling the project)? I want my hour back.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
I've seen this or something similar before - I believe it was because I had the .rc file open in an editor window.
|
|
|
|
|
That may have been it - I'll have to try it when I get back to the other machine. Usually my experience has been that the graphical dialog editor closes when I want to open the source of the resource file.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
Did you try to open this dialog from the Resource View?
|
|
|
|
|
yes. The dialog was up in the graphical editor. Prior post may have hit on something... will have to try it later.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
I tried a few times by opening the resource file, etc. But each time I go back to the graphical dialog editor, it displays correctly.
Things that make you go hmm.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
Message Closed
modified 15-May-23 19:07pm.
|
|
|
|
|
Didn't we already cover this? How do I pass command to system calll as variable? In particular, see popen(3) - Linux manual page. I still think that QProcess is the way to go if you're in Qt land. You seem to be having difficulties getting that working - my advice would be to write as small a program as you can that uses QProcess to do something like a directory listing, get that working, and then integrate what you learned from that to get it working in your main project.
If you insist, though, you should be able to redirect output, just like you would from the terminal command line, e.g.
const char *command = "rfkill list > textEdit";
system(command);
or, in one line: system("rfkill list > textEdit");
In this instance, I don't see the value in constructing a QString just to then re-format it as a C string for the system() command, and then calling the QString destructor.
Keep Calm and Carry On
|
|
|
|
|
Message Closed
modified 15-May-23 19:07pm.
|
|
|
|
|
The tee command splits output; the man page states:
SYNOPSIS
tee [OPTION]... [FILE]...
For every FILE you add to the end of the command, tee will direct stdout to the given file. So I think you want
rfkill list | tee tempFile
This will send the output to tempFile (in the current working directory) and send it to the screen as well. You can test this from your terminal session.
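If you want the output directly in your program instead of in a temporary file, the popen(3) route mentioned earlier would look roughly like this (a sketch; real code should check for a null FILE* and the pclose() status):
#include <cstdio>
#include <string>

// Run a command and capture its standard output into a std::string.
std::string captureCommand(const char* cmd)
{
    std::string output;
    if (FILE* pipe = popen(cmd, "r"))
    {
        char buf[256];
        while (fgets(buf, sizeof buf, pipe))   // read until the command's stdout is exhausted
            output += buf;
        pclose(pipe);
    }
    return output;
}

// e.g. std::string listing = captureCommand("rfkill list");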
Keep Calm and Carry On
|
|
|
|
|
|
Message Closed
modified 15-May-23 19:07pm.
|
|
|
|
|
11:19:00: Running steps for project HCI_VERSION_622...
11:19:00: Configuration unchanged, skipping qmake step.
Boilerplate startup. As noted, no changes detected, the "qmake" step has been skipped
11:19:00: Starting: "/usr/bin/make" -j4 starting /usr/bin/make with 4 jobs (-j4)
/home/qy/Qt/6.2.2/gcc_64/bin/qmake -o Makefile ../HCI_VERSION_622/HCI_VERSION_622.pro -spec linux-g++ CONFIG+=debug CONFIG+=qml_debug qmake constructs a Makefile
rm -f libHCI_VERSION_622.so.1.0.0 libHCI_VERSION_622.so libHCI_VERSION_622.so.1 libHCI_VERSION_622.so.1.0 remove (rm) outdated project target
g++ -Wl,-rpath,/home/qy/Qt/6.2.2/gcc_64/lib -Wl,-rpath-link,/home/qy/Qt/6.2.2/gcc_64/lib -shared -Wl,-soname,libHCI_VERSION_622.so.1 -o libHCI_VERSION_622.so.1.0.0 main.o mainwindow_hci_v_622.o moc_mainwindow_hci_v_622.o /home/qy/Qt/6.2.2/gcc_64/lib/libQt6Widgets.so /home/qy/Qt/6.2.2/gcc_64/lib/libQt6Gui.so /home/qy/Qt/6.2.2/gcc_64/lib/libQt6Concurrent.so /home/qy/Qt/6.2.2/gcc_64/lib/libQt6Core.so -lpthread -lGL I think that's all one command line : run g++ to create the target
ln -s libHCI_VERSION_622.so.1.0.0 libHCI_VERSION_622.so
ln -s libHCI_VERSION_622.so.1.0.0 libHCI_VERSION_622.so.1
ln -s libHCI_VERSION_622.so.1.0.0 libHCI_VERSION_622.so.1.0 create some soft links, presumably needed for link name resolution for other projects that might use this library
11:19:02: The process "/usr/bin/make" exited normally.
11:19:02: Elapsed time: 00:02. Boilerplate successful completion of project creation
Keep Calm and Carry On
|
|
|
|
|
Message Closed
modified 15-May-23 19:07pm.
|
|
|
|
|
The -Wl,... arguments to g++ are passed on to the linker. For details of the rpath and rpath-link arguments see here: Using LD, the GNU linker - Options
Member 14968771 wrote: -shared -Wl,-soname,libHCI_VERSION_622.so.1 -o libHCI_VERSION_622.so.1.0.0 main.o mainwindow_hci_v_622.o moc_mainwindow_hci_v_622.o
-shared : create a shared object - necessary for producing a shared library (DLL in Windows-speak)
-Wl,-soname,libHCI... : see above document re -soname linker option
The .o files are the object files to put in the shared library
You are correct about the other libraries. They get picked up by rpath and rpath-link, so that when you use the libHCI_VERSION_622 library, you do not have to add them to the link command line.
Keep Calm and Carry On
|
|
|
|
|