Hello,

Well, the title of this question is perhaps self explanatory, but at the same time I already know the answer to that question is "No". So allow me to elaborate my question and goal a bit.

There used to be a 12-byte integer type in C++ a few years back, at least in Microsoft's compiler. But even then that didn't make sense, because as you know, C++ is converted to Assembly code, and the two largest integer types in Assembly are, to my knowledge, TBYTE and REAL10 -- both 10 bytes in size, with REAL10 being a floating point type and TBYTE an integer type. So my first question: as the 12-byte type I mentioned earlier is/was an intrinsic, non-structure type, how could it possibly be supported in Assembly? What if you assigned a value so big it would need more than the 10 bytes provided by, say, TBYTE?

My second question brings .NET into the mix. We have the 12-byte System.Decimal type, and all things point to it also being intrinsic. If you look at the IL-code generated when you compile your program, you can see it is also not a structure, a library, or anything else other than a type that appears to be directly supported by MSIL and one that is handled just fine by your computer, taking up no more RAM than those 12 bytes.

How is that possible if Assembly doesn't provide a type of this size or a feature to handle values of this size? I know full well that .NET programs are compiled to a different set of binary instructions than unmanaged C++ programs, but you can still use a program like MASM32 to disassemble .NET programs -- and that makes perfect sense to me, since .NET binaries still consist of the most basic instructions that the processor has to be able to recognize.

So, to sum up what became a longer question than I intended:
1) How can I write a program in Assembly that can handle 12-byte values with intrinsic types?
2) If 1) isn't possible, then how is it possible in C# / .NET?

Thank you.

What I have tried:

I have tried looking for the 12-byte datatype I want online.
I have tried analyzing a disassembled .NET program using such a type, but to no avail.
I have tried writing my own code to handle such values, but this is not really what I want.
Posted
Updated 26-Oct-20 4:18am
Comments
raddevus 26-Oct-20 10:05am    
I've often wondered about this also and this is a great question. Thanks for asking it here where there are people who can offer some answers. Really great. Thanks
deXo-fan 28-Oct-20 5:43am    
I appreciate your thanks, raddevus, and I hope you get as much out of the solutions as I have, because as you've already noticed, they gave us some amazing answers!

Yes, it can be done: just because something isn't directly supported by the processor doesn't mean you can't use it in assembler. Think of a BigInt value where each decimal digit is stored in a nibble, so two digits fit in a byte, and an arbitrary number of them can be put together in a block of memory to form an arbitrarily large number that you can still do math with -- provided you write the +, -, *, /, ^, and % operators in assembler to process them!

Decimal numbers in C# are the same: Decimal Struct (System) | Microsoft Docs[^] - they don't correspond to any "processor aware" datatype, so the .NET framework contains code which implements maths operations on them.

You would have to use the (brief and seriously lacking in detail) information in the link to write assembler functions that process Decimal values, then accept and return (probably) pointers to 12-byte values to communicate with .NET.

I've never seen it done, so it could be an interesting project for you!

[edit]
Have a look at the .NET reference source and you'll see how Microsoft does it: Reference Source[^]
[/edit]
 
Comments
raddevus 26-Oct-20 10:06am    
Really great and interesting answer. Thanks
deXo-fan 28-Oct-20 5:41am    
Fantastic answer!
I actually once wrote a small and very simple library with an algorithm that was able to divide any two floating point numbers of limitless sizes, but my algorithm was incredibly slow and I may not have done it wholeheartedly, as it bothered me that my values had to be quoted strings (like BigFloat bf = "123.456";), but I guess it can't be helped. Not if I want to use numbers greater than the longest (character-wise) double value there is.

But with the link you provided to the reference source, I might just give it another go and put more effort into it this time, because like you said, it could be a very interesting project for me! And thanks heaps for that link by the way, I had no idea it even existed. I always thought Microsoft was very hush hush about their code.:D
OriginalGriff 28-Oct-20 6:08am    
You're welcome!
MS released the whole source for .NET back around 2012 IIRC as the Reference Sources - and it's handy stuff!
BTW: there is a BigInteger class in .NET as well:
https://docs.microsoft.com/en-us/dotnet/api/system.numerics.biginteger?view=netcore-3.1
Might be worth a look at that, too:
https://referencesource.microsoft.com/#System.Numerics/System/Numerics/BigInteger.cs
There is support for that in some C++ libraries, such as Boost.

Some compilers do not use exact sizes for data types but pad them to alignment boundaries instead. The advantage is that full bytes are used, so processing is faster because no effort is needed for single-bit access. If you use it, check whether it isn't slower than the unoptimized code.
 
Comments
deXo-fan 28-Oct-20 5:53am    
I will try and put together something myself first, but I have heard of Boost before as a matter of fact. People speak very highly of it if I'm not mistaken.

I like what you said about using alignment; that's something I will keep in mind if and when I begin.
Quote:
There used to be a 12-byte integer type in C++ a few years back, at least in Microsoft's compiler. But even then that didn't make sense, because as you know, C++ is converted to Assembly code, and the two largest integer types in Assembly are, to my knowledge, TBYTE and REAL10 -- both 10 bytes in size, with REAL10 being a floating point type and TBYTE an integer type. So my first question: as the 12-byte type I mentioned earlier is/was an intrinsic, non-structure type, how could it possibly be supported in Assembly? What if you assigned a value so big it would need more than the 10 bytes provided by, say, TBYTE?

At the processor level, only registers exist, and they are 8, 16, 32, or 64 bits in size.
Memory is basically addressed in words of 8 bits (a byte); on modern hardware, memory is usually accessed in chunks of 64 bits (8 bytes) for efficiency, because of the hardware design.
So the 12-byte integer comes from the era of 32-bit (4-byte) hardware: it is three words of 4 bytes.
Processors have features to handle data larger than their registers; the details will not fit in the scope of this textbox.
Working with Big Numbers Using x86 Instructions[^]
 
Comments
deXo-fan 28-Oct-20 5:50am    
I am definitely looking at the page you linked to, I have a feeling I am going to learn a lot!
My memory (not my RAM :P) may be off, but I'm pretty sure I once (back when long double was 12 bytes in size) disassembled a C++ program with such a type, and what I found was that the corresponding assembly code had assigned it a REAL10. It WAS a long time ago, but I'm pretty sure I remember correctly and read the assembly code correctly, because I remember thinking it didn't make much/any sense.

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)



CodeProject, 20 Bay Street, 11th Floor Toronto, Ontario, Canada M5J 2N8 +1 (416) 849-8900