reeselmiller2 wrote:
Thanks for the reference. This is part of my confusion. The flag is a ulong and can have a value like 18446744073709551583, which converts to a decimal -33; this works fine in C#. But in VB, if you try Dim i As Integer = CInt(18446744073709551583UL), you get an overflow error. So you have to use something like Decimal.GetBits(18446744073709551583D)(0), which gives you the low 32 bits of the 96-bit integer. I agree the code is horrible; I'm just trying to understand why it is written this way.
First of all, I believe you cannot always understand why some horrible code was written the way it was; well, someone acted in a sloppy way, so what? :-)
Maybe I can explain the confusing part. Consider these facts: the flag is a ulong, but bitwise operations in .NET are performed on enumeration and integer types promoted to a wider type in the purely binary sense; in particular, the signed semantics are completely ignored. I advised you to read about two's complement not just for general education, but to understand its main idea: it allows the CPU to remain agnostic about the types of the operands of arithmetic operations (not just logical ones, but even +, -, *, /). The CPU instruction operates on the bits in exactly the same way, no matter whether one or both operands are signed. Are you getting the idea? Signed vs. unsigned is a matter of interpreting the arithmetic result semantically (in this case, from the standpoint of the integer value in the mathematical sense of that notion), while the actual bits remain the same. Of course, the same goes for bitwise arithmetic. For example, for 16-bit integers, the ushort 0xFFFF is the same object as the short -1, and so on; in terms of bits, the CPU cannot "see" any difference.
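To see this in action, here is a minimal C# sketch (C# just because your question mentions it; unchecked below is the C# way to suppress the very range check we are discussing, and the snippet assumes using System):

    ushort u = 0xFFFF;
    short s = unchecked((short)u);                // bitwise reinterpretation, no range check
    Console.WriteLine(u);                         // 65535
    Console.WriteLine(s);                         // -1
    Console.WriteLine(u == unchecked((ushort)s)); // True: the bit pattern is identical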
Now, an overflow error comes from an attempt to cast from a wider type to a narrower type when the value is beyond the range of the narrower type. This kind of cast is semantic, not bitwise; that is, the signed nature of the number and its mathematical semantics are taken into account. (A "bitwise cast", in C++, is called reinterpret_cast. In .NET, it can be done via the class System.BitConverter: http://msdn.microsoft.com/en-us/library/system.bitconverter%28v=vs.110%29.aspx.)
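Applied to your flag value, here is a minimal C# sketch of the difference (the variable names are mine, just for illustration):

    ulong flag = 18446744073709551583UL;          // the same bits as the signed value -33
    // Semantic (checked) conversion fails, because the value is out of Int32 range:
    // checked { int i = (int)flag; }             // throws OverflowException
    // Bitwise reinterpretation succeeds; the same bits are simply reread as signed:
    long asSigned = unchecked((long)flag);        // -33
    int low32 = unchecked((int)flag);             // -33, the low 32 bits
    byte[] bytes = BitConverter.GetBytes(flag);
    long viaConverter = BitConverter.ToInt64(bytes, 0); // -33 again, same bits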
As to the method System.Decimal.GetBits, it just does what it does. Actually, it returns not a 32-bit value but an array of four 32-bit integers, as described here: http://msdn.microsoft.com/en-us/library/system.decimal.getbits%28v=vs.110%29.aspx.
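For the value from your example, a quick sketch of what those four integers hold:

    int[] bits = decimal.GetBits(18446744073709551583m);
    // bits[0], bits[1], bits[2] are the low, middle and high 32 bits of the
    // 96-bit integer part; bits[3] packs the scale and the sign.
    Console.WriteLine(bits[0]); // -33: the low 32 bits, reinterpreted as Int32
    Console.WriteLine(bits[1]); // -1: the middle 32 bits are all ones
    Console.WriteLine(bits[2]); // 0: the high 32 bits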
(I don't know why you would be concerned about this method. :-))
—SA