
Finished converting the U.S. Naval Observatory astrometry software from C and doubles to C# and decimal.
The only calculations the decimals couldn't handle were the speed of light in meters (overflow) and the weak-gravity calculation (divide by zero). In those cases, I had to use intermediate doubles.
So now you know.
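The post doesn't show which expression actually overflowed, but the failure mode is easy to reproduce: decimal tops out near 7.9e28, so even a modest power of the speed of light in m/s blows past it while double doesn't blink. A purely illustrative sketch (the constant is real, the expression is invented):

```csharp
using System;

class DecimalOverflowDemo
{
    const double C = 299_792_458.0; // speed of light, m/s

    static void Main()
    {
        // double: c^4 is about 8.1e33 -- trivial for double's ~1.8e308 range.
        double viaDouble = Math.Pow(C, 4);
        Console.WriteLine(viaDouble);

        // decimal: the same computation overflows, because
        // decimal.MaxValue is only about 7.9e28.
        try
        {
            decimal c = (decimal)C;
            decimal viaDecimal = c * c * c * c;
            Console.WriteLine(viaDecimal);
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal overflow -- fall back to an intermediate double");
        }
    }
}
```

This is exactly the situation where an intermediate double (or a compensating scale factor) becomes necessary.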
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food





Gerry Schmitz wrote: "In those cases, had to use intermediate doubles." Not a compensating scale factor?





Using published algorithms. Any fiddling on my part would come much later.





"... and then some magic happens ..."





How could going from decimal to double avoid a divide by zero?
I would have spent significant resources in discovering how that could be.





The fraction in a decimal gets truncated much earlier than in a double. The double has less precision but much more range; it's the other way around with decimal, even though decimal is wider byte-wise.
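That asymmetry is easy to demonstrate (the values here are illustrative, not taken from the astrometry code):

```csharp
using System;

class RangeVsPrecision
{
    static void Main()
    {
        // double: huge dynamic range (roughly 5e-324 to 1.8e308).
        double tiny = 1e-300;
        Console.WriteLine(tiny != 0.0);        // True: still representable

        // decimal: 28-29 significant digits, but the smallest nonzero
        // magnitude is only 1e-28. Converting anything smaller rounds
        // it straight to zero.
        decimal truncated = (decimal)1e-30;
        Console.WriteLine(truncated == 0m);    // True: silently truncated

        // And decimal.MaxValue (~7.9e28) is dwarfed by double.MaxValue.
        Console.WriteLine(decimal.MaxValue);
    }
}
```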





If the divisor in theory, mathematically, "should have been" zero but isn't because of imprecision (floats are not 'real' numbers of unlimited precision), then the underlying algorithm should indeed be closely inspected and investigated.
If the avoidance of a divide-by-zero exception is an artifact of limited precision, then I am getting close to calling that a faulty implementation of the algorithm.
If, on the other hand, the divisor is, from a mathematical point of view, really nonzero, but has been zeroed by some software or hardware algorithmic rules, then those rules should be reconsidered. It should never be tolerated that a small, nonzero divisor causes a divide-by-zero error. That is simply incorrect! If the divisor is "valid", yet so small that it should be equivalent to zero, then it must be handled as a value (like zero), not cause an exception. If it can be handled in float format, it should be handled similarly in decimal format!
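In code, that means deciding explicitly what "effectively zero" means for the application, instead of letting representation limits decide for you. A generic sketch (the tolerance and method name are invented for illustration):

```csharp
using System;

class GuardedDivision
{
    // Hypothetical application tolerance below which the divisor
    // is treated as zero -- chosen by the application, not by the
    // limits of the number format.
    const decimal Tolerance = 1e-20m;

    static decimal? SafeDivide(decimal numerator, decimal divisor)
    {
        // Handle the near-zero divisor as a value, not an exception.
        if (Math.Abs(divisor) < Tolerance)
            return null;   // caller decides what the degenerate case means
        return numerator / divisor;
    }

    static void Main()
    {
        Console.WriteLine(SafeDivide(1m, 4m));         // 0.25
        Console.WriteLine(SafeDivide(1m, 0m) is null); // True
    }
}
```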





They use it for space missions.
I'm telling you the mechanics; you want to analyze the writers' minds.
You're trying to argue there is "zero" gravity when in fact there is some; "imprecision" helps maintain it.
A decimal can hold 28-29 digits; if gravity is smaller than that can represent, it's still not "zero".
modified 12-Jan-21 23:22.





If gravity can be zero, you must be prepared for it. If it is so small that it rounds off to zero, you must be prepared for that, too.
Note that IEEE 754 double precision has 52 mantissa bits (53 if you count the hidden one). That is a precision of about 16 decimal digits. If you need even more, you have to go to 128-bit float, with about 33 decimal digits of precision.
If it is so small that it doesn't round off to zero in 128-bit float representation, but does round to zero in decimal representation, that is rather accidental. The next weaker level of gravitation might round off to zero even in 128-bit FP, causing a similar divide by zero. I certainly think that any algorithm implementation where the divisor might be either "true" zero, or so low that it is rounded off to zero (whether in decimal digit #34 for 128-bit FP or in decimal digit #30 for decimal representation), but which nevertheless tries to do the division without checking, is faulty. The decimal format as such is not to blame.
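The ~16-digit limit of double, and how a "nonzero" term can silently vanish before a later division, can be shown in a few lines (a generic sketch, not tied to the astrometry code):

```csharp
using System;

class DoublePrecisionDemo
{
    static void Main()
    {
        // Machine epsilon for double is about 2.22e-16: adding
        // anything much smaller than that to 1.0 is lost entirely.
        Console.WriteLine(1.0 + 1e-16 == 1.0);   // True: rounded away
        Console.WriteLine(1.0 + 1e-15 == 1.0);   // False: still representable

        // So a mathematically nonzero term can round off to zero,
        // and any later division by it must be guarded explicitly.
        double divisor = (1.0 + 1e-17) - 1.0;
        Console.WriteLine(divisor == 0.0);       // True: the term is gone
    }
}
```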





In clinical chemistry it is common for the concentration of an analyte to be reported as "< 0.10 mg/dl" when that is the sensitivity of the test. Sensitivity of a test is defined by the random analytical variation of the test in the absence of ANY analyte, i.e. the "noise" in the measurement. cf. Gaussian standard error of estimate.
In this example, a "result" of 0.005 mg/dl is analytically indistinguishable from 0.00 mg/dl.
The same issues of analytical precision apply to the extremes of floating point calculation.





The problem in this case isn't avoidance of a divide by zero; it's that the value is so close to zero that System.Decimal can't represent it and truncates it to zero. The wider dynamic range of System.Double can handle it.





If your application really needs to handle (application) epsilons as nonzero, then you certainly have to choose a data format that allows you to handle those epsilons. Maybe decimal isn't for you.
Note that not even double is guaranteed to handle your application's epsilons. The Double struct defines a Double.Epsilon, which is the smallest positive value that can be represented as a double. If your application needs to handle smaller values, like Double.Epsilon/2 or smaller, the value will end up as zero. Using double does not avoid the problem entirely; it just moves the borderline somewhat.
If you make a cost calculation suggesting a unit sales price of USD 2.5000000001, and the customer pays two and a half dollars, then your accounting application probably does not want to carry a balance due of USD 0.0000000001, but set it to zero. Now, this certainly can be represented by a decimal, but your accounting system should round the sales price to the number of decimal places it finds relevant; the decimal struct provides such a Round method.
Your application should be conscious about epsilons: if they need to be treated as nonzero, you use float / double (crossing your fingers that the application epsilon is no smaller than Double.Epsilon). If (application) epsilons have no relevance, you should zero them, e.g. using a Round function.
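Both points can be sketched in a few lines (the prices are made up for illustration):

```csharp
using System;

class EpsilonAndRounding
{
    static void Main()
    {
        // Double.Epsilon is the smallest positive double (~4.94e-324);
        // half of it is no longer representable and rounds to zero.
        Console.WriteLine(double.Epsilon / 2 == 0.0);   // True

        // decimal.Round zeroes an irrelevant application epsilon:
        decimal price = 2.5000000001m;
        decimal rounded = decimal.Round(price, 2);      // 2.50
        decimal paid = 2.50m;
        Console.WriteLine(rounded - paid == 0m);        // True: no phantom balance
    }
}
```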





decimal x = 1;
decimal y = 0;
decimal z = x / y;   // throws System.DivideByZeroException

double x = 1;
double y = 0;
double z = x / y;    // no exception: z is double.PositiveInfinity

The double type complies with IEEE 754.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
― Homer





As the decimal representation does not provide any bit pattern for infinities, the divide operator has no way of returning any 'decimal.PositiveInfinity': it doesn't exist. Similarly, there is no representation of Not a Number (NaN). Some values can be represented in float/double but not in decimal; others can be represented in decimal but not in float/double.
If you really want / need to handle infinities in decimal, you can define your own AugmentedDecimal struct that is a decimal plus the required flag bits for flagging the value as, say, positive / negative infinity or NaN. Then, a DivideByZeroException handler may set these flag bits appropriately. Of course, you then have to define the operators for AugmentedDecimal to check the flag bits as appropriate before performing the operation.
This is built into the float hardware. decimal is essentially done in software, even without these augmentations, so it is slow anyway. But if that is what you need, that is the price you have to pay.
Then: my guess is that the part of float/double-handling software that handles infinities properly is a tiny fraction of a percent. You don't get an exception here, but when you try to use the value as a normal number (because no exception has signaled the exceptional value), your program may do funny things. Tracing back to the division that caused the infinity may be nontrivial.
If you do write code that properly handles infinities throughout (i.e. you are within that fraction of a percent), then I am sure that you would have no problems writing a proper handler for the decimal DivideByZeroException as well.
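The suggestion above might look something like this; the type name, the Kind flag, and the operator behavior are all invented for illustration:

```csharp
using System;

// Hypothetical wrapper: a decimal plus flag bits for the special
// values that System.Decimal itself cannot represent.
public enum DecimalKind { Number, PositiveInfinity, NegativeInfinity, NaN }

public readonly struct AugmentedDecimal
{
    public decimal Value { get; }
    public DecimalKind Kind { get; }

    public AugmentedDecimal(decimal value, DecimalKind kind = DecimalKind.Number)
    {
        Value = value;
        Kind = kind;
    }

    public static AugmentedDecimal operator /(AugmentedDecimal a, AugmentedDecimal b)
    {
        // Check the flag bits before attempting the real operation.
        if (a.Kind != DecimalKind.Number || b.Kind != DecimalKind.Number)
            return new AugmentedDecimal(0m, DecimalKind.NaN); // simplified

        try
        {
            return new AugmentedDecimal(a.Value / b.Value);
        }
        catch (DivideByZeroException)
        {
            if (a.Value == 0m)  // 0/0 is indeterminate
                return new AugmentedDecimal(0m, DecimalKind.NaN);

            // Mimic IEEE 754: signed infinity instead of an exception.
            return new AugmentedDecimal(0m,
                a.Value > 0m ? DecimalKind.PositiveInfinity
                             : DecimalKind.NegativeInfinity);
        }
    }
}
```

A real implementation would of course need the remaining operators and comparisons to consult Kind first, which is part of the software cost mentioned above.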






One of the issues of using floating-point (binary / decimal / hex) is the issue of "wobbling precision". For example, the number 1.234 has fewer actual bits of precision than the number 8.765, even though they both have the same number of digits. This complicates error analysis.
Binary floating-point has the least "wobbling precision", and therefore, all other things being equal, it is preferable for use in scientific calculations.
If you need precision greater than that provided by 'double', you may either use packages like MPFR (which provide IEEE 754-style types of greater precision) or construct a 'double-double' or 'quad-double' type out of the sum of 2 or 4 'double's (aka expansions). Note that if you go the second route, you will have to perform a careful error analysis on your code, because some of the lemmas applicable to IEEE 754-style floating-point types are not applicable to expansions.
Accurate scientific computing requires careful attention to details like these!
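The building block of such expansions is the classic TwoSum step (due to Knuth), which recovers the rounding error of a single addition exactly. A minimal sketch in C#:

```csharp
using System;

class TwoSumDemo
{
    // Returns (sum, err) such that a + b == sum + err exactly,
    // where sum is the rounded double result and err is the
    // rounding error that a plain double addition would lose.
    static (double Sum, double Err) TwoSum(double a, double b)
    {
        double s = a + b;
        double bv = s - a;                      // the part of b that made it into s
        double err = (a - (s - bv)) + (b - bv); // what was rounded away
        return (s, err);
    }

    static void Main()
    {
        var (sum, err) = TwoSum(1.0, 1e-17);
        Console.WriteLine(sum);  // 1.0: the small term was rounded away...
        Console.WriteLine(err);  // 1e-17: ...but recovered exactly in the error term
    }
}
```

A double-double value is then a (Sum, Err) pair carried through every operation, roughly doubling the effective precision at the cost of several extra flops per operation.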
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
― 6079 Smith W.





Why decimals instead of doubles though? Not trying to question the decision, I'd like to understand for myself.





My guess is one of two: either the application calls for the higher precision (almost 29 decimal digits for decimal, less than 17 decimal digits for double), and decimal's smaller numeric range is still sufficient. This level of precision is rarely needed, except in some extreme scientific calculations.
Or the application calls for full control of round-off errors, with exact value representation of e.g. decimal fractions, rather than a binary approximation. This is a common requirement in e.g. accounting systems. (An alternative to decimal is of course to use a long value to represent an integer number of millicents, or whatever the required precision of the transactions is. Using decimal will usually require fewer application code lines, as you can deal directly with the unscaled amounts.)
When I explain programming to beginners (and also to some at intermediate level needing some correction...), I know from experience that for some, understanding the principal difference between int and float is difficult. So I rather call them 'counts' and 'measurements'. The price of an apple is not a measurement; it is a count of the pennies you have to pay. So we use an int: int is a synonym for a count. If you weigh that apple, you do not count the grams, or milligrams, or micrograms; you will not get a measurement exactly right. So we use a float: float is a synonym for a measurement value. Even if you claim that the weight of the apple is exactly 123.456 grams, it nevertheless is a measurement, calling for a float, not a count of milligrams.
Making beginners understand the difference between a count and a measurement is far easier than explaining int and float. In that framework, I really would like to consider decimal a 'counting' value, with the extension that it lets us count not only euros or dollars, but even cents and tenths of a cent. With BCD, this was very explicit at the representation level as well; with decimal, it is blurred. C# documentation classifies it as a 'floating-point numeric type'. Yet decimal functions like BCD values, rather than like the measurement types float / double. Also by implementation, it is a scaled integer (i.e. counting) value.
I think using decimal for measurement values is an abuse. If you really need more than the 17 digits of a double, then you should go for one of the 'infinite precision' libraries provided for almost any language. (Writing one is not that difficult; I made one myself in my student days.) If you measure the dimension of the universe down to the millimeter, you are still making a measurement, not a count.
And if the range of decimal is not sufficient, e.g. if you want to count all the atoms in the universe, then you should go for a software 'infinite integer range' library. (I have never seen one, but writing one is almost down to the trivial level, if you really have the need! The biggest problem is formatting the number as text!)
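The long-as-millicents alternative mentioned above can be sketched like this (the prices are invented for illustration):

```csharp
using System;

class CountsVsMeasurements
{
    static void Main()
    {
        // A count: price in millicents (1/1000 cent), exact in a long.
        long priceMillicents = 250_000;          // USD 2.50
        long totalMillicents = priceMillicents * 3;
        Console.WriteLine(totalMillicents / 100_000m);  // scale back: 7.5

        // The same with decimal: fewer lines, no manual scaling.
        decimal price = 2.50m;
        Console.WriteLine(price * 3);            // 7.50, exact

        // With double, decimal fractions are only binary approximations:
        Console.WriteLine(0.1 + 0.2 == 0.3);     // False
    }
}
```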





Greetings! May I inquire whether beginners should simply be informed that ints represent the integers and floats the real numbers? Kind Regards, Cheerios





The problem is that lots of people consider a weight of 454 grams an integer numeric value, rather than a real one, especially if they use a digital scale. If you ask "the man in the street" to explain "integer" and "real" numbers, you may be surprised by the (lack of) answers you get! (There is nothing 'unreal' about, say, 'five'!) You cannot expect beginners in programming to have a solid background in mathematics.
(Almost) everyone has little problem distinguishing between counting and measuring. Well, that is my experience. YMMV.





It sounds to me like you could write an interesting article on this application of decimal.






Speak for yourself.





If I saw this in my local food store, I might give it a try.
Unless it appears to be sweet like a fruit jam. A smoked, meaty bread spread on dark, whole-grain bread could be very tasty, assuming that it would be something like bacon put through a meat grinder.
Sometimes you use a tiny amount of sugar in meats to round off the taste, without making it 'sweet'. If this is 50% bacon and 50% sugar, then it is not for me!





A couple of weeks ago I tried unsuccessfully to get slots for the both of us to get vaccinated, but the website kept crashing and the phone number was busy all day. In the end we got a slot for just one of us, at a location some 60 miles from our home! Yuck!
This morning the second batch of bookings became available, a mere 8 miles from home. We got slots for the both of us for Friday. And the website only crashed some 10 times before we got our bookings! The government in Florida can be proud; their IT guys are getting better! Still, we are both in a very good mood now.
Get me coffee and no one gets hurt!




