Tip/Trick · Posted 12 Jan 2018
# What is the Lower Limit of Floating Point?


## Microsoft Versus Google: Who is Correct?

#### Microsoft Floating Point Range in C# Documentation

*(screenshot of the float range from Microsoft's C# documentation)*

#### Google Floating Point Range Returned from its Search

*(screenshot of the float range returned by a Google search)*
While their upper floating-point limits agree, Microsoft and Google give different values for the lower limit! What is going on? One of them has to be correct! Make a guess before the answer is unveiled!

Both Microsoft and Google are correct! Google's answer is correct from the normalized-number perspective, while Microsoft's lower range takes subnormal numbers (also known as denormalized numbers) into account.
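You can see both answers side by side in C#: `float.Epsilon` is the smallest positive *subnormal* float, which matches the lower range Microsoft documents, while the smallest positive *normal* float is 2^-126, the value Google reports. A minimal sketch:

```csharp
using System;

class FloatLimits
{
    static void Main()
    {
        // Smallest positive *subnormal* float: the lower limit Microsoft documents.
        Console.WriteLine(float.Epsilon);               // about 1.4E-45

        // Smallest positive *normal* float: 2^-126, the value Google reports.
        float smallestNormal = (float)Math.Pow(2, -126);
        Console.WriteLine(smallestNormal);              // about 1.18E-38
    }
}
```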

As you may recall from your computer science school days, the IEEE 754 floating-point format consists of three components: a sign bit, an exponent, and a mantissa. The actual value is derived by multiplying the mantissa by the base raised to the power of the exponent.

## Normal Floating Point

In a normal floating-point number, the mantissa always has an implied leading binary one on the left side of the radix point. Since it is always present, the mantissa does not store this leading one.

`1.xxxx`

Since the digit on the left side of the radix point is always one, how do you store, say, `0.5`?

`0.5` is derived by multiplying 1 (the mantissa) by 2 raised to the power of -1 (the exponent).
Note: The mantissa and exponent are stored in base 2, not base 10, which is why 2 is raised to the power of the exponent.

`1 * 2^(-1) = 0.5`
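To make this concrete, here is a small C# sketch that pulls the three components out of `0.5f`. It uses `BitConverter.SingleToInt32Bits` (available in .NET Core 2.0 and later); on older frameworks, `BitConverter.GetBytes` can do the same job:

```csharp
using System;

class HalfDecomposed
{
    static void Main()
    {
        int bits = BitConverter.SingleToInt32Bits(0.5f);

        int sign = (bits >> 31) & 0x1;       // 1 sign bit
        int expField = (bits >> 23) & 0xFF;  // 8 exponent bits, biased by 127
        int mantissa = bits & 0x7FFFFF;      // 23 mantissa bits (implicit leading 1)

        // 0.5 = 1.0 * 2^-1: sign 0, unbiased exponent -1, stored mantissa bits all zero
        Console.WriteLine($"sign={sign} exponent={expField - 127} mantissa={mantissa}");
        // prints: sign=0 exponent=-1 mantissa=0
    }
}
```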

## Subnormal Floating Point

In a subnormal floating-point number, the mantissa has a leading binary zero on the left side of the radix point.

`0.xxxx`

But wait... didn't I just tell you the leading one is always there? So how is a subnormal number defined? To give you a definitive answer, we would have to go through every nook and cranny of the floating-point format, which is, frankly speaking, too long to fit into this short tip. As promised, the floating-point guide is finally here!
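For a taste, the C# sketch below shows gradual underflow in action: halving the smallest normal float does not flush to zero but produces a subnormal, and subnormals are marked by an all-zero exponent field, which is the pattern that signals "no implicit leading one":

```csharp
using System;

class GradualUnderflow
{
    static void Main()
    {
        float smallestNormal = (float)Math.Pow(2, -126);

        // Halving the smallest normal does not flush to zero;
        // IEEE 754 gradual underflow yields a subnormal instead.
        float sub = smallestNormal / 2;
        Console.WriteLine(sub > 0f);                 // True

        // Subnormals carry an all-zero exponent field,
        // so their mantissa has no implicit leading one.
        int expField = (BitConverter.SingleToInt32Bits(sub) >> 23) & 0xFF;
        Console.WriteLine(expField);                 // 0
    }
}
```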


Software Developer (Senior), Singapore
Shao Voon is from Singapore. CodeProject awarded him an MVP in recognition of his article contributions in 2019. In his spare time, he prefers writing applications based on third-party libraries to rolling his own. His interests lie primarily in computer graphics, software optimization, concurrency, security, and Agile methodologies.

You can reach him by sending a message on CodeProject or at his Coding Tidbit Blog!

## Comments

**Slightly relevant: the precision of trigonometric functions** — Member 7989122, 15-Jan-18

Some time ago, I worked with a guy who had been with IBM for a long time. He received a request from a university professor who was teaching error propagation to his students. The problem: some trig functions are difficult to calculate numerically for extreme values; arctan() is one of these. Knowing the word length of the CPU, you can estimate the error. But on his IBM 370 (or maybe it was an even newer model, but it was in that family!) arctan() had an error noticeably larger than what could be expected. Why?

After a long search, the reason was found: the Fortran library for the 370 (or whatever, more modern) had been ported directly from the 360 series. The 360 library had been ported directly from the 7090 series. The 7090 library was ported from the 709, which got it from the IBM 704... Now we are back in the mid-1950s. At that time, some developer calculated the binary representation of pi and wrote it into the library as a floating-point literal in hexadecimal format. No one had ever questioned this value; it was carried over from one architecture to the next, in the hexadecimal format.

Now, the 7090 was a 36-bit CPU, while the 360 was a 32-bit one. In the porting, the last hex digit of pi would not fit and was simply ignored. It was truncated, not rounded. A proper representation of pi in 32-bit format would have caused the least significant bit to be rounded up to 1; it was left as 0. Once proper rounding was introduced, and the lowermost bit of pi set to 1 instead of 0, the professor got exactly the errors he would expect from the precision provided by the 32-bit format.

The IBM 360/370/30xx did not natively provide IEEE 754, so you cannot directly transfer hex representations and error-propagation issues from those machines to 754 formats. Still, I think it is a rather amusing story of error estimation.