|
Voluntarily removed
modified 29-Sep-24 23:07pm.
|
As I'm sure we all know, there are basically three ways to handle currency values in code.
1) Store the value as cents in an integer. So, $100.00 would be 10,000. Pro: no floating-point precision issues. Con: harder to read at a glance without converting it in your head. Con: depending on the size of the integer, it may significantly reduce the range of values you can store.
2) Use a float with large enough precision. Pro: easy to read. Con: rounding issues, precision issues, etc.
3) Use a struct with separate dollars and cents fields. Pro: the same no-loss benefit as the integer. Pro: easy to read mentally. Con: you have to convert back and forth or go out of your way for calculations. (See the sketch just below.)
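For illustration, a minimal sketch of options 1 and 3 (the names are hypothetical, not from any real codebase):
```
#include <cstdint>
#include <cstdio>

struct Money {            // option 3: separate dollars and cents fields
    int64_t dollars;
    int32_t cents;        // 0..99
};

int main() {
    int64_t cents = 10000;   // option 1: $100.00 stored as cents
    Money   m{100, 0};       // option 3: $100.00

    printf("option 1: $%lld.%02lld\n", (long long)(cents / 100), (long long)(cents % 100));
    printf("option 3: $%lld.%02d\n", (long long)m.dollars, m.cents);
    return 0;
}
```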
Historically, I've always gone with either 1 or 2 with just enough precision to get by. However, I'm working on a financial app and figured... why not live a little.
So, I'm thinking about using 128-bit ints and shifting the "offset" by 6, so I can store up to 6 decimal places in the int. For a signed value, this would effectively max me out at ~170 nonillion (170,141,183,460,469,231,731,687,303,715,884.105727). Now, last I checked, there's not that much money in the world. But this will be the only app running on a dedicated machine, and using 1GB of RAM is completely OK. So, it's got me thinking...
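A minimal sketch of what I mean, assuming GCC/Clang's __int128 extension (the names here are mine, not from the app):
```
#include <cstdint>

using money128 = __int128;
constexpr money128 SCALE = 1000000;   // 6 decimal places

// $19.99 -> 19,990,000 internal units
money128 from_dollars_cents(long long dollars, int cents) {
    return (money128)dollars * SCALE + (money128)cents * (SCALE / 100);
}

// Adding scaled values is plain integer addition; multiplying two scaled
// values needs one rescale, and the rounding policy is a design choice.
money128 mul(money128 a, money128 b) {
    return a * b / SCALE;   // truncates toward zero
}

int main() {
    money128 price = from_dollars_cents(19, 99);             // $19.990000
    money128 tax   = mul(price, from_dollars_cents(0, 8));   // 8% -> $1.599200
    (void)tax;
    return 0;
}
```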
Here's the question... have any of y'all ever used 128-bit ints, and did you find them to be incredibly slow compared to 64-bit, or is the speed acceptable?
Jeremy Falcon
modified 2-Sep-24 14:37pm.
|
If you're not concerned about speed, then the std::decimal::decimal[32/64/128] numeric types might be of interest. You'd need to do some research on them, though. It's not clear how you go about printing them, for example. The latest Fedora rawhide still chokes on
```
#include <iostream>
#include <decimal/decimal>

int main()
{
    std::decimal::decimal32 x = 1;
    std::cout << x << '\n';
}
```
where the compiler produces a shed load of errors at std::cout << x, so the usefulness is doubtful. An alternative might be an arbitrary-precision library like GMP.
A quick test of a loop adding 1,000,000 random numbers showed very little difference between unsigned long and __uint128_t. For unsigned long the loop took 0.0022 seconds, and for __uint128_t it took 0.0026 seconds. Slower, but not by enough to rule it out as a viable data type. But as with the decimal::decimal types, you would probably have to convert to long long for anything other than basic math.
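A sketch of a comparable test, for anyone who wants to reproduce the comparison (not my exact harness; change the sum's type between unsigned long and __uint128_t):
```
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937_64 rng(42);
    std::vector<unsigned long> data(1000000);
    for (auto& v : data) v = rng();

    auto t0 = std::chrono::steady_clock::now();
    __uint128_t sum = 0;                 // change to unsigned long to compare
    for (auto v : data) sum += v;
    auto t1 = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = t1 - t0;
    // printf has no conversion for 128-bit, so print the low word only
    printf("%f s, low word = %lu\n", elapsed.count(), (unsigned long)sum);
    return 0;
}
```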
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
Well, just so you know, I'm not using C++ for this. But the ideas are transferable; for instance, a decimal type is just a fixed-point number. Which in theory sounds great, but as you mentioned, it's slow given that there's no FPU-style hardware support for it.
I was more interested in peeps using 128-bit integers in practice rather than simply looping. I mean, ya know, I can write a loop.
While I realize 128-bit ints still have to be broken apart for just about every CPU to work with them, I was curious whether peeps have noticed any huge performance bottlenecks doing heavy math with them in a real project.
I'm not against learning a lib like GMP if I have to, but I think for my purposes I'll stick with ints, in a base-10 fake fixed-point fashion, as they're fast enough. It's only during conversions in and out of my fake fixed point that I'll need to worry about the hit, if at all.
So the question was just how much slower 128-bit is compared to 64-bit... preferably in practice.
Jeremy Falcon
|
You mean 64-bit CPUs can't deal natively with 128-bit integers?
You had me at the beginning thinking that it was a real possibility.
The difficult we do right away...
...the impossible takes slightly longer.
|
I'm too tired to know if this is a joke or not. My brain is pooped for the day.
Richard Andrew x64 wrote:
You had me at the beginning thinking that it was a real possibility.
Any time I can crush your dreams. You just let me know man. I got you.
Jeremy Falcon
|
FYI I wasn't joking.
The difficult we do right away...
...the impossible takes slightly longer.
|
Ah, I haven't played with ASM since the 16-bit days, and even then it was only a tiny bit to help me debug C code. So, this may be old and crusty info...
But yeah, typically on a 64-bit CPU the general-purpose registers don't go any wider than 64 bits. Now, there are SIMD extensions (SSE, AVX, etc.) with 128-bit and wider registers, but those operate on packed lanes of smaller values rather than on a single wide integer.
One notable exception is the FPU: the x87 unit processes 80-bit floats natively, even on a 64-bit CPU. But AFAIK, there's no scalar 128-bit integer register on any mainstream 64-bit CPU.
Which means, if I've got a 128-bit number, any language that compiles it has to treat it as two 64-bit values in the binary. The good news is, that's a loooooooooot easier to do with integers than floats. For instance, a 128-bit quadruple-precision float emulated in software can be over 100 times slower than an 80-bit hardware float. With an integer, you're just one bit shift away from getting the high word.
Stuff like GCC will give you a native-looking 128-bit type (__int128), but the compiler still breaks it down into pairs of 64-bit operations under the hood.
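A concrete example of that split, assuming GCC/Clang's __uint128_t:
```
#include <cstdint>
#include <cstdio>

int main() {
    __uint128_t v = ((__uint128_t)0x0123456789abcdefULL << 64) | 0xfedcba9876543210ULL;
    uint64_t hi = (uint64_t)(v >> 64);   // one shift away from the high word
    uint64_t lo = (uint64_t)v;           // truncation keeps the low word
    printf("hi=%016llx lo=%016llx\n", (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}
```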
Jeremy Falcon
|
I would use 64-bit integers, representing cents.
My 0x0000000000000002.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
That's what I'm leaning towards, but I'd want to go to at least a tenth of a mill (4 decimal places), as that's the minimum resolution most accounting software has. So, I'm looking to see if 128-bit is viable so I can go to 6 decimal places and not have to worry about it for a while. It's a dedicated machine for this app, so using 1GB of RAM isn't an issue. Speed is the only concern.
Jeremy Falcon
modified 3-Sep-24 9:07am.
|
Even with such a constraint, a 64-bit integer gives you a plethora of dollars.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
Unless you're tracking the US national debt.
Jeremy Falcon
|
Hmm, unless my math is wrong, per Double-precision floating-point format - Wikipedia[^]:
Quote: Integers from -2^53 to 2^53 (-9,007,199,254,740,992 to 9,007,199,254,740,992) can be exactly represented.
US national debt is around $33.17T = 33,170,000,000,000. In cents that's 3,317,000,000,000,000, still under 2^53, so it seems you can accurately represent the US national debt down to the cent using just a regular double-precision number (a tenth of a cent would already exceed the exact range).
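A quick sanity check of that arithmetic:
```
#include <cstdio>

int main() {
    long long limit = 1LL << 53;               // largest exactly-representable integer
    long long cents = 3317000000000000LL;      // $33.17T expressed in cents
    long long mills = cents * 10;              // tenths of a cent
    printf("cents fit:    %d\n", cents < limit);   // 1: exact
    printf("0.1c fits:    %d\n", mills < limit);   // 0: loses exactness
    return 0;
}
```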
Mircea
|
Depends on your precision. I mentioned earlier I want to store up to the sixth decimal place (one thousandth of a mill) and because of that my available numbers are smaller.
Jeremy Falcon
|
Even if I needed less precision, 2^53 only leaves a factor of ~300 of headroom over the debt in whole dollars anyway. That would work, but it isn't really something I'd consider forward thinking if you want to ensure some weird spike doesn't screw up the system.
I take my numbers seriously.
Jeremy Falcon
|
Now, if you take Carlo's idea of using 64-bit integers, your range becomes -2^63 to 2^63 - 1, or ±9.2E18. That gives you 5 decimal places for numbers the size of the US national debt. If you can live with unsigned numbers, your range becomes 0 to 1.8E19, which doubles the headroom but still falls just short of a 6th decimal place at that magnitude (you'd need about 3.3E19).
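The headroom, worked out in a quick sketch:
```
#include <cstdio>

int main() {
    double debt = 33.17e12;               // dollars
    double s63  = 9.22e18;                // ~2^63
    double u64  = 1.84e19;                // ~2^64
    printf("signed:   %.0fx headroom\n", s63 / debt);  // ~278,000 -> 5 decimal places
    printf("unsigned: %.0fx headroom\n", u64 / debt);  // ~555,000 -> still short of 6
    return 0;
}
```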
System design is finding the least bad compromise, so only you will know if the complication of using some fancy math library is justified or not in your case.
Mircea
|
I think peeps assume I'm a total n00b just because I'm asking for folks' opinions. Using an integer was the first thing I mentioned. I promise you, I know of 64-bit ints.
The question was: has anybody used 128-bit ints and noticed a serious performance hit? The only reason I mentioned all three original ways is that I knew someone would come along and tell me something unrelated.
Also, that's grossly oversimplifying system design. I've architected plenty of enterprise apps in my day. Future-proofing is also a consideration.
So, to repeat man… the question isn't what's an int. It's how fast a 128-bit int is for those who have actually used one in a project. If someone has a better way to store currency than the four ways already mentioned (including fixed point), great.
Jeremy Falcon
|
Addendum
I have settled on the following, still convoluted, code (the example is for converting the MSB).
I would like to find out about, and discuss, the usage of "toLongLong".
```
QString binaryNumber = QString::number(hexadecimalNumber.toLongLong(&ok, 16), 2).rightJustified(4,'0').leftJustified(8,'0');
```
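To spell out what I believe each step does, here is a hypothetical, self-contained example (worth double-checking against the Qt documentation):
```
#include <QString>
#include <QDebug>

int main() {
    bool ok = false;
    QString hexDigit = "4";                       // hypothetical input: the MSD of "42"
    qlonglong n = hexDigit.toLongLong(&ok, 16);   // parse as base 16 -> 4, ok == true
    QString bits = QString::number(n, 2);         // binary digits -> "100"
    bits = bits.rightJustified(4, '0');           // pad on the left to width 4 -> "0100"
    bits = bits.leftJustified(8, '0');            // pad on the right to width 8 -> "01000000"
    qDebug() << bits;
    return 0;
}
```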
Up front
I am very sorry to reopen this post.
For information, I am leaving the original post (code) below.
I have an additional issue that I need help correcting.
This code snippet correctly converts the string "42" to the binary code "01000000". I do not need more help with that conversion, BUT I need help converting when the string contains a hexadecimal value, such as "F9". Changing the "toInt(), 2" options to "toInt(), 16" does not do the job.
I realize that the Qt code is a little convoluted, and for this reason may I suggest that only coders with experience of Qt take a look at this? No, it is not an instruction on how to reply, just a suggestion.
```
pFT857_library->CAT_Data_String[CAT_Data_Index].mid(0,1).number(pFT857_library->CAT_Data_String[CAT_Data_Index].mid(0,1).toInt(),2).leftJustified(7,'0');
```
Solution:
```
pFT857_library->CAT_Data_String[CAT_Data_Index].mid(0,1).number(n,2).rightJustified(4,'0');
```
Output:
" convert LSB to binary with leading zeroes "
"0001"
I have a Qt-style string QString frequency = "1426000", split into another QString as QString frequency_array = "01 42 60 00".
I need to change each pair into 8-bit binary, with the MSD as the upper 4 bits of the 8-bit word AND the LSD as the lower 4 bits of the 8-bit word.
For debugging purposes I like to print each step of the conversion. As an example, I would like to see "42" as "01000010".
I prefer Qt QString in C++ code.
Here is my code so far:
```
for (int index = 0; index < 6; index++)
{
    text = pFT857_library->CAT_Data_String[index];
    m_ui->lineEdit_14->setText(text);
    qDebug() << text;
    text = text.mid(0,1).toLocal8Bit();
    qDebug() << text;
    text = pFT857_library->CAT_Data_String[index];
    text = text.mid(1,1).toLocal8Bit();
    qDebug() << text;
}
```
The above WAS my initial post/code, and I have since dropped that post.
My current code is a little over-documented, so I am hesitant to post it.
However, I have a (simple) question.
Using the following snippet
```
int n = pFT857_library->CAT_Data_String[CAT_Data_Index].toInt();
text = pFT857_library->CAT_Data_String[CAT_Data_Index].number(n,2);
qDebug() << text;
```
I can visualize the binary representation of the string; that is partially my goal.
My question is: how do I visualize the FULL 4 bits of the desired info? "number" with option "2" prints all significant bits, BUT I need the full length of 4 bits, including leading zeroes.
Example: "number" prints "100", representing decimal 4. I need "0100", the full 4 bits.
modified 6-Sep-24 15:33pm.
|
What is pFT857_library, as that appears to be the code you are using? But if you want a simple method, you could easily write a short loop that prints the value of each of the 4 or 8 bits one by one.
|
This is a Qt issue as far as using the toInt function goes. And as the documentation (QString Class | Qt Core 5.15.17[^]) clearly shows, it handles numbers in any base from 2 to 36.
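For example, a minimal sketch (the base is toInt's second argument):
```
#include <QString>
#include <QDebug>

int main() {
    bool ok = false;
    int n = QString("F9").toInt(&ok, 16);   // parse as hexadecimal -> 249, ok == true
    // print as 8-bit binary with leading zeroes
    qDebug() << QString::number(n, 2).rightJustified(8, '0');   // "11111001"
    return 0;
}
```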
modified 4-Sep-24 6:16am.
|
I'm maintaining old code.
I can't use enum class.
Do you wrap your enums in a namespace to kinda simulate enum class?
Or is there a pattern I don't know about to make regular enums safer?
Thanks.
CI/CD = Continuous Impediment/Continuous Despair
|
You don't need to go as far as a namespace; just a struct will do.
```
struct Color {
    enum value { Red, Yellow, Blue };
};

int main()
{
    Color::value box = Color::value::Red;
}
```
If you want to be able to print Color::Red as a string, it's a bit more involved
```
#include <iostream>
#include <string>

struct Color {
    enum hue { Red, Yellow, Blue } value;

    std::string as_string() const {
        switch (value) {
            case Red:    return "Red";
            case Yellow: return "Yellow";
            case Blue:   return "Blue";
        }
        return {};
    }

    Color(Color::hue val) : value(val) {}

    bool operator==(const Color& other) const {
        return value == other.value;
    }

    friend std::ostream& operator<<(std::ostream& os, const Color& color);
};

// Note: this streams the underlying integer (0 for Red, etc.), not the name;
// use as_string() for the name.
std::ostream& operator<<(std::ostream& os, const Color& color)
{
    os << color.value;
    return os;
}

int main()
{
    Color x = Color::Red;
    Color y = Color::Blue;
    std::cout << x << '\n';
    std::cout << x.as_string() << '\n';
    if (x == y)
        return 1;
    else
        return 0;
}
```
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
modified 29-Aug-24 11:50am.
|
ahhhh yes, I've seen that before.
thanks.
CI/CD = Continuous Impediment/Continuous Despair
|
The last time I did hardcore C was a while back, before the 128-bit days. OK, cool. But I've got a silly question when it comes to printing a 128-bit integer. You see online examples saying just do a long long cast and call it a day, bro, but then they use small numbers. Which obviously works for them, because it's a small number.
But I figured, hey, I'll try it for poops and giggles. As you might expect, the number is never correct.
```
#include <stdio.h>

int main()
{
    /* two problems here: the literal itself doesn't fit in a long long,
       and the cast below throws away the high 64 bits */
    __uint128_t u128 = 34028236692093846346337460743176821145LL;
    printf("%llu\n", (unsigned long long)u128);
    return 0;
}
```
The above don't do it. Now, I could bit-shift to get around any limits, but I ultimately need the output formatted with comma separators, so doing bit logic would cause issues when putting Humpty Dumpty back together again.
So, um... anyone know how to print a 128-bit number in C? Preferably portable C.
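One approach that sidesteps the cast entirely: peel off decimal digits with repeated division by 10. A sketch, with the caveat that __uint128_t is itself a GCC/Clang extension rather than standard C, so this is "portable" only among compilers that support it (the body compiles as either C or C++):
```
#include <stdio.h>

int main(void)
{
    __uint128_t u = ~(__uint128_t)0;   /* 2^128 - 1; there are no 128-bit literals */
    char buf[41];                      /* 39 digits max, plus NUL */
    char *p = buf + sizeof buf - 1;
    *p = '\0';
    do {
        *--p = (char)('0' + (int)(u % 10));  /* peel off the lowest decimal digit */
        u /= 10;
    } while (u != 0);
    printf("%s\n", p);   /* prints 340282366920938463463374607431768211455 */
    return 0;
}
```
Comma separators could be emitted in the same loop, e.g. by writing a ',' after every third digit.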
Jeremy Falcon
modified 28-Aug-24 17:19pm.
|