you are very old school
"A little time, a little trouble, your better day"
Badfinger
---
Yes, the movie was a tad before my time but it's a good one.
The most expensive tool is a cheap tool. Gareth Branwyn
JaxCoder.com
---
I agree.
"A little time, a little trouble, your better day"
Badfinger
---
Does the volume mixer show Opera at the same level as the rest of the programs?
---
David O'Neil wrote: Does the volume mixer show Opera at the same level as the rest of the programs?
DUH !
You fixed it.
Of course, I had to use my Super-IQ-Genius brain to find the mixer in the first place; honestly, I do not remember how I made it appear on my screen. Thank you Microsoft for adding yet another IQ test obfuscation to make things idiotically more difficult.
Whatever, whatever, I now advocate David O'Neil for president of the USA
modified 11-Jul-22 7:10am.
---
The question remains in my mind: When did I change that ?
Or perhaps, even more relevant: How did that setting change ?
I don't remember ever looking at it; certainly not in the past week or two.
---
Have you considered using the built-in browser?
Again, I hear someone saying:
C-P-User-3 wrote: Opened Microsoft Edge
Works Fine.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
---
Yeah, the Microsoft Edge Browser ! What a great idea !
That's better than using a Google designed whatever-they-call-theirs
---
Opera Just Went Silent
Damn. Here I thought I was gonna read Oprah Winfrey's obituary...
---
I have a little graphics library for IoT: GFX (htcw_gfx)[^]
I originally targeted C++17 but the Arduino toolchain did not have a new enough GCC version to support it. I had to fork it a bit to support C++14.
One reason was a massive function, 90% of which is computed at compile time down to a very few asm instructions. Exactly which instructions depends on how you call it, which is why there are several pages of code.
What it does is convert one pixel format to another, including things like grayscale or HSV conversion.
The trouble with C++14 is that it couldn't evaluate the entire function at compile time. With C++17 the compiler can run the function itself, so if the color value is constant there is no runtime overhead for the conversion.
The bottom line is a bit of a performance win, since this function is used all over the place for virtually every drawing operation. If the user passes in a constant color value, which is typical, the conversions no longer require any runtime overhead!
Gosh that's neat, and it's one of the reasons I love C++ so much: every version allows the compiler to compute more and more during the compile phase. It's actually kind of amazing how much you can get it to do. I joke that the C++ compiler will make your bed. There's no other language like it.
Feeling pretty good about all this. It's deeply satisfying.
Here's something C++17 can do at compile time if all input values are known at that point:
// Fixed-point scale factor: use more precision when int is wider than 16 bits
const int CVACC = (sizeof(int) > 2) ? 1024 : 128;
// Resolve the destination pixel's R, G and B channel traits, all at compile time
using trindexR = typename PixelTypeRhs::template channel_index_by_name<channel_name::R>;
using trchR = typename PixelTypeRhs::template channel_by_index_unchecked<trindexR::value>;
using trindexG = typename PixelTypeRhs::template channel_index_by_name<channel_name::G>;
using trchG = typename PixelTypeRhs::template channel_by_index_unchecked<trindexG::value>;
using trindexB = typename PixelTypeRhs::template channel_index_by_name<channel_name::B>;
using trchB = typename PixelTypeRhs::template channel_by_index_unchecked<trindexB::value>;
// Read the source Y, Cb and Cr channels as 0-255 values
const auto chY = uint8_t(source.template channelr_unchecked<chiY>()*255);
const auto chCb = uint8_t(source.template channelr_unchecked<chiCb>()*255);
const auto chCr = uint8_t(source.template channelr_unchecked<chiCr>()*255);
// Center Cb and Cr around zero
const int cBA = chCb-128;
const int cRA = chCr-128;
// Fixed-point YCbCr -> RGB conversion, clamped to 0-255
const auto cnR = (uint8_t)helpers::clamp((int)(chY + ((int)(1.402 * CVACC) * cRA) / (float)CVACC),0,255);
const auto cnG = (uint8_t)helpers::clamp((int)(chY - ((int)(0.344 * CVACC) * cBA + (int)(0.714 * CVACC) * cRA) / (float)CVACC),0,255);
const auto cnB = (uint8_t)helpers::clamp((int)(chY + ((int)(1.772 * CVACC) * cBA) / (float)CVACC),0,255);
// Rescale each 0-255 result to the destination channel's bit depth and write it
const auto r = typename trchR::int_type(cnR*(trchR::scale/255.0));
helpers::set_channel_direct_unchecked<PixelTypeRhs,trindexR::value>(native_value,r);
const auto g = typename trchG::int_type(cnG*(trchG::scale/255.0));
helpers::set_channel_direct_unchecked<PixelTypeRhs,trindexG::value>(native_value,g);
const auto b = typename trchB::int_type(cnB*(trchB::scale/255.0));
helpers::set_channel_direct_unchecked<PixelTypeRhs,trindexB::value>(native_value,b);
good = true;
Just amazing. There are actually compile-time computed bit shifts to get and set channel values behind that mess. *shaking my head*. It's incredible.
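To make the compile-time idea concrete outside of GFX, here's a tiny standalone sketch (the function name is my own, not GFX's API): a constexpr conversion that packs 8-bit RGB into RGB565, with static_asserts proving the whole computation folds away at compile time.

```cpp
#include <cstdint>

// Pack three 8-bit channels into a 16-bit RGB565 value.
// Because this is constexpr, the compiler folds the shifts and
// masks away entirely when the inputs are compile-time constants.
constexpr std::uint16_t rgb_to_565(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return static_cast<std::uint16_t>(((r >> 3) << 11) |  // 5 bits of red
                                      ((g >> 2) << 5)  |  // 6 bits of green
                                       (b >> 3));         // 5 bits of blue
}

// static_assert only accepts constant expressions, so these lines
// are proof the conversion happened during compilation:
static_assert(rgb_to_565(255, 255, 255) == 0xFFFF, "white");
static_assert(rgb_to_565(0, 0, 0) == 0x0000, "black");
static_assert(rgb_to_565(255, 0, 0) == 0xF800, "pure red");
```

Pass a runtime value instead and the same function still works; the constexpr just gives the compiler permission to do the work early when it can.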
To err is human. Fortune favors the monsters.
modified 10-Jul-22 6:04am.
---
I am reminded of a quote attributed to David Gries[^] of Cornell.
Gries said: Never put off to runtime what you can do at compile time. I had the pleasure, ca 1982, of attending a University of Wollongong Summer School where Gries was one of the presenters. He pretty much ran us through his book "The Science of Programming", about the mathematical proof of algorithms - loop invariants, progress, termination conditions etc.
And yes, the title of the book is a light-hearted reference to Knuth.
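For anyone who hasn't met Gries's style: the method annotates a loop with an invariant (a condition that holds before and after every iteration) and a bound function (a quantity that strictly decreases, guaranteeing termination). A toy example of my own, not from the book:

```cpp
#include <cassert>

// Integer square root by linear search, annotated Gries-style.
// Invariant: r*r <= n      (true before and after every iteration)
// Bound:     n - r          (strictly decreases, so the loop terminates)
int isqrt(int n) {
    assert(n >= 0);
    int r = 0;
    while ((r + 1) * (r + 1) <= n) {  // the guard preserves the invariant
        ++r;
    }
    // Postcondition: r*r <= n < (r+1)*(r+1), i.e. r is the floor of sqrt(n)
    return r;
}
```

The point of the discipline is that the invariant plus the negated guard *prove* the postcondition, rather than testing your way to confidence.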
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
---
I'd tweak that quote to use should instead of can. Some things are better left until runtime so they can be configured rather than hard-coded.
---
Greg Utas wrote: I'd tweak Then it ain't a quote.
Makes you a politician and a liar. Don't go that path, it leads to the dark side.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
---
Do you have an example? I think it's memory vs. speed; that's how DLLs were born.
"A little time, a little trouble, your better day"
Badfinger
---
Reading a configuration file instead of hard-coding things is an example. Even just waiting until runtime to allocate an array qualifies.
When C++ calculates things at compile time, it's usually a case of using more memory to run faster, at least when templates are involved. And a DLL is the opposite, running slower to save memory.
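As a concrete illustration of "should, not can" (the file name and format here are hypothetical): the value below *could* be a compile-time constant, but reading it at runtime lets an operator tune it without rebuilding the program.

```cpp
#include <fstream>
#include <string>

// Could be `constexpr int kBufferSize = 4096;` baked in at compile time,
// but a runtime lookup keeps it configurable without a rebuild.
int buffer_size_from_config(const std::string& path, int fallback) {
    std::ifstream in(path);   // e.g. a one-line "app.conf" holding an integer
    int value = 0;
    if (in >> value && value > 0) {
        return value;         // operator-supplied setting
    }
    return fallback;          // sensible default when the file is absent or bad
}
```

Deferring the decision to runtime trades a tiny bit of startup cost for flexibility, which is exactly the opposite trade to the compile-time computation being discussed.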
---
You are right, configuration files would qualify.
"A little time, a little trouble, your better day"
Badfinger
---
I agree with the second part about templates and DLLs: memory/speed trade-offs.
"A little time, a little trouble, your better day"
Badfinger
---
The Multics operating system was the mother of DLLs.
Multics - Wikipedia[^]
"A little time, a little trouble, your better day"
Badfinger
---
Greg Utas wrote: When C++ calculates things at compile time, it's usually a case of using more memory to run faster, at least when templates are involved.
I disagree with this. Where I'm using templates to do precomputed things, it doesn't increase memory use at all; it simply increases the flexibility of my API.
For example, an RGB565 pixel is 16 bits, with 5 bits for Red, 6 bits for Green and 5 bits for Blue.
Picking off or setting individual channels requires bitshifting and masking.
I use templates to facilitate:
A) Querying pixels, so you can write rgb_pixel<16>::has_channel_names<channel_name::R, channel_name::G>::value (which evaluates to the bool true)
B) Picking off or setting arbitrary pixel channel data, like px.template channel<channel_name::G>(), which returns the green channel
C) Doing bitmap manipulation using the binary footprint implied by the constructed pixels, which can have any number of channels with any sort of names (RGB is just one example)
None of this is about trading speed for memory.
It's about buying flexibility. Period.
There is no case in GFX where I use computed compilation to trade performance aspects. It's all about flexibility while maintaining the same kind of performance you get without it.
In fact, forgive me but I can't think of a situation like you describe.
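Written out by hand, the bit twiddling behind that RGB565 example looks like this (this is just the shifts and masks such templates generate, not GFX's actual machinery): Red occupies bits 11-15, Green bits 5-10, Blue bits 0-4.

```cpp
#include <cstdint>

// RGB565 layout: RRRRRGGG GGGBBBBB  (R: bits 11-15, G: 5-10, B: 0-4)
constexpr std::uint16_t G_MASK  = 0x07E0;  // the six green bits
constexpr int           G_SHIFT = 5;

// Pick off the 6-bit green channel.
constexpr std::uint8_t get_green(std::uint16_t px) {
    return static_cast<std::uint8_t>((px & G_MASK) >> G_SHIFT);
}

// Set the green channel, leaving red and blue untouched.
constexpr std::uint16_t set_green(std::uint16_t px, std::uint8_t g) {
    return static_cast<std::uint16_t>((px & ~G_MASK) | ((g & 0x3F) << G_SHIFT));
}

static_assert(get_green(0x07E0) == 63, "all-green pixel");
static_assert(set_green(0xF81F, 63) == 0xFFFF, "magenta plus full green is white");
```

The template approach generates exactly this kind of code for whatever channel layout the pixel type declares, which is the flexibility being described: same instructions, no hand-written variant per format.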
To err is human. Fortune favors the monsters.
---
I'm not talking about your particular example.
Memory usage depends on how many instantiations of a template you end up creating. If it's a small number, doing things at compile time might actually yield savings. But the way templates are generally used will produce multiple copies of more or less the same code unless the compiler is very clever. There is an Embedded C++ standard that removes templates (and exceptions and RTTI) for this reason, though most systems now have so much memory that I doubt many people still use it.
---
You're talking about binary size. I see. Yes there is some code bloat, but it all depends on what you're doing.
In some cases - surprisingly more than even I would have expected - you won't be duplicating any more code than you would by hand.
This code is a good example. There's virtually zero template overhead for multiple instantiations. It doesn't mean some code isn't duplicated, but you would have had to duplicate that duplicated code if you made each instantiation by hand.
htcw_tft_io/tft_spi.hpp at master · codewitch-honey-crisis/htcw_tft_io · GitHub[^]
Forgive me, because earlier I thought you were talking about runtime memory usage, not binary size.
I should add that while it's true in many cases that binary size increases memory usage, this is not true on most of the platforms I develop on these days. Hence the point of your assertion was completely off my radar.
To err is human. Fortune favors the monsters.
---
Precomputed sin tables. Computing a sine is a pretty complicated math formula for a computer, so you can precompute tables of sines instead. You use memory to hold those precomputed sines, which you either computed at startup or had the compiler compute and embed in your executable's .text section or whatever, but either way it ends up using RAM to hold them. Then when you want a sine, it's a simple array lookup, at least assuming your operand is covered by the table.
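A minimal sketch of the startup-time variant (degree granularity; the names are mine): build the table once, then every call is an array index instead of a libm call.

```cpp
#include <array>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// One entry per degree, filled once using the expensive std::sin.
// (A constexpr table would move even this work to compile time.)
static std::array<float, 360> make_sin_table() {
    std::array<float, 360> t{};
    for (int deg = 0; deg < 360; ++deg) {
        t[deg] = static_cast<float>(std::sin(deg * kPi / 180.0));
    }
    return t;
}

// Table lookup replaces the sin() computation on every call.
float fast_sin_deg(int deg) {
    static const std::array<float, 360> table = make_sin_table();
    return table[((deg % 360) + 360) % 360];  // wrap, handling negative angles
}
```

The trade is exactly the one described: 1440 bytes of RAM for the table buys an O(1) lookup, at one-degree resolution; finer resolution or interpolation costs more memory or a little more arithmetic.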
To err is human. Fortune favors the monsters.
---
Precomputed trig, logs, etc. are pretty old school and very fast. If you have to do a lot of them and accuracy is not so critical, tables are the way to go. Most modern CPUs are fast enough to handle non-precomputed math functions on an as-needed basis. I am glad you brought this up. Gave me a reason to research this topic. Thanx
"A little time, a little trouble, your better day"
Badfinger