My graphics library for embedded devices has to deal with pixel formats I don't know ahead of time, because the devices are diverse, and to be efficient, pixels must be represented in their native bit format at all times. In other words, a pixel should store its values in whatever underlying format best suits the device or data stream.
To complicate things, not all channels are color channels (think of the alpha channel for transparency), and not all channels are the same bit width (RGB565 16-bit color, for example).
There are also totally weird formats that are nonetheless common, like RGB666, an 18-bit format (262,144 colors).
And not all devices use the same byte order in their streams.
To complicate things further, not all devices use an RGB color model. To my surprise, I found out that some devices are BGR, regardless of byte order. Meanwhile, JPEGs are (IIRC) CMY or CMYK! That's also critical, because JPEG is the input format from cameras, which are commonly used with these devices.
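To make the channel-order and byte-order issues concrete, here's a minimal sketch (names are mine, not the library's): the same logical "pure red" packs to different bits depending on whether the device is RGB or BGR, and the bytes hit the wire in a different sequence depending on the stream's endianness.

```cpp
#include <cstdint>

// Illustrative only. RGB565 puts red in the top 5 bits;
// a BGR565 device puts blue there instead.
constexpr std::uint16_t pack_rgb565(unsigned r, unsigned g, unsigned b) {
    return std::uint16_t(((r & 0x1F) << 11) | ((g & 0x3F) << 5) | (b & 0x1F));
}
constexpr std::uint16_t pack_bgr565(unsigned r, unsigned g, unsigned b) {
    return std::uint16_t(((b & 0x1F) << 11) | ((g & 0x3F) << 5) | (r & 0x1F));
}

// pure red: 0xF800 on an RGB565 panel, 0x001F on a BGR565 one
static_assert(pack_rgb565(31, 0, 0) == 0xF800, "rgb red");
static_assert(pack_bgr565(31, 0, 0) == 0x001F, "bgr red");

// a big-endian stream sends the 0xF8 byte first; little-endian sends 0x00 first
constexpr unsigned char rgb_red_be_first = pack_rgb565(31, 0, 0) >> 8;
```

Same color, three different-looking byte streams, and none of this is visible from the raw bits alone.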
What is a pixel?
It has a color space, which includes a color model. It has a binary layout. It uses potentially highly heterogeneous channels for its data.
A pixel is as complicated as a Unicode character!
And I can't be messing with most of this in RAM, nor at runtime. Nope, most of it needs to be resolvable at compile time, and on top of that, the dead code generated by all the metaprogramming involved needs to be removable by the compiler to avoid code bloat on these tiny devices.
One example is retrieving values. I have generic getters and setters (using metaprogramming) that generate the necessary masks and shifts at compile time, based on the channel index and the variable series of pixel_channel_traits you give them. Each channel has its own bit depth, so retrieving and setting the channels individually gets complex.
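The masks-and-shifts-from-a-parameter-pack idea can be sketched like this; all names here are assumptions of mine, not the library's actual API, but the technique is the same: each channel trait carries a bit depth, and the shift and mask for any channel index fold out of the pack as constant expressions.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch: a channel knows only its bit depth.
template<std::size_t BitDepth> struct channel_traits {
    static constexpr std::size_t bit_depth = BitDepth;
};

template<typename... Channels> struct pixel {
    using int_type = std::uint32_t; // assumed wide enough for this demo
    static constexpr std::size_t bit_depth = (Channels::bit_depth + ...);
    int_type value = 0;

    // shift for channel I = sum of the bit depths of the channels after it
    template<std::size_t I> static constexpr std::size_t shift() {
        constexpr std::size_t depths[] = { Channels::bit_depth... };
        std::size_t s = 0;
        for (std::size_t i = I + 1; i < sizeof...(Channels); ++i) s += depths[i];
        return s;
    }
    template<std::size_t I> static constexpr int_type mask() {
        constexpr std::size_t depths[] = { Channels::bit_depth... };
        return ((int_type(1) << depths[I]) - 1) << shift<I>();
    }
    template<std::size_t I> constexpr int_type get() const {
        return (value & mask<I>()) >> shift<I>();
    }
    template<std::size_t I> constexpr void set(int_type v) {
        value = (value & ~mask<I>()) | ((v << shift<I>()) & mask<I>());
    }
};

using rgb565 = pixel<channel_traits<5>, channel_traits<6>, channel_traits<5>>;
static_assert(rgb565::bit_depth == 16, "16-bit pixel");
static_assert(rgb565::mask<0>() == 0xF800, "red occupies the top 5 bits");
```

Because every shift and mask is a constant expression, the optimizer reduces each get/set to the same couple of instructions you'd write by hand for a hard-coded format.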
My code can resolve it all at compile time. I feel like a hero.
I wouldn't need to do this if there was some sort of unified driver model for these things.
But all the metaprogramming in C++ is cool. I really hadn't caught up with C++11 variadic templates and such until now. They're neat!
An rgb565BE pixel definition:
typedef pixel_channel_traits<uint8_t,5,pixel_channel_kind::color> color5_channel_traits_t;
typedef pixel_channel_traits<uint8_t,6,pixel_channel_kind::color> color6_channel_traits_t;
typedef pixel_traits<false, color5_channel_traits_t, color6_channel_traits_t, color5_channel_traits_t> rgb565be_traits_t; // leading "typedef pixel_traits<" portion lost in the original post; name assumed
This actually generates proper getter and setter methods off the int_type, and gives you a union between the int type and an array of bytes representing the data. It's complicated and weird code. The above are *trait* classes associated with the pixel template classes, which add indexed getter and setter methods for the channels that get and set a floating-point value between 0 and 1. That way you can operate on them generically regardless of format if you need to, which is useful for things like color conversion.
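The normalized-channel trick is worth a sketch of its own (again, names are mine, not the library's): once every channel can round-trip through the unit interval, converting between bit depths is just two calls, no matter what format either side uses.

```cpp
#include <cstdint>

// Hypothetical sketch of normalized channel access: a Depth-bit raw
// value maps to [0,1] and back, so any two formats can talk.
template<unsigned Depth>
struct channel {
    static constexpr unsigned max = (1u << Depth) - 1;
    static float to_real(unsigned raw) { return float(raw) / float(max); }
    static unsigned from_real(float v) {
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        return unsigned(v * max + 0.5f); // round to nearest
    }
};

// e.g. widening a 5-bit red channel to 8 bits via the unit interval
unsigned convert_5_to_8(unsigned raw5) {
    return channel<8>::from_real(channel<5>::to_real(raw5));
}
```

Full-scale stays full-scale (31 maps to 255) and zero stays zero, which is exactly what you want from a depth conversion.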
Real programmers use butterflies
modified 13-Mar-21 5:29am.