and others can understand Angular.
Real programmers use butterflies
|
You must not be using MVVM. If you were, a visual designer would show up to do the visuals for you.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
|
Yeah, you don't get visual designers with these particular MCUs and frameworks. Nor a lot of RAM. Nor a lot of CPU.
It's not like desktop or web development.
Real programmers use butterflies
|
honey the codewitch wrote: Does anyone actually like creating screens/user interfaces?
It's a slog, making screens. Just rote code and very little problem solving or creativity. I'll try not to take your comments personally.
I've spent the last 20 years at my current employer doing the user interfaces and installers for our line of commercial ink-jet printing systems[^]. Our older products used C++/MFC, and our current ones C#/WPF.
I genuinely enjoy what I do. The challenge comes in making the product's features available to the user in a way that they can easily discover and understand. WPF provides wonderful tools for making articulate, capable UI's in a fairly short amount of time. Our UI's are heavily graphical since they are acting as the control panel for a complicated piece of machinery.
That said, the "screen stuff" that others in this thread have derided actually isn't as much of my job as you might think. My UI applications are multithreaded out the wazoo. They communicate with one or more Windows services that handle hardware control. We have external interfaces for control from customer workflow equipment. There's a substantial data management facet to things, keeping track of the machine configuration and setup. A major part of my job is to insulate the user as much as possible from timing and other dependencies imposed by the hardware.
I acquired this part of my team's workload by default. Nobody else wanted to do it, which seems to correspond with the prevailing view in this thread.
In my view, both the "screen stuff" and the underlying application require significant creativity and skill. The creativity comes from considering the user, the foremost priority when designing a user interface. Skill comes from mapping that consideration onto your product features flexibly and efficiently. I gain significant satisfaction from doing both.
Software Zen: delete this;
|
Quote: The creativity comes from considering the user, the foremost priority when designing a user interface.
When I have said this very sentiment to people in the past, I very often receive head-tilted, forehead-scrunched incredulity in response.
I would add one caveat, however. The same sentiment applies to everything else in software, even to the point of having empathy for current and future colleagues who may come later to repair or extend one's work.
Empathy is a highly underrated character asset.
Just my opinion, of course.
Cheers,
Mike Fidler
"I intend to live forever - so far, so good." Steven Wright
"I almost had a psychic girlfriend but she left me before we met." Also Steven Wright
"I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
|
Graphical UIs are somewhat of a pain indeed (not UIs in general; CLIs can often be hacked together easily, which would explain why most CLI tools out there require arcane knowledge and the occasional goat to use properly).
I came to regard that good old 80/20 rule as: you spend 20% of the work on actual functionality and 80% on the myriad UI details.
Well, it really depends on how you look at the situation. When you're supposed to ship a product to end users, the GUI is indeed a crucial product feature. And it's not as if GUI work sucks by default; it's just a whole different kind of work from algorithmic back-end work.
However, I very much disagree on "little problem solving and creativity". Granted, that heavily depends on the product in question, but I gotta say, the GUIs I'm working on require quite a lot of problem solving due to the complex and (optionally) interconnected workflows the product is supposed to support. The stuff I'm doing is not mundane, diligence-first, creativity-never. And quite frankly, the moment there's no creativity involved, I'd write a GUI generator that takes my back-end data structures and just plops auto-generated GUIs on top of them.
The biggest problem, aside from GUI work being a whole different way of thinking (compared to, let's say, churning data in the back-end), is the reliance on the product manager. As I said, the stuff I'm working on does a lot of (complex) things. That means that a dude who talks to the users of this piece and discusses how the GUI could be set up to serve their needs is invaluable. I sooooo wouldn't want to sit in customer seminars, let alone talk with customers, who would apparently rather kill themselves than say CLEARLY what they mean.
So far, so good. The problem is, this dude isn't a developer, meaning he doesn't quite get the concept of "If we talk about what we want in advance, this whole thing will take a week. If you keep dropping tiny tidbits you want added after I'm basically done, it'll take a month." But I'm still glad to have him, and I think such culture clashes are manageable. Once I got proactive about suggesting potential extension points and defining myself which kinds of changes could be dropped in at which stage of a particular GUI's development, things really got better.
But yeah, if the GUIs I was doing required no problem-solving/creativity, I'd just automate the sh*t out of GUI building and call it a day.
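The auto-generation idea above can be sketched in a few lines. This is a hypothetical illustration, not the poster's actual code: a hand-written table of field descriptors (the `Field`/`Widget` names are invented) that a generator walks to emit a crude textual form, standing in for whatever real widget toolkit would be targeted.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical back-end field description; real code might derive this
// from serialization metadata instead of writing it out by hand.
enum class Widget { Checkbox, TextBox, Spinner };

struct Field {
    std::string label;
    Widget kind;
};

// Walk the descriptors and emit a crude textual "form" - a stand-in for
// emitting real toolkit widgets.
std::string generate_form(const std::vector<Field>& fields) {
    std::ostringstream out;
    for (const auto& f : fields) {
        switch (f.kind) {
            case Widget::Checkbox: out << "[ ] " << f.label << '\n'; break;
            case Widget::TextBox:  out << f.label << ": [________]\n"; break;
            case Widget::Spinner:  out << f.label << ": [0 ^v]\n"; break;
        }
    }
    return out.str();
}
```

The point of the sketch is the shape of the approach: once the descriptors exist, new screens are data, not code.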
|
I guess I overstated it. Basically, here's the thing: I'm working on a machine running at 240MHz, with 300KB of usable internal RAM, 4MB of PSRAM accessible via SPI at 80MHz, and a display operating at 20MHz.
You'd think I'd need a lot of creativity to make that work. I did. I already put all that creativity into GFX - the library I used to make these screens.
Basically, in a sense I did automate the sh*t out of it. But I can't automate everything. For starters, typesetting TrueType fonts without some sort of sophisticated OpenType layout system requires manual tweaking, as does just figuring out where to put everything in 480x800 (yes, the display is in portrait).
I'm considering adding a .bmp save feature just so I can render the screens to files on a PC (GFX runs anywhere, but I don't have drivers for Windows, so there it will only draw to bitmaps), and that would save some turnaround time, given how long uploads to the device take every time I make a code change.
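For the curious, the ".bmp save" part is straightforward; the hard part is the rendering itself. A minimal 24-bit BMP writer looks roughly like this. This is a generic sketch, not GFX's actual API: `save_bmp` and its parameters are invented names, and `rgb` is assumed to be row-major RGB with the top row first.

```cpp
#include <cstdint>
#include <ostream>
#include <vector>

// Little-endian field helpers for the BMP headers.
static void put32(std::ostream& o, uint32_t v) {
    for (int i = 0; i < 4; ++i) o.put(char((v >> (8 * i)) & 0xFF));
}
static void put16(std::ostream& o, uint16_t v) {
    o.put(char(v & 0xFF)); o.put(char(v >> 8));
}

// Minimal uncompressed 24-bit BMP writer (hypothetical helper, not GFX).
void save_bmp(std::ostream& out, int w, int h, const std::vector<uint8_t>& rgb) {
    const int row = (w * 3 + 3) & ~3;             // rows padded to 4 bytes
    const uint32_t data = uint32_t(row) * h;
    out.write("BM", 2);                           // BITMAPFILEHEADER
    put32(out, 54 + data); put32(out, 0); put32(out, 54);
    put32(out, 40);                               // BITMAPINFOHEADER
    put32(out, uint32_t(w)); put32(out, uint32_t(h));
    put16(out, 1); put16(out, 24);                // one plane, 24 bpp
    put32(out, 0); put32(out, data);              // BI_RGB, image size
    put32(out, 2835); put32(out, 2835);           // ~72 DPI in pixels/metre
    put32(out, 0); put32(out, 0);                 // no palette
    for (int y = h - 1; y >= 0; --y) {            // bottom-up rows, BGR order
        for (int x = 0; x < w; ++x) {
            const uint8_t* p = &rgb[(size_t(y) * w + x) * 3];
            out.put(char(p[2])); out.put(char(p[1])); out.put(char(p[0]));
        }
        for (int pad = row - w * 3; pad > 0; --pad) out.put(0);
    }
}
```

The bottom-up row order and BGR byte order are the two classic gotchas of the format.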
I haven't had time to build out my layout/"windowing" engine yet or this would be somewhat easier.
The bottom line though is, the creative stuff is behind me. Now it's just placement and tweaking.
The actual functionality of the UI is pretty limited due to the nature of the device. The only challenge was making it pretty on a machine this petty.
Real programmers use butterflies
|
Ah, I see your point. So if I get it right, you're doing a heaping crap-load of grunt work, GUI boilerplate not being any more enjoyable than any other sort of boilerplate.
That's why I didn't get your point at first; my product is a Windows executable, and Windows is pretty good at taking grunt boilerplate off my shoulders.
|
I've done some UI work on an Arduino - note the lack of the G, but it's still a UI. Rotary encoders, buttons, joysticks: it's still all UI. I totally get the "how can I automate this?"
|
Yes, I enjoy it, but then I work in plain vanilla PHP, building things from scratch, so there is plenty of creativity involved.
Most often, working with legacy code, I must just copy the UI of other pages in the code, but occasionally I do get to build something new, and I enjoy figuring out the CSS to make it work (there is a rule around here, soon to change, thank God, that we only use JavaScript when we also have a noscript option). Even though I know that I'm often reinventing the wheel, I find it a fun little diversion. I also enjoyed deriving physics formulas from scratch in my college classes, so that's a clue.
|
I liked the days of the old WinForms. I don't mind HTML, but I super dislike XAML and scaffolds like React. I can work in them, but they are not fun to me, and I find myself delaying building them. Note I have not tried the new WinForms yet.
|
Been a full-stack developer on a WinForms app for 15+ years. I actually enjoy the UI part as much as the rest (code, DB design/development). I am very particular about my UI work and have established a number of standards that I consistently follow. I make sure the other devs (who are much, much newer to the project than me) follow the same standards. They are not overly happy about that...
|
I don't like bizdev and I try to avoid it. I don't look down on it, as we need folks to do it, and it is legitimate dev work, but I just do not like it, Sam, I am.
I'm happy for you that you do.
These days I mostly do hardware and software for little smart gadgets. It reminds me of coding back in the '80s when I learned, when every byte counted. Getting TrueType fonts rendering on a system with less than 300KB of RAM felt like a huge accomplishment.
Real programmers use butterflies
|
I do.
Users usually deliver their UI design; we have to extract the spec from it. UI-driven design is as old as the programmer-only degree.
It is crucial that they don't leave it to IT to figure out.
|
Actually, I love making those UI screens.
I try to make things accessible and simple for users. It's very hard to make a simple and useful interface.
|
I feel you! I LOVE creating the tools and I LOVE the satisfaction of customers using my tools, but actually using my tools myself is BORING. Check out my toy...
BuilderHMI[^]
|
It's rare for me to post here in the lounge but I'm curious about this.
So I'm buying a new rig after 10+ years on my current one: a new Dell Precision 5820 with 2x the CPU cores, so I can build bigger things faster. And I'm trying to decide whether to spend more money on an RTX A4000 over the RTX 4000. The RTX A4000 is 2x better in terms of everything for $300 USD more.
I'm wondering if anybody here is doing any GPU programming, or writing code to use the GPU along with the CPU, taking advantage of CUDA cores and parallel processing. Currently an RTX A4000 would be overkill for me, but if I can use it to do something cool with all those cores on the card, then I would pull the trigger and get the better card.
I hear all this talk about deep learning and AI, plus Bitcoin mining, and wonder if that's something within reach of dabbling with if I have a cool video card. Sometimes I wonder if I should be mining Bitcoin while I write code during the day. Being able to get just one coin would be cool at today's rate.
RTX A4000 Graphics Card | NVIDIA
Quadro RTX 4000 Graphics Card | NVIDIA Quadro
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
No idea, but my kid keeps complaining that the high-end cards just aren't available.
So look into what is actually available.
|
Oh, I can get one, no problem; already confirmed that. If I book it today, it would be a Jan 20, 2022 delivery for the whole system. It's getting the card alone that is hard, but Dell sets aside cards for complete system builds as a priority. And the price wasn't that jacked up, either.
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
After 5 weeks, everything becomes a habit. I expect that at the end of that time, you'll want more rigs ... for mining. Like having a chinchilla farm.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
|
I used to write a lot of GPU shaders for graphics, but if there is anything with an even greater hunger for GPU processing power, it is AI. The shader for calculating a neuron may be relatively simple, but there is no limit to how many neurons you may want to use. That said, for both graphics and AI, the question is always whether you are working at a level that justifies the price; otherwise it's more economical to use the best of the last generation and upgrade later, once today's best of the best has also taken a back seat to something new.
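To see why the per-neuron work is "relatively simple", here is a minimal CPU sketch (names are illustrative): one artificial neuron is just a weighted sum plus a bias, squashed by an activation function. A shader runs thousands of these in parallel; the arithmetic per neuron is no more than this.

```cpp
#include <cmath>
#include <vector>

// One artificial neuron: dot product of weights and inputs, plus bias,
// passed through a sigmoid activation. This is the whole per-neuron job
// a GPU thread would do.
float neuron(const std::vector<float>& w, const std::vector<float>& x, float bias) {
    float sum = bias;
    for (size_t i = 0; i < w.size(); ++i) sum += w[i] * x[i];
    return 1.0f / (1.0f + std::exp(-sum));   // sigmoid
}
```

The appetite for GPU power comes entirely from the neuron count, not from the complexity of any single neuron.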
Bitcoin mining is no argument either. The time when bitcoins were easily found is over. Finding more always takes more time than the previous solutions; you can throw more processing power at the problem, but you will suffer the law of diminishing returns. The only people who tell you otherwise are those who intend to pull as much money as possible out of your pockets before mining no longer even justifies the energy cost of a computer running 24/7. So if you are mining just for fun, don't invest much in it and just take whatever you get with whatever you have.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
I do a LOT of programming for the GPU. I've been working on a project for about two years now where I adapted a large amount of code to work on both the GPU and the CPU. With the speed and core count of AMD's processors these days, we have found that the CPU version is just as fast as the GPU version. It's been a lot of fun and I've learned a ton of stuff. I also dabble with GPU stuff for my side projects, which tend to be graphics-oriented, and it's great for that too. My avatar image is an example.
One caveat about my statement on performance: comparisons were done between my home system and systems with Threadrippers (32 cores) and EPYC CPUs. My GPU is a 3090 and it's pretty fast. We have more systems on order with EPYCs and A100s, so I will have more to compare with, someday. We were supposed to get one of those in September, but they are no longer even giving us a delivery date, because EPYC CPUs are in very short supply.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
I specced out an Intel W-2255 with 10 cores, up from the 6-core Intel Xeon with an NVIDIA K4000 that I have now.
So you're sort of saying that I can dabble with an RTX A4000 and a 10-core CPU and do some neat things on a much lower tier of hardware than yours, but still have the capabilities you're working with? That 3090 is pretty strong, stout by 2x, and I just looked up the Threadripper (32 cores) and WOW! That's like $4200 just for the CPU and card, and I'm looking at $3433 total while scratching my head over spending another $308 for the card.
That's feedback I was looking for.
Thanks!
If it ain't broke don't fix it
Discover my world at jkirkerx.com
|
Just to clarify, the Threadripper and EPYC are CPUs on work machines. We have a couple of Titan RTXs for them but the 3090 is faster with twice the memory and twice the cores. My home machine has a Ryzen 5900 in it with twelve cores. I essentially bought a video card and they threw a computer in with it. Obviously, I am a big AMD fan. That is, of their CPUs. Their GPUs are OK but I use CUDA and they are not compatible with it, unfortunately.
Best of luck with your efforts.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
Nice card. Too bad the styling of it is so boring.
The difficult we do right away...
...the impossible takes slightly longer.
|