|
Mike Hankey wrote: but Widows only!
When I worked in Japan many moons ago, I went to a computer exhibition. The Microsoft booth had a giant sign - Microsoft Widows.
Ain't spell-check wonderful...
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
LOL, I can imagine there are quite a few.
That's how my first ex and I ended up exes.
|
|
|
|
|
I've recently created numerous ElectronJS projects.
Gratuitous Self Promotion ==> My CYaPass app is an ElectronJS app[^] which I successfully deployed to the Windows store.
Let me know, maybe I can help.
|
|
|
|
|
It's bad enough that multithreaded code is nondeterministic.
I propose that it is also meta-nondeterministic: you cannot even count on it to be non-deterministic.
When you need it to be unpredictable, the scheduler will inexplicably run your timeslices the exact same way, even when threads are executing on different cores, and even from reboot to reboot.
I'm stuck on creating an *example* simply because I cannot create a situation wherein two secondary threads appear to be in competition (with the third thread being the main application thread) on a dual core ESP32 running FreeRTOS. I can do it where one thread is in competition with the primary thread, but it's as if the scheduler is just a dog when it comes to scheduling between two threads on the same core or something. Grrr.
It's bizarre.
Real programmers use butterflies
|
|
|
|
|
Threads are like fireflies. They tend to synchronize into repeated patterns.
|
|
|
|
|
Unless you need them to.
Real programmers use butterflies
|
|
|
|
|
I thought that was women and their...never mind.
|
|
|
|
|
Hmm. No criticism here, just old fart woolgathering.
After a long time developing multithreaded applications (including UIs), I've come to some observations:
- Thinking "adding a separate thread" will fix a problem generally won't.
- If you pay attention to timeslices, you're doing it wrong.
- If threads care about sequence of execution, you are doomed to failure.
- Sleep(0) to force a context switch in Windows is A Bad Idea. Sleep(n > 0) is even worse.
- Indiscriminately adding synchronization primitives like critical sections, mutexes, semaphores, and so on without understanding what you're doing gives you a false sense of security.
- Modifying thread priorities is bad karma. Please don't.
Software Zen: delete this;
|
|
|
|
|
You're not wrong, at least in the general case; however:
1. This isn't about adding a separate thread in my case. I'm writing a library to let you use threads more easily than FreeRTOS otherwise allows.
2. This isn't true if you're writing a library that includes a thread pooler on a system with a primitive scheduler that's prone to starvation.
3. Threads don't care. Hell, my code doesn't care. But it's sure hard to demonstrate out-of-order execution and resyncing execution order for a *demo* when I can't get the execution order to scramble in the first place.
4. Yeah, but this isn't Windows; see also: craptastic scheduler.
5. If you're doing that to force a context switch, I'm not sure what's wrong with you. =)
6. Absolutely true. To that end my library provides you access to *none* of those. Seriously though, it offers you a message-passing system in the alternative.
7. See also: craptastic scheduler.
Real programmers use butterflies
|
|
|
|
|
Quote: Absolutely true. To that end my library provides you access to *none* of those. Seriously though, it offers you a message passing system in the alternative
Message passing is probably the only way to rein in the complexity of threads. Message queues are the easiest message-passing interface you will find.
I recently wrote a simple message-queueing library (based on pthreads) for a personal project and still managed to get the system to deadlock eventually (it only happened on Windows, due to a different scheduling algorithm[1]). After fixing it I realised that there was no value in a linked-list queue.
I implemented my message-queue library as a doubly-linked list, so that any thread taking a message off of the queue does not block any thread trying to put a message onto the queue. My intention was that threads removing messages from the queue would never hold a lock that threads posting messages to the queue would need (and vice versa).
Unfortunately all threads still have to lock the entire queue just to check if (head==tail) in case there is only one item in the queue (then that item is both the head and the tail).
This is the stupid way of doing this. Don't do what I did. Instead, do one of the following:
1. Use a fixed-length message queue (either fixed at runtime or fixed at compile-time). This removes quite a lot of the unnecessary complexity; you're going to lock the entire queue for any posting or removal, but you're going to do that anyway with linked-lists too, so no big deal.
2. Address the fixed-length queue using modulus of the length (with appropriate locks); this gives you a circular buffer with no if statements.
message_t messages[BUFLEN];
size_t write_index = 0, read_index = 0; /* each only ever increments */
...
messages[write_index++ % BUFLEN] = new_message; /* post */
...
message_t mymessage = messages[read_index++ % BUFLEN]; /* remove */
The problem with doing this is that it would automatically drop old messages (which, strategically, may be something you want, actually). Also, if you're not using C++ (no smart pointers) that's going to be a memory leak.
3. If your target platform and implementation allow (which they will), use a #define to create a CMPXCHG macro that expands to inline assembly for the cmpxchg opcode. You can then use that for a superfast single lock with a sleep (in nanoseconds or milliseconds) that gradually decrements by a fixed amount, so that no thread will starve. Doing this with your platform's mutex could be a lot slower than you think.
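For what it's worth, the compare-and-swap lock with a decrementing sleep described in point 3 can be sketched without per-platform assembly where a C++11 compiler is available, using std::atomic (names are hypothetical; this is an illustration of the scheme, not the library's actual code):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Spinlock whose losers sleep, with the sleep shrinking by a fixed step on
// each retry so that a late-arriving thread does not starve.
class BackoffLock {
    std::atomic<int> locked{0};
public:
    void lock() {
        long sleep_us = 1000;                       // start at 1 ms
        int expected = 0;
        while (!locked.compare_exchange_weak(expected, 1)) {
            expected = 0;                           // CAS overwrote it; reset
            std::this_thread::sleep_for(std::chrono::microseconds(sleep_us));
            if (sleep_us > 10) sleep_us -= 10;      // fixed decrement, floor at 10 us
        }
    }
    void unlock() { locked.store(0); }
};
```

Whether this actually beats the platform mutex is something to measure per target; the point is only that the CAS primitive itself need not be hand-written.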
And, in case you're wondering, I am intending to update my implementation of a message queue to use fixed-length queues (probably settable by the caller at runtime, with a sane default set at compile-time).
I'm not so happy about the cmpxchg thing, as my queue library is supposed to work on all pthreads platforms, and I'll need an implementation of cmpxchg for each platform. Not too much of a problem for things like ARM7 and later devices, as I'm an embedded dev and am surrounded by various boards, but still a problem for platforms I don't have easy access to (one of my open source projects recently received a bug report for a bug that occurs on z/OS).
[1] Which is why it's important to test code on multiple platforms, even if you never intend to ship on any of them - different platforms shake out different bugs.
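An aside on the head==tail collision described above: the classic two-lock queue (Michael & Scott, 1996) sidesteps it by keeping a permanent dummy node, so head and tail never point at the same live element and producers never share a lock with consumers, even when only one real item is queued. A minimal C++ sketch of that idea (hypothetical names, not the library being discussed):

```cpp
#include <atomic>
#include <mutex>
#include <utility>

template <typename T>
class TwoLockQueue {
    struct Node {
        T value{};
        std::atomic<Node*> next{nullptr};  // atomic: push/pop can race on the empty->one transition
    };
    Node* head;                            // dummy; head->next is the first real item
    Node* tail;
    std::mutex head_mtx, tail_mtx;
public:
    TwoLockQueue() { head = tail = new Node; }
    ~TwoLockQueue() {
        while (head) { Node* n = head; head = head->next.load(); delete n; }
    }
    void push(T v) {
        Node* n = new Node;
        n->value = std::move(v);
        std::lock_guard<std::mutex> lk(tail_mtx);  // producers only ever touch tail
        tail->next.store(n);
        tail = n;
    }
    bool pop(T& out) {
        std::lock_guard<std::mutex> lk(head_mtx);  // consumers only ever touch head
        Node* first = head->next.load();
        if (!first) return false;                  // dummy has no successor: queue is empty
        out = std::move(first->value);
        delete head;                               // old dummy retires;
        head = first;                              // first real node becomes the new dummy
        return true;
    }
};
```

The cost is one heap allocation per message, which is exactly the trade the fixed-length circular buffer avoids.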
|
|
|
|
|
1. I don't write my own concurrency-safe queues, because FreeRTOS has one and so does .NET, so I've not had the need.
2. Yeah, when I wrote a ring buffer in C# I did that.
3. I don't know a good reason to use that over, say, std::atomic. In my experience, anything that won't support std::atomic won't support atomic CMPXCHG operations at the CPU level anyway, at least not that way. With the ATmega2560 for example, IIRC it doesn't have one, forcing you to disable interrupts and then re-enable them after the operation is complete. Don't quote me on the mega's capabilities; I'm not an AVR expert. It might be a bad example.
Particularly, #3 is curious to me. Why wouldn't you use for example, std::atomic_int?
Is it because it's a C++ thing? I use C++ even on 8-bit machines with 4KB of RAM. I just severely limit my use of things like the STL to the bare minimum. std::atomic is one area I use. std::chrono is another. Why? Because writing cross-platform CMPXCHG and timer code is error prone, and I don't have access to all that hardware.
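To spell out what std::atomic buys here: the compiler emits the native primitive for the target (LOCK CMPXCHG on x86, LDREX/STREX loops on ARM) or a library fallback where the hardware has no CAS, with no per-platform #ifdef. A toy illustration (hypothetical function, not anyone's actual library code):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// n_threads workers each apply `iters` CAS-retried increments to a shared
// counter; the CAS loop guarantees no update is ever lost.
int cas_count(int n_threads, int iters) {
    std::atomic_int counter{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; ++t)
        workers.emplace_back([&] {
            for (int i = 0; i < iters; ++i) {
                int seen = counter.load();
                // On failure, compare_exchange_weak refreshes `seen`; retry
                // until our +1 lands without anyone racing us.
                while (!counter.compare_exchange_weak(seen, seen + 1)) { }
            }
        });
    for (auto& w : workers) w.join();
    return counter.load();  // always n_threads * iters
}
```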
Real programmers use butterflies
|
|
|
|
|
Yes, it's because it's a C++ thing, and my queueing library is a C thing (my mention of pthreads should have given it away).
|
|
|
|
|
I've used pthreads from C++, so it didn't give it away for me.
I was about to write a top level post pondering the overall utility of writing *new* code in C.
I love C, but I just can't think of any hardware I've coded for (and I code for little devices) that can't at least host a binary compiled with a C++ compiler.
Granted, I don't use *most* of C++ (like the STL) when I'm targeting an 8-bit monster, but classes and RAII still help with code management, for example.
Real programmers use butterflies
|
|
|
|
|
Quote: I was about to write a top level post pondering the overall utility of writing *new* code in C.
The overall utility is in the reuse. Writing a library in C means that it can be reused from Python, C++, Java, Perl, Rust, C#, PHP, Go, Tcl, Delphi/Lazarus, Pascal, etc. Libraries in C++ can be reused by making C-compatible wrappers around functions: not exposing classes, suppressing exceptions, typedefing structs, and prefixing all functions with 'extern "C"'. But then you lose a lot of the value of C++.
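The wrapper recipe just described looks roughly like this in practice: an opaque handle, extern "C" linkage, and a catch-all so no exception ever crosses into C. (All names here are hypothetical; the internal container happens to be a std::deque purely for illustration.)

```cpp
#include <deque>

// C-compatible surface: C callers see only an opaque struct and plain
// functions returning error codes.
extern "C" {
    typedef struct cqueue cqueue;               // opaque to C callers

    cqueue* cqueue_create(void);
    int     cqueue_push(cqueue* q, int value);  // 0 on success, -1 on failure
    int     cqueue_pop(cqueue* q, int* out);    // 0 on success, -1 if empty
    void    cqueue_destroy(cqueue* q);
}

// The C++ side: the handle is really a std::deque<int>.
static std::deque<int>* as_deque(cqueue* q) {
    return reinterpret_cast<std::deque<int>*>(q);
}

cqueue* cqueue_create(void) {
    try { return reinterpret_cast<cqueue*>(new std::deque<int>); }
    catch (...) { return nullptr; }             // never let an exception cross into C
}
int cqueue_push(cqueue* q, int value) {
    try { as_deque(q)->push_back(value); return 0; }
    catch (...) { return -1; }
}
int cqueue_pop(cqueue* q, int* out) {
    std::deque<int>* d = as_deque(q);
    if (d->empty()) return -1;
    *out = d->front();
    d->pop_front();
    return 0;
}
void cqueue_destroy(cqueue* q) { delete as_deque(q); }
```

The C side only ever sees the four declarations, which is exactly why the class, the templates, and the exceptions stay invisible (and unusable) across the boundary.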
Let's say you come up with something new and truly useful: a new uber-compression algorithm, or quantum-computer-proof elliptic curve cryptography. You go ahead and write it in Haskell, or Clojure, or Scala, etc. It definitely won't take off until someone clones it into a reusable library.
Right now we are in a period of computing where it is fashionable to rewrite everything (and also fashionable to pretend that C is the great evil). As far as I can tell, though, the new systems languages are taking a bite out of the C++ mindshare, not out of the C mindshare.
C will die the usual way: simply through attrition, as those who know it die off.
|
|
|
|
|
The simplicity of it for expressing algorithms one has developed is certainly a win, but honestly, I'd rather read it in C# than C if I were going to port it.
In practice, I've ported C# stuff to C++ on IoT things several times - everything from a thread synchronization library to a streaming JSON parser with a less-than-4KB footprint not tied to document size. C# is just easy to read, IMO, but that's just one dev's opinion based on one dev's experience. Some of that could have easily been made into C.
I don't think wrapping things with extern "C" loses you valuable C++, but exporting C also is kind of like writing new code in C, so it wasn't something that was on my radar when I responded. Importing code that's compiled in C is another matter.
I don't think C is evil, so maybe I'm just not fashionable. I'm just seeing less of a point for it these days.
The beauty of C++ is you *don't* have to rewrite all that C code. You can use it at the source or binary level in your C++ apps.
Real programmers use butterflies
|
|
|
|
|
Quote: The beauty of C++ is you *don't* have to rewrite all that C code. You can use it at the source or binary level in your C++ apps.
But that was my point, sort of... anything I write that I want to reuse has to be written in C (or, lately, Rust). If I write it in C I can use it from anywhere and any language. If I write it in C++ I can't. If I write it in C# I have even fewer opportunities for reuse.
|
|
|
|
|
I think we're talking about two different things because time after time I have a much easier go of porting C# to C than the other way around.
Real programmers use butterflies
|
|
|
|
|
Quote: I think we're talking about two different things because time after time I have a much easier go of porting C# to C than the other way around.
You're correct, I am not talking about porting, I am talking about using. Anything you write in C# can only be used inside the .NET runtime. If you want to use it elsewhere you have to port it.
With C, and some care, anything written can be used by any other language without porting... like libpng (usable by all languages without porting), or libzip, or almost anything else on my system (yours too, probably).
A good example is SQLite (the most-used and most-deployed library in the world, according to statistics from MS): if it were written in C#, or C++, or anything other than C, it would not be as useful as it is, because it would not be usable from all languages.
|
|
|
|
|
That's weird, because my JSON parser was ported from C# and it doesn't require the .NET runtime.
Same with my threading and synchronization library (also originally written in C#).
And the only place you can run C without porting is C++, and even that is not always true.
Furthermore, as soon as you declare
int* foo;
or any "array" of indeterminate size in C,
you've pretty much nixed any dream of making it work on anything without pointers - "without porting".
Sorry, but what are you even talking about?
Real programmers use butterflies
|
|
|
|
|
Like I said, I'm not talking about porting. Take, for example, SQLite. You can access it from any language without porting.
|
|
|
|
|
As long as you're willing to write a wrapper for any language that isn't C++.
If you don't count that.
Funny how, if I don't count the work it takes to do something, it suddenly doesn't take any work at all.
Real programmers use butterflies
|
|
|
|
|
Quote: As long as you're willing to write a wrapper for any language that isn't C++.
If you don't count that.
Generally, since it is usually trivial to write those wrappers, you don't count that (also, I think you mean "C", not "C++"). Porting SQLite so that Python programs can use it would mean re-implementing a few hundred thousand lines of code and a decade or so of manpower. Writing the wrappers takes ~2,000 lines and can be done in a weekend for Python.
If SQLite were written in C#, your only option would be to rewrite a few hundred thousand lines of code; you don't get the option of sitting down for a weekend and writing the interface for it.
Quote: Funny, if I don't count the work involved it takes to do something, how it suddenly doesn't take any work at all.
It's just a preference I have when I write software - I prefer to write it only once and never have to port, because I prefer reuse. You obviously have a different preference. More power to you, but stop pretending that a weekend's work writing wrappers is equivalent to a few decades by experts in the field (Dr Hipp is the main author of SQLite, and a recognised database expert).
If you feel that reuse is useless, fine, but stop pretending it doesn't exist.
|
|
|
|
|
I didn't say reuse is useless. In fact, all I've maintained is that you're wrong in trying to paint C as write once, use anywhere. It's not.
Your code still has to interface with other languages, and other languages do not in fact speak extern "C" out of the box unless they are C++.
I can write code just as easily in C++ that exports the exact same way yours does in C.
But I could just as easily expose something as COM, using some other programming language, and other languages that spoke COM, including C, could use it.
There's nothing magic about C. It's yet another language. It doesn't just interface with everything out there.
Real programmers use butterflies
|
|
|
|
|
Quote: I didn't say reuse is useless. In fact, all I've maintained is that you're wrong in trying to paint C as write once, use anywhere. It's not.
No, all you've done is say that you can port stuff, which is irrelevant.
Quote: I can write code just easily in C++ that exports the exact same way you do in C.
But I already said that.
Quote: I was about to write a top level post pondering the overall utility of writing *new* code in C.
Libraries in C++ can be reused by making C-compatible wrappers around functions, not exposing classes, suppressing exceptions, typedefing structs and prefixing all functions with 'extern "C"'. But then you lose a lot of the value of C++.
Except you thought I was talking about porting :-/
Quote: But I could just as easily expose something as COM, using some other programming language, and other languages that spoke COM, including C, could use it.
I dunno, last I checked COM didn't work on any of the systems I target. You live in an all-Windows world, don't you?
Quote: There's nothing magic about C. It's yet another language. It doesn't just interface with everything out there.
Maybe, but it interfaces to more systems than any other language. Rust is a new possibility that can somewhat do the same thing, but so far it still supports fewer systems than C.
A good example of C being foundational in almost all software is the recent trouble over the cryptography library in Python - the dependency (which was in C) was rewritten in Rust, and that broke multiple distributions that did not run on x86/64.
|
|
|
|
|
I'm pretty sure I know what I said, but just in case I went back and looked at what I wrote, and indeed what I said was what I said. Since you've reduced yourself to lying about me and what I've said, we're done here. I just don't have the stomach to watch someone get so angry they humiliate themselves like that.
Real programmers use butterflies
|
|
|
|
|