|
You seriously called me a "twat"?
What the hell is wrong with you? You are a child. You can't even code in C or C++, and you talk about Delphi like that's a flex. It's a joke. You're a joke, and you're a belligerent clown. I've reported your account because this isn't the first time you've been abusive and hostile to other people.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
Quote: I've reported your account because this isn't the first time you've been abusive and hostile to other people.
I cannot argue that, as it is true. I do not have much patience.
Quote: You are a child
You meant "childish".
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
I would have been happy to debate this with you.
That changed when it became clear to me that you weren't interested in actually debating anyone.
You came here for a fight. You came here because you wanted to abuse other people.
You have some issues, and you are making them the problem of other people here.
That's not cool.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
Not taking sides, but your posts are so much like the arguments I used to hear from my fellow Navy nucs. I kind of miss it, but we Navy nucs are an odd lot.
|
"and anyone who would doesn't use an interpreter but a compiler. No, not a byte code compiler, that's just a fancy marketing sh*t for an interpreter that doesn't compile to native."
As far as my limited experience goes, C#, Java, and JavaScript use an interpreter for their regexes.
And I know Perl does.
I suspect JavaScript, like Perl, cannot be anything but interpreted (in most usages).
At least Java might at some point go native. The same could be true of C#.
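For .NET specifically, the default engine is indeed an interpreter, but you can opt into IL compilation per pattern with RegexOptions.Compiled - a small sketch:

```csharp
using System;
using System.Text.RegularExpressions;

class RegexDemo
{
    static void Main()
    {
        // Default: the pattern is executed by the regex interpreter.
        var interpreted = new Regex(@"\d+");

        // Opt-in: the pattern is compiled to IL when constructed.
        // Construction costs more, but repeated matching is typically faster.
        var compiled = new Regex(@"\d+", RegexOptions.Compiled);

        Console.WriteLine(interpreted.Match("abc123").Value); // 123
        Console.WriteLine(compiled.Match("abc123").Value);    // 123
    }
}
```

Newer runtimes (.NET 7+) also offer the [GeneratedRegex] source generator, which moves that compilation to build time entirely.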
|
Any other day, this would go. Today is not that day.
It isn't 30% even if all you do is enumerate.
A GC loses performance by definition; that is what the beast is made of. Show me one example of .NET that outperforms Delphi. Where, oh where, would an interpreter with a lot of libraries outperform a compiled native language?
You're a .NET dev? Then you write in VB7, complete with a GC and a runtime interpreter. Your code will be faster than anything I write in a real compiler, innit? And your 20 ms on 60 is gonna make a 30% if it is only enumerating, because all that code does is enumerate.
I am not even amused a bit. Obfuscation to save a few ms. Yeah, that will help, really. You saved the world, but the rest of us are going to use enumerations because it is a damned good tradeoff for those that only know .NET and cannot handle pointers. It makes code readable, which yours damned ain't. Great, you reduced some code by 20 ms. Most code takes longer.
Was there anything else you'd like to whine about, or are we done?
BPFH
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
I love GC for jobs that allocate 14 Gigs of memory and then finish.
Poof!
14 gigs of memory returned to the OS very quickly. No need to clean the heap.
|
I mean, if you have it, why not, with a modern damned paged vmem system and gobs of RAM on a modern machine? Spare your program having to garbage collect as often.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
honey the codewitch wrote: Spare your program having to garbage collect as often.
So you prefer to collect tons of garbage before doing anything about it? Then you may have some job to do when space runs out ...
There's nothing wrong with having 'enough' RAM, but if there is any risk at all of having to garbage collect (and if you use heap allocation at all, there is), I would much prefer to do it in small steps!
If you have got plenty of RAM, I'd much rather close my ears to all the whining about the internal fragmentation of buddy allocation (the only serious argument against buddy that I have encountered), to have a very fast allocation/deallocation mechanism that also lends itself to incremental garbage collection.
Religious freedom is the freedom to say that two plus two make five.
|
It depends on what I'm doing. In some scenarios, such as when you don't need to garbage collect until the very end (as CLI tools often do), then yes, absolutely: you've finished doing useful work and you don't need to make the user wait for the collection. (Even if the process is still running at that point, you can have written out all of your output and everything.)
I write a lot of command line tools that do complicated things, like Deslang: From Code to CodeDOM and Back[^] that absolutely benefit from doing things this way.
I should add that modern GCs collect in the background, and that should perhaps influence one's decision: it's probably less expensive overall to do one large collection than a bunch of little ones, particularly when asynchronicity is involved.
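For what it's worth, .NET exposes some of those knobs through System.Runtime.GCSettings. A small sketch (actual behavior varies with runtime configuration, and the latency mode is a hint, not a guarantee):

```csharp
using System;
using System.Runtime;

class GcModeDemo
{
    static void Main()
    {
        // Whether the server (vs. workstation) collector is in use.
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");

        // Interactive means background (concurrent) gen-2 collection is on.
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");

        // Ask the collector to avoid blocking collections where it can.
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
        Console.WriteLine($"Latency mode now: {GCSettings.LatencyMode}");
    }
}
```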
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
If you do buddy with a set of freelist heads, one for each size, and your buddy combiner orders the freelist, you have an extra benefit of locality: most accesses would go to the lower end of the heap, making better use of virtual memory (less paging).
A background GC could unhook a freelist (maybe leaving a couple entries in the list for use while the GC was working), returning with one shorter list for the original freelist and one list of combined buddies to be put into the next higher size freelist.
The head end of the freelist may be rather unordered - this is where all the allocation and freeing is taking place. If the list is long - it hasn't been emptied for quite some time - the tail end may be perfectly sorted after the previous GC/combination round. If you do the sorting with, e.g., Smoothsort, handling the already-sorted part has complexity O(n), so most likely the long freelist will not require much effort.
You find buddies by traversing a sorted list, so the list of buddy pairs will also be sorted. If the next higher freelist is also mostly sorted, all buddy pairs can be inserted into it in a single traversal.
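The pairing test itself is cheap bit math: two blocks of size 2^k are buddies exactly when their offsets (relative to the arena base) differ only in bit k. A hypothetical sketch, names mine:

```csharp
using System;

class BuddyMath
{
    // Offset of the buddy of a block at 'offset' with size 2^order.
    // Buddies differ in exactly the bit corresponding to the block size.
    static long BuddyOf(long offset, int order) => offset ^ (1L << order);

    // Offset of the merged block after combining a buddy pair:
    // the lower of the two offsets, i.e. the size bit cleared.
    static long MergedOf(long offset, int order) => offset & ~(1L << order);

    static void Main()
    {
        // Two 64-byte blocks (order 6) at offsets 0 and 64 are buddies.
        Console.WriteLine(BuddyOf(0, 6));      // 64
        Console.WriteLine(BuddyOf(64, 6));     // 0
        // The block at offset 192 pairs with 128, not 64.
        Console.WriteLine(BuddyOf(192, 6));    // 128
        Console.WriteLine(MergedOf(192, 6));   // 128
    }
}
```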
I would do real timing tests with a synthetic heap load (modeled after a relevant usage scenario) to see if it really is worth the resource cost of an asynchronous GC thread - strongly suspecting that a finely tuned incremental but synchronous buddy manager can do it both at a lower total resource cost and with such small delays that it would be a much better solution.
Final remark:
"you've finished doing useful work and you don't need to make the user wait for the collection". In most systems, each process has its own heap. Multiple processes allocating from one common global heap requires a lot of resource-consuming synchronization. Most CLI programs are run in their own processes, so when they complete, no one cares what their heap looks like at that time. There is no reason to do any garbage collection then; the entire data segment holding the heap is released en bloc.
In an embedded system, you often have a single systemwide heap. But few embedded systems have CLI interfaces for running arbitrary programs that start up and terminate as a function of user operations. Even if the embedded system has some sort of UI, user actions are usually limited to activating specific built-in operations in the embedded code, not separate CLI-oriented programs. But of course, there may be exceptions.
Religious freedom is the freedom to say that two plus two make five.
|
Welcome to my bandwagon. I've been saying this for ages. Don't use foreach unless you have to -- or where it doesn't matter.
Having said that... I hypothesize that foreach has improved. To test this hypothesis, last summer (?) I was testing and measuring some comparisons and I didn't see much difference. I was unable to form conclusions at that time because I wasn't convinced that the tests were valid.
I'll have another look later.
P.S.
And besides, you mean iteration, not enumeration -- I blame Microsoft for misnaming the thing.
modified 19-Jan-24 11:05am.
|
I've said it before, but this is one of the few times I've used it in a critical codepath.
PS: I'm using "enumeration" because we're talking about .NET. If I started talking about "iterators", in .NET parlance that's a C# compiler feature.
"Iterators" and "iterating" are terms I'd use if we were talking about C++. You may not agree with my terminology, but I tend to choose it with some deliberation.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
modified 18-Jan-24 21:50pm.
|
You "think"?
We do not think, we measure. If you say that you think, you think that you think.
Measure or shut the elephant op.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
Of course it's slower. The IEnumerable interface expects a class with methods you have to call to track which item in the IEnumerable implementor you're looking at.
Calling methods adds overhead - plenty of it compared to the overhead of an index variable, which you know is just pointer math.
IEnumerable being slower is not surprising at all. Just don't use it where you don't have to, and that includes LINQ, because it's heavily dependent on the IEnumerable interfaces.
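For reference, here is roughly what the compiler turns a foreach over an IEnumerable&lt;T&gt; into, next to the indexed loop it's being compared against - a sketch, not the exact generated code:

```csharp
using System;
using System.Collections.Generic;

class EnumerationDemo
{
    static void Main()
    {
        IEnumerable<int> items = new List<int> { 1, 2, 3 };

        // foreach (var x in items) { ... } compiles to roughly this:
        // an enumerator object is obtained (through the interface, even
        // List<T>'s struct enumerator gets boxed), and two member calls
        // (MoveNext and Current) happen per element.
        using (IEnumerator<int> e = items.GetEnumerator())
        {
            while (e.MoveNext())
            {
                int x = e.Current;
                Console.WriteLine(x);
            }
        }

        // Indexed access over a known array is just pointer math:
        int[] arr = { 1, 2, 3 };
        for (int i = 0; i < arr.Length; i++)
            Console.WriteLine(arr[i]);
    }
}
```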
|
IList<T> uses methods as well - virtual calls and everything. There is no direct array access through IList<T>, afaik.
So the primary difference between IEnumerable<T> and IList<T> is the creation of a new object to traverse the former.
Microsoft appears to believe that object creation is very cheap in .NET, and everything I've read from them suggests they practically think it's free. It's not.
That was a 30% gain in performance.
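To sanity-check numbers like that on your own machine, a minimal Stopwatch sketch (names mine; the gap, if any, depends on runtime version, JIT, and workload - measure a release build before drawing conclusions):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class IndexVsEnumerate
{
    static void Main()
    {
        const int n = 1_000_000;
        IList<int> list = new List<int>(n);
        for (int i = 0; i < n; i++) list.Add(i);

        long sum1 = 0, sum2 = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < list.Count; i++) sum1 += list[i];   // indexed
        sw.Stop();
        var indexed = sw.Elapsed;

        sw.Restart();
        foreach (var v in list) sum2 += v;                      // enumerated
        sw.Stop();
        var enumerated = sw.Elapsed;

        Console.WriteLine($"indexed:    {indexed}  sum={sum1}");
        Console.WriteLine($"enumerated: {enumerated}  sum={sum2}");
    }
}
```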
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
Well, the first call (GetEnumerator) definitely has a penalty, but each retrieval after that (each call to the enumerator) may be as quick as an indexed access... or it may not be.
Anyway, I agree -- if you know you're iterating across an array, use array access instead.
And don't use Linq.
|
Just to be difficult, I'd argue that an enumerator - even a special-cased one like the implementation on System.String - will be slower than indexed access.
The reason is that it's necessary to execute an additional call to MoveNext() for each advance, and then a call to Current to get the actual value, whereas with indexed access you are simply incrementing a value.
I haven't benchmarked it, but I'd be very surprised if this was not the case.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
Upvoted and sorry.
It is a decision. Do you want the best in speed? Or do you just need to get things done and be readable?
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
You did the timing in debug mode, I assume? I always suspected that the compiler might optimize such things at release time, but no?
|
Nope, that was release build. My code actually warns me if I run the benchmarks in debug, because I do it by mistake so often.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
If I don't care about access performance in general, I will use IEnumerable<T> if I can, rather than a collection.
The reasons: (A) I don't like to impose functionality I'm not going to use, and enumerating a collection is the same as enumerating with IEnumerable. (B) Lazy loading isn't really doable with collections in most circumstances because of the presence of Count. (C) Collections provide methods to modify them. I certainly don't like suggesting I will modify something I won't, so if I can take the immutable version for a read-only function I will. (D) Unbounded collections are not supported by .NET collections; you must know the count ahead of time.
My reason for switching to IList was improved indexed access performance. ICollection doesn't provide that.
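Points (B) and (D) are where iterator methods shine: a yield return sequence can be lazy and even unbounded, which no Count-bearing collection can be. A minimal sketch (names mine):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LazySequences
{
    // An unbounded sequence: impossible to expose as a collection,
    // because a collection must be able to report a Count.
    static IEnumerable<int> Naturals()
    {
        for (int i = 0; ; i++)
            yield return i;   // produced on demand, one at a time
    }

    static void Main()
    {
        // The caller decides how much of the sequence to realize.
        foreach (var n in Naturals().Take(5))
            Console.WriteLine(n);   // 0 1 2 3 4
    }
}
```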
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|