|
Rather certain that curved monitors have a problem with sharing. The more people who need to look at the same screen at the same time, the more of a problem that becomes.
I also like using two monitors for two very specific reasons. First, because it allows for a backup screen. Second, 'full screen' means there is still another screen to put information on.
One can have two curved monitors, but I suspect that the range of eye motion for using both of those is going to require turning one's head to see all of it. And setting it up like that probably takes quite a bit away from the aesthetic reasons for having it in the first place.
|
|
|
|
|
Mom 1: Do you do elf on the shelf for your kids?
Mom 2: No, I am a Harry Potter fan. I teach my kids to be against the enslavement of house-elves.
Sooo... I guess Dobby is a free elf... on your shelf...
|
|
|
|
|
|
A chilling experience indeed!
|
|
|
|
|
Ooomph Ooomph Ooomph
I rarely click these YT links. Glad I did this time 🎄
"If we don't change direction, we'll end up where we're going"
|
|
|
|
|
If you feel like clicking more links to find similar stuff, I came across that song while plumbing YouTube's auto-generated list for 'similar to Jamie xx'. There's some other good stuff in there. Also, the group 'You Man' has some more chill stuff.
|
|
|
|
|
What is up with customers scheduling a product launch for New Year's Eve? I get that people have to work on holidays, but this seems unnecessary. Instead of having fun with a small group (thanks, COVID) of friends and family, I'll be hanging out at home, on call, waiting for a product launch. In my opinion, it should have been done after hours on a non-holiday.
Am I being selfish?
Hogan
|
|
|
|
|
I went to a co-worker's wedding New Year's Eve of 2000. No matter how your 'launch' goes, it will be better than the dud that that party was.
|
|
|
|
|
Not at all. Holidays aren't just for managers, though sometimes they forget that. The upside is that you might get (should get!) time-and-a-half or even double time for working on a holiday. At the very least you should get time off in lieu. Neither of which can adequately compensate for lost time with your nearest and dearest, but it's better than a poke in the eye with a sharp stick.
Keep Calm and Carry On
|
|
|
|
|
We had a client that insisted that a project go live before year end, even though we'd all be on vacation, without support staff, etc. Just because the project had to be done this year.
|
|
|
|
|
Ah yes, the year-end Holy Grail: "rev rec", or revenue recognition. Even though you haven't received the money (and may not for months), the customer commits to paying you, and you can therefore 'recognize' the revenue this year rather than when you actually deposit the money.
Software Zen: delete this;
|
|
|
|
|
I posted a while back that I had purchased the "Apollo Guidance Computer" book, and now I have a guilt complex.
They put a man on the moon with 36K of fixed memory and 2K of read/writable memory.
I just sent for another 32GB upgrade for my machine, bringing its total to 64GB.
BTW, good book.
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
JaxCoder.com
|
|
|
|
|
Mike Hankey wrote: They put a man on the moon with 36K of fixed memory and 2K of read/writable memory.
I'm sure the rockets helped.
*hides*
Real programmers use butterflies
|
|
|
|
|
Looks like rockets are back on the menu, boys...
|
|
|
|
|
They didn't use Windows or Chrome
|
|
|
|
|
In that case, the BSOD would have taken on a new meaning.
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
JaxCoder.com
|
|
|
|
|
And today I am viewing this site in Firefox, which is running five processes and using a total of about 300MB of memory.
All righty then.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
Seems like applications use memory with no regard for limits or management.
Why should Firefox use 1GB of memory with 2 tabs open?
Why does AVG have 5 processes and use ~1GB of memory?
Why does Windows Explorer use 77.2MB while TurboCAD uses 60.9MB?
My memory usage with only Firefox and TurboCAD open is at 48%; that is why I am upgrading my memory.
But I think I'll be wasting my time and money, because as soon as I upgrade, the system will use more!
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
JaxCoder.com
|
|
|
|
|
Part of that is for performance reasons. A lot of applications these days preallocate GOBS of RAM so that they don't put a lot of burden on the process heap block list, and allocations are quicker. .NET is a very good example of this, since it's garbage collected, but browsers do it too.
The more RAM you have, the more they will potentially use, probably up to a point, simply because it's there, and preallocating it is faster for overall performance. They've probably perf-tested this approach on modern machines and found that the swap burden isn't that large in practice, even with all of the allocations.
Bottom line is, it's not total RAM usage you need to care about so much as allocations per second vs. deallocations per second - your "memory pressure".
Swap sort of muddies the water.
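To make that concrete, here's a minimal sketch of the preallocation idea (a hypothetical illustration, not any particular app's code): grab one big block up front and hand out fixed-size chunks from a free list, so individual allocations never touch the process heap.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical FixedPool: one big up-front allocation, then O(1) handouts.
// Individual allocations cost a few instructions instead of a heap walk.
class FixedPool {
public:
    FixedPool(std::size_t chunkSize, std::size_t chunkCount)
        : storage_(chunkSize * chunkCount), chunkSize_(chunkSize) {
        // Thread every chunk onto the free list at startup.
        for (std::size_t i = 0; i < chunkCount; ++i)
            freeList_.push_back(storage_.data() + i * chunkSize_);
    }

    void* allocate() {                        // O(1): pop the free list
        if (freeList_.empty()) return nullptr; // pool exhausted
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }

    void release(void* p) {                   // O(1): push back onto the list
        freeList_.push_back(static_cast<char*>(p));
    }

private:
    std::vector<char> storage_;               // the one big up-front allocation
    std::size_t chunkSize_;
    std::vector<char*> freeList_;
};
```

The RAM is "used" from the OS's point of view the moment the pool is constructed, even if no chunk has been handed out yet - which is part of why Task Manager numbers look so scary.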
Real programmers use butterflies
|
|
|
|
|
Yeah that makes sense.
But it"s my memory and they can't have it.
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
JaxCoder.com
|
|
|
|
|
I no longer find myself worrying about my memory usage. I went from an 8GB system to a 32GB system, and now I'm cruising along.
The thing about the browsers is that the web is really "top heavy" these days. The clients are so fat - you can flippin' run Windows 95 in a browser these days.
So it's kind of the web's fault, I think, that resource usage exploded.
My browser has always been the biggest resource pig on my machine overall, for what it does. VS doesn't even really pig out the way Chrome will.
Real programmers use butterflies
|
|
|
|
|
I do a LOT of CAD, Photoshop, and other memory-intensive work (at times), so I need the extra.
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
JaxCoder.com
|
|
|
|
|
honey the codewitch wrote: Bottom line is, it's not total RAM usage you need to care about so much as allocations per second vs. deallocations per second - your "memory pressure"
Heap allocation/freeing has gained a reputation for being costly. It certainly doesn't have to be, especially if you have a large heap.
If you can afford 25% internal fragmentation, you can do buddy allocation in a dozen instructions or fewer (including call overhead) - there have been machines providing it as a machine instruction. Freeing is even simpler than allocation. To reduce internal fragmentation, you can use Fibonacci numbers as block sizes rather than binary sizes, but buddy combining becomes slightly more complex.
Of course, every now and then you run out of larger blocks and must combine buddies. But first: the limited number of block sizes means that the free lists each cover a range of sizes - e.g. any request between 257 and 512 units allocates from, and frees to, the same list. If this free list is empty but bigger blocks are available, cutting one in two is trivial, even if you have to go more than one level up.
Only if all higher lists are empty do you need to combine. If you want to distribute that cost over time, you combine only up to the requested size. That is likely to give you a lot of free blocks of this and smaller sizes for later requests. With binary buddies, combination isn't that costly. If you can require the heap to start at an address that is a multiple of the largest block size, and you work at assembler or high-level-assembler (i.e. C or C++) level, you can really fine-tune it by bit-masking addresses directly to find buddies.
If you expect buddy pairing to be a fairly uncommon event (which it usually is), you will free to the head of the free list, which is super cheap. If you expect pairing to be common, you can make a trade: when freeing, scan the free list from the head until you find a larger block address. If you have to combine smaller buddies up to that size "all the time", it suggests that the free list typically is rather short, and so is the search for a higher block address. Freeing becomes slightly more costly, but it keeps the free list sorted, so you can combine all buddies of that size in a single sweep through the list.
It is possible to set up artificial test scenarios where buddy allocation causes a lot of splitting and combining (e.g. oscillating between filling the heap with minimum-size blocks, freeing them, and filling it with max-size blocks). Under a real-world load, if you are not starved on total heap size, you won't see much need for combining, and allocate/free is very fast.
The major disadvantage of buddy allocation is the internal fragmentation - for binary buddies, 25% if all request sizes are equally probable, i.e. if requesting, say, 514 words is as likely as requesting 512 words. In the stuff I code, 512 is a much more likely size than 514, but YMMV. If your typical allocation size is a binary size plus a small header (which really is a worst case for binary buddy!), consider Fibonacci buddy.
If you have super-tight requirements for allocation latency, then at the uppermost level to combine (i.e. blocks of half the requested size, with binary buddies), you need not combine them all: finding the first buddy pair is enough to satisfy the request. You can leave combining the remaining ones at that level for some later time.
If your application causes such a memory pressure that binary buddy allocation takes a noticeable fraction of the CPU power, then I'd like to learn about it! In an embedded environment I'd expect (internal and external) fragmentation in a tiny heap to be a far more significant issue. However: Make sure not to underestimate the internal and external fragmentation of a best or perfect fit allocation scheme! Buddy will probably be better externally and worse internally - but maybe less worse than you'd expect, compared to best/perfect fit algorithms.
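For readers who want to see the shape of this, here's a compact binary-buddy sketch (a hypothetical illustration, not the parent poster's code - real implementations usually thread the free lists through the free blocks themselves rather than using separate containers). The XOR on the block's offset is the bit-masking trick mentioned above; freeing takes a size argument, like C++'s sized operator delete, so no per-block header is needed.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <set>
#include <vector>

// Hypothetical BuddyAllocator over a fixed arena of offsets.
// Block sizes are MIN_BLOCK << order; a block's buddy is its offset
// XOR-ed with its own size.
class BuddyAllocator {
public:
    static constexpr std::size_t MIN_BLOCK = 64; // smallest unit handed out
    static constexpr int MAX_ORDER = 10;         // arena = 64 << 10 = 64 KiB

    BuddyAllocator() : freeLists_(MAX_ORDER + 1) {
        freeLists_[MAX_ORDER].insert(0);         // one maximal free block
    }

    // Returns an offset into the arena, or SIZE_MAX if exhausted.
    std::size_t allocate(std::size_t bytes) {
        int order = orderFor(bytes);
        // Smallest non-empty free list at or above the wanted order.
        int o = order;
        while (o <= MAX_ORDER && freeLists_[o].empty()) ++o;
        if (o > MAX_ORDER) return SIZE_MAX;      // out of memory
        std::size_t off = *freeLists_[o].begin();
        freeLists_[o].erase(freeLists_[o].begin());
        // Split downward, returning each upper half to its free list.
        while (o > order) {
            --o;
            freeLists_[o].insert(off + (MIN_BLOCK << o));
        }
        return off;
    }

    // Sized free: combine with the buddy as long as the buddy is also free.
    void release(std::size_t off, std::size_t bytes) {
        int o = orderFor(bytes);
        while (o < MAX_ORDER) {
            std::size_t buddy = off ^ (MIN_BLOCK << o); // XOR finds the buddy
            auto it = freeLists_[o].find(buddy);
            if (it == freeLists_[o].end()) break;       // buddy in use: stop
            freeLists_[o].erase(it);
            off = off < buddy ? off : buddy;            // merged block's offset
            ++o;
        }
        freeLists_[o].insert(off);
    }

private:
    static int orderFor(std::size_t bytes) {
        int o = 0;
        while ((MIN_BLOCK << o) < bytes) ++o;  // round up to a block size
        assert(o <= MAX_ORDER);
        return o;
    }
    // std::set keeps each free list sorted by offset, as discussed above.
    std::vector<std::set<std::size_t>> freeLists_;
};
```

Note how both paths are short loops over at most MAX_ORDER levels, and in the common case (buddy not free, or a block of the right size already on a list) they terminate after one or two steps.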
|
|
|
|
|
First, I'm talking PC applications. Embedded is a whole different animal, particularly here, since part of what I wrote is fundamentally tied to the presence of virtual memory.
Also you're talking about an unmanaged environment where you can handle your own heap, and you've provided one mechanism for doing it.
.NET uses another. It allocates a huge block as a memory pool. Allocations amount to incrementing a pointer. That's it. You pay for that on the backend through garbage collection.
A scheme just as valid as yours, on a PC with (by today's measure) a reasonable amount of RAM and decent swap.
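If it helps, here's a rough sketch of that bump-pointer style (a hypothetical illustration, not .NET's actual allocator): one big preallocated region, each allocation just advances a pointer, and reclamation is deferred to a collector.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical BumpArena: allocation is a single pointer increment.
// There is no per-object free; reclamation is the collector's job.
class BumpArena {
public:
    explicit BumpArena(std::size_t bytes) : buf_(bytes), next_(0) {}

    void* allocate(std::size_t bytes) {
        bytes = (bytes + 7) & ~std::size_t{7};           // 8-byte alignment
        if (next_ + bytes > buf_.size()) return nullptr; // would trigger a GC
        void* p = buf_.data() + next_;
        next_ += bytes;                                  // the entire cost
        return p;
    }

    void reset() { next_ = 0; }  // "collection", crudely: recycle everything

private:
    std::vector<char> buf_;      // the huge block allocated up front
    std::size_t next_;
};
```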
And whether or not you can do it your way, applications may or may not do it that way. I'd argue most applications written in C++ probably don't do much custom heap stuff at all, just relying on the C and C++ libraries and however those do it.
And then there's the fact that so much of today's code is "managed" (read: garbage collected).
Please keep in mind my position isn't about how you "should" manage the heap. We could argue about that for days.
My position is about the hows and whys of the way it's done on PCs now. This isn't about ideal situations, it's about the present situation, if that makes sense.
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: what I wrote is fundamentally tied to the presence of virtual memory
I'd be happy to learn about that connection to virtual memory. I do not immediately see the direct connection between real/virtual memory and heap allocation strategy. Extreme cases may be imagined, as may super-fine-tuning, but I have a hard time seeing the connection in the general case - certainly if you claim to be talking about PC applications.
honey the codewitch wrote: Also you're talking about an unmanaged environment where you can handle your own heap, and you've provided one mechanism for doing it.
A managed system also allocates and frees memory from a pool, even if the freeing is a result of a garbage collection process. A managed system may certainly use buddy allocation for providing managed memory.
Also, if you do not trust the management of managed memory to be as efficient as you would like it to be: Feel free to request a huge block of memory at startup, and then do your own, presumably more efficient, memory allocation from that block.
Actually, that is exactly what I did a few years ago when playing around with buddy allocation mechanisms. The code I used for testing/timing my buddy allocation algorithms was written in C#, rather than assembler/C/C++, but they did confirm my ideas that unless you are completely starved on memory space, buddy allocation is a very efficient allocation strategy from a performance point of view.
Maybe you won't be able to trap memory allocations, e.g. if you have no write access to the source code to make adjustments. It takes some sophistication to go into the dotNet memory manager, modifying it to use a different allocation strategy. I never did. But I never rejected a strategy on the grounds that "dotNet doesn't do it that way, and I don't know how to force it to change its ways".
You may complain about the hows and whys of the way it's done on PCs now. This isn't about ideal situations, it's about the present situation, if that makes sense. (Redskin argument: If you don't like it the way it is done in America, go back to where you came from!)
You can handle your own objects. You don't have to point any 'It's his fault!' fingers at 'managed memory' or anything else. If your application needs a well-managed heap, then you allocate a heap, and you manage it well. My thesis is that unless you are severely cramped for space in that heap, buddy allocation is one of the most efficient ways to handle that heap of yours.
It might be, even for the underlying managed memory management system. Probably you won't have any opportunity to affect that. So if your managed memory system is too slow, then you request a memory buffer, and you manage it yourself using better methods.
Your machine may run scores of applications, each of them thinking that it can manage memory better than the common allocator. So they all allocate huge buffers to do their own administration. Maybe, all things considered, it would have been better to rely on the common management, shared by all users. But maybe you put higher emphasis on 'I did it my way' than on total system performance.
honey the codewitch wrote: My position is about the hows and whys of the way it's done on PCs now. This isn't about ideal situations, it's about the present situation, if that makes sense.
If you think alternate buddy allocation strategies are unrelated to "the way it's done on PCs now" because 'that's not how they do it', then you're of course welcome to reject the alternatives, even if they would be better.
|
|
|
|
|