|
Member 14968771 wrote: You are welcome to say / state your opinion.
But you are plain wrong.
Wearing a badge at school may not prevent much, but it is better than endless
discussions about who is to blame after the next mass shooting (screwed-up police response) ANYWHERE. How could I have been so blind!?
Wearing IDs at school is obviously the one and only solution to school shootings, which just so happen to happen in the only country in the world where you get a free gun with your haircut.
Just kidding, I still don't get it.
I don't think the identity of the shooter in a school shooting was ever an issue, so why would you need IDs?
Member 14968771 wrote: And if you bring up "freedom of personal choice" - such as in the case of endless arguments
against wearing masks during the pandemic - I don't. I'm just saying you shouldn't follow rules blindly, because rules may not always make sense.
I don't think masks made a difference, but I wore them when I had to, because it's less effort to just wear one than to get into a discussion with people.
Besides, they may have worked, so I'd rather wear one for nothing than not wear one when I should've.
Wearing an ID at school all day is just annoying and definitely serves no purpose.
|
|
|
|
|
I suspect your average person will stick their ID in their pocket while walking to school instead of having it swinging around their neck. I know, some of us have never misplaced our car keys or forgotten our wallet. Or gone to the store in our house slippers.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
We had ID cards in high school in the 80s. Mostly, it was for teachers who did not know you. If they caught you doing something wrong, they would take your ID card from you and put it in the office. Then the vice principal/head disciplinarian would call you to the office to chew you out, give you a few pops, and then give you your ID back.
Every classroom had outside windows so we would have seen trouble coming.
|
|
|
|
|
I am happy with my 2014-vintage PC; I have no need to replace it. I just recently installed another 32 TB of magnetic disk, and will get a 1 TB M.2 system disk next week - that will keep the PC going for a few more years.
Before ordering the M.2 disk, I checked the motherboard user guide to make sure it would accept it. The guide listed the PCIe capacities for "40 lane" and "28 lane" CPUs. Mine, an i7-5820K, has "only" 28 PCIe lanes.
So I looked up today's Core CPUs to see which models would provide more, starting at the top of Intel's list sorted by release date. I gave up before finding any model with more than 20 lanes (which they all had). I also noticed that while my old CPU has four memory channels, most new ones have only two.
I am certainly no hardware expert, but to me this looks like a downgrade on both points. In spite of significantly faster memory chips today, the maximum memory bandwidth is only marginally higher (less than 15%) on today's CPUs.
I am curious about why this is so. Did Intel conclude that PCIe wasn't such a great success after all - that there is no need for that many lanes? With my Asus X99-A/USB3.1 board, I can plug in and bridge together three GPU cards. Nowadays, it seems as if most users want a single super-powerful card. Is that why we don't need as much PCIe any more? In 2014, I expected super-speed network interface boards to be PCIe based. I expected super-speed storage to be PCIe based, but all I see is a single M.2 socket. Does that suggest PCIe was (at least partially) a flop, replaced by USB4 for network, disks and other uses?
For the memory bandwidth: Did Intel conclude that memory bandwidth isn't any serious bottleneck at all?
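For what it's worth, a back-of-the-envelope check of that 15% figure, assuming DDR4-2133 on my quad-channel board versus DDR5-4800 dual-channel on a current desktop chip, and the usual peak-rate arithmetic of channels x transfer rate x 8 bytes per transfer:
My i7-5820K: 4 channels x 2133 MT/s x 8 bytes = ~68 GB/s
A current chip on DDR5-4800: 2 channels x 4800 MT/s x 8 bytes = ~77 GB/s
That is only about 13% more, even though each DIMM is more than twice as fast.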
Maybe I am right in concluding that my 2014 vintage PC is good enough for a few more years ...
|
|
|
|
|
Commonly, 4 PCIe lanes are used for the M.2 socket. The alternative is SATA, but using that for the main SSD is a bit behind the times.
|
|
|
|
|
So 4 lanes for your M.2 system disk, 16 lanes for your GPU. That is it, as I understand it. Forget PCIe for anything else.
Or did I miss something?
|
|
|
|
|
Starting from 28 lanes, that still leaves 8, and you usually get some extra lanes from the chipset (check the mobo specs), so you could probably slap some extra add-in cards in there if you wanted, such as one of those cards that carry extra M.2 slots.
|
|
|
|
|
There are PCIe lanes that originate from the CPU and others that come from the chipset. The CPU really only needs 20. The chipset can provide up to 24 more.
The speed of the PCIe bus has roughly doubled with each new version. PCIe 2.0 was 500 MB/s per lane. PCIe 3.0 was just under 1000. PCIe 4.0 is just under 2000.
A PCIe 3.0 x4 M.2 drive can hit over 3000 MB/s. A PCIe 4.0 one can do twice that. It leaves anything else in the dust, even an SSD on SATA 6 Gb/s (note the lower-case "b" in the SATA spec).
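As a rough sanity check, using the commonly quoted effective per-lane rates rather than the raw line rates:
PCIe 3.0 x4: 4 x ~985 MB/s = ~3.9 GB/s ceiling; good 3.0 drives reach about 3.5 GB/s
PCIe 4.0 x4: 4 x ~1970 MB/s = ~7.9 GB/s ceiling; good 4.0 drives reach about 7 GB/s
SATA 6 Gb/s: ~600 MB/s ceiling after 8b/10b encoding; about 550 MB/s in practice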
Most GPU processing is done on the card and sent straight to the monitor(s). All but the very expensive top-end ones really only need 8 PCIe 3.0 or 4.0 lanes.
Here’s an article that explains it pretty well:
Guide to PCIe Lanes: How many do you need for your workload?[^]
I'm not up to date on memory throughput, but I would be surprised if the story there was any different.
I would expect an i5-11xxx CPU with a Z590 chipset and a PCIe 4.0 x4 M.2 SSD to benchmark three to five times faster than an i7-5xxx with X99 and a PCIe 3.0 x4 M.2.
|
|
|
|
|
A 28/40 lane CPU means you have an HEDT (high end desktop) chip, aka a "Xeon we stripped a few features out of to make it unattractive to businesses, so we can soak them for an extra gigabuck", or an actual Xeon.
16 lanes from the CPU has long been the baseline for mass-market consumer chips: enough for a GPU (or two if bifurcated), along with an additional 10-20 lanes on the chipset, multiplexed over an x4-sized link. More recent designs are starting to edge this up, with 20 on Intel boards giving an x4 direct to the CPU for the primary (and in 99% of cases only) SSD; they're also upgrading the chipset link to an x8, meaning a single SSD can't saturate it and making the dedicated CPU lanes as much me-too marketing as anything else. Consumer-tier AMD chips will claim 24/28 CPU lanes; but since 4 of those are used to connect the chipset (vs Intel using DMI, pronounced "not-PCIe", created to kill off 3rd-party chipsets ~15 years ago), the effective number for comparison with Intel is 20/24, and AFAIK they're sticking with just x4 to the chipset.
It's the same thing with memory channels. Two has been the consumer baseline for about 20 years, with the high-end parts offering higher counts.
In both cases this avoids overkill on consumer systems while keeping costs down - adding extra layers to the PCB, especially at the quality levels needed for modern high-speed signalling, is really expensive. The bigger socket doesn't help either. Nor does the fact that the big OEMs prefer smaller-than-ATX mobos, which really makes fitting more than 4 DIMM sockets problematic.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
|
|
|
|
|
I was on Reddit just now advising someone on an ESP32 question regarding its ability to do over-the-air updates to its firmware.
Long story short, he's making things for family and wants to be able to patch and deploy when there are fixes.
I told him for his use case it's a mug's game, because it won't be completely hands off/zero touch, and it requires infrastructure, such as something to serve the new firmware. At the end of the day, your user/family member will still need help, at which point reprogramming it manually via USB is just as easy.
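For context, the sort of thing I was steering him away from looks roughly like this - a minimal sketch of pull-style OTA using the ESP32 Arduino core's HTTPUpdate helper. The server address and firmware path here are made up; that HTTP server hosting the .bin is exactly the infrastructure he would have to stand up and keep reachable.

#include <WiFi.h>
#include <HTTPUpdate.h>

void checkForUpdate() {
  if (WiFi.status() != WL_CONNECTED) return;  // no network, no update

  WiFiClient client;
  // Pull the new image from a web server somebody has to run and keep online.
  t_httpUpdate_return result =
      httpUpdate.update(client, "http://192.168.1.50/esp32/firmware.bin");

  switch (result) {
    case HTTP_UPDATE_OK:         break;  // by default the device reboots itself into the new image
    case HTTP_UPDATE_NO_UPDATES: break;  // server reported nothing newer
    case HTTP_UPDATE_FAILED:     break;  // download or flash failed; the old image keeps running
  }
}

Even then, something still has to call checkForUpdate() on a schedule, and somebody still has to notice when it fails - which is why plugging in a USB cable ends up being just as easy for one-off family gadgets.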
But I've always been more or less against these auto-rollout things anyway. I like the idea in theory, but I quickly learned that in practice it's anywhere from clunky (Java updates) to evil (Windows or Ubuntu updates) in terms of what they do, and they aren't hands-off. At the end of the day, the user is going to be interrupted and expected - or worse, required - to intervene when things inevitably go south.
At its very worst it feels like an excuse to release without thorough QA.
Am I the only one who feels this way?
To err is human. Fortune favors the monsters.
|
|
|
|
|
Nope, I hate auto-updates.
But then again, we (CP users) are pretty much all tech-savvy and will update our devices on a fairly regular basis. It was one of my main reasons for going to Linux: I am in more control.
That being said, the vast majority of users just can't be bothered to run updates themselves. So when I set up someone's machine, it is set to auto-update a few days after the second Tuesday, since that seems to be the go-to date for rollouts. It isn't perfect, but it is what it is.
I usually try to set the schedule for the middle of the night and hope like hell they don't have anything open.
I probably should set up a message that pops up on their machine during the day before, saying something along the lines of "hey, we are going to update this tonight. Save your work. Close things you want closed. No, you can't stop it."
To err is human to really elephant it up you need a computer
|
|
|
|
|
I like updates that
a) fix things
b) add features that make sense
c) don't interrupt and make a song and dance
The Apple AirPods updates are my current favourite, because I never know when they happen and generally don't notice that anything changed, but I always have the feeling that I'm on the latest firmware. I'd be annoyed as hell if I had to manually update the firmware in my AirPods.
On the other hand, there was a modem I used that required signing into a server, downloading, unpacking, uploading to the modem's internal server, and initiating the update. And then being ready to roll back with the backup they asked you to make. So, so stupid.
cheers
Chris Maunder
|
|
|
|
|
ditto
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
Somehow, the instant I hear "auto-update" nowadays, the first thing that comes to mind is the Sony 3D TV my dad bought a few years ago (back when so-called 3D was all the rage).
Being a so-called "smart TV", it had an app that gave you access to a somewhat decent 3D video library to showcase what the TV could do. A mere few months after purchasing the TV, the app was removed through an update, and the only way the TV can now show any 3D content is by using whatever few 3D discs my dad had purchased.
Personally, I still don't tell people to avoid Sony - that's just my choice - but for my own purchases, I've boycotted them for decades for similar reasons.
For my own Windows systems, everything goes through a WSUS server, so any and all updates must be approved by me first. As a result, I've never seen any of the nagware people were complaining about when MS was pushing Windows 8 (and later 10) hard to get people to migrate away from Win7.
Updates are good. But you should always have the final say as to whether you want them or not.
|
|
|
|
|
|
In my opinion, Sony got off too lightly for their rootkit. They should have been hit with a DMCA violation for every single CD sold with that rootkit, as they damaged every single PC that had one of those discs inserted, even if the music was never played.
|
|
|
|
|
Agreed.
I still remember reading Russinovich's[^] blog posts about both Sony's and Symantec's rootkits. Interesting times.
|
|
|
|
|
When one creates something which can be "fixed" that easily, one also admits that it will be buggy and will require such updates frequently. Updates to malware signature databases and similar are fine of course, but not updates to the software or Operating System.
At worst, it allows the developers to become lazy. And of course, if the software being updated is buggy, how can one expect the update module itself to be bug-free?
|
|
|
|
|
Not at all. A long time ago in a house far, far away (the other side of town)...
At one time I used a popular CD authoring application that issued patches frequently. Yes, patches. The patches were cumulative, and if one of them borked, you had to uninstall the whole mess, clean up your disk and registry by hand, and start from scratch.
I finally gave up and spent the money on the 2nd-rated authoring application.
Software Zen: delete this;
|
|
|
|
|
Quote: I finally gave up and spent the money on the 2nd-rated authoring application.
Was that out of necessity, after the first software house was firebombed?
|
|
|
|
|
I can neither confirm nor deny that speculation, besides I was not anywhere at the time.
Software Zen: delete this;
|
|
|
|
|
Checking my email just now, I got a bogus "reply" to a "Quick Answers" question I did not ask.
The email looked exactly like a legitimate notice that a question had been answered.
Thankfully, I read it all before clicking the link, and found it was an offer to go to a porno site to see Russian girls, etc.
Has anyone else experienced something like this?
Not joking...
|
|
|
|
|
Are you sure this is not the hamsters making extra money on the side?
|
|
|
|
|
Mind Bleach! I need Mind Bleach!
@Sean-Ewington in a mankini was bad enough, but ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
That's likely phishing. Don't open stuff like that; at a minimum it sends back an "Eddie opened it" signal, confirming the bait found a live address. So they'll cast again and again.
|
|
|
|
|