|
I just want Windows Update to be able to sort itself out on its own.
If what you're seeing is the cost of making that happen, it just might be a risk I'm willing to take...
|
|
|
|
|
It's always the same with optimization: you have to include all relevant factors in the definition of 'optimal'. In the case of robots, well-being must be in there, or it will be optimized out of the equation.
Of course, there will always be ruthless industrialists who don't care about the well-being of their workers and therefore *don't* include it in _their_ optimization, not realizing that in the end the robots will factor them out as well...
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
I'd worry a lot more about ruthless people in the state.
|
|
|
|
|
If the machines try to take over we'll cut off their solar power source by nuking the planet to block out the sun.
I saw that solution in a movie once, but didn't have time to watch the entire film to find out whether it worked. It sounds like a really good idea, though.
|
|
|
|
|
That wouldn't stop coal plants, hydro plants and windmills from producing energy, nor machines that run on fossil fuels.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
Sounds like you've been drinking the machine's Kool-Aid, or radiator fluid... I forget.
|
|
|
|
|
Isaac Asimov speculated about this problem in his books back in the '50s.
If his Three Laws of Robotics couldn't deal with it, then I doubt anyone will be able to deal with it now.
I am also worried about the number of people the robots would put out of work.
I know my mental tracks need repair because my train of thought constantly runs off the rails.
|
|
|
|
|
Interesting question... When does an AI become self-interested?
What happens when the optimizing routine learns enough to know that "she" will become obsolete and be replaced (killed off) by the next version of itself?
I wonder when the optimizations will stop?
I have ZERO fear of the ever more intelligent machines! My fear is, and always will be, the imperfect human beings giving them orders, training them, and deploying them!
|
|
|
|
|
Should I take this as a precautionary note, since I re-watched The Matrix trilogy(*) this past weekend?
(*) Yeah, I know the first one is really the only good one. I just wasn't up to starting the next series in my usual winter binge-a-thon.
Software Zen: delete this;
|
|
|
|
|
Before I got bogged down with work, I was working on two JSON projects: my own JSON parser, and simdjson.
Both are very fast. simdjson is twice as fast as mine though. Mine keeps up with their nearest competitor.
Here's the thing though: I was running these on an old i5 with an HDD.
I had *no clue* how fast these modern machines were in comparison.
I mean, multiple GB/s even with my engine. It's unreal.
So what's the point of something like simdjson now? I was working on improving its performance even more, but why?
AMD was like, "nah - we got you. don't bother"
I feel thrilled and disappointed at the same time.
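For the curious, the measurement behind numbers like "multiple GB/s" is just bytes divided by seconds. A minimal C++ sketch of that timing harness, with a hypothetical `parse_json` stub standing in for the real parser (this is not simdjson's API, nor my library's):

```cpp
#include <chrono>
#include <string>

// Stand-in for any JSON parser's entry point; the real parsing work is
// elided. (Hypothetical -- not an actual library function.)
inline bool parse_json(const std::string& doc) {
    return !doc.empty();
}

// Parse `doc` `iterations` times and report throughput in MB/s.
inline double throughput_mbps(const std::string& doc, int iterations) {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    bool ok = true;
    for (int i = 0; i < iterations; ++i) ok &= parse_json(doc);
    std::chrono::duration<double> elapsed = clock::now() - start;
    double bytes = ok ? static_cast<double>(doc.size()) * iterations : 0.0;
    return (bytes / 1e6) / (elapsed.count() + 1e-12); // guard divide-by-zero
}
```

Run it over a buffer a few MB in size and the MB/s figure falls straight out; swap the stub for a real parser to compare engines on the same machine.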
Real programmers use butterflies
|
|
|
|
|
Quote: So what's the point of something like simdjson now? I was working on improving its performance even more, but why?
Because you can't get no satisfaction.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
honey the codewitch wrote: I was working on improving its performance even more, but why?
Bragging rights?
There is a point after which any further improvement is a waste of your time. If you use data only on your own platform, store it in the most convenient binary format. If you need to transfer it to another machine in a portable format, then JSON is an option. However, the time required for a network transfer even over a dedicated 1Gbps line will overwhelm the time required for parsing.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: There is a point after which any further improvement is a waste of your time.
Yeah. The issue was, it *wasn't* a waste of time on my older machine. I had no context for how much faster a modern machine was.
Daniel Pfeffer wrote: However, the time required for a network transfer even over a dedicated 1Gbps line will overwhelm the time required for parsing.
This is true, but for files the calculus is different. Part of what my library was designed to do was parse through huge data dumps, typically in line-delimited "JSON" format, so network times weren't really the bottleneck in that case. Even I/O wasn't entirely, even on my old machine, which surprised me.
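For anyone unfamiliar with the format: line-delimited JSON is one complete document per line, so the outer loop is trivial. A minimal C++ sketch, using a hypothetical `split_ndjson` helper (any real JSON parser would then be applied to each record):

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Split a line-delimited "JSON" (NDJSON) dump into individual records:
// one complete JSON document per line. The per-record parse itself is
// left out here; a real parser would consume each returned string.
inline std::vector<std::string> split_ndjson(std::istream& in) {
    std::vector<std::string> records;
    std::string line;
    while (std::getline(in, line)) {
        if (!line.empty()) records.push_back(line);
    }
    return records;
}
```

Because each record stands alone, a dump like this can also be processed in parallel or streamed without ever holding the whole file in memory.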
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: So what's the point of something like simdjson now? I was working on improving its performance even more, but why?
AMD was like, "nah - we got you. don't bother"
'Cause not every situation has a new X86 CPU with bags of RAM and an SSD to throw at it. Sometimes you need good performance on minimal hardware with a small footprint.
Keep Calm and Carry On
|
|
|
|
|
The performance is already decent on my old machine, searching through JSON at almost 600MB/s.
It will also run happily on an 8-bit CPU with less than 8kB of RAM, probably 4kB or less.
So I take your point, but on balance I think it's good. My rationale is this. If you need superfast JSON bulk parsing, you're going to buy a decent machine. If you're okay with an older machine, you're probably okay with 600MB/s of throughput. I think that's reasonable.
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: If you're okay with an older machine, you're probably okay with 600MB/s of throughput. I think that's reasonable.
I would already be happy with 100MB/s, so...
honey the codewitch wrote: If you need superfast JSON bulk parsing, you're going to buy a decent machine. If you're okay with an older machine, you're probably okay with 600MB/s of throughput. I think that's reasonable.
Yes, it is. If the performance gain is a big step, then it is worth it. For just 1%, 2%, or even 5% faster... you really have to consider whether the time spent struggling with the improvement is worth it or not.
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
To be honest, this is why I don't develop on an uberPC - if it runs at an acceptable pace on my dev hardware, then it'll run well on any client kit.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I feel the *exact* same way and have for as long as I've written code professionally.
However, my old machine was dying. I needed a new one, and buying a low- to mid-range machine is throwing good money after bad. Getting a mid-high or high-end machine means it can remain viable for longer.
Although, given that my last machine died before its performance became unusable, maybe I should take that into account.
Real programmers use butterflies
|
|
|
|
|
Quote: Although given my last machine died before the performance got unusable maybe I should take that into account.
My **current** personal machine (where I do all of my personal projects) is 10 years old (Gen-1 i7, 16GB RAM), and it's plenty fast enough for my current personal projects.
I'm looking at a new machine because now I need to do some Android development and there isn't an IDE for Android that doesn't require an UberMachine.
|
|
|
|
|
I think it's broadly true that there's not much point optimising most code, given how fast machines are and how cheap space is. It's a long way from counting bytes in C++.
|
|
|
|
|
I don't know. On my original machine my code ran at maybe 30MB/s before I got it up to almost 600MB/s.
That was worth it.
The other thing is the code will run on a machine with 4kB of RAM. I optimized primarily for memory usage, not speed, so that this code could run on Arduinos. Without that optimization it wouldn't have been possible.
Certainly, however, a lot of times it's not worth it.
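To illustrate the kind of approach that fits in a few kB of RAM: a streaming scanner that walks the JSON one byte at a time and keeps only a few bytes of state (nesting depth, in-string flag) rather than building a tree. This is just a sketch of the general technique, not the actual library:

```cpp
// Constant-memory scan of a JSON byte stream: tracks nesting depth and
// counts completed top-level documents using only a few bytes of state,
// the sort of footprint that fits an 8-bit MCU. (Illustrative sketch only.)
struct JsonScanner {
    int depth = 0;        // current {}/[] nesting level
    bool in_string = false;
    bool escaped = false; // previous char inside a string was '\'
    int docs = 0;         // completed top-level documents seen

    void feed(char c) {
        if (in_string) {
            if (escaped)        escaped = false;
            else if (c == '\\') escaped = true;
            else if (c == '"')  in_string = false;
            return; // structural chars inside strings are ignored
        }
        switch (c) {
            case '"':           in_string = true; break;
            case '{': case '[': ++depth; break;
            case '}': case ']':
                if (--depth == 0) ++docs;
                break;
            default: break;
        }
    }
};
```

Feed it bytes from a serial port or SD card as they arrive; memory use stays constant no matter how large the document is. A real pull parser layers tokenization and value extraction on top of the same idea.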
Real programmers use butterflies
|
|
|
|
|
For a library, sure. For day to day code, not so much
|
|
|
|
|
Yeah, definitely. I can get behind that. My JSON reader is a library of course. I spend more time writing libraries than writing code that uses them.
Real programmers use butterflies
|
|
|
|
|
Sounds fun
|
|
|
|
|
Energy usage.
Performance is a clear functional requirement that must be met; energy usage is a secondary quality attribute (a.k.a. non-functional requirement) that has gained attention in the last two decades.
Computers, servers, the internet, cloud, big data, etc. already use (fill in your favorite number)% of our total energy consumption, and that number is very likely to rise. Energy-efficient algorithms help to slow that growth.
Another area where energy efficiency is important is mobile devices. Some of you may recall apps on your phone that drained the battery. And IoT, Arduino, etc.: think devices that run on battery or solar.
As a rule of thumb, faster algorithms use less energy, so improvements in performance might still be valuable.
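The rule of thumb is just energy = power × time ("race to idle"): finish the same work sooner, spend the rest of the window at idle power, and the joule count drops. A toy C++ sketch with made-up wattages:

```cpp
// Back-of-the-envelope "race to idle" arithmetic: at a fixed power draw,
// energy = power x time, so finishing sooner and idling the remainder of
// the window uses less energy. The wattages below are made-up figures.
inline double joules(double active_watts, double active_secs,
                     double idle_watts, double idle_secs) {
    return active_watts * active_secs + idle_watts * idle_secs;
}
```

For a hypothetical 2 W core that idles at 0.1 W over a 10-second window: an algorithm busy the whole window costs `joules(2.0, 10.0, 0.1, 0.0)` = 20 J, while one that finishes in 5 s costs `joules(2.0, 5.0, 0.1, 5.0)` = 10.5 J, nearly half the energy for the same work.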
Mental exercise.
Another argument for improving the performance of algorithms is the mental exercise, which is fun for some of us.
|
|
|
|