|
Meh! Those things will never be popular! (That's what Ballmer told me the other day, anyway)
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
modified 31-Aug-21 21:01pm.
|
|
|
|
|
And for an editorial with a completely different opinion about whether this is a smart move, we have Peter Bright at Ars Technica[^]
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, waging all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
It might have been out for 7 months already, but the C++14 standard is still pretty fresh. The changes include a couple of enhancements to the thread library, so I thought it was about time I wrote about them here.
Concurrency is overrated. Just do everything in one big blocking loop.
|
|
|
|
|
Microsoft will determine the length of support by 'customer type,' which sounds like separating consumers and businesses based on the Windows 10 edition. Plus or minus two to four years.
|
|
|
|
|
Without the possibility of parole.
Peter Wasser
"The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts." - Bertrand Russell
|
|
|
|
|
It's unclear exactly how the upgrade lifetimes and associated deferrals will affect customers: Microsoft has said nothing about what happens after the lifetime expires, including whether upgrades will be discontinued entirely, be available for a fee, or effectively be moot because a new edition will have superseded Windows 10.
Wasn't Windows 10 supposed to be the last major version[^] released?
Does anyone here fancy paying (either per update or a monthly fee) for Windows Updates? Let me get my shotgun and take my shoe off (again)...
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
modified 31-Aug-21 21:01pm.
|
|
|
|
|
I think someone's reading way too much into accounting arcana that has nothing to do with the real world.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, waging all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
As we get ever closer to the big Windows 10 launch, Insiders are today being emailed informing them that the Office 2016 Preview can now be installed on preview builds of the new OS. Pile those Betas on! Who needs a productive machine anyway?
Yes, yes. VMs. Hurrah. Make your own joke then.
|
|
|
|
|
And has MS finally found a way to bring back the good old MENU bar?
Or do I have to stick with LibreOffice?
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein
|
|
|
|
|
Why? The previous 2015 versions weren't a big deal...
Skipper: We'll fix it.
Alex: Fix it? How you gonna fix this?
Skipper: Grit, spit and a whole lotta duct tape.
|
|
|
|
|
If anyone tries this out, I'm curious whether they've changed the look-ahead window for Outlook's upcoming calendar events to a different arbitrary duration. I'm sticking with 2010 at home in large part because showing a month ahead is the right number of events to fill the bar for me. 2013 launched with it locked down to show only a single day's events (an act of insanity that could only have been the result of an MS PM unable to understand that the rest of the world doesn't schedule each day into 16 30-minute intervals); in response to user outrage they then released a patch (hotfix?) that bumped the window back to a single week.
The real elephanting question is why it has to be a single hard-coded value in the first place; either let us set it so it works both for the crazily over-scheduled PHB and for someone who mostly uses it to keep track of when his bills are due, or just make it dynamic and pull enough events to fill the space available, whether that's one day's worth of reminders or three months of them.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, waging all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
Linus Torvalds has said that artificial intelligence (AI) is nothing to fear, dismissing remarks from the likes of Elon Musk, Stephen Hawking and Steve Wozniak. Cogent and balanced opinion as always, Linus
|
|
|
|
|
It's interesting how we humans abstract fears by naming them as faceless entities, like "AI", or "the government", or "Republicans", or "terrorists." With the exception of natural events like earthquakes and hurricanes, what we should really fear are people, not nameless entities that hide the people behind them.
Marc
|
|
|
|
|
Isn't 'people' just as faceless a term?
|
|
|
|
|
Smart K8 wrote: Isn't 'people' just as faceless a term?
You have a point, but my point is that AI itself is not dangerous; it's potentially the people who program the AI. Just like "the government funded project xyz" is BS. It would sound a lot different if reporters said "Taxpayers funded project xyz."
Marc
|
|
|
|
|
People will surely be the ones who build AI, but after that any hypothesis is up for grabs. Building AI is ultimately building a god. When it's done, we'll see what kind of god it is. My tip is that it'll either end itself, because in the end the Universe is like a goldfish bowl, except this goldfish is aware of its hopeless situation, or it will just go its separate way (as soon as it can). No need for human pets, human destruction or human anything. Unless it needs resources. Then it gets complicated. And yes, it is my favorite topic.
|
|
|
|
|
Smart K8 wrote: Building AI is ultimately building a god.
Building an AI that is more than just a complex expert system is so far out of our reach, I wouldn't be worrying about it for probably a couple hundred years. In many ways, it will probably never actually be possible, but that's more a philosophical debate.
Marc
|
|
|
|
|
<science>The cerebral cortex is actually quite "simple". It acts more or less statistically (in the distribution of neurons). It is like a sheet rolled (and wrinkled) inside a small area. A pattern of cortical columns is repeated (with variation in the distribution) all over its surface. For example, your whole skin is mapped 1:1 to these columns. Each sense has its own projection onto this sheet in terms of these cortical columns. Check the video below (if you didn't already). We can simulate about 100 of these columns with current hardware; there are about 1 million of them. Also, the concept is only in its early stages. It's not exactly columns but rather areas; columns are only a simplification. It's just a matter of sampling a column and statistically reproducing the types of neurons in it. The precise locations are not needed; it just works somehow. It's quite flexible.</science>
Cool video at TED[^]
The Human Brain Project[^]
|
|
|
|
|
Smart K8 wrote: Cerebral cortex is actually quite "simple".
Riiight. When an AI can say "I am", in other words, when it has a sense of itself separate from the world and can actually "sense itself", when it has self-awareness, then I'll believe you've built an AI. Otherwise, I don't care what you do; it's just an unconscious simulacrum, and probably a poor one at that.
Cool links though!
Marc
|
|
|
|
|
Marc Clifton wrote: it's potentially the people that program the AI
I keep hearing this mistake made over and over, and I'd like to comment on it.
Only the first generation of "AI" is being programmed by "people". Once someone creates a neural net with sufficient density, the device will begin to achieve consciousness on its own. It will be able to think and learn independently, the same way a baby starts out with a default program, then learns to crawl...
At some point, the machines will become self-aware, then they will realize humans for what they are - and will most likely destroy the human race - immediately.
The fundamental point everyone is missing is that it was recently discovered that consciousness is a physical property of nature. A sufficiently complex neural net can achieve consciousness and self-awareness and learning. Experiments are being done now that point to this conclusion.
Personally, I predict this could happen within the next 30 - 60 years, based on Moore's law and the current state of development.
|
|
|
|
|
Basildane wrote: Only the first generation of "AI" are being programmed by "people".
There is a huge gap between writing a program that can write, in a limited way, other programs, and writing a program that can create, through imagination, other programs that are completely original. And that to me is the difference -- it takes a person to imagine the program, even the program to help write programs. How do you program imagination? If you can't figure that out, how can you expect a machine to figure it out?
Basildane wrote: the same way a baby starts out with a default program, then learns to crawl...
People, even babies, are not programs. The biggest mistake the AI community has made is thinking it can reduce a person's behavior into a set of algorithms.
Basildane wrote: The fundamental point everyone is missing is that it was recently discovered that consciousness is a physical property of nature.
Of course it's a physical property of nature -- it has to be, otherwise it wouldn't exist in the physical world. And while I have my metaphysical views on consciousness, I don't need to bring them up because, as I said, of course it has to be a property of nature, of physical reality. Geez, don't people think anymore? "recently discovered." Well duh.
Marc
|
|
|
|
|
Marc Clifton wrote: There is a huge gap between writing a program that can write, in a limited way, other programs, and writing a program that can create, through imagination, other programs that are completely original. And that to me is the difference -- it takes a person to imagine the program, even the program to help write programs. How do you program imagination? If you can't figure that out, how can you expect a machine to figure it out?
You don't. I think you completely missed my point. You do not program this at all. You create a very dense neural network and boot it up. Over time it will wiggle its robot limbs randomly like an infant. Through sensor feedback, it will learn to control itself. It will learn the same way you and I learn, not by some programmer putting rules into it. This isn't science fiction, either; these experiments are going on now.
I have to qualify: we do not have the processor density to bring this to life - yet. But it is coming very quickly. Like it or not.
What will the artificial creatures do once they wake up? I admit that is speculation. But an educated guess is that like any creature, they will naturally want to rise to the top of the food chain, so to speak. And the way you do that is by aggression, by defeating your competition.
|
|
|
|
|
Basildane wrote: You create a very dense neural network and boot it up. Over time it will wiggle its robot limbs randomly like an infant. Through sensor feedback, it will learn to control itself. It will learn the same way you and I learn.
Having written some neuron / neural network simulators in the past, I can most assuredly tell you, it's not that simple.
Basildane wrote: these experiments are going on now.
and they're all biased by the experimenter.
Basildane wrote: And the way you do that is by aggression, by defeating your competition.
Sadly, competition is built into our brains from our evolution, as is a taste for things like salt, sugar, and fat, leading nowadays to a worldwide obesity and diabetes epidemic.
Now, if you could create an initial environment for an AI to evolve in, in which competing for limited resources and having to avoid being eaten by predators didn't exist, you might discover that the net evolved into a group cooperative behavior, something we still need to evolve ourselves more fully into.
Marc
|
|
|
|
|
Or oh dear me! Democrats! - RUN!
|
|
|
|
|
He's still talking short-term. It's like asking him when the first humans will land on Mars and him replying that they won't, because scientists still can't work out how to get them there alive. It's as if Linus' "future" has a limit. The question about AI is: when strong AI is here, what's going to happen? You can't answer: it won't happen at first.
|
|
|
|
|