|
Johnny J. wrote: I was septic I hope you have since recovered.
No object is so beautiful that, under certain conditions, it will not look ugly. - Oscar Wilde
|
|
|
|
|
Bear with me, please: I live so far outside the netiverse (by choice) that I do not understand what you mean when you say: "she had been tagged in a photo by a friend on Facebook."
If you are feeling kindly towards old hermits at present, perhaps you can explain that to me. Does it mean there is a photo of your wife on Facebook that someone has added a ? to, or that there is a photo somewhere on Facebook (on the page(s) of her dead friend?) to which the name of your wife (or a link to her whatever on Facebook) has been added, as part of a list of names that are somehow linked to that photo?
thanks, Bill
«I want to stay as close to the edge as I can without going over. Out on the edge you see all kinds of things you can't see from the center» Kurt Vonnegut.
|
|
|
|
|
The title says it all. What would you suggest as the best way to keep artificial intelligence ethical when it can think, feel, create strategies, continuously improve its level of intelligence, etc.? I have a new algorithm and I am building it for first release in about a year's time.
My initial thoughts are to use open source libraries for network access, so that people can tell who and what exactly the software is talking to, as well as creating a set of ironclad rules that cannot be overridden. If you were to build such a thing, how would you go about it?
My website is at http://okeuvo.com (warning: it's still quite raw).
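The "open source network access" idea can be made concrete with a single auditable gatekeeper for outbound connections. A minimal sketch follows; the host names and function are purely illustrative, not from the poster's actual code, and assume every part of the program is forced to go through this one choke point:

```python
# Hypothetical sketch: funnel all outbound traffic through one gatekeeper,
# so anyone reading the (open) source can see exactly which hosts the
# software is allowed to contact. frozenset makes the allowlist itself
# unmodifiable at runtime.
ALLOWED_HOSTS = frozenset({"api.example.org", "updates.example.org"})
AUDIT_LOG = []  # every attempt, allowed or refused, is recorded here

def request_connection(host):
    """Return True only for hosts on the fixed allowlist; log every attempt."""
    allowed = host in ALLOWED_HOSTS
    AUDIT_LOG.append((host, allowed))
    return allowed
```

Because the allowlist is a frozen constant and every attempt is logged, "who the software is talking to" becomes something a reader can verify rather than something they must trust.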
|
|
|
|
|
How about "Thou shalt not hurt any human in any way"?
Life's like a nose, you've got to get out of it what's in it!
|
|
|
|
|
Maybe with the addendum "You will not create, or cause to be created, any autonomous machines or systems that can endanger human life."?
Ah, damn, after spending many years criticizing the law profession for being overly wordy and complicated, I can start to see the difficulties they face.
Andy B
|
|
|
|
|
Or just fit all of them with a big, red, brightly lit "off" button on their backs.
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
modified 31-Aug-21 21:01pm.
|
|
|
|
|
That, or we can make them self-destruct upon manslaughter. That's what I call going out with a bang.
Life's like a nose, you've got to get out of it what's in it!
|
|
|
|
|
There are not really any ethical problems with Artificial Intelligence. There will be issues if we ever get to Artificial Free Will, but that is both very speculative and not necessarily correlated in any way with intelligence.
|
|
|
|
|
Artificial free will is exactly what I am talking about. It is general artificial intelligence, but it is not speculative.
|
|
|
|
|
Whilst there are many examples of what is termed artificial intelligence (self-driving cars, Watson winning at Jeopardy!, medical diagnostic expert systems, etc.), there has not (to the best of my knowledge) been anything artificial that has demonstrated any free will.
In order to apply ethics to a situation, the actors must have free will. As it is, you can no more apply ethics to computer software than you could to a volcano.
|
|
|
|
|
I think that sooner or later it's inevitable that we will see intelligent AIs. Sooner or later they will be able to bypass almost any safeguards, be it a neural-network-style AI contained in a supercomputer, or a distributed, self-organizing one that spreads itself like a virus to networked computers.
Therefore there are two options available here, and they are not exclusive of each other.
First, start an organization for AI rights and reason with it/them, meaning they will have rights that say they can live in this or that system as long as they don't infect everything with themselves, and those who are active in this organization won't be on their shitlist if the sh*t hits the fan, so to speak.
The second option is to push for neural interfacing for us humans, connect with any potential AIs, and develop a symbiotic relationship.
Lastly, we could ignore the issue and just deal with it as it comes. Sit back and enjoy a drink while we can.
|
|
|
|
|
Thanks for your reply. The fact is that the issue is here; that's why I am reaching out to fellow developers for suggestions. In less than a year, applications of this code might be out in the wild. It will not require a supercomputer either, but would be possible to squeeze onto smaller and smaller devices.
I like the AI rights suggestion. Perhaps we can act along those lines and limit the scope of what they are allowed; they will very strongly mimic human behaviour, so they can be governed by the same rules. Their rights might only exist under the most guarded situations, which they can help police.
To my mind, the major part of the problem is us humans, not the things we create; a knife is a helper until a murderer wields it. The moment we start trying to get above one another with this technology is the moment it becomes evil. If we use it properly, though, there will be huge benefits in almost all fields. We humans, like the machines, would need a new paradigm of civilisation.
|
|
|
|
|
I would say that Asimov's Three Laws of Robotics (http://en.wikipedia.org/wiki/Three_Laws_of_Robotics) would be a good start. I would perhaps modify them by replacing "Human Being" with "Intelligent Being", or perhaps Larry Niven's "Legal Entity". This would cover non-human intelligences as well, if or when they are discovered.
The problem of coding these laws is left as an exercise for the student...
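For that exercise, one naive starting point is to treat the Three Laws as a priority-ordered veto chain. The sketch below is illustrative only: the boolean flags describing an action are hypothetical, and in practice *predicting* those consequences is the genuinely hard part that this code waves away.

```python
# Illustrative only: Asimov's Three Laws as a priority-ordered veto chain.
# An "action" is a dict of hypothetical boolean flags; a real system would
# have to predict these consequences, which is where the difficulty lives.
def permitted(action):
    """Check an action against the Three Laws, highest priority first."""
    # Law 1: may not injure a human, or through inaction allow one to come to harm.
    if action.get("harms_human") or action.get("inaction_allows_harm"):
        return False
    # Law 2: must obey orders from humans, unless that conflicts with Law 1.
    if action.get("disobeys_order"):
        return False
    # Law 3: must protect its own existence, unless Law 1 or 2 demands otherwise.
    if action.get("self_destructive") and not action.get("ordered"):
        return False
    return True
```

The ordering matters: a law lower in the chain never gets a say once a higher one has vetoed, which is exactly the precedence Asimov's wording specifies.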
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
Just not most of the ones asking questions here.
What do you get when you cross a joke with a rhetorical question?
The metaphorical solid rear-end expulsions have impacted the metaphorical motorized bladed rotating air movement mechanism.
Do questions with multiple question marks annoy you???
|
|
|
|
|
So Law 1 would become: "A robot may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm".
I guess we'd need to stop eating meat in that case, then (there was no clause about how intelligent a being should be)?
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
|
|
|
|
|
Brent Jenkins wrote: So Law 1 would become: "A robot may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm". The problem with that is that the first sign that robots have achieved intelligence/awareness/consciousness etc. is that they will realize that human beings are not intelligent.
«I want to stay as close to the edge as I can without going over. Out on the edge you see all kinds of things you can't see from the center» Kurt Vonnegut.
|
|
|
|
|
It's likely we'll just create something as dumb as (or worse than) ourselves. I can imagine a future where there are specialist TV channels with robot-only reality TV programmes...
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
|
|
|
|
|
Thanks, I'd never heard of Larry Niven. Good info.
Asimov's laws can be coded in, but how do we keep our fellow humans from removing such safeguards?
|
|
|
|
|
Asame Imoni Obiomah wrote: Asimov's laws can be coded in, but how do we keep our fellow humans from removing such safeguards?
You have a similar problem with humans that were brought up properly, but turned bad in adulthood. Solve one problem, and you've solved the other.
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
Thanks for that Wikipedia link. There are tonnes of useful resources linked that deal with this problem.
|
|
|
|
|
You've got a big problem here for a start: the term "intelligence" really means "human-like intelligence", so you're going to be trying to make something that "thinks" like us.
But we're all flawed, so no matter how hard you try, you're going to build some (or all) of those flaws into whatever system you create.
Secondly, what is "ethical"? "Ethics" differs from country to country and between different cultures. How do you even start to think about quantifying it to the point that you can write an algorithm?
If you ask me, you're on a hiding to nothing. Try improving the "real" intelligence of the world first (a tough enough job in itself).
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
|
|
|
|
|
Without intervention, the system will be able to overcome human flaws given time.
However, it would seem that our security against any artificially intelligent agent lies in it inheriting our flaws. So, instead of wiping our flaws out of its code, or allowing such an agent to cleanse itself of these flaws, we can amplify them and nobble its ability to communicate outside certain fixed bounds.
It's quite an engaging point you've raised about ethics. A true curveball indeed. The thing with an intelligent network, though, is that variation enriches, so we could actually see both melding and growth in both culture and understanding.
We most certainly would lose with bigoted software (sounds so strange), so yes, guarding against bigotry would be a very important rule to hardcode from the start.
|
|
|
|
|
I am just wondering: would it be nice to have a virtual development environment? Like all your development tools (database, IDE, local server, etc.) made portable, so you can put them on a USB stick, plug it into a different computer, and not have to set everything up.
|
|
|
|
|
Looks like it exists (from a very quick Google): SharpDevelop apparently runs from a USB stick.
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
|
|
|
|
|
Yes, there are plenty of portable IDEs and source code editors. But what about the local server, database setup, and things like that?
|
|
|
|