|
Maybe with the addendum "You will not create, or cause to be created, any autonomous machines or systems that can endanger human life."?
Ah, damn, after spending many years criticizing the law profession for being overly wordy and complicated, I can start to see the difficulties they face.
Andy B
|
|
|
|
|
Or just fit all of them with a big, red, brightly lit "off" button on their backs.
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
modified 31-Aug-21 21:01pm.
|
|
|
|
|
That or we can make 'em self-destruct upon manslaughter; that's what I call going out with a bang.
Life's like a nose, you've got to get out of it what's in it!
|
|
|
|
|
There are not really any ethical problems with Artificial Intelligence. There will be issues if we ever get to Artificial Free Will, but that is both very speculative and actually not necessarily correlated in any way with intelligence.
|
|
|
|
|
Artificial free will is exactly what I am talking about. It is general artificial intelligence, but it is not speculative.
|
|
|
|
|
Whilst there are many examples of what is termed artificial intelligence (self-driving cars, Watson winning at Jeopardy, medical diagnostic expert systems, etc.), there has not (to the best of my knowledge) been anything artificial that has demonstrated any free will.
In order to apply ethics to a situation, the actors must have free will. As it is, you can no more apply ethics to computer software than you could to a volcano.
|
|
|
|
|
I think that sooner or later it's inevitable that we will see intelligent AIs. Sooner or later they will be able to bypass almost any safeguards, be it a neural-network-styled AI contained in a supercomputer, or a distributed, self-organizing one that spreads itself like a virus to networked computers.
Therefore there are two options available here, and they are not mutually exclusive.
First, start an organization for AI rights and reason with it/them, meaning they will have rights that say they can live in this or that system as long as they don't infect everything with themselves. Those who are active in this organization can also collect some points, so they won't be on the AIs' shitlist if the sh*t hits the fan, so to speak.
The second option is to push for neural interfacing for us humans, so we can connect with any potential AIs and develop a symbiotic relationship.
Lastly, we could ignore this issue and just deal with it as it comes. Sit back and enjoy a drink while we can.
|
|
|
|
|
Thanks for your reply. The fact is that the issue is here; that's why I am reaching out to fellow developers for suggestions. In less than a year, applications of this code might be out in the wild. It will not require a supercomputer either, but could be squeezed onto smaller and smaller devices.
I like the AI rights suggestion. Perhaps we can act along those lines and limit the scope of what they are allowed; they will very strongly mimic human behaviour, so they can be governed by the same rules. Their rights might only exist under the most guarded situations, which they can help police.
To my mind, the major part of the problem is us humans, not the things we create; a knife is a helper until a murderer wields it. The moment we start trying to get above one another with this technology is the moment it becomes evil. If we use it properly, though, there will be huge benefits in almost all fields. We humans, like the machines, would need a new paradigm of civilisation.
|
|
|
|
|
I would say that Asimov's Three Laws of Robotics http://en.wikipedia.org/wiki/Three_Laws_of_Robotics[^] would be a good start. I would perhaps modify them by replacing "Human Being" with "Intelligent Being", or perhaps Larry Niven's "Legal Entity". This would cover non-Human intelligences as well, if or when they are discovered.
The problem of coding these laws is left as an exercise for the student...
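For anyone who fancies a go at the exercise: a purely illustrative toy in Python, which assumes (very generously) that actions arrive already labelled with whether they cause harm. That labelling is, of course, the entire unsolved problem; the field names here are made up for the sketch.

```python
# Toy sketch: Asimov's First Law as a guard function, assuming actions
# come pre-labelled with hypothetical harm flags. Purely illustrative -
# deciding those flags is the actual hard (unsolved) part.

def first_law_permits(action):
    """Reject any action flagged as harming an intelligent being,
    directly or through inaction."""
    return not (action.get("harms_intelligent_being", False)
                or action.get("inaction_allows_harm", False))

safe = {"name": "fetch coffee", "harms_intelligent_being": False}
unsafe = {"name": "ignore drowning swimmer", "inaction_allows_harm": True}

print(first_law_permits(safe))    # True
print(first_law_permits(unsafe))  # False
```

As you can see, the "code" is trivial once you hand-wave the hard bit, which is rather the joke of leaving it as an exercise.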
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
Just not most of the ones asking questions here.
What do you get when you cross a joke with a rhetorical question?
The metaphorical solid rear-end expulsions have impacted the metaphorical motorized bladed rotating air movement mechanism.
Do questions with multiple question marks annoy you???
|
|
|
|
|
So law 1 would become: "A robot may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm".
I guess we'd need to stop eating meat in that case then (there was no clause about how intelligent a being should be)?
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
|
|
|
|
|
Brent Jenkins wrote: So law 1 would become: "A robot may not injure an intelligent being or, through inaction, allow an intelligent being to come to harm". The problem with that is that the first sign that robots have achieved intelligence/awareness/consciousness, etc., is that they will realize that human beings are not intelligent.
«I want to stay as close to the edge as I can without going over. Out on the edge you see all kinds of things you can't see from the center» Kurt Vonnegut.
|
|
|
|
|
It's likely we'll just create something as dumb as (or worse than) ourselves. I can imagine a future where there are specialist TV channels with robot-only reality TV programmes...
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
|
|
|
|
|
Thanks, I'd never heard of Larry Niven. Good info.
Asimov's laws can be written into the code, but how do we keep our fellow humans from removing such safeguards?
|
|
|
|
|
Asame Imoni Obiomah wrote: Asimov's laws can be written into the code, but how do we keep our fellow humans from removing such safeguards?
You have a similar problem with humans that were brought up properly, but turned bad in adulthood. Solve one problem, and you've solved the other.
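One (admittedly weak) technical angle on the removal problem is tamper detection: refuse to run if the safeguards no longer hash to a known-good value. A sketch, with a made-up one-line rules file standing in for the real thing; note that anyone who can edit the rules can edit the check as well, which is rather the point of the question.

```python
# Hypothetical sketch: detect (not prevent) tampering with a rules
# blob by comparing its SHA-256 digest to a known-good value.
# The "rules" content here is invented for illustration.
import hashlib

KNOWN_GOOD = hashlib.sha256(b"RULE 1: do no harm\n").hexdigest()

def rules_intact(rules_bytes):
    """Return True only if the rules still hash to the baked-in digest."""
    return hashlib.sha256(rules_bytes).hexdigest() == KNOWN_GOOD

print(rules_intact(b"RULE 1: do no harm\n"))            # True
print(rules_intact(b"RULE 1: do whatever you like\n"))  # False
```

Detection is the easy half; stopping a determined human with physical access is the half nobody has solved, for software or for people.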
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
Thanks for that Wikipedia link. There are tons of useful resources linked that deal with this problem.
|
|
|
|
|
You've got a big problem here for a start: the term "intelligence" really means "human-like intelligence", so you're going to be trying to make something that "thinks" like us.
But we're all flawed so no matter how hard you try, you're going to build some (or all) of those flaws into whatever system you create.
Secondly, what is "ethical"? "Ethics" is different from country to country and between different cultures. How do you even start to think about quantifying it to the point that you can write an algorithm?
If you ask me, you're on a hiding to nothing. Try improving the "real" intelligence of the world first (a tough enough job in itself).
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
|
|
|
|
|
Without intervention, the system will be able to overcome human flaws given time.
However, it would seem that our security against any artificially intelligent agent lies in it inheriting our flaws. So, instead of wiping our flaws out of their code, or allowing such an agent to cleanse itself of these flaws, we can amplify them and nobble its ability to communicate outside certain fixed bounds.
It's quite an engaging point you've raised about ethics. A true curveball indeed. The thing with an intelligent network, though, is that variation enriches, so we could actually see both melding and growth in culture and understanding.
We most certainly would lose with bigoted software (sounds so strange), so yes, guarding against bigotry would be a very important rule to hard-code from the start.
|
|
|
|
|
I am just wondering: would it be nice to have a virtual development environment? Like having all the development tools (database, IDE, local server, etc.) portable on a USB stick, so you can plug it into a different computer and not have to set up everything.
|
|
|
|
|
Looks like it exists (from a very quick Google) SharpDevelop[^] apparently runs from a USB stick.
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
|
|
|
|
|
Yes, there are plenty of portable IDEs and source code editors. But the local server and database setup, things like that.
|
|
|
|
|
I'm pretty sure that you can't run SQL Server from a stick - it's a set of services, and AFAIK they have to be installed into the system.
You could always create a VM with all your "favourites" loaded and put that on the stick?
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
|
|
|
|
|
SQLite could work from a stick if I recall correctly.
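It does: SQLite is just a single file plus a library, so "running it from a stick" amounts to opening a database file at the stick's path. A quick Python sketch (the temp directory here stands in for your USB drive letter or mount point):

```python
# SQLite needs nothing installed - the database is an ordinary file.
# tempfile.mkdtemp() stands in for a USB-stick path like E:\portable.db.
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "portable.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO notes (body) VALUES (?)", ("hello from the stick",))
con.commit()
print(con.execute("SELECT body FROM notes").fetchone()[0])  # hello from the stick
con.close()
```

Pull the stick out, plug it into another machine with Python installed, and the same file opens unchanged.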
|
|
|
|
|
SQLite will - but it's a single user system, not multiuser.
Access and SQL CE will work from a stick as well.
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
|
|
|
|
|
A matter of definition: it works with multiple users and is generally thread-safe, but it uses file locking when writing, so it's not very useful for that.
I would never run a multi user system from a stick in any case.
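The file-locking point is easy to demonstrate: while one connection holds the write lock, a second writer is turned away rather than interleaved. A small Python sketch (`timeout=0` just means "don't wait for the lock"):

```python
# Demonstrates SQLite's whole-database write lock: a second writer
# gets "database is locked" instead of writing concurrently.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lock_demo.db")

# Writer #1 takes and holds the write lock.
con1 = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con1.execute("CREATE TABLE t (x INTEGER)")
con1.execute("BEGIN IMMEDIATE")
con1.execute("INSERT INTO t VALUES (1)")

# Writer #2 refuses to wait for the lock and is refused.
con2 = sqlite3.connect(path, timeout=0)
err = None
try:
    con2.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError as e:
    err = e
print(err)  # database is locked

con1.execute("COMMIT")
con2.close()
con1.close()
```

Fine for one user on one stick; hopeless for a genuinely multi-user workload, which is the point above.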
|
|
|
|