|
mark merrens wrote: I think you're being a luddite. I can't see what difference it makes: would you rather leave it to chance?
And I know you're being blind and shortsighted. Might want to go experience more life then try again.
Jeremy Falcon
|
|
|
|
|
Quote: And I know you're being blind and shortsighted. Might want to go experience more life then try again.
What? Have you not understood any of this? Apparently not!
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
mark merrens wrote: What? Have you not understood any of this? Apparently not!
If that's what you must believe to rationalize your point of view, go right on ahead, blind man.
Jeremy Falcon
|
|
|
|
|
a) the fact that you are getting personal shows the weakness of your point of view and b) you appear to have gone off on some sort of tangent.
What, exactly, is your objection to robots, under very specific circumstances, deciding that the result of an accident could be somewhat mitigated (i.e. more people will live) by taking a specific course of action at the last moment?
How is this any worse than maintaining that blind luck and chance are a better arbiter?
Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
mark merrens wrote: a) the fact that you are getting personal shows the weakness of your point of view
Ok, this is my last reply since you obviously would rather argue than learn. God this sounds childish, so shame on me for entertaining you this far. My bad. But, you got personal first. Duh. What a waste of time.
mark merrens wrote: you appear to have gone off on some sort of tangent.
Of course it seems like that; you're shortsighted and blind. What else would it look like to someone who has very little life experience? Instead of arguing you could say "I don't get it", then I'd explain, or attempt to, or we could agree to disagree instead of acting like children. But no, I'm a luddite. That's the easy way out to avoid thinking. That must be it. A programmer that hates technology. Makes sense.
mark merrens wrote: How is this any worse than maintaining that blind luck and chance are a better arbiter?
You really are blind, man. You need to step away from computers for a while and see the rest of the world you're blind to, if you honestly can't see it. Seriously, man. This ain't an insult no matter how you want to take it; it's saying you really need to open your eyes. Doing so does not mean one hates technology, but not doing it leaves one with a very limited view of the world, one that is impossible to see from behind a computer screen.
mark merrens wrote: Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
Yeah, I'm soulless for defending the only thing with a soul. And you're not because you think something soulless should exercise the right as to whether or not a soul should exist.
Have fun not learning. Bye bye now!
Jeremy Falcon
|
|
|
|
|
Jeremy Falcon wrote: something soulless should exercise the right as to whether or not a soul should exist.
couldn't resist jumping in.
I don't think anyone is suggesting a machine deciding who should die or who should live in some sort of rise-of-the-robots world, but rather allowing different actions to be taken depending on programmed criteria - such as the number of possible casualties.
Say you were driving down the street when a kid runs into the road in front of you, chasing a ball.
You swerve to avoid him (as you naturally would) ... and plough into a bus stop, killing two kids.
If you had known there were two kids at the bus stop, would you have swerved or not?
A computer could (potentially) make that call - kill one or two.
Of course, there may be a third option: drive off the cliff and kill you, the driver. Maybe, armed with the previous knowledge, that's what you would have done - you would rather die than kill a child. Good call, probably.
But what, now, if your child is in the car?
Kill someone else's child? Kill two other children, or kill you and yours?
Tough one, eh?
Using a computer to take over the decision (which it can also compute faster than you) would depend on the programming - but it might (for example) determine that a cliff plunge would certainly be fatal, as would running over the kid in front of you, but driving into the bus stop has a slightly higher chance of non-fatal injuries, and so is the right call.
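To make that concrete, here's a minimal, purely hypothetical sketch of that kind of comparison - the manoeuvre names and probability figures are invented, not taken from any real vehicle software - where the car simply picks the option with the lowest expected number of fatalities:
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical example only: illustrates "least expected harm", nothing more.
class Manoeuvre
{
    public string Name;
    public int PeopleAtRisk;            // how many people this option endangers
    public double FatalityProbability;  // rough estimated chance each of them dies
    public double ExpectedFatalities => PeopleAtRisk * FatalityProbability;
}

class CrashChoice
{
    static void Main()
    {
        var options = new List<Manoeuvre>
        {
            new Manoeuvre { Name = "Brake, hit the kid in the road", PeopleAtRisk = 1, FatalityProbability = 0.9 },
            new Manoeuvre { Name = "Swerve into the bus stop",       PeopleAtRisk = 2, FatalityProbability = 0.4 },
            new Manoeuvre { Name = "Drive off the cliff",            PeopleAtRisk = 1, FatalityProbability = 1.0 }
        };

        // Pick whichever option is expected to kill the fewest people.
        var best = options.OrderBy(o => o.ExpectedFatalities).First();
        Console.WriteLine($"Chosen: {best.Name} ({best.ExpectedFatalities:F1} expected fatalities)");
    }
}
Change the made-up numbers and the "right call" flips - which is exactly the uncomfortable part.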
Do you believe that we shouldn't install that sort of technology on the grounds that a machine doesn't have a soul?
|
|
|
|
|
mark merrens wrote: Is your objection that technology is soulless and shouldn't be allowed to decide the fate of humans?
Actually I read that last part wrong, in a lightning-fast attempt to move on...
Here's the short answer to that: yes!
So yay, my bad, twice.
Jeremy Falcon
|
|
|
|
|
Just in case you can't resist because your ego is so massive. See, here you are.
Intimating that you are a luddite is not getting personal: it is an observation.
However, I do believe you are an arrogant twat incapable of understanding anything but your own perspective. Good luck with that: you'll need it in real life.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
mark merrens wrote: I don't see why anyone would be upset about this unless they simply reacted without thinking. Robots may be emotionless and logical 'thinking' things, humans are not
I wouldn't know why anyone would be upset over gay marriage, over sex before marriage, over women having rights, over having a TV in your house, or over working on Sundays... And those are things you can choose to do or not do. Still, people get mad to the extent that they are willing to kill others for it, just because they think it's not how it's supposed to be.
It's an OO world.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
|
|
|
|
|
I'd rather it spent its cycles slowing the car.
You'll never get very far if all you do is follow instructions.
|
|
|
|
|
I believe the assumption is that it is beyond that - the accident is going to happen.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
I think this is a spurious situation, arising from our innate tendency to anthropomorphise the 'robot'.
I don't believe any robot car will ever* be programmed to make this sort of decision in this way. A car will never be able to know who the passengers of another car are, for privacy reasons. They will be (are?) programmed to do everything possible to safely avoid a collision. If the anti-collision routines of both cars cannot avoid colliding, the severity of the crash should be vastly diminished (via braking, evasive action etc. faster than any human could).
On some very rare occasions (barring programming errors) a serious crash will be unavoidable, and will occur.
A car will never* make any decision about the people riding in it, or in any other vehicle.
* at least until a sentient AI is created.
|
|
|
|
|
Yeah, I think that was pretty much already said.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
Yes and no.
Yes, because a rational and impartial program will be better at judging the odds and finding the 'solution' with the least loss, most of the time. Especially when that solution has to be found within a split second! Humans cannot make such a decision as quickly, because when you're forced to react, the subconscious takes over, and will always try to preserve your own, personal, life, no matter how many other lives are at stake! I'm not sure how I could live with the knowledge that my own survival cost the lives of a hundred other people. Especially if some of them were friends or relatives!
No, because it is humans who ultimately write the programs to make these decisions. Humans make errors, but it takes software and computers to turn such errors into catastrophes! Besides, what makes us think nobody will go ahead and manipulate that software to their own benefit, or worse, to cause catastrophic mass accidents?
The optimist in me wants to believe that the benefit of the former will outweigh the risk of the latter. But the realist tells me that one day a single incident will make me regret it.
|
|
|
|
|
Who would drive a car that can 'decide' to kill you?
What if the driver of the family of four is pretty sharp today and could have dodged your car at the last split second? Too late, your car has already thrown you off a cliff...
A car might be able to predict what is going to happen if everything stayed as it is now (that is other drivers will not speed up, slow down, make a turn etc.), but it cannot predict what others will do and what the consequences of their actions will be.
It's an OO world.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
|
|
|
|
|
"Damned cars, that was our second kamikaze blowing up the parking".
Veni, vidi, vici.
|
|
|
|
|
This is really interesting, and was already debated (to some extent) with the Law Zero[^] added to Asimov's initial three Laws.
Practically, there is a huge difference in the information required to fulfil Law Zero versus Law One: you can easily evaluate the facts for one person or a handful of people in a car, but for humanity? Maybe one of the people killed because of the AI's decision would have had a big influence on humanity's destiny (because he was a researcher, or a dictator, etc...)
So we see that all four laws are required for the decision to be the fairest possible, but Law Zero cannot be easily implemented. That law would also be the one required to properly answer the question in your post.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus
Entropy isn't what it used to.
|
|
|
|
|
Indeed, though I think everyone is overthinking this. The bots will do everything to prevent an accident, and I doubt that they would ever be given the power to decide whether the occupants of car A will live and those of car B will die. Still, it's fun to discuss the possibilities.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
Those who seek perfection will only find imperfection
nils illegitimus carborundum
me, me, me
me, in pictures
|
|
|
|
|
I think car technology will improve safety long before AI will be able to decide about one's fate, so there's a good chance the situation of having to make that choice will never happen.
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus
Entropy isn't what it used to.
|
|
|
|
|
Since we humans can't cope with the thought of letting a computer, in this case a car, decide whether a living creature should survive or not, why should it be able to choose whether a few more lives are more important than a few fewer? It'll reach the (international) news anyway, blaming the computer for its actions.
So, let it just gather all the information on the crash, sit back and act like a 3D camera, making sure it is 100% a human's fault that someone died. My answer is no.
|
|
|
|
|
I'm surprised that nobody mentioned Asimov so far (at least AFAIK, nobody mentioned him)
I believe that the poll is misleading (particularly the part that says "especially if I paid for it" - that's just crap to drive people to pick the suicide choice as the "morally correct" one).
The two choices set as possible outcomes to the question posed to the robot are:
1. Kill the occupant(s) only.
2. Possibly kill the occupant(s) and occupant(s) of other bot-car(s) as well
If the three laws apply, then both of these choices would be rejected immediately as violating the first law (actively killing the occupants, or, by doing nothing - i.e. inaction - possibly killing others). The bot-car would probably try to steer away from ALL oncoming traffic, and ALL oncoming traffic would probably try to steer away from the bot-car. In the end all bot-cars would actively try to save their occupants and the occupants of the other bot-cars first, and themselves (i.e. the bots) second.
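As a toy illustration of that filtering (nothing more - the class names and flags below are invented for the example), a first-law check might simply discard any candidate action that harms a human either actively or through inaction, before the car even starts ranking what's left:
using System;
using System.Collections.Generic;
using System.Linq;

// Invented names and data - just to show the shape of a "First Law" filter.
class CandidateAction
{
    public string Name;
    public bool ActivelyHarmsHuman;     // the manoeuvre itself would injure someone
    public bool IgnoresAvoidableHarm;   // "through inaction, allow a human to come to harm"
}

class FirstLawFilter
{
    static void Main()
    {
        var candidates = new List<CandidateAction>
        {
            new CandidateAction { Name = "Hold course",           ActivelyHarmsHuman = false, IgnoresAvoidableHarm = true  },
            new CandidateAction { Name = "Swerve into traffic",   ActivelyHarmsHuman = true,  IgnoresAvoidableHarm = false },
            new CandidateAction { Name = "Brake and steer clear", ActivelyHarmsHuman = false, IgnoresAvoidableHarm = false }
        };

        // First Law: reject anything that harms a human, directly or through inaction.
        var permitted = candidates.Where(c => !c.ActivelyHarmsHuman && !c.IgnoresAvoidableHarm);

        foreach (var action in permitted)
            Console.WriteLine("Permitted: " + action.Name);
    }
}
Of course, in the scenario from the poll every option sets one of those flags, which is exactly why the first law alone gives the bot-car no acceptable answer.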
Φευ! Εδόμεθα υπό ρηννοσχήμων λύκων!
(Alas! We're devoured by lamb-guised wolves!)
|
|
|
|
|
Interesting problem. I wonder what the person in the car that's about to slam into the SUV loaded with the family with 4 kids would do if given the choice?
Along with Antimatter and Dark Matter they've discovered the existence of Doesn't Matter which appears to have no effect on the universe whatsoever!
Rich Tennant 5th Wave
|
|
|
|
|
Ok car. Drive over the cliff.
Are you sure?
Ah, too late...
If I had purchased a 'smart' car that was stupid enough to get into such a situation, I would ask for my money back. That's assuming I survived the crash.
I may not last forever but the mess I leave behind certainly will.
|
|
|
|
|
I better stop kicking the tires.
|
|
|
|
|
"Save the girl!"
I doubt we'll ever be able to program all the factors that should be considered into that equation of who should die and who is worth preserving. Worse, as soon as that gets programmed into cars, someone somewhere will abuse it by deciding that their life is more valuable than N others and force that to get written into the programming. I don't so much mean individuals as classes of people -- should we preserve doctors over McDonald's clerks, or political leaders over soldiers?
No, cars (or robots in general) should not make these kinds of value-of-human-life decisions. They're better left to us humans, who will make them with incomplete information and totally subjectively, just like we've always done.
We can program with only 1's, but if all you've got are zeros, you've got nothing.
|
|
|
|
|