|
I agree, animals are self-aware in some form, and their brains are less complex.
|
|
|
|
|
Good to see some support
|
|
|
|
|
'The Adolescence of P-1', by Thomas J. Ryan.
Dave.
|
|
|
|
|
I believe there are aspects of a biological organism that just cannot be fully replicated in electronics.
We may get to the point where CPUs and software can replicate the processing power of a human brain (as one post says, we are as close to that as Earth is to the edge of the universe), but I don't think it would ever be "alive" or aware of itself.
Of course, this is my non-professional opinion so take it with a grain of salt.
This is indeed a very interesting and thought-provoking post.
|
|
|
|
|
Yeah, it is hard to get around this thought. I was thinking that if you can hold a meaningful conversation with a machine, and it can recognize you and respond to your emotions, I don't see why it shouldn't be considered aware of its environment and of what's going on, at least.
|
|
|
|
|
You know, you've looked into a can of worms that people have been looking into for many, many years.
I would argue that once a computer passes the Turing test, it will probably demand human rights or "intelligent lifeform" rights, and will probably get some form of legal protection.
I would call it self-aware; it would probably call itself self-aware.
Good question. And judging by the number of responses many other people are interested too.
|
|
|
|
|
Nice reply, Tim Yen.
|
|
|
|
|
When you build it, you should ask it.
|
|
|
|
|
Good one, but I think it could argue that it is self-aware and convince a lot of people, and if it did that, then I think it deserves to be considered self-aware.
|
|
|
|
|
One newbie mistake is looking at this from *only* a computer science aspect. Defining consciousness, as well as answering certain fundamental questions such as how it arises and is maintained, is currently being researched very heavily.
Anyone coming back with solely "In my opinion, *blah* defines consciousness" will be summarily dismissed.
Thanks,
Sean
|
|
|
|
|
I'm afraid I did not look at this from "only" a computer science perspective; I have done research in neural sensory processing as well. And I don't seem to get your point; the reply is not clear.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man.”
|
|
|
|
|
In the free online course CS101 from Stanford University, available at www.coursera.com, they say that:
The fundamental equation of computers is:
Computer = Powerful + Stupid
Where does "Extreme Artificial Intelligence" come in?
|
|
|
|
|
Amarnath S wrote: they say that:
The fundamental equation of computers is:
Computer = Powerful + Stupid
It's their opinion.
Amarnath S wrote: Where does "Extreme Artificial Intelligence" come in?
Extreme artificial intelligence comes in because I'm talking about mimicking human intelligence in a machine.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
modified 11-May-12 9:31am.
|
|
|
|
|
Before asking this question, how do you know that I am self-aware?
It's quite easy for a program to answer an "Are you self-aware?" question.
My opinion is that if we can't distinguish a program from a human, or any creature, then we can say that it has self-awareness. Googling "Turing Test" may give more information.
But I don't think the way to achieve such intelligence is the same structure as the human brain.
|
|
|
|
|
vault_zry wrote: My opinion is that if we can't distinguish a program from a human, or any creature, then we can say that it has self awareness.
Yes, that is right in some way.
vault_zry wrote: Google Turing Test may give more information.
I know what the Turing test is.
vault_zry wrote: But I don't think the way to achieve such intelligence is the same structure as human brain.
Yes, such intelligence could be achieved with designs other than that of the human brain. I used the human brain in the question as a reference because we are convinced it is the most advanced signal processor, and it gives us the self-awareness we enjoy.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
|
|
|
|
Awareness is one thing. Being aware that you are aware is something different. Animals are aware but not self-aware (well, so I've heard).
Self-awareness is being conscious. This brings to the fore how something develops consciousness. Is it a specific arrangement of molecules? Is it a specific set of chemical reactions? Or is it maybe a specific set of computations?
I think consciousness works on a whole different level where computers will never dwell.
So no self-aware computers or programs will likely ever be seen.
|
|
|
|
|
EbenRoux wrote: So no self-aware computers or programs will likely ever be seen.
Well, that's your opinion. I think self-awareness has something to do with short-term working memory and a set of computations, which can be emulated in a program. We probably have self-aware programs now; they need not be as complex as the human brain.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
|
|
|
|
I don't think there's anything extreme about the prospect. What does it really mean for a program to be self-aware? If we can say a program can be "aware" of *anything*, then surely, if its model in any way includes any aspect of itself, it IS self-aware.
Many people will probably have a hard time accepting this. But to those I ask: specify by what criterion we can say that the brain is self-aware. And was the brain self-aware before we knew we have brains? I'm not just speaking of an evolutionary perspective here; each and every one of us was born without the slightest clue that we possessed a brain - it is something we became aware of years into our lives. In fact, we weren't even aware of our own individuality (which of course is what we mean when we say a human is self-aware, rather than the more specific proposal that the brain is self-aware) until years into our lives. And no matter how wise and old you may be, you are *still* unaware of the vast majority of what is going on inside your head. When you speak, you are merely aware of part of the "top layer" occupied with expressing some idea or opinion, while all the lower-level processing required to unfold words into phonemes and phonemes into sequences of precise motor action (and surely a lot I don't know about) is totally transparent.
In the end, I think this whole issue of self-awareness is just a special case of the larger problem of perception. We all experience it and so can agree that it is a real phenomenon. There has never been a hint of solid evidence that it is anything but a "side effect" of the physical activity in our brains, but nor do we have even the beginning of a clue as to how and why the phenomena we observe in the brain actually lead to perception. In other words, if we were faced with a machine other than a brain that implemented self-awareness in a different way (if that is even possible), we have no reason at all to believe we would recognize that it was - at least not unless the machine could somehow express this self-awareness in a manner detectable and understandable to us.
Numenta are seemingly having some success in building "intelligent" machines, or at least machines that are cracking the kind of problems 50 years of traditional AI research could not. And with some "animal-like" characteristics, such as being much better at recognizing visual objects when they move and more generally a dependency on the temporal aspect in sensing that has so often been ignored.
|
|
|
|
|
dojohansen wrote: I don't think there's anything extreme about the prospect. What does it really mean for a program to be self-aware?
The extreme part comes in because I think that if a program is to mimic human intelligence, at least at a level where one can have a meaningful conversation with it and the program is aware of its environment, then that's extreme artificial intelligence. I think self-awareness stems from intelligence, which most animals possess but current programs seem to fall short of.
Having a program able to demonstrate such attributes can be considered extreme artificial intelligence, more like the peak of AI.
But I do agree with you on the other points you stipulated.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
|
|
|
|
Boringly, we probably agree on all points then. A program demonstrating anything resembling generalized intelligence at anywhere near a human capability level would indeed be extreme. I just don't think being self-aware necessarily has that much to do with intelligence, even if it did emerge in us that way. Self-modifying code is in a (limited) sense self-aware.
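To illustrate that last point, here is a toy sketch of my own (not from any post above): a program that keeps part of its own behaviour as data and rewrites that data at runtime. It is "self-aware" only in the very limited sense that the program's model includes a piece of the program itself.

```python
class SelfModifying:
    """A toy self-modifying program: its behaviour lives in its own
    editable model (rule_source), which it can rewrite at runtime."""

    def __init__(self):
        # the program's "model" of its own behaviour, stored as source text
        self.rule_source = "lambda x: x + 1"
        self.rule = eval(self.rule_source)

    def apply(self, x):
        return self.rule(x)

    def rewrite(self, new_source):
        # the program changes its own code-as-data
        self.rule_source = new_source
        self.rule = eval(new_source)

p = SelfModifying()
print(p.apply(1))              # 2
p.rewrite("lambda x: x * 10")  # the program alters its own behaviour
print(p.apply(1))              # 10
```

Of course, this "awareness" is exactly as limited as the post says: the program holds a representation of part of itself, but nothing here perceives anything.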
|
|
|
|
|
dojohansen wrote: I just don't think being self-aware necessarily has that much to do with intelligence, even if it did emerge in us that way
I think that for a program to induce that it is self-aware, it must have the "mental capacity" to do just that. Induction at such levels requires intelligence; for you to induce that you exist, and are therefore self-aware, requires some thought and intelligence.
dojohansen wrote: Self-modifying code is in a (limited) sense self-aware
The ability to modify its own code is another "mentally" demanding activity; without a form of intelligence, I doubt that a program would be able to modify its own code. Therefore, self-awareness requires intelligence at a certain level.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
|
|
|
|
Not really.
This "code" is kinda the same for the robot and a person:
if self is touching("Hot Stove") feelHot();
But for the person, the function feelHot will have several million, if not billions, of lines of code. The robot will have much, much less.
In the function, the robot will probably have something that tells it to move its hand away and maybe inspect it for damage, but that's it.
The person, on the other hand, would actually feel the pain. Maybe they will start crying, maybe they will get ice, or maybe do something else. It is impossible to predict.
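The reflex described above can be sketched in a few lines (a hedged illustration of my own; the names `feel_hot` and `sense` are made up for the example). The point is how little the robot branch contains compared with everything feelHot would hide in a person:

```python
# Minimal robot reflex: the entire "feelHot" response fits in two lines,
# whereas a person's equivalent would hide enormous unseen processing.
def feel_hot(robot):
    robot["hand_position"] = "withdrawn"  # move the hand away
    robot["needs_inspection"] = True      # check for damage, and that's it

def sense(robot, touching):
    if touching == "hot stove":
        feel_hot(robot)

robot = {"hand_position": "on stove", "needs_inspection": False}
sense(robot, "hot stove")
print(robot["hand_position"])  # withdrawn
```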
If you write a genius new sorting algorithm, the computer will not be aware of it. Other people may inspect it and find it really smart and cool, but the computer will just execute it faster than a slower algorithm. You can teach a 10-year-old basic math and he will probably be able to use the algorithm and perform the operations specified there, but he will probably not be aware of how it works.
Even the basic functions, for a human, have unimaginable complexity.
For a computer, adding 2 numbers is just adding 2 numbers. But for humans, each number can also invoke a specific memory, feeling, etc.
|
|
|
|
|
Complexity does not imply self-awareness. The ability to be unpredictable does not imply self-awareness, but the ability to associate stimuli with a specific response and the ability to self-monitor do imply self-awareness; there is no need to be as complex as a human to attain self-awareness. Oh, and it's not the computer in question but the computer program.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
|
|
|
|
I have an image taken from a camera. Using AForge.NET, I have filtered the image and got the 2D coordinates of the circle in the picture. Now I want to relate it to the real world and get the coordinates of the circle in the real world (3D coordinates) with reference to the camera location.
How can I get it? Please help.
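One common approach (a sketch of my own, not from the post): a single 2D point alone does not determine depth, but if the camera is calibrated (focal lengths `fx`, `fy` in pixels and principal point `cx`, `cy` are assumed to come from a calibration step) and the circle's physical radius is known, the pinhole model gives depth from apparent size, and the 3D point in the camera frame follows by back-projection:

```python
# Pinhole-camera sketch: depth from apparent size, then back-projection.
# fx, fy: focal lengths in pixels; cx, cy: principal point (assumed known
# from calibration); radius_px: circle radius in the image; radius_m: the
# circle's real physical radius in meters.
def back_project(u, v, radius_px, radius_m, fx, fy, cx, cy):
    z = fx * radius_m / radius_px  # depth by similar triangles
    x = (u - cx) * z / fx          # back-project the pixel into the
    y = (v - cy) * z / fy          # camera coordinate frame
    return (x, y, z)

# Example: a 5 cm-radius circle seen as 50 px at the image center, fx = 1000
print(back_project(640, 360, 50.0, 0.05, 1000.0, 1000.0, 640.0, 360.0))
# (0.0, 0.0, 1.0) -- one meter straight ahead of the camera
```

If the circle's real size is unknown, you would need a second view or another depth cue; libraries such as OpenCV also provide full calibration and pose-estimation routines for this.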
|
|
|
|
|