|
I don't think there's anything extreme about the prospect. What does it really mean for a program to be self-aware? If we can say a program can be "aware" of *anything*, then surely if its model in any way includes any aspect of itself, it IS self-aware.
Many people will probably have a hard time accepting this. But to them I ask: by what criterion can we say that the brain is self-aware? And was the brain self-aware before we knew we have brains? I'm not just speaking from an evolutionary perspective here: each and every one of us was born without the slightest clue that we possessed a brain - it is something we became aware of years into our lives. In fact, we weren't even aware of our own individuality (which is of course what we mean when we say a human is self-aware, rather than the more specific proposal that the brain is self-aware) until years into our lives. And no matter how wise and old you may be, you are *still* unaware of the vast majority of what is going on inside your head. When you speak, you are merely aware of part of the "top layer" occupied with expressing some idea or opinion, while all the lower-level processing required to unfold words into phonemes and phonemes into sequences of precise motor action (and surely a lot I don't know about) is totally transparent.
In the end, I think this whole issue of self-awareness is just a special case of the larger problem of perception. We all experience it and so can agree that it is a real phenomenon. There has never been a hint of solid evidence that it is anything but a "side effect" of the physical activity in our brain, but nor do we have even the beginning of a clue of how and why the phenomena we observe in the brain actually lead to perception. In other words, if we were faced with a machine other than a brain that implemented self-awareness in a different way (if that is even possible), we would have no reason at all to believe we would recognize that it was - at least not unless the machine could somehow express this self-awareness in a manner detectable and understandable to us.
Numenta seems to be having some success in building "intelligent" machines, or at least machines that are cracking the kind of problems 50 years of traditional AI research could not - and with some "animal-like" characteristics, such as being much better at recognizing visual objects when they move, and more generally a dependency on the temporal aspect of sensing that has so often been ignored.
|
|
|
|
|
dojohansen wrote: I don't think there's anything extreme about the prospect. What does it really mean for a program to be self-aware?
The extreme part comes in because I think that if a program had to mimic human intelligence, at least at a level where one can have a meaningful conversation with the program and the program is aware of its environment, then that's extreme artificial intelligence. I think self-awareness stems from intelligence, which most animals possess but current programs seem to fall short of.
Having a program able to demonstrate such attributes can be considered extreme artificial intelligence - more like the peak of AI.
But I do agree with you on the other points you raised.
“Be at war with your vices, at peace with your neighbors, and let every new year find you a better man or woman.”
|
|
|
|
|
Boringly, we probably agree on all points then. A program demonstrating anything resembling generalized intelligence at anywhere near a human capability level would indeed be extreme. I just don't think being self-aware necessarily has that much to do with intelligence, even if it did emerge in us that way. Self-modifying code is in a (limited) sense self-aware.
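To make that "limited sense" concrete, here is a toy Python sketch (purely illustrative, not a claim about how awareness works): a function that rebinds its own name after the first call, so its future behaviour depends on an aspect of itself.

```python
# Toy illustration of self-modifying code: after the first call,
# the function replaces its own binding with a different implementation.

def greet():
    global greet

    def short_greet():
        return "hi again"

    greet = short_greet  # the code rebinds its own name
    return "hello, first time"

print(greet())  # hello, first time
print(greet())  # hi again
```

Whether rebinding a name counts as being "aware" of oneself is, of course, exactly the point under debate.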
|
|
|
|
|
dojohansen wrote: I just don't think being self-aware necessarily has that much to do with intelligence, even if it did emerge in us that way
I think for a program to infer that it is self-aware, it must have the "mental capacity" to do just that. Inference at such a level requires intelligence: for you to infer that you exist, and are therefore self-aware, requires some thought and intelligence.
dojohansen wrote: Self-modifying code is in a (limited) sense self-aware
The ability to modify its own code is another "mentally" demanding activity; without some form of intelligence, I doubt a program would be able to modify its own code. Therefore self-awareness requires intelligence at a certain level.
|
|
|
|
|
Not really.
This "code" is kinda the same for the robot and a person:
if (self.isTouching("Hot Stove")) feelHot();
But for the person, the function feelHot will have several million, if not billions, of lines of "code". The robot's will have far, far fewer.
In the function, the robot will probably have something that tells it to pull its hand away and maybe inspect it for damage, but that's it.
The person, on the other hand, would actually feel the pain. Maybe they will start crying, maybe they will get ice, or maybe do something else. It is impossible to predict.
If you write a brilliant new sorting algorithm, the computer will not be aware of it. Other people may inspect it and find it really smart and cool, but the computer will just execute it faster than a slower algorithm. You can teach a 10-year-old basic math and he will probably be able to use the algorithm and perform the operations specified there, but he will probably not be aware of how it works.
Even the most basic functions, for a human, have unimaginable complexity.
For a computer, adding 2 numbers is just adding 2 numbers. But for a human, each number can also evoke a specific memory, feeling, etc.
|
|
|
|
|
Complexity does not imply self-awareness, and neither does the ability to be unpredictable. But the ability to associate stimuli with specific responses, and the ability to self-monitor, do imply self-awareness; there is no need to be as complex as a human to attain it. Oh, and it's not the computer in question but the computer program.
|
|
|
|
|
I have an image taken from a camera. Using AForge.NET, I have filtered the image and got the 2D coordinates of the circle in the picture. Now I want to get the coordinates of the circle in the real world (3D coordinates) with reference to the camera location.
How can I get it? Please help...
|
|
|
|
|
|
With a circle I think it is not possible. Imagine you're looking at the circle straight from the front: there is no way to tell the camera's rotation, as the circle is symmetric!
|
|
|
|
|
TomasRiker2 wrote: With a circle I think it is not possible. Imagine you're looking straight from the front at the circle, then there is no way to tell the camera's rotation, as the circle is symmetric!
The OP is asking for the 3D co-ordinates of the center of the circle, not a rotation angle.
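For what it's worth: from a single view you can only recover depth if you add an assumption, e.g. that the circle's physical radius is known and it faces the camera roughly head-on. Under those assumptions (and with made-up numbers - the function name and all values below are hypothetical), the pinhole model gives a sketch like this:

```python
# Hypothetical sketch: 3D position of a circle's center from one image,
# assuming a pinhole camera and a circle of KNOWN physical radius.

def circle_center_3d(u, v, r_px, f_px, cx, cy, radius_m):
    # Apparent size gives depth: a circle of radius R at distance Z
    # projects to roughly r_px = f_px * R / Z pixels.
    z = f_px * radius_m / r_px
    # Back-project the pixel (u, v) through the pinhole model.
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return x, y, z

# Assumed values: focal length 800 px, 640x480 image (principal point
# at 320, 240), a 5 cm radius circle seen as 40 px at pixel (400, 300).
print(circle_center_3d(400, 300, 40, 800, 320, 240, 0.05))
```

If the circle is tilted, its image is an ellipse, and using the major axis as r_px keeps the depth estimate roughly right; for anything more robust, a calibrated approach (e.g. OpenCV's solvePnP with several known reference points) is the usual route.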
|
|
|
|
|
Try OpenCV. It's a really easy-to-use and powerful library for C++.
|
|
|
|
|
I am wondering: why don't we use humidity and temperature to create new random numbers?
|
|
|
|
|
It was tried before...
Humidity - shorts out your keyboard.
Temperature - overheats your keyboard.
|
|
|
|
|
Because it's easy to control the humidity and temperature in a closed environment.
|
|
|
|
|
But it's true that humidity and temperature are controlled in some environments.
|
|
|
|
|
Right, that was my point. If it's controllable, then someone can duplicate the inputs which generated a series of values.
|
|
|
|
|
It wouldn't be random because humidity and temperature follow patterns. E.g. they both vary continuously; they don't "jump".
"Microsoft -- Adding unnecessary complexity to your work since 1987!"
|
|
|
|
|
So what? Any useful hash function applied to them will provide the "jumps".
|
|
|
|
|
So what does the hash function do when the temperature zig-zags over the same fractions of a degree, at the limit of the thermometer's resolution? It repeats numbers more than you'd expect by chance.
It might be possible to overcome this limitation by incorporating state into the function, i.e. using the history of values in the calculation so you get different output for repeated inputs.
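For illustration, that stateful idea might look like the following sketch (SHA-256 standing in for "any useful hash function"; the seed and readings are arbitrary):

```python
# Sketch: mix each new sensor reading into a running hash state, so
# repeated readings still produce different outputs.
import hashlib

state = b"arbitrary seed"

def next_random(reading):
    """Absorb a reading into the state and return 32 pseudo-random bits."""
    global state
    state = hashlib.sha256(state + repr(reading).encode()).digest()
    return int.from_bytes(state[:4], "big")

# The same temperature twice no longer repeats the output:
a = next_random(15.264)
b = next_random(15.264)
print(a != b)  # True
```

This is essentially how real entropy pools work: the sensor contributes unpredictability, while the chained hash removes the repetition problem.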
|
|
|
|
|
So we can still use environmental parameters to generate random numbers....
|
|
|
|
|
In short, because there are better environmental sources of randomness.
|
|
|
|
|
BobJanova wrote: there are better environmental sources of randomness.
Care to give some better alternatives? Just curious.
Signature construction in progress. Sorry for the inconvenience.
|
|
|
|
|
electronic noise?
Ciao,
luker
|
|
|
|
|
Thermal noise is a common one. The interval between photons on a light detector would work well if the intensity was low enough. Radioactive decay is a classic, which you can use through a web service if you're connected to the Internet.
Intervals between discrete events are generally better, as a continuous variable will always have linkage between readings taken close together in time (i.e. if it's 15.264C now, a measurement taken next time round the loop is more likely to be 15.263, 4 or 5 than it is any other value).
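As a toy illustration of the "intervals between discrete events" idea (the interval values below are invented stand-ins for, say, photon arrival gaps): compare successive pairs of intervals and discard ties, which also strips out simple bias, von Neumann style.

```python
# Turn intervals between discrete events into bits by comparing
# successive pairs; equal pairs are discarded as uninformative.

def intervals_to_bits(intervals):
    bits = []
    for a, b in zip(intervals[::2], intervals[1::2]):
        if a < b:
            bits.append(0)
        elif a > b:
            bits.append(1)
        # a == b: skip the pair, it carries no usable information
    return bits

print(intervals_to_bits([3.1, 4.7, 2.2, 1.9, 5.0, 5.0, 0.4, 0.9]))
# [0, 1, 0]
```

Because each bit comes from an ordering comparison rather than an absolute reading, the linkage between nearby measurements matters much less.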
|
|
|
|
|
There is no such thing as a truly random number in practice. I would rather you use "pseudo-random", since whatever you use to generate your "random" number has a pattern, unlike in theoretical mathematics. So temperature and humidity, like others have said, follow patterns - complex, but not random.
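A minimal linear congruential generator illustrates the point: a "pseudo-random" sequence is completely determined by its seed. (The constants are the classic glibc-style ones; this is a demonstration, not a generator to use in practice.)

```python
# Minimal LCG: x -> (a*x + c) mod m. Identical seeds reproduce the
# identical "random" sequence, exposing the underlying pattern.

def lcg(seed, a=1103515245, c=12345, m=2**31):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

g1, g2 = lcg(42), lcg(42)
same = [next(g1) == next(g2) for _ in range(5)]
print(all(same))  # True: the sequence is fully reproducible
```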
|
|
|
|